m8ta
{1511}
ref: -2020 tags: evolution neutral drift networks random walk entropy population date: 04-08-2020 00:48 gmt revision:0 [head]

Localization of neutral evolution: selection for mutational robustness and the maximal entropy random walk

  • The take-away of the paper is that, with larger populations, random mutation and recombination make areas of the genotype graph that take several steps to reach (illustrated with Maynard Smith's four-letter mutation word game) less likely to be visited.
  • This is because recombination makes the population adhere more closely to the 'giant' component. In Maynard's game, this comprises 2268 of the 2405 meaningful words, all mutually reachable by successive single-letter changes (see the sketch after this list).
  • The author extends the analysis to van Nimwegen's 1999 paper on RNA genotype / secondary-structure neutral networks. The localization there is not as severe as in Maynard's game, but the actual population still has much lower entropy than the graph-theoretic entropy of the network.
    • He suggests that if the entropic size of the giant component is much smaller than its dictionary size, then populations are likely to be trapped there.

  • Interesting, but I'd prefer to have an expert peer-review it first :)
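
A minimal sketch of the word-game numbers above -- my own illustration, not the paper's code. It assumes a plain-text dictionary words4.txt with one four-letter word per line (a placeholder filename). It builds the single-letter-mutation graph, extracts the giant component, and compares its entropic size under the maximal entropy random walk (exp of the Shannon entropy of the MERW stationary distribution, pi_i proportional to psi_i^2 for the principal adjacency eigenvector psi) against the raw dictionary size:

import numpy as np
from collections import defaultdict

# Hypothetical dictionary file: one four-letter word per line.
words = sorted({w.strip().lower() for w in open("words4.txt")
                if len(w.strip()) == 4})

# Edge rule: two words are neighbors if they differ in exactly one letter.
# Bucket words by wildcard patterns ('_ord', 'w_rd', ...) to avoid O(N^2).
buckets = defaultdict(list)
for i, w in enumerate(words):
    for k in range(4):
        buckets[w[:k] + "_" + w[k + 1:]].append(i)

adj = defaultdict(set)
for group in buckets.values():
    for i in group:
        for j in group:
            if i != j:
                adj[i].add(j)

# Find the giant (largest connected) component by depth-first search.
seen, components = set(), []
for start in range(len(words)):
    if start in seen:
        continue
    seen.add(start)
    comp, stack = [], [start]
    while stack:
        u = stack.pop()
        comp.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    components.append(comp)
giant = max(components, key=len)

# MERW stationary distribution on the giant component: pi_i ~ psi_i^2,
# where psi is the principal eigenvector of the adjacency matrix.
idx = {u: i for i, u in enumerate(giant)}
A = np.zeros((len(giant), len(giant)))
for u in giant:
    for v in adj[u]:
        if v in idx:
            A[idx[u], idx[v]] = 1.0

evals, evecs = np.linalg.eigh(A)      # A is symmetric
psi = np.abs(evecs[:, -1])            # Perron eigenvector (largest eigenvalue)
pi = psi ** 2 / np.sum(psi ** 2)      # MERW stationary distribution
nz = pi[pi > 0]
S = -np.sum(nz * np.log(nz))          # Shannon entropy of the distribution

print(f"dictionary size:      {len(words)}")
print(f"giant component:      {len(giant)} words")
print(f"entropic size exp(S): {np.exp(S):.0f}")

With a full dictionary the giant component should come out near the 2268 / 2405 split quoted above; the interesting quantity is how much smaller exp(S) is than either number.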

{990}
ref: Peikon-2009.06 tags: Peikon Fitzsimmons Nicolelis video tracking walking BMI Idoya date: 01-06-2012 00:19 gmt revision:2 [1] [0] [head]

PMID-19464514[0] Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

  • yepp.

____References____

[0] Peikon ID, Fitzsimmons NA, Lebedev MA, Nicolelis MA, Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies. J Neurosci Methods 180:2, 224-33 (2009 Jun 15)

{652}
ref: notes-0 tags: policy gradient reinforcement learning aibo walk optimization date: 12-09-2008 17:46 gmt revision:0 [head]

Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion

  • simple, easy-to-understand policy gradient method! many papers cite it on Google Scholar. the core update is sketched below.
  • compare to {651}
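
A hedged sketch of the finite-difference policy-gradient step as I read the paper: evaluate() stands in for timing a real walk given a parameter vector, and epsilon, eta, n_policies are placeholder values, not the paper's.

import numpy as np

def policy_gradient_step(theta, evaluate, epsilon=0.05, eta=0.1, n_policies=15):
    """One iteration: perturb each parameter by -eps/0/+eps at random,
    score the perturbed policies, and step along the estimated gradient."""
    d = len(theta)
    perturb = np.random.choice([-1.0, 0.0, 1.0], size=(n_policies, d)) * epsilon
    scores = np.array([evaluate(theta + p) for p in perturb])

    grad = np.zeros(d)
    for k in range(d):
        # Average score when parameter k was nudged up, down, or left alone.
        up   = scores[perturb[:, k] > 0]
        zero = scores[perturb[:, k] == 0]
        down = scores[perturb[:, k] < 0]
        if up.size == 0 or zero.size == 0 or down.size == 0:
            continue                       # too few samples to judge this dim
        if zero.mean() >= up.mean() and zero.mean() >= down.mean():
            continue                       # leaving this parameter alone won
        grad[k] = up.mean() - down.mean()

    norm = np.linalg.norm(grad)
    if norm > 0:
        grad *= eta / norm                 # fixed-size step along the gradient
    return theta + grad

Each iteration needs only n_policies rollouts, which matters when every evaluation is a physical walk across the field.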