m8ta
{1566}
ref: -1992 tags: evolution baldwin effect ackley artificial life date: 03-21-2022 23:20 gmt revision:0 [head]

Interactions between learning and evolution

  • Ran simulated evolution and learning on a population of agents over ~100k lifetimes.
  • Each agent can last several hundred timesteps in a gridworld-like environment.
  • Said gridworld environment has plants (food), trees (shelter), carnivores, and other agents (for mating)
  • Agent behavior is parameterized by an action network and an evaluation network.
    • The action network transforms sensory input into actions
    • The evaluation network sets the valence (positive or negative) of the sensory signals
      • This evaluation network modifies the weights of the action network using a gradient-based RL algorithm called CRBP (complementary reinforcement back-propagation), which reinforces based on the temporal derivative of the evaluation, and pushes toward the complementary action when an action does not increase reward, with some e-greedy exploration (see the sketch after this list).
        • It's not perfect, but as they astutely say, any reinforcement learning algorithm involves some search, so generally heuristics are required to select new actions in the face of uncertainty.
      • Observe that it seems easier to make a good evaluation network than action network (evaluation network is lower dimensional -- one output!)
    • Networks are implemented as one-layer perceptrons (boring, but they had limited computational resources back then)
  • Showed (roughly) that in winner populations you get:
    • When learning is an option, the population will learn, and with time this will grow to anticipation / avoidance
    • This will transition to the Baldwin effect; learned behavior becomes instinctive
      • But, interestingly, only when the problem is incompletely solved!
      • If it's completely solved by learning (e.g. super fast), then there is no selective leverage on innate behavior over many generations.
      • Likewise, the survival problem to be solved needs to be stationary and consistent for long enough for the Baldwin effect to occur.
    • Avoidance is a form of shielding, and learning no longer matters on this behavior
    • Even longer term, shielding leads to goal regression: avoidance instincts allow the evaluation network to do something else, set new goals.
      • In their study this included goals such as approaching predators (!).
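
A minimal sketch of the CRBP flavor (my reconstruction from the description above, not the paper's code; names and constants are illustrative): a one-layer stochastic action network is pushed toward the action it took when reinforcement is positive, and toward the complementary action when it is not.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 2
W = rng.normal(0.0, 0.1, (n_out, n_in))         # one-layer action network

def forward(x):
    p = 1.0 / (1.0 + np.exp(-W @ x))            # firing probabilities
    a = (rng.random(n_out) < p).astype(float)   # stochastic binary action
    return a, p

def crbp_update(x, a, p, r, lr=0.1):
    # r > 0: make the action taken more likely (reinforce);
    # r <= 0: push toward the complementary action (complement).
    global W
    target = a if r > 0 else 1.0 - a
    W += lr * np.outer(target - p, x)           # delta rule toward the target

x = rng.random(n_in)
a, p = forward(x)
crbp_update(x, a, p, r=1.0)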

Altogether (historically) interesting, but some of these ideas might well have been anticipated by some simple hand calculations.

{1534}
ref: -2020 tags: current opinion in neurobiology Kriegeskorte review article deep learning neural nets circles date: 02-23-2021 17:40 gmt revision:2 [1] [0] [head]

Going in circles is the way forward: the role of recurrence in visual inference

I think the best part of this article is the references -- a nicely complete listing of, well, the current opinion in Neurobiology! (Note that this issue is edited by our own Karel Svoboda, hence there are a good number of Janelians in the author list..)

The gestalt of the review is that deep neural networks need to be recurrent, not purely feed-forward. This results in savings in overall network size, and an increase in the achievable computational complexity, perhaps via the incorporation of priors and temporal-spatial information. All this again makes perfect sense and matches my sense of prevailing opinion. Of course, we are left wanting more: all this recurrence ought to be structured in some way.

To me, a rather naive way of thinking about it is that feed-forward layers cause weak activations, which are 'amplified' or 'selected for' in downstream neurons. These neurons proximally code for 'causes' or local reasons, based on the supported hypothesis that the brain has a good temporal-spatial model of the visuo-motor world. The causes then can either explain away the visual input, leading to balanced E-I, or fail to explain it, in which case the excess activity is either rectified by engaging more circuits or by engaging synaptic plasticity.

A critical part of this hypothesis is some degree of binding / disentanglement / spatio-temporal re-assignment. While not all models of computation require registers / variables (RNNs are Turing-complete, for example), I remain stuck on the idea that, to explain phenomenological experience and practical cognition, the brain must have some means of 'binding'. A reasonable place to look is the apical tuft dendrites, which are capable of storing temporary state (calcium spikes, NMDA spikes), undergo rapid synaptic plasticity, and are so dense that they can reasonably store the outer-product space of binding (toy sketch below).
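
A toy illustration of what 'outer-product binding' can mean (tensor-product style; my example, not from the article): bind role vectors to filler vectors with outer products, sum the bindings into one memory, and unbind by projecting with a role vector.

import numpy as np

rng = np.random.default_rng(0)
d = 256
role = {k: rng.normal(size=d) / np.sqrt(d) for k in ("agent", "object")}
fill = {k: rng.normal(size=d) / np.sqrt(d) for k in ("cat", "mouse")}

# one memory matrix holds both bindings
M = np.outer(role["agent"], fill["cat"]) + np.outer(role["object"], fill["mouse"])
retrieved = role["agent"] @ M                               # unbind: ~ fill["cat"]
print(retrieved @ fill["cat"], retrieved @ fill["mouse"])   # large versus ~0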

Mounting evidence for apical tufts working independently / in parallel comes from investigations of high-gamma in ECoG: PMID-32851172 Dissociation of broadband high-frequency activity and neuronal firing in the neocortex. "High gamma" shows little correlation with MUA when you differentiate early-deep and late-superficial responses, "consistent with the view it reflects dendritic processing separable from local neuronal firing".

{1507}
ref: -2015 tags: winner take all sparsity artificial neural networks date: 03-28-2020 01:15 gmt revision:0 [head]

Winner-take-all Autoencoders

  • During training of fully connected layers, they enforce a winner-take all lifetime sparsity constraint.
    • That is: when training using mini-batches, they keep the k percent largest activations of a given hidden unit across all samples presented in the mini-batch, and set the remainder to zero. The units are not competing with each other; each competes with itself (see the sketch after this list).
    • The rest of the network is a stack of ReLU layers (upon which the sparsity constraint is applied) followed by a linear decoding layer (which makes interpretation simple).
    • They stack them via sequential training: train one layer from the output of another & do not backprop the errors.
  • Works, with lower sparsity targets, also for RBMs.
  • Extended the result to WTA convnets -- here they enforce both spatial and temporal (mini-batch) sparsity.
    • Spatial sparsity involves selecting the single largest hidden unit activity within each feature map. The other activities and derivatives are set to zero.
    • At test time, this sparsity constraint is released, and instead they use a 4 x 4 max-pooling layer & use that for classification or deconvolution.
  • To apply both spatial and temporal sparsity, select the highest spatial response (e.g. one unit in a 2d plane of convolutions; all have the same weights) for each feature map. Do this for every image in a mini-batch, and then apply the temporal sparsity: each feature map gets to be active exactly once, and in that time only one hidden unit (or really, one location of the input and common weights (depending on stride)) undergoes SGD.
    • Seems like it might train very slowly. Authors didn't note how many epochs were required.
  • This, too can be stacked.
  • To train on larger image sets, they first extract 48 x 48 patches & again stack...
  • Test on MNIST, SVHN, CIFAR-10 -- works ok, and well even with few labeled examples (which is consistent with their goals)
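
A minimal sketch of the lifetime-sparsity step, assuming h is a mini-batch of ReLU activations with shape (batch, hidden) and k is the fraction of activations kept per unit (names are mine):

import numpy as np

def lifetime_sparsity(h, k=0.05):
    n_keep = max(1, int(k * h.shape[0]))
    out = np.zeros_like(h)
    for j in range(h.shape[1]):               # each unit competes with itself
        idx = np.argsort(h[:, j])[-n_keep:]   # top-k activations across the batch
        out[idx, j] = h[idx, j]               # everything else is zeroed
    return out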

{1445}
ref: -2018 tags: cortex layer martinotti interneuron somatostatin S1 V1 morphology cell type morphological recovery patch seq date: 03-06-2019 02:51 gmt revision:3 [2] [1] [0] [head]

Neocortical layer 4 in adult mouse differs in major cell types and circuit organization between primary sensory areas

  • Using whole-cell recordings with morphological recovery, we identified one major excitatory and seven inhibitory types of neurons in L4 of adult mouse visual cortex (V1).
  • Nearly all excitatory neurons were pyramidal and almost all Somatostatin-positive (SOM+) neurons were Martinotti cells.
  • In contrast, in somatosensory cortex (S1), excitatory cells were mostly stellate and SOM+ cells were non-Martinotti.
  • These morphologically distinct SOM+ interneurons correspond to different transcriptomic cell types and are differentially integrated into the local circuit with only S1 cells receiving local excitatory input.
  • Our results challenge the classical view of a canonical microcircuit repeated through the neocortex.
  • Instead we propose that cell-type specific circuit motifs, such as the Martinotti/pyramidal pair, are optionally used across the cortex as building blocks to assemble cortical circuits.
  • Note preponderance of axons.
  • Classifications:
    • Pyr pyramidal cells
    • BC Basket cells
    • MC Martinotti cells
    • BPC bipolar cells
    • NFC neurogliaform cells
    • SC shrub cells
    • DBC double bouquet cells
    • HEC horizontally elongated cells.
  • Using Patch-seq

{1414}
ref: -0 tags: US employment top 100 bar chart date: 11-12-2018 00:02 gmt revision:1 [0] [head]

After briefly searching the web, I could not find a chart of the top 100 occupations in the US. After downloading the data from the US Bureau of Labor Statistics, made this chart:


Surprising how very service heavy our economy is.

{1385}
ref: -0 tags: tungsten eletropolishing hydroxide cleaning bath tartarate date: 03-28-2017 16:34 gmt revision:0 [head]

Method of electropolishing tungsten wire US 3287238 A

  • The bath is formed of 15% by weight sodium hydroxide, 30% by weight sodium potassium tartrate, and 55% by weight distilled water, with the bath temperature being between 70 and 100 °F.
    • If the concentration of either the hydroxide or the tartrate is below the indicated minimum, the wire is electrocleaned rather than electropolished, and a matte finish is obtained rather than a specular surface.
    • If the concentration of either the hydroxide or the tartrate is greater than the indicated maximum, the electropolishing process is quite slow.
  • The voltage which is applied between the two electrodes 18 and 20 is from 16 to 18.5 volts, the current through the bath is 20 to 24 amperes, and the current density is 3,000 to 4,000 amperes per square foot of surface of wire in the bath.
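
A quick unit-consistency check (the wire diameter is my assumption, not from the patent): at the stated current and current density, how much wire surface is immersed in the bath?

import math
I = 22.0                       # A, middle of the 20-24 A range
J = 3500.0                     # A/ft^2, middle of the 3000-4000 range
area_cm2 = (I / J) * 929.0304  # 1 ft^2 = 929.0304 cm^2
d_cm = 0.0127                  # assume 0.005" (127 um) diameter wire
length_cm = area_cm2 / (math.pi * d_cm)
print(f"{area_cm2:.1f} cm^2 immersed, ~{length_cm:.0f} cm of 127 um wire")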

{1354}
ref: -0 tags: David Kleinfeld penetrating arterioles perfusion cortex vasculature date: 10-17-2016 23:24 gmt revision:1 [0] [head]

PMID-17190804 Penetrating arterioles are a bottleneck in the perfusion of neocortex.

  • Focal photothrombosis was used to occlude single penetrating arterioles in rat parietal cortex, and the resultant changes in flow of red blood cells were measured with two-photon laser-scanning microscopy in individual subsurface microvessels that surround the occlusion.
  • We observed that the average flow of red blood cells nearly stalls adjacent to the occlusion and remains within 30% of its baseline value in vessels as far as 10 branch points downstream from the occlusion.
  • Preservation of average flow emerges 350 µm away; this length scale is consistent with the spatial distribution of penetrating arterioles
  • Rose bengal photosensitizer.
  • 2p laser scanning microscopy.
  • Downstream and connected arterioles show a dramatic reduction in blood flow, even 1-4 branches in; there is little redundancy (figure 2)
  • Measured a good number of vessels (and look at their density!); results are satisfactorily quantitative.
  • Vessel leakiness extends up to 1.1mm away (!) (figure 5).

{1342}
ref: -0 tags: NC state tap drill chart date: 08-02-2016 18:38 gmt revision:0 [head]

http://amasci.com/tesla/Tap_Drill_Chart.html

by way of: https://m.reddit.com/r/engineering/comments/4ry07t/does_anyone_have_a_stored_copy_of_this_tap_and/

{1267}
ref: -0 tags: stretchable nanoparticle conductors gold polyurethane flocculation date: 12-13-2013 02:12 gmt revision:5 [4] [3] [2] [1] [0] [head]

PMID-23863931 Stretchable nanoparticle conductors with self-organized conductive pathways.

  • 13nm gold nanoparticles, citrate-stabilized colloidal solution
    • Details of fabrication procedure in methods & supp. materials.
  • Films are prepared in water and dried (like paint)
  • LBL = layer by layer. layer of polyurethane + layer of gold nanoparticles.
    • Order of magnitude higher conductivity than the VAF films (below).
  • VAF = vacuum assisted flocculation.
    • Mix Au-citrate nanoparticles + polyurethane and pass through filter paper.
    • Peel the flocculant from the filter paper & dry.
  • Conductivity of the LBL films ~ 1e4 S/cm -> 1e-6 Ohm*m (pure gold = 2 x 10^-8 Ohm*m, ~50x better)
  • VAF = 1e3 S/cm -> 1e-5 Ohm*m. Still pretty good.
    • This equates to a resistance of 1k per mm in a wire of 10 um^2 cross-sectional area (e.g. 2 um x 5 um); checked in the sketch below.
  • The material can sustain > 100% strain when thermo-laminated.
    • Laminated: 120C at 20 MPa for 1 hour.
  • See also: Preparation of highly conductive gold patterns on polyimide via shaking-assisted layer-by-layer deposition of gold nanoparticles
    • Patterned via µCP -- microcontact printing (aka rubber-stamping)
    • Bulk conductivity of annealed (150C) films near that of pure gold (?)
    • No mechanical properties, though; unclear if these films are more flexible / ductile than evaporated film.
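
Checking the 1k-per-mm figure above from the stated VAF resistivity:

rho = 1e-5            # Ohm*m, the VAF film (1e3 S/cm)
L = 1e-3              # 1 mm of wire
A = 10e-12            # 10 um^2 cross-section (e.g. 2 um x 5 um)
print(rho * L / A)    # -> 1000.0 Ohm, i.e. 1k per mm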

{1257}
ref: -0 tags: Anna Roe optogenetics artificial dura monkeys intrinisic imaging date: 09-30-2013 19:08 gmt revision:3 [2] [1] [0] [head]

PMID-23761700 Optogenetics through windows on the brain in nonhuman primates

  • technique paper.
  • placed over the visual cortex.
  • Injected virus through the artificial dura -- micropipette, not CVD.
  • Strong expression.
  • See also: PMID-19409264 (Boyden, 2009)

{1169}
ref: -0 tags: artificial intelligence projection episodic memory reinforcement learning date: 08-15-2012 19:16 gmt revision:0 [head]

Projective simulation for artificial intelligence

  • Agent learns based on memory 'clips' which are combined using some pseudo-bayesian method to trigger actions.
    • These clips are learned from experience / observation.
    • Quote: "..more complex behavior seems to arise when an agent is able to “think for a while” before it “decides what to do next.” This means the agent somehow evaluates a given situation in the light of previous experience, whereby the type of evaluation is different from the execution of a simple reflex circuit"
    • Quote: "Learning is achieved by evaluating past experience, for example by simple reinforcement learning".
  • The forward exploration of learned action-stimulus patterns is seemingly a general problem-solving strategy (my generalization).
  • Pretty simple task:
    • Robot can only move left / right; shows a symbol to indicate which way it (might?) be going.
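
A minimal sketch of the flavor of the model (my reconstruction from the description above, with a single percept-to-action layer of 'clips'; all names and constants are illustrative): actions come from a random walk over clip-to-clip weights h, reward bumps the edge traversed, and damping forgets toward the baseline.

import numpy as np

rng = np.random.default_rng(0)
n_percepts, n_actions = 2, 2             # e.g. symbol shown vs. move left/right
h = np.ones((n_percepts, n_actions))     # clip-to-clip hopping weights

def step(percept, reward_fn, gamma=0.01):
    global h
    p = h[percept] / h[percept].sum()    # hopping probabilities
    action = rng.choice(n_actions, p=p)  # the (one-hop) walk through memory
    r = reward_fn(percept, action)
    h = (1.0 - gamma) * h + gamma * 1.0  # damping toward the uniform baseline
    h[percept, action] += r              # reinforce the edge just used
    return action

action = step(0, lambda s, a: 1.0 if a == s else 0.0)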

{696}
ref: Jarosiewicz-2008.12 tags: Schwartz BMI learning perturbation date: 03-07-2012 17:11 gmt revision:2 [1] [0] [head]

PMID-19047633[0] Functional network reorganization during learning in a brain-computer interface paradigm.

  • quote: For example, the tuning functions of neurons in the motor cortex can change when monkeys adapt to perturbations that interfere with the execution (5–7) or visual feedback (8–10) of their movements. Check these refs - have to be good!
  • point out that only the BMI lets you see how the changes reflect changes in behavior.
  • BMI also allows perturbations to target a subset of neurons. Apparently, they had the same idea as me.
  • used the PV algorithm. yeck.
  • perturbed a select subset of neurons by rotating their decoded tuning by 90 deg. about the Z-axis; pre - perturb - washout series of experiments (see the sketch below).
  • 3D BMI, center-out task, 8 targets at the corners of a cube.
  • looked for the following strategies for compensating to the perturbation:
    • re-aiming: to compensate for the deflected trajectory, aim at a rotated target.
    • re-weighting: decrease the strength of the rotated neurons.
    • re-mapping: use the new units based on their rotated tuning.
  • modulation depths for the rotated neurons did in fact decrease.
  • PD for the neurons that were perturbed rotated more than the control neurons.
  • rotated neurons contributed to error parallel to perturbation, unrotated compensated for this, and contributed to 'errors' in the opposite direction.
  • typical recording sessions of 3 hours - thus, the adaptation had to proceed quickly and only online. pre-perturb-washout each had about 8 * 20 trials.
  • interesting conjecture: "Another possibility is that these neurons solve the “credit-assignment problem” described in the artificial intelligence literature (25–26). By using a form of Hebbian learning (27), each neuron could reduce its contribution to error independently of other neurons via noise-driven synaptic updating rules (28–30). "
    • ref 25: Minsky - 1961;
    • ref 26: Cohen PR, Feigenbaum EA (1982) The Handbook of Artificial Intelligence; ref 27 references Hebb directly - 1949;
    • ref 28: ALOPEX {695} ;
    • ref 29: PMID-1903542[1] A more biologically plausible learning rule for neural networks.
    • ref 30: PMID-17652414[2] Model of birdsong learning based on gradient estimation by dynamic perturbation of neural conductances. Fiete IR, Fee MS, Seung HS.
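
A sketch of the perturbation, assuming a population-vector decoder (the rotated fraction and all values here are illustrative): the decoding preferred directions (PDs) of a chosen subset are rotated 90 deg about the z-axis, so those units pull the cursor sideways until the animal or the rest of the population compensates.

import numpy as np

rng = np.random.default_rng(0)
n = 40
pd = rng.normal(size=(n, 3))
pd /= np.linalg.norm(pd, axis=1, keepdims=True)   # unit preferred directions
Rz = np.array([[0.0, -1.0, 0.0],                  # 90 deg rotation about z
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
subset = rng.choice(n, n // 4, replace=False)     # the perturbed units
pd_pert = pd.copy()
pd_pert[subset] = pd_pert[subset] @ Rz.T

def pv_decode(rates, pds):
    # population vector: normalized rates weight each unit's PD
    return (rates[:, None] * pds).sum(axis=0)

vel = pv_decode(rng.random(n), pd_pert)           # decoded velocity under perturbation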

____References____

[0] Jarosiewicz B, Chase SM, Fraser GW, Velliste M, Kass RE, Schwartz AB, Functional network reorganization during learning in a brain-computer interface paradigm.Proc Natl Acad Sci U S A 105:49, 19486-91 (2008 Dec 9)
[1] Mazzoni P, Andersen RA, Jordan MI, A more biologically plausible learning rule for neural networks.Proc Natl Acad Sci U S A 88:10, 4433-7 (1991 May 15)
[2] Fiete IR, Fee MS, Seung HS, Model of birdsong learning based on gradient estimation by dynamic perturbation of neural conductances.J Neurophysiol 98:4, 2038-57 (2007 Oct)

{280}
ref: Evarts-1968.01 tags: Evarts motor control pyramidal tract M1 PTN tuning date: 01-16-2012 18:59 gmt revision:4 [3] [2] [1] [0] [head]

PMID-4966614[0] Relation of pyramidal tract activity to force exerted during voluntary movement

  • PTNs with high conduction velocity tend to be silent during motor quiescence and show phasic activity with movement.
  • PTNs with lower axonal conduction velocities are active in the absence of movement; with movement they show both upward and downward modulations of the resting discharge.
  • many PTNs responded to a conditional stimulus before the movement.
  • in this study, they wanted to determine if phasic response was more correlated with displacement or with force.
    • did this with two different motions (flexion and extension) under two different force loads (opposing flexion and opposing extension)
      • movements were slow (or at least nonballistic) and somewhat controlled - they had to last between 400 and 700ms.
      • monkeys usually carried out 3,000 cycles of the movement daily !!
  • "prior to the experiment, hte authour was biased to think that the displacement model (where the cortex commands a location/movement of the arm, which is then accomplished through feedback & feedforward mechanisms e.g. in the spinal cord) was correct; experimental results seem to indicate that force is very strongly represented in PTN population.
  • many PTN firing rates reflected dF/dt very strongly.
  • old, good paper. made with 'primitive' technology - but why do we need to redo this?

____References____

[0] Evarts EV, Relation of pyramidal tract activity to force exerted during voluntary movement.J Neurophysiol 31:1, 14-27 (1968 Jan)

{951}
ref: Schwartz-1994.07 tags: Schwartz drawing spiral monkeys population vector PV date: 01-16-2012 18:52 gmt revision:1 [0] [head]

PMID-8036499[0] Direct cortical representation of drawing

____References____

[0] Schwartz AB, Direct cortical representation of drawing.Science 265:5171, 540-2 (1994 Jul 22)

{814}
ref: Zhang-2009.02 tags: localized surface plasmon resonance nanoparticle neural recording innovative date: 01-15-2012 23:00 gmt revision:4 [3] [2] [1] [0] [head]

PMID-19199762[0] Optical Detection of Brain Cell Activity Using Plasmonic Gold Nanoparticles

  • Used 140 nm diameter, 40 nm thick gold disc nanoparticles arranged in an array with 400 nm spacing, illuminated by 850nm diode laser light.
    • From my reading, it seems that the diameter of these nanoparticles is important, but the grid spacing is not.
  • These nanoparticles strongly scatter light, and the degree of scattering is dependent on the local index of refraction + electric field.
  • The change in scattering due to applied electric field is very small, though - ~ 3e-6 1/V in the air-capacitor setup, ~1e-3 in solution when stimulated by cultured hippocampal neurons.
  • Notably, nanoparticles are not diffraction limited - their measurement resolution is proportional to their size. Compare with voltage-sensitive dyes, which have a similar measurement signal-to-noise ratio, are diffraction limited, may be toxic, and may photobleach.

____References____

[0] Zhang J, Atay T, Nurmikko AV, Optical detection of brain cell activity using plasmonic gold nanoparticles.Nano Lett 9:2, 519-24 (2009 Feb)

{334}
ref: Taylor-2002.06 tags: Taylor Schwartz 3D BMI coadaptive date: 01-08-2012 04:29 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

PMID-12052948[0] Direct Cortical Control of 3D Neuroprosthetic Devices

  • actually not a bad paper... reasonable and short. they adapted the target size to maintain a 70% hit rate, and one monkey was able to floor this (reach and stay at the minimum)
  • coadaptive algorithm removed noise units based on (effectively) cross-validation.
    • both arms were restrained during performance & co-adaptation. Monkeys initially strained to move the cursor, but eventually relaxed.
  • Changes from hand control to brain control were random but apparently somewhat consistent between days.
  • continually increasing performance in brain-control for both monkeys, arguably due to the presence of feedback and learning. They emphasize the difference between open-loop (Wessberg) and closed-loop control. (42 ± 5% versus 12 ± 5% of targets hit)
    • still, the percentage of correct trials is low - ~50% for the 8 target 3D task.
    • monkeys improved target hit rate by 7% from the first to the third block of 8 closed-loop movements each day.
  • claim that they were able to record some units for up to 2 months ?? ! In their other monkey, with teflon/polyimide coated stainless electrodes, the neural recordings changed nearly every day, and eventually went away.
  • quote: "Cell-tuning functions obtained during normal arm movements were not good predictors of intended movement once both arms were restrained." Interesting.
  • coadaptive algorithm:
    • Raw PV yielded poor predictions.
    • first, effectively z-score the firing rate of each neuron.
    • junk / hash neurons were not removed.
    • Two different weights per neuron per axis (hence 6 weights altogether), one used when the firing rate was above its mean value, another when below; corrected for the resulting drift. Sum (neuronal firing rates * weights) controlled velocity on each of the axes. (Hence, it is not surprising that the brain-control tuning was significantly different from the hand control - the output model is vastly different; see the sketch below.)
    • restarted the coadaptive algorithm every day?
    • coadaptive algorithm appears to be something like stochastic gradient descent with a step-size that decreases with increasing performance.
      • From her Case Western website, Dawn Taylor still seems to be on the coadaptive kick. Seems like it's bad to get stuck on one idea all your life ... though perhaps that is the best way to complete something.
    • Their movies in supplementary materials look rather good, better than most of the stuff that we have done. She did not quantify SNR or correlation coefficient.
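
A minimal sketch of the decoder as described above (my reconstruction, not the authors' code): z-scored rates drive axis velocities, with separate weights used when a unit is above vs. below its mean rate, nudged in the direction that reduces cursor error.

import numpy as np

rng = np.random.default_rng(0)
n_units = 30
w_hi = rng.normal(0, 0.1, (3, n_units))   # used when a unit is above its mean rate
w_lo = rng.normal(0, 0.1, (3, n_units))   # used when below

def decode(z):                             # z: z-scored rates, shape (n_units,)
    w = np.where(z > 0, w_hi, w_lo)        # pick the weight set per unit
    return w @ z                           # 3D cursor velocity

def coadapt(z, vel_err, lr=0.01):          # vel_err: desired minus decoded velocity
    global w_hi, w_lo
    g = lr * np.outer(vel_err, z)
    w_hi += np.where(z > 0, g, 0.0)
    w_lo += np.where(z <= 0, g, 0.0)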

____References____

[0] Taylor DM, Tillery SI, Schwartz AB, Direct cortical control of 3D neuroprosthetic devices.Science 296:5574, 1829-32 (2002 Jun 7)

{949}
ref: Velliste-2008.06 tags: Schwartz 2008 Velliste BMI feeding population vector date: 01-06-2012 00:19 gmt revision:1 [0] [head]

PMID-18509337[0] Cortical control of a prosthetic arm for self-feeding

  • Idea: move BMI into robotic control.
  • population vector control, which has been shown to be inferior to the Wiener filter.
  • 112 units for control in one monkey. 2 monkeys used.
  • 4D control -- x, y, z, gripper.
  • 1064 trials over 13 days, average success rate of 78%
  • Gripper opened as the arm returned to mouth. Works b/c marshmallows are sticky.

____References____

[0] Velliste M, Perel S, Spalding MC, Whitford AS, Schwartz AB, Cortical control of a prosthetic arm for self-feeding.Nature 453:7198, 1098-101 (2008 Jun 19)

{281}
ref: Evarts-1969.05 tags: Evarts pyramidal tract motor control M1 tuning date: 01-03-2012 23:08 gmt revision:2 [1] [0] [head]

PMID-4977837[0] Activity of Pyramidal Tract neurons during postural fixation

  • Force was thus dissociated from displacement, and it was possible to determine whether PTN discharges were related to position or force.
  • for the majority of PTNs discharge frequency was related to the magnitude and rate of change of force rather than to the joint position or the speed of joint movement (same as the MUA in the Kinarm data!!)
  • task was simple: just try to avoid joint movement.
  • in comparison to [1] where PTN were related to force under joint displacement, this task shows they are still related to force even when the joint angle is fixed.
  • used sharpened tungsten electrodes to record 102 pyramidal tract neurons.
  • monkeys were trained to do the tasks in their home cages (obviously weren't recorded there - need to be headposted)
  • I'm not sure how he determined if it was or was not a pyramidal tract neuron.

____References____

[0] Evarts EV, Activity of pyramidal tract neurons during postural fixation.J Neurophysiol 32:3, 375-85 (1969 May)
[1] Evarts EV, Relation of pyramidal tract activity to force exerted during voluntary movement.J Neurophysiol 31:1, 14-27 (1968 Jan)

{988}
ref: Butovas-2007.04 tags: Butovas Schwarts ICMS stimulation rat barrel cortex date: 01-03-2012 06:55 gmt revision:2 [1] [0] [head]

PMID-17419757[0] Detection psychophysics of intracortical microstimulation in rat primary somatosensory cortex.

  • headposted rats, ICMS to barrel cortex
  • single pulse threshold = 2 nC, around the threshold for evocation of short-latency action potentials near an electrode.
  • one pulse saturated at 80% correct.
  • multiple pulses gave a higher correct rate, though this saturated at 15 pulses.
  • double pulse optimal in terms of power / discrimination.

____References____

[0] Butovas S, Schwarz C, Detection psychophysics of intracortical microstimulation in rat primary somatosensory cortex.Eur J Neurosci 25:7, 2161-9 (2007 Apr)

{96}
ref: Moran-1999.11 tags: electrophysiology motor cortex Schwartz Moran M1 tuning date: 01-03-2012 03:36 gmt revision:2 [1] [0] [head]

PMID-10561437[0] Motor cortical representation of speed and direction during reaching

  • velocity is represented in the motor cortex.
  • they developed an equation relating firing rate to position and velocity (noted below).
  • EMG direction had significantly different tuning from the cortical activity
    • the effect of speed on EMG was also different.
  • used single-electrode recording - 1,066 cells!!
  • introduce the square-root transformation of the firing rate (from Ashe and Georgopoulos 1994)
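
The equation (my paraphrase from memory; treat the exact form as approximate) regressed the square-rooted firing rate on speed and direction together:

sqrt(f(t)) = b_0 + b_s s(t) + s(t) [ b_x cos(theta(t)) + b_y sin(theta(t)) ]

where s is hand speed and theta is movement direction; b_x and b_y set the cell's preferred direction, and the square root stabilizes spike-count variance.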

____References____

[0] Moran DW, Schwartz AB, Motor cortical representation of speed and direction during reaching.J Neurophysiol 82:5, 2676-92 (1999 Nov)

{959}
ref: -0 tags: Evarts force pyramidal tract M1 movement monkeys conduction velocity tuning date: 01-03-2012 03:25 gmt revision:3 [2] [1] [0] [head]

PMID-4966614 Relation of pyramidal tract activity to force exerted during voluntary movement.

  • One of the pioneering studies of electrophysiology in awake behaving animals; single electrode, juice reward, headposting: many followed.
  • {960} looked at conduction velocity, which we largely ignore now -- most highly myelinated axons are silent during motor quiescence and show phasic activity during movement.
    • Lower conduction velocity PTNs show + and - FR modulations. Again from [5]
  • [6] showed that PTN activity preceded EMG activity, implying that it was efferent rather than afferent feedback that was controlling the firing rate, as expected.
  • task: wrist flexion & extension under load.
  • task was trained in the monkey's home cage for a period of three months; monkeys carried out 3000 trials or more of the task (must have had strong wrists!)
  • Head-fixated the monkeys for about 10 days prior to unit recordings; "The monkeys learned to be quite cooperative in reentering the chair in the morning, since entrance to the chair was rewarded by the fruit juice of their choice (grape, apple, or orange). Indeed, some monkeys continued to work even in the presence of free water!"
    • Maybe I should give mango some Hawaiian punch as well?
  • Measured antidromic responses with a permanent electrode in the ipsilateral medullary pyramid.
  • Used glass insulated platinum-iridium electrodes [11]
  • traces are clean, very clean. I wonder if good insulation (in this case, glass) has anything to do with it?
  • controlled for displacement by varying the direction of load; PTNs seem to directly control muscles.
    • Fire during acceleration and movement for no load
    • Fire during load and co-contraction when loaded.
  • FR also related to dF/dt: FR was higher during a low but rising force than a high but falling force.
  • more than 100 PTN recorded from the precentral gyrus, but only 31 had a clear and consistent relation to performance on the task.
    • 16 units related to extension loads, 7 units to flexion loads
    • It was only one joint, after all..
  • Cells responding to the same movement (flexion or extension) were often found on the same vertical electrode track.
  • Very little response to joint position.
  • Very clean modulations -- neurons are almost silent if there is no force production; FR goes up to 50-80Hz.
  • Prior to the experiment Evarts expected a position-tuning model, but saw clear evidence of force tuning.
  • Group 1 muscle afferents have now been shown to project to the motor cortex of both monkey [1] and cat [9]. Makes sense: if the ctx is to control force, it needs feedback regarding force production.
  • Caveats: many muscles were involved in the study, mainly due to postural effects, and having one or two controls poorly delineates what is going on in the motor ctx.
    • Plus, all the muscles controlling the fingers come into play -- the manipulandum must be gripped firmly, esp. to resist extension loads.

{830}
ref: Rolston-2009.01 tags: ICMS artifacts stimulation Rolston Potter recording BMI date: 01-03-2012 02:38 gmt revision:3 [2] [1] [0] [head]

PMID-19668698[0] A low-cost multielectrode system for data acquisition enabling real-time closed-loop processing with rapid recovery from stimulation artifacts

  • Well written, well tested, but fundamentally simple system - only two poles of active high-pass, one pole low-pass (response sketched below).
  • With TBSI headstages the stimulation artifact is brief - figure 8 shows < 4ms.
  • Includes NeuroWriter software, generously open-sourced (but alas windows only - C#).
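
A quick look at what such a filter chain does, with assumed corner frequencies (the paper's actual corners may differ):

import numpy as np
from scipy import signal

fs = 25000.0                                                    # assumed sample rate
hp = signal.butter(2, 300.0, "highpass", fs=fs, output="sos")   # 2-pole high-pass
lp = signal.butter(1, 5000.0, "lowpass", fs=fs, output="sos")   # 1-pole low-pass
w, h = signal.sosfreqz(np.vstack([hp, lp]), worN=2048, fs=fs)   # cascade response
print(w[np.argmax(np.abs(h))])                                  # peak of the passband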

____References____

[0] Rolston JD, Gross RE, Potter SM, A low-cost multielectrode system for data acquisition enabling real-time closed-loop processing with rapid recovery from stimulation artifacts.Front Neuroengineering 2no Issue 12 (2009)

{962}
ref: Harris-2009.06 tags: Bartholow 1874 Mary experiment stimulation ICMS date: 12-29-2011 05:13 gmt revision:2 [1] [0] [head]

PMID-19286295[0] Probing the human brain with stimulating electrodes: The story of Roberts Bartholow’s (1874) experiment on Mary Rafferty

  • Excellent review / history.
  • Actual citation: "Experimental investigations into the functions of the human brain," The American Journal of the Medical Sciences, 1874
  • Actual subject: Mary Rafferty
  • Around his time people were shifting from using intuition and observation to direct treatment to using empiricism & science, especially from work on laboratory animals.
  • One of the innovations that could not be tolerated by his colleagues was the "physiological investigations of drugs by the destruction of animal life." He was a bit of an outsider, and not terribly well liked.
  • Before then the cortex was seen to be insensitive to stimulation of any kind.
  • Ferrier 1874b: "in the striatum all movements are integrated which are differentiated in the cortex" -- striatal stimulation produces general contraction, not specific contraction.
  • Ferrier 1873 was the first to discover that AC stimulation yielded more prolonged and natural movements than DC.
  • The dura mater is extremely sensitive to pain.
  • Mary Rafferty seems to have had a tumor (he calls it an ulcer) in the meninges (epithelioma).
  • He probably spread infection into her brain through the stimulating needles.

____References____

[0] Harris LJ, Almerigi JB, Probing the human brain with stimulating electrodes: the story of Roberts Bartholow's (1874) experiment on Mary Rafferty.Brain Cogn 70:1, 92-115 (2009 Jun)

{834}
ref: Brown-2008.03 tags: microstimulation recording artifact supression MEA ICMS date: 12-28-2011 20:43 gmt revision:3 [2] [1] [0] [head]

IEEE-4464125 (pdf) Stimulus-Artifact Elimination in a Multi-Electrode System

  • Stimulate and record on the same electrode within 3ms; record on adjacent electrodes within 500us.
  • Target at MEAs, again.
  • Notes that very small charge mismatches of 1% or less, which is common and acceptable in traditional analog circuit designs, generates an artifact that saturates the neural amp signal chain.
  • for stimulating & recording on the same electrode, the the residual charge must be brought down to 1/1e5 the stimulating charge (or less).
  • paper follows upon {833} -- shared author, Blum -- especially in the idea of using active feedback to cancel artifact charge & associated voltage.
  • targets the active feedback at keeping all amplifiers out of saturation.
  • vary highpass filter poles during artifact suppression (!)
  • bias currents of 1fA on the feedback highpass stage. yikes.

Brown EA, Ross JD, Blum RA, Yoonkey N, Wheeler BC, and DeWeerth SP (2008) Stimulus-Artifact Elimination in a Multi-Electrode System. IEEE TRans. Biomed. Circuit Sys. 2. 10-21

{960}
ref: -0 tags: M1 Evarts PTN conduction velocity monkey electrophysiology spinal cord date: 12-25-2011 04:25 gmt revision:0 [head]

PMID-14283057 Relation of Discharge Frequency to conduction velocity in pyramidal tract neurons

  • Not all PTN arise from the giant Betz cells -- there are too many pyramidal tract axons, and not enough Betz cells.
  • Most axons come from smaller cortical neurons [8,11,12].
  • Large cells have large axons, hence the highest conduction velocity. (cite the squid studies...)
  • Estimated conduction velocity by stimulating in the medullary pyramid (e.g. the pyramidal tract at the level of the medulla)
  • Conduction velocity, in m/s, is six times the diameter in microns (roughly; he lists no source here)
  • Mean frequency for 28 rapidly conducting units was 4.1 Hz;
    • These had a non-moving FR of fractional Hz.
    • Showed bursts with sleep, a few spikes when drowsy, very quiet when not moving.
  • MFR for 34 slower cells was 15.6 Hz.
    • Resting rate was higher in these cells.
    • Also showed bursts / more irregular firing with sleep.
  • Amazingly clean recordings. envy.
  • Some cells have much more irregular / more
  • Brookhart [2] concluded that large, rapidly conducting pyramidal fibers are probably responsible for the phasic element of movement control, whereas the smaller slower neurons are responsible for the tonic element.
  • Also true in the spinal cord: large afferents of the nuclear bag fibers in the muscle spindle carry transient info; group II are smaller and carry steady-state info.
  • ref Mountcastle [14] regarding reciprocal pairs of neurons being (surprise) reciprocally activated during joint movements.

{953}
ref: -0 tags: Moran Schwartz Todorov controversy PV M1 motor control date: 12-22-2011 22:04 gmt revision:0 [head]

PMID-11017157 One motor cortex, two different views.

  • Commentary on {950}
  • Refutes Todorov's stiff-muscle perturbation analysis, saying that it grossly misapproximates what the monkey is actually doing (drawing on a touchscreen held vertically in front of it), as the model of the arm in this case would be held stiffly in front of the monkey, rather than realistically falling to the animal's side.
  • They also claim that any acceleration term would cause the PV tuning to lead with higher curvature, which is not what they saw (?)

{832}
ref: Jimbo-2003.02 tags: MEA microstimulation artifact supression date: 12-17-2011 01:41 gmt revision:2 [1] [0] [head]

PMID-12665038[0] A system for MEA-based multisite stimulation.

  • stimulate and record the same MEA channel.
  • used voltage-control stimulation.
  • very low leakage-current switches (DG202CSE, Maxim, 100 GOhm); non-mechanical = low vibration.
  • switches switch between stimulator and preamp. obvious.
  • uses active shorting post-stimulation to remove residual charge.
  • uses active sample/hold of the preamplifier while the stimulator is connected to the electrodes.
  • adds stimulation pulse to the initial electrode offset (interesting!)

____References____

[0] Jimbo Y, Kasai N, Torimitsu K, Tateno T, Robinson HP, A system for MEA-based multisite stimulation.IEEE Trans Biomed Eng 50:2, 241-8 (2003 Feb)

{135}
ref: Vijayakumar-2005.12 tags: schaal motor learning LWPL PLS partial least sqares date: 12-07-2011 04:09 gmt revision:1 [0] [head]

PMID-16212764[0] Incremental online learning in high dimensions

ideas:

  • use locally linear models.
  • use a small number of univariate regressions in selected dimensions of input space, in the spirit of partial least squares (PLS); hence, can operate in very high dimensions.
  • function to be approximated has locally low-dimensional structure, which holds for most real-world data.
  • use: the learning of value functions, policies, and models for learning control in high-dimensional systems (like complex robots or humans).
  • important distinction between function-approximation learning:
    • methods that fit nonlinear functions globally, possibly using input space expansions.
      • gaussian process regression
      • support vector machine regression
        • problem: requires the right kernel choice & basis vector choice.
      • variational bayes for mixture models
        • represents the conditional joint expectation, which is expensive to update. (though this is factored).
      • each above were designed for data analysis, not incremental data. (biology is incremental).
    • methods that fit simple models locally and segment the input space automatically.
      • problem: the curse of dimensionality: they require an exponential number of models for accurate approximation.
        • this is not such a problem if the function is locally low-dim, as mentioned above.
  • projection regression (PR) works via decomposing multivariate regressions into a superposition of single-variate regressions along a few axes of input space.
    • projection pursuit regression is a well-known and useful example.
    • sigmoidal neural networks can be viewed as a method of projection regression.
  • they want to use factor analysis, which assumes that the observed data is generated from a low-dimensional distribution with a limited number of latent variables related to the output via a transformation matrix + noise. (PCA/ wiener filter)
    • problem: the factor analysis must represent all high-variance dimensions in the data, even if it is irrelevant for the output.
    • solution: use joint input and output space projection to avoid elimination of regression-important dimensions.
----
  • practical details: they use the LWPR algorithm to model the inverse dynamics of their 7DOF hydraulically-actuated gripper arm. That is, they applied random torques while recording the resulting accelerations, velocities, and angles, then fit a function to predict torques from these variables. The robot was compliant and not very well modeled with a rigid body model, though they tried this. The resulting LWPR-generated model mapped 27 inputs to 7 predicted torques. The control system uses this functional approximation to compute torques from desired trajectories, I think. The desired trajectories are generated using spline-smoothing (?), and the control system is adaptive in addition to the LWPR approximation being adaptive.
  • The core of LWPR is partial least squares regression / projection pursuit, coupled with gaussian kernels and a distance metric (just a matrix) learned via constrained gradient descent with cross-validation. Partial least squares (PLS) appears to be very popular in many fields, and there are a number of ways of computing it. Distance metrics can expand without limit, and overlap freely. Local models are added based on MSE, I think, and model adding stops when the space is well covered (toy sketch after this list).
  • I think this technique is very powerful - you separate the function evaluation from the error minimization, to avoid the problem of ambiguous causes. Instead, when applying LWPR to the robot, the torques cause the angles and accelerations -> but you invert this relationship: you want to control the torques given a trajectory. Of course, the whole function approximation is stationary in time - the p/v/a is sufficient to describe the state and the required torques. Does the brain work in the same way? Do random things, observe consequences, work in consequence space and invert?? e.g. I contracted my bicep and it caused my hand to move to my face; now I want my hand to move to my face again, what caused that? Need reverse memory... or something. Hmm. Let's go back to conditional learning: if an animal does an action, and subsequently it is rewarded, it will do that action again. If this is conditional on a need, then that action will be performed only when needed; when habitual, the action will be performed no matter what. This is the nature of all animals, I think, and corresponds to reinforcement learning? But how? I suppose it's all about memory, and assigning credit where credit is due. The same problem is dealt with in reinforcement learning. And yet things like motor learning seem so far out of this paradigm - they are goal-directed and minimize some sort of error. Eh, not really. Clementine is operating on the conditioned response now - has little in the way of error. But gradually this will be built; with humans, it is built very quickly by reuse of existing modes. Or consciousness.
  • back to the beginning: you dont have to regress into output space - you regress into sensory space, and do as much as possible in that sensory space for control. this is very powerful, and the ISO learning people (Porr et al) have effectively discovered this: you minimize in sensory space.
    • does this abrogate the need for backprop? we are continually causality-inverting machines; we are predictive.
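
A toy rendering of the prediction step (receptive fields, metric, and local models all hard-coded; real LWPR also builds the PLS projections incrementally and adapts the distance metric by gradient descent):

import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(-3.0, 3.0, 7)     # receptive-field centers (1-D input here)
D = 2.0                                 # distance metric (a scalar in 1-D)
betas = rng.normal(size=(7, 2))         # local linear models: [slope, offset]

def predict(x):
    w = np.exp(-0.5 * D * (x - centers) ** 2)          # gaussian weight per model
    y_loc = betas[:, 0] * (x - centers) + betas[:, 1]  # each local prediction
    return (w * y_loc).sum() / w.sum()                 # normalized blend

print(predict(0.5))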

____References____

[0] Vijayakumar S, D'Souza A, Schaal S, Incremental online learning in high dimensions.Neural Comput 17:12, 2602-34 (2005 Dec)

{900}
ref: Helms-2003.01 tags: Schwartz BMI adaptive control Taylor Tillery 2003 date: 11-26-2011 00:58 gmt revision:1 [0] [head]

PMID-12929922 Training in cortical control of neuroprosthetic devices improves signal extraction from small neuronal ensembles.

  • Lays out the coadaptive algorithm.
  • with supervised / adaptive training, the ML estimator is able to get 80% of the targets correct.
  • Reviews in the Neurosciences (conference) Workshop on Neural and Artificial Computation.

{858}
ref: -0 tags: artificial intelligence machine learning education john toobey leda cosmides date: 12-13-2010 03:43 gmt revision:3 [2] [1] [0] [head]

Notes & responses to evolutionary psychologists John Toobey and Leda Cosmides' - authors of The Adapted Mind - essay in This Will change Everything

  • quote: Currently the most keenly awaited technological development is an all-purpose artificial intelligence-perhaps even an intelligence that would revise itself and grow at an ever-accelerating rate until it enacts millennial transformations. [...] Yet somehow this goal, like the horizon, keeps retreating as fast as it is approached.
  • AI's wrong turn was assuming that the best methods for reasoning and thinking are those that can be applied successfully to any problem domain.
    • But of course it must be possible - we are here, and we did evolve!
    • My opinion: the limit is codifying abstract, assumed, and ambiguous information into program function - e.g. embodying the world.
  • Their idea: intelligences use a number of domain-specific, specialized "hacks", that work for limited tasks; general intelligence appears as a result of the combination of all of these.
    • "Our mental programs can be fiendishly well engineered to solve some problems because they are not limited to using only those strategies that can be applied to all problems."
    • Given the content of the wikipedia page (above), it seems that they have latched onto this particular idea for at least 18 years. Strange how these sorts of things work.
  • Having accurate models of human intelligence would achieve two things:
    • It would enable humans to communicate more effectively with machines via shared knowledge and reasoning.
    • (me:) The AI would be enhanced by the tricks and hacks that evolution took millions of years, billions of individuals, and 10e?? (non-discrete) interactions between individuals and the environment to find. This constitutes an enormous store of information; to overlook it necessitates (probably - there may be serious shortcuts to biological evolution) re-simulating all of the steps that it took to get here. We exist as a cached output of the evolutionary algorithm; recomputing this particular function is energetically impossible.
  • "The long term ambition [of evolutionary psychology] is to develop a model of human nature as precise as if we had the engineering specifications for the control systems of a robot.
  • "Humanity will continue to be blind slaves to the programs evolution has built into our brains until we drag them into the light. Ordinarily, we inhabit only the versions of reality that they spontaneously construct for us -- the surfaces of things. Because we are unaware that we are in a theater, with our roles and our lines largely written for us by our mental programs, we are credulously swept up in these plays (such as the genocidal drama of us versus them). Endless chain reactions among these programs leave us the victims of history -- embedded in war and oppression, enveloped in mass delusions and cultural epidemics, mired in endless negative-sum conflict \\ If we understood these programs and the coordinated hallucinations they orchestrate in our minds, our species could awaken from the roles these programs assign to us. Yet this cannot happen if knowledge -- like quantum mechanics -- remains forever locked up in the minds of a few specialists, walled off by the years of study required to master it. " Exactly. Well said.
    • The solution, then: much much better education; education that utilizes the best knowledge about transferring knowledge.
    • The authors propose video games; this is already being tested, see {859}

{855}
ref: -0 tags: sciences artificial Simon organizations economic rationality date: 12-01-2010 07:33 gmt revision:2 [1] [0] [head]

These are notes from reading Herbert A. Simon’s The Sciences of the Artificial, third edition, 1996 (though most of the material seems from the 70s). They are half quoted / half paraphrased (as needed when the original phrasing was clunky). I’ve added a few of my own observations, and reordered the ideas from the book.

“A large body of evidence shows that human choices are not consistent and transitive, as they would be if a utility function existed ... In general a large gain along one axis is required to compensate for a small loss along another.” HA Simon.

"Companies within a capitalist economy make almost negligible use of markets in their internal functioning" - HA Simon. E.g., they are internally command economies. (later, p 40...) "We take the frequent movability and indefiniteness of organizational boundaries as evidence that there is often a near balance between the advantages of markets and organizations."

  • Retail sales of automobiles are handled by dealerships
  • Many other commodities are sold directly to the consumer
  • In fast food there are direct outlets and franchises.
  • There are sole source suppliers that produce parts for much larger manufacturers.
I’m realizing / imagining a very flexible system of organizations, tied together and communicating via a liquid ‘blood’ of the market economy.

That said: organizations are not highly centralized structures in which all the important decisions are made at the center; this would exceed the limits of procedural rationality and lose many of the advantages attainable from the use of hierarchical authority. Business organizations, like markets, are vast distributed computers whose decision processes are substantially decentralized. In fact, the work of the head of a corporation is a market-like activity: allocating capital to promising or desirable projects.

In organizations, uncertainty is often a good reason to shift from markets to hierarchies in making decisions. If two different arms of a corporation - production and marketing - make different decisions on the uncertain number of units to be sold next year, there will be a problem. It is better for the management to share assumptions. “Left to the market, this kind of uncertainty leads directly to the dilemmas of rationality that we described earlier in terms of game theory and rational expectations”

I retain vivid memories of the astonishment and disbelief expressed by the architecture students to whom I taught urban land economics many years ago when I pointed to medieval cities as marvelously patterned systems that had mostly just 'grown' in response to myriads of individual human decisions. To my students a pattern implied a planner in whose mind it had been conceived and whose hand it had been implemented. The idea that a city could acquire its pattern as naturally as a snowflake was foreign to them ... they reacted to it as many Christian fundamentalists responded to Darwin: no design without a Designer!

Markets appear to conserve information and calculation by assigning decisions to actors who can make them on the basis of information that is available to them locally. von Hayek: "The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to take the right action". To maintain actual Pareto optimality in the markets would require information and computational capabilities that are exceedingly burdensome and unrealistic (from The New Palgrave: A Dictionary of Economics)

Nelson and Winter observe that in economic evolution, in contrast to biological evolution, successful algorithms (business practices) may be borrowed from one firm to another. "The hypothesized system is Lamarckian, because any new idea can be incorporated in operating procedures as soon as its success is observed." Also, it's good, as corporations don't have sexual reproduction / crossover.

{844}
ref: work-0 tags: emg_dsp design part selection stage6 date: 09-22-2010 20:09 gmt revision:9 [8] [7] [6] [5] [4] [3] [head]

"Stage 6" part selection:

  • BF527 to replace the BF537 -- big differences are more pins + a high-speed USB OTG port. The previous design used Maxim's MAX3421E, which seems to drop packets / have limited bandwidth (or perhaps my USB profile is incorrect?)
    • available in both 0.8mm and 0.5mm BGA; which? Both are available from Digi-Key. The coarser one is fine, and will be easier to route.
    • Does not support mobile SDRAM nor DDR SDRAM; just the vanilla variety.
  • Continue to use the BF532 on the wireless devices (emg, neuro)
  • LAN8710 to replace the LAN83C185. Both can use the MII interface; the LAN83 is not recommended for new designs, though it is in the easier-to-debug TQFP package. Blackfin EZ-KIT for BF527 uses the LAN8710.
    • comes in 0.5mm pitch QFN-32 package.
    • 3.3V and 1.2V supply - can supply 1.2V externally.
  • SDRAM: MT48LC16M16A2BG-7E:D, digikey 557-1220-1-ND 16M x16, or 4M x 16 bit X 4 banks.
    • VFBGA-54 package.
    • 3.3v supply.
  • converter: AD7689 8 channel, 16-bit SAR ADC. has a built-in sequencer, which is sweet. (as well as a temperature sensor??!)
    • Package: 20LFCSP.
    • Seems we can run it at 4.0V, as in stage4.
  • Inst amp: MAX4208, available in an 8-pin µMAX package (MSOP-8 sized). Can use the same circuitry as in stage2 - just check the bandwidth; want 2kHz maybe?
  • M25P16 flash, same as on the dev board.
    • Digikey M25P16-VMN6P-ND : 150mil width SOIC-8
  • USB: use the on-board high-speed controller. No need for OTG functionality; FCI USB connector is fine. Digikey 609-1039-ND.

{838}
ref: -0 tags: meta learning Artificial intelligence competent evolutionary programming Moshe Looks MOSES date: 08-07-2010 16:30 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

Competent Program Evolution

  • An excellent start; a good description + meta-description / review of the existing literature.
  • He thinks about things in a slightly different way - separates what I call solutions and objective functions "post- and pre-representational levels" (respectively).
  • The thesis focuses on post-representational search/optimization, not pre-representational (though, I believe that both should meet in the middle - eg. pre-representational levels/ objective functions tuned iteratively during post-representational solution creation. This is what a human would do!)
  • The primary difficulty in competent program evolution is the intense non-decomposability of programs: every variable, constant, branch effects the execution of every other little bit.
  • Competent program creation is possible - humans create programs significantly shorter than lookup tables - hence it should be possible to make a program to do the same job.
  • One solution to the problem is representation - formulate the program creation as a set of 'knobs' that can be twiddled (here he means both gradient-descent partial-derivative optimization and simplex or heuristic one-dimensional probabilistic search, of which there are many good algorithms.)
  • pp 27: outline of his MOSES program -- read it for yourself.
  • The representation step above "explicitly addresses the underlying (semantic) structure of program space independently of the search for any kind of modularity or problem decomposition."
    • In MOSES, optimization does not operate directly on program space, but rather on subspaces defined by the representation-building process. These subspaces may be considered as being defined by templates assigning values to some of the underlying dimensions (e.g., they restrict the size and shape of any resulting trees).
  • In chapter 3 he examines the properties of the boolean programming space, which is claimed to be a good model of larger/more complicated programming spaces in that:
    • Simpler functions are much more heavily sampled - e.g. he generated 1e6 samples of 100-term boolean functions, then reduced them to minimal form using standard operators. The vast majority of the resultant minimum-length (compressed) functions were simple - tautologies or functions of just a few terms.
    • A corollary is that simply increasing syntactic sample length is insufficient for increasing program behavioral complexity / variety.
      • Actually, as random program length increases, the percentage with interesting behaviors decreases due to the structure of the minimum length function distribution.
  • Also tests random perturbations to large boolean formulae (variable replacement/removal, operator swapping) - ~90% of these do nothing (see the sketch after this list).
    • These randomly perturbed programs show a similar structure to above: most of them have very similar behavior to their neighbors; only a few have unique behaviors. makes sense.
    • Run the other way: "syntactic space of large programs is nearly uniform with respect to semantic distance." Semantically similar (boolean) programs are not grouped together.
  • Results somehow seem a let-down: the program does not scale to even moderately large problem spaces. No loops, only functions with conditional evaluation - Jacques Pitrat's results are far more impressive. {815}
    • Seems that, still, there were a lot of meta-knobs to tweak in each implementation. Perhaps this is always the case?
  • My thought: perhaps you can run the optimization not on program representations, but rather program codepaths. He claims that one problem is that behavior is loosely or at worst chaotically related to program structure - which is true - hence optimization on the program itself is very difficult. This is why Moshe runs optimization on the 'knobs' of a representational structure.
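To make chapter 3 concrete, here's a toy Python sketch (mine, not Looks') of the perturbation experiment: build random boolean formulae, apply one random syntactic edit, and count how often the behavior (truth table) is unchanged. The representation, edit operators, and sample counts are all arbitrary choices.

import itertools, random

VARS = ['a', 'b', 'c', 'd']

def rand_formula(depth=4):
    # a random formula as nested tuples: ('and'|'or', l, r), ('not', f), or a variable
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS)
    op = random.choice(['and', 'or', 'not'])
    if op == 'not':
        return ('not', rand_formula(depth - 1))
    return (op, rand_formula(depth - 1), rand_formula(depth - 1))

def evaluate(f, env):
    if isinstance(f, str):
        return env[f]
    if f[0] == 'not':
        return not evaluate(f[1], env)
    a, b = evaluate(f[1], env), evaluate(f[2], env)
    return (a and b) if f[0] == 'and' else (a or b)

def truth_table(f):
    # a formula's behavior = its output over all 2^n inputs
    return tuple(evaluate(f, dict(zip(VARS, bits)))
                 for bits in itertools.product([False, True], repeat=len(VARS)))

def perturb(f):
    # one random syntactic edit: swap an and/or, or replace a variable
    if isinstance(f, str):
        return random.choice(VARS)
    if f[0] == 'not':
        return ('not', perturb(f[1]))
    r = random.random()
    if r < 1.0 / 3:
        return ('or' if f[0] == 'and' else 'and', f[1], f[2])
    if r < 2.0 / 3:
        return (f[0], perturb(f[1]), f[2])
    return (f[0], f[1], perturb(f[2]))

trials = 20000
same = sum(truth_table(f) == truth_table(perturb(f))
           for f in (rand_formula() for _ in range(trials)))
print('unchanged behavior: %.1f%%' % (100.0 * same / trials))

The exact fraction depends on the arbitrary choices above; the paper's ~90% figure is for much larger formulae.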

{837}
hide / / print
ref: -0 tags: artificial intelligence Hutters theorem date: 08-05-2010 05:06 gmt revision:0 [head]

Hutter's Theorem: for any well-defined problem, there is a single constructible algorithm that is within a factor of 5 of the fastest algorithm for that problem, asymptotically (the additive constants can be enormous). http://www.hutter1.net/ai/pfastprg.htm

{583}
hide / / print
ref: notes-0 tags: usbmon decode chart linux debug date: 07-12-2010 03:29 gmt revision:3 [2] [1] [0] [head]

From this and the USB 2.0 spec, I made this quick (totally incomprehensible?) key for understanding the output of commands like

# mount -t debugfs none_debugs /sys/kernel/debug
# modprobe usbmon
# cat /sys/kernel/debug/usbmon/2u

To be used with the tables from the (free) USB 2.0 spec:
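For reference, a usbmon text-API line (this key is from my reading of the kernel's Documentation/usb/usbmon.txt - double-check against your kernel version) such as

ffff88003f9253c0 2290483960 S Bo:2:005:1 -115 4 = 12345678

reads, left to right: URB kernel address (the tag), timestamp in microseconds, event type (S = submission, C = callback, E = submission error), address word type+direction:bus:device:endpoint (C = control, Z = isochronous, I = interrupt, B = bulk; i = in, o = out), URB status, data length in bytes, then the data words ('=' means the data was captured).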

{820}
hide / / print
ref: notes-0 tags: CSV blog article group dynamics steinberg date: 07-05-2010 15:30 gmt revision:1 [0] [head]

Another excellent post from Steinberg on treating people as predictable nonlinear fluids. "The system works far better when a column is introduced off-center in front of the door," as demonstrated by Mr. Torrens. "It's counterintuitive, but the column sends shock waves through the crowds to break up the congestion patterns." (...) Most traffic jams are emergent phenomena that begin with mistakes from just one or two drivers. According to Horvitz's models, they can actually "un-jam" traffic by calling drivers at a particular location and giving them very specific instructions: "Move to the left-most lane, and then speed up to 65."

{780}
hide / / print
ref: -0 tags: chess evolution machine learning 2004 partial derivative date: 10-26-2009 04:07 gmt revision:2 [1] [0] [head]

A Self-learning Evolutionary Chess Program

  • The evolved program is able to perform at near master level!
  • Used object networks (neural networks that can be moved about according to the symmetries of the problem space). Paul Werbos apparently invented these, too.
  • Approached the problem by assigning values to having pieces at particular places on the board (PVT, positional value tables). The value of a move was the value of the resulting global valuation (sum of value of own pieces - value of opponent's pieces) + PVT. They used these valuations to look a set number of moves into the future, using an alpha-beta search (see the sketch after these notes).
    • Used 4-ply search depth during normal genetic evolution; 6 when pawns were about to be promoted.
  • The neural networks looked at the first 2 rows, the last two rows, and a 4x4 square in the middle of the board - areas known to matter in real games. (The main author is a master-level chess player and chess teacher).
  • The outputs of the three neural networks were added to the material and PVT values to assess a hypothetical board position.
  • Genetic selection operated on the PVT values, neural network weights, piece valuation, and biases of the neural networks. These were initialized semi-randomly; PVT values were initialized based on open-source programs.
  • Performed 50 generations of 20 players each. The top 10 players from each generation survived.
  • Gary Kasparov was consulted in this research. Cool!
  • I wonder what would happen if you allowed the program to propose (genetically or otherwise) alternate algorithmic structures. What they describe is purely a search through weight space - what about a genetic search through algorithmic structure space? Too difficult of a search?
  • I mean, that's what humans (the authors) did while designing this program/algorithm. The lead author, as mentioned, is already a very good chess player, and hence he could imbue the initial program with a lot of good 'filters', 'kernels', or 'glasses' for looking at the chess board. And how did he arrive at these ideas? Practice (raw data) and communication (other people's kernels, extracted from more raw data and validated). And how does he play? By using his experience and knowledge to predict probable moves into the future, evaluating their value, and selecting the best. And how does he evaluate his algorithm? The same way! By using his knowledge of both chess and computer science to simulate hypothetical designs in his head, seeing how he thinks they will perform, and selecting the best one.
  • The problem with present algorithms is that they have no sense of artistic beauty - no love of symmetry, whether it be simple geometric symmetry (beautiful people have symmetric faces) or more fractal (fractional-dimensioned) symmetry, e.g. music, fractals (duh), human art. I think symmetry can enormously cut down the dimension of the search space in learning, hence is frequently worthy of its own search.
    • Algorithms do presently have a good sense of parsimony, at least, through the AIC / regularization / SVD / bayes net's priors / etc. Parsimony can be beauty, too.
  • Another notable discrepancy is that humans can reason in a concrete way - they actively search for the thing that is causing the problem, the thing that is contributing greatly to either good or bad results. They do this by the scientific method, sorta - hold all other things constant, perturb some section of the system, measure the output. This is the same as taking a partial derivative. Such derivatives are used heavily/exclusively in training neural networks - weights are changed based on the partial derivative of that weight wrt the output-referenced error. So reasoning is similar to non-parallel backprop? Or a really slow way of taking partial derivatives? Maybe. The goal of both is to assign valuation/causation to a given weight/subsystem.
  • Human reasoning involves dual valuation pathways - internal, based on a model of the world, and external, which of course involves experimentation and memory (and perhaps scholarly journal papers etc). The mammalian cortex-basal ganglia-thalamus loop seems designed for running these sorts of simulations because it is the dual of the problem of selecting appropriate behaviors. (there! I said it!) In internal simulation, you take world state, apply forward transform with perturbation, then evaluate the result - see if your perturbation (partial derivative) yields information. In motor behavior, you take the body state, apply forward transformation with perturbation (muscle contraction), and evaluate the result. Same thing. Of course you don't have to do this too much, as the cortex will remember the input-perturbation-result.
  • Understanding seems to be related to this input-transform-evaluate cycle, too, except here what is changing is the forward transform, and the output is compared to known output - does a given kernel (concept) predict the output/observed data?
  • Now what would happen if you applied this input-transform-evaluate to itself, e.g. you allowed the system to evaluate itself. Nothing? Recursion? (recursion is a very beautiful concept.) Some degree of awareness?
  • Surely someone has thought of this before, and tried to simulate it on a computer. Wasn't AI research all about this in the 70's-80's? People have said that their big problem was that AI was then entirely/mostly symbolic and insufficiently probabilistic or data-intensive; the 90's-21st century seems to have solved that. This field is unfamiliar to me, it'll take some sussing about before I can grok the academic landscape.
    • Even more surely, someone is doing it right now! This is the way the world advances. Same thing happened to me with GPGPU stuff, which I was doing in 2003. Now everyone is up to that shiznit.
  • It seems that machine-learning is transitioning from informing my personal philosophy, to becoming my philosophy. Good/bad? Feel free to edit this entry!
  • It's getting late and I'm tired -> rant ends.
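For concreteness, the material + PVT + alpha-beta scheme above fits in a few lines of Python. This is a generic negamax sketch of the idea, not Fogel et al.'s code - the Board interface, piece values, and table contents here are hypothetical stand-ins:

# hypothetical Board interface: pieces() yields (square, piece, is_ours),
# legal_moves(), apply(move), game_over(); pvt[piece][square] is the
# positional value table, nets(board) stands in for the object networks.
PIECE_VALUE = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def evaluate(board, pvt, nets=lambda b: 0.0):
    # material difference + PVT bonus (+ network outputs), mover's viewpoint
    score = nets(board)
    for square, piece, is_ours in board.pieces():
        sign = 1 if is_ours else -1
        score += sign * (PIECE_VALUE[piece] + pvt[piece][square])
    return score

def alphabeta(board, pvt, depth, alpha=-1e9, beta=1e9):
    # negamax alpha-beta: a position is worth minus the opponent's best
    # reply, searched to fixed depth (4 ply in evolution, 6 near promotion)
    if depth == 0 or board.game_over():
        return evaluate(board, pvt)
    for move in board.legal_moves():
        val = -alphabeta(board.apply(move), pvt, depth - 1, -beta, -alpha)
        alpha = max(alpha, val)
        if alpha >= beta:
            break  # cutoff: the opponent would never allow this line
    return alpha

Evolution then only perturbs pvt, the piece values, and the network weights, keeping the best 10 of 20 players each generation.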

{769}
hide / / print
ref: life-0 tags: art design bantjes vector color date: 07-23-2009 14:14 gmt revision:0 [head]

Marian Bantjes - kickass designer. Just see her business card! or Saks snowflake theme

{716}
hide / / print
ref: Ribeiro-2004.12 tags: Sidarta Ribeiro reverberation sleep consolidation integration replay REM SWS date: 03-26-2009 03:19 gmt revision:2 [1] [0] [head]

PMID-15576886[0] Reverberation, storage, and postsynaptic propagation of memories during sleep

  • Many references in the first paragraph! They should switch to the [n] notation; the names are disruptive.
  • Shows that reverberation (is this measured in a scale-invariant way?) increases after a novel object is placed in the cage. Recorded from a single rat for up to 96 hours.
  • also looked at Zif-268 activation in the cortex (autoradiogram);
    • Previous results showed that Zif-268 levels are up-regulated in REM but not SWS in the hippocampus and cerebral cortex of exposed animals. (Ribeiro 1999)
    • hippocampal inactivation during REM sleep blocked zif-268 upregulation.
    • quote: "Increased activity is necessary but not sufficient to induce zif-268 expression, which also requires calcium inflow via NMDA channels and phosphorylation of the cAMP response element-binding protein (CREB)"
  • Sleep deprivation is much more detrimental to implicit than to explicit memory consolidation (Fowler et al. 1973; Karni et al. 1994; Smith 1995, 2001; Stickgold et al. 2000a; Laureys et al. 2002; Walker et al. 2002; Maquet et al. 2003; Mednick et al. 2003)

____References____

[0] Ribeiro S, Nicolelis MA, Reverberation, storage, and postsynaptic propagation of memories during sleep.Learn Mem 11:6, 686-96 (2004 Nov-Dec)

{695}
hide / / print
ref: -0 tags: alopex machine learning artificial neural networks date: 03-09-2009 22:12 gmt revision:0 [head]

Alopex: A Correlation-Based Learning Algorithm for Feed-Forward and Recurrent Neural Networks (1994)

  • read the abstract! rather than using the gradient error estimate as in backpropagation, it uses the correlation between changes in network weights and changes in the error, plus Gaussian noise.
    • backpropagation requires calculation of the derivatives of the transfer function from one neuron to the output. This is very non-local information.
    • one alternative is somewhat empirical: compute the derivatives wrt the weights through perturbations.
    • all these algorithms are solutions to the optimization problem: minimize an error measure, E, wrt the network weights.
  • all network weights are updated synchronously.
  • can be used to train both feedforward and recurrent networks.
  • algorithm apparently has a long history, especially in visual research.
  • the algorithm is quite simple! easy to understand (see the sketch after this list).
    • uses stochastic weight changes with an annealing schedule.
  • this is pre-pub: tables and figures at the end.
  • looks like it has comparable or faster convergence than backpropagation.
  • not sure how it will scale to problems with hundreds of neurons; though, they looked at an encoding task with 32 outputs.
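The update rule is compact enough to write out. Here's my sketch of the core iteration in Python - a paraphrase of the rule as I read it, with an arbitrary step size, annealing schedule, and toy objective:

import numpy as np

rng = np.random.default_rng(0)

def alopex_minimize(E, w, delta=0.01, iters=5000, anneal_every=50):
    # Alopex: every weight moves by +/-delta each iteration; the probability
    # of repeating the previous direction depends on the correlation
    # C = dw * dE between the last weight change and the last error change.
    dw = rng.choice([-delta, delta], size=w.shape)
    E_prev, T, hist = E(w), 1.0, []
    for n in range(iters):
        w = w + dw
        E_now = E(w)
        C = dw * (E_now - E_prev)            # per-weight correlation
        p = 1.0 / (1.0 + np.exp(C / T))      # C < 0 (error fell): p > 1/2, keep direction
        keep = rng.random(size=w.shape) < p
        dw = np.where(keep, dw, -dw)         # stochastic direction update
        hist.append(np.abs(C).mean())
        if (n + 1) % anneal_every == 0:      # anneal T toward recent mean |C|
            T, hist = max(np.mean(hist), 1e-8), []
        E_prev = E_now
    return w

# toy usage: minimize a quadratic 'error' over 10 weights
w = alopex_minimize(lambda w: float(np.sum(w ** 2)), rng.standard_normal(10))
print(float(np.sum(w ** 2)))  # should end up near zero

Note there are no gradients anywhere - only the scalar error is fed back, which is the point.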

{674}
hide / / print
ref: notes-0 tags: Barto Hierarchal Reinforcement Learning date: 02-17-2009 05:38 gmt revision:1 [0] [head]

Recent Advances in Hierarchical Reinforcement Learning

  • RL with good function-approximation methods for evaluating the value function or policy function solves many problems, yet...
  • RL is bedeviled by the curse of dimensionality: the number of parameters grows exponentially with the size of a compact encoding of state.
  • Recent research has tackled the problem by exploiting temporal abstraction - decisions are not required at each step, but rather invoke the activity of temporally extended sub-policies. This is somewhat similar to a macro or subroutine in programming (see the sketch after this list).
  • This is fundamentally similar to adding detailed domain-specific knowledge to the controller / policy.
  • Ron Parr seems to have made significant advances in this field with 'hierarchies of abstract machines'.
    • I'm still looking for a cognitive (predictive) extension to these RL methods ... these all are about extension through programmer knowledge.
  • They also talk about concurrent RL, where agents can pursue multiple actions (or options) at the same time, and assess the value of each upon completion.
  • Next are partially observable Markov decision processes (POMDPs), where you have to estimate the present state (belief state) as well as a policy. It is known that an optimal solution to this task is intractable. They propose using hierarchical suffix memory as a solution; I can't really see what these are about.
    • It is also possible to attack the problem using hierarchical POMDPs, which break the task into higher- and lower-level 'tasks'. Little mention is given to the even harder problem of breaking sequences up into tasks.
  • Good review altogether, reasonable balance between depth and length.
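A minimal illustration of the temporal-abstraction idea (my toy example, not from the review): SMDP Q-learning on a 1-D corridor, where each 'option' walks up to several primitive steps before control returns, and is scored with a gamma^k bootstrap:

import random
from collections import defaultdict

GAMMA, ALPHA, EPS, N = 0.95, 0.1, 0.1, 10   # corridor states 0..N-1, reward at N-1
OPTIONS = [-1, +1]                          # two options: walk left / walk right

def run_option(s, direction, k_max=3):
    # a temporally extended action: step one way up to k_max times, returning
    # the discounted reward collected, the end state, and the duration k
    total, discount, k = 0.0, 1.0, 0
    for _ in range(k_max):
        s = max(0, min(N - 1, s + direction))
        total += discount * (1.0 if s == N - 1 else 0.0)
        discount *= GAMMA
        k += 1
        if s == N - 1:
            break
    return total, s, k

Q = defaultdict(float)
for episode in range(2000):
    s = 0
    while s != N - 1:
        o = random.choice(OPTIONS) if random.random() < EPS else \
            max(OPTIONS, key=lambda o2: Q[(s, o2)])
        r, s2, k = run_option(s, o)
        # SMDP Q-learning: bootstrap with GAMMA**k since the option ran k steps
        target = r + GAMMA ** k * max(Q[(s2, o2)] for o2 in OPTIONS)
        Q[(s, o)] += ALPHA * (target - Q[(s, o)])
        s = s2
print(Q[(0, +1)], Q[(0, -1)])  # the rightward option should dominate

The decision points are per-option, not per-step - that's the whole trick.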

{643}
hide / / print
ref: notes-0 tags: artificial cerebellum robot date: 11-06-2008 17:16 gmt revision:1 [0] [head]

Artificial Cerebellum for robot control:

{502}
hide / / print
ref: notes-0 tags: nordic nrf24L01 state diagram flowchart SPI blackfin date: 06-25-2008 02:44 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

Outline:

The goal is to use an nRF24L01 to make an asymmetrical, bidirectional link. The outgoing bandwidth should be maximized, ~1.5 Mbps, and the incoming bandwidth can be much smaller, ~17 kbps, though on both channels we want guaranteed latency: < 4 ms for the outgoing data and < 10 ms for the incoming data. Furthermore, the processor being used to run this, a Blackfin BF532, does not seem to play well when SPI DMA is enabled while most CPU time is spent in the SPORT ISR reading and processing samples. Fortunately, the SPI port and SPORT can be run synchronously (provided the SPI port is clocked fast enough), allowing the processor to run one 'thread', i.e. no interrupts. It seems that with high-priority interrupts, the DMA engine is not able to service the SPI perfectly, and without DMA, data comes out of the SPI in drips and drabs and cannot keep the radio's fifo full. Hence, one must program a synchronous radio controller, where states are stored in variables and not in the program counter (PC register, saved upon interrupt, etc).

As in other postings on the nRF24L01, the plan is to keep the transmit fifo full for most of the 4 ms allowed by the free-running PLL, then transition back into either standby-I mode or send a status packet. The status packet is always acknowledged by the primary receiver with a command packet, which provides both synchronization and incoming bandwidth. Therefore, there are 4 classes of transfers (sketched schematically in the code after this list):

  1. Just a status packet. After uploading it, wait for the TX_DS IRQ, transition to RX mode, wait for the RX_DR IRQ, clear CE, read in the packet, and switch back to TX mode.
  2. One data packet + status packet. There are timeouts on both the transmission of data packets and status packets; in this case, both have been exceeded. Enter the TX data state, upload the data packet, assert CE, send the status packet, and wait for IRQs from both packets. This requires a transition from the tx-data CE-high state to the tx-status CSN-low state.
  3. Many data packets and one status packet. Same as above, only the data transmission was triggered by a full threshold in the outgoing packet queue (in processor RAM). In this case, two packets are uploaded to the radio before waiting for a TX_DS IRQ, and, at the end of the process, we have to wait for two TX_DS IRQs after uploading the data packet.
  4. Many data packets. This is straightforward - upload 2 packets, wait for 1 TX_DS IRQ, {upload another, wait for IRQ} (until packets are gone), wait for the final IRQ, set CE low.
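Schematically, the four classes collapse into a small polled state machine. The sketch below is Python pseudocode just to show the 'state in a variable, not in the program counter' structure - the real thing is C on the BF532, and radio.*, make_status_packet(), handle_command(), and the queue threshold are hypothetical stand-ins:

QUEUE_THRESHOLD = 8           # packets buffered before a burst (made up)
state, irqs_expected = 'IDLE', 0

def radio_poll(radio, tx_queue, status_timer):
    # called once per pass through the single main loop; never blocks
    global state, irqs_expected
    if state == 'IDLE':
        if len(tx_queue) >= QUEUE_THRESHOLD or status_timer.expired():
            for _ in range(min(2, len(tx_queue))):   # preload up to 2 packets
                radio.upload(tx_queue.pop(0))
                irqs_expected += 1
            radio.set_ce(1)
            if irqs_expected:
                state = 'TX'                         # classes 2-4
            else:
                radio.upload(make_status_packet())
                state = 'TX_STATUS'                  # class 1: status only
    elif state == 'TX':
        if radio.tx_ds_irq():                        # a packet left the fifo
            irqs_expected -= 1
            if tx_queue:
                radio.upload(tx_queue.pop(0))        # class 4: keep fifo full
                irqs_expected += 1
            elif irqs_expected == 0:
                if status_timer.expired():
                    radio.upload(make_status_packet())  # classes 2 & 3
                    state = 'TX_STATUS'
                else:
                    radio.set_ce(0)
                    state = 'IDLE'
    elif state == 'TX_STATUS':
        if radio.tx_ds_irq():
            radio.set_rx_mode()                      # turn around for the command
            state = 'RX'
    elif state == 'RX':
        if radio.rx_dr_irq():
            radio.set_ce(0)
            handle_command(radio.read_packet())
            radio.set_tx_mode()
            status_timer.reset()
            state = 'IDLE'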

screenshot of the derived code working (yea, my USB logic analyzer only runs on windows..yeck):

old versions:

{94}
hide / / print
ref: bookmark-0 tags: particle_filter unscented monte_carlo MCMC date: 12-11-2007 16:46 gmt revision:2 [1] [0] [head]

images/94_1.pdf

  • covers both the particle filter and the unscented Kalman filter ... the unscented Kalman filter is used as the proposal distribution.

{522}
hide / / print
ref: notes-0 tags: FCC part15 regulations radio date: 12-11-2007 01:23 gmt revision:5 [4] [3] [2] [1] [0] [head]

from http://www.fcc.gov/oet/info/rules/part15/part15-9-20-07.pdf :

  • Also available through the e-CFR (electronic code of federal regulations) site.
    • Telecommunications is title 47, radio devices are part 15. Industrial, medical, and scientific equipment is part 18.
  • TBSI 31-ch headstage is under the 300 µV @ 3 m limit for unintentional transmitters above 960 MHz, in compliance with Section 15.109 a and b.
  • 2.4 GHz ISM band: the relevant regulation is part 15, section 15.247, which specifies max 1 W output power & restricts the channel structure of FHSS and DSSS systems within these bands.
  • Wideband systems: Section 15.250
    • -63 dBm EIRP, 1610-1990 MHz. EIRP measured with a 1 MHz bandwidth.
    • minimum 50 MHz bandwidth. (can meet this!!)
    • to get a certification, you have to file the testing documentation with the FCC (not sure what else you need to do..) "alternative measurement procedures may be considered by the commission"
    • emissions below 960 MHz must be compliant with section 15.209
  • ultra-wideband systems: Section 15.501
    • UWB = bandwidth > 500 MHz, or bandwidth > 0.2 × center frequency.
    • -53.3 dBm EIRP in 1610-1990 MHz - the regulators allow an extra 10 dB (roughly a factor of 3 in field amplitude) if your bandwidth is at least ~350 MHz, as compared to the wideband specification.
    • -41.3 dBm EIRP, 3.1-10.6 GHz. This is where it's at! (a factor of 4 larger in amplitude; see the conversion below..)
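Handy sanity check on those numbers (plain arithmetic, not from the regs): power in mW = 10^(dBm/10), so

for dbm in (-63.0, -53.3, -41.3):
    print('%6.1f dBm = %6.2f nW EIRP' % (dbm, 1e6 * 10 ** (dbm / 10.0)))

which gives 0.50, 4.68, and 74.1 nW respectively. The 12 dB step from -53.3 to -41.3 dBm is 16x in power, i.e. the 'factor of 4' above is in field amplitude.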

{443}
hide / / print
ref: notes-0 tags: party 1107 sarcasm date: 09-05-2007 02:56 gmt revision:1 [0] [head]

Here's the deal: myself, Jan & the rest of the double-one aught seven crew will be hosting a club swimming barbecue ~6 this friday. Since we'll only be warming up by the end of the club swimming bbq, and probably people will not want to leave anyway, this event is to extend the festivities indefinitely. We feel that this is essential, as recent news - for example the delicious (and completely unanticipated) subprime mortgage fun, the even more delicious lead paint that China has been supplying for augmenting our children's collective intelligence, and the incredibly momentous expenditure of more than $450 trillion on the Iraq war - are due cause for a little bit of indulgence and celebration!

So come party like Bernanke, and drop cash out of helicopters!

(Since we are all going to be generous and drop cash like we're the fed, 1107 would greatly appreciate it if you could direct a bit of that generosity on food / beer etc for the party. (so as to keep us all fed.) thanks :)

{370}
hide / / print
ref: -0 tags: gore curibata pencil art NYTimes magazine travel brazil date: 05-20-2007 16:35 gmt revision:1 [0] [head]

An awesome pencil drawing of Al Gore in the May 20th issue of the NYTimes magazine.

Curitiba, Brazil - a city unusual for its urban planning, ecological mindset, bus system, affluence (compared to the rest of Brazil), and ratio of parks to buildings. I would like to go there.

{353}
hide / / print
ref: Fromm-1981.05 tags: Evarts pyramidal tract size principle movements date: 04-23-2007 04:25 gmt revision:2 [1] [0] [head]

PMID-6809905[0] Relation of size and activity of motor cortex pyramidal tract neurons during skilled movements in the monkey

  • there did not seem to be a "size principle" in the strict sense that this term has been used with reference to spinal cord motoneurons.

{296}
hide / / print
ref: Kettner-1988.08 tags: 3D motor control population_vector Schwartz Georgopoulos date: 04-05-2007 17:09 gmt revision:1 [0] [head]

A triptych of papers (good job increasing your publication count, guys!):

  • PMID-3411363[0] Primate motor cortex and free arm movements to visual targets in three-dimensional space. III. Positional gradients and population coding of movement direction from various movement origins.
    • propose a multilinear model to predict the firing rate of a neuron (a regression in the same direction as the Kalman filter)
    • i don't see how this is that much different from below (?)
  • PMID-3411362[1] Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population.
    • they show, basically, that they can predict movement direction (note: this is different from the actual movement!) using the population vector scheme (see the sketch after this list).
  • PMID-3411361[2] Primate motor cortex and free arm movements to visual targets in three-dimensional space. I. Relations between single cell discharge and direction of movement.
    • 568 cells!!
    • 8 directional targets, again -- not sure how they were arranged; they say 'in approximately equal angular intervals'
    • these findings generalize the previous 2D results [3] (tuning to external space) to 3D
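The population vector scheme in [1] is easy to replay in a few lines - here's a toy numpy reconstruction (mine; synthetic cosine-tuned cells, all parameters invented):

import numpy as np

rng = np.random.default_rng(1)
N = 568                                          # number of cells, as in the papers
pd = rng.standard_normal((N, 3))
pd /= np.linalg.norm(pd, axis=1, keepdims=True)  # random unit preferred directions

def rates(direction, b0=20.0, m=10.0):
    # cosine tuning, the model behind the scheme: f_i = b0 + m * (pd_i . d)
    return b0 + m * (pd @ direction) + rng.normal(0.0, 2.0, N)

def population_vector(r, b0=20.0):
    # each cell votes along its preferred direction, weighted by its rate
    # relative to baseline; the weighted sum points near the true direction
    v = ((r - b0)[:, None] * pd).sum(axis=0)
    return v / np.linalg.norm(v)

d_true = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
d_hat = population_vector(rates(d_true))
print('cos(angle) between true and decoded direction:', float(d_hat @ d_true))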

{292}
hide / / print
ref: Schwartz-2004.01 tags: Schwartz BMI prosthetics M1 review 2004 date: 04-05-2007 16:12 gmt revision:1 [0] [head]

PMID-15217341[0] Cortical neural prosthetics

  • closed-loop control improves performance. see [1]
    • adaptive learning techniques, coupled with the adaptability of the cortex, suggest that recorded cortical signals can serve as control signals for motor prostheses.

{256}
hide / / print
ref: math-0 tags: partial least squares PLS regression thesis italy date: 03-26-2007 16:48 gmt revision:2 [1] [0] [head]

http://www.fedoa.unina.it/593/

  • pdf does not seem to open in linux? no, doesn't open on windows either - the PDF itself is corrupt!
  • here is a published version of his work.

{176}
hide / / print
ref: Brockwell-2004.04 tags: particle_filter Brockwell BMI 2004 wiener filter population_vector MCMC date: 02-05-2007 18:54 gmt revision:1 [0] [head]

PMID-15010499[0] Recursive Bayesian Decoding of Motor Cortical Signals by Particle Filtering

  • It seems that particle filtering is 3-5 times more efficient / accurate than optimal linear control, and 7-10 times more efficient than the population vector method.
  • synthetic data: inhomogeneous Poisson point process, 400 bins of 30 ms width = 12 seconds, random-walk model.
  • monkey data: 258 neurons recorded in independent experiments in the ventral premotor cortex. monkey performed a 3D center-out task followed by an ellipse tracing task.
  • Bayesian methods work optimally when their models/assumptions hold for the data being analyzed.
  • Bayesian filters in the past were computationally inefficient; particle filtering was developed as a method to address this problem (see the sketch after this list).
  • tested the particle filter in a simulated study and a single-unit monkey recording ellipse-tracing experiment. (data from Reina and Schwartz 2003)
  • there is a lot of math in the latter half of the paper describing their results. The tracings look really good, and I guess this is from the quality of the single-unit recordings.
  • the appendix details the 'innovative methodology' ;)
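The bootstrap flavor of the filter is easy to sketch. My toy version below matches the paper's synthetic setup only in spirit (30 ms bins, a random-walk latent state, inhomogeneous-Poisson spike counts); the log-linear tuning model and every constant are invented:

import numpy as np

rng = np.random.default_rng(2)
DT, T, N_NEU, N_PART = 0.03, 400, 20, 1000   # 30 ms bins, 400 bins = 12 s

H = rng.standard_normal((N_NEU, 2))          # synthetic tuning weights
b = np.log(10.0) * np.ones(N_NEU)            # ~10 Hz baseline log-rate

def rate(x):
    # inhomogeneous Poisson intensity, log-linear in the 2-D latent state
    return np.exp(b + x @ H.T)

x_true = np.cumsum(rng.normal(0.0, 0.05, (T, 2)), axis=0)   # random walk
counts = rng.poisson(rate(x_true) * DT)                     # (T, N_NEU)

parts, est = np.zeros((N_PART, 2)), np.zeros((T, 2))
for t in range(T):
    parts = parts + rng.normal(0.0, 0.05, parts.shape)   # propagate: walk model
    lam = rate(parts) * DT                               # (N_PART, N_NEU)
    logw = (counts[t] * np.log(lam) - lam).sum(axis=1)   # Poisson log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ parts                                   # posterior-mean decode
    parts = parts[rng.choice(N_PART, size=N_PART, p=w)]  # multinomial resample
print('decode rmse:', float(np.sqrt(((est - x_true) ** 2).mean())))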

{139}
hide / / print
ref: Schaal-1998.11 tags: schaal local learning PLS partial least squares function approximation date: 0-0-2007 0:0 revision:0 [head]

PMID-9804671 Constructive incremental learning from only local information

{15}
hide / / print
ref: bookmark-0 tags: monte_carlo MCMC particle_filter probability bayes filtering biblography date: 0-0-2007 0:0 revision:0 [head]

http://www-sigproc.eng.cam.ac.uk/smc/papers.html -- sequential monte carlo methods. (bibliography)

{13}
hide / / print
ref: bookmark-0 tags: graffiti art urban photography date: 0-0-2006 0:0 revision:0 [head]

http://www.beautifulcrime.com/public/exhibitions/ Need flash to view the site.

{53}
hide / / print
ref: notes-0 tags: cartesian gantry robot date: 0-0-2006 0:0 revision:0 [head]

{45}
hide / / print
ref: bookmark-0 tags: muscle artifial catalyst nanotubes shape-memory alloy date: 0-0-2006 0:0 revision:0 [head]

http://www.newscientist.com/article/dn8859-methanolpowered-artificial-muscles-start-to-flex.html

{14}
hide / / print
ref: bookmark-0 tags: urban art san_francisco california vector_art mod_art store date: 0-0-2006 0:0 revision:0 [head]

http://www.upperplayground.com/