m8ta
{1545}
ref: -1988 tags: Linsker infomax linear neural network hebbian learning unsupervised date: 08-03-2021 06:12 gmt revision:2 [1] [0] [head]

Self-organization in a perceptual network

  • Ralph Linsker, 1988.
  • One of the first (verbose, slightly diffuse) investigations of the ability of linear projection neurons (i.e. dot-product; no non-linearity) to express useful tuning functions.
  • 'Useful' here means information-preserving, in the face of noise or dimensional bottlenecks (as in PCA).
  • Starts with Hebbian learning rules, and shows that these + white-noise sensory input + some local topology yield simple- and complex-cell-like visual responses.
    • Ralph notes that neurons in primate visual cortex are tuned in utero -- prior to real-world visual experience! Wow. (Who did these studies?)
    • This is a very minimalistic starting point; there isn't even structured stimuli (!)
    • Single neuron (and later, multiple neurons) are purely feed-forward; author cautions that a lack of feedback is not biologically realistic.
      • Also note that this was back in the Motorola 680x0 days ... computers were not that powerful (but certainly could handle more than 1-2 neurons!)
  • Linear algebra shows that Hebbian synapses cause a linear layer to learn the covariance function of its inputs, $Q$, with no dependence on the actual layer activity.
  • When looked at in terms of an energy function, this is equivalent to gradient descent to maximize the layer-output variance.
  • He also hits on:
    • Hopfield networks,
    • PCA,
    • Oja's constrained Hebbian rule $\delta w_i \propto \langle L_2(L_1 - L_2 w_i) \rangle$ (that is, a quadratic constraint on the weights to make $\Sigma w^2 \sim 1$ )
    • Optimal linear reconstruction in the presence of noise
    • Mutual information between layer input and output (I found this to be a bit hand-wavey)
      • Yet he notes critically: "but it is not true that maximum information rate and maximum activity variance coincide when the probability distribution of signals is arbitrary".
        • Indeed. The world is characterized by very non-Gaussian structured sensory stimuli.
    • Redundancy and diversity in 2-neuron coding model.
    • Role of infomax in maximizing the determinant of the weight matrix, sorta.
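Oja's rule is easy to check numerically. A minimal sketch (all variable names and parameters are my own illustration, not Linsker's): a single linear unit fed zero-mean correlated Gaussian input converges to the principal eigenvector of the input covariance, with the weight norm self-normalizing to ~1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated zero-mean 2-D input; Oja's rule should find the top eigenvector
C = np.array([[2.0, 1.2],
              [1.2, 1.0]])         # input covariance Q
A = np.linalg.cholesky(C)          # samples A @ n have covariance C

w = 0.1 * rng.normal(size=2)       # initial weights
eta = 0.01                         # learning rate

for _ in range(20000):
    L1 = A @ rng.normal(size=2)    # presynaptic (layer 1) activity
    L2 = w @ L1                    # linear postsynaptic (layer 2) output
    w += eta * L2 * (L1 - L2 * w)  # Oja: Hebbian term minus implicit weight decay

v = np.linalg.eigh(C)[1][:, -1]    # principal eigenvector of Q
# sum(w^2) self-normalizes to ~1, and w aligns with v
```

Note how the learned direction depends only on the input statistics (the covariance $Q$), echoing the point above that Hebbian learning in a linear layer has no dependence on anything but the inputs.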

One may critically challenge the infomax idea: we very much need to (and do) throw away spurious or irrelevant information in our sensory streams; what upper layers 'care about' when making decisions is certainly relevant to the lower layers. This credit-assignment problem is neatly solved by backprop, and there are a number of 'biologically plausible' means of performing it, but both this and infomax are maybe avoiding the problem. What might the upper layers really care about? Likely 'care about' is an emergent property of the interacting local learning rules and network structure. Can you search directly in these domains, within biological limits, and motivated by statistical reality, to find unsupervised-learning networks?

You'll still need a way to rank the networks, hence an objective 'care about' function. Sigh. Either way, I don't per se put a lot of weight in the infomax principle. It could be useful, but is only part of the story. Otherwise Linsker's discussion is accessible, lucid, and prescient.

Lol.

{1543}
ref: -2019 tags: backprop neural networks deep learning coordinate descent alternating minimization date: 07-21-2021 03:07 gmt revision:1 [0] [head]

Beyond Backprop: Online Alternating Minimization with Auxiliary Variables

  • This paper is sort-of interesting: rather than back-propagating the errors, you optimize auxiliary variables -- pre-nonlinearity 'codes' -- in a last-to-first layer order. The optimization minimizes a multimodal logistic loss function; the math is not worked out for other loss functions, but presumably this is not a fundamental limit. The loss function also includes a quadratic term on the weights.
  • After the 'codes' are set, optimization can proceed in parallel on the weights. This is done with either straight SGD or adaptive ADAM.
  • Weight L2 penalty is scheduled over time.

This is interesting in that the weight updates can be done in parallel - perhaps more efficient - but you are still propagating errors backward, albeit via optimizing 'codes'. Given the vast infrastructure devoted to auto-diff + backprop, I can't see this being adopted broadly.

That said, the idea of alternating minimization (which is used eg for EM clustering) is powerful, and this paper does describe (though I didn't read it) how there are guarantees on the convexity of the alternating minimization. Likewise, the authors show how to improve the performance of the online / minibatch algorithm by keeping around memory variables, in the form of covariance matrices.
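As a toy illustration of the alternating scheme (my own sketch with a plain quadratic loss, not the paper's logistic formulation or notation): introduce auxiliary codes Z between two linear layers, solve for Z in closed form with the weights fixed, then solve for each weight block independently.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data generated by a random linear map
X = rng.normal(size=(100, 5))
Y = X @ rng.normal(size=(5, 3))

W1 = 0.1 * rng.normal(size=(5, 4))   # input -> code
W2 = 0.1 * rng.normal(size=(4, 3))   # code -> output
mu = 1.0                             # penalty coupling the codes to layer 1

for _ in range(200):
    # (1) auxiliary-variable step, weights fixed:
    #     min_Z ||Y - Z W2||^2 + mu ||Z - X W1||^2   (closed form, row-wise)
    Z = (Y @ W2.T + mu * X @ W1) @ np.linalg.inv(W2 @ W2.T + mu * np.eye(4))
    # (2) weight steps: with Z fixed, the two least-squares solves are
    #     independent of each other and could run in parallel
    W1 = np.linalg.lstsq(X, Z, rcond=None)[0]
    W2 = np.linalg.lstsq(Z, Y, rcond=None)[0]

err = np.mean((X @ W1 @ W2 - Y) ** 2)   # should approach zero for this linear toy
```

Each of the three steps exactly minimizes the joint objective over its own block, so the loss is monotonically non-increasing, which is the source of the convergence guarantees for alternating minimization.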

{1538}
ref: -2010 tags: neural signaling rate code patch clamp barrel cortex date: 03-18-2021 18:41 gmt revision:0 [head]

PMID-20596024 Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex

  • How did I not know of this paper before.
  • Solid study showing that, while a single spike can elicit 28 spikes in post-synaptic neurons, the associated level of noise is indistinguishable from intrinsic noise.
  • Hence the cortex should communicate / compute in rate codes or large synchronized burst firing.
    • They found large bursts to be infrequent, timing precision to be low, hence rate codes.
    • Of course other examples, e.g. auditory cortex, exist.

Cortical reliability amid noise and chaos

  • Noise is primarily of synaptic origin. (Dropout)
  • Recurrent cortical connectivity supports sensitivity to precise timing of thalamocortical inputs.

{1534}
ref: -2020 tags: current opinion in neurobiology Kriegeskorte review article deep learning neural nets circles date: 02-23-2021 17:40 gmt revision:2 [1] [0] [head]

Going in circles is the way forward: the role of recurrence in visual inference

I think the best part of this article are the references -- a nicely complete listing of, well, the current opinion in Neurobiology! (Note that this issue is edited by our own Karel Svoboda, hence there are a good number of Janelians in the author list..)

The gestalt of the review is that deep neural networks need to be recurrent, not purely feed-forward. This results in savings in overall network size, and increase in the achievable computational complexity, perhaps via the incorporation of priors and temporal-spatial information. All this again makes perfect sense and matches my sense of prevailing opinion. Of course, we are left wanting more: all this recurrence ought to be structured in some way.

To me, a rather naive way of thinking about it is that feed-forward layers cause weak activations, which are 'amplified' or 'selected for' in downstream neurons. These neurons proximally code for 'causes' or local reasons, based on the supported hypothesis that the brain has a good temporal-spatial model of the visuo-motor world. The causes then can either explain away the visual input, leading to balanced E-I, or fail to explain it, in which case the excess activity is either rectified by engaging more circuits or by engaging synaptic plasticity.

A critical part of this hypothesis is some degree of binding / disentanglement / spatio-temporal re-assignment. While not all models of computation require registers / variables -- RNNs are Turing-complete, e.g. -- I remain stuck on the idea that, to explain phenomenological experience and practical cognition, the brain must have some means of 'binding'. A reasonable place to look is the apical tuft dendrites, which are capable of storing temporary state (calcium spikes, NMDA spikes), undergo rapid synaptic plasticity, and are so dense that they can reasonably store the outer-product space of binding.

Mounting evidence for apical tufts working independently / in parallel comes from investigations of high-gamma in ECoG: PMID-32851172 Dissociation of broadband high-frequency activity and neuronal firing in the neocortex. "High gamma" shows little correlation with MUA when you differentiate early-deep and late-superficial responses, "consistent with the view it reflects dendritic processing separable from local neuronal firing".

{1520}
ref: -2004 tags: neural synchrony binding robot date: 09-13-2020 02:00 gmt revision:0 [head]

PMID-15142952 Visual binding through reentrant connectivity and dynamic synchronization in a brain-based device

  • Controlled a robot with a complete (for the time) model of the occipital-inferotemporal visual pathway (V1 V2 V4 IT), auditory cortex, colliculus, 'value cortex'.
  • Synapses had a timing-dependent associative BCM learning rule
  • Robot had reflexes to orient toward preferred auditory stimuli
  • Subsequently, the robot 'learned' to orient toward a preferred stimulus (e.g. one that caused orientation).
  • Visual stimuli were either diamonds or squares, either red or green.
    • Discrimination task could have been carried out by (it seems) one perceptron layer.
  • This was 16 years ago, and the results look quaint compared to the modern deep-learning revolution. That said, 'the binding problem' is imho still outstanding or at least interesting. Actual human perception is far more compositional than a deep CNN can support.
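The paper's learning rule is a timing-dependent variant of BCM; a minimal rate-based BCM sketch (parameters and names are my own illustration, not the paper's model) shows the key ingredient, a sliding threshold separating LTD from LTP:

```python
import numpy as np

rng = np.random.default_rng(2)

w = np.full(10, 0.5)        # synaptic weights
theta = 1.0                 # sliding modification threshold
eta, tau = 1e-4, 20.0       # learning rate; threshold time constant

for _ in range(20000):
    x = rng.random(10)              # presynaptic rates
    y = max(w @ x, 0.0)             # rectified postsynaptic rate
    w += eta * y * (y - theta) * x  # BCM: LTD when y < theta, LTP when y > theta
    w = np.clip(w, 0.0, 5.0)        # keep weights bounded and non-negative
    theta += (y * y - theta) / tau  # threshold tracks <y^2>, stabilizing the rule

# at equilibrium the output rate hovers near the sliding threshold
```

Because theta grows quadratically with the output rate while the Hebbian term grows linearly, runaway potentiation is self-limiting; this is what makes BCM associative yet stable.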

{1519}
ref: -2020 tags: Neuralink commentary BMI pigs date: 08-31-2020 18:01 gmt revision:1 [0] [head]

Neuralink progress update August 28 2020

Some commentary.

The good:

  • Ian hit the nail on the head @ 1:05:47. That is not a side-benefit -- that was the original and true purpose. Thank you.
  • The electronics, amplify / record / sort / stim ASIC, as well as interconnect all advance the state of the art in density, power efficiency, and capability. (I always liked higher sampling rates, but w/e)
  • Puck is an ideal form factor, again SOTA. 25mm diameter craniotomy should give plenty of space for 32 x 32-channel depth electrodes (say).
  • I would estimate that the high-density per electrode feed-through is also SOTA, but it might also be a non-hermetic pass-through via the thin-film (e.g. some water vapor diffusion along the length of the polyimide (if that polymer is being used)).
  • Robot looks nice dressed in those fancy robes. Also looks like there is a revolute joint along the coronal axis.
  • Stim on every channel is cool.
  • Pigs seem like an ethical substitute for monkeys.

The mixed:

  • Neurons are not wires.
  • $2000 outpatient neurosurgery?! Will need to address the ~3% complication rate for most neurosurgery.
  • Where is the monkey data? Does it not work in monkeys? Insufficient longevity or yield? Was it strategic to not mention any monkeys, to avoid bad PR or the wrath of PETA?
    • I can't imagine getting into humans without demonstrating both safety and effectiveness on monkeys. Pigs are fine for the safety part, but monkeys are the present standard for efficacy.
  • How long do the electrodes last in pigs? What is the recording quality? How stable are the traces?
    • Judging from the commentary, assume this is an electrode material problem? What does Neuralink do if they are not significantly different in yield and longevity than the Utah array? (The other problems might well be easier than this one.)
      • That said, a thousand channels of EMG should be sufficient for some of the intended applications (below).
    • It really remains to be seen how well the brain tolerates these somewhat-large somewhat-thin electrodes, what percentage of the brain is disrupted in the process of insertion, and how much of the disruption is transient / how much is irrecoverable.
    • Pig-snout somatosensory cortex is an unusual recording location, making comparison difficult, but what was shown seemed rather correlated (?) We'd have to read an actual scientific publication to evaluate.
  • This slide is deceptive, as not all the applications are equally .. applicable. You don't need an extracellular ephys device to solve these problems that "almost everyone" will encounter over the course of their lives.
    • Memory loss -- Probably better dealt with via cellular / biological therapies, or treating the causes (stroke, infection, inflammation, neuroendocrine or neuromodulatory dysregulation)
    • Hearing loss -- Reasonable. Nice complement to improved cochlear implants too. (Maybe the Neuralink ASIC could be used for that, too).
      • With this and the other reasonable applications, best to keep in context that stereo EEG, which is fairly disruptive w/ large probes, is well tolerated in epilepsy patients. (It has unclear effect on IQ or memory, but still, the sewing machine should be less invasive.)
    • Blindness -- Reasonable. Mating the puck to a Second Sight style thin film would improve channel count dramatically, and be less invasive. Otherwise you have to sew into the calcarine fissure, destroying a fair bit of cortex in the process & possibly hitting an artery or sulcal vein.
    • Paralysis -- Absolutely. This application is well demonstrated, and the Neuralink device should be able to help SCI patients. Presumably this will occupy them for the next five years; other applications would be a distraction.
      • Being able to sew flexible electrodes into the spinal cord is a great application.
    • Depression -- Need deeper targets for this. Research to treat depression via basal ganglia stim is ongoing; no reason it could not be mated to the Neuralink puck + long electrodes.
    • Insomnia -- I guess?
    • Extreme pain -- Simpler approaches are likely better, but sure?
    • Seizures -- Yes, but note that Neuropace burned through $250M and wasn't significantly better than sham surgery. Again, likely better dealt with biologically: recombinant ion channels, glial or interneuron stem cell therapy.
    • Anxiety -- maybe? Designer drugs seem safer. Or drugs + CBT. Elon likes root causes: spotlight on the structural ills of our society.
    • Addiction -- Yes. It seems possible to rewire the brain with the right record / stim strategy, via for example a combination of DBS and cortical recording. Social restructuring is again a better root-cause fix.
    • Strokes -- No, despite best efforts, the robot causes (small) strokes.
    • Brain Damage -- Insertion of electrodes causes brain damage. Again, better dealt with via cellular (e.g. stem cells) or biological approaches.
      • This, of course, will take time as our understanding of brain development is limited; the good thing is that sufficient guidance signals remain in the adult brain, so AFAIK it's possible. From his comments, seems Alan's attitude is more aligned with this.
    • Not really bad per-se, but right panel could be better. I assume this was a design decision trade-off between working distance, NA, illumination, and mechanical constraints.
    • Despite Elon's claims, there is always bleeding when you poke electrodes that large into the cortex; the capillary bed is too dense. Let's assume Elon meant 'macro' bleeding, which is true. At least the robot avoids visible vessels.
    • Predicting joint angles for cyclical behavior is not challenging; can be done with EMG or microphonic noise correlated to some part of the gait. Hence the request for monkey BMI data.
  • Given the risk, pretty much any of the "sci-fi" applications mentioned in response to dorky twitter comments can be better provided to neurologically normal people through electronics, without the risk of brain surgery.
  • Regarding sci-fi application linguistic telepathy:
    • First, agreed, clarifying thoughts into language takes effort. This is a mostly unavoidable and largely good task. Interfacing with the external world is a vital part of cognition; shortcutting it, in my estimation, will just lead to sloppy & half-formed ideas not worth communicating. The compression of thoughts into words (as lossy as it may be) is the primary way to make them discrete enough to be meaningful to both other people and yourself.
    • Secondly: speech (or again any of the many other forms of communication) is not that much slower than cognition. If it were, we'd have much larger vocabularies, much more complicated and meaning-conveying grammar, etc. (Like Latin?) The limit is the average person's cognition and memory. I disagree with Elon's conceit.
  • Regarding visual telepathy, with sufficient recording capabilities, I see no reason why you couldn't have a video-out port on the brain. Difficult given the currently mostly unknown representation of higher-level visual cortices, but as Ian says, once you have a good oscilloscope, this can be deduced.
  • Regarding AI symbiosis @1:09:19; this logic is not entirely clear to me. AI is a tool that will automate & facilitate the production and translation of knowledge much the same way electricity etc automated & facilitated the production and transportation of physical goods. We will necessarily need to interface with it, but to the point that we are thoroughly modifying our own development & biology, those interfaces will likely be based on presently extant computer interfaces.
    • If we do start modifying the biological wiring structure of our brains, I can't imagine that there will many limits! (Outside hard metabolic limits that brain vasculature takes pains to allocate and optimize.)
    • So, I guess the central tenet might be vaguely ok if you allow that humans are presently symbiotic with cell phones. (A more realistic interpretation is that cell phones are tools, and maybe Google etc are the symbionts / parasites). This is arguably contributing to current political existential crises -- no need to look further. If you do look further, it's not clear that stabbing the brains of healthy individuals will help.
    • I find the MC to be slightly unctuous and ingratiating in a way appropriate for a video game company, but not for a medical device company. That, of course, is a judgement call & matter of taste. Yet, as this was partly a recruiting event ... you will find who you set the table for.

{1517}
ref: -2015 tags: spiking neural networks causality inference demixing date: 07-22-2020 18:13 gmt revision:1 [0] [head]

PMID-26621426 Causal Inference and Explaining Away in a Spiking Network

  • Rubén Moreno-Bote & Jan Drugowitsch
  • Use linear non-negative mixing plus noise to generate a series of sensory stimuli.
  • Pass these through a one-layer spiking or non-spiking neural network with adaptive global inhibition and adaptive reset voltage to solve this quadratic programming problem with non-negative constraints.
  • N causes, one observation: $\mu = \sum_{i=1}^{N} u_i r_i + \epsilon$ ,
    • $r_i \geq 0$ -- causes can be present or not present, but not negative.
    • cause coefficients drawn from a truncated (positive only) Gaussian.
  • linear spiking network with symmetric weight matrix $J = -U^T U - \beta I$
    • That is ... $J$ looks like a correlation matrix!
    • $U$ is M x N; columns are the mixing vectors.
    • U is known beforehand and not learned
      • That said, as a quasi-correlation matrix, it might not be so hard to learn. See ref [44].
  • Can solve this problem by minimizing the negative log-posterior function: $$ L(\mu, r) = \frac{1}{2}(\mu - Ur)^T(\mu - Ur) + \alpha 1^T r + \frac{\beta}{2} r^T r $$
    • That is, want to maximize the joint probability of the data and observations given the probabilistic model $p(\mu, r) \propto \exp(-L(\mu, r)) \prod_{i=1}^{N} H(r_i)$
    • First term quadratically penalizes difference between prediction and measurement.
    • second term: $\alpha$ is an L1 regularization; third term: $\beta$ is an L2 regularization.
  • The negative log-likelihood is then converted to an energy function (linear algebra): with $W = U^T U$ and $h = U^T \mu$ , $E(r) = \frac{1}{2} r^T W r - r^T h + \alpha 1^T r + \frac{\beta}{2} r^T r$
    • This is where they get the weight matrix $J = -W - \beta I$. If the columns of $U$ are linearly independent, then it is negative semidefinite.
  • The dynamics of individual neurons w/ global inhibition and variable reset voltage serves to minimize this energy -- hence, solve the problem. (They gloss over this derivation in the main text).
  • Next, show that a spike-based network can similarly 'relax' or descend the objective gradient to arrive at the quadratic programming solution.
    • Network is N leaky integrate and fire neurons, with variable synaptic integration kernels.
    • $\alpha$ translates then to global inhibition, and $\beta$ to a lowered reset voltage.
  • Yes, it can solve the problem .. and do so in the presence of firing noise in a finite period of time .. but a little bit meh, because the problem is not that hard, and there is no learning in the network.
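Since the spiking dynamics just descend this energy, the same non-negative quadratic program can be checked directly with projected gradient descent (a sketch with my own toy numbers, not the paper's spiking implementation; gradient taken from $E(r)$ with $W = U^T U$, $h = U^T \mu$):

```python
import numpy as np

rng = np.random.default_rng(3)

M, N = 8, 4
U = rng.random((M, N))                       # non-negative mixing vectors (columns)
r_true = np.array([1.0, 0.0, 2.0, 0.0])      # sparse, non-negative causes
mu = U @ r_true + 0.01 * rng.normal(size=M)  # one noisy observation

alpha, beta = 0.01, 0.01                     # L1 and L2 penalties
W = U.T @ U
h = U.T @ mu

r = np.zeros(N)
step = 0.01
for _ in range(20000):
    grad = W @ r - h + alpha + beta * r      # gradient of the energy E(r)
    r = np.maximum(r - step * grad, 0.0)     # project onto the constraint r_i >= 0

# r should approximately recover r_true, slightly shrunk by the penalties
```

The projection step plays the role of the rectification inherent in firing rates: a neuron that 'wants' negative rate simply stays silent, which is why the network can represent the non-negativity constraint for free.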

{1516}
ref: -2017 tags: GraphSAGE graph neural network GNN date: 07-16-2020 15:49 gmt revision:2 [1] [0] [head]

Inductive representation learning on large graphs

  • William L. Hamilton, Rex Ying, Jure Leskovec
  • Problem: given a graph where each node has a set of (possibly varied) attributes, create a 'embedding' vector at each node that describes both the node and the network that surrounds it.
  • To this point (2017) there were two ways of doing this -- through matrix factorization methods, and through graph convolutional networks.
    • The matrix factorization methods or spectral methods (similar to multi-dimensional scaling, where points are projected onto a plane to preserve a distance metric) are transductive: they work entirely within-data, and don't directly generalize to new data.
      • This is parsimonious in some sense, but doesn't work well in the real world, where datasets are constantly changing and frequently growing.
  • Their approach is similar to graph convolutional networks, where (I think) the convolution is indexed by node distances.
  • General idea: each node starts out with an embedding vector = its attribute or feature vector.
  • Then, all neighboring nodes are aggregated by sampling a fixed number of the nearest neighbors (fixed for computational reasons).
    • Aggregation can be mean aggregation, LSTM aggregation (on random permutations of the neighbor nodes), or MLP -> nonlinearity -> max-pooling. Pooling has the most wins, though all seem to work...
  • The aggregated vector is concatenated with the current node feature vector, and this is fed through a learned weighting matrix and nonlinearity to output the feature vector for the current pass.
  • Passes proceed from out-in... I think.
  • Algorithm is inspired by the Weisfeiler-Lehman Isomorphism Test, which updates neighbor counts per node to estimate if graphs are isomorphic. They do a similar thing here, only with vectors not scalars, and similarly take into account the local graph structure.
    • All the aggregator functions, and of course the nonlinearities and weighting matrices, are differentiable -- so the structure is trained in a supervised way with SGD.
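A bare-bones mean-aggregation pass might look like the following (my own sketch: real GraphSAGE samples a fixed number of neighbors, learns a weight matrix per depth by SGD, and stacks K such layers):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy graph: adjacency lists and a 4-d feature vector per node
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
feats = {n: rng.normal(size=4) for n in adj}

W = 0.3 * rng.normal(size=(8, 6))   # learned in practice; random here

def sage_layer(feats, adj, W):
    """One GraphSAGE-style pass with mean aggregation."""
    out = {}
    for n, h in feats.items():
        h_neigh = np.mean([feats[m] for m in adj[n]], axis=0)  # aggregate neighbors
        z = np.maximum(np.concatenate([h, h_neigh]) @ W, 0.0)  # concat, weight, ReLU
        out[n] = z / (np.linalg.norm(z) + 1e-8)                # normalize embedding
    return out

emb = sage_layer(feats, adj, W)   # stack K calls for a K-hop receptive field
```

Because the layer is a function of (features, adjacency) rather than of a fixed node set, it applies unchanged to nodes never seen in training -- this is the inductive property the title refers to.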

This is a well-put together paper, with some proofs of convergence etc -- but it still feels only lightly tested. As with many of these papers, could benefit from a positive control, where the generating function is known & you can see how well the algorithm discovers it.

Otherwise, the structure / algorithm feels rather intuitive; surprising to me that it was not developed before the matrix factorization methods.

Worth comparing this to word2vec embeddings, where local words are used to predict the current word & the resulting vector in the neck-down of the NN is the representation.

{1507}
ref: -2015 tags: winner take all sparsity artificial neural networks date: 03-28-2020 01:15 gmt revision:0 [head]

Winner-take-all Autoencoders

  • During training of fully connected layers, they enforce a winner-take all lifetime sparsity constraint.
    • That is: when training using mini-batches, they keep the largest k percent of activations of a given hidden unit across all samples presented in the mini-batch. The remaining activations are set to zero. The units are not competing with each other; they are competing with themselves.
    • The rest of the network is a stack of ReLU layers (upon which the sparsity constraint is applied) followed by a linear decoding layer (which makes interpretation simple).
    • They stack them via sequential training: train one layer from the output of another & not backprop the errors.
  • Works, with lower sparsity targets, also for RBMs.
  • Extended the result to WTA convnets -- here enforce both spatial and temporal (mini-batch) sparsity.
    • Spatial sparsity involves selecting the single largest hidden unit activity within each feature map. The other activities and derivatives are set to zero.
    • At test time, this sparsity constraint is released, and instead they use a 4 x 4 max-pooling layer & use that for classification or deconvolution.
  • To apply both spatial and temporal sparsity, select the highest spatial response (e.g. one unit in a 2d plane of convolutions; all have the same weights) for each feature map. Do this for every image in a mini-batch, and then apply the temporal sparsity: each feature map gets to be active exactly once, and in that time only one hidden unit (or really, one location of the input and common weights (depending on stride)) undergoes SGD.
    • Seems like it might train very slowly. Authors didn't note how many epochs were required.
  • This, too can be stacked.
  • To train on larger image sets, they first extract 48 x 48 patches & again stack...
  • Test on MNIST, SVHN, CIFAR-10 -- works ok, and well even with few labeled examples (which is consistent with their goals)
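The lifetime-sparsity constraint is a few lines of array code (a sketch with my own naming, not the authors' implementation): per hidden unit, keep only the top k% of activations across the mini-batch and zero the rest before the linear decode.

```python
import numpy as np

def lifetime_sparsity(acts, k_percent):
    """acts: (batch, units) hidden activations.
    For each unit (column), keep its top k% of values across the batch and
    zero the rest -- each unit competes with itself, not with other units."""
    batch, units = acts.shape
    k = max(1, int(round(batch * k_percent / 100.0)))
    out = np.zeros_like(acts)
    top = np.argpartition(acts, -k, axis=0)[-k:]  # row indices of k largest per column
    out[top, np.arange(units)] = acts[top, np.arange(units)]
    return out

acts = np.array([[0.1, 3.0],
                 [2.0, 0.2],
                 [0.5, 1.0],
                 [1.5, 0.0]])
sparse = lifetime_sparsity(acts, 25.0)   # keep 1 of 4 activations per unit
```

During backprop, gradients flow only through the surviving activations, so each unit is trained on the inputs it responds to most strongly.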

{1492}
ref: -2016 tags: spiking neural network self supervised learning date: 12-10-2019 03:41 gmt revision:2 [1] [0] [head]

PMID: Spiking neurons can discover predictive features by aggregate-label learning

  • This is a meandering, somewhat long-winded, and complicated paper, even for the journal Science. It's not been cited a great many times, but nonetheless is of interest.
  • The goal of the derived network is to detect fixed-pattern presynaptic sequences, and fire a prespecified number of spikes to each occurrence.
  • One key innovation is the use of a spike-threshold-surface for a 'tempotron' [12], the derivative of which is used to update the weights of synapses after trials. As the author says, spikes are hard to differentiate; the STS makes this more possible. This is hence standard gradient descent: if the neuron missed a spike then the weight is increased based on aggregate STS (for the whole trial -- hence the neuron / SGD has to perform temporal and spatial credit assignment).
    • As common, the SGD is appended with a momentum term.
  • Since STS differentiation is biologically implausible -- where would the memory lie? -- he also implements a correlational synaptic eligibility trace. The correlation is between the postsynaptic voltage and the EPSC, which seems kinda circular.
    • Unsurprisingly, it does not work as well as the SGD approximation. But does work...
  • Second innovation is the incorporation of self-supervised learning: a 'supervisory' neuron integrates the activity of a number (50) of feature-detector neurons, and reinforces them to basically all fire at the same event, WTA style. This effects an unsupervised feature detection.
  • This system can be used with sort-of lateral inhibition to reinforce multiple features. Not so dramatic -- continuous feature maps.

Editorializing a bit: I said this was interesting, but why? The first part of the paper is another form of SGD, albeit in a spiking neural network, where the gradient is harder to compute, hence is done numerically.

It's the aggregate part that is new -- pulling in repeated patterns through synaptic learning rules. Of course, to do this, the full trace of pre- and post-synaptic activity must be recorded (??) for estimating the STS (I think). An eligibility trace moves in the right direction as a biologically plausible approximation, but as always nothing matches the precision of SGD. Can the eligibility trace be amended with e.g. neuromodulators to push the performance near that of SGD?

The next step of adding self supervised singular and multiple features is perhaps toward the way the brain organizes itself -- small local feedback loops. These features annotate repeated occurrences of stimuli, or tile a continuous feature space.

Still, the fact that I haven't seen any follow-up work is suggestive...


Editorializing further, there is a limited quantity of work that a single human can do. In this paper, it's a great deal of work, no doubt, and the author offers some good intuitions for the design decisions. Yet still, the total complexity that even a very determined individual can amass is limited, and likely far below the structural complexity of a mammalian brain.

This implies that inference either must be distributed and compositional (the normal path of science), or the process of evaluating & constraining models must be significantly accelerated. This latter option is appealing, as current progress in neuroscience seems highly technology-limited -- old results become less meaningful when the next wave of measurement tools comes around, irrespective of how much work went into them. (Though: the impetus for measuring a particular thing in biology is only discovered through these 'less meaningful' studies...).

A third option, perhaps one which many theoretical neuroscientists believe in, is that there are some broader, physics-level organizing principles to the brain. Karl Friston's free energy principle is a good example of this. Perhaps at a meta level some organizing theory can be found, or more likely a set of theories; but IMHO, you'll need at least one theory per brain area, just the same as each area is morphologically, cytoarchitecturally, and topologically distinct. (There may be only a few theories of the cortex, despite all its areas, which is why so many are eager to investigate it!)

So what constitutes a theory? Well, you have to meaningfully describe what a brain region does. (Why is almost as important; how more important to the path there.) From a sensory standpoint: what information is stored? What processing gain is enacted? How does the stored information impress itself on behavior? From a motor standpoint: how are goals selected? How are the behavioral segments to attain them sequenced? Is the goal / behavior even a reasonable way of factoring the problem?

Our dual problem, building the bridge from the other direction, is perhaps easier. Or it could be that a lot more money has gone into it. Either way, much progress has been made in AI. One arm is deep function approximation / database compression for fast and organized indexing, aka deep learning. Many people are thinking about that; no need to add to the pile; anyway, as OpenAI has proven, the common solution to many problems is to simply throw more compute at it. A second is deep reinforcement learning, which is hideously sample- and path-inefficient, hence ripe for improvement. One side is motor: rather than indexing raw motor variables (LRUD in a video game, or joint torques with a robot...) you can index motor primitives, perhaps hierarchically built; likewise, for the sensory input, the model needs to infer structure about the world. This inference should decompose overwhelming sensory experience into navigable causes...

But how can we do this decomposition? The cortex is more than adept at it, but now we're at the original problem, one that the paper above purports to make a stab at.

{1418}
hide / / print
ref: -0 tags: nanophotonics interferometry neural network mach zehnder interferometer optics date: 06-13-2019 21:55 gmt revision:3 [2] [1] [0] [head]

Deep Learning with Coherent Nanophotonic Circuits

  • Used a series of Mach-Zehnder interferometers with thermoelectric phase-shift elements to realize the unitary component of individual layer weight-matrix computation.
    • Weight matrix was decomposed via SVD into U Σ V*; the unitary factors U and V* (4x4, special unitary group SU(4)) were realized by the MZI mesh, and the diagonal Σ via amplitude modulators. See figure above / original paper.
    • Note that interferometric matrix multiplication can (theoretically) be zero energy with an optical system (modulo loss).
      • In practice, you need to run the phase-modulator heaters.
  • Nonlinearity was implemented electronically after the photodetector (e.g. they had only one photonic circuit; to get multiple layers, fed activations repeatedly through it. This was a demonstration!)
  • Fed FFT'd / frequency-banded recordings of consonants through the network, achieving vowel recognition near simulated performance.
    • Claim that noise was from imperfect phase setting in the MZI + lower resolution photodiode read-out.
  • They note that the network can more easily (??) be trained via the finite difference algorithm (e.g. test out an incremental change per weight / parameter) since running the network forward is so (relatively) low-energy and fast.
    • Well, that's not totally true -- you need to update multiple weights at once in a large / deep network to descend any high-dimensional valleys.
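The SVD factorization the paper relies on is easy to verify numerically -- a sketch with numpy standing in for the photonic mesh (a real-valued 4x4 example for illustration; the physical mesh implements complex unitaries):

```python
import numpy as np

# The layer weight matrix factors as W = U @ diag(s) @ Vh, with U and Vh
# unitary (realizable as MZI meshes) and diag(s) realizable with
# amplitude modulators.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))
U, s, Vh = np.linalg.svd(W)

assert np.allclose(U @ U.T, np.eye(4))        # U is orthogonal/unitary
assert np.allclose(Vh @ Vh.T, np.eye(4))      # so is V*
assert np.allclose(U @ np.diag(s) @ Vh, W)    # exact reconstruction
```

Since any matrix admits this factorization, the only lossy step in the hardware is setting the phases and amplitudes to finite precision -- consistent with their noise analysis.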

{1463}
hide / / print
ref: -2019 tags: optical neural networks spiking phase change material learning date: 06-01-2019 19:00 gmt revision:4 [3] [2] [1] [0] [head]

All-optical spiking neurosynaptic networks with self-learning capabilities

  • J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice
  • Idea: use phase-change material to either block or pass the light in waveguides.
    • In this case, they used GST -- germanium-antimony-tellurium. This material is less reflective in the amorphous phase, which can be reached by heating to ~150C and rapidly quenching. It is more reflective in the crystalline phase, which occurs on annealing.
  • This is used for both plastic synapses (phase change driven by the intensity of the light) and the nonlinear output of optical neurons (via a ring resonator).
  • Uses optical resonators with very high Q factors to couple different wavelengths of light into the 'dendrite'.
  • Ring resonator on the output: to match the polarity of the phase-change material. Is this for reset? Storing light until trigger?
  • Were able to get correlative / Hebbian learning (which I suppose is not dissimilar from really slow photographic film, just re-branded, and most importantly with nonlinear feedback.)
  • Issue: every weight needs a different source wavelength! Hence they have not demonstrated a multi-layer network.
  • Previous paper: All-optical nonlinear activation function for photonic neural networks
    • Only 3 dB and 7 dB extinction ratios for induced transparency and inverse saturation.

{1434}
hide / / print
ref: -0 tags: convolutional neural networks audio feature extraction vocals keras tensor flow fourier date: 02-18-2019 21:40 gmt revision:3 [2] [1] [0] [head]

Audio AI: isolating vocals from stereo music using Convolutional Neural Networks

  • Ale Koretzky
  • Fairly standard CNN, but use a binary STFT mask to isolate vocals from instruments.
    • Get Fourier-type time-domain artifacts as a result; but it sounds reasonable.
    • Didn't realize it until this paper / blog post: stacked conv layers combine channels.
    • E.g. input size 513 x 25 x 16 (512 freq channels + DC, 25 time slices, 16 filter channels) into a 3x3 Conv2D -> 3*3*16 + 16 = 160 total parameters (filter weights and bias).
    • If this is followed by a second Conv2D layer of the same parameters, the layer acts as a 'normal' fully connected network in the channel dimension.
      • This means there are (3*3*16)*16 + 16 = 2320 parameters.
      • Each input channel from the previous conv layer has independent weights -- they are not shared -- whereas the spatial weights are shared.
      • Hence, same number of input channels and output channels (in this case; doesn't have to be).
      • This, naturally, falls out of spatial weight sharing, which might be obvious in retrospect; of course it doesn't make sense to share non-spatial weights.
      • See also: https://datascience.stackexchange.com/questions/17064/number-of-parameters-for-convolution-layers
  • Synthesized a large training set via a cappella youtube videos plus instrument tabs .. that looked like a lot of work!
    • Need a karaoke database here.
  • Authors wrapped this into a realtime extraction toolkit.
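The parameter counts above can be double-checked with a tiny helper (plain Python; the layer shapes follow the post):

```python
def conv2d_params(kh, kw, c_in, c_out):
    # One kh x kw kernel per (input channel, output channel) pair,
    # plus one bias per output channel.
    return kh * kw * c_in * c_out + c_out

# First Conv2D: 1 input channel (the spectrogram) -> 16 filters.
assert conv2d_params(3, 3, 1, 16) == 160
# Second Conv2D: 16 input -> 16 output channels; channels fully mixed.
assert conv2d_params(3, 3, 16, 16) == 2320
```

The `c_in * c_out` term is the point made above: spatial weights are shared across positions, but every input/output channel pair gets its own kernel, so stacked conv layers act fully connected along the channel dimension.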

{1426}
hide / / print
ref: -2019 tags: Arild Nokland local error signals backprop neural networks mnist cifar VGG date: 02-15-2019 03:15 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

Training neural networks with local error signals

  • Arild Nokland and Lars H Eidnes
  • Idea is to use one+ supplementary neural networks to measure within-batch matching loss between transformed hidden-layer output and one-hot label data to produce layer-local learning signals (gradients) for improving local representation.
  • Hence, no backprop. Error signals are all local, and inter-layer dependencies are not explicitly accounted for (! I think).
  • L_sim : given a mini-batch of hidden-layer activations H = (h_1, ..., h_n) and a one-hot encoded label matrix Y = (y_1, ..., y_n):
    • L_sim = || S(NeuralNet(H)) - S(Y) ||^2_F (the squared Frobenius norm).
    • NeuralNet() is a convolutional neural net (trained how?), 3x3, stride 1, reduces output to 2.
    • S() is the cosine similarity matrix, or correlation matrix, of a mini-batch.
  • L_pred = CrossEntropy(Y, W^T H), where W is a weight matrix, dim hidden_size x n_classes.
    • Cross-entropy is H(Y, W^T H) = -Σ_{i,j} [ Y_{i,j} log((W^T H)_{i,j}) + (1 - Y_{i,j}) log(1 - (W^T H)_{i,j}) ]
  • Sim-bio loss: replace NeuralNet() with average-pooling and standard-deviation ops. Plus, the one-hot target is replaced with a random transformation of the same target vector.
  • Overall loss: 99% L_sim, 1% L_pred.
    • Despite the unequal weighting, both seem to improve test prediction on all examples.
  • VGG like network, with dropout and cutout (blacking out square regions of input space), batch size 128.
  • Tested on all the relevant datasets: MNIST, Fashion-MNIST, Kuzushiji-MNIST, CIFAR-10, CIFAR-100, STL-10, SVHN.
  • Pretty decent review of similarity matching measures at the beginning of the paper; not extensive but puts everything in context.
    • See for example non-negative matrix factorization using Hebbian and anti-Hebbian learning in Chklovskii 2014.
  • Emphasis put on biologically realistic learning, including the use of feedback alignment {1423}
    • Yet: this was entirely supervised learning, as the labels were propagated back to each layer.
    • More likely that biology is setup to maximize available labels (not a new concept).
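The similarity-matching loss is simple to sketch in numpy; here `NeuralNet()` is replaced by the identity for brevity (an assumption for illustration -- the paper transforms H with a small conv net first):

```python
import numpy as np

def cosine_sim_matrix(X):
    # S[i, j] = cosine similarity between examples i and j of the batch.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def l_sim(H, Y):
    # Squared Frobenius distance between the batch similarity structure
    # of the activations and that of the one-hot labels.
    return np.sum((cosine_sim_matrix(H) - cosine_sim_matrix(Y)) ** 2)

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 32))          # batch of 8 hidden activations
Y = np.eye(4)[rng.integers(0, 4, 8)]  # one-hot labels, 4 classes
loss = l_sim(H, Y)                    # layer-local learning signal
```

Because the loss only compares within-batch similarity structure, its gradient stays local to the layer -- no error needs to propagate from layers above.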

{1419}
hide / / print
ref: -0 tags: diffraction terahertz 3d print ucla deep learning optical neural networks date: 02-13-2019 23:16 gmt revision:1 [0] [head]

All-optical machine learning using diffractive deep neural networks

  • Pretty clever: use 3D printed plastic as diffractive media in a 0.4 THz all-optical all-interference (some attenuation) linear convolutional multi-layer 'neural network'.
  • In the arXiv publication there are few details on how they calculated or optimized the given diffractive layers.
  • Absence of nonlinearity will limit things greatly.
  • Actual observed performance (where they had to print out the handwritten digits) was rather poor, ~60%.

{1174}
hide / / print
ref: -0 tags: Hinton google tech talk dropout deep neural networks Boltzmann date: 02-12-2019 08:03 gmt revision:2 [1] [0] [head]

Brains, sex, and machine learning -- Hinton google tech talk.

  • Hinton believes in the power of crowds -- he thinks that the brain fits many, many different models to the data, then selects afterward.
    • Random forests, as used in Predator, are an example of this: they average many simple-to-fit, simple-to-run decision trees. (Apparently what Kinect does.)
  • Talk focuses on dropout, a clever new form of model averaging where only half of the units in the hidden layers are trained for a given example.
    • He is inspired by biological evolution, where sexual reproduction often spontaneously adds or removes genes, hence individual genes or small linked genes must be self-sufficient. This equates to a 'rugged individualism' of units.
    • Likewise, dropout forces neurons to be robust to the loss of co-workers.
    • This is also great for parallelization: each unit or sub-network can be trained independently, on its own core, with little need for communication! Later, the units can be combined via genetic algorithms then re-trained.
  • Hinton then observes that sending a real value p (output of logistic function) with probability 0.5 is the same as sending 0.5 with probability p. Hence, it makes sense to try pure binary neurons, like biological neurons in the brain.
    • Indeed, if you replace the backpropagation with single bit propagation, the resulting neural network is trained more slowly and needs to be bigger, but it generalizes better.
    • Neurons (allegedly) do something very similar to this by poisson spiking. Hinton claims this is the right thing to do (rather than sending real numbers via precise spike timing) if you want to robustly fit models to data.
      • Sending stochastic spikes is a very good way to average over the large number of models fit to incoming data.
      • Yes but this really explains little in neuroscience...
  • Paper referred to in intro: Livnat, Papadimitriou and Feldman, PMID-19073912 and later by the same authors PMID-20080594
    • A mixability theory for the role of sex in evolution. -- "We define a measure that represents the ability of alleles to perform well across different combinations and, using numerical iterations within a classical population-genetic framework, show that selection in the presence of sex favors this ability in a highly robust manner"
    • Plus David MacKay's concise illustration of why you need sex, pg 269, __Information theory, inference, and learning algorithms__
      • With rather simple assumptions, asexual reproduction yields 1 bit per generation,
      • Whereas sexual reproduction yields √G bits, where G is the genome size.
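The mechanics of dropout itself are simple enough to sketch. A note on the variant: this is the modern 'inverted' formulation, which rescales surviving units at train time, rather than halving outgoing weights at test time as Hinton describes -- the two are equivalent in expectation:

```python
import numpy as np

def dropout(h, p_drop=0.5, train=True, rng=None):
    # Zero each unit with probability p_drop; rescale the survivors so
    # the expected activation is unchanged, so test time needs no rescale.
    if not train:
        return h
    rng = rng or np.random.default_rng()
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = rng.normal(size=10000)
hd = dropout(h, 0.5, rng=rng)  # ~half the units zeroed, rest doubled
```

Each mask realizes one member of the exponentially large ensemble of sub-networks being averaged over.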

{1408}
hide / / print
ref: -2018 tags: machine learning manifold deep neural net geometry regularization date: 08-29-2018 14:30 gmt revision:0 [head]

LDMNet: Low dimensional manifold regularized neural nets.

  • Synopsis of the math:
    • Fit a manifold formed from the concatenated input ‘’and’’ output variables, and use this to set the loss of (hence, train) a deep convolutional neural network.
      • Manifold is fit via point integral method.
      • This requires both SGD and variational steps -- alternate between fitting the parameters, and fitting the manifold.
      • Uses a standard deep neural network.
    • Measure the dimensionality of this manifold to regularize the network. Using an 'elegant trick', whatever that means.
  • Still, the results, in terms of error, seem not significantly better than previous work (compared to weight decay, which is weak sauce, and dropout)
    • That said, the results in terms of feature projection, figures 1 and 2, ‘’do’’ look clearly better.
    • Of course, they apply the regularizer to same image recognition / classification problems (MNIST), and this might well be better adapted to something else.
  • Not completely thorough analysis, perhaps due to space and deadlines.

{1407}
hide / / print
ref: -0 tags: tissue probe neural insertion force damage wound speed date: 06-02-2018 00:03 gmt revision:0 [head]

PMID-21896383 Effect of Insertion Speed on Tissue Response and Insertion Mechanics of a Chronically Implanted Silicon-Based Neural Probe

  • Two speeds, 10um/sec and 100um/sec, monitored out to 6 weeks.
  • Once the probes were fully advanced into the brain, we observed a decline in the compression force over time.
    • However, the compression force never decreased to zero.
    • This may indicate that chronically implanted probes experience a constant compression force when inserted in the brain, which may push the probe out of the brain over time if there is nothing to keep it in a fixed position.
      • Yet ... the Utah probe seems fine, up to many months in humans.
    • This may be a drawback for flexible probes [24], [25]. The approach to reduce tissue damage by reducing micromotion by not tethering the probe to the skull can also have this disadvantage [26]. Furthermore, the upward movement may lead to the inability of the contacts to record signals from the same neurons over long periods of time.
  • We did not observe a difference in initial insertion force, amount of dimpling, or the rest force after a 3-min rest period, but the force at the end of the insertion was significantly higher when inserting at 100 μm/s compared to 10 μm/s.
  • No significant difference in histological response observed between the two speeds.

{1406}
hide / / print
ref: -0 tags: insertion speed needle neural electrodes force damage injury cassanova date: 06-01-2018 23:51 gmt revision:0 [head]

Effect of Needle Insertion Speed on Tissue Injury, Stress, and Backflow Distribution for Convection-Enhanced Delivery in the Rat Brain

  • Tissue damage, evaluated as the size of the hole left by the needle after retraction, bleeding, and tissue fracturing, was found to increase for increasing insertion speeds and was higher within white matter regions.
    • A statistically significant difference in hole areas with respect to insertion speed was found.
  • While there are no previous needle insertion speed studies with which to directly compare, previous electrode insertion studies have noted greater brain surface dimpling and insertion forces with increasing insertion speed [43–45]. These higher deformation and force measures may indicate greater brain tissue damage which is in agreement with the present study.
  • There are also studies which have found that fast insertion of sharp tip electrodes produced less blood vessel rupture and bleeding [28,29].
    • These differences in rate dependent damage may be due to differences in tip geometry (diameter and tip) or tissue region, since these electrode studies focus mainly on the cortex [28,29].
    • In the present study, hole measurements were small in the cortex, and no substantial bleeding was observed in the cortex except when it was produced during dura mater removal.
    • Any hemorrhage was observed primarily in white matter regions of the external capsule and the CPu.

{1405}
hide / / print
ref: -0 tags: insertion speed neural electrodes force damage date: 06-01-2018 23:38 gmt revision:2 [1] [0] [head]

In vivo evaluation of needle force and friction stress during insertion at varying insertion speed into the brain

  • Targeted at CED procedures, but probably applicable elsewhere.
  • Used a blunted 32ga CA glue filled hypodermic needle.
  • Sprague-dawley rats.
  • Increased insertion speed corresponds with increased force, unlike cardiac tissue.
  • Greater surface dimpling before failure results in larger regions of deformed tissue and more energy storage before needle penetration.
  • In this study (blunt needle) dimpling increased with insertion speed, indicating that more energy was transferred over a larger region and increasing the potential for injury.
  • However, friction stresses likely decrease with insertion speed, since larger tissue holes were measured at higher insertion speeds.
    • Rapid deformation results in greater pressurization of fluid filled spaces if fluid does not have time to redistribute, making the tissue effectively stiffer. This may occur in compacted tissues below or surrounding the needle and result in increasing needle forces with increasing needle speed.

{1400}
hide / / print
ref: -0 tags: robinson pasquali carbon nanotube fiber fluidic injection dextran neural electrode date: 12-28-2017 04:20 gmt revision:0 [head]

PMID-29220192 Fluidic Microactuation of Flexible Electrodes for Neural Recording.

  • Use viscous dextran solution + PDMS channel system
  • Durotomy (of course)
  • Parylene-C insulated carbon fiber electrodes, cut with FIB or razor blade
  • Used silver ink to electrically / mechanically attach for recordings.
  • Tested in hydra, rat brain slice (reticular formation of thalamus), and in-vivo rat.
  • Electrodes, at 12um diameter, E=120GPa, are approximately 127x stiffer in bending than a 4x20um PI (E=9GPa) probe. Less damage though.
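The ~127x figure checks out against simple beam theory, assuming a solid circular fiber and the PI probe bending about its 4 um axis:

```python
import math

# Beam bending stiffness is E * I (I = second moment of area).
E_cf, d = 120e9, 12e-6          # carbon fiber modulus (Pa), diameter (m)
E_pi, w, t = 9e9, 20e-6, 4e-6   # polyimide modulus, width, thickness

I_fiber = math.pi * d**4 / 64   # solid circular cross-section
I_pi = w * t**3 / 12            # rectangle, bending about the thin axis

ratio = (E_cf * I_fiber) / (E_pi * I_pi)
assert round(ratio) == 127      # matches the figure quoted above
```

The d^4 and t^3 scaling is why small cross-sections dominate the comparison far more than the modulus ratio does.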

{1368}
hide / / print
ref: -0 tags: Lieber nanoFET review silicon neural recording intracellular date: 12-28-2017 04:04 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

PMID-23451719 Synthetic Nanoelectronic Probes for Biological Cells and Tissue

  • Review of nanowireFETS for biological sensing
  • Silicon nanowires can be grown via vapor-liquid-solid or vapor-solid-solid, 1D catalyzed growth, usually with a Au nanoparticle.
  • Interestingly, kinks can be introduced via "iterative control over nucleation and growth", "allowing the synthesis of complex 2D and 3D structures akin to organic chemistry".
    • Doping can similarly be introduced in highly localized areas.
    • This bottom-up synthesis is adaptable to flexible and organic substrates.
  • Initial tests used polylysine patterning to encourage axonal and dendritic growth across a nanoFET.
    • Positively charged amino group interacts with negative surface charge phospholipid
    • Lieber's group coats their SU-8 electrodes in poly-d-lysine as well {1352}
  • Have tested multiple configurations of the nanowire FET, including kinked, one with a SiO2 nanopipette channel for integration with the cell membrane, and one where the cell-attached fluid membrane functions as the semiconductor; see figure 4.
    • Were able to show recordings as one of the electrodes was endovascularized.
  • It's not entirely clear how stable and scalable these are; Si and SiO2 gradually dissolve in physiological fluid, and no mention was made of longevity.

{1176}
hide / / print
ref: Gilgunn-2012 tags: kozai neural recording electrodes compliant parylene flexible dissolve date: 12-28-2017 03:50 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

IEEE-6170092 (pdf) An ultra-compliant, scalable neural probe with molded biodissolvable delivery vehicle

    • Optical coherence tomography is cool.
  • Large footprint - 150 or 300um, 135um thick (13500 or 40500 um^2; c.f. tungsten needle 1963 (50um) or 490 (25um) um^2.)
  • Delivery vehicle is fabricated from biodissolvable carboxy-methylcellulose (CMC).
    • Device dissolves within three minutes of implantation.
    • Yet stiff enough to penetrate the dura of rats (with what degree of dimpling?)
    • Lithographic patterning process pretty clever, actually.
    • Parylene-X is ~ 1.1 um thick.
    • 500nm Pt is patterned via ion milling with a photoresist mask.
    • Use thin 20nm Cr etch mask for both DRIE (STS ICP) and parylene etch.
  • Probes are tiny -- 10um wide, 2.7um thick, coated in parylene-X.
  • CMC polymer tends to bend and warp due to stress -- must be clamped in a special jig.
  • No histology. Follow-up: {1399}

{1396}
hide / / print
ref: -0 tags: rogers thermal oxide barrier neural implants ECoG coating accelerated lifetime test date: 12-28-2017 02:29 gmt revision:0 [head]

PMID-27791052 Ultrathin, transferred layers of thermally grown silicon dioxide as biofluid barriers for biointegrated flexible electronic systems

  • Thermal oxide proved the superior -- by far -- water barrier for encapsulation.
    • What about the edges?
  • Many of the polymer barrier layers look like inward-rectifiers (see figure in paper).
  • Extensive simulations showing that the failure mode is from gradual dissolution of the SiO2 -> Si(OH)4.
    • Even then a 100nm layer is expected to last years.
    • Perhaps the same principle could be applied with barrier metals. Anodization or thermal oxidation to create a thick, nonporous passivation layer.
    • Should be possible with Al, Ta...

{1388}
hide / / print
ref: -0 tags: PEDOT PSS electroplate eletrodeposition neural recording michigan probe stimulation CSC date: 04-27-2017 01:36 gmt revision:1 [0] [head]

PMID-19543541 Poly(3,4-ethylenedioxythiophene) as a micro-neural interface material for electrostimulation

  • 23k on a 177um^2 site.
  • demonstrated in-vitro durable stimulation.
  • Electrodeposited at 6 nA for 900 seconds per electrode.
    • Which is high -- c.f. 100pA for 600 seconds {1356}
  • Greater CSC and lower impedance / phase than (comparable?) Ir or IrOx plating.

{747}
hide / / print
ref: Seymour-2007.09 tags: neural probe design recording Kipke Seymour parelene MEA histology PEDOT date: 02-23-2017 23:52 gmt revision:13 [12] [11] [10] [9] [8] [7] [head]

PMID-17517431[0] Neural probe design for reduced tissue encapsulation in CNS.

  • See conference proceedings too: PMID-17947102[1] Fabrication of polymer neural probes with sub-cellular features for reduced tissue encapsulation.
    • -- useful information.
  • They use SU8 - photoresist! - as a structural material. See also this.
    • They use silicon as a substrate for the fabrication, but ultimately remove it. Electrodes could be made of titanium, modulo low conductivity.
  • Did not / could not record from these devices. Only immunochemistry.
  • Polymer fibers smaller than 7um are basically invisible to the immune system. See [2]
  • Their peripheral recording site is 4 x 5um - but still not invisible to microglia. Perhaps this is because of residual insertion trauma, or movement trauma? They implanted the device flush with the cortical surface, so there should have been little cranial tethering.
  • Checked the animals 4 weeks after implantation.
  • Peripheral electrode site was better than shank location, but still not perfect. Well, any improvement is a good one...
  • No statistical difference between 4x5um lattice probes, 10x4um probes, 30x4um, and solid (100um) knife edge.
    • Think that this may be because of electrode micromotion -- the lateral edge sites are still relatively well connected to the thick, rigid shank.
  • Observed two classes of immune reactivity --
    • GFAP reactive hypertrophied astrocytes.
    • devoid of GFAP, neurofilament, and NeuN, but always OX-42 and often fibronectin and laminin positive as well.
    • Think that the second may be from meningeal cells pulled in with the stab wound.
  • Sensitivity is expected to increase with decreased surface area (but similar low impedance -- platinum black or oxidized iridium or PEDOT {1112} ).
  • Thoughts: it may be possible to put 'barbs' to relieve mechanical stress slightly after the probe location, preferably spikes that expand after implantation.
  • His thesis {1110}

____References____

[0] Seymour JP, Kipke DR, Neural probe design for reduced tissue encapsulation in CNS.Biomaterials 28:25, 3594-607 (2007 Sep)
[1] Seymour JP, Kipke DR, Fabrication of polymer neural probes with sub-cellular features for reduced tissue encapsulation.Conf Proc IEEE Eng Med Biol Soc 1no Issue 4606-9 (2006)
[2] Sanders JE, Stiles CE, Hayes CL, Tissue response to single-polymer fibers of varying diameters: evaluation of fibrous encapsulation and macrophage density.J Biomed Mater Res 52:1, 231-7 (2000 Oct)

{1376}
hide / / print
ref: -0 tags: review neural recording penn state extensive biopolymers date: 02-06-2017 23:09 gmt revision:0 [head]

PMID-24677434 A Review of Organic and Inorganic Biomaterials for Neural Interfaces

  • Not necessarily insightful, but certainly exhaustive review of all the various problems and strategies for neural interfacing.
  • Some emphasis on graphene, conductive polymers, and biological surface treatments for reducing FBR.
  • Cites 467 articles!

{1366}
hide / / print
ref: -0 tags: direct electrical stimulation neural mapping review date: 01-26-2017 02:28 gmt revision:0 [head]

PMID-22127300 Direct electrical stimulation of human cortex -- the gold standard for mapping brain functions?

  • Fairly straightforward review, shows the strengths and weaknesses / caveats of cortical surface stimulation.
  • Axon initial segment and nodes of Ranvier (which has a high concentration of Na channels) are the most excitable.
  • Stimulation of a site in the LGN of the thalamus increased the BOLD signal in the regions of V1 that received input from that site, but strongly suppressed it in the retinotopicaly matched regions of extrastriate cortex.
  • To test the hypothesis that the deactivation of extrastriate cortex might be due to synaptic inhibition of V1 projection neurons, GABA antagonists were microinjected into V1 in monkeys in experiments that combined fMRI, ephys, and microstim.
    • Ref 25. PMID-20818384
    • These findings suggest that the stimulation of cortical neurons disrupts the propagation of cortico-cortico signals after the first synapse.
    • Likely due to feedforward and recurrent inhibition.
  • Revisit the hypothesis of tight control of excitation and inhibition (e.g. in-vivo patch clamping + drugs). "The interactions between excitation and inhibition within cortical microcircuits as well as between inter-regional connections hamper the predictability of stimulation."
  • The average size of a fMRI voxel:
    • 55 ul (= 55 mm^3),
    • 5.5e6 neurons,
    • 22 - 55 billion synapses,
    • 22km dendrites (??)
    • 220km axons.
  • In the 1970s, Daniel Pollen conducted a series of studies stimulating the visual cortex of cats and humans.
    • Observed long intra-stim responses, and post-stim afterdischarges.
    • Importantly, he also observed inhibitory effects of DES on cortical responses at the stimulation site.
      • The inhibitory effect depended on the state of the neuron before stimulation.
      • High spontaneous activity + low stim strengths = inhibition;
      • low spontaneous activity + high stim strengths = excitation.
  • In the author's opinion, there is an equal or greater number of inhibitory responses to electrical microstimulation as excitatory. Only, there is a reporting bias toward the positive.
  • Many locations for paresthesias:
    • postcentral sulcus (duh)
    • opercular area inferior postcentral gyrus (e.g. superior to and facing the temporal lobe)[60]
    • posterior cingulate gyrus
    • supramarginal gyrus
    • temporal lobe, limbic and isocortical structures.

{1361}
hide / / print
ref: -0 tags: neural coding rats binary permutation retrosplenial basolateral amygdala tetrode date: 12-19-2016 07:39 gmt revision:1 [0] [head]

PMID-27895562 Brain Computation Is Organized via Power-of-Two-Based Permutation Logic.

  • Nice and interesting data, sort of kitchen sink of experiments but ...
  • At first blush it seems they have re-discovered Haar wavelets / the utility of binary decompositions.
  • Figures 9 and 10, however, suggest a discriminable difference in representation in layers 2/3 and 5/6, supporting their binary hypothesis.
    • The former targeted the mouse's large retrosplenial cortex; the latter, the hamster's prelimbic cortex.

{1360}
hide / / print
ref: -0 tags: L1 cell adhesion neural implants microglia DRG spinal cord dorsal root inflammation date: 11-19-2016 22:55 gmt revision:1 [0] [head]

PMID-22750248 In vivo effects of L1 coating on inflammation and neuronal health at the electrode-tissue interface in rat spinal cord and dorsal root ganglion.

  • Kolarcik CL, Bourbeau D, Azemi E, Rost E, Zhang L, Lagenaur CF, Weber DJ, Cui XT.
  • Quote: With L1, neurofilament staining was significantly increased while neuronal cell death decreased.
  • These results indicate that L1-modified electrodes may result in an improved chronic neural interface and will be evaluated in recording and stimulation studies.
  • Ok, so this CAM seems to mitigate against microglia / inflammation, but how was it selected vs any of the other CAMs and surface proteins? (This domain is almost completely unknown by me..)
  • Ultimate strategy likely to be a broad combination of mechanical (size, flexibility), biochemical (inflammation, cell migration), electrochemical (surface coatings) and vasculature-avoiding approaches.

{1334}
hide / / print
ref: -0 tags: micro LEDS Buzaki silicon neural probes optogenetics date: 04-18-2016 18:00 gmt revision:0 [head]

PMID-26627311 Monolithically Integrated μLEDs on Silicon Neural Probes for High-Resolution Optogenetic Studies in Behaving Animals.

  • 12 uLEDs and 32 rec sites integrated into one probe.
  • InGaN monolithically integrated LEDs.
    • Si has ~ 5x higher thermal conductivity than sapphire, allowing better heat dissipation.
    • Use quantum-well epitaxial layers, 460nm emission, 5nm Ni / 5nm Au current injection w/ 75% transmittance @ design wavelength.
      • Think the n/p GaN epitaxy is done by an outside company, NOVAGAN.
    • Efficiency near 80% -- small LEDs have fewer defects!
    • SiO2 + ALD Al2O3 passivation.
    • 70um wide, 30um thick shanks.

{1326}
hide / / print
ref: -0 tags: reactive oxygen accelerated aging neural implants date: 10-07-2015 18:45 gmt revision:1 [0] [head]

PMID-25627426 Rapid evaluation of the durability of cortical neural implants using accelerated aging with reactive oxygen species.

  • Takmakov P1, Ruda K, Scott Phillips K, Isayeva IS, Krauthamer V, Welle CG.
  • TDT W / PI implants completely failed (W etched and PI completely flaked off) after 1 week in 87C H2O2 / PBS solution. Not surprising.
    • In the Au plated W, the Au remained, the PI flaked off, while thin fragile gold tubes were left. Interesting.
  • Pt/Ir + Parylene-C microprobes seemed to fare better; one was unaffected, others experienced a drop in impedance.
  • NeuroNexus (Si3N4 insulated, probably, plus Ir recording pads) showed no visible change in the H2O2 RAA, but a strong impedance drop (thicker oxide layer?)
  • Same for blackrock / utah probe (Parylene-C), though there the parylene peeled from the Si substrate a bit.

{1306}
hide / / print
ref: -2008 tags: tantalum chromium polyimide tungsten flexible neural implants adhesion layer date: 06-24-2015 22:53 gmt revision:2 [1] [0] [head]

PMID-18640155 Characterization of flexible ECoG electrode arrays for chronic recording in awake rats.

  • Yeager JD1, Phillips DJ, Rector DM, Bahr DF.
  • We tested several different adhesion techniques including the following: gold alone without an adhesion layer, titanium-tungsten, tantalum and chromium.
  • All films were DC magnetron sputtered, without breaking vacuum between the adhesion layer (5nm) and gold conductor layer (300nm).
  • We found titanium-tungsten to be a suitable adhesion layer considering the biocompatibility requirements as well as stability and delamination resistance.
  • While chromium and tantalum produced stronger gold adhesion, concerns over biocompatibility of these materials require further testing.
    • Thought: use tantalum directly, no Ti needed.
    • Much better than Cr -- much more ductile and biocompatible.
    • Caveat: studies show that reduction to stoichiometric Ta results in delamination.
  • Ta resistivity: 1.35e-7 Ohm*m; Ti 4.2e-7; 3x better (film can be 3x thinner..)
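
The "3x thinner" point follows from sheet resistance R_s = ρ/t: for a fixed target R_s, required film thickness scales linearly with resistivity. A quick sketch using the values quoted above (which are resistivities; the 1 Ω/sq target is an arbitrary illustration):

```python
rho_ta = 1.35e-7   # Ohm*m, tantalum (from the note above)
rho_ti = 4.2e-7    # Ohm*m, titanium

def thickness_for(rho, r_s):
    # sheet resistance R_s = rho / t  ->  t = rho / R_s (meters)
    return rho / r_s

r_s = 1.0          # Ohm/square, illustrative target
ratio = thickness_for(rho_ti, r_s) / thickness_for(rho_ta, r_s)
print(ratio)       # ~3.1: a Ta trace can be ~3x thinner than Ti
```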

{875}
hide / / print
ref: Cosman-2005.12 tags: microstimulation RF pain neural tissue ICMS date: 09-04-2014 18:10 gmt revision:14 [13] [12] [11] [10] [9] [8] [head]

One of the goals/needs of the lab is to be able to stimulate and record nervous tissue at the same time. We do not have immediate access to optogenetic methods, but what about lower frequency EM stimulation? The idea: if you put the stimulation frequency outside the recording system bandwidth, there is no need to switch, and indeed no reason you can't stimulate and record at the same time.

Hence, I very briefly checked for the effects of RF stimulation on nervous tissue.

  • PMID-16336478[0] Electric and Thermal Field Effects in Tissue Around Radiofrequency Electrodes
    • Most clinical response to pulsed RF is heat ablation - the RF pulses can generate 'hot spots' c.f. continuous RF.
    • Secondary effect may be electroporation; this has not been extensively investigated.
    • Suggests that 500kHz pulses can be 'rectified' by the membrane, and hence induce sodium influx, hence neuron activation.
    • They propose that some of the clinical effects of pulsed RF stimulation are mediated through an LTD response.
  • {1297} -- original!
  • PMID-14206843[2] Electrical Stimulation of Excitable Tissue by Radio-Frequency Transmission
    • Actually not so interesting -- deals with RF powered pacemakers and bladder stimulators; both which include rectification.
  • Pulsed and Continuous Radiofrequency Current Adjacent to the Cervical Dorsal Root Ganglion of the Rat Induces Late Cellular Activity in the Dorsal Horn
    • shows that neurons are activated by pulsed RF, albeit through c-Fos staining. Electrodes were much larger in this study.
    • Also see PMID-15618777[3] associated editorial which calls for more extensive clinical, controlled testing. The editorial gives some very interesting personal details - scientists from the former Soviet bloc!
  • PMID-16310722[4] Pulsed radiofrequency applied to dorsal root ganglia causes a selective increase in ATF3 in small neurons.
    • used 20ms pulses of 500kHz.
    • Small diameter fibers are differentially activated.
    • Pulsed RF induces activating transcription factor 3 (ATF3), which has been used as an indicator of cellular stress in a variety of tissues.
    • However, there were no particular signs of axonal damage; hence the clinically effective analgesia may be reflective of a decrease in cell activity, synaptic release (or general cell health?)
    • Implies that RF may be dangerous below levels that cause tissue heating.
  • Cellphone Radiation Increases Brain Activity
    • Implies that Rf energy - here presumably in 800-900Mhz or 1800-1900Mhz - is capable of exciting nervous tissue without electroporation.
  • Random idea: I wonder if it is possible to get a more active signal out of an electrode by stimulating with RF? (simultaneously?)
  • Human auditory perception of pulsed radiofrequency energy
    • Evidence seems to support the theory that it is local slight heating -- 6e-5 C -- that creates pressure waves which can be heard by humans, guinea pigs, etc.
    • Unlikely to be direct neural stimulation.
    • High frequency hearing is required for this
      • Perhaps because it is lower harmonics of the head resonance that are heard (??).

Conclusion: worth a shot, especially given the paper by Alberts et al 1972.

  • There should be a frequency that sodium channels react to, without inducing cellular stress.
  • Must be very careful to not heat the tissue - need a power-controlled RF stimulator
    • The studies above seem to work with voltage-control (?!)
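
The bandwidth argument above can be sanity-checked with a one-pole filter model: a 500kHz stimulation tone falls far outside a spike-band recording front-end. A sketch (the 10 kHz amplifier bandwidth is an assumed, illustrative value):

```python
import math

def rc_lowpass_attenuation(f, fc):
    # magnitude response of a single-pole RC low-pass with corner fc
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

fc = 10e3        # assumed recording bandwidth, Hz
f_stim = 500e3   # RF stimulation frequency from the papers above
att = rc_lowpass_attenuation(f_stim, fc)
print(20 * math.log10(att))   # ~ -34 dB: the stim tone is largely out of band
```

A real front-end would have a steeper roll-off, so this first-order figure is a pessimistic bound.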

____References____

[0] Cosman ER Jr, Cosman ER Sr, Electric and thermal field effects in tissue around radiofrequency electrodes.Pain Med 6:6, 405-24 (2005 Nov-Dec)
[1] Alberts WW, Wright EW Jr, Feinstein B, Gleason CA, Sensory responses elicited by subcortical high frequency electrical stimulation in man.J Neurosurg 36:1, 80-2 (1972 Jan)
[2] GLENN WW, HAGEMAN JH, MAURO A, EISENBERG L, FLANIGAN S, HARVARD M, ELECTRICAL STIMULATION OF EXCITABLE TISSUE BY RADIO-FREQUENCY TRANSMISSION.Ann Surg 160no Issue 338-50 (1964 Sep)
[3] Richebé P, Rathmell JP, Brennan TJ, Immediate early genes after pulsed radiofrequency treatment: neurobiology in need of clinical trials.Anesthesiology 102:1, 1-3 (2005 Jan)
[4] Hamann W, Abou-Sherif S, Thompson S, Hall S, Pulsed radiofrequency applied to dorsal root ganglia causes a selective increase in ATF3 in small neurons.Eur J Pain 10:2, 171-6 (2006 Feb)

{1296}
hide / / print
ref: -0 tags: physical principles of scalable neural recording marblestone date: 08-25-2014 20:21 gmt revision:0 [head]

PMID-24187539 Physical principles for scalable neural recording.

  • Marblestone AH1, Zamft BM, Maguire YG, Shapiro MG, Cybulski TR, Glaser JI, Amodei D, Stranges PB, Kalhor R, Dalrymple DA, Seo D, Alon E, Maharbiz MM, Carmena JM, Rabaey JM, Boyden ES, Church GM, Kording KP.

{1274}
hide / / print
ref: -0 tags: flexible neural probe polyimide silicon polyethylene glycol dissolvable jove livermore loren frank date: 03-05-2014 19:18 gmt revision:0 [head]

http://www.jove.com/video/50609/insertion-flexible-neural-probes-using-rigid-stiffeners-attached-with

  • details the flip-chip bonding method (clever!)
  • as well as the silicon stiffener fabrication process.

{1264}
hide / / print
ref: -0 tags: shape memory polymers neural interface thiolene date: 12-06-2013 22:55 gmt revision:0 [head]

PMID-23852172 A comparison of polymer substrates for photolithographic processing of flexible bioelectronics

  • Describe the deployment of shape-memory polymers for a neural interface
    • Thiol-ene/acrylate network (see figures)
    • Noble metals react strongly to the thiols, yielding good adhesion.
  • Cr/Au thin films.
  • Devices change modulus as they absorb water; clever!
  • Transfer by polymerization patterning of electrodes (rather than direct sputtering).
    • This + thiol adhesion still might not be sufficient to prevent micro-cracks.
  • "Neural interfaces fabricated on thiol-ene/acrylate substrates demonstrate long-term fidelity through both in vitro impedance spectroscopy and the recording of driven local field potentials for 8 weeks in the auditory cortex of laboratory rats. "
  • Impedance decreases from 1M @ 1kHz to ~ 100k over the course of 8 weeks. Is this acceptable? Seems like the insulator is degrading (increased capacitance; they do not show phase of impedance)
  • PBS uptake @ 37C:
    • PI seems to have substantial PBS uptake -- 2%
    • PDMS the lowest -- 0.22%
    • PEN (polyethylene naphthalate) -- 0.36%
    • Thiol-ene/acrylate 2.19%
  • Big problem is that during photolithographic processing all the shape-memory polymers go through Tg, and become soft/rubbery, making thin metal film adhesion difficult.
    • Wonder if you could pattern more flexible materials, e.g. carbon nanotubes (?)
  • Good paper, many useful references!

{1249}
hide / / print
ref: -0 tags: retinal ganglion cells neural encoding Farrow date: 07-31-2013 16:21 gmt revision:0 [head]

PMID-21273316 Physiological clustering of visual channels in the mouse retina

  • Anatomy predicts that mammalian retinas should have in excess of 12 physiological channels, each encoding a specific aspect of the visual scene.
  • Although several channels have been correlated with morphological cell types, the number of morphological types generally exceeds the known physiological types.
  • Here, we attempted to sort the ganglion cells of the mouse retina purely on a physiological basis.
  • Result: The optimal partition was the 12-cluster solution of the Fuzzy Gustafson-Kessel algorithm.
    • This might be useful elsewhere ...
  • Farrow Lab is responsible for the 11,011 electrode array.

{1241}
hide / / print
ref: -0 tags: parylene silicon neural recording probes date: 06-07-2013 00:15 gmt revision:4 [3] [2] [1] [0] [head]

http://thesis.library.caltech.edu/4671/1/PhDThesisFinalChanglinPang.pdf

  • Notes: Michigan probes suffer from thickness limited to <15um, hence are often not stiff enough to penetrate the pia & arachnoid.
  • Likewise, Utah arrays are fabricated through a substrate, so cannot be made longer than 1.5-2mm. Plus, they are connected with 25um gold wires, which are both rigid and require a fair bit of work. (Perhaps with a wirebond machine?)
  • SiO2 suffers from high internal stress (formed at high temperature) and tends to hydrate over time, both making it a less than ideal insulator for biological applications.
    • Silicon is slowly attacked in saline.
  • Use Cr/Au traces, and Ti/Pt electrode sites on his probes.
    • 2.5um minimum trace width.
  • Importantly, they solve the problem of parylene to silicon interconnect by simply fabricating the wires on parylene -- like ours -- and only use silicon as a structural support.
    • Silicon is roughened via XeF2 for good parylene adhesion.
      • Alas, does not survive a long-term soak -- but maybe this is useful? (page 102)
        • This too can be solved via bringing the parylene in vacuum up to melting temperature to better bond with Si.
  • Metal pads on parylene are destroyed by wedge bonding -- heat and pressure are too high!
  • Their solution is to use conductive epoxy & fan the wires out to omnetics pitch (635um) in what they call parylene-PCB-omnetics connector (PPO).
  • Plated a 5um x 5um electrode with platinum black to reduce the impedance from 1.1M to 9.2k (!!)
    • Problem is that Pt black is fragile, and may be scraped off during insertion -- see figure on page 95.
  • Probe shanks are ~ 170um x 150um, tip spade-type patterned via DRIE.
  • Quote: "To be able to sustain soaking and lifetime testing, thick parylene layers are needed for the flexible parylene cable. The total parylene thickness of our neural probes is about 13 μm which results in a long etching time. We use photoresist as a mask when etching parylene using RIE O2 plasma etching; the etching rate of parylene and photoresist in RIE is roughly 1:1. Thick photoresist (> 20 μm) with high resolution is needed. AZ 9260 thick-film photoresist is designed for the more-demanding higher-resolution thick-resist requirements. It provides high resolution with superior aspect ratios, as well as wide focus and exposure latitude and good sidewall profiles. A process of two spinning coats using AZ 9260 has been developed to make a high-resolution thick photoresist mask of about 30 μm. Figure 4-11 shows the thick photoresist on the probe tip to guarantee a sharp tip after plasma etching. The photoresist is hard baked in oven at 120 oC for 30 min; the thick photoresist needs to be carefully handled during baking to avoid thermal cracking."
  • Outlines electrolysis-based actuators ... interesting but hopefully not needed.

{781}
hide / / print
ref: Polikov-2005.1 tags: neural response glia histology immune electrodes recording 2005 Tresco Michigan microglia date: 01-29-2013 00:34 gmt revision:10 [9] [8] [7] [6] [5] [4] [head]

PMID-16198003[0] Response of brain tissue to chronically implanted neural electrodes

  • Good review (the kind where figures are taken from other papers). Nothing terribly new (upon a very cursory inspection)
  • When CNS damage severs blood vessels, microglia are indistinguishable from the blood borne, monocyte-derived macrophages that are recruited by the degranulation of platelets and the cellular release of cytokines.
  • Furthermore, microglia are known to secrete, either constitutively, or in response to pathological stimuli, neurotrophic factors that aid in neuronal survival and growth.
    • Also release cytotoxic and neurotoxic factors that can lead to neuronal death in vitro.
    • It has been suggested that the presence of insoluble materials in the brain may lead to a state of 'frustrated phagocytosis' or inability of the macrophages to remove the foreign body, resulting in persistent release of neurotoxic substances.
  • When a 10x10 array of silicon probes was implanted in feline cortex, 60% of the needle tracks showed evidence of hemorrhage and 25% showed edema upon explantation of the probes after one day (Schmidt et al 1993) {1163}
    • Although a large number of the tracks were affected, only 3-5% of the area was actually covered by hemorrhages and edema, suggesting the actual damage to blood vessels may have been relatively minor. (!!)
  • Excess fluid and cellular debris diminishes 6-8 days due to the action of activated microglia and re-absorption.
  • As testament to the transitory nature of this mechanically induced wound healing response, electrode tracks could not be found in animals after several months when the electrode was inserted and quickly removed (Yuen and Agnew 1995, Rousche et al 2001; Csicsvari et al 2003, Biran et al 2005).
  • Biran et al 2005: observed persistent ED-1 immunoreactivity around silicon microelectrode arrays implanted in rat cortex at 2 and 4 weeks following implantation; not seen in microelectrode stab wound controls.
  • On the glial scar:
    • observed in the CNS of all vertebrates, presumably to isolate damaged parts of the nervous system and maintain the integrity of the blood-brain barrier.
    • mostly composed of reactive astrocytes.
    • presumably the glial scar insulates electrodes from nearby neurons, hindering diffusion and increasing impedance.
  • On the meninges:
    • Meningeal fibroblasts, which also stain for vimentin, but not for GFAP, may migrate down the electrode shaft from the brain surface and form the early basis for the glial scar.
  • On recording quality:
    • Histological examination upon explantation revealed that every electrode with stable unit recordings had at least one large neuron near the electrode tip, while every electrode that was not able to record resolvable action potentials was explanted from a site with no large neurons nearby.
  • Perhaps the clearest example of this variability was observed in the in vivo response to plastic “mock electrodes” implanted in rabbit brain by Stensaas and Stensaas (1976) {1210} and explanted over the course of 2 years. They separated the response into three types: Type 1 was characterized by little to no gliosis with neurons adjacent to the implant, Type 2 had a reactive astrocyte zone, and Type 3 exhibited a layer of connective tissue between the reactive astrocyte layer and the implant, with neurons pushed more than 100 um away. All three responses are well documented in the literature; however this study found that the model electrodes produced all three types of reactions simultaneously, depending on where along the electrode one looked.

____References____

[0] Polikov VS, Tresco PA, Reichert WM, Response of brain tissue to chronically implanted neural electrodes.J Neurosci Methods 148:1, 1-18 (2005 Oct 15)

{1201}
hide / / print
ref: Kato-2006.01 tags: bioactive neural probes flexible parylene japan Kato microspheres date: 01-28-2013 03:57 gmt revision:1 [0] [head]

PMID-17946847[0] Preliminary study of multichannel flexible neural probes coated with hybrid biodegradable polymer.

  • Conference proceedings. a little light.
  • :-)
  • probes made of parylene-C

____References____

[0] Kato Y, Saito I, Hoshino T, Suzuki T, Mabuchi K, Preliminary study of multichannel flexible neural probes coated with hybrid biodegradable polymer.Conf Proc IEEE Eng Med Biol Soc 1no Issue 660-3 (2006)

{1177}
hide / / print
ref: -0 tags: magnetic flexible insertion japan neural recording electrodes date: 01-28-2013 03:54 gmt revision:2 [1] [0] [head]

IEEE-1196780 (pdf) 3D flexible multichannel neural probe array

  • Shoji Takeuchi1, Takafumi Suzuki2, Kunihiko Mabuchi2 and Hiroyuki Fujita
  • wild -- they use a magnetic field to make the electrodes stand up!
  • Electrodes released with DRIE, as with Michigan probes.
  • As with many other electrodes, pretty high electrical impedance - 1.5M @ 1kHz.
    • 20x20um recording sites on 10um parylene.
  • Could push these into a rat and record extracellular APs, but nothing quantitative, no histology either.
  • Used a PEG coating to make them stiff enough to insert into the ctx (phantom in IEEE conference proceedings.)

{895}
hide / / print
ref: XindongLiu-2006.03 tags: neural recording electrodes stability cat parlene McCreery MEA date: 01-28-2013 02:50 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

IEEE-1605268 (pdf) Evaluation of the Stability of Intracortical Microelectrode Arrays

  • 35-50um Ir electrodes, electrolytically sharpened at a 10 deg angle, with a 5um blunted tip.
  • Electrodes coated in parylene, and exposed at the tip with an excimer laser. Surface area of tip ~500um^2.
  • Sorted based on features (duration, pk-pk, ratio of + to -, ratio of + time to - time), followed by a demixing matrix (PCA?)
  • Did experiments in 25 cats with some task (for another paper?); got recordings for up to 800 days. Seems consistent with our results.
  • Neurons were stable (by their metrics) for up to 60 days.
  • sparse arrays showed stable recordings sooner than dense arrays, perhaps because they are larger and more quickly become attached to the dura.
  • Electrodes were always unstable for the first 2-3 months. Stability index is as high as 30-40 days.
  • Average electrode yield was ~ 25%.
  • no histology.

____References____

Liu X, McCreery DB, Bullara LA, Agnew WF, Evaluation of the stability of intracortical microelectrode arrays.IEEE Trans Neural Syst Rehabil Eng 14:1, 91-100 (2006)

{1195}
hide / / print
ref: Stevenson-2011.02 tags: Kording neural recording doubling northwestern chicago date: 01-28-2013 00:12 gmt revision:1 [0] [head]

PMID-21270781[0] How advances in neural recording affect data analysis.

  • Number of recorded channels doubles about every 7 years (slowish).
  • "Emerging data analysis techniques should consider both the computational costs and the potential for more accurate models associated with this exponential growth of the number of recorded neurons."
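
The doubling law reads as a simple exponential in time; a sketch (the 100-channels-in-2010 anchor is an illustrative assumption, not a figure from the paper):

```python
def channels(year, base_year=2010, base_channels=100, doubling_years=7.0):
    # Stevenson & Kording's empirical scaling: recorded channels double
    # every ~7 years. The anchor (100 ch @ 2010) is illustrative only.
    return base_channels * 2.0 ** ((year - base_year) / doubling_years)

print(channels(2010))  # 100.0
print(channels(2024))  # two doublings later: 400.0
```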

____References____

[0] Stevenson IH, Kording KP, How advances in neural recording affect data analysis.Nat Neurosci 14:2, 139-42 (2011 Feb)

{1178}
hide / / print
ref: -0 tags: parylene flexible neural recording drug delivery microfluidics 2012 inserter needle release date: 01-02-2013 22:41 gmt revision:1 [0] [head]

PMID-23160191 Novel flexible Parylene neural probe with 3D sheath structure for enhancing tissue integration

  • They seem to think that drugs are critical for success: "These features will enhance tissue integration and improve recording quality towards realizing reliable chronic neural interfaces."
  • Similar to Kennedy: "The sheath structure allows for ingrowth of neural processes leading to improved tissue/probe integration post implantation." 8 electrodes, 4 on the cone interior, 4 on the exterior.
    • opening is 50um at tip, 300 um at base.
  • Used a PEEK-stiffened parylene ZIF connection.
  • Only tested in agarose, but it did properly release from the inserter needle.
  • I wonder if we could use a similar technique..
  • "Lab on a chip" journal (Royal society of Chemistry). nice.

{1187}
hide / / print
ref: -0 tags: neural recording topologies circuits operational transconductance amplifiers date: 01-02-2013 20:00 gmt revision:0 [head]

PMID-22163863 Recent advances in neural recording microsystems.

  • Decent review. Has some depth on the critical first step of amplification.

{1184}
hide / / print
ref: -0 tags: optical neural recording photon induced electron transfer date: 01-02-2013 04:25 gmt revision:2 [1] [0] [head]

PMID-22308458 Optically monitoring voltage in neurons by photo-induced electron transfer through molecular wires.

  • Photoinduced electron transfer.
    • About what you would think -- a photon bumps an electron into a higher orbital, and this electron can be donated to another group or drop back down & fluoresce a photon.
  • Good sensitivity: ΔF/F of 20-27% per 100mV, fast kinetics.
  • Not presently genetically targetable.
  • Makes sense in terms of energy: "A 100-mV depolarization changes the PeT driving force by 0.05 eV (one electron × half of 100-mV potential, or 0.05 V). Because PeT is a thermally controlled process, the value of 0.05 eV is large relative to the value of kT at 300 K (0.026 eV), yielding a large dynamic range between the rates of PeT at resting and depolarized potentials."
  • Why electrochromic dyes have plateaued:
    • "In contrast, electrochromic dyes have smaller delta G values, 0.003 (46) to 0.02 (47) eV, and larger comparison energies. Because the interaction is a photochemically controlled process, the energy of the exciting photon is the comparison energy, which is 1.5–2 eV for dyes in the blue-to-green region of the spectrum. Therefore, PeT and FRET dyes have large changes in energy versus their comparison energy (0.05 eV vs. 0.026 eV), giving high sensitivities; electrochromic dyes have small changes compared with the excitation photon (0.003–0.02 eV vs. 2 eV), producing low voltage sensitivity."
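
The energy comparison in these quotes is a two-line calculation (a sketch; kB is the standard Boltzmann constant, the eV figures are the ones quoted above):

```python
K_B_EV = 8.617e-5          # Boltzmann constant, eV/K

kT = K_B_EV * 300.0        # thermal comparison energy at 300 K, ~0.026 eV
pet_shift = 0.05           # eV: one electron x half of a 100 mV depolarization
echromic_shift = 0.02      # eV: upper end quoted for electrochromic dyes
photon = 2.0               # eV: blue-green excitation photon

print(pet_shift / kT)            # ~1.9: large relative to kT -> sensitive
print(echromic_shift / photon)   # 0.01: tiny relative to the photon energy
```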

{1183}
hide / / print
ref: -0 tags: optical imaging neural recording diamond magnetic date: 01-02-2013 03:44 gmt revision:0 [head]

PMID-22574249 High spatial and temporal resolution wide-field imaging of neuron activity using quantum NV-diamond.

  • yikes: In this work we consider a fundamentally new form of wide-field imaging for neuronal networks based on the nanoscale magnetic field sensing properties of optically active spins in a diamond substrate.
  • Cultured neurons.
  • NV = nitrogen-vacancy defect centers.
    • "The NV centre is a remarkable optical defect in diamond which allows discrimination of its magnetic sublevels through its fluorescence under illumination. "
    • We show that the NV detection system is able to non-invasively capture the transmembrane potential activity in a series of near real-time images, with spatial resolution at the level of the individual neural compartments.
  • Did not actually perform neural measurements -- used a 10um microwire with mA of current running through it.
    • I would imagine that actual neurons have far less current!

{1181}
hide / / print
ref: -0 tags: neural imaging recording shot noise redshirt date: 01-02-2013 02:20 gmt revision:0 [head]

http://www.redshirtimaging.com/redshirt_neuro/neuro_lib_2.htm

  • Shot Noise: The limit of accuracy with which light can be measured is set by the shot noise arising from the statistical nature of photon emission and detection.
    • If an ideal light source emits an average of N photons/ms, the RMS deviation in the number emitted is √N.
    • At high intensities the ratio N/√N = √N is large and thus small changes in intensity can be detected. For example, at 10^10 photons/ms a fractional intensity change of 0.1% can be measured with a signal-to-noise ratio of 100.
    • On the other hand, at low intensities this ratio of intensity divided by noise is small and only large signals can be detected. For example, at 10^4 photons/msec the same fractional change of 0.1% can be measured with a signal-to-noise ratio of 1 only after averaging 100 trials.
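
The SNR arithmetic above can be sketched directly (the photon counts and the 0.1% fractional change are the text's own examples):

```python
import math

def shot_noise_snr(photons_per_ms, fractional_change, n_trials=1):
    # signal = dF * N photons; shot noise = sqrt(N) per trial;
    # averaging n_trials improves SNR by sqrt(n_trials)
    n = photons_per_ms
    return fractional_change * n / math.sqrt(n) * math.sqrt(n_trials)

print(shot_noise_snr(1e10, 1e-3))       # 100.0
print(shot_noise_snr(1e4, 1e-3, 100))   # 1.0 -- only after 100 trials
```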

{1179}
hide / / print
ref: -0 tags: optical coherence tomography neural recording squid voltage sensitive dyes review date: 12-23-2012 21:00 gmt revision:4 [3] [2] [1] [0] [head]

PMID-20844600 Detection of Neural Action Potentials Using Optical Coherence Tomography: Intensity and Phase Measurements with and without Dyes.

  • Optical methods of recording have been investigated since the 1940's:
    • During action potential (AP) propagation in neural tissue light scattering, absorption, birefringence, fluorescence, and volume changes have been reported (Cohen, 1973).
  • OCT is reflection-based, not transmission: illuminate and measure from the same side.
    • Here they use spectral domain OCT, where the mirror is not scanned; rather SD-OCT uses a spectrometer to record interference of back-scattered light from all depth points simultaneously (Fercher et al., 1995).
    • Use of a spectrometer allows imaging of an axial line within 10-50us, sufficient for imaging action potentials.
    • SD-OCT, due to some underlying mathematics which I can't quite grok atm, can resolve/annul common-mode phase noise for high temporal and Δphase measurement (high sensitivity).
      • This equates to "microsecond temporal resolution and sub-nanometer optical path length resolution".
  • OCT is generally (intially?) used for in-vivo imaging of retinas, in humans and other animals.
  • They present new data for depth-localization of neural activity in squid giant axons (SGA) stained with a voltage-sensitive near-infrared dye.
    • Note: averaged over 250 sweeps.
  • ΔPhase >> ΔIntensity -- figure 4 in the paper.
  • Use of voltage-sensitive dyes improves the resolution of ΔI, but not dramatically --
    • And Δphase is still a bit delayed.
    • Electrical recording is the control.
      • It will take significant technology development before optical methods exceed electrical methods...
  • Looks pretty preliminary. However, OCT can image 1-2mm deep in transparent tissue, which is exceptional.
  • Will have to read their explanation of OCT.
  • Used in a squid giant axon prep. 2010, wonder if anything new has been done (in vivo?).
  • Claim that progress is hampered by limited understanding of how these Δphase signals arise.

{1180}
hide / / print
ref: -0 tags: optical coherence tomography neural recording aplysia date: 12-23-2012 09:12 gmt revision:2 [1] [0] [head]

PMID-19654752 Detecting intrinsic scattering changes correlated to neuron action potentials using optical coherence imaging.

  • Aplysia, intrinsic imaging of scattering change following electrical stimulation.
    • Why did it take so long for them to get this paper out.. ?
  • Nicolelis first cited author.
  • Quality of recording not necessarily high.
  • quote: "Typical transverse resolutions in OCT (10-20um) are likely insufficient to identify smaller mammalian neurons that are often studied in neuroscience."
    • Solution: optical coherence microscopy (OCM), where a higher NA lens focuses the light to a smaller spot.
    • Expense: shorter depth-of-field.
  • Why does this work? "One mechanism of these optical signals is believed to be a realignment of charged membrane proteins in response to voltage change [6].
  • A delay of roughly 70ms was observed between the change in membrane voltage and the change in scattering intensity.
    • That's slow! Might be due to conduction velocity in Aplysia.
  • SNR of scattering measurement not too high -- the neurons are alive, afterall, and their normal biological processes cause scattering changes.
    • Killing the neurons with KCl dramatically decreased the variance of scattering, consistent with this hypothesis.
  • Birefringence: "Changes in the birefringence of nerves due to electrical activity have been shown to be an order of magnitude larger than scattering intensity changes" PMID-5649693

{1175}
hide / / print
ref: -0 tags: flexible polymer neural probes compliant MIT EPFL 2008 date: 12-22-2012 01:28 gmt revision:0 [head]

Demonstration of cortical recording using novel flexible polymer neural probes

  • Two layer platinum process minimizes probe size -- nice. Might be useful for our purposes.
  • used electrochemical etching to release the lithographically patterned devices from the sacrificial aluminum layer.
  • Impedance looks pretty high -- 500k at 1kHz.
  • They talk about PCA as though it's unusual to them (?)
  • Histology uncontrolled and un-quantitative.

{1164}
hide / / print
ref: -0 tags: neural recording McGill Musallam electrodes date: 07-12-2012 22:53 gmt revision:0 [head]

http://www.mdpi.com/1424-8220/8/10/6704/pdf NeuroMEMS: Neuro Probe Microtechnologies

  • Good review (as of 2008) of the many different approaches for nervous system recording.

{763}
hide / / print
ref: work-2999 tags: autocorrelation poisson process test neural data ISI synchrony DBS date: 02-16-2012 17:53 gmt revision:5 [4] [3] [2] [1] [0] [head]

I recently wrote a matlab script to measure & plot the autocorrelation of a spike train; to test it, I generated a series of timestamps from a homogeneous Poisson process:

function [x, isi] = homopoisson(T, rate)
% function [x, isi] = homopoisson(T, rate)
% generate an instance of a homogeneous Poisson point process, unbinned.
% T: duration in seconds; rate: spikes/sec.
% x is the timestamps, isi is the intervals between them.
% ('T' rather than 'length' avoids shadowing the builtin length().)

num = ceil(T * rate * 3); % draw ~3x the expected count, then truncate
isi = -(1/rate) .* log(1 - rand(num, 1)); % inverse-CDF exponential sampling
x = cumsum(isi);
% discard everything past the first timestamp exceeding T
index = find(x > T, 1);
if ~isempty(index)
	x = x(1:index-1, 1);
	isi = isi(1:index-1, 1);
end

The autocorrelation of a Poisson process is, as it should be, flat:

Above:

  • Red lines are the autocorrelations estimated from shuffled timestamps (e.g. measure the ISIs - interspike intervals - shuffle these, and take the cumsum to generate a new series of timestamps). Hence, red lines are a type of control.
  • Blue lines are the autocorrelations estimated from segments of the full timestamp series; they show how stable the autocorrelation is over the recording.
  • Black line is the actual autocorrelation estimated from the full timestamp series.

The problem with my recordings is that there is generally high long-range correlation, correlation which is destroyed by shuffling.

Above is a plot of 1/isi for a noise channel with very high mean 'firing rate' (> 100Hz) in blue. Behind it, in red, is 1/shuffled isi. Noise and changes in the experimental setup (bad!) make the channel very non-stationary.

Above is the autocorrelation plotted in the same way as figure 1. Normally, the firing rate is binned at 100Hz and high-pass filtered at 0.005Hz so that long-range correlation is removed, but I turned this off for the plot. Note that the shuffled data has a number of different offsets, primarily due to differing long-range correlations / nonstationarities.

Same plot as figure 3, with highpass filtering turned on. Shuffled data still has far more local correlation - why?

The answer seems to be in the relation between individual ISIs. Shuffling the ISI order obviously does not destroy the distribution of ISIs, but it does destroy the ordering, i.e. the pair-wise correlation between isi(n) and isi(n+1). To check this, I plotted these two distributions:

-- Original log(isi(n)) vs. log(isi(n+1))

-- Shuffled log(isi_shuf(n)) vs. log(isi_shuf(n+1))

-- Close-up of log(isi(n)) vs. log(isi(n+1)) using alpha-blending for a channel that seems heavily corrupted with electro-cauterizer noise.
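That pair-wise dependence can be boiled down to one number: the lag-1 serial correlation of the ISIs. A Python sketch (the blocked rate changes are a stand-in for the slow nonstationarity in my recordings):

```python
import random

def serial_corr(isi):
    """Pearson correlation between successive intervals, isi(n) vs. isi(n+1)."""
    x, y = isi[:-1], isi[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

random.seed(1)
isi = []
for block in range(10):                  # slow nonstationarity: rate drifts in blocks
    rate = random.uniform(5.0, 100.0)
    isi += [random.expovariate(rate) for _ in range(500)]

shuf = isi[:]
random.shuffle(shuf)
# shuffling keeps the ISI *distribution* but kills the serial correlation.
```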

{1113}
hide / / print
ref: -0 tags: neural recording doubling Stevenson Kording date: 02-08-2012 04:28 gmt revision:0 [head]

PMID-21270781 How advances in neural recording affect data analysis

  • Number of channels recorded doubles every 7 years.
  • This extrapolated from the past 50 years of growth.
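The doubling claim as arithmetic (a sketch; the 100-channel anchor in 2010 is my assumption, not a figure from the paper):

```python
def channels(year, anchor_year=2010, anchor_channels=100, doubling_years=7.0):
    """Stevenson & Kording's Moore's-law-style extrapolation of channel count."""
    return anchor_channels * 2 ** ((year - anchor_year) / doubling_years)

# three doublings out: channels(2031) -> 800
```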

{891}
hide / / print
ref: Bonfanti-0 tags: wireless neural recording wireless italy date: 01-20-2012 05:30 gmt revision:3 [2] [1] [0] [head]

PMID-21096380[0] "A multi-channel low-power system-on-chip for single-unit recording and narrowband wireless transmission of neural signal."

  • Use Manchester-encoded FSK, with 20-sample spike extraction feeding 2kb RAM.
  • Features sub-threshold biased transistors on the input stage for low noise, and MOS-bipolar pseudo-resistors + 0.15pF caps as filter elements; see schematic.
  • 105uW / channel with the PA amplifier disabled.
    • Only 4uA/channel consumed in the input stage.
    • DSP consumes 400uA
    • VCO 400uA, PLL 300uA.
  • Has a brief but useful review of the other wireless neural recorders in this field -- including ultrawideband.
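Manchester coding in miniature (the line-code convention here is one common choice, not necessarily theirs; the point is the guaranteed mid-bit transition, which keeps the FSK spectrum DC-free and lets the receiver recover the clock):

```python
def manchester_encode(bits):
    """Each data bit becomes two chips with a guaranteed transition:
    0 -> (1, 0), 1 -> (0, 1)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(chips):
    """Invert the mapping; a missing transition means lost sync or a bit error."""
    bits = []
    for i in range(0, len(chips), 2):
        pair = (chips[i], chips[i + 1])
        if pair == (0, 1):
            bits.append(1)
        elif pair == (1, 0):
            bits.append(0)
        else:
            raise ValueError("no mid-bit transition: lost sync or bit error")
    return bits
```

Round-tripping any bit pattern returns it unchanged, and every encoded stream is exactly half ones, i.e. DC balanced.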

____References____

[0] Bonfanti A, Ceravolo M, Zambra G, Gusmeroli R, Spinelli AS, Lacaita AL, Angotzi GN, Baranauskas G, Fadiga L, A multi-channel low-power system-on-chip for single-unit recording and narrowband wireless transmission of neural signal.Conf Proc IEEE Eng Med Biol Soc 2010no Issue 1555-60 (2010)

{214}
hide / / print
ref: Harrison-2003.06 tags: CMOS amplifier headstage electrophysiology neural_recording low_power chopper Reid Harrison date: 01-16-2012 04:43 gmt revision:12 [11] [10] [9] [8] [7] [6] [head]

IEEE-1201998 (pdf) A low-power low-noise CMOS amplifier for neural recording applications

  • details a novel MOS-bipolar pseudoresistor element to permit amplification of low-frequency signals down to the millihertz range.
  • 80 microwatt spike amplifier in 0.16mm^2 silicon with 1.5 um CMOS, 1 microwatt EEG amplifier
  • input-referred noise of 2.2uV RMS.
  • has a nice graph comparing the power vs. noise for a number of other published designs
  • I doubt the low-frequency amplification really matters for neural recording, though certainly it matters for EEG.
    • they give an equation for the noise efficiency factor (NEF), as well as much detailed background.
    • NEF better than any prev. reported. Theoretical limit is 2.9 for this topology; they measure 4.8
  • does not compare well to Medtronic amp: http://www.eetimes.com/news/design/showArticle.jhtml?articleID=197005915
    • 2 microwatt! @ 1.8V
    • chopper-stabilized
    • not sure what they are going to use it for - the battery will be killed if it has to telemeter anything!
    • need to find the report for this.
  • tutorial on chopper-stabilized amplifiers -- they have nearly constant noise v.s. frequency, and very low input/output offset.
  • References: {1056} Single unit recording capabilities of a 100 microelectrode array. Nordhausen CT, Maynard EM, Normann RA.
  • [5] see {1041}
  • [9] {1042}
  • [12] {1043}
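The NEF they cite is Steyaert & Sansen's figure of merit: how close an amplifier gets to a single ideal BJT (NEF = 1) burning the same total current over the same bandwidth. A sketch with the constants spelled out (the example current and bandwidth below are illustrative values of mine, not numbers from the paper):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # electron charge, C

def nef(v_rms_in, i_total, bandwidth, temp=300.0):
    """Noise efficiency factor:
    NEF = Vrms,in * sqrt(2 * Itot / (pi * U_T * 4kT * BW))."""
    u_t = K_B * temp / Q_E   # thermal voltage, ~26 mV at 300 K
    return v_rms_in * math.sqrt(2.0 * i_total /
                                (math.pi * u_t * 4.0 * K_B * temp * bandwidth))

# e.g. 2.2 uV rms input-referred noise, an assumed 16 uA supply current,
# and an assumed 7.2 kHz bandwidth give an NEF of order 4.
example = nef(2.2e-6, 16e-6, 7.2e3)
```

Note how the figure scales: doubling the supply current at fixed noise and bandwidth costs a factor of sqrt(2) in NEF.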
____References____

Harrison, R.R. and Charles, C. A low-power low-noise CMOS amplifier for neural recording applications Solid-State Circuits, IEEE Journal of 38 6 958 - 965 (2003)

{814}
hide / / print
ref: Zhang-2009.02 tags: localized surface plasmon resonance nanoparticle neural recording innovative date: 01-15-2012 23:00 gmt revision:4 [3] [2] [1] [0] [head]

PMID-19199762[0] Optical Detection of Brain Cell Activity Using Plasmonic Gold Nanoparticles

  • Used 140 nm diameter, 40 nm thick gold disc nanoparticles set in a 400nm array, illuminated by 850nm diode laser light.
    • From my reading, it seems that the diameter of these nanoparticles is important, but the grid spacing is not.
  • These nanoparticles strongly scatter light, and the degree of scattering is dependent on the local index of refraction + electric field.
  • The change in scattering due to applied electric field is very small, though - ~ 3e-6 1/V in the air-capacitor setup, ~1e-3 in solution when stimulated by cultured hippocampal neurons.
  • Notably, nanoparticles are not diffraction limited - their measurement resolution is proportional to their size. Compare with voltage-sensitive dyes, which have a similar measurement signal-to-noise ratio, are diffraction limited, may be toxic, and may photobleach.

____References____

[0] Zhang J, Atay T, Nurmikko AV, Optical detection of brain cell activity using plasmonic gold nanoparticles.Nano Lett 9:2, 519-24 (2009 Feb)

{782}
hide / / print
ref: Song-2009.08 tags: wireless neural recording RF Brown laser optical Donoghue date: 01-15-2012 00:58 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

IEEE-5067358 (pdf) Wireless, Ultra Low Power, Broadband Neural Recording Microsystem

  • 16 channels.
  • Use a VCSEL (vertical cavity surface emission laser) to transmit data through the skin.
  • Nice design, and they claim to have made recordings for 1 month already.
  • One PCB, kapton substrate reinforced with alumina where needed.
  • Custom 12mW neural amplifier.

____References____

Song, Y.-K. and Borton, D.A. and Park, S. and Patterson, W.R. and Bull, C.W. and Laiwalla, F. and Mislow, J. and Simeral, J.D. and Donoghue, J.P. and Nurmikko, A.V. Active Microelectronic Neurosensor Arrays for Implantable Brain Communication Interfaces Neural Systems and Rehabilitation Engineering, IEEE Transactions on 17 4 339 -345 (2009)

{598}
hide / / print
ref: Santhanam-2007.11 tags: HermesB Shenoy continuous neural recording Utah probe flash wireless date: 01-09-2012 00:00 gmt revision:4 [3] [2] [1] [0] [head]

PMID-18018699[0] HermesB: a continuous neural recording system for freely behaving primates.

  • saved the data to compact flash. could record up to 48 hours continuously.
  • recorded from an accelerometer, too - neuron changes were associated with high head accelerations (unsurprisingly).
  • also recorded LFP, and were able to tell with some accuracy what behavioral state the monkey was in.
  • interfaces to the Utah probe
  • not an incredibly small system, judging from the photos.
  • 1600maH battery, 19 hour life @ 2/3 recording duty cycle -> current draw is 120mA, or 450mW.
    • can only record from two channels at once!
    • amplifier gain 610.
    • used ARM microcontroller ADUC2106
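The current-draw figure checks out, assuming a ~3.7 V nominal Li-ion pack (the cell voltage is my assumption, not stated above):

```python
capacity_mah = 1600.0
life_h = 19.0
duty = 2.0 / 3.0               # recording duty cycle

avg_ma = capacity_mah / life_h  # ~84 mA averaged over the whole run
active_ma = avg_ma / duty       # ~126 mA while actually recording
v_batt = 3.7                    # assumed nominal Li-ion cell voltage
power_mw = active_ma * v_batt   # ~470 mW, in line with the ~450 mW quoted
```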

____References____

{1007}
hide / / print
ref: Dethier-2011.28 tags: BMI decoder spiking neural network Kalman date: 01-06-2012 00:20 gmt revision:1 [0] [head]

IEEE-5910570 (pdf) Spiking neural network decoder for brain-machine interfaces

  • Gold standard: the Kalman filter.
  • Spiking neural network got within 1% of this standard.
  • The 'neuromorphic' approach.
  • Used Nengo, freely available neural simulator.

____References____

Dethier, J. and Gilja, V. and Nuyujukian, P. and Elassaad, S.A. and Shenoy, K.V. and Boahen, K. Neural Engineering (NER), 2011 5th International IEEE/EMBS Conference on 396 -399 (2011)

{1008}
hide / / print
ref: Fei-2011.05 tags: flash FPGA neural decoder BMI IGLOO f date: 01-06-2012 00:20 gmt revision:2 [1] [0] [head]

IEEE-5946801 (pdf) A low-power implantable neuroprocessor on nano-FPGA for Brain Machine interface applications

  • 5mW for 32 channels, 1.2V core voltage.
  • RLE using thresholding / transmission of DWT coefficients.
  • 5mm x 5mm.
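The compression scheme as I read it, in sketch form (Haar stands in for whatever wavelet they actually used; the run-length code over zeroed coefficients is my guess at the RLE step):

```python
def haar_dwt(x):
    """One level of the Haar DWT: per-pair averages (approximation)
    followed by per-pair differences (detail)."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return avg + dif

def threshold(coeffs, thr):
    """Zero everything below threshold; only large coefficients survive."""
    return [c if abs(c) >= thr else 0 for c in coeffs]

def rle(coeffs):
    """(zero_run_length, value) pairs -- only the survivors are transmitted."""
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    if run:
        out.append((run, None))   # trailing zeros
    return out

# a mostly flat trace with one spike compresses to a handful of tuples:
x = [0.0] * 8 + [5.0, -3.0] + [0.0] * 6
packed = rle(threshold(haar_dwt(x), 0.5))
```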

____References____

Fei Zhang and Aghagolzadeh, M. and Oweiss, K. Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on 1593 -1596 (2011)

{1016}
hide / / print
ref: Lilly-1958 tags: Lilly MEA original neural tuning date: 01-04-2012 02:15 gmt revision:4 [3] [2] [1] [0] [head]

bibtex: Lilly-1958 Correlations between Neurophysiological Activity in the Cortex and Short-Term Behavior in the Monkey

  • 610 channels in 'Susie'! Unable to record from all of them for lack of recording technology.
  • references the rest of his work.
  • Was able to elicit pretty dramatic and fascinating stimulation responses:
    • 'shrink' as if warding off a blow to the contralateral side of the head;
    • at an adjacent electrode we found a pattern called 'goose'; this pattern involved the whole body, and the reaction looks as if the monkey had been forcefully, mechanically stimulated per anum.
    • both were accompanied by high arousal.
  • Suggests that behavioral frequency-of-use corresponds roughly to cortical rank-area order.
  • Note that the wave velocity (as imaged by his Bavatron) in cortex can vary dramatically, from 1 m/sec to 0.1 m/sec.
    • With practice, one can see the boundaries between the 'arm' and 'leg' regions quite easily.
  • Stated our problem quite concisely: "One of the large difficulties in correlating structure, behavior, and CNS activity is the spatial problem of getting enough electrodes, and small enough electrodes, \emph{in} there with minimal injury." (This is why he was using pial electrodes.) "Still another problem is getting enough samples from each electrode per unit time, over a long enough time, to see what goes on during conditioning or learning [...] As for the problem of the investigator's absorbing the data -- if he has adequate recording techniques, he has a lot of time to work on a very short recorded part of a given monkey's life."
  • no figures :-(
  • Lilly could publish -- though he appears to have ADHD (perhaps from the LSD)
    • also see his homepage -- He died in 2001, but it's still up.
  • images/1016_1.pdf

{393}
hide / / print
ref: Sodagar-2007.06 tags: neural recording telemetry Wise Najafi mulitichannel electrophysiology Michigan ASIC date: 01-03-2012 23:07 gmt revision:4 [3] [2] [1] [0] [head]

PMID-17554826[0] A fully integrated mixed-signal neural processor for implantable multichannel cortical recording.

  • document is rich in details! looks pretty well designed, too.
  • Michigan 3-d electrodes
  • inductively powered, 2Mbps output
  • 64 channels
  • 18b/spike for 64 channels in scan mode, continuous waveforms on 2 channels in monitor mode
  • programmable analog spike detection. resolution: 5 bits.
  • no timestamps - send them out as they come in, with a clock rate fast enough so that this does not matter.
    • temporary storage in SRAM
    • time compression and buffering is somewhat complex (?)
  • only transmit threshold crossings, positive, negative, and both.
    • they do not detail how the signal is telemetered - perhaps this is for another publication.
  • fabricated chip occupies 3.5 x 2.7 mm. 0.5um process.
  • fabricated chip has a power of 200uw @ 1.8V. that's 6.4mW altogether! I need to get down to this figure! (well..)

____References____

{1004}
hide / / print
ref: Dabrowski-2003.1 tags: ASIC neural recording poland neuroplat pseudoresistor date: 01-03-2012 15:24 gmt revision:4 [3] [2] [1] [0] [head]

IEEE-1351853 (pdf) Development of integrated circuits for readout of microelectrode arrays to image neuronal activity in live retinal tissue

  • Use the Miller effect to increase capacitance for the HPF.
  • resistors are long channel PMOS 3um / 500um, biased in linear region @ 0V.
    • Transistors must be in linear region: implement gate following of input signal. By varying this gate voltage, can change the filter characteristics.
  • Amplifier looks rather clever.
  • 7uV RMS input-referred noise.

____References____

Dabrowski, W. and Grybos, P. and Hottowy, P. and Skoczen, A. and Swientek, K. and Bezayiff, N. and Grillo, A.A. and Kachiguine, S. and Litke, A.M. and Sher, A. Nuclear Science Symposium Conference Record, 2003 IEEE 2 956 - 960 Vol.2 (2003)

{366}
hide / / print
ref: Pearce-2004.01 tags: neural recording microfluidics in-vitro MEA date: 01-03-2012 06:53 gmt revision:4 [3] [2] [1] [0] [head]

PMID-17271187[0] Dynamic control of extracellular environment in in vitro neural recording systems.

  • they show how to create microfluidic channels on top of in-vitro microelectrode arrays.
  • used dorsal root ganglion cells.
  • key aspect:
    • make a thin cavity/space between two polycarbonate panes.
    • fill the cavity with liquid-phase isobornyl acrylate
    • cover the panes with a high-resolution mask
    • upon exposure to UV light the isobornyl polymerizes.
    • did this on top of a MEA-60
  • looks like they can very accurately deliver pulses and streams of fluid.

____References____

[0] Pearce T, Oakes S, Pope R, Williams J, Dynamic control of extracellular environment in in vitro neural recording systems.Conf Proc IEEE Eng Med Biol Soc 6no Issue 4045-8 (2004)

{894}
hide / / print
ref: Bonfanti-2010.09 tags: neural recording wireless manchester 2010 Italy date: 01-03-2012 01:02 gmt revision:2 [1] [0] [head]

IEEE-5619710 (pdf) A Multi-Channel Low-Power IC for Neural Spike Recording with Data Compression and Narrowband 400-MHz MC-FSK Wireless Transmission

  • Good 64-channel wireless neurochip with LNA, variable gain and filtering, spike extraction.
  • ~300uW/channel realized.
  • 2.7x3.1mm

____References____

Bonfanti, A. and Ceravolo, M. and Zambra, G. and Gusmeroli, R. and Borghi, T. and Spinelli, A.S. and Lacaita, A.L. ESSCIRC, 2010 Proceedings of the 330 -333 (2010)

{665}
hide / / print
ref: Cho-2007.03 tags: SOM self organizing maps Prinicpe neural signal reconstruction recording compression date: 01-03-2012 00:59 gmt revision:2 [1] [0] [head]

PMID-17234384[0] Self-organizing maps with dynamic learning for signal reconstruction.

  • They use a dynamically-learning self-organizing map to compress (encode) continuous neural signals so they can be sent over a wireless link. In this way, you do not have to sort and bin on the device (but this is relatively easy; it seems that their SOM is more computationally expensive than simple thresholding.) Nonetheless, it is an interesting approach.
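A toy 1-D SOM quantizer in the same spirit (Python; the codebook size, learning rate, and neighborhood are invented - the paper's dynamic-learning variant is more elaborate). Only the winning unit's index would go over the wireless link; the receiver reconstructs from its copy of the codebook:

```python
import math, random

def train_som(data, n_units=8, epochs=10, lr=0.2):
    """1-D self-organizing map: the winner and its neighbors move toward each sample."""
    lo, hi = min(data), max(data)
    w = [lo + (hi - lo) * (i + 0.5) / n_units for i in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                h = {0: 1.0, 1: 0.5}.get(abs(i - bmu), 0.0)  # neighborhood kernel
                w[i] += lr * h * (x - w[i])
    return sorted(w)

def encode(x, w):
    """Transmit only this codebook index over the link."""
    return min(range(len(w)), key=lambda i: abs(w[i] - x))

random.seed(2)
signal = [math.sin(0.02 * t) + 0.1 * random.gauss(0, 1) for t in range(2000)]
w = train_som(signal, n_units=8)
recon = [w[encode(x, w)] for x in signal]
err = sum(abs(a - b) for a, b in zip(signal, recon)) / len(signal)
```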

____References____

[0] Cho J, Paiva AR, Kim SP, Sanchez JC, Príncipe JC, Self-organizing maps with dynamic learning for signal reconstruction.Neural Netw 20:2, 274-84 (2007 Mar)

{937}
hide / / print
ref: Rizk-2009.04 tags: Rizk neural recording Wolf Nicolelis turner FPGA wireless date: 01-03-2012 00:58 gmt revision:2 [1] [0] [head]

PMID-19255459[0] A fully implantable 96-channel neural data acquisition system.

  • also performs spike detection and extraction within the body.
  • Inductively powered.
  • 960Mhz data band, link up to 2m.
  • First fully implantable system aimed at BMI; however, fully implantable low channel-count systems have already been deployed.

____References____

[0] Rizk M, Bossetti CA, Jochum TA, Callender SH, Nicolelis MA, Turner DA, Wolf PD, A fully implantable 96-channel neural data acquisition system.J Neural Eng 6:2, 026002 (2009 Apr)

{873}
hide / / print
ref: Szuts-2011.02 tags: wireless neural recording Szuts Meister TDM FM spy camera date: 01-03-2012 00:55 gmt revision:4 [3] [2] [1] [0] [head]

PMID-21240274[0] A wireless multi-channel neural amplifier for freely moving animals.

  • 60 meters!
  • 64 channels!
  • < 4uV RMS input-referred noise over the 80 Hz - 2.3 kHz bandwidth.
  • Mounted on a backpack. Seems reasonable.
  • Uses a http://www.spystuff.com/ RF transmitter for home video surveillance
    • BW: 8Mhz.
  • presently used with tetrode microdrive.
  • Use a 'neuroplat' AFE, ref [10] {1004}.
    • AC coupling, relatively narrow passband - 80 - 2.3kHz.
    • Channels are oversampled: 20kHz.
  • High power:
    • Neuroplat 165mW,
    • Transmitter 200mW,
    • Headboard circuitry 100mW,
    • Voltage regulators 180mW.
    • Total: 645mW (the four items above sum to 645). That does not match Table 1, which gives 345mW. ??
  • RX = http://ve6atv.sbszoo.com/platinum/docs/13cmRxDwg.pdf
    • rather direct TDM decoding scheme (each channel is a pulse; receiver oversamples by 6x & weights these samples; apply HP emphasis filter to individual samples. )
    • probably could get a more efficient RF encoding if they chose not to use the video link, but hey.
  • Some behavioral experiments .. not interesting (?)
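A sketch of that TDM demux step as described (the slot length, channel count, and weights below are mine; the idea is that each channel is one amplitude pulse per frame, the receiver oversamples each slot 6x, and edge samples are de-weighted because they straddle slot transitions):

```python
def tdm_mux(frames):
    """Transmitter side: each frame is one amplitude per channel,
    held on the air for 6 'oversample' ticks."""
    chips = []
    for frame in frames:
        for amp in frame:
            chips += [amp] * 6
    return chips

def tdm_demux(chips, n_ch, weights=(0.0, 0.5, 1.0, 1.0, 0.5, 0.0)):
    """Receiver side: 6x oversampling per slot, weighted average per channel."""
    wsum = sum(weights)
    frames, i = [], 0
    while i + 6 * n_ch <= len(chips):
        frame = []
        for _ in range(n_ch):
            slot = chips[i:i + 6]
            frame.append(sum(w * s for w, s in zip(weights, slot)) / wsum)
            i += 6
        frames.append(frame)
    return frames
```

On a clean channel the weighted average of a constant slot is just the amplitude, so mux/demux round-trips exactly; the weighting only matters once the slots are smeared by the (band-limited) video link.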

____References____

[0] Szuts TA, Fadeyev V, Kachiguine S, Sher A, Grivich MV, Agrochão M, Hottowy P, Dabrowski W, Lubenov EV, Siapas AG, Uchida N, Litke AM, Meister M, A wireless multi-channel neural amplifier for freely moving animals.Nat Neurosci 14:2, 263-9 (2011 Feb)

{1003}
hide / / print
ref: Ming-2009.09 tags: wireless neural recording Ghovanloo NCSU PWM date: 01-03-2012 00:55 gmt revision:3 [2] [1] [0] [head]

IEEE-5333227 (pdf) In vivo testing of a low noise 32-channel wireless neural recording system

  • 32 channels.
  • Unique feature: analog-to-time PWM; digitization circuitry is hence on the receiver.
  • Even with this, 4.9 uV rms input-referred noise 1Hz-10kHz. Good!
  • Another ASIC.
  • 5.6mW at +- 1.5V, 3.3 x 3.0 mm^2.
  • 1 bit adjustable gain; total gain 67 or 77dB.
  • analog-to-time PWM just uses rail-to-rail comparators, activated by a circulating register.
    • During each comparison, there is no digital transition anywhere on the chip, reducing substrate noise.
  • this 640kHz TDM signal feeds a VCO -> FSK or OOK modulation.
  • Custom receiver. USB.
  • Need to measure THD & input referred noise on mine.
  • -33dB crosstalk.
  • Ghovanloo seems pretty good at citing himself.
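The analog-to-time idea in miniature (all parameters invented): each sample maps to a pulse width, and the receiver 'digitizes' by counting clock ticks between edges, so the ADC effectively lives off-chip:

```python
def pwm_encode(sample, full_scale=1.0, min_ticks=16, max_ticks=16 + 255):
    """Map a normalized sample in [0, full_scale] to a pulse width in clock ticks."""
    frac = max(0.0, min(sample / full_scale, 1.0))
    return min_ticks + round(frac * (max_ticks - min_ticks))

def pwm_decode(ticks, full_scale=1.0, min_ticks=16, max_ticks=16 + 255):
    """Receiver side: measured width back to an amplitude (8-bit resolution here)."""
    return (ticks - min_ticks) / (max_ticks - min_ticks) * full_scale
```

The round trip is exact up to half an LSB of the tick span, and the transmitter never needs a comparator ladder or SAR - just a ramp and a rail-to-rail comparator, which is where the substrate-noise advantage comes from.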

____References____

Ming Yin and Seung Bae Lee and Ghovanloo, M. Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE 1608 -1611 (2009)

{1005}
hide / / print
ref: Miranda-2010.06 tags: Meng Shenoy Hermes wireless neural recording digital COTS date: 01-03-2012 00:55 gmt revision:2 [1] [0] [head]

IEEE-5471737 (pdf) HermesD: A High-Rate Long-Range Wireless Transmission System for Simultaneous Multichannel Neural Recording Applications

  • 32 channels broadband, 12 b/sample, FSK modulation of a 3.7-4.1 GHz carrier.
  • 142mW lasts 33h using two 3.6V/1200maH LiSOCl2 batteries.
  • Circularly polarized patch antenna + 13dBi circular horn antenna.
  • -83 dBm sensitivity at a BER of 10^-9.
  • Can easily be scaled up in terms of # of channels and bit rate to accommodate future systems.
  • they think that thresholding / compression / low bit-rate is dumb.
  • Cite Rizk and Obeid, and carefully review other work wrt claiming that their present system is the best (fair..)
  • [6][7] employ spike sorting -- have to check these.
    • "but the resources are usually too scarce to provide high-quality spike classification on a large number of channels simultaneously, with a reasonably low power-budget" (false!)
  • My design is smaller.
  • Use utah array.
  • possible to have 6 receivers simultaneously.
  • 3.7-4.1Ghz good choice for transmission in terms of regulation / availability.
  • Transmission in cages below 1Ghz is severely attenuated; cages are relatively transparent to anything above 4Ghz.
  • Used Intan's RHA1016.
  • Input-referred noise 3.2uV; lsb value = 1.5uV, and spike amplitudes can be 6.3mV before clipping occurs.
  • CPLD packetizer.
  • FSK built around SMV3895A from Z-communications.
    • No PLL, as this consumes power, and both the room and the animal are temperature-controlled; temp drift 0.44 MHz/C.
    • Only works for wideband systems: a 3.2Ghz signal with a b/w of only 10kHz is impractical without frequency stability mechanisms. finding a needle in a haystack..
  • Receiver and antenna use right-hand circular polarization (RHCP), which attenuates multipath.
    • The first and all odd-ordered bounce reflections arriving at the receiver have their polarization reversed, since their incidence angles are below the pseudo-Brewster angle (60-70 deg).
  • Receiver complicated to track variations in transmitter freq.
    • Use a transmission line to delay the FM signal @ IF for discrimination. (This is a noncoherent modulation technique).
    • Colpitts oscillator clock recovery. Clock storage time of about 30b!
    • Receiver sensitivity level -83 dBm.
  • set threshold at 3x RMS value of spike traces.
  • [11] Reid Harrison presents an 100 ch integrated amp with a total power consumption of only 3.5mW. {1006}
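Their detection rule (threshold at 3x the RMS of the trace) is easy to sketch; I've assumed negative-going spikes, as is usual for extracellular recordings:

```python
import math, random

def detect_spikes(trace, k=3.0):
    """Indices where the trace crosses below -k * RMS (negative-going
    threshold crossings; the polarity is my assumption)."""
    rms = math.sqrt(sum(v * v for v in trace) / len(trace))
    thr = -k * rms
    return [i for i in range(1, len(trace))
            if trace[i] <= thr < trace[i - 1]]

random.seed(3)
trace = [random.gauss(0.0, 1.0) for _ in range(5000)]
for i in (1000, 3000):                  # plant two fake spikes in the noise
    trace[i - 1], trace[i] = 0.0, -12.0
idx = detect_spikes(trace)
```

Both planted events are found; on Gaussian noise alone the 3x-RMS rule also fires a handful of times per 5000 samples, which is why thresholding is only the front end of spike sorting.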

____References____

Miranda, H. and Gilja, V. and Chestek, C.A. and Shenoy, K.V. and Meng, T.H. HermesD: A High-Rate Long-Range Wireless Transmission System for Simultaneous Multichannel Neural Recording Applications Biomedical Circuits and Systems, IEEE Transactions on 4 3 181 -191 (2010)

{1006}
hide / / print
ref: Harrison-2009.08 tags: low power ASIC wireless neural recording Reid Harrison Shenoy date: 01-03-2012 00:55 gmt revision:2 [1] [0] [head]

IEEE-5061585 (pdf) Wireless Neural Recording With Single Low-Power Integrated Circuit

  • 100 channels, with threshold spike extraction.
  • 900Mhz FSK transmit coil.
  • Inductive power and data link.

____References____

Harrison, R.R. and Kier, R.J. and Chestek, C.A. and Gilja, V. and Nuyujukian, P. and Ryu, S. and Greger, B. and Solzbacher, F. and Shenoy, K.V. Wireless Neural Recording With Single Low-Power Integrated Circuit Neural Systems and Rehabilitation Engineering, IEEE Transactions on 17 4 322 -329 (2009)

{365}
hide / / print
ref: Akin-1995.06 tags: Najafi neural recording technology micromachined digital TETS 1995 PNS schematics date: 01-01-2012 20:23 gmt revision:8 [7] [6] [5] [4] [3] [2] [head]

IEEE-717081 (pdf) An Implantable Multichannel Digital neural recording system for a micromachined sieve electrode

  • Later pub: IEEE-654942 (pdf) -- apparently putting on-chip isolated diodes is a difficult task.
  • 90mw of power @ 5V, 4x4mm of area (!!)
  • targeted for regenerated peripheral neurons grown through a micromachined silicon sieve electrode.
    • PNS nerves are deliberately severed and allowed to regrow through the sieve.
  • 8bit low-power current-mode ADC. seems like a clever design to me - though I can't really follow the operation from the description written there.
  • class e transmitter amplifier.
  • 3um BiCMOS process. (you get vertical BJTs and Zener diodes)
  • has excellent schematics, including the voltage regulator, envelope detector & ADC.
  • most of the power is dissipated in the voltage regulator (!!) - 80mW of 90mW.
  • tiny!
  • rather than using pseudoresistors, they use diode-capacitor input filter which avoids the need for chopping or off-chip hybrid components.
  • can record from any two of 32 input channels. I think the multiplexer is after the preamp - right?

____References____

Akin, T. and Najafi, K. and Bradley, R.M. Solid-State Sensors and Actuators, 1995 and Eurosensors IX.. Transducers '95. The 8th International Conference on 1 51 -54 (1995)

{663}
hide / / print
ref: Thorbergsson-2008.01 tags: recording nordic wireless neural date: 01-01-2012 19:05 gmt revision:2 [1] [0] [head]

PMID-19162894[0] Implementation of a telemetry system for neurophysiological signals.

  • used the Nordic chip with a 8051 on-board, along with an OPA348 and ADG804 multiplexer.
  • can only record one channel at a time, at only 3.7ksps.

____References____

[0] Thorbergsson PT, Garwicz M, Schouenborg J, Johansson AJ, Implementation of a telemetry system for neurophysiological signals.Conf Proc IEEE Eng Med Biol Soc 2008no Issue 1254-7 (2008)

{993}
hide / / print
ref: Sanchez-2005.06 tags: BMI Sanchez Nicolelis Wessberg recurrent neural network date: 01-01-2012 18:28 gmt revision:2 [1] [0] [head]

IEEE-1439548 (pdf) Interpreting spatial and temporal neural activity through a recurrent neural network brain-machine interface

  • Putting it here for the record.
  • Note they did a sensitivity analysis (via chain rule) of the recurrent neural network used for BMI predictions.
  • Used data (X,Y,Z) from 2 monkeys feeding.
  • Figure 6 is strange, data could be represented better.
  • Also see: IEEE-1300786 (pdf) Ascertaining the importance of neurons to develop better brain-machine interfaces, also by Justin Sanchez.

____References____

Sanchez, J.C. and Erdogmus, D. and Nicolelis, M.A.L. and Wessberg, J. and Principe, J.C. Interpreting spatial and temporal neural activity through a recurrent neural network brain-machine interface Neural Systems and Rehabilitation Engineering, IEEE Transactions on 13 2 213 -219 (2005)

{621}
hide / / print
ref: Ativanichayaphong-2008.05 tags: wireless neural recording stimulation date: 12-28-2011 21:15 gmt revision:3 [2] [1] [0] [head]

PMID-18262282[0] A combined wireless neural stimulating and recording system for study of pain processing

  • used rather simple unidirectional radio links.
  • provide schematics in the document!
  • one channel record; one-channel stim.
  • VHF bands are presently open (?) -- perhaps use them?
  • 914 MHz transmit neural, 433Mhz RX stimulus commands.

____References____

{323}
hide / / print
ref: Loewenstein-2006.1 tags: reinforcement learning operant conditioning neural networks theory date: 12-07-2011 03:36 gmt revision:4 [3] [2] [1] [0] [head]

PMID-17008410[0] Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity

  • The probability of choosing an alternative in a long sequence of repeated choices is proportional to the total reward derived from that alternative, a phenomenon known as Herrnstein's matching law.
  • We hypothesize that there are forms of synaptic plasticity driven by the covariance between reward and neural activity, and prove mathematically that matching (of choice to reward) is a generic outcome of such plasticity.
    • models of learning that are based on the covariance between reward and choice are common in economics and are used phenomenologically to explain human behavior.
  • this model can be tested experimentally by making reward contingent not on the choices, but rather directly on neural activity.
  • Maximization is shown to be a generic outcome of synaptic plasticity driven by the sum of the covariances between reward and all past neural activities.
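A toy simulation of the claim (the baited two-arm schedule, learning rate, and softmax policy are all my choices; the update is the covariance rule from the abstract, Δw ∝ (r - r̄)(activity - mean activity)). On concurrent VI-like schedules, the choice fraction should come to track the income fraction, i.e. Herrnstein matching:

```python
import math, random

random.seed(4)
bait_p = [0.10, 0.05]    # arming rates; an armed reward waits until collected
armed = [False, False]
w = [0.0, 0.0]           # action propensities
r_bar, lr = 0.0, 0.05    # running mean reward, learning rate
counts, income = [0, 0], [0.0, 0.0]

for t in range(30000):
    for i in (0, 1):
        armed[i] = armed[i] or (random.random() < bait_p[i])
    p0 = 1.0 / (1.0 + math.exp(w[1] - w[0]))   # softmax over the two actions
    a = 0 if random.random() < p0 else 1
    r = 1.0 if armed[a] else 0.0
    armed[a] = False
    probs = (p0, 1.0 - p0)
    for i in (0, 1):                            # covariance rule
        act = 1.0 if i == a else 0.0
        w[i] += lr * (r - r_bar) * (act - probs[i])
    r_bar += 0.01 * (r - r_bar)
    if t >= 10000:                              # tally after burn-in
        counts[a] += 1
        income[a] += r

choice_frac = counts[0] / sum(counts)
income_frac = income[0] / sum(income)
# matching: the fraction of choices to arm 0 tracks its fraction of total reward.
```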

____References____

{862}
hide / / print
ref: -0 tags: backpropagation cascade correlation neural networks date: 12-20-2010 06:28 gmt revision:1 [0] [head]

The Cascade-Correlation Learning Architecture

  • Much better - much more sensible, computationally cheaper, than backprop.
  • Units are added one by one; each is trained to be maximally correlated to the error of the existing, frozen neural network.
  • Uses quickprop to speed up gradient ascent learning.
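One step of the idea in sketch form (a linear fit stands in for the frozen network, random sigmoids for the candidate pool; Fahlman trains candidates by gradient ascent on the correlation, whereas here I just pick the best of a random pool):

```python
import math, random

def corr_score(cand_out, resid):
    """Cascade-correlation objective: |covariance| of a candidate unit's
    output with the residual error of the frozen network."""
    n = len(resid)
    cm, rm = sum(cand_out) / n, sum(resid) / n
    return abs(sum((c - cm) * (e - rm) for c, e in zip(cand_out, resid)))

random.seed(5)
xs = [i / 50.0 for i in range(100)]
target = [math.sin(3.0 * x) for x in xs]

# 'frozen network': a bare linear fit y = a*x + b
n = len(xs)
mx, my = sum(xs) / n, sum(target) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, target)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - a * mx
resid = [y - (a * x + b) for x, y in zip(xs, target)]
mse0 = sum(e * e for e in resid) / n

# candidate pool: random sigmoid units; keep the best-correlated one
def sigmoid_unit(wt, c):
    return [1.0 / (1.0 + math.exp(-wt * (x - c))) for x in xs]

cands = [sigmoid_unit(random.uniform(-10, 10), random.uniform(0, 2))
         for _ in range(30)]
best = max(cands, key=lambda co: corr_score(co, resid))

# wire the winner in with a least-squares output weight on the residual
bm = sum(best) / n
beta = sum((c - bm) * e for c, e in zip(best, resid)) / \
       sum((c - bm) ** 2 for c in best)
mse1 = sum((e - beta * (c - bm)) ** 2 for c, e in zip(best, resid)) / n
# adding the unit can only reduce the training error: mse1 <= mse0.
```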

{789}
hide / / print
ref: work-0 tags: emergent leabra QT neural networks GUI interface date: 10-21-2009 19:02 gmt revision:4 [3] [2] [1] [0] [head]

I've been reading Computational Explorations in Cognitive Neuroscience, and decided to try the code that comes with / is associated with the book. This used to be called "PDP+", but was re-written, and is now called Emergent. It's a rather large program - links to Qt, GSL, Coin3D, Quarter, Open Dynamics Library, and others. The GUI itself seems obtuse and too heavy; it's not clear why they need to make this so customized / paneled / tabbed. Also, it depends on relatively recent versions of each of these libraries - which made the install on my Debian Lenny system a bit of a chore (kinda like windows).

A really strange thing is that programs are stored in tree lists - woah - a natural folding editor built in! I've never seen a programming language that doesn't rely on simple text files. Not a bad idea, but still foreign to me. (But I guess programs are inherently hierarchical anyway.)

Below, a screenshot of the whole program - note they use a Coin3D window to graph things / interact with the model. The colored boxes in each network layer indicate local activations, and they update as the network is trained. I don't mind this interface, but again it seems a bit too 'heavy' for things that are inherently 2D (like 2D network activations and the output plot). It's good for seeing hierarchies, though, like the network model.

All in all it looks like something that could be more easily accomplished with some python (or ocaml), where the language itself is used for customization, and not a GUI. With this approach, you spend more time learning about how networks work, and less time programming GUIs. On the other hand, if you use this program for teaching, or if other people use it a lot, the GUI is essential for debugging your neural networks; maybe then it is worth it ...

In any case, the book is very good. I've learned about GeneRec, which uses different activation phases to compute local errors for the purposes of error-minimization, as well as the virtues of using both Hebbian and error-based learning (like GeneRec). Specifically, the authors show that error-based learning can be rather 'lazy', purely moving down the error gradient, whereas Hebbian learning can internalize some of the correlational structure of the input space. You can look at this internalization as 'weight constraint' which limits the space that error-based learning has to search. Cool idea! Inhibition also is a constraint - one which constrains the network to be sparse.

To use his/their own words:

"... given the explanation above about the network's poor generalization, it should be clear why both Hebbian learning and kWTA (k winner take all) inhibitory competition can improve generalization performance. At the most general level, they constitute additional biases that place important constraints on the learning and the development of representations. More specifically, Hebbian learning constrains the weights to represent the correlational structure of the inputs to a given unit, producing systematic weight patterns (e.g. cleanly separated clusters of strong correlations).

Inhibitory competition helps in two ways. First, it encourages individual units to specialize in representing a subset of items, thus parcelling up the task in a much cleaner and more systematic way than would occur in an otherwise unconstrained network. Second, inhibition greatly restricts the settling dynamics of the network, greatly constraining the number of states the network can settle into, and thus eliminating a large proportion of the attractors that can hijack generalization."
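The kWTA constraint mentioned in the quote is simple to state in code (this is the hard version; Emergent's actual kWTA sets an inhibition level between the k-th and (k+1)-th most excited units, which is softer):

```python
def kwta(acts, k):
    """k-winners-take-all: keep the k most active units, silence the rest."""
    if k >= len(acts):
        return list(acts)
    thr = sorted(acts, reverse=True)[k - 1]   # activation of the k-th winner
    out, kept = [], 0
    for a in acts:
        if a >= thr and kept < k:
            out.append(a)
            kept += 1
        else:
            out.append(0.0)
    return out
```

However the network's excitation moves, exactly k units stay active, which is the sparseness constraint the authors credit with pruning bad attractors.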

{783}
hide / / print
ref: Chae-2009.08 tags: wireless neural recording UWB Chinese ultra-wideband RF date: 10-12-2009 21:07 gmt revision:2 [1] [0] [head]

PMID-19435684[0] A 128-channel 6 mW wireless neural recording IC with spike feature extraction and UWB transmitter.

  • The title basically says it all.
  • Great details - all of the sub-circuits needed.
  • Really impressive work!

____References____

[0] Chae MS, Yang Z, Yuce MR, Hoang L, Liu W, A 128-channel 6 mW wireless neural recording IC with spike feature extraction and UWB transmitter.IEEE Trans Neural Syst Rehabil Eng 17:4, 312-21 (2009 Aug)

{690}
hide / / print
ref: Chapin-1999.07 tags: chapin Nicolelis BMI neural net original SUNY rat date: 09-02-2009 23:11 gmt revision:2 [1] [0] [head]

PMID-10404201 Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex.

  • Abstract: To determine whether simultaneously recorded motor cortex neurons can be used for real-time device control, rats were trained to position a robot arm to obtain water by pressing a lever. Mathematical transformations, including neural networks, converted multineuron signals into 'neuronal population functions' that accurately predicted lever trajectory. Next, these functions were electronically converted into real-time signals for robot arm control. After switching to this 'neurorobotic' mode, 4 of 6 animals (those with > 25 task-related neurons) routinely used these brain-derived signals to position the robot arm and obtain water. With continued training in neurorobotic mode, the animals' lever movement diminished or stopped. These results suggest a possible means for movement restoration in paralysis patients.
The basic idea of the experiment: the rats first controlled the water-delivery robot arm with a forelimb lever, then later learned to control it directly from cortical activity. They used an artificial neural network to decode the intended movement.
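As a hedged illustration of the decoding idea: the paper used mathematical transformations including recurrent neural networks, but the core notion of a "neuronal population function" can be sketched as a simple linear map from binned spike counts to lever position, fit here by the delta rule on synthetic data (unit count, bin count, and noise level are all made-up illustrative numbers, not from the paper):

```python
import random

random.seed(0)

# Synthetic stand-in for the recording: n_units binned spike counts that
# linearly drive lever position, plus gaussian noise.
n_units, n_bins = 8, 400
true_w = [random.uniform(-1, 1) for _ in range(n_units)]
counts = [[random.randint(0, 5) for _ in range(n_units)] for _ in range(n_bins)]
lever = [sum(wi * ci for wi, ci in zip(true_w, row)) + random.gauss(0, 0.1)
         for row in counts]

# Delta-rule fit of the population function: lever ~ w . counts
w = [0.0] * n_units
lr = 0.01
for _ in range(200):
    for row, y in zip(counts, lever):
        pred = sum(wi * ci for wi, ci in zip(w, row))
        err = y - pred
        w = [wi + lr * err * ci for wi, ci in zip(w, row)]

# mean squared error of the fitted population function
mse = sum((y - sum(wi * ci for wi, ci in zip(w, row))) ** 2
          for row, y in zip(counts, lever)) / n_bins
```

In the real experiment the transform was then run in real time on streaming neural data to position the arm; here the fit should recover the generating weights down to the noise floor.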

{776}
hide / / print
ref: work-0 tags: neural networks course date: 09-01-2009 04:24 gmt revision:0 [head]

http://www.willamette.edu/~gorr/classes/cs449/intro.html -- decent resource, good explanation of the equations associated with artificial neural networks.

{664}
hide / / print
ref: Darmanjian-2006.01 tags: wireless neural recording university Florida Principe telemetry msp430 dsp nordic date: 04-15-2009 20:56 gmt revision:1 [0] [head]

PMID-17946962[0] A reconfigurable neural signal processor (NSP) for brain machine interfaces.

  • uses a Texas Instruments TMS320VC33 200 MFLOPS (yes, floating-point) DSP,
  • a Nordic nRF24L01,
  • an MSP430F1611x as a co-processor / wireless protocol manager / bootloader,
  • an Altera EPM3128ATC100 CPLD for expansion / connection.
  • draws 450-600 mW in use (running an LMS algorithm).

____References____

[0] Darmanjian S, Cieslewski G, Morrison S, Dang B, Gugel K, Principe J, A reconfigurable neural signal processor (NSP) for brain machine interfaces.Conf Proc IEEE Eng Med Biol Soc 1no Issue 2502-5 (2006)

{364}
hide / / print
ref: Linderman-2006.01 tags: neural recording technology compact flash stanford Shenoy 2006 date: 04-15-2009 20:55 gmt revision:3 [2] [1] [0] [head]

PMID-17946450[0] An Autonomous, broadband, multi-channel neural recording system for freely behaving primates

  • goal: recording system for freely-behaving animals.
    • problems: battery life, size
    • cannot sample broadband.
    • not autonomous.
  • solution:
    • compact flash, ARM core
    • accelerometer?
    • mounted inside the monkey's skull in the dental cement.
  • specs

____References____

[0] Linderman MD, Gilja V, Santhanam G, Afshar A, Ryu S, Meng TH, Shenoy KV, An autonomous, broadband, multi-channel neural recording system for freely behaving primates.Conf Proc IEEE Eng Med Biol Soc 1no Issue 1212-5 (2006)

{724}
hide / / print
ref: Oskoei-2008.08 tags: EMG pattern analysis classification neural network date: 04-07-2009 21:10 gmt revision:2 [1] [0] [head]

  • EMG pattern analysis and classification by Neural Network
    • 1989!
    • short, simple paper. showed that 20 patterns can be accurately decoded with a backprop-trained neural network.
  • PMID-18632358 Support vector machine-based classification scheme for myoelectric control applied to upper limb.
    • myoelectric discrimination with SVM running on features in both the time and frequency domain.
    • a surface MES (myoelectric signal) is formed via the superposition of individual action potentials generated by irregular discharges of active motor units in a muscle fiber. Its amplitude, variance, energy, and frequency vary depending on contraction level.
    • Time domain features:
      • Mean absolute value (MAV)
      • root mean square (RMS)
      • waveform length (WL)
      • variance
      • zero crossings (ZC)
      • slope sign changes (SSC)
      • Willison amplitude (WAMP).
    • Frequency domain features:
      • power spectrum
      • autoregressive coefficients order 2 and 6
      • mean signal frequency
      • median signal frequency
      • good performance with just RMS + AR2 for 50 or 100 ms segments; used an SVM with an RBF kernel.
      • looks like you can get away with just time-domain metrics!!
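The time-domain features above are cheap to compute directly from a raw segment. A hedged sketch (the 0.01 thresholds for ZC and SSC are illustrative noise-rejection choices, not values from the paper):

```python
import math

def mav(x):                      # mean absolute value
    return sum(abs(v) for v in x) / len(x)

def rms(x):                      # root mean square
    return math.sqrt(sum(v * v for v in x) / len(x))

def waveform_length(x):          # cumulative point-to-point amplitude change
    return sum(abs(x[i] - x[i - 1]) for i in range(1, len(x)))

def zero_crossings(x, thresh=0.01):
    # sign change between consecutive samples, gated by a small threshold
    return sum(1 for i in range(1, len(x))
               if x[i - 1] * x[i] < 0 and abs(x[i] - x[i - 1]) >= thresh)

def slope_sign_changes(x, thresh=0.01):
    # local extrema: the slope flips sign at sample i
    return sum(1 for i in range(1, len(x) - 1)
               if (x[i] - x[i - 1]) * (x[i] - x[i + 1]) > 0
               and (abs(x[i] - x[i - 1]) >= thresh
                    or abs(x[i] - x[i + 1]) >= thresh))

# fake 100-sample "segment": a decaying oscillation standing in for EMG
segment = [math.sin(0.7 * i) * math.exp(-0.01 * i) for i in range(100)]
features = [mav(segment), rms(segment), waveform_length(segment),
            zero_crossings(segment), slope_sign_changes(segment)]
```

These five numbers (plus AR coefficients) per 50-100 ms window would then be the input vector to the SVM.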

{695}
hide / / print
ref: -0 tags: alopex machine learning artificial neural networks date: 03-09-2009 22:12 gmt revision:0 [head]

Alopex: A Correlation-Based Learning Algorithm for Feed-Forward and Recurrent Neural Networks (1994)

  • read the abstract! rather than using the gradient error estimate as in backpropagation, it uses the correlation between changes in network weights and changes in the error + gaussian noise.
    • backpropagation requires calculation of the derivatives of the transfer function from one neuron to the output. This is very non-local information.
    • one alternative is somewhat empirical: compute the derivatives wrt the weights through perturbations.
    • all these algorithms are solutions to the optimization problem: minimize an error measure, E, wrt the network weights.
  • all network weights are updated synchronously.
  • can be used to train both feedforward and recurrent networks.
  • algorithm apparently has a long history, especially in visual research.
  • the algorithm is quite simple! easy to understand.
    • use stochastic weight changes with an annealing schedule.
  • this is pre-pub: tables and figures at the end.
  • looks like it has comparable or faster convergence than backpropagation.
  • not sure how it will scale to problems with hundreds of neurons, though they did look at an encoding task with 32 outputs.
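A minimal sketch of the Alopex update on a toy error surface (in a network, w would be all weights and E the output error; the step size delta, initial temperature, and annealing window are illustrative choices, not the paper's values):

```python
import math, random

random.seed(1)

def error(w):                      # toy error surface; minimum at w_i = 2
    return sum((wi - 2.0) ** 2 for wi in w)

n = 5
w = [random.uniform(-1, 1) for _ in range(n)]
delta = 0.05                       # fixed +/- step size
T = 1.0                            # "temperature", annealed below
prev_dw = [delta] * n
prev_E = error(w)
dE = 0.0
recent = []

for step in range(2000):
    dw = []
    for i in range(n):
        # correlation between the last weight change and last error change;
        # if increasing w_i increased E, bias the next step downward
        c = prev_dw[i] * dE
        arg = max(-50.0, min(50.0, c / T))
        p = 1.0 / (1.0 + math.exp(arg))   # P(next step is +delta)
        dw.append(delta if random.random() < p else -delta)
    w = [wi + d for wi, d in zip(w, dw)]  # all weights updated synchronously
    E = error(w)
    dE = E - prev_E
    prev_E, prev_dw = E, dw
    recent.append(abs(dE))
    if (step + 1) % 100 == 0:      # anneal T toward the window's mean |C|
        T = max(1e-4, delta * sum(recent) / len(recent))
        recent = []
```

Note how non-local information never enters: each weight only sees its own last change and the global error change, which is the whole appeal versus backprop.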

{608}
hide / / print
ref: Zhu-2003.1 tags: M1 neural adaptation motor learning date: 09-24-2008 22:17 gmt revision:0 [head]

PMID-14511525 Probing changes in neural interaction during adaptation.

  • looking at the changes in the connectivity between cells during/after motor learning.
  • convert sparse spike trains to continuous firing rates, use these as input to granger causality test
  • used the Dawn Taylor monkey task, except with push-buttons.
  • perturbed the monkey's reach trajectory with a string attached to a pneumatic cylinder.
  • their data looks pretty random. 9-17 neurons recorded. learning generally involves increases in interaction.
  • sponsored by DARPA
  • not a very good paper, alas.
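The preprocessing step they describe - sparse spike trains to continuous rates - is usually done by convolving spike times with a gaussian kernel; a hedged sketch (bin width and kernel SD here are illustrative, not the paper's values; the smoothed rates would then feed the granger causality test):

```python
import math

def smooth_rate(spike_times, t_end, dt=0.01, sigma=0.05):
    """Continuous firing rate (Hz) sampled every dt seconds: each spike
    contributes a unit-area gaussian bump of width sigma."""
    n = int(t_end / dt)
    rate = [0.0] * n
    norm = 1.0 / (sigma * math.sqrt(2 * math.pi))
    for s in spike_times:
        for i in range(n):
            t = i * dt
            rate[i] += norm * math.exp(-0.5 * ((t - s) / sigma) ** 2)
    return rate

# toy spike train with a burst around 0.42-0.44 s
spikes = [0.12, 0.15, 0.42, 0.43, 0.44, 0.80]
r = smooth_rate(spikes, 1.0)
```

Sanity check on the kernel: integrating the rate over the whole window should recover the spike count, and the rate should peak at the burst.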

{475}
hide / / print
ref: bookmark-0 tags: neural recording companies electrodes wireless bioamplifier germany date: 10-22-2007 01:39 gmt revision:2 [1] [0] [head]

http://www.neuroconnex.com/ -- looks like they have some excellent products, but not sure how to purchase them.

  • links to specification sheets are broken.
  • they have a closed-loop stimulator for treatment of Parkinsons etc. cool!
also see the Mega biomonitor (14-bit resolution).

{467}
hide / / print
ref: bookmark-0 tags: Saab water injection neuralnet 900 turbo date: 10-15-2007 16:09 gmt revision:2 [1] [0] [head]

Self-learning fuzzy neural network with optimal on-line learning for water injection control of a turbocharged automobile.

  • for a 1994 - 1998 Saab 900 SE (like mine).
  • also has details on the trionic 5 ECU, including how saab detects knock through pre-ignition ionization measurement, and how it subsequently modifies ignition timing & boost pressure.
  • images/467_1.pdf

{302}
hide / / print
ref: Wahnoun-2004.01 tags: BMI population_vector neural selection Brown 3D arizona ASU date: 04-06-2007 23:28 gmt revision:3 [2] [1] [0] [head]

PMID-17271333[0] Neuron selection and visual training for population vector based cortical control.

  • M1 and PMd (not visual areas), bilateral.
  • a series of experiments designed to parameterize a cortical control algorithm without an animal having to move its arm.
  • a highly motivated animal observes as the computer drives a cursor move towards a set of targets once each in a center-out task.
    • how motivated? how did they do this? (primate working for its daily water rations)
  • I do not think this is the way to go. it is better to stimulate in the proper afferents and let the brain learn the control algorithm, the same as when a baby learns to crawl.
    • however, the method described here may be a good way to bootstrap, definitely.
  • want to generate an algorithm that 'tunes-up' control with a few tens of neurons, not hundreds as Miguel estimates.
  • estimate the tuning from 12 seconds of visual following (1.5 seconds per each of the 8 corners of a cube)
  • optimize over the subset of neurons (by dropping them) & computing the individual residual error.
  • their paper seems to be more of an analysis of this neuron-removal method.
  • neurons seem to maintain their tuning between visual following and brain-control.
  • they never actually did brain control
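The neuron-dropping analysis can be sketched as greedy backward selection over a population-vector decoder; everything below (cosine-tuned synthetic units, per-unit noise levels, 8 "corner" targets, keeping 5 units) is an illustrative stand-in, not their actual data or parameters:

```python
import math, random

random.seed(2)

n_units = 20
pref = [random.uniform(0, 2 * math.pi) for _ in range(n_units)]   # preferred dirs
noise = [random.uniform(0.05, 1.0) for _ in range(n_units)]       # per-unit noise

targets = [2 * math.pi * k / 8 for k in range(8)]                 # 8 cube "corners" (2D here)

def rates(theta):
    # rectified cosine tuning plus gaussian noise
    return [max(0.0, math.cos(theta - pref[i]) + random.gauss(0, noise[i]))
            for i in range(n_units)]

def decode(r, keep):
    # population vector: rate-weighted sum of preferred directions
    x = sum(r[i] * math.cos(pref[i]) for i in keep)
    y = sum(r[i] * math.sin(pref[i]) for i in keep)
    return math.atan2(y, x)

def residual(keep, trials=50):
    # mean absolute angular decoding error over random target presentations
    err = 0.0
    for _ in range(trials):
        th = random.choice(targets)
        d = decode(rates(th), keep) - th
        err += abs(math.atan2(math.sin(d), math.cos(d)))
    return err / trials

keep = list(range(n_units))
while len(keep) > 5:
    # drop whichever unit's removal most improves the residual
    best = min(keep, key=lambda i: residual([j for j in keep if j != i]))
    keep.remove(best)
```

This captures their point that most units add noise: the greedy pass tends to discard the high-noise units first, leaving a small subset that decodes about as well as the full population.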

PMID-16705272[1] Selection and parameterization of cortical neurons for neuroprosthetic control

  • here they actually did neuroprosthetic control.
  • most units add noise to the control signal, a few actually improve it -> they emphasize cautious unit selection, leading to simpler computational/electrical systems.
  • point out that the idea of using chronically recorded neural signals has a very long history.. [2,3,4,5] [6] etc.
  • look like it took the monkeys about 1.6-1.8 seconds to reach the target.
    • minimum summed path length / distance to target = 3.5. is that good?

____References____

{263}
hide / / print
ref: Ferrari-2005.02 tags: tool use monkey neural response learning mirror neurons F5 date: 04-03-2007 22:44 gmt revision:1 [0] [head]

PMID-15811234[] Mirror Neurons Responding to Observation of Actions Made with Tools in Monkey Ventral Premotor Cortex

  • respond when the monkey sees a human using a tool!

____References____

{23}
hide / / print
ref: Vyssotski-2006.02 tags: neurologger neural_recording recording_technology EEG SUA LFP electrical engineering date: 02-05-2007 06:21 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

PMID-16236777[0] Miniature neurologgers for flying pigeons: multichannel EEG and action and field potentials in combination with GPS recording.

Recording neuronal activity of animals moving through their natural habitat is difficult to achieve by means of conventional radiotelemetry. This illustration shows a new approach, exemplified by a homing pigeon carrying both a small GPS path recorder and a miniaturized action and field potential logger (“neurologger”), the entire assembly weighing maximally 35 g, a load carried easily by a pigeon over a distance of up to 50 km. Before release at a distant location, the devices are activated and store both positional and neuronal activity data during the entire flight. On return to the loft, all data are downloaded and can be analyzed using software for path analysis and electrical brain activity. Thus single unit activity or EEG patterns can be matched to the flight path superimposed on topographical maps. Such neurologgers may also be useful for a variety of studies using unrestrained laboratory animals in different environments or test apparatuses. The prototype on the hand-held pigeon records and stores EEG simultaneously from eight channels up to 47 h, or single unit activity from two channels during 9 h, but the number of channels can be increased without much gain in weight by sandwiching several of these devices. Further miniaturization can be expected. For details, see Vyssotski AL, Serkov AN, Itskov PM, Dell Omo G, Latanov AV, Wolfer DP, and Lipp H-P. Miniature neurologgers for flying pigeons: multichannel EEG and action and field potentials in combination with GPS recording. [1]

____References____

{7}
hide / / print
ref: bookmark-0 tags: book information_theory machine_learning bayes probability neural_networks mackay date: 0-0-2007 0:0 revision:0 [head]

http://www.inference.phy.cam.ac.uk/mackay/itila/book.html -- free! (but i liked the book, so I bought it :)

{20}
hide / / print
ref: bookmark-0 tags: neural_networks machine_learning matlab toolbox supervised_learning PCA perceptron SOM EM date: 0-0-2006 0:0 revision:0 [head]

http://www.ncrg.aston.ac.uk/netlab/index.php n.b. kinda old. (or does that just mean well established?)

{44}
hide / / print
ref: notes-0 tags: spike patterns neural response LGN spike_timing Sejnowski vision date: 0-0-2006 0:0 revision:0 [head]

http://www.jneurosci.org/cgi/reprint/24/12/2989.pdf

  • quote: " when a cortical neuron is repeatedly injected with the same fluctuating current stimulus, the timing of the spikes is highly precise from trial to trial and the spike pattern appears to be unique"
    • though: I'd imagine that somebody has characterized the actual transfer function of this.
  • but: we conclude that the prestimulus history of a neuron may influence the precise timing of the spikes in response to a stimulus over a wide range of time scales.
  • in vivo, it is hard to find patterns because neurons may jump between patterns & there is a large amount of neuronal noise in there too. or there may be neural "attractors".
  • they observed long-term (seconds) firing patterns in cat LGN (interesting)

{64}
hide / / print
ref: bookmark-0 tags: neural_recording recording_technology electrical engineering DSP date: 0-0-2006 0:0 revision:0 [head]

{92}
hide / / print
ref: bookmark-0 tags: training neural_networks with kalman filters date: 0-0-2006 0:0 revision:0 [head]

with the extended kalman filter, from '92: http://ftp.ccs.neu.edu/pub/people/rjw/kalman-ijcnn-92.ps

with the unscented kalman filter : http://hardm.ath.cx/pdf/NNTrainingwithUnscentedKalmanFilter.pdf