ref: -2013 tags: synaptic learning rules calcium harris stdp date: 02-18-2021 19:48 gmt

PMID-24204224 The Convallis rule for unsupervised learning in cortical networks 2013 - Pierre Yger, Kenneth D Harris

This paper aims to unify and reconcile experimental evidence of in-vivo learning rules with established STDP rules.  In particular, the STDP rule fails to accurately predict the change in synaptic strength in response to spike triplets, e.g. pre-post-pre or post-pre-post.  Their model instead involves competition between two threshold circuits / coincidence detectors with different time constants, one controlling LTD and the other LTP; it is thus an extension of the classical BCM rule.  (BCM: inputs below a threshold weaken a synapse; those above it strengthen it.)

They derive the model from an optimization criterion: neurons should maximize the skewness of the distribution of their membrane potential, i.e. spend much of their time either firing spikes or strongly inhibited.  This maps to an objective function F that looks like a valley - hence the 'Convallis' in the name (Latin for valley).  The objective is differentiated to yield a weighting function for weight changes; they also add a shrinkage function (line + Heaviside function) to gate weight changes 'off' at resting membrane potential.
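A minimal numpy sketch of that differentiate-then-gate structure, assuming a toy quadratic valley for F and parameters of my own choosing (the paper's actual objective and constants differ):

```python
import numpy as np

V_REST, V_THRESH = -65.0, -50.0
V_MID = 0.5 * (V_REST + V_THRESH)

def dF_dv(v):
    """Derivative of a toy quadratic valley F(v) = (v - V_MID)^2: pushes the
    membrane potential v away from the middle of its operating range."""
    return 2.0 * (v - V_MID)

def shrinkage_gate(v, width=2.0):
    """Heaviside-like gate: plasticity is switched 'off' at/near rest."""
    return np.heaviside(v - (V_REST + width), 0.0)

def convallis_dw(v, x_pre, eta=1e-3):
    """Weight change per time sample: gated objective gradient times
    presynaptic activity (chain rule through w, since dv/dw ~ x_pre)."""
    return eta * shrinkage_gate(v) * dF_dv(v) * x_pre

# Depolarized samples with active inputs potentiate; samples at rest do nothing.
v_post = np.array([-52.0, -64.9, -58.0])  # membrane potential, three samples
x_pre  = np.array([  1.0,   1.0,   0.0])  # presynaptic activity at each sample
print(convallis_dw(v_post, x_pre).sum())  # net positive here -> LTP
```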

A network of spiking neurons successfully groups correlated rate-encoded inputs, better than the STDP rule.  It can also cluster auditory inputs of spoken digits converted into cochleograms.  But this all seems relatively toy-like: of course algorithms can associate inputs that co-occur.  The same result was found for a recurrent balanced E-I network with the same cochleograms, and Convallis performed better than STDP.  Meh.

Perhaps the biggest thing I got from the paper was how poorly STDP fares with spike triplets:

Pre following post does not 'necessarily' cause LTD; it's more complicated than that, and more consistent with two coincidence detectors operating on different time constants.  This is satisfying, as it allows apical dendritic depolarization to serve as a contextual binding signal without negatively impacting the associated synaptic weights.
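A quick way to see the triplet problem: if plasticity were just the sum of pairwise STDP contributions, pre-post-pre and post-pre-post triplets would yield identical net weight changes, whereas triplet experiments find a pronounced asymmetry (strong LTP for post-pre-post). A toy pairwise calculation, with illustrative parameters of my own choosing:

```python
import numpy as np

# Classic pair-based STDP kernel:
#   dw = +A_plus  * exp(-dt/tau_plus)  for post after pre (dt > 0)
#   dw = -A_minus * exp(+dt/tau_minus) for pre after post (dt < 0)
def pair_stdp(dt, A_plus=1.0, A_minus=1.0, tau_plus=20.0, tau_minus=20.0):
    return np.where(dt > 0, A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

def net_dw(pre_times, post_times):
    """Sum the pair kernel over every (pre, post) spike pair."""
    dts = np.subtract.outer(np.asarray(post_times), np.asarray(pre_times))
    return pair_stdp(dts).sum()

# Two canonical triplets, spikes 10 ms apart:
print(net_dw(pre_times=[0.0, 20.0], post_times=[10.0]))   # pre-post-pre
print(net_dw(pre_times=[10.0], post_times=[0.0, 20.0]))   # post-pre-post
# Both triplets contain the same pairwise intervals {+10, -10}, so pairwise
# summation predicts equal changes (~0 for a symmetric kernel); experimentally
# post-pre-post potentiates strongly, which the pair rule cannot capture.
```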

ref: -0 tags: synaptic plasticity 2-photon imaging inhibition excitation spines dendrites synapses 2p date: 08-14-2020 01:35 gmt

PMID-22542188 Clustered dynamics of inhibitory synapses and dendritic spines in the adult neocortex.

  • Cre-recombinase-dependent labeling of the postsynaptic scaffold via a Gephyrin-Teal fluorophore fusion.
  • Also added Cre-eYFP to label the neurons.
  • Electroporated mice in utero at E16.
    • Low concentration of Cre, high concentrations of the Gephyrin-Teal and Cre-eYFP constructs, to attain sparse labeling.
  • Located the same dendrite imaged in-vivo in fixed tissue - !! - using serial-section electron microscopy.
  • 2230 dendritic spines and 1211 inhibitory synapses from 83 dendritic segments in 14 cells of 6 animals.
  • Some spines had inhibitory synapses on them -- 0.7 / 10um, vs 4.4 / 10um dendrite for excitatory spines. ~ 1.7 inhibitory
  • Suggest that the data support the idea that inhibitory inputs may be gating excitation.
  • Furthermore, co-innervated spines are stable, both during normal experience and during monocular deprivation.
  • Monocular deprivation induces a pronounced loss of inhibitory synapses in binocular cortex.

ref: -2013 tags: 2p two photon STED super resolution microscope synapse synaptic plasticity date: 08-14-2020 01:34 gmt

PMID-23442956 Two-Photon Excitation STED Microscopy in Two Colors in Acute Brain Slices

  • Plenty of details on how they set up the microscope.
  • Mice: Thy1-eYFP (some excitatory cells in the hippocampus and cortex) and CX3CR1-eGFP (GFP in microglia). Crossbred the two strains for two-color imaging.
  • Animals were 21-40 days old at slicing.

PMID-29932052 Chronic 2P-STED imaging reveals high turnover of spines in the hippocampus in vivo

  • As above, Thy1-GFP / Thy1-YFP labeling; hence this was a structural study (for which the high resolution of STED was necessary).
  • Might just as well have gone with synaptic labels, e.g. tdTomato-Synapsin.

ref: -0 tags: synaptic plasticity LTP LTD synapses NMDA glutamate uncaging date: 08-11-2020 22:40 gmt

PMID-31780899 Single Synapse LTP: A matter of context?

  • Not a great name for a thorough and reasonably well-written review of glutamate uncaging studies as related to LTP (and to a lesser extent LTD).
  • Lots of references from many familiar names. Nice to have them all in one place!
  • I'm left wondering: between CaMKII, PKA, PKC, Ras, and other GTP-dependent molecules -- how much of the regulatory network in the synapse is known? E.g. if you pull down all proteins in the synaptosome & their interacting partners, how many are unknown, or have an unknown function? I know something like this has been done for flies, but in mammals - ?

ref: -0 tags: multifactor synaptic learning rules date: 01-22-2020 01:45 gmt

Why multifactor?

  • Take a simple MLP. Let $x$ be the layer activation. $X^0$ is the input, $X^1$ is the second layer (first hidden layer). These are vectors, indexed like $x^a_i$.
  • Then $X^1 = W X^0$, or $x^1_j = \phi(\Sigma_{i=1}^N w_{ij} x^0_i)$. $\phi$ is the nonlinear activation function (ReLU, sigmoid, etc.)
  • In standard STDP the learning rule follows $\Delta w \propto f(x_{pre}(t), x_{post}(t))$, or, if the layer number is $a$, $\Delta w^{a+1} \propto f(x^a(t), x^{a+1}(t))$.
    • (but of course nobody thinks there are 'numbers' on the 'layers' of the brain -- this just refers to pre- and post-synaptic.)
  • In an artificial neural network, $\Delta w^a \propto -\frac{\partial E}{\partial w^a_{ij}} \propto -\delta^a_j x_i$ (intuitively: the weight change is proportional to the error propagated from higher layers times the input activity), where $\delta^a_j = (\Sigma_{k=1}^{N} w_{jk} \delta^{a+1}_k)\, \partial\phi$, and $\partial\phi$ is the derivative of the nonlinear activation function, evaluated at the given activation. (A numpy sketch follows this list.)
  • $f(i, j) \rightarrow [x, y, \theta, \phi]$
  • $k = 13.165$
  • $x = \text{round}(i / k)$
  • $y = \text{round}(j / k)$
  • $\theta = a(\frac{i}{k} - x) + b(\frac{i}{k} - x)^2$
  • $\phi = a(\frac{j}{k} - y) + b(\frac{j}{k} - y)^2$
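For concreteness, the backprop update above in code: a minimal two-layer numpy sketch (the loss, variable names, and sizes are my choices; the weight matrices here are indexed (output, input), the transpose of the $w_{ij}$ convention above):

```python
import numpy as np

rng = np.random.default_rng(0)
phi  = lambda z: np.maximum(z, 0.0)      # ReLU activation
dphi = lambda z: (z > 0).astype(float)   # its derivative

x0 = rng.normal(size=5)                  # input X^0
W1 = 0.1 * rng.normal(size=(4, 5))       # weights into layer 1
W2 = 0.1 * rng.normal(size=(3, 4))       # weights into layer 2
target = rng.normal(size=3)

z1 = W1 @ x0; x1 = phi(z1)               # hidden layer X^1
z2 = W2 @ x1; x2 = phi(z2)               # output layer X^2

# Squared error E = 0.5 ||x2 - target||^2, so dE/dx2 = (x2 - target)
delta2 = (x2 - target) * dphi(z2)        # delta at the output layer
delta1 = (W2.T @ delta2) * dphi(z1)      # delta^a_j = (sum_k w_jk delta^{a+1}_k) * dphi

eta = 0.1
W2 -= eta * np.outer(delta2, x1)         # Delta w proportional to -delta_j * x_i
W1 -= eta * np.outer(delta1, x0)
```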

ref: -0 tags: nonlinear hebbian synaptic learning rules projection pursuit date: 12-12-2019 00:21 gmt

PMID-27690349 Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation

  • Here we show that the principle of nonlinear Hebbian learning is sufficient for receptive field development under rather general conditions.
  • The nonlinearity is defined by the neuron’s f-I curve combined with the nonlinearity of the plasticity function. The outcome of such nonlinear learning is equivalent to projection pursuit [18, 19, 20], which focuses on features with non-trivial statistical structure, and therefore links receptive field development to optimality principles.
  • $\Delta w \propto x\, h(g(w^T x))$, where $h$ is the Hebbian plasticity term, $g$ is the neuron's f-I curve (input-output relation), and $x$ is the (sensory) input.
  • The relevant property of natural image statistics is that the distribution of features derived from typical localized oriented patterns has high kurtosis [5, 6, 39].
  • Model is a generalized leaky integrate-and-fire neuron, with triplet STDP.
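The rule fits in a few lines of numpy. A sketch with stand-ins of my own choosing: a rectified-quadratic f-I curve for g, the identity for h, and explicit weight normalization to keep the update bounded (the paper's actual g, h, and neuron model differ). Run on whitened noise with one heavy-tailed direction, the weight vector should rotate toward the high-kurtosis feature -- the projection-pursuit behavior described above.

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda u: np.maximum(u, 0.0) ** 2    # stand-in f-I curve (rectified square)
h = lambda y: y                          # simplest Hebbian plasticity term

# Toy input: 10 unit-variance dims, Gaussian except one heavy-tailed (dim 0).
n, dim = 20000, 10
X = rng.normal(size=(n, dim))
X[:, 0] = rng.laplace(scale=1 / np.sqrt(2), size=n)  # unit-variance Laplace

w = rng.normal(size=dim)
w /= np.linalg.norm(w)
eta = 1e-3
for x in X:
    y = g(w @ x)                         # postsynaptic activity
    w += eta * x * h(y)                  # Delta w ~ x h(g(w^T x))
    w /= np.linalg.norm(w)               # constrain |w|=1: search over projections

print(np.round(w, 2))                    # weight should concentrate on dim 0
```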

ref: -2014 tags: Lillicrap Random feedback alignment weights synaptic learning backprop MNIST date: 02-14-2019 01:02 gmt

PMID-27824044 Random synaptic feedback weights support error backpropagation for deep learning.

  • "Here we present a surprisingly simple algorithm for deep learning, which assigns blame by multiplying error signals by a random synaptic weights.
  • Backprop multiplies error signals e by the weight matrix W T W^T , the transpose of the forward synaptic weights.
  • But the feedback weights do not need to be exactly W T W^T ; any matrix B will suffice, so long as on average:
  • e TWBe>0 e^T W B e > 0
    • Meaning that the teaching signal Be B e lies within 90deg of the signal used by backprop, W Te W^T e
  • Feedback alignment actually seems to work better than backprop in some cases. This relies on starting the weights very small (can't be zero -- no output)

From the discussion: "Our proof says that weights W0 and W evolve to equilibrium manifolds, but simulations (Fig. 4) and analytic results (Supplementary Proof 2) hint at something more specific: that when the weights begin near 0, feedback alignment encourages W to act like a local pseudoinverse of B around the error manifold. This fact is important because if B were exactly W+ (the Moore-Penrose pseudoinverse of W), then the network would be performing Gauss-Newton optimization (Supplementary Proof 3). We call this update rule for the hidden units pseudobackprop and denote it by ∆hPBP = W+ e. Experiments with the linear network show that the angle ∆hFA ∠ ∆hPBP quickly becomes smaller than ∆hFA ∠ ∆hBP (Fig. 4b, c; see Methods). In other words feedback alignment, despite its simplicity, displays elements of second-order learning."
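The algorithm itself is a one-line change to backprop: in the hidden-layer delta, use a fixed random B instead of W^T. A toy numpy sketch of the linear-network case the excerpt above discusses (the regression task, sizes, and learning rate are my own; forward weights start near zero, as the paper requires):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
M = rng.normal(size=(n_out, n_in))           # target linear map to learn

W0 = 0.01 * rng.normal(size=(n_hid, n_in))   # forward weights, started small
W1 = 0.01 * rng.normal(size=(n_out, n_hid))
B  = rng.normal(size=(n_hid, n_out))         # fixed random feedback, replaces W1.T

eta = 0.002
for step in range(20000):
    x = rng.normal(size=n_in)
    h = W0 @ x                               # linear hidden layer
    e = W1 @ h - M @ x                       # output error
    W1 -= eta * np.outer(e, h)               # identical to backprop at the top
    W0 -= eta * np.outer(B @ e, x)           # feedback alignment: B e, not W1.T e

x = rng.normal(size=n_in)
print(np.linalg.norm(W1 @ (W0 @ x) - M @ x))  # should be near 0 after training
```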

ref: -0 tags: DBS dopamine synaptic plasticity striatum date: 02-27-2012 21:57 gmt

PMID-11285003 Dopaminergic control of synaptic plasticity in the dorsal striatum.

  • Repetitive stimulation of corticostriatal fibers causes a massive release of glutamate and DA in the striatum, and depending on the glutamate receptor subtype preferentially activated, produces either long-term depression (LTD) or long-term potentiation (LTP) of excitatory synaptic transmission.
  • D1- and D2-like receptors interact synergistically to allow LTD formation, and in opposition during LTP induction.
  • Stimulation of DA receptors has been shown to modulate voltage-dependent conductances in striatal spiny neurons, but it does not cause depolarization or hyperpolarization (Calabresi et al 2000a PMID-11052221; Nicola et al 2000)
  • Striatal spiny neurons present a high degree of colocalization of subtypes of DA and glutamate receptors. PMID-9215599
  • Striatal cells have up and down states. Wilson and Kawaguchi 1996 PMID-8601819
  • Both LTD and LTP are induced in the striatum by the repetitive stimulation of corticostriatal fibers.
    • Repetitive stimulation is associated with a dramatic increase of both glutamate and DA in the striatum. (presynaptic?)
  • LTP is enhanced by blocking or removing D2 receptors.
  • More complexity here - in terms of receptors and blocking. (sure magnesium blocks NMDA receptors, but there are many other drugs used...)

ref: Prescott-2009.02 tags: PD levodopa synaptic plasticity SNr STN DBS date: 02-22-2012 18:28 gmt

PMID-19050033[0] Levodopa enhances synaptic plasticity in the substantia nigra pars reticulata of Parkinson's disease patients

  • In the SNpc -> SNr.
  • High frequency stimulation (HFS--four trains of 2 s at 100 Hz) in the SNr failed to induce a lasting change in test fEPs (1 Hz) amplitudes in patients OFF medication (decayed to baseline by 160 s). Following oral L-dopa administration, HFS induced a potentiation of the fEP amplitudes (+29.3% of baseline at 160 s following a plateau).
  • Aberrant synaptic plasticity may play a role in the pathophysiology of Parkinson's disease.


[0] Prescott IA, Dostrovsky JO, Moro E, Hodaie M, Lozano AM, Hutchison WD, Levodopa enhances synaptic plasticity in the substantia nigra pars reticulata of Parkinson's disease patients. Brain 132:Pt 2, 309-18 (2009 Feb)

ref: Harris-2008.03 tags: retroaxonal retrosynaptic Harris learning cortex backprop date: 12-07-2011 02:34 gmt

PMID-18255165[0] Stability of the fittest: organizing learning through retroaxonal signals

  • the central hypothesis: strengthening of a neuron's output synapses stabilizes recent changes in the same neuron's inputs.
    • this yields representations (like those arrived at via backprop) that are tuned to task features.
  • Retroaxonal signaling in the brain is too slow for an instructive backprop algorithm (one that conveys at least the sign of the error w.r.t. a given neuron's output);
  • hence, retroaxonal signals are not instructive but selective.
  • At SFN Harris was looking for people to test this in a model; since it is as yet unmodeled and untested, I'm suspicious of it.
  • Seems plausible, yet it also just seems to move the responsibility for the learning computation to the postsynaptic neuron (whence it is propagated back to the present neuron). The theory does not immediately suggest what neurons compute in order to learn; rather, how they may be learning.
    • If this stabilization is based on some sort of feedback (attention? reward?), which may guide learning (except for the cortex, which does not have many (any?) DA receptors...), then I may be more willing to accept it.
    • It seems likely that the cortex is doing a lot of unsupervised learning: predicting what sensory info will come next based on present sensory info (ICA, PCA).


[0] Harris KD, Stability of the fittest: organizing learning through retroaxonal signals. Trends Neurosci 31:3, 130-6 (2008 Mar)

ref: Huber-2004.07 tags: sleep REM SWS wilson synaptic strength date: 04-01-2009 17:50 gmt

http://www.the-scientist.com/2009/04/1/34/1/ -- a good layperson-level review of present research on sleep. Includes interviews with Stickgold and other prominent researchers. References:

http://www.the-scientist.com/2009/04/1/15/1/ -- points out that the Western sleep style is a relative outlier compared to sleep in other cultures. More 'primitive' cultures have polyphasic sleep, with different stages of alertness: dozing, napping, disengagement, vigilance, etc.

  • Quote: Other cultures tend towards "multiple and multiage sleeping partners; frequent proximity of animals; embeddedness of sleep in ongoing social interaction; fluid bedtimes and wake times; use of nighttime for ritual, sociality, and information exchange; and relatively exposed sleeping locations that require fire maintenance and sustained vigilance."


[0] Huber R, Ghilardi MF, Massimini M, Tononi G, Local sleep and learning. Nature 430:6995, 78-81 (2004 Jul 1)
[1] Klintsova AY, Greenough WT, Synaptic plasticity in cortical systems. Curr Opin Neurobiol 9:2, 203-8 (1999 Apr)
[2] Vyazovskiy VV, Cirelli C, Pfister-Genskow M, Faraguna U, Tononi G, Molecular and electrophysiological evidence for net synaptic potentiation in wake and depression in sleep. Nat Neurosci 11:2, 200-8 (2008 Feb)
[3] Pavlides C, Winson J, Influences of hippocampal place cell firing in the awake state on the activity of these cells during subsequent sleep episodes. J Neurosci 9:8, 2907-18 (1989 Aug)
[4] Pompeiano M, Cirelli C, Arrighi P, Tononi G, c-Fos expression during wakefulness and sleep. Neurophysiol Clin 25:6, 329-41 (1995)
[5] Hill S, Tononi G, Modeling sleep and wakefulness in the thalamocortical system. J Neurophysiol 93:3, 1671-98 (2005 Mar)
[6] Aton SJ, Seibt J, Dumoulin M, Jha SK, Steinmetz N, Coleman T, Naidoo N, Frank MG, Mechanisms of sleep-dependent consolidation of cortical plasticity. Neuron 61:3, 454-66 (2009 Feb 12)

ref: Tononi-2006.02 tags: sleep synaptic homeostasis plasticity date: 03-20-2009 15:45 gmt

PMID-16376591[0] Sleep function and synaptic homeostasis.

  • Sleep keeps the neural network stable & the synaptic weights in check.
    • If you don't sleep, do you get epilepsy?? I don't have access to the article; would have to read it.


[0] Tononi G, Cirelli C, Sleep function and synaptic homeostasis. Sleep Med Rev 10:1, 49-62 (2006 Feb)