m8ta

{1540}  
Two Routes to Scalable Credit Assignment without Weight Symmetry

This paper looks at five different learning rules, three purely local and two nonlocal, to see if they can work as well as backprop in training a deep convolutional net on ImageNet. The local learning networks all feature forward weights W and backward weights B; the forward weights (plus nonlinearities) pass the information forward to produce a classification; the backward weights pass the error, which is used to locally adjust the forward weights. Hence each artificial neuron locally has the forward activation, the backward error (or loss gradient), the forward weight, the backward weight, and Hebbian terms thereof (e.g. the outer product of the input vectors for both the forward and backward passes). From these available variables, they construct the local learning rules.
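As a concrete sketch of the quantities each layer has locally available, assuming a linear forward pass y = W x and a feedback pass delta_in = B delta_out; all variable names here are mine, not the paper's notation:

```python
import numpy as np

# Hypothetical sketch of what is locally available at one layer, assuming a
# linear forward pass y = W @ x and a feedback pass delta_in = B @ delta_out.
rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.standard_normal((n_out, n_in))      # forward weights
B = rng.standard_normal((n_in, n_out))      # backward (feedback) weights

x = rng.standard_normal(n_in)               # forward activation into the layer
y = W @ x                                   # forward output
delta_out = rng.standard_normal(n_out)      # error signal arriving from above
delta_in = B @ delta_out                    # error passed down through B

# "Hebbian" primitives the local rules can be built from: outer products of
# the vectors seen on the forward and backward passes.
hebb_forward = np.outer(y, x)               # same shape as W
hebb_backward = np.outer(delta_in, delta_out)  # same shape as B
```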
Each of these serves as a "regularizer term" on the feedback weights, which governs their learning dynamics. In the case of backprop, the backward weights B are just the instantaneous transpose of the forward weights W; a good local learning rule approximates this transpose progressively. They show that, with proper hyperparameter settings, this does indeed work nearly as well as backprop when training a ResNet18 network. But hyperparameter settings don't translate to other network topologies. To address this, they add in nonlocal learning rules.
In "Symmetric Alignment" (SA), the Self and Decay rules are employed. This is similar to backprop (the backward weights will track the forward ones) with L2 regularization, which is not new. It performs very similarly to backprop. In "Activation Alignment" (AA), the Amp and Sparse rules are employed. I assume this is supposed to be more biologically plausible: the Hebbian term can track the forward weights, while the Sparse rule regularizes and stabilizes the learning, such that the overall dynamics allow the gradient to flow even if W and B aren't transposes of each other.

Surprisingly, they find that Symmetric Alignment is more robust to the injection of Gaussian noise during training than backprop. Both SA and AA achieve similar accuracies to backprop on the ResNet benchmark. The authors then go on to explain the plausibility of nonlocal but approximate learning rules via regression discontinuity design, a la "Spiking allows neurons to estimate their causal effect".

This is a decent paper, reasonably well written. They thought through what variables are available to affect learning, and parameterized five combinations that work. Could they have done the full matrix of combinations, optimizing them just the same as the other hyperparameters? Perhaps, but that would be even more work...

Regarding the desire to reconcile backprop and biology, this paper does not bring us much (if at all) closer. Biological neural networks have specific and local uses for error; even invoking 'error' has limited explanatory power over activity. Learning and firing dynamics are, of course, intertwined. Is the brain then just an overbearing mess of details and overlapping rules? Probably, but that doesn't mean that we humans can't find something simpler that works. The algorithms in this paper, for example, are well described by a bit of linear algebra, and yet they are performant.
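A minimal sketch of the intuition behind Symmetric Alignment, under the assumption (my paraphrase, not the paper's exact parameterization) that the Self and Decay terms combine into a regularizer proportional to ||W^T - B||^2: gradient descent on that term alone pulls the feedback weights toward the transpose of the forward weights.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))   # forward weights
B = rng.standard_normal((4, 3))   # feedback weights, trained separately

eta = 0.5
for _ in range(200):
    # Gradient of 0.5 * ||W.T - B||_F^2 with respect to B is (B - W.T),
    # so gradient descent drives B toward the transpose of W.
    B -= eta * (B - W.T)

# After enough steps, B approximates W.T, recovering backprop's symmetry.
assert np.allclose(B, W.T, atol=1e-6)
```

In the full setting this regularizer gradient is added to the task-driven updates, so B only approximately tracks W^T during training rather than converging exactly as in this isolated sketch.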
{1531}  
PMID24204224 The Convallis rule for unsupervised learning in cortical networks 2013, Pierre Yger, Kenneth D Harris

This paper aims to unify and reconcile experimental evidence of in-vivo learning rules with established STDP rules. In particular, the STDP rule fails to accurately predict the change in strength in response to spike triplets, e.g. pre-post-pre or post-pre-post. Their model instead involves the competition between two different-time-constant threshold circuits / coincidence detectors, one of which controls LTD and the other LTP, and is thus an extension of the classical BCM rule. (BCM: inputs driving the response below a threshold will weaken a synapse; those above it will strengthen it.) They derive the model from an optimization criterion: neurons should try to maximize the skewness of the distribution of their membrane potential, i.e. much time spent either firing spikes or strongly inhibited. This maps to an objective function F that looks like a valley, hence the 'Convallis' in the name (Latin for valley); the objective is differentiated to yield a weighting function for weight changes. They also add a shrinkage function (a line plus a Heaviside function) to gate weight changes 'off' at the resting membrane potential.

A network of firing neurons successfully groups correlated rate-encoded inputs, better than the STDP rule. It can also cluster auditory inputs of spoken digits converted into a cochleogram. But this all seems relatively toy-like: of course algorithms can associate inputs that co-occur. The same result was found for a recurrent balanced E-I network with the same cochleogram, and Convallis performed better than STDP. Meh.

Perhaps the biggest thing I got from the paper was how poorly STDP fares with spike triplets: pre following post does not 'necessarily' cause LTD; it's more complicated than that, and more consistent with the two different-time-constant coincidence detectors.
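The classical BCM rule that Convallis extends can be sketched in a few lines; the postsynaptic response y is compared against a threshold theta, with sub-threshold responses producing depression and supra-threshold responses producing potentiation. Names and parameter values here are illustrative only.

```python
import numpy as np

def bcm_update(w, x, theta, eta=0.01):
    """One BCM step: postsynaptic responses y below the threshold theta
    weaken the active synapses (LTD); responses above it strengthen them
    (LTP). In the full BCM model theta itself slides with recent activity."""
    y = float(w @ x)
    return w + eta * y * (y - theta) * x

w = np.array([0.5, 0.5])
x = np.array([1.0, 1.0])   # both inputs active, so y = 1.0

# With a high threshold the response sits below theta -> depression.
w_ltd = bcm_update(w, x, theta=2.0)
# With a low threshold the same response sits above theta -> potentiation.
w_ltp = bcm_update(w, x, theta=0.5)

assert (w_ltd < w).all() and (w_ltp > w).all()
```

Convallis replaces this single threshold with two coincidence detectors with different time constants, which is what lets it capture the triplet results that plain STDP misses.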
This is satisfying, as it allows apical dendritic depolarization to serve as a contextual binding signal without negatively impacting the associated synaptic weights.
{1495}  
Why multifactor?
 
{1493}  
PMID27690349 Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation
