Direct Feedback alignment provides learning in deep neural nets
- from {1423}
- Direct feedback alignment can reach zero training error even in convolutional networks and very deep networks, entirely without error back-propagation.
- Biologically plausible: error signal is entirely local, no symmetric or reciprocal weights required.
- Still, it requires supervision.
- Almost as good as backprop!
- Clearly written, easy to follow math.
- Though the proof that the feedback-alignment update direction stays within 90 deg of the backprop gradient is a bit impenetrable; it needs some reorganization or additional exposition / annotation.
- A 3x400 tanh network tested on MNIST performs similarly to backprop, if somewhat faster.
- Also able to train very deep networks (100 layers) on MNIST, CIFAR-10, and CIFAR-100, though that extra depth actually hurts these tasks.
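- The core mechanism can be sketched in a few lines of numpy: each hidden layer gets its teaching signal by projecting the output error through a *fixed random* matrix, instead of through the transposed forward weights as in backprop. This is a minimal sketch, not the paper's setup: the toy data, layer sizes, learning rate, and sigmoid/cross-entropy output are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data (illustrative, not from the paper)
X = rng.standard_normal((256, 8))
T = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

n_in, n_h, n_out = 8, 32, 1
W1 = rng.standard_normal((n_in, n_h)) * 0.1
W2 = rng.standard_normal((n_h, n_h)) * 0.1
W3 = rng.standard_normal((n_h, n_out)) * 0.1

# DFA: fixed random feedback matrices carry the output error
# directly to each hidden layer; they are never trained.
B1 = rng.standard_normal((n_out, n_h))
B2 = rng.standard_normal((n_out, n_h))

def forward(X):
    h1 = np.tanh(X @ W1)
    h2 = np.tanh(h1 @ W2)
    y = 1.0 / (1.0 + np.exp(-(h2 @ W3)))  # sigmoid output
    return h1, h2, y

lr = 0.5
losses = []
for step in range(200):
    h1, h2, y = forward(X)
    e = y - T  # output error (sigmoid + cross-entropy gradient)
    # Hidden-layer deltas use B_i e, not W_{i+1}^T delta_{i+1}:
    # the signal is local to each layer, no symmetric weights needed.
    d2 = (e @ B2) * (1 - h2**2)
    d1 = (e @ B1) * (1 - h1**2)
    W3 -= lr * h2.T @ e / len(X)
    W2 -= lr * h1.T @ d2 / len(X)
    W1 -= lr * X.T @ d1 / len(X)
    losses.append(float(np.mean(-T * np.log(y + 1e-9)
                                - (1 - T) * np.log(1 - y + 1e-9))))

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

- Note how the weight updates themselves look just like backprop's; only the source of each layer's delta changes, which is why the "within 90 deg of backprop" alignment argument matters.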