m8ta
{1472}
ref: -0 tags: computational neuroscience opinion tony zador konrad kording lillicrap date: 07-30-2019 21:04 gmt revision:0 [head]

Two papers out recently on arXiv and bioRxiv:

  • A critique of pure learning: what artificial neural networks can learn from animal brains
    • Animals learn rapidly and robustly, without the need for labeled sensory data, largely through innate mechanisms arrived at and encoded genetically through evolution.
    • Still, this cannot account for the connectivity of the human brain, which is much too large for the genome to specify directly; instead, there are canonical circuits and patterns of intra-area connectivity which act as the 'innate' learning biases.
    • Mice and men are not so far apart evolutionarily. (I've heard this also from people FIB-SEM imaging cortex.) Hence, understanding one should appreciably lead us to understand the other. (I agree with this sentiment, but for the fact that lab mice are dumb, and have pretty stereotyped behaviors.)
    • References Long short-term memory and learning-to-learn in networks of spiking neurons -- which claims that a hybrid algorithm (BPTT with neuronal rewiring) with realistic neuronal dynamics markedly increases the computational power of spiking neural networks.
  • What does it mean to understand a neural network?
    • As has been the intuition of many neuroscientists for a long time, the paper posits that we have to investigate the developmental rules (wiring and connectivity, same as above) plus the local-ish learning rules (synaptic, dendritic, other .. astrocytic).
      • The weights themselves, in either biological neural networks or in ANNs, are not at all informative! (Duh.)
    • Emphasizes the concept of compressibility: how much information can be discarded without impacting performance? With some modern ANNs, 30-50x compression is possible. The authors here argue that little compression is possible in the human brain -- the wealth of all those details about the world is needed! In other words, no compact description is possible.
    • Hence, you need to understand how the network learns those details, and how it's structured so that important things are learned rapidly and robustly, as seen in animals (very similar to above).
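The 30-50x compression figure can be made concrete with a minimal sketch (my own illustration, not from the paper): magnitude-prune a random dense weight matrix, keeping only the largest ~1/30 of the weights, and report the resulting compression ratio.

```python
import numpy as np

# Hypothetical illustration of ANN weight compression via magnitude pruning:
# zero out all but the largest-magnitude weights, then measure the ratio of
# total weights to surviving (nonzero) weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))          # a stand-in dense weight matrix

keep_fraction = 1 / 30                   # targets ~30x compression
k = int(W.size * keep_fraction)          # number of weights to keep
threshold = np.sort(np.abs(W), axis=None)[-k]   # k-th largest |weight|

W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
nonzero = np.count_nonzero(W_pruned)
print(f"kept {nonzero} of {W.size} weights "
      f"({W.size / nonzero:.1f}x compression)")
```

Real pruning pipelines then fine-tune the surviving weights to recover accuracy; the brain, the authors argue, has no such low-rank shortcut.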

{1443}
ref: -2016 tags: MAPseq Zador connectome mRNA plasmic library barcodes Peikon date: 03-06-2019 00:51 gmt revision:1 [0] [head]

PMID-27545715 High-Throughput Mapping of Single-Neuron Projections by Sequencing of Barcoded RNA.

  • Justus M. Kebschull, Pedro Garcia da Silva, Ashlan P. Reid, Ian D. Peikon, Dinu F. Albeanu, Anthony M. Zador
  • Another tool for the toolbox, but I still can't help but like microscopy: while the number of labels in MAPseq is far higher, the information per readout is much lower; an imaged slice holds a lot of information, including dendritic / axonal morphology, which sequencing doesn't get. Natch, you'd want to use both, or FISSEQ + ExM.
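The "far more labels" point is pure combinatorics: a random nucleotide barcode of length n has 4^n possible sequences, vastly more than any fluorophore palette. A back-of-the-envelope sketch (my numbers, using a 30-nt barcode and a birthday-problem approximation, not figures taken from the paper):

```python
# Rough diversity of random nucleotide barcodes, and the expected number of
# barcode collisions when labeling a population of neurons (birthday-problem
# approximation: N*(N-1) / (2*D) for N draws from D equally likely labels).
barcode_len = 30                     # assumed ~30-nt random barcode
diversity = 4 ** barcode_len         # possible distinct sequences, ~1.2e18
neurons = 1_000_000                  # hypothetical number of labeled neurons

expected_collisions = neurons * (neurons - 1) / (2 * diversity)
print(f"barcode diversity: {diversity:.2e}")
print(f"expected collisions among {neurons:,} neurons: {expected_collisions:.2e}")
```

Even a million labeled neurons would be expected to produce essentially zero barcode collisions, which is what makes single-neuron projection mapping by sequencing feasible.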

{1171}
ref: -0 tags: Zador Peikon cold spring plos date: 11-06-2012 19:46 gmt revision:1 [0] [head]

Sequencing the Connectome

  • Quote: "Interestingly, the utility of the connectome in C. elegans is somewhat limited because function is highly multiplexed, with different neurons performing different roles depending on the state of neuromodulation [7], possibly as a mechanism for compensating for the small number of neurons."
  • In comparison, the authors argue that the role of neurons in mammalian brains is much more highly determined by connectivity / physical location, and support this with examples from the visual system (area MT; how layer in V1 determines simple vs. complex tuning).
  • The authors have only started work on this highly ambitious project -- the current plan is to use PRV amplicons for permuting the neuronal barcodes -- and they offer no results, just the general framework of the idea.
    • Given that Ian spoke of the idea when he first started at CSH, I wonder just how practical this idea is?