m8ta
{1478}
ref: -2013 tags: 2p two photon STED super resolution microscope synapse synaptic plasticity date: 08-14-2020 01:34 gmt revision:3

PMID-23442956 Two-Photon Excitation STED Microscopy in Two Colors in Acute Brain Slices

  • Plenty of details on how they set up the microscope.
  • Mice: Thy1-eYFP (some excitatory cells in the hippocampus and cortex) and CX3CR1-eGFP (GFP in microglia). Crossbred the two strains for two-color imaging.
  • Animals were 21-40 days old at slicing.

PMID-29932052 Chronic 2P-STED imaging reveals high turnover of spines in the hippocampus in vivo

  • As above, Thy1-GFP / Thy1-YFP labeling; hence this was a structural study (for which the high resolution of STED was necessary).
  • Might just as well have gone with synaptic labels, e.g. tdTomato-Synapsin.

{1503}
ref: -0 tags: DNA paint FRET tag superresolution imaging oligos date: 02-20-2020 16:28 gmt revision:1

Accelerated FRET-PAINT Microscopy

  • Well isn't that smart -- they use a FRET donor, which is free to associate with and dissociate from a host DNA strand, and a more-permanently attached DNA acceptor, which blinks due to FRET, for superresolution imaging.
  • As FRET acceptors aren't subject to bleaching (or, perhaps, much less subject to bleaching), this eliminates that problem...
  • However, the light levels used, ~1 kW / cm^2, do damage the short DNA oligos, which interferes with reversible association.
  • Interestingly, CF488 donor showed very little photobleaching; DNA damage was instead the limiting problem.
    • Are dyes that bleach more slowly better at exporting their singlet oxygen (?) or aberrant excited states (?) to neighboring molecules?

{1492}
ref: -2016 tags: spiking neural network self supervised learning date: 12-10-2019 03:41 gmt revision:2

PMID: Spiking neurons can discover predictive features by aggregate-label learning

  • This is a meandering, somewhat long-winded, and complicated paper, even for the journal Science. It has not been cited a great many times, but is nonetheless of interest.
  • The goal of the derived network is to detect fixed-pattern presynaptic sequences, and to fire a prespecified number of spikes in response to each occurrence.
  • One key innovation is the use of a spike-threshold-surface (STS) for a 'tempotron' [12]; the derivative of the STS is used to update the synaptic weights after each trial. As the author says, spikes are hard to differentiate; the STS makes differentiation tractable. This is hence standard gradient descent: if the neuron missed a spike, the weights are increased based on the aggregate STS for the whole trial -- hence the neuron / SGD has to perform both temporal and spatial credit assignment.
    • As is common, the SGD is augmented with a momentum term.
  • Since STS differentiation is biologically implausible -- where would the memory lie? -- he also implements a correlational synaptic eligibility trace: the correlation between the postsynaptic voltage and the EPSC, which seems kinda circular. (A toy sketch of this rule follows the list below.)
    • Unsurprisingly, it does not work as well as the SGD approximation. But it does work...
  • The second innovation is the incorporation of self-supervised learning: a 'supervisory' neuron integrates the activity of a number (50) of feature-detector neurons, and reinforces them to basically all fire at the same event, WTA style. This effects unsupervised feature detection.
  • This system can be used with sort-of lateral inhibition to reinforce multiple features. Not so dramatic -- continuous feature maps.
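
A minimal toy sketch of the correlational eligibility-trace rule (the LIF dynamics, constants, and names here are my own illustrative assumptions, not the paper's): each synapse accumulates the correlation of postsynaptic voltage and its EPSC over a trial, and the aggregate error (target spike count minus actual) gates the weight update, with a momentum term as in the paper's SGD.

import numpy as np

rng = np.random.default_rng(0)
n_syn, T, dt = 50, 500, 1.0           # synapses, timesteps, ms per step
tau_m, tau_s = 20.0, 5.0              # membrane and synaptic time constants (ms)
v_thresh, eta, beta = 1.0, 1e-3, 0.9  # threshold, learning rate, momentum
target_spikes = 3                     # prespecified output spike count

w = rng.normal(0.0, 0.1, n_syn)
dw_prev = np.zeros(n_syn)

for trial in range(200):
    spikes_in = rng.random((T, n_syn)) < 0.02  # Poisson-ish presynaptic input
    epsc = np.zeros(n_syn)                     # per-synapse EPSC trace
    elig = np.zeros(n_syn)                     # correlational eligibility trace
    v, n_out = 0.0, 0
    for t in range(T):
        epsc += -epsc * dt / tau_s + spikes_in[t]
        v += (-v + w @ epsc) * dt / tau_m      # leaky integrate-and-fire voltage
        if v >= v_thresh:                      # spike: reset and count
            v, n_out = 0.0, n_out + 1
        elig += v * epsc                       # V(t) * EPSC_i(t), summed over the trial
    err = target_spikes - n_out                # aggregate-label error: one number per trial
    dw = beta * dw_prev + eta * err * elig / T # momentum-SGD-style update
    w += dw
    dw_prev = dw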

Editorializing a bit: I said this was interesting, but why? The first part of the paper is another form of SGD, albeit in a spiking neural network, where the gradient is harder to compute and hence is done numerically.

It's the aggregate part that is new -- pulling in repeated patterns through synaptic learning rules. Of course, to do this, the full trace of pre- and postsynaptic activity must be recorded (??) for estimating the STS (I think). An eligibility trace moves in the right direction as a biologically plausible approximation, but as always nothing matches the precision of SGD. Can the eligibility trace be amended with e.g. neuromodulators to push the performance near that of SGD?

The next step of adding self supervised singular and multiple features is perhaps toward the way the brain organizes itself -- small local feedback loops. These features annotate repeated occurrences of stimuli, or tile a continuous feature space.

Still, the fact that I haven't seen any follow-up work is suggestive...


Editorializing further, there is a limited quantity of work that a single human can do. In this paper, it's a great deal of work, no doubt, and the author offers some good intuitions for the design decisions. Yet still, the total complexity that even a very determined individual can amass is limited, and likely far below the structural complexity of a mammalian brain.

This implies that inference either must be distributed and compositional (the normal path of science), or the process of evaluating & constraining models must be significantly accelerated. This latter option is appealing, as current progress in neuroscience seems highly technology-limited -- old results become less meaningful when the next wave of measurement tools comes around, irrespective of how much work went into them. (Though: the impetus for measuring a particular thing in biology is only discovered through these 'less meaningful' studies...)

A third option, perhaps one which many theoretical neuroscientists believe in, is that there are some broader, physics-level organizing principles to the brain. Karl Friston's free energy principle is a good example of this. Perhaps at a meta level some organizing theory can be found, or more likely a set of theories; but IMHO you'll need at least one theory per brain area, just as each area is morphologically, cytoarchitecturally, and topologically distinct. (There may be only a few theories of the cortex, despite all its areas, which is why so many are eager to investigate it!)

So what constitutes a theory? Well, you have to meaningfully describe what a brain region does. (Why is almost as important; how is more important to the path there.) From a sensory standpoint: what information is stored? What processing gain is enacted? How does the stored information impress itself on behavior? From a motor standpoint: how are goals selected? How are the behavioral segments to attain them sequenced? Is the goal / behavior split even a reasonable way of factoring the problem?

Our dual problem, building the bridge from the other direction, is perhaps easier. Or it could be that a lot more money has gone into it. Either way, much progress has been made in AI. One arm is deep function approximation / database compression for fast and organized indexing, aka deep learning. Many people are thinking about that; no need to add to the pile; anyway, as OpenAI has proven, the common solution to many problems is simply to throw more compute at them. A second arm is deep reinforcement learning, which is hideously sample- and path-inefficient, hence ripe for improvement. One side is motor: rather than indexing raw motor variables (LRUD in a video game, or joint torques on a robot) you can index motor primitives, perhaps hierarchically built; likewise, on the sensory side, the model needs to infer structure about the world. This inference should decompose overwhelming sensory experience into navigable causes ...

But how can we do this decomposition? The cortex is more than adept at it, but now we're at the original problem, one that the paper above purports to make a stab at.

{1461}
ref: -2019 tags: super-resolution microscopy fluorescent protein molecules date: 05-28-2019 16:02 gmt revision:3

PMID-30997987 Chemistry of Photosensitive Fluorophores for Single-Molecule Localization Microscopy

  • Excellent review of the photo-convertible, photo-switchable, and more complex (photo-oxidizing or reddening) probes, covering both proteins and small-molecule fluorophores.
    • E.g. PA-GFP is one of the best -- good photoactivation quantum yield, good N ~ 300.
    • Other small molecules, like Alexa Fluor 647, have a photon yield > 6700, which can be increased with triplet quenchers and antioxidants.
  • Describes the chemical mechanisms of the various forms of photoswitching -- the review is targeted at (bio)chemists interested in getting into imaging.
  • Emphasizes that the critical figures of merit are the photoactivation quantum yield Φ_pa and N, the overall photon yield before photobleaching.
  • See also Colorado lecture

{1454}
ref: -2011 tags: Andrew Ng high level unsupervised autoencoders date: 03-15-2019 06:09 gmt revision:7

Building High-level Features Using Large Scale Unsupervised Learning

  • Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng
  • Input data: 10M random 200x200 frames from YouTube. Each video contributes only one frame.
  • Used local receptive fields to reduce communication requirements. 1000 computers, 16 cores each, 3 days.
  • "Strongly influenced by" Olshausen & Field {1448} -- but this is limited to a shallow architecture.
  • Lee et al 2008 show that stacked RBMs can model simple functions of the cortex.
  • Lee et al 2009 show that a convolutional DBN trained on faces can learn a face detector.
  • Their architecture: sparse deep autoencoder with
    • Local receptive fields: each feature of the autoencoder can connect to only a small region of the lower layer (i.e. non-convolutional).
      • Purely linear layer.
      • More biologically plausible & allows the learning of invariances other than translational (Le et al 2010).
      • No weight sharing means the network is extra large == 1 billion weights.
        • Still, the human visual cortex is about a million times larger in neurons and synapses.
    • L2 pooling (Hyvarinen et al 2009) which allows the learning of invariant features.
      • E.g. this is the square root of the sum of the squares of its inputs. Square root nonlinearity.
    • Local contrast normalization -- subtractive and divisive (Jarrett et al 2009)
  • Encoding weights W_1 and decoding weights W_2 are adjusted to minimize the reconstruction error, penalized by 0.1 * the sparse pooling-layer activation. The latter term encourages the network to find invariances.
  • minimize_{W_1, W_2} \sum_{i=1}^m ( ||W_2 W_1^T x^{(i)} - x^{(i)}||_2^2 + \lambda \sum_{j=1}^k \sqrt{\epsilon + H_j (W_1^T x^{(i)})^2} )
    • H_j are the weights to the j-th pooling element; \lambda = 0.1; m examples; k pooling units. (A numpy sketch of this objective follows this list.)
    • This is also known as reconstruction Topographic Independent Component Analysis.
    • Weights are updated through asynchronous SGD.
    • Minibatch size 100.
    • Note deeper autoencoders don't fare consistently better.
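
A minimal numpy sketch of the objective above (shapes, scales, and initializations here are my own illustrative choices, not the paper's):

import numpy as np

def rica_loss(W1, W2, H, X, lam=0.1, eps=1e-8):
    """Reconstruction-TICA objective for a minibatch X of shape (m, d)."""
    Z = X @ W1                              # first-layer features W1^T x, (m, k)
    recon = Z @ W2.T                        # reconstruction W2 W1^T x, (m, d)
    recon_err = np.sum((recon - X) ** 2)    # ||W2 W1^T x - x||^2, summed over examples
    pooled = np.sqrt(eps + (Z ** 2) @ H.T)  # L2 pooling: sqrt(eps + sum_i H_ji z_i^2)
    return recon_err + lam * np.sum(pooled)

rng = np.random.default_rng(0)
m, d, k, p = 100, 400, 64, 16               # minibatch of 100, as in the paper
W1 = rng.normal(0, 0.01, (d, k))            # encoding weights
W2 = rng.normal(0, 0.01, (d, k))            # decoding weights
H = np.abs(rng.normal(0, 1.0, (p, k)))      # nonnegative pooling weights H_j
X = rng.normal(0, 1.0, (m, d))              # stand-in for image patches
print(rica_loss(W1, W2, H, X))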

{1427}
ref: -0 tags: superresolution imaging scanning lens nanoscale date: 02-04-2019 20:34 gmt revision:1

PMID-27934860 Scanning superlens microscopy for non-invasive large field-of-view visible light nanoscale imaging

  • Recently, the diffraction barrier has been surpassed by simply introducing micro-scale dielectric spheres into conventional optical microscopes, transforming evanescent waves into propagating waves (refs. 18-30).
  • The resolution of this superlens-based microscopy has improved from an initial ~200 nm (ref. 21) to ~50 nm (ref. 26).
  • This can be further enhanced to ~25 nm when coupled with a scanning laser confocal microscope (ref. 31).
  • Biological applications have developed quickly: sub-diffraction-limited resolution with high-index liquid-immersed microspheres has been demonstrated (refs. 23, 32), enabling use in the aqueous environment required to maintain biological activity.
  • The microlens is a 57 um diameter BaTiO3 microsphere, with a resolution of lambda / 6.3 under partial and inclined illumination.
  • The microsphere is in contact with the surface during imaging, glued to the cantilever tip of an AFM.
  • Scanning the microsphere-lens yields an image with ~200x improved imaging performance (with a loss in quality, naturally).

{1346}
ref: -0 tags: super resolution imaging PALM STORM fluorescence date: 09-21-2016 05:57 gmt revision:0

PMID-23900251 Parallel super-resolution imaging

  • Christopher J Rowlands, Elijah Y S Yew, and Peter T C So
  • Though this is a brief Nature intro article, I found it to be more usefully clear than the wikipedia articles on super-resolution techniques.
  • STORM and PALM stochastically switch fluorophores between emitting and dark states, and are parallel but stochastic; STED and RESOLFT use high-intensity donut beams to de-excite fluorophores via stimulated emission (STED) or reversibly switch them off (RESOLFT) outside an arbitrarily small location.
    • The localization methods (STORM / PALM) need Gaussian fitting to estimate emitter position from the point-spread function (see the sketch below).
  • This article comments on a clever way of making 1e5 donuts for parallel (as opposed to rastered) STED / RESOLFT.
  • I doubt setting up a STED microscope is at all easy; to get these resolutions, everything must be still to within a few nm!
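
The Gaussian fit itself is simple; here is a hedged sketch (my own toy, not from the article) of localizing a single emitter in a camera ROI with sub-pixel precision:

import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, x0, y0, sigma, amp, bg):
    """Isotropic 2D Gaussian PSF model plus constant background."""
    x, y = xy
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + bg).ravel()

# Simulate an 11x11 pixel ROI containing a single emitter with shot noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:11, 0:11]
truth = (5.3, 4.7, 1.3, 200.0, 10.0)        # x0, y0, sigma (px), amplitude, background
img = rng.poisson(gauss2d((xx, yy), *truth).reshape(11, 11))

p0 = (5.0, 5.0, 1.5, float(img.max()), float(img.min()))   # rough initial guess
popt, _ = curve_fit(gauss2d, (xx, yy), img.ravel().astype(float), p0=p0)
print("fit center (x, y):", popt[0], popt[1])               # sub-pixel emitter location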

{572}
ref: bookmark-0 tags: memory supermemo learning psychology Hermann Ebbinghaus date: 05-08-2008 15:25 gmt revision:0

http://www.wired.com/medtech/health/magazine/16-05/ff_wozniak -- wonderful article, well written. Leaves you with a sense of Piotr Wozniak's (SuperMemo's inventor) crazy, slightly surreal, impassioned, purposeful, but self-regressive (and hence fundamentally stationary) life.

  • Quote: SuperMemo was like a genie that granted Wozniak a wish: unprecedented power to remember. But the value of what he remembered depended crucially on what he studied, and what he studied depended on his goals, and the selection of his goals rested upon the efficient acquisition of knowledge, in a regressive function that propelled him relentlessly along the path he had chosen.
  • http://www.wired.com/images/article/magazine/1605/ff_wozniak_graph_f.jpg
  • Quote: This should lead to radically improved intelligence and creativity. The only cost: turning your back on every convention of social life.

{230}
ref: engineering notes-0 tags: homopolar generator motor superconducting magnet date: 03-09-2007 14:39 gmt revision:0

http://hardm.ath.cx:88/pdf/homopolar.pdf

  • the magnets are energized in opposite directions, forcing the field lines to go normal to the rotor.
  • still need brushes - perhaps there is no way to avoid them in a homopolar generator.
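
For reference (my own addition, not from the linked pdf): a conducting disc of radius r spinning at angular velocity ω in a uniform axial field B develops an open-circuit EMF between axle and rim of

V = \int_0^r \omega B \rho \, d\rho = \frac{1}{2} \omega B r^2

which is why homopolar machines are inherently low-voltage and high-current -- and part of why the brush problem is so acute.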

{128}
ref: bookmark-0 tags: neuroanatomy pulvinar thalamus superior colliculus image gray brainstem date: 0-0-2007 0:0 revision:0

http://en.wikipedia.org/wiki/Image:Gray719.png --great, very useful!

{20}
ref: bookmark-0 tags: neural_networks machine_learning matlab toolbox supervised_learning PCA perceptron SOM EM date: 0-0-2006 0:0 revision:0

http://www.ncrg.aston.ac.uk/netlab/index.php n.b. kinda old. (or does that just mean well established?)