m8ta
{1498}
ref: -0 tags: lavis jf dyes fluorine zwitterion lactone date: 01-22-2020 20:06 gmt revision:0 [head]

Optimization and functionalization of red-shifted rhodamine dyes

  • The zwitterion form is fluorescent and colored; the lactone form is non-fluorescent and colorless.
  • The lactone form is lipophilic; a mix of the two forms seems more bioavailable and also results in fluorogenic dyes.
  • A good many experiments put fluorine either on the azetidines or on the benzyl ring.
  • Fluorine on the azetidine pushes the equilibrium $K_{Z-L}$ toward the lactone form; fluorine on the benzyl ring pushes it toward the zwitterion.
  • Si-rhodamine and P-rhodamine adopt the lactone form, and adding appropriate fluorines can make them fluorescent again, which makes for good red-shifted dyes, a la JF669.
  • N-CH3 can be substituted in the oxygen position too, resulting in a blue-shifted dye which is a good stand-in for EGFP.

{1495}
ref: -0 tags: multifactor synaptic learning rules date: 01-22-2020 01:45 gmt revision:9 [8] [7] [6] [5] [4] [3] [head]

Why multifactor?

  • Take a simple MLP. Let $x$ be the layer activation. $X^0$ is the input, $X^1$ is the second layer (first hidden layer). These are vectors, indexed like $x^a_i$.
  • Then $X^1 = W X^0$, or $x^1_j = \phi(\Sigma_{i=1}^N w_{ij} x^0_i)$. $\phi$ is the nonlinear activation function (ReLU, sigmoid, etc.)
  • In standard STDP the learning rule follows $\Delta w \propto f(x_{pre}(t), x_{post}(t))$, or, if the layer number is $a$, $\Delta w^{a+1} \propto f(x^a(t), x^{a+1}(t))$
    • (but of course nobody thinks there are 'numbers' on the 'layers' of the brain -- this just refers to pre- and post-synaptic).
  • In an artificial neural network, $\Delta w^a \propto -\frac{\partial E}{\partial w^a_{ij}} \propto -\delta^a_j x_i$ (intuitively: the weight change is proportional to the error propagated from higher layers times the input activity), where $\delta^a_j = (\Sigma_{k=1}^{N} w_{jk} \delta^{a+1}_k) \, \partial\phi$, and $\partial\phi$ is the derivative of the nonlinear activation function, evaluated at the given activation.
  • $f(i, j) \rightarrow [x, y, \theta, \phi]$
  • $k = 13.165$
  • $x = round(i / k)$
  • $y = round(j / k)$
  • $\theta = a(\frac{i}{k} - x) + b(\frac{i}{k} - x)^2$
  • $\phi = a(\frac{j}{k} - y) + b(\frac{j}{k} - y)^2$
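
The forward pass and backpropagated $\delta$ equations above can be sketched in a few lines of numpy. This toy two-layer network with a sigmoid $\phi$ and a squared-error loss is purely illustrative (sizes, seed, and learning rate are arbitrary choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(u):           # sigmoid activation
    return 1.0 / (1.0 + np.exp(-u))

def dphi(u):          # derivative of the sigmoid, evaluated at pre-activation u
    s = phi(u)
    return s * (1.0 - s)

# Two-layer network x0 -> x1 -> x2, squared error E = 0.5 * ||x2 - target||^2
N = 4
W1 = 0.5 * rng.normal(size=(N, N))   # weights into the hidden layer
W2 = 0.5 * rng.normal(size=(N, N))   # weights into the output layer
x0 = rng.normal(size=N)
target = np.zeros(N)

u1 = W1 @ x0; x1 = phi(u1)
u2 = W2 @ x1; x2 = phi(u2)

# Backpropagated errors: delta^a_j = (sum_k w_jk delta^{a+1}_k) * phi'(u^a_j)
delta2 = (x2 - target) * dphi(u2)     # output-layer error
delta1 = (W2.T @ delta2) * dphi(u1)   # propagated down to the hidden layer

# Weight change = (error from above) x (presynaptic activity): a two-factor rule
lr = 0.1
dW2 = -lr * np.outer(delta2, x1)
dW1 = -lr * np.outer(delta1, x0)

# A small gradient step should reduce the error
E_before = 0.5 * np.sum((x2 - target) ** 2)
x2_new = phi((W2 + dW2) @ phi((W1 + dW1) @ x0))
E_after = 0.5 * np.sum((x2_new - target) ** 2)
```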

{1497}
ref: -2017 tags: human level concept learning through probabalistic program induction date: 01-20-2020 15:45 gmt revision:0 [head]

PMID-26659050 Human-level concept learning through probabilistic program induction

  • Preface:
    • How do people learn new concepts from just one or a few examples?
    • And how do people learn such abstract, rich, and flexible representations?
    • How can learning that succeeds from such a sparse dataset also produce such rich representations?
    • For any theory of learning, fitting a more complicated model requires more data, not less, to achieve some measure of good generalization, usually measured as the difference between new and old examples.
  • Learning proceeds by constructing programs that best explain the observations under a Bayesian criterion, and the model 'learns to learn' by developing hierarchical priors that allow previous experience with related concepts to ease learning of new concepts.
  • These priors represent a learned inductive bias that abstracts the key regularities and dimensions of variation holding across both types of concepts and across instances.
  • BPL can construct new programs by reusing pieces of existing ones, capturing the causal and compositional properties of real-world generative processes operating on multiple scales.
  • Posterior inference requires searching the large combinatorial space of programs that could have generated a raw image.
    • Our strategy uses fast bottom-up methods (31) to propose a range of candidate parses.
    • That is, they reduce the character to a set of lines (series of line segments), then simplify the intersections of those lines, and run a series of parses to estimate the generation of those lines, with heuristic criteria to encourage continuity (e.g. no sharp angles, a penalty for abruptly changing direction, etc).
    • The most promising candidates are refined by using continuous optimization and local search, forming a discrete approximation to the posterior distribution P(program, parameters | image).

{1496}
ref: -2017 tags: locality sensitive hashing olfaction kenyon cells neuron sparse representation date: 01-18-2020 21:13 gmt revision:1 [0] [head]

PMID-29123069 A neural algorithm for a fundamental computing problem

  • General idea: locality-sensitive hashing, e.g. hashing that is sensitive to the high-dimensional locality of the input space, can be efficiently implemented by a circuit inspired by the insect olfactory system.
  • Here, activation of 50 different types of ORNs is mapped to 50 projection neurons, which 'centers the mean' -- concentration dependence is removed.
  • This is then projected via a random matrix of sparse binary weights to a much larger set of Kenyon cells, which in turn are inhibited by one APL neuron.
  • Normal locality-sensitive hashing uses dense matrices of Gaussian-distributed random weights, which means higher computational complexity...
  • ... these projections are governed by the Johnson-Lindenstrauss lemma, which says that projection from high-d to low-d space can preserve locality (distance between points) within an error bound.
  • Show that the WTA selection of the top 5% plus random binary weight preserves locality as measured by overlap with exact input locality on toy data sets, including MNIST and SIFT.
  • Flashy title as much as anything else got this into Science... indeed, has only been cited 6 times in Pubmed.
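
The circuit described above can be sketched in a few lines of numpy. The sizes, seed, and the ~6-of-50 sampling fraction are illustrative stand-ins for the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

d, m = 50, 2000          # 50 projection neurons -> 2000 Kenyon cells
k = int(0.05 * m)        # APL inhibition keeps only the top 5% of KCs active

# Sparse binary random projection: each KC samples a handful of the 50 PNs
M = (rng.random((m, d)) < 6 / d).astype(float)

def fly_hash(x):
    x = x - x.mean()                 # PNs "center the mean" (remove concentration)
    y = M @ x                        # expand to Kenyon cells
    tags = np.zeros(m)
    tags[np.argsort(y)[-k:]] = 1.0   # winner-take-all: top 5% fire
    return tags

# Nearby inputs should share more active KCs than distant ones
x = rng.random(d)
x_near = x + 0.01 * rng.random(d)    # small perturbation of x
x_far = rng.random(d)                # unrelated input

overlap_near = fly_hash(x) @ fly_hash(x_near)
overlap_far = fly_hash(x) @ fly_hash(x_far)
```

The overlap of the binary tags serves as the locality measure: similar odors (inputs) map to heavily overlapping sets of active Kenyon cells, distant ones to nearly disjoint sets.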

{1494}
ref: -2014 tags: dopamine medium spiny neurons calcium STDP PKA date: 01-07-2020 03:43 gmt revision:2 [1] [0] [head]

PMID-25258080 A critical time window for dopamine actions on the structural plasticity of dendritic spines

  • Remarkably short time window for dopamine to modulate / modify (aggressive) STDP protocol.
  • Showed with the low-affinity calcium indicator Fluo4-FF that peak calcium concentrations in spines is not affected by optogenetic stimulation of dopamine fibers.
  • However, CaMKII activity is modulated by DA activity -- when glutamate uncaging and depolarization were followed by optogenetic stimulation of DA fibers, the FRET sensor Camui-CR reported significant increases in CaMKII activity.
  • This increase was abolished by application of a DARPP-32-inhibiting peptide, which blocks the interaction of dopamine- and cAMP-regulated phosphoprotein, 32 kDa (DARPP-32) with protein phosphatase 1 (PP-1)
    • Spine enlargement was induced in the absence of optogenetic dopamine when PP-1 was inhibited by calyculin A...
    • Hence, phosphorylation of DARPP-32 by PKA inhibits PP-1 and disinhibits CaMKII. (This causal inference seems loopy; they reference a hippocampal paper, [18])
  • To further test this, they used a FRET probe of PKA activity, AKAR2-CR. This sensor showed that PKA activity extends throughout the dendrite, not just the stimulated spine, and can respond to DA release directly.

{1493}
ref: -0 tags: nonlinear hebbian synaptic learning rules projection pursuit date: 12-12-2019 00:21 gmt revision:4 [3] [2] [1] [0] [head]

PMID-27690349 Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation

  • Here we show that the principle of nonlinear Hebbian learning is sufficient for receptive field development under rather general conditions.
  • The nonlinearity is defined by the neuron’s f-I curve combined with the nonlinearity of the plasticity function. The outcome of such nonlinear learning is equivalent to projection pursuit [18, 19, 20], which focuses on features with non-trivial statistical structure, and therefore links receptive field development to optimality principles.
  • $\Delta w \propto x \, h(g(w^T x))$, where $h$ is the Hebbian plasticity term, $g$ is the neuron's f-I curve (input-output relation), and $x$ is the (sensory) input.
  • The relevant property of natural image statistics is that the distribution of features derived from typical localized oriented patterns has high kurtosis [5,6, 39]
  • Model is a generalized leaky integrate and fire neuron, with triplet STDP
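
A minimal numpy sketch of such a nonlinear Hebbian rule, run on a toy input with one high-kurtosis (Laplacian) direction buried in Gaussian noise. The cubic $g$, identity $h$, weight normalization, and input statistics are illustrative choices, not the paper's integrate-and-fire / triplet-STDP model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Input: one heavy-tailed (Laplacian) feature direction plus Gaussian noise
d = 10
a = np.zeros(d); a[0] = 1.0                   # the "feature" direction
n_samples = 5000
s = rng.laplace(size=n_samples)               # high-kurtosis source
X = np.outer(s, a) + 0.3 * rng.normal(size=(n_samples, d))

# Nonlinear Hebbian rule dw ∝ x h(g(w^T x)), with g(u) = u^3, h = identity,
# and weight normalization to keep |w| = 1 (projection-pursuit-like dynamics)
w = rng.normal(size=d)
w /= np.linalg.norm(w)
eta = 0.05
for _ in range(100):
    y = X @ w                                          # postsynaptic drive
    w = w + eta * (X * (y ** 3)[:, None]).mean(axis=0) # batch-averaged update
    w /= np.linalg.norm(w)                             # normalization

alignment = abs(w @ a)   # how well the weights found the kurtotic feature
```

Because the cubic nonlinearity emphasizes large responses, the weight vector rotates toward the heavy-tailed direction, which is the projection-pursuit interpretation the paper makes.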

{1492}
ref: -2016 tags: spiking neural network self supervised learning date: 12-10-2019 03:41 gmt revision:2 [1] [0] [head]

PMID: Spiking neurons can discover predictive features by aggregate-label learning

  • This is a meandering, somewhat long-winded, and complicated paper, even for the journal Science. It's not been cited a great many times, but nonetheless it is of interest.
  • The goal of the derived network is to detect fixed-pattern presynaptic sequences, and fire a prespecified number of spikes to each occurrence.
  • One key innovation is the use of a spike-threshold-surface for a 'tempotron' [12], the derivative of which is used to update the weights of synapses after trials. As the author says, spikes are hard to differentiate; the STS makes this more possible. This is hence standard gradient descent: if the neuron missed a spike then the weight is increased based on aggregate STS (for the whole trial -- hence the neuron / SGD has to perform temporal and spatial credit assignment).
    • As is common, the SGD is augmented with a momentum term.
  • Since STS differentiation is biologically implausible -- where would the memory lie? -- he also implements a correlational synaptic eligibility trace. The correlation is between the postsynaptic voltage and the EPSC, which seems kinda circular.
    • Unsurprisingly, it does not work as well as the SGD approximation. But does work...
  • The second innovation is the incorporation of self-supervised learning: a 'supervisory' neuron integrates the activity of a number (50) of feature-detector neurons, and reinforces them to basically all fire at the same event, WTA style. This effects an unsupervised feature detection.
  • This system can be used with sort-of lateral inhibition to reinforce multiple features. Not so dramatic -- continuous feature maps.

Editorializing a bit: I said this was interesting, but why? The first part of the paper is another form of SGD, albeit in a spiking neural network, where the gradient is harder to compute and hence is done numerically.

It's the aggregate part that is new -- pulling in repeated patterns through synaptic learning rules. Of course, to do this, the full trace of pre and post synaptic activity must be recorded (??) for estimating the STS (i think). An eligibility trace moves in the right direction as a biologically plausible approximation, but as always nothing matches the precision of SGD. Can the eligibility trace be amended with e.g. neuromodulators to push the performance near that of SGD?

The next step of adding self supervised singular and multiple features is perhaps toward the way the brain organizes itself -- small local feedback loops. These features annotate repeated occurrences of stimuli, or tile a continuous feature space.

Still, the fact that I haven't seen any follow-up work is suggestive...


Editorializing further, there is a limited quantity of work that a single human can do. In this paper, it's a great deal of work, no doubt, and the author offers some good intuitions for the design decisions. Yet still, the total complexity that even a very determined individual can amass is limited, and likely far below the structural complexity of a mammalian brain.

This implies that inference either must be distributed and compositional (the normal path of science), or the process of evaluating & constraining models must be significantly accelerated. This latter option is appealing, as current progress in neuroscience seems highly technology-limited -- old results become less meaningful when the next wave of measurement tools comes around, irrespective of how much work went into them. (Though: the impetus for measuring a particular thing in biology is only discovered through these 'less meaningful' studies...)

A third option, perhaps one which many theoretical neuroscientists believe in, is that there are some broader, physics-level organizing principles to the brain. Karl Friston's free energy principle is a good example of this. Perhaps at a meta level some organizing theory can be found, or more likely a set of theories; but IMHO, you'll need at least one theory per brain area, just as each area is morphologically, cytoarchitecturally, and topologically distinct. (There may be only a few theories of the cortex, despite all the areas, which is why so many are eager to investigate it!)

So what constitutes a theory? Well, you have to meaningfully describe what a brain region does. (Why is almost as important; how more important to the path there.) From a sensory standpoint: what information is stored? What processing gain is enacted? How does the stored information impress itself on behavior? From a motor standpoint: how are goals selected? How are the behavioral segments to attain them sequenced? Is the goal / behavior even a reasonable way of factoring the problem?

Our dual problem, building the bridge from the other direction, is perhaps easier. Or it could be that a lot more money has gone into it. Either way, much progress has been made in AI. One arm is deep function approximation / database compression for fast and organized indexing, aka deep learning. Many people are thinking about that; no need to add to the pile; anyway, as OpenAI has shown, the common solution to many problems is to simply throw more compute at them. A second arm is deep reinforcement learning, which is hideously sample- and path-inefficient, hence ripe for improvement. One side is motor: rather than indexing raw motor variables (LRUD in a video game, or joint torques on a robot) you can index motor primitives, perhaps hierarchically built; likewise, for the sensory input, the model needs to infer structure about the world. This inference should decompose overwhelming sensory experience into navigable causes ...

But how can we do this decomposition? The cortex is more than adept at it, but now we're at the original problem, one that the paper above purports to make a stab at.

{1491}
ref: -0 tags: dLight1 dopamine imaging Tian date: 12-05-2019 17:27 gmt revision:0 [head]

PMID-29853555 Ultrafast neuronal imaging of dopamine dynamics with designed genetically encoded sensors

  • cpGFP based sensor. $\Delta F / F \approx 3$.

{1490}
ref: -2011 tags: two photon cross section fluorescent protein photobleaching Drobizhev date: 12-05-2019 17:04 gmt revision:1 [0] [head]

PMID-21527931 Two-photon absorption properties of fluorescent proteins

  • Significant 2-photon cross section of red fluorescent proteins (same chromophore as DsRed) in the 700 - 770nm range, accessible to Ti:sapphire lasers ...
    • This corresponds to an $S_0 \rightarrow S_n$ transition
    • However, photobleaching is an order of magnitude slower when excited by the direct $S_0 \rightarrow S_1$ transition (though the fluorophores are significantly less bright in this regime).
    • See also PMID-18027924
  • 2P cross-section in the 1000-1100 nm range corresponds to the chromophore polarizability, and is not related to 1p cross section.

{1489}
ref: -0 tags: surface plasmon resonance voltage sensing antennas PEDOT imaging spectroscopy date: 12-05-2019 16:47 gmt revision:1 [0] [head]

Electro-plasmonic nanoantenna: A nonfluorescent optical probe for ultrasensitive label-free detection of electrophysiological signals

  • Use spectroscopy to measure extracellular voltage, via plasmon concentrated electrochromic effects in doped PEDOT.

{1488}
ref: -0 tags: multimode fiber imaging date: 11-15-2019 03:10 gmt revision:2 [1] [0] [head]

PMID-30588295 Subcellular spatial resolution achieved for deep-brain imaging in vivo using a minimally invasive multimode fiber

  • Oh wow wowww
  • Imaged through a 50um multimode optical fiber!
  • The multimode scattering matrix was inverted through an LC-SLM

{1487}
ref: -0 tags: adaptive optics sensorless retina fluorescence imaging optimization zernicke polynomials date: 11-15-2019 02:51 gmt revision:0 [head]

PMID-26819812 Wavefront sensorless adaptive optics fluorescence biomicroscope for in vivo retinal imaging in mice

  • Idea: use backscattered and fluorescence light to optimize the confocal image through imperfect optics ... and the lens of the mouse eye.
    • Optimization was based on hill-climbing / line search over each Zernike polynomial term for the deformable mirror. (The mirror had to be characterized beforehand, naturally.)
    • No guidestar was needed!
  • Were able to resolve the dendritic processes of EGFP labeled Thy1 ganglion cells and Cx3 glia.

{1486}
ref: -2019 tags: non degenerate two photon excitation fluorophores fluorescence OPO optical parametric oscillator date: 10-31-2019 20:53 gmt revision:0 [head]

Efficient non-degenerate two-photon excitation for fluorescence microscopy

  • Used an OPO + delay line to show that non-degenerate excitation (i.e. photons of two different energies) can induce greater fluorescence, normalized to input energy, than normal degenerate (same-energy) excitation.

{1485}
ref: -2015 tags: PaRAC1 photoactivatable Rac1 synapse memory optogenetics 2p imaging mouse motor skill learning date: 10-30-2019 20:35 gmt revision:1 [0] [head]

PMID-26352471 Labelling and optical erasure of synaptic memory traces in the motor cortex

  • Idea: use Rac1, which has been shown to induce spine shrinkage, coupled to a light-activated domain to allow for optogenetic manipulation of active synapses.
  • PaRac1 was coupled to a deletion mutant of PSD95, PSD delta 1.2, which concentrates at the postsynaptic site, but cannot bind to postsynaptic proteins, thus minimizing the undesirable effects of PSD-95 overexpression.
    • PSD-95 is rapidly degraded by proteosomes
    • This gives spatial selectivity.
  • They then exploited the dendritic targeting element (DTE) of Arc mRNA, which is selectively targeted and translated in activated dendritic segments in response to synaptic activation, in an NMDA receptor dependent manner.
    • Thereby giving temporal selectivity.
  • Construct is then PSD-PaRac1-DTE; this was tested on hippocampal slice cultures.
  • Improved sparsity and labelling further by driving it with the Arc promoter.
  • Motor learning is impaired in Arc KO mice; hence inferred that the induction of AS-PaRac1 by the Arc promoter would enhance labeling during learning-induced potentiation.
  • Delivered construct via in-utero electroporation.
  • Observed rotarod-induced learning; the PaRac signal decayed after two days, but the spine volume persisted in spines that showed Arc / DTE hence PA labeled activity.
  • Now, since they had a good label, performed rotarod training followed by (at variable delay) light pulses to activate Rac, thereby suppressing recently-active synapses.
    • Observed a depression of behavioral performance.
    • Controlled with a second task; could selectively impair performance on one of the tasks based on ordering/timing of light activation.
  • The localized probe also allowed them to image the synapse populations active for each task, which were largely non-overlapping.

{1484}
ref: -0 tags: carbon capture links date: 10-18-2019 14:20 gmt revision:0 [head]

Carbon capture links:

{1483}
ref: -0 tags: Lucy Flavin mononucelotide FAD FMN fluorescent protein reporter date: 10-17-2019 19:54 gmt revision:1 [0] [head]

PMID-25906065 LucY: A Versatile New Fluorescent Reporter Protein

{1482}
ref: -2019 tags: meta learning feature reuse deepmind date: 10-06-2019 04:14 gmt revision:1 [0] [head]

Rapid learning or feature reuse? Towards understanding the effectiveness of MAML

  • It's feature re-use!
  • Show this by freezing the weights of a 5-layer convolutional network when training on Mini-imagenet, either 5-way 1-shot or 5-way 5-shot.
  • From this derive ANIL, where only the last network layer is updated in task-specific training.
  • Show that ANIL works for basic RL learning tasks.
  • This means, roughly, that the network does not benefit much from joint encoding -- encoding both the task at hand and the feature set. Features can be learned independently of the task (at least these tasks), with little loss.
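
The ANIL idea can be sketched in miniature: freeze a "body" (feature extractor) and adapt only the last-layer "head" on a new task. Here the body is a random fixed projection and the task is a toy 5-way classification problem -- stand-ins for a meta-trained network and Mini-imagenet, chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "body": a fixed random ReLU feature extractor (never updated below)
d_in, d_feat, n_cls = 20, 32, 5
W_body = rng.normal(size=(d_feat, d_in)) / np.sqrt(d_in)

def features(x):
    return np.maximum(0.0, W_body @ x)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A toy 5-way task: one prototype per class, a few noisy "shots" of each
protos = rng.normal(size=(n_cls, d_in))
shots = [(protos[c] + 0.1 * rng.normal(size=d_in), c)
         for c in range(n_cls) for _ in range(5)]

# Task-specific adaptation: cross-entropy gradient steps on the head ONLY
W_head = np.zeros((n_cls, d_feat))
lr = 0.1
for _ in range(50):
    grad = np.zeros_like(W_head)
    for x, c in shots:
        f = features(x)
        p = softmax(W_head @ f)
        p[c] -= 1.0                   # dE/dlogits for cross-entropy
        grad += np.outer(p, f)
    W_head -= lr * grad / len(shots)

# Accuracy on fresh samples from the same task
correct = sum(int(np.argmax(W_head @ features(protos[c] +
              0.1 * rng.normal(size=d_in))) == c) for c in range(n_cls))
accuracy = correct / n_cls
```

If fixed features suffice for near-perfect adaptation here, that mirrors the paper's claim that MAML's inner loop is mostly reusing features rather than rapidly re-learning them.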

{1479}
ref: -0 tags: ETPA entangled two photon absorption Goodson date: 09-24-2019 02:25 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

Can we image biological tissue with entangled photons?

How much fluorescence can we expect, based on reasonable concentrations & published ETPA cross sections?

Start with Beer's law: $A = \sigma L N$, where $A$ = absorbance; $L$ = sample length, 10 μm = 1e-3 cm; $N$ = concentration, 10 μmol/l; $\sigma$ = cross-section, for ETPA assume $2.4e-18 \, cm^2$/molecule (this is based on an FMN-based fluorophore; the actual cross-section may be higher). Including Avogadro's number and $1 \, l = 1000 \, cm^3$, $A = 1.45e-5$.

Now, add in quantum efficiency $\phi = 0.8$ (rhodamine); collection efficiency $\eta = 0.2$; and an incoming photon pair flux of $I = 1e12$ photons/sec/mode (which is roughly the limit for quantum behavior; n = 0.1 photons/mode; will add this calculation).

$F = \phi \eta \sigma L N I = 2.3e6$ photons/sec. This is very low, but within practical imaging limits. As a comparison, incoherent 2p imaging creates ~ 100 photons per pulse, of which 10 make it to the detector; for 512 x 512 pixels at 15fps, the dwell time on each pixel is 20 pulses of an 80 MHz Ti:Sapphire laser, or ~ 200 photons.

Note the pair flux is per optical mode; for a typical application, we'll use a Nikon 16x objective with a 600 μm Ø FOV and 0.8 NA. At an 800 nm imaging wavelength, the diffraction limit is 0.5 μm. This equates to about 7e5 addressable modes in the FOV. Then an illumination of 1e12 photons/sec/mode equates to 7e17 photons over the whole field; if each photon pair has an energy of 2.75 eV ($\lambda = 450$ nm), this is equivalent to 300 mW. 100 mW is a more reasonable limit, hence scale the incoming flux to 2.3e17 pairs/sec.

Hence, the imaging mode is power limited, and not quantum limited (if you could get such a bright entangled source). And right now that's the limit -- for a BBO crystal, circa 1998, experimenters were getting 1e4 photons/sec/mW. So, 2.3e17 pairs/sec would require 23 GW. Yikes.

More efficient entangled sources have been developed, using periodically-poled potassium titanyl phosphate (PPKTP), which (again assuming linearity) puts the power requirement at 23 MW. This is within reach of q-switched lasers, but still incredibly inefficient. The down-conversion process is not linear in intensity, which is why Goodson pumps with SHG from a Ti:sapphire to yield ~1e7 photons; but this also induces temporal correlations which increase the frequency of incoherent TPA.

Still, combining PPKTP with a Ti:sapphire laser could result in 1e13 photons/sec, which is sufficient for scanned microscopy. Since the laser is pulsed, it will still be subject to incoherent TPA; but that's OK, the point is to reduce the power going into the animal via the larger ETPA cross-section. The answer to the question above is a tentative yes. Upon the development of brighter entangled sources (e.g. arrays of quantum structures), this can move to fully widefield imaging.
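
The photon-budget arithmetic above, collected into one script (all values are the note's assumptions; the mode count is the quoted ~7e5):

```python
N_A = 6.022e23             # Avogadro's number, 1/mol
eV = 1.602e-19             # J per electron-volt

# Beer's law absorbance for the assumed ETPA cross-section
sigma = 2.4e-18            # cm^2 / molecule (assumed ETPA cross-section)
L = 10e-4                  # cm (10 um path length)
C = 10e-6                  # mol/l fluorophore concentration
N = C * N_A / 1000.0       # molecules / cm^3
A = sigma * L * N          # absorbance, ~1.45e-5

# Detected fluorescence rate for one mode
phi_q = 0.8                # quantum efficiency (rhodamine-like)
eta = 0.2                  # collection efficiency
I = 1e12                   # photon pairs / sec / mode
F = phi_q * eta * A * I    # detected photons / sec, ~2.3e6

# Whole-field power budget
modes = 7e5                # addressable modes in the 600 um FOV (quoted)
pairs_total = modes * I    # pairs / sec over the whole field, ~7e17
E_pair = 2.75 * eV         # J per pair (450 nm equivalent)
power = pairs_total * E_pair   # ~0.3 W, hence the scale-down to 100 mW
```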

{1481}
ref: -0 tags: co2 capture entropy carbon dioxide date: 09-22-2019 00:46 gmt revision:1 [0] [head]

How much energy is thermodynamically required to concentrate $CO_2$ from one liter of air?

$CO_2$ concentration is about 400 ppm, i.e. 0.04%, a mole fraction of 4e-4. 1 l of air is 1/22.4 or 44.6 mmol. From Wikipedia, the entropy of mixing is:

$\Delta_{mix} S = -nR(x_1 \ln(x_1) + x_2 \ln(x_2))$, where $x_1$ and $x_2$ are the mole fractions of air and $CO_2$ (0.9996 and 0.0004). Note the minus sign: mixing increases entropy, so un-mixing costs at least $T \Delta_{mix} S$ of work.

This works out to $1.3e-3 \, J/K$ per liter. At 300 K, this means you need only about 0.4 J to extract the carbon dioxide from one liter of air.

A car driving 1 km emits about 150 g of carbon dioxide. This is 3.4 moles, which will diffuse into ~8500 moles of air, or 190e3 liters of air (190 cubic meters). To pull this back out of the air you'd need at minimum ~75 kJ.

This is not much at all -- a car produces on the order of 100 kW mechanical power, or 100 kJ every second, and presumably it takes a minute to drive that 1 km. But such perfectly efficient purification is not possible.
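
This back-of-the-envelope can be checked in a few lines (note that 400 ppm is a mole fraction of 4e-4, i.e. 0.04%):

```python
import math

R = 8.314            # J / (mol K), gas constant
T = 300.0            # K
n = 1 / 22.4         # mol of gas in one liter at STP

# Mole fractions: atmospheric CO2 is ~400 ppm = 0.04%
x_co2 = 400e-6
x_air = 1 - x_co2

# Entropy of mixing (the minus sign makes dS positive: mixing raises entropy)
dS = -n * R * (x_air * math.log(x_air) + x_co2 * math.log(x_co2))
w_min_per_liter = T * dS         # minimum un-mixing work per liter, ~0.4 J

# A 1-km drive: 150 g CO2 = ~3.4 mol, dispersed at 400 ppm
mol_co2 = 150 / 44.0
liters = mol_co2 / x_co2 * 22.4         # liters of air it mixes into, ~1.9e5
w_min_total = liters * w_min_per_liter  # ~75 kJ
```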

{1474}
ref: -0 tags: ETPA entangled two photon absorption Goodson date: 09-19-2019 15:49 gmt revision:13 [12] [11] [10] [9] [8] [7] [head]

Various papers put out by the Goodson group:

And from a separate group at Northwestern:

  • Entangled Photon Resonance Energy Transfer in Arbitrary Media
    • Suggests three orders of magnitude improvement in cross-section relative to incoherent TPA.
    • In SPDC, photon pairs are generated randomly and usually accompanied by undesirable multipair emissions.
      • For solid-state artificial atomic systems with radiative cascades (single quantum emitters like quantum dots), the quantum efficiency is near unity.
    • Paper is highly mathematical, and deals with resonance energy transfer (which is still interesting)

Regarding high fluence sources, quantum dots / quantum structures seem promising.