m8ta

{1538}  
PMID-20596024 Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex
Cortical reliability amid noise and chaos
 
{1529}  
DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
This paper describes a system for adaptively finding programs which succinctly and accurately produce desired output. These desired outputs are provided by the user / test system, and come from a number of domains:
Also in the lineage is the EC2 algorithm, which most of the same authors published in 2018. EC2 centers around the idea of "explore-compress": explore solutions to your program-induction problem during the 'wake' phase, then compress the observed programs into a library by extracting / factoring out commonalities during the 'sleep' phase. This of course is one of the core algorithms of human learning: explore options, keep track of both what worked and what didn't, search for commonalities among the options & their effects, and use these inferred laws or heuristics to further guide search and goal-setting, thereby building a buffer against the curse of dimensionality. Making the inferred laws themselves functions in a programming library allows hierarchically factoring the search task, making exploration of unbounded spaces possible. This advantage is unique to the program-synthesis approach. This much is said in the introduction, though perhaps with more clarity.

DreamCoder is an improved, more-accessible version of EC2, though the underlying ideas are the same. It differs in that the method for constructing libraries has improved through the addition of a powerful version space for enumerating and evaluating refactorings of the solutions generated during the wake phase. (I will admit that I don't much understand the version-space system.) This version space allows DreamCoder to collapse the search space for refactorings by many orders of magnitude, and seems to be a clear advancement. Furthermore, DreamCoder incorporates a second phase of sleep: "dreaming", hence the moniker. During dreaming the library is used to create 'dreams' consisting of combinations of the library primitives, which are then executed with training data as input. These dreams are then used to train up a neural network to predict which library and atomic objects to use in given contexts.
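To make the explore-compress idea concrete, here is a toy sketch (my own stand-in, not the paper's code): programs are nested tuples over a tiny arithmetic DSL, the "wake" solutions are hand-written, and "compression" just pulls the most frequent repeated subtree out as a named library primitive.

```python
from collections import Counter

def subtrees(prog):
    """Yield every subexpression of a program tree."""
    yield prog
    if isinstance(prog, tuple):
        for arg in prog[1:]:
            yield from subtrees(arg)

def compress(solutions):
    """Sleep phase: find the most common non-trivial subtree and name it."""
    counts = Counter(t for p in solutions for t in subtrees(p)
                     if isinstance(t, tuple))
    common, n = counts.most_common(1)[0]
    return common if n > 1 else None

# Wake phase produced these solutions (hand-written here for brevity):
solutions = [
    ('add', ('mul', 'x', 'x'), 1),    # x*x + 1
    ('sub', ('mul', 'x', 'x'), 'x'),  # x*x - x
    ('mul', ('mul', 'x', 'x'), 2),    # x*x * 2
]
new_primitive = compress(solutions)
print(new_primitive)   # ('mul', 'x', 'x') -> library gains a 'square' primitive
```

The real system does vastly more (refactoring via version spaces rather than literal subtree matching), but the loop structure is the same: solve, then factor the solutions into the library so future searches are shallower.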
Context in this case is where in the parse tree a given object has been inserted (its parent and which argument number it sits in); how the data-context is incorporated to make this decision is not clear to me (???). This neural, dream- and replay-trained network is either a GRU recurrent net with 64 hidden states, or a convolutional network feeding into a RNN. The final stage is a linear ReLU (???), and again it is not clear how it feeds into the prediction of "which unit to use when". The authors clearly demonstrate that the network, or the probabilistic context-free grammar that it controls (?), is capable of straightforward optimizations, like breaking symmetries due to commutativity, avoiding adding zero, avoiding multiplying by one, etc. Beyond this, they do demonstrate via an ablation study that the presence of the neural network affords significant algorithmic leverage in all of the problem domains tested. The network also seems to learn a reasonable representation of the subtype of task encountered, but a thorough investigation of how it works, or how it might be made to work better, remains desired.

I've spent a little time looking around the code, which is a mix of Python high-level experimental-control code and lower-level OCaml code responsible for running (emulating) the lisp-like DSL, inferring types in its polymorphic system / reconciling types in evaluated program instances, maintaining the library, and recompressing it using the aforementioned version spaces. The code, like many things experimental, is clearly a work in progress, with some old or unused code scattered about, glue to run the many experiments & record / analyze the data, and personal notes from the first author for making his job talks (! :). The description in the supplemental materials, which is satisfyingly thorough (if again impenetrable w.r.t. version spaces), is readily understandable, suggesting that one (presumably the first) author has a clear understanding of the system.
It doesn't appear that much is being hidden or glossed over, which is not the case for all scientific papers. With the caveat that I don't claim to understand the system to completion, there are some clear areas where the existing system could be augmented further. The 'recognition' or perceptual module, which guides actual synthesis of candidate programs, could realistically use all the information available in DreamCoder: full lexical and semantic scope, full input-output specifications, type information, possibly run-time binding of variables when filling holes. This is motivated by the way that humans solve problems, at least as observed by introspection:
Critical to making this work is to have, as I've written in my notes many years ago, a 'self-compressing and factorizing memory'. The version-space magic + library could be considered a working example of this. In the realm of ANNs, per recent OpenAI results with CLIP and DALL-E, really big transformers also seem to have strong compositional abilities, with the caveat that they need to be trained on segments of the whole web. (This wouldn't be an issue here, as DreamCoder generates a lot of its own training data via dreams.) Despite the data-inefficiency of DNNs / transformers, they should be sufficient for making something in the spirit of the above work, with a lot of compute, at least until more efficient models are available (which they should be shortly; see AlphaZero vs MuZero).
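The recognition network's job as described above, predicting which primitive to expand given the parse-tree context (parent primitive and argument slot), can be caricatured like this (a toy of my own, with a hand-written lookup table standing in for the trained network):

```python
import random

# Context = (parent primitive, argument slot); value = weights over the
# next primitive to try. A trained network would emit these weights.
WEIGHTS = {
    (None, 0):  {'add': 0.5, 'mul': 0.4, 'x': 0.1},  # root: prefer operators
    ('add', 0): {'mul': 0.6, 'x': 0.4},
    ('add', 1): {'x': 0.7, 'mul': 0.3},
    ('mul', 0): {'x': 0.9, 'add': 0.1},
    ('mul', 1): {'x': 1.0},
}
ARITY = {'add': 2, 'mul': 2, 'x': 0}

def sample(context=(None, 0), depth=0):
    """Sample a program top-down, biasing each choice by its context."""
    if depth >= 2:                      # force a leaf at the depth limit
        return 'x'
    w = WEIGHTS[context]
    prim = random.choices(list(w), weights=list(w.values()))[0]
    args = [sample((prim, i), depth + 1) for i in range(ARITY[prim])]
    return (prim, *args) if args else prim

print(sample())   # e.g. ('add', ('mul', 'x', 'x'), 'x')
```

In DreamCoder the weighted enumeration is over a typed probabilistic grammar and the search is best-first rather than sampled, but the leverage comes from the same place: context-conditioned weights prune the exponential tree of candidate programs.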
{1528}  
Discovering hidden factors of variation in deep networks
 
{1524} 
ref: 2020
tags: replay hippocampus variational autoencoder
date: 10-11-2020 04:09 gmt
revision:1


Braininspired replay for continual learning with artificial neural networks
 
{1510} 
ref: 2017
tags: google deepmind compositional variational autoencoder
date: 04-08-2020 01:16 gmt
revision:7


SCAN: learning hierarchical compositional concepts
 
{1428}  
PMID-30420685 Fast in-vivo voltage imaging using a red fluorescent indicator
 
{1454}  
Building High-level Features Using Large Scale Unsupervised Learning
 
{1443}  
PMID-27545715 High-Throughput Mapping of Single-Neuron Projections by Sequencing of Barcoded RNA.
 
{1289}  
images/1289_1.pdf  Debugging reinvented: Asking and Answering Why and Why not Questions about Program Behavior.
 
{729}  
IEEE-4358095 (pdf) An Ultra-Low-Power Neural Recording Amplifier and its use in Adaptively-Biased Multi-Amplifier Arrays.
 
{1007}  
IEEE-5910570 (pdf) Spiking neural network decoder for brain-machine interfaces
____References____ Dethier, J., Gilja, V., Nuyujukian, P., Elassaad, S.A., Shenoy, K.V., and Boahen, K. Neural Engineering (NER), 2011 5th International IEEE/EMBS Conference on, 396-399 (2011)
{1008}  
IEEE-5946801 (pdf) A low-power implantable neuroprocessor on nano-FPGA for Brain-Machine interface applications
____References____ Fei Zhang, Aghagolzadeh, M., and Oweiss, K. Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, 1593-1596 (2011)
{926}  
PMID-10196571[0] Simultaneous encoding of tactile information by three primate cortical areas
 
{583}  
From this and the USB 2.0 spec, I made this quick (totally incomprehensible?) key for understanding the output of commands like:
# mount -t debugfs none_debugs /sys/kernel/debug
# modprobe usbmon
# cat /sys/kernel/debug/usbmon/2u
To be used with the tables from the (free) USB 2.0 spec:
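As a starting point for such a key, the leading fields of a usbmon text line can be split mechanically. This is a rough sketch based on my reading of the kernel's usbmon documentation (Documentation/usb/usbmon); the trailing words vary by transfer type and are left unparsed, and the sample line is illustrative, not captured output:

```python
def parse_usbmon(line):
    """Split one usbmon 'u'-format text line into its leading fields:
    URB tag, timestamp (us), event type, and the Type:Bus:Dev:Ep word."""
    tag, timestamp, event, address, *rest = line.split()
    xfer, bus, dev, ep = address.split(':')
    return {
        'tag': tag,
        'timestamp_us': int(timestamp),
        'event': {'S': 'submit', 'C': 'complete', 'E': 'error'}[event],
        'transfer': {'Ci': 'control in',   'Co': 'control out',
                     'Bi': 'bulk in',      'Bo': 'bulk out',
                     'Ii': 'interrupt in', 'Io': 'interrupt out',
                     'Zi': 'isoc in',      'Zo': 'isoc out'}[xfer],
        'bus': int(bus), 'device': int(dev), 'endpoint': int(ep),
        'rest': rest,   # setup bytes / status / data, per transfer type
    }

line = 'ffff88003b9c2d00 1297526880 S Ci:1:001:0 s a3 00 0000 0003 0004 <'
print(parse_usbmon(line)['transfer'])   # control in
```

For control transfers the 's' word and the four hex words after it are the setup packet (bmRequestType, bRequest, wValue, wIndex, wLength), which is where the USB 2.0 spec tables come in.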
{647}  
http://www.linux-mag.com/id/7187 has a very interesting and very well-applied analogy between programs and laws. I am inclined to believe that they really are not all that different; legalese is structured and convoluted the way it is because it is, in effect, a programming language for laws, hence must be precise and unambiguous. Furthermore, the article is well written and evidences structured and balanced thought (via appropriate references to the real world). And he uses Debian ;)
{439}  
abandoned because I realized that I could work on 2 channels at once (as there are 2 MACs onboard) & could use the s2rnd multiply-accumulate flag & could load registers 32 bits at a time! ah well, might as well archive my efforts :)

r6.h = 2048;
r0.l = r0.l - r6.h (s) || r1.l = w[i0++] || r2.l = w[i1++]; //subtract offset, load a1[0] into r1.l, w1[0] into r2.l
a0 = r0.l * r1.l (is) || r1.h = w[i0++]; //mac in*a1[0], load a[1] to r1.h
a0 += r2.l * r1.h (is) || r1.l = w[i0++] || r2.h = w[i1]; //mac w[0]*a1[1], load a1[2] into r1.l, w1[1] to r2.h
r4 = (a0 += r2.h * r1.l) (is) || r3.l = w[i0++]; //mac w1[1]*a1[2] store to r4, b1[0] to r3.l
r4 = r4 >>> 14 || r3.h = w[i0++]; //arithmetic right shift, 32 bit inst, b1[1] to r3.h, r4 is new w1.
a0 = r4.l * r3.l (is) || w[i1++] = r4.l; //mac w1*b1[0], save w1 into w1[0]
a0 += r2.l * r3.h (is) || w[i1++] = r2.l; //mac w1[0]*b[1], save w1[0] into w1[1]
r4 = (a0 += r2.h * r3.l) (is) || r1.l = w[i0++] || r2.l = w[i1++]; //mac w1[1]*b1[0] store r4, a2[0] to r1.l, w2[0] to r2.l
r4 = r4 >>> 14 || r1.h = w[i0++] || r2.h = w[i1]; //arith. right shift, a2[1] to r1.h, w2[1] to r2.h
a0 = r4.l * r1.l (is); //mac in*a2[0], a2[2] into r1.l
a0 += r2.l * r1.h (is) || r1.l = w[i0++]; //mac w2[0]*a2[1], b2[0] into r3.l
r4 = (a0 += r2.h * r1.l) (is) || r3.l = w[i0++]; //mac w2[1]*a2[2] store r4, b2[1] into r3.h
r4 = r4 >>> 14 || r3.h = w[i0++]; //arithmetic shift to get w2, b2[2] to r3.h
a0 = r4.l * r3.l (is) || w[i1++] = r4.l; //mac w2 * b2[0], store w2 to w2[0]
a0 += r2.l * r3.h (is) || w[i1++] = r2.l; //mac w2[0]*b2[1], store w2[0] to w2[1]. i1 now pointing to secondary channel.
r4 = (a0 += r2.h * r3.l) (is) || i0 -= 10; //mac w2[1]*b2[0]. reset coeff ptr. done with pri chan, save in r5.
r5 = r4 >>> 14; //time for the secondary channel!
r0.h = r0.h - r6.h (s) || r1.l = w[i0++] || r2.l = w[i1++]; //subtract offset, load a1[0] to r1.l, w1[0] to r2.l
a0 = r0.h * r1.l (is) || r1.h = w[i0++]; //mac in*a1[0], a1[1] to r1.h, save out samp pri.
a0 += r2.l * r1.h (is) || r1.l = w[i0++] || r2.h = w[i1]; //mac w1[0]*a1[1], a1[2] to r1.l, w1[1] to r2.h
r4 = (a0 += r2.h * r1.l) (is) || r3.l = w[i0++]; //mac, b1[0] to r3.l
r4 = r4 >>> 14 || r3.h = w[i0++]; //arithmetic shift, b1[1] to r3.h
a0 = r4.l * r3.l (is) || w[i1++] = r4.l; //mac w1*b1[0], save w1 to w1[0]
a0 += r2.l * r3.h (is) || w[i1++] = r2.l; //mac w1[0], save w1[0] to w1[1]
r4 = (a0 += r2.h * r3.l) (is) || r1.l = w[i0++] || r2.l = w[i1++]; //mac w1[1]*b1[0] store r4, a2[0] to r1.l, w2[0] to r2.l
r4 = r4 >>> 14 || r2.h = w[i1]; //r4 output of 1st biquad, w2[1] to r2.h
a0 = r4.l * r1.l (is) || r1.h = w[i0++]; //mac in*a2[0], a2[1] to r1.h
a0 += r2.l * r1.h (is) || r1.l = w[i0++]; //mac w2[0]*a2[1], a2[2] to r1.l
r4 = (a0 += r2.h * r1.l) (is) || r3.l = w[i0++]; //mac w2[1]*a2[2], b2[0] to r3.l
r4 = r4 >>> 14 || r3.h = w[i0++]; //r4 is w2, b2[2] to r3.h
a0 = r4.l * r3.l (is) || w[i1++] = r4.l; //mac w2 * b2[0], store w2 to w2[0]
a0 += r2.l * r3.h (is) || w[i1++] = r2.l; //mac w2[0] * b2[1], store w2[0] to w2[1]. i1 now pointing to next channel.
r4 = (a0 += r2.h * r3.l) (is) || i0 -= 10; //mac w2[1] * b2[0], reset coeff. ptr, save in r4.
r4 = r4 >>> 14;

here is a second (but still not final) attempt, once i realized that it is possible to issue 2 MACs per cycle:

// I'm really happy with this - every cycle is doing two MACs. :)
// i0 i1 (in 16 bit words)
r1 = [i0++] || r4 = [i1++]; // 2 2 r1= a0 a1 r4= w0's
a0 = r0.l * r1.l, a1 = r0.h * r1.l || r2 = [i0++] || r5 = [i1]; // 4 2 r2= a2 a2 r5= w1's
a0 += r4.l * r1.h, a1 += r4.h * r1.h || r3 = [i0++] || [i1] = r4; // 6 0 r3= b0 b1 w1's=r4
r0.l = (a0 += r5.l * r2.l), r0.h = (a1 += r5.h * r2.l) (s2rnd);
a0 = r0.l * r3.l, a1 = r0.h * r3.l || [i1++] = r0; // 6 2 w0's = r0
a0 += r4.l * r3.h, a1 += r4.h * r3.h || r1 = [i0++] || i1 += 4; // 8 4 r1 = a0 a1 //load next a[0] a[1] to r1; move to next 2nd biquad w's; don't reset the coef pointer, move on to the next biquad.
r0.l = (a0 += r5.l * r3.l), r0.h = (a1 += r5.h * r3.l) (s2rnd) || r4 = [i1++]; // 8 6 r4 = w0's, next biquad
//note: the s2rnd flag post-multiplies accumulator contents by 2. see pg 581 or 1569
//second biquad.
a0 = r0.l * r1.l, a1 = r0.h * r1.l || r2 = [i0++] || r5 = [i1]; // 10 6 r2= a2 a2 r5 = w1's
a0 += r4.l * r1.h, a1 += r4.h * r1.h || r3 = [i0++] || [i1] = r4; // 12 4 r3= b0 b1 w1's = r4
r0.l = (a0 += r5.l * r2.l), r0.h = (a1 += r5.h * r2.l) (s2rnd);
a0 = r0.l * r3.l, a1 = r0.h * r3.l || [i1++] = r0; // 12 6 w0's = r0
a0 += r4.l * r3.h, a1 += r4.h * r3.h || r1 = [i0++] || i1 += 4; // 14 8 r1 = a0 a1
r0.l = (a0 += r5.l * r3.l), r0.h = (a1 += r5.h * r3.l) (s2rnd) || r4 = [i1++]; // 14 10 r4 = w0's
//third biquad.
a0 = r0.l * r1.l, a1 = r0.h * r1.l || r2 = [i0++] || r5 = [i1]; // 16 10 r2= a2 a2 r5 = w1's
a0 += r4.l * r1.h, a1 += r4.h * r1.h || r3 = [i0++] || [i1] = r4; // 18 8 r3= b0 b1 w1's = r4
r0.l = (a0 += r5.l * r2.l), r0.h = (a1 += r5.h * r2.l) (s2rnd);
a0 = r0.l * r3.l, a1 = r0.h * r3.l || [i1++] = r0; // 18 10 w0's = r0
a0 += r4.l * r3.h, a1 += r4.h * r3.h || r1 = [i0++] || i1 += 4; // 20 12 r1 = a0 a1
r0.l = (a0 += r5.l * r3.l), r0.h = (a1 += r5.h * r3.l) (s2rnd) || r4 = [i1++]; // 20 14 r4 = w0's
//fourth biquad.
a0 = r0.l * r1.l, a1 = r0.h * r1.l || r2 = [i0++] || r5 = [i1]; // 22 14
a0 += r4.l * r1.h, a1 += r4.h * r1.h || r3 = [i0++] || [i1] = r4; // 24 12
r0.l = (a0 += r5.l * r2.l), r0.h = (a1 += r5.h * r2.l) (s2rnd);
a0 = r0.l * r3.l, a1 = r0.h * r3.l || [i1++] = r0; // 24 14
a0 += r4.l * r3.h, a1 += r4.h * r3.h || i1 += 4; // 24 16
r0.l = (a0 += r5.l * r3.l), r0.h = (a1 += r5.h * r3.l) (s2rnd);
// 48: loop back; 32 bytes: move to next channel.
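For reference, the computation the assembly above is hand-scheduling, a cascade of direct-form II biquad sections per channel, looks like this in plain floating-point Python (my own sketch; the coefficient naming and sign convention are one common choice, not necessarily the exact one the fixed-point code uses):

```python
def biquad(x, coef, state):
    """One direct-form II biquad section.
    coef = (a1, a2, b0, b1, b2); state = [w1, w2], updated in place.
    Convention: w = x - a1*w1 - a2*w2 (poles), y = b0*w + b1*w1 + b2*w2 (zeros).
    """
    a1, a2, b0, b1, b2 = coef
    w1, w2 = state
    w = x - a1 * w1 - a2 * w2
    y = b0 * w + b1 * w1 + b2 * w2
    state[0], state[1] = w, w1        # shift the delay line: w1 <- w, w2 <- w1
    return y

def filter_sample(x, sections, states):
    """Run one input sample through a cascade of biquads (one channel)."""
    for coef, state in zip(sections, states):
        x = biquad(x, coef, state)
    return x

# Identity filter: b0 = 1, everything else 0 -> output equals input.
sections = [(0.0, 0.0, 1.0, 0.0, 0.0)] * 2
states = [[0.0, 0.0] for _ in sections]
out = [filter_sample(x, sections, states) for x in (1.0, 2.0, 3.0)]
print(out)   # [1.0, 2.0, 3.0]
```

Each biquad is 5 multiply-accumulates plus a delay-line shift, which is why the dual-MAC, dual-16-bit-load scheduling above can process two channels in the same instruction stream.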
{384}  
notes on reading magstripe cards:
 
{228}  
http://www.borbelyaudio.com/adobe/ae599bor.pdf
 
{57}  
http://www.cs.rug.nl/~rudy/matlab/
