m8ta
{1546}
Local synaptic learning rules suffice to maximize mutual information in a linear network
x = randn(1000, 10);
Q = x' * x;                    % 10x10 gram matrix (input covariance, up to scale)
a = 0.001;                     % learning rate
Y = randn(10, 1);
y = zeros(10, 1);
for i = 1:1000
	y = Y + (eye(10) - a*Q)*y; % fixed-point iteration: converges to inv(a*Q)*Y
end
y - pinv(Q)*Y / a              % should be ~zero
To this is added a 'sensing' learning and a 'noise' unlearning phase -- one maximizes the output entropy H(Y) with the signal present, the other minimizes the conditional entropy H(Y|X), estimated with noise alone; their difference is the mutual information I(X;Y) = H(Y) - H(Y|X). Everything is then applied, similar to before, to a gaussian-filtered one-dimensional white-noise stimulus. He shows this results in bandpass filter behavior -- quite weak sauce in an era where ML papers are expected to test on five or so datasets. Even though this was 1992 (nearly thirty years ago!), it would have been nice to see this applied to a more realistic dataset; perhaps some of the following papers? Olshausen & Field came out in 1996 -- but they applied their algorithm to real images. In both Olshausen's work and this one, no affordances are made for multiple layers. There have to be solutions out there...
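As a minimal sketch (not the paper's code; the sizes, weights, and noise levels below are illustrative), the objective can be written down directly for a linear gaussian network y = W*(x + n_in) + n_out, where the 'sensing' phase estimates the output entropy with the signal present and the 'noise' phase the output entropy with noise alone:

n = 10; m = 4;            % input / output dimensions (illustrative)
Cx = eye(n);              % signal covariance (white input)
si = 0.1; so = 0.05;      % input / output noise std (illustrative)
W = randn(m, n);          % synaptic weight matrix
Cys = W*(Cx + si^2*eye(n))*W' + so^2*eye(m); % output cov, sensing phase
Cyn = W*(si^2*eye(n))*W' + so^2*eye(m);      % output cov, noise phase
I = 0.5*(log(det(Cys)) - log(det(Cyn)))      % mutual information, nats

Gradient ascent on I with respect to W, split into these two phases, is what the local rules approximate.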
{1545}
Self-organization in a perceptual network
One may critically challenge the infomax idea: we very much need to (and do) throw away spurious or irrelevant information in our sensory streams; what upper layers 'care about' when making decisions is certainly relevant to the lower layers. This credit assignment is neatly solved by backprop, and there are a number of 'biologically plausible' means of performing it, but both this and infomax are maybe avoiding the problem. What might the upper layers really care about? Likely 'care about' is an emergent property of the interacting local learning rules and network structure. Can you search directly in these domains, within biological limits, and motivated by statistical reality, to find unsupervised-learning networks? You'll still need a way to rank the networks, hence an objective 'care about' function. Sigh. Either way, I don't put a lot of weight on the infomax principle per se. It could be useful, but it is only part of the story. Otherwise Linsker's discussion is accessible, lucid, and prescient. Lol.
{1493}
PMID-27690349 Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation
{1447}
PMID-16543459 Reward timing in the primary visual cortex
{1387}
ref: -1977
tags: polyethylene surface treatment plasma electron irradiation mechanical testing saline seawater accelerated lifetime
date: 04-15-2017 06:06 gmt
revision:0
Enhancement of resistance of polyethylene to seawater-promoted degradation by surface modification
{1279}
PMID-23024377 Plasma-assisted atomic layer deposition of Al(2)O(3) and parylene C bi-layer encapsulation for chronic implantable electronics.
{1152}
http://web.cecs.pdx.edu/~greenwd/xmsnLine_notes.pdf -- Series termination works, provided the impedance of the driver plus the series resistor matches the characteristic impedance of the transmission line being driven. School was so long ago, I've forgotten these essentials!
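A quick worked example (the component values here are invented for illustration): with a 50 ohm trace and a driver whose output impedance is around 17 ohms,

Z0 = 50;             % characteristic impedance of the line, ohms
Rdrv = 17;           % driver output impedance, ohms (from its datasheet)
Rseries = Z0 - Rdrv  % = 33 ohms: driver + series resistor matches the line

The matched source launches a half-amplitude wave; the reflection off the unterminated far end restores it to full swing, with no re-reflection at the source.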
{760}
ref: -0
tags: LDA myopen linear discriminant analysis classification
date: 01-03-2012 02:36 gmt
revision:2
How does LDA (linear discriminant analysis) work? It works by projecting data points onto a series of planes, one per output class, and then deciding based on which projection is largest. (The original entry had two figures: left, a top view of this projection, with 9 classes of 2D data each in a different color; right, a side 3D view, in which the surfaces appear to form a parabola.) Here is the matlab code that computes the LDA (from myopen's ceven):
% TrainData and TrainClass are inputs, column major here.
% (observations on columns)
N = size(TrainData,1);
Ptrain = size(TrainData,2);
Ptest = size(TestData,2);
% add a bit of interpolating noise to the data.
sc = std(TrainData(:));
TrainData = TrainData + sc./1000.*randn(size(TrainData));
K = max(TrainClass); % number of classes.
%%-- Compute the means and the pooled covariance matrix --%%
C = zeros(N,N);
for l = 1:K
	idx = find(TrainClass==l);
	% measure the mean per class
	Mi(:,l) = mean(TrainData(:,idx)')';
	% sum all covariance matrices per class
	C = C + cov((TrainData(:,idx)-Mi(:,l)*ones(1,length(idx)))');
end
C = C./K; % turn sum into average covariance matrix
Pphi = 1/K;
Cinv = inv(C);
%%-- Compute the LDA weights --%%
for i = 1:K
	Wg(:,i) = Cinv*Mi(:,i); % this is the slope of the plane
	Cg(:,i) = -1/2*Mi(:,i)'*Cinv*Mi(:,i) + log(Pphi)'; % and this, the origin-intersect.
end
%%-- Compute the decision functions --%%
Atr = TrainData'*Wg + ones(Ptrain,1)*Cg; % see - just a simple linear function!
Ate = TestData'*Wg + ones(Ptest,1)*Cg;
errtr = 0;
AAtr = compet(Atr'); % this compet function returns a sparse matrix with a 1
	% in the position of the largest element per row.
	% convert to indices with vec2ind, below.
TrainPredict = vec2ind(AAtr);
errtr = errtr + sum(sum(abs(AAtr-ind2vec(TrainClass))))/2;
netr = errtr/Ptrain;
PeTrain = 1-netr;
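A hedged sketch of how the code above might be driven with toy data like the figures described -- the 3x3 grid of gaussian classes is my guess at the setup, not the original's:

% 9 classes of 2D gaussians on a 3x3 grid (observations on columns,
% matching the snippet above).
K = 9; Nper = 100;
[cx, cy] = meshgrid(-1:1, -1:1);
centers = 3*[cx(:)'; cy(:)'];        % 2 x 9 class centers
TrainData = []; TrainClass = [];
for l = 1:K
	TrainData = [TrainData, centers(:,l) + randn(2, Nper)];
	TrainClass = [TrainClass, l*ones(1, Nper)];
end
TestData = TrainData;  % lazy: test on the training set
% ...then run the LDA code above. Without the NN toolbox, classify with:
% [~, TestPredict] = max(Ate, [], 2);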
{65}
Follow-up paper: http://spikelab.jbpierce.org/Publications/LaubachEMBS2003.pdf
____References____
Laubach M, Arieh Y, Luczak A, Oh J, Xu Y (2003). Proceedings of the IEEE 29th Annual Bioengineering Conference, pp. 17-18.
{91}
To remove lines beginning with a question mark (e.g. from subversion):
svn status | perl -nle 'print if !/^\?/'
Here's another example, for cleaning up the output of ldd:
ldd kicadocaml.opt | perl -nle '$_ =~ /^(.*?)=>/; print $1;'
And one for counting the lines of non-blank source code:
cat *.ml | perl -e '$n = 0; while ($k = <STDIN>) {if($k =~ /\w+/){$n++;}} print $n . "\n";'
By that metric, kicadocaml (check it out!), which I wrote in the course of learning Ocaml, has about 7500 lines of code.
Here is one for resizing a number of .jpg files in a directory into a thumb/ subdirectory:
ls -lah | perl -nle 'if( $_ =~ /(\w+)\.jpg/){ `convert $1.jpg -resize 25% thumb/$1.jpg`;}'
or, even simpler:
ls *.JPG | perl -nle '`convert $_ -resize 25% thumb/$_`;'
Note that the -e command line flag tells perl to evaluate the expression, -n causes the expression to be evaluated once per input line from standard input, and -l puts a line break after every print statement.
For replacing characters in a file, do something like:
cat something | perl -nle '$_ =~ s/,/\t/g; print $_'
{846}
Shuffle lines read in from stdin. I keep this script in /usr/local/bin on my systems, mostly for doing things like
ls | shuffle > pls.txt && mplayer -playlist pls.txt
#!/usr/bin/perl -w
use List::Util 'shuffle';
while (<STDIN>) {
	push(@lines, $_);
}
@reordered = shuffle(@lines);
foreach (@reordered) {
	print $_;
}
{818}
Say you have a program, called from a perl script, that may run for a long time. How do you get at the program's output as it appears? Simple - open a pipe to the program's STDOUT. See http://docstore.mik.ua/orelly/perl/prog3/ch16_03.htm
Below is an example - I wanted to see the output of programs run, for convenience, from a perl script (didn't want to have to remember - or get wrong - all the command line arguments for each).
#!/usr/bin/perl
$numArgs = $#ARGV + 1;
if($numArgs == 1){
	if($ARGV[0] eq "table"){
		open STATUS, "sudo ./video 0xc1e9 15 4600 4601 0 |";
		while(<STATUS>){ print; }
		close STATUS;
	}elsif($ARGV[0] eq "arm"){
		open STATUS, "sudo ./video 0x1ff6 60 4597 4594 4592 |";
		while(<STATUS>){ print; }
		close STATUS;
	}else{
		print "$ARGV[0] not understood - say arm or table!\n";
	}
}
{796}
An interesting field in ML is nonlinear dimensionality reduction - data may appear to be in a high-dimensional space, but mostly lies along a nonlinear lower-dimensional subspace or manifold. (Linear subspaces are easily discovered with PCA or SVD(*).) Dimensionality reduction projects high-dimensional data into a low-dimensional space with minimum information loss -> maximal reconstruction accuracy; nonlinear dim reduction does this (surprise!) using nonlinear mappings. These techniques set out to find the manifold(s); a sketch of the linear baseline follows the footnote below.
(*) SVD maps into 'concept space', an interesting interpretation as per Leskovec's lecture presentation.
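For that linear baseline, a minimal sketch (data invented for illustration) of dimensionality reduction with SVD -- project onto the top-k right singular vectors, then reconstruct:

% 10-D data that actually lies on a 3-D linear subspace.
X = randn(1000, 3) * randn(3, 10);
mu = mean(X);
Xc = X - repmat(mu, 1000, 1);                 % center the data
[U, S, V] = svd(Xc, 'econ');
k = 3;
Z = Xc * V(:, 1:k);                           % low-dimensional coordinates
Xhat = Z * V(:, 1:k)' + repmat(mu, 1000, 1);  % reconstruction
err = norm(X - Xhat, 'fro')                   % ~0: nothing lost at k = 3

A curved manifold defeats this projection no matter the k; that gap is what the nonlinear methods try to fill.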
{685} |
ref: BrashersKrug-1996.07
tags: motor learning sleep offline consolidation Bizzi Shadmehr
date: 03-24-2009 15:39 gmt
revision:1
PMID-8717039[0] Consolidation in human motor memory.
____References____
[0] Brashers-Krug T, Shadmehr R, Bizzi E (1996). Consolidation in human motor memory. Nature 382(6588):252-5.
{678} |
ref: Rasch-2009.06
tags: sleep cholinergic acetylcholine REM motor consolidation
date: 02-18-2009 17:27 gmt
revision:0
PMID-19194375[0] "Impaired Off-Line Consolidation of Motor Memories After Combined Blockade of Cholinergic Receptors During REM Sleep-Rich Sleep."
____References____
[0] Rasch B, Gais S, Born J (2009). Impaired off-line consolidation of motor memories after combined blockade of cholinergic receptors during REM sleep-rich sleep. Neuropsychopharmacology 34(7):1843-53.
{660}
In the process of installing compiz - which I decided I didn't like - I removed Xfce4's window manager, xfwm4, and was stuck with metacity. Metacity probably allows focus-follows-mouse, but this cannot be configured with Xfce's control panel, hence I had to figure out how to change it back. For this, I wrote a command to look for all files, opening each, and seeing if any lines match "metacity". It's a brute-force approach, but one that does not require much thinking or googling.
find . -print | grep -v mnt | \
	perl -e 'while($k = <STDIN>){open(FH,"< $k");while($j=<FH>){if($j=~/metacity/){print "found $k";}}close FH;}'
This led me to discover ~/.cache/sessions/xfce4-session-loco:0 (the name of the computer is loco). I changed all references of 'metacity' to 'xfwm4', and got the proper window manager back.
{614}
PMID-18004384[0] A synaptic memory trace for cortical receptive field plasticity.
____References____
[0] Froemke RC, Merzenich MM, Schreiner CE (2007). A synaptic memory trace for cortical receptive field plasticity. Nature 450(7168):425-9.
{588}
images/588_1.pdf -- Good lecture on LDA. Below, a simple LDA implementation in matlab based on the same:
% data matrix in this case is 36 x 16,
% with 4 examples of each of 9 classes along the rows,
% and the axes of the measurement (here the AR coef)
% along the columns.
Sw = zeros(16, 16); % within-class scatter covariance matrix.
means = zeros(9,16);
for k = 0:8
	m = data(1+k*4:4+k*4, :);  % change for different counts / class
	Sw = Sw + cov( m );        % sum the within-class covariances
	means(k+1, :) = mean( m ); % means of the individual classes
end
% compute the class-independent transform,
% e.g. one transform applied to all points
% to project them into one plane.
Sw = Sw ./ 9; % 9 classes
criterion = inv(Sw) * cov(means);
[eigvec2, eigval2] = eig(criterion);
See {587} for results on EMG data.
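Continuing from the variables above, an assumed usage step (not from the lecture): sort the eigenvectors by eigenvalue and project onto the top discriminant directions.

% criterion is not symmetric, so real() guards against tiny
% imaginary parts in the eigendecomposition.
[~, order] = sort(real(diag(eigval2)), 'descend');
Wlda = real(eigvec2(:, order(1:2)));
proj = data * Wlda;  % 36 x 2: each row is a point in the discriminant plane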
{525}
Tim's list of skate-like devices, sorted by flatland speed, descending order:
{506}
So, you want to write inline assembly for the blackfin processor, perhaps to speed things up in a (very) time-constrained environment? Check this first:
Nobody seems to have a complete modifier list for the blackfin, which is needed to actually write something that won't be optimized out :) Here is my list --
examples:
Constraints for particular machines - does not include blackfin.
; register operands
; d (r0..r7)
; a (p0..p5,fp,sp)
; e (a0, a1)
; b (i0..i3)
; f (m0..m3)
; B
; c (i0..i3,m0..m3) CIRCREGS
; C (CC) CCREGS
{427}
I wanted to take lines like this:
272 :1007A500EB9FF5F0EA9E42F0E99D42F0E89C45F0AA
and convert them into proper hex files. hence, perl:
perl -e 'open(FH, "awfirm.hex"); @j = <FH>; foreach $H (@j){ $H =~ s/^\s+\d+\s//; print $H; }'
{409}
ref: bookmark-0
tags: optimization function search matlab linear nonlinear programming
date: 08-09-2007 02:21 gmt
revision:0
http://www.mat.univie.ac.at/~neum/ very nice collection of links!!
{216}
To search for files that match a perl regular expression (here, all plexon files recorded in 2007):
locate PLEX | perl -e 'while ($k = <STDIN>){ if( $k =~ /PLEX\d\d\d\d07/){ print $k; }}'
{141}
ref: learning-0
tags: motor control primitives nonlinear feedback systems optimization
date: 0-0-2007 0:0
revision:0
http://hardm.ath.cx:88/pdf/Schaal2003_LearningMotor.pdf not in pubmed.
{28}
{75}
{34}
ref: bookmark-0
tags: linear_algebra solution simultaneous_equations GPGPU GPU LUdecomposition clever
date: 0-0-2006 0:0
revision:0