m8ta
{99}
ref: Gage-2005.06 tags: naive coadaptive control Kalman filter Kipke audio BMI date: 09-13-2019 02:33 gmt revision:2 [1] [0] [head]

PMID-15928412[0] Naive coadaptive cortical control, May 2005. See notes.

____References____

[0] Gage GJ, Ludwig KA, Otto KJ, Ionides EL, Kipke DR, Naive coadaptive cortical control. J Neural Eng 2:2, 52-63 (2005 Jun)

{1155}
ref: -0 tags: filtering spike sorting AUC ROC r date: 08-08-2012 23:35 gmt revision:12 [11] [10] [9] [8] [7] [6] [head]

A frequent task in the lab is to sort spikes (extracellular neural action potentials) from background noise. We are working on doing this wirelessly; to minimize power consumption, spike sorting is done before the radio, so that only the times of spikes need be transmitted, saving bandwidth and power. (This necessitates a bidirectional radio protocol, but that is a worthy sacrifice.)

In most sorting programs (e.g. Plexon), the raw signal is first thresholded, then waveform snippets (typically 32 samples long) are compared to a template to accept/reject them, or to sort them into different units. The comparison metric is usually the mean-squared error (MSE), aka the L2 norm. This makes sense if the spike shapes are assumed to be stereotyped (they may very well not be) and the noise white / uncorrelated (another debatable assumption).
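
For concreteness, a minimal matlab sketch of this threshold-then-template flow (the threshold rule, pre-trigger offset, and acceptance bound below are arbitrary illustrations, not Plexon's actual parameters):

% toy data: white noise plus copies of an arbitrary 32-sample spike shape.
t = (0:31)';
template = 60*exp(-((t-8)/3).^2) - 40*exp(-((t-16)/6).^2);  % made-up biphasic waveform
x = 10*randn(10000, 1);
spk = (500:500:9500)';
for k = 1:length(spk)
	x(spk(k):spk(k)+31) = x(spk(k):spk(k)+31) + template;
end
% threshold, cut out 32-sample snippets (keeping a few pre-trigger samples), compare to template.
thresh = 4*median(abs(x))/0.6745;                   % robust noise-based threshold (illustrative)
cross = find(diff(double(x > thresh)) == 1) + 1;    % rising threshold crossings
pre = 7;                                            % pre-trigger samples kept in each snippet
cross = cross(cross > pre & cross+31-pre <= length(x));
accepted = [];
for k = 1:length(cross)
	snip = x(cross(k)-pre : cross(k)-pre+31);
	if mean((snip - template).^2) < 500             % arbitrary accept/reject bound on the MSE
		accepted(end+1) = cross(k);
	end
end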

On the headstage we are working with for wireless neural recording, jumps and memory moves are expensive operations, hence we've elected to do no waveform extraction and instead match continuously. By using the built-in MPEG compression opcodes, we can compute the L1 norm at a rate of 4 samples / clock -- very efficient. However, this was motivated more by hardware considerations than by actual spike-sorting practice. The literature suggests that for isolating a fixed-pattern signal embedded in noise, the best solution is instead a matched filter.

Hence, a careful study of spike sorting was attempted in matlab, given the following assumptions: a fixed spike shape (extracted from real data) and uncorrelated band-limited noise. The latter was just white noise passed through a bandpass filter, e.g.

cheby1(3, 2, [500/15e3 7.5/15])

where the passband edges are 500 Hz and 7.5 kHz, at a sampling rate of 30 kHz (the actual rate is 31.25 kHz). Since the spike times are known, we can rigorously compare the receiver operating characteristic (ROC) and the area under the curve (AUC) for different sorting algorithms. Four were tried: L1 (as mentioned above, motivated by the MPEG opcodes), L2 (Plexon), FIR matched filter, and IIR matched filter.
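
As a reference point, here is a rough matlab sketch of this kind of comparison -- not the original analysis code; the spike shape, amplitude, and evaluation details are stand-ins. It embeds a known template in band-limited noise, computes a per-sample detection statistic for the L1, L2-template, and FIR matched-filter methods, and integrates an ROC over a threshold sweep, scoring true positives at the known spike-aligned samples and false positives everywhere else:

% toy comparison of L1, L2-template, and FIR matched-filter detection statistics.
fs = 30e3;
[b, a] = cheby1(3, 2, [500/15e3 7.5/15]);        % band-limited noise, as above
noise = filter(b, a, randn(1e5, 1));
noise = noise / std(noise);
t = (0:31)';
template = exp(-((t-8)/3).^2) - 0.6*exp(-((t-16)/6).^2);   % stand-in spike shape
template = template / norm(template);
amp = 3;                                         % spike amplitude (sets the SNR)
x = noise;
spk = (1000:1000:length(x)-64)';                 % known spike times
for k = 1:length(spk)
	x(spk(k):spk(k)+31) = x(spk(k):spk(k)+31) + amp*template;
end
mf = filter(flipud(template), 1, x);             % FIR matched filter (time-reversed template)
l2 = zeros(size(x));  l1 = zeros(size(x));
for k = 32:length(x)
	w = x(k-31:k);
	l2(k) = -sum((w - amp*template).^2);         % negated, so larger = more spike-like
	l1(k) = -sum(abs(w - amp*template));
end
% ROC / AUC: true positives at the spike-aligned samples, false positives everywhere else.
spike_samp = spk + 31;                           % each statistic peaks at the template's last sample
nonspike = setdiff((32:length(x))', spike_samp);
stats = {mf, l2, l1};  names = {'matched', 'L2', 'L1'};
for s = 1:3
	d = stats{s};
	th = linspace(min(d(32:end)), max(d(32:end)), 200);
	tp = zeros(size(th));  fp = zeros(size(th));
	for i = 1:length(th)
		tp(i) = mean(d(spike_samp) > th(i));
		fp(i) = mean(d(nonspike) > th(i));
	end
	auc = -trapz(fp, tp);                        % fp falls as the threshold rises, hence the sign
	fprintf('%s AUC = %.3f\n', names{s}, auc);
end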

The latter was very much an experiment -- IIR filters are efficiently implemented on the blackfin processor, and they generally require fewer taps than an equivalent FIR implementation. To find an IIR equivalent to a given FIR matched filter (whose impulse response closely resembles the actual waveshape, just time-reversed), the filter parameters were simply optimized to match the two impulse responses. To facilitate the search, the denominator was specified in terms of complex-conjugate pole locations (thereby constraining the form of the filter), while the numerator coefficients were individually optimized. Note that the objective here is not to maximize sorting quality directly -- rather, it is to make the IIR filter's impulse response as close as possible to the FIR matched filter's, while remaining computationally light.
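
A sketch of that fit in matlab, using fminsearch rather than simulated annealing. The parameterization (two complex-conjugate pole pairs, five numerator coefficients, one integer delay) follows the description in the update further down; the starting point and the length over which the impulse responses are matched are arbitrary:

function [theta, num, den] = fit_iir_to_fir(template)
% fit a 10-parameter IIR filter (2 conjugate pole pairs, 5 numerator coefs, 1 delay)
% to the impulse response of the FIR matched filter (the time-reversed spike template).
	fir = flipud(template(:));
	n = 64;                                        % length over which to match impulse responses
	target = filter(fir, 1, [1; zeros(n-1,1)]);
	theta0 = [0.5 0.3 0.7 0.2 0.1*randn(1,5) 4];   % arbitrary (stable) starting point
	opt = optimset('MaxFunEvals', 2e4, 'MaxIter', 2e4);
	theta = fminsearch(@(th) iir_err(th, target, n), theta0, opt);
	[num, den] = expand(theta);
end

function [num, den] = expand(theta)
	% pole pairs stored as (re, im) -> 4th-order denominator; numerator taken directly.
	den = conv([1 -2*theta(1) theta(1)^2+theta(2)^2], ...
	           [1 -2*theta(3) theta(3)^2+theta(4)^2]);
	num = theta(5:9);
end

function e = iir_err(theta, target, n)
	[num, den] = expand(theta);
	d = min(n-1, max(0, round(theta(10))));        % integer delay to line the responses up
	h = filter(num, den, [1; zeros(n-1,1)]);
	h = [zeros(d,1); h(1:n-d)];
	e = sum((h - target).^2);                      % squared impulse-response mismatch
end

Comparing freqz(num, den) against freqz(flipud(template), 1) gives a quick check of the fit; note that the rounded delay makes the objective piecewise-constant along that axis, which is the source of the steps visible in the slices further down.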

And yet: the IIR filter outperforms the FIR matched filter, even though the IIR filter has 1/3 the coefficients (10 vs 32)! Below is the AUC quality metric for the four methods.

And here are representative ROC curves at varying spike SNR ratios.

The remarkable thing is that even at very low SNR, the matched IIR filter can reliably sort cells from noise. (Note that false positives should really be weighted more heavily here: in the present analysis true positives and false positives are weighted equally, which is decidedly non-Bayesian given that most of the time there is no spike.) The matched IIR filter is far superior to the usual MSE-to-template / L2 norm method -- it seems we've been doing it wrong all along?

As for reliably finding spikes / templates / filters when the SNR < 0: the tests above -- which assume an equal number of spike and non-spike samples -- are highly biased; spikes are not normally sortable when the SNR < 0.


Upon looking at the code again, I realized three important things:

  1. The false positive rate needs to be integrated over all time where there is no spike, just as the true positive rate is integrated over all time where there is a spike.
  2. All methods need to be tested with 'distractors', or other spikes with a different shape.
  3. The FIR matched filter was backwards!

Including #1 above dramatically increased the false positive rate, as expected, and better reflects how the filters will be used in the real world. #2 did not dramatically impact any of the discriminators, which is good. #3 closed the gap between the IIR and FIR filters, and indeed the FIR matched filter's performance now slightly exceeds the IIR matched filter's.

Below, AUC metric for 4 methods.

And corresponding ROC for 6 different SNR ratios (note the SNRs sampled are slightly different, due to the higher false positive rate).

One thing to note: as implemented, the IIR filter requires careful matching of poles and zeros, and may not work with 1.15 fixed-point math on the Blackfin. The method really deserves to be tested in vivo, which I shall do shortly.


More updates:

See www.aicit.org/jcit/ppl/JCIT0509_05.pdf -- they add an 'adjustment' function to the matched filter due to variance in the amplitude of spikes, which adds a little performance at low SNRs.

F(t) = \left[ \frac{x(t)}{k \sigma} \cdot e^{1-\frac{x(t)}{k \sigma}} \right]^n

σ is the standard deviation of x(t); n and k determine 'zoom intensity and zoom center'. (A small matlab sketch of this adjustment follows the reference list below.) The paper is not particularly well written -- there are some typos, and their idea seems unjustified. Still, the references are interesting:

  • IEEE-238472 (pdf) Optimal detection, classification, and superposition resolution in neural waveform recordings.
    • Their innovation: whitening filter before template matching, still use L2 norm.
  • IEEE-568916 (pdf) Detection, classification, and superposition resolution of action potentials in multiunit single-channel recordings by an on-line real-time neural network
    • Still uses thresholding / spike extraction and L2 norm. Inferior!
  • IEEE-991160 (pdf) Parameter estimation of human nerve C-fibers using matched filtering and multiple hypothesis tracking
    • They use a real matched filter to detect extracellular action potentials.
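
Going back to the adjustment function above: it is trivial to apply pointwise to a detection statistic in matlab (the k and n values below are arbitrary; the paper tunes them):

% pointwise 'zoom' adjustment of a detection statistic x(t).
x = randn(1000, 1);              % stand-in for a matched-filter output
k = 2;  n = 2;                   % 'zoom center' and 'zoom intensity' -- arbitrary choices
sigma = std(x);
u = x / (k * sigma);
F = (u .* exp(1 - u)).^n;        % F(t) = [ x(t)/(k*sigma) * e^(1 - x(t)/(k*sigma)) ]^n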


Update: It is not too difficult to convert FIR filters to IIR filters using simple numerical optimization. Within my client program this is done using simulated annealing; I have also tested it using fminsearch in matlab. To investigate the IIR-filter fitting problem more fully, I sliced the 10-dimensional optimization space along pairs of dimensions about the optimum point found with fminsearch.

The parameters are as follows:

  1. Two complex-conjugate pole pairs, stored as four values (a real and an imaginary part for each pair). These are expanded to denominator coefficients before evaluating the IIR filter.
  2. Five numerator coefficients.
  3. One delay coefficient (to match the left/right shift).

The figure below plots +-1 about the optimum for each axis pair. Note that the last parameter (the delay) is discrete, hence the steps in the objective function. Also note that the problem is perfectly quadratic in the numerator coefficients, as expected, which is why LMS works so well.
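
The slices themselves are straightforward to generate: hold all but two parameters at the optimum and evaluate the objective on a grid. In the sketch below, obj stands for the 10-parameter impulse-response error used in the fit (e.g. @(th) iir_err(th, target, n)) and theta_opt for the found optimum; both are assumed to already exist in the workspace:

% 2-D slice of the fitting objective about an optimum, along parameter axes i and j.
i = 1;  j = 2;                           % which pair of parameters to slice
r = linspace(-1, 1, 101);                % +-1 about the optimum
E = zeros(length(r));
for a = 1:length(r)
	for b = 1:length(r)
		th = theta_opt;
		th(i) = th(i) + r(a);
		th(j) = th(j) + r(b);
		E(a, b) = obj(th);               % objective evaluated off-optimum
	end
end
imagesc(theta_opt(i) + r, theta_opt(j) + r, log(E'));   % log scale shows the basin shape
axis xy; xlabel(sprintf('parameter %d', i)); ylabel(sprintf('parameter %d', j));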

Note that for the denominator pole locations, the region around the optimum is small, and there are interesting features beyond it. Some slices have multiple optima.

The next figure plots +-0.1 about the optimum for each axis vs. every other one. It shows that, at least on a small scale, the problem becomes very nearly quadratic in all axes, hence amenable to line search or conjugate gradient search.

Moving away from planes that pass through a found optimum, what does the space look like? E.g. from a naive start, how hard is it to find at least one workable solution? To test this, I perturbed the found optimum with white noise in the parameters (std 0.2), and plotted the objective function as before, albeit at higher resolution (600 x 600 points for each slice).

These figures show that there can be several optima in the denominator, but again it appears that a very rough exploration followed by gradient descent should arrive at a good optimum.

{1125}
ref: -0 tags: active filter design Netherlands Gerrit Groenewold date: 02-17-2012 20:27 gmt revision:0 [head]

IEEE-04268406 (pdf) Noise and Group Delay in Active Filters

  • relevant conclusion: the output noise spectrum is exactly proportional to the group delay.
  • Poschenrieder established a relationship between group delay and energy stored in a passive filter.
  • Fettweis proved from this that the noise generation of an active filter which is based on a passive filter is approximately proportional to the group delay. (!!!)

{989}
ref: Li-2009.07 tags: Zheng Odoherty Nicolelis unscented kalman wiener filter date: 01-07-2012 23:57 gmt revision:1 [0] [head]

PMID-19603074[0] Unscented Kalman filter for brain-machine interfaces.

  • Includes quadratic neuron tuning curves.
  • Includes n-1 past states for augmented state prediction.
  • Population vector .. has < 0 SNR.
  • Works best with only 1 future tap .. ?
  • Pursuit and center-out tasks.

____References____

[0] Li Z, O'Doherty JE, Hanson TL, Lebedev MA, Henriquez CS, Nicolelis MA, Unscented Kalman filter for brain-machine interfaces. PLoS One 4:7, e6243 (2009 Jul 15)

{175}
ref: BMI notes-0 tags: spike filtering rate_estimation BME 265 Henriquez date: 01-06-2012 03:06 gmt revision:1 [0] [head]

http://hardm.ath.cx:88/pdf/BME265_final.pdf

{97}
ref: Brown-2007 tags: Kalman filter BMI Black spike_sorting Donoghue date: 01-06-2012 00:07 gmt revision:1 [0] [head]

From Uncertain Spikes to Prosthetic Control, a PowerPoint presentation with a good overview of all that the Brown group has done.

{585}
ref: -0 tags: FIR LMS decorrelation adaptive filter matlab myopen date: 01-03-2012 03:35 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

LMS-based adaptive decorrelator: xn is the noise, xs is the signal, len is the filter length, and delay is the lag beyond which the autocorrelation function of the signal is zero but the acf of the noise is non-zero. The filter is very simple, and should be easy to implement in a DSP.

function [y,e,h] = lms_test(xn, xs, len, delay)
% xn, xs: row vectors (noise and signal); len: filter length; delay: prediction lag.
h = zeros(len, 1);                          % adaptive filter weights
x = xn + xs; 
y = zeros(1, length(x)-len-delay);          % prediction of x(k) from delayed samples
e = zeros(size(y));                         % prediction error = decorrelated output
for k = 1:length(x)-len-delay
	y(k) = x(k+delay:k+len-1+delay) * h ; 
	e(k) = x(k) - y(k); 
	h = h + 0.0004 * e(k) * x(k+delay:k+len-1+delay)';   % LMS weight update
end
It works well if the noise source is predictable & stable: (black = sinusoidal noise, red = output, green = error in output)

Now, what if the amplitude of the corrupting sinusoid changes (e.g. due to varying electrode properties during movement), and the changes per cycle are larger than the amplitude of the signal? The signal will be swamped! The solution to this is to adapt the decorrelating filter slowly, by adding an extra (multiplicative, nonlinear) gain term to track the error in terms of the absolute values of the signals (another nonlinearity). So, if the input signal is on average larger than the output, the gain goes up and vice-versa. See the code.

function [y,e,h,g] = lms_test(xn, xs, len, delay)
% as above, but with a slowly-adapting multiplicative gain that tracks amplitude changes.
h = zeros(len, 1); 
x = xn + xs; 
gain = 1;
e = zeros(size(x)); 
e2 = zeros(size(x)); 
for k = 1:length(x)-len-delay
	y(k) = x(k+delay:k+len-1+delay) * h; 
	e(k) = (x(k) - y(k)); 
	h = h + 0.0002 * e(k) * x(k+delay:k+len-1+delay)'; % slow adaptation. 
	y2(k) = y(k) * gain;                 % scaled prediction
	e2(k) = abs(x(k)) - abs(y2(k));      % amplitude mismatch between input and prediction
	gain = gain + 1 * e2(k);             % gain tracks the mismatch
	gain = abs(gain);
	if (gain > 3)                        % clamp the gain
		gain = 3;
	end
	g(k) = gain; 
end

If, like me, you are interested in only the abstract features of the signal, and not an accurate reconstruction of the waveform, then the gain signal (g above) reflects the signal in question (once the predictive filter has adapted). In my experiments with a length 16 filter delayed 16 samples, extracting the gain signal and filtering out out-of-band information yielded about +45db improvement in SNR. This was with a signal 1/100th the size of the disturbing amplitude-modulated noise. This is about twice as good as the human ear/auditory system in my tests.

It doesn't look like much, but it is just perfect for EMG signals corrupted by time-varying 60 Hz noise.
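
A usage sketch along these lines (synthetic 60 Hz amplitude-modulated noise plus a small in-band signal; filter length and delay of 16 as above; the final smoothing filter is an arbitrary choice):

fs = 2000;                                    % sample rate, Hz (illustrative)
t = (0:fs*10-1)/fs;                           % 10 s, row vectors to match the code above
xs = 0.01 * sin(2*pi*7*t);                    % small in-band 'signal' of interest
am = 1 + 0.3 * sin(2*pi*0.5*t);               % slow amplitude modulation
xn = am .* sin(2*pi*60*t);                    % large, amplitude-modulated 60 Hz disturbance
[y, e, h, g] = lms_test(xn, xs, 16, 16);      % the second (gain-tracking) version above
[bl, al] = butter(2, 20/(fs/2));              % keep only the in-band part of the gain signal
g_f = filtfilt(bl, al, g - mean(g));
plot(t(1:length(g_f)), g_f);                  % per the note above, this carries the signal band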

{983}
ref: ODoherty-2011.1 tags: nicolelis odoherty nicolelis 2011 active tactile BMBI stimulation ICMS unscented Kalman filter date: 01-01-2012 18:27 gmt revision:3 [2] [1] [0] [head]

PMID-21976021[0] Active tactile exploration using a brain-machine-brain interface.

  • Tricky part was the temporal interleaving. 50ms stim / 50ms record.
    • No proof a priori as S1 stim could affect M1 processing.
  • Real perception, as the stimulation resulted from motor commands (through a BMI).
  • RAT = rewarded ICMS (200 Hz pulses)
  • UAT = unrewarded ICMS, 400 Hz, skip every 100 ms.
  • NAT = no ICMS.
  • So short. damn you, nature.

____References____

[0] O'Doherty JE, Lebedev MA, Ifft PJ, Zhuang KZ, Shokur S, Bleuler H, Nicolelis MA, Active tactile exploration using a brain-machine-brain interface. Nature 479:7372, 228-31 (2011 Oct 5)

{957}
ref: -0 tags: Scott M1 motor control pathlets filter EMG date: 12-22-2011 22:52 gmt revision:1 [0] [head]

PMID-19923243 Complex Spatiotemporal Tuning in Human Upper-Limb Muscles

  • Original idea: M1 neurons encode 'pathlets', sophisticated high-level movement trajectories, possibly through the action of both the musculoskeletal system and spinal cord circuitry.
  • Showed that muscle pathlets can be extracted from EMG data, reliably and between patients, implying that M1 reflects 'filter-like' properties of the body, and not high-level representations.

{727}
ref: -0 tags: Filter ADC SAR SigmaDelta date: 04-10-2009 03:36 gmt revision:4 [3] [2] [1] [0] [head]

quote: For a given amount of 10 kHz rejection, a 1 ksps ΣΔ ADC (a) is equivalent to a 1 Msps SAR ADC (b). Filtering determines the ADC's effective throughput, not the sampling rate (c).

Exactly! From an excellent article, "Managing high-voltage lithium-ion batteries in HEVs" by Michael Kultgen.

more notes:

  • gasoline holds 80 times the energy per mass as lithium ion batteries.
  • the peak efficiency of the internal combustion engine, however, is only 30% (I remember a figure of more like 20% for carnot cycle .. maybe this is for the Atkinson cycle), and 12% at RPM.
  • Cell balancing is a critical feature in series battery packs in e.g. HEVs. if a weak cell receives the same number of coulombs during charge and discharge as a strong cell, then that cell uses more of its available capacity, and hence becomes weaker.
    • HEVs (prius) use passive balancing - they discharge the higher-capacity cells to match that of the lower capacity cells. Not efficient, but it prolongs the life of the pack.
    • More expensive EVs use active balancing, where capacitors are switched between cells to redistribute charge.
  • an electric bus costs about $480,000 !!

other stuff in that issue:

  • when interfacing an external chip to a FPGA, make sure all line transitions are registered with the internal clock before fanning out to other registered logic. Otherwise, the flip-flops may not all register the same signal.
  • you can design a LED boost converter to run off a single alkaline cell (or lithium cell..) with a PNP & NPN transistor in positive feedback topology.

{438}
ref: notes-0 tags: filter DSP finite precision 16bit blackfin matlab date: 07-22-2008 03:25 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

I needed to evaluate different fixed-point filters & implement them in a way which would more-or-less easily transfer to hand-coded blackfin assembly. Hence, I wrote a small program in C to test two likely alternatives: a 5th-order elliptic filter with 82 dB stopband rejection, or a 4th-order elliptic with 72 dB stopband rejection. See {421} also, and this useful reference (also at {584}).

UPDATE: this filter is not stable / suitable for the 1.15 signed-fraction format of the blackfin processor. As in the reference, you must use a non-canonic form which puts the numerator before the denominator. I had a great deal of difficulty trying to determine why the output of the Direct Form II filter was giving crap results with poles/zeros designed for lowpass operation. The reason appears to be that the Direct Form II filter requires dynamic range > +-1 for the w's (the first, feedback section). The numerator / b coefficients for these highpass filters are usually [1 -2 1] -- a highpass -- which returns 0 when passed saturated data. See the signal flowchart below.
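
A quick matlab check of that failure mode, using filter 1 from the program below: feed full-scale input through just the recursive (denominator) half of the filter -- which is exactly the intermediate state w that a canonic Direct Form II has to store -- and compare its range to the full filter's output:

% Direct Form II intermediate state: w(n) = x(n) - a1*w(n-1) - a2*w(n-2) - ...
[B1, A1] = ellip(5, 0.2, 84, 6/31.25);   % filter 1 below (floating point)
x = sign(randn(20000, 1));               % full-scale (+-1) input
w = filter(1, A1, x);                    % the recursive (denominator) half alone
y = filter(B1, A1, x);                   % the full filter output, for comparison
fprintf('max |w| = %.1f    max |y| = %.2f\n', max(abs(w)), max(abs(y)));

The w's overflow +-1 by a large factor even though the input and output fit comfortably, and once the w's saturate to a constant, a [1 -2 1]-style numerator returns zero, consistent with the behavior described above.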

The program for implementing these filters as cascaded biquads / triquads follows. The method of generating the filter coefficients is in the comments. Note that the filter coefficients for the triquad exceed an absolute magnitude of 2, hence the triquad implementation has a fixed-point format of s.2.13 (one sign bit, 2 bits whole part, 13 bits fractional part). The biquad has s.1.14 format -- one bit for the whole part. Also note that I incorporated a scale factor of x2 into the first stage, and x4 into the second stage, as the output of the ADC is only 12 bits and we can afford to expand it to the full range.

The filter responses, as designed (in floating point). 5th order is on the left, 4th on the right.

The filter output to white noise, in the time domain. The 5th-order filter has, in this implementation, a 'softer shoulder', which I do not think is appropriate for this application.

The same output in Fourier space, confirming the softer-shoulder effect. This was actually kind of unexpected .. I thought an extra pole/zero would help! I guess the triquad & lower-resolution coefficient quantization is less efficient .. ?

#include <stdio.h>
// gcc -Wall filter_test.c -o filter_test

//need to test fixed-point filtering in C (where it is easy)
//before converting to hand-coded assembly. 
short w1_1[2]= {0,0};
short w1_2[3]= {0,0,0};
short w2_1[2]= {0, 0}; 
short w2_2[2]= {0, 0}; 
/*
filter 1: 5th order. biquad then a triquad.
[B1, A1] = ellip(5,0.2,84, 6/31.25); %84db = 14 bits. 
rb = roots(B1); 
ra = roots(A1); 
pa1 = poly(ra(1:2)); 
pa2 = poly(ra(3:5)); 
pb1 = poly(rb(1:2)); 
pb2 = poly(rb(3:5)); 
b1_1 = round(pb1*sqrt(B1(1))*2^15)
a1_1 = round(pa1* 2^14)* -1
b1_2 = round(pb2*sqrt(B1(1))*2^15)
a1_2 = round(pa2*2^13)*-1 % triquad.
*/
//biquad1: 
short b1_1[3] = {1199, 87, 1199}; 
short a1_1[2] = {24544, -14085};
//triquad1: 
short b1_2[4] = {1199, 2313, 2313, 1199};
short a1_2[3] = {18090, -14160, 3893};

/*filter 2: can be implemented by 2 biquads. 
[B3, A3] = ellip(4,0.5,72, 6/31.25);
rb = roots(B3); 
ra = roots(A3); 
pa1 = poly(ra(1:2)); 
pa2 = poly(ra(3:4)); 
pb1 = poly(rb(1:2)); 
pb2 = poly(rb(3:4)); 
b2_1 = round(pb1*sqrt(B3(1))*2^15)
a2_1 = round(pa1* 2^14)* -1
b2_2 = round(pb2*sqrt(B3(1))*2^16) %total gain = 8. 
a2_2 = round(pa2*2^14)*-1
*/
//biquad2: 
short b2_1[3] = {2082, 914, 2082};
short a2_1[2] = {24338, -13537}; 
//biquad2 (2): 
short b2_2[3] = {4163, 6639, 4163};
short a2_2[2] = {24260, -9677}; 

int madd_sat(int acc, short a, short b){
	//emulate the blackfin
	//signed-integer mode of operating. page 567  in the programming reference.
	int c; 
	long long l, lo, hi;
	lo = (long long)(-2147483647); 
	hi = (long long)(2147483647); 
	c = (int)a * (int)b; 
	l = (long long)acc + (long long)c; 
	if(l < lo)
		l = lo; 
	if(l > hi)
		l = hi; 
	acc = (int)l; 
	return acc; 
}
//need to deal with samples one at a time, as they come in, from different channels
//need to quantize the coefficients
//need to utilize a biquad topology / filter structure.  
//   (this is simple - just segregate the poles. order 5 filter req. biquad & triquad. 
short filter_biquad(short in, short* a1, short* b1, short* w1){
	int acc; 
	short out, w; 
		
	//in varies from 0 to 0x0fff - ADC is straight binary. (0 to 4095). unsigned.
	acc = madd_sat(0, in, 16384); 
	acc = madd_sat(acc, w1[0], a1[0]); 
	acc = madd_sat(acc, w1[1], a1[1]);
	w = (short)(acc >> 14); //hopefully this does sign-extended shift. 
	acc = madd_sat(0, w, b1[0]); 
	acc = madd_sat(acc, w1[0], b1[1]); 
	acc = madd_sat(acc, w1[1], b1[0]); //symmetry.  
	out = (short)(acc >> 14); 
	w1[1] = w1[0]; 
	w1[0] = w; 
	return out; 
}
short filter_triquad(short in, short* a2, short* b2, short* w2){
	int acc; 
	short out, w; 
	//this one is a bit different, as the coefficients are > 2 in the denom. 
	// hence have to normalize by a different fraction. 
	//multiply the numerator by 4 for a net gain of 8.
	
	acc = madd_sat(0   , in, 8192); 
	acc = madd_sat(acc, w2[0], a2[0]); 
	acc = madd_sat(acc, w2[1], a2[1]); 
	acc = madd_sat(acc, w2[2], a2[2]); 
	w = acc >> 13; 
	acc = madd_sat(0   , w, b2[0]); 
	acc = madd_sat(acc, w2[0], b2[1]); 
	acc = madd_sat(acc, w2[1], b2[1]); 
	acc = madd_sat(acc, w2[2], b2[0]); //symmetry.
	out = (short)(acc >> 13); 
	w2[2] = w2[1]; 
	w2[1] = w2[0]; 
	w2[0] = w; 
	return out; 
}

int main(int argc, char* argv[]){
	//read the samples from stdin.
	int in, out; 
	while(scanf("%d", &in)){
		//compare 5th and 4th order filters directly. 
		in -= 2048; 
		out = filter_biquad(in, a1_1, b1_1, w1_1); 
		out = filter_triquad(out, a1_2, b1_2, w1_2); 
		printf("%d	", out); 
		out = filter_biquad(in, a2_1, b2_1, w2_1); 
		out = filter_biquad(out, a2_2, b2_2, w2_2); 
		printf("%d
", out); 
	}
	return 0; 
}
// gcc -Wall filter_test.c -o filter_test
//  cat filter_test_in.dat | ./filter_test > filter_test_out.dat

Below, some matlab code to test the filtering.

% make some noisy data. 

x = randn(2000, 1);
x = x .* 1000; 
x = x + 2048; 
i = find(x >= 4096); 
x(i) = 4095; 
i = find(x < 0);
x(i) = 0; 
x = round(x); 
fid = fopen('filter_test_in.dat', 'w'); 
for k = 1:length(x)
	fprintf(fid, '%d\n', x(k)); 
end
fprintf(fid, 'quit'); 
fclose(fid); 

Here is a blackfin assembly implementation of the filter. Note that the filter weights (i0) and delays (i1, i2) need to be in two different cache/SRAM banks, or the processor will stall. Also note that, as per the second diagram above, the delays for the biquad 1 output are the same as the delays for the biquad 2 input, hence they are only represented once in memory.

	/*
	directform 1 biquad now, form II saturates 1.15 format.
	operate on the two samples in parallel (both in 1 32bit reg). 
	r0	x(n)				-- the input from the serial bus. 
	r1 	x(n-1) (yn-1)	-- ping-pong the delayed registers. 
	r2	x(n-2) (yn-2)	-- do this so save read cycles. 
	r3	y(n-1) (xn-1)
	r4	y(n-2) (xn-2)
	r5	b0 b1 			-- (low  high)
	r6	a0 a1
	
	i0 reads the coefficients into the registers; 
		it loops every 32 bytes (16 coef, 4 biquads)
	i1 reads the delays. 
		it loops every 640 bytes = 10 delays * 4 bytes/delay * 16 stereo channels.
		only increments. 
	i2 writes the delays, loops every 640 bytes. 
		also only increments. 
		if i1 and i2 are dereferenced in the same cycle, the processor will stall -- 
		each of the 1k SRAM memory banks has only one port. 
	
	format of delays in memory: 
	[x1(n-1) , 
	 x1(n-2) ,  
	 x2(n-1) aka y1(n-1) , 
	 x2(n-2) aka y1(n-2) , 
	 x3(n-1) aka y2(n-1) , 
	 x3(n-2) aka y2(n-2) , 
	 x4(n-1) aka y3(n-1) , 
	 x4(n-2) aka y3(n-2) , 
	 y4(n-1) , 
	 y4(n-2) ] 
	 --that's 10 delays, 4 bytes each. 
	*/
	
	r5 = [i0++] || r1 = [i1++]; 
	a0 = r0.l * r5.l , a1 = r0.h * r5.l || r6 = [i0++] ||  [i2++] = r0; 
	a0 += r1.l * r5.h, a1 += r1.h * r5.h || r2 = [i1++] ; 
	a0 += r2.l * r5.l, a1 += r2.h * r5.l || r3 = [i1++] ; 
	a0 += r3.l * r6.l, a1 += r3.h * r6.l || r4 = [i1++] ;
	r0.l = (a0 += r4.l * r6.h), r0.h = (a1 += r4.h * r6.h) (s2rnd) || [i2++] = r1;
	
	r5 = [i0++] || [i2++] = r0; 
	a0 = r0.l * r5.l, a1 += r0.h * r5.l || r6 = [i0++] || [i2++] = r3; 
	a0 += r3.l * r5.h, a1 += r3.h * r5.h || r1 = [i1++]; 
	a0 += r4.l * r5.l, a1 += r4.h * r5.l || r2 = [i1++]; 
	a0 += r1.l * r6.l, a1 += r1.h * r6.l; 
	r0.l = (a0 += r2.l * r6.h), r0.h = (a1 += r2.h * r6.h) (s2rnd); 
	
	r5 = [i0++] || [i2++] = r0; 
	a0 = r0.l * r5.l, a1 = r0.h * r5.l || r6 = [i0++] || [i2++] = r1; 
	a0 += r1.l * r5.h, a1 += r1.h * r5.h || r3 = [i1++]; 
	a0 += r2.l * r5.l, a1 += r2.h * r5.l || r4 = [i1++]; 
	a0 += r3.l * r6.l, a1 += r3.h * r6.l; 
	r0.l = (a0 += r4.l * r6.h), r0.h = (a1 += r4.h * r6.h) (s2rnd); 
	
	r5 = [i0++] || [i2++] = r0; 
	a0 = r0.l * r5.l, a1 += r0.h * r5.l || r6 = [i0++] || [i2++] = r3; 
	a0 += r3.l * r5.h, a1 += r3.h * r5.h || r1 = [i1++]; 
	a0 += r4.l * r5.l, a1 += r4.h * r5.l || r2 = [i1++]; 
	a0 += r1.l * r6.l, a1 += r1.h * r6.l; 
	r0.l = (a0 += r2.l * r6.h), r0.h = (a1 += r2.h * r6.h) (s2rnd); 
	
	[i2++] = r0; //save the delays. 
	[i2++] = r1; //normally this would be pipelined.

{584}
ref: notes-0 tags: IIR blackfin filter paper Drutarovsky date: 07-19-2008 17:20 gmt revision:4 [3] [2] [1] [0] [head]

images/584_1.pdf -- this is a highly useful paper, but it is no longer available on the author's website. For the good of all, I post it here. These may also have utility:

  • EE-197 -- Multicycle instructions and latencies for the Blackfin BF531/2/3 processors. (10-stage pipeline; I think this should also be applicable to the BF534/6/7 processors.)
    • Seems like they have the pipeline well forwarded/interlocked for most of the DSP-type operations that you might want to perform (this is not so surprising).
    • However, you should be careful to 8-word-align 64-bit instructions to prevent instruction-fetch stalls, and to spread simultaneous data reads among different sub-blocks of the data L1/SRAM.
      • Data sub-blocks are 4096 bytes long (0x1000).
  • EE-171 -- Multicycle instructions and latencies for the Blackfin BF535 (8 stage pipeline)

{523}
ref: bookmark-0 tags: DSP filter webdesign fisher book date: 12-12-2007 06:15 gmt revision:0 [head]

{94}
ref: bookmark-0 tags: particle_filter unscented monte_carlo MCMC date: 12-11-2007 16:46 gmt revision:2 [1] [0] [head]

images/94_1.pdf

  • covers both the particle filter and the unscented kalman filter ... the unscented kalman filter is used as the proposal distribution.

{493}
ref: Clancy-2007.09 tags: EMG channel equalization filter date: 11-11-2007 05:04 gmt revision:0 [head]

PMID-17614134[0] Equalization filters for multiple-channel electromyogram arrays.

  • idea: use digital filtering to equalize (as in communication systems) each electrode in a large array, and then use this to drive the common-mode (digital) rejection.
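
A rough matlab sketch of the general idea (my own illustration, not necessarily the paper's algorithm): fit a short FIR equalizer per channel, by least squares against a reference channel over a noise-dominated segment, so that shared interference lines up across channels and cancels when the common mode is subtracted:

% toy multi-channel data: a shared interference source, colored differently per electrode.
nch = 4;  L = 9;  N = 5000;
common = filter(ones(5,1)/5, 1, randn(N+L, 1));         % shared interference (illustrative)
X = zeros(N, nch);
for c = 1:nch
	hc = randn(3, 1);                                   % each electrode 'colors' the noise differently
	xc = filter(hc, 1, common);
	X(:, c) = xc(L+1:end) + 0.05*randn(N, 1);
end
% least-squares FIR equalizers mapping each channel onto channel 1 (the reference).
ref = X(L:end, 1);
Xeq = zeros(length(ref), nch);  Xeq(:, 1) = ref;
for c = 2:nch
	A = zeros(length(ref), L);
	for k = 1:L
		A(:, k) = X(L-k+1:end-k+1, c);                  % delayed copies of channel c
	end
	w = A \ ref;                                        % least-squares equalizer for channel c
	Xeq(:, c) = A * w;
end
cleaned = Xeq - mean(Xeq, 2) * ones(1, nch);            % common-mode rejection after equalization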

____References____

[0] Clancy EA, Xia H, Christie A, Kamen G, Equalization filters for multiple-channel electromyogram arrays. J Neurosci Methods 165:1, 64-71 (2007 Sep 15)

{441}
ref: notes-0 tags: DSP filter quantize lowpass elliptic matlab date: 09-02-2007 15:20 gmt revision:0 [head]

So, in order to measure how quantizing the filter coefficients affects the filter response, I quantized the coefficients of an 8th-order bandpass filter designed with:

[B1, A1] = ellip(4,0.8,70, [600/31.25e3 6/31.25]);
Here is a function that quantizes & un-quantizes the filter coefficients, then compares the frequency responses:
function [Bq, Aq, Bcoef, Acoef] = filter_quantize(B, A) 
% quantize filter coeficients & un-quantize so as to get some idea to
% the *actual* fixed-point filter performance. 
% assume that everything in broken into biquads. 
base = 10; 
Aroots = roots(A); 
Broots = roots(B); 
order = length(Aroots)/2; % the number of biquads.
scale = B(1).^(1/order); % distribute the gain across the biquads. 
for o = 0:order-1
	Acoef_biquad(o+1, :) = poly(Aroots(o*2+1 : o*2+2));
	Bcoef_biquad(o+1, :) = poly(Broots(o*2+1 : o*2+2))*scale; 
end
Bcoef = round(Bcoef_biquad .* 2^base); 
Acoef = round(Acoef_biquad .* 2^base); 
% now, reverse the process. 
Bq2 = Bcoef ./ 2^base; 
Aq2 = Acoef ./ 2^base; 
for o = 0:order-1
	Arootsq(o*2+1: o*2+2) = roots(Aq2(o+1, :)); 
	Brootsq(o*2+1: o*2+2) = roots(Bq2(o+1, :)); 
end
Aq = poly(Arootsq); 
Bq = poly(Brootsq).*B(1); 
[H, W] = freqz(B, A); 
[Hq, Wq] = freqz(Bq, Aq); 
figure
plot(W, db(abs(H)), 'b')
hold on
plot(W, db(abs(Hq)), 'r')
axis([0 pi -100 0])
The result: high frequency is not much affected

but low frequency is strongly affected.

But this is at a quantization to 10 bits -- quantization to 15 bits leads to reasonably good performance. I'm not sure if this conclusively indicates / counterindicates downsampling prior to highpassing for my application, but I would say that it does: if you downsample by 2, the highpass cutoff frequency will be 2x larger relative to Nyquist, hence the filter will be less sensitive to quantization errors, which affect low frequencies.

{410}
ref: notes-0 tags: multirate downsample DSP filter date: 08-09-2007 19:14 gmt revision:0 [head]

{176}
ref: Brockwell-2004.04 tags: particle_filter Brockwell BMI 2004 wiener filter population_vector MCMC date: 02-05-2007 18:54 gmt revision:1 [0] [head]

PMID-15010499[0] Recursive Bayesian Decoding of Motor Cortical Signals by Particle Filtering

  • It seems that particle filtering is 3-5 times more efficient / accurate than optimal linear control, and 7-10 times more efficient than the population vector method.
  • synthetic data: inhomogeneous Poisson point process, 400 bins of 30 ms width = 12 seconds, random-walk model (a toy sketch of this kind of setup follows the list below).
  • monkey data: 258 neurons recorded in independent experiments in the ventral premotor cortex. The monkey performed a 3D center-out task followed by an ellipse-tracing task.
  • Bayesian methods work optimally when their models/assumptions hold for the data being analyzed.
  • Bayesian filters in the past were computationally inefficient; particle filtering was developed as a method to address this problem.
  • tested the particle filter in a simulated study and a single-unit monkey recording ellipse-tracing experiment. (data from Rena and Schwartz 2003)
  • there is a lot of math in the latter half of the paper describing their results. The tracings look really good, and I guess this is from the quality of the single-unit recordings.
  • the appendix details the 'innovative methodology' ;)
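
For intuition, a toy bootstrap particle filter for the kind of synthetic setup described above -- a 1-D random-walk state observed through inhomogeneous Poisson bin counts. This is a generic matlab sketch (the tuning model, parameters, and resampling scheme are my own illustrative choices, not the paper's):

% 1-D random-walk state decoded from Poisson bin counts with a bootstrap particle filter.
% poissrnd / randsample require the Statistics Toolbox.
T = 400;  dt = 0.03;  ncells = 20;  npart = 1000;
q = 0.1;                                     % random-walk step std
beta = randn(ncells, 1);                     % each cell's (log-linear) tuning slope
base = 1.0;                                  % baseline log-rate
v = cumsum(q * randn(T, 1));                 % true state
lam = exp(base + beta * v') * dt;            % ncells x T expected counts
spikes = poissrnd(lam);                      % observed bin counts
p = zeros(npart, 1);  vhat = zeros(T, 1);
for t = 1:T
	p = p + q * randn(npart, 1);             % propagate particles through the random walk
	lp = zeros(npart, 1);
	for c = 1:ncells
		rate = exp(base + beta(c) * p) * dt;
		lp = lp + spikes(c, t) * log(rate) - rate;   % Poisson log-likelihood (constants dropped)
	end
	w = exp(lp - max(lp));  w = w / sum(w);  % normalized importance weights
	vhat(t) = w' * p;                        % posterior-mean estimate
	idx = randsample(npart, npart, true, w); % multinomial resampling
	p = p(idx);
end
plot(1:T, v, 1:T, vhat);  legend('true', 'decoded');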

____References____

[0] Brockwell AE, Rojas AL, Kass RE, Recursive bayesian decoding of motor cortical signals by particle filtering. J Neurophysiol 91:4, 1899-907 (2004 Apr)

{15}
ref: bookmark-0 tags: monte_carlo MCMC particle_filter probability bayes filtering biblography date: 0-0-2007 0:0 revision:0 [head]

http://www-sigproc.eng.cam.ac.uk/smc/papers.html -- sequential monte carlo methods. (bibliography)

{37}
ref: bookmark-0 tags: Unscented sigma_pint kalman filter speech processing machine_learning SDRE control UKF date: 0-0-2007 0:0 revision:0 [head]

{98}
ref: bookmark-0 tags: unscented kalman filter square-root Merwe date: 0-0-2007 0:0 revision:0 [head]

http://hardm.ath.cx/pdf/unscentedKalmanFilter.pdf -- the square-root transform. Contains a nice tabulation of the original algorithm, which is what I use.

http://hardm.ath.cx/pdf/unscentedKalmanFilter2000.pdf -- the original, with examples of state, parameter, and dual estimation

http://en.wikipedia.org/wiki/Kalman_filter -- wikipedia page, also has the unscented kalman filter

http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf -- Julier and Uhlmann's original paper. A bit brief.

http://www.cs.ubc.ca/~murphyk/Papers/Julier_Uhlmann_mar04.pdf -- Julier and Uhlmann's invited paper, quite excellent.

{92}
ref: bookmark-0 tags: training neural_networks with kalman filters date: 0-0-2006 0:0 revision:0 [head]

with the extended kalman filter, from '92: http://ftp.ccs.neu.edu/pub/people/rjw/kalman-ijcnn-92.ps

with the unscented kalman filter : http://hardm.ath.cx/pdf/NNTrainingwithUnscentedKalmanFilter.pdf

{95}
ref: Wu-2004.06 tags: Switching Kalman Filter BMU Wei Wu Donoghue date: 0-0-2006 0:0 revision:0 [head]

PMID-15188861 Modeling and Decoding Motor Cortical Activity Using a Switching Kalman Filter

{100}
ref: bookmark-0 tags: kalman filter introduction Maybeck 1979 date: 0-0-2006 0:0 revision:0 [head]

http://www.cs.unc.edu/~welch/media/pdf/maybeck_ch1.pdf -- great explanation!! really sensible!