m8ta
{1487}
ref: -0 tags: adaptive optics sensorless retina fluorescence imaging optimization zernike polynomials date: 11-15-2019 02:51 gmt revision:0 [head]

PMID-26819812 Wavefront sensorless adaptive optics fluorescence biomicroscope for in vivo retinal imaging in mice

  • Idea: use backscattered and fluorescence light to optimize the confocal image through imperfect optics ... and the lens of the mouse eye.
    • Optimization was based on hill-climbing / line search over each Zernike polynomial coefficient of the deformable mirror; see the sketch after this list. (The mirror had to be characterized beforehand, naturally.)
    • No guidestar was needed!
  • Were able to resolve the dendritic processes of EGFP-labeled Thy1 ganglion cells and Cx3 glia.
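A minimal sketch of this kind of sensorless, metric-driven line search (apply_modes, acquire, and the sharpness metric are hypothetical placeholders; the paper's actual metric and mode ordering may differ):

```python
import numpy as np

def sharpness(image):
    # Image-quality metric: summed squared intensity favors tight foci.
    return np.sum(image.astype(float) ** 2)

def optimize_zernike(apply_modes, acquire, n_modes=12,
                     amplitudes=np.linspace(-0.5, 0.5, 9)):
    """Coordinate-wise line search over Zernike coefficients.
    apply_modes(coeffs) pushes coefficients to the deformable mirror
    (assumes the mirror's mode-to-actuator map is pre-characterized);
    acquire() returns a confocal frame."""
    coeffs = np.zeros(n_modes)
    for m in range(n_modes):
        best_amp, best_metric = 0.0, -np.inf
        for a in amplitudes:           # line search on mode m
            trial = coeffs.copy()
            trial[m] = a
            apply_modes(trial)
            metric = sharpness(acquire())
            if metric > best_metric:
                best_metric, best_amp = metric, a
        coeffs[m] = best_amp           # keep the best and move on
    apply_modes(coeffs)
    return coeffs
```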

{1341}
ref: -0 tags: image registration optimization camera calibration sewing machine date: 07-15-2016 05:04 gmt revision:20 [19] [18] [17] [16] [15] [14] [head]

Recently I was tasked with converting from image coordinates to real world coordinates from stereoscopic cameras mounted to the end-effector of a robot. The end goal was to let the user (me!) click on points in the image, and have the robot record that position & ultimately move to it.

The overall strategy is to get a set of points in both image and RW coordinates, then fit some sort of model to the measured data. I began by printing out a grid of (hopefully evenly-spaced and perpendicular) lines on a laser printer; spacing was ~1.1 mm. This grid was manually aligned to the axes of robot motion by moving the robot along one axis & checking that the lines did not jog.

The images were modeled as a grating with quadratic phase in $u,v$ texture coordinates:

$$p_h(u,v) = \sin((a_h u/1000 + b_h v/1000 + c_h)v + d_h u + e_h v + f_h) + 0.97 \quad (1)$$

$$p_v(u,v) = \sin((a_v u/1000 + b_v v/1000 + c_v)u + d_v u + e_v v + f_v) + 0.97 \quad (2)$$

$$I(u,v) = \frac{16\, p_h p_v}{\sqrt{2 + 16 p_h^2 + 16 p_v^2}} \quad (3)$$

The 1000 was used to make the parameter search distribution more spherical; $c_h, c_v$ were bias terms to seed the solver; 0.97 was a duty-cycle term fit by inspection to the image data; (3) is a modified sigmoid.
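A direct numpy transcription of (1)-(3), handy for checking the model against a captured frame (image size and the parameter packing are my assumptions):

```python
import numpy as np

def grid_model(params_h, params_v, width=1280, height=720):
    """Evaluate the grating model (1)-(3) over the image plane.
    params_h = (a_h, b_h, c_h, d_h, e_h, f_h); params_v likewise."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    a_h, b_h, c_h, d_h, e_h, f_h = params_h
    a_v, b_v, c_v, d_v, e_v, f_v = params_v
    p_h = np.sin((a_h*u/1000 + b_h*v/1000 + c_h)*v + d_h*u + e_h*v + f_h) + 0.97
    p_v = np.sin((a_v*u/1000 + b_v*v/1000 + c_v)*u + d_v*u + e_v*v + f_v) + 0.97
    return 16*p_h*p_v / np.sqrt(2 + 16*p_h**2 + 16*p_v**2)
```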

$I$ was then optimized over the parameters using a GPU-accelerated (CUDA) nonlinear stochastic optimization:


$$(a_h, b_h, d_h, e_h, f_h \mid a_v, b_v, d_v, e_v, f_v) = \operatorname{argmin} \sum_u \sum_v (I(u,v) - Img(u,v))^2 \quad (4)$$

Optimization was carried out by drawing parameters from a normal distribution with a diagonal covariance matrix, set by inspection, and mean iteratively set to the best solution; horizontal and vertical optimization steps were separable and carried out independently. Equation (4) was sampled 18k times, and equation (3) 34 billion times, per frame. Hence the need for GPU acceleration.
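A minimal CPU sketch of this iterated random search (the real version evaluated equation (3) on the GPU in batch; loss here would wrap equation (4), and the per-parameter standard deviations and step counts are illustrative):

```python
import numpy as np

def stochastic_fit(loss, theta0, sigma, n_iters=60, n_samples=300,
                   rng=np.random.default_rng(0)):
    """Draw candidates from N(best, diag(sigma^2)); keep the best.
    loss(theta) evaluates the squared-error objective (4)."""
    best = np.asarray(theta0, float)
    best_loss = loss(best)
    for _ in range(n_iters):
        cand = best + sigma * rng.standard_normal((n_samples, best.size))
        losses = np.array([loss(c) for c in cand])
        i = np.argmin(losses)
        if losses[i] < best_loss:          # mean moves to the best solution
            best, best_loss = cand[i], losses[i]
    return best, best_loss
```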

This yielded a set of 10 parameters (again, $c_h$ and $c_v$ were bias terms and kept constant) which modeled the data (e.g. grid lines) for each of the two cameras. This process was repeated every 0.1 mm from 0 - 20 mm height (z) from the target grid, resulting in a sampled function for each of the parameters, e.g. $a_h(z)$. This required 13 trillion evaluations of equation (3).

Now, the task was to use this model to generate the forward and reverse transforms between image and world coordinates; I approached this by generating a data set of the grid intersections in both image and world coordinates. To start this process, the known image origin $u_{origin}|_{z=0}, v_{origin}|_{z=0}$ was used to find the corresponding roots of the periodic auxiliary functions $p_h, p_v$:

$$\frac{3\pi}{2} + 2\pi n_h = a_h u v/1000 + b_h v^2/1000 + (c_h + e_h)v + d_h u + f_h \quad (5)$$

$$\frac{3\pi}{2} + 2\pi n_v = a_v u^2/1000 + b_v u v/1000 + (c_v + d_v)u + e_v v + f_v \quad (6)$$

Or ..

$$n_h = \operatorname{round}\left( \left( a_h u v/1000 + b_h v^2/1000 + (c_h + e_h)v + d_h u + f_h - \tfrac{3\pi}{2} \right) / (2\pi) \right) \quad (7)$$

$$n_v = \operatorname{round}\left( \left( a_v u^2/1000 + b_v u v/1000 + (c_v + d_v)u + e_v v + f_v - \tfrac{3\pi}{2} \right) / (2\pi) \right) \quad (8)$$

From this, we get variables $n_{h,origin}|_{z=0}$ and $n_{v,origin}|_{z=0}$, which are the offsets that align the sine functions $p_h, p_v$ with the physical origin. Now the reverse (world to image) transform was needed, for which a two-stage Newton scheme was used to solve equations (7) and (8) for $u,v$. Note that these are equations of phase, not image intensity -- otherwise this direct method would not work!

First, the equations were linearized with three steps of (9-11) to get in the right ballpark:

$$u_0 = 640, \quad v_0 = 360$$

$$n_h = n_{h,origin}|_z + [-30 .. 30], \quad n_v = n_{v,origin}|_z + [-20 .. 20] \quad (9)$$

$$B_i = \left[ \begin{matrix} \frac{3\pi}{2} + 2\pi n_h - a_h u_i v_i/1000 - b_h v_i^2/1000 - f_h \\ \frac{3\pi}{2} + 2\pi n_v - a_v u_i^2/1000 - b_v u_i v_i/1000 - f_v \end{matrix} \right] \quad (10)$$

$$A_i = \left[ \begin{matrix} d_h & c_h + e_h \\ c_v + d_v & e_v \end{matrix} \right]$$ and

$$\left[ \begin{matrix} u_{i+1} \\ v_{i+1} \end{matrix} \right] = \operatorname{mldivide}(A_i, B_i) \quad (11)$$

where mldivide is the Matlab left-division operator.

Then three steps with the full Jacobian were made to attain accuracy:

$$J_i = \left[ \begin{matrix} a_h v_i/1000 + d_h & a_h u_i/1000 + 2 b_h v_i/1000 + c_h + e_h \\ 2 a_v u_i/1000 + b_v v_i/1000 + c_v + d_v & b_v u_i/1000 + e_v \end{matrix} \right] \quad (12)$$

$$K_i = \left[ \begin{matrix} a_h u_i v_i/1000 + b_h v_i^2/1000 + (c_h + e_h) v_i + d_h u_i + f_h - \frac{3\pi}{2} - 2\pi n_h \\ a_v u_i^2/1000 + b_v u_i v_i/1000 + (c_v + d_v) u_i + e_v v_i + f_v - \frac{3\pi}{2} - 2\pi n_v \end{matrix} \right] \quad (13)$$

$$\left[ \begin{matrix} u_{i+1} \\ v_{i+1} \end{matrix} \right] = \left[ \begin{matrix} u_i \\ v_i \end{matrix} \right] - J_i^{-1} K_i \quad (14)$$
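A condensed sketch of the two-stage solve (9)-(14) for a single $(n_h, n_v)$ pair (parameter packing mirrors the sketches above; iteration counts follow the three-steps-each description):

```python
import numpy as np

def solve_intersection(ph, pv, n_h, n_v, u0=640.0, v0=360.0):
    """Two-stage Newton solve of the phase equations (7),(8) for (u,v).
    ph = (a_h, b_h, c_h, d_h, e_h, f_h); pv likewise."""
    a_h, b_h, c_h, d_h, e_h, f_h = ph
    a_v, b_v, c_v, d_v, e_v, f_v = pv
    t_h = 1.5*np.pi + 2*np.pi*n_h
    t_v = 1.5*np.pi + 2*np.pi*n_v
    u, v = u0, v0
    for _ in range(3):                     # linearized steps (10)-(11)
        A = np.array([[d_h, c_h + e_h],
                      [c_v + d_v, e_v]])
        B = np.array([t_h - a_h*u*v/1000 - b_h*v**2/1000 - f_h,
                      t_v - a_v*u**2/1000 - b_v*u*v/1000 - f_v])
        u, v = np.linalg.solve(A, B)       # mldivide equivalent
    for _ in range(3):                     # full-Jacobian steps (12)-(14)
        J = np.array([[a_h*v/1000 + d_h, a_h*u/1000 + 2*b_h*v/1000 + c_h + e_h],
                      [2*a_v*u/1000 + b_v*v/1000 + c_v + d_v, b_v*u/1000 + e_v]])
        K = np.array([a_h*u*v/1000 + b_h*v**2/1000 + (c_h+e_h)*v + d_h*u + f_h - t_h,
                      a_v*u**2/1000 + b_v*u*v/1000 + (c_v+d_v)*u + e_v*v + f_v - t_v])
        u, v = np.array([u, v]) - np.linalg.solve(J, K)
    return u, v
```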

Solutions $(u,v)$ were verified by plugging back into equations (7) and (8) & verifying that $n_h, n_v$ came out the same. Inconsistent solutions were discarded; solutions outside the image space $[0, 1280), [0, 720)$ were also discarded. The process (10) - (14) was repeated to tile the image space with grid intersections, as indicated in (9), and this was repeated for all $z$ in $(0 .. 0.1 .. 20)$, resulting in a large (74k point) dataset of $(u, v, n_h, n_v, z)$, which was converted to full real-world coordinates based on the measured spacing of the grid lines, $(u, v, x, y, z)$. Between individual $z$ steps, $n_{h,origin}, n_{v,origin}$ were re-estimated to minimize (for a current $z'$):

$$(u_{origin}|_{z' + 0.1} - u_{origin}|_{z'})^2 + (v_{origin}|_{z' + 0.1} - v_{origin}|_{z'})^2 \quad (15)$$

with grid-search, and the method of equations (9-14). This was required as the stochastic method used to find the original image model parameters was agnostic to phase, and so phase (via the $f$ parameters) could jump between individual $z$ measurements (the origin did not move much between successive measurements, hence (15) fixed the jumps).

To this dataset, a model was fit:

$$\left[ \begin{matrix} u \\ v \end{matrix} \right] = A \left[ \begin{matrix} 1 & x & y & z & x'^2 & y'^2 & z'^2 & w^2 & x'y' & x'z' & y'z' & x'w & y'w & z'w \end{matrix} \right]^T \quad (16)$$

Where $x' = \frac{x}{10}$, $y' = \frac{y}{10}$, $z' = \frac{z}{10}$, and $w = \frac{20}{20 - z}$. $w$ was introduced as an auxiliary variable to assist in perspective mapping, a la computer graphics. Likewise, $x, y, z$ were scaled so the quadratic nonlinearity better matched the data.

The model (16) was fit using regular linear regression over all rows of the validated dataset. This resulted in a second set of coefficients $A$ for a model of world coordinates to image coordinates; again, the model was inverted using Newton's method (Jacobian omitted here!). These coefficients, one set per camera, were then integrated into the C++ program for displaying video, and the inverse mapping (using closed-form matrix inversion) was used to convert mouse clicks to real-world coordinates for robot motor control. Even with the relatively poor wide-FOV cameras employed, the method is accurate to $\pm 50 \mu m$ and precise to $\pm 120 \mu m$.
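The fit itself is ordinary least squares over the feature vector of (16); a sketch (np.linalg.lstsq stands in for whatever solver was actually used):

```python
import numpy as np

def design_row(x, y, z):
    """Feature vector of equation (16)."""
    xp, yp, zp = x/10.0, y/10.0, z/10.0
    w = 20.0 / (20.0 - z)
    return [1, x, y, z, xp*xp, yp*yp, zp*zp, w*w,
            xp*yp, xp*zp, yp*zp, xp*w, yp*w, zp*w]

def fit_model(uv, xyz):
    """uv: (N,2) image coords; xyz: (N,3) world coords. Returns A (2x14)."""
    F = np.array([design_row(*p) for p in xyz])      # (N,14) design matrix
    A, *_ = np.linalg.lstsq(F, uv, rcond=None)       # least-squares fit
    return A.T

# usage: uv_pred = fit_model(uv, xyz) @ design_row(x, y, z)
```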

{1339}
ref: -2012 tags: Emo Todorov contact invariant animation optimization complex motor behavior date: 05-04-2016 17:34 gmt revision:3 [2] [1] [0] [head]

* Watch the [http://homes.cs.washington.edu/~todorov/index.php?video=MordatchSIGGRAPH12&paper=Mordatch,%20SIGGRAPH%202012 movies! Discovery of complex behaviors through contact-invariant optimization]

  • Complex movements tend to have phases within which the set of active contacts (hands, feet) remains invariant (hence can exert forces on the objects they are contacting, or vice versa).
  • Discovering suitable contact sets is the central goal of optimization in our approach.
    • Once this is done, optimizing the remaining aspects of the movement tends to be relatively straightforward.
    • They do this through auxiliary scalar variables which indicate whether a contact is active or not, hence whether to enable contact forces.
      • Allows the optimizer to 'realize' that movements should have phases.
      • Also "shapes the energy landscape to be smoother and better behaved"
  • Initial attempts made these auxiliary contact variables discrete -- when and where -- which was easy for humans to specify, but made optimization intractable.
    • Motion between contacts was modeled as a continuous feedback system.
  • Instead, the contact variables $c_i$ have to be continuous.
  • Contact forces are active only when $c_i$ is 'large'.
    • Hence all potential contacts have to be enumerated in advance.
  • Then, parameterize the end effector (position) and use inverse kinematics to figure out joint angles.
  • Optimization:
    • Break the movement up into a predefined number of phases of equal duration.
    • Interpolate end-effector with splines
    • Physics constraints are 'soft' -- helps the optimizer : 'powerful continuation methods'
      • That is, weight different terms differently in phases of the optimization process.
      • Likewise, appendages are allowed to stretch and intersect, with a smooth cost.
    • Contact-invariant cost penalizes distortion and slip (difference between endpoint and surface, measured normal, and velocity relative to contact point)
      • Contact point is also 'soft' and smooth via distance-normalized weighting.
    • All contact forces are merged into an $f \in \mathbb{R}^6$ vector, which includes both forces and torques. Hence the contact force origin can move within the contact patch, which again makes the optimization smoother.
    • Set $\tau(q, \dot{q}, \ddot{q}) = J(q)^T f + B u$, where $J(q)$ maps generalized velocities to contact-point velocities (so $J(q)^T$ maps contact forces $f$ to generalized torques), and $B$ maps control forces $u$ to the full space.
    • $\tau(q, \dot{q}, \ddot{q}) = M(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q)$ -- $M$ is the inertia matrix, $C$ the Coriolis matrix, $G$ gravity.
      • This means: forces need to add to zero. (contact forces $f$ + control $u$ = inertia + Coriolis + gravity)
    • Hence need to optimize $f$ and $u$.
      • Use friction-cone approximation for non-grab (feet) contact forces.
    • These are optimized within a quadratic programming framework.
      • LBFGS algo.
      • Squared terms for friction and control, squared penalization for penetrating and slipping on a surface.
    • Phases of optimization (continuation method):
      • $L(s) = L_{CI}(s) + L_{physics}(s) + L_{task}(s) + L_{hint}(s)$ (sketched after this list)
      • task term only: wishful thinking.
      • all 4 terms, physics lessened -- gradually add constraints.
      • all terms, no hint, full physics.
  • Total time to simulate 2-10 minutes per clip (only!)
  • The equations of the paper seem incomplete -- not clear how the QP equation fits in with $L(s)$, and how $c_i$ fits in with $J(q)^T f + B u$.
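A toy sketch of the continuation schedule as I read it (the weight values, the cost callables, and scipy's L-BFGS-B are stand-ins for the paper's actual setup):

```python
import numpy as np
from scipy.optimize import minimize  # stands in for their LBFGS setup

def total_cost(s, weights, costs):
    """L(s) = w_ci*L_CI + w_phys*L_physics + w_task*L_task + w_hint*L_hint."""
    return sum(w * c(s) for w, c in zip(weights, costs))

def continuation_solve(s0, costs):
    """Three optimization phases with progressively stricter physics."""
    schedule = [(0.0, 0.0, 1.0, 0.0),   # task term only: 'wishful thinking'
                (1.0, 0.1, 1.0, 1.0),   # all 4 terms, physics lessened
                (1.0, 1.0, 1.0, 0.0)]   # full physics, hint removed
    s = np.asarray(s0, float)
    for weights in schedule:
        s = minimize(total_cost, s, args=(weights, costs),
                     method="L-BFGS-B").x
    return s
```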

{1212}
ref: Nordhausen-1994.02 tags: Utah array electrodes optimization date: 08-14-2014 01:24 gmt revision:2 [1] [0] [head]

PMID-8180807[0] Optimizing recording capabilities of the Utah Intracortical Electrode Array.

  • Nordhausen, Rousche, Normann (1994)
  • Originally it was designed for stimulation in a visual prosthesis.
  • Thought that the large surface area would securely anchor it to the cortex
    • Turns out you need to put gore-tex on top to keep it from being expelled.
  • Varied the exposed electrode tip to determine the optimum area.
  • Oldschool computer plots ...

____References____

[0] Nordhausen CT, Rousche PJ, Normann RA, Optimizing recording capabilities of the Utah Intracortical Electrode Array. Brain Res 637:1-2, 27-36 (1994 Feb 21)

{54}
ref: bookmark-0 tags: intrinsic evolution FPGA GPU optimization algorithm genetic date: 01-27-2013 22:27 gmt revision:1 [0] [head]


  • http://evolutioninmaterio.com/ - using FPGAs in intrinsic evolution, e.g. the device is actually programmed and tested.
  • - Adrian Thompson's homepage; there are many PDFs of his work there.
  • Parallel genetic algorithms on programmable graphics hardware
    • basically deals with optimizing mutation and fitness evaluation using the parallel architecture of a GPU: larger populations can be evaluated at one time.
    • does not concern the intrinsic evolution of algorithms on the GPU, as in Adrian Thompson's work.
    • uses a linear congruential generator to produce random numbers.
    • used a really simple problem: the Colville minimization problem, which only requires searching a four-dimensional space.
  • Cellular genetic algorithms and local search for 3-SAT problem on Graphic Hardware
    • concerning SAT (satisfiability): "many practical problems, such as graph coloring, job-shop scheduling, and real-world scheduling can be represented as a SAT problem."
    • 3-SAT refers to the length of the clauses; length 3 is apparently very hard.
    • they use a combination of greedy search (flip the bit that increases the fitness the largest amount) and random-walk via point mutations to keep the algorithm away from local minima; see the sketch after this list.
    • also use a cellular genetic algorithm (which works better on a GPU): selection is against the best neighbor, not the global best individual.
    • only used a GeForce 6200 GPU, but it was still 5x faster than a P4 at 2.4 GHz.
  • Evolution of a robot controller using cartesian genetic programming
    • cartesian programming has many advantages over traditional tree based methods - e.g. bloat-free evolution & faster evolution through neutral search.
    • cartesian programming is characterized by its encoding of a graph as a string of integers that represent the functions and connections between graph nodes, and program inputs and outputs.
      • this encoding was developed in the course of evolving electronic circuits, e.g. above ?
      • can encode a non-connected graph. the genetic material that is not utilized is analogous to biological junk DNA.
    • even in converged populations, small mutations can produce large changes in phenotypic behavior.
    • in this work he only uses directed graphs - there are no cycles & an organized flow of information.
    • mentions automatically defined functions - what is this??
    • used diffusion to define the fitness values of particular locations in the map. the fewer particles there eventually were in a grid location, the higher the fitness value of the robot that managed to get there.
  • Hardware evolution: on the nature of artificially evolved circuits - doctoral dissertation.
    • because evolved circuits utilize the parasitic properties of devices, they have little tolerance to variation in component values. Reverse-engineering the evolved circuits to improve their tolerance is not easy.
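A minimal sketch of the greedy + random-walk bit-flip step described above (the clause representation and the mix probability p_random are my assumptions, not from the paper):

```python
import random

def walksat_step(clauses, assignment, p_random=0.3):
    """One greedy/random-walk step on a 3-SAT instance.
    clauses: list of 3-tuples of nonzero ints; negative = negated variable.
    assignment: dict {var: bool}. Returns (assignment, satisfied?)."""
    def n_sat(a):
        return sum(any((lit > 0) == a[abs(lit)] for lit in c) for c in clauses)
    unsat = [c for c in clauses
             if not any((lit > 0) == assignment[abs(lit)] for lit in c)]
    if not unsat:
        return assignment, True                  # already satisfied
    clause = random.choice(unsat)
    if random.random() < p_random:               # random-walk move (escape minima)
        var = abs(random.choice(clause))
    else:                                        # greedy move: best fitness gain
        var = max((abs(lit) for lit in clause),
                  key=lambda v: n_sat({**assignment, v: not assignment[v]}))
    assignment[var] = not assignment[var]
    return assignment, False
```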

{968}
ref: Bassett-2009.07 tags: Weinberger cognitive efficiency beta band neuroimaging EEG task performance optimization network size effort date: 12-28-2011 20:39 gmt revision:1 [0] [head]

PMID-19564605[0] Cognitive fitness of cost-efficient brain functional networks.

  • Idea: smaller, tighter networks are correlated with better task performance
    • working memory task in normal subjects and schizophrenics.
  • Larger networks operate with higher beta frequencies (more effort?) and show less efficient task performance.
  • Not sure about the noisy data, but v. interesting theory!

____References____

[0] Bassett DS, Bullmore ET, Meyer-Lindenberg A, Apud JA, Weinberger DR, Coppola R, Cognitive fitness of cost-efficient brain functional networks. Proc Natl Acad Sci U S A 106:28, 11747-52 (2009 Jul 14)

{821}
ref: work-0 tags: differential evolution function optimization date: 07-09-2010 14:46 gmt revision:3 [2] [1] [0] [head]

Differential evolution (DE) is an optimization method, somewhat like Nelder-Mead or simulated annealing (SA). Much like genetic algorithms, it utilizes a population of solutions and selection to explore and optimize the objective function. However, instead of perturbing vectors randomly or greedily descending the objective function's gradient, it uses the difference between individual population vectors to update hypothetical solutions. See below for an illustration.

At my rather cursory reading, this serves to adapt the distribution of hypothetical solutions (or population of solutions, to use the evolutionary term) to the structure of the underlying function to be optimized. Judging from Price and Storn (the inventors; images/821_1.pdf), DE works in situations where simulated annealing (which I am using presently, in the robot vision system) fails, and is applicable to higher-dimensional problems than simplex methods or SA. The paper tests DE on 100-dimensional problems, and it is able to solve these with on the order of 50k function evaluations. Furthermore, they show that it finds function extrema quicker than stochastic differential equations (SDE, alas from '85), which use the gradient of the function to be optimized.
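A minimal sketch of the classic DE/rand/1/bin scheme (F and CR here are common defaults, not values from the paper):

```python
import numpy as np

def differential_evolution(loss, bounds, pop_size=40, F=0.8, CR=0.9,
                           n_gens=200, rng=np.random.default_rng(0)):
    """DE/rand/1/bin: candidate = a + F*(b - c), crossed over with parent."""
    lo, hi = np.asarray(bounds, float).T
    dim = lo.size
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([loss(x) for x in pop])
    for _ in range(n_gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = a + F * (b - c)              # difference-vector update
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True       # ensure >=1 mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f = loss(trial)
            if f < fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f
    return pop[np.argmin(fit)], fit.min()
```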

I'm surprised that this method slipped under my radar for so long - why hasn't anyone mentioned this? Is it because it has no proofs of convergence? Has it more recently been superseded? (The paper is from 1997.) Yet I'm pleased, because it means that there are also many other algorithms equally clever and novel (and simple?) out there in the literature, or waiting to be discovered.

{652}
ref: notes-0 tags: policy gradient reinforcement learning aibo walk optimization date: 12-09-2008 17:46 gmt revision:0 [head]

Policy Gradient Reinforcement Learning for Fast Quadrupedal Locomotion

  • simple, easy to understand policy gradient method (rough sketch below)! many papers cite this on google scholar.
  • compare to {651}
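As I read it, the method estimates the gradient by finite differences over batches of randomly perturbed parameter sets (e.g. gait parameters); a rough sketch, with the perturbation size, batch size, and step size as placeholders:

```python
import numpy as np

def policy_gradient_step(evaluate, theta, eps=0.05, n_policies=15, alpha=0.1,
                         rng=np.random.default_rng(0)):
    """One iteration of finite-difference policy gradient:
    evaluate(theta) runs the policy (e.g. a parameterized walk) and
    returns a score; each parameter is perturbed by -eps, 0, or +eps."""
    perturb = rng.choice([-eps, 0.0, eps], size=(n_policies, theta.size))
    scores = np.array([evaluate(theta + p) for p in perturb])
    grad = np.zeros_like(theta)
    for j in range(theta.size):
        for sign in (-1.0, 1.0):          # mean score of +eps minus -eps group
            mask = perturb[:, j] == sign * eps
            if mask.any():
                grad[j] += sign * scores[mask].mean()
    grad /= max(np.linalg.norm(grad), 1e-9)   # normalized ascent direction
    return theta + alpha * grad
```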

{409}
ref: bookmark-0 tags: optimization function search matlab linear nonlinear programming date: 08-09-2007 02:21 gmt revision:0 [head]

http://www.mat.univie.ac.at/~neum/

very nice collection of links!!

{141}
ref: learning-0 tags: motor control primitives nonlinear feedback systems optimization date: 0-0-2007 0:0 revision:0 [head]

http://hardm.ath.cx:88/pdf/Schaal2003_LearningMotor.pdf - not in PubMed.