ref: -2022 tags: adipose tissue micro-RNA miRNA extracellular vesicles diabetes date: 09-12-2022 18:00 gmt revision:0 [head]

PMID-36070680 Extracellular vesicles mediate the communication of adipose tissue with brain and promote cognitive impairment associated with insulin resistance

  • Claim that adipose tissue communicates with the brain through blood-borne extracellular vesicles containing miRNA.
  • These EVs and their miRNA cargo impair cognitive function and cause synaptic loss.
  • File this under 'not sure if i believe it / but sure is interesting if true'!

ref: -2019 tags: backprop neural networks deep learning coordinate descent alternating minimization date: 07-21-2021 03:07 gmt revision:1 [0] [head]

Beyond Backprop: Online Alternating Minimization with Auxiliary Variables

  • This paper is sort-of interesting: rather than back-propagating the errors, you optimize auxiliary variables, pre-nonlinearity 'codes', in a last-to-first layer order. The optimization is done to minimize a multimodal logistic loss function; the math is not worked out for other loss functions, but presumably this is not a fundamental limit. The loss function also includes a quadratic term on the weights.
  • After the 'codes' are set, optimization can proceed in parallel on the weights. This is done with either straight SGD or adaptive ADAM.
  • Weight L2 penalty is scheduled over time.

This is interesting in that the weight updates can be done in parallel - perhaps more efficient - but you are still propagating errors backward, albeit via optimizing 'codes'. Given the vast infrastructure devoted to auto-diff + backprop, I can't see this being adopted broadly.

That said, the idea of alternating minimization (which is used eg for EM clustering) is powerful, and this paper does describe (though I didn't read it) how there are guarantees on the convexity of the alternating minimization. Likewise, the authors show how to improve the performance of the online / minibatch algorithm by keeping around memory variables, in the form of covariance matrices.
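The alternating scheme is easy to sketch. Below is a toy numpy version under simplifying assumptions (quadratic losses everywhere, ReLU, one hidden layer - not the paper's multimodal logistic setup): the codes are updated by gradient descent, then each layer's weights are refit independently.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Toy regression data from a random two-layer teacher network.
X = rng.normal(size=(200, 10))
Y = relu(X @ rng.normal(size=(10, 8))) @ rng.normal(size=(8, 3))

W1 = rng.normal(size=(10, 8))
W2 = rng.normal(size=(8, 3)) * 0.1
A = X @ W1                      # auxiliary pre-nonlinearity 'codes'

def loss(W1, W2, A):
    return np.sum((A - X @ W1) ** 2) + np.sum((Y - relu(A) @ W2) ** 2)

initial = loss(W1, W2, A)
for it in range(30):
    # 1) code step: gradient descent on the codes, weights frozen.
    for _ in range(10):
        gA = 2 * (A - X @ W1) + 2 * ((relu(A) @ W2 - Y) @ W2.T) * (A > 0)
        A -= 1e-3 * gA
    # 2) weight step: with codes fixed, the layers decouple and each
    #    weight update is an independent least-squares problem --
    #    this is the part that could run in parallel.
    W1 = np.linalg.lstsq(X, A, rcond=None)[0]
    W2 = np.linalg.lstsq(relu(A), Y, rcond=None)[0]
```

Note the weight step never increases the loss (each lstsq solve is exact given the codes); only the code step needs a step size.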

ref: -0 tags: rutherford journal computational theory neumann complexity wolfram date: 05-05-2020 18:15 gmt revision:0 [head]

The Structures for Computation and the Mathematical Structure of Nature

  • Broad, long, historical.

ref: -2016 tags: MAPseq Zador connectome mRNA plasmic library barcodes Peikon date: 03-06-2019 00:51 gmt revision:1 [0] [head]

PMID-27545715 High-Throughput Mapping of Single-Neuron Projections by Sequencing of Barcoded RNA.

  • Justus M. Kebschull, Pedro Garcia da Silva, Ashlan P. Reid, Ian D. Peikon, Dinu F. Albeanu, Anthony M. Zador
  • Another tool for the toolbox, but I still can't help but like microscopy: while the number of labels in MAPseq is far higher, the information per readout is much lower; an imaged slice holds a lot of information, including dendritic / axonal morphology, which sequencing doesn't get. Natch, you'd want to use both, or FISseq + ExM.

ref: -0 tags: hahnloser zebrafinch LMAN HVC song learning internal model date: 10-12-2018 00:33 gmt revision:1 [0] [head]

PMID-24711417 Evidence for a causal inverse model in an avian cortico-basal ganglia circuit

  • Recorded and stimulated the LMAN (upstream, modulatory) region of the zebrafinch song-production & learning pathway.
  • Found evidence, albeit weak, for a mirror arrangement or 'causal inverse' there: neurons fire bursts prior to syllable production with some motor delay, ~30 ms, and also fire single spikes with a delay of ~10 ms to the same syllables.
    • This leads to an overall 'mirroring offset' of about 40 ms, which is sufficiently supported by the data.
    • The mirroring offset is quantified by looking at the cross-covariance of audio-synchronized motor and sensory firing rates.
  • Causal inverse: a sensory target input generates a motor activity pattern required to cause, or generate that same sensory target.
    • Similar to the idea of temporal inversion via memory.
  • Data is interesting, but not super strong; per the discussion, the authors were going for a much broader theory:
    • Normal Hebbian learning says that if a presynaptic neuron fires before a postsynaptic neuron, then the synapse is potentiated.
    • However, there is another side of the coin: if the presynaptic neuron fires after the postsynaptic neuron, the synapse can be similarly strengthened, permitting the learning of inverse models.
      • "This order allows sensory feedback arriving at motor neurons to be associated with past postsynaptic patterns of motor activity that could have caused this sensory feedback. " So: stimulate the sensory neuron (here hypothetically in LMAN) to get motor output; motor output is indexed in the sensory space.
      • In mammals, a similar rule has been found to describe synaptic connections from the cortex to the basal ganglia [37].
      • ... or, based on anatomy, a causal inverse could be connected to a dopaminergic VTA, thereby linking with reinforcement learning theories.
      • Simple reinforcement learning strategies can be enhanced with inverse models as a means to solve the structural credit assignment problem [49].
  • Need to review literature here, see how well these theories of cortical-> BG synapse match the data.
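The mirroring-offset measurement above is just the lag at the peak of a cross-covariance. A toy numpy sketch with made-up rate traces (the 40 ms echo is baked in purely for illustration, it is not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake 'motor' firing-rate trace, and a 'sensory' trace that echoes
# it 40 ms later plus noise (1 sample = 1 ms here).
n, true_lag_ms = 4000, 40
motor = rng.normal(size=n)
sensory = np.roll(motor, true_lag_ms) + 0.5 * rng.normal(size=n)

# Cross-covariance of the mean-subtracted traces; the lag of the
# peak is the mirroring offset.
m = motor - motor.mean()
s = sensory - sensory.mean()
xcov = np.correlate(s, m, mode="full")      # lags -(n-1) .. n-1
lags = np.arange(-n + 1, n)
est_lag_ms = lags[np.argmax(xcov)]
```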

ref: -0 tags: journal review neuro date: 04-19-2013 22:58 gmt revision:1 [0] [head]

PLoS One:

PMID-23251670 Ultra-Bright and -Stable Red and Near-Infrared Squaraine Fluorophores for In Vivo Two-Photon Imaging

  • Podgorski K, Terpetschnig E, Klochko OP, Obukhova OM, Haas K.
  • between 750 and 950 nm, where absorption and scattering by tissues is minimized
  • Near-infrared (NIR) probes are ideal for biological imaging because few endogenous molecules in organisms absorb or emit in the NIR region: there is little background autofluorescence to contend with.
  • Squaraine-based fluorescent sensors have been developed for a variety of analytes including Ca2+ [20], pH [21], protein and DNA, and squaraine-based labels exhibit an increase in fluorescence intensity and lifetime upon binding to biomolecules [22], [23]. The photostability of squaraine dyes is comparable to those of conventional cyanine dyes [23], but can be substantially increased by the synthesis of a squaraine-rotaxane [24], an interlocked structure wherein a macrocycle encases the electrophilic squarylium core, preventing its exposure to nucleophilic attack in solution (Fig. 1a).
  • See also (this seems a growing trend):
    • PMID-23292608 Choi, H.S. et al. Targeted zwitterionic near-infrared fluorophores for improved optical imaging. Nat. Biotechnol. 31, 148–153 (2013).
      • focus on low background emission for maximizing SNR & image-guided surgery on tumors.
    • Lukinavičius, G. et al. A near-infrared fluorophore for live-cell super-resolution microscopy of cellular proteins. Nat. Chem. 5, 132–139 (2013).

PMID-22056675 A gene-fusion strategy for stoichiometric and co-localized expression of light-gated membrane proteins

  • Kleinlogel S, Terpitz U, Legrum B, Gökbuget D, Boyden ES, Bamann C, Wood PG, Bamberg E.
  • Push-pull (excitation and inhibition) or complementary (white light) optogenetics.
  • Fused with a gastric chloride pump for good membrane localization.

PMID-22056675 Substantial Generalization of Sensorimotor Learning from Bilateral to Unilateral Movement Conditions

  • These findings collectively suggest a substantial overlap between the neural processes underlying bilateral and unilateral movements, supporting the idea that bilateral training, often employed in stroke rehabilitation, is a valid method for improving unilateral performance.

PMID-23408972 Credit Assignment during Movement Reinforcement Learning

  • Chadderdon GL, Neymotin SA, Kerr CC, Lytton WW. -- SUNY Downstate
  • A Bayesian credit-assignment model with built-in forgetting accurately predicts their [humans] trial-by-trial learning.

PMID-23382796 Visuomotor Learning Enhanced by Augmenting Instantaneous Trajectory Error Feedback during Reaching

  • Patton JL, Wei YJ, Bajaj P, Scheidt RA.
  • Learning in the gain 2 and offset groups was nearly twice as fast as in controls. Not surprising.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0054771 Flexible Switching of Feedback Control Mechanisms Allows for Learning of Different Task Dynamics

  • unimanual / bimanual tasks.

PMID-23365648 Recognizing Sights, Smells, and Sounds with Gnostic Fields

  • Christopher Kanan UCSD
  • Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of “gnostic” neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation.
    • Gnostic: of or relating to knowledge.
    • Supervised learning.
    • "The algorithm can be implemented in a few hours".
  • Tested by classifying contemporary artists from emulated auditory nerve responses. 78% accuracy.
  • Tested for image recognition w/ standardized datasets.
  • Method:
    • Feature-extraction.
    • PCA based whitening.
    • Coarse template matching within the gnostic units via dot product.
      • Feature vector is learned via unsupervised clustering of the whitened training features for each channel and category.
      • Number of gnostic units per category is set as a function of the number of feature vectors and their dimensionality.
    • Take the unit with the largest activity (inhibitive competition).
      • This is a highly nonlinear function
        • which normalizes based on population variability (contraharmonic mean -- weights the inverse of the SNR, effectively).
    • Sum over time.
    • Decode using a linear classifier over the gnostic units.
      • Trained using Balanced Winnow algorithm. (multiplicative and not additive weight updates, allegedly neurally inspired)
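Stripped of the temporal summation and the Winnow readout, the core of the method is whitening plus per-category template matching with a max (winner-take-all) step. A toy sketch, with random training exemplars standing in for the clustered feature vectors (the blob data and parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-category 'features' (Gaussian blobs stand in for the
# extracted sensory features).
X0 = rng.normal(loc=-2.0, size=(100, 5))
X1 = rng.normal(loc=+2.0, size=(100, 5))
X = np.vstack([X0, X1])

# PCA-based whitening, fit on the training features.
mu = X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(X - mu, rowvar=False))
whiten = lambda Z: (Z - mu) @ evecs / np.sqrt(evals)

def unit_norm(Z):
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

# 'Gnostic units': template vectors per category.  The paper clusters
# the whitened features; here we just take the first 10 exemplars.
templates = [unit_norm(whiten(Xc))[:10] for Xc in (X0, X1)]

def classify(x):
    z = unit_norm(whiten(x[None, :]))[0]
    # Dot-product template match, then winner-take-all within each
    # category (the inhibitive-competition / max step), then argmax
    # across categories (a stand-in for the linear readout).
    return int(np.argmax([(T @ z).max() for T in templates]))
```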

PMID-23300606 Decoding Hindlimb Movement for a Brain Machine Interface after a Complete Spinal Transection

  • Manohar A, Flint RD, Knudsen E, Moxon KA.
  • SC transection resulted in a 40% decrease in M1 information content & a persistent reduction in neuronal firing rates.
  • Very similar to Nicolelis & Chapin 1999. Meh.
  • See Wyler 1980 {909}

Journal of Neural Engineering:

PMID-23449002 Model-based rational feedback controller design for closed-loop deep brain stimulation of Parkinson's disease.

  • Goal: rational design of stimulation pattern based on control theory.
  • Needed a model of PD, of course -- opted for a thalamic relay controlled by GPi inhibition.
  • Full PID controller
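For reference, the textbook discrete PID loop (this is the generic controller, not the paper's model-based design; plant and gains below are arbitrary illustrative values):

```python
class PID:
    """Textbook PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt          # accumulate I term
        deriv = (err - self.prev_err) / self.dt # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order plant (dx/dt = u - x) to a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(5000):
    u = pid.step(1.0, x)
    x += (u - x) * 0.01      # Euler step of the plant
```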

PMID-23428966 Improving brain-machine interface performance by decoding intended future movements.

  • Goal: improve BMI performance by minimizing the deleterious effects of delay in the BMI control loop.
  • We mitigate the effects of delay by decoding the subject's intended movements a short time in the future.

PMID-23428937 An implantable wireless neural interface for recording cortical circuit dynamics in moving primates.

  • Borton DA, Yin M, Aceros J, Nurmikko A. Brown.
  • 24Mbps, attached to Utah probe, discussed this with Schwarz.
  • Inductive recharging of li-ion battery.
  • Pigs, etc.

PMID-23428877 Local-learning-based neuron selection for grasping gesture prediction in motor brain machine interfaces.

  • Nonlinear neural activities are decomposed into a set of linear ones in a weighted feature space.
  • Used a margin to segregate different gestures and L1 normalization to remove irrelevant neurons.

PMID-22954906 Sparse decoding of multiple spike trains for brain-machine interfaces.

  • Tankus A, Fried I, Shoham S.
  • Similar idea as above --
  • This method is based on sparse decomposition of the high-dimensional neuronal feature space, projecting it onto a low-dimensional space of codes serving as unique class labels.
  • Tested against a range of existing methods using simulations and recordings of the activity of 1592 neurons in 23 neurosurgical patients who performed motor or speech tasks.

PMID-23010756 Comprehensive characterization and failure modes of tungsten microwire arrays in chronic neural implants.

  • Prasad A, Xue QS, Sankar V, Nishida T, Shaw G, Streit WJ, Sanchez JC.
  • {1193}

PMID-23283391 Performance of conducting polymer electrodes for stimulating neuroprosthetics.

  • Green RA, Matteucci PB, Hassarati RT, Giraud B, Dodds CW, Chen S, Byrnes-Preston PJ, Suaning GJ, Poole-Warren LA, Lovell NH.
  • PEDOT is a fine electrode substrate. Surprising?
  • Can deliver ~ 20x the charge of Pt.

PMID-23160018 Properties and application of a multichannel integrated circuit for low-artifact, patterned electrical stimulation of neural tissue.

  • Hottowy P, Skoczeń A, Gunning DE, Kachiguine S, Mathieson K, Sher A, Wiącek P, Litke AM, Dąbrowski W.
  • Made a 64-channel 'Stimchip'
  • Each channel has a DAC-driven configurable voltage or current source.
    • Has additional artifact-minimization circuitry.
  • Designed for MEAs :-/

Nature Methods:

PMID-23524393 Whole-brain functional imaging at cellular resolution using light-sheet microscopy

  • Ahrens MB, Keller PJ.
  • Here we use light-sheet microscopy to record activity, reported through the genetically encoded calcium indicator GCaMP5G, from the entire volume of the brain of the larval zebrafish in vivo at 0.8 Hz, capturing more than 80% of all neurons at single-cell resolution.
  • 5um slices, 4um thick light sheet.
  • We determined an average signal-to-noise ratio of 180 ± 11 (mean ± s.e.m., n = 31; not considering the signal-to-noise ratio of the calcium indicator itself, see Online Methods) for neurons in different regions of the light sheet–based whole-brain recording. Owing to this high ratio and the short volumetric imaging interval, which was comparable to the time course of GCaMP5G at room temperature, the occurrence of action potentials within the recording interval was detectable in most cases.
  • We used the albino (slc45a2) mutant
    • The mouse brain is significantly bigger, is largely impenetrable to visible light and is surrounded by a skull. Realistically, we may not see methods that enable whole brain activity mapping in mammals at the cellular level for quite a while.
  • Moved the laser beam in two dimensions & the objective in one; the laser was scanned via piezoelectric mirrors, and the objective was also under piezoelectric control.
    • Used segmentation to tease apart co-active ensembles.
    • Understanding of actual function not too deep, but then again neither was my reading of the paper.
    • Prominent feature is the autonomous hindbrain oscillator.

PMID-23142873 Two-photon optogenetics of dendritic spines and neural circuits

  • In neocortical slices.
  • C1V1 -- combination of ChR1 and VChR1. Slower kinetics more suitable for galvanometer based scanning.
  • AAV virus injected P21 mice, 400um from pial surface of somatosensory cortex.
  • measured currents via patch-clamp.
  • Also tested two-photon spatial light modulator (SLM)-based microscopy, a holographic method that enables optical targeting of groups of neurons or spines located in arbitrary three-dimensional (3D) positions
    • goal: several neurons can be selectively or simultaneously activated in three dimensions—an approach that could enable the optical dissection of the function of microcircuits with single-cell precision.

Nanowires, useful for Flip's idea.

  • These from [editorial http://www.nature.com/nmeth/journal/v9/n4/full/nmeth.1961.html]
  • PMID-22231664 Vertical nanowire electrode arrays as a scalable platform for intracellular interfacing to neuronal circuits
    • Robinson JT, Jorgolli M, Shalek AK, Yoon MH, Gertner RS, Park H. Harvard.
    • looks like it's limited to slices & 100's of neurons atm.
    • Compared to patch-pipe, of course.
    • Lithographic fabrication; pillars were thinned via thermal oxidation and wet chemical etching. Sounds very tricky.
    • 3um microwire length.
    • HEK293 and rat cortical neurons.
  • PMID-22179566 Intracellular recordings of action potentials by an extracellular nanoscale field-effect transistor
  • PMID-22327876 Intracellular recording of action potentials by nanopillar electroporation

Of personal interest:

Richardson-Lucy (RL) deconvolution for sub-diffraction limit imaging.
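The RL update is a short multiplicative iteration: the estimate is scaled by the back-blurred ratio of data to re-blurred estimate. A 1-D numpy sketch (the Gaussian PSF and spike positions are made up):

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=200, eps=1e-12):
    # est <- est * ( mirrored_psf conv (data / (psf conv est)) )
    est = np.full_like(data, data.mean())
    psf_mirror = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(est, psf, mode="same")
        est = est * np.convolve(data / (blurred + eps), psf_mirror,
                                mode="same")
    return est

# Two close spikes blurred by a Gaussian PSF...
x = np.zeros(64)
x[30], x[34] = 1.0, 0.5
k = np.arange(-6, 7)
psf = np.exp(-0.5 * (k / 2.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
# ...are substantially re-sharpened by the iteration.
restored = richardson_lucy(blurred, psf)
```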

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0056624 Collaborative Filtering for Brain-Computer Interaction Using Transfer Learning

  • Tailor the language of human-computer interaction to the users, based on k-NN in previous data.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0055518 Brain Training Game Boosts Executive Functions, Working Memory and Processing

  • 'Brain Age' is effective in a double-blind study.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0061390 Cognitive Training Improves Sleep Quality and Cognitive Function among Older Adults with Insomnia

  • Debatable causality.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0054402 Perceived Multi-Tasking Ability, Impulsivity, and Sensation Seeking

  • The findings indicate that the persons who are most capable of multi-tasking effectively are not the persons who are most likely to engage in multiple tasks simultaneously. To the contrary, multi-tasking activity as measured by the Media Multitasking Inventory and self-reported cell phone usage while driving were negatively correlated with actual multi-tasking ability
  • Finally, the findings suggest that people often engage in multi-tasking because they are less able to block out distractions and focus on a singular task. Participants with less executive control - low scorers on the Operation Span task and persons high in impulsivity - tended to report higher levels of multi-tasking activity.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0052500 Learning and Long-Term Retention of Large-Scale Artificial Languages

  • We report data from a large-scale learning experiment that demonstrates that adults can learn words from unsegmented input in much larger languages than previously documented and that they retain the words they learn for years. These results suggest that statistical word segmentation could be scalable to the challenges of lexical acquisition in natural language learning.
  • A unique artificial language was generated for each participant. Each language had 1000 word types and 60,000 word tokens (for 10 hours of speech). Frequencies of words were distributed via a Zipfian frequency distribution: f(r) ∝ 1/r, where f(r) is the frequency of the word with rank r, such that there were a few highly frequent words and many more with lower frequencies (max = 8000, min = 10 tokens) [30].
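The Zipfian token sampling is simple to reproduce; a numpy sketch with the paper's stated parameters (1000 types, 60,000 tokens):

```python
import numpy as np

rng = np.random.default_rng(3)

n_types, n_tokens = 1000, 60_000
ranks = np.arange(1, n_types + 1)
p = 1.0 / ranks
p /= p.sum()                      # Zipf: frequency proportional to 1/rank
tokens = rng.choice(n_types, size=n_tokens, p=p)

counts = np.bincount(tokens, minlength=n_types)
# the rank-1 word expects ~ n_tokens / H(1000) ~ 8000 tokens,
# matching the paper's reported maximum of 8000
```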

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0052042 Non-Hebbian Learning Implementation in Light-Controlled Resistive Memory Devices

  • Light- and voltage-controlled memristors. Interesting.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0058284 Attractor Metabolic Networks

  • We have found that the systemic enzymatic activities are governed by attractors with capacity to store functional metabolic patterns which can be correctly recovered from specific input stimuli. The network attractors regulate the catalytic patterns, modify the efficiency in the connection between the multienzymatic complexes, and stably retain these modifications. Here for the first time, we have introduced the general concept of attractor metabolic network, in which this dynamic behavior is observed.
  • Used a Hopfield network via a Boltzmann machine.
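For the attractor-recall idea itself, a minimal Hopfield sketch (Hebbian outer-product storage, sign-threshold recall - not the paper's metabolic model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Store two random +/-1 patterns with the Hebbian outer-product rule.
n = 64
patterns = rng.choice([-1.0, 1.0], size=(2, n))
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)          # no self-connections

def recall(x, n_steps=10):
    # synchronous sign-threshold updates fall into a stored attractor
    for _ in range(n_steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0           # break exact ties deterministically
    return x

# A corrupted cue (5 flipped bits) falls back to the stored pattern.
cue = patterns[0].copy()
cue[rng.choice(n, size=5, replace=False)] *= -1
out = recall(cue)
```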

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0059196 Prenatal Exposure to a Polychlorinated Biphenyl (PCB) Congener Influences Fixation Duration on Biological Motion at 4-Months-Old: A Preliminary Study

  • infants exposed to PCBs have delayed / impaired development. Expected, but still sad.

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0060437 Hunger in the Absence of Caloric Restriction Improves Cognition and Attenuates Alzheimer's Disease Pathology in a Mouse Model

  • Ghrelin, a hunger-inducing hormone, attenuates AD pathology in the absence of caloric restriction, and the neuroendocrine aspects of hunger also prevent age-related cognitive decline.

ref: -0 tags: story falls lake journal mexican coincidence date: 08-18-2011 17:32 gmt revision:2 [1] [0] [head]

I'm an avid open-water swimmer, and other than the quarry and beach, I spend many fridays hoping the water in Falls lake is not too choppy. If it's glassy and smooth (and even sometimes when it's not), I can fall into the hypnotic 4/4 chug of stroke-stroke-stroke-breathe, stroke-str ... not hard, since the brown water is featureless, and the above-water scenery doesn't change much either.

Several years ago I was out on Falls lake doing my thing, comfortably clear in the middle of the lake, heading back to the beach. In my unawareness I failed to notice that a thunderstorm had grown in the hot summer afternoon. Normally I'm rather debonair about these things, but I have been in places just before they were struck by lightning, and this felt a little like that.

So, SOL Tim starts considering the rather limited options (god) (hold breath for as long as possible) (are they the same?). Just then, some Mexican guy on a kayak comes paddling out of ... nowhere ... and asks me if I need help. I bearhug the back of his boat and we get back to shore before the storm breaks. .... Another friday, another season and I set off with a friend clear across Falls lake, which is far, like 3mi round trip. I chat with a Mexican dude before we launch the ships; I guess he seems a bit familiar, but I'm too nervous, eager, and worrying about the thoughts/abilities of my friend to think much. That swim goes fine, minus all the damned speedboats and the ravenous hunger that sets in afterward.

Yesterday I had intended to swim at a pool, but some toddling kid chose to contaminate it, and so back to Falls Lake. It's choppy and hard to swim, and I don't make it as far as intended; again before launching, I meet a Mexican dude, and he asks me if I'm crossing the lake again. I tell him no, not enough time; the water envelops, and I'm back in the swim coma, gone to the point that when I get back the sun is down and the moon has risen.

Surprisingly, when I get back the Mexican guy and his family are still there, slowly cleaning up BBQ debris by the light of highbeams and one crappy flashlight. It's cool and peaceful on the lake, but they probably should have left half an hour ago; as I go to the restroom to change, I wave to the guy and realize two things simultaneously: (1) fuck, it's been the same guy, (2) he may have delayed departure, gracefully and surreptitiously, until I was back. Curiosity makes me want to ask if he had, to see if coincidence licked me again, but that's not right; I didn't.

ref: ai-0 tags: automatic programming journal notes date: 12-31-2010 05:24 gmt revision:4 [3] [2] [1] [0] [head]

This evening, on the drive back from wacky (and difficult) Russian-style yoga, I got a chance to explain to my brother what I really want to be working on, the thing that really tickles my fancy. My brother and I, so much as genetic commonality and common upbringing seem to affect, have very similar styles of thinking, which made explaining things a bit easier. For you, dear reader, I'll expand a bit.

I'd like to write a program that writes other programs, iteratively, given some objective function / problem statement / environment in which to interact. The present concrete goal is to have said program make a program that is able to lay out PCBs with quality similar to that of humans. The overarching framework that I'm planning on using is genetic/evolutionary algorithms (the latter does not have crossover, fyi), but no one has applied GA to the problem in this way: most people use GA to solve a particular instance of a problem. Rubbish, I say, this is energy wasteful!

Rubbish, you may return: the stated problem requires a degree of generalization and disconnect from the 'real world' (the PCB) that makes GAs extremely unlikely to come up with any solutions. Expressed another way: the space to be explored is too large (program begets program begets solution). This is a very sensible critique; there is no way in hell a GA can solve this problem. They are notably pathetic at exploring space in an energy-efficient way (to conclude a paragraph again with energy... ).

There are known solutions for this: memory -- cache the results, in terms of algorithm & behavior, of all 'hypotheses' or individuals tried out by a GA. This is what humans do -- they remember the results of their experiment, and substitute the result rather than running a test again. But humans do something far more sophisticated and interesting than just memory - they engineer systems; engineering is an iterative process that often goes down wrong design paths, yet it nonetheless delivers awesome things like Saabs and such.

As I described to K--, engineering is not magic and can be (has been?) described mechanistically. First of all, most engineering artifacts start off from established, well-characterized components, aggregated through the panoply of history. Some of these components describe how other components are put together, things that are either learned in school or by taking things apart. Every engineer, ala Newton, stands on the vast shoulders of the designers before; hence any program must also have these shoulders available. The components are assembled into a system in a seemingly ad-hoc and iterative procedure: sometimes you don't know what you want, so you play with the parts sorta randomly, and see what interesting stuff comes out. Other times you know damn well what you / your boss / the evil warlord who holds you captive wants. Both modes are interesting (and the dichotomy is artificial), but the latter is more computer-like, hence to be modeled.

Often the full details of the objective function or desired goal are very unclear in the hands of the boss / evil warlord (1), despite how reluctant they may be to admit this. Such an effect is well documented in Fred Brooks' book, __The Design of Design__. Likewise, how to get to a solution is unclear in the mind of an engineer, so he/she shuffles things around in the mind (2),

  1. looking for components that deliver particular desired features (e.g. in an electronic system, gain makes me think of an op-amp)
  2. looking for components that remove undesirable features (e.g. a recent noise problem on my wireless headstage made me think of an adaptive decorrelating filter I made once)
  3. looking for transforms that make the problem solvable in a linear space, something that Moshe Looks calls knob-twiddling.
    1. this is from both sides -- transforms that convert the problem or the nascent solution.
    2. An example would be the FFT. This makes it easy to see spectral features.
    3. Another example, used even more recently, is coordinate transforms - it makes things like line-line intersection much easier.
    4. When this doesn't work, you can do a far more powerful automatic coordinate transform - math, calculus. This is ultimately what I needed when figuring out the shortest line segment between a line segment and an ellipse. Don't ask.

This search is applied iteratively, apparently a good bit of the time subconsciously. A component exists in our mind as a predictive model of how the thing behaves, so we simulate it on input, observe output, and check to see if anything there is correlated / decorrelated with target features. (One would imagine that our general purpose modeling ability grew from needing to model and predict the world and all the yummy food/dangerous animals/warlords in it). The bigger the number of internal models in the engineer's mind, and the bigger the engineer's passion for the project, the more components can be simulated and selected for. Eventually progress is made, and a new subproblem is attacked in the same way, with a shorter path and different input/output to model/regress against.

This is very non-magical, which may appall the more intuitive designers among us. It is also a real issue, because it doesn't explain (or only poorly explains) really interesting engineering: e.g. the creation of the Fourier transform, the creation of the expectation-maximization algorithm, all the statistical and mathematical hardware that lends beauty and power to our design lives. When humans create these things, they are at the height of their creative ability, and thus it's probably a bit ridiculous to propose having a computer program do the same. That does not prevent me from poking at the mystery here, though: perhaps it is something akin to random component assembly (and these must be well known components (highly accurate, fast internal models); most all innovations were done by people exceptionally familiar with their territory), with verification against similarly intimately known data (hence, all things in memory - fast 'iteration cycles'). This is not dissimilar to evolutionary approaches to deriving laws. A Cornell physicist / computer scientist was able to generate natural laws via a calculus-infused GA {842}, and other programs were able to derive Copernicus' laws from planetary data. Most interesting scientific formulae are short, which makes them accessible to GAs (and also aesthetically pleasurable, and/or memelike, but hey!). In contrast engineering has many important design patterns that are borrowed by analogy from real-world phenomena, such as the watermark algorithm, sorting, simulated annealing, the MVC framework, object-oriented programming, WIMP interface, verb/noun interface, programming language, even GAs themselves! Douglas Hofstadter has much more to say about analogies, so I defer to him here.

Regardless, as K-- pointed out, without some model for creativity (even one as soulless as the one above), any proposed program-creating program will never come up with anything really new. To use a real-world analogy, at his work the boss is extremely crazy - namely, he mistook a circuit breaker for an elevator (in a one-story factory!). But, this boss also comes up with interminable and enthusiastic ideas, which he throws against the wall of his underlings a few dozen times a day. Usually these ideas are crap, but sometimes they are really good, and they stick. According to K--, the way his mind works is basically opaque and illogical (I've met a few of these myself), yet he performs an essential job in the company - he spontaneously creates new ideas. Without such a boss, he claimed, the creations of a program-creating-program will be impoverished.

And perhaps hence this should be the first step. Tonight I also learned that at the company (a large medical devices firm) they try to start projects at the most difficult step. That way, projects that are unlikely to succeed are killed as soon as possible. The alternate strategy, which I have previously followed, is to start with the easiest things first, so you get some motivation to continue. Hmm...

The quandary to shuffle your internal models over tonight then, dear readers, is this: is creativity actually (or accurately modeled by) random component-combination creation (boss), followed by a selection/rejection (internal auditing, or colleague auditing)? (3)

  • (1) Are there any beneficent warlords?
  • (2) Yet: as I was educated in a good postmodernist tradition, this set of steps ('cultural software') is not the only way to design. I'm just using it since, well, best to start with something that already works.
  • (3) If anyone reads this and wants to comment, just edit this. Perhaps you want to draw a horizontal line and write comments below it? Anyway, writing is active thinking, so thanks for helping me think.

ref: work-0 tags: perl fork read lines external program date: 06-15-2010 18:08 gmt revision:0 [head]

Say you have a program, called from a perl script, that may run for a long time. How do you get at the program's output as it appears?

Simple - open a pipe to the program's STDOUT. See http://docstore.mik.ua/orelly/perl/prog3/ch16_03.htm Below is an example - I wanted to see the output of programs run, for convenience, from a perl script (I didn't want to have to remember - or get wrong - all the command line arguments for each).


$numArgs = $#ARGV + 1;
if($numArgs == 1){
	if($ARGV[0] eq "table"){
		open STATUS, "sudo ./video 0xc1e9 15 4600 4601 0 |";
		print while <STATUS>;   # echo each line as it appears
		close STATUS;
	}elsif($ARGV[0] eq "arm"){
		open STATUS, "sudo ./video 0x1ff6 60 4597 4594 4592 |";
		print while <STATUS>;
		close STATUS;
	}else{
		print "$ARGV[0] not understood - say arm or table!\n";
	}
}

ref: -0 tags: linux keyboard international characters symbols date: 10-01-2009 14:09 gmt revision:1 [0] [head]

Need to type international symbols and characters on your keyboard, e.g. for writing in another language? Do this:

 cp /usr/share/X11/locale/en_US.UTF-8/Compose ~/.XCompose 
xmodmap -e 'keycode 115 = Multi_key  Multi_key  Multi_key  Multi_key'
xmodmap -e 'keycode 116 = Multi_key  Multi_key  Multi_key  Multi_key'

Where 115 and 116 are the windows keys on my keyboard. (You can find this out for your keyboard by running 'xev'.)


  • <windows key> s s -> ß ("Wie heißt du?")
  • <windows><shift><~> a -> ã ("Eles estão bons")
  • <windows><shift><"> u -> ü ("Bücher")
  • <windows><,> c -> ç ("almoço")
  • <windows><=> c -> € ("Custa-la €2")


And now for something completely unrelated but highly amusing, at least in title: Optimal Brain Damage

ref: notes-0 tags: ocaml run external command stdin date: 09-10-2008 19:32 gmt revision:1 [0] [head]

It is not obvious how to run an external command in ocaml & get its output. Here is my hack, which simply polls the output of the program until there is nothing left to read. Not very highly tested, but I wanted to share, as I don't think there is an example of the same on PLEAC.

let run_command cmd = 
	let inch = Unix.open_process_in cmd in
	let infd = Unix.descr_of_in_channel inch in
	(* fixed 20000-byte buffer: longer output is silently truncated *)
	let buf = String.create 20000 in
	let il = ref 1 in
	let offset = ref 0 in
	(* read until EOF (Unix.read returns 0) or the buffer is full *)
	while !il > 0 do (
		let inlen = Unix.read infd buf !offset (20000 - !offset) in
		il := inlen ; 
		offset := !offset + inlen ;
	) done ; 
	ignore(Unix.close_process_in inch) ;  
	if !offset = 0 then "" else String.sub buf 0 !offset

Note: Fixed a nasty string-termination/memory-reuse bug Sept 10 2008