ref: -2018 tags: luke metz meta learning google brain sgd model mnist Hebbian date: 08-05-2021 01:07 gmt revision:2 [1] [0] [head]

Meta-Learning Update Rules for Unsupervised Representation Learning

  • Central idea: meta-train a training-network (an MLP) that trains a task-network (also an MLP) to do unsupervised learning on one dataset.
  • The training network is optimized through SGD based on few-shot linear evaluation on a test set, typically different from the unsupervised training set.
  • The training-network is a per-weight MLP which takes in the layer input, layer output, and a synthetic error (denoted η), and generates a and b, which are then fed into an outer-product Hebbian learning rule.
  • η itself is formed through a backward pass through weights V, which affords something like backprop -- but not exactly backprop, of course. See the figure.
  • Training consists of building up very large backward-through-time gradient estimates with respect to the parameters of the training-network. (And there are a lot of them!)
  • Trained on CIFAR10, MNIST, FashionMNIST, IMDB sentiment prediction. All have their input permuted to keep the training-network from learning per-task weights. Instead the network should learn to interpret the statistics between datapoints.
  • Indeed, it does this -- albeit with limits. Performance is OK, but only when the downstream supervised evaluation uses the same very limited datasets as the meta-optimization.
    • For comparison, tasks like MNIST can be essentially completely solved with direct supervised learning; the meta-learned representation gets to about 80% accuracy.
  • Images were kept small -- about 20x20 -- to speed up the inner loop unsupervised learning. Still, this took on the order of 200 hours across ~500 TPUs.
  • See, as a comparison, Keren's paper, Meta-learning biologically plausible semi-supervised update rules. It's conceptually nice but only evaluates the two-moons and two-gaussian datasets.
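The inner-loop update described above can be sketched in a few lines of numpy. This is a toy stand-in, not the paper's implementation: the real per-weight update network is meta-trained, whereas here it is a fixed function, and all names and dimensions are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# One layer of the task-network, y = tanh(W x), plus separate backward
# weights V used to construct the synthetic error.
n_in, n_out = 8, 4
W = rng.normal(0.0, 0.1, (n_out, n_in))
V = rng.normal(0.0, 0.1, (n_out, n_in))

def update_net(x, y, eta):
    """Stand-in for the meta-learned per-weight MLP: maps layer input,
    layer output, and synthetic error to Hebbian factors a (post) and
    b (pre). A fixed toy function here; in the paper this mapping is
    what gets meta-trained."""
    a = np.tanh(y + eta)     # postsynaptic factor, shape (n_out,)
    b = np.tanh(x)           # presynaptic factor, shape (n_in,)
    return a, b

x = rng.normal(size=n_in)
y = np.tanh(W @ x)                 # forward pass through the layer
e_back = rng.normal(size=n_in)     # stand-in error signal being passed back
eta = V @ e_back                   # synthetic error at this layer's output

a, b = update_net(x, y, eta)
W = W + 0.01 * np.outer(a, b)      # outer-product Hebbian update
```

The key structural point survives even in the toy: the weight update is rank-one (an outer product of a per-output and a per-input factor), so the meta-learned network never sees or emits full gradient matrices.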

This is a clearly written, easy-to-understand paper. The results are not highly compelling, but as a first set of experiments, it's successful enough.

I wonder what more constraints (fewer parameters, per the genome), more options for architecture modifications (e.g. different feedback schemes, per neurobiology), and a black-box optimization algorithm (evolution) would do?

ref: -2019 tags: meta learning feature reuse deepmind date: 10-06-2019 04:14 gmt revision:1 [0] [head]

Rapid learning or feature reuse? Towards understanding the effectiveness of MAML

  • It's feature re-use!
  • Show this by freezing the weights of a 5-layer convolutional network when training on Mini-ImageNet, either 5-way 1-shot or 5-way 5-shot.
  • From this derive ANIL, where only the last network layer is updated in task-specific training.
  • Show that ANIL works for basic RL learning tasks.
  • This means that, roughly, the network does not benefit much from joint encoding -- encoding both the task at hand and the feature set. Features can be learned independently of the task (at least for these tasks), with little loss.
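The ANIL idea is simple enough to sketch in numpy: in the task-specific inner loop, only the head is adapted, and the meta-trained body stays frozen. The network, data, and hyperparameters below are toy stand-ins of my own, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-part network: frozen feature body, adaptable linear head.
W_body = rng.normal(0.0, 0.5, (16, 8))   # meta-trained features, frozen at adaptation time
W_head = np.zeros((5, 16))               # task-specific head (5-way classification)

def features(x):
    return np.maximum(0.0, W_body @ x)   # frozen ReLU features

def adapt_head(xs, ys, lr=0.1, steps=20):
    """ANIL-style inner loop: softmax cross-entropy gradient steps on the
    head only; the body never receives an update."""
    global W_head
    for _ in range(steps):
        for x, y in zip(xs, ys):
            h = features(x)
            logits = W_head @ h
            p = np.exp(logits - logits.max())
            p /= p.sum()
            p[y] -= 1.0                       # d(loss)/d(logits)
            W_head -= lr * np.outer(p, h)

xs = [rng.normal(size=8) for _ in range(20)]
ys = [i % 5 for i in range(20)]
body_before = W_body.copy()
adapt_head(xs, ys)
assert np.array_equal(W_body, body_before)    # only the head moved
```

The point of the paper is that this restricted inner loop loses almost nothing relative to full MAML on their benchmarks, which is what motivates the "feature reuse" interpretation.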

ref: -0 tags: computational biology evolution metabolic networks andreas wagner genotype phenotype network date: 06-12-2017 19:35 gmt revision:1 [0] [head]

Evolutionary Plasticity and Innovations in Complex Metabolic Reaction Networks

  • João F. Matias Rodrigues, Andreas Wagner
  • Our observations suggest that the robustness of the Escherichia coli metabolic network to mutations is typical of networks with the same phenotype.
  • We demonstrate that networks with the same phenotype form large sets that can be traversed through single mutations, and that single mutations of different genotypes with the same phenotype can yield very different novel phenotypes.
  • Entirely computational study.
    • Examines what is possible given known metabolic building-blocks.
  • Methodology: collated a list of all metabolic reactions in E. coli (726 reactions, excluding 205 transport reactions) out of 5870 possible reactions.
    • Then ran random-walk mutation experiments to see where the genotype + phenotype could move. Each genotype along the walk had to be viable on either a rich (many carbon sources) or minimal (glucose) growth medium.
    • Viability was determined by Flux-balance analysis (FBA).
      • In our work we use a set of biochemical precursors from E. coli 47-49 as the set of required compounds a network needs to synthesize, ‘’’by using linear programming to optimize the flux through a specific objective function’’’, in this case the reaction representing the production of biomass precursors we are able to know if a specific metabolic network is able to synthesize the precursors or not.
      • Used Coin-OR and ILOG to optimize the metabolic concentrations (I think?) per given network.
    • This included the ability to synthesize all required precursor biomolecules; see supplementary information.
      • “Viable” is highly permissive -- non-zero biomolecule concentration using FBA and linear programming.
    • Genomic distance = Hamming distance between binary vectors, where 1 = enzyme / reaction present and 0 = mutated off; a distance of 0 = identical genotypes, 1 = completely different genotypes.
  • Between pairs of viable genetic-metabolic networks, only a minority (30 - 40%) of reactions are essential.
    • This fraction naturally increases with increasing carbon-source diversity.
    • When they go back and examine networks that can sustain life on any of (up to) 60 carbon sources, and again measure the distance from the original E. coli genome, they find this added robustness does not significantly constrain network architecture.
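The genotype representation, distance measure, and random-walk procedure can be sketched as follows. The viability test here is a toy stand-in of my own for flux-balance analysis, and the reaction universe is shrunk from ~5870 to 50:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50   # toy reaction universe (the paper's has ~5870 reactions)

def genotype_distance(g1, g2):
    """Normalized Hamming distance between binary reaction vectors:
    0 = identical genotypes, 1 = completely different."""
    return float(np.mean(g1 != g2))

def viable(g):
    # Toy stand-in for flux-balance analysis: require that most of a
    # "core" set of reactions remain present.
    return g[:30].sum() >= 20

def random_walk(g, steps=200):
    """Flip one reaction at a time, accepting only viable intermediates."""
    g = g.copy()
    for _ in range(steps):
        trial = g.copy()
        trial[rng.integers(N)] ^= 1
        if viable(trial):
            g = trial
    return g

g0 = np.ones(N, dtype=int)       # start with every reaction present
g1 = random_walk(g0)
d = genotype_distance(g0, g1)
print(d)                         # substantial drift while staying viable
```

Even this cartoon shows the paper's qualitative claim: single-mutation walks that never leave the viable set can still carry the genotype a large Hamming distance from where it started.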

Summary thoughts: This is a highly interesting study, insofar as the authors show substantial support for their hypotheses that phenotypes can be explored through random-walk non-lethal mutations of the genotype, and that this is somewhat invariant to the carbon source for known biochemical reactions. What gives me pause is the use of linear programming / optimization when setting the relative concentrations of biomolecules, and the permissive criteria for accepting these networks; real life (I would imagine) is far more constrained. Relative and absolute concentrations matter.

Still, the study does reflect some robustness. I suggest that a good control would be to ‘fuzz’ the list of available reactions based on statistical criteria, and see if the results still hold. Then, go back and make the reactions un-biological or less networked, and see if this destroys the measured degrees of robustness.

ref: -0 tags: tungsten rhenium refactory metals book russia metalurgy date: 10-31-2016 05:14 gmt revision:1 [0] [head]

Physical Metallurgy of Refractory Metals and Alloys

Properties of tungsten-rhenium alloys

  • Luna metals suggests 3% Re improves the tensile strength of the alloy; Concept Alloys has 26% Re.
  • This paper measured 20% Re, with a strength of 1.9 GPa; actual drawn tungsten wire has a strength of 3.3 GPa.
    • Drawing and cold working greatly affects metal, as always!

ref: -0 tags: meta compilation self-hostying ACM date: 12-30-2015 07:52 gmt revision:2 [1] [0] [head]

META II: Digital Vellum in the Digital Scriptorium: Revisiting Schorre's 1962 compiler-compiler

  • Provides high-level commentary about re-implementing the META-II self-reproducing compiler, using Python as a backend, and mountain climbing as an analogy. Good read.
  • Original paper
  • What it means to be self-reproducing: The original compiler was written in assembly (in this case, a bytecode assembly). When this compiler is run and fed the language description (figure 5 in the paper), it outputs bytecode which is identical (or very nearly so) to the hand-coded compiler. When this automatically-generated compiler is run and fed the language description (again!) it reproduces itself (same bytecode) perfectly.
    • See section "How the Meta II compiler was written"

ref: -0 tags: adhesion polymer metal FTIR epoxy eponol paint date: 05-01-2015 19:20 gmt revision:0 [head]

Degradation of polymer/substrate interfaces – an attenuated total reflection Fourier transform infrared spectroscopy approach

  • Suggests why eponol is used as an additive to paint.
  • In this thesis, attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy has been used to detect changes at the interfaces between poly (vinyl butyral-co-vinyl alcohol-co-vinyl acetate) (PVB) and ZnSe upon exposure to ozone, humidity and UV-B light.
  • Also, the response of PVB-aluminum interfaces to liquid water has been studied and compared with the same for eponol (epoxy resin, diglycidyl ether of bisphenol A)-aluminum interfaces.
  • In the presence of ozone, humidity and UV-B radiation, an increase in carbonyl group intensity was observed at the PVB-ZnSe interface indicating structural degradation of the polymer near the interface. However, such changes were not observed when PVB coated ZnSe samples were exposed to moisture and UV-B light in the absence of ozone showing that ozone is responsible for the observed structural deterioration. Liquid water uptake kinetics for the degraded PVB monitored using ATR-FTIR indicated a degradation of the physical structural organization of the polymer film.
  • Exposure of PVB-coated aluminum thin film to de-ionized water showed water incorporation at the interface. There was evidence of polymer swelling, delamination, and corrosion of the aluminum film under the polymer layer.
    • On the contrary, delamination/swelling of the polymer was not observed at the eponol-aluminum interface, although water was still found to be incorporated at the interface. Al-O species were also observed to form beneath the polymer layer.
    • A decrease of the C-H intensities was detected at the PVB-aluminum interface during the water uptake of the polymer, whereas an increase of the C-H intensities was observed for the eponol polymer under these conditions.
    • This is assigned to rearrangement of the macromolecular polymer chains upon interaction with water.

ref: -0 tags: palladium metal glass tought strong caltech date: 02-25-2014 19:02 gmt revision:1 [0] [head]

A damage-tolerant glass

  • Perhaps useful for the inserter needle?
  • WC-Co (tungsten carbide-cobalt) cermet is another alternative.

ref: -0 tags: parylene metal adhesion Stieglitz date: 08-15-2013 17:22 gmt revision:0 [head]

PMID-20119944 Characterization of parylene C as an encapsulation material for implanted neural prostheses.

  • On Si3N4, platinum, and a first film of parylene-C, satisfactory adhesion was achieved with silane A-174, even after steam sterilization. (>1 N/cm)
  • Higher adhesion was observed for parylene deposited at lower pressures.
  • But: higher deposition pressures result in lower crystallinity.
  • [33] parylene can be used to build freestanding nanowires.
  • Parylene does not stick to polyimide.
  • Parylene sticks to parylene well if left untreated.
  • Annealing parylene dramatically increases crystallinity / decreases elongation to break.
  • The parylene C layers deposited on untreated and oxygen-plasma-treated samples delaminated immediately after contact with saline. The behavior was also observed in two out of three samples of the A-174-treated wafers, but not to this magnitude.
    • A potential reason for these results could be contamination of the samples during assembly or excessive treatment with the adhesion promoter.

ref: -0 tags: ACF chip bonding parylene field's metal polyimide date: 07-10-2013 18:34 gmt revision:10 [9] [8] [7] [6] [5] [4] [head]

We're making parylene electrodes for neural recording, and one critical step is connecting them to recording electronics.

Currently Berkeley uses ACF (anisotropic conductive film) for connection, which is widely used for connecting flex tape to LCD panels, or for connecting driver chips to LCD glass. According to the internet, pitches can be as low as 20um, with pad areas as low as 800um^2. source

However, this does not seem to be a very reliable nor compact process with platinum films on parylene, possibly because ACF bonding relies on raised areas between mated conductors (the current design has the Pt recessed into the parylene), and on rigid substrates. ACF consists of springy polymer balls coated in Ni and Au and embedded in a thermoset epoxy resin. The ACF film is put under moderate temperature (180C) and pressure (3 MPa, 430 psi), which causes the epoxy to cure in a state that leaves the gold/nickel/polymer balls compressed between the two conductors. Hence, even if the conductors move slightly due to thermal cycling, the small balls maintain good mechanical and electrical contact. The balls are dispersed sufficiently in the epoxy matrix that there is little to no chance of conduction between adjacent pads.

(Or so I have learned from the internet.) Now, as mentioned, this is an imperfect method for joining Pt on parylene films, possibly because the parylene is so flexible, and the platinum foil is very thin (200-300 nm). Indeed, platinum does not bond very strongly to parylene, hence care must be taken to allow sufficient overlap to prevent water ingress. My proposed solution -- to be tested shortly -- is to use a low-melting temperature metal with strong wetting ability -- such as Field's metal (bismuth, tin, indium, melting point 149F, see http://www.gizmology.net/fusiblemetals.htm) to low-temperature solder the platinum to a carrier board (initially) or to a custom amplifier ASIC (later!). Parylene is stable to 200C (392F), so this should be safe. One worry is that the indium/bismuth will wet the parylene or polyimide, too; however I consider this unlikely due to the difficulty in attaching parylene to any metal.

That said, there must be good reason why ACF is so popular, so perhaps a better ultimate solution is to stiffen the parylene (or ultimately polyimide) substrate so that it can support both the temperature/pressure of ACF bonding and the stress of a continued electrical/mechanical bond to polyimide fan-out board or ASIC. It may also be possible to gold or nickel electroplate the connector pads to be slightly raised instead of recessed.

Update: ACF bond to rigid 1/2 oz copper, 4mil trace / space connector (3mil trace/space board):

Note that the copper traces are raised, and the parylene is stretched over the uneven surface (this is much easier to see with the stereo microscope). To the left of the image, the ACF paste has been squeezed out from between the FR4 and parylene. Also note that the platinum can make potential contact with vias in the PCB.

Update 7/2: Field's metal (mentioned above) does stick to platinum reasonably well, but it also sticks to parylene (somewhat), and glass (exceptionally well!). In fact, I had a difficult time removing traces of Field's metal from the Pyrex beakers in which I was melting the metal. These beakers were filled with boiling water, which may have been the problem.

When I added flux (Kester flux-pen 951 No-clean MSDS), the metal became noticeably more shiny, and the contact angle increased on the borosilicate glass (e.g. looked more like mercury); this leads me to believe that it is not the metal itself that attaches to glass, but rather oxides of indium and bismuth. Kester 951 flux consists of:

  • 2-propanol 15% (as a denaturing agent) boiling point 82.6C
  • Ethanol 73% (solvent) boiling point 78.3C
  • Butyl Acetate 7% boiling point 127C, flash point 27C
  • Methanol <3% b.p. 64.7C
  • Carboxylic acids < 3% -- proton donors? formic or oxalic acid?
  • Surfactants < 1% -- ?
Total boiling point is 173F.

After coating the parylene/platinum sample with flux, I raised the field's metal to the flux activation point, which released some smoke and left brown organic residues on the bottom of the glass dish. Then I dipped the parylene probe into the molten metal, causing the flux again to be activated, and partially wetting the platinum contacts. The figure below shows the result:

Note the incomplete wetting, all the white solids left from the process, and how the Field's metal caused the platinum to delaminate from the parylene when the cable was (accidentally) flexed. Tests with platinum foil revealed that the metal bond was not actually that strong -- significantly weaker than one made with a flux-core SnPb solder. Also, I'm not sure of the activation temperature of this flux, and think I may have overheated the parylene.

Update 7/10:

Am considering electroless Ni / Pt / Au deposition, which occurs in aqueous solution, hence at much lower temperatures than e-beam evaporation (electroless Ni ref). On polyimide substrates, there is extensive literature describing how to activate the surface for plating: Polyimides and Other High Temperature Polymers: Synthesis ..., Volume 4. Parylene would likely need a different, possibly more aggressive treatment, as it does not have imide bonds to open.

Furthermore, if the parylene / polyimide surface is *not* activated, the electroless plating could be specific to the exposed electrode and contact sites, which could help to solve the connector issue by strengthening & thickening the contact areas. The second, fairly obvious solution is to planarize the contact site on the PCB, too, as seen above. ACF bonds can be quite reliable; last night I took apart (and successfully re-assembled) my 32" Samsung LCD monitor, and none of the flex-on-glass or chip-on-flex bonds failed (despite my clumsy hands!).

ref: -0 tags: microelectrodes original metal pipette glass recording MEA date: 01-31-2013 19:46 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

IEEE-4065599 (pdf) Comments on Microelectrodes

  • The amplifiers themselves, even back in 1950's, posed no problems -- low bandwidth. All that is required is low noise and high input impedance.
  • KCl Glass electrodes are LPF (10M resistive + 10pf parasitic capacitance); metal HPF (capacitive).
    • The fluid tip will not see external triphasic spikes of vertebrate axons above the noise level.
  • Metal probe the most useful.
  • Pt electrode in CSF behaves like a capacitor at low voltage across a broad frequency range. CSF has compounds that retard oxidation; impedance is more resistive with physiological saline.
  • Noise voltage generated by a metal electrode is best specified by the equivalent noise resistance at room temperature, E_{rms,noise} = \sqrt{4 k T R_n \Delta f}; R_n should equal the real part of the electrode impedance at the same frequency.
  • Much of electrochemistry: solid AgCl diffuses away from an electrode tip with great speed and can hardly be continuously formed with an imposed current. Silver forms extremely stable complexes with organic molecules having attached amino and sulfhydril groups which occur in plenty where the electrode damages the tissue. Finally, the reduction-oxidation potential of axoplasm is low enough to reduce methylene blue, which places it below hydrogen. AgCl and HgCl are reduced.
  • The external current of nerve fibers is the second derivative of the traveling spike, the familiar triphasic (??) transient.
  • Svaetichin [1] and Dowben and Rose [3] plated with Platinum black. This increases the surface area.
    • Very quickly it burns onto itself a shell of very adherent stuff, which keeps it from intimate contact with the tissue around it.
    • We found that if we add gelatin to the chloroplatinic acid bath from which we plate the Pt, the ball is not only made adherent to the tip but is, in a sense, prepoisoned and does not burn a shell into itself.
  • Glass insulation using Wood's metal (which melts at a very low temperature). A platinum ball was plated onto a 2-3um pipette tip: a 3um gelatinized platinum-black ball, impedance 100 kOhm at 1 kHz.
    • Highly capacitive probe: can be biased to 1 volt by a polarizing current of 1e-10 amp. (0.1nA).
  • Getting KCl solution into 1um pipettes is quite hard! They advise vacuum boiling to remove the air bubbles.
  • Humble authors, informative paper.
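The equivalent-noise-resistance formula above is just Johnson (thermal) noise; plugging in numbers for the high-impedance KCl pipette quoted earlier shows why it matters. The 10 kHz recording bandwidth is my assumption:

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
T = 295.0             # room temperature, K
R_n = 10e6            # 10 MOhm: the resistive pipette impedance from the note
df = 10e3             # 10 kHz recording bandwidth (assumed)

e_rms = math.sqrt(4 * k * T * R_n * df)
print(f"{e_rms * 1e6:.1f} uV rms")   # ~40 uV rms of thermal noise
```

Forty microvolts rms is comparable to or larger than a distant extracellular spike, which is the quantitative reason the fluid tip "will not see external triphasic spikes above the noise level" and why low-impedance plated metal tips win for extracellular recording.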



ref: notes-0 tags: thesis timetable contingency plan hahaha date: 12-06-2011 07:15 gmt revision:5 [4] [3] [2] [1] [0] [head]

Timetable / Plan:

  1. Get recording technology finished & assembled.
    1. Hardware
      1. Clean up prototype 2. Test in-chair with Clementine.
      2. Decide upon a good microelectrode-to-headstage connector with Gary.
      3. Fit headstage PCB into head-mounted chamber. Select battery and fit that too.
      4. Assemble one; contract Protronics to assemble 3 more.
      5. Contract Protronics to assemble 4 receiver boards.
    2. Software
      1. Headstage firmware basically complete; need to add in code for LFP measurement & transmission.
      2. Need some simple sort-client; use existing "Neurocaml" source as a basis. Alternately, use Rppl, inc's open-source "Trellis" electrophysiology suite.
      3. Integrate UDP reception into the BMI suite.
      4. Get a rugged all-in-one computer for display of the BMI task -- a tablet PC in a plexiglas box would be perfect.
    3. Due: June 30 2009
  2. Monkeys.
    1. Test in-cage recording with Clementine. He's a bit long in the tooth now, and does not have enough cells in M1/premotor cortices to do BMI control.
    2. Select two monkeys, train them on 2D target acquisition with a joystick using Joey's chair and setup. Make sure the monkeys can learn the 2D task in a reasonable amount of time; we don't want to waste time on dumb monkeys.
    3. Arrange for implantation surgeries this summer, depending on the availability of neurosurgeon.
    4. Work with Gary Lehew to assemble microelectrodes & head-mounted chamber.
    5. Get an ethernet drop in the vivarium for transmission of data.
    6. Due: August 30 2009
  3. Experiments
    1. Test & refine task 1 with both monkeys. Allow a maximum of 1 month to learn task 1. Neuron class (x/y/z) selected based on correlational structure (PCA of firing rate).
      1. Will have to get them to turn off Wifi (in same wireless band as the headstages) in the vivarium.
      2. Batteries will need to be replaced daily.
      3. Data will be inspected daily, to eliminate possible confounds / fix bugs / optimize the probability that the monkey learns.
      4. Expected data rate per headstage, given a mean firing rate of 40 Hz, full waveform storage, and one LFP channel sampled at 1 kHz: 3.5 GB / day. A 1.5 TB drive ($120) will take 100 days to fill with data from 4 headstages.
      5. Very occasionally interleave 4-target test trials after the first week of learning, with both 'y' and 'z' neurons used to control the y-axis.
    2. Test & refine task 2 with both monkeys, in position control; here, record for a minimum of 1 month.
      1. Adjust cursor and target sizes to maintain task difficulty; measure asymptotic performance in bits/sec.
      2. Interleave randomly positioned target acquisition with stereotyped target sequences to measure neuronal tuning curves.
      3. Occasionally perturb cursor to see if there is an internal expectation of cursor motion.
    3. Switch task 2 to velocity control. Measure performance and learning effects of the switch. Train the monkey on this for at least 2 weeks, or until performance asymptotes.
    4. Shuffle the neuron class to make it non-topological, and re-train on position control in task 2 (this to test if topology matters). Train monkey for at least 3 weeks.
    5. Continue recording for as long as it seems worthwhile to do so.
    6. Due: February 1 2010
  4. Writing
    1. Write the DBS paper. This can be done in parallel with many other things, and should take about a month off and on.
    2. Keep good notes during experiments, write everything up within 1-2 months of finishing the proposed experiments.
    3. Write thesis.
    4. Due : June 2010
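Sanity-checking the storage estimate in item 3.1.4 (all numbers copied from the plan above):

```python
# Numbers from item 3.1.4.
gb_per_day_per_headstage = 3.5
headstages = 4
drive_tb = 1.5

daily_gb = gb_per_day_per_headstage * headstages     # 14 GB/day total
days_to_fill = drive_tb * 1000 / daily_gb            # ~107 days
print(round(days_to_fill))                           # close to the quoted 100 days
```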

Contingency Plan:

  1. Recording technology does not work / cannot be made workable in a reasonable amount of time (Reasonable = 4 months.)
    1. Use Plexon, record for as long as possible (or permissible given our protocol - 4 hours) while monkey is in chair. If monkeys will not go into REM/SWS in a chair, as seems likely given what I've tried, scratch the sleep specific aim.
    2. Focus instead on making the simplified BMI work. Will have to assume that neuron identity does not change between sessions.
  2. Monkey surgery fails.
    1. Unlikely. If it does happen, we should just get another monkey. As Joey's travails in publishing his paper show, it is best to have two monkeys that learn and perform the same task.
    2. Even if the implants don't last as long as all the others, the core experiments can be completed within 2 months. Recording quality from even our worst monkey has lasted much longer than this.
  3. Monkey does not learn the BMI
    1. Focus on figuring out why the monkeys cannot learn it - start by re-implementing Dawn Taylor's kludgy autoadaptive algorithm, and go from there.
    2. Focus on sleep. Put a joystick into the cage, and train the monkey on relatively complex sequences of movement to see if there is replay.
    3. Use the experiment as a springboard to test more complicated decoding algorithms with the help of Zheng.
  4. There are no signs of replay.
    1. Try different mathematical methods of looking for replay.
    2. If still nothing, report that.

ref: -0 tags: meta learning Artificial intelligence competent evolutionary programming Moshe Looks MOSES date: 08-07-2010 16:30 gmt revision:6 [5] [4] [3] [2] [1] [0] [head]

Competent Program Evolution

  • An excellent start: an excellent description + meta-description / review of the existing literature.
  • He thinks about things in a slightly different way - separates what I call solutions and objective functions "post- and pre-representational levels" (respectively).
  • The thesis focuses on post-representational search/optimization, not pre-representational (though, I believe that both should meet in the middle - eg. pre-representational levels/ objective functions tuned iteratively during post-representational solution creation. This is what a human would do!)
  • The primary difficulty in competent program evolution is the intense non-decomposability of programs: every variable, constant, and branch affects the execution of every other little bit.
  • Competent program creation is possible - humans create programs significantly shorter than lookup tables - hence it should be possible to make a program to do the same job.
  • One solution to the problem is representation - formulate the program creation as a set of 'knobs' that can be twiddled (here he means both gradient-descent partial-derivative optimization and simplex or heuristic one-dimensional probabilistic search, of which there are many good algorithms.)
  • pp 27: outline of his MOSES program. Read it for yourself, but looks like:
  • The representation step above "explicitly addresses the underlying (semantic) structure of program space independently of the search for any kind of modularity or problem decomposition."
    • In MOSES, optimization does not operate directly on program space, but rather on subspaces defined by the representation-building process. These subspaces may be considered as being defined by templates assigning values to some of the underlying dimensions (e.g., they restrict the size and shape of any resulting trees).
  • In chapter 3 he examines the properties of the boolean programming space, which is claimed to be a good model of larger/more complicated programming spaces in that:
    • Simpler functions are much more heavily sampled - e.g. he generated 1e6 samples of 100-term boolean functions, then reduced them to minimal form using standard operators. The vast majority of the resultant minimum length (compressed) functions were simple - tautologies or of a few terms.
    • A corollary is that simply increasing syntactic sample length is insufficient for increasing program behavioral complexity / variety.
      • Actually, as random program length increases, the percentage with interesting behaviors decreases due to the structure of the minimum length function distribution.
  • Also tests random perturbations to large boolean formulae (variable replacement/removal, operator swapping) - ~90% of these do nothing.
    • These randomly perturbed programs show a similar structure to above: most of them have very similar behavior to their neighbors; only a few have unique behaviors. makes sense.
    • Run the other way: "syntactic space of large programs is nearly uniform with respect to semantic distance." Semantically similar (boolean) programs are not grouped together.
  • Results somehow seem a let-down: the program does not scale to even moderately large problem spaces. No loops, only functions with conditional evaluation -- Jacques Pitrat's results are far more impressive. {815}
    • Seems that, still, there were a lot of meta-knobs to tweak in each implementation. Perhaps this is always the case?
  • My thought: perhaps you can run the optimization not on program representations, but rather program codepaths. He claims that one problem is that behavior is loosely or at worst chaotically related to program structure - which is true - hence optimization on the program itself is very difficult. This is why Moshe runs optimization on the 'knobs' of a representational structure.
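The chapter-3 sampling experiment is easy to reproduce in miniature: generate random boolean formulas, take their truth tables as "behaviors", and look at the distribution. The generator, depth, and sample sizes below are my own toy choices, not Looks's:

```python
import itertools
import random
from collections import Counter

random.seed(0)
VARS = 3

def random_formula(depth=4):
    """Random boolean expression tree over AND/OR/NOT and 3 variables."""
    if depth == 0 or random.random() < 0.3:
        return ('var', random.randrange(VARS))
    op = random.choice(['and', 'or', 'not'])
    if op == 'not':
        return ('not', random_formula(depth - 1))
    return (op, random_formula(depth - 1), random_formula(depth - 1))

def evaluate(f, env):
    if f[0] == 'var':
        return env[f[1]]
    if f[0] == 'not':
        return not evaluate(f[1], env)
    a, b = evaluate(f[1], env), evaluate(f[2], env)
    return (a and b) if f[0] == 'and' else (a or b)

def behavior(f):
    """The formula's truth table over all 2^3 inputs = its 'behavior'."""
    return tuple(evaluate(f, env)
                 for env in itertools.product([False, True], repeat=VARS))

counts = Counter(behavior(random_formula()) for _ in range(20000))
# The distribution is heavily skewed: a handful of simple behaviors
# (constants, single variables and their negations) soak up most samples,
# even though 256 distinct behaviors exist for 3 inputs.
print(counts.most_common(3))
```

This reproduces the qualitative point: syntactic sampling massively over-represents semantically simple programs, so longer random programs don't buy behavioral variety.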

ref: work-0 tags: metacognition AI bootstrap machine learning Pitrat self-debugging date: 08-07-2010 04:36 gmt revision:7 [6] [5] [4] [3] [2] [1] [head]

Jacques Pitrat seems to have many of the same ideas that I've had (only better, and he's implemented them!)--

A Step toward an Artificial Scientist

  • The overall structure seems good -- difficult problems are attacked by 4 different levels. The first level tries to solve the problem semi-directly, by writing a program to solve combinatorial problems (all problems here are constraint-based; constraints are used to pare the tree of possible solutions, and these trees are tested combinatorially). The second level monitors lower-level performance and decides which hypotheses to test (which branch to pursue on the tree) and/or which rules to apply to the tree. The third level directs the second level and restarts the whole process if a snag or inconsistency is found. The fourth level gauges the interest of a given problem and looks for new problems to solve within a family, so as to improve the skill of the 3 lower levels.
    • This makes sense, but why 4? Seems like in humans we only need 2 - the actor and the critic, bootstrapping forever.
    • Also includes a "Zeus" module that periodically checks for infinite loops of the other programs, and recompiles with trace instructions if an infinite loop is found within a subroutine.
  • Author claims that the system is highly efficient - it codes constraints and expert knowledge using a higher level language/syntax that is then converted to hundreds of thousands of lines of C code. The active search program runs runtime-generated C programs to evaluate and find solutions, wow!
  • This must have taken a decade or more to create! Very impressive. (seems it took 2 decades, at least according to http://tunes.org/wiki/jacques_20pitrat.html)
    • Despite all this work, he is not nearly done -- it has no "learning" module.
    • Quote: In this paper, I do not describe some parts of the system which still need to be developed. For instance, the system performs experiments, analyzes them and finds surprising results; from these results, it is possible to learn some improvements, but the learning module, which would be able to find them, is not yet written. In that case, only a part of the system has been implemented: on how to find interesting data, but still not on how to use them.
  • Only seems to deal with symbolic problems - e.g. magic squares, magic cubes, self-referential integer series. Alas, no statistical problems.
  • The whole CAIA system can effectively be used as a tool for finding problems of arbitrary difficulty with arbitrary number of solutions from a set of problem families or meta-families.
  • Has hypothesis based testing and backtracking; does not have problem reformulation or re-projection.
  • There is mention of ALICE, but not the chatbot A.L.I.C.E - some constraint-satisfaction AI program from the 70's.
  • Has a C source version of MALICE (his version of ALICE) available on the website. Amazingly, there is no Makefile - just gcc *.c -rdynamic -ldl -o malice.
  • See also his 1995 Paper: AI Systems Are Dumb Because AI Researchers Are Too Clever images/815_1.pdf

Artificial beings - his book.

ref: -0 tags: ocaml latex metapost date: 07-23-2009 13:56 gmt revision:1 [0] [head]

http://mlpost.lri.fr/ -- allows drawing Latex or postscript figures programmatically. Interesting. Included in Debian. source

ref: -0 tags: metal halide lamp date: 02-14-2007 21:28 gmt revision:1 [0] [head]

For a Barco Data 3200LC, you need an HMI575/SE (single-ended) lamp. Unfortunately, this only lasts 750 hours :( and costs $150: http://www.bulbman.com/index.php?main_page=product_bulb_info&cPath=5399&products_id=10858

ref: bookmark-0 tags: metal_halide projector light CRI Venture Osram Phillips date: 0-0-2007 0:0 revision:0 [head]

Overview: a projector light should have good luminous efficiency, a long life, and, most importantly, plenty of energy in the red region of the spectrum. Most metal halides have yellow/green lines and blue lines; few have good red lines.

http://www.osram.no/brosjyrer/english/K01KAP5_en.pdf -- in 1000 watt, the Osram Powerstar HQI-TS 1000/d/s looks the best: CRI > 90, 5900K color temperature. Unfortunately, I cannot seem to find any American places to buy this bulb, nor can I determine its average life. It can be bought, at a price, from http://www.svetila.com/eProdaja/product_info.php/products_id/442 {n.b. the Osram HMI bulbs are no good -- the lifetime is too short}

In 400 watt, the Eye Clean Arc MT400D/BUD looks quite good, with a CRI of 90, 6500K color temp. http://www.eyelighting.com/cleanarc.html. EYE also has a ceraarc line, but the 400w bulb is not yet in production (and it has a lower color temperature, 4000K). Can be bought from http://www.businesslights.com/ (N.B. they have spectral charts for many of the lights!)

  • I've also seen reference to the Phillips mastercolor line: http://www.nam.lighting.philips.com/us/ecatalog/hid/pdf/p-5497c.pdf
    • these are ceramic HPS white replacements ('retro-white'). 85CRI, 4000K color temperature, reasonably efficient over the life of the bulb.
  • Ushio
  • Venture lighting has a 400W naturalWhite e-lamp (5000k, 90+ CRI). For use with both pulse-start and the electronic ballasts that they sell.

And FYI, the electrodeless bulbs are made by Osram and are called "ICETRON". They are rather expensive, but last 1e5 hours (!). Typical output is 80 lumens/watt.

more things of interest:

ref: bookmark-0 tags: machine_learning algorithm meta_algorithm date: 0-0-2006 0:0 revision:0 [head]

Boost learning, or AdaBoost -- the idea is to update the discrete distribution used in training any algorithm so as to emphasize those points that were misclassified by the previous fit of the classifier. Sensitive to outliers, but resistant to overfitting.
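The reweighting step can be sketched in a few lines with decision stumps on made-up 1-D data (all data and hyperparameters here are illustrative, not from any particular reference):

```python
import math

# Toy 1-D dataset: points and +/-1 labels.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [+1, +1, -1, -1, +1, -1]
w = [1.0 / len(xs)] * len(xs)            # discrete distribution over examples

def best_stump(w):
    """Pick the threshold/sign decision stump minimizing weighted error."""
    best = None
    for t in xs:
        for s in (+1, -1):
            err = sum(wi for wi, x, y in zip(w, xs, ys)
                      if (s if x <= t else -s) != y)
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

for _ in range(3):                        # three boosting rounds
    err, t, s = best_stump(w)
    err = max(err, 1e-10)                 # guard against a perfect stump
    alpha = 0.5 * math.log((1 - err) / err)
    # Core AdaBoost update: misclassified points gain weight, correct ones lose it.
    w = [wi * math.exp(-alpha * y * (s if x <= t else -s))
         for wi, x, y in zip(w, xs, ys)]
    z = sum(w)
    w = [wi / z for wi in w]              # renormalize to a distribution

print([round(wi, 3) for wi in w])
```

After a few rounds the weight piles up on the hard-to-classify points (here the lone +1 at x=4), which is exactly the sensitivity to outliers mentioned above.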

ref: bookmark-0 tags: teflon PTFE bonding metal polytetrafluoroethylene tetraflouroethylene date: 0-0-2006 0:0 revision:0 [head]


block copolymer: http://en.wikipedia.org/wiki/Copolymer