m8ta
tags: evolution simplicity symmetry kolmogorov complexity polyominoes protein interactions
date: 04-21-2022 18:22 gmt
revision: 5
Symmetry and simplicity spontaneously emerge from the algorithmic nature of evolution
The paper features an excellent set of references.
Letter to a friend, following her article "Machine learning in evolutionary studies comes of age":

Read your PNAS article last night -- super interesting that you can get statistical purchase on long-lost evolutionary 'sweeps' via GANs and other neural network models. I feel like there is some sort of statistical power issue there? DNNs are almost always over-parameterized... slightly suspicious.

This morning I was sleepily mulling things over & thought about a walking conversation we had a long time ago in the woods of NC: why is evolution so effective? Why does it seem to evolve to evolve? Thinking more -- and having years more perspective -- it seems almost obvious in retrospect: it's a consequence of Bayes' rule. Evolution finds solutions in spaces that have an overwhelming prevalence of working solutions; the prior has an extremely strong effect. These representational / structural spaces by definition have many nearby & associated solutions, hence appear post-hoc 'evolvable'. (You probably already know this.)

I think proteins very much fall into this category: amino acids were added to the translation machinery based on ones that happened to solve a particular problem... but because of the 'generalization prior' (to use NN parlance), they were useful for many other things. This does not explain the human-engineering-like modularity of mature evolved systems, but maybe that is due to the strong simplicity prior [1].

Very interesting to me is how the science of evolution and neural networks are drawing together, vis-a-vis the lottery ticket hypothesis. Both evince a continuum of representational spaces, too: from high-dimensional vectorial (how all modern deep learning systems work) to low-dimensional, modular, specific, and general (phenomenological human cognition). I suspect that evolution uses a form of this continuum, as seen in the human high-dimensional long-range gene regulatory / enhancer network (= a structure designed to evolve).
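The simplicity-prior idea can be made concrete with a toy genotype-phenotype map. The sketch below is my own illustration, not the model from the paper: random bit-string 'genotypes' are developed into string 'phenotypes', and the phenotypes that arise most often turn out to be the most compressible ones (zlib-compressed length standing in for Kolmogorov complexity).

```python
import random
import zlib
from collections import Counter

def develop(genotype):
    """Toy genotype->phenotype map (my invention, for illustration only).

    Bits are read as instructions: 0 appends a symbol, 1 duplicates the
    string built so far. Duplication is easy to hit at random but yields
    highly regular output, so many genotypes land on few simple phenotypes.
    """
    out = []
    for i, bit in enumerate(genotype):
        if bit == 0:
            out.append('a' if i % 2 == 0 else 'b')
        else:
            out.extend(out)   # duplicate everything built so far
        if len(out) >= 16:
            break
    return ''.join(out[:16])

def complexity(s):
    # crude Kolmogorov-complexity proxy: zlib-compressed length in bytes
    return len(zlib.compress(s.encode()))

random.seed(0)
counts = Counter(
    develop([random.randint(0, 1) for _ in range(16)])
    for _ in range(20000)
)

# frequency-weighted mean complexity vs. plain mean over distinct phenotypes
total = sum(counts.values())
weighted = sum(n * complexity(p) for p, n in counts.items()) / total
uniform = sum(complexity(p) for p in counts) / len(counts)
print(weighted < uniform)  # frequent phenotypes are the simple ones
```

This is the algorithmic-probability intuition in miniature: because regular outputs have many more genotypes mapping onto them, sampling genotypes uniformly gives an output distribution heavily biased toward low-complexity (and, in 2-D maps like polyominoes, symmetric) phenotypes.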
Not sure how selection works here, though; it's hard to search a high-dimensional space. The brain has an almost identical problem: it's hard to do 'credit assignment' in a billions-large, deep, and recurrent network. Finding which set of synapses caused a good / bad behavior takes a lot of bits.
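"Takes a lot of bits" can be put on a back-of-envelope footing: naming which synapses to credit is itself an information problem. The numbers below are my rough assumptions (order-of-magnitude only), not measurements.

```python
import math

N = 1e14   # ~synapses in a human brain (common order-of-magnitude estimate)
k = 1e6    # hypothetical: synapses actually implicated in one behavior

# bits needed to name one particular k-subset of N synapses:
# log2 C(N, k) ~= k * log2(N/k) + k * log2(e)   (Stirling approximation)
bits = k * (math.log2(N / k) + math.log2(math.e))
print(f"{bits:.2e} bits")  # on the order of 10^7 bits for these assumptions
```

Even with these generous assumptions, a single credit assignment costs tens of megabits of specification -- far more than the handful of bits a scalar reward signal delivers per trial, which is one way of stating why naive high-dimensional search is hard for both evolution and the brain.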