m8ta
{1576}
ref: -0
tags: GFlowNet Bengio probability modelling reinforcement learning
date: 10-29-2023 19:17 gmt
revision:3
{1559}
Some investigations into denoising models & their intellectual lineage: Deep Unsupervised Learning using Nonequilibrium Thermodynamics 2015
Generative Modeling by Estimating Gradients of the Data Distribution July 2019
Denoising Diffusion Probabilistic Models June 2020
Improved Denoising Diffusion Probabilistic Models Feb 2021
Diffusion Models Beat GANs on Image Synthesis May 2021
In all of the above, the inverse-diffusion function approximator seems to be a minor player in the paper -- but of course, it's vitally important to making the system work. In some sense, this 'diffusion model' is as much a means of training the neural network as it is a (rather inefficient, compared to GANs) way of sampling from the data distribution. In Nichol & Dhariwal Feb 2021, they use a U-net convolutional network (e.g. start with few channels, downsample and double the channels until there are 128-256 channels, then upsample x2 and halve the channels), including multi-headed attention. Ho 2020 used single-headed attention, and only at the 16x16 level. Ho 2020 was in turn based on PixelCNN++,
which is an improvement on (e.g. adding self-attention layers) Conditional Image Generation with PixelCNN Decoders.
Most recently, GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
Added text-conditional generation + many more parameters + much more compute to yield very impressive image results + in-painting. This last effect is enabled by the fact that it's a full generative denoising probabilistic model -- you can condition on other parts of the image!
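The forward (noising) half of the process these papers share is simple enough to sketch. Below is a minimal numpy illustration of the closed-form forward diffusion q(x_t | x_0) from Ho 2020, using the DDPM linear beta schedule; the 16x16 "image" is a random stand-in for real data.

```python
import numpy as np

# DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,
# with eps ~ N(0, I). The linear schedule endpoints (1e-4 .. 0.02 over
# T = 1000 steps) follow Ho 2020.
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # \bar{alpha}_t, monotonically decreasing

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form -- no need to iterate t steps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((16, 16))   # toy stand-in for an image
xt, eps = q_sample(x0, t=T - 1, rng=rng)
# At t = T-1, alpha_bar is ~4e-5, so x_t is essentially pure noise; the
# denoising network is trained to predict eps from (x_t, t).
```

The point of the closed form is that training never has to simulate the chain: pick a random t, noise x_0 in one shot, and regress the network's output against eps.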
{1547}
Meta-Learning Update Rules for Unsupervised Representation Learning
This is a clearly-written, easy-to-understand paper. The results are not highly compelling, but as a first set of experiments, it's successful enough. I wonder what more constraints (fewer parameters, per the genome), more options for architecture modification (e.g. different feedback schemes, per neurobiology), and a black-box optimization algorithm (evolution) would do?
{1488}
PMID-30588295 Subcellular spatial resolution achieved for deep-brain imaging in vivo using a minimally invasive multimode fiber
{1440}

{1424}
Curiosity-driven Exploration by Self-supervised Prediction
{1413}
PMID-24711417 Evidence for a causal inverse model in an avian cortico-basal ganglia circuit
{1402}
PMID-18336081 Adaptive integration in the visual cortex by depressing recurrent cortical circuits.
{1340}
PMID-26867734 A biophysical model of the cortex-basal ganglia-thalamus network in the 6-OHDA lesioned rat model of Parkinson’s disease
Overall, not a bad paper. Not very well organized, which is not helped by the large amount of information presented, but having slogged through the figures, I'm somewhat convinced that the model is good. This despite my general reservations about these models: the true validation would be to have it generate actual behavior (and learning)! Lacking this, the approximations employed seem like a step forward in understanding how PD and DBS work. The results and discussion are consistent with {1255}, but not {711}, which found that STN projections from M1 (not the modulation of M1 projections to GPi, via efferents from STN) truly matter.
{1293}
PMID-24216311 Failure mode analysis of silicon-based intracortical microelectrode arrays in non-human primates
{1134}
PMID-6838141[0] Speculations on the Functional Anatomy of Basal Ganglia Disorders
Got some things completely wrong:
{1083}
ref: Holgado-2010.09
tags: DBS oscillations beta globus pallidus simulation computational model
date: 02-22-2012 18:36 gmt
revision:4
PMID-20844130[0] Conditions for the Generation of Beta Oscillations in the Subthalamic Nucleus–Globus Pallidus Network
{1103}
PMID-16317234 A finite-element model of the mechanical effects of implantable microelectrodes in the cerebral cortex.
{1095}
PMID-20505125[0] Deep brain stimulation alleviates parkinsonian bradykinesia by regularizing pallidal activity.
{1059}
PMID-21719340 Modelization of a self-opening peripheral neural interface: a feasibility study.
{929}
PMID-17694874[0] The muscle activation method: an approach to impedance control of brain-machine interfaces through a musculoskeletal model of the arm.
{888}
Experiment: you have a key. You want that key to learn to control a BMI, but you do not want the BMI to learn how the key does things, as
Given this, I propose a very simple groupweight: one axis is controlled by the summed action of a certain population of neurons, the other by a second, disjoint, population; a third population serves as control. The task of the key is to figure out what does what: how does the firing of a given unit translate to movement (forward model). Then the task during actual behavior is to invert this: given movement end, what sequence of firings should be generated? I assume, for now, that the brain has inbuilt mechanisms for inverting models (not that it isn't incredibly interesting -- and I'll venture a guess that it's related to replay, perhaps backwards replay of events). This leaves us with the task of inferring the tool-model from behavior, a task that can be done now with our modern (though here-mentioned quite simple) machine learning algorithms. Specifically, it can be done through supervised learning: we know the input (neural firing rates) and the output (cursor motion), and need to learn the transform between them. I can think of many ways of doing this on a computer:
{i need to think more about model-building, model inversion, and songbird learning?}
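The supervised-learning step above -- infer which population drives which cursor axis from (firing rates, cursor motion) pairs -- can be sketched in a few lines. Everything here is made up for illustration: population sizes, Poisson rate, noise level, and the choice of ridge-regularized least squares as the fitting method.

```python
import numpy as np

# Toy version of the proposed group-weight task: three disjoint populations
# of units, one summing into the x axis, one into the y axis, and one
# control group with no effect. We observe rates and cursor motion, and
# recover the (unknown) forward model by ridge regression.
rng = np.random.default_rng(1)
n_units, n_samples = 30, 2000

W_true = np.zeros((n_units, 2))
W_true[:10, 0] = 1.0     # population 1 -> x axis
W_true[10:20, 1] = 1.0   # population 2 -> y axis
                         # units 20..29: control group, no effect

rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)
cursor = rates @ W_true + 0.1 * rng.standard_normal((n_samples, 2))

# Ridge-regularized least squares: W = (R'R + lam*I)^-1 R'C
lam = 1e-3
W_hat = np.linalg.solve(rates.T @ rates + lam * np.eye(n_units),
                        rates.T @ cursor)
# W_hat should show large weights for the two controller populations and
# near-zero weights for the control group -- the learned forward model,
# which the brain (or an algorithm) must then invert for control.
```

This is of course only the forward-model half; the interesting question in the note is how the inversion happens.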
{950}
PMID-10725930 Direct cortical control of muscle activation in voluntary arm movements: a model.
{154}
ref: OReilly-2006.02
tags: computational model prefrontal_cortex basal_ganglia
date: 12-07-2011 04:11 gmt
revision:1
PMID-16378516[0] Making Working Memory Work: A Computational Model of Learning in the Prefrontal Cortex and Basal Ganglia
found via: http://www.citeulike.org/tag/basal-ganglia
{752}
http://www.theatlantic.com/doc/200501/kirn -- good.
{683}
PMID-14983183[0] Off-line replay maintains declarative memories in a model of hippocampal-neocortical interactions
{675}
PMID-18808769 Modeling the organization of the basal ganglia.
{673}
ref: Vasilaki-2009.02
tags: associative learning prefrontal cortex model hebbian
date: 02-17-2009 03:37 gmt
revision:2
PMID-19153762 Learning flexible sensori-motor mappings in a complex network.
{618}
PMID-11506661[0] Parallel cortico-basal ganglia mechanisms for acquisition and execution of visuomotor sequences - a computational approach.
{352}
ref: bookmark-0
tags: postmodernism pseudoscience Alan Sokal
date: 04-23-2007 03:47 gmt
revision:0
http://www.physics.nyu.edu/faculty/sokal/pseudoscience_rev.pdf
{80}
ref: Chan-2006.12
tags: computational model primate arm musculoskeletal motor_control Moran
date: 04-09-2007 22:35 gmt
revision:1
PMID-17124337[0] Computational Model of a Primate Arm: from hand position to joint angles, joint torques, and muscle forces
ideas:
{108}
http://www.berndporr.me.uk/iso3_sab/
{36}

{81}
ref: Stapleton-2006.04
tags: Stapleton Lavine poisson prediction gustatory discrimination statistical_model rats bayes BUGS
date: 0-0-2006 0:0
revision:0