{1535}
ref: -2019 tags: deep double descent lottery ticket date: 02-23-2021 18:47 gmt

Reconciling modern machine-learning practice and the classical bias–variance trade-off

A formal publication of the effect famously discovered at OpenAI & publicized on their blog. Goes into some detail on random Fourier features & runs experiments that verify the OpenAI findings. The result stands.
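
For reference, the basic random-Fourier-features double-descent experiment can be reproduced in a few lines. This is only a minimal sketch: it uses a synthetic 1-D regression target and a minimum-norm least-squares fit rather than the paper's MNIST setup, and the feature counts and frequency scale are arbitrary illustrative choices. Test error should spike near the interpolation threshold (number of features ≈ number of training points) and fall again in the overparameterized regime.

```python
# Minimal double-descent sketch with random Fourier features.
# Assumptions: synthetic 1-D data, ridgeless (minimum-norm) least squares via
# the pseudoinverse; feature counts and bandwidth are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_train, n_test = 100, 1000
x_train = rng.uniform(-1, 1, size=(n_train, 1))
x_test = rng.uniform(-1, 1, size=(n_test, 1))
y_train = np.sin(4 * x_train).ravel() + 0.1 * rng.standard_normal(n_train)
y_test = np.sin(4 * x_test).ravel()

def rff(x, w, b):
    """Random Fourier features: cos(x w^T + b)."""
    return np.cos(x @ w.T + b)

for n_feat in [10, 50, 90, 100, 110, 200, 1000, 5000]:
    w = rng.standard_normal((n_feat, 1)) * 4.0      # random frequencies
    b = rng.uniform(0, 2 * np.pi, n_feat)
    phi_train = rff(x_train, w, b)
    phi_test = rff(x_test, w, b)
    # Minimum-norm least-squares solution; interpolates once n_feat >= n_train.
    coef = np.linalg.pinv(phi_train) @ y_train
    err = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"{n_feat:5d} features: test MSE = {err:.4f}")
```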

An interesting avenue of research is using genetic algorithms to perform the search over neural network parameters (instead of backprop) in reinforcement-learning tasks. Ben Phillips has a blog post on some of the recent results, which show that it does work for certain 'hard' problems in RL. Of course, this is the dual of the 'lottery ticket' hypothesis and the deep double descent, above; large networks are likely to already contain solutions 'close enough' to a given problem.

That said, genetic algorithms don't necessarily perform gradient descent to fine-tune the weights for optimal behavior once they have found the right region of parameter space. See {1530} for more discussion on this topic, as well as {1525} for a more complete literature survey.
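
For concreteness, here is a toy sketch of the mutation-only, weight-space GA idea. The "episode return" below is a stand-in regression fitness on a tiny MLP rather than a real RL environment, and the population size, elite count, and mutation scale are arbitrary illustrative choices.

```python
# Toy mutation-only genetic algorithm over a flat vector of network weights.
# Assumptions: the fitness function is a stand-in for an RL episode return
# (negative tracking error of a tiny MLP); all hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP: 1 -> 16 -> 1, weights packed into one flat parameter vector.
H = 16
N_PARAMS = 1 * H + H + H * 1 + 1

def mlp(params, x):
    w1 = params[:H].reshape(1, H)
    b1 = params[H:2 * H]
    w2 = params[2 * H:3 * H].reshape(H, 1)
    b2 = params[3 * H]
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# Stand-in for an episode return: how well the net tracks sin(3x).
x_eval = np.linspace(-1, 1, 64).reshape(-1, 1)
y_eval = np.sin(3 * x_eval)

def fitness(params):
    return -np.mean((mlp(params, x_eval) - y_eval) ** 2)

POP, ELITE, SIGMA, GENS = 200, 20, 0.05, 300
pop = rng.standard_normal((POP, N_PARAMS)) * 0.5

for gen in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    elite_idx = np.argsort(scores)[-ELITE:]           # keep the best individuals
    elites = pop[elite_idx]
    # Next generation: elites survive unchanged, the rest are mutated copies.
    parents = elites[rng.integers(0, ELITE, POP - ELITE)]
    children = parents + SIGMA * rng.standard_normal((POP - ELITE, N_PARAMS))
    pop = np.vstack([elites, children])
    if gen % 50 == 0:
        print(f"gen {gen:3d}  best fitness {scores.max():.4f}")
```

Note that the loop only selects and mutates; there is no gradient information at all, which is exactly the limitation discussed above once the population has found a good region of parameter space.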