Hacker News
DeepMind Builds Neural Networks That Spontaneously Replay Past Experiences (towardsdatascience.com)
14 points by dsr12 on Oct 2, 2019 | 1 comment



Learning new sequences by analogy seems super valuable, but I'm not sure experience replay in RL is the best way to apply it. Experience replay in their 2015 Atari paper is basically a hack to overcome catastrophic forgetting and extreme sample inefficiency, two problems of current neural network architectures that we may very well overcome in a more principled way eventually.

What I mean is that, while being able to replay experiences is super valuable, and humans do it, we don't replay experiences 50 to 100 times as often as we actually experience things, the way reinforcement learning algorithms do. There's something else going on there.
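For context, the mechanism being described is roughly the replay buffer from DQN-style agents: transitions are stored and then sampled uniformly many times over training, so each experience is "replayed" far more often than it was lived. A minimal sketch (names and structure are my own, not from the paper):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions, sampled uniformly at random."""

    def __init__(self, capacity):
        # Oldest transitions are silently evicted once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Sampling with a high replay ratio means each stored transition
        # may be drawn dozens of times before it falls out of the buffer,
        # which is where the 50-100x figure comes from.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

# Usage: store transitions during interaction, train on random minibatches.
buf = ReplayBuffer(capacity=1000)
for step in range(20):
    buf.add(step, 0, 1.0, step + 1, False)
batch = buf.sample(4)
```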

Edit: having read the abstract of the paper now, I'm less sure the paper itself is talking about RL-style experience replay; that may just have been the author of the post connecting the two ideas.



