Learning new sequences by analogy seems super valuable, but I'm not sure experience replay in RL is the best way to apply it. Experience replay in DeepMind's 2015 Atari paper is basically a hack to work around catastrophic forgetting and extreme sample inefficiency, two problems with neural network architectures that we may well solve in a more principled way eventually.
What I mean is: replaying experiences is clearly valuable, and humans do it, but we don't replay each experience 50 to 100 times for every time we actually experience something, the way these reinforcement learning algorithms do. Something else is going on there.
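To put a number on that, here's a rough sketch of the uniform replay-buffer pattern DQN-style agents use. Everything in it (buffer size, batch size, step count, the placeholder transition) is a made-up illustration rather than the paper's actual setup, but it shows the basic loop: every real environment step triggers a minibatch sampled from past experience, so each stored transition ends up reused dozens of times.

```python
import random
from collections import deque

# All numbers here are made up for illustration; the DQN paper's actual
# hyperparameters (buffer size, batch size, update frequency) differ.
BUFFER_SIZE = 10_000
BATCH_SIZE = 32
ENV_STEPS = 1_000

buffer = deque(maxlen=BUFFER_SIZE)
replayed = 0

for step in range(ENV_STEPS):
    # One real interaction with the environment (placeholder transition).
    transition = ("state", "action", 0.0, "next_state", False)
    buffer.append(transition)

    # After every real step, a random minibatch of stored transitions is
    # replayed for a gradient update, so old experiences are reused constantly.
    if len(buffer) >= BATCH_SIZE:
        batch = random.sample(list(buffer), BATCH_SIZE)
        replayed += len(batch)

# Rough replay ratio: replayed transitions per real environment step.
print("approx. replay ratio:", replayed / ENV_STEPS)  # close to BATCH_SIZE here
```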
Edit: having read the abstract of the paper now, I'm less sure the paper itself is talking about RL-style experience replay; that may just have been the author of the post connecting the two ideas.