Overcoming Catastrophic Forgetting in Neural Networks (rylanschaeffer.github.io)
123 points by RSchaeffer 101 days ago | hide | past | web | 11 comments | favorite



Great, understood a lot more than I had from the DeepMind paper alone. Though the mathematics was slightly beyond me, I got the gist of it. Although this was talking about reinforcement learning in Atari, I was wondering if it works for other domains as well: supervised, unsupervised, etc.? If it does, and say you have sparse data for task B but rich data for task A, is this saying that training first on A and then transferring to B makes it perform better on B? (As I type it, it sounds like semi-supervised learning, but that's not what I'm trying to ask. :P) P.S.: The pictures helped.


It would probably depend on how closely related the tasks are.

For instance, there have been a few papers on "transfer learning" where a network is trained on video data for the equivalent of tens of thousands of hours and then used to control a robot (where the robot's inputs are partly video). The pre-learned weights help significantly, as you'd imagine.

In another sense, it's often useful to use a pretrained network as the input to your model (you run the images through another network that outputs a simplified representation of them, and then run a second model on that). That's currently quite useful, and I could see something like it being super useful here: train on one task with lots of data, then switch to something similar with less data.
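A toy numpy sketch of that pretrained-frontend idea (all data, sizes, and learning rates invented for illustration): fit a small hidden layer on a data-rich task A, freeze it, and train only a fresh linear head on a related, data-poor task B.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Task A: plenty of data. Train a one-hidden-layer net end to end. ---
X_a = rng.normal(size=(500, 8))
w_true = rng.normal(size=8)
y_a = (X_a @ w_true > 0).astype(float)          # toy task-A labels

W1 = rng.normal(scale=0.1, size=(8, 16))        # shared "feature extractor"
w2 = rng.normal(scale=0.1, size=16)             # task-A head

for _ in range(300):
    h = relu(X_a @ W1)
    grad_out = (sigmoid(h @ w2) - y_a) / len(y_a)   # dLogLoss/dlogit = p - y
    w2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X_a.T @ (np.outer(grad_out, w2) * (h > 0))

# --- Task B: few examples of a related rule. Freeze W1, train a new head. ---
X_b = rng.normal(size=(30, 8))
y_b = (X_b @ (w_true + 0.1 * rng.normal(size=8)) > 0).astype(float)

H_b = relu(X_b @ W1)                                 # frozen pretrained features
H_b = H_b / (np.linalg.norm(H_b, axis=0) + 1e-8)     # normalize each feature

w2_b = np.zeros(16)                                  # fresh task-B head
for _ in range(500):
    w2_b -= 1.0 * H_b.T @ ((sigmoid(H_b @ w2_b) - y_b) / len(y_b))

p_b = np.clip(sigmoid(H_b @ w2_b), 1e-12, 1 - 1e-12)
loss_b = -np.mean(y_b * np.log(p_b) + (1 - y_b) * np.log(1 - p_b))
print(f"task-B log loss after training only the head: {loss_b:.3f}")
```

The task-B head starts at zero (log loss log 2 ≈ 0.693) and only has to fit 30 examples on top of the frozen features, which is the appeal of this scheme when task-B data is scarce.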


Overcoming catastrophic forgetting is a genuine step toward strong AI.


Is anyone trying to combine networks? I.e., two camera feeds, audio feeds, and some ability to interact with the surroundings (like hands or wheels)? I have a hunch that having something to sense and interact with is necessary for consciousness.



It will certainly be great to see this type of thing come along, perhaps first as a toy, like an artificial pet where it doesn't matter if it makes mistakes.

Having actuators isn't necessary for consciousness because locked-in people are conscious but can't move.


Stephen Hawking doesn't have consciousness?


Stephen Hawking interacts with his surroundings.


MathJax appears to be broken if you use HTTPS Everywhere or just visit over https [0]. Just a note to RSchaeffer. Nice article.

[0] https://rylanschaeffer.github.io/content/research/overcoming...


How does this compare, intuitively, to "short-term -> long-term memory transfer", where learned skills are stored in a subset of the neural network, and non-core details are forgotten?
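Roughly, the EWC penalty from the DeepMind paper plays that consolidation role: weights that were important for the old task get anchored near their old values, while unimportant ones stay free to change. A minimal numpy sketch of the quadratic penalty, with a made-up diagonal Fisher estimate and toy weights:

```python
import numpy as np

def ewc_penalty(theta, theta_a, fisher_diag, lam=100.0):
    """Quadratic EWC penalty: (lam/2) * sum_i F_i * (theta_i - theta*_A,i)^2.

    fisher_diag estimates how important each weight was for task A;
    weights with large F_i are pulled strongly back toward their
    task-A values, while the rest are free to move for task B.
    """
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_a) ** 2)

def ewc_grad(theta, theta_a, fisher_diag, lam=100.0):
    """Gradient of the penalty, added to the task-B loss gradient."""
    return lam * fisher_diag * (theta - theta_a)

# Toy example: two weights; the first mattered for task A, the second didn't.
theta_a = np.array([1.0, -2.0])   # weights after training task A
fisher = np.array([5.0, 0.01])    # diagonal Fisher estimate for task A
theta = np.array([1.5, 0.0])      # weights drifting during task B training

print(ewc_penalty(theta, theta_a, fisher))   # penalty dominated by weight 0
```

Note that weight 1 has moved much further from its task-A value than weight 0, yet contributes almost nothing to the penalty, which is the "forget the non-core details" behavior being asked about.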


Someone should consider hiring this young man.




