
Why should the ability to "selectively forget" be a piece of AGI?


I guess we should start with the fact that models currently have no ability to remember anything at all between sessions.

You either fine-tune, which is a very lossy process that degrades generality, or you do in-context learning/RAG. Forgetting, in its current form, would mean eliminating obsolete context; not forgetting would mean using 1 million input tokens to answer "what is 2+2?".
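To make the "eliminating obsolete context" idea concrete, here is a minimal toy sketch (entirely hypothetical, not any real system's API): a rolling memory where, before answering a query, entries with no word overlap with the query are "forgotten" for that turn rather than being stuffed into the prompt.

```python
import re

# Hypothetical stopword list; a real system would use embeddings, not word overlap.
STOPWORDS = {"the", "is", "a", "of", "what"}

def relevant_context(memory, query, keep=2):
    """Rank memory entries by naive word overlap with the query; drop the rest."""
    words = lambda s: set(re.findall(r"[a-z0-9']+", s.lower())) - STOPWORDS
    q = words(query)
    # Sort by overlap; entries sharing no content words are "forgotten" this turn.
    scored = sorted(memory, key=lambda m: len(q & words(m)), reverse=True)
    return [m for m in scored if q & words(m)][:keep]

memory = [
    "the user asked what 2+2 is yesterday",
    "the capital of France is Paris",
    "the user's favourite color is blue",
]
print(relevant_context(memory, "what is the capital of France?"))
```

The point of the sketch is only that selection happens outside the model: the model never sees the pruned entries, which is exactly the kind of external mechanism the comment argues is too limiting for AGI.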

In any case, any external mechanism for selectively managing context would be far too limiting for AGI.


I think maybe this refers to unlearning wrong information?


Also abstracting. There's no need to remember every millisecond of its lifetime and consult all of it on every query.


I can remember, for example, when I was wrong and how, and still respond correctly; I don't have to forget my wrong answer in order to give the correct one.



