I guess we should start with the fact that models currently have no persistent memory at all.
You either fine-tune, which is a very lossy process that degrades generality, or you rely on in-context learning/RAG. "Forgetting" in its current form would mean eliminating obsolete context; not forgetting would mean spending a million input tokens to answer "what is 2+2?".
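To make the "forgetting as context elimination" idea concrete, here is a minimal sketch of how a RAG-style system "forgets" for a given turn: it scores stored snippets against the query and only includes the most relevant ones in the prompt. Everything here (the `memory` list, the crude word-overlap `score`, `prune_context`) is a hypothetical illustration, not anyone's actual implementation; real systems would use embedding similarity.

```python
import re

def tokens(text: str) -> set[str]:
    # Crude lowercase tokenizer that keeps "2+2" together as one token.
    return set(re.findall(r"[a-z0-9]+(?:\+[a-z0-9]+)*", text.lower()))

def score(snippet: str, query: str) -> int:
    # Hypothetical relevance measure: count of shared tokens.
    return len(tokens(snippet) & tokens(query))

def prune_context(memory: list[str], query: str, k: int = 2) -> list[str]:
    # Keep the top-k most relevant snippets; "forget" the rest for this turn.
    return sorted(memory, key=lambda s: score(s, query), reverse=True)[:k]

memory = [
    "user prefers metric units",
    "yesterday we debugged a race condition in the scheduler",
    "basic arithmetic: 2+2 equals 4",
]
print(prune_context(memory, "what is 2+2?", k=1))
```

Note that nothing is actually deleted from `memory`; the "forgetting" is purely about what gets fed into the context window, which is exactly why it is an external mechanic rather than the model itself remembering.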
In any case, any external mechanic for selectively managing context would be far too limiting for AGI.
I can remember, for example, when I was wrong and how, and still respond correctly; I don't have to forget my wrong answer in order to give the correct one.