
If they worked on problems and trained themselves on their own output whenever they beat the baseline, they would absolutely get smarter.

I don't think this is sufficient proof for AGI, though.

At present, it seems pretty clear they'd get dumber (for at least some definition of "dumber"), based on the outcome of experiments that train models on synthetic data. I agree that I'm not clear on the relevance to the AGI debate, though.

There have been some much-publicised studies showing poor performance when training from scratch on purely unfiltered synthetic data.

Curated synthetic data has yielded excellent results, even when the curation is done by an AI.
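To make the curation point concrete, here is a toy sketch of filtering synthetic data with an automatic verifier before any training happens. The `noisy_add` generator is a hypothetical stand-in for a model that is often but not always right; the names are made up for illustration, not any real training API:

```python
import random

def noisy_add(a, b, rng):
    """Hypothetical stand-in for a model: usually right, sometimes off by one."""
    return a + b + (0 if rng.random() < 0.7 else rng.choice([-1, 1]))

def curate(n=200, seed=0):
    """Generate synthetic (problem, answer) pairs, then keep only the
    pairs an automatic checker verifies -- curation before training."""
    rng = random.Random(seed)
    raw, kept = [], []
    for _ in range(n):
        a, b = rng.randrange(100), rng.randrange(100)
        ans = noisy_add(a, b, rng)
        raw.append((a, b, ans))
        if ans == a + b:  # verifier: exact-answer check
            kept.append((a, b, ans))
    return raw, kept

raw, kept = curate()
# every curated pair is correct; the raw pool is not guaranteed to be
assert all(a + b == ans for a, b, ans in kept)
```

The raw pool contains errors, but the kept set is correct by construction, which is the property the curation studies rely on.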

There is no requirement to train from scratch in order to get better; you can start from where you are.

You may not be able to design a living human being, but random changes plus keeping the bits that perform better can produce one.
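The "random changes, keep what performs better" idea can be sketched as a tiny hill climber. This is a toy analogue only (bit flips scored by counting 1s, parameters invented for the example), not a claim about how any model is actually trained:

```python
import random

def hill_climb(length=32, steps=500, seed=0):
    """Toy 'random changes + keep the bits that performed better':
    flip one random bit per step, accept the change only if the
    score (here just the count of 1s) does not regress."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(length)]
    start = sum(current)
    score = start
    for _ in range(steps):
        i = rng.randrange(length)
        candidate = current[:]
        candidate[i] ^= 1  # one random change
        if sum(candidate) >= score:  # keep only non-regressions
            current, score = candidate, sum(candidate)
    return start, score

print(hill_climb())
```

By construction the score never decreases, so the final score is at least the starting one: improvement without any designer in the loop.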


If you put MuZero in a room with a board game it gets quite good at it. (https://en.wikipedia.org/wiki/MuZero)

We'll see if that generalizes beyond board games.



