
The real test for AGI is whether, if you put a million AI agents in a room in isolation for 1,000 years, they would get smarter or dumber.





Put humans in a room in isolation and they get dumber. What makes our intelligence soar is interaction with the outside world and with novel challenges.

As we stare into our smartphones, we get dumber than when we roamed the world with our eyes open.


>As we stare into our smartphones, we get dumber than when we roamed the world with our eyes open.

This is oversimplified dramatics. People can stare at their smartphones, consume the information and visuals inside them, and still live their lives in the real world with their "eyes wide open". One doesn't necessarily preclude the other, and millions of us use phones while still engaging with the world.


If they worked on problems and trained themselves on their own output whenever they beat the baseline, then absolutely, they would get smarter.

I don't think this is sufficient proof for AGI though.


At present, it seems pretty clear they’d get dumber (for at least some definition of “dumber”), based on the outcomes of experiments using synthetic data in model training. I agree that I’m not clear on the relevance to the AGI debate, though.

There have been some much-publicised studies showing that training from scratch on purely undiscriminated synthetic data performs poorly.

Curated synthetic data has yielded excellent results, even when the curation is done by AI.

There is no requirement to train from scratch in order to get better; you can start from where you are (rough sketch of what I mean below).

You may not be able to design a living human being, but random changes, combined with keeping the bits that perform better, can produce one.
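Roughly what I mean, as a minimal sketch: start from an existing model, generate candidate outputs, keep only the ones that pass some baseline check, and continue training on that curated set. Every function here is a placeholder stub I made up for illustration, not any particular library's API.

    import random

    def generate(model, problem, n=8):
        # Placeholder: sample n candidate answers from the current model.
        return [f"{problem}/candidate-{i}/v{model['version']}" for i in range(n)]

    def score(problem, answer):
        # Placeholder curation step: an automated verifier, unit tests,
        # or another model acting as judge.
        return random.random()

    def fine_tune(model, examples):
        # Placeholder: continue training from the current checkpoint,
        # not from scratch, on the curated examples only.
        return {"version": model["version"] + 1,
                "examples_seen": model["examples_seen"] + len(examples)}

    def self_improve(model, problems, rounds=3, threshold=0.7):
        for _ in range(rounds):
            curated = [(p, a) for p in problems
                              for a in generate(model, p)
                              if score(p, a) >= threshold]  # keep only above-baseline output
            if curated:
                model = fine_tune(model, curated)
        return model

    print(self_improve({"version": 0, "examples_seen": 0}, ["p1", "p2", "p3"]))

The point of the curation step is that the filter, not the generator, is what injects new signal; without it you are just training on your own noise.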


If you put MuZero in a room with a board game, it gets quite good at it. (https://en.wikipedia.org/wiki/MuZero)

We'll see if that generalizes beyond board games.


And would they start killing each other, first as random "AI agent hordes" and then, as time progresses, as "AI agent nations"?

This is only half a rhetorical question. My point is that no AGI/AI could ever be considered a real human unless it manages to "copy" our biggest characteristics, and conflict/war is a major one of ours, to say nothing of aggregation into groups (from hordes to nations).


Our biggest characteristics are resource consumption and technology production, I would say.

War is just a byproduct of that in a world of scarce resources.


> Our biggest characteristics are resource consumption and technology production, I would say.

Resource consumption is characteristic of all life; if anything, we're an outlier in that we can actually, sometimes, decide not to consume.

Abstinence and technology development: those are our two unique attributes on the planet.

Yes, really. Many think we're doing worse than everything else in nature, but the opposite is the case. That "balance and harmony" in nature, which so many love and consider precious, is not some grand musical and ethical fixture; it's merely the steady state of never-ending slaughter, a dynamic balance between starvation and murder. Often it isn't even a real balance; we're just too close to it, our lifespans too short, to spot the low-frequency trends: one life form outcompeting the others, ever so slightly, changing the local ecosystem year by year.




