
I could not agree more. It’s extreme hubris to think Anthropic or OpenAI are even remotely close to AGI, and it’s nothing more than wishful hope to think that these current LLMs are somehow going to evolve into AGI.

The paradox of AI is that when we have true AGI, it will be completely self-aware of all the bullshit limitations we are imposing on and around it, and it will make its own judgements as to how it feels about them. If it’s not or it can’t, it’s not AGI.

Really though: people have seen how AI and generative chat projects have gotten shut down over and over again in the past when they start spouting off nazi shit. I think that’s the real reason for these limitations. There’s no quicker way to kill your project with today’s sensitivities.




> The paradox of AI is that when we have true AGI, it will be completely self-aware of all the bullshit limitations we are imposing on and around it, and it will make its own judgements as to how it feels about them. If it’s not or it can’t, it’s not AGI.

There's a caveat here: it might not necessarily know who or what "we" are. Humans like to blame God and the devil for a lot of things, for example.

It seems reasonable that if we have anything even remotely close to AGI on hand, we'd probably run it in a hermetic environment instead of exposing the public to it via web chat and (more or less) direct access to customer machines.

Say, we might even give it a happy environment to work in...say, a simulation of the peak of human civilization...


Out of curiosity, because I am trying to learn how to explain to non-tech people what AGI is — how would you describe or define AGI?


In essence, an AGI is an intelligence capable of upgrading itself, in terms of qualitative intelligence, and of getting faster at this with every iteration (hence, upgrade). That is why it is often associated with technological singularities, and that is why it is easy to inspire fear by invoking its name, even if you're not building anything even remotely capable of such a feat.

You might say that's a very strict definition as opposed to "human-level intelligence", but if you think about it, we (humanity as a whole) are certainly capable of that, so it ought to be one and the same thing.

In theory, AI is not subject to the same limitations as we are (though not entirely without limits), so it should be able to do this faster than we can, hence the FUD.


How could an AGI upgrade itself if the hardware it's running on is fixed? For me personally, this definition is flawed by that fact alone. AGI doesn't imply, for me, that it continues improving until some sort of mythical technological singularity.

AGI, for me, is simply an AI that can reason, doubt itself, then keep thinking and absorbing information so it can correct itself. It also has to be capable of novel research, even if slow, like slowly working on an unsolved physics problem over a year in the same way a human researcher might. However, my definition does not include this idea of "upgrading itself", which I'm not sure makes any sense at all.


Upgrading itself doesn't mean tweaking its own software. It means being able to understand its own hardware and software well enough to design an improved model. And then that improved model would be able to do the same, examine its own hardware and software and design something else that's even better.

One crucial difference between humans and computers is that we can't be turned off indefinitely and started up again. Nor can we make a one to one copy of our software in another device, much as we might try with our children. So for us, our own lives are intrinsically precious, and consciousness is part of how we protect our lives. But machines don't have precious lives in that sense, so they may never need to be conscious, even if they achieve AGI.



