Hacker News
Poll: Is ChatGPT AGI? (Artificial General Intelligence)
2 points by logicallee on Dec 8, 2022 | hide | past | favorite | 2 comments
If you have used ChatGPT, does it meet your standards for AGI? (Artificial General Intelligence)
No
3 points
Yes
1 point
Not sure or no opinion
1 point
I haven't used ChatGPT
1 point



Not yet. Here's why:

1. The answers it gives are regurgitated and are often incorrect or nonsensical. It may appear intelligent prima facie, but after looking at the answers it gives, it is clearly not AGI. It's not even close right now; it is still too early for that.

2. ChatGPT isn't able to discover novel solutions or approaches to existing or unseen problems, especially in computer science, physics, or unsolved mathematics. When asked to, it will either refuse or output an obviously incorrect answer. This happens often with proofs, very complex code questions, and especially unseen problems.

It may be able to get away with this in other generative tasks, but for more rigorous applications it falls totally flat.

3. Most importantly, it is unable to explain how it arrived at an answer. Even when asked to explain, it does not transparently show the reasoning behind the answers it gives. That is a dead end for safety-critical and serious applications that require thorough explanation.

Cutting through the hype, ChatGPT is still riddled with inaccuracies; but it does have potential if those 3 points are tackled. This Bloomberg post [0] mirrors exactly what I have said.

[0] https://archive.ph/8s78A


No, but it might actually get to something eerily close once it gets hooked up to the ability to perform actions in the outside world.

ChatGPT wasn't working for me yesterday (couldn't log in). So I played around with good old GPT-3 and tried to re-create the experience.

I set up a chat between me (HUMAN), the AI (AI), a SEARCH endpoint and a PYTHON interpreter. And I told the AI that, when it needs more information, it can ask SEARCH. And when it needs to run code, it can send it to PYTHON.

And, lo and behold, it immediately "got" the idea! Where before, it would have confabulated ("hallucinated", as OpenAI calls it), it now performed a search first.
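The setup described above can be sketched as a simple prompt-scaffolded tool loop. This is only an illustration, not the commenter's actual code: the model is stubbed out with a canned function (a real version would call a GPT-3 completion endpoint), and the SEARCH endpoint is likewise a stub.

```python
import io
import contextlib

def fake_model(transcript: str) -> str:
    """Stand-in for the language model. A real version would send the
    transcript to a completion API and return the model's next line."""
    if "SEARCH:" not in transcript:
        return "SEARCH: population of France 2022"
    return "AI: France has about 68 million inhabitants."

def run_search(query: str) -> str:
    """Stub SEARCH endpoint; a real one would query a web-search API."""
    return "France population 2022: ~68 million"

def run_python(code: str) -> str:
    """PYTHON endpoint: executes a snippet and captures its printed output."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def chat_turn(user_msg: str, max_steps: int = 5) -> str:
    """Loop: feed the growing transcript to the model; when it emits a
    SEARCH: or PYTHON: line, run the tool and append the RESULT so the
    model sees it on the next step; stop when it answers as AI:."""
    transcript = f"HUMAN: {user_msg}\n"
    for _ in range(max_steps):
        line = fake_model(transcript)
        transcript += line + "\n"
        if line.startswith("SEARCH:"):
            transcript += f"RESULT: {run_search(line[7:].strip())}\n"
        elif line.startswith("PYTHON:"):
            transcript += f"RESULT: {run_python(line[7:].strip())}\n"
        elif line.startswith("AI:"):
            return line[3:].strip()
    return "(no answer)"

print(chat_turn("How many people live in France?"))
```

The point of the sketch is the control flow, not the stubs: the model never calls anything itself; the surrounding loop pattern-matches its output and injects tool results back into the prompt.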

And who says you cannot give GPT access to other "endpoints"? Like a robot arm. Or a vehicle.

With that, the old argument that AGI is impossible for an agent that does not partake in real-world interactions becomes moot.

Until now, I thought that OpenAI's "stochastic" approach could only ever pretend to be intelligent. I thought its tendency to confabulate made the approach a dead end.

Today, I am no longer so sure.



