
I don't think the goalposts for AGI have ever moved; they were defined rather well by the Turing test. AGI must be general: capable of reasoning at least as well as a human in every domain of inquiry, all at the same time. Showing better-than-human reasoning in certain domains is trivial, and has been happening since at least Babbage's difference engine.

However, while AI has been overtaking human reasoning on many specific problems, we are still very far from any kind of general intelligence that could conduct itself in the world, or in open-ended conversation, with anything approaching the intelligence of a human (or, really, of any multicellular organism).

Furthermore, it remains obvious that even our best specialized models require vastly more training (number of examples plus time) and energy than a human or animal needs to reach similar performance, wherever the two are comparable. This may be due to the 'hidden' learning accumulated over millions of years of evolution and encoded in every living being today, but it may also be that we are missing some fundamental advances in the act of learning itself.




The Turing test is not about intelligence; it's about being able to credibly ape humans. You don't need to be able to credibly describe what it's like to fall off a bike or to fall in love in order to be intelligent... but you do have to in order to pass a Turing test.

The first chatbots that I remember claiming to have passed the Turing test were actually credibly dumb (mimicking a teenage non-native English speaker, with deliberate grammar mistakes to paper over misunderstandings).


The Turing test ('the imitation game') was designed as an objective way to answer the question 'can this machine think?'. The fact that it can be gamed by feigning stupidity or poor language skills is a weakness of the specifics of the test, perhaps, but not of the idea in principle.

Instead, the idea is to have an open-ended conversation with the machine, to probe its ability to display general human-level intelligence. It's true that intelligence in general is far broader than what the test measures (after all, an ant colony would not even begin to pass the Turing test, but it still possesses general intelligence in a way that no AI does yet).
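To make the setup concrete, here's a minimal sketch of the imitation game's structure. This is my own hypothetical framing, not Turing's exact specification: the judge/respondent interfaces (ask, answer, identify_machine) are assumed for illustration, not real APIs.

  import random

  # Hypothetical sketch of the imitation game: a judge converses with two
  # hidden respondents and must guess which one is the machine.
  def imitation_game(judge, machine, human, num_questions=10):
      # Hide the two respondents behind anonymous labels, in random order.
      pair = [machine, human]
      random.shuffle(pair)
      respondents = {"A": pair[0], "B": pair[1]}
      transcript = []
      for _ in range(num_questions):
          # Open-ended: the judge may probe any domain whatsoever.
          question = judge.ask(transcript)
          answers = {label: r.answer(question) for label, r in respondents.items()}
          transcript.append((question, answers))
      # The judge guesses which label is the machine.
      guess = judge.identify_machine(transcript)
      return respondents[guess] is machine  # True if the machine was caught

The machine 'passes' if, over many such games, judges identify it no better than chance. Framed this way, you can also see why the test can be gamed: a weak judge, or a persona that excuses bad answers, shrinks the space of questions that would expose the machine.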

Building an artificial ant colony (or even 1 artificial ant that lives in a real colony) with all of the problem-solving abilities of a real one would still be a monumental achievement in AI. I think there is even hope that if we could do that, advancing to human level intelligence and beyond would be just around the corner, though that remains to be seen.


Insect-level intelligence in robots would already be extremely dangerous IMO.

Especially if they were equipped with reasonable models of our "fast thinking" pathways, which aren't that smart to begin with and are already easily gamed by bots on the Web.



