
The problem is that "intelligence" is not a well-defined concept. Usually we define it (consciously or not) as "something humans are good at, and animals / artificial tools are not." So, of course, as soon as a machine is able to do something, that thing no longer fits the definition.

That's why things that were considered a sign of intelligence a century ago (things like being good at mental arithmetic, having a good memory for facts or events, or being able to retrieve some piece of information from a big pile of documents) stopped being considered intelligence as soon as computers appeared. Chess playing was still the typical intelligent activity, but it stopped being one as soon as Deep Blue won a game. Being able to go from point A to point B with nothing but a map, and being able to say "hmm, let's avoid this road, there's usually a lot of traffic at this time of day," counted too, before the advent of connected GPS navigation. And so on.

Now that we're getting closer and closer to passing the Turing test, more and more people are claiming that this test is pretty bad after all, and isn't a good way to assess intelligence.

In a way, "artificial intelligence" is an oxymoron. It's a battle that cannot be won.
