Hacker News

This is an old problem in AI. Chess was an AI problem, until a computer beat a grandmaster. Vision was an AI problem, now we have OpenCV. Many AI problems get shifted out of "AI" once they're solved.


It stems from our definition of an AI.

An AI is a computer doing those things a computer cannot do. As such, anything that a computer cannot do isn't AI, and anything a computer can do isn't AI either.


Hmm, the 'No true AI' fallacy, then, eh?


Pretty much, assuming you're making an analogy to the "no true Scotsman" fallacy.


One explanation for this could be that we think that some problem is so hard that any solution to it is necessarily so complicated that it could be adapted to solve pretty much anything. When we realize that that isn't the case, we stop calling it AI.


To be fair, I don't think OpenCV really solved computer vision. There's no model out there that can answer questions about an image as well as a human can, or accurately interpret (parse, if you will) the contents of an image, except in a very few special cases.


Learning to do something is an AI problem.

Writing a program to play Chess is not AI, but doing so has helped us figure out learning.



