
That was Brooks' reaction against the logic-based AI of the 1980s. Around 1990 I went to a talk where Brooks was plugging the Cog project he was starting.[1] Brooks had had some success with purely reactive insect-level AI, and was trying to make the big jump to human-level AI. I asked him: why not try for mouse-level AI? That might be within reach. He said, "Because I don't want to go down in history as the man who developed the world's greatest robot mouse."

This is a classic problem with AI researchers. Somebody gets a good result, and then they start thinking strong human-level AI is right around the corner. AI went through this with search, planning, the General Problem Solver, perceptrons, the first generation of neural networks, and expert systems. Then came the "AI winter", late 1980s to early 2000s, when almost all the AI startups went bust. We're seeing some of it again in the machine learning / deep neural net era.

This time looks more promising, partly because we can throw more compute power at the problem. Many of the ideas in machine learning and neural nets are old, and are so inefficient that they were hopeless until people could beat on them with racks of GPUs.

The big difference this time is that AI is profitable. The field used to be tiny - maybe 20-30 people at MIT, Stanford, and CMU, with a few small groups elsewhere. Now there are hundreds of thousands of researchers, and profitable applications.

(I went through Stanford just as the expert system boom was collapsing and the "AI winter" was beginning. I met most of the big names from the logic-based AI era. It was kind of sad.)

[1] https://en.wikipedia.org/wiki/Cog_(project)




Is the "hundreds of thousands of researchers" number just a guess? If it isn't, I'd love to see the source.



