This is a classic problem with AI researchers. Somebody gets a good result, and then they start thinking strong human-level AI is right around the corner. AI went through this with search, planning, the General Problem Solver, perceptrons, the first generation of neural networks, and expert systems. Then came the "AI winter", from the late 1980s to the early 2000s, when almost all the AI startups went bust. We're seeing some of it again in the machine learning / deep neural net era.
This time looks more promising, partly because we can throw more compute power at the problem. Many of the ideas in machine learning and neural nets are old, and were so inefficient that they were hopeless until people could beat on them with racks of GPUs.
The big difference this time is that AI is profitable. The field used to be tiny - maybe 20-30 people at MIT, Stanford, and CMU, with a few small groups elsewhere. Now there are hundreds of thousands of researchers, and profitable applications to pay for them.
(I went through Stanford just as the expert system boom was collapsing and the "AI winter" was beginning. I met most of the big names from the logic-based AI era. It was kind of sad.)