
What a randomly self-serving assumption to make.



I can't agree. If we can blur the difference between weak and strong AI, that 'transitional form' (missing link?) will be enormously important to the future of AI (and mankind).

In the past 50 years, AI has seen hundreds of small successes in narrow tasks that used to require humans. But none so far has shown the potential to scale up, generalize, and serve tasks other than the narrow one for which it was designed. Like IBM Watson, AlphaGo too is likely to be consigned to the AI scrapheap in the sky.

BUT... the deep-net technique used by AlphaGo shows more promise for solving the remaining unsolved AI tasks than any AI method before it. Yes, we still don't know DL's limits: whether it can integrate one-shot learning, build and reuse a diverse knowledge base, or transfer specific methods to new, more general problems. But as of right now, it has shown greater promise on novel weak-AI tasks than any past technique I've seen. The author overlooks that potential deliberately and provocatively, and IMO, pointlessly.
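
To make "transfer" concrete: in current practice it usually means reusing a network trained on one task as the starting point for a different one. A minimal sketch, assuming PyTorch and torchvision (neither framework, nor the ResNet-18 model, is named in the comment; this is just one common illustration of the idea):

    # Hypothetical transfer-learning sketch: reuse features learned on
    # ImageNet for a new, narrower 10-class task.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(pretrained=True)  # weights learned on ImageNet

    # Freeze the pretrained feature extractor so only the new head trains.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classifier with one sized for the new task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3)
    # ...then train model.fc on the new task's data as usual...

Whether this kind of feature reuse scales from "same domain, new labels" to genuinely more general problems is exactly the open question the comment raises.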

Can DL scale up into strong AI too? I think the important thing here isn't that the answer isn't obviously yes (as the author posits), but that the answer isn't obviously no. And in the 50+ year quest for strong AI, that's a first, at least for me.


It's not self-serving to say that PhDs are better than I am at predicting the future of their own field.

It would be self-serving if I lied in an interview and claimed I was qualified for a job building robots when I'd never had any experience doing so.



