Hacker News

Yes, it's called Cyc, and he's been busy building it since then. They have literally been entering knowledge facts into a computer since the 80s; the process is now partially automated by parsing text from the internet. Back in the 80s they set a goal, in terms of the number of rules, at which they expected intelligent behavior to emerge, and he showed that they are getting close now.
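To give a flavor of what "entering knowledge facts plus rules" means, here is a minimal toy sketch in Python. It is not CycL (Cyc's actual representation language): just hand-entered triples and one forward-chaining rule, all names illustrative.

```python
# Toy knowledge base: hand-entered facts as (predicate, arg1, arg2) triples.
# "isa" = instance-of, "genls" = subclass-of (names borrowed loosely from Cyc).
facts = {
    ("isa", "Socrates", "Human"),
    ("genls", "Human", "Mammal"),
}

def forward_chain(facts):
    """Apply one hand-written inference rule until fixpoint:
    isa(x, c) and genls(c, d)  =>  isa(x, d)."""
    changed = True
    while changed:
        changed = False
        new = set()
        for (p1, x, c) in facts:
            if p1 != "isa":
                continue
            for (p2, c2, d) in facts:
                if p2 == "genls" and c2 == c and ("isa", x, d) not in facts:
                    new.add(("isa", x, d))
        if new:
            facts |= new
            changed = True
    return facts

print(("isa", "Socrates", "Mammal") in forward_chain(facts))  # → True
```

The real system has millions of such assertions and far richer rule machinery; the point is only that derived facts fall out of stored ones mechanically.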

The talk is from 2005, and it's also been two years since I watched it, so I'm not confident summarizing it. I was quite impressed when I watched it for the first time, though. The reason I brought it up was more like "see what you could do with an ontology" than "this is what it should look like" or "an ontology is all you need".




I'm somewhat skeptical that AI can be built by extrapolating the Cyc project. At best we'll get a sophisticated QA bot. I feel that vision is central to human cognition, and the Cyc project seems to be all about relationships of words to words, without any relation to visual material such as images and video.


Probably no one is reading this, but ...

Like I said, I mentioned Cyc more because it is interesting than for anything else. However, I do believe words and local image parts are just cognitive concepts, and that they will eventually be handled by the same algorithms; see, e.g., the Socher et al. paper I referenced above.
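The "same algorithms for words and image parts" idea can be sketched very roughly. This is a hypothetical toy, not the actual Socher et al. model: a single learned composition function merges two child vectors into a parent vector, and the same function is applied whether the leaves are word embeddings or image-segment features. All dimensions and the greedy merge order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                 # toy embedding dimension
W = rng.standard_normal((d, 2 * d))   # one shared composition matrix

def compose(left, right):
    # parent = tanh(W [left; right]) -- same rule for text and vision
    return np.tanh(W @ np.concatenate([left, right]))

def greedy_parse(leaves):
    """Greedily merge adjacent nodes left-to-right (a stand-in for the
    scored tree search in the real model)."""
    nodes = list(leaves)
    while len(nodes) > 1:
        nodes = [compose(nodes[0], nodes[1])] + nodes[2:]
    return nodes[0]

word_vecs = [rng.standard_normal(d) for _ in range(3)]     # a 3-word phrase
segment_vecs = [rng.standard_normal(d) for _ in range(3)]  # 3 image regions

sentence_vec = greedy_parse(word_vecs)
scene_vec = greedy_parse(segment_vecs)
print(sentence_vec.shape, scene_vec.shape)  # → (4,) (4,)
```

Both a sentence and a scene end up as vectors in the same space, which is the sense in which one algorithm could cover both modalities.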

However, I am not so sure how this fits together with planning and acting autonomously (which would fall under reinforcement learning). But I wasn't really talking about building strong AI, just an AI strong enough to convince people it is human during a 30-minute conversation.


I read it. Thanks for taking the time to reply.


How do you explain the cognition of blind people?



