
Please share more. You can't tease us like that and leave us hanging.



I posted some stuff above, but I am not sure how accessible it is for someone without a Machine Learning/NLP background. Maybe the Hinton Google talk is not too bad.

Here is another Google Talk (not really a recent development, though):

http://video.google.com/videoplay?docid=-7704388615049492068...

He is claiming that what is missing for computers to understand humans is common sense in the form of a big ontology (a database of relations between entities). I don't really agree that this is the right approach, but it might be interesting for you nonetheless.
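To give a rough idea of what I mean by "a database of relations between entities", here is a toy sketch in Python. The facts and the inheritance rule below are made up purely for illustration, not taken from any real system:

    # Toy "ontology": a set of (subject, relation, object) triples.
    # All facts and the inheritance rule are illustrative only.
    FACTS = {
        ("penguin", "is_a", "bird"),
        ("bird", "is_a", "animal"),
        ("bird", "can", "fly"),
        ("penguin", "cannot", "fly"),
    }

    def is_a(entity, category):
        """Follow is_a links transitively: penguin -> bird -> animal."""
        if (entity, "is_a", category) in FACTS:
            return True
        return any(is_a(parent, category)
                   for (subj, rel, parent) in FACTS
                   if subj == entity and rel == "is_a")

    def can(entity, action):
        """Inherit abilities from parent categories unless overridden."""
        if (entity, "cannot", action) in FACTS:
            return False
        if (entity, "can", action) in FACTS:
            return True
        return any(can(parent, action)
                   for (subj, rel, parent) in FACTS
                   if subj == entity and rel == "is_a")

    print(is_a("penguin", "animal"))  # True: follows the two-step is_a chain
    print(can("penguin", "fly"))      # False: the exception overrides the inherited fact

The point of a project like Cyc is that getting from a toy like this to common sense takes millions of such assertions and much richer inference.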


> He is claiming that what is missing for computers to understand humans is common sense in the form of a big ontology (a database of relations between entities).

Doug Lenat claimed that in the 1980s and spent quite a while trying to build it (Cyc IIRC). What is different this time? ("We can do it now" is a perfectly reasonable answer.)


Yes, it's called Cyc, and he's been busy building it ever since. They have literally been entering knowledge facts into a computer since the 80s; it's now partially automated by parsing text from the internet. Back in the 80s they set a target number of rules at which they expected intelligent behavior to emerge, and he showed that they are getting close to it now.

The talk is from 2005, and it's also been two years since I watched it, so I'm not confident summarizing it. I was quite impressed when I watched it for the first time, though. The reason I brought it up is more like "see what you could do with an ontology" than "this is what it should look like" or "an ontology is all you need".


I'm somewhat skeptical that AI can be built by extrapolating the Cyc project. At best we'll get a sophisticated QA bot. I feel that vision is central to human cognition, and the Cyc project seems to be all about relationships of words to words, without any connection to visual data such as images and video.


Probably no one is reading this anymore, but ...

Like I said, I mentioned Cyc more because it is interesting than anything else. However, I do believe words and local image parts are just cognitive concepts and will eventually be handled using the same algorithms; see e.g. the Socher et al. paper I referenced above.
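As a rough sketch of what "handled using the same algorithms" could mean: put word concepts and image-part concepts into one shared vector space, so that similarity queries don't care which modality a vector came from. The vectors below are random placeholders, not trained embeddings from the Socher et al. work:

    import numpy as np

    # Toy shared embedding space for words and image parts.
    # Random placeholder vectors stand in for trained embeddings.
    rng = np.random.default_rng(0)
    DIM = 64

    concepts = {
        ("cat", "word"): rng.normal(size=DIM),
        ("dog", "word"): rng.normal(size=DIM),
        ("image_patch_007", "image"): rng.normal(size=DIM),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def nearest(query_vec, k=2):
        """Rank all concepts by similarity to the query, regardless of modality."""
        scored = [(cosine(query_vec, v), name, modality)
                  for (name, modality), v in concepts.items()]
        return sorted(scored, reverse=True)[:k]

    # An image patch can be compared directly against word concepts, and vice versa.
    print(nearest(concepts[("image_patch_007", "image")]))

With trained embeddings in place of the random vectors, the same lookup would return semantically related concepts across modalities.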

However, I am not so sure how this fits together with planning and acting autonomously (which would fall under reinforcement learning). But I wasn't really talking about building strong AI, just an AI strong enough to convince people it is human during a 30-minute conversation.


I read it. Thanks for taking the time to reply.


How do you explain the cognition of blind people?



