
I am a graduate student in artificial intelligence.

I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.

We are getting there. We really are. A lot of iterations further and cleverbot could actually act more human than HAL. If you'd asked me a few months ago, I would have said that AI is basically just applied statistics, which sometimes manages to look intelligent if you don't look too closely. I changed my mind.

There is some quite recent research in Natural Language Processing/Information Retrieval which manages to blur the line between just statistics over words and really understanding what those words mean (if you define meaning as relations to other symbols).
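To make "meaning as relations to other symbols" a bit more concrete, here is a toy sketch of the distributional idea (my own minimal illustration, not taken from any particular paper; the corpus and window size are arbitrary): count which words co-occur, compress the counts with an SVD, and compare words by the resulting vectors.

    # Toy distributional semantics: word "meaning" from co-occurrence statistics.
    # Purely illustrative; real systems use huge corpora and better weighting.
    import numpy as np
    from collections import defaultdict

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "the cat chased the mouse",
        "the dog chased the cat",
    ]

    # Symmetric word-word co-occurrence counts within a window of 2.
    counts = defaultdict(lambda: defaultdict(float))
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - 2), min(len(words), i + 3)):
                if i != j:
                    counts[w][words[j]] += 1.0

    vocab = sorted(counts)
    M = np.array([[counts[w][c] for c in vocab] for w in vocab])

    # A low-rank SVD turns raw counts into dense vectors; words that appear
    # in similar contexts end up with similar vectors.
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    vectors = U[:, :2] * S[:2]

    def similarity(a, b):
        va, vb = vectors[vocab.index(a)], vectors[vocab.index(b)]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

    # Words with shared contexts ("cat"/"dog") tend to score higher than
    # words with unrelated contexts ("cat"/"rug"), purely from statistics.
    print(similarity("cat", "dog"), similarity("cat", "rug"))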




As an ex-graduate student in Natural Language Processing/Machine Learning, I am curious what recent changes made you see things from a different perspective.

I have not been keeping track of newer research, so would love to know.


Rereading my post, I realize it sounds like a big breakthrough occurred in the last few months. That is not the case. I was basically talking about deep learning neural networks. Anyway, here are a number of things that caused me to change my mind:

1. I saw this talk:

http://videolectures.net/nips09_collobert_weston_dlnl/

This is their paper: http://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf but if you are interested in the bigger picture, I think you have to see the talk.

2. I saw this talk by Geoffrey Hinton about deep learning neural networks:

http://www.youtube.com/watch?v=AyzOUbkUf3M

3. I saw a talk and read some papers by these guys:

http://www.gavagai.se/ethersource-technology.php

It is not necessarily that I believe in their approach, but it made me rethink my idea of meaning.

EDIT: This is also a cool paper in this regard: http://nlp.stanford.edu/pubs/SocherLinNgManning_ICML2011.pdf


I'm a PhD student at Stanford working with Andrew Ng, who is known for his work on Deep Learning. I've worked on these networks for the last few years.

I think it's great that people get excited about these advances, but it is also easy to extrapolate their capabilities too far, especially if you're not familiar with the details.

Indeed, we are making good progress but most of it relates specifically to perceptual parts of the cortex-- the task of taking unstructured data and automatically learning meaningful, semantic encodings of it. It is about a change of description from raw to high-level. For example, taking a block of 32x32 pixel values between 0 and 1 and transforming this input to a higher-level description such as "there is stimulus number 3731 in this image." And if you were to inspect other 32x32 pixel regions that happen to get assigned stimulus id 3731, you could for example find that they are all images of faces.

This capability should not be extrapolated to the general task of intelligence. The above is achieved by mostly feed-forward, simple sigmoid functions from input to output, where the parameters are conveniently chosen according to the data. That is, there is absolutely no thinking involved.
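To make "feed-forward, simple sigmoid functions" concrete, here is a bare-bones sketch (illustrative only; the layer sizes are arbitrary and the weights are random placeholders, whereas in a real deep network they are learned from data): a 32x32 patch is flattened and pushed through two sigmoid layers, and the index of the strongest output unit plays the role of the "stimulus id".

    # Minimal feed-forward sigmoid encoder: raw 32x32 pixels -> "stimulus id".
    # Weights are random placeholders; in a real deep network they are learned.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)

    # Two layers: 1024 pixel inputs -> 256 hidden units -> 4000 "stimulus" units.
    W1, b1 = rng.normal(0, 0.01, (256, 1024)), np.zeros(256)
    W2, b2 = rng.normal(0, 0.01, (4000, 256)), np.zeros(4000)

    def encode(patch):
        # patch: 32x32 array of pixel values between 0 and 1
        x = patch.reshape(-1)         # raw description: 1024 numbers
        h = sigmoid(W1 @ x + b1)      # intermediate features
        y = sigmoid(W2 @ h + b2)      # high-level description
        return int(np.argmax(y))      # e.g. "stimulus number 3731"

    patch = rng.random((32, 32))
    print("stimulus id:", encode(patch))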

The mind, an intelligence, is a process of combining many such high-level descriptions: deciding what to store, when, and how, retrieving information from the past, representing context, deciding relevance, and an overall loopy process of making sense of things. A deep network is much less ambitious, as it only aims to encode its input in a more semantic representation, and it's interesting that you can do a good job at that just by passing inputs through a few sigmoids. Moreover, as far as I'm aware, there are no obvious extensions that could make the same networks adapt to something more AI-like. Depending on your religion, you may think that simply introducing loops in these networks will do something similar, but that's controversial for now, and my personal view is that there's much more to it.

Overall, I found this article to be silly. There is no system I'm currently aware of that I consider to be on a clearly promising path to Turing-like strong AI, and I wouldn't expect anything that can reliably convince people it is human in the next 20 years at least. Chat bots are a syntactical joke.


I am currently working on an algorithm for unsupervised grammar learning. Part of what made me change my mind about this is the realization that what is required to learn the syntax of a language is also what is required to learn semantic relationships between objects, based on that syntactic data. You just have to go up one level of abstraction.

I believe we are not too far from having algorithms that can parse a natural language sentence into a semantic representation which links abstract concepts in a way powerful enough for, e.g., question answering beyond just information retrieval (statistical guesswork based on word frequencies). I am not so sure how or whether we can build this into strong AI, though.
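To give a deliberately naive picture of the kind of semantic representation I mean, here is a toy sketch; the hand-written pattern below just stands in for whatever a learned grammar would actually produce, and the relation names are made up for illustration.

    # Toy sketch: map sentences to (subject, relation, object) triples and
    # answer questions against them. The hand-written "parse" stands in for a
    # learned parser; the point is the structured representation, not the parsing.
    facts = []

    def parse(sentence):
        # Naive grammar: subject relation object, e.g. "Paris is-capital-of France".
        subj, rel, obj = sentence.split()
        return (subj, rel, obj)

    for s in ["Paris is-capital-of France",
              "Berlin is-capital-of Germany",
              "France borders Germany"]:
        facts.append(parse(s))

    def answer(question_rel, question_obj):
        # Answer "what <rel> <obj>?" by matching relations, not word frequencies.
        return [subj for (subj, rel, obj) in facts
                if rel == question_rel and obj == question_obj]

    print(answer("is-capital-of", "France"))   # ['Paris']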


Please share more. You can't tease us like that and leave us hanging.


I posted some stuff above, but I am not sure how accessible it is for someone without a Machine Learning/NLP background. Maybe the Hinton Google talk is not too bad.

Here is another Google Talk (not really a recent development, though):

http://video.google.com/videoplay?docid=-7704388615049492068...

He is claiming that what is missing for computers to understand humans is common sense in the form of a big ontology (a database of relations between entities). I don't really agree that this is the right approach, but it might be interesting for you nonetheless.
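Still, for a rough feel of what an ontology of relations buys you, here is a toy sketch (my own illustration, not how his system actually works): a handful of isa and property assertions plus simple inheritance lets a program answer something it was never explicitly told.

    # Toy common-sense ontology: entities, "isa" links, inherited properties.
    # Illustrative only; real ontologies have millions of curated assertions
    # and a much richer logic.
    isa = {
        "canary": "bird",
        "bird": "animal",
        "penguin": "bird",
    }
    properties = {
        "bird": {"has_wings"},
        "animal": {"breathes"},
        "canary": {"can_sing"},
    }

    def has_property(entity, prop):
        # Walk up the isa hierarchy, inheriting properties from ancestors.
        while entity is not None:
            if prop in properties.get(entity, set()):
                return True
            entity = isa.get(entity)
        return False

    print(has_property("canary", "breathes"))   # True: canary -> bird -> animal
    print(has_property("canary", "can_swim"))   # False: never asserted or inherited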


> He is claiming that what is missing for computers to understand humans is common sense in the form of a big ontology (a database of relations between entities).

Doug Lenat claimed that in the 1980s and spent quite a while trying to build it (Cyc IIRC). What is different this time? ("We can do it now" is a perfectly reasonable answer.)


Yes, it's called Cyc, and he's been busy building it since then. They have literally been inputting knowledge facts into a computer since the 80s; it is now partially automated by parsing text from the internet. Back in the 80s they set a goal for the number of rules at which they expected intelligent behavior, and he showed that they are getting close now.

The talk is from 2005 and it is also two years since I watched it, so I am not confident summarizing it. I was quite impressed when I watched it for the first time, though. The reason I brought it up is more "see what you could do with an ontology" than "this is what it should look like" or "an ontology is all you need".


I'm somewhat skeptical that AI can be built by extrapolating the Cyc project. At best we'll get a sophisticated QA bot. I feel that vision is central to human cognition, and the Cyc project seems to be all about relationships of words to other words, without any connection to vision, images, and video.


No one is probably reading this, but ...

Like I said, I mentioned Cyc more because it is interesting than anything else. However, I do believe words and local image parts are just cognitive concepts, and they will eventually be handled using the same algorithms; see e.g. the Socher et al. paper I referenced above.
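As a rough sketch of what "the same algorithms" can mean in the recursive-network spirit of that paper (the weights here are random placeholders; in the actual models they are learned): two child vectors, whether they stand for words or for image segments, are merged into a parent vector by one shared composition function.

    # Minimal recursive composition: one merge function for words and image parts.
    # Weights are random placeholders; in the trained models they are learned.
    import numpy as np

    rng = np.random.default_rng(1)
    DIM = 8
    W = rng.normal(0, 0.1, (DIM, 2 * DIM))   # shared composition weights
    b = np.zeros(DIM)

    def compose(left, right):
        # Merge two child vectors (words or image segments) into a parent vector.
        return np.tanh(W @ np.concatenate([left, right]) + b)

    # "Word" vectors and "image segment" vectors live in the same space here.
    the, cat = rng.normal(size=DIM), rng.normal(size=DIM)
    sky_patch, grass_patch = rng.normal(size=DIM), rng.normal(size=DIM)

    phrase = compose(the, cat)                 # a phrase node
    region = compose(sky_patch, grass_patch)   # a merged image region
    print(phrase.shape, region.shape)          # both are DIM-dimensional descriptions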

However, I am not so sure how this fits together with planning and acting autonomously (which would fall under reinforcement learning). But I wasn't really talking about building strong AI, just building an AI which is strong enough to convince people it is human during a 30 minute conversation.


I read it. Thanks for taking the time to reply.


How do you explain the cognition of blind people?




