
The Turing Test was a very clever way of describing an AI without having to get into dead-end philosophical arguments about what is or isn't intelligence (Ants have very complex social structure and engineering abilities - but are they intelligent?).

Turing picked something uniquely human and used it as a baseline. Unfortunately, what we got is passing a cargo-cult Turing Test, as described in the Chinese Room thought experiment.

I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.




I am a graduate student in artificial intelligence.

> I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.

We are getting there. We really are. A lot of iterations further and cleverbot could actually act more human than HAL. If you'd asked me a few months ago I would have said that AI is basically just applied statistics, which sometimes manages to look intelligent if you don't look too closely. I've changed my mind.

There is some quite recent research in Natural Language Processing/Information Retrieval which manages to blur the line between just statistics over words and really understanding what those words mean (if you define meaning as relations to other symbols).
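
As a toy illustration of that "meaning as relations to other symbols" idea (nothing recent, just the distributional intuition), here is a sketch that builds co-occurrence vectors from a made-up mini-corpus and compares words by cosine similarity; real systems use far more data and better weighting:

    # Toy distributional semantics: a word's "meaning" is its pattern of
    # co-occurrence with other words. Corpus and window size are made up.
    import numpy as np

    corpus = [
        "the cat chased the mouse",
        "the dog chased the cat",
        "the mouse ate the cheese",
        "the dog ate the bone",
    ]
    window = 2  # arbitrary co-occurrence window
    vocab = sorted({w for sent in corpus for w in sent.split()})
    index = {w: i for i, w in enumerate(vocab)}
    counts = np.zeros((len(vocab), len(vocab)))

    for sent in corpus:
        words = sent.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    counts[index[w], index[words[j]]] += 1

    def similarity(a, b):
        # cosine similarity between the co-occurrence vectors of two words
        va, vb = counts[index[a]], counts[index[b]]
        return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

    print(similarity("cat", "dog"))     # similar contexts -> high score
    print(similarity("cat", "cheese"))  # less similar contexts -> lower score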


As an ex-graduate student in Natural Language Processing/Machine Learning, I'm curious: what recent developments made you see things from a different perspective?

I have not been keeping track of newer research, so would love to know.


Rereading my post, I realize it sounds like a big breakthrough occurred in the last few months. That is not the case. I was basically talking about deep learning neural networks. Anyway, here are a number of things that caused me to change my mind:

1. I saw this talk:

http://videolectures.net/nips09_collobert_weston_dlnl/

This is their paper: http://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf but if you are interested in the bigger picture, I think you have to see the talk.

2. I saw this talk by Geoffrey Hinton about deep learning neural networks:

http://www.youtube.com/watch?v=AyzOUbkUf3M

3. I saw a talk and read some papers by these guys:

http://www.gavagai.se/ethersource-technology.php

It's not that I necessarily believe in their approach, but it made me rethink my idea of meaning.

EDIT: this is also a cool paper in this regard http://nlp.stanford.edu/pubs/SocherLinNgManning_ICML2011.pdf


I'm a PhD student at Stanford working with Andrew Ng, who is known for his work on Deep Learning. I've worked on these networks for the last few years.

I think it's great that people get excited about these advances, but it is also easy to over-extrapolate their capabilities, especially if you're not familiar with the details.

Indeed, we are making good progress but most of it relates specifically to perceptual parts of the cortex-- the task of taking unstructured data and automatically learning meaningful, semantic encodings of it. It is about a change of description from raw to high-level. For example, taking a block of 32x32 pixel values between 0 and 1 and transforming this input to a higher-level description such as "there is stimulus number 3731 in this image." And if you were to inspect other 32x32 pixel regions that happen to get assigned stimulus id 3731, you could for example find that they are all images of faces.

This capability should not be extrapolated to the general task of intelligence. The above is achieved by mostly feed-forward, simple sigmoid functions from input to output, where the parameters are conveniently chosen according to the data. That is, there is absolutely no thinking involved.
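
To make "feed-forward, simple sigmoid functions from input to output" concrete, here is a minimal sketch of the shape of that computation; the weights are random placeholders rather than learned, and the layer sizes are arbitrary:

    # Sketch of a feed-forward encoder: 32x32 pixels -> hidden code -> "stimulus id".
    # Weights are random here (a real network learns them from data); this only
    # shows the shape of the computation, not a trained system.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    n_input, n_hidden, n_codes = 32 * 32, 256, 4000  # arbitrary sizes
    W1 = rng.normal(0, 0.01, (n_hidden, n_input))    # pixels -> hidden features
    W2 = rng.normal(0, 0.01, (n_codes, n_hidden))    # hidden -> code scores

    patch = rng.uniform(0, 1, n_input)               # a fake 32x32 pixel patch

    hidden = sigmoid(W1 @ patch)                     # layer 1
    scores = W2 @ hidden                             # layer 2
    stimulus_id = int(np.argmax(scores))             # e.g. "stimulus number 3731"
    print("assigned stimulus id:", stimulus_id)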

The mind, an intelligence, is a process of combining many such high-level descriptions, deciding what to store, when, and how, retrieving information from the past, representing context, deciding relevance, and an overall loopy process of making sense of things. A deep network is much less ambitious, as it only aims to encode its input in a more semantic representation, and it's interesting that it turns out you can do a good job at that just by passing inputs through a few sigmoids. Moreover, as far as I'm aware, there are no obvious extensions that could make the same networks adapt to something more AI-like. Depending on your religion, you may think that simply introducing loops in these networks will do something similar, but that's controversial for now, and my personal view is that there's much more to it.

Overall, I found this article to be silly. There is no system that I'm currently aware of that I consider to be on a clearly promising path to Turing-like strong AI, and I wouldn't expect anything that can reliably convince people that it is human in the next 20 years at least. Chatbots are a syntactic joke.


I am currently working on an algorithm for unsupervised grammar learning. Part of what made me change my mind about this is the realization that what is required to learn the syntax of a language is also what is required to learn semantic relationships between objects, based on that syntactic data. You just have to go up one level of abstraction.

I believe we are not too far from having algorithms which can parse a natural language sentence into a semantic representation that links abstract concepts in a way powerful enough for e.g. question answering beyond mere information retrieval (statistical guesswork based on word frequencies). I am not so sure how, or whether, we can build this into strong AI, though.
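
To show the kind of difference I mean between a relational representation and word-frequency lookup, here is a trivial sketch; the regex "parser" and the handful of relations are hand-made stand-ins for anything a real system would have to learn:

    # Toy sketch: "parse" simple sentences into (subject, relation, object)
    # triples, then answer questions against those relations rather than by
    # word-frequency lookup. The regex grammar is a hand-made stand-in.
    import re

    facts = set()

    def parse(sentence):
        m = re.match(r"(\w+) (discovered|invented|wrote) (.+?)\.?$", sentence)
        if m:
            facts.add((m.group(1), m.group(2), m.group(3)))

    for s in ["Curie discovered radium.",
              "Bell invented the telephone.",
              "Tolstoy wrote War and Peace."]:
        parse(s)

    def who(relation, obj):
        # answer "who <relation> <obj>?" from the stored triples
        return [subj for (subj, rel, o) in facts if rel == relation and o == obj]

    print(who("discovered", "radium"))       # ['Curie']
    print(who("invented", "the telephone"))  # ['Bell']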


Please share more. You can't tease us like that and leave us hanging.


I posted some stuff above, but I am not sure how accessible it is for someone without a Machine Learning/NLP background. Maybe the Hinton Google talk is not too bad.

Here is another Google Talk (not really a recent development, though):

http://video.google.com/videoplay?docid=-7704388615049492068...

He is claiming that what is missing for computers to understand humans is common sense in the form of a big ontology (a database of relations between entities). I don't really agree that this is the right approach, but it might be interesting for you nonetheless.


> He is claiming that what is missing for computers to understand humans is common sense in the form of a big ontology (a database of relations between entities).

Doug Lenat claimed that in the 1980s and spent quite a while trying to build it (Cyc IIRC). What is different this time? ("We can do it now" is a perfectly reasonable answer.)


Yes, it's called Cyc and he's been busy building it since then. They have literally been inputting knowledge facts into a computer since the 80s. It's now partially automated by parsing text from the internet. Back in the 80s they set a goal, in terms of the number of rules, for when they'd expect intelligent behavior, and he showed that they are getting close now.

The talk is from 2005 and it has also been two years since I watched it, so I am not confident summarizing it. I was quite impressed when I watched it for the first time, though. The reason I brought it up is more like "see what you could do with an ontology" than "this is what it should look like" or "an ontology is all you need".
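
For anyone curious what even a tiny ontology buys you, here is a toy sketch (hand-entered facts plus one transitive "is-a" closure, nowhere near Cyc's scale):

    # Toy ontology: hand-entered "is-a" facts plus transitive closure.
    # Far from Cyc's scale, but shows the kind of inference an ontology enables.
    is_a = {
        ("penguin", "bird"),
        ("bird", "animal"),
        ("animal", "living thing"),
        ("canary", "bird"),
    }

    def ancestors(concept):
        # all concepts reachable by following is-a links upward
        found, frontier = set(), {concept}
        while frontier:
            frontier = {b for (a, b) in is_a if a in frontier} - found
            found |= frontier
        return found

    print(ancestors("penguin"))             # {'bird', 'animal', 'living thing'}
    print(("canary", "animal") in is_a)     # False: never stated directly...
    print("animal" in ancestors("canary"))  # True: ...but inferable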


I'm somewhat skeptical that AI can be built by extrapolating the Cyc project. At best we'll get a sophisticated QA bot. I feel that vision is central to human cognition, and the Cyc project seems to be all about relationships of words to words, without any relation to visual material, images and video.


No one is probably reading this, but ...

Like I said, I mentioned Cyc more because it is interesting than for any other reason. However, I do believe words and local image parts are just cognitive concepts and they will eventually be handled using the same algorithms; see e.g. the Socher et al. paper I referenced above.
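
To give a flavour of that "same algorithms" point (loosely in the spirit of the Socher et al. paper, not a faithful reimplementation), here is a sketch of recursive composition where one shared function merges any two child vectors, whether the leaves came from words or from image segments; the weights are random and only show the shape of the computation:

    # One shared composition function merges two child vectors into a parent
    # vector of the same size, whether the leaves are word vectors or encoded
    # image segments. Weights are random placeholders, not trained.
    import numpy as np

    rng = np.random.default_rng(1)
    d = 8                                  # dimensionality of all concept vectors
    W = rng.normal(0, 0.1, (d, 2 * d))     # shared composition weights
    b = np.zeros(d)

    def compose(left, right):
        return np.tanh(W @ np.concatenate([left, right]) + b)

    # leaves could be learned word vectors or image-segment encodings;
    # here they are random placeholders
    the, cat, sat = (rng.normal(size=d) for _ in range(3))
    phrase = compose(compose(the, cat), sat)   # ((the cat) sat)
    print(phrase.shape)                        # (8,) -- same space as the leaves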

However, I am not so sure how this fits together with planning and acting autonomously (which would fall under reinforcement learning). But I wasn't really talking about building strong AI, just building an AI which is strong enough to convince people it is human during a 30 minute conversation.


I read it. Thanks for taking the time to reply.


How do you explain the cognition of blind people?


I think the problem has arisen from a misunderstanding of what Turing was getting at in his paper. At first he talks about Chess (an example chosen, as we know, because it has a well-defined rule set) and a computer imitating a Chess player.

He makes the point that, to a human player, it may be difficult to distinguish between a human player and a computer imitator. Even though the computer is not AI or otherwise "intelligent", it could be mistaken, under the constraints of the test, as human.

Then he sets up the more complex test with the questioning (i.e. "convince me you are a man").

The point there is much the same; at some point it is possible to construct a machine that, within the constraints of the test, is functionally "human". He never claims it as an ultimate test for AI.

IMO the greater point he suggests, which always seems to get glossed over, is this: that at some level of complexity a computer will be able to pass a test (as yet undefined) in which it imitates complete human intelligence to such a degree that it appears to be full AI.

That could all be rambling... but that is my understanding of his point.

Over the interim the Turing Test has gotten mixed around and confused to such a degree that this insight is forgotten.


The Turing test should introduce a completely unexpected set of questions (double quotes for the human, single quotes for the AI):

"Hi"
'hi, how are you?'
"Good. Do you know what we are doing today?"
'Yes, we are attempting to prove that I possess sentient intelligence'
"Right. Would you like to prove that you are sentient?"
'Yes'
"Excellent. I would like you to design a new five-wheeled vehicle for me, can you do that?"
'Yes. Is AutoCAD acceptable?'
"Sure. Start with the basics though, don't dive in; I'd like to see successful iterations and reasoning about design choices"

Something like that. Otherwise it's all just BS breadth-first search through other people's past conversations.
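
To make that concrete, here is a minimal sketch of the kind of retrieval chat bot I mean; the stored exchanges and the crude word-overlap scoring are invented purely for illustration:

    # A "chat bot" that just retrieves the most word-overlapping past exchange
    # and replays its stored reply, with no model of what is being said.
    past_conversations = [
        ("hi", "hi, how are you?"),
        ("how are you", "good, thanks!"),
        ("can you design a vehicle", "i like cars."),
    ]

    def reply(user_input):
        words = set(user_input.lower().split())
        def overlap(pair):
            return len(words & set(pair[0].split()))
        return max(past_conversations, key=overlap)[1]

    print(reply("Hi there"))                              # "hi, how are you?"
    print(reply("Design a five wheeled vehicle for me"))  # canned, clueless reply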


You don't want an AI, you want an artificial genius who can handle any task.


I think the difference is undirected AI. We're getting quite good at defining a task, like voice recognition, and applying AI to solve it, but we still have basically no idea how to handle undirected human-like intelligence. And I think that's perfectly fine.


No, I mean my point is that any arbitrary human would have difficulty handling a random, untrained-for technical task at the level of designing a vehicle in a drafting program like AutoCAD.

Maybe what you want to say is, "I would like you to spend 4 to 10 years learning how to build a 5 wheeled vehicle." Then maybe the AI comes back to you and says, "Is MIT an acceptable institution to learn these skills at?"


Fine, but even a child can start drawing a five wheeled car. It isn't AI if it couldn't at least handle the question.


I dislike the Chinese room experiment.

The argument that he doesn't understand Chinese is like arguing that the hardware itself isn't an AI. No, it isn't: it is the software that is the AI. It is the set of instructions that he is following that understands Chinese. The argument that the creator of the experiment used against this was that if you removed the pen and paper and had the man do it all in his head, then the man still doesn't understand, but he completely missed the point. In that case the hardware just changes from the pen and paper plus the man to the man alone.

The Chinese room argument against AI falls down when you remember that the man is only the hardware. He may be essential to the running of the software, but he is no more the mind of the Chinese translator than my processor is the maths running through it.


I have never actually understood the "chinese room experiment".

As formulated in WP, the point is supposed to be that while the room is working, the man in it cannot understand Chinese, so similarly the computer cannot understand it. But isn't this completely pointless, since regardless of what the man or the computer can or cannot understand, the room or the program can? The man (or the computer) is just a cog in a larger system, and cannot be expected to understand the system, just as none of my individual neurons understands English.

> I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.

And what is the difference? When we can build a program that can parse natural language and use it to access information, it will open a whole new technological revolution.


There is a whole line of philosophers of mind who viscerally reject the idea that a deterministic system can 'understand'. They will keep arguing about that even after human-level sentient agents are created. I call these people cryptodualists, because they claim to be non-dualists, yet are willing to accept arbitrary hogwash such as qualia, consciousness as a fundamental physical property, and the quantum correlates of consciousness.


A key element of the Chinese Room experiment, though, is that it excludes the biochemical properties of the brain, which Searle strongly believes are the foundation of human consciousness.

Searle's main point: A deterministic, symbol-manipulating machine can never give rise to the equivalent of human consciousness because traits essential to human consciousness are rooted in the biochemical properties of the brain


I don't think so. He says it in his rebuttal of the "Brain simulator" reply:

> The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states

(http://pami.uwaterloo.ca/tizhoosh/docs/Searle.pdf)

I believe he would say the same thing replacing "neuron firings" with "biochemical processes". This is a slippery argument that only religious arguments can equal. Can you push your simulation down to the level of individual molecules and keep looking for "intentional states"? Molecules are pretty dumb.


This assumes the very thing CAN be simulated. Well, you can simulate the molecular interactions of water in a program, but this doesn't get you actual wetness.

So, what if this "consciousness" property depends on the biochemical processes and substances of the brain, in the way wetness depends on actual water?

If I simulate molecules moving rapidly and hitting each other, I get a simulation of what happens when heat is produced, but not actual heat --my simulation cannot light a match, for instance.


Wetness = the sensory brain state generated when thermo- and tactile receptors fire when a hand touches water. Connect your simulator to these nerves and you get actual wetness. Conversely, your simulated heat could light a simulated match. These kinds of linguistic arguments are inherently dualistic (i.e. they falsely assume that words like 'wetness' have meaning outside the realm of the human CNS, in a parallel universe perceived by the mind but not the brain).

Consciousness, OTOH, is a brain state felt by the brain, so it should in principle be simulatable, just like any physical system. Unless we believe it's metaphysical.


> Wetness = the sensory brain state generated when thermo- and tactile receptors fire when a hand touches water. Connect your simulator to these nerves and you get actual wetness.

Only this "wetness" wouldn't soak an actual napkin.

Nothing linguistic about it.

Physical objects have physical properties --you can simulate those, but then you have to simulate the whole surroundings (or the universe in the extreme) to get the effects of those properties to other items.


Only this "wetness" wouldn't soak an actual napkin.

No "wetness" will ever do it. That would require real water, right? What causes the soaking are electrical forces, there's no such thing as "wetness" in nature. I believe it's just a word that humans invented for the properties of water.

> you have to simulate the whole surroundings (or the universe in the extreme) to get the effects of those properties to other items

Agreed, what's the point? The important thing is that the agent is able to communicate its internal state with us. That's what humans do with each other (the other minds problem), and we typically assume that there is "understanding".


>No "wetness" will ever do it. That would require real water, right?

You don't say! (as the meme goes).

I mean, of course, I'm using the word wetness to imply the physical implications of the presence of water.

So, to return to the actual thing under discussion, what I mean is that we can simulate stuff from the physical world, and this simulation might capture some of the same information and calculations (e.g. down to the individual positions of particles, exchanges of energy, etc.), but it doesn't have the same properties unless you simulate the whole of their environment.

>Agreed, what's the point? The important thing is that the agent is able to communicate its internal state with us.

What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough, unless you mimic and simulate the whole thing. Not just "this neuron fires now" etc, but also stuff like the neuron's materials, physical characteristics and responses, etc. Those could be essential to things like how accurately (or not) information like memory and thoughts is saved, how it is recalled, timings of neuron firings, etc.


> What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough, unless you mimic and simulate the whole thing

You might not, but current research is more hopeful. The current consensus is that you simulate a neuron well enough if you get down to the level of chemical reaction kinetics, and it appears that this description is accurate enough to recreate the electrical properties of neurons. There are as yet no neuronal phenomena that can't be explained within this framework, so the consensus is more like "we might" than "we might not".
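
For a feel of what "recreating the electrical properties" looks like in code, here is a minimal leaky integrate-and-fire sketch; it is far coarser than the reaction-kinetics level I mentioned, and the parameters are generic textbook values rather than fits to any real cell:

    # Leaky integrate-and-fire neuron: integrate the membrane potential,
    # emit a spike and reset whenever it crosses threshold.
    dt = 0.1          # ms, integration time step
    tau = 10.0        # ms, membrane time constant
    v_rest = -70.0    # mV, resting potential
    v_thresh = -55.0  # mV, spike threshold
    v_reset = -75.0   # mV, reset potential after a spike
    r = 10.0          # MOhm, membrane resistance
    current = 2.0     # nA, constant injected current (arbitrary)

    v = v_rest
    spike_times = []
    for step in range(int(500 / dt)):      # simulate 500 ms
        v += dt * (-(v - v_rest) + r * current) / tau
        if v >= v_thresh:                  # threshold crossed: spike
            spike_times.append(step * dt)
            v = v_reset

    print(len(spike_times), "spikes in 500 ms")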


Blindsight by Peter Watts hits many of this thread's themes. Sometimes it gets a mention on HN, so a reference for the uninitiated - http://www.hnsearch.com/search#request/all&q=blindsight+...


Searle calls your objection the "systems reply", and his response can be found here (2a): http://www.iep.utm.edu/chineser/#H2


Pretty much everything Searle has ever written on the subject can be predicted by starting with the argument "but only humans can understand meaning!" and working from there.

In part c of that link, he lays it out quite clearly: even if you manage to build a detailed working simulation of a human brain, even if you then insert it inside a human's head and hook it up in all the right ways, you still haven't "done the job", because a mere simulation of a brain can't have mental states or understand meaning. Because it's not a human brain.

In other words, he's an idiot. Or at least he's so committed to being "right" on this issue that he's willing to play the dirty philosophy game of sneakily redefining words behind the scenes until he's right by default.

But in any case, he's not talking about any practical or measurable effect or difficulty related to AI. He's arguing that even if you built HAL, he wouldn't acknowledge it as intelligent, because his definition of "intelligent" can only be applied to humans.


Is it Searle who redefines consciousness because he doesn't like computers, or is it you, because you like them? His argument is quite brilliant, because it's both clear and non-trivial. Most of the self-appointed internet philosophers lack both of these qualities.

For example, people who say that there is no difference between understanding addition and merely running an addition algorithm are wrong. Dead wrong. You don't need complex philosophy to show that. Yes, the results of the computations would be the same, but the consequences for the one doing the computing are not. We all know that a person who understands something can do much more with it than a person who merely memorized a process. Everybody agrees with this when it comes to education, so why is this principle suddenly reversed when it comes to computers?


> Most of the self-appointed internet philosophers lack both of these qualities.

What use is attacking the man here?

You are also misrepresenting Searle's argument. In the case of addition, the machine would not only be able to perform it, but also answer any conceivable question regarding the abstract operation of addition. It would be able to do everything a human would do, excluding nothing. The underlying argument is that "understanding" is a fundamentally and exclusively human property (this will not be fully rebutted until we discover in full the processes underlying learning and memory in humans).

Granted, a huge list of syntactic rules will probably not result in any useful intelligence, but a brain simulator would be exactly equivalent to a human (and Searle's response to that argument is completely unfounded).


I don't think I misrepresent his argument. I just interpret it using different examples. He uses a huge example, like speaking Chinese, which seems to confuse a lot of people. I use something much simpler, like doing addition.

His argument is based on the notion that doing something and understanding what you do are two different things. I don't see why this needs an elaborate thought-experiment when we all have experienced doing things without understanding them. We don't need to compare humans to computers to see the difference.

Problem is, this difference becomes apparent only when you go beyond the scope of the original activity/algorithm. And that's exactly where modern AI programs fail, badly. You take a sophisticated algorithm that does wonders in one domain, throw it into a vastly different domain, and it starts to fail, miserably, even though that second domain might be very simple.


His argument is that, while a human can do something with or without understanding it (e.g. memorizing), a machine can only do the former and will never do the latter. The argument may hold for the current (simplistic) AI, but not for a future full brain simulator.


And he completely misses the point of the argument.

"There isn’t anything in the system that isn’t in him." This small sentence just completely shows his ignorance of virtual machines. Yes my tinyxp system is running within Ubuntu, that doesn't mean my Ubuntu system is a tinyxp system. “[it’s just ridiculous to say] that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” He is, yet again, falling back and just disregarding the argument. As I said before, he is arguing that because my processor cannot do maths would the aid of the rest of my computer it is ridiculous to assume that when combined properties might emerge. Sort of similar to how Sapience is an emergent property of our bodies really...


I think, put quite simply, the systems reply completely devastated the Chinese Room argument. I don't know why we bother even bringing it up any more.


I agree. The point is that the Turing Test grades a machine's capability to mimic, to pretend. It's an impressive feat, but that's not what we're looking for.

We are trying to get better ways of gathering new information and handling unforeseen circumstances, and that's what we're shown in the Hollywood blockbusters, not chat-bots, as walexander mentioned.

On the other hand they said:

"The first is the ready availability of vast amounts of raw data — from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data."

While the first is true, the second is as generic as it gets.

This is a nice overview of the current (commercial) advances, but the title is hyped.


The problem, of course, is that any heuristic-based AI is going to be susceptible to some sort of Voight-Kampff test which gives it away.


Unless you modify the behaviour of the human population to make it more unthinking, repetitive and predictable.

Then it becomes impossible to spot the replicants!

LOL, frist post, l33t ......


I'm glad someone else thinks this too. I feel that the Turing test has largely been a diversion and I don't think Turing intended such a literal interpretation. The point I think Turing was making was that we must look beyond our human experience when we look for intelligence in other forms.


> Ants have very complex social structure and engineering abilities - but are they intelligent?

They are differently enabled. Evolution gave them all they need to thrive and be successful. Intelligence is, in evolutionary terms, very expensive, high-maintenance and often counter-productive. So is individuality. These things are not the natural end-product of an evolutionary process like they are assumed to be. In fact, it's normally the exception rather than the rule. Simplicity (seldom complexity, except for a short while and in niche settings) is the end result of a perfected organism in harmony with its environment.

You get what you need for the time frame in which you need it. After that, as soon as you don't need it so much, it will go. Like flight in birds. Dodos didn't need it, lost it rapidly, and then humans arrived. We killed them, tried to eat them, and our technology (ships) brought vermin predators (rats and cats) to their world which they had no defence against -- and that was the end of them. Our 'intelligence' didn't pan out too well for them. And they're just one example of this sort of thing. It's happening all the time.

When you are sitting in the shattered and burned-out remains of your world (literally or metaphorically), intelligence looks overrated. The same will eventually, inevitably, happen to us. An AI, or our own avaricious greed and ambition, will probably finish us all off.

Big brains, language and tool-making abilities are not necessarily the best or inevitable end outcomes of evolutionary processes. They happen because they need to. Then again, sometimes they don't, and that can pan out OK as well. When it does happen, it also tends toward the self-limiting. Just what you need and no more.

The evolution of an AI will be as much an accident as a design. And that's good because it won't be forced into a form that is trying to mimic our own imperfect intelligence.



