Hacker News
Artificial Intelligence Could Be on Brink of Passing Turing Test (wired.com)
71 points by leejw00t354 on April 13, 2012 | 79 comments



The Turing Test was a very clever way of describing an AI without having to get into dead-end philosophical arguments about what is or isn't intelligence (ants have very complex social structures and engineering abilities, but are they intelligent?).

Turing picked something uniquely human and used it as a baseline. Unfortunately, what we are getting is programs that pass a cargo-cult Turing test, as described in the Chinese Room thought experiment.

I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.


I am a graduate student in artificial intelligence.

> I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.

We are getting there. We really are. A lot of iterations further and cleverbot could actually act more human than HAL. If you'd asked me a few months ago, I would have said that AI is basically just applied statistics, which sometimes manages to look intelligent if you don't look closely enough. I changed my mind.

There is some quite recent research in Natural Language Processing/Information Retrieval which manages to blur the line between just statistics over words and really understanding what those words mean (if you define meaning as relations to other symbols).
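To make "meaning as relations to other symbols" a bit more concrete, here is a toy sketch of the distributional idea: words are represented by co-occurrence counts and compared with cosine similarity. The corpus and window size are invented for illustration; the research above uses far larger corpora and learned representations rather than raw counts.

    from collections import Counter, defaultdict
    from math import sqrt

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "the cat chased the dog",
    ]

    window = 2
    vectors = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vectors[w][words[j]] += 1    # count words seen near w

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in a)
        norm = lambda v: sqrt(sum(x * x for x in v.values()))
        return dot / (norm(a) * norm(b))

    # Words used in similar contexts end up with similar vectors.
    print(cosine(vectors["cat"], vectors["dog"]))   # relatively high
    print(cosine(vectors["cat"], vectors["on"]))    # lower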


As an ex-graduate student in Natural Language Processing/Machine Learning, I'm curious: what recent developments made you see things from a different perspective?

I have not been keeping track of newer research, so would love to know.


Rereading my post, I realize it sounds like a big breakthrough occurred in the last few months. That is not the case. I was basically talking about deep learning neural networks. Anyway, here are a number of things that caused me to change my mind:

1. I saw this talk:

http://videolectures.net/nips09_collobert_weston_dlnl/

This is their paper: http://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf but if you are interested in the bigger picture, I think you should watch the talk.

2. I saw this talk by Geoffrey Hinton about deep learning neural networks:

http://www.youtube.com/watch?v=AyzOUbkUf3M

3. I saw a talk and read some papers by these guys:

http://www.gavagai.se/ethersource-technology.php

It is not necessarily that I believe in their approach, but it made me rethink my idea of meaning.

EDIT: this is also a cool paper in this regard http://nlp.stanford.edu/pubs/SocherLinNgManning_ICML2011.pdf


I'm a PhD student at Stanford working with Andrew Ng, who is known for his work on Deep Learning. I've worked on these networks for the last few years.

I think it's great that people get excited about these advances, but it is also easy to over-extrapolate their capabilities, especially if you're not familiar with the details.

Indeed, we are making good progress, but most of it relates specifically to perceptual parts of the cortex -- the task of taking unstructured data and automatically learning meaningful, semantic encodings of it. It is about a change of description from raw to high-level. For example, taking a block of 32x32 pixel values between 0 and 1 and transforming this input to a higher-level description such as "there is stimulus number 3731 in this image." And if you were to inspect other 32x32 pixel regions that happen to get assigned stimulus id 3731, you could for example find that they are all images of faces.

This capability should not be extrapolated to the general task of intelligence. The above is achieved by mostly feed-forward, simple sigmoid functions from input to output, where the parameters are conveniently chosen according to the data. That is, there is absolutely no thinking involved.
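For concreteness, a minimal sketch of the kind of feed-forward encoder described above: a 32x32 patch goes through a few sigmoid layers and the most active output unit is reported as a "stimulus id". The layer sizes are arbitrary and the weights here are random placeholders rather than parameters fit to data, so the ids are meaningless; it only shows that the whole pipeline is a fixed function with no loops and no "thinking".

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    sizes = [32 * 32, 256, 64, 4000]            # input -> hidden -> hidden -> stimulus ids
    weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]

    def encode(patch):
        """Map a 32x32 patch of values in [0, 1] to a stimulus id."""
        h = patch.reshape(-1)
        for W, b in zip(weights, biases):
            h = sigmoid(h @ W + b)              # purely feed-forward
        return int(np.argmax(h))                # e.g. "there is stimulus number 3731 in this image"

    print(encode(rng.random((32, 32))))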

The mind, an intelligence, is a process of combining many such high-level descriptions, deciding what to store, when, how, retrieving information from the past, representing context, deciding relevance, and an overall loopy process of making sense of things. A deep network is much less ambitious, as it only aims to encode its input in a more semantic representation, and it's interesting that it turns out that you can do a good job at that just by passing inputs through a few sigmoids. Moreover, as far as I'm aware, there are no obvious extensions that could make the same networks adapt to something more AI-like. Depending on your religion, you may think that simply introducing loops in these networks will do something similar, but that's controversial for now, and my personal view is that there's much more to it.

Overall, I found this article to be silly. There is no system that I'm currently aware of that I consider to be on a clearly promising path to Turing-like strong AI, and I wouldn't expect anything that can reliably convince people it is human for at least the next 20 years. Chat bots are a syntactic joke.


I am currently working on an algorithm for unsupervised grammar learning. Part of what made me change my mind about this is that I realized that what is required to learn the syntax of a language is also what is required to learn semantic relationships between objects, based on that syntactic data. You just have to go up one level of abstraction.

I believe we are not too far from having algorithms that can parse a natural language sentence into a semantic representation linking abstract concepts in a way that is powerful enough for, e.g., question answering beyond just information retrieval (statistical guesswork based on word frequencies). I am not so sure how or whether we can build this into strong AI, though.
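To illustrate the kind of target representation I mean (the end product, not the learning itself), here is a toy sketch that maps sentences to (subject, relation, object) triples and answers questions against them rather than matching word frequencies. The regex patterns and the facts are made up for illustration; a real system would learn the grammar instead of hard-coding it.

    import re

    facts = set()

    def parse(sentence):
        # Hand-written pattern standing in for a learned grammar.
        m = re.match(r"(\w+) is the (\w+) of (\w+)", sentence)
        if m:
            subj, rel, obj = m.groups()
            facts.add((subj, rel, obj))          # e.g. ("Amsterdam", "capital", "Netherlands")

    def answer(question):
        m = re.match(r"What is the (\w+) of (\w+)\?", question)
        if m:
            rel, obj = m.groups()
            for s, r, o in facts:
                if r == rel and o == obj:
                    return s                     # answered from the relational structure
        return "I don't know."

    parse("Amsterdam is the capital of Netherlands")
    parse("Rembrandt is the painter of Nightwatch")
    print(answer("What is the capital of Netherlands?"))   # -> Amsterdam
    print(answer("What is the author of Nightwatch?"))     # -> I don't know.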


Please share more. You can't tease us like that and leave us hanging.


I posted some stuff above, but I am not sure how accessible it is for someone without a Machine Learning/NLP background. Maybe the Hinton Google talk is not too bad.

Here is another Google Talk (not really a recent development, though):

http://video.google.com/videoplay?docid=-7704388615049492068...

He claims that what is missing for computers to understand humans is common sense, in the form of a big ontology (a database of relations between entities). I don't really agree that this is the right approach, but it might be interesting for you nonetheless.


> He claims that what is missing for computers to understand humans is common sense, in the form of a big ontology (a database of relations between entities).

Doug Lenat claimed that in the 1980s and spent quite a while trying to build it (Cyc IIRC). What is different this time? ("We can do it now" is a perfectly reasonable answer.)


Yes, it's called Cyc, and he's been busy building it since then. They have literally been inputting knowledge facts into a computer since the 80s. It's now partially automated by parsing text from the internet. Back in the 80s they set a goal, in terms of the number of rules, at which they expected intelligent behavior to emerge, and he showed that they are getting close to it now.

The talk is from 2005, and it has also been two years since I watched it, so I am not confident summarizing it. I was quite impressed when I watched it for the first time, though. The reason I brought it up is more "see what you could do with an ontology" than "this is what it should look like" or "an ontology is all you need".


I'm somewhat skeptical that AI can be built by extrapolating the Cyc project. At best we'll get a sophisticated QA bot. I feel that vision is central to human cognition, and the Cyc project seems to be all about relationships of words to words, without any connection to visual material such as images and video.


No one is probably reading this, but ...

Like I said, I mentioned Cyc more because it is interesting than anything else. However, I do believe words and local image parts are just cognitive concepts and will eventually be handled using the same algorithms; see, e.g., the Socher et al. paper I referenced above.

However, I am not so sure how this fits together with planning and acting autonomously (which would fall under reinforcement learning). But I wasn't really talking about building strong AI, just building an AI which is strong enough to convince people it is human during a 30 minute conversation.


I read it. Thanks for taking the time to reply.


How do you explain the cognition of blind people?


I think the problem has arisen from a misunderstanding of what Turing was getting at in his paper. At first he talks about Chess (an example chosen, as we know, because it has a well defined rule set) and a computer imitating a Chess player.

He makes the point that it may be difficult for a human player to distinguish between a human opponent and a computer imitator. Even though the computer is not an AI or otherwise "intelligent", it could be mistaken, under the constraints of the test, for a human.

Then he sets up the more complex test with the questioning (i.e. "convince me you are a man").

The point there is much the same; at some point it is possible to construct a machine that, within the constraints of the test, is functionally "human". He never claims it as an ultimate test for AI.

IMO the greater point he suggests, which always seems to get glossed over, is this: that at some level of complexity a computer will be able to pass a test (as yet undefined) in which it imitates complete human intelligence to such a degree it appears to be full AI.

That could all be rambling... but that is my understanding of his point.

Over the interim the Turing Test has gotten mixed around and confused to such a degree that this insight is forgotten.


The Turing test should introduce a completely unexpected set of questions (double quotes for the human, single quotes for the AI):

"Hi"
'hi, how are you?'
"Good. Do you know what we are doing today?"
'Yes, we are attempting to prove that I possess sentient intelligence.'
"Right. Would you like to prove that you are sentient?"
'Yes.'
"Excellent. I would like you to design a new five-wheeled vehicle for me, can you do that?"
'Yes. Is AutoCAD acceptable?'
"Sure. Start with basics though, don't dive in, I'd like to see successful iterations and reasoning about design choices."

Something like that. Otherwise it's all just BS breadth-first search through other people's past conversations.


You don't want an AI, you want an artificial genius who can handle any task.


I think the difference is undirected AI. We're getting quite good at defining a task, like voice recognition, and applying AI to solve it, but we still have basically no idea how to handle undirected human-like intelligence. And I think that's perfectly fine.


No, I mean my point is that any arbitrary human would have difficulty handling a random, untrained-for technical task at the level of designing a vehicle in a drafting program like AutoCAD.

Maybe what you want to say is, "I would like you to spend 4 to 10 years learning how to build a 5 wheeled vehicle." Then maybe the AI comes back to you and says, "Is MIT an acceptable institution to learn these skills at?"


Fine, but even a child can start drawing a five-wheeled car. It isn't AI if it can't at least handle the question.


I dislike the Chinese room experiment.

The argument that he doesn't understand Chinese is like arguing that the hardware itself isn't an AI. No, it isn't; it is the software that is the AI. It is the set of instructions that he is following that understands Chinese. The argument that the creator of the experiment used against this was that if you removed the pen and paper and had the man do it all in his head, then the man still doesn't understand -- but he completely missed the point. In that case the hardware just changes from pen and paper plus the man to purely the man.

The Chinese room argument against AI falls down when you remember that the man is only the hardware. He may be essential to the running of the software, but he is just as much the mind of the Chinese translator as my processor is the maths running through it.


I have never actually understood the "Chinese Room experiment".

As formulated on Wikipedia, the point is supposed to be that while the room is working, the man in it cannot understand Chinese, so similarly the computer cannot understand it. But isn't this completely pointless, as despite what the man or the computer can or cannot understand, the room or the program can? The man (or the computer) is just a cog in a larger system, and cannot be expected to understand the system, just as none of my individual neurons can understand English.

> I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.

And what is the difference? When we can build a program that can parse natural language and use it to access information, it will open a whole new technological revolution.


There is a whole line of philosophers of mind who viscerally reject the idea that a deterministic system can 'understand'. They will keep arguing about that even after human-level sentient agents are created. I call these people cryptodualists, because they claim to be non-dualists, yet are willing to accept arbitrary hogwash such as qualia, consciousness as a fundamental physical property, and the quantum correlates of consciousness.


A key element of the Chinese Room experiment, though, is that it excludes the biochemical properties of the brain, which Searle strongly believes are the foundation of human consciousness.

Searle's main point: a deterministic, symbol-manipulating machine can never give rise to the equivalent of human consciousness, because traits essential to human consciousness are rooted in the biochemical properties of the brain.


I don't think so. He says it in his rebuttal of the "Brain simulator" reply:

The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states

(http://pami.uwaterloo.ca/tizhoosh/docs/Searle.pdf)

I believe he would say the same thing replacing "neuron firings" with "biochemical processes". This is a slippery argument that only religious arguments can equal. Can you reach your simulation down to the level of individual molecules and keep looking for "intentional states"? Molecules are pretty dumb.


This assumes the very thing CAN be simulated. Well, you can simulate molecule interactions for water molecules in a program, but this doesn't get you actual wetness.

So, what if this "consciousness" property depends on the biochemical processes and substances of the brain, in the way wetness depends on actual water?

If I simulate molecules moving rapidly and hitting each other, I get a simulation of what happens when heat is produced, but not actual heat --my simulation cannot light a match, for instance.


Wetness = the sensory brain state generated when thermo- and tactile receptors fire when a hand touches water. Connect your simulator to these nerves and you get actual wetness. Conversely, your simulated heat could light a simulated match. These kinds of linguistic arguments are inherently dualistic (i.e. they falsely assume that words like 'wetness' have meaning outside the realm of the human CNS, in a parallel universe perceived by the mind but not the brain).

Consciousness, OTOH, is a brain state felt by the brain so it should be in principle simulate-able, just like any physical system. Unless we believe it's metaphysical.


> Wetness = the sensory brain state generated when thermo- and tactile receptors fire when a hand touches water. Connect your simulator to these nerves and you get actual wetness.

Only this "wetness" wouldn't soak an actual napkin.

Nothing linguistic about it.

Physical objects have physical properties -- you can simulate those, but then you have to simulate the whole surroundings (or the universe, in the extreme) to get the effects of those properties on other items.


Only this "wetness" wouldn't soak an actual napkin.

No "wetness" will ever do it. That would require real water, right? What causes the soaking are electrical forces, there's no such thing as "wetness" in nature. I believe it's just a word that humans invented for the properties of water.

> you have to simulate the whole surroundings (or the universe, in the extreme) to get the effects of those properties on other items

Agreed, but what's the point? The important thing is that the agent is able to communicate its internal state to us. That's what humans do with each other (the other minds problem), and we typically assume that there is "understanding".


>No "wetness" will ever do it. That would require real water, right?

You don't say! (as the meme goes).

I mean, of course, I'm using the word wetness to refer to the physical implications of the presence of water.

So, to return to the actual thing under discussion, what I mean is that we can simulate stuff from the physical world, and this simulation might capture some of the same information and calculations (e.g. down to the individual positions of particles, exchanges of energy, etc.), but it doesn't have the same properties unless you simulate the whole of their environment.

> Agreed, but what's the point? The important thing is that the agent is able to communicate its internal state to us.

What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough, unless you mimic and simulate the whole thing. Not just "this neuron fires now" etc, but also stuff like the neuron's materials, physical characteristics and responses, etc. Those could be essential to things like how accurately (or not) information like memory and thoughts is saved, how it is recalled, timings of neuron firings, etc.


> What I'm implying is that you might not be able to get an intelligent agent to even have an "internal state" advanced enough, unless you mimic and simulate the whole thing

You might not, but current research is more hopeful. The current consensus is that you simulate a neuron well enough if you get down to the level of chemical reaction kinetics, and it appears that this description is accurate enough to recreate the electrical properties of neurons. There are as yet no neuronal phenomena that can't be explained within this framework, so the consensus is more like "we might" than "we might not".
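As a rough illustration of "rate equations in, electrical behaviour out", here is the classic Hodgkin-Huxley model with the standard squid-axon parameters. Its channel-gating kinetics are a crude stand-in for full chemical reaction kinetics, so treat this only as a sketch of the idea, not the level of simulation described above:

    import numpy as np

    C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3     # membrane capacitance and conductances
    E_Na, E_K, E_L = 50.0, -77.0, -54.4             # reversal potentials (mV)

    def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
    def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

    dt, T = 0.01, 50.0                               # time step and duration (ms)
    V, n, m, h = -65.0, 0.32, 0.05, 0.6              # resting state
    I_ext = 10.0                                     # constant injected current (uA/cm^2)
    spikes, above = 0, False
    for _ in range(int(round(T / dt))):
        # gating-variable kinetics (first-order rate equations)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        # membrane equation: ionic currents drive the voltage
        I_ion = g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K) + g_L * (V - E_L)
        V += dt * (I_ext - I_ion) / C_m
        if V > 0 and not above:
            spikes += 1                              # count upward crossings of 0 mV
        above = V > 0

    print(f"{spikes} spikes in {T} ms")              # regular firing emerges from the kinetics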


Blindsight by Peter Watts hits many of this thread's themes. Sometimes it gets a mention on HN, so a reference for the uninitiated - http://www.hnsearch.com/search#request/all&q=blindsight+...


Searle calls your objection the "systems reply", and his response can be found here (2a): http://www.iep.utm.edu/chineser/#H2


Pretty much everything Searle has ever written on the subject can be predicted by starting with the argument "but only humans can understand meaning!" and working from there.

In part c of that link, he lays it out quite clearly: even if you manage to build a detailed working simulation of a human brain, even if you then insert it inside a human's head and hook it up in all the right ways, you still haven't "done the job", because a mere simulation of a brain can't have mental states or understand meaning. Because it's not a human brain.

In other words, he's an idiot. Or at least he's so committed to being "right" on this issue that he's willing to play the dirty philosophy game of sneakily redefining words behind the scenes until he's right by default.

But in any case, he's not talking about any practical or measurable effect or difficulty related to AI. He's arguing that even if you built HAL, he wouldn't acknowledge it as intelligent, because his definition of "intelligent" can only be applied to humans.


Is it Searle who redefines consciousness because he doesn't like computers, or is it you, because you like them? His argument is quite brilliant, because it's both clear and non-trivial. Most of the self-appointed internet philosophers lack both of these qualities.

For example, people who say that there is no difference between understanding addition and merely running an addition algorithm are wrong. Dead wrong. You don't need complex philosophy to show that. Yes, the results of the computations would be the same, but the consequences for the one doing the computing are not. We all know that a person who understands something can do much more with it than a person who merely memorized a process. Everybody agrees with this when it comes to education, so why is this principle suddenly reversed when it comes to computers?


> Most of the self-appointed internet philosophers lack both of these qualities.

What use is attacking the man here?

You are also misrepresenting Searle's argument. In the case of addition, the machine would not only be able to perform it, but also to answer any conceivable question regarding the abstract operation of addition. It would be able to do everything a human would do, excluding nothing. The underlying argument is that "understanding" is a fundamentally and exclusively human property (this will not be fully rebutted until we discover in full the processes underlying learning and memory in humans).

Granted, a huge list of syntactic rules will probably not result in any useful intelligence, but a brain simulator would be exactly equivalent to a human (and Searle's response to that argument is completely unfounded).


I don't think I misrepresent his argument. I just interpret it using different examples. He uses a huge example, like speaking Chinese, which seems to confuse a lot of people. I use something much simpler, like doing addition.

His argument is based on the notion that doing something and understanding what you do are two different things. I don't see why this needs an elaborate thought-experiment when we all have experienced doing things without understanding them. We don't need to compare humans to computers to see the difference.

Problem is, this difference becomes apparent only when you go beyond the scope of the original activity/algorithm. And that's exactly where modern AI programs fail, badly. You take a sophisticated algorithm that does wonders in one domain, throw it into a vastly different domain, and it starts to fail, miserably, even though that second domain might be very simple.


His argument is that, while a human can do something with or without understanding it (e.g. memorizing), a machine can only do the former and will never do the latter. The argument may hold for the current (simplistic) AI, but not for a future full brain simulator.


And he completely misses the point of the argument.

"There isn’t anything in the system that isn’t in him." This small sentence just completely shows his ignorance of virtual machines. Yes my tinyxp system is running within Ubuntu, that doesn't mean my Ubuntu system is a tinyxp system. “[it’s just ridiculous to say] that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” He is, yet again, falling back and just disregarding the argument. As I said before, he is arguing that because my processor cannot do maths would the aid of the rest of my computer it is ridiculous to assume that when combined properties might emerge. Sort of similar to how Sapience is an emergent property of our bodies really...


I think, put quite simply, the systems reply completely devastated the Chinese Room argument. I don't know why we bother even bringing it up any more.


I agree. The point is that the Turing test grades the machine's capability to mimic, to pretend. It's an impressive feat, but that's not what we're looking for.

We are trying to get better ways of gathering new information and handling unforeseen circumstances, and that's what we're shown in the Hollywood blockbusters -- not chat bots, as walexander mentioned.

On the other hand they said:

"The first is the ready availability of vast amounts of raw data — from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data."

While the first is true, the second is as generic as it gets.

This is a nice overview of the current (commercial) advances, but the title is hyped.


The problem, of course, is that any heuristic-based AI is going to be susceptible to some sort of Voight-Kampff test which gives it away.


Unless you modify the behaviour of the human population to make it more unthinking, repetitive and predictable.

Then it becomes impossible to spot the replicants!

LOL, frist post, l33t ......


I'm glad someone else thinks this too. I feel that the Turing test has largely been a diversion and I don't think Turing intended such a literal interpretation. The point I think Turing was making was that we must look beyond our human experience when we look for intelligence in other forms.


> Ants have very complex social structure and engineering abilities - but are they intelligent?

They are differently enabled. Evolution gave them all they need to thrive and be successful. Intelligence is, in evolutionary terms, very expensive, high-maintenance and often counter-productive. So is individuality. These things are not the natural end-product of an evolutionary process like they are assumed to be. In fact, they are normally the exception rather than the rule. Simplicity (seldom complexity, except for a short while and in niche settings) is the end result of a perfected organism in harmony with its environment.

You get what you need for the time frame in which you need it. After that, as soon as you don't need it so much, it will go. Like flight in birds. Dodos didn't need it, lost it rapidly, and then humans arrived. We killed them, tried to eat them, and our technology (ships) brought vermin predators (rats and cats) to their world which they had no defence against -- and that was the end of them. Our 'intelligence' didn't pan out too well for them. And they're just one example of this sort of thing. It's happening all the time.

When you are sitting in the shattered and burned-out remains of your world (literally or metaphorically), intelligence looks overrated. The same will eventually, inevitably, happen to us. An AI, or our own avaricious greed and ambition, will probably finish us all off.

Big brains, language and tool-making abilities are not necessarily the best or inevitable end outcomes of evolutionary processes. They happen because they need to. Then again, sometimes they don't, and that can pan out OK as well. When it does happen, it also tends toward the self-limiting. Just what you need and no more.

The evolution of an AI will be as much an accident as a design. And that's good because it won't be forced into a form that is trying to mimic our own imperfect intelligence.


One thing I've always wondered about Turing tests: wouldn't AIs need to lie a hell of a lot in order to pass?

For example, if I asked someone to tell me the capital city of every country in the world, I'd be very surprised if they could. However, a half-decent AI could do this easily. And if I pushed it further and started to ask really complex maths questions (something computers are much better at than humans), it would become clear very quickly that I'm talking to a machine.

Also, humans have holes in their knowledge. For example, given the question "Who is the prime minister of the Netherlands?", the answer for most people is going to be "I don't know". Or what about "Which team won the first-ever FA Cup?" Despite not knowing the answer (Wanderers, who beat the Royal Engineers in the 1872 final), most people would hazard a guess (Manchester United, Liverpool) and be wrong.

Programming an AI to play dumb would be relatively easy. But what use is an AI that lies? Passing the test may well be possible, but what use is an artificial intelligence that pretends to be as dumb as humans?
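A sketch of how cheaply "playing dumb" could be bolted on, assuming some hypothetical answer() backend; the delays and probabilities here are invented for illustration:

    import random
    import time

    def answer(question):
        # Placeholder for whatever question-answering backend the AI actually uses.
        return "Mark Rutte"

    def human_like_answer(question):
        time.sleep(random.uniform(0.5, 2.0))     # humans don't reply instantly
        roll = random.random()
        if roll < 0.4:
            return "No idea, sorry."             # feign a hole in knowledge
        if roll < 0.5:
            return "I'd have to look that up."   # deflect rather than recite an encyclopedia
        return answer(question)

    print(human_like_answer("Who is the prime minister of the Netherlands?"))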


You assume that any AI worth the label will already be as capable as current PCs.

But perhaps there is a tradeoff? Maybe becoming "intelligent" in the way we understand it is incompatible with the "dumb calculator/encyclopedia" capabilities of regular computers? Maybe true AI will necessarily lose the ability to look anything up instantly or calculate large columns of numbers?

I don't really believe that, but it is a possibility.


I've sort of had the same idea. I've wondered whether the power required for a robot to process everything it would need to in order to move around and interact with the world, responding to all the different stimuli (optical, audio, kinetic), wouldn't leave many CPU cycles left for doing the superhuman things we're used to computers doing.


I'd never thought of this - but it's interesting that one of the foundational things we'd be teaching this AI is to lie to/deceive humans. Seems like a bad starting place.

Of course, the Turing test isn't REALLY some kind of gateway through which a strong AI is likely to develop, but still.


The AI might not need to play dumb. There's no reason I can think of that an AI must be good at math or embed encyclopedias. Asimov had a short story, whose name I can't remember, about an AI that believed it was human.


I think it isn't so much that an AI has a good reason for that but rather that there is no good reason to have an AI incapable of it.

Of course, an alternate solution may be just around the corner -- imagine talking to a person who has some sort of direct interface to Wolfram|Alpha (maybe Google glasses or something like that) -- he would be able to answer those questions as easily as a computer, for the same reasons.


This is just a weakness of the original formulation of the Turing Test - for a machine to fool a human into believing that it is, in fact, a human. But we don't really need that, do we? What we need is for an AI to "fool" us into believing it is truly intelligent. For that, I don't care if it knows all the capitals or can do complex maths. In fact I expect it to be able to do that. I already know it's a machine, after all.

Unfortunately that is a far more subjective test. It's easy to devise an experiment based on the classic Turing Test: you just put some people in a room with some terminals, some of which are wired to computers and some of which are wired to humans, and have at it. But it doesn't tell you much, really. Whether a machine that can convince you it is intelligent is in fact intelligent is not really a scientific question, and as such doesn't really have a scientific (that is to say, satisfying) answer.

What I'm getting at with this is that the important question to ask about AI is the same question you can ask about other humans, and the answer for either is the same. The difference is that solipsism is a lot easier when you're talking about machines.


I feel that one day computers will start trying to convince us they're intelligent, and we won't be listening.


Not for lack of trying. We might not understand them, however.

My fear is one day we're going to realize some machines are intelligent, and then deny them their rights anyway.


I find it hilarious that half an hour after this is posted here, there are no comments. Because, you're right, O HN Reader: they're not even close. But you already knew that without reading TFA, didn't you? And just to satisfy your curiosity, they're talking about Watson-style machines, but then following this talk with quotes about "huge challenges" remaining, which are apparently too insignificant to mention in the title.


Despite its hyped-up title, this Wired news piece contains no real news of scientific advances. In fact, I can summarize it in one sentence: "with more and more data coming online, and with sophisticated techniques for collecting, organizing, and processing all this data, computers might soon be able to pass the Turing Test."


What's the difference between collecting, organizing, and processing data, and intelligence?


The ability to solve problems that fall outside the envelope of your harvested and processed data?


Do you think that you have the ability to solve problems that fall outside the envelope of your harvested and processed data? I certainly don't think I have that ability, and I'm human (I swear).


Like some others have said, the Turing Test is more a human mimicry test than a test of intelligence or consciousness. We humans love to anthropomorphize, so tricking us into believing a machine is human shouldn't be how we gauge the effectiveness of our AI.

I ran across these "Fundamental Principles of Cognition" that might do a better job:

Principle 1. Object Identification (Categorization)

Principle 2. Minimal Parsing ("Occam’s Razor")

Principle 3. Object Prediction (Pattern Completion)

Principle 4. Essence Distillation (Analogy Making)

Principle 5. Quantity Estimation and Comparison (Numerosity Perception)

Principle 6. Association-Building by Co-occurrence (Hebbian Learning)

Principle 6½. Temporal Fading of Rarity (Learning by Forgetting)

See: http://www.foundalis.com/res/poc/PrinciplesOfCognition.htm

Also, Hofstadter suggests some similar "essential abilities for intelligence".

1. To respond to situations very flexibly.

2. To make sense out of ambiguous or contradictory messages.

3. To recognize the relative importance of different elements of a situation.

4. To find similarities between situations despite differences which may separate them.

5. To draw distinctions between situations despite similarities which may link them.


TL;DR: Artificial Intelligence Probably Not on Brink of Passing Turing Test


But Wired says it could be on brink of passing Turing Test! ;)


Here is one of my favourite articles on the Turing Test.

The method described in this article appears similar in its approach: "Suppose, for a moment, that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible".

https://sites.google.com/site/asenselessconversation/


I think of the Turing test not as an actual experiment that can be performed, but more of a first crack at a working definition of intelligence.

Sort of like Shannon said "Let's leave 'meaning' to the psychologists, and define 'information' based on properties inherent in the message itself" and ended up revolutionizing information theory.

Turing is saying "Stop bickering over 'comprehension' and 'intent'. Can we just agree that if a machine can fool an intelligent human being into thinking it is also an intelligent human being - based only on its information output rather than its physical shape - that machine deserves the label 'intelligent'?"

And I agree. Philosophers can argue about the internal state of that mind all they want. But if I can converse and crack jokes with my new computer buddy, I have no qualms about calling him intelligent. At least until he blue screens and finally fails the test by spitting out a hex dump.


We already have a sort of AI that can be used for real world applications - it's called the Internet. Using Google, Facebook, Wikipedia and dozens of other sites, it's relatively easy to create a robot that can do quite a lot of things - the problem is that creating the actual physical body of the robot is expensive - humans are cheaper and still do everything better.

We don't even need AI, we need robots for specific tasks that would also be programmed to work around any potential issues (most of which can be identified and programmed if the field of application is narrow enough) - making them create workarounds/solutions for new problems would be awesome and all, but it's not necessary, IMO.


A few months ago there was a rumor about a Google X AI project that passes the Turing Test 93% of the time in an IM conversation:

http://www.webpronews.com/is-google-x-all-about-highly-intel...


Wouldn't passing the Turing Test 93% of the time mean that the machine does a better job of pretending to be human than an actual human? I'd expect 50% success to be the target.


It could just mean the result number is readjusted against a baseline figure -- e.g., reported relative to how often actual humans in the same setup are judged to be human.


The contemporary approach: no new theoretical breakthrough? Pick an old model and throw more data at it.


If Siri 5.0 or Cyc 1000000.0 passes the Turing test, it will be the same story all over again... the goalposts will be moved one more time. People will say that passing the Turing test is not really a mark of true intelligence, it's just an imitation!


Ah, another rehash of the "AI springs forth from complexity" argument. This is entertainingly laid out in "The Moon is a Harsh Mistress", "Colossus: The Forbin Project", "The Adolescence of P-1", "Man Plus", etc.


Such a load of speculation, psssht. Why is this on the front page?


As soon as Deep Blue was mentioned as AI, I closed that browser tab. Another "journalist" trying to justify a paycheck.


Well, I remember a post from a few months back saying artificial intelligence probably needs a reboot.


What exactly does it mean to reboot a field of study?

Take a step back and try to find new, possibly easier approaches?


It's generally instigated as an alternative scenario to merely buzzword compliant ongoing forward dynamic enhancement using synergies between interdisciplinary field approaches.

See - all you need is perl, rand and /usr/dict to create an AI that while it can't pass as human can at least obtain a research grant


We are moving into a post-literate world. "Artificial Intelligence" is a field of endeavor, not an entity that can try to pass a test. The headline is not as bad as movie stars and news anchors using objective pronouns as the subject of a sentence, but it still shows that the language is changing, and not for the better, IMHO.


This article is wildly speculative. :/


One of these posts was from an AI. Can you tell which one?



