
Why Watson and Siri Are Not Real AI - slacka
http://www.popularmechanics.com/technology/engineering/news/why-watson-and-siri-are-not-real-ai-16477207
======
richardwhiuk
AI is repeatedly redefined to exclude things that machines can now do that were
previously considered AI. Examples include image and character recognition,
speech recognition, and machine translation.

Each time, we say 'oh well, that's not actually the machine thinking, it's just
doing a search of a database / applying statistics / guessing', so it's not AI.
Except we don't really know what it would mean for a machine to think, or to
understand text.

~~~
Zenst
Me personally, I define AI as a system that can be fed non-biased input and
learn from that input to derive conclusions that we as humans can relate to.

So feeding a system lots of cat pictures (i.e. biased input) to teach it what a
cat is, for me, is not AI. But a system which you feed in lots of random
pictures and it learns by itself what a cat is, that would be AI, at least for
me.

What would be really interesting is a system which you could feed in the whole
children's section and see what comes out at the end; that would be most
insightful into how we teach children and what we teach them. So a completely
different area of AI use from that alone - learning how to learn better.

~~~
kamaal
>>But a system which you feed in lots of random pictures and it learns by
itself what a cat is, that would be AI, at least for me.

A very good definition. Also, adding to your point: let's say a machine is fed
a billion pictures. Can the machine automatically categorize them by reading
through them? It doesn't matter if it refers to a cat by some alphanumeric
name like 'ab12er' or to a dog as 'p09iuy'.

But it should be able to categorize them. Then it should be able to read an
encyclopedia or some other source of information and study the behaviors of
'ab12er' and 'p09iuy'. Or the opposite: see 'ab12er' and 'p09iuy', recognize
them, and describe what they are.
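The categorization step described above can be sketched as unsupervised clustering. Everything below is a toy illustration, not a real vision system: each "picture" is reduced to an invented 2-D feature vector, plain k-means groups the vectors with no labels at all, and each cluster gets an arbitrary machine-chosen name like 'ab12er'.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 2-D points; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            nearest = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                                + (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated blobs stand in for "cat-like" and "dog-like" pictures.
rng = random.Random(1)
pics = ([(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(50)]
        + [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(50)])

centers, clusters = kmeans(pics, k=2)
names = ['ab12er', 'p09iuy']  # arbitrary names: the machine never heard "cat"
for name, cluster in zip(names, clusters):
    print(name, len(cluster))
```

The point of the sketch is that the groups emerge from the data alone; the names 'ab12er' and 'p09iuy' carry no meaning until something else (an encyclopedia, a human) links them to behavior.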

~~~
polymatter
Seems to me you are defining AI as a generalized categorization algorithm.
This sounds very narrow as it doesn't take into account any actual intelligent
behavior. For example, I believe constructing tools in order to accomplish a
specific goal (e.g., constructing a net in order to catch fish) to be
intelligent behavior. The categorization AI you describe would be incapable of
this, though it does sound like it would be a necessary prerequisite.

------
mathattack
_I might say though, that 30 to 40 years ago, when the field was really young,
artificial intelligence wasn't about making money, and the people in the
field weren't driven by developing products. It was about understanding how
the mind works and trying to get computers to do things that the mind can do.
The mind is very fluid and flexible, so how do you get a rigid machine to do
very fluid things? That's a beautiful paradox and very exciting,
philosophically._

Unfortunately, all these well-intentioned AI professors built nothing and the
field devolved into LISP hacking. Now our choice is "accurate model" or
"useful expert system". At the time it bothered me that we couldn't do both.
Now I realize that it's OK for the model to be imperfect if the results are
useful.

The general thread also convinces me that Skynet type AI is still far off.

------
simonh
This is why I am so skeptical of the claims of people like Ray Kurzweil. I
honestly don't think we yet have even an outline of an idea of how a true
artificial mind might be architected, let alone implemented.

If we can't even design something, even in outline, how can we possibly
predict how long it might take to build it?

~~~
logicallee
Skeptical or not, we've already sequenced a complete blueprint for it that is
running on billions of copies of firmware in the wild. True, we can't even
emulate the firmware ourselves.

But we know it's possible and literally have a copy of the code that builds
it. We just have to figure out how to emulate it. That puts a hard upper limit
on how long all this can take. (It can't be "forever" since there is no
requirement for the emulation to happen in real-time.)

As far as processing speedups, meanwhile, we don't even have 3D chips yet,
just a single layer of silicon. Recent innovation:
[http://www.kurzweilai.net/first-true-3d-microchip-created-cambridge-scientists](http://www.kurzweilai.net/first-true-3d-microchip-created-cambridge-scientists)

Given that a human brain is like 3 pounds of goo, and we have the genetic code
that builds it, the rest is just biological reverse engineering + code
refactoring of a binary blob without comments.

I am not saying this process is easy, but the idea that we can't make some
solid predictions is fairly weak. We already do genetic engineering.

I wouldn't place much money on a true artificial mind still not existing in 40
years.

~~~
DanBC
> I wouldn't place much money on a true artificial mind still not existing in
> 40 years.

We said that in the '50s, and in every decade since. While we understand a lot
more about the brain, there are still gaps in our knowledge and in our ability
to scan a working brain.

It may turn out to be a system with sensitive dependence on initial
conditions.

There are about 85 billion neurons in a human brain. (In transistors, that's
only about 45 of Intel's 10-core Xeon Westmere-EX chips.) That number is a
relatively new refinement of the old 100bn figure. Not finding out how many
neurons we have until the 21st century makes me think that there is plenty of
work left.

We don't even know why Golgi Stains miss neurons.

~~~
logicallee
Right, my reference to the fact that we still use flat CPUs is in part
referring to the transistor count. Also, you are referring to a commercial
mass-produced CPU that debuted at $4200 nearly three years ago - and was
considered overly expensive at that price.[1]

And you're comparing this tiny single package to the human brain directly, not
even multiplying by a farm of them, which would be more than reasonable if we
knew what we were trying to emulate.

The issue, as you state, is that we don't really know what's going on. However
there is no reason to assume a fundamental barrier to continued innovation as
we learn more and more, and computers can do more and more.

[1]
[http://www.dailytech.com/Intel+Airs+10Core+Xeon+Server+Chips+at+Insanely+High+Prices/article21311.htm](http://www.dailytech.com/Intel+Airs+10Core+Xeon+Server+Chips+at+Insanely+High+Prices/article21311.htm)

------
tiatia
Two AI Pioneers. Two Bizarre Suicides. What Really Happened?
[http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery](http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery)

BTW, I wonder how much AI is based on "facts". If someone tells me he slept
badly last night, there are so many assumptions I make subconsciously. I
assume he lives in a house/apartment, that he slept in a bed, on a mattress,
that the mattress is on the bed and not the bed on the mattress, that he stays
on it because of gravity, etc. etc. etc. To make meaning out of sentences you
have to know a lot. Your first years in life may be nothing but acquiring this
knowledge.

~~~
icebraining
That idea has been kicking around in the AI community for a long time. One of
the biggest projects for encoding everyday knowledge in a machine-readable way
is
Cyc: [http://en.wikipedia.org/wiki/Cyc](http://en.wikipedia.org/wiki/Cyc)

------
bnegreve
> _Well, artificial intelligence is a slippery term. It could refer to just
> getting machines to do things that seem intelligent on the surface._

But is "natural intelligence" any different?

~~~
simonh
Yes, because it is general purpose intelligence. Google Translate can't even
play the simplest of games, let alone Chess or Jeopardy. Watson can't even
begin to tackle translation or Chess. Deep Blue has absolutely no capability
towards translation or playing Jeopardy. They are fixed-function, single-task
algorithms optimized towards a single very tightly defined problem domain.
That sort of research isn't going to get us to general purpose AI.

~~~
ivanca
Cats cannot play the simplest of games; does that mean that they do not have
general purpose intelligence?

And I do think this research is going to take us there; what is winning a game
but translating your opponent's actions into your own? Like translating a
facial gesture in a poker game (or translating a word) into an action such as
doubling down in the game (or an HTTP response).

~~~
Ygg2
Actually cats can play catch. So even by that definition they have some
rudimentary intelligence.

Cats aren't as intelligent as dogs, and they aren't social animals, but they
have some intelligence. They can learn on their own to navigate their
surroundings, move, catch prey and mate.

I'd love to see Watson do any of these without pre-programming.

~~~
jjtheblunt
Rudimentary intelligence? How primate a thing to write. Cats and dogs evolved
with different priorities, as did primates, and have ingenuities that surprise
one another, except when the others stop watching. I've seen cats learn how
doors work, doorknobs and all, just by watching. They are clever,
geometrically, but humans seem to resonate with the more vocal/verbal and
generally cooperative nature of dogs, as it's a very primate characteristic.

~~~
bad_user
I think this is because dogs are friendlier and more willing to learn from us,
whereas cats are very selfish :)

Either way, people claiming that dogs or cats aren't very intelligent haven't
lived with one. It is true that it is not human intelligence, but personally I
feel that the only missing piece is natural language, which if you think about
it is the only distinctive trait separating us from the other primates.

And this will be the ultimate test for AI, the ability of a computer to have a
meaningful conversation with a human.

~~~
Ygg2
I think dogs are more intelligent BECAUSE they are social animals. Social
animals need to do everything a solitary animal does, plus know what their
peers are thinking or about to do in order to coordinate.

Dogs can fake emotion (ever been bitten by a dog that wags its tail?), know
what appeals to humans (sending the cutest or most wounded pup to beg for
food), understand how the subway works, etc. Cats have greater independence,
but overall aren't as clever.

Crows and killer whales, now those fuckers are intelligent.

The only thing exceptional about the human mind is the ability to be EXTREME
in every aspect. Most creatures can do the same things we do; we just do them
to a higher degree.

~~~
bad_user
Speaking of social animals, spotted hyenas are very clever, and hyena groups
are not only very large but also have very complex rules for social
interaction.

On the human mind, I do have a problem with assertions such as yours - saying
that we can be "extreme" doesn't say much about how we are built or why other
animals can't do it. We definitely don't have the biggest brains.

Sometime in the evolutionary process, we developed the ability to speak.
Chimpanzees have symbolic capacities which are rarely used in the wild.
Something happened to us, some social change, and we've been practicing this
ability for tens or hundreds of thousands of years.

And speech is tremendously important because that's how we learn - we pass
knowledge to, and receive it from, others by means of natural language. Society
also leaped forward along with agriculture because that's when written
language happened, also allowing us to pass knowledge to future generations.
We also leaped forward when common people started learning to read. And
because of the ease of access to information nowadays, I also believe we're
amidst another revolution.

Now if you look at animals, they do have language. Most intelligent animals
rely on body language and even sounds to communicate. But one thing that we do
effortlessly is to invent new words, new metaphors to describe whatever we
want and our language has gotten so big that we can describe anything.

So there's a strong correlation there and the question on my mind is - are we
smart because of the ability to communicate, or are we able to communicate
because we are smart?

~~~
Ygg2
Well, I didn't say or hypothesize why that is so.

Chimpanzees and crows can make tools; we make tools that make tools that make
tools.

Animals have language(s); we have several highly symbolic languages. Ours are
just more sophisticated.

I'm pretty sure there are examples of animals empathizing; humans can
empathize with a large part of the biosphere.

There is nothing that fundamentally divides us. Or you could say that humans
are nothing special: we just perform most mental tasks at greater lengths and
do them more consistently. That's all.

------
tlarkworthy
One hypothesis is that real AI will be achieved by simulating biological
brains - bottom-up AI - instead of designing intelligent algorithms top-down.

~~~
chegra
That is like trying to create a bird instead of understanding the principles
of flight. By understanding the principles of intelligence we can do better
than our current level of intelligence. One contender for understanding those
principles is AIXI:
[http://www.youtube.com/watch?feature=player_detailpage&v=V6umr1OP8uo#t=1091](http://www.youtube.com/watch?feature=player_detailpage&v=V6umr1OP8uo#t=1091).

~~~
mehwoot
Well, if we had spent 60 years failing to fly, we would probably be looking at
recreating what a bird does pretty closely.

------
lsnape
see also [http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/](http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/)

------
KaoruAoiShiho
Surprised at the traction this interview is gaining. It isn't all that
different from the amateur blog post from a few days ago saying the same
thing, where everyone educated the poster on the subject of modern AI
development.

Almost all the comments from that thread apply here too. People with this
point of view suffer, in my opinion, from a fundamental misunderstanding of
what natural intelligence is.

------
axilmar
The way intelligence works, in my opinion, is this:

1) Experiences are stored in the brain. Experiences contain inputs from the 5
senses as well as the sense of danger/satisfaction at that point.

2) At each given moment, the brain takes the current input and matches it
against the stored experiences. If there is a match (up to a threshold), then
the sense of danger/satisfaction is recalled. Thus the entity is able to
'predict', up to a point, whether the outcome of the current situation is bad
or good for it, and react accordingly.

The key thing in the above is that the whole process is fused together: the
steps for adding new experiences, matching new experiences and recalling
reactions are fused together in a big pile of neurons.
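The store-and-match loop described above can be sketched as a tiny nearest-neighbour memory. Everything here is invented for illustration (the class name, the feature vectors, the threshold value): experiences are stored as (input vector, danger/satisfaction score), and a new input recalls the score of the closest stored experience if it is within the threshold.

```python
import math

class ExperienceMemory:
    """Toy model: stored experiences recalled by similarity to current input."""

    def __init__(self, threshold=1.0):
        self.experiences = []      # list of (input_vector, valence)
        self.threshold = threshold

    def store(self, inputs, valence):
        """Step 1: store an experience with its danger/satisfaction score."""
        self.experiences.append((inputs, valence))

    def predict(self, inputs):
        """Step 2: recall the score of the closest stored experience,
        or None if nothing matches within the threshold."""
        if not self.experiences:
            return None
        dist, valence = min(
            (math.dist(inputs, e), v) for e, v in self.experiences
        )
        return valence if dist <= self.threshold else None

mem = ExperienceMemory(threshold=1.0)
mem.store((0.9, 0.1), -1.0)    # e.g. "hot stove" -> danger
mem.store((0.1, 0.8), +1.0)    # e.g. "food smell" -> satisfaction
print(mem.predict((0.8, 0.2)))   # near the first experience -> -1.0
print(mem.predict((5.0, 5.0)))   # nothing similar stored -> None
```

Note that storing, matching and recall are separate methods here, whereas the comment's point is that in a brain they appear to be one fused process; the sketch only captures the input/output behavior, not that fusion.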

------
aaron695
Normal FUD.

It doesn't answer the fundamental point everyone brings up about how we just
keep redefining what AI is; the interview just avoids it with fluff.

Who says Watson doesn't understand? Why can't it have some sort of
consciousness, albeit at a very, very low level?

Do we really need the singularity to happen for true AI?

~~~
qbrass
If you ask Ken Jennings what his favorite song is, he'll probably tell you, or
he'll say that he doesn't have a favorite.

If you ask Watson the same question, it will probably give you a reference to
someone being asked that question in a magazine.

------
TheCoelacanth
This is completely moving the goalposts. Of course Watson and Siri are AI.
They aren't strong AI, but they are definitely AI.

