
A.I. Has Grown Up and Left Home - zbravo
http://nautil.us/issue/20/creativity/ai-has-grown-up-and-left-home-rp
======
playhard
I emailed Chomsky a month back about thoughts and language. I hope he doesn't
mind that I'm sharing it. My email:

I was fascinated by your talk at the British Academy. I'm no linguist, so
please forgive my ignorance.

If the formal language is not designed for communication but for our thoughts,
is it not the case that there are two different mechanisms of perception? One
to understand internal language (if I can call it a language), and one for the
external language we use socially? It is bewildering. As I type this email,
I'm thinking of the words that need to follow, either because of my ability
for pattern recognition or because of a conceptual understanding of the rules
of the external language. I'm not sure if my ability is a natural phenomenon
or a self-trained one (as far as English is concerned). English is not my
native language; it was acquired by study. Recently, my thoughts have been
primarily in English. I can switch between English and my native language for
thoughts.

Is language the only way to think? Is it crazy to think that our brain does
not use language to think at all? Can thought/perception, like language, have
two different mechanisms, one for understanding external logic and one for
internal logic? The internal system might be visual, physical, etc. It seems
to lead to a structure problem. Do internal thoughts have a far more complex
structure? If dreams are thoughts, they are extremely complex, and might be
evidence that there are different systems. I strongly suspect this is because
we have different involuntary mechanisms of perception, like the sense of
smell.

Chomsky's reply:

Very little is understood about the interesting questions you’re raising. To
study them we’d have to have a way of accessing thinking by some means that
does not involve conscious use of language – which is probably a superficial
reflection of something happening within. But so far there are no good ways.

~~~
btilly
If you're interested in these questions, you should read
[http://www.amazon.com/Whos-Charge-Free-Science-Brain/dp/0061...](http://www.amazon.com/Whos-Charge-Free-Science-Brain/dp/0061906115)
to learn more about how brains work.

There is good evidence that different parts of our brain think in very
different ways, and that we suppress awareness of how different they are.
Verbal thinking in particular relies heavily on a section of the brain that is
very good at making rationalizations, and that is not necessarily well
connected to the parts of your brain that are making the decisions you are
trying to understand. Therefore our verbal descriptions of thinking are
interesting, but not particularly reliable or informative.

It is worth noting that many negative reviews come from people who think that
current thought from philosophical schools should have been included. Given my
temperament and beliefs, I took this as a buy recommendation and am happy that
I did.

~~~
superobserver
Thanks. Reminds me of this book, which I also highly recommend in the same
vein:
[http://www.amazon.com/The-Master-His-Emissary-Divided/dp/030...](http://www.amazon.com/The-Master-His-Emissary-Divided/dp/0300188374).

~~~
btilly
Thanks, that book looks interesting as well.

------
infogulch
Computers are just as dumb as they were in the 50's. They're just dumb a lot
faster.

~~~
Mangalor
Some algorithms are smarter. The computer system that figures out how to book
flights for millions of people every day is quite intelligent when it comes to
that specific task.

The problem is that this intelligence is more like an insect's brain, in that
it doesn't generalize to any other higher-level problems. We're still trying
to figure out how to organize knowledge so the intelligence is transferable.
It doesn't seem too difficult to do within the next 30 years, given currently
available tools.

~~~
karmacondon
I think the point OP is making is that the underlying technology and
"thinking process" behind most AI hasn't changed much in the last 50-60 years.
We look at a task like figuring out how to book millions of flights and think,
"That seems like it would be very difficult for me to do, therefore a computer
must be intelligent if it can do it." But that isn't true for all definitions
of 'intelligent'. The basic optimization process, once figured out (by
humans), is fundamentally the same and is just being applied by the computer
at greater scales and speeds. If computers were getting smarter, for some
definitions of the term 'smarter', they would be identifying problems to be
solved on their own, or at least coming up with novel solutions without any
human involvement.

A pen and paper can be used to make powerful calculations, but we don't
generally attribute intelligence to the tools themselves. The promise of
artificial intelligence was that computers would eventually become capable of
taking independent action and doing things that humans couldn't predict or do
themselves. To say that we're still trying to figure out how to do that is a
bit of an understatement. The time frame for fulfilling that expectation is
completely unknowable, at best.

~~~
anko
Have you heard about deep learning? Computers are coming up with novel
solutions without human involvement. It was tried before, in the 70s and 80s,
with neural networks, but it only worked on simple tasks. Now the main limits
seem to be computing power (and learning how to train these networks).
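Those early networks fit in a few lines. A hedged sketch, not any particular historical system: the single-layer perceptron of that era, trained with the perceptron learning rule on the linearly separable AND function — the kind of "simple task" it could handle (XOR, famously, needs a hidden layer).

```python
# Single-layer perceptron learning AND via the perceptron learning rule.
# Convergence is guaranteed for linearly separable data.

def step(z):
    return 1 if z > 0 else 0

# Training data: (bias, x1, x2) -> AND(x1, x2)
data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]

w = [0, 0, 0]  # weights, including the bias weight
for _ in range(100):  # far more epochs than needed for this tiny problem
    errors = 0
    for x, target in data:
        pred = step(sum(wi * xi for wi, xi in zip(w, x)))
        if pred != target:
            # Perceptron rule: nudge weights toward the correct answer.
            w = [wi + (target - pred) * xi for wi, xi in zip(w, x)]
            errors += 1
    if errors == 0:
        break  # a full pass with no mistakes: done

print([step(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data])
# -> [0, 0, 0, 1]
```

The same loop can never learn XOR, no matter how long it runs — that limitation is exactly what Minsky and Papert's _Perceptrons_ formalized, and what hidden layers (and the compute to train them) later fixed.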

------
mhewett
Historical correction: The article mentions "AI Winter" as occurring in the
1970s. AI Winter actually refers to the period from about 1990-1997 when the
Expert Systems approach was deemed to be a failure, funding for AI dropped
considerably, and it was a mistake to put AI on your resume.

This opened a window that allowed the statistical-based AI techniques that
currently dominate the field to mature.

~~~
Animats
As someone who went through Stanford CS in 1983-1985, when the "expert
systems" people, led by Feigenbaum, were in charge, I can say that the expert
systems crowd was stuck by then, but in denial about it. Feigenbaum was
testifying before Congress: "The US will become an agrarian nation" if
Congress didn't fund a big national AI lab headed by him. The expert systems
crowd were claiming they were headed for strong AI in the near term. They were
in complete denial about the fact that expert systems mostly give you back
what you put in. You can code up some kinds of "how-to" information as rules,
but it's just a form of declarative programming.

It was frustrating hearing those professors. I'd spent time in industry before
going to Stanford for a MSCS, and I could tell that they were stuck and
covering for it.

A few years later, Stanford moved computer science from Arts and Sciences to
Engineering. This pushed the CS department into more practical directions. The
School of Engineering had deans, supervision, and planning. CS under Arts and
Sciences was sort of what each professor wanted to teach. There was no
undergraduate CS at Stanford before that move; prospective undergrad students
were told "get a liberal education".

That didn't end the "AI Winter" at Stanford, though. It took the DARPA Grand
Challenge, in 2003-2005, to do that. The head of DARPA, Dr. Tony Tether, used
the Grand Challenge to force the big-name AI schools to get some results.
DARPA had been funding automatic driving work at Stanford and CMU since the
1960s, with disappointing results. The big-name schools were quietly told that
if the private sector and amateurs outdid them in the Grand Challenge, DARPA
was turning off their AI funding. Suddenly entire CS departments were devoted
to the Grand Challenge. Stanford had to bring in machine learning people from
CMU to compete. That's what turned the department around and finally pushed
the logicians into the background.

------
ansible
We're still figuring out probabilistic knowledge representation. So much
depends on context.

Earlier approaches are fraught with problems. Encoding a fact that translates
to "Obama is the president" is far too simplistic. It does not take into
account the proper context (president of the USA), the time period (from
January 20th, 2009 to the current time), and so on.

These contexts: conversation (were we talking about politics?), location (USA,
Russia, etc.), all need to be captured as best as possible and appropriately
considered when trying to apply a learned fact. Maybe the context is
hypothetical (talking about an alternate history where McCain won the
election) or fantastical (talking about President Garrett Walker in the TV
show House of Cards).

The existing systems I've seen can't really handle all of that.

Somehow we humans are generally good at figuring out which sets of contexts
are appropriate to apply in a given situation, and doing it quickly. What's
tricky is that even in a single conversation with one person, the context may
switch from historical, to hypothetical and back again quickly.
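One way to make the point concrete — a rough sketch only, with all names invented for illustration, not a description of any existing system — is to attach the context fields explicitly to each stored fact, so a lookup can't retrieve a fact whose scope doesn't match the conversation:

```python
# Sketch: a "fact" is only usable together with explicit context fields.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str
    scope: str = "actual"             # "actual", "hypothetical", "fictional"
    valid_from: Optional[str] = None  # start of the period the fact holds
    valid_to: Optional[str] = None    # None = still holds

facts = [
    Fact("Obama", "president_of", "USA", valid_from="2009-01-20"),
    Fact("McCain", "president_of", "USA", scope="hypothetical"),
    Fact("Garrett Walker", "president_of", "USA", scope="fictional"),
]

def lookup(facts, predicate, obj, scope="actual"):
    """Return only facts whose scope matches the conversation's context."""
    return [f for f in facts
            if f.predicate == predicate and f.obj == obj and f.scope == scope]

print([f.subject for f in lookup(facts, "president_of", "USA")])
# -> ['Obama']
```

Even this toy version shows the hard part: the data structure is easy, but deciding at runtime which scope the current conversation is in — and noticing when it switches mid-sentence — is the unsolved piece.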

------
jayvanguard
This is one of the better explanations of this fundamental shift in AI
research. I remember even a decade ago this would have been a controversial
stance.

Now it is very clear the neats have lost and the scruffies are on the rise.
[http://en.wikipedia.org/wiki/Neats_vs._scruffies](http://en.wikipedia.org/wiki/Neats_vs._scruffies)

------
saosebastiao
The article mentions Marvin Minsky as an advocate of symbolic logic
approaches, and claims that Perceptrons (the book) was an attack on
perceptrons, neural nets, and connectionist theories. This is false on both
counts.

1) Minsky was emblematic of a Scruffy, and to this day, Scruffy approaches are
associated with Minsky and MIT. He didn't believe that statistical or
connectionist approaches were bad, just that they were not a silver bullet and
not going to solve every problem with AI. He has embraced nearly every
prominent approach to AI, and simultaneously criticized the near-religious
beliefs that they spawn.

2) Minsky didn't attack perceptrons and neural nets. His book was an objective
criticism of them, laying out very clearly their mathematical limitations, as
well as the philosophical problems with treating them as a catch-all
solution.

------
stainednapkin
[http://groups.csail.mit.edu/medg/people/doyle/gallery/minsky...](http://groups.csail.mit.edu/medg/people/doyle/gallery/minsky/mmm.html)

------
alextgordon
This shows the folly of modern approaches:

> Pornographic images, for instance, are frequently identified not by the
> presence of particular body parts or structural features, but by the
> dominance of certain colors in the images

Neural networks are unsuccessful because they are not detecting anything.
They're just making statistical correlations that happen to be right for the
specific set of training data they have received. There's no robustness
whatsoever. Artificial gullibility.

If we want to teach computers to think, then we have to teach them to model
the physical world we live in. Plus the emotional, social world.

And even _that_ is entirely separate from heavyweight abstract reasoning. Just
because a computer can pass the Turing test doesn't mean it can spontaneously
diagnose your medical problems, design a suspension bridge, or play Go.

~~~
duaneb
> Neural networks are unsuccessful because they are not detecting anything.
> They're just making statistical correlations that happen to be right for the
> specific set of training data they have received. There's no robustness
> whatsoever. Artificial gullibility.

This is what the brain does, too. All inductive reasoning comes down to
statistical correlations. The human brain has NEVER had "robustness".

~~~
alextgordon
By robustness I mean the brain is pretty hard to exploit. You can't trick it
into thinking an image is pornographic just by changing the color balance.

~~~
chriswarbo
> The brain is pretty hard to exploit.

No it's not, just ask any advertiser.

> You can't trick it into thinking an image is pornographic just by changing
> the color balance.

If the only data someone has about an image is its colour balance and you ask
them to classify which images they think are pornographic, then yes you can.

It's a pretty stupid idea to impose such limitations though, on humans or
ANNs. Even if you _did_ only care about colour balance, you don't need an ANN
to tell you that; just scale the image down to 1 pixel x 1 pixel, then look at
its colour.
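The 1×1-pixel point is worth spelling out: the only "feature" a colour-balance classifier sees is the image's average colour, which needs no learning at all. A minimal sketch (plain lists of RGB tuples standing in for a real image library):

```python
# Scaling an image down to a single pixel is just averaging its channels.

def average_colour(image):
    """image: list of rows of (r, g, b) tuples.
    Returns the mean colour, i.e. the image scaled to 1x1 pixel."""
    pixels = [px for row in image for px in row]
    n = len(pixels)
    return tuple(sum(channel) // n for channel in zip(*pixels))

# A 2x2 toy "image": red, green, blue, white.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]

print(average_colour(image))  # -> (127, 127, 127)
```

Any classifier whose decisions reduce to a threshold on this one triple is, by construction, fooled by a colour-balance shift — which is the parent comment's complaint about the porn detector.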

~~~
alextgordon
> > By robustness I mean the brain is pretty hard to exploit. You can't trick
> > it into thinking an image is pornographic just by changing the color
> > balance.
>
> No it's not, just ask any advertiser.

One of the great things about humans is that they can consider the context of
a remark, although apparently this is not universal.

~~~
abandonliberty
Maybe it's an AI.

------
PaulHoule
Seems to me you can buy and sell a face detector (or find one open source on
GitHub); all the better if somebody, or something, somewhere can identify the
concept of a face detector.

------
rasz_pl
State of the art in 1984:

[https://www.youtube.com/watch?v=_S3m0V_ZF_Q](https://www.youtube.com/watch?v=_S3m0V_ZF_Q)

