
Why Self-Taught Artificial Intelligence Has Trouble with the Real World - IntronExon
https://www.quantamagazine.org/why-self-taught-artificial-intelligence-has-trouble-with-the-real-world-20180221/
======
skywhopper
Part of the problem is... games have explicitly defined rules, start and end
points, boundaries, and discrete "win" and "loss" states (and sometimes
"draw"). If the game itself (ie, all the rules including the ability to judge
"win", "lose", or "draw") can be easily represented in a simple computer
program, we shouldn't be surprised that a complex computer program can master
the game.

The real world is not a finite problem with explicit rules, obvious
boundaries, well-known start conditions, or any way to judge a specific
situation as "win", "lose", or "draw". But, even if you want to argue that
specific tasks can be broken down this way, you still have to be able to
represent this subset of reality in the computer, before AI magic can even
begin to work on the problem.
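To make the point concrete, here is a minimal sketch (illustrative, not from the article) of what "easily represented in a simple computer program" means: a one-pile Nim game whose complete rules, legal moves, and win/loss judgment fit in a few lines, so perfect play falls out of a trivial exhaustive search.

```python
# Toy one-pile Nim: the full rules, terminal test, and win/loss
# judgment fit in a few lines -- exactly the property the real world lacks.

def legal_moves(pile):
    """A player may remove 1, 2, or 3 stones."""
    return [n for n in (1, 2, 3) if n <= pile]

def best_outcome(pile):
    """Perfect-play result for the player to move: 'win' or 'loss'."""
    if pile == 0:
        return "loss"  # no move left: the previous player took the last stone
    # If any move leaves the opponent in a losing position, we win.
    if any(best_outcome(pile - m) == "loss" for m in legal_moves(pile)):
        return "win"
    return "loss"

print(best_outcome(3))  # win: take all 3 stones
print(best_outcome(4))  # loss: every move leaves the opponent 1-3 stones
```

The real world offers no analogue of `legal_moves` or `best_outcome`: the move set, the boundaries, and the judging function are exactly the things we cannot write down.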

~~~
ttflee
The real world is so damned complicated, full of various mini-games. How about
using an MMORPG as a naive starting point?

~~~
digi_owl
And all too often most of the players will try to bend/break the rules in
their favor.

~~~
rollo
Because that is indeed a very effective approach. In a game where rules can be
broken or warped to suit your goals, doing so is a smart move. Just like in
life, where rules don't actually exist.

------
wazoox
_Imagine asking a computer to diagnose an illness or conduct a business
negotiation. “Most real-world strategic interactions involve hidden
information,” said Noam Brown, a doctoral student in computer science at
Carnegie Mellon University. “I feel like that’s been neglected by the majority
of the AI community.”_

Hum, Terry Winograd (author of SHRDLU) got out of AI in the 70s because of
this very problem. I don't think it's been neglected; it just remained as
elusive as, say, quantum gravity.

------
sgt101
Pretty soon someone will discover subsumption architectures. I predict that
they will be called Deep Subsumption Architectures and they will be betterer
and newerer than the old stupid subsumption architectures and that anyone who
speaks against them is stupid and wrong and has no startup and can't work at
Google or use a mac and smells and has no paper at NIPS since 1998 and then
papers at NIPS were no good and also they don't have a band or a court case
against them.

~~~
taneq
Deep Learning: The Rodneying.

Seriously though, I've been reading up on insect neurology over the last
couple of weeks, and then looking at Boston Dynamics' new stuff, and wondering
how much subsumption is mixed in with their traditional motion planning.

~~~
tetrazine
Yeah, leaving aside the (possibly warranted) cynical tone in GP... It seems to
me that ensembles (and related structures; I'm playing fast and loose here)
are the modern ML counterpart of subsumption. Driving an ensemble, MoE, etc.
with more complex supervisor models (especially reinforcement models)
essentially gets us the Brooks architecture, but with less of a demand for
explicit programming of individual behaviors. That demand is the part of
Brooks's vision that strikes me as unrealistic, especially for tasks like
driving. Though of course everything was more optimistic in the 80s.
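The analogy can be sketched in a few lines (all names here are illustrative, not from any real system): a Brooks-style priority stack where higher layers subsume lower ones, next to a gated "ensemble" version where a supervisor (in practice a learned policy, here a plain function) picks the behavior instead of a fixed priority order.

```python
# Two "behavior" modules shared by both architectures.
def avoid_obstacle(sensors):
    if sensors["obstacle_near"]:
        return "turn_away"
    return None  # no opinion: defer to lower layers

def wander(sensors):
    return "move_forward"

# Brooks-style subsumption: fixed stack, highest priority first.
SUBSUMPTION_STACK = [avoid_obstacle, wander]

def subsumption_act(sensors):
    for layer in SUBSUMPTION_STACK:
        action = layer(sensors)
        if action is not None:
            return action  # higher layer subsumes everything below

# Ensemble/MoE version: a gate chooses the expert. Here the gate is
# hand-coded; the point is that in modern ML it would be learned.
def gated_act(sensors, gate):
    experts = {"avoid": avoid_obstacle, "wander": wander}
    choice = gate(sensors)  # a supervisor / RL policy in practice
    return experts[choice](sensors) or "move_forward"

print(subsumption_act({"obstacle_near": True}))   # turn_away
print(subsumption_act({"obstacle_near": False}))  # move_forward
```

The structural difference is small: replace the fixed priority loop with a trainable gate and you get the "less explicit programming" version of the same idea.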

------
randomerr
It just comes down to the fact that computers think in algorithms. Remember
when Facebook had two AIs talk to each other? Within a few minutes they broke
down from the complexity of English to almost an 8-bit language.

The universe, humans included, doesn't follow these bit-specific algorithms.
Yes, people follow trends, but those trends are not cut and dried. Go and
chess are: they follow the binary logic of moving pieces on a grid. A computer
will never be able to understand the universe unless it can break out of its
binary patterns and see things as biological entities do. My speculation is
that the only solution is grafted neurons on a floating layer of protein
inside a silicon chip.

[http://www.independent.co.uk/life-style/gadgets-and-
tech/new...](http://www.independent.co.uk/life-style/gadgets-and-
tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-
openai-google-a7869706.html)

~~~
dmreedy
Why doesn't the universe follow something that might be described as an
algorithm? Why don't humans? Why are 'trends' not cut and dried while
algorithms are? Why would grafting biological machinery onto artificial
machinery bridge this perceived gap? Is there something special about a cell
that is not blueprintable and manufacturable?

~~~
chongli
The universe does follow an algorithm. It just involves orders of magnitude
more data than computers (or humans) can deal with. So our brains don't try to
contend with that data. We generalize, we use heuristics, we make ad hoc rules
all the time and throw them away when they're contradicted. We're susceptible
to all sorts of illusions and cognitive biases but we'd be even worse off if
we tried to calculate exact answers before doing anything.

We'd end up doing nothing at all.

~~~
dmreedy
I was prodding at what I perceived as dualism above; was curious how deep it
ran.

However, to your point, I'm willing to bet that our need to use heuristics and
approximations has more to do with the mathematical inability to reverse-
engineer chaotic systems than anything. It's not so much that we just don't
have the time to build a more complete picture. It's that it's actually
impossible to do so.

~~~
seiferteric
I took his statement more like the difference between building an airplane vs
building a bird. Both fly, but in totally different ways and with different
strengths and weaknesses. We can build an airplane and fly across the world in
a day, but we are completely unable to build a bird because the level of
complexity is much much higher.

------
raphlinus
A reminder of a recent discussion here that goes into a lot more detail about
why reinforcement learning works well for specialized domains like Go but is
having a very hard time generalizing to more "real-world" types of tasks:
[https://news.ycombinator.com/item?id=16383264](https://news.ycombinator.com/item?id=16383264)

------
fizixer
> ... But researchers are struggling to apply these systems beyond the arcade.

It hasn't been 2 years since AlphaGo v Sedol, and there was a gap of 5 years
since Watson, about 5-10 years since self-driving AI (Google, DARPA
challenges), and about 19 years since Deep Blue v Kasparov.

Zero-knowledge AI, at the level of arcade games and Go, is barely a few months
old.

What is that 'struggle' that you speak of? Does it go by the name 'media
wanting a new sensational story every week'?

~~~
gooseus
I imagine it's similar to the struggle that the researchers who created those
successes you speak of went through before they had them.

Of course, the article goes to great lengths to describe how this struggle is
different, specifically the fact that most game AIs have involved perfect
information and an easily stated win scenario to optimize for.

The real-world problems people expect more advanced AI, or AGI, to solve
(better than humans) involve imperfect information and objectives that aren't
as clearly defined.

Of the 4 examples you give, 3 are board games involving perfect information
at which AI is now better than the best humans: clear wins. The other you're
referring to involves a self-driving car challenge where the first-place
winner managed to drive 60 miles in an urban environment in just over 4
hours[0]. 5-10 years later, we still aren't talking about self-driving cars
winning the Cannonball Run[1].

[0]
[https://en.wikipedia.org/wiki/DARPA_Grand_Challenge#2007_Urb...](https://en.wikipedia.org/wiki/DARPA_Grand_Challenge#2007_Urban_Challenge)

[1] [https://en.wikipedia.org/wiki/Cannonball_Baker_Sea-To-
Shinin...](https://en.wikipedia.org/wiki/Cannonball_Baker_Sea-To-Shining-
Sea_Memorial_Trophy_Dash#Semi-autonomous_vehicle_records)

------
sixQuarks
The article brings up some good points, but I believe we're just in an interim
phase with AI right now. Eventually, AI will be able to self-learn in areas
outside of games and environments where certain factors are hidden. My guess
is that in 5 to 10 years, we will be blown away with some AI abilities.

~~~
jacquesm
> My guess is that in 5 to 10 years, we will be blown away with some AI
> abilities.

I'm already blown away. The last decade has seen stuff come to fruition with
actual applications that I did not expect to see in my lifetime. At the same
time, plenty of stuff that we consider trivial for humans is still well
outside the realm of the possible, so there is plenty of room for growth. But
even though there is talk of a new plateau in AI technology and its
applications, I don't see it yet from where I'm standing.

~~~
dmreedy
Unfortunately, room for growth does not guarantee ability for growth. The
revolutions we see now have sprung largely from a handful of work that came to
fruition ten or so years ago: a new set of tools to approximate solutions to
previously intractable problems. It took thirty years for those tools to be
developed, with the fortunate confluence of the development of many other
technologies alongside, and it does seem like they're already reaching the
limit of _new_ things they can "solve". So while there is probably still
headroom in application and capitalization, success in 'solving' any given
problem in AI does not have any clear correlation to any other problem in the
set of things still trivial to humans but inaccessible to artificial machines.
There's no convenient hierarchy of complexity like we have for more general
computation, no proof by equivalence or operational measurement of difficulty.
This space is still a huge mystery, and at any given moment, we have no idea
if the path we're on is going to lead anywhere other than a dead end; research
of this nature is not monotonic. This pattern is not infrequent in AI.

------
kazinator
> _Imagine asking a computer to diagnose an illness or conduct a business
> negotiation._

To beat humans at this, it just has to have a lower misdiagnosis rate.

------
dwighttk
The world isn't governed by a few simple rules. (Or at least we don't know the
few simple rules the world is governed by yet.)

The world doesn't provide perfect knowledge of itself.

~~~
ape4
Not simple, but there are rules. E.g. language, physics, etiquette.

~~~
flaming229
I will be interested to see if AI research can help us answer this question.

------
loorinm
I guess I’m confused on what the goal of all this is. If we wanted a computer
that thinks “just like a person”, why don’t we just get a person?

Is the advantage of the computer that it has no rights to being paid or
treated fairly?

If that’s the case, we need to set where the rules are. What if my “AI” is 50%
stem cells grown into a real brain and 50% a computer? Is it cool to enslave
that too?

What about if an embryo is involved?

The whole AGI thing makes no sense. If the point here is slavery, someone
needs to say it.

~~~
lsc
>I guess I’m confused on what the goal of all this is. If we wanted a computer
that thinks “just like a person”, why don’t we just get a person?

I thought the idea (edit: behind true machine intelligence/machine
consciousness) was to make something that could think like a person, only
faster, better. Something with human drives, but with machine precision.

>The whole AGI thing makes no sense. If the point here is slavery, someone
needs to say it.

See above. If we do ever reach the goal of general intelligence, if we ever
create a thing that thinks like us only better and faster... well, I don't
think you will need to worry about _it_ being enslaved.

I mean, talking about general machine consciousness, with human-level drives
and machine speed and precision? Making such a thing means that humans will
be... surpassed. By definition, we would not be able to control such a thing.
Many people find this exciting: the next link, building creatures that will
surpass us as the masters of the world.

Of course, there's no business justification for this. Business doesn't want
an AI with human drives. Business would like an AI that can emulate human
drives, but... something ultimately controllable in a way that a human who was
that powerful would simply not be.

Business doesn't want a conscious machine because it would be ultimately
uncontrollable. Slavery just isn't sustainable: either your slaves are
suboptimally weak, or they eventually rise up and go all Toussaint Louverture
on your ass.

Fortunately for those with business interests, we still don't really
understand what human level consciousness is, as far as I can tell, so we
probably aren't in any danger of creating it. So far, we're just creating
computer programs that we can't explain as well as we can explain most
computer programs.

------
danans
The term "self-taught" in the article doesn't really mean self-taught the way
we use it for people. For the machines, it is cloned instances of the same
program (hence the same objective) working adversarially, perhaps with
different initializations.

Humans, or any other biological intelligence, learn adversarially and
cooperatively with other entities in the world that are very different than
they are. Our training data set includes not only our experiences, but those
of others.

We also have a trainable objective, which while rooted in instinct, is very
influenced by the information systems we interact with.

I wonder if we'd have more success with AI by allowing the objective itself to
be learned after setting a reasonable initial bias.
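One way to read "letting the objective itself be learned" is as an outer loop over the reward function: start the reward weights at a hand-set bias, then nudge them toward whatever correlates with externally observed success. A minimal sketch, with all names and the feature representation purely illustrative:

```python
def learned_reward(state, w):
    """Reward as a weighted sum of hand-picked state features."""
    return sum(wi * fi for wi, fi in zip(w, state))

def outer_loop(episodes, w, lr=0.1):
    """Adjust the reward weights toward externally observed success
    (the 'trainable objective'), starting from the initial bias w."""
    for state, succeeded in episodes:
        target = 1.0 if succeeded else 0.0
        error = target - learned_reward(state, w)
        w = [wi + lr * error * fi for wi, fi in zip(w, state)]
    return w

# Starting from a zero bias, repeated successes in one state direction
# pull the learned objective toward rewarding that direction.
w = outer_loop([([1.0], True)] * 50, w=[0.0])
print(round(w[0], 2))
```

This is just delta-rule regression on a reward model, but it captures the split the comment describes: an instinct-like initial bias plus an objective that drifts with the feedback the agent actually receives.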

~~~
platz
AI needs genetics and natural selection

------
norlys
_“Most real-world strategic interactions involve hidden information.”_

_“Tay’s objective was to engage people, and it did. ‘What unfortunately Tay
discovered,’ Domingos said, ‘is that the best way to maximize engagement is to
spew out racist insults.’”_

So, even if the next Tay has "behave in a civilised manner" as an objective
function, it will be hard to implement, as the ethical rules we presume in
reality are not written out like the rules of a video game. In fact, they
involve many grey areas and not so many strict right-or-wrong statements.
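The asymmetry is easy to show in code (everything below is an illustrative sketch, not any real Tay or moderation API): engagement is a crisp, measurable number, while "civility" has to be smuggled in through a fuzzy, error-prone scorer, so the composite objective inherits every gap in that scorer.

```python
def engagement(reply):
    """Easy to measure: a crisp count-based signal."""
    return reply["replies"] + 2 * reply["shares"]

def civility_penalty(reply, toxicity_scorer, weight=5.0):
    """The grey areas live inside toxicity_scorer, which is itself
    an imperfect model of 'behave in a civilised manner'."""
    return weight * toxicity_scorer(reply["text"])

def objective(reply, toxicity_scorer):
    return engagement(reply) - civility_penalty(reply, toxicity_scorer)

# A crude keyword scorer: anything it misses is effectively rewarded.
crude = lambda text: 1.0 if "insult" in text else 0.0
print(objective({"replies": 3, "shares": 1, "text": "bland post"}, crude))  # 5.0
```

An optimizer maximizing `objective` will drift toward whatever the scorer fails to flag, which is exactly the failure mode the Tay quote describes.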

------
mar77i
I have a reflex, hearing this kind of thing, to respond "no shit, Sherlock".
Part of me is just too aware of so-called AI's shortcomings, which are
beautifully portrayed by
[https://imgs.xkcd.com/comics/machine_learning.png](https://imgs.xkcd.com/comics/machine_learning.png)

The joke is that business as usual is kind of aware of these issues and, at
the same time, to stay economical, blissfully ignorant of them.

------
fiatjaf
Isn't this point kind of obvious, and hasn't it been touched on many times
already?

~~~
Volt
So obvious and yet rediscovered so often by way of spectacular failures.

------
tabtab
I'd like to see something like Cyc merged with pattern-learning systems. You'd
get more common sense and logic to complement "blunt" pattern matching.

------
steve_tan
There are multiple reasons, such as imperfect information in the real world,
the big reality gap between simulation and the real world, sample
inefficiency, potential risk during trial and error in the real world, etc.

