
The Myth of AI – Jaron Lanier - discreteevent
http://edge.org/conversation/jaron_lanier-the-myth-of-ai
======
tsunamifury
I think Jaron knows what many engineers who have worked on AI know: that the
external promise of AI is far more inflated than the capability of the actual
technology. It's simply that once the current technology gets just complicated
enough that the above-average person can no longer understand it, everyone
starts assigning magical properties and expectations to it. This results in
short-term overvaluations, which inevitably lead to disappointment [1].

Jaron has observed this several times and rightly seems to be tired of
repeating the same naive cycle.

As I'm just now entering my second cycle and watching tech repeat itself again
-- I'm beginning to understand his weariness.

[1] [https://hbr.org/2015/12/the-overvaluation-trap](https://hbr.org/2015/12/the-overvaluation-trap)

~~~
IanDrake
The real question for me is:

What is motivating otherwise very intelligent people to promote the idea that
AI will take all our jobs and/or enslave us?

Possible answers:

1) To prop up their investments in AI to get higher valuation.

2) To reach a political goal, such as basic income, which is often justified
as necessary in a world where computers and robots take over the work force.

3) ?

~~~
gearhart
Whilst the "enslave us" thing is a bit alarmist, there's a certain analogy
between AI taking our jobs and mechanisation.

You pointed out (below) that historically people have often been terrified
that machines would take all of our jobs, and that terror has turned out to be
unfounded. But they weren't wrong, they were just wrong in thinking it would
be a bad thing.

Over the last 150 years, the proportion of Americans employed in agriculture
has dropped from ~70% to ~2% [1]. They've literally been replaced by machines.

A large proportion of those people are now doing menial intellectual jobs that
likely will be replaced by "AI". A complete shift in the nature of the work we
do isn't unprecedented, and it shouldn't be considered impossible, but it
shouldn't be considered disastrous either.

edit: [1]
[https://en.wikipedia.org/wiki/Agriculture_in_the_United_States#Employment](https://en.wikipedia.org/wiki/Agriculture_in_the_United_States#Employment)

~~~
TeMPOraL
The difference is that this time we're running out of work to give to those
who will be displaced by machines. Yes, we can keep inventing bullshit jobs,
but we've already reached the point where people are starting to ask whether
working for the sake of working makes any sense.

~~~
eanzenberg
Wat. I keep hearing the argument that there "won't be useful work to be done
soon." The fact that a median house still costs 4x the median yearly salary
means that we not only live in a resource-scarce world but will continue to
for a long time.

Just because you don't "see" the work humans currently do doesn't mean you can
be ignorant of it happening. Just because you personally don't want to work
(like most people) doesn't mean there isn't plenty out there for people to do.

~~~
TeMPOraL
Why, there still will be plenty of work to do - maintenance, for instance. But
probably not enough for _everyone_.

That "a median house still costs 4x the yearly median salary" means exactly
nothing. The median salary is market-driven, and the price of housing reflects
the games banks and housing developers are playing, and thus can be
arbitrarily high.

~~~
eanzenberg
House prices are set by what buyers are willing to pay for them. This takes
into account that, at the median, 4 years of a person's work output will be
required to purchase a house.

~~~
TeMPOraL
In which country is that? Last time I checked, the common way for financing a
house purchase in the entire Western world is through a mortgage loan, which
makes the house price tied to the size of the loan a bank is willing to give
to an average person.

~~~
eanzenberg
That's not how prices are set. Just because I can borrow $X thousand dollars
on my credit doesn't mean I go ahead and purchase all the big screen tvs and
computers I want. Just because I can borrow $XX thousand to purchase a car
doesn't mean I'm driving around in a new mercedes. Just because I can borrow
$X million to purchase a house doesn't mean I will purchase real estate which
will turn my net accrued savings to 0.

When people think only short-term (what can I afford monthly?) rather than
long-term (what will I pay out over the life of the loan vs. what I'm getting
now?), they make mistakes. These mistakes are self-inflicted, but these people
then tend to blame any and all others for why they can't get ahead, why the
system doesn't work, why the American dream is dead :).
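The monthly-vs-lifetime gap described above follows from the standard fixed-rate annuity formula. A minimal sketch (the loan figures below are invented purely for illustration):

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate (annuity) loan payment."""
    r = annual_rate / 12                  # monthly interest rate
    n = years * 12                        # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

# Invented example: a $300,000 mortgage at 6% over 30 years.
payment = monthly_payment(300_000, 0.06, 30)   # roughly $1,800/month: looks "affordable"
lifetime = payment * 30 * 12                   # well over 2x the principal paid out
```

The short-term view sees only `payment`; the long-term view sees `lifetime`, which for typical rates and terms is more than double the sticker price of the house.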

~~~
TeMPOraL
That's beside the point. Most young people today can't afford a house, so they
take out a mortgage loan to buy one, and surprisingly, houses tend to be
priced at exactly the level of loan an average person can get.

The situation is different from the TVs and computers and cars because those
are not considered as important as your own house. Most people eventually
_have_ to move out, so they _have_ to participate in the game. I believe we
call it "inelastic demand".

------
apsec112
This article seems very confused. Most importantly, it doesn't distinguish
between what AI can do _right now_ , and what the _theoretical limit_ of AI
is, in 50 or 100 or 500 years. Everyone agrees that AI today, and in the near
future, doesn't even vaguely resemble a "person". That doesn't imply that much
more powerful, "person-like" AI will forever be impossible. Airplanes weren't
a practical means of travel in 1910, but by 1960 it was a different story, and
some people in 1910 had already realized that plane travel was coming.

Secondly, it does a lot of handwaving about "religion". I am an atheist, and
think most religious beliefs are irrational. However, that doesn't mean that
every belief that "looks like" religion (in some vague, poorly-defined way) is
irrational. The Aztec religion was false, but Hernan Cortes and his army of
men with guns was very real. It would have been stupid for Aztec atheists to
ignore Cortes because "that sounded like religion". The right question to ask
is "is this claim supported by the evidence?", not "how much like religion
does this claim sound?".

~~~
stepvhen
Religion itself does a lot of hand-waving, and people tend to talk about the
"existential threat" of AI with a lot of hand-waving. It's an easy comparison
to make. Also, a claim, by itself, can't be a religion. It would need people
to attach to it with a certain fervor.

>it doesn't distinguish between what AI can do right now, and what the
theoretical limit of AI is, in 50 or 100 or 500 years.

AI, by itself, cannot interact with the physical world beyond flipping bits in
registers on a CPU (a negligible action). For AI to be a threat, in my
opinion, it would need to be coupled with some form of physical manifestation
that could, in some fashion, not be under the control of a human. Until that
happens, we can just unplug the machine. And we are already seeing how
intensely difficult robotics is: making good robots and interfacing with the
real world. Historically, everything in AI has been much, much harder than
originally thought.

~~~
TeMPOraL
There's a pretty cool story about underestimating bit-flipping :).

[http://lesswrong.com/lw/qk/that_alien_message/](http://lesswrong.com/lw/qk/that_alien_message/)

Besides, just look at us, humans. Who would ever build an AI and then keep it
away from any means of communication? That wouldn't be a very useful AI. If
one wants that, one can pick a rock and imagine it has a mind. Or talk to a
cat.

~~~
stepvhen
That was a fun read.

------
kfk
Well, at a logical level, it's one of the biggest existential threats, if 1.
it can be done; 2. humans are intelligent enough to pull it off. There is a
non-zero chance of AI (and the related existential threat), just as there is a
non-zero chance of a meteorite coming down and wiping all of us off the Earth.
But my guess is that most of us would say "YES" to a meteorite-deflection
project, while "meh" to any attempt to better understand the AI problem.

~~~
pron
Well, we can predict more or less what will happen when a large meteorite hits
Earth, but we have no clue (other than sci-fi scenarios) about the nature of
the threat AI poses, if only because we are so far from understanding what AI
is. We don't know if intelligence can be separated from agency and agenda, we
don't know if intelligence can be separated from emotions, and we don't even
know if intelligence can exceed our own (the argument for "human intelligence
at a higher speed" fails on several grounds). Hell, we don't even know what
intelligence _is_ (and the definition of "a general ability to solve problems"
doesn't work, as there are problems that what we generally regard as
intelligence clearly cannot solve while other qualities do).

What's the point of arguing about the potential threats of something we have
so little knowledge of? Maybe it will come too fast, but maybe that unkillable
virus will, too. At this point in time, we simply have too little information
to make an intelligent assessment of the risks posed by AI. We can and should,
however, discuss the more immediate related dangers of real-world machine
learning, such as self-reinforcing bias.

~~~
tome
> the argument for "human intelligence at a higher speed" fails on several
> grounds

I'd be interested if you said more about that.

~~~
pron
Sure, but you need to understand that I'm not trying to predict what would
necessarily happen, only to show how such a thing could _possibly_ fail.

Think about what could happen if _your_ brain worked much faster. The world
would mostly seem to slow down. A very conceivable outcome is one of boredom
and possibly madness. You could perhaps multi-task and think of many different
things, but we are generally terrible at multitasking, so we have no idea
whether an intelligence even _could_ multitask well.

Another problem is that we don't know anything about the information capacity
of a mind. A faster mind of the same size could potentially have trouble
dealing with all that extra information (again: madness, maybe?).

One thing that hints at those capacity limits is the need for sleep. We don't
know why, but some hypotheses say that it's necessary to do some "information
cleanup" operations in the brain. It is possible that an AI would need to
sleep, too. The AI may need to sleep a lot more to handle much more
information.

It is therefore conceivable at least that a mind has to work at a speed that
is commensurate with the speed of the world around it, and one that is
appropriate for its capacity.

------
waterlesscloud
"It's not so much a rise of evil as a rise of nonsense. It's a mass
incompetence, as opposed to Skynet from the Terminator movies. That's what
this type of AI turns into."

That's the danger. I think Lanier is a lot closer to Musk on that point than
he imagines, though.

------
circlefavshape
Am I missing something or does the first reply (by George Church) completely
ignore everything the original article says?

~~~
JetSetWilly
I don't think you are; he seems to respond with the predictable hyperbolic
future-boosting expected of futurologists, "singularitarians", etc. Everything
is all "exponential growth" and inevitable transformation
right-around-the-corner.

Except it isn't.

And this ignores every point that Jaron was trying to make and falls into all
the traps he was trying to point out. It is an oblivious comment.

I must say, I don't find most of the contributors to Edge particularly
insightful.

------
nickpsecurity
Great article covering a lot of good topics, especially the
overpromise->winter effect and the fact that a lot of what's in AIs is
nonsense. My favorite part, though, was this:

"The truth is that the part that causes the problem is the actuator. It's the
interface to physicality."

BOOM! That was my exact argument in counterpoints to Superintelligence risks.
I thought it was ridiculous to worry about what it thought when you could
easily control what it _did_ at the interface. I also pointed out that
high-assurance security already has decades of work dealing with this exact
problem, and pretty effectively. So, anyone worried about that sort of thing
should focus on securing the interface that would be used in various domains
to catch issues.

Now, that's not to say a superintelligence can't break an evil scheme down
into a series of safe actions that result in catastrophe. There's
possibilities there. Just that all methods for handling them can and should be
at the interface. And can be implemented by verifiable, dumb algorithms.

~~~
Houshalter
Once AI is invented, what's to keep anyone in the world from building one? And
not keeping it in a box? And that's assuming the box even works, and the AI
can't hack out of it, or trick you into letting it out.

~~~
nickpsecurity
My AI that works in concert with the surveillance state to hunt their AIs.
Plus, anything or anyone too smart will have to be registered and monitored.
Basically, the reaction to mutant powers in X-Men.

------
freyr
> _There's always been a question about whether a program is something alive
> or not since it intrinsically has some kind of autonomy at the very least,
> or it wouldn't be a program._

Huh? A program, by definition, has _no_ autonomy. Given its inputs, it
performs the appropriate sequence of actions it has been programmed to
perform. At no point is it governing its own behavior.

Can anybody name a single realizable program that exhibits autonomy?

Even machine learning algorithms, which adapt to their input, are
deterministic and incapable of self-governance. Given identical initial
conditions and inputs, the machine learning program will generate identical
deterministic output.
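That determinism claim is easy to demonstrate concretely. A minimal sketch (a hypothetical toy model, not any real ML library): a "learner" whose randomness all flows from one seed reproduces itself bit-for-bit.

```python
import random

def train(seed):
    """Fit a single weight w to the line y = 2x with seeded 'randomness'."""
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)               # "random" init, fully determined by the seed
    data = [(x, 2 * x) for x in range(1, 6)]
    for _ in range(100):
        rng.shuffle(data)                # looks stochastic, but is reproducible
        for x, y in data:
            w -= 0.01 * (w * x - y) * x  # gradient step on squared error
    return w

# Identical seed and inputs -> bit-for-bit identical "learned" model.
assert train(42) == train(42)
```

Any seed works; the apparent non-determinism in real ML systems comes from outside the algorithm (unseeded RNGs, thread scheduling, floating-point reduction order), not from the learning procedure itself.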

~~~
jtolmar
We live in a deterministic universe. You'll need a different definition of
autonomy if you want anything to have it.

------
VLM
"There has been a domineering subculture—that's been the most wealthy,
prolific, and influential subculture in the technical world—that for a long
time has not only promoted the idea that there's an equivalence between
algorithms and life, and certain algorithms and people, but a historical
determinism that we're inevitably making computers that will be smarter and
better than us and will take over from us."

On a different discussion site I recently pointed out that there is a similar
religious belief or dogma in the idea of the self-driving car; that went over
like a lead balloon. People acting in a religious manner and expressing
religious belief don't like to have that pointed out to them. Probably
vestigial monotheism: if they act religiously at their traditional church on
Sundays, they don't like it pointed out that they worship at a different altar
online, or the atheists get really wound up when they are called out for
joining the atheist movement but acting as deacons of the church of the
self-driving car of the future.

Anyway, just saying it's a line of reasoning that, while true, can only lead
to unpleasantness. Like discussions of scientific racial/gender differences or
discussions about IQ, all you're gonna get is social signalling screaming "see
no evil".

~~~
AndrewKemendo
_vestigial monotheism_

That's a fantastic metaphor.

I'm curious, though, how someone could act in a religious manner about a
self-driving car.

To the original point, though: I am very much in the camp of "historical
determinism that we're inevitably making computers that will be smarter and
better than us", and here is why: I believe that intelligence (and
"consciousness", if you want to go down that rabbit hole) is completely
material, and as such it is possible that we will eventually understand the
mechanics of them.

If we can understand those, then history would indicate (historical fallacy)
that we can replace or replicate those mechanics with systems or materials
more durable than the fairly fragile ones we have.

~~~
TheOtherHobbes
That's a religious position. We have absolutely no idea what consciousness is
or how it works.

More than that, we don't even have good ad hoc models for motivation and
personality, never mind useful formal and explicit models.

I think Lanier is absolutely right, and I think there's a strong quasi-
theological element running through all of AI and programming.

Programming is a lot like making spells and incantations. If you get them
_just so_ you get the outcome you want. But you have to speak the language of
the system to make that possible. And you have to be very careful about
unintended consequences.

There's something very medieval about this - both in the sense of the pure
scholasticism of academic CS, but also in the practical sense of knowing how
to formulate the correct "prayers" to make useful things happen.

Lanier seems to be pointing out that CS is still haunted and influenced by
these religious metaphors, and that AI is the most visible example of that.

I think he's right - and more, I think that real AI, in the sense of
autonomous personalities, won't be possible until that's no longer true.

~~~
simbilou
> That's a religious position. We have absolutely no idea what consciousness
> is or how it works.

Well, we know that it exists, or at the very least seems to exist. And there
is nothing in the physical world that we have understood so far that cannot
ultimately be expressed computationally, even if that's usually not the most
useful formalism. It is of course somewhat of a leap of faith to then say
_everything_ is computation (or can be expressed as computation), but I would
argue that science already made this leap in its infancy: the Book of Nature
is written in the language of mathematics... Admittedly, modern philosophy of
science is a bit more nuanced, but the old statement from Galileo basically
stands. Now, I agree that it's not entirely rational, but it certainly isn't
"religious".

So yeah, I think there are very good arguments to be made that consciousness
can, at least in principle, emerge from some kind of computation. That
computation could be a complete simulation of every atom in a brain, or maybe
some shortcut is possible; it doesn't matter for my point. That does not mean,
however, that I think it can be made or will be made, since, as you pointed
out, we have no idea what we are talking about. I completely agree with Lanier
on this: the tech industry should stop focusing on this nonsense.

~~~
AndrewKemendo
_It is of course somewhat a leap of faith to then say everything is
computation (or can be expressed as computation)_

You don't need any kind of leap of faith to actually start working on it.
That's the wonderful thing about AI and AGI more broadly. You can actually go
work on it, today. Are you going to solve it immediately? Of course not, but
at least you can chip away at the problems.

------
natch
The Edge has lost it.

Jaron's essay was a fantastic read, highly recommend it, but the respondents
are not engaging with it at all.

They are just using the Edge as a platform to promote their own ideas. Most of
them clearly did not read what he wrote. Even though I respect most of the
participants, the platform is not eliciting their best.

------
robotcookies
I don't think this is as much about AI as it may seem. I mean, yes, it's about
AI. But what it's really about is a human tendency: to create things and
meaning when we don't necessarily have evidence for them.

What this is really about is religion. More specifically, we have people who
are the most non-religious group out there (like scientists) and they dismiss
formal religion only to recreate another version of it:

"A core of technically proficient, digitally-minded people reject traditional
religions and superstitions. They set out to come up with a better, more
scientific framework. But then they re-create versions of those old religious
superstitions!"

And I see this more often than just in computer science. It happens all the
time in economics.

------
FreedomToCreate
There is a point in the article which the author kind of glosses over that is
really important. A lot of the bots (Siri, Cortana, Slack bot, etc.) are
refined through free data provided by the masses. These systems get smarter at
what they do, essentially for free.

And his second point: once an AI understands its users' preferences, its
ability to change slows down, because unlike when it started, it ignores
virgin data and only follows what it has been taught. That is a huge potential
issue. Seems like a topic ripe for researchers in machine learning to jump on.

~~~
tsunamifury
He might have seemed to gloss over it because he's written an entire book
about it, called "Who Owns the Future?".

------
nickpsecurity
Another thing I'll add: many people forget that the only known
super-intelligences, savant humans, still take years of training data and
introspection to get their intelligence to function in society. It takes even
more to function well, especially accounting for human nature.

So, I call bullshit on the idea that there's a startup anywhere that's just
going to turn on an AI we can't handle. We'll likely see it coming a mile
away, or the isolation it operated in will do it in when it faces human
strategists.

------
cygnus_a
I think there are two valid existential concerns regarding AI:

1) Widespread use of advanced AI in robotics/production has the ability to
bankrupt the working populace, at which point something closer to pure
socialism may take over; not so much an annihilation scenario.

2) AI is weaponizable -- at a certain point, given other technological
advances in addition to AI, a relatively small number of like-minded people
will be able to produce unique weapons of mass destruction.

~~~
amag
1) Or we enter a post-scarcity era where there will be an abundance of food
produced by autonomous machines. No one "needs" to work in order to feed
themselves.

2) There are a lot of things that are weaponizable - today.

~~~
cygnus_a
1) yeah! socialism!! 2) exactly! like a quadcopter with a handgun attached :)

------
moron4hire
God, what a lucid, piercing look at reality and the human condition he has.
The whole part on translators, just brilliant.

------
jerf
It is, in some sense, a very simple problem. When and if we advance to the
point that AI algorithms are themselves capable of writing AI algorithms, what
happens then? Quantify (albeit vaguely) the rate at which AI algorithms can
improve AI algorithms. If that rate turns out to be roughly as slow as human
beings', there's no significant problem. If it turns out to be greatly faster
than human beings', that's a problem.

It is not particularly clear that the rate is large; hard problem is hard.
No sarcasm; arguably "general intelligence" is the hardest possible
engineering problem. Adding a bit more intelligence to the problem when it
will already be getting worked on by a lot of humans may not move the needle
much to speak of at all. But it is not particularly clear that that rate is
small either; humans are smart, yet at the same time really dumb in a lot of
ways. We have terrible working memory (7 +/- 2 items is absurdly small). We
have lots of irrational biases. We have lots of things to do in our lives that
are not "thinking" (eating, sleeping, etc.), and the vast majority of our
brain is dedicated to those problems, not rational thought. We are terrible at
manipulating vast symbol systems without taking immense shortcuts, which
inevitably color our manipulations. What happens when something that lacks
those restrictions is turned loose on the problem of writing AI algorithms?

Heck if I know. I'm a human too.

I honestly think that those people who are utterly convinced the rate must
necessarily be slow are just as wrong as those who are utterly convinced it
must necessarily be fast. We really don't know, _but_ it is hardly an invalid
concern.

So far, AI algorithms have not written very many AI algorithms. I've seen some
toys where one AI algorithm is hooked up to tune another one, but I'm not sure
if any of them have ever been practically useful. (All I know is all the ones
I've personally seen have amounted to toys.) And the system as a whole is
generally constrained by what the final AI algorithm can output anyhow; if the
domain and range of the AI function are immutably fixed, there's not much the
AI-meta-system can do to "escape" and do anything terribly nasty.
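A minimal sketch of what "one AI algorithm hooked up to tune another one" looks like in toy form (all names here are hypothetical, with a random-search tuner standing in for the meta-AI). Note how the meta-system's domain and range are immutably fixed: everything it can ever do is emit one learning rate and observe one loss.

```python
import random

def inner_learner(lr, steps=50):
    """Inner 'AI': gradient descent pulling w toward 3; returns its final loss."""
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)            # gradient of (w - 3)^2
    return (w - 3) ** 2

def meta_tuner(trials=200, seed=0):
    """Outer 'AI': random search over the inner learner's one tunable knob."""
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    for _ in range(trials):
        lr = rng.uniform(0.0, 1.0)       # fixed domain: a single float in [0, 1)
        loss = inner_learner(lr)         # fixed range: a single loss value
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr, best_loss

best_lr, best_loss = meta_tuner()
```

However clever the tuner becomes within this loop, the interface gives it nothing to "escape" into; widening that interface is precisely what would change the risk picture.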

I do think we're still a ways away from this being an issue. AI is currently
still very much constrained to problems a great deal simpler than "writing AI
algorithms", which is right up there with the hardest possible things that
_human_ intelligences are currently capable of, and that only to a rare few
highly trained _and_ highly talented individuals. We're easily decades away
from this problem. But when the day comes, well, I wouldn't _bet_ on the self-
improving AI experiencing multiple orders-of-magnitude improvement in mere
minutes, but I'd sure hate to bet the future of the species that it's not
possible. It is not unreasonable to be concerned that the knee could be very
steep. We'll know more as we get closer.

