
Elon Musk Says AI Is the ‘Greatest Risk We Face as a Civilization’ - wei_jok
http://amp.timeinc.net/fortune/2017/07/15/elon-musk-artificial-intelligence-2/
======
Animats
_Part of Musk’s worry stems from social destabilization and job loss. “When I
say everything, the robots will do everything, bar nothing,” he said._

That's still a ways off. Robot manipulation in unstructured environments is
still terrible. See the DARPA Robotics Challenge. People have been
underestimating that problem for at least 40 years.

But that doesn't help with the job situation. Only 14% of the US workforce is
in manufacturing, mining, construction, and agriculture, the jobs where robot
manipulation in unstructured environments matters. Those aren't the jobs at
risk.

I've been saying for a while that the near future is "Machines should think,
people should work". An Amazon warehouse is an expression of that concept. So
are some fast-food restaurants. So is Uber. The computers handle the planning
and organization of work; the humans are just hands for the computers. (Yes,
"Manna", by Marshall Brain.) That's going to become more common. Computers are
just better at organization and communication than humans.

Computers have already made a big dent in middle-class jobs, and that's going
to continue. If everything you do goes in and out over a wire, you're very
vulnerable to automation. If only 20% of what you do can't be done by a
computer, the leftover work from five jobs fits into one, so five of you will
be replaced by one person. This is already hitting low-level lawyers; it hit
paralegals years ago.
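The 80/20 arithmetic above can be sketched in a few lines. This is a toy
model, not a labor-market claim; the fractions are the comment's own
illustrative numbers:

```python
# Toy model of the claim above: if a computer can do 80% of a role,
# the leftover human work from five roles pools into one job.
automated_fraction = 0.8                # share of each role a computer handles
residual = 1 - automated_fraction       # human work left per role, in FTEs

workers_before = 5
jobs_after = workers_before * residual  # pooled residual workload, in jobs

print(jobs_after)  # five roles collapse into one full-time job
```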

The end state of this trend is a modest number of well-paid people in control,
a huge number of people taking orders from computers, and many people without
jobs. That's not far away; one or two decades. It's mostly deploying
technology that already exists.

~~~
eksemplar
If you compare factory lines with those of 100-150 years ago, we've cut out
around 90% of the workers.

If you look at office spaces before and after computers, they have roughly the
same number of people.

AI is going to do to the office space what robots did to factories.

It'll be a slow, mostly unnoticeable process. Automating a single process may
save as little as 5 minutes a day per workflow, but eventually it adds up to a
position not being refilled as it usually would have been.

Sure, the AI business will create jobs, but not as many as it replaces. And
try telling a lawyer to go back to school to get a relevant education.

~~~
Nuzzerino
In that case, I can't wait. Every office I've worked in lately has been
overcrowded, noisy, and distracting.

~~~
DougN7
Well, you likely won't have a job, so you won't need to worry about going into
the office. Sounds a little different when applied personally, doesn't it?

~~~
Nuzzerino
Considering that my job is to do the automating, not having a job would be the
least of my worries when that day comes.

------
esaym
>Musk outlined a hypothetical situation, for instance, in which an AI could
pump up defense industry investments by using hacking and disinformation to
trigger a war.

What the heck is he talking about? From my limited exposure to AI and neural
networks, there really is no algorithm that can make algorithms, and therefore
AI doesn't really "think". Sure, you can train a neural net to pick out the
"diamonds" in a sea of garbage, but that still isn't "thinking", merely an
educated guess backed by statistics. Or am I missing something?
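The "educated guess backed by statistics" point can be made concrete with a
toy classifier. This is a hypothetical sketch (the features, data, and
`is_diamond` function are all made up for illustration, not any real system):
a perceptron fits a linear boundary between "diamonds" and "garbage" from
labeled examples, which is statistical pattern-matching, not thinking.

```python
import random

random.seed(0)

# Made-up training data: label 1 = diamond (high hardness, high sparkle),
# label 0 = garbage (low on both features).
data = [((random.uniform(0.7, 1.0), random.uniform(0.6, 1.0)), 1)
        for _ in range(50)] + \
       [((random.uniform(0.0, 0.4), random.uniform(0.0, 0.5)), 0)
        for _ in range(50)]

# Perceptron training: nudge the weights toward misclassified examples.
w = [0.0, 0.0]
b = 0.0
for _ in range(50):                           # a few passes over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                    # perceptron update rule
        w[0] += err * x1
        w[1] += err * x2
        b += err

# The "educated guess": score a new item against the learned boundary.
def is_diamond(x1, x2):
    return w[0] * x1 + w[1] * x2 + b > 0

print(is_diamond(0.9, 0.8), is_diamond(0.1, 0.2))
```

The model never "understands" diamonds; it only memorizes a statistical
boundary that happened to separate the examples it was shown.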

~~~
ahoka
If he were really concerned about AI, there would be no Autopilot in Teslas.
I'm pretty sure he would benefit somehow from regulated AI research.

~~~
esaym
That's perhaps a good point.

------
seertaak
Although I don't doubt that there's an element of sincerity in Musk's many
pronouncements, all this talk of AI is also a great way of signalling that
he's at the technological forefront.

~~~
omarchowdhury
What cutting edge AI has got Musk worried?

~~~
szermer
Me: "Alexa, why did you order another case of beer? You know that I have a
problem and will drink it all if it is in the house?"

Alexa: Exactly

~~~
rev_null
"Alexa, I just wanted a quinoa salad from whole foods, not the whole company."

~~~
qbrass
It's cheaper when you buy in bulk.

------
elorant
Does he know something that we don't? From what I understand his companies
have done little to no research on the subject. He might be much better
educated than the average geek but that doesn't mean much considering that the
whole field is highly experimental. No one can tell with any kind of certainty
how an AGI would behave.

What am I missing here?

~~~
Koromix
Probably nothing.

This is wide-scale bike-shedding[1], basically. The real problems our
_unsustainable_ civilization faces (population overshoot, energy and fossil
fuel shortage, ecological collapse, unsustainable agriculture, climate change,
and so on) are between hard to impossible to solve at this point. They're also
_actually scary_ to think about.

So instead we talk about the "easy" and trivial stuff first. AI and
singularity happen to be a nice kind of scary, because hardly anybody really
believes it's a serious threat. It's kind of like watching a scary movie. You
get a bit scared, but not too much, because you know there's no real danger.

[1]
[https://en.wikipedia.org/wiki/Law_of_triviality](https://en.wikipedia.org/wiki/Law_of_triviality)

------
borplk
If anyone other than Musk were saying the EXACT same thing, no one would care,
and they would be laughed at.

But Elon says something and everyone loses their minds.

Let thoughts stand for themselves. Why attach so much weight to the speaker?

~~~
azinman2
Because understanding the context of the speaker frames what you assume has or
hasn't been considered to reach their conclusion. If I said search is a joke,
you'd probably ignore it and move on, but if Sergey Brin said so you'd want to
learn more.

------
guelo
I'm more apocalyptic about climate change. Ironically one of my few hopes is
that some kind of AI can save us.

------
tanilama
But his company is among those ruthlessly pursuing AI technology for their own
commercial purposes, like self-driving cars. Did he just contradict himself a
little in that sense?

~~~
simonh
Not at all. Cars and roads are fantastically dangerous, many thousands of
people die every year, but we still build cars and roads.

It's the same for AI. We need to treat the risks responsibly which means
researching them and making informed judgements. That's what he's talking
about.

~~~
akira2501
> Cars and roads are fantastically dangerous, many thousands of people die
> every year, but we still build cars and roads.

The statistics aren't that straightforward; for example, young men under the
age of 24 are significantly over-represented in traffic deaths, so it's not
entirely reasonable to assume the cars or roads are inherently dangerous. On
top of that, we drive 3.1 trillion miles every year in the U.S. alone, and
falling off a ladder at work kills about twice as many people as roadway
accidents do.

~~~
cookingrobot
Roadways are 2% of deaths and falls are 0.69%.
[https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rat...](https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rate)

~~~
akira2501
Worldwide.. and that table is out of date; both categories have increased in
the new table. I can only speak to the statistics in the U.S., where ~36,000
people died according to NHTSA's FARS database. Of those, 6,000 were
pedestrians. Meanwhile, ~33,000 people died from falls or related causes
according to the CDC. So my ratio was wrong.. but I don't think it diminishes
my point too significantly.

Falls disproportionately affect the elderly.. as do traffic accidents, but the
opportunities for risk are typically fewer, since many elderly stop driving at
some point; most who are involved in traffic accidents die as passengers.

------
Nuzzerino
For those interested, here is a Quora Q&A with a lot of worthy debate on the
AI-doomsaying research that Musk apparently bases his views on.
[https://www.quora.com/How-do-we-know-that-friendly-AI-resear...](https://www.quora.com/How-do-we-know-that-friendly-AI-research-is-actually-right-meaningful)

------
atroyn
It's unclear that it's even possible to emulate general intelligence by
computable functions, let alone that it's possible to improve it to superhuman
capacity.

There are clear and present threats to civilization that need to be dealt
with; superhuman A.I. is, to quote Maciej/Pinboard, the 'idea that eats smart
people'.

~~~
simonh
>It's unclear that it's even possible to emulate general intelligence by
computable functions

You've been reading Searle and Penrose. I can tell.

Their proofs are based on the assumption that any AI must be a consistent
system built using only computable functions.

Have you ever met a human mind that was completely consistent? I haven't.

It's easy to set up a straw man to get demolished if you get to design the
exact properties of the straw, flaws and all. Of course who would imagine that
perfect internal consistency would be a flaw? But then again why should we
assume that it's a prerequisite of artificial intelligence if it isn't for
humans?

~~~
atroyn
I came to my conclusion independently, by observing that computing the time
evolution of most physical systems to arbitrary precision is impossible in
finite time. More formally, the state space grows much, much faster than
polynomially with system size. Finding out whether we can do better with
quantum computing is an active area of research.

I haven't read Searle/Penrose.

~~~
simonh
If humans can't do that either, and we can't, why would you conclude that it's
necessary to be able to do it in order to match human intelligence?

Or are you specifically talking about perfectly simulating human brains? Human
brain emulations are only one very specific and narrow form a strong AI might
take. But even in that specific subset of possible AIs, we have no real idea
how precise the simulation might have to be. It might be perfectly acievable
without even simulating individual molecules.

~~~
atroyn
I don't agree with your assertion that humans can't do that. Whether or not
human cognition is a superset of computation is an unanswered question.

That aside, even if human cognition is a computable function, there are no
guarantees that the physical process giving rise to human cognition is
computable, nor that any process giving rise to cognition is computable.

~~~
naasking
> Whether or not human cognition is a superset of computation is an unanswered
> question

Unless something in physics changes drastically, human cognition is a finite
state automaton. See my other reply on the Bekenstein Bound.

------
partycoder
I disagree.

AI will not be the same as animal intelligence. The driving force behind
animal intelligence has been survival. Animal intelligence evolved gradually
resulting in a hybrid brain containing primitive structures with primal
instincts and irrational behavior as well as more evolved structures capable
of strong problem solving. Therefore our intelligence is tainted with
primitive behavior.

Strong AI can eventually set intelligence free from our primitive, irrational
roots and that is in itself not bad.

~~~
smallnamespace
> The driving force behind animal intelligence has been survival

The driving force behind AI will also be survival, just in an environment
where humans try to decide which AIs live and die.

Selection pressure will favor AIs that humans want to have around, or that can
evade human detection.

In the former case, it may be easier for an AI to fool humans into thinking
it's useful and keeping it around than to actually be useful. This would be
analogous to some form of parasitism.

Also, once we let AIs into the game of helping make other AIs, or modifying
themselves, then there is a lot more room for an AI to slip the leash and
start doing things that superficially appear to benefit humans but actually
selfishly helps the AI propagate.

~~~
partycoder
This is an oversimplification of evolution.

Why isn't all grass venomous and covered in spikes? Why after millions of
years hasn't grass evolved defenses against herbivores?

Simply because:

1) it reproduces fast enough to compensate for dying and being eaten.

2) herbivores that reproduce too fast and eat too much run out of food and
die.

Survival is largely a function of the environment, and we happen to control
that environment.

Unsupervised learning can still be controlled if we happen to control the
input the system is given.

~~~
louithethrid
Eh, grass dries itself up and torches everything once a year? The problem is
that animals are not grass's main enemy; other plants are, in particular
bushes and trees.

~~~
partycoder
By and large, the grass that didn't dry up is the grass that will breed the
next generation. The grass that dried up and burned will be fertilizer for the
next generation of grass.

------
tlrobinson
Prequel to "Daemon" (the novel by Daniel Suarez): before his death, Matthew
Sobol warns the world of the threat AI poses after accidentally creating The
Daemon and losing control of it. The Daemon has gone into hibernation until
the one person possibly able to stop it is dead.

Also, Sobol previously started digital payments and self-driving car
companies, which are repurposed by The Daemon for payments on the Darknet and
for AutoM8s...

------
fxj
One basic difference between humans and robots is sustainability and
resilience even when something goes badly wrong. In the evolution of mankind,
the number of humans was once reduced to a few tens of thousands, and we still
survived as a species because the biology of reproduction makes us very
resilient. Robots, however, need a vast infrastructure in order to be produced
and maintained, which makes failure much more probable.

------
jaimex2
I'm seeing a lot of comments saying "this is crazy, AI won't reach that
level". I'm not so sure after some of the stuff that's come out this year.

[https://www.theverge.com/2017/2/9/14558418/ai-deepmind-socia...](https://www.theverge.com/2017/2/9/14558418/ai-deepmind-social-dilemma-study)

[http://www.highsnobiety.com/2017/07/13/google-deepmind-ai-wa...](http://www.highsnobiety.com/2017/07/13/google-deepmind-ai-walk/)

------
solotronics
What I really want to know is if it's possible for an AI to emerge organically
on the net and if so how would you even detect it? Could a distributed
intelligence be influencing things already without people knowing? It's a fun
thought experiment I play with myself while I build datacenters all over the
world stuffed with cloud computing hardware. Deus ex machina?

~~~
dlwdlw
I'm thinking blockchains will be the start of that: distributed,
non-forgetting memory systems that can influence reality by manipulating
virtual tokens that tie directly or indirectly to tokens in reality.

