
Elon Musk: 'We are summoning the demon' with artificial intelligence (2014) - edward
http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/
======
pdkl95
We _are_ creating a "demon" with artificial intelligence, but the origin
isn't machine learning or Kurzweil-style AIs. In fact, the demon already
exists, and is known by many names: "bureaucracy", "corporate policy", "work-
to-rule", etc.

We already have many situations where important decisions are made by
algorithm and heuristic. The "demon" appears when the computer is followed
blindly, even in instances where its output doesn't make sense. We have all
seen times when the heuristic failed; sometimes this led to stupid outcomes
that "must" be followed, where a human with decision-making power would have
made the obvious override.
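The failure mode described above can be sketched in a few lines. This is a toy
illustration only; the rule, names, and numbers are hypothetical, not anything
a real system uses. The point is structural: the heuristic fires and the code
has no path for a human override.

```python
# Toy sketch of a heuristic followed blindly: once the rule fires,
# there is no code path for a human override, variance, or appeal.

def heuristic_flags_account(account):
    """Hypothetical rule: any account with more than 3 reports is 'abusive'."""
    return account["reports"] > 3

def automated_decision(account):
    # The outcome is final; note the absence of any appeal branch.
    if heuristic_flags_account(account):
        return "suspended"
    return "ok"

# Five bogus reports (say, from a mass-reporting bot) trip the rule,
# even though a human reviewer would make the obvious override.
account = {"user": "alice", "reports": 5}
print(automated_decision(account))  # prints "suspended"
```

The demon isn't the three-line rule itself; it's that `automated_decision` is
the only interface anyone is allowed to deal with.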

To continue the metaphor, you can _summon_ a demon if you know its true name.
Bureaucracy and "official policies" are hard to change once they are created,
so the people who control who gets to _write_ the heuristics control the
demon, allowing attacks to be hidden behind policy.

As an example, consider YouTube's "Content ID" system. Failures of the
matching algorithm have been a problem for some time now, with users left
facing ultimatums and the brick wall of Google's "support". Even more
problematic are the cases where the Content ID system _did_ match correctly,
but the situation was covered by something outside of YouTube's system.

I'm not saying automation is bad or that tools like Content ID shouldn't
exist. They are merely tools. The problem comes when we try to substitute a
tool for critical thinking and judgement. Those problems become a powerful
demon when you are forced to deal with only that tool: no questions, no
variances, and no appeals.

/* Incidentally, this is yet another situation where an episode of Max
Headroom has become _literal_ truth. (
[http://www.maxheadroom.com/mh_episode_14.html](http://www.maxheadroom.com/mh_episode_14.html)
) */

~~~
madaxe_again
You touch upon something here which I've been thinking about for a while.

AI is already here. It has been for a while. We call it "the state", or "a
corporation", more often than not.

We have created edifices and structures of control which take inputs and
present outputs. A corporation or government can be said to have thoughts,
opinions, and can certainly act.

We witness acts of heartless bureaucracy all too frequently, as you touch upon
with Content ID - and this, ultimately, is little different to the oft-touted
paperclip maximiser problem.

I view these structures as meta-consciousnesses, which are built on rules and
systems of working, and invoke human minds, typically in groups (ah, Abilene),
where necessary.

To bring about AI in the "it's done by computers" sense mandates replacing
those human components with automated ones, which we're already doing.

I'd say that, as you say, it's already here in a weak form, and it's already
clear that without careful governance and thought put into the construction of
these systems, we run a severe risk of unintended consequences.

------
tbabb
In my opinion, this all comes down to smart people— who are not computer
scientists— believing Ray Kurzweil's absolutely batty ideas about the
singularity, which he is deeply emotionally over-invested in because he
doesn't want to die:
[http://www.slate.com/articles/technology/future_tense/2013/1...](http://www.slate.com/articles/technology/future_tense/2013/11/ray_kurzweil_s_singularity_what_it_s_like_to_pursue_immortality.html)

~~~
TeMPOraL
I don't think it has much to do with Kurzweil - more likely with Bostrom's new
book. But generally, I see it as smart people noticing various feedback loops
we live in and extrapolating the potential consequences. In a way,
bureaucracies, corporations and the market itself are AIs, just (currently)
very slow ones.

------
taliesinb
For anyone who is interested, Nick Bostrom's excellent "Superintelligence" is
the book that inspired Elon to this dire pronouncement.

~~~
davidwihl
Having read the book recently, it is quite clear that Bostrom has never coded
or built any ML or AI implementation and therefore lacks credibility on the
subject.

~~~
joe563323
Well that may be true, but Elon has done coding and hence his words are worth
considering.

~~~
setpatchaddress
Writing a game in BASIC in the 80's is like coding a website in PHP now. It
doesn't qualify one as an expert in AI. Not even if you dip a little into 6502
assembly language. Really.

The cult-of-personality aspect to this is a little frightening. Smart people
aren't always smart, and they are more than capable of oversubscribing to
their own intelligence.

The Lego comment is spot-on.

(Or was this sarcasm?)

~~~
dragonwriter
> Smart people aren't always smart,

More importantly, smart people that are experts in one domain aren't
necessarily experts in all domains in which they have some contact.

------
1971genocide
I really like the joke that someone made on twitter a while ago about this:

"The probability of creating AI that will accidentally destroy civilization is
same as the danger that a kid playing with lego will create a nuclear weapon.
"

I don't remember the exact wording but it was something similar to that.

Working with machine learning and AI algorithms all day, I find it laughable
to think that our industry will reach the singularity anytime soon.

This type of thinking will ultimately result in government restrictions that,
in my opinion, slow down advancement in the field without any real benefit.

The hardest thing to understand is that intelligence is not a uniquely human
thing. It's just a tool humans use, like our hands or the scientific method,
to get things done.

There might come a time when a group like ISIS gains access to some sort of AI
algorithm to do bad things, but I think the good people will have access to
that algorithm well before the bad guys, and the best way to let the good guys
gain access to general AI is to not restrict what they are doing.

~~~
CamRippa
[https://twitter.com/ryan_p_adams/status/563384710781734913](https://twitter.com/ryan_p_adams/status/563384710781734913)

------
adamconroy
Hawking and Musk should stick to making comments on topics they understand.

------
deciplex
Humans already die and are replaced by other intelligent beings every day.
It's been going on for as long as we've had humans. It sucks, but it sucks
regardless of whether the replacement was created by means biological or
technological. Once you're dead, you're dead.

I would prefer transhumanism, or at least something like The Culture, of
course, but I find even violent replacement of humans by a superintelligent
being preferable to us wallowing here on Earth until we go extinct, provided
the replacing superintelligence is not too monomaniacal (cf. the paperclip
maximizer) and goes on to explore the stars after taking care of us.

I find Elon Musk and Stephen Hawking are being overly anthropocentric here,
and if we're going to start creating intelligent beings at a level equal to or
exceeding our own, we should probably all start learning how to be less
anthropocentric.

