
Debunking myths about artificial intelligence - nikbackm
http://arstechnica.com/information-technology/2015/12/demystifying-artificial-intelligence-no-the-singularity-is-not-just-around-the-corner/
======
Moshe_Silnorin
The author hasn't taken the time to read the arguments for AI risk. First,
citing Kurzweil as a proponent of AI risk is very odd. Kurzweil thinks AI is
the bee's knees and doesn't see much risk in it, AFAICT. Second, neither
Bostrom, MIRI, nor the Future of Life Institute claims Moore's law will
continue indefinitely, or in any way endorses Kurzweil's work.

As for his other comments on AI risk, here's what I wrote last year on a
similar thread:

Nobody is afraid of today's AI algorithms. But if we make machines that are
smarter than us and have desires, they will influence the future to achieve
their desires. If these desires conflict with our own, things will not end
well for the dumber party.

As we really have no idea what we, collectively, regard as a moral terminal
goal, much less how to formalize one, there is no reason to expect the
first AIs to have goals that correspond to what we want. If AIs self-replicate
in a competitive ecology, what would be selected for would be agents millions
of times more intelligent than us who use their intellects only to make more
copies of themselves - using all available resources including those we need
to survive. I'd recommend this summary of the arguments:
[https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-of-ai-45f69c9ee204#.shqn6fqa4](https://medium.com/@LyleCantor/russell-bostrom-and-the-risk-of-ai-45f69c9ee204#.shqn6fqa4)

~~~
CM30
Don't these arguments kind of assume a few things though? Like, 'strong' AI
being implemented into things that really don't need it? I mean, the paperclip
example in the Medium article... did that factory ever need AI involved?
Couldn't a simple computer program no more advanced than those of today do the
exact same job... without the risk of all that 'thinking and making copies of
itself' stuff?

Maybe I'm being naive here, but you don't need strong AI in everything.

And they also kind of assume that there's only one AI at a time (what if two
factories are run by AIs designed to build different things, and they both end
up in this sort of situation at the same time?), that everything is connected
to the internet in some way, that humanity couldn't simply wipe out anything
that poses a threat (and no, an AI in a factory environment wouldn't get
access to anything that would allow for nuclear/chemical/biological weapons).
Just seems like a lot of this speculation is based around a society that's
making tons of careless mistakes one after another. But maybe I'm missing
something here.

~~~
cousin_it
Don't focus on one scenario leading to disaster. There are many scenarios
pointing roughly in the same direction. Here's the argument in its shortest
form:

The current situation, where unmodified humans are the smartest creatures
around and call the shots, is unstable. It could go several different ways
depending on the desires and quirks of the first superhuman intelligence to
appear. It could happen via math-based AI, nature-imitating AI, self-improving
mind uploads, biological intelligence amplification, or other means. We're
making fast progress on all these fronts, so I'd be surprised if at least one
of those didn't happen within the next century, possibly much sooner. Since
it's an arms race between many competing organizations, it's unrealistic to
expect that all superhuman intelligences will be kept powerless. So one way or
another, it will happen. How do we ensure that humanity and its values survive
the transition?

~~~
click170
> How do we ensure that humanity and its values survive the transition?

I think a better question is do we deserve to.

~~~
MBlume
Which of the people you know personally do you think deserve death?

------
tambourine_man
Very weak article.

It prepends several statements with “myth:” and does very little to debunk
them.

Example: _AI won't spin out of control because Moore's law is nearing its
end._

First, is it really? Just because we can't reliably move electrons much
faster through much smaller feature sizes? What's stopping new materials,
spintronics, photonics, and what have you from carrying us through the next
decades? I've been hearing that the end of Moore's law is "coming real soon"
since I was a kid. Lots of clever explanations guaranteed there was no way we
could move past 200nm or so due to the physical size of the wavelength used in
lithography. And yet here we are, routinely using crazy stuff like
phase-shifting masks and interference patterns.

Second, the brain seems to be a very slow and massively parallel machine, so
maybe transistors are plenty small and fast already, we just need to ditch the
Von Neumann architecture.

Third, it only takes a single strong AI emerging for it to be a problem. I
don't see the clouds of these megacorps getting smaller any day soon.

I don't really know what I'm talking about, of course, but it takes stronger
arguments than _because quantum tunnelling and the speed of light_ to make
this case.

~~~
Qwertious
The fundamental driver behind Moore's Law is shrinking the circuits. The
problem is that the circuits are made of atoms that shuttle electrons, and
electrical circuits fundamentally _cannot_ get smaller than a single atom,
because electricity is the movement of electrons between atoms.

We could, in theory, use other technology (some sort of photon-based
computer?) to get smaller (and therefore faster) computers, but that wouldn't
have the Moore's law mechanism.

~~~
sawwit
Though, aren't there also plenty of other mechanisms that affect it? For
example, increasing the speed of manufacturing processes and improving our
knowledge of how to design complex systems. The endpoint seems to be extremely
dense blocks or spheres of highly optimized computation substrate, where each
atom sits precisely in the right place. But the state of the art is merely a
few layers of crude photolithography. Imagine nanobots programmed to move
atoms around to assemble 3D circuits at the 10nm scale. I think this is where
we are heading, and it will likely grant a couple more decades of continuing
exponential progress. By then our computational power will be insane; an
equivalent of the simplest human brain models will likely fit into a pea.

------
numinary1
The arrogant subtext, that machine intelligence is nothing like human
intelligence and is therefore inherently inferior and nothing to be afraid of,
is more disturbing than the prospect of artificial intelligence gone awry,
because it reminds us how far awry natural intelligence routinely goes.

~~~
argonaut
I find it equally arrogant to believe that humans will achieve strong AI any
time in the near future.

~~~
Rapzid
I'm not convinced we are any closer to creating human-level AI than we were
before we invented computers.

~~~
argonaut
We are closer, but we are still far, far away.

------
ziedaniel1
It's understandable that many people see the disconnect between fears about
strong AI and the much more limited capabilities of AI today, and think that
the fears must be overblown. However, this article takes that gut reaction and
uses it to justify labeling the views of many prominent philosophers and AI
researchers as simply a "myth", without even beginning to address their
arguments. I strongly recommend reading Superintelligence, by Nick Bostrom.

------
frooxie
That article is so naive and misinformed that I was seriously wondering if it
was a prank. I suppose it's a nice read if you want to see someone obliviously
ramble about things entirely unrelated to the actual arguments for AI risk
scenarios.

------
abdias
Found this a little ironic, in a funny way: the article was written in
association with IBM. Funny enough, the movie 2001, which the article's images
are from, hints at IBM in an obscured way. For example, the name HAL is IBM
with each letter shifted one character back, and you can even see the IBM logo
on some machines in the movie.
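
Whatever Kubrick actually intended, the letter arithmetic behind the theory does check out: shifting each letter of HAL forward one alphabet position gives IBM, and shifting IBM back one gives HAL. A quick Python sketch (the function name is mine):

```python
def shift_letters(word, offset):
    """Shift each uppercase letter by `offset` positions, wrapping A-Z."""
    return "".join(
        chr((ord(c) - ord("A") + offset) % 26 + ord("A")) for c in word
    )

print(shift_letters("HAL", 1))   # IBM
print(shift_letters("IBM", -1))  # HAL
```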

That aside, I think that unless IBM knows the future and all the unknowns, we
cannot really be sure what will come out of AI. Which is why we should be very
careful with it.

~~~
CM30
No, the point about HAL 9000 being named after IBM isn't true:

[http://www.visual-memory.co.uk/faq/index.html#slot7](http://www.visual-memory.co.uk/faq/index.html#slot7)

The video game company HAL Laboratory (maker of the Kirby series), however,
did name itself so that each letter comes one before IBM:

[http://www.nintendolife.com/news/2012/11/iwata_explains_wher...](http://www.nintendolife.com/news/2012/11/iwata_explains_where_the_name_hal_laboratory_came_from)

~~~
abdias
Thanks, I stand corrected. It was a fun theory, though, and it will probably
not go away anytime soon.

Kubrick himself claimed HAL stood for "heuristic and algorithmic" according to
this article:

[http://www.slate.com/blogs/browbeat/2013/01/07/hal_9000_ibm_...](http://www.slate.com/blogs/browbeat/2013/01/07/hal_9000_ibm_theory_stanley_kubrick_letters_shed_new_light_on_old_debate.html)

------
Fricken
Jabbering and laughably geriatric Rupert Goodwin groans 'Get off my lawn!';
spilling his senility swill all over the singularitarian sycophants.

A stalwart moore's law mooter masterminds against the myths of demon-summoning
Musk the martian overpopulator and makes mockery of Hawking the Ad-hock Spock-
talker.

A luddite weasel word windbag from linearland soothsays naysayers with another
numbingly nominal caveat-riddled AI narrow now and forever 'nuffsaid.

An obstinate bunghole spelunker debunks from his spooge-buttered AI winter
dunce bunker.

From unsupervised pulpit comes a sump-pumping ass-pastor's anti-diluvian
Deepmind denuding diatribe defusing the delinquent debut of an indocile data
detonation.

~~~
mrSugar
Ok, we are still safe. As parent demonstrates, machines still can't even
compose coherent text. Nice try, robot slave!

------
arisAlexis
The only argument this article makes for strong AI not being a threat is that
it's difficult to make computers fast enough.

