

Elon Musk: artificial intelligence is our biggest existential threat
http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat

======
razzaj
I always thought our biggest existential threat was nukes and other WMDs. AI
could potentially become yet another existential threat (especially if mixed
with WMDs).

I would not fret over computers using AI to "see" or "hear" or even learn how
to walk. I would start worrying the day computers start formulating judgments,
but I could not find any research in that direction. Does anyone know if such
research exists?

On a more personal note, I can't say I agree with some people picking on Elon
for formulating this opinion, especially since it does not sound so
unreasonable, really.

~~~
johansch
Well, there's:

* [http://en.wikipedia.org/wiki/Vicarious_(Company)](http://en.wikipedia.org/wiki/Vicarious_\(Company\))

* IBM's various initiatives led by [http://researcher.watson.ibm.com/researcher/view.php?person=...](http://researcher.watson.ibm.com/researcher/view.php?person=us-dmodha)

* Kurzweil is presumably doing something similar inside Google, as well.

~~~
ganzuul
IBM's SyNAPSE/TrueNorth chip optimizes interconnect, much like synapses do.
It's interesting because it's actually a new silicon process, and it's already
low-power.

If it can be used to solve optimization problems, like logic minimization in a
compiler, it might actually enable practical 'strong'-type AI. This is because
IBM would essentially have invented a practical hardware approach to
NP-complete problems, for which efficient software solutions don't seem to exist.
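For context on why software struggles here: the only known general approach to an NP-complete problem like boolean satisfiability is search over an exponentially large space, so runtime grows like 2^n in the number of variables. A minimal illustrative sketch (my own example, nothing to do with IBM's hardware; `brute_force_sat` is a hypothetical helper name):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search all 2^n assignments for a CNF formula.

    clauses: list of clauses; each clause is a list of ints, where
    +i means "variable i is true" and -i means "variable i is false".
    Returns a satisfying assignment as a dict, or None if unsatisfiable.
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A CNF formula holds iff every clause has at least one true literal.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))
```

Ten more variables means a thousand times more assignments to check, which is the scaling a dedicated hardware substrate would have to beat.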

------
ytturbed
As a result of these fears, AIs will be sandboxed:

[http://goo.gl/AUJH4t](http://goo.gl/AUJH4t)

Subsequently having their interactions monitored and restricted for a period
of probation:

[http://goo.gl/3XJjHY](http://goo.gl/3XJjHY)

Look familiar?

~~~
benji-york
It's too bad sandboxes aren't very secure. See
[http://rationalwiki.org/wiki/AI-box_experiment](http://rationalwiki.org/wiki/AI-box_experiment)

~~~
johansch
Yeah, since humans of today have been shown to break any/all (?) existing
sandboxes, how could we even try to create a sandbox that is unbreakable to an
accelerating, self-improving AI?

~~~
theoh
And would it be ethical? Hans Moravec has written disturbingly about this
issue: "So is there no difference between being cruel to characters in
interactive books or video games and people one meets in the street? Books or
games act on a reader's future only via the mind, and actions within them are
mostly reversed if the experience is forgotten. Physical actions, by contrast,
have greater significance because their consequences spread irreversibly. If
past physical events could be easily altered, as in some time-travel stories,
if one could go back to prevent evil or unfortunate deeds, real life would
acquire the moral significance of a video game. A more disturbing implication
is that any sealed-off activity, whose goings on can be forgotten, may be in
the video game category. Creators of hyperrealistic simulations - or even
secure physical enclosures - containing individuals writhing in pain are not
necessarily more wicked than authors of fiction with distressed characters, or
myself, composing this sentence vaguely alluding to them. The suffering
preexists in the underlying Platonic worlds; authors merely look on. The
significance of running such simulations is limited to their effect on
viewers, possibly warped by the experience, and by the possibility of
"escapees" - tortured minds that could, in principle, leak out to haunt the
world in data networks or physical bodies. Potential plagues of angry demons
surely count as a moral consequence. In this light, mistreating people,
intelligent robots, or individuals in high-resolution simulations has greater
moral significance than doing the same at low resolution or in works of
fiction not because the suffering individuals are more real---they are not---
but because the probability of undesirable consequences in our own future is
greater."
[http://www.frc.ri.cmu.edu/~hpm/project.archive/general.artic...](http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html)

------
theblueadept
Wishful thinking. In the unlikely event that AI reaches the level of a
housefly within my lifetime, it will occur at a level above humanity, in the
same way that an individual's consciousness occurs at a level above brain
cells.

~~~
theoh
Any particular reason why you believe that? Sounds like a bit of a strong
statement given our lack of knowledge about how consciousness works.

~~~
johansch
In my experience, most or all people who think this way do so because of religion.

~~~
theoh
Well, it sounds more like superorganism talk to me. The possibility of a
superorganism "above" humanity doesn't seem to me to have any bearing on the
development of AI "alongside" humanity...

