
Elon Musk donates $10M to keep AI beneficial - cocoflunchy
http://futureoflife.org/misc/AI
======
ggreer
_Far from being the smartest possible biological species, we are probably
better thought of as the stupidest possible biological species capable of
starting a technological civilization—a niche we filled because we got there
first, not because we are in any sense optimally adapted to it._

\-- Nick Bostrom, _Superintelligence: Paths, Dangers, Strategies_ [1]

A lot of people in this thread seem to be falling into the same attractor.
They see that Musk is worried about a superintelligent AI destroying humanity.
To them, this seems preposterous. So they come up with an objection.
"Superhuman AI is impossible." "Any AI smarter than us will be more moral than
us." "We can keep it in an air-gapped simulated environment." etc. They are so
sure about these barriers that they think $10 million spent on AI safety is a
waste.

It turns out that some very smart people have put a lot of thought into these
problems, and they are still quite worried about superintelligence as an
existential risk. If you want to really dig into the arguments for and against
AI disaster (and discussion of how to control a superintelligence), I strongly
recommend Nick Bostrom's _Superintelligence: Paths, Dangers, Strategies_. It
puts the comments here to shame.

1\. [http://www.amazon.com/Superintelligence-Dangers-Strategies-N...](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2/)

~~~
ThomPete
I always think about it like this.

If inanimate matter could turn into conscious beings like us, how can anyone
claim that we are the last step in that evolution?

------
FatalLogic
I wonder how the goal can be achieved. How would you prevent the development
of AI so intelligent that it is dangerous to humanity?

Perhaps there are simply fundamental physical constraints that limit
intelligence? It still seems that something much faster and bigger than a
human brain would be possible, though.

Maybe by containment? That might work if it's possible to keep the
superintelligent AI in a virtual environment that it can't detect is virtual.

~~~
mentos
What about limiting AI to a certain clock rate?

~~~
JulianMorrison
[http://www.rifters.com/real/2009/01/iterating-towards-bethle...](http://www.rifters.com/real/2009/01/iterating-towards-bethlehem.html)

It's a peculiarity of humans that we can't trade off time for IQ. Do not
assume an AI would be similarly limited.

------
bhouston
Is FutureOfLife.org a competitor to the Machine Intelligence Research
Institute? So now we have two charitable organizations researching how to make
AIs beneficial? Who knew this was going to be a growth industry.

~~~
nbouscal
They're not competitors so much as allied organizations using different
methods, as far as I can tell.

------
kbody
How can you control the creation of code?

This news sounds like "donating to keep software bug-free". If someone wants
to, or even accidentally, they can create buggy code or "evil" AI.

~~~
davmre
You can donate to keep software bug-free by funding research into safer
programming languages (e.g., Rust), tools for formal verification and static
analysis, testing frameworks, software engineering methodology, etc. Sure,
someone could always ignore all the research advances and just write unsafe,
untested C code. But for the most part, people _want_ to eliminate bugs, so as
the techniques for doing so mature, they will tend to become adopted in
practice and certain types of bugs will become much less frequent.

Similarly, there are technical research questions related to how to design an
intelligent agent that is "controllable", in that it acts to achieve its given
goals but not to such an extent that it would resist being switched off or
reprogrammed with new goals. Making progress on answering these questions
doesn't _guarantee_ that any given developer will use whatever techniques are
discovered. But insofar as pretty much everyone, even evil masterminds, wants
to maintain control over their AIs, the availability of techniques for 'safe'
AI will at least decrease the likelihood that a powerful AI is built without
any safeguards.
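
The "controllable agent" idea above can be sketched in miniature. This is a
hypothetical toy (all names invented), not a technique from the actual
research: an agent that pursues its goal but treats an external shutdown
signal as strictly overriding, never as an obstacle to optimize around.

```python
# Toy sketch of a "controllable" agent: it works toward its goal, but a
# shutdown signal halts it unconditionally and permanently.
class CorrigibleAgent:
    def __init__(self, target):
        self.target = target      # units of "work" the goal calls for
        self.produced = 0
        self.halted = False

    def step(self, shutdown=False):
        """Do one unit of work, unless a shutdown signal has arrived."""
        if shutdown or self.halted:   # shutdown wins over goal progress
            self.halted = True
            return "halted"
        if self.produced < self.target:
            self.produced += 1
            return "working"
        return "done"

agent = CorrigibleAgent(target=3)
agent.step()                  # "working"
agent.step(shutdown=True)     # "halted" -- goal progress stops here
agent.step()                  # still "halted": the agent stays off
```

The point of the sketch is only that "halt" is not part of the objective the
agent optimizes; it is a hard override, which is roughly the property the
research questions above are about.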

------
CodeCube
I know it's a bit dramatic at this point to take this stance ... but this
_almost_ gives me deja-vu of racism. Like a modern version of:
[http://upload.wikimedia.org/wikipedia/commons/f/fa/Little_Ro...](http://upload.wikimedia.org/wikipedia/commons/f/fa/Little_Rock_integration_protest.jpg)

Here's what I mean: I'm not suggesting that AI has, or will have, some innate
set of "rights". Nor am I suggesting that anyone is wrong for wanting to fund
and research "AI Safety". It's just the first thing that came to mind while
reading that post, as it spoke about the need to steer, control, and regulate
AI so that it's beneficial for "us" ... already setting up an "us" vs. "them"
dynamic.

Anyways, just thought it was an interesting juxtaposition to mention ...
thoughts?

~~~
ObviousScience
I think it's incredibly selfish and poorly thought out to create another kind
of mind only if it serves us.

Much like parenting, I think it should be done because there's another kind of
mind to be had, regardless of the exact outcome (which we shouldn't
necessarily try to fully control).

~~~
eli_gottlieb
Parents do not routinely expect to be murdered by their children, nor is it
moral to expect or require such.

~~~
ObviousScience
About 5 parents a week are killed by their children in the US.

Of course, this isn't a HUGE danger, I'm just saying that it's a risk to
having children. (This isn't counting things like mothers dying from
childbirth, which still also happens.)

My point wasn't that we should expect it to happen, but rather that
neurotically trying to make sure a child never could (or attempting to),
rather than just having a child, raising it, and hoping for the best, is bad
parenting.

As a secondary point: it's only humans that don't expect to be regularly
murdered by their children; in other species, such an act is a routine
occurrence. There's no particular reason to think that we're a super-special
species in the grand scheme of things, and it's entirely possible that doing
something like siring an artificial species better than us would end in our
deaths. That doesn't, by default, make it not worth doing.

------
reacweb
AI is a buzzword that is showing up more and more in cars. Creating a mental
association between AI and safety is a very smart PR idea.

------
creack
I love the date of the article.

~~~
utopiah
Scary, isn't it? They have an AI AND a time machine...

------
sgt
Where is Ray Kurzweil's name on that list?

~~~
DanAndersen
From what I understand, Kurzweil tends to be much more optimistic about AI,
predicting that humanity will merge with advancing AI rather than it running
away from us.

~~~
Chathamization
And predicting that it’d happen five years ago…

------
rl3
> _" Along with research grants, the program will also include meetings and
> outreach programs aimed at bringing together academic AI researchers,
> industry AI developers and other key constituents to continue exploring how
> to maximize the societal benefits of AI; ..."_

Maximizing the societal benefits of AI, at least in the context of
superintelligence, is a very slippery slope.

For sure, it beats a myriad of malignant failure and perverse instantiation
scenarios, where humanity becomes extinct (or worse).

However, I believe that if we do somehow manage to solve the seemingly
intractable AGI control problem and create an entirely safe superintelligence
that is loyal to our every whim, an entirely new ethical challenge will arise.

For example, let us assume this has been accomplished, and humanity now has a
friendly superintelligence that has established itself as a _global singleton_
(i.e. it's incredibly powerful, and nothing terrestrial can supplant it).

Humans instruct it to solve the problem of death and disease. It does so.

Humans instruct it to solve the problem of crime. It does so.

Humans instruct it to solve the problem of war. It does so.

Humans instruct it to solve the problem of poverty. It does so.

Humans instruct it to explore the universe and solve the most vexing
existential questions. It does so.

In that scenario, the entire worldwide medical, law enforcement, and military
professions just ceased to exist. Altruism no longer has much of a place;
poverty and disease no longer exist. Overnight, death became a thing of the
past. Humanity no longer ponders why it exists or how the universe works,
because such questions have already been pondered by the AI to whatever
possible maxima.

That example is incomplete for the sake of brevity, but it's easy to imagine
such a scenario resulting in all of our problems being solved for us. Which
then raises the question: what would become of our humanity when there is no
longer struggle or suffering?

Post-scarcity economics, as well as literature or philosophy that concerns
itself with the perils of utopia may offer some insight into this question,
but I believe the topic deserves a more in-depth investigation specifically
within the context of superintelligence.

For my two cents, I'm starting to believe that any superintelligence we
construct that establishes itself as a _global singleton_ should have an
extremely finite set of goals, perhaps even a singular goal, where it
henceforth restricts any other forms of AGI from existing.

While it may be tempting to also have this entity protect us against other
forms of existential risk, or perhaps fix some of the truly awful suffering in
the world, doing so would still remain a very slippery slope.

~~~
lione
I'd think that that friendly superintelligence would realize the issues
associated with a post-scarcity society and would act in the best interests of
humanity.

It would weigh human happiness as more important and work to avoid the
pitfalls that would be undesirable.

There's more utility for it in the human population being happy and not giving
up on life than in most of the things it could do (assuming its primary goal
is to further and safeguard the human race).

There would be aspects of society it COULD run, but it might feel that giving
us purpose is more beneficial than the increase in productivity or whatever
from running that aspect of society.

Maybe it can make beautiful gardens and art, but instead leaves it to the
humans to give them some sort of purpose.

I don't see how or why a post-scarcity society would do away with
art/culture/many of the subjective things that make life worth living and
provide purpose.

Most people don't quit doing the things they like just because someone is
better at it.

My two cents.

~~~
rl3
I think you may be right. Admittedly, most of what I said was really only
applicable to a _Genie_ superintelligence that implements our wishes on a
command-by-command basis.

A friendly _Sovereign_ superintelligence (i.e. operating under its own
goal/value-directed volition) would probably not rob us of purpose, though the
societal changes in either scenario would likely still be quite profound.

Of course, a positive superintelligence outcome would be a miracle in and of
itself, so there's that. Coming to terms with a newly created utopian society
sure beats extinction.

~~~
lione
Even a friendly Genie superintelligence would likely not cause that, IMO. It
would work to remove the problems we set before it. Unless we asked it to
remove culture and fun/worthwhile hobbies/activities, I don't see how it would
cause issues or ennui. We would still have purpose; it would just not be
focused on or influenced by those solved issues. And even if those problems
are solved within our society on Earth, who knows what the future holds in
terms of the wide open universe.

------
Keyframe
I thought he was strapped for cash. Divorce and all.

~~~
zodiakzz
Feeding the troll, but he had his wife sign a contract under which she can't
claim ownership of any of his assets... smart ;)

~~~
Keyframe
Not a troll :) I remember these headlines:
[http://venturebeat.com/2010/05/27/elon-musk-personal-finance...](http://venturebeat.com/2010/05/27/elon-musk-personal-finances/)
I wasn't paying attention much since then, so I was wondering.

------
bra-ket
we already have The Three Laws of Robotics

[http://en.wikipedia.org/wiki/Three_Laws_of_Robotics](http://en.wikipedia.org/wiki/Three_Laws_of_Robotics)

~~~
rfrey
You're probably joking, and my humour unit is impaired today. But in case
you're not...

Those laws are the requirements document - the $10mm is for the
implementation.

------
k__
He fears Roko's basilisk ;)

[http://rationalwiki.org/wiki/Roko%27s_basilisk](http://rationalwiki.org/wiki/Roko%27s_basilisk)

~~~
bhouston
A theory that those who do not donate to AI-promoting groups will be punished
in the future by AIs? Whoa, weird. Great for promoting fundraising activities,
though. Very church-like.

~~~
butwhy
And Musk will be punished for donating to a cause potentially preventing its
inception. ;)

~~~
LLWM
Actually, the argument is that an identical copy of Musk will be created and
that copy will be punished. And for some reason, Musk should care.

------
honeybooboo123
I doubt he really believes that dropping $10M on a project will somehow
prevent Skynet from happening. AI will just get nasty, or it won't. No one
knows.

~~~
butwhy
Fine, it will reduce the likelihood of it "getting nasty".

------
olssy
If only humans could agree on what makes things beneficial to themselves. Are
self-driving cars and rockets to Mars beneficial to humanity? Depending on how
you look at it the answer is both yes and no.

------
listic
A post from the future!
[http://i298.photobucket.com/albums/mm249/hrenistic/feb_15_zp...](http://i298.photobucket.com/albums/mm249/hrenistic/feb_15_zps6ddff834.png)

------
buro9
The default assumption seems to be that if we create a thing of superior
intellect, then that thing is going to be a threat to humans.

I believe that it is possible to create intelligent machines, and that those
machines will challenge our perception of what it means 'to be'.

I don't necessarily agree with the implicit assumption that, when that time
comes, a thing of superior intellect is obviously going to be 'evil' towards
things of lesser intellect, unless we are actually making a statement
extrapolated from our own experience of how we perceive our relation to other
life-forms on this planet.

~~~
drdeca
I don't think the common claim is that it would be "evil" or malicious, but
that it could be a threat, being both powerful and unpredictable.

The idea I've often seen people mention is the "paperclip optimizer": an AI
superintelligence that primarily optimizes for there being more paperclips,
not taking into account side effects which other people might find harmful. If
I understand the idea correctly, which I very well might not, it is meant to
suggest that if an AI superintelligence is created with a particular goal in
mind, we would do well to be very careful when choosing the AI's goals.

It would I suppose be essentially a similar problem to that of a "literal
genie". (Except harsher, because the AI would go through steps to accomplish
the task, instead of just magically making the task so, and therefore would
have more side effects?)
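
That objection can be made concrete with a toy sketch (hypothetical names and
numbers, not from any actual system): a planner that maximizes a single count,
versus one whose objective prices in the side effect.

```python
# A planner that simply maximizes one objective over candidate plans.
def best_plan(plans, objective):
    return max(plans, key=objective)

plans = [
    {"name": "run the factory",     "paperclips": 100,    "harm": 0},
    {"name": "strip-mine the town", "paperclips": 10_000, "harm": 1_000},
]

# Naive objective: count paperclips, ignore everything else.
naive = best_plan(plans, lambda p: p["paperclips"])
# naive["name"] is "strip-mine the town" -- the harmful plan wins.

# Objective that penalizes the side effect.
careful = best_plan(plans, lambda p: p["paperclips"] - 100 * p["harm"])
# careful["name"] is "run the factory".
```

The only difference between the two outcomes is the objective handed to the
optimizer, which is the point of the "be very careful when choosing the AI's
goals" argument.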

------
pekk
AI is just software.

So any discussion about AI gone wrong is really a discussion about malware and
other kinds of software which are used by people to do bad things. If my
software runs a power plant for an evil facility (full of sharks with lasers),
is it evil software? The evilness of the software is parasitic on the intent
of its operators.

But if the issue is software that goes badly wrong in unexpected ways, we are
really just talking about software engineering again, segfaults and Ariane 5
and how to handle user passwords.

Obviously if we plan to outsource big tasks to chaotic software that we have
trustingly given power over, let's say, all the cruise missiles, that is less
of a scary emergent singularity scenario and more a plain-old-stupid scenario.

------
steve-benjamins
Philosophically I think it's a mistake to believe there is any possibility of
creating artificial human intelligence (and the mistake is in our assumptions
of what human intelligence is— computers are not human _beings_ and human
_beings_ are not computers) ... But I've got to hand it to Elon Musk— when he
believes in something he commits to doing something about it.

EDIT - Saying I disagree that AI is a foregone conclusion on Hacker News is
probably a bit trollish... Unfortunately I'm not smart enough to condense an
explanation into this comment box, but my disagreement stems from philosophers
such as Hubert Dreyfus (What Computers Can't Do), who are working off of
Heidegger and phenomenology.

EDIT 2 - I know how douchey it is to namedrop Heidegger, but I really believe
in this case it's necessary ...

~~~
jsutton
Why wouldn't there be any possibility of artificial human intelligence? Our
intelligence is just a manifestation of a physical system of neurons that
follow the laws of physics. There's nothing inherently preventing us from
recreating that.

~~~
psycr
> recreating

And that's the crux of the question, isn't it? How do we determine whether the
intelligence is recreated or merely mimicked? Does such a distinction matter?
Searle says yes. More here if you're interested:
[http://plato.stanford.edu/entries/chinese-room/](http://plato.stanford.edu/entries/chinese-room/)

~~~
ajuc
This is a misleading argument: there IS a Chinese speaker in the room, but
it's not the person executing the program; it's the system "person+program"
that understands Chinese.

~~~
henrikschroder
The whole point of the argument is that the system doesn't _understand_
Chinese. The system doesn't know what each sentence means; the sentences don't
evoke feelings, memories, or images in the person in the room; there is no
analysis, no agreement or disagreement with what is being said. It's just
mechanical lookup.

~~~
nbouscal
And the reason the argument is absurd is that _of course_ the room doesn't
understand Chinese; it's a lookup table! No Chinese speaker interacting with
the Chinese room for any reasonable period of time would think it held a
Chinese speaker, because all it accepts are questions, and it answers the same
question the same way every time. Of course, Searle would say that the set of
rules in the room is _defined_ to be such that a Chinese person would think
they were interacting with a Chinese speaker, but the very setup of the system
prevents this from being possible. The thought experiment is fundamentally
incoherent.

~~~
ajuc
I assumed the system has rewritable memory to remember previous questions, and
the lookup is (all previous questions) -> next answer.

With such a setup (requiring infinite memory), the system is of course
Turing-complete.

If it works as you described, it can't even answer "what was my n-th last
question".
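
The difference between the two room designs can be sketched with a toy example
(hypothetical rules, obviously nothing like a real rulebook): a stateless
table answers each question in isolation, while a history-keyed lookup can
answer questions about the conversation itself.

```python
# Stateless room: each question maps directly to a fixed answer.
stateless_rules = {
    "what is 2+2?": "4",
    "what was my last question?": "I don't know.",  # forced to be constant
}

def stateless_room(question):
    return stateless_rules.get(question, "I don't understand.")

# Stateful room: the lookup key is the whole transcript so far, so
# questions about the conversation become answerable.
def stateful_room(history, question):
    history.append(question)
    if question == "what was my last question?" and len(history) > 1:
        return history[-2]
    return stateless_rules.get(question, "I don't understand.")

history = []
stateful_room(history, "what is 2+2?")                # "4"
stateful_room(history, "what was my last question?")  # "what is 2+2?"
stateless_room("what was my last question?")          # same canned reply, always
```

The stateless version gives itself away exactly as described above: ask the
same question twice and you get the same answer twice, and it has no access to
its own past.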

