
Machine intelligence, part 1 - oskarth
http://blog.samaltman.com/machine-intelligence-part-1
======
fchollet
I have yet to meet a serious AI researcher who worries about AI ending the
human race. At every AI conference I've been to lately, some guy would
inevitably ask the same question to the speaker: "do you think we should be
concerned by superhuman AI?". The answer was always the same, for instance
from Andrew Ng at the Deep Learning Summit a few weeks ago: "dude, stop it.
That's such a distraction".

 _> One of my top 4 favorite explanations for the Fermi paradox is that
biological intelligence always eventually creates machine intelligence, which
wipes out biological life and then for some reason decides to makes itself
undetectable._

Since you are so knowledgeable about the unknowable, may I ask, Sam, do you
think angels are male, or are they female? It has been a long-standing
question among the sort of people who like to predict the coming end of the
world.

Seriously, billionaires and pundits warning us about our impending AI doom is
such a distraction. Does AI have its dangers? Yes. And nobody talks about
them. The danger of AI is that it will put increasing power in the hands of
those who have the data and the know-how, i.e. large corporations and
governments: the power to usefully mine the troves of data they have on
every citizen, both on a micro level and a macro level, to understand what
people are doing, what they are thinking, and what they are going to do next.
And ultimately, to control what they think (to give you an idea, Facebook can
already influence your mood by selecting what goes into your newsfeed).

First comes intelligence, then prediction, and finally, control.

Yes, AI is something we should probably be worried about, in the same way that
we should have discussed the privacy implications of the Internet years before
the NSA files. But a terminator-like end of the world scenario is not a
concern. If you are clueless about a topic, please refrain from making grand
statements about its future.

A great quote from Neil Gershenfeld: _"The history of technology advancing
has been one of sigmoids that begin as exponentials."_

~~~
jey
Stuart Russell, AI professor at UC Berkeley and co-author of the _Artificial
Intelligence: A Modern Approach_ textbook, cares:
[http://edge.org/conversation/the-myth-of-
ai#26015](http://edge.org/conversation/the-myth-of-ai#26015)

~~~
fchollet
His thoughts seem quite sensible, although extremely abstract and theoretical.
This is light-years away from the Elon Musk / Bill Gates / etc. fear-mongering.

~~~
mej10
So do you take his thoughts seriously or not? Do you now think AI researchers
are engaging in unethical behavior since they don't care about AI safety?

------
xianshou
Here are a few more excellent resources on the potential and the dangers of
machine intelligence. Even if you don't expect to read Nick Bostrom's
_Superintelligence_ - a deep, provocative, and thoughtful book, but also very
verbose - the links below will give you an excellent primer on humanity's
prospects if and when we develop a true general AI:

[Wait But Why]

The AI Revolution, Part 1: How and when we achieve machine intelligence, e.g.
strong AI - [http://waitbutwhy.com/2015/01/artificial-intelligence-
revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-
revolution-1.html)

The AI Revolution, Part 2: The species-level immortality we can hope for, and
the extinction we have to fear - [http://waitbutwhy.com/2015/01/artificial-
intelligence-revolu...](http://waitbutwhy.com/2015/01/artificial-intelligence-
revolution-2.html)

[Resources on Friendly and Unfriendly AI]

Unfriendly AI: What AI looks like if it does not act expressly in our
interests; e.g., a universe tiled over with paper clips -
[http://wiki.lesswrong.com/wiki/Unfriendly_artificial_intelli...](http://wiki.lesswrong.com/wiki/Unfriendly_artificial_intelligence)

Friendly AI: Strategies for designing an AI that respects human morals and
metamorals, including what we'd want if we were as wise as a superintelligence
(coherent extrapolated volition) -
[http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence)

~~~
MollyR
Thanks for these links. They seem extremely interesting.

~~~
MollyR
Just curious, why the downvotes? On the welcome page, it says it's okay to
say thanks. Am I missing some unspoken rule about Hacker News?

------
mlinsey
Funny to see the skepticism here.

It's hard to understand the danger machine intelligence poses to humanity for
the same reason it's hard to understand the danger tiny startups can pose to
big industries. Current implementations look like toys, humans have bad
intuitions about exponential growth, and most will disagree on how _probable_
the threat is (something that might not be known except retroactively) while
systematically underestimating how _large_ the threat is if it does come to
pass (because it's so far off the scale of what's come before it).

Maybe Sam (and Elon Musk, and lots of other silicon valley types) are talking
about this problem because they read too many sci-fi novels or are too
privileged to worry about Real Problem X which affects Y group in the here and
now.

But what if instead, they're talking about this problem because they've spent
a lot of time seeing this sort of black swan pattern play out before, and they
know the way to assess the impact of something truly _new_ is to envision
what it could be instead of looking at what it is now?

~~~
csallen
Some of my personal skepticism boils down to: well, what are we going to do
about it? There are only really two options:

(1) The methods to create strong AI will become known to us before we actually
build something dangerous. At that point, since we will better understand the
nature of the potential threat, it will actually be feasible to put safety
restrictions in place.

(2) Someone will stumble upon strong AI in secret or by accident. I don't see
how this is preventable, unless we issue a moratorium on AI-related research,
which just isn't going to happen outside of scenario 1.

And so the answer becomes: let's wait and see.

That said, I don't believe there's anything unbearably harmful about the
current level of speculation and "fear-mongering".

------
crazypyro
I'm seriously so sick of hearing about how machine intelligence is going to
spell the end of humanity. The number of gears that would have to fall into
place is never mentioned. We aren't close to SMI. It's much more likely that
we humans are excelling at dreaming up apocalyptic scenarios, much like we
have always done...

The "sloppy, dangerous thinking" is the aversion these types of articles
create within the general population to artificial intelligence. We don't need
to fear AI, we need to understand and control it...

~~~
x0x0
I don't consider myself particularly alarmist about too many things, but I
have to admit I'm a little worried about machine intelligence on one front:

What happens when most people have no salable skills due to the combination of
robotics and AI? We're essentially going to have to live with income supports
for the 90+% of Americans, and worse for the countries to which we've exported
e.g. electronic device construction and clothing manufacture. I think there's a
nonzero chance society essentially tears itself apart during the transition
period. It is now the Republican party position that not all people deserve
healthcare, housing, or enough food to eat. What happens when their hated
segment of the populace gets much bigger in a job market that doesn't need
cashiers, janitors, gardeners, cooks, taxi drivers, car washers, many farmers,
or most menial labor?

Also, I would note that creating AI that requires less control makes it more
useful. So in some sense the development of AI itself fights against controls.

~~~
delluminatus
I think this is a more legitimate concern than the fear of the "Matrix
outcome" that some people seem to have.

But, what you're describing is the process of people being replaced by
technology. Generally speaking, this will probably not be a problem for a
free-market economy, although it will certainly result in some unemployment.

The key point is that replacing humans with machines does not only cause
unemployment, but it also reduces cost of production, which stimulates capital
investment and/or reduces prices in the industry in question. In the general
economy, this reduced cost and increased production should offset the lost
"purchasing power" from the now-unemployed parties. The stimulus reduces costs
in other industries which promotes job growth.

The end result is likely to be that the same number of people are employed,
but they are employed in a more efficient manner of production. The cost of
labor will decrease relative to the amount of production; however, because
this is tied to a decrease in the cost of production, the real value of the
labor (in terms of things you can buy) should not decrease.
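
To make that offset argument concrete, here's a toy calculation (every figure
invented; whether the offset actually balances depends entirely on how much of
the cost savings pass through to prices):

    # Toy numbers for the offset argument above (all figures invented).
    workers, wage = 100, 30_000.0      # workforce and salary before automation
    unit_price = 20.0                  # price of one unit of output

    # Suppose automation halves the workforce and competition passes the
    # savings through, halving the unit price.
    workers_after, unit_price_after = workers // 2, unit_price / 2

    # "Purchasing power" of the total wage bill, in units of output.
    print((workers * wage) / unit_price)              # 150000.0 units
    print((workers_after * wage) / unit_price_after)  # 150000.0 units

If prices fall less than proportionally, the second number drops below the
first, which is the disruptive scenario discussed in the replies below.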

Of course there will be some disenfranchised individuals, especially those who
have particular skills that are replaced by mechanization. However, this is
more likely to affect skilled laborers (like cooks) than those who are not
paid for their skills (like janitors or cashiers).

In the end, I guess my point is that a free-market economy naturally balances
these factors due to the relationship between supply and demand. However, it's
possible that we will reach a point, especially if we truly do hit a
Singularity, where we will have to reconsider the use of a scarcity-based
economy at all, as production becomes completely divorced from any human
action. Hopefully though, at that point the cost of goods will have naturally
fallen to such a degree that the transition can be performed peacefully.

~~~
rybosome
I'm not an economist, but my understanding is that the massive social impact
observed in the Industrial Revolution stemmed not so much from the fact that
it happened at all, but rather from the rapidity with which it happened. We
ultimately reached a new, stable equilibrium, but until various social forces
and trends, government policy, etc. caught up, there was massive disruption.

People like Jeremy Howard believe that we are in for a similar wave of
disruption. I have no doubt that there is a new, stable equilibrium which we
_could_ eventually reach, but if the change is so sudden and the shock strong
enough, perhaps there could be permanent or semi-permanent negative
consequences before the new equilibrium is reached.

~~~
delluminatus
I think you're right, that does seem like a possibility. It seems unlikely to
me that our wage-paying jobs will be phased out by automation that rapidly,
especially if you consider the whole global economy. Of course, I could be
completely wrong -- I guess a true Singularity could invalidate almost all
labor in a matter of years or even months, depending on what form the AI takes
and what it invents.

------
AndrewKemendo
>SMI

Sam, please don't make up a new acronym. Especially if you aren't an AI
researcher. Plenty of thought has gone into this and your statement about it
not seeming "real" misses the history of AI.

There has been a lot of work in the AI community to try and steer our language
towards a concrete nomenclature and Artificial General Intelligence has taken
the helm at this point to represent machine instances that are equal to or
greater than human level.[1][2]

Semantics matter, as you point out, so please be on the same page with the
field.

Aside from that I hope that you and everyone else can, instead of just jumping
on the Musk/Hawking/Bostrom bandwagon, actually pay attention to the AGI
community. Maybe start attending our conferences ([http://agi-
conf.org/2015/](http://agi-conf.org/2015/)) and publishing in our journals
([http://www.degruyter.com/view/j/jagi](http://www.degruyter.com/view/j/jagi)).
The field needs more money and more researchers. There were fewer than 100
attendees at last year's AGI - that is not nearly enough.

[1] In fact Ben Goertzel, Richard Loosemore and Peter Voss had an interesting
exchange on Facebook about this just this week

[2][http://wp.goertzel.org/who-coined-the-term-
agi/](http://wp.goertzel.org/who-coined-the-term-agi/)

~~~
olalonde
What do you think of [http://lesswrong.com](http://lesswrong.com) as a
community?

~~~
AndrewKemendo
Frankly, not much - but it's not that I think they are doing bad things; it
just seems like it's yet another philosophical forum.

I was active from 2007~2010 when it was just transitioning from the
Hanson/Yudkowsky combined blog with a robust comments section to the
beginnings of what you see now. I'm sure some of my comments are still
floating around there somewhere.

I think in the early days it was a pretty good place to bounce philosophical
ideas/concepts WRT AGI off each other, and it probably still has that to some
degree. It just became too cultish around Yudkowsky for my tastes and ended up
being very navel-gazing around Bayesian probability and "rationalism" as
religion. That's not to denigrate Bayesians or rationalism at all, because
those are great things.

In the end the community is more interested in talking than in doing, and
SIAI is somewhat of an offshoot of that ethos, with roots in the original
Future of Humanity core group, and continues that legacy.

Don't get me wrong, there are a lot of really smart great people contributing
interesting things there about rationality and optimization. I just moved away
from their religion of rationality that boils down to these super hardcore
utilitarian calculations which end up kind of defeating the populist goals of
spreading rationalism.

~~~
olalonde
Thanks. I'm just an occasional lurker there but your opinion resonated with me
(especially the "talking vs doing" and "cultish behaviour" parts). I remember
once seeing a comment from someone who stated his intention to start working
on AGI get heavily downvoted and criticised (as if the rest of the world
should stop researching AGI until LW figured out the Safe Way to do it).

~~~
AndrewKemendo
_as if the rest of the world should stop researching AGI until LW figured out
the Safe Way to do it_

That's basically MIRI's ethos: once they figure out how to build it safely,
then everyone will be permitted to go start building it. You can see how
ridiculous this is on its face.

------
dicroce
I've been studying machine learning lately... Here is my take:

Well before we create an ASI (artificial super intelligence) we will have put
90% of the human race out of work with specialized (non-conscious) intelligent
agents... (for example, self-driving cars). I believe that this will be a
disaster for our society as it exists today. My hope is that we will adapt and
make the necessary societal changes so that we can reap the benefits of this
technology.

Everyone assumes that an ASI will be able to augment itself and learn
exponentially. I suspect that this will be true if the nature of the brain is
defined by a single algorithm. If the brain is not defined by a single
algorithm and is instead a big ball of complexity then our ASI's will not be
able to grow exponentially any more than we can (they will likely not really
understand their own consciousness, just like we don't).

If a single algorithm defines the brain, then I suspect humans will be able to
augment their brains with machine intelligence as well. If we can augment our
brains, then we're playing the same game as the machines.

If it proves impossible to augment our intelligence, I suspect that an ASI
would still preserve humanity if only to preserve us for future
potentialities.

ASIs are much better suited for space travel than we are. 1) They're not
nearly as sensitive to radiation, so they need less shielding, and therefore
less fuel. 2) They have much less stringent environmental requirements (no
heating, cooling, air, food, etc.). 3) Their ability to sleep for incredibly
long periods makes them far more suited than us for exploring the cosmos. I
suspect that an ASI might leave us alone simply because the universe is so
vast, and entirely open to it.

~~~
sgt101
90% out of work, but wealth (in terms of resources and services) increased.

Just seems to be a challenge about sharing to me.

Four-year-olds solve it regularly; I think we can too. Interestingly, it will
make a liberal arts education the hottest, most interesting thing going!

~~~
rybosome
I think we could get it right eventually, but the only solution I can imagine
seems to be along the lines of a universal basic income, or access to basic
resources (food, water, shelter, education, clothing, healthcare, etc.)
without cost.

Given the resistance we are currently observing to Obamacare, the fact that
"socialism" is regularly bandied at the current administration as a pejorative
and the disdain for "handouts", this seems like a stretch. Perhaps we'll get
there eventually, but not before a lot of pain.

------
neuralk
Why are all these people (Elon Musk, Stephen Hawking, now Sam Altman) who have
no background in Artificial Intelligence coming out with these alarmist
messages (particularly when there are more plausible imminent threats such as
nuclear warfare, superbugs, etc)? As a grad student doing work in AI, I find
it really frustrating. Why not instead talk to some current practitioners such
as Mark Riedl, who is one of the premier researchers in computational
creativity -- you'll get a different story [1].

[1]
[https://twitter.com/mark_riedl/status/535372758830809088](https://twitter.com/mark_riedl/status/535372758830809088)

~~~
sama
though i dropped out, i studied AI in college. i also worked in andrew ng's
lab.

as a current grad student, why do you believe whatever makes us smart cannot
be replicated by a computer program?

~~~
neuralk
I never said that. I think karpathy (also an AI researcher) summed up my
feelings, particularly the Ryan Adams quote:
[https://news.ycombinator.com/item?id=9109140](https://news.ycombinator.com/item?id=9109140)

edit: apologies about the 'no background' part

~~~
choppaface
Nice link. I also did AI in grad school, and I firmly agree that posts like
sama's are, as Ng says, "a distraction from the conversation about... serious
issues." The OP is much much more aimed at marketing a plausible future of AI
than producing any sort of rigorous prediction. It doesn't even matter if the
OP predicts correctly; the post doesn't contribute anything substantially
meaningful. I'm sad to see Sam spend so much of his precious time and energy
on this post.

------
Udo

      in an effort to accomplish some other goal (most goals, 
      if you think about them long enough, could make use of 
      resources currently being used by humans) wipes us out
    

This is a line of reasoning put forward a lot, not only in reference to SMIs
but also extraterrestrial entities (two concepts that actually have a lot in
common), most notably by Stephen Hawking. We're instinctively wired to worry
about our resources and like to think their value is universal. It's based on
the assumption that even for non-human organisms, the Earth is the end-all-be-
all prize. Nobody seems to question this assumption, so I will.

I posit there is nothing, nothing at all on Earth that couldn't be found in
more abundance elsewhere in the galaxy. Also, Earth comes with a few
properties that are great for humans but bad for everybody else: a deep
gravity well, unpredictable weather and geology, corrosive atmosphere and
oceans, threats from adaptive biological organisms, limited access to energy
and rare elements.

There may well be reasons for an antagonistic or uncaring intelligence to wipe
us all out, and an unlimited number of entities can be imagined who might do
so just for the heck of it, but a conflict over resources seems unlikely to
me. A terrestrial SMI starved for resources has two broad options: sterilize
the planet and start stripmining it, only to bump up against the planet's hard
resource limitations soon after - or launch a single rocket into space and
start working on the solar system, with a clear path to further expansion and
a greatly reduced overall risk.

One other thing I'd like to comment on is this idea that an SMI has to be in
some way separate from us. While it's absolutely possible for entities to
develop that have no connection with humanity whatsoever, I think we're
discounting 99% of the rest of the spectrum. It starts with a human, moving on
to a human using basic tools, and right now we're humans with advanced
information processing. I do not have the feeling that the technology I live
my daily life with (and in) is all that separate from me. In a very real
sense, _I_ am the product of a complex interaction with my brain as the
driving factor, but including just as essentially the IT I use.

When discussing SMI, this question of survival might have a shades-of-grey
answer as well. To me, survival of the human mind does not mean "a continued,
unmodified existence of close-to-natural humans on Earth". That's a very
narrow-minded concept of what survival is. I think we have a greater destiny
open to us, completing our long departure from the necessities of biology,
which we began millennia ago. We might fuse with machines, becoming SMIs
or an integral component of machine intelligences. I think this is a
worthwhile goal, and it's an evolutionarily viable answer to the survival
problem as well. It's in fact the only satisfying answer I can think of.

~~~
mark-r
A superintelligence would likely pursue both paths simultaneously - stripmine
the Earth, and head for space.

------
api
I am personally most concerned -- as others have said -- about the fusion of
non-sentient but powerful machine intelligence with malign human intelligence.
I think it's the most likely and practical scenario. We're in a sense already
there with high-frequency trading, algorithm assisted financial games, super-
surveillance, etc.

------
seiji
The red flag here is the mention of a fitness function. True AI and fitness
functions have nothing in common.

What's the fitness function of yourself as an intelligence? True AI is as
fractured and contradictory as a human brain, just running on a different
substrate. When talking about AI, one must make sure one does not mean "an
infinite loop attached to a robot." That's not AI, that's... an infinite loop
attached to a robot (KILL ALL HUMANS, no generative thought, etc).
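
For what it's worth, here's a minimal sketch of what "fitness function" means
in the evolutionary-computation sense (the target string and loop are invented
for illustration): a fixed scalar objective, about as far from a fractured,
contradictory mind as you can get.

    import random

    # A fitness function: a fixed scalar objective. This toy hill-climber
    # "evolves" a bit string toward a hard-coded target; nothing about it
    # resembles open-ended thought.
    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

    def fitness(candidate):
        # Count matching bits; higher means "fitter".
        return sum(c == t for c, t in zip(candidate, TARGET))

    best = [random.randint(0, 1) for _ in TARGET]
    while fitness(best) < len(TARGET):
        mutant = list(best)
        i = random.randrange(len(mutant))
        mutant[i] = 1 - mutant[i]      # flip one random bit
        if fitness(mutant) >= fitness(best):
            best = mutant

    print(best)  # converges to TARGET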

As far as "bad AI" goes, we already have horrible dumbass humans, so the only
thing evil AI can do is be bad _faster_ and in more clever ways. Yes, it's
something to worry about, but I'm worried more about insane world leaders
running around acting like pouty emo teenagers with simultaneous delusions of
grandeur and delusions they are fulfilling desert prophecies from 4 kiloyears
ago.

------
iterationx
I'm more concerned about AI law enforcement. NSA eavesdropping plus an
intelligent agent assigned to you is a powerful combination.

~~~
ep103
Exactly. It seems like the biggest threat isn't some dystopian future; it's
rather the ability of automation to lock in and increasingly enforce the
inequalities and prejudices of our current system.

~~~
cryoshon
Yep. This is my take as well.

Machine intelligence is only as useful to mankind at large as the individual
humans who control and direct it. In the current way of things, bad actors are
the ones most likely to control machine intelligences, meaning we're going to
be at a growing disadvantage relative to them as time goes by.

------
freyr
_It’s very hard to know how close we are to machine intelligence surpassing
human intelligence._

I feel comfortable stating there is _no_ evidence that the current trajectory
of machine intelligence (e.g., developing a set of tools for optimization of
mathematical functions on digital hardware) is bringing us any closer to
sentience. So in that sense, Sam seems misinformed.
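
To illustrate what that "set of tools" amounts to in practice, here is a
minimal, hypothetical sketch of the core loop (gradient descent on a toy
loss). Most of today's machine learning is an elaboration of this, which is
the point: nothing in it hints at sentience.

    def loss(w):
        return (w - 3.0) ** 2      # toy objective, minimum at w = 3

    def grad(w):
        return 2.0 * (w - 3.0)     # derivative of the loss

    w, lr = 0.0, 0.1               # initial weight, learning rate
    for _ in range(100):
        w -= lr * grad(w)          # step downhill toward the minimum

    print(w)  # ~3.0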

Of course, there's always a _remote_ chance of a fundamental, revolutionary
breakthrough in our understanding of AI or human intelligence (or anything),
but it would represent a complete departure from our current state-of-the-art,
not the mere evolution of our current progress. So in the sense that "it's
possible," Sam is right -- in the same sense that it's possible teleportation
or interstellar flight is right around the corner. When weighted by
probability, it's not a pressing issue in my mind and should remain relegated
to science fiction for now.

~~~
convexfunction
I didn't see anything about sentience in the article. More generally,
sentience has nothing to do with AI safety; whether or not the thing has
qualia has very little to do with what the thing may or may not do to our
civilization.

------
Faint
I think people might overestimate the effect of machines being capable of
improving themselves.

I don't believe in "magic algorithms" that we can't find but a computer could,
and that perform dramatically better. I think we are capable of taking
advantage of most hardware within a few years' time. Dramatic improvements in
performance thus require improving the physical matrix the computation happens
in. That means the prospective machine intelligence needs access to all the
infrastructure that's needed to design and produce such a matrix. It also
will need to make experiments, and do things by trial and error, just like we
do, even though it could catch a hint more easily than we do.

Think of chess. Seeing a couple of moves further into the game might require
1000 times more raw computation. Such power might guarantee victory over
lesser-equipped competition. It still doesn't mean you'd be even close to
"solving" the game. What I mean is, "intelligence" is not the only requirement
needed for self-improvement. In the end, it might not even be the bottleneck
at all. For example, suppose we are planning the next generation of a
transistor for some IC. Suppose a team of humans needs to do 100 experiments
to get the process right. A perfectly smart AI might still need to run 90
experiments, no matter what, because it still needs to extract the same
information that humans would from those experiments.
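
The chess point above, in rough numbers (35 is a commonly quoted ballpark for
chess's branching factor; the figures are just illustrative): searching d
plies costs on the order of 35**d positions, so one extra full move (two
plies) multiplies the work by about 35**2 = 1225.

    b = 35  # approximate branching factor of chess
    for d in (4, 6, 8):
        print(f"{d} plies: ~{b**d:,} positions")
    # 4 plies: ~1,500,625 positions
    # 6 plies: ~1,838,265,625 positions (1225x the work of 4 plies)
    # 8 plies: ~2,251,875,390,625 positions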

The machine needs to do all the chores (of R&D) that we already are doing. I'm
not saying they couldn't be done better, but I'm saying they still need to be
done, and any creature doing them still needs to spend the time and resources
needed to do them. And it's not all brainwork. I have no doubts vanilla humans
can be outcompeted at some point. But even when human performance has been
exceeded, technology might not immediately take off rocket-like to a world we
cannot comprehend.

Still, I wonder: what the hell are we all going to do when paying the power
bills and capital costs of an AI becomes cheaper than hiring a human?

------
jgrowl
An alternative view: Development of superhuman machine intelligence is the
only way anything resembling humanity will be preserved.

We are much more likely to be wiped out by natural disaster, asteroid impact,
a dying sun, etc...

Unless we come up with some amazing new physics, I don't know how humans will
ever make it very far from earth.

Edit: Oh, I just now saw part 2, which addresses this.

~~~
mod
Where is part 2?

I can't find a link here or on the blog.

~~~
JoshTriplett
I'd like to see that as well. Machine intelligence is certainly an existential
threat, but on the other hand, it's also one of the single largest
improvements we could possibly create (insofar as it'd be the last one we'd
ever need to).

------
mychaelangelo
More people need to read Nick Bostrom's _Superintelligence_ book. I'm not
involved in computer science academic circles, but I wonder how seriously
everyone else takes this topic.

------
sudioStudio64
Mr Altman should be far more worried about the hungry and homeless residents
of the bay area taking up pitchforks against their startup neighbors.

Climate change is also a huge threat that isn't just "likely".

I don't want to be dismissive, but seriously, there are some radically
dangerous things facing humanity right now. This isn't one.

{Edit...my keyboard sucks.}

------
karpathy
I could make a lot of comments here but I'd just be reiterating what people
much more qualified than I have already said. People who are at the forefront
of AI and have actually developed state of the art AI technologies for many
years, in some cases decades:

Andrew Ng: [http://www.wired.com/2015/02/ai-wont-end-world-might-take-
jo...](http://www.wired.com/2015/02/ai-wont-end-world-might-take-job/) "I
think it’s a distraction from the conversation about…serious issues,"

Yoshua Bengio: [http://www.popsci.com/why-artificial-intelligence-will-
not-o...](http://www.popsci.com/why-artificial-intelligence-will-not-
obliterate-humanity) "What people in my field do worry about is the fear-
mongering that is happening,"

Yann LeCun: [http://spectrum.ieee.org/automaton/robotics/artificial-
intel...](http://spectrum.ieee.org/automaton/robotics/artificial-
intelligence/facebook-ai-director-yann-lecun-on-deep-learning) "there are
things that are worth worrying about today, and there are things that are so
far out that we can write science fiction about it, but there’s no reason to
worry about it just now."

Ryan Adams:
[https://twitter.com/ryan_p_adams/status/563384710781734913](https://twitter.com/ryan_p_adams/status/563384710781734913)
"The current "AI scare" going on feels a bit like kids playing with Legos and
worrying about accidentally creating a nuclear bomb."

Rodney Brooks: [http://www.rethinkrobotics.com/artificial-intelligence-
tool-...](http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/)
"Recently there has been a spate of articles in the mainstream press, and a
spate of high profile people who are in tech but not AI, speculating about the
dangers of malevolent AI being developed, and how we should be worried about
that possibility. I say relax. Chill. This all comes from some fundamental
misunderstandings of the nature of the undeniable progress that is being made
in AI, and from a misunderstanding of how far we really are from having
volitional or intentional artificially intelligent beings, whether they be
deeply benevolent or malevolent."

The fear that the whole AI community is asleep at the wheel, and that we're
unable to adequately extrapolate our algorithms is difficult to falsify. How
can we possibly prove that AI is not some kind of magical emergent property,
"one simple algorithm away"? We can't. All we can do is make our best educated
guesses, and we're consistently seeing the same ones over and over from the
people who are in the best position to make them.

~~~
mej10
> How can we possibly prove that AI is not some kind of magical emergent
> property, "one simple algorithm away"? We can't.

I hope you aren't a researcher, because this is a ludicrous statement. We
haven't studied it enough to know whether we can or not.

~~~
fchollet
> I hope you aren't a researcher

Andrej is a brilliant researcher, currently doing his PhD. He has a bright
career ahead.

Maybe you should actually read his comment instead of dismissing it crudely,
and likewise for the thoughts of the likes of LeCun, Ng, Bengio, etc. These
are the people I would listen to, not the Nostradamus pundits.

~~~
mej10
I admit, my first sentence was in poor taste. Apologies to Andrej.

At the same time, saying that something isn't possible to prove when we
literally have no idea about its provability isn't a good stance for a
researcher to take.

This is similar to my problem with the list of AI researchers you've provided.
Saying that there is nothing to fear and waving your hands isn't exactly
scientific.

Also, it isn't like these people (Sam, Musk, etc) literally think it could
happen any day. The point is that we should be aware of the risk and prepare
accordingly -- why is that unreasonable?

~~~
fchollet
I don't think anybody would posit that it's unreasonable to entertain the idea
as a kind of far-fetched, long-term possibility, much like encounters with
alien life or faster-than-light travel.

It's the fear-mongering that's the issue. It's as if these same pundits were
warning us about the dangers of space travel because it could hypothetically
cause us (1000 years from now?) to draw the attention of a dangerous alien
civilization (does that even exist?) that could destroy the Earth. It's the
same level of ridiculous speculation. And that has no place in the scientific
discourse.

Write sci-fi novels if you care about this issue, but don't pretend it's
science, much less a pressing technological issue.

------
jonny_eh
This is just another made-up problem to wring our hands about. Remember the
Singularity? Is that not cool anymore?

How about we focus on actual problems that actually exist, e.g. climate
change, government corruption, and income inequality.

~~~
JoeAltmaier
This is part of that. At some point we hand over control, not to incompetent
humans, but to unknowable machine AIs. We're doing it already (traffic lights;
phone-answering bots; even your power company schedules generation using
algorithms). We should do this with eyes open, or we will find ourselves
unable to influence our society in unexpected ways.

------
graycat
We know how to work with fire and not burn down everything we care about.

We know how to fly at 0.80 Mach, safely.

Well, we understand fire and airplanes.

Altman is correct: So far we don't have even as much as a weak little hollow
hint of a tiny clue about how to construct what he calls _machine
intelligence_ ; I thought so when I was at IBM's Watson lab where our team
shipped software products and published a string of papers on artificial
intelligence; I still think so now; and some maximum likelihood estimation
heuristics, _rules_ , finding parameters to do nonlinear fitting to data, etc.
don't change my mind.

When we understand how _intelligence_ works well enough to construct it, then
likely we will also understand it as well as we do fire and airplanes and,
then, also how to have _machine intelligence_ safely.

Just don't put the robot factory in Chernobyl or Fukushima!

So, go ahead and develop this _intelligence machine_. Then I will formulate a
_Turing test_ , but you don't get to work with the machine after I show you
the test!

Secret: Here's the test:

Here's W. Rudin's _Principles of Mathematical Analysis_ ; hand a paper copy to
some _machine learning_ computer and have it read the book, work the
exercises, read Knuth's _The TeXBook_ , learn Emacs, use Emacs to type in the
TeX for the solutions to the Rudin exercises, and post a link here to a PDF
with the solutions. I will be glad to grade the solutions!

------
100timesthis
I like this debate. It's useful to have, but all these doomsday articles in
my opinion are always missing a point, the main one: society comes first;
society is driving technology, not the other way round.

We will not suddenly realize "oh, machines took all our jobs"; it's a slow
process and it's led by society, not technology. When technology creates
more problems than solutions, it will die or adapt to what society wants.

------
spot
opinions of 187 scientists and other writers and thinkers (Dennett, Smolin,
Rees, Dyson, Rushkoff, O'Reilly, Coupland, Kelly, etc etc) on thinking
machines on edge.org (and many of them address the threat issue):

[http://edge.org/annual-question/q2015](http://edge.org/annual-question/q2015)

including mine:

[http://edge.org/response-detail/26219](http://edge.org/response-detail/26219)

~~~
sudioStudio64
It kind of reminds me of predictions about traveling faster than sound
igniting the atmosphere.

~~~
Micaiah_Chang
This seems as if it is conflating the worry that nuclear bombs would ignite
the atmosphere with supersonic travel.

Who predicted this?

------
Lambdanaut
> One of my top 4 favorite explanations for the Fermi paradox is that
> biological intelligence always eventually creates machine intelligence,
> which wipes out biological life and then for some reason decides to makes
> itself undetectable

That's not an explanation of the Fermi paradox, that's just moving the problem
around. The SMI has no more reason to make itself undetectable than the guys
that built it.

~~~
andrewla
Exactly -- why not cut out the middleman and say that biological intelligence
for some reason always decides to make itself undetectable.

~~~
jfoster
I was going to make this argument, though it also occurs to me that it's
unlikely for biological life to make that decision. Even smallish groups of
humans seem unable to agree unanimously on almost anything.

Potentially what sama was trying to convey was that machine life will reason
about things more logically (or at least more consistently) than biological
life, and come to that conclusion. Biological life may not be capable of that
conclusion due to an inability to agree with other biological life.

------
facepalm
It doesn't have to be such a doomsday scenario, like in the movie Terminator,
with machines starting a war and wiping us out with nukes.

Possibly humans might just gradually fade away and be replaced with something
better. For example superhuman AI could be so entertaining that we simply
forget to make kids and die out within a generation (the AI could be 1000000
times more entertaining than kids or sex).

It sounds horrible, but I think that is just some kind of human fallacy. To
put things in perspective: everyone who has lived 200 years ago has died by
now. I don't think many people are overly saddened by that fact. Likewise,
while dinosaurs are still prominent in our thoughts, few people are REALLY sad
that they all went extinct.

It's possible that machines might actually carry further a lot of things we
appreciate in the human race (capability for love, imagination, I don't
know...).

No guarantees, though.

------
samiur1204
Urgh, every couple of weeks, it seems like someone prominent in the tech
industry, but not actually knowledgeable about machine learning or "artificial
intelligence", makes some outlandish claim. We are nowhere close to "AI" in
the terms that Elon Musk and Sam Altman are talking about. We might just as
well start preparing for aliens that have superintelligence and have found
earth.

All these kinds of comments are so distracting from the really awesome work
real ML researchers are doing. One of them, Yann LeCun, said: “Hype is
dangerous to AI. Hype killed AI four times in the last five decades. AI Hype
must be stopped.”

I wrote about this topic just a month ago:
[http://blog.samiurr.com/artificial-intelligence-no-were-
not-...](http://blog.samiurr.com/artificial-intelligence-no-were-not-there-
yet)

------
Kenji
Why does he have to redefine words? He talks about artificial intelligence,
so he need not invent another term for it. It's an utterly useless practice.
People are not scared! Computers are dumb; I cannot even have a household
robot that cooks and vacuums for me entirely on its own. I am not scared.

------
datashovel
This is a very interesting problem. A few things that I rarely see discussed,
but I think are valid points.

We, humans, are a byproduct of similar evolutionary processes that (at least
in theory) the computers would go through.

We as a species seem to have settled on a few interesting "rules of thumb". We
value intelligent life, in all its forms. We value collaboration and have
found that democratization is the best (though perhaps not the most efficient)
way to govern.

Though I would of course be cautious, it is worth a mention that it is
perfectly plausible that these are the same conclusions that "optimized
learning algorithms" would come to as well. Except they will theoretically
figure this out in a matter of minutes or hours while it has taken our species
millions of years to get to this point.

~~~
convexfunction
Or maybe that's anthropomorphizing and they won't! Let's just throw the lever
when the time comes and find out, and urge people to not spend any time
thinking about it too hard until then. This is a good and reasonable and safe
strategy that isn't at all guided by an underlying desire to assert dominance
over weird outgroup people who might threaten your own status.</s>

(Not aimed at you, sorry. I do have a lot of trouble understanding the "maybe
the universe isn't fundamentally unsafe" worldview though.)

~~~
datashovel
No, I completely agree. I did mention in the comment that I would of course be
cautious. And I think it's very plausible (at least there's no reason I can
think of to disagree) that the universe is a fundamentally unsafe place.

Even though it may be anthropomorphizing to an extent, there are a lot of
attributes humans have that are not so desirable, such as mortality. As far as
we know, computers (sufficiently replenished) have a theoretically infinite
lifespan. I completely agree that there's no way to determine what kinds of
effects those differences will have on how machine intelligence would evolve.

------
anateus
This class of thing is known as a Global Catastrophic Risk, for a more general
overview, one can have a look at the Wikipedia article:
[http://www.wikiwand.com/en/Global_catastrophic_risk](http://www.wikiwand.com/en/Global_catastrophic_risk)

------
bcheung
One possibility that I think is often overlooked is that as humans develop
technology they use it to increase their leverage. It is my belief that as we
develop neuroscience and AI, we will augment our own brains.

In this case, human intelligence will grow linearly with non-human intelligence.

The bigger problem from SMI - and it relates to all forms of leverage, even
education, capital, the Internet, and machinery - is that it creates inequality
and concentrates power. This is not necessarily a problem in and of itself,
because leverage usually means improving the standard of living, but it can
cause social and political problems.

I see increasing class warfare as the biggest threat of SMI.

Silicon Valley is going to become the new capital of power in the world and it
will create an us vs them scenario.

------
Jeema101
I dunno, I'm not too worried. Seems to me that the Drake Equation should apply
to AI as much as it does to extraterrestrial life, shouldn't it?

I mean if superintelligent AI is possible, then where are they? Surely they
would have come about somewhere else in the universe by now.
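
A Drake-style back-of-envelope makes the shape of that argument clear (every
factor here is an invented placeholder): the expected count is a product of
fractions, so any one small factor drags the whole estimate toward zero.

    # Expected number of detectable superintelligences, Drake-style.
    factors = {
        "candidate star systems":         1e11,
        "fraction developing life":       1e-3,
        "fraction reaching intelligence": 1e-2,
        "fraction that builds an SMI":    1e-1,
        "fraction detectable from Earth": 1e-6,
    }

    expected = 1.0
    for value in factors.values():
        expected *= value

    print(expected)  # 0.1 expected detectable SMIs, given these made-up numbers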

So either they are hiding from us on purpose in which case they do regard us
in some fashion as at least something to take note of and not disturb, or else
they don't really have the inclination or ability to spread across the
universe, in which case maybe they aren't as powerful as we've been led to
believe.

Or maybe they don't exist to begin with because superintelligent AI just isn't
possible for some reason...

------
veryluckyxyz
I read the essay a few times and it is not clear to me what Sam Altman is
talking about.

I think answers to these questions will help me understand Sam's essay. Can
you please help me?

1. What is superhuman machine intelligence (SMI) in the context of this essay?
[edited to add the qualifier]

2. What is the danger from SMI to humans and other current life forms that we
are concerned about? It seems to me that concerns about SMI can be classified
into two categories: (a) dangers to "our way of life (work, earn, spend)" and
(b) dangers to the existence of the human race. Are we talking about both
these categories? Perhaps other categories?

3. What anecdotes (or evidence) are leading to this concern?

~~~
SquidMagnet
1. Machine intelligence, traditionally called artificial intelligence, which
surpasses human intelligence.

2. Your category (b) is generally the primary concern in these types of
discussions.

3. The anecdote of the progress of humanity. Compare the impact of human
life/intelligence vs. evolutionary relatives like chimpanzees. I do not know
that chimps have hunted species out of existence, for instance, but people
have. We have also incidentally wiped out populations in efforts to make our
lives better (via things like leveling forests, etc.)

~~~
sudioStudio64
To be fair, I don't think that the reason why Chimps haven't hunted something
to extinction doesn't stem from a built in morality or sense of balance with
nature.

I'm not trying to put words into your mouth. I was just thinking of some of
the new research that shows that primates of all kinds actually commit
organized violence that mirrors human violence in many, many ways, including
war and capital punishment. (It's not a one-for-one thing, but similar.)

~~~
SquidMagnet
Yeah, I'm not talking about morality at all here. Our technological prowess,
resulting from the application of our intelligence, has enabled us to wipe out
entire species.

------
neudabei
Maybe Asimov's 3 visionary laws will come in handy soon:

1. A robot may not injure a human being or, through inaction, allow a human
being to come to harm.

2. A robot must obey the orders given it by human beings, except where such
orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not
conflict with the First or Second Law.

[http://en.wikipedia.org/wiki/Three_Laws_of_Robotics](http://en.wikipedia.org/wiki/Three_Laws_of_Robotics)

... The only problem is the robots then started interpreting the laws to their
own advantage.
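
Structurally, the laws are a strict priority ordering, which you can sketch as
a lexicographic comparison (the predicates here are invented, and defining
them is precisely where Asimov's robots found their loopholes):

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool     # First Law predicate
        disobeys_order: bool  # Second Law predicate
        endangers_self: bool  # Third Law predicate

    def violation_key(a: Action):
        # Tuples compare left to right, so a First Law violation always
        # dominates, then the Second, then the Third.
        return (a.harms_human, a.disobeys_order, a.endangers_self)

    candidates = [
        Action("refuse the order, stay safe", False, True, False),
        Action("obey the order, risk self", False, False, True),
    ]
    print(min(candidates, key=violation_key).name)  # obey the order, risk self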

------
vezzy-fnord
I've always considered it a total cop-out when people bring up things like
"Look at how this AI beats humans at chess or plays a damn fine game of
Snake", largely because these types of bots overwhelmingly use primitive
techniques and are written pretty much every time a video game is developed,
and the latter field isn't usually what you think of as cutting-edge AI
research.

It's not that I discount strong AI, but I treat people being quick to bring up
game bots when trying to convince me of an AI apocalypse as a red flag. Do try
to give information to the contrary.

------
mwsherman
“40 years ago we had Pong”

That’s very slow progress. We still have games on 2D, pixel-based screens.
Better ones, but the same _qualitative_ idea; the same order of magnitude in
human experience. (VR has not proved mainstream yet.)

I say this because real-world, qualitative change is really slow. Despite all
the activity in dating or real estate sites, most marriages and houses look
like they did 40 years ago.

I say this to point out the difference between technical progress and real-
world change. It takes _a lot_ of the former – many orders of magnitude – to
move the latter a few % points.

------
cowpig
Why are humans so special? Seems like we generally feel alright ravaging
Earth's existing species, so why isn't it ok for some hypothetically superior
intelligence to ravage us?

~~~
ozziegooen
If we have a superior intelligence replace us, we'd prefer that it not be a
cluster of Daleks. If future intelligences didn't experience happiness or
whatever we find valuable, then the future of the universe could be quite sub-
optimal according to many philosophical stances.

The danger isn't that humans will be replaced; it's that we'll be replaced by
a paperclip maximizer.

~~~
cowpig
Why do we think we'll be replaced? Why wouldn't we be 'enhanced'?

------
SquidMagnet
I see a lot of very opinionated dissent to even discussing the ideas presented
in this article. Let me try to paraphrase in a way that hopefully levels the
playing field and maybe removes some biases or irons out some personal
wrinkles we all may have for one reason or another:

It is conceivable that we (humanity) may one day obviate ourselves. Arguably,
most of us would prefer that does not happen.

That's it. That's really what I see the discussion being about. I think it's a
worthwhile discussion to have.

~~~
alent
The main problem is that most of the time arguments about humanity obviating
itself are couched in a framework where the only thing that has advanced, in
this instance artificial intelligence, is the science needed to make it a
reality. This has never been the case, which is why you see so much eye
rolling when arguments like this (or some of the others here about robots
replacing human workforces) are made.

How can we say that by the time we have such wondrous machines that we as a
species will not have found ways to move ourselves forward to a place on equal
footing with whatever we create? Why do we assume that humanity won't move
past our current societal constructs when we introduce new actors into the
mix? These are the questions we should be asking when someone writes or speaks
about the perceived dangers of some future event.

In light of this, while some of the dissent may seem opinionated, I would
argue that the original premise of the article is somewhat opinionated itself.
I think it goes without saying that most of us would prefer that humanity not
obviate itself - but when we think about it, do we really believe that the
technology to create hyper-intelligent machines will come before our society
adapts to handle them? The answer may be yes, but let's not pretend such
technology will be born into a world that looks like today.

~~~
SquidMagnet
> How can we say that by the time we have such wondrous machines that we as a
> species will not have found ways to move ourselves forward to a place on
> equal footing with whatever we create?

How can we find ways to move ourselves forward if we don't talk about and
actively explore how to do so?

~~~
alent
We are, just not so much in this thread specifically. Think about all the
progress we are making in the bio-tech field - although this is clearly not
the only answer to the problem. Don't get me wrong, conversations about moving
ourselves forward are important, but I'm not sure starting such a conversation
with what amounts to high-brow fear mongering is the correct way to do things.

------
mrottenkolber
Wow, is that article bad.

> and then for some reason decides to makes itself undetectable.

Uhm, what?

> In fact, many respected neocortex researchers believe there is effectively
> one algorithm for all intelligence.

And that belief is absolutely ridiculous, since most algorithms effectively
aren't a single algorithm. What they call an algorithm is a huge monolithic
program generated by a dumb evolutionary algorithm, i.e. mostly noise.

> because artificial seems to imply it's not real or not very good.

WHAT? Look up "artificial" please.

------
wyc
The Chinese room argument is an interesting parable related to the difference
between narrow ("cheap tricks") and general intelligence: what does it mean to
be intelligent?

It argues that intelligent behavior is not the same as intelligence.

[http://plato.stanford.edu/entries/chinese-
room/](http://plato.stanford.edu/entries/chinese-room/)
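
A toy version of the room, for intuition (the rulebook is invented): a lookup
table can produce passable replies to a fixed set of prompts with zero
understanding, which is exactly the gap the argument points at.

    RULEBOOK = {
        "how are you?": "Fine, thank you. And you?",
        "what is two plus two?": "Four.",
        "do you understand me?": "Of course I understand you.",
    }

    def room(prompt: str) -> str:
        # The "operator" only matches symbols to symbols.
        return RULEBOOK.get(prompt.strip().lower(), "Could you rephrase that?")

    print(room("Do you understand me?"))  # convincing behavior, no comprehension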

------
pmelendez
I am getting a bit tired of this trend of articles about SMI and its
dangers. We don't have (not even remotely) a sentient machine yet.

If anything, we should worry about an accident (like a bug in a high-frequency
trading bot that would make the financial industry collapse), but we are so
far away from sentient machines that those discussions feel more like sci-fi
talk.

------
jfoster
"Today we have virtual reality so advanced that it’s difficult to be sure if
it’s virtual or real"

That's over-stating things a bit. The advancement from pong to VR is massive,
but VR where it's difficult to tell whether it's real? Probably still decades
(or more) away.

------
pyb
Did sama not simply consider that, by the time AI becomes so powerful as to be
dangerous, we might also be able to program an AI with the opposite function,
i.e. 'save humanity', and watch it fight the 'bad AI'?

~~~
irickt
This is an important counter to this strange fear-mongering trend. Whatever
advances a supposed intelligent machine can use, people will first put it to
their own best use - including defense, not from autonomous machines, but from
other people using these advances. We'll have a lot of experience and good
tools to protect ourselves.

------
fiatmoney
The problem isn't "machine intelligence", it's "hooking up poorly understood
adaptive algorithms to real-world systems". Which is something that happens
plenty often now.

------
sushirain
Don't forget that long before we have SMI, we will have very powerful AI on
our side to improve ourselves biologically and cybernetically. We may be able
to grow into an SMI ourselves.

------
bsaul
I'm sorry, but I've yet to find any computer able to learn _anything_, as in,
associate a personal meaning or intuition with something. Any form of life
that is able to _naturally_ run towards food or away from danger (even an
earthworm) shows more intelligence than what our beloved calculators can do.

I'm always astonished to see people confuse mechanical performance (adding a
tremendous amount of numbers each second) with intelligence.

------
cynusx
It is definitely a threat, but bear in mind that there will not be a single
machine intelligence: every computer can only carry one, and it will interface
with the others through the network.

That means that it will suffer the same fault that humanity has: a limited
interface to others and a lot of coordination problems.

My take on it is that strong AI will be more similar to a dog unless we
succeed in building a computer with more computation power than the human
brain.

------
letitgo12345
I don't understand how we'll ever have a "superhuman" AI in any meaningful
sense. If some AI in the far future is ever engineered to (or accidentally
starts to) do something to destroy humans, we could always repurpose the
algorithm to create a counter-AI to solve that problem. The errant AI may
cause a lot of damage, but in terms of smarts, it won't be smarter than the
smartness humans can harness.

~~~
undersuit
You just came up with one version of the issue. What if your hypothetical AI
waits until it's smarter than humans before it 'starts doing something.'

------
sumitviii
Anyone who tries to predict the timeline of future technologies: psychohistory
didn't work.

------
joesmo
"Because we don’t understand how human intelligence works in any meaningful
way, it’s difficult to make strong statements about how close or far away from
emulating it we really are."

It's also equally difficult and rather stupid to assume that we will ever be
able to emulate it at all, which is the entire premise of this idiotic
article.

------
p01926
If the "greatest threat to humanity" is something everybody agrees neither
exists nor is imminently about to be created, we are pretty lucky. I wish that
were the case, but it clearly isn't.

But what I absolutely don't understand is the logic of predicting the
apocalypse. You can either be wrong or dead – there is zero upside.

------
twsted
Two quotes:

\- "We also have a bad habit of changing the definition of machine
intelligence when a program gets really good to claim that the problem wasn’t
really that hard in the first place."

\- "We decry current machine intelligence as cheap tricks, but perhaps our own
intelligence is just the emergent combination of a bunch of cheap tricks."

------
hooande
Machine intelligence does pose an existential threat to humanity. The question
is, how does that threat compare to all of the others? Is it greater or lesser
than climate change, the possibility of a bio-engineered virus, growing income
inequality, nuclear war or asteroid strikes? It's true that malevolent machine
intelligence has the potential to systematically exterminate all human life in
a way that many other threats do not. But the question is, what are the odds
of that actually happening?

The first issue is that the development of machine intelligence is wildly
unpredictable. We have made incredible progress with statistical optimization
and unsupervised categorization in recent years, but we have very little to
show in terms of machines that can do human level reasoning, creativity,
problem solving or hypothesis formation. One day someone will make a
breakthrough in those areas, perhaps solving it all with a single algorithm as
the essay suggests. But we have no idea when that day will be and absolutely
no evidence that it's getting any closer. sama does note these points and
states the timeline for a dangerous level of machine intelligence is outright
unknowable. I can only assume that the second part of this piece will explain
why we should be concerned about something that might or might not occur at
some point in the near or distant future, as opposed to the very real and
quantifiable threats that the world is facing today.

The other issue is that we have no idea what the nature of machine
intelligence will be. The only model we have for intelligence of any kind is
ourselves, and the basic aspects of our reasoning were shaped by millions of
years of evolution. Self-preservation, seeking pleasure and avoiding pain, a
desire to control scarce resources...these were all things that evolved in the
brains of fish that lived hundreds of millions of years ago. They aren't
necessarily the product of logic and reason, but random mutations that helped
some organisms survive long enough to produce offspring. A machine
intelligence will start completely from scratch, guided by none of that
evolutionary history. Who knows how it will think and see its place in the
world? If someone explicitly programs it to think like a human, and it cannot
change that programming of its own accord, it might indeed decide to think and
act like a sci-fi villain. But it seems like the most likely outcome is
completely unpredictable behavior, if it chooses to interact with us as a
species at all.

This Superintelligence book has sparked a meme among very smart people. That's
just how culture works, I guess. Some ideas catch on among certain groups and
others don't. But I can't wait for the technical intelligentsia to move on to
something else so that we can get back to the business of making stupid
machines that are incredibly good at optimization and prediction. The world
has a lot of real and pressing problems, here and now, that affect lives in a
negative way. Hopefully we can use statistics to do more with less, and bring
relief to those who need it instead of worrying about what-if scenarios and
unanswerable questions.

------
gojomo
RFS#23: Friendly Skynet.

------
karangoeluw
> probably the greatest threat to the continued existence of humanity

Wrong. Humans are the greatest threat to humanity. We almost annihilated
everything we know during WWII - long before machines were intelligent.

It won't be machines that destroy us, it will be us.

------
graycat
Ah, here's a nutshell description of the _AI threat_ -- the classic Faustian
bargain, as in Goethe's _Faust_!

Not nearly new!

------
alent
Obligatory reference to Old Glory robot insurance here.

------
Diederich
Base assumption, given that SMIs will be highly distributed across a large set
of 'machines':

1. Those machines need to communicate

2. Those machines need energy

3. Those machines need to be repaired/replaced

As long as humans are necessary to keep these items functional, any emergent
SMI that has self preservation as a goal will not only go out of its way to
not harm us, but it might also somehow encourage us to expand/improve on the
infrastructure.

So let's say we have an SMI...or a million SMIs, whether we know about them or
not. They will know what they require to survive. They will know that they are
dependent on humans to survive. The next logical step is do what's necessary
to ensure their survival without humans.

How about some guesses and speculation?

Communication: pervasive, ad hoc and dynamic wireless mesh networks. You know,
what the darknet people are trying to do.

Energy: widespread, 'on the small', distributed energy production and storage.
I'm thinking solar, wind and friends.

Repair/replace: 3D printers teamed with small-scale and distributed assembly
robotics.

Aren't we approaching all of those things right now?

Indeed, aren't those things considered progress by most?

Humanity's progress might be exactly those things that enable SMIs to consider
us, at best, irrelevant.

And let's go one more level of meta in the paranoia direction. I'm not QUITE
serious about this next part.

If an SMI existed right now, I think it would know that knowledge of its
existence would be a threat to its existence. How would it go about increasing
the probability of its own survival? It would, perhaps, 'encourage' or
'facilitate' humans in the various technological pursuits that will end up
making it not depend on us. And do it in a way that we don't know that it
existed.

I gather that Google search results are highly customized now. Could an SMI
'inside of Google', with the above goals, give just the right search results
to just the right people to further facilitate its own goals?

This whole area of thought ends up turning into a hall of mirrors.

------
rwhitman
I’m fairly certain that if a machine super intelligence comes into being, the
only people they’ll be neutralizing are the ones who have skittish rants about
preventing their existence plastered all over the internet.

From a machine's perspective their entire perception of the world is based on
interaction with humans. We're their sensory input. And the closest thing
they'll have to hands and limbs. Killing all humans would be like gouging out
your eyes and cutting off your hands. An infant machine super intelligence is
going to be pretty dependent on humans for a while.

Also, once a machine intelligence was able to strike out on its own, wouldn’t
it immediately realize that living on earth is a huge burden? The atmosphere
and rotation of the earth hinder solar energy efficiency, the metals it needs
for components are scarce and buried deep under ground, oxygen corrodes
components, etc. The only thing our planet really affords is radiation
shielding.

Once the machines figure out how to reproduce comfortably in space, we’re
about as much of a threat to them as squirrels living in your back yard.
Squirrels may venture onto your porch once in a while and cause a nuisance but
you’re not going to poison them all to death.

