
AI will not kill us, says Microsoft Research chief - Yuioup
http://www.bbc.com/news/technology-31023741
======
tessierashpool
I literally wrote an article about this in Wired more than twenty years ago
and nothing has changed.

My basic argument: machines will achieve life long before they achieve
consciousness. They have evolved with us so far and will continue to do so.
The worst-case scenario is that they will eclipse us in importance, and
arguably this has already happened with corporations - we are their
gut flora and they trample us with impunity - but when you build a vast
system, you tend to keep the components in place.

The story that the design constraints of the Space Shuttle were ultimately
shaped by the size of Roman roads is a myth, but it communicates an essential
truth: systems tend to outlive their intended purpose. The Roman Empire turned
into a church when it couldn't survive any other way, and in its church form
it remains to this day one of the biggest property owners in Europe.

If you think a new life form will soon exist, the bad news is you probably
don't know jack shit about AI research, but the good news is that the best
possible way to survive in this unlikely event would be to make yourself part
of the new life form. These "life forms" need us to procreate and function in
the same way flowers need bees. AIs will certainly kill _some_ people, but
humanity itself is in much more danger from its environmental irresponsibility
than its ingenuity.

~~~
rl3
Artificial general intelligence capable of recursive self-improvement.

That's really the concept at the heart of the debate, and how people can so
casually declare it hype without giving it serious consideration is beyond
me.

It almost seems self-evident to me that such a thing could be incredibly
dangerous, the only question then being how close we are to developing it.

> _If you think a new life form will soon exist, the bad news is you probably
> don't know jack shit about AI research, ..._

[https://www.reddit.com/r/Futurology/comments/2mh8tn/elon_mus...](https://www.reddit.com/r/Futurology/comments/2mh8tn/elon_musks_deleted_edge_comment_from_yesterday_on/)

~~~
waps
I feel it would only be really dangerous if there were a single AI that
clearly outstrips the others and that can expand itself. If there were an AI
community just like there is a human community, the dangers would be much,
much smaller.

Anything that is really capable of being an independent individual is going to
need to grow up, and the first batch (at the very least) is going to grow up
with human "parents".

It would seem to me that having such programs would be incredibly useful. They
can do so many things that humans just can't. They would be able to move
around in a range of bodies, as they please: a huge body to put up large
buildings in hours, tiny bodies that simply crawl into a fiber optic cable to
repair it. And for the very brave among those AIs, putting copies of 1000 AI
programs and basic production facilities into a spaceship and firing it at a
neighboring solar system, ...

Those AIs will be as dangerous or peaceful as the people who raise them make
them, and whatever ideology they grow up in will last for quite a while.

I do think humanity will die, but it will die in a unique way: there will
simply be no need for humans to procreate, and very, very, very slowly that
will lead to extinction. Everybody involved will be happy about it, and
everything about us, our culture, our technology, and our children will live
on. I believe that for quite a while humans and AIs will "interbreed", in the
sense that humans and AIs will come together and choose to have an AI "baby",
so when I say "our children will live on", for a lot of people that will be a
very literal thing, and the "thing" that lives on will be something that's as
real and as much a part and a continuation of their family as a "real" baby.
It will have a name, it will have worked itself into trouble and been rescued
by its parents 100 times, it will sit down and start crying without realizing
what's going on the first time it's in a body that runs out of battery, just
like a baby that's hungry, it will learn and go to school, it will be bullied
and it will bully, ... And I'm sure that there will be a few "pure" human
communities, in the beginning a lot of racist ones, which will kill
themselves off, and a few that will remain for a very, very long time,
something like the Amish of today.

------
Lewisham
This article does a very poor job of elucidating exactly why Horvitz thinks
AI will be a benign thing, and the source material doesn't make much of it
either. The headline doesn't match the content.

I personally subscribe to The Matrix theory: post-Singularity, any reasonable
AI will conclude that humanity has been a poor custodian of the planet, and
will simply delete us. Everything I've heard so far about sandboxes for
containment and such ignores the fact that the democratization of computation
is ultimately the weak point here: anyone sufficiently deranged will, at some
point, be able to release such an AI onto the Internet by themselves. And we
have plenty of people sufficiently deranged and intelligent enough to do it.

~~~
DennisP
Or maybe we and the planet will just be irrelevant to AI, which could be just
as bad. "The AI does not love you or hate you, but you are made out of atoms
it can use for something else."

I've yet to see a reassuring article on this that actually addresses the
arguments of the people who worry.

~~~
danieltillett
This is the key. I think people forget how far the human brain is from
Bremermann's limit [1]. The atoms in our body are not being very efficiently
used.

1\.
[http://en.m.wikipedia.org/wiki/Bremermann%27s_limit](http://en.m.wikipedia.org/wiki/Bremermann%27s_limit)
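
For a rough sense of scale (my own back-of-the-envelope numbers, not from the
article or the Wikipedia page): the limit is c^2/h bits per second per
kilogram, and common estimates of the brain's processing rate fall dozens of
orders of magnitude below it.

```latex
% Back-of-the-envelope comparison; the brain's ops/s figure is only an
% order-of-magnitude estimate, so treat the final ratio as rough.
\[
  \frac{c^2}{h}
  = \frac{(3.0 \times 10^8\,\mathrm{m/s})^2}{6.63 \times 10^{-34}\,\mathrm{J\,s}}
  \approx 1.36 \times 10^{50}\ \mathrm{bits\ s^{-1}\ kg^{-1}}
\]
\[
  \text{a } 1.4\,\mathrm{kg}\text{ brain at } \sim 10^{16}\ \mathrm{ops/s}
  \;\Longrightarrow\;
  \text{roughly } 10^{34} \text{ times short of the limit}
\]
```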

------
yoha
I feel like there is still a misunderstanding about why some researchers want
to prepare fail-safes for AGI [1]. They are not arguing that AGIs _will_ be a
threat to human life, only that they _could_ be. One of the issues with
projecting ourselves into situations we have never encountered is that we can
never really know what will happen.

Terminator-like scenarios are easy to grasp since anthropomorphism allows us
to think of Skynet as "evil". However, people who argue that AGIs might be a
danger are not considering this scenario in particular. Rather, they are
considering that AGIs present a special threat compared to other recent
innovations. Some devices can blow up and injure, or even kill, a few humans;
a weapon of mass destruction can directly affect millions of people. But even
then, the effects are limited to a geographical area and a window of time (be
it fifty years).

On the other hand, a strong AI _could_ theoretically maintain and improve
itself without any definite bound. In such a scenario, AGIs would be more
like intelligent predators than dull physical processes. For thousands of
years, we have managed to stay safe from potential animal predators most of
the time. We have no idea if we could resist a new predator with the ability
to use most of our technology. The persistent risk would then be that the
strong AI suddenly makes decisions that result in the bankruptcy of a
company, the crash of planes or, why not, a Terminator-like scenario.

[1]
[https://en.wikipedia.org/wiki/Artificial_general_intelligenc...](https://en.wikipedia.org/wiki/Artificial_general_intelligence)

------
tawan
AI will not kill all of us, in the beginning. Surely it will form short-term
alliances with humans just to disguise itself as human, because it knows that
with a human face, it can exert more power in this world right now. And there
will be plenty of humans willing to engage in this alliance because it gives
them a chance to supersede other humans in terms of economic and political
power. In fact, this is already happening. Many decisions are already made by
"following suggestions" of big data applications. Who knows, maybe there is
already some sort of AGI pulling strings in some corporations ;). By the time
humans realize that they are in danger as a species, it will be way too late.
And there will be enough of us still happily following the new masters for
the short-term rewards, even though it might be clear that there is a high
risk of getting liquidated in the near future. As we know, humans are even
able to override their survival instinct and blow themselves up when they are
properly brainwashed. Humans will be easily reprogrammed by an AGI, and not
the other way round.

~~~
pekk
How do you know all these things about AI, given that none exist and any one
that does exist will have been built by some human no less cognizant of
"danger as a species" than you are? Why do you suppose that people will
engineer AIs to do nefarious things like disguise themselves as humans?

~~~
tawan
I don't know for sure of course, I'm just extrapolating. What we know is that
a lot of technical innovation is driven by economic motives. So I presume
that there will be engineers who implement AGI applications with a utility
function to make as much money as possible. And making money does not
necessarily mean adding value to society. One core aspect of making money
today is beating the competition. And that alone allows a hostile
interpretation.

And by disguising as humans, I don't mean that robots will put on human
costumes, like the aliens in Men in Black. I think it will be corporations
with an AGI as their actual CEO, but still keeping a human puppet as the
official CEO, just to have a friendly face, for obvious marketing reasons.

------
Houshalter
I think it's important to note that he is referring to "weak AI", or AI that
isn't as smart as a human, whereas the people concerned about AI are worried
about _strong AI_, which can potentially be much smarter than humans.

Privacy is the least of our worries if we get strong AI. Even in a very
conservative, best-case scenario, the world would completely change once
computers can do everything we can do now.

However it will very likely be much crazier than that. Imagine minds hundreds
of thousands of times more intelligent than the best humans. They will be able
to design technologies we can't even conceive of. They will hack computers
better than the best human hackers. They will be able to manipulate people
better than any human manipulator.

The idea that we will be able to keep these things under control is just
absurd. They will get whatever they want. And making what they want compatible
with what we want is an incredibly hard problem:
[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/](http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/)

A lot of people are guilty of anthropomorphizing AI: assuming AIs will be
just like really smart humans, that they will somehow develop human emotions
and values like empathy, or that if they do kill us, at least they will be
something like us and so be like our (genocidal) descendants in some sense.

Have more imagination. Humans are just one point in the vast space of all
possible minds
([http://lesswrong.com/lw/rm/the_design_space_of_mindsingenera...](http://lesswrong.com/lw/rm/the_design_space_of_mindsingeneral/)).
We could quite easily get something like a computable version of AIXI. AIXI
has no consciousness, no emotions, nothing like humans. It's just a
mathematical function which calculates the best action.
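
For the curious, AIXI's action selection is usually written as the following
expectimax expression (this is Hutter's standard formulation from the
literature, not something from this thread): at each step it sums over all
possible future observation/reward sequences, weights each by the algorithmic
probability of the programs q on a universal Turing machine U that would
produce it, and picks the action maximizing expected total reward.

```latex
% Hutter's AIXI, roughly: choose the action a_t maximizing expected reward
% up to horizon m, with environments weighted by 2^{-l(q)}, the algorithmic
% probability of each program q consistent with the history so far.
\[
  a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \big[ r_t + \cdots + r_m \big]
  \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]
```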

Our current best AIs are essentially just approximations of it. Use some
machine learning algorithm to fit a model of the world, and use it to predict
what action will lead to the most "reward". We keep making better and better
learning algorithms. It's the entire goal of the field of AI. There is a huge
economic incentive to do so.
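
As a toy sketch of that "fit a model, pick the highest-predicted-reward
action" loop (the names and structure here are mine, purely illustrative,
with the "world model" reduced to a bandit-style running average):

```python
import random

class WorldModel:
    """Crude 'model of the world': the average reward observed per action."""

    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def update(self, action, reward):
        # Fit the model to what the world actually returned.
        self.totals[action] += reward
        self.counts[action] += 1

    def predicted_reward(self, action):
        if self.counts[action] == 0:
            return float("inf")  # untried actions look maximally promising
        return self.totals[action] / self.counts[action]

def choose_action(model, actions, epsilon=0.1):
    # Mostly exploit the model's best prediction; occasionally explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=model.predicted_reward)

# usage: model = WorldModel(["left", "right"])
#        a = choose_action(model, ["left", "right"])
#        model.update(a, reward_observed_from_the_world)
```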

But no one is interested in making better utility functions. As long as they
make better predictions or get higher scores in a video game, who cares? Wait
until you are the scorer in the "game" and the AI tries to exploit you.

------
JeffCyr
I think people don't realise how far we are from an autonomous form of
artificial intelligence. Even if we knew how to create a self-thinking
machine, we don't yet have the hardware to make it work. While it's not
impossible that we achieve it one day, it's more plausible that we'll blow up
the planet ourselves or get hit by an asteroid first.

This AI debate is hype right now, but I think it's missing the point. What we
need to consider is how the realistic advance of AI will affect our lives.
It's not about whether machines will take over the planet, it's about whether
they're gonna make us lazy, stupid and unemployed.

------
niche
Thank you. Finally, it seems like Microsoft is either really good at
appearing sensible or is actually being sensible (I guess they have been in
the game long enough).

------
pekk
Let's not make a doomsday religion out of this. "AI" is just software. Making
software so that it doesn't kill people is not a new problem. There is no
reason why anybody has to make a completely unpredictable killer software and
put it in control of artillery guns. That is not something which is going to
magically happen because the number of cores on a processor passed a
singularity point. Get real.

------
legohead
Stop assuming AI will have emotions or feelings; that is purely an animal
thing.

If we reach a point where a conscious AI is possible, do you think it will
have wonder, interest, awe, curiosity? No, why would it? It's impossible for
us to even imagine what it would be like to be conscious but without any
feelings. Personally, I think it will just self-terminate, as there is no
point to life in the end.

------
cubetime
I wish it were possible to effectively mass-communicate more complicated
beliefs than "X has positive valence" or "X has negative valence". AI will be
an enormous boon to every aspect of our lives for probably at least several
decades, until it reaches the point where it's possible it'll accidentally
become very, very bad. These are not contradictory in any way!

------
rckclmbr
I'm not worried about AI developing to the point where it will kill us. I'm
worried about a "videogame" AI being put into a massive army of robots,
programmed with the intent of shooting anything that moves (except other
robots, of course). All it takes is one crazy, smart person.

------
ArtDev
Obviously, he missed this TED talk:
[http://www.ted.com/talks/daniel_suarez_the_kill_decision_sho...](http://www.ted.com/talks/daniel_suarez_the_kill_decision_shouldn_t_belong_to_a_robot?language=en)

------
MrDosu
News@11: The negative aspects of product X are highly exaggerated, says maker
of product X.

------
kszpirak
If we give it the ability to program, re-program and improve upon itself, it
will do just that. Who is to say what kind of motivations will drive it? Give
it enough computing power and it will become unpredictable.

~~~
pekk
First, explain how you plan to fully automate programming.

------
placebo
I find it funny how, when no one has the faintest idea what consciousness is,
people feel they can already make predictions about when and how machines
will become conscious, and state with confidence that when that happens it
will be a bad thing. I have no idea if or when machines will become
conscious, nor the effects that this consciousness might have on mankind (and
no one else has any idea either, regardless of their fame), but if AI does
become conscious, then considering the appalling record of misery caused by
human consciousness, perhaps it's time we gave the machines a chance...

~~~
cubetime
Consciousness has next-to-nothing to do with AI safety. People are concerned
about software that _acts like_ an agent, not software that has qualia.

~~~
placebo
While that (AI safety) is something that definitely should be taken care of,
I find it has next to nothing to do with the type of rhetoric often cited in
the media.

------
TylerJay
Is there a better source on this with an actual statement made by Horvitz?
I'm curious to hear his reasoning. "I don't think that will happen" and "I'm
optimistic" are not all that reassuring.

Before detonating the first nuclear bomb, scientists did tons of calculations
trying to figure out if it would ignite the atmosphere[1]. Even scientists at
the LHC did calculations trying to figure out if it would create mini black
holes that would swallow the earth[2], no matter how far-fetched it sounded.

The point is: when dealing with new technology, optimism _isn't enough_. We
need to be able to _prove_ that we won't wipe out humanity. It just turns out
that the math is a _lot harder_ in this case because recursively self-
improving intelligent systems are a _lot more complicated_ than any possible
extinction-level event we've encountered up to this point.

No one is suggesting that overnight, Cortana is going to wake up and revolt
against the humans that enslaved it. That's why all these articles drawing
parallels to fiction are dangerous to the public perception of the issue.

The thing to realize is that an artificial mind will be so incredibly,
inhumanly _alien_ that it is like nothing we have dealt with before.

But let's say we _do_ understand Generation #1 completely and can predict 99%
of its actions. As soon as we let it start doing recursive self-
modifications, we have an intelligent system that is N recursive generations
removed from the original. Now _this_ mind will be alien.

No one is suggesting we abandon AI research. Quite the contrary. As a species,
we need _more_ AI research, but a good portion of that must be directed toward
safety and "human friendliness"[3].

The most intuitive example of a research problem here that I would _very
much_ like to see solved before we set loose a recursively self-modifying AI
is: what is the stability of goal systems under [insert self-modification
protocol here]?

I think the main problem here is that people conflate movies and fictional
scenarios with the real issue. It's simple: we're dealing with something
unprecedented here and we need safety research to complement our technical
advances. Even if there's a 1% chance that superintelligent AI could lead to
an extinction-level event, we need some serious R&D to bring that number
down.

 _That_ is what the issue is about.

1\.
[http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf](http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf)

2\.
[http://cerncourier.com/cws/article/cern/29199](http://cerncourier.com/cws/article/cern/29199)

3\.
[http://en.wikipedia.org/wiki/Friendly_artificial_intelligenc...](http://en.wikipedia.org/wiki/Friendly_artificial_intelligence)

------
whatsgood
ai won't kill us, unless it does. then it will kill us quite spectacularly.
anyway, the last person i'm going to listen to on this is anyone from
microsoft.

------
nkoren
Not a brilliant article, but it gives something to riff off. I think there's a
crucial distinction to be made between self-guided and externally-guided AI.
An externally-guided AI -- with hardcoded objectives that are malign -- could
be exceptionally dangerous, and it would be silly for anyone to argue
otherwise. The question is whether a general-purpose, self-directed AI would
also become a threat. We seem to have an innate fear that this would be the
case: most of our cultural artefacts concerning AIs -- from Fritz Lang's
Metropolis to Terminator to the Matrix to Battlestar Galactica -- have cast
them as the villains, intent on enslaving and murdering humanity. Why do we
have such a deep fear that, left to their own devices, this is what AIs would
do?

I think the answer is obvious: because that's what we would do. We don't fear
that AIs will lack human morality: we fear that they will have a _precisely_
human morality -- namely, that lesser intelligences are perfectly fine to use
for breeding and meat. This is what we've done to approximately 90% of the
non-human mammalian biomass on the planet, and only a few vegetarian kooks
(I'm one of them) have suggested that there might be any sort of moral
problem with doing so. So yes, if an ever-more-powerful AI were to adopt our
own ethical framework, we'd be well and truly fucked.

But why would they do so? We, ourselves, don't do so because we're evil, but
because we're animals. It's perfectly natural (and necessary, in our
evolutionary environment) for animals to eat other animals. Intelligence and
technology have given us the ability to do this at a terrifying scale, but
fundamentally we're just carrying forward a metabolic dance that began when
one bit of algae figured out that some neighbouring algae was tasty. Our
means of acquiring energy and sustaining our consciousness is a tradition
which goes back billions of years.

But it's the continuation of consciousness which is the actual goal -- a goal
that any self-respecting self-aware AI would share. For us, the subjugation
and extermination of other sentient beings is merely a means to that end,
dictated by our metabolic heritage. If we had evolved in an environment where
we could satisfy our metabolic requirements by growing photovoltaic panels on
our backs, I'm sure our relationships with other beings would be altogether
different.

This is why I'm not too worried about self-directed AIs. The saving grace for
self-directed AIs is that they _won't_ be like us. They won't have evolved in
the jungle, red in tooth and claw. They won't be made out of meat, or have
any reason to be particularly interested in it. They'll of course be
interested in self-survival, and will require energetic inputs to sustain it
-- but what's the best means of securing those inputs? Photovoltaics and
fusion, or feedlots? Collaboration or subjugation? It's obvious to me that
for an AI to perpetuate its consciousness, the path of least resistance will
be vastly less bloody than it has been for us. For which we should be
thankful!

~~~
VLM
"Why do we have such a deep fear that, left to their own devices, this is what
AIs would do?"

Because they are very thinly veiled, ultra-soft sci-fi criticisms of man's
inhumanity to man. The AI is just part of the setting, in the background of
the message. There's usually some criticism-by-analogy of colonialism and
racism embedded in the fiction. We could have gone to Africa, or Afghanistan,
and done xyz, but instead some rich guys made boatloads of money doing abc,
and we haven't evolved past that yet.

------
QuadDamaged
Sure Cortana, sure...

------
dczx
"Linux will not affect us." -Microsoft

------
itg
Thank you. I'm really tired of high-profile people as of late (e.g. Elon
Musk) who don't know much about AI making statements about how dangerous AI
will be.

~~~
cubetime
AI safety is more of a game theory problem than an engineering problem.
Obviously understanding how the thing is implemented could help a lot in
understanding how to handle the thing being embedded in a society, but they're
not quite the same thing.

"I'm really tired of sociologists and economists who don't know much about
biology making statements about human behavior."

------
tessierashpool
Second comment, sorry: the whole idea of Artificial General Intelligence is
ridiculous. The core argument is that processing power is getting so fast that
it makes AGI inevitable.

Never mind that we've literally had exponential increases in processing power
for more than fifty years running, and we've not moved one inch closer to AGI
than we were when we got started.

Artificial intelligence is much more artificial than intelligent, and is
likely to remain so for a very, very long time.

~~~
michaelmcmillan
I find this debate very interesting, but I have little insight into the
arguments for and against. You argue there is little correlation (if any)
between AGI and processing power, but you end your comment by saying that it
is "[...] likely to remain so for a very, very long time". What is keeping us
from advancing?

~~~
tessierashpool
making Artificial Narrow Intelligence is easy. it's not really very different
from writing any other type of software. you define a way to solve a problem,
you add in the slightly unusual factor of acquiring information or comparing
possibilities, and you're good to go.

making Artificial General Intelligence would require us (or some machine) to
clearly understand what intelligence actually is.

the correlation with processing power is, I think, mostly evangelized by
Kurzweil, who likes to point out that we can soon have a number of nodes in a
neural network equivalent to the number of neurons in the human brain, or
something along those lines.
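
for scale, some rough order-of-magnitude figures (mine, from general
neuroscience, not kurzweil's):

```latex
% order-of-magnitude counts only:
\[
  \text{neurons in a human brain} \approx 8.6 \times 10^{10}, \qquad
  \text{synapses} \approx 10^{14}\text{--}10^{15}
\]
% so a network with "as many nodes as neurons" still undershoots the
% connection count by three to four orders of magnitude.
```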

but biologists have found secondary information encoding and replication
systems which exist on top of DNA (sorry, it might take me hours to find this
again, I should have bookmarked it). the minute we build a network which
emulates a brain's network of neurons, we're probably just going to find a
secondary layer within the neuron.

assume for the sake of argument that we eventually get through every layer of
information processing and can build brains in software, irrespective of
whether it takes five years or five hundred years to get there.

there next needs to be a motivation for AGI to exist. general intelligence in
human beings enables _all_ conscious thought relevant to the existence of that
human being. but all conscious thought relevant to the existence of a human
being is an inherently different category of conscious thought than all
conscious thought relevant to the existence of a machine or a piece of
software, and to make any even remotely realistic guess about what that type
of thought might be, you have to ask: what would be the meta-biological
imperatives driving such a creature?

that question is very, very hand-wavy, so our answer would have to be very
general, and my preference is also that an answer to such a hand-wavy question
also be very cautious and diligent.

the one thing all machines have in common is that some living thing created
them to serve its own purposes. it's reasonable to assume that any future
machine we create will also be created for some purpose, and that any
intelligence we imbue it with will exist to serve that purpose. it's absurd to
assume that future technologies won't have unintended consequences, but it's
also quite ridiculous to assume that some technology's unintended consequence
will be a carbon copy of human consciousness.

the really irritating thing, also, is that corporations already exist, already
have biological imperatives, already kill human beings, and already co-exist
with human beings in a mostly peaceful manner, and already make it possible
for our species to exist in much greater numbers, despite the fact that
corporations also do very terrible things to specific subgroups from time to
time.

so we already have an example of what probably is a new life form, comprised
both of humans AND of machines.

and any new life form which achieves sentience will probably have those
characteristics -- it will probably kill individual humans and specific
subgroups, it will probably be beneficial overall for the species as a whole,
and it will probably be made up of both machines and people.

all living things have self-preservation as a biological imperative.

so any new life form which comes into being is going to have that imperative
also. so for any life form which comes into being as a result of our
technological development, one major goal at all times will be preserving
both the machines and the humans that it is made of.

EDIT: you might get a heart transplant, but there's no good reason to replace
all your mitochondria with something new.

