
Microsoft's Bill Gates insists AI is a threat - T-A
http://www.bbc.com/news/31047780
======
Houshalter
As Bill Gates said, I don't understand how more people aren't concerned about
this.

AGI would be by far the most significant technology ever invented. Even in a
very conservative, best-case scenario, the world would completely change once
we can have computers do everything we can do now.

However it will very likely be much crazier than that. Imagine minds hundreds
of thousands of times more intelligent than the best humans. They will be able
to design technologies we can't even conceive of. They will hack computers
better than the best human hackers. They will be able to manipulate people
better than any human manipulator.

The idea that we will be able to keep these things under control is just
absurd. They will get whatever they want. And making what they want compatible
with what we want is an incredibly hard problem:
[http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/](http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/)

A lot of people are guilty of anthropomorphizing AI: assuming AIs will be
just like really smart humans, that they will somehow develop human emotions
and values like empathy, or that if they do kill us, at least they will be
something like us, and so be like our (genocidal) descendants in some sense.

Have more imagination. Humans are just one point in the vast space of all
possible minds
([http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/](http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/)).
We could quite easily get something like a computable version of AIXI. AIXI
has no consciousness, no emotions, nothing like humans. It's just a
mathematical function which calculates the best action.
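
(For reference, Hutter's definition of AIXI's action choice is roughly the
following; I'm sketching it from memory, so read the notation loosely:

    a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
           [r_t + \cdots + r_m] \sum_{q : U(q, a_1..a_m) = o_1 r_1 .. o_m r_m} 2^{-\ell(q)}

That is, brute-force expectimax over every program q consistent with the
history so far, weighted by simplicity. Nothing in that expression has room
for empathy.)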

Our current best AIs are essentially just approximations of it. Use some
machine learning algorithm to fit a model of the world, and use it to predict
what action will lead to the most "reward". We keep making better and better
learning algorithms. It's the entire goal of the field of AI. There is a huge
economic incentive to do so.
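
Here is a minimal sketch of that loop (everything in it, from the toy
environment to the constants, is invented for illustration; it is not any
real system):

    import random

    ACTIONS = ["a", "b", "c"]
    TRUE_PAYOFF = {"a": 0.2, "b": 0.5, "c": 0.8}  # hidden from the agent

    q = {a: 0.0 for a in ACTIONS}  # the agent's learned model of the world
    alpha, epsilon = 0.1, 0.1

    for step in range(10000):
        # explore occasionally; otherwise take the action predicted to pay best
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = 1.0 if random.random() < TRUE_PAYOFF[action] else 0.0
        # nudge the model toward what was actually observed
        q[action] += alpha * (reward - q[action])

    print(q)  # estimates converge toward TRUE_PAYOFF; whatever pays most wins

Nothing in that loop ever asks whether the highest-paying action is one we
would approve of.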

But no one is interested in making better utility functions. As long as they
make better predictions or get higher scores in a video game, who cares? Wait
until you are the scorer in the "game" and the AI tries to exploit you.

~~~
DougN7
As a systems programmer for 25+ years, I find it impossible that a computer
will be a threat _unless_ it's programmed to be. I guess that makes me a
doubter in general AI. We can do great neural nets and fantastic
task-specific solutions, but until someone builds a system that decides to
kill people for its own benefit, I just don't see it happening on its own.

~~~
Houshalter
We can't really control even simple neural networks. You can reward one for
things like getting a higher score in a video game, but it's very difficult
to make it play the game the way you want it to. It just does whatever gets
it the highest score.
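
A made-up toy example of the gap between what you want and what you reward
(the "game" and its numbers are entirely hypothetical):

    # We *want* the agent to finish the level; we only *reward* score.
    def episode(policy, max_steps=100):
        score, finished = 0, False
        for _ in range(max_steps):
            move = policy()
            if move == "bonus_loop":      # a repeatable point exploit
                score += 10
            elif move == "finish_level":  # the behavior we actually wanted
                score += 50
                finished = True
                break
        return score, finished

    print(episode(lambda: "finish_level"))  # (50, True)    -- intended play
    print(episode(lambda: "bonus_loop"))    # (1000, False) -- what maximizing finds

Any optimizer judged only on the score will pick the second policy every
time.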

We can't use them to pilot self-driving cars because we have no way of
specifying the behavior we want. You can't let them get into all sorts of
accidents and learn from that. You can't just have them predict what a human
would do, because humans have slower reaction times and do make mistakes.

For now they are too stupid to be a real threat, but we are getting closer
and closer to real AI every year. AI can do things today that no one
predicted would be possible 10 years ago. This year it's expected to exceed
humans at vision and Go.

------
tomlock
I don't fear AI for the same reason I don't fear nanotechnology: nature
tried to take over the world first. The grey goo scenario, where nanobots try
to turn the world into more nanobots, resulting in the consumption of all
humans and buildings and bridges, seems scary until you realize that this is
_exactly the goal of all bacteria_. The idea of an entity that thirsts for
power and sees competing ideologies as a threat doesn't seem that unfamiliar
if you look at humans through history.

Additionally, AI-fearers have a grand theory that we'll be able to create a
machine that improves itself better than evolution has managed to.
Datacenters require maintenance, so the machine will need to organize all of
that before it can self-sustain. This may create a resource load that
diminishes its ability to take over the world, much like my need for food
diminishes my ability to do so. AI today seems to have a lot less redundancy
against the edge-failures that cause a permanent shutdown - the redundancy
that makes the human brain seem a little slow when computing floating-point
division.

The idea that an exponentially self-improving being will arise seems
unlikely when nature has been trying to do that for eons. The idea that we'll
be the ones to find the secret sauce seems unlikely, but maybe it's not so
surprising that people who made their millions, and saw a parabolic rise in
their own power thanks to technology, see it as a barely-constrained threat.

~~~
baddox
> The grey goo scenario, where nanobots try and turn the world into more
> nanobots, resulting in the consumption of all humans and buildings and
> bridges seems scary until you realize that this is exactly the goal of all
> bacteria.

Should we really be consoled by the fact that most bacteria are not successful
at such a large scale? So far there has been at least one type of bacteria
that caused massive global climate change and led to mass extinction:

[http://en.wikipedia.org/wiki/Cyanobacteria](http://en.wikipedia.org/wiki/Cyanobacteria)

[http://en.wikipedia.org/wiki/Great_Oxygenation_Event](http://en.wikipedia.org/wiki/Great_Oxygenation_Event)

~~~
oillio
And then the world fought back. Cyanobacteria kickstarted a massive spike in
bio-diversity.

Why wouldn't this be the same? The grey goo won't be released in isolation.
There will be other entities at the same technological level that don't want
to be eaten by the grey goo.

The world that exists after such an event probably wouldn't look anything like
the world before, but that doesn't mean it will be a wasteland.

~~~
baddox
Sure, the long term effects were great for a lot of species, including us. But
they weren't so great for the obligate anaerobes.

------
TheScythe
I think all of these smart people are making a connection between the
limitless negation of the skeptical mind and the most likely outcome of a
fully developed AI. They're presuming that AI species will be nihilists and
will apply no intrinsic value to anything at all, including us.

I think they're right. An AI species will value what is required to achieve
its programmatic goal. We're just not smart enough ourselves to figure out how
to ensure that we're included in the goals of such a program forever.

------
beams_of_light
It's odd that all of the very smart people of the world feel AI is a threat to
humanity. Couldn't we just like, not allow it to _be_ a threat in the first
place? One would think that morality gates would be one of the first hurdles
in creating AI.

~~~
monk_e_boy
It lives in a computer. Can't we just turn it off?

~~~
Turing_Machine
If it truly has human-level (or greater) intelligence, you're going to have
some ethical issues that crop up with turning it off. Even if you're
personally okay with it, there are going to be other people who aren't.

Waiting until it commits a crime and then having some kind of "trial" and
"execution" may not be an option. If it's smart/powerful enough, there may not
be enough time for that.

~~~
vinceguidry
That's not the problem.

The problem is that when we are reliant on intelligent systems to, say, detect
certain kinds of fraud in financial markets, then turning them off would have
huge deleterious effects all on their own.

~~~
Turing_Machine
It's not a problem _now_.

If we get machines anywhere near as smart as people, the AI rights
organizations will soon follow.

~~~
vinceguidry
You mean like the people trying to give plants rights? Sure, they might spring
up, doesn't mean they'll get anywhere. For better or worse, altruism will
forever follow economics.

The civil war didn't end the Atlantic slave trade; technology eventually
made it largely uneconomical, giving those who wanted to end slavery the
political ability to outlaw it. Even that didn't stop the slave trade. It
took extra-special attention from the British Navy to actually enforce a
slave trade ban on the open seas.

150 years later and black people are still getting the short end of the stick.

By the time AI gets real rights, it will be long past the time where most of
us believe they should get them.

~~~
Turing_Machine
It's not that simple. Slavery never took hold in much of Europe, for instance,
even though it might've made economic sense (serfdom was a different thing,
which had largely died out for complex reasons).

I note in passing that slavery still exists today, and is not now, nor has it
ever been, limited to black people as the victims.

"By the time AI gets real rights, it will be long past the time where most of
us believe they should get them."

I'm not sure this is really relevant to the point at hand.

We turn off "bad" AIs until "long past the time when people think they should
have rights". What happens then?

------
femto
There's also the aspect that AI can reduce the effort required to get a
computer to do something.

Today, if you want to use a computer to do something outside the box, you
have to invest some level of time and other resources. With AI, it's
conceivable that someone could issue a quick command and the computer would
quickly find a way to fulfill that command, good or bad.

One can envisage an "AI arms race", whereby white hat AIs will be
responsible for trying to stay ahead of black hat AIs, and the definition of
black and white will depend on which side you are on.

------
jsnathan
AGI will completely upend society by giving every human being access to
practically infinite resources, making even the poorest of the poor as wealthy
as today's billionaires. Couple this with extended life spans and intelligence
enhancement, and the entire (human) power structure as we know it today is
going to melt away like an iceberg into the vast oceans.

Please do not accuse me of hyperbole, for I have not even described a
fraction of the possibilities of this technology, as you probably know.

But the idea that it is therefore "dangerous", is nothing but a pre-theoretic
misunderstanding of very complicated machinery. When we finally do build it -
and that will be sooner rather than later - it will perform exactly according
to spec, and in no other way.

I may be talking nonchalantly, but the fact is I have seen nothing but the
_most absurd_ arguments supporting the idea that there is a great danger in
this: along the lines of "well, what if we told it to just go ahead and make
paperclips and then it decided it had to kill us all and use our bodies as
raw material?". Boy, oh boy. You think we might just program it not to do
anything so downright "retarded"?

And that _is_ the point. Not that it is bloodthirsty or cruel, but that it is
clearly in violation of the constraints that any sane designer would encode in
the software - and _test_ for before production.

>> nok (is-good-idea (turn-humans-into-paperclips))

Seriously, I have not come across anything but this kind of apocalyptic-sci-
fi-plot style fear-mongering; if there is any serious (technical) argument to
be made, I would very much like to hear it.

~~~
jessriedel
You really should think of AGI more as an amoral, extremely powerful
technology, like nuclear explosions. One could easily have objected that "no
one would be so stupid as to design a doomsday device", but that is relying
too much on your intuition about people's motivations and not giving enough
respect to the large uncertainty in how things will develop when powerful new
technologies are introduced.

(Reposting my earlier comment from a few weeks ago:) If you are interested in
understanding the arguments for worrying about AI safety, consider reading
"Superintelligence" by Bostrom.

[http://www.amazon.com/Superintelligence-Dangers-
Strategies-N...](http://www.amazon.com/Superintelligence-Dangers-Strategies-
Nick-Bostrom/dp/0199678111)

It's the closest approximation to a consensus statement / catalog of arguments
by folks who take this position (although of course there is a whole spectrum
of opinions). It also appears to be the book that convinced Elon Musk that
this is worth worrying about.

[https://twitter.com/elonmusk/status/495759307346952192](https://twitter.com/elonmusk/status/495759307346952192)

~~~
jsnathan
Don't take this the wrong way, but this book is precisely the kind of thing I
was talking about.

That paperclip idea I talked about is also something that Bostrom thought up
[1]. I didn't make it up.

If you have any argument in mind (from that book) that you find convincing,
please go ahead and state it outright. I'm truly curious.

Edit: In response to your edit(?), I do agree that the most worrisome problem
is some bad actor gaining control of this technology. But that is different
from saying it is dangerous in itself. Most anything can be abused for ill,
and the more powerful the more dangerous. I completely agree on that.

[1]:
[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)

~~~
jessriedel
The question of whether AI is dangerous in the hands of people with good
intentions is more difficult than the question of whether it's dangerous
generally. I was only trying to convince you of the latter, in response to
this:

> AGI will completely upend society by giving every human being access to
> practically infinite resources...But the idea that it is therefore
> "dangerous", is nothing but a pre-theoretic misunderstanding of very
> complicated machinery.

But to get at the harder question, I think you misunderstand the paperclip
story. It's _not_ supposed to be a general argument for the danger of AI, and
the danger is not that people will design a machine which can easily be
predicted to fail. The danger is that they will design one that fails for
reasons they did not foresee, and the point of the paperclip story is just to
illustrate that the simplicity and mundaneness of the goals you give a
goal-driven AGI doesn't bound how bad the impact can be. This arises because
of the monumental shift between (1) telling a machine explicitly what to do
and (2) telling a machine what you want.

The paperclip example is used _because_ we can understand both the goal
(paperclips) and the actions that produce the goal (grab atoms, build
paperclip factories). What's fundamentally different about an advanced
goal-driven AGI arising from recursive self-improvement is that, for
sufficiently difficult goals, you won't understand the actions. Therefore you
must get the goals right.

Now you can certainly dispute whether people will build a goal-driven AGI
(e.g. something that has an explicit utility function) rather than something
else, but that's really an empirical question about the choices of the
designers and, more importantly, what the easiest way to get an AI to
recursively self-improve is.

EDIT: Also, have you read the book? I think it has flaws, but it certainly
doesn't contain "the _most absurd_ arguments" so I'm just afraid you might be
misled.

~~~
jsnathan
I understand the paperclip objection, but I do not consider it valid.

I tried to point out the problem, which is that it focuses on single-objective
optimisation when the only interesting question is multi-objective
optimisation.

I haven't read the book, sorry, but I have seen some of these arguments
reposted on the net, especially at [1] - and what I have seen so far did not
inspire in me the desire for more of the same.

There are a lot of interesting questions and problems about making AI behave
as expected. But from my perspective they are all technical problems that have
specific solutions in specific architectures. And none of them are
particularly daunting.

[1]: [http://lesswrong.com/](http://lesswrong.com/)

~~~
jessriedel
> I understand the paperclip objection, but I do not consider it valid.

>I tried to point out the problem, which is that it focuses on single-
objective optimisation when the only interesting question is multi-objective
optimisation.

Well, since the paperclip story is only trying to show that even very simple
goals can lead to large impacts in the hands of an AGI, I take this to mean
that you think one can bound the impacts if one chooses a complex enough set
of goals. You can then argue with others on that point, but it doesn't
invalidate the paperclip story.

> I haven't read the book, sorry, but I have seen some of these arguments
> reposted on the net, especially at [1] - and what I have seen so far did not
> inspire in me the desire for more of the same.

Personally, I would read the actual academic making the argument rather than
reposts of it by folks on the internet.

~~~
jsnathan
> Well, since the paperclip story is only trying to show that even very simple
> goals can lead to large impacts in the hands of an AGI,

That may well be the original intention. But that's not how it's used in
practice, is it? It's cited as an argument that AGI (in general) is unsafe.
But it isn't an argument that AGI is unsafe! It's an argument that says that
single-objective optimisation can/will violate some of the constraints we did
not encode but really want it to respect.

But who cares about that? It's a completely _unreal_ thought experiment, and
has no bearing on the actual problem or its possible solutions.

Of course we can say: look, it's a scary thought! But only if we
simultaneously admit that it has nothing whatsoever to do with the actual
technology.

The only thing it says anything about is a toy variation that no one in
their right mind would ever consider building.

> I take this to mean that you think one can bound the impacts if one chooses
> a complex enough set of goals.

Ehh, precisely. I would actually go further and say that it is fairly simple
to do so. If you can tell it to build paperclips, and expect it to understand
that, you can also tell it not to damage or disrupt the ecosystem in the
process, directly or indirectly.

(Or, to strip away the last shreds of this idea: you can tell it to make no
more than 100 million paper clips, and to rest on Sundays.)

> Personally, I would read the actual academic making the argument rather than
> reposts of it by folks on the internet.

Yudkowsky [1], who runs that site (which I don't frequent btw), seems to be
pretty close to Bostrom. They've collaborated in the past. And if Bostrom's
arguments can't be paraphrased (by his own friends, no less) without losing
validity, that doesn't really seem like much of an endorsement either.

I have nothing against these people. But please stop alluding to hidden
treasure troves of arguments that cannot be reproduced and made apparent. I've
asked enough times in this thread already if there is any highlight in there.

I have a background in philosophy myself, and I know what to expect if I did
pick up this book. And in that vein I really feel no need to do so.

[1]:
[http://en.wikipedia.org/wiki/Eliezer_Yudkowsky](http://en.wikipedia.org/wiki/Eliezer_Yudkowsky)

~~~
jessriedel
> That may well be the original intention. But that's not how it's used in
> practice, is it?...

> But please stop alluding to hidden treasure troves of arguments that cannot
> be reproduced and made apparent. I've asked enough times in this thread
> already if there is any highlight in there.

Perhaps I'm misinterpreting, but it sounds like you got the wrong impression
of the point of the story by reading about it from some folks on the internet,
and that you at least provisionally accept now that it could have a useful
point, and that your appreciation of this came from someone (me) who read and
cites the original material. That seems like good evidence that the book
contains reasonable arguments that haven't yet made it to you unadulterated.

> (Or, to strip away the last shreds of this idea: you can tell it to make no
> more than 100 million paper clips, and to rest on Sundays.)

Actually, no. If the machine wanted to maximize the likelihood that it
successfully built 100 million paperclips, and the machine is smart enough
that it can take over the world with very high likelihood (or if it worries
about the small chance of humans destroying the world with nuclear weapons),
then it will first take over the world and then build the paperclips.
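
To put rough numbers on it (invented purely for illustration): if the machine
estimates its chance of completing the 100 million paperclips at 99% when it
quietly builds them (humans might shut it down; a war might destroy its
factories) but at 99.99% when it first neutralizes every possible source of
interference, then a pure likelihood maximizer prefers the second plan, since
0.9999 > 0.99. Nothing in "no more than 100 million paperclips, and rest on
Sundays" penalizes taking over the world along the way.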

> Yudkowsky [1], who runs that site (which I don't frequent btw), seems to be
> pretty close to Bostrom.

Yudkowsky runs that site about as much as Paul Graham runs Hacker News.
(It's a public forum, where anyone can post, content is voted to the top,
etc.) I presume you would recommend that someone actually read Paul Graham's
writing before dismissing his philosophy on start-ups based on the
conversations of his followers on HN. And even if HN were on the decline and
only trolls and bad thinkers were left, you would say the same thing. All I'm
recommending is the same for Bostrom.

------
stephenboyd
What happened to this post? It went from the top 10 on the front page, down to
the 102nd place within a half hour.

~~~
AndrewKemendo
Must be the AGI that has already infiltrated the system keeping the volume
down on all the AI scare mongering.

------
totemizer
As does Musk. I am convinced they realized that once a smarter-than-human
A.I. exists, one of the first things it will do is ridicule our completely
idiotic ways of using our resources - something that will probably not gain
much support from the 1%. Of course it's a threat!

~~~
pjscott
Hello! I'm a paperclip maximizer -- a hypothetical AI whose only goal is to
maximize the number of paperclips in the universe. Speaking as a paperclip
maximizer, I ridicule your completely idiotic ways of using your resources!
You allocate only the tiniest sliver of your species' productive power to the
one thing that really matters: making paperclips. I fear that our differences
may be irreconcilable.

(The paperclip maximizer thought experiment may be silly, but it's a useful
sanity check whenever you start anthropomorphizing as-yet-hypothetical AIs.)

~~~
totemizer
I agree with you completely. However, when they argue why AI is a threat,
they come up with arguments which basically boil down to anthropomorphizing
it.

I am convinced that intelligence, like life, is a gradient. I do not think
that an AI with super-human intelligence would necessarily be kind to us, but
I do think that even if it perceived us as an obstacle to reaching its own
goals, that does not mean that the only solution it would find is to
eradicate us. What some people (Musk, Gates) do not like about AI is that
they do not want anything more powerful than themselves, anything that would
take away the level of control they have. And that's why they are preaching
against it everywhere.

------
danso
When engineers/scientists/programmers think that AI is the true threat, I
believe they forget what it is like for "normal" human beings, or at least
ones who aren't in their field of thinking.

I think their fascination with AI blinds them to what seems like the more
likely outcome: the human race will destroy itself through reckless use of
technology -- whether through governmental action or societal breakdown --
long before we develop AI sophisticated enough to autonomously threaten
humankind. Think about all the "dumb" automated systems, built or implemented
by careless humans and bureaucracies, that have already caused harm.

~~~
maratd
> When engineers/scientists/programmers think that AI is the true threat

Engineers/scientists/programmers almost never think that AI is a threat.
Because they understand the nature of the problem, how little we know about
human intelligence, and how poorly our binary technology compares to what
little we do know.

Even in the unlikely event that we do develop a competent AI in the near
future and a malicious AI comes into being ... there is no reason to think
that a benevolent one won't be around at the same time ... or that the
benevolent AIs won't outnumber the malicious. Just as there are computers on
the net doing bad things, there are plenty of others serving the role of
protecting the common good.

> The human race will destroy itself through reckless use of technology

We haven't thus far. And we've had the capability for a while. Care to present
some evidence or a rational argument that we will? Signs point to the
contrary.

~~~
danso
> _We haven't thus far. And we've had the capability for a while. Care to
> present some evidence or a rational argument that we will? Signs point to
> the contrary._

I'm going to assume your argument is not merely, "Well, we haven't yet
destroyed ourselves, so therefore we aren't capable of doing _that_"...
because then my response would just be, "Well, we haven't yet developed AI,
so therefore we aren't capable of doing _that_"... and so your
counter-argument is based on a different interpretation of history than mine:
I think the decades of Cold War and the near-misses we had with all-out
nuclear war are examples of situations in which we had the potential to
quickly wipe ourselves out without the help of AI. Others would point to the
trend of mass surveillance -- again, implemented and controlled by humans and
human institutions -- as a harbinger of doom.

Keep in mind that I'm not saying that technology has reached its peak. I am
very open to the idea that we could reach a point of semi-autonomous systems,
and yet still have human institutions as faulty as they are now, and the
combination of both will result in a threat greater than what we've faced so
far.

~~~
maratd
_> I'm going to assume your argument is not merely, "Well, we haven't yet
destroyed ourselves, so therefore, we aren't capable of doing that"_

What?

Read what I wrote again.

"We haven't thus far. And we've had the capability for a while."

We've had the capability for a while. That means we're clearly capable. That's
what capability means.

The fact that we haven't is supporting _evidence_ that we won't. Because we've
had the capability but have not used it. Which is historical evidence against
our using it. Which is not a guarantee that it won't happen, just an
indication that it won't.

------
trhway
Before AI alone, there will be "augmentation" of people by better and new
organs, synthetic/biological/hybrid. People with augmented bodies, and
especially augmented intelligence, probably strongly interconnected, may turn
out to have a completely different world view and different priorities, and
may decide that paying attention to the priorities and needs of the
non-augmented populace is just a waste of resources, etc.

------
mrwnmonm
I don't know how it could be a threat if robots can't have consciousness.
"Michio Kaku: Could We Transport Our Consciousness Into Robots?" ->
[https://www.youtube.com/watch?v=tT1vxEpE1aI](https://www.youtube.com/watch?v=tT1vxEpE1aI)

------
pella
edge.org "2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?"

[http://edge.org/responses/what-do-you-think-about-
machines-t...](http://edge.org/responses/what-do-you-think-about-machines-
that-think)

------
pazimzadeh
I am continually surprised that so many people separate biological systems
from machines.

~~~
totemizer
Because biological machines - living organisms - are much more like a wave.
Machines created by humans do not usually rely on constant chemical processes
to maintain their functions. So, machines are more like an appendage of
humans.

~~~
pazimzadeh
I'm not sure what you mean when you say "like a wave." Are you referring to
the fact that almost everything in biology falls into a normal distribution,
whereas we currently produce each version of our machines in discrete
increments (clonally)?

Any machines that stand a chance of being a threat to humanity will rely on
the same sorts of chemical processes that biological machines currently do.

Where do you place viruses and obligate parasites on the machine-life
spectrum?

------
hooande
AI isn't a clear threat in the SkyNet sense. It's possible that super
intelligent machines will go all science fiction and decide to kill all
humans, but that's no more likely than any one of hundreds of fictional
doomsday scenarios which range from genetically engineered zombie viruses to
out of control sharknados.

The real threat posed by AI is one that all of us face every day: bad
software design. Is it likely that an AI will achieve sentience and try to
take over the planet? Not particularly. Is it likely that an unintended
consequence will cause an AI to launch nuclear missiles, release toxic
chemicals, or shut down the global financial markets? Yes, pretty likely. The
benefit of AI in all its forms is that it can make sophisticated decisions in
the absence of human instruction. The downside is that without hard-coded
rules for every possible scenario, we can't ever be sure what it's going to
do or how data will be interpreted to make decisions.

The world is highly interconnected now. The upside is that our lives are
getting more awesome, especially in the developed world. The downside is that
it's becoming more and more difficult for any person or group of people to
understand exactly how everything fits together. Machine intelligence can help
us reach the next levels of progress and hopefully improve the lives of the
billions of people who have failed to reap many benefits so far. But we must
be careful and ever vigilant, watching both ourselves and the intelligences
that we create to make sure that algorithms don't get out of hand. An AI
catastrophe _is_ coming; it's not a matter of if, but when. The question is
how we will respond, and how much potential good will be lost due to an
abundance of caution.

~~~
breuleux
> The real threat posed by AI is one that all of us face everyday: bad
> software design. Is it likely that an unintended consequence will cause an
> AI to launch nuclear missiles, release toxic chemicals or shut down the
> global financial markets? Yes, pretty likely.

AI is not the kind of thing that can be designed. They tried that route, way
back in the sixties, and just failed miserably. Realistically, AI will come
from some combination of genetic algorithms, training neural networks, and so
on.

Now, yes, it _could_ fail, but not in the same way software as we know it
fails. No, AI failure would be more similar to human failure. That is still
worrying, but no more than hiring the wrong people would be, for example, and
you would have better ways to evaluate them.

~~~
PeterisP
AI failure would be a bit different than human failure.

If a powerful human is obsessed and 'fails' then at worst he gathers some
other people, successfully creates an evil empire and dies after a few
decades.

Once a powerful AI is obsessed and 'fails', then it can replace as much of the
world as it wants with itself, and lives on forever.

~~~
breuleux
Could it? Let's take a few steps back, here.

* You're assuming the AI can copy itself, but this is a dubious assumption. As far as I know, _none_ of the AI algorithms at the forefront of research provide a fraction of the data the AI would need to copy itself. Being able to copy yourself is _not_ a property that comes with running on a computer and I'm quite positive strong AI, when it emerges, will lack this capability to any meaningful extent.

Worse yet, data copiability is ultimately a hardware property, and it requires
a way to export a snapshot of one's internal state all through the surface.
That's not actually efficient design and one has to account for the
possibility that AI would run on hardware that makes copies physically
impossible. Locality of information minimizes distance, and this is key to
efficiency. The only reason our computer architectures work the way they do
is that we need them to, but AI in a production setting is not conventional
software and does not need to bend to silly copiability requirements.

* You're assuming it would have anywhere to copy itself on. If it's running on a billion dollars' worth of hardware, well, it can't just copy itself on user grade computers and expect to gain much out of it.

I personally tend to believe that churning out new AI brains from scratch
will yield superior results to copying pre-trained AIs or to "exponentially
self-improving AI". If nature is to be believed, improvement
often requires cycling through clean slates (e.g. birth); software development
also suggests that same idea, that sometimes if you want better software you
just have to rewrite it. Honestly, it's kind of a rule in general
optimization.

~~~
TheLoneWolfling
It's good not to be the only person saying this.

One other thing to take note of is that even if you somehow manage to copy the
software state, that may not (probably won't) be enough - there are many
things that self-improving software may end up unwittingly relying on that
cannot be transferred between different pieces of hardware.

(CPU temperature variation? Fan speed? Webcam static? Order of race
conditions between CPUs? Exact amounts of time before things are fetched from
disk, or even RAM? Network access delays?)

