
So You Want to Save the World - inetsee
http://lesswrong.com/lw/91c/so_you_want_to_save_the_world/
======
reasonattlm
Helping fund development of ways to cure aging is also a thing that falls
under saving the world, I think.

Interestingly, there's a great deal of overlap between this arm of the AI
community and the most forward-looking longevity engineers. They go to the
same conferences, seek funds from the same philanthropists, and so forth.
(Aubrey de Grey, biogerontologist and now SENS Foundation CSO, even used to be
an AI developer, back in the day when "AI" meant "incrementally better expert
systems"). See, this, for example:

<http://www.fightaging.org/archives/2010/08/artificial-intelligence-and-engineered-longevity-the-better-tools-viewpoint.php>

The penumbra of people, funding sources, and networks surrounding the
Singularity Institute and Less Wrong is much the same as the one surrounding
the SENS Foundation and Methuselah Foundation - both of which are making their
year-end pledge drives, by the way.

<http://sens.org/node/2543>

<https://www.mfoundation.org/donate>

~~~
infinity
Developing a way to cure aging and saving the world are generally very
different things, depending on how you define saving the world. The concept
of saving implies being saved from something: if we have a concept of the
world, what should it be saved from? Are we an essential part of the world?
Does the world end if we all die?

------
jonmc12
Mathematicians, computer scientists, and philosophers are going to save the
world?

As I read it, a group of researchers is creating philosophy around a branch of
thought in order to begin to define the problem. That's all good, but proper
scientific research starts once a well-defined problem is synthesized into a
testable hypothesis.

The gap between thought exploration and observable hypotheses can be called
philosophy. However, when this gap is paired with an assumed purpose (absent a
testable or confirmed hypothesis), it is usually best described as religion.

Why say 'save the world' in order to solicit donations? You really don't know
enough about the problem to presuppose that any philosophy you are generating
will create testable, observable knowledge, or, in turn, that the findings of
these observations would even come close to 'saving the world', or even
provide any measurable benefit to the world - you just don't know.

Why not just say you are a group of researchers trying to better define
potential problems generated from AI-related extrinsic risks?

Why do I care? Well, I think the SIAI et al. guys are really bright, and that
their work produces a lot of valuable insight about applying theories like
Solomonoff induction, information theory, and Bayes to theories of economics
and intelligence. There is a lot of value here. However, the religious-like
beliefs of the group just come off as strange to most, and frankly,
unscientific. I'd like to see these ideas propagate, get fleshed out, and
evolve. However, when I mention my interest/projects in AI/semantics to the
average Silicon Valley engineer/entrepreneur/investor, you would be surprised
how often I hear something like 'not like those singularity people, right?
those guys are like a weird cult'.

~~~
tlb
There is a clear hypothesis: that a sufficiently advanced AI will manipulate
most humans toward its own goals, and even its creators won't be able to stop
it once it gets going. So the stakes really are "saving the world".

This hypothesis isn't easy to test. That doesn't make it unscientific --
cosmology and evolutionary theories are also hard to test.

~~~
jonmc12
That's not a hypothesis in the scientific sense; that is a scientifically-based
speculation. <http://en.wikipedia.org/wiki/Scientific_method>

btw, do SIAI et al. actually align to a common speculation that logically
unfolds to the notion of 'saving the world'? Could you send me a link? The
best I could find is this pdf:
<http://singinst.org/upload/artificial-intelligence-risk.pdf> - pages 18 and
19 give an overview of the argument.

I believe the paper is intended to explore possible risks - not to state
scientific hypotheses. Is it science that a super-intelligence will emerge at
all? Is it science to say when the onset of super-intelligence will occur, or
how fast the transition will be? Is it proven or accepted that this notion of
'manipulation of human goals' is meaningfully defined, or even a negative
thing? Is it proven that Friendly AI would prevent any set of bad scenarios
from occurring? No... of course not. Each of these points is discussed as
speculation in a general discussion of AI risks.

To take a stacked set of speculations and say you are 'saving the world' - I
don't know how you get there without religion.

~~~
endtime
I don't think they really claim that. I think that they'd rather say "a
singularity is plausible and its effect would likely be of very high
magnitude, so even marginally improving the kind of singularity that might
come about has expected value on the level of saving the world."

------
kristofferR
The future is absurdly interesting; I'm so grateful for my love of technology.
I actually feel sorry for those who do not understand it.

Since I'm just 21, the future is naturally hugely important to me. I've
already started to save money for body/brain upgrades and longevity treatments.
I'll gladly admit that it's more important to me than saving for a house/car
or my pension. I'd rather live in a crappy apartment with a crappy car for
the next few decades and be able to afford to live 150-200 years (by when
immortality surely will be an option for the rich, which I plan to be) than
spend all my money on a house/car and die a natural death at 80-100 years old.

That being said, it's important to live life to the max now instead of
delaying it in the hope of "eternal" life, which some longevity extremists
unfortunately do. I'd rather die young with a life truly lived than die at 100
having spent a life avoiding all potential dangers.

I'm positive and hopeful about the future. We'll experience a lot of hiccups,
like huge societal and environmental changes, but overall it'll work itself
out.

I don't understand the "evil AI" issue though. We'll eventually reach a point
where it'll become difficult to separate humans from robots. Humans will become
more technological while robots will become more biological, and we will
eventually converge.

Research has shown that AI needs emotions in order to be truly useful, or else
it won't have a way to decide what's important or right, leaving it
decisionless like people with a damaged amygdala. I don't see a logical reason
why artificial emotional "beings" would favor the future of purely
technological "beings" over humans/cyborgs - they don't have the same
evolutionary drive to advance their own species that natural life, including
humans, has.

The real issue won't be man versus machine; it'll be superhumans versus
poor/technologically conservative humans. Some people still won't have access
to clean water while others enhance themselves to degrees barely imaginable
today.

I'm not saying this issue shouldn't be worked on, but I don't think it's worth
fearing the future over.

~~~
gizmo
The issue isn't "evil AI". Very few people in the transhumanist/AI movement
are concerned about evil AIs. The issue is that unless we are very careful to
engineer an AI that cares about human values it's going to be completely
indifferent to human values. Indifference is the enemy, evil is not.

To illustrate, consider the African elephant. After it reaches 40 to 60 years
of age, its teeth fall out and it slowly dies in agony from starvation. This
is not because nature is evil or because natural selection is evil. This sort
of thing just happens because nature is completely indifferent to human
values. An AI, just like a process such as natural selection, is simply not
going to care about humans unless we get it _right_. And since we have only
one attempt to get it right, the stakes are absurdly high.

Anyway, the guys at LessWrong have put a great deal of thought into these
issues. If you're interested in this sort of thing, check out the
sequences[1]. It's a few million words, but it has a terrific ratio of insight
to text.

[1] <http://commonsenseatheism.com/?p=12774>

~~~
kristofferR
Sure, "Evil AI" was just a quick and sloppy way to describe an AI acting
detrimental to humans, I didn't actually mean evil. Sorry, my mistake.

The thing is - in order for a true AI to act independently, it has to have a
purpose. Humans act the way we do because we want to have some sort of
positive experience and want to avoid negative experiences. If we couldn't
experience anything, either positive or negative, we wouldn't have a reason or
motivation to do anything; we would just be. At that point we might as well be
static dead objects. For AIs to act intelligently and independently, and not
just as algorithms solving a single task, they need to have some sort of
purpose/goal to reach. An independent AI can't be indifferent - it needs a
basis for making decisions.

I don't see how the purpose we give the AI could be detrimental to humans
without severe negligence. In addition, for an AI to be useful to humans it
needs to understand humans. We obviously create AIs to serve us, and in order
to serve us independently, without needing manual input of tasks (which would
just make it an advanced computer), it needs to understand us.

~~~
atucker
Let's say that we tell the AI to eliminate malaria.

So it incinerates the biosphere. Now we don't have any malaria, but we also
don't have any humans.

The AI would have done exactly what we asked it to do, but not what we wanted
it to do. For any reasonable request, you need to specify a ridiculous amount
of background information about what is and isn't acceptable. Probably any
simple list you create will be missing something, and we'll be
miserable/unhappy as a result of its exclusion.
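
To make the point concrete, here's a toy sketch in Python (the action names
and all the numbers are invented for illustration; nobody proposes building an
AI this way). A literal optimizer of the stated goal picks the catastrophic
action, and each constraint you patch in is just one entry from the enormous
list of background conditions a real specification would need:

    # Hypothetical outcomes: (malaria cases remaining, humans remaining)
    actions = {
        "fund bed nets":        (1_000_000, 7_000_000_000),
        "engineer a vaccine":   (10_000,    7_000_000_000),
        "incinerate biosphere": (0,         0),
    }

    def naive_utility(outcome):
        cases, _humans = outcome
        return -cases  # "eliminate malaria": fewer cases is strictly better

    def patched_utility(outcome):
        cases, humans = outcome
        # One hand-added constraint; a real goal spec would need vastly more.
        return -cases + humans * 1e-3

    print(max(actions, key=lambda a: naive_utility(actions[a])))
    # -> "incinerate biosphere": the literal optimum of the stated goal

    print(max(actions, key=lambda a: patched_utility(actions[a])))
    # -> "engineer a vaccine"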

------
lisper
If you want to save the world from existential threats you might also consider
donating to the B612 Foundation. It's a group of ex-astronauts working to fly
a spacecraft to find and deflect asteroids on a collision course with Earth.
<http://b612foundation.org/>

~~~
bayleo
I voted for asteroids as the most statistically feared existential risk in one
of the LW surveys and was shocked when the results came back:

"Of possible existential risks, the most feared was a bioengineered pandemic,
which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making
pandemics the overwhelming leader. Unfriendly AI followed with 180 votes
(16.5%), then nuclear war with 151 (13.9%), ecological collapse with 145 votes
(12.3%), economic/political collapse with 134 votes (12.3%), and asteroids and
nanotech bringing up the rear with 46 votes each (4.2%)."

~~~
jessriedel
I think the reasoning is that an AI singularity is more likely than not within
the next 2 centuries. During that time, people can differ about whether they
think nuclear war, nanotech, or bioengineered pandemics are likely. But the
risk of a catastrophic asteroid strike is basically known, and small (1 km
asteroids hit the earth roughly every 500k years). So, depending on your
assumptions, asteroid deflection might be the most _cost-effective_
existential risk to mitigate, but it shouldn't be the most statistically
feared.
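
A quick back-of-envelope check of why that known rate comes out "small" (the
500k-year figure is from the comment; the 200-year horizon and the Poisson
approximation are my own assumptions):

    import math

    rate_per_year = 1 / 500_000   # one 1 km impact per ~500k years
    horizon_years = 200           # the "next 2 centuries" window

    # Probability of at least one impact in the window (Poisson approximation)
    p_impact = 1 - math.exp(-rate_per_year * horizon_years)
    print(f"P(>=1 impact in {horizon_years} years) = {p_impact:.4%}")
    # -> about 0.04%, far below what survey respondents seem to assign
    #    to pandemics, UFAI, or nuclear war over the same horizon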

------
Jun8
Interesting and quite dense post. Singularity-related articles and worries
about "good" AI building make me a bit uneasy when the performance of
machines on many important tasks, e.g. object recognition, is _dismal_. So we
have _quite_ a way to go.

The usual rebuttal to the above is the "intelligence explosion" argument: once
AIs can improve themselves, their progress will be exponential. As a tl;dr for
the post, consider the Dewey summary of the argument:

    
    
      1. Hardware and software are improving, there are no signs that we will stop this, 
      2. and human biology and biases indicate that we are far below the upper limit on intelligence. 
      3. Economic arguments indicate that most AIs would act to become more intelligent. 
      4. Therefore, intelligence explosion is very likely. 
    

If you think about it, only (1) is a fact; the rest are assertions that are
open to debate. For example, in (2), how do we define the "upper limit on
intelligence" when even defining intelligence is problematic? Currently the
human brain is the most complex and intelligent object we know. For (3), I
don't know what sort of "economic arguments" are meant, but as we all know,
becoming "more intelligent" is not a simple hill-climbing process; in fact,
it's not clear how to go about it at all. Is Watson more intelligent than a
person?

If you think about these points for some time, you will find that the "AIs
designing more intelligent AIs" scenario actually has very little
plausibility, let alone being highly probable.

~~~
dncrane
"Singularity related articles and worries about "good" AI building makes me a
bit uneasy when the performance of machines on many important tasks, e.g.
object recognition, is dismal. So we have quite a way to go."

Yudkowsky (2008), "AI as a Negative and Positive Factor in Global Risk"
<http://singinst.org/upload/artificial-intelligence-risk.pdf> addresses this
in section 7 (though it's really worth it to read the whole paper): "The first
moral is that confusing the speed of AI research with the speed of a real AI
once built is like confusing the speed of physics research with the speed of
nuclear reactions. It mixes up the map with the territory."

===============

"in (2) how do we define the "upper limits of intelligence", when even
defining intelligence is problematic."

Given the huge number of reproducible cognitive biases humans are known to
exhibit (<http://wiki.lesswrong.com/wiki/Bias>), it seems very unlikely that
humans are optimally intelligent. (One way to define intelligence, from the
Omohundro paper below: 'We define “intelligent systems” to be those which
choose their own actions to achieve specified goals using limited resources',
so "more intelligent" means better at achieving those goals using limited
resources.)

===============

"(3), I don't know what sort of "economic arguments" are meant but as we all
know becoming "more intelligent" is not a simple hill climbing process, in
fact it's not clear how to go about it."

Omohundro (2011), "Rationally Shaped Artificial Intelligence", makes the case
for #3:
<http://selfawaresystems.files.wordpress.com/2011/10/rationally_shaped_ai.pdf>
(summary: by "more intelligent" we mean "more capable of achieving goals," so
any AI which has goals will act to become more intelligent so that it can more
effectively achieve those goals)
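
The argument can be run as a toy calculation (my own construction for
illustration, not a model from the paper; the 10-step horizon and the 1.5x
return on self-improvement are invented numbers). Even a trivially simple
goal-directed optimizer "chooses" to invest most of its time in capability,
because that dominates spending every step directly on the goal:

    # Each step, the agent either improves itself or works on its terminal goal.
    def total_goal_progress(steps_spent_improving, horizon=10):
        capability, progress = 1.0, 0.0
        for t in range(horizon):
            if t < steps_spent_improving:
                capability *= 1.5       # assumed return on self-improvement
            else:
                progress += capability  # direct work on the terminal goal
        return progress

    best = max(range(10), key=total_goal_progress)
    print(best, total_goal_progress(best))
    # -> 7 51.2578125: the optimum spends 7 of 10 steps getting smarter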

~~~
Jun8
Thanks for a great reply! I think my bias (and I don't think I'm alone in
this) is that when I think of intelligence I tend to think of tasks that the
human brain has evolved to perform, like linguistic and visual analysis. It
may be argued that these tasks form a small subspace of the "intelligence
space", but, boy, is the human brain good at solving these! That doesn't mean
that we are optimal; however, for many AI tasks the brain operates on such a
vastly different performance level that comparisons with machine AI seem out
of place. Even systems like Watson, which took many hundreds of man-years to
create, can stump humans only in narrow-domain tasks.

Now, it might be argued that some sort of paradigm shift will occur as
machines get more intelligent (e.g., similar to how results from special
relativity and quantum theory were impossible to predict from
Newtonian-constrained thinking), and some effects that we cannot predict now
will prevail, like the massive increase in intelligence in humans that
caused/was caused by the advent of language.

------
doku
The singularity won't happen through electronic AI, but by biological means.
Advances in biotech will outpace AI. Our sequencing technology is advancing at
an exponential rate, a rate faster than Moore's law.

One way of creating AI is through imitation: we study and understand the brain
and build neural network chips. But why build the system from the ground up
when we can just hack an already intelligent system, our brain? Evolution has
already created a cheap and very efficient system for intelligence. It's far
harder to build one from scratch.

Once we reach immortality and bio brain upgrades, AI won't be a threat. Humans
are inherently selfish; we will spend more effort on self-improvement to stay
competitive. The new humans will be the super-intelligent beings, and our
greed is the mechanism for singularity. But that greed could also be hacked.

If a system capable of intelligence is built, why would it make
self-improvements, or do anything at all? If a human had absolutely no drive,
no sex hormones, no hunger, would it think about anything? Is it intelligent?

The intelligent systems that we build will be for humans' selfish gains and
will be used to improve humans. That means humans will improve alongside the
system, or humans will be part of the system. What's really scary is if the
system is built with all the drive and motivation built in...

------
dwiel
This future sounds like the present to me.

The underlying worry in this article is that intelligence begets more
intelligence, and that the new forms of intelligence might not have the same
morals/goals as the author. Beneath this worry is the idea that, due to its
exponential growth, intelligence, and thus power, quickly concentrates. Any
thing/life that is not directly supporting whatever goals this new
intelligence has is at great existential risk. We hope that these goals
include benevolence towards other, less powerful forms of life.

This all sounds reasonable; however, they are held back by the idea that AI
must look like a really smart human remotely controlling a robot/computer
interface. The reality is that there is a lot of intelligence that looks
nothing like us. The 'intelligence' of a market would be one basic example.

If we take this more abstract/literal view of artificially/human-created
intelligence, then most of what the article worries about is already upon us.
Some people have so much wealth that we might as well consider them augmented.
In many cases their interface to their augmented intelligence is another
human, or many humans, but that does not take away from their power.

    
    
      - they have hoards of wealth at their disposal
      - wealth is power and can currently be exchanged for both human and computer intelligence
      - that purchased intelligence is then used to further increase the wealth/power/intelligence of the agent (billionaire/corporation)
      - NOTE: The richest 0.000000044% of the world's population have the same amount of wealth as the bottom 8.8%. http://www.stwr.org/poverty-inequality/key-facts.html
    

The result is that the rich/powerful/intelligent get more so, and everyone
else must hope that they can somehow serve the few or that the few will be
benevolent.

If you'd rather avoid the rich/99% memes, try describing our current market to
someone from 1000 AD; it sure sounds like an artificial intelligence:

It grows. No single person controls it. It controls how resources and human
labor are allocated in such a way as to optimize its own growth. In the cases
where people have tried to take over the job of the market, they almost always
fail (or always fail, depending on who you ask). Sure sounds like artificially
created intelligence to me.

Either way - augmented billionaire, ultra-powerful cultural meme, or
human-like intelligence in a machine - he seems to imply that our best bet is
to teach and instill morals and values in our 'children', with the hope that
they will be benevolent.

######

I think that creating decentralization and diversity is more important than
moral/value education; especially if you take the view that we are already in
the midst of it all.

One strong measure of the health of an ecological system is its diversity.
This is partially because a monoculture is risky - it has a single point of
failure. An ecosystem which is diverse is also an ecosystem which has the
resources and stability to explore and learn. This is required if an ecosystem
is going to survive change, as ours is now.

~~~
dncrane
"This all sounds reasonable, however they are held back by the idea that AI
must look like a really smart human that is remotely controlling a
robot/computer interface."

I think you misunderstood their position. The only assumption is that the AI
must have some goal, and the point is that, if there's no term for humans in
that goal, then we will likely be destroyed - not due to malice from any
superintelligence, but due to indifference: we are made of resources that the
AI could use to achieve its goals.

I think you're vastly underestimating how much better a superintelligence
would be than us at achieving goals. A superintelligence would be a lot more
powerful than any augmented billionaire, in the same way that a human is more
powerful than the head of any wolf pack. The worry isn't about slightly
augmented humans or slightly superhuman intelligence.

See these two papers for good arguments as to why human-level AI will likely
result in an "intelligence explosion":

Yudkowsky (2008), "AI as a Negative and Positive Factor in Global Risk"
<http://singinst.org/upload/artificial-intelligence-risk.pdf> Chalmers (2010),
The Singularity: A Philosophical Analysis
<http://consc.net/papers/singularityjcs.pdf>

~~~
dwiel
Thanks for the comment and links. I had only ever read 'science fiction' like
Iain Banks and Charles Stross's Accelerando, but never any of the
'non-fiction'. The linked articles were interesting.

I understand the risk of an AI that wants the mass of the entire solar system
as its own and quickly becomes a matrioshka brain. However, I question the
idea that we aren't already there. I would argue that an augmented
billionaire, or better yet a market economy, compares to someone in the
poorest 5% of the world just like a modern human compares to a wolf pack. A
billionaire can decide not just to go to space, but to build an industry out
of it. The poor can't find enough food. That is a huge difference. Is there
any research into quantifying these types of differences?

Sure, a human-level computer AI gets 'free' speed doubling every 18 months,
but so does the intelligence that surrounds an augmented human. Just look at
how much more intelligence we have available to us now compared to 20 years
ago.

I agree that there will be/is an intelligence explosion. My point is simply
that we are already in the midst of it, or that there is no single instant to
point at and say 'that is when the singularity started.'

I think this is an important point to make because it changes the framing of
the question from "How can we survive an abstract superintelligence explosion
in the unknown future?" to "How can we survive the existing intelligence
explosion?" - from "How can we teach the superintelligence we are bound to
create to be nice?" to "How can we convince the existing superintelligence to
be nice?" and/or "What in our social/governmental/memetic structure should we
change to survive the explosion we are experiencing?"

Also, if we view ourselves as already being in the intelligence explosion, we
can look at how existing superintelligences treat other, less intelligent
beings to see where our culture is likely to head as the explosion continues.
If we don't like how superintelligences treat lesser intelligences now, then
maybe we should figure out why, and how to change it.

The framing that the article provides sounds about as silly as a pack of
wolves discussing tactics they will use to make sure their new human creations
focus all of their energy on catching rabbits; so I tried to come at it from
an angle that has a hint of pragmatism and practicality.

~~~
dncrane
I guess the issue is how likely you think a hard takeoff/FOOM scenario is:
<http://wiki.lesswrong.com/wiki/Intelligence_explosion>

"Sure a human level computer AI gets 'free' speed doubling every 18 months,
but so does the intelligence that surrounds an augmented human. Just look at
how much more intelligence we have available to us now as compared to 20 years
ago."

A motivated human-level AI could get free speed doubling a lot more quickly
than every 18 months - it could acquire more of the resources that already
exist, it could increase the speed of progress in hardware improvements, and,
perhaps most importantly, it might be able to improve itself into a
qualitatively superhuman intelligence rather than just a quantitatively faster
one.
Sections 3: "Underestimating the power of intelligence" and 7: "Rates of
intelligence increase" of the Yudkowsky paper I linked before, "AI as a
Negative and Positive Factor in Global Risk", address this well.
<http://singinst.org/upload/artificial-intelligence-risk.pdf>
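
As a minimal numeric sketch of that point (the constants are invented; this is
not a model from the paper): if each capability doubling also shortens the
time to the next one, unboundedly many doublings fit into a finite window,
which is the intuition behind a hard takeoff:

    doubling_time = 18.0  # months, the "free" hardware baseline
    speedup = 0.7         # assumed: each generation cuts the next cycle to 70%

    elapsed = 0.0
    for generation in range(1, 9):
        elapsed += doubling_time
        doubling_time *= speedup
        print(f"doubling {generation} complete at month {elapsed:.1f}")

    # The total time for infinitely many doublings is a geometric series
    # converging to 18 / (1 - 0.7) = 60 months, versus 144 months for just
    # the first 8 doublings at a fixed 18-month pace.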

I think it's a mistake to compare an AI superintelligence to anything on earth
currently, like market forces. An AI superintelligence could probably improve
itself a lot faster than any market could.

~~~
dwiel
Thank you.

I think I have been confusing the various types of singularity here.
<http://yudkowsky.net/singularity/schools>

I have been mixing the intelligence explosion with accelerating technological
progress.

I'll have to give all of this a bit more thought.

Thanks again

------
kylek
The first thing I thought of when I saw this article was the Marathon trilogy
(and Durandal's rampancy) by Bungie (pre-Microsoft acquisition).
<http://en.wikipedia.org/wiki/Marathon_Trilogy#Rampancy>

(sorry I just nostalgia'd myself :( )

------
thomasdavis
Man is dead.

------
chrismealy
Rapture for nerds. Don't waste your money, or your time.

~~~
Jach
[http://web.archive.org/web/20101227190553/http://www.acceler...](http://web.archive.org/web/20101227190553/http://www.acceleratingfuture.com/steven/?p=21)

~~~
onemoreact
Rationality suggests incorporating new ideas into existing ones. Where do
these people change their projections based on the slowing increase in
computing power (i.e., the second derivative of computing power)?

~~~
dncrane
The intelligence explosion discussed in the original LessWrong post is a
separate issue from exponential growth, only loosely related, although folks
like Ray Kurzweil constantly conflate the two.

Please don't dismiss all ideas about AI, intelligence explosions, and
superintelligence (e.g. <http://singinst.org/upload/artificial-intelligence-
risk.pdf> ) just because some related concepts may be misguided.

(Not saying that you are being dismissive, just pointing out a distinction
that you or others may not be aware of.)

~~~
onemoreact
There are far more issues on that side of the singularity idea than simple
questions of computational power. There are fundamental limitations on
information, a direct result of quantum mechanics, which limit how accurately
you can, say, predict the weather without directly controlling it.

So, a super-intelligent Jupiter brain simply can't predict* the temperature of
every cubic centimeter out to 5 decimal places 10 years from now, regardless
of what computational power it's given or what measurements it takes. And
people's behavior is influenced by the weather, so again it can't model human
behavior to the nth degree. That's just one example, but you really can't
predict the stock market accurately over long time frames for the same
reasons, etc., etc.

*again ignoring more direct influences.
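
Even setting quantum limits aside, classical chaos makes the same point: tiny
measurement errors grow exponentially, so forecasts have a hard horizon no
matter how much compute you add. A minimal sketch using Lorenz's standard
system (the step size and the 1e-9 perturbation are arbitrary choices):

    # Two simulations of the Lorenz system differing by one part in 10^9.
    def step(x, y, z, dt=0.001, s=10.0, r=28.0, b=8.0 / 3.0):
        return (x + dt * s * (y - x),
                y + dt * (x * (r - z) - y),
                z + dt * (x * y - b * z))

    a = (1.0, 1.0, 1.0)
    b2 = (1.0 + 1e-9, 1.0, 1.0)  # near-perfect measurement of the same state

    for _ in range(40_000):      # integrate 40 time units
        a, b2 = step(*a), step(*b2)

    print(a)   # by now the two trajectories are completely uncorrelated,
    print(b2)  # so "5 decimal places 10 years out" is unachievable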

PS: Now, QM might not be correct, but assuming a world view without evidence
is the realm of religion, not reason.

~~~
dncrane
I don't think anybody from the SIAI is disagreeing with you here. I don't
think the kind of prediction you're talking about is part of the intelligence
explosion hypothesis. A superintelligence doesn't need literal omniscience, or
even anything close to it, in order to be much, much more effective than
humans at achieving goals.

