

The Singularity Institute's Scary Idea (and Why I Don't Buy It)  - billswift
http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

======
Udo
The problems begin with this concept of a "provably friendly architecture",
which probably doesn't exist. Heck, humans aren't exactly provably friendly
either.

But the main argument against this design goal of being somehow provable and
intelligent at the same time is that we have fundamentally conflicting
paradigms:

The first is the idea that in order to build an artificial intelligence, we'd
need to build an actual artificial person, complete with its own internal
representation of the world (or, as you might say, its own hopes and dreams).
It's an approach to AI that is fundamentally uncertain in the way individual
AI entities will turn out personality-wise, but _it is_ guaranteed to work. We
know this approach will work, because we are ourselves machines built on this
principle. ==> good chance of success, somewhat limited danger

Then, conflicting with that comes this notion of a provably
friendly/unfriendly design which sounds like it was made up by CS theorists
who think intelligence is a function of raw processing power and thought
patterns are in any way related to rigid formulas. They're very likely wrong,
but luckily that also means this group of researchers will never produce
anything dangerous except maybe a lot of whitepapers. ==> virtually no chance
of happening, no danger

I do agree though that there might be a third kind of AI, a sort of wildly
self-improving problem-solving algorithm that has no real consciousness and
simply goes on an optimization rampage through the world. This would be a
disaster of possibly grey goo-like proportions. BUT, this approach to AI is
also very likely to be used in tools with only limited autonomy. And because
the capability of an unconscious non-entity AI to understand the world is
limited, the probability of it taking over the world also seems limited. ==>
small probability of autonomous takeoff, but if it happens it will be the end
of everything

~~~
metamemetics
> _a sort of wildly self-improving problem-solving algorithm that has no real
> consciousness and simply goes on an optimization rampage through the world_

That sounds pretty similar to humans.

~~~
Udo
That's funny but doesn't really make sense unless you manage to confuse
consciousness with conscience.

~~~
metamemetics
I meant the human species as a collective has no single consciousness. And
also that individual humans do not have metaphysical consciousness.

~~~
Udo
> _I meant the human species as a collective has no single consciousness._

Neither does an AI species, but that's not the issue. The point being made here
was that a danger could arise from a very efficient and powerful automaton
that has neither self awareness nor recognizes other beings with minds as
relevant. From that I argued the threat of it happening is actually low
because by its nature this kind of AI would probably lack the means to
instigate an autonomous takeover of our planet.

> _And also that individual humans do not have metaphysical consciousness._

Ah, I finally see where our misunderstanding comes from. Science doesn't talk
about consciousness (or metaphysics) in the spiritual sense. The question
whether people have metaphysical consciousness or not really depends on your
definition of those terms, so arguing "for" or "against" isn't really gonna do
anything besides getting you karma for oneliners.

As far as practical AI research is concerned, the definition of consciousness
is the same for humans and non-humans, and while different degrees of
consciousness are possible, there certainly is agreement that the average human
has one.

------
moultano
I'm always unimpressed by how certain people are that a hard takeoff is even
possible. As near as we can tell, designing things is hard. (At least so long
as P ≠ NP.) I don't expect revisions of an AGI to have enough marginal
intelligence over previous revisions to offset the increase in complexity that
designing a further improved one requires. It seems more likely to me that any
given AGI will asymptote or grow logarithmically (rather than exponentially),
simply because designing things seems, in general, to be exponentially hard
relative to the number of states involved.

~~~
AngryParsley
You don't have to assign a high probability to hard takeoff to support
ethical/friendly AI research. The consequences of a hard takeoff are so huge
(potentially destroying humanity) that even a low chance is worth making
lower.

Humans have been running on the same hardware for several thousand years. Just
our improved knowledge and culture has been enough to keep us growing at an
exponential rate. If our brains were end-user modifiable and had APIs, we'd be
able to increase our growth rate even faster. An AI could do even better than
self-modification. It could buy or hack enough computers to give itself
thousands of times more processing power.

I don't assign high probability to that sort of scenario, but it is high
enough that it's worth ameliorating.

------
billswift
And the Less Wrong discussion of it - currently at 66 comments and growing -
[http://lesswrong.com/lw/2zg/ben_goertzel_the_singularity_ins...](http://lesswrong.com/lw/2zg/ben_goertzel_the_singularity_institutes_scary/)

------
Ratufa
Somehow, I'm not comforted by reassurances that an AGI is likely to have a
human-like value system, given how humans have often treated other humans who
are 1) Different from themselves in some way and 2) Technologically more
primitive.

------
nickpinkston
Warning! Overly verbose blog post!

Topic: Scary idea = AI ending the human race.

------
metamemetics
Ever since Frankenstein was written, I think we have always been unfairly
predisposed to think our creations will turn on us. Probably due to the
ubiquitous nature of inner guilt, or as a social legacy of Christian sin.

------
billswift
And now Robin Hanson has weighed in at Overcoming Bias:
[http://www.overcomingbias.com/2010/10/goertzel-on-friendly-ai.html](http://www.overcomingbias.com/2010/10/goertzel-on-friendly-ai.html)
Mostly agreeing with Ben Goertzel's position. He is pretty skeptical about the
hard-takeoff position, and has been for some time.

------
iwr
So paradoxically, the Singularity Institute can grow into a great enemy of AI
research. Should we deem Ray Kurzweil the first Pope of this anti-science
religion?

