
Three arguments against the singularity - michael_nielsen
http://www.antipope.org/charlie/blog-static/2011/06/reality-check-1.html
======
cousin_it
Most of the points in the essay (there are more than three) seem to me to be
wrong or off target. I started typing a point-by-point response but it turned
out quite long. If someone has the impression that the essay has some decisive
strong point, could you point it out so I can respond to just that?

EDIT: as a demonstration, I will deal with the first point in the essay. It
says: _"super-intelligent AI is unlikely because, if you pursue Vernor's
program, you get there incrementally by way of human-equivalent AI, and human-
equivalent AI is unlikely"_. Actually, getting to strong AI through math
(rather than through mimicking humans) sounds more probable to me. We already
have formalisms that can compute the most accurate possible prediction and the
most efficient possible way to optimize a utility function (inferring the
right physical laws in the process) if given tons of computing power, for
example look up Solomonoff induction or Marcus Hutter's AIXI. These count as
superintelligences, or at least superweapons that can destroy the world.
Stross's argument does not demonstrate the unlikelihood of someone
implementing a fast approximation to AIXI tomorrow.
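(For flavor, here's a toy, purely illustrative sketch of the idea. Real Solomonoff induction searches over all programs and is uncomputable; this replaces that with a tiny class of periodic bit patterns, keeping only the essential ingredient: a prior that gives a pattern of length p weight 2^-p, so simpler hypotheses dominate. The function name and the restriction to periodic hypotheses are my own simplifications, not real AIXI.)

```python
from fractions import Fraction

def predict_next_bit(bits, max_period=4):
    """Mix the predictions of every periodic hypothesis consistent
    with the observed bits, weighted by a 2^-length prior."""
    weights = {0: Fraction(0), 1: Fraction(0)}
    for p in range(1, max_period + 1):
        for pattern in range(2 ** p):
            hyp = [(pattern >> k) & 1 for k in range(p)]
            # keep only hypotheses that reproduce the observations
            if all(bits[i] == hyp[i % p] for i in range(len(bits))):
                # shorter (simpler) patterns get exponentially more prior mass
                weights[hyp[len(bits) % p]] += Fraction(1, 2 ** p)
    total = weights[0] + weights[1]
    return weights[1] / total  # posterior probability the next bit is 1

print(predict_next_bit([1, 1, 1]))  # 15/16: the all-ones pattern dominates
```

After seeing three ones, the period-1 pattern [1] alone carries half the prior mass, so the mixture assigns probability 15/16 to another 1; the only dissenting hypothesis is the period-4 pattern [1,1,1,0].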

~~~
lucasjung
I agree with you, but I think I can respond in summary instead of point-by-
point:

"First: super-intelligent AI is unlikely because, if you pursue Vernor's
program, you get there incrementally by way of human-equivalent AI, and human-
equivalent AI is unlikely."

The arguments that follow don't say anything about why it's impossible or even
prohibitively difficult; they only provide reasons why people wouldn't want to
try. There are, however, motivations beyond those he takes into consideration.

Uploading: Again, he doesn't make any arguments as to why uploading is not
achievable, he just talks about the very hard ethical questions that arise
when dealing with uploaded intelligences. That didn't stop us from inventing
nuclear weapons and a host of other ethically challenging technologies, so why
would it stop us from inventing uploading?

One particular statement he makes in this section I do want to address
specifically: "Uploading implicitly refutes the doctrine of the existence of
an immortal soul, and therefore presents a raw rebuttal to those religious
doctrines that believe in a life after death."

This is so obviously untrue that I don't know how any intelligent person could
use it in a serious argument. Religions that believe in immortal souls will
simply maintain that the uploaded copy is just that: a copy, a soulless
simulation of a real person. How can you possibly prove this one way or
another? "Soul" is a purely religious concept, beyond any temporal means of
observation or measurement, and therefore not subject to empirical study. I
still think you would see plenty of religious people oppose uploading, but for
different reasons.

He finishes by discussing the possibility that our entire universe is already
a simulation being run in the "real" world, and admits that nobody can prove
this one way or another, at least not anytime soon.

~~~
astine
"Religions that believe in immortal souls will simply maintain that the
uploaded copy is just that: a copy, a soulless simulation of a real person."

This might be true of some religious institutions but not of all. The concept
of the 'Soul' is actually derived from Aristotelianism (there is no mention of
it as such in the Bible) and many sects (particularly Catholics) believe that
in order for something to be intelligent it needs to have a soul, that is, an
immaterial aspect. If an uploaded person, or an AI, or space alien or whatever
is demonstrated to be intelligent, it would by definition have a 'soul.'

You would still get lots of folks who refused to accept the personhood of
uploaded persons and those folks might actually dominate, but the actual
theological consequences would be different (and far more complex) than you
imagine.

~~~
michael_dorfman
_The concept of the 'Soul' is actually derived from Aristotelianism (there is
no mention of it as such in the Bible)_

Wrong, and wrong. The notion of the "soul" in Greek philosophy predates
Aristotle; it's found in Plato's Phaedo, for example-- and there are several
words in the Hebrew bible that are usually translated as "soul" ( _nefesh_ and
_ruach_ , which correspond roughly to _psyche_ and _pneuma_ , respectively.)

~~~
astine
"and there are several words in the Hebrew bible that are usually translated
as "soul""

I stand corrected on this.

"The notion of the "soul" in Greek philosophy predates Aristotle."

True, but my point is that certain sects of Christianity borrow heavily from
Greek philosophy when trying to intellectualize what a 'soul' actually is.
Some tend to follow the Aristotelian conception rather than the Platonic (they
are distinct.) I'm not trying to say that Aristotle invented the concept of
the soul if that's what you're thinking.

------
weavejester
I don't buy the argument that we won't construct conscious AIs because they're
not useful. Many of our most celebrated achievements don't have a direct
practical use, yet that doesn't stop us from climbing mountains or sending
rockets to the Moon.

Can you really imagine a group of scientists sitting around a large computer
cluster and saying "Well, we _could_ create the first sentient AI and ensure
our names are enshrined in the history books, but why bother?"

~~~
imjustatechguy
We got to the moon because of massive political support at the highest levels;
it was viewed as a strategic necessity.

Climbing mountains doesn't seem that comparable to going to the moon.

Scientists need funding in order to act if it requires many man years.

Thus military uses of AI are likely to be well funded, to the tune of hundreds
of millions if not billions, because they can be viewed as a strategic
necessity if it seems like the Chinese could possibly develop an AI before us.
But if the general feeling in both Chinese and US circles is that a real AI
isn't feasible in the near or medium term, or that there are more
valuable/useful near- and medium-term objectives, there will not be a well-
funded race to create one.

~~~
weavejester
Who said it needed to be well-funded? Creating a sentient AI is likely to
become an easier undertaking as general computing technology and AI research
advances. Creating such an AI in 20 or 30 years might require a great deal of
funding, but what about in 50, 60 or even 100 years?

~~~
imjustatechguy
I don't think anyone can predict more than 20-30 years into the future with
any real degree of accuracy.

So yeah, I agree that X might very well be possible (or in widespread usage)
in 50 or 100 years, where X is basically anything that seems magical today.

------
ThomPete
The whole debate is about whether you believe in transcendence and what you
consider the mind to be.

Not transcendence in the Physics/Metaphysics sense but rather in the (at least
to our knowledge) fact that dumb matter transcended into aware matter at one
point.

I.e., life happened through transcendence from "dumb" matter into aware
matter: pattern-recognizing feedback loops with memory that can reflect on our
own existence.

I do not have the knowledge to determine the feasibility of the various
methods, but I do believe that the mind isn't a thing as such but rather a
system that simulates a reality: the reality we experience from our vantage
point.

So to me the question is not so much whether we can transcend from one
physical form to another, but rather how we will connect with more and more
external computation, which in turn will mold our internal computation and
maybe even take over some parts. At least the I/O can be replaced (eyes, ears
etc.)

But if that can be replaced then what's to stop other areas? Nothing dictates
that as far as we know right now.

Somehow the argument reminds me of Searle's (misguided) Chinese Room argument.

The fact is that we don't know whether it's possible. So why not just let
those who think it is work on a solution?

------
Tichy
"I don't want my self-driving car to argue with me about where we want to go
today"

The arguing is already happening: enter something in your navigation system
and it gives you the wrong suggestions (completions).

Also read up on the filter bubble.

------
jasongullickson
My favorite line -

 _And I certainly don't want to be sued for maintenance by an abandoned
software development project._

...let's all hope it never comes to that!

------
Symmetry
For all I've joined in with criticizing Stross elsewhere in this thread, I
really do think that technological change speeding up by vast amounts is
unlikely because of how much harder it is to create each new generation of
computers. If we were limited to 1960s technology, designing a new processor
of today's complexity might very well take over 100 years. For this reason I
expect that even when it's AIs running on ever-faster computers, we might see
faster technological growth, but not radically faster technological growth
(barring real algorithmic improvements, which might or might not be possible).

~~~
bermanoid
The idea is that the things doing the designing are also hard at work
redesigning themselves, presumably with a goal to get better at that, too. At
least until intelligence reaches some sort of plateau, which would almost
certainly be hit at some level way above human ability, it makes sense to
assume that some sort of locally exponential or even super-exponential
intelligence growth would occur.

That is, _if_ humans are able to start the chain reaction and come up with
something that's able to rewrite its own brain so that it's smarter than it
started...a lump of plutonium that's not at critical mass is just going to sit
there uselessly spewing out a slow trickle of neutrons, which is more or less
what's happening with AI today.
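A toy recurrence makes the distinction concrete (the 0.1 rate constant and both rate functions are made-up illustrations, not a real model): with a fixed redesign rate you get ordinary exponential growth, but if the rate itself scales with intelligence, the trajectory goes super-exponential.

```python
def fixed_rate(i):
    """Designers never get better at designing: a constant improvement rate."""
    return 0.1

def improving_rate(i):
    """Designers also get better at designing: rate scales with intelligence."""
    return 0.1 * i

def trajectory(rate_fn, steps=10, start=1.0):
    """Iterate i_{n+1} = i_n + rate(i_n) * i_n and record the path."""
    intelligence = start
    path = [intelligence]
    for _ in range(steps):
        intelligence += rate_fn(intelligence) * intelligence
        path.append(intelligence)
    return path

# the self-improving designer pulls away from the merely exponential one
print(trajectory(fixed_rate)[-1], trajectory(improving_rate)[-1])
```

Either way, the curve only takes off once the feedback loop closes, which is the chain-reaction point.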

------
Produce
Note: I only skimmed over the article. I've also thought about this subject,
and about the reasons we haven't solved world hunger, aren't living on the
moon, don't have people on Mars, aren't able to live for 200+ years and
haven't reached the singularity. The one thing which retards technology,
society and humanity as a whole is fear. Out of fear come politics, laws,
armies, governments and wars. Fear is the thing that makes us want to solve
all of our problems at once with as little thought as possible. It's what
makes us "dumb lazy", which I define as short-term laziness, as opposed to
"smart lazy" which is doing a little more now in exchange for much less later.
In other words, the bottleneck is the fear in each and every one of us. Once
we get past this stage (either through genetic modification, technological
augments, drugs, sheer willpower or a combination of the above), we will be
living in a post-singularity world.

------
Anissimov
Here's my concise response to Stross:

[http://www.acceleratingfuture.com/michael/blog/2011/06/respo...](http://www.acceleratingfuture.com/michael/blog/2011/06/response-to-charles-stross-three-arguments-against-the-singularity/)

Michael Anissimov, Singularity Institute

------
michaelchisari
Truth be told, I've never been much interested in the concept of "uploading"
consciousness. I find the concept of physical longevity (a la, Aubrey De
Grey's research) to be more practical, achievable and desirable.

~~~
sbierwagen
Ah, so you don't back up the files on your computer either?

~~~
michaelchisari
Yes, I do, because when I lose files, I'm still alive to care about it. When
I'm dead, I won't know better.

~~~
sbierwagen
Yes, the point of copying yourself _is_ the "not dying". Physical longevity
won't do a whole lot against a roadside bomb.

~~~
michaelchisari
_Yes, the point of copying yourself is the "not dying"._

Yes, and my point is that I don't really care about "not dying". I care about
living a long time. Getting hit by a bus (or killed by a roadside bomb)
doesn't bother me. Growing old and living for three decades in constant pain
and discomfort does.

------
iwwr
My main issue with the Singularity is the idea of exp-exp growth (growth that
makes exponential look linear). Or am I misunderstanding things?

~~~
Symmetry
To repost a link from the last discussion, there are three major schools of
thought on "The Singularity": [http://singinst.org/blog/2007/09/30/three-major-singularity-...](http://singinst.org/blog/2007/09/30/three-major-singularity-schools/)

~~~
dmbass
All three of those "major schools" are just different stages of the same
theory. Accelerating change leads to the intelligence explosion which leads to
the event horizon (the actual singularity).

~~~
Symmetry
Actually, the point of "Accelerating Change" is that it doesn't lead to an
intelligence explosion - change remains fairly predictable to the participants
through the Singularity. And an event horizon can easily happen even with slow
and constant change, it's compatible with the other two schools but doesn't
depend on them.

~~~
dmbass
You are suggesting that the superintelligence created by "Accelerating Change"
does not operate with the same parameters as the superintelligence of the
"Intelligence Explosion?"

The time when "Accelerating Change" no longer applies is the same time that
the "Intelligence Explosion" and the "Event Horizon" occur or at least that's
how it seems to me.

~~~
Symmetry
You nailed it with your first sentence there. The superintelligences created
in an "Accelerating Change" scenario are subjectively a lot like gods. The
first superintelligence created in the "Intelligence Explosion" scenario will
be subjectively a lot like God.

Also, the point of the "Accelerating Change" scenario is that there isn't an
intelligence explosion. The "Intelligence Explosion" scenario might or might
not have a period of accelerating change before the intelligence explosion,
but that doesn't make it the "Accelerating Change" _scenario_.

------
ignifero
The article presents the view that, due to social inertia, artificial
intelligence research will stop for ethical reasons before it realizes its
goal. To my knowledge this has never happened before.

~~~
cstross
Tell that to any researcher trying to get US government funding for work using
human embryonic stem cells.

Tell that to the German green party, who despite clearly wanting a carbon-
neutral environmentally friendly economy are in the process of closing down
Germany's nuclear reactor fleet.

(Etc.)

~~~
bermanoid
Sure, if AI actually requires some big beast of a codebase that would take
thousands of man years to produce, with tons of interdependencies and a
massively complicated architecture like an OS, it might be the kind of project
that could be blocked by making sure no Big Money goes towards it.

But if, as some people suspect, it's something that can be solved by setting
the right set of clever algorithms to work, running on twenty-years-out
hardware, then the ethical problems that the public has with the endeavor will
be irrelevant. Someone will just do it, ethics and regulations be damned,
especially given the monumental potential rewards.

There are arguments that suggest that the actual algorithmic complexity of the
"software" running on our brains is comparable to moderately complex software
that humans routinely develop (even though the computational and "RAM"
requirements of the "brain algorithm" are enormous), so I wouldn't discount
the possibility that a small research group might be able to do the job
without any serious level of financial support...

------
bluekeybox
"The reason it's unlikely is that human intelligence is an emergent phenomenon
of human physiology"

Physiology has as much to do with intelligence as electricity generation and
distribution has to do with computer function. We humans are _actors in a
society_ , and that's how we measure intelligence. All that's needed is to be
able to respond to social cues. Turing had it correct from the start. Don't
get it twisted.

------
peteretep
"If you thought the abortion debate was heated, wait until you have people
trying to become immortal via the wire. "

Misleading. People are against abortion because how else are you going to
punish those terrible terrible women for not staying at home and doing what
their fathers and brothers tell them?!

~~~
jokermatt999
I can understand strong arguments for and against abortion, but please don't
mischaracterize your opponents' reasoning. I've met and actually talked with
strongly anti-abortion women (one is my mother, actually), and their reasoning
has absolutely _nothing_ to do with anti-feminism or "traditional values".
They feel that abortion is taking an innocent life, period.

Now, feel free to debate with them whether that is true or not, but don't lie
about their reasoning.

That said, please, please, please keep these kinds of topics off of Hacker
News. I've never seen abortion end in anything but a flame war.

~~~
Shenglong
I was going to make a comment, but I agree with your last sentence.

