
Nick Szabo: "The Singularity" is an incoherent and religious idea - bumbledraven
http://unenumerated.blogspot.com/2011/01/singularity.html
======
swombat
Whilst I agree with the author that there are religious aspects to the whole
Singularity with a capital S movement, it seems to me his arguments are
equally unsound.

First of all, this is not an attack on the singularity idea, but on the
"strong AI" idea.

With respect to strong AI, Nick is very close to the problem, and that's the
issue. Like any good engineer/scientist, he sees problems everywhere. Yes,
there are many problems before we get to strong AI. That's not news. There
were many problems to resolve before people could pay a small sum of money and
let a giant metal tube take them through the air to a destination halfway
round the world without killing them - but those problems got resolved, one by
one. Many problems do not amount to an impossibility, only a damn hard problem
(which we knew strong AI was anyway).

As for his other assertion, that the concept of the singularity is a fantasy,
Nick's main argument is that the singularity will only last "for a time", and
that it will turn into a classic S-curve. He waves Feynman's name around as
supporting evidence, but does not address the fact that intelligence (and
artificial intelligence in particular) is not subject to the Malthusian laws
which have caused other phenomena to follow S-curves. Yes, we only have access
to so many particles, but the whole point of exponential AI is figuring out
better ways to use the same number of particles. There may be a theoretical
limit to how efficiently we can use those particles, but even so there are a
lot of particles, and if we can manufacture even just human-equivalent
computing matter in factories, that's already enough to achieve a singularity.

So, in summary:

\- Yes, there is a problem with Singularity stuff being quasi-religious (I've
been to a Singularity event and agree - though the proportion of kooks was
relatively low, there were certainly a few, some of them on the podium
speaking)

\- No, this article is not a convincing argument against either strong AI or
the potential of a singularity-like event to occur.

~~~
wladimir
Indeed. Belief that strong AI is _never_ possible is also religious. If the
human mind is simply a chemical computer (and science currently says it is),
it can (eventually) be simulated with sufficiently powerful hardware.

~~~
borism
I haven't yet heard any critic of Singularity religion claim that strong AI is
never possible.

What we are skeptical of is the claim that we need just a little more
development and strong AI will take off just like that, without any regard to
physical-world constraints.

~~~
wladimir
Well, one criticism I usually hear about singularity subjects is that "you
can't upload your mind into a computer because the emotional/soul/bla aspects
of a human being could never be emulated"... which is obviously religious. Yes,
it's very very hard but certainly no reason to call it impossible.

------
jasonwatkinspdx
I believe the Fermi Paradox[1] is very compelling evidence that the evolution
of computation is logistic[2] and further that saturation comes before
practical interstellar travel. Otherwise I feel it's likely we would have
already seen evidence of Clarke's Overmind[3].

That intelligence becomes increasingly specialized rather than general is a
reasonable explanation for why such saturation may happen (in a hand wavy
way).

I also think that much of the discussion about the Singularity, downloading
brains, etc, makes many anthropomorphic assumptions that are entirely
unjustified. I suspect it's more likely that we'll have better luck building
artificial chemical brains than we ever will simulating some captured
representation of the state of a human brain on a digital computer.

On the upside, if computation's growth is logistic then certainly we're in the
exponentiation portion right now, which means humanity still is likely to see
generations of at least near linear improvement. We may be essentially trapped
in this solar system, but we'll likely have quite astounding information
processing capabilities.

1: <http://en.wikipedia.org/wiki/Fermi_paradox>

2: <http://en.wikipedia.org/wiki/Logistic_function>

3: <http://en.wikipedia.org/wiki/Childhoods_End> (a good book if you forgive
it a few mystical characterizations of psychic powers)

~~~
Tichy
How is the Fermi Paradox evidence for evolution of computation being logistic?
Does the logistic function appear somewhere in the Fermi Paradox?

I have a feeling you are not using very strict logic here.

------
orangecat
His one good point is that "The Singularity" is a bad name for a concept that
doesn't refer to a single event, which makes it easy to dismiss with the
"rapture of the nerds" strawman. Aside from that, it's mostly an argument from
personal incredulity, which is even less valid here than usual. I wouldn't
have believed you a year ago if you told me that a search and advertising
company had self-driving cars roaming around Mountain View, but here we are.

 _Despite all the sensory inputs we can attach to computers these days, and
vast stores of human knowledge like Wikipedia that one can feed to them,
almost all such data is to a computer nearly meaningless._

IBM's Watson might disagree
([http://money.cnn.com/2011/01/13/technology/ibm_jeopardy_wats...](http://money.cnn.com/2011/01/13/technology/ibm_jeopardy_watson/index.htm)).
But of course responding to natural language questions (er, answers) will be
moved to the "not real AI" category once it's commonplace.

~~~
dnautics
it's not real AI. Read "Fluid concepts and creative analogies". If you ask a
human, abc is to 123 as abd is to... The human would probably say 124.

Ask Watson the same and it would answer "jackson five".

~~~
rbanffy
Which would be a very clever answer and even hint at some sense of humor.

~~~
dnautics
[http://www.google.com/search?sourceid=chrome&ie=UTF-8...](http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=abc+is+to+123+as+abd+is+to)

------
hugh3
The first problem with "The Singularity" is that nobody can actually agree on
a definition. For instance:

1\. The first comment on this article: _The singularity refers to the time in
human history when computers first pass the Turing test._

2\. Wikipedia: _A technological singularity is a hypothetical event occurring
when technological progress becomes so rapid that it makes the future after
the singularity qualitatively different and harder to predict_ \-- hey, y'know
what I call the point beyond which the future is hard to predict? I call it
"the present".

... and I'm too lazy to keep looking up other definitions, but they're
definitely out there.

The second problem with the idea is that some folks seem to have this flawed
logical chain in mind:

1\. Assume that a human brain can make a computer smarter than itself.

2\. In that case, the computer smarter than the human can make a computer
smarter than itself, which can in turn make a still smarter computer, and so
on, leading to vastly smarter computers very quickly.

This ignores the fact that if we ever _do_ make a computer smarter than a
human it will either be via (a) reverse-engineering of the human brain or (b)
some kind of evolutionary algorithm. The slightly-smarter computer is then no
more capable of building an even-smarter computer than we are, since it _also_
has to fall back on one of these two dull mechanical processes.

~~~
swombat
The human brain doesn't need to build a smarter brain. It just needs to build
something of equivalent smartness (which should be theoretically possible,
there's no reason to believe the human brain is the upper bound for all
generalised reasoning ability) on a substrate like silicon which is subject to
Moore's Law (and thus gets inherently faster with time) and which is immortal
and duplicable.

Build 1 functioning brain in silicon, and:

\- 18 months later you can build one that's twice as fast using the same
principles

\- duplicate this brain and get the power of multiple people thinking together
(but with greater bandwidth between them than any human group)

\- run this brain for 100 years and get an intellectually functioning mind
older than any human that has ever existed

\- duplicate whatever learning this brain has accumulated over 100 years
(which, say, brings it to the level of an Einstein) as many times as you have
physical resources for (so, clone Einstein)

All those are paths to super-human AI from the production of a human-
intelligence brain in a non-biological form.

So, if a human brain can make a computer brain, which is a reasonable
assumption, then a human brain can make a brain smarter than itself.

~~~
gruseom
_Build 1 functioning brain in silicon, and [...]_

That reminds me of Chesterton's definition of Western science: "Give us one
free miracle and we'll explain everything else." :)

Edit: Wish I could remember where I originally got that. Googling it digs up
not Chesterton but Terence McKenna.

~~~
swombat
But (part of) the point is, building a human brain in a non-biological
substrate is not a miracle. It would be a miracle in the same way that
transistors and penicillin are, not in the way that Jesus' resurrection is.
I.e., a fantastic, happy, unlikely but possible event that will change the
world for the better.

After all, we know that human brains can be built in some way: we have the
evidence for that claim inside billions of skulls. The question is then not to
push the theoretical boundaries of computational capability beyond some
theoretical level - but merely to achieve it again artificially.

We've managed to copy birds, fish, we've sent people to the moon, we've sent
probes outside the solar system, we've beaten countless diseases, we've
extended our own lifespans by decades, we've created monuments of human
culture... why assume that we won't achieve this too?

~~~
rimantas
> After all, we know that human brains can be built in some way: we have the
> evidence for that claim inside billions of skulls

Exactly. Skulls. With connection to living and feeling flesh.

We cannot even model the brain of the simplest worm…

~~~
rbanffy
> Exactly. Skulls. With connection to living and feeling flesh.

And, unless you claim there is something inherently magical and miraculous in
that, it can be reproduced or abstracted.

> We cannot even model the brain of the simplest worm…

I don't believe that's true. I am quite sure we have some models of varying
accuracy (as in "good enough") for those. Maybe we cannot run the most
accurate ones (modeling the chemical processes within individual neurons) in
real time, but, by 2012, we'll be able to run them twice as fast as we do now.

------
jakeg
Article Summary: attack the very premise of achieving general intelligence by
attacking _narrow AI_ techniques such as machine learning and their obvious
inadequacies, and then acknowledge that they have not, in fact, accomplished
general intelligence.

------
geuis
This is, typically, yet another diatribe that generalizes various "the
Singularity" concepts and denigrates them in a single fell swoop.

I adhere to many of the arguments put forth by Ray Kurzweil, whom many dismiss
as a crank. However, unlike most other self-declared prophets, Kurzweil has a
solid history of predictions going back at least 21 years. While of course not
all of the things he has said have come to pass, or they have come to pass at
slightly different times, more often than not they have been spot on.

Of course, some nutcases really _do_ subscribe to The Singularity as a faith.
These people don't pay attention to the various sub-disciplines of biology,
computer science, and materials science that are among the core fields
generally related to The Singularity.

Folks like myself who think that something like TS is ahead of us base our
reasoning on the rapid advances that humanity is making in the core fields,
not on some religious belief. I'm an atheist and don't believe shit without
proof.

It's easy to tear down, but much harder to make a valid argument. Nick Szabo is
stating that something will never happen because it hasn't happened yet. I
suggest he start listening to what's going on in science.

~~~
dnautics
"Kurzweil has a solid history of predictions going back at least 21 years"

[http://www.acceleratingfuture.com/michael/blog/2010/01/kurzw...](http://www.acceleratingfuture.com/michael/blog/2010/01/kurzweils-2009-predictions/)

Confirmation bias. You're not looking hard enough at the stuff he says that
didn't come true.

Even so, "past performance is no indicator of future success. You could have
said the same thing about the goldilocks economy in 2001 but the notion that
the economy could continue exponentially forever was pretty dumb. All I have
to say about believing in exponential things, is, please, please consider the
bacterial growth curve.

[http://faculty.irsc.edu/FACULTY/TFischer/images/bacterial%20...](http://faculty.irsc.edu/FACULTY/TFischer/images/bacterial%20growth%20curve.jpg)

(that's a log graph) See, if you were an intelligent E. coli during growth
phase, you might notice that everything was exponential and hunky dory, and
there was no end in sight.
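
The bacterial-growth analogy can be made concrete: a logistic curve is locally
indistinguishable from an exponential right up until it nears its carrying
capacity. A minimal sketch (the growth rate and capacity values here are
arbitrary, chosen purely for illustration):

```python
import math

def exponential(t, x0=1.0, r=0.7):
    """Unbounded exponential growth."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.7, k=1e9):
    """Logistic growth: same early behaviour, but saturates at capacity k."""
    return k / (1 + (k / x0 - 1) * math.exp(-r * t))

# During the "growth phase" the two curves are nearly indistinguishable...
for t in (0, 5, 10):
    print(t, exponential(t), logistic(t))

# ...but far enough out, the logistic curve has flattened at k,
# while the exponential has long since left it behind.
print(logistic(60), exponential(60))
```

An observer sampling only the early points has no way to tell, from the data
alone, which of the two curves they are on.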

~~~
BonesLF
Luckily as humans we can fight the logistic death phase after the singularity
'period'. Fight it with more and more superior AIs; perhaps something like
what Adams envisioned.

But we'll most likely lose to ourselves. It's a M.A.D. world.

------
A1kmm
I think it would be religion to not believe that it is theoretically possible
to build a device which emulates the human brain - sort of like the old belief
that it was impossible to make 'organic' compounds out of compounds of
'inorganic' origin because they lacked the vital force.

Whether it will ever be possible to do it on a physically possible Turing
machine in real time is perhaps slightly more debatable.

~~~
dnautics
when (and if) we will do it is even more debatable.

------
asnyder
Whether or not we reach "The Singularity" doesn't matter much to me, I just
want us to reach a point where there are hundreds of thousands of nanobots
aiding our individual immune system along with preventing degradation of our
internal organs, is that too much to ask? According to all the research we
shouldn't be too far off from that, though probably not in my lifetime.
Whether or not we reach an AI singularity doesn't matter to me, so long as we
have our nanobots.

~~~
wladimir
Whether or not we reach "nanobots" doesn't matter much to me, I just want my
flying cars! Is that too much to ask?

------
gwern
Hm. It appears Blogspot ate my comment. So I guess I'll just paste it here.

\---

> The "for a time" bit is crucial. There is as Feynman said "plenty of room at
> the bottom" but it is by no means infinite given actually demonstrated
> physics. That means all growth curves that look exponential or more in the
> short run turn over and become S-curves or similar in the long run, unless
> we discover physics that we do not now know, as information and data
> processing under physics as we know it are limited by the number of
> particles we have access to, and that in turn can only increase in the long
> run by at most a cubic polynomial (and probably much less than that, since
> space is mostly empty).

Yes, but the fundamental limit is so ridiculously high that it might as well
be infinite. Look at Seth Lloyd's bounds in
<http://arxiv.org/abs/quant-ph/9908043> . He can be off by many orders of
magnitude and still Moore's law will have many doublings to go.
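
A rough back-of-envelope version of that claim (the figures below are my own
illustrative assumptions, not from Lloyd's paper directly: his bound for a
1 kg "ultimate laptop" is on the order of 5.4e50 logical operations per
second, and 1e18 ops/sec is a generous stand-in for present-day hardware):

```python
import math

# Illustrative assumptions (orders of magnitude only):
lloyd_limit = 5.4e50   # ~ops/sec bound for 1 kg of matter, per Lloyd
current = 1e18         # generous estimate for today's hardware

doublings = math.log2(lloyd_limit / current)
years = doublings * 1.5  # one doubling per ~18 months

print(f"~{doublings:.0f} doublings, ~{years:.0f} years at Moore's-law pace")
```

Even if the starting estimate is off by ten orders of magnitude either way,
the answer stays in the range of roughly a century or two of doublings, which
is the sense in which the limit "might as well be infinite" on these
timescales.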

(Incidentally, the only quasi-Singulitarian I am aware of who has claimed
there will be an actual completed infinity is Frank Tipler, and as I
understand it, his model required certain parameters like a closed universe
which have since been shown to not be the case.)

> As for "the Singularity" as a point past which we cannot predict, the stock
> market is by this definition an ongoing, rolling singularity, as are most
> aspects of the weather, and many quantum events, and many other aspects of
> our world and society. And futurists are notoriously bad at predicting the
> future anyway, so just what is supposed to be novel about an unpredictable
> future?

WHAT. We have plenty of models of all of those events. The stock market has
many predictable features (long-term appreciation of x% a year and fat-tailed
volatility for example), and weather has even more (notice we're debating the
long-term effects of global warming in the range of a few degrees like 0-5,
and not, I dunno, 0-1,000,000). Our models are much better than the stupidest
possible max-entropy guess. We can predict a hell of a lot.

The point of Vinge's singularity is that we can't predict past the spike. Will
there be humans afterwards? Will there be anything? Will world economic growth
rates and population shoot upwards as in Hanson's upload model? Will there be
a singleton? Will it simply be a bust and humanity go on much as it always has
except with really neat cellphones? Max-ent is our best guess; if we want to
do any better, then we need to actively intervene and make our prediction more
likely to be right.

> Even if there was such a thing as a "general intelligence" the specialized
> machines would soundly beat it in the marketplace. It would be very far from
> a close contest.

And the humans? If there is a niche for humans, that same niche is available
to the general intelligence, and since it can be self-improving and graft on
the specialized machines better than humans ever will, it ought to dominate.

The economic extinction of humanity? Seems like a Singularity to me. (The
economy was driven by human desires before, but what do the machines compete
for when the humans are gone? I certainly can't predict it.)

> Nor does the human mind, as flexible as it is, exhibit much in the way of
> some universal general intelligence. Many machines and many other creatures
> are capable of sensory, information-processing, and output feats that the
> human mind is quite incapable of.

And pray, how do we know of these feats? How were these machines constructed?
(Whether I kill you with my bare hands or by firing a bullet, you are still
dead.)

5678:

> The null hypothesis would be: there does not exist an algorithm and a
> computer which achieves the goals of AGI. I await the proof, or a modicum of
> scientific evidence to support this.

Shouldn't the null hypothesis be the Copernican hypothesis? - there is nothing
special about humans.

Every success of AI, every human function done in software, is additional
Bayesian evidence that there is nothing special about humans. If there were
something special, one ought to observe endless failure, much like trying to
trisect the angle or prove the parallel axiom; but instead, one observes
remarkably rapid progress. (Where was chemistry after its first 60 years? Feel
free to set your starting point anywhere from the earliest Chinese alchemists
to after Robert Boyle.)

Kevembuangga:

> More to the point, some singularitarians pretend to have a definition of
> "Universal Intelligence" which curiously enough isn't computable. :-)

If you have an actual argument, please feel free to give it and not just
engage in mockery.

Were you aware that uncomputable algorithms are useful and can easily be made
computable?

(You can solve the Halting Problem for a Turing machine with bounded memory,
for example. And every static analysis like type-checking is, by Rice's
theorem, attempting something undecidable in general, but such analyses are
still not useless. When employing theorems, beware lest you prove something
different from what you think you are proving.)
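
The bounded-memory point can be made concrete: a machine with finite memory
has only finitely many configurations, so a run that revisits a configuration
must loop forever, and halting becomes decidable by remembering every
configuration seen. A toy sketch (the `halts`/`step` names are illustrative,
not from any library):

```python
def halts(step, initial_state):
    """Decide halting for a machine with finitely many configurations.

    `step` maps a configuration to the next one, or returns None when the
    machine halts. A revisited configuration proves an infinite loop.
    """
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False      # revisited a configuration: loops forever
        seen.add(state)
        state = step(state)
    return True               # step returned None: the machine halted

# A 4-bit counter that halts when it wraps around to zero:
print(halts(lambda s: (s + 1) % 16 or None, 1))   # True
# The same state space, stuck cycling between two states:
print(halts(lambda s: 1 - s, 0))                  # False
```

This is exponential in the memory size, of course, which is why it decides
halting in principle without being a practical verifier.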

------
dstein
I think of it as kind of an inverted religion. Instead of having faith that
something magical happened in the past, a singularitarian has faith that
something magical (or maybe terrifying) will happen in the future.

------
bermanoid
That someone would claim to "debunk" the entire Singularity concept without
showing any indication that they've ever read anything written on the topic by
Eliezer Yudkowsky is laughable (in particular, Szabo doesn't appear to really
grok the difference between the time scales of exponential growth in Moore's
law and the exponential growth we envision when we talk about an intelligence
FOOM).

Yes, if what you mean by "Singularity" is "that thing that Kurzweil says will
save us all, and it will happen by 2029 because Exponential Curves Are Awesome
And Go On Forever", then sure, it's a religious and borderline psychotic idea.

The rest of us realize that exponentials don't last forever, that strong AI is
a hard problem, that brain-scans won't solve the whole problem even if we
_did_ have the tech to do them, which we won't for quite a while, that the ML
algos we have today are not strong enough, that even defining a metric to
optimize against is extremely difficult, etc. We get it - we live in the real
world, AI is _really tough_ , and we haven't solved it yet. Further, we're
mostly _worried_ about the prospect of AI, rather than naively hopeful for
some geek rapture, because self-improving AI is likely to turn out quite badly
for us unless we're exceptionally careful.

"Faith" only enters into this when deciding whether you believe the
probability that we can build self-improving AI any time soon is significant
or not, and I'd argue that it's not so much faith as slightly-educated
guessing (predicting the probability of a tech breakthrough is an extremely
difficult thing to do, so sure, there are high margins of uncertainty). You
may disagree as to whether that probability is 1% over the next 20 years or 1%
over the next 100, but we're hardly talking about the odds of virgin-birth
here, so I find it completely unfair to call the whole idea
"religious"...frankly, either of those probabilities is, IMO, worth worrying
about, given the stakes. And you're flat out delusional if you don't admit a
1% chance of strong AI within 100 years.

I was going to more specifically tackle the other claims in this post, but
ultimately, it's not worth it. When it comes down to it, Szabo assumes that
the main thing that would make strong AI "easy" (or at least relatively easy),
the existence of a general algorithm for reasoning, doesn't exist. If you take
that as gospel, then sure, it's going to be pretty difficult to construct AI.
I can even see why he'd think that, having used genetic programming and other
ML techniques and finding little success (IMO we haven't pushed the genetic
programming paradigm anywhere _near_ far enough, but that's a story for
another day).

Plenty of people see things differently, and think that a general purpose
intelligence algorithm is within our grasp; I'd argue that there's some loose
evidence that human cognition largely owes itself to exactly such an
algorithm, though in our case it's probably a bit more complex than we'll be
able to come up with from scratch.

------
Vitaly
So he basically says that there will be no "general AI" because he studied the
field and can't see how it can work, duh! :)

