
Why the singularity may never arrive  - mixmax
http://www.maximise.dk/blog/2008/11/why-singularity-may-never-arrive.html
======
mhartl
Kurzweil deals with this "S-curve" phenomenon extensively in his books. He
shows that each _individual_ computing technology follows an S curve, but the
exponential price-performance of computation continues through a smooth
transition to the next method. For example, as vacuum tube computing
saturated, transistor computing picked up. This is not a coincidence; as one
technology reaches saturation, resources flow to the bottleneck, fostering the
next technique in the chain.
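That stacked-S-curve picture is easy to sketch numerically. Here's a toy model (illustrative numbers only, not Kurzweil's actual data): each paradigm is a logistic curve that saturates, each successor has 10x the capacity of the last, and the sum keeps climbing roughly exponentially even though every individual curve flattens.

```python
import math

def logistic(t, cap, midpoint, rate=1.0):
    # One technology's S curve: growth that saturates at `cap`.
    return cap / (1 + math.exp(-rate * (t - midpoint)))

def paradigm_chain(t):
    # Sum of staggered S curves: paradigm k saturates at 10**k,
    # with its steepest growth around t = 5*k.
    return sum(logistic(t, cap=10**k, midpoint=5 * k) for k in range(1, 6))

# Each individual curve saturates, but the envelope grows roughly
# 10x every 5 time steps as the next paradigm takes over.
for t in (0, 5, 10, 15, 20):
    print(t, round(paradigm_chain(t), 1))
```

In this toy model each paradigm's S curve flattens, but the successor picks up the slack, so the envelope stays close to exponential - the shape Kurzweil argues price-performance of computation has followed across tubes, transistors, and integrated circuits.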

Intel hit just such a wall a few years ago with the size of its gates. Many
had predicted, based on this bottleneck, that Moore's Law might end some time
around 2003-2005. Intel's CTO Justin Rattner explained at this year's
Singularity Summit that there are smart people at Intel, and they found a way
around the problem. In fact, the transition was so smooth that few people even
noticed.

And so it goes. The computational capacity of matter is effectively infinite
on the scale of current technology, so there's no reason to expect computation
to saturate for a _long_ time. Barring a global cataclysm, it's hard to see
anything standing in the way of a technological singularity some time in the
next 50-100 years.

~~~
ars
Actually, Intel did not really "find a way around the problem". CPU speeds have
barely improved since then; instead, it's all about multi-core.

If massive parallelism were all it took, we would have AI today - and we don't.
And it doesn't look like CPUs are going to get any faster.

The curve did exactly what the linked article said: it turned into an S curve.

Moore's law still works because that's a measure of cost, but if you plot a
single core, you get an S curve - which means no more singularity.

Plus, even a super-fast computer wouldn't get you a singularity, since no one
knows how to program an AI - not even given the fastest computer in the world.

~~~
mnemonicsloth
I don't want to jump to conclusions, but from the way you throw around the term
"CPU speeds" (how do you even measure that?), you sound to me like a
programmer for whom computation is a magical process composed of functions,
arguments, processes and files.

Moore's Law is only indirectly related to CPU speed. Instead, it predicts the
MOST ECONOMICALLY PROFITABLE minimum feature size of a semiconductor
manufacturing process.

To a lot of people, those two things are one and the same, but in reality,
computing power tends to grow because of innovations in processor
architecture. In other words, material and device engineers will wring lots of
improvements out of a given process, giving the architecture guys more
transistors to implement bigger caches, longer pipelines, branch prediction,
and the like.

So while "Moore's Law" continues apace, "CPU speeds" (however you measure
those) _have_ stalled a bit. This is because the current slate of
_architectural_ improvements has been exhausted, and there's a lot of
uncertainty surrounding how to implement the Next Big Thing (core-level
parallelism). This shouldn't be terribly worrisome to us, as it's happened
before.

From the 1970s to the early 1990s, CPU manufacturers focused on "bit-level"
parallelism, basically throwing in bigger registers and more instructions to
burn through growing hardware budgets. When it became obvious that this
approach wasn't improving performance any more, we got tghe RISC processors
that enabled pipelining, upclocking, and caching.

If you didn't already know all of this -- and a lot more background besides --
your opinions about "programming an AI" are worse than useless. You're
contributing zero information, and adding a little more noise (in the form of
unsubstantiated certainty) to a field that's already debated too hotly.

------
patio11
I don't think the hole in the singularity argument is necessarily that
technological progress, measured by CPU speed or accessible memory or
bandwidth or whatever, will slow.

I think that the hole is probably "We have virtually inexhaustible amounts of
X and THIS MEANS MAGIC HAPPENS."

There are many resources for humans which were, in certain times and places,
very, very finite. Let me give you a trivial example: drinking water that
wouldn't kill you. The history of humanity up until quite recently was the
history of drinking water.

However, most areas of major Western nations hit, effectively, the Drinking
Water Singularity a long time ago: you can get what are (relative to
historical needs and prices) infinite amounts for nothing. (Seriously: think
of how much human labor it costs to draw one bucket of water from the well
located several miles from your village. Mentally got an idea for how much?
OK, now how much labor does it take a McDonald's employee to afford one bucket
worth of tap water? What, maybe a quarter of a second, if that?)

The change from cholera outbreaks to no cholera outbreaks, which happened way
the heck back on that curve, had PROFOUND consequences for civilization. The
change from "I could fill up a swimming pool for a trivial amount of money" to
"I could fill up TEN swimming pools for a trivial amount of money" had
negligible consequences for civilization. All that water wealth, nothing of
major consequence to spend it on.

(I know, I'm overlooking the fact that certain areas like the American
Southwest are actually facing water crunchiness again, and large portions of
the human population still fail to have their basic needs met. Ignore that for
the purpose of simplification -- and incidentally, "You can have all the water
you can drink but you can't water your lawn in the daytime" is still
singularity-esque relative to "Send a woman to walk a mile to bring back a
bucket of water".)

That is how I see the technology singularity coming about: what if I gave you
all the CPU cycles you could want for nothing and you found, after a certain
point, you had nothing of major consequence to spend them on? What are you
going to do, calculate a new Mersenne Prime every second for eternity and call
it the singularity?

[Edit: I originally said typhus, not cholera. Typhus is not caused by water
quality issues. Sorry, it has been a long time since I played Oregon Trail.]

~~~
cowmoo
There are current problems pushing the envelope of computational power, e.g.,
MD simulation (see D. E. Shaw Research) and engineering simulation of jet
engines (see Pratt & Whitney) - even state-of-the-art InfiniBand clusters
and custom-designed chips cannot simulate more than 10 seconds of molecular
interactions, or the complete fluid dynamics of a jet engine, in a reasonable
time-frame.

You are right, however, in the sense that AI can't be achieved by tossing in
more CPU power. Almost all of the media hype around Deep Blue playing chess or
hedge-fund black boxes concerns _narrow_ AI, that is, programs that look
"smart" because human beings have painstakingly poured so many techniques into
a specific and narrow problem space. We have yet to come up with an artificial
general intelligence (AGI). For those who are interested in AGI, check out:
<http://www.agi-09.org/> and <http://journal.agi-network.org/> .

~~~
yters
Also called strong AI: <http://en.wikipedia.org/wiki/Hard_AI>

------
swombat
This article is overly simplistic to the point of uselessness. Population
growth peaked in 1963 not because of food production limits but because of
cultural and social changes related to growing wealth. That destroys the whole
Malthusian argument.

Similar demolitions can be performed on the rest of the article's key points:

 _The same thing will happen with technology - eventually we will run into
insurmountable barriers to growth and progress will stabilise at this level.
It's just a question of what the barriers are - the food for progress so to
say._

The food for progress is human intelligence. Singularity models are built
precisely on an exponential increase of intelligence that can be used to
produce more intelligence. The Malthusian model breaks down there again.

To presume that Kurzweil doesn't know about Malthus is pretty cocky and very
likely wrong.

~~~
mixmax
I'm sure Kurzweil knows about Malthus. The point is that nowhere in nature or
sociology is there such a thing as unlimited exponential growth - it is always
stopped by lack of resources.

Why should computation be any different?

~~~
jerf
The Malthusian curve occurs because progress impedes progress: the more you
grow, the fewer resources you have to grow with.

Technology doesn't work that way. Technology feeds technology. Therefore, the
curve will not look like the Malthus curve.

Kurzweil doesn't claim it'll actually go forever. That's a strawman. He simply
observes that technology feeding technology is a different shape.

Technology feeding technology would probably look more like an exponential
curve that hits a very sudden wall at the limit point, whatever that may be,
rather than a gradual falloff.

~~~
mixmax
Interesting argument...

~~~
swombat
It's one of the cornerstones of the singularity possibility. If you didn't get
that argument, you didn't really get the whole singularity movement...

------
jdminhbg
From the article:

"Thomas Malthus Put forth his theory of limits to human growth... What
happened was that population growth declined and has halved since its peak in
1963."

Of course, on the other hand, something else happened 3 years before that
peak: <http://en.wikipedia.org/wiki/Birth_control_pill>

------
jimbokun
Interesting suggestion that resources for computing will eventually run out in
some fashion, thereby invalidating the Singularity prediction. But what
resources?

I wonder where energy costs fit into this. Already, Internet companies are
seeing energy as a significant part of the costs for running their business.
How much energy would be consumed by a $1000 computer with the same computing
capacity as the entire human race? How much heat would it produce?

~~~
rms
Energy production is also increasing exponentially.

<http://en.wikipedia.org/wiki/Kardashev_scale>

~~~
jimbokun
According to the chart on that page, at a much slower rate than Moore's law.
So what is the implication of that for the Singularity? Will the increased
demand for energy to power computation cheaply force the Singularity to wait
until the energy growth curve catches up?

It is also relevant to note, I think, the current volatility in energy prices.
We are having a difficult time already supplying all of humanity's energy
demands.

Of course, as I note in another post, Kurzweil has considered this and does not
think it will be a significant barrier. I'm just curious about how the numbers
work out.

------
Eliezer
Aaaand once again, author does not know what a "Singularity" is.

[http://www.singinst.org/blog/2007/09/30/three-major-
singular...](http://www.singinst.org/blog/2007/09/30/three-major-singularity-
schools/)

------
electromagnetic
The question of the singularity isn't whether it will happen; it's whether it
will happen as the big overhaul of civilization people predict. The best
example I can think of for the 'singularity phenomenon' comes from Charles
Stross' book Singularity Sky, in which a super-advanced race comes along and
basically asks everyone in an entire civilization, "What do you dream of?
We'll do it for you." Revolutionaries ask for self-replicating weapons and use
them; the government's policy is to pretend the super-advanced race isn't
there, and it lets the planet get destroyed.

Essentially the above is as predicted. However, what is likely to happen is
that fundamentally no human being will notice a difference between now and
post-singularity. It's like video games: people criticize today's graphics for
being too fake or not realistic enough, when if you look back even a couple of
years the improvement is immense. If you look back to the NES, the improvement
is fucking unbelievable - just as looking back from post-singularity will be
fucking unbelievable - but when you're in it you're still going to complain
that computers aren't fast enough, that something in a video game doesn't seem
realistic enough, etc.

We might have humanoid robots in 20 years' time, but it isn't going to be a
holy-crap moment where they magically appear. It'll be like in the movie
I, Robot: each new model is better than the last, just like with PCs, but it
still took something like 40 years from the founding of the company before the
supercomputer that ran them decided to take over.

------
david927
I always thought that Kurzweil was too simplistic in the way he comes to his
conclusions and this points to just a couple of the reasons.

I think if we ever hit the singularity, it will be because of black-swan
spikes and leaps, not because of a gradual curve.

~~~
jwilliams
This article is even more simplistic.

 _"This chart looks like this chart - therefore they are the same... The
reason is because they are limited by some kind of resource, but I have no
idea what that is"_

------
schtog
The people actually working on general-purpose AI - what exactly are they
doing?

It seems to me we:

* Don't have a clear idea what consciousness is

* Have made only very pathetic attempts at creating it so far.

But machine learning has achieved some seriously impressive stuff, although I
wouldn't call it intelligent. I'm into machine learning because it works and
it is of great help to humanity.

If someone told me to design a general-purpose AI, I wouldn't really know what
to do.

So what are these singularitarians doing? Just philosophy, or something real?

They strike me more as the alchemists of our time rather than serious
scientists.

~~~
MaysonL
Note that Isaac Newton was seriously into alchemy, and that chemistry evolved
out of alchemy...

------
aswanson
And it came to pass that the light from the self-sustaining fusion reaction
danced upon the surface of the orb that would be known as Terra, and this
dance would cause the essence of the orb's surface to dance in unison, in ways
that would sustain itself and yet change its cadence and routine slightly
with each generation...

This in turn would feed another improbable series of creations that would form
their own yet again....and these would reflect...only to say that their own
ability to shape the dance is impossible...

------
acro
I would like to point out to the author that many natural phenomena like
rabbit populations are tightly constrained by environmental factors, mainly
available living space and food. It's these environmental factors that cause
the S-curves. These same constraints do not apply to technology.

------
trominos
This is not exactly on-topic, but here's a question that I still haven't
gotten a good answer to:

People who are excited about a potential singularity: why? Doesn't the idea of
being obsolete scare you?

~~~
yters
That's why Eliezer Yudkowsky is working on friendly AI.

<http://yudkowsky.net/>

Though, that still doesn't rule out people misusing friendly AI against each
other. Say some megalomaniac thinks it'd be much more convenient to have a
bunch of friendly AIs around than a human population prone to revolt. In that
case, friendly AI is not so great since it disrupts the power balance.

------
DaniFong
We are not on Moore's law for batteries.

------
Allocator2008
A $1000 computer with the equivalent knowledge of the whole human race won't
happen because it is physically impossible.

There are about 10^11 neurons in the brain. There are 10^10 (say) people in
the world (more like 0.7 x 10^10, but anyway). So loosely there are on the
order of 10^21 neurons in humans on Earth in total. Avogadro's number gives
approx. 6 x 10^23 atoms in 12 grams of carbon, so to fit the entire human
race's intelligence on, say, your handheld mobile device, you would have only
a few hundred atoms of a 12-gram lump of carbon per simulated neuron.
Transistors are cool, but they ain't that cool, boys and girls. Won't happen.

Now, you could fit a 10^10-neuron simulation in a 10-meter-square server room
1 meter deep - that might be possible. So replicate this set-up 100 billion
times and you'd get the population of the human race on a computer. Slightly
more than $1,000, though. :-)
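A quick sanity check of the arithmetic above, in a few lines (the figures are
the same round order-of-magnitude assumptions as the comment, not measured
data):

```python
NEURONS_PER_BRAIN = 1e11      # ~10^11 neurons per human brain (rough consensus figure)
POPULATION        = 1e10      # round number; actual world population ~0.7e10 in 2008
AVOGADRO          = 6.022e23  # atoms in 12 grams of carbon-12

# Total neurons across all living humans: ~10^21.
total_neurons = NEURONS_PER_BRAIN * POPULATION

# If a handheld device were a 12 g lump of carbon, how many atoms
# would be available to simulate each neuron?
atoms_per_neuron = AVOGADRO / total_neurons

print(f"total neurons: {total_neurons:.1e}")
print(f"atoms per simulated neuron: {atoms_per_neuron:.0f}")  # ~600
```

A few hundred atoms per neuron is the crux of the claim: even a hypothetical
one-atom-per-transistor technology leaves very little structure with which to
simulate each neuron.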

~~~
jerf
You're thinking with modern technology, not future technology. What if $1000
(minus change to account for resource usage) bought me a computer seed that
_grew_ into a massive supercomputer in some relatively-short timeframe?

If self-replicating technology is possible, then $1000 for that much computer
power would actually be a ripoff.

(I'm not claiming that such a thing is possible, per se. But then again, we
_are_ such seeds ourselves, or rather, seeds that grow one human's worth of
computing power, so it's hard to argue that this is _physically impossible_.
People who doubt this is possible need to explain why we're going to run out
of progress before we figure out enough about biology to trick some existing
organism into doing something like this for us. We don't even have to build
the things ourselves...)

Arguing about whether the singularity is possible is a waste of time. Truth is
that according to the original definition, it's pretty much inevitable,
according to the physics we already know and the engineering that we can
almost already do. The _real_ argument people are trying to have is over the
final limits of technology, and an awful lot of people argue for limits that
are far, far too low, with very feasible paths for passing the limits plainly
in view even today.

------
Allocator2008
Oh, one other comment regarding population explosions in general. I just got a
tip from an inside source at the government that is top-secret classified
information:

"SOYLENT GREEN IS PEOPLE! SOYLENT GREEN IS PEOPLE!"

