
Kurzweil's rebuttal to Paul Allen - ca98am79
http://www.technologyreview.com/blog/guest/27263/?p1=blogs
======
hugh3
I grow tired of Kurzweil's vague arguments against people who disagree with
his vague predictions.

What I think Kurzweil doesn't understand is that in any argument about what's
going to happen in the future, the onus of proof inevitably lies with the guy
saying "This is what's going to happen", not the guy saying "Ehh, maybe not".

I don't know what's going to happen in the future, and I don't pretend to know
what's going to happen in the future, but whatever happens either (a) I'll
find out eventually or (b) it'll happen after I'm dead anyway. But john_b's
point about Kurzweil's lack of a null hypothesis is a good one.

So my question for Kurzweil is this: what will the world look like if you're
wrong? What possibilities are your predictions _excluding_? If I'm still alive
in 2060, and I look around at the world around me, under precisely what
conditions am I entitled to say "Well whaddya know, looks like Kurzweil was
wrong about that Singularity thing after all"?

~~~
khafra
I agree with your conclusions, but

> the onus of proof inevitably lies with the guy saying "This is what's going
> to happen", not the guy saying "Ehh, maybe not".

I'd say that the onus lies on the one making conjunctions instead of
disjunctions. Often, negative predictions are disjunctions, but this isn't
always the case: Compare "in 2100, North America will be inhabited by humans"
with "Ehh, maybe not."

~~~
hugh3
Something like that. It's actually hard to know where to divide up the onus of
proof when we're talking about predictions.

One thing's for sure, though. Kurzweil's "I'm right until proven otherwise"
attitude ain't the way to do it.

------
losvedir
Ugh, not this again.

 _That would mean that the design of the brain would require hundreds of
trillions of bytes of information. Yet the design of the brain (like the rest
of the body) is contained in the genome._

I believe it was on HN that this discussion came up before, but that's a short-sighted way of looking at it. Basically, it doesn't take into account all the
interactions of the environment required to turn that "source code" into a
person. Sure, the DNA would be sufficient if you were able to accurately
simulate cellular actions, protein folding, and physics in general, but we
just can't do that yet, and it doesn't look like we'll be able to any time
soon.

~~~
fl3tch
Exactly. Distributed computing grids still have trouble folding single
peptides with reasonable accuracy.

We have the source code, but we don't have the compiler.

~~~
rsl7
Worse, there are about to be seven billion brains on the planet. Not merely one. And human brains aren't so great in isolation.

~~~
wanorris
Why would it need to be in isolation? Machine vision, voice recognition,
speech synthesis, robotics -- there's no obvious reason why if we could build
such a brain we couldn't find a way for it to interact with people.

~~~
rsl7
I agree, but the devil is in those details. Look at the variations among
actual humans.

------
Bishop6
Kurzweil makes some good points here. He doesn't address each and every
criticism of his view of AI progress, but he does a good job calling out
Paul Allen on not doing the homework.

~~~
DanielStraight
Calling out someone for not doing their homework seems somewhere in the DH1-2
range:

<http://www.paulgraham.com/disagree.html>

There are some good points in Kurzweil's response, but the ones about Paul
Allen are definitely not them.

I think his best point was about extrapolating function from individual cells
or structures, without needing to understand every single cell or structure
individually.

~~~
jerf
I disagree. When one "addresses" a long-standing, well-thought-out argument
with an off-the-cuff snap statement or something only slightly more
thought-out, it is perfectly permissible to call them out as not being
serious. It's like going into a serious religion debate with "Evil exists,
therefore God does not. Ah-ha, you are defeated!", or "God must exist because
something must have started it all, so there!" as if in the thousands of years
the debate has been raging on _nobody has ever thought of those things_ , or
addressed them at length, in both directions.

If you're going to debate the singularity here, maybe you can get by with just
stating "I don't believe it's possible" without citing any logic -- as of this
writing, at least two people in this comment set have already done exactly
that without defense -- but if you're going to debate one of the leaders of
the field it would help if you would at least grant your opponent the courtesy
of thinking that _just maybe_ , over the course of the decades he's been
thinking about this, the obvious objections that you thought up in five
seconds _just might_ have been addressed at some point. You may not think
they've been adequately or correctly addressed, but don't pretend they haven't
been addressed at all.

Personally I'm not completely sold on the matter for a variety of reasons
myself, but the usual logic given for why you should be skeptical about it is
terrible. The interesting questions are a great deal more complicated than
something that can be dismissed with something that generally boils down to
"Look, I just can't imagine the world changing that much, so it won't".

~~~
ambler0
I think your comparison to arguments about religion is apt. Many of the
opponents to Kurzweil's ideas remind me of those whose opposition to the
possibility of a godless universe amounts to, "I can't imagine it, so I don't
believe it."

~~~
john_b
On the other hand, Kurzweil (at least in his essays and articles) often
ignores the question of what a fair null hypothesis is for the possibility of
the singularity. I think his gift for creating a compelling vision tends to
make people forget that the null hypothesis for a scientific assertion is
doubt.

Kurzweil provides both high level general evidence (like improvements in
computation) and low level, domain-specific evidence (like the discussion
about the pancreas) to support his claims, but none of that justifies the use
of the word "law" in "law of accelerating returns". He attempts an analogy
with thermodynamic laws and how they are derived from underlying statistical
principles, but there are no underlying fundamental principles of human
innovation and progress that are in any way comparable to the certainty and
universality of physical laws. This, I think, is why a lot of people (myself
included) have a hard time taking him seriously. He tries to apply the same
kind of formal analysis that works well in science to human beings and the
complex, highly non-scientific processes that underlie innovation today. The
bottom line is that, until the singularity occurs, human beings will still be
needed to build ever more complex and powerful systems, but human beings do
not progress at anything close to an exponential rate.

------
kenjackson
What big advances in linear programming have happened since 1988?

As api mentions in the comments here on HN, there are areas of work where
progress stopped. For example, passenger jet speed was once expected to keep
increasing rapidly, to the point that LA-to-Europe flights would take only a
few hours. Skyscraper height was expected to keep growing too, with advances
in various technologies and engineering methods making it desirable. Both hit
realities that significantly slowed their progress.

I tend to side with Allen on this. While we're bright people, I don't know if
I see us able to keep increasing computing power while keeping actual power
consumption reasonably low.

~~~
api
What passenger jet speed and skyscrapers really hit is economics: demand
limits to growth.

Most super-tall skyscrapers are economic disasters. There seems to be a
maximum _economically rational_ height to a skyscraper, and it's already been
reached. You _can_ build higher, but if you do you're wasting your money.

A human-level or beyond AI would probably be like the Burj Khalifa: an
economic disaster. Why build it when screwing and popping out babies is _far_
cheaper and already works? If you want to exceed human intelligence, it would
be a lot cheaper to augment human brains with external digital assistants
(like what you're using now) or implants than to re-engineer an entirely new
embodiment.

~~~
eavc
>Why build it when screwing and popping out babies is far cheaper and already
works?

Why build a word processor when pencil, paper, and a scribe is far cheaper and
already works?

It's about scale.

------
cpeterso
Here is a rough transcript of a Long Now Foundation talk by SF author Vernor
Vinge (coiner of the term "singularity") entitled "What If the Singularity
Does NOT Happen?". He sees:

  * Scenario 1: A Return to MADness (nuclear war)
  * Scenario 2: The Golden Age (peace and prosperity)
  * Scenario 3: The Wheel of Time (catastrophic natural disaster)

<http://www-rohan.sdsu.edu/faculty/vinge/longnow/>

------
politician
The Singularity concept strikes me as a sort of wishful thinking. Technology
advancing so fast that we no longer can control or understand it? Yeah, that
already happened to my parents' generation with AOL, yet here I am texting
this on my iPhone. New generations understand intuitively what the previous
generation understood theoretically.

Even so, I fully expect memristors to deliver strong AI.

~~~
0x12
> Even so, I fully expect memristors to deliver strong AI.

Why would they?

Is strong AI a function of storage capacity or speed?

An AI running at 1/100th of what a future AI may be capable of is still an AI
and I can't see how a mere improvement of a couple of orders of magnitude
would do what decades of Moore's law have failed to do so far.

If strong AI were just a matter of speed, then we could theoretically take any
of the large clusters available today and run the AI at some appreciable
fraction of its eventual speed, which would at a minimum validate that what
had been created was indeed a strong AI.

The barrier seems to be more that we don't know how to go about building one
from a software perspective than that we wouldn't have the capability to
design the hardware.

So how would an advance in hardware suddenly fix that?

~~~
modeless
_Is strong AI a function of storage capacity or speed?_

Yes, absolutely! I actually think the most appropriate benchmark is memory
bandwidth, which hasn't been improving as fast as FLOPS or storage capacity.
It's not a matter of running a strong AI at 1/100 speed on today's fastest
supercomputer. It would be more like 1 billionth or trillionth speed.

The reason for our disappointingly slow progress in AI over the years is that
our hardware is still nowhere near powerful enough to usefully implement the
same algorithms as the brain, and we likely won't even develop the right
algorithms until we have hardware closer to the requirements, so we can test
and iterate.
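For a feel of the gap being described, here's a back-of-the-envelope sketch. Every figure in it is an assumed order of magnitude, not a measurement from this thread; the point is how wildly the answer swings with the assumptions.

```python
# Back-of-the-envelope: how much slower than real time would a
# brain simulation run on circa-2011 hardware? All figures are
# assumed orders of magnitude, not measurements.

SYNAPSES = 1e14     # ~100 trillion synapses (a common estimate)
MACHINE_OPS = 1e16  # ~10 petaflops, roughly a top machine of the era

for updates_hz, ops_per_update in [(100, 10), (1000, 1e4)]:
    brain_ops = SYNAPSES * updates_hz * ops_per_update
    slowdown = brain_ops / MACHINE_OPS
    print(f"{updates_hz} Hz, {ops_per_update:g} ops/event -> "
          f"~{slowdown:g}x slower than real time")

# A cheap synapse model gives ~10x slower than real time; detailed
# biophysics at a 1 ms timestep gives ~100,000x -- before counting
# the memory-bandwidth wall the parent comment points to, which can
# add several more orders of magnitude.
```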

~~~
weaponofchoice
I'd really appreciate it if you could explain why we couldn't implement
algorithms similar to the brain's -- ones that possibly require a massive
number of simultaneous fetches and executions (guessing this is where memory
bandwidth plays in) -- but have the results show up much more slowly.

Shouldn't it be possible to have artificial AI mimicking human brain
algorithms at 1/100th the speed, where perhaps a single thought based on
learned information takes hours, instead of seconds?

~~~
modeless
You misunderstand. I'm saying we could, but the slowdown wouldn't be 1/100. It
would be more like 1/1 billion. At that speed, it would take years to simulate
a second of brain time. Not only would that be useless, it would be impossible
to know if you'd actually implemented it right without being able to test it
in a reasonable timeframe. That's why we'll only be able to develop brain-like
AI once our computers are much faster.
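Converting that slowdown into wall-clock terms is pure unit arithmetic; the one-billionth factor itself is the parent comment's estimate, not something derived here.

```python
# Wall-clock cost of simulating one second of brain time at a
# given slowdown factor.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # 31,536,000

def years_per_brain_second(slowdown: float) -> float:
    """Real years needed to simulate one second of brain time."""
    return slowdown / SECONDS_PER_YEAR

print("at 1/100 speed: 100 real seconds per brain second")
print(f"at 1/1e9 speed: {years_per_brain_second(1e9):.1f} "
      f"real years per brain second")
```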

~~~
weaponofchoice
Appreciate the reply. Seems I did misunderstand.

I find it hard to agree that, despite the nanosecond latency times and the
terabytes of throughput we can wring out of single computing devices (GPUs
etc.), we couldn't simulate brain-like AI faster than a billionth of what it
should be.

You're probably right though.

------
super_mario
Strong AI is not a hardware problem. It's not a matter of lack of
computational power. It is a software and modeling problem. If you had an AI
algorithm and a model, you could still run it on any Turing machine. It would
just take a lot longer (perhaps years or decades or more) to compute a single
thought on current hardware instead of real time or faster than real time on
some super fast future hardware.

There are people (like Roger Penrose) who argued that intelligence and
consciousness are not computational in nature (and hence no algorithm can be
conscious). Penrose goes all the way down to quantum mechanical effects in the
brain. I have not really followed developments on this and where Penrose's
argument currently stands.

~~~
spot
Nobody takes him seriously. He is just a Christian apologist who wraps it up
with quantum hoo-ha.

------
api
What about energy?

It's true that if you look at most areas of technology they are advancing
rapidly. Except energy. Energy has stagnated since the 1950s.

I'm on the fence on this issue, but there are many very intelligent and
knowledgeable people who are predicting a kind of anti-singularity: in the
21st century, fossil fuel depletion will send us way back, perhaps even
de-industrialize most societies.

Is our civilization simply a machine that is transferring the order (low
entropy state) in fossil fuels into order within itself (technology and
economic complexity), and when those fossil fuels run out will this ordering
process cease?

The lack of major breakthroughs in energy in the past 50 years is pretty
dramatic. Nuclear looked like an energy panacea once, but it's turned out to
be clunky and hard to scale. Solar panels and wind turbines are interesting,
but the problem with those is that we basically can't store energy. Energy
storage is either super-expensive per kilowatt-hour and not scalable (e.g.
Li-Ion batteries) or very inefficient (e.g. water electrolysis to hydrogen).

Without a breakthrough on the order of cheap ultra-capacitors or fusion, I'm
afraid we'll be seeing peak everything pretty soon, including technological
complexity.

The thing is: all the technologies of the "singularity" are energy consumers.
Where are the producers? What is going to power the singularity?

Then there's another area that makes me horribly pessimistic: politics. Most
of our societies are degenerating to banana republic levels of corruption.
Even if the energy problem is technically solvable, it seems to me that our
political systems may be set up to do the absolute worst possible thing in
this area: ride the fossil fuel crash into the ground in an orgy of war and
despotism.

~~~
Troll_Whisperer
> _What about energy?

>It's true that if you look at most areas of technology they are advancing
rapidly. Except energy. Energy has stagnated since the 1950s._

This is demonstrably false. Solar power generation, for example, has been
enjoying the same kind of Moore's Law-style exponential improvement in price
per watt over the past 15 years that computer processing power has. This is
hardly surprising, given that silicon wafer solar panels often use the same
semiconductor suppliers that computer hardware manufacturers do. Newer
thin-film solar panels represent a jump in paradigm that promises even greater
price performance.

I could have brought up similar points about the progress of wind power,
biofuel, or a number of other fields. Energy has anything but stagnated since
the 1950s.
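For concreteness, here is what a compounding decline of the kind being claimed looks like. Both the $10/W starting point and the 7%/year rate are illustrative assumptions, not figures from the comment.

```python
# Sketch of a steady exponential price decline (illustrative
# numbers only).

START_PRICE = 10.0     # $/W, hypothetical year-0 price
ANNUAL_DECLINE = 0.07  # assumed fractional decline per year

for year in (0, 5, 10, 15):
    price = START_PRICE * (1 - ANNUAL_DECLINE) ** year
    print(f"year {year:2d}: ${price:.2f}/W")

# At a steady 7%/year the price halves roughly every 10 years --
# that is what an exponential trend means here, as opposed to a
# one-off paradigm jump.
```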

tldr: Stay off the peak oil scaremongering sites. They'll blind you.

~~~
borism
_Newer thin-film solar panels represent a jump in paradigm_

isn't this what Allen said in his critique? scientific achievement doesn't
just grow exponentially - there are those "jumps in paradigm" that move us
forward, but they're relatively rare and unpredictable.

~~~
Troll_Whisperer
As Kurzweil pointed out, Allen hadn't even read his book. In it Kurzweil
provides copious volumes of data to support his claim that the overall trend
is still exponential. As one paradigm starts to run out of steam, there is
greater and greater research pressure to find the next. Much as vacuum tubes
improved exponentially until nearing their limit, at which point transistors
and later ICs took over, the same has been happening with energy.

For the past 400 years, human energy consumption per person has been growing
along a relatively smooth exponential curve, despite changes from wood-burning
to coal to whale oil to petroleum. Even a cursory unbiased study of the
subject will show that. Interestingly, for nearly the entire time, Malthusian
doomsday prophets have enjoyed more popularity than more rigorous analysts.

------
bh42222
_Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a
physical law." I would point out that most scientific laws are not physical
laws, but result from the emergent properties of a large number of events at a
finer level. A classical example is the laws of thermodynamics (LOT). If you
look at the mathematics underlying the LOT, they model each particle as
following a random walk._

Oh, that's a terrible point! Thermodynamic laws are nothing like predictions
about the future. I would have thought linguistic sleight of hand like this is
beneath Kurzweil.

 _Allen's statement that every structure and neural circuit is unique is
simply impossible. That would mean that the design of the brain would require
hundreds of trillions of bytes of information. Yet the design of the brain
(like the rest of the body) is contained in the genome._

The design of the human brain is not entirely contained in the genome!
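For scale, here is the arithmetic behind the magnitudes in that quoted exchange. The genome figures are standard round numbers; the bytes-per-connection count is purely an assumption made for illustration.

```python
# Information-content arithmetic behind the quoted argument.

BASE_PAIRS = 3.2e9  # human genome length, round figure
BITS_PER_BASE = 2   # four possible bases -> 2 bits each

genome_mb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e6
print(f"raw genome: ~{genome_mb:.0f} MB")

SYNAPSES = 1e14     # ~100 trillion connections
BYTES_EACH = 4      # assume a few bytes to spell out each uniquely

wiring_tb = SYNAPSES * BYTES_EACH / 1e12
print(f"explicit wiring diagram: ~{wiring_tb:.0f} TB")

# ~800 MB vs ~400 TB is the gap Kurzweil leans on. The rebuttal is
# that development (DNA plus environment plus feedback) generates
# the extra information, so the genome alone isn't the design.
```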

As soon as we mapped the human genome we were faced with a paradox: how come
the complexity difference between us and mice, for example, is NOT
proportional to the difference between our genomes?

Here's an article from 2002, "Just 2.5% of DNA turns mice into men":
[http://www.newscientist.com/article/dn2352-just-25-of-dna-turns-mice-into-men.html](http://www.newscientist.com/article/dn2352-just-25-of-dna-turns-mice-into-men.html)

In other words, if you look at just how the genomes differ, then humans and
mice ought to be a lot more similar than we are.

We have since come to find out just what a huge role the feedback-interactions
of DNA and its products, like proteins and all kinds of RNA, play in the
development of life.

This staggeringly complex feedback mechanism is why, despite the mapping of
the human genome, medical progress still remains excruciatingly slow. Much,
much faster than before! But not nearly as fast as we had hoped when the human
genome was first mapped.

 _Note that epigenetic information (such as the peptides controlling gene
expression) do not appreciably add to the amount of information in the
genome._

This is true in that they don't add much to the _genome_. But it is profoundly
wrong in that they do add _hugely_ to the actual resulting phenotype.

Kurzweil continues in this same vein for a while. I don't know if he has just
never bothered to look into the latest research or if his understandably
strong desire to not die has resulted in a huge confirmation bias.

When Kurzweil talks about the general trend of scientific progress I tend to
agree with him. But neither Paul Allen nor anyone else disagrees with the
notion that we will reach the singularity at some point in the future.

The argument is about the timing. And timing the future is like timing the
stock market: something I don't care to try to do.

But when Kurzweil attempts to convince the reader that the singularity is
near by using specific examples, that's when I start to disagree with him.
Because once he starts being specific, it becomes easy for me to see where he
is wrong -- factually, objectively wrong.

~~~
endtime
>Oh, that's a terrible point! Thermodynamic laws are nothing like predictions
about the future. I would have thought linguistic sleight of hand like this is
beneath Kurzweil.

Could you elaborate on this? I'm not a huge Kurzweil fan, but as far as I can
tell he's saying something reasonable here - that when he talks about LOAR,
he's describing a phenomenon rather than a physical process, and that this is
an accepted usage of the word "law". I don't think he's playing semantic
tricks so much as responding to a semantic complaint.

~~~
bh42222
Our understanding of thermodynamics is very thorough. It allows us to make a
plethora of predictions, all of which are falsifiable and have been thoroughly
tested over the years.

This is what makes our theories about thermodynamics real _scientific_
theories.

Predictions about the future, no matter how simple or based on long running
past trends, are only falsifiable in exactly one way: wait until the predicted
date passes.

I think a very, very informal use of the term "law" could cover both. But what
irks me as a science-minded person is that Kurzweil is attempting to equate
the informal meaning of "law" -- a generic description of a phenomenon -- with
the scientific "law", an actual testable, falsifiable theory with predictive
power.

~~~
hexagonc
No, Kurzweil was not equating the "Law of Accelerating Returns" to a physical
law of the universe. Instead, he was comparing it, albeit clumsily, to laws
that govern aggregate behavior, like the second law of thermodynamics. There
are a lot of things that we colloquially call "laws" that clearly don't have
the same footing as laws in physics, "Moore's Law" being one of them.

------
api
Another response:

Kurzweil also ignores economics. The advance of technology is driven in part
by economic forces. Computing power may stagnate not because we have reached
physical limits but because present-day computers are good enough for what 98%
of the market wants.

I see this trend developing. If anything, the trend in consumer computing is
toward _less powerful_ but lower-power and more portable computing devices. My
current laptop -- a MacBook Air -- is actually slower than my previous laptop.
But it is more portable and uses less energy. And it does everything I want. I
don't need more power right now.

The only areas driving the performance end are gaming, high performance
computing, and high-capacity data centers. How long will those go until they
too are basically satiated?

We've seen this in other areas. The envelope for aviation maxed out in the
1970s with things like the U2. Space flight seems to just now be emerging from
a long coma with things like SpaceX, but on closer examination SpaceX is just
reviving 1960s ideas and doing them at a lower cost with modern control
systems and materials technology.

My other reply about energy deals with supply-side limits to growth. This
response deals with demand limits to growth.

~~~
lupatus
What the market wants is every map, book, song, movie, game, and poem ever
made to be instantly accessible, searchable, and reviewable. They want their
work to be autosyncing, autobackedup, and to follow them from device to
device. They want to be able to securely talk with friends and family at any
time, to publicly talk with friends and family at any time, to discover new
friends and family, and to be able to completely disconnect from friends and
family at will. They want intelligent tools that keep them from making dumb
decisions, tools to help them make even better good decisions, and tools that
won't get in the way of them making dumb decisions.

I think that the market for computing power has a looong way to go before it
taps out what the market demands.

~~~
api
But how much do they want all this, and does this require major improvements
in computing power?

It looks to me that almost everything you listed could be done on the
computers of five years ago. It's all nothing but software improvements.
Existing hardware is good enough.

~~~
lupatus
Two years ago, I was developing some HVAC equipment modeling software as part
of a sales automation package whose worst case scenario needed to calculate
the max cooling capacity, and a few other thermodynamic stats, for roughly 18
million different configurations. No matter how much I optimized it, the CPU
didn't have enough juice to "instantaneously" plow through all of that math.
The best I could get it down to was about 20 seconds. More CPU power would
definitely be nice.
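The throughput arithmetic for that example works out as follows; the per-configuration cost is inferred from the two numbers given, not measured.

```python
# Throughput implied by the HVAC example above.

CONFIGS = 18_000_000
BEST_TIME_S = 20.0

configs_per_sec = CONFIGS / BEST_TIME_S
print(f"{configs_per_sec:,.0f} configs/sec")
print(f"{BEST_TIME_S / CONFIGS * 1e6:.2f} microseconds/config")

# "Instantaneous" (say, under 100 ms for the whole sweep) would
# need roughly a 200x throughput jump over that machine.
speedup_needed = BEST_TIME_S / 0.1
print(f"{speedup_needed:.0f}x speedup needed")
```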

~~~
api
But how much did that 20 seconds delay cost you? It would have been _nice_ to
have the answer faster, but how much would you have been willing to shell out
for it?

I'm talking about economics. I'm talking about what people are willing to pay
for.

~~~
lupatus
I am also talking economics.

The customer would have loved for it to go faster, because then it would have
looked like magic.

In 1998, it would have been nice if I had had photo-realistic computer games,
but I patiently made do with what I had. Today, people eagerly shell out money
for hardware that can run this year's even-more-photorealistic version of
Madden Football.

I think the demand for more power is definitely there.

------
shin_lao
Scientific progress is not just a question of intelligence.

Therefore, even if computers become more intelligent than humans, it is
doubtful a "singularity" will occur.

~~~
loup-vaillant
If not, then what?

I agree with Alan Kay when he says that IQ << knowledge << outlook. But all
three happen in the brain regardless.

Imagine we manage to build a machine that produces more insights than Newton.
That particular form of intelligence would be quite likely to trigger a
singularity, don't you think?

~~~
shin_lao
No I don't, because you still need time and a certain amount of randomness to
make discoveries.

Another way to put it is that even if you are twice as intelligent as Newton,
you won't discover twice as many things or discover things twice as fast.

~~~
weaponofchoice
Sifting through the day's top-list on the AI appstore... WTH's this? "Newton
AI. The power of a 1000 research assistants at the click of a button, and
they'll run all day tirelessly.". "99c launch deal, just for today -- get it
now!" Hmm. Click.

You're now waiting for it to download to your little AiPod that'll beam 'brain
bits' to your home-bots, that're now busy sketching out the next monalisa onto
a couple of shiny new dreamPads.

Why wouldn't those startup dudes down the street try and build a
beefier/faster "runs at 50x universe speed" AiPod for the AI platform you just
bought your Newton AI app for?

-------

Why wouldn't AI be able to simulate 'regular universe time/ human time'
faster? Why couldn't AI have stronger, more varied randomness?

The _bottleneck_ would be interactions that require a peek into _regular
universe time_: _live_ human input (phone calls, emails), weather, biological
data, etc.

------
suivix
It's startling to me that everything I do comes from 50 megabytes of source
code.
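One common route to a figure in that ballpark goes like this; the compression ratio is an assumption (the genome is highly repetitive), so treat it as a sketch of the estimate, not a derivation.

```python
# Sketch of how a "~50 MB" genome figure is commonly reached.

BASE_PAIRS = 3.2e9
raw_mb = BASE_PAIRS * 2 / 8 / 1e6  # 2 bits per base -> ~800 MB

COMPRESSION_RATIO = 16  # assumed, given heavy repetition
compressed_mb = raw_mb / COMPRESSION_RATIO
print(f"~{compressed_mb:.0f} MB after compression")
```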

~~~
archgoon
Running on a very complicated instruction set.

~~~
bermanoid
Let's say that you saw an executable file weighing in at 25 megabytes of
hand-coded assembly running on a modern day Core i7 chip.

Would you really assume that the source code that it would take to write such
a program would be over an order of magnitude different if you were targeting
a RISC processor instead? Now give yourself access to an expressive compiled
language, and estimate how much code it would take. Does the fact that you're
targeting RISC even matter anymore, algorithmically?

Unless you're assuming that the biological "instruction set" has some hard
coded primitives that make AI an easy problem, it literally doesn't matter at
all that the instruction set is complicated (or rather, it matters up to a
small constant factor), given that our programming constructs are vastly more
powerful than those available to neurons. It's the connectivity algorithms
that are important, and biology has absolutely no advantage there.

