
This is How Wrong Kurzweil Is - techdog
http://asserttrue.blogspot.com/2013/01/this-is-how-wrong-kurzweil-is.html
======
acabal
Honestly, fully-sentient machines in 16 years seems totally ridiculous on its
face to me, regardless of what kind of graphs and charts about technological
progress can be shown off. While I'm not really up to speed on AI research, in
the consumer space at least we've just barely achieved a semi-automated house-
vacuuming robot and fumbling speech recognition in call centers. Unless I'm
totally off-base about what's possible today, emotions and intelligence in a
decade and a half would seem to require breakthrough after breakthrough in a
vast number of nearly unrelated disciplines; plus there'd have to be the
commercial demand needed to fund it. (Space stations and flying cars may be
technically possible today, but nobody wants to pay for them.)

Kurzweil has invented a few cool things but today the guy is essentially a
talented sci-fi writer. While those kinds of people are important, I have no
idea why so many people take him so seriously.

~~~
dmauro
People take him seriously because he has a good track record at making
predictions.

~~~
nollidge
Plenty of people also don't take him seriously because lots of his predictions
turn out to be hand-wavey or quite obvious.

[http://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism](http://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism)

------
streptomycin
Less inflammatory title: These are some cherrypicked reasons why Kurzweil
might be wrong, ignoring reasons why Kurzweil might be right.

But that wouldn't make the front page.

Also, I'd be a little less critical if this article wasn't citing all this
quantum consciousness pseudoscience as one of the main reasons why Kurzweil is
wrong.

~~~
nollidge
Cherrypicked? He's describing how consciousness currently manifests itself.
Which cherries did he forget to pick?

Also, considering that much of Kurzweil's "evidence" is best-fit graphs of
cherry-picked data points, it's kind of ironic to lob that charge the other way.

> if this article wasn't citing all this quantum consciousness pseudoscience

Yeah, I guess if by "all this pseudoscience" you mean two mentions of two
very minimalistic claims. It's not like he's fabricating Deepak-Chopra-esque
word salad.

~~~
streptomycin
_Cherrypicked? He's describing how consciousness currently manifests itself.
Which cherries did he forget to pick?_

Any cherry that doesn't fit his preconceived conclusion. Kurzweil has
eloquently made his arguments in his books and elsewhere, and they are
summarily ignored here. And it's not like Kurzweil is some random crackpot
whom nobody supports; there are plenty of scientists who agree with him. For
instance, Henry Markram and his collaborators have made similar predictions
about how their work will lead to human-level AI in the relatively near
future.

 _Yeah, I guess if by "all this pseudoscience" you mean two mentions of two
very minimalistic claims. It's not like he's fabricating Deepak-Chopra-esque
word salad._

Two brief bullet points out of a list of six brief bullet points is a fairly
substantial chunk of the argument.

~~~
pknight
Why not supply some actual reasoning to make your points, such as "bullet
point X should be given little weight because of [evidence/arguments here]"?
Just calling things pseudoscience, or stating that he has lots of support
from other people, only implies that you're being defensive.

------
TeMPOraL
> But to say that we will see, by 2029, the development of computers with true
> consciousness, plus emotions and all the other things that make the human
> brain human, is nonsense. We'll be lucky to see such a thing in less than
> several hundred years—if ever.

2029 may be a bit early, but I think that actually to say it will take at
least several hundred years is nonsense.

Look at the timescales of scientific and technological progress. Pretty much
99% of all the knowledge and technology we use is less than 200 years old.
Most of it is less than 100 years old. We went from zero to space in a single
lifetime. And the progress is not steady, nor is it slowing down; it's
_accelerating_. One thing Kurzweil is definitely right about is that people
don't understand exponential growth. Or any superlinear growth, for that
matter.
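For a concrete sense of how counterintuitive this is, here's a toy Python
calculation (illustrative numbers only; the 2-year doubling period is an
assumption for the sake of the example, not a figure from Kurzweil):

```python
def capability(years, doubling_period=2.0):
    """Relative capability after `years`, assuming a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

# 16 years at a hypothetical 2-year doubling period is 8 doublings: 256x
# today's level. By the same token, today's level is 256x that of 16 years
# ago, which is why linear intuition badly underestimates exponential trends.
print(capability(16))  # 256.0
```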

------
Killah911
It's a bit presumptuous to think a sentient AI/machine has to replicate the
human brain. While there are complexities we still do not understand, there
are also many shortcomings/defects that we do not necessarily need to
replicate in a machine. One of our cognitive biases happens to be thinking
very highly of ourselves. Our brains are far from "perfect" (natural
selection and all).

While 2029 may be too early (note that it actually might not be; nature
doesn't exactly follow Moore's law, but so far electronics and computing do),
I doubt it will be several hundred years. The other thing to keep in mind is
geopolitical events, which may have adverse effects on scientific
development/progress. Expecting relative peace over the next 15 years is a
far safer bet than over hundreds of years. We might even lose some advances
by then.

~~~
abraininavat
Expecting an AI to replicate/model every detail of a brain is like expecting a
house to replicate/model every detail of a cave.

------
boothead
So after reading "On Intelligence" by Jeff Hawkins [1], it would seem that the
algorithm of the cortex is mostly figured out, and what's missing is only the
hardware capable of simulating enough neurons. One comment I read a while back
was that the description in the book is good as far as it goes, but there's
more to do with regard to making it a two-way process, i.e. the part where the
nervous system takes action to confirm a prediction (I probably didn't explain
that very well).

My question is: why would you want to build a human-like intelligence?
Wouldn't it be enough to build tools that simulate parts of our own
intelligence (pattern recognition, inference, reasoning, etc.) so that we may
augment what we're capable of? What purpose would it serve to include
emotions, feelings, and what makes us human?

I'd love to hear from people who actually know more about this stuff than
reading a couple of popular science books!

[1] <http://en.wikipedia.org/wiki/On_Intelligence>

~~~
luser001
Upvoted for the OI reference, which I found to be a remarkable book. I highly
recommend it.

But I wouldn't go so far as to say "it would seem that the algorithm of the
cortex is mostly figured out": my takeaway from the book was that a lot of it
was solid educated guesses and connecting of dots. E.g., the concept that the
brain stores and manipulates patterns and this is the key intelligence behind
all the senses, and the multiple "levels" of intelligence and how they connect
to each other.

I totally agree with your second paragraph (this is another point that is
made throughout the book). Jeff Hawkins did a good job (for me) of
distinguishing between intelligence and "human-ness". Recreating the human
cortex will still leave us _far_ _far_ away from creating an artificial human
brain.

------
eterpstra
Whenever I see these Kurzweil naysayer articles, I always feel like they are
overly pessimistic, or short-sighted. Just because problems seem
insurmountable now doesn't mean that they will be in a few years. People
always seem to forget about the exponential growth of information technology.
If you look at an insurmountable problem in the context of today's technology
and speed of discovery, then yeah, it will be tough to believe that we'll have
sentient machines any time soon. But please do not forget that tomorrow's
technology will be exponentially better, faster, stronger than what we have
today. In five years, the problems outlined in the article may seem like
child's play.

~~~
BruceIV
The hardware may be exponentially better, faster, stronger, but you've still
got the same people writing the software. I don't buy the whole "Moore's Law
will give us strong AI" argument; it just feels a bit like Asimov-era sci-fi's
"nuclear power will fix everything".

~~~
BruceIV
To provide a more concrete objection, I'll say that a human-level AI would
have to have a general and expandable framework for _learning_, and I don't
think we really have any idea how to build one. Watson is cool and all (and
probably the best current counterexample to my point), but it needed a lot of
domain experts with doctorates to set it up to "understand" problems in a
relatively limited field.

------
pbw
Kurzweil's Discover article says if we build a sufficiently complex machine it
will be conscious as fully as humans are conscious. And he says we might do
this by 2029.

This blog post says there are all sorts of low-level physical processes in the
brain like detailed interactions of neurotransmitters and calcium ion channel
dynamics which are super complicated and scary. And it will take hundreds of
years to figure it all out and replicate it.

The answer is we don't have to replicate the human brain to create a system
complex enough to exhibit consciousness. Modeling is not simulation, we can
model a system after the human brain without simulating every last detail.

~~~
nathan_long
>> Modeling is not simulation, we can model a system after the human brain
without simulating every last detail.

Maybe so, but I think the author is arguing that those details are part of the
emergent whole; in other words, without them, you might get an intelligence,
but a very alien one.

~~~
pbw
Thomas waves his hands and says "the brain is really complicated". He doesn't
address how much complexity needs to be modeled, but he implies a lot of it
does. Kurzweil says a "sufficiently complex" program will exhibit
consciousness. So the rub is exactly which properties of the brain are
essential, and which are accidental or can be implemented some other way.

------
twoodfin
Is there really a strong argument that human cognition and/or consciousness
relies on quantum effects? (That is, stronger than the vacuous argument that
_everything_ relies on some properties of a quantum mechanical system,
including our computers, by virtue of being made of matter.)

I seem to recall Roger Penrose's argument to that effect a couple of decades
ago being soundly criticized.

~~~
dekhn
No, there is not. The linked article is vacuous, and the quantum consciousness
crowd has not yet come up with a testable hypothesis that explains any
observed data better than simpler models do.

------
Udo
We're talking about a guy who believes he will be able to literally resurrect
dead people by feeding primitive personality data into an AI which in turn
would emulate the deceased. So, yes, he's been wrong for a long time. He has
always been simplifying issues to the point where they become absurd parodies.
This is not new.

Having said that, I'm glad he's around. AI and transhumanism are in need of a
skilled public advocate, which he is.

------
jimmytucson
While I instinctively agree with the author's conclusion, I have some
questions about his argument.

1) The foundational assumption is that, by "sentient", Kurzweil means "Homo-
complete":

> Because of some of the criteria Kurzweil has set for sentient machines
> (e.g. that they have emotional systems indistinguishable from those of
> humans), I like to go ahead and assume that the kind of machine Kurzweil is
> talking about would have fears, inhibitions, hopes, dreams, beliefs, a sense
> of aesthetics, understanding (and opinions about) spiritual concepts, a
> subconscious "mind," and so on.

I'm not sure how Kurzweil feels about that, but to me the whole point of
creating sentient machines is to achieve human-like intelligence without all
the hindrances and idiosyncrasies that come with being a human. I'm thinking
of something like Data (or Spock) from Star Trek. Are these guys not
"sentient"? I know they're not real, but they don't seem outwardly implausible
or self-contradictory.

Furthermore, wouldn't some humans fail to qualify as "sentient" (or, at least,
"Homo-complete") based on this definition? What about early homo sapiens? Do
we know for sure they had the capacity for schizophrenia or autism? Is it at
least conceivable that very early humans lacked the capacity to love in the
same way as modern humans? How about future generations? If they find a cure
for depression, won't tomorrow's pill-popping, ultra-content humans fail to
qualify as "sentient"?

2) Even if you grant him this assumption, it seems like the author is
essentially saying that the only way to be "Homo-complete" is to be a "Homo"
(pardon me). That may very well be true and I think the author makes a
compelling argument that it is. But I still don't see why being a human is the
only way to be "sentient".

------
ricardobeat
So in addition to serifs being worthless now Kurzweil is completely wrong!
This guy is going to debunk all the truths we hold dear.

------
redwood
It is cool to learn about dendrites:
[https://docs.google.com/viewer?a=v&q=cache:FIejhf3q_y4J:...](https://docs.google.com/viewer?a=v&q=cache:FIejhf3q_y4J:www.columbia.edu/cu/biology/courses/g6002/2003/Euler-Denk.pdf+dendritic-dendritic+processing&hl=en&gl=us&pid=bl&srcid=ADGEESilRTCj0U6DwT-n--B1QlJo2iUwgo5K1rPheFccDm0c96g48xFlPZoVlJMw00bUeotH6uSEPq_HPrrRjH7DHhK_5-l-zVcwXkgF4-ZWvxF6bGMGFDSsa5-MLyyVd_cxTytyvKAa&sig=AHIEtbRUBN6Mjv0uPjCpkx362m0_7KXeww)

I see now, more intuitively, how the brain's nonlinear triggering might come
about, and how plasticity can be achieved.

Isn't it worth noting, however, that the singularity may not require 100%
replication of the human brain, but rather replication of aspects of it? In
other words, mental illness should not be viewed as a requirement... it should
be avoided, if possible, in such modeling, assuming we're developing these
models to help run our infrastructure.

Creating an exact replica of a human brain on a machine seems an unnecessary
goal. What we want is a machine that can think like us but orders of magnitude
faster/broader. That doesn't mean we want it to think like us in every sense,
but rather to accomplish specific goals: like the end of poverty.

------
sovande
"imagine that you are the subject of some mental event--perhaps you are
experiencing an intense pain. Now imagine that one of your neurons is replaced
by a silicon chip prosthesis that has the exact same input/output profile as
the neuron it replaces. At the core of this thought experiment is the
presumption that such a replacement would be unnoticeable to you or to anyone
observing your behavior. Presumably, you would continue to experience pain
even though the physical realization of those mental events includes a silicon
chip where an organic neuron used to be. Now imagine that, one by one, the
rest of your neurons are swapped for silicon prostheses. Presumably there
would be no change in your mental life even though your brain, which was once
made of lipid and protein neurons, is now entirely composed of silicon
neuronoids." This thought experiment, first presented by Pylyshyn (1980), does
not seem too far-fetched today.

------
nathan_long
TL;DR: we are very, very far from understanding the human mind, much less
simulating it.

~~~
nathan_long
I have always thought that SF accounts of intelligent robots were far too
hand-wavey.

I predict we'll never have machines with minds like humans. They will always
exceed our abilities in certain tasks for which they've been programmed, but
will never pass a long-term Turing test; for example, you'll never mistake a
bot for a human remote programming colleague, with whom you chat and
collaborate on features every day.

I take it as an axiom that the human mind cannot understand the human mind
fully, for the same reason that a box cannot contain another equal sized box.
The "singularity" requires a series of boxes, each of which contains a bigger
one.

~~~
upquark
Out of curiosity, why would you take that as an axiom? Do human minds possess
some qualities above physical reality? Are they unknowable in principle? If
you answer yes, how did you arrive at that answer? Isn't there a physical
experiment or demonstration that would change your mind?

I think the box analogy is flawed. We have certainly come a long way in
understanding our own brains and bodies, even in the past couple of decades,
and you could've said the same thing about boxes when people were first
venturing into e.g. neuroscience.

The model doesn't have to be entirely contained in a human mind either;
that's why we have a series of tools to aid us in modeling new tools, layer
after layer. A human mind cannot contain a complete specification/working
model of a modern computer either (if you think of all the layers involved),
but modern computers do exist.

~~~
martinced
Those would be philosophical questions.

In addition to that, human minds may possess some qualities above _our current
understanding of physical reality_.

And wouldn't the experiment "proving" that we understand it all, precisely, be
the creation of an AI equaling or surpassing the human mind?

~~~
jerf
"human minds may possess some qualities above _our current understanding of
physical reality_."

While you are free to believe that, you should be aware that you've chosen to
leave science and logic behind, and expect to be treated as such. I do not
mean that in the sense that you are therefore wrong, or that that is
necessarily a bad thing; we must all make choices that leap beyond the light
that science can cast, as the light is quite limited in some ways. But
nevertheless, you have explicitly tossed anything that can be discussed in
concrete terms overboard.

Usually I'm saying this in the context of whether we'll ever have FTL. Yes,
you are free to believe that the next supercollider will produce some bizarre
result that turns out to be something we can harness for FTL... but meanwhile,
in the real world, the evidence gets ever tighter that it is impossible. If
you wish to take a flight of fancy, fine, but please be aware that is what you
are doing. And do not mistake choosing to toss rationality overboard for
rational argument; you can't have it both ways.

------
albertzeyer
> ... not only computational capabilities but all the things that make the
> human mind human. .. this requires a developmental growth process starting
> in "infancy." A Homo-complete machine would not be recognizably Homo
> sapiens-like if it lacked a childhood, in other words. .. have the potential
> of becoming depressed, .. compulsivities, .. panic, .. addictions, ...

I don't see all that. I don't see why a strong AI would necessarily be equal
to humans in emotions, or even develop similar behavior.

I haven't read any books by Kurzweil though, so I cannot tell if he makes that
requirement on a strong AI.

------
jamieb
Other great impossibilities: sequencing a human genome in 15 years.

Cost of first human genome: $2.7b

Cost 20 years later: $250.

I think 2029 might be pushing it, but neither would I be surprised. Strong AI
is always "20 years away". One day it won't be.
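For what it's worth, those two cost data points pin down the implied halving
time, if you assume a smooth exponential decline (a big assumption; quick
back-of-envelope in Python):

```python
import math

# jamieb's figures: $2.7B for the first genome, $250 twenty years later.
first_cost, later_cost, years = 2.7e9, 250.0, 20.0

halvings = math.log2(first_cost / later_cost)  # ~23.4 cost halvings
halving_time = years / halvings                # cost halved roughly every 10 months
print(round(halvings, 1), round(halving_time, 2))
```

That's a faster halving time than classic Moore's law, which is part of why
sequencing cost is such a popular exponential-progress example.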

~~~
rafcavallaro
That presumption, that strong AI is inevitable, is precisely the point in
contention. The fact that it has been a decade away for nearly a half century
argues against its inevitability, not for it.

------
calinet6
This is spot on, and something I have thought for a long time without the time
or ability to elucidate it so clearly.

The complexity of the human brain is not just difficult, not just complicated
or hard to understand, it is _several orders of magnitude beyond our ability
to comprehend_. IMHO we will need a fundamental leap in our own intelligence
before we have the means to comprehend even how the brain works, and one more
leap to understand how we might reproduce its behavior.

It's a long way off. People who believe otherwise are doing just
that—believing.

~~~
kordless
The bits about the brain using quantum computing are only theories. TMK there
isn't any evidence of these theories being true or false.

I would hesitate to use the term 'spot on' when discussing something we know
so little about.

What we do know is that by 2030, a standard CPU will be doing somewhere in the
neighborhood of 10^19 computations per kWh. That capability, presumably on a
single chip, will rival or exceed the brain's computational ability, perhaps
by an order of magnitude or two.

What we do with that horsepower is up to us coders. Maybe we'll code up a
brain. Maybe we'll figure out how to sell you soap faster.
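As a sanity check on the comparison, here's a back-of-envelope using commonly
cited (and heavily contested) figures, not numbers from the comment above:
Kurzweil-style estimates put the brain around 10^16 operations per second on a
roughly 20 W power budget.

```python
# Assumed, order-of-magnitude figures only:
brain_ops_per_sec = 1e16   # a commonly cited (and disputed) estimate
brain_watts = 20.0         # rough metabolic power of the human brain

joules_per_kwh = 3.6e6
seconds_per_kwh = joules_per_kwh / brain_watts   # one kWh runs a brain ~50 hours
brain_ops_per_kwh = brain_ops_per_sec * seconds_per_kwh

# ~1.8e21 ops/kWh under these assumptions, i.e. a couple of orders of
# magnitude beyond a 10^19 ops/kWh chip.
print(f"{brain_ops_per_kwh:.1e}")
```

So under these assumed figures the brain's energy efficiency would still beat
a 10^19 ops/kWh CPU; the comparison turns entirely on which ops-per-second
estimate for the brain you believe.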

~~~
Strshps1MoreTim
Uff, first, a theory is not the same as a hypothesis; second, if quantum
computation in brain microtubules is real, 10^19 computations per kWh is not
going to be nearly enough.

------
axelav
Ellen Ullman's "Programming the Post-Human" from the October 2002 issue of
Harper's sheds some really interesting light on AI research, how the brain
functions & what it means to be human. Definitely worth a read if you're
interested in the subject, and it's available for free on Harper's site:

[http://harpers.org/archive/2002/10/programming-the-post-huma...](http://harpers.org/archive/2002/10/programming-the-post-human/?single=1)

------
martinced
People really ought to read "On Intelligence" if they want to understand how
wrong the entire academic AI field is.

It's not just the approach that is wrong. It is the entire mindset that is
wrong.

At the same time, people in this field are so wrong that "machines as
intelligent as humans" may not be that far off, because we really cannot be
that smart if so many experts are so wrong in such a field ; )

~~~
yor
But "On Intelligence" is a subset of "the entire academic AI field". And, for
all its variance from the norm in terms of opinion, it's fairly typical of the
"academic AI mindset".

