

Leading Neuroscientist Says Kurzweil Singularity Prediction A “Bunch Of Hot Air” - joe_the_user
http://singularityhub.com/2013/03/10/leading-neuroscientist-says-kurzweil-singularity-prediction-a-bunch-of-hot-air/

======
blackhole
The part of the article that bothers me here the most is a _leading
neuroscientist_ asserting that the brain is not computable. That is
demonstrably false: the brain is a quantum system, just like everything else
in the universe, and any quantum system of n qubits can be simulated with 2^n
classical amplitudes. It may very well be _impractical_ to compute a decent-
sized brain, but that's still technically _computable_, which is a serious
problem for his whole "you can't reduce human nature to a computer algorithm"
tirade. Yes, you can. It will be impossibly huge and slow, but you _can_.
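
To make the "2^n" claim concrete, here is a minimal sketch of a state-vector
simulation (Python; the function and variable names are my own illustration,
not any real library): n qubits are represented by 2^n complex amplitudes,
and each gate becomes a small matrix update over that array.

```python
import math

def apply_gate(state, gate, target):
    """Apply a 2x2 gate to the `target` qubit of a state vector."""
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> target) & 1        # target qubit's value in basis state i
        base = i & ~(1 << target)      # index with the target bit cleared
        for b in (0, 1):
            new[base | (b << target)] += gate[b][bit] * amp
    return new

n = 3                                  # 3 qubits -> 2**3 = 8 amplitudes
state = [0j] * (2 ** n)
state[0] = 1 + 0j                      # start in |000>
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]                  # Hadamard gate
for q in range(n):                     # uniform superposition over all 8 states
    state = apply_gate(state, H, q)
```

The cost is the point: the state vector doubles with every added qubit, which
is exactly why this is computable in principle but impractical at brain scale.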

~~~
diminoten
What you're saying is mere postulation; it's not "demonstrably" true. Not to
mention your use of the term "impractical" is probably the most misleading use
of that word I've read today.

You're writing off _thousands_ of years of philosophy and decades of
experimental results in maybe a paragraph, if I'm being generous.

~~~
pwny
It actually IS demonstrably true that the brain is made of atoms. That makes
it a quantum system, which our modern physics tells us how to simulate. The
use of the term "impractical" here refers to the fact that we don't have
quantum computers and that we don't have the computing power required to
simulate such a large quantum system.

I'd say dismissing thousands of years of philosophy here is warranted, since
it was largely produced before we had the tools to understand what we're
dealing with. All experimental physics points towards an understanding of how
quantum systems work, and that's all we need to model any quantum system. The
brain is no different just because it's a brain.

~~~
dododo
quantum theory is a hypothesis. perhaps it's the best one so far for what the
brain is made from. but its truth is only as demonstrable as this: it has not
yet been falsified.

~~~
pwny
You can say the exact same thing about any scientific model. My view on this
has always been that as long as the model accurately predicts experimental
results, assume it is correct for your calculations until it is proven to be
wrong.

Even then, we never stopped using classical mechanics even though it was shown
to be wrong at a variety of scales. It just happens to very closely
approximate reality in some contexts and is useful.

The fact of the matter is, we have tools that are correct as far as we know
and they point towards thinking that every quantum system is computable. Until
this has been proven wrong, the fallacy is believing the brain is different,
not the other way around.

~~~
diminoten
Theories not only have to predict outcomes of events, they must also be
falsifiable (and must explain something thus far unexplained by other
theories; you can't just recreate gravitational theory, for example).

You are, by your own admission, working with an incomplete understanding of
how a scientific model functions. So I ask you, why should you be even
commenting on this topic? Why should anyone take what you have to say
seriously on this specific topic?

~~~
pwny
So no one should be commenting on this topic unless they have a perfect
understanding of scientific theory? That seems terribly counter-productive.

I'm commenting on this topic to share my opinion and, to the extent of my
knowledge, try to explain why I believe someone else's reasoning is flawed.

Now if you believe my reasoning is false, you're free to call that out. You're
not free, however, to dismiss my contribution to the discussion simply because
I'm not operating under perfect understanding of a field that isn't mine.

Call it out, explain why, participate in the discussion, and drop the personal
attacks. I think at least part of my point is valid, even after what you
pointed out.

~~~
diminoten
I'm pretty sure I _am_ free to dismiss your contribution "simply" because you
don't know what you're talking about.

But let's not get caught in the weeds here; I don't think you're correctly
conveying the level of certainty with which we understand quantum mechanics.
There's a _ton_ we don't have the slightest idea about in this area of
science, so let's not forget that.

------
nawitus
>“Fallacy is what people are selling: that human nature can be reduced to
[something] that [a] computer algorithm can run! This is a new church!”

Actually, the "new church" is that human nature is something more than
physics. Even if Kurzweil is wrong in his predictions, certainly this
neuroscientist is _also_ wrong with his metaphysical beliefs that the brain is
something more than a mere "machine".

~~~
jamestc
He doesn't suggest that human nature is more than physics; he is saying that
consciousness isn't computable. If neural subsystems are hierarchically
arranged and exhibit internally generated chaotic dynamics, there is the
problem of long-term prediction and causal inference.

>It may seem paradoxical that a deterministic phenomenon is inherently
unpredictable, but in systems that exhibit chaotic behavior, small
uncertainties are amplified over time by the nonlinear interaction of a few
elements. The upshot is that behavior that is predictable in the short run
becomes intrinsically unpredictable in the long term. As a result,
physiologists cannot make strict causal inferences from the level of
individual neurons to that of neural mass actions, nor from the level of
receptor activity to internal dynamics. The causal connection between past and
future is cut.

<http://sulcus.berkeley.edu/freemanwww/manuscripts/IC13/90.html>
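
The amplified-uncertainty point in the quote is easy to demonstrate with even
a one-line chaotic system. A toy sketch (Python; my own illustration, not from
the linked paper) using the logistic map:

```python
# Logistic map x -> r*x*(1-x) in its fully chaotic regime (r = 4).
r = 4.0
x, y = 0.3, 0.3 + 1e-10   # two initial states differing by one part in 10^10
max_gap = 0.0
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
# The tiny initial difference roughly doubles each step, so the trajectories
# are macroscopically different long before step 60: predictable in the short
# run, intrinsically unpredictable in the long term.
```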

~~~
Tichy
That seems irrelevant to making a working brain thingy. We don't want to
predict how another given brain will evolve, we want to mimic intelligence.
Your brain evolves differently than mine, yet we both are intelligent (I
think). I don't need to predict your thoughts to have thoughts of my own.

Perhaps the randomness is even necessary because otherwise some situations
could never be resolved (like the classic who should go first to go through a
door - after you - no, after you...).

------
Lerc
> “the brain is not computable and no engineering can reproduce it.”

If you assume this is true, then the brain is a physical entity which cannot
be constructed by mechanisms that manipulate physical objects.

The question arises: "Why is a body growing a brain not a feat of engineering?
What did the body do that similarly sophisticated machines cannot?"

~~~
s_baby
I believe the argument is that consciousness is not a causal product of the
relations represented in the brain's structures but a property of the
substrate.

Consider that a brain model on a Turing machine is equivalent to a
sticks-and-stones model running the same program. Do you believe some person
moving a bunch of sticks and stones around can produce conscious experience?

------
Tichy
There are also people who believe in god and all other sorts of things.
Doesn't mean they should be taken seriously.

~~~
OGinparadise
God means different things to different people, from one that watches
everything you do, to one that maybe is a force out there, like gravity.

~~~
vor_
A physical force like gravity? Why refer to it as a god then?

~~~
OGinparadise
Here's another belief <http://en.wikipedia.org/wiki/Pantheism>

Why call it God? Because it means different things to different people and no
one owns the word. <http://en.wikipedia.org/wiki/God#Other_concepts>

------
bermanoid
If the first paragraph is correct and the crux of his argument is that the
brain is not computable, then there's nothing to see here. Once you assume
that, then of course you don't think it can be simulated.

That this further devolves into meandering about consciousness just says to me
that even at the point where Kurzweil's singularity has already happened, he
wouldn't call it AI. Yes - if you take it as an axiom that humans have special
sauce that you can't reproduce with an algorithm, AI is impossible.

------
pixl97
INSUFFICIENT DATA FOR MEANINGFUL ANSWER

<http://filer.case.edu/dts8/thelastq.htm>
<http://en.wikipedia.org/wiki/The_Last_Question>

~~~
jasonkostempski
I keep asking Siri "Can entropy be reversed?" hoping she'll respond with that,
and asking Google hoping it'll show just that answer, no links. Wolfram gets
it.

------
DanielBMarkham
When Newton came up with his universal law of gravitation, he also made an
interesting observation. He had no idea _why_ gravity worked the way it did,
he could only model its behavior numerically.

It's been over 300 years and we're still trying to figure out exactly how
gravity works. But in a way it doesn't matter: his law has given humanity
tremendous abilities it didn't have before.

When Watson won the Jeopardy contest a couple of years ago, it was obvious
that it wasn't intelligence in the sense that we commonly understand it. Yet
it was able to beat the human champions. Watson wasn't a model of a human
player, but it didn't matter, because for the purposes of its construction it
performed just as well as one.

My money says the singularity happens the exact same way -- we are able to
"fake" more and more things that look exactly like intelligence until one day
we're able to fake intelligence to a degree that it's virtually
indistinguishable from our own. We're eventually able to do something that
looks like moving our sentience into a computer even though it "won't really"
be doing it.

My guess is that we're hundreds of years away from that date, but whether it
happens or not, the fact that an expert right now in the complexity of the
underlying physical system has an opinion on the computational problem that
probably won't be solved until after 2100 doesn't seem to me to be very
relevant. Of course it's complicated. Of course we don't understand it. And of
course we can't duplicate the structure of things we don't understand. I
believe for any layman in the field all of that goes without saying?

~~~
Retric
The "singularity" is never going to happen because it assumes intelligence is
a single thing that can be increased with more processing power and better
algorithms, completely ignoring things like information theory, overfitting,
insanity, training data, and chaos theory.

We can and probably will build reasonably general AI, but you're far more
likely to see each generation of AI being an ever smaller improvement than any
sort of runaway exponential progression. Not to mention hardware progress has
slowed to a relative standstill.

~~~
joe_the_user
Suppose one has the means to build a flexible general AI with modern silicon
technology. How could this not result in a thing whose abilities could then be
continuously increased at the rate regular silicon technology increases?

The whole "diminishing returns will stop us" just seems like a comforting
fairy tale for those who don't want to think about the consequences (which I
suspect a bit of thought shows won't be as rosy as Kurzweil imagines).

Edit: Hardware progress is still mostly following "Moore's Law". The only
thing that's not increasing is processor speed. But if we build a "reasonably
general AI", how could that box's capacities not be increased by tight
integration with other similarly intelligent boxes?

~~~
Retric
When a dumb AI comes up with the correct answer, using a smart AI is not going
to give you a better answer; the old one is still correct. However, it could
give you a wrong answer for whatever reason. As to diminishing returns, there
is a window between a self-driving car that's better than people and a
'perfect' self-driving car, but that's clearly a diminishing-returns
situation: if it's already giving you the perfect answer 99.99% of the time,
there is not a lot of room for improvement.

PS: S-curves often look like exponential curves, but the real world has real
limits, so you can't have unlimited exponential progression of any type,
period, end of story. And it looks like we are on the down slope when it comes
to transistors: "Nvidia deeply unhappy with TSMC, claims 22nm essentially
worthless"
<http://www.extremetech.com/computing/123529-nvidia-deeply-unhappy-with-tsmc-claims-22nm-essentially-worthless>
And that's for video cards, which are embarrassingly parallel.
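
A toy illustration (Python; my own sketch, not from the article) of why an
S-curve is easy to mistake for an exponential early on: the growth ratio per
step starts out constant, like an exponential, then collapses toward 1 as the
limit bites.

```python
import math

def logistic(t, limit=1.0, rate=1.0):
    """S-curve: limit / (1 + e^(-rate*t)); looks exponential for t << 0."""
    return limit / (1.0 + math.exp(-rate * t))

# Per-step growth ratio early in the curve vs late in the curve:
early_ratio = logistic(-6) / logistic(-7)  # close to e, like pure exponential
late_ratio = logistic(6) / logistic(5)     # close to 1: saturation
```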

------
kmfrk
Another good rebuttal of Kurzweil's theory of the mind:
<http://www.nybooks.com/articles/archives/2013/mar/21/homunculism/?pagination=false>

------
JohnsonB
>Addressing fellow scientists, he dismissed the singularity as “a bunch of hot
air,” and went on further to declare _that “the brain is not computable and no
engineering can reproduce it.”_

wat.

------
cwzwarich
The human mind is one of the few things that we encounter during everyday life
that we may not even have the right conceptual framework to understand.

It is an open question whether the physical processes of the brain can be
simulated by a computer, and it is even an open question whether the physical
processes of the brain account for the full range of human conscious
experience. I look forward to seeing this field evolve during my lifetime, but
significant progress may continue to elude us.

------
alexvr
We're just waiting for a really smart simulation programmer with access to a
super-powerful computer to create a simulation of a small, earth-like world so
we can watch as accelerated evolution creates some human-like (or not) being
with a similar brain. I think we can write an algorithm for an AI that can
pass the Turing test, but I think serious AI that will precipitate the
singularity will come as a result of evolutionary algorithms and very powerful
computers.

~~~
Filligree
That would take a seriously powerful computer.

I remember reading a short story along those lines... It ended with the
simulated beings hacking physics and dropping off into a pocket universe,
without helping _at all_.

Well, I guess they didn't kill us all. Might have been written by Greg Egan.
Sound familiar?

~~~
jokermatt999
Yup, Crystal Nights by Greg Egan

<http://ttapress.com/553/crystal-nights-by-greg-egan/>

~~~
Filligree
Right, that.

Border Guards probably counts as a sequel. I hope we'll get a novel-sized
story set in that universe, eventually.

------
joe_the_user
Some of this actually fits perfectly into the framework of Kurzweil's The
Singularity, in the sense that you could argue that most neuroscientists are
too focused on the complexity of the brain and aren't anticipating the
progress that may be possible through rapid evolution of _tools for looking at
the brain_.

That said, I think Kurzweil's plan for constructing an intelligence actually
oversteps his basic approach. Tools are advancing on multiple fronts. Not only
does that give us multiple ways a singularity could happen (from brain-
simulation to simplistic-but-massive-AI to clever-AI to bio-computers), but
the multiple advancing fronts could work around each other's walls (a
sufficiently sophisticated computer could make brain processes look less
opaque, etc.).

Personally, I'd say Paul Allen's counter-argument, which I recall as boiling
down to the inherent limits of human-produced software, is the most plausible
counter-argument.

~~~
zmmmmm
> Personally, I'd say Paul Allen's counter-argument, which I recall as boiling
> down to the inherent limits of human-produced software, is the most
> plausible counter-argument

I often wonder if the ultimate end counter argument may be that the same
things that make us "intelligent" are also those that give us our human flaws
- exactly the same things that we were trying to avoid in the first place by
using computers. Perhaps we can't have human-like intelligence without also
being forgetful, inaccurate, selfish, lazy, irrational, greedy, angry, sad
etc. If someone did invent a computer with all those attributes, would it be
useful?

~~~
Filligree
Most of those attributes turn out to have perfectly good explanations in
evolutionary psychology.

That being the case, there's no reason to expect a mind that isn't produced by
the same process to possess them.

~~~
zmmmmm
> there's no reason to expect a mind that isn't produced by the same process
> to possess them

True ... but then will those minds be able to perform the feats of
intelligence that we hope for from the "singularity"? Will a mind unable to
put aside the fact that valves can make a TV set run, unable to dream,
imagine, or love, and lacking the motivation of greed and competition with its
peers - will that mind be able to discover the transistor as an alternative?
Perhaps our evolutionary psychology is part of the reason we exhibit what
intelligence we have, not just an unnecessary relic of our past?

~~~
Filligree
> Perhaps our evolutionary psychology is part of the reason we exhibit what
> intelligence we have, not just an unnecessary relic of our past?

Dare we take the risk that it isn't?

------
dreamfactory
Is this any different from
<http://en.wikipedia.org/wiki/The_Emperors_New_Mind>?

------
snambi
it may be more accurate to say "parts of the brain of an average human may be
computable". Of course, I have no data to prove it either way. There are
incredible human beings, past and present, whose brains are faster than the
fastest computers put together. But is simulating super-humans like Ramanujan
even possible?

~~~
pwny
Ramanujan isn't different from you or me in that regard. His brain was an
assembly of neurons connected via synapses. Nothing points towards this
assembly being any different from any machine, regardless of his achievements
in the field of mathematics. The fact that we don't yet fully understand it
doesn't mean it transcends physics; that would be absurd.

------
eli_gottlieb
Oh good, a few more years until I'm coerced into uploading my brain into
bizarre-o-land by someone's deliberately-create-the-singularity-project gone
horribly right.

