
The Singularity is Far: A Neuroscientist's View - caustic
http://www.boingboing.net/2011/07/14/far.html?dlvrit=36761
======
JunkDNA
I see a big parallel with the predictions for advances in neuroscience with
all the predictions that were made prior to the sequencing of the human genome
(the author touches on this a bit too). Lots of smart scientists really
believed that once the human genome was sequenced, we would have the keys to
the biological kingdom. What has actually happened is that we have discovered
the system is probably an order of magnitude more complex than previously
thought. Knowing the sequence of a gene turns out to
be important, but a pretty minor factor in explaining its function. Plus we
are learning that all sorts of simple rules we thought were true aren't always
the case.

I suspect a similar thing is playing out in neuroscience. As we peel back the
layers of the onion, ever more complexity will be revealed. The things Ray
Kurzweil predicts may well come true. He is a brilliant guy. But the timetable
is very optimistic.

The march of biological progress is very slow, in part because all the
experimentation involves living things that grow, die, get contaminated, run
away, don't show up for appointments, get high, etc. Lots of people from
other scientific disciplines, especially engineering-related ones,
underestimate just how long even the simplest biological experiments can take.

~~~
skarayan
"Lots of smart scientists really believed that once the human genome was
sequenced, we would have the keys to the biological kingdom."

Here's my (a computer scientist's) view on the matter:

Imagine that you have a relatively complex computer system written with object
oriented principles. Now, imagine that you are looking at the binary
representation of this system and trying to make sense out of the whole thing.
Also, imagine that you have no knowledge of how computer systems work and how
the layers between the computer program, programming language, possibly
virtual machine, and native code work.

There are layers involved between these objects and their binary
representation. I imagine that there are also layers between our genome
(analogy to binary code) and the leveraged representation of ourselves
(analogy to object oriented system).
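
To make the layers idea concrete, here is a toy sketch in Python (purely
illustrative, nothing genome-specific): even one step down from the source,
the same trivial object-oriented "system" is already much harder to read.

    import dis

    class Counter:
        """A trivial object-oriented 'system': the intent is obvious here."""
        def __init__(self):
            self.n = 0

        def bump(self):
            self.n += 1

    # Disassembling the method shows a lower-level representation. The
    # design intent ("a counter that increments") technically survives,
    # but it is far less legible -- and raw machine code for a compiled
    # language would be several layers further down still.
    dis.dis(Counter.bump)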

I think that this is why it is hard to make much sense out of the genome, even
though the human genome was sequenced.

I also imagine that this is why it is hard to make sense out of the brain by
looking at the brain directly. An analogy would be that we are again looking
at binary representation of information.

It would be far more useful to figure out how this stuff works. I am not sure
how this is done at this time.

~~~
JunkDNA
Your example is a good one, but as others point out, the computer has been
designed logically. Biology hasn't been. And even then, there's just really
oddball crap that comes out of left field. I'll give you a good example (which
admittedly may require a few trips to wikipedia depending on your biology
background):

The codon used for translating DNA/RNA code to protein is well established.
It's a three-base degenerate code, meaning that there are several three-base
DNA sequences representing a given amino acid [1]. This code is very well
understood. If your DNA/RNA sequence has any of the three-base combos for
alanine, your protein gets an alanine in that position. It follows from this
that different DNA sequences can code for exactly the same amino acid sequence
in a protein. However, proteins with the same amino acid sequence are
chemically and biologically identical (ignoring things like post-
translational modification).
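
If you're not a biologist, here's the degeneracy in a few lines of Python (a
toy, partial codon table for illustration; see [1] for the full code):

    # A few of the 61 sense codons (RNA form). Four different codons
    # all encode alanine; tryptophan gets only one.
    CODON_TABLE = {
        "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
        "AAA": "Lys", "AAG": "Lys",
        "UGG": "Trp",
    }

    def translate(rna):
        """Translate an RNA sequence (length divisible by 3), codon by codon."""
        return [CODON_TABLE[rna[i:i+3]] for i in range(0, len(rna), 3)]

    # Two different sequences, one identical protein:
    print(translate("GCUAAA"))  # ['Ala', 'Lys']
    print(translate("GCGAAG"))  # ['Ala', 'Lys'] -- synonymous codons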

A few years ago, I read a paper [2] where the group hypothesized that in a
specific case, a rare codon for an amino acid in a specific protein caused
the cellular machinery to stall at that position. They suggested that in the
intervening time, the protein misfolded into a different 3D shape. The
resulting protein therefore had different chemical properties despite having
identical amino acid sequence, basically shredding what is often known as the
"central dogma" of molecular biology.

Now, this specific example probably needs to be confirmed, and might not be
very frequent. But it makes total sense when you understand how all the pieces
work. However, this explanation would be very low on most biologists' lists of
reasons why a certain protein isn't functioning properly. In fact, when people
do genetic analysis looking for diseases, they routinely throw out all
synonymous changes before doing the stats. It makes you wonder how often we
miss this when looking for disease genes.
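
A sketch of that filtering step (hypothetical codon table and variants, not
real pipeline code) shows how the stalling effect above would never even
reach the statistics:

    CODON_TABLE = {"GCU": "Ala", "GCC": "Ala", "GAU": "Asp", "GAA": "Glu"}

    def is_synonymous(ref_codon, alt_codon):
        """A variant is synonymous if both codons encode the same amino acid."""
        return CODON_TABLE[ref_codon] == CODON_TABLE[alt_codon]

    variants = [("GCU", "GCC"),   # synonymous: same amino acid, different codon
                ("GAU", "GAA")]   # non-synonymous: Asp -> Glu

    # The routine filter: keep only variants that change the protein.
    # A rare-but-synonymous codon that stalls the ribosome (as in [2])
    # is silently discarded right here.
    kept = [v for v in variants if not is_synonymous(*v)]
    print(kept)  # [('GAU', 'GAA')]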

My larger point is that lots of biological science is a collection of edge
cases. We know so little about the systems we're studying and have such crude
tools to investigate them, that we get blindsided by things sitting in plain
sight all the time.

[1] <http://en.wikipedia.org/wiki/Genetic_code>

[2] <http://www.sciencemag.org/content/315/5811/525.abstract>

~~~
skarayan
"Your example is a good one, but as others point out, the computer has been
designed logically. Biology hasn't been."

I'll do a little play on words here. Are you saying that biology isn't
logical? Does it defy the laws of the universe, physics, and the axioms of
math? Certainly not. I think what you are saying is that it isn't the same as
a silicon chip and follows different rules. We just don't know enough about
biology to peel away the layers, but there are definitely layers. There has to
be at least a one-to-one mapping between the genome and a live being, but I
would bet that the layers are far more complicated.

I read a study a while back that monitored how different areas develop inside
a mouse's brain. The study concluded that a certain part of the brain becomes
more developed in a mouse that tries to run through a maze vs. a mouse that
does nothing at all. I gather that the study was trying to point out which
area of the brain is responsible for memory, and perhaps a certain type of
memory.

I fail to see how this study could tell us something meaningful regarding our
ability to take information from our senses and store it into our memory. How
could we understand more about how this process works? Are we that far away
from understanding the inner workings of the brain that we need to do studies
like this? If so, then I tend to believe that we are far from the singularity.

"My larger point is that lots of biological science is a collection of edge
cases. We know so little about the systems we're studying and have such crude
tools to investigate them, that we get blindsided by things sitting in plain
sight all the time."

I understand. From my previous example, I would probably come up with edge
cases too if I were trying to look at binary code for the first time and
trying to figure out how a complex computer system works. Perhaps I would poke
the
system and monitor which part of the file ended up with more 1's and 0's.
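
Taken literally, the poke-and-count experiment is a few lines of Python (the
path and chunk size are arbitrary; the point is how crude the instrument is):

    # Count the fraction of 1-bits per region of a binary -- a deliberately
    # blunt instrument, the software analogue of "this brain area lit up".
    def bit_density(path, chunk_size=4096):
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                ones = sum(bin(b).count("1") for b in chunk)
                yield ones / (8 * len(chunk))

    for i, density in enumerate(bit_density("/usr/bin/python3")):
        print(f"region {i}: {density:.0%} ones")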

~~~
JunkDNA
Since I can tell this topic is interesting to you, if you haven't read it, I
highly recommend "Brain Rules" by John Medina:
<http://www.amazon.com/dp/0979777747/>

It's a fantastic read that I think does a much better job than I can of
discussing this topic, especially in the realm of human cognition.

~~~
skarayan
Thank you, I will read it.

------
giardini
He would be correct if creation of AI depended on a thorough understanding of
neuroscience. But I hope we needn't wait that long.

It's the old "Birds fly. To fly, man must fully understand bird flight."
argument. Yet today we still don't completely understand bird flight but
planes _do_ fly.

The analogy is not complete: we have yet to find the "air", the "turbulence",
a "Bernoulli principle", etc. of intelligence. That is to be determined. But
this approach is the only reasonable one.

As the author implies, waiting for neuroscience is like waiting for Godot.

~~~
burgerbrain
Exactly, we've had airplanes for a _century_ but working ornithopters are
still something of a black art. Like so many things in engineering, it is
easier, _and better_ , to not pay naturally occurring phenomena undue
attention. We are capable of engineering better.

~~~
elemenohpee
The lack of AI progress in the last 30 years is not a good sign. You're also
ignoring things like the materials that have come out of studying the pads on
geckos' feet.

~~~
burgerbrain
There has not been a lack of AI progress in the past 30 years, but rather some
sort of 'no true Scotsman'-esque raised-expectations phenomenon. Natural
language processing, computer vision, etc. have seen _dramatic_ improvements
in recent years, but every time there is an improvement people say "well,
that's just standard stuff, not _real_ AI". As long as a technology is real,
people seem weirdly unwilling to accept that it is also an example of AI. The
"real stuff" seems to include "fictional" as an integral part of its
definition.

~~~
lutorm
Well, when people say AI, I think "Turing Test", not computer vision.

~~~
burgerbrain
But surely you actually understand that there is more to the field than
emulating humans.

~~~
lutorm
I guess that's a matter of definition. If humans are the only example of
"intelligence" that we know of, then it seems natural that artificial
intelligence would concern emulating humans.

------
Estragon
Not interested in arguing about his timetable, but the example of DNA
sequencing only affording a linear increase in understanding is bogus, and he
ought to know that. It has _significantly_ accelerated genetics research by
making mapping a matter of a browser search. As an example, take the fly lines
developed by Gerry Rubin _et al._ , which can be manipulated to express any
reporter gene in any genetically defined brain locus. That would have been
completely infeasible prior to complete genomic sequencing of the fly.

------
abecedarius
The OP asks reasonable technical questions about medical nanorobots. I'm not
going to defend Kurzweil, but some less-sloppy thinkers have written about
this kind of stuff, like Merkle, Freitas, and Drexler. E.g.
<http://www.merkle.com/cryo/techFeas.html>
<http://www.nanomedicine.com/NMIIA/15.3.6.5.htm> They do tackle questions like
how do you power these things; I wish he'd read and criticize them instead.

A 7-micron-long medical nanorobot sounds pretty damned big to me, btw -- in
_Nanosystems_ Drexler fits a 32-bit CPU in a 400nm cube, less than 1/300 of
the volume if we're talking about a 1-micron-radius cylinder.
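
The arithmetic, for anyone who wants to check it (dimensions as stated above):

    from math import pi

    nanorobot = pi * 1.0**2 * 7.0   # 1-um-radius, 7-um-long cylinder, in um^3
    cpu_cube = 0.4**3               # 400 nm cube, in um^3

    print(nanorobot)                # ~22 um^3
    print(cpu_cube)                 # 0.064 um^3
    print(nanorobot / cpu_cube)     # ~344, i.e. the CPU is under 1/300 the volume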

------
Troll_Whisperer
This article is very similar to ones biologists were publishing in the
mid-'80s, when Kurzweil predicted the mapping of the human genome within 15
years. It's interesting how exponential progress is counter-intuitive even for
those who have been experiencing it in their fields for years.

~~~
Lost_BiomedE
I always thought that was Kurzweil's main and strongest point. We tend to
predict linearly when a lot of progress appears to happen exponentially. The
rest are embellishments.

The main call to action should be how to protect, organize, and invest in
ourselves given possible developments from the above.

------
macavity23
My big problem with Kurzweil's singularity is the massive handwaving he does
between 'computers are getting exponentially faster' and 'AI will arise'.

This depends on the assumption that 'intelligence' (and nobody can really
agree on what that means, which is a bad start) is representable in
algorithmic form. Maybe it is, maybe it isn't, but the lack of progress in
hard AI in the last 30 years isn't a good sign.

~~~
ekidd
There's never _any_ progress in AI, because once we figure out how to do
something, we stop calling it "AI".

In the last 30 years, computers have won at chess, won at Jeopardy, learned to
recognize spam with better than 99.5% accuracy, learned to recognize faces
with better than 95% accuracy, achieved semi-readable automatic translation,
figured out what movies I should add to my Netflix queue, and started to
recognize speech. We've seen huge advances in computer vision and statistical
natural language processing, and we're seeing a renaissance in machine
learning. Most of this stuff was considered "hard AI" as recently as 1992, but
the goalposts have moved.
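
For a sense of how mundane the machinery behind one of these "solved" problems
can be, here's a toy naive Bayes spam filter (the corpus is obviously a
hypothetical four-message joke, not one of the real 99.5% systems):

    from collections import Counter
    from math import log

    spam = ["win money now", "free money offer"]
    ham = ["meeting at noon", "lunch offer at noon"]

    def word_counts(docs):
        return Counter(w for d in docs for w in d.split())

    spam_counts, ham_counts = word_counts(spam), word_counts(ham)
    vocab = set(spam_counts) | set(ham_counts)

    def score(msg, counts, n_docs):
        """Log-probability of msg under one class, with Laplace smoothing."""
        total = sum(counts.values())
        s = log(n_docs / (len(spam) + len(ham)))  # class prior
        for w in msg.split():
            s += log((counts[w] + 1) / (total + len(vocab)))
        return s

    def classify(msg):
        s = score(msg, spam_counts, len(spam))
        h = score(msg, ham_counts, len(ham))
        return "spam" if s > h else "ham"

    print(classify("free money"))     # spam
    print(classify("lunch at noon"))  # ham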

And if intelligence can't be represented in algorithmic form, then what's the
brain doing? Even if we have immaterial souls that don't obey the laws of
physics, why do some brain lesions cause weirdly specific impairments to our
thought process? A huge chunk of our intelligence is clearly subject to the
laws of physics, and therefore can be wedged _somewhere_ into the
computational complexity hierarchy.

~~~
cdavid
Half of your examples are, in the grand scheme of things, trivial applications
of decades-old techniques. Recommendation, for example, is based on techniques
that are almost 100 years old (SVD). Winning at chess was made possible by
increased machine power; the algorithms behind it are hardly groundbreaking.
If this is AI, then many things are AI: almost any quantitative use of
statistics is AI, for example.
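
A sketch of the recommendation point (toy, hypothetical ratings matrix; zeros
stand in for unrated entries):

    import numpy as np

    # Users x movies. Factor with SVD, keep two latent factors, and read
    # predictions for unrated entries off the low-rank reconstruction.
    R = np.array([[5, 4, 0, 1],
                  [4, 5, 1, 0],
                  [1, 0, 5, 4],
                  [0, 1, 4, 5]], dtype=float)

    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    k = 2
    R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Predicted score for the movie user 0 hasn't rated:
    print(round(R_hat[0, 2], 2))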

There is no agreement on this, but some leading AI researchers consider that
most AI problems should be solvable with decades-old computers (i.e. we need
some completely new paradigms that nobody has yet thought of). See McCarthy,
for example: <http://www-formal.stanford.edu/jmc/whatisai/node1.html>

~~~
nhaehnle
That was exactly his point, wasn't it? His examples _used to be_ considered
problems in AI. Now that the problems are understood, they are simply
considered to be algorithmic problems. Lots of output of the MIT AI labs in
the early days is now simply taught in algorithms courses, yet back then,
those researchers considered themselves to be doing AI research.

 _That_ is the moving of the goalposts.

------
nwr86
Nice to see a post on this topic from a neuroscientist, as I am very
interested in this area but know little biology.

One question though--the author says "while the fundamental insights that have
emerged to date from the human genome sequence have been important, they have
been far from revelatory." While not guaranteed, doesn't it seem likely that we
will understand _much_ , _much_ more about the human genome once the economies
of scale come into play? The price of sequencing a genome is currently on the
order of about $10,000, and if prices continue to fall at the rate they have
(which seems likely, based on both past price decay and in-development
technologies), the cost to sequence a genome will be on the order of $100 well
before the end of this decade. Once we sequence millions to billions of genomes
and compare the information in said genomes with data from the corresponding
human subjects, I suspect we will learn a lot more than we would by trying to
understand a single person's genome. Moreover, given that the human genome is
on the order of roughly a gigabyte, it would seem difficult, but not
unreasonably so, to try to understand most of the information in our DNA.
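
To put numbers on the "well before the end of this decade" claim, here's the
projection under an assumed (not guaranteed) steady halving of cost roughly
every year:

    from math import log2

    cost_now, cost_target = 10_000, 100
    halving_time_years = 1.0  # assumption, roughly consistent with recent history
    years = halving_time_years * log2(cost_now / cost_target)
    print(years)  # ~6.6 years from 2011, i.e. before 2020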

Thanks for any insight you can provide.

------
PaulHoule
I've never been impressed by the "simulate a single human" approach to AGI.

I don't know why it appeals to people. Has Christianity infected people with a
desire for personal immortality? Are people inured to flushing billions and
billions down the drain on biomedical research?

Another issue is that humans aren't that great anyway. The "game of life" is
really about statistical inference and people aren't that good at it -- the
success of Las Vegas proves it. If you can eliminate the systematic biases
that people make dealing with uncertainty, you can make intelligence which is
qualitatively superhuman, not just quantitatively superhuman.

It's much more believable that steady progress will be made on emulating and
surpassing human faculties. This won't be based on any one particular
methodology (symbol processing, neural nets, Bayesian networks) but will be
based on picking and choosing what works. Progress is going to be steady here
because progress means better systems each step of the way.

Sure, the Hubert Dreyfuses will be with us each step of the way and will
diminish our accomplishments... and they might still be doing so long after
we're living in a zoo.

~~~
true_religion
> The "game of life" is really about statistical inference and people aren't
> that good at it -- the success of Las Vegas proves it.

Las Vegas is run by human beings. Having members of your species be worse at a
task than other members of your species doesn't prove that your species as a
whole is not good at the task.

Now...

> Has Christianity infected people with a desire for personal immortality? Are
> people inured to flushing billions and billions down the drain on biomedical
> research?

Why wouldn't one want personal immortality? To be fair, religious groups are
those least likely to support personal immortality of many sorts (e.g. brain
uploading) because of questions such as "what happens to the soul", and "isn't
this meddling in our creator's work?".

Rather, I'd think that anyone who believes that _this_ life is all that exists
would want to prolong it indefinitely. It's better to be than not to be.

Do you have an argument against that?

~~~
evilduck
_Do you have an argument against that?_

Certainly: quality of life. Immortality may be a miserable, horrible
existence. Most of the things we derive happiness from in life are an integral
part of our normal life-death cycle of events; disrupting that cycle may not
be very pleasant long-term unless we can supplant those innate desires with
something equally fulfilling. We don't really know; it's never happened.

~~~
cpeterso
_> Most of the things we derive happiness from in life are an integral part of
our normal life-death cycle of events_

Most? Do you mind elaborating?

~~~
evilduck
Most of your desires and goals are related to the scarcity of time. In a broad
sense, having children, for example, is a rewarding experience; we have an
innate desire to reproduce and rear children (talk to any childless post-30
woman who didn't explicitly choose that situation and you'll see it's innate),
but it's also something we probably wouldn't continue to do should death be
removed from humanity; there'd be no need. And in a smaller sense, even
something like enjoying a nice meal with friends would be called into
question. Your enjoyment of food only exists because we've got a biologically
programmed taste for foods that assists in our survival. Social interaction on
an infinite time scale gets weird too: everyone would eventually know everyone
and know everything there is to know, unique life experiences would become
commonplace, etc.

Things get weird on infinite scales.

~~~
rcxdude
there's a lot of assumptions in that particular view of immortality. It
assumes that it would also mean infinite memory, a loss of senses and certain
capabilities, and that we cannot modify ourselves to create new urges. All of
these are possible and plausible, but for example in the simulated AI
situation only the loss of senses is likely to be true (as well as perhaps the
inability to modify ourselves, depending on the approach)

------
zmanian
I find two aspects of the singularity compelling.

Singularities have happened in the past, when life evolved a solution to a
local problem: photosynthesis, the social primate, agriculture.

Kurzweil's singularity is just one of many potential singularities, but the
near future seems to contain either major innovation with great generality, or
collapse.

------
mwhite
The likelihood of Kurzweil's particular vision of the singularity doesn't say
anything about the likelihood of the singularity in general, i.e. one brought
about by the creation of artificial intelligence through methods nearer at
hand than nanobots or whole-brain emulation.

------
jsmcgd
For me, this complexity problem could be insurmountable. I think the best
approach may be to sidestep the issue and try selective breeding of
increasingly intelligent virtual beings.

------
maeon3
Before knocking Kurzweil's predictions, review his predictions of the 1990s
and the people who mocked them. Kurzweil does not have a perfect track record,
but I think his accuracy in predicting the future is way above average.

Also, I find his views of the future enlightening and useful, as he
illustrates lots of "just out of reach" engineering projects for me to
consider tackling.

Between the years of 1990 and 2005, Kurzweil predicted the following:

  * People will mainly use portable computers.
  * Portable computers will be lighter and easier to transport.
  * Internet access will be available almost everywhere.
  * Device cables will disappear.
  * Documents will have embedded moving images and sounds.
  * Virtual long-distance learning will be commonplace.

Mock his current predictions with care.
[http://www.associatedcontent.com/article/8181399/the_predict...](http://www.associatedcontent.com/article/8181399/the_predictions_of_ray_kurzweil.html)

~~~
Almaviva
Weren't all of those just obvious in 1990? Seriously, I don't think this is
the benefit of hindsight speaking.

~~~
Sayter
It's not as obvious as it sounds. If it is, then this will be an easy question
to answer:

What will technology be like in 2030?

~~~
bh42222
Well, let's see. I have high hopes for brute-force computation approaches to
scientific research. More IBM Watson-like systems, which teach themselves, and
more full automation of complex scientific experiments. Industrialize,
automate and computerize science, in other words.

If we do that, we have a chance of increasing the rate of scientific
discovery. And it does not require true God-like AI, just a bunch of really
clever and domain specific Watsons.

Alternatively, the future looks much like today, except with a lot more and
better gadgets, but people still grow old and die.

Also, we've either started massive CO2-sequestering actions, like fertilizing
the oceans with iron and using tons of charcoal in our farms, or nuclear
power provides a much larger % of global power, or Siberia is balmy.

And lastly, the US and several other industrialized nations have gone through
a terrible economic/financial crisis and reform, like the UK did in the 1970s
and Sweden at the beginning of the 1990s.

And China is the world's super power, with India close behind, and no one
cares much about the EU (or what's left of it) and the US.

Tuna is extinct in the wild.

------
bluedanieru
I would agree he's certainly right about Kurzweil's unrealistic optimism, but
I'm not sure our understanding of the brain (and other aspects of our biology
for that matter) isn't increasing exponentially. Perhaps rather it just seems
linear compared to the turbo-charged progress of these enabling technologies?
Certainly we've come a lot further since Phineas Gage than a linear trajectory
would allow.

He should have thrown around some numbers while he was at it. I wonder if he'd
agree with clinical immortality by the end of this century, and mind-uploading
by the end of the next?

~~~
azza-bazoo
I think you might be missing the point, though. The argument is that we're
collecting exponentially more _data_ about the brain, but that data doesn't
translate directly to understanding.

You mentioned Phineas Gage. That case led to the idea of regions of the brain
controlling different things, which led to lobotomy as a psychiatric
treatment, which was used up until the 1960s or so. Then chemical methods
improved, and people came to understand that neurotransmitters played a role
too, which led to antidepressants and other drugs. Those drugs have improved,
but their design hasn't changed _that_ much in the last few decades. Obviously
this is over-simplified -- but it doesn't sound like an exponential growth of
understanding to me.

~~~
guilbep
In the end, we can't talk about exponential or linear growth. We have no
'measure' of scientific advance, and we need a measure in order to measure
something. Or maybe some exist, but I'm not... okay, so I googled it before
posting something stupid: it seems that there's no 'real' measure. A
scientific paper arguing as much:
<http://www.springerlink.com/content/m1h2150x02u153x3/>

