
And everyone gets a robot pony - rimantas
http://scienceblogs.com/pharyngula/2012/07/14/and-everyone-gets-a-robot-pony/
======
nessus42
I like this part of the OP:

 _"If singularitarians were 19th century engineers, they’d be the ones talking
about our glorious future of transportation by proposing to hack up horses and
replace their muscles with hydraulics. Yes, that’s the future: steam-powered
robot horses. And if we shovel more coal into their bellies, they’ll go
faster!"_

This is very true. Having been peripherally involved in a project to slice up
and image just a cubic millimeter of ferret brain, I think I can safely say
that we are not going to be able to accomplish anything remotely like reading
out the state of a human brain anytime in the foreseeable future. Just
reconstructing the 3D geometry of the neurons and synapses in that 1 mm cube
turns out to be a gargantuan feat, much less recording all the other stuff
that you would need to record.

Sure, things like this are fun to think about, and I have nothing against fun.
But any thoughts of being able to actually do this fall into the realm of
Science Fiction. There's no point in putting serious effort into figuring out
how we might accomplish this goal now, because, as the OP pointed out, by the
time we are actually able to do it our knowledge will be much different. And
any ideas that we have now--like dated Science Fiction--are just going to be
thought of as having been terribly quaint.

------
jessriedel
In addition to his confusion over the clock frequency speedup argument (as
pointed out by cultureulterior), PZ Myers doesn't seem to argue from a
concrete idea of how detailed a scan will have to be to capture the important
brain functions. The feasibility of brain scans is strongly sensitive to the
(currently disputed) necessary level of detail. It doesn't make sense for him
to say

> We can’t even record the complete state of a single cell;

unless he's just using it as a general statement about our technology. It's
completely possible that recording the state of a cell with molecular
resolution remains beyond our control even when we can scan brains to the
resolution necessary to simulate human cognition.

Also, when I heard that

> With the most elaborate and careful procedures, they report excellent
> fixation within 5 microns of the surface, and disruption of the tissue by
> ice crystal formation within 20 microns.

my estimate of brain scan feasibility went _up_. Of _course_ it's true that we
don't yet "have a method to lock down the state of a 3kg brain"; we're
discussing the far future!

Pre-WWII, I think someone could have made very similar arguments about
computers by pointing out how fast they would have to be to do something crazy
like a 3D simulation. By golly, they'd need _billions_ of transistors. Yes,
computers are different from brain scanning, so the scalability of one need
not imply the scalability of the other. But the point is that you can't just
argue "look how hard this is _now_ ; it will continue to be hard in the
future". You have to argue about why _progress_ in brain scanning will be slow
(compared to computers).

~~~
aidenn0
I think what the article is pointing out is that:

1) People who have never tried to scan a brain say "oh it's totally doable,
why haven't we done this with simple organisms?"

2) People who are actually trying to scan brains say "Um, have you even read
any of our papers? We know it would be useful, but it's hard."

~~~
jessriedel
> People who have never tried to scan a brain say "oh it's totally doable, why
> haven't we done this with simple organisms?"

Who are these people? Certainly not Chris Hallquist in the post linked by
Myers.

And where does Myers say this? I don't see him attributing to anyone the claim
that the scanning of simple organisms should already have happened.

------
reasonattlm
Below is some background reading on whole brain emulation from the Future of
Humanity Institute. It isn't hard to come to a better understanding of the
present state of research and plausible future goals than is demonstrated by
the author of this piece.

<http://www.fhi.ox.ac.uk/Reports/2008-3.pdf>

"As this review shows, WBE on the neuronal/synaptic level requires relatively
modest increases in microscopy resolution, a less trivial development of
automation for scanning and image processing, a research push at the problem
of inferring functional properties of neurons and synapses, and relatively
business‐as‐usual development of computational neuroscience models and
computer hardware.

"This assumes that this is the appropriate level of description of the brain,
and that we find ways of accurately simulating the subsystems that occurs on
this level. Conversely, pursuing this research agenda will also help detect
whether there are low‐level effects that have significant influence on higher
level systems, requiring an increase in simulation and scanning resolution.

"There do not appear to exist any obstacles to attempting to emulate an
invertebrate organism today. We are still largely ignorant of the networks
that make up the brains of even modestly complex organisms. Obtaining detailed
anatomical information of a small brain appears entirely feasible and useful
to neuroscience, and would be a critical first step towards WBE. Such a
project would serve as both a proof of concept and test bed for further
development.

"If WBE is pursued successfully, at present it looks like the need for raw
computing power for real‐time simulation and funding for building large‐scale
automated scanning/processing facilities are the factors most likely to hold
back large‐scale simulations."

---

And some further, easier background reading:

[http://www.fightaging.org/archives/2012/06/mind-uploading-at...](http://www.fightaging.org/archives/2012/06/mind-uploading-at-the-international-journal-of-machine-consciousness.php)

[http://www.fightaging.org/archives/2009/02/the-age-of-artifi...](http://www.fightaging.org/archives/2009/02/the-age-of-artificial-brains.php)

~~~
jamesaguilar
On a more flippant level, isn't it pretty obvious that robot ponies will be
possible to build sometime in the next century or two?
We've already got something that's kind of moving in that direction, and this
was a few years ago: <http://www.youtube.com/watch?v=W1czBcnX1Ww>

------
waterlesscloud
Recently I've been thinking this kind of thing is more likely to be a gradual
process, and not likely to take the literal form that's been discussed.

But as we use systems that expand our awareness, are adaptive to us as
individuals, and interface with us more tightly, we'll slowly become a sort
of hybrid consciousness. And over time, we'll become more and more "online",
until some day, the machine portion of that consciousness will persist in a
meaningful way after the biological part has come to an end.

In other words, we might well come to a point where we "upload" by shaping the
systems we interact with over time. But it won't be the same as the bio
version of our consciousness, it will be something else.

Whether we will come to a point where that distinction doesn't matter to our
consciousnesses or not is an open topic.

~~~
dchichkov
I'd guess what one could realistically try with current technology is linking
two mice [via Ed Boyden-style devices] and observing whether that produces an
effect of extended awareness between the mice [shared pain stimuli, etc.].

------
confluence
This problem is hard. Extremely hard. But that does not mean that it is not
theoretically possible (if very, very, very unlikely).

A lot of the brain uploading people seem to look at this through destructive
brain scanning (correct me if I'm wrong). That sounds like a terrible idea -
why does it have to be all in one go, with no way to reverse it?

What if - for example (I'm spitballing) - you moved into the brain slowly,
replacing each live neuron with an artificial neuron (far-fetched, I know -
nanotech is ridiculously hard), instead of going slice by slice through a
frozen brain.

There is no reason this slow "viral" method couldn't be done (or reversed -
replace each artificial neuron with a biological one). It's akin to how we
deploy distributed systems: create a compiled slug and push it out node by
node via BitTorrent. The change is tested on each node and slowly rolls out.
If anything goes wrong, just roll back to the previous slug.

Once you have full conversion, upload away (I presume reading the states of
artificial neurons is relatively easy compared to organic cells). Make no
mistake - this is a super hard problem. But it is not impossible.
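
For what it's worth, the rolling "node by node" pattern described above can be
sketched in a few lines of Python. Everything here (`rolling_deploy`,
`health_check`, the node dicts) is a made-up illustration of the deployment
analogy, not any real tool:

```python
# Toy sketch of a rolling deployment: upgrade one node at a time, test it,
# and roll everything back if any step fails. Names are hypothetical.

def rolling_deploy(nodes, new_version, health_check):
    """Upgrade nodes one at a time; revert the whole fleet on failure."""
    upgraded = []  # (node, previous_version) pairs, for rollback
    for node in nodes:
        old_version = node["version"]
        node["version"] = new_version
        if not health_check(node):
            # Something went wrong: revert this node and all prior upgrades.
            node["version"] = old_version
            for prev_node, prev_version in upgraded:
                prev_node["version"] = prev_version
            return False
        upgraded.append((node, old_version))
    return True

nodes = [{"id": i, "version": "v1"} for i in range(5)]
ok = rolling_deploy(nodes, "v2", health_check=lambda n: True)
```

The point of the pattern is that every step is individually testable and
reversible, which is exactly the property a destructive one-shot scan lacks.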

------
gwern
LW discussion:
[http://lesswrong.com/r/discussion/lw/dm3/pz_meyers_on_the_in...](http://lesswrong.com/r/discussion/lw/dm3/pz_meyers_on_the_infeasibility_of_whole_brain/)

------
cultureulterior
He completely misunderstands the clock frequency speedup argument.

~~~
jessriedel
Just so it's clear to everyone, he says

> And then going on to make more ludicrous statements…

> > _Axons carry spike signals at 75 meters per second or less (Kandel et al.
> 2000). That speed is a fixed consequence of our physiology. In contrast,
> software minds could be ported to faster hardware, and could therefore
> process information more rapidly._

> You’re just going to increase the speed of the computations — how are you
> going to do that without disrupting the interactions between all of the
> subunits? You’ve assumed you’ve got this gigantic database of every cell and
> synapse in the brain, and you’re going to just tweak the clock speed…how?
> You’ve got varying length constants in different axons, different kinds of
> processing, different kinds of synaptic outputs and receptor responses, and
> you’re just going to wave your hand and say, “Make them go faster!” Jebus.
> As if timing and hysteresis and fatigue and timing-based potentiation don’t
> play any role in brain function; as if sensory processing wasn’t dependent
> on timing. We’ve got cells that respond to phase differences in the activity
> of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to
> make it go faster.

The only explanation I can think of is that he thinks the proposal is to scan
this brain so that it can be duplicated and run as another fleshy brain. But
obviously, the idea is to simulate the entire brain on a computer, so that
simulating the brain faster than real life is just a matter of having a fast
computer. If he missed this, it makes me a bit skeptical of his other
criticisms.

~~~
moe
I think what he means is that the brain is multi-threaded with optimistic
locking and hardcoded timing constants.

Try running old MS-DOS games inside an emulator. Many of them will act quite
funny when you turn up the clock-speed.

~~~
Dylan16807
That's not exactly turning up the speed, that's emulating a faster CPU.

Try taking a Game Boy emulator and putting it on fast forward. It works
perfectly. If you have a full emulation of a system, you can run it
arbitrarily fast.

~~~
moe
Yes, it was a flawed analogy (aren't they all...).

However, those DOS games usually fail in sped-up emulation because they make
assumptions about external inputs such as the real-time clock.

I think the OP's point was that we can't reasonably speed up the "RTC" in a
brain emulation if we want it to interact with the real world, because that
would break all sorts of hardwired assumptions.

For a simple example: if you ran your brain at 4x speed, it would perceive
everything in super-slow-motion. At that speed it would already have
difficulty understanding you when you speak to it (at the least it would have
to be a very patient brain).

At higher speeds pretty much all cognitive functions would probably break down
- unless you feed it recorded inputs that have been accelerated to match the
brain-speed.

~~~
Dylan16807
That is an issue, but there is a huge difference between having to slow the
brain back down in certain situations and being unable to speed it up at all.

I don't expect to play an entire game on fast forward, after all.

Edit: new line about breakdown: I just assumed that whatever input was given
would be sped up too. That part of the project seems far less complicated than
the brain simulation itself.

~~~
moe
My knowledge of brains is limited, but I'd think the issue remains the same
even if you cut off all external inputs.

Basics like memory decay are also tied to the system clock. So if you ran
your brain at 1000x speed, it would probably simply forget everything almost
immediately.

And if you make a "simple" patch that prevents it from ever forgetting
anything then it would be overwhelmed because it is only wired to deal with a
certain amount of memories at a time.

In terms of the DOS game analogy: we may be able to patch a game that
originally ran in 256 KB of RAM to run in 2 GB and actually fill that up
(because we disabled the garbage collector). But the game probably uses
algorithms that break down when faced with such a large dataset.

At this point we're down to having to actually understand the game (or brain)
in detail, in order to make the changes required for running at higher
capacity.

~~~
Dylan16807
Actually having a higher capacity will be tricky, yes. But at least there
won't be cell decay in the scientists working 4000 hour weeks to figure it
out.

------
kornork
Kurzweil's next book is supposed to be about the advancing state of brain
imaging. His last books seemed to reduce the brain to its computational
capacity, with some hand-waving about how our brain-scanning technology is
improving at an accelerating rate.

Will he address the complexity of the cell and the brain, or will this fall
into the category of stuff that we'll of course understand in the future?

------
jakeonthemove
The article is pretty funny, but it's all true: the complexity of our brain is
just beyond our imagination at the moment.

Saying that we could preserve the brain or make a copy of it is like saying
that rockets are just open-ended combustion engines, or that aliens have been
visiting us for a long time.

No, there's a reason the former is called "rocket science", and the latter is
impossible because the speed of light doesn't allow it (it's been shown time
and again that it's constant and that nothing travels faster).

The Sun is the most powerful energy source we know of, and it's "just" a
result of billions of years of gases and particles coming together and somehow
successfully starting a chain reaction.

Then again, without the kind of optimism that Sci-Fi fans have we would not
have most of the technology we enjoy today...

------
pippy
Hacker news talked about this problem recently
<http://news.ycombinator.com/item?id=3987660>. Unique hardware is always
difficult to emulate, and that's why you cut corners. Cutting corners in this
field has gotten us quite far
[http://www.scientificamerican.com/article.cfm?id=graphic-sci...](http://www.scientificamerican.com/article.cfm?id=graphic-science-ibm-simulates-4-percent-human-brain-all-of-cat-brain).

But the OP of the article is ignoring the philosophical problem at hand:
would a perfectly simulated brain bring about sentience? To me the answer is
no. A machine can fake it, but it's the complex noise of nature that gives us
our qualia.

~~~
ddfisher
As another person who's spent some time thinking about this: I disagree. Why
couldn't a machine simulate the noise as well?

------
olalonde
And yet, very few people would have thought this possible 30 years ago:
[http://singularityhub.com/2010/06/12/monkey-controls-robot-a...](http://singularityhub.com/2010/06/12/monkey-controls-robot-arm-with-7-degrees-of-freedom-video/)

------
woodchuck64
Totally unfair of PZ, since Hallquist did say he'd be "surprised ... if it
took only a couple of decades". 20 years of computational/biological advances
should give us quite a lot.

------
guscost
Obligatory self-promotion:

<http://guscost.com/2011/04/12/science-analog-confabulation/>

------
jostmey
THANK YOU! Very few technologies develop at exponential rates like computer
science has. In general, the learning curve is steep and the progress is slow.

------
lnanek2
Do we really need to scan the brain? One of the greatest tech companies out
there, Google, makes almost all its money figuring out the right ads to put
in front of people. They pay their employees the most for working in that area
too. Eventually these ad companies like Google will be able to model people
based on their life data. :) Of course it will be to figure out which ads to
show them and make more money, not to run the model and give it life. But
maybe they'll let the model act as your concierge for a fee.

~~~
bobbles
The culmination of these billions of dollars and research and engineering will
be a real life manifestation of 'clippy'.

------
bfrs
I have a question regarding this brain scanning and uploading idea:

Won't Heisenberg's principle make this impossible?

~~~
gwern
No. Why would it?

~~~
bfrs
Well, to me it seems that fine-grained brain scanning would need something
like Laplace's demon:

<http://en.wikipedia.org/wiki/Laplace%27s_demon>

~~~
zanny
Fortunately, neurons are orders of magnitude larger than atoms, so the
required scanning resolution is not that bad. Simulating all of the atoms in
the volume of a brain individually might be a pain, though.
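
Some rough numbers (mine, not from the thread) to make the orders of magnitude
concrete: a neuron-level scan deals with roughly 10^11 objects, while an
atom-level one would face roughly 10^26. All figures below are
order-of-magnitude estimates only:

```python
# Back-of-envelope comparison of neuron-level vs atom-level brain scanning.

AVOGADRO = 6.022e23
neurons = 8.6e10              # commonly cited ~86 billion neurons
brain_mass_g = 1400           # ~1.4 kg human brain, mostly water

# Crude approximation: treat the whole brain as water
# (18 g/mol, 3 atoms per molecule).
atoms = (brain_mass_g / 18) * 3 * AVOGADRO   # ~1.4e26 atoms

ratio = atoms / neurons       # roughly 1e15 atoms per neuron
```

So a neuron-level map is about fifteen orders of magnitude smaller than an
atom-level one, which is why quantum limits on measuring individual atoms need
not apply to it.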

------
bluesnowmonkey
He doesn't take the point about robot horses very far, but that's where the
conversation will go in time.

Maybe flight is a better example. We always knew it was possible to fly
because we saw birds doing it. Birds fly by flapping their wings. We tried to
make flying machines that flap their wings. They didn't fly.

Birds evolved to flap their wings because it exploited the technology
available to evolution. Birds are made out of the same stuff as other animals,
just tweaked a bit to be lighter. Flapping is a lot like running or swimming
physiologically -- swinging a limb back and forth in a certain pattern.
Evolution can do a lot with warm-blooded animals with limbs.

When we figured out how to make artificial flying machines, the solution
exploited the advantages of _our_ technology. Planes are made out of the same
stuff as cars, just tweaked a bit to be lighter. Spinning a propeller is a lot
like spinning a wheel. We can do a lot with combustion engines that spin
things.

The missing pieces were the principles of aerodynamics, some technology (IC
engine, construction materials), and some engineering specific to the problem
of flight. Since we figured it out, we're able to make machines that fly
faster and higher than anything in nature. _Vastly_ faster and higher.

Right now we're trying to build machines that think -- not just compute. We
know that thinking is possible because we see brains doing it. Brains do it
with neurons and synapses. We tried to make thinking machines out of neurons
and synapses. They didn't think.

Brains are made of neurons because that's what evolution had available.
Neurons are a lot like other cells in the body -- blood cells, skin cells,
muscle cells. Evolution is good at specializing cells to do all kinds of jobs.

When we figure out how to make thinking machines, the solution will exploit
the advantages of the technology of the day. It will look like something we
already have, but tweaked. We're good with transistors and silicon. We're good
at computer networks.

We definitely have missing pieces. We don't know much about the principles of
intelligence. We understand logic, but how do you get intelligence from logic?
And maybe we're still missing some key technology to make it work.
(Memristors? Graphene?) And once we have the principles and the technology,
it'll take some engineering, but we'll make it work. We'll build thinking
machines that are _vastly_ smarter, wiser, and more clever than anything
nature ever made.

The point is that if we ever upload a brain, it will be like building a
mechanical horse -- an over-engineered gimmick, a parlor trick. We'll already
have done much better at AI by approaching the problem from a different
direction.

It doesn't bode well for humanity though, in the long term. Just about the
only jobs left are _thinking_ jobs. What happens when it's a waste of time for
a human to think, just like it's a waste of time for a horse to pull a plow?
We didn't declare war on horses, Terminator-style. We just didn't keep very
many around.

