
Can We Copy the Brain?
https://spectrum.ieee.org/static/special-report-can-we-copy-the-brain
======
Bucephalus355
The best analogy for this field, and the many, many problems it might run
into, is biotech. This is actually quite funny, because in the early days the
main analogy for biotech was the computer / state machine.

For some background, biotech has undergone many booms and busts in the last
half century; unlike the last tech bubble, these booms and busts have flown
somewhat under the radar.

In fact, Jurassic Park, written in 1990, is actually a book /about/ the late
80’s biotech bubble. There is a really fun part in the book where they are
talking about making miniature dinosaurs that will only eat InGen-brand
dinosaur pet food (InGen was the company behind Jurassic Park).

Anyway, with biotech, the problem has been, again and again, the assumption
that a system is radically simpler than it actually is. Biology is so
incredibly complicated that it puts the largest engineering book I have, a
1,500-page TCP/IP protocol book (from No Starch Press), to shame. I mean, at
least we know how TCP/IP works. With biology, the manual might be closer to
900,000 pages, and we only have 40% of the table of contents and maybe 800
pages (out of order) so far.

BEST EXAMPLE: In the 70’s, once large portions of the genome started to be
identified and linked to specific genes that were known to cause traits or
diseases, it was widely assumed that creating/reading/updating/deleting genes
on DNA would be relatively straightforward, particularly as more of the genome
was uncovered. However, as was later discovered, many traits or diseases might
be the result of 200+ genes that are also used elsewhere. Turn off just 1 gene
for a disease, and 1). it won’t do anything because you didn’t turn off the
other 199, and 2). oh wow that gene was actually used for something else and
now you’ve lost the ability to form eyeballs / are born without anything in
your eye sockets.
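
A toy sketch of that failure mode (hypothetical gene IDs and made-up numbers,
purely illustrative, not real biology): a polygenic disease draws additively on
200 genes, while an essential pathway quietly reuses one of them.

```python
# Toy model of pleiotropy: one gene serves two unrelated traits.
DISEASE_GENES = set(range(200))     # genes 0..199 each nudge disease risk a little
EYE_GENES = {7, 1000, 1001, 1002}   # gene 7 is shared -- that's the pleiotropy

def disease_risk(active):
    """Polygenic trait: each contributing gene adds a small, additive amount."""
    return len(active & DISEASE_GENES) / len(DISEASE_GENES)

def eyes_develop(active):
    """Essential pathway: development fails if any required gene is missing."""
    return EYE_GENES <= active

active = set(range(2000))   # every gene switched on
active.discard(7)           # knock out one disease gene... which is also an eye gene

print(disease_risk(active))   # 0.995 -- the disease signal barely budges
print(eyes_develop(active))   # False -- but eye development is broken
```

Knocking out one of 200 additive disease genes moves the disease signal by
half a percent, while silently breaking an unrelated pathway that happened to
reuse the same gene.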

~~~
knight-of-lambd
> Turn off just 1 gene for a disease, and 1). it won’t do anything because you
> didn’t turn off the other 199, and 2). oh wow that gene was actually used
> for something else and now you’ve lost the ability to form eyeballs / are
> born without anything in your eye sockets.

Certainly not on the same scale, but this resonates with my experiences with
legacy code.

I think this similarity is more than superficial. Energetic systems evolve
over time to become tangled, correlated messes, without some other force
counteracting this tendency (i.e. refactoring). I wonder if DNA has analogous
mechanisms.

~~~
PeterisP
Yeah, in some way it feels like there's essentially a huge software
(reverse-)engineering project here.

We have the technical ability to read all the code in our DNA, understand what
small parts of it do (e.g. making a particular protein), and model some of the
small scale behavior.

And we've got a very, very, very large codebase of mishmash undocumented
legacy homegrown code that sort of does what we want but in an unstable and
occasionally buggy manner. And we've got a strong wish to fix some bugs (i.e.
genetic diseases) and possibly add some features (e.g. longer quality
lifespan, increased capabilities). So we'd like to reverse-engineer this
system.

The good part is that we only have to do it once and we can cooperate on it;
the bad part is that the system is really complex and (more importantly)
horribly interdependent; it actually implements pretty much every practice
that we know makes code unmaintainable.

Anyway. The hypothesis I'm trying to make is that research on advanced
methodologies and tools to analyze and understand large quantities of tangled
(and possibly intentionally obfuscated) computer code, i.e. techniques and
algorithms for computer-aided (machine-learning?) understanding and reverse
engineering of large codebases, seems likely to eventually have practical
applications in biotech.

Yes, contemporary code behavior is quite far from protein interaction. That's
OK: we're quite far from starting to properly reverse-engineer (in this
context) biotech as well; with every decade, code (and its analysis) will
become more complex and biotech better understood, eventually meeting. And
when designing tools for the analysis of very complicated systems, the tools
will have to be adapted not to the systems but to the analyzer: to the
limitations of what structures the human researchers can understand and "keep
in their head", and what needs to be automatically summarized/structured by
tools.

~~~
mrep
> The good part is that we only have to do it once and we can cooperate on it;

I'm not a biology person, but I think everyone except identical twins has
different DNA, which makes the problem much harder, since "doing it once"
only solves one person's problem. Every variable (DNA base) can potentially
interact with every other, across roughly 3,000,000,000 base pairs, which
makes for an insane number of combinations; granted, the genome almost
certainly has some defined structure which reduces the potential differences,
but that will still be a huge number. Also, you have the whole
nature-versus-nurture problem, which makes biology even harder.
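
For a sense of scale: even restricting to unordered pairwise interactions (far
short of considering every subset of sites), ~3 billion base pairs gives an
astronomical count. A quick back-of-the-envelope:

```python
n = 3_000_000_000            # ~3 billion base pairs in the human genome
pairs = n * (n - 1) // 2     # unordered pairs of potentially interacting sites
print(f"{pairs:.2e}")        # prints 4.50e+18
```

About 4.5 quintillion candidate pairs, before even considering higher-order
interactions or regulation.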

~~~
adrianN
Even identical twins are not all that identical. Reading and understanding the
DNA is one thing; reading and understanding the regulatory network that
controls which genes are expressed under which circumstances is at least as
difficult.

------
unpythonic
An excellent book which covers this topic is _Permutation City_ by Greg Egan
(1994). If we can clone all of you (physical characteristics, brain, memories,
personality) into a simulation, what are all the implications?

In it, a perfect replica of a person is scanned, and when it boots up, it
continues its existence from the time of the scan. Hacking on the model is
possible, so you can avoid certain memories, update your "physical"
appearance, etc.

Given how prevalent virtualization has become recently, I thought it was a
fairly modern book. I was surprised to find out it came out in the nineties,
and not only has it held up well, it would be considered visionary even if it
had been published yesterday.

~~~
aluhut
You should try the Takeshi Kovacs books by Richard Morgan. There is a TV show
coming out in two days (good timing here) based on the first book (Altered
Carbon). It'll probably be a mess, since the book has quite a lot of violence.
But the thoughts above are played out in an interesting and entertaining way.
All three books are set in the same universe with the "same" main character,
but are not a trilogy.

~~~
fernly
Can you link to info on the TV show? Because all I can find is Morgan's recent
blog post recounting his visit to the set:

[https://www.richardkmorgan.com/2018/01/fragments-of-a-jet-
la...](https://www.richardkmorgan.com/2018/01/fragments-of-a-jet-lagged-
dream/)

~~~
fernly
oh wait, here it is,

[http://www.imdb.com/title/tt2261227/](http://www.imdb.com/title/tt2261227/)

I'm disappointed -- it's on Netflix.

~~~
aluhut
Why disappointed?

------
10-6
This report by IEEE is actually a very good collection of topics and research
currently being done in this field. They discuss specific problems and topics
like the neocortex, IIT [1], neuromorphic engineering [2], pose cells, SLAM
[3], and more.

For anyone interested in research being done in AI, ML, consciousness, etc.,
these are great articles written by actual scientists and researchers who are
doing the work (as opposed to the hyperbolic articles or tweets you see online
these days about AI).

[1]
[https://en.wikipedia.org/wiki/Integrated_information_theory](https://en.wikipedia.org/wiki/Integrated_information_theory)

[2]
[https://spectrum.ieee.org/semiconductors/design/neuromorphic...](https://spectrum.ieee.org/semiconductors/design/neuromorphic-
chips-are-destined-for-deep-learningor-obscurity)

[3] [https://spectrum.ieee.org/robotics/robotics-software/why-
rat...](https://spectrum.ieee.org/robotics/robotics-software/why-ratbrained-
robots-are-so-good-at-navigating-unfamiliar-terrain)

~~~
visarga
I'd like to add this article (with somewhat of a cheeky title): "The
impossibility of intelligence explosion"

[https://medium.com/@francois.chollet/the-impossibility-of-
in...](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-
explosion-5be4a9eda6ec)

It's written by François Chollet, creator of the Keras DL framework. The
article shows how the environment and intelligence are interrelated. Some of
its points are also expressed in the IEEE Special Report (sensorimotor
integration). There are many connections with the recent push towards
simulation in AI: Atari, OpenAI Gym, AlphaGo, self-driving cars, etc. It's a
new front of development, where simulation will create playgrounds for AI.

The main point is that intelligence develops in the environment, and is a
function of the complexity of the environment and task at hand. There is no
general intelligence, or intelligence in itself, only task-related
intelligence. An intelligence explosion can't happen in the void (or in a
brain in a vat, or in a supercomputer that has no interface to the world, and
can't act on the world). The author concludes that AGI is impossible based on
environment and task limitations.

It's an interesting take, because we're focusing too much on reverse
engineering "the brain" as if it exists in itself, outside the environment. We
should learn about meaning and behaviour from the environment and the
structure of the problems the agent faces. Meaning is not "secreted" in the
brain.

~~~
red75prime
Far-fetched conclusions based on a misinterpretation of the "no free lunch"
theorem. The theorem doesn't forbid an intelligence which is universal only
within our own universe, since our own universe doesn't present a uniform
distribution of all possible problems.

~~~
visarga
I tend to believe that a hole in François' argument is that a sufficiently
powerful computer could simulate an environment inside, where the AI could
thrive.

~~~
red75prime
A hole? In that Swiss cheese? It's hardly surprising. He uses the hypothetical
Chomsky language-acquisition device to support his idea that "there couldn't
be general intelligence", while there's a provably optimal intelligent agent
(AIXI) with computable approximations. He uses self-improvement trends
established by entities which aren't intelligent agents (military empires,
civilization, mechatronics, personal investing) to predict what the
self-improvement of an intelligent agent will be like. It's a pure
London-streets-overflowing-with-bullshit level of prediction.

I am not an extreme singularitarian. There are hard physical limits that make
exponential progress and a singularity impossible. But bad arguments are bad
arguments; it doesn't matter whether the conclusions are appealing.

------
otakucode
Sure... sorta. But there is a problem. You see, without continuous input from
the environment in the form of highly correlated input... we know what happens
to human consciousness. It rapidly dissolves and disappears. Consciousness is
intimately and (thus far) inextricably bound up in its embodied existence.
Facial paralysis has profound effects upon the ability of people to feel
emotions 'internally', eventually resulting in both an inability to even
recall ever feeling those emotions or having an ability to recognize them in
others. Should a consciousness divorced from a feedback loop created with the
same environment we share (or at least similar in most respects, a simulation
might work OK, I don't think we know) be created, the odds that we could even
recognize it as conscious are very low. Maybe a general measure of the
system's tendency to reduce entropy in some region either within or near
itself?

We are feedback loops, and when the loop is broken, we stop being us.

~~~
lev99
Most people would accept being able to see and interact with the world as a
prerequisite to consciousness. I understand this as an I/O problem. There must
be continuous high-bandwidth input (visual, auditory, tactile) as well as very
detailed and dexterous output.

I wonder to what degree virtual realities can serve the I/O function. I think
for the next half century virtual realities will mostly operate at a lower
level of detail than meatspace. Can a mind function well when stuck in a
lower-detail world?

Alternatively, cybernetics could serve. You bring up a real concern, but it's
a solvable problem.

~~~
JoshTriplett
> Most people would accept being able to see and interact with the world as a
> prerequisite to consciousness.

People lacking sight, hearing, mobility, and similar would disagree.

By all means, we want the ability to interact with the world, bidirectionally.
But that's not a prerequisite for consciousness. And if the only thing we
manage, in the short term, is preservation and continued function, that would
be a _massive_ improvement, far larger and more important than the subsequent
incremental steps towards I/O.

~~~
stewbrew
> People lacking sight, hearing, mobility, and similar would disagree.

I think "to see" is meant as a placeholder for any kind of sensory input. I
doubt that people who lack any form of sensory input, with no way to
communicate, would disagree even if they could -- while they are in that
state. As somebody who experienced being knocked out after an accident, I'd
say consciousness is a fragile thing (but it uses any straw to re-establish
itself).

~~~
JoshTriplett
> I doubt that people who lack any form of sensory input with no way to
> communicate would disagree even if they could -- while they are in that
> state.

If you're in a sensory deprivation tank, but you can still think, are you no
longer conscious? If you were in a _perfect_ one, or you had something that
somehow cut off the connection between the brain and the body, but you can
still think, are you no longer conscious? The ability to interact with the
world certainly makes life nicer, but consciousness does not _depend_ on that.

~~~
dwaltrip
> If you were in a perfect one, or you had something that somehow cut off the
> connection between the brain and the body, but you can still think, are you
> no longer conscious?

I think, if such an unlikely scenario were possible, that disturbing insanity
would result, and eventually it would degrade into something that does not
resemble human consciousness.

The reality check provided by sensory input stabilizes and fosters
consciousness.

Perhaps it is possible to design a conscious machine that would be more robust
against sensory deprivation, but humans certainly don't have that
characteristic. A good example of this is the very damaging effect of solitary
confinement in prisons.

------
excalibur
No, it's protected under the DMCA. If you wish, you may read the complaint at
ChillingEffects.org.

~~~
ben_w
Tom Scott did a video about that — “Welcome to Life: the singularity, ruined
by lawyers”

------
Rapzid
I read something along the lines of files having no meaning until we interpret
them with programs. The idea is that files are almost schemaless and we bestow
a schema on them by the way we consume them with programs. I believe this may
have been a SQL book...
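
That schemaless-bytes idea is easy to make concrete: the same four bytes
(arbitrary values chosen here) mean entirely different things depending on
which "schema" a program reads them with:

```python
import struct

raw = b'\x48\x69\x21\x00'    # four bytes on disk -- no inherent meaning

as_int   = struct.unpack('<i', raw)[0]   # read as a little-endian 32-bit integer
as_float = struct.unpack('<f', raw)[0]   # read as a little-endian 32-bit float
as_text  = raw[:3].decode('ascii')       # read the first three bytes as ASCII

print(as_int)    # 2189640
print(as_text)   # Hi!
print(as_float)  # a subnormal float vanishingly close to zero
```

The bytes themselves don't change; only the interpreting program bestows the
schema.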

In any case, what does "copying the brain" mean if we don't fully understand
how it works? How can we bestow meaning on the raw data, assuming we can even
collect it, without knowing exactly how the brain itself interprets it?

~~~
GCU-Empiricist
Ideally, if you are copying a brain, you are going to copy the I/O schema
(spine connecting to a "body", eyes, ears, nose, mouth, etc.). When you run a
brain as a computer program, you don't want a disembodied mind; you want a
person who "really is" in a VR environment.

~~~
JoeAltmaier
Discussed at length in the book "We Are Legion (We Are Bob)", where a
disembodied mind creates a VR environment to preserve its sanity.

------
bane
Why bother? It's a great general purpose organic computer that you can power
with sugar and fit into a head...but there's no reason that some other
solution to the problem might be arrived at that works along entirely
different parameters. There's lots of evidence that brain design has some deep
flaws, extreme limitations and severe compromises. Look at the crazy stacked,
hemispherical design of mammal brains -- nonsense!

Besides, if we need a brain, there's literally trillions of them already all
over the planet. In fact they're so common they're already commoditized,
everybody has one!

~~~
7Z7
Be able to make a brain, hopefully means being able to make a better brain -
or even, to make our brains better.

------
pmoriarty
Copy at what level?

Cellular? Molecular? Atomic? Subatomic?

~~~
da_chicken
The subtitle appears to be "Intensive efforts to re-create human cognition
will transform the way we work, learn, and play." So, functional copying,
presumably.

~~~
rzzzt
One article of the series revolves around MOSFET-based analog circuitry for
the building blocks: [https://spectrum.ieee.org/computing/hardware/we-could-
build-...](https://spectrum.ieee.org/computing/hardware/we-could-build-an-
artificial-brain-right-now)

------
m3kw9
When they do “copy” it and successfully put it in a human, it will make Pet
Sematary the movie look like Driving Miss Daisy

------
alyx
Hard problem of consciousness

[https://en.m.wikipedia.org/wiki/Hard_problem_of_consciousnes...](https://en.m.wikipedia.org/wiki/Hard_problem_of_consciousness)

~~~
Symmetry
And one of the linked articles was explicitly engaging with that.

[https://spectrum.ieee.org/computing/hardware/can-we-
quantify...](https://spectrum.ieee.org/computing/hardware/can-we-quantify-
machine-consciousness)

------
bmer
[https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind](https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind)

------
aaimnr
We've no idea how neurons actually work, but hey, sure, let's copy the whole
brain! Most recent articles about the brain are some kind of infotainment
porn.

~~~
quickthrower2
Downvoted, as you probably didn't click the link and just left a flippant
comment. It links to a page of interesting articles related to this topic.

------
tritium
Identical twins handily prove that copying a brain still produces different
people.

It’s true for Siamese twins, and it’d be just as true for cloned genetic stem
cells 3D-printed from a snapshot of the exact cellular structure of your brain
at a moment in time, transplanted into your body.

If I copy your brain, you won’t be inside it, even if it’s effectively you, as
far as anyone else can tell.

So, I could comfort myself with a replica of each dead parent. But the parents
that raised me, those people are dead. They won’t be there to see me sharing
Christmas with their atomic-precision duplicates.

Virtual simulations aren’t people, aren’t human beings, no matter the fidelity
of the simulation. Maybe you can hold a conversation with a hologram, but then
what?

Would one expect to experience what it feels like to become and persist as a
haptic-enabled hologram? Pretty sure that’s a bogus concept, turtles on down.

~~~
PeterisP
Why do you assume that identical twins have identical brains?

The brain's physical structure is altered by lifetime experience; learning
changes large-scale physical properties of neurons, e.g. forming some new
synapses and pruning others.

Even at birth, identical twins have different fingerprints, different retinas
and, yes, different cellular structures in their brains. And they only diverge
further later.

It's not a given that a brain "3D printed from a snapshot of the exact
cellular structure of your brain" would be a sufficiently good copy; we know
that there are other things that matter (e.g. various intra-cell properties
and chemical concentrations within particular locations of those neurons), and
there are likely other things that matter which we don't know about yet.

However, assuming that we could make a sufficiently good copy, it would be
indistinguishable from you. It would have the same behavior, reactions,
memories, skills and understanding; it would have the same beliefs as you do,
including the belief that it's _you_ and that it has been you since your
birth, supported by memories of living your life. If the copy is sufficiently
good, then barring the signs of the operation itself (scar tissue? machinery?
video evidence of it being done), neither your relatives nor (the new?) you
yourself would be able to make the distinction.

Yes, you could argue that the new copy is different from the previous one,
that it's not _the same_, that it lacks continuity. But that's arguing about
what we mean when we say "the same". If you didn't know that your parents had
died, you'd have no way to tell when their copies came to celebrate Christmas
with you.

~~~
perl4ever
Suppose someone invented a "matter transporter", that was to all external
appearances approximately equivalent to the standard device that is a science
fiction trope.

But what actually happens with this device, is that once a person steps into
the chamber, and is scanned, and the information to recreate them is sent to a
receiver elsewhere, by some beaming method, the original is dropped into a
"recycling chamber" where their body is shredded, compacted, and made into
soylent green or something.

Would you voluntarily use this device? All the people who go through it report
at the other end everything went well, since it in fact makes perfect copies.

One tangential idea I also want to mention is that there is a mental disorder
where people incorrectly perceive familiar things as false and strange. So,
given your scenario of your parents coming to Christmas, there exists a
disorder where your brain tells you they are some sort of duplicates or
imposters, but you can't explain why. I don't have a reference offhand, but
it's probably either something I read in a book by the neurologist Oliver
Sacks, or some book about schizophrenia.

