
A major milestone has been reached in our Brain Preservation Prize - porejide
http://blog.brainpreservation.org/2015/05/26/may-2015-bpf-prize-update/
======
maaku
Note that this approach should be compared with cryonics, which attempts to
vitrify the brain and preserve it in a glass-like state at liquid nitrogen
temperatures:

[http://www.alcor.org/](http://www.alcor.org/)

Personally I have some reservations about the plastination approach to
personal longevity. Depending on your philosophical views on the nature of
consciousness, it may be that under this procedure you would die and cease to
exist, while some future emulation of your brain thinks it is you -- i.e.
biological-you and future-emulated-you are two separate people that just share
memories.

It is, however, interesting science and may be a short-cut path to getting the
necessary scanning resolution for whole brain emulation.

~~~
danieltillett
Isn't this the same as saying you are a different person every morning and
that the person that went to sleep and the person that woke up are separate
people?

~~~
sanoli
No, it's very probably not the same. The brain doesn't shut off when you go to
sleep. It keeps going, it keeps doing its stuff: messing with memories,
thoughts, all the bodily functions, and a whole lot of extra stuff that we
don't know too well, but which probably still makes you you. Killing _that_
brain and then simulating it somewhere else is not the same.

~~~
fallous
More importantly, does anyone suggest that running the original brain and the
copy doesn't result in two separate diverged intellects? If running two
results in separate consciousnesses then obviously they are not the same.

~~~
x1798DE
I don't think that follows. If "your" consciousness is just the consciousness
that has a continuity of memory with a previous version of you, then the two
copies would both be you, but they would not be one another.

~~~
maaku
Let's say that a suitably advanced fMRI is developed that is able to map out
the connectome non-destructively. I use this device on you to create a copy of
your mental state, and then let you go about your day. At some point later I
turn on a whole-brain simulation from this data. What do you, the you-that-
walked-into-the-scanner, expect to experience?

~~~
erroneousfunk
You'd experience nothing unusual.

~~~
maaku
I agree. But the further implication is that, for the same reasons, if your
brain is plastinated and later scanned and turned into an uploaded whole-brain
emulation, you'd still be dead. Uploading is not a pathway to personal
longevity, or whatever you want to call continuation-of-me-not-just-my-
memories.

~~~
JoeAltmaier
That's the crux of it all, isn't it? Is your self any more than your
personality and memories? If so, then you'll have to resort to a soul or some
such. If not, then the upload is really 'you', or a copy anyway.

And who's to say a soul wouldn't attach to a copy anyway? Souls are not that
well understood. Perhaps it would be fooled by the copy, or have an affinity
for it, or some such. As long as we're speculating.

~~~
maaku
No, there are plenty of perfectly reasonable physical theories for the nature
of consciousness that don't equate identity with memories and don't involve
souls. There's no reason to resort to dualism.

For example, there is the identity-is-the-instance-of-computation theory which
says that it is not the information being computed (memories) that is
relevant, but the computation itself.

~~~
JoeAltmaier
Agreed, the hardware/wetware is just as important as the bits being uploaded.
Especially for chemical brains that store much of personality as neural
wiring.

But let's say that's uploaded as well, as part of the 'program' details. Then
where are we? An 'instance' of this is not actionably different from any
other, if it behaves exactly the same. It's arguable that they are the 'same
person' in some sense.

~~~
maaku
It matters to the person who is now dead and not living on in the machine.

~~~
JoeAltmaier
But they are living on! Kind of. Like you are, in that body of yours, once all
the cells are replaced by new cells every decade or so. It's OK; you still
sound like the same person.

~~~
maaku
Another strawman. No one is claiming that identity is tied to the molecules
that make up the body, even in aggregate. There's a sense in which a car
remains the same car even after comprehensive ongoing maintenance has
replaced every single part, yet that car remains distinct from the next car
off the production line. Does that example make sense?

~~~
JoeAltmaier
Come on! If it's a new car, it's a different car. It doesn't matter how
convoluted the path to get there (replace every part, or build new). Not a
strawman; an example pointed right at the argument that 'a copy isn't the same
thing'. Be fair.

~~~
maaku
You be fair too. The instance-of-computation model of personal identity allows
for cells of your brain to come and go, but as long as the whole thing is
operating continuously, you remain. It is exactly analogous to my car example.

~~~
JoeAltmaier
It's also exactly analogous to my build-an-entirely-new-one case, where I
program it exactly as the previous one was programmed. It has exactly the same
result. If I did it without anyone looking, they would never be able to tell
the difference.

~~~
fallous
Except the original that you replaced with your copy. You're opting for an
external functional description of identity but the discussion is about the
individual.

It should be understood as conceded that a perfect copy of me would pass any
Turing-style test applied by an external auditor, convincing them that the
copy is me, but that doesn't mean that I am the copy.

~~~
JoeAltmaier
It's different in a sense, sure. But consider: if I replaced it so perfectly
that it was atom-by-atom identical, then God himself would not be able to say
whether it was you or not, unless we admit to some external agency that
defines 'you' and is not present in the mechanism, e.g. a soul.

------
iLoch
One step closer to me being able to spin up multiple instances of my brain to
work on my side project and play video games with.

~~~
ttty
Please add a load balancer too. I hope we can have an interface like Amazon
EC2, but for brains.

~~~
vidarh
That would have been a somewhat less ridiculous explanation for The Matrix.
"Oh, those vats full of humans? They're our neural network cloud".

~~~
jessaustin
Yes, and considering that _The Fall of Hyperion_ was published in 1990, that
scenario should have been obvious to the Wachowskis.

~~~
eli_gottlieb
It was actually what they intended, before the execs told them audiences
wouldn't understand it.

~~~
vidarh
Not surprising, considering I had relatively techie friends that were
absolutely mindblown by concepts like pervasive virtual reality (never mind
taking the next step beyond the Matrix to "uploading"). I was utterly taken
aback at the realisation of just how foreign ideas like that were to people
that weren't steeped in SF.

------
Udo
We just had a plastination vs. cryonics debate
([https://news.ycombinator.com/item?id=9595853](https://news.ycombinator.com/item?id=9595853))
which might be of interest here.

------
danieltillett
I am glad progress is being made here, but until we can avoid the destruction
that occurs in the last 24 hours of an expected death (e.g. cancer), or the
damage that occurs with an unexpected death (e.g. heart attack or trauma) from
the body sitting at room temperature for hours afterwards, all we are going to
be preserving is grey mush.

~~~
imaginenore
You're assuming this "grey mush" can't be recovered. We don't actually know
that. A sufficiently advanced AI should be able to recover a person from way
less.

~~~
davidgerard
You're assuming the phrase "a sufficiently advanced AI" answers anything at
all.

Presumably you're assuming if the information is there at all - if the
necessary data hasn't been scrambled beyond the noise floor of the scrambling
process - then there's something for magic (because you're really talking
about magic here) to work with.

So, please (a) set out your claim with precision (b) back up your claim.

* What is the information you need to recover?

* To what degree is it scrambled?

* What of it is scrambled below the noise floor of the process?

* How do you know all this? (wrong answer: "here's a LessWrong/Alcor page." right answer: "here's something from a relevant neuroscientist.")

For comparison: even a nigh-magical superintelligent AI can't recover an ice
sculpture from the bucket of water it's melted into. It is in fact possible to
just lose information. So, since you're making this claim, I'd like you to
quantify just what you think the damage actually is.
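The ice-sculpture point can be made concrete with a minimal toy sketch (an editor's illustration, not a physics simulation, and nothing from the thread itself): a many-to-one process maps distinct inputs to the same output, so no recovery algorithm, however intelligent, can invert it.

```python
# Toy sketch: a many-to-one "melting" process sends different detailed
# states to the same final state, so no recovery procedure, however
# clever, can tell which original it started from. The information is
# simply gone, as with the melted ice sculpture.

def melt(microstate):
    """Coarse-grain a detailed state down to one macroscopic number,
    like an ice sculpture melting into a bucket of water: only the
    total 'mass' survives."""
    return sum(microstate)

sculpture_a = [1, 9, 4, 6]  # two different "sculptures" ...
sculpture_b = [5, 5, 5, 5]  # ... that melt into identical "puddles"

# The puddles are identical, so no function of the puddle alone can
# distinguish the originals.
assert melt(sculpture_a) == melt(sculpture_b)
print("puddles identical; originals unrecoverable")
```

The point of the sketch is that irreversibility is a property of the process, not of the cleverness of the observer looking at its output.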

~~~
SCHiM
I'm quite sure I've read somewhere that information cannot be lost in the
absolute sense: lost to us, yes; lost irrevocably and irrefutably, no.

In that sense, 'a sufficiently advanced AI' is not magic, because when people
say that they definitely have something in mind, at least the people I often
discuss this with do.

In short: if you're smart (fast, precise, determined) enough to look at the
individual molecules of a puddle of brain-goo, and if you can infer the way it
collapsed by ray tracing those molecules back through how they collided with
each other and the walls of your mold, then it should be possible to
reconstruct the spatial form of the brain, at least. That's a pretty big IF
obviously, but equally obviously not impossible, if only you can look
deep/far/fast enough.

If you want to be theoretical about it, then yes. There is probably an upper
bound on how smart/big an AI mind can possibly be. And thus there is a limit
on how much information it can extract from arbitrary systems. So I agree with
your assertion that there is information that even the smartest of all AIs
cannot possibly reconstruct, but I'm not sure that the brain is such a
structure.

Any justification about why/how 'a sufficiently advanced AI' could come about
is more questionable.

Many knowledgeable people are making guesses based on our current
understanding of intelligence/computation/AI, and then extrapolating. The
paradoxical thing is that on the one hand AI-doomsday speakers tell us not to
anthropomorphise the motives of an AI (for good reasons), but on the other
hand they apply human reasoning/understanding to predict such
machines/patterns.

~~~
davidgerard
> I'm quite sure I've read somewhere that information cannot be lost in the
> absolute sense, lost to us: yes, lost irrevocably and irrefutably: no.

This is probably not quite at the requested standard of backing up a claim,
and sounds very like "but you can't prove it isn't true!" But I'm not the one
making a claim.

In any case, please back up your claim. What is "the absolute sense"? How does
it differ from "in a practical sense", with examples?

> In short: if you're smart(fast, precise, determined) enough to look at the
> individual molecules of a puddle of brain-goo. And if you can infer the way
> it has collapsed by ray tracing those molecules back through how they
> collided with each other/the walls of your mold then it should be possible
> to reconstruct the spatial form of the brains at least. That's a pretty big
> IF obviously, but equally obviously not impossible. If only you can look
> deep/far/fast enough.

Noise floor. In this case, thermal noise.

Also, you literally can't know that much about all the molecules in your
puddle of goo. (Heisenberg.) We do not live in a Newtonian universe.
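The noise-floor point can be illustrated with a standard chaotic system. This is a hedged toy sketch: the logistic map is my stand-in for molecular dynamics, chosen for illustration only. Any measurement error, however tiny, grows exponentially, so running trajectories backwards requires initial data far more precise than can physically be measured.

```python
# Toy sketch of sensitive dependence: the logistic map at r = 4 is a
# textbook chaotic system, standing in here for molecular collisions.
# Two states closer together than any conceivable measurement error
# become completely different within a few dozen steps, so "ray tracing
# the molecules back" would need impossibly precise initial data.

def step(x):
    return 4.0 * x * (1.0 - x)  # logistic map on [0, 1] with r = 4

a, b = 0.3, 0.3 + 1e-12  # the "same" state, up to a tiny measurement error
steps = 0
while abs(a - b) < 0.1:  # iterate until the trajectories visibly diverge
    a, b = step(a), step(b)
    steps += 1

# The map stretches errors by at most 4x per step, so growing from
# 1e-12 to 0.1 must take more than 10 steps; in practice a few dozen.
print(f"indistinguishable states diverged after {steps} steps")
assert 10 < steps < 200
```

Run forward, the two trajectories scramble; run backward, the same exponential sensitivity means any uncertainty in the final state swamps the initial state you are trying to recover.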

~~~
SCHiM
[http://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_i...](http://en.wikipedia.org/wiki/Entropy_in_thermodynamics_and_information_theory)

[http://phys.org/news/2014-09-entropy-black-holes.html](http://phys.org/news/2014-09-entropy-black-holes.html)

[http://phys.org/news/2014-09-black-hole-thermodynamics.html](http://phys.org/news/2014-09-black-hole-thermodynamics.html)

Ben Crowell, PhD in physics:

[http://physics.stackexchange.com/questions/83731/entropy-inc...](http://physics.stackexchange.com/questions/83731/entropy-increase-vs-conservation-of-information-qm)

The reason I didn't/don't back up those claims is that I'm really not
knowledgeable about those subjects. I'm not sure what good sources are or how
legitimate these are, but I did read it at some point, even if I cannot
interpret the technical jargon behind it and/or give more nuance to my claim
due to a low understanding of the subject.

A bit of googling for "can information be lost" or "conservation of
information" turns up the articles I linked to above.

But you have dodged my rebuttal of your initial claim, the one I was really
responding to: that 'sufficiently advanced AI' is not just a stop-gap word for
magic. In this case it doesn't stand for "I don't know how or why, but this
and this"; instead it stands for "I don't know why (in the motivational
sense), but given a bigger brain one can use and interpret finer instruments,
which in turn enables us to extrapolate further back in time".

~~~
davidgerard
None of your links support your claim that winding the clock back is even
theoretically possible, and the stackexchange link seems to say it isn't: "The
resolution is that entropy isn't a measure of the total information content of
a system, it's a measure of the amount of hidden information, i.e.,
information that is inaccessible to macroscopic measurements." Even if you're
assuming a physical God, that physical God can't get good enough measurements.

~~~
SCHiM
I think that perhaps our views of the world are slightly off-
kilter/incompatible.

I agree with you that even godlike-AI must have an upper bound on what they
can extract from a 'puddle of atoms'. It's obvious that given a handful of
atoms it's not possible to predict what happened to a completely different
bunch of atoms 5 billion years ago at the other side of the (observable)
universe. That's also not what I'm claiming.

What I do claim is that, given enough smarts, it's possible to do this to a
bunch of molecules present in the brain-goo.

I'm assuming here that whatever it is that makes the brain 'tick' is located
on the molecular level, and not a lower level.

As to your claim of being able to 'turn back time', don't we do this all the
time?

If we look at the link we've both referenced: say we had two pictures of the
last milliseconds of the book falling, and we knew the exact time between when
these pictures were taken, then we can turn back time, right? We know exactly
how/when/where the book was if we can interpret those pictures.

In a similar way, the information about the locations of the molecules in the
'brain goo' is available to a 'sufficiently advanced AI'. Thus what I'm
arguing is that this is not information that is 'lost' in the way that we've
been discussing so far.

Therefore it's also not 'magic' when people refer to such an AI, because when
they do they have this in mind: not some law-bending/breaking super godlike
AI, but rather a system with the resources needed to stitch together the
complete video from the last two images.

~~~
davidgerard
> I think that perhaps our views of the world are slight off-
> kilter/incompatible.

Yeah, possibly. I blame LessWrong fatigue. It's an entire site made of
handwavy claims that, no matter how far you trace back through the links,
never quite actually get backed up. So I tend to be harsh on similar claims,
particularly when they appear to be from that sphere (judging by the buzzword
"sufficiently advanced AI", which is in practice used to put forward
outlandish claims and then try to reverse the burden of proof).

I actually started reading the site because of a friend who was getting into
cryonics. I'd hitherto been neutral-to-positive on the idea, but the more I
investigated it the more I went "what the hell is this rubbish." (Writeup is
at
[http://rationalwiki.org/wiki/Cryonics](http://rationalwiki.org/wiki/Cryonics)
which is a very middling article, and is still about the best critical article
available on the subject ...) The handwavy claims are endemic, quite a few
rely on effective magic (actual answers from cryonicist: "But, nanobots!" or
"sufficiently advanced AI") and it really is largely just ill-supported guff,
even if I'm being super-charitable to the arguments. Extracting a disprovable
claim is nearly bloody impossible itself.

> As to your claim of being able to 'turn back time', don't we do this all the
> time? If we look at the link we've both referenced, say we had two pictures
> of the last milliseconds of the book falling, and we knew the exact time
> between when these pictures were taken then we can turn back the time right?
> We know exactly how/when/where the book was if we can interpret those
> pictures.

But we couldn't do that if the data had been destroyed. That's the claim way
up there: the information is recoverable from the mashed-up goo. The two
pictures have been destroyed, we have the book sitting on the floor, there's
nothing to reconstruct the fall in sufficient detail.

I say this because whenever I've seen an actual neuroscientist who's been
asked this sort of question (can we recover the information with a magic AI or
whatever), they answer "wtf, no, it's been utterly trashed. No, not even in
theory. You can't even measure it. It's been trashed utterly." The questioner
usually comes back with "but if we use a SUFFICIENTLY ADVANCED AI ..." _i.e._
, if we let them assert their conclusion. And first they'd have to show you
could measure stuff on the nanometre scale without messing it up. Let alone,
_e.g._ , reconstructing the precise locations of proteins in a cell after
they've been denatured by cryoprotectant. Remember that it's a claim about
physical reality that's being made here.

(A couple of examples, from scientists who would LOVE to be able to preserve
and get back this information:
[http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson...](http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryonics/6krm)
[http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyo...](http://freethoughtblogs.com/pharyngula/2012/07/14/and-everyone-gets-a-robot-pony/) )

> In a similar way, the information about the locations of the molecules in
> the 'brain goo' is available to a 'sufficiently advanced AI'.

Remember that there is no way to distinguish two molecules of the same
substance. You're requiring more information than can actually be measured
(Heisenberg).

