
Will You Ever Be Able to Upload Your Brain? - andres
http://www.nytimes.com/2015/10/11/opinion/sunday/will-you-ever-be-able-to-upload-your-brain.html?_r=0
======
rl3
The author seems to have written the article on the assumption that it will be
humans figuring out a brain upload mechanism.

While his time frame estimates might be sound given that context, I'd argue
they're wildly inaccurate given a more likely scenario: such things will be
brought into existence first via a super-intelligent agent, if at all.

If that proves to be the case, the time frame for brain uploading becomes
roughly bound to the advent of AGI. When that happens, the same entity capable
of creating our brain upload mechanism would likely be capable of endowing
humans with effectively immortal bodies just the same.

In other words, technology like this is almost certainly going to be part of a
post-singularity world (assuming there is a singularity in the first place),
and who knows what that will look like.

~~~
Retric
There is a huge range between slightly-smarter-than-human AI and the kind of
'magic' AI it would take to make the singularity happen. An AI that's 10%
smarter than any human would probably not be that noticeable outside of
testing, since it's not obvious who the smartest person in a room is, and it
would take a decade of study to catch up to the cutting edge in just one
field. Make it 10x that and you're doubling the rate of progress a single
person was making, but that's not really going to change much.

Suppose we develop an AI that's 100x better than that, or 10 times as fast as
any person, in the sense that it can do in 1 hour what would take a highly
trained and educated person 10 hours. Well, after a year or two it could
probably be roughly as capable as a really good 20-person team due to lower
communication overhead, but adding one more 20-person team is not going to
change the world much. A million of them might double the rate at which the
world progresses, but that's both a long time coming and probably not going to
change things much.

To really get singularity-type progress, you need an AI that can make itself
smarter through better software and hardware, which seems like a far harder
problem than just building a working AI. After all, what if the first thousand
just want to sit around and read fan-fiction all day?

~~~
rl3
Most fast-takeoff AGI scenarios involve a singular intelligence created from
an algorithm that scales easily with how much computing power is allocated to
it.

Combined with the notion of computing overhang[0], it's possible that our very
first AGI could far exceed human intelligence, such that it would have
capacity for rapid self-improvement right from the start.

I'm not saying this will necessarily be the case; slower takeoff scenarios are
certainly a possibility, albeit an unlikely one (IMHO).

Another point to consider is that computing hardware is vastly superior to the
human brain both in raw calculation capacity and in sheer speed, in terms of
raw latency.

Keeping that in mind, assume we have created an AGI simulation that's
approximately equal in complexity relative to the human brain. Not
intelligence, just complexity of the simulation itself.

Now consider that AGI is not bound by biological constraints, and would not
necessarily be structured in similar fashion to the human brain. This lends
itself to potentially far more efficient architectures, and with that, the
realization of AGI at a computational cost far below that of simulating a
complete human brain.

Moreover, given a completely alien architecture, it's not hard to imagine the
cognitive side of such a simulation interfacing rather directly with
traditional modes of computation.

To illustrate, imagine comparing a human being to an identical human being
that has a microprocessor integrated into their brain. Both humans may have
nearly identical capacity for abstract intelligence, but the one with the
microprocessor is going to be effectively far more intelligent, simply because
their capacity for dealing with raw data and calculations is vastly superior.
It's not hard to imagine a similar dynamic coming into play with AGI.

[0]
[http://wiki.lesswrong.com/wiki/Computing_overhang](http://wiki.lesswrong.com/wiki/Computing_overhang)

~~~
baddox
> Most fast-takeoff AGI scenarios usually involve a singular intelligence
> created from an algorithm that easily scales based on how much computing
> power is allocated to it.

Still, it's pretty easy to imagine this not resulting in the sort of "runaway"
intelligence most often associated with the notion of singularity. What if
general intelligence is inherently extremely computationally complex? Imagine
that the level of intelligence is represented by a number n, and say that
human intelligence is n=100. If the computational complexity of intelligence
is, say, O(1000^n), we won't expect to see hardware that's twice as
intelligent as humans any time soon even if we know the algorithm.
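To make that scaling concrete, here is a toy calculation of my own, taking the hypothetical O(1000^n) cost at face value and assuming one hardware doubling every two years (Moore's-law-ish):

```python
import math

def doublings_needed(n_from, n_to, base=1000):
    """Hardware doublings needed to move from intelligence level n_from
    to n_to, if compute cost grows as base**n."""
    return (n_to - n_from) * math.log2(base)

# One extra point of intelligence already costs ~10 doublings (~1000x compute).
print(doublings_needed(100, 101))   # ~9.97 doublings

# "Twice as intelligent as humans" (n=200) under this scaling:
d = doublings_needed(100, 200)      # ~997 doublings
years = d * 2                       # at one doubling every two years
print(round(years))                 # on the order of two thousand years
```

So even with the algorithm in hand, exponential cost in n would make a modest intelligence gain a millennia-scale hardware problem.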

~~~
rl3
If that were the case, I would agree we would be quite far away from simulated
human-level cognition.

My main point was that either we have the computational capacity, or we do
not. If we do, the chances of overshooting the human-level mark by a large
margin are quite high, hence the risk.

Another angle on this is that perhaps human cognition has an unnecessarily
high degree of computational complexity, a degree that's far above the
threshold required for an AGI to be dangerous.

The Paperclip Maximizer[0] is an excellent example of this. Perhaps due to low
computational complexity of its architecture, the thing wouldn't even be
sentient in any sense of the word, and yet it'd still be perfectly capable of
destroying the world.

Why? Because its initial intelligence would be determined by the amount of
computing power allocated to it. Allocate enough computing power, and assuming
you've passed the magic threshold, it would start allocating itself computing
power in short order.

Perhaps what we should fear most is not sentient AGI capable of thought
similar to humans, but AGI that's a structured amalgamation of existing AI
technology—if only because the latter is far more likely to be feasible first.

[0]
[http://wiki.lesswrong.com/wiki/Paperclip_maximizer](http://wiki.lesswrong.com/wiki/Paperclip_maximizer)

------
0xcde4c3db
I think discussions around this are ill-served by the "upload" metaphor, as it
somewhat implies that the new brain, the original brain, and the uploading
process are all largely separate things. It seems much more likely to me that
the new brain will take the form of an augmentation that survives the death of
the original brain. The personality would be "uploaded" not by deliberately
sampling parameters to feed a model, but by the new brain organically (heh)
becoming a component and embodiment of the existing personality. I'm sure this
isn't a new idea, but I don't know what it's called.

~~~
svantana
This is similar to the Moravec Transfer [1], a proposed method of (slowly)
transitioning to a non-biological brain. Indeed the whole concept of "the
self" is important to maintain, even if it is to some degree an illusion.

1\.
[http://everything2.com/title/Moravec+Transfer](http://everything2.com/title/Moravec+Transfer)

~~~
amelius
There is no guarantee that the subject will not lose a bit of consciousness on
every neuron that is being replaced (even if, to outsiders, it appears not to
be the case).

~~~
0xcde4c3db
I would expect a hybrid meat-computer personality to notice that at some point
in the transition. If not -- if consciousness is so profoundly ethereal that a
person's own consciousness can be gradually replaced with a simulated one
without the person themselves even noticing -- then what's the difference
between the "real" consciousness and the simulation?

~~~
meowface
>then what's the difference between the "real" consciousness and the
simulation?

Well, isn't that the whole idea? Objectively, there is no difference. If there
was, uploading wouldn't ever be possible.

The only potential difference is the subjective impression of "this
consciousness that I am now is the same consciousness that I was yesterday"
versus "this new consciousness is an exact clone of mine, but is not really
'me'".

This incremental upload technique simply tricks your subjective self into
thinking there is absolutely no change in continuity.

~~~
pdonis
_> This incremental upload technique simply tricks your subjective self into
thinking there is absolutely no change in continuity._

In other words, your subjective consciousness is somehow changed but you can't
subjectively tell? What does that even mean?

~~~
meowface
These are tiny, atomic, piecemeal changes that happen slowly over time.

If one of your neurons randomly dies right this instant, you would probably
not subjectively notice, even though that neuron composes your subjective
consciousness. Your neural network is highly redundant and fault-tolerant.

~~~
pdonis
Yes, neurons in our brains are dying every day, but it doesn't affect our
subjective consciousness. Are you simply saying that uploading, if it were
done carefully, would not make any more difference to our subjective
consciousness than neurons dying every day?

------
ryanmarsh
When I look at efforts to synthesize the human brain within a computer I see
an effort to make a synthetic bird rather than a machine that flies.

If this analogy has any use as a model beyond a cursory observation, then a
focus on machines that solve hard problems will bear fruit earlier (and
perhaps at all) than attempts to fully replicate the human brain.

Is it possible today to build an (outwardly) anatomically correct and
functioning sparrow (in all respects) out of synthetic materials? I don't know,
but surely it is hard. Making one certainly would have been impossible for
the Wright brothers.

Today we have things like the F22, A380, and little drones. Each is quite
complex but the complexity accreted over time, each layer a pragmatic solution
to a problem at hand.

If we take the same approach what kind of "thinking machine" might we end up
with in 100 years time?

~~~
kr4
Thinking (or thoughts) is different from consciousness. A simple experiment
can show you this. Sit silently and observe your mind. You'll see thoughts
keep popping up; if you don't indulge them, each will fade away quickly. If
you do indulge, you get taken on a ride along a chain of similar thoughts.
This riding is termed "thinking".

IMO the neuron pathways that exist in our physical brain are determined based
on "rides of thoughts" we have taken in our life so far. But there's no visual
or physical proof to locate the consciousness that does this observation and
takes the ride.

So uploading brains means uploading only "the experiences" you have
accumulated in your life, but as long as the rider isn't there, these are mere
memories, with no further rides possible.

~~~
PhasmaFelis
Are you advocating for an immaterial soul?

~~~
c22
Is there any other kind of soul? It sounds like kr4 is arguing for
consciousness as a higher order emergent phenomenon separate from (but built
atop) thoughts.

~~~
PhasmaFelis
I mean, it sounds like they're saying that the physical (and thus uploadable)
brain is not itself conscious, but only a highway for thoughts to ride on and
a storehouse for memories. That clearly implies that there's a non-physical
thing that is actually conscious, whose existence kr4 takes on "gut feeling"
despite lack of evidence.

If consciousness is an emergent phenomenon of the physical properties of the
brain, then surely it will also emerge from a sufficiently accurate simulation
of those physical properties.

~~~
c22
> If consciousness is an emergent phenomenon of the physical properties of the
> brain, then surely it will also emerge from a sufficiently accurate
> simulation of those physical properties.

It's possible, but it may not be the "same" consciousness, and indeed, may not
even consider itself to be. Perhaps better terminology to explore this line of
theory would be to replace "thoughts" with "memories" and "consciousness" with
"self".

------
MichaelGG
> We all find our own solutions to the problem death poses.

No! Nobody does this. People make up excuses. People spin death as a
positive, often under some guise of Deep Wisdom. People come up with all sorts
of ways to _cope_ with death, but calling any of them a solution is just
false. Death is a vile, atrocious thing, the biggest enemy humanity has.

We don't say that slaves all found solutions to slavery, or that everyone
finds solutions to domestic abuse, or solutions to dementia or Alzheimer's.
Death is a far greater evil.[1] So how disgusting is it to say everyone finds
a solution?

I admit the author probably didn't intend to imply this, but it's exactly that
kind of thinking that we should be aware of and fight. Just because it seems
inevitable, we should not make it socially acceptable to give up and view
death as anything but the wickedness it is.

1: Yes, there are atrocities worse than death, but many of those involve
death, or are worse because of the limited timespans death causes. Apart from
having your mind destroyed, I'm guessing most things would be healed by
sufficiently long periods of time, and that sufferers would prefer suffering
plus a long, OK life over suffering plus death.

~~~
meric
A world without death is a world where flowers _never_ change into fruit, a
world where fruit _never_ changes into a new plant. Remember there's a theory
of the universe ending in heat death, where entropy is at maximum and no more
change is possible. To prevent death is to prevent change, and you bring about
a world that, in the long term, becomes changeless, like a giant museum:
flowers and fruit and plants stay in the same state and never age,
caterpillars never turn into butterflies, squid never mate and reproduce,
everyone immersed in their VR, day after day, for eternity. Philosophically,
how is that different from the heat death of the universe?

The experience after death is exactly the same as the experience before birth.
Was it a vile, atrocious thing that you weren't born until you were? Death is
about returning to that state. The disintegration of your mind is the mirror
image of the creation of it. A world where you have left without right, top
without bottom, beginning without end is like the universe before the big
bang, or the universe after the heat death, where we, you, me, and everything
else, are returned to being one.

Lengthening life span? That's a different matter. But I can tell you, you will
never stop death from happening. It's perfectly acceptable to accept death
will happen, and more isn't always better. ;)

~~~
teraflop
If death is so noble and necessary, why should anyone have a problem with
murder? Why should we bother trying to cure diseases? Do you think the world
was twice as dynamic and full of vitality 100 years ago, when the global
average life expectancy was half what it is today?

I don't think making people immortal would "prevent change" because there are
plenty of kinds of change that don't involve ending a person's life without
their consent. Is that not obvious?

~~~
meric
Death happens, if not now then when entropy is at maximum, and birth happens
too. Murder happens, and people angry at murder happen also. Diseases happen,
and cures for diseases happen also. People going to work late happens, and
people going to work on time happens too. People falling in love and being
rejected happens, and people falling in love and being reciprocated happens
also. If you were to tell me what life is, that's life. That's vitality. You
can have both left and right, good and bad, or nothing. There is duality, and
then there is oneness, and you can't have one without the other. Life is
duality; non-life is oneness. Maximum oneness is maximum entropy.

~~~
kbenson
> If you were to tell me what life is, that's life.

That boils down to "life is stuff happening", which is true, useless, and does
nothing to answer the question posed. It is trivially obvious that it's
impossible to stop _all_ death, or even all human death, so I think it's also
obvious that what's being talked about is not the elimination of death, but
the elimination of the current main cause of death, which is aging.

~~~
meric
I was replying to MichaelGG's statement:

 _Just because it seems inevitable, we should not make it socially acceptable
to give up and view death as anything but the wickedness it is._

As I said before,

    Lengthening life span? That's a different matter.

EDIT: Reply to below, that's a good explanation, thanks. I don't think I could
come up with it so clear and succinct, if I could I would have!

~~~
kbenson
Yes, I was trying to address specifically that I think you are arguing a
point which is somewhat orthogonal to the current discussion, because both
sides are using similar terms to mean different things. Your terminology is
more correct (you are addressing death as a specific concept, and its
inevitability), while they are addressing age extension, possibly to its
logical conclusion at the end of the universe, but using "death" to denote
that, when it's obvious there are other causes of death that cannot be
stopped.

In other words, I think there is no real difference of opinion, just a
difference in terms making it seem so. This appears to have been exacerbated
slightly by your flowery description of your point, which I take to be:
"Death is impossible to stop in the end, and giving the false impression
that it is may be harmful. That said, if we can retard it in humans to a large
degree, that's useful. Additionally, death serves a useful purpose in many
systems, so we shouldn't lose sight of this."

------
kybernetikos
So, my concern about uploading my brain is: whose cloud service do you trust
to run your consciousness? Google? Microsoft? Amazon? Facebook? Apple?

The level of trust that I'd require is pretty high. You could imagine it once
again being relevant to check the pedigree of a company. Ideally there'd be a
cloud provider already running now that is known for its trustworthiness.
'established 2013' might actually mean something one day.

My main hope is for indistinguishability obfuscation to reach the level where
I wouldn't really have to trust the provider, but as long as you pay a
significant performance penalty you're going to end up massively
disadvantaged.

[https://www.youtube.com/watch?v=IFe9wiDfb0E](https://www.youtube.com/watch?v=IFe9wiDfb0E)

------
jacquesm
'When the construct laughed, it came through as something else, not laughter,
but a stab of cold down Case's spine. `Do me a favor, boy.' `What's that,
Dix?' `This scam of yours, when it's over, you erase this goddam thing.'

~~~
kbenson
It's approaching two decades since I read Neuromancer. I think it's time to
revisit, since I barely remember it. Amazingly enough though, I still have
very distinct memories of the CGA graphics and some situations from the game.
The coffin hotel, the diner, pawning organs. And I played that a decade before
I read the book. How has nobody remade this game yet, with the indie
resurgence of adventure games?

~~~
anonmeow
If you want a hard science-fiction novel about digital humans you could read
Greg Egan's Permutation City, it's a masterpiece.

~~~
MichaelGG
Greg Egan is fantastic; I particularly liked his short stories.

One should note that he says people shouldn't expect to just sit down and read
a book, but should use a pencil and paper to help figure things out. Though
that doesn't apply so much to Permutation City, it does to other stuff like
Orthogonal and Incandescence. (From what I've heard Schild's Ladder, too).

------
nradov
While I have no hard evidence for this, I expect we will eventually find that
human intelligence and consciousness depend heavily on quantum effects. Thus
it will always be impossible to scan and upload a human brain in a way that
captures the essence of a person's mind.

Even though I don't think it will ever happen I enjoyed reading the hard
science fiction novel "Hegemony" by Mark Kalina. It presents an interesting
vision of what life would be like in the far future with mind uploading.

[http://www.projectrho.com/public_html/rocket/atomicnovel.php](http://www.projectrho.com/public_html/rocket/atomicnovel.php)

~~~
nevinera
> expect we will eventually find that human intelligence and consciousness
> depends heavily on quantum effects. Thus it will always be impossible to
> scan and upload a human brain in a way that captures the essence of a
> person's mind

And you have a strong reason to believe that emulating or implementing those
quantum effects in a structure other than the human brain is impossible?

As a secondary argument: if an emulation reacts to all stimuli and behaves in
all ways exactly as the corresponding human would, claiming that it "lacks the
essence of a person's mind" prompts a pretty obvious (and well-considered)
question: how do I know you're not just an essence-less entity that acts
exactly like a Real Human would act?

Or are you claiming that correct emulation of a human's behavior is
impossible, because 'quantum'?

~~~
nradov
At some point in the far future we may be able to implement an AGI as a
quantum computer. However I don't think we will ever be able to scan and
upload an existing human mind into that computer; there isn't even any
theoretical way to measure and store the whole quantum state.

~~~
nevinera
I suspect you aren't all that familiar with quantum mechanics - you seem to
think that an object as complex as a brain could have a single quantum state
(true) that cannot be decomposed into many orthogonal localized systems
(extremely unlikely).

There isn't a theoretical way (yet) to measure and store the whole
_electrical_ state either - why bring quantum mechanics into it?

------
gwern
[http://www.brainpreservation.org/ken-hayworths-personal-response-to-mit-technology-review-article/](http://www.brainpreservation.org/ken-hayworths-personal-response-to-mit-technology-review-article/)

------
nshepperd
So, to justify their "just accept death, guys" conclusion, the author makes
broad sweeping statements like "details quite likely far beyond what any
method today could preserve in a dead brain" without providing any specific
arguments about the limitations of vitrification whatsoever. Not even a bald
assertion that "cryonics fails to preserve structure X". Yup. Okay.

------
anon4this1
Assuming that the ability to upload our entire brain to a computer will not be
available during our current lifetime, what about the idea of mass data
collection of our experiences, storing this data, then periodically running
improved algorithms that combine all our experiences into a consciousness?

This would maybe involve wearing a camera 24/7 to record everything we see and
hear, and also some feedback on our own inner thoughts and our own recording
of our emotional responses to things. When we finally die, a computer crunches
all this data using neural networks to create a consciousness based on our
life experiences and emotional responses.

The initial results might not be fantastic, but as technology progresses the
crunching of the source data improves, and every decade a new iteration of our
consciousness could be produced, hopefully coming closer and closer to the
real us.

------
comrh
Permutation City by Greg Egan explores what it would mean if you could easily
upload your consciousness but wealth meant access to better, faster, hardware
to store it.

~~~
ttctciyf
His short story, _Learning to be me_ , is a really great piece exploring a
related technology. Recommended!

------
geographomics
An additional complication is the role of the billions of glial cells that are
present in the brain. Their function of supporting neurons is quite well
characterised, but they can also more specifically modulate neuronal function,
so their place in a comprehensive connectome model shouldn't be ignored.

Then there is the network of vasculature, the flow of cerebrospinal fluid,
interactions with hormones, and just generally all the interfacing with other
bodily systems. All this would have to be measured and modelled somehow too,
in addition to all the billions of neurons.

I agree with the author; there's no way all this is going to be solved any
time soon.

------
mrdrozdov
What would I do with an uploadable version of my brain? I'd send it off to an
accelerated university where it can learn at a rate far faster than I ever
could. Plus, if we could upload a brain, then there's a good chance that we
understand how to manipulate the state of an existing brain. So after my brain
has graduated, we'll simply upload that virtual brain's state back into my
skull! All this would take about 2-3 min tops? But somehow I imagine this will
still cause university enrollment prices to climb. It's student loans all the
way down.

~~~
kemayo
It's actually interesting to consider the divergent paths in the merge
scenario presented here. If we assume that uploaded-you is put through a
simulated "going to university" experience, then it's a few years older than
you are. Even if it was _only_ simulating classes and studying, somehow
turning off any "human" needs for social interaction and entertainment, it's
had years of time to think new thoughts, assimilate new ideas, and become a
fundamentally different person.

If it's completely overwriting your brain's current state, then you're
replacing yourself with someone similar to you but who'll be _obviously_
different to anyone who knows you.

If it's more a merge scenario, then you're (effectively) killing yourself and
the copy, and maybe the copy won't want to rejoin you if it means losing its
own distinct state.

Continuity of consciousness is weird to think about.

------
tim333
A couple of quibbles with the article:

>While progress is swift, no one has any realistic estimate of how long it
will take to arrive at brain-size connectomes. (My wild guess: centuries.)

It's not so hard to estimate. Just extrapolate progress on scanning, computing
etc.: about 2050, plus or minus a couple of decades. We had a 20µm scan in
2013, and you'd probably want to get that down to 20nm for a connectome, so if
you assume resolution doubles every couple of years that would be about 2035.

(2013 scan: [http://io9.com/see-the-first-ultra-high-
resolution-3d-scan-o...](http://io9.com/see-the-first-ultra-high-
resolution-3d-scan-of-the-ent-514395280)

Images showing neural connections: [http://book.bionumbers.org/how-big-is-a-
synapse/](http://book.bionumbers.org/how-big-is-a-synapse/))
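For what it's worth, the arithmetic behind that extrapolation can be sketched directly (my own back-of-envelope, assuming resolution doubles every two years):

```python
import math

start_year, start_res_nm = 2013, 20_000   # 20 um whole-brain scan in 2013
target_res_nm = 20                        # roughly synapse-scale detail
years_per_doubling = 2

# Number of resolution doublings needed to go from 20 um to 20 nm.
doublings = math.log2(start_res_nm / target_res_nm)  # ~9.97

year = start_year + doublings * years_per_doubling
print(round(year))  # early-to-mid 2030s
```

That lands in the same mid-2030s ballpark as the estimate above, though of course the doubling assumption is doing all the work.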

Of course as the article points out a connectome misses a lot of chemical
detail.

>It will almost certainly be a very long time before we can hope to preserve a
brain in sufficient detail and for sufficient time that some civilization much
farther in the future, perhaps thousands or even millions of years from now,
might have the technological capacity to “upload” and recreate that
individual’s mind.

Or quite possibly we can do it just now for $30k or so by sticking the body in
liquid nitrogen.
([http://www.cryonics.org/membership/](http://www.cryonics.org/membership/) ).
Maybe that won't work but maybe it will.

~~~
epistasis
A couple of quibbles with this: microscopy technology doesn't advance that
quickly, certainly not doubling every few years.

A new, entirely different technology is far more likely than creeping
improvements, as in semiconductors. However, these are far harder to predict.

~~~
tim333
I was thinking about the details. We have good enough microscopes already,
resolution-wise, but cutting up a brain finely enough and imaging it with
existing electron microscopes would take ages, probably centuries with current
tech, so it needs a speed improvement more than anything. Also a 20nm scan
could produce an awful lot of data, ~a billion TB, which could be an issue
even allowing for Moore's law. Still, there's quite a lot of research money
going into this stuff.
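A rough sanity check on that data-volume figure (my own assumptions: a ~1.2 litre brain and one byte per voxel):

```python
# ~1.2 litres expressed in cubic nanometres (1 m^3 = 1e27 nm^3).
BRAIN_NM3 = 1.2e-3 * 1e27

def scan_terabytes(x_nm, y_nm, z_nm, bytes_per_voxel=1):
    """Raw data volume for imaging the whole brain at a given voxel size."""
    voxels = BRAIN_NM3 / (x_nm * y_nm * z_nm)
    return voxels * bytes_per_voxel / 1e12  # bytes -> TB

print(f"{scan_terabytes(20, 20, 20):.1e} TB")  # ~1.5e8 TB at 20 nm isotropic
print(f"{scan_terabytes(3, 3, 20):.1e} TB")    # ~6.7e9 TB at 3x3x20 nm
```

So "a billion TB" is the right order of magnitude at the finer 3x3x20nm voxel size mentioned elsewhere in the thread, and still hundreds of millions of TB at 20nm isotropic.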

------
anonmeow
With modern ML techniques it looks feasible to recreate at least the online
behavior of an individual: Facebook likes, comments etc. The datasets are
there, in Facebook/Google datacenters, and DL models are already used to model
conversation. It would be interesting to know just how many megabytes of logs
of your online activity are really necessary to extrapolate your behavior
into the future.

Facebook AI research is probably playing with such models right now.

~~~
astazangasta
Eh? Humans are not fixed entities; they adapt and learn. You can't extrapolate
into the future based on my previous anything. Six-year-old me can't tell you
who thirty-six-year-old me is.

~~~
anonmeow
Given enough data and a good model (a recurrent neural network can model
arbitrary algorithms, for example) you can learn algorithmic regularities in
data, including learning itself. There is a paper about just that:
[http://link.springer.com/chapter/10.1007%2F3-540-44668-0_13#...](http://link.springer.com/chapter/10.1007%2F3-540-44668-0_13#page-1)

Whether this can be scaled to human-like learning remains to be seen, but
training a conversational RNN model that remembers some details of past
conversations and acts on them should be possible.

------
teekert
TL;DR: The brain is very complex, so it will take a long time; I don't know
how long and I don't dare make an estimate, but as I'm getting older I am more
and more at peace with dying.

------
mrdrozdov
I sense this will be one of those articles that becomes a perfect example of
someone with a very firm position against progress being clearly proven wrong.
I wonder if we could automatically flag articles like these based on the
number of tautologically negative arguments that appear...

------
cosmez
Upload my brain, so people in the future can use it as a DIY kit for
Artificial Intelligence?

This article sounds a lot like SOMA, the video game: brain scans, uploading
your brain, etc. [http://somagame.com/](http://somagame.com/)

------
roflchoppa
I find it hard to imagine how to replicate the smaller connections of my
brain, and the inter-connections that they all have. With that being said, I
hope that it could be done, because I would be super down, and would want to
be uploaded into the net.

~~~
anonmeow
A small volume of mouse cortex has already been successfully scanned at
3x3x20nm voxel resolution, with the smallest synaptic details visible:
[http://www.cell.com/abstract/S0092-8674(15)00824-7](http://www.cell.com/abstract/S0092-8674\(15\)00824-7)
The tool is called ATLUM. The process could be scaled up.

~~~
zyxley
Yeah, it's fundamentally not "this is impossible" but "this is very hard and
expensive", and very hard and expensive tasks have a remarkable way of
becoming cheaper and easier over the long term.

------
rebootthesystem
Great Mambo Chicken and The Transhuman Condition

[http://www.amazon.com/Great-Mambo-Chicken-Transhuman-
Conditi...](http://www.amazon.com/Great-Mambo-Chicken-Transhuman-
Condition/dp/0201567512)

Very interesting book.

------
bitL
How would you program mind if you were writing a simulation?

------
nostomo17
Seems hardly worth the trouble. Don't flatter yourself that your brain needs
to stay around - highly unlikely, and very un-ecological, to power up a
machine to maintain a presence of virtualized shit for brains. Just saying.
Next question?

~~~
tim333
> seems hardly worth the trouble

Assuming it works well, why not keep in touch with your loved ones? Funerals
are so gloomy. Why not an upload party? You could make the virtual environment
like the versions of heaven in the various religious books and have that stuff
for real rather than make-believe.

~~~
comex
Not just heaven. If the right religious nutjobs get ahold of an upload...

[https://en.wikipedia.org/wiki/Surface_Detail](https://en.wikipedia.org/wiki/Surface_Detail)

~~~
rl3
You need not look to fiction in order to be completely terrified:

[https://en.wikipedia.org/wiki/Mormon_Transhumanist_Associati...](https://en.wikipedia.org/wiki/Mormon_Transhumanist_Association)

