If you really want to make it stringent, make the bounty require a password that only that (deceased) person knows. Best case scenario, they do such a good job resurrecting you that you still remember the password; at worst, there's an incentive to pick the fact out of your brain, which might be only slightly less difficult than doing the full resurrection.
I observe there's a certain prejudice, in that law is written by the living.
The other ideas of cryptocurrency/escrow/private keys would escape the law... until you tried to spend it legally. You might as well bury gold in a chest.
[...] I don't know what this great think I'm meant to be doing is, and it looks to me as if I was supposed not to know. And I resent that, right?
"The old me knew. The old me cared. Fine, so far so hoopy. Except that the old me cared so much that he actually got inside his own brain - my own brain - and locked off the bits that knew and cared, because if I knew and cared I wouldn't be able to do it. I wouldn't be able to go and be President, and I wouldn't be able to steal this ship, which must be the important thing.
"But this former self of mine killed himself off, didn't he, by changing my brain? OK, that was his choice. This new me has its own choices to make, and by a strange coincidence those choices involve not knowing and not caring about this big number, whatever it is. That's what he wanted, that's what he got.
"Except this old self of mine tried to leave himself in control, leaving orders for me in the bit of my brain he locked off. Well, I don't want to know, and I don't want to hear them. That's my choice. I'm not going to be anybody's puppet, particularly not my own."
Zaphod banged the console in fury, oblivious to the dumbfounded looks he was attracting.
"The old me is dead!" he raved, "Killed himself! The dead shouldn't hang about trying to interfere with the living!"
Uploads aren’t even the biggest worry. If it turned out to be possible to analyse and upload personalities, it would also become possible to synthesize new personalities on demand to a required spec.
How much value does organic personality actually have?
Until the "dead" uploads outnumber the living, at which point they vote in a bloc to expropriate and enslave the meat-humans, as in the novel The Uploaded.
A. thanks for the suggestion, great, terrific, wonderful (was flux good?) but B. The dead shouldn't hang about trying to interfere with the living!
And of course let everyone know that's the case beforehand!
Edit: I read the article. So they are assuming in the future that they won't bother making a new human, but just load the brain state into a computer simulation, which would be even cheaper.
Could a computer host a human mind to its full capacity?
I guess they could just kill you again. But would the threat of that happening in the future deter you from lying at time of death, when you have nothing else to lose?
It seems like there would need to be a system of proving possession of a password that would have to last 100 years. I have not thought through this kind of scenario (sorry!). Perhaps this has been considered before by those studying the problem in detail (e.g. science fiction authors)? What are the solutions?
It is debatable whether known public-key cryptography systems will survive attack for the amount of time under discussion here.
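For what it's worth, the simplest version of such a proof-of-possession scheme is a hash commitment rather than public-key crypto. A minimal sketch (the password, salt handling, and function names here are all illustrative, and the century-long-security caveat above applies to hash functions too):

```python
import hashlib
import os

# Hypothetical commitment scheme: publish the digest alongside the bounty;
# only someone who knows the password can later claim it. Security rests on
# SHA-256 preimage resistance holding for the entire waiting period.

def commit(password: str, salt: bytes) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

def verify(password: str, salt: bytes, digest: str) -> bool:
    return commit(password, salt) == digest

salt = os.urandom(16)
digest = commit("correct horse battery staple", salt)  # password illustrative

assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)
```

In practice the salt would be published along with the digest, and the revived claimant would reveal the password to an escrow agent rather than broadcasting it.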
I suspect the hack will be to extract the secret from the brain being resurrected, recover the funds, and toss the brain in the trash.
Why bother resurrecting you fully?
You don't want to be the first but you want to be the tenth or so. Once the process takes off all the bounties will drastically lose their worth because such technology will likely disrupt a lot of what humans place value on.
There is a short chapter that is narrated by the detached consciousness of one of the patrons. She has lost all sense of self and is trapped in a dim loop of pure thought, constantly questioning what she is. Probably one of the most quietly terrifying things I have ever read.
Nothing like it has happened since.
Be aware that running code on a computer is just the computation of numbers. So if computing a number makes a consciousness, then all possible states of your consciousness are happening. All the numbers are there.
Because humans have now become overlords of your very consciousness.
Imagine that I am a communist revolutionary, I take control of the governmental system, and I don't like rich people. I decide that all these virtual brains, with their wealth, should be put to work for the state, to pay back their misdeeds and their robbing of the earth's resources in their past lives.
Maybe I am a Mengele and decide I have a wealth of experimental subjects. The difference is that now I can bend and break your mind, but you can't die, unless I delete you.
I see all kinds of ways this can go wrong and given time all possibilities will happen.
Many of them are run by corporations. Some civilizations even send young sinners on day trips to hell to keep them on the 'right' path.
You can read this remarkable story and Douglas Hofstadter's and Daniel Dennett's commentary on it here:
Chairman Sheng-ji Yang, "Essays on Mind and Matter"
Edit: The novels are much better than the Netflix videos. They are, in my opinion, crude hacks. Randomized and simplified. The virtual torture sequence in Altered Carbon is vastly more horrible than the one in the video. Even if I didn't care about spoilers, I'd hesitate to describe it here.
And if you like Kovach, check out Stover's Caine novels. He also wrote the Star Wars novels, but don't hold that against him.
However I really liked the noir protagonist and the scene you described was my favorite episode in the show. I am interested how the story continues.
Are the books much better than the show? How would you rate the trilogy among your favorite cyberpunk novels? Thanks.
I started with that one though, and then read the other two.
I'd say that overall, as a trilogy, it rates alongside Neuromancer/the Sprawl trilogy. A little more pulp - but a similar serious take on technological evolution.
[ed: the netflix series is to ac, a little like the Johnny Mnemonic film is to the short story. Although the series is a bit closer to the books]
Also, I'm very impressed by Morgan's skill in writing action. Maybe that's what you mean by "pulp".
I also loved the "A Land Fit For Heroes" trilogy. I hadn't read fantasy for years. And from that, I discovered Joe Abercrombie and Matthew Woodring Stover.
Significantly. I have read the trilogy several times, and the Netflix series, whilst cool, doesn't come close to the depth and pacing of the books.
"On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity."
"Please don't complain that a submission is inappropriate. If a story is spam or off-topic, flag it. Don't feed egregious comments by replying; flag them instead. If you flag something, please don't also comment that you did."
The purpose of YCombinator was to start new companies, not to argue about brain slices.
If you want to argue about brain slices, that's cool, I'm just saying. Not what this discussion board was meant for.
Best part of the article for sure
If a 'Hitler' VM was publicly available online, how many people would torture it? What are the ethics around torturing virtual consciousnesses of evil people?
Nonetheless, here in our time, it brilliantly crosses/combines:
- funeral services, a highly lucrative field
- scifi, including all its expansiveness
- the desperate compensatory technological hubris of people who know deep down that the world is quietly going down the shitter
- and/or narcissism (on which nobody ever went broke)
Guy needs to not be so pessimistic.
> Altman tells MIT Technology Review he’s pretty sure minds will be digitized in his lifetime. “I assume my brain will be uploaded to the cloud,” he says.
It's like the cloud is _still_ just a pseudo-magical place we just need to get access to, and then all our dreams can come true.
Like Tahiti. It's a magical place.
Dying is not something to be paraded around as a virtue. I only hope people figure it out sooner rather than later.
EDIT: Some quick math on deflation in storage costs. The IBM 305 RAMAC was introduced in 1956, provided 5MB of magnetic storage and cost $3,200/month in 1956 dollars. To pick a random modern storage provider with a similar pricing model, in 2018 Dropbox offers 1TB for $10/month. Putting aside all the other advantages Dropbox gives you over a 305, the pure storage value of $1 (nominal) has gone from 1.6KB/month to 100GB/month, a multiple of over 50 million.
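The claimed multiple can be sanity-checked in a couple of lines (decimal units and nominal dollars assumed, as in the comment above; this is arithmetic, not an economic analysis):

```python
# 1956: IBM 305 RAMAC, 5 MB for $3,200/month
ramac_kb_per_dollar = 5_000 / 3_200   # ~1.56 KB/month per $1

# 2018: Dropbox, 1 TB for $10/month
dropbox_kb_per_dollar = 1e9 / 10      # 100 GB/month per $1, expressed in KB

multiple = dropbox_kb_per_dollar / ramac_kb_per_dollar
print(f"{multiple:,.0f}x")  # → 64,000,000x, i.e. "over 50 million"
```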
If you want to pay people in 2118 to resuscitate you (in the unlikely event the technology is feasible and developed, and people are willing and able), you're going to need to invest that money well.
We are incredibly more wealthy than people of a hundred years ago, in almost every respect.
The Fed would never allow this to happen. That would be deflation.
What we should pray for is $0.25 loaves of bread, $10 rent, and a $1000 minimum wage.
We want a future where human time is valued and basic resources are plentiful enough to be cheap for everyone.
My point remains:
> We want a future where human time is valued and basic resources are plentiful enough to be cheap for everyone.
It always boggles me why people scoff at deflationary economics with this line of reasoning. I see little evidence that the average person is so wise with spending money today.
Although, given that their target market is already those planning to be euthanized, I suppose the tradeoff of one's brain being an amusing curio vs. one's brain being worm food is not a reason to avoid this service.
If they are correct, it brings up an interesting question: given that the technology requires a slow, planned process to preserve the brain, at what point should a uniquely brilliant person be euthanized, in order to preserve them, rather than allowing them to die by sudden misadventure, e.g. not paying attention and getting hit by a car crossing the street.
Given that the output of so many brilliant people is skewed to when they are young, do we just start proactively euthanizing the top 0.05% of the population when they reach 50, to preserve our best for the future?
(Ritual suicide at age 60 embedded in culture)
I was actually thinking of Logan’s Run when writing my comment.
(Ritual suicide embedded in culture, age 21 in the novel and 30 in the film)
And anyway, if people have cybernetic implants, I wonder what being human will mean.
My cell phone and computer augment my ability to do things to such a degree that I feel I'm not myself without those abilities. I rely on note apps to remember things, calendars to schedule, internet searches to re-discover information or lookup specifics. My social life, my professional life, and my personal life are all significantly changed by computers.
I honestly feel taking your eyes and hands out of the equation and putting stuff directly into your brain is going to be more evolutionary than revolutionary.
That fact that people sometimes survive lightning strikes and epileptic fits suggests that normal brain activity can be restarted after a disruption.
Still, that synapse is thought to be the main modification area of neuronal firing, in that it acts like a classical memristor. So, though electrical activity is not required and lightning/shocks can be sustained (by some miracle), the 'self'/mind is the network of the synapses (GIANT caveats apply here). Also, even under heavy anesthesia, the neurons are firing and active, just not coherently.
If you can preserve the synaptic weights, their dynamics, and their network, you should be able to reconstruct the 'mind' of another person. However, that is one instant in time and space. You'd need to know where pretty much every ion was and its momentum, and I'm pretty sure that's impossible (Heisenberg and all that jazz).
As parent said, people do stuff like trans-cranial electric stimulation which surely changes where some ions are, yet they do not lose their personality afterwards.
I'm pretty sure our brain is more robust than the exact position of every ion. Biology is very messy, and stochastic.
Also note how artificial neural networks are very noise resistant, which allows people to run them on low-precision numbers.
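A toy illustration of that robustness claim (not a real network; the weights and points are made up): decisions with a comfortable margin survive small weight perturbations, which is the intuition behind running networks at low precision.

```python
import random

random.seed(0)

w = [0.9, -1.3, 0.4]  # hand-picked weights, illustrative only

def predict(weights, x):
    # Linear decision rule: sign of the dot product
    return sum(wi * xi for wi, xi in zip(weights, x)) > 0

points = [[1.0, 0.2, -0.5], [-0.3, 1.1, 0.9], [0.7, -0.4, 2.0]]
clean = [predict(w, p) for p in points]

# Perturb every weight by up to +/-0.05 (a crude stand-in for quantization noise)
w_noisy = [wi + random.uniform(-0.05, 0.05) for wi in w]
assert [predict(w_noisy, p) for p in points] == clean  # decisions unchanged
```

The assertion holds for any perturbation in that range because each point's decision margin exceeds the worst-case noise contribution; a point sitting right on the decision boundary would not enjoy the same guarantee.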
Is this really the case? If so why do salaries increase with age? What do you mean by "young"? The bottom half of living people by age? How do you explain the years our children spend in school generating no value at all?
The portion of kaizendad’s comment that I replied to specifically mentioned “output”. I interpret this to mean economic benefit or “value”.
The time students spend in school is time they are not working. Children generate almost zero economic benefit until they are out of school and the type of person kaizendad references is likely to spend a lot of time in school. Their economic contributions probably do not start until their late 20s.
I think a person's maximum economic output is probably closer to middle age than their early adult years. Perhaps kaizendad and I simply misalign on the definition of “young”.
From link, "As of October 2015, 49 percent of all youth ages 16-24 were employed in any work, either full- or part-time. Youth enrolled in high school had an employment rate of 18 percent, while the rate for those in college, either full- or part-time, was 45 percent."
You have cited a problem. The fact that we make children work to support their families holds them back from being the ones who kaizendad refers to.
Children who have the benefit of education early on are more likely to be productive later in life.
I also had my first paying job at a young age. My economic benefit today is much larger in comparison and continues to increase year over year. My maximum economic contribution has yet to occur even though my youthful employment was decades in the past.
DP/DR is quite possibly the worst mental disorder you can ever fall into. It is the feeling where you are convinced everything around you is not real; you see people you know, you know they are significant, but they just feel like actors in a show and you don't care. You look in the mirror, and you don't recognize yourself as you, you just see a body. Worse, your body just moves around the world and accomplishes things on autopilot, while you watch everything unfold on what feels like a screen, completely detached. If you died, you wouldn't care much, because you wouldn't feel it's you dying; in fact you want to die, because you are trapped in a hell from which there is no escape.
I've never suffered from this disorder, but I investigated it for some time and am convinced it reveals something we do not yet understand about our existence in this world. The source of our existence may not be where we think it is.
the money they’ve raised so far is due to the possible improvements this proposed technology will bring to neuro research.
I have a serious medical condition. I frequently suffer somatopsychic side effects. It's incredible what chemistry can make you feel.
I happen to also believe in a lot of woo stuff. But the reality is that no known cause is not proof that there is a spiritual explanation or cause.
I'm not very interested in seeking the Nobel Prize in this area. I am busy doing other things that I fantasize will lead to a Nobel Prize.
Mind alterations are induced by drugs. Chemical imbalances influence the same. I just am not convinced that this evidence is enough.
I did a research paper in high school on Functional Hypoglycemia. Chronic low blood sugar is known to promote anxiety and paranoia. I've blogged about managing various somatopsychic side effects with diet, such as eating oranges for the vitamin c while enduring random fits of rage to get that under control and eating beef with potatoes to mediate the salt lithium connection and calm bipolar-like mood swings.
But I am not interested in fighting some uphill battle to convince a skeptic who is just pissing all over me with every reply. So I had no reason to stick my neck out and share any of that. I get enough flak from the world for managing my medical condition with diet and lifestyle.
Edit: There's also lots of interesting stuff about nutrition and state of mind in this discussion about nutrition and the prison population: https://news.ycombinator.com/item?id=16140867#16141719
I am not invested in this topic, although I should be. In fact, some of the information you shared is valuable to my personal life.
I have a problem with the oversimplification of hard problems, then claiming that they are currently understood, then proceeding to build more extravagant and potentially dangerous solutions on said platform.
The point you are debating - that the influence of salts/neurotransmitters/more on brain chemistry, and hence on one's mental state, can be so profound as to create a world where one experiences a vastly different emotional response to everyday events than most others anticipate - is understood.
I contested that this is not enough to encourage experimentation with consciousness itself.
Edit: The impact of chemistry can render itself in ways where the idea of one's personality becomes fluid. Chemistry can control my reaction to simple things and debilitate my ability to live my life. I understand that chemistry can shake the very sense of having control over my self. I also understand that the external world barely acknowledges the role of chemistry in this, and likes to blame it wholly on the individual.
You are reading in something I never said or addressed in any way.
"DPDR is just your brain's ways to help you survive a situation you're unable to escape/fight. It tries to make you unaware of what's going on so you can endure it until it's over. It isn't supposed to be permanent. DPDR itself isn't anything to fear."
I've tried for years to recreate the effect with no success.
I prefer to believe instead that I'm a character in an MMORPG. But if that's true, my consciousness still resides within the simulation. At best, the entity controlling me has some superset of my consciousness. At worst, my consciousness is still totally disposable, and my player is just interested in a fun ride.
Also, there's probably usually a lot of brain matter dedicated to redundancy, memories, and enabling connections between ideas to foster creativity, and this person may be lacking in these areas in hard-to-measure ways. If most of our brain truly had no purpose, evolution would have selected against it forever ago. (Instead, evolution went to great lengths to keep our heads so big.)
You're just an NPC
My favourite argument against this idea is that it would violate the law of conservation of energy. In any case, it looks to me that all these dualist theories of mind are non-falsifiable.
Rather, the brain's processes themselves are probably what makes you you. Regardless of whether they generate consciousness, the biochemical sensory-processing, storage-and-modelling, and decision-making algorithms (which we can detect by looking at synapses) are where you lie.
If your brain was somehow modified to take completely different decisions and make different statements ('dumb' magic-less rewiring), with the magic/antenna/emergent/quantum/whatever consciousness part kept intact, I am pretty sure that I would not consider the resulting being 'me'.
Of course everyone is free to believe otherwise.
In other words, we know with certainty that there is no physical mechanism by which some "external consciousness" could influence the brain. So the answer to that hypothesis is a definitive "nope, that's not how it works."
CalTech quantum physicist Sean Carrol explains: https://www.youtube.com/watch?v=x26a-ztpQs8&t=836
Plus, there are all kinds of problems and contradictions if the mind reduces to matter.
This has never been a good proxy for truth. It is also completely irrelevant given that the majority of the world has never studied neuroscience.
>science has no idea what consciousness is
Not true. Neuroscience has discovered a lot about consciousness. I recommend Principles of Neural Science if you're interested: https://www.amazon.com/Principles-Neural-Science-Fifth-Kande...
>Plus, there are all kinds of problems and contradictions if the mind reduces to matter.
Perhaps you can explain what science has discovered about consciousness. As far as I know, the closest is a network analysis metric used to signify whether something is conscious. But that does not tell us what consciousness is, only some of the necessary conditions, if that.
Here is a list of problems off the top of my head regarding material minds:
- Math is inherently immaterial. I cannot destroy the number 1. Infinity cannot physically exist. Negative numbers, zero, imaginary numbers, the real number line do not correspond to physical objects.
- If our mind is material, there is no way we can know any kind of truth. Truth doesn't really have meaning. Yet we do know some degree of truth.
- If consciousness is a particular configuration, then the same configuration is the same consciousness, which would imply instantaneous awareness of two completely distinct parts of the universe by the same consciousness with two copies of the configuration.
- Free will and qualia have no meaning in a materialistic worldview, yet are essential to just about everything we do everyday.
What problems and contradictions?
And then there are the normal people walking around with almost NO brain...
Still, I find it at least as likely as the "consciousness just sort of happens when information processing gets complex enough" explanation.
You might find this interesting https://www.quantamagazine.org/a-new-spin-on-the-quantum-bra...
Don't get me wrong, I think consciousness arising from sufficient complexity is also very much a possible answer, and I don't discount it. I'm just not totally convinced that there isn't more to it. Until a general AI arises, I don't think I will be convinced, and even if it did, who's to say it isn't still just an approximation? Multiple AI intelligence domains slapped together with some approximation of a pre-frontal cortex. Even then, biology may still be something entirely different.
Sure, you start venturing into the supernatural realm talking about this, but who's to say that, given sufficient time and study of this phenomenon, the supposed "consciousness is everywhere" doesn't become the new natural science and give rise to new technologies we couldn't even conceive? It would be like discovering electromagnetism all over again.
Then again, the last time I checked we don't actually know if time is quantized or not, or if that's even a meaningful question.
A slightly modified version of this idea is explored in
The concept of "multiple realizability" says that a brain/consciousness/mental state can arise from non-biological matter, like digital computers.
Modern proponents of this view do not see the brain as a seat of consciousness. To them, the view of a singular consciousness housed in the brain like a theatre is a Cartesian illusion. Consciousness is distributed, and all parts of "you" and your environment contribute.
As for "out there": I am always a bit worried when archeologists unearth some treasures with our current state of technology. Likely, in 100 years they will have much better methods to research and preserve unearthed treasure, and so we may be spoiling their more effective methods by digging it up and putting it on display. What if, after the 8th great AI hype, another YC company claims it has found successful methods to upload your consciousness, which is 100% fatal to your preserved brain in the process? You'll be stuck in the MVP FORTRAN digital world of uploaded consciousnesses.
Or maybe they take our current infatuation with AGI as permission to create a huge ensemble of preserved brains, filling an entire room as our 1960s computers did, because it turned out that consciousness is only transferable to biological "organisms", and the only way to superhuman intelligence is to combine 20 human intelligences. The AI hype bubble will become a literal self-fulfilling prophecy.
I personally would either jail these consciousnesses (honor their will, but put them in a boring digital world with Archive.org access to the internet) or put them on display, much like we do with preserved Egyptian pharaohs. I'd call it Earth, and make Trump a president.
I myself worry that even if somehow we combine all these domain specific AI's with some approximation of a neo-cortex, and then every AI researcher says "See, there it is! Behold the AGI, told you there was nothing to it!", all we're REALLY going to get is an approximation of the real thing. Then we'll be too busy playing with our new AGI's to realize we missed the mark long ago. The new treasure preservation technology never even arises because victory was declared prematurely. We hit another mark instead of the real thing. Constant refinement of the bizarro superman version of real consciousness.
I hypothesize that the REAL mark is that somewhere in the brain, quantum effects, maybe in microtubules, coupled with some yet-to-be-conceived discoveries or epiphanies, act as a conduit to your real consciousness software existing somewhere "out there" in the simulation ether, let's suppose. Extrapolate the AI hype, and one day the super intelligent Matrioshka planet-size brain emerges (YC backed of course), pores over ancient abandoned research and points out, "Hey, you dumb monkeys missed something Yuge here eons ago..."
We're at the point now where your 'receiver' is doing basically all the work and, if it's connected to anything else, that thing just supplies formless 'mojo' that has no identifiable effect on the working of the mind.
And of course you cannot will yourself to become drunk.
It is pretty trivial that the matter affects the mind, not vice versa.
The "somewhere" is distinguished by the fact that a given person is a specific person and not another person (if you deny this, it makes ethics and empathic transfer unworkable, which doesn't seem right).
They are already finding quantum effects in plants:
And that our brain contains micro-tubules that exhibit quantum effects:
Maybe they ARE the antenna.
This is like the guy who says "prove to me god doesn't exist". Well, it doesn't work that way. You came to me with "god exists" or "there is an antenna that connects my brain to some magic world". You prove that it is there and show me how it works.
Of course, everything we perceive the material world to be is merely a concept we are holding in our mind.
DNA is an idea.
Similarly, if consciousness is a fundamental component of the Universe/reality, it may not supervene (a technical term in philosophy) on the physical structures of the brain. 
 See the arguments of the philosopher David Chalmers.
There’s zero evidence of quantum mechanical effects playing a role in any aspect of cognition. Worse, the classical model suffices. Far worse, the review you cited has been thoroughly trashed.
The “quantum mind” is just another case of quantum mechanics being used as the new woo of choice where “because God said so” wouldn’t be well received. Believe me, I’d love to discover that we’re not just meat hurtling towards our own extinction, but the evidence doesn’t bear out a rosier hypothesis. Even with the surprising duration of coherent state in “hot” systems, the human brain is a poor candidate for a quantum mechanical system, and the math strongly supports that. Where quantum mechanics is concerned, math is God.
As an additional fun fact - we don't know how general anesthesia works.
It’s just not scientific or rational to point at some things we don’t understand with the assumption that it must mean God did it. That’s mysticism and religion, not science, and if it uses the language of science, but not the substance it’s called pseudoscience.
If you want to postulate a soul within the rubric of QM/GR complementary theories, it’s very much on you to hypothesize and test. What field is carrying the signal? What machinery is receiving it? What’s sending it? Why is there no lag? Why is it all totally undetectable? Why are we trying to add a whole new set of assumptions on top of existing ones? Occam’s Razor really applies here, especially when it’s science vs. “devoutly to be wished.”
Just because we want reality to conform to ancient beliefs and present desires doesn’t make it so.
We have an unexplained phenomenon - consciousness; everyone (well, at least me) experiences it. We don't have a mechanism for it, so some reject that the phenomenon exists. I don't think it's a good idea to outright reject a hypothesis that has evidence behind it but lacks a mechanism. Remember what happened to the theory that surgeons should wash their hands, or to continental drift (it took half a century for it to be taken seriously!)
* There are mental disorders, drugs, experiences that seem to imply dualism; those are entirely inside those people's mind with no way to prove those experiences are true, or simply products of the brain.
* Damage to a brain seems to change consciousness (this is falsifiable, and proven) which seems to suggest that it is where our consciousness is; damaging the receiver damages the connection, it's obvious that something must bind the consciousness.
This is a topic that has been thought about extensively. If you have some sort of falsifiable test, please suggest it; otherwise, far wiser people have tried and failed to think of one (and as a note, it's up to you to prove that your hypothesis can be falsified - it's your claim).
You just made an assertion. Now you have to prove it.
If the idea that the spirit is another fundamental force of nature is true, then it can be tested and be falsifiable.
Science isn't about proof; if observations change (primarily improve), then we change what we think we know. Which is why the above is all the "proof" I need. You can falsify it very simply: provide a test. I have met my burden.
Go back 90 years. How do you know there isn't a 4th fundamental force of nature?
Science isn't about making assertions and not being able to back it up.
I have backed up the assertion. I have pointed to the existing literature, and the many people who have tried and failed to write a test. I have a very basic hypothesis, one that could be broken by a very simple piece of evidence: a test.
But here is the real problem. You are treating this as an argument. And I am not. This isn't about winning or losing to me; I am just trying to understand the universe better (as are the people above saying this isn't falsifiable), and you want to win. Besides, if this were a proper argument, you'd have a clearly stated position I could argue against, and you don't seem to have one (outside of 'dualism can be proven', which is a ridiculous assertion, easily refuted).
> Go back 90 years. How do you know there isn't a 4th fundamental force of nature?
How do you know there isn't a fifth now? We have no clue how we would test for it, but there are unexplained phenomena which might involve a fifth force (dark matter and the perplexing way that gravity works for example). Saying something is non-falsifiable doesn't mean it's "proven" to be wrong. Just that science can't say anything about it yet. With more knowledge and more time we learn that our previous understanding was wrong, and that we can test for new things we couldn't even imagine before (and some that we could).
> Science isn't about making assertions and not being able to back it up.
You're right, it's not. Which is why I have been able to back it up, by pointing out that many very smart people (scientists among them) have tried to think of a way to test this and failed. My assertion stands with the caveat all science has: given what we know now. Science is wrong all the time - I don't mind being wrong about this - but for the time being it appears true. And that is what is useful.
Erlangolem made that explicit assertion, and now has to back that statement up.
If you really want that argument, life and the internet are overloaded with opportunities to have it until all parties are dead. Very little ever moves or happens, but occasionally someone gets a book deal.
As someone who has lived through many backup formats that stop being supported, don't work, or decay with time, this seems monumentally stupid as anything other than basic research.
I applaud their goal, digital sentience, but feel sorry for any of their customers. They need a pitch that works on the recently dead (e.g., current cryo companies), as I don't think there are any countries that allow euthanasia for life extension purposes.
I agree that there's research to be done, which is why we're not offering a product at this time, and may not for years.
I would say that even more research needs to be done for any hope of a protocol that preserves synaptic details in postmortem cases hours after death (as many cryonics companies seem to be peddling). If you're saying "recently" to mean <20 minutes after death, that complicates the distribution enormously, but it's not outside the realm of what Nectome may develop later.
From their site:
"What if we told you we could back up your mind?"
No: 100% chance of death.
Yes: 100-X% chance of death, X% chance of getting to see a super-interesting future.
For me, X wouldn't have to be very high to make it worth trying.
We're even more liberal with the concept of identity in society -- consider that I share nearly nothing (physical or memory) with 1-day-old me, yet we are considered the same person.
Really, what we have here is a quasi-religio-philosophical argument for reincarnation. I'm somewhat sympathetic to the idea of reincarnation after death, in fact, but I think people who believe in uploading minds are implicitly pretending to have knowledge about the process that nobody has.
Any rules for where your consciousness ends up in the next instant in the event of a discontinuity are made up based on your preferences and prejudices. One may wish to believe that a computer that stores the patterns of your living brain captures your mind after you die, but it's no more or less valid (i.e. probable, provable) than the cliché that you are reincarnated into some particular living entity as a reward or punishment for how you lived your life.
Reincarnation is, in my opinion, outside the realm of science, so I don't disbelieve in it, I merely doubt it is possible to know anything about it due to the impossibility of investigating scientifically.
...and "mind-uploading" is merely a subset of human invented systems of reincarnation.
Trade a 99% chance of seeing tomorrow even if terminally ill for a 100% chance of not seeing tomorrow and a chance of resurrection which, probability-wise, is on a par with $insertreligionhere being broadly correct about the afterlife.
So while it is non-zero, that doesn’t mean it’s worth the cost. That’s why I brought up backups.
Remember Zip drives for backup? Imagine if there was no reader at time of launch. And how likely it was to build the reader without any changes to spec that the existing Zip disk could be used to restore without any problems. Even with read and write available, those things didn’t work well. You could say “0% chance of restore if you don’t buy, but non-zero if you do buy; therefore buy.” But that only works if the cost is zero.
You’re just as likely to be “recovered” from a photograph. Or from your Facebook profile. Or by tracing neutrinos that passed through you during your lifetime. Or a rock with your name written on it with a sharpie.
All of these things have the same evidenced probability of assisting with your recovery.
I hope nobody tells this guy about chemotherapy, he's gonna flip.
Or about vaccines. Or about antibiotics. Or about sterile surgery procedures. Or about running water and electricity. Or about....
Just about everything that makes up our modern quality of life was only available to a privileged class when it first came on the scene.
My bet is that future people won't be appalled in the slightest.
Something tells me that Professor Hendricks isn't living a Neanderthal lifestyle (or even a modern Third World lifestyle).
One group is (I think overly) critical of present vitrification approaches and the planned future of those approaches. They believe that these approaches are broken, and nothing of value is preserved. I think this is just as ridiculous as saying there is no room for improvement - it is clearly the case that, e.g. nematodes vitrified using the present technologies can be restored and show preserved memory.
That group has a strong overlap with pattern identity theorists, who are quite comfortable with a copy of them living in the future, and throwing away the original brain.
So the intersection of these two groups is motivated to work on technologies such as aldehyde-stabilized cryopreservation (vitrifixation) that are incompatible with the future goal of thawing, restoration, and repair, as they are not reversible short of far-distant molecular nanotechnology. Their aim is to produce the best possible data record of the mind, with the intent of reading it into a machine environment in the future and then discarding the brain.
To my eyes this is a terrible, terrible, mistaken view on identity, and one that will cause a great deal of existential harm when it is extended from theory to action.
The rest of the cryonics community is interested in a technology path that leads to reversible vitrification in the near future, and some kind of union with the tissue engineering / organ engineering community. They want the flesh restored and repaired, and the end goal of cryopreservation is some form of advanced cell/bio/nanotechnology that can achieve that end.
This is why Alcor, etc, is not adopting vitrifixation.
So on the one hand, great to see progress, and vitrifixation is an excellent advance in tissue preservation in the general sense. It will be of use in many areas of research. On the other hand, pattern identity theory seems to have many of the aspects of religion. A copy of you is clearly not you, and no amount of handwaving is going to make that the case.
What an unnuanced way of stating your opinion. Views on what "identity" is differ, clearly.
I have nothing against the elite trying to live forever. However, I wish they weren’t using our taxes in the process.
I disagree. I bet they'll learn all kinds of useful stuff from this, however it turns out.
Assuming the taxes collected this year are even remotely similar, and assuming this grant came 100% from income tax, and assuming you pay an equal amount in taxes as everyone else, this grant cost you (960000/1.23 trillion)=1/1281250 of the taxes you paid. I know that's a lot of assumptions, but even if that calculation is off by several orders of magnitude, the tax burden is pretty darn small. If you earned $100,000 last year you paid roughly 8 cents for this. It could open up doors to a lot of interesting research. Maybe nothing will come of it. I'd pay 8 cents to find out though.
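The back-of-the-envelope arithmetic in that comment can be checked in a couple of lines. Note that the $1.23 trillion revenue figure, the proportional-share assumption, and the $100,000 income are all the commenter's assumptions, not verified numbers:

```python
# Sketch of the comment's tax-burden arithmetic, using its own (unverified)
# assumptions: ~$1.23T of income tax revenue, shared proportionally to income.
grant = 960_000
total_revenue = 1.23e12
income = 100_000

share = grant / total_revenue   # fraction of revenue the grant represents
cost = share * income           # proportional cost to a $100k earner

print(f"{share:.2e}")   # about 7.8e-07, i.e. the comment's 1/1,281,250
print(f"${cost:.2f}")   # about $0.08 -- the "roughly 8 cents"
```

So the 1/1,281,250 figure and the 8-cent figure in the comment are internally consistent with each other.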
Also, who says this is "the elite" wanting to live forever and us schlubs footing the bill?
I think it's reasonable to think that someone who is ready to put aside $10,000 in the eventuality of an accident leading to a coma, for the small odds that future people will be able and willing to give him a new body, is probably part of the elite.
...because it will be the rich who can afford such an exclusive treatment.
Additionally, it's just straight up irrational--quantity of life does not improve quality. It is a better decision to spend your money on your quality of life. Why even waste 8 cents on this? Peter Thiel can pay for all the young blood he wants, so long as it ain't my taxpayer money.
> It has also won a $960,000 federal grant from the U.S. National Institute of Mental Health for “whole-brain nanoscale preservation and imaging,” the text of which foresees a “commercial opportunity in offering brain preservation” for purposes including drug research.
The rich can do what they want, invest in the businesses they want, and no I don't necessarily want my tax money used for those purposes. But this research sounds like it has potential to be really valuable to humanity as a whole, commercial applications notwithstanding. And that's how science in this country works. Either it's funded by a company like Pfizer where it's locked behind gilded doors or it's funded by taxes and carried out at a university where it's open for all to look at, and given those options I know my preference.
Reminds me of Alcor and Ted Williams. Or the myriad "companies" that promise to preserve your stem cells but in reality are just one guy with an LN2 tank in his basement.
"[H]e devotes all his energies for a decade prior to his freezing, in becoming an expert primary source on the musically notable people of his era. Assuming that if you become the world's foremost expert in any subject, and given infinite time, someone will want to write a book on that exact subject."
If I recall correctly, the protagonist intentionally leaves tantalizing bits out of his books, so future historians will want to wake him up to ask him personally.
Public key verifies the value and there’s a smart contract with some percentage to the successful recoverer.
Depending on the recoverer’s profit margin they will restore whenever there’s enough value to make it worthwhile.
Whether bitcoin will exist and whether it will appreciate is a different story. But assuming bitcoin exists, resurrection is a certainty.
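The bounty idea sketched above can be made a little more concrete. This is a toy model only: a plain Python class stands in for an on-chain smart contract, and a hash commitment stands in for real public-key signature verification; every name here is hypothetical:

```python
import hashlib

class ResurrectionBounty:
    """Toy stand-in for the smart contract described above: the deceased
    commits to a secret before death (a hash commitment, used here instead
    of real signature verification), and a recoverer who can produce the
    secret -- i.e. the restored mind still remembers it -- claims a cut."""

    def __init__(self, bounty: int, secret: str, recoverer_share: float):
        self.bounty = bounty
        # Only the hash is stored; the secret itself stays in the owner's head.
        self.commitment = hashlib.sha256(secret.encode()).hexdigest()
        self.recoverer_share = recoverer_share
        self.paid = False

    def claim(self, revealed_secret: str) -> int:
        """Pay out the recoverer's share if the revealed secret matches."""
        if self.paid:
            raise ValueError("bounty already claimed")
        if hashlib.sha256(revealed_secret.encode()).hexdigest() != self.commitment:
            raise ValueError("secret does not match commitment")
        self.paid = True
        return int(self.bounty * self.recoverer_share)

bounty = ResurrectionBounty(bounty=1_000_000, secret="correct horse",
                            recoverer_share=0.2)
print(bounty.claim("correct horse"))  # 200000
```

This also illustrates the tension raised elsewhere in the thread: to claim the bounty, the restored person must reveal the secret to whatever system is verifying it.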
Roko's basilisk meets bitcoin.
Although I’m not sure how you would verify your messages with private key without revealing to whatever is managing your consciousness’ memory.
I’d structure with smart contracts with a certain price for digital. Another price for full physical. Keep other assets outside of consciousness in legal trusts.
There may be torturers, but likely non-torturers as well. Knowing this, tortured versions of me won’t crack as they don’t care.
There has been compound-interest sci-fi for a while. I remember the Lazarus Long stories by Heinlein, and Orson Scott Card wrote the Worthing Saga.
Or I'll do it myself if I can.
Obviously it's a huge punt with no guarantee of success, but I would think that in a future that supports such a procedure, the concept of "expensive" won't exist to the same degree as it does now.
I'm not saying you are incorrect about the future, just pointing out it's against several long term trends.
Income inequality and “expensive” are not exclusive. If the median income is $1B in current dollars with lots of zettanaires (10^21, vs 10^9 for billionaires), you will have high income inequality but still have plenty, with few things being expensive.
Income inequality, with no other info, is not necessarily bad.
I hold a contradictory view. Inequalities in income and wealth means inequalities in power. Inequalities in power lead to unstable social systems. My view is that this holds true regardless of the baseline standard of living.
>11 billion just after 2100
Right, so current trends show a 30% growth in population between now and 2100. That doesn't contradict what I said.
EDIT: I'd like to specify all inequalities do not lead to unstable systems, but very large inequalities do.
The trend is that as societies become more organized, there are fewer kids.
Obviously several things could go wrong, but the plan isn't obviously doomed.
1. Logic to accumulate wealth. This would need an API to create and manage an investment account for stocks, bonds, crypto, whatever. It analyses current and historical data for a time, then makes investment decisions aimed at minimizing risk and maximizing long-term gains.
2. A reproduction module. This would be responsible for finding a freelance programmer to produce new versions of each module, keeping the system current. This would need methods like findReliableFreelancer, provideSpec, runTests, and issuePayment.
3. Cloning. Like the Reproduction module, but this will be done entirely with the system's own code and not recruiting a new developer. E.g. create a new AWS account, get a new host, copy your modules, get it running, do the sanity checks, provide it seed money.
4. Orchestration. The various hosts running this code would need to communicate, reporting their existence, the money they have available, etc. If the system winds up having some purpose (e.g. convincing someone to restore me after death) then orchestration might come up. Otherwise this could be used to limit reproduction, stop sharing resources with bad actors, and share resources with good actors. This would also need to track which systems we were already running on (e.g. if we are currently 80% on AWS we should direct expansion towards Digital Ocean, or future equivalents, try to expand into different languages, countries, and planets).
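As a rough skeleton, the four modules above might look something like this. Every interface here is a hypothetical sketch of the comment's plan, not a working system; none of the methods do real work, they only pin down the shape the plan implies:

```python
# Hypothetical skeleton of the four modules described above.

class WealthModule:
    """1. Accumulates wealth via some brokerage/exchange API (unspecified)."""
    def analyse_market(self, historical_data: list) -> dict: ...
    def invest(self, decision: dict) -> None: ...

class ReproductionModule:
    """2. Hires a freelancer to rewrite aging modules."""
    def find_reliable_freelancer(self) -> str: ...
    def provide_spec(self, freelancer: str, module_name: str) -> None: ...
    def run_tests(self, module_name: str) -> bool: ...
    def issue_payment(self, freelancer: str, amount: int) -> None: ...

class CloningModule:
    """3. Self-replicates onto a fresh host and gives it seed money."""
    def provision_host(self, provider: str) -> str: ...
    def copy_self(self, host: str) -> None: ...
    def sanity_check(self, host: str) -> bool: ...

class OrchestrationModule:
    """4. Tracks peers, balances across providers, shuns bad actors."""
    def __init__(self):
        self.peers: dict[str, dict] = {}  # host -> {"provider": ..., ...}

    def report(self, host: str, status: dict) -> None:
        self.peers[host] = status

    def next_provider(self) -> str:
        # Steer expansion away from over-represented providers, per the
        # "if we are 80% on AWS, expand toward Digital Ocean" idea.
        counts: dict[str, int] = {}
        for status in self.peers.values():
            counts[status["provider"]] = counts.get(status["provider"], 0) + 1
        return min(counts, key=counts.get) if counts else "aws"
```

The orchestration piece is the only one with real logic here, since it is the only behavior the comment specifies concretely.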
I imagine kicking the project off a decade or two before my anticipated death in order to work out kinks and to hopefully live long enough to verify it will actually work and grow in real life conditions.
After my death, I imagine it running for centuries, slowly accumulating more and more wealth. Perhaps disbursing some amount every decade if I could think of a way to track descendants. Maybe asking people to restore me from cryo freezing. Maybe just creating a legal puzzle for folks 500 years from now, when they have to unwind what might be a multibillion dollar fortune belonging to someone long gone.
What about after general anesthesia?
You only know you are you because of your memories. The future brain will have the memories, so it will think it's you, just like you think today that you are the same you that was awake 3 days ago and 10 years ago.
Are you afraid of nonexistence when you fall asleep, and do you envy tomorrow's you, who will live in your body and get to see what the future brings?
I don't think there's much difference between these scenarios - the consciousness is interrupted in both cases.
Programming wise, I'm an instance of an object, not a class.
So, if the original me has to die for the clone to be created... unless my consciousness somehow jumps to the clone, I experience a death which I do not come back from.
You don't have consciousness when you sleep (at least during some parts of it). Your consciousness disappears every night for several hours and is recreated every morning. Why is that not a problem, but when there are 2 of you - it suddenly becomes a problem?
If you were sleeping while someone made a clone - would that make it OK? If no - why?
There's no evidence of anything external that has to "jump".
So, a copy of your consciousness (if such a thing is possible) is different than sleeping. No matter how much the clone believes it is me, it will never be me. I still die in my scenario, even though the other feels differently about it.
To add to that, asking "who am I" is a philosophical question, and I'm not a strict empiricist, so I don't categorically dismiss the idea that there exists an "I" outside of my natural body.
As a thought experiment, imagine you were magically replicated during sleep such that, at the time of replication, both "you"s were completely physically identical. Both would wake up thinking that they are the same person that went to sleep, but now there are two of them. Are either/both/none of them "you"?
My answer would be that they both are, and don't think there's anything special about our physical selves, nor any reason that we're limited to being singletons (to give a code analogy). The combination of our physical self and our cumulative experience and memories are what defines "us". Thinking of identity in possessive terms and the desire to think of ourselves as unique is a quirk of psychology, I think.
I don't see how you can distinguish "different brain-states that are still me", and "different brain-states that are not me" other than by looking at the data in these brain-states. If the data is the same - what makes it ok to say "these 2 brainstates are the same person, and these 2 are not"?
> We don't say people become no one, or someone else when they sleep.
We say a lot of things that are wrong or simplified, it's not an argument anymore than the word "sunrise" is an argument for geocentrism.
> I don't categorically dismiss the idea that there exists an "I" outside of my natural body.
I agree that the existence of souls would change things, but it's untestable and unnecessary for explaining the phenomena we experience, so I consider it a waste of time to discuss.
They're basically saying killing them in their sleep is not a problem. That's where that logic goes. Because you die when you go to sleep.
So you tell me why it's a problem, but it seems I may go to sleep all I want, and when I wake up the next day, all my problems are still my own. I doubt that if someone makes a clone of me and I go to sleep, I'm going to wake up as both me and that other clone. How would that even work if there's two of us? There's only one of "me" to go around.
Oh, it's very awkward, yes, because it implies there's something else going on here that is required to hold this information, but we're all supposed to think that's impossible...
Or you can decide to leave everything and become a monk. I don't see how that's an argument one way or another.
> There's only one of "me" to go around.
Why? It seems to be the crux of your argument, but you don't justify it in any way.
You seem to have missed what I said.
I mean that, if I don't wash the dishes today, and I go to sleep, it's still me who's going to have to wash those dishes tomorrow. It's not someone else. It's me. That "me" is the question. I can't just not wake up as "me" tomorrow. If continuity wasn't a thing then the concept of "me" shouldn't exist at all, so there would be no responsible party in the first place, but every morning I find that I need to wake up, go to work, and then sometimes wash the dishes.
Of course, the entire experience of "me" can be falsified by a 3rd party, sure, but the world itself could also be falsified, neither of these beliefs are particularly actionable. But "the person who didn't wash the dishes and the person who then suffers the consequences is a continual entity" is actionable, it means I should wash my dishes.
> Why? It seems to be the crux of your argument, but you don't justify it in any way.
Because I can't wake up and control two people at the same time. That doesn't even happen in the case of multiple personality disorder.
You sure can. You could have a stroke and change personality completely. The fact that it's still considered "you" by society and the future you doesn't mean it's true.
> "the person who didn't wash the dishes and the person who then suffers the consequences is a continual entity"
This only requires you to believe that the future you is you, it doesn't require it to be true. It's similar to arguing "There must be a God, because why else would I pray". Well you can simply be wrong.
> If continuity wasn't a thing then the concept of "me" shouldn't exist at all
Why? There's a lot of concepts out there that exist contrary to facts. Free will probably doesn't exist. Absolute time doesn't exist yet people use it routinely as their model of reality.
> I can't wake up and control two people at the same time. That doesn't even happen in the case of multiple personality disorder.
You could make a device that controls the muscles of another body based on your neural impulses. Would that change your opinion about identity and consciousness? I doubt it. If I'm right, then that's not your real argument.
Give me your real argument, please.
If I tortured you, then healed you and removed your memories, you would be none the wiser as well. Does that mean I wasn't torturing you, but someone else?
If I cloned you, tortured your clone and killed it while you were asleep, then merged his memories to you - was it you who I was torturing or not? Why is the difference meaningful?
Programming isn't a very good analogy, because there is no definition of consciousness, so even if we recreate the problematic situations in some programming model - we still don't have any answers. Example:
You can serialize an instance of a class, delete the instance, then deserialize it. The memory address will be different; the == operator in some languages will return true, in many others - false; but there's no one answer to whether the object is the same or not when it comes to consciousness. There is no clear analog of pointer identity in the real world. If we had evidence of souls, that would be it, but we don't :)
Some GC languages can move objects in memory while the application is running. Is this the same object? What does this say about consciousness? Nothing, IMHO.
When the GC moves an object in memory and updates all references to it - is this the same object, or a different one? What if it created a pool of objects, serialized the object at address 1, loaded a different object there, and deserialized the old object from address 1 to address 2? Is the object at address 1 or the object at address 2 the old one? Pointer equality isn't very useful here.
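The serialize/delete/deserialize case can be made concrete. In Python, for example, the round-tripped object compares equal by value but is a distinct instance (the `BrainState` class is just an illustrative stand-in):

```python
import pickle
from dataclasses import dataclass

@dataclass  # dataclasses auto-generate value-based __eq__
class BrainState:
    memories: list

original = BrainState(memories=["learned to ride a bike", "first day of school"])

# Serialize to bytes, then deserialize into a brand-new object.
blob = pickle.dumps(original)
restored = pickle.loads(blob)

print(restored == original)  # True  -- same data: value equality holds
print(restored is original)  # False -- different instance: identity differs
```

The language forces you to pick which comparison you mean (`==` vs `is`); consciousness doesn't obviously come with either operator, which is the point being argued here.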
> for whatever definition of me that I have, it involves this particular instance of conscious experience
Then sleeping is the same as death.
If I merged the bash logs where I nuked rm -rf / on a given computer into the logs of one where I have not run that command, the result is not the same.
> You can serialize an instance of a class, delete the instance, then deserialize it.
This would not be the same object since you can do that, but not delete the object.
Programming is a great analogy. I think the answer for identity/consciousness, even in programming involves an uninterrupted flow of unique locations in time/space that has continued existing as a cohesive whole.
> Some GC languages can move objects in memory when the application is running. Is this the same object?
In general, if there can be "two" instances simultaneously utilizing a given approach (like mind transfer), then it is not the same "one." But this is a flow, so Ship of Theseus alterations like you propose do not change identity; they are part of the flow.
> Then sleeping is the same as death.
I can't rule that out, actually, although I seem to be conscious some of it, so there's that.
What if you had 2 git repositories, and merged changes from one to another? Or 2 disk images?
Why would memories be logs and not contents of the repository in this analogy? And if you care about changes to the brain state that aren't memories - just add these changes to the brainstate. You argue that no matter how perfectly we merge these brainstates, it's still not "me" after the merge. I argue - if there's no difference after the split, then the question is meaningless.
> This would not be the same object since you can do that, but not delete the object.
So what? Why would existence of a copy change anything? In programming it doesn't, yet you seem to argue it does.
You can make a clone of a disk or a memory dump, and restore it on another computer. Why would the uniqueness matter? You can run 100 copies on 100 computers and use that for redundancy.
When a program is running on a modern CPU it executes code speculatively, going both ways on some ifs. Does that split the identity into 2 and immediately kill one of them?
Let's say for a moment that universe is a simulation, and there is a backup. Does that make "us" suddenly not "us" because there is a copy? Why?
Because of all these corner cases I don't think programming is a good analogy.
It isn't the copy that changes things. It is the potential of a copy that makes it glaringly obvious that the result is not the same instance. If there is some process that makes a second me, I don't have access to the copy of me's thoughts. This proves the second me isn't actually me, because I'm still here, and we are trying to define what I am; at minimum, my definition of my consciousness does not include other consciousnesses which I don't have access to.
> Does that split identities into 2 and immediately kills one of them?
While I think programming isn't perfect as an analogy, there is no other vein in which people think that has as many good metaphors for this. On the contrary, I think these corner cases are where you can actually refine your reasoning about these things. I think many of your corner cases are really useful because it is entirely conceivable that many could leave the realm of silicon and enter the world of flesh and blood (like memory merges, etc).
Well for me the potential for a copy doesn't make anything obvious or impossible.
Then who is it?
It remembers being you.
My suspicion is that most people suddenly realize they prefer the instance of their consciousness they are inhabiting.
The question is: which one do you prefer I terminate - you, the one I'm talking to, or the other person in the other room? The preference most people have is "the other one," which pretty much lays bare that these two instances are not, in fact, equivalent, because they are "in" one of these consciousnesses and not the other.
For me, this implies divergence, even divergence of location (which happens automatically on a copy) is a change of identity, and so true identity must be inextricably linked to a continuity of location through time. Because of this, I reason that any copy operation, even a destructive one, is a change of identity and not-me, because it is really just a creation of a second instance and as mentioned, I would prefer this instance keep running over another if faced with my murderous thought experiment.
If merged into me, I agree it could become part of new-me.
Even if memories are structural, network theory tells us that all parts of a network are not necessarily visible or accessible from all points at all times. The "You" making an executive decision could only be seeing a subset of your entire memory structure.
This difficulty can be expressed even more clearly by looking at how neural networks even work. It's not just structure that determines output. It's a combination of structure and sigmoid function, spread out over time as results of previous computations fine-tune the network.
You'd have the hardware to run your person on, but you'd have no information on the full informational state stored by which circuits are firing, which are building up to fire, and how fast each one is building up.
Even if you did, you'd lack a "reboot" harness capable of generating that state on demand across the entire brain that could both physically share the same space the brain is occupying AND not interfere with the mechanisms by which it operates.
What you haven't realized is that your conception of identity has been fine tuned since birth through your use of language to represent "You". Whatever woke up after the theoretical reboot would NOT be the "You" that went into it. It would be "You" to itself. Little more, little less.
If you sit down and think it through, you should be able to realize that even if you could build something to do this theoretically, practically implementing it would be a fool's errand courtesy of physics. Namely sensitive dependence on initial conditions, and the Pauli Exclusion Principle.
Nevermind the Engineering considerations. How do you measure a successful reboot? What is your error tolerance?
Or the sociological/ethical implications. Can the environment/society support you? Could you adapt? Would you even be able to assimilate? Would you bring some sort of toxic ethic or moral taint that the society has considered taboo?
Hell, we have kids based on a natural drive endemic to our biology. Now you're talking about placing on our descendants the literal decision of "Should we let this one exist to influence us?"
> You'd have the hardware to run your person on, but you'd have no information on the full informational state stored by which circuits are firing, which are building up to fire, and how fast each one is building up.
You just include that in the brainscan.
> It would be "You" to itself.
Yes. And I think this is enough. Current me would be a stranger to me 20 years ago.
> How do you measure a successful reboot? What is your error tolerance?
There was a sci-fi story where people had backup chips in their brains that measured all activity and learned over years to simulate that activity. If the simulated and real activity agreed to 99.999% over a long period of time, the biological brain was euthanized and the chip took over the body.
IMHO that's quite a good standard for error tolerance: all measurable effects being the same for a long period of time (let's say a year).
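The story's acceptance test could be phrased as a simple agreement ratio over an observation window. The 99.999% figure is the story's; the function names and the discrete-observation model are a sketch:

```python
def agreement(real: list, simulated: list) -> float:
    """Fraction of observations where the chip's prediction matched the
    biological brain's actual output."""
    assert len(real) == len(simulated), "windows must align"
    matches = sum(r == s for r, s in zip(real, simulated))
    return matches / len(real)

def chip_may_take_over(real: list, simulated: list,
                       threshold: float = 0.99999) -> bool:
    """The story's criterion: agreement of at least 99.999% over the window."""
    return agreement(real, simulated) >= threshold

# Two mismatches in 100,000 observations miss the bar (0.99998 < 0.99999).
real = [0] * 100_000
sim = [0] * 99_998 + [1, 1]
print(chip_may_take_over(real, sim))  # False
```

In practice the window would be continuous neural activity rather than a list of discrete samples, but the shape of the criterion is the same.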
I'm not persuaded Pauli Exclusion Principle is making this impossible, it depends on how sensitive our brains are to quantum effects, and I don't think we have data one way or another (do correct me if I'm wrong).
It might be an impossibly high standard, but even then - if my future decisions depend on quantum fluctuations that will be simulated differently on different hardware - my current identity isn't built on those future decisions YET. If we're really throwing dice when making some decisions - I don't consider those dice or their results to be part of my identity now. If I reverted time and got a different decision the second time - I'd still think of that person as me, just in a different mood.
As for disruption of society, economy, etc - sure, these are big concerns. But we're talking philosophy here, not sociology.
The "is it still me?" question pops up over and over throughout the game. It's incredibly thought provoking.
CGP Grey has a good look at the question wrt Star Trek teleportation.
"Consciousness Explained" should really have been titled "Consciousness Explained Away".
I've yet to come across any evidence of a "self" that exists outside a brain.
> Is this a solved philosophical problem for which literature exists?
It's a problem that was hypothesized (and not solved) by philosophy, but as far as I know there's no evidence the problem exists.
You can hypothesize any number of problems but if there isn't any evidence they exist, there's not much urgency in solving these problems.
What if invisible monsters are breeding all around us and building up numbers before they wipe out the human race? There's equal evidence that this could be happening, and the consequences are more immediately dire, so perhaps we should discuss that first?
See also, the germ theory of disease. I think we have this covered already...
Inventing the scientific method doesn't mean you get a carte blanche to call invalid everything you haven't gotten around to understanding with it.
This is not a solved problem and if someone gets this tech it's going to become an issue. A political issue, not a scientific one, because this one is in the region of politics, science doesn't know what's going on here.
Whether people are okay with an idea has very little relation to whether it's true.
> this topic is older than the concept of science itself
On the contrary, what makes science important is that it doesn't just exist in the imaginations of humans, it exists in reality. Incidentally, reality predates humans.
> it's the #1 thing that gives anything around here any meaning whatsoever.
I derive meaning in my life from curiosity, from beauty, from relationships with other people, from fun. Incidentally, all of these things clearly exist, and aren't so fragile that I feel threatened if someone on the internet says they don't exist.
Just because you have chosen to base the meaning of your life on a belief in self existing outside the brain doesn't mean anyone else is obligated to pretend that exists.
I'd argue that the underlying reasons you think the idea of a human self outside the human brain is meaningful is that you associate it with community (religion) anyway. Beliefs aren't necessary to have that.
> This is not a solved problem and if someone gets this tech it's going to become an issue.
I didn't say it was a solved problem; I said it's a nonexistent problem.
> A political issue, not a scientific one, because this one is in the region of politics, science doesn't know what's going on here.
The question of whether a human self exists outside of the human brain is totally within the region of science.
Whether we as a society try to force your beliefs on people is within the region of politics. I would like to see some evidence that the problem exists before we start making laws to solve it.
It has a relation to what degree of evidence people will find satisfactory. Science alone is not sufficient for most people to carry on their lives, and this is one of those topics. This is not an argument for it being true or false, it's an argument against waving it away as a problem on the basis of "well we don't have any proof of it". Of course you don't have any proof of it.
There's no solid proof that solipsism isn't true, either, but that doesn't mean I am going to start believing people around me aren't real and don't feel, just because there's no evidence. Science also doesn't have any concrete evidence that being a dick is not a good idea, should I drop that, too?
> I derive meaning in my life from curiosity, from beauty, from relationships with other people, from fun. Incidentally, all of these things clearly exist, and aren't so fragile that I feel threatened if someone on the internet says they don't exist.
None of these exist if there's no "you" to speak of. Who's curious? If you die every time you go to sleep, curiosity doesn't strike me as a very meaningful concept. That's what I mean by meaning. Basically all the things you list here rely on a stable concept of "you".
> The question of whether a human self exists outside of the human brain is totally within the region of science.
I am not talking about a "human self", there's nothing human about it since it's not tied to a specific body or a body at all in the first place. Sometimes I wonder if the problem is that people are still talking about the old concept of "soul", complete with consciousness and memories. I'm not talking about that, I'm talking about an address.
> Whether we as a society try to force your beliefs on people is within the region of politics. I would like to see some evidence that the problem exists before we start making laws to solve it.
People using devices that upload their minds or break continuity is absolutely going to get political, I guarantee it, and I can completely imagine people being forced and coerced to use such devices because it's convenient to someone else and because "well the evidence doesn't say there's a problem". Because if you go with that belief, you're not doing anything bad, and the person thinking you're murdering them is just ignorant in your eyes.
Hopefully we'll have a more flexible reasoning system by then that didn't completely throw philosophy in the dumpster because we're too smart for it.
I'm waving it away because problems which exist are more important to me than problems that don't exist. You can't rationally expect me to care as much about fantasy as I care about reality.
For someone who admits they don't have proof for their viewpoint and thinks proof isn't needed, it does seem like you tried surprisingly hard to provide proof.
> There's no solid proof that solipsism isn't true, either,
Maybe not in a meaningless hypothetical sense, but there's strong evidence that solipsism isn't a constructive theory. Try this experiment: see how long you can go pretending nobody else exists. I'll be impressed if you make it a day.
> None of these exist if there's no "you" to speak of. Who's curious? If you die every time you go to sleep, curiosity doesn't strike me as a very meaningful concept. That's what I mean by meaning. Basically all the things you list here rely on a stable concept of "you".
Not really. There has to be a me to experience these things, but that "me" doesn't have to be stable in any way really. In fact, I've changed pretty drastically over my lifetime.
> I am not talking about a "human self", there's nothing human about it since it's not tied to a specific body or a body at all in the first place. Sometimes I wonder if the problem is that people are still talking about the old concept of "soul", complete with consciousness and memories. I'm not talking about that, I'm talking about an address.
Okay, that's significantly more reasonable, but that was not at all clear from what you said previously.
> People using devices that upload their minds or break continuity is absolutely going to get political, I guarantee it,
Obviously politics of the future will involve discussing this. I'm not sure how you concluded that I thought otherwise.
This should not be mistaken for thinking that politics results in a better understanding of our reality. Consciousness doesn't exist outside of hardware (brains or computers) even if a judge or congress says it does.
> and I can completely imagine people being forced and coerced to use such devices because it's convenient to someone else and because "well the evidence doesn't say there's a problem".
Well, if that happens I'll be against it. But maybe keep your imagination on topic, since nobody is proposing that as a good idea.
> Because if you go with that belief, you're not doing anything bad, and the person thinking you're murdering them is just ignorant in your eyes.
If a human self doesn't exist outside the human body, then ending the functioning of the human body is absolutely ending the human self. So I'm not sure how you draw this conclusion from anything I said.
If it's done without consent, obviously that's murder. But if it's done with consent, it's not murder. The problem here is consent, not human continuity. We weren't talking about consent until you brought it up just now by saying we might force people to use this technology (which I would not). The technology being described is clearly not something we should be forcing on anyone, and I don't appreciate you accusing me of saying otherwise.
Sure it is, people handwave it away all the time, since it's exactly the same problem as temporal continuity of the self in a changing body.
- general anesthesia is definitely, to some extent, a discontinuity of consciousness. You lose time.
- more extremely, deep hypothermic circulatory arrest actually (temporarily) causes brain death by cooling the body to between 10°C and 20°C. All brain activity ceases for the duration of this deep hypothermia.
Are there risks to these practices? Definitely. Some may argue that someone who undergoes general anesthesia isn't the same person. But I would argue that it is, despite the discontinuity, and despite some risk of personality changes.
The you that is now, pre-preservation, will be dead.
The you that is then, post-restoration, will be alive and believe and behave as if they are you.
It's convenient that this process (as described) kills you, because that simplifies things quite a lot. Scanning is more complicated, and I suspect if technology ever arrives that allows for it we will treat such scans as "heirs of the body" -- meaning they will only have such assets as you give them, although of course the IRS will still find a way to get them to pay your taxes.
It's a kind of insurance, because you won't be able to pay the company after the rapture, so you have to pay now. Also, the company has to be manned by evil people (as a matter of fact and marketing), because if they were good they wouldn't stay behind after the rapture to execute their contractual obligations.
The best kind of commerce is when you take money and don't have to deliver anything in return; however sometimes customers feel cheated. The bestest kind of commerce is the same, but the customer is happy.
But, SV should be wary of becoming its own caricature.
* which grew and pruned themselves the same as organic ones
And this criticism of the rich embracing it first totally misses how technology develops. The rich are always the early adopters, and the high prices paid by them enable the industry to scale up which gradually brings the price down.
If you look at any technology over the last 50 years, you see this pattern at work: PCs, tablets, smart phones, etc.
The first generation was always expensive and only affordable for the rich.
Let's say we find a way to eliminate death or aging. It's expensive at first so only the very wealthy can afford it. Now you have a set of rich people in positions of control that won't age out of them. They'll live forever so they'll continue amassing wealth, if they don't give up control then a new generation with new ways of thinking about the world won't be able to take the helm. I mean, lots of people aren't able to really internalize new ideas in their own current lifetimes, let alone these hypothetically endless ones. The people in control get wealthier, inequality increases, new ideas have a harder time taking hold. Even if this life extension eventually trickles down to the rest of humanity would they ever be able to catch up? Would they want to live an eternal life subjugated by the elite?
Also, assuming this life extension is biological and not digital all of a sudden we have lots of people. Necessity is the mother of invention and all that so hopefully we figure out a way to move excess people to other planets/space colonies, but what if that takes too much time? What if we increase the number of people faster than we can solve the scaling problems? Food, energy, waste, ways of transporting these excess people elsewhere. If we don't solve all of them we're talking about periods of extreme pain and tumult. What happens if we need to put a cap on the number of people while we figure some of them out? Are some people now not allowed to have children? Are some people not allowed to live forever? Who decides the answer to these questions? Hopefully not the ultra rich & powerful class that's formed - that might not be good for the average folks.
Anyways, maybe I'm being pessimistic and on the time scales we're talking about we'd figure out reasonable solutions, but this whole area still feels pretty hand-wavy today when I hear people talk about eliminating death. Death, for better or worse, is a sort of equalizer; remove it and a lot of the problems that exist today could get worse.
This is even putting aside questions about what happens to a hypothetical brain that lives forever. Do we run out of space for new memories? Do we start forgetting things after enough time? If so, is the person we are 500 years from now, who can no longer remember their first 100 years, the same person as us today?
From a wealth inequality perspective, there's no difference between living forever and accumulating wealth or having a finite life and your dynasty living forever and accumulating wealth.
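The equivalence claimed above can be sketched numerically: under steady compound growth, the final fortune is the same whether one immortal holds it the whole time or a dynasty passes the estate down intact each generation. The rate, horizon, and starting sum below are purely illustrative assumptions.

```python
import math

# Assumed, illustrative figures:
r = 0.05             # annual growth rate
initial = 1_000_000  # starting fortune

# One immortal holding the fortune for 90 years:
immortal = initial * (1 + r) ** 90

# A dynasty of three 30-year generations, estate transferred whole each time:
dynasty = initial
for _ in range(3):
    dynasty *= (1 + r) ** 30

# (1+r)^90 == ((1+r)^30)^3, so the two paths end at the same wealth.
print(math.isclose(immortal, dynasty))  # True
```

The real-world difference is the frictions the math ignores: estate taxes, heirs splitting or spending the fortune, and drift in how it's managed.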
Even if it were true that immortality means some individuals accumulating far larger amounts of wealth, I think a pretty good case could be made that it's beneficial for humanity for them to do this rather than spend it on increasing the number of descendants they have. High net-worth individuals do things like create space flight companies.
In the short term, that's great, incredible even: everyone lives! In the long term though, do we stagnate on the same set of brains, who are set in their ways from potentially hundreds of years of life and only capable of offering a certain amount of insight on any given topic? There's less incentive to have children (you can always do it later), and there may be physical constraints that limit the number of people the world can support. Without that infusion of new ideas, does human development grind to a halt (or at least slow considerably)? There are no fresh eyes coming to scientific endeavours, maybe restricting the sparks of innovation or new ways of thinking that lead to breakthroughs, and culturally there's no one unburdened by the past pushing the boundaries of the arts etc.
Now you may say (and I might even agree with you) that you'd rather be alive in a stagnant world than dead in one that keeps moving, but I feel there have to be limits to that. There's probably enough going on to keep you engaged for hundreds of years, but beyond that, a stagnant, unchanging world, with the same people, the same culture, the same knowledge etc. in perpetuity seems incredibly daunting. Would people need to choose to die when they've had their fill to keep the ball rolling? Would enough people choose that, rather than clinging to a meager existence, to matter? Would that first generation of post-death humans last long enough that recovering the pace of development would even be (reasonably) possible?
Let's just say that on a personal level, I don't want to die (and a life _extension_ would be incredibly welcomed as there's plenty to do beyond the 80-or-so years people generally live), but the societal consequences of such a thing could be deeply negative.
This will open up countless spaces for human expansion, where new humans can settle.
Also FWIW it was deliberately tilted negative to illustrate that eliminating death is not _necessarily_ a good thing, but there will most definitely be major advantages to it as well.
On the other hand, if the continued operation of this service depends on later recruits, that's quite the pyramid scheme that you are participating in.
IIRC, in one of Larry Niven's novels, the funds of the frozen dead had been confiscated because they were monopolizing global wealth, and the living were indebted to the dead.
Take a look at the picture in the article. You’re really just getting the shape of the synapse at best.
I completely disagree. Natural death is not a beautiful thing, and is sometimes painful. Even if I don't want to live forever, I'd like to be given the choice of when to die, basically when I'm tired of living.
Maybe you still don't agree, but would you have thought differently if all your friends and family also have that choice?
It's less a moral issue than a logical issue - "Let a person contribute to the world until they feel they have nothing else they can do" isn't a terribly unreasonable idea.
Do we even know it's just the very material shape, not some quantum or magnetic field magic along with it?
Apparently it started in the Civil War, and became a nation-wide sensation after Lincoln's corpse did a three-week tour in a "funeral train":
It seems a bit narcissistic to know you will never open your eyes again, but you would still like a copy of you to exist.
“Burdening future generations with our brain banks is just comically arrogant. Aren’t we leaving them with enough problems?” Hendricks told me this week after reviewing Nectome’s website. “I hope future people are appalled that in the 21st century, the richest and most comfortable people in history spent their money and resources trying to live forever on the backs of their descendants. I mean, it’s a joke, right? They are cartoon bad guys.”
We are on the verge of great shifts in climate and energy resources. Millions of humans are already migrating across continents in search of food security and an escape from perpetual war. We're already seeing climate refugees from vulnerable island nations, consumed by the rising seas. Entire governments have been toppled in the Arab Spring, replaced with authoritarian regimes installed to protect the borders of the west.
And a few of the richest people in human history are using their unimaginable resources to try to live forever.
It already is happening in some sense. Prices for many drugs people need to cope with chronic disease continue to rise. The price of insulin, for example, has tripled in the past decade.
>Tech dissent is not much appreciated in yc, esp when a yc fellow is backing this.
If you're accusing me of luddism, you've missed my point entirely. Do you think we should automatically fall in line when marketing materials are posted to this discussion forum?
The idea that there is less than a 1-in-10,000 chance that in 10,000 years we will be able to run brains in simulation, given that over the last ~50 years of development we've produced hardware literally nine orders of magnitude faster than the brain, done so in a mass-producible fashion, made unbelievable strides in neuroscience, discovered CRISPR and created synthetic biology, gone from AI-as-theory to computers that can talk, describe images, and synthesise photorealistic faces, and gone from technology-as-niche to CPUs on plastic that cost literally a single cent... is frankly a blunt unwillingness to consider the issue.
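The "nine orders of magnitude" figure is a back-of-envelope comparison, and the result depends heavily on what you compare. One way to reproduce it, assuming an average cortical firing rate of roughly 1 Hz against a modest ~1 GHz clock (both rough, assumed inputs, not measurements):

```python
import math

# Rough, assumed inputs for a back-of-envelope comparison:
avg_neuron_rate_hz = 1.0  # average cortical neuron firing rate, ~1 Hz
cpu_clock_hz = 1e9        # a modest ~1 GHz processor clock

# How many powers of ten separate the two switching rates:
orders = math.log10(cpu_clock_hz / avg_neuron_rate_hz)
print(f"~{orders:.0f} orders of magnitude")  # prints "~9 orders of magnitude"
```

Comparing peak firing rates (~1 kHz) instead would shave a few orders off, which is why such figures should be read as rhetorical scale-setting rather than precise measurements.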
> fully refundable if you change your mind
If they network you with others, it'll be a true system of narcissists.
I'm not really on Greg Egan's page of things here.
Why not exactly me?
This isn't immortality, this is a company killing you then promising to create a clone of you with your memories at a later time.
Because science doesn't have evidence for the presence of "you"s, and it's fashionable among intellectuals these days to only believe scientific ideas have validity.
An amazing work that had some help from some of the guys at Less Wrong.
One of the most enlightening and funny things I've read in ages.
Here is a great talk that led me to reading it:
TL;DR: It's a superintelligent AI that just wants to make humans happy... with Friendship and Ponies.
Another variant is the concept that every decision we make we actually split into two universes, one making one decision and the other making the other decision.
But, anyway the scenario is set up, the question is why does that particular brain have that particular consciousness?
It is easier to see in the splitting scenario. Why do I end up being the consciousness that chose A instead of B? A materialistic view does not have a satisfactory resolution of this issue, whereas the common sense view that I actually am choosing A and not B and there is only one me easily accounts for the perception of such. It is only within a strictly materialistic worldview that common sense accounts run into opposition, which would be a good reason to look beyond materialism.
If the mind is software then it can be duplicated. There is no dilemma here, and it's irrelevant what the copies think of it.