This doesn't do anything to dissuade me of the notion that the singularity movement is a religious movement. The comfort of hope in an afterlife and hope in scientific immortality seem quite similar. While research into singularity and anti-aging could be better funded, I'm not sure why we need to 'believe' in these things or make sure we join the correct denominations (transhumanist vs non-transhumanist).
Besides, unless the universe is infinite we're just arguing about timeframes. I prefer not to think of death as annihilation, because death doesn't mean that we never existed; we each have our unique portion of space-time.
This doesn't do anything to dissuade me of the notion that the space race is a religious movement. The comfort of hope in ascending to Heaven and hope in engineering feats seem quite similar. While research into aeronautics and rocket motors could be better funded, I'm not sure why we need to 'believe' in these things or make sure we join the correct aerospace company.
Besides, unless the universe is infinite we're just arguing about distances. I prefer not to think about death as annihilation because I'm afraid of turning into worm food.
I'm not sure what point you think this makes. That comment would be an appropriate response to someone writing an article about how space travel will free us from the fear of death and how we can only really have hope if we are spacehumanists.
Your comment was similar to one I was going to make... It's interesting to see that people still seek that feeling of meaning (in life) and hope (after death) even when they do not believe in a God/afterlife.
How does one find value in life when - in the long run - their entire existence will be forgotten and their species just a blip on the timeline of Evolution? If you think about it, on the infinite (or even extremely large) timeline of this universe, our individual value pretty much equals 0.
Sure, in the short run it is evident that one's life can have value and purpose to humanity. Many powerful leaders and wars have proven this. But so what? Our egos won't be around to reap the glory of knowing that either. Zooming out to the larger timeline - or the big picture - of this universe, I can't see how someone finds value in this present life without a convincing belief in either:
1) An afterlife
2) The existence of a solution to mortality
Life without one of those beliefs would be like working hard with the goal of a profitable business but knowing that in the long-run my business would never turn over a single dollar of profit. It's pretty obvious that the big picture negates the short-term goals and meaning of that work.
3) The possibility that, after fulfilling the manifest destiny of implacably rampaging across our universe, our descendants will eventually tease apart the fundamental mathematical equations describing the foundations of our existence and find our universe not alone, with nothing but the Greek chaos surrounding it, but amid a vast collection of universes for our future generations to spread to and conquer.
Goals of Man
☑ Don't get eaten by a lion
☑ Get out of Africa
☐ Get out of Earth
☐ Get out of Solar System
☐ Get out of Galaxy
☐ Get out of Local Group
☐ Get out of Earth-Visible Universe
☐ Get out of Universe
Life should be like running a profitable business, which you know will not last forever. The point of your business is giving a value to your customers today. Eventually a competing business will cater better to your customers' needs and put you out of business. This doesn't take away the value you've given your customers, nor the money you made.
Going one step further, the business does not even need to be profitable. You can find value in activities which are economically and socially useless most of the time, like art for example.
Yeah, I think I'll stick with that "Get out of Earth" one, but thanks for the suggestion.
"On a grand scale we simply want to save the world, so obviously we're just letting ourselves in for a lot of disappointment and we're doomed to failure since we didn't pick some cheap-ass two-bit goal like collecting all the Garbage Pail Kids cards."
"Thanks, those are interesting links. Still, nothing there that seems especially compelling.
Maxwell's demon assumes the energy cost for sorting the molecules is less than the net gain. That looks like another variant of the perpetual motion machine.
The fluctuation theorem says the 2nd law doesn't hold with certainty for very, very small systems, but the probability that it holds climbs exponentially as the system scales. So it is fallacious to say the 2nd law may not hold at the macro level because it can fail at the micro level, since the very theorem used to support that idea shows the probability of violation vanishing as the system grows."
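A rough numeric illustration of that scaling, using a schematic form of the fluctuation theorem (the numbers are illustrative, not from the source):

```python
import math

# Schematic fluctuation-theorem relation:
#   P(entropy change = -A) / P(entropy change = +A) = exp(-A),
# with A the time-integrated entropy production in units of k_B.
# Larger systems and longer times mean larger A, so the odds of
# catching the 2nd law "failing" vanish exponentially with scale.

def violation_odds(entropy_production_kB):
    """Relative odds of a fluctuation that decreases entropy by A (in k_B)."""
    return math.exp(-entropy_production_kB)

odds_tiny  = violation_odds(1)    # handful of molecules: non-negligible
odds_small = violation_odds(100)  # still microscopic: already hopeless
```

Even at A = 100 k_B, still far below any macroscopic scale, the odds are already astronomically small, which is the point being made above.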
How about a tiny, molecular-sized ratchet that would absorb vibrations due to heat and turn one way, but refuse to turn the other way due to its ratchet teeth? These could then be hooked up to a generator for free power.
There are limits to computation, but nothing prevents computers from achieving human- or superhuman-level intelligence. I suspect that 50 years from now we will have human-level AI for a wide range of tasks. We have already passed that point for chess, and one by one other things will fall.
What really prevents the singularity is the limits of the physical world. The speed of light is already a problem: PC RAM is never going to drop in latency without moving it closer to the CPU. And I don't care how smart you are, you can't build a 101% efficient engine, etc.
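The RAM-latency point can be checked with back-of-envelope arithmetic (the 10 cm distance is an assumed, illustrative figure, not from the source):

```python
# Lower bound on memory latency from the speed of light alone:
# a signal must travel to the DIMM and back, and nothing beats c.
C = 3.0e8  # speed of light, m/s

def min_round_trip_ns(distance_m):
    """Minimum signal round-trip time in nanoseconds, ignoring all other delays."""
    return 2 * distance_m / C * 1e9

# A module ~10 cm from the CPU costs at least ~0.67 ns per round trip,
# no matter how clever the controller is.
floor_ns = min_round_trip_ns(0.10)
```

Actual DRAM latencies are tens of nanoseconds, so this floor is not yet the binding constraint, but it shows why latency can only keep falling by moving memory physically closer.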
Are there any real signs of progress in computers that could lead to them achieving human-level intelligence? Look, a computer beat the world chess champion over 10 years ago, but computers are nowhere close to doing that in Go anytime soon: http://ru.ly/34
Like CAPTCHAs? I would say the number of things computers can do better than people is growing. They are not unified in a single project, but computers are gaining around 100x in computational ability every 10 years, so the gap between 1% as smart as a human and human-level intelligence can close rapidly.
Granted, the human brain has around 100 billion neurons, versus simulations of around 10,000 neurons with about 30 million synaptic connections between them. However, 100,000,000,000 / 10,000 is a factor of 10,000,000, which at 100x per decade is around 35 years unless the software becomes more efficient.
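Spelling out that extrapolation (the neuron counts and the 100x-per-decade rate are the comment's own assumptions; the arithmetic actually gives about 35 years):

```python
import math

# Gap between a human brain and a (then-current) simulation, by neuron count.
brain_neurons = 100_000_000_000   # ~10^11
simulated_neurons = 10_000        # ~10^4
gap = brain_neurons / simulated_neurons   # factor of 10,000,000

# At 100x more computation per decade, how long to close that gap?
growth_per_decade = 100
decades = math.log(gap) / math.log(growth_per_decade)  # 3.5 decades
years = decades * 10                                   # 35 years
```

Software efficiency gains, or needing more than raw neuron counts, would move the estimate in either direction; the point is just that the exponential closes a seven-order-of-magnitude gap surprisingly fast.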
PS: I don't think directly simulating a brain is the most efficient method of reaching human level AI but it's hard to see how it would fail.
No, we haven't. Computers play chess by searching through enormous numbers of possible continuations, then choosing the best-scoring one. At first glance that seems to be how humans play chess too, but if you think about it, humans are much more spontaneous than that. We haven't succeeded in coding "spontaneity" yet.
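For reference, the kind of search being described is minimax; here is a toy sketch over a hand-built game tree (real engines prune aggressively, use heuristic evaluations, and never examine all outcomes):

```python
# Depth-limited minimax over a toy game tree.
# A node is either a numeric leaf score or a list of child nodes.

def minimax(node, depth, maximizing):
    """Return the best achievable score for the player to move."""
    if isinstance(node, (int, float)):
        return node                      # leaf: static evaluation
    if depth == 0:
        return 0                         # sketch: neutral score at cutoff
    scores = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two candidate moves, each met by two opponent replies.
tree = [[3, 5], [2, 9]]
best = minimax(tree, 2, True)  # opponent minimizes each branch; we pick the max
```

Here the opponent holds the first branch to 3 and the second to 2, so the maximizing player's best guaranteed outcome is 3, even though 9 is reachable in principle.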
There's no contradiction between Malthus and the coming of the singularity. Malthus essentially said that resources tend to be scarce, but there's no reason that they couldn't be scarce post-singularity.
This line of thinking leads to doom scenarios, when you think about how scarce resources would be allocated (bottom line - probably not to humans), so it does contradict singularity-as-the-promised-land scenarios.
Scarcity means that the curve will break. The singularity is the notion that the growth will continue indefinitely at an ever wilder pace until it reaches a singularity. I'm saying it won't happen because technological advance will run into scarcity of resources, just like any other complex system. Malthus was just an example of this, since his thinking was much the same as Kurzweil's in regard to extrapolation of datasets.
Whether this will happen before or after transhumanism I don't know. But like any other exponential curve it won't go on forever.
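The "curve breaks" argument can be sketched by comparing pure exponential growth against logistic growth with a resource ceiling (the rate and carrying capacity here are illustrative assumptions, not claims from the source):

```python
import math

def exponential(t, rate=0.5):
    """Unbounded growth: the Kurzweil-style extrapolation."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, K=100.0):
    """Growth limited by a carrying capacity K: the Malthus-style curve."""
    return K / (1 + (K - 1) * math.exp(-rate * t))

# Early on the two curves are nearly indistinguishable, which is why
# extrapolating from early data can't tell them apart...
early_gap = abs(exponential(1) - logistic(1))   # small

# ...but later the logistic flattens out near K while the exponential
# keeps climbing without bound.
late_exp, late_log = exponential(20), logistic(20)
```

This is the whole disagreement in two functions: both fit the historical data, and only the resource constraint decides which one the future follows.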
"Singularity" here doesn't mean "infinite improvement", it means "such a big improvement that, from the point of view of people trying to predict it now, all bets are off". At least, that's how Vinge used the word and I think it's the commonest usage among present-day singularitarians like Yudkowsky.
One thing I never got: sure, it's certainly possible that one day the technology will come into existence that will create beings (be it computers or genetically modified brains or whatever) that will be as superior to humans as we are to ants or rabbits. How on Earth does the singulatarian paradise follow from this premise? Doesn't it seem at least just as likely that those super-intelligent creations will eradicate humans, or enslave them, or do something to them which we don't have a concept for in our limited brains, and which is different from happy immortal existence?
"Singularity" doesn't entail "paradise" or anything like it. That would be why Eliezer Yudkowsky, the author of the piece linked up at the top there, has dedicated most of his adult life to the question "If we figure out how to make artificial intelligences, what do we need to do to have reasonable confidence that they aren't going to do awful things?".
Okay, take the phrase "such a big improvement that, from the point of view of people trying to predict it now, all bets are off" and replace "improvement" with "change". The Singularity doesn't have to be good to still happen, unfortunately.
Lately I have read a lot about the singularity, and I am very confused about why people are for this concept. Extending life through brain simulation, nanomachine support, and personality uploads does not sound like you will still be "alive". Does that make me a vitalist? A Luddite I am not.
It is easy to debate for hours about the feasibility of singularity. I understand the accelerated returns and all,
but I think that much of this stuff, and the very-near milestones, stems from wishful thinking by proponents. "Rapture" within a lifetime, immortality: I think these are still too far-fetched, and I cannot understand how proponents of the singularity swear by this timeline with utmost faith.
BTW, does cryonics work now??
Since Yehuda's body was not identified for three days after he died, there was no possible way he could have been cryonically suspended. Others may be luckier. If you've been putting off that talk with your loved ones, do it. Maybe they won't understand, but at least you won't spend forever wondering why you didn't even try.
I am reminded of a few lines from a talk by Richard Feynman:
For instance, the scientific article may say, "The radioactive phosphorus content of the cerebrum of the rat decreases to one-half in a period of two weeks." Now what does that mean?
It means that phosphorus that is in the brain of a rat -- and also in mine, and yours -- is not the same phosphorus as it was two weeks ago. It means the atoms that are in the brain are being replaced: the ones that were there before have gone away.
So what is this mind of ours: what are these atoms with consciousness? Last week's potatoes! They now can remember what was going on in my mind a year ago -- a mind which has long ago been replaced.
To note that the thing I call my individuality is only a pattern or dance, that is what it means when one discovers how long it takes for the atoms of the brain to be replaced by other atoms. The atoms come into my brain, dance a dance, and then go out -- there are always new atoms, but always doing the same dance, remembering what the dance was yesterday.
I found this fascinating: what we are is patterns. We have a built-in desire to extend the patterns for as long as possible, but at the same time we don't want to preserve the patterns in a fixed state, or we would never learn or grow.
Simultaneously, we are part of larger patterns which -- like the atoms in our brains -- we enter into, subtly alter, then leave. Until we have better ways of extending life, our influence on the larger patterns is the most enduring thing we can have. It is some comfort against the prospect of annihilation, however small.
Yeah, I similarly don't get the fascination with brain simulation. Say someone told me they had an exact simulation of my brain running, and I was convinced beyond a doubt they were right: would I be fine with dying at that point? No.
To paraphrase Samuel Clemens:
"I don't want to gain immortality through brain simulation. I want to gain immortality by not dying."
Yeah, that's a bit more convincing. I'm mainly skeptical about the idea of consciousness replication. My statement was meant to show that even if the brain simulation created a consciousness, it wasn't the same as my consciousness, which would still disappear if I were to die.
But there are so many wrong things with this article, it's difficult to know where to start.
Angry at the "way-things-are"? Does that even make sense?
It feels like he's saying "everyone is dying, and what is the government doing?!"
"When Michael Wilson heard the news, he said: 'We shall have to work faster.' "
I mean, can one really say that seriously, outside of a poor SF story? So this is a rational, skeptical person, and not a cult member? Although I'm not religious, I think his religious relatives seem to have a much wiser attitude to life.
It makes sense to (at least) the extent that you have grounds for thinking that the-way-things-are can be changed, which Eliezer does. In any case, he's describing how he feels, and it's not obviously wrong to feel "angry enough to do X" just because X is impossible.
What exactly is your objection? That EY et al think death can be more or less completely abolished, and that's obviously wrong, so they're probably being irrational? (If so: why is it obviously wrong?)
Thank you for posting this. We do need to work faster. In the day-to-day actions of waking, working, and sleeping until the next day, we tend to lose sight of the long-term goal. At least I do. I am somewhat reassured my younger sister will live long enough, though I am not sure about my mom. I've had to accept my grandmother won't, as my grandad died in March. This is a good reminder to all of us who think there is an answer.
So, I agree (strongly) with the statements and feelings in this discussion. I am a transhumanist in the sense that I do also have this burning hope that we will defeat this inhuman universe and repeal death.
How can I get involved in helping with this? Where are the communities? How can I assist (other than donating money)?
Fight the moonlight, drama queen. Falling is not the problem, ego is. His sense of entitlement is baffling. Why does he want to be some sort of flying god if he's a human?
I'm all for the progress of science toward winged machines, but demanding flight right now... is quite over the top.
(Edit: I do understand how someone could be surprised by his sense of entitlement. I don't see, however, how it follows that we should not at least try, just because the problem sounds really, really hard. Humans had never flown, but that didn't mean humans would never fly. Same goes for many things, including death.)
Your parody (?) doesn't work. For one, as far as I know there was never a group of "flightists" crying over the fate of non-flying humanity, blaming God or Mother Nature, talking about foolish non-flightists...
There's a big difference. Avoiding death is not merely a hard technical problem. It's a moral and existential issue. If some people became immortal, imagine the whole lot of new ethical, existential problems we would have.
I'm surprised how easily people equate "duplication of the mind" with "immortality". I'm agnostic when it comes to god, and I'm agnostic about that piece of the singularity. Maybe someday we'll have enough knowledge of the mind and reality to know for sure that "the self" really is just a bunch of organic bits in the ol' noggin, but anyone claiming to know that answer right now is too presumptuous.
At the bottom update, he proposes that standard procedure should be to cryogenically freeze unknown corpses. I as a taxpayer do not want to shoulder this burden. As time goes on, the expense of keeping unknown corpses frozen continually increases as they pile up.
The per capita expense of keeping a thousand frozen heads in a pool of liquid nitrogen is essentially negligible. You wouldn't notice it as a budget item. If it really bugs you, fund it yourself through private philanthropy. But out of all the things that government was ever proposed to do, this would be the most cost-effective in the history of time - even I, a libertarian, draw the line at not doing this. Cryonics is cheap, people are crazy.
It depends how far away our ability to reanimate them is. I don't know how many thousands or tens of thousands of unidentified people (or ones otherwise taken care of by the state) die every year. In a stable population the per capita cost of keeping them all would keep increasing.
Moreover, what would we do with them? Copy their consciousness into a computer some day? Who pays for that? And does it really soften the blow of losing a loved one? I don't think having a copy of my dead mother would make me feel any better about the real one having passed on.
Would we grow them new bodies? What does that cost and who pays for that? Would they even want that? Do we have the right to make that decision for them? If we reanimate everyone who dies, how quickly can our country no longer sustain such a rapidly increasing population?
All of this just seems like an illogical reaction to the death of a loved one. I don't mean that to be rude (I've had my own illogical reactions) but it's just not feasible, especially for a technology that we're not sure will ever amount to anything.
We're operating on different technological assumptions here. Cryonics reanimation takes nanotech and quite possibly machine superintelligence, both of which are self-replicating technologies. The software might be expensive, as 'twere, but that's a one-time cost.
Regarding the thing with "copies", that's a complex issue and I can only refer you to the entire sequence on personal identity on Overcoming Bias:
Umm, and remember there's no convincing reason to believe cryonics works. Sure, we've cooled down some animals and brought them back. But keeping a head near absolute zero for years and expecting to revive a consciousness? No evidence for it. None.
There are no revivals yet. That's expected - it isn't evidence.
If you can't examine the facts, can you examine the theory?
What is the theory? There's two I know of: biological revival, and informational "upload". The first sounds dubious to me - there's an awful lot of damage to repair, and the nanotech to do it is not even on the drawing board, let alone the knowledge to guide repair. The second sounds much more achievable based on some sort of slice, dice, scan and digitize process, followed by computer emulation. (See recent work by Anders Sandberg.)
What would be required for upload? Continuity of information. (Continuity of the potential for biological life is NOT required.) Do we have evidence for continuity of information? We know that vitrification tech has gotten pretty good recently. (Wikipedia: a rabbit kidney was vitrified, thawed, and worked.) That suggests enough information could be preserved for upload.
Conclusion: there is convincing reason to believe cryonics should work.
I read that again and I'm not sure that summary helps a lot. I'll just write about what I've read turned into in my mind, where it merged with my own experience and other things I've read.
Schopenhauer's main point is that death doesn't exist because time is not real. This is not very comforting unless you can imagine what it means that time is not real. I can't.
He does point out that there is no rational reason to fear death. The fear of death is an instinct, like aggression for example, which you can repress and even overcome. It's useful in some situations, as when it makes you go to the doctor because you're sick, and useless in others, as when you're terminally ill. People who know they will die can overcome the fear of death and learn to accept it.
( see http://en.wikipedia.org/wiki/K%C3%BCbler-Ross_model )
So fear of death is not a reason for avoiding death. Is there a rational reason to consider death as something to avoid? Death is as natural as birth. The process of evolution even requires the cycle of birth and death.
What bothers Eliezer Yudkowsky is not death; it's the suffering caused by the loss of someone dear. Making people immortal, however, will never rid life of suffering.
Of course there's a rational reason to avoid death (and hence, to consider death as something to avoid -- which, yes, is not at all the same thing as fearing it). Aren't there a whole lot of things you want to do? Well, if you die now then you won't get to do any of them. If you've achieved everything you want to in life, then perhaps you no longer have any reason to avoid death. Most people haven't.
But, actually, I think you'd have reason even then to avoid death, if like most people you instinctively hate or fear it. It's rational to pursue things you like and avoid things you dislike, if anything's rational at all.
I couldn't help thinking some more about what you wrote.
The things I want to do, like reading a specific book, only make sense if I am alive. If I die, my desire to read the book disappears. I would not say I want to live in order to read this book. I would say I want to read this book if I am alive.
But I agree with you that people can sometimes have a sense of purpose, of things that should be accomplished in their life. I remember I had this a long time ago. I'm still working on what I then thought I should do. In the meantime I have lost the feeling I must absolutely live so that my work can be accomplished. But the fact this feeling exists does make me think there is something wrong with my argument.
You are right that it is rational to pursue things you like, avoid things you dislike, and follow your instincts. At the same time, your different instincts are often in contradiction, and you rationally choose which one to repress. So the fact that you have an instinct doesn't mean you should never repress it, and having a survival instinct doesn't necessarily mean it would be good to live forever.
You're right. The argument about overcoming the fear of death is really not sufficient to make that point.
Let's leave it at that: there are good arguments in Schopenhauer for not considering death undesirable, and many of them are not necessarily dependent on his idea that time is whatever he says it is. I haven't read them carefully enough to convince anyone, let alone summarize them in a few paragraphs, but I do think the source is worth pointing out.
Am I one of the few people who looks forward to death? To know that there WILL be an end to things is something of a comfort to me.
I'd like to live for a time, sure, but even five hundred years might be too long. I'd either get driven insane by the constant bow-shocks of ever advancing culture, or I'd lapse into some kind of boredom-induced stupor.
I have my little slice of spacetime, to do with as I wish. I could live a great and noble life, or a horrific and cruel one. I could live another five decades, or if I wanted to, end it all tomorrow.
No matter what I may do, or not do, in the end... there are no regrets, no looking back. Just... the steady decay of my body, the essential atoms of my makeup being reused by other organisms, processes, chemical reactions.
Really? Or is that just a coping mechanism to deal with an inevitable reality?
I wake up every morning an evolved version of myself - how could I ever become bored as a result of becoming ancient and wiser?
I want a bigger slice of spacetime. If you were awarded three wishes, sure, you could wish for something nice, or you could wish to have your wishes vanish... and that is control. But the ultimate answer to the game is to wish for more wishes: it is the only rational answer that gives you any real control over this wonder.
'The steady decay...' is a disease. If you had a curable disease, would you so effortlessly arrive at the same conclusion? Is it characteristic of a human stricken with, say, cholera to lie in bed, slowly dying in pain, with 'no regrets'? No! We are creatures of survival, of sustainability, of life. We will rise to our feet, or pick up the phone, and find the treatment, and live another day of beautiful human experience.
"I'd like to live for a time, sure, but even five hundred years might be too long. I'd either get driven insane by the constant bow-shocks of ever advancing culture, or I'd lapse into some kind of boredom-induced stupor."
It's not wrong either for the rest of us to be curious.
I'm curious both about living for 1000+ years, since no human has ever lived so long, and about what happens after death. Given the chance at a 1000-year lifespan, I'd readily take it: Singularity or no, I will still die eventually, given the very long but finite lifespan of the universe, so the long lifespan would let me experience both.
Although a "healthy fear" is certainly in order, I don't understand the reflexive hatred of death. I think it's more likely that consciousness is nonphysical and that there is some form of spiritual survival. Ian Stevenson's reincarnation research is pretty solid. (If I'm wrong, I'll never know.)
But then most atheists also succumb to comforting lies, and make excuses for death even less defensible than the outright lies of religion. They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day.
I would like to think that I, as an atheist, don't have to rationalize the unknown, and can accept that we just don't have answers to the "why" of death. I realize he is in pain, but I don't think I'm lying to myself about death just because I don't have some overarching dogma to believe in.