Hacker News
Google vs. Death (time.com)
187 points by weu on Sept 18, 2013 | 132 comments



So this article was presumably written beforehand and embargoed to coincide with the official announcement of Calico. While I guess there's nothing wrong with that, it does make me wonder what kind of pre-approved message is supposed to be imparted by a premeditated PR campaign.

Perhaps it is just to avoid scaring investors, going by the concern-assuaging in the second paragraph of Larry's post.

One of the Google founders was also behind funding the first purely lab-grown burger, eaten in London a few months ago. That also had an obviously professional PR campaign attached to it, although little of this was explicit in the press. A professionally edited HD recording of the event appeared on the BBC web site within minutes of the story breaking, so some PR machinery was clearly at work.

But that event wasn't promoting Google or anything else: no commercial branding was attached to the copy or the video itself, and no mention was made of who organized or funded it. I have no idea what the message or point of that campaign was, either.

What are these campaigns trying to tell us?


I think you're reading too much into it. Of course there's a premeditated PR campaign -- it's the launch of a major new initiative!


Google does everything in a very orchestrated campaign. Bash AdWords or Search here and you will see "them" show up to defend it.

But unless Google has already transferred $x billion to Calico's bank account, I have to laugh at the 20-30+ year investment agenda and the idea that Google, unlike other companies, will be in it for the long run. This will be shut down the minute AdWords growth slows... and they're running out of places to put ads due to over-saturation, while "free" traffic to sites is already disappearing at an alarming rate.

Google, Page, and Brin should each have put in $500 million and then pledged regular contributions.

Or maybe this is Brin's divorce package: Calico will buy 23andMe for a lot of money ;-)


You don't have any information on how much money they have invested.

All you do is speculate and inform us about your aversion towards Google.

And the enlightening insight that Google would probably drop this project if their profits started going down isn't really informative, unless you think that everyone here has trouble understanding fundamental logic.


>>"You don't have any information on how much money they have invested."

I never said I had the info. The people who know chose not to say. I speculated, which is all we can do.

>>"And the enlightening insight that Google would probably drop this project if their profits started going down isn't really informative unless you think that we all here have problems with understanding fundamental logic."

It's not that informative, I'll give you that; it's Wall Street 101, and most people know it by now.

>>"about your aversion towards Google."

Not sure blind love and gullibility are any better. And that assumes I have an aversion to everything Google.


> Google does everything in a very orchestrated campaign. Bash AdWords or Search here and you will see "them" show up to defend it.

I would assume this is because a lot of Google engineers read Hacker News and they want to defend their work, rather than because of an orchestrated PR campaign.


Actually, no. I never read Hacker News of my own free will. What happens is, there is a team in the PR division that scans all the comments for anti-Google bias. Then, company-wide, Matt Cutts, and sometimes Larry Page, sends out an all-hands email asking all Googlers to visit HN, vote down the comments, and post rebuttals. Stock bonuses are given for the best rebuttals, and if you get someone hellbanned, you get to fly on Larry's private jet, although I've heard they have killed that perk now that the cheap jet-fuel deal from NASA fell through. Ah well.


> Google does everything in a very orchestrated campaign.

Who doesn't, at that level?


When Google sponsors an awesome project like this, I see the tech-company equivalent of Nike sponsoring LeBron James for $105 million. Even if the R&D angle doesn't work out, Google gets real value (i.e. indirectly interchangeable for money) in terms of PR and recruiting. Consumer-products companies pay huge amounts of money for PR. Google is just doing the same thing in a typically Googlish way. I don't think it's purely cynical, though: Google, and Larry Page in particular, have been talking about making the world a better place for 15 years in a way that makes me pretty inclined to believe them. But the side benefits must counterbalance the longshot-ness quite a bit.


It's a bit riskier than sponsoring a popular athlete, I think. Few people think we shouldn't play sports, but there is a surprisingly large group of people who believe we shouldn't be engaging in research like this. Reasons include religious, environmental, and social concerns.


There are plenty of people prepared to question the veneration of professional athletes.

(Though I should concede that they are not anything like a majority)


When I first read about Aubrey de Grey, I instinctively knew that he had the right approach to the problem of aging and immortality but I also suspected that he would run afoul of the religious, environmental, and social criticism that you're suggesting.

However, I no longer think that is the case, and I think if Calico takes the same tack they will mostly avoid it as well.

Here's why: a gross simplification of de Grey's entire research program is "aging and death happen because things start breaking faster than we can fix them, and because things start breaking that we have no idea how to fix." One of the greatest things about tackling the fountain of youth from this perspective is that it will continue to look like normal medicine and cosmetic procedures.

I predict that effective "eternal youth and immortality" will be achieved, but by the time the mass population notices enough to care, the majority of the population will be composed of people who grew up with the idea of accelerating progress on human health (in the same way that today's kids simply do not know an era without exponential advances in information technology). Thus, they will probably experience this threshold as completely normal, if they experience it at all.

In other words, there won't be a "magic moment". That moment will only appear retroactively, similar to today's magazine articles that say "hey look, here we are in the future with our video phones and what not! Isn't that nice?"

Let's say in the next 20 years the following things happen:

1. A therapeutic AIDS vaccine performs the equivalent of polio eradication

2. Highly targeted and effective cancer therapies are developed at an increasingly alarming rate

3. An actual cure for baldness is found

4. A preventative therapeutic regimen for treating obesity at a genetic level is discovered

5. A "nano-cream" that restores collagen in the skin becomes available first by prescription, then over the counter

6. Alzheimer's and Parkinson's can be detected and prevented early, and completely managed in those who are in advanced stages

Each one of those 6 things will likely face some opposition, but the opposition's voices will most probably not reach any kind of "critical mass"; people will (for example with the AIDS vaccine) simply regard as cruel the idea that people should die of AIDS because you don't want them having sex (a silly protest that I can still predict happening).

Each of these advances will happen individually, and to the average person they won't look anything alike.

But if you stack enough of these together long enough, you eventually get your fountain of youth. It's just that by the time it arrives there won't be riots in the streets, just 90% of the population saying "oh cool" and the other 10% viewed as harmless luddites with an interesting perspective on life.


> I instinctively knew

Why?


I assume he used "instinctively" precisely because he doesn't know exactly why. Perhaps, "intuitively" might have been a better choice of word here.


Just to give my view: Because it's an engineering approach. We don't have to completely understand a highly complex system before we can make some fixes. Intervening before it breaks frees us from the work of repairing what we can't manage. This resonated with me the first time I read about it.


Hmmm, perhaps "instinctively gravitated toward the opinion that Aubrey de Grey was correct". That's more accurate; my "knowledge" does not, of course, imply proof of correctness.


death is bad


That doesn't imply anything about the most effective way to increase longevity.


I think you're reading too much into the words "right approach." In this case, "right approach" (I think) means something like "understanding that death is bad and trying to fight it."


There is something very weird about people routinely speaking of immortality as if it were actually a thing. For all we know, stars die eventually, the sun will burn out eventually, the universe as we know it most likely comes to an end, and so forth; you cannot cheat basic physics. To me this is the same kind of refusal that makes some people believe in god, and it shows how hopelessly biased we get on topics we are emotionally involved in. On any other topic, with the information available, everyone sane would assess the likelihood of success as maybe less than 0.001%, but ask about this specific thing and it's more like 50%...

(Speaking in advance: I understand this is in fact about life extension)


Given that, I suspect that having possible applications to life extension is a really good way to get funding for your research, because death is 100% certain, and a 0.0001% chance of evading it sounds so good to some people that you can just about write your own check. Agreed though, death can be terrifying, but the idea of immortality is flawed in more ways than one. Are you really the same person you were 10 years ago? 10 months? Weeks? Days? Minutes? Seconds? What is it, other than one's own fragile ego, that people are trying to perpetuate into eternity? A few extra years is one thing; immortality sounds like a fool's paradise to me.


> A few extra years is one thing; immortality sounds like a fool's paradise to me.

Feel free to pass on it then, assuming you'd actually make that decision if you really had the choice. That problem will naturally solve itself: the set of people still alive will trend towards 100% rejection of having a limited lifespan.

"Everyone who had serious philosophical conundra on that subject just, you know, _died_, a generation before. [...] didn't need to convert its detractors, just outlive them." -- Cory Doctorow, "Down and Out in the Magic Kingdom".


And what is your solution to not getting hit by a truck over an infinite period of time?


That's a technical problem, not a moral one.

More seriously, we could make backup scans. Or we could live in the Matrix, not as meatbags plugged in through the neck, but as programs, which could be backed up as well. (Don't ask me who gets to be root.)


It's funny, because Eli was making a moral argument, but I was not, yet that seems to be the default argument against immortality. I think your last question is actually close to what I was driving at: who is it that's immortal? What does it mean to be a consciousness freed from its humanity? I honestly am not sure that many people would end up being happy in such an arrangement, or would necessarily survive in any sense as "themselves" for very long, despite the persistence of a body or simulant or whatever.


Improving truck-exterior safety standards: deformable impact points, robot drivers. Combine that with environmental improvements - fewer places to get squished between a truck and arbitrary railings, say. And add medical improvements to the point where you'd have to splat a brain to get a sure kill. Safety standards that require a Culture-style life-support collar on all trucks.

All engineering. It's not good against infinite time, but it ought to make traffic fatalities rare and remarkable.


We should be able to do digital backups of a brain within the next century. If we can attach robotics directly to the spinal column and regrow dead or damaged neurons with stem cells, we can also put people in a literal brain-in-a-jar robot, with the brain effectively reinforced (or better yet, just a brain in a jar in a secure vault, remote-controlling an avatar robot).


So you're going to volunteer for the sterilization and family-size controls necessary to make an indefinitely-living population feasible on a finite planet whose resources we're already overconsuming?

Or are you just full-on into the Google-executive mindset where you use life extension to live long enough to deliberately invoke every other Singularity/transhuman/futurist douchebag trope in the damn book and go sailing off to space to find more resources for your exponentially-expanding population to consume?


Those are obviously problems, but there are solutions. Among them, "everyone dies" is just about the shittiest. Yes, I would happily volunteer to be sterilized in exchange for living forever. In a heartbeat.


Count me in.

Also, we are nowhere near being immortal, and in many developed countries birth rates have already fallen below replacement (Germany, Japan, Spain, etc.).


> Yes, I would happily volunteer to be sterilized in exchange for living forever. In a heartbeat.

A world without kids and young people is a really, really sad state of affairs. I don't know and I don't care about what others would do, but I think I will just choose to end my own life rather than continue to live in such a dystopian future.


A couple things.

1. There wouldn't be zero kids and young people. Just not very many. The death rate will always be non-zero, and our ability to support new people will grow over time.

2. I get that no more children is a bummer, but I'm not really understanding how it is so awful that it could be described as "dystopian", much less something to kill yourself over. The world is vast and interesting, and children are just one neat facet of it.

In any case, I can see how it might work to give people a choice. If you'd rather die sooner and also have children, you could opt out of both the sterilization and the immortality.


And you've already refrained from having children?

Then, ok, fair enough. The question is, how do you make that fully general so that our ecosystem doesn't collapse and kill the lot of us?

And then, how do you make the economics work out? Immortal transhumans need to eat, too, but once they get some kind of capital base going that expands faster than inflation, as long as they've got even the barest livable income, they'll eventually live long enough for their wealth to grow into "OWN EVERYTHING EVERYWHERE" levels.

And we thought today's asset bubbles were large!
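
To put toy numbers on that compounding worry (a sketch only; the $10k stake and 2% real return are arbitrary assumptions, not claims about actual markets):

    # Back-of-the-envelope: capital compounding at a real
    # (inflation-adjusted) annual return r, over lifespans no
    # mortal investor ever sees.
    capital = 10_000.0   # hypothetical starting stake
    r = 0.02             # assumed 2% real annual return

    for years in (50, 100, 500, 1000):
        value = capital * (1 + r) ** years
        print(f"after {years:>4} years: ${value:,.0f}")

At those assumptions, $10k becomes roughly $200 million after 500 years and trillions after 1,000.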

Meanwhile, the young of the future will be even more screwed than we young are today, for precisely the same reasons but more so.


The thing is, none of these problems happen the instant we solve death (and "solving death" is not itself going to be instant). Life expectancy will increase, population will gradually increase with it, and we'll be able to see the problems we have to deal with as a result a very long way off.

It's similar to an argument people sometimes give me when I tell them I'm vegan, which goes along the lines of "If everyone suddenly became vegan, imagine what that would do to the world economy! How would we feed all of these people when our food producing infrastructure is animal based?"

And the reason I don't worry about that is that I know that scenario isn't going to happen. When people stop using animal products, which I think will happen - not for ethical reasons, but for economic reasons as cheaper, more authentic substitutes arise, and the sustainability of animal farming dwindles - it will not be an overnight process, and the world will have plenty of time to adjust.

It's basically the same with extreme longevity. We don't have to solve these problems with the tools we have today, because we don't even know when they will be problems that need solving. And we have no idea what tools will be available to us once we actually do have to solve those problems.

But if you flip the scenario around, and imagine that everyone already lives forever, and these problems start to show up, do you really think that anyone would even think to suggest "Let's have everyone die after around 70 years or so"? That idea would be grimly hilarious in a very Modest Proposal sort of way.


The problem is that the economic and ecological problems are problems we already have today, even before the application of any life-extension therapy more powerful than Good Old Fashioned Diet and Exercise.


The problems of an ageless population are not worth lining up and machine gunning 100,000 random humans per day with no regard to innocence or value.

And so neither are they worth permitting age to kill 100,000 random humans per day with no regard to innocence or value.


I think it's even simpler than that: if you're a billionaire, life is pretty good, and the only thing that can get in your way is death, so it's a natural place to want to invest - it's basically one of the only problems you have left!


I'm not a billionaire, but I love my life and death/ageing is one of the few real problems I have. I'd rather have billionaires thinking like me and investing their money in solving real problems than buying yachts and marrying/divorcing every year.


You can't cheat physics, but there's a great deal about fundamental physics that we don't yet understand. Given solutions to the most fundamental bug of biology, we'd have a few million or billion years to solve the cosmological limitations on lifespan, and that's ignoring the possibility of improvements to the speed of thought or to the subjective experience of time.

So, yes, the ultimate battle is with entropy, but we have a long way to go before that's the actual limiting factor.


Please point out even the slightest indication that entropy is beatable.

Otherwise, make an ethical argument that covering every inch of the planet in people is a good idea.

Otherwise, explain how you'll get the proper sterilization procedures working to control population growth and get resource consumption down to sustainable levels.


The problem of population growth exists regardless of life extension technology. The solution we are headed for already is that a lot of people will simply starve to death.


When such issues come up in discussion, I always think about an island off the coast of Alaska called St. Matthew Island:

> In 1944, 29 reindeer were introduced to the island by the United States Coast Guard to provide an emergency food source. The coast guard abandoned the island a few years later, leaving the reindeer. Subsequently, the reindeer population rose to about 6,000 by 1963[5] and then died off in the next two years to 42 animals.[6] A scientific study attributed the population crash to the limited food supply in interaction with climatic factors (the winter of 1963–64 was exceptionally severe in the region).[1] By the 1980s, the reindeer population had completely died out.[2] Environmentalists see this as an issue of overpopulation.

(from here: http://en.wikipedia.org/wiki/St._Matthew_Island)


Isn't that lovely? The poor can starve to death and the rich can live forever!



So a prediction exists, with some evidence towards it, that it's possible to do computation that doesn't consume negentropy?

Well, I'll grant you one thing: not quite sure what it is you're computing, but assuming your plan works, that was a surprisingly short path to conserving the universe's resources until your computational substrate suffers proton decay.

I still have only hunches about what computation has to do with actual lives. Please, explain your Evil Plan out loud.


Evil Plan, first draft: Build a Matrix, scan and emulate everyone, recycle the meat.


At least you're openly admitting that's your plan. So, you know, we can drag you behind the chemical sheds and shoot you for Criminally Irresponsible Use of Applied Phlebotinum.


In other words, make a doomsday robot that remembers people before it kills them.

As they die, some will take solace in a religious belief that numbers in the machine represent everything they were and ever will be. Others will just die.

A digital tombstone to a dead race.


In his defense, if you're dying anyway, you might as well leave a "ghost" behind. The ghost might not be you, and it will certainly have some psychological issues to deal with due to knowing that it's one ontological level "down" from a real, flesh-and-blood person, but you were going to die anyway.


Why do you assume the implementation hardware matters?

If it does, why assume brain-meat is better, as opposed to worse?


I assume that ontological security matters. If I know my consciousness runs on meat, I know that I have my own personal substrate. If I know I'm in the Matrix, I know that whoever has `root` access can alter or deceive me as they please.

The one thing nobody ever specifies about these crazy schemes, which would otherwise be a great way for humanity to get the hell off of Earth and leave the natural ecosystem to itself in our absence, is who will be root, and how he's going to forcibly round up everyone who doesn't like your crazy futurist take-over-everyone's-minds scheme. Hell, what's going to stop him from rampaging across the real Earth and universe, destroying everything in sight, while everyone else fucks around having fun in VR?

I'm really wondering why this nasty, insane idea has been cropping up more frequently lately in geek circles.

And that's not even starting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!


> And that's not even starting into the sheer ludicrousness of claiming people's consciousness is pure software when we know that all kinds of enzymes and hormones affect our personalities!

That's a bug to fix in implementation accuracy. I'd obviously prefer more accuracy, but if it comes down to a choice between less-than-perfect available implementation accuracy or dying of old age, I'll happily take a less accurate implementation, especially one that preserves enough information to fix that issue later.

The much more serious bug I am concerned about is the continuity flaw: a copy of me does not make the original me immortal. I'd like the original me to keep thinking forever. Many proposals exist for how to ensure that. The scary problem that needs careful evaluation before implementing any solution: if you do it wrong, the copy will never know the difference, but the original will die.


No human should ever be root. But we might just trust a Friendly AI. Well, provided we manage to make the AI actually Friendly (as in, does exactly what's good for us instead of whatever imperfect idea of what's good for us we might be tempted to program).


And if we don't, we all die (at best), but that's nothing new. Nor is it avoidable by other means than FAI.

The route to unfriendly AI is revenue-positive right up until it kills us.


The question is not really whether such and such implementation is best. The question is: does changing the implementation preserve subjective identity?

I bet many people here would not doubt the moral value of the emulation of a human (feelings and such are simulated to the point of being real), but would highly doubt that it would be, well, the "same" person as the original.


That's actually a good point, if a confusing one. I'd like to know the answer as well, though I believe there's a chance the answer will be "mu".


When the robot points the flamethrower at you, and announces using the Siri voice, "Fear not, a backup has been made", you will no longer be confused.


Yeah, by that point I'll know the AI is an Unfriendly AI, and I'll be deeply sorrowful and scared for the future.


Use the Dyson computation. If we uploaded to a matrix that ran on some Dyson-computation approach, then as time went by we'd run slower and slower in real time, but that wouldn't matter to us (and if our population continued to grow that would slow the time factor even further, as the simulation would have to run slower to compute us all - but again, who cares?). But we'd still be able to perform an unbounded amount of computation, so we'd be fine.
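
For reference, here is my gloss on the underlying argument (a sketch only, assuming Dyson's 1979 eternal-intelligence setup and the Landauer bound of kT ln 2 joules per irreversible bit operation):

    % Total operations with power budget P(t) at operating temperature T(t):
    N_{\text{ops}} = \int_0^\infty \frac{P(t)}{k\,T(t)\ln 2}\,dt
      \quad\text{can diverge while}\quad
      E = \int_0^\infty P(t)\,dt < \infty ,
    % provided T(t) -> 0 fast enough. Example: P(t) ~ t^{-2} and
    % T(t) ~ t^{-3/2} give finite E, but P/T ~ t^{-1/2}, so the
    % operations integral diverges: unbounded subjective computation
    % on a finite energy budget, running ever more slowly in real time.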


What if the hard drive goes on the fritz?


There's no particular reason that everyone has to be on Earth.


How about the probability of some accident killing you (however small), multiplied over the millions or billions of years you're alive?


Negligible, once you care to make backups of yourself. Continuous backups, ideally. In time, it will probably be possible to blow up half a Jupiter Brain without killing anyone.


But there is no continuous "consciousness", so what is the difference between that and a clone of you with your memories?

Why not have thousands of them? What are you?


It would be awesome to get to the point where we're worrying about the thermodynamic death of the Universe. But we're not quite there yet. Let's take the first steps first.

And yes, "immortality" is a figure of speech. Much like "the infinite space" - we don't know if it's really infinite.


I'd happily take the 5,000-million-year (or whatever physics says) lifespan now, please.

If I can't call it immortality, well, it'll have to do.


If this random stat I found on Gizmodo is right (yeah, I Googled and couldn't find anything better), your chances of dying in an accident in a given year are 1 in 1656.

So "immortality" is really a couple thousand years.

Don't get me started on "brain uploading". Unless you can guarantee me that I'll still be me (which is really a religious question) I'm sticking with a couple thousand years tops.

Impressive life extension. Not immortality. Not even talking about the "heat death of the universe" here - you're getting nowhere near that.

You can live your life through robotic surrogates, and that could extend your life further, but wherever you warehouse your body becomes a single point of failure so it had better be secure.
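
For what it's worth, here is the arithmetic behind "a couple thousand years" (a sketch assuming a constant annual hazard and taking the unverified 1-in-1656 figure at face value):

    # Constant annual accident risk p makes your remaining lifespan
    # geometrically distributed, with mean 1/p years.
    p = 1 / 1656  # annual probability of a fatal accident (unverified)

    print(f"expected lifespan: {1 / p:.0f} years")  # ~1656

    # Chance of surviving at least n more years: (1 - p) ** n
    for n in (100, 1000, 5000):
        print(f"P(survive {n:>4} years) = {(1 - p) ** n:.3f}")

Roughly a 55% chance of reaching 1,000 years, but under 5% of reaching 5,000.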


> Don't get me started on "brain uploading". Unless you can guarantee me that I'll still be me (which is really a religious question) […]

Luckily, religions are false. The supernatural is unlikely, and immaterial souls even more so. But we have quantum mechanics.

Current quantum mechanics say that copy&paste transportation doesn't kill you. (Yep. The question was philosophical, and the answer came from physics.) The reason is, you're not a heap of atom, you're an arrangement of atoms. Mind uploading goes a bit further, but should work just as well. Imagine a "temporary" uploading, where your memories from the Matrix are downloaded back into your brain (by rewiring your neurons accordingly). It's still you, only older. Anyway, a deeper understanding of our brains' inner workings may resolve the question of mind uploading more definitely. We'll see then.


> Current quantum mechanics says that copy&paste transportation doesn't kill you. (Yep. The question was philosophical, and the answer came from physics.)

[citation needed]

As far as I'm aware, there's no evidence that the mind is anything more than a physical construct. As such, the idea of 'uploading' it makes no sense. You can create a copy, sure, and perhaps even a running simulation might be self-aware and identify as you, but you still only last as long as the jelly in your head does.

I agree though that it is a question of philosophy, but a different philosophy altogether. We will have to redefine what a 'mind' is to take into account the persistent pattern of the brain in whatever form it takes as software, but that's not actually going to solve the problem of mortality any more than religion does.


> [citation needed]

http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_id...

It's long, but it's worth it. I personally enjoyed reading all this.


Thank you, that was enjoyable. But...

http://en.wikipedia.org/wiki/No-cloning_theorem and the uncertainty principle suggest to me that while it might be possible to create a model which appears indistinguishable from an existing person, and could be considered "similar enough" philosophically or legally, by definition it would have to be considered a 'different' object, because perfect copies are impossible.


> perfect copies are impossible.

Sure. But from one nanosecond to another, we're perturbed by thermal noise, without any qualms about what that noise does to ourselves.

I think we can safely assume that a copy whose imperfections are on the same order as thermal noise is a second original. That, or we admit that room temperature is enough to change us.


> That, or we admit that room temperature is enough to change us.

It might be, at some level, I don't know.

Am I the same person I was when I was born? Am I the same person when I wake up as when I dream? Was Phineas Gage a different person after taking an iron rod through the brain than before?

Maybe it's more accurate to describe people as processes rather than objects. Which could support your premise while not necessarily invalidating mine, since the whole concept of a singular, coherent self would itself be an illusion.


So, if the exact same arrangement of atoms were to appear somewhere else in the universe, would I be both? Would I be controlling two bodies at once, and see two worlds/viewpoints superimposed over one another?

I doubt it.


I'm afraid I don't feel physics really answers that particular philosophical question. I do think a lot of philosophical questions arise from edge cases in our relatively informal definitions of things.


Assuming that we accept that the 'arrangement' is what we are, and not the specific atoms, copying that arrangement would produce a new arrangement that isn't you, since all the atoms are arranged differently relative to all the other atoms in the universe.

Assuming that the preceding is wrong, and that it's only the relative arrangement of atoms inside the brain that matters, a computer representation of the arrangement of atoms is represented by a totally different arrangement of atoms that is the computer, and therefore cannot be you.

Uploading only works if we can say that you are just an abstract mathematical pattern that can be represented in any medium. Quantum physics does not address this question. It is a question of philosophy.


(I studied this kind of philosophy in college, mostly determinism, epistemology and the computational representational model of the mind)

Agreed.

What do you mean when you say "I"?

Sure, you can point to the brain. But when we look at the brain, we see the following "physical" attributes:

1. Atoms and their more complex configurations (molecules)

2. Electricity

3. Chemicals

4. Causal connections between the above three.

So I ask again, where exactly is "I"? The easy answer is to say "I" is the conjunction of this system.

So that would mean that if we can take that system, and recreate it somewhere else, then that system would represent "I" as well.

This is problematic. Take the above as true and then think what it would mean to not CUT and paste, but just COPY and paste.

There are now two "I"s walking around. Would you, looking at the copy-and-paste version of yourself, then say that you are still "I"? Or are both of you "I"? Well, you are certainly not the "I" that your other self is saying is "I", because you are looking at it. It can't be "I". So what the f#$k is powering that thing?

By that logic, what you are is more than just the physical. There are things like Qualia (http://en.wikipedia.org/wiki/Qualia), meaning, memories and the associations between completely unrelated things that, as far as I am aware, we cannot map to the physical manifestation of the brain.

None of this is to say that brain uploading is impossible, because we just simply don't have enough information to refute or defend its possibility.

But I believe that the brain is more complex than the purely physical.


I think we are in agreement. Although I'll comment that I think the brain is purely physical, but that the mind is an abstract entity that we don't have a good understanding of. It extends beyond the brain, not in a supernatural sense, but in a causal one.


Healthy life expectancy maxes out at about 75 years (in Japan).

If we expected to live much longer than that, we might be more cautious of accidents.

If you only have ~10-50 years to live, you might be prepared to take some small risks to save time: cross the road 100m in front of a speeding vehicle... it's a tiny risk and you might die of an acute disease tomorrow anyway.

If people have ~1000-to-unlimited years to live, they might generally take fewer accident-causing risks: the cost of an accident is hugely greater.

Then again, who knows? Maybe a society of immortals would need to take more risks in exchange for status.

But, you essentially decide your own level of accident-risk and I'm sure you could get it way down from 1/1656.
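
To quantify that last point, under the same constant-hazard toy model as above (the 1/1656 baseline is the thread's unverified ballpark, and the risk reductions are hypothetical), expected lifespan is just 1/p:

    # Expected lifespan under a constant annual accident risk p is 1/p,
    # so every factor shaved off p multiplies expected years directly.
    baseline = 1 / 1656  # unverified ballpark annual risk

    for cut in (1, 10, 100, 1000):
        p = baseline / cut
        print(f"risk reduced {cut:>4}x -> ~{1 / p:>9,.0f} expected years")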


How much slower would someone drive if they knew they had a good shot at 2000 healthy years ahead of them? Would the airline industry be able to survive and still use jumbo jets? Perhaps the long-lifers would pay to ride in pods with parachutes that can be ejected from the plane.


Oh, don't worry. Ecosystem collapse is already occurring. 2000 years? We won't last 200 at this rate.


That's only true if we assume that the chance of dying of an accident will remain constant in the next 2000 years (it won't).


Can anyone find meaningful detail in here about how Google hopes to solve the problem of death? Beyond the name of the company and the guy running it, this seems like a stock profile of Google X.


Succeed or not in their stated goal, it could end up throwing off a lot of interesting science, assuming actual bio/neuro/genetic science is backed by the venture. Presumably all this enhanced by vast amounts of computational prowess, as well.

That said, there are a lot of questions about what the philosophical, cultural, sociological and political underpinnings and implications might be. Creating a friendly environment for discussing those could be a worthwhile exercise as well. For example, what is the interaction between individual wellbeing and longevity, and the quality of the world in which you live? Perhaps it's a two-way street: a holistic set of factors that includes social wellbeing in addition to medical health, taking for example the case of that Greek island with famously long-lived people (Ikaria).

As someone said, it's not the years in your life, it's the life in your years.


What's with the fake, trollish headline on HN?

It should say "Google vs. Death".


It does say that, now.

What was the original headline?


Last week, my fiancée was telling me about a novel she was reading, "Google Démocratie" by David Angevin and Laurent Alexandre (2011, French language). This article sure makes the premise of the novel (genetically modified trans-humans vs. the good old-fashioned kind) seem a little less far-fetched.


After money, fame, what else? The Google founders desire to live forever.


Desire _everyone_ to live forever.

What kind of monster are you that you don't think that's a laudable goal?


> What kind of monster are you that you don't think that's a laudable goal?

I think there are many compelling ethical and philosophical arguments one can make against wanting everyone to live forever.

We shouldn't confuse life extension with living forever. Oddly enough, David Attenborough is in the news today for suggesting population growth is a huge and growing problem[1]. You could argue that researching how to extend human life, when we are looking at significant problems with overcrowding and competition for resources in the near future, is putting the cart before the horse.

I am not sure which side of the fence I fall on. Medical research can allow people with previously incurable conditions to live full lives. This is a laudable goal. But that's not what you said - you're talking about living forever. Is that so laudable? I would certainly not call someone who thought that was a truly awful idea 'a monster'. Indeed, there's lots of speculative fiction based around how awful it would actually be to live forever (or even a very long time).

[1] http://www.telegraph.co.uk/culture/tvandradio/10316271/Sir-D...


Admittedly, you do have to laugh at the sheer irony. Research into life-extension is taking place in advanced countries that can't even maintain their current population levels without net immigration.

And also, at the same time, cannot be bothered to raise labor wages.

What the fuck?


> Research into life-extension is taking place in advanced countries that can't even maintain their current population levels without net immigration.

Sounds perfectly consistent to me.

> And also, at the same time, cannot be bothered to raise labor wages.

As long as the immigrants keep coming, why bother raising wages?


Right, the inconsistency is: the First World goes tsk tsk at the Third World for its overpopulation and poverty; meanwhile, it critically relies on the Third World to breed a neverending supply of cheap immigrant labor to exploit.


You could just as well say it offers people suffering from those circumstances the chance of a better life.


You have no idea what you are even saying. Have you even thought about what "everyone living forever" means? Probably not.


I have. Have you? I suspect not: http://www.nickbostrom.com/fable/dragon.html

Of course, the result won't actually achieve that. But it is not a dishonorable goal. Involuntary death is not a virtuous thing.


Whoa, whoa, hold on there. I never said it was bad.


Who would not desire that?


I never became confident that my worldview had matured into adulthood until I lost my fear of death. And not only that: someday I will welcome it (and all the agonizing pain it may likely entail) as the ultimate justification for everything that came before and all that will occur after.

Actually, I have no fucking clue what kind of mindset wants to live forever. I'm reminded of the beautiful scene in the book Ender's Game where Wiggin ponders the death of the buggers, and the essential union of death and rebirth.


My favorite response to this view comes from Greg Egan's short story "Border Guards": "The tragedians were wrong. They had everything upside-down. Death never gave meaning to life: it was always the other way round. All of its gravitas, all of its significance, was stolen from the things it ended. But the value of life always lay entirely in itself — not in its loss, not in its fragility."

Frankly, I think it's awful when people die. People are so interesting and irreplaceable and wonderful; I don't necessarily think it's an improvement for them to just fall over dead one day and be gone.

Please note that this is a relatively orthodox opinion: Even the Christian church has always felt that death and oblivion were rather horrible, and it thought that humans should live forever, albeit after a bit of debugging so they'd stop being quite so awful to each other. They approved of death only because it was the price of admission to immortality.


> Actually, I have no fucking clue what kind of mindset wants to live forever.

The kind of mindset capable of performing induction over the positive integers. For me, today was a good day. I want to live until tomorrow, at least, and I want tomorrow to be at least as good.

Therefore, I want to live forever, or at least as long as reasonably possible.
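
Spelled out (a formalization of the comment's argument, not a theorem; writing W(n) for "I want to be alive on day n", the induction step is the contestable premise):

    % Base case:       W(0)                          (today was a good day)
    % Inductive step:  \forall n.\; W(n) \to W(n+1)  (each good day, I want the next)
    % Conclusion:      \forall n.\; W(n)             (so: live forever)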


I.e., people who think they are machines want to live forever?


Your straw man doesn't work. In a trivial sense, we are machines. There's nothing magic between quantum mechanics and a fully functional brain. Our soul is material, made up of neurons and other cells.

Just like a man-made machine, we are physical processes. We're just much better at self reference than the machines we build.


You've contradicted the statement that you made elsewhere - that it's the arrangement of atoms that matters. There's nothing magical about the processes, but the specific state matters.


An arrangement of atoms is a snapshot of a physical process. I don't see the contradiction.


An arrangement of atoms is a snapshot of a small part of a physical process with arbitrary boundaries drawn around it.


The boundaries are not arbitrary, to the extent we can factor the configuration space. For instance, we can draw a rather sharp limit between me, and the keyboard I'm typing with.


And what does that particular choice of boundary represent?


The recognition of involuntary death as a terrible tragedy does not require the slightest fear of it.

I am also not afraid of illiteracy or racism, though I also consider them terrible. If your ability to not fear death requires that you trivialize it, that you pretend it's something wholesome, then I regard that with the same kind of mild contempt that I hold for people who justify their bigotry by convincing themselves that people of other creeds are inferior and so it's /natural/ to discriminate against them.

I hardly think that a fictional child's excuses for committing genocide, themselves constituting a bit of an Author Tract by a writer with well-known outspoken religious views on the proper nature of human interactions, are really much of a contribution here.

The suggested vague possibility of living forever doesn't force it on anyone, I wouldn't agree with that either. I think you should be free to stop existing on your own schedule.

With involuntary death removed, I think and hope that instead people would "die" a different way: by becoming different people over time, ending a chapter in their lives and adopting a new one, being reborn without ever dying, and hopefully conserving most of the best about themselves in the process.


You're almost certainly amongst the oldest HN readers. I'd put you at around 40.


You're off by 10 years...


13...


Deathism is an insidious and pervasive meme. It is so widely accepted that we didn't have a word for it.

That's why I am so happy right now with this announcement: extremely rich and influential people moving beyond the same old Death Stockholm Syndrome.


I find it interesting that they are actively pursuing it.

Kind of like watching a lion, king of the jungle, fighting and clawing furiously against a roaring avalanche to reach the top of a mountain.

Enjoy the show.


A couple of years ago, after it was revealed that Sergey had a genetic predisposition to a certain disease, I wrote this sci-fi short story inspired by the thought that Larry/Sergey would be doing exactly this:

----

In the year 2010, scientists perfected suspended animation through the use of cryogenics for the purpose of surgery. After more than a decade of study and refinement, long term suspended animation became a reality, yet a privilege reserved for only the most wealthy and influential.

The thinking at the time was that only those who showed a global and fundamental contribution to society (while still viewed through the ridiculously tinted lenses of the global elite of the era) were worthy of entering into such state.

The process was both incredibly complex and costly, as each Transport, as they were known, required their own standalone facility to be built around them. Significant resources were put into the development of each facility, as they required completely autonomous support systems to accommodate whatever duration was selected by the Transport.

Standalone, yet fully redundant, power, security and life support systems were essential to the longevity of each facility.

Additionally, it was recognized that monetary resources would be subject to change over time, especially fiat-currency-based resources. Thus the facilities were stocked with physical stores of value, perceived not to deplete or dilute over time, for use by the Transport upon resuscitation.

These resources are the most sought after treasure of the new world.

After hundreds of years of human progress, civilization could no longer sustain itself as an organized, self-supporting system. Through utter corruption of what some call the human soul, the world has fallen dark. There are very few outposts of safety in the current Trial of Life, as it's now known.

Many Transports have been found, resuscitated and exploited already. There are believed to be many, many more, but their locations are both secret and secure. Akin to your life relying on the discovery of an undisturbed tomb of a pharaoh - even though every consciousness on the planet is seeking the same tomb.

They are the last bastion of hope for they alone have the reserves of precious materials needed to sustain life for the current generation.

Metals, technology (however outdated), medicines, seeds, weapons and minerals are all part of each Transport's "Crop".

One find alone can support a group or community for years, based on the barter and renewable-resource potential of each Crop.

One Transport found in 2465 - that of a long-dead nanotech pioneer who had sought to cure his genetic predisposition to a certain disease, and who was purportedly responsible for much of the cybernetic medical capability of the 21st century - was so vast that the still-powerful city-state in the western province of North America was founded upon it.

The resources of this individual were extraordinary, but his resuscitation, as they all are, was rather gruesome and cold.

The security systems in each Transport Facility are biometric and very complex. They can only be accessed by a living, calm and (relatively) healthy Transport.

If the system, and its controlling AI, detect signs of duress, stress or serious injury to the Transport, they go into fail-safe - which is to say, they self-detonate, taking with them all resources, the Transport, and the Seekers as well.

There have been many instances of this, such that the art of successful Resuscitation has become an extremely profitable business.

The most active and successful Resuscitation Team (RT) has been the ironically named Live Well Group.

The most conniving, well-practiced and profitable con in the history of mankind.

LWG alone has been responsible for the resuscitation of more than 370 Transports. Their group is currently the most powerful in the world. With their own city-state, established after the Brin case mentioned above, they have a cast of thousands of cons all working to ensure the Transport believes they have been Awakened into a new, advanced, safe world, and that they will be allowed to take part in it in a significant way now that they have been Transported.

They are fooled into releasing their resources, then brutally tortured for information about any other Transports or any other knowledge they may possess, which invariably is less than nothing.

It is a hard world out there now, and the LWG's ruthless drive to locate the thousands of other Transport Facilities is both the worst aspect of our modern struggle and yet, ironically, will serve as the basis of the species' ongoing endeavor.

There is rumor of a vast facility of resources and Transports in an underground 'CITY' of the most elite Transports ever - a facility supposedly housing the 13 most powerful and richest bloodlines ever to have existed.

It is not known which continent this facility is on, but I believe it is in Antarctica - fully automated and with the ability to auto-resuscitate at a given time.

This is my mission, this is my life's work. To find and own this facility and crush any and all other groups that oppose me.


I like the background, though from a narrative standpoint, it is better presented piecemeal, as the hero progresses.

Just one quibble: cryonics (not "cryogenics") today is much less expensive than that. Okay, you don't come back (yet). But if we get working suspended animation, with resuscitation and all, it will probably be much cheaper, much more dependent on the survival of our current civilization, and much less protected.


I wrote it in 2010, after the Wired article about Sergey's gene issue came out... it was a stream-of-consciousness post, and I haven't edited it since...

I predicted that given his resources and being found to have a genetic predisposition to a disease with later-in-life onset, he would focus on health/longevity.

With the article that came out at the same time about cryonic (thanks) surgery, I was thinking that he would be more likely to be able to freeze himself later in life than to cure his genetic disorder.


You'd think with everything Google already knows about most people they could just solve for x.


The publish date on the article is wrong? It also cuts off and tries to get me to pay money?


Funny. No mention of Kurzweil.


Kurzweil is a joker. His entire hypothesis rests on the assumption that computational power is the only obstacle to general AI, which is manifestly not the case.


Kurzweil works for Google.


I mostly agree here, but he does have quite a history of accurate predictions. Not a logical reason to believe in his future predictive ability, but it certainly seems relevant to the anti-aging conversation.


Damn, I thought Pratchett wrote another book...


I think I'll go with death, then.


That's fine, as long as you choose it rather than having it thrust upon you. Currently, we lose this choice after less than a century.


I worry that it will not be a choice (or at least taboo, like suicide is these days -- given that's basically what it will be).

EDIT: I wonder if the possibility of living nearly forever will make society even more risk-averse?


Improved healthcare, reduced war and so on have already made society much more risk-averse. This would just be another bit of improved healthcare.


Love that it has a cat name... they're the ones with seven lives, right?


I thought it was 9.


Depends what country you're from.


Google can't extend human life but maybe it can pick the right people who can.

This is similar to the iPhone. Steve Jobs didn't make the iPhone possible. What he did was recognize something that many engineers had recognized for a long time, and he permitted the resources to flow to the right people. Sure, he introduced some of his own artistic flair. But the combination of radio and UNIX with a nice API was an idea whose time had come. We just needed a CEO with the balls to let his engineers go for it.

There are many brilliant scientists and engineers that can nearly do magic, and not enough people who know how to discern what visions to fund. Every so often we get blessed with a Steve Jobs or Elon Musk who can send the resources the right way, but for the most part we get Carly Fiorina.



