
Futurist Ray Kurzweil Pulls Out All the Stops (and Pills) to Live to Witness the Singularity - edw519
http://www.wired.com/medtech/drugs/magazine/16-04/ff_kurzweil?currentPage=all
======
Prrometheus
>Kurzweil predicts that by the early 2030s, most of our fallible internal
organs will have been replaced by tiny robots. We'll have "eliminated the
heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all
the hormone-producing organs, kidneys, bladder, liver, lower esophagus,
stomach, small intestines, large intestines, and bowel. What we have left at
this point is the skeleton, skin, sex organs, sensory organs, mouth and upper
esophagus, and brain."

What developed country’s government would allow people to experiment with such
things? Risk-aversion is a real brake on the singularity, and it tends to
increase with wealth.

~~~
Tichy
I think it is already happening; there are artificial hearts, for example.

~~~
jimbokun
Yes, I think Kurzweil's response would be that these things will be replaced
in people with a condition that would be fatal if the original organs were
left intact. Over time, people would become more and more "cybernetic" as
their natural parts failed and technology came up with "improved"
replacements.

------
mechanical_fish
I've got a better plan for my future: _I'm going to grow old and die_.

(Unless, of course, fate intervenes and I don't grow old. But I intend to keep
_planning_ to grow old right up until the last minute!)

It's a classic plan, and well tested. Lots of prior art. Lots of examples to
learn from, and a lot of infrastructure and literature.

You folks can find fault with my plan if you want. But here's the thing:
You're going to follow the same plan, whether you're willing to admit it or
not. [1]

Further reading: <http://kk.org/ct2/2007/09/my-life-countdown-1.php>

[1] I'd offer to bet, because my odds are really, really good... but tontines
are illegal for a good reason. ;)

~~~
byrneseyeview
Really? So if you lived in a post-Singularity world, in which clinical
immortality was an unquestioned human right, you would periodically get your
face wrinkled, inject yourself with bone-brittling nanobots, induce dementia,
and then wither away?

If you tried to inflict that fate on anyone else, it would be a terrible
crime. If you wanted it for yourself, it would be evidence of mental illness.
Unless you consider it the status quo or an inevitability, desiring it is
indefensible.

~~~
mechanical_fish
_So if you lived in a world... in which clinical immortality was an
unquestioned human right..._

ROTFL.

I live in a world where my wife and I had to fight our old insurance company
for basic preventive health care. (She tried to have a mammogram in the same
year that I had a simple 20-minute physical. Turns out that violated the fine
print in our policy, and we had to pay out of pocket.) I'm glad to say that
our uninsured, thirtysomething friend just managed to save enough money for
his cancer surgery, but others of our friends aren't so lucky. And I live in
_Massachusetts_ , where the state has forced insurance companies to sell
relatively affordable policies to individual contractors like me. If I lived
in a different state, I'd probably be paying over $1000 a month for the
privilege of having my claims denied.

So, sure, I'd love for clinical immortality to be possible, and love it even
more if it were an "unquestioned human right". But here in the USA I don't
even have the "unquestioned right" to _effective, existing medical
technologies_ , and that's a more pressing concern.

Meanwhile, would I want to age in a post-Singularity world? Who the hell
knows? I don't think you can be immortal and still be recognizably human --
the entire design of humans is predicated on mortality, just as it's
predicated on gravity and an oxygen atmosphere and the presence of edible
plants and animals. And who knows what the post-human aliens will think?

To the extent that post-singular beings remain human, I think they'll exhibit
many of the same self-destructive tendencies that humans do now. I used to be
a cancer researcher, and the sight of people lighting cigarettes still makes
me angry inside. It feels like they're setting fire to my hard work. But what
can you do? To discount the future is part of human nature.

Recommended reading:

On the alienness of the immortal human: William Gibson's _Count Zero_.

On the psychology of the immortal human: Doctorow's _Down and Out in the Magic
Kingdom_ is good. The much shorter, ha-ha-only-serious version is the tale of
Wowbagger, the Infinitely Prolonged in Douglas Adams' _Life, the Universe, and
Everything_. (Douglas Adams is one of those guys whose every throwaway joke
encapsulates an entire _series_ of philosophical novels.)

On the "unquestioned human right" to immortality: Roger Zelazny's _Lord of
Light_. You may have to read most of it before it becomes clear what I'm
talking about, but keep reading. It's an awesome work.

~~~
byrneseyeview
ROTFL is not the right reaction if you follow it up with, basically, "Your
hypothetical futuristic scenario is not like my real-life right-now scenario!"
I said "If" for a reason.

I don't think immortality is or should be an unquestioned right, but I suspect
that within a decade or two of it being a realistic possibility, letting
people die will be about as popular as slavery.

------
giardini
I loved Kurzweil's book. Unfortunately in all the talk about Moore's Law and
exponential growth, Murphy's Law was forgotten (perhaps in a self-referencing
manifestation of the same law).

Example: My HMO has figured out that, if a patient has a serious chronic
ailment (e.g., diabetes, heart disease, cancer, etc.) then it is cheaper for
them should the patient die:

I. Treat patient:

          cost of treatment           $100
          cost of future treatments   $???

Highly variable costs, borne largely by the HMO.

II. Let patient die:

          cost of treatment           $100
          cost of future treatments   $  0

Cost to HMO limited to $100. Remaining costs borne by the deceased's life
insurance company.

The perfect HMO patient is one who never visits his doctor and then dies
quickly.
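The asymmetry can be sketched numerically. A minimal sketch with hypothetical
numbers: the $100 initial cost comes from the example above, but the
future-treatment figures are invented purely for illustration.

```python
# Expected-cost comparison from the HMO example above.
# The $100 initial treatment cost is from the example; the
# future-treatment figures below are purely hypothetical.

initial_treatment = 100

# Scenario I: treat the patient. Future costs are unknown and highly
# variable; model them crudely as an average over possible outcomes.
hypothetical_future_costs = [0, 5_000, 50_000, 250_000]
expected_future = sum(hypothetical_future_costs) / len(hypothetical_future_costs)
cost_treat = initial_treatment + expected_future

# Scenario II: let the patient die. Future treatment costs are zero.
cost_let_die = initial_treatment

print(cost_treat)    # 76350.0
print(cost_let_die)  # 100
```

Whatever the real numbers are, the point survives: the treat-the-patient
branch carries an open-ended liability, while the other branch is capped.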

Should the HMO increase profitability by speeding their chronically-ill
patients to a painless death? Stockholders cry "Yes!"; patients whimper a
fearful "No." Needless to say some cognitive dissonance must arise should
one's income be so directly tied to one's ability to (not) keep one's patients
alive.

Economic and social conflicts within the medical system are a greater
hindrance to the singularity than are technical hurdles.

~~~
Tichy
What is an HMO? In any case, if an insurance company defaulted to "let the
patient die," it would probably lose a lot of customers, and shareholders
would not be pleased.

~~~
edu
From the _New Oxford American Dictionary_ :

health maintenance organization (abbr.: HMO)

a health insurance organization to which subscribers pay a predetermined fee
in return for a range of medical services from physicians and healthcare
workers registered with the organization.

------
david927
God bless him; those pills are exactly what will kill him.

The singularity is coming, but Moore's law won't have much to do with it. I
remember seeing a few-days-old kitten and for the first time in its little
life, it saw a dog. It hissed.

We already have the equivalent computing power of a kitten's brain. What we
don't have is the software that can recognize a dog without having ever seen
one before.

~~~
Alex3917
How can you say the pills are dangerous without knowing what they are?

~~~
david927
I'm channeling a pharmacist friend here. Either a pill has no effect (like
vitamin C) and just gives you expensive urine, or it has an effect. And if it
has an effect, it has side effects. No pharmacist in the world knows what
would happen if you mix that many pills, whatever they are, so unless most of
them are herbal non-drugs, he's taking a big chance, which is exactly what
he's trying to avoid. Thus the irony.

~~~
Hexstream
Vitamin C has no effect?! That's not what mommy told me :(

Any references?

~~~
earthboundkid
Linus Pauling died.

~~~
Panoramix
He's not the best example as he died at age 93.

------
dbreunig
Anyone seen one of his talks or ppts? Anyone else get those creepy cult
feelings? Anyone?

~~~
aswanson
Yes. I've laid out my objections to his timetables here before:
<http://news.ycombinator.com/item?id=142175>

I like what his books do in terms of sparking the imagination, but I take his
claims with the same level of skepticism as I do an infomercial. Another thing
is the repetition; if you've read _Age of Intelligent Machines_ you've read
_Age of Spiritual Machines_ as well as _Singularity is Near_.

Some of the predictions of Spiritual Machines have fallen short already: Most
people don't use speech recognition to enter data into computers. Granted, he
chose 2009 as that target date, but I feel safe that won't be the main method
of entry in a few months.

~~~
ericb
Speech recognition with NaturallySpeaking, as of a year ago when my buddy
broke his arm and tried it, was getting there, but still not usable for him.

Speech is not as nice as typing for entering data. Typing seems [citation
needed] faster and like less work. Also, in an office, the voices absorb
mental bandwidth from everyone in earshot. So I think in this case Kurzweil
may or may not be wrong about the quality of the technology in a year, but
he's definitely wrong about speech being a preferred method for interfacing
with a computer.

~~~
Novash
Solved? I think this qualifies within his predictions.
<http://www.technologyreview.com/blog/editors/22037/>

~~~
aswanson
I'm guessing that, given that recognizing voiced speech is still inaccurate,
the more difficult task of translating neural impulses to speech will perform
even worse. Unsolved: _At the moment, the device has a limited vocabulary: 150
words and phrases._

A long way to go.

~~~
Novash
They still have a year and a half, which means twice the number of transistors
on their next hardware. Give them the benefit of the doubt.

~~~
aswanson
No, I won't. The problem is not transistors; the problem is in the lack of
understanding how speech recognition works. It is very difficult to get right
and it is highly unlikely that those problems will be solved completely within
the next 18 months.

They could double the transistor count now by running the algorithm on a
better DSP or Cell processor.

------
tobiazz
I think it is important to remember that the immortality mentioned here is a
protection from biological causes of death, but there are plenty of other
obstacles to prolonged existence (especially prolonged self-consciousness and
memory, which is harder to wrap one's head around).

Take the popular theories of the universe's lifespan (or life cycle).

1) The universe is expanding exponentially: eventually every star will die and
there will be nothing to birth new stars. The universe winds down toward
absolute zero, the so-called heat death.

2) The universe cycles between big bangs and big crunches: information cannot
survive through a crunch. Well maybe one binary bit? hehe.

3) Some sort of multi-universe collisions or string theory madness: who knows
but there could be some event that wipes out what we know as reality.

Basically, I am trying to point out that immortality is probably a
fundamentally flawed concept. A simpler approach might be to consider memory.
Memory storage cannot grow forever (let alone be accessed efficiently enough),
and while we could cap memory and still exist indefinitely, I think we would
have to hit a ceiling of knowledge attainment and we would just be spinning
our wheels like a more exciting version of a rock (not that humanity is any
less of a _process_ right now).

Smarter people than I have already dwelled upon these issues and probably have
better answers than mine (esp. since I have no answer). Asimov's "The Last
Question" is an interesting short story that covers this area a bit.

------
jcl
I can't help noticing that while we are making these exponential advances in
technology, we are also using exponential amounts of our limited supplies of
cheap resources -- things like oil, coal, copper, gold, and helium. I sure
hope Kurzweil gets his singularity before any of these limiting factors kick
in, or it could put a nasty kink in his graph.

~~~
Prrometheus
One of the most striking features of the 20th century is that the economy has
gotten lighter per unit of wealth. New goods are made of fewer raw materials
and more ideas. Greenspan has a famous piece on this, which unfortunately I
was not able to google up. I am with Julian Simon on the issue, I doubt that
raw material scarcity will be a significant brake on human advancement.

~~~
david927
That's a good point, but it's only one aspect. Another striking feature of the
late 20th century is our ability to use advanced technology to extract
resources optimally. So instead of a slow decline in supply and
correspondingly higher prices slowing demand (the ubiquitous bell curve), you
get a diagonal line followed by a vertical -- in other words, a cliff.

That sounds alarmist, but that's just because you don't notice until you're on
the wrong side of the graph. Cod was one of the most plentiful fish around,
and because it was good and cheap it became the base of the British "fish and
chips" staple. Cod is now an endangered species. 90% of all large fish have
disappeared from the world's oceans in the past half century. And that's just
fish. We have new technology to extract oil efficiently, sometimes called
"super straws". So instead of sputtering to a stop, we'll get there at full
speed. It's easy to say, "Oh, we'll just slow down or find alternatives."
That's like going 150mph and saying that if the curve is too tight, we'll just
slow down. Sure we will. James Dean style.

~~~
Prrometheus
The situation that you mentioned with the cod fits the "tragedy of the
commons" scenario. Where property rights are hard to define, resources are
over-extracted because there is no owner with the incentive to conserve.

I challenge you to name one resource with normal property rights that fits the
"diagonal line/cliff" scenario. I know of none, although it appears to be the
common wisdom of the internet at the moment.

------
bumbledraven
It would be very hard to argue that the singularity won't arrive eventually,
provided humans survive long enough. It's just the date that's tough to
predict. Sure, computing power has been growing exponentially, but as they
say, "past performance is no guarantee of future results", and there are so
many other variables aside from raw processing speed.

~~~
gruseom
_It would be very hard to argue that the singularity won't arrive eventually_

Really? What if consciousness is not an algorithm?

~~~
bumbledraven
What does consciousness have to do with it? Eventually there will be machines
that are capable of passing the Turing test or something similar. The
singularity is what happens next.

"To the question, 'Will there ever be a computer as smart as a human?' I
think the correct answer is, 'Well, yes... very briefly.'" - Vernor Vinge

~~~
gruseom
_What does consciousness have to do with it?_

It's human. If one or more aspects of being human aren't computable, then it's
plausible that computers will never pass the Turing test.

Of course, if you assume that the space of what can be expressed
algorithmically is equivalent to the space of all that is, some of these
claims become trivial. But that's a whopping leap of faith. Perhaps Richard
Dawkins could get around to writing a book about that one when he's finished
demolishing its more popular competitor. :)

~~~
bumbledraven
_[Consciousness] is human. If one or more aspects of being human aren't
computable, then it's plausible that computers will never pass the Turing
test._

You seem to be saying that a computer would have to actually _be_ human or
_be_ conscious in order to pass the Turing Test. But passing the Test only
requires the computer to _simulate_ some observable behaviors of a conscious
human well enough to fool a skilled interrogator. In particular, it only has
to simulate those behaviors that are observable through a low-bandwidth TTY.

I suppose one could argue that there _simply does not exist_ an algorithm that
could pass the Turing test (when run on the powerful computers of the future),
but that seems like a very hard case to make, particularly here on Hacker
News. One could instead try to argue that though such an algorithm may exist,
we will never find it, but that doesn't seem to be what you are saying either.

~~~
gruseom
I'm not saying there _simply does not exist_ such an algorithm; I don't have
an opinion on that one way or the other. I'm questioning your claim that "it
would be _very hard_ to argue that the singularity won't arrive eventually"
because "eventually there _will be_ machines that are capable of passing the
Turing test" (emphasis added). You don't know that. Nobody does. Some people
like the idea, some people don't; it's wishful thinking on both sides.

My point was to offer one possible scenario under which computers might _not_
pass the Turing test: namely, if there turns out to be some essential aspect
of humans (capable of being observed through a low-bandwidth TTY, if you
insist) that can't be captured algorithmically. Consciousness _might_ be one
such feature. (It's a reasonable one to suggest for two reasons: it's central
to human existence and no one has a clue what it is.)

"We'll just simulate it, then" is, in my view, a dodge. If there isn't an
algorithm for X, why should there necessarily be one that approximates X
arbitrarily closely?

Partisans of the "singularity" are fond of insisting that it's all a matter of
processing power, but that's only true _if there's an algorithm for
everything_. That blithe assumption seems to me an article of faith. It
reminds me of G.K. Chesterton's line, "Give us one free miracle and we'll
explain everything."

~~~
bumbledraven
You are suggesting that there may be some aspect of humans which (a) can be
observed by another un-augmented human over a low-bandwidth TTY and (b) can
_never_ -- at any point in the future -- be simulated by a computer, no
matter how complicated the algorithm or how powerful the computer.

While I admit that this is _possible_ , it strikes me as implausible for the
following reason: the capabilities of computers are increasing as time goes
by, while those of humans are not (the Flynn effect notwithstanding).

I could understand if you were to say that we won't have the technology to
make computers that pass the Turing Test in the next _x_ years, for a
sufficiently large _x_. But you are suggesting that it is plausible that we
will _never_ do it, in the same sense that a physical object will never move
faster than the speed of light. In the absence of a well-accepted theory
supported by evidence (as we have for the limit of the speed of light), this
strikes me as a very strong claim that is difficult to defend given the
existence of technological evolution in the world.

~~~
gruseom
At this point I can only appeal back to what I already said: it depends on
whether or not _there's an algorithm for everything_. If this is true, then
sure: assuming enough processing power (and programming power), anything can
be simulated. And if it's _not_ true, well, it follows that there's a hard
limit, à la your speed-of-light analogy, to what computers will ever be
capable of. Doesn't it?

How is that a strong claim? I don't state that the "algorithms for everything"
thesis is true or false, because I don't know. Nobody does. My point is that
singularianism depends on a belief about this; a belief that is, based on what
we currently know, rather miraculous, and therefore not very hard to dispute.

Thanks, by the way, for quoting me as saying " _may_ be". I was starting to
doubt my ability to get my point across!

~~~
bumbledraven
_[I]t depends on whether or not there's an algorithm for everything._

We already know that there is not an algorithm for everything. Indeed, Turing
himself showed that there is not an algorithm for solving the halting problem
(<http://en.wikipedia.org/wiki/Halting_problem>). But that has no bearing on
whether or not there exists an algorithm that would someday allow a computer
to pass the Turing Test.

~~~
gruseom
Replace that phrase with _algorithm for everything that's intrinsic to being
human_.

Or, if you prefer, _algorithm for everything necessary to simulate being human
sufficiently well to consistently fool humans via low-bandwidth TTY_. My point
remains the same: it's a belief either way. And throwing processing power at
it will only yield the predicted effect in one of those cases.

Edit: the existence of undecidability results (and the general failure of the
formalist project) ought to make us more, not less skeptical of grandiose
claims concerning computability.

~~~
bumbledraven
_[I]t depends on whether or not there's an algorithm for everything necessary
to simulate being human sufficiently well to consistently fool humans via low-
bandwidth TTY._

But clearly such an algorithm _does_ exist, so this objection fails.

Its existence follows from these facts:

(1) only a fixed amount of data in the form of questions can flow through the
low-bandwidth TTY,

(2) only a fixed amount of data in the form of responses can be sent back,
and

(3) there exists an algorithm for any function with a bounded input and
output. (To see this, note that the algorithm could consist of a lookup table
containing a correct output for each of the finite number of inputs it has to
deal with.)

I concede that it is another matter entirely whether or not we will ever
discover such an algorithm and implement it on a sufficiently powerful
computer. But the fact that an algorithm for passing the Test _must_ exist
takes the discussion out of the realm of physical and/or philosophical
possibility and into the realm of technology and engineering.
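The lookup-table construction above can be sketched in miniature. This is a
toy illustration only: the dialog entries are invented, and an actual table
would need an entry for every possible bounded-length prefix, which is
finite but astronomically large.

```python
# Toy "giant lookup table" responder: because every Turing dialog is
# bounded in length, an algorithm can in principle be a finite map from
# each dialog-so-far (a tuple of interrogator lines) to a canned reply.
# A real table over, say, 1 MB dialogs would have on the order of
# 2**(8 * 2**20) entries -- finite, but physically unbuildable.

lookup_table = {
    (): "Hello! Ask me anything.",
    ("What is 2+2?",): "4, last time I checked.",
    ("What is 2+2?", "Are you human?"): "As far as I can tell, yes.",
}

def respond(history):
    """Return the canned reply for the interrogator's lines so far."""
    return lookup_table.get(tuple(history), "Hmm, let me think about that.")

print(respond(["What is 2+2?"]))  # 4, last time I checked.
```

The point of the construction is existence, not feasibility: the map is
finite, so _some_ algorithm realizes it, even though no one could ever
enumerate it.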

~~~
gruseom
Uh oh, you got technical on me!

It's been a long time since I've studied this stuff, but for what it's worth,
your argument feels incorrect to me. It might not be possible to address that
effectively here, but let me try.

Let's call a particular run of the Turing test a "Turing dialog". Say a Turing
dialog D consists of alternating statements C1,H1,...,Cn,Hn where C stands for
"computer" and H for "human". Say a "successful Turing dialog" is one that
succeeds in fooling the human participant - that is, Hn is something like "I
think you're human".

You're right that for any Turing dialog D, D is finite and therefore an
algorithm exists to reproduce it (a simple lookup table will do). But such an
algorithm is only good for one specific D. To build it, you'd have to know all
of D in advance.

To pass the Turing test, we need more. We need an algorithm, which, given _any
valid prefix_ of a Turing dialog, i.e. any sequence C1,H1,...,Ck,Hk, knows how
to produce Ck+1 in such a way that the dialog will end successfully. That's
not the same thing, is it?

~~~
bumbledraven
_We need an algorithm, which, given any valid prefix of a Turing dialog, i.e.
any sequence C1,H1,...,Ck,Hk, knows how to produce Ck+1 in such a way that the
dialog will end successfully._

Exactly, but since each Turing dialog is of bounded length (e.g., no more than
a megabyte in size, depending on the bandwidth of the TTY and the
predetermined maximum length of the test), the set of all Turing dialogs is
finite and is therefore subject to the dictionary-algorithm approach.

It's rather like building out the game tree of optimal play for White in
chess. The tree for the Turing Test "game" would have to include all the
questions that could be asked at each point in the tree, but you only need one
possible correct ("Turing-test passing") response at each node.

------
sjh
For Kurzweil, it's an interesting way of asserting - advertising, if you will
- his own confidence in his capacities as a futurist.

------
AnotherUser
To address the points raised in these comments:

1\. We do NOT have "the equivalent computing power of a kitten's brain."
That's like saying that because we have apples, we have the equivalent taste
of an orange.

The only real way to translate current silicon processor-based computing into
a biological brain-based form is to ask how many neurons we can model on
current hardware (<http://www.technologyreview.com/Biotech/19767/>). The
answer to that is: not nearly enough... yet. At the current rate of
technological growth, we can expect to model a full rat brain within ten
years, with a human brain shortly to follow (remember, exponential growth
moves fast!) Once we can fully simulate a brain, the only limit to its power
is the speed of the processor it runs on.

This approach also addresses the "recognizing a dog" point. An accurately
simulated brain will use the same algorithm the real brain does to recognize
objects. It is also worth pointing out that advances are being made in
deriving the cortical algorithm directly and using it for the very type of
pattern recognition you mention (<http://www.onintelligence.org/index.php>,
<http://www.numenta.com/>).

Finally, the point is subtly flawed. Cats recognize dogs "without having ever
seen one before" because the knowledge of what a dog is is hardwired into
their brains from birth. In effect, they _have_ seen dogs before, or some
part of their brains has.

2\. The point about applying survival of the fittest to robots is ridiculous.
I barely know where to begin.

First, we will use the term AI instead of robot, as a robot is just an AI with
a body. Second, the only thing that matters to an AI is how "smart" it is:
how efficient its algorithms are and how fast those algorithms run. Having a
big strong body doesn't even make sense in this context.

Additionally, the idea that AIs will design AIs smarter than themselves is
also flawed. If an AI figures out an algorithm that allows for a more
efficient thought process, what's to stop the AI from modifying itself to use
that algorithm?

3\. Pills: Pharmaceutical drugs have (often serious) side effects. Homeopathic
medicines usually have very few side effects. I'm not saying that taking all
those pills is healthy or beneficial, but the rampant side effects people seem
to be suggesting probably won't manifest.

The heuristic that seems to crop up in all matters of health/eating is "in all
things moderation."

4\. Ray Kurzweil suggests that the final form of immortality will manifest as
computer systems able to model/run our consciousnesses. That way we will be
able to exist for as long as the machines do, as well as back up and transfer
our consciousnesses.

I submit that, even if this is possible, it wouldn't make _you_ immortal.
Rather, it would merely ensure that some copy of you persisted. Consider if
you could make perfect clones of yourself, with all of your memories and
developments. Those clones would act exactly like you, but they still
wouldn't be you.

5\. On a personal note, if we agree that it's possible that a method for
achieving immortality will be discovered in our lifetimes, the logical course
of action is to devote the entirety of one's efforts towards realizing this
possibility. The reward surely justifies the risk.

~~~
r7000
Great points.

> Consider if you could make perfect clones of yourself, with all of your
> memories and developments.

> Those clones would act exactly like you, but they still wouldn't be you.

I believe the argument is something along the lines of: if you replace a
neuron with some sort of artificial neuron, are you still "you"? How about
another 10? How about a handful here and a handful there over the course of a
year? What if 10% of your neurons are now artificial, along with fake hips and
knees and regenerated biological teeth? Not to mention the tip of the finger
that you accidentally sliced off...

~~~
ericb
By the same token, if you cloned me, and he was sitting next to me and I had
to choose who would get shot, I'd choose him. He isn't me because I can't
access his thoughts. I think identity is tied to physical and temporal
location and is in continual flux, so a clone is never me--it never occupies
the same points in space and time. The nerve replacement idea is tough,
though. Maybe our idea of identity is wrong, artificial, or too limiting.

~~~
pchristensen
Of course, he'd definitely pick you to get shot!

~~~
ericb
True, but what I'm getting at is if we were the "same" we wouldn't care.

~~~
pchristensen
Ah, the old "different forms of equality" problem. You're thinking === (same
object in memory) while I'm thinking == (identical copies).
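In Python terms (the comment above uses JavaScript's `===` and `==`; Python's
`is` and `==` make the same distinction):

```python
# Identity vs. equality, as in the clone discussion: `==` asks whether
# the contents match (the thread's "=="), while `is` asks whether both
# names point at the very same object in memory (the thread's "===").

me = {"memories": ["first day of school", "learned to code"]}
clone = {"memories": ["first day of school", "learned to code"]}

print(me == clone)  # True  -- identical copies
print(me is clone)  # False -- not the same object
```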

~~~
ericb
Well said.

------
hooande
I worked with that Matt Phillips guy. Glad to see he spent his Yahoo money on
becoming immortal.

------
skmurphy
A more balanced view is here:
<http://www.portfolio.com/executives/features/2007/11/19/Longevity-Industry>

