
Are the robots about to rise? Google's new director of engineering thinks so - wikiburner
http://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence/print
======
higherpurpose
> But isn't he simply refusing to accept, on an emotional level, that everyone
> gets older, everybody dies?

Why? Why should we accept on an "emotional level" that we are about to die?
Just because it's _currently_ "inevitable"? Seems like a cop-out to me. I
think humans are meant to be better than just "accepting their fate", and that
we should always try to improve our lives and conditions.

~~~
spindritf
The idea of the singularity is based on extrapolating from the past (the
progress of technology). If you extrapolate from past human lives, death
seems pretty inevitable for many generations to come. Which extrapolation you
choose probably says more about you than it does about the world.

~~~
TeMPOraL
> _Which extrapolation you choose probably says more about you than it does
> about the world_

Exactly, i.e. you can't just go and extrapolate stuff at will, lest you end up
talking like that guy: [http://xkcd.com/605/](http://xkcd.com/605/).

Some extrapolations are more valid than others. There are causal reasons to
believe that the rate of progress will continue for some time. There are no
reasons to believe that death will always be inevitable (sans the heat death
of the universe, but I'm sure we'll figure something out by then).

~~~
spindritf
> _There are causal reasons to believe that the rate of progress will continue
> for some time._

What are those reasons? What if you had extrapolated the speed of passenger
planes in the 60s, or even the 80s - just 30 years ago? Would you have been
correct about the 21st century? How about space travel?

What if AI is just like that? We'll keep improving, and then it'll stall. It
may later recover. Or not.

That's the problem with extrapolation, and extending current trends into the
future. It's actually pretty reliable — you're often right. Until you're not.

~~~
marcosdumay
Biotechnology is an infant, like computers in the 50s. Why should we not
expect huge progress from it?

Now, I guess he'll have to redo his AI extrapolations. Chip manufacturing is
now mature, and while we'll probably be only slightly later in getting the
computers that put 70% of the population out of a job, the current slower rate
of improvement will make a lot of difference for his 2029 prediction.

~~~
TeMPOraL
> _Chip manufacturing is now mature_

Yes, but while we're hitting the limit of our current solutions, AFAIR there's
a lot more room to explore with 3D chips and optoelectronics. We might yet
squeeze some more progress out of it.

> _Biotechnology is an infant, like computers in the 50s. Why should we not
> expect huge progress from it?_

We should, and in this case we have a good reason to believe it - every living
thing on this planet, and every little bit of what we discover about them, is
evidence that nanotechnology is possible, works, and can do amazing things.
The challenge in front of us is to understand it, take control of it, and
re-purpose it.

------
dekhn
One point the article misses: Ray is _a_ director of engineering, not _the_
director of engineering. There is more than one engineering director at
Google.

~~~
1337biz
Sometimes I think Ray runs the risk of becoming some sort of PR mascot for the
company...

~~~
flycaliguy
I think he serves as an optimistic distraction as Google integrates itself
into the military industrial complex.

~~~
TeMPOraL
> _as Google integrates itself into the military industrial complex._

This is a very bold claim. Care to provide any reference/support for it?

~~~
flycaliguy
The Boston Dynamics acquisition. Also the marriage of American intelligence
services and the tech sector as the military increasingly takes to
cyberspace.

~~~
jknightco
"Following the Boston Dynamics acquisition, Google says that it plans to honor
its existing contracts, including the military contract with DARPA, but it
doesn’t plan on pursuing any further military contracts after that."

Stop spreading FUD.

[http://www.extremetech.com/extreme/172859-google-acquires-boston-dynamics-and-seven-other-robotics-companies-next-stop-judgment-day](http://www.extremetech.com/extreme/172859-google-acquires-boston-dynamics-and-seven-other-robotics-companies-next-stop-judgment-day)

~~~
flycaliguy
Oh yeah, "Don't Be Evil"...

------
flycaliguy
I always like to remind people that the road towards immortality is going to
involve a significant period in which us normals have to deal with immortal
rich people. Sounds awful, like, just about the worst societal dynamic I can
think of.

Don't be surprised if they realize there isn't enough room for the rest of us.
These new 1% immortals may also require a special country in which they are
not at risk of being tragically harmed by one of us billion mortals. Watch you
don't get bit by their 2 tonne Boston Dynamics guard dog...

~~~
exratione
[Rewind to 1940s]

"I always like to remind people that effectively treating heart disease is
going to involve a significant period in which us normals have to deal with
long-lived rich people. Sounds awful, like, just about the worst societal
dynamic I can think of."

[Back to the present]

Our age is characterized by the fact that access to medical technology is
basically flat. Rich people get to hire better doctors, but
there are no super-secret, ultra-restricted forms of medicine that are
inaccessible to everyone else. Your chances of getting into clinical trials of
the new new things are about as good as theirs, provided you are prepared to
pick up a phone and put in the time.

~~~
ForHackernews
I don't think that's true. To take one example, Steve Jobs famously gamed the
organ donor system by buying houses in many states in order to get on multiple
statewide registries. Now, most medical treatments aren't zero-sum in the way
of organ transplants, but let's not pretend rich people can't buy better
health than the rest of us.

~~~
TeMPOraL
Sure they can, but I think of it as rich people subsidizing the medical
research for the rest of us. It trickles down, and as the progress continues,
there's always another new expensive thing for rich people to pay for.

As for organ transplants: somebody needs to pour more money into stem cell
research and related fields so that we can start growing organs, thus
eliminating the whole organ market (including the black one) and the zero-sum
dynamics of transplants.

------
edoloughlin
_But he's the sort of genius, it turns out, who's not very good at boiling a
kettle. He offers me a cup of coffee and when I accept he heads into the
kitchen to make it, filling a kettle with water, putting a teaspoon of instant
coffee into a cup, and then moments later, pouring the unboiled water on top
of it. He stirs the undissolving lumps and I wonder whether to say anything
but instead let him add almond milk – not eating dairy is just one of his
multiple dietary rules – and politely say thank you as he hands it to me. It
is, by quite some way, the worst cup of coffee I have ever tasted._

Slightly off topic, but this sort of guff makes me abandon a lot of articles
in the first few paragraphs. In fact, I just did exactly that to come here and
complain. It's little more than the writer exercising his/her own ego. I'd
much rather they get to the point, which is what their interviewee has to say.

~~~
lisper
> It's little more than the writer exercising his/her own ego.

No, it's adding what is called "human interest" to the story, which (the
theory goes) makes it more interesting to non-technical people.

------
Killah911
I'd suggest "The Most Human Human". It's unfortunate that those calling
themselves "futurists" somehow seem to think that being alive forever is the
goal... which is kind of shallow.

Death may be inevitable, but hopefully those who preach from the pulpit have a
little more depth to them, and have examined their lives more carefully than
to simply say "I'll just live forever". In that sense, I think Jobs had the
right idea. If life were "infinite" or even significantly prolonged (i.e. 10
times the current life expectancy), I think we'd have a lot of thinking to do
to come to terms with such a new reality.

~~~
brador
Isn't our willingness to take risks connected to our inevitable, eventual
death? Without a guaranteed eventual death, few would take risks without
massive compensation.

Consider the need in human societies for revolution. With a massively long
lifespan, no one will revolt when the need arises, due to the risk of death,
leaving a pretty shitty state with no way out.

~~~
wpietri
There's a great short story along these lines from Larry Niven. I can't
remember the name or the collection, but maybe somebody else will.

A spaceport bar's noise suppression system is on the fritz, so the human
bartender keeps hearing random bits from a conversation somewhere in the bar.
One alien, a DNA-based one, has come to sell their life-extension technology.
The other points out innovative things that humans are doing, and speculates
that it's related to the short lifespan. Eventually, the DNA-based alien
agrees and decides to wait a few human generations just to see what we do
next. The bartender looks around desperately trying to figure out which of the
many tables it was, to no avail.

It's a commonplace among artists that constraints force creativity. A small
example is the 6-word-stories thing [1]. An example more relevant to your
comment is Kevin Kelly's life clock [2]. If you talk to people who have
cheated death, you might expect them to talk about being more careful. But the
ones I know all talk about being reminded to live fully while we can.

[1]
[http://www.wired.com/wired/archive/14.11/sixwords.html](http://www.wired.com/wired/archive/14.11/sixwords.html)
[2] [http://kk.org/ct2/2007/09/my-life-countdown-1.php](http://kk.org/ct2/2007/09/my-life-countdown-1.php)

~~~
magnanimous
The story is called "Limits", and it is part of his Draco Tavern series.

------
himangshuj
Seems more like an article on the virtues of Ray's past predictions, and is
rather one-sided. Ray Kurzweil, the man with the crystal ball.

------
coldtea
Or you know, it's just another premature product from Google, to get news
coverage as an "innovative" company by rehashing older stuff in not-marketable
forms.

Like self-driving cars, computer-glasses, cloud-only-laptops and the like, all
met with minimal success.

~~~
worldsayshi
What's with this attitude? How can you be innovative without failing 9 times
out of 10? That's what innovation entails! If you haven't failed while
innovating, you were either extremely lucky or you have some divine
superpower.

The iPhone wasn't new either - it was a rehash of old ideas. But it was a new
execution. These are all new executions.

~~~
coldtea
> _What's with this attitude? How can you be innovative without failing 9
> times out of 10?_

Easily, by not making PR announcements before you succeed.

Plus, I don't think it's an absolute that you need to fail multiple times to
get something innovative. Take for example Xerox PARC in research. Or, in the
market area, Apple.

> _The iPhone wasn't new either - it was a rehash of old ideas. But it was a
> new execution. These are all new executions._

Well, there are new executions that are ready to succeed, and new executions
that are barely held together with chewing gum and wire. I don't think it's
any service to the public to tout the latter to high heavens.

~~~
TeMPOraL
> _Easily, by not making PR announcements before you succeed._

Oftentimes it's the PR that is the difference between success and failure.

~~~
worldsayshi
Indeed, the question for many of these innovations is not whether they will
work from a technical perspective, but whether they will work with the public.
------
Geee
I'm predicting a future where most people will live on in a virtual world with
'unlimited' lives. Just the brain will be kept alive in a box somewhere. Well,
that sounds like the Matrix, but I think it's pretty inevitable.

~~~
coldtea
Or, you know, one where only a handful of people will live this way. In
isolated, heavily guarded areas. With tons of energy, food, toys, technology,
medicine and the like.

And the majority will slave away and be harvested for work, organs, sex slaves
and such.

You know, sometimes you need to provide a more realistic picture of the future
(this is not totally unlike how people actually live in places like Rio or
Russia for example, and even L.A.
[http://www.amazon.com/City-Quartz-Excavating-Future-Angeles/dp/0679738061](http://www.amazon.com/City-Quartz-Excavating-Future-Angeles/dp/0679738061)).

------
jmount
That's his schtick - publicly speculating on this is a big part of his fame.

------
IsaacL
I have to admit that I'm not a huge fan of Ray Kurzweil - he's one of a large
group of people who believe that accelerating change will almost certainly be
good. I think the singularity could be good, but it could also be really bad,
and it's important to spend some resources on making sure it goes well.

MIRI (formerly the Singularity Institute) has a mixed reputation around these
parts, but after reading fairly widely over the last year I think they have
the deepest thinking on the topic of AI. Here's a concise summary of their
worldview:
[http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/)

As I see it, their argument goes:

1. It's tempting to think of AIs becoming either our willing servants or
integrating nicely with human society. In actuality, AIs will likely be able
to bootstrap themselves to superintelligence extremely rapidly; we'll soon be
dealing with alien minds that we fundamentally can't understand, and there
will be little stopping the AI/AIs from doing whatever they want.

2. It's tempting to think, by analogy to the smartest human beings, that
superintelligent AIs would be wise and benevolent. In actuality, a
superintelligent AI could easily have strange or bizarre goals. I find this
makes more sense if you think of AIs as "hyperefficient optimisers", as the
word "intelligence" has some misleading connotations.

3. OK, well surely we can leave the AIs with weird goals to do their thing,
and build other AIs to do useful things, like cure cancer or research nuclear
fusion? The trouble is that even an innocuous goal, when given to an alien
superintelligence, will very likely end badly. An AI programmed to compute pi
would realise that it could compute pi more efficiently by hacking all
available computer systems on the planet and installing copies of itself. Or
by developing nanotechnology and converting all matter in the solar system
into extra computational capacity. You have to explicitly program the AI not
to do this, and defining the set of things the AI should not do is a hard
problem. (Remember that 'common sense' and 'empathy' are human abilities, and
there's no reason an AI would have anything like them.)

4./5. OK, well, we'll build an AI with the goal of maximising the happiness
of humanity. But then the AI ends up building a Brave-New-World-style
dystopia, or kidnaps everyone and hooks them up to heroin drips to ensure they
are in constant opiated bliss. It's really hard to come up with a good set of
values to program into an AI that doesn't omit some important human value
(like consciousness, or diversity, or novelty, or creativity, or whatever).

I'm glad that Peter Norvig (director of research at Google) is concerned about
the issue of friendly AI. I'm curious to hear what other HN readers think of
these ideas.

Anticipating some common objections I hear from friends:

_How could a superintelligent AI have a stupid goal like computing pi? /
Wouldn't it be smart enough to break any controls we put on it?_

I think this objection assumes an AI would be wired together like a typical
intelligent human mind. If you think of an AI as a pure optimisation process,
it's clear that it would have no reason to reprogram the ultimate goals it
begins with.

_If they're smarter than us, we should just let the AI take over / AIs are
like our children, ultimately we should leave them free to do whatever they
want_

Again, this assumes the AIs are like super-powered human minds and that they
will do _interesting_ things once they take over, like contemplate the deep
mysteries of the universe. But it's clearly possible for the AIs to devote
themselves to really trivial tasks, like calculating digits of Pi for all
eternity.

