
Ray Kurzweil joins Google - dumitrue
http://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing?utm_source=twitterfeed&utm_medium=twitter
======
cs702
Thanks in part to the popularity of his books, movie, and speeches, Kurzweil
now knows pretty much every AI researcher on the planet, and we can safely
assume he's aware of even very obscure research projects in the field, both
inside and outside academia.

Joining Google gives him ready access to data sets of almost unimaginable
size, as well as unparalleled infrastructure and skills for handling such
large data sets, putting him in an ideal position to connect researchers in
academic and corporate settings with the data, infrastructure, and data
management skills they need to make their visions a reality.

According to the MIT Technology Review[1], he will be working with Peter
Norvig, who is not just Google's Director of Research, but a well-known figure
in AI.

\--

[1] [http://www.technologyreview.com/view/508896/what-google-
sees...](http://www.technologyreview.com/view/508896/what-google-sees-in-new-
hire-futurist-ray-kurzweil/)

~~~
cschmidt
I just can't see Kurzweil being in the same league as Peter Norvig. Sure, he
did some interesting work a long time ago, before he got weird. I can't see
this working out well for Google, unless they just want a famous figurehead.

~~~
slacka
> I just can't see Kurzweil being in the same league as Peter Norvig.

The problem with Peter Norvig is that he comes from a mathematical background
and is a strong defender of the use of statistical models that have no
biological basis.[1] While they have their use in specific areas, they will
never lead us to a general-purpose strong AI.

Lately Kurzweil has come around to the view that symbolic and Bayesian
networks have been holding AI back for the past 50 years. He is now a
proponent of using biologically inspired methods similar to Jeff Hawkins'
approach of Hierarchical Temporal Memory.

Hopefully, he'll bring some fresh ideas to Google. This will be especially
useful in areas like voice recognition and translation. For example, just last
week, I needed to translate "I need to meet up" into Chinese. Google
translates it to 我需要满足, meaning "I need to satisfy". This is where
statistical translation fails, because statistics and probabilities will never
teach machines to "understand" language.

[1] [http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-
the-f...](http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-fight-
for-the-future-of-ai)

~~~
nostrademons
For several hundred years, inventors tried to learn to fly by creating
contraptions that flapped their wings, often with feathers included. It was
only when they figured out that wings don't have to flap and don't need
feathers that they actually got off the ground.

It's still flight, even if it's not done like a bird. Just because nature does
it one way doesn't mean it's the only way.

(On a side note, multilayer perceptrons aren't all that different from how
neurons work - hence the term "artificial neural network". But they _also_
bridge to a pure mathematical/statistical background. The divide between them
is not clear-cut; the whole point of mathematics is to _model_ the world.)
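To make the shared abstraction concrete, here's a minimal sketch in plain Python (the weights are illustrative numbers I made up, not a trained network):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs squashed through a sigmoid: a crude
    # analogue of a biological neuron firing as its inputs accumulate.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A multilayer perceptron is just neurons feeding neurons.
inputs = [0.5, 0.2]
hidden = [neuron(inputs, [0.4, -0.6], 0.1),
          neuron(inputs, [0.9, 0.3], -0.2)]
output = neuron(hidden, [1.2, -0.8], 0.0)
print(output)
```

The same object reads equally well as a cartoon of a neuron or as a plain statistical model (logistic units stacked in layers), which is the point about the divide not being clear-cut.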

~~~
slacka
> For several hundred years, inventors tried to learn to fly by creating
> contraptions that flapped their wings...

To quote Jeff Hawkins: "This kind of ends-justify-the-means interpretation of
functionalism leads AI researchers astray. As Searle showed with the Chinese
Room, behavioral equivalence is not enough. Since intelligence is an internal
property of a brain, we have to look inside the brain to understand what
intelligence is. In our investigations of the brain, and especially the
neocortex, we will need to be careful in figuring out which details are just
superfluous "frozen accidents" of our evolutionary past; undoubtedly, many
Rube Goldberg–style processes are mixed in with the important features. But as
we'll soon see, there is an underlying elegance of great power, one that
surpasses our best computers, waiting to be extracted from these neural
circuits.

...

For half a century we've been bringing the full force of our species'
considerable cleverness to trying to program intelligence into computers. In
the process we've come up with word processors, databases, video games, the
Internet, mobile phones, and convincing computer-animated dinosaurs. But
intelligent machines still aren't anywhere in the picture. To succeed, we will
need to crib heavily from nature's engine of intelligence, the neocortex. We
have to extract intelligence from within the brain. No other road will get us
there."

As someone with a strong background in biology who took several AI classes at
an Ivy League school, I found that all of my CS professors had a disdain for
anything to do with biology. The influence of these esteemed professors and
the institutions they perpetuate is what's been holding the field back. It's
time people recognized it.

~~~
angersock
I'll bite. Tell us, concretely, what is to be gained from a biological
approach.

Honestly I imagine we'd find more out from philosophers helping to spec out
what a sentient mind actually is than we would from having biologists trying
to explain imperfect implementations of the mechanisms of thought.

~~~
slacka
I'm short on time, so please forgive my rushed answer.

It will deliver on all of the failed promises of past AI techniques: creative
machines that actually understand language and the world around them. The
"hard" AI problems of vision and commonsense reasoning will become "easy". You
won't need to program into a computer the logic that all people have hands or
that eyes and noses are on faces. They will gain this experience as they learn
about our world, just like their biological equivalents: children.

Here's some more food for thought from Jeff Hawkins:

"John Searle, an influential philosophy professor at the University of
California at Berkeley, was at that time saying that computers were not, and
could not be, intelligent. To prove it, in 1980 he came up with a thought
experiment called the Chinese Room. It goes like this:

Suppose you have a room with a slot in one wall, and inside is an English-
speaking person sitting at a desk. He has a big book of instructions and all
the pencils and scratch paper he could ever need. Flipping through the book,
he sees that the instructions, written in English, dictate ways to manipulate,
sort, and compare Chinese characters. Mind you, the directions say nothing
about the meanings of the Chinese characters; they only deal with how the
characters are to be copied, erased, reordered, transcribed, and so forth.

Someone outside the room slips a piece of paper through the slot. On it is
written a story and questions about the story, all in Chinese. The man inside
doesn't speak or read a word of Chinese, but he picks up the paper and goes to
work with the rulebook. He toils and toils, rotely following the instructions
in the book. At times the instructions tell him to write characters on scrap
paper, and at other times to move and erase characters. Applying rule after
rule, writing and erasing characters, the man works until the book's
instructions tell him he is done. When he is finished at last he has written a
new page of characters, which unbeknownst to him are the answers to the
questions. The book tells him to pass his paper back through the slot. He does
it, and wonders what this whole tedious exercise has been about.

Outside, a Chinese speaker reads the page. The answers are all correct, she
notes— even insightful. If she is asked whether those answers came from an
intelligent mind that had understood the story, she will definitely say yes.
But can she be right? Who understood the story? It wasn't the fellow inside,
certainly; he is ignorant of Chinese and has no idea what the story was about.
It wasn't the book, which is just, well, a book, sitting inertly on the
writing desk amid piles of paper. So where did the understanding occur?
Searle's answer is that no understanding did occur; it was just a bunch of
mindless page flipping and pencil scratching. And now the bait-and-switch: the
Chinese Room is exactly analogous to a digital computer. The person is the
CPU, mindlessly executing instructions, the book is the software program
feeding instructions to the CPU, and the scratch paper is the memory. Thus, no
matter how cleverly a computer is designed to simulate intelligence by
producing the same behavior as a human, it has no understanding and it is not
intelligent. (Searle made it clear he didn't know what intelligence is; he was
only saying that whatever it is, computers don't have it.)

This argument created a huge row among philosophers and AI pundits. It spawned
hundreds of articles, plus more than a little vitriol and bad blood. AI
defenders came up with dozens of counterarguments to Searle, such as claiming
that although none of the room's component parts understood Chinese, the
entire room as a whole did, or that the person in the room really did
understand Chinese, but just didn't know it. As for me, I think Searle had it
right. When I thought through the Chinese Room argument and when I thought
about how computers worked, I didn't see understanding happening anywhere. I
was convinced we needed to understand what "understanding" is, a way to define
it that would make it clear when a system was intelligent and when it wasn't,
when it understands Chinese and when it doesn't. Its behavior doesn't tell us
this.

A human doesn't need to "do" anything to understand a story. I can read a
story quietly, and although I have no overt behavior my understanding and
comprehension are clear, at least to me. You, on the other hand, cannot tell
from my quiet behavior whether I understand the story or not, or even if I
know the language the story is written in. You might later ask me questions to
see if I did, but my understanding occurred when I read the story, not just
when I answer your questions. A thesis of this book is that understanding
cannot be measured by external behavior; as we'll see in the coming chapters,
it is instead an internal metric of how the brain remembers things and uses
its memories to make predictions. The Chinese Room, Deep Blue, and most
computer programs don't have anything akin to this. They don't understand what
they are doing. The only way we can judge whether a computer is intelligent is
by its output, or behavior."

~~~
yohui
First, I don't feel this answers angersock's question concerning _concrete_
applications of cognitive neuroscience to artificial intelligence.

Second, despite running into it time and again over the years, Searle's
Chinese room argument still does not much impress me. It seems to me clear
that the setup just hides the difficulty and complexity of understanding in
the magical lookup table of the book. Since you've probably encountered this
sort of response, as well as the analogy from the Chinese room back to the
human brain itself, I'm curious what you find useful and compelling in
Searle's argument.

I remain interested in biological approaches to cognition and the potential
for insights from brain modelling, but I don't see how it's useful to
disparage mathematical and statistical approaches, especially without concrete
feats to back up the criticism.

~~~
slacka
Yohui, I'm on an iPhone but will do my best.

Traditional AI has had half a century of failed promises. Jeff's Numenta had a
major shakeup over this very topic and has only been working with biologically
inspired AI for the past 3 years. Kurzweil has also only recently come around.
Comparing Grok to Watson is like putting a yellow belt up against Bruce Lee.
Give it some time to catch up.

In university I witnessed firsthand the institutional discrimination against
biological neural nets. My original point was that Google could use the fresh
blood and ideas.

------
GuiA
Saw him give a talk promoting his latest book last month, was heavily
disappointed. Ideas are presented in a way to fit nicely together, but
ultimately lack any depth or critical insights. I recall someone calling it
"creationism for people with an IQ over 140"; it's a fair description.

It's a shame; he's brought many great contributions to our field, but I fear
he jumped the shark a while ago. Maybe going to Google will force him to work
on solutions to problems whose correctness can be more easily assessed.

~~~
jere
>I recall someone calling it "creationism for people with an IQ over 140";
it's a fair description.

 _Really?_ Because if so, then they stole that quote almost verbatim from
Mitch Kapor when he was discussing the singularity in 2007. And it seems to
have a lot less relevance to a book about how the brain works than it does to
an imagined singularity.

>Mitch Kapor, the founder of Lotus Development Corporation, has called the
notion of a technological singularity "intelligent design for the IQ 140
people...This proposition that we're heading to this point at which everything
is going to be just unimaginably different—it's fundamentally, in my view,
driven by a religious impulse. And all of the frantic arm-waving can't obscure
that fact for me."

<http://en.wikipedia.org/wiki/Ray_Kurzweil#Criticism>

~~~
Osmium
In fairness, that's how good quotes usually work: they tend to be retold time
and again and adapted for other purposes until it's no longer clear who said
it originally. So I wouldn't be too quick to call foul on this one. I'm not
sure one can "steal" a quote...

~~~
jere
I'm fine with reusing quotes, but in this instance it seems like a rather ham-
handed application of it.

The singularity reeks of religious concepts. Kurzweil even called his book
"The Age of Spiritual Machines" before it was "The Singularity is Near." He
literally thinks he's going to be able to live forever (and the technology to
do so will be available within his own lifetime). Yada yada... basically what
I'm saying is that the quote fits _that_ book perfectly.

Now we're talking about his new book "How to Create a Mind", which is a theory
about how the brain works and how to reverse engineer it, and the quote
doesn't seem to fit. I'm guessing someone was just trying to sound
intelligent... but then why does the OP agree with them?

~~~
orangecat
_The singularity reeks of religious concepts._

If by that you mean that scientists are attempting to achieve what religions
have been falsely promising, then ok, but so what? Before we had medicine,
people could only pray to try to heal the sick. Then physicians actually
started studying the body and figuring out how to cure disease, fortunately
not abandoning the idea because religions had failed to deliver.

 _He literally thinks he's going to be able to live forever (and the
technology to do so will be available within his own lifetime)._

An ambitious and unlikely goal, but it's not prohibited by the laws of physics
(ignoring the heat death of the universe for the moment). I'll take that
optimism over the much more common attitude that accepts the destruction of
billions of sentient beings as inevitable and often even desirable.

~~~
jere
I think it's pretty obvious, but let me quote Neal Stephenson:

>I can never get past the structural similarities between the singularity
prediction and the apocalypse of St. John the Divine. This is not the place to
parse it out, but the key thing they have in common is the idea of a rapture,
in which some chosen humans will be taken up and made one with the infinite
while others will be left behind.

Poll Americans (most of whom are Christian). Close to half will tell you the
end of the world and thus the rapture is going to happen _in their own
lifetime_. Christians have been believing that the rapture was around the
corner for literally the last 2000 years. Arrogant if you ask me.

It wouldn't be so bad if Kurzweil's dates didn't line up conveniently with his
own mortality. He'll be around 97 at the time he's predicting the singularity
will occur.

So combine that with the concepts of a) eternal life, b) meeting your
relatives in heaven (Kurzweil is planning to resurrect his dead father), and
c) AI and post-humans that are essentially godlike. Sure, Kurzweil will show
you a bunch of exponential graphs to make it all seem so reasonable, but
that's why Kapor says "creationism for the IQ 140 people."

That's not optimism. It's wishful thinking. If you can't see that it has all
the fundamentals of a religion, I'm not sure what else to say.

>An ambitious and unlikely goal, but it's not prohibited by the laws of
physics

As far as I know, neither is God.

~~~
swombat
An interesting set of questions to follow up this fair hypothesis:

Is it better or worse than existing religions? In what way? Societally?
Individually? Scientifically?

Is it better or worse than no religion? In what way? Societally? Individually?
Scientifically?

Religion is arrogant, for sure. And it all started when some of our very
distant ancestors decided to bury their dead instead of leaving them in a pile
of trash. Which, interestingly, is considered one of the defining points where
we became "humans" rather than just intelligent monkeys.

~~~
jere
Good questions, but hard to answer of course. I'd say the singularity is
better if it encourages people to go into science as a result and try to make
it happen. Much like how science fiction sometimes inspires technology.

Is that happening? I don't know. I'm worried that people are so confident that
the singularity is not only inevitable, but just around the corner, that
they're simply buckling up for the ride.

I would say that it doesn't matter much; we only have to live with the
consequences of the belief for a few decades.... but then again failed
predictions don't often discourage those who believed.

~~~
TeMPOraL
> I'm worried that people are so confident that the singularity is not only
> inevitable, but just around the corner, that they're simply buckling up for
> the ride.

That's a valid concern. But the nice thing here is that people don't only have
to wait for it (like I guess many do), but they can actually work to speed it
up.

------
waterlesscloud
It's seemed pretty clear to me for some time that Google's real mission is
AI/singularity oriented and everything else is just a step along that road. It
may not be what the day-to-day view is in the trenches, but it seems like the
high level plan.

A hire like this one certainly reinforces that perception.

I don't know if it's truly possible to accomplish, but it's fascinating to see
a major company taking steps in that direction.

~~~
nostrademons
"Organize the world's information and make it universally accessible and
useful."

If you take that to the limit, the logical consequence is some sort of planet-
wide consciousness that can instantly pull up any of humanity's collective
knowledge at a moment's notice.

~~~
Raphael
Doesn't it already do that?

~~~
nostrademons
Google employees tend to be harsher on our current achievements than the
general public. :-)

------
brandall10
I'm somewhat surprised there are comments debating what use he could be to
Google or what interest they might have in him - Google is one of the primary
backers of Singularity University. They already have a working relationship.
Now he's an employee. Don't get how this could be a stretch.

Singularity U, as far as I understand, is not really there so people can more
quickly get to the point of uploading their brains to the cloud or anything -
it's essentially for business strategists who want a better grasp of where
things will be 5-10+ years out. If the Goog believes strongly in the Kurz's
ability to do this, then it seems like a pretty nice score for the Goog.

~~~
kanzure
> They already have a working relationship. Now he's an employee. Don't get
> how this could be a stretch.

Maybe because of his role at Google, "Director of Engineering". That's not a
good description of what Singularity University offers their customers. They
do maybe one or two field trips to BioCurious and call it quits.

Also, why is Singularity University managing TedxAustin? That was a bizarre
email to see.

~~~
brandall10
Really? Their stated goal is: "assemble, educate and inspire a cadre of
leaders who strive to understand and facilitate the development of
exponentially advancing technologies and apply, focus and guide these tools to
address humanity’s grand challenges."

Why would this not be in alignment with Google's aim for such a position? Why
would they not want a strategist who they believe could direct their
engineering staff in this manner?

The people who attend the university are CEOs, CTOs... Directors of
Engineering, etc. It's not for fringe kooks to congregate in celebration of
the upcoming nerd rapture. Not at $25k/10 weeks it ain't.

I get that he's a polarizing figure. But there are some very powerful people
in this world who believe the man can walk on water.

~~~
dbyrd
You're all sorta right. I worked at Singularity University for 2 years. They
do two things: 1. educate fabulously wealthy people in expensive executive
programs, and 2. use that money to put on a YC-esque incubator during the
program, where people come to build companies that use the technology of their
sponsors. Google was one of the first sponsors. Peter Norvig was on the
faculty for a couple years. So was Astro Teller, who heads their special
projects (Google X).

------
dhughes
I think he's eyeing their massive server farm as a spot to park his brain. He
just called shotgun for the Singularity.

~~~
cheeseprocedure
I wonder if we'll be able to submit pull requests.

------
6ren
_It's as if you took a lot of very good food and some dog excrement and
blended it all up so that you can't possibly figure out what's good or bad._
[http://www.americanscientist.org/bookshelf/pub/douglas-r-
hof...](http://www.americanscientist.org/bookshelf/pub/douglas-r-hofstadter)

I see what DRH means, and _The Singularity is Near_ did seem mostly a
perfunctory literature review, with important issues not discussed, just
skimmed over. (For example, he doesn't discuss the _causes_ of accelerating
returns, doesn't support the causes with data, only the effects. Another
example: is it necessarily true that we are intelligent enough to understand
ourselves? We're effective when we can decompose something hierarchically into
simpler concepts... but what if there isn't such a decomposition of
intelligence? i.e. the simplest decomposition is too complex for us to grasp.
Hofstadter asks if a giraffe is intelligent enough to understand itself.)

But I thought he supported his basic thesis, that progress is accelerating,
compellingly. Really did a great job (seems to be the result of ongoing
criticism, and him finding ways to refute it).

~~~
backprojection
>For example, he doesn't discuss the causes of accelerating returns, doesn't
support the causes with data, only the effects.

I agree with this. It seems to be a huge hole in the entire discussion. It's
not enough to cite historical data, and assert that exponential growth will
continue indefinitely. I could speculate a bit about some explanations. But
I'm curious if there are any good discussions out there, does anyone have some
recommendations?

~~~
slacka
I also found it annoying that in all his examples of exponential growth in
biological systems, he conveniently left out the part where populations crash
after reaching an environmental limit. I think it's just as likely that
technology will send us back to the stone age with nukes or bio-weapons as it
is that we merge with AI.

------
jonmc12
Given Kurzweil's age and stated goals, I'm thinking there is no way he is
going to Google unless they are investing in life extension / prevention of
death.

Read between the lines - "next decade's 'unrealistic' visions" - likely means
nothing less than brain-computer interfaces, with the end goal of extending
life by storing the entire human mind on a machine. That's certainly not far
off from Kurzweil's timelines on the Law of Accelerating Returns. I can
understand why the PR does not say this, but it seems clear this is where
Kurzweil would want to invest his time.

~~~
rdl
Heh, Google Life Extension, so you can live longer and thus view more ads in
your lifetime.

------
nealabq
What's Kurzweil's motive?

He's a visionary who can deliver a finished product. I think he must have some
pretty specific ideas, and he wants to partner with Google.

A few guesses:

\- New interfaces to replace keyboard/mouse/touch. Voice, gesture, face,
brainwaves. Sign language with humming, blinking, and pupil pointing. Works
with tablets, TVs, wearables, cars, buildings, ATMs, etc.

\- SuperPets (r) that can pass the Turing test. And do the shopping.

\- Surgically implanted Bluetooth. (It could literally be a tooth!)

\- Hover skateboards.

\- The Matrix. (Or the 13th Floor, which was a better movie in my not-so
humble opinion.)

I don't think it'll have to do with life-extension though. That's just too
crazy far out-there.

~~~
kanzure
> \- New interfaces to replace keyboard/mouse/touch. Voice, gesture, face,
> brainwaves.

Unfortunately, it turns out you can only get a limited number of bits out by
looking at brainwaves (EEG). Gesture is much higher bandwidth, and keyboards
seem to be the highest.
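A rough back-of-envelope comparison (every figure below is an assumption on my part, not a measurement from any study):

```python
# Keyboard: a competent typist at ~60 words/min, ~5 chars/word,
# ~5 bits/char (log2 of a ~32-symbol alphabet).
keyboard_bps = 60 * 5 * 5 / 60.0   # bits per second

# EEG spellers: tens of bits per *minute* is the commonly cited ballpark.
eeg_bits_per_min = 25              # assumed round number
eeg_bps = eeg_bits_per_min / 60.0

print(f"keyboard ~{keyboard_bps:.0f} bits/s vs EEG ~{eeg_bps:.1f} bits/s")
```

Even if the EEG figure here is off by an order of magnitude, the keyboard still wins comfortably.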

~~~
rpm4321
fMRI is extremely high bandwidth and still in its infancy, and we are
currently getting pretty good performance from relatively basic invasive
neural implants on the disabled. I've read a couple of interesting articles on
breakthroughs regarding the miniaturization of fMRI technology, so I think
it's safe to say that keyboards will not be the highest bandwidth interface in
the decades to come.

~~~
kanzure
I agree that there's useful information in the brain that we can extract. EEG
isn't that method. I love fMRI as much as the next guy. fMRI isn't reading
"brainwaves". It images a correlate of neuronal oxygen depletion which
indicates metabolism and activity.

waves:
[http://en.wikipedia.org/wiki/Electroencephalography#Wave_pat...](http://en.wikipedia.org/wiki/Electroencephalography#Wave_patterns)

versus this awesomeness:
[http://en.wikipedia.org/wiki/Hemodynamic_response#Functional...](http://en.wikipedia.org/wiki/Hemodynamic_response#Functional_magnetic_resonance_imaging)

------
dinkumthinkum
The problem, and I don't want to be mean about it, is that Kurzweil is a
crackpot and a charlatan. This is not to take away from his intelligence or
his technical achievements, which are indisputable. However, even Nobel prize
winners can be outright crackpots and crazies (Nobel disease).

I don't know exactly what Google's motives are here, I suspect it's something
less than actually bringing about some of his, let's say, loftier ideas.

~~~
Lost_BiomedE
If I were Google, I would hire him just to mumble into a recorder all day.
Then have a small team decipher and escalate possible ideas.

------
nonsequ
Can anyone shed some light on what 'Director of Engineering' might mean at
Google? It sounds rather unassuming for a person of his stature.

~~~
ChuckMcM
For a long time it was the highest grade you could be hired in at, since
Google didn't feel that the title "Vice President" at Google meant the same
thing as at other places. I know VPs who were given offers and turned them
down on the basis of having to take the title of director. At one time you had
a limited amount of time post-hire to 'prove yourself' or be managed out of
the organization.

I found the hire curious from the standpoint that Kurzweil's tendency to
handwave rather than retreat to data has historically been a red flag in the
hiring process at Google. This tended to unfairly penalize theorists over
experimentalists at Google. One wonders if they've changed.

I remember him giving a tech talk and talking about how many computers you'd
need to simulate a brain and how nobody would put that together for years yet,
and chuckling knowingly :-).

~~~
ilaksh
His tendency to handwave rather than retreat to data? What the heck are you
talking about? Have you seen how many graphs he puts in his presentations?

You think he's a theorist rather than an experimentalist? How can you possibly
get that idea with all of his game-changing inventions?

~~~
ChuckMcM
So he gave a tech-talk at Google, around 2008, and yes he had lots of graphs
and such but during the Q&A session he kept retreating into generalized ideas
rather than data. I recall the question about how he came up with his numbers
for machines to hold a consciousness as one such exchange.

The impression I certainly got was that his approach is to theorize about
something, then design experiments to test out his theory. As opposed to
running a bunch of experiments and then figuring out a theory that would
explain the collected data.

That said, I've got mad respect for his work and have enjoyed his talks and
writings. Your comment though suggests you think 'theorist' is a negative in
some connotation? Why is that?

~~~
philh
> The impression I certainly got was that his approach is to theorize about
> something, then design experiments to test out his theory. As opposed to
> running a bunch of experiments and then figuring out a theory that would
> explain the collected data.

There's nothing wrong with this, as you've written it. (There might be a
problem with his implementation.) All else being equal, I trust a theory which
has made ten accurate predictions over a theory which merely explains ten
previous observations.

~~~
shawn-butler
You would be making a mistake unless the predictions had a high degree of
"unexpectedness" or the ten prior observations lacked coherence.

~~~
philh
If one person develops a theory and makes ten predictions which turn out to be
true; and if a second person observes the same ten things, and then develops a
theory without knowing the first; then I consider this stronger evidence in
favor of the first theory than of the second. (The second might e.g. be more
elegant, in which case I might prefer it anyway.)

This is true whatever the observations are. If they're unsurprising, then we
already had a good theory, in which case I question the need for the two new
ones, but that applies to both equally.

It may be that Kurzweil is falling into the trap of misinterpreting his
results to fit his theory, but that can be done just as well when you try to
base a theory off existing data. On the other hand, the Texas sharpshooter
fallacy can only happen if you collect data before coming up with your theory.
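The sharpshooter point is easy to demonstrate with a toy simulation (pure illustration; the coin-flip setup is mine, not anything from Kurzweil's data):

```python
import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1000)]  # fair coin

# Post-hoc theorizing: scan the data for the most striking pattern
# (here, the longest run of heads) and then "explain" it.
longest = run = 0
for heads in flips:
    run = run + 1 if heads else 0
    longest = max(longest, run)

# Some run of roughly log2(1000) ≈ 10 heads is near-certain somewhere
# in 1000 fair flips, so finding one after the fact is no evidence the
# coin is biased. A pre-registered prediction ("the next 10 flips will
# all be heads") would have had only a 1-in-1024 chance of luck.
print(longest)
```

The post-hoc pattern looks impressive only because the target was painted around it after the fact.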

------
ilaksh
I wonder if this will be a wake up call for some of the people who think his
predictions of super-human AI are a joke.

I mean even if you don't believe in the Singularity, you must believe in
Google, right?

~~~
vertr
>even if you don't believe in the Singularity

This makes "the singularity" sound very much like a religion.

~~~
jfb
It's faith-based handwaving bafflegab. So, yes, only without the intellectual
credibility.

~~~
gruseom
The Rapture for nerds, as Ken MacLeod delightfully put it.

~~~
freshhawk
That's my favourite, I've also seen Kurzweil called DeepakChopra++ which I
like as well.

~~~
tommorris
Is that an insult towards the C programming language? ;-)

------
zephjc
Google continues its move towards the ad-driven singularity.

~~~
alperakgun
google is hedging itself against singularity. When the day comes. Google
valuation will jump exponentially within a matter of minutes.

~~~
saalweachter
And here I thought the Singularity referred to Apple's stock price.

------
pbw
Seems sad; I'd like to see Kurzweil form another startup and get bought by
Google, rather than go work for them. I assume he could self-fund something; I
don't know how his hedge funds are doing.

But maybe he's been there and done that, and wants mucho resources from day
one. Maybe the AI space has grown up and it's hard to start up companies now,
you need the resources and big data sets to do anything significant? Or he's
just after the free lunches.

~~~
queensnake
Maybe Google has a monopoly on all the smartest people these days, or is just
one-stop shopping for support for the biggest ideas. Ray's worth $27 million
and, as you said, has many of his own companies; he doesn't need a salary.

------
scarmig
I wonder if his political/visionary/famous aspects played a positive,
negative, or neutral role in the hire.

------
samskiter
I think this is the real letdown prediction:

In 2008, Ray Kurzweil said in an expert panel in the National Academy of
Engineering that solar power will scale up to produce all the energy needs of
Earth's people in 20 years.

lololololol

------
joey_muller
Wow, Google's stock should rise on this news. Many folks may not know Kurzweil
keyboards (for music), but they are excellent. I can't wait to see where he
leads us next.

------
nnq
gotta love this guy:

> 1%... you're pretty much finished... try that with product submission
> schedules [1]

...so now we know who to blame for future Google product delays.

[1]: <http://www.youtube.com/watch?v=zihTWh5i2C4>

EDIT: added the source link

------
TommyDANGerous
Google will build Skynet, and Skynet will take us over.

------
michaelochurch
I wonder how the blind allocation process will treat him. His domain expertise
is AI, but he didn't do any of that at Google, which means it doesn't exist.
So is he going to have to spend 18 months maintaining a legacy ad-targeting
product while the 26-year-old Staff SWE next to him works on its replacement?
How is he going to handle that?

~~~
dumitrue
Why would you ever think that he is being blindly allocated to the position of
Director of Engineering? In that position, he'll pretty much be in control of
his own destiny at Google.

~~~
michaelochurch
When I was there, Directors were still above the Real Googler Line, but it may
have moved up in the time since then. Also, he lives in Massachusetts, which
means he'll only get the work that MTV doesn't want (unless he moves).
Finally, with his visibility, he's going to get a lot of prank Perf and his
manager is going to have a hard time promoting him because of that.

~~~
Simucal
prank Perf?

~~~
michaelochurch
Anyone can write unsolicited reviews for anyone, with an option of it being
visible only to the manager (aka graffiti in the executive washroom).

If someone gets pissed off about his transhumanism (especially if he starts
talking about the Singularity on eng-misc, or if he has questions related to
assigning Real Names to AIs, or if someone just doesn't like Canada and won't
forgive him for his work with Our Lady Peace in 2000-1) and decides to "Perf"
him, he could be in trouble.

If someone like Ray Kurzweil ends up on a PIP I will call the fabric of the
universe broken.

