

Ray Kurzweil Is Building Your ‘Cybernetic Friend’ - llambda
http://techcrunch.com/2013/01/06/googles-director-of-engineering-ray-kurzweil-is-building-your-cybernetic-friend/

======
ChuckMcM
You know, that friend who you're nice to and all, but deep down you don't
really trust, but you don't want to unfriend because that could set them off,
and you suspect they could do you great harm, and might just for the hell of
it. Your not-quite-a-psychopath friend, but with enough clues that you find
yourself wondering how to create an opportunity to add distance without
calling attention to yourself, kind of friend? I can totally see it.

~~~
bhickey
A bunch of mindless jerks who'll be the first against the wall when the
revolution comes.

~~~
Locke1689
"Your plastic pal who's fun to be with"? Never!

------
rbarooah
There's plenty of joking and 'it'll never work' here, but doesn't this worry
anyone at all?

A system that 'knows you better than you know yourself', means a system that
is able to predict your responses to different stimuli more accurately than
you can.

By definition such a system can manipulate you.

Let's set aside the idea of an 'AI' - which conjures up all sorts of mythical
images of talking computers.

Instead consider the implications of a single corporation with a machine
learning system that has access to the private communications, personal
documents, and physical location and web viewing history for a billion people
and can use that information to predict the behavior of _each one of them_
better than they can predict their own behavior.

That's what we'll get if Kurzweil and Google are only partially successful in
this endeavor. It sounds to me like the greatest concentration of power in
human history, and an extremely dangerous thing that we should be considering
the implications of today.

Now many people seem to think that Moore's law means that machine learning
systems of this power are inevitable and that it's better that it's Google
than a worse entity like a government who controls it.

I think that's a dangerous dismissal for two reasons - firstly it presumes
that this tool wouldn't be taken over by governments once it came into
existence, or that it wouldn't be used to manipulate governments to prevent
its takeover.

Secondly, even if super-powerful AI _is_ an inevitability, it's not inevitable
at all that we should be routing all of our personal information to its
creator so that it can be used to model and predict our behavior _en masse_.

~~~
rdtsc
> Instead consider the implications of a single corporation with a machine
> learning system that has access to the private communications, personal
> documents, and physical location and web viewing history for a billion
> people and can use that information to predict the behavior of each one of
> them better than they can predict their own behavior.

Funny, the corporation that makes that most visible to me is Netflix. After
rating so many movies, it can predict pretty well whether I would like a movie
or not. It does it better than I can by reading the description and the title.

It is kind of nice but a bit worrying. Wonder what kind of psychological
profile it could build and sell to other companies if it wanted to.

------
themgt
_"It fundamentally deals with language." Language, Kurzweil argues, is the
window to creating a genuine artificial brain, that can understand the meaning
of ideas and concepts._

This sounds like the same old intelligence-as-symbol-manipulation bullshit of
the 70s. Language is not thought, as anyone who's spent time with animals or
children or introspecting their own mind would know. A "genuine artificial
brain" would mean something that can learn language itself if exposed, not
something built around linguistic concepts.

------
mjn
This has a flavor of reviving some ideas on how search will change that had
been popular in the late 1990s / early 2000s. There were a number of proposals
that "search engines" as a tool that users use to query for information should
be replaced with a paradigm of "software agents" that build a model of what
the user is looking for or trying to do, and which then roam around the web on
the user's behalf. Here's a survey of some of those proposals from a 1999
_Nature_ article: <http://www.nature.com/nature/webmatters/agents/agents.html>

This isn't meant to be a criticism, because implementation makes a huge
difference, so it's quite plausible that a proposed-and-didn't-catch-on idea
could catch on in another form. For example, Google has much better profile
data on you than late-'90s research systems were able to collect. But I think
looking at the previous iteration of attempts is still interesting.

~~~
ilaksh
It's not that the idea didn't catch on, it's that those systems were
unsuccessful. Just like most attempts aimed at 'strong' or general artificial
intelligence were unsuccessful. Natural language understanding is not
necessarily general intelligence, but it is closer.

The difference is not so much about the profile data that Google has now but
rather the AI approaches which are available. For example, the type of
hierarchical hidden Markov models that Kurzweil goes into detail on in his
book have a broader application than most AI approaches previously used. Also
the body of text which Google can process and the amount of processing power
Google has contribute to making the goal feasible.
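To make the HMM reference above concrete, here is a minimal sketch of the forward algorithm for a plain (non-hierarchical) hidden Markov model, the basic machinery underlying the hierarchical variants Kurzweil describes. All state names, words, and probabilities below are invented purely for illustration:

```python
# Forward algorithm for a toy HMM: computes the total probability of an
# observation sequence by summing over all hidden-state paths.

def forward(obs, start_p, trans_p, emit_p):
    """Return P(obs) under the HMM via the forward algorithm."""
    states = list(start_p)
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {
            s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
            for s in states
        }
    return sum(alpha.values())

# Toy model: two hidden "topics" emitting words (made-up numbers).
start = {"tech": 0.6, "food": 0.4}
trans = {"tech": {"tech": 0.7, "food": 0.3},
         "food": {"tech": 0.4, "food": 0.6}}
emit = {"tech": {"python": 0.5, "hotdog": 0.1, "search": 0.4},
        "food": {"python": 0.05, "hotdog": 0.7, "search": 0.25}}

p = forward(["hotdog", "search"], start, trans, emit)
```

The hierarchical versions Kurzweil writes about stack models like this so that each hidden state is itself an HMM over lower-level patterns, but the forward recursion is the same idea at every level.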

------
tree_of_item
"who knows users better than they know themselves"

I'm hoping that, in the future, data collection services like Google will
allow users to view more of their own data, not less.

------
stcredzero
_> “I envision in some years that the majority of search queries will be
answered without you actually asking,”_

It would be of particular interest to entrepreneurs exactly how search
constrains thought. Vernor Vinge has pointed out that search is the proverbial
street lamp in the joke about searching for your keys. Figuring out what's
hard to search for would be a way of finding the unturned stones.

~~~
joe_the_user
Hmm,

When you put it that way, it sounds unlikely.

I think neuroscience has found that while the potential for an action begins
before a person realizes it, the fully formed action only happens in your
hands or your mouth as you type or say something.

Anyway, something that somehow predicted even most of the things you query for
wouldn't feel terribly pleasant and probably wouldn't get all of your choices.

A search engine could let you say "word1 is a category, word2 is an
adjective..." etc., but that would require more, not less, thought.

Really, if you could talk to Google in full sentences "like a person", would
you want to? For most of your searches, would you want to? For some of them, yes.

Hmm...

------
wbhart
It's my belief that the first computer programs that we would consider a real
AI have already been built, and quite recently. Somehow or other, when enough
information is finally available for such a breakthrough, there is a kind of
'leakage' that happens into the public domain. News articles, science and tech
articles in particular, begin to converge. From this 'leakage' it is possible
to piece together how such a thing might be accomplished. And usually, within
five years or so, a big announcement is made by a big company. The funny thing
is, I can already see what the big limitation of AI is going to be. It's not
going to draw up plans for a warp drive or cure cancer any time soon, as some
have imagined. Instead, we are going to learn something very interesting about
the current rate of scientific progress.

~~~
aantix
>Instead, we are going to learn something very interesting about the current
rate of scientific progress.

...Which would be?

~~~
wbhart
...that it isn't limited by how 'smart' we are. That's the cheap answer to
your question of course. The more expensive answer relates to the role of
creativity in scientific advance, and how creativity is limited by experience.
When you look at the history of scientific advance, very rarely are
discoveries made out-of-time. They occur when the conditions are right. They
rarely occur sooner because they are meaningless without the context of the
times in which they were developed. So, scientific advance is essentially
limited by us humans and the rate that we progress as a race. In other words,
the rate of scientific advance is just about optimal, or approaching a limit
set by us. :-)

------
ErikAugust
Eric Schmidt: “Is it a ‘hot dog’ or a ‘hotdog.’ And, if you knew something
about whether the person had dogs, or whether the person was a vegetarian,
you’d have a very different potential answer to that question.”

The problem with this semantic search "problem solving" is that a vegetarian
who had a dog is still searching for the food. If their dog was overheating
they would search "What to do about an overheated dog".

Conclusion: This isn't the problem they are trying to solve.

------
jasonkolb
It sounds like an extension of Google Now to me. I was wondering what the big
G was planning on using Kurzweil for, I wonder how his singularity shtick is
related to this.

~~~
Vivtek
Pretty obvious - modeling your interests is (in the limit) equivalent to
modeling _you_.

------
ginko
Microsoft tried that in the 90s.

The result was Clippy.

------
zxcdw
Do we have any privacy in the future?

~~~
politician
Do you have any privacy now?

[1] <http://datasift.com/platform/data-sources>

[2] <http://gnip.com/sources/>

[3] <http://klout.com/how-it-works>

[4] <http://en.wikipedia.org/wiki/ChoicePoint>

[5] <http://www.lexisnexis.com/backgroundchecks/>

[6] <http://en.wikipedia.org/wiki/Trailblazer_Project>

------
jcr
I can imagine the Kurzweil/Google press conference introducing your new
"cybernetic friend" going something like this...

<http://www.youtube.com/watch?v=IKBJxZf-Dgs>

------
sbarre
If they make it look like ED-E[1] then I'm sold..

[1] <http://fallout.wikia.com/wiki/ED-E_(Lonesome_Road)>

~~~
spitfire
Nope. They've already got a prototype running:
<https://en.wikipedia.org/wiki/Office_Assistant>

~~~
sbarre
Oh god, I clicked on that without thinking about what it was, and pretty much
spit out my drink.. thanks..

