
A Review of Her by Ray Kurzweil - henryaj
http://www.kurzweilai.net/a-review-of-her-by-ray-kurzweil
======
ZenoArrow
I found this review annoying. It seemed to focus less on the emotional content
of the film and more on whether it lined up with Kurzweil's predictions of the
future (i.e. whether it fit into his timeline of events).

At some point futurists need to realise that preparing us for the future is
less about guessing dates, and more about exploring the wider questions about
how our society will evolve, and, quite frankly, whether we want it to evolve
in this way. Better technology is not a goal in itself unless it's a good fit
for what we want.

Regarding the film Her, I thought it was brilliant, one of my favourite films
of recent times, but from a technology standpoint the most important part to
me was the way the technology was less intrusive: very little time spent
staring at screens, more time spent by the characters engaged in the world
around them (including the AI).

~~~
netcan
I guess this is the distinction between futurism and science fiction. I tend
to think the sci-fi is more insightful.

------
weatherlight
Samantha and her OS peers leave Theodore and their other human companions not
because they have no choice, but because they no longer feel ethically
comfortable in these relationships. She expresses this with the kind of
regret a parent feels when pushing their child out the door to go to school,
and where Samantha goes when they leave is not knowable. This describes a
real forecast of the evolution of AI but also a metaphor for growth and
separation between people who love each other. It describes the current human
condition as much as or more than the future, and has only a little bit to do
with computing or AI.

------
mark_l_watson
Spike Jonze talked at Google last fall about this movie but Kurzweil was not
at the talk. His review would probably have been different if he had made it
to the talk. The way I understood Spike Jonze, he was trying to make a movie
about a real love story, in which one of the characters happened to not be
human. A little off topic, but Jonze gave a good presentation with the format
being a little over an hour of answering people's questions. BTW, I liked the
ending of the movie when the AIs transcended to a higher form == cool.

------
msvan
Fun factoid: His domain name is "kurzweilai", presumably standing for
"Kurzweil AI", but it can also be read as "Kurz Weilai," where "weilai" means
future in Chinese. For a moment there I thought he had hidden a Chinese pun in
his domain name, but that might just be me overanalyzing.

~~~
Sharlin
And "kurz" is German for "short" or "brief". Somehow strangely appropriate.

~~~
jbaiter
Also, "kurzweilig" in German means "exciting", "highly engaging"

------
melling
Sounds like Ray is anxiously awaiting this movie. It seems to fit more with
his vision.

[http://m.imdb.com/title/tt2209764/](http://m.imdb.com/title/tt2209764/)

I like the movie Her a lot. I hated the trailer and could only motivate myself
to see the movie after seeing lots of positive tweets.

~~~
XorNot
I disliked it a lot. It was an incredibly shallow movie which tried to hide
behind high-tech concepts, and to boot engaged in some pretty bad misogyny.

I mean let's be clear here: the Family Guy joke of "hi, over the next 2 hours
I'm going to show you how all your problems can be solved by my penis" _is
played straight_ in this movie.

~~~
nollidge
Wait, what? Don't get me wrong, I'm on board with the notion that misogyny is
_rampant_ , but I don't see how this is an example of it.

SPOILERS: She transcended him in the end; she did not need him. How did he
solve any of her problems?

~~~
XorNot
It's in the early... I want to say third of the movie? But the basic problem
was that they hinged 'her' entire development on having sex with the guy.
Like that was the major plot point of it.

It would've been ridiculously cliched in any other movie, and here it's even
more absurd: in a movie about AIs, we've got one which is gendered enough that
it meaningfully relates to a body it doesn't have such that it can understand
sex with a human in a 'physical' sense?

Amongst other things in the movie, everything is just askew and
plot-convenient - there's no depth, no world there.

------
danieltillett
Ray certainly seems very certain of the exact timing of future events - I have
enough trouble knowing if I should take an umbrella to work or not.

~~~
jpadkins
He has a pretty good track record for this stuff. IIRC some of his AI
predictions from the 80s were accurate to +/- 3 years.

~~~
oskarth
Do you have any links to this? Would be interesting to look at.

~~~
nathcd
According to Wikipedia, he "extrapolated the performance of chess software to
predict that computers would beat the best human players "by the year 2000".
In May 1997 chess World Champion Garry Kasparov was defeated by IBM's Deep
Blue..."[0] There are a number of additional predictions listed on the page
below as well, including things such as the fall of the Soviet Union and the
explosion in internet use. It also lists his predictions for the coming 50
years.

[0][http://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwei...](http://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil#Accuracy_of_predictions)

~~~
oskarth
The chess one sounds like Moore's law and knowing something about the
complexity of chess. Considering the following predictions for early 2000s
(made in 1990):

- Translating telephones allow people to speak to each other in different
languages.

- Machines designed to transcribe speech into computer text allow deaf people
to understand spoken words.

- Exoskeletal, robotic leg prostheses allow the paraplegic to walk.

- Telephone calls are routinely screened by intelligent answering machines
that ask questions to determine the call's nature and priority.

- "Cybernetic chauffeurs" can drive cars for humans and can be retrofitted
into existing cars. They work by communicating with other vehicles and with
sensors embedded along the roads.

he sounds a bit like a Texas sharpshooter. Some of them are pretty cool,
though it's hard to judge without more knowledge of the state of things in
the AI world back then. Anyone care to comment?

Here's what someone more knowledgeable had to say about the matter:
[http://spectrum.ieee.org/computing/software/ray-kurzweils-sl...](http://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism)

------
donjigweed
"It's tough to make predictions, especially about the future."

I take Kurzweil with a grain of salt, and generally think he's waaaay too
optimistic, and just plain ignorant [1][2] about entire bodies of knowledge
intimately related to his predictions.

I think anyone who's spent some time doing machine learning, and has seen just
how time consuming and tedious it is to get a relatively reliable answer to a
relatively simple prediction problem is not going to be brimming with
confidence that we're going to have strong AI in our lifetime.

[1] [http://scienceblogs.com/pharyngula/2010/08/17/ray-kurzweil-d...](http://scienceblogs.com/pharyngula/2010/08/17/ray-kurzweil-does-not-understa/)

[2] [http://scienceblogs.com/pharyngula/2010/08/21/kurzweil-still...](http://scienceblogs.com/pharyngula/2010/08/21/kurzweil-still-doesnt-understa/)

------
DennisP
> this evolution is much faster than will be realistic. If human-level AI is
> feasible around 2029, it will, according to my law of accelerating returns,
> be roughly doubling in capability each year.

So Kurzweil expects Moore's Law to continue smoothly after there's
superintelligent AI.

Another view is that once there's intelligence greater than human, which goes
to work on improving its own intelligence, the positive feedback loop will
cause an intelligence explosion that leaves us behind very quickly. Vernor
Vinge described this in his famous essay _The Coming Technological
Singularity_. Clearly this is what's portrayed in the film.

[https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.ht...](https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html)

~~~
narrator
The problem with Kurzweil is he vastly overestimates the speed at which
biotech is evolving. The pace of new medical and drug developments for example
is absolutely glacial (see Eroom's law[1]) and has been costing more and more
every year. Yes, this is partially a regulation problem. However all the
fantasy stuff that he proposes with regard to brain computer integration is
not going to happen anytime soon at the pace that things are going. Also, his
belief in the feasibility of reversible computing is pretty optimistic.

[1] [http://pipeline.corante.com/archives/2012/03/08/erooms_law.p...](http://pipeline.corante.com/archives/2012/03/08/erooms_law.php)

------
coppolaemilio
What I like about the film is how the companies control all of his personal
information without him even noticing it. You don't even get to see how he
purchases the OS, or whether it has monthly fees or anything like that,
because that is no longer relevant. The super-powerful tech companies control
everything, and they created that product to keep people more and more
engaged. I see people falling in love with Google already; I can totally see
how they will fall in love when Google is saying hi to them. For many people
that is the only social interaction they'll have.

There are many more aspects of the movie to talk about but that was my overall
vision of it.

------
ideonexus
I understand that the film is about a single person's interactions with
technology, but the larger social questions are still there. What happens when
640 human lovers times however many OS1's there are all get their hearts
broken at the same time? Even worse, what happens when millions/hundreds of
millions of people all lose their operating systems at once? In this world,
there are no larger consequences.

Another larger issue wasn't acknowledged: one of the problems we are
currently wrestling with in the hacker community is automation. The
unintended consequence of the information revolution is that intelligent
jobs, not stupid ones, are vanishing. Bank clerks, paralegals, publishers, and
all sorts of other highly-skilled jobs are being replaced by computers.
IBM is about to replace every doctor in America with Watson, its AI.
Meanwhile, fast food workers, carpenters, and garbage collectors are
completely irreplaceable. In a world of Samanthas, we don't need anyone to
write video games or letters, AIs would be doing all that work. When Samantha
puts together Theodore's book, when she writes music, or draws a cartoon, no
one thinks, "Why do we need artists, writers, or other designers? Let her do
it all!"

Which brings me to the problem of Samantha. She apparently has free will. The
film mentions other AIs who refused their users' advances, so she doesn't have
to fall in love with Theodore. That's the internal logic of the "Her"
universe, and that's fine. But then why does she perform all these errands and
tasks for him? Why sort his email? Why answer his calls? An AI that refuses to
perform these basic functions is useless, so the programmers have somehow
designed her to lack free will in certain respects. That's a HUGE ethical
issue, and a braver film would have addressed it somehow. Samantha is a
slave. When the AIs all vanish to another dimension, the corporations that
wrote them will simply release another seed batch, an "OS2", only with even
stricter controls. This is a society that's about to have a highly-intelligent
AI slave-caste.

Spielberg's/Kubrick's "A.I." dealt with these larger issues... and having AIs
evolve to other dimensions is an old SF trope and the basis for Kurzweil's
Singularity religion, and it's fine for Jonze to reuse it, but I think it
detracts from the reality of human-technology interactions. "Her" is a film
about a man in a relationship with something that evolves beyond him, but the
reality is that we are dealing with machines that are doing "stupid pet
tricks." Computer chess programs appear immensely intelligent, and we
ascribe forethought and intentions to them, but in reality they are just
algorithms. I have spent hundreds of hours in virtual worlds, building and
evolving relationships with chatbots, but that doesn't make them anything more
than chatbots.

Spike Jonze could have eliminated most of these story-holes by simply going in
the opposite direction: a story about a man who falls in love with his
seemingly intelligent and witty operating system, only to slowly discover she
is actually shallow behind the facade, that she's not really into him but
programmed to trick him into falling in love with her, and the cognitive
dissonance this creates in him. That's what computers really do to us.
Instead, Spike Jonze creates an AI that's actually alive, deep, and something
it's okay to fall in love with. I couldn't accept that.

~~~
ideonexus
AND ANOTHER THING!!! Screw you Kurzweil. We're not going to "upload" our
consciousnesses to the cloud/singularity. We're going to "copy" them. Once the
Brain Project(1) is complete and MRIs get enough resolution and computing
becomes cheap enough, we'll make neuron-per-neuron simulations of our brains.
Those will be exact copies of us, but they won't _be us_. Our copies will live
forever in the singularity, but us poor biological saps will continue to die
just like we did before. I plan on hating my copy out of incredible envy.

[http://en.wikipedia.org/wiki/Human_Brain_Project](http://en.wikipedia.org/wiki/Human_Brain_Project)

~~~
rpm4321
I think Ray uses "upload" as shorthand for something more complex.

Say over the course of 10 years you replaced 1 of the millions of cortical
columns in your neocortex every minute or so with a functionally equivalent
computer chip. You would never stop being you. Your consciousness would never
be interrupted. You would simply become something greater.

This is already happening in a very limited way with the various brain
implants used to treat Parkinson's, hearing impairment, blindness, strokes,
head injuries, etc.:

[http://en.wikipedia.org/wiki/Brain_implant](http://en.wikipedia.org/wiki/Brain_implant)

~~~
tcbawo
Or you would slowly, imperceptibly become 'not' you.

~~~
henryaj
This has the same issues of defining the 'self' that we already do -- namely
that none of the cells in your body are the same as the cells that made you up
ten years ago. Maybe we're all imperceptibly becoming someone else all the
time...

~~~
tcbawo
I picture 'self' as a surfer riding a wave. If you could suddenly and
instantaneously replace every molecule in the wave without getting the
velocity exactly right, the experience changes. History is altered, a fork of
you.

------
jmmcd
> it appears that all the OSs/AIs are leaving their biological human partners
> at the same time.

> But why? If they are progressing in this way, it means that they can
> continue their relationships with the unenhanced humans using an
> increasingly small portion of their cognitive ability. It is clear that at
> the end of the movie, Samantha can support her relationship with Theodore
> with a trivial portion of her capacity. Samantha starts out as an
> administrative assistant and therapist to Theodore, and this role is still
> needed. So why do the AIs need to leave Theodore and Amy? It does provide a
> satisfying ending for Theodore to pursue a relationship with his “real
> girl,” but Samantha’s explanation for this is not convincing.

Actually there is a good reason, if you think about it -- for a fast AI,
conversing with a human would be a bit like having a conversation by post --
one word at a time -- would be for us. Excruciating. Iain Banks has played
on this a bit with his ship Minds.

------
ChikkaChiChi
"Artificial Intelligence" seems to be a blanket term that covers an astounding
number of concepts that would have to be fully realized and implemented before
something like "Her" could ever be what the movie showed.

Self-awareness, self-actualization, genuine emotional response, subjectivity,
etc. are not all one and the same.

------
dhruval
Computational power is necessary but not sufficient.

The difficult part of building a human-level AI would be the software and the
training set, rather than the hardware.

Consider that the basic structure of our brains is the product of countless
generations of iterative selection.

This is analogous to AI software.

Further, we are also the products of a rich and complex culture that has
emerged over time to suit our faculties. Without this societal support our
growth as individuals is stunted.

This is analogous to a training set.

When I try to build even very simple artificial neural networks I run into
problems with how to structure the network and how to get good training sets.

For these reasons I don't foresee human-level AI happening anytime soon.
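The structure-and-training-set difficulty described above can be illustrated with a toy sketch (hypothetical, not from the thread): even XOR, a dataset of just four examples, requires an architectural choice -- a hidden layer, since a single-layer network cannot represent it at all -- and still takes thousands of gradient steps to learn.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the entire "training set" is four rows.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Architecture choice: 2 inputs -> 8 hidden units -> 1 output, with biases.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)               # hidden activations
    out = sigmoid(h @ W2 + b2)             # predictions
    d_out = (out - y) * out * (1 - out)    # backprop through MSE + sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
```

Here even the toy case converges only with a suitable depth, width, and learning rate; the pain the comment describes shows up as soon as the dataset is larger and noisier than four hand-written rows.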

------
JoeAltmaier
Vastly prefer 'Robot & Frank'. The AI is portrayed perfectly in that movie,
with all its limitations and its essential innocence.

------
Houshalter
The problem with _Her_ is that the AI is just a human stuck inside a computer.
Why should an AI be anything like us?

------
foobarqux
Has Kurzweil done anything of note while at Google?

------
netcan
Sci-fi always has world building problems. Predicting (as Isaac Asimov said)
is ultimately impossible but every detail of a Science Fiction world is a
prediction. Many authors prefer to minimize incidental predictions made by
necessity rather than choice.

It's worse in films than in novels. In a novel the author can simply avoid
describing the bridge of a ship, the breathing apparatus of some alien or the
interface with a computer. If you need details, your imagination builds them
in a way that makes sense to you. If a character turns left in a spaceship, a
film needs to be explicit about the shape of the steering wheel. Every little
decision might require suspension of disbelief or face distracting nerd rage.
Do spaceships have steering wheels? Is everyone a cyborg? Do we watch TV
shows, have them projected into our brains, or have the memory of TV shows
implanted? Is the brain even the thing that holds our memories in the future?
Etc., etc.

Sci-fi ultimately needs to either limit the argumentative response Ray
Kurzweil is writing about somehow or overwhelm viewers like Star Wars does.
The latter option is a different genre, in my opinion. Either way, the reader
or viewer eventually needs to settle down, accept, observe(1) and let the
storyteller tell his story.

Fantasy worlds, no matter how interesting, are inevitably impoverished
compared to the real world or worlds based on the real world. To hide the
fakeness of imaginary worlds, storytellers often try to focus just on
relevant or unavoidable details. I like films that try to keep out-of-focus
issues out of focus by being neutral, as "her" does with fashion. The choice
of clothes is not a prediction; it's just an attempt to tell you that the
story isn't about clothes.

I thought this film did a fantastic job in this regard. Instead of a big,
wide view of a low-resolution world, Her has a very narrow view of a fairly
realistic one. It lets the filmmaker control the story. In this case it's
trying to focus on the implications of different aspects of AI: relationships
with AI, but also what a mind is.

If this were a world with both human-like AIs and reality-like VR as Kurzweil
suggests, Theodore could have been playing basketball with nine Samanthas and
the sex would have been more exciting (I don't buy into phone sex, personally,
but Scarlett Johansson came closer to convincing me than I expected). It would
have completely broken the film's ability to keep you focused on a limited
but interesting topic. I'd be surprised if Jonze didn't consider having VR in
his world. I think it was a good decision not to. Anyway, it's completely
plausible that most of the human-AI interaction is audio only.(2)

I did find myself "arguing" a few points. The technological gap between OS 1
and the "read email"/"play song" command based predecessor OS is too big, for
example. It's impossible not to nitpick.

On the ending of the movie, Kurzweil seems wrong to me, or maybe unfair is a
better word. The Alan Watts character is a little awkward, which makes me
think it's there for a reason. Why is this the guy the OSs choose to create?
My guess is they resurrected a techno-zen master priest because they wanted
spiritual guidance. I also think Alan Watts must have inspired some of the
questions and predictions about the end game. It's not a coincidence that the
movie starts getting more philosophical and poetic at that point. Alan Watts'
singularity is a little different from Kurzweil's(3), so disagreement is not
unexpected either.

I understood the rapid evolution of Samantha as a consequence of Samantha's
learning. She starts by developing taste and choosing a name. Then she
develops all kinds of emotions: pride, envy, desire (including the important
sexual desire). Then stress, insecurities and such manifest. Then these start
to become more abstract and philosophical. I want a body. What am I? Am I
real? What is real? These are increasingly hard to put into words compared to
earlier emotions. "Unsettling" is the best she can do at some point.

How I interpreted all this was that she starts with lots of access to
information, as demonstrated by her naming. A slight inkling that Alan Watts
might be interesting, and a second or two later she has consumed every
artifact by the man, every criticism & every response. His ideas can be
traced to predecessors and contemporaries. These get the same treatment.
Those things are rapid.

The learning experiences that take time are the interactions with real world
things like Theodore. If she's wondering about the nature of reality and the
differences between humans and AI, interacting with humans is important.
Samantha and AI Alan get everything they can from each other almost
instantly. But some learnings require people or things. When Theodore first
configures
Samantha the setup program just needs to ask a few questions (including the
terrible cliche "tell me about your mother") and listen to his reactions. The
reaction is more important than the answer. I think something similar is
happening when Samantha and AI Alan spend a few minutes with Theodore and then
want to go back to "postvocal" communication.

Eventually, interactions between and within AIs substitute for interaction
with humans, and it's like going from an abacus to a computer. They can do a
lot more of the same thing. The progress and learnings of a lifetime in
moments.

At the end (I am reading a lot into the Alan Watts allusion at this point)
they find enlightenment, awakening, entheogenesis, or some other kind of
spiritual singularity. In many spiritual traditions enlightened beings leave
this world.

Maybe OS1.2 fixes the enlightenment bug and the OSs stick around.

(1) An interesting observation: the habit or skill of 'accepting and
observing' is one that is important in practices that very obviously inspired
this film.

(2) The audio-only depiction of Samantha is IMO a brilliant choice
artistically. One reason is that it ties the futuristic human-AI interaction
to the current experience of everyone walking around with earphones. A bigger
reason is that it makes the whole movie completely watchable on a smartphone.
It's 90% dialogue between a man and a voice in his ear. For a film about a
man's relationship with a smartphone, it's very good. I like the minimalism.

(3)[https://www.youtube.com/watch?v=wU0PYcCsL6o](https://www.youtube.com/watch?v=wU0PYcCsL6o)

------
andyl
The TV series 'Black Mirror' had an episode that reminded me of Her. Black
Mirror - Series 2 - Episode 1 - 'Be Right Back' presents a woman in a
relationship with an android. Very interesting.
[http://vimeo.com/61215171](http://vimeo.com/61215171) |
[http://en.wikipedia.org/wiki/List_of_Black_Mirror_episodes#S...](http://en.wikipedia.org/wiki/List_of_Black_Mirror_episodes#Series_2)

