
How Apple’s Siri Became One Autistic Boy's B.F.F - k-mcgrady
http://www.nytimes.com/2014/10/19/fashion/how-apples-siri-became-one-autistic-boys-bff.html?smid=tw-nytimes&_r=0
======
cwilson
If you enjoyed this read you should watch the movie "Her" by Spike Jonze, as
mentioned in the article. It hits quite heavily on the same theme (though more
focused on romantic relationships with AI) and offers a pretty realistic
glimpse of what this technology might look like in the next 20 years. Really
enjoyed the movie.

On the same note, I was talking to someone a few days ago who was telling me
about how many of her girlfriends use Tinder to essentially have virtual
boyfriends. She explained that in many cases they meet guys via Tinder, move
to text-messaging, and the relationship never progresses to in-person
meetings. They simply love having someone on the other end of their phones to
talk to. The knowledge that someone is there at almost all times seems to be
comforting and addictive. Kinda crazy, but again this kind of thing is only
going to become more and more common, because technology is in everyone's
pocket from a young age now.

The question I ended the conversation with was something along the lines of,
"Do you think they care that it's an actual human on the other end? Would they
be ok with really convincing AI?"

Food for thought.

------
robert_tweed
This is a great article that highlights some things people have been thinking
about for a while (starting with Isaac Asimov), but that haven't been of much
immediate concern until now: namely, the nature of artificial intelligence and
our relationship with it.

We could consider the emergence of a friendship like this a milestone, a bit
like when chess AI got good enough to beat a typical club player but wasn't
quite ready to beat Kasparov. We're probably only a few years away (OK, maybe
10+, but it's not 50+) from the Kasparov point, where an AI like Siri can pass
the Turing test against any living human.

There are all sorts of questions here, like whether, for an AI to be
considered alive and imbued with the inalienable rights that should come with
sentience, self-awareness is necessary, or whether the anthropomorphic
perception of the humans around it matters more. Should an AI have similar
rights by proxy, as a pet dog does?

One especially important question is what does it mean for an AI to die? If
Siri developed a fault, and fixing that fault would cause a change in
personality so that it was no longer recognisable to Gus as his BFF, should
that act be called "roboticide"? Such questions are particularly relevant when
AI/ML systems (deep recurrent nets, for example) are so complex that we don't
really understand them fully, so we have no way to surgically correct specific
faults; all we can do is revert to an earlier state and re-train. That may be
loss of life, for certain definitions of "life".

As an occasional game developer, I tend to think about these issues in other
contexts too, such as our relationship with virtual characters in games. It's
already very easy to get highly immersed in single-player virtual worlds, like
any of the Elder Scrolls games. Most people would not be fooled into thinking
that an NPC is "alive", but it's certainly possible to develop emotional
reactions to certain characters: we perhaps like them because they say nice
things about us, or dislike them because we find them annoying, etc.

There are two kinds of character interaction a person can have in a game: NPCs
and human avatars. As we start to build virtual worlds (partly spurred by the
Oculus Rift and partly just because of the Internet), this could affect not
only our relationship with NPCs, but our relationships with other humans too.
I don't know whether this will be a net positive or negative, but we're
certainly going to learn a lot about human psychology as we head towards the
point where, in VR, nobody knows you're a human.

~~~
GuiA
When phones' personal assistants (every company seems to want its own these
days) become orders of magnitude more advanced, we may be able to tell them to
"check on our vacation booking" and they will know to search through our
emails and calendars and connect the dots. There will probably be more
services tied deeply into them; for example, you'll be able to say "get me an
Uber" and they will reply with "A blue Civic will be here in 3 minutes", or
"fill up the fridge on Sunday morning" and an Instacart-like company will show
up at your door Sunday morning with eggs and vegetables (but no milk, because
you didn't finish this week's).

However, we won't be able to have those deep meaningful conversations with our
voice assistants, because they won't have the necessary life experience to
follow meaningful conversations. The current products on the market have no
parents, no age, no experiences in school, no previous job, no former lovers,
etc (they will jump around those questions playfully, but the canned answers
get old quite fast). Those trite details that we recount when we bond with
people do not exist in current personal assistants and likely never will.

There are several reasons for this. The first is that modeling a structure of
"life experiences" that can be queried based on what the user is saying is an
incredibly complex problem on which we have pretty much no angle of attack.

 _12-year-old child_ : "I got picked on at school today."

 _Digital personal assistant_ : "You know, it happened to me too when I was
your age. Let's talk to your mom about it."

Or even more complex:

 _24-year-old student_ : "Hey, remember that problem on the Trigonometry 402
final from my senior year that I asked you to solve the equations for back in
college?"

 _Digital assistant_ : "Oh yes! Here it is."

This, happening for millions and millions of possible interactions, life
experiences, contexts? That's the algorithmic equivalent of light-speed
travel. Who knows, maybe we'll get there one day. But 10 years from now? 50
years from now? 100 years from now? Not a chance. Remember that nearly 50
years ago, Minsky thought it'd take a bunch of grad students a summer to write
a program that recognizes objects in pictures.

The second reason that such systems are extremely unlikely to emerge is that
even if they were technically doable, they would be extremely expensive for a
company. And there would be literally no demand for it, because the
overwhelming majority of people don't care about talking to robots. They
already don't have enough time to spend with their children, partners,
friends, parents... why would anyone waste time with a fake person? No company
would invest the billions, if not hundreds of billions, of dollars to solve
this problem in the next few hundred years.

Those questions of sentience, and of whether powering off a digital voice is
"killing" it, are appealing to those of us who grew up reading Isaac Asimov,
because we want this future to exist so badly. But they are red herrings:
those questions have no meaningful answers, because our society is not
configured in a way in which they could actually arise. When you turn your
phone off today, the personal assistant definitely doesn't "die"; and in 50
years, even if it can carry out tasks far more efficiently and give somewhat
more "human-sounding" answers to certain categories of questions, people will
still have no problem turning it off.

~~~
afro88
> The second reason that such systems are extremely unlikely to emerge is that
> even if it would be doable technically, it would be extremely expensive to a
> company. And there would be literally no demand for it, because the
> overwhelming majority of people don't care about talking to robots. They
> already don't have enough time to spend with their children, partners,
> friends, parents... why would anyone waste time with a fake person? No
> company would invest the billions, if not hundreds of billions of dollars,
> to solve this problem in the next few hundred years.

You've missed the point. Why would people watch a TV show and grow to
love/hate its characters, when there's no interaction between them and you,
and all the lines, actions and scenes are scripted and rehearsed? By that
logic, they certainly don't have time for it, since they have children,
partners, friends and parents. Surely they would never binge-watch, schedule
time to watch new episodes come rain, hail or shine, or spend big bucks to
visit filming locations, etc.

Companies will invest stupid amounts of money in AI that has human qualities.
Take the TV show example: the AI is the "show", and each day they get into
entertaining or emotional circumstances that you can "catch up" on, joke
about, etc. Their charisma is highly engineered to be incredibly engaging and
fun to talk to, so you keep coming back for more. Over time they become a
"good friend" that you love talking to. They never rebuff you, snub you, or
have no time for you, and they only make fun of you enough to joke around and
have some banter. But why do they keep telling you how great Pepsi's new
flavours are??

~~~
JetSpiegel
Because the TV show has no bugs, no "I'm sorry, I didn't understand what you
said, please rephrase" in the middle of your rant against whatever, and it
creates the same experience for everyone who watches it, so that people can
discuss it amongst themselves.

~~~
robgough
Slightly OT, but I'd be tempted to argue TV Shows can and do have bugs; plot
holes, bad acting, continuity errors et al. They can be quite jarring.

~~~
JetSpiegel
That's a good point. At least they have fewer bugs than Siri.

------
jhanschoo
This article reminded me of Neal Stephenson's The Diamond Age, where the
protagonist receives a book that became her closest companion and mentor in
her life.

~~~
JetSpiegel
I, for one, accept our teenager overlords.

------
arjie
Wow, what a piece! I am incredibly impressed with the range of responses that
Siri has. I wonder just how many questions they've entered responses to.
Certainly some of it can be learned, but responding to marriage proposals is
certainly written at some level by a human.

~~~
cbhl
Almost certainly they special-case some inputs -- have you ever recited the
Konami Code to Siri? What amazes me is the sheer variety of questions people
ask Siri that s/he can now answer humorously. It wouldn't surprise me if they
watch the number of queries (from actual users) that don't get an answer
and/or fall back to Wolfram Alpha, to see if there are any new ones that need
special treatment.
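One way the monitoring described here could work, as a purely hypothetical
sketch (nothing is known about Apple's actual pipeline; the function and data
shape are made up for illustration): tally the queries that fell through to a
fallback and surface the most frequent ones as candidates for a hand-written
response.

```python
from collections import Counter

def fallback_candidates(query_log, top_n=3):
    """query_log: iterable of (query, was_answered) pairs.

    Returns the most common unanswered queries, which a team could
    review when deciding what deserves a new special-cased response.
    """
    misses = Counter(q.lower().strip()
                     for q, answered in query_log if not answered)
    return [q for q, _ in misses.most_common(top_n)]
```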

That said, most of the information that Siri spits out was "written at some
level by a human", such as the Wikipedia articles s/he quotes (by Wikipedia
authors), or the flight information for planes overhead (by airline
employees).

~~~
saryant
The flight information is powered by Wolfram Alpha which appears to source it
from ADS-B data via (I assume) the FAA, so that actually is entirely
automated.

[http://en.wikipedia.org/wiki/Automatic_dependent_surveillanc...](http://en.wikipedia.org/wiki/Automatic_dependent_surveillance-broadcast)

~~~
pizzeys
I doubt it's from the FAA. ADS-B is trivial to monitor; there are lots of
private individuals/groups grabbing this info.

------
Steko
I think Siri is pretty useful as a conversation partner for foreign-language
practice. I had tried out the British and Scottish voices for kicks and
settled on Aussie for quite a while, and then, while playing with my wife's
Japanese Siri, realized it was actually excellent language practice.

------
JaredPeters
Siri does have a very friendly-sounding voice. I've been using the Siri voice
library (not the AI) in a classroom robot that helps kids with autism engage
with their therapy.

~~~
craigching
Is there any public information on what you're doing? I have a son who is not
necessarily ASD (we don't know exactly what he is dealing with yet), but he
has some similar symptoms to ASD and he _loves_ robots (we have three LEGO
WeDo sets and we regularly build WeDo robots). I would love to hear or read
more on how you're using robots with ASD children!

------
tormeh
Holy shit, we live in the future!

~~~
melling
We're definitely getting close. I wish Apple would take Siri to the next
level. It should be easier to make corrections, for example.

I bought Dragon Dictate for my Mac a couple days ago just so I could try to do
a little more with voice recognition. It'll be great to be able to program
mainly with voice like in this video:
[http://ergoemacs.org/emacs/using_voice_to_code.html](http://ergoemacs.org/emacs/using_voice_to_code.html)

At the moment, I can simply say "open terminal, begin, rebuild, restart, push,
pull, boom" (boom combines pull, rebuild, restart). I'm just using simple
shell aliases but I'll probably add shell functions.
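A minimal sketch of how such shell functions could look. The bodies here are
echo stand-ins for the real steps (git pull, a build, a restart script), and
all names are illustrative, not the commenter's actual setup:

```shell
# Each spoken word maps to a shell function; the dictation tool just
# types the word into the terminal. Stand-in bodies echo what the real
# commands would do.
pull()    { echo "pulling";    }   # stand-in for: git pull
rebuild() { echo "rebuilding"; }   # stand-in for: make
restart() { echo "restarting"; }   # stand-in for: ./restart-server.sh

# "boom" composes the three steps, aborting the chain on any failure.
boom() { pull && rebuild && restart; }

boom
```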

~~~
aaronem
I gather Yosemite brings Siri to OS X, which I expect will be an interesting
evolution, especially once people start figuring out how to add new
capabilities and integrations.

I saw Rudd's demo and have been tantalized by its possibilities ever since.
More than the specific technologies in use, what really fascinated me was the
way he invented what could almost be described as a shorthand language
optimized for efficiently driving the editor via voice, with various unique
(and otherwise meaningless) phoneme combinations to represent things not
easily expressed in ordinary English -- for example, "lep" instead of "left
parenthesis", one syllable in place of five. That's the real win, I think; the
specific dictation pipeline in use hardly matters, so long as it supports the
necessary interfaces and customization capabilities.

Some day I mean to find the time for a really deep dive into the subject,
ideally one from which I'll surface with something I can distribute to others
wishing to drive their editors the same way.

~~~
wyager
>I gather Yosemite brings Siri to OS X,

It's been around for a few versions.

~~~
tree_of_item
Uh, what? Siri is not available on OS X at all.

~~~
wyager
Sorry, I was thinking of Siri's voice-to-text dictation. That's 99% of what I
use Siri for. But yeah, you can't get the other stuff on the Mac yet.

------
aroman
Fascinatingly, I tried asking Siri "Siri, will you marry me?" and her response
was:

"I sure have received a lot of marriage proposals recently!"

For the first time, I was genuinely impressed with Siri's pseudo-
intelligence/wit.

(Can we assume someone on the Siri team at Apple read this article?)

~~~
JetSpiegel
That "marry me" special case was in the news just after Siri launched; it's
not pseudo-intelligence at all, just the wit of Apple employees.

------
amtab
One really interesting idea related to the passing of the Turing test and
formation of relationships between humans and AIs:

Our relationships with other humans have become increasingly digital,
progressing from face-to-face communication to letters, to the telephone, to
text/Facebook/other social messages. Each step in this progression drastically
lowers the bar for AI to start fulfilling people's social needs. We can now
maintain or even establish a relationship with another human solely through
text-based messaging, and I believe that soon we will reach the point where
AIs can get 90% of the way there. I don't know when it will happen, but I
wouldn't bet against it.

~~~
Houshalter
This has been true since forever. In the 1960s, the chatbot ELIZA got people
to talk to it for hours, to the point that its creator was deeply disturbed
and began to advocate against AI.

As time goes on, the AI effect kicks in. People get used to AIs, and it takes
more and more to impress or fool them. Once we get real human-level AI, I
expect people will treat them like they do in bad sci-fi movies (for the
probably short period of time they are our equals): just like people treat
slaves and lower classes in other societies, not something you would form a
relationship with.

But really we are nowhere near that point. Chatbots have gotten really
advanced, but they are still basically chatbots.

------
TulliusCicero
This piece is absolutely adorable.

~~~
rustynails77
As a parent of an autistic child, I found this article very disturbing.

I have spent the last several years working with my child: to engage people
and to establish empathy (among other skills) - empathy with real people. My
child exhibits obsession with topics and we cover this using the Internet and
other resources (eg. books, magazines). However, I have worked hard so that my
child can relate to others and to learn essential skills to cope with life. My
young autistic child has gone from avoiding eye contact, hitting and punching,
and a general "fuck you if I don't like the look of you" attitude to somewhat
synthetic behaviour that's been learned; it now comes across as fairly
natural... but it's been a lot of work. This was achieved by focussing on real
people, both with similar issues and without. It also came through honest and
open communication about autism and autistics vs Neuro-Typicals (NT).

Interestingly, my autistic child used Siri briefly, poked holes in the AI, and
found it severely limiting.

As a parent who has read textbooks, worked in class, spoken at length with
specialists, attended support groups and training programs, and supported my
child's turn-around, which fellow parents/teachers/specialists described as
"amazing" etc., I am horrified at an approach that diverges from constructive
social behaviour. The younger you learn to engage others, the easier it is and
the more readily it will stick.

I'd encourage any parent of an autistic child to research this area thoroughly
before going down the "Siri as BFF" path.

~~~
rwallace
The OP said interacting with Siri, far from causing her son to diverge from
constructive social behavior, helped him learn it - he ended up being better
able to interact with real people. I have no particular reason to believe her
son's experience was atypical. Do you have any reason to believe it was?

------
melling
Btw, I noticed that Yosemite has slightly enhanced its dictation tool for use
with Automator. Might come in handy.

[http://www.macworld.com/article/2834532/ok-mac-using-automat...](http://www.macworld.com/article/2834532/ok-mac-using-automators-dictation-commands-new-in-yosemite.html#tk.rss_all)

------
MBCook
Can anyone get the "Are there any flights above me?" query to work? It just
keeps doing a Bing search, showing pages talking about the fact that Siri can
do it.

Wolfram Alpha can answer the question if I go to the site, so the data is
definitely there.

~~~
LeoPanthera
It used to work, but right now it isn't working, for some reason.

If you start any query with "Wolfram" it will send the rest to Wolfram Alpha
verbatim, but "wolfram flights overhead" isn't working either.

It works directly on WA though:
[http://www.wolframalpha.com/input/?i=flights+overhead](http://www.wolframalpha.com/input/?i=flights+overhead)

------
joering2
This story should be an inspiration for a 21st-century version of "The Little
Prince" by Saint-Exupéry.

~~~
ilyaeck77
The Little Prince was about friendship between creatures that need each other.
That's actually one of the main ideas of the book: we need each other. From my
experience developing Robin (a product not unlike Siri), I can tell you: yes,
some users really do adopt the machine as a pet, to an extreme degree, but I
don't envy these people.

------
ed
I've never used Android's voice recognition – does it have similar scripted
responses?

~~~
mike_hearn
No. The Google voice recognition stuff does not have a personality or try to
be a character. This was an explicit design decision, I believe, made early
on. They are going for "you are asking Google, and the answer is spoken with
the voice of Google". Presumably, people are more tolerant of mistakes the
less human-like a machine tries to be.

------
colinbartlett
This has some relevance to a product I've been working on for a little while:
[https://www.getpuzzlepiece.com](https://www.getpuzzlepiece.com). It's an
Android tablet and apps specifically for kids with autism. (Coming soon to
iOS.)

------
intopieces
I enjoyed this piece, but the middle section about the author and "Should I
call Richard" seemed entirely out of place. It did not match the tone of the
story and did not sound genuine.

------
mcv
There's something in my eye, and it's leaking.

------
_asciiker_
+1 for friendly AI (even with its narrowness)

