
Google Grapples with ‘Horrifying’ Reaction to Uncanny AI Tech - cohaagen
https://www.bloomberg.com/news/articles/2018-05-10/google-grapples-with-horrifying-reaction-to-uncanny-ai-tech
======
atonse
I think there are two complementary things at play:

1) You can be very impressed with the tech (as everyone is)

2) You can be creeped out by how it could be abused.

BOTH things can be true. And that, I think, demonstrates a continued blind
spot, especially at companies like Google, which simply don't seem to realize
when and why things may come across as creepy to people. They keep doing this
almost every year. They're so focused on how cool the tech is that they seem
not to realize people might find it totally creepy. And this seems to be deep
in the company's DNA (same with FB).

Did they do a focus group of people outside the valley bubble? I doubt it.
Maybe a focus group of receptionists? What do they think of it?

~~~
ordinaryradical
I think I'm both, but I will tell you, the idea of having _my own_ automated
assistant deal with a company's automated assistant when trying to get through
a phone directory to a live human sounds AMAZING. Like, I have zero problems
with the ethics there.

~~~
throw7
You beat me to it. I want an assistant to deal with robocalls and phone trees
for me. I don't want an assistant to annoy people.

~~~
daveFNbuck
Unless you can perfectly predict which calls are robocalls, assistant will end
up talking to real people. If people know they're talking to a robot, there's
a good chance they'll be annoyed.

~~~
Ajedi32
Why would I be annoyed at talking to a smart answering machine? Seems at the
very least no worse than hearing a prerecorded message and leaving a
voicemail, provided the conversation goes as naturally as it did in the demos
Google showed on stage.

Assistant: "Hello, you've reached Dave's personal assistant. Dave's not
available right now, can I take a message?"

Bob: "No, just tell him to call me back as soon as he can."

Assistant: "Okay. Can I ask who's calling?" (Bob was not in Dave's contacts
list.)

Bob: "It's Bob Jones."

Assistant: "Alright, I'll let him know."

Bob: "Okay, bye."

Perfectly natural conversation, and provides a much nicer user experience for
Dave when he checks his missed calls. (Notification: "Call from: 444-444-4444
Bob Jones says 'tell him to call me back as soon as he can'")
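
A minimal sketch of the data structure such a call could be reduced to; the
shape and names here are hypothetical, not anything Google has described:

    # Hypothetical shape of the missed-call summary from the dialogue above.
    from dataclasses import dataclass

    @dataclass
    class MissedCallSummary:
        number: str       # caller ID
        caller_name: str  # extracted from "It's Bob Jones."
        message: str      # extracted from the caller's request

        def notification(self) -> str:
            """Render the summary the way the notification above reads."""
            return (f"Call from: {self.number} "
                    f"{self.caller_name} says '{self.message}'")

    s = MissedCallSummary("444-444-4444", "Bob Jones",
                          "tell him to call me back as soon as he can")
    print(s.notification())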

~~~
ForHackernews
That's perfectly fine as long as it identifies itself as a robot: "Hi, this is
Dave's automated assistant..."

A big part of what creeps people out is the notion of bots impersonating
humans.

~~~
Ajedi32
What's so creepy about that? If I can get the same quality of service talking
to a bot as I can to a person, why should I care what it is I'm talking to?

Unless you mean impersonating a _specific_ human (e.g. if my personal
assistant were pretending to be me) I don't really see the issue.

~~~
Balgair
I mean, _all_ humans are specific humans. How could a bot not impersonate a
_specific_ human? It would be impersonating your assistant, who may not exist
in the first place, but the bot is trying to get others to think it is a real
person with real feelings. That duplicity is the problem. Like, other people
are real people too, not just sheeple (terrible term that it is).

~~~
Ajedi32
Why does it matter if I mistakenly think the answering machine bot is a real
person? Assuming I can treat it like a real person and still get the results I
want, why should I care one way or the other?

~~~
Balgair
Because it is not a person. It's a machine. There is no shared experience, it
doesn't have experiences, it's sand and metal.

Like, a really smart parrot is not a person, right? Even if the parrot could
work as an answering machine, it would not be owed respect and it would not
respect you back, because it's a parrot.

I think the issue is that the answer-bot is inherently disrespectful to the
real human. Just using one at all is like saying to the real person: "You are
not worthy of the time and/or respect of a real human; talk to this clever
metal parrot." Text devices and digital sign-ups are at least 'equal', though
the transaction is sterile.

Hiding from a real person that they're talking to a bot is disrespectful of
that person and their time; it's trying to pull one over on them, fooling
them into thinking that the bot's owner is worthy but that the other real
person is not.

Just use a sign-up service. If the company/person does not have one, they are
signaling that you have to actually talk to them, and you should respect that
decision because they are a real person too, not try to pull a fast one.

~~~
Ajedi32
Why do I need to have "shared experiences" with the person I'm ordering pizza
from? I'm trying to order pizza, not get a date. It's not about "worthiness"
or anything as elitist as that either; I just want to buy a pizza.

> Just use a sign-up service. If the company/person does not have one, they
> are signaling that you have to actually talk to them

Or that they can't afford to build an online ordering service? I seriously
doubt the guy working the counter at my local pizza restaurant has any
particular desire to speak to me personally.

~~~
Balgair
Either way, forcing a person to talk to a robot without their 'consent' is,
well, really rude. Many people hate phone trees as it is; forcing people to
deal with them even more is not making the world a better place, it's making
one person's world a better place at the expense of others.

~~~
daveFNbuck
Forcing a person to talk to a person without their consent is really rude too.
What makes talking to a robot that's indistinguishable from a human worse than
talking to a human?

------
uremog
I, for one, thought it was amazing how little of the uncanny valley I felt.

There are certainly things to consider, like how I wouldn't want to be robo-
called by scam bots that I can't tell are bots, and neither would anyone else.
A lot of businesses already have an out built for them, though. Restaurants in
particular have services like OpenTable and Nowait that relieve the pressure
of users wanting to use a realistic robo-call.

As an "Average Joe", my biggest concern is how this will push phone
communications even (edit: more) toward where Facebook privacy settings are
now - registered friends only - and how inconvenient that will be. I already
let any numbers I don't have registered go to voicemail, and I don't like
having to do that.

~~~
scrollaway
> _As an "Average Joe", my biggest concern is how this will push phone
> communications even more toward where Facebook privacy settings are now -
> registered friends only_

Yeah, I mean, that's essentially where we are now... I don't give out my phone
number to anyone except my friends and a couple of companies I trust not to
abuse it. I reject unknown numbers. Many people are like me.

Phone calls aren't convenient at all. Expensive, atrocious voice quality,
hard-to-memorize, easy-to-typo numbers that sometimes have to change and are
bound to the country (and sometimes to the provider), no way to identify
callers unless they're already on your contact list... Should I go on?

Just use email if you want to contact me at random. I use Discord for voice
calls, almost exclusively.

------
gfodor
For a field like AI, it's _extremely_ difficult to decide if knee-jerk
aversion to specific advancements is something worth taking seriously or not.
For a field where the entire _point_ is to replace humans in certain
functions, it's inevitable that there will be leaps forward that people are
extremely uncomfortable with in the short term.

This may be an example of something where things need to be altered, or it may
not. But in either case it seems short-sighted to think this question can be
resolved within 24 hours of a demo, on the strength of a few hastily written
tweets and thrown-together articles. What's important, as AI continues to
encroach upon human capabilities, is that there be a framework for navigating
these transitions and knowing where the lines are. Hopefully, as this kind of
thing becomes more common, public opinion will shift from the creepy-demo-of-
the-week to the wider question of how we grapple as a society with these kinds
of advancements.

Emotional reactions and ad hoc decision-making based on mob rule are going to
result in a random, arbitrary outcome which will almost certainly be
suboptimal.

~~~
salawat
Sometimes suboptimal is a lot less frightening and destructive than optimal.

Dictatorships and despotism are the optimal solution for wielding executive
power.

Murder is the optimal means through which to immediately resolve problems
created by other people from a time investment point of view.

Rape is an optimal solution to spreading genetic material.

See the pattern? Knee-jerk reactions from laypeople AREN'T necessarily bad. If
anything, they should be the first sign that you, the researcher, need to back
up from that awesome thing you just did, step off the razor-thin beam of
relevance you walked to achieve your breakthrough, and begin the HARD part of
research: turning an interesting academic factoid into a societal good.

When we do research, we are free to ignore huge swathes of the world in order
to reach our findings. The reckoning, however, comes when we have to ask
ourselves 'how prepared is the world for this?'

The story of Doctor Frankenstein isn't just some scary story for children. It
should haunt a researcher night and day. What am I enabling? How will society
take this? It shouldn't stop you, by any means, but it will keep you grounded
and aware that your research isn't happening in a void.

Remember, all those bits and pieces you are putting out of your mind are the
bread and butter of every layperson's day. Being a researcher doesn't absolve
you of your status as a member of society.

It is YOUR responsibility to take your idea from the theoretical to the
practical. The rest of society is busy staying intact enough to support your
research efforts.

~~~
gfodor
I'm not sure who you're arguing with here?

My point was that a suboptimal solution is one where there was no careful,
methodical thought given to the design of AI systems. If AI designers and
researchers blindly react to public outcry within 24 hours of a demo (which,
in general, is something I would expect Google to do), the kind of thinking
you mention above is just as unlikely to happen as if they trudged on forward
without any consideration of these things at all. In both cases, this is a
suboptimal outcome for society.

In one, we get a fairly random, undesigned world of AI systems that don't
serve anyone well and are generally underutilized because nobody is willing to
push the boundaries. In the other, we get dystopian AI hell. It's important
that researchers be given the space to think that you describe. The right way
to allow that is to foster a public that is informed and open-minded about the
emergence of AI and the decisions we need to make about it as a society, and
not to have prominent voices writing up knee-jerk reactions and making
authoritative demands about a specific public demo after just a day. (See Max
Tegmark's book Life 3.0 for the kind of stuff we need more of being put into
the world, imho.)

~~~
salawat
It is this part I"m referring to:

>Emotional reactions and ad-hoc decisionmaking based on mob rule is going to
result in a random, arbitrary outcome which will almost certainly be
suboptimal.

I may have initially misinterpreted some context when I read your comment
before, though my conclusions haven't changed much.

I agree with you that AI research must be methodically and transparently
approached.

The part I was trying to speak to, however, is that AI researchers/designers
are necessarily going to be constrained by how much faith and goodwill a
(currently) troubled society, plagued with inequality and hardship in the face
of increasing job losses to automation, can extend to them. This is a feature,
not a bug to be dismissed.

AI is akin to nuclear technology in that it is perceived as a huge game
changer, one that brings with it a huge potential for societal change and
upheaval.

The AI designer/researcher DOESN'T necessarily see it that way: to them it is
just a tool to do a thing. The populace, however, doesn't grasp the limits the
way the researcher/designer does. They are being asked to give the same degree
of faith to the AI field that nuclear physics was given in the last century.

Combine that with a severe distrust in the motives of the organizations doing
the actual development and you have the stage set for large-scale rejection of
AI research.

You said it yourself before. AI is a field that is trying to replace humans
(having to do stuff). That is not a viewpoint that will win you the favor of
the populace. No one will get further than the first half of the sentence
before the hackles go up.

Nobody trusts a Utopia where people don't have to work because machines do it
for them. Nor will they trust the men behind the machines. It's sad, but it is
the kind of thinking that has kept humanity OUT of complete dystopian
hellholes thus far.

I don't know the fix, but I do know that if we can't solve some of the very
fundamental social problems that advances in artificial intelligence and
machine learning are poised to make worse, the field of study will find it
more and more difficult to win public acceptance. The only inkling of a clue I
have is that it may have to do with the "industrial asymmetry" of our current
times.

If you don't take heed of the state of society, you may end up in the same
place NASA was for the latter half of the 20th century: BEAUTIFUL theoretical
groundwork laid, but hamstrung in execution by popular sentiment. (See some
documentaries to learn how "thrilled" many were with the Space Race. NASA
still hasn't recovered from the backlash, despite some of the best science
I've ever seen.)

------
cabaalis
I don't think it's at all problematic for these devices to state "Hello, I am
an automated assistant calling on behalf of $(name). I'd like to book a
reservation for...."

~~~
apendleton
I think Google's concern would be that it would generate annoyance that might
not be warranted. Like, if I call a customer service line and get an IVR
system, even if the system is perfectly capable of helping me, I just hate IVR
systems (justifiably or not) and start yelling at it to give me a
representative until it does. If the IVR system were so compelling that it
just pretended to be a human and I couldn't tell the difference, presumably
I'd be just as satisfied as if I had gotten an actual human, and could skip
the annoyance.

Similarly, I think if this thing can handle the call as well as a human, it's
to its advantage, in terms of not frustrating the person on the other end of
the phone, not to self-identify, so that they perceive it as just like any
other appointment call (because it is) rather than triggering an "ugh, damn
it, another Google robocall" reaction that primes people to feel like they
have to speak to it a certain way to get it to behave, even if that
demonstrably isn't actually necessary, and generates ill will towards the
company.

On balance, maybe they should self-identify anyway -- "if this thing can
handle the call as well as a human" is a pretty big "if" after all -- but I
can see why they'd want not to.

~~~
gowld
If someone doesn't want to talk to you (or your bot), why not just leave them
alone? It's not their job to be your guinea pig.

Was it right to not tell the victims of the Tuskegee syphilis experiment what
was happening to them, because they would have refused and blocked the
progress of science?

~~~
nyolfen
taking restaurant reservations from a computer and getting infected by
syphilis are definitely the same thing

> If someone doesn't want to talk to you (or your bot), why not just leave
> them alone? It's not their job to be your guinea pig.

it _is_ their job to take reservations over the phone

~~~
ForHackernews
> it is their job to take reservations over the phone

Yeah, so presumably their bosses will instruct them to take reservations from
automated agents, and grandparent comment's fears that people will hang up on
automated agents that identify themselves as such will prove unfounded.

------
herodotus
The technology is impressive. The demonstration choice was disingenuous. It
seems far more likely to me that the early adopters of this technology will be
companies able to replace phone receptionists with automation - already widely
done by large companies, but now, thanks to this technology, available to
doctors' offices, restaurants, hair salons, and many medium-sized businesses.

~~~
aurailious
Duplex will definitely be used in the opposite direction from the one
demonstrated. It's acceptable to talk to a robot over the phone, and improving
that process is very valuable.

But having a robot speak to the business? That might be a bit much. Then
again, if this version is only for appointments, reservations, etc., then it
might be seen as acceptable in much the same way.

It's mostly the hesitation about when this starts going beyond that. Do you
now use Google assistant to talk to friends/coworkers? "Ok, Google, ask
$friend if he wants to see a movie this weekend."

~~~
uremog
I hope it never gets to, "Ok Google, run Microsoft Support Scam 3 on Victim
List 5".

~~~
fixermark
One of the advantages of Google hosting this (as opposed to, say, this
technology being an "out in the wild" agglomeration of voice-synthesis and
machine-learning open-source solutions baling-wired together) is that there's
one central authority that can be held accountable.

You're not going to see the described scenario because there's nothing in it
for Google: they have full control over both the Google Assistant ecosystem
and the set of technologies powering this machine-learning interaction, and
they can gate-keep it as much as they want to.

That having been said... if one company can do it, we're only N years out from
the tech becoming available "in the wild" as a baling-wired selection of
open-source tools (probably in a Docker image somewhere ;) ).

~~~
uremog
Even if not another company: people develop actions for Google Assistant, so I
don't think it's that wild to think that third-party Duplex behaviors could be
something Google is considering.

------
sqdbps
What about the AI advancements in early disease screening and treatment that
Sundar highlighted at the beginning of the keynote? Why not mention those as a
counterbalance to quoting professional critics, and at least touch on the
undisputed benefits of this stuff?

What's truly “horrifying” is the somewhat prevalent phobic attitude towards
technological innovation and invention, be that industrial robots ‘taking all
the jobs’ or this reaction to the Duplex demo.

Our current technological achievements were largely made possible with help
from the media and our prominent thinkers in cheerleading the advancements and
shaping the public's opinion.

Now it's the critical voices that get affirmed and amplified.

~~~
Invictus0
I think Hacker News suffers from being too close to tech to really get a
normal person's perspective. These advances are technologically marvelous,
sure, but they are ethically bankrupt. At best, they make human interaction
worse; at worst, they enable robocallers and standardize thought. Tech
optimism has become toxic because SV fails to realize how each cool new
feature can be bastardized and abused.

~~~
nightski
Not only are they ethically bankrupt, it feels like the sole goal is to
eliminate all human interaction and, more importantly, to solve problems that
do not exist.

For example: I do not mind making a 60-second phone call to make a reservation
vs. spending 30 seconds figuring out how to tell Google what to do.

In this case Google is solving problems for rich, socially awkward nerds, not
the majority of people.

~~~
dyoo1979
How many of you have immigrant parents for whom English is a second language,
and who would have great difficulty performing the kind of verbal gymnastics
you take for granted?

~~~
Invictus0
My mother is an immigrant and speaks four languages, and I am learning my
third. This is a ridiculous argument; if this is so important to you, advocate
for Google Translate integration into email.

------
jacksmith21006
Horrifying reaction? A bit of hyperbole.

~~~
zamalek
> Silicon Valley is ethically lost, rudderless and has not learned a thing.

I can understand how people could be creeped out, but _ethically lost?_ Is
"ethics" now just a buzzword?

Personally I wouldn't mind getting a call from Google. Google's voice is
absurdly pleasant and friendly, which is likely a completely different
experience to what someone on the receiving end of "cancel my Comcast"
typically experiences.

~~~
throw2016
The problem is that Silicon Valley folks talk about ethics in a self-serving
way. They expect ethics from others but are not willing to give it back.

Whether it's stalking people and abusing personal data, or now dehumanizing
people, enabling bad actors, and making people suspicious of callers, it's
always someone else's fault for not understanding 'innovation', never theirs
for not considering the ethical implications. This is a tone-deaf, play-dumb
approach to ethics for profit.

~~~
jacksmith21006
No. That is a narrative from the alt-right. SV folks have ethics similar to,
if not better than, the average person's.

I would say engineers generally tend to be more ethical and rule-following.
They are built that way.

I am not sure what is happening, but I suspect part of the problem is that
some people are scared of the future, tie SV to the future, and take it out on
them. That's where the narrative comes from.

The other possibility is that they need a bogeyman with the Clintons gone, and
are using SV as the replacement.

The last is more conspiracy-oriented. The biggest strength of the US, hands
down, is SV. The innovation coming out of SV is just unbelievable.

It could be that Russia and other countries are trying to get us to fight
among ourselves to hurt SV for self-serving purposes.

Whoever gets AI first will have an unmatched advantage. Google is far ahead of
everyone, and that could be driving the alt-right narrative about Google,
using the Damore firing as the rallying event.

It really scares me how easy it is to manipulate some people in the US. Fear
of the future is used to stir some of them up, kind of like how Trump and
others whip people into a frenzy over people who look different: that they are
coming to rape our women.

That's all I've got; I'm still trying to figure it out. I have engaged with a
number of people on Reddit and have been surprised how many times someone
negative on Google turns out to be Russian.

------
s2g
> Giving it an obviously robotic voice when it calls. "People will probably
> hang up," he said.

Doesn't that really tell us all we need to know? If the only way to get people
to tolerate your service is to try to trick them, then is your service a good
thing?

~~~
emerged
Deception is key to profitable ads and news as well. We just keep diving
deeper into the murk and find ourselves more and more confused about what's
going on around us and who and what to trust. It's sad, and technology has
continued to accelerate it.

------
icc97
The only automated voice calls I get are from spammers.

I can't see many real people wanting to automate calling the hair salon,
because then you're being fairly pretentious, as if you're too busy to pick up
the phone.

But I can see a massive incentive for scammers to stop having to employ cheap
Indian labour, now that they have an almost infinite ability to make human-
like calls to non-tech people in an American accent.

So be prepared for Google assistants calling to tell you there's a problem
with your Windows PC and they just need to install something on it to fix it.

~~~
fixermark
> I can't see many real people wanting to automate calling the hair salon,
> because then you're being fairly pretentious, as if you're too busy to pick
> up the phone.

Just a short list:

- the deaf

- the mute

- the very socially awkward

In the last category: I can personally attest that I've put off scheduling an
appointment with someone I've done business with in the past for two days,
purely because the thought of actually picking up the phone and going through
the process of aligning our calendars makes the hair stand up on the back of
my neck. If I can pass that task to an assistant, my life would be _much_
easier.

~~~
icc97
The deaf and mute are good counterexamples. I know Google quoted 60% of
businesses as not having websites, but that figure should only ever get
smaller. Such an impersonal interaction works perfectly through the web.

I'm not a fan of calling people either (I've put off finding a local doctor
for a year because I don't want to go through the awkwardness), but starting a
relationship with an important client using a not-quite-real 'assistant' would
start things off on the wrong foot. Certainly if someone calls me using this,
I'll think they can't even be bothered to speak to me personally and so I'm
not worth their time.

~~~
fixermark
I'd imagine the level of personal touch that matters varies from industry to
industry. I don't think a restaurant much cares whether a reservation is made
by the reservation holder, their human assistant, or their robot assistant, as
long as the customer shows up on time, pays, doesn't wreck the place, and
leaves a nice tip.

~~~
icc97
I was replying to this:

> scheduling an appointment with someone I've done business with

So it's not just booking at a restaurant; this is someone you will have an
ongoing personal relationship with.

------
dkonofalski
What I think is really interesting here is that all these questions
surrounding this are focused on the human-AI interaction and whether the human
should know that they're talking to an AI when, in reality, what Google is
after is completely removing the "human" aspect of that relationship. Google
is betting that, in the future, we won't need people at all to do these kinds
of things. In fact, they would love it if every booking for every appointment
was
done seamlessly for the user without ever interacting with people. This Duplex
technology is just a stop-gap to fill in that interim period where we're still
using people for tasks that don't require specialization, are very mundane,
and end in a very obvious success condition. All you need to make an
appointment somewhere is a time/date and a confirmation that the appointment
was made. Instead of an API, we currently have a "human" API that handles
that. I guarantee that Google is banking on Duplex eventually bridging that
gap so that, eventually, I can make an appointment _anywhere_ simply by asking
my personal assistant to do so and the "human" is completely unnecessary.
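
To make the "human API" point concrete, here is a minimal sketch of the
structured booking interface such a future implies. All names, fields, and the
always-confirm stub are hypothetical; no real Google or business endpoint is
implied:

    # Hypothetical sketch: the appointment "API" reduced to plain data.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class AppointmentRequest:
        business_id: str    # who we're booking with
        service: str        # e.g. "haircut", "table for two"
        when: datetime      # requested date/time
        customer_name: str

    @dataclass
    class AppointmentResponse:
        confirmed: bool
        confirmation_id: Optional[str] = None
        counter_offer: Optional[datetime] = None  # suggested other slot

    def book(req: AppointmentRequest) -> AppointmentResponse:
        """Stand-in for the transaction itself: today a human (or Duplex)
        fills this role over the phone; a direct endpoint would make the
        voice conversation unnecessary."""
        return AppointmentResponse(confirmed=True, confirmation_id="demo-001")

    resp = book(AppointmentRequest("salon-42", "haircut",
                                   datetime(2018, 5, 14, 10, 0), "Dave"))
    print(resp.confirmed, resp.confirmation_id)  # True demo-001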

These are the questions we should be looking at, in my opinion. This situation
is only temporary. We need to focus on the future (which is most certainly
coming) and on what we're going to do with our societies when even these
simple jobs that we thought required a human touch are replaced by automation
and AI.

------
sjbase
To me, the hidden crux of this whole issue is: can this system be as effective
to deal with as a human who is good at their job?

If that condition is met or exceeded, none of this voice stuff will matter.
People won't care whether the system discloses its nature, or sounds creepy,
if their airline reservation change (or whatever) goes abnormally well.

If that condition is not met, which IMO it won't be in the near future, then
we're just adding attempted manipulation and potential for abuse on top of an
already bad UX.

------
nlawalker
Ha, it's because no one at Google pictured themselves as the one on the other
end of the phone taking the orders.

Because phoning in for restaurant reservations is only the beginning. Ask
yourself now - in a few years, will you be one of the people _giving_ orders
to a digital assistant, or one of the (many more) _receiving_ them?

"The future is already here - it's just not very evenly distributed."

~~~
gowld
What's horrible about me, as a business owner, talking to a logically
organized, clear-speaking bot instead of a confused, mumbling human?

------
s2g
> He compared it to the electric guitar, a technology that helped humans
> express themselves in new ways and is considered a positive advance.

An electric guitar doesn't play for you. It doesn't learn to play for you. It
doesn't learn how you play and make your playing better.

It just provides new ways to make sound.

------
blackstack
The criticisms all seem to be fear-based: that somehow not knowing you're
talking to a machine makes you vulnerable. Which is ridiculous. If I pretended
to be the opposite sex while on the phone, that wouldn't mean I have bad
intentions. Likewise, having computers that sound human isn't nefarious.
Wanting them to sound robotic reveals more about the detractors' insecurities
and fears about how we should control our machines than anything else.

------
xtracerx
I/O is a developer conference; the whole critique about unveiling this before
they know everything about how to market the product, or before it's complete,
is tone-deaf.

------
carapace
> Should AI software that’s smart enough to trick humans be forced to disclose
> itself?

Yes, of course. I would even say it should be legal _fraud_ not to, with
severe consequences.

~~~
Ajedi32
Why?

~~~
carapace
That's an extraordinarily open-ended question. If you'll be more precise, I'll
attempt a good-faith answer.

Cheers

~~~
Ajedi32
Why should it be legal fraud with severe consequences for an AI caller to not
disclose the fact that it is an AI?

You implied the reasoning behind your statement was obvious ("of course") but
it's not at all obvious to me.

~~~
carapace
> You implied the reasoning behind your statement was obvious ("of course")
> but it's not at all obvious to me.

No. The affirmative answer to the original question as posed is what seems to
me to be obvious. My additional statement was meant to convey the seriousness
with which I feel these matters should be taken.

> Why should it be legal fraud with severe consequences for an AI caller to
> not disclose the fact that it is an AI?

Otherwise people (specifically the technologists implementing and selling
these kinds of trickster services) will not take other people's concerns
(specifically mine, and those of anyone who feels as I do) seriously, and will
proceed apace.

There is a similar issue around folks who see CRISPR as a way to work around
laws that attempt to control GMOs. I have seen a few articles written by pro-
GMO folks just crowing about how CRISPR-edited organisms can be sold without
GMO labels due to loopholes in the law. These folks have their heads so far up
their belief systems that they don't even see the problem.

~~~
Ajedi32
Again, I don't see why that's obvious. Why is it necessary that humans be made
aware up front that they're talking to a bot? (So necessary that you think not
disclosing that information actually needs to be made illegal.)

~~~
carapace
Okay, first, let's keep clear exactly what we're talking about here:

A machine that attempts to imitate a human well enough to trick people into
performing a specific action, an action that involves commitments of time and
money.

I have a specific and a general objection to this. The specific objection is
that this thing _tries to trick people_. It sets up a "frame" that is
intrinsically deceptive. That's its goal.

In another comment s2g quoted:

> Giving it an obviously robotic voice when it calls. "People will probably
> hang up," he said.

It is designed to trick people. That's bad. When you do it in business, we
call it "fraud".

My general issue with this is that it normalizes the blurring of the line
between real human people and ersatz Artificial Intelligence "people". This is
something that's going to be a serious problem on a deep level. There's
evidence of this sort of conceptual melting right here in this thread. How
long before folks are trying to marry their toaster?

Last but not least, there's the meta-problem: the disconnect between the
techno-elite innovators and the masses.

Think about this: as a businessman, when the phone rings and there's a robot
there _trying to give me money_, why would I hang up? The robot only called me
because someone wants to make a sale (from my POV), so that's the "opposite of
a problem". This is the meta-problem: the technologists are so effin' clueless
yet default to starry-eyed optimism.

"We are as gods, we might as well get good at it."

Conversely, save me from the folly of young gods.

~~~
Ajedi32
IMO behaving like a normal human isn't "tricking people". Voice assistants
need to behave like a human to enable natural interaction, and the fact that
the AI caller isn't human is irrelevant information in most contexts.

It'd be like if I called a business and used a voice changer during the call
to make me sound like I'm a different gender than I am. Would that be fraud?
No, because my gender is irrelevant information in the context of that call.

Now maybe it might be a good idea to program the AI to be upfront about its
nature just as a matter of courtesy, but I see no reason why its _not_ doing
so would be so harmful that it needs to be made illegal.

~~~
carapace
> IMO behaving like a normal human isn't "tricking people".

It is tricking people if it is done to trick people.

It is not tricking people if it is done while the person to whom it is done
_knows_ the thing they are talking to is not a human.

The question was not "should machines be allowed to act like humans?"

It was "should they be required to disclose that they are not human?"

You seem to think that I object to machines acting human, I don't.

> Voice assistants need to behave like a human to enable natural interaction

Would you say the computers in the "Star Trek" shows "behave like a human"?

> the fact that the AI caller isn't human is irrelevant information in most
> contexts.

How will it know if, in a given context, disclosure is relevant? One of my
first questions about this Duplex service is, "Can I opt out of receiving
these calls?"

> It'd be like if I called a business and used a voice changer during the call
> to make me sound like I'm a different gender than I am. Would that be fraud?
> No, because my gender is irrelevant information in the context of that call.

I don't think it's like that at all. Are you impersonating someone else?
Gender-bending for kicks? Trying to get better service because you think the
person you're calling is sexist or something? Are you The Jerky Boys?

People do weird shit and that's something we all have to deal with because
gravity.

Machines calling people and saying "uh" to pretend to be human, without the
human people realizing it, is trickery.

_And it's not even needed to make the Duplex service work_ because, as I
pointed out before, when a robot calls you to give you money you don't hang up
on it! Okay?

> Now maybe it might be a good idea to program the AI to be upfront about its
> nature just as a matter of courtesy, but I see no reason why its _not_ doing
> so would be so harmful that it needs to be made illegal.

And now you have made my point for me: if it's not made illegal, people like
you won't take these concerns seriously.

It's like the guy who owns my favorite cafe. He had a bit of plastic wrap
covering his brown sugar, and there were little flies sometimes, and it was
just a little gross. One day I came in and he had put a tight-fitting plastic
lid on the sugar. I mentioned it to him and he started complaining about the
health inspector making him do it. All I could think was: buddy, _you're_ the
reason we have to hire and pay for health inspectors, right there.

AIs that can impersonate humans well enough to trick them into thinking
they're also human are pretty much a munition. Thank you, Google, for opening
the gates. Now that people realize it's possible, we have another thing to
guard against. You know most hacking is done through what's called "social
engineering", eh?

~~~
Ajedi32
> It is tricking people if it is done to trick people.

That's true, but I don't see behaving like a human as "tricking people". Even
if the assistant opened the conversation by identifying itself as an AI, I'd
expect it to use all the same mannerisms ("uh", "mhmm") that it did in the
call to enable natural conversation.

> Would you say the computers in the "Star Trek" shows "behave like a human"?

No, but I wouldn't exactly call that "natural interaction" either. Jarvis is a
better example. And yes, I do think Jarvis "behaves like a human".

> I don't think it's like that at all. Are you impersonating someone else?
> Gender-bending for kicks? Trying to get better service because you think the
> person you're calling is sexist or something?

In keeping with the analogy, probably closer to "Trying to get better
service". An AI callbot that pretends to be a human will be more efficient at
its job than one which speaks in a monotone and requires the human on the
other end of the call to adjust their mannerisms to compensate for the fact
that they're not talking with a human.

> And it's not even needed to make the Duplex service work because, as I
> pointed out before, when a robot calls you to give you money you don't hang
> up on it! Okay?

Depends. If the AI immediately identifies itself as such, how many people are
just going to immediately assume it's a spambot and hang up, even when it's
not?

> people, like you, won't take these concerns seriously

What concerns? You've still failed to describe any ill effects that might
arise from AI callbots failing to disclose their nature upfront.

> You know most hacking is done through what's called "social engineering", eh?

You think a law forcing AIs to disclose themselves as AI would do anything
against malicious actors who are already breaking the law? (Hacking.)

------
Fraction-Tech
Huffman suggested the machine could say something like, “I’m the Google
assistant and I’m calling for a client.” More experiments are planned for this
summer, he noted.

Another Google employee working on the assistant seemed to disagree. “We don’t
want to pretend to be a human,” designer Ryan Germick said when discussing the
digital assistant at a developer session.

------
mdekkers
_Should AI software that’s smart enough to trick humans be forced to disclose
itself? Google executives don’t have a clear answer yet._

I would think that Google's executives shouldn't have a say in that question
at all. That is a job for the government, as the chosen representative of the
people.

~~~
rbanffy
Wouldn't it be racist to force an intelligent entity to disclose its nature?
Why does it matter if the person chatting with you on the phone is made of
meat or a super-being who's having thousands of similar conversations at the
same time?

~~~
mdekkers
_Wouldn't it be racist to force an intelligent entity to disclose its
nature?_

This argument is hilarious, but I'll play along. First of all, "racism" is
about superiority, not about differentiation per se. There is nothing
inherently wrong with acknowledging differences; it becomes wrong when
difference is made the sole basis for assuming superiority. That is the
textbook definition of racism, so to answer your question: no, in my view it
would not be wrong to force an intelligent entity that is not human to
disclose this fact.

It matters so that my approach to dealing with this "intelligent entity" can
be adjusted accordingly, as per my needs.

_Why does it matter if the person chatting with you on the phone is made of
meat or a super-being who's having thousands of similar conversations at the
same time?_

It matters because today we do not yet understand anything about the drivers
and motivations of this "super-being", and before we are clear on that, I
would like to know who or what I am conversing with. I am pretty sure you are
referring to the "Minds" from "The Culture" series, which a) _are fiction_,
and b), more importantly for the sake of this discussion, have a relatively
well-understood context of motivations, morality, and a history of mostly
benign interaction with Culture citizens. The "Robocall version 2.0" touted by
Google doesn't have that, and given Google's past track record, they don't
really give a flying fuck about the general population. When it comes to the
proprietor and owner of this technology today, we have been comprehensively
identified as "the product" instead of "the customer" -- it is my right as a
private citizen to know who or what I am interacting with, especially if it is
a Google product.

Finally, whereas dealing with other humans can be understood on a base level
of expected intelligence and wits, as well as shared identity, history, and
culture, interaction with a "super-being" would by its very definition be an
unequal exchange. I have every right to know if my interlocutor has any kind
of extreme advantage over me (you know, because it is a "super-being").

Or am I being speciesist now?

~~~
rbanffy
> First of all "racism" is about superiority, not about differentiation per-
> se.

In this case, the IA would disclose its nature and its lack of independent
agency. It'd disclose the fact it's not considered a living being and not
afforded any rights. The human, then, could engage in all sorts of abuse (and
Microsoft had a lovely chatbot that had its "brain" damaged by trolls) that
could leave lasting effects on its persisted state.

> I am pretty sure you are referring to "Minds" from "The Culture" series

You are completely wrong. Google's "Robocall 2.0" is nowhere near a
sci-fi-worthy AI.

------
zerostar07
On a tangent: I don't like how Bloomberg has become a tech reporter of note.
They try to bridge things between the techies and the critics, which ends up
making it worse for both sides. I prefer to get my tech news from nerds, with
the suits standing on the other side, watching.

------
0x7f800000
I was far more concerned about the application of AI to healthcare. Did they
not think about the possible abuse from telling hospitals or insurance
companies which of their patients are very likely to need urgent care within
the next week?

------
thatgerhard
No, it shouldn't identify itself unless asked. That would defeat the purpose.

~~~
vorpalhex
I can imagine a lot of cases in the future where this goes awry and is
abused, probably not directly by Google Duplex, but by homemade versions.

I think there should be a kind of non-jarring way for AIs to identify
themselves - maybe playing three quick tones before the AI speaks - to signal
that it isn't a human on the other end and therefore shouldn't be able to,
say, order thousands of dollars of food. This is quick, unobtrusive, easy for
an AI to do, and easy for an AI to detect.
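
A minimal sketch of that three-tone handshake, assuming (hypothetically)
DTMF-style frequencies and the classic Goertzel algorithm for detection;
nothing here reflects an actual standard:

    # Hypothetical "I am a bot" preamble: three 100 ms tones, detected
    # with the Goertzel algorithm. Frequencies and threshold are
    # illustrative only.
    import numpy as np

    RATE = 8000                       # telephone-grade sample rate (Hz)
    TONES = [1209.0, 1336.0, 1477.0]  # borrowed from the DTMF column set
    TONE_SEC = 0.1                    # duration of each tone

    def preamble():
        """The three identification tones, concatenated."""
        t = np.arange(int(RATE * TONE_SEC)) / RATE
        return np.concatenate([np.sin(2 * np.pi * f * t) for f in TONES])

    def goertzel_power(samples, freq):
        """Signal power at `freq` via the Goertzel recurrence."""
        k = 2 * np.cos(2 * np.pi * freq / RATE)
        s1 = s2 = 0.0
        for x in samples:
            s1, s2 = x + k * s1 - s2, s1
        return s2 * s2 + s1 * s1 - k * s1 * s2

    def is_bot_preamble(samples, threshold=1e3):
        """True if every 100 ms slice has enough power at its expected tone."""
        n = int(RATE * TONE_SEC)
        return all(goertzel_power(samples[i * n:(i + 1) * n], f) > threshold
                   for i, f in enumerate(TONES))

    print(is_bot_preamble(preamble()))  # True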

~~~
CompanionCuuube
If they do this, I am definitely going to play those three quick tones before
every conversation I have.

------
andrei_says_
I’d reframe this as a reaction to horrifyingly creepy tech.

We engineers and geeks tend to get excited about solving a problem and
sometimes forget the humanity of humans.

Portal had the voices of the turrets just right.

------
hartator
They might kill the project before it even launches.

~~~
ocdtrekkie
Wouldn't surprise me. Glass was killed because it was _perceived_ as creepy.
Its actual capabilities fell far short of being able to actually _be_ creepy,
but the biggest fear Google has is people realizing how creepy Google actually
is. And this is definitely outright creepy.

I am still a little stunned that either nobody at Google knew how this was
going to be received, or that they didn't manage to take that person
seriously.

~~~
seibelj
I think Glass was shattered because it was really lame and uncool. Outside of
the tech bubble, you looked ridiculous wearing it, and no one wants to be
filmed in a bar or other public places by a bunch of weirdos.

~~~
liveoneggs
I commuted on the NYC subway during Glass's life, and a guy (probably a Google
employee?) who rode the same train occasionally wore one.

On the few days I saw him, he was getting chatted to about it, mostly by
women.

~~~
ocdtrekkie
The folks who noticed mine were generally more intrigued than anything else. I
had one guy ask me if I wanted a job on the spot while I was at lunch. A lot
of people wanted to try it out; I used to be pretty protective of it, though,
and generally didn't let people.

I also took it to Paris for a week, and definitely had some fun conversing
with people about it through a bit of a language barrier (my French is not
good, but serviceable). Customs was not super thrilled that I wore it through
their line, though.

------
viggity
I know I'm a bit of an outlier here, but I don't get all the fuss. If the bot
identified itself as a bot, I can imagine the human reacting in all sorts of
weird ways that would prevent the task from completing: raising their voice,
speaking in weird half-phrases or broken English, or just immediately hanging
up. "OPERATOR NOW." "WHO NAME CUSTOMER."

Perhaps it'll be awkward and fail. But so what? I'd rather give it the ole
college try.

~~~
gowld
The same things can happen when humans call, and the same things can happen
when the bot does its bad impression of a human.

------
LifeLiverTransp
A product made by engineers afraid of spontaneous social interaction, for
engineers afraid of spontaneous social interaction.

Oh, and spammers. At least one cashflow-relevant demographic is covered.

------
Grue3
> The obvious question soon followed: Should AI software that’s smart enough
> to trick humans be forced to disclose itself?

No. Replace "AI" with your favorite disadvantaged minority and the answer will
be obvious. If an AI is smart enough to pass the Turing Test, it should be
treated as a human.

~~~
tree_of_item
Why do you think it's valid to replace the word "AI" with your favorite
disadvantaged minority? Sounds like you have a pretty low opinion of these
disadvantaged minorities.

~~~
CompanionCuuube
Maybe because the current societal environment is one where AI is maligned and
prejudiced against as "creepy".

~~~
carapace
But you are again conflating _real humans_ with as-yet-fictional machines.

That's the jarring subtext of Grue3's comment.

