
Google to warn when humans chat with convincing bots - baddash
http://www.bbc.com/news/technology-44081393
======
benbread
It'll be interesting to see how a human knowing they're talking to a bot
changes their behaviour. In the demos people thought they were talking to
another person, and were polite and professional. I wonder whether, knowing
they're talking to a machine, people may change their tone, become more
abrupt, speak more slowly or become aggressive. Maybe it would lead to
unconscious (or likely conscious) discrimination against bot calls, in a
similar way to stories of people with 'ethnic' accents calling restaurants
and being told there are no reservations, when in fact there are.

~~~
drdaeman
Can't wait for a day we'll be talking about human privilege and quotas for
chatbots in call centers. /s

Your observation is most likely true, though. Humans do talk to machines
differently, and I'm not talking about those who swear at bots "just for fun".

Personally, as soon as I'm positive I'm talking with a bot (because it doesn't
respond naturally), I strip out any unnecessary verbal clutter, since it only
confuses the machine; my respect for the bot (haha) consists precisely in
omitting what's irrelevant. I also try to guess which keywords the bot would
recognize best, so I get exactly what I'm asking for.

So, "Hey, can you please help me? I'm looking for a way to configure Foo to do
Bar, but stuck with error code 123. I wonder if there's any blah blah blah.
Thank you very much!" becomes something like "How to configure Foo for Bar? It
fails with error 123." And that's if I still try to use natural language and
don't just go with `Foo Bar configuration error 123`. Surely, such a
conversation with a human would be considered impolite, to say the least.

The above's tech-biased, of course.

~~~
benbread
You're not alone in doing that, though it'll be interesting to see how these
bots perform when given these sorts of succinct prompts and instructions,
given the Google example was trained on real conversations. You may find they
perform worse compared to 'natural' speech.

~~~
TomMarius
I wonder how it does with non-native English speakers at A2-B1 level (e.g. a
Chinese restaurant run by immigrants).

------
exodust
Prior to this warning feature, I wonder what would have happened if during the
phone call the hairdresser had asked "are you a real person?". Would the
Google assistant reply "Ummm... I'm not real" or would it lie?

------
abraae
From reading the headline, I assumed Google was providing some useful service
where chrome or Google voice or some other Google medium would warn hapless
human when they ended up in conversation with an AI pretending to be human.

But no! Google itself IS said evil AI. But hey, it's ok, don't worry, it will
come with a built in warning!

Things like this make me think that big tech has really lost the plot. You'd
think in the current climate that Google would be keeping their heads down,
staying away from things that are creepy, unsettling and potentially providing
evildoers with another way to maliciously influence people.

But no... because ads.

~~~
exodust
I'm not sure it can be dismissed that easily. If Google sticks to "providing
tools" and lets others decide how to use them, maybe they will do better.

It's when Google product managers stand on a stage and tell us how their
technology will make our lives better, that I cringe. I don't want Google
telling me how I should live my life. Just provide the tools and tech, and let
us work out how best to use them for ourselves.

There will be situations where this "warning" will be unwanted. Google should
not be dictating when or how the warning is delivered. That should be at the
discretion and option of the business or individual.

I can see this tech being useful in the reverse scenario they demonstrated.
That is, the bot answering calls on behalf of the restaurant and accepting
bookings. Often when you call a restaurant, it's noisy and you just know
you've interrupted someone in the middle of other tasks.

~~~
em3rgent0rdr
> "the reverse scenario they demonstrated. That is, the bot answering calls on
> behalf of the restaurant and accepting bookings"

In the scenario of bots talking to bots, if they identify themselves as bots
to one another, they could quickly switch over to a much more efficient
machine communication method. :)
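To make the idea concrete, here's a minimal sketch of such a handshake. Everything in it is hypothetical (there is no such in-band marker today): each side announces a bot-hello token, and if both recognize it they drop synthesized speech for structured JSON.

```python
import json

# Hypothetical in-band marker a bot could emit to identify itself.
BOT_HELLO = "X-BOT-HELLO/1.0"

def open_conversation(speak, listen):
    """Announce ourselves as a bot; if the other side answers in kind,
    agree to switch from synthesized speech to structured messages."""
    speak(BOT_HELLO)
    return listen() == BOT_HELLO  # True -> both sides are bots

def make_booking_request():
    # The structured payload both bots would exchange instead of speech.
    return json.dumps({"intent": "book_table", "party_size": 2, "time": "19:00"})

# Simulated call between two bots: the "venue" side simply echoes the hello.
caller_log, venue_log = [], []
is_bot_call = open_conversation(caller_log.append, lambda: BOT_HELLO)
if is_bot_call:
    venue_log.append(make_booking_request())
```

A real protocol would of course negotiate a data channel rather than push JSON through the voice path, but the handshake-then-upgrade shape is the same.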

------
xwvvvvwx
Don’t really get this tbh. Why should I care if I’m talking to an AI or not?

------
verroq
I can't wait for automated phone spam to be weaponized so that we can receive
verbal Turing tests when we call the customer service hotline.

------
endergen
Will Google record calls or metadata? More importantly, to alleviate concerns,
will they indicate what their data policy is during a call?

~~~
ddtaylor
I'm sure they will record everything and say it's for quality / training
purposes like most call centers do. Only instead of training humans they are
training AIs.

~~~
endergen
Ha!

------
sgt101
The problem is that the technique is known, and it's going to be duplicated.
Honourable people will not defy patents or reason to use it maliciously, but
dishonourable people will! The community (and Google) needs to develop a
better solution to this and to deepfake videos.

~~~
c22
Some of the first people to start using this once it "breaks free" will surely
be businesses who are tired of answering the phone. How does this not end with
computers talking to each other using a low precision, inefficient, low
bandwidth machine protocol over the PSTN?

------
Alterlife
Frankly I'd love a bot framework to turn the tables and call into my ISP's IVR
to log a complaint.

A bot that would do all the waiting, trudge through the options, deal with the
transfers, tell them I did the standard debugging steps and get back to me the
complaint number.

That would be just incredible.
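The menu-walking part of that bot is the easy bit. A toy sketch of the logic (the menu prompts and responses are made up; real use would sit on top of a telephony API, which is out of scope here):

```python
# Scripted keypresses for the IVR prompts we expect to hear (hypothetical).
MENU_SCRIPT = {
    "Press 1 for billing, 2 for technical support": "2",
    "Press 1 for internet issues, 2 for TV": "1",
}

def walk_ivr(prompts):
    """Answer each recognized IVR prompt with the scripted keypress; once a
    prompt isn't recognized, assume a human agent and deliver the complaint."""
    responses = []
    for prompt in prompts:
        if prompt in MENU_SCRIPT:
            responses.append(MENU_SCRIPT[prompt])
        else:
            responses.append("I've already power-cycled the modem; please log "
                             "a complaint and text me the ticket number.")
    return responses

result = walk_ivr([
    "Press 1 for billing, 2 for technical support",
    "Press 1 for internet issues, 2 for TV",
    "You are speaking with an agent",
])
```

The hard parts, of course, are recognizing the prompts from audio and sounding convincing to the agent, which is exactly what Duplex demonstrates.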

------
jdowner
Google may be the first to release a system like this but it won't take long
until there are equivalent services, which may not warn that it is an AI. How
long until those fun calls to automated systems start with a captcha?

sidenote: how long until the Butlerian Jihad?

~~~
pdkl95
> how long until the Butlerian Jihad?

It already started for some people. I personally know people that have made
fighting against technocracy their long-term goal. Technology has been such a
destructive force in their lives and is a continually growing threat. When you
have to worry all the time about the new Sword of Damocles that Silicon Valley
hangs over your head every month, thoughts of Butlerian Jihad style _active
violence_ against technocracy become inevitable.

In case anybody wants to dismiss this as merely an outlying opinion, consider
the poem "There's No Reception in Possum Springs" from the game _Night In the
Woods_ :

    
    
        ... (see [1] for the rest) ...
    
        Replace my job with an app
        Replace my dreams of a house and a yard
        With a couch in the basement
    
        "The future is yours!"
        Forced 24-7 entrepreneurs
        I just want a paycheck and my own life
    
        I'm on the couch in the basement
        They're in the house and the yard
        Some night I will catch a bus out to the west coast
    
        And burn their silicon city to the ground
    

[1]
[http://nightinthewoods.wikia.com/wiki/Selmers#Possum_Springs...](http://nightinthewoods.wikia.com/wiki/Selmers#Possum_Springs_Poetry_Society)

------
mgiannopoulos
Why does the BBC think we should read three times that this is “horrifying”?

~~~
speedplane
Because a computer is standing in for a human, unbeknownst to another human.
The computer is effectively tricking a person into thinking it's another
person... but tricking nevertheless. This opens a whole new door to both
opportunity and failure.

~~~
red75prime
Tricking implies intention. The computer has no such intention yet. Google
has it, or, most likely, Google just wants to get data efficiently, and the
tricking is a side effect of how good their speech processing is.

~~~
phreeza
Does Google have intentions? Or do only individuals have intentions? Serious
question, I am sure philosophers have thought about this.

~~~
LyndsySimon
If Google - an organization with a purpose, charter, and organizational goals
- can have intent... how is that different from software?

~~~
red75prime
It is not that different. The software just doesn't have the intent to trick
humans into thinking it is a human. There's no feedback loop assessing its
performance at tricking humans and adjusting its behaviour to improve it. To
be precise, such a feedback loop probably exists, but it is external to the
software and implemented by the engineers.

------
intellix
What difference does it make if a human, dog or robot tries to book a table at
a restaurant? As long as it speaks in English it doesn't matter. It's the same
outcome.

------
2sk21
In the first place, the demo would have been much better if it was used in
their cylinder rather than to impersonate people.

------
jmcnulty
This tech might work well dealing with emergency-service calls, filtering out
inappropriate calls and only passing genuine emergencies on to the human
operator for the required service.

~~~
arcticfox
This sounds like the worst situation to use it in, unless it's actually better
at English than humans. Even if 99/100 emergency calls are garbage, you want
the best responder on the line immediately for the one call that might save
lives.

And if it decides to hang up on an actual emergency? That would be a special
kind of fail.

~~~
jmcnulty
I guess it depends on whether Google think it would be up to the job. If so,
it would need to undergo a lengthy trial for the services to establish a
suitable level of confidence. Emergency-service calls are recorded, so there's
plenty of real-world data to test and tune with. And for a live trial I'd have
real people shadowing Duplex, listening in and ready to take over if the call
doesn't go in the right direction.

------
John_KZ
And we should rely on Google's pinky swear? We need authentication for phone
calls and a set of laws requiring disclosure when this type of service is used
by legal entities. And we need these laws now.

~~~
icebraining
Because otherwise... we might be somewhat annoyed? What terrible events do you
see happening that require laws with such urgency?

~~~
John_KZ
There is audio synthesis software that can mimic anyone with an almost
indistinguishable voice, and there is software through which you can automate
responses. This will not go down well. If automated scamming isn't bad enough
for you, there is also no law prohibiting a corporation from implicitly posing
as one of your friends, your employer or a person in general.

~~~
realharo
Wouldn't that just classify as fraud? I think existing laws already have that
covered.

After all, impersonation done by people is nothing new, this technology just
makes it easier.

~~~
John_KZ
>Wouldn't that just classify as fraud?

20 years ago, much of the ToS you blindly accept on many websites would have
landed the developers and company management in jail, at least in my country:
counts of misuse of personal information, defrauding the customer and
potentially espionage.

Likewise, if they sold physical devices while lying about the function of the
buttons, installing "erase" buttons that don't erase anything, etc., that
would trigger a class-action lawsuit and fraud charges from the district
attorney.

I can totally see companies, especially the ones that don't care much about
keeping a good face, like debt collection agencies, using this kind of service
in extremely unethical ways while retaining plausible deniability in court.

~~~
icebraining
If companies are evading existing laws, what makes you think more laws will
help?

