
California law banning bots from pretending to be real people without disclosure - woodgrainz
https://www.newyorker.com/tech/annals-of-technology/will-californias-new-bot-law-strengthen-democracy
======
jlangenauer
I am reminded of Thoreau's quote: "There are a thousand hacking at the
branches of evil to one who is striking at the root."

The root of the problem is not that we have bots, but that we have normalised
lying and deception as part of everyday business. We allow companies to
pretend that bots are human beings, and allow call-center employees in third-
world countries to pretend (even sometimes through elaborate lying) that they
are located in the same country as you. We allow companies to tell outrageous
untruths in their advertising - see the Samsung ad which they're currently
being hauled over the coals for in Australia.

That's the real problem here, and the one we need to fix on a general level,
not by band-aid regulations over whichever dishonesty has managed to irritate
enough state representatives.

~~~
brownbat
> allow call-center employees in third-world countries to pretend

I agree companies should be forced to tell the truth to their customers, but
does that hold in all cases where they're asking for details about the
customer service rep? That could go weird places.

If a caller demanded to know the rep's HIV status, we wouldn't insist the
company disclose it. We'd probably demand they didn't. Honesty is important,
ok, but the customer has no reasonable need to know that. So, ok, it's not as
simple as "tell the customer everything." There are some judgment calls the
company has to make about what's important to disclose and what is irrelevant.
The service rep's race, religion, medical history, or sexual orientation are
probably not up for discussion. Why not national origin?

This isn't just about fairness to the rep, but the majority of customers need
the company to hold this line too. If some portion of the population strongly
believes that foreign call centers are lower quality, are you going to get the
best possible answers on a survey if you say, "this was a foreign call center,
how was the quality?"

Disclosing irrelevant details can sabotage collection of unbiased information
on call quality.

If customers as a whole want improved service quality, they'll want the
company to be able to collect unbiased post-call survey results.

If call centers in country X are all terrible, unbiased surveys will reveal
that. If it's really about training, and there are good and bad call centers
in several different countries, unbiased surveys will reveal that too.

I know passions run deep on this one, and it's a hard case, but this one seems
a little more complicated than the others.

~~~
jellicle
Declining to tell someone something is not the same as lying to them. And
declining to tell the customer "I am HIV positive" is not the same as
declining to tell them impersonal information.

You may or may not be aware but there are companies that actively instruct
their international call center personnel to tell customers "My name is Sally
and I live in Fort Worth Texas, right near you" when none of that is true.

~~~
wysifnwyg
I think this is an important distinction that the parent comment failed to
address and which destroys the premise of their entire argument.

------
dessant
I felt stupid, but I fell victim to this on dpd.com while tracking a package.
A helpful chat popup appeared where I could request assistance from a support
agent, but it was not disclosed that the agent was a bot.

Needless to say, I spent a couple of minutes repeatedly asking a question,
and even rephrasing it, while being frustrated that this person did not seem
to grasp my issue.

~~~
burfog
I've been assuming that it's a pre-determined initial message, but that a real
human in a low-cost country would immediately be involved if I bothered to
respond.

What portion of the chat pop up windows do you think are purely bot? Might any
of them be purely human?

~~~
LostJourneyman
It varies build to build. The standard approach is one of four:

1) The bot initiates the conversation and your initial message gets sent
directly to an agent. This is the older model that's in most common use today.

2) The bot initiates contact and based on your responses does some simple
keyword matching and delivers help article links where possible or asks for
more information IVR style, then when it hits an "I don't know" point or if
the agent option is selected, offloads to an agent.

3) This is my favorite style, honestly: The bot initiates the interaction, and
does some machine learning backed AI chat, all the while the interaction is
monitored by an agent who can take over at any time. Similar to #2, if the bot
hits a sticking point, it'll just queue to an agent. This unfortunately is the
least common of the implementations.

4) This is the most modern and is becoming the industry leader: Fully AI bot
trained against a veritable Everest of chat conversations for that
entity/industry, only offloads to a human when you shout "HUMAN" at it enough
times or if it gets really stuck and confidence intervals start falling
rapidly.
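
The confidence-based handoff in style #4 could be sketched roughly as follows. This is a hypothetical illustration, not a real product's API: the class, thresholds, and reply strings are all made up, and in practice the confidence score would come from the underlying trained model.

```python
# Illustrative sketch of "style 4" escalation: a fully automated bot answers
# until either the user demands a human repeatedly or model confidence drops,
# at which point the session hands off to a live agent. All names and
# thresholds are assumptions for illustration only.

HUMAN_DEMAND_LIMIT = 2   # how many times the user must shout "HUMAN"
CONFIDENCE_FLOOR = 0.4   # below this, the bot gives up and escalates

class ChatSession:
    def __init__(self):
        self.human_demands = 0
        self.escalated = False

    def handle(self, message: str, confidence: float) -> str:
        """Return the bot's reply, or escalate to a human agent."""
        if "human" in message.lower():
            self.human_demands += 1
        if self.human_demands >= HUMAN_DEMAND_LIMIT or confidence < CONFIDENCE_FLOOR:
            self.escalated = True
            return "Transferring you to an agent..."
        return "Bot answer based on the trained model."
```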

_NOTE/DISCLAIMER: I design and implement these systems for a living, and we
don't often get much say in the customer-side UX, so I'm sorry if you've
gotten stuck with an arguably bad build!_

(Edit: formatting)

~~~
zamadatix
[https://xkcd.com/806/](https://xkcd.com/806/)

Nobody will notice, you know you want to add it ;).

~~~
LostJourneyman
You have NO IDEA how tempted I've been to add that! I've literally had this
printed out and pinned to the wall of my cubicle for years!

------
throwaway13337
California is setting a dangerous precedent.

If individual states each enact individual laws governing the internet, then
only large companies will have the resources to follow them.

We'll see a balkanization of the web wherein it's no longer very world wide.
Small internet businesses will become harder and harder to start. Big
monopolies will become entrenched.

It's not pretty.

~~~
sdoering
Or companies could start acting like a sane person would - not outsourcing
communication to a bot without disclosure, for example - and do so globally.

Or, taking the European GDPR regulations as an example, actually care a
little bit more about users' data and enable informed consent.

~~~
hajhatten
Agreed, when are you enforcing Article 11/13 in the US? /s

~~~
sdoering
Actually Article 11 is in effect in Germany. Article 13 also.

------
AnthonyMouse
That will be a fun one to interpret. So if I use autocorrect, am I a bot? What
if the device makes next word suggestions and I use them? What if the device
suggests the entire post, but I manually approve it? What if it suggests five
separate posts and I approve them all at once?

~~~
sushid
I'm not sure why you're being so facetious. In fact, I wonder why HN always
has comments like these.

Lawmakers are purposefully vague because judges can decipher what the spirit
of the law is and fine corporations or condone specific use cases when they
are brought up in court. You can't go into court to challenge a law with
hypothetical cases for a good reason. Do you want lawmakers to arbitrarily
impose constraints like "only 10% of non-article words can be suggested per
message composition" or "only 2 posts per minute are allowed"?

It is a fact in life that technology changes and improves things beyond what
we could have foreseen in just a few years. The degree of flexibility built
into these laws is a huge plus. Not a flaw.

~~~
dorgo
>I wonder why HN always has comments like these

It's a work-related disease. A coder must consider all corner cases in
advance. There is no judge to decipher the spirit of a program.

~~~
MrGilbert
slight-OT: Yes, it is. Which is frustrating in itself. In general, I've
developed a strategy:

Me: "So, what should the program do when XYZ occurs?"

Marketing: "Uhm... Dunno, haven't thought about it. I'd decide by, you know,
gut instinct. We haven't thought about that yet."

Me: _implements a virtual coin-flip using Random.Next()_

Once you've done that, it's easier.
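
The comment references C#'s Random.Next(); the same joke default in Python might look like this (the function name and options are hypothetical):

```python
import random

def handle_unspecified_case(option_a, option_b):
    """When requirements don't specify behavior for a corner case,
    pick one of the two plausible behaviors at random (a joke default;
    in practice you'd pick the simplest behavior and document it)."""
    return random.choice([option_a, option_b])
```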

~~~
ryandrake
Or, file a bug for each corner case, mark it release-blocking, and assign to
whoever is responsible for the requirements.

~~~
CGamesPlay
If you're interested in actually shipping, a more helpful approach is to
choose the simplest behavior to implement, notify the requirements person that
you've done that, and offer to create a release-blocking bug for the issue if
that is unacceptable.

Chances are, if the requirements person doesn't have an opinion on what the
copy for the dialog should be if the customer is 65+ and it's a Tuesday in a
month with 31 days, then it's because that choice doesn't really matter all
that much.

------
wickedlogic
Bots are going to be the way we interact with the web (and really all systems)
going forward; this notion of 'real people' at just 'browsers' is quite a
misunderstanding of what a 'user-agent' really means in this day and age.

Consider: I launch a new tab in the background and tell it to go establish
some set of factors for me, or locate price points and details for me, or buy
something for me (and right now, as me)... or I just browse and interactively
direct it, but have it block ads as I go.

I know the law, and lawmakers, are looking at this from a fraudulent-content
perspective, but they are going to be hard pressed to do anything in the long
run to quell this.

------
mirimir
> Violators could face fines under state statutes related to unfair
> competition.

I doubt that anyone running bots, and who is technically competent, will be
identifiable or findable. I mean, I could do it, and I'm just a random
anonymous coward.

~~~
wpietri
I think part of the goal of this law is to make this true:

> I doubt that anyone running bots, and who is technically competent, will be
> identifiable or findable.

A great deal of telemarketing, for example, was done by perfectly legal,
traceable businesses. Once it was made illegal, it was forced underground, and
volume dropped immensely.

> I could do it, and I'm just a random anonymous coward.

Could you? Hiding the flow of significant amounts of money is actually quite
hard. Robot salesmen masquerading as humans would be a plague, and I think
this law should keep that from becoming a legitimate business technique.

~~~
mirimir
I'd do it using Bitcoin. And I know how to use Bitcoin ~anonymously. Or some
other cryptocurrency that's actually anonymous.

It's true that getting assets out of cryptocurrencies is hard. I don't do it.
I just spend my ~anonymous income on ~anonymous servers to play with.

But if you're moving enough assets, you can pay people, who know what they're
doing, to move it. As we've seen in real estate markets in many cities.

~~~
notahacker
Meanwhile, back in the real world, companies that were quite happy to pay for
robocalls and social media bots to promote their product are not interested in
paying anonymous people anonymously in Bitcoin for campaigns that might get
them into trouble, no matter how proficient the bot developers might be at
laundering their crypto earnings. That'll be the law working as intended.

~~~
mirimir
True. But there's lots more deniability for political campaigns. Free speech
and all.

------
CM30
Of course, this all raises the question of whether there are only unethical
use cases for bots pretending to be human, or whether a law like this could
hit benign uses for bots as well.

For instance, ARGs could have bot accounts for fictional characters on social
media sites. These accounts could give pre-recorded messages that then hint
that the user should visit some third-party site for more clues or
information. Is that legally dubious? I can see it being so under this law,
but I don't think it's comparable to a business running, say, an automated
chat support system and pretending its bots are human.

The same goes for roleplaying bots on online community sites. These aren't a
huge thing right now, but they could be in the future, with accounts that act
like NPCs do in video games or interact with the player's account in side
quests or whatnot. These don't seem like they'd be morally 'wrong' things to
have on a site, but they'd probably get hit by this law regardless.

Point is, these types of bots don't necessarily only have dodgy use cases.

~~~
tablethnuser
In these gaming use cases the bots can be appropriately disclosed without
getting in the way of the game. A couple of decades ago there was an ARG about
government conspiracies which called your house to give you clues from in-game
characters, and it started with a fourth-wall-breaking preamble so that if
someone not playing answered the phone they wouldn't get worried. The game was
still fun. IIRC you could go into the settings and disable the preamble, which
would be a fine place in the UX to disclose the bots and capture the user's
agreement.

Unfortunately for the game (and the world), 9/11 happened a few months after
launch and due to the theme of the game it was shut down. Now it's just an
interesting bit of gaming history!

[https://en.m.wikipedia.org/wiki/Majestic_(video_game)](https://en.m.wikipedia.org/wiki/Majestic_\(video_game\))

------
alexheikel
I can say that people like humans way better than bots. We have an app that
looks like a bot but is a real person, and we say so at the beginning, but
people don't believe it. Once they figure it out, they go crazy. It's always
local people, by the way. So I think at some point, bots should at least
explain what they are instead of pretending to be human.

------
modernerd
Welcome to the Fifth Annual Californian Turing Test Chatbot Hackathon!

This time we've had to introduce some changes to abide by new state
legislation.

Messages entered into the chat console must be followed immediately by the
string " [I am a bot]", whether you are a bot or a human, but especially if
you are a human.

Good luck and have fun!

~~~
xamuel
New addition to every EULA: "XYZ Corp is a member of the Rand-Turing
coalition, an industry cooperative dedicated to the philosophical belief that
all human beings are robots. Based on this, every interaction you have with us
will be an interaction with a robot. By signing this license agreement you
agree that all members of our corporation from the lowest janitor up to the
CEO, are all robots."

------
dbieber
You can read the text of the law here:
[https://leginfo.legislature.ca.gov/faces/billCompareClient.x...](https://leginfo.legislature.ca.gov/faces/billCompareClient.xhtml?bill_id=201720180SB1001)

------
gnicholas
What about customer support or sales agents who are operating solely off a
script? I was contacted by a sales agent of a "lead generation" company, who
was being impersonated by her off-shore workers. Eventually, during our email
exchange, the real person jumped in (but did not announce that the prior
correspondence was with someone in an offshore sales center). They use these
agents to pretend to be their clients, and based on the not-great experience I
had when I was initially contacted (the email was not well written, and said
things about my company that weren't quite right), I would not use them.

These folks are essentially like bots, insofar as they are "programmed" to
respond and significantly constrained in their latitude. They're like human
bots, no?

------
sailfast
“Bots are not people” - ok, but if they’re written by people for a specific
intent meant as speech, is that no longer protected speech? If I program a bot
to chant slogans on Twitter, isn’t that something I’d have the right to do?

I’d argue yes, but where they may have an argument is if I respond “no, I am
human” when people ask me about being a bot - that's intentionally misleading.

They may have more luck in the commercial space, where they can better
regulate and enforce these rules, as with advertising and other sales
practices. Not sure where this goes in politics or other domains in terms of
enforceability.

------
gregoryexe
I'm confident it will be as effective as the do not call registry.

------
msh317
This is a horrible idea. California doesn't have the ability to enforce such a
law, and hackers would simply operate from outside California.

Bad idea

~~~
threezero
If the company or person is doing any business in California then California
does have the ability to enforce the law.

------
inlined
Political ads on tv require disclosure. I see no reason disclosure rules
shouldn’t apply on internet campaigns of all forms.

------
ajflores1604
Does anyone have any insight if this would affect market trading algorithms
and bots? The article says the law is "requiring that they reveal their
“artificial identity” when they are used to sell a product", but I'm not sure
how broad of a definition they want for the word "selling".

------
unreal37
This will bring about the end of Twitter, a website where 75% of the content
is bots.

~~~
lajawfe
No, Twitter just needs a switch that will display 'posted by a bot' in place
of 'posted via Twitter for iPhone'.

------
sneak
Bots are communications tools configured and deployed by human beings, at
least until they can pass the Turing Test.

Human beings have human rights to express themselves however they wish.

~~~
icebraining
> Human beings have human rights to express themselves however they wish.

No. Commercial speech usually has disclosure requirements, even in the US (see
the Zauderer case), which humans have to follow. This is just another case of
compelled commercial speech.

------
threezero
I get the feeling that the commercial part of this law might hold up, but the
election part is highly questionable based on Supreme Court rulings.

------
archy_
Doesn't this effectively outlaw services that scrape your bank accounts (but
don't have an official API to work with)?

------
mcantelon
Sounds unworkable, but is likely a pretext for some other effort (like
undermining anonymity).

------
shultays
Some companies use bots for their support Twitter accounts. Samsung, for
example, had a Twitter bot that replies to everyone.

[https://twitter.com/SamsungSupport/status/114041514667572838...](https://twitter.com/SamsungSupport/status/1140415146675728384)

And it even uses a human name. Really dishonest.

------
ourmandave
As if millions of Facebook accounts cried out in terror, and were suddenly
silenced.

~~~
0xffff2
As if millions of Facebook accounts kept right on going because none of the
people running them are in California and Facebook has no incentive to
actively police them.

------
KrishMunot
Does this have to do with the Google AI booking salon appointments over the
phone?

------
Fjolsvith
So much for Dr. Eliza.

[http://www.drdobbs.com/chatboteliza/199101503](http://www.drdobbs.com/chatboteliza/199101503)

[https://www.cyberpsych.org/eliza/](https://www.cyberpsych.org/eliza/)

~~~
14
I don't think this spells the end of Dr. Eliza. From what I understand, it
only has to be made known that it is a bot. The fact that Dr. Eliza is very
clearly a bot from the get-go makes it completely legit. I do, however, wonder
about Ashley Madison, as it was made evident that a lot of the women on the
site were just bots. Yes, the bots were just there to talk to the men, but
they were put there by the company to entice the men to upgrade their
membership and pay, so it seems to me this would make them fall under this
law.

------
cgb223
Cool, hopefully the spammers on Tinder will respect California law /s

------
vinniejames
So this disclosure will now be required with all robocall spam calls?

~~~
bshacklett
Aren't those already illegal, but just a nightmare to track down for
enforcement?

------
frigfog
Well, at least the Turing test is solved.

~~~
johnlbevan2
Haha, that was my first thought too...

Unless companies start asking all human employees to start claiming that
they're bots so as to subvert the new rule... there's not a law against that
yet.

------
sjg007
It’s a start, it should cover Facebook and Google.

------
zn44
what about people pretending to be bots?

------
buboard
But what about humans posing as bots?

------
scarejunba
If I make a generated face that speaks my stuff with a generated voice, is it
a bot?

~~~
scarejunba
Well, amusing that people thought this was wholly irrelevant. Here's what I
was thinking of when I said that:
[https://www.youtube.com/watch?v=ksb3KD6DfSI](https://www.youtube.com/watch?v=ksb3KD6DfSI)

Also amusing that there are two confident opposite opinions in response.

~~~
TeMPOraL
What does this video montage you linked have to do with computer-generated
faces speaking computer-generated voices?

~~~
scarejunba
It's the same content. Different people with different voices. You could have
it sound like you have your local folks giving you info when instead it's from
some centralized location.

I could control a message without it being a single guy saying it all.

------
arunbahl
What are the implications of this on the Turing Test?

~~~
arendtio
IANAL, but AFAIK no machine living in or interacting with someone in
California can pass the Turing Test without breaking the law.

A simple question like 'Who should I vote for?' would cause the machine to
either answer with the compliant 'Please note, I am not a human being...' or
with some illegal comment about the democratic process.

Maybe that law requires an additional paragraph, stating that humans
participating in a Turing Test should also identify themselves as bots ;-)

------
repolfx
I really hate this stuff. The article starts out with a paragraph of complete
nonsense:

 _" When you ask experts how bots influence politics—that is, what
specifically these bits of computer code that purport to be human can
accomplish during an election—they will give you a list: bots can smear the
opposition through personal attacks; they can exaggerate voters’ fears and
anger by repeating short simple slogans; they can overstate popularity; they
can derail conversations and draw attention to symbolic and ultimately
meaningless ideas; they can spread false narratives."_

Since when can bots "smear the opposition through personal attacks"? Bots that
post the same stuff written by humans over and over have existed for years and
are easily filtered out by spam filters - bulk spam doesn't change people's
politics anyway so in practice such bots are always advertising commercial
products. Bots that constantly invent _new_ ways to smear the opposition don't
yet exist, not even in the lab.

This whole story is asserting that there are programs routinely running around
the internet indistinguishable from humans, making points so excellent they
successfully persuade people to switch their political affiliation, which is
simply false.

In the article the word "experts" is a hyperlink. I was very curious what kind
of bot expert might believe these fantasies. To my total lack of surprise the
link goes to a single "expert" who in fact knows nothing about AI, bots or
technology in general - they're a political flak who worked for the Obama
campaign and studied a PhD in "communication".

This sort of manipulative deception is exactly why so many people no longer
trust the media. The New Yorker runs an article that starts by asserting a
fantasy as expert-supported fact, and then cites a member of the Obama
campaign who went into social science academia (i.e. a field that
systematically 'discovers' things that are false), and who has no tech
background or indeed any evidence of their thesis whatsoever.

My experience has been that actual experts in bots are never approached for
this sort of story.

~~~
michaelt
_This whole story is asserting that there are programs routinely running
around the internet indistinguishable from humans, making points so excellent
they successfully persuade people to switch their political affiliation, which
is simply false._

The theory isn't that bots are artificial general intelligences trying to
convince individuals with clever intellectual debate. The theory is bots try
to move the Overton Window [1] - to change _what the average person thinks the
average person thinks_ - by making certain opinions/arguments appear more
prominent by repetition.

A bot doesn't need to be an AGI - or even capable of responding to replies to
its own posts. All it needs to do is keep 100 accounts in good standing with
reposts and low-effort comments, then every hundred posts or so a human
operator jumps on to make a driveby comment like "LOL give it up Mickey Mouse
is never going out of copyright" or "LOL we get it you vape" or "LOL it's the
government, what did you expect?" or "LOL like America hasn't done the same
thing but much worse" in an appropriate thread.

[1]
[https://en.wikipedia.org/wiki/Overton_window](https://en.wikipedia.org/wiki/Overton_window)

~~~
repolfx
Firstly, the theory in question is so vague it's hard to say what they are
really claiming.

But secondly and more importantly, even if what you say is true, the theory is
still total nonsense!

Where is the evidence for _any_ of this? Where are the networks of bots that
were caught spamming low-intelligence identically worded political comments,
yet somehow can't be caught by normal spam filters? Where is the testimony of
millions of people who decide how to vote by counting duplicate tweets?

This entire theory is literally a conspiracy theory. Like all conspiracy
theories, when basic questions are asked it suddenly shapeshifts and starts to
claim something different but still wrong.

I don't believe any such bots exist: can anyone show me the evidence that they
do? I mean _real_ , first-hand evidence, not assertions of dubious self-
proclaimed experts with an agenda.

I can for sure tell you that real humans are routinely labelled as "bots" by
people who believe in this conspiracy theory, and can cite evidence:

1. It's happened to me.

2. It's happened to other people:

[https://sputniknews.com/europe/201804211063771932-Smeared-Russian-Bot-UK-Man-Demolishes-Sky-News/](https://sputniknews.com/europe/201804211063771932-Smeared-Russian-Bot-UK-Man-Demolishes-Sky-News/)

 _That may have been the end of it, but then Ian took an invitation to appear
on Sky News. The news anchors began by asking the man, who appeared on video
remotely, whether he was truly a "Russian bot." "That is 100 percent a total
lie and complete fabrication by the UK government," Ian said, with a British
accent._

Here's a related case. In fairness, this time it's about "Russian trolls" not
"Russian bots", although I've noticed people tend to use the terms
interchangeably:

[https://order-order.com/2017/11/15/byline-outs-russian-troll-turns-glasgow-security-guard/](https://order-order.com/2017/11/15/byline-outs-russian-troll-turns-glasgow-security-guard/)

The Twitter account in question turned out to be a Scottish car park security
guard.

Here's a third case of real people being accused of being political bots:

[https://www.wired.com/story/how-americans-wound-up-on-twitters-list-of-russian-bots/](https://www.wired.com/story/how-americans-wound-up-on-twitters-list-of-russian-bots/)

3. Any time any detail or basic question about this theory is raised, this is
exactly what happens - someone pops up saying nobody is claiming the bots are
_genuinely_ artificially intelligent, or the claims are changed in other
subtle ways. But yes, that's exactly what this very article is claiming:

"The first bots, short for chatbots, couldn’t hide their artificiality. When
they were invented, back in the nineteen-sixties, they weren’t capable of
manipulating their users. Most bot creators worked in university labs and
didn’t conjure these programs to exploit the public. _Today’s bots have been
designed to achieve specific goals by appearing human_ and blending into the
cacophony of online voices"

The justification for this law is literally that people think AGI has been
achieved and is "manipulating" voters by spreading "false narratives". But
it's not true, is it?

~~~
lurker458
The US public consultation on net neutrality was famously skewed by bots
(posting identical messages, sometimes on behalf of dead people). On Reddit,
some topics attract bot-like behavior as well (groups of users from a non-US
IP block voting and posting in concert). Making it illegal won't stop this
entirely, but it will stifle it.

