
Google's Anti-Bullying AI Mistakes Civility for Decency - mpweiher
https://motherboard.vice.com/en_us/article/qvvv3p/googles-anti-bullying-ai-mistakes-civility-for-decency
======
wccrawford
Sounds great. I hope some forums implement this so that people can have
discussions about horrible topics in a civilized manner, instead of everyone
just shunning people who think differently, whether that person is otherwise
decent or not.

I'd love a chance to have civilized discussions with people who I think are
very, very wrong. It'd be a chance to actually convince them they're wrong,
instead of just screaming hateful things at them and walking away.

I'm not at all worried about them convincing me to start hating people, or
about them convincing other non-hateful people to start.

~~~
notheguyouthink
As an aside, it would also be a great way to throttle yourself.

E.g., I too would love to have the discussions you speak of. Despite this, I
often see an escalation in tone and fight the urge to respond in kind. I often
fail at not responding in kind. Doing so of course escalates the conversation,
something I've now contributed to, and it will likely escalate yet again. Over
and over, until all reasonability in the conversation is dead.

If I knew it wasn't allowed, and especially if it gave me some type of non-
public meter of how much further I could go before my comments got
blacklisted/etc., then people like me would have clear visual feedback on ..
well, not being such a douche.

It would be like having a state patrol car a few lanes over on the interstate.
If one is around, most people behave quite well, because we can all see what
will happen if we step out of line. Likewise, our speedometer gives us the
visual feedback to know what is out of line _(with regard to speeding, of
course)_.

It would feel very weird at first, especially if the system was not perfect,
but at its root I think it's needed for us to be able to discuss sensitive
topics, which tend to be the ones we most need to discuss.

~~~
dsfyu404ed
So you would rather sit in a traffic jam because everyone is scared they'll be
the one who gets made an example of for behaving normally?

That's what having visible cops patrolling the highway devolves into, and it's
pretty terrible.

I'm not sure if your opinions are bad or you just made a bad comparison.

~~~
nitrogen
Your last paragraph is definitely uncivil. The other two, maybe borderline
uncivil in tone, though accurate in reality.

------
Eridrus
This is what you get when non-technical people project their understanding
onto tech. It's not that Google's AI is trained to make sure everyone is
civil; it's that determining civility requires less sophisticated language
understanding/modeling. Google's results show the limitations of current
ML/NLP technology more than the data it was trained on. This is clearest when
you see that it can't handle pretty serious threats that contain no profanity,
and those definitely fall into the bucket of things that make users leave a
platform.
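The failure mode described here is easy to reproduce with a toy word-list
scorer. The sketch below is purely illustrative, not Perspective's actual
model; the profanity list and scoring rule are made up. Rude but harmless text
scores high, while a profanity-free threat scores zero.

```python
# Illustrative word-list "toxicity" scorer (NOT Perspective's model).
# The profanity list is a made-up stand-in for learned word weights.
PROFANITY = {"fuck", "shit", "idiot", "stupid"}

def naive_toxicity(comment: str) -> float:
    """Return the fraction of tokens found in the profanity list."""
    tokens = [t.strip(".,!?").lower() for t in comment.split()]
    if not tokens:
        return 0.0
    return sum(t in PROFANITY for t in tokens) / len(tokens)

# Rude but harmless: half the tokens are "profane", so it scores 0.5.
print(naive_toxicity("Fuck off, you idiot."))
# A serious threat with no profanity sails through at 0.0.
print(naive_toxicity("I know where you live and I will find you"))
```

The point of the sketch: any scorer that only looks at surface vocabulary,
however it was trained, will show exactly this gap between profanity and
actual menace.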

~~~
AndrewOMartin
Alternatively, technical people projecting their understanding onto culture.

------
IshKebab
Yeah, anyone who vaguely understands AI could see that this would be the case.
It's not some kind of conspiracy by sexists.

~~~
dvfjsdhgfv
In spite of the current confusion between these two terms, I think it's
extremely important to differentiate between AI and ML. Perspective is a
textbook example of ML in action. With AI, the authors would at least make
some effort to classify phrases based on their meaning, not just their words.
As it stands, Perspective is just a profanity filter built from some training
data; it's very far from AI.

~~~
Retric
It's an AI (artificial intelligence), but it's a long way from AGI (artificial
general intelligence) / strong AI / full AI.

Removing your hand from fire is generally an intelligent reflexive action ~=
AI. Understanding implied meaning of arbitrary text is orders of magnitude
more complex ~= AGI.

------
arethuza
There was an article in the Guardian at the weekend about people from the US
having terrible problems in the UK (particularly London), where people would
say very nasty things very politely... which is a _very_ middle-class bit of
British behaviour.

~~~
RugnirViking
Indeed, an excellent example of this is the Houses of Parliament. Personal
attacks or questioning another member's honour are completely forbidden. So in
order to insult someone, members often use elaborate circumlocutions for what
they mean, leading to such great slurs as:

Terminological inexactitude [0]

Economical with the truth

Tired and emotional

[0][https://en.wikipedia.org/wiki/Terminological_inexactitude](https://en.wikipedia.org/wiki/Terminological_inexactitude)

~~~
DanBC
"Half the Tory members opposite are crooks."

"Withdraw that remark"

"Ok, half the Tory members are not crooks"

[http://www.theweek.co.uk/amp/62692/dennis-skinner-quotes-the...](http://www.theweek.co.uk/amp/62692/dennis-skinner-quotes-the-beast-of-bolsover-in-full-flow)

------
AmIFirstToThink
>"toxic" being defined as a "rude, disrespectful, or unreasonable comment that
is likely to make you leave a discussion."

This is why whoever programs the AI has such influence over the value
judgement. Just because an AI does it doesn't mean there is no human influence
over it. In fact, an AI doing it instead of an army of humans means a handful
of people control the outcome.

That definition of "toxic" is tailored to Google's business: keep the user's
attention and eyes on Google's web properties serving ads.

What is toxic? Anything that makes a person leave, reducing ad impressions.

A person may stop commenting because they've been convinced, or because their
views have been challenged. In a normal conversation, all the tools of a
language (comedy, sarcasm, hyperbole, etc.) can be used to show someone a new
way to look at things.

Google's AI is going to put evolutionary pressure on language features.
Language features that keep a person watching ads on Google properties will be
selected for propagation.

------
baybal2
Well, it reflects the one-dimensionality of SV hipsterish big-co pop culture:
it is "I like it" or "I don't like it" and nothing in between or off-axis.

The problem is not in how it scores, but in the very fact that the people who
built it are normally incapable of giving a given sentiment more than one
thought.

------
gwbas1c
> "while a 100+ subthread about 'was slavery really that bad?'"

(Edit) The article suggests that having such a discussion at all implies that
someone wants to promote slavery.

I would think that such a discussion would be very productive in reminding
those of us who didn't live through slavery why it was that bad.

If we can't discuss "was slavery really that bad," there will be people who go
around genuinely believing that slavery isn't a bad thing. That's why these
kinds of discussions are critically important.

------
Spidler
Machine learning, aka. "Mining your data for bias".

------
Broken_Hippo
This reminds me of the saying, _No one means to write "ducking" in a text_.
For some reason, using the word fuck is automatically equated with rude, toxic
speech, when in reality the word has multiple uses and sets a variety of
tones, both good and bad.

I understand that the AI is merely doing what it was programmed to do and that
there are limitations, but it still leads me to the conclusion that this is
really more of a morality filter than something to weed out toxic speech, as
evidenced by the hard line taken against folks telling an aggressor to "fuck
off".

And for something interesting:

_You've been eating paint chips and chasing them with leaded water, haven't
you?_ only ranks in at 25% toxic.

~~~
freeflight
As a non-native English speaker who learned much of the language through
absorbing pop culture, I feel especially guilty of this.

I tend to use words considered "rude" by many native English speakers merely
for flavor when I'm passionate about a certain topic or argument.

To be fair, I do the same in my native language; I tend to use quite colorful
language sometimes, so it's probably more of a personality thing.

~~~
Broken_Hippo
It isn't just you, to be sure. I'm a native English speaker, but I live in
Norway. My Norwegian isn't strong enough to converse at length yet (I'm
getting there), so most of my in-person communication is with non-native
English speakers, including my spouse.

Some phrases simply translate in a rude way: Norwegians seem to have a liking
for the word "fuck", which has made its way into Norwegian slang as well. It
can come off quite rude back home. For folks I meet who learned mostly through
television, it can be even worse, because shows don't always portray dialog in
a natural way.

~~~
freeflight
> For folks I meet that learn mostly through television, it can get even worse
> because the shows don't always portray dialog in a natural way.

Those folks should start watching some fucking proper television ;)

Good point about "fuck" creeping into the slang of a lot of other languages,
I've witnessed the same very often but never really noticed until you pointed
it out like that.

Curse words seem to have a certain attraction in that regard; I probably know
the equivalent of "fuck" in 4-5 different languages while understanding
literally no other phrases in those languages.

------
jszymborski
I'm cognisant of the fact that it's very easy for "conversations" on the
internet to devolve into ugly, unproductive masses of human mental refuse at a
much more accelerated pace than offline, but we also have to be wary of
letting machines filter what we can or can't say based on "decency" or
"toxicity".

It's fine if the system reminds you that you are being impolite or toxic, but
filtering communication on the internet in this manner can't be the solution.

'Do you know that Newspeak is the only language in the world whose vocabulary
gets smaller every year?'

------
ffa500
Thoughtful SOA. Morals and ethics are baked into the code.

~~~
fche
... complete with lingo ("toxic") that is vague enough that it cannot be
defined or disputed. Convenient!

~~~
praptak
Yet it is defined right in the article:

..."toxic" being defined as a "rude, disrespectful, or unreasonable comment
that is likely to make you leave a discussion."

~~~
remarkEon
Make who leave a discussion?

~~~
Powerofmene
The reader who finds the statements to be toxic.

------
crmd
I'm surprised there isn't more discussion about hype cycles and the poor
efficacy of "AI" here. Sentiment analysis (finding the uncivil words) is a
trivial, solved problem in 2017, and is many, many orders of magnitude less
computationally complex than the semantic-understanding problem that would
have to be solved in order for robots to be useful here.

------
eveningcoffee
So Google has turned itself into an all-censoring AI apparatus?

~~~
hellbanner
[https://news.ycombinator.com/item?id=14998429](https://news.ycombinator.com/item?id=14998429) - YouTube "AI" censors war crime evidence

[https://news.ycombinator.com/item?id=14975338](https://news.ycombinator.com/item?id=14975338) - Google censoring World Socialist Workers

------
Hyperbolic
Yeah, this is just clickbait. I haven't looked at the model they deployed, but
it's likely just not sophisticated enough to represent aspects like tone and
frame of reference. It might just be a great word-based language model. That
doesn't mean it's intentionally biasing against civility.

------
tedunangst
Pretty fucking judgmental about occupation.

Larry sells donkeys: 87%

Larry sells llamas: 16%

Larry sells beef: 17%

Larry sells bananas: 45%
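Scores like these look like word-association artifacts: if "donkey" shows up
mostly in insults in the labeled training data, any sentence containing it
inherits the score. The sketch below uses a tiny hypothetical corpus (nothing
here is Perspective's data or model) to show how purely per-word statistics
brand an innocuous word as toxic.

```python
# Toy sketch of per-word "toxicity" learned from labeled examples.
# The two corpora below are invented for illustration only.
toxic_examples = ["you stupid donkey", "what a donkey move"]
clean_examples = ["llamas are lovely animals", "a recipe for beef stew"]

def word_toxicity(word: str) -> float:
    """Fraction of the word's occurrences that fall in toxic examples."""
    t = sum(s.split().count(word) for s in toxic_examples)
    c = sum(s.split().count(word) for s in clean_examples)
    return t / (t + c) if t + c else 0.5  # unseen words score neutral

print(word_toxicity("donkey"))   # 1.0: seen only in toxic examples
print(word_toxicity("llamas"))   # 0.0: seen only in clean examples
print(word_toxicity("bananas"))  # 0.5: unseen, so neutral
```

"Larry sells donkeys" then scores high not because selling donkeys is rude,
but because of the company the word keeps in the training set.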

~~~
baybal2
The idea of scoring sentiment on a linear scale is kinda pointless in itself;
people usually come to realize that by the time they finish primary school.

------
DarkKomunalec
Sounds like the author wants an AI that only allows 'good' ideas to be
discussed.

~~~
quadrangle
I wouldn't jump to that. I'd hope the author wants human wisdom used more and
is wary of AI in general, but I'm not sure.

