
Google announces workshops to tackle the spread of hate speech and fake news - rbanffy
https://arstechnica.co.uk/business/2017/04/google-fake-news-hate-speech-workshops/
======
Aqueous
A little uncomfortable about the coupling of hate speech and fake news. Hate
speech, while odious, is an unfortunate side-effect of free speech. And so we
already have a mechanism for regulating it: social norms. Even at its worst,
hate speech is the temporary symptom of some other, more insidious wrong, and
the temporary injury each instance of hate speech produces just does not
justify eating away at a core value of liberal democracy.

Fiction that purports to be news poses a far greater threat to democratic
institutions, especially when it is targeted (and therefore invisible to
normal editorial controls). A mis- or dis-informed populace empowers people
who worsen the core systemic problems that hate speech is a symptom of. In
this case, too, there is a historic, market-based mechanism for combating it:
editors. But unlike with hate speech, where Internet communities can enforce
norms that determine inclusion, the Internet has no built-in editorial
controls.

~~~
burkaman
I don't understand the line you're drawing. Isn't fake news also an
unfortunate side-effect of free speech? Aren't education and public reputation
existing mechanisms for regulating it?

Your argument that fake news is more dangerous makes sense, but it's not hard
to argue the other way. Hate speech causes suicides, while fake news only
causes temporary misinformation. Hate speech makes people withdraw from
society and public life, driving them out of democratic institutions. Fake
news usually just lets people hear what they want to hear, confirming beliefs
they already had. The "victims" of fake news are actively seeking it out,
while the victims of hate speech are being attacked.

Maybe that's not completely convincing, but hopefully you see what I mean. We
already have lots of controls on free speech, are there any intrinsic reasons
you think hate speech and fake news are categorically different?

~~~
Aqueous
I'm simply saying that while both are cases of free speech, fake news is
allowed to spread out of control on the Internet due to a lack of editorial
controls while hate speech is largely constrained by community norms.

------
coldtea
Translation: Google, the closest thing to a public global utility on the
internet, becomes an arbiter of what's "true" and what's "hate speech" --
according to the prevailing interests and standards of its owners and of its
country.

~~~
Angostura
Hate-speech can be tricky. There are edge cases where what is "true" can be
hard to determine. But in a large proportion of cases veracity is easy to
determine.

Are you suggesting that we just throw up our hands and say "there's really no
point trying to determine what is a fact these days"?

~~~
LyndsySimon
> Hate-speech can be tricky.

It seems pretty clear to me. Hate speech is free speech.

~~~
mratzloff
Sure, and a private company isn't obligated to present it.

~~~
dmerfield
Nor is a government. But the founders thought the _principle_ of free-speech
was a good one and so we have the 1st amendment.

~~~
splawn
With the 1st amendment, the _government_ can never censor (with a few
exceptions)... however _companies_ can censor whatever they want for whatever
reason they want, perhaps ironically, as an expression of freedom of speech.

~~~
dmerfield
I'm aware. My point was that the founders of the US Government chose to uphold
the _principle_ of free speech. The founders of companies like Google,
Facebook and Twitter chose not to.

~~~
splawn
I see. Your point is that the _principle_ of free speech is something
different than the 1st amendment and the authors of the amendment messed up
and allowed a loophole.

~~~
apostacy
I think that if a company has enough of a monopoly on information that they
become a de-facto governing agency, that they should be forced to uphold the
principles of the 1st amendment.

Facebook should be classified as something like a common carrier.

~~~
splawn
I understand now, and it's hard not to agree with that perspective.

On the other hand, because of the fundamental relationship between a free
press and the health of a democracy, shouldn't there be some kind of criteria
for validity, the same way health claims in advertising are held to one, for
example? I understand that some things are subjective and impossible to apply
criteria to, but what about cases where people are _knowingly_ publishing
demonstrable falsehoods to mislead masses of people? That seems like a major
problem. Btw, I'm not sure what my opinion is on this... I'm just playing
devil's advocate, I guess.

EDIT: s/foundational/fundamental. I'm having a hard time with words today. :/

~~~
apostacy
I honestly don't know. I think in general though, we should try to make
information be as available as possible. Most of the justification for
regulations about communications content comes from a time when most people
had only three television channels, and a few newspapers.

Also, China has used exactly the same justification for censoring their
information. And they're not wrong. People do knowingly spread misinformation
and harm others.

I think that it's a really tempting problem to want to solve. If we could just
find some heuristic to know for certain that something was "fake", we'd have
it all figured out.

------
samdoidge
I'm more worried about this than letting anyone publish anything, and letting
people decide for themselves.

~~~
cooper12
Huh? The workshops are about promoting awareness of the issue and the article
says they'll "teach teens how to deal with offensive speech, flag
inappropriate content, and moderate comments". The European Commission issue
was specifically about "illegal content".

~~~
CrowCrowCrow
I think GP might mean that as offense is completely subjective, it's better to
let people define "hate speech" etc for themselves rather than having an
agenda being pushed by a government or corporate entity.

~~~
cooper12
That's your interpretation of what the workshop actually is. Read what I
quoted again; it focuses on the technical side of things: how to flag, how to
moderate comments, and how to deal with hate speech. Talking about an issue
doesn't immediately mean you're pushing an agenda. The only one assuming
people can't think for themselves is you.

~~~
ManFromUranus
Quite the opposite: he is proposing people decide for themselves, absent any
"workshops". You are the one who thinks people will benefit from Google's
instruction on how to decide what is hate speech. Frankly, I think anyone
should be able to say anything. If you decide that it's hateful content, then
you can stop reading it at that point.

------
pvnick
Quid est veritas? Not sure how I feel about large corporate entities taking
the lead on what is and what is not true or acceptable speech. The internet
and mass media have oversaturated us with information. Rather than make us
smarter, it seems to have filled our minds with garbage. I don't know what the
answer is (I suspect the first step is unplugging the machine and opening a
book), but for some reason - and I can't quite put my finger on it - it seems
unwise to hand the responsibility of educating the masses on absolute truth to
organizations with shareholders. Especially Silicon Valley organizations that
tend to be comprised of employees with activist mindsets and a monolithic
left-wing ideology. Who fact-checks the fact-checkers?

------
FoeNyx
Social networks should also tackle how their recommendation algorithm might be
influencing the "information bubble" of citizens (or at least be gamed in
order to do so).

For example, in France during an election campaign, official media must
strive to share "speaking time" equally among candidates. And while social
networks with subsidiaries in France play an increasingly prominent role in
the information bubble of citizens, they currently ignore this regulation in
their automated recommendations.

I experienced it myself a few days ago: while looking for videos about the
French candidates on YouTube, I was persistently recommended videos for one
specific candidate by YouTube's recommendation algorithm.

I was apparently not the only one to notice this, as some people even
conducted automated tests [1]:

"Mélenchon, Le Pen and Asselineau make up nearly 60 percent of the candidates
that are mentioned in the titles of the most recommended videos. It is
particularly surprising for Asselineau to be recommended so much because he
received less than 1 percent of voter preferences in the latest polls.

Perhaps predictably, each of these three candidates is the top recommended
candidate when searching for their name. However, we have found that starting
from a search of any other candidate, these three candidates are still the
most recommended"

These three candidates are highly active on social networks, but are also
pro-Frexit and often categorized as populist.

[1]
[http://algotransparency.org/en/presse.html](http://algotransparency.org/en/presse.html)
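The tallying step that [1] describes (counting how often each candidate's name appears in the titles of the most-recommended videos) can be sketched as below. This is only an illustration of the counting idea, not algotransparency's actual methodology; the function name, the titles, and the candidate list are all invented for the example.

```python
from collections import Counter

def candidate_mentions(titles, candidates):
    """Count how many video titles mention each candidate (case-insensitive)."""
    counts = Counter()
    for title in titles:
        lowered = title.lower()
        for name in candidates:
            if name.lower() in lowered:
                counts[name] += 1
    return counts

# Made-up sample data for illustration only.
titles = [
    "Melenchon speaks about Europe",
    "Le Pen interview highlights",
    "Asselineau explains Frexit",
    "Melenchon rally, full video",
]
candidates = ["Melenchon", "Le Pen", "Asselineau", "Macron"]
print(candidate_mentions(titles, candidates))
```

A real audit would feed in titles scraped from the recommendation sidebar after searching for each candidate, then compare the resulting distributions; `Counter` conveniently returns 0 for candidates who are never mentioned.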

------
ksk
This is scarier than the so-called fake news itself. An advertising company
should be looked upon with the highest level of skepticism, much like a
politician.

~~~
shallot_router
Especially when that company controls not only the ads that make news websites
sustainable, but also what headlines people see, and where, when they search
news-related topics.

My 2 cents: drop the term "fake news" entirely and switch to a less vague
label like "hoax mills" or something. Switching the mental frame to some
clearly shady, unknown "news" website/company that pumps out nothing but
hoaxes or highly exaggerated stories will make discussions about how to deal
with it less contentious. "Stop the proliferation and revenue sources of hoax
mills" reads far better than "tackle fake news".

This is a place Google rightly can and should responsibly step in (just like
how they try to keep blatant scams and malware/phishing sites from first page
search results). People just need to all be on the same page about what,
specifically, Google is trying to stop.

------
sharun
Most of the underdeveloped world can't and doesn't get on Twitter to make
noise about things they are seeing on YouTube that they might find offensive.
The reactions and consequences only rarely make it into the press. I am
quite sure there is serious social damage happening on a scale much larger
than anyone will admit, as most of these people have no access to help or
advice of any form.

~~~
forgotpwtomain
Most of the under-developed world walks by people dying of hunger and curable
diseases on a daily basis. You really think offensive YouTube content is
causing serious social damage?

~~~
sharun
Yes. Misguided, ignorant people are surrounded by misguided, ignorant people.
If they are supplied bad info on YouTube or on a social network, there is no
corrective mechanism. It's not complicated to understand. You don't have to
believe me; the consequences will just keep piling up.

~~~
forgotpwtomain
> Misguided ignorant people are surrounded by misguided ignorant people. If
> they are supplied bad info on youtube or on a social network, there is no
> corrective mechanism.

Let's say I don't dispute your first claim (though that also has no basis).
How many mass atrocities or deaths have historically been tied to info on
YouTube or social networks? (Please provide _citations_.) I'm willing to bet
BTC that US drone strikes have killed significantly more (but it's okay, we
have a corrective mechanism!). Further, what about all the mass atrocities
prior to social media -- what was the corrective action there?

Would you have someone watch over and advise these people? The People's
Party? The Ministry of Truth? The Google Department of Facts? The US
Government? The very same government that, e.g., tacitly supported genocide
in Indonesia? [0]

Actually, a great number (if not the majority) of crimes against humanity
have been carried out precisely by _organizations_ and _people_ with the kind
of _corrective ideology_ you are evangelizing here.

I'm really quite surprised to be having this discussion, honestly. I don't
know if it's some mental culture of the Silicon Valley bubble, or a complete
lack of historical knowledge, or what...

[0]
[https://en.wikipedia.org/wiki/Indonesian_mass_killings_of_19...](https://en.wikipedia.org/wiki/Indonesian_mass_killings_of_1965%E2%80%931966)

~~~
sharun
How do you know what my corrective ideology is?

Without even knowing that or expressing any kind of curiosity about it, just
look at the assumptions you are making about me. Social media is just
conditioning us to keep reacting to one another and it doesn't have to be this
way.

I haven't made too many comments on HN because this is the kind of reaction I
get all the time. If you are interested, here's my fix:
[https://news.ycombinator.com/item?id=14125904](https://news.ycombinator.com/item?id=14125904)

------
harryf
What's going on with the title of this article? It does not reflect the body
accurately.

~~~
socket0
Indeed, the art of badly written headlines apparently takes up a whole chapter
in the Ars Technica style guide. This should have been: "Google pushes fake
news and hate-speech workshops (and YouTube) on UK teens". In addition, even
the most basic grammar checking tool should have picked this up: "The ads was
later removed."

------
skj
That is definitely horrible, and I think it should have been given more
attention.

However, the main difference I see with regard to news coverage is that the
first example happened in the USA.

If the latter had happened in the USA, it would have received astounding
coverage. Even if you believe that liberal media would have been quiet, Fox
would have had a field day.

That they didn't is evidence of the locality argument.

~~~
kakarot
You replied to the wrong OP :)

