
Alex Stamos: Asking tech companies to police hate speech is “a dangerous path” - mkm416
https://www.technologyreview.com/s/612332/facebooks-ex-security-boss-asking-big-tech-to-police-hate-speech-is-a-dangerous-path/
======
snowwrestler
"Tech companies" is just way too broad a designation to use here. No one is
seriously asking Apple to police hate speech in iMessage or Facetime, for
instance, or Verizon to police hate speech in SMS. No one expects AT&T to
police hate speech in a phone call.

What people are concerned about are the newsfeeds and timelines, specifically.
Companies like Facebook and Twitter and YouTube love to pretend that their
newsfeed/timeline products are just like chat apps or phone calls--neutral
messaging platforms.

They're not. And the specific reason they are not, is the algorithmic timeline
and content suggestions.

It's silly to worry about giving these products "the power to determine what
people can—and can’t—say online." They've already seized it for themselves--by
deciding for me which content will show up in my newsfeed/timeline/suggested
list. They decide which content gets promoted to me.

Yes they use an algorithm to do so, instead of human decisions. But guess who
built the algorithm?

Companies that run algorithmic newsfeeds and timelines need to own their role
as a publisher and a gatekeeper of content.

Instead of pretending they don't make choices, they should be introspective
and thoughtful about the criteria they are using to make those choices.
"Engagement" is not neutral criteria because emotions are not symmetrical.
Engagement is higher on topics of fear, anger, rage, violence. That's down to
our evolution; that's down to the amygdala.

So if you build a publishing system designed solely to maximize engagement,
it's going to become a system that preferentially serves content that feeds
negative emotions. There are articles and case studies where a person starts
with a fresh account and sees what kind of content gets pushed to them;
inevitably they get horrible conspiracy theories and fear-oriented content.

Making decisions about what content your audience sees is an act of
publishing, even if it's executed via complex algorithm. The companies doing
this need to accept their responsibility for what they decide to serve and
promote.

~~~
padolsey
> They're not. And the specific reason they are not, is the algorithmic
> timeline and content suggestions.

I think the more pertinent reason here is that these platforms have broadcast
capability (immediate communication with many people) as opposed to p2p
capability (traditional SMS or phone calls). Even if Twitter were strictly
chronological, without any algorithmic mutation, we'd still presumably be
insisting they police content, right? I agree with your conclusion that
they're publishers, but to me, what makes a publisher a publisher is not
content curation or mutation, but is simply broadcast capability. And so our
drive to regulate follows quite naturally from similar drives to regulate the
press and media.

~~~
Fellshard
Not really.

One involves a neutral role, in which subscribed feeds are delivered to users
without modification or filtering.

The other involves an active role on the part of the platform for any number
of reasons: increased engagement, removal of voices that may cause perceived
damage or lack of trust in the platform itself, or other, more ideological
reasons.

I've noticed an intentional avoidance of distinction, lately, between active
and passive behavior on a number of fronts, from sexual activity, to medical
advice and intervention, to social media publishing. It's a pretty crucial
component in ethical analysis that I suspect is being intentionally blurred.

~~~
padolsey
The public wouldn't buy the 'neutral' aspect, as every mechanism a platform
provides in some way biases the type of content that is broadcast. Twitter's
RT feature, which requires no curation or modification of actual content by
Twitter, still biases content toward that which is, perhaps, most divisive or
simplistic. Broadcast technology, even without algorithmic bells and whistles,
is already a biased technology. I think we might in fact agree, but to me
there is no crucial ethical difference between a chronological Twitter and an
algorithmically mutated one, for in either case the very platform itself (its
existence, its design) pre-biases the type of content that will flow through
it, and thus we'd end up with content that would make us consider policing it.

So, in your words, I would say, it is somehow fundamentally impossible for a
technology to be "passive".

~~~
leereeves
Retweeting doesn't bias content; it's rather the opposite: making it easy for
people to share content they like removes a longstanding bias. The old models
of broadcast media selected content to fit the biases of a few powerful media
executives.

You may find the choices of the average person "divisive or simplistic" but
the Retweet button doesn't dictate their choices.

"Policing" content, however, is all about dictating people's choices,
motivated by the thought that the "police" know better than less powerful
people, and reestablishing the biased filter controlled by the powerful
(new-)media executives.

------
mundo
This strikes me as a timely and important sentiment.

When we demand that Twitter ban anti-semitic tweets, or that Cloudflare block
white supremacist websites, or that Youtube deplatform Alex Jones, we are
taking the power to limit speech (which the founders felt was too important to
be wielded by the government) and handing it to middle managers at software
companies. The de jure rule is "Freedom of Speech shall not be infringed" but
the de facto rule is "Don't say anything that would upset the advertisers."

This seems like a Bad Idea (tm) but until/unless a decentralized
Mastodon/Scuttlebutt style platform gets traction, I don't know what the
solution is. It's a natural result of relying on private apps as a primary
method of communication.

~~~
charlesism

        > we are taking the power to limit speech (which the
        > founders felt was too important to be wielded by the 
        > government) 
    

Someone spray-paints a swastika on your car. Do you think the founders would
mind if you painted it over?

~~~
manfredo
It'd be more like Ford trying to ban people from putting Hillary or Trump
bumper stickers on their cars. In your scenario, an individual had their
property vandalized. That is not comparable to platforms censoring certain
views.

~~~
charlesism

        > That is not comparable to platforms 
        > censoring certain views.
    

Historically, it has been. If someone had sent a letter to The Pennsylvania
Chronicle, containing a recipe for baking a turd pie, I don't think Ben
Franklin would have felt the need to print it.

Facebook, Twitter, Google... they're the ones footing the bill to host their
users' content.

~~~
manfredo
The Pennsylvania Chronicle isn't a platform. It's a publisher. Readers'
letters getting published is the exception, not the norm.

The ideas discussed here would be more like a telecoms provider specifically
refusing to do business with someone because they disagree with their
politics.

~~~
mundo
What's the difference? AFAICT the idea that when a website gets big enough it
becomes de facto infrastructure and gets governed by different rules is pure
imagination.

~~~
basch
There are rules that treat telecoms differently precisely because there is
opportunity for market failure.

The argument is that some software companies have crossed into becoming
telecom-like entities. A market failure exists, where consumers may need
protecting.

Obviously, current laws don't treat Facebook, Google, or Microsoft that way.

Do we feel the same way about Gmail/Outlook starting to censor emails that
Google/Microsoft don't approve of?

------
menacingly
Putting aside that it's a slippery and malleable label to apply to undesirable
speech, there is an elitism hiding in these ban-hate-speech arguments.

The core assumption is that while _I_ am able to see these vile ideas for the
lies they are, the unsophisticated masses must not be allowed to hear them,
lest they fall prey.

This is problematic in ways that used to be obvious to people in free
societies, but for some reason seems lost now.

~~~
mhneu
The issue is that the wealthy and also foreign adversaries are exploiting the
algorithms to amplify speech that serves their interests. That typically does
not serve the interests of average people.

The issue is how to avoid exploitation and manipulation.

When the KKK marched several decades ago, it got coverage in newspapers and
media proportional to its influence in society. Today, the wealthy and foreign
opponents can weaponize hate speech like this to fan flames of division for
their own purposes. That is the problem.

~~~
menacingly
One thing I commonly see: I don't think left-leaning people right now realize
how similar they sound to the extreme fringes of the right.

Your first two paragraphs would be a huge hit on /pol right up until you got
to the point of resolving which adversaries and interests you're talking
about.

I don't know where it takes us when an authoritarian, silencing approach is
what both sides agree on, and they just haggle over where to point it.

------
olivermarks
Essence of article IMO "If democratic countries make tech firms impose limits
on free speech, so will autocratic ones"

Free speech is what defines a democratic country.

Terms like 'Hate Speech', 'Fake News' are buzz phrase distractions that get in
the way of the core of this reality. We already have a legal system in place
that defines libel, threats etc. We don't need a new layer of corporate
jurisdiction over our ability to speak online or monitoring what we can or
can't say

------
mrkstu
Cockroaches thrive in the dark corners. Sunlight is antiseptic. And forcing
ideas/speech into those dark corners doesn't keep them from growing; it merely
allows them to grow unobserved and un-countered.

Ideas are the only counter to other ideas, and how we communicate those ideas
is via speech. Suppression only invites martyrdom on behalf of those
suppressed, increasing their credibility.

~~~
OmarIsmail
There is evidence that deplatforming works
([https://motherboard.vice.com/en_us/article/bjbp9d/do-social-media-bans-work](https://motherboard.vice.com/en_us/article/bjbp9d/do-social-media-bans-work))

~~~
gdix
> though it may have some unintended consequences that have not been fully
> understood yet.

I think no one disagrees that it "works" insofar as it stops the bad person
from getting their bad ideas out there. Opponents of deplatforming generally
argue that the long-term reaction to the deplatforming is worse than the
problem the bad person's ideas were causing. Better to counter the bad ideas
with good ideas to do long term good.

~~~
erik_seaberg
Even apart from the Streisand effect, conspiring to suppress crappy ideas
gives them undeserved credibility. " _They_ didn't want you to hear this."

~~~
albedoa
That's a reasonable concern and may even be true for some instances, but in
the case of Milo Yiannopoulos for example, he practically just went away. Milo
himself says he spent all of his savings and lost his friends. Even his most
ardent fans stopped speaking out about him.

Nobody is listening to his ideas enough to give them undeserved credibility.

------
ilovecaching
There are still too many unsolved philosophical questions here. What is hate
speech? What should the limits of free speech be? How do we contend with the
multitude of religious, legal, and cultural differences and anomalies when
policing news and thought across the world? How do we react to people
weaponizing the policing of hate speech to remove free speech?

I have yet to hear compelling answers to this problem, and I am not that
optimistic that it can be solved in the next few decades. I do agree that
trust busting is the wrong approach. At least the problem is currently
centralized.

~~~
atmosx
Speech is either free or not free, there is no middle path.

If you want free speech, you accept the consequences. If you want “regulated”
speech, there are consequences.

That’s it. I would argue that the level of satire a society can cope with is
directly proportional to the quality of democracy the society has.

~~~
dwaltrip
Ignoring nuance doesn't make it disappear. Every developed country on earth
regulates speech to varying degrees.

------
lsh123
I don’t understand why someone needs to censor anything in the first place. If
a user finds posts of another user offensive, etc, then the first user can
unsubscribe/unfollow/block/... the second user. If everyone thinks the same
way then the offensive user will just speak with her/himself.

~~~
twinkletwinkle
That's how you end up with echo chambers and parallel online universes.

~~~
lsh123
I think censorship is the way to create echo chambers in a much faster way

------
neves
Well, these unnamed "tech companies" are responsible for the proliferation of
absurd lies that will elect a far-right authoritarian candidate in my country,
Brazil. The spread of these lies happens with the support of a well-financed
organization.

I always thought that the Internet would be a democratic platform that would
improve the debate in society. Maybe we would go back to a democracy without
intermediaries.

I was wrong.

We are entering a dystopian world where the profits of a handful of companies
are more important than the rest of society.

~~~
tedunangst
Modest proposal: The government should appoint a Department of Truth to review
and approve all social media posts in a country to eliminate election
misinformation.

~~~
ethanwillis
What's really depressing is that I can't tell if you're joking or not.

~~~
Arubis
(s)he's joking, but the only reason I'm sure is the use of "modest proposal",
not the actual content.

------
detcader
161 comments and no one has mentioned Glenn Greenwald and the Intercept's
prolific coverage on this issue? I'll do a quick websearch and fix that.

 _Should Twitter, Facebook and Google Executives be the Arbiters of What We
See and Read?_ August 21 2014 -
[https://theintercept.com/2014/08/21/twitter-facebook-executives-arbiters-see-read/](https://theintercept.com/2014/08/21/twitter-facebook-executives-arbiters-see-read/)

 _Facebook Is Collaborating With the Israeli Government to Determine What
Should Be Censored_ September 12 2016 -
[https://theintercept.com/2016/09/12/facebook-is-collaborating-with-the-israeli-government-to-determine-what-should-be-censored/](https://theintercept.com/2016/09/12/facebook-is-collaborating-with-the-israeli-government-to-determine-what-should-be-censored/)

Then: _Facebook Says It Is Deleting Accounts at the Direction of the U.S. and
Israeli Governments_ December 30 2017 -
[https://theintercept.com/2017/12/30/facebook-says-it-is-deleting-accounts-at-the-direction-of-the-u-s-and-israeli-governments/](https://theintercept.com/2017/12/30/facebook-says-it-is-deleting-accounts-at-the-direction-of-the-u-s-and-israeli-governments/)

 _"hate speech" from:ggreenwald_ on Twitter -
[https://twitter.com/search?q=%22hate%20speech%22%20from%3Aggreenwald](https://twitter.com/search?q=%22hate%20speech%22%20from%3Aggreenwald)

------
the_snooze
Tech companies are using algorithms to prioritize the messages we see, which
makes them incredibly valuable as advertising platforms. It seems like Mr.
Stamos wants these companies to have all the rewards and none of the
complicated responsibilities to match it. If they don't want that
responsibility, then they need to get out of the business of sorting and
recommending. Let them be like Craigslist.

------
blueboo
The choice is: police hate speech or promote hate speech.

Observation: promoting is cheaper (even profitable). But they can promote it
with plausible deniability.

Which is a more "dangerous" path? And to whom? Society? Shareholders?

~~~
chrisco255
Speech isn't "dangerous". You know what you do about people that say stupid
things: you call them out on it. The answer to hate speech is more speech.

~~~
natestemen
speech is dangerous. hate speech promotes and incites violence.

~~~
jshevek
We already have laws that deal with explicitly inciting people to violence.

So called "hate speech" generally is not an explicit endorsement of violence.

------
foobarbazetc
Counterpoint: no it isn’t. US law isn’t global anyway, and companies already
do it.

------
tempodox
Policing “hate speech” will just create niches where you won't be censored.
It's only a question of time until someone comes up with an idea to monetise
that. Imagine a platform where “moderate speech” will be banned because it's
against the house rules...

------
madrox
I've been social on the internet since the early 90s, and it's been a
wonderful place for most of it. Before it became so egalitarian, the people
adept at socializing on it were pretty left leaning or downright libertarian.
Now everyone is in on it, so we're getting confronted by parts of society we
could pretend didn't exist 10 years ago. I don't know why I didn't see it
coming.

There's a can/should debate hidden in here. Tech companies totally _can_
police hate speech (or any kind of speech) on their platforms, thanks to handy
things like a ToS. Whether they _should_ is a cultural question about what
kind of a society we want to have. If history has taught me anything, it's
that the _can_ side of the debate wins in the long run.

~~~
Arubis
> Now everyone is in on it, so we're getting confronted by parts of society we
> could pretend didn't exist 10 years ago. I don't know why I didn't see it
> coming.

Did you have the same biases I did? At that time and age (my teens, mostly), I
just assumed people with different values than mine were ignorant, and so
naturally they wouldn't be capable of using advanced technology.

I'm not proud of that, but there's still a _lot_ of that sentiment kicking
around, including in the form that giving people additional access to
technology and knowledge will educate the masses into the "correct" set of
values held by whoever's pushing for greater technological adoption.

~~~
Nasrudith
There was some truth to that assumption when the internet was more niche and
the old ecosystems were in place - it required a certain degree of curiosity
and willingness to learn and explore when there were established "reputable"
ways.

Greater access does help /if they are willing to use it in self improving
ways/ in the first place. If they just use it for tabloids and gossip it won't
be a library to them but tabloids and gossip.

------
IBM
Alex Stamos doesn't realize that the political tides have shifted. The
technolibertarianism that has been prevalent in Silicon Valley since at least
the 1990s is on its way out. Governments around the world are increasingly
asserting their sovereignty, and that's not going to change. The internet is
not a wild west where a bunch of tech people are free to do whatever they
want, ignoring all the consequences and negative externalities they create.

It's a coincidence that John Perry Barlow died at the height of all this, but
I think it's extremely symbolic that governments are asserting their power
just as technolibertarianism's radical cleric passed away.

------
OmarIsmail
This is already going down the path of "what is hate speech". Here is what I
think is a pretty simple definition:

Speech that, explicitly or by direct implication, dehumanizes anyone.

Then the arguments become pretty simple: does a statement dehumanize someone?
Does it indicate that they are any less human than another? That's a much
easier discussion to have.

~~~
ryanmonroe
I feel that this comment dehumanized me. Therefore, you must remove it. See
the issue?

~~~
OmarIsmail
How do you feel dehumanized? Let's have that discussion.

(see the answer?)

~~~
thetrumanshow
With the appropriately-tuned hate-filter you wouldn't get to defend your idea
and have a discussion, you would just be de-platformed.

