
How social-media platforms dispense justice - privong
https://www.economist.com/business/2018/09/08/how-social-media-platforms-dispense-justice
======
loup-vaillant
> _Internet firms in America are shielded from legal responsibility for
> content posted on their services. Section 230 of the Communications Decency
> Act of 1996 treats them as intermediaries, not publishers—to protect them
> from legal jeopardy._

> _When the online industry was limited to young, vulnerable startups this
> approach was reasonable._

That's the wrong way to look at it. The size of the company should not matter
here. It's how they operate that should determine which regulation applies to
them. There are only two reasonable alternatives:

Either they are _intermediaries_, in which case they should have the duty to
indiscriminately intermediate whatever they are asked to intermediate. This
means no centralised moderation. Maybe allow moderation at the sub-community
level (as Reddit does with its subforums), but no centralised censorship,
except _maybe_ upon reception of a warrant.

Or they are _editors_, in which case they should have full legal
responsibility for the content, and suffer the full legal consequences of
whatever illegal speech they allow.

I believe such a clean dichotomy is the only way to ensure free speech in the
face of such centralised services. 'Cause I think the likes of Facebook,
YouTube, Twitter… would rather choose the "intermediary" path than risk taking
legal responsibility for their enormous user base.

~~~
XorNot
This implies no suggestion algorithms and news feeds either. If these sites
are pure intermediaries, then running machine-driven editorializing by
controlling and suggesting the content people see is not a neutral act - those
algorithms are biased and they choose how they act.

Imagine a phone company which selectively didn't send calls to you because
its automated checkers determined you weren't interested. If you got nothing
but telemarketing, well, now they're no longer a pure transit, are they?

Social media owners are already editors and they're trying to shirk their
responsibility.

~~~
loup-vaillant
> _This implies no suggestions algorithms and news feeds either._

Yep, it does.

~~~
caseysoftware
Frankly, I'm surprised this doesn't get more attention.

At present, Twitter et al. are in an incredibly weird position. They claim to
be "public spaces", which is the foundation of the lawsuit against Trump for
blocking people _and_ of treating company announcements on the platform as
SEC-compliant. Further, they claim to be "neutral" but apply "quality filters"
and flag bad (but not illegal!) content. But then Twitter revokes blue checks
for poor behavior and they all ban people for bad (but not illegal!) behavior,
so they can hardly claim they aren't acting as editors.

So they're one particular thing to a particular audience when they need to be,
but something else when it's more convenient.

I don't see how the reasoning fits together.

------
HONEST_ANNIE
The root incentive to moderate is to keep the government from intervening and
to keep people from turning against the platform. Governments start
interfering and people turn against the platforms when there is a moral
panic[1].

In democratic societies the best way for the social-media platforms to achieve
their goal is to muffle moral panics directly. That's a different goal from the
moderation that governments and the public want from them (moderation of
criminal content or hidden-actor propaganda).

I predict that attention will be directed towards throttling sudden spikes in
moral-panic-inducing content. Legitimate moderation will be just a small part
of the moderation effort; most of the energy will go into protecting the
sensibilities of "proper" society.

There will be limiters on the reproduction number of any kind of controversial
thinking. It will simply be easier and cheaper.
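
(To make "limiters on the reproduction number" concrete, here is a minimal,
purely hypothetical sketch of how such a throttle could work: compare a post's
share count in the current time window against the previous window and stop
accepting reshares once the growth ratio spikes. Every name, window size and
threshold below is invented for illustration, not taken from the article.)

    from collections import deque

    class SpreadLimiter:
        """Toy throttle: block reshares of a post once its spread between two
        consecutive time windows grows faster than a chosen ratio."""

        def __init__(self, max_growth=1.5, window=300.0, min_sample=10):
            self.max_growth = max_growth  # allowed ratio of current- to previous-window shares
            self.window = window          # window length in seconds
            self.min_sample = min_sample  # don't throttle on tiny samples
            self.shares = {}              # content_id -> timestamps of accepted shares

        def allow_share(self, content_id, now):
            q = self.shares.setdefault(content_id, deque())
            while q and now - q[0] > 2 * self.window:  # keep two windows of history
                q.popleft()
            prev = sum(1 for t in q if now - t > self.window)  # earlier window
            curr = len(q) - prev                               # current window
            growth = curr / prev if prev else 0.0              # crude "reproduction number"
            if prev >= self.min_sample and growth > self.max_growth:
                return False                                   # spreading too fast: throttle
            q.append(now)
            return True

    if __name__ == "__main__":
        limiter = SpreadLimiter()
        # Accelerating cascade: share attempt i arrives at time 60 * sqrt(i).
        decisions = [limiter.allow_share("post-42", now=60 * i ** 0.5) for i in range(200)]
        print(f"allowed {sum(decisions)} of {len(decisions)} share attempts")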

[1]: Moral Panics
[http://criminology.oxfordre.com/view/10.1093/acrefore/978019...](http://criminology.oxfordre.com/view/10.1093/acrefore/9780190264079.001.0001/acrefore-9780190264079-e-155)

------
jimnotgym
It will be interesting to see how the platforms fare once they have to
exercise editorial control over their content.

It will be especially interesting to see how their assumed competitive
advantage over 'traditional media' holds up once they are subject to a massive
increase in overhead, which surely follows from needing an army of moderators.

I wonder if they will start to ebb, consolidate, and find a place alongside
other media. Those that are scarcely profitable right now (Twitter for
instance) should be the early indicators.

~~~
village-idiot
Honestly, I’m surprised that social media is a thing. It appears to make its
users miserable, especially Twitter and Instagram, and some of them barely
break even.

~~~
TeMPOraL
You could say the same about heroin. Or even alcohol, really. Taking immediate
pleasure at the price of pain later is a failure mode of the human brain; we
all fall for it, to a greater or lesser extent.

~~~
village-idiot
Good point.

------
bogomipz
>"Although most of the moderators work for third-party firms, the growth in
their numbers has already had an impact on the firms’ finances."

I was kind of surprised to read this. Wouldn't content policy be much more
coherent if these were employees of the company? I'm curious why they elected
to use third-party firms. I understand that localization is an issue, but this
is a global company with deep resources, so I doubt that's the reason.

~~~
gaius
_I'm curious why they elected to use third-party firms._

Plausible deniability, and no liability for the long-term effects on the
mental health of those exposed to extreme imagery. Knowing these companies,
almost certainly a tax dodge of some sort is involved too.

~~~
bogomipz
Sure, plausible deniability sounds about right for these folks. However, I
would hope that the mileage on that is limited to a single use. I'm sure they
will use this excuse to point blame elsewhere during their next crisis, but I
doubt it will work for the one after that.

------
vladd
[http://archive.is/MMWqL](http://archive.is/MMWqL)

------
raincom
Now these platforms hide behind "external fact checkers", "black box
algorithms", etc. The real problem is that every fact is theory-laden, a
lesson learned from the history and philosophy of (natural) sciences.

Today, we are in a state of disputing facts, because people dispute the
theories behind stating those facts. Let me give you an example: a reporter
finds two dead bodies. The fact that two dead bodies were discovered may not
be under dispute. But the moment one says that these two were killed and the
other says that they committed suicide, you see two theories (homicide vs.
suicide) presenting the phenomenon in two different ways.

Further, accept that these two were indeed killed. Now look at the possible
reasons behind these deaths. Multiple people will offer competing, often
contradictory, reasons for these deaths.

Basically, there are no facts. What we call facts are theory-laden facts that
are agreed upon by all sides; in other words, all sides agree on the
underlying theory (say, homicide).

~~~
philipov
There are facts. You said yourself "The fact that two dead bodies were
discovered may not be under dispute." That's the fact. Everything else you
fact, it's a hypothesis. Just because people don't know the difference between
a fact and a hypothesis/conclusion doesn't mean there are no facts; unless you
want to go full ontological skeptic and dispute that it is possible to ever
know anything (never go full sophist!).

The real issue is that we have a breakdown of good faith in civil
conversation. That makes society impossible. At the point where you argue that
there are not two bodies in front of me, even though there are two bodies in
front of me, I might as well stop arguing with you and instead start punching
you. Then you can argue that your pain is just theoretical.

~~~
coldtea
> _There are facts. You said yourself "The fact that two dead bodies were
> discovered may not be under dispute." That's the fact. Everything else you
> said isn't a fact, it's a hypothesis. Just because people don't know the
> difference between a fact and a hypothesis/conclusion doesn't mean there are
> no facts;_

The distinction between fact and hypothesis is a red herring.

First, unless one has verified something themselves, with their own eyes or
carefully checked primary sources, there are no facts, just reporting of
things claimed to be facts.

In other words, there are no "two bodies in front of you", and you're not
being contradicted by some fellow while you're both looking at them. There is
just a report that two such bodies exist somewhere, which one of you believes
and the other doesn't. And more often than not, the reporting is of even more
abstract things, like statistics (collected with who knows what methodology,
and presented and baked to prove who knows what point, using all the tricks
one can use to lie with statistics).

Second, supposedly physical-domain facts can be fake as well. "This man was
shot by person X" (leaving no room for hypothesis) while X might not have done
it, or might have been framed, even if there were witnesses attesting that X
did it and even a court found it so (many people have later been found
innocent, e.g. by DNA evidence or further research decades after the fact,
victims of overzealous prosecutors, false testimony, setups, facial
similarities, racist bias, and so on). Despite their guilt being not a
hypothesis but a fact with "evidence", it was still bogus.

Third, pure hypothesis is often presented as fact, and people are called out
for not believing it, all the time.

------
xienze
> or a photo alleging that Donald Trump wore a Ku Klux Klan uniform in the
> 1990s (leave it up but reduce distribution of it, and inform users it’s a
> fake).

If they’re so concerned about misinformation, why would they leave a fake
photo up, even with a disclaimer? Nah, I must be reading too much into this,
surely this is a policy they apply evenly, and they’d do the same with a
similarly fake photo of Obama.

~~~
meowface
Not sure if you're being sarcastic. I assume they would do the same for
doctored photos of Obama. Removing content entirely should only be reserved
for when serious crime is likely to occur, I think.

------
vinayms
Social media companies must be allowed to do policing this way only if they
abide by the national laws pertaining to freedom of expression. I mean, if a
company wants to list itself as a social media company where people share
information, and it also does policing, it must be answerable to local courts
when it takes down posts, using its own judgement, in a way that goes against
the established free-speech laws of the land, without hiding behind the garb
of being a private enterprise and saying that it's its prerogative to manage
its platform as it pleases. I don't know if such a thing is already in place.

The companies are having a knee-jerk reaction to misuse by the public, and
like all knee-jerk reactions this one is heading the wrong way. The fact that
companies whose primary aim is to maximize profits are taking it upon
themselves to do these sorts of ethically and legally questionable activities
will itself lead to bad ramifications.

One obvious solution to controlling fake news is to remove the invisibility
cloak of users by forcing them, by law, to use government-issued ID to
authenticate their accounts. This should make people more responsible before
sharing misinformation. However, the fragility of the technology, which could
lead to authenticated user accounts being abused, is a big enough concern to
put the onus back on the companies.

I don't care if a company "employs executives who are thoughtful about the
task of making their platforms less toxic while protecting freedom of speech",
or uses third-party authenticating sources, or some fancy AI/ML; unless they
are forced to abide by the laws, things will only worsen. Whether they like it
or not, social media companies are now entangled in social and legal matters
and have to deal with all that comes with it. They have enjoyed exponential
growth in profits; now it's time to get real.

