
Tech Companies Are Deleting Evidence of War Crimes - anfilt
https://www.theatlantic.com/ideas/archive/2019/05/facebook-algorithms-are-making-it-harder/588931/
======
subjoriented
The article centers on activity in Syria, but the problem is much more
widespread. In the United States, Apple has consistently taken down apps
providing information about US government airstrikes, including those that
strike wedding processions, children, bystanders, etc. Recent efforts to
identify and scrub foreign propagandists in the United States have silenced
legitimate voices of domestic dissent, since foreign influence campaigns
typically attempt to magnify grassroots dissenting opinions (this is true also
of US foreign intelligence efforts). There are many reasons these kinds of
apps, comments, and conversations get taken down over their content.

Fundamentally, being a gateway to information in a legal environment where
hosting content as a curator puts you at risk creates deep incentives to
whitewash content. Those forces are already present for companies that might
accidentally sustain 9gag/4chan-type cultures and commentary contradicting
however they're productizing their platforms.

Reddit is another example of a company that has recently scrubbed itself of
most controversial content, including content critical of or dangerous to its
host country's political and national security narratives. Twitter has started
down that journey as well.

It feels like we're experiencing the growing pains of the social network and
hosted content boom, reinvented on web technology over the past couple of
decades.

It's kind of amazing in retrospect how controversial some of the "mundane"
applications of technology to society have been, whereas a couple of decades
ago most of the moral panic centered on concepts like "online dating", which
to date has actually generated relatively little controversy.

~~~
tracer4201
Not a surprise. Many of us grew up believing we as Americans are exceptional,
that we are inherently the good guys in any international conflict.

The reality is our government has interests. Morality and interests are
orthogonal, with the latter being influenced primarily by profits or some
perceived direct or indirect threat to America’s elite.

~~~
navigatesol
> _that we are inherently the good guys in any international conflict._

How else could any human or organization function if not by acting in ways
they believe are right?

> _The reality is our government has interests._

The reality is _everyone_ has interests. The "government" isn't some
homogeneous entity pushing towards a single plan; it's an incredibly complex
network of individuals.

The nation isn't perfect, but does anyone really believe that, on net, America
hasn't been a force for good in the world? What is your benchmark?

~~~
ionised
> The nation isn't perfect, but does anyone really believe that, on net,
> America hasn't been a force for good in the world? What is your benchmark?

Lots of people all over the world who have been on the receiving end of US
military or economic aggression enacted in the interests of the US plutocrats.

African Americans, Native Americans, most of South America, pretty much all of
the Middle East and North Africa, anyone who has suffered structural violence
as a result of the US's insistence on a morally bankrupt Friedman hack-job
neoliberal economic ideology.

The US has done a lot of good things, but on balance I think it's done more
damage and has merely replaced the British Empire as the current Anglo-sphere
imperialist power.

~~~
primroot
Abusing its power to exceptionally shield itself from international justice,
while at the same time practicing extraterritorial judicial reach, is also not
a good sign of being a net force for good.

https://www.reuters.com/article/uk-usa-icc/u-s-imposes-visa-bans-on-international-criminal-court-investigators-pompeo-idUSKCN1QW1ZH
https://www.csmonitor.com/World/Europe/2009/0213/p05s01-woeu.html

~~~
ionised
Plus, as a nation it meets its own definition of a rogue state.

------
leshokunin
The problem is that we've equated some platforms with a certain kind of media.
YouTube is video, Twitter is news, Facebook is... reality TV, I dunno. It's
pretty debatable whether those platforms are the right place for the coverage
of war crimes.

I'm not going to start an argument about the media not doing their job right.
However, this discussion would be moot if there were a news platform on par
with the above, and I don't mean Twitter. I'd love to see someone build or
modernize a news outlet that isn't driven by attention or clicks as a
currency.

~~~
bilbo0s
This.

If your evidence for war crimes is Twitter, there's a problem. And the problem
is not Twitter.

Twitter and Facebook posts should not be evidence of anything. That stuff is
so easily faked that courts would be entirely right to laugh such "evidence"
out of the courtroom.

Evidence should be gathered by appropriate authorities and kept in accordance
with international standards on evidence storage.

Hint: That's not "look at this tweet I got!"

~~~
chmod775
This makes no sense.

Pretty much all evidence starts out in some nonprofessional hands.

By this logic you could also say that you shouldn't record evidence of war
crimes on your Android smartphone, because Android footage "shouldn't be
evidence of anything" and "Evidence should be gathered by appropriate
authorities and kept in accordance with international standards on evidence
storage."

Videos from Facebook or Twitter can be, and often have been, used as evidence
in court.

> Twitter and Facebook posts should not be evidence of anything. That stuff is
> so easily faked that courts would be entirely right to laugh such "evidence"
> out of the courtroom.

What courts usually do when such things may be disputed is they ask the
platform to confirm that the content in question is real. Source: I have been
asked to appear in front of a judge (usually in his office to sign a written
statement) to confirm the accuracy/authenticity of data from one of my
platforms in the past.

I don't think there often is a dispute though, because both parties are aware
this will happen and are also aware of the consequences of falsifying
evidence.

Obviously that screenshot/video together with electronic traces and your
statement would then be stored wherever it is they store evidence, not just on
FB/Twitter.

~~~
bilbo0s
> _Obviously that screenshot /video together with electronic traces and your
> statement would then be stored wherever it is they store evidence, not just
> on FB/Twitter..._

Money quote.

Twitter is not the evidence store.

Twitter is the place the evidence is gathered from. Along with numerous other
sources. And no, they don't simply take your word that the footage on your
platform was recorded at place X at time Y and is verified to document persons
of interest A, B, and C. They take your word that a twitter post was made at
time W containing video N. That's all. I think you may have misunderstood what
you were signing if you thought that your signature validated and verified the
content of a twitter post. (Or any internet post for that matter.)

That's what the investigations are for, investigations that I can assure you
entail far more than twitter posts when you're talking about war crimes.

Crimes require evidence. Real evidence, not twitter posts.

~~~
AnthonyMouse
You're completely ignoring the public pressure needed to bring about
enforcement of laws against connected figures.

If you give your evidence against government officials to other government
officials, good luck getting them to bring a case that could hurt their career
prospects by pissing off the wrong people.

To get that rolling under those circumstances, you need public pressure. Which
means you need to distribute the source material far and wide, so that nobody
can claim they don't believe it because they don't have access to it, and the
whole internet can try to pick it apart, use clues in the information to dig
up more information, etc.

None of that has anything to do with digital signatures or anything like that.
It's a matter of seeing a video with a particular location and timestamp and
then finding other documentation from other people for the same place and
time, and looking up public records to verify that what you see in the video
matches etc. That's how investigations work. You don't verify by having the
original and some kind of signature, you verify with corroboration and the
additional evidence that the original evidence leads you to.

~~~
TeMPOraL
The problem, though, is that a video goes viral and then gets forgotten in
less time than anyone needs to verify its authenticity. Public pressure may
end up getting applied incorrectly.

~~~
AnthonyMouse
People get upset about dumb stuff continuously. It's the job of the public
officials to verify the information for themselves. If it's false then they
can publish the information disproving it and the public pressure subsides.

Better the occasional tempest in a teapot than people doing nothing even when
something needs doing.

------
o10449366
This article seems mostly concerned about Facebook and YouTube's opaque
filtering systems, but the authors are looking in the wrong place. These
platforms aren't and shouldn't be responsible for hosting violent and
disturbing content - regardless of its purpose or utility. The authors
themselves discuss the can of worms that opens when you start accommodating the
interests of some groups and not others; it isn't sustainable and it isn't
possible.

Instead, this article should have focused less on the decisions of AI systems
and neural networks, which can be difficult to decipher and interpret, and
more on the decisions of the humans behind these tech companies. The latter is
much easier to scrutinize and dissect.

I think the truly concerning initiatives are projects like Google's Dragonfly.
There's absolutely nothing opaque about a search engine that will actively
assist the Chinese government with whitewashing history for a billion
individuals. These are the decisions that should be examined.

~~~
djakjxnanjak
>Instead, this article should have focused less on the decisions of AI systems
and neural networks, which can be difficult to decipher and interpret, and
more on the decisions of the humans behind these tech companies. The latter is
much easier to scrutinize and dissect.

I agree with your comment overall but this common assumption bothers me. We
can put a debugger in a neural net, we can’t put one in a human brain.

People get angry when a policy decision is made that targets them, and are
told by the people responsible that the decision was implemented with a
neural net. Then they get mad at the neural net, which shifts blame away from
the humans who control it.

------
basetop
Tech companies are deleting evidence of war crimes and everything else at the
behest of media companies like the atlantic.

Youtube, facebook, reddit, google search, etc have all been targeted by the
atlantic, nytimes, washingtonpost, cnn, etc and bullied into scrubbing content
and directing their users to "authoritative sources". Ironically enough, in
china, russia and most countries, "authoritative sources" are state propaganda
organs. But we are different, or so I'm told. Our "authoritative sources" are
independent news organizations who, strangely enough, push the same message
with the same talking points. What a coincidence. For "independent" "news"
organizations, they sure are united in the same message.

This censorship has been going on for at least 5 years now. Maybe the atlantic
journalists should investigate their editors as to why the atlantic has
supported censorship? Or maybe the atlantic journalists should start
investigating nytimes, washingtonpost, cnn, etc: why are so many
government/intelligence agency people working in media, and why do so many
children of politicians (Bushes, Clintons, McCains, Cuomos, etc) have
prominent positions in the media?

Or is the atlantic only interested in war crime evidence that suits their
agenda (pushing for war in syria, venezuela, etc)?

This article is so weird. It's like an arsonist setting a house on fire and
telling everyone that the house is on fire.

~~~
52-6F-62
What exactly are you claiming is being censored?

The media regularly investigates and reports on itself.

The press is far from perfect, but you’re alleging a vast and deep conspiracy
when reality is likely far simpler:

The right hand doesn’t know what the left hand is doing.

Sales and business have little impact on editorial. Editorial flies by the
whims of a wide variety of strong personalities.

Politicians and relevant players get columnist and editorial roles because of
their inside status and insight (of whatever level of quality that might be).

Suggesting those outlets produce a single unified perspective or ideology is a
bit much, and the lack of one would explain why you might see contradictory
ideas.

I continually stress media literacy as crucial, especially these days. It’s
not a mystery to be solved. Most of it is pretty straightforward humdrum.

Edited to add:

The Atlantic is a very different machine from WP or NYT. It’s a magazine whose
content is analysis and ideas, and always has been. Their work will always
have perspective woven into the articles. It’s not new to them. But it’s not a
conspiracy; it’s part of their business model.

~~~
jtr1
These do not have to be mutually exclusive accounts. There does not have to be
a secret cabal of shadowy deep state operatives pushing a unified message,
there just needs to be a propensity for the rich and well-connected (by
definition a small group) to find their way into positions of influence in
rough proximity to major media outlets. As you've said:

 _Politicians and relevant players get columnist and editorial roles because
of their status and insight._

It's not difficult to see that wealth, status, and connection are self-
compounding. I also don't think it's unreasonable to say that many people in
this group hold views that are out of step with the majority of Americans,
particularly when you are talking about the pro-military intervention crowd.

~~~
52-6F-62
Granted, but what’s being implied and not said outright is that those people
are somehow influencing entire organizations of strong-willed career
professionals and not just given their own post to say what they want
alongside the rest.

I’m asserting the latter is usually what’s happening.

Over the past 100 years there have been many times when North American
governments have attempted to rule over the press. That never went well for
them (save the inception of Fox News).

OP seemed to suggest the large number of contradictory articles _was_ the
result of a larger unified conspiracy. I’m suggesting that is _highly_
unlikely.

For what it’s worth, people don’t get into working for the media because it
pays really well. Not usually, anyway. So in that way I’d rule out wealth as a
primary motivating factor for sacrificing one’s principles in that business.

~~~
leftyted
It seems unlikely to me that there is no coordination between left wing media
institutions. They say the same things at the same time using the same
language. I don't read much right-wing media.

I'm not saying there's a shadowy conspiracy (though I'm not ruling out people
talking about "messaging" via email). It may be that they read each other's
stuff and the demands of the 24 hour news cycle dictate that there's a certain
amount of cross pollination and regurgitation.

> For what it’s worth- people don’t get into working for the media because it
> pays really well. Not usually anyway. So in that way I’d rule out wealth as
> a primary motivating factor to sacrificing one’s principles in that
> business.

Another perspective on this, courtesy of Noam Chomsky in conversation with a
journalist: "I’m sure you believe everything you’re saying. But what I’m
saying is that if you believed something different, you wouldn’t be sitting
here."

> Over the past 100 years there have been many times where North American
> governments have attempted to rule over the press. That never went well for
> them (save the inception of Fox News).

I think it's unknown to what degree American intelligence agencies influence
the media but I'm willing to bet it's "more than not at all". It may be
relatively benign stuff ("Don't use the term 'Islamic terrorism,' Al Qaeda
uses that stuff to propagandize") or it may be not so benign.

I agree with the original poster. It's absurd that the left-wing media has
been agitating for censorship of "hate" and then complaining when "evidence of
war crimes" is deleted. There's been shockingly little mainstream questioning
of just how quixotic it is to try to "remove hate".

I'm fairly concerned about the move toward censorship by internet platforms.
The only way tech companies can realistically do this is with algorithms. Lots
of people seem to think it's a given that these companies can algorithmically
remove "hateful things". No one seems to be saying that rules and laws need to
have precedent in order to be intelligible. For example, in the US, there are
incitement laws but _what actually constitutes_ "incitement of imminent
lawless action" is something that has been defined in the courtroom. You can't
just say "delete hate" without precisely defining hate. It's a blank check.

I find censorship via algorithms extremely scary. There will be false
positives and false negatives and there is absolutely zero recourse.
Censorship by algorithms is a perfect expression of bureaucracy. In perfect
bureaucracies, responsibility is spread so thinly that it's impossible to
determine who is responsible for a mistake. With algorithms, there's actually
no human on the hook at all. Are you going to blame the programmers?

By all means, use machine learning to flag posts so humans can look at them.
But automating the removal of content and the banning of human users is a road
I strongly suspect we will regret going down. If the volume of content is
sufficient so that you _need_ to use algorithms to remove human-generated
content then I'd say it's time to reconsider whether that content should be
removed at all.
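
To make that concrete, here's a toy sketch of "flag for human review instead
of auto-remove"; the keyword scorer and threshold are invented stand-ins for
whatever model a platform actually runs:

    REVIEW_THRESHOLD = 0.5
    FLAG_TERMS = {"execution", "beheading"}  # stand-in for a trained classifier

    def score_content(post: str) -> float:
        """Toy stand-in for a model: fraction of flag terms present."""
        words = set(post.lower().split())
        return len(words & FLAG_TERMS) / len(FLAG_TERMS)

    def moderate(post: str, review_queue: list) -> str:
        """Route suspicious posts to a human queue; never auto-remove."""
        if score_content(post) >= REVIEW_THRESHOLD:
            review_queue.append(post)  # a human decides what happens next
            return "queued_for_review"
        return "published"

    queue: list = []
    print(moderate("footage of an execution in the town square", queue))
    print(moderate("cat pictures", queue))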

~~~
phatfish
In many cases the censorship is carried out by a third-party company set up in
a low-wage location, where someone with their own biases, religious
convictions, and the influence of their own society moderates content for the
West.

The tech giants don't want anything to do with it because it means employing
thousands of extra people and the costs that go with that. So it just gets
contracted out to the lowest bidder.

It can't all be automated yet, nor is some machine learning algorithm dreamt
up by Facebook any better a prospect.

This is the problem with the black box of the multinational corporation
deciding what is moral or acceptable with no oversight by the government.

~~~
leftyted
> It can't all be automated yet, nor is some machine learning algorithm
> dreamt up by Facebook any better a prospect.

Some of it is automated. Youtube removes/demonetizes stuff based on automated
checks. I suspect other platforms do this stuff too or are headed in that
direction.

------
Digit-Al
So, here's an idea. Let's take the case of YouTube: instead of its algorithm
deleting this content, how about it is marked with a flag? Normal users are
completely unable to view, or even know of the existence of, content marked
with this flag. It doesn't come up in searches, and even a direct link will
just show some "content unavailable" message.

Human rights groups and the like can request permission for a designated user
to have a "special administrator" permission. Anyone with that permission will
see a permission toggle when they look at someone's user account. That toggle
will allow them to give or revoke permission to view these "forbidden" videos.

That would then allow these organisations to give access to anyone they deem
suitable to help them police these videos. There would need to be some sort of
safeguard to make sure that permission is not accidentally given to a minor
or something.

This should solve a lot of the problems I reckon. Thoughts?
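
If it helps to picture it, here is a minimal sketch of the flag-plus-permission
scheme described above, in Python; all names and structures are illustrative,
not any platform's real API:

    flagged_videos = {"vid123"}     # content hidden from normal users
    special_admins = {"hrw_admin"}  # accounts that may grant/revoke access
    permitted_viewers = set()       # users allowed to see flagged content

    def grant_access(admin: str, user: str) -> None:
        """Only designated administrators can extend viewing permission."""
        if admin not in special_admins:
            raise PermissionError("only special administrators may grant access")
        permitted_viewers.add(user)

    def fetch_video(video_id: str, user: str) -> str:
        """Flagged videos stay invisible: searches and direct links both fail."""
        if video_id in flagged_videos and user not in permitted_viewers:
            return "content unavailable"
        return f"<video {video_id}>"

    grant_access("hrw_admin", "researcher1")
    print(fetch_video("vid123", "someone"))      # content unavailable
    print(fetch_video("vid123", "researcher1"))  # <video vid123>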

~~~
jeandejean
Why would human rights groups be entitled to this censorship privilege?
So they'd now decide what's good or bad to display on Youtube?

If anyone needs evidence, it is prosecutors, and that is likely already
possible. Deleted from user-facing Youtube and Facebook and the like doesn't
mean it's deleted from their storage...

Plus the title is misleading: they're not deleting evidence of war crimes,
they're deleting upsetting and distressing videos from their open platform,
and that's a good thing! Nothing proves they don't already pass these videos
to law enforcement.

~~~
naasking
> Why would Human rights groups be entitled to have this censorship privilege?

Indeed, who watches the watchmen? The SPLC was recently embroiled in sexist
and racist controversies. No human organization is exempt from corruption. We
all have to watch each other while simultaneously being charitable and willing
to forgive.

> Plus the title is misleading: they're not deleting evidence of war crimes,
> they're deleting upsetting and distressing videos from their open platform,
> and that's a good thing!

Debatable. There's no such thing as a right to be "not distressed" or "not
upset". Being a member of any human society means being exposed to stresses of
all kinds, and we arguably all have the stressful duty of guarding our human
and civil rights.

~~~
Digit-Al
I am in absolute agreement with you that whether or not such material should
be removed is subject to debate. There are good arguments on both sides. I am
generally against censorship but I can recognise some merit in protecting
children from videos of people being shot, hanged, beheaded, etc.. Not that I
particularly want to go down the "think of the children" route.

However, all the debate aside, the simple truth is that governments are
ordering that content be removed, so until we can have those debates it would
probably be a good idea to have some way to allow the content to be viewed by
some people. Arguments that it's probably already available to law enforcement
are all very well, but law enforcement has limited resources and it's
impossible for them to review all the flagged content. So if there are people
who are willing to volunteer their time to review it, looking for evidence of
crimes, then we should probably have a way to enable that.

As for there being some bad actors in these organisations - that's pretty much
unavoidable; even law enforcement has some bad apples.

It's about compromising and making the best of what we have.

------
lifeisstillgood
The podcast episode "Post No Evil", about Facebook's moderation problems,
covers something very similar: during the Mexican border drug wars, people
were posting images of shootouts and of dead bodies hung from overpasses.

Facebook had internal debates over whether this counted as news or breached
content guidelines.

I recommend the podcast greatly.

But perhaps we need a simpler approach: a funded "journalism" archive where
the tech firms can shift the posts to online storage (i.e. not delete them)
but section them off for later analysis.

Seems the only likely compromise.

~~~
medecau
Something like the Internet Archive?

~~~
jplayer01
Except nobody goes to the Internet Archive unless they're looking for
something extremely specific. The benefit of Facebook as a platform is that
you can reach a wide audience in a place where often going to the authorities
means either jack squat or your incarceration or worse.

~~~
medecau
Sure.

I meant to say the Internet Archive can be used as the 'funded "journalism"
archive' that GP mentioned.
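
One concrete hook for this: the Internet Archive's Wayback Machine exposes a
public "Save Page Now" endpoint, so a platform (or anyone) can request a
snapshot of a URL before it disappears. A rough sketch using the Python
requests library; error handling and rate limits are omitted:

    import requests

    def archive_url(url: str) -> str:
        """Ask the Wayback Machine to capture `url`; return the snapshot location."""
        resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
        resp.raise_for_status()
        # When present, Content-Location points at the new /web/<timestamp>/ snapshot.
        return resp.headers.get("Content-Location", resp.url)

    print(archive_url("https://example.com/"))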

~~~
lifeisstillgood
I did think that was one possibility - but this needs to be funded and
supported by the tech firms themselves, not handed off and forgotten.

I can imagine a way of referencing these redacted images where Facebook just
puts up a big red sign and a warning - but that still allows people to find
them, and the archive acts as a full web citizen.

------
8bitsrule
Social media have successfully created the illusion that they are (to some
extent, at least) public spaces. This is another reminder that they are not.

OTOH, truly public spaces on the internet ... where reasonable speech can
truly be free ... are in very short supply. By accident or intention, this is
a crappy state of affairs.

This isn't a new topic. 20 years ago, Lessig wrote, "We can build, or
architect, or code cyberspace to protect values that we believe are
fundamental, or we can build, or architect, or code cyberspace to allow those
values to disappear."
https://www.nytimes.com/2001/06/02/arts/adding-up-the-costs-of-cyberdemocracy.html

If our free speech is limited only to the inside walls of walled gardens,
that's not their fault.

~~~
cr0sh
> OTOH, truly public spaces on the internet ... where reasonable speech can
> truly be free ... are in very short supply.

Who decides what is "reasonable"? One person's or group's "reasonable" is
another's call for regulation, if not censorship by various means.

How can this conundrum be solved on an international network composed of both
adults and children accessing the same content?

I'm not expecting you or anyone else to chime in with a solution; I obviously
don't have one myself. But ultimately, this information hydra we've created
will need taming in some manner, or it may eat the world alive.

~~~
8bitsrule
Who decides? Good question. But IMO, democracy is not the strong suit of most
corporations.

Here in the US, back in the fabled wild west, the solution was not to expect
the law to be upheld by grocers, blacksmiths, or bankers. The public hired
professionals. So, I reckon, maybe we need some laws to enforce.

------
onesmallcoin
We couldn't have started the discussion of systematic censorship in a better
way. It's hard to come up with one rule that works for everyone. It's normal
to have a knee-jerk response to censor something if what you're looking at is
terrible and you wouldn't want other people to see it, but it is also
important that we as a race try to learn from our mistakes; and censorship
straight to deletion is burning the evidence.

This may fall short of crazy talk around here, but maybe we need a human
element in the curation of our digital library, and a way for society as a
whole to have input on this process - not just in removing the offending
content, but also in keeping track of it, to help people be accountable for
their actions.

Full disclosure: I'm a resident of Christchurch, New Zealand, where the mosque
shooting happened recently. I saw our government block multiple websites via
DNS, in a country where this kind of censorship is otherwise unheard of; we
honestly didn't know the systems were in place for this kind of censorship. So
my 2c is that if you are in a Five Eyes nation, they almost certainly have the
capacity to conduct this kind of attack.
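
As a technical aside, DNS-level blocking of the kind described above can often
be detected by comparing the answer the system resolver gives with a public
resolver's answer. A rough sketch using the third-party dnspython library; the
domain and the choice of Quad9 (9.9.9.9) are just placeholders:

    import dns.exception
    import dns.resolver

    def resolve_a(domain: str, nameserver: str | None = None) -> set[str]:
        """Return the A-record addresses for `domain` from a given resolver."""
        resolver = dns.resolver.Resolver()
        if nameserver:
            resolver.nameservers = [nameserver]
        try:
            return {rr.address for rr in resolver.resolve(domain, "A")}
        except dns.exception.DNSException:
            return set()  # NXDOMAIN, refusal, or timeout: no usable answer

    local = resolve_a("example.com")              # system/ISP resolver
    public = resolve_a("example.com", "9.9.9.9")  # Quad9, for comparison
    if local and public and local != public:
        print("answers differ; possible DNS-level filtering:", local, public)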

------
rlt
Damned if they do, damned if they don't.

~~~
yoz-y
This. It is simply not possible to have these platforms host this kind of
content, and it boils down to money. If you want Twitter, Facebook, YouTube,
and the like, you need ads. Advertisers don't want to be displayed alongside
gruesome content.

If people were ready to pay for a platform with "raw" footage of others' lives
there could be a place for it. So far the market has proven otherwise.

~~~
happytreefrens
There is a market for many kinds of content, but companies will gladly
exchange some of their profit to silence certain kinds of messages and to keep
competitors down.

There's clearly a market for an uncensored social media network, but Gab has
been deplatformed from payment processors, web hosts, registrars... Companies
that refuse service to Gab will gladly host degenerate pornography and turn a
blind eye to illegal activity.

Much of this censorship cannot be chalked up to profit motive, and the only
conclusion is ideological drive, or collusion with other tech companies to
help ensure their respective monopolies and duopolies.

~~~
dageshi
I think it's simpler than that: it's risk. Some content has a known level of
"bad" risk associated with it, legal pornography for instance, but other
content potentially has a massive and unquantifiable downside, almost to the
point of being an existential threat to a platform's business.

You can't force people to host content or be associated with hosting content
which they fear will blow up in their faces.

------
intended
What do people expect?

Isn’t this uncharted territory for everyone?

Everyone can report, but the public sphere where this happens is owned by a
company which needs profit.

Humanity was always insulated from the worst of what was happening at any
given minute by the difficulty of transmitting information over older mediums.

This means the incentives are not aligned here: speaking the truth, keeping
you engaged, and keeping Disneyland clean.

A person in the UK who wakes up in the morning doesn’t want to see evidence
that South American violence has resulted in an entire village being
eradicated.

As a species we were insulated from thinking at species scale. We could do
village and perhaps nation scale on an average human basis. A rare few people
can think at a planetary or greater scale regularly.

So how precisely should this be structured?

~~~
psychoslave
That's an interesting point regarding human cognition limits.

Thanks to this comment I found "Capacity Limits of Information Processing in
the Brain", by René Marois and Jason Ivanoff [1].

Intuitively I would think the gap in organizational difficulty between a
village and a nation is far more important than the one between a nation and
the whole of mankind. I mean that I would expect a national organizational
process to require far more logistical sophistication and far more explicitly
reproducible/scalable processes than what you need in a village. Of course,
some of these processes can be argued to directly conflict with a "mankind
scale mindset", but that doesn't really seem to make a greater gap than the
one that isolates a mind at a smaller human scale.

[1] http://www.psy.vanderbilt.edu/faculty/marois/Publications/Marois_Ivanoff-2005.pdf

------
stef25
Couldn't they take down the content but keep it available for neutral third
parties to evaluate and collect?

There has to be a better way to collect evidence of war crimes than leaving
decapitation videos visible on social media for all to see.

Maybe the ICC could even launch their own platform where this kind of content
can be submitted.

~~~
megous
Removed videos need not be graphic. War crimes take many forms. Bombings of
hospitals and schools. Mass shellings of cities. Destruction of vital
infrastructure, etc.

You'll not see any blood or torn bodies. It will just expose lies of some
nation state's propaganda.

These kinds of videos get removed too. Either directly, or indirectly by
banning the entire account if it posts more videos that are questionable -
which is likely - or because of targeting by trolls using false complaints.

Though I'm not that worried. Many people already know the perils of YouTube,
and archive everything at first sight.

It all just serves the nation state's propaganda, which is usually left
untouched because it's served via more official media channels.

------
docker_up
This sounds like a perfect reason why sites like LiveLeak exist. I don't know
if I care that Facebook doesn't host videos that go against its TOS, but if
sites like LiveLeak are forbidden from hosting these videos even though they
want to, it would make me worried.

~~~
vokep
This.

Censor the mall (facebook, twitter), where families and your grandma hang out,
keeping out stuff they would never want to see.

But let the wild west (liveleak, 4chan, etc) stay wild.

------
thereisnospork
A thought experiment:

If 9/11 had happened today, what would their policies mean for that footage?
What should they do to censor it, if anything? Consider too the increase in
camera density over the past two decades.

------
deanclatworthy
These platforms can do no right. If Facebook/Youtube leave a video online of a
beheading of John Smith from <Country X> then <Country X> inevitably calls
them out for promoting and hosting this "illegal" content. Now, they're being
chastised for taking it down too.

Maybe, just maybe, these companies have internal processes for handing over
such materials to the relevant authorities, and don't need to rely on armchair
detectives to investigate war crimes.

~~~
Xelbair
Maybe, just maybe, we can decide if content platforms are responsible for user
content or not, and stick to that decision.

Quite frankly i am against any form of censorship.

~~~
cr0sh
Let's say you make users responsible for the content they create; how do you
intend to enforce that?

Country A's User B uploads Content C that User D in Country E finds
objectionable, or that is illegal to view or possess in Country E while being
perfectly legal in Country A.

How do you enforce that? Labels or tags aren't the answer, because Content C
could be labeled/tagged as X,Y,Z where X,Y,Z are orthogonal to the content,
and bear no relation to it.

Do you allow it to be viewed by User D, and if it isn't legal, then put that
on User D to be responsible to...what? Unview it?

More than a few of these problems are all because we humans have decided
without consensus or agreement that some information is "bad" while other
information is "good", and even that "good" information is only viewable by
certain people of that group who meet a certain age requirement.

All of it is very arbitrary, and at odds with itself on the world stage.

This issue won't be solved easily, if at all. Instead, we'll likely just
continue to make more convoluted laws that can't be followed and/or all
descend into totalitarian walled garden states isolated from each other while
atrocities continue behind those walls, telling each other within that "we're
the free and prosperous ones; those over there are not".

Why does this narrative seem so damn familiar...?

------
pergadad
Very interesting article. Essentially: there is a legitimate need for such
content, namely to hold the perpetrators accountable. So the question might
be: how can this content still be accessed later by e.g.
researchers/prosecutors/...? On the other hand that then opens the door for a
dictator (we seem to have plenty of them nowadays) to ask for deletion of
content and then use the stored materials still to identify e.g. regime
critics. Really difficult issue.

------
iscrewyou
I just wish it was easy to turn their algorithms off. All these companies were
good before those algorithms started. Like the feed for Facebook & Twitter,
and recommendations for YouTube.

That would solve a lot of problems. It would make sure people aren’t exposed
to things unwillingly, and these companies won’t be on the hook for it, but it
would also mean things aren’t getting reported and taken down as much as they
should be (for law enforcement tracking purposes).

------
happytreefrens
These tech companies decide, often in unison, to deplatform people, types of
legal content, or political views. There is clearly coordination among news
outlets, social media companies, and tech companies (everything from payment
processors to web hosting).

Who is doing this coordination? To what end?

Is there a common thread in the deplatforming, smearing, and silencing
campaigns?

~~~
AstralStorm
There need not be a single "who" behind it so much as a set of convergent
processes with the same goal of control and power.

It is coordination of the "keeping up with the Joneses" kind: the Chinese get
away with it and got useful results, so can we, ethics and long-term effects
be damned.

------
Nasrudith
To be frank, tech companies are a second-order symptom of fucked-up societal
attitudes pushing the censorship.

There's a puritan push towards a "child-friendly mass market" as a default,
which leaves no place for the ugliness of reality. They don't care about
suffering - they just don't want to see it. It is narcissistic sociopathy as a
norm - how dare they run out of a burning building in their underwear!
Children might see them!

That it is the middle of the night is no concern - they don't want
"inappropriate content" in a back alley. They want it gone.

The tech companies tend towards doing nothing until the masses of "think of
the children" complaints about "objectionable" content roll in. The absence of those
masses of people is what I miss the most about the internet not being
mainstream. I personally believe what we need is to offend those busybodies as
much as possible until they fuck off into their own bubble.

~~~
indigochill
>The absence of those masses of people is what I miss the most about the
internet not being mainstream.

I've seen this sentiment before, but I don't get it. Sure, the internet's
mainstream and there are parts of it that have a mainstream culture and that's
probably not going anywhere in the foreseeable future.

But the internet itself has no culture (heh). It's just a network. There are
still corners of the internet that have whatever culture you're looking for.
And if you can't find it, you can make it. It probably won't make much money
or make the news, but the option is there.

That said, I agree with the main thrust of your argument that tech companies
are just reflecting the mainstream preference to whitewash reality.

~~~
Nasrudith
The sentiment is based on the transformative and displacing effects of the
mainstream and "old internet" refers to culture as opposed to the network.

I am well aware that niche chasing is the way to achieve it for now. What the
sentiment longs for as ideal is for influence to have worked more in the
opposite direction. Instead we get advertisers trying to be the tail wagging
the dog again when people left their last venture over the blandness they
imposed.

------
nunez
Aren’t sites like LiveLeak better for this kind of content? Won’t this kind of
content make it there anyway?

~~~
lopmotr
It could be ordinary people using what they're familiar with. If an emergency
suddenly happens, who's going to spend half an hour dicking around researching
what site to use and going through their signup procedure and installing their
app and all that?

------
indigochill
Stratechery had an article that I think provides a reasonable compromise for
displaying and taking responsibility for content some might find objectionable
but which others would argue has value. It involves the poster being
responsible and owning the consequences of the content, as opposed to some
third-party platform which has to try to please opposing sides.

https://stratechery.com/2019/a-regulatory-framework-for-the-internet/

------
hellllllllooo
The issue with doing things at huge scale is all the edge cases. If Facebook
wants to have all the benefits and money it gets from owning its huge network
it also has the responsibility of having to treat different content with
nuance. These kinds of issues are one reason why Facebook wants to move
towards a more private network with more private interactions so it can avoid
having to deal with the endless complexity of human interaction in public.

------
Quequau
Surely there's a difference between not publishing (to the general public) on
their platform and not making such content available to the relevant
authorities.

------
j-c-hewitt
I think the whole notion of war as a whodunnit crime is pretty moronic at a
fundamental level. #1, you don't have jurisdiction. #2, if you hoped to gain
jurisdiction, the most important thing is not piecing together the facts of
the case, but winning the war.

If you want to go after the Libyan National Army for doing bad things, you
have to go to war with Libya and defeat their army.

The trouble with going to war with Libya if you are the United States is the
Atlantic Ocean and the absence of a real driving interest such as self defense
that would lead Americans to support a war with Libya.

In order to goad Americans into a war with Libya, the best hope for doing so
is to wave bloody shirts at them and somehow convince them that Libyans are
going to blow them up at any time now unless we go to war with Libya ASAP.

Facebook and the rest of them don't have a duty to simultaneously prevent the
wrong people from viewing Ogrish-style execution videos while also preserving
them.

So they want Americans to be radicalized in favor of war against Syria and
Libya (causing far more deaths than the typical radicalization which just
results in teens posting lots of pepe memes), but not radicalized in favor of
Islamic terrorists. That is just an impossible propaganda requirement that
cannot be met.

------
SR-71_Blackbird
I’m sorry, but this article lost my interest by the opening sub-heading.

Algorithms that take down “terrorist” videos could hamstring efforts to bring
human-rights abusers to justice.

“hamstring”? WTF.

------
lifeisstillgood
Of course the point I forgot to make is that we suddenly have access to all
the world's evidence - every good and bad issue around the globe is now
passing through these firms' data centres... and we are not used to that. Our
brains seem wired to weight bad news much more heavily.

We need to remember Douglas Adams' advice: "It's vitally important not to have
a sense of perspective".

------
weego
It's not evidence, at best it is hearsay. We have no way to validate truth
from fiction.

------
behringer
I'm not interested in seeing war crimes on Facebook. If you want to
disseminate that stuff, use a public forum where that stuff is welcome. There
are countless places. That stuff on Facebook, and the spin users give it, is
not welcome by me and no doubt by others.

~~~
wnoise
Facebook is _millions_, if not billions, of tiny public forums. The problem is
that people seem to want them all to have the same standards. If you don't
want stuff like this on your wall or feed, it's easy to keep out. If the
admins of a group don't want this showing up, they too can filter it out. But
people seem to be demanding that not only do they not have to see content they
dislike, but that no one else can either.

~~~
lopmotr
It's the nature of censorship that people want to censor what other people
see, not what they see themselves. We believe we're able to make our own
decisions and we're in control of our minds, but those other people who aren't
us might not be and are in danger of bad influences.

~~~
mandelbrotwurst
Speak for yourself. People who support censorship, sure. "We"? No.

~~~
lopmotr
By "we", I mean most people, as evidenced by all democratic governments
censoring various things, presumably because their voters want to prevent the
rest of their country from seeing them. Same goes for restrictions on drugs,
alcohol, cigarettes, etc. People who're concerned about their health won't be
doing those things themselves and don't need a law to stop them, but they want
to protect what they see as irresponsible other people. That might be a fair
judgement but it's still a judgement about other people being somehow weaker
and needing to be protected from themselves.

------
Quanttek
I've noticed this myself as I analyze influential public statements with
regard to the Rohingya for my Bachelor's thesis on their discrimination and
genocide.
dehumanizing and incendiary content but this is all gone now. I think FB
should at least provide researchers access to an archive of removed content.

------
cf141q5325
This is a very serious issue, by far not limited to facebook and youtube. The
problem goes much further than private companies not wanting the content on
their sites. It's very much motivated by a push by governments to purge
terrorism propaganda and material that could be used to radicalize people.

The best example is, or rather was, reddit's /r/syriancivilwar. The subreddit
collected each and every piece of information available on the conflict, from
every side. This brought it into direct conflict with reddit admins, who
banned multiple users for linking to terrorism propaganda, namely ISIS videos
and statements from AMAQ, the ISIS news agency. The subreddit is now a shell
of its former self because it's been reduced to a propaganda platform for
factions which are socially acceptable, instead of a place to document the war
and its atrocities.

It doesn't stop there, however. The content wasn't hosted on reddit but
literally anywhere there was hope it might not be immediately taken down, from
piracy-friendly streaming services to archive.org mirrors. These all get taken
down for distributing terrorism propaganda, instead of for violating the TOS.

When you hear about terrorism propaganda, there is a good chance we are
talking about not just evidence of war crimes, but also material that is
important for understanding how people get radicalized, and how to counter
that propaganda. People at risk of radicalization can still access the
material, while the public as a whole loses its insight into the minds of
those people entirely. And the people at risk drift off into echo chambers.
For lack of information we are doing nothing to counter that propaganda, quite
the opposite. I doubt the media at large could have done ISIS a bigger favor
than their coverage did, focusing simply on the barbarism of the videos. They
missed the point entirely: the barbarism was very much a part of ISIS
propaganda. The reporting was great for getting views, but it only reinforced
the people who had already entered the echo chambers. And hoping for a purely
military solution sounds absolutely naive to me. It's very much a task for
society as a whole to watch out that people don't drop into those echo
chambers and get radicalized.

Framing this issue as "AI gone wrong" misses the point entirely. The content
would also have been deleted by a human. And given current political
developments, it's not overly pessimistic to assume that people publicly
hosting content will be forced to remove any such content immediately or face
prosecution. Which only helps with getting reelected, by signaling a fight
against terrorism instead of actually doing anything about it.

Not to sound overly pessimistic, but the situation looks really dire to me. I
had hoped that we as a society had learned our lesson about book burning. And
the issue is not a new one. Mein Kampf is still not readily available in
Germany for "fear that it might turn people into nazis". Serdar Somuncu did
the only reasonable thing here and started touring Germany, reading from Mein
Kampf as well as famous propaganda speeches to show how ridiculous they are.
There is nothing special about them. There isn't some hidden insight in those
books that turns people into Nazis. But our treatment of them might convince
people at risk that there has to be something to them, or they wouldn't be
banned in the first place. I fear very much the same development with the
array of far-right terrorists and their manifestos. Instead of being treated
as the ramblings of the lunatics they are, they get treated as something
dangerous that apparently might convince people.

------
v_lisivka
The numbers look very small compared to Russia's backing. Russia has whole TV
channels backed by the FSB, whole troll factories, and its own agents even in
major Western media.

Why does an effort a few orders of magnitude greater not help Russia, while a
much smaller effort helps the USA a lot?

~~~
ben_w
Quantity isn’t always the most important quality. The US (and the UK?) seems
to present propaganda as something the bad guys do, which is easier to believe
if the changes are limited. Limited doesn’t mean unimportant, and it being
limited forces focus on important changes.

(I’m not assuming competence here: I assume that’s random over time)

~~~
v_lisivka
So, you say, Russia seems to counter-present USA propaganda as something the
good guys do? That looks stupid. Maybe I misunderstood you. Can you say
explicitly who does good propaganda and who does bad propaganda? Who is good
and who is bad?

~~~
ben_w
Mainly I’m saying that the propaganda produced by the USA says “Propaganda?
Us? Please put away your tinfoil hat, only bad guys do that.”

I don’t know what Russia’s self-image is, but the impression I have from the
outside is that it might as well be “<green frog emoji> Come work for us at
the state propaganda company <Russian flag emoji>”.

------
msiyer
Is this really a technology problem? Are we so naive that we believe these
things were not happening in the past?

This problem cannot be solved in the consciousness we currently exist in.

The people running the world and those who ought to run it are entirely
different.

