
The trauma of Facebook’s content moderators - donohoe
https://restofworld.org/2020/facebook-international-content-moderators/
======
DavidVoid
That was a good article, and I think something really needs to be done to
make content moderation more humane.

This quote from the article

 _“It gets to a point where you can eat your lunch while watching a video of
someone dying. … But at the end of the day, you still have to be human.”_

reminded me of a similar article published by WIRED six years ago [1].

 _Eight years after the fact, Jake Swearingen can still recall the video that
made him quit. He was 24 years old and between jobs in the Bay Area when he
got a gig as a moderator for a then-new startup called VideoEgg. Three days
in, a video of an apparent beheading came across his queue._

 _“Oh fuck! I've got a beheading!” he blurted out. A slightly older colleague
in a black hoodie casually turned around in his chair. “Oh,” he said, “which
one?” At that moment Swearingen decided he did not want to become a
connoisseur of beheading videos. “I didn't want to look back and say I became
so blasé to watching people have these really horrible things happen to them
that I'm ironic or jokey about it.”_

[1] [https://www.wired.com/2014/10/content-moderation/](https://www.wired.com/2014/10/content-moderation/)

~~~
kevinskii
It would be great if there were a way to make content moderation more humane,
but perhaps this is like saying that it would be great if we could make
hospital emergency room work more humane. It is unavoidably traumatic, and ER
staff are known to detach and use dark humor as a coping mechanism. Content
moderation is a similarly noble and difficult profession.

~~~
agentdrtran
ER staff are paid a lot more, are sometimes even in a union, and also get the
benefit of helping people and seeing the effect of their help.

~~~
Spooky23
Ever hang out with an ER staffer?

Most of their workload is bullshit: people with colds and sore throats,
depressed people using the ER as primary care, assholes using 911 to score
Medicaid cab vouchers.

And at any time, any number of people can show up with any kind of personal
tragedy, from strokes to various traumas. My sister quit after a 13-year-old
bled out from a gunshot wound; she walked out of the room and got kicked in
the head by a prisoner who had been stabbed after he bit the ears off of
three other prisoners and broke the arm of a guard.

~~~
kevinskii
Bite one inmate's ear...shame on you.

Bite a 2nd inmate's ear...shame on him.

Bite a 3rd inmate's ear...that's fricking badass.

~~~
dang
Please don't do this here.

------
CM30
It's probably a bit controversial to say this, but hadn't large sites been
dealing with these problems for years before the likes of
Facebook/Twitter/YouTube/Instagram/whatever came around?

It's obviously horrible doing this as a job, and it's obviously a bit more
common on Facebook than on, say, a large old-school internet forum, but... at
least in this case people are paid to do it. Reddit mods and old-school forum
mods have to deal with this stuff for free.

~~~
tmpz22
Right, but the difference is the sheer amount of money involved and the
politicization that comes with it. If Facebook were worth 100m, nobody would
care. If Facebook had only 10m users, nobody would care. It's the scale that
completely breaks the legal and political frameworks our society leans on.

~~~
wheelie_boy
I'd say another difference is the percentage of your attention being devoted
to this kind of thing. Presumably reddit or phpBB mods aren't spending 40
hr/wk looking at gore.

~~~
Nasrudith
Wouldn't that suggest that the "ideal" way to do it is a very part-time job
with a high volume of workers, say 5 hours a week each? It would be a
scheduling and logistical nightmare, of course, but it would give more
"dispersion time" for the trauma.

~~~
wheelie_boy
I'd be interested to see whether the tooling around moderation could also be
improved. For example, could the images start heavily blurred, with a circle
of clarity that opens when you click? Something like that would hurt
throughput but be more humane.
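
To make that concrete: the compositing half of the idea is cheap to
prototype. A minimal sketch in Python with Pillow, assuming some review UI
supplies the click coordinates (the function and its parameters are made up
for illustration):

    from PIL import Image, ImageDraw, ImageFilter

    def blurred_with_reveal(path, center, radius=80, blur_radius=24):
        """Heavily blur an image, then reveal a clear circle around center."""
        img = Image.open(path).convert("RGB")
        blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
        # Grayscale mask: white circle = show original pixels, black = keep blur.
        mask = Image.new("L", img.size, 0)
        x, y = center
        ImageDraw.Draw(mask).ellipse(
            (x - radius, y - radius, x + radius, y + radius), fill=255
        )
        blurred.paste(img, (0, 0), mask)
        return blurred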

~~~
prawn
This is a very interesting idea. It could reveal only a portion, and you
could track how often that was enough for the moderator to pass judgement.

------
MattGaiser
They should recruit people on Reddit for this. A large number of them seem
perfectly fine with this type of content.

There are communities dedicated to all of these things.

I do not mean this as a knock against Reddit users. It is just that there
are small pockets of people who are psychologically capable of tolerating
this, or who even enjoy it. It seems more ethical to hire people from
/r/watchpeopledie.

~~~
kylecazar
I'd wonder whether the regulars in those subreddits are typically
professional, healthy people to have on a team, though (honestly no idea,
never been).

It's a strange conundrum because ideally you want good people who are
comfortable watching things good people rarely want to watch.

Don't have much experience with this demographic -- maybe there are more sane,
emotionally healthy people watching snuff films for fun than I instinctively
expect.

~~~
teddyh
> _maybe there are more sane, emotionally healthy people watching snuff films
> for fun than I instinctively expect._

Probably not, since there are no documented instances of a snuff film ever
being found.

~~~
justanotheranon
[https://consortiumnews.com/2019/07/11/the-revelations-of-wik...](https://consortiumnews.com/2019/07/11/the-revelations-of-wikileaks-no-4-the-haunting-case-of-a-belgian-child-killer-and-how-wikileaks-helped-crack-it/)

Thanks to Wikileaks publishing the Dutroux Dossier in 2008, we know hundreds
of snuff films were recovered from the notorious pedo-rapist Marc Dutroux.

Just because the police keep those films as sealed evidence that is never
published does NOT mean snuff films are an urban legend. I can see why the
police always keep snuff films secret. Imagine what would happen if snuff
films were uploaded all over the Internet: there would be mobs of angry
villagers armed with torches and pitchforks descending on jails and prisons
to lynch pedos. It would make harassment from QAnon types look like
schoolyard bullying.

~~~
teddyh
Snuff films are, fortunately, still only a subject of urban legends, since
those films you mention do not meet the definition. Were those films made _in
order to entertain other people_ than the person who made them? Were the films
ever _distributed_ to any of those people? Was anyone ever actually killed
_in_ any of these movies? If the answer to _any one_ of those three questions
is “no”, then the movies, horrible as they may be, were _not_ “snuff” films,
as commonly defined. From what I can see in your reference, the first _two_ of
those things are decidedly _not_ true, and the reference is unclear regarding
the third.

(The films were, according to your reference, made only for _blackmail_
purposes, and therefore certainly not made for enjoyment of any viewer, nor
were they ever distributed to other people for their enjoyment. The reference
does claim that people were killed in a specific place which was also filmed,
but does not, from what I can see, explicitly state that any murders were actually
filmed, other than incorrectly calling all the films “snuff” films.)

~~~
shadowprofile77
I think that some of you should take a look at news websites about the drug
and kidnapping cartels down in Latin America... They regularly post their own
snuff films online and I can guarantee you that they're not only very real but
also extraordinarily brutal. No urban legend about it.

------
intopieces
Every time you use Facebook, see or click an advertisement there, or share
content and get others to engage with the platform, you are contributing to a
platform that is directly responsible for human psychological harm, in many
different ways.

Same for Twitter, and for Reddit, and Instagram... and probably TikTok.

I don’t believe the use of these platforms can be considered ethical.

~~~
Kiro
You are commenting on one of those platforms right now.

~~~
intopieces
HackerNews is a limited-focus link aggregation and comment platform. It
doesn't require the kind of moderation that larger scale, broad-focus social
media platforms do.

I don't envy the work that Dan and Scott have to do in the slightest, but I
don't think they'll end up with PTSD from it. At least, that's what I gathered
when I read "The Lonely Work of Moderating Hacker News"[0], especially this
description of it: "Pressed to describe Hacker News, they do so by means of
extravagant, sometimes tender metaphors: the site is a “social ecosystem,” a
“hall of mirrors,” a “public park or garden,” a “fractal tree.”"

[0][http://archive.is/bbzan](http://archive.is/bbzan)

~~~
twblalock
So you think the only ethical sites are the ones that operate at a small
scale?

~~~
intopieces
I can't say for sure, since I don't know every site. But I would posit that
ethical operation is inversely correlated with user base size.

------
irrational
I'm surprised there hasn't been a revenge movie where one of these
moderators uses their access to find out where these people live and starts
hunting them down like some sort of Batman-like figure. I could see a movie
like that making the general population aware of the terrible stuff content
moderators have to watch.

~~~
slg
This is totally not what you are talking about, but the closest thing I have
seen to that is the Netflix docuseries Don't Fuck With Cats. It hits on a lot
of the same topics. The primary differences are that the hunters are Facebook
users and not moderators and they are hunting the person to get justice and
not violent revenge.

~~~
Tiksi
There's a show called Darknet
[https://en.wikipedia.org/wiki/Darknet_(TV_series)](https://en.wikipedia.org/wiki/Darknet_\(TV_series\))
that's somewhat along these lines too, but it's very much in the horror genre.

------
msapaydin
What I am wondering, and this is probably a dumb question, is why this has
not been automated. Can't content moderation be done with modern, powerful
machine-learning systems? There must be plenty of training data for this, and
just like a spam filter, which does not require humans in most cases, this
should also be automatable. Why is it not?

~~~
nitwit005
It has been. Most of these sites catch a ton of images and video automatically
when they are similar enough to prior known content.

That doesn't matter from a staffing point of view, though. You have a queue
to work through, and you'll be putting in an 8-hour day dealing with the
stuff the system doesn't catch. The automation just means they don't need as
much staff.
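
For anyone wondering what "similar enough" means mechanically: the usual
approach is perceptual hashing (PhotoDNA is the well-known proprietary
example), where near-duplicates land within a small Hamming distance of each
other. A rough sketch using the open-source imagehash library; the filenames
and the threshold of 6 are illustrative guesses, not production values:

    import imagehash
    from PIL import Image

    # Hashes of images human reviewers have already confirmed as violating.
    known_bad = {imagehash.phash(Image.open(p)) for p in ("bad1.jpg", "bad2.jpg")}

    def matches_known_bad(path, max_distance=6):
        """True if an upload is perceptually close to previously flagged content."""
        h = imagehash.phash(Image.open(path))
        # Subtracting ImageHash objects gives Hamming distance; small values
        # survive re-encoding, resizing, and light edits.
        return any(h - bad <= max_distance for bad in known_bad)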

~~~
three_seagrass
Yep. One way to view automation with machine vision/perception is that it
can cover ~85-95% of the true positives.

You're still going to get false positives and false negatives that need human
review, and at the scale of Facebook, that's a lot of humans.
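
Concretely, that split is usually implemented as two confidence thresholds:
auto-action at the extremes, humans for the uncertain middle band. A
hypothetical sketch; the classifier and threshold values are assumptions, not
anything Facebook has published:

    def route(item, classifier, remove_above=0.98, allow_below=0.02):
        """Auto-handle confident predictions; queue the ambiguous rest for humans."""
        p_violating = classifier(item)  # model's estimated probability of a violation
        if p_violating >= remove_above:
            return "auto_remove"   # confident true positive: no human exposure
        if p_violating <= allow_below:
            return "auto_allow"    # confident negative
        return "human_review"      # the residual ~5-15% the model can't cover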

~~~
msapaydin
I am just hoping that the cases automation currently misses are the "milder
or more nuanced" ones, less damaging to the psyche of human moderators, and
that, once labeled correctly, they will improve the coverage rate of the
automated moderators.

------
weresquirrel
If you haven’t seen The Cleaners, I highly recommend it.

[https://thoughtmaybe.com/the-cleaners/](https://thoughtmaybe.com/the-cleaners/)

------
shadowgovt
I can't help but wonder what the numbers look like on this content. One of the
things Facebook should have enough data to know is how prevalent this sort of
thing is.

Is deeply offensive content generated by a handful of users frequently, or
many users less often? What's the volume look like?

~~~
filoleg
Agreed. I would definitely love to see some sort of basic data-analysis blog
post on this topic from FB, similar to how match.com used to publish
data-analysis posts on their blog back in the day.

------
op03
Given that auto-translate works reasonably well these days, how much
commonality does negative content have across different regions and
languages?

Anyone know? Is it like 10%? 80%?

There must be, by now, a whole lot of data on what's getting flagged by
region and language.

Maybe we need a Cloudflare for Content.

~~~
latchkey
Depends on the language. In my experience Vietnamese -> English is horrid and
comes out as mostly garbage. Interestingly, the reverse has been better from
what my Vietnamese friends tell me.

------
ehnto
There are some things I will explicitly avoid in projects I'm working on.
One of them is allowing picture or video uploads from users if the service is
free. There's no low-risk, cost-effective solution for moderating them, and
in my country we also don't have safe-harbour type laws, so any content on
the platform is your responsibility.

~~~
jcun4128
I had considered using Google Cloud Vision or similar services for explicit
image detection anyway.

On most websites, it seems, an uploaded image is immediately available; I
always wonder whether it goes through some basic moderation system or just
waits to get reported.

I've done tagging jobs on MTurk before, and it's weird... seeing random
people's images.
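
On the Cloud Vision point: the relevant feature is SafeSearch detection,
which returns a likelihood per category rather than a yes/no. A minimal
sketch assuming the current google-cloud-vision Python client with
credentials already configured (the threshold choice is mine, and error
handling is omitted):

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    def looks_explicit(path):
        """Hold an upload for review if SafeSearch rates it LIKELY or worse."""
        with open(path, "rb") as f:
            image = vision.Image(content=f.read())
        ann = client.safe_search_detection(image=image).safe_search_annotation
        threshold = vision.Likelihood.LIKELY
        # SafeSearch scores these categories: adult, violence, racy, medical, spoof.
        return ann.adult >= threshold or ann.violence >= threshold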

~~~
ehnto
I would wager most wait for things to get reported rather than be pre-emptive
about it. I wonder if MTurk could ever be fast enough for an approval process
on something like Instagram.

I think vetting users rather than content is probably the most efficient
method of community curation, though. If you have an approval process or
other ways to validate that users aren't nefarious, you would reduce your
workload by a lot. Most bad images seem to come from burner accounts.

Unfortunately, the only thing that comes to mind is a
social-credit-as-a-service style system, and maybe that's not where we want
to steer this ship...

~~~
jcun4128
Yeah, the tagging jobs I did were weird, like "find the baby," haha... and
other obvious video-tagging tasks.

I get the burner-account point from the user side, though. I feel a physical
response when my first post in a sub gets denied, and I'm like "Excuse
me..." But that kind of gate is definitely discouraging and limits whether
people stick around long enough to contribute.

------
Havoc
It also sounds (from other articles) like people are seeing specific clips
multiple times. Surely a bit of semi-AI filtering should be able to blacklist
the regular stuff.

------
praveen9920
The issue is as real as Western countries dumping garbage in Eastern
countries just because they can pay them off.

The local governments won't acknowledge this as a problem because of the
money flowing in and the many livelihoods that depend on it. But we should
call it what it is: exploitation.

~~~
quadrifoliate
I think that, with regard to India, the issue is a little more complex and
intertwined with social mores. This one sentence from the article is really
important:

> According to Dr. K. Jyothirmayi, a Hyderabad-based psychiatrist, stigmas,
> such as the perceived impact of a mental health diagnosis on one’s marital
> prospects, often prevent young Indians from seeking treatment.

Having grown up in India, my view is that meditation and the general state
of the mind have been taken seriously there for, well, centuries. But move
the focus towards anything with 'psych' or 'mental' in the title and people
in India will shy away from it _even if it is affordable or free_. "Seeing a
mental health counselor" is a recipe for considerable social stigma in India
[1], not the empowering practice it's perceived as in the United States.

The solution to this is hard, and in my opinion not something that Facebook or
their Indian subcontractors can accomplish.

-----------

[1] I'm talking "All your relatives will refer to you as the crazy person" or
"Your girlfriend's parents will call off the engagement" levels of stigma, not
just "I don't feel comfortable talking about it in a bar" levels. Like a lot
of other things, families with higher levels of income and education are
_sometimes_ an exception.

------
ericjang
I read somewhere long ago that there have been incidents of FBI employees
tasked with reviewing child pornography [1] who saw so much that they became
desensitized or even aroused by it. Does anyone know if this is well-
documented, or was this an exception rather than a common occurrence?

[1] [https://www.fbi.gov/history/famous-cases/operation-innocent-...](https://www.fbi.gov/history/famous-cases/operation-innocent-images)

------
ed25519FUUU
I can barely read this article without becoming extremely sad for these
people and for the individuals (especially children) who were abused in these
videos. I hope everyone involved gets the healing they need.

It's not the flat-earthers and suchlike conspiracy communities that worry me.
It's the people who share and get enjoyment from this kind of content, and
the networks which enable it.

------
maerF0x0
I can't help wondering about the economic side of this: 1) Should this
victimization be taken into account when punishing offenders who are
eventually tried? And 2) is the damage done to the content moderators
marginally worth keeping the marginally caught content off the platform
(recall that automation already does the bulk of the work)?

We also need to think about the side effects of laws where some US citizen
demands that Facebook employ content moderators or else be liable for hosting
the content: it doesn't result in Facebook executives, engineers, et al.
bearing the burden. It results in some poor person in a third-world country
bearing the burden.

As with environmental issues, I would think the best investment is in total
prevention of the heinous acts in the first place; i.e., it's much cheaper to
prevent CO2 emissions than to clean them up after the fact.

------
totetsu
Is this a case of trying to solve social problems with technical solutions?
Maybe a platform like FB that is not run by the community that uses it simply
cannot regulate users' behaviour the way a real social community can.

------
lowmemcpu
I recall a statistic from about 10 years ago that computer forensic
investigators in law enforcement burn out after two years due to the trauma of
the images they are exposed to.

~~~
throwaway0a5e
There are likely some confounding factors. Pressing "go" on the overpriced
software tools and then entering what you find into evidence is the lowest
level of work in that field, so churn is naturally going to be very high as
people move up or out. The pay also isn't that great.

~~~
mschuster91
No, that is not the issue. Rather, the issue is that even the hardest stuff
on Facebook isn't remotely comparable to the material in actual criminal
cases, and the effort involved is wildly different:

- Facebook: it's violating rules? Delete, next.

- Forensic IT on a multi-TB disk _full_ of child porn: document _every_
photo and what it shows, extract identifiable faces to cross-reference with
other content (to check for recurring places and victims), and the process is
even more gory for video content. You have to watch every second, or the
defense can attempt "you didn't watch the video in full where the perp gives
the victim an ice cream at the end" or whatever else. The amount of time you
spend documenting a single photo or video is many orders of magnitude greater
than for FB content mods.

~~~
henrygrew
This sounds very grievous; it's sad that a human being has to do this work.

------
anonu
Can we build better AI off of the data these moderators have generated, to
make their lives easier?

Can you crowdsource the moderation task to double-check what the AI is
flagging?

~~~
uniqueid

> Can we build better AI

Nope. That's what the people in charge of our social media companies keep
trying, over and over, and it doesn't work at all. AI moderation is to them
what the Ring is to Gollum. They can't accept that "cheap and fair"
moderation
isn't currently possible.

> Can you crowdsource the moderation task

This is the clear answer, imo, and it isn't obvious to me why these C-level
execs avoid it. If I had to make a guess, maybe they think engaging with
millions of their users about moderation would open a can of worms.

~~~
shadowgovt
When the goal is to prevent users from seeing this content, crowd-sourcing
moderation defeats the purpose.

Users don't want the back-stop to be “a critical mass of community members
object”; for content like this, they want to see zero of it, ever, and will
choose to use another site that satisfies that hard constraint if FB cannot.

~~~
uniqueid

> they want to see zero of it, ever

Many _want_ that, but very few (probably nobody at all, by this point)
_expect_ it.

> When the goal is to prevent users from seeing this content, crowd-sourcing
> moderation defeats the purpose.

It's not actually all-or-nothing, and if it were, it would defeat it no less
frequently than paid moderation already does. People tolerated toxic internet
content before social media not because they never encountered any, but
because they encountered much less.

No system can ever stop 100% of users from seeing 100% of the material they
find objectionable. Every user has their own idea of what material is beyond
the pale, and even if we invent AI that is competent at blocking content, and
capable of tailoring its filters per-user, it will still fail some of the
time, because users are people, and people's tastes change over time.

Aside from that, the way YouTube, Reddit, Twitter, et al. currently operate,
a user who reports objectionable content typically waits days, weeks, or
years (or forever) for the company to take action. If you give a user a
little
_real_ agency, it goes a long way to mitigate their displeasure over
occasional exposure to unwanted content.

~~~
shadowgovt
What real agency does the system you propose offer? Because one down-
moderation from a volunteer moderator is a drop in the bucket.

Meanwhile, if your system requires users to view beheading videos regularly,
people will just migrate to a site that doesn't require that.

~~~
uniqueid
If I had to come up with a system, it would be some sort of reputation
hierarchy, with a small number of paid employees at the top. Each level is
responsible for auditing the levels under them. (A toy sketch of this idea
is at the end of this comment.)

> What real agency does the system you propose offer?

The ability for millions of users to moderate themselves. So... fast response
time, by opinionated, emotionally-invested moderators, as opposed to the
status quo: slow response time by paid burn-outs following a flow-chart.

> if your system requires users to view beheading videos regularly

I'm pretty sure that content would be reported faster than anything else. To
be fair, another part of the issue is ban-evasion using multiple accounts, and
that indeed requires additional measures to handle. Sadly, there's a
disincentive to dealing with fake accounts, because trolls count as "sign-ups"
too.

> people will just migrate to a site that doesn't require that.

Like YouTube and Facebook still occasionally do. Well, most users haven't
abandoned them yet, so there goes that theory :)
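
To make the reputation hierarchy described at the top of this comment
concrete, a toy sketch; every name, number, and the injected review function
are hypothetical, not a worked-out design:

    import random
    from dataclasses import dataclass

    @dataclass
    class Moderator:
        name: str
        level: int          # 0 = volunteer; each level audits the levels below it
        reputation: float = 1.0

    @dataclass
    class Decision:
        item_id: str
        moderator: Moderator
        verdict: str        # "remove" or "allow"

    def audit(decisions, auditors, review, sample_rate=0.05):
        """Spot-check a random sample of lower-level decisions; overturned
        calls cost the original moderator reputation, upheld ones earn a bit."""
        for d in random.sample(decisions, max(1, int(len(decisions) * sample_rate))):
            higher_ups = [a for a in auditors if a.level > d.moderator.level]
            if not higher_ups:
                continue
            if review(random.choice(higher_ups), d) != d.verdict:
                d.moderator.reputation *= 0.9
            else:
                d.moderator.reputation *= 1.01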

~~~
shadowgovt
> occasionally

Precisely. How often? Close to never.

~~~
uniqueid
Give or take a live-streamed Christchurch massacre?

~~~
shadowgovt
Yes. One disastrous livestream in the history of the feature, with further
controls added almost immediately as a result.

~~~
uniqueid
Neither YouTube nor Facebook is currently able to filter _all_ of these
incidents. This happened _after_ Christchurch:

[https://www.bangkokpost.com/thailand/general/1853804/mass-sh...](https://www.bangkokpost.com/thailand/general/1853804/mass-shooter-killed-at-korat-mall-20-dead)

I could swear there was also a Christchurch copy-cat attack in Europe, but I
can't remember sufficient details about it to find an article. Perhaps that
one wasn't streamed on FB.

I don't have encyclopedic knowledge of Facebook atrocities (I closed my
account nearly a decade ago), but without vetting content before it goes live,
I don't see how they will entirely prevent these videos from reaching _some_
users.

~~~
shadowgovt
I don't know that _all_ is realistic. It's the desired unattainable goal.
_Almost all_ is the current status quo. If a service like FB were to adopt a
volunteer moderator model, that track record would crash by definition because
the moderators would be seeing the garbage.

(That's before we factor in unintended consequences such as the risk that a
critical mass of moderators decides the garbage is signal and starts passing
it. It's more risk than FB wants to take on for a problem they already solve
via paid employees.)

------
zacharycohn
About 9 years ago, I found myself at a party consisting mostly of content
moderators for The Cheezburger Network.

I walked out of there shellshocked.

------
Thorentis
Only a human should be able to decide what other humans can and cannot see.
A future where computers are responsible for "deciding" what information we
consume about the world is not one I want to live in. Computers are already
being used as tools of censorship; we don't need that expanded further.

~~~
ece
If computers can help with spam, they can certainly help with other
well-defined and well-audited types of illegal and misinformed content.

Not investing in more automation would be worse; it might even be
malpractice.

------
afaq404alam
I hope they are tagging these videos and someone somewhere is working on a
neural network to classify the new ones.

------
alecco
Funny how they can automatically flag videos for copyright now, even when
mirrored, but they can't identify these videos.

I can't believe there's that much original content of this kind. Maybe they
could share a DB of hashes of known bad videos across sites and government
agencies.
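
The mirroring trick defeats exact file hashes, but it's cheap to handle at a
perceptual-hash layer: hash each sampled frame and its mirror, and match
against the shared list by Hamming distance. A sketch with the open-source
imagehash library; the shared feed, the frame sampling, and the distance
threshold are all assumptions:

    import imagehash
    from PIL import Image, ImageOps

    # The shared "db of hashes": perceptual hashes of frames from videos that
    # other sites or agencies have already confirmed as bad.
    known_bad_frames = set()  # populated from the hypothetical shared feed

    def frame_is_known_bad(frame, max_distance=6):
        """Check a sampled video frame and its mirror against the shared list;
        hashing the mirrored frame too is what defeats the flip trick."""
        candidates = (imagehash.phash(frame),
                      imagehash.phash(ImageOps.mirror(frame)))
        # Compare by Hamming distance, not equality, so re-encoded or lightly
        # edited copies still match.
        return any(h - bad <= max_distance
                   for h in candidates for bad in known_bad_frames)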

------
zitterbewegung
This is also relevant:
[https://www.theguardian.com/technology/2017/jan/11/microsoft...](https://www.theguardian.com/technology/2017/jan/11/microsoft-employees-child-abuse-lawsuit-ptsd)

------
Magodo
I don't understand what happened to the US contractors after the successful
lawsuit; surely all those jobs were just moved offshore? There's no mention
of this at all in the article....

------
amelius
Perhaps a solution would be to use brainwave headsets: whenever the viewer
got too much negative stimulation, the video would stop and the viewer could
take a break.

~~~
droopyEyelids
The headset could also put a mark into their performance review for failing to
develop a sufficient coping strategy.

------
shadowgovt
This story is an interesting contrast to the story of Twitter dropping users
for spreading the QAnon conspiracy.

Clearly, there's near-universal agreement that _some_ moderation is fine;
there are only a handful of (downvoted) comments on this thread saying "the
easiest way to fix this problem would be for Facebook to stop censoring posts
and let users block what they don't want to see." But we clearly don't think
that's a good solution in this case.

The question of whether QAnon should also be blocked is one of degree, not
kind.

------
paulpauper
How do you become a Facebook moderator?

------
atlgator
Is Facebook worth it? Is it worth the trauma?

~~~
paulpauper
If it weren't worth it, people would not do such jobs and would not use
Facebook; apparently, for a lot of people, it is worth it.

------
fareesh
Some great undercover video of what goes on at Cognizant here, as far as bias
in moderating actions is concerned:

[https://www.politicalite.com/latest/facebook-employee-if-som...](https://www.politicalite.com/latest/facebook-employee-if-someone-is-wearing-a-maga-hat-i-am-going-to-delete-them-for-terrorism/)

The video shows that, regarding beheadings, Facebook made a point of sending
their moderation contractors a memo stating that a frequently reported image
of President Trump's face with a knife at his neck, depicting him being
beheaded, was an exception to the content policy on
beheadings/violence/incitement, because it was considered "art" by some
museum somewhere in Portland, which, from what I can see on TV, appears to be
an extremely volatile and violent place in some parts.

There is a reference to a cartoon post of Elmer Fudd shooting a cartoon gun
at Beto O'Rourke with the text "I'm here for your guns" at the top, which was
not given any similar exception.

I don't think Facebook or their partners are serious about this kind of
thing at all, since they seem OK with advocating violence and gore when it
resonates with certain personal opinions.

Also noteworthy: the name of this whistleblower garners fewer search results
on Google than the name of another whistleblower, Eric Ciaramella, which, if
uttered a la Voldemort, would get you banned instantly.

Broadly, content policy at these websites seems to be a Lord of the
Flies-style, unprincipled hack job of whatever the political machinery at the
organization deems worthy of steering society in the direction of its whims.
Given the influence and power of Facebook and other tech platforms, this
should worry people. I'm less inclined to be sympathetic to these folks and
the difficulty of their jobs, given their apparent willingness to become
social constructionists with the power they wield so irresponsibly.

------
jbrennan
In my opinion this is an argument in favour of shutting Facebook (and every
other large social network) down. This sort of work is damaging and abusive,
and nobody should have to endure it just so we can have social networks.

I understand there are some jobs in the world that need to deal with dark
stuff (like law enforcement), but social networks just aren’t worth the human
cost.

~~~
dangus
That’s a non-solution.

What you’re proposing is not just shutting down social networks, it’s shutting
down any website that involves user content, anything that allows photo/video
upload, comments, or any kind of user interaction. That’s impossible.

You point out that public safety jobs are viewed as more “worth it,” and
certainly they are, but that logic raises the question of who judges which
jobs are worth undergoing trauma for.

In other words, is a subway or freight train driver’s job “worth it,” if they
have to see someone commit suicide on the tracks? What about crime scene
cleanup companies? Funeral services? Bus drivers? Truck drivers? Nobody’s
going to agree on where to draw the line in the sand.

A more realistic solution might be to make comprehensive support systems,
mental health resources, and treatment a legally mandated, completely free
service provided to any employee that works in these kinds of fields.

Finally, I think there are most certainly people out there who are not as
sensitive to or affected by this content, and who would be candidates for
these kinds of roles. Perhaps there’s a way to test for that sensitivity
before the real job starts.

~~~
jbrennan
I’m actually not proposing shutting down any website that allows uploaded
content, just large / public sites that require this sort of moderation. Not
every site gets this stuff uploaded to it. The more private the network, the
less need for this kind of company-led moderation.

As far as “worth it” goes, some people have to be exposed to it so long as we
have law enforcement (but I’m certainly open to alternatives here). I’m not
sure the train operator is a fair comparison, because seeing a suicide is an
exceptional circumstance in their job; it’s not the norm. The content
moderators, however, are sadly expected to be exposed to traumatizing content
as part of their job description — it’s essentially the point of their job.

There are plenty of kinds of work we deem as hazardous to people’s health, and
thus are either banned or regulated. I’m not sure if there’s a healthy way to
expose people in these moderator jobs to the traumatizing content they face.
It just doesn’t seem worth the tradeoff to endanger them like this.

~~~
dangus
Think like a legislator. How do you write this regulation?

> [shut down] just large / public sites that require this sort of moderation

Let’s say I start a restaurant review website that allows comments and photos
to be uploaded. It does modest business for a while, I now have 50 employees.
I’m following the law because my site isn’t big enough to violate this “no
user content for big prominent websites” law.

Soon, it becomes big, like a major competitor to Yelp, and I’ve got 1,000
employees. But suddenly, this new law kicks in that says that I have to stop
accepting uploads because my site is too high profile. Now, I lay everyone off
and go out of business.

This just isn’t a workable solution, at least not in the particular way you’re
proposing it be constructed.

And really, you’re asking the second largest advertiser on the web (Facebook),
a Fortune 50 company, to just pack up its bags and shut down.

It’s not like I love Facebook or anything, but I’m sure their 45,000 employees
wouldn’t be happy about that.

------
pantaloony
Radical idea: if your service requires subjecting lots and lots of workers to
scarring images, the correct solution is for that service not to exist.

Facebook doesn’t have to let people post material to their site with very low
barriers to entry. Granted they have to if they want to continue to be what
they are and to make tons of money... but maybe they shouldn’t continue to be
what they are and make tons of money if this kind of abuse is necessarily
coupled to those outcomes.

“Well this sucks but we can’t keep operating if we don’t do it.” Well...
sure, but the solution is _right there_ in that statement. Don’t keep
operating.

~~~
rosywoozlechan
Any place where anyone can add user content has this problem. Anything with
an input box and a file upload would qualify under your “stop existing”
solution.

~~~
pantaloony
1) That seems basically fine, but also 2) non-commercial efforts run by
hobbyists and gated from the general public ought to be OK. If you want to
run a phpBB site and subject _yourself_ to harmful garbage by letting randos
write to your server, well, go nuts.

[edit] thought experiment: how many of the people making tons of money off
Facebook—c-suite, major shareholders—would find some other way to make money
if continuing to make Facebook mega bucks meant _they_ had to do this 5 days a
week? What would we think of any of them who chose “bring on the trauma, I
want those sweet greenbacks” and kept it up for _years_?

