
Catastrophic effects of working as a Facebook moderator - seek3r00
https://www.theguardian.com/technology/2019/sep/17/revealed-catastrophic-effects-working-facebook-moderator
======
_Microft
The situation might not be much better on other services of that scale
(YouTube, image hosts, ...), but what really makes the difference with FB for
me is that it reminded me of a tweet from a FB employee who claimed that the
amount of good that Facebook does (might have been 'can do'?) is _unlimited_.
Unlimited, ffs? I can only wonder how self-perception and reality can diverge
so enormously.

(My eternal gratitude to the one who can dig up the tweet. All my efforts were
futile so far.)

~~~
Kiro
You're doing the same thing by claiming your opinion is "reality".

~~~
gerbilly
I think this is a false equivalence. His claim is the stronger one in my
opinion.

~~~
cthaeh
I think his opinion is far stronger. Facebook is a far better company than
Apple. At least they don't hate poor people.

How crazy I sound to you is how crazy you all sound to me.

~~~
Grangar
A company that considers its users its product can't get much lower, IMO.
Remember that time they experimented on users' emotional states without any
consent or notification? [1] That's when they started to show their true
cards. To me, FB is on its own level, far below even oil companies and big
banks, probably around the same ranking as PMCs. You'd be hard pressed to find
me something worse.

Apple is no saint but FB's total disregard to its responsibility and position
in the world is so much worse.

1:
[https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook...](https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook-manipulated-689003-users-emotions-for-science/)

~~~
cthaeh
Not really, when that service is offered free of charge. I am pretty sure most
people in India and Africa don't give a shit that Facebook uses their data to
serve them ads. They, and the world, get an invaluable service in return.

I certainly don't care.

~~~
gerbilly
Facebook uses their data _today_ to target ads.

What the data gets used for later by Facebook, the government or organized
crime, when it inevitably gets stolen, is anybody's guess.

~~~
cthaeh
Apple uses their data _today_ to create various products.

What the data gets used for later by Apple, the government or organized crime,
when it inevitably gets stolen, is anybody's guess.

------
TheOperator
>They also said others were pushed towards the far right by the amount of hate
speech and fake news they read every day.

There's something wrong with mainstream reporting if mere exposure to social
media turns people far right. It really strikes me that people are trapped in
some pretty strong filter bubbles if mere exposure is enough to change
political belief.

Spend a week on a far-right community and you'll be shown more stats pointing
to a far-right conclusion than you can critically evaluate. In any internet
discussion of police racism, for instance, FBI crime stats will be mentioned
in a heartbeat, but I don't think I've seen a mainstream journo bring them up
once. Social media and mainstream media fundamentally follow different schemas
of information, simply because even bringing up certain data can cause a
mainstream journo reputational damage.

This also creates an inverse filter bubble, where hateful ideas that actually
have refutations don't get refuted, because people refuse to discuss the ideas
on principle. Much of the data cited is crap, and many of the interpretations
are crap, but they're not meaningfully contested.

~~~
caconym_
> There's something wrong with mainstream reporting if mere exposure to social
> media turns people far right. It really strikes me that people are trapped
> in some pretty strong filter bubbles to the point mere exposure is enough to
> change political belief.

A different conclusion to draw from this is that far-right interests are
responsible for the majority of the objectionable content on social media. One
might further suggest that said content is deliberate propaganda, designed to
push people to the right, and that this is a central pillar of their strategy
that isn't shared to the same degree or extreme by other political factions.

This isn't "mere exposure". I haven't read the article so please correct me if
I'm wrong but this is a job, a place they go to sit every day to be bombarded
with this crap. To some extent they _have_ to sit and let it wash over them—I
don't imagine the people doing these jobs have much career mobility. IMO it's
not realistic to suggest that if they were just better-informed, they wouldn't
suffer these effects. The mind is not an inviolable fortress—no matter how
strong you think your defenses are, they can be worn down.

~~~
TheOperator
>One might further suggest that said content is deliberate propaganda,
designed to push people to the right, and that this is a central pillar of
their strategy that isn't shared to the same degree or extreme by other
political factions.

This is certainly part of it. I've literally watched Russian propaganda around
the Syrian war featuring a "Canadian independent journalist". Yet overall this
explanation strikes me as unsatisfying. From what I've heard from researchers,
propaganda techniques have been less about promoting the right wing
specifically and more about spreading social discord generally. The Russians
had efforts to promote Jill Stein that were pushing left-wing rather than
right-wing propaganda. And I've never really bought that propaganda efforts
weren't getting overwhelmed by the influence of regular users generally. It's
possible, but I haven't seen serious attempts to prove it.

>I don't imagine the people doing these jobs have much career mobility.

This is actually a more interesting criticism to me. Perhaps internet
moderators are more drawn to far-right thought than others because of their
life circumstances? This is a population that's lower-income and tech-savvy.

------
koevet
I recently met a guy in Berlin who is employed by a company that is a FB
subcontractor for content screening. When I met him, he had been on sick leave
for 2 weeks because of the awful working conditions and the stuff he had to go
through daily. He was obviously looking for a new position and, I can attest,
he looked "damaged".

------
MrGilbert
One idea: To cope with all this, reduce the flagged posts presented by the
algorithm to maybe 2 - 3 hours a day. For the remaining 5 hours, the algorithm
could show them chats, pictures and videos that are harmless and uplifting.
This might create a counterbalance, showing that there is still something good
in social media.

I think that's how RL works in most countries. You are not running through the
streets getting slapped with crime and the like all the time.

Of course, this also means you need more people to deal with the same amount
of content as you do today.

~~~
duxup
I think you might be on the right path; at the same time, mandated "happy
picture time" seems like it might be its own weird world to live in.

Moderator1: "Hey man how is it going today?"

Moderator2: "Pretty good, it's kitten day."

~~~
MrGilbert
That sounds pretty surreal, I agree.

------
cmg
I spent most of last Friday keeping an eye on 4chan's /pol/ after finding out
that morning that users were planning an attack on my job.

Even just looking for one day, it took a serious emotional toll on me. I've
definitely seen some awful things on the Internet but the constant bombardment
of hate speech, racism, anti-Semitism, and all sorts of disturbing images and
text over the course of 6 hours made me feel physically sick a number of
times, and I had to take extra care to rest the next day.

This is anecdotal of course, but I can't imagine what the Facebook moderators
go through having to process at least one ticket a minute.

~~~
papreclip
Or the 4chan janitors, for that matter.

~~~
Smithalicious
Even worse than Facebook moderators, since they do it for free

~~~
duskwuff
OTOH: my understanding is that most (if not all?) 4chan janitors are recruited
from 4chan itself, so they have a pretty good idea of what they're getting
into.

------
moofight
There are many aspects to this, but it seems that the strongest and most
lasting effects come from visual content (especially raw content, violence...)
rather than text (even though text can be violent).

Which is why automated image/video moderation solutions (such as Google
Vision, Amazon Rekognition, Sightengine.com, Hive) will continue to grow: not
only because they are cheaper and faster, but because they are becoming a
necessity, or at least a first filter to weed out the "worst" content.
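To make the "first filter" idea concrete, here is a minimal sketch of
score-based triage. It assumes a classifier that returns a probability that
content is violating; the function name and thresholds are made up for
illustration, not taken from any real service:

```python
def triage(score, auto_remove=0.98, auto_allow=0.02):
    """Route content based on a classifier's violation probability.

    Content the model is very confident about is handled automatically;
    only the uncertain middle band is queued for a human moderator.
    """
    if score >= auto_remove:
        return "auto_remove"
    if score <= auto_allow:
        return "auto_allow"
    return "human_review"

# The highest-confidence "worst" content never reaches a person;
# humans only see the gray zone.
print(triage(0.999))  # auto_remove
print(triage(0.001))  # auto_allow
print(triage(0.6))    # human_review
```

The trade-off is where to set the thresholds: tighter bands mean fewer
moderation errors but more traumatic material routed to humans.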

~~~
mandevil
At a $previousJob I had some tangential contact with professionals who track
child pornography, trying to identify and free the kids (people involved in
catching
[https://en.wikipedia.org/wiki/Christopher_Paul_Neil](https://en.wikipedia.org/wiki/Christopher_Paul_Neil)).
They felt that automation was of little help for what they were doing, and
that every image had to be looked at by at least one human (most of the images
by more than one). They had a few tricks (apparently looking at the image in
B/W helped lessen the trauma), but they did not find value in the automated
tools we tried to build to help them.

Now, they felt much more empowered than Facebook's moderators: they kept going
because the goal was to stick cuffs on the wrists of the guys who were doing
this and get those kids away from them, and they could put up with all of the
rest for that goal. They were treated as rockstars by the rest of the people
they interacted with, because they were the ones who got kids away from the
predators. They had frequent opportunities to take breaks and could set their
own schedule, driven only by the guilt that the longer they delayed, the more
time the kids spent in the predators' hands.

Ultimately, feeling empowered to make a difference in the world is key, and if
Facebook treated screening as an important job and gave their moderators more
power to set their own working conditions I suspect that it would improve
their mental health by quite a bit.

~~~
hos234
Good point about empowerment.

I hope they are investing in an army of shrinks/psychologists/sociologists to
study, improve, and supervise these centers, because this stuff is not going
away just by deleting content.

------
forinti
They could use some image processing to make the videos look cartoonish, at
least for the first analysis.

That might soften the blow to the screeners.
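A rough sketch of the idea, in pure Python with made-up function names (a real
pipeline would also apply edge-preserving smoothing, e.g. a bilateral filter,
before this step): posterizing each channel to a few flat bands is the core of
the "cartoonish" look, discarding the fine detail that makes footage feel real.

```python
def posterize_channel(value, levels=4):
    """Quantize an 8-bit channel value (0-255) to `levels` flat bands.

    Collapsing the color space to a handful of bands is what gives
    frames a cartoon-like, less photorealistic appearance.
    """
    step = 256 // levels
    return min(255, (value // step) * step + step // 2)

def posterize_pixel(rgb, levels=4):
    """Posterize one (R, G, B) pixel; map this over every frame pixel."""
    return tuple(posterize_channel(c, levels) for c in rgb)

# Nearby shades collapse into the same flat band:
print(posterize_pixel((17, 130, 250)))  # (32, 160, 224)
```

Whether such a transform actually reduces the psychological impact is an
empirical question; the mandevil comment above suggests even simple tricks
like B/W viewing help somewhat.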

------
duxup
I wonder what the solution is here.

To expose people to this stuff continuously seems wrong.

Then again, so does exposing everyone to it; it would probably kill the
service if it wasn't dealt with by someone.

Another concern: people who can handle these situations in a healthy way might
be few and far between, and generally the folks exposed to it are hired into
low-pay, outsourced, warm-bodies-in-chairs kinds of situations.

~~~
JKCalhoun
What if the answer is to kill social-media-as-we-know-it?

There are some sick and "extremist" people in this world. If their stage were
relegated to email threads I imagine we would not be having this discussion.

But I think what bothers me the most, and this is alluded to in the article,
is not that some extremist is posting crap (no surprise there), but that
seemingly once-normal relatives of mine are consuming this crap and then
reposting it, becoming extremists themselves.

~~~
shantly
> What if the answer is to kill social-media-as-we-know-it?

This is the real answer. If running a service _requires_ paying people to look
at awful, damaging material—stop running the service. It shouldn't exist. It
doesn't need to. Absent things like Twitter and Facebook we'd find other ways
to keep in touch with people who matter. Special Email clients that do most of
the friends & family stuff that FB does but over email or whatever. Some new
protocol. Progress on that kind of stuff only stalled because we've got the
spyvertising- or VC-funded, moderator-harming services to compete with. Take
those away and the void _will_ be filled, with hardly a hiccup.

"Well gee whiz we just can't run our service without doing this to our
moderators, so I guess we have to"—no, you don't, period, at all. You can stop
running your service, or live with hurting people and stop acting like you
care.

~~~
JKCalhoun
I would suck as a CEO. The first time someone live-streamed an
attack/rape/murder on my platform I would pull the live feature.

------
bedhead
Yet another reason to stop trying to have a central authority, in this case
Facebook moderators, police speech. It can't be done effectively or without
major side effects. Let people filter on their own; we all do it every day in
the real world, and it works just fine.

~~~
Pfhreak
You are a lot less likely to run into violence, gore, exploited people, etc.
in person. The answer to "human moderators get hurt by the constant stream of
truly awful stuff" is not "let the stream of truly awful stuff go straight to
everyone".

------
raslah
What strikes me about this issue as a whole is what it says about the true
state of "AI". This is a perfect job for such technologies. How is it that
we're already making 'deep fake' videos and audio, but we can't feed a video
stream, which is just a stream of images, to an algorithm that can determine
whether it's inappropriate?

I recognize that some such tech is being utilized on the front end in this
case, and that the problem is non-trivial, but I see this as FB saying 'good
enough' and not pushing as hard as they could to improve the tech to where it
can be trusted to make the decision. I sense that they may be telling
themselves they're doing social good by 'creating jobs'. Why must humans be
subjected to this torture? What happened to "move fast and break things"? Why
not put the algorithms out front, let them have the final say, and let them
learn and improve quickly? I suppose just because meat is cheaper than chips.

~~~
traek
> I recognize that some such tech is being utilized on the front end in this
> case, and that the problem is non trivial, but I see this as FB saying 'good
> enough' and not pushing as hard as they could to improve the tech to where
> it can be trusted to make the decision. I sense that they may be telling
> themselves they're doing social good by 'creating jobs'.

This seems like a weird take to me. Why would this be your conclusion rather
than that the technology isn't good enough yet?

~~~
raslah
Because the move to hire so many moderators so fast was a big, expensive one
that seemed like more of an implementation for PR purposes, considering some
of the troubles the company was experiencing. When your motto is 'move fast
and break things', and you implement features like live video streaming
without much thought to the full ramifications of doing so, it would seem,
even if only to me, that the company might not be afraid to take a leap on
tech that's 'not quite ready'. Again, I know such a hasty implementation could
hurt quality, but given what's at stake (human health), they might get credit
for doing the right thing. Obviously, though, in our society they wouldn't get
such credit and would only get flamed for a bad user experience when the wrong
videos get taken down, so the mental health of a few humans who willingly
signed up is a small sacrifice for profitability.

------
thatguyagain
If you think about it, you have a large group in society who spend 8 hours a
day watching content so controversial it won't even reach the rest of us. Just
like any other company, people eventually quit, and new people join. Now if
you were the bad guy wanting to nudge a % of society in a specific political
direction, wouldn't this moderator group be a perfect target? Just bombard
[insert social media platform] with propaganda content and you _will_ reach
this group of people even if the content never appears on the platform. What a
messed up situation.

~~~
tomatotomato37
Not really feasible; a group of a couple thousand converts is useless if they
aren't concentrated in a specific region, and with Facebook's love of
outsourcing you can't even be sure they are in the same country.

------
post_break
Maybe you need people who are just numb to it? It seems like the kind of gig
that would attract the kinds of people who like seeing it. I know that's
morbid to think about, but look at subreddits like the late /r/watchpeopledie
and /r/enoughinternet (/r/morbidreality is still around, luckily). You get
people who don't mind the gore and terrible things, pay them to sift through
it, and see what the outcome is?

~~~
michaelt
Maybe for gore. For child abuse images, not so much.

I can only imagine the aghast looks one would get from the PR and HR people,
just for _asking_ about forming a team of people who liked looking at child
abuse images!

~~~
dmurray
The conversation might go better if you framed it as paying the members of
that team less than they'd get in comparable jobs with less child abuse.

------
apolymath
Back in 1999 when I had AOL 4.0 at the age of 16, I would frequent
www.goregallery.com and other various gore-related websites that showcased
real crime & accident scenes from around the world. Still to this day, I am
fascinated by that kind of content. I don't really seek it out like I did as a
teenager, but it excites me nonetheless.

------
teachrdan
The solution here may be for all of us to flag posts that we know are benign,
like puppies or birthday greetings. This would reduce the percentage of
disturbing content moderators are forced to see, and costs us nothing at all.

~~~
Cthulhu_
That's not a solution; that's just increasing their workload. It doesn't
remove the root cause, nor the fact that people still have to assess a lot of
content, which apparently the super-smart AIs can't identify themselves.

------
stickfigure
Just to be clear: There are no statistics in this article, just some anecdotes
selected to push a particular narrative.

I don't claim to know whether working as a FB moderator is 'catastrophic' or
not, but this article makes an emotional case for it rather than a rational
one. It reminds me of the reporting around the Foxconn suicides: sure, each
one is a tragedy, but it turns out the Foxconn suicide rate matched the
national rate.

~~~
titzer
"There are no statistics in this article, just some anecdotes selected to push
a particular narrative."

Please don't reduce people to data like this. Please don't automatically
assume bad faith on the part of the authors of the article. Consider that
there are actual people involved here, reporting their experiences. Don't make
it into some kind of social-justice battleground or science experiment; just
consider their perspective without instant judgment.

~~~
iamnotacrook
You could say exactly the same about anti-vaxxer nonsense. The lack of
statistics makes it something where you can only wring your hands and say "oh,
how awful" and not "yes, but..." and consider meaningful alternatives.

~~~
titzer
There is a stark difference: the anti-vaxxers are directly advocating for
dangerous policy changes. I don't see anything of the sort here.

~~~
chii
The problem is that, to someone on the fence about vaccination, articles that
appeal to emotion and don't back up claims with data still carry legitimacy.
Anti-vaxxers write the same way.

~~~
Apocryphon
This article cites clear examples of psychological harm and trauma being
inflicted upon Facebook moderators. That itself is sufficient data.

------
0xADADA
The very fact that Facebook outsources its censorship to contractors
underlines the precarity of a workforce that is considered both disposable in
the short term and irrelevant in the long term.

Ultimately, Facebook wishes to replace this workforce with AI automation where
possible, and then heteromate the remaining human work by enticing users of
the platform to inform on each other over posts that don't abide by the
platform's implicit social norms or opaque moderation rules.

------
J-dawg
There seems to be a fundamental hypocrisy in a legal system that considers
some material so fundamentally _corrupting_ that it must be illegal for people
to possess or see it.

If that were truly the case, it should also be illegal to compel someone to
look at this same material as part of their employment.

~~~
jacquesm
I wonder how you look at vice-squad members?

And from what I've seen they are quite happy to receive tips about abusers
from the public and businesses that care enough to review their content. For
society to work some of us have to step forward and agree to do the dirty
work.

That some people are doing this for a living without proper counseling and
guidance is another matter; you can lay that at Facebook's door. The few times
that I've been exposed to that crap were enough to make me shut down some
services.

In fact, I think that _any_ business that deals in user generated content
should assume the cost of business that goes with that and have a system of
flagging and escalation in place.

~~~
J-dawg
> I wonder how you look at vice-squad members?

It was a poorly formed argument, but you've hit on the absurdity that I was
trying to point out. Obviously _somebody_ has to look at this stuff.

But having low paid, poorly trained people do it all day, every day? I don't
think it's crazy to suggest that shouldn't be legal.

And if Facebook can't exist in its current form without these teams, then I'm
totally fine with that.

