
They should recruit people on Reddit for this. A large number of them seem perfectly fine with this type of content.

There are dedicated communities for all of these things.

I do not mean this as a hit against Reddit users. It is just that there are small pockets of people who are psychologically capable of tolerating this or even enjoy it. It seems more ethical to hire people from /r/watchpeopledie.

I spent my youth on 4chan. I saw everything that's mentioned in the article and worse.

Desensitization is certainly a thing. The first few times, seeing those things was shocking and disturbing. Then it got exciting. Then it got boring. I don't actively look for that content, but never having left 4chan, I still sometimes stumble upon it and it causes no big reaction. I think I could totally do that content moderation job; I have to admit I feel some morbid curiosity about what I would stumble upon. The only content that never stopped disturbing me was violence against animals.

Talking with my "normie" friends about this I realized I am the odd one out. I wonder just how much consuming that content in my youth changed me.

I'd wonder if frequenters of these subreddits are typically professional and healthy people to have on a team, though (honestly no idea, never been).

It's a strange conundrum because ideally you want good people who are comfortable watching things good people rarely want to watch.

Don't have much experience with this demographic -- maybe there are more sane, emotionally healthy people watching snuff films for fun than I instinctively expect.

For me it was more like watching nature documentaries, morbid curiosity and all that. There wasn't really anything I'd consider snuff on that subreddit, it was natural death, murder, and accidents, all of which are part of natural life.

I found most of the videos less offensive than extremely graphic horror movies, which ironically I have less stomach for. If anything that subreddit made me generally more aware of my surroundings and safer in my daily life.

As long as they have a pulse and can sit in the chair long enough to do the job they'll likely be fine. Content moderation farms, call centers and the like tend to optimize their internal processes such that they don't need to restrict their hiring to "professional and healthy people". There are plenty of people who are insane and unhealthy that can hack this sort of work.

Do you want "insane and unhealthy" people who voluntarily look at horrifying stuff deciding whether that stuff should be allowed on the platform, though?

It'd be like having a jury pool made up of serial killers.

>Do you want "insane and unhealthy" people who voluntarily look at horrifying stuff deciding whether that stuff should be allowed on the platform, though?

These kinds of bottom of the barrel places of employment tend to have their policies and procedures tuned to make sure employees do things the way the company wants them done. That's just the nature of doing an unskilled, non-laborious job for a BigCo. There will be automatic spot checks, audits, reviews, etc. It is no different than the cube farm on the next floor where they review images of documents that AI systems have flagged as possibly fraudulent. Such systems are likely in place already for their current moderators.

> These kinds of bottom of the barrel places of employment tend to have their policies and procedures tuned to make sure employees do things the way the company wants them done.

Yes, but their hiring pool tends to be "everyone".

Those procedures might not work as well if your hiring pool is explicitly confined to "the people who watch ISIS torture videos for fun". I suspect standard policies and procedures include "fire the people who get off on this stuff".

Why not? They don't benefit from it being public. They do benefit from not being fired and presumably there would be audits.

This is usually a tempting-fate question, but what is the worst that could happen?

They have little power to abuse and obvious accountability. If they block kitten and baby pics and allow people being tortured to death they'll get fired quickly after the inevitable shitstorm.

Insane, unhealthy, or creepy doesn't necessarily mean dangerous. Even if they got twisted pleasure from it, they would be gross but not inflicting actual harm. It isn't like people are being brutally murdered to give them kicks - they aren't a source of demand.

> maybe there are more sane, emotionally healthy people watching snuff films for fun than I instinctively expect.

Probably not, since there are no documented instances of any snuff films ever found.


Thanks to Wikileaks publishing the Dutroux Dossier in 2008, we know hundreds of snuff films were recovered from the notorious pedo-rapist Marc Dutroux.

Just because the police keep those films as sealed evidence that is never published does NOT mean snuff films are an urban legend. I can see why the police always keep snuff films secret. Imagine what would happen if snuff films were uploaded all over the Internet? There would be mobs of angry villagers armed with torches and pitchforks descending on jails and prisons to lynch pedos. It would make harassment from QAnon followers look like schoolyard bullying.

Snuff films are, fortunately, still only a subject of urban legends, since those films you mention do not meet the definition. Were those films made in order to entertain other people than the person who made them? Were the films ever distributed to any of those people? Was anyone ever actually killed in any of these movies? If the answer to any one of those three questions is “no”, then the movies, horrible as they may be, were not “snuff” films, as commonly defined. From what I can see in your reference, the first two of those things are decidedly not true, and the reference is unclear regarding the third.

(The films were, according to your reference, made only for blackmail purposes, and therefore certainly not made for the enjoyment of any viewer, nor were they ever distributed to other people for their enjoyment. The reference does claim that people were killed in a specific place which was also filmed, but does not, from what I can see, explicitly state that any murders were actually filmed, other than incorrectly calling all the films “snuff” films.)

I think that some of you should take a look at news websites about the drug and kidnapping cartels down in Latin America... They regularly post their own snuff films online and I can guarantee you that they're not only very real but also extraordinarily brutal. No urban legend about it.

A snuff film is a video in which someone actually IRL dies. There are about a zillion of them.

A snuff film is defined by most as at least having been made for the purpose of entertaining others. Some include the proviso that it has to have been made for profit, and yet others demand that it has at least been sold. But regardless, not even the first of these criteria has ever been fulfilled.


That article was last updated in 2006 and does not take into account ISIS or Mexican drug cartel videos, which arguably meet your first definition and which many would consider snuff.

One could hardly make the argument that those videos were made to entertain anybody for their enjoyment. Terrorists make videos to terrorize, i.e. scare, people, not for people to enjoy.

Making the argument that it is at least partially made for the entertainment of other terrorists doesn't strike me as outlandish. Personally, I wouldn't make that argument though.

Regardless my point was just that in my experience many would consider them 'snuff' even if it doesn't meet the exact technical definition.

> my point was just that in my experience many would consider them 'snuff' even if it doesn't meet the exact technical definition.

And those people would simply be wrong.

I mean, let’s back up a bit. Why is it important whether or not “snuff” films exist? Because many people want to believe the common urban legends about it – i.e. that there exists an (be it ever so small) underground production and market for it, even though, by all accounts, this does not exist, and has never existed. Believing such urban legends helps people feel justifiably horrified and helps validate their cynicism; basically the same reason why many people believe many other urban legends. However, no movies fitting that particular model have ever been found to exist.

If anyone wants to be picky about the definition and claim that some movie or other fits the definition of “snuff”, or is “considered” to fit even though it doesn’t(!), this is only because they want to be able to say “see, snuff films exist!”, and thereby validate their preexisting belief in the urban legend. But this is a motte-and-bailey fallacy; it tries to defend the urban legend (which is hard) by trying to defend a technical definition of a word that is slightly wider than it really is (which is comparatively easier). Luckily, even though it is easier to defend, this latter position is still demonstrably wrong.

I take it you haven't seen the "ISIS burns people in a cage, dance dance revolution edit/remix" video.

Those people weren’t killed for the purpose of entertainment. The initial videos were made by terrorists, to terrorize, not to entertain. Remixes aren’t “snuff” films, otherwise any compilation video on /r/watchpeopledie would qualify.

I remember talking to an FBI agent who had to watch a lot of terrible stuff. He said that at first it was very distressing and sickening, but after a while he became desensitized to it and could eat lunch with no issues while continuing to watch it. I wonder if some people have the psychological profile that allows them to become desensitized while other people never can. Though, it probably isn't good that people have to become desensitized.

If one can become desensitized to loading people into ovens, they can certainly become desensitized to pictures of the same thing.

Perhaps. I would argue that there is some difference between seeing someone fall off a cliff and watching someone being tortured to death.

Then they should run sidebar ads on the gore sites. The point is that there appears to be a subset of the population that is not traumatized by this stuff. We may as well give them an opportunity to put that difference to use.

Edit: and saying things like "well clearly these people are already traumatized if they're seeking this out, it's not good for them, we shouldn't give these jobs to them" seems laughably paternalistic when the alternative is to traumatize more people by making them look at it instead.

There is a subset of people who seek this stuff out, yes. Whether or not it is traumatic for them does not strictly follow from that.

People come to their choices with all manner of existing problems and conflicts, and they don't always do things in their own best interest.

There are people that really like chocolate and some people that really hate chocolate.

Human are so diverse.

Is it really hard to believe that there are people who are not negatively affected by this?

But are they the same as the people actively seeking it out?

Likewise, both could exist.

How do we know they're not traumatized by it?

That they seek it out doesn't mean they're not messed up.

^this. I am not arguing one way or another, because I don't see strong evidence making me lean predominantly one way. But imo, it is at least just as possible that they are seeking out that kind of content voluntarily because they are already traumatized in the first place, so this is a coping mechanism.

Maybe it helps dealing with that trauma. Maybe it makes the trauma worse. Maybe there was no trauma to begin with, and the person was seeking out that content simply due to morbid curiosity, but then it evolved into something else. Maybe there was no trauma to begin with and no trauma appeared after the exposure. Who knows.

But just because there are communities on reddit with people seeking that kind of material and not going on crazy violent sprees doesn't mean that it doesn't negatively affect them. It could be the case, but we should probably look into that first before jumping to "obvious solution" claims.

tl;dr: we gotta do more research and weigh pros and cons before simply declaring "we have an obvious solution, let's just hire all those gore content enthusiasts to moderate violent content, because they already do it on their own and don't seem to be going crazy in obvious ways due to it".

>Then they should run sidebar ads on the gore sites.

That would do wonders for the Facebook brand.

Have a subcontracting company do the moderation and have them run the sidebar ads.

Which is still linked to the brand - "Facebook pays $10 M to company that runs ads on gore and rape sites"

Look at how much trouble they got in when it was revealed they were using Onavo (even before acquisition).

Meh, if that’s the worst PR they can get they’d be massively ahead of where they are now. Especially considering how massively the press narrative has turned against them overall.

But that's my point - Facebook has a massive brand problem right now, and this would just be throwing more gas into the fire.

At some point that stops being a factor. If the haters are going to hate no matter what, you actually get impunity because your actual actions stop making a difference either way.

Are moderators really meant to watch the whole video in cases like that? It seems to me like they could stop once the content of the video became apparent. Even just a fraction of a second of that sort of content could be scarring, but I think probably less scarring than watching several minutes of it...

My basis for thinking this is I've seen videos like that before and clicked away quickly. It's disturbing, but I think I'm okay.

There were pockets where the video of the Jordanian pilot being burned to death by ISIS was popular. Same with the raw video of the New Zealand shooting.

Obviously not everyone, but they are there.

Most importantly, there's a big difference between seeing someone being tortured to death once and seeing people being tortured to death every weekday.

But are people like that people you want as your co-workers?

Serious question, I don't mean this facetiously. I think this is an extremely complex question, and I wonder if introducing someone who is openly comfortable with gore is dangerous to the psychological health of the company.

For example, at a previous company a co-worker publicly shared, without a trigger warning and in great detail, a very gory thing he enjoyed watching to relax. Lots of people were extremely disturbed by this -- not disturbed by him, per se, but disturbed that he shared this without any kind of warning. I don't even resent him despite myself being pretty disturbed, because what one considers normal is subjective, and if you regularly relax to this content you probably don't realise that this might not be anyone else's cup of tea. But it really makes me wonder -- what if the guy who sits next to me starts telling me that ISIS beheadings are relaxing to him?

Well, I probably wouldn't invite him to my game night, but if it's his job to watch ISIS videos, it seems like he'd be well-suited, no?

It seems like if you acknowledge that a) the terrible job exists and b) the terrible job is necessary,

then you should also allow for a person to do the terrible job without thinking that they must be a terrible person by extension. Otherwise you continue the well-trodden history of certain professions like hide-makers, executioners, or coroners being ostracized from society for doing the job we told them to do.

I'm sorry if I implied that I think that people who would enjoy this work are bad people. I certainly don't feel this way; I mean, I myself have worked on a platform that plenty of Nazis use and I don't consider myself a Nazi, so it would be hypocritical of me to say so.

I agree that I wouldn't personally spend time with said person, but my question was less about personal social responsibilities outside of work. The example I pointed out happened at work, during normal work hours, in a social space where people have inherently less control over who they surround themselves with. This is the part of it all that is fuzzy to me, and I don't have a clear answer for it.

Would they need to be co-workers beyond corporate title? Content moderation is sitting at a desk clicking through things for hours a day.

Certainly in our own, private social spaces we are free to choose whomever we socialize with, but at work that option is less available. The example I gave happened during work hours over work communication, so there is always going to be social overlap that you can't simply opt out of (the age old 'what is culture fit' question applies here, and every co-worker contributes to company culture in some way).

And while a lot of content moderators do focus purely on clicking thru things, a lot of other ones still regularly interface with the company. I used to work at Discord, where content moderation of this level (and traumatic nature) was a constant concern. As engineers you may be staffed to work on a tool to help with moderation. Or you may be working on some fancy AI to help sort and tag unsafe content. And this is only from the product engineering perspective; content moderators will have to work with customer success to create content policy, or enforce bans for unsavory behavior. So I don't think they really are siloable away from the rest of the company, and I'm not even certain that that is a humane way to treat them.

> It is just that there are small pockets of people who are psychologically capable of tolerating this or even enjoy it.

I wouldn't assume "watches this stuff" and "is psychologically capable of tolerating this stuff" are one and the same. Those people might be actively harming themselves by watching the content. That's one thing when they're volunteering to do so (though that's an interesting conversation in and of itself) but it's quite another when Facebook is paying them to do it.

Honestly, this post speaks a lot of truth, at least in my experience. I used to be subbed to /r/watchpeopledie, really just out of morbid curiosity.

One day I had an anxiety attack because I had a dream that I had a heart attack and suddenly I had a deep fear of randomly dying.

All the random deaths on that sub started to really get to me, I even started to get depressed. It became harder and harder to watch content on it. Eventually I unsubbed from it and gradually bounced back to normal.

I don't really regret it because it gave me a new perspective on life and how valuable it is. But I would be lying if I said it didn't fuck me up in a way.

I think the best counter to becoming desensitized or depressed is having an overarching sense of mission and identifying your work with a cause for good that is greater than yourself.

Take ER doctors. They daily witness human trauma far worse than viewing horror-show videos and images. Do you ever hear about ER docs who become depressed or mentally ill from being exposed to so much horrific human suffering? Doctors have a personal sense of doing good in the world through healing. They witness so much trauma that they become desensitized, to the point that non-doctors often criticize them for seeming so detached and for treating human bodies like mechanistic processes. But doctors have to be somewhat detached in order to do their jobs well.

Quick googling yields this - https://www.webmd.com/mental-health/news/20180508/doctors-su...

It's not specifically about ER doctors. Anecdotally, I'm a lawyer, and they warned us in law school that lawyers have abnormally high rates of substance abuse and depression, and constant jokes to the contrary notwithstanding, most lawyers believe they do good. Emotionally hard jobs are hard, even if they are indisputably good.

> do you ever hear about ER docs who become depressed or mentally ill from being exposed to so much horrific human suffering?

Yes you do! But I would imagine it is at least somewhat offset by the fact that they are doing a lot of good: they are saving people from death while also witnessing death. I think a content moderator would struggle to feel such a strong sense of purpose.

These niche demographics are typically into it for the wrong reasons though, which includes fantasizing about the material (murder, rape, child pornography, etc.).

There are not hordes of people on Reddit watching intensely disturbing child pornography. The gore stuff maybe (although r/wpd was banned), but still I think any normal person, even the person who was occasionally on r/wpd, would be extremely traumatized by watching anything involving children. Which to be honest is probably a lot of the content discussed here, as sick as it is.

You want the subscribers to r/wpd: the people who mod it, or those who come and visit every day.

The entire point of this is to find those who deviate enough from normal to not be as impacted.

Perhaps we could use DL to "dehumanize" the content.

For example, instead of showing one person torturing another person, a DL filter could turn the scene into two teddy bears torturing each other.
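A minimal, stubbed sketch of what that pipeline might look like: a perception model locates people in a frame, and a rendering step replaces those regions with an abstract placeholder before a moderator sees the frame. The names `detect_people` and `dehumanize` are hypothetical, and the detector is stubbed out (a real system would run a trained person-detection or segmentation model there); frames are toy 2D grids so the sketch is runnable as-is.

```python
def detect_people(frame):
    """Hypothetical detector stub: returns bounding boxes as
    (row0, col0, row1, col1) tuples. A real system would run a trained
    person-detection model here; this stub just pretends it found one
    person covering the top-left quadrant of the frame."""
    h, w = len(frame), len(frame[0])
    return [(0, 0, h // 2, w // 2)]

def dehumanize(frame, placeholder="#"):
    """Replace every detected-person region with an abstract placeholder,
    so the moderator sees *that* something is there without seeing *what*."""
    out = [row[:] for row in frame]  # copy so the original frame is untouched
    for r0, c0, r1, c1 in detect_people(frame):
        for r in range(r0, r1):
            for c in range(c0, c1):
                out[r][c] = placeholder
    return out

# Toy 4x4 "frame" of pixels
frame = [["." for _ in range(4)] for _ in range(4)]
masked = dehumanize(frame)
print(masked[0])  # → ['#', '#', '.', '.'] -- top-left quadrant is masked
```

The open question, of course, is whether an abstracted rendering still carries enough information for a moderator to make a correct policy call.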

I think there's a good chance that if someone seems perfectly fine with this content, they've already got mental health issues.

I was thinking more of 4chan moderators.

But we want these people to actually do their job

Then let them do it for free.

They might be too young for the office...
