
In my opinion this is an argument in favour of shutting Facebook (and every other large social network) down. This sort of work is damaging and abusive, and nobody should have to endure it just so we can have social networks.

I understand there are some jobs in the world that need to deal with dark stuff (like law enforcement), but social networks just aren’t worth the human cost.



That’s a non-solution.

What you’re proposing is not just shutting down social networks, it’s shutting down any website that involves user content, anything that allows photo/video upload, comments, or any kind of user interaction. That’s impossible.

You point out that public safety jobs are viewed as more “worth it,” and certainly they are, but that logic raises the question of who gets to judge which jobs are worth undergoing trauma for.

In other words, is a subway or freight train driver’s job “worth it,” if they have to see someone commit suicide on the tracks? What about crime scene cleanup companies? Funeral services? Bus drivers? Truck drivers? Nobody’s going to agree on where to draw the line in the sand.

A more realistic solution might be to make comprehensive support systems, mental health resources, and treatment a legally mandated, completely free service provided to any employee who works in these kinds of fields.

Finally, I think there are most certainly people out there who are not as sensitive to or affected by this content, and who would be candidates for these kinds of roles. Perhaps there’s a way to test for that sensitivity before the real job starts.


I’m actually not proposing shutting down any website that allows uploaded content, just large / public sites that require this sort of moderation. Not every site gets this stuff uploaded to it. The more private the network, the less need for this kind of company-led moderation.

As far as “worth it” goes, some people have to be exposed to it so long as we have law enforcement (but I’m certainly open to alternatives here). I’m not sure the train operator is a fair comparison, because seeing a suicide is an exceptional circumstance in their job, not the norm. The content moderators, however, are sadly expected to be exposed to traumatizing content as part of their job description — it’s essentially the point of their job.

There are plenty of kinds of work we deem as hazardous to people’s health, and thus are either banned or regulated. I’m not sure if there’s a healthy way to expose people in these moderator jobs to the traumatizing content they face. It just doesn’t seem worth the tradeoff to endanger them like this.


Think like a legislator. How do you write this regulation?

> [shut down] just large / public sites that require this sort of moderation

Let’s say I start a restaurant review website that allows comments and photos to be uploaded. It does modest business for a while, and I now have 50 employees. I’m following the law, because my site isn’t big enough to violate this “no user content for big prominent websites” law.

Soon, it becomes big, like a major competitor to Yelp, and I’ve got 1,000 employees. But suddenly, this new law kicks in that says that I have to stop accepting uploads because my site is too high profile. Now, I lay everyone off and go out of business.

This just isn’t a workable solution, at least not in the particular way you’re proposing it be constructed.

And really, you’re asking the web’s second-largest advertising platform (Facebook), a Fortune 50 company, to just pack up its bags and shut down.

It’s not like I love Facebook or anything, but I’m sure their 45,000 employees wouldn’t be happy about that.


This is not just a Facebook problem, though. Taking your proposal to its logical conclusion suggests that the internet should have no user-created content on it at all - or am I missing some possible middle ground?


How on earth does size make any difference to the supposed problem? Even if we somehow had a stable and viable size-capping system tied to whatever nebulous concept of "too big," there would still be people exposed to the same content. Not to mention that in the US such restrictions would get a "haha no" from the courts on First Amendment grounds, as arbitrary limitations on speech by source.


Why not just require social networks to verify identity and verify users are 18+ before allowing them to make an account? Furthermore, maybe even introduce a direct, easy-to-understand, hard-to-misuse law, enforceable locally (e.g. in the US it would be a state law), that says committing fraud to join a social network is a misdemeanor with a $1,000 fine. Facebook can then report fraudsters to local police.

People would be less inclined to post this type of content to Facebook if their account were closely tied to their real identity, if they were 18+, and if there was meaningful punishment for making fake accounts or accounts with stolen identities.


This hasn't worked the previous times it has been tried. https://anildash.com/2020/07/21/a-federal-blue-checkmark-won...

I wonder if prosecuting people who post horrible stuff would have a deterrence effect, though.



