Facebook is enormously valuable. They made something like $15B in net income in the last four quarters.
Content moderators are a necessary condition for that profit. If kiddie porn, gore and animal cruelty flooded the network, it would cease to be a destination visited by people that advertisers will pay to reach.
And yet, there are two sets of entry-level knowledge workers at Facebook: engineers ($150k/year, benefits, upward career trajectory) and content moderators ($30k/year, no benefits, likely going to acquire mental illnesses).
I understand the arguments about supply and demand of labour, but I'd have more respect for Facebook if they demonstrated awareness of this issue. The article talks about moderators re-evaluating the same piece of distressing content that they've already flagged. Why? I suspect because the moderator is cheap, and so Facebook isn't putting in the effort to ensure that every judgment needs to be made the minimum number of times.
More so than salary, I suspect Facebook considers the moderator cheap in terms of reputation risk. By outsourcing to contractors located offsite from main campus, engineers aren't thinking daily about the absolutely horrible stuff moderators are seeing, and so the one group doesn't impact Facebook's ability to hire engineers. This is a guess - can anyone at Facebook speak to whether engineers are aware of the working conditions of moderators, and agitate to improve their lot?
This problem applies to other forms of sites and content as well. The app store gets hundreds of submissions in the game category per day. Hosting and distributing that content is easy. Only a tiny fraction of those games are going to be played and rated enough times to show up in a recommendation engine. The bulk of the incoming content stream isn't being matched to interested people at all (sorting by interesting is the same curation issue as filtering by offensive).
Literally every content platform is going to have some problem similar to Facebook. We can blame Facebook, but the reality is no one has a good solution. Not even me.
Now I have no doubt that facebrick and the evil-goog will try for regulatory capture and employ their vast army of Washington lobbyists in rent-seeking activities, but that is a separate problem, one that applies in every industry where market failure means you have to regulate, not just this one. (And yes, it does need solving, absolutely urgently.) If you don't have to regulate because the market is working, great! Don't regulate! That's not what we have here: this is market failure, and we haven't yet regulated. Regulation will come, because there are clearly externalities in this market. Well, it will come unless we give up on democracy first, something facebrick and evil-goog seem really comfortable with as long as they supplant it.
I don't think it's a problem any more than weather is a "problem" that required the invention of the roof. As far as user-generated content goes, it's easy to invent the hose, but the faucet is much more complex.
It's irresponsible to provide only acceleration ("reach," "virality," etc.) for information without also implementing brakes. Moderation has been a known fix for decades, but people like Zuck -- in disposition and inexperience -- won't launch anything human-powered, and moderation requires a human eye. Fuck us, right?
Let's suppose Apple wants to make sure every game in their store gets at least 5 ratings. To do this they are going to pay people to play each game for 5 minutes and then assign a rating. In an hour a curator can get through 12 games. Over an 8-hour day that's 96 games. Apple would need to hire about 7 (iirc) full-time employees just to give every game one rating. We wanted every game to have 5 ratings. Assuming that users on average provide one rating (but sometimes 0), we still need curators to provide the other 4. That means Apple would be employing 28 full-time employees just to get their content minimally curated.
Let's use the salary from the article and say each curator costs 30k a year. That means this whole department of curation is going to cost about 900k a year.
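For what it's worth, here's that math as a quick back-of-the-envelope script; the daily submission count is my own guess at what "hundreds" means, everything else is from the numbers above:

    # Back-of-the-envelope curation staffing/cost; submissions_per_day is an assumption.
    submissions_per_day = 675                    # "hundreds" of game submissions per day (guess)
    games_per_curator_day = 8 * 60 // 5          # 5 min per game, 8-hour day -> 96 games
    ratings_needed_per_game = 4                  # users supply ~1 of the 5 desired ratings
    curators = round(submissions_per_day * ratings_needed_per_game / games_per_curator_day)
    annual_cost = curators * 30_000              # $30k per curator per year
    print(curators, annual_cost)                 # -> 28 curators, $840,000/year (~900k with overhead)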
Most of what they are sorting is going to be shovelware anyway. To justify the cost of this department, the curators need to find gems: content that's really good but would otherwise have been overlooked. They need to find $900k worth of gems per year. Considering the average earnings of an app, this is just barely plausible. If you want to do anything more advanced with your curation, like having the curators generate keywords or figure out which demographic might like it, your budget is blown for sure.
IDK how the economics of comment curation works out, but I bet it's even worse. After all, no one is even paying for comments, and making a comment is way less effort than making an app.
I don't care how much it costs, though, and I don't think Zuck cares either. If FB remains unmoderated it will invariably turn into a shithole, and people will go elsewhere rather than wading through a bunch of racist anti-vax cartoons just to find out where their friends are meeting for lunch next week. FB has plenty of money to pay as many human moderators as it would take to stop the complaints, in all languages, but they won't. Even where FB might go bankrupt for lack of brakes on reach, they'd just as soon go bankrupt as hire humans to do the work.
The issue is that if you want to identify content as "fun", "offensive", "dramatic", "tense", "crass" or any other of the myriad of labels that haven't been machine learned, very quickly the labor to generate these labels will cost more than the value generated by these labels.
This is just Facebook being cheap.
The most important thing for moderators would be group and individual therapy sessions run at work with hired therapists. Group therapy is great because you can relate to other moderators' struggles, which helps with processing, and individual therapy helps with processing the most traumatic events. This is what Kik has implemented, and it seems to be effective.
They do this because acknowledging and dealing with bad users would also mean impacting fake users that make them money with ad fraud, growth metrics and ginning up engagement.
At the end of the day, if you operate a social platform and you have people uploading child pornography and animal torture videos, you should do everything in your power to make those users go away, for good.
I let you live your way, don't try to force others to live your way by using "scare tactics".
We can take a step backwards and eliminate Facebook, Insta, etc from our societies and enact legislation that will govern the second generation of social networks.
- Make social networks mandated reporters for suspected child abuse, like teachers or little league coaches; reporting would absolve them of liability.
- Prohibit generation of profit from illegal activity.
- Require transparency in political advertising, with an accountable entity and individual listed and publicly available.
- Require reporting of funding source and material individuals for paid political advertising.
- Make the platform liable for false product claims in cases where the platform facilitates direct or implied endorsements for a product. (“Jedi72 likes this homeopathic cancer treatment!”)
- Make the platform responsible for reasonable efforts to block, and to report actions taken against, fictitious users posing as real people. Share liability for any claims, including libel, resulting from the actions of fictitious people.
A regulatory regime like this would either improve the signal/noise ratio or cause the company to abandon or prohibit certain actions or functions.
What I'm saying, seriously, is that we need a society-wide blameless post-mortem on social networks. It needs to be a slow, careful discussion, where all the stakeholders have their voices heard, and we decide what is good for us all. I don't know that we need them at all, I'm not sure they provide ANY value to ANYONE, but as a non-user I'd of course be open to persuasion by current victims. Erm, users. In the meantime, it should be illegal to operate a social network. Nobody needs to go to prison yet, even though I think it's horrifyingly clear at this point that the ethics are out the window.
One obvious one: advertising and public conversation must always be separated; the same way no public schoolteacher may read scripture in class, it should be illegal for an internet service that hosts public conversation (e.g. Twitter) to allow sponsored content.
We should also establish guidelines for addiction. After cigarettes, drugs, sugar, etc; we should as a society be prepared to understand the various forms that addictive products can take and regulate them aggressively before they become serious problems. UX patterns like infinite scroll, pull-to-refresh, and push notifications are particularly suspect.
I also think we should close the apparent loophole in COPPA that allows parents to post photos of their children on social networks.
> that they have to hire moderators (which they already do?) or do you want to make it illegal to post "bad things" on the internet?
I tend to come down on the "free speech" side of these issues as much as possible; I would prefer unregulated public fora that (perhaps by requiring identity verification) encouraged good-faith participation in substantial discussions. I think if you are just fooling around, maybe you should head down to the bar and get drunk with your friends and do your shit talking there.
Social networks aren't dead, so we can't have a post-mortem.
Also, you can't have a society-wide blameless analytical conversation about anything, especially if there are significant conflicting interests involved; that's not how people work. (It's perhaps noteworthy that even while suggesting this you offer no objective descriptions of impacts with reasoned analysis of contributing factors, instead jumping straight to blame and policy responses, with some labelled as “obvious”.)
So what about the countless number of people whose livelihoods depend on social media in some capacity? They're supposed to just be fine with being irrevocably fucked over until the moral panic about social media subsides? (or, in your words, "we decide what is good for us all")
Where does the small business owner who depends on a social network factor in as a "stakeholder"?
Drugs, sugar, cigarettes are measurably and objectively harmful to your health and that's why they are regulated (or should be).
There isn't similar comparable scientific evidence that social media is nearly as harmful except for questionable non-reproducible psychology studies, so I don't think it's comparable at all.
Social media has a lot of problems (even this article on just Facebook outlines a number of them), but I agree with the above poster in saying that you can't have a societal post-mortem analysis of the effects of social media given its very much _not dead_ state. Advocating that society just presses a 'shut down' button until "we collectively decide how we should proceed" is an entirely unrealistic scenario.
HN's posting frequency limits made that choice for me. The system is unfortunately biased towards drive-by comments and against engaging with feedback on your own comments.
> Advocating that society just presses a 'shut down' button until "we collectively decide how we should proceed" is an entirely unrealistic scenario
But that's what I'm advocating. I don't think I'm obligated to respond to people who only came by to reject the premise of my argument. More interesting discussions are available to anyone who shows up with an open mind.
Second, you do better at automatically detecting duplicate content that’s already been banned.
I have no business solution, but the solution to the latter is to simply ignore/block people who post stuff you don't like.
Maybe the best solution is for the moderators to unionize.
The deck is stacked against employee organization. If I were conspiracy-minded, I might even say this was by design.
Adopt the Danish model: make it mandatory for companies with workforces over a certain size to at least vote on unionisation. This will shift the default from not being unionised to being unionised, except for cases where the workforce is actively convinced it's detrimental.
Side thought, things like this help me understand why the Danes are so darn happy.
This overlooks the possibility that workers with different statuses can recognize commonality and, so, unionize as a coalition.
This is precisely what happened when the graduate students at the University of Virginia unionized with the University classified staff under UAW for health care and a living wage in 1996ish (? IIRC).
Workers can unionize across pay grades and professional classification if and when it suits such workers to do so.
Up until then people should unionize. Do whatever it takes to create a workplace which does not drain sanity from people, where the worst content is never seen by human eyes at all, or worst case just once.
Facebook won't do anything to help these moderators unless the cost of the negative PR exceeds the cost of doing something. Engineers at FB could help achieve this by causing agitation, for sure.
If you can think of a practical way to unionize workers on a 1-2 year contract with a high turnover rate, I'd love to hear it.
Personally, I understand browsing toxic content to foster "immunity", or at least familiarity. Doing it for money sounds like a lose-lose scenario. Call centers sound better, given the conditions.
If there's something wrong with this reasoning, please elaborate.
> If there's something wrong with this reasoning, please elaborate.
I would be utterly surprised if an NDA could be used to prevent an employee from exercising their right to organize (in the US at least). Just because you sign something does not make it legally enforceable.
It's really complicated, and I don't think it's reasonable to assume people have access to a clear answer on enforceability, and certainly not an answer they could rely upon enough to risk their job over.
That assumes perfect knowledge. As mentioned in the article, these moderators often do not understand what they are getting into until long after they have committed to the job. Once you have quit a previous job, moved, and set up your new life, only then do you realize your new "brand consultant" job means watching kittens die all day.
Jobs like this also open people to realities that we normally keep tucked away deep in our brains. We all know that cops don't investigate most reported crimes, but that doesn't come up very often for us. A facebook moderator sees the crimes, and sees the lack of response by the police or anyone else, every minute. Repeatedly, every minute. Nobody can appreciate what this does to one's mental state until you have been there.
When recession strikes and the gravy train ends, the same folks will be wearing Che Guevara T-shirts crying about unfairness.
If you call $20, for a person who owns nothing and is probably starving, a "short term profit", you've very likely never experienced what it is like to live in poor conditions.
Being put in prison is an improvement for many individuals in so far as they are guaranteed three square meals and a warm place to lay their head. Would you seriously argue that that's an improvement that is worth the cost to the individual?
I personally know people who have moved for half that amount.
C. Dickens: A Christmas Carol. In Prose. Being a Ghost Story of Christmas, 1843
If we're legitimately talking about situations where the balance of power is so unfavorable to the unemployed individual, the solution is to start talking about UBI, not micromanaging very specific roles at very specific companies.
edit: if it was not clear, facebook is in fact part of society (by definition) and their responsibility is proportional to their role in society, no more no less.
There are some kinds of nonviolent behavior we don't allow because we (I guess we is majority of society?) think it's wrong, often around issues of financial exploitation. For example, even if you are mentally sound you can't sell yourself into indentured servitude because that tends to only happen when someone with power exploits someone desperate, often (but not always) due to reasons out of their control.
UBI is basically an artificial wage. Like a lot of things in life, a wage is a negotiation between two entities with vastly different levels of power. As every public company proudly proclaims, the value employees receive is far, far less than the value of what they do for the company.
Why do you think those workers accept this fact? They like getting less value than they generated? It's clearly the result of an exploitative power dynamic - one side has to work or not eat, the other does not.
If you think someone should not have medicine and food dangled over their head in work negotiations, the answer is that workers need to be paid the real value of the capital that they create, and that needs to be law.
I hope one day our wages are looked back on like the absence of a minimum wage, the 6-day 14+ hour workday, child labor, indentured servitude, slavery, etc., which were often viewed as normal in their time but are now seen as disgusting.
This GIF is mocking the idea of UBI and while for sure a silly meme, has an element of truth IMO.
Capitalism works on leveraging advantage to pay workers less than the value that they create. Do you have anything to say about that, or are you going to continue to suggest magic discount coupons (UBI) which are worth less than the value a worker creates are something besides slavery with extra steps?
Feel free to scoff/mock me for questioning sacred holy capitalism - when immoral ideals are questioned and no logical answer is available, history tells us emotion and feelings are the standard responses.
If that's true, Facebook's workforce could quit tomorrow, go build a competitor, and claim the full $15B for themselves. Why don't they do that?
The traditional Communist argument has been that they can't because capital owns the means of production. Factory workers would happily go start their own factory, but they can't afford the machines to fill it. That argument doesn't work too well here: the means of production needed to build Facebook are accessible even to college students. Basically an editor, a scripting language, a web server, and some linux boxes.
Another reasonable argument is that they can't because they need to eat. The time it would take between quitting their jobs at Facebook, Inc. and seeing revenue from Facebook Co-Op would exhaust their savings. That is a problem UBI can solve. It's not that UBI pays what you are worth, but that it removes the urgency that sometimes prevents people from getting their full value in the market.
This idea - that workers create all the value in a company, not just the subset they are paid for - is testable. If you're sure it's true and you want everyone else to believe it, then I think you'd want to run the experiment. UBI could be a way of doing that. Could also do grants that pay living expenses during entrepreneurial activity, or just straight-up house and feed co-op founders without asking for an equity stake.
That is not how the labor market works... At all.
"Worth the pay" is overridden by "absolutely need this money for me or my kids to survive"
Wha...? Are you really that far removed from reality?
Tell that to the people who pick your fruits and veggies.
Now we have government entities trying to legislate something that shouldn’t have existed?
I’m not saying any of this is right or wrong, I’m just at a loss how people get mad at Uber when the model shouldn’t have made it. Where did the market dynamics go wrong here?
The issue with Uber is they pay $0.70/mi for a task where the cost of fulfilling the task is $1.00. The drivers do it because they can get quick cash with no friction.
You can already see the system breaking down. Uber XLs are getting older and shittier. 2012 Dodge Caravan that picked me up at the airport will be dead in a year. As the situation goes on, someone will die in a fiery wreck by an Uber with no brakes, and it will gather the national attention and kill the company.
> I don't understand how this isn't a problem perfectly solved by the market.
Most people depend on their meagre income quite greatly, it's a bold thing to say someone could just 'quit' their job and spend 9 months looking for another one, especially if they don't have some pretty hard skills.
So in the realm of talent, yes, it would solve itself, which is why devs et. al. often get 'free lunch' among other things.
For most working people, it's much harder.
"All possibilities are fully encapsulated in very simple dynamics."
Yes, of course, but the incredible power asymmetry between FB and not hugely skilled workers is the dynamic that drives all of this.
Anecdotal: a friend who streams on Twitch also mods for a few streamers, and they have issues even when a stream is in subscriber-only mode, simply because anonymity and distance from those being affected empower people to do bad things.
edit: I am really surprised there are not companies springing up to provide these services, seeing how most of these activities are required by law.
However, before bemoaning what they are paid, just go look at your local 911 operators, who are government employees. Just because we think the job should be paid more doesn't mean others do.
It seems like there are no easy solutions for this and it's really frustrating: Societal norms are just completely shredded right now.
And on twitter. Their most famous user uses his real name.
I think for most of the people I'd want moderating content (i.e. not sociopaths), making them richer isn't going to do anything to deal with the real reason this is worse than a call centre: the content itself.
I'd rather see facebook put more effort into protecting moderators from the content they're viewing. I realise this is a non-trivial problem, but here's a few ideas of things they could do which may help:
* Once a video is confirmed bad, fingerprint it and block future uploads (rough sketch after this list)
* Provide colour-inversion buttons to help reduce visual impact
* Rotate people between areas, so they aren't looking at the same class of bad content constantly
* Use ML training to detect which content is likely to be the most severe and for this content, reduce minimum viewing interval to ~ 5 seconds, and ensure that a single individual doesn't get too many of these.
* Flag accounts who post bad content too often, and shadow-ban them so their posts are not visible by anyone else (this could go in stages, starting off by blocking re-shares of their posts or something)
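To make the first idea concrete, here's a rough sketch of what image fingerprinting could look like. The average-hash here is a toy, and the function names and thresholds are all mine; real systems (PhotoDNA and the like) use far more robust perceptual hashes, but the shape of the check is similar:

    # Toy perceptual fingerprinting: hash confirmed-bad images, block near-duplicate uploads.
    from PIL import Image

    def average_hash(path, size=8):
        # Downscale to 8x8 grayscale and threshold against the mean -> 64-bit fingerprint.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return int("".join("1" if p > avg else "0" for p in pixels), 2)

    banned = set()  # fingerprints of content a moderator has already confirmed as bad

    def confirm_bad(path):
        banned.add(average_hash(path))

    def is_known_bad(path, max_distance=5):
        # Small Hamming distance between fingerprints = near-duplicate of banned content.
        h = average_hash(path)
        return any(bin(h ^ b).count("1") <= max_distance for b in banned)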
Too often? If we're talking about content that is traumatic to view (extreme violence, child porn, etc) then I think one time is enough for a flag, no?
That said, I do agree that shadow banning is the best option. Let them spin their wheels for a few weeks posting content before they realize no one is seeing it.
To add to this, some degree of pixelation would help without impacting the accuracy of assessments.
I was thinking also about:
1. Blurring the picture/video
2. Contour lines only
3. Windowed/Striped (only showing like 10-25% of the photo)
If 2/3 of the moderators above agree that it's a bad photo, it moves on to stage 4:
4. Color filters - Break the photo down into k-means color clusters, moderators choose which colors to see. They can see 100% of the colors, or one, or two, or however many they need. (Rough sketch of this stage below.)
If #4 agrees, the photo goes into the round folder.
No need for anyone to ever see the whole photo. It'll be pretty obvious that something was violating the TOS without anyone being able to identify anything specific about the photo.
We're not talking about 'beyond the shadow of a doubt' here, we're talking about 'yeah, that's probably not ok.'
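Here's a rough sketch of what that color-cluster stage could look like for a single photo; the cluster count, grey fill, and function name are arbitrary choices of mine, purely to illustrate the idea:

    # Stage 4 sketch: k-means color clusters, reveal only the clusters the moderator picks.
    import numpy as np
    from PIL import Image
    from sklearn.cluster import KMeans

    def color_cluster_view(path, k=6, visible_clusters=(0, 1)):
        img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        h, w, _ = img.shape
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(img.reshape(-1, 3))
        mask = np.isin(labels, visible_clusters).reshape(h, w)
        out = np.full_like(img, 128)        # everything hidden (flat grey) by default
        out[mask] = img[mask]               # reveal only the chosen color clusters
        return Image.fromarray(out.astype(np.uint8))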
Neither the text nor the image themselves would be ban worthy but in combination it is boot-worthy. Extrapolating has Scunthorpe problem issues.
The only partial mitigation that comes to mind would be if a small circle of your friends had to "approve" your video before people outside your circle could see it. That would put them on the hook for sharing bad things too. That might even reduce the psychological load on the moderators.
Also, in terms of the reputation of Facebook: if there's any issue with moderation failures, Facebook can easily say that it's the contractor's fault.
I want to believe the content moderators only end up seeing the 1% of content that is dubious, or brand new content (not published and hashed earlier). I don't understand in that regard why the one video mentioned in the article (about the iguana) wasn't marked as "ok" in the systems, or marked as "on hold for law enforcement" so the moderators didn't have to see it again.
They are passively aware but mostly don't care. The engineers who express frustration are in the vast minority. Most engineers are just super happy about their own situations, very focused on PSCs, and try to keep to their own stuff. Any internal frustrations are channelled towards appropriate feedback channels where the anger is watered down.
Their profit is not proof they are valuable, because they are willfully abdicating responsibility for the damage they cause, and, further, involving themselves in government debates over their own regulation.
Advertising businesses are inherently suspect, and when one is addictive and potentially enabling election fraud etc, we need to be extra careful in declaring its value.
Anyone participating in the debate should have to disclose if they are a daily user or if their company advertises on any social network. (I'm clean)
Edit: my opinion of course is that FB, Insta, etc (any advertising product with push notifications and/or infinite scroll) ought to be illegal. In 50 or 100 years we'll all recognize that social networks are like internet cigarettes.
Well cigarettes are still not illegal, so we have some ways to go...
Why do you think they don't get benefits? Facebook contracts out to another company, and the company they contract with is still probably providing some benefits.
This leads me to believe that there is a qualitative difference in the calibre of benefits Cognizant provides versus those that Facebook provides.
Further, since the job has high turnover and can be expected to put a high toll on an employee's wellbeing, it's also important to me to think about what happens to their benefits if/when they quit. In the US model of employer-linked healthcare, this tends to result in losing your benefits when you need them most. By contrast, other organizations that do similar moderation jobs seem to offer psychological health care treatment even after the employee leaves the company. See https://www.theguardian.com/news/2017/may/25/facebook-modera... for more info.
It is a very politically correct statement, which, as it usually happens with politically correct statements, really says nothing. I mean, what exactly do you propose they should do about it?
I am the last person to defend Facebook, but, seriously, they are running a business. That is, essentially, finding a way to do/own/produce some stuff while spending less on it than you receive for it. And cost management is all about supply and demand.
After all, it's not like Facebook wants to pay engineers $150k/year and for them to be happy. It needs good engineers and the only way to get them is to keep them happy, or else they stop coming. Engineers are not really special, they are not even exactly more necessary than cleaners, and loaders, and content moderators. The only difference is, demand for them is higher than supply.
And that would be tragic in a sense, if everyone were randomly assigned a role and if you draw an "engineer" you are lucky, but if you draw a "content moderator" you are not. Or if we were talking about some factory in some giant village in Vietnam, where no matter how smart you are and no matter what you do, you'll be working a much harder job in conditions far, far worse than described, for $500/year, and you really don't have any choice.
But we are not. No, this is Florida, $30K/year, and these are people choosing the job of content moderator over the job of an engineer, either because they don't want to do otherwise or cannot. And while many of them may have stories that can be seen as tragic (because who doesn't, really?), they are no more victims than anyone else born into living a life on the Earth. And thus, hardly entitled to anything.
Let me be clear: I really hate Facebook and I find companies with a modus operandi like Cognizant's distasteful, to say the least. And I can see how Cognizant can be held accountable for giving somebody false hope, luring them into doing the job while promising something else. But what exactly do you want Facebook to do? They want a service as cheap as possible, they get asked a given price, they agree. That's it. It won't save their reputation after an article like this (because, well, public relations...), but it seems just fair.
If you go work at Facebook as an engineer, chances are you aren't too bothered by ethical questions.
My experience with the US health care system is that there is a wide gulf in health benefits. When I worked for Microsoft in the late 2000s(as an employee, not as "dash trash", as contractors were called), I paid some very small premium like $10-20/month for a plan with 0 deductible, 0 copay, and pretty much every doctor was in network and covered at 100%. When I dislocated my shoulder, some orthopedic surgeon who used to consult for an NBA team looked at it for me.
I understand that that was considered quite good insurance and that most people in the USA don't get that level of care, much less at that price.
If you read Glassdoor reviews of Cognizant, people complain about the insurance offerings available to employees. https://www.homebaseiowa.gov/sites/homebaseiowa.gov/files/do... looks like it might be what they offered 2 years ago. It doesn't look very good to me. If I was earning $15/hour, I think I'd pretty much buy the cheapest plan and only use it if I was actively bleeding to death.
I'll also link to https://www.theguardian.com/news/2017/may/25/facebook-modera... again - it describes how other companies that moderate distressing content approach this. It includes the company paying for psychological treatment, including after the employee leaves the company.
AFAIK, Microsoft 'self-insures', in the sense that they simply pay the medical bills for their employees.
- Public (https://www.facebook.com/communitystandards) and private guidelines
- A "Known Question" document (15k words)
- Facebook internal Workplace, with real-time (and sometimes contradictory) decisions, for breaking news events
In my view, it's knowledge work. Sure, it's knowledge work that almost anyone can be trained to do, but there's a skill set required.
For example, moderators need to make judgment calls that distinguish between offensive uses and artistic/journalistic/parody uses. There are many cultures and in-groups on Earth, which makes this harder than it might seem at first blush. Not all nudity is obscenity, not all hate speech is blatant.
In these cases I'm not sure it really matters. Over-moderation is going to win overall and I don't think moderators are going to get into that much trouble for being slightly over-zealous. I could be wrong but I feel like after a week of being on the job you'd get the hang of it.
Some of the decisions taken by the moderators are then reviewed by a "super"-moderator, and the match between the two decisions (including the rule chosen) is used as a performance indicator, which has to be above 90% to avoid trouble.
The second one is pretty important. Consider that sales, sports, and entertainment work similarly, where you can make a case that you should pay more to get the best and workers (or their agents) can use this to good effect.
It could be people prefer working at a desk and a screen in an air conditioned office to, say, schlepping boxes in a warehouse.
And judging from the article, agencies hiring content moderators are going to paint the work as more colourful and as a career-builder, by outright lying about what contractors will actually do day-to-day.
Of course that’s going to look more attractive at face-value than being pitched a warehouse job by the foreman.
> Cognizant also offers a 24/7 hotline, full healthcare benefits, and other wellness programs.
I bet this can be done with ML and AI instead of human power. Yet, FB uses their human cattle to censor conservative voices.
AI and ML work in bulk, but they misclassify things that a human can easily discern.
Think Tesla VS Google. Google wants a self-driving car, and that is not working yet. Tesla wants a car that helps you to drive and that is good enough.
It's orders of magnitude easier to be almost always correct than to be correct all the time. And you only need one kiddie porn video in your Facebook feed to start to worry about Facebook.
Where are the moderators living?
Or the opposite. People flock to the internet to see this stuff, the material that isn't on TV or even dedicated commercial websites. They get to view it in the privacy of their homes, a secret between them an their trusted "friends" on facebook. Facebook knows that if it really cracks down, if cops bust down doors over this stuff, that people will go elsewhere to share horrible material. So facebook does a poor job of moderation, one just good enough to avoid new regulation but not so good as to actually make a difference.
Advertisers care about eyeballs. They may claim to not want to be associated with X or Y content, but in reality they would rather be in front of an audience than not.
There's a whole section of adtech dedicated to brand safety.
I believe it's over the top but it's something marketing folks do care about.
Also the good old pop up ads were created for that reason, aeons ago.
People who work at Facebook should be pushing for change. But they're numb to the schpiel. They're cushy and looked after and don't want to create a fuss.
Rosen doesn't care. Zuck doesn't care. Sheryl doesn't care. What DO they care about? Perception. Sit in any high-up integrity meeting and you'll see the only thing they seem to talk about is how "x" would be received by users at scale. There's no comment as to the ethics or corporate responsibility. You can be talking about something pretty out there, like how human rights intersect with takedown decisions, and all you've got is a bunch of people umming-and-ahhing about lossy metrics and how Zuck wants this or that so we better hurry up. Or how awesome it'll look on our PSCs if we ship this thing.
This is utterly nightmarish, given how costly to one's life bedbugs are (clearing out a home, including replacing all mattresses/couches, and bagging or hot-cleaning all clothing, sheets, towels, rugs...).
"A manager saw that she was not feeling well, and brought a trash can to her desk so she could vomit in it. So she did."
This particular manager put their employees in danger of catching an illness, especially given what appears to be an open office floorplan where airborne sicknesses can travel the entire room. I'm shocked and appalled, and this is the stuff I'm comfortable quoting from the article to be shocked and appalled by. The other stuff has convinced me to help inform friends and family to get off facebook, rather than just passively removing myself from it.
Once an employee has used their allotment, they receive a write up if they take off again no matter how ill. Too many write ups and you're fired. Managers have no discretion in this process, so they can only mitigate the impact by doing something like bringing the trash can over or buying hospital masks.
Not saying it’s acceptable! But this isn’t a facebook problem specifically.
How stupid are we as a nation that we don't mandate more humane/non-stupid policies, even for people in environments where coming to work unhealthy can often result in death?
This is one of the rare times I believe malice trumps stupidity. Our healthcare industry generates billions and some of that money is being spent lobbying to keep things just as they are.
>If a father and mother have the same employer, they must share their leave, in effect halving each person's rights, if the employer so chooses.
Everything else you mentioned seems pretty bad, but what is so terrible about this? Unless you are trying to keep your PTO buffer high so you can cash it out at a higher salary when you leave, it doesn't seem to make much difference if you were going to need to take the unpaid leave anyway.
Because PTO (or even pure sick leave) has less restrictive usage conditions than FMLA, it means that anyone who needs FMLA for a longer-term issue like a new child will also exhaust all the leave they have available for short-term personal or family illness or for stress relief, and will be unable to use that until they have reaccrued it.
This is a legal but extremely employee-hostile policy that contributes to bad working conditions for everyone (not just the employees that need FMLA.)
It says a great deal about how broken the United States' job market and social safety net are. If minimum wage were $15, they could find another job that paid their basic living expenses. If health care weren't left up to your employer, they wouldn't be out of luck while looking for a different job. If there were any alternative, they wouldn't stay in this hellscape.
They stay because this is the best deal they could find. Think about the kind of society that makes that the case.
If you're making $30K you certainly don't have any kind of buffer to hold you over between jobs, and don't have skills that are attractive to employers. Even basic things like interviewing are much harder. An engineer or data scientist can pretty much disappear from their desk for an entire day and no one will ask questions, you don't need time off for an interview, and if you do you likely have tons of leave. Someone making 30k most likely has to give notice of vacation many weeks in advance, if they even have leave benefits at all. And because your skills are less in demand your interview success rate is going to be much lower so you'll need even more time to interview.
It's likely very hard for these people to find new jobs even if they hate their current one, they are probably happy just to have income.
Given that you are the parent comment, is it so hard to read conversation like this as building on each other rather than an assumption of being relentlessly contrarian? This is needlessly hostile to someone who clearly agrees with you.
See abusive relationships or this handy article http://www.issendai.com/psychology/sick-systems.html
Due to the low physicality of the work, there are likely people who consider it easy work, despite the psychological impact that seems obvious to you or me.
I don't expect these contractors to get much long term benefit from the position. And it sounds like most quit or get fired.
This is really just a giant meatgrinder for the population outside of the "cognitive elite" class and probably a good sign of things to come for the workplace of the future of half of the population.
These people are literally trading dollars for their mental health. That is the choice they are making.
They aren't hustling because they want some extra cash. They're trading their mental health for the basic ability to be a living, eating human being.
I’m not necessarily against raising minimum wage. It should probably be reviewed and adjusted more often than it is. But wouldn’t we be better off with a system where people are incentivized to do more than the bare minimum? I don’t think minimum wage should be intended to be the baseline for a comfortable life. It should be high enough that people want to work in the first place rather than take handouts but not so high that it’s an opportunity to stagnate. Frankly, minimum wage is for first time workers and people who have zero marketable job skills or some other issue that prevents them from doing more productive work. Most people can achieve much more with a little work experience.
Why is this necessarily true? Couldn't it be the case that minimum wage were $15 but they still couldn't find any other job?
Edit: I may have misspoken when I said "surplus". What I was referring to is the very low unemployment rate we currently have, despite large numbers of people - with "jobs" - continuing to live in near-poverty.
No, we don't. An actual job is what happens when the willingness to pay for labor connects with the willingness to provide it for pay.
What you seem to be referring to (additional willingness to purchase labor at prices below those that people are willing to accept) is not a surplus; it is the normal condition of demand (and a parallel thing exists normally with supply, where there are people willing to provide something beyond what is currently being transacted in the market, but at higher-than-market prices).
> Why is this necessarily true?
It costs money to take a surveying class, or buy tools to practice your skills. It also takes having money saved so if there is a crisis, car repairs, etc you don’t have to constantly work overtime to keep your bills paid.
Please reach out to a friend who is making less than a “living wage” in their area and ask them about career development and what they need. I appreciate your question.
... then low-skilled candidates would NOT be able to find any job at all, because if a low-skilled candidate's expertise does not support a $15/hour salary, and employers who could, potentially, pay less (e.g. $10/hour) are legally prohibited from employing people at that lower rate, then there are no jobs available to that low-skilled candidate at all. Which means forced and hopeless unemployment.
These people do not work for Facebook, and we don't know the nature of the contract in play. Are they paying per person, or a lump sum for some capacity at some accuracy rate? If Cognizant automated all of this, would it be accepted under the contract?
Anyways, I don't want to shift focus away from Facebook so much as to recognize the contracted call-center companies like Cognizant (which is what the whole article is about, btw, with some comments referring to Facebook). Accenture and Cognizant really shouldn't escape scrutiny just for being overshadowed by a bigger name.
In the article, one of the contractors says that Cognizant puts up a "dog-and-pony show" whenever FB executives visit. Again, it's ultimately up to FB to decide how much they want to push past the facade.
This downward pressure ends up directly impacting moderators. Cognizant needs to keep payroll costs low so they don't lose the contract, and the contracted accuracy target of 98% seems unrealistic. So moderators end up fearing for their jobs when they don't meet accuracy targets.
I don't think I could do that job for very long - let alone in a badly run, high pressure environment with low wages.
That one paragraph about organs was enough to ruin my day, and it was just text. I'm surprised such a "rash of videos" wasn't in the news somewhere.
He sought out the on-site counselor for support, but found him unhelpful.
“He just flat-out told me: ‘I don’t really know how to help you guys,’”
I'm left wondering if it was a clip from a horror movie or something. Shitty to see, but not necessarily newsworthy like "live children chopped up for organs" would be.
I feel like I'd have seen media reports of widely-shared videos featuring live, conscious kids being vivisected in order to harvest their organs.
I'm no fan of Facebook, but in their defense, this is sort of the point.
If the government, the military, academia, the police or anyone else let their inventions loose on the world in the manner the private sector does, and left it to us to figure out how to clean up the mess, we'd call them insane and demand they be shut down within a week.
In fact it's their very realness that makes them especially horrifying.
Some of the people interviewed were complaining primarily about the bad working conditions (and their complaints are valid as far as I care). I would wager these people are not as bothered by the content (though I'm sure they don't like it) as the ones whose primary complaints are about the content. They could probably do the job with less burnout if the rest of the job were made to suck less (i.e. if it weren't a shitty call center style job with all the accompanying baggage).
Edit: Why am I getting down-voted? Can people legitimately not fathom that some people would not be seriously bothered by seeing this content? People post violent content to Facebook. Shock and gore sites exist because some people actively seek out(!!) the kind of content that these moderators are being exposed to. It stands to reason that the subset of the population that at least finds that content not mentally damaging is substantially larger than the group that seeks it out.
My wife, who has worked in a busy McDonald's for 5 years, says this sort of moderation is far, far worse than anything she had to deal with. And she's had to deal with human waste, violence, and direct verbal abuse from customers.
Your point about making the working environment better is a valid one, but think it is overshadowed by your assertion that the horrific content is acceptable to "many people".
Your views do resonate with some people on Reddit though. I remember saying that I didn't like the pained yelps, limping, and whining of Dogmeat in Fallout 4, and I was attacked and ridiculed for that. I know I wouldn't last more than a minute at this Facebook moderation job, it would scar me for life. I don't think that means I'm "wrong", any more than your views are "right".
I think that perhaps those people need to realize that not everyone is as easily rattled as them. Some people go to the Holocaust museum and are mortified by the details of it and are sad for a week. Some people go and are like "yeah, people do terrible things sometimes, this is just the worst systemic instance to date" and then go out for beers afterward. Whenever there's an armed conflict there's some people who are F'd up by what they see and there's some people who say "yeah that sucked and I'm glad it's over" and there's people in between. I think it's pretty evident that the ability to cope with violence varies a lot between individuals.
>My wife, who has worked in a busy McDonald's for 5 years, says this sort of moderation is far, far worse than anything she had to deal with. And she's had to deal with human waste, violence, and direct verbal abuse from customers.
I've done that too. I'll take hospital janitor over anything in food service. The general public sucks. The floor doesn't complain when you didn't mop it up just the way it wanted. I'd probably try my hand at a moderating job before I went back to food service. Blood, violence, obscene pornography, etc, etc don't bother me. It's nasty but whatever, some people are terrible so what do you expect. There's other things that bother me but those aren't it.
> your assertion that the horrific content is acceptable to "many people".
There's levels of acceptable, and there's a reason these employees are being paid more than those at the call center across the office park. I'm not suggesting that some employees find it acceptable in the abstract. I'm saying they are not so easily mentally harmed by it as to consider it not worth the pay.
I see it no different than a physically fit 20yo who finds a ditch digging job to be worth his while because he can handle it whereas the 50yo almost certainly cannot handle it without much greater negative health consequences. If you can hack it for the pay then why not.
>I don't think that means I'm "wrong", any more than your views are "right".
You're not asserting that nobody can do this job, and I'm asserting that there exist people who can do this job (or at least people for whom the "shitty call center job" conditions are the primary irritant preventing them from doing this job). That said, the down-vote to reply ratio makes me suspect that many people simply do not believe that some other people do not think like them.
There are also people who are habituated with cutting into living bodies (surgeons) but the difference is they are highly paid.
It's a bit unfair to equivocate and call this kind of work a good option for some people, when the vast majority of people who are in these jobs probably will be psychologically harmed by it² and are doing it because they have few other options.
It is relegated to an underclass, like all dangerous and undesirable work.
1: Statistically at least.
2: See also: https://www.researchgate.net/publication/228141419_A_Slaught...
Of course the real answer is that this sort of site is a bad idea and shouldn't exist. Wide-open signup and public visibility of content, or ease of sharing it with strangers. Bad combo, don't care how much money it's making them (and other, similar sites).
Oh, dear god, no - have you ever used Reddit? This is what happens when you outsource moderation to the sorts of users who enjoy moderating other people.
Very completely different things. Subreddits would be the equivalent of Facebook Pages and they already have moderation tools for those that run the page.
Hello user, you have been randomly selected to help moderate.
Does this depict pedophilia? Y/N
[ Image here ]
This image was flagged by other users as depicting horrible gore and death, do you agree Y/N?
This post may contain hate speech, do you agree? Y/N
[ Embedded Nazi Screed post ]
Thank you for making Facebook a safer place for everybody!
"Facebook made me look at child porn" is probably a headline Facebook would prefer not to have going around.
Would that be expensive at the scale required by Facebook? Very.
Personally I'm fond of zucc-bux
Overall not possible. On Slashdot everything is basically public, on Facebook it's largely private/restricted and I'm sure from past discussions that the worst stuff is in private groups or clusters of friends. They could do much more aggressive banning of users (and detecting new accounts designed to circumvent bans) for sharing such content or being in groups focused around such content, but that might hurt their most important metrics.
So yeah, it was tongue in cheek, with a side of reductio.
I'm not saying if Sally starts a forum and it gets hacked and someone posts illegal stuff she should be fined or charged with anything. I'm not saying she should even be in trouble if she gives one person who turns out to be a jerkass posting rights and that person posts illegal stuff. But if she opens up public signups, gets a billion or two users, then says "well I guess I'll just have to pay a bunch of people money to look at all this gore porn that's getting posted" the rest of us should go "uh, no, you should instead just stop".
[EDIT] and given that second sentence, no, I'd get nowhere in American politics.
That being said - those cases are really rare and in terms of harm, the naked calculating exploitation that corporations flirt with is WAY worse imo than the harm a too-big-for-its-britches union causes.
It would be like expecting doctors to clean bedpans. Now, it is a necessary task, and those who do it should receive appropriate respect, even if it is just "Thanks, glad I don't have to do it." But anyone off the street willing to do the dirty job could do it without a doctor's training, which they spent over half a decade on to be /entry level/.
Plus it's incredibly inefficient, effectively paying say $100/hr for janitorial work when they could be saving lives instead.
Now, asking them to do it in a necessary and justifiable situation (say a posting in an aid camp cut off by weather, or in space, where sending any extra bodies is expensive) is one thing, and they would be in the wrong then for refusing out of pride.
Absent that necessity, it shows a poor sense of their actual value, until medical degrees and skills become just as common as the skills to clean bedpans.
If I can adapt (and perhaps torture) your analogy, I'd say it's like Facebook currently has doctors who save lives and command high salaries, and janitors who change bedpans but don't have access to hot water and latex gloves. So inevitably, the janitors catch a disease (hospitals are filthy, after all! This is foreseeable!) and are no longer able to work, at which point they are replaced.
Given that the hospital can afford to pay the doctors, we might ask if they could splash out for latex gloves and soap for the janitors, too.
Content moderation is not a job for everyone.
People are essentially asking if FB can increase salary/perks by x amount because internet commenters are somehow not feeling good about the current scenario.
Yeah, this is basically what I'm saying. :) Labour codes are developed over time. In the beginning, it's the wild west. Want to be a janitor and not use any personal protective equipment? Go right ahead! After workers keep getting sick, the government begins to legislate requirements around health and safety.
Awareness of mental illness is a relatively new thing. We'll probably see some developments in this area if the big tech companies continue to outsource moderation at scale. https://www.theguardian.com/news/2017/may/25/facebook-modera... is a nice article that describes accommodations that other companies provide to moderators. They include, as an example, monthly psychologist visits which continue after the person stops working for the organization. They also include training for the person's family and social support groups.
Not wanting drudge work is common to nearly all people who have options. Why should devs get all the hate?
(stolen from Scott Adams.)
I actually wonder now how much of their workforce would fit in that demographic.
Just because some people become desensitized doesn't mean the obscene content won't damage the mental health of the entire set of people.
Granted, not everyone does that, someone nibbling at the frontend or sitting deep in some distributed DB generally does not care much about data quality.
It’s management 101
Not to mention that, in data driven orgs, the launch/no launch debates are focused on those metrics. So if you want to call the shots you kind of need an in-depth understanding.
Just the economic incentive to cut moderators loose and use algorithms would be enough to do so if it were possible. After all, you can buy a decent amount of compute power for $30k+/yr, and that will certainly deliver more than the 100 or so queries per day from a content moderator.
PS. I'm sure that most of the work is being done by algorithms anyhow.
Don't know about the banning situation, that'd require insider Facebook knowledge. I'm sure they ban some people.
[Clarity: Am not a Facebook employee.]