(My eternal gratitude to the one who can dig up the tweet. All my efforts were futile so far.)
How crazy I sound to you is how crazy you all sound to me.
Apple is no saint, but FB's total disregard for its responsibility and position in the world is so much worse.
I certainly don't care.
What the data gets used for later by Facebook, the government or organized crime, when it inevitably gets stolen, is anybody's guess.
What the data gets used for later by Apple, the government or organized crime, when it inevitably gets stolen, is anybody's guess.
(I assume you're not just claiming that selling expensive products = hating poor people.)
2. While other firms like Google and Facebook have served to bridge the gap between the rich and the poor, between the first and third world, Apple has done exactly the opposite.
I, and large portions of the non-western world, see Apple as a representation of American power and western influence. And that has had a lot of benefits for Apple, but it also made me hate their guts, so yeah.
In Portuguese, we have a word for this, "achismo", which is a play on the word machismo and the verb achar, which means "to think" as in "I think ...".
It's all opinions all the way down.
There's something wrong with mainstream reporting if mere exposure to social media turns people far right. It really strikes me that people are trapped in some pretty strong filter bubbles to the point mere exposure is enough to change political belief.
Spend a week on a far-right community and you'll be shown more stats that point to a far-right conclusion than you can critically evaluate. In any internet discussion of police racism, for instance, FBI crime stats will be mentioned in a heartbeat, but I don't think I've seen a mainstream journo bring them up once. Social media and mainstream media fundamentally follow different schemas of information, simply because even bringing up certain data can cause a mainstream journo reputational damage.
This is also causing an inverse filter bubble, where hateful ideas which actually have refutations don't get refuted because people refuse to discuss the ideas on principle. Much of the data cited is crap and many of the interpretations are crap, but they're not meaningfully contested.
A different conclusion to draw from this is that far-right interests are responsible for the majority of the objectionable content on social media. One might further suggest that said content is deliberate propaganda, designed to push people to the right, and that this is a central pillar of their strategy that isn't shared to the same degree or extreme by other political factions.
This isn't "mere exposure". I haven't read the article so please correct me if I'm wrong but this is a job, a place they go to sit every day to be bombarded with this crap. To some extent they have to sit and let it wash over them—I don't imagine the people doing these jobs have much career mobility. IMO it's not realistic to suggest that if they were just better-informed, they wouldn't suffer these effects. The mind is not an inviolable fortress—no matter how strong you think your defenses are, they can be worn down.
This is certainly part of it. I've literally watched Russian propaganda around the Syrian war featuring a "Canadian independent journalist". Yet overall this explanation strikes me as unsatisfying. From what I've heard from the researchers, propaganda techniques have been less about promoting the right wing specifically and more about spreading social discord generally. The Russians had efforts to promote Jill Stein that were promoting left-wing rather than right-wing propaganda. I've never really bought that propaganda efforts weren't getting overwhelmed by the influence of regular users generally. It's possible, but I haven't seen serious attempts to prove it.
>I don't imagine the people doing these jobs have much career mobility.
This is actually a more interesting criticism to me. Perhaps internet moderators are more drawn to far-right thought than others because of their life circumstances? This is a population that's lower income and tech savvy.
You won't catch me defending anything so broad as "the left" but your statement that "the right merely questions beliefs with statistics, facts, and law" is laughable. Literally I am laughing at the absurdity of it, that anybody would say such a thing. You needn't look any further than the American right wing's top man, who e.g. routinely uses public social media to broadcast trivially falsifiable statements, dubious and unsourced "statistics", and other total nonsense.
You should seek to expand the boundaries of your model of reality.
> In a 2015 study, researchers discovered that familiarity can overpower rationality and that repetitively hearing that a certain fact is wrong can affect the hearer's beliefs.
I can make the inverse argument that being told an argument is so offensive it's not worth examining creates this bias in kind. In fact I would argue the point of opposing "hate speech" is not to rationally confront it but to denormalize it so it doesn't affect people's beliefs. Same goes for anti-heresy beliefs, "conservative-only" posting requirements, and many other forms of censorship. I don't think far right communities are unique in the amount of social conformity demanded from members.
I honestly would be surprised if any propaganda (no matter the subject matter) left no impression when applied with that intensity.
I think if this trend continued it would be effective against the weak arguments pushed by far-right boosters.
I think that's how RL works in most countries. You are not running through the streets getting slapped with crime and the like all the time.
Of course, this also means you need more people to deal with the same amount of content as you do today.
Moderator1: "Hey man how is it going today?"
Moderator2: "Pretty good, it's kitten day."
Maybe a better solution starts with reducing the scale of the problem, and includes measures like not tolerating so many fake accounts, being willing to lose real users permanently when you ban them, and cultivating better culture.
And at least pay the moderators better for their trauma, rather than exploit people who have no better options. New-grad programmers get six figures, to entice them to work for a 'social media' company they know is a bit sketchy, but some of the people who bear some of the darkest side of the business get paid peanuts.
Alternatively, one could change the business and infrastructure models, to distributed and interoperating common carriers (away from trying to snoop on, and manipulate, things people say, hear, and think). ISPs and protocols and programs, like we briefly had. But that requires no one becoming a billionaire by grabbing power over people.
I force them to watch the bad side of humanity, so I also "force" them to watch the good side of it.
It should be subtle, of course.
Even just looking for one day, it took a serious emotional toll on me. I've definitely seen some awful things on the Internet but the constant bombardment of hate speech, racism, anti-Semitism, and all sorts of disturbing images and text over the course of 6 hours made me feel physically sick a number of times, and I had to take extra care to rest the next day.
This is anecdotal of course, but I can't imagine what the Facebook moderators go through having to process at least one ticket a minute.
Which is why automated Image/Video moderation solutions (such as Vision, Rekog, Sightengine.com, Hive) will continue to grow. Not only because it is cheaper/faster, but because it becomes a necessity. Or at least as a first filter to weed out the "worst" content.
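The "first filter" idea boils down to simple triage: let the automated classifier handle the items it is confident about, and only route the ambiguous middle band to human moderators. Here is a minimal sketch of that routing logic in Python; the label list stands in for whatever a moderation API (e.g. Rekognition's DetectModerationLabels or a similar service) would return, and the `triage` function name and the two thresholds are my own illustrative assumptions, not any vendor's API.

```python
# Assumed thresholds — tune these per service and per content policy.
AUTO_REMOVE = 0.95  # machine confident enough to remove without human review
AUTO_ALLOW = 0.50   # below this, treat the content as clean

def triage(labels):
    """Decide how to route one piece of content.

    labels: list of (label_name, confidence) pairs, confidence in [0, 1],
    as returned by some automated moderation classifier.
    Returns 'remove', 'allow', or 'human_review'.
    """
    if not labels:
        return "allow"
    worst = max(conf for _, conf in labels)
    if worst >= AUTO_REMOVE:
        return "remove"        # never shown to a human moderator
    if worst < AUTO_ALLOW:
        return "allow"
    return "human_review"      # only the ambiguous middle reaches people
```

The point of the design is the middle band: the higher the classifier's accuracy, the narrower that band gets, and the less of the "worst" content moderators ever have to see.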
Now, they felt much more empowered than what Facebook was doing: they kept going because the goal was to stick cuffs on the wrists of the guys who were doing this and get those kids away from them, and they could put up with all of the rest for that goal. They were treated as rockstars by the rest of the people they interacted with, because they were the ones who got kids away from the predators. They had frequent opportunities to take breaks and could set their own schedule, driven only by the guilt that the longer they delayed, the more time passed with the kids in the predators' hands.
Ultimately, feeling empowered to make a difference in the world is key, and if Facebook treated screening as an important job and gave their moderators more power to set their own working conditions I suspect that it would improve their mental health by quite a bit.
I hope they are investing in an army of shrinks/psychologists/sociologists to study, improve, and supervise these centers, because this stuff is not going away by just deleting content.
I once found a confirmed child porn possessing target with a smart home I was able to access. That guy must have thought he was in a Black Mirror episode.
The problem is that context is even more important in visual content than in textual content, and we still don’t have any algorithms that can parse context as successfully as humans can.
That might soften the blow to the screeners.
To expose people to this stuff continuously seems wrong.
Then again, so does exposing everyone to it / it would probably kill the service if it wasn't dealt with by someone.
Another concern is that people who can handle these situations in a healthy way might be few and far between, and generally the folks exposed to it are hired into low-pay / outsourced, warm-bodies-in-chairs kinda situations.
There are some sick and "extremist" people in this world. If their stage were relegated to email threads I imagine we would not be having this discussion.
But I think what bothers me the most, and this is alluded to in the article, is not that some extremist is posting crap, no surprise there. It's that seemingly once-normal relatives of mine are consuming this crap and then reposting it — becoming the extremists themselves.
This is the real answer. If running a service requires paying people to look at awful, damaging material—stop running the service. It shouldn't exist. It doesn't need to. Absent things like Twitter and Facebook we'd find other ways to keep in touch with people who matter. Special Email clients that do most of the friends & family stuff that FB does but over email or whatever. Some new protocol. Progress on that kind of stuff only stalled because we've got the spyvertising- or VC-funded, moderator-harming services to compete with. Take those away and the void will be filled, with hardly a hiccup.
"Well gee whiz we just can't run our service without doing this to our moderators, so I guess we have to"—no, you don't, period, at all. You can stop running your service, or live with hurting people and stop acting like you care.
I know someone who was fine until he had to help select/edit news footage (unfiltered audio/video footage of wars etc) for the BBC. After a few months he needed counseling for intrusive images, lack of sleep, depression. So, no news either?
The answer obviously is that.
But the question is, what if we can't?
(Spoiler alert: we can't.)
Taken together, unless you keep people from migrating from one petty dictatorship to another to find one they like, the outcome will tend towards a small number of big megalopolis-style communities and a middling number of mid-size ones.
It's not a bad idea by any means. It's essentially a mirror of an idealized pre-industrial agrarian society. There might be some wrinkles added by basic human behavior patterns that could be worth considering carefully, is all.
Reddit is a pretty good example of it both working, and very much not working.
It's not the pipe dream of what the internet was supposed to be back in the 90's, but I think it's safe to say that experiment has failed. Humanity has shown itself unable to behave well enough for this.
I'm sorry if this sounds abstract or convoluted or anything, I've had a long day and am a bit brain-fried but I have thought pretty extensively about this. I'm having a bit of a hard time putting it down in a HN comment. Maybe it's worth the effort writing this up.
> We have rich white men from Europe, from the US, writing to children from the Philippines … they try to get sexual photos in exchange for $10 or $20.
Because that's where you end up if you don't want any filtering.
The next step after that is that no one allows posting by the general public anymore.
Be careful what you wish for.
This seems like a weird take to me. Why would this be your conclusion rather than that the technology isn't good enough yet?
There is no upper bound in repulsiveness for the things FB mods watch. At least with Reddit there's the sitewide rules.
I can only imagine the aghast looks one would get from the PR and HR people, just for asking about forming a team of people who liked looking at child abuse images!
I think realizing that the internet connects everyone, and that no matter what you espouse someone will call you names for it, can indeed help with emotional responses to content.
Imagine seeing the worst of these subs for 8 hours with only a short lunch break.
I don't claim to know whether working as a FB moderator is 'catastrophic' or not, but this article makes an emotional case for it rather than a rational case for it. It reminds me of reporting around the Foxconn suicides - sure, each is a tragedy, but it turns out the Foxconn suicide rate matches the national rate.
Think of the disgust or shock you might feel if you saw, say an adult flirting with a kid, or if that’s not graphic enough, having sex. Now imagine you need to see that every hour or so for your job, 5 days a week, forever. There’s no end, there’s no stopping people.
At best you lose some of your humanity and innocence - nothing really shocks you any more. At worst it causes some form of PTSD or you even sympathize with what you are seeing. I can see how that could be less painful in short term than continuing to be shocked and depressed with things you don’t agree with.
Some things are just stupefyingly obvious. For example, constantly looking at videos of puppies getting microwaved or people getting beheaded in front of their kids for 8 hours a day, 5 days a week just might really fuck with you.
Like, what data do you want to back that up?
Please don't reduce people to data like this. Please don't automatically assume bad faith on the part of the authors of the article. Consider that there are actual people involved here, reporting their experiences. Don't make it into some kind of social-justice battleground or science experiment; just consider their perspective without instant judgment.
Appeals to emotion and innumeracy are serious problems in society right now and it's totally reasonable to identify when people pushing an argument are either appealing to emotion or being innumerate, so that others can plainly see that an argument lacks more substance than it appears to have.
Given that I didn't see any specific plan of action or policy advocated in this article, I just don't see the need to jump out and immediately attack and discount it. People are so used to arguing in black and white that we suddenly have to start fighting over an article that reports on people's experiences doing a job that we all would agree really, really sucks. Appeal to emotion, maybe. But criminy, have some empathy.
Ultimately, Facebook wishes to replace this workforce with AI automation where possible, and then heteromate the remaining human work by enticing users of the platform to inform on each other over posts that don't abide by the platform's implicit social norms or opaque moderation rules.
If this were truly the case, it should also be illegal to compel someone to look at this same material as part of their employment.
And from what I've seen they are quite happy to receive tips about abusers from the public and businesses that care enough to review their content. For society to work some of us have to step forward and agree to do the dirty work.
That some people are doing this for a living without proper counseling and guidance is another matter, you can lay that at Facebook's door. The few times that I've been exposed to that crap was enough to make me shut down some services.
In fact, I think that any business that deals in user generated content should assume the cost of business that goes with that and have a system of flagging and escalation in place.
It was a poorly formed argument, but you've hit on the absurdity that I was trying to point out. Obviously somebody has to look at this stuff.
But having low paid, poorly trained people do it all day, every day? I don't think it's crazy to suggest that shouldn't be legal.
And if Facebook can't exist in its current form without these teams, then I'm totally fine with that.