Facebook says I'm abusive content [video] (youtube.com)
286 points by josephcsible on May 20, 2021 | 204 comments



Between bad actors taking advantage of DMCA takedowns and brigading mobs, it seems that automated (or semi-automated) moderating is absolutely a no-go at scale. Incidentally, it's also why these networks (YouTube/Facebook/Twitter/etc.) can achieve such a massive scale with basically zero human moderation (i.e. cheaply). Remember when we had forums and there were a few unsung heroes that kept our communities clean and tidy(-ish)? Even here, I'm pretty sure @dang is basically reading HN and cleaning it up all day.

Doing this at scale with semi-automated systems is quite literally impossible, but until the courts start sanctioning social networks for their ad-hoc censorship (if that's even enforceable), we are sadly unlikely to see a change.


Back when Net Neutrality was a hot topic, the nightmare scenario that was always trotted out was that the Internet would become cable television, devoid of the wide diversity of content and voices it has today. Every time I see a story like this (and there are a LOT of them) I half-jokingly wonder if we'll reach that future not because of the collapse of NN, but simply because everyone other than corporations will have been banned from all major platforms.

With that in mind, consider that this "failure" of scale may be less of a bug and more of a feature as far as the platforms are concerned. All the big players HATE the Internet as it originally existed. All those independent voices talking all over the place were difficult or impossible to monetize, and the legal hazard of trying to do business near anything resembling a truly free discussion forum is a nightmare. Far better to do away with all that pesky multi-directional communication and go back to the good old days: big company make product, big company market product, little people buy product. No need for discussion or community. That just gets in the way.

Maybe, if your lords are feeling generous, they'll give you a comment section where you can talk about how great product is. Any other posts will of course be scrubbed.


> Maybe, if your lords are feeling generous, they'll give you a comment section where you can talk about how great product is. Any other posts will of course be scrubbed.

There was a story on here yesterday I don't think many people saw, about a book being rejected from Apple Books because it contained the word "Leanpub"! https://news.ycombinator.com/item?id=27205541


If I’m reading that right, the book was rejected by an automated pre-screen at draft2digital, and Apple never even saw the rejected manuscript?


Yeah. Well, I guess the idea of Apple asking Draft2Digital not to send them books containing any mentions of competitors is precisely that they "never even see" them! But there's a happy ending to the confusing tale: Draft2Digital later manually overruled the block, since they were confused about why Leanpub would be blocked; mentions of Amazon are usually the problem. (But the author had other books mentioning Amazon approved, so didn't think it was that. Particularly as censoring mentions of Leanpub got the book past the test. Weird.) The uncensored book is now on Apple Books.


I wonder if this has/will come up in Epic vs. Apple. Sounds pretty anticompetitive.


I think if all independent voices were banned from large platforms, people would just move elsewhere. I'm not saying that the large platform would collapse, but that a more independent platform would operate in parallel and take care of peoples' desire to both create and consume independent content.

It's a cycle that's been going on since at least the 19th century (via pamphlets), and I don't see it stopping any time soon.


The problem with "elsewhere" is that "elsewhere" gets marked as sexist, racist, far-right, Nazi, and free speech gets marked as bad. Sites like Parler lose their hosting, sites lose their domains, their apps are rejected by the app stores, and soon there is no "elsewhere" to go, at least not for "the masses".


One way to look at it is that "elsewhere" is a protest pen, just like in the physical world where there is a plan to host this year's G7 protests in Plymouth, 76 miles away from the G7 venue [1].

[1] https://www.plymouthherald.co.uk/news/plymouth-news/mp-fears...


The world is big. As long as there's a browser without built-in censorship (malicious websites filter, kek...) and a world-wide Internet, that's not an issue. Parler can always host in a Russian data center, for example, which couldn't care less about US political shenanigans, just like Russian dissidents are hosted in EU data centers.


The world is big, but e.g. the mobile ecosystem is basically Google and Apple, and both banned the Parler app.


You don't need an app for something like Parler. It might be more convenient (and even that's questionable; the Reddit app is terrible compared to the website), but it's not necessary.


And then that new platform goes the way of Parler (dropped by hosting providers) or Voat (infested by witches), or just gets spammed/DDoSed to death if they try not to rely on the advanced anti-spam infrastructure that's necessary on the modern web.


The key difference this time is the network effect. People aren't going to move elsewhere; instead they (more precisely, their brains) will be slow-boiled like the proverbial frog.


Of course, the draw card is that people can add their comments and feel like they're involved.

You don't want anyone else actually reading the great unwashed's comments, though!


Google is a somewhat large-ish player, right? It kind of loves the Internet as it originally existed. A cynic might say that's because it has the competitive advantage in monetising that.

Disclaimer: I'm a Googler, my opinions are my own, yadda yadda.


Google has been instrumental in destroying the Internet as it originally existed.

Remember when a bunch of people (and Googlers) threw a fit about Project Dragonfly, Google's "aborted" attempt at a censored search engine for China?

Now that it is widely known and accepted that Google routinely manipulates and censors search results in Western countries for political and anti-competitive ends, where is the outrage now? Where are the walkouts at Google campus?

If Google ever did "love the Internet as it originally existed", the people who lent that air to the company are long gone, replaced by new people who look at the Great Firewall of China with envy.


Source? That's not in line with my experience of the company.


Google wants a distributed internet where users and service providers are largely overlapping sets?


Doesn't it?


Are you for real, man? Google does everything in its power these days to make sure the user never leaves the google.com domain.


Obviously, automated (or semi-automated) moderating is absolutely a no-go at scale.

I don't think that's obvious at all.

You're saying that it's more important not to remove videos that aren't in violation of the platform's terms (i.e. false positives in the automated removal process) than it is to remove videos that are (i.e. actual content that should be removed). Why is it more important that someone's video remains on Facebook than that Facebook removes illegal or toxic content? If acceptable videos are removed by mistake, is that actually a problem that means the automated process isn't fit for purpose?

... social networks for their ad-hoc censorship...

Facebook removing a video is not censorship. They're not saying "no one can see this video". They're saying "we don't want to host this video". Those are absolutely not the same thing.


> Why is it more important that someone's video remains on Facebook than Facebook removes illegal or toxic content?

This is a straw-man; that's not what I'm saying at all. In fact, I'm arguing that you can probably have both -- by using human moderators.

> Facebook removing a video is not censorship.

It is censorship. They're saying "no one can see this video on our platform." Do you really want to get in some silly semantic debate when you know full well what I meant? It seems you're being purposefully uncharitable.


> when you know full well what I meant

Reminder: humans lack telepathy and so they depend on body language, tone, and word semantics to communicate.


It isn't really a semantic issue. Removing someone's communication from your site is literally censorship. It's simply not govt censorship. But censorship by quasi-monopolies makes it arguably similar and still an ethics debate.


Are you suggesting that video platforms should not be allowed to remove content?

That's a very extreme point of view.


I want to highlight the very common, very dishonest debate trick you just performed.

You skipped right past calling the thing what it is (censorship), past admitting it's a problem, and you're now arguing against a hypothetical extreme solution no one has yet proposed.

Can we assume that you now agree it's censorship, and that it's a problem, and it's only the solution you proposed that bothers you?


Can we assume that you now agree it's censorship, and that it's a problem, and it's only the solution you proposed that bothers you?

unishark posted that they believe it's censorship, so I asked whether or not it'd be OK for any content to be removed if that's the case. Does unishark think that Facebook 'censoring' some content is actually OK?

I don't think it's censorship, and I think Facebook removing content is fine explicitly because it isn't censorious. So no, you can't assume that I now agree it's censorship, because it's not and I don't.


How do Facebook's actions differ from censorship, then? What would have to be different so that you would consider it censorship?


Censorship is stopping people seeing content everywhere. Facebook are only stopping people seeing the content on Facebook.

The idea that Facebook has a monopoly on video is weird. If someone wanted to grow an audience around their video content would you suggest they post the videos to Facebook to do it?


Network television censors profanity and nudity. Indeed the govt requires much of it. Everyone calls it censorship. Yet you can still see the unedited versions of the same content at a theater or on cable.

Maybe if your idea of censorship hinged on govt involvement it might fit a bit better. Though I'd still say we need to be able to talk about censorship by other organizations besides govt, and that the dictionary definition of the word covers this.


There are two sides in the debate I referred to and my post did not advocate either side.


I'll also add that argument by pedantry is a sin in any good-faith debate. It represents an attempt to divert attention from the main topic and play definition games instead.

I thus generally view an attempt to refute an argument with some pedantic gotcha as a bad-faith response.


Not all pedantic arguments about the definition of words are bad faith. Sometimes the original argument genuinely employs a confusion of definitions, which can make the argument void; see the section "Equivocation" under <http://enwp.org/List_of_fallacies#Informal_fallacies>. This confusion happens both unintentionally and intentionally.

I would like to propose to not treat the pedantry counter as a sin. Instead think of it as a weak signal: the counter does not rank high on the hierarchy of disagreement. <https://bigthink.com/paul-ratner/how-to-disagree-well-7-of-t...> If a strong counter could be made, we would use that one instead of the pedantry counter.


I'm not even sure who is being criticized here. I think it's just an issue of tone. Bluntly telling someone they're wrong, when their statement is arguably correct, will generally trigger an argument over who is correct.


To call a distinction pedantic is to say it lacks importance to you. You are seeing a clash of values, not a bad-faith argument.


> Why is it more important that someone's video remains on Facebook than Facebook removes illegal or toxic content?

For the same reason that "It is better that ten guilty persons escape than that one innocent suffer".

> Facebook removing a video is not censorship. They're not saying "no one can see this video". They're saying "we don't want to host this video". Those are absolutely not the same thing.

Facebook has so much market share that they're uncomfortably close to the same thing.


For the same reason that "It is better that ten guilty persons escape than that one innocent suffer".

That would require someone to believe Facebook taking down a video is equivalent to someone being found guilty of a crime they didn't commit. No one believes that.

If Facebook deletes your video that's annoying. It might even affect your livelihood if you've decided to make Facebook videos your chosen career. It isn't the same as being found guilty of a crime you didn't commit.

"No false positives at all" is a good and proper goal for the justice system. Facebook's moderation system doesn't need to be held to the same, or even a similar, bar. False positives in video content removal are a very minor problem.


> No one believes that.

Except it does? For people who make their living from Youtube or whatever video platform, removing a video (usually without an appeals process) is basically a punishment regardless of guilt.


If Facebook delete a video the author is not being punished because the author has no claim on money they haven't earned yet. It's just the end of a business relationship. Arguing that you're entitled to post on a platform, and to earn money through it, is nonsense. Think of it like this - if I never buy a Starbucks coffee again I am not punishing Starbucks. They don't have any claim on my money, so I am not denying them anything. It's just the end of a business relationship. Facebook are choosing not to buy this guy's videos any more.

Arguing that Facebook has a near-monopoly on online video is also nonsense - the guy we're talking about posted on YouTube about this. Clearly he has other options.

The number of people here arguing that they have some sort of 'right' to upload content for Facebook to publish, and that they're being censored and punished if Facebook chooses not to, makes me hope that HN readers never choose to start video hosting platforms. They'd be completely awful.


> If Facebook delete a video the author is not being punished because the author has no claim on money they haven't earned yet. It's just the end of a business relationship.

By that token, if your government decides to deport you, you are not being punished because you have no claim on citizenship you haven't lived yet. It's just the end of a citizenship relationship.

Hopefully you can see how that argument isn't helpful or useful. Would it be any more true if the Government delegated citizenship management to a corporation? Or three?

In an online-only world, where speech (and a livelihood contingent on access) is controlled by companies, and nation states have no requirement to provide a different mechanism, it could reasonably be argued that a company giving everyone else a platform but denying you one has free-speech consequences.

The fact that it isn't managed that way, and that the law doesn't currently see it that way, doesn't change whether we should change our thinking as society changes.


By that token, if your government decides to deport you, you are not being punished because you have no claim on citizenship you haven't lived yet. It's just the end of a citizenship relationship.

Being a citizen of a country isn't a business relationship. You're not simply doing something and receiving payment in return. So it can be a punishment - the government is taking something away that you've earned through previous actions (eg applying for citizenship or asylum or winning a green card).

If Facebook were to demand an author return the money they'd been paid for videos when they're deleted then it would be a punishment. But they don't do that.


> Arguing that you're entitled to post on a platform, and..

Couldn't the same argument justify segregated businesses, e.g. restaurants with "No blacks or Irish" signs?

You might think racial lines make it a different thing, but TBH, if there are unacceptable harms in discriminating on race, I can't see how they can't also exist when discriminating on opinion/speech, which is also important (as freedom of speech is considered important, at least in the US).

If private business can be regulated on one topic, why not another? Plus the argument can be flipped: Yes, that is their business, so they can choose who they do business with; but this is our country, so we can choose who is allowed to do business, and how.

> The number of people here arguing that they have some sort of 'right' to upload content for Facebook to publish

But people do have the right to speak freely. If all public domains were private, they would effectively lose that right. The issue here isn't just that Facebook determines who can "speak", but that Facebook and only a few other tech giants dominate the "cyber" public domain - and alternatives like Parler were disallowed on similar grounds ("why should private hosting not be allowed to choose who they host?"). At some point, if you allow private companies to monopolize something without regulating how, they end up dictating how people are allowed to use anything in that domain.


> No one believes that.

Why not? I don't see a practical difference between Facebook wrongfully labelling a video as abusive and a city wrongfully labelling your car as illegally parked. The former might even be more financially damaging.


> Facebook has so much market share that they're uncomfortably close to the same thing.

There's the actual problem. Let's work on that, and preserve the freedom of association.


How exactly do you suppose we do that? Can we do it quickly? If not, should we just let this problem keep happening until it's done?


I don't claim to know the best way to do this in specific, but generally, I see fb as a monopoly that needs to be broken up, somehow, through antitrust legislation.


> For the same reason that "It is better that ten guilty persons escape than that one innocent suffer".

That sounds great until the scandals of the type "Someone reposted SFW segments of this girl's revenge porn video and Facebook is taking days to remove them every time; since they're reposted every 24h they stay up forever" roll in.

Every so often we get another scandal on HN of the type "This horribly harmful content was posted on PLATFORM and PLATFORM didn't remove it fast enough". Augmenting the false-negative rate would make the problem worse.

Attackers can use automation to post their harmful content. There's no way defenders can keep up unless they use automation as well, no matter how decentralized they are.


> That sounds great until the scandals of the type "Someone reposted SFW segments of this girl's revenge porn video and Facebook is taking days to remove them every time; since they're reposted every 24h they stay up forever" roll in.

This line of argument seems uncomfortably similar to "we need to ban encryption because terrorists and pedophiles can use it". The fact that bad people can occasionally benefit from something isn't a reason to not give it to the rest of us.


> The fact that bad people can occasionally benefit from something isn't a reason to not give it to the rest of us.

It actually can be. There's a continuum.

At one extreme are things like plans for nuclear or biological weapons. Nobody distributes those because terrorists really want them for horrible ends and they're of low utility, so we do without them.

At the other extreme are things like junk mail - we all just put up with that minor abuse of the system because having a cheap, flexible mail system is way better than not having junk mail.

Like I said, it's a spectrum, and sometimes bad people being able to abuse something is a good reason for everyone to forgo it.


> At one extreme are things like plans for nuclear or biological weapons. Nobody distributes those because terrorists really want them for horrible ends and they're of low utility, so we do without them.

My understanding is that how to build nuclear weapons is relatively well-known, and the only reason that terrorist groups don't have them is that they can't get their hands on the required raw materials.


Building a weapon that goes boom from hitting critical mass of uranium or other fissile material is pretty straightforward. Getting enough uranium is one of the hard parts, but it's not the only hard part. Getting a nuke somewhere without being stopped is pretty difficult too. Sure, you can put it in the back of a pickup and blow it up somewhere, but hiding the radiation of that bomb while you travel/assemble isn't easy. Building a rocket is difficult even for nations.


Yes, I believe you're right about that, but I believe it's still the case that no one just distributes the plans. You have to learn engineering and nuclear physics then design your weapon yourself.


The point is, principles are great when you only consider their upsides. I'm not against encryption or for ultra-restrictive automated moderation, I'm just saying it's a trade-off.

In practice we want to pick the course of action that minimizes harm. Having your video demonetized is one kind of harm, having videos of harassment or revenge porn posted online is another.

The kind of people who say "we should never have false positives, ever" are never the people who will have to defend these positions once somebody gets harmed by them and kicks up a scandal.


> For the same reason that "It is better that ten guilty persons escape than that one innocent suffer".

Not even the justice system uses that rule.


The reasonable doubt standard does require the benefit of doubt go towards innocence. Presumably the goal is for more guilty to go free than innocent being punished.


Which is a different standard. And it applies only in an actual trial, which is something quite rare and super expensive. Most cases are decided outside of trial, where going to trial raises the stakes a lot.


It isn't the goal as much as an unavoidable effect. I however don't feel that comparing video removal to incarceration is relevant. Compare it to traffic rules instead (or anything else you can think of). You have to stop unnecessarily at times, and can't move faster than the speed limit, etc, just because we have deemed the fail state worse than the needless limiting of good drivers' freedoms.

I'm not saying which is the case here, but certainly that the argument could be made.


> It isn't the goal as much as an unavoidable effect.

I don't see the value of this distinction. Creating a system where it is more likely that guilty go free is the practical equivalent of creating a system where more guilty go free.

I wasn't commenting on how well the analogy fits censorship. Though I would note that the criminal justice system doesn't set all people free, just that it adopts a more slanted standard of proof than the laxer standard used in some other judicial situations like civil and immigration courts. The discussion over allowable levels of false positives fits better than a dichotomy comparing a world with traffic laws to anarchy or something.


> For the same reason that "It is better that ten guilty persons escape than that one innocent suffer".

Unfortunately, that is not a mantra that will keep your platform DMCA compliant.

Also, how exactly does it apply to an internet mob calling for someone's head? Do we apply this mantra to protect the mob's behaviour, or to protect the person the mob is targeting?


DMCA compliance is easy. You just have to respond to takedown notices expeditiously. Active moderation for copyright infringement is not required. You don't have to care about infringing content you don't know about if no notice is sent. You can force notices to be mailed to a PO box if that is the desired path for contacting your complaint agent.


> You just have to respond to takedown notices expeditiously.

The platforms do just that, by removing the content in question.


Nobody complains they do that. The complaint is that that isn't all they do. Sometimes they take down material that they consider infringing without having received a notice, and they often get this wrong.


Many of the platforms have proactive DMCA applications that actively scan their own content. They have a non-zero false positive rate, although I have no idea what the probability is.


YouTube has its own non-DMCA process to streamline complaints and satisfy the rights holders who they want to maintain a business relationship with. If you don't have such concerns then basic safe harbor compliance is all you need.


There are plenty of contentious moderation decisions that are unrelated to the DMCA.


Which is why my post had two parts.


For your second part: by not listening to the mob but not banning them either.


Great personal mantra, terrible preventive policy.


Facebook has so much market share that they're uncomfortably close to the same thing.

Sorry...market share has nothing to do with their first amendment right to not host content that goes against their policies.

Fox News is the most watched cable news network. I wouldn't want the government forcing them to host people they don't want to host.


I think there's less difference than you think. Big social networks employ moderators too. People complain about the moderators on small forums too.

I think the only difference is that politicians don't even pretend to care about complaints about the moderators on small forums.


The thing is, small forums can wind up being completely good or completely bad, because the net quality of a small mod team has a high variance. So if you look properly, you can find the good ones. On a big platform, you wind up with a very uniform mediocrity.


The trouble is, manual moderating systems are also a no-go at scale. So what do you do?


We don't need scale. Google needs YouTube to be the only place to watch videos out there. I don't. Smaller communities self-moderate fine enough, and bigger ones can raise money from members to hire mods / editors / etc.

But this isn't working as well as it should because the platform / advertising oligopoly is eating up everyone's revenue and using those funds to vertically and horizontally integrate everything under a handful of megacorp brands.

For as long as this consolidation (aka "scale") continues, content moderation will remain "unsolvable".


> We don't need scale. Google needs YouTube to be the only place to watch videos out there. I don't. Smaller communities self-moderate fine enough, and bigger ones can raise money from members to hire mods / editors / etc.

The thing is, you still have the scale problem even if you break up Youtube into a hundred different video streaming sites.

Say that Google would need to hire a million full-time moderators (in practice it's probably more) to ensure quality moderation for every video uploaded. If it's not sustainable for them to do that, then it won't be sustainable for a hundred companies to hire ten thousand moderators each to do the job.

This is especially bad because bad actors can just post the same content on multiple platforms automatically. Unless platforms start pooling their moderation resources (in which case you're back to square one), having lots of small-scale platforms means that attackers can just upload the same thing on as many platforms as they can and hope it slips through the net in a few cases.

The only way we can stop that, as a society, is if we decide that any platform for content diffusion needs to factor in and charge the price of filtering that content from the get go. That means no more free uploads on Youtube; if you wanna upload something you have to pay a fee for moderators to look over your video and certify it as safe.

I think that would be a viable way for the ecosystem to go, but it would make it harder for small independent creators with no budget to get started.


> Say that Google would need to hire a million full-time moderators (in practice it's probably more)

You can't use made-up numbers to prove your point. Approx 800,000 hours of video are uploaded to Youtube every day. Watching all of those in full at normal speed would require only 100,000 full time mods, but that's just about the dumbest way to go about it, so in reality it will be much less.

> it won't be sustainable for a hundred companies

> bad actors can just post the same content on multiple platforms automatically

You'll have to reconcile this fatalism with the myriad of smaller communities managing spam and moderation just fine. Millions (not "hundreds") of topical forums (incl. independent ones, and on platforms like reddit) are doing fine, often without full time or paid moderators.

> if you wanna upload something you have to pay a fee for moderators to look over your video and certify it as safe.

Since no one will opt-in to that willingly while free alternatives exist, you're essentially proposing state-mandated censorship. No thanks. I'd rather Facebook / Youtube just die if their consolidated asses can't keep up with what they created.


> The thing is, you still have the scale problem even if you break up Youtube into a hundred different video streaming sites.

No, PeerTube is a decentralized video platform with independent servers, which can work together.


How do independent servers solve the problem of scaling moderation? Every video still has to get reviewed by somebody, so the total number of reviews done doesn't change.


No single company has to pay for an army of reviewers. Different servers compete for good content, i.e. they have an incentive for good moderation at low scale, which is possible.


Okay, that's the theory, but in practice, how good are PeerTube servers at, say, filtering out jihadist beheading videos?

EDIT: A quick search shows that I was asking the wrong question; in practice, there are servers dedicated to exclusively these kinds of videos.


Shrink "at scale" until the company can handle the responsiblity. There's no inherent right to grow endlessly, and limits can be legislated if FAANG and their "best of the best" employees can't (or don't want to) figure it out.


This isn't an excuse.

e.g. Imagine the argument was "Disposing of chemicals safely is a no-go at scale"

Businesses don't get to ignore negative externalities just because addressing them is unprofitable.


"Disposing chemicals" doesn't feel like the right analogy. That's a negative externality that wouldn't exist without the manufacturer -- if the manufacturer doesn't exist, then that problem goes away. "My video is not available on this platform" is different, because if the platform doesn't exist, you still have that problem.


> The problem is, manual moderating systems are also a no-go at scale. So what do you do?

Is it? I'd have imagined there's some ratio of 1 mod per N people for N not too small that would make things work. It might be more like 1:1K instead of the 1:1M (or whatever) that companies prefer, but is that genuinely unaffordable or is it just that companies don't like to pay for moderators?


IMHO it cannot work because we don't globally agree on what's okay to do and say.

I think mass media has some fundamental flaws that we haven't accepted yet.

Edit: take advertising out of intellectual property and you would get rid of a bunch of bad actors and cobra effects.


I think your points are valid. But I'm not sure if I see them as moderation issues. As I see it, there's the question of what should be allowed, and there's the question of how to enforce it adequately. As I see it, the former is a policy issue (and likely tougher), whereas the latter is a moderation issue.


I 100% agree that moderation could be lightyears better.

However, I see this as being a bit like the BBC problem: believing there is such a thing as being objective, when there just isn't.

You can have a group of like-minded people who mostly agree and need a bit of moderation for when someone's external bad day leaks into their interactions on the platform. However, if there is an underlying fundamental difference of opinion or goals, then the moderators become just another weapon.

We need leaders not moderators. Moderators in difference of opinion just end up punishing the weak one way or another. Leaders show by example what we can aspire to, which in turn guides moderators in how to apply the rules.

If you have global platforms who are the leaders? What direction should they lead that works for china and the US?


Edit: to be clear, if you cannot afford enough moderators to stop child abuse or murder videos being posted, then shut your platform down as it's not viable. I am more talking about how moderation gets weaponised, etc.


Because to truly moderate YouTube, you have to actually watch the videos, and YouTube says they get 500 hours of video uploaded every minute[1], so it's pretty easy to estimate the staffing it would take to watch each video once. They're basically getting 30,000 minutes of video every minute, so if they have 30,000 people watching it, 24/7, it basically works. You need 4.2 forty-hour weeks to cover a whole week, so 126,000 people will do it. They also report 2 billion monthly users, so around a 1:16k moderator:user ratio would get full coverage of videos; plus another team for comments. And of course a team of 1/8th of a million won't manage itself, and you need an escalation path, and non-robot moderators are actually not robots, so they won't (and shouldn't) spend their entire worktime watching.

126,000 workers actually seems reasonable to find; the 2 million you'd need at a 1:1k ratio would be hard, I think. The pay is going to be bad, and the garbage you have to watch won't be worth it for most, but you still need a reasonably analytical person to decide if things definitely follow the rules, definitely don't follow the rules, or need to be escalated. And they need to be attentive, because videos can be totally normal and fine and then get abusive later.

[1] https://blog.youtube/news-and-events/youtube-at-15-my-person...
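To spell out the arithmetic from the comment above, here is a rough back-of-the-envelope sketch in Python. The 500 hours/minute and 2 billion user figures come from that comment; the shift length and coverage assumptions are illustrative, not YouTube data.

    # Rough staffing estimate for watching every uploaded YouTube video once.
    UPLOAD_HOURS_PER_MINUTE = 500                                # per the comment above
    MINUTES_UPLOADED_PER_MINUTE = UPLOAD_HOURS_PER_MINUTE * 60   # = 30,000

    # Watching in real time requires this many people on duty at any instant.
    concurrent_watchers = MINUTES_UPLOADED_PER_MINUTE            # = 30,000

    # A week has 168 hours; a 40-hour work week covers 40 of them.
    shifts_per_seat = 168 / 40                                   # = 4.2

    total_moderators = concurrent_watchers * shifts_per_seat     # = 126,000
    users = 2_000_000_000                                        # ~2B monthly users

    print(f"moderators needed: {total_moderators:,.0f}")
    print(f"moderator:user ratio ~ 1:{users / total_moderators:,.0f}")  # ~1:15,900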


The notion that adequate moderation requires watching every second of video at 1:1 speed is such a strawman on so many levels I don't even know how to respond.


Exactly what I was thinking. There are obviously significantly more effective ways to do things. You could probably just keep the automated systems in place, but have human moderators verify every video that the automated moderators flag (as you stated, obviously not by watching the entire video at 1:1 speed). That alone would be strictly better than how they are currently doing things, and probably not too difficult to pull off.


Note that humans-in-the-loop is the status quo already at YouTube.


Yet the humans are making mistakes like the one this thread is about.

I think the intent is to have better humans in the loop and to have a human appeal process.

Currently it seems like a human-approved removal has no recourse and you're just hosed.


I was under the impression that bans were automated, and appeals were handled by humans. I think it would be better if ban “suggestions” were automated and actual bans always came from a human.

Not sure the exact process at YouTube though.


126,000 people, assuming a $20,000 annual salary, would be $2.5 billion, or about 12.5% of YouTube's revenue in 2020. That seems more than reasonable.

And that's before you consider that only flagged videos should *need* to be watched by a human.


$20k annually is below the poverty line. I'm sure Google would love to be able to get people to solve their moderation problem for so little money they can't actually afford to both eat and pay rent, but if we're making hypotheticals here, we should at least hypothesize 126,000 people who get to live like real human beings.


I'm sorry, I actually had £20k and just changed it to $ without converting; £20k is about what an 'Accounts Administrator' earns (https://uk.jobted.com/salary). So that's about $28k, which comes to about $3.5 billion, or about 18% of their 2020 revenue. Still very reasonable; most businesses have to put more than half their turnover into wages.
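The wage-bill arithmetic, spelled out the same way. The salary figure comes from the comment above; the ~$19.8B denominator is YouTube's reported 2020 ad revenue, used here only as a rough reference point.

    # Wage-bill estimate from the figures in the two comments above.
    moderators = 126_000
    salary_usd = 28_000                 # ~£20k converted, per the comment above
    youtube_ad_revenue_2020 = 19.8e9    # rough reported 2020 ad revenue (USD)

    wage_bill = moderators * salary_usd                 # ~ $3.5B
    share = wage_bill / youtube_ad_revenue_2020         # ~ 18%

    print(f"wage bill: ${wage_bill / 1e9:.2f}B, ~{share:.0%} of revenue")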


The other thing I'd say is that this isn't low-skill work. If you get thousands of people, train them for a couple days, pay poverty wages and tell them to make each decision in less than 30 seconds, you're not going to get good results.


The human moderators only have to watch what is reported, it is totally unnecessary to watch everything.


You would really only need humans to watch things that were reported by humans, not every single video.


Facebook would need to hire around 2.489 million moderators to meet a 1:1k ratio. Even at 1:10k, you’re looking at increasing the size of the company six-fold.


I gave 1:1K as just a random number, I don't know how much it would actually need. I'd expect it to be much lower. Maybe like 1:50k or less.

What exactly is the problem you're pointing to? Are you saying Facebook couldn't afford to pay that many mods? Note that they don't have to be employees or in Facebook offices...


You accept that there is an upper limit on community size, beyond which unsolvable problems await.


> manual moderating systems are also a no-go at scale

Yeah, I concede this is a very hard problem (especially when advertisers are involved). And even worse, maybe it's not even worth it to solve. After all, if a minority x% of your creators are expendable to false DMCA takedowns because your market share is so huge, who cares?


You revert back to becoming a 'dumb pipe' and make moderating someone else's problem.

Your ISP doesn't moderate for you either.


Okay, now we are back where we started. How does that 'someone else' moderate content at scale? Eventually someone has to take the burden.


If we're all bunched into small communities, there is no 'at scale'.

I regularly hang out in several different communities of thousands of people, including this one. All of them, except the really small ones, have one or a few people who moderate the forums, and ways to report things that are posted. If someone posts a link to a disturbing video on HN, it's going to disappear right quick, even if the source lives on. If I try out a community where there are a lot of disturbing videos posted, I'm going to stop going there; but other people might go there specifically because of those video links. If the videos are legal, that's their business and none of mine; I don't want to see it, and by avoiding their community, I won't.


I think it basically looks like Reddit. You carve up the site into fiefdoms and then have users self-moderate. Moderation that happens at the site layer largely targets fiefdoms rather than users.

New accounts are easy to create, but new communities are harder because you have to wait for members. Plus most of your moderators are free. I wouldn't be surprised if Reddit only pays for people to moderate the default subs since advertisers and new users are more likely to see them.


They do if you use their email. All email systems are unusable without spam and origin filtering.


They also are unusable with spam and origin filtering. Not getting a message you were waiting for? Turns out no message from that service is getting through; no notification, no nothing.


This is actually true, I can’t get any email for one specific Substack I paid for (and others work!). And since they do login by email I can’t log in at all.


Spam filtering is "keep you from seeing things you don't want to see". Moderation is "keep you from seeing things I don't want to show you".


That's not true. It's more that spam is automatic, and as such is easy to automate away. If someone replies "BUY MY PRODUCT http://product.link" on every post, that's spam. If someone is consistently shitty to other community members and generally acts like an asshole, that's someone who needs to be moderated away.


I partly agree. There's a skill and a moral quality internal to moderation, which means that good moderators will exclude the content they judge the community doesn't want without letting this judgement be swayed by their personal agenda. But bad moderation, where the moderator lacks the skill or the quality to do this, is very much a thing.


>So what do you do?

You could employ a little common sense when designing your product. Is it really wise for platforms like Facebook to allow anyone to start livestreaming unrestricted? I would argue no, at the very least they should be taking measures to minimise the blast radius should someone decide to walk outside and start shooting people.


Slashdot's system works just fine. With the exception of overtly-illegal content, moderation should consist solely of communal inputs to a function that operates solely at the discretion of the individual user.


> moderation should consist solely of communal inputs to a function that operates solely at the discretion of the individual user.

Exactly. Slashdot works because of its transparency: the user can see what was moderated down, and thus whether the moderation removed something he doesn't want to see, or something that someone else doesn't want him to see.
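A toy sketch of that idea: communal scores feeding a purely reader-side filter. All names, scores and thresholds here are made up for illustration; this is not Slashdot's actual implementation.

    # Toy Slashdot-style moderation: the community assigns scores, but each
    # reader picks their own display threshold. Nothing is deleted; low-scored
    # comments are merely hidden for users who asked for that.
    from dataclasses import dataclass

    @dataclass
    class Comment:
        text: str
        score: int          # sum of community moderations, e.g. -1 .. +5

    def visible(comments: list[Comment], my_threshold: int) -> list[Comment]:
        """Return the comments this particular user has chosen to see."""
        return [c for c in comments if c.score >= my_threshold]

    thread = [Comment("insightful reply", 5),
              Comment("mild flamebait", 0),
              Comment("obvious spam", -1)]

    print([c.text for c in visible(thread, my_threshold=1)])    # strict reader
    print([c.text for c in visible(thread, my_threshold=-1)])   # sees everything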


Keep forums smaller than 10x Dunbar’s Number.


Pay for it and stop being greedy.


Stop using bullshit centralized platforms. There is no gatekeeper on the matter of posting videos on the Internet. Established players (like Rossman) continuing to publish through the big players lends power to them. So stop doing that.


You've basically requested to cede all control of the message and to fade into irrelevance.


For what it's worth, Rossman also publishes on alternate platforms such as LBRY.


What about user moderation with [+] and [-] buttons?


Can easily get you echo chambers if the numbers are visible and people start using them as agree/disagree buttons.

User moderation via "flag as inappropriate" might work though.


Does it actually matter for Facebook as a business? Outside the HN bubble, very few users will actually quit just because some random page was suspended. Most users won't even notice.


It matters when someone with a high enough profile can trigger an investigation into their business practices.


> Even here, I'm pretty sure @dang is basically reading HN and cleaning it up all day.

I'm not sure about this. I believe there are automated systems and shadow bans on HN (not entirely sure how they work). There's certainly community moderation via flagging and downvoting, which is essentially how the FB video was removed as well.



From another one of your comments:

> I'm arguing that you can probably have both -- by using human moderators.

I don't think human moderators can review 720K hours of video per day (on YouTube). You would need to hire >100,000 moderators to continuously watch videos all day and have some redundancy in reviews, then thousands of supervisors to make sure moderators are doing their job correctly (almost impossible when you review videos all day with no break). Let's not even get into comment moderation...


It would be enough if human moderators decided on "flagged" content - or, in general, reviewed the AI's decisions.


And flags from accounts that routinely flag ok content should count less.


Or not at all obviously. And hundreds of other fine tuning measures, sure.
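One hypothetical way to make "count less" concrete is to weight each flag by how often that account's past flags were upheld by human review. This is only an illustrative sketch of the idea above, not any platform's real scheme; all names and numbers are made up.

    # Hypothetical flag-weighting: a flag counts in proportion to how often the
    # flagger's past flags were upheld by human review.
    def flag_weight(upheld: int, rejected: int) -> float:
        """Laplace-smoothed fraction of this account's past flags that were upheld."""
        return (upheld + 1) / (upheld + rejected + 2)

    def should_escalate(flaggers: list[tuple[int, int]], threshold: float = 2.0) -> bool:
        """Send to a human reviewer once the weighted flag mass passes a threshold."""
        return sum(flag_weight(u, r) for u, r in flaggers) >= threshold

    # Three reliable flaggers outweigh ten accounts whose flags are almost never upheld.
    print(should_escalate([(40, 2)] * 3))    # True  (~0.93 weight each)
    print(should_escalate([(0, 50)] * 10))   # False (~0.02 weight each)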


IMO it’s because he’s the owner of theDonald.win (community migrated to patriots.win).

My bet is his voice triggered the system.


His voice, tone and mannerisms are pretty triggering for sure. They trigger my gag reflexes.


where does it say he's the owner of thedonald.win?


Louis, while sometimes brash, has got to be one of the most genuine people on YouTube. As far as I can tell there is nothing about his content which is abusive. While I’m still not really sure how I feel about right to repair, I would hate to see him be censored as one of his best qualities is his uncensored nature.


I support the majority of right to repair principles but I find Rossman himself to be a person whose personality has suffered from its ever-increasing exposure to YouTube celebrity.

I certainly don't blame him. I know my own mind and I'm certain I wouldn't fare any better in his situation. I think most people would be unprepared for the psychology of becoming Internet-famous, of having their opinion continually validated by a hungry audience.

In fact it's genuinely fascinating to me when occasionally you see an individual who remains seemingly unaffected by celebrity, psychologically unaffected by their thought-leader status in a large community. I'd actually pick Linus Sebastian, of all people, as a canonical example of this. Whether it's sincerity or incredible acting skills, the stratospheric growth of his channel hasn't made him any more insufferable.

The few people who can navigate this perfectly truly shine in the world of YouTube. (The Slow Mo Guys is another example. And weirdly, I'd have to give a nod to Project Farm as well, for content that seems entirely unaffected by channel success.)

A lot of my favourite creators fall somewhere between these extremes. (e.g. Technology Connections, Mark Rober, savagegeese, Engineering Explained.) Not unscathed by their popularity, but seem to be able to cope with the mindfuck thus far.


As a long-time viewer of Louis's channel: IMO the change that you think you see in him is more a function of age (remember, he is a lot younger than he looks) than of internet fame.

I changed a lot in my life as well at a similar age. Louis is certainly someone who always likes to work on hard (but not unsolvable) problems. What he does with right to repair is just yet another one of those. This is just trying to repair the underlying cause of a lot of the troubles repair shops are or will be in.

I guess the only impact internet fame might have had is that he now sees this as a problem that can actually be solved (rather than one that he as yet another small repair shop owner cannot solve).


> he is a lot younger than he looks

...the dude is 32 (born in '88). So that means he was in his late 20s when he started his channel.

Why does he look like a 50 year old IT veteran who has seen it all, twice :D


Maybe because he did. I can't fathom how hard it is building a good, reputable third-party service center and really wanting to serve his customers the best way, while being pretty much road-blocked by Apple. Harvesting ICs and caps from donor boards is definitely hampering the technicians' work.


Stress and constant battle ages you, no joke.


How has it suffered exactly? Current videos seem just like those from years ago to me, although he now covers even more topics in the "Adventures of a small Apple repair shop"


Playing devil's advocate here:

The reason you become famous in the first place is because you do a good job being creative with the possibility space in your present environment. Once you become famous, that environment and possibility space changes, and you are forced to choose between staying grounded, which may mean choosing not to use the new superpowers given to you, or continuing to do what you have always done with your new capabilities...


I'd argue that isn't devil's advocate; rather, it's pretty much spot on. Being restrained in the face of influence isn't the norm. The default state of the Human On The Internet is to be certain of our own opinions and want to shove them in front of everyone who will listen.


Suffered in what way, exactly? Rossman's new content seems very much like his old content to me.


i mean, he knows how to take care of PPS, got good balls soldering and always splashing that nice flux on the nastiest stuff


Love Louis myself. Very bright guy, great work, honest.

What puzzles me is how anybody can be unsure about the right to repair.


Some bad and crazy ideas are attached to that label. Also, some perfectly reasonable ones. I’m not sure about “right to repair” because I never know which ones we’re talking about.


On paper, you already have the right to repair things that you own. However, it's hard to materially ensure the right to repair without regulating the market to ensure availability of replacement parts and schematics, ban certain classes of security mechanism, mandate a certain level of openness for manufacturing/diagnostic/recovery software, and so on. A lot of people see any initiative shaped like this (cf. net neutrality, product safety regulations) as little more than an excuse for government overreach, and even supporters can have various concerns about the laws/regulations being worded or implemented in a way that inadvertently ends up doing more harm than good.


Examples like this show the time is ripe for alternative social media platforms to replace the established networks.

Dealing with bad actors in public forums is not as hard as tech companies would want you to believe. This is evident because we do it today: it's our system of courts.

Take any platform and implement a simple e-court system with juries made up of random users from the network. The injured party can effectively e-sue, and the pool of randomly selected users must all agree that abuse took place. No more frivolous takedowns.

This also takes care of the pesky catch-all ToS. The platform no longer needs to define abuse. Instead, we take the approach of Justice Stewart: "I know it when I see it."

Juries participating get rewards (badges, recognition, opportunities as judges / advocates, etc)

The reason companies do not implement such a system is because it takes power away from them (management, staff) and gives it to the users.

That's precisely why such a system would be a winner.
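A minimal sketch of the proposed e-jury mechanism: random jurors plus a unanimity requirement. All names, jury sizes and data shapes here are hypothetical illustrations of the proposal above, not a spec.

    # Toy e-jury: a complaint is shown to a random sample of users, and the
    # content is removed only if the verdict is unanimous.
    import random

    def empanel_jury(user_ids: list[str], complainant: str, accused: str,
                     size: int = 10) -> list[str]:
        """Pick jurors at random, excluding the two parties to the dispute."""
        pool = [u for u in user_ids if u not in (complainant, accused)]
        return random.sample(pool, size)

    def verdict(votes: list[bool]) -> str:
        """Unanimous 'abuse' votes remove the content; anything else keeps it up."""
        return "remove" if votes and all(votes) else "keep"

    jurors = empanel_jury([f"user{i}" for i in range(1000)], "alice", "bob")
    votes = [random.random() < 0.9 for _ in jurors]   # stand-in for real juror decisions
    print(jurors[:3], verdict(votes))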


This would definitely end up with most users not wanting to sift through whatever case they get "assigned", and the platform effectively being moderated by a big group of power users. You can already see this happening with Reddit. A handful of people control the subs that make up something like 40% of the content on the site.

Also, this completely breaks down when you start talking about having a jury agree on what is/isn't hate speech or something similar. You are never going to get agreement on that.


Sounds like a great use case for a future iteration of captchas. "Please verify you are human by judging if this user is abusive".

You don't need agreement, just consensus. You don't need unanimity. Bias can be analyzed handily enough across multiple user "judgements".


Slashdot used this for moderation, dunno if they still do.

Basically people got a random selection of moderated comments and got to choose from different options.

If a mod's work was constantly off-base, they would not get mod rights again.


1) Real juries perform exactly as you describe. The vast majority of people want to get out. This is no different.

2) That's exactly the point. If a group of random people can't agree that some post is hate speech, then it's not hate speech beyond reasonable doubt.


> 1) Real juries perform exactly as you describe. The vast majority of people want to get out. This is no different.

It's a requirement in real life. If you try making it a requirement to use your platform, I don't see why any users would stay. I think users will overwhelmingly prefer to have the content curated for them, so they don't have to look at child porn and beheadings every week for the opportunity to use social media.

> 2) That's exactly the point. If a group of random people can't agree that some post is hate speech, then it's not hate speech beyond reasonable doubt.

This sounds like an incredibly quick way to make your platform just a collection of the worst parts of the internet. A place with a ToS determined by the users, where you need an overwhelming majority[1] to convict anything, isn't going to last long with people who just wanna share some pictures with friends.

[1] Your post says unanimous, but I'll give it the benefit of the doubt and assume that's not literal, since getting a unanimous decision is essentially impossible on the internet.


I am not OP, but child porn and beheadings would not be allowed regardless.

This is about more gray areas, like: should we ban this Joe Rogan episode? I would personally spend a few minutes each week going through an e-jury if that would help scrub the communities I like on Reddit. Right now Reddit is a sterile echo chamber; it could become a much more engaged platform, like it used to be.


> 2) That's exactly the point. If a group of random people can't agree that some post is hate speech, then it's not hate speech beyond reasonable doubt.

Real juries have a selection process that is supposed to weed out biased individuals. Real juries have judges that steer them towards a particular way of viewing evidence. Real juries are forbidden from doing their own research, and can only passively listen to lawyers make arguments.


this approach is used by the game Counter-Strike: Global Offensive. the system is called Overwatch (not to be confused with the game of the same name).

how it works: once a player has received enough reports for suspicious behaviour (i.e., cheating) or bad behaviour (griefing, throwing, etc.), a replay of the match is submitted to tens of other players. these players review the "demo" and decide on the likelihood of the "suspect" being guilty of those offenses. if all reviewers come to a unanimous decision, then the suspect receives a ban.

the system is thoroughly abused[1] by hackers/cheaters who are able to spam reports for players they don't like, and by controlling hundreds, if not thousands, of bot accounts, can reach unanimous verdicts that no normal person would decide, inflicting bans on innocent players.

[1] - https://www.youtube.com/watch?v=N0OiOiqmi-c


So, I think the base concept is fine, but requires oversight. It's an augmentation of anticheat, not a replacement.

Hidden reliability scores should aid the classification of "judges". Misclassify often? Get downprioritized. As CS is a paid game, the typical issue of mass account creation to game this should be at least slightly mitigated.

Anticheat has to be defense in depth.


> Anticheat has to be defense in depth.

I'm pretty sure every layer of defense you can think of, they've tried.

- Machine learning to spot obvious cases? Check.

- Federated moderation? Check.

- Various trust schemes to weigh accounts based on how much they paid, whether they cheated in the past, etc? Check.

The only thing I don't think they've tried too much is sending false/stale game data in cases where they know the client has no way to know otherwise (eg lying about an enemy's position when the player has no way to see the enemy).

It's an arms race and it's hard to stay ahead at scale.

(though I'm not actually sure I trust accounts of how prevalent cheating is; I haven't had that many cheaters in my own games, and a lot of complaints I've seen come from either free users or people who admitted to cheating themselves, which is what I'd expect to see if Trust Factor worked; though it's a crap deal for free users)


well, therein lies the issue: this approach, one of scoring jurors (i think that’s a more appropriate term :), already happens. almost exactly as you describe.

defense in depth, indeed: Overwatch is one of three systems, the other two being VAC, and Trust. the former being more traditional cheat detection (which, empirically, is not effective at all), and the latter being an observation of a user’s interaction on the entire steam platform, amongst other things, to determine a “trust factor”. anecdotally, myself and many others can tell you that this is also ineffective.

for what it’s worth, 2020 and by the looks of it, 2021, will go down in CSGO’s history as the years that the cheaters won.


So, looking at Riot's anticheat for Valorant, they're generally considered to do a good job. Similarly for CSGO, FaceIT exists, and from what I've seen, is considered a good, low-cheater environment.

The issue with CSGO was not that the methods were ineffective, but that it's a constant war. The moment the devs stop fighting, the cheaters can and will most definitely win. Valve gave up, so CSGO is cheater-infested. FaceIT didn't, and thus is a better environment.

For paid games, there's a fun addon there. If cheaters have to buy a new account every time a ban hits, in theory, effective anticheat can subsidize its own cost. The catch being that they'll then start using hacked accounts to cheat on.

Then 2FA comes in... Such is the rabbit hole of anticheat; there's always a next step. Somehow, cheating at games is a prime objective for far too many people.


Valorant's anti-cheat is considered to be good, but hacks are available. It's also incredibly invasive - running in the kernel, meaning it must be enabled at boot time - something I don't think is either necessary or desirable.

FaceIT/ESEA isn't much better, and I suspect it's only better at all because of the higher barrier to entry.

As for MFA: Trust takes this into account in determining its score.

To stay on topic here, I think what I was originally trying to say was: yes, having humans in the mix for self-moderation is a good idea in principle. In practice, it's easy to manipulate because it's very hard, potentially even impossible, to determine if you actually have a human using the machine. Well, except in China, I suppose. But I don't think we want to live in that world :)


I can't imagine what could go wrong with showing a random pool of your users suspected CSAM, beheadings, animal cruelty, and so on. Just out of curiosity, what do you plan to do in the unlikely case that you can't get people to do this grueling work for free? Just leave all of that content on the site?

It does mean that you won't be able to act on anything except the grossest of abuse. Given the hyper-partisan divide in America, you'd never get 10 people to agree on any case with even a whiff of politics. People will not vote on "is this abusive", they'll vote on "do I agree with this".

But, hey it's not all bad. Want some viewers for your video? Here's a growth hacking tip: Just have a friend file an abuse complaint against it, and you'll get it sent into the jury rotation pool.


1) RE: what could go wrong with showing suspected CSAM to users.

You are not describing something that doesn't happen in real life. Jury pools are given a summary of the case precisely so that jurors can recuse themselves if its content is too X. E-jury adjudication would make this incredibly simple via a flagging system: "X user has stated this is animal cruelty, and the algo concurs." So you know what you are dealing with before anyone hits play (a rough sketch of this flow follows the list below).

2) RE: No one wants the work. Real-life Jury pools consistently have 100+ candidates for a final 12. E-juries are much better because they can actually scale and can cycle much faster. The world has 7B people. With the right incentives, you can find 10 or so people to take a case on.

Plus, you are forgetting that there are also real penalties for losing a case that can act to fend off the undesirables. For example, you could make it so that losing a case means the platform discloses your IP. Makes all your private postings/messages public. Opens up the logs of use. Take your pick. There are a million ways to punish degenerates which do not happen now because algos decide, and they have such a high false-positive rate that to punish would be doubling down on a failed strategy.

3) RE: No agreement except for gross abuse. I have to disagree. 98%+ of people would agree that death, CSAM, animal cruelty, etc. are offenses that should be dealt with. You already have consensus there. Anything else is on the spectrum of spam / hate / etc. Yet if it is not beyond reasonable doubt, then it's probably not gross.

4) Growth hacking: You seem to forget there are penalties for false accusations in real life. This is no different. If your friend wants to lose access to his own account, he can go right ahead and report a video for the 10 extra views....
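
A rough sketch of the pre-screen/recusal flow mentioned in point 1; the category names, confidence threshold, and opt-out check are illustrative assumptions, not any platform's real system.

    # Sketch of a flagging/recusal step: jurors see a case summary (reporter's
    # label plus, if confident, the classifier's label) before viewing anything,
    # and are never assigned categories they've opted out of.
    SENSITIVE = {"animal_cruelty", "graphic_violence"}

    def case_summary(reporter_label, classifier_label, classifier_confidence):
        """What a prospective juror sees *before* hitting play."""
        flags = {reporter_label}
        if classifier_confidence > 0.8:
            flags.add(classifier_label)  # "the algo concurs"
        return {"flags": sorted(flags),
                "requires_opt_in": bool(flags & SENSITIVE)}

    def offer_case(juror_opted_out, summary):
        """Only assign the case to jurors who haven't opted out of its categories."""
        return not (set(summary["flags"]) & set(juror_opted_out))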


I don't think you quite appreciate what a cesspool the Internet can be, especially when you plan to run a service that's going to be basically unmoderated. The amount of material to review is going to absolutely overwhelm the few volunteers. So what are you doing when there are not enough reviewers? Is your platform going to block that content until it is reviewed, or show it until it is reviewed? Now, whichever option you choose, take a minute to think about how it would be exploited.

As for your idea on "real penalties"... Somebody posts abusive material, and as a penalty the platform would distribute more of their material? Harsh. Oh, I sure hope they don't close my throwaway account created from a VPN IP. How could I ever replace it with a new one?

People doing this work professionally get psychological counseling for it. You simply could not subject normal users to it. Not morally, not legally, and not if you want to actually run a business.

I don't know what kind of support the judicial systems give juries who have to view that kind of material. But at least in those cases there are real consequences if the defendant is found guilty. Not so on the Internet.


The problem is that all of the content is siloed. When the services are run by small groups of like-minded people (or people self-hosting) this tends to be less of a problem.


This idea seems to ignore two things:

1) jury duty is maybe 1 day per 1-2 years in America for most people, and it's still widely disliked. In my experience, it's the most reliable place you can go and find adults acting like children who want to get out of school early. I'm not saying that's unreasonable, but I am saying that most people don't like expending their time on it.

2) A possible response to point 1) is that the frictions involved in virtual jury duty are much lower than those of real-world jury duty. But, this cuts both ways. One reason our legal system works is that people can't just automatically spam it with garbage claims (well, they can, but it requires spending time and resources). This defense doesn't apply when the whole thing is online. Heuristics for automatic defense of the jury system then run into the same problems of automatic moderating.

Overall, I agree that it's unfortunate that automated moderating makes so many high-profile screw-ups. But I'm also confused by how many people are sure that it's because tech companies are ignoring obvious solutions.


I like the idea, but it would need some refinement.

> Juries participating get rewards (badges, recognition, opportunities as judges / advocates, etc)

There are also external rewards that need to be considered. For example, being a juror because you want to silence someone. I wouldn't put it past 4chan to try something like that.

> The injured party can effectively e-sue

Without any kind of cost or risk, this will get abused. Remember that it takes 1 person to make a report, and 12 people to form the jury. It doesn't take much to flood the "court", and then there's effectively no moderation.

> The reason companies do not implement such a system is because it takes power away from them (management, staff) and gives it to the users.

I don't think it's directly about power. It's largely about advertising money. They need to make sure advertisers are okay with the ads being run next to the content. Advertisers will not care that a jury voted to keep a racist tirade.


Take that one shotgun suicide viral video incident on TikTok. Do you really think that should be shown to "a jury of random users" to decide whether it is safe to view?

Now imagine periodically reviewing videos like this being your 'duty'. In addition to scarring your users for life, they'd be leaving in droves, and you'd be inundated in lawsuits.


1) TikTok - People actually curate this content already. The only difference is that they are staff, and being paid. So maybe take the same compensation and open it up at scale, so it's an improved version of the curation that already happens. Again, there are people in India paid to do this today. It's not like it's not happening; you just need to randomize it and spread responsibility across a wider body.

2) This already happens in real life. I am sure there are mothers serving on juries in cases about the murder of a toddler by a parent. The difference is that there are a number of things one could do to manage user preferences in such cases. Also, content would not be as pervasive as implied if penalties are effective at making borderline actors self-moderate and at permanently removing bad actors with real consequences (IP banning, public logs, etc.).


I’m sorry, you can’t seriously expect a platform to willingly show users potentially horrific content. That’s just disconnected from reality entirely.


No thanks, I don't want my content judged by random people from Pakistan or Venezuela. That seems worse than the current moderation system.


> Examples like this show the time is ripe for alternative social media platforms to replace the established networks.

AFAICT social media replaced "the established networks" a decade ago.


SaaS rule on the internet: "Every payment service becomes like PayPal. Every social network becomes Facebook. Every search engine becomes Google."

Most people at these companies are fine people. There are forces (revenue, survival instinct, compliance, network effects, competition, being a monopoly is nice, ...) that drive these companies. PayPal, Google, and Facebook are the end state of the current forces in the market.

You would need to remove one or several of those forces (e.g. being revenue-driven) to end up in a different end state.


What is crazy is that I manage an FB group with 8,000 users, and we get lots of join requests from fake accounts so that they can spam whatever they want to sell.

So I report the profiles many times (profiles with nudes, explicit pictures, etc.), and half of the time I get a response that they do not violate FB policies. I am talking about pictures of assholes, tits, etc.

And no, it is not that I am horrified by seeing that; it is that if I want to see that, I go to other pages.


Same here, 5k-user local town group. Every day: accounts with names from another continent, created in the last week, with sexually suggestive pictures and a caption containing some URL shortener to video chat with them. 100% of the time they are found to follow the community standards, in my experience.

I honestly believe the FB UI for reviewers only shows the picture and not the caption; I can't imagine any other explanation.


You’re wasting your time reporting them. They are either bot-generated or done in large sweatshop operations. Just reject the membership and move on.

It'll get better when Facebook chooses to deal with this more accurately. Until then, any actions that go beyond basic protection of our groups are simply tilting at windmills.


It's almost like they took extra measures to make sure their automods aren't prudes, but not enough steps to make sure they don't block perfectly fine content.


Is this not libel? Labeling his videos as "abusive" sounds like it tarnishes his reputation and denies him any ad revenue he could get. It seems like it could be pursued via a lawsuit.


> Is this not libel?

No. “Abusive” is not a fact claim.


Is it not a "fact"? If they say that the video breaks their rules on abusive content, it moves away from opinion and toward fact.

Facebook deliberately created a set of rules/measures to determine abusive content, specifically such that they couldn’t be accused of being arbitrary, or opinion-driven. So their rules specifically ensure that their determinations are fact-based not opinion-based. If they determine that content has broken their rules on “abusive content” it seems pretty fact-based.

So labeling content as having broken their rules on abusive content is literally making a fact claim.


> Is it not a “fact”?

No, not in the sense required for libel law.

> If they say that the video breaks their rules on abusing content, it goes away from opinion and more into fact.

That would arguably be true if their standards were remotely objective; they are not.

> Facebook deliberately created a set of rules/measures to determine abusive content, specifically such that they couldn’t be accused of being arbitrary, or opinion-driven

Whether or not that is the intent of the rules, the rules are opinion-driven on their face.

> So their rules specifically ensure that their determinations are fact-based not opinion-based

No, they don’t. Have you read their rules?

> . If they determine that content has broken their rules on “abusive content” it seems pretty fact-based.

Really?

---

The following behavior isn't allowed on Facebook:

Posting things that don't follow the Facebook Community Standards (ex: threats, hate speech, graphic violence).

Using Facebook to bully, impersonate or harass anyone.

Abusing Facebook features (ex: sending friend requests to many people you don't know). Overusing features could make other people feel uncomfortable or unsafe. As a result, we have limits in place to limit the rate at which you can use features. Learn more about these limits.

--- [0]

Seems like a whole lot of opinion in there to me, even before looking at the Community Standards (incorporated by reference, as they've indicated anything not following them can be labeled "abusive").

But let's look at them [1]... and find there are no fact-oriented rules at all, but a statement that "...when we limit expression, we do it in service of one or more of the following values:" followed by a list of vague value statements without factual criteria.

[0] https://m.facebook.com/help/216782648341460

[1] https://m.facebook.com/communitystandards


I think you're wrong. If Facebook defines a community standard, that can be opinion-based. But saying someone violated those standards is fact-based. It doesn't matter what the rules are in this case. The rules aren't the issue. Saying that someone violated those rules is the issue. Whether someone violated the rules or didn't violate them is a fact. And if they say someone violated the rules when they didn't, that is an incorrect factual statement, which I believe is a case for libel, especially if they say someone broke the rules on things like abusive content.


All these tech companies have a lot to lose from right to repair, so I wouldn't be surprised if someone is trying to take down his videos.


I ran into a similar problem on Quora. Someone left an arguably false answer. I commented that I didn't think the answer was correct and showed why. The OP disagreed. I disagreed with his disagreement. He disagreed with mine. He reported me to Quora. I got a nasty message from Quora that my comments were spam, and they threatened to remove me (they were not remotely spam, and they were also not in any way abusive or personal attacks).


Is Asus employing reputation management firms?


Isn't it better to assume every major company is than to blindly assume good actors?

The only company I can't imagine doing this is Google. They have an awful reputation. Their fruit counterpart, Nintendo, Samsung, and more get suspiciously positive reactions to anti-consumer news.


I'm fairly confident it's because he owns theDonald.win (the community migrated to patriots.win).


Do you have a source for that? A DDG search only shows he’s mentioned in one article of the site, which seems to have a few other domains as well.

I tried to do a whois on those domains, but it seems they’re managed by an intermediary and anonymized.


Perhaps they assume this is the case because that domain now redirects to a page that embeds one of Rossmann's videos? Thin evidence at best; that is simply one among many explanations for that redirect.


I like Rossmann's spirit and am 100% on board with the right-to-repair movement, just not his toxic delivery. His videos are ramblings about something that could be summarized in 30 seconds. They're filled with hateful contempt and everything to bring out the worst in people. It's a reality show.

I wouldn't call it abusive, and it shouldn't be banned, but it's not without issues either. Most people find him great, so perhaps I'm the one misjudging something.


Sometimes bad things are being done, by bad people, and those things and people should be spoken of harshly, not in neutral, Vulcan-like delivery. Rossmann can be a bit severe, but nearly always with good reason: the people he's speaking ill of are generally crooks of one kind or another.

Of course, speaking or writing with nothing BUT appeals to emotion is vacuous and cynical, and the horde of youtube grifters that have sprung up to make money off getting people wound up are trash. But unlike them, Rossmann actually has well-considered content behind his passion, not just hot air.


Having a well-reasoned argument doesn't justify abusive language. And just because there are bad people in the world doesn't mean it's okay to treat them badly. Sometimes abuse and badness are necessary, such as in war or for self-defense. But when it's not necessary, it shouldn't be used just because you find yourself on the convenient side of the moral picket fence.


A well-reasoned person can handle abusive language, especially at the level you're comparing Rossmann to. If you're comparing what Rossmann outputs to the level of abuse and badness found in war or self-defense... well... think about that and maybe edit the comment afterwards.


Is his language abusive? I would say it's abrasive, but I don't really see how it's abusing anyone except maybe large faceless companies. He doesn't really go after specific individuals.


Right-to-Repair is such a wonderful idea stained by people such as Rossmann IMO. Contrast this with EFF and iFixit's approach and contribution to this movement.

> Sometimes abuse and badness are necessary

Sure, but we tend to only support this argument as far as it echoes in our echo chambers. We've silenced right-wing social media and a bunch of other "abuse and badness" and HN gives a green ticket to it. Some of it is truly horrifying, but there was a lot of collateral damage to freedom of expression and speech.

I don't support badness and abuse. Protesting for a cause is more about gaining support for the cause and convincing the people that are against it. It is a "pull" model, not a "push" one. By pushing harder, you're creating more echo chambers of vile, contemptuous vigilante justice, damaging the original cause and making no progress on it. Self-defeating.


You're using an awful lot of adjectives but, so far, no concrete examples. What specific statements has Rossmann made that could genuinely be construed as "vile, contemptuous vigilante justice"?

I also completely disagree that we should ever sugarcoat our language when addressing practices that have clear malice and greed behind them. Prioritising the feelings of some millionaire/billionaire shareholder over the thousands to millions of customers being exploited and short-changed frankly smells of messed-up priorities.

The only people this approach genuinely "pushes" away are a group that you cannot expect to act in good faith in the first place - the companies deliberately obstructing the right to repair. You don't ask them nicely to do something they have zero profit motivation towards. You force them by enacting legislation or creating enough of a backlash that they calculate that supporting Right to Repair is better for their bottom line.


> The only people this approach genuinely "pushes" away are a group that you cannot expect to act in good faith in the first place - the companies deliberately obstructing the right to repair.

This is true, fair enough. The incentives aren't aligned here for big corporations. My points about protesting were more generic and went beyond the R2R initiative. You make good points that being nice to large corporations isn't going to achieve anything. But would getting angry do anything? I am not quite convinced, because the regular Joe is going to buy that shiny piece of unrepairable equipment. We're just so used to throwaway culture; the problem is much deeper. Both businesses and consumers are to blame. Unfortunately, the slice of consumers that care about R2R is vanishingly small.


Lobbying and putting political pressure on local representatives is definitely a few steps above merely "getting angry". The angle that is going to make the average consumer care about R2R inevitably involves fostering righteous outrage. The enemy of enacting effective change is not anger, it's apathy.

I also don't buy that consumers are really to blame for this any more than somebody born into a society that believes in the divine right of kings is to blame for being a monarchist. Corporations and their friends in media and government influence or outright manufacture the overwhelming majority of the culture that is projected onto people. Is it really a surprise, then, that most people have internalised the precise world view and expectations of their purchases that is convenient to companies?


You may not be in a position to understand just how bad things have gotten. It gets to the point where someone truly passionate, with the stamina to give a damn publicly (most of us wouldn't dare do such a thing, myself included), has to do something about it. Rossmann is legit.


It is ridiculous that they refuse to publish and just reject. Even Orwell in 1948 knew that words can be cut out and substituted by an industrial-scale staff. These days Apple could simply hire an AI to rewrite the offending passages.


Well, Facebook is free to ban anybody from using their services, since it is a private platform. If Rossmann wants, he can start his own billion dollar social network and post whatever he likes over there.


Yes, Facebook do have that right. But they also have a right to look stupid while doing it.



