
Why do companies with unbounded resources still have terrible moderation? - moultano
https://moultano.wordpress.com/2019/10/02/why-do-companies-with-huge-resources-still-have-terrible-moderation/
======
wvenable
There are two classes of users: normal people who want to use your service and
people who actively weaponize it for their own ego. Those latter users have
all the time in the world and can defeat any attempt to stop them. This is
because, online, time is the only resource that matters. You have a limited
amount of this resource. Your company has a limited amount of this resource.
Your normal users have a limited amount of this resource.

There is a discussion below about Flat Earthers having every right to speak --
and I agree. And in fact, I doubt anyone would truly care about Flat Earthers
having their Flat Earth forum or even adding an occasional Flat Earth comment
whenever NASA posts a round Earth photo. That's never been the problem. The
problem is that some users don't stop there -- they're on a mission to change
the world. They're going to abuse your platform to the extreme to promote their
views. Most big platforms right now have a huge mess of people tossing free
bytes at each other.

If a small group of dedicated people wanted to ruin Hacker News for the rest
of us, I have no doubt in my mind that they could. No amount of moderation
could stop them. I've seen it first hand a few times with other services.

~~~
chr1
They couldn't ruin HN, because of its system of voting, flagging, and the high
karma requirement for downvoting.

And this applies in general: the only workable way to moderate is to distribute
the effort among users who are invested in the final result.

Ideally, big sites would not have one single system for moderation but multiple
lists, so that if two groups keep flagging each other they get separated and
stop seeing each other's posts, instead of one group having to leave the site.
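
A rough sketch of what I mean, in Python (the karma threshold, names, and
separation rule here are all made up for illustration, not HN's actual
implementation):

    DOWNVOTE_KARMA_THRESHOLD = 500  # hypothetical; HN's real threshold differs

    class User:
        def __init__(self, name, karma=0):
            self.name = name
            self.karma = karma
            self.flags_against = set()  # users whose posts this user keeps flagging

        def can_downvote(self):
            # Downvoting is gated behind accumulated karma.
            return self.karma >= DOWNVOTE_KARMA_THRESHOLD

    def mutually_hostile(a, b):
        """True when two users persistently flag each other's posts."""
        return b.name in a.flags_against and a.name in b.flags_against

    def visible_to(viewer, author):
        # Separate feuding groups instead of forcing one off the site:
        # each side simply stops seeing the other's posts.
        return not mutually_hostile(viewer, author)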

~~~
codesushi42
I don't know.

HN has plenty of misinformation posted on it constantly. Countless posts that
are objectively wrong, and in many cases it is done intentionally. I have seen
lies supporting eugenics, lies about how quantum computing is a fake
conspiracy, etc etc. And yet the mods only jump in when you're impolite by
referring to OP with "you" and "you're" because it is a "personal attack".
They are optimizing for the wrong problem, IMO. I would rather argue with
people who are snide than with liars who have an agenda.

HN has its own "fake news" problems. And this is the society we are living in;
you can say anything as long as it doesn't offend or hurt someone's feelings.
Even if it is objectively fake horse shit.

~~~
chr1
In your example, if someone pointed out that quantum computers require some
equations to hold to far greater precision than anything ever measured, so
there is a small but non-zero probability that quantum computers will turn out
not to work, would that be a lie that moderators should delete?

I believe it's not a moderator's job to decide what is a lie and what is not.
If you see a lie, write a truthful comment refuting it; that way you can
convince people instead of trying to silence them.

~~~
vonmoltke
> I believe it's not moderators job to decide what is a lie and what is not

codesushi42 didn't say it was. They said the system you describe still allows
lies, myths, and misinformation to spread. The only mention of moderators was
in the context of chastising the poster who calls the misinformation out.

> if you see a lie, write a truthful comment refuting it, that way you can
> convince people instead of trying to silence

The tools you referred to _are_ tools to silence, so you are "trying to
silence" every time you flag or downvote a post. As for refuting the
misinformation, I regularly see posts doing this downvoted while the
misinformation is upvoted[1]. This is particularly true in discussions on
contentious topics. Thus, I'm not nearly as certain as you are that these
tools would protect HN from a sufficiently dedicated group.

[1] At least once, because I downvote it and the comment remains black.

~~~
codesushi42
Exactly this. HN is a cesspool of misinformation, whether it comes from people
pushing an agenda or just plain ignorance. I take everything said here with a
grain of salt and fact check _everything_, because I have seen some very
creative and convincing misinformation here that, when researched, doesn't
hold up under scrutiny.

My point is that people casually browsing this place are going to be infected
by the misinformation and spread it to others, all while the mods overfit on
whether someone's response was polite enough, a judgment which in a lot of
cases is biased to begin with.

Stack Exchange is a much, MUCH more reliable source of information than the
horse shit I see pranced around here on a daily basis. Their moderation system
actually works because it empowers knowledgeable users.

~~~
tlb
I don't find this to be the case. Usually, when someone is wrong, they are
quickly corrected. So when reading a thread, I rarely come away with an
incorrect belief.

Maybe an example would help me understand what you mean. Could you link to
some clear misinformation on HN that wasn't convincingly refuted elsewhere in
the thread?

~~~
codesushi42
Plenty of examples here, even with people coming out of the woodwork to support
eugenics under false pretenses:
[https://news.ycombinator.com/item?id=20542738](https://news.ycombinator.com/item?id=20542738)

Look up some accounts (jlawson) and you will find they consistently post
right-wing BS here.

Or just search HN for eugenics and see all the crazy claims made...

There are plenty of other examples driven by ignorance rather than agenda, like
someone giving a long explanation of the thermodynamics behind the efficiency
of steam engines that sounded entirely convincing but was complete horse shit
once researched. No one refuted it, and it was upvoted plenty.

~~~
tlb
I looked in the places you suggested but none of them seem like cesspools of
misinformation.

Political opinions from across the right-left spectrum, if presented in a
civil and substantive way and without fake facts, are not against the rules
here. Extreme opinions tend to get downvoted pretty quickly, though.

~~~
codesushi42
I guess you didn't look hard enough, because the top comment is a lie:

 _If you outlaw this, people like me (who think inflicting stupidity and ill-
health on their children by withholding these technologies is morally wrong)
will just fly to Singapore. Even if you could enforce this, not every country
will. 5+ standard deviation increases in many traits is on the table_

Nope, 5+ standard deviations are not on the table.

~~~
tlb
You might disapprove of the sentiment, but that does not appear to be a lie. A
lie has to be factually incorrect. See
[https://en.wikipedia.org/wiki/Lie](https://en.wikipedia.org/wiki/Lie)

~~~
codesushi42
Claiming 5+ std deviations is a statement, not a sentiment. The author cites a
paper that makes no such claim. That counts as a lie.

~~~
tlb
The citation by Gwern mentions a number of traits which could plausibly be
improved 5 SD, like height (which would be +20 inches -- ie, a 7'5" man).

~~~
codesushi42
Height has nothing to do with intelligence, which is what the discussion
thread is about.

~~~
tlb
I think pointing to evidence that _many_ traits are susceptible to +5 SD
increases by genetic screening is reasonable in the context of a discussion about
intelligence, even though +5 SD change in intelligence hasn't been
demonstrated.

+5 SD change in intelligence (ie, average IQ of 175) would be amazing, but
it's not obviously impossible. So it's "on the table" in the sense that cities
on Mars are "on the table". It's not prohibited by any known laws of nature,
and some experts are working on it.

There are lots of actual lies in circulation, so I encourage you to reserve
the use of the word for things that are demonstrably false, not just ambitious
claims that one might be skeptical of.

~~~
codesushi42
That was never shown. It is you who is stretching what can reasonably be
claimed, with basically no evidence supporting such nonsense. None of what you
said was in the original source.

Stretching the truth like this is lying, whether you like it or not.

------
michaelbuckbee
The article concludes with:

1\. Effective moderation can only come from others steeped in the particular
cultural context they're part of.

2\. Effort should be spent on giving communities the ability to moderate
themselves at scale.

With that in mind, it may be worth considering Reddit. Each subreddit has its
own moderation team, drawn from the community, that is charged with enforcing
sitewide guidelines like not doxing people, etc.

This lets them penalize and ban at the community/subreddit level instead of
trying to interact with individuals.

There are problems, of course, but with 20 million people a month interacting
with the site you're going to have problems no matter what (as the article
explains in detail).

~~~
moultano
I do think Reddit is particularly well suited to having good moderation by
virtue of its structure, and Twitter is particularly poorly suited. Twitter is
all about decontextualizing what people are saying.

~~~
AkshatM
I think a useful somewhat-apples-to-apples comparison for moderation is: Quora
and StackExchange. Both websites are intended to be Q&A platforms, but their
approach is markedly different and results in different moderation issues.

Quora is heavily centralized - all content must be moderated by the company,
there is little to no segregation of questions by dedicated community, and all
questions are allowed. Individual users must be name-identified and,
importantly, are the core ingredient for Quora - people come to read the
content of specific people, not for quality of the content in general. Users
have stake but no say in the evolution of the system, and are by design
incentivised to not care about question or answer quality but instead about
the people they engage with. Further, questions are considered to be "owned by
everyone" \- so users are free to alter questions indefinitely.

In contrast, StackExchange (not StackOverflow) has similar properties to
Reddit - partitioned into smaller communities, reliant on user moderation,
equipped with a somewhat reasonable appeals process that relies on a "meta"
site where users can contribute to improving the site. Users have both stake
and say, and importantly are dedicated to improving the average quality of
answers rather than who is answering.

The difference in moderation is stark. Quora's issues with moderation have been
legendary, ranging from an inability to stop sockpuppet voting, pay-to-upvote
rings, and the reversal of helpful edits, to an outright refusal to take action
against documented sexual harassment of users by other users. StackExchange has
had no such issues, or rather it has had vanishingly few issues compared to
Quora.

While we obviously can't compare them on equal footing, given that the sites
are so different, this is some circumstantial evidence that community
moderation, allowing communities to manage their own evolution, and removing the
focus on individuals make for a better experience between two sites that seek to
accomplish the same goal.

I'm not sure what this would say about social networks - which are all about
emphasis on other people - but it does seem like a great model for online
communities that aren't social network focused.

------
Animats
This is why moderation should stick to the "clear and present danger" standard
established by US courts. There has to be a _credible threat of violence_
directed at a _specific individual_. "Hate speech" is too hard to distinguish
from political speech. Don't try to do it in advance. Prior restraint is bad.

Banning people for being excessively annoying over a period of time is
perhaps an option. That's after the fact, not before.

~~~
jfengel
That may not be optimal for the site, though. Many users will leave a site
before it reaches the level of violent threats.

The goal of moderators (on most sites) is neither to maximize speech nor to
protect people's feelings. The goal is to keep people engaged with the site,
and that of necessity means balancing the people who wish to be annoying with
those who will leave when their annoyance gets too high. That gets the maximum
engagement, which usually goes hand in hand with maximum revenue.

Even on sites whose goal is divorced from revenue, many will find that
Gresham's Law of Comments applies: bad speech drives out good. Hate speech
tends to drive away people from the topic at hand, because people respond to
hate speech as if it were a threat, even if it isn't immediately physical
violence. Responding to that dominates the conversation, and then people
interested in the original conversation leave.

Good user controls can help, but ultimately a site must keep its eye on the
bottom line. You may have a hard time distinguishing hate speech from valid
political speech, but users know when they're not enjoying the site, and
leave. It's up to the site's designers to pick their target audience and
encourage them to stay. And if that target audience is "everybody, including
people who want to engage in hate speech", they may find that "nobody, except
people who want to engage in hate speech" is what they end up with. Which may
well be what they want, but for those who don't, they need to recognize that
and take steps.

------
otterley
Moderation is indeed difficult, but automating moderation is a bar we need not
even get to yet, when even policies that are relatively easy to enforce keep
getting watered down.

Just today, for example, it was revealed that Facebook is no longer going to
prohibit false factual claims in paid political advertisements -- even if
those falsehoods are identified as false by generally respected fact checkers. See,
e.g., [https://popular.info/p/facebook-says-trump-can-lie-in-
his](https://popular.info/p/facebook-says-trump-can-lie-in-his) (and
associated commentary
[https://news.ycombinator.com/item?id=21152869](https://news.ycombinator.com/item?id=21152869)).

~~~
Meekro
The problem is, "respected fact checkers" are anything but. The HN discussion
you linked to actually mentions some interesting things that these fact
checkers omitted.

Factcheck.org claims: "there is no evidence that Hunter Biden was ever under
investigation or that his father pressured Ukraine to fire Shokin on his
behalf."

However, the fired Ukrainian prosecutor testified in court that "the truth is
that I was forced out because I was leading a wide-ranging corruption probe
into Burisma Holdings, a natural gas firm active in Ukraine and Joe Biden’s
son, Hunter Biden, was a member of the Board of Directors." [1]

Testimony taken under penalty of perjury is _evidence_, and factcheck.org was
wrong to claim that there is no evidence.

[1] [https://thehill.com/opinion/campaign/463307-solomon-these-
on...](https://thehill.com/opinion/campaign/463307-solomon-these-once-secret-
memos-cast-doubt-on-joe-bidens-ukraine-story)

~~~
otterley
I agree that there might be evidence, and Factcheck.org may have gotten that
wrong. But perfection isn't the bar we're aiming for, nor is it practically
attainable - "usually correct" is reasonable.

Moreover, let's not confuse testimonial evidence with fact. They are not
identical. In our judicial system, the jury is the finder of fact; they weigh
the evidence to make conclusions.

We also can't let a minor technical error in a tangential report invalidate
the rest of what otherwise is a lie -- specifically the primary bold claim in
the example advertisement that's at the heart of the article.

~~~
buboard
"usually correct" is what the general media is, and they all usually have an
agenda. Whenever they get it wrong, the process of commenting itself is a
fact-checking exercise that s usually better than the "fact checkers" (case in
point this thread). That nullifies the idea of factcheckers imho

I think the most important rule for social media is: do not make truth a
popularity contest.

------
anonymous6500
To me, it is a fallacy of big tech to misclassify the moderation problem as
just a typical ML problem. Hence the false belief that ML models, the standard
approaches they use for their other ML problems, and cheap annotators can
solve it.

What I think can be done:

1\. Don't just hire expensive PhDs and hope that algorithms can correctly
classify racism, etc. Hire an expensive product visionary who can build a
holistic approach to the moderation problem and knows what is solvable with ML
and what needs pivoting to human-guided resolution.

2\. Don't hire lots of cheap annotators in "rural India", but hire or train
fewer, more expensive experts and give them powerful tools that scale their
work to handle all suspicious traffic efficiently.

3\. Give power to "normal" users to flag inappropriate content or behavior, and
loop this into a well-thought-out workflow with your ML and your experts on the
other end (see the sketch after this list).

4\. Partition users and content so that bad actors and bad content get
clustered away and are less easily accessible for others.
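
Roughly, the workflow I have in mind for point 3 looks like this (the scoring
interface and thresholds are my own assumptions, not any company's actual
system):

    # Hypothetical human-in-the-loop triage for flagged content.
    AUTO_REMOVE = 0.95    # near-certain violations: act automatically
    EXPERT_REVIEW = 0.50  # ambiguous cases: route to a trained expert

    def triage(post, model):
        # 'model' is an assumed ML classifier exposing a probability score.
        score = model.violation_probability(post)
        if score >= AUTO_REMOVE:
            return "remove"        # ML handles the easy bulk
        if score >= EXPERT_REVIEW:
            return "expert_queue"  # experts handle the ambiguous tail
        return "keep"              # likely a spurious flag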

Why don't big tech companies do this? Besides the fallacy of thinking this is a
"typical" ML problem, moderation is hard. It's also hard to make a business
case to short-sighted bosses and prove a revenue increase or an expense cut.
Lastly, some big tech companies benefit a lot from abuse, so they can't change
immediately to stop it all.

~~~
2sk21
I'm reading the new book on the failings of pure ML-based AI by Gary Marcus
and Ernie Davis (Rebooting AI) that makes a similar point.

------
Mathnerd314
According to [https://www.theverge.com/2019/6/26/18744264/something-
awful-...](https://www.theverge.com/2019/6/26/18744264/something-awful-
youtube-moderation-rich-kyanka-lgbtq) the best mods are chosen by the
community they're moderating. You see this on Reddit. The mods mostly complain
about how limited the tools are; they certainly aren't using machine learning
written by PhDs. But Reddit ends up being fairly well-moderated; it's the
subreddits themselves that end up banned for various reasons.

~~~
sedatk
Reddit mods aren’t chosen by the community. Reddit isn’t even mentioned in the
article.

~~~
barry-cotter
Do you mean admins? Because mod teams are self-selecting once a subreddit is
up and running, and if you don’t like a subreddit you can always start your
own. People don’t vote for mods, but they participate in subreddits they like.

In Albert Hirschman’s typology it’s all exit; voice and loyalty are
irrelevant.

~~~
Karunamon
This ignores network effects. Significant community forks are pretty rare in
Reddit's history, and launching a new subreddit is not a trivial thing to do.
That goes double because you can expect mods to suppress all mention of the new
community, since they have absolutely no constraints on their conduct.

~~~
buboard
It happens, though. An example is r/libertarian vs r/goldandblack. The new
subreddits are smaller because they are usually populated with users who care,
and that's a good thing. It's not like they remain obscure, either.

~~~
spiralx
Wasn't /r/goldandblack a split from /r/anarchocapitalism, not /r/libertarian?

~~~
buboard
Yes, but it's now an alternative to both.

------
dmckeon
Moderating a forum of a few hundred recurring posters, who nominally all want
to participate in discussions of a few specified topics, is usually feasible.

Moderating open venues with no sense of commonality or community where anyone
can just show up and say anything is ... much less feasible.

------
CM30
There are really two issues here, both of which make it difficult for these
sites to moderate their 'communities':

1\. Moderation doesn't scale. It can't be automated (well) with an algorithm,
it can't be done well by a bunch of low-paid contractors with a handbook, and
it can't be done across a million-user platform by about three people.

Sites like Twitter, Facebook, YouTube etc tried to get round this by using the
above algorithms and contractors, and that's partly why they're so terrible at
it.

2\. Moderation is community specific, with each community having different
standards for what's acceptable. For instance, a site like 4chan or 8chan may
find nearly anything acceptable, whereas the official Disney or Lego site
might only want stuff that's appropriate for a young audience.

That's why forums work so well: the userbase and owners decide what's
acceptable there, and those outside of said group can join or not based on
their preferences. The same goes for Reddit to some degree, though that is
complicated by people being able to move between subreddits so easily and by
the subreddits all being hosted in the same place.

Unfortunately, many of these large sites don't have this. They have one
free-for-all open space where hundreds of different communities are forced to
interact with each other. They stick millions of people who despise each
other's views, have completely different ideas about what's acceptable, and
hate every ounce of the other side into one newsfeed/search/whatever.

That's a recipe for disaster. Putting those who fundamentally hate each other
in the same area for hours per day leads to a toxic atmosphere, and creates a
situation that's almost impossible to police well. It's why school and prison
are so bad; the kids and prisoners just have too many fundamentally different,
conflicting worldviews to get along.

The only way to fix this is to:

1\. Accept that communities should be moderated by their members, not robots or
contractors.

2\. Split the community into smaller ones where people can set their own
boundaries for acceptable behaviour.

3\. Try to stop intercommunity trolling as best as possible.

------
PeterStuer
And yet US companies feel it appropriate to judge the rest of the world's
conversations against the backdrop of select US sensitivities and blind
spots.

------
fouc
Moderation shouldn't be hired out. The community should provide the
moderation.

------
h2odragon
My "truth" is not your "truth" is not someone else's "truth". None of us is
wise enough to tell someone else that their truth is wrong, so expecting
someone to write software that is would be asking a bit much.

Filtering out opinions the majority doesn't want to pay to hear is simpler and
more lucrative anyway.

~~~
klyrs
There _are_ objective truths. It doesn't matter if you think the earth is
flat. It's not. And I really wish this were a contrived example.

~~~
h2odragon
The person who believes (or says they do) that the earth is flat has every
right to do so. I agree with you: they're objectively wrong. I find it
arrogant to say they have no right to speak because of that, and I will not.

~~~
ggggtez
Ok, but now replace flat earthers with people advocating and organizing
meetings for people interested in joining ISIS.

Why do we want a world where platforms protect videos promoting and glorifying
beheadings, exactly?

~~~
throwaway_bad
> protect videos promoting and glorifying beheadings

To play devil's advocate, "objectively" these beheadings are still happening
somewhere in the world. It's an undeniable truth even though you don't want to
see the evidence for it. For a lot of people this is their flat earth. No
evidence means it's not happening.

Why not allow the content to be seen, let the world be horrified that there
are people driven to become like this, and try to understand what led them to
do those things?

------
lacker
The title is silly, because no companies have unbounded resources. If they
really did have unbounded resources, they probably _would_ have better
moderation. I have plenty of ideas that would work if I could only hire
infinity software engineers to implement them.

~~~
moultano
Consider the audience: people who primarily experience this moderation and
have no idea what these sorts of things actually cost. For them, the resources
of a dominant social media company seem arbitrarily large. (Originally I
titled it "huge resources", but it looks like HN has some filter that strips
qualifiers like that and it looked silly as "companies with resources")

------
blue_devil
I'm becoming increasingly convinced digital information has inflicted on us
the curse of not being able to forget. The online accumulation makes our
short-term memory eternal, and that is overwhelming us individually and
collectively.

------
KoftaBob
Because it's not a high enough priority for them. As long as it's not
significantly affecting their profits, they'll do the bare minimum of
moderation. The same idea goes for the weak cybersecurity companies seem to
have: when it comes to their IP, they'll spend the resources to protect it,
but when it comes to their customers' info leaking, they don't care.

------
WalterBright
It's like the difference between porn and art. You know it when you see it,
but writing a rule for it is impossible.

~~~
wvenable
You might not even know it if you see it! It might be dog whistles that you
don't even understand until they're pointed out to you.

~~~
WalterBright
Check out old Mae West movies where she adheres to the letter of the Hayes
Code yet always finds a saucy way to get her point across. A couple classics:

Reporter: "What do you think of foreign affairs?" West: "I'm all for 'em!"

"Is that a gun in your pocket or are you just happy to see me?"

Language is infinitely malleable. Algorithms don't stand a chance.

~~~
Gibbon1
> Language is infinitely malleable

I mentioned this before, but my GF worked on an MMO for tweens. She said one
experiment with filters ended with a 12-year-old boy writing, "I'm going to
stick my giraffe up your pink bunny."
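
A toy sketch of why such filters lose (the blocklist here is hypothetical;
real lists are longer but share the flaw):

    # Naive word-blocklist filter of the kind a tween MMO might try.
    BLOCKLIST = {"kill", "stab", "shoot"}  # hypothetical banned words

    def passes_filter(message):
        return not any(word in BLOCKLIST for word in message.lower().split())

    # Every individual word is innocent, so the threat sails straight through:
    print(passes_filter("I'm going to stick my giraffe up your pink bunny"))  # True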

------
Symmetry
I'm not sure that distinguishing white supremacist symbols from the Circle
Game[1] is even possible in theory just from a single post.

[1][https://www.dictionary.com/e/slang/circle-
game/](https://www.dictionary.com/e/slang/circle-game/)

------
reader5000
Reading this made me realize that the internet needs to be decentralized as
soon as possible.

------
scarejunba
Because their resources are not unbounded. On the level of moderation, they
are poor.

------
kemitchell
Because they insist on making step one the adoption of their fanciest model,
on the off chance they can reframe the whole project of content moderation as
an opportunity to refine marketable, proprietary tech?

------
wslh
Simple answer: because moderation doesn't move the business needle.

~~~
reilly3000
I would argue it absolutely does. When NYC started "moderating" Times Square a
lot more people started coming and spending money. People pay money for all
kinds of blocking, scanning, and auditing of email content, web content, etc.
I expect moderation, and when it's not done well, I don't come back.

------
rodrigo975
This article is purely theoretical, and I don't think they do this kind of
brainstorming about how to train the AI. It's been proven that they just take
a bunch of poorly paid people to deal with real messages and try to moderate
them, sometimes with a poor understanding of the context, and use that as
input to teach the AI how to do "proper" moderation.

That's how Figure Eight makes its money:

[https://cacm.acm.org/magazines/2013/8/166313-software-
aims-t...](https://cacm.acm.org/magazines/2013/8/166313-software-aims-to-
ensure-fairness-in-crowdsourcing-projects/abstract)

~~~
icebraining
> It's been proven that they just take a bunch of poorly paid people to deal
> with real messages and try to moderate them, sometimes with a poor
> understanding of the context, and use that as input to teach the AI how to
> do "proper" moderation.

That's exactly what the article says: "you’d write up some guidelines (...)
and then contract with some external company to have human beings read those
guidelines and rate lots of examples that you send them" and then it mentions
people from places with low salaries, like rural India and the Philippines.

And then, in the possible solutions, it mentions hiring people well steeped
in the specific cultural context of those you're trying to moderate.

This is where I think the article undermines itself, because if paying for a
different (and probably more expensive) set of people is a possible solution,
then the question that was supposed to be answered remains: "why do companies
with unbounded resources not do that?"

~~~
moultano
While hiring those people might be more expensive, I expect it isn't much more.
Annotators willing to read a set of guidelines and perform rating tasks long
term are generally not cheap. I suspect that for something like twitter,
identifying people most suited to moderate a particular piece of content is
itself a very difficult technical and social problem. So even assuming you can
figure out how to find these people and hire them, there's still a long path
from there to improving moderation.

------
ggm
because Free.

If we hadn't invented free, we'd have moderation, because being repeatedly
abusive and trolling for money is, in the end, self-defeating.

If you want moderation enough, get rid of free.

~~~
swagasaurus-rex
If it is not free, it is a walled garden.

Maybe the optimism people had about the internet being an all-inclusive
community was ill-fated.

Exclusivity may be the next fad, driven by the trolling of a few in an open
community.

------
trabant00
Because it would require omniscient AI for a task that even humans fail to do
perfectly?

Does anybody really expect automated moderation to work even remotely decently?

------
KaiserPro
I think there are a few ways to better deploy moderation.

With Twitter and its shouty kin, that means splitting the userbase into
"normies" and "broadcasters" (the top 10% with the most views/reach).

Broadcasters have a disproportionate effect in terms of reach and tone.
Spending time (and people) on closely moderating them in a very visible and
transparent way will set the tone for everyone.
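
A sketch of that split, assuming a simple per-user reach metric (the 10%
figure is from above; everything else is invented for illustration):

    # Hypothetically split users into "broadcasters" (top 10% by reach)
    # and "normies", so close moderation can focus on the former.
    def split_userbase(users):
        """users: list of (name, reach) pairs."""
        ranked = sorted(users, key=lambda u: u[1], reverse=True)
        cutoff = max(1, len(ranked) // 10)  # top 10% by views/reach
        return ranked[:cutoff], ranked[cutoff:]  # (broadcasters, normies)

    broadcasters, normies = split_userbase(
        [("alice", 2_000_000), ("bob", 300), ("carol", 1_200)])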

Twitter is hamstrung because its most famous user (Trump) breaks its
guidelines monthly (I mean, he literally threatened to bomb Iran; if that's
not violent and intimidating behaviour, I don't know what is).

Also, controversy drives engagement, which drives revenue. So I would say that
it's a feature of Twitter, not a bug.

~~~
quotemstr
> he literally threatened to bomb Iran,

When people talk about keeping violence off social networks, they mean keeping
_personal_ violent threats off social networks. Whether to make war or peace
is a political question, one that's perfectly fair to discuss in public. You
don't get to impose pacifist foreign policy on the president by conflating
random threats of personal violence with the legitimate business of
statecraft.

~~~
KaiserPro
You are interpreting this as a political statement.

Threatening to bomb a country is threatening violence, which is against the
terms of service: [https://help.twitter.com/en/rules-and-policies/twitter-
rules](https://help.twitter.com/en/rules-and-policies/twitter-rules) to quote:

> you may not threaten violence against an individual or a group of people

Like the law, it either applies equally to everyone, or it does not apply.

> You don't get to impose pacifist foreign policy on the president by
> conflating random threats of personal violence with the legitimate business
> of statecraft.

Again, you are reading way more into this. Either the Terms of Service apply
to everyone, or they do not apply.

According to those terms of service, there is no difference between me
threatening to kill a group of people and Trump doing the same.

As I said, transparency and consistency are the only way it can be implemented
effectively.

~~~
quotemstr
Bombing another country is a legitimate act of the state. It's no more an
endorsement of "violence" than wishing for the arrest of a notorious serial
killer is an act of "kidnapping". Conflating legitimate acts of the state with
extralegal personal criminality _is_ a political statement.

If a site's terms of service prohibit endorsing legitimate acts of the state,
that TOS is a political statement.

~~~
KaiserPro
> Bombing another country is a legitimate act of the state

Yes, it can be.

> It's no more an endorsement of "violence"

War _is_ violence.

> conflating legitimate acts of the state with extralegal personal criminality

Well, you're saying that I treat the president's tweets as his personal
threats. That's orthogonal to whether they're extralegal. It is also
irrelevant, for the reasons I'm about to lay out:

The Terms of Service are the basis of a contract for using the site[1]. Unless
there is a specific clause for "legitimate acts of state", it's a binary
choice. Choosing not to enforce them equally is a political choice.

Look, it's perfectly possible to disagree with a person but insist that
legal/procedural process be adhered to. The president is a person like any
other, and subject to the same legal/procedural process as the rest of us.

[1] well usually.

------
antipropaganda
What also makes moderation hard is that it can be abused. For example, anyone
who is influential and persuasive enough can be labelled a Nazi by political
opponents, and targeted campaigns can be run to get them deplatformed.

~~~
ptah
It is pretty clear from someone's actual postings whether or not they are
anti-immigrant and/or a white supremacist.

------
patientplatypus
Because moderation doesn't scale.

Case in point: YouTube. They changed their demonetization algo in response to
complaints, and now they have complaints from other users that the algo is
producing false positives. For example, the "Great War" history podcast had
hundreds of YouTube videos demonetized because they talk about storm troopers
and Nazis, which of course isn't great if you _are_ a Nazi, but maybe OK if
you're a history podcast.

The issue is that each video individually generates so little money for
YouTube that it just doesn't pay to have a person go through each and every
video on the platform and make a value judgement on its worth. I mean, anyone
can upload a video for free, and each watch of a video earns YouTube at most
maybe a few cents in profit from an advertiser.

Facebook and all other content-aggregation web forums face the same conundrum.
And the problem doesn't get better if we invent new algos or (somehow) deploy
people more efficiently. On one hand you have an algo that determines what can
and cannot be said en masse, making its decisions in an increasingly black-box
fashion (as the heuristics will by necessity be rather complicated); on the
other you have the very poor filtering videos or, at the very least, fallible
human judgement still at the wheel of what gets to be said and not said.

Who watches the watchers? At the moment we have a handful of powerful media
conglomerates that effectively control, through monopsony, almost all the
social media on the internet. What, we've got Alibaba, Facebook, YouTube,
Reddit, and Instagram; that covers, at a rough guess, 60-70% of social media
traffic?
But these are companies that are only successful at scale - you go on a social
media platform because all your friends are on it (a la the myspace model) and
you are given a "free" user experience because aggregating millions of people
allows for a few cents in ad revenue per user to subsidize the servers at
scale.

This problem is probably baked into the cake of this particular pattern and
isn't fixable. Boy is it going to be wild when we finally crack how to make
deep fake videos!

------
zebby
You don't need to moderate your social media to death. Intensive moderation is
used to control what your users are talking about. Just let the users talk and
say whatever they want. I'm tired of these nerds getting people banned when
they say something mean online.

------
Erudite_Genius
The title of this article literally reads "Why do those with unlimited power
not take steps to limit their power?".

Also, name ONE company with INFINITE resources.

~~~
colechristensen
Are you being sarcastic or do you think the authors and publishers were
actually saying that omnipotent beings exist?

