
YouTube’s LGBTQ Problem Is a Business-Model Problem - tareqak
https://onezero.medium.com/youtubes-lgbtq-problem-is-a-business-model-problem-2b3cb3754b64
======
pdkl95

      s/(algorithmic|human) moderation/a rule of law/g
    

This mess will only get worse as long as Google insists on trying to fix
unsolved social conflicts unilaterally. Google made themselves the primary
target when they started to make decisions with large social consequences by
fiat. They _de facto_ implemented _rule of man_ [1].

The solution is to _not_ be the sole entity with the power to make decisions
about unsolved social issues. Relinquish the power to make decisions about
what kind of content is "acceptable" by implementing a _rule of law_ managed
by some kind of publicly accountable, transparent governance.

Google (and anybody else that controls something the public relies upon as
infrastructure) has a choice: keep the power to make socially-important
decisions and accept the blame and responsibility for those decisions, or
abandon that power to make the whole mess someone else's problem.

[1]
[https://en.wikipedia.org/wiki/Rule_of_man](https://en.wikipedia.org/wiki/Rule_of_man)

~~~
mushufasa
All countries already have laws regulating speech to various degrees of
freedom.

Google's need to comply with these laws is part of the genesis for the
moderation (e.g. copyright protection, child pornography).

In fact, aggressive government laws for censorship are why Google chooses not
to operate in China.

The topic at hand is about monetization of ads -- the "acceptability" of the
content is determined by the ad buyers. Not some idea of social good -- it's
just the market.

The problem here is about execution, not policy. The plaintiffs argue Google's
algorithms are not working as intended, not that their intentions are
misguided.

Are you suggesting that Google should create additional authority to regulate
itself beyond what is legally required and needed for their business? E.g.
create some expert panel that can compel Google to censor certain content?
That would create more problems than it would solve.

~~~
seanmcdirmid
Google was operating in China just fine even with censorship; they left
because China started hacking Gmail accounts. Some countries have much more
rule of law than others.

~~~
cromwellian
I think that's an oversimplification of the history. After the email hacking,
Google decided to stop censoring in retaliation, and redirected traffic to HK.
Eventually, they were blocked, and a few years later practically all Google
services were blocked. It's not like Gmail was "pulled" from China; the GFW
blocks it.

Sooner or later, even if Search was still being censored, the other services
would probably be blocked, or have unacceptable conditions imposed on them.
The amount of effort to make YouTube and Gmail bend to PRC desires would, I
think, have been a bridge too far, especially once Xi Jinping took office.

I mean, I've seen super mild videos on Bilibili get removed, say where a
Mainland athlete or contestant loses to a Taiwanese one. YouTube would be
overloaded quite quickly by 50 Cent Army takedown requests.

~~~
seanmcdirmid
YouTube was already blocked when Google left China (that is, it was never
available in China), Gmail was never hosted in China, and the hacking occurred
on servers in Hong Kong.

There is some pretty crazy stuff on Chinese video sharing platforms;
censorship in China is arbitrary and selective. That alone would make it very
difficult for a Western company to operate a media company in China.

------
RcouF1uZ4gsC
One of the big problems with machine learning algorithms at this time is that
they have a hard time figuring out intent. A lot of the terms used by the
LGBTQ community have been used by homophobic people as slurs. Yes, it is great
that the community is reclaiming those words, but current ML tech has no way
of distinguishing when those terms are being used by the community in a
respectful manner from when they are used by homophobic people to promote
hate.

This is also the case with African Americans, and there is a similar problem
of machine learning flagging African American content as hate speech.
[https://www.vox.com/recode/2019/8/15/20806384/social-
media-h...](https://www.vox.com/recode/2019/8/15/20806384/social-media-hate-
speech-bias-black-african-american-facebook-twitter)

The problem may be that we cannot depend on machine learning algorithms to do
an acceptable job of moderation, and that a lot more humans need to be hired
as moderators. However, doing that really changes the economics of the platform.

~~~
mushufasa
What about crowdsourcing efforts? Why are user flags insufficient?

Why can't there be a balance between full automation and human review, such as
an appeal system or 'whitelist' program? Are the economics really the
obstacle, or is it more of a corporate stance towards not adjudicating
subjective controversies to avoid liability under the Communications Decency
Act Section 230?

~~~
RcouF1uZ4gsC
>What about crowdsourcing efforts? Why are user flags insufficient?

Because this will devolve into different groups flagging their opponents'
videos. This will especially hurt the voices of minorities and other
vulnerable populations.

~~~
mushufasa
What about attaching a 'reputation' score to the flagger? E.g. something like:

\- if there are sufficient flags, trigger a moderator review

\- flaggers who correctly flagged content get +1 score or -1 score if not

That would make it harder for people to consistently flag content maliciously.
It's like age-old moderation systems such as calling 911 or the police: if you
abuse calling 911, you can get fined.
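
Concretely, a rough sketch of that loop (the threshold, weights, and names are
made up for illustration, not anything any platform actually uses):

    REVIEW_THRESHOLD = 5.0  # total weighted flag score that triggers a human review

    class Flagger:
        def __init__(self):
            self.reputation = 1.0  # new flaggers start at a neutral weight

        def weight(self):
            # clamp so no single trusted user can unilaterally bury a video
            return max(0.1, min(self.reputation, 3.0))

    def human_moderator_review(video):
        # stub: a real system would queue the video for a human decision
        return True

    def handle_flags(video, flaggers):
        total = sum(f.weight() for f in flaggers)
        if total < REVIEW_THRESHOLD:
            return  # not enough credible flags yet
        upheld = human_moderator_review(video)
        for f in flaggers:
            # correct flags build reputation; incorrect or malicious ones erode it
            f.reputation += 1.0 if upheld else -1.0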

I imagine this has been tried before, and am genuinely curious why this is
worse than flawed ML.

~~~
vintermann
People have been asking such questions since at least 1997. Slashdot had the
(imo, great) idea of rotating various voting privileges randomly. Raph Levien's
Advogato used a network of trusted users and certification which was designed
to be resistant against attack. Kuro5hin also had some novel ideas about what
was needed to create robust high quality discussion (which turned out to be an
obvious failure, but potentially something one could learn from).

But at some point, sites decided that experimenting with stuff like that was
unimportant compared to just bringing in more users. Letting everyone vote all
the time obviously has huge issues, but it's emotionally satisfying to
downvote, so it keeps users engaged.

The people most prone to dominate the site by voting on everything and
coordinating efforts are also the ones who will complain most loudly if they
don't dominate the discussion. So just give them what they want.

It worked. Experimenting with self-moderation mechanisms went extremely out of
fashion.

~~~
pixl97
One thing I dislike about most current voting/moderation systems is that they
are one-dimensional: you get up or down (or, in the case of FB, only a single
upvote dimension).

I'd love a system where you could get 2 vote choices per post and then have 8
or more vote options.

Like/dislike (or agree/disagree), off topic/on topic, factually
correct/factually incorrect, funny/hateful, user is a prick/user is a swell
guy, or whatever seems like a good option for the discussion forum.

I mean, something can be funny, but off topic at the same time. Or something
can be factually correct but the person making the post can be a real jerk
about it.
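
Roughly what I'm picturing, as a sketch (the axes come from the list above;
the two-votes-per-post cap and everything else is just illustrative):

    from collections import Counter

    AXES = {
        "like", "dislike",
        "on_topic", "off_topic",
        "factually_correct", "factually_incorrect",
        "funny", "hateful",
    }
    MAX_VOTES_PER_POST = 2

    class Post:
        def __init__(self):
            self.tallies = Counter()  # vote count per axis
            self.voters = {}          # user_id -> set of axes already used

        def vote(self, user_id, axis):
            if axis not in AXES:
                raise ValueError("unknown axis: " + axis)
            used = self.voters.setdefault(user_id, set())
            if axis in used or len(used) >= MAX_VOTES_PER_POST:
                return False  # duplicate vote, or out of votes on this post
            used.add(axis)
            self.tallies[axis] += 1
            return True

    # A post can end up tallied as both "funny" and "off_topic" at once,
    # which a single up/down score can't express.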

~~~
Faark
Valve added a "Funny" button to Steam reviews, since many top reviews were
great jokes but had little of substance in them. I feel like it helped at
least somewhat.

I also have a hard time believing the GP that there is no experimentation
anymore. It's probably a lot more subtle, but you can do a lot of stuff behind
the scenes without giving feedback to the user. I'd be shocked if there is not
at least some form of shadowbanning of bad actors.

------
Mirioron
The thing is that a lot of us saw this coming. Whenever people complain about
how YouTube needs to remove X type of content, they also need to realize that
if YouTube actually starts policing that type of content then there will be
false positives. Since YouTube will do it algorithmically, the chance that
there will be collateral damage is much higher and YouTube takes a long time
to reverse decisions they've already made.

For example, YouTube doesn't want violent content. Just having the word "kill"
in your title can get the content demonetized, even if the content is about a
video game and properly categorized as such. It's been this way for years.
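
Roughly, the failure mode looks like this (the word list and function are made
up for illustration, not YouTube's actual system):

    VIOLENT_KEYWORDS = {"kill", "murder"}

    def should_demonetize(title, category):
        # naive rule: any flagged keyword in the title triggers demonetization,
        # regardless of the video's category or context
        return any(word in title.lower().split() for word in VIOLENT_KEYWORDS)

    # Flags a properly categorized gaming video the same as real violent content:
    # should_demonetize("How to kill the final boss", "Gaming")  -> True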

~~~
jakelazaroff
The point is that they _shouldn't_ do it algorithmically. They need to have
guidelines for what type of content is acceptable, and hire people to enforce
them.

~~~
Mirioron
Then the costs of YouTube probably balloon so high that it doesn't matter for
creators whether their videos are demonetized or not, because they aren't
going to earn anything from them.

~~~
Fomite
We may be hitting the point where societal impact is great enough that people
don't necessarily accept "We can't do that at scale" as an out.

~~~
goatinaboat
Indeed. Given the vast wealth Google etc. have accumulated, it's clear that
they actually _could_ afford to do it and still make good, but not astounding,
profits.

The rationale “we can’t make as much money as we’d like to if we followed the
law” just won’t wash.

~~~
Mirioron
The platform economics change in that case. Right now, _anybody_ can upload
content for free and it'll appear very quickly on the channel. If you increase
the cost per video by mandating human oversight on videos, then YouTube will
probably remove the ability for (most) people to upload videos for free.

I don't think that that's preferable for society.

~~~
goatinaboat
Is what we have now preferable?

Imagine a YouTube that cost $10 to upload a video and $1 per comment. The
quality of the content would be much higher and the hate speech would drop to
zero. Wouldn’t that be preferable for society?

~~~
dsjoerg
Global median per-capita daily income is $8. So I have to spend my whole day's
income to upload a video?

You're effectively restricting speech to the rich, and the poor just have to
sit there and listen. That would be sad and unnecessary.

~~~
goatinaboat
You can afford a camera, a laptop and a high speed connection, but not a token
amount to cover moderating your upload? I don’t imagine this being a common
scenario.

~~~
Mirioron
You can make videos on a computer that you can get for a few hundred dollars.
That means you could make 20-30 videos for the same amount of money your
equipment cost. That _is_ a prohibitive scenario.

I create content for YouTube, and if I had to pay $10 per video then I would
never make another YouTube video again. It doesn't matter that there are
thousands of viewers (and sometimes tens of thousands or more), because I
simply couldn't afford to spend time and effort to make a video and then spend
extra money to share it.

You could make the argument that perhaps society is better off if I don't
create content, but there are at least some people that claim they enjoy it.

------
fareesh
What if monetized videos can't be viewed without spending "ad bucks"?

To earn "ad bucks", you have to watch an ad. The ad watching takes place in a
dedicated ads section of the site, completely independent of all videos.

The vast majority of monetized videos would cost 1 ad buck, so watch one ad.
The ad watching is just a giant "watch ad" button. The algorithm goes through
all the info about me and shows me a relevant ad, just like it does
already.

If you want to opt out of ad bucks, you can just subscribe to that channel and
pay some money. YouTube takes some, the creator gets the rest.
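
As a rough sketch of the bookkeeping (the names and the one-ad-buck price are
just illustrative):

    class AdBucksAccount:
        def __init__(self):
            self.balance = 0

        def watch_ad(self):
            # watching one ad in the dedicated ads section earns one ad buck
            self.balance += 1

        def watch_video(self, price=1, subscribed=False):
            # channel subscribers bypass ad bucks entirely
            if subscribed:
                return True
            if self.balance < price:
                return False  # send the viewer to the "watch ad" button instead
            self.balance -= price
            return True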

Why not just use this model instead of this politically driven sham that
currently exists?

I'm the viewer, I'm the product. My attention has value - so why not let me
worry about what videos I want to support and leave the corporations out of
it? YouTube is nothing without the viewer.

If YouTube is a platform and not a publisher, then they should act like the
phone company. If I want to call some psychic hotline, just connect the call
and take your cut.

~~~
gnicholas
Interesting idea. I think people would turn on ad mode and then go to the
other room or switch to a different window. I definitely did this when Hulu (I
think) offered the option to watch one long commercial instead of several
short ones spread throughout the program.

~~~
fareesh
Users can already just swap to another window or block the ad outright.

In my view this should eliminate the ad blocking group entirely.

------
lukaszkups
> Specifically, the lawsuit accuses YouTube of filtering, demonetizing, and
> otherwise limiting videos that deal with LGBTQ identities, making it hard
> for their creators to reach a wide audience and make money. The suit alleges
> violations of free-speech protections and civil rights, among other
> statutes, and seeks class-action status.

But YouTube isn't a public institution - it's a private platform - does that
still apply even if they put e.g. banned topics/words in their terms and
conditions document? (I'm just curious)

~~~
icebraining
They're making the case that YouTube is a "public forum", which would limit
the owner's ability to regulate speech in it. I don't think that claim has
much of a leg to stand on, considering the recent Supreme Court decision:
[https://www.techdirt.com/articles/20190617/16001942415/supre...](https://www.techdirt.com/articles/20190617/16001942415/supreme-
court-signals-loud-clear-that-social-media-sites-are-not-public-forums-that-
have-to-allow-all-speech.shtml) and a previous decision by a lower court
against YouTube itself:
[https://www.techdirt.com/articles/20180327/14362539515/court...](https://www.techdirt.com/articles/20180327/14362539515/court-
tosses-dennis-pragers-silly-lawsuit-against-youtube-refuses-his-request-
preliminary-injunction.shtml)

------
cromwellian
There's an assumption that if you use human raters, you won't have false
negatives or positives, but even human beings disagree over how to categorize
speech: what's homophobic, what's legitimate sarcasm or irony, what is violent
content.

In fact, on top of human raters exhibiting classification problems in the face
of ambiguity, there is also the issue of bias. Radiolab on NPR had a great
series on Facebook's human censors that shows it just opens up a different
Pandora's box.

There is no clear, non-messy way to do censorship that doesn't end up with a
sizable number of people mad, and with some subset of people hit by accidental
or deliberate false positives.

I'm afraid there's no easy answer, and it will inherently be a constant
battle between sides.

~~~
ptah
possibly, they should have LGBTQ humans do the rating on what is homophobic
etc

~~~
paulnechifor
Sounds like a conflict of interest. If someone gets sued for rape, the jury
doesn't get filled with rape victims.

~~~
ptah
maybe the jury should have some rape victims to make it non-biased

------
coolblah
This problem wouldn't exist if there was no "moderation" (which is censorship)
at all.

~~~
greglindahl
How do you propose to pick the next autoplay video?

How do you propose making advertisers happy?

Easy to cry censorship; hard to address real business issues.

~~~
bjt2n3904
Easy as all getout.

1\. Don't have "autoplay". Make the user choose. Autoplay is not good for
society. "Tip your head back, turn your brain off, and let the algorithm feed
you what will keep you here."

2\. Give the advertisers more fine-grained controls on what types of videos
their ads show on. If a stink goes up because Dove doesn't want to be on a
PewDiePie video, then that isn't Google's fault. It's Dove's. Let the
advertiser add PDP to the blacklist, or make a more exclusive whitelist.
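
Something as simple as this per-advertiser policy check would do (the names
and fields are illustrative, not any actual Google API):

    from dataclasses import dataclass, field

    @dataclass
    class AdvertiserPolicy:
        blacklist: set = field(default_factory=set)  # channels to never run ads on
        whitelist: set = field(default_factory=set)  # if non-empty, run ONLY here

        def allows(self, channel):
            if channel in self.blacklist:
                return False
            return not self.whitelist or channel in self.whitelist

    # e.g. AdvertiserPolicy(blacklist={"PewDiePie"}).allows("PewDiePie") -> False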

It's like Twitter. "Oh, oh no... how do we stop people seeing all this awful
content?"

A) Stop deciding for the user (and therefore the entire website) what is awful

B) Give the user better blocking controls.

However, this won't happen, because Google and Twitter like being able to
shape what the "algorithm" shows you, and keeping that control in their hands.
This is why the chronological timeline isn't coming back.

~~~
greglindahl
Totally cool that it's easy for you! Except that YouTube doesn't want to do 1,
and for 2, you have to classify the videos to let the advertisers choose,
which is exactly the same problem that YouTube already has.

------
icebraining
Finally found the actual court submission:
[https://www.docdroid.net/g7RXXi1/youtube-lgbtq-
lawsuit.pdf](https://www.docdroid.net/g7RXXi1/youtube-lgbtq-lawsuit.pdf)

------
WalterBright
I remember when it was just seven words you cannot say. Now, it's hundreds.

[https://news.slashdot.org/story/19/08/15/2058257/advertisers...](https://news.slashdot.org/story/19/08/15/2058257/advertisers-
are-blacklisting-news-stories-that-contain-forbidden-words)

------
wvh
This is an example of how censorship can boomerang back into the face of those
it tries or claims to protect. I think policing words and topics will have a
negative impact in the long run for most situations. Simple AI (or even human
censorship) reacts to words and not intent, and ends up actually suppressing
the minority.

------
rdtwo
The truth is that ad buyers don't want to be associated with any controversial
content, and a majority of Americans still feel uncomfortable about gay and
lesbian issues and even more uncomfortable about transgender issues. It's
smart for advertisers to exclude LGBT content from their ads if they don't
think that's a target demographic.

------
Animats
Arguing that there's a right to get ad revenue from your videos is a big
stretch. Has that ever come up before in the history of free speech? Classic
issues are more like "can we make the newspaper run our ad if we pay the
regular rates?"

Google recently won a lawsuit which complained YouTube was putting
conservative content in "restricted" mode.[1] Others complain Google is not
doing enough to avoid shoving too many extreme videos at people.[2]

[1] [https://www.reuters.com/article/us-alphabet-google-
youtube-c...](https://www.reuters.com/article/us-alphabet-google-youtube-
censorship/google-defeats-lawsuit-claiming-youtube-censors-conservatives-
idUSKBN1H320D)

[2] [https://www.nytimes.com/2019/08/11/world/americas/youtube-
br...](https://www.nytimes.com/2019/08/11/world/americas/youtube-brazil.html)

~~~
ppseafield
It's not just about "not getting ad revenue". It's about accounts getting
suspended, homophobic ads running on pro-lgbt videos, and a total lack of
enforcement on their harassment policies.

[https://twitter.com/gaywonk/status/1134264395717103617?lang=...](https://twitter.com/gaywonk/status/1134264395717103617?lang=en)

~~~
CapricornNoble
"homophobic ads running on pro-lgbt videos"

There are homophobic ads on YT? What does that even look like? Can you link to
an example?

~~~
rdtwo
So an anti-ads, everyone-loses type of deal? I guess I'd like to know if a
company hates me so I can never shop there, so bring it on.

------
buboard
Paywall

------
tareqak
Here is a graph of how this thread trended:
[http://hnrankings.info/20709563/](http://hnrankings.info/20709563/) .

~~~
username90
Why would it do that? Did some moderator intervene?

~~~
makomk
Most likely a moderator intervened to stop it being algorithmically kicked off
the front page - we're well over the threshold where the flame war detector
normally kicks in and downranks submissions (roughly when it has more replies
than upvotes).
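
The exact rule isn't public, but roughly, as a sketch:

    def looks_like_flamewar(points, comment_count):
        # downrank a story once discussion volume outpaces upvotes
        return comment_count > points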

------
yektaw
No, no it’s not.

------
plutonorm
The root cause of this is a mismatch between capitalism and a well-functioning,
fair society. Again. The earth is dying, the population is polarised, and our
social systems are failing to do their job. Something has to change. There must
be another model out there that doesn't devolve into horror, as both capitalism
and communism seem destined to do. We need to revisit the '60s, only this time
we need an actionable plan and the will to follow through, or our children's
children aren't going to make it. Don't balk at this; our structural problems
are _existential_.

------
tu7001
Is it a joke? In Poland, for example, YouTube wholesale blocks channels that
are critical of LGBTQUERTY. The question is why they are able to censor
content. I don't expect my gas supplier to turn off my gas if he thinks I use
it to kill people; it's not his business, it's a matter for law enforcement,
the police, etc.

~~~
ChickeNES
> Is it a joke?

No?

> In Poland, for example, YouTube wholesale blocks channels that are critical
> of LGBTQUERTY.

YouTube is US-based, and hate speech is looked down upon in the US. Don't like
it? Find a service not based in the US.

> The question is why they are able to censor content.

It's a private website, they can do whatever they want. Again, find another
site if you want hate content.

> I don't expect my gas supplier to turn off my gas if he thinks I use it to
> kill people; it's not his business, it's a matter for law enforcement, the
> police, etc.

???

A) if they know you're using it to kill people they would be an accomplice in
the eyes of the law

B) they would have every right to refuse service

