
An ex-YouTube whistleblower on their recommendation engine - seapunk
https://threader.app/thread/1094359564559044610
======
whiddershins
We’ve all completely lost perspective.

The engine wants people to spend as much time as possible on YouTube.

Netflix’s engine wants the same, even though Netflix isn’t even ad based!

Facebook wants the same.

Ad infinitum.

This is just addiction peddling. Nothing more. I think we have no idea how
much damage this is doing to us. It’s as if someone invented cocaine for the
first time and we have no social norms or legal framework to confront it.

One thing you can never get back is time. It runs one way and as of this
writing, we all have a finite amount of it.

Conspiracy videos are the least of the problem.

~~~
bad_user
I’m hooked on HN even though the conversations here don’t give me any pleasure
and I’m wasting valuable time.

I’m clearly addicted, but does HN have any sort of algorithm designed to
maximize my time on it?

I’m sure these companies do everything in their power to make their services
more addictive, however we have to consider that this is just looking for a
scapegoat.

~~~
kibwen
Just so you're aware, HN has configurable settings to help force you off the
site after a certain amount of time:

 _"Like email, social news sites can be dangerously addictive. So the latest
version of Hacker News has a feature to let you limit your use of the site.
There are three new fields in your profile, noprocrast, maxvisit, and minaway.
(You can edit your profile by clicking on your username.) Noprocrast is turned
off by default. If you turn it on by setting it to "yes," you'll only be
allowed to visit the site for maxvisit minutes at a time, with gaps of minaway
minutes in between. The defaults are 20 and 180, which would let you view the
site for 20 minutes at a time, and then not allow you back in for 3 hours. You
can override noprocrast if you want, in which case your visit clock starts
over at zero."_
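A rough sketch of how that gating might work, in Python. The field names (noprocrast, maxvisit, minaway) come from the quote above; the timing logic here is a guess for illustration, not HN's actual implementation:

```python
import time

class ProcrastGate:
    """Hypothetical model of HN's noprocrast feature."""

    def __init__(self, noprocrast=True, maxvisit=20, minaway=180):
        self.noprocrast = noprocrast
        self.maxvisit = maxvisit * 60   # minutes -> seconds
        self.minaway = minaway * 60
        self.visit_start = None         # when the current visit began
        self.visit_end = None           # when the last visit window closed

    def allowed(self, now=None):
        if not self.noprocrast:
            return True
        now = now if now is not None else time.time()
        # Still inside the current visit window?
        if self.visit_start is not None and now - self.visit_start < self.maxvisit:
            return True
        # Visit window expired: record when it closed.
        if self.visit_start is not None:
            self.visit_end = self.visit_start + self.maxvisit
            self.visit_start = None
        # Require minaway of absence before starting a new visit.
        if self.visit_end is not None and now - self.visit_end < self.minaway:
            return False
        self.visit_start = now          # start a fresh visit
        return True
```

With the default 20/180 settings, a visitor gets 20 minutes of access, then is locked out for 3 hours.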

~~~
antonkm
Wow, HN is awesome. Activating now, thanks for letting me know.

------
Andre607
Does anyone else find these kinds of Twitter threads incredibly hard to
follow?

It's very difficult to contextualize and adapt to reading these short
incremental bursts of text or 'Twitter threads'. It always feels like
important context and exposition is missing. I seem to understand the gist,
that Youtube seems to have promoted flat earth videos disproportionately, but
the 'whistleblower' aspect is not immediately apparent.

~~~
dallashoxton
100% agreed.

>(If it decreases between 1B and 10B views on such content, and if we assume
one person falling for it each 100,000 views, it will prevent 10,000 to
100,000 "falls") 14/

What does this sentence even mean? It's very hard to parse to the point of
being incoherent.

The fact that people have gone from writing long-form essays to "twitter
threads" to argue something is tragic.

~~~
ec109685
It means that, given the number of views this content gets and how likely a
user is to fall for it, a significant number of people will not be tricked
into believing something untrue, thanks to this algorithm change.
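The tweet's back-of-envelope arithmetic, spelled out:

```python
# If the change removes between 1B and 10B views of conspiracy content,
# and roughly 1 viewer in 100,000 "falls" for what they watch, then the
# number of prevented falls scales linearly with the removed views.
views_removed = [1_000_000_000, 10_000_000_000]
fall_rate = 1 / 100_000

prevented = [int(v * fall_rate) for v in views_removed]
# prevented == [10000, 100000], i.e. 10,000 to 100,000 "falls"
```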

------
Laforet
Can confirm, I've had flat earth stuff recommended to me after watching normal
astronomy/rocketry videos (Vsauce, Scott Manley, etc), notwithstanding the
fact I am also subscribed to several skeptic channels. The AI clearly wasn't
able to distinguish context beyond a few keywords.

This just adds to the general uselessness of the recommendation feature - I've
seen that video five times; it's good, but could you offer me something else?

~~~
kachurovskiy
I'm discovering amazing content every day thanks to YouTube recommendations,
mostly in the diy and construction / technology categories. Haven't seen a
single conspiracy video yet.

~~~
Laforet
Welp, I am jelly. My recommendations get so repetitive that I had to use
reddit as a recommendation engine. r/mealtimevideos worked for a while, but
soon youtube started to recommend the same set of videos on its top page and I
haven't found an alternative yet.

Another gripe I have is with creators who run multiple channels (and don't go
out of their way to cross-promote): it is very easy not to realise this until
you manually check their upload page and find you missed a lot of their
content, all while the recommendation engine pushes the same couple of videos
I have already watched over and over again.

The only occasion things clicked for me was when I was researching Pitcairn
Island. For a few days there was a drip feed of obscure but highly relevant
videos that keyword search failed to pick up. One day the stream stopped
suddenly and has never returned.

~~~
kachurovskiy
Well, here are my favorite channels, maybe they can help shift your
recommendation engine into a new gear! :-D

[https://www.youtube.com/channel/UCtPhq1N-uNfUHiPDnwxqImg](https://www.youtube.com/channel/UCtPhq1N-uNfUHiPDnwxqImg)

[https://www.youtube.com/channel/UCcDzxOd6Q6OLvBYNH2q98EQ](https://www.youtube.com/channel/UCcDzxOd6Q6OLvBYNH2q98EQ)

[https://www.youtube.com/channel/UC5NO8MgTQKHAWXp6z8Xl7yQ](https://www.youtube.com/channel/UC5NO8MgTQKHAWXp6z8Xl7yQ)

[https://www.youtube.com/channel/UC6x7GwJxuoABSosgVXDYtTw](https://www.youtube.com/channel/UC6x7GwJxuoABSosgVXDYtTw)

[https://www.youtube.com/channel/UCWXEQsK3UiHszjwgGN5HUeQ](https://www.youtube.com/channel/UCWXEQsK3UiHszjwgGN5HUeQ)

[https://www.youtube.com/channel/UCworsKCR-Sx6R6-BnIjS2MA](https://www.youtube.com/channel/UCworsKCR-Sx6R6-BnIjS2MA)

[https://www.youtube.com/channel/UC9RM-iSvTu1uPJb8X5yp3EQ](https://www.youtube.com/channel/UC9RM-iSvTu1uPJb8X5yp3EQ)

[https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q](https://www.youtube.com/channel/UCsXVk37bltHxD1rDPwtNM8Q)

[https://www.youtube.com/channel/UCivA7_KLKWo43tFcCkFvydw](https://www.youtube.com/channel/UCivA7_KLKWo43tFcCkFvydw)

[https://www.youtube.com/channel/UC6mIxFTvXkWQVEHPsEdflzQ](https://www.youtube.com/channel/UC6mIxFTvXkWQVEHPsEdflzQ)

[https://www.youtube.com/channel/UCu37IhZOWNoPDFdyk98rc4g](https://www.youtube.com/channel/UCu37IhZOWNoPDFdyk98rc4g)

[https://www.youtube.com/channel/UCp0rRUsMDlJ1meYAQ6_37Dw](https://www.youtube.com/channel/UCp0rRUsMDlJ1meYAQ6_37Dw)

[https://www.youtube.com/channel/UC7IcJI8PUf5Z3zKxnZvTBog](https://www.youtube.com/channel/UC7IcJI8PUf5Z3zKxnZvTBog)

[https://www.youtube.com/channel/UCu6mSoMNzHQiBIOCkHUa2Aw](https://www.youtube.com/channel/UCu6mSoMNzHQiBIOCkHUa2Aw)

[https://www.youtube.com/channel/UC2C_jShtL725hvbm1arSV9w](https://www.youtube.com/channel/UC2C_jShtL725hvbm1arSV9w)

[https://www.youtube.com/channel/UCAL3JXZSzSm8AlZyD3nQdBA](https://www.youtube.com/channel/UCAL3JXZSzSm8AlZyD3nQdBA)

~~~
Laforet
Thank you! I will make sure I check out each of them.

------
CptFribble
Fringe conspiracy videos and weird kids cartoon mashups aren't the problem.
The problem is this:

[https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704](https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704)

Everyone carries a set of basic assumptions about how the human system works,
like what's acceptable public behavior, how the government should be run
(which affects who you vote for), expectations about how certain people will
act, etc.

These assumptions are based on the information we take in each day, like
articles read, images viewed/scrolled past, and so on. We ARE our media diets,
whether we want to admit it or not.

Our opinions are formed slowly, they change slowly, and there are several well
known "bugs" in human thought patterns that make us tend to prefer an echo
chamber and reject information that doesn't line up with held beliefs.

AI enables fine-tuned control of exactly what assumptions people gain and
maintain. What happens if YouTube silently starts recommending PragerU videos
to all the millions of high schoolers that match the "impressionable" profile?
What would that do to the basic expectations of the citizenry about what the
"right" kind of government and tax system is?

What if the platform was used to convince everyone that annual slavery
reparations _must_ be made in perpetuity?

No one wants to admit that ads work on _them,_ but we all know they work in
general. So how much of what you _know_ is true, was fed to you? In a world
where everything we consume runs through relatively black-box algorithms
(black-box to us outsiders, anyway), how much of our knowledge and beliefs are
our own?

Guess I'm getting myself down the rabbit hole here. There are no easy answers
anywhere. I think it's only a matter of time until we get our own Ozymandias.

~~~
drewmol
>if YouTube silently starts recommending PragerU videos to all the millions of
high schoolers that match the "impressionable" profile?

As someone who very rarely aligns with any of Dennis Prager's political dogma,
that would be so much better than the "Top 10 Reasons The Earth Might Actually
Be Flat" type that dominates recommendations now, albeit less profitable.

We should of course expect YouTube to behave in a way that prioritizes its own
financial benefit.

Like you said, there are no easy answers anywhere. Understanding that all of
YouTube's recommendations are effectively silent, programmatic, and optimized
for profits calculated on industry-standard, necessarily shallow engagement
metrics like clicks and views is a helpful start, but seemingly uncommon
knowledge.

------
throwawaysea
I hope YouTube (and Google as a whole) are applying these changes to all
content. If they are specifically altering the behavior for only certain
content (e.g. flat earth videos), I think that raises some serious red flags.
I don't want YouTube to be the arbiters of what people watch based on their
compass, which is going to have an obvious bias.

The reality is that almost all online platforms are driven by user behavior.
If those user behaviors lead to relationships between videos that drive
recommendations, then that should be respected. Who is YouTube to say that's
bad? However, if the quality of recommendations is skewed by one group of
heavy users (i.e. it is being gamed), they could consider tweaking how much
weight is given to such users and apply it across the board. That is, they
need to apply such changes not just to 'conspiracy theory' videos but also to
things like political videos from the left (and the right), which surely
experience the same skew in recommendations from a minority of users.

An aside: some comments here seem to say that YouTube should keep this content
but 'stop promoting it'. Well not recommending content is in effect the same
as censoring it outright. If an order of magnitude fewer users find some
content, the net impact is the same.

------
warp_factor
This post should also make people realize how powerful youtube and tech
monopolies became.

By changing their recommendation algorithms, they can change the way a whole
society thinks! Just look at the recommendation from the blog: "Recommend more
round-earth videos".

So far, this extreme political power has been used without the platforms
managing it, but it would only make sense for all of those platforms to use it
in their advantage.

~~~
rland
I would like to see these centralized platforms decentralize recommendations.
I can picture an API that anyone can use, that would enable third parties to
design recommendation algorithms for YouTube. Right now, YouTube is in charge,
because they have all of the user data, so you have to go with their
algorithm. A sort of "marketplace" for recommendation algos -- some explicitly
marketed towards different genres, personality styles, politics, etc.

Right now it's just a tremendous black box, although it works well most of the
time, in my experience.

------
ggggtez
> explains why the bot became racist in less than 24 hours

Eh, the connection is tenuous. Sure, AI becomes biased by the users who use it
the most, but then you really still have to stop and ask yourself: why are we
talking about conspiracy videos instead of how people spend millions of hours
watching cat videos, or memes?

There is a lot to say about the topic, but the answer really boils down to the
fact that recommender systems are only good at working with the variables they
have access to. I don't know of any AI that has access to "this person
murdered someone" as an input. So literally, how _should_ an AI be expected to
optimize against that?

Manually tweaking the AI based on things in the real world makes sense; I just
wish the author hadn't mentioned Tay, which is not really the same issue at
all. Tay is an instance of deliberate manipulation, while YouTube is a case of
wrong incentives.

~~~
buckminster
> Sure ai becomes biased by the users who use it the most

There's an easy fix. Exclude the hardcore users (the top 10% say) from your
recommendation engine.
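A minimal sketch of that fix, assuming interaction logs arrive as (user, video) pairs; the data shape, the 10% threshold, and the function name are invented for illustration:

```python
from collections import Counter

def without_hardcore_users(events, top_fraction=0.10):
    """Drop all interactions from the most active fraction of users
    before the data reaches the recommender.

    events: list of (user_id, video_id) interaction pairs.
    """
    activity = Counter(user for user, _ in events)
    # Rank users by how many interactions they generated, most active first.
    ranked = sorted(activity, key=activity.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    excluded = set(ranked[:cutoff])
    return [(u, v) for u, v in events if u not in excluded]
```

One obsessive user who generates most of the traffic then contributes nothing to the training signal, while the long tail of casual users remains intact.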

~~~
vanderZwan
Hmm, regardless of whether that works, that makes me wonder if it would work
for the vicious cycle that is "popular things getting more popular because
they are already popular".

It would probably lead to some interesting delayed feedback effects where
things that are _nearly_ popular get recommended, push out the popular, then
the displaced stuff gets pushed into the nearly popular zone, etc. Perhaps
some popularity + cooldown system would work.

Of course, all of this goes against the short-term incentives of maximizing
engagement, so there's no way YouTube will ever implement this (even though it
would probably greatly improve diversification of content).
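The popularity-plus-cooldown idea could be sketched as a simple discounting function (the formula and all numbers are invented for illustration):

```python
def score(popularity, recent_impressions, cooldown=0.001):
    """Recommendation score: raw popularity discounted by how heavily
    the item has already been recommended recently. Heavily pushed
    videos yield the slot to nearly-popular ones."""
    return popularity / (1.0 + cooldown * recent_impressions)
```

For example, a video with popularity 1000 that has already been shown 500,000 times scores below a fresh video with popularity 800, so the runaway "popular because it's popular" loop gets damped.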

~~~
buckminster
I'm excluding the most active users, not the most popular things, so I think
it would behave quite naturally. It just wouldn't have input from the
obsessives. I agree it's unlikely to happen though.

~~~
vanderZwan
Ah, yes. I misread. Yours is fundamentally simpler (that is a compliment) and
at face value it indeed seems like a good solution. Outliers distort data,
after all.

The system I thought you suggested in my misreading seems complementary to
your solution, though.

------
ianai
I think I’m more shocked by how willing people are to take fringe ideas
seriously on the internet. I had a clear moment when new to the internet where
I realized it should be taken with a grain of salt. But apparently many people
never have that realization or they seemingly choose what they will believe.

~~~
jeromegv
I've seen it in my family. There seems to be a mentality among people of an
older generation that if it's published somewhere, then it must be true or
close to true.

All their lives, they were exposed to newspapers and magazines. Then their
habits changed and they started getting news from the newspaper on the
internet. But it was still true or close to true; nothing was entirely made
up.

And then one day, they started getting news from friends on Whatsapp, links
from Youtube shared with them. To them nothing had changed; it was still the
internet giving them news. It never occurred to them that some of what they
were watching was entirely made up.

I remember my wife's mom telling us that Steve Jobs launched the iPad on
Dragons' Den. We had no idea what she was talking about, and then she showed
us a doctored video of Steve Jobs on Dragons' Den, showing the iPad to
everyone. She thought it was real. We told her it was made up, and she
couldn't understand why someone would spend time creating something false on
the internet. She just couldn't understand it.

~~~
majormajor
Terry Pratchett's books satirized both the "I don't trust the government but
do trust the random dude in the bar" and the "they couldn't write it down if
it wasn't true" trope for decades. He wasn't the first, I'm sure, just the one
that most immediately comes to mind.

Neither of these is new. The scale and velocity are different, in a meaningful
way, but "the internet" hasn't changed fundamental human behaviors there.

~~~
BethGagaShaggy
What if I don't trust the government because I know the random dude in the bar
works for the Department of Defense? They should hire better people.

~~~
ianai
Government positions have to be filled by people who are willing to and can
get clearance. This decidedly diminishes people willing to apply and capable
of applying.

------
pizza
Related (
[https://twitter.com/math_rachel/status/1094041631211249664](https://twitter.com/math_rachel/status/1094041631211249664)
)

 _Another good example of resisting the tyranny of metrics is Meetup trying to
avoid a runaway feedback loop in its recommendation system, as described by
@estola_ [https://www.youtube.com/watch?v=MqoRzNhrTnQ](https://www.youtube.com/watch?v=MqoRzNhrTnQ)

------
S_A_P
I’m becoming more and more frustrated with YouTube, because I have seen it
take my stepson and my son and turn them in to bedridden sloths. Ok that was
hyperbole but there is a whole new level of friction to get them to do
anything around the house or even get out of the car when we arrive somewhere.
Of course as their parent I impose boundaries on YouTube time. However I don’t
think that YouTube has been a great influence on either of them. It’s a hard
problem.

~~~
hi5eyes
my cousins kids will watch youtube videos on their phone/tablet while playing
games on another phone/tablet or on another monitor screen while playing
fortnite/pc games

youtube is dangerous; just like netflix and facebook, they're designed to get
people to stay on for as long as possible and obviously children are very
susceptible to the design

~~~
dageshi
Is that any different to using a computer with the tv on for background noise?
Or computer + podcast/audio book/radio?

Seems like this is just an updated version of what people have been doing ever
since the computer was invented.

~~~
mavhc
'Socrates famously warned against writing because it would “create
forgetfulness in the learners’ souls, because they will not use their
memories.” He also advised that children can’t distinguish fantasy from
reality, so parents should only allow them to hear wholesome allegories and
not “improper” tales, lest their development go astray. '

'The French statesman Malesherbes railed against the fashion for getting news
from the printed page, arguing that it socially isolated readers and detracted
from the spiritually uplifting group practice of getting news from the pulpit.
A hundred years later, as literacy became essential and schools were widely
introduced, the curmudgeons turned against education for being unnatural and a
risk to mental health. An 1883 article in the weekly medical journal the
Sanitarian argued that schools “exhaust the children’s brains and nervous
systems with complex and multiple studies, and ruin their bodies by protracted
imprisonment.” Meanwhile, excessive study was considered a leading cause of
madness by the medical community.'

' In 1936, the music magazine the Gramophone reported that children had
“developed the habit of dividing attention between the humdrum preparation of
their school assignments and the compelling excitement of the loudspeaker” and
described how the radio programs were disturbing the balance of their
excitable minds. '

------
mbell
> => Platforms that use AIs often get biased by tiny groups of hyper-active
> users.

I don't doubt this, but it's unclear to me how different this is from many
other systems. I think there are a _lot_ of systems that have an
operating point defined by a small group of hyperactive users. e.g. The vast
majority of twitter / reddit / probably even hacker news content is created by
a tiny % of users. Even if you move past the content creation phase, curation
is often also controlled by a small group of hyperactive users. e.g. The
relatively small number of people who switch to the 'New' view on HN or
reddit and collectively decide what gets moved onto the front page where the
masses will see it. It seems to me that an AI approach could be more 'fair' or
'democratic' in comparison to many of these existing systems in that it should
consider the patterns of all users in some way, vs just relying on a small
self selecting group to perform the same function.

To wrap back to this YouTube situation, I think the deeper question here is
why we seem to have so many people willing to believe anything they see on
YouTube? More concerning I guess is the general idea that if we hide the
offending content that is a 'solution'. It's really just putting a bandaid on
a deeper problem and hoping it goes away.

------
frabcus
Anyone know anything about the technicality of the change?

The Facebook content filtering end of year 2018 report talks about engagement
going up the closer to a policy line (any policy - hate speech, nudity etc)
content is. And that applies even if you shift the policy line.

It says they are going to change the Facebook Newsfeed to downgrade content
which is near to a policy line, with a reverse curve to the extra engagement
it gets for being close.

I hope the YouTube change is going to be similarly fundamental. Really curious
what it is and hope it is a good one.

------
untog
This is interesting and valuable information, but I can't help but be a little
bitter at whistleblowers like this. This algorithm has been going for _years_
, and these people only emerge after the media started highlighting it
recently. I wish they'd blown the whistle at the time they were actually
creating these algorithms.

~~~
whatshisface
Maybe the implications weren't clear at the time, or maybe the public wasn't
willing to take the time to understand a complicated issue until the media
became interested in explaining it to them for other reasons.

------
J253
It’d be a nice “feature” to be able to disable recommendations after watching
a video. When a video ends, it just ends. If I want to watch something else, I
have to explicitly search for it.

I know it’s the exact opposite of a good monitization scheme but it’d be a
good step in putting the power back back in the hands of the user or parent of
the user.

------
KaoruAoiShiho
Yikes, while I understand there is some harm done by the AI leading people
down "rabbit holes", I'd still rather have the AI stay independent than have
humans manually biasing the results. The results of the AI reveal the worst
instincts of humans but a manually tuned AI can be turned into a weapon for
propaganda. That would be a worse result in every way. A naturally conspiracy
theorist AI can be escaped and controlled on a per-person level by simply
being a logically thinking person and controlling your feed and likes while a
propaganda machine that serves a political purpose cannot. That really sucks.

~~~
jeromegv
Someone coded the AI, the AI is biased by default. There is no such thing as
an "independent" AI.

~~~
KaoruAoiShiho
That's not necessarily the case, the current trendy deep learning model has
mostly black box AIs that are not manually tuned.

~~~
KirinDave
... Pardon me, but are you under the impression that most neural networks are
purely unsupervised learning with no opportunities for operators to bias their
results?

If so, this is a misapprehension. Even unsupervised learning systems are
subject to data based biasing. The selection of input, the decisions on how to
partition the dataset, the decisions made on how to judge and measure
overfitting, and potentially hundreds of other small hyperparameter decisions
made by operators can substantially change the output of the system. In the
case of the top post, it's very clear that "does this increase time-on-site"
was an operator-chosen scoring function for the results of the system in
question.

Frighteningly, these decisions are often unexamined and unrecorded. This has
led to a series of impressive-sounding systems that can neither be reproduced
nor truly audited. [0]

And that's just the approximated function. The tensors it outputs are then
further processed in some way, as they're often of no value in isolation. The
decisions on how to use those outputs also have a profound impact on how we
view the results of learning systems.
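A toy illustration of that point: with the exact same model predictions, the operator's choice of scoring function alone decides what gets surfaced (the titles and all numbers are invented):

```python
# Two candidate videos with hypothetical model predictions attached.
videos = [
    {"title": "9 HOURS of shocking revelations",
     "pred_watch_time": 95, "pred_satisfaction": 0.2},
    {"title": "A clear 8-minute explainer",
     "pred_watch_time": 7, "pred_satisfaction": 0.9},
]

# Same predictions, two operator-chosen objectives, opposite winners.
by_time_on_site = max(videos, key=lambda v: v["pred_watch_time"])
by_satisfaction = max(videos, key=lambda v: v["pred_satisfaction"])

print(by_time_on_site["title"])  # 9 HOURS of shocking revelations
print(by_satisfaction["title"])  # A clear 8-minute explainer
```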

If anything, we're not cautious enough as a society about using this
technology. We see all sorts of crazy in the news that's wrapped up in the
over-hype. We see law enforcement failing to use even basic tools [1] because
they're so excited.

[0]:
[http://science.sciencemag.org/content/359/6377/725](http://science.sciencemag.org/content/359/6377/725)

[1]: [https://gizmodo.com/defense-of-amazons-face-recognition-tool-undermined-by-1832238149](https://gizmodo.com/defense-of-amazons-face-recognition-tool-undermined-by-1832238149)

~~~
KaoruAoiShiho
You're right of course, though that's still quite a bit of distance from the
deliberate political interference explained in the OP. I find business goals
like increasing engagement to be morally neutral vs political goals like
shaping people's views.

~~~
siidooloo
The OP didn’t say they were deliberately political. He said they were by
default.

------
DisruptiveDave
> There are 2 ways to fix vicious circles like with "flat earth" 1) make
> people spend more time on round earth videos 2) change the AI

There's a 3rd. It's complex, difficult (to execute and prove), and hardly ever
pushed. Put the onus on individuals to be realistic, honest, self aware, and
diligent. Sure, it's not easy, not clean, and will likely lose more than it
wins in the short term. But ignoring it completely is just...dangerous.
Continuing to ask Daddy to change the world for you breeds less-than-desirable
attitudes, beliefs, and perceptions. This concerns me more than any Internet
video does.

~~~
wtracy
First, a lot of the people watching these videos are high-functioning
schizophrenics. Asking them to be more self-aware and diligent is like asking
a paraplegic to spend more time walking.

Second, how do you plan to do this? Put a banner on every Youtube page that
says, "Don't trust this video"? What we're doing now clearly isn't working,
and I don't know what you propose to do that we aren't already doing.

------
didibus
Do we even have a word for the opposite of educating?

Most people would agree that widespread, accessible, quality education is a
key part of having a democratic, free and wealthy society.

We've started to see some online platforms doing a sort of reverse education.
What is that?

It seems the idea itself of education comes from the philosophy of
enlightenment.

I guess education is always the promotion of ideas and instructions. So in
this case, we are seeing that the ideas and instructions that lead to
democracy are not being taught effectively, while other ideas are being
promoted more heavily.

So the question is, what ideas and instructions should we educate people on?
Should the algorithms be biased on purpose towards those? Should it be ideas
of enlightenment emphasizing reason and individualism rather than tradition?

In this way, should education be regulated? Like should society protect itself
against improper education? Who defines improper?

YouTube is proving itself to be a great educator. Very effective at it, but it
seems its teachings have been controversial. They're not the teachings that
lead to our free democracy in the first place.

Personally, I think we need to protect what we achieved, and we have a duty to
control education so that it reflects the ideas and instructions that worked
well in the past to get us where we are in terms of democracy, freedom,
wealth, human rights, etc. But I also recognise that whoever you give this
power to could abuse it. Not an easy problem we're facing.

------
RileyJames
Doesn’t this come back to the capability to think critically and act with self
control?

The internet is full of bullshit, so is the rest of the world.

If you walk down the street and do as every sign, store, person or
organisation desires of you, you’re going to run out of time and money before
you make it more than a couple of blocks.

You will live at the whims of those who are determined to influence you.

Why would we ban conspiracy theory videos when we have churches in every town?
Some ideas require proof and others don't?

------
supergirl
> two years ago I found out that many conspiracies were promoted by the AI
> much more than truth, for instance flat earth videos were promoted ~10x more
> than round earth ones

is that surprising? why would anyone want to see a video about how the earth
is round? a conspiracy video is much more interesting, even if you don’t
believe the conspiracy

------
scoutt
What about instead of playing a pre-roll ad, they instead start playing some
"warning" video saying that some of the videos users will see (not necessarily
the video after the ad) will sometimes try to deceive and/or confuse them?
Maybe citing some example titles/videos. And of course, the "warning" video
doesn't have to be played every time; just occasionally. This might generate
some kind of awareness.

The AI can work by selecting the right "warning" video for a given audience.
Telling this to a 65 y/o is not the same as saying it to a 20 y/o.

------
tambourine_man
“=> Platforms that use AIs often get biased by tiny groups of hyper-active
users. ”

------
throwawaysea
Question for those in this thread who want to see 'conspiracy theory' content
either not recommended/promoted or censored outright: do you also want the
same thing for videos that support your own ideologies? For example if you are
a liberal Democrat (as I imagine most of us here are), do you question
YouTube's recommendation engine when you click and follow video
recommendations towards videos you hit 'Like' on? Aren't hyperactive users
skewing recommendations by promoting fringe political views (e.g. socialism)
in those instances?

To be clear, I don't want to turn this into a debate of left versus right or
about what is fringe and what isn't. If there is a problem with the quality of
recommendations on YouTube, I believe it exists more broadly and not just with
'conspiracy theory' content. My point therefore is to question whether this
recent noise about YouTube recommendations is really about the algorithms or
if it is just about the power dynamics between groups of people who have
different opinions and want to block ideas they disagree with personally.

After all, isn't it just a vocal minority of hyperactive Internet users who
are complaining about YouTube's recommendations and fomenting Internet outrage
about it in the first place?

------
bjt2n3904
A few thoughts on this.

Ultimately, whose fault is it that a user watches a "conspiracy" video? The
article seems to consider the user base as mindless people with their mouths
open to be spoon-fed content. Do the viewers have no agency?

In that thread, why is the solution to curtail "conspiracy" videos? What
defines a conspiracy theory? I mean, we all seem to agree here about vaccines
and moon landing, but what's to stop YouTube from labeling a political party
as "conspiracy" and filtering them out of existence? Can't the users choose
for themselves, or are they too "stupid" to pick what's best for them?

Yes, YouTube is free to filter as they see fit... but if they want a
recommendation engine, AND they want to filter out "bad things", it's a very
valid question to ask who decides what is bad. I believe it is essential for
YouTube to be transparent about what their algorithms are designed to filter
out to avoid this scenario from happening.

Part of the solution that no one seems to be talking about is to caution
others against passive consumption of media. YouTube et al want users to plug
their ears, close their eyes, tilt their heads back, and consume. YouTube can
condition their streams to be as "healthy" as they want, for whatever
definition of "healthy" YouTube chooses--but there is no way to healthily
allow yourself to be fed a diet of things you don't pick, even if it looks
good.

"Forcing" YouTube to do this will simply result in the lame, ineffective
warnings you see at casinos. "Remember! Mindlessly consume responsibly! If you
need help, uh... throw away your computer, I guess!" This needs to happen on a
human level, thinkers to parents, parents to children, not at a legislative
level.

~~~
specialist
It's just another form of addiction. Exploited for profit.

------
stebann
As citizens, we are responsible for pushing for political regulation of the
dangerous consequences of AI. It seems that politicians won't do it because
these nonsense theories serve their interests: the confusion makes it easy for
them to introduce all kinds of sneaky ideas that benefit their agenda. They
don't want us to think clearly about the power they hold through the
technology and services being deployed continuously. They won't do it unless
we coerce them into it.

------
prirun
"Platforms that use AIs often get biased by tiny groups of hyper-active
users."

I think nearly everything in life gets biased by tiny groups of hyper-active
users. For example, one story about a razor blade in an apple, published on
the news for a week, leads many to think trick-or-treating is now a dangerous
activity.

For me, a key thing is to limit exposure to ALL media. It's ALL biased in one
way or another. Learn to trust your actual experience as much or more than
what you read and hear from others.

------
firexcy
I do find YouTube’s recommendation algorithm prone to low-quality content.
Sometimes I click one or two clickbait videos out of curiosity, and as soon as
I return to the homepage I find similar videos starting to fill my feed. I
have to manually dismiss them to force the algorithm back onto the right
track, or there will only be more.

Now I simply disable YouTube’s watch and search history and almost never care
about what’s on the homepage.

------
nraynaud
I watch a lot of YouTube, many university level courses that I absolutely do
not understand, I hope to become an influential whale for education.

------
alkibiades
And what about “conspiracies” that end up being true? Like that the Gulf of
Tonkin incident was a false flag? Or that Iraq didn’t have WMDs? (It was
considered a conspiracy theory at the time.)

~~~
raverbashing
They're not censoring the videos, they're simply not promoting them.

And they're not going after _every_ conspiracy, mostly flat-earth and antivax
stuff (hopefully).

~~~
alkibiades
>>> They're not censoring the video, they're simply not promoting them

So I assume you'd be fine with them doing that for, say, Elizabeth Warren's
videos but not Donald Trump's? Because "hey, it's not censorship."

------
hokus
I recommend this video:
[https://www.youtube.com/watch?v=WfGMYdalClU](https://www.youtube.com/watch?v=WfGMYdalClU)

------
didibus
Seems like the beginning of the AI overlord trope. Where what the AI chooses
to do to satisfy the goal you gave it backfires. Like enslaving us for our own
good.

------
sarcasmOrTears
If I ran YouTube or Facebook, I would build several different recommendation
engines and let people choose one. That should stop the complaining.

~~~
marcodave
it would most likely end up totally meta, with ads for recommendation engines.
Would be hilarious to see it happen though.

------
emmelaich
Positive feedback is bad. All engineers know that.

These are Dawkins's memetic viruses being spread.
[https://en.wikipedia.org/wiki/Viruses_of_the_Mind](https://en.wikipedia.org/wiki/Viruses_of_the_Mind)

BTW, why does he post YouTube conspiracy links? He should at least suggest
using incognito/private mode!
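The positive-feedback point can be made concrete with a toy "rich get richer"
(Pólya urn) simulation; everything here is invented for illustration and has
nothing to do with YouTube's actual ranking code:

```python
import random

def polya_urn(initial, steps, seed=0):
    """Each new view goes to a video with probability proportional to its
    current view count: the rich get richer, so an early random lead
    compounds instead of averaging out."""
    rng = random.Random(seed)
    views = list(initial)
    for _ in range(steps):
        winner = rng.choices(range(len(views)), weights=views)[0]
        views[winner] += 1
    return views

# Two videos start perfectly equal; after 1000 views the split is
# usually far from 50/50.
print(polya_urn([10, 10], 1000))
```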

------
Jorge1o1
The real lesson learned: rather than actually fixing the problems with your
algorithm, just manually block stuff out...

People do the same thing with chatbots. If you have a text file listing the
bad words the bot isn't allowed to say, it's not really cutting-edge AI, is
it?
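The chatbot point is easy to see in code. A minimal sketch of such a blocklist
"filter" (the word list and function name are made up for illustration):

```python
# A text file's worth of banned words plus a string replace:
# hardly cutting-edge AI.
BLOCKED = {"badword", "slur"}

def filter_reply(text: str) -> str:
    # Replace any blocked word (ignoring case and trailing punctuation)
    # with asterisks; leave everything else untouched.
    return " ".join(
        "***" if word.lower().strip(".,!?") in BLOCKED else word
        for word in text.split()
    )

print(filter_reply("that is a badword, friend"))  # prints: that is a *** friend
```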

------
xfitm3
I support YouTube denouncing flat earthers because the science is clear to me:
they're wrong. I think this will get slippery if they make similar tweaks for
politics or social issues.

This raises the question: is it really YouTube's job to stop the crazy? Where
is the responsibility of the viewer?

~~~
jeromegv
The problem is not about YouTube stopping or not stopping the crazy. The
problem is that "flat earth videos were promoted ~10x more than round earth
ones". They actually promoted the crazy videos quite a lot more than the real
ones. Keep the crazy videos up, but jeez, stop promoting them above everything
else.

~~~
whatshisface
I think that if people want to watch flat earth videos, then the right thing
to do is connect them to flat earth videos. I for one think flat earth videos
are hilarious.

~~~
shaki-dora
Everyone in this thread is using "flat earth" as a stand-in for the actually
dangerous stuff eating at the foundations of democracy. Unfortunately, saying
that some of those 4chan conspiracies are wrong, stupid, and potentially
dangerous is deemed political and therefore not appreciated on HN.

------
zozbot123
TL;DR: the Youtube AI overweights heavy viewers (people who have more
"engagement" with YT) and thereby ends up naturally promoting the Youtube
equivalent of clickbait. YT creators notice, and make their videos more and
more like Youtube-bait so that they'll be promoted. A tight feedback loop is
created, and hilarity ensues. Pretty much the exact same dynamics as
"Elsagate", except optimized to bait adults, not kids.
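The loop can be sketched in a few lines. Everything here (the numbers, the
linear preferences, the two user groups) is invented for illustration, not
YouTube's actual system:

```python
# Toy model of the feedback loop described above. Assumptions: heavy users
# watch far more hours than casual users, their watch time grows with how
# "baity" a video is, and casual users' watch time shrinks with it. The
# recommender scores by raw watch time, so it overweights heavy users, and
# creators nudge their style toward whatever scores best.

def promotion_score(baitiness, heavy_hours=5.0, casual_hours=0.5):
    heavy = heavy_hours * baitiness            # heavy users reward bait
    casual = casual_hours * (1.0 - baitiness)  # casual users punish it
    return heavy + casual

def creator_response(baitiness, step=0.1):
    # Creators try a slightly baitier and a slightly less baity style,
    # and keep whichever the recommender promotes more.
    up = min(1.0, baitiness + step)
    down = max(0.0, baitiness - step)
    return up if promotion_score(up) >= promotion_score(down) else down

b = 0.2  # creators start mostly non-baity
for _ in range(20):
    b = creator_response(b)
print(round(b, 2))  # prints 1.0: the loop drives style to maximum bait
```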

~~~
rebuilder
It's optimized to bait a subset of adults who are heavy users of YT. I have to
wonder how good the results are for Youtube as a business, overall - have they
ended up optimizing for a small group of users at the expense of catering to
the larger majority?

------
paulsutter
If it is written in PowerPoint, it's probably AI

> "I worked on the AI that promoted them"

~~~
softwaredoug
Using the term “AI” was also a big red flag that this person doesn’t know how
the algorithm works. It’s not some Skynet-style sentient thing trying to get
us to watch more videos... it’s math and training data.

