
YouTube’s Algorithm Incentivizes the Wrong Behavior - furcyd
https://www.nytimes.com/2019/06/14/opinion/youtube-algorithm.html
======
strikelaserclaw
"If YouTube won’t remove the algorithm, it must, at the very least, make
significant changes, and have greater human involvement in the recommendation
process.", man does this person know how many videos and how many users
YouTube has? They cannot use anything except an algorithm to recommend videos.
They cannot use anything except an algorithm to detect videos inappropriate
for children. It seems YouTube is working on this, and this opinion seems like
a ill thought out fluff piece to enrage readers and sell this persons book.

~~~
kartan
> They cannot use anything except an algorithm to recommend videos.

I agree that with the current business model it is not possible for YouTube
to sort it manually.

When I was a kid, a long, long time ago, it would have been impossible to
conceive of a TV channel showing that kind of content regularly and staying
open. If their answer had been that they could not fix it because fixing it
costs money, there would have been an outraged response.

If YouTube cannot keep things legal, cannot respect people's rights, and
cannot be a good, responsible part of society because doing so is not
cost-effective, then for me the way to go is clear. And that is true for
YouTube, Facebook, or any other business, digital or not.

~~~
CamperBob2
How about actually demonstrating harm to children (or to anyone else) before
launching a moral panic?

Is that an option?

~~~
shearskill
I’d say having a 13-year-old far-right YouTube star post a video threatening
to kill the CEO might be harmful, but maybe that’s OK?

[https://www.newstatesman.com/science-tech/social-
media/2019/...](https://www.newstatesman.com/science-tech/social-
media/2019/04/rise-YouTube-ironic-teenage-right-Soph)

~~~
CamperBob2
Do you seriously think that kid was radicalized on YouTube? Where were the
parents?

~~~
shearskill
2018: “I’ll pick a topic and just give my opinion about it, try to be
entertaining, try to be funny, try to be unique and say something other
people haven’t said before,” the YouTuber said.

[https://redwoodbark.org/46876/culture/redwood-students-
view-...](https://redwoodbark.org/46876/culture/redwood-students-view-youtube-
different-lens/)

2019:

In response, the principal of the high school sent a note to students and
parents Thursday night regarding the "hate-based video and text posts
attributed to one of our students":

[https://www.kron4.com/news/bay-area/bay-area-girl-says-
she-l...](https://www.kron4.com/news/bay-area/bay-area-girl-says-she-ll-keep-
making-controversial-youtube-videos/2008692937)

------
rosterface
Ah, the classic “think of the children!” argument. It is no one’s
responsibility other than the parent's to ensure their child isn’t watching
inappropriate content (which will be different for every family and
individual).

This article suggests that machine learning and collaborative filtering are
incapable of producing healthy recommendations. I beg to differ: the New
York Times may not like the results, but they work for the vast majority of
users on any service with too much content to manually curate.

~~~
rspeer
There are healthy recommender systems, like Spotify.

YouTube is a _disastrously_ unhealthy recommender system, and they've let it
go completely out of control.

~~~
ihuman
How is Spotify's different from YouTube's?

~~~
notriddle
I'm pretty sure all content on Spotify gets manually curated first, so
abusive tagging doesn't happen, and some of the worst content simply doesn't
get uploaded at all. Spotify also doesn't try to be a news site, so it can
afford a couple of weeks' lag between a song being uploaded and it showing
up in people's recommendation feeds.

------
restingrobot
I don't think this is incentivizing bad behavior. It's merely showing the
viewer more of what they are already watching, with a gradual introduction
to broader material. The example of YouTube serving content to "pedophiles"
is borderline asinine. The neural network is just making suggestions on
viewing; it's not telling people what to watch. In regard to the complaint
that "adult" content is being served to adolescents, there is an option to
filter out sensitive content altogether.

Also, as a parent of four children myself, I find the idea of letting my
kids loose on the internet completely devoid of any supervision ridiculous.
When did it become YouTube's responsibility to parent the children in its
audience? Should we also ban HBO, Amazon, and Netflix from providing
recommendations because a child might be in front of the screen?

This is just another pointed attempt to censor free speech by badgering
technology companies, the idea being that the platform will become more
restrictive if it is constantly hounded about this.

~~~
masklinn
> with a gradual introduction to broader material.

It doesn't gradually introduce broader material, it gradually introduces more
"engaging" material.

~~~
restingrobot
I would argue that your point is a matter of semantics, but even so, you
still have a choice of whether or not to watch the recommended, more
"engaging" material. It doesn't change the overall point of my statement.

~~~
tvanantwerp
I'd say it's quite a different point. My own experience has been that the
recommended "engaging" material is something in the same genre as whatever I
just saw, but with a clickbaitier title, a flashier thumbnail, and overall
poorer informational quality. It's the difference between saying "I see you
enjoy sandwiches, maybe you would also enjoy salads or a plate of sushi" and
"I see you enjoy sandwiches--here's a candy bar, an off-brand soda made with
high-fructose corn syrup, and a carton of cheap grocery store ice cream."

~~~
restingrobot
The semantics argument I was pointing out was in regards to "broader" vs
"engaging". That's not what my statement was about, it was that no matter what
the algorithm recommends to you, you still have the choice whether or not to
watch it. The point you are making is purely anecdotal as I assure you the
neural network is not simply showing you

>same genre as whatever I just saw, but with a clickbaitier title, flashier
thumbnail, and overall poorer informational quality

~~~
Faark
You can keep telling yourself that you have a "choice", but in the end we
are all just humans, with quite predictable behavior. Biased selection of
content has forever been one of the more effective ways of shaping opinion;
politics fights hard on that front for a reason. For the first time ever, a
very few algorithms are selecting content for millions of people, with
apparently little human oversight. Yes, this should worry us. Simply
assuming the results will benefit mankind, especially in the long term,
would be foolish. It's not quite the usual AI-safety paperclip scenario, but
by now it should be very obvious that optimizing watch time, even with
current "AI", comes with significant unintended side effects and drawbacks.

------
la_barba
I think YouTube has just exposed the kind of content people were already
interested in, and possibly consuming outside of the public eye. We find it
frightening that people readily click on abhorrent content, when they were
probably already doing so on other platforms. The internet has had gore
videos for the longest time; I remember a shotgun suicide video that kids in
my school used to shock each other with. If Google as a private company
chooses to ban content, then that is its right, but an a priori expectation
that an entertainment platform should control people's social behavior and
enforce morality is harmful in a free society, IMHO.

~~~
zanny
People were fueling industries of creatively bankrupt content well before
the Internet came around; just look at the long-term existence of tabloids.

YouTube is optimizing for the underlying psychological mechanisms that put
people in that mood, because doing so makes them suggestible; and because
none of this stuff has substance or meaning, they can graze on it the way
junk-food producers want people to graze on their products.

~~~
FabHK
I think the analogy to junk food is instructive. Both fast food and YouTube
maximise revenue while minimising costs by exploiting human flaws and
foibles, and do so much more effectively than was possible 100 years ago.
This creates an environment radically different from the one we evolved in.

Watching hours of YouTube - obesity of the mind. Kind of.

------
KoenDG
I don't doubt that this person cares about the subject they wrote about.

And if the algorithm is producing negative side effects, then, of course, it
should be looked at and changed.

I'm no expert myself, but to my understanding: any algorithm is limited by its
data set.

Based on its data set, an algorithm comes to conclusions. But one can then, of
course, ask: what's the basis for these conclusions?

I recall reading that a certain AI had been fooled into thinking a picture
of a banana was showing a toaster or a helicopter, after a few parts of the
image were changed to contain tiny bits of those items.

It turned out that the AI used the apparent texture at places in the image
to determine what was in the image, rather than doing a shape comparison.

Which sounds like a time-saving measure. Though it may very well have been the
method that most consistently produced correct results, _for the given
dataset_.
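
As a toy illustration of the point, here is a minimal sketch of an
adversarial perturbation against a made-up linear classifier. Everything
here is hypothetical, not the actual system from the banana/toaster story:

    
    
        import numpy as np
        
        rng = np.random.default_rng(0)
        
        # Toy flattened "image" and a toy linear classifier: score > 0 => "banana".
        x = rng.normal(size=64)
        w = rng.normal(size=64)
        print(w @ x)  # original score
        
        # Fast-gradient-style step: nudge every pixel slightly against the
        # gradient of the score (for a linear model the gradient is just w).
        eps = 0.1
        x_adv = x - eps * np.sign(w)
        print(w @ x_adv)  # score falls by eps * sum(|w|), yet no pixel moved much
    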

Frankly, the attitude of "we don't know how it works and we don't care"
cannot possibly end well. Neither can the attitude of "oh well, make a
better dataset then".

I get that we're all excited about the amazing abilities we're seeing here,
but that doesn't mean we shouldn't look where we're going.

I recall a story of an AI researcher who didn't want to define anything
because he was afraid of introducing bias. Upon hearing this, his colleague
covered up his eyes. When asked why he did this, he replied: "The world no
longer exists". And the other understood.

Because of course the world still exists. And in just the same way, it's
impossible to get rid of bias.

Some human intervention is needed, along with constant checks and
comparisons against human results.

~~~
dbt00
The problem with the dataset is not just that the AI will pick shortcuts
and naive heuristics; humans will too.

The problem with the dataset is that you're not in control of who populates
it or what their intentions are. There's no understanding of an adversarial
model or of threat handling.

------
michaelbuckbee
The NYTimes uses a very human "algorithm" to determine what to report on,
and if you compare actual causes of death to what's reported, the coverage
is wildly off:

Data: [https://ourworldindata.org/uploads/2019/05/Causes-of-
death-i...](https://ourworldindata.org/uploads/2019/05/Causes-of-death-in-USA-
vs.-media-coverage.png)

This isn't a knock against the NYTimes so much as against humanity: we're
all fascinated by the lurid and sensational (note that the Google searches
are similarly skewed), and this permeates all levels of life.

------
vgetr
I feel like things were mostly fine until the 2016 election, after which
journalists became _very_ concerned. If I had a nickel for every “The
algorithms are coming! The algorithms are coming!”, I’d be rich. I mean, I
didn’t like the outcome either, but these types of articles seem to be
motivated by a) finding a scapegoat and b) wanting to use “algorithm” in a
sentence.

------
bjt2n3904
What a pleasant way of stating that humans are basically good. We just keep
passing the buck. "We'd be fine if it weren't for this algorithm!"

    
    
        We believe that man is essentially good.
        It’s only his behavior that lets him down.
        This is the fault of society.
        Society is the fault of conditions.
        Conditions are the fault of society.
    

If you ask me, "YouTube's algorithm" is simply exposing the way humanity is.
And trying to get an algorithm to "shepherd" humanity to be better is simply
Orwellian.

------
module0000
> If YouTube won’t remove the algorithm, it must, at the very least, make
> significant changes

It _must_? No, it doesn't have to do a damn thing. It's a product from a
_publicly traded company_ , therefore it " _must_ " return value for
stockholders. That means more behavior that increases ad revenue. The author
is out of touch with reality. Stop feeding your kids youtube if you don't want
them exposed to youtube. It's a private service(youtube), not a public park.

~~~
drewbug01
> It must? No, it doesn't have to do a damn thing.

Subject to the laws of the jurisdiction in which it operates, of course. We
could - if we so wanted - pass laws to regulate this behavior. That is perhaps
the best option, in my own opinion.

> It's a product from a publicly traded company, therefore it "must" return
> value for stockholders.

The dogma that it "must" return value for shareholders is not an absolute
rule[1]; rather it's a set of market expectations and some decisions from
Delaware (which have an outsize impact on business law) that encourage it. But
it's not _required_. In fact, many states allow a type of corporation that
specifically and directly allows directors to pursue non-shareholder-value
goals - the benefit corporation[2].

> The author is out of touch with reality.

Please re-read the HN guidelines[3].

> Stop feeding your kids YouTube if you don't want them exposed to YouTube.
> It's a private service (YouTube), not a public park.

This is the doctrine of "caveat emptor," essentially - that a consumer is
ultimately responsible for all behavior. However, a wealth of regulation
exists because that's unworkable in practice. The FDA and the EPA come to
mind, but we also regulate concepts like "false advertising." Your stance here
ignores the realities of life in service of ideological purism.

[1]
[http://web.archive.org/web/20190327123200/https://www.washin...](http://web.archive.org/web/20190327123200/https://www.washingtonpost.com/business/economy/businesses-
focus-on-maximizing-shareholder-value-has-numerous-
costs/2013/09/05/bcdc664e-045f-11e3-a07f-49ddc7417125_story.html?utm_term=.4c2fd1a74b58)

[2]
[https://en.wikipedia.org/wiki/Benefit_corporation](https://en.wikipedia.org/wiki/Benefit_corporation)

[3]
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
Nasrudith
No, we cannot pass laws that do that, no matter how indignant we may be. The
whole bloody point of the Constitution is that no matter how pissed off the
majority is (or "the majority", which may just be a noisy minority), you
cannot simply legislate away rights.

The vague "do something!" regulation push has all the marks of a moral
panic, and all participants should slap themselves hard enough to leave a
mark and repeat, "It is never too important to be rational."

~~~
AlexandrB
Please explain what rights would be legislated away in this case. It's
definitely not the 1st Amendment - you can still say what you want, just not
necessarily on the platform of your choice. This was equally true in the
broadcast TV days. So what other right(s) would be legislated away by
regulating YouTube's content?

~~~
Nasrudith
Broadcasters' special pleading had some scintilla of a point, in that there
were actual shared commons to prioritize. In practice it was a fig leaf:
broadcast-censorship arguments were never about 'values' wrestling over
airwave ownership, but instead about bullshit doctrines like 'community
standards'. The fact that the US has a long history of laying out rights for
all, seeing the revolutionary implications, and then saying 'No, wait, that
can't be right, it is too different' and going back to the bullshit control
it had before for a few centuries is a whole other sad topic.

One thing that did make it through was the ruling that media which lack said
limitation, like cable and the internet, don't have the rationale for that
restriction, and thus the censorship that weak minds had become accustomed
to vanished in a puff of logic. This has been the case since cable porn
channels were a thing.

By regulating YouTube you effectively regulate what /all/ platforms may
push. It isn't simply that YouTube decides "You know what, we don't want to
post that" - an exercise of its collective freedom of association - but "The
government doesn't want us to post that, so we can't." You can't just
deputize tasks to third parties and expect the limits on exercises of power
to vanish. Otherwise we'd see hordes of private detectives as a workaround
for Fourth Amendment rights.

Said regulations on YouTube would be a major infringement upon freedom of
the press and of speech. Not to mention it is logically equivalent to the
government censoring the press whenever it fits whatever criteria it
dislikes.

------
eganist
This seems like it takes some notes from Veritasium's theory on YouTube's
recommendation algorithm, which he posted after his initial reservoir shade
balls video went viral. (edited for clarity)

[https://www.youtube.com/watch?v=fHsa9DqmId8](https://www.youtube.com/watch?v=fHsa9DqmId8)
for his theory.

------
raz32dust
YouTube's incentives are the best among such platforms, IMO. It has a simple
profit-sharing model where part of the ad revenue goes to the content
creator. This is unlike Instagram, for example, where content creators have
to peddle products in their ads to make money. Take a fitness channel: on
YouTube, the content creator can just be honest, and the views alone will
guarantee income, whereas on Instagram they have to resort to selling snake
oil. I love YouTube for this, and I am constantly amazed by how YouTube has
become a source of livelihood for so many.

------
kauffj
[http://archive.is/6lbCR](http://archive.is/6lbCR)

(Archive link for those who prefer non-broken web experiences)

------
everyoneisbias
It's all about advertising money. TV and newspapers are dying and they need
someone to blame.

~~~
restingrobot
I personally think this has deeper political motives as well, but yes I
completely agree with you!

------
Analemma_
I don't know if YouTube's problems are so bad that the argument applies in
this case, but in general, "We can't comply with this regulation, it would be
too difficult at our scale" is not considered a valid defense. Just as banks
shouldn't be allowed to get so large that they can't fail without wreaking
havoc on the economy, if algorithmic recommendation and moderation can't work,
then maybe social networks shouldn't be allowed to get so large that human
moderation is not possible.

~~~
skybrian
The queue for getting your video posted on YouTube would grow infinitely. (Or,
more realistically, people would give up and not bother once it takes years.)

But I guess they could charge money to get to the head of the line?

~~~
PretzelFisch
That's not true: you could upload a video but not allow it to be recommended
until some human review has been done. Most YouTube channels don't need the
recommendation engine.

~~~
LocalPCGuy
That just isn't feasible. Videos would literally take years to reach
recommended status - another comment pointed out there are 500 new videos
uploaded per SECOND.

~~~
munk-a
If there were one dude, sure. But apparently YouTube is in the business of
supporting the upload of 500 videos/second, so it needs to deal with the
consequences. It's not like there's any regulation forcing it to be the
place everyone uploads videos to, and there are some valid competitors
(though they're far less into the publishing/editorializing facet - Vimeo,
for instance, is much more often direct-linked).

~~~
jerf
To be clear, I am not speaking for anybody in this thread but myself.

But I will unapologetically and forthrightly say that, yes, if we're going
to assert that YouTube has certain responsibilities for the nature of the
videos that it hosts, and it turns out that the nature of those
responsibilities is such that YouTube can't possibly meet them, then, yes,
YouTube as we know it should essentially be shut down, at least going
forward.

I am NOT going to say we should craft the responsibilities in such a way
that YouTube is deliberately shut down. However, if it turns out that they
are incapable of applying even the bare minimum effort that we as a society
deem necessary, then, yes, it is absolutely a consequence that YouTube as we
know it today may have to be so radically altered as to be a different site
entirely.

In the general case, when the law imposes certain obligations on you as a
business, and you as a business cannot meet them, that does not mean that
those obligations suddenly stop applying to you. It means that your business
is not legally viable, and needs to change until it is. It may be the case
that there is no way to be both legally viable and profitable, in which case
your business will cease to exist. Just as there is, for instance, no way to
legally run a business built around selling torrent files containing
unlicensed commercial content to people. You can't defend yourself by saying
you can't afford to get the licenses; your suitable legal remedy was to
never have started the business in the first place. There are some concerns
around grandfathering here to deal with, certainly, but they can still be
addressed going forward.

There is no guarantee that there is a solution where a company exerting
whatever minimal control they are obligated to assert by society is capable of
growing to the size of YouTube. If that is the case, so be it. The solution is
not to just let them go because they happened to grow fast first.

(My solution to freedom of expression is an explosion of video sites, where
each of them has ways of holding the videos to the societally-mandated minimum
standard, and no one site can do it all because they simply can't muster the
resources to be The One Site, because as they grow larger they encounter
_anti-_ scaling effects. Given how increasingly censorious Silicon Valley is
becoming, as we are now into censoring the discussions about censoring
discussions like the recent removal of Project Veritas from Twitter for its
discussion of Pinterest censoring pro-life films, I expect this to increase
the range of expression, not diminish it.)

~~~
nostrademons
Not speaking on behalf of what I _want_, but on behalf of what is _true_:

> It may be the case that there is no solution to being legally viable and
> being profitable, in which case, your business will cease to exist.

Or your business will exist illegally.

There's this interesting interplay between law and economics, where law is
generally taken as a prerequisite for frictionless commerce, and yet at the
same time if activities that large groups of people wish to partake in are
made illegal, the market just routes around them and black markets spring up
to provide them. Prohibition. The War on Drugs. Filesharing. Gambling.
Employing illegal immigrants. Usury. Short-term rentals. Taxi medallions.
Large swaths of the economy under communism.

There are a couple other interesting phenomena related to this: the very
illegality of the activity tends to create large profits around it (because it
creates barriers to entry, such that the market often ends up monopolized by a
small cartel), and the existence of widespread black markets erodes respect
for rule of law itself. When people see people around them getting very rich
or otherwise deriving benefit from flouting the law, why should they follow
it?

Switching to editorializing mode, I find this gradual erosion of respect for
law quite troubling, and I also think that the solution to it needs to be
two-fold: stop trying to outlaw behaviors that are offensive to some but
beloved by others, and start enforcing the laws that, if neglected, really
will result in the destruction of the system.

~~~
jerf
"Or your business will exist illegally."

True.

In the context of this particular case, I was assuming that nothing the
current size of YouTube could exist illegally, as that would imply that
whatever authority declared it "illegal" - yet was not capable of doing
anything about it despite its nominally living in that authority's
jurisdiction - must be anemic and impotent to the point of being nearly
non-existent.

There's already an underground proliferation of video sites, spreading
copyrighted content out of the bounds of what the rightsholders want, so it's
pretty much assured we'd end up with illegal alternatives. :)

------
sneakernets
In addition to this, seeing content creators be slaves to the algorithm is
an eye-opening experience, especially when it comes to children's videos.
It's all computer-generated garbage powered by responses to changes in the
algorithms. If kids suddenly watch more content with alligators, prepare for
that to be the only thing created, recommended, or playing. It's wild.

------
tunesmith
Still looking for recommendations that are influenced by people's best-self
intentions - who they want to be - rather than by their worst-self
behaviors.

------
edoo
We know we have to keep kids sheltered from people who may have unscrupulous
neural networks at play, looking for a way to further their own goals at the
expense of a child's well-being and overall health.

Engagement on the internet is also being driven by neural networks that are
learning to adapt to the user's brain chemistry to statistically modify
behavior for maximum engagement/profit. Perhaps it is time to realize that
these services are analogous to a random stranger offering your kid candy
for their own twisted goals, goals that are unlikely to be compatible with a
child's well-being. If you consider a service like YouTube an untrusted
source of interaction, perhaps you'll be as likely to block or monitor it as
you would random chat rooms.

------
toss1
YouTube can be a source of astonishingly great education and entertainment
that will help grow society, as well as of astonishingly horrid corruptions
of nearly anything, which will corrode society at its roots.

Most of these discussion posts seem to miss the point that 'engagement' or
'upvotes' does NOT equal value.

Also missing is the concept that a company with a massive platform has any
social responsibility to at least not poison the well of society.

And claiming "it's the parent's responsibility" may have some truth value, but
it does not and should not be an excuse to absolve the platform owner of
responsibility.

The key to the longer-term success of these platforms is to abandon the
low-hanging fruit of "engagement" as a measure of value and to develop more
substantive metrics that actually relate to the value delivered, both to the
individual watcher and to society as a whole.

As one audience member, I find their recommendations to be basically crap,
almost never leading me to something more valuable than what I just watched
(sure, they'll occasionally put up a recommendation with enough
entertainment value to watch, but much of the time I want my 5 minutes
back). To find any real value, I need to search again. That already tells us
that their "engagement"-based algorithms are insufficient to serve the need.

------
gersh
I think there is an inherent problem in optimizing for retention time.
Ideally, recommendations should help people find things that improve their
health, make them happier, or make them better informed about the world.
However, it doesn't seem like YouTube has metrics on those things.
Furthermore, things like that probably can't be determined very quickly for
new content.
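
As a toy sketch of that tension, consider a ranker that blends predicted
watch time with some hard-to-measure well-being signal. All the names,
weights, and scores here are hypothetical:

    
    
        def score(video, w_watch=1.0, w_wellbeing=0.0):
            return (w_watch * video["pred_watch_minutes"]
                    + w_wellbeing * video["pred_wellbeing"])
        
        videos = [
            {"id": "outrage_clip", "pred_watch_minutes": 9.0, "pred_wellbeing": -2.0},
            {"id": "tutorial", "pred_watch_minutes": 6.0, "pred_wellbeing": 3.0},
        ]
        
        # Pure watch-time optimization surfaces the outrage clip...
        print(max(videos, key=score)["id"])  # outrage_clip
        
        # ...while real weight on well-being flips the ranking -- if you can
        # measure well-being at all, which is exactly the hard part.
        print(max(videos, key=lambda v: score(v, w_wellbeing=1.0))["id"])  # tutorial
    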

------
diogenescynic
I mostly watch movie reviews on YouTube, and I'm constantly being
recommended weird Joe Rogan clips, alt-right content, or makeup videos. I
don't get it; I've never clicked on or watched anything remotely associated
with them. I suspect a lot of the popular YouTube channels are gaming the
algorithms or SEO-optimizing their videos to get more recommendations.

~~~
restingrobot
The neural network takes into account videos watched by other people who
watched the same one you did. It's quite possible that the movie trailer you
watched was popular among demographics that also watched those
recommendations. If there isn't a lot of data about you personally, you will
see a heavier bias towards other people's viewing habits.
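
A minimal sketch of that "people who watched X also watched Y" idea
(item-to-item collaborative filtering), with entirely made-up watch
histories; real recommenders are far more elaborate:

    
    
        from collections import Counter
        
        # watch histories: user -> set of video ids (hypothetical data)
        histories = {
            "u1": {"trailer", "review", "makeup"},
            "u2": {"trailer", "makeup"},
            "u3": {"trailer", "podcast"},
        }
        
        def co_watched(video, histories):
            """Count how often other videos co-occur with `video`."""
            counts = Counter()
            for watched in histories.values():
                if video in watched:
                    counts.update(watched - {video})
            return counts
        
        # Recommend whatever most often co-occurs with the video just watched.
        print(co_watched("trailer", histories).most_common(1))
        # [('makeup', 2)] - a sparse personal history simply inherits the
        # tastes of whoever else happened to watch the same video.
    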

------
JDiculous
The "wrong behavior" that Youtube incentives is promoting and encouraging
clickbait garbage content (just look at the default homepage). The holy metric
now is "watch time", the result being that creators stretch out their content
to 10 minutes because then Youtube is more likely to promote it (and midroll
add = twice the revenue). Yesterday Youtube recommended me a 10 minute video
of some guy explaining how he made this simple drone shot that could've been
condensed down to a single sentence - "Turn sensors off". What a waste of
time.

But hey they're a corporation and thus have no accountability to the public
good.

------
umvi
Does the algorithm incentivize bad behavior or simply reflect the desires of
the viewers?

Someone watching lots of DIY home repair videos will start seeing more. In
that case it seems like it's incentivizing good behavior. Likewise, someone
watching lots of soft porn on YouTube will be recommended more soft porn.

The algorithm isn't responsible for helping you make good life choices. The
algorithm is responsible for recommending videos that you would like, and it
seems like it does a good job of that, generally.

Unfortunately, some people like bad things, and that's an age-old problem
that is hard to fix.

That said, it would be nice if users could CTOA (choose their own algorithm)
instead of letting Google be the sole gatekeeper.

~~~
bubblewrap
I guess you could compare it to criminals using the telephone. The invention
of the telephone helped a lot of people, but unfortunately it also helps
criminals.

Likewise, the YouTube algorithm helps many people, but criminals or unwanted
people (like pedophiles) can also use it.

It's OK to think about ways to prevent that, but I don't think it should be
the first concern. Imagine outlawing telephony because criminals could
benefit from its use.

~~~
DanBC
Telephony is an interesting example because for many, many years it had very
tight restrictions.

Telephone service providers were monopolies, sometimes government monopolies.
There was only one type of telephone you could use, and that was supplied by
that same monopoly. It was illegal to attach anything else to the line either
directly or indirectly. There were even (outside the US) laws on what you
could say when talking on the phone.

Here's an article from 1994 about a modem maker who had products that were not
officially licensed to connect to the network.
[https://www.newscientist.com/article/mg14219263-000-technolo...](https://www.newscientist.com/article/mg14219263-000-technology-
cut-price-challenge-by-modem-maker/)

------
jerrac
Two ideas come to mind. First, make the engine recommend a few videos that it
thinks you probably won't watch. That could help break up the echo chamber
effect.

Second, allow users to blacklist, or whitelist, different kinds of content. If
someone is struggling with sexual attraction to minors, let them blacklist
content with minors in it. If I don't want to see the latest anti(or
pro)-<insert political figure here> videos, I should be able to filter them
out. I have no interest in Minecraft, so why should I have to keep scrolling
past Minecraft videos just because I watch a lot of game related videos?
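
A minimal sketch of the first idea - occasionally surfacing something the
ranker scores low, to break up the feedback loop. The candidate list and
the explore probability here are hypothetical:

    
    
        import random
        
        def pick_recommendation(ranked, explore_prob=0.1):
            """`ranked` is best-first; usually return the top item, but
            occasionally pull something from the bottom half instead."""
            if random.random() < explore_prob:
                return random.choice(ranked[len(ranked) // 2:])
            return ranked[0]
        
        ranked = ["more_of_the_same", "adjacent_topic", "long_docs", "minecraft"]
        print(pick_recommendation(ranked))  # usually "more_of_the_same"
    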

That said, all the calls for regulation or censorship concern me. I haven't
seen the video, but Steven Crowder saying mean things isn't exactly
something that should be censored, any more than all the videos calling
President Trump names are. What I'm seeing signs of is a society that
silences any speech that doesn't fit in a specific, politically correct box.
And that box is being defined by advertising companies that don't want to be
associated with topics their potential customers find uncomfortable. That's
not a direction any of us should support...

------
bubblewrap
There seem to be some extreme cases where the algorithm fails. That doesn't
imply that it doesn't work well in general.

This sounds, again, like hyperbole from the NYT.

I find it more interesting to consider what would actually be a good outcome
for the viewers. I suppose originally all those recommender algorithms simply
optimized for viewer engagement. Obviously that may not be the best outcome
for consumers. Perhaps enraging content makes people stick on a platform
longer, for example. But it would be "better" for a viewer to see more
educational content and even to disconnect after a certain while.

But how would you even quantify that, for the algorithm to be able to train
for it?

The son of a friend of mine taught himself programming from YouTube videos,
which YouTube had recommended to him. I wouldn't complain about a result like
that.

------
cyrksoft
Big Media is dying and they are desperate to shut down competition. This is a
political article and nothing more.

------
kodz4
It's not complicated.

They need to stop showing people the upvote and view COUNTS. Behind the scenes
they can still use it to make recommendations.

Those numbers are pseudo signals of quality to people who encounter content
they have never encountered before.

Even when they have doubts that they are watching something unhealthy, the
mind goes, "Well, if the rest of the world thinks this dumbass is important,
I'd better pay attention..."

If a dumbass hurting people on video gets 10 million views, other dumbasses
worldwide automatically get triggered looking at the count: "Hey, I can do
this, maybe I should run for President..."

Remove the counts and you remove the pseudo signal of quality.

~~~
verall
It is complicated. I think that's a bad solution.

I want to see the counts. I feel it is far more transparent to see the
counts than for things to just be surfaced or not, opaquely. YouTube is not
a discussion site and does not work as one. How popular things are is part
of the context of pop culture, and most YouTube content is pop culture.

~~~
cheez
Every single day, I watch the channel of a guy who has put out <15-minute
videos going back to nearly the founding of YouTube.

He gets an average of 10-15 views per day.

The value this guy adds to my day is literally measurable in $$$.

If I could find more people like him, that would be great, but instead these
are my recommendations:

    
    
        - 5 ways to do X
        - Bill Gates breaks down blah blah blah
        - Something about Tesla
        - One video by a guy I discovered outside of YouTube who is similar to the guy I watch every day. I don't watch this one that much though.
    

YouTube's algorithm is not designed for discovery. It's designed for
engagement. So I keep separate accounts:

    
    
        1. Account for actually useful stuff where YT's recommendations are useless
        2. Account where YT's recommendations are OK: white noise like things. Howard Stern interviews, etc
    

I wish you could configure the algorithm for discovery somehow.

~~~
Nasrudith
Of course it is a matter of metrics - it has no way of knowing what is
useful. The closest way to discover that algorithmically (outcomes over
time) would be prone to spurious correlations, and it would be so intrusive
it would make Cambridge Analytica look like LavaBit.

~~~
cheez
I'm thinking "make things more discoverable" than "find more useful things" if
that makes sense. I'm willing to wade through it myself if you present me with
options.

------
CountHackulus
To quote Jim Sterling, YouTube has a YouTube problem.

------
Buge
> PewDiePie, a skinny, fast-talking Swede

Is he really fast-talking? He seems kind of slow-talking to me; when I watch
his videos I use 2x speed.

~~~
dredmorbius
"Fast talker" is not merely descriptive but idiomatic:

[https://idioms.thefreedictionary.com/fast+talker](https://idioms.thefreedictionary.com/fast+talker)

[https://www.logicallyfallacious.com/tools/lp/Bo/LogicalFalla...](https://www.logicallyfallacious.com/tools/lp/Bo/LogicalFallacies/46/Argument-
by-Fast-Talking)

Cf: fast and loose:

[https://idioms.thefreedictionary.com/play+fast+and+loose+wit...](https://idioms.thefreedictionary.com/play+fast+and+loose+with)

------
not_that_noob
YouTube also leads you down radical rabbit holes, as that keeps the
algorithm happy. How many of the recent terror attacks (ISIS or New
Zealand-type incidents) were fostered by YouTube watching?

~~~
bubblewrap
My guess would be zero.

------
alt_f4
yet another NYT anti-tech hit piece

------
annadane
It's pretty interesting to see the recent increase in comments defending
various companies. Not _saying_ there's a conspiracy, but it's a bit weird.

~~~
wtracy
Hacker News has always been like this. The site is specifically targeted at
people associated with venture-capital-funded startups.

A very large percentage of those people are former FAANG employees, employees
of companies trying to get acquired by FAANG, or employees of companies doing
exactly the same things that FAANG are getting criticized for. (Some are all
three!)

I doubt any of them are getting paid to AstroTurf, but they know what side
their bread is getting buttered on.

~~~
dang
> The site is specifically targeted at people associated with venture-capital-
> funded startups.

Quite un-so! HN is targeted at one group only: the intellectually curious. So
curiosity—keeping HN interesting—is the only thing we care about and optimize
for.

As for VC-funded startups, since only about 10% of HN users are located in
Silicon Valley, the number working for such startups is certainly far less
than that. 50% of HN users are outside the US. This community is orders of
magnitude more diverse than your image of it here.

People work for many different employers. Are their views impacted by their
work experience? Of course they are—for every user here. There's nothing
sinister about that. If someone posts a view that's wrong, others will correct
it. As long as they do so thoughtfully, that's great. The cycle of life on HN.

~~~
dlivingston
20% of American users on HN are located in the Valley? That seems insanely
high. Can you cite references?

~~~
dang
[https://news.ycombinator.com/item?id=16633521](https://news.ycombinator.com/item?id=16633521)

~~~
dlivingston
I stand corrected. Thanks!

~~~
dang
Yours is the first comment I've seen that expressed surprise it's so high. The
most widespread perception is that HN users are largely based in SV. That has
led to a lot of misunderstandings.

------
ilikehurdles
Google absolutely can do all of those things without an algorithm. What they
can't do is accomplish that without impacting profit margins (or at the
minimum, executive bonuses). "If it impacts business as usual, then it is
impossible" is a naive/flawed/libertarian stance.

~~~
xondono
You do realize that to cover current needs (400 h uploaded every minute),
YouTube would need to employ more than 72,000 people working full time,
right?

~~~
bjourne
But 99.9% of all videos uploaded never get more than a few handfuls of
views, so those are irrelevant. Of the remaining 0.1%, you don't need to
watch every second of every frame - speeding through at twice the speed
should be doable. So by your own calculations, that's 72,000 * 0.001 * 0.5 =
36 people working full time.
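
A rough check of the staffing arithmetic in this subthread (the 400 h/min
figure is from the comment above; the 56-hour work week is an assumption
chosen to roughly reproduce the 72,000 figure):

    
    
        video_hours_per_week = 400 * 60 * 24 * 7   # ~4,032,000 hours uploaded
        reviewer_hours_per_week = 56               # 8 h/day, 7 days (assumed)
        print(video_hours_per_week / reviewer_hours_per_week)  # 72000.0
        
        # bjourne's adjustment: review only the ~0.1% of videos that gain
        # traction, at double playback speed.
        print(72_000 * 0.001 * 0.5)                # 36.0
    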

~~~
xondono
You can set that 0.001 factor as high or as low as you like, but then we’d
get the same NYTimes hit piece saying this is _intentionally_ being done by
humans.

------
emilfihlman
I mean, PewDiePie's info is rather public, but what's with the need to "dox"
him right at the beginning?

------
0815test
Quite true, but let's not pretend that Twittr, Tumbler and Fakebook aren't
also "incenting" all sorts of distorted behaviors of their own! These sites
are "algorithms" all the same, even if the workings of these algorithms are in
some ways more transparent. We need open and widespread federation via
technologies like Mastodon, Matrix and ActivityPub, so that if you don't like
one "algorithm" you can easily switch to another that's more appropriate to
your use case.

~~~
nkozyra
> We need open and widespread federation via technologies like Mastodon,
> Matrix and ActivityPub, so that if you don't like one "algorithm" you can
> easily switch to another that's more appropriate to your use case.

This always sounds good, but decentralized tech is nearly impossible to
commoditize or make appealing to the general public. Outside of evangelism
and word of mouth, how are people going to escape the YouTube advertising
budget and instead choose - en masse - the product that is better for their
privacy?

There's just so much money and inertia to fight.

~~~
swiley
YouTube removing harmless content over copyright claims, etc., is one way.

------
jiveturkey
Wow, I'm conflicted. First, an obvious idiot statement, which helps us ground
our analysis:

> Human intuition can recognize motives in people’s viewing decisions, and can
> step in to discourage that — which most likely would have happened if videos
> were being recommended by humans, and not a computer. But to YouTube’s
> nuance-blind algorithm — trained to think with simple logic — serving up
> more videos to sate a sadist’s appetite is a job well done.

So this person is advocating that a human (i.e., _another_ human besides
oneself, an employee at YouTube) have access to the click stream of
individual users? This proposal, in 2019??? Of course this would have to be
compulsory to be effective. Why would I want a megacorp making moral
decisions for me? I'm OK with it making amoral algorithmic decisions.

The author is generalizing the problem of YT Kids, which should be
human-curated, to all of YouTube.

OTOH, yeah, feeding our worst impulses is kind of a problem. But tweaking
the algorithm isn't the solution; the machine itself is designed to thrive
on attention.

