
Social Media Has Failed Its Self-Driving Test - signor_bosco
https://www.bloomberg.com/view/articles/2017-11-08/social-media-has-failed-its-self-driving-test
======
jerkstate
Maybe the author is right - we can't rely on YouTube, Facebook, and AI to
police these things; maybe it is the job of us parents to control what our
kids see and hear.

There are thousands upon thousands of rendered, cartoon, and live-action
videos featuring popular children's characters like Spider-Man and Elsa that
have extremely disturbing themes dealing with punishment,
handcuffing/restraint, dismemberment, injections, bodily waste, etc. They seem
to get more and more unusual as autoplay goes on. As a parent I find them
extremely unnerving; I'll certainly never let young children watch YouTube
unattended, and I warn every parent I know about them.

There are places for curated content and there are content bazaars. YouTube
Autoplay is a bazaar. Facebook is a bazaar. Recognize where you are, the
motivations of those putting things in front of you and your family, and act
accordingly.

~~~
azangru
Is there any reliable body of evidence about what happens to children if they
watch content that their parents find unnerving? Do they become sociopaths?
Drug addicts? Neurotics? Criminals? Is there any real need to control what a
child can and cannot see?

~~~
tb303
I don't think you are familiar with the content. As a parent, I am. We've
reported them to YT and they don't act. These are videos that pretend to be
about Spider-Man or Peppa Pig but involve blood, violence, sexual themes,
dismemberment, etc. I understand what you are asking, but it's irrelevant
here; this isn't an intellectual exercise where we decide "hey, this isn't so
bad after all."

~~~
megaman22
Sounds like Robot Chicken. Fortunately Cartoon Network firewalls off Adult
Swim with an hour or so of King of the Hill or other bland, boring material.

~~~
astura
No, Robot Chicken is a show intended for an adult audience and meant to amuse
adults.

This garbage on YouTube Kids is aimed at getting children to watch it and
intended to be disturbing to children. The titles say things like
"educational, learn numbers" and the channels are called stuff like "Kids TV."

------
edgarvaldes
The first paragraphs in the app description[1]:

> YouTube Kids has tons of fun and educational videos that are just right for kids. There’s also a whole bunch of parental controls that let you create an experience that’s just right for your family.

> CONTENT ON YOUTUBE KIDS

> Our app is designed to filter out inappropriate videos for kids, but no system is perfect. If a video that’s inappropriate shows up, you have the power to block it, flag it, and bring it to our attention for fast review.

So, it attempts to be "right for kids" and "filter out inappropriate videos".
Is that working? I'm not quite sure.

[1][https://play.google.com/store/apps/details?id=com.google.and...](https://play.google.com/store/apps/details?id=com.google.android.apps.youtube.kids&hl=en)

~~~
mabub24
This essay[1] by James Bridle points out that YouTube Kids' algorithmic
filtering is failing. Thousands (maybe millions) of videos containing violent
and disturbing imagery are uploaded, tailored to match the search inputs of
children. The entire article is quite interesting.

[1]: [https://medium.com/@jamesbridle/something-is-wrong-on-the-
in...](https://medium.com/@jamesbridle/something-is-wrong-on-the-
internet-c39c471271d2)

~~~
replicatorblog
The linked article and NYT report would really benefit from some statistical
reinforcement. Some of the videos the two posts show are indeed disturbing,
but how many of them are there, actually? Bridle's point was almost more
aesthetic, pointing out the crudity of the animation, which is a widespread
problem.

That said, there is a MASSIVE difference between a poorly animated earworm
like the "Finger Family Song" and Chase from Paw Patrol being decapitated in a
strip club, which is a rarity.

~~~
toss1
It has also been repeatedly pointed out that the YouTube reporting system for
taking down inappropriate content consistently fails.

A plausible scenario for how it fails: they assign a few human reviewers and
set the threshold of reports required for review high enough that those humans
aren't overworked. Then they make the erroneous assumption that their staffing
for the task is adequate and that items falling below the reporting threshold
must be fine. Wrong.
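
A minimal sketch of how that kind of triage creates a blind spot (hypothetical
numbers and logic in Python; YouTube's actual system isn't public):

    # Hypothetical threshold-based report triage -- NOT YouTube's real
    # system; every number here is made up for illustration.
    REVIEW_CAPACITY = 10_000  # videos the human team can check per day (assumed)

    def pick_threshold(report_counts: list[int]) -> int:
        """Raise the report threshold until the queue fits the team."""
        threshold = 1
        while sum(c >= threshold for c in report_counts) > REVIEW_CAPACITY:
            threshold += 1
        return threshold

    def review_queue(report_counts: list[int]) -> list[int]:
        """Return the indices of videos that a human will actually see."""
        t = pick_threshold(report_counts)
        # Everything below the threshold never reaches a human and is
        # implicitly treated as fine -- the erroneous assumption above.
        return [i for i, c in enumerate(report_counts) if c >= t]

If report volume grows faster than headcount, the threshold only ratchets
upward, so ever more reported content is silently presumed fine.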

------
mattkevan
I’m repeating much of what I wrote on a previous thread about the same topic
as I think it’s relevant here too.

I believe the discussion here about parenting is a diversion from the real
issues at play.

This is not a ‘won’t somebody think of the children’ moral panic. Instead,
we’re feeling a foreshock of the problems we’re going to experience, caused by
the unintended consequences of the systems we have built - and what that means
for us as a society.

YouTube videos for kids are a lighthearted entry to the subject - we could be
discussing news, porn, education, whatever, and the problems and implications
would be broadly the same.

The key issues raised here are:

* The ‘delamination’ of content and author, and how that affects the awareness and trust of its source. If, for example, a scientific paper and a bunch of woo are presented in the same way, who can tell which is more legitimate? ‘Just teach critical thinking’ or ‘it’s called parenting’ are not acceptable solutions, as by the time enough people have been taught to provide herd immunity we will have long since succumbed to this pandemic of bullshit.

* ‘...the impossibility of determining the degree of automation which is at work here’. If both humans and machines are creating content tailored for every possible niche, interest, fetish and keyword combination, and algorithmic personalisation makes it possible to exist entirely within our own personal tag clouds, what does it mean for us as a society, which requires a basic set of shared values to function?

Junk content and the pandering to base instincts is not new, but our ability
now to automate the creation and dissemination of such content to pander to
every possible interest and combination of interests at vast scale is new -
and we do not yet have the cultural toolkit to deal with it.

I wonder whether in a few years’ time, once we’ve really felt the impact of
all this, people will look on today’s enthusiasm for putting ‘social’ in
everything with the contempt and horror we reserve for last century’s
enthusiasm for putting radium in everything.

------
hosh
"Anyone who has ever given an iPad to a small kid knows the kind of thing
children find on YouTube before they're able to type: Toy unboxing and nursery
rhyme videos, official and pirated cartoons featuring popular characters like
Peppa Pig. It's up to parents, of course, if they are okay with their child
getting engrossed in these (we took the iPad away from our four-year-old
daughter because we noticed consuming the content made her reluctant to learn
to read and irritable when the tablet wasn't within reach). But the stuff
Bridle found was arguably worse than what I'd seen before my wife and I made
the decision."

Something Neal Stephenson remarked on in The Diamond Age: this kind of tech is
not a substitute for parenting.

------
upofadown
Sort of like complaining that romance novels provide an inaccurate model of
human relationships...

Social media (even YouTube) is mostly intended to entertain. If it turns out
to be bad at other stuff, well, that just means we have to create systems that
are good at that stuff. Super popular web sites are not going to be any more
socially constructive than the sort of magazines you find in the checkout
aisle of a grocery store. Nothing has really changed here.

------
replicatorblog
Regulating tech co's is one approach; another is trying to create new social
norms around the use of these platforms. My Facebook feed is almost entirely
pictures of friends' babies and announcements about local events. That's
because I've curated my feed and deliberately not liked any news sources. Some
friends and family share their opinions and stories, but even during the
election, it was pretty limited. It's tempting to create a new federal
apparatus to regulate these tools, but what's really lacking is the common
sense found at the dinner table: "No talk of politics or religion."

------
d1zzy
It's interesting reading the comments to see how one proposed "humanly
curated" alternative (PBS Kids) has been discarded as "not being free", yet
people are up in arms that YouTube Kids is not using humans to curate its
videos. These two requirements are contradictory. I don't know if it's even
profitable to have a lot more humans reviewing YouTube Kids videos while
keeping the service "free" (ad-based), but even if we assume it may still be
profitable, it's pretty clear it would be significantly less profitable than
it is now. That means Google (a company not known to shy away from shutting
down services with tons of users) may simply shut it down and move on to
something else with higher profit.

If the market/technology existed for someone else to do a better job, there
would be many alternatives; it's not like YouTube Kids has some lock-in
mechanism.

PS: the self-driving moral-decision comparison in the article seems baseless.
I very much doubt anyone developing self-driving algorithms is considering
"moral decisions"; there is no moral decision to be made. The car will try to
stop (in its lane) as fast as it can if it detects an obstacle. It won't try
to swerve (after performing some imagined moral-decision algorithm), because
that would be illegal and it potentially increases the liability of the
software manufacturer.

------
neilmock
The assumption in all articles of this variety is that media consumers are
either incapable of or not motivated to evaluate the veracity of content
placed in front of them. This, rather than the ethics of 'media company X',
seems to be the issue worth considering.

~~~
yaur
The target consumers in this case are small children; that they are incapable
of evaluating the content placed in front of them is exactly the point.

~~~
indubitable
No, it's not. The author used the blog post as supplementary material, but
like Steve Jobs, the author also had the common sense to take away his child's
access to electronic devices for reasons beyond the content. More generally,
when somebody is too young to consent to something, the decision then rests
with the parent. As the author alludes to, this is mostly done just to keep
the kids quiet rather than to enrich their development. I think the content
here is far less concerning than this increasingly common 'tablet-babysitter'
behavior.

------
malvosenior
I don’t understand the point of this article. I found Bridle’s original piece
to be much ado about nothing. He basically complains about algorithmically
generated content being too “weird” for kids, then does English-lit-style
analysis looking for malicious undertones in said content. I didn’t agree with
it at all, but at least it was an original blog post that was trying to make a
point.

This Bloomberg piece on the other hand simply links to Bridle’s blog and says
“Yeah! What that guy said!”. _Really_ thin “content” imo.

------
gkya
> we took the iPad away from our four-year-old daughter

Not to single out the author personally, but people use tablets and phones as
standby buttons for their kids. However active the kid might be, they get
paralysed watching cartoons and whatnot. In a recent interview, a doctor from
a working-class neighbourhood where I live described how excessive use of
these devices is causing kids to suffer from under-developed motor skills
(Evrensel.net on the _Esenyalı_ neighbourhood).

------
jdietrich
As of 2015, there were 576,000 hours of video uploaded to YouTube per day. As
of 2014, Facebook users generated 4 petabytes of data per day - four billion
gigabytes of photos, videos and text. Does anyone believe that it's feasible
to manually review even a significant fraction of that content?
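
To make that concrete, a back-of-envelope calculation (assuming, purely for
illustration, one reviewer watching footage at normal speed for an eight-hour
shift):

    # Back-of-envelope: staff needed to watch every YouTube upload at 1x speed.
    hours_uploaded_per_day = 576_000  # the 2015 Statista figure cited below
    shift_hours = 8                   # assumed full-time viewing shift

    reviewers = hours_uploaded_per_day / shift_hours
    print(f"{reviewers:,.0f} full-time reviewers")  # 72,000 -- before breaks,
    # weekends, appeals, or Facebook's 4 PB/day of other content

That's 72,000 people just to keep pace with the video, never mind judging it.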

Social media is media, but it's also social. A lot of people are trying to
apply the standards of traditional media to something that's fundamentally
different. I'm not sure where on the spectrum social media lies between
"private conversation" and "network TV", but I don't think that we can exactly
apply the social norms of either.

The vast majority of people would agree that some degree of monitoring and
censorship is necessary for the prevention of crime. Beyond that, I think we
simply have to accept that some amount of content on social media might offend
us.

Social media is created by people with a very diverse range of beliefs,
opinions and standards of decency, so it's unlikely that we'll establish a
universally agreeable set of standards. Social media companies have a limited
ability to control the content on their sites reactively or via algorithms,
but it's logistically impossible for them to consistently enforce their terms
of use across all content.

Many voices in the traditional media would have us believe that occasional
offence is a fundamental flaw in social media, but I think that's a bogus
narrative.

[https://www.statista.com/statistics/259477/hours-of-video-
up...](https://www.statista.com/statistics/259477/hours-of-video-uploaded-to-
youtube-every-minute/) [https://research.fb.com/facebook-s-top-open-data-
problems/](https://research.fb.com/facebook-s-top-open-data-problems/)

------
OCASM
If you don't want your children to be addicted to that content then don't give
them access to it in the first place.

Maybe people willfully choose fake news on Facebook because they're tired of
the fake news the mass media broadcast.

"Oh but those are wrong choices and somebody needs to put a stop to it!"

This is just a push for authoritarianism. No thanks.

~~~
dmschulman
Your post history would suggest you're a troll, so I'm downvoting this
comment; additionally, you're conflating a bunch of complicated ideas and
making a broad generalization.

We live in a market-driven society; people are free to choose what they want
to consume (as you point out), but that doesn't give the producers in the
marketplace carte blanche to do whatever they want in the name of ad revenue.
When things reach a tipping point, as I think we're seeing with this Medium
article, regulation starts to come to a niche industry that had previously
operated under the radar. Especially when human health consequences are
involved, the government steps in to protect its citizens (and its most
vulnerable members, children).

If you really are upset that you won't be able to view algorithmically
generated children's cartoon videos stuffed with tropes and keywords that
exist for no other purpose than to make money, I'm concerned for your
wellbeing.

~~~
OCASM
So personal responsibility be damned. Let's just have the government determine
what is good for us and what is not.

~~~
AlexandrB
I'd rather an elected government do it than an unaccountable multinational
corporation. What do you think "relevance" algorithms are, other than some
company determining what is good for us and what is not?

Edit: P.S. Where was the personal responsibility of Facebook after they were
caught peddling propaganda placed by foreign actors? As I recall it took a lot
of arm twisting to get them to admit there was even an issue.

~~~
OCASM
The difference is choice. Companies can offer whatever they want and it's up
to me to decide whether I consume their products and services or not.

Government on the other hand makes the decision and forces it on everyone. Not
just those who elected it, everyone.

Edit: as for the foreign propaganda, how many nations were doing it, and what
interests were/are they peddling? If it's already illegal, then sure, it
should be forbidden from the network. If it's not, then I'm fine with it.

~~~
AlexandrB
> The difference is choice. Companies can offer whatever they want and it's up
> to me to decide whether I consume their products and services or not.

Please, tell me how I can escape Google? God knows I've tried - I have a
Fastmail email account, I use DDG, and I stick to Apple hardware. That doesn't
change the fact that >90% of the advertising I see on the open web is still
"suggested" for me by Google's algorithms, that my employer uses Google Apps
and Google Drive for just about everything, that many links I encounter on
social networks are AMP, and that most of the time I need to verify I'm not a
bot I'm filling in a reCAPTCHA.

Saying I have a choice not to use Google is only true in the academic sense.
It's theoretically possible, but not without extreme effort and probably not
without finding a career outside of the tech industry.

~~~
OCASM
You can choose to switch jobs and not use the internet :p

What you don't have is the power to make others choose what you'd want them
to.

------
indubitable
I find it quite ironic that the author at one point complains of misleading
tags being used in titles to generate clickbait (or 'searchbait') in an
article that awkwardly shoe-horned "self-driving" into a couple of sentences
to artificially generate additional clicks. It's currently the "top story" on
Google from Bloomberg for "self driving."

Actually, ironic is not the most accurate word. It's _telling._ This is
another article that is essentially a call to censorship. And what's to be
censored? Well, like most of these articles, that's not really discussed
beyond whatever the author happens to disagree with. You see, jamming in
misleading tags to produce hits isn't a problem when it benefits the author or
their publication, but when others do it? Oh, it's time for a _serious
conversation_ now.

------
dogruck
The only solution is to opt out. There’s no other solution.

That said, there is a role for government regulation. Specifically, to break
up monopolies such as Google.

Then, when there is proper competition, you have some freedom to choose the
platform that’s policed by algorithms that you agree with.

------
cryptoz
The article ends with

> A step back to assert human control -- even if it cuts into the tech
> companies wide profit margins -- is overdue indeed.

The author seems to think these problems cannot be solved with technology.
Maybe they think that because otherwise Facebook, Twitter, et al. would have
solved it already?

I think it is far more likely that our society directly incentivizes these
companies _not_ to solve the problems. That is why they are not solved: we
lavish them with money to leave them unsolved.

Legislate that political news must be regulated. Legislate that companies that
exert social control over a population, like Facebook, must do so responsibly.
Don't just tell them that humans must curate all content. That won't work.
Humans are too expensive and corruptible anyway. The solution is structural in
society just as the problem is. Take away the profit incentive for big tech
firms to cheat and feed lies, and our society will improve directly in line
with what the author probably wants.

~~~
ekidd
> _Legislate that political news must be regulated._

This is not as simple as it sounds, of course, because the _regulators_ would
also have plenty of incentives for bad behavior.

I actually think that government regulation can be tremendously useful in
certain circumstances. But it's not a magic solution, and you have to be
clever about it. In the particular case of political news, the potential
downsides of giving too much power to regulators are at least as dangerous as
giving too much power to Facebook or Google.

------
jcoffland
Where are these examples of poisonous kids cartoons?

------
tschellenbach
Filter bubbles have always been an issue. This is not a new problem.
Traditional media has the exact same problem.

~~~
dang
The trouble with a comment like this is that, by merely venting and not
teaching the reader anything, it's no better than what it's complaining about.

It's extra work, of course, to put substance behind what you're saying, and
that isn't always an option. But refraining from posting is.

------
mherdeg
I wonder to what extent this YouTube Kids stuff represents a "moral panic" --
oh no, look at the depraved things our children are watching! And they are
TEACHING an ALGORITHM to learn that LIVE CHILD ACTORS should be made to act
out even more depraved versions of these things to get more page views!

Versus a real problem -- oh shit, we accidentally created a brand new Internet
where there are no more editors, everyone is free to find their own content
democratically, hooray, but some of the people we put in charge of creating
that content are hyperactive amoral entities.

I know a bunch of Facebook employees have this view of the whole "filter
bubble", "fake news" situation which is, more or less -- Deal With It. The
news feed algorithm is here to stay, it's the new printing press, editors are
obsolete, and everyone gets to decide their own truth now. That view is
evinced ambivalently at e.g. [https://www.wired.com/story/the-solution-to-
facebook-overloa...](https://www.wired.com/story/the-solution-to-facebook-
overload-isnt-more-facebook/) .

But I'm not sure how that view copes with the YouTube monetization model,
which ... starts with children's videos ... goes to ad revenue ... and leads
to children being videotaped apparently in real distress for money.

Here are 2 points of reference:

(1) YouTube channel "DaddyOfFive" had 750,000+ subscribers and featured what
was fairly clearly child abuse ("pranks"). Somebody finally thought it might
be a good idea to report this to authorities and the father lost custody of
some of the children. [https://www.washingtonpost.com/news/the-
intersect/wp/2017/04...](https://www.washingtonpost.com/news/the-
intersect/wp/2017/04/25/the-saga-of-a-youtube-family-who-pulled-disturbing-
pranks-on-their-own-kids/)

(2) The comment at
[https://news.ycombinator.com/item?id=14910125](https://news.ycombinator.com/item?id=14910125)
links to an article at [https://vigilantcitizen.com/moviesandtv/something-is-
terribl...](https://vigilantcitizen.com/moviesandtv/something-is-terribly-
wrong-with-many-kids-videos-on-youtube/) which provides some evidence that
toddlers' content preferences are influencing the creation of new content
which involves, as part of making the content, putting children in distress.
This seems to cause actual measurable harm to children.

The news.ycombinator comment said in part:

> The feedback loop is so tight that many of the creators have converged on
> designing some of the most visceral (and disturbing) material that appeals
> to kids at a reptilian level. Think about how genuinely weird little kids
> are. Sometimes they do stuff that would be scary if it wasn't just a minute
> of innocent pretend and a toy. Stuff like kidnapping each other or
> performing surgery on each other. Well, now there are hundreds of thousands
> of YT videos illustrating that kind of material in fine detail. Many of them
> have millions of views.

And the sample videos' captions include:

> Many videos feature the girls screaming or crying their heart out. In this
> video, this girl has something in her mouth that tastes horrible … And she’s
> forced to keep it in her mouth … And she doesn’t seem to be acting.

> Here, a girl takes something from the toilet and force feeds it to the other
> girl. She doesn’t like this.

> In another video, a creepy dude with clown makeup barges into the girls’
> house and starts grabbing them while the girls scream and attempt to resist
> him.

Is this wrong? Should this not be happening? Is this anything but a natural
effect of the 2010-2011 "filter bubble" and the 2016 "fake news" phenomena?

------
jorgec
Social Media has failed.

~~~
DrScump
Social Media has succeeded in its primary mission: to make money.

------
abtinf
TL;DR: the inevitable opinion piece using FUD about social media to make the
BUT THINK OF THE CHILDREN!!! argument against free speech.

~~~
dmschulman
Serious question: what kind of speech is being censored here? There is no
conceivable message behind these videos; it's a cash grab, and that is the
point of the article.

------
mftf
Perhaps the author does not understand the logistical nightmare of having
human moderators manually approve all content.

Further, the fact that YouTube, Facebook, et al. have failed to police content
to the author's satisfaction is not indicative of a failure in AI - especially
when these corporations are effectively incentivized by view counts and
advertising _not_ to remove these videos.

In short, YouTube's failure is not a valid indicator of the effectiveness of
AI in general.

~~~
Chaebixi
> Perhaps the author does not understand the logistical nightmare of having
> human moderators manually approve all content.

Pretty sure he does, from the OP:

> A step back to assert human control -- even if it cuts into the tech
> companies wide profit margins -- is overdue indeed.

The tech companies use these algorithms because they're cheap and good enough
_for their profit-making purposes_. If Google's algorithms can't keep _YouTube
Kids_ kid-friendly, then they need to hire human moderators to make up the
difference or kill the product (because otherwise it's false advertising).

