YouTube admits 'wrong call' over deletion of Syrian war crime videos (middleeasteye.net)
241 points by jacobr on Aug 18, 2017 | 138 comments

I think YouTube needs to consider backing off regulating political content.

The fact is politics and morality are inherently intermingled. One can use words like extremist, but sometimes the extremists are the "correct" ones (like our founding fathers who orchestrated a revolution). How could any system consistently categorize "appropriate" videos without making moral judgements?

Interesting that we have a whole vocabulary of words reserved for those on the "wrong side": insurrection, sedition, traitor, deserter, criminal, smuggler, terrorist, extremist, rebel, revolutionary, etc.

But from the "right side" of an ideology, those are the freedom fighters, visionaries, defenders, Founding Fathers, Underground Railroad, idealists, etc.

Controlling vocabulary is a very powerful tactic in politics, as illustrated in 1984. People respond in very predictable ways to certain words, hence their power.

Why doesn't Youtube remove this "extremist" content I wonder...


It's surprising to me how sanitized that footage feels. You watch a hundred people get killed, but it's not really striking a nerve the way a single beheading can.

That's probably why footage of the American military usually doesn't get taken down. Our military kills from a thermal view half a mile away.

US military spends a lot of time and effort on propaganda. Lines like "bring our boys home" are literally inserted into hundreds of movies. They also release a lot of this type of footage to portray combat as less horrific than it actually is.

Who is being killed in this video? The top comments (liked by the uploader) are vile.

It is removed now.

I just saw the first 20 seconds of it after your comment.

As I understand it, their "machine learning" is basically `grep "stuff in Arabic" | send to reviewers` in places like Tunis, Albania and Morocco (true, both Google and Facebook run their "AI flagging" shops there; blogs giving accounts of that are well known).

Agree, but it's far easier for politicians (May, Rudd, Trump, Turnbull, Putin, Netanyahu, Xi Jinping, Mugabe, Zuma, Chan-ocha, el-Sisi, Erdoğan, Khamenei, Maduro - the list is endless) to blame videos on YouTube for radicalising people than it is to tackle the long-running political, historical and socio-economic grievances that fuel the fire.

Not only politicians, unfortunately.


He did say that the list is endless.

Too bad my comment got flagged by HNers who disagree


I still support your free speech (right or wrong)


Do you think any rich, established British company supported the American Revolution? Why do you expect Google to help anything new or different, regardless of how "correct" it is? Big established players can't afford revolutions; too risky for them.

Rich, established French companies did, as did nations and people outside the British Empire.

Your comment would have to mean YouTube/Google is a Syrian company, not an American company, as it pertains to the topic at hand.

Your analogy simply fails on many levels. Further, your switch from British companies supporting the revolution to "Google supporting anything new or different" is nonsensical. Google supports all kinds of "new and different" things that will likely change the face of society in profound ways: AI, self-driving cars, and other moonshot programs.

No, I really do not understand what point you're driving at here.

>Rich Established French Companies did, as did Nations and people outside the British Empire.

Yes, because there was little or no risk for them. Google will happily side with the revolution in Bangladesh or Gambia, not in the US.

But we are talking about them taking down footage as it pertains to a revolution/civil war/conflict in Syria, not the US.

Did I miss something? Is there massive conflict in the US? Is there an active violent revolution happening? Because if you think the clashes that have occurred at protests between antifa and the Nazis equate to "violent revolution," you will be in for a shocking awakening if the unthinkable occurs and the US finds itself in an actual violent revolution or civil war again.

The last US civil war took the lives of 2% of the population; to put that into context, if it happened today that would be 6.5 million people killed.

No, the US is not involved in a violent revolution that Google needs to pick a side in, and you should pray that day never comes.

I would argue that the US is on the brink of such a revolution. The American Civil War happened because the Republican Party came out of nowhere and decided to free the slaves against the financial interests of the cotton industry. Currently, Trump came out of nowhere and has posed a collective threat to a lot of industries, including the military industry, the tech and manufacturing industries that rely on outsourcing and foreign employees, the remaining cleantech industries, and many others. There is every reason for these industries to support the opposition, and that is exactly what they are doing.

There is no analogy between President Trump being elected and what happened during the lead-up to the Civil War. That is absurd.

Further, the idea that Trump is a threat to the military, manufacturing, or many other industries is equally absurd. You may have a small case for tech, but I think that is overstated.

Trump is a HUGE supporter of the military, and by extension military contractors, wanting to increase military expenditure by a large margin.

Protectionist policies often aid US manufacturing at the expense of cheap imports from China, so US manufacturers, actual manufacturers making things in the US, not foreign importers who are often confused with "manufacturers," support Trump.

Sounds like you might be in an echo chamber...

>Trump is a HUGE supporter of Military and by extension military contractors

To really give the military a huge boost, they need more wars. You can't spend much on "defense" without ongoing wars or big threats... it maxes out pretty quickly. How will Trump justify more wars? When Bush ran the 2000 campaign, one of the messages was to get out of foreign wars. If 9/11 hadn't happened, the military industry would have suffered greatly. 9/11 happened, and every war was justified until recently because "terrorists".

Now the best prospects they have are North Korea, which is a joke, and ISIS, which the US and allies have created to defeat Syria. When they finally realize that none of these will fly, they will try to get rid of Trump and take matters to Russia and hence Syria again... which they have been trying since after he won.

>Protectionist policies often Aid US Manufacturing at the expense of Cheap Imports from China

One can't actually implement protectionist policies without being heavily biased towards big industries. If this happens, everyone, including people who side with Trump, will cry corruption. There is no way to do it right... and if you think Trump will be able to pull it off from a Republican platform, you are very out of touch. Even if the big players pretend that it is okay (because it doesn't affect them as much), it is definitely not okay, because they are forced to raise their prices when they are not allowed to do business as they would like to.

Aren't you always making moral judgments on non-political videos too? Which videos are "adult", which videos are scams, which videos are inappropriate, etc...

So I think the distinction here is that "what is porn?" (again, a deep question that the supreme court has addressed, considering how much nudity there is in art) does have a non-moral answer, even if it's arbitrary (such as whatever arouses 10% of people).

The other distinction is that with political speech, if you draw that line wrong, you hamper criticism of authoritarian regimes (for example). If you set the bar too tight on your porn filter, it's much less likely you're stifling political progress.

A decision not to decide is still a choice.

A decision to attempt to remain neutral should be carefully considered.

Sure, but whether the choice has any political content depends on whether YT considers themselves publishers or infrastructure. If they just want to be the world's video hosting site then they're not taking any political stance by staying out of it and just hosting what people upload.

They don't have to decide anything like that. They could prohibit cat videos for no reason tomorrow, and suffer whatever the consequences of that are.

They have chosen to try to stop ISIS recruitment by videos hosted on their site. That is their prerogative.

> I think YouTube needs to consider backing off regulating political content.

They can't. We've entered an era where a disagreeable tweet is - in the mind of some people - a good reason to dox you and destroy your life. This extends to companies. Your ad shows up next to an opinion piece that they disagree with? Time to start a #activist campaign to boycott your company and mention how you would probably sell your goods to Hitler and your board is full of white patriarchs who are probably racist, anti-Muslim, anti-immigrants, anti-women, and closet Neo Nazis.

The companies are also to blame since they've been mostly spineless. Save a few companies, they fold and pull their advertising the minute these #activists put them in their sights.

To an advertising delivery company like Youtube, which is probably barely at breakeven, losing advertising dollars is crushing. Over the years they've been slowly reducing how much money they bleed, so a reversal of the trend is grounds for alarm.

So YouTube has to either wean itself off advertising (virtually impossible, since Google is an advertising company) or make sure the content is advertiser-friendly.

I think you bring up an interesting point. Big corporate advertisers are demanding of YouTube the same predictability and sanity checks they have with traditional media buys such as television, print, etc. However, part of the appeal of a medium like YouTube lies in its unscripted, random, spontaneous, esoteric content. And these two interests seem to be at odds with each other.

> The fact is politics and morality are inherently intermingled.

> How could any system consistently categorize "appropriate" videos without making moral judgements?

Politics and morality are inherently intermingled, I agree.

Appropriateness and morality are the same concept.

I disagree with your last statement. A video featuring gratuitous violence is not appropriate for YouTube Kids but there's no moral statement there.

> A video featuring gratuitous violence is not appropriate for YouTube Kids

Why not? Answer without making a moral statement.

I'll try a utilitarian argument: because my younger daughter (13) has nightmares after seeing a violent video. (Not blaming YT for that, just saying that you can make a non-moral argument against violent videos in kids' channels.)

That argument will easily prove that all videos are inappropriate for Youtube Kids. An actual utilitarian argument would balance your daughter against the kids who liked seeing violent videos, and she would lose.

Furthermore, utilitarianism is a moral position. Why would you prefer a world where your daughter doesn't have nightmares over one where she does?

Your daughter would probably stop having nightmares if you gave her more exposure to that kind of video, too.

An alternative: advertisers who target children don't want to be included on videos that feature graphic depictions of violence, so violent (or overly sexual, vulgar, etc.) videos on YouTube Kids will be a net negative for YouTube.

I'm as much a proponent of automation as anyone else. But I think Google is trying to do something way too hard right now. By looking for "extremist" material, they are basically trying to determine the intention of a video. How can you expect an AI to do that?

Doesn't matter, it's already out in the wild. This year so far, tons of channels whose videos have had ads for years are being instantly demonetized without explanation. If even one word from a video's title or one tag is on their "controversial" shitlist, you're SOL. Knowing YouTube's track record with this stuff, they will continue to be silent and not give a shit.
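If demonetization really is keyed off a title/tag blocklist, as suspected above, the logic would be something as blunt as this sketch (the function name and every blocklist entry are invented for illustration; nothing here is YouTube's actual implementation):

```python
# Hypothetical sketch of the blunt matching the comment above suspects:
# demonetize when ANY title word or tag appears on a "controversial" blocklist.
# Blocklist entries are made up for illustration.

BLOCKLIST = {"war", "shooting", "extremist"}

def is_demonetized(title: str, tags: list[str]) -> bool:
    # Pool every title word and every tag, lowercased, then check for
    # any overlap with the blocklist.
    words = set(title.lower().split()) | {t.lower() for t in tags}
    return not words.isdisjoint(BLOCKLIST)
```

Under this model a single hit from either field is enough, which would explain how a Linux tutorial or a news clip gets swept up without any human ever watching the video.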

Content creators that don't produce content for 5 year-olds need to start looking somewhere else than YouTube.

Demonetization as a feature comes from the advertisers' desire, not from YouTube itself. Advertisers are very sensitive about what they will allow their brands to be associated with, and they demand the tools to configure their ad campaigns to not display on broad swaths of potentially offensive content. YouTube has had some huge issues with advertiser complaints of ads on offensive content in the past. If anything, YouTube should be praised for continuing to host demonetized content at a loss, since it's not making any advertising revenue for them.

People who make non-advertiser-friendly videos need to figure out some other kind of way to make their money. Patreon is excellent for that.

There are people who make Linux tutorial videos that were demonetized; it is not just "non-advertiser-friendly" political content.

Further, I do not buy the "not ad-friendly" BS. There are TONS of advertisers that would put ads (and have tried to buy ads) on some of those channels, but YouTube either refuses them as advertisers or does not give them the proper control to pick and choose which channels they want to put ads on. It just says your video is not ad-friendly and that is that; the advertiser has no input.

Finally, advertisers care about eyeballs. They really do not care about the content next to the ads UNTIL they get people complaining about or boycotting their product.

Today there is a small number of permanently and perpetually offended people, VERY vocal and VERY loud on social media, who have wielded undue influence over these YT advertisers with their threats of boycotts and outrage. These people have no respect for freedom of expression and desire nothing more than to shut down the speech of anyone they disagree with.

It would be nice if they let you buy ads on "demonetized" videos at an extreme discount. I'm sure many companies would be more than happy to get cheaper ads and aren't that sensitive to brand association concerns.

> There are people who make Linux tutorial videos that were demonetized

source on this? I'd like to read more

Joe Collins was one example I was referring to in my comment. YT has since re-monetized his videos, but here is the source where he expresses his frustration with YT, not just in the current round but over the years:


YouTube is the only option. Smart businesspeople (i.e. most channels over 500k or even 200k subs) have alternate sources of revenue, mainly sponsorships but also Patreon.

It's almost like you forgot Vimeo

I didn't forget Vimeo. YouTube is so dominant that, since everyone else is on YouTube, uploading videos elsewhere is career suicide. No one will switch websites or apps to watch just one creator's content.

One of the keys to YouTube's success is its sub feed. All the videos from all my favorite channels, all in one place. It's extremely convenient. People tend to take the path of least resistance.

But Vimeo isn't, and doesn't want to be, YouTube.

Oh god please no

I mean, it's also YouTube, so... who cares?

Yes, who cares about the world's (by far) largest and most popular video distribution site?

Because it's operated on the whims of a 3rd party whose primary concern is ad revenue. It's disappointing that historically important videos are being deleted but unless they introduce a new program for long term archival YT isn't really the place to store them.

But is Youtube really the right platform for such videos? It's a platform made to host videos in order to put advertiser's ads on them. When I think "raw war videos", I think of Liveleak.

For many users, Youtube is the only video publishing site, and many people get their news through YT. Very few people are aware of the existence of LiveLeak; it is an ineffective platform for spreading awareness.

Agreed. I don't click liveleak links if I don't feel like watching someone get murdered on camera. So, basically, I don't click liveleak links.

Would you prefer to stumble across those on Youtube?

It seems better that material which needs to be kept for history and remained uncensored due to advertising is on a site dedicated to that instead of a site which most people use for videos of recipes, memes, and great football headers.

This is a solved problem. Flag it and it will go behind a confirmation screen asking if you're really sure you want to view the video.

Yeah, there does need to be some sort of inhibition against highly traumatizing content; it should certainly not be promoted to people who do not seek it out. But purging news videos of statues being destroyed and ancient buildings being demolished is going too far.

YouTube is now what TV used to be. So yes, you should be able to show this content. However, the platform needs to provide better tools to users and producers to aid in categorizing content.

When did TV ever show content like this?

I saw plenty of different views of folks being injured or killed in recent protests on the television, I saw folks losing their lives in Nice recently on television, I saw a reporter shot point blank in 2015 on television, I saw an old man shot to death from the point of view of the killer earlier this year on television.

And that's not even counting war footage.

I'm not hunting this stuff down, the only protection against watching this crap is a half second pause after the reporter says "Warning, the following may be considered graphic."

The funny thing is: I don't own a television; I just occasionally go to a diner for breakfast, with whatever garbage news channel everyone in the place likes at any given time. Even brief casual glimpses at any given news channel are bound to get you an eyeful of gore or violence of some sort.

I remember graphic photos of Iraqi war crime victims on the evening news in '91.

But what happens if YT doesn't want to host this content? Do they have a moral obligation simply because of their size?

I don't think it's about morals, but more about user-base expectations. YouTube as a product is generally seen as a video archive. They do have the choice not to host it, but they risk a user exodus. Given the competition from Facebook, Snapchat, and the like, that's a risk they can't take. More so when it's Alphabet's only successful social network acquisition.

I don't think the problem is automation...

It's people's expectation for it to be perfect, and the egoic drive to blame someone when something goes wrong. There was no reason for the hype around this story... an AI determinator had a false positive. That's not Google attacking the videos; that's a technical issue, and it needs to have zero feelings involved because the entire process happened in a damned computer incapable of feelings...

But everyone needs to feed their outrage porn addiction...

It's not a technical issue. Software is not yet capable of accurate content detection, and even if it were, it's not clear whether this sort of thing should be automated. It's not like google can just change a few lines of code and the problem is gone.

> It's not a technical issue. Software is not yet capable of accurate content detection,

Your second sentence is a technical argument, which makes your first a lie. Obviously Google disagreed, which is why they put this system into place. And if they were wrong about that they were wrong for technical reasons, not moral ones.

I mean, you can say there's a policy argument about accuracy vs. "justice" or whatever. It's a legitimate argument, and you can fault Google for a mistake here. But given that this was an automated system it's disingenuous to try to make more of this than is appropriate.

If you just stare at the words and ignore my meaning, sure. But saying this is a technical problem is like saying that climate change is a technical problem because we haven't got fusion reactors working yet.

Then I don't understand what your words mean. Climate change is a technical problem and policy solutions are technical.

My assumption was that you were contrasting "technical" problems (whether or not Google was able to do this analysis in an automated way) with "moral" ones (Google was evil to have tried this). If that's not what you mean, can you spell it out more clearly?

Is there any problem you wouldn't frame as technical then? If the software isn't anywhere close to capable enough to do this task and YouTube decides to use it anyway that is a management problem. Otherwise literally every problem is technical and we just don't have the software to fix it yet

Sure: "Should Google be involved in censoring extremist content?". There's a moral question on exactly this issue. And the answer doesn't depend on whether it's possible for Google to do it or not.

What you guys and your downvotes are doing is trying to avoid making an argument on the moral issue directly (which is hard) and just taking potshots at Google for their technical failure as if it also constitutes a moral failure. And that's not fair.

If they shouldn't be doing this they shouldn't be doing this. Make that argument.

The software makes literally millions of correct calls every day, both positive and negative.

I'd say it's pretty capable.

Human raters are a fucking nightmare of inconsistency and bias. I'd guess this is more accurate at this point, and is only going to improve.
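"Millions of correct calls" can hide a base-rate problem, though. A quick back-of-the-envelope (every number below is invented for illustration, not YouTube's actual figures) shows how a classifier that is right over 99% of the time can still wrongly flag thousands of innocent videos a day:

```python
# Illustrative base-rate arithmetic: rare target class + huge volume
# means even an accurate classifier flags mostly innocent videos.

uploads_per_day = 500_000       # assumed daily upload volume
violation_rate = 0.001          # assume 1 in 1,000 uploads actually violates policy
true_positive_rate = 0.95       # classifier catches 95% of real violations
false_positive_rate = 0.005     # and wrongly flags 0.5% of innocent videos

violations = uploads_per_day * violation_rate        # 500 real violations
innocent = uploads_per_day - violations              # 499,500 innocent uploads

caught = violations * true_positive_rate             # 475 correctly flagged
wrongly_flagged = innocent * false_positive_rate     # 2,497.5 innocent videos flagged

precision = caught / (caught + wrongly_flagged)
print(f"flagged: {caught + wrongly_flagged:.0f}, precision: {precision:.1%}")
```

Under these assumptions, roughly five out of six flagged videos are innocent, even though the classifier's overall error rate is tiny. Which is exactly why the human review step after the flag matters so much.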

I would argue climate change is a political problem.

Policy solutions are political.

A policy is a deliberate system of principles to guide decisions and achieve rational outcomes. A policy is a statement of intent, and is implemented as a procedure or protocol. - https://en.m.wikipedia.org/wiki/Policy

If you believe climate change is a technical problem then there isn't much point continuing this discussion. Using that logic you could claim that any problem is technical because everything is driven by the laws of physics.

The point is, there will be false positives; there is no reason to get upset and hurt over them...

There is no perfect system. If it's automated, there will be false positives (and negatives); if there is a human involved, you have a clear bias issue; if there is a group of humans involved, you have societal bias to deal with...

There is no perfect system for something like this, so the best answer is to use something like this that gets it right most of the time... then clean up when it makes a mistake. And you shouldn't have to apologize for the false positive; people need to put on their big boy pants and stop pretending to be the victim when there is no victim to begin with...

This is the exact same argument for "stop and frisk", and that is just totally NOT OK.

It's not the exact same argument because stop and frisk is not automated.

It isn't the same process being defended, but I clearly didn't claim that: the argument used to defend the different processes, however, is the same. This "put on your big boy pants" bullshit is saying that people should accept any incidental harassment because false positives are to be tolerated and no system is perfect, so we may as well just use this one. If the false positives of a system discriminate against a subset of people (as absolutely happens with these filters, which end up blocking people from talking about the daily harassment they experience, or even using the names of events they are attending, without automated processes flagging their posts), then that is NOT OK.



That's exactly the OPPOSITE of stop and frisk.

1) Stop and frisk is BIASED heavily on race because it's a HUMAN making the choice...

2) Stop and frisk is the GOVERNMENT, and therefore actually pushes up against the Constitution.

How do you see these things as remotely the same?

The false positives are not random: they target minorities; these automated algorithms designed to filter hate have also been filtering people trying to talk about the hate they experience on a daily basis. They keep people from even talking about events they are attending, such as Dykes on Bikes. It is NOT OK to tell these people to "put on their big boy pants" and put up with their daily dose of bullshit from the establishment.



On one hand, one problem with automated systems is that they're perfectly happy to encode existing biases.

On the other, the Alphabet family don't have the support systems to clean up when they make a mistake.

Your whole premise is wrong, because the final decisions were made by humans. But even if they weren't, you're still mistaken. If you write a program to do an important task, it is your responsibility to see that it's both tested and supervised to make sure it does it properly. Google wasn't malicious here, but it was dangerously irresponsible.

From the article:

"Previously we used to rely on humans to flag content, now we're using machine learning to flag content which goes through to a team of trained policy specialists all around the world which will then make decisions," a spokeswoman said..."So it’s not that machines are striking videos, it’s that we are using machine learning to flag the content which then goes through to humans."

"MEE lodged an appeal with YouTube and received this response: 'After further review of the content, we've determined that your video does violate our Community Guidelines and have upheld our original decision. We appreciate your understanding.'

Humans at YouTube made the decisions about removing videos. Then, on appeal they had a chance to change their minds but instead confirmed those decisions. Then, because of public outcry, YouTube decided it had been a mistake. 'The entire process happened in a damned computer incapable of feelings' is inaccurate.

Read the article. It says that humans made the final decisions.

[Disclaimer: I'm not a youtuber, so my knowledge is only 2nd hand]

Aside from but related to this story, many people are making a living off of YouTube ad revenues, and the AI is unpredictable in how it will respond in terms of promoting your video content on the front page, as links from other popular videos, and so forth. I think it's also unknown how the AI categorizes the content appropriateness of videos to advertisers, which if categorized the wrong way leaves your stuff unmonetizable.

Basically, people are throwing video content up, but have no way to properly have a feedback loop to gauge whether or not they violate the "proper" protocols that the AI rewards. This really is a problem of automation using (presumably) trained statistical rules where nobody really knows what specifically influences the decisions about their videos.

It is people's expectation that it be perfect. Once they have determined that there is something badly wrong going on in a video, destroying it is a violation of 18 U.S. Code § 1519 (destroying evidence with intent). They had better have backups.

The AI doesn't seem to remove them though, it just flags them for human review. In theory, these humans should be the ones determining intent.
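A minimal sketch of that flag-then-review flow (every name here is hypothetical; only the shape matters): the model's output is a review queue, and only a human decision actually removes anything.

```python
# Sketch of the pipeline the article describes: machine learning flags,
# trained humans decide. Names and thresholds are made up.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Video:
    id: str
    score: float  # classifier's "policy violation" score, 0..1

def triage(videos: List[Video], threshold: float = 0.8) -> List[Video]:
    """Machine step: enqueue high-scoring videos for human review."""
    return [v for v in videos if v.score >= threshold]

def review(flagged: List[Video], human_decides: Callable[[Video], bool]) -> List[str]:
    """Human step: only a reviewer's decision removes a video."""
    return [v.id for v in flagged if human_decides(v)]

videos = [Video("a", 0.95), Video("b", 0.30), Video("c", 0.85)]
flagged = triage(videos)                          # model flags "a" and "c"
removed = review(flagged, lambda v: v.id == "a")  # reviewer upholds only "a"
print(removed)  # ['a']
```

In this shape, a bad takedown like the Syrian footage is a human error at the `review` step (possibly biased by the machine's flag), not the machine deleting anything on its own.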

And in this case, there should probably be a separately trained group of reviewers to carefully examine these videos. Not the same group that's quickly checking over videos to see if they're pornographic, for instance.

I think most of their review ops are in Manila, PH, no? It'd take some time to get them up to speed on that...

Why do you believe that to be the case?

the highest level are in the US


What am I missing here? How is the openly racist GP comment not pounded into the gray? Or flagged?

All I am saying is it takes time for things to propagate. If HQ were in PH, then things would have faster turnaround is all I'm saying.

Do you see "race" in every bit of conversation? If I said white chocolate or black chocolate, do the colors automatically imply race somehow? If I prefer black or white chocolate, what does that mean? Does preferring black or white mean I'm destroying or consuming black or white?

Just calm down.

Yes, because "they're in the Philippines... it'd take them longer" is totally the same as "black chocolate" (who says "black" chocolate anyway?)

How is their comment openly racist?

How do you know there isn't already a separately trained group of reviewers whose only role is to carefully examine videos related to war crimes/terrorism/violence? I suspect there was, and Google/YouTube's senior management just decided to take a harder line on it than they should have.

That's certainly a first step, but I doubt it's a full solution. What's the phrase, "dog whistles" for phrases and keywords that only a target audience would understand?

The demonetization effort is too targeted and obvious to be explained by "AI did it". If AI did it, they could undo it on appeal, which they don't.

Basically every YouTuber I follow has complained about having videos demonetized this week. Subjects ranging from video game reviews to body dysmorphic disorder.

It really seems they've bitten off more than their machine learning algorithms can chew here.

>How can you expect an AI to do that?

Machine learning is just a way to launder bias. What US companies define as extremist will favor the West, discounting Western extremism and overplaying non-Western extremism.

Let's look at the bigger picture. First, in March some newspapers found an extremist video. It had ~14 views and YT advertising all over it. They made a big deal out of it. As a result, YouTube lost ad clients and tons of money.

Then, as a response, they made an algorithm. They don't want people to call them a "terrorist platform" ever again. Hence they take down the videos.

Now, this algorithm is hurting the bystanders. IMO the real problem is a public and business reaction to the initial event.

And this piece of news is an inevitable consequence.

If you look at this sequence of events, the deeper trouble is really that of the newspapers (or any other media outlet). Their incentive is to find things people are going to be shocked by and report them to get views/clicks (that fuel advertising). They aren't going to fully explore the pros/cons, like long-form journalism, since that's expensive, and they get paid peanuts by advertisers.

The public's reaction is going to be the one of outrage, because the news article was _designed_ to evoke such feelings! If the public sentiment was more liberal, then the news article would've picked an even more extreme event, and evoke the same reaction.

Fix the problem at its root - that of advertising funded news and media.

Couldn't agree more. In my opinion, advertising is not the _only_ problem. Remember, YT is a competitor to the "old media," and hurting YT is in their best interest. Hope that initiatives like WikiTribune will pave the way to a better future.

Something people need to keep in mind when parsing this story is that many of the affected channels were not about militancy; they were local media outlets, outlets that only gained historical note due to what they documented as it was unfolding.

In Syria, outlets like Sham News Network have posted thousands upon thousands of clips: everything from stories on civilian infrastructure under war, to spots on mental health, to live broadcasts of demonstrations.


Including documenting attacks as they happen and after they have happened. Some of the affected accounts were ones that documented the regime's early chemical weapons attacks. These videos are literally cited in investigations.

All that is needed to get thousands upon thousands of hours of documentation going back half a decade deleted is three strikes.

LiveLeak is not a good host for such outlets because it is not what these media outlets are about. LiveLeak deletes content as well, so even if the outlets fit the community, it would not be a "fix."

I really don't know how to describe my feelings as a Syrian, knowing that the most important evidence of the regime's crimes was deleted because of a "wrong call". And it's really confusing how an artificial algorithm gets confused between what is obviously ISIS propaganda and a family buried under the rubble, and this statement makes things even worse. Mistakenly? Because there are so many videos? Just imagine that happening to any celebrity's channel. Would YouTube issue the same statement? I don't think so.

What I don't like about these web-giant services is that getting human support requires starting social pressure like this.

If they fucked something up through automation, contacting human support is hopeless unless you have very influential SNS status or something.

Google/YouTube needs to admit defeat in this area and stop trying to censor, they are doing more harm than good.

Is there sufficient data available to actually draw that conclusion? All we generally hear about are videos that are taken down because they are false positives, and usually those are just down temporarily. We'd have to know how many are taken down that are not false positives, and how harmful those were, in order to begin to estimate net harm or good.

Just admit defeat and let all the advertisers pull out?[1] Then go bankrupt and, like SoundCloud was on the verge of doing, let everything get deleted?

[1] https://www.theguardian.com/technology/2017/mar/25/google-yo...

Maybe advertisement-funded content needs to be called into question, then.

Well, the AI did such a bang-up job sorting out the mess in the comment section that it got promoted to sorting out the videos themselves.

The AI isn't the only problem, Syrian government supporters are actively reporting evidence of war crimes in an attempt to get it removed.


5 minutes reading YouTube comments is enough to make me ill. I don't know what a couple hours a day after school would do to a person after ten years. We'll all find out, I guess, once this generation of kids reaches adulthood.

Probably has the same effect as computer games did on us - little.

Statements like that always puzzle me. How would you be able to tell?

For example, it is entirely possible that they called it right, back in the 1950s, when they said Rock and Roll would ruin society. We have no access to what would have happened without it, and so nothing with which to compare.

...I should stress I have nothing against Elvis :) It's just useful as a thought experiment.

HN discussion of deletion event: https://news.ycombinator.com/item?id=14998429

What about all the speech that's censored that doesn't have enough interest or political clout to make people aware of the injustice of its censoring?

Google (parent company of YouTube) already sees itself as the protector of the public's eyes and ears. They might be contrite now, but they behave as a censorship organization.

I think YouTube really needs to hire more humans to review flagging of videos rather than leave it to a loose set of algorithms and swarming behavior of viewers. They assume wrongly that anyone who flags a video is honest. They should always assume the opposite and err on the side of caution. And this should also apply to any Content ID flagging. It should be the obligation of accusers to present evidence before taking content down.

> They assume wrongly that anyone who flags a video is honest.

I don't think they assume that at all. If they did, you'd see at least an order of magnitude more videos removed.

I agree with the sentiment of your criticism, but I think we could phrase it more in terms of prior probabilities or something about the false positive and false negative rate in their review process. Flagging of videos is extremely common and even a small amount of unreliability in the review process translates into a huge number of mistakes.
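To put rough numbers on that point, here is a back-of-the-envelope sketch (all figures are made up for illustration, not YouTube's actual volumes): even a review process that is right 95-99% of the time produces a large absolute number of wrong decisions at scale.

```python
# Hypothetical illustration of how small per-review error rates
# compound into many mistakes at high flag volume.

flags_per_day = 200_000          # assumed daily flag volume
share_violating = 0.10           # assumed fraction of flags that are valid
false_positive_rate = 0.01       # reviewer wrongly removes a compliant video
false_negative_rate = 0.05       # reviewer wrongly keeps a violating video

violating = flags_per_day * share_violating          # 20,000
compliant = flags_per_day - violating                # 180,000

wrongly_removed = compliant * false_positive_rate    # 1,800 per day
wrongly_kept = violating * false_negative_rate       # 1,000 per day

print(f"Compliant videos wrongly removed per day: {wrongly_removed:.0f}")
print(f"Violating videos wrongly kept per day:    {wrongly_kept:.0f}")
```

So even a 99% accurate reviewer, at this assumed volume, wrongly removes well over a thousand legitimate videos every day.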

Also, users of the site don't actually agree with each other much at all about which removals were in error; we could say that there's absolutely abysmal inter-rater reliability if the end-users of the site are the "raters" of the quality of content removal decisions.

Also, most people who flag things don't necessarily know much at all about YouTube's terms of service or how YouTube has interpreted or applied them in the past, so it's hard to be clear on what it means for flaggers to be honest or dishonest. Probably the most common meaning of flagging is "ugh, I'm upset that this video is up on YouTube".

The biggest problem, imo, isn't the random flagger but rather the concerted actions of groups to flag videos. This is obvious in terms of reddit or 4chan users swarming a channel they don't like. This kind of behavior needs to be mitigated in some way. I think a quick solution would be to force a cooldown timer on flagging of 24-48 hours for all users to ensure they're not abusing the system. That should include random users who file DMCA takedowns and aren't partnered with YouTube in some way.
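A per-user cooldown like the one proposed above could be sketched as a simple rate limiter. This is only a toy illustration (class name, in-memory storage, and timings are my assumptions; a real system would persist state server-side and likely combine it with reputation signals):

```python
import time

class FlagCooldown:
    """Reject repeat flags from the same user within a cooldown window."""

    def __init__(self, cooldown_seconds=24 * 3600):
        self.cooldown = cooldown_seconds
        self.last_flag = {}  # user_id -> timestamp of last accepted flag

    def try_flag(self, user_id, now=None):
        """Return True if the flag is accepted, False if still cooling down."""
        now = time.time() if now is None else now
        last = self.last_flag.get(user_id)
        if last is not None and now - last < self.cooldown:
            return False
        self.last_flag[user_id] = now
        return True
```

With a 24-48 hour window, a coordinated brigade from reddit or 4chan could land at most one flag per account per day or two, which blunts a swarm without blocking ordinary one-off reports.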

Very much agreed on this swarm behaviour. A political channel made by a very kind person with nice intentions didn't seem to be breaking any rules at all, yet the /pol/ board on 8chan coordinated mass-flagging attacks against his videos twice, which resulted in his channel being deleted twice.

YouTube (google) has become the EXACT opposite of what they said they were not going to do.

They are evil.

At least, someone at Google was good (honest) enough to drop that motto...

Bellingcat account should be removed, I agree on that with YT.

Automation is the only real solution. These types of conversations seem to always overlook how normal people don't want to watch such videos. Do you want to spend your day watching this stuff to grade them?

Yet YouTube is admitting that the videos should not have been pulled, and there's no AI in the world that could have made the right call here. So... what sort of automation are you suggesting? It seems as though the real solution is the exact opposite of what you're proposing: human review by better-trained personnel with clearly defined criteria.

The real solution is to continue to blame the machines whenever an angry mob shows up at their doorstep, instead of admitting it was their clearly defined criteria that were at fault.

This is irresponsible toward the trained personnel. We have job-safety requirements for people working on assembly lines. We should also have psychological safety for people doing this work; there are lots of stories of employees paid to view toxic videos suffering for it (even developing PTSD).

So we shouldn't allow jobs which may expose employees to psychologically harmful experiences? You realize you'd have to outlaw a whole slew of professions, right? If you can't take it, then don't take the job. Yeesh.

We have psychological safety checks for police/social service workers/military/etc.

Could those not apply here?

"Do you want to spend your day watching this stuff to grade them?"

What about providing the community with more tools to categorize disturbing videos, other than simply "flag"?

I will literally never choose to watch a disturbing but real event. War crimes, either in the form of documenting them or promoting them, will be flagged as "people like Ben never click on previews of this video, don't waste time suggesting it to them in future". I won't even get as far as the page the flag button is on, never mind any other options.

Sounds like the system would be doing its job in that case.

Only insofar as the system is there to make me spend more time watching videos; it does nothing to help categorise or identify propaganda that isn't targeted at me.

> These types of conversations seem to always overlook how normal people don't want to watch such videos

That makes me not normal, I guess. Plus, the NSFL videos were almost instantly taken off YT after having been uploaded; what remained was really interesting stuff documenting the war in Syria (or at least interesting for people such as myself, a guy interested in wars and conflicts in general).

So hire abnormal people?

I'm pretty sure there's a healthy chunk of population who would be unfazed and would love a cushy job judging videos.

People also don't want to spend their day scrubbing toilets, but there are plenty of janitors out there.

