The fact is, politics and morality are inherently intermingled. One can use words like "extremist," but sometimes the extremists are the "correct" ones (like our Founding Fathers, who orchestrated a revolution). How could any system consistently categorize "appropriate" videos without making moral judgements?
But from the "right side" of an ideology, those are the freedom fighters, visionaries, defenders, Founding Fathers, Underground Railroad, idealists, etc.
Controlling vocabulary is a very powerful tactic in politics, as illustrated in 1984. People respond in very predictable ways to certain words, hence their power.
That's probably why footage of the American military usually doesn't get taken down. Our military kills from a thermal view half a mile away.
I still support your free speech (right or wrong)
Your comments would have to mean YouTube/Google is a Syrian company, not an American company, when it pertains to the topic at hand.
Your analogy is simply flawed on many levels. Further, your switch from British companies supporting the revolution to "Google supporting anything new or different" is nonsensical. Google supports all kinds of "new and different" things that will likely change the face of society in profound ways: AI, self-driving cars, and other moonshot programs.
No, I really do not understand what point you're driving at here.
Yes, because there was little or no risk for them. Google will happily side with the revolution in Bangladesh or Gambia, not in the US.
Did I miss something? Is there massive conflict in the US? Is there an active violent revolution happening? Because if you think the clashes that have occurred at protests between Antifa and the Nazis equate to "violent revolution," then you will be in for a shocking awakening if the unthinkable occurs and the US does find itself in an actual violent revolution or civil war again.
The last US civil war took the lives of 2% of the population; to put that into context, if it happened today that would be 6.5 million people killed.
No, the US is not involved in a violent revolution that Google needs to pick a side over, and you should pray that day never comes.
Further, the idea that Trump is a threat to the military, manufacturing, or many other industries is equally absurd. You may have a small case for tech, but I think that is overstated.
Trump is a HUGE supporter of the military and, by extension, military contractors, wanting to increase military expenditure by a large margin.
Protectionist policies often aid US manufacturing at the expense of cheap imports from China, so US manufacturers (actual manufacturers making things in the US, not the foreign importers often confused with "manufacturers") support Trump.
Sounds like you might be in an echo chamber...
To really give the military a huge boost, they need more wars. You can't spend much on "defense" without ongoing wars or big threats... it maxes out pretty quickly. How will Trump justify more wars? When Bush ran the 2000 campaign, one of the messages was to get out of foreign wars. If 9/11 hadn't happened, the military industry would have suffered greatly. 9/11 happened, and every war was justified until recently because "terrorists".
Now the best prospects they have are North Korea, which is a joke, and ISIS, which the US and allies have created to defeat Syria. When they finally realize that none of these will fly, they will try to get rid of Trump and take matters to Russia and hence Syria again... which they have been trying since after he won.
>Protectionist policies often Aid US Manufacturing at the expense of Cheap Imports from China
One can't actually implement protectionist policies without being heavily biased towards big industries. If this happens, everyone, including people who side with Trump, will cry corruption. There is no way to do it right... and if you think Trump will be able to pull it off from a Republican platform, you are very out of touch. Even if the big players pretend it is okay (because it doesn't affect them as much), it is definitely not okay, because they are forced to raise their prices when they are not allowed to do business as they would like.
The other distinction with political speech is that if you draw that line wrong, you hamper criticism of authoritarian regimes (for example). If you set the bar too tight on your porn filter, it's much less likely you're stifling political progress.
A decision to attempt to remain neutral should be carefully considered.
They have chosen to try to stop ISIS recruitment by videos hosted on their site. That is their prerogative.
They can't. We've entered an era where a disagreeable tweet is - in the mind of some people - a good reason to dox you and destroy your life. This extends to companies. Your ad shows up next to an opinion piece that they disagree with? Time to start a #activist campaign to boycott your company and mention how you would probably sell your goods to Hitler and your board is full of white patriarchs who are probably racist, anti-Muslim, anti-immigrants, anti-women, and closet Neo Nazis.
The companies are also to blame since they've been mostly spineless. Save a few companies, they fold and pull their advertising the minute these #activists put them in their sights.
To an advertising delivery company like Youtube, which is probably barely at breakeven, losing advertising dollars is crushing. Over the years they've been slowly reducing how much money they bleed, so a reversal of the trend is grounds for alarm.
So YouTube has to either wean itself off advertising (virtually impossible, since Google is an advertising company) or make sure the content is advertiser-friendly.
> How could any system consistently categorize "appropriate" videos without making moral judgements?
Politics and morality are inherently intermingled, I agree.
Appropriateness and morality are the same concept.
Why not? Answer without making a moral statement.
Furthermore, utilitarianism is a moral position. Why would you prefer a world where your daughter doesn't have nightmares over one where she does?
Your daughter would probably stop having nightmares if you gave her more exposure to that kind of video, too.
Content creators that don't produce content for 5 year-olds need to start looking somewhere else than YouTube.
People who make non-advertiser-friendly videos need to figure out some other kind of way to make their money. Patreon is excellent for that.
Further, I do not buy the "not ad-friendly" BS. There are TONS of advertisers that would put ads on some of those channels (and have tried to buy them), but YouTube either refuses them as advertisers or does not give them the proper control to pick and choose which channels they want to put ads on. It says your video is not ad-friendly and that is that; the advertiser has no input.
Finally, advertisers care about eyeballs. They really do not care about the content their ads run next to UNTIL they get people complaining about or boycotting their product.
Today there is a small number of permanently and perpetually offended people, VERY vocal and VERY loud on social media, who have wielded undue influence over these YT advertisers with their threats of boycotts and outrage. These people have no respect for freedom of expression and desire nothing more than to shut down the speech of anyone they disagree with.
source on this? I'd like to read more
One of the keys to YouTube's success is its sub feed. All the videos from all my favorite channels, all in one place. It's extremely convenient. People tend to take the path of least resistance.
It seems better that material which needs to be preserved for history, and to remain uncensored despite advertising pressure, lives on a site dedicated to that purpose instead of a site most people use for videos of recipes, memes, and great football headers.
And that's not even counting war footage.
I'm not hunting this stuff down; the only protection against watching this crap is the half-second pause after the reporter says "Warning, the following may be considered graphic."
The funny thing is: I don't own a television; I just occasionally go to a diner for breakfast with whatever garbage news channel everyone in the place likes at any given time. Even brief casual glimpses at any given news channel are bound to get you an eyeful of gore or violence of some sort.
It's people's expectation for it to be perfect, and the egoic drive to blame someone when something goes wrong. There was no reason for the hype around this story... an AI classifier had a false positive. That's not Google attacking the videos; that's a technical issue, and it needs to have zero feelings involved because the entire process happened in a damned computer incapable of feelings...
But everyone needs to feed their outrage-porn addiction...
Your second sentence is a technical argument, which makes your first a lie. Obviously Google disagreed, which is why they put this system into place. And if they were wrong about that they were wrong for technical reasons, not moral ones.
I mean, you can say there's a policy argument about accuracy vs. "justice" or whatever. It's a legitimate argument, and you can fault Google for a mistake here. But given that this was an automated system it's disingenuous to try to make more of this than is appropriate.
My assumption was that you were contrasting "technical" problems (whether or not Google was able to do this analysis in an automated way) with "moral" ones (Google was evil to have tried this). If that's not what you mean, can you spell it out more clearly?
What you guys and your downvotes are doing is trying to avoid making an argument on the moral issue directly (which is hard) and just taking potshots at Google for their technical failure as if it also constitutes a moral failure. And that's not fair.
If they shouldn't be doing this they shouldn't be doing this. Make that argument.
I'd say it's pretty capable.
Human raters are a fucking nightmare of inconsistency and bias. I'd guess this is more accurate at this point, and is only going to improve.
Policy solutions are political.
A policy is a deliberate system of principles to guide decisions and achieve rational outcomes. A policy is a statement of intent, and is implemented as a procedure or protocol. - https://en.m.wikipedia.org/wiki/Policy
There is no perfect system. If it's automated, there will be false positives (and negatives); if there is a human involved, you have a clear bias issue; if there is a group of humans involved, you have societal bias to deal with...
There is no perfect system for something like this, so the best answer is to use something like this that gets it right most of the time... then clean up when it makes a mistake. And you shouldn't have to apologize for the false positive; people need to put on their big-boy pants and stop pretending to be the victim when there is no victim to begin with...
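The "there will be false positives" point has a quantitative side worth spelling out: when policy-violating videos are rare, even an accurate automated classifier will flag far more innocent videos than real violations. A minimal sketch, using entirely made-up rates (none of these numbers come from YouTube):

```python
# Base-rate arithmetic for automated flagging.
# ALL numbers here are illustrative assumptions, not real YouTube figures.

total_videos = 1_000_000
violating_fraction = 0.001      # assume 0.1% of videos actually violate policy
true_positive_rate = 0.95       # assume the classifier catches 95% of violations
false_positive_rate = 0.01      # assume it wrongly flags 1% of innocent videos

violations = total_videos * violating_fraction      # 1,000 real violations
innocent = total_videos - violations                # 999,000 innocent videos

caught = violations * true_positive_rate            # 950 correct takedowns
wrongly_flagged = innocent * false_positive_rate    # 9,990 false positives

# Fraction of flagged videos that are genuine violations.
precision = caught / (caught + wrongly_flagged)
print(round(precision, 3))  # ~0.087: roughly 9 in 10 flags hit innocent videos
```

Under these assumed rates, a classifier that sounds "95% accurate" still produces a flag queue dominated by innocent content, which is why the human cleanup step matters so much.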
1) Stop-and-frisk is BIASED heavily on race because it's a HUMAN making the choice...
2) Stop-and-frisk is the GOVERNMENT, and therefore actually pushes up against the Constitution.
How do you see these things as remotely the same?
On the other, the Alphabet family don't have the support systems to clean up when they make a mistake.
"Previously we used to rely on humans to flag content, now we're using machine learning to flag content which goes through to a team of trained policy specialists all around the world which will then make decisions," a spokeswoman said..."So it’s not that machines are striking videos, it’s that we are using machine learning to flag the content which then goes through to humans."
"MEE lodged an appeal with YouTube and received this response: 'After further review of the content, we've determined that your video does violate our Community Guidelines and have upheld our original decision. We appreciate your understanding.'
Humans at YouTube made the decisions about removing videos. Then, on appeal they had a chance to change their minds but instead confirmed those decisions. Then, because of public outcry, YouTube decided it had been a mistake. 'The entire process happened in a damned computer incapable of feelings' is inaccurate.
Aside from but related to this story, many people are making a living off of YouTube ad revenues, and the AI is unpredictable in how it will respond in terms of promoting your video content on the front page, as links from other popular videos, and so forth. I think it's also unknown how the AI categorizes the content appropriateness of videos to advertisers, which if categorized the wrong way leaves your stuff unmonetizable.
Basically, people are throwing video content up but have no proper feedback loop to gauge whether or not they violate the "proper" protocols that the AI rewards. This really is a problem of automation using (presumably) trained statistical rules, where nobody really knows what specifically influences the decisions about their videos.
Do you see "race" in every bit of conversation? If I said white chocolate or black chocolate, do the colors automatically imply race somehow? If I prefer black or white chocolate, what does that mean? Does consuming one or the other mean I'm somehow attacking black or white people?
Just calm down.
It really seems they've bitten off more than their machine learning algorithms can chew here.
Machine learning is just a way to launder bias. What gets defined as extremist by US companies will favor the West, discounting Western extremism and overplaying non-Western extremism.
Then, as a response, they make an algorithm. They don't want people to call them a "terrorist platform" ever again, hence they take down the videos.
Now this algorithm is hurting bystanders. IMO the real problem is the public and business reaction to the initial event.
And this piece of news is an inevitable consequence.
The public's reaction is going to be one of outrage, because the news article was _designed_ to evoke such feelings! If the public sentiment were more liberal, then the news article would've picked an even more extreme event and evoked the same reaction.
Fix the problem at its root - that of advertising funded news and media.
In Syria outlets like Sham News Network have posted thousands upon thousands of clips. Everything from stories on civilian infrastructure under war, spots on mental health, live broadcasts of demonstrations.
Including documenting attacks as they happen and after they have happened. Some of the affected accounts were ones that documented the regime's early chemical weapons attacks. These videos are literally cited in investigations.
All that is needed to get thousands upon thousands of hours of documentation going back half a decade deleted is three strikes.
Liveleak is not a good host for such outlets because it is not what these media outlets are about. Liveleak themselves delete content as well so even if the outlets fit the community it would not be a 'fix.'
If they fucked up something by automation, contacting human support is hopeless unless you have very influential social media status or something.
For example, it is entirely possible that they called it right, back in the 1950s, when they said Rock and Roll would ruin society. We have no access to what would have happened without it, and so nothing with which to compare.
I don't think they assume that at all. If they did, you'd see at least an order of magnitude more videos removed.
I agree with the sentiment of your criticism, but I think we could phrase it more in terms of prior probabilities or something about the false positive and false negative rate in their review process. Flagging of videos is extremely common and even a small amount of unreliability in the review process translates into a huge number of mistakes.
Also, users of the site don't actually agree with each other much at all about which removals were in error; we could say that there's absolutely abysmal inter-rater reliability if the end-users of the site are the "raters" of the quality of content removal decisions.
Also, most people who flag things don't necessarily know much at all about YouTube's terms of service or how YouTube has interpreted or applied them in the past, so it's hard to be clear on what it means for flaggers to be honest or dishonest. Probably the most common meaning of flagging is "ugh, I'm upset that this video is up on YouTube".
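The "small amount of unreliability translates into a huge number of mistakes" claim is just scaling arithmetic, which can be sketched in a few lines. The volume and error rate below are hypothetical placeholders, not real figures:

```python
# Rough sketch of how review-process unreliability scales with flag volume.
# Both inputs are assumed values for illustration only.

flags_reviewed_per_day = 200_000   # assumed daily volume of user flags reviewed
review_error_rate = 0.01           # assume just 1% of review decisions are wrong

wrong_decisions_per_day = flags_reviewed_per_day * review_error_rate
wrong_decisions_per_year = wrong_decisions_per_day * 365

print(int(wrong_decisions_per_day))   # 2000 bad calls per day
print(int(wrong_decisions_per_year))  # 730000 bad calls per year
```

Even a 99%-reliable review pipeline, at that assumed scale, produces thousands of wrong takedown/keep decisions daily, so anecdotes of bad removals are expected regardless of how good the process is on average.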
They are evil.
Could those not apply here?
What about providing more tools for the community to categorize disturbing videos, other than simply "flag"?
That makes me not normal, I guess. Plus, the NSFL videos were taken off YT almost instantly after being uploaded; what remained was really interesting stuff documenting the war in Syria (or at least interesting for people such as myself, a guy interested in wars and conflicts in general).
I'm pretty sure there's a healthy chunk of population who would be unfazed and would love a cushy job judging videos.