I cannot help but think that this will not be applied evenly - that some political content will be allowed and some will not.
I share your fear. "Kosher" content would now be defined by some faceless person in NY/SF.
US companies, big and small, have always chosen, very carefully, what sorts of content they want to distribute, what sort of image they want to portray, what causes they want to publicly support, and what their public image will be. You could never find atheist content in a Christian bookstore, you could only buy the censored versions of CDs at Walmart, you couldn't find pornography at K-Mart, The Disney Channel never broadcast any politically incorrect material, and you couldn't buy t-shirts with "inflammatory religious or supremacist content" at Old Navy. Even the original Geocities had strong content restrictions.
Are you loudly complaining Old Navy doesn't sell a "Hitler was right" shirt? Are you complaining about the "censorship" going on at the Museum of Fine Arts since they don't have a white supremacist exhibit?
In fact, pornography is legal and YouTube does not allow it. Why aren't you already up in arms about that "censorship"?
Facebook is not Old Navy (one of thousands of competing clothing stores), it's a ubiquitous service with over a billion people in it, almost everybody on the internet.
Like Google, it's more of a basic internet service than a mere website. And its content (and content policies) are a factor in political discourse, both in the US and outside of it.
Secondly, to restrict the discussion to examples that your audience will clearly dislike ("Hitler was right", "white supremacist exhibit") is misleading, because the problem is with items that are not that clear cut but will be censored anyway.
E.g. "Iraq doesn't have WMDs", "CIA is involved in drug trafficking", "US supports death squads in Latin America", "Dodge the Vietnam draft" and so on -- to limit the examples to items from the past. What would a mainstream company that "censors" stuff have allowed from those back in the day when they were hot issues?
Or let's take it to today: how about pro/anti-Trump, pro/anti-Assad, pro/anti-Black Lives Matter, pro/anti-Manning, pro/anti-Assange, etc.?
Even stuff that the majority in the US might disagree with, the majority in another culture/country might legitimately agree with (and not want censored) -- but they'd have no say. A single country (and one many countries bear scars from) will control a large part of the internet discussions of other countries (through Facebook, and similar policies at Google, etc.).
Why would it be censored? And, btw, Iraq _did_ have WMDs.
Only by a huge stretch of the notion, and that doesn't justify invasion, war, hundreds of thousands dying, and trillions spent -- some degraded barrels of mustard gas and the like from 30+ years ago, from the Iran-Iraq war era...
When you call for violence against non-combatants you're breaking the law in every single Western country. If there were only one web browser and the company behind it were implementing universal blocking measures, maybe I'd agree with you, but honestly I'd have to think long and hard first. Radicalisation is impossible to survive in the long run as the power available to the average individual keeps going up.
I find it an interesting question because:
A) Not every video branded as culturally unacceptable will be. Not every video is as bad as the worst-case hypothetical used to justify the content classification.
The landscape of cultural attitudes differs from that of the California-based content minders. The categorization can be flat-out wrong; there will undoubtedly be a small percentage of videos that even the minders see as misclassified.
B) Social interventionist policies can - and often do - backfire.
e.g.: teens that deliberately seek out the taboo. The allure of R-rated movies, M-rated games, Explicit Lyrics stickers, and underage binge drinking can lead them to live through a period of their lives less well-adjusted than if that content hadn't been aggressively filtered from their lives in the first place.
>videos that contain inflammatory religious or supremacist content
It's very easy to claim content is inflammatory or supremacist. This will be highly subjective, which is the problem.
I personally know people here in the Bay Area who would have no problem labelling lots of Trump's campaign talk with those tags.
Also, they have to determine these norms for the whole planet (minus North Korea). Every attempt so far to set cultural norms for the whole world has failed; let's see if they do better.
That's not entirely true. Abstract advocacy of illegal violence is protected speech under Brandenburg v. Ohio, 395 U.S. 444 (1969). Only when the incited violence is imminent (as opposed to at some indefinite future time) does the speech fall outside the bounds of the First Amendment.
And that's considered a good job if you're Filipino.
It's about half that in India.
Besides, my local librarian doesn't decide what books the library will have (what is this, the USSR?). They manage the initial ordering, but library members can request any book and have it ordered.
There's a mechanism for us curating our own content: we decide to which pages/friends we subscribe. How about that?
Um, of course they do.
Librarians do handle the organization and everyday operation, archiving projects, curated collections open to the public, etc.
> curated collections open to the public
What's the difference between Google removing a terrorist video from public view (they never delete anything), and a library having a book but the librarian not making it available to the public?
I mean, you rebut me when I say Google curates its own content, then turn around and say librarians are different, their duties include curating content.
Edit: How come people don't call Google some sort of evil censoring overlord when it comes to child pornography? You'll get the good ol' "I defend to the death your right to speak" when it comes to terrorist videos, but not child exploitation ones. Where are the people angrily demanding that google put child pornography back into their search results, out of a demand for freedom of speech for all? Why is that topic treated differently to terrorist recruitment videos?
Because most people act irrationally when it comes to related issues, and because other people (still a minority) don't want to be branded negatively by hysterical public/pundits.
One might as well ask where were the vocal proponents of black rights in 1920 Alabama?
I think what bothers me the most, though, is that we're too reliant on one company to be the gatekeeper to the vast majority of video content online. I'd feel better if there were more competitors in this space, and if YouTube was overly restrictive of certain content, you could just move to a different service and have a similar experience.
I can't really blame YouTube for being "too successful", but this is one of those cases where being a near-monopoly can put a company into an awkward position.
EDIT: Unless someone is saying that AntiFa ISN'T violent?
It's worth noting that these were original videos uploaded to YT by the protestors themselves!
By your definition(s), YouTube can't possibly handle inflammatory leftist material.
If they don't remove it, they're clearly leftist biased.
If they do remove it, they're also clearly leftist biased, but for a different reason.
You can't have it both ways.
Removing things that make one side look bad - but not similarly doing the same for the other side - is bias, yes. Unless you think YouTube would have removed videos taken by the protesters if the situation was reversed, and it was a neo-Nazi protest that was looking horrible?
Or are you claiming that the people at Evergreen are all secret members of the Pepe Brotherhood and that they are not holding their leftist views honestly?
This was pretty clear in what DuskStar wrote. See the "the fact it was produced by leftists who need a far better grasp of optics" bit.
How about doing some old-fashioned data gathering for the benefit of the community? Document all violent events, how many YouTube videos are posted of each, how many are removed and how long it takes for each to be pulled down.
Without data, it's just conspiracy theory.
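To sketch that data-gathering idea concretely (all names here are hypothetical, not any existing tool): record, for each documented event, when related videos were posted and when, if ever, they were pulled down, then summarize removal rates and takedown lag per event.

```python
# Hypothetical sketch of the takedown-tracking proposal above.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class VideoRecord:
    event: str                           # which real-world event the video covers
    posted: datetime                     # when the video appeared on the platform
    removed: Optional[datetime] = None   # None means it is still up

def summarize(records):
    """Per-event totals, removal rate, and average takedown lag in hours."""
    stats = {}
    for r in records:
        s = stats.setdefault(r.event, {"total": 0, "removed": 0, "lags": []})
        s["total"] += 1
        if r.removed is not None:
            s["removed"] += 1
            s["lags"].append((r.removed - r.posted).total_seconds() / 3600)
    return {
        event: {
            "total": s["total"],
            "removal_rate": s["removed"] / s["total"],
            "avg_lag_hours": sum(s["lags"]) / len(s["lags"]) if s["lags"] else None,
        }
        for event, s in stats.items()
    }
```

With numbers like these in hand, claims of one-sided takedowns become checkable instead of anecdotal.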
And that's just the last two points of your comment.
The first requires a nation state.
But using the simplified left/right political spectrum is a terrible metric in any case. If there was ever a time to at least use the political-compass metric, differentiating authoritarian/libertarian opinions as well as left/right, it is now. It can be hard to realize as an American, but judged globally you are comparing far-right authoritarians to centrist or even centre-right authoritarians as if they were opposites.
This is based on information that is highly controlled by the companies which we were just discussing, right?
Are you sure? I'd love to see credible sources for this data, for the US. Note that sources talking about "hate crimes" are unfortunately not usable here simply because they by definition exclude violence towards groups that violent left-wing groups target. I am aware that after this last election there was an upsurge in hate crimes; there was also an upsurge in violent attacks on Trump supporters. I have had little luck finding good numbers on what's going on, and would appreciate pointers.
Also, are we talking about recent history, or historically? Because again, for the US, there was a good deal of left-wing violence in the 70s that has been successfully whitewashed out of history. For example, some (but not all) of the leaders of https://en.wikipedia.org/wiki/Weather_Underground moved right on to positions like "Clinical Associate Professor of Law at the Children and Family Justice Center at Northwestern University School of Law" and "communications director at Environmental Advocates of New York" (and for the same one, "heads up his own consulting firm called Jeff Jones Strategies that specializes in media expertise, writing, and campaign strategies that help grassroots and progressive groups achieve their goals"). Going further down the list we have people becoming high school teachers and then academics specializing in education, and mathematics instructors.
At least as of 20 years ago, none of this was discussed in high school history classes that cover the period. Most people who didn't personally live through it aren't aware that any of this ever happened.
> or even centre-right authoritarians as if they were opposites
The centre-right authoritarians, by that definition, are not the ones involved in what would be considered "leftist violence" in the US.
Sure, there was an uptick in attacks on Trump supporters. But you take all of those and I'll take only violence against Muslims or trans people, your choice. Hell, if I took only violence against non-Muslim brown people mistaken for Muslims, the numbers would still be higher.
Do you have a source for that? Because, again, I've had a _very_ hard time finding credible numbers for this stuff, and would love some actual data.
I would say there's a very good chance that it will.
That's a hell of an assumption. This subthread succinctly illustrates the problem at hand.
I imagine it'll be a very fuzzy line that they draw.
I also wonder how much it will cost to classify a video versus just hosting an average, barely viewed one.
SCOTUS decisions scale just fine. It's called "stare decisis."
This is simply a new censor with less permissive and less stringent standards.
Why does that trigger concerns for you about 'political content'?
The establishment and the extreme left have figured out that the internet, as the greatest platform for real free speech, undermines their prerogative of interpretation in public discourse.
This is a fight to control public opinion, and we will lose it if we continue to be jellyfish.
If traditional media stepped in, it would be a major win. Instead of having a symbiotic relationship with terrorism, where they hype up the terror, they could stick to reporting the facts and the situation, and not provide non-stop terror-porn entertainment to the audience.
Once people realize that terrorism is just one small risk among other, much bigger risks (like high-rise buildings clad with flammable material), they'll stop fearing it and terrorism will stop being effective. It will never completely vanish, but it will not attract radicals the way it used to.
x2. Traditional media outlets are a massive force multiplier for anyone attempting to terrorize through violence. I understand they have an economic interest in hyping up that sort of thing but maximizing the visibility and publicity of a terrorist act is helping them, not hurting them.
On the smaller scale, violence will get more prominent coverage, yes. But that's just a reflection of public interest, and how much intent makes a subjective difference. News outlets like the BBC or the Guardian seem to be far away from "terror porn", and in the case of the BBC, it's obviously wrong to suggest they're doing it for money, seeing as they're not financed by ads.
Terrorist events should be treated like any other murders and people will very quickly stop caring. They are just hyped up by the media to drive views and the governments love it because it allows them to expand their power.
Catastrophe porn sells in general, fires, plane accidents, natural disasters, etc. don't increase if the reporting increases.
Terrorism works through media. Terrorism should get less media exposure. Ongoing video feeds, constant speculation, etc. should be treated as enemy propaganda even if they go through a neutral channel.
If there is an ongoing situation, short messages from the authorities are good, with deeper reporting at a later time. TV cameras on the scene and all the graphic drama-building work for the enemy.
> and in the case of the BBC, it's obviously wrong to suggest they're doing it for money, seeing as they're not financed by ads.
The BBC and others are doing what everyone else does, and doing it for viewers. They honestly think that creating drama is being neutral. It's not.
Just call them 'evil monster' or something similarly vague and hope history forgets their existence. Otherwise, you end up with a lot of criminals and terrorists causing great amounts of suffering to have their names recorded in the history books.
All of those platforms that are notoriously bad at curbing terrorism and great at promoting the most asinine "I can't believe it's not a troll" social justice concerns.
Also, present-day Twitter gets a lot of money from Wahhabi-funding Saudi royals: https://qz.com/131532/meet-the-people-and-funds-that-stand-t...
I've seen YouTube videos containing some of the most shockingly needless violence one can imagine, complete with cheering and derogatory language from British and US soldiers (often directed at foreign people). Disturbing.
It seemed like footage aimed at radicalising people to me.
A better stop-gap measure against the radicalization of American terrorists would be to prevent the DoD from recruiting at high schools, at sporting events, in movies and video games, etc.
Mind you this "internet" is private biz. This is a pretty obvious "don't vomit on my rug" case. Don't like it? Find another party.
Is there any actual evidence that these YouTube videos are in fact driving people to commit acts of terrorism?
Also, some of them are probably posted just so that the Western media can find them and scare the population shitless with them, pumping the huge autoimmune response we have wrt. terrorism here.
The internet bubbles will get worse.
Indeed, if Google thinks they are capable of shaping the narrative, what is stopping them from redirecting people e.g. to the Coalition for Better Ads when they search for a content blocker? It is not hard to imagine scenarios where Google's and individuals' incentives are not aligned, and that slope is slippery.
It's an action by Google, not the internet. If Google is the "internet" then maybe they shouldn't be.
There isn't anything stopping folks from setting up alternative video hosting/social network sites.
Just the billions of dollars and thousands of talented staff.
So, yeah, for 99.999% of people it is impossible.
If that's impossible to achieve, maybe there is some analogue of Darwin's law about whether your content deserves an audience.
But the real issue isn't the platform. These groups are after an audience. And that is something I see Google as having no moral responsibility to provide.
They aren't locked out of the internet at all, or prohibited from creating/distributing content (something I would object to). They just can't use a private company's distribution channel at the same level as other users to get eyeballs. Cry me a river. It's not the end of the internet.
Furthermore, while "hate" speech is not the best use of our ability to communicate, it does fall under the protection of free speech -- at least in the United States. In some groups hate speech (however you define it) is acceptable.
Just as you are free to not associate with those groups the internet should be free to express the ideas you disagree with.
You don't have the freedom to interfere or disrupt the freedoms of others.
The West has enemies, and it stirs up hate and uses violence against them, adding to the mayhem in the Middle East and killing tens of thousands of people every year.
Violence is very acceptable to the Western democracies. You have to be willfully ignorant not to see this, after the last two years and the 70,000 people killed by the coalition during that time.
How so? I don't see it.
edit: Anyway. There's a lot of hate towards real or believed enemies of the West. It's easily visible online in news websites' comment sections, on social media, etc. It is somewhat rare to see people standing up against it. Even on platforms where you need to provide government ID to be able to discuss, and your name is visible to others, people feel perfectly fine spewing hate against Muslims, ISIS, or whatever in very non-measured ways.
My point with the previous comment is that it's important to recognize Western hate and violence too, and not brush it away, because its effect on people's lives is very significant.
I'm not from the US, so it might be different in different countries.
However, we have recently seen that principle degrade with all kinds of exceptions. It's weird given how little potential Islamist terrorism has in the West compared to what communism, republicanism, or Protestantism had in the past.
As with all weapons, there is no doubt a risk that enemies will get the weapons. Open source will in fact make it easier. Network control will be more important.
And there are many, many fronts. Trump. ISIL. Any number of dictators (the Chinese Communist Party, Putin's oligarchy). And only a few bright lights of democracy.
I'm not sure terrorism has ever accomplished its goal to effect radical change. But a powerful entity ostensibly protecting us from a group perceived as a common evil that's not worth debating?
I am more worried about Google than ISIS. Real terrorists are people like Eric Schmidt who cozy up to the DoD.
> Driving something underground only makes it more attractive
See, I think you're wrong, and that the secret to attraction is visibility. As an online advertising company would know.
And IP addresses can be logged :)
Theresa May, as well as some Australian leaders have made very general statements that the big tech companies are not doing enough to fight terrorism.
I expect that these statements are merely a prelude to some announcement of legislation seeking even greater government powers and access to Google's data.
I imagine law enforcement would rather have detailed identity information on the people consuming the content, rather than simply having the content removed more efficiently.
So while removing propaganda and violent videos from YouTube will just drive the traffic elsewhere, at least Google's action may help resist further government encroachment on our online privacy.
To fight online terror:
1. identify extremist and terrorism-related videos
2. increase the number of independent experts
3. take a tougher stance on videos
4. expand its role in counter-radicalisation
And, to be honest, I've seen plenty of NSFW videos.
Let's not kid ourselves. Terrorism has pretty much nothing to do with videos. Radicalization has more to do with the social realities faced on the ground, day after day, as we live our lives. Alienation and the misery of slums, and hopelessness of monotony without progress, are what lead to terror.
The true effects of propaganda are over-stated and over-imagined.
Recently I was watching a live stream on YouTube about the UK terrorist attack, to get information about what was happening. I saw what a couple of thousand people wrote in the chat below the video. I had never seen so much hate, intolerance, racism, and fascism from so many people at the same time in my life. I closed it because I couldn't read it; the problem is that many young people probably didn't...
This is a very hard problem without any satisfying solution. On one hand, we don't want to give up our freedom of speech; we are fearful that some evil actor will use it to manipulate our society. On the other hand, ask the parent of a kid who died in the last UK attacks, or any other attack -- ask yourself -- what would you give up to get your dead child back?
The increasing tilt in the balance of power between the state and the individual, toward the state, is quite alarming. It seems to me that the world over, we're allowing monopolistic views and policies to grow, to the detriment of common people. Terror and terrorism are very useful to those in power for gaining even more power. Terrorists who claim to fight for freedom are winning, by showing in practice that those in power are continually curbing our freedoms. With such policies, tech companies are just joining forces with "democratic oppressive governments" in making them more powerful.
I do not see a clear and easy solution for these problems.
I'm somewhat hopeful about the "Redirect Method" mentioned in the article. If the counter-content is not ridiculing and inflammatory, but reasonable, fact-based, and balanced, it may actually work and be a good thing for some people.
> UK attacks or any other attacks, ask yourself, what you would give up to get your dead child back.
There's no getting them back. The questions should be asked by regular people who haven't experienced any harm yet. And they shouldn't only be selfish questions, but questions like "What harm am I willing to inflict on others (inside and outside my country) in the name of my supposed protection?"
And yet these same organizations assure us they will have clearly defined and wholly impartial standards.
Unfortunately I fear the damage has already been done, and these policy changes will almost certainly push newly-popularized radicals onto platforms Google has no meaningful influence over.
I'm possibly out of the loop, what happened at facebook? Are you talking about the fake news fiasco?
we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits,
It is not really clear why Europe specifically is targeted.
If you mean "targeted by attacks" I'm not sure why.
If you mean "targeted with this method", it's because most of these kinds of acts in the West have happened in Europe and not elsewhere. Also, IIRC, most of the attacks came from people born here, not immigrants, so it wouldn't make much sense to target wannabe attackers directly in the Middle East.
>>> Participation in the Program by some participants may be restricted (e.g., no or limited perks), including persons who are government officials, including (i) government employees; (ii) candidates for public office; and (iii) employees of government-owned or government-controlled companies, public international organizations, and political parties.
That tells you a lot...
The fact that Google tries to do something about terrorism does not mean it is effective at it. For us to know that, we must be able to review their results (and how do we even define a result in this area? :-/ ) as well as their methods (which requires transparency).
The problem is that there are terrorists among us who are willing to bomb, stab, or run people down with a truck. That there are entire communities where this either receives support or is at least tolerated.
It has very little to do with technology. They don't do it because they saw an ISIS video on YouTube or a post on Facebook. I doubt this is even a minor factor. Hodgkinson watched The Rachel Maddow Show. On cable.
That doesn't sound very efficient to me.
Also, isn't this a task for governments?
Now, when this expands to other subject areas, then it is a different story for sure.
They go on to say that this type of speech won't be exactly blocked, but basically made harder to find.
It's not quite Big Brother, but I suspect nothing like that happens all at once. "Inflammatory religious", in particular, feels like a pretty loose description... almost every religion has some bit that's inflammatory to someone else.
Religious speech is not inflammatory just because some people strongly disagree with it.
I didn't say all religious speech was inflammatory either. I said most religions have some speech that could be considered so. I really don't get your point.
Simple equation, no way out of it
Which is rather a pity, given this metaphor turns most of our cultural baggage into unwanted and unremovable bloatware that has its own set of problems.
Let ISIS or the White Nazi rebirth club set up their own video hosting services instead of piggybacking on something they are ultimately trying to destroy.
So let's say ISIS and the Neo-Nazis get their own video hosting services, since there probably aren't enough of either to make that infeasible, and they already have plenty of venues for hosting their own material. Then what?
What problem does removing that content from Google actually solve? The internet detects censorship as damage and routes around it, remember - stopping the propagation of anything people want to publish is impossible by design.
Google removing or otherwise taking steps to suppress this content very obviously (partially) solves the "most important" problem.
And that "most important" problem is Google being an accessory, or providing a platform, which is a legal and public-relations liability.
Notice who the author of the article is. Not some engineer. Not a public relations person. Nope. It's Kent Walker, General Counsel. And that tells you a lot about the "why".
In the context of 'protecting' society:
IMO it's better to protect your society by removing or blocking blatant propaganda, but I don't think that ban should be extended to violent/gruesome content. The less you see of it, the harder it will hit once it gets past the blockade. I consider it a healthy human response to be able to see such images, think "Fuck those <INSERT_GROUP_OR_STATE_HERE> assholes", and then move on with your day. Shielding people will have the opposite effect, since people will not be familiar with the possibility of seeing such things and it will therefore retain its shock value/utility for the <INSERT_GROUP_HERE>.
No matter how much (Western?) society might fool us, humans can deal with this. We lived as savages, constantly tested by nature and the cruelty of life, for most of our existence on earth. I'm not saying we should return to such a state, but we also shouldn't needlessly coddle responsible adults.
Of course, in the case of more subtle manipulation, I guess you also have to be aware that the manipulation is happening. In such a case I imagine one cannot rely on gradual hardening through exposure alone.