
They aren't installing a filter into the browser they're OPENLY addressing a problematic issue on YouTube, one of many, many video hosting websites. Nothing stops a terrorist from getting a computer, an internet connection, and hosting his own damn video calling for the murder of women and children.

When you call for violence against non-combatants you're breaking the law in every single western country. If there were only one web browser and the company behind it were implementing universal blocking measures, maybe I'd agree with you, but honestly I'd have to think long and hard first. Radicalisation is impossible to survive in the long run as the power available to the average individual keeps going up.




They didn't say they were going to start removing videos with illegal content, they already do that. They said that they were going to start removing videos that don't break any rules, yet the company deems them unsavory. Which is incredibly frustrating since 1) YT has become the center of our changing culture, and 2) not everyone lines up with the PC Californian culture that dominates large multinational corporations.


They didn't say they would remove the videos, instead they will display an "interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements." Which is not even on the level of a shadow ban, as practiced e.g. on HN.


I wonder what effect Google's wagging finger and implied scolding from an interstitial will have on people who stumble across a video they like but is branded as naughty.

I find it an interesting question because:

A) Not every video branded as culturally unacceptable will be. Not every video is as bad as the worst-case hypothetical used to justify the content classification.

The landscape of cultural attitudes differs from that of the California-based content minders. The categorization can be flat out wrong; there will undoubtedly be a small percentage of videos that even the minders see as mis-classified.

B) Social interventionist policies can - and often do - backfire.

e.g.: Teens that deliberately seek out taboo. The allure of R movies, M games, Explicit Lyrics, and underage binge drinking can cause them to live a period of their life less well-adjusted than if that content wasn't aggressively filtered from their lives in the first place.


If they do that, they might as well remove the videos, since they have the same goal in mind. Look at the quarantined subreddits on Reddit. While the company gets to say it allows free speech, it basically removed those subreddits from existence, thus successfully controlling the narrative. Do we really want large corporations to intentionally guide the direction of our culture? Personally, I don't. In the end, a corporation would guide it in a direction that favors itself and its donors.


Where does that say they're removing them?


Murder, sure; however, it said this:

>videos that contain inflammatory religious or supremacist content

It's very easy to claim content is inflammatory or supremacist. This will be highly subjective, which is the problem.

I personally know people here in the Bay Area who would have no problem labelling lots of campaign talk by Trump with those tags.


> This will be highly subjective, which is the problem

Also, they have to determine these norms for the whole planet (minus North Korea). Every attempt so far to set cultural norms for the whole world has failed; let's see if they do better.


I think that once upon a time Facebook and Google wouldn't do such things for fear of losing customers to the competition. Now it's different: they got us hooked, and they are behaving as if they were a real government. This consolidation around 'platforms' and lack of competition is not good for the internet.


OMG a multinational doesn't share my precise political bias


>When you call for violence against non-combatants you're breaking the law in every single western country.

That's not entirely true. Abstract advocacy of illegal violence is protected speech under Brandenburg v. Ohio, 395 U.S. 444 (1969). Only when the incited violence is imminent (as opposed to at some indefinite future time) does the speech fall outside the bounds of the First Amendment.



