> "we've determined that your video does violate our Community Guidelines and have upheld our original decision. We appreciate your understanding."
Can someone explain to me why corporations, when interacting with customers about complaints and appeals, so often seem to have "don't forget to add insult to injury" as one of their mottos? Does that kind of patronizing tone sound polite to the ears of a PR drone?
It seems especially tone-deaf since in cases like these there is no understanding to appreciate. Google will not tell you what you did to violate policy, only that they checked to make sure they found you guilty, and then they snub you further with the HR speak. It's maddening.
A transparent appeals process staffed by humans who can at least deliver a rationale, including what rule you broke, should be required by law. There's irreparable reputational damage associated with an algorithm libelously labeling something "extremist content" that isn't.
Distributed hosting of static content is a sorta-solved problem. But curating, linking and discoverability (which require mutating content) is a lot harder due to the trust anchor problem.
BitChute is a peer to peer content sharing platform.
Our mission is to put people and free speech first.
It is free to join, create and upload your own content to share with others.
I hope they make it.
That makes it tricky for solutions that "put people and free speech first" to succeed, because they've basically painted a giant target on themselves, and even many people who sympathise in principle get worried about the bits and pieces that step over their personal line.
Figuring out a reasonable solution to this, I think, will be essential to getting more widespread adoption of platforms like these.
The only reasonable solution is to host everything, modulo requirements by law, and give users the tools to locally filter out content en masse.
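As a rough illustration of what "locally filter out content en masse" could mean, here is a hypothetical sketch: the node stores and relays everything, while the client hides items matching user-chosen hash blocklists and keyword rules (all names here are made up for illustration):

```python
import hashlib

# Hypothetical client-side filter: the node hosts/relays everything,
# but the user interface hides anything matching locally chosen rules.
class LocalFilter:
    def __init__(self):
        self.blocked_hashes = set()   # exact items the user never wants to see
        self.blocked_terms = set()    # coarse keyword rules, applied en masse

    def block_item(self, content: bytes):
        self.blocked_hashes.add(hashlib.sha256(content).hexdigest())

    def subscribe_blocklist(self, hashes):
        # e.g. a community-maintained blocklist the user opts into
        self.blocked_hashes.update(hashes)

    def visible(self, content: bytes, title: str) -> bool:
        if hashlib.sha256(content).hexdigest() in self.blocked_hashes:
            return False
        return not any(t in title.lower() for t in self.blocked_terms)

f = LocalFilter()
f.blocked_terms.add("spam")
f.block_item(b"awful video")
assert not f.visible(b"awful video", "anything")        # hash-blocked
assert not f.visible(b"fine video", "Totally Spam Reel")  # keyword-blocked
assert f.visible(b"fine video", "Cat Compilation")
```

The point of the `subscribe_blocklist` hook is that curation becomes opt-in and plural: many competing blocklists instead of one central authority.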
In a decentralized system you also skip the legal requirements, since you cannot enforce multiple incompatible jurisdictions at the platform level; individual users will be responsible for enforcing them on their own nodes, similar to how all you can do when accidentally encountering child porn is to clear your cache.
> In a decentralized system you also skip the legal requirements, since you cannot enforce multiple incompatible jurisdictions at the platform level; individual users will be responsible for enforcing them on their own nodes, similar to how all you can do when accidentally encountering child porn is to clear your cache.
But that's the thing: You don't skip it. You spread it to every user. They both have to deal with whether or not they are willing to host the material and whether or not it is even legal for them.
How many of us sympathise with the idea of running a Tor exit node, for example, but avoid it because we're worried about the consequences?
These platforms will always struggle with this unless they provide ways for people to feel secure that the content hosted on their machines is content they don't find too offensive, and/or that the traffic transiting their networks is not content they find too offensive.
Consider e.g. darknet efforts like cjdns, which are basically worthless because their solution to this was to require that people find "neighbours" they can convince to let them connect. That basically opens the door to campaigns to get groups you disapprove of disconnected by harassing their neighbours and their neighbours' neighbours, just the same as you can go to network providers on the "open" internet.
Secondly, there are several tiers of content. a) stuff that is illegal to host b) stuff that is not illegal but that you find so objectionable that you don't even want to host it c) stuff that you don't like but doesn't bother you too much d) stuff you actually want to look at.
I posit that a) and b) are fairly small fractions and the self-selection mechanism of "things that I looked at" will reduce that fraction even further.
And even if you are on a network where you randomly host content you never looked at, encryption can provide you some peace of mind (of the obliviousness kind), because you cannot possibly know, or be expected to know, what content you're hosting. Add onion routing and the person who hosts something can't even be identified. If Viewer A requests something (blinded) through Relay B from Hoster C, then B cannot know what they're forwarding and C cannot know what they're hosting. If neither you nor others can know what flows through or is stored on your node, it would be difficult to mount pressure against anyone to disconnect.
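The layered blinding described above can be sketched in a few lines. This is a toy, using XOR with a hash-derived key stream as stand-in "encryption" (a real system would use authenticated public-key crypto); the keys and names are hypothetical:

```python
import hashlib
from itertools import cycle

# Toy "encryption": XOR with a key stream derived from a shared secret.
# Illustrative only; XOR twice with the same key restores the plaintext.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

# Viewer A wraps a request in two layers: an outer one for Relay B,
# and an inner one only Hoster C can remove.
payload = b"GET blob 42"
key_ac = b"secret shared by A and C"   # inner layer
key_ab = b"secret shared by A and B"   # outer layer

onion = xor_crypt(xor_crypt(payload, key_ac), key_ab)

# Relay B peels only its own layer; what remains is opaque to B.
at_b = xor_crypt(onion, key_ab)
assert at_b != payload          # B cannot read what it forwards

# Hoster C peels the inner layer and finally sees the request.
at_c = xor_crypt(at_b, key_ac)
assert at_c == payload
```

B learns only "forward this blob to C", and C never learns who A is; that is the obliviousness the comment is appealing to.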
For the illegal content, especially in oppressive environments, you could install a Voluntary Compliance(tm) government blocklist on public-facing nodes and still opt to run an internal node in your network that uses encrypted connections to retrieve things hosted in other countries you're not supposed to see.
Anyway, back to filtering for decentralized content hosting. I think once you have a network it is a matter of managing expectations. You can't make content magically disappear. Platforms like youtube, twitter, facebook etc. have raised the false expectation that you can actually make things go away by appealing to The Authority and it will be forever gone. In reality things continue to exist, they just move into some more remote corners of the net. Once expectations become more aligned with reality again and people know they can only avoid looking at content but not make it non-existent things boil down to being able to filter things out at the local level.
I think you misunderstand the objection. Yes, encryption can mean you cannot be prosecuted for "hosting"/"transmitting" some objectionable stuff, since you can prove that you had no idea (at least that's the theory).
However, some want to be able to "vote with their wallets" (well, "vote with their bandwidth"). They don't want to assist in the transmission of some content; they want that content to be hard to find, and slow and unreliable. They have the right to freedom of association and don't want to associate with those groups. Encryption cannot guarantee that I won't help transmit $CONTENT.
I'm aware of that, but then you suffer the problem of people wanting deniability.
> Secondly, there are several tiers of content. a) stuff that is illegal to host b) stuff that is not illegal but that you find so objectionable that you don't even want to host it c) stuff that you don't like but doesn't bother you too much d) stuff you actually want to look at. I posit that a) and b) are fairly small fractions and the self-selection mechanism of "things that I looked at" will reduce that fraction even further.
That's true, but those sets pretty much only need to be non-zero to threaten people's willingness to use such a network.
Further, unless there is stuff in a), and stuff that falls into b) for other people but that you want to look at, such a network has little value to most of us, even though we might recognise that it is good if such a network exists for the sake of others.
This creates very little incentive for most to actively support such systems unless those systems also deal with content that we are likely to worry about hosting/transmitting.
> For the illegal content, especially in oppressive environments, you could install a Voluntary Compliance(tm) government blocklist on public-facing nodes and still opt to run an internal node in your network that uses encrypted connections to retrieve things hosted in other countries you're not supposed to see.
That's an interesting thought. Turning the tables, and saying "just tell us what to block". That's the type of idea I think it is necessary to explore. It needs to be extremely trouble-free to run these types of things, because to most people the tangible value of accessing censored content is small, and the value of supporting liberty is too intangible.
> Anyway, back to filtering for decentralized content hosting. I think once you have a network it is a matter of managing expectations. You can't make content magically disappear. Platforms like youtube, twitter, facebook etc. have raised the false expectation that you can actually make things go away by appealing to The Authority and it will be forever gone. In reality things continue to exist, they just move into some more remote corners of the net. Once expectations become more aligned with reality again and people know they can only avoid looking at content but not make it non-existent things boil down to being able to filter things out at the local level.
This, on the other hand, I fear is a generational thing. As in, I think it will take at least a generation or two, probably more. The web has been around for a generation now, and in many respects the expectations have gone the other way - people have increasingly come to be aware of censorship as something possible, and are largely not aware of the extent of the darker corners of the net.
Centralisation and monitoring appear to be of little concern to most regular people. People increasingly opt for renting access to content collections, with no guarantee the content will stay around, instead of ensuring they own a copy, and so keep making themselves more vulnerable, because to most people censorship is something that happens to other people.
And this both means that most people see little reason to care about a fix to this problem, and that they have an attitude that gives them little reason to be supportive of a decentralised solution that suddenly raises new issues for them.
Note that I strongly believe we need to work on decentralised solutions. But I worry that no such solution will gain much traction unless we deal with the above issues in ways that remove the friction of worrying about legality and morality, and that provide very tangible benefits that give people a reason to want it even if they don't perceive a strong need of their own.
E.g. BitTorrent gained the traction it has in two ways: through copyright infringement, and separately by promising a lower-cost way of distributing large legitimate content fast enough. We need that kind of thinking for other types of decentralised content: at least one major feature that is morally inoffensive and legal, that attracts people who don't care if Facebook tracks them or YouTube bans a video or ten, to build the userbase where sufficient pools of people can form for various types of content to be maintained in a decentralised but "filtered" manner. Not least because a lot of moral concerns disappear when people feel they have a justification for ignoring them ("it's not that bad, and I need X").
I genuinely believe that getting this type of thing to succeed is more about hacking human psychology than about technical solutions.
Maybe it needs a two-pronged attack. E.g. part of the problem is that the net is very much hubs and spokes, so capacity very much favours centralisation. Maybe what we need is to work on hardware/software that makes meshes more practical, at least on a local basis. Even if you explicitly throw "blind" connection sharing overboard, perhaps you could sell people on boxes that share their connections in ways that explicitly allow tracking (so they can reliably pass on the blame for abuse) to increase reliability and speed, coupled with content-addressed caching on a neighbourhood basis.
Imagine routers that establish VPNs to endpoints and bond your connection with your neighbours', and establish a local content cache of whitelisted non-offensive sites (to prevent the risk of leaking site preferences in what would likely be tiny little pools).
Give people a reason to talk up solutions that flatten the hub/spoke structure, and use that as a springboard to start making decentralisation of the actual hosting more attractive.
> How many of us sympathise with the idea of running a Tor exit node, for example, but avoid it because we're worried about the consequences?
Tor isn't the best example, because exits don't cache anything. So mainly, exit operators get complaints. And the exit IPs end up on block lists. Operators don't typically get prosecuted. Maybe they get raided, however, so it's prudent to run exit relays on hosted servers.
Freenet is the better example. The basic design has nodes relaying stuff for other nodes. In an extremely obscure and randomized way. Also, keys are needed to access stored material.
However, nodes see IPs of all their peers. Investigators have used modified clients to identify nodes that handle illegal material. So users get busted. There is "plausible deniability". But it's not so plausible when prosecutors have experts that bullshit juries. So users typically plea bargain. Or, if they use encrypted storage, they get pressed for passwords. Like that guy in Philadelphia.
Same goes for freenet and the like.
The first includes both legal and moral considerations, the second only moral ones.
My consideration of whether to share, part of the time, some slice of my home Internet connection bandwidth as a Tor exit node is almost entirely a legal one (I admit that time/effort may play a role too). I'd consider the moral aspect too, but I wouldn't have to think long to decide that (for me personally) the trade-offs are worth it (I could explain why and how, but I don't want to derail the discussion in that direction).
In fact I'd argue this goes for anyone, in some sense. Even if their underlying reasons align with the legal considerations (and they thus don't run one), it's a moral judgement. (In the worst case, there exist people who equate moral judgement with legality.)
I don’t think that’s always the mindset. Isn’t it reasonable for people to have the mindset of “I want to go to sites that don’t have content that I find objectionable”? This way websites can decide which group of people they want to cater to.
The thing is, legal requirements do not generally allow you to just clear your cache of the offending content; the company is not allowed to show it in the first place.
Channels aren't open to everyone, so it looks like they have manually allowed that?
Plus the discovery component is still hosted on websites subject to the networking effect.
I love the idea but one problem: who pays for it? It's a special case of the co-operative vs corporation problem. Without an individual's starting capital, how do you get off the ground?
>There's irreparable reputational damage associated with an algorithm libelously labeling something "extremist content" that isn't.
The damage is definitely repairable. You admit the algorithm is flawed and you tolerate exceptions to it.
The issue isn't AI vs human, it's transparent vs opaque.
Perhaps the Graph should be public domain. Perhaps too we are heading towards a world where reputation and legal identity are subject to casual destruction but there's no real barrier to starting over, much like when you die in a videogame.
The fact is that there is always a human in the loop. Without human supervision these algorithms deliver a small but significant portion of incredibly stupid results. So an actual human has to sit down, analyze these results one by one and decide what to do (in some cases just hardcoding the "correct" answer). The general public must be educated about this stuff so that responsibility is not muddled.
Incidentally, this also explains why there is zero interest in making rulebooks available, concise, searchable, etc. All of these would improve fairness, but rulebooks are actually an instrument of power, not of fairness, so existing power structures will typically oppose any such changes.
It's not like the AI has an absolute idea of what 'extremist' content is; it's just enforcing someone's idea of what it is. AI is trained on data, and whoever labeled that data is the person (or people) who is winning here.
Nobody is taking away your freedom of speech by deleting your video.
Nobody is mandated to provide you with a vehicle or medium for your speech.
Are you sure you are sold on the right outcome of the baker/LGBT wedding-cake case? How about a pharmacist not telling patients all the correct options, based on their theology?
How about a publicly traded corporation? Do they have a mandate to treat people equally? If they are picking a political viewpoint and removing customers because of it, what makes you think that their hiring practices are fair?
Google has a religion now, it has been baptized in the religion of intolerant left. Google is now theocratic, it will not allow blasphemous talk that challenges its religion.
Google claimed to champion net neutrality. Don't open the packet, they said to the ISPs. They want to resist the opening of TCP/IP packets, but when it comes to the content of videos, they want to play God. TCP/IP packet or video, let the legal system take its course; let the authorities tell you to ban something; don't play God on a platform that is valuable because of the sum number of people on it. YouTube is a social network; its value comes from people participating in it. Treat the people equally and be a neutral steward of the platform technology; don't push ideology. Anyway, Google has damaged its image too much now. It will never be seen with the same affection again, at least not by me.
The training parameters of that "AI" were set by a human too. Someone said to it "here are a bunch of videos that I PERSONALLY THINK are to be banned, learn from that".
Bureaucratic hell, as defined by Harry Harrison in the Stainless Steel Rat series, is the definition of humans as automata.
Nothing major online happens by accident.
The thing is, people only tend to notice it when it affects them personally (either they are the victim of the algorithm, or someone they know/like/support is). The world has long worked on irrational biases, which now are being used as the training data for decision-making systems which are subsequently declared to be "objective" because people believe an algorithm can't be biased. And increasingly, the mark of privilege is having access to a system -- applications, interviews, customer service, even courts -- which will use human judgment instead of an unreviewable algorithm.
For more on the topic I suggest the book Weapons of Math Destruction by Cathy O'Neil.
That means that it needs to be done by an entity outside of Google itself, and not in any way associated with or influenced by Google.
-They enjoy a de facto monopoly and are protected by the extreme cost, risk and time involved in building competing services.
-Finally, they have a potential for abuse (say, with selective censorship or politically biased algorithms) that could essentially curb the Constitutional rights of individuals
If these points sound familiar it's because they're frequently used when arguing for the nationalization of a private company. Since I think that's (currently) out of reach, I believe regulating Facebook, Google, et al as public utilities to be the next best thing.
Some people really do have a naïve trust in government. Free markets are the answer. Who has actually made a legitimate attempt to compete with Google or Facebook? What VCs are investing in Facebook alternatives?
MySpace was unstoppable – until it wasn’t. Yahoo owned search – until it didn’t. Perhaps there ought to be more bold entrepreneurship rather than calls for regulation.
Sounds to me that people are ok with just giving up and giving Facebook the win.
Don’t like Facebook’s dominance? Then challenge it. Don’t cop out and just let the government take control.
History is littered with great companies toppled by better ideas and execution.
It's naive to think that any one form of human organization, be it governmental or corporate, is somehow less corruptible than another. You're right to be on your guard against governmental abuses, but don't take your eye off the other balls in play.
What we've seen lately are instances where one company topples another and proceeds to commit the same abuses, only more effectively and at wider scale. When Facebook replaced MySpace, were its users really that much better off? Which company had fewer rules and enforced fewer content guidelines? When one company dominates the market and locks it up with network effects, what incentive does that leave them to play well with others?
You don't think any organizations have ever been any more corruptible than any other organizations?
None of these organizations should have been trusted implicitly to do the right thing for society at large. The burden of proof rests decisively with those who want us to believe that Google and Facebook are somehow different.
Are you really claiming that every government that tries to regulate corporations is going to wind up like China?
You know there are lots of governments around the world that regulate corporations, and most aren't anything like China.
"Some people really do have a naïve trust in government. Free markets are the answer."
Some people really do have a naive trust in free markets.
What are you actually proposing?
I'm with you. I don't want that, but at some point I expect to lose the argument. Google will cut their own throats with their smug "We investigated our decision and found it to be correct" pronunciations.
The fact is, any sufficiently dominant corporation is indistinguishable from a government. The more a company like Google behaves like a bureaucratically-hidebound public utility, the harder it will be to argue that it shouldn't be regulated like one.
An alternative is to classify Google as a common carrier, exempting them from the DMCA but preventing them from censoring or even throttling traffic. However, given that their business model is built around sponsorship, it is unclear how to also protect advertisers' interests. Trying to get government shackles onto Google simply seems too tricky.
It seems much easier to simply run Google with tax dollars and no advertising.
Google and Facebook combined have a total market cap (sum of all shares) of around 1.2 trillion dollars, which would represent only a 5% increase in tax revenue for America to simply buy all the shares.
However, the Government doesn't need to turn a profit: Google and Facebook combined spend only around $200 million dollars per year on R&D and operating expenses, which would be a rounding error on the tax budget.
I for one would welcome my files being hosted by the government, if we lived in a world where democracy weren't more utopian than flying pink unicorns; as a citizen I would have a slight chance of being respected and listened to, because I'd be a part of it, albeit a very small one, whilst with a company you have zero chance unless you're a stockholder or work there in some high rank. That's something to keep in mind next time they want to brainwash people about how good privatization of public property is.
We're slowly but steadily going towards a future where governments will first be owned by corporations, then will cease to exist or be relegated to a purely PR role (think about the royal families in nations still having them). That will likely mark the start of the worst period humanity will ever live in.
Let us simply label Google (search, yt, news) and FB as utilities and regulate them too.
I have watched hours upon hours of his videos (I've been a fan long before his PC controversy - I love personality theory and the twist he adds to them).
I'm pretty centre left as far as I'm concerned and his videos do not in any way promote anything nasty. He's completely upstanding. I have no idea why they'd ban his channel unless there was a coordinated flagging.
1. It's to add insult to injury while attempting to soften the blow
2. It's an attempt to deny that they have power to do what they did.
For the insult to injury: this is a technique that falls under "thinking past the sale". (http://blog.dilbert.com/post/129433801521/thinking-past-the-...) The context prior to the understanding part has basically put you in: "I've done something bad and now I'm being punished." The last phrase, "We appreciate your understanding.", then puts you in the position of understanding that they (plural people) would like for me to understand their decision.
In short, you're no longer in a position where you can really fight back directly on the issue at hand. You're reminded that you are fighting against an organization if you disagree. It's predatory and manipulative.
The second part: It's a manipulative attempt to prevent you from attributing ill will against the offending party. They're attempting to "soften the blow" because they appear to be reaching out.
On both of these possibilities, the biggest issue is that it's incredibly manipulative, and it's much more insulting when you notice it. The organizations and people who use those statements should be doomed to be constantly rejected in everything they do and want, in this same passive-aggressive stance, for the rest of their lives. It's incredibly anti-social behavior.
Unfortunately, we don't have social punishments for shitty behaviors like this.
Last to note: This is the equivalent of the "apology" "I'm sorry that you feel that way" or "I'm sorry this didn't turn out the way you had hoped."
If you say nothing, there's nothing to grab on to.
It's like online dating, everything you say is something that someone will dislike about you.
I don't know, but you're right it is very common, and infuriating.
"Your call is important to us. Please hold."
I think allowing low-level minions with sadistic tendencies to express that sadism via a kthxbye-but-actually-fu here and there is used to reward them for an otherwise boring and unfulfilling task. In this case it is a bit indirect, because it is the developer who wrote the code, not necessarily a clerk at the counter or a customer service representative in a call center.
I don't remember the incident exactly; I think it was when someone was fired after some public incident at one of the tech conferences (PyCon, I believe), where HR, commenting on the firing, said something ridiculous like "we reached out to the developer and told them we'd be letting them go". Which I remembered because it sounded like such a massive and rude passive-aggressive "fuck you".
Google’s whole future business model seems to be based on getting deeper into our lives, into our homes, into our vehicles and gathering ever more data about us so they can more effectively help others to market to us. Many of us have been completely okay with that in past because we trust Google. They’ve worked hard to earn that trust. But with the public shaming and firing of James Damore, the blacklisting of “non believers”, the demonetizing and deleting of YouTube videos that violate the “Code of Conduct”, etc. the bonds of trust have been shattered. And once trust has been shattered, it is nearly impossible to re-build.
Yes, it does. And they're not entirely wrong. Consider this video in which Uber CEO talks with someone like a real person instead of "respectfully" brushing him off:
It was a PR disaster. If he'd just ignored what's-his-name as an insignificant peasant, nothing bad would have come of it.
Word is at some point it became a dumping ground for Google employees who transferred in because they wanted an easy job where they could use the amenities of the YouTube offices in San Bruno.
It's their way of saying that they know you won't find their decision popular but they hope you won't pursue it any further.
Google does have a habit of making jokes in their software; the Chrome crash page comes to mind. So I don't really find it very surprising to see this kind of message.
To be clear, it's really only an insult when you don't know why your content was removed. If you know why, and can see their side, it makes sense and wouldn't seem so insulting. But in a case like this, it comes off as insulting because of the actual situation.
Removing the text altogether and simply stating the fact would have sent the intended message perfectly.
And finally, it was likely just a developer, not a PR drone.
Or, it could be that, long long ago, that phrase actually conveyed sincerity that won customers.
Soon, that became the next big customer retention tactic to apply. Next, it became cliché. Now it is just grating on our senses to hear the fake sincerity echoing from these huge organizations all around.
> Ironically, by deleting years old opposition channels YouTube is doing more damage to Syrian history than ISIS could ever hope to achieve
> Also gone are the dozens of playlists of videos from Syria I created, including dozens of chemical attacks in playlists by date
> Keep in mind in many cases these are the only copies of the videos, and in some the channel owner will have died, so nothing can stop it
Kind of sad to see video evidence being deleted by YouTube. It would be nice if they offered some sort of option for political videos like these, especially ones whose original uploader was killed, to be downloadable with metadata (upload date to YouTube, YouTube username, etc.) so anyone can reupload them elsewhere.
Another case where I wish TPB had made their own YouTube clone already. I'm sure they would not have taken down these sorts of videos.
Wondering where WikiLeaks is in these sorts of cases. Do they download these sorts of videos? That raises the question: why don't they? It seems right up their alley. I don't always agree with them, nor do I consume their content, but at the very least, for a site like theirs, it would make sense to archive YouTube and other politically sensitive videos, no?
"as long as they don't abuse copyright"
Would we delete videos of the liberation of concentration camps if there was Nickelback music playing in the background? This just demonstrates how successful media companies have been in distorting the true purpose of copyright laws: to promote science, art, and culture for the public's benefit. It does not exist for fairness or personal gain. Copyright laws should be changed to better reflect this. Nobody should be able to silence any information that has a public benefit.
You're forgetting that 'promote...' means giving the creator control of that content for the purpose of limiting access and making money. Promote in the sense that it becomes possible to actually sell artistic works like commodities. And then, once the value has been extracted, the public can do what they please with it.
I think this was the video, but it is obviously back up now: https://www.youtube.com/watch?v=N1KfJHFWlhQ
While I agree that would be the most sensible course of action, if this actually happened right now, the whole video would be deleted in both cases, automatically (assuming it's clear enough to trigger detection in the first example).
As others said, the Internet Archive may be a good option for these videos. I wouldn't mind writing a system for backing them up to archive.org, but I'm not sure how I would detect them. Marking those videos requires the user to know they should be marked, which just moves the question to how they would know.
A YouTube clone that uses a clone of YouTube's infrastructure is expensive, but what about a distributed p2p YouTube clone?
Obviously it's hard to quantify, as it doesn't exist yet, but I think it's technologically feasible.
That would be more expensive because
- You have a much higher failure rate of the storage media as people say "I'm running out of hard drive space. What should I get rid of?"
- You need to recruit those people to give up a resource that (unlike the spare compute cycles that SETI uses) they are likely currently using.
- You have to convince people to trust you to put arbitrary video content on their hard drives. Therefore, you need to have some process for deciding what video content is objectionable enough that you won't store it.
Not really, because data is replicated among many people. People can delete it and other people will still have it.
> You need to recruit those people to give up a resource that (unlike the spare compute cycles that SETI uses) they are likely currently using.
It's no different than people seeding torrents or people just using ipfs. Simply accessing the system would transparently increase availability. In fact, ipfs is probably suited as-is.
> You have to convince people to trust you to put arbitrary video content on their hard drives. Therefore, you need to have some process for deciding what video content is objectionable enough that you won't store it.
No, data will become more accessible as people consume it. People only need to understand how the system works; they don't need to trust "me" (whoever you're referring to as "you" in your post).
As I said, ipfs probably works as-is.
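For what it's worth, here is roughly what "ipfs as-is" could look like from a script, using the community `ipfshttpclient` library against a locally running IPFS daemon. This is a sketch, not a tested pipeline, and the file name is a placeholder:

```python
import ipfshttpclient

# Talks to a local IPFS daemon on the default API port (127.0.0.1:5001).
client = ipfshttpclient.connect()

# Publishing: adding the file returns a content hash (CID) that anyone
# can use to fetch it from any node that has a copy.
res = client.add("syria-footage.mp4")
cid = res["Hash"]
print("share this:", cid)

# Preserving: on any other node, pinning the CID fetches the blocks and
# keeps them from being garbage-collected, increasing availability.
client.pin.add(cid)
```

The key property for this thread is the second half: anyone who cares about a video's survival can pin its CID, so availability grows with interest instead of depending on one host's content policy.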
Yes, so you are designing a distributed system with a higher failure rate of the underlying media. That means you need more replication, so you need more people to donate space.
Same reason Bitcoin and its ilk can't scale. Decentralization imposes superlinear growth in bandwidth and storage requirements as peers join the network. Technology can't keep pace, period.
Make one. Add it as a Warrior project to Archive Team; save the videos and re-upload them if they are deleted.
Unfortunately we all need to participate, because https://medium.com/message/never-trust-a-corporation-to-do-a...
Can't people just use LiveLeak for that kind of video?
Use youtube-dl to download the videos to your own server and back them up yourself. Yes, this is awful and sucks on so many levels, but please, please, back up data.
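A minimal sketch of such a backup using youtube-dl's real embedding API; the channel URL and backup path are placeholders, and the option names mirror the CLI flags (`--write-info-json`, `--write-thumbnail`, `-o`):

```python
import youtube_dl

# Keep the metadata too: title, description and upload date often matter
# as much as the footage itself for later verification.
opts = {
    "writeinfojson": True,   # save a .info.json next to each video
    "writethumbnail": True,
    "outtmpl": "/backups/%(uploader)s/%(upload_date)s-%(id)s.%(ext)s",
}

with youtube_dl.YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/channel/CHANNEL_ID"])
```

Run from cron against the channels you care about, this at least guarantees a local copy survives a takedown.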
I realize that minimizing human labor is a big part of how these sorts of business models achieve their profitability, but problems like this aren't going to go away as long as that's the norm. And I don't just mean poor explanations for policy decisions either. The core issue is bigger than that imo.
The information age has privatized a lot of the modern 'public' social/cultural spaces. For nations that value both the freedom of speech and the preservation of historically/culturally significant speech this is problematic. It reduces the public's ability to express itself but also their ability to look back on old expressions and learn about the history or cultural paradigms behind them.
This isn't really supposed to be a rant at Google specifically. They're just the topic at hand, so they're the easy punching bag. In general, customer service aside, I think they do good work, and more importantly they exercised the necessary foresight and resources to develop their products into what they are today. I'm by no means implying we should socialize social media... no pun intended. But I do think there needs to be more discourse about how these trends will affect the future of speech and historical censorship. Right now it's just a modern problem in its infancy, but decades from now, people who want to see visceral content depicting firsthand experiences from events like those happening in Syria, or the Arab Spring, are going to be getting censored history. What if China started pressuring foreign companies, via benevolent coercion such as financial incentives, to implement systems that made finding information about Tiananmen Square more difficult? The privatized nature of these platforms makes this sort of attack easier as well. I don't have a good solution, but that's why I think there needs to be more dialogue about the future of online media in general and what direction we want to steer it in.
I calculated the cost to be $6bn/yr, assuming the fully loaded cost of a full-time reviewer is $20k, which dwarfs the revenue YouTube has.
So please, lay out a plan that actually works with the economics of YouTube.
Either way, totally agree that it's a tragic situation.
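For reference, the parent's figures alone imply $6bn / $20k = 300,000 full-time reviewers. A back-of-envelope sketch lands in the same order of magnitude; note that the 400 hours uploaded per minute (a commonly cited 2017 figure) and 2,000 reviewer work hours per year are my assumptions, not the parent's:

```python
# Back-of-envelope, not an official figure: all constants are assumptions.
UPLOAD_HOURS_PER_MINUTE = 400      # commonly cited 2017 YouTube figure
MINUTES_PER_YEAR = 60 * 24 * 365
REVIEWER_HOURS_PER_YEAR = 2_000    # one full-time reviewer, 1x playback
COST_PER_REVIEWER_USD = 20_000     # parent's fully loaded cost

video_hours = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_YEAR
reviewers = video_hours / REVIEWER_HOURS_PER_YEAR
cost = reviewers * COST_PER_REVIEWER_USD

print(f"{video_hours:,} video hours/yr")   # 210,240,000
print(f"{reviewers:,.0f} reviewers")       # 105,120
print(f"${cost / 1e9:.1f}bn per year")     # $2.1bn
```

Double review of flagged content, higher labor costs, or reviewing at slower than playback speed quickly pushes this toward the parent's $6bn; either way, human review of everything is billions per year.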
The only sympathy I have for Google is that trying to separate the good vs. "evil" (as in "Don't be evil.") content is a monumental task that machine learning will probably never be capable of performing. So they're left with the choice of spending an inordinate amount on human review and detailed research or just make wildly over-broad removals.
I'd rather they leave up more than less, but they tried this approach and it almost lost them every major advertiser. Continuing down that road could lead to the whole platform losing viability. Maybe some would like that outcome, but these historical videos would be just as lost.
Maybe we'll see the pendulum swing back in an effort to reach a more reasonable middle ground.
If YouTube can't handle the load, then they shouldn't claim that they can. At the absolute least, they need a usable appeals process. If they can't do that, then they need to own up to it and stop allowing anyone to upload anything.
YouTube is a company that is beholden to advertisers. YouTube wants/pays for videos that they can put ads in front of. If your content is not the type of content YouTube can wrap in ads, and you need longevity for your videos, YouTube is not the platform for you. YouTube never claimed to be an everlasting video storage space for all your video needs, so I'm not sure why you're expecting that of them.
Video has become politicised because it is a popular medium for political topics and one which can be rapidly produced and consumed. Advertisers are perhaps influencing Google's decisions on this, but they are equally political. The debacle with the diversity memo is one example of biases inside the company. There have been many more examples over the last few years, such as the across-the-board censorship of conservative commentators.
It goes on in Facebook, Twitter and so on. So we have to wait for competitors to turn up, how many years will that take? Is that a responsible route considering the foothold these companies have?
ALSO: I don't know about you, but this so-called "censoring" of conservative commentators doesn't seem to have worked. I see more young people skewing right/centre than I did 10 years ago. You can call it censorship if you want (because sure, that's probably the best term), but implying that a company selectively hosting content is the same thing as limiting the free exercise of speech is absurd.
Probably for this reason: http://slatestarcodex.com/2014/04/22/right-is-the-new-left/
That's an impossibility because evil depends on perspective.
This is why we have free speech rights in America. If we let others limit our free speech based on their perception of evil, then atheist speech and LGBT speech would be banned. Pornography would be banned, along with "offensive" music. Hell, books like Huckleberry Finn would be banned.
This is why we cherish free speech rights in America (or we used to), and why we have the saying "You have the right to be offended".
At some point it will be as easy to create fake videos as it is to create fake text. It is unrealistic to expect that people who aren't information-literate about text will be literate about video, but I hope there's a way for journalists to move away from YouTube by then.
I was googling for baking videos the other day when all of a sudden most of the hits I got were auto-generated videos uploaded to YouTube. They just had some panning stock photos in the background with text scrolling on top and super generic music; they must be really simple to generate.
There were hundreds, maybe even thousands, of them from various accounts, all conveniently linked to each other making more of them appear as relevant in the rightmost column.
Documentation is pointless if it can't be distributed and used to effect change.
This comment by jacobr suggests there is a risk of lost footage: https://news.ycombinator.com/item?id=14998452
You're talking as if every human rights organization has some magical means of preservation other than uploading to YouTube.
If it's so important, these orgs should maybe build their own platform that is purely about video longevity.
Expecting a private company to host content they don't want to host is silly.
Tools exist already to upload to S3 from practically every device, especially Android.
Curation would be much harder, but there is a lot of money in philanthropy and I could see some deep pockets contributing to that.
YouTube is fine as a means of promoting and gaining awareness - the world should know about these things - but it's not rational to expect a corporation (i.e. the people within it) to act in any interest other than its own.
Seems like this could be an interesting infrastructure non-profit for YC to fund.
At the least, we probably need “upload video to archive.org” mobile apps to make this as useful to journalists in the field as YouTube currently is.
If the Archive grows as a journalistic distribution channel, it might then face YouTube's issues of copyrights, piracy, and other criminal use. However, the Archive could apply goals that are more compatible with journalism than YouTube can. Maybe sufficient philanthropic support could make this possible.
YouTube banned the whole channel for extremist/hateful content. Probably some of the videos/titles told the AI that the footage is extreme or some sort of glorification.
I appealed on some form but don't even bother anymore.
I hope YouTube as a video platform (not streaming) gets a serious competitor.
I have been amazed at how little importance people put on this kind of video. You have video evidence of crimes with faces appearing clearly. It can take 5 to 10 years for such events to calm down enough to reach a point where the crimes can be prosecuted.
And it is hard to blame YouTube for that. They are considered the channel for Lady Gaga and silly cat videos. Hell, I know 3-year-old toddlers who browse YouTube unsupervised.
In many places YouTube is criticized for promoting violence and extremism by leaving these videos up. I feel bad for them; they are between a rock and a hard place.
I just hope that the censored videos are not totally deleted from their servers. They should have someone reviewing criminal videos and keeping them at the disposal of judicial authorities, but even that opens a whole can of worms: do you obey only US authorities (who do not care about war crimes in other countries)? Do you obey all world authorities, including the Saudis and the Chinese?
Anyway, that's YouTube's problem, not ours. Simply put, helping prosecute war crimes is not part of YouTube's mission, so do not trust them with it. To anyone who feels this is important content: use youtube-dl and keep backups. Make torrents of it, share it around, make sure it does not disappear.
And when some NGO finally realizes that this content is precious, pump up your upload bandwidth and fill their servers.
I would argue that there is a key difference in customer support which makes me much more confident in Amazon than Google.
Google has non-existent customer support for the public and virtually non-existent customer support for paid customers. If something goes wrong with your Google product your best bet - even as a paid customer - is to contact someone you know at Google. Going through the official channels is a waste of time.
Amazon, on the other hand, will bend over backwards to make sure you're satisfied - even if it loses money in that transaction. Refund decisions are mostly automated at this point, although human support for both vendors and buyers is there if needed.
The problem that needs to be solved is how to educate people into not being lured into those organizations DESPITE having access to those materials... This kind of censorship is just as STUPID as banning drugs like heroin and cocaine (instead of just making them unavailable to children, or without a "license"), or the "war on drugs".
Imho the problem comes from the fact that corporations try too hard to be "democratic" about things and to "please the majority". But this is not a good idea: sometimes a 99% majority is against freedom, and they are wrong, despite being the 99%. The majority should be opposed and freedom protected even when the cost is someone's blood. For me personally, there are these words from my native country's national anthem: "life in freedom or death [for all]"... and I will sure as hell fight, die or kill for them.
That remains true even if their algorithm or human criteria for determining what is and isn't extremist material sucks.
Freedom of speech is not inherently the highest value there possibly is. To treat it as such, you would have to defend something like: "Yeah, even Hitler had a right to say what he thought, and it's a good thing he had it, despite the consequences that ensued." In my opinion, I would rather restrict the freedom of one genocidal maniac than see the death of 85 million people.
Using your definition of freedom of speech, we could easily justify not outlawing murder: "We should just educate people not to murder each other instead of banning it." Maybe banning can actually have a chilling effect on, e.g., hate speech or heroin abuse? (While I'm for banning heroin, I advocate for providing services that ensure safe consumption (e.g. needle dispensaries, consumption rooms) and help prevent (further) abuse, instead of jailing users.)
First, "limited free speech" is not "free speech" anymore. Second, you're just not going to be able to "define" things anymore as you automate more and more processes and replace them with AIs (the definition will more and more become "the practical implementation of the machine learning filtering algorithm and the choice of training data"; anything else will be an "approximation", since you won't be able to prove much about these statistical algorithms). The choice will be either (a) full freedom plus massive investment in mechanisms to manage the negative consequences of that freedom (let's start with education: not only making it free for all at all levels, but also giving people free paid time to educate themselves, instead of working them to dumbness 8+ hrs/day and then expecting them to tell real news from fake news...), or (b) giving up freedom and living in a "well managed totalitarian system" with "freedom for distractions and sex only", or some other deal like that.
> we could easily justify not outlawing murder
No, there's a clear criterion: reversibility! If I say something incredibly hate-inciting, I can be proven wrong, and I can even retract my words and say "I changed my mind" later; that should be OK. If I murder someone, that can't be undone: even if I say I changed my mind about murdering him, he's still dead and I've still proven that I'm capable of murder (imho all people are, but that's a different discussion...).
Freedom of speech is a very high value because sometimes extremist views (such as "we should rebel against the British") are a good idea, yet if there are anti-extremism laws, this sort of idea would never be surfaced, because extremism can be defined however the enforcer chooses to define it.
Why do you think this also applies to murder?
Current solution is to just babysit general population by essentially censoring information.
Because kids usually don't want to go to school, yet they do. And luckily, with a good curriculum, the school system can teach critical thinking with significant efficacy. (And it could be added to Common Core.)
Corporations aren't trying to please the majority. They don't care about the majority. Besides, the majority wants free speech.
Most americans don't want jobs being sent to china, but the elite do. Which side do you think corporations chose?
Youtube and the rest of social media are censoring because the WSJ, NYTimes, etc have been pressuring them to. And the WSJ, NYTimes and the traditional media doesn't represent the "majority", they are the mouthpiece of the elite.
Think about it: for nearly 10 years social media was highly "pro-free speech". Then the WSJ, NYTimes, etc. did hit pieces against social media and put pressure on them, and all of a sudden it's relentless censorship.
Remember, before the Declaration of Independence our founding fathers were terrorists/rebels. I don't mean this as a snappy hollow comparison. I'm saying that, fundamentally, you can't distinguish between a US soldier recruitment video and an ISIS soldier recruitment video without applying a moral context. How would an AI ever do this? And even if it could, whose moral retelling is the right one?
Better in my mind to stay out of the censorship game altogether and promote a forum that is inherently structured in a format that incentivizes accuracy over emotion.
Somehow I doubt US recruitment videos feature Englishmen being decapitated as a job perk.
They need to moderate because they are centralized, and their revenue demands it. We, as a society, need to create a better option. Not just another YouTube, but a seamless decentralized solution.
Users that enable viewing of certain tags can't complain then, and Google only needs to put up enough legalese when enabling that content.
They're already doing that to an extent with mature content, so there's that.
Allow people to choose a content level, just like they choose a security level in browser settings.
1. Legal content. May include content that violates YouTube's content policy but is legal in the USA or the country of the viewer. Maximum freedom of speech, and maximum ability to see content that you may find offensive.
2. YouTube content policy met. Content that is legal and meets YouTube Content Policy.
3. Legal, meets YouTube content policy, and meets a certain org's taste. Like when you pick a charity to donate to when you shop on smile.amazon.com, you can select the org whose bubble you want to live in: ADL, Focus on the Family, Skeptics, etc. The org bans content, and it is banned only for people who opt into that blacklist on YouTube.
4. When users are not logged in, they get the AI-filtered list, but can select the "all legal" or "all that meets content policy" filters even when logged out. All other bubbles are available to logged-in users only.
Advertisers can opt into certain bubble if they want, or opt out of certain content e.g. content deemed inappropriate by the AI?
How does that sound YouTube?
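The proposed levels reduce to a simple filter. Here is a sketch of that logic; everything in it (the names, the `Video` fields, the level semantics) is hypothetical and does not correspond to any real YouTube API:

```python
from dataclasses import dataclass, field

@dataclass
class Video:
    legal: bool
    meets_content_policy: bool
    banned_by: set = field(default_factory=set)  # orgs that blacklisted it

def visible(video, level, chosen_org=None):
    """Each level only adds restrictions on top of the previous one."""
    if not video.legal:
        return False  # level 1 baseline: only legality is enforced
    if level >= 2 and not video.meets_content_policy:
        return False  # level 2: YouTube content policy applies
    if level >= 3 and chosen_org in video.banned_by:
        return False  # level 3: the bubble you opted into applies
    return True

borderline = Video(legal=True, meets_content_policy=False)
print(visible(borderline, level=1))  # True: "maximum freedom of speech"
print(visible(borderline, level=2))  # False: content policy kicks in
```

The point of structuring it this way is that the strictest the platform ever gets is what a user explicitly opted into; the default can stay sanitized without anything being deleted.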
Don't government security agencies want to know who is watching extremist content and who is not interested in it? How would we know who the extremists are if they fall back to person-to-person, in-person communication?
I think a good strong case can be made to advertisers that their ad will only be served to people opting into a certain bubble. Or reverse of that i.e. show my Ads to all people except those who are in this bubble. Inclusion list and exclusion list.
What advertiser is going to want to advertise in front of ISIS/Neo-Nazi videos?
Jokes aside, it doesn't matter if advertisers avoid certain "unpalatable" bubbles so as not to be associated with those channels; that's perfectly fine.
You seem to be concerned about extremist videos making money; I don't care about that at all. I just want all videos to be available unless the legal system demands their removal after due process. Present sanitized content by default, and present all content if explicitly requested, where an action from the user says they want to see potentially offensive content.
Advertisers should be allowed to choose the channels they advertise on. Some may choose to advertise on a default channel before the content is flagged. Let the buyer make the decision. Why is YouTube letting certain vocal ad buyers decide for the entire ad market?
2. YouTube recoups the cost and makes a tidy profit from ad revenue.
3. Videos you can't put ads in front of can't be monetized.
4. Videos that can't be monetized still cost YouTube to process, host and serve.
5. As such, every non-monetizable video hurts YouTube's bottom line.
6. Why would YouTube want that?
>As such, every non-monetizable video hurts YouTube's bottom line.
Even a non-monetized video is still eyes on the screen for Google. Netflix claims to compete with books and libraries for its viewers' time; the same logic applies here. Yeah, Google may not show an ad on that video, but the next one in the autoplay/recommended list is still going to ring the cash register. It's actually better for Google, because they may well show ads without paying the demonetized content creators, whose videos act as leads into YouTube. These political videos are often what sends a reader to YouTube from non-YouTube sites, which is much more valuable to Google, and they get away with paying nothing for it. A lead-in video is worth much more to Google than a video in the autoplay list. Not paying for leads while getting paid for ads on subsequent videos: a very nice win-win business model Google has developed for itself.
And I have a hunch that the three-letter-acronym government organizations tasked with keeping us safe would rather know who is interested in terrorist recruitment videos, and track them, than just remove the videos and let opinions fester at an isolated individual level (lone wolf).
Freedom of speech only says the government can't stop your speech. It doesn't say that private organizations have to provide you a platform. Youtube also has freedom of speech. They have the freedom to filter and compile videos that they like and only show those. It's also not youtube's responsibility to optimize their site for helping the NSA to track terrorists.
If I create a website that allows people to upload videos, it's perfectly fine for me to filter those videos and only show the ones I like. It's my website after all, I am allowed to control what is on it.
It had, for ages.
>Youtube also has freedom of speech. They have the freedom to filter and compile videos that they like and only show those.
Sure they do. I wish they were honest about their political ideology before they touted the platform for all to come and participate.
>It's my website after all, I am allowed to control what is on it.
Sure you are. I just wish you had advertised it as such before content creators invested time and money into the platform, creating a user base for you. That's a bait and switch, no different from changing the license from Apache to AGPL in version 27.8.3 of your successful GitHub project.
A private citizen, a publicly traded corporation, and a government represent three different levels of permissible discriminatory behavior. A private person or business can employ any discriminatory practice they see fit, as you yourself say. A government is held to the highest standard of equality for all. A publicly traded corporation is somewhere in between the government and a private citizen.
Youtube doesn't hide the fact that it bans certain content. And this page isn't new, in fact it looks nearly identical to how it did in 2013.
Because it's about controlling narrative, controlling propaganda and giving the "media/news" space back to traditional media.
That's what people have been asking from sites like reddit and HN for years now. Give people the option view the raw threads ( uncensored ) and the moderated ( censored ). But neither are interested or have indicated they will. Instead, on reddit at least, there is more and more censorship.
> Advertisers can opt into certain bubble if they want, or opt out of certain content e.g. content deemed inappropriate by the AI?
Do you really think advertisers care? Do you really think corporate America cares? They don't have morals. China is a brutal dictatorship, and yet advertisers and corporations have no problem doing business with China.
It's simply a matter of control. Who gets to decide what you and I see. Do the masses get to decide for themselves and control what they see or do the small group of elites? Per usual, the elites won and they get to decide.
IMHO the gradual increase of (self-)censorship in the popular Internet is worrying --- one of the most compelling things about the Internet as it existed was that, from the safety of your own home, you could see and experience things that would otherwise be impossible to access. Now it seems it's turned into a massively commercialised effort of "curating" content so that it doesn't offend anyone, and only results in more profits for advertisers.
If Google/people at Google made an algorithm to intentionally delete criminal evidence, that would qualify. Having an algorithm that deletes lots of things, and happens to delete evidence does not.
That should get past any "it woz the algorithm that did it!" arguments about intent.
A contract can't override criminal law.
That seems like such a long time ago. Since then my attitude has changed to being mostly hostile towards Google, with every such event.
Google should have never entered the "content game" and should have remained a neutral search and distribution (YouTube) platform. Once it went down the path of being a content company, it started "compromising" in all sorts of ways that were terrible for its users.
I wonder if the higher-ups have even noticed this change in attitude towards them, and if they did, then they've probably decided that making money is more important even if they become the Comcast of the internet (most hated company).
Like just because their gateway won't give you access to it doesn't necessarily mean that the bits have been scrubbed on the back end.
Also: here's a project to archive this information.
They don't provide the judgement because that only invites attempts at explanation and negotiation. They don't want to spend time on a careful review of every contested video; they want to make a usually accurate final judgement with the minimum time investment possible (e.g. 5 seconds per video), and they don't want to spend resources reading all kinds of reasoning and appeals, so they don't. And it is their right to do so: they can completely arbitrarily choose which videos to host on their site and which not.
It is indeed YouTube's right to vaporize any bits at any time. But when they are the leading video platform on the entire web by a huge margin they need to at least adequately present the reality of their content guidelines to users in countries like Syria who are probably not focused on researching alternative video hosts while trying to document chemical weapons attacks.
Seriously, Google, Twitter and FB massively need to ramp up their customer service and stop externalizing the costs of a lack of support onto society. And there are many "costs": people being actively harassed and intimidated, sometimes to the point that they are afraid to leave their house, due to hate speech or doxxing; a loss of historically relevant information, as in this case; people locked out of vital emails or their businesses (e.g. when their Gmail account gets closed due to copyright violations on YouTube)...
This is not going to happen; the whole point of their businesses based on offering free massive online services is that they are dirt-cheap by being run mostly automatically.
No, the only way to fix the problem in those juggernauts, and protect the tiny individuals from getting caught and squashed in their wheels, is the mechanism that governments use to protect citizens from the worst effects of bureaucracy: having an ombudsman. A semi-independent service to receive complaints of severe abuse by the main service, and for which the primary goal is protecting users, not reducing costs.
In some sense, this is how their PR department operates: they'll bring human attention and put in all the required effort to fix an unjust situation, to clean up the company's image. The difference is that currently the unjust situation needs to become a scandal first, as you said, whereas an ombudsman would be required to examine all applications (either accepting or rejecting them) as part of their official mandate.
Yeah, but who would finance the ombudsman? To service a country like Germany, I'd bet it needs around 2,000 FTE minimum (Facebook alone is opening a new, additional 500-FTE centre right now, and that's just for deleting the worst of the worst hate speech and porn). That's around €5M per month.
Having it paid for by taxpayers is the true manifestation of cost externalization, having it paid for by the services quickly leads down to "do whatever $company wants", and having it paid for by users leads to service only for those who can afford it while leaving the poor and most vulnerable persons in the rain.
Modern society is looking a lot like time-compressed feudal eras, with corporations taking the role of noble families; imho the time of powerful independent bourgeois professionals, thriving under the rule of law in nation states, is coming to an end. Maybe we should start looking at the medieval ways of organizing a fair society, at least as the starting points for the new social structures that will be unique to the digital era.
They should find a way to host the content somewhere else.
Something is wrong about that.
I'm just saying the notion that private companies (on the internet or not) have almost zero responsibility and we're subject to their whim is wrong.
I guess that's where the rule of law is supposed to enter the discussion.
Privatizing it was a political decision.
This is demonstrably untrue. Minitel was already much more than that, despite being designed and implemented by a division of the French Ministry of Posts and Telecommunications.
Or were, some are dead now, and have taken their videos with them.
Not to blame the victim, but at this point most Google services have not shown themselves to be reliable, especially if you need some kind of thinking human behind a decision.
You could also save it to Google Drive or other "Cloud Backup" solutions like OneDrive/Dropbox
But I guess hindsight is 20/20, and I would probably have trusted YT more than I should
The thing is, it makes perfect sense from their side - they will make people angry, but why would they bother if those people can't go anywhere else?
I'm starting to feel that a competitor providing the same quality of service while allowing all kinds of videos has a chance to succeed. It's OK to have children's videos, porn and Syrian documentation on the same platform, as long as you can filter - maybe have some sort of "curiosity" slider with children's content on one side, YouTube-style content in the middle, and all content on the other side. Also some category toggles... If you're unhappy with the current selection, just take a few minutes of your time and change your preferences.
Should we get to see the training data used and labels?
Or is this the modern day equivalent of credit score algo, something that can have huge impact on lives, but you are not allowed to know what it is.
This is bad.
Can't wait till they are sued out of their bubble.
I miss the days of "don't be evil".
Give evidence to the courts or police. Don't upload it to a video entertainment site and expect it to stay up, despite skirting their rules.
Corporations control what information is passed to people and create their own version of reality by blocking what they don't agree with.
I know it's AI, but it seems that Google's appeals process just rubber-stamps the AI's decision.
People should read Noam Chomsky's Manufacturing Consent; here's an interview about it from 1992.
To be honest, if you have evidence of a war crime, I hope your plan to seek justice doesn't depend on Youtube.
The difference is in the audience's mindset - which is only partially influenced by the uploader's intentions, and partially by how other pages and channels link to the video and present it, and partially by historical context (the same content can acquire a different interpretation five years down the road). Machine learning cannot be expected to emulate that.
Guess they need to change to "information that we and our advertisers agree with."
Yes, I know they are different companies under Alphabet, but it doesn't matter. Google has become a monster: too big, too powerful.
It's just marketing. If you really want to see what the actual "mission" is, read the TOS. Google, Facebook, Twitter and co like to boast about their humanitarian and humanist stance, the Apple way, when it comes to their relationship with their users, but that's all a lie. The moment their users' needs don't match their financial interests, all bets are off. The shit-storm triggered by a few outraged online publications and advertisers a few months ago is a demonstration of that fact.
Independence and freedom of speech online have a price, and "users" are going to find that out the hard way when Google refuses to host their content for political reasons.
People have already forgotten that the tech to share content online already exists. It's called RSS, and Google, Twitter, Facebook and co want it to go away.
Eeeeeeeeeeh....I don't know about that.
Lots of corporations today target "owning" a certain aspect of humanity. Facebook "owns social", Google "owns search", and LinkedIn is having a jolly good swing at "owning recruitment". Youtube wanted to "own video" and by and large it has succeeded. I'm not sure they get to have that position consequence free though.
I'm increasingly of the opinion that companies that manage to pin an entire market implicitly take on a social responsibility, and lots of them are not shouldering it appropriately.
If you film people getting shot at in a demonstration and want to get the word out, chances are you use a popular social network. You might not have any further knowledge, or you might not be able (imprisoned, fleeing, or dead, like so many Syrians) to put the video anywhere else.