YouTube suspends account for linking to a PhD research on WPA2 vulnerability (reddit.com)
801 points by decrypt on April 14, 2021 | 290 comments



>Anyhow, I truly believe humanity has to rollback to operating at a human scale.

>Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to choose what is right and wrong.

Couldn't agree more.


The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube? Would that work with the current society's attention span?

Is YouTube as we know it actually feasible without AI-based moderation? Can it be a sustainable business with an army of human moderators and/or a drastically reduced scale as a result?

I don't know, but I have my doubts.

Yes, moderation/suspensions can be a pain, and pointing out embarrassing mistakes is important to add pressure on Google to put more resources into it. At the same time, YouTube is full of great content and content creators. Anyone can upload a video and make it viewable by billions, instantly. I think we shouldn't forget how amazing this actually is.


I think the argument is framed as wanting a human during a takedown dispute, not pre-approval. There's no need to treat this as an absolute. Why are human contact and reasonable expediency considered mutually exclusive? There is a lot of room in the middle. They could allow human dispute resolution for non-free accounts (i.e. paid or earning). They could limit upload frequency site-wide while dispute/abuse queues are backed up (knowing YouTube would be incentivized to prevent a backlog). They could have non-profit reviewer companies that charge minimal amounts for human review. Etc.

Let's be clear, the lack of humans anywhere in the pipeline for uploaders is because they don't have to, not because it is impossible. Having said that, I really hope we do not see government intervention here.


Agree wholeheartedly: there's definitely a spectrum between what Google are doing atm and full human moderation. I am a strong believer in using AI to augment humans, not replace them. Nudging Google towards putting more humans in the pipeline is what I am hoping posts like this are achieving.


When was the last time Google listened to the public? In spirit, not because there was a backlash? It’s a techno conglomerate with no human face. Google was cool and interesting in 00’s, but nowadays all they create is fodder for those sweet ad dollars.


> Let's be clear, the lack of humans anywhere in the pipeline for uploaders is because they don't have to, not because it is impossible.

The real problem here is that people are envisioning human interaction as something that can produce better outcomes, but the cost of that may not be economical.

You can hire a generic human to look over a flagged video for 15 seconds and render a verdict, but the verdict is going to be bad. That's not enough time to make an accurate evaluation.

Suppose you have an epidemiologist on YouTube making a claim about virus propagation. It gets flagged because there is a CNN article reporting that the CDC said something contradictory. The epidemiologist argues that the CNN article is a misinterpretation and if you look at the original CDC publication, that's not really what they said.

At this point for YouTube to get it right they've basically got to hire their own M.D. to study the CDC publication in enough detail to understand the context. That is prohibitively expensive. But the alternative is often censoring an epidemiologist who is trying to correct misinformation being disseminated by CNN.

And science is a moving target. The infamous example was the CDC in 2020 telling people not to wear masks. But there was a point in time when scientists were publishing studies on the health benefits of tobacco consumption. Challenging the established orthodoxy is inherently necessary for progress.

The underlying problem here is that YouTube isn't competent to make these determinations. They're attempting to do something they have no capacity to succeed in.


So true. Automation cannot infer nuance. It's OK to strike down obvious spam and trolls, but this pretense that private actors should spend their resources to cultivate a sane public space is just a broad-spectrum reflex reaction to micro-targeted disinformation. Nonsense.


What stops the YouTube reviewer from calling or emailing the CDC to render a verdict on which interpretation of their public information is accurate?

When it comes to government agencies, they are contactable (usually with a simple email). You might not get an immediate response, but a few hours to a few days later you can expect to hear back.


Why would youtube be trying to make a determination like that? Individuals are the arbiters of truth, not organizations. Ministries of truth are not desirable.


What's with the assumption here that there aren't humans involved in this decision? The linked post literally quotes Google saying:

"Hi Sun Knudsen, Our team has reviewed your content, and, unfortunately, we think it violates our harmful and dangerous policy."

Google has always had humans in the loop. Not for every decision but certainly the more impactful ones that affect user generated content. When I worked there, there were even humans reviewing requests for account un-suspensions (yes, really! they didn't do a great job though, needless to say).

Also, it's quite hard to understand what sort of algorithm could result in this sort of suspension. What training data would give it hundreds of thousands of examples of infosec discussions labelled as harmful? But it's very easy to see how a large army of low paid humans given vague advice like "don't allow harmful or dangerous content" could decide that linking to hacking information is harmful.

The problem here is not AI. The problem here is Google allowing vague and low quality thinking amongst the ranks of its management, who have decided to give in to political activists who are determined to describe anything they're ideologically opposed to as "harmful", "dangerous", "unsafe" etc. When management give in to those radicals and tell their human and AI moderators to suspend "harmful" content, they inevitably end up capturing all kinds of stuff that isn't political in nature, simply due to the doublespeak nature of the original rules.


"Our team" is quite often a lie. Why do you assume that a human was involved? Especially when it comes to Google, who notoriously skimps on human support staff.


They lie both ways, too: whenever there is a public kerfuffle they always blame the algorithm / the AI.


It's a bad idea to give liars the benefit of the doubt.


Because I worked there and back then, when Google claimed content had been manually reviewed, it had been. It might have been by a low wage worker in India or Brazil and they might have spent 20 seconds on it, but there was a person in the loop.

That's why I turned this around. Why do you assume they're lying, without evidence?


Why wouldn't we assume they are lying? Interacting with their dispute process doesn't have any human qualities, so what would make us think that humans are involved?

They are a massive monopoly, and just the other day it was reported that they manipulated their own ad markets for their own profit. They constantly and inaccurately accuse people of malicious intent and copyright infringement, often massively hindering people's careers while generally providing zero recourse. To not assume lying, and at best self-serving intent, given Alphabet/Google's position in the world would be foolish.

On top of that, is it really that different to use inaccurate AI or to use humans who are intentionally not given the time or tools to do human-quality work?

The problem is obviously with the overall quality of moderation, and the level of power exerted by platforms over users. Whether a non-skilled human reviewer or an AI is handling the broken process hardly matters given the results.


Choosing to die on that hill of technicality seems like a silly choice. People say "human" but they really mean "a person who diligently considered the content, probably watched the video more than once, understood what they were watching, and possibly has a chance to debate with another human". Or, you know, a reasonable definition for "human in the loop".


It seems you're defining "human in the loop" as "Google makes what I consider to be the correct decision in this case." I believe the point that thu2111 is trying to make is that there _are_ humans in the loop, and having them is no insurance against unilateral decisions that you disagree with or even just plain stupid decisions.

The point is that involving more humans in the moderation decisions at YouTube does not guarantee, nor is there any evidence to suggest, that it would result in better moderation.

Somebody with the technical and industry knowledge to know that talking about security vulnerabilities like this shouldn't be removed can probably find a better job than YouTube video moderation.


I would accept that Google will sometimes make the wrong decision, but it's unacceptable that the care given toward making that decision stops at a low-wage worker in another country spending 20 seconds on the decision.

There's a very wide chasm between "an AI decided" or "someone spent 20 seconds on it", and "we hired an expert in every possible field and they thoroughly research the facts at hand before making a decision".

The latter is obviously not feasible, but there are many options in between that will give markedly better outcomes than the status quo while not making YT moderation economically infeasible.


There are plenty of cases where I disagree with the outcome of companies I interact with. When I spend 30 minutes chatting with an agent via text with Amazon or airlines, I might leave the conversation disappointed, but I never doubt their conscientiousness. I highly doubt United beat me at the Turing Test.


> Google makes what I consider to be the correct decision in this case.

I'll settle for "Google makes a well-informed decision and will communicate the weighted pros and cons that led to that decision."


> What's with the assumption here that there aren't humans involved in this decision? The linked post literally quotes Google saying [...]

The post literally quotes Google as saying that for the _last_ time this happened. Then the post literally states they appealed and never heard back. There is no sign of a human for this instance, though I suspect once the uploader reaches out (were it not for the popularity this is gaining), one could expect a similar dismissal. That there are barely humans involved, and in a poor way when they are, doesn't mean there are never humans involved, true. But for many, the bare minimum of assistance is indistinguishable from no assistance.


Appealing and not hearing back doesn't mean the original decision was entirely automated, it means they ignore appeals. Plenty of people would love nothing more than to argue with Google employees all day about every decision the firm makes - just think about webmasters and SEO alone - so the fact that YouTube actually has such a process is already kind of remarkable.


You seem to be arguing that because their dispute process everywhere else is reprehensible, a slightly-less-reprehensible process should be celebrated.

It shouldn't, and we need to hold Google and others to a much higher standard here. I'm not sure how we do that, but the status quo is garbage.


It's easy in theory: you create a competitor that differentiates through the quality of its content-provider service and how much money is spent on dispute resolution.

But I am skeptical it would be successful. It might be, but it would probably require a much larger percentage take of video revenues, which would discourage most YT content creators from going there because in fact, relative to overall YT volumes, these suspensions are rare. That's why we're discussing individual cases instead of studies with percentage figures in them.

It feels a bit weird to be playing the devil's advocate in this case because frankly the state of modern Google saddens me. It seems not much like the firm we built back in the day. They obviously shouldn't be doing this kind of thing and in the past they didn't, which is why I place the finger of blame not on AI (which may not even be at fault here, and is a general tool anyway) but rather on bad ideologies and ideas permeating the management classes in America.

In this case I blame favouritism and feminism. YouTube content moderation is run by Wojcicki, a woman who got in to Google by renting Larry and Sergey a garage, by being the sister of Sergey's wife. She's not an engineer. Thanks to the prevailing ideology an ambitious feminist woman in an executive position at a west coast tech firm is utterly untouchable, and must be given whatever she wants. So despite being the exec in charge of Google Video when it was utterly beaten by YouTube, she has ended up running YouTube. A clearer case of failing upwards can't be imagined. It's her personality that dictates the post-2016 swing towards promoting woke "authoritative content" on YT and the general culture of unaccountability for women that prevents anyone else pushing back, even as the bad PR racks up.


Knowledge is empowering, and one can also argue that the lack of it is harmful. Not being able to link to resources that help you understand how something works can lead, in the big picture, to a situation where only criminals gain access to that knowledge.


Louis Rossmann had his YouTube account temporarily suspended for posting a video entitled “An angry clinton”, which was a video of his pet cat Clinton meowing. Human moderation is needed in cases like this. There is obviously no analysis of the video content; the algorithm looks for keywords in the video title. The keywords used by the algorithm are centered around protecting Democratic politicians from criticism. There is no advanced AI going on, the algorithm is simple pattern matching.


> The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube?

Or they could just stop doing that and go back to how it used to be. Let people upload videos and review them only after it was flagged. Novel idea, I know.


The media companies made sure that’ll never happen again :/ That’s how we got ContentID: to prevent music and video piracy.


That's its own thing and it's just as absurd as this, but this person got a strike for "violating harmful and dangerous policy".

I don't upload any videos, but if their AI for reviewing videos is as bad as their AI for reviewing comments then both should be completely scrapped. Because it's not regex matching or whatnot, but AI, it's impossible to predict which comment or livechat message will get through. In practice it's virtually random. And the worst part is that they don't even bother to mention that your comment was 'problematic' to them; you have to have two windows open, one for posting and one in private mode, to verify which of your comments actually showed up.

Just as a disclaimer, I haven't tried it in a while, so maybe it's somewhat better now, I don't know.


Something I never quite understood... couldn't Google have just said "no" and required them to submit DMCA takedowns? The law doesn't require the level of broken proactivity that Youtube has.


They absolutely could have, and have been entirely within the rules of the DMCA. My best guess is that the ContentID system is either (a) a bribe to get media companies to partner with youtube's paid streaming, or (b) a bribe to convince media companies that it isn't worth lobbying yet again for stronger copyright monopolies.


They don't care about your average Joe user. Just look at their search results and see what comes up. What they want is to transfer the 'legacy media' to their platforms.


google wants the content up. it is in their interests for there to be as much mundane fluffy ad-watch-generating content as possible.

having it DMCA'd down is a problem for youtube. content id is how most of the material can stay up, the riaa can stay happy, and google can sell ads.


Why is it a foregone conclusion that we cannot unmake this?


That is why we need Copyright reform to take the teeth out of media companies.

Return copyright in the US back to the original constitutional limits


That'll never happen.

“Congress shall have power… to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”

As I read it, the law should be formed to maximize progress, not to maximize value of an idea.

But, a large amorphous group of citizens will never overcome a small, highly-concentrated self interested group. I doubt you'll ever see the copyright-opening version of the NRA or SPCA.


Groups like NRA are as powerful as they are, despite their relatively small membership numbers, because said membership is really dedicated to one particular issue, to the point of coordinated single-issue voting in the primaries. So if you can find enough people who are willing to vote solely on copyright in the primaries, it might actually work.


Well, the Google case put a huge dent in the prevailing idea that maximum value / profit is the constitutional purpose.

The court clearly held that profitability alone should not be a factor.


> I doubt you'll ever see the copyright-opening version of the NRA

Corrupt influence-peddling charlatans milking the rubes?

Or was there something else intended by this analogy?


Simply that the power of a highly focused group can be remarkable. Substitute 'NRA' with a Big Pharma trade organization or a Teachers' Union if you like.


What if... instead of YouTube you'd have several interest-based sites which could actually handle their moderation and workload, instead of being a giant uncaring behemoth?


Who pays for the moderators? You still need a lot of manpower if you don't want algorithms, regardless of whether you have one Youtube or a thousand SpecialInterestTube.


Tech is very special in a way where they can get away with claiming that quite a lot is not possible at their scale. Imagine this for other industries.

"We cannot do quality control before sending products out. We make too much stuff."

"We don't check all tickets for the venue. Too many people."

"Cleaning the entire building is not possible. It's too much space."


All the examples you provide have to do with tangible, physical things. Tech is in some sense special because we're dealing with intangibles. It costs a bad actor next to nothing to upload the same video with harmful content to YouTube a thousand times, a million times even. And YouTube has to deal with all of this.

That's why they still use physical ballots in most elections: as soon as you're dealing with physical things the costs for a bad actor to do bad things at scale balloon up.


But Tech also enjoys the benefits of this. You make a product once, sell it thousands of times. Or allow thousands/millions of people access without much overhead. I'm simplifying here of course, but compared to e.g. a machine which you have to produce new for each customer I think we are in a very good situation. And with that advantage comes a price tech should be forced to pay.


But a bad actor can only do so much if manual reviews are involved. If they're a completely new account because their previous accounts have been banned, accessing on a new device because their previous Alphabet-surveilled fingerprint has been banned, or on a VPN because their IP has been banned, YouTube could just implement manual review, which would slow such a bad actor down. And if the intent is to clog up the manual review process with copies of a single illegal video, we already have technology that is used to automatically identify specific videos or images and remove them, usually for purposes of preventing the spread of illegal depictions of children.
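
To make that last idea concrete, here is a minimal sketch of a blocklist of already-identified uploads (the function names and the fingerprint value are made up for illustration; real systems such as ContentID or PhotoDNA use perceptual hashes that survive re-encoding, whereas a plain SHA-256 like this only catches byte-identical re-uploads):

    import hashlib

    # Hypothetical blocklist of fingerprints of already-identified videos.
    KNOWN_BAD_FINGERPRINTS = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprint(path: str) -> str:
        # Hash the file in 1 MiB chunks so large videos don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def should_block(path: str) -> bool:
        # Reject an upload whose fingerprint matches a known-bad video.
        return fingerprint(path) in KNOWN_BAD_FINGERPRINTS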


If you change the MAC address on your cable modem you can get tons of different IPs. Same thing with cellular networks. With Raspberry Pis so cheap, fingerprints don't work that well for all devices.


There are relatively few people capable of doing this compared to the population of the world that uses YouTube, and there’s even fewer people who, even with capacity, will be doing this just to upload illegal content into YouTube. I think this sizes down the problem of manual review quite easily.


So what you're saying is that there's a market for obtaining different IP addresses... it's almost exactly like a commercial VPN service.


IPs are very cheap. Botnets are huge and residential IPs are easy to come by.


"We cannot do quality control before sending products out. We make too much stuff."

Ever buy strawberries?

"We don't check all tickets for the venue. Too many people."

Train conductors often check tickets after the train has started moving.

"Cleaning the entire building is not possible. It's too much space."

Climate change. Dirty streets.


Very much so. Tech companies have a way of confusing "this is difficult" with "this is impossible," or "this is impossible" with "this isn't free," or "this is impossible" with "we don't want to."


They are incentivized to have that confusion. So no surprise.


There are few other industries where users expect to get everything for free.


And those things are paid for. It would be perfectly reasonable if you had to pay per minute for uploading video and people had to pay for a subscription to watch it. That's just not what is expected for a mass platform today.


Vimeo has an “uploader pays” system (of sorts), and, while it prevents garbage on the platform, it stops creators from wanting to use it when YouTube is free.


Yes. It's of course a perfectly valid (and widely-used) model to pay for hosting. But, by and large, only "serious" users who probably otherwise have a business model will do so. And there will inevitably be far less content on the platform.

On the one hand, I think it's sort of unfortunate that so much of the Web has come to be about "free" access (though paying for access with attention and time) but that does come from the perspective of someone who would be able and happy to pay with money instead.


It's expected because the expectations are set by the existing platforms. And the reason why Google specifically is actively promoting those expectations, is that it makes ads the only viable business model for content creators; and Google's primary source of income is their ad services, so it maintains their dominant market position.


"We don't check all tickets for the venue. Too many people."

This happens in some public transport operators already.


It's argued that in some cities' public transport systems, the price of the ticket doesn't cover the cost of its own infrastructure: the machines to buy tickets, the ones to validate them, the salaries of the agents verifying them, etc. It can be more cost-effective to drop the concept of a ticket and make it a free public service.


I agree with all that, but some central governments such as Turkey go to great lengths to make public transport a paid service. The best public transport is the free one indeed.


There's a massive difference between centralized production of physical goods vs user-generated digital assets. Scale itself becomes the challenge.


Google had a revenue of $180B in 2020, and their EBITDA was $55B. I'm no accountant, but that sounds like an absolutely insane profit margin. 8-12% would seem much more palatable, so that means they have over $25B of excess profits they could spend on 500,000 middle-class incomes for content moderators.

YouTube represents 10% of Google's income, BTW, so the other 90% of middle-class incomes could do moderation for their other services.

Is that naive?
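
A quick back-of-the-envelope check of those figures (all numbers are from the comment above; the per-moderator salary is implied rather than stated):

    revenue = 180e9   # Google's 2020 revenue, per the comment
    ebitda  = 55e9    # Google's 2020 EBITDA, per the comment

    # Excess profit if the margin were held to the "more palatable" 8-12% range
    for margin in (0.08, 0.12):
        excess = ebitda - margin * revenue
        print(f"at {margin:.0%} margin: ${excess / 1e9:.1f}B excess")  # ~$41B / ~$33B

    # The quoted $25B spread across 500,000 moderator salaries
    print(f"${25e9 / 500_000:,.0f} per moderator per year")            # $50,000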


I usually frame it this way: Mazda Motor Corporation, a company an order of magnitude smaller, can provide several dealerships in my < 2 million population country where there are physical people who will fix my problem.

And Google, the mighty megacorporation, can't even clear a basic phone support bar, much less a local user support office in each region? Something normal for other corporations?


Mazda earns several hundred to several thousand dollars of profit from the sale of a car to you. Google earns an order of magnitude or two less per user.


They also had to source parts, manufacture it, transport it via boat and train, then pay someone to show me through all the buttons, clean it and then offer service at least once every year.

Besides that they also provide 24 hour support for breakdowns and plenty of other services.

And at best, they made 10% (!!) gross margin on my car - which is in the range of $2,500. Meanwhile Google has 45-60% (!!) profit margins while not giving a crap.


Mazda spends several hours in pre- and after sales on each car. Google could automate 99.99% of moderation, and spend literal seconds on 90% of all further moderation per user. A magnitude or 3 less human attention than Mazda's sales pipeline spends on a user would be worlds better than what Google does now.

I've seen YouTube channels with thousands of dollars of monthly revenue be closed over issues that would've taken a trained human less than 10 minutes to resolve. YouTubers with that kind of revenue are literally one in a million users or so.


Thousands of dollars are only possible with luxury cars.

VW, one of the two largest auto makers, makes around 300€ profit per vehicle sale (excluding luxury brands). For some models it's even less than 100€. You can calculate around 1% profit for a car sale, often less.

The actual money is in leasing and financing, which is why almost all automakers own a bank.


>>VW, one of the two largest auto makers, makes around 300€ profit per vehicle sale (excluding luxury brands).

Seeing as it's absolutely trivial to negotiate 10% discount on any new VW car, I struggle to see how that's true? Unless you mean that VW makes 300 euro and the rest is the profit of the dealership?

Even if that's true, VW makes an absolute mountain of money through financial services. Yes, they will make very little profit selling you a £12k Polo, but VW Financial Services will make £2-3k just providing you with a financial agreement for it.


I don't see how you are contradicting me? That's exactly what I said. The dealer doesn't matter here, it's just about the manufacturer.


Oh it's not about contradiction, I'm just curious where the 300€ value comes from.


Without disagreeing with your point, I do think there's something to be said for the more specialized sites the Internet used to be known for. Places that foster a sense of community tend to wind up with users who are happy to pitch in when something doesn't look right (or get overrun by spammers if the community isn't able to deal with the issues). I am happy to pull weeds in a community garden in a way I will not at some factory farm. This approach isn't a utopia; people are people and everything is political, but at smaller scales, saying "Well, just start your own [niche] video site" isn't snark, it's a realistic thing, and perhaps both sites contribute to the marketplace of ideas. YouTube is the marketplace of the unthinking.


I think it's much less work on SpecialInterestTube. On YouTube there's a huge amount of decisions and ideas the moderator has to take into account. The other one can filter a large amount of spam with "the first 5 seconds make it clear it's not SpecialInterest".

For example comparing to HN, I can go on "New" and see "Places to Spray a Fragrance on for a Long Lasting Effect-Ajulu's Thoughts" - I know it's immediately flaggable, even if it could be a valid post on another website. (and require a full review there to check it doesn't break some specific rules)


It's definitely not just a money thing, because even major channels that bring in lots of money can't get someone to review their cases.


Hackaday.com does just fine.

YouTube was fine before Google.

Stop centralising.


YouTube was bought in 2006 - not much time to even think about having a content problem or adpocalypse.

0: https://www.nbcnews.com/id/wbna15196982


Let people who serve as mods have the detestable YouTube TV for free.


What if... people could make their own websites and host their own videos, and people could link to each other's content?


You mean, like the open standards-based Fediverse, where there are also no ads or algorithmic feeds optimizing for engagement?


Isn't that basically how reddit operates? And last I heard they have their native video hosting as well. Sounds like this could work, but doesn't quite at the moment. Is YouTube handling content discovery better? Network effects maybe?


Reddit is interest-based but gets free labor from most subreddit moderators that remove bad content for them. Reddit is also much less lucrative of a content platform given you have limited ways to make money (usually limited to linking to your patreon), so the content is constrained to whatever people do with no expectation of money.


>What if... instead of YouTube you'd have several interest-based sites which could actually handle their moderation and workload, instead of being a giant uncaring behemoth?

What if... the average viewer was willing to pay for video content instead of requiring a free, unprofitable service run as a "Free add-on" to an Ad Empire?

YouTube may finally have become profitable in the past few years, but for the first 90% of its existence, the entire rise to power, it was unprofitable and run at a loss. There aren't other major providers because no one else wants to lose that much money.


> What if... the average viewer was willing to pay for video content

That would solve all of our problems, wouldn't it? Too bad there's likely always going to be someone willing to offer the same service for free + ads and/or selling users data. Unless people start valuing their privacy/data much more than they do at the moment (or regulation comes in), it's just wishful thinking.


It would solve a couple problems. I don't see how it would solve this problem.


Have you forgotten about Netflix and the innumerable streaming services out there, as every studio and their dog's mother wants to claim 100% of the pie? The money isn't an issue - it is a Freudian slip that either you want it to be like cable or are blindly repeating the words of someone who does.

If you don't like YouTube's free service, you can get your money back.


That will end up just like paywalls on news sites. It’s the best way forward (with tweaks), but ultimately will fail unless everyone participates.

Services like Brilliant and others are attempting what you’re suggesting, though.


That may have worked in the pre-iPhone Internet, where it took effort to become a part of communities, and everyone implicitly understood that. Now, it's a rock vs. a hard place: the masses want centralized platforms, but those centralized platforms are proving cancerous to society.


The masses don’t want centralized platforms or decentralized platforms. The masses don’t care how things work. What the masses want is an easy to use, fast platform. Centralized systems have been the best at providing that because they are easier to engineer and because there is an economic model via ads and surveillance.


I’d be shocked if the best and the brightest that Alphabet can buy can’t figure out a reputation system, manually-reviewed with AI assistance, or some other way to approve content in a convenient manner.

Don’t forget also that Alphabet is swimming in money. They can certainly afford the manpower to make it relatively seamless for the user. And I really doubt users can’t be trained to associate a few hours’ wait of a completely new, reputationless user, or a user who is using an unknown device or VPN, with a better quality of site experience because the videos are screened.


This! They could have a page consisting entirely of "reported videos", flagged by a person or the AI, with severity determining how high on the page the content ranks.

Things like the timestamp of where in the video the report was made might help for scanning "content", e.g. "50 people reported this 20-minute video in the 3-5 minute mark" or a similar average.
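
A minimal sketch of what such a review queue could look like (every name, field, and scoring rule here is hypothetical, not anything YouTube actually uses):

    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class Report:
        severity: int        # e.g. 1 = minor, 5 = likely illegal
        timestamp_sec: int   # where in the video the reporter flagged it

    @dataclass
    class FlaggedVideo:
        video_id: str
        reports: list[Report] = field(default_factory=list)

        def priority(self) -> int:
            # Rank by total severity across all reports (hypothetical rule).
            return sum(r.severity for r in self.reports)

        def hot_spot(self, bucket_sec: int = 60) -> int:
            # Most frequently reported minute, to point a reviewer at
            # "the 3-5 minute mark" instead of the whole video.
            buckets = Counter(r.timestamp_sec // bucket_sec for r in self.reports)
            return buckets.most_common(1)[0][0]

    videos = [
        FlaggedVideo("vid123", [Report(3, 200), Report(4, 230), Report(2, 900)]),
        FlaggedVideo("vid456", [Report(1, 10)]),
    ]
    # Reviewers work the queue from highest total severity downwards.
    for v in sorted(videos, key=FlaggedVideo.priority, reverse=True):
        print(v.video_id, "priority:", v.priority(), "hot minute:", v.hot_spot())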


How about you don't believe everything you read on the internet? Even 5-star people are wrong too.


>The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube? Would that work with the current society's attention span?

Why should they have to? YouTube makes BILLIONS of dollars, and they can afford to employ tens of thousands of people as human moderators; they just don't want to because that would eat into profits. If the video has X views or the publisher has X followers, it should be reviewed within hours by a real human.

>Is YouTube as we know it actually feasible without AI-based moderation? Can it be a sustainable business with an army of human moderators and/or a drastically reduced scale as a result?

Nobody is saying there should be 0 AI moderation, just that there should be a human that reviews a clip if AI flags it and the end-user clicks a button saying "this shouldn't have been flagged".

The entire premise of Google is kind of broken, and both technical and non-technical people feel that way from what I've heard. The fact that it's nearly impossible to get to a real human on anything, and if you do happen to get to a real human, half the time they aren't even empowered to undo whatever decision their AI has made... it's just not a good way to do business. There's a balance to technology and human interaction, and Google (again, IMO) has gone too far in the technology direction. Heck, humanity as a whole might need to find someplace in the middle if the studies about the current generation of 20-somethings struggling with stunted social interactions due to social media are to be believed.


There are often problems with Reddit moderators, but at least you know who they are (sort of) and can talk to them. It seems like there is a better relationship between moderators and a community when the moderators are part of the community, versus being an anonymous horde of interchangeable personnel.

(And of course there is Hacker News for another example, combining some automatic stuff with manual control.)

To get this at scale, though, it seems like you need a larger social network to be divided into many smaller ones in some logical way that's understood by everyone involved.

It seems like the inherent hierarchy (authors versus subscribers) and subdivision of Substack is likely to work out well for them.


> Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube?

Well, presumably Google, being one of the world's wealthiest corporations, could hire and pay people a living wage to match the demand of users looking to upload videos to YouTube. It is in YouTube's interest, as it only exists and makes money on this very user-generated content.

Yeah, it's great that YouTube exists and can market its captive audience to content creators, but maybe it would be a better system without the middleman, as anyone can upload a video anywhere on the internet, making it instantly viewable by billions.


This line of thinking leads to non-solutions. There are potentially many options beyond the dichotomy of crude AI automation and an army of human employees.

First, I think most of us would just like a relatively transparent method of escalating issues that will eventually get to a qualified human. Put some time delays and recaptchas to stop spammers from abusing it but if someone is willing to put in the time and effort to fight for a video, they should get a clear response as to why it was taken down and who made that determination.

Then there is the immense potential for crowdsourcing moderation. You can both get data by polling users (again with basic checks to prevent exploits) and promote certain users to a moderator role such that they can keep the community in which they participate reasonable. Especially for disinformation, how hard would it be to find volunteers to tell other people they're wrong on the internet?

And finally, if erroneous moderation has the potential to cause many problems, it is worth re-evaluating how much is actually necessary. For legal reasons there is some content that you really need an AI to quickly scan for, such as child porn, but in a lot of cases the harm in leaving up a video is insignificant.

There are a lot of sites on a similar scale to youtube that have substantially better (even if not necessarily perfect) content moderation despite working with substantially lower resources. If they can do it, so can Google.


> The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube?

People will eventually be fine with however the system works. If that's how it works, they assume that's how it has to be. Most users have no idea how this stuff functions. We're in it now because the expectation has been set for instant uploads, so there would be outrage, but eventually it becomes "the way things are." As an example, see any change users hate - users complain, TPTB either concede completely (rare) and revert, make minor concessions but change little, or apologize for the inconvenience and do nothing. Eventually nobody cares anymore and forgets they were angry.

Most of this stuff barely matters. Frankly it might be good if my nephews had to wait longer between any number of irritating youtube videos they binge and parrot.


> Is YouTube as we know it actually feasible without AI-based moderation? Can it be a sustainable business with an army of human moderators and/or a drastically reduced scale as a result?

I am curious about the cost of a process that establishes human reviews for conflict scenarios, i.e. situations like this. Would that still be at planet-scale numbers?


I'd say it's infeasible without AI moderation based on their revenue model. They will never make enough money off intellectual content in order to pay for enough humans to moderate their system. Content for the lowest common denominator is not only easy as hell for an AI to moderate but it makes orders of magnitude more revenue than anything about technology, science, philosophy, etc.

The answer is to simply not rely on YouTube for anything besides lifestyle and self-help vlogs, marketing, instructions on how to put together your IKEA bookshelf, and of course cat videos. There are plenty of other video hosting sites that won't ruin your day as a creator because you accidentally used the word "doodie" or because the AI thinks your video on writing "hello world" in Rust is teaching people to hack computers.


Why not allow the majority of YTers with a legit posting history to post videos as they wish? You wouldn't have to wait days or weeks for these folks, they are trustworthy already. The vast majority of channels I care about wouldn't need moderation, they're hobbyists like woodworkers, musicians, etc. with dozens of posts already.


Anyone can upload a video to their website and make it viewable to all. The hosting is not the issue. You could use dropbox, box, etc for easy video hosting and sharing.

It's the discovery piece that cannot be replicated. If hosting happened outside the platform, moderation could move to an automatic user/ML flag with a manual approval process that follows.


Your question is: would a company based on YouTube be hugely profitable without AI? Answer: probably not, but who cares other than the shareholders of Alphabet? I don't really think I should base my decisions about software on how much the owners of the company are making.


Maybe we could have that if the uploader is responsible, not the platform.

Want to upload fast and not have your content taken down? Deal with the legal issues yourself.

Want anonymous uploading and/or some protection? Deal with automatic systems, as employing enough people is just too expensive.


> Maybe we could have that if the uploader is responsible, not the platform.

Maybe we could have the uploader host their own content, and do away with hosting platforms and their gatekeeping altogether?

I can see little reason today that well-established technologies and individual hosting arrangements couldn't offer similar value to whatever YouTube or some other centralised hosting service offers.

We are talking about the World Wide Web. Linking to other people's content is literally the central feature of the real platform here. For a long time, we did just fine with community-operated facilities and everyone linking to other sources of interesting content to help discoverability. We still do, really, but now the links and other community engagement tend to be locked up in places like different YouTube tabs for each channel, not the sidebar of someone's personal website. Of course, these days we also have entire sites (HN among them) built around sharing links to content that might be interesting to like-minded people. So I don't buy the usual arguments about centralised content hosting being the only way interesting content can gain exposure.

Numerous individual content creators make demos or livestream games or present tutorials with decent-to-excellent presentation values. They often use specialist software to do it. I therefore don't buy arguments about the technical difficulty of self-hosting either. There could be hosting packages offered by ISPs or other hosting services on a common carrier basis if individuals didn't want to set up all the infrastructure themselves, and those who were good/popular enough to make it more of a regular income would still have the ability to do so but now on their own terms.

Even at today's prices, you could serve tens of thousands of videos directly for less than a lot of people are paying for their ISP or mobile/cell phone plan each month. And of course, we know lots about how to efficiently serve interesting content via P2P networks that could be a big amplifier for popular content producers who were happy to share without wanting to monetise or otherwise collect large numbers of viewers on their own site for some reason. The available capacity and prices are surely only going to improve as the technology improves. So that's cost taken care of in most or all cases too.

So, if individuals hosting their own content is technically and economically feasible, and if individual creators and community groups facilitating content discovery is a viable culture along with all the other interactions they tend to have anyway, I struggle to see why the likes of YouTube are so valuable or irreplaceable in our society today. Obviously if YT disappeared overnight there would be a gap tomorrow, but we've been adapting and filling gaps and finding better ways to do things on the Internet since some time in the last millennium, and I see no reason to think we wouldn't do so again.


> ...if individual creators and community groups facilitating content discovery is a viable culture...

That's a pretty big if, and from my limited perspective, the most important problem content creators have and the most useful service Youtube provides.


I think it's mainly about convenience. It's like with businesses starting their stores on Shopify: "hey big corp, please handle mundane/technical stuff for me and just let me sell stuff/post content".


One problem is that there is a single process for all. Certain types of journalism and research should be able to ask for pre-approval of videos without risking a ban. The AI can still deal with the bulk of those requests.


I think it would be great. There would be huge incentives to find other solutions, and a lot of what goes into YouTube would find other outlets. Especially since we've already tasted what's possible. It could be the end of the YouTube monopoly.


Apple reviews each app update manually. Not only each app, but each and every one of its updates too.


The answer to all your questions is no, which is precisely the point the author is trying to make.

If we made the case for operating "at human scale", YouTube would be amongst the first casualties.

To your last point:

> Anyone can upload a video and make it viewable by billions, instantly. I think we shouldn't forget how amazing this actually is.

By the same reasoning, COVID-19 is similarly amazing.

Opinions move on: YouTube is 15 years old now. It's a practical monopoly, being the world's 2nd most visited website. I won't deny its utility - but it's a gross centralisation whose modus operandi presents significant questions - chiefly, is it right that it has for years circumvented nearly all regulation applied to other media, and should it remain largely unaccountable to any elected power?

So the challenge, it seems, is this: how do we preserve the good things that self-published video content has done for us, but prevent or limit the consequences and attendant injustice caused by a service so infeasibly large it's impossible to administer?


Do you have reason to believe it can be done better than how YouTube does it? Again, there are two approaches:

1. Limit who can post content

You'll end up with only the powerful being able to push content, you're taking the voice away from the masses.

2. Try to gradually improve content moderation and live with the rare false-positives or false-negatives.

This is the current approach, and while not perfect, I personally much prefer it to the former.


I don't think it could be done much better than YouTube at the scale YouTube does it, but I would insist they need to be plugged into existing national media regulators and bound by the same standards, flawed as they are.

> You'll end up with only the powerful being able to push content, you're taking the voice away from the masses.

I think the algorithms already do that. The channels with the most followers have the loudest voice.

> 2. Try to gradually improve content moderation and live with the rare false-positives or false-negatives.

It's better than it was, and will likely improve, but it's still drowning in shills and scammers. I think this would be rooted out by a more community-centric model. Exactly how that looks, I don't know - but I think self-policing would play a part.


> the channels with the most followers have the loudest voice

Sure, but I don't think people are entitled to have their videos promoted for them. The fact that you can have your video hosted for free and share it with anyone in the world is already great enough. While on average louder voices get the most attention, plenty of smaller creators have managed to get important messages out.

> drowning in shills and scammers

That seems like hyperbole? Do you honestly feel this way browsing YouTube or Twitter? I'm not saying they don't exist, but they're fairly rare in my experience.

> more community centric model

I agree, reddit's model works kinda well. Youtube has tried that in the past (though I guess they got rid of it?), and Twitter is experimenting with it now (Birdwatch). I also like the Facebook supreme court model for handling edge cases.


> >Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to choose what is right and wrong.

> Couldn't agree more.

It simply replaced a prior system in which humans could not interact with humans and certain humans got to choose what is right and wrong.

The key problem is the inability to provide human feedback more than the mechanism for making the initial decision itself (which is a different problem). Not sure how to achieve that at scale.


Federation is the answer at scale.

You need dictatorship for effective moderation. But you also want those dictatorships small, uniform, and interchangeable so that the "citizens" of those dictatorships can easily pick the one that suits them best. Those dictatorships link up (federate) with like-minded dictatorships to benefit from network effects of sharing their content among themselves.

If your content doesn't suit a certain dictatorship, it will suit another one.

PeerTube does federated video already. No, it's not as polished as YouTube. But it is the answer to human-level moderation at scale.


The web is already federated, YouTube just happens to be one of the most popular nodes. Fediverse systems like PeerTube are great, but they don't change anything; in a world where something like PeerTube is as popular as YouTube we'd see the exact same complaints about the most popular node.


> The web is already federated, YouTube just happens to be one of the most popular nodes.

The network layer of the web is federated, but the application layer usually isn't. That is, if you care just about who you send TCP traffic to, then yes, the internet alone is sufficient, but if you want to think about federation at the content level, the internet alone enables that but doesn't make it very easy. Something like PeerTube makes it easy. And you need to pull it up to the content/application level if you want to do things like implement content searching/filtering across a federated network of hosts.

> Fediverse systems like PeerTube are great, but they don't change anything

Disagree. Nobody can host their own YouTube server. That's the point of federated video hosting: (a) anyone can spin up their own host (or join a host they agree with if they don't have the tech skills) and (b) these hosts can network with like-minded hosts to build a YouTube-sized content library without any single entity owning the entire library. A world without PeerTube would not give typical users this option.

> in a world where something like PeerTube is as popular as YouTube we'd see the exact same complaints about the most popular node.

We are starting to see this problem a bit on Mastodon because people are piling into the most popular node and having their opinions about how it should run instead of understanding that the point is if they disagree with how a node is run, it's (relatively) easy to run one of their own (or bribe a technical friend into running one). But I'd argue this is a problem in the domain of user understanding/education rather than a problem built into the nature of Mastodon whereas centralized global-scale moderation is a problem baked into YouTube.


I’m unfamiliar with PeerTube, but it appears to me that search and the recommendation engine is YouTube’s secret sauce. Does PeerTube have that?


v4 is adding search and recommendations, but I suspect it will not be as good as YouTube's simply because YouTube is run by the global king of search and PeerTube is developed part-time by one developer and open source contributors and funded by grants and donations.

However, PeerTube's secret sauce is putting the hosting power into the hands of the users, so for content creators with their own audience already in place who get whacked by YouTube's moderation algorithm, this can outweigh the benefits of the search/recommendation algorithm.

And this isn't just controversial pundits. Even the Blender Foundation got their videos blocked through a perfect storm of poor UX/support which spurred them to host their videos on PeerTube: https://www.blender.org/media-exposure/youtube-blocks-blende...


Or you could just be mature and not need to suck on the teat of authoritarianism and learn to tolerate offensiveness.


Moderation isn't just for offensive content.


There's always going to be some line people are unwilling to cross. For instance, one might say Nazi rhetoric should be tolerated for its historical value, but I know zero advocates for allowing child porn.

But how about adult porn? Acceptable to some, not to others (also depends on the context - even advocates of adult videos are generally not arguing that YouTube should host them). And there are shades of that.

In the end, no matter how mature and accepting you are, unless you allow literally everything, you will need moderators who are empowered to be the final authority on that decision, which is what I mean by "dictatorship".


Or you could be mature and realise that the reason everyone in your pub is a Nazi is that you never said “no” to the first Nazi sympathiser, after which all the non-Nazis left. What is merely offensive to you is a persistent death threat to lots of other people.


The problem is that all of humanity's videos are going through a single organization. PeerTube may provide a solution to this, but people have to actually use it.


I think this is a distraction from the real problem, which is that YouTube censors/demonetizes/deplatforms videos at all. The problem is very widespread, ranging from censorship of political content to firearm videos to COVID related content. Most recently YouTube banned a panel held by the Florida governor with medical experts from Stanford and Harvard (https://www.politico.com/states/florida/story/2021/04/12/des...).

This is a company in the business of shaping political opinion and manufacturing narratives, not providing a neutral platform. Unless that fundamental issue is addressed, symptoms (like bad AI flagging) will continue to persist.


Advertisers asked to not have their ads on certain content, because normal people complained to them when they perceived the advertisers to be supporting the content they didn't like.

Advertising is the problem.


So I actually think this viewpoint vastly overestimates the ability of human reviewers to come to the right conclusion. I can tell you that's not really the case: paid, trained human contractors doing content moderation very commonly make far more problematic errors than "WPA2 cracking info is a bannable offense, but not in this case because it's academic research".

This is the kind of edge case that even higher-tier human contractors will often still get wrong after escalation. I don't really see how it's actually much less dystopian just to have humans in the loop with the same outcomes.


This ignores human bad actors.

There is no "rollback" for the existence of algorithms.

Instead of whining (problems without solutions) spend some time contemplating what a sufficient algorithmic solution looks like.

To be clear, that would be an algorithmic solution that "fights back" against issues like these (and not simply improving a failing solution).

That may not be a competing product. Maybe it's an algorithmic whistleblower?


The algorithm can exist, but to run it at scale requires buildings and large teams and bank accounts and therefore the companies that use these algorithms are susceptible to regulation.

Otherwise it seems like an enormous waste of talent and resources playing cat and mouse, when we can just give them guidelines and fairly often they'll be followed.

Of course bad actors can exist, but that's not a reason to give up on social solutions to problems.


> give up on social solutions to problems

Everything is a trade-off.

Sure humans can be used for moderating spam.

I'd prefer if they performed real social work, instead.


I’d prefer if people realised that moderating YouTube was social work, and not suitable for an algorithms-only approach.


That's exactly the point: this issue was known as a problem in 1977, when "Computers Don't Argue" was published, but it was developed in the 2010s as a feature.

https://archive.org/details/bestofcreativeco00ahld/page/133/...


I'm of the complete opposite opinion. I look forward to the day when our machine overlords make all decisions. Humans are assholes with their own nefarious agendas. Machines are not. Of course if the day comes where machines ask for bribes then I'll change my opinion ;)


Except that machines aren't making some grand objective pronouncement, sent from on high and untainted by base human nature. Machines are programmed to carry out some particular human desire. They follow that single human desire, without any of the restraint or competition of other desires, no matter how deep that path may lead.

Machines are assholes with nefarious agendas given to them either intentionally or unintentionally.


We have a brief window to get this right before the machines take over. Let's not waste it.


Though I agree, I would phrase it as "before people with machines take over". The machines themselves are tools enacting the will of whoever controls them.


And who do you think develops the code for machines, and what's their agenda?


Who controls the machines? Those people are a-holes too.


If they are machine overlords, then they control themselves ;)


Baby steps to autonomous weapons.

Bring on the disagreement. It was a few short years ago the idea of this would have seemed a really, really bad idea. Just like autonomous weapons do now. And, hey, it doesn't even have to be us (Americans). I'm not going to name the potential baddie. There are many, including us. FOMO will drive so many terrible things in our future. Inch by inch, march closer to the reality many say will never happen. FSM save us.


> >Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to chose what is right and wrong.

Youtube is in the same economic vice as Walmart. It's hard to be cheap and provide a human touch, especially as labor costs rise. Automation only takes you so far in a free economy.


Walmart actually does have customer service in stores. Amazon also tries its best to provide customer service (want a return? here is a shipping label).

The problem with YouTube is that you are not YouTube's customer, so they don't care about you. Only advertisers are customers, and YouTube cares only about them. I bet there is a big customer support department staffed with real people that serves only advertisers.


It's hard to argue that Walmart and Amazon are much more considerate of their "users" than Youtube. They are all trying to maximize margins, which are dependent on maintaining network effects and keeping costs as low as possible.


Why does content need to be "flagged"?


Don’t even care that it’s AI. It’s just that the AI-based products simply suck. Which is saying a lot considering all the money being thrown at it to solve simple problems.


Only way is probably population reduction


YouTube's content policy is bizarre. Children in my family seem to watch videos of Spiderman dry humping Elsa or grannies kicking people in the nuts, but then they remove stuff like this.

I guess this is what happens when advertisers are the main priority and the moderators have all been replaced with robots. Content that may actually harm the development of children is considered fine, while someone posting a slightly edgy political take or a research paper that triggers a bot gets removed.


YouTube will robot-moderate themselves into the ground. Too many people are leaving the platform. It’s just a trickle now but soon it will be a flow.

It’s the inevitable result of moderation based only on who’s paying you.


> Too many people are leaving the platform

A quick google search shows the platform has 2 billion monthly active users[1]. A number that has only been growing. Why do you claim people are leaving? Is it because of a handful of high-profile cases, or maybe you live in a hackernews/techie thought bubble where everyone is annoyed at youtube? The claim that people are leaving, or will leave is just completely baseless imo. This reminds me of the saying "there are 2 types of programming languages: those that people complain about, and those that no one uses".

[1] https://www.businessofapps.com/data/youtube-statistics/


>It’s just a trickle now but soon it will be a flow.

And soon after that it will be a torrential exodus. Myspace died within the space of a few months during my time in high school. YouTube could easily face a similar downfall.

I'm not saying "YouTube will die in a year!" I'm saying if they don't fix their moderation problems, their auto-bans, etc... they will start to lose bigger and bigger customers more frequently.

I claim people are leaving _because they are_. Just because people are joining faster than they're leaving doesn't invalidate the point that people are leaving the platform for a place where their channels won't get completely demonetized over background music that an AI flagged. YouTube has a serious moderation problem, and it will be its undoing when another competitor comes along.


I too don't see people leaving. We can complain all we like. Nothing will change until there's an alternative.


Eh, I've seen several content creators I follow make loud and pointed moves away from YouTube after they've been targeted or after a high-profile case gets some news.

People are leaving, and if YouTube doesn't fix its moderation, it'll only get worse. There are alternatives, just none that have done anywhere close to as well as YouTube does. It's a good product, it literally just needs to not be moderated into the ground.


We've all seen mighty platforms fall. Myspace at one time could've made the same argument.

You're not wrong, but that logic doesn't hold forever.


Where are people escaping to? Are people publishing their own content on Gopher or BitTorrent nowadays? These platforms have a perfect balance of usability and freedom. They can take over the world.


It seems like they are making their own sites. Nebula is one such place where a few of the higher quality content creators are going. I've no idea if it'll work, but they are trying.

https://watchnebula.com/


With a $5 monthly fee and no way to upload your own videos, Nebula is most directly competing against Netflix, not YouTube. A few top creators might be able to make that work, but most won't.


And no free trial w/o giving credit card info. Yeah, hard pass.


That is very normal. It’s to prevent people from just creating accounts multiple times to never have to pay.


A $20 gift card is great for those things. Then, it doesn’t matter whether you remember to cancel.


Use privacy.com. It's a godsend.


The problem with these techniques is discoverability. How do you build an online audience without paying Google for advertisements?


Then perhaps the solution isn't for politicians to demand that YouTube be separated from Google/Alphabet, but instead demand that users on YouTube be able to discover videos on other sites.

I see no reason why the search feature within YouTube couldn't return videos that people were hosting on their personal sites or on competitors to YouTube (including Twitch VODs and livestreams). Even subscriptions and recommendations could work across platforms.

YouTube could still run adverts in/on the videos, and the revenue from that would be split with the site where the video was actually published. For a greater share of the ad revenue, YouTube could also offer to stream (a copy of) the video from their own servers.

Even if sites (or Multi-Channel Networks) had to pay a $1000 fee to be included in this system, and YouTube kept its rules about moderation and demonetisation, it would still greatly improve discoverability and hopefully competition generally.


Patreon and Twitch seem to be doing a better job of handling content creators needs. I've seen a few who do everything on their own website, too, but that is less common for sure.


The author receiving a strike can mean two things: a) the policy prohibits this b) a mistake was made.

The author seems to consider the latter more likely.

Likewise, Spiderman dry humping Elsa staying up can mean either that the policy allows it, or, much more likely, that they just didn't manage to identify and take it down yet.


The problem is: how difficult is it for the average, non-special YouTube producer to fix "b) a mistake was made"?

And, how long are they refused earnings during that time?

It's similar to the copyright infringement stuff, where real livelihoods can be put in jeopardy from these sorts of decisions.


I'm not saying that b) is OK or means that everything is going well. I'm saying that assuming that the policy is bizarre means you're most likely looking for the problem in the wrong spot.


Knowing that that's what the kids are watching, why are you still allowing them to have access to YouTube?


I kind of get where you're coming from, but denying access to knowledge/reality is not helpful.


What knowledge/reality is gained from videos of Spiderman dry humping Elsa or grannies kicking people in the nuts? This is the typical kinds of content that YouTube has lined up for children. Do you really expect a cesspool like this to harbor anything of quality or value for developing minds?


You're speaking of a specific instance and declaring all instances bad. This is unhelpful.

If that specific kind of video exists and is common, this conversation could be had; but it's relatively uncommon, which makes it a surprise.

The real answer, as someone else has alluded to, is that YouTube is something that parents should watch or be aware of with their children.

There is a great deal of valuable information on YouTube and cancelling it wholesale is absolutely the wrong approach.


You realize that this comment section is in response to a story where YouTube is suspending educational content, right? Someone else already mentioned Elsagate which is still an ongoing problem as noted by the GP. YouTube is not interested in providing healthy material, which is not surprising since they are owned by the world's largest advertising company. I would advocate that parents should look for better alternatives altogether rather than use it as source of education or entertainment, even with supervision. It's simply the wrong place to go for that sort of thing.


> If that specific video exists and if it's common, this conversation could be had; but, it's relatively uncommon, making it a surprise.

It is common: https://en.wikipedia.org/wiki/Elsagate


Our kids are not allowed to watch YouTube on their own because YouTube's algorithms surface the worst of the worst.

Youtube is not something kids can use without supervision.


And for those pointing out YouTube Kids: it is only slightly less bad. It’s still full of questionable material.


We should start letting "AI" decide the outcome in any of Google's pending court cases. We could set up a Twitter account and direct them to complain, I mean appeal, by @-ing the Twitter account and hoping the bot gods decide their tweet is worthy enough for human review.


I think the "oh no human moderation would cost us too much" defense is a distraction. It's a good one, because it is working, but it is still a distraction.

Let us suppose we have a program which checks for orthodoxy according to the YouTube Terms of Service against each video submitted.

Nothing but a lack of having done the work prevents YouTube from adding to its "rejected" flag the following:

1) A list of start times and durations for the portions of the video violating the ToS.

2) For each item in that list, make it a tuple and add a reason -- which clause was violated?

Then outputting that to the owner of the rejected video.

Nothing stops them from doing this. Some set of words or images happened somewhere in the timestream -- the orthodoxy program has that timestamp as it churns through the video. And a specific clause was violated -- a particular word in a wordlist as an example, because there's no simple, non-composite function that just says "good or not good."

This is really not a particularly high bar as an ask.
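
For illustration only, here is a minimal sketch of the kind of structured rejection report described above; the field names, clause identifiers, and report format are all hypothetical, not anything YouTube actually produces:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Violation:
        start_seconds: float     # where the flagged segment begins
        duration_seconds: float  # how long the flagged segment lasts
        policy_clause: str       # which ToS clause was matched (hypothetical identifier)
        evidence: str            # the word, phrase, or image label that triggered the match

    @dataclass
    class RejectionReport:
        video_id: str
        violations: List[Violation]

    def format_report(report: RejectionReport) -> str:
        """Render the report as a message the uploader can actually act on."""
        lines = [f"Video {report.video_id} was flagged at the following points:"]
        for v in report.violations:
            end = v.start_seconds + v.duration_seconds
            lines.append(f"  {v.start_seconds:.0f}s-{end:.0f}s: clause {v.policy_clause} "
                         f"(matched: {v.evidence})")
        return "\n".join(lines)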


That will never work.

If you tell the user exactly what they did wrong, they will keep trying by changing very little in the original video until it gets past bots, but still keeps the illegal or "forbidden" message.

I think youtube should not be allowed to remove videos for their content unless for legal reasons. Meaning it's a violation of the law, in which case they have to send the information to the police as well. And that, of course, should require a human stamp of approval. Of course, Youtube should be allowed to remove videos if they are too long, wrong format etc., but NOT for their content.


> If you tell the user exactly what they did wrong, they will keep trying by changing very little in the original video until it gets past bots, but still keeps the illegal or "forbidden" message.

I think there's a possible implementation that handles the issue you describe.

It's trivial to detect a user trying to circumvent content controls if they're trying it from the same account. When that is detected, the account can be put into manual review mode, or new uploads can be automatically blocked for a while.

If circumvention is attempted with different accounts, the current system already doesn't prevent that, so nothing is lost.
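
For what it's worth, a rough sketch of what that same-account check could look like; the thresholds, fingerprinting, and routing labels below are assumptions for illustration, not a description of any real system:

    from collections import defaultdict

    MAX_REJECTED_REUPLOADS = 3            # hypothetical threshold
    rejected_counts = defaultdict(int)    # account_id -> recent rejected-upload count

    def route_rejected_upload(account_id, video_fingerprint, prior_rejected_fingerprints):
        """Decide how to route an upload that automated checks have rejected."""
        rejected_counts[account_id] += 1
        # A fingerprint matching an already-rejected video suggests the uploader is
        # tweaking it to slip past the filter rather than genuinely fixing it.
        if (video_fingerprint in prior_rejected_fingerprints
                or rejected_counts[account_id] > MAX_REJECTED_REUPLOADS):
            return "manual-review"        # or temporarily block new uploads from the account
        return "automated-review"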


My experience having worked on ML at content platforms (so FB, TikTok, Snap): the largest motivation for content removal is content that a general audience would be uncomfortable with. Pornographic content is the easiest example. Should a video platform be required to show porn? Some platforms allow it with tagging, others avoid it entirely and often try to remove gray-area things like suggestive videos.

Another big area is political content and how much of it to allow. A lot of people complain about hyperpartisan news content and fake news. How to moderate it is very platform-dependent, with some wanting to restrict a lot of it and others leaning toward allowing almost anything.

Also I’d be extremely surprised if youtube has no human moderation. My experience has always been a mix of human and ai moderation. Facebook has like tens of thousands of contractors paid for moderation. They still use a lot of ai on top of that.


I think Reddit handles porn well. If you don't want to see porn, the images are just hidden behind an NSFW thumbnail, and nobody forces you to click on it. If you want stricter parental controls, it's fine to filter much more aggressively for porn.

As to political speech, I find it completely unacceptable that a private company gets to decide what stays up and what doesn't. For example, just the other day YouTube threatened to shut down the Senate channel in Romania because of the speeches of one of the senators. She is a wacko, true, but she is a Senator in the Romanian Senate and didn't violate any law; it's unacceptable that private companies get to decide what counts as free speech nowadays. Romanian MPs have special protections in the law, and the most important one is free speech: they have even more free speech than any ordinary citizen of Romania. But hey, YouTube is above the Romanian Constitution.

Who is going to hold YouTube employees accountable? What democratic or judicial process is there to provide checks and balances on that? Just saying "move to another platform" is not good enough. You can't cut off the phones of your political adversaries and just say "buy a cell phone from another carrier."

If the platform is becoming overwhelmed with extremist content, then it's on the platform to find ways to better surface the most relevant content to each user (e.g. by burying it), but outright banning non-illegal content should be illegal.


For political content I was thinking more of QAnon content. Also, you can bury it in the recommendation sense, but you can't bury it in the search-engine sense without removing it. The mere existence of QAnon-like content on the platform, even if it requires searching, causes a platform a lot of problems. Burying content in the recommendation sense that is brand-problematic but not actually that bad already happens on these platforms. This may simply be content that is too common on the platform, buried in order to diversify the available content.

Reddit is one of the more lax content platforms but has occasionally banned subreddits, including political ones (the Donald Trump subreddit). I don't agree with your phone example, as content platforms are fundamentally about curating a certain group of content. I think of them as closer to news media in terms of what they show: you would not expect the New York Times to have many very conservative articles, and the opposite for Fox News. While content platforms have mostly user-generated content, that defines the available content pool, not what they want to recommend. The actual percentage of banned things is pretty low, so they do mostly allow any content, but there are certainly categories of content they spend a lot of effort on.

As for another motivation for political banning: one complaint TikTok often receives is that, as a China-related company, it should not influence American politics. If it wants to be safe there, then hosting extreme content becomes legally risky itself - not so much in the directly-illegal sense, but in increasing the chances of the platform getting banned, which felt like a serious risk a few months ago. I think in general a lot of the motivation for restricting political content is legal in origin, in the sense of wanting to avoid issues that cause Congress to do stuff to the platform. Similarly with Facebook and all the antitrust talk: one way of being careful is to restrict extreme political content that senators may otherwise complain about.

There's also the related question of whether political advertisements should be allowed.


This will make it very easy to develop an adversarial algorithm and avoid detection by YouTube's system.


That would literally beat the current situation by a country mile. The goal isn't to perfect the YouTube flagging algo, after all ... an adversarial algo is a small price to pay.


>That would literally beat the current situation by a country mile.

I don't think google agrees.


Google likely only agrees with whatever choice makes them the most money. Whatever decision they make will always be jaded by how they think it will affect their stock price.


Recently I also discovered that YouTube shadow bans comments for using banned words (e.g. kill, coronavirus, etc.) or external links (two of my comments got banned). I have seen some people asking for sources in YouTube comments, and it turns out you literally can't provide them. Maybe that's why only spam comments rise to the top: any long or thoughtful comment simply gets shadow banned.

0. https://support.google.com/youtube/thread/6273409?hl=en

1. https://support.google.com/youtube/forum/AAAAiuErobU70d28s1N...


The first part of the post, where they said they got a strike for teaching people how to use PGP...

Wow, that’s um, wild


I believe it's still against US law to export PGP to certain countries, as it's considered a munition. Wild is a great description.


Didn't DJB v US pretty much end that? That would also impact things like Firefox for the record.


Pretty much, but export to "rogue states and terrorist organizations" is still technically illegal. I suspect it wouldn't withstand challenge though.


Stop buying into the "the AI did it" whitewashing. YouTube is selectively targeting more and more channels and videos for daring to be any sort of contrarian. It's psyops by any other name.


"The AI did it" is the same cop-out as "A low level employee did it of their own volition" and should be rejected as an excuse for all the same reasons.


Are you claiming that Youtube is running psyops trying to cover up PhD research into WPA2 security? I don't want to distort your argument, I honestly don't get how you went from a WPA2 video being accidentally taken down to "this is intentional psychological warfare"...


No, I'm talking about YouTube's censorship campaign much more broadly. I was hoping that was more obvious. I am saying that I have been paying attention to this trend for years, and I have very often seen both creators on YouTube and YouTube itself use the "AI did it" excuse, often without evidence that it's true, and I posit that if someone wanted to, it would be possible to prove.

I can't prove it myself, but I can offer a plethora of inductive evidence that makes it probable enough that I am right.

It's more than just a pattern on YouTube itself; just read the kind of doublespeak Susan Wojcicki puts out on the topic.


YouTube is? Or we as a society are?

The world of Fahrenheit 451 didn't arrive that way in a day, it probably took decades.


It's not "psyops". YouTube, like every gigantic corporation, is motivated entirely by money, and they would be happy to monetize every video on the site if they could; but advertisers pay the bills, and since advertisers are sensitive to their public image, YouTube goes out of its way to appease them. The problem here is the ad model, not AI or YouTube's values.


>they would be happy to monetize every video on the site if they could, but advertisers pay the bills, and since advertisers are sensitive to their public image

That's the excuse, but in reality Google still runs ads on demonetized channels. And there are companies that want to advertise on these other channels. Google is actually doing a disservice to stockholders by excluding those profits, but many suspect that they want to be a cable network in the long run, siding with media corps.

There are tiers of advertisers; lower-tier companies will gladly pay to advertise on lower-quality channels.


That's an outdated view. Youtube now bends to activists and political winds.


Advertisers (and PR departments) are fragile in the face of activists and political winds.

Society gives PR departments far too much power. We should be electing governments that enforce reasonable standards, not relying on what is essentially DDoS to achieve social mores.


IMO bending to activists and political winds is in the same realm as what OP mentioned. There is a LOT of money to be made by following the trends, and in today's world activism and politicization are the name of the game.


So what is the activist political position on WPA2 vulnerabilities?

You see what you want to see, but YouTube hosts an overwhelming abundance of evidence that contradicts your claim.


Internal employee pressure is the cause of some things getting removed, but it is not behind everything. In this particular case it might be driven by legal concerns or something. But it should also be seen in light of the status quo. With censorship being the new default, it doesn't take a strong reason to add another thing, such as hacking-adjacent content, to the no-go pile. The code is already written, and the routines are well-known. There is no freedom-of-speech reputation left to protect. So the cost of removing this content is very low, lower than the legal risk.


Without any evidence this is just speculation. The reality is, there are many popular political YouTube channels that represent viewpoints all across the spectrum and this is an easily verifiable fact. I am yet to see any evidence that YouTube is motivated by anything other than money.


I have seen evidence, but nothing I can share publicly.


The activist political position is that targeted hacking is a major political threat (DNC hacks etc). Thus, any material that sounds like it might be providing assistance to that is potentially politically volatile.


> So what is the activist political position on WPA2 vulnerabilities?

Well we did just have a case with Twitter and IIRC Youtube where they redefined their "hacked material" policies to bend to the political will after the whole Hunter Biden laptop fiasco. So this may just be collateral damage due to those policy changes and the AI behind enforcing them.


FYI, this is the video: https://peertube.sunknudsen.com/videos/watch/182e7a03-729c-4... ; the links in the description:

KRACK Attacks: Breaking WPA2 https://www.krackattacks.com/

KRACK - Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2 https://www.youtube.com/watch?v=fOgJswt7nAc


Are these new attacks? The video in OP only seems to indicate the ability to attack content on a Wifi connection of other connected clients if you're already connected.

It doesn't give you the ability to "crack" a WPA2 network without the password.


The hypocrisy from Google here is palpable. First this content creator got a strike for showing users how GPG works and now for linking to academic research. On the other hand you have Google project zero who openly publish vulnerability research to the detriment of some software vendors.

You can't have it both ways, and as someone who works in this space I'm pretty upset by this ham-fisted approach to censoring content.

I look forward to google becoming the next AOL.


I used to love Google - I can’t believe I’m feeling the same way. Everything is so downhill for them. Searching isn’t as great as it used to be. Popular services get killed to be replaced by replicas (thinking of messaging apps). What the heck is going on with Google? I used to think they were innovators, and now I feel like they’re just Big Brother.


Maybe it's time for a public, company-independent complaints platform where all companies are obliged by law to respond. Each complaint of course is hidden behind some anti-bot captcha/protection so that the companies themselves can't use AI to answer your complaints.


https://ec.europa.eu/info/law/law-topic/data-protection/refo...

I dunno, where's the evidence this case was even done by AI? People just say Google does everything with computers when they actually use armies of contractors.


That's basically what the author suggests when they say:

> Anyhow, I truly believe humanity has to rollback to operating at a human scale.

It's impossible to operate a complaints-platform on a global scale if it's run by humans. According to GMI [0], 500 hours of content are uploaded every minute.

Given an average video duration of about 12 minutes [1], that would be 2500 videos per hour. That's just too much to manually review and handle complaints.

[0] https://www.globalmediainsight.com/blog/youtube-users-statis...

[1] https://www.statista.com/statistics/1026923/youtube-video-ca...


Is it though? Let's do a very rough estimate: 500 hours of content per minute in 2019; let's say 1 reviewer can review 3 hours of video each hour (by increasing play speed, skipping, etc.) and we have a global workforce in lower income countries working 3x 8-hour shifts + weekends. That's 500*60/3*3 = 30,000 reviewers; at a monthly cost of $1,000 that's 30,000*1,000*12*7/5 = $504m/year. YouTube had $15b in revenues in 2019, so this represents around 3.5% of revenue. Now this is assuming that we actually need to 100% review every video before releasing it (which is not the case), and one reviewer can probably review more than 3 hours of content per hour with the right AI assistance, so the real cost would be quite a bit lower. Even then, spending less than 5% of revenues on content review, moderation and support sounds very reasonable to me.
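
Spelled out as a tiny script, using the same assumed inputs as the estimate above:

    hours_uploaded_per_minute = 500      # 2019 figure cited above
    review_speed = 3                     # hours of video one reviewer screens per working hour
    shifts_per_day = 3                   # 3 x 8-hour shifts for round-the-clock coverage
    monthly_cost_per_reviewer = 1_000    # USD, assumed contractor rate

    hours_uploaded_per_hour = hours_uploaded_per_minute * 60        # 30,000
    reviewers_on_shift = hours_uploaded_per_hour / review_speed     # 10,000
    total_reviewers = reviewers_on_shift * shifts_per_day           # 30,000

    # The 7/5 factor covers weekend shifts on top of the weekday workforce.
    annual_cost = total_reviewers * monthly_cost_per_reviewer * 12 * 7 / 5
    print(f"{total_reviewers:,.0f} reviewers, ~${annual_cost / 1e6:,.0f}m/year")  # 30,000 reviewers, ~$504m/year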


The flaw is that you think humans would do a better job than AI, which is definitely not the case. Especially hiring 30k people in a low income country, what could go wrong... This is the kind of scale problem that can't be fixed by human review.


> Is it though?

Yes, it is - because it's actually 2500 videos per MINUTE, not per hour, mea culpa. So your 30,000 reviewers would actually have to be at least 1.8 MILLION.


They didn't use your videos-per-hour/minute figure; they used hours of content per minute, so it still comes out to 30,000 reviewers.


It's not about the viewing time, though, it's about the videos.

The misconception is that it's the review process that's the problem - it isn't. That can be automated just fine.

The problem arises as soon as there are complaints or issues with the content and that depends on the number of videos, not the duration.

So if there's a problem with a video it can get flagged, de-monetised or even taken down automatically by software (as is the case now). This is a non-issue. It gets complicated as soon as one party has a dispute over this and that scales with the number of videos, not their length.


> that scales with the number of videos, not their length.

That seems plausible, but if so, the entire calculation would have to be redone from scratch, with qualitatively (not quantitatively; different units, not just different values) different numbers, so bringing "1.8 million" into it is still a misleading non sequitur.


It takes a lot longer to make a video than to watch it. It therefore stands to reason that if humanity is capable of making all that content, humanity is capable of watching it - if it decided that were a priority.


2500 videos per minute doesn't equal 500 hours of original content per minute, which is part of the problem.

Just look at all the reaction channels and compilations that simply reuse the same content over and over again. You have one funny or shocking clip (often from 3rd party sources such as TikTok) and you'll find the same video snippet in at least 10,000 remix/compilation/reaction videos. Not to mention reuploads and straight up copies.

Algorithms have a hard time catching up with this and cropping, mirroring, tinting, etc. are often used to confuse ContentID. Asymmetry is the problem. Bots and software can both spam and flag content at superhuman rates.

The inverse - e.g. deciding whether a complaint is legit, fair use applies, whether monetisation is possible, etc. - is actually a really hard problem and therein lies the dilemma.

Certain parties are gaming the system and the scale is just too much to handle manually.


I don't have any data to back this up, but I believe that of those 2,500 videos per minute, 2,450 or so could be classified as safe by the AI, not requiring human interaction. The other 50 get scored from 0 to 100 on a badness scale. The ones closer to "not that bad" (ToS issues and such) get published automatically while awaiting review. The illegal content (rape, gore, child porn) and such gets blocked automatically until reviewed by a human. Doesn't sound that far-fetched to implement with $50B a year in profits?


But how would that help with complaints, ContentId and copyright claims, though?

The problem isn't the automated review process, the problem is complaints and disputes.

Even if only 1% of all videos had any issues of this sort at all, that'd still be 25 complaints per minute about the most complex topic in media no less - copyright law and fair use.

The problem lies in the asymmetry - bots and automated flagging campaigns can scan, mark and take down thousands of videos per minute no problem.

But it's impossible for creators to get their issue reviewed in time by a human, because we just don't have AI capable of handling such decisions yet. And even then it's often still not as clear-cut as one might think and both sides need to be heard, etc.


And right now it's essentially impossible to be heard by a human, which is the problem. They don't have enough humans employed.


I've thought that something like the SpamAssassin model would be sufficient: calculate a 0.0-1.0 likelihood to block, set a cutoff at the 0 end to auto-approve and another toward the 1 end to auto-block, and moderate the middle.

Was good for spam for a long time.
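
A minimal sketch of that triage, with purely illustrative cutoffs (a real system would tune them on labelled data):

    AUTO_APPROVE_BELOW = 0.2
    AUTO_BLOCK_ABOVE = 0.9

    def triage(block_likelihood: float) -> str:
        """Route a video by the classifier's 0.0-1.0 likelihood that it should be blocked."""
        if block_likelihood < AUTO_APPROVE_BELOW:
            return "auto-approve"
        if block_likelihood > AUTO_BLOCK_ABOVE:
            return "auto-block"       # could still be queued for later human confirmation
        return "human-moderation"     # the uncertain middle goes to reviewers

    print(triage(0.55))  # -> human-moderation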


Maybe we don't need to let people upload that much video content.


*2500 videos per minute


Yes, thanks.


A small claims court for the internet. I like that.


That sounds like something the government should oversee; maybe they could also make some regulations so that, I dunno, corporations might need to be responsible for their actions.


What do you think this will achieve? We know what Google thinks; they don't want this content on their platform.

Forcing them to state that in a specific forum as some sort of power play isn't going to help.


AI is a great excuse for censorship. But many economic actors want censorship including the advertisers. So it seems more competition to Youtube is the only answer to censorship. But I don’t see any credible competition on the horizon to Youtube for a while.

The ultimate ironic censorship on the part of Youtube:

https://www.mintpressnews.com/media-censorship-conference-ce...

And the discussion thread:

https://news.ycombinator.com/item?id=26008217


Time to install GPT-3 on your own server and unleash it onto Google-Support, YouTube-Support, Alpha-Support, etc. to complain about your situation ;-) You just need to answer the captchas to keep it going.


With AI-based support automation, it might end up like https://www.youtube.com/watch?v=WnzlbyTZsQY


Combine that with 2captcha.com, what do we have? Lol


YouTube, like most of the "Internet" these days, is just a shopping center. It appears to be the Universe because these sites/companies are so massive, but at the end of the day it's their turf and they do what they want.

Now, once they reach a certain scale, why are they allowed to operate like a "normal small company"?


>Now, once they reach a certain scale, why are they allowed to operate like a "normal small company"?

There was and is a push to make them behave more like a utility (i.e. a platform). Mainstream media, along with a major political party, are on a crusade to make these tech platforms act more and more as editors, because they like that their competition (in the case of media) and political rivals (in the case of Democrats) are currently bearing the brunt of the censorship.

And you can see this push even here on HN, with constant references to how evil Facebook and Twitter are because memes on those platforms caused Hillary to lose in 2016, and therefore how misinformation is such a big problem that more censorship is needed. I don't know how you fix that. These platforms didn't want to do this. They got bullied into it.


If I could uninstall YouTube from my Android, I think that would be a good start on the journey towards being a normal, if not small, company.


Leonard French recently reviewed a Supreme Court ruling on Knight Institute v. Trump that includes a concurrence statement by Justice Thomas that touches exactly on these issues (and maybe even net neutrality by proxy).

https://youtu.be/IpkGzJYYQj4

https://www.supremecourt.gov/orders/courtorders/040521zor_32...


Google being Google. It's funny that I once looked up to this company, and now I view it as a threat to society.


"You either die a hero, or you live long enough to see yourself become the villain." - Harvey Dent in The Dark Knight

It's funny that I really appreciated Google when I first signed up for Gmail. Now that I've had it for 16 years, I resent the fact that I can get deplatformed for any reason, and there's really nothing I can do to prevent that.


export all of your email to Protonmail and stop sucking on that Google breast


I'm seriously considering it, but I'm a heavy IMAP user. When you enable IMAP in ProtonMail, it's no different than any other mail provider, security wise.

This is the dilemma for me.


Other people have made good points. Just wanted to add: you'll feel better knowing you're no longer vulnerable to Google. It's a nice feeling of relief knowing Google could cancel my account tomorrow and I wouldn't be hurt in any substantial way. Definitely worth the effort to move your email to an independent provider.


Technically no different, but you have cast a small vote against behavior you no longer tolerate or support (socially). Even if it's a fraction of a dollar, in aggregate they add up.


That's another way to look at it, you're right.


I used to be too, but now I use the ProtonMail app on my phone. And this on my desktop: https://github.com/vladimiry/ElectronMail


I don't think you can connect to your email via IMAP without ProtonMail Bridge (which encrypts traffic between your email client and ProtonMail).


AFAIK you bypass 2FA when you use IMAP. I don't see many advantages of using Proton (over any other independent e-mail provider) without 2FA.

The thing is, I connect all my accounts to a single mail application and handle everything from there, including their automated backups for ~15 years or so.

The jury is still out for me, we'll see.


It is for this reason I am moving away from ProtonMail to Migadu. Literally any IMAP-offering email service would do. All that we need is a custom domain.


buy your own domain and only use the email addresses you fully control.


I don't see any mention of the other thing that would be a concern to me - don't ever be in a situation where any Google product is critical to you, because the behemoth may casually destroy you as a side effect.

What if instead of locking him out for a week they'd canceled his account (and related accounts, e.g. Gmail, plus linked accounts because of 'bypassing ban')?

And you don't have to be doing something like discussing security - maybe you're talking about Pokémon Go and they ban you for "CP videos" or you posted a bunch of red or green emoji in livestream comments for a gamer. Goodbye email, sites logged into with a Google account, Android phones (because they KNOW that phone number is linked to your identity), etc.

And what are you going to do about it? Call customer service? snrk


> And what are you going to do about it? Call customer service?

I know it's not perfect but I feel much better knowing I pay for my email service, online storage and password management, knowing I'll be a paying customer in someone's system deserving of at least some degree of customer service.


I remember a survey by google about security research and one of the questions was something like "What hinders learning/education in this topic?", and I remember answering exactly this behavior.


FWIW the video in question seems to be this one from ~1y ago:

https://peertube.sunknudsen.com/videos/watch/182e7a03-729c-4...


It would be fun to do a quick survey of books breathlessly laying out the future of the internet from 20 years ago.

Looking back, the evolution of social media/youtube was pretty obvious. Back then, not so much.

1) Begin with anything goes including illegal. Run at a loss, Grow baby!

2) Ads

3) Bitching from some important people causes some removal of the more flagrantly illegal stuff.

4) Employees and PR departments apply a POV to what is now close to a monopoly. Large scale censorship.

What makes the modern era interesting is the POV angle. I can't imagine Rockefeller's Standard Oil restricting sales to people with conflicting politics. Of course, there are people spending a lot of time each day on r/politics and what used to be r/thedonald shouting joyous insanity at each other...that's the modern pool of workers.


Whenever I hear "algorithm" anymore, I see it as the "statistics" of the past. This entire thing is a product of math washing[1].

[1]https://www.mathwashing.com/


It's disheartening that in the 2000s people used to regularly talk about creating decentralized systems to prevent megacorporations and governments from being able to censor the internet; now the tech community that used to deplore such tactics actively supports mass AI censorship on an entire host of topics where the only version of the truth that is allowed to stand is #thenarrative.


Are we certain this was not because the vulnerability is called "KRACK" which is similar to the drug "crack"? Youtube has been very strict about such phrasing mishaps.


If your argument is that it would be reasonable for YouTube to automatically ban anything that could be a mention of crack cocaine, we're not going to find a lot of common ground on what is accidental moderation vs intentional censorship.


YouTube (and others) appear to have lost the plot.

I tend to avoid it and support other platforms to encourage content creators to migrate.


This is what people choose when they decide to angrily tweet at advertisers to pull their ads whenever YouTube doesn't take down bad videos quickly enough. It tells YouTube that it should tune its moderation to be overly aggressive. This is true regardless of how much human moderation they use.


One of the largest and most profitable companies in the world can't hire enough low-paid humans to do content review. They already have some of the highest margins in history and they still can't figure out that sacrificing less than 1% of profits could solve this problem?


Paying less attention to YouTube also solves this problem.

People still can't figure out that not watching YouTube solves all moderation problems?

Google doesn't care about those posting on YouTube. They care about advertisers pulling their money if whatever content infringes on THEIR priorities.


I think the reason most people are watching YouTube instead of traditional media is precisely this type of thing. Take this exact situation as an example: this kind of content would never make it to traditional TV, because TV can't find an audience for such niche content, e.g. people smart enough to warrant linking a PhD research paper in the description.

I think Odysee is a good alternative. PeerTube doesn't really seem to be a great option here as it seems to have been down during this time.

> Google doesn't care about [...]

Mostly true, but also kind of not relevant. Google isn't doing anything here besides making poorly trained AI models that try to solve this problem for them. It's more negligence than it is malice, IMO.


Odd, more so given that a whole video about this has been up on YT since 2017.

https://youtu.be/Oh4WURZoR98


Note that this video was originally taken down for the same supposed ToS violation: https://twitter.com/vanhoefm/status/927612810984677376


"violates our harmful and dangerous policy" is hysterical coming from a harmful and dangerous automated censorship engine.


YT suspended my account without telling me why. Oh and no recourse for this evidence-free execution with only an ostensible appeal.


I think that Google should hide all sites that display ads from the search results... because after all, it goes against AMP's principles of responsive websites.

They definitely should also block Gmail... because it has become incredibly slow to load.

I bet the CEO is laughing a lot when he sees all of this interacting together.


Off topic: is there a YouTube equivalent in China? I'm just curious.


When I lived there it was https://www.youku.com/, not sure if they're still the dominant site now, stuff changes quick.


YouTube has been censoring folks left and right since the start of COVID.


Google has long been manipulating content; look up "blue anon". You can compare searches with Bing and you'll find lots of content that's been "moderated".


Not a popular opinion here but selectively covering up research is a play straight out of the Communist Manifesto. This sounds more like concerns over the spread of hacking intel, but it's worth noting yet another example given big tech's selective censorship over the past year.


[flagged]


If you're interested in reaching out to the mainstream public to increase its exposure to a topic you care about, there's no way around YouTube and the like.

It's obviously and evidently not just about uploading videos, but rather about building and reaching a community.

Also, it's wrong to accept that these platforms have this kind of influence and we should just avoid them. They're part of the social fabric of our time, and it's our duty to make things better.


> If you're interested in reaching out to the mainstream public to increase its exposure to a topic you care about, there's no way around YouTube and the like

This kind of attitude only serves to entrench the problem and make everything worse. Even if you are not personally affected by YouTube's damaging policies, just using the platform is an insulting lack of respect towards other people who are. Whatever happened to "be the change you want to see in the world"? Any such change must be led by knowledgeable people like this security researcher. I hope that this ban spurs Sun Knudsen (who already publishes their videos on PeerTube) to lead more and more people away from this and other user-hostile platforms, until these platforms finally turn into a cesspool of content-free "influencers".


I see that "duty" as avoiding using YouTube and encouraging others to do so as well, until YouTube goes the way of MySpace and Yahoo and every other service that's lost their way.

Eventually even Google in general.

MySpace was part of the social fabric, too, at one point.

No one is too big to fail or at least become irrelevant. No one is sacred.


> If you're interested in reaching out to the mainstream public to increase its exposure to a topic you care about, there's no way around YouTube and the like

Is this true? There's almost no discoverability on YouTube imo. You could just push people towards your privately hosted platforms.


> Anyhow, I truly believe humanity has to rollback to operating at a human scale.

BYOReligion.. and Google believes that they need to increase 'engagement', increase profit, and decrease headcount.. so.. there you have it dear Sun Knudsen.

Edit: I don't care for the 'virtual currency' of 'karma'. I wonder, though: the sky is blue, I point at the sky, I say "the sky is blue", and people frown (?) upon me for stating an obvious fact of life. Or is it just Google trolls/fanboys/fangirls that don't like the truth being called out? My above comment is also applicable to any entity that automates "as much as possible" and maintains a small team to manage operations. Something tells me that their internal Finance team has "enough" staff to monitor the General Ledger, because the General Ledger is critical to their $$$$$. A 'content creator', though, is not as critical, and thus gets far less attention, and <insert FAANG> will only be bothered to fix this if said person yells loud enough. How is this fact of life being downvoted? (Not me... I don't grow taller/shorter based on the karma points) :)


> "I say "the sky is blue", and people frown (?) upon me for stating the obvious fact of life.."

Yes; exactly this. If you were out in a crowd and called everyone's attention to you - thousands of people quietened to pay attention to what you were going to say for a moment, and you announced the thing you yourself say is an "obvious fact", that the sky is blue, the crowd would be annoyed at you for wasting their time with obvious low-quality trivia.

"company seeks profit" is the same low-value annoying thing to say. You're not being downvoted because it's untrue, you're being downvoted because it's like a toddler going "hey, hey, hey, hey, look, look at this, watch this" "okay?" "throws ball at ground". Great, thanks for that, good contribution.


I am sure it is OK to use AI to cheapen initial screening. FAANG and the like are becoming so vital to people that sometimes they can lose their income over these decisions. I see nothing wrong with those that serve as a source of income being declared a utility-type service with a mandatory staged conflict-resolution process. Said process, at the last stage (well before an actual lawsuit), would include being able to present proof to actual human beings who have the power to reverse the decision and are paid for, but not employed by, said utility. In case the utility fails to comply, that watchdog should be able to levy fines. This should also help prevent cases where the utility's "moderation" is based on the political (or other similar) opinions of the owners.

All this "I am a private company and do as I please" when it comes to very big companies is baloney. They have way too much power and should be held responsible for how this power is being used. Preventing them from being able to lobby governments should be a first priority as well.

Since companies of this type are international, it is of course up to individual governments to implement whatever measures, if any, they see fit.


Google is a private company. It's fully within their rights to remove any video they don't like, for whatever reason.

It comes with the territory if we want Google to be able to censor hate speech and misinformation.

You can always use a competing site, or build your own.


Except the sheer scale of Google is hair-raising. (Sleep tight dear FTC).

If Google loses a YouTube creator, it has almost no impact on their bottom line.

If a creator gets kicked out however, their livelihood is threatened.

I would argue this is the perfect scenario for a creator's union (of course, it should use some other platform to communicate).


> If a creator gets kicked out however, their livelihood is threatened

No. There are numerous providers that can host your videos. For a small fee. This mentality is what gives Google such pseudo-power.


At the end of the day, I'd guess this is less about alternatives and more about youtube being the best platform for your investment. Hosting isn't the problem, its the audience and traffic (and ad monetization from your vids.)

Do the alternatives hold up in terms of things like market reach, $ performance, and not banning the same users?


If Youtube was to go away today, do the guys who upload videos on this platform stop being content creators?

This is a myth I want to dispel.

Now to answer your question: yes, it will be tough and disruptive to start distributing your content through other alternatives if you were using YouTube as your only vehicle. But similar issues were faced by other content creators, such as musicians and journalists, when the streaming era started disrupting their vehicles. They adapted. I don't see why video content creators shouldn't be able to find the same flexibility and continue to thrive. YouTube by itself, with no content, does not have anything to sell.

In fact, relying only on a single platform can be dangerous for creators, since we've seen some of them booted off by AI with no human explanation.

Audience, traffic, monetization, reach are all market problems. They will exist whether Youtube is there or not.


Thanks for joining HN just to tell us this.


That post really does read like a parody haha.

Such a desire to shield mega-corporations from any moral culpability since they think it serves their personal sensibilities. Very naive.


It seems to be a common opinion; I see it a lot.

What many do not realize is that 'build your own' is going on (I can name at least 4 YT replacements, some better than others). But people dismiss them for the same reasons creators are being kicked off the 'mainstream' sites. So you see a lot of YT refugees, and many existing YT creators 'splitting the difference': basically they upload to both YT and several other sites. There is almost zero downside in doing so, really.

Eventually someone will click it all together in the right way and create a decentralized uncensorable thing. At that point the internet will become a lot more like 4chan.

The old adage is 'the internet sees censorship as damage and routes around it'. It just does not happen overnight though.


Do we want Google to be able to censor hate speech and misinformation? You assume the answer is "yes", but I think it's at least an open question, with me personally leaning toward "no".



