>Anyhow, I truly believe humanity has to roll back to operating at a human scale.
>Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to choose what is right and wrong.
The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube? Would that work with the current society's attention span?
Is YouTube as we know it actually feasible without AI-based moderation? Can it be a sustainable business with an army of human moderators and/or a drastically reduced scale as a result?
I don't know, but I have my doubts.
Yes, moderation/suspensions can be a pain, and pointing out embarrassing mistakes is important to add pressure on Google to put more resources into it. At the same time, YouTube is full of great content and content creators. Anyone can upload a video and make it viewable by billions, instantly. I think we shouldn't forget how amazing this actually is.
I think the argument is framed as wanting a human during take-down disputes, not pre-approval. There's no need to treat this as an absolute. Why are human contact and reasonable expediency considered mutually exclusive? There is a lot of room in the middle. Can allow human dispute resolution for non-free accounts (i.e. paid or earning). Can limit upload frequency site-wide while dispute/abuse queues are backed up (knowing YouTube would be incentivized to prevent backup). Can have a non-profit reviewer company that charges very minimal amounts for human review. Etc.
Let's be clear, the lack of humans anywhere in the pipeline for uploaders is because they don't have to, not because it is impossible. Having said that, I really hope we do not see government intervention here.
Agree wholeheartedly: there's definitely a spectrum between what Google are doing atm and full human moderation. I am a strong believer in using AI to augment humans, not replace them. Nudging Google towards putting more humans in the pipeline is what I am hoping posts like this are achieving.
When was the last time Google listened to the public? In spirit, not because there was a backlash? It’s a techno conglomerate with no human face. Google was cool and interesting in the ’00s, but nowadays all they create is fodder for those sweet ad dollars.
> Let's be clear, the lack of humans anywhere in the pipeline for uploaders is because they don't have to, not because it is impossible.
The real problem here is that people are envisioning human interaction as something that can produce better outcomes, but the cost of that may not be economical.
You can hire a generic human to look over a flagged video for 15 seconds and render a verdict, but the verdict is going to be bad. That's not enough time to make an accurate evaluation.
Suppose you have an epidemiologist on YouTube making a claim about virus propagation. It gets flagged because there is a CNN article reporting that the CDC said something contradictory. The epidemiologist argues that the CNN article is a misinterpretation and if you look at the original CDC publication, that's not really what they said.
At this point for YouTube to get it right they've basically got to hire their own M.D. to study the CDC publication in enough detail to understand the context. That is prohibitively expensive. But the alternative is often censoring an epidemiologist who is trying to correct misinformation being disseminated by CNN.
And science is a moving target. The infamous example was the CDC in 2020 telling people not to wear masks. But there was a point in time when scientists were publishing studies on the health benefits of tobacco consumption. Challenging the established orthodoxy is inherently necessary for progress.
The underlying problem here is that YouTube isn't competent to make these determinations. They're attempting to do something they have no capacity to succeed in.
So true. Automation cannot infer nuance. It's OK to strike down obvious spam and trolls, but this pretense that private actors should spend their resources to cultivate a sane public space is just a broad-spectrum reflex reaction to micro-targeted disinformation. Nonsense.
What stops the YouTube reviewer from calling or emailing the CDC to render a verdict on which interpretation of their public information is accurate?
When it comes to government agencies, they are contactable (usually with a simple email). You might not get an immediate response, but a few hours to a few days later you can expect to hear back.
Why would youtube be trying to make a determination like that? Individuals are the arbiters of truth, not organizations. Ministries of truth are not desirable.
What's with the assumption here that there aren't humans involved in this decision? The linked post literally quotes Google saying:
"Hi Sun Knudsen, Our team has reviewed your content, and, unfortunately, we think it violates our harmful and dangerous policy."
Google has always had humans in the loop. Not for every decision but certainly the more impactful ones that affect user generated content. When I worked there, there were even humans reviewing requests for account un-suspensions (yes, really! they didn't do a great job though, needless to say).
Also, it's quite hard to understand what sort of algorithm could result in this sort of suspension. What training data would give it hundreds of thousands of examples of infosec discussions labelled as harmful? But it's very easy to see how a large army of low paid humans given vague advice like "don't allow harmful or dangerous content" could decide that linking to hacking information is harmful.
The problem here is not AI. The problem here is Google allowing vague and low quality thinking amongst the ranks of its management, who have decided to give in to political activists who are determined to describe anything they're ideologically opposed to as "harmful", "dangerous", "unsafe" etc. When management give in to those radicals and tell their human and AI moderators to suspend "harmful" content, they inevitably end up capturing all kinds of stuff that isn't political in nature, simply due to the doublespeak nature of the original rules.
"Our team" is quite often a lie. Why do you assume that a human was involved? Especially when it comes to Google, who notoriously skimps on human support staff.
Because I worked there and back then, when Google claimed content had been manually reviewed, it had been. It might have been by a low wage worker in India or Brazil and they might have spent 20 seconds on it, but there was a person in the loop.
That's why I turned this around. Why do you assume they're lying, without evidence?
Why wouldn't we assume they are lying? Interacting with their dispute process doesn't have any human qualities, so what would make us think that humans are involved?
They are a massive monopoly and just the other day it was reported that they manipulated their own ad markets for their own profit. They constantly inaccurately accuse people of malicious intent and copyright infringement, often massively hindering peoples' careers and generally providing zero recourse. To not assume lying and at best self-serving intent given Alphabet/Google's position in the world would be foolish.
On top of that, is it really that different to use inaccurate AI or to use humans who are intentionally not given the time or tools to do human-quality work?
The problem is obviously with the overall quality of moderation, and the level of power exerted by platforms over users. Whether a non-skilled human reviewer or an AI is handling the broken process hardly matters given the results.
Choosing to die on that hill of technicality seems like a silly choice. People say "human" but they really mean "a person who diligently considered the content, probably watched the video more than once, understood what they were watching, and possibly has a chance to debate with another human". Or, you know, a reasonable definition for "human in the loop".
It seems you're defining "human in the loop" as "Google makes what I consider to be the correct decision in this case." I believe the point that thu2111 is trying to make is that there _are_ humans in the loop, and having them is no insurance against unilateral decisions that you disagree with or even just plain stupid decisions.
The point is that involving more humans in the moderation decisions at YouTube does not guarantee, nor is there any evidence to suggest, that it would result in better moderation.
Somebody with the technical and industry knowledge to know that talking about security vulnerabilities like this shouldn't be removed can probably find a better job than YouTube video moderation.
I would accept that Google will sometimes make the wrong decision, but it's unacceptable that the care given toward making that decision stops at a low-wage worker in another country spending 20 seconds on the decision.
There's a very wide chasm between "an AI decided" or "someone spent 20 seconds on it", and "we hired an expert in every possible field and they thoroughly research the facts at hand before making a decision".
The latter is obviously not feasible, but there are many options in between that will give markedly better outcomes than the status quo while not making YT moderation economically infeasible.
There are plenty of cases where I disagree with the outcome of companies I interact with. When I spend 30 minutes chatting with an agent via text with Amazon or airlines, I might leave the conversation disappointed, but I never doubt their conscientiousness. I highly doubt United beat me at the Turing Test.
> What's with the assumption here that there aren't humans involved in this decision? The linked post literally quotes Google saying [...]
The post literally quotes Google as saying that for the _last_ time this happened. Then the post literally states they appealed and never heard back. There is no mention of a human for this instance, though I suspect that once the uploader reaches out (were it not for the popularity this is gaining), one could expect a similar dismissal. That there are barely humans involved, and in a poor way when they are, doesn't mean there are never humans involved, true. But for many, the bare minimum of assistance is indistinguishable from no assistance.
Appealing and not hearing back doesn't mean the original decision was entirely automated, it means they ignore appeals. Plenty of people would love nothing more than to argue with Google employees all day about every decision the firm makes - just think about webmasters and SEO alone - so the fact that YouTube actually has such a process is already kind of remarkable.
You seem to be arguing that because their dispute process everywhere else is reprehensible, a slightly-less-reprehensible process should be celebrated.
It shouldn't, and we need to hold Google and others to a much higher standard here. I'm not sure how we do that, but the status quo is garbage.
It's easy in theory: you create a competitor that differentiates through the quality of its content-provider service and how much money it spends on dispute resolution.
But I am skeptical it would be successful. It might be, but it would probably require a much larger percentage take of video revenues, which would discourage most YT content creators from going there because in fact, relative to overall YT volumes, these suspensions are rare. That's why we're discussing individual cases instead of studies with percentage figures in them.
It feels a bit weird to be playing the devil's advocate in this case because frankly the state of modern Google saddens me. It seems not much like the firm we built back in the day. They obviously shouldn't be doing this kind of thing and in the past they didn't, which is why I place the finger of blame not on AI (which may not even be at fault here, and is a general tool anyway) but rather on bad ideologies and ideas permeating the management classes in America.
In this case I blame favouritism and feminism. YouTube content moderation is run by Wojcicki, a woman who got into Google by renting Larry and Sergey a garage, and by being the sister of Sergey's wife. She's not an engineer. Thanks to the prevailing ideology an ambitious feminist woman in an executive position at a west coast tech firm is utterly untouchable, and must be given whatever she wants. So despite being the exec in charge of Google Video when it was utterly beaten by YouTube, she has ended up running YouTube. A clearer case of failing upwards can't be imagined. It's her personality that dictates the post-2016 swing towards promoting woke "authoritative content" on YT and the general culture of unaccountability for women that prevents anyone else pushing back, even as the bad PR racks up.
Knowledge is empowering, and one can also argue that a lack of it is harmful. Not being able to link to resources that help you understand how something works can mean that, in the big picture, only criminals will gain access to that knowledge.
Louis Rossman had his YouTube account temporarily suspended for posting a video entitled “An angry clinton” which was a video of his pet cat Clinton meowing. Human moderation is needed in cases like this. There is obviously no analysis of the video content, the algorithm looks for keywords in the video title. The keywords used by the algorithm are centered around protecting Democratic politicians from criticism. There is no advanced AI going on, the algorithm is simple pattern matching.
> The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube?
Or they could just stop doing that and go back to how it used to be. Let people upload videos and review them only after it was flagged. Novel idea, I know.
That's its own thing and it's just as absurd as this, but this person got a strike for "violating harmful and dangerous policy".
I don't upload any videos, but if their AI for reviewing videos is as bad as their AI for reviewing comments then both should be completely scrapped. Because it's not regex matching or whatnot, but AI, it's impossible to predict which comment or livechat message will get through. In practice it's virtually random. And the worst part is that they don't even bother to mention that your comment was 'problematic' to them; you have to have two windows open, one for posting and one in private mode, to verify which of your comments actually showed up.
Just as a disclaimer, I haven't tried it in a while, so maybe it's somewhat better now, I don't know.
Something I never quite understood... couldn't Google have just said "no" and required them to submit DMCA takedowns? The law doesn't require the level of broken proactivity that Youtube has.
They absolutely could have, and have been entirely within the rules of the DMCA. My best guess is that the ContentID system is either (a) a bribe to get media companies to partner with youtube's paid streaming, or (b) a bribe to convince media companies that it isn't worth lobbying yet again for stronger copyright monopolies.
They don't care about your average Joe user. Just look at their search results and see what comes up. What they want is to transfer the 'legacy media' to their platforms.
“Congress shall have power… to promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”
As I read it, the law should be formed to maximize progress, not to maximize value of an idea.
But, a large amorphous group of citizens will never overcome a small, highly-concentrated self interested group. I doubt you'll ever see the copyright-opening version of the NRA or SPCA.
Groups like NRA are as powerful as they are, despite their relatively small membership numbers, because said membership is really dedicated to one particular issue, to the point of coordinated single-issue voting in the primaries. So if you can find enough people who are willing to vote solely on copyright in the primaries, it might actually work.
What if... instead of YouTube you'd have several interest-based sites which could actually handle their moderation and workload, instead of one giant uncaring behemoth?
Who pays for the moderators? You still need a lot of manpower if you don't want algorithms, regardless of whether you have one Youtube or a thousand SpecialInterestTube.
Tech is very special in a way where they can get away with claiming that quite a lot is not possible at their scale. Imagine this for other industries.
"We cannot do quality control before sending products out. We make too much stuff."
"We don't check all tickets for the venue. Too many people."
"Cleaning the entire building is not possible. It's too much space."
All the examples you provide have to do with tangible, physical things. Tech is in some sense special because we're dealing with intangibles. It costs a bad actor next to nothing to upload the same video with harmful content to YouTube a thousand times, a million times even. And YouTube has to deal with all of this.
That's why they still use physical ballots in most elections: as soon as you're dealing with physical things the costs for a bad actor to do bad things at scale balloon up.
But Tech also enjoys the benefits of this. You make a product once and sell it thousands of times, or allow thousands/millions of people access without much overhead. I'm simplifying here of course, but compared to e.g. a machine that has to be produced anew for each customer, I think we are in a very good situation. And with that advantage comes a price tech should be forced to pay.
But a bad actor can only do so much if manual reviews are involved. If they’re a completely new account because their previous accounts have been banned, accessing on a new device because their previous Alphabet-surveilled fingerprint has been banned, or on a VPN because their IP has been banned, YouTube could just implement manual review, which would slow such a bad actor down. And if the intent is to clog up the manual review process with copies of a single illegal video, Alphabet already has technology that automatically identifies specific videos or images and removes them, usually for purposes of preventing the spread of illegal depictions of children.
If you change the MAC address on your cable modem you can get tons of different IPs. Same thing with cellular networks. With raspberryPis so cheap, fingerprints don't work that well for all devices.
There are relatively few people capable of doing this compared to the population of the world that uses YouTube, and there’s even fewer people who, even with capacity, will be doing this just to upload illegal content into YouTube. I think this sizes down the problem of manual review quite easily.
Very much so. Tech companies have a way of confusing "this is difficult" with "this is impossible," or "this is impossible" with "this isn't free," or "this is impossible" with "we don't want to."
And those things are paid for. It would be perfectly reasonable if you had to pay per minute for uploading video and people had to pay for a subscription to watch it. That's just not what is expected for a mass platform today.
Vimeo has an “uploader pays” system (of sorts), and, while it prevents garbage on the platform, it stops creators from wanting to use it when YouTube is free.
Yes. It's of course a perfectly valid (and widely-used) model to pay for hosting. But, by and large, only "serious" users who probably otherwise have a business model will do so. And there will inevitably be far less content on the platform.
On the one hand, I think it's sort of unfortunate that so much of the Web has come to be about "free" access (though paying for access with attention and time) but that does come from the perspective of someone who would be able and happy to pay with money instead.
It's expected because the expectations are set by the existing platforms. And the reason why Google specifically is actively promoting those expectations, is that it makes ads the only viable business model for content creators; and Google's primary source of income is their ad services, so it maintains their dominant market position.
It's argued that in some cities' public transport systems, the price of the ticket doesn't cover the cost of its own infrastructure: the machines to buy tickets, the ones to validate them, the salaries of the agents checking them, etc. It can be more cost-effective to drop the concept of a ticket and make it a free public service.
I agree with all that, but some central governments such as Turkey go to great lengths to make public transport a paid service. The best public transport is the free one indeed.
Google had a revenue of $180B in 2020, and their EBITDA was $55B. I'm no accountant but that sounds like an absolutely insane profit margin. 8-12% would seem much more palatable, so that means they have over $25B of excess profits they could spend on 500,000 middle-class incomes for content moderators.
YouTube represents 10% of Google's income BTW, so the other 90% of middle class incomes could do moderation for their other services.
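To put rough numbers on that, here's a back-of-envelope sketch using only the figures quoted above; the $50k annual cost per moderator isn't from the thread, it's just the implied $25B / 500,000:

```python
# Back-of-envelope check of the figures quoted above (2020 numbers from the comment).
revenue = 180e9          # Google 2020 revenue, per the comment
ebitda = 55e9            # Google 2020 EBITDA, per the comment
target_margin = 0.12     # the "more palatable" 8-12% margin, taking the high end

excess = ebitda - revenue * target_margin   # profit above the target margin
income_per_moderator = 50_000               # assumed: $25B / 500,000 incomes

print(f"excess profit: ${excess / 1e9:.1f}B")                                    # ~$33B
print(f"moderator incomes it could fund: {excess / income_per_moderator:,.0f}")  # ~670,000
```

Even at the high end of that "palatable" margin, the excess comfortably covers the 500,000 incomes mentioned above.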
I usually frame it this way: Mazda Motor Corporation, a company an order of magnitude smaller, can provide me with several dealerships in my <2 million population country, where there are physical people who will fix my problem.
And Google, the mighty megacorporation, can't even clear a basic phone support bar, much less a local user support office in each region? Something normal for other corporations?
Mazda earns several hundred to several thousand dollars of profit from the sale of a car to you. Google earns an order of magnitude or two less per user.
They also had to source parts, manufacture it, transport it via boat and train, then pay someone to show me through all the buttons, clean it and then offer service at least once every year.
Besides that they also provide 24 hour support for breakdowns and plenty of other services.
And at best, they made 10% (!!) gross margin on my car, which comes to somewhere around $2,500. Meanwhile Google has 45-60% (!!) profit margins while not giving a crap.
Mazda spends several hours in pre- and after-sales on each car. Google could automate 99.99% of moderation, and spend literal seconds on 90% of all further moderation per user. An order of magnitude or three less human attention than Mazda's sales pipeline spends on a customer would be worlds better than what Google does now.
I've seen YouTube channels with thousands of dollars of monthly revenue be closed over issues that would've taken a trained human less than 10 minutes to resolve. YouTubers with that kind of revenue are literally one in a million users or so.
Thousands of dollars are only possible with luxury cars.
VW, one of the two largest auto makers, makes around 300€ profit per vehicle sale (excluding luxury brands). For some models it's even less than 100€. You can calculate around 1% profit for a car sale, often less.
The actual money is in leasing and financing, which is why almost all automakers own a bank.
>>VW, one of the two largest auto makers, makes around 300€ profit per vehicle sale (excluding luxury brands).
Seeing as it's absolutely trivial to negotiate 10% discount on any new VW car, I struggle to see how that's true? Unless you mean that VW makes 300 euro and the rest is the profit of the dealership?
Even if that's true, VW makes an absolute mountain of money through financial services. Yes, they will make very little profit selling you a £12k Polo, but VW Financial Services will make £2-3k just providing you with a financial agreement for it.
Without disagreeing with your point, I do think there's something to be said for the more specialized sites the Internet used to be built around. Places that foster a sense of community tend to wind up with users who are happy to pitch in when something doesn't look right (or get overrun by spammers if the community isn't able to deal with the issues). I am happy to pull weeds in a community garden in a way I will not at some factory farm. This approach isn't a utopia; people are people and everything is political, but at smaller scales, saying "Well, just start your own [niche] video site" isn't snark, it's a realistic thing, and perhaps both sites contribute to the marketplace of ideas. YouTube is the marketplace of the unthinking.
I think it's much less work on SpecialInterestTube. On YouTube there's a huge amount of decisions and ideas the moderator has to take into account. The other one can filter a large amount of spam with "the first 5 seconds make it clear it's not SpecialInterest".
For example comparing to HN, I can go on "New" and see "Places to Spray a Fragrance on for a Long Lasting Effect-Ajulu's Thoughts" - I know it's immediately flaggable, even if it could be a valid post on another website. (and require a full review there to check it doesn't break some specific rules)
Isn't that basically how reddit operates? And last I heard they have their native video hosting as well. Sounds like this could work, but doesn't quite at the moment. Is YouTube handling content discovery better? Network effects maybe?
Reddit is interest-based but gets free labor from most subreddit moderators that remove bad content for them. Reddit is also much less lucrative of a content platform given you have limited ways to make money (usually limited to linking to your patreon), so the content is constrained to whatever people do with no expectation of money.
>What if... instead of YouTube you'd have several interest-based sites which could actually handle their moderation and workload, instead of one giant uncaring behemoth?
What if... the average viewer was willing to pay for video content instead of requiring a free, unprofitable service run as a "Free add-on" to an Ad Empire?
YouTube may finally have become profitable in the past few years, but for the first 90% of its existence, the entire rise to power, it was unprofitable and run at a loss. There aren't other major providers because no one else wants to lose that much money.
> What if... the average viewer was willing to pay for video content
That would solve all of our problems, wouldn't it? Too bad there's likely always going to be someone willing to offer the same service for free + ads and/or selling users data. Unless people start valuing their privacy/data much more than they do at the moment (or regulation comes in), it's just wishful thinking.
Have you forgotten about Netflix and the innumerable streaming services out there, as every studio and their dog's mother wants to claim 100% of the pie? The money isn't an issue - it is a Freudian slip that either you want it to be like cable or are blindly repeating the words of someone who does.
If you don't like youtube's free service you can get your money back.
That may have worked in the pre-iPhone Internet, where it took effort to become a part of communities, and everyone implicitly understood that. Now, it's a rock vs. a hard place: the masses want centralized platforms, but those centralized platforms are proving cancerous to society.
The masses don’t want centralized platforms or decentralized platforms. The masses don’t care how things work. What the masses want is an easy to use, fast platform. Centralized systems have been the best at providing that because they are easier to engineer and because there is an economic model via ads and surveillance.
I’d be shocked if the best and the brightest that Alphabet can buy can’t figure out a reputation system, manually-reviewed with AI assistance, or some other way to approve content in a convenient manner.
Don’t forget also that Alphabet is swimming in money. They can certainly afford the manpower to make it relatively seamless for the user. And I really doubt users can’t be trained to associate a few hours’ wait of a completely new, reputationless user, or a user who is using an unknown device or VPN, with a better quality of site experience because the videos are screened.
This! They could have a page entirely of "reported videos", flagged by a person or AI, and the severity would rank how high on this page the content is.
Things like the timestamp of where in the video the report occurred might help for scanning content, e.g. "50 people reported this 20 minute video in the 3-5 minute mark" or a similar average.
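As a rough illustration of how such a page could be ranked, here is a minimal sketch; the report fields, severity scale, and two-minute bucketing are hypothetical choices for illustration, not anything YouTube exposes:

```python
from collections import defaultdict

# A minimal sketch of the "reported videos" page idea: rank videos by the total
# severity of their reports, and surface the 2-minute span where reports cluster
# so a reviewer can jump straight to it.
def build_review_queue(reports, bucket_seconds=120):
    """reports: iterable of (video_id, severity 1-5, offset_seconds into the video)."""
    severity_total = defaultdict(int)
    bucket_counts = defaultdict(lambda: defaultdict(int))
    for video_id, severity, offset in reports:
        severity_total[video_id] += severity
        bucket_counts[video_id][offset // bucket_seconds] += 1

    queue = []
    for video_id, score in sorted(severity_total.items(), key=lambda kv: -kv[1]):
        hot = max(bucket_counts[video_id], key=bucket_counts[video_id].get)
        start = hot * bucket_seconds
        queue.append((video_id, score, f"{start // 60}-{(start + bucket_seconds) // 60} min"))
    return queue

# Example: one video with several reports clustered around the 4-6 minute mark.
print(build_review_queue([("abc", 3, 250), ("abc", 4, 260), ("abc", 2, 290), ("xyz", 1, 30)]))
# -> [('abc', 9, '4-6 min'), ('xyz', 1, '0-2 min')]
```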
>The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube? Would that work with the current society's attention span?
Why should they have to? YouTube makes BILLIONS of dollars, and they can afford to employ tens of thousands of people to be human moderators; they just don't want to because that would eat into profits. If the video has X views or the publisher has X followers, it should be reviewed within hours by a real human.
>Is YouTube as we know it actually feasible without AI-based moderation? Can it be a sustainable business with an army of human moderators and/or a drastically reduced scale as a result?
Nobody is saying there should be 0 AI moderation, just that there should be a human that reviews a clip if AI flags it and the end-user clicks a button saying "this shouldn't have been flagged".
The entire premise of Google is kind of broken, and both technical and non-technical people feel that way from what I've heard. The fact it's nearly impossible to get to a real human on anything, and if you do happen to get to a real human, half the time they aren't even empowered to undo whatever decision their AI has made... it's just not a good way to do business. There's a balance to technology and human interaction and Google (again IMO) has gone too far in the technology direction. Heck, humanity as a whole might need to find someplace in the middle if the studies about the current generation of 20-somethings struggling with stunted social interactions due to social media are to be believed.
There are often problems with Reddit moderators, but at least you know who they are (sort of) and can talk to them. It seems like there is a better relationship between moderators and a community when the moderators are part of the community, versus being an anonymous horde of interchangeable personnel.
(And of course there is Hacker News for another example, combining some automatic stuff with manual control.)
To get this at scale, though, it seems like you need a larger social network to be divided into many smaller ones in some logical way that’s understood by everyone involved.
It seems like the inherent hierarchy (authors versus subscribers) and subdivision of Substack is likely to work out well for them.
> Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube?
Well, presumably Google, being one of the world's wealthiest corporations, could hire and pay people a living wage to match the demand of users looking to upload videos to YouTube. It is in YouTube’s interest, as it only exists and makes money on this very user-generated content.
Yeah, it’s great that YouTube exists and can market itself to content creators on the audience it has captured, but maybe it would be a better system without the middleman, as anyone can upload a video anywhere on the internet, making it instantly viewable by billions.
This line of thinking leads to non-solutions. There are potentially many options beyond the dichotomy of crude AI automation and an army of human employees.
First, I think most of us would just like a relatively transparent method of escalating issues that will eventually get to a qualified human. Put some time delays and recaptchas to stop spammers from abusing it but if someone is willing to put in the time and effort to fight for a video, they should get a clear response as to why it was taken down and who made that determination.
Then there is the immense potential for crowdsourcing moderation. You can both get data by polling users (again with basic checks to prevent exploits) and promote certain users to a moderator role such that they can keep the community in which they participate reasonable. Especially for disinformation, how hard would it be to find volunteers to tell other people they're wrong on the internet?
And finally, if erroneous moderation has the potential to cause many problems, it is worth re-evaluating how much is actually necessary. For legal reasons there is some content that you really need an AI to quickly scan for, such as child porn, but in a lot of cases the harm in leaving up a video is insignificant.
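To make the crowdsourcing idea above a bit more concrete, here is a minimal sketch of one possible weighting scheme; the vote fields, weights, and thresholds are all assumptions for illustration:

```python
# A minimal sketch of crowdsourced moderation polling, where the "basic checks to
# prevent exploits" are modeled as weighting votes by account age and by how often
# a voter's past verdicts matched final outcomes.
def crowd_verdict(votes, min_weight=5.0):
    """votes: iterable of (keep: bool, account_age_days: int, agreement_rate: float)."""
    keep_w = remove_w = 0.0
    for keep, age_days, agreement in votes:
        weight = min(age_days / 365, 2.0) * agreement   # older, historically reliable voters count more
        if keep:
            keep_w += weight
        else:
            remove_w += weight
    if keep_w + remove_w < min_weight:
        return "escalate to staff"          # not enough trusted signal either way
    return "keep" if keep_w >= remove_w else "remove"

# Example: two established accounts vote keep, one throwaway votes remove;
# the total trusted weight is still too low, so the case goes to staff.
print(crowd_verdict([(True, 900, 0.9), (True, 400, 0.8), (False, 10, 0.5)]))  # -> "escalate to staff"
```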
There are a lot of sites on a similar scale to youtube that have substantially better (even if not necessarily perfect) content moderation despite working with substantially lower resources. If they can do it, so can Google.
> The thing is, we have to make sure we're OK with the consequences of going this way. Would people be happy to wait for days, maybe weeks, for their video to be approved for upload to YouTube?
People will eventually be fine with however the system works. If that's how it works, they assume that's how it has to be. Most users have no idea how this stuff functions. We're in it now because the expectation has been set for instant uploads, so there would be outrage, but eventually it becomes "the way things are." As an example, see any change users hate - users complain, TPTB either concede completely (rare) and revert, make minor concessions but change little, or apologize for the inconvenience and do nothing. Eventually nobody cares anymore and forgets they were angry.
Most of this stuff barely matters. Frankly it might be good if my nephews had to wait longer between any number of irritating youtube videos they binge and parrot.
> Is YouTube as we know it actually feasible without AI-based moderation? Can it be a sustainable business with an army of human moderators and/or a drastically reduced scale as a result?
I am curious about the cost of a process that establishes human reviews for conflict scenarios, i.e. situations like this. Does that still come out to planet-scale numbers?
I'd say it's infeasible without AI moderation based on their revenue model. They will never make enough money off intellectual content in order to pay for enough humans to moderate their system. Content for the lowest common denominator is not only easy as hell for an AI to moderate but it makes orders of magnitude more revenue than anything about technology, science, philosophy, etc.
The answer is to simply not rely on YouTube for anything besides lifestyle and self-help vlogs, marketing, instructions on how to put together your IKEA bookshelf, and of course cat videos. There are plenty of other video hosting sites that won't ruin your day as a creator because you accidentally used the word "doodie" or because the AI thinks your video on writing "hello world" in Rust is teaching people to hack computers.
Why not allow the majority of YTers with a legit posting history to post videos as they wish? You wouldn't have to wait days or weeks for these folks, they are trustworthy already. The vast majority of channels I care about wouldn't need moderation, they're hobbyists like woodworkers, musicians, etc. with dozens of posts already.
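A minimal sketch of what such a trust heuristic might look like, with entirely made-up thresholds and channel fields:

```python
from dataclasses import dataclass

# Hypothetical channel record; none of these fields or cutoffs come from YouTube.
@dataclass
class Channel:
    age_days: int
    published_videos: int
    upheld_strikes: int   # strikes that survived appeal

def can_skip_review_queue(ch: Channel) -> bool:
    """Let established, clean-history channels publish immediately; review only if flagged later."""
    return (
        ch.age_days >= 365              # around for at least a year
        and ch.published_videos >= 24   # dozens of uploads already
        and ch.upheld_strikes == 0      # no confirmed policy violations
    )

hobbyist = Channel(age_days=2000, published_videos=150, upheld_strikes=0)
print(can_skip_review_queue(hobbyist))  # True
```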
Anyone can upload a video to their website and make it viewable to all. The hosting is not the issue. You could use dropbox, box, etc for easy video hosting and sharing.
It's the discovery piece that cannot be replicated. If hosting happened elsewhere, moderation could move to automatic user/ML flagging with a manual approval process that follows.
Your question is: would a company based on youtube be hugely profitable without AI? Answer: probably not, but who cares other than the shareholders of Alphabet? I don't really think I should base my decisions about software based on how much the owners of the company are making.
Maybe we could have that if the uploader is responsible, not the platform.
Maybe we could have the uploader host their own content, and do away with hosting platforms and their gatekeeping altogether?
I can see little reason today that well-established technologies and individual hosting arrangements couldn't offer similar value to whatever YouTube or some other centralised hosting service offers.
We are talking about the World Wide Web. Linking to other people's content is literally the central feature of the real platform here. For a long time, we did just fine with community-operated facilities and everyone linking to other sources of interesting content to help discoverability. We still do, really, but now the links and other community engagement tend to be locked up in places like different YouTube tabs for each channel, not the sidebar of someone's personal website. Of course, these days we also have entire sites (HN among them) built around sharing links to content that might be interesting to like-minded people. So I don't buy the usual arguments about centralised content hosting being the only way interesting content can gain exposure.
Numerous individual content creators make demos or livestream games or present tutorials with decent-to-excellent presentation values. They often use specialist software to do it. I therefore don't buy arguments about the technical difficulty of self-hosting either. There could be hosting packages offered by ISPs or other hosting services on a common carrier basis if individuals didn't want to set up all the infrastructure themselves, and those who were good/popular enough to make it more of a regular income would still have the ability to do so but now on their own terms.
Even at today's prices, you could serve tens of thousands of videos directly for less than a lot of people are paying for their ISP or mobile/cell phone plan each month. And of course, we know lots about how to efficiently serve interesting content via P2P networks that could be a big amplifier for popular content producers who were happy to share without wanting to monetise or otherwise collect large numbers of viewers on their own site for some reason. The available capacity and prices are surely only going to improve as the technology improves. So that's cost taken care of in most or all cases too.
So, if individuals hosting their own content is technically and economically feasible, and if individual creators and community groups facilitating content discovery is a viable culture along with all the other interactions they tend to have anyway, I struggle to see why the likes of YouTube are so valuable or irreplaceable in our society today. Obviously if YT disappeared overnight there would be a gap tomorrow, but we've been adapting and filling gaps and finding better ways to do things on the Internet since some time in the last millennium, and I see no reason to think we wouldn't do so again.
...if individual creators and community groups facilitating content discovery is a viable culture...
That's a pretty big if, and from my limited perspective, the most important problem content creators have and the most useful service Youtube provides.
I think it's mainly about convenience.
It's like with businesses starting their stores on Shopify: "hey big corp, please handle mundane/technical stuff for me and just let me sell stuff/post content".
One problem is that there is a single process for all. Certain types of journalism and research should be able to ask for pre-approval of videos without risking a ban. The AI can still deal with the bulk of those requests.
I think it would be great. There would be huge incentives to find other solutions, and a lot of what goes into YouTube would find other outlets. Especially since we've already tasted what's possible. It could be the end of the YouTube monopoly.
The answer to all your questions is no, which is precisely the point the author is trying to make.
If we made the case for operating "at human scale", YouTube would be amongst the first casualties.
To your last point:
> Anyone can upload a video and make it viewable by billions, instantly. I think we shouldn't forget how amazing this actually is.
By the same reasoning, COVID-19 is similarly amazing.
Opinions move on: YouTube is 15 years old now. It's a practical monopoly, being the world's 2nd most visited website. I won't deny its utility, but it's a gross centralisation whose modus operandi presents significant questions. Chiefly: is it right that it has for years circumvented nearly all regulation applied to other media, and should it remain largely unaccountable to any elected power?
So the challenge, it seems, is this: how do we preserve the good things that self-published video content has done for us, but prevent or limit the consequences and attendant injustice caused by a service so infeasibly large it's impossible to administer?
I don't think it could be done much better than YouTube does it at the scale YouTube does it, but I would insist they need to be plugged into existing national media regulators and bound by the same standards, flawed as they are.
> You'll end up with only the powerful being able to push content, you're taking the voice away from the masses.
I think the algorithms already do that. The channels with the most followers have the loudest voice.
> Try to gradually improve content moderation and live with the rare false-positives or false-negatives.
It's better than it was, and will likely improve, but it's still drowning in shills and scammers. I think this would be rooted out by a more community-centric model. Exactly how that looks, I don't know, but I think self-policing would play a part.
> the channels with the most followers have the loudest voice
Sure, but I don't think people are entitled to have their videos promoted for them. The fact that you can have your video hosted for free, and share it with anyone in the world, is already great enough. While on average louder voices get the most attention, plenty of smaller creators have managed to get important messages out.
> drowning in shills and scammers
That seems like hyperbole? Do you honestly feel this way browsing Youtube or Twitter? I'm not saying they don't exist but they're fairly rare in my experience.
> more community centric model
I agree, reddit's model works kinda well. Youtube has tried that in the past (though I guess they got rid of it?), and Twitter is experimenting with it now (Birdwatch). I also like the Facebook supreme court model for handling edge cases.
> >Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to choose what is right and wrong.
> Can't agree more.
Simply replaced a prior system in which humans cannot interact with humans anymore and certain humans get to choose what is right and wrong.
The key problem is the inability to provide human feedback more than the mechanism for making the initial decision itself (which is a different problem). Not sure how to achieve that at scale.
You need dictatorship for effective moderation. But you also want those dictatorships small, uniform, and interchangeable so that the "citizens" of those dictatorships can easily pick the one that suits them best. Those dictatorships link up (federate) with like-minded dictatorships to benefit from network effects of sharing their content among themselves.
If your content doesn't suit a certain dictatorship, it will suit another one.
PeerTube does federated video already. No, it's not as polished as YouTube. But it is the answer to human-level moderation at scale.
The web is already federated, YouTube just happens to be one of the most popular nodes. Fediverse systems like PeerTube are great, but they don't change anything; in a world where something like PeerTube is as popular as YouTube we'd see the exact same complaints about the most popular node.
> The web is already federated, YouTube just happens to be one of the most popular nodes.
The network layer of the web is federated, but the application layer usually isn't. That is, if you care just about who you send TCP traffic to, then yes, the internet alone is sufficient, but if you want to think about federation at the content level, the internet alone enables that but doesn't make it very easy. Something like PeerTube makes it easy. And you need to pull it up to the content/application level if you want to do things like implement content searching/filtering across a federated network of hosts.
> Fediverse systems like PeerTube are great, but they don't change anything
Disagree. Nobody can host their own YouTube server. That's the point of federated video hosting: (a) anyone can spin up their own host (or join a host they agree with if they don't have the tech skills) and (b) these hosts can network with like-minded hosts to build a YouTube-sized content library without any single entity owning the entire library. A world without PeerTube would not give typical users this option.
> in a world where something like PeerTube is as popular as YouTube we'd see the exact same complaints about the most popular node.
We are starting to see this problem a bit on Mastodon because people are piling into the most popular node and having their opinions about how it should run instead of understanding that the point is if they disagree with how a node is run, it's (relatively) easy to run one of their own (or bribe a technical friend into running one). But I'd argue this is a problem in the domain of user understanding/education rather than a problem built into the nature of Mastodon whereas centralized global-scale moderation is a problem baked into YouTube.
v4 is adding search and recommendations, but I suspect it will not be as good as YouTube's simply because YouTube is run by the global king of search and PeerTube is developed part-time by one developer and open source contributors and funded by grants and donations.
However, PeerTube's secret sauce is putting the hosting power into the hands of the users, so for content creators with their own audience already in place who get whacked by YouTube's moderation algorithm, this can outweigh the benefits of the search/recommendation algorithm.
And this isn't just controversial pundits. Even the Blender Foundation got their videos blocked through a perfect storm of poor UX/support which spurred them to host their videos on PeerTube: https://www.blender.org/media-exposure/youtube-blocks-blende...
There's always going to be some line people are unwilling to cross. For instance, one might say Nazi rhetoric should be tolerated for its historical value, but I know zero advocates for allowing child porn.
But how about adult porn? Acceptable to some, not to others (also depends on the context - even advocates of adult videos are generally not arguing that YouTube should host them). And there are shades of that.
In the end, no matter how mature and accepting you are, unless you allow literally everything, you will need moderators who are empowered to be the final authority on that decision, which is what I mean by "dictatorship".
Or you could be mature and realise that the reason everyone in your pub is a Nazi is that you never said “no” to the first Nazi sympathiser, after which all the non-Nazis left.
What is merely offensive to you is a persistent death threat to lots of other people.
The problem is that all of humanity's videos are going through a single organization. PeerTube may provide a solution to this but people have to actually use it.
I think this is a distraction from the real problem, which is that YouTube censors/demonetizes/deplatforms videos at all. The problem is very widespread, ranging from censorship of political content to firearm videos to COVID related content. Most recently YouTube banned a panel held by the Florida governor with medical experts from Stanford and Harvard (https://www.politico.com/states/florida/story/2021/04/12/des...).
This is a company in the business of shaping political opinion and manufacturing narratives, not providing a neutral platform. Unless that fundamental issue is addressed, symptoms (like bad AI flagging) will continue to persist.
Advertisers asked to not have their ads on certain content, because normal people complained to them when they perceived the advertisers to be supporting the content they didn't like.
So I actually think this viewpoint vastly overestimates the ability of human reviewers to come to the right conclusion. I can tell you that's not really the case: paid, trained human contractors for content moderation very commonly make far more problematic errors than "wpa2 cracking info is a bannable offense, but not in this case because it's academic research".
This is the kind of edge case that even after escalation to higher tier human contractors will often still get wrong. I don't really see how it's actually much less dystopian just to have humans in the loop with the same outcomes.
The algorithm can exist, but to run it at scale requires buildings and large teams and bank accounts and therefore the companies that use these algorithms are susceptible to regulation.
Otherwise it seems like an enormous waste of talent and resources playing cat and mouse, when we can just give them guidelines and fairly often they'll be followed.
Of course bad actors can exist, but that's not a reason to give up on social solutions to problems.
I'm of the complete opposite opinion. I look forward to the day when our machine overlords make all decisions. Humans are assholes with their own nefarious agendas. Machines are not. Of course if the day comes where machines ask for bribes then I'll change my opinion ;)
Except that machines aren't making some grand objective pronouncement, sent from on high and untainted by base human nature. Machines are programmed to carry out some particular human desire. They follow that single human desire, without any of the restraint or competition of other desires, no matter how deep that path may lead.
Machines are assholes with nefarious agendas given to them either intentionally or unintentionally.
Though I agree, I would phrase it as "before people with machines take over". The machines themselves are tools enacting the will of whoever controls them.
Bring on the disagreement. It was a few short years ago that the idea of this would have seemed a really, really bad idea. Just like autonomous weapons do now. And, hey, it doesn't even have to be us (Americans). I'm not going to name the potential baddie. There are many, including us. FOMO will drive so many terrible things in our future. Inch by inch, we march closer to the reality many say will never happen. FSM save us.
> >Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to choose what is right and wrong.
Youtube is in the same economic vice as Walmart. It's hard to be cheap and provide a human touch, especially as labor costs rise. Automation only takes you so far in a free economy.
Walmart actually does have customer service in stores.
Amazon also tries its best to provide customer service (want a return? here is a shipping label).
The problem with YouTube is that you are not YouTube's customer, so they don't care about you. Only advertisers are customers, and YouTube cares about them only.
I bet there is a big customer support department staffed with real people that serves only advertisers.
It's hard to argue that Walmart and Amazon are much more considerate of their "users" than Youtube. They are all trying to maximize margins, which are dependent on maintaining network effects and keeping costs as low as possible.
Don’t even care that it’s AI. It’s just that the AI-based products simply suck. Which is saying a lot considering all the money being thrown at it to solve simple problems.
>Using algorithms to flag content is totally fine… problem is when humans cannot interact with humans anymore and AI gets to choose what is right and wrong.
Can't agree more.