YouTube bans comments on all videos of children (bbc.co.uk)
610 points by _wmd 19 days ago | 530 comments

Chris Ulmer, who runs the popular YouTube channel Special Books for Special Kids, said that comments were removed from his channel despite no evidence of unacceptable comments, and YouTube told him that if he turned comments back on, he risked his channel being deleted. [EDIT: Turns out, this is not actually the case. See child comments.]


"Last night I realized all of the comments on SBSK's YouTube channel were disabled. I saw I could manually turn them back on so I did. Then I read a post by YT saying that by turning comments on I risk our channel being deleted. I love and respect YT but IDK what to do.

"The beauty of SBSK is the love and acceptance in the comment section. It shows families and individuals across the world that their [sic] are people who accept them. Many people I interview have been socially isolated. Comments can change their self perception."

So it sounds as if YouTube content creators are now in the unenviable position where they need to actively moderate the comments section for videos featuring children, and if they don't do so to YouTube's satisfaction, they could have their entire channel nuked. Even if you're pretty darn sure that your commenters will behave themselves, that doesn't sound like a good deal.

Seems like YouTube will need to come up with some sort of "trusted subscriber" designation, and allow content creators to permit comments only from those subscribers, so that any random bad actor can't swoop in and destroy a channel.
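The "trusted subscriber" idea could be sketched as a simple gate: only subscribers with enough tenure and a clean moderation record may comment. This is a toy illustration only — every name and threshold below is invented for the sake of argument, and YouTube exposes no such mechanism or API.

```python
# Hypothetical "trusted subscriber" comment gate. All names and
# thresholds are invented for illustration.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Subscriber:
    name: str
    subscribed_since: date
    flagged_comments: int = 0  # comments removed by moderation


def may_comment(sub: Subscriber, today: date,
                min_tenure_days: int = 90,
                max_flags: int = 0) -> bool:
    """Return True if the subscriber counts as 'trusted' under these rules."""
    tenure = today - sub.subscribed_since
    return (tenure >= timedelta(days=min_tenure_days)
            and sub.flagged_comments <= max_flags)


today = date(2019, 3, 1)
veteran = Subscriber("longtime_fan", date(2017, 5, 1))   # passes the gate
newcomer = Subscriber("drive_by", date(2019, 2, 25))     # too new to comment
```

The point of such a design is that a random bad actor can't swoop in on day one, so a channel owner's moderation burden shrinks to vetting long-standing community members.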

I might be in the minority here, but I enjoy the low-quality YouTube comments. Most people dismiss them as cancer, but the reality is that a lot of people think in the patterns that drive these comments.

I would rather be in touch with and exposed to this rather than try to pretend it doesn’t exist. It won’t ever go away, it will just be hidden.

I think that there's an element of validation and indoctrination that is a serious concern here. Beyond the overall lowering of discourse that comes from allowing garbage-quality trolling, and beyond bullying, aggressive behavior, stalking, and other things that are broadly considered unpleasant, this kind of behavior breeds more of this behavior.

Consider someone growing up on the internet. The more they're exposed to this sort of content, the more it will be normalized in their mind. The more they're exposed to this sort of content, the more likely susceptible people are to be radicalized by it, and to grow into the same sort of troll. Hiding or suppressing low-quality content creates a herd immunity effect. It prevents a shift of the Overton Window to where "generally allowable discourse" suddenly includes timestamping the most "salacious" parts of a child's video, or telling people to actually kill themselves, or sharing their purely racist, hate-filled viewpoints.

While you may rather be exposed to it because you have the strength and capability to view it as a curiosity and a sociological study, the impressionable among us maybe deserve to not be bombarded with garbage, and we may have a moral duty to at least make some effort to minimize indoctrination and radicalization on these platforms.

I think there is no evidence for the mechanisms you describe. The same text could have been written about books shortly after Gutenberg.

The kids will flee the garden eventually anyway and "allowable discourse" is quite subjective.

You will never create a child-proof YouTube. Those kids will eclipse your abilities and understanding at some point.

It would be better to create a separate platform.

Looks to me like you guys are talking past one another.

disillusioned is taking the position of The Internet. If we allow these things, things will just continue to get worse and worse. Standards exist for a reason, Gresham's Law, and the rest of it. (Including some wonderful pleas for civility and kindness that I completely agree with)

raxxorrax is taking the position of The Human Species. We have survived and evolved because we are a wild, aggressive, curious species. Whatever boundaries there are, we will push them as hard as we can. Life will find a way.

Both of you folks are correct. This is the crux of the problem with technologies like YouTube (or books, for that matter). If YouTube is a publisher, then it has an opinion, a political position, standards for what it thinks is civil, and so forth. That's great, but there's no freaking way in hell I'm going to agree to having a handful of companies decide that kind of thing for the entire planet. The idea alone is insane. I just read yesterday about people getting warnings from Twitter for things they posted five or six years ago that somebody in a dictatorship found offensive last week. It's ludicrous.

If, however, they're a public forum, then they should just shut up and stop looking at the things that appear on their site. The public forum -- books -- requires chaos, disorder, and evolutionary pressure. Can't fight mother nature. Life will evolve.

But they can't do that, can they? Because they're monetizing all of that content. So they have to keep close track of each little piece of everything.

Now they're stuck.

They want to have it both ways, and we feeble-minded folks watching this spectacle end up choosing one or the other. It's a sucker's game, and no matter which side we choose, it doesn't work. That's because the premise is broken. BigTech wants to be two things at the same time. Picking one of those things and arguing with folks who pick the other one just plays into their schtick and keeps the gravy train rolling along.

> Looks to me like you guys are talking past one another.

Every Internet discussion ever. I'm glad you pointed it out! The world needs more reasonable posters.

> I think there is no evidence of the mechanisms you describe.

Which part? If we are talking about this part:

>> I think that there's an element of validation and indoctrination that is a serious concern here.

Then I think there is evidence of that being true.

Take for example “incels”. They gather in online forums where they construct a worldview built entirely on misogyny, entitlement and hate. The ideas they spread among themselves are much worse than what most of those people would probably come up with on their own. In their echo chambers they validate these ideas to one another and make them seem acceptable.


Those horrible people still exist either way; they just don't say out loud what they think. You don't restrict people from thinking that way, only from talking openly about it. The no-platform approach was rather successful in Germany when it came to pushing the extreme right out of the public discourse and banning their parties. But that only works as long as they don't organize a platform of their own. Then you have a far-right party rising into parliament and entering as the second-largest party in some states.

People don't change; you just filter them from your view of reality. And it's no surprise that people who bothered to look knew rather well that they existed.

You are accusing an entire group of people of an action that you deem unacceptable. That is rather discriminatory of you.

No, I am literally talking about the group of “incels” that are specifically engaging in hate speech.

The people I am talking about here are outright promoting rape, violence and child abuse. Not indirectly. Not between the lines. Just straight up stating those kinds of things.

It seems fair enough to say that literal calls for murder are unacceptable.

don't we have decades of evidence from online communities showing that?

From usenet to reddit, many communities that tolerate abusive conduct degenerate, as the "bad behaviour" is normalized and becomes accepted.

The fact that you can't create a child-proof YT doesn't mean you shouldn't strive to keep negative behaviour at bay.

> don't we have decades of evidence from online communities showing that?

In my experience the community changes as a whole, but not the individual actors. Once a community tips, the member base changes. People displeased with the behavior leave and people attracted to it join.

And we would have gotten away with new humans in a new utopia if it weren't for those meddling kids.

As you are clearly blurring the lines between this specific issue and a more general one, I'm going to reference the latter and exclusively the latter. There's a wide gulf between pedophilia and appeals against the ever more amorphous 'hate speech.' So on that note, it's interesting that if you look at support for eugenics in the early-20th-century US, it was practically a who's who of academia: the Carnegie Institution, the Rockefeller Foundation, W.E.B. Du Bois, Harvard, Stanford, etc. [1] One might argue that such views were "purely racist, hate-filled viewpoints." The very reason freedom of speech was such a revolutionary concept is that authority figures, since time immemorial, have been able to propagate bad ideas and manipulate the general population by making false statements which could not be challenged.

In contemporary times a good example of this is Iraq. Our invasion was precipitated on fabricated evidence and appeals to authority of the sort 'x intelligence agencies have proven beyond any doubt that Iraq has or is pursuing nuclear weapons of mass destruction.' After the fact, when an individual [2] who had been tasked with investigating whether Iraq was trying to purchase uranium reported that it absolutely was not (the government would go on to claim that it was), a government official outed his wife as a covert CIA operative (which the Washington Post would go on to publish), not only potentially endangering her but terminating her career.

In the case of eugenics it wasn't so much a conspiracy as that authority and academia were simply collectively wrong, as has often occurred. Censorship benefits those in power and only those in power. Those in power choosing to inhibit censorship, as happened with the first amendment, was quite the revolution! The point of free speech is to ensure not only that with the good comes the bad, but also that with the bad comes the good. When you begin to tolerate censorship by one side, you very much risk that the side doing the censoring is the bad one. And in efforts to obtain only the good, you end up with only the bad.

In today's world, free speech and corporations are becoming a major new issue. An ever larger percentage of all human communication is digital, and digital communication is extremely monopolized. This means, for instance, that the US government could effectively circumvent the first amendment by simply pressuring a single company rather than passing a law against the dissemination of any given viewpoint. It also means that a very small handful of people could end up censoring or otherwise manipulating public discourse for billions. That is exactly what the first amendment sought to prevent; only the founding fathers could never have imagined a corporation (under which two people have majority control) having more power over speech than any government in the world.

[1] - https://en.wikipedia.org/wiki/Eugenics_in_the_United_States

[2] - https://en.wikipedia.org/wiki/Joseph_C._Wilson


IME it's difficult, as one or two parents, to outweigh hundreds of other voices.

This is one of the reasons I think anonymity on the Internet is a bad thing. Not that I am saying it should be banned, but it should not be the norm. Much of the toxicity simply would not happen, could be prosecuted, or could be filtered if a "real person" attribute was widely available.

Toxic comments can be just as common in non-anonymous forums and venues. Facebook produces a lot of toxicity, for example. Further, consider: the Internet is much less anonymous than twenty years ago, and that hasn't stopped the overall level of toxicity, imo. The type of person who spontaneously makes toxic comments will still make them when forced to use their real name (they'll just suffer more for it). The provocateur, who calculatedly elicits toxicity, is always going to be here too - no matter how many Russians Facebook filters.

One factor is that once one person begins attacking another, both using real names, both people have a hard time backing down, especially if they know each other in real life or are semi-public figures. For a lot of people, admitting that they are wrong is a huge hurdle - and these tend to be the people who engage in toxicity in the first place.

...and people have been jerks since there were people, I am not implying that people being unpleasant was created by anonymous commenting.

However there would be quite a bit of backpressure against certain kinds of Internet toxicity if people were less anonymous.

I think it's the other way round. Anonymity (or rather: the right to present different personas to different audiences and to give up a burned persona when you see fit) makes the internet a bearable place. It also allows people to make up their minds without standing in their own way, and to disregard hurtful comments as trolling (which they often are). Facebook painfully shows that real-name policies do not necessarily lead to more civil discussions; instead they facilitate ad hominems and expose vulnerable groups to hate everywhere, as aspects of their identity cannot be selectively hidden anymore. This even extends to their real lives, with their names being publicly known. Now speech needs to be controlled, because people lost control of their personas and need to fight everywhere, instead of just when they choose to.

Absolutely not.

Facebook comments aren't much better than YouTube's, especially non-English ones that Facebook doesn't bother moderating at all. With about 97% of social media market share in my country and a real-name policy, it should be a breeze to use. It's absolutely not.

As for the prosecution, only if your country decides to give a fuck. There's a case in which a soldier called for a journalist to be raped and killed in a public Facebook post using his real name. He didn't face any consequences despite the story being picked up and screenshots of his public Facebook status shared all over the news.

Death threats, hate crimes, separatist movements, fake news... Real name policy stops absolutely nothing.

You are not the first person to think this, and some even put it into practice [0] hoping it would frontload forum moderation, but couldn't follow through with it [1].

As another commenter pointed out, Facebook (and Facebook-based commenting) has plenty of trolls in public comment sections. Anonymity is not what makes people jerks. More real reasons why internet forums enable trolls include:

- the asynchronicity of negative feedback: our primitive brains don't get to closely associate the negative feedback with the moment we expressed something in a socially unacceptable manner

- the lower bandwidth of negative feedback: IRL involves awkward silences, dirty looks, snickering, people turning up their nose at you, telling their children not to be like you within earshot, etc. All of this is suppressed over the internet, at most summarised by a downvote (which some trolls feed off of as positive feedback, knowing they ticked someone off, but don't face immediate social consequence). The trolls don't get conditioned to avoid doing things that hurt other people or are socially unacceptable.

[0] https://kotaku.com/blizzards-real-name-forum-policy-has-fans... [1] https://kotaku.com/blizzard-scraps-plans-to-display-real-nam...

> IRL involves ...

I would add the most powerful (because most feared) form of feedback available in face-to-face communication: physical violence.

I intentionally didn't list that, because that's something else that's often cited as a reason why people aren't jerks in real life compared to the internet, but I don't think that's true in most civilized cultures.

If I'm a jerk in public, outside of racist/prejudiced violence or a mentally unstable antagonist, I'm not at serious risk of being physically attacked unless I start breaking laws or pose a notable threat to someone else. About 90% of the less edgy side of trolling (just being annoying or verbally mean, mostly) won't really draw physical violence in meatspace; you'll just be treated like a jerk or politely asked to shut up or leave by an authority figure. Yes, there's a risk of physical violence, but it's overplayed as a reason why people aren't trolls in real life as much as they are online. I think it's significantly superseded by the things I mentioned, and probably other things too, which stop people from being jerks far before physical violence needs to be brought to the scene.

you need only look as far back as the covington incident to see that 'real people' are hardly inhibited from vile and abusive behavior just as long as they believe themselves to be on the right side of the mob. this has been the case since time immemorial. in my admittedly anecdotal experience, fully anon communities are far more respectful & have a better standard of decorum than communities with high standards of verification, even and perhaps especially in dealing with the sorts of topics that tend to make more highly regulated forums collapse into flame wars. real name communities are prone to witch hunts, purity tests and all sorts of hysteria full of very real life consequences. anons just call you a retarded faggot.

Not sure why this was flag-killed. The Covington incident is a good example. A controversial event occurred, everyone from every 'side' was inflammatory, and no one maintained civil standards.

EDIT I guess the flagging indicates part of the problem. Society is so busy screaming at itself that the only thing reasonable people think they can do is check out of controversial topics.

EDIT Would any downvoters care to engage about why they disagree?

HN has many great qualities, but some of the most important conversations we need to have are too emotional and inherently divisive to be connected to any kind of social currency system. there are times when identity itself is simply too much of a barrier.

I've read this a few times and I don't quite follow what you mean...

the heavily moderated upvote/downvote system of prioritizing content is great when dealing with technical topics that aren't too emotional and don't challenge culturally established norms too heavily. however there are many aspects of life that are too messy, too heated & too shameful to be effectively dealt with in this format. these tend to turn into shadow topics that are better out of sight & out of mind, only discussed behind closed doors where the risk of vulnerability is limited, and even then they still carry potential repercussions. in 'real name only' networks there are far too many risks to ever be a real person at all, so we cultivate a brand that is the best & most superficial version of ourselves. we hide our pain, our doubts, our regrets. this extends to pseudonymous networks as well, and anywhere that a cumulative identity is present. very few people want to be recognized as assholes, and no one wants to be a pariah, a victim or a failure even if they really are. in a sense the only time you're really talking to anyone is when you've both stripped your personhood away and have nothing to lose.

It's just fun to trigger people by downvotes :)

Ironic you would say this on a low toxicity anonymous forum.

HN has some pretty toxic behavior, depending on the topic. it’s not as egregious as others due to moderation and flagging but definitely not low.

Toxicity is a non-issue compared to the huge disadvantage of losing anonymity. In fact, you're feeding the trolls by giving them hints of ethnicity, gender, etc.

A real name policy also wouldn't actually address the actual problem. Toxic behavior is possible because on the internet you can write angry comments to a person on the other side of the globe and then forget about that person tomorrow. If that person does come back tomorrow then it doesn't matter if they are called "hahaurmom" or "Jonathan Willis".

The word you want is "impunity" not "anonymity". Because awful behaviour persists in actively de-anonymized contexts (eg: Facebook) when there's a power imbalance that puts society (or a local subset of it) on the side of the troll and against the victim.

I get where you're coming from with the academic curiosity of seeing how people think.

That said, it seems you're assuming comments are simply a one-way output of different people's thought patterns, rather than a two-way process that has an effect on others. It's obvious it has a two-way effect - the point of reading and listening is to learn, and we can learn in ways that make us worse off.

The idea that we should have a super low-friction way to be exposed to the internal thought process of anybody who is motivated to post it is an idea worth challenging, and we're starting to wake up to just how negative its repercussions are. That level of reach without any filtering steps (such as community standards, social feedback loops, etc.) is fuel for extremism, harassment, and triggering/exacerbating mental illness. Let's not give everyone PTSD if we don't have to, because that's effectively what your argument boils down to.

It's kind of like deinstitutionalization. When deinstitutionalization happened, lots of chronically mentally ill people started wandering the streets behaving erratically. You run into these people in cities--I've encountered one who loudly narrates her paranoid delusions about the people around her. Sometimes late at night you see people committing vandalism or doing drugs.

I think it's a better idea to give people the tools to control where they direct their attention.

Good analogy, the internet is entirely deinstitutionalized, but even worse, because:

a) people are more likely to have gumption to be negative in the absence of social cues of another person in their presence (global effect, whether or not user is mentally ill)

b) those motivated and possessing time to post are more likely to be suffering from issues (where those not mentally ill may just not engage because they have other things going on in life)

c) unlike the physical space where filters exist, you're often exposed to these people when navigating to entirely innocuous content (e.g. a kids video).

d) way way easier to find people with same or adjacent point of view on internet, reinforcing beliefs and potentially driving person further to extremes

Problem needs to be addressed from multiple fronts - as you say, more control for end users to tweak their experience (top down), alongside better platform-level filtering (bottom up), along with all the helpful designs therein like good defaults.

The open and free exchange of ideas isn't the same as deinstitutionalizing criminally violent people and self-harming drug users. Not in the slightest. The people "harmed" by Internet comments (libel aside) simply have no recourse in U.S. law for their problem with it.

As a staunch supporter of free speech, being exposed to your internal thought process had some negative repercussions for me. None of my fellow pro-free-speech HN users were able to filter you, nor even were they able to socially pressure you, and as result I read an extremist opinion. I think it's important that communities control anti-free-speech comments, otherwise they will descend into extremism.

You haven't refuted my argument that super low-friction reach is valuable enough to preserve.

I have pretty decent karma on HN, and there are much better filtering mechanisms in this community to remove bad actors and posts than there are on YouTube. That's the context of my post: while hidden, it is nonetheless present in this community. There is no community for many YouTube comments - it's drive-by comments that get free reach.

What's so bad about being exposed to the public id that you'd be willing to lose even a sliver of free speech to prevent it? You're already being exposed to the public id every moment, awake or asleep, because part of it is you.

> What's so bad about being exposed to the public id

I dunno, wake up and look outside.

Indeed, it's important for people to not think so academically about viewpoints all the time. There are ideals, and then there is the real world. It's fine to have ideals about things like absolute free speech in all domains, but it must be balanced against the constraints, rational and irrational, of society at large which never constrains its behaviours to anything resembling the ideal.

It's a classic question of the world not being black and white. Either people accept this greyness and work within it to produce the most desirable result possible, or fruitlessly wonder why nobody else is able to see what is apparently so patently visible to their eyes only.

and it is exactly the shifting of the Overton window into that Machiavellian realpolitik approach that has had the empire invading and performing coups since at least WW2, causing millions of lives lost and reverberating effects for generations to come. The same goes for any number of major constitutional issues such as the justice system, regulatory capture, etc.

We need more idealism, not less. Tempered with pragmatism is one thing, but the problem is then one is encouraged to temper that idealism a bit more, and a bit more, and one more time, until the original ideal is more a memory than anything.

For me this mostly stems from a lack of knowledge or respect for history.

I've downvoted this because it's clearly taking the piss, recontextualising a comment for gainsaying. No part of the content reinforces or advances the main thrust of your argument that I can see.

You can't exactly challenge ideas if the challenger ideas need to be approved or in an approved venue.

It's not that long ago that the Catholic church had people executed for heresy.

I didn't say that ideas need to be 'approved'. I challenged the idea that the ability to reach a worldwide audience should be free and available to anybody, regardless of content therein.

Filtering mechanisms exist in all forums, online and offline. Some suck, like certain sub-communities in youtube. The discussion should be about what filters we employ to best balance the positive value of conversation against the negative.

> I didn't say that ideas need to be 'approved'. I challenged the idea that the ability to reach a worldwide audience should be free and available to anybody, regardless of content therein.

If that "reach" should not be available to "anybody, regardless of content therein", then the logical and unavoidable conclusion is that some person or organization has to decide which people and content are allowed access. You can hardly dodge the logical implications of your own statement.

Do you have a specific point? Or just picking a nit?

There are vastly different ways in which the flow of communication between people or groups of people can be impacted, and the differences between these ways can be very important. Because of this, sweeping generalizations don't seem very helpful.

This is absolutely not the case.

Reach is about broadcast. There are plenty of non-authoritarian ways to reduce reach. The most obvious one is to make people pay for it (like they do on Facebook). Community moderation is another, and trust metrics is yet another.

All have tradeoffs, but - since you are nitpicking here - it isn't a logical and unavoidable conclusion that this has to be a decision about allowed access.

Yes, let's institute zones of allowed speech.

While we're at it, if you'd like to protest the White House, let's make it so that you're allowed to do so but your protest must fit within a 9' by 9' space that must be a minimum of 500 feet from 1600 Pennsylvania Avenue. No carried signs may extend above 8' from the floor and your total volume must remain below 90dB. Make sure you optimize your protest space to have the loudest (but not too loud), boldest (but not too bold) voices, otherwise it will diminish your potential impact.

Lest you go to an argument about public vs private property, in our society, social media _is_ the public forum... Just as when these laws were written, public spaces were.

To use reductio ad absurdum on your argument for a minute:

Everyone in the world has an earpiece which they must wear at all times. Everyone has a microphone which automatically broadcasts everything they say to everyone's earpiece. You can't take your earpiece off, as that would reduce someone else's ability to speak freely to you.

There are downsides to allowing everyone a global platform.

This assumes that giving someone free speech forces you to listen to it. Do you have an example of where this happens to you?

No, not forced, but then I did say it was an absurd example. Slightly different qualitatively from my hypothetical: there are plenty of examples of unwanted speech appearing in places, and while the "owners" of those places are not forced to actually read it, they are unable to stop it. For example: https://www.theguardian.com/technology/2019/feb/27/facebook-...

You can still set your page to private, if I am not mistaken. And you are also not forced to use Facebook in the first place. I understand that it's unpleasant and annoying in public, but the assumption that someone else's free speech in a public place affects you personally, whether you want it or not, doesn't seem necessarily true to me.

While not so well worded, I do agree with the broad rationale that, in order to restrict free speech in public, some sort of decision has to be made about what is allowed and what is not. Like it is with everything from laws to social norms. The methods of decision-making are broad, from authoritarian, to economic, to technical, to consensus-based. But they are all methods of decision-making. Asking who, or rather how, that decision is made is a sensible concern.

Two people are needed for a conversation in a public place. You have the ability to leave. You are not forced to listen to anyone. In a private context this is another topic altogether. You can kick someone out of your Facebook group, unfriend them, or block them. If they then show up at your front door, we are talking about harassment, not free speech. Which is why the loophole of anti-abortion protestors being allowed in the streets in front of clinics is such a horrible situation.

I know it's annoying to be confronted with stuff you don't like when your private life takes place in a lot of public places, but that is your choice. You don't have to have a public Facebook profile, and you don't have to have a public YouTube channel. And Facebook and YouTube as a whole don't have to be public places if they decide otherwise.

> I know it's annoying to be confronted with stuff you don't like when your private life takes place in a lot of public places, but that is your choice. You don't have to have a public Facebook profile, and you don't have to have a public YouTube channel. And Facebook and YouTube as a whole don't have to be public places if they decide otherwise.

Thank you for understanding and better expressing my core argument.

The context is comments on videos of children.

In the context of that, I'm fine with youtube's rule, but we were already off on a tangent in the meta discussion and that is clearly not the context of this particular discussion.

Go up two levels if you don't believe me.

We actually already have laws here in the US about signing up people under the age of 13 for online services and have since the 90s. They're very lax and not enforced.

A very large measure that YouTube could take is to hide all comments if you aren't signed in, and strictly enforce the over-13 rule in the US (or whatever other rules elsewhere). But they don't do that.

You seem to be missing the point.

This isn't about children making comments. It's about comments by adults about children.

My point is that if the existing laws were followed, children shouldn't ever be seeing those comments unless their parents are insane and allowed them to. Don't give your young children a YouTube account. YouTube can and should add blocking _ALL_ comments to the parental controls.

If all of these services weren't being offered for completely free, you could require a credit card purchase to make an account and solve this problem in an afternoon over a cup of tea. That's how simple it is. The problem is that the economics of these services allows and encourages children into these spaces without restriction.

Part of being an adult is figuring out how to deal with a world where some of your neighbors are horrible monsters. You don't just drive them into dark corners where you pretend they don't exist.

If pedophiles want to self-identify, that's great, because it gives us the option of getting them the treatment that they need AND of keeping our children far away from them.

It's not the risk to children reading them; it's the promotion of pedophilia that is the problem.

> You don't just drive them into dark corners where you pretend they don't exist.

You don't pretend they don't exist, but you don't let them normalize their conduct either. And that is absolutely what is happening with the previous approach.

You don't let them normalize their conduct. You loudly tell them how intolerable they are to society. You watch them at all times and make sure they don't harm anyone.

As someone who was on the receiving end of attention from pedophiles as a child and who has spoken to many of their victims, you do not want these people isolated in their own communities. You want to know exactly who they are.

Letting them speak, which at least where I live is still their right, is NOT tolerance.

This isn't about shitposting on YouTube videos, this is about pedos time-stamping videos of kids in sexualized positions, mostly in response to this video [1] (with a warning that it's pretty uncomfortable and gross).

1. https://www.youtube.com/watch?v=O13G5A5w5P0

That truly was uncomfortable to watch; I did not finish it. It's amazing how the human mind works that such a system came about, but the recommendation algorithm built into YouTube itself also needs to be questioned.

So what solution is there? Preventing any content with children under a certain age? Blocking any comment that features a link? Can YouTube detect use of a VPN? Once you leave certain countries, I am not sure it can be policed.

My best idea is containerized video platforms: a parent can pay for the service or host it themselves, and you control who is on it. The place where this breaks down, though, is that not everyone has friends, and the world would no longer be the audience. As well, video hosting is expensive; I'm not sure it is affordable in most countries.

I don't think there is a great solution yet.

Are direct-to-DVD types of content dead and buried in developed countries?

Lots of middle-class families in developing countries still buy (pirated) DVDs of Barney & Friends and similar family friendly shows for playback on their home theatre systems, for those moments when kids announce they are bored.

Ahh, I see. Thanks for explaining.

>It won’t ever go away, it will just be hidden.

Deplatforming is an effective strategy to reduce the spread of malicious content for several reasons; in the case of YouTube, because it reduces the incentive to monetize and spread harmful content.

To say that you'd rather be exposed to pedophilia in YouTube comments than remove them is a little bit like saying you'd like to stay in contact with the Marburg virus.

Ideally, I'd like a vaccine for the Marburg virus so being exposed isn't a big deal. I think the analogy is obvious enough from there.

Mirioron 19 days ago [flagged]

I agree! Since Hacker News could also potentially be such a way to disseminate bad information we should remove all comments here too.

See the problem? That's basically what you're justifying.

This comment breaks the site guidelines—in particular this one:

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

You also got into some pretty flamey discussions elsewhere, e.g. the religious flamebait in https://news.ycombinator.com/item?id=19275923—not cool on Hacker News.

If you'd please review https://news.ycombinator.com/newsguidelines.html and refrain from doing those things here, we'd be grateful.

Hi dang, I’m given to understand that you’re one of the moderators here. I’ve tried to contact you three times in three weeks through email, and this is my third time trying to get your attention in the comments, so far without a response. Is there a better way to contact you? I don’t feel good about inserting my comment in an unrelated post like this, but I’m unsure what else to do.

I've had good results sending messages to mailbox named "hn" in the "ycombinator" domain, in "com" TLD.

I’ve tried that for three weeks without a response.

I would assume your email address is mistakenly flagged by an anti-spam system, and proceed accordingly.

This is me proceeding accordingly, short of making a bunch of email accounts on different services, which seems a bit shady.

I understand your frustration, but please stop effectively spamming HN with these requests.

Ok, what should I do then?

There are logical stopping points on the way from "all comments should be allowed" to "no comments should be allowed."

One such stopping point could be "no comments should be allowed on media with certain features that overwhelmingly tend to attract morally reprehensible comments." Another such stopping point could be "all comments should be allowed, except those which the community has decided have no merit."

In fact, (almost?) all worthwhile online communities are somewhere between the two extremes you seem to advocate. They don't always call it "deplatforming," which may be where you're getting hung up, but comment moderation is everywhere, and it does work.

Now, if you want to say that Youtube could handle this particular type of comment moderation in a better way, I think there are plenty of arguments to be made. But the choice isn't "all comments" or "no comments."

You may see logical stopping points, but I see a slippery slope, and at each such stopping point there will be someone with good arguments for going down just a little bit further. Interestingly, starting down that path always seems to begin with something relating to pedophilia. It's unfortunate that one always has to defend scoundrels in the first stages of what will otherwise naturally snowball into something nobody at the starting line ever intended.

I think HN is actually a poster child for strong moderation on social media, because it remains relevant by being one of the most regulated platforms. There is a reason this place hasn’t devolved into complete and utter uselessness, as well as why you rarely see any form of political debate, and it’s exactly because this place isn’t libertarian in its approach to free speech.

Free speech comes with responsibility, and part of that responsibility is to make sure people aren’t legally monetising videos of children for pedophiles.

I know it’s not always a popular opinion, but I think social media has been hiding from its responsibility for far too long, and I’m personally happy the EU is stepping in to regulate it.

Big tech companies shouldn’t be able to get away with things no one else could simply by being big. Especially not when it’s used to undermine the foundation of our society. If someone uses your platform for a network of pedos, or to commit genocide then you are also responsible for enabling it in my opinion.

HN is self-selecting a higher quality audience: the content is interesting to an educated audience, the website contains very little fluff/idle entertainment, and the UI is ugly to the average user.

While true, the moderation system also tends to discourage the snarky or ad hominem attacks that at least some of us might otherwise be tempted to make from time to time.

I'd say the one downside to HN is the moderation, and I don't think social media has any responsibility to moderate comments.

Readers of HN comments are free to ignore any content they wish to ignore.

I find the trend of individuals demanding that organizations and governments protect them from content they may find distasteful, or content they don't agree with, to be very disturbing.

Is your reasoning invariant across different moral systems (like Iran, China)?

> I would rather be in touch with and exposed to this rather than try to pretend it doesn’t exist.

YT comments do more than just represent ideas that are out there. Sometimes they can serve as a means of disseminating harmful ideas.

Just yesterday I was looking at some YT videos about the history of the Balkan peoples and languages, where the speakers were internationally recognized historians and linguists (and who are from outside the region and don’t have a dog in any of the ethnic fights). However, the comment sections had become a place where people could post lengthy crackpot claims about their people’s history in direct contradiction to the authoritative scholar speaking in the video. Furthermore, it appeared that many people, when they came to the video, went straight to the comments section to read the other YT commenters, and thus absorb their crackpot ideas. They didn’t actually watch the video and learn any of the information in it.

I couldn’t help but feel that two decades ago, people similarly believed things that were baseless, but they had less capability to disseminate those ideas to other people.

> I couldn’t help but feel that two decades ago, people similarly believed things that were baseless, but they had less capability to disseminate those ideas to other people.

Which people? Twenty years ago, people (by which I mean schoolteachers) baselessly believed that Columbus proved that the world was round rather than flat and disseminated that idea, and it was hard to disseminate the counter-idea of "no, actually, that's not true, the Ancient Greeks already knew the world was round, Columbus just thought the world was round and also much smaller, and he was wrong".

If you can stop people from disseminating "harmful" ideas, you have the power to decide which ideas are "harmful". You might think it's "harmful" to disseminate the idea that the American Civil War was motivated primarily by the issue of slavery, for instance. If you're just in charge of buying school textbooks for the state of Texas, you won't buy textbooks that disseminate this idea, and that's harmful enough. But if you're in charge of moderating user-generated content on a huge set of platforms like Google and Facebook are, it starts becoming more and more of a problem.

Two kinds of ideas I would consider harmful to the point of using the force of law to ban them: antivax materials, and the kinds of comments broadcast over media leading up to and during the Rwandan genocide that led to that genocide taking place.

I could not in good conscience say "the kids who die of preventable illness, their lives are worth it so antivax people can spread misinformation without consequence."

I could not in good conscience say "the Tutsi lives lost were worth protecting the Hutu rights to free speech."

The "force of law" was largely on the side of the Rwandan genocide, so it's a slightly nonsensical example, although some of the broadcasts do satisfy the tight bound of speech that calls for "imminent lawless action" (which is the most recent criteria set by the US Supreme Court).

Aside from that, both of those examples are examples where the harm comes from a specific action (or inaction), rather than the speech itself. If refusing to vaccinate children was treated as criminal neglect and unvaccinated children were forcibly removed from their families, people could talk and talk all they want and it wouldn't matter.

Do you not see that there is a line drawn from speech to action, like in the situation surrounding the Rwandan genocide or in the case of antivaxxers leading to a resurgence in preventable disease?

Like, are you trying to frame this in the sense of, "I can swing my fist toward your face all I want, but so long as I don't actually make contact it's okay"?

I'm trying to understand how you're trying to thread the needle here and am coming up empty-handed.

Speech can counter speech, and action can counter action. If you don't vaccinate your kids, CPS can take them away from you. If you want to write paranoid screeds about how vaccines and fluoridated water are a communist plot to block the pineal gland, that's up to you, buddy. Just know that if you have kids, we're keeping an eye on you, and if their shot cards aren't up to date or you have a doctor buddy forging them, we're gonna take away your kids and throw you in prison for a long, long time--just to make an example of you. This is what happens to tax protesters (like Wesley Snipes) and this is why tax protester conspiracy theories don't really get anywhere despite the absolute lack of any legal authority to stop people from disseminating them.

I mostly agree with you about Rwanda because openly calling for people to commit genocide does cross a threshold that justifies forceful reaction. My point there is that you're appealing to government, which is exactly who in Rwanda was organizing the genocide in the first place. But if you're imagining a UN peacekeeping force being deployed to Rwanda being the censors instead--sure, among many other things they should probably shut down the radio stations. I'm fine with that.

> Do you not see that there is a line drawn from speech to action

The problem is that this argument makes it too easy to misuse the instrument of censorship. You're calling for censorship that's both unnecessary and insufficient to solve the anti-vax problem, using a standard that could be just as easily abused to call for censorship that even you would disagree with.

That's the problem with videos. It's a lot quicker and easier to read a short text and respond to it than to watch a 10+ minute video and then respond. There's been a very large movement towards making videos instead of writing articles, even when a video isn't really necessary or helpful.

I agree, in most cases.

Hiding the insanity of YouTube comments won't change anything; it will just hide it ever so slightly, like strategies to _hide_ homelessness in cities.

But I strongly disagree when it comes to pedophiles sharing suggestive timestamps in comments and making vile sexual remarks targeted at children. That's where I nope out. That shit shouldn't be allowed in the comments section - even reddit doesn't allow this kind of commentary.

The problem has more to do with validation and echo chamber.

It wouldn't be a huge issue if these kinds of comments stayed confined to the digital realm.

But as people see that the tone and hardline positions are shared by a lot of other people, they feel empowered and validated, and that behavior starts rippling into the "real" world.

>but the reality is a lot of people think in patterns that drive these comments

And that's where you might be misled by machine-learning-optimized comment sorting, recommendations that drive more people who'll agree with any given video's point, and vocal minorities in general.

Actually, that's a very good point. I think along similar lines. That it's not the comments or platforms that are terrible, they simply shine a light on our own human nature. We're dark, ugly creatures.

> We're dark, ugly creatures

Among other things. Don't forget the duality.

I'm not sure if the bright side is justification for the rest, but regardless, we are what we are.

The only reason I still have Yahoo as my homepage is the comments on the top stories. While most are low quality, there are some very clever comments, and I feel I get a better sense of what everyday people are thinking about.

> I enjoy the low quality youtube comments

They're the solution to a hard engineering problem: how do you generate stupid statements on a given topic? Not grammatically incoherent statements, but the products of true stupidity. It's pretty hard to automate that kind of thing.

But with YouTube, it's easy. Just search for videos that match the topic, and pull random comments from them. A large fraction are guaranteed to be examples of genuine stupidity.
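Taken literally, the "algorithm" is just a search plus a random pick. A toy sketch in Python, with the actual API plumbing stubbed out as an injectable `fetch_comments` callable (a hypothetical stand-in; in reality this would be two YouTube Data API calls, `search.list` to find matching videos and `commentThreads.list` per video):

```python
import random

def stupid_statement(topic, fetch_comments, rng=random):
    """Return one random comment drawn from videos matching `topic`.

    `fetch_comments(topic)` should return a list of comment strings; it is
    kept injectable so the sketch runs offline without an API key.
    """
    comments = fetch_comments(topic)
    if not comments:
        return None  # no videos, or comments disabled
    return rng.choice(comments)
```

By the argument above, no quality filter is needed: a large enough fraction of the raw sample already meets the "genuine stupidity" bar.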

People's beliefs and behaviors are shaped by what they're exposed to and what they see to be publicly acceptable. Of course you can't completely stamp these things out, but you can absolutely affect how widespread and popular they are.

The low quality YT comments often (by the law of large numbers) lead to some of the funniest quips/quotes/jokes/etc. that I've ever read.

YouTube comments are the only real democratic and open dialogue on the web.

Everything else is one person's or another's idea of the "right" dialogue, imposed.

That’s pretty obviously false. I think you mean “anarchic”.

Democracy would allow the majority’s idea for “right” dialogue and would censor minority ideas. Which is what moderation does.

>That’s pretty obviously false. I think you mean “anarchic”

Democratic. The people ("demos") control the dialogue and have equal voice.

As in this dictionary definition: (3): relating to, appealing to, or available to the broad masses of the people - democratic art - democratic education (4): favoring social equality - not snobbish

I'm also referring to the original meaning of the word and practice in ancient Athens when it came to deciding, where every participating citizen could be heard [1].

What you describe (suppressing the minority, etc.) is about lawmaking and decisions, which is not part of the YouTube dialogue. Users don't decide what's to be done. Nobody is suppressed in YouTube comments, minority or not -- they just get downvotes and negative counter-comments, but they can still write and have their comments shown.

[1] The innovation was that it was not a king, tyrant, or group of rulers, but the whole citizen community (even if it excluded slaves and women, which was the baseline at the time -- and for 2.5 millennia afterwards) openly debating and voting.

And you'll still get that from non-children related videos. No one is taking anything away from those.

Unless I'm mistaken, may I ask why you'd be interested in comments specifically on children's videos?

> YouTube content creators are now in the unenviable position where they need to actively moderate the comments section

Why is that so wrong?

I understand that it's often hard work. I understand that they might want to be making videos, or product-placing, or whatever it is any given creator wants to be doing, but they're the people at the hub of their own communities.

They should have some responsibility over the conduct displayed there. And if they don't want that, they can host it themselves somewhere else, or disable comments.

Honestly, I feel YouTube would be a lot nicer place if the standards required weren't just the channel owner's. Plenty of niches are way too happy to foster the next /b/.

If YouTube can determine when to delete a channel due to bad comments, wouldn't they also be able to just delete the comments?

Of course not. One hypothetically only requires noticing something afterwards; the other requires actively nannying in real time.

"I can tell something bad happened here" is way easier.

Sure, but getting people to keep their bedrooms clean gives YouTube an additional metric: you can see who's putting in the time and effort to keep things clean, which is another way to rate the content they're uploading.

If you upload videos of kids and you're sitting back and letting paedos write this weird shit, you probably shouldn't be making, let alone uploading, videos of children.

Just as if your community has an above-average number of people calling for the lynching of Muslims. Or quacks suggesting you can cure cancer with alkali diet pills. You let this stuff stick on your videos, it sticks to you and that attracts more.

Pushing this onto the uploaders forces them to think about what they're doing, and whether or not they really want to build that community.

I dunno... I see what you are getting at but it also lets Google off the hook here.

In my mind it's the Broken Window Theory [0].

It's Google's house. Google's windows. I think they need to take more responsibility for cleaning up.

It's not the creator's fault if pedos show up and comment on their stuff at a scale they can't possibly control.

There's another side to this coin!

There is a part of me that asks, "Why are parents uploading pictures and videos of their children to the internet for the public to ogle?"

The cynical part of me says "What did they expect to happen?"

It all comes back to money, though. Google have had this problem for a while, but their bottom line wasn't threatened until recently. Let's face it: they don't really give a shit until money's involved.

[0] - https://en.wikipedia.org/wiki/Broken_windows_theory

Can someone delegate other users to act as "moderators" in the comment section of their channel's videos?

If not, it's like saying you can run a forum, but you have to moderate the whole thing yourself.

You can. https://support.google.com/youtube/answer/7023301

And again, nobody is forcing you to enable comments. If you can't handle the workload, the arguments, the very worst of humanity, etc, just turn them off.

Looks like YT wrote back to that post to say that's not true?

"...Channels will not get deleted if you re-enable comments--can you send us the post where you saw that?"

Ah, thanks for pointing that out!

It appears that he saw demonetization and mistook that as deletion.

That said, for some channels, demonetization means effectively shutting down the channel.

> Limited Monetization

> Videos that include minors and are at risk of predatory comments may receive limited or no ads (yellow icon). If you think we made a mistake please appeal [link]. We will continue to refine our approach in the coming weeks and months.

Is that mistaken as deletion? English is not my first language, but I have a hard time reading this as "we may delete your channel".

Removing monetization is as good as deletion for content creators. Furthermore, their "appeals" process tends to be a black hole, as is basically everything at Google that would require human intervention.

That's only true for "professional" content creators. The silent majority of creators don't care about monetization and won't be impacted.

I guess I'm reading the words too strictly:

- We will stop giving you money

- We will force you to stop by deleting the channel

I guess it can feel like the same thing for many creators, and be communicated as the same thing on Twitter. (No sarcasm intended)

Where does it strictly say they will delete the channel?

His point is that it doesn't.

The majority of content that is watched on YouTube is monetized, and people do depend on it to some degree. If they didn't, they wouldn't be on YouTube, because there are plenty of other services that offer the ability to upload videos for free.

> because there are plenty of other services that offer you the ability to upload videos for free.

Even without monetization, the community/discoverability benefits of YT have to be appealing. As much distaste as I have for Google, the ability to find a plurality (at least) of the content related to my area of interest in one place is quite appealing.

Not if you are a hobbyist.

YT also asks for references so they can correct the faulty information.

If I understand the Twitter thread correctly, he references the fact that ads can decrease, and this somehow in his mind means that YT will delete his channel...

Edit: In my world, that is called a lie.

And well, who wants to advertise if the comments are full of pedophiles?

> So it sounds as if YouTube content creators are now in the unenviable position where they need to actively moderate the comments section for videos featuring children, and if they don't do so to YouTube's satisfaction, they could have their entire channel nuked.

Is that a problem? If I run a large forum and have moderators running subforums, they either moderate comments in their subforum to my satisfaction or I "nuke" their subforum. Given that channels can also just turn comments off, I can't see a terrible burden here.

Popular forum software (Discourse, phpBB) has good mechanisms for comment moderation. For example, a comment from every new user can require approval before it becomes public. YouTube doesn't seem to have such a mechanism, so the moderator would need to continuously monitor the comment section.
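For what it's worth, the hold-new-users-for-approval mechanism is simple enough to sketch in a comment. A hypothetical Python version (none of these names come from any real platform's API):

```python
from dataclasses import dataclass, field

@dataclass
class ApprovalQueue:
    """First-time commenters are held for review; known users post freely."""
    trusted: set = field(default_factory=set)    # users with an approved comment
    pending: list = field(default_factory=list)  # (user, text) awaiting review
    public: list = field(default_factory=list)   # visible comments

    def post(self, user, text):
        # Trusted users publish immediately; newcomers are queued.
        if user in self.trusted:
            self.public.append((user, text))
        else:
            self.pending.append((user, text))

    def approve(self, user, text):
        # A moderator clears one pending comment and trusts its author.
        self.pending.remove((user, text))
        self.public.append((user, text))
        self.trusted.add(user)
```

Once a user has one approved comment, everything else they post skips the queue, which is why this scales for forum moderators: the queue only ever contains newcomers.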

Yeah but also your moderators are actually just users who don't know anything about moderation, and you didn't tell them at any point that they have to handle moderation duties.

So you turn comments off and tell them that if they do turn them on, they have to moderate the comments.

Does YouTube even provide tools to let people do effective moderation nowadays? Just telling content creators that it's essentially their problem now can't be the whole solution. This reads like the typical Google response of leaving the people that fill their platform with quality content without any useful feedback.

YouTube moderation tools were abhorrent last time I tried them.

Yeah, and most of the channels I follow have 1-5 people producing them while sitting on millions of subscribers and tens of thousands of comments on most videos.

It will just kill small but successful channels because of the sheer amount of moderation to do.

laowhy86 had all his video comments removed by YouTube because his toddler daughter was in a couple of his videos, despite her not being the focus of, or even in, 99% of his videos, and despite the fact that he does curate comments. He received no notice of this from YouTube; he found out when one of his Twitter followers pointed it out.

Yes, it seems like a big overreaction on YouTube's side.

I enjoy his channel, so I really hope it doesn't affect the output.

What's worse for YouTube? That news articles continue to be written about how they're ignoring a pedophilia problem, or some channels getting caught up in the algorithm? It sucks to be negatively affected as a content creator, but YouTube is doing what everyone has been pressuring them to do.

People are pressuring YouTube to hire and train moderators to competently and soberly evaluate context and take intelligent action.

Nobody is pressuring YouTube to blow channels away because one of their ML algorithms hit a probability threshold.

It's simply absurd for YouTube to manually moderate comments at the scale they currently operate. If you force them to do that, it won't be profitable, and you'll end up with YouTube blowing channels away because they can't afford to host them anyway.

Why? Why is it absurd?

You are thinking at human scales, and that is understandable, but Google doesn't think at human scales and it's only "absurd" if you think that Google has the inalienable right to the smallest possible cost of goods sold, even if that means offloading their externalities onto everyone else.

It is probably obvious that I do not. You shouldn't, either.

At Google's scale, trained-but-unskilled workers are not expensive. They are not cheap, but they are not expensive. And Google makes a lot of money. This is a common throughline from large societally-threatening, socialize-our-externalities-but-never-our-profits companies from Facebook to Google: "doing something correctly, or even trying to, would just cost too much money, so we should continue our societal-termite ways!" Until these unwatched monsters--and that is, I stress, the default state of the corporation, it is only the threat of the society that grants them their charter taking it away that adds even a speck of decency to them--prove, prove, that they somehow just can't survive by reducing incomprehensible net revenues to merely gigantic, then I will continue to operate on the understanding that they don't want to. Which I tend to think is a much, much more realistic thing.

I don't care. They fix their product or YouTube delenda est. Either is preferable to the current situation.

> And Google makes a lot of money.

They make money by not spending it when they can get the same outcome for free[1]. Also, Search and Adwords make money, YouTube is getting by[2] (relatively). Why should other divisions subsidize a loss-making YouTube? Some channels don't make enough money relative to number of comments to be financially viable (no matter how cheap the moderators are) - Google has simply outsourced this decision to individual channel owners.

1. Google users do a lot of things for free already, e.g. Map POIs

2. My guess - they don't break out YT's income/expenses in financial reports https://www.marketwatch.com/story/the-sec-wants-to-know-why-...

I understand that. I also understand that stuff like YouTube is effectively becoming the public square of the twenty-first century and if a company wants to own that, they can deal with not making all the money off of it that they could possibly, theoretically, make.

People matter more than corporations. Society matters more than corporations. I'm comfortable asserting that it would be better for Google to close YouTube down than to let an organ of growing central importance to society at large become what it's obviously starting to become; something less damaging than that neglectful caretakership can arise in its wake.

Are you seriously suggesting that disabling comments on a certain type of video content is more damaging to society than losing a global engine of content creation and community?

YouTube benefits society immensely by sustaining a very expensive 21st century public square. If we as a society want to have that - and I at least very much do - we can deal with not making comments on all the videos we could theoretically comment upon.

I am seriously suggesting that this is not something that can be algorithmically determined. I'm quite OK with all manner of content not having comments enabled. I'm not okay with unthinkingly stupid false positives all over the place harming creatives' (actual creatives) ability to feed themselves, and those false positives are overwhelmingly caused by bad heuristics and objectively dumb algorithmic decision-making.

Feeding humans into The Machine, having The Machine make context-free, alarmingly inaccurate, and functionally beyond-appeal decisions--because the appeal process doesn't scale either, we are so frequently told, when it isn't just "drop the appeal on the floor"--is bad. If Google has no other answer than Feed The Machine, then The Machine should be considered inimical to humans and should be dismantled.

But, of course, The Machine is not necessary; that's a convenient fiction to paint the problem as a dilemma between "no YouTube" and "some unaccountable algorithm runs YouTube and decides what you can see, free to lead kids from Let's Plays to Nazi agitprop and pedophiles to their spank bait." It's just that The Machine is cheaper, you know? And that's really, and literally, all.

I can't take your concern for creatives' ability to feed themselves seriously when you turn around and advocate for fully destroying the platform that is feeding them. Many full-time content creators on YouTube aren't big enough to make it on a smaller platform or on their own.

I also don't think the "Machine" is necessary, but I do think it's better than having no global engine for content creation and community at all. If you think there's a viable third option, I'm interested in hearing how it would work and the cost of achieving it. But of course you're free to continue making dystopian metaphors and pointing at Nazis instead.

The way it works is to have these companies hire, and pay for, and care for (see Facebook terminating counseling services, etc., for departing content-moderation employees) the employees who make these decisions, in order to provide a platform that's safe and sane.

That's it. That's literally it. That's just...it.

You are ultimately correct, in that it will be of relatively higher cost. You are ultimately correct, in the sense that "anything" costs more than "nothing". And I genuinely don't care. It must happen. And a large part of why I don't care is that I am not advocating for its destruction; what I am saying is that I am perfectly okay with going to the mat with Google and other ostensibly supra-national corporations because they'll back down. They will back down because they will still do just fine. Google is not going to shutter YouTube, Twitter is not going to fold (well, not because of this), Facebook is not going to hang a CLOSED sign on the door because governments say "no, you have to actually have humans make decisions that impact these other humans and process them sanely instead of having your robots blap stuff to death because it found a peak in their hill-climbing." They will comply, because they will still make plenty of money.

And if they don't? If I'm wrong? Somebody else will do it. There's plenty of gold in that hill, even if you aren't allowed to get at it for completely free.

(It is also worth noting that...uh...on YouTube, those Nazis exist. They're right there. I've watched them radicalize teenage boys who started on Let's Plays. The algorithm happily feeds those boys to them. That's part of this problem, too, and you can't just handwave it away.)

You speak with such confidence that YouTube is printing enough money to sustain such a massive additional cost, but that's unlikely. Don't just take my word for it: the WSJ has reported on this matter [1], because Google doesn't release financial details for YouTube on its own.

You've done nothing but rattle off assertions about how YouTube is just so profitable and won't shut down, how there's so much money in ad-supported video hosting, how somebody else can do it. These are fantastic claims, by which I mean they are rooted in fantasy.

I have no trouble believing that this represents an existential threat to YouTube. If Google massively shrinks or shuts down YouTube as a free and global content platform, it's not just their loss, it's ours as well.

[1] http://www.wsj.com/articles/viewers-dont-add-up-to-profit-fo...

Relevant snippet from the WSJ article:

> The online-video unit posted revenue of about $4 billion in 2014, up from $3 billion a year earlier, according to two people familiar with its financials, as advertiser-friendly moves enticed some big brands to spend more. But while YouTube accounted for about 6% of Google’s overall sales last year, it didn’t contribute to earnings. After paying for content, and the equipment to deliver speedy videos, YouTube’s bottom line is “roughly break-even,” according to a person with knowledge of the figure.

I didn't say YouTube was "just so profitable." Google is so very profitable and Google won't shut YouTube down because Google derives incredible mindshare value and analytics insight from owning YouTube. YouTube and a similarly not-super-profitable-but-very-useful product--Gmail--get people into the Google ecosystem and facilitate greater understanding and deeper analytics into their userbase in ways that make the things that do make money make more money. To reduce it to a P&L for that single division is bonkers.

And from a brand perspective? To younger people, YouTube is the part of Google that they like. It's not going away if it becomes marginally more expensive to run (and we are talking marginally. Facebook pays $28,800 a head for content moderation, and that's American employees), because all doing so does is open the door for a competitor--and while 2009-me thinks this is crazy to say, I find myself eyeing Microsoft in 2019, though Facebook is also of course a likely contestant--to come take all those eyeballs and all that analytics data.

I promise: it's okay to dare even a megacorporation to blink. We live in a society, they operate under our rules.

Google doesn't need YouTube to exist in its current form to have a large viewership. It can just as easily turn YouTube into a controlled TV-like platform where content is primarily created by incumbent professionals with little room for anything else. They'll still get incredible viewership. That's where the mainstream lives after all. Smaller content creators aren't particularly profitable or popular, so why bother if all they do is invite the press and people like you to slap them around for having them. I'd say it's already going in that direction.

And from a brand perspective? The linked article in this thread is a global, mainstream news publication burning Google & YouTube's brand by associating them with pedophiles.

Marginally more expensive? Try hundreds of millions a year to employ the thousands of workers to properly vet the 80k+ hours of video content uploaded every single day, with countless more comments. Then get slapped around by the press anyway because those workers aren't paid enough, and they aren't given quite enough mental care because they're still a bit screwed up after watching garbage 8 hours a day, and by the way they shouldn't be watching garbage 8 hours a day because that's awful for a human being to do that, they should do it at a nice 8 hours/week but they should still get paid a lot more because they're doing god's work and market rate wages aren't enough for them.

So what's your plan? Google realizes that hey, they don't need to operate a free global platform for content creators of all sizes at a P&L loss, they can do what everyone else does and make a lot of money, get a lot of mainstream viewership, avoid PR blows like this one...then you get to proclaim victory because youtube.com still exists?

Oh right, if Google stops operating a free global platform for content creators everywhere at a loss, someone else will do it. Like Facebook, which suffers from the exact same issues, is working towards the same AI approach as YouTube, and got slapped by the press after hiring human moderators anyway? Like Amazon, which acquired Twitch and almost immediately applied an AI-based automatic content moderator even more inaccurate and punishing than YouTube's? Like Microsoft, which...uhh what? I'll let you come up with reasons why Microsoft is somehow an appropriate competitor.

I can only describe your comments as wishful thinking. We live in a capitalist democracy, we operate under its rules. You're free to suggest that we as a society choose a different system, but good luck with that. Until that changes, I promise: megacorporations don't blink, they just look away. I think it would be a tremendous loss if one of the most competent members of our society looked away from the project of a free, global video platform for content creators of all sizes, stripes, and beliefs.

Cool. And when that algorithm decides that that thing you don't like is banned beyond appeal, what will you do?

I would argue against banning those things, even if it's a thing I don't like, like I am now. I argue against the cultural idea that if I don't like it, YouTube needs to get rid of it. If you think something is so bad that it shouldn't be on YouTube, you should go through society's democratic process and get it enshrined into law.

So, quality instead of quantity? How horrible.

> big overreaction

What other kind of reaction is possible when people scream "pedophile"?

On any popular video or channel the comments have always been a cesspool of hate and evil. I am glad that YouTube is finally trying to do _something_ about it, but it also seems really shitty that it took pedophiles to instigate an advertiser boycott and get them to act.

Pedophiles are just the proverbial "Straw that broke the camel's back."

Advertisers have always been agitating, often behind the scenes, that their ads not show up on certain channels. It's just that now that there are all these code words, out-and-out brazenness, and whatnot that undesirable people use to have their conversations on otherwise innocuous channels, the advertisers are really starting to put their collective foot down.

This really is an "existential" level threat for YT. I understand the urgency. I'm just wondering if there is a better way to accomplish the same goal? Is there a way to, more directly, target undesirables?

Or maybe advertisers want better ad deals? They're also being agitated from behind by traditional media that sees Youtube as a threat. Back during the first adpocalypse it seemed as though media organizations were threatening companies to pull their ads or else be written about poorly.

Gab just released a tool that permits anybody (with a browser plugin) to comment on any web page, so this will probably not only hurt legitimate users, it will not do what it's intended to either. Just like every other moral panic in history.

This isn't a moral panic, I don't know why people push that line anytime racists or pedophiles are even mildly inconvenienced.

Advertisers left Youtube because people left creepy and sexually suggestive comments on some videos, and Youtube responded to preserve ad revenue. No one was clutching their pearls over this.


This is pretty much raw, uncensored, money-grubbing greed.

Racists and pedophiles are bad for the bottom line because advertisers refuse to pay to have their ads next to such content. So unless we're willing to start paying to use YT so that YT can get off the ad supported model, then we'd better get used to moves like this one.

I don’t think it’s YouTube’s responsibility to moderate what happens on nazi pedophile sites that they don’t own.

Google runs Chrome and its extension store

This is basically the same as Genius' old mission to annotate the web.

Except it's Gab.

Or ThirdVoice back a couple decades ago.

> Seems like YouTube will need to come up with some sort of "trusted subscriber" designation, and allow content creators to permit comments only from those subscribers, so that any random bad actor can't swoop in and destroy a channel.

Alternately, a "trusted moderation service" designation, where you—if you think it's worth it—can pay a third-party to do the moderation that YouTube doesn't want to pay for, and YouTube can verify such third-parties as being "thorough enough" that it won't automatically nuke a channel upon report if such a verified moderator service is doing the moderating (just like they wouldn't nuke a channel upon report if they were doing the moderating.)

> See child comments.

But YouTube won't let me

I only have a peripheral knowledge of Twitch, but don’t they have a similar solution to what you’re outlining here? There is a “public” channel, on major streams this is full of spam and nonsense, then a separate channel for subscribers where they can talk amongst each other.

Perhaps a subscriber-only gate to comments would be a good thing. And only subscribers could read the comments, too. Channel owner is tasked with moderating the conversation or recruiting mods to do it, like any chat room or message board. Then, whatever happens in the comments becomes the channel owners responsibility.

So did they ban comments, as the headline indicates, or just disable them by default? Disabling by default seems much more reasonable to me: let creators decide if they want to moderate comments, but keep them off by default so family videos don't have to deal with that.

Thanks for pointing me toward SBSK. Fair warning to others: have some napkins handy.

Only KYC'd accounts can post on videos featuring children.


First time I’ve actually seen "[sic]" being used

> see child comments

I can't, they were removed.

But seriously: YT is just hiding the problem by disabling comments; pushing it under the rug.

This is so bizarre: Google/Youtube is still trying to claim that they are NOT a media company, just a platform, no media, not responsible of course, but at the same time they are putting the responsibility of moderating user comments ONTO THE VIDEO MAKERS themselves.

This is completely hypocritical and idiotic. Youtube comments have been famous for a decade for being among the worst of the worst content on the web, and now they're going to try to just foist that cancer on channel owners and wash their hands of it? Are you kidding? Is this a joke?

Is nobody in charge at Google anymore? Are they just going to keep endlessly reacting to whatever media story got the most attention last week instead of actually trying to build something new?

The fact that Google is ceding the fight against toxic comments on YouTube is actually pretty shocking.

For a company that knows more about you than you can possibly imagine, and more about automated sentiment analysis than anyone in the world, it's remarkable that they couldn't algorithmically determine who should be allowed to post on certain subsets of videos, or devise a system they thought was worth deploying to ensure comments meet a basic level of decency.

The ramifications of this are profound, at least:

- shake the money stick at them and they will dance (thanks, of all people, Nestle!)

- they've admitted a serious problem exists that they were unwilling to deal with until external pressure forced them to (i.e. they can't be trusted to self regulate)

- they've all but admitted they can't fix this in reasonable time, if at all

This is the first time I can think of where there has been a seriously material chink in Google's... cultural armour? Turns out the advertisers are in control, and turns out they don't have a technical cure-all. It'll be interesting to see how they attempt to reintroduce comments in the long term, no doubt with more ML. Of course, this says nothing about a recommendation system that continues to blindly cluster videos of lithe toddlers together. I wonder if any advertisers are making a stink about that.

Favourite summary: kids are safer on YouTube today because of Disney and Nestle, not because of Google. Let that sink in. The subtext here of course is that Nestle and Disney are some of the most evil companies around, and yet they're the ones that were forced to strong-arm Google. The irony of this defies words, and the reality of the only mechanism at play here to protect children is almost as disturbing - these companies don't "care about children", they were only forced into action to maintain their reputation.

(gentle reminder: HN punishes highly commented 'controversial' stories. If you care about this issue being more widely understood, try to limit your commenting)

>- they've admitted a serious problem exists that they were unwilling to deal with until external pressure forced them to (i.e. they can't be trusted to self regulate)

No, they didn't. The problem that Youtube is dealing with is advertisers leaving, not the actual comments. Dealing with the problem of advertisers leaving is different from dealing with those comments. It's unclear whether dealing with those comments is even desirable, because Youtube would, in essence, become the censor of what is and is not okay to post in the comments sections, even if the comments don't break any codifiable rules or laws. Would you find it acceptable that your freedom of expression is filtered by rules as unclear as that?

Laowhy86 got his comments disabled because his daughter appeared in some of his videos. If this standard were to be pushed across the board then it would effectively ban comments on any videos that have underage people appear in them. Is this desirable for our society?

>Favourite summary: kids are safer on YouTube today because of Disney and Nestle, not because of Google. Let that sink in.

But this is not true.

Firstly, kids on YouTube were NOT impacted by this. When a creep watches a video of a person then that person is not negatively impacted by it.

Secondly, all you've done is hide the problem. The videos of the kids still exist, the time stamps can still be created. All the creeps need to do is share that information somewhere else. That's it. And the only way to fight this one is to simply bar kids from appearing in any media content. Good luck posting a video of you walking around town or of an event where somebody underage might be.

This is similar to the discussion about whether you are allowed to let your 9 year old go outside unsupervised. It's a question of how much freedom do we allow people and kids in our society. It seems to be that public opinion is on the side of less freedom and more protection.

> Firstly, kids on YouTube were NOT impacted by this

Unfortunately this is inaccurate. There are billions of daily adolescent YouTube users, some of them entering puberty, who were exposed and continue to be exposed to the recommendation algorithm that created this mess in the first place, for periods running into hours every evening.

At such an early stage in development, it is absolutely the company's duty of care to ensure that a 5 year old is not being sent up a gradient of a recommendation system that is encouraging them (with the help of the comments just removed) to view people of their own age in a sexual manner. Not only were children impacted by this, but the mechanism that enables it remains active to this day.

I agree with the idea of allowing children outside, but as per my reply to your previous comment, not if that means spending all day in the back yard of the village creepy old man. Balance is required in every situation, and denial of the kind your comment is riddled with accomplishes nothing.

>Unfortunately this is inaccurate. There are billions of daily adolescent YouTube users, some of whom entering puberty that were exposed and continue to be exposed to the recommendation algorithm that created this mess in the first place, and for periods running into hours every evening.

You are talking about a different issue.

>At such an early stage in development, it is absolutely the company's duty of care to ensure that a 5 year old is not being sent up a gradient of a recommendation system that is encouraging them (with the help of the comments just removed) to view people of their own age in a sexual manner.

Why would it be the company's duty of care and not the parent's? The parent should be the one that controls what the child consumes, not some nameless company or the government.

>I agree with the idea of allowing children outside, but as per my reply to your previous comment, not if that means spending all day in the back yard of the village creepy old man.

Again, this should be done by the parent, not by a faceless corporation or the government. It is the parent's job to deal with this.

> Why would it be the company's duty of care and not the parent's?

It is the company's duty of care for the same reason that

- it is the city's duty of care if the child goes outside and falls down an unmaintained manhole

- it is the school's duty of care if a fire extinguisher malfunctions and kills the child

- it is the driver's duty of care if the child crosses the road and gets struck on a green crossing

In all these cases, the entities assume a certain privilege to operate due to the trust placed in them that enables the freedom for the child to go outside whatsoever. If that duty does not exist, then there is no trust the child can safely leave home unattended. In effect what you're arguing for is total control from the parent -- a much worse outcome for a child than self-moderation on behalf of the trusted entities they would otherwise have had the freedom to interact with.

>that enables the freedom for the child to go outside whatsoever

But this is not comparable to "going outside" at all. This is "seeking out and going onto private property and seeing stuff you shouldn't". The logic that a company is comparable to public spaces doesn't hold water. YouTube is not a public space any more than Pornhub or 4chan is. If 4chan wanted to cater to 10 year olds (yes yes I know) then the way to do so would clearly not be to remove content not suitable to kids but to create a separate place for those kids.

> The parent should be the one that controls what the child consumes, not some nameless company or the government.

The ways in which people access community-created content (or commentary) have changed substantially enough that I don't believe this is true any longer. Content providers now bear a portion of this responsibility in addition to child-rearers. That is ethically much more sticky, I'll admit, but I also think that moral imperative does not automatically follow the path of greatest simplicity.

> Youtube would, in essence, become the censor of what is and is not okay to post in the comments sections

They can, and that's reason enough. It's their lawn, they can do whatever they want. If you invite me into your house and I start yelling like crazy (because I have the right to), and you don't like it, you would ask me to get out (gently or not). You would be right. Who am I to complain about what I can or cannot do in your house?

> freedom of expression

Why do people always misinterpret what freedom of expression is? It does not apply to the comments section of Youtube. People are not entitled to be heard or to express whatever they want in Youtube comments.

Wait, so you're saying that if every company in the US wanted to ban videos of gay people, they could, simply because they don't have to allow them? Yet we both know there would be a lawsuit so fast your head would spin. Age is a protected class too. Children just don't happen to have their own lawyers as often, or the knowledge to use them in cases like this.

I was referring to publishing comments as the equivalent of free speech or freedom of expression, not to denying participation and/or discrimination. Why go so far with the reasoning?

It's not Youtube's responsibility to protect anything, they just need to keep things barely legal. They are doing this because of the dollars and public image. Youtube owes us nothing, let alone guaranteeing our freedom of expression or protecting classes. If they say so, it's just PR.

Can a child consent to fame? Should a parent be permitted to make that choice for their child?

I don't believe so. We don't let parents send their children down into coal mines, and neither should we allow parents to make their children famous on youtube. Parental rights are secondary to the rights of the child.

Hmm, guess childhood actors aren't a thing any longer. What a weird culture we live in.

Childhood actors are the weird culture. Their abolition would be a lot less weird.

Do I want Nestle deciding what video content is worthwhile? No.

Do I want edgelords and abusers all over my screen? Also no.

At this stage I'll take any attempt to do something, even if it is deeply corporate. At least Nestle you can argue with without getting death threats.

(Not always true of e.g. Shell...)

I, for one, do not want Google deciding what is true.

Everyone who types anything into a Google search box is asking Google to do that.

God forbid they put a link to Wikipedia under flat earth videos, huh?

You only think that's a good idea because you don't believe in flat earth. They could equally put a link to flat earth sites under NASA/SpaceX videos. Do you really want to decree that whatever they link is "true" just because you agree with them in this particular instance?

I try not to consider things in terms of absolutes, so to answer your direct question: yes, I am pretty comfortable believing the vast majority of such links would be substantially factual.

Google already does this, I don't see the issue with it hypothetically happening on YouTube. If you search for a well-known person, place or thing, you'll be presented with information aggregated from trustworthy sources. Can they be wrong? Sure, and it happens. It's not sinister.

It sounds like you're envisioning someone manually tagging things. That would be odd and suboptimal. Instead, just tag helpful Wikipedia pages about the broad scientific consensus of things on relevant videos.

When I am objectively wrong, I want my mind to be changed. I expect this to be the case for around 10% of my ‘knowledge’, even on topics I care to educate myself about, and much worse on other issues.

Putting a link under the videos isn’t likely to achieve that, but that’s a separate issue.

The only problem I have with Google doing this, is that I trust corporations and government about the same — i.e. that both will lie and dissemble as much as they are allowed to get away with for their own or their leader’s benefit, without regard for my interests.

You're missing the point. Most things in life aren't settled as easily as the flat Earth debate. Google could also do this on something that's far more controversial (eg political) and justify it in the same manner. Imagine if Google were against climate change and every video about climate change gets a link to some website that says it's not true. That wouldn't be acceptable, would it? But being okay with Google doing this for flat-earth also makes it more acceptable for Google to do it for climate change.

The recent past (e.g. in US politics) has shown that our society needs mechanisms to incentivize consensus. Google promoting fake information would get appropriate push-back and in the end help form consensus. Yes, even on political topics I'd be fine with that, as long as they try to stay fact-based. There is a lot of room for honest actors to discuss ideas. But I'm fine with society (including companies like Google) pushing back once people (including politicians) go crazy with stupid positions (likely trying to widen the Overton window to redefine the "reasonable center").

> But being okay with Google doing this for flat-earth also makes it more acceptable for Google to do it for climate change.

Sounds great, I'm all for it.

Sure the truth can be complicated, but I fail to see how implementing software that auto-links to a relevant article on Wikipedia causes Google to be the arbiter of truth.

>Sounds great, I'm all for it.

Even if Google thinks climate change isn't man-made and links to sources backing that up?

No, but I reject the implied premise. I don't just think Google should endorse the things I agree with, and I think characterizing this that way is disingenuous. Rather Google should endorse the truth which substantially recognized experts have consensus on.

There aren't sources backing up that climate change isn't real. That's a hill I'll happily die on. There are plenty of alternative sources which make that claim, but Google does not aggregate facts from them for presentation to its users. Because they don't have evidence.

Like I've said elsewhere in this thread, this isn't a revolutionary idea. Google does it on its search engine and the sky hasn't fallen. I remain unconvinced it would fail if they pushed it out to YouTube.

Do you think YouTube should have censored videos about government surveillance programs before the Snowden documents came out?

That sounds like a leading question for rhetorical purposes - is this something Google actually did, or are we speaking purely of hypotheticals here?

Note the solution I posed is something which Google already does on its search engine without calamity. Therefore I don't see a reason why it would fail for YouTube. In contrast the example you're giving seems pretty hard to just link to an authoritative source.

Put another way, I'm not advocating for Google to arbitrate the truth on a case by case basis. I'm advocating for Google to identify ahead of time which sources are well-researched and trustworthy, then outsource its fact-linking system to those sources.

If Google were to supply facts on a case by case basis that would be suspect. But that's not how the company operates, so I'm deeply skeptical they would become some kind of arbiter of truth.

I refer you to my last paragraph.

"You only want them to do good things, not bad things! How hypocritical!"

Yes. Because obvious idiocy should not be rewarded.

There are many things that are debatable, but flat earth? That falls under verifiable facts, and we shouldn't equivocate about those.

You can’t abstract away from the content to the pure form of an action, and then posit that there’s no observable difference between two (very much different) examples.

Ex: "If banks stop allowing strangers to withdraw money from my account, how will I ever get my money?"

And, no, questions of fact aren’t different. The earth is round, not flat. One tree does not a forest make, but a thousand does. Even if we can’t agree on the specific cutoff (is 15 trees a forest? 50? 500?), that does not prevent us from accurately describing the extremes.

What are you using google for if you don't want to find out information? I generally don't enjoy being fed bullshit.

Do you realise how silly this line of argument is? Why exactly should we (or Google for that matter) not recognise that there are differences in some actions? That some things are good, and some things are bad? In what world do you conceive it to be the same to advertise flat earth theories as legitimate unless you are yourself a flat earther?

This idea that because nobody has a monopoly on the truth then we can't make decision is utterly futile and silly; I have no idea where it originates, except perhaps in the darkest places where reason has utterly collapsed.

Of course, every authority since the beginning of civilization has carried this mantle in justification for censorship of all sorts. Having Google decide what information can and can't be shared on their platform (read: utility) is a dangerous state of affairs. What new social movement, or recognition of a current injustice, will be stifled due to a status-quo bias codified by such top-down control over the media? We can't know from where we currently stand, which is what makes such control dangerous.

The fact that a reason is used (and has been used) to justify censorship unjustly does not mean that the reason itself is invalid, or that there is no such thing as good censorship; most people agree that some censorship (of threats or child pornography etc.) can be a great positive force.

There is no reason behind the idea that because we can't totally differentiate between good and bad where the line is blurry then we can't do anything at all. I'd also question whether free speech is intrinsically valuable, more than other actions are. I have seen no convincing reason to think so.

We can approve of good actions (like putting a Wikipedia link about earth science under a flat earther's video) and disapprove of bad ones (like putting flat earth propaganda underneath a scientific video). I see no issue here.

The debate isn't whether we can do any sort of "good" censoring, but whether we should do any censoring at all. (Just to be clear, I'm narrowing the scope under discussion to ideas. Of course things like child pornography should be censored due to the direct harm.) I reject the idea that society should welcome some authority having control over ideas such that ones deemed "bad" enough by a large enough majority should be actively suppressed. The "good" we presume can be done by shielding people from bad ideas does not outweigh the fundamental right of expression and communication.

Your claim is that some form of censorship may be permissible if the harm is direct, but I think this carries with it a certain ideological slant - what harm is 'direct' and 'indirect' has vastly different consequences; for instance, prevention of direct harm may be sufficient to protect children, but it probably isn't enough to protect the proliferation of racist or sexist ideas which have historically led to widespread oppression on those fronts. What is your threshold for harm?

Here we see the vacuity of the harm principle: one can claim anything is (or isn't) harmful in order to attach their favorite idea to it. As an example, some people may be said to be harmed simply by the knowledge that someone is watching pornography in their house. You'd likely say that doesn't "count" as harm - well then what does? As it turns out, controlling speech under your schema is simply a matter of defining what counts as harm and what doesn't. Yet as philosophers such as Joel Feinberg and Catharine MacKinnon have pointed out, very few people (if any) would like to live in a society in which only harmful speech (or acts, since there is no meaningful distinction between speech and acts other than invoking body-mind dualism) is not permitted.

Then we get to ideas: who's to say that threats or child porn can't carry ideas in them? In censoring them, aren't we censoring ideas too? Some would say the idea that "it's not so bad to have sex with children" is encoded in every instance of child pornography. What if I made my threat into an art piece?

Your argument is unmoving. Child pornography isn't speech, nor is it an idea. Images are records of events, and the dissemination of such records can be directly harmful. There is no ambiguity about the harm principle to be mined from this example.

> In censoring them, aren't we censoring ideas too?

Ideas are by definition abstract and so they should be communicable through some other medium.

You've managed to circumvent my entire post and you're still wrong; my point was that ideas are communicated through a medium, and they can even be communicated through, for instance, threats and pornography. You have given no convincing reason to single out child-pornographic images for censorship while allowing others, such as regular pornography. What differentiates the free speech content of child pornography from other pornography, or even art which required harm in its creation?

Obviously I'm not defending child pornography here, but I think there's a logical flaw in your reasoning.

The fact that ideas can be communicated through other media is irrelevant, since it would mean that we can censor whatever ideas we like in any major category (e.g. ideas conveyed in photography and film) but thereby only farcically allow them otherwise (e.g. the expression of the idea is only allowed through physical speech).

>and they can even be communicated through, for instance, threats and pornography.

But censoring one particular medium is not censoring the idea. So your attempt at finding a contradiction doesn't hold water.

>What differentiates the free speech content of child pornography from other pornography, or even art which required harm in its creation?


>since it would mean that we can censor whatever ideas we like in any major category

This doesn't follow from my argument that censoring one particular medium is OK. Child porn is a genuine special case (direct harm in its production, lack of consent in dissemination) that doesn't transfer to other mediums that don't have the same problems.

Censoring a medium is an instance of censoring the idea, and if censoring one particular medium is permissible, then any number of media may be therefore censored.

>This doesn't follow from my argument that censoring one particular medium is OK.

It does, since by your own admission, censoring a particular medium does not entail censoring the idea.

>Child porn is a genuine special case (direct harm in its production, lack of consent in dissemination)

So this is what I was getting at - you say it's fine to censor a particular way of conveying an idea due to other harms being associated with that particular way of conveying it. In child pornography it's the violation of consent in its production and the violation of privacy in its reproduction. Extending this argument from child pornography to regular pornography, some would say there are significant harms involved there too (e.g. it conveys the idea that women ought to be subservient to men), and then to hate speech.

The core idea is that speech is not absolute, just like actions aren't absolute. You're free to swing your fist so long as it doesn't hurt anyone, and you're free to say things so long as they don't hurt anyone (or require anyone to be hurt, of course). This means that with a sufficiently convincing empirical dataset, we can outlaw regular pornography and hate speech.

That's an argument for ignoring harms that take place in the present because it's possible to imagine worse ones in the future.

If addressing a current harm leads to worse harm in the future, then yes it is an argument against addressing the current harm. I see no reductio here if that was the intent.

The people who use gmail essentially are "letting google decide what is true" by having google do their spam fighting.

And letting Google choose which results to return for a search request is essentially also asking them to decide what's true (how many people even go to the second page of results, much less later ones?).

I agree with your sentiment (and am not a gmail user -- yuck!!) but I do sometimes use their search engine...and know that when I use ddg I'm ceding the same authority to them.

Tangential: what e-mail (if any) do you use? Thanks!

I ran sendmail from the late 80s until around 2001 when I switched to qmail, which I then used until around 2012. Since then I have used postfix.

My mail and other services run on hardware of mine in a colo. I've had a rack of personal machines in a colo since the mid 90s; before then I simply plugged my machines in at work.

I have no problem with Google deciding what Google will say is true on Google-owned platforms.

Google: gay people are bad and women are lesser than men.

You: sue them out of existence.

We both know what you just typed is bullshit, for good reason.

> Google: gay people are bad and women are lesser than men

> You: sue them out of existence.

Well, no, so I wouldn't say that, but, OTOH, saying that might be problematic from an employment non-discrimination standpoint. And there are other things Google might say that would be problematic from a standpoint of either consumer or securities fraud. Or libel. But none of that is germane to the point under discussion.

The common theme in every discussion about moderation is that Google, Facebook et al have a division of literal wizards who can wave magic wands to enforce arbitrarily nuanced moderation policies at scale.

This just isn't in touch with reality, but public opinion and pressure has no incentive to be rational, so blunt, restrictive policies like this one are the only path left open to these companies.

In Google’s latest press release they admit they are pushing out a new classifier which flags 2x more comments, and that they are increasing efforts in this area.

I think that the reality is that until very recently there wasn’t a huge bag of money attached to having fantastic moderation tools. I don’t moderate a large YouTube channel, but I will go out on a limb and guess that the tools are rudimentary.

Basic things like scoring the comment poster as well as the comment itself, the ability to have verified commenters (but why should this have to be a manual switch?), the ability to adjust the sensitivity of the classifier based on the intended audience of the video...
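To make that concrete, here's a toy sketch of what "scoring the poster as well as the comment" plus audience-based sensitivity could look like. Everything here - names, weights, thresholds - is made up for illustration; it's not how YouTube actually works.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    author_reputation: float  # 0.0 (new/untrusted) .. 1.0 (long good history)
    toxicity: float           # hypothetical classifier output, 0.0 .. 1.0

def moderation_action(comment: Comment, kids_audience: bool) -> str:
    """Combine a commenter score with a comment score, with a stricter
    threshold when the video's intended audience is children."""
    # Discount the raw toxicity score for commenters with a good track record.
    effective_risk = comment.toxicity * (1.0 - 0.5 * comment.author_reputation)
    # Adjustable sensitivity: a kids' video tolerates far less risk.
    threshold = 0.2 if kids_audience else 0.6
    return "hold_for_review" if effective_risk >= threshold else "publish"
```

The point is that even this crude two-signal model gives per-video knobs, which is already more nuance than a blanket comments-off switch.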

There is a massive valley of opportunity in between what we have today, and “magic wands to enforce arbitrarily nuanced moderation policies at scale” and perhaps the biggest comment platform on the Internet just decided they need to turn off comments entirely and if you do turn them back on, you need to do manual human pre-approval on each one.

>I think that the reality is that until very recently there wasn’t a huge bag of money attached to having fantastic moderation tools.

Uh, spam? Spam has a pretty damn massive bag of money attached to it and nobody has managed to get it right. Every single spam filter catches a ridiculous number of false positives, and spam is far easier to classify than what's required here. Youtube catches a ridiculous number of comments as false positives in their spam filter. I can't imagine this new category is going to be any better.

> ridiculous amount of false-positives

Not to mention quite a few false negatives. And the stakes on this are possibly higher, so it will be tuned toward an even higher false-positive rate.

Yea, that's probably fair. I didn't mean to suggest there's no scope for improvement or investment. But I suspect that you and I differ on our estimates of the degree to which this is possible, particularly without fundamentally changing the platforms and drastically reducing their value for legitimate usage.

The existence of the gap also doesn't change the fact that, in my experience, implicit in the majority of the mainstream conversation about these topics is an imagining of tech giants as wielders of unlimited power over their billions of users, and I'm skeptical that they'd be satisfied with most/any of the points in the valley you describe.

Not to mention that the flipside of public opinion is that the caprices of an algorithm are even more hated than their cases of leniency! All of the suggestions you make are going to increase false positives, and the PR backlash I've seen from that is far worse than the backlash to unmoderated content.

That isn't to say that there isn't room for reasonable complaint, but most people are really, really stupid, and even worse, entirely unconcerned with the consistency of their objections (a problem exacerbated by the fact that different vocal subgroups complain about different problems, and our various media will happily signal-boost all of them). Again, I don't see a solution that threads the needle effectively other than blunt restrictions, leaving everyone worse off but neutering the worst of the PR damage.

I feel like this cart before the horse vision is the problem. If you are going to suck up as much data about the world and monetize it, you have an obligation to spend some of that dough on a coven of wizards who can make the place you have created safe for people. Facebook, YouTube, Twitter, et al have proven to be a disease vector over the past decade and if they want to be rich, they have a responsibility to bleach the shit out of the door handles.

Do you similarly feel that the telephone shouldn't have been rolled out until we could ensure that people couldn't speak about objectionable things, paved roads should have waited on ensuring that getaway cars wouldn't work on them, and the printing press should've been gated on technology that ensured it couldn't print falsehoods? As you can imagine, there are a trillion and one other examples I could bring up: you could ask the same about pretty much every sufficiently big advance in technology and the new structures enabled by it.

This is a serious question, not a rhetorical one: the responsibility line is unclear to me, but it seems to me that there's a level of Luddite absolutism around the topic of Internet platforms that I don't see anywhere else, and it's possible that I'm missing the variable that makes it relevant here where it wasn't elsewhere.

I think they are shining examples of privatizing the profits and socializing the costs. In Google's case they have accrued over $100 billion in savings, and the Chinese government moderates content more effectively and from more disparate sources. A handful of companies provide phone support to over 100 million customers and Google provides automated emails.

I don't think people are this naive by accident: Google, Facebook, etc. have been pushing this idea for a long time. It has always been unrealistic but I don't think we can blame the average consumer for taking the statements of Google at face value.

I think it's actually a pretty hard problem.

> ensure comments meet a basic level of decency

Decency is famously difficult to define [0]. One person's idea of decency is another's idea of "free and open debate", is another's idea of alternative culture, etc. And the truly bad actors are using throwaway accounts anyway, or will start as soon as you start banning them.

I think active moderation and community curation is the only real solution to this. Sub-communities (formed around particular channels or topics) have to define and enforce their own standards of decency. In this case, I can understand Youtube taking the conservative approach of not trusting communities to do that, especially given a lot of the stories that have come out showing that they aren't.

[0] https://en.wikipedia.org/wiki/I_know_it_when_I_see_it

> And the truly bad actors are using throwaway accounts anyway

Have you tried to make a throwaway Google account lately? It's kind of useless to do so. Not so much in the sign-up, but due to the fact that accounts with no history and no clear ties to a physical identity have nothing "vouching" for them, so constantly trip the "Are you a robot?" checks. (And, I imagine, {email, comments, Google Docs shares, etc.} from such accounts also are invisibly thresholded lower on any spam filters they are run through. This is why people write viruses to take over people's existing Google accounts—accounts without "reputation" are kind of worthless for doing anything public-visible.)

I agree. Much like the email reputation systems that evolved over time, I think other companies (like Google/FB/etc.) will eventually create hidden "reputation scores". For brand-new (maybe throwaway) accounts, those scores will harm the ability to participate in many places. After cultivating reputation (sending emails that don't look sketchy, talking to people who are deemed likely--by IP/demographics/click habits--to be your real-life peers), more participation in different communities will be allowed. Situations where reputation is insufficient for participation will be shadowbans more often than not.

This type of system solves a lot of the throwaway-for-purposes-of-evil-behavior problem, and may well be able to do that algorithmically, in lieu of human moderators. In return, it requires people to give up a lot of privacy/anonymity to participate. I don't think that's an inherently good or bad tradeoff, but it is definitely one people should be aware of.
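A minimal sketch of the accrual-plus-shadowban mechanism described above - purely hypothetical, with made-up signal weights and thresholds:

```python
class Account:
    """Hypothetical hidden reputation score gating participation."""

    def __init__(self) -> None:
        self.reputation = 0.0  # new/throwaway accounts start with nothing

    def record_signal(self, positive: bool, weight: float = 0.1) -> None:
        # Nudge reputation up for good signals (non-sketchy emails,
        # interaction with likely real-life peers) and down for bad ones,
        # clamped to [0, 1].
        delta = weight if positive else -weight
        self.reputation = min(1.0, max(0.0, self.reputation + delta))

    def post_is_visible(self, community_threshold: float) -> bool:
        # Below-threshold accounts are shadowbanned: posting "succeeds"
        # from their own perspective, but nobody else sees the post.
        return self.reputation >= community_threshold
```

Note the key property: the account is never told it failed the check, which is exactly what makes throwaway-account abuse expensive and what makes the privacy tradeoff invisible to the user.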

It’s important to note they’re ceding a fight they never even seriously attempted to engage in where comments and content are concerned. As with all things Google and YT, if it can’t be badly automated, they’d rather kill it than set the precedent of doing anything useful. The short term result will be attention moving on. The long term is building up pressure that will probably result in major regulation. I wouldn’t be shocked if they’re digging a grave in the shape of “They’re too big to be responsible, so break them up.”

If your company has a platform that is beyond your capacity to manage responsibily, then your platform needs to shrink, or be chopped up.

>a fight they never even seriously attempted to engage in //

Google+ started with a requirement for real names, didn't it? No adopted identities. They stuck with it for quite some time.

IIRC it did largely cure YouTube comments. But, it damaged engagement, so ...

Yeah, this whole "Youtube isn't even trying" is a massive survivor bias/fallacy because it ignores all the failed moderation attempts that aren't visible.

It's less that nobody at Youtube tried to solve the problem and more that it was never in the interests of the bottom line to solve it in a nuanced way.

For Youtube, these kinds of comments threaten to alienate advertisers. Comments have only a minuscule impact on viewer retention and engagement, so if the actual business is threatened by comments, you just get rid of the comment system.

Saying it's not a "business" priority is the same as saying that Youtube isn't trying, just with more words. It's also subject to the same survivor bias because we don't know the "business" investments into community and moderation.

Exactly, they want to have their cake and eat it too. Low overhead, automated everything, anonymous comments, and respectability to draw ad revenue and avoid sanctions. Those are mutually exclusive goals on a large platform, and I don’t look forward to seeing hamhanded legislation with the inevitable cronyism and political infighting “solve” the problem either.

Like the big game companies and their kiddie casino business model, it will eventually be slapped down if they can’t control themselves first.

Killed a lot of other things too. And plenty of people will post hate speech under their own name. Even in the byline of a national newspaper.

What's even more amazing is that they're willing to do it in the bylines of local newspapers, where there's a non-zero chance of you running into that person in your day-to-day activities.

> IIRC it did largely cure YouTube comments.

Did it? I don't remember that being true.

"If your company has a platform that is beyond your capacity to manage responsibily, then your platform needs to shrink, or be chopped up. "

This is how I feel about fake news too. It's not exactly what you said, but if your business can't afford to moderate what spews out of it, then adjust your business or shut down.

>In a company that knows more about you than you can possibly imagine, and more about automated sentiment analysis than anyone in the world, they couldn’t algorithmically determine who should be allowed to post on certain subsets of videos, or devise a system they thought was worth deploying to ensure comments meet a basic level of decency.

Those solutions are not perfect, and every time Google implemented them they caused collateral damage or even major changes to the whole platform (see the demonetization situation). I believe that they refrain from using them unless pressured to, because fixing a problem that is not known to the general public isn't worth the negative PR, complaints, investigations, and the damage to the stability of the platform.

> In a company that knows more about you than you can possibly imagine,

Maybe the problem is that they don't actually know that much about us.

Since one of the other threads (which one, I cannot remember right now -- one of the adtech ones I think) brought up how much supremely accurate location data Google or FB etc knows about any given user, I have been thinking about how much the megacorps actually know, you know, about us.

If someone knows that I go to the grocery store two or three times per week, do they also know that I detest going to the grocery store? I do it for a reason, of course, but e.g. showing me ads for grocery stores is likely to create negative feelings in me.

So, someone might know I go there a lot, but that doesn't mean they know enough about me to make my world better in any useful way.

I don't doubt that some companies know a lot of objective facts about me -- where I go, what I spend money on, with whom I associate -- but I would be surprised if any of them know much about me, about who I am, about what I really like and dislike, about what motivates me, about what I care about, et cetera.

Maybe they know more than I can possibly imagine; maybe they know less.

Maybe that's not at all why moderating comments programmatically is so difficult. I certainly have no idea. But it makes me wonder.

At Google/YouTube scale, something will always fall through the cracks, and someone will be there to be outraged.

Will you stop using YouTube because of this? No. So Google doesn't lose anything and gains the ability to say they're "fighting toxicity".

In reality, if they wanted to fix comments, they could easily do it without any high-tech stuff, through basic UI redesign.

It's the quintessential broken window effect. The comment section design is horrible for actual discussions. People who have something intelligent to say simply don't bother. On the other hand, anyone who wants a place to put some hostile nonsense with high visibility gets exactly that. The behavior gets normalized and the ratio of hostile nonsense keeps itself at a high level.

> In reality, if they wanted to fix comments, they could easily do it without any high-tech stuff, through basic UI redesign.

Do tell.

They won’t. You can’t fix it through UX or algorithms. Honestly, why allow comments on kids' videos at all? What value do they deliver over the risk of abuse? None.

This is a step in the right direction for google.

What are their other options? Imagine they could, with great accuracy, predict whether somebody is going to leave a good comment or a bad one. Do they expose that to the commenter, saying they've been blocked from commenting because the algorithm thinks they're a shitty person? Imagine the optics of that...

The other option is the shadowban, where toxic commenters are hidden without notice. It sounds like a good idea, but just about led to a user revolt when Reddit tried it.

Simply turning off comment sections that have the potential to become toxic may not be the most technologically interesting solution, but it's the safest PR move.

If the structure of the service offering exceeds the limits of available technology, then the structure of the service is at fault and needs to change -- wishing away the bad people doesn't work, nor does pretending the issue doesn't exist (their current strategy) until advertisers are forced to threaten them.

The option available to them is clear, it's just not something anyone is willing to accept: connecting untrustworthy anonymous third parties to the bedrooms of 5 year olds cannot be done safely within the limits of existing technology.


This reply is needlessly personal, are you materially affected by this issue somehow?

> Would you ban beaches as well, because creeps might watch others on a beach?

No, but nor would I pretend people can shower and change without cubicles, or permit the nearby village to drown during the first storm because I deluded myself in the belief there was no need for a seawall.

There is no rationality to be found in an absolute "save the children at all costs" position, much as there is none in "freedom of speech at all costs". This issue like so many more is nuanced, and I'm surprised people here are so easily willing to deny its possibility, but prefer to get worked up by its very existence or the plain reality nuance is required to solve the problem. Progress is never due to people like that.

This is true, as it doesn't need to be a matter regarding only the creeps. I was thinking sociopolitically, where adults related or not make the decision to use minors as props, whether for a thing as big as ad campaigns or as small as for social brownie points. As a grownup, if someone were to post my image without my express permission I could pull some legality in my favor. Children generally don't have the same resources, lacking their own guardianship. I'm just saying that until they can legally make such decisions for themselves, privacy should be the standard, to save them from embarrassment or abuse. We must all decide for ourselves what to share of ourselves.

>This reply is needlessly personal, are you materially affected by this issue somehow?

No, but you're saying that if we can't limit people from posting bad things then we should remove or change the ability to post. Take that same idea into real life. It's impossible to stop people from saying and doing things without excessively infringing on their freedoms and as a result people say and do bad things. You're arguing that we should excessively infringe on their freedoms to prevent that from happening. This leads me to believe that you are against freedom of expression and speech.

>There is no rationality to be found in an absolute "save the children at all costs" position, much as there is none in "freedom of speech at all costs".

But you didn't express nuance. You categorically said that if a service can't do it, then the service needs to change. If you want to paint a picture of nuance, then express nuance.

And I would argue that freedom of speech is a necessity for a free society, meaning that it is almost "at all costs". I've seen what the Soviet era did with people. I don't want to live in such a society.

> You categorically said that if a service can't do it, then the service needs to change

Would you categorically state that if the service can't do it, the service should continue in its present form?

Walls are some of the oldest inventions of civilization, and YouTube currently lacks /any/ walls. I'm arguing for some walls in the right places, you're arguing for no walls whatsoever. We have a difference of opinion, it's fine.

Kids lack the same free speech rights as adults, who don't have to live under a guardian's authority. If you have open floodgates there will invariably be contradictions and conflict, when a little bit of Reason could avoid much of that. Reason along the lines that the owners of even the biggest online platforms are not themselves magically entitled to all data.

I'm not trying to be prickly here, but I've thought quite a lot about freedom of speech, and I believe that for it to work coherently, it cannot unto itself guarantee an audience, as that would still be mandating opinion, only in the other direction. It can provide legal protections, but not protections from social ramifications, as again, this would be mandating opinion one way or the other. We can live in a free society without that freedom meaning I can help myself to my neighbor's wife. THAT is nuance.

Let's say they are able to make an AI that detects toxic comments.

They can even make it relatively accurate, with 1 error out of 1000 detections.

Sooner or later they will have a false positive that gets a lot of bad publicity as censorship or a false negative on somebody truly horrible.

Just cutting all comments on kids' videos seems like the only option that will allow them to give the impression that they are taking the problem seriously.

Just like they could not afford to have Google Photos label dark-skinned people as gorillas 1 time out of 100,000 photos. Better to just remove that label entirely.
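The back-of-the-envelope math shows why even a 0.1% error rate is untenable at platform scale. The volume and flag-rate figures below are made-up assumptions, not YouTube's actual numbers:

```python
# Rough arithmetic: a "relatively accurate" classifier still produces
# a constant stream of incidents at platform scale.
daily_comments = 200_000_000  # hypothetical platform-wide daily volume
flag_rate = 0.02              # assume 2% of comments get flagged
error_rate = 1 / 1000         # "1 error out of 1000 detections"

flagged = daily_comments * flag_rate   # 4,000,000 flags per day
errors = flagged * error_rate          # wrong calls per day
print(f"{errors:,.0f} mistaken moderation decisions per day")  # 4,000
```

Thousands of wrong calls every single day, and each one is a potential "YouTube censored me" story or a missed abuser. At that rate, some of them will go viral no matter how good the average is.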

Algorithmic contextual sentiment analysis is still, very much, an open question. Not even the best data and scientists in the world can get all that close to inferring the context clues that the human brain is capable of.

I think this is indicative of their actual ability to meaningfully use this data. The hype around their algorithms and machine learning paints a picture that doesn't match reality.

Google might not be ready to make its own social credit scoring system public just yet...

Google is really a shell of its former self... hell, search doesn't even function correctly for me in Gmail of all things. Google's hands-off algorithmic approach to moderation and support has repeatedly been shown to fail and to be easily exploited by nefarious actors. I'd like to see them hire some actual people for once; I think they have the budget for it.

are you by any chance pineapple clock

My guess is they decided it would take them a little while to do it algorithmically and would thus lose even more advertisers, so they bit the bullet and disabled comments. I suspect they're working hard on a different solution.

Note this is ONLY for videos featuring minors, not general videos. Really big difference, I think, and it doesn't mean they're giving up on all comments. It really does seem like a harder problem than general toxic videos. From the reporting, it seemed like the whole thing was just comments linking to other videos (not sure anywhere had examples), which is perfectly innocent in itself; the line between that and being part of this pedo comment ring is a matter of exactly where in the video the link leads.

Google probably could devise such an algorithm or system, but rather chose the cheapest solution that required the least amount of human intervention. That shouldn't surprise anyone, as their goal is maximizing ROI from ad revenue.

I think this says more about Google than anything. Google is the new "too big to fail" ... they can't seem to do anything right lately.

You think they're doing this manually? There's probably some human supervision, but this is actually a failure of their algorithms.

SlateStarCodex's recent post on a similar topic was illuminating [1]

> It’s very easy to remove spam, bots, racial slurs, low-effort trolls, and abuse. I do it single-handedly on this blog’s 2000+ weekly comments. r/slatestarcodex’s volunteer team of six moderators did it every day on the CW Thread, and you can scroll through week after week of multiple-thousand-post culture war thread and see how thorough a job they did.

> But once you remove all those things, you’re left with people honestly and civilly arguing for their opinions. And that’s the scariest thing of all.

> Some people think society should tolerate pedophilia, are obsessed with this, and can rattle off a laundry list of studies that they say justify their opinion. Some people think police officers are enforcers of oppression and this makes them valid targets for violence. Some people think immigrants are destroying the cultural cohesion necessary for a free and prosperous country. Some people think transwomen are a tool of the patriarchy trying to appropriate female spaces. Some people think Charles Murray and The Bell Curve were right about everything. Some people think Islam represents an existential threat to the West. Some people think women are biologically less likely to be good at or interested in technology. Some people think men are biologically more violent and dangerous to children. Some people just really worry a lot about the Freemasons.


> The thing about an online comment section is that the guy who really likes pedophilia is going to start posting on every thread about sexual minorities “I’m glad those sexual minorities have their rights! Now it’s time to start arguing for pedophile rights!” followed by a ten thousand word manifesto. This person won’t use any racial slurs, won’t be a bot, and can probably reach the same standards of politeness and reasonable-soundingness as anyone else. Any fair moderation policy won’t provide the moderator with any excuse to delete him. But it will be very embarrassing for the New York Times to have anybody who visits their website see pro-pedophilia manifestos a bunch of the time.

> Every Twitter influencer who wants to profit off of outrage culture is going to be posting 24-7 about how the New York Times endorses pedophilia. Breitbart or some other group that doesn’t like the Times for some reason will publish article after article on New York Times‘ secret pro-pedophile agenda. Allowing any aspect of your brand to come anywhere near something unpopular and taboo is like a giant Christmas present for people who hate you, people who hate everybody and will take whatever targets of opportunity present themselves, and a thousand self-appointed moral crusaders and protectors of the public virtue. It doesn’t matter if taboo material makes up 1% of your comment section; it will inevitably make up 100% of what people hear about your comment section and then of what people think is in your comment section. Finally, it will make up 100% of what people associate with you and your brand. The Chinese Robber Fallacy is a harsh master; all you need is a tiny number of cringeworthy comments, and your political enemies, power-hungry opportunists, and 4channers just in it for the lulz can convince everyone that your entire brand is about being pro-pedophile, catering to the pedophilia demographic, and providing a platform for pedophile supporters. And if you ban the pedophiles, they’ll do the same thing for the next-most-offensive opinion in your comments, and then the next-most-offensive, until you’ve censored everything except “Our benevolent leadership really is doing a great job today, aren’t they?” and the comment section becomes a mockery of its original goal.

[1] https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

This is a company that recommends Nazi videos to you just for watching anything political.

I think they gave up on ethics and the greater fight for what’s right long ago.


>It's ironic, because the left, not the right, has spent nearly the last 50 years promoting pedophilia.

... What? Your evidence is that Salon.com had one article about people who specifically avoid acting on their urges?

>But what is kind of shocking is how anti-free-speech so many on this site are. I'm old enough to remember the idealistic - although rather fatuous in hindsight - early days of the "internet culture" which promoted John Perry Barlow's "A Declaration of the Independence of Cyberspace" [0] and John Gilmore's sentiment that "The Net interprets censorship as damage and routes around it." [1]

The difference is that many of us have seen how bad groups have benefited from the current state of affairs since then. It was easy to be idealistic about free speech on the internet back when it was almost solely populated by academic types. But the world changed, and we don't have to be dogmatic about old ideals.



>>When Barlow said “the fact remains that there is not much one can do about bad behavior online except to take faith that the vast majority of what goes on there is not bad behavior,” his position was that we should accept the current state of affairs because there is literally no room for improvement. [...] In my opinion, Barlow’s opinions on online behavior, given his standing and influence were irresponsible. [...] Saying “we can do nothing” is like saying it’s not worth having laws or standards because we can’t achieve perfection.

>and "racist" sentiments, are quite popular in America, Europe, and in all countries among all human populations on earth.

Just because it might be natural or common doesn't make it acceptable and worth spreading.

>many of us have seen how bad groups have benefited from the current state of affairs since then. It was easy to be idealistic about free speech on the internet back when it was almost solely populated by academic types. But the world changed, and we don't have to be dogmatic about old ideals.

Exactly my point. This is simply a minority political faction using their political and economic power to silence those they don't agree with, all the while hypocritically pretending it's about pedophilia, which they, more than anyone, have been attempting to mainstream.

>Just because it might be natural or common doesn't make it acceptable and worth spreading.

Unacceptable - to you - you are simply asserting moral superiority. Why should anyone accept your assertion? The left certainly has no objective definition of "racist" which is why certain kinds of "racist" speech are acceptable to leftists. Again, this is the point. You, Google, Facebook, the SPLC and the ADL are simply declaring themselves the moral authority and censoring speech. The pretense at morality is just that, a pretense. It's just a display of raw political and economic power, couched - as naked displays of power typically are - in the language of morality. Many people find your ideas unacceptable and not worth spreading.

> Your evidence is that Salon.com had one article about people who specifically avoid acting on their urges?

That is called "an example" and one could give 100 more from the last 50 years, for instance, the German Green party chairman, who admitted in his own book to molesting children at his leftist school as part of "sexual liberation." But I obviously can't give any more, because YCombinator doesn't allow expressions of opposition to the Silicon Valley/Democratic party political establishment, hence the removal of my comment.


I extensively watch music videos and live music performances on YouTube, and the comments are fun to read. Not sure they are completely necessary, but I've learned some interesting things and been turned on to good bands from YT comments!

I watch lots of hobbyist how-to videos and the comments are usually pretty nice. At the very least they are some human engagement from the viewers to the author, without which making/posting videos would seem pretty lonely.

Could be a function of what you watch. I frequent cricket/soccer/indian music videos which match your experience. Other things like VASAviation are filled with awesome comments.

This very much depends on what part of YouTube you tend to be in. Much like any online massively-used portal, different segments of videos will attract vastly different users and types of comments. There is somewhat of an over-generalization of YouTube comments, where the description of toxic comments seems more accurately applied to truly /viral/ videos: those that have broken beyond normal segment boundaries, and where the 'best' comments are perceived to be those that draw the greatest reaction, as opposed to genuine commentary on the video itself.

I'm surprised they don't just disable comments for viral videos, too.

Now that I think about it, imagine the equivalent of that for other services. For example, imagine if all of Reddit's "default" subreddits (the ones that make up the front page when you're not logged in) had no comments attached to posts. To get comments, you'd have to opt into a community. (Which might very well just be a post-for-post mirror of a "default" community, but with comments enabled.)

Seems like it'd be kind of... nice?

I've found that comment quality is pretty reliably proportional to the specificity of the audience niche. I also have the hunch that any meaningful algorithmic comment moderation would have to approach being a general AI. Might be room for advancement in machine-assisted moderation, though.

What? Is it 2006 again? I mostly watch photography videos and vines/weird videos, and the comments sections are often fun and sometimes informative.

Low quality channels have low quality comments. High quality channels have high quality comments. I feel a bit embarrassed for people who claim they've never seen a good comment on youtube.

Here are two examples I came across a few minutes ago:

> Grady, to gain a variable flow control, could you use the horizontal angle of a folded weir? If the point of the folded weir was say 5 degrees higher than the outer sides, a slow flowing river would only flow over the lowest parts of the crest. As the flow increased, more of the weir's crest is used by the water. This would effectively self regulate the weir's geometry.


> Such structures exist and they're called compound weirs. These structures come in various cross-section geometries which can be tailored to provide better control of water levels under various discharge rates. The structures discussed with a fixed crest height, also have a fixed relationship between the upstream water level and discharge capacity.

I think that's an interesting exchange, don't you? Nobody is tossing around profanities, and neither claimed the earth is flat, called the other a 12 year old, or anything like that. Insightful and constructive comments are common on decent channels, and non-existent on trash channels. That's not really a reflection of how YouTube works, but probably a symptom of something more fundamental about human nature.

It’s rare, but they do happen. I’d say less than 1% are insightful and 1/2 of those are more just snark.

Most channels that I find worthy of subscription have decent comments. Informed or funny or intelligent and also with decency.

There are a few channels -- Ben Krasnow's 'Applied Science' comes to mind -- where the commenters tend to be well-informed and supportive.

But yes, in general YouTube commenters are enough to make one question the long-held conventional wisdom that a nuclear war would be a bad thing. I don't envy the people at Google who have to support and maintain the commenting system on YouTube.

> The fact that Google is ceding the fight against toxic comments on YouTube is actually pretty shocking.

Not shocking at all. It's a typical "welcome to the Internet" lesson that somebody outside of "computer people" culture eventually gets.

The Internet is the future of ads; it is also the end of ads.

Ads are made by rich, well-fed people, and mostly targeted at people on the other side of the social ladder.

TV was perfect for that: it was one-sided.

Now, the "reflux" that comes back from the Internet culture is hurting them. When some random nobody can pour dirt and bile on videos of pierre cardin toating, avocado munching "successful people" it destroys that image.

Google really should put it that way: either you agree to have your ads run on every type of scatological content, with your image covered by a metre of dirt, or you don't advertise at all.

They can do that.
