On YouTube, a Network of Paedophiles Is Hiding in Plain Sight (wired.co.uk)
92 points by perseusprime11 26 days ago | 131 comments



> Enter “twister girl” and autocomplete suggests “little girl twister in skirt”

Jesus…

Credit where credit is due, I learned about this from MattsWhatItIs:

https://youtu.be/O13G5A5w5P0

Which is similar to the recent videos PayMoneyWubby has been exposing:

https://youtu.be/5PmphkNDosg and https://youtu.be/M78rlxEMBxk

It reminds me of a darker version of https://en.m.wikipedia.org/wiki/Elsagate.


This issue was brought to YouTube's attention by another content creator. They say they are working to disable comments on some videos and delete some accounts. It's tough when kids upload innocent videos and it is the comment sections that become filled with predators.

I do want to say this MattsWhatItIs guy appears to be motivated by attention: he ignored the fact that YouTube was already looking into this and continues to push for people to go after advertisers.

https://www.youtube.com/watch?v=aLsYQYHHqoM


People likely feel that YouTube is not doing (or spending) anywhere near enough on the problem. Particularly because many of the gripes with YouTube have deliberately been made worse, yet the situation with stuff like creepy Elsa videos (and by extension, creepy comments and suspected pedophile activity) has not improved for years.

The 'adpocalypse' showed that the best way to enact any meaningful change on YouTube is to get the advertisers riled up. So that's the lever they're going to focus on.

I think it's always going to be a double-edged sword at best.


It's convenient that simply disabling comments allows YouTube to continue monetizing these videos while pretending pedos aren't their main audience.


Frankly, getting advertisers involved is probably the best way to get actual meaningful action taken.


This is almost exactly like reddit ~8 years ago. It's amazing how often we keep repeating ourselves.


Care to elaborate? I'm a youngster.


I run ads, and for the last year or two I have systematically removed any placements involving YouTube channels related to children - anything with toys, games, kids, etc. I have done this because, after demonetizing a number of videos, Google apparently decided that children's content was ideal for advertisers; it was safe.

This was wrong, but clearly nobody in their ads department actually used the targeting systems.

There is no way for L'Oreal to automatically remove their ads from channels with disturbing comments. They would have to individually find and exclude these placements and build their lists like I have, and I believe they probably have underpaid and either overworked or incompetent people running their targeting, so that's unlikely to happen.

To be honest I believe ads should not be shown on children's youtube channels at all. Further, rather than reporting a video, I'd like channel creators to be able to report comments and get people banned from commenting on the site as a whole.


But they can figure out when a video has copyrighted music at 1% volume in the background.


Because that is a trivial task: the question "is there licensed music in the track?" is binary and really easy to answer given a large enough number of fingerprints.
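
For what it's worth, the "binary" part is easy to see in code. A toy sketch (not YouTube's actual system; chunk_hashes stands in for a real perceptual audio hash, and the threshold is made up):

    # Content-ID-style matching as set membership against a database of
    # licensed-track fingerprints built offline.
    LICENSED_HASHES = set()  # hashes of every licensed track

    def chunk_hashes(audio: bytes, window: int = 4096) -> set:
        # Stand-in for a real perceptual hash over short audio windows.
        return {hash(audio[i:i + window]) for i in range(0, len(audio), window)}

    def has_licensed_music(audio: bytes, threshold: float = 0.3) -> bool:
        hashes = chunk_hashes(audio)
        if not hashes:
            return False
        # Binary answer: enough chunks match a known track, or they don't.
        return len(hashes & LICENSED_HASHES) / len(hashes) >= threshold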

Detecting if there is a child in the video is more difficult, but doable with current ML models. Now, however, determining why there is a child in the video - whether it is a family "fail" video or pedo material, for example - via AI is about as impossible as trying to distinguish between satire, hate speech and propaganda via AI. It's not possible at all, as AI will for the near future totally lack context.

This distinction will require humans, and that is not viable at all for fb, youtube, twitter & co, as it is a huge cost... the saving of which is offloaded onto society, though, in the form of e.g. undermined democracies or psychological trauma in survivors of sexual violence.


They already have an AI to detect these videos: the recommendation engine mentioned in the article. These videos aren't difficult to find.


The recommendation engine doesn't know what the content is or in what way it is similar, it just knows that people who view this video often view this other video. On the bright side the recommendation engine grouping all these videos means that once YouTube identifies one video as inappropriate they can easily use the recommendations to find and delete all the other videos that are also being viewed by the same pedos.
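
If so, here is a minimal sketch of how one confirmed-bad seed video could be expanded through that co-view data (the co_view map and every name here are assumptions, not a real YouTube API):

    from collections import deque

    # co_view: hypothetical adjacency map built from the "viewers of X also
    # watched Y" data the recommender already maintains.
    def cluster_from_seed(seed, co_view, max_size=10000):
        seen = {seed}
        queue = deque([seed])
        while queue and len(seen) < max_size:
            for neighbor in co_view.get(queue.popleft(), ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        # Candidates for review or comment-disabling, not blind deletion.
        return seen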


> On the bright side the recommendation engine grouping all these videos means that once YouTube identifies one video as inappropriate they can easily use the recommendations to find and delete all the other videos that are also being viewed by the same pedos.

That is exactly the point: they have this ability and are not using it. Not even to disable comments.


The "recommendation engine" is a stupid "people searching for x clicked on y" thing. Not intelligent at all.

And banning keywords won't help, that's just whack-a-mole'ing.


Some of the videos were up for months, and youtube did nothing to remove them after being asked.


> Now, however, determining why there is a child in the video

It doesn't matter why the child is in the video. The pedophile only cares that there is a child. Videos uploaded with children in them will be exploited, regardless of whether they are legitimate.


Banning the public from uploading videos featuring children, or worse, deleting those already there, will be extremely unpopular. And I'm not convinced it's necessary.


I honestly don't think you can do anything about pedophiles watching otherwise innocent videos of children.

The article claims that there is nudity (found via timestamp comments). Separate models could be used to score the video, where a child plus any other potentially sexual content (genitalia, nipples) withholds the video pending review. A withheld video could be appealed (for scenarios such as "breastfeeding tips" or what-have-you), much like copyright strikes, which have an existing process.

Obviously the model might have problems discerning between a 17- and an 18-year-old, but it would catch the most egregious content.
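
The gating itself could be as simple as the sketch below, assuming the two per-video model scores already exist (names and thresholds are made up):

    def should_withhold(child_score, sexual_score,
                        child_thresh=0.9, sexual_thresh=0.5):
        # Withhold only when both (hypothetical) models fire; a withheld
        # video then enters an appeal queue, like copyright strikes do.
        return child_score >= child_thresh and sexual_score >= sexual_thresh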


How about disabling comments, proactively deleting illegal content and not helping people find more non-illegal but potentially exploitative content?

I feel like you have missed the point. YouTube is actively helping people find this content and making money off of it.


In this case we will have to ban children's movies (Harry Potter?) and TV channels, which sounds too harsh.

As long as the child is not performing some sexually suggestive stuff in the YouTube video ("popsicle challenge", wtf), it is probably OK.


So we should just ban all videos of children then?


This is not true. ML can detect kids in videos, and if there were any financial incentive to do so, YouTube could follow their recommendation graph to know exactly what type of content to ban or restrict commenting on.

The reason why they find licensed music in videos is because they have a financial (profit) motive for doing so: find a licensed track, and get a few fractions of a penny every time someone watches that video. There is no financial incentive to block CP, so they don't bother. This is a real shame.


> ML can detect kids in videos

It can't detect context, however.

>The reason why they find licensed music in videos is because

It's technically easier to do so. ML is still new and it still makes mistakes; this is why people complain about Content ID taking down videos that are "fair use".

Explain to me, Mr Expert, how an ML algorithm can understand the nuance of the law and interpret it flawlessly like a human would (say, in a court of law).

Unless you're saying these cases no longer require humans, courts and judges, and ML is at a point where it's _flawless_ and accurate 100% of the time.

It's a shame you're so technically weak-minded; I thought the Hacker News community would be educated in this regard.

Please go read about how ML works before making such comments in the future, they make you look really stupid.


>> ML can detect kids in videos

> It can't detect context, however.

You don't actually need to, at least initially. You only need to detect kids, and when comments on a video where the computer-vision algorithm detected kids in the frame accumulate some threshold of reports that rises to the level where a human should take a look, the human can flag the video as containing elements of CP, which would kick in the following (a rough sketch of this pipeline is below):

- Deeper ML analysis of the dependency graph of commenters and related videos.
- Blacklisting or law-enforcement notification for commenters who are violating the law.
- Training data that can be used to train more advanced algorithms.
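
A rough sketch of that escalation step; every name and threshold here is an assumption, not anything YouTube documents:

    REPORT_THRESHOLD = 10  # made-up number

    def maybe_escalate(video, reports, child_detector_score, review_queue):
        # The cheap computer-vision pass runs first; context is left to humans.
        if child_detector_score < 0.9:
            return
        # Count distinct reporters so one account can't trigger review alone.
        if len({r.reporter_id for r in reports}) >= REPORT_THRESHOLD:
            # Humans decide; their verdicts can also become training data.
            review_queue.append(video)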

Because there is no profit motive to do so, like there is with music licensing, they won't do it.

> It's technically easier to do so.

I'm not talking about determining context with ML. That is a hard problem, of course. But identifying "child human" with a computer-vision algorithm is very simple; human moderators can then initiate the dependency-graph traversal.

> Please go read about how ML works before making such comments in the future, they make you look really stupid.

No need for the ad hominem. I'm not an ML expert, but I understand enough of the basics to know what is possible.


> ML can detect kids in videos

Even dental scans and x-rays can't give you a reliable chronological age for somebody [0].

In that context, I consider it extremely doubtful that some ML algorithm solved this whole problem just using pictures/visual recordings; that sounds a bit too much like magic/wishful thinking.

[0] https://www.newscientist.com/article/mg21428644-300-with-no-...


The quick & dirty solution here would be to disable comments if children are detected


The problem with that is YouTube's awful history of and approach to false positives.

People would be a lot more willing to accept blanket and risky experiments to solve these problems if there was a viable and reliable reconciliatory/redemptive path for when those systems get it wrong.

For better or worse, YouTube hasn't focused on scaling that part of their system, so there will be apprehension and scepticism with any approach they take.


They could also use their own recommendation engine to flag videos for manual review.


While Content ID is another problem, I agree it is highly convenient that the same recommendation system that permits (and enables, and encourages!) this activity supposedly can't be reused internally to prevent it. They already pay contractors to filter through illegal images; in comparison this seems like... no pun intended, child's play.

The most horrid thought occurred to me while reading this: that somewhere within Google there may be a glossy (autogenerated or otherwise) report detailing exactly how many people are watching these videos and falling into this particular gradient of the recommendation model. Perhaps it was even read, and passed up as too difficult to extract value from, by their ads team. A company of this size (YouTube alone, not just Google) couldn't possibly be oblivious to this.


If they have that segment I doubt it would be difficult to extract value from them....

Just advertise Spy Cameras, Thailand Vacation Packages, and Annual Passes to Disneyland, no ?


I know, it's crazy. It's almost like the feasibility of a task is unrelated to the importance of a task.


This is yet another of those "Omg pedophiles everywhere!" hysteria cases where people are too busy being freaked out to actually look at the facts.

Ain't helping that the guy who "discovered" this claims it is some kind of "YouTube wormhole" you supposedly fall into and can never ever escape from. Even though he pretty much did everything to make the results come out as they did (fresh YouTube account, using a VPN, looking up content that's already considered "soft-core" by many users).

But the reality is that these videos show nothing illegal or really that out of the ordinary. The fact that some people derive sexual pleasure from them, is something that can never be stopped unless you introduce something akin to mass-scale thought/mind-control.

Because before it was YouTube videos or the Internet, it was plain old children's underwear catalogs printed on paper, and without these, or YouTube videos, those people would just do a Google picture search of "children's underwear" and get unlimited pictures of children in equally revealing poses as in those YouTube videos.

What to do about that? How to "fix" any of this? Particularly in an age where becoming a "YouTube personality/influencer" is increasingly peddled as some kind of viable choice of occupation for younger people and even children [0]?

[0] https://www.businessinsider.de/ryan-toysreview-7-year-old-ma...


First, there is, in point of fact, a network of pedophiles on YouTube as demonstrated by the article. Your comment is twisting it into "Omg pedophiles everywhere!" hyperbole for what reason?

The seriousness of even a single case of pedophilia or child exploitation cannot be overstated. Not only is it ethical and moral to take it very seriously, but it's also the logical action because children are our future and if they are healthy and happy then the human race is healthy and happy.

There is no fixing the problem, but we can mitigate it. The first step would be, apparently, to educate the public because comments like yours still exist which is deeply concerning. The second step is up for discussion of course, but perhaps it would be useful to ban children under the age of N from appearing in videos for more than T amount of time or perhaps ban altogether.


Can you elucidate what you find deeply concerning in the parent's comment?

I believe there is fixing, or strongly addressing, this harm: raise the standard of living, provide education and safe communication networks to children for identifying and reporting abuse (and importantly: feeling safe to do so), provide quality mental healthcare for those abused. Abuse often begets abuse: physical, verbal, and sexual.

I find your implication that "even a single case of pedophilia" is something to be damned morally and ethically disturbing (my interpretation being that you are implying a person who has the attraction, not a person who has acted on it). I believe that our incredible stigmatization of people who have pedophilic urges prevents them from getting the necessary counseling and help they need to not harm children. I believe that with a proper support network, they wouldn't harm children. Sadly, we seem to treat this criminally rather than medically.


Yet, I find it funny when people also say, in the same breath, "omg, governments using children as a scapegoat for censorship".

Which is it? If you think you can have a 100% success rate you are horribly, stupidly mistaken. You WILL get innocents caught in the crossfire, you WILL see videos removed that are okay, and people WILL cry censorship.

People are demanding a flawless system, that never makes mistakes, that has 100% accuracy to be developed and released in the next week, and yet, we need a "free and open internet" otherwise Reddit gets angry and puts up hundreds of "net neutrality" posts.

Oh, what's that, Article 13 wants upload filters? BOO!! Yet they may catch this type of content.

Nobody likes that idea because it'd "ruin" the internet.

Which is it? This problem can ONLY be fixed with upload filters, potential censorship and mistakes which will affect a number of legit videos.

That okay with you? People seem pretty mad at YouTube for their Content ID and DMCA handling; just check r/videos.

This would increase tenfold with any further automated system designed to attempt to filter certain categories of content.

People will have channels deleted and potentially their Youtube 'careers' ended by a machine. But r/videos hates this.


Problems and solutions have a relationship that is many to many.

A problem may or may not have a solution or many solutions. And solutions have a relationship not just to the problem they solve but also the problem or problems they create.

This fundamental meta-problem is a problem to which there is no solution. This is why we argue. This is why we fight. This... is why we politick.


Yes, because the children are the problem, we should ban them from videos.

As the GP points out, there is no real exploitation here. There are harassing and demeaning comments and a general toxic culture of objectification, but limiting the actions of children out of some desire to protect them from the nastiness in the real world does them no favors and denies them their self-determination.

The feminist movement has been dealing with these same issues affecting adults for decades. They have found that this sort of "protection" doesn't work and makes the problem worse. Children should be taught and supported in dealing with this behavior. We should empower them to bring the asshole commenters to light, shame them, and hold them responsible. There's no magic age when the harassment stops or becomes okay. The best we can do for our children is prepare them for life.


And of course, most likely, after 4-5 years you regret uploading cringy videos like this and just delete them.


The article didn't demonstrate any network; it essentially only shouted "Omg pedophiles everywhere!", as you put it.


Your position seems to boil down to suggesting since the problem cannot be fixed perfectly, nobody should even try, and meanwhile those who do try are "hysterical". In other words, simultaneously defeatist and offensive.

If you feel so hopeless about your impact on the world around you, please keep such depressing ideas to yourself


What exactly is the problem though? No children are exploited or abused, nor is any demand created for abuse or exploitation of children by any of this. So what then? You don't like what some people might think and do in the privacy of their home? Keep in mind, there is no evidence that they actually think and do anything you don't like; someone just claims they do. There is, however, plenty of evidence of pushing for censorship, policing and control using emotional manipulation. Like every child porn smear campaign ever.


>No children are exploited or abused...

Can you verify this 100%? You're absolutely certain that there isn't a single video on YouTube of a child where a grown up might've instructed them to do something that seemed innocent but the adult found sexual? I understand what you're getting at with the rest of your comment, but to sit there and say that not a single child on YouTube has been exploited or abused is really, really reaching and deserves a cited source.


You're asking someone to prove a negative, which is pretty much impossible.

But it does seem fair in response to ask if you can prove, with the same 100% accuracy, that any one of the children in the videos identified by the article is being exploited and/or abused?


>You're asking someone to prove a negative, which is pretty much impossible.

Fair, but why state it as fact if it's impossible to prove?

> But it does seem fair in response to ask if you can prove, with the same 100% accuracy, that any one of the children in the videos identified by the article is being exploited and/or abused?

Also a fair point, and I can provide at least two articles that addressed YouTube removing content involving the exploitation of children in the past. Granted, these aren't "the videos identified by the article", but they're strong evidence that, contrary to OP's claim, children have been exploited on the platform.

https://www.buzzfeednews.com/article/blakemontgomery/youtube...

https://www.buzzfeednews.com/article/charliewarzel/youtube-i...


What exactly is the harm behind beheading videos? It's not like anyone is being beheaded every time you watch one, right? Nobody pays attention to these videos ever, nobody has ever been radicalized by them. No 15-year-old British schoolgirl has run away to Syria and found herself pregnant and stateless because of them ( https://www.bbc.co.uk/news/uk-47301623 )

You could take the common line that if someone is driven to a compulsion through self-temptation, all responsibility lies with them. But if you encourage that compulsion knowing full well what it leads to, are you blameless?

I posit that by knowingly providing tooling that actively seeks out, recommends on behalf of, and encourages the behaviour of voyeuristic individuals interested in children, the provider of the tool is, in an absolutely concrete way, in part responsible for any child abuse that may result. When you're talking about a planetary-scale recommender, the scale of the viewership is terrifying -- and even if only a small fraction of those compulsions are acted upon, that's far, far too many.

edit: apparently this line of thinking is so repugnant to some that it doesn't even deserve a reply


> Your position seems to boil down to suggesting since the problem cannot be fixed perfectly, nobody should even try, and meanwhile those who do try are "hysterical".

I read the position as: "The problem can be mitigated, but we shouldn't go so far as to actually try to police people's internal thoughts and feelings. There are some who are so overcome by emotions with this issue, that they would even advocate going that far."


There’s a big difference between someone getting turned on by motorcycles and a two-sided marketplace of sexualized videos of minors.


Is it really a two-sided marketplace? Most of that content could easily be just aggregated from random social media accounts.

The sexualization only happens in the context of the viewer mentally sexualizing the content. The vast majority of people who see these videos, without any further context, would simply assume those are just family videos.

In that context, it is quite perverse to have some people constantly fantasize about how certain content could be "abused" for sexual pleasure by other individuals, and how best to go about denying them that pleasure. When in reality human sexuality is just generally fucked [0], and people can derive sexual pleasure from the weirdest of things, like naked feet in high heels.

Would you rather these people build secretive in-groups where they trade real hardcore material, supporting actual physical child abuse? Should we ban the depiction of animal violence in general because some people derive sexual pleasure from it?

I just see no practical solution to this that doesn't end up with a lot of quite arbitrary censoring rules, that would then need to be expanded. Case in point: Just do a Google image search on "children's underwear" and you will find plenty of similar stuff like in those videos.

Should that stuff also be all removed/deleted/censored? That's where the real slippery slope begins.

[0] https://www.psychologytoday.com/us/blog/in-excess/201605/cru...


Facts:

https://www.reddit.com/r/conspiracy/comments/ahzv21/importan...

https://www.theguardian.com/world/2010/jul/24/pentagon-us-st...

I mean -- I could give you literally dozens of "facts" -- so your comment is a strawman. It's not hysteria, and I am skeptical of anyone who denies abuse. As a victim of abuse, I really can't stand people who say such things.


> I mean -- I could give you literally dozens of "facts"

Facts are good. We can corroborate facts. Non-facts can eventually be debunked.

> It's not hysteria, and I am skeptical of anyone who denies abuse. As a victim of abuse, I really can't stand people who say such things.

As a victim of abuse, I still think everyone should exercise their skepticism, and not just victims and the outraged. If you advocate for skepticism, then you should not object to fair skepticism being directed at you as well. There is a great power in strong emotions and outrage, and with great power, comes great responsibility. Transparency is paramount. The best way forward is to avoid hysteria, and to go forward rationally.


There are no objective solutions to this kind of "problem". What one person thinks is OK might not be OK for another. What one person finds disturbing might not be disturbing for another.

It will forever be a battle between the two sides. The side who can force/convince the other will become the "solution".

The only thing you can do is fight for the side you believe in, whatever that side may be.


> What to do about that? How to "fix" any of this?

Ideas:

A. Auto take-down videos featuring minors that attract unusual amounts of sexual comments (a rough sketch of A and B follows the list)

B. Auto-ban accounts that are repeat offenders of (A)

C. Require all uploaders to have a real paper trail so they can be held accountable for what they upload
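
A crude sketch of how (A) and (B) might compose; the comment classifier, the fields and the thresholds are all assumptions:

    STRIKE_LIMIT = 2  # made-up: repeat offenders of (A) get banned under (B)

    def moderate(video, comments, uploader, is_sexual):
        # is_sexual(comment) is a hypothetical classifier; minors_detected
        # is assumed to come from an upstream vision model.
        if not video.minors_detected or not comments:
            return
        flagged = sum(1 for c in comments if is_sexual(c))
        if flagged / len(comments) > 0.10:  # "unusual amount"
            video.take_down()               # idea (A)
            uploader.strikes += 1
            if uploader.strikes >= STRIKE_LIMIT:
                uploader.ban()              # idea (B)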


Uhm, A and B don't really feel that well thought out.

The creator/uploader rarely has any influence over what people are commenting, yet you would punish them if they had repeat offenders commenting under their videos. A system like that would be very easy to abuse to get people's channels banned.

This whole system would also heavily depend on how you classify "sexual comments", working on the assumption that communities wouldn't adapt by using invented slang.

While C is just straight up anti-privacy without any direct advantages. I have no doubt Google/YouTube/law enforcement currently have the capabilities of tracking down anybody using these services without any paper trails, that's in no way part of the issue.


> yet you would punish them if they had repeat offenders commenting under their videos

You misunderstand. If the uploader keeps uploading videos of minors that garner mass sexual comments, the uploader gets banned.

Uploader uploads preteen girl deepthroating popsicle. Lots of pedophiles come along and express their pleasure in the comments with water squirting emoticons and such. Bam, video flagged and taken down. Uploader uploads preteen daughter doing handstands/nipslips. Pedophiles return to comments to express pleasure in their sick way. Uploader is now repeat offender and account is banned.

I don't see why this is a big problem. If it's a family video make it private.

ML can be used to train the algorithm on the latest slang.

> I have no doubt Google/YouTube/law enforcement currently have the capabilities of tracking down anybody using these services without any paper trails, that's in no way part of the issue.

Really? Why aren't they doing it? How do they track down people uploading via VPNs and such? Tying account to bank makes it much more difficult to evade law enforcement.


> You misunderstand. If the uploader keeps uploading videos of minors that garner mass sexual comments, the uploader gets banned.

I understood you perfectly fine, that's why I said this approach is easy to exploit by simply spamming the videos of some uploader you don't like with "sexual comments".

The qualifier of "uploads videos of minors" does not work because there is no kind of ML/AI that could reliably recognize children and their ages, or whether what they are doing is "sexualized" or not.

For that, you would need human moderation, and that's not just impractical, it's simply impossible, because more than an hour of content is uploaded every second [0]. People are uploading content to YouTube faster than any reasonable human workforce could ever hope to review it without generating a steadily growing massive backlog.

[0] http://www.everysecond.io/youtube


> I understood you perfectly fine, that's why I said this approach is easy to exploit by simply spamming the videos of some uploader you don't like with "sexual comments".

You'd need dozens of puppet accounts. Obviously you could take into consideration whether all the sexual comments are coming from one person/IP address.


[flagged]


As disgusting as the thoughts and feelings might be, we shouldn't be trying to regulate what people think and feel inside their own heads. That's a level of tyranny that would be a living hell. It's also corrosive to society to create a situation where everyone is paranoid about the "other" hiding in plain sight among us. This paranoia over internal mental states is precisely the kind of oppression which was directed against Jewish people and against homosexuals in the past. This is precisely what was called out in the "Have you no Decency?" moment where McCarthy's fall started.

In 2019, outrage mongering should automatically trigger skepticism -- especially if the subject matter is extra lurid and stigmatizing.


You think having an issue with pedophilia is just "othering"? Really?


No, it's not "just" othering. It's a particularly potent form of othering, because it's based on the most lurid, most instantly damaging, and most easily outrage inducing subject. It's potent and has the most potential for exploitation precisely because everyone agrees it's bad.

The problem is not having an issue with pedophilia. The problem is with bad actors exploiting the outrage, amplified by the potential for paranoia surrounding the issue. Bad actors in Salem in the 1690s exploited outrage and the potential for paranoia around witchcraft to take land from other people and take down rivals. Bad actors throughout history have leveled the charge of homosexuality, of having the "wrong" religion, of harboring the "wrong" politics, or of being from the "wrong" ethnic background to exploit outrage amplified by paranoia to take down rivals and to gain power through attention.

In 2019 most of those things no longer produce outrage like they did in the past, which is how things should be. Now, we are left with charges of racism and non-consensual sexual misconduct. Everyone agrees that those are bad. However, the potential for the outrage to be misused by bad actors is even greater as a result.

In 2019, the takeaway should be that great outrage should be met by great skepticism, and that important conclusions should be drawn in calmness and using evidence and logic.


> As disgusting as the thoughts and feelings might be, we shouldn't be trying to regulate what people think and feel inside their own heads. That's a level of tyranny that would be a living hell.

There is this thing called "criticism". Or "feedback". GP gave an example of it. If you cannot distinguish between that and "regulating what other people think and feel", anti-Semitism, homophobia and McCarthyism, then maybe you should rethink your own lack of perspective.


> There is this thing called "criticism". Or "feedback".

When people are starting to go around and imagine what's going on in other people's heads, it's gone too far beyond "criticism" or "feedback."

> If you cannot distinguish between that and "regulating what other people think and feel", anti-Semitism, homophobia and McCarthyism, then maybe you should rethink your own lack of perspective.

One horrible thing about being gay-bashed, and I have experienced this personally, is having other people talk about what's going on in your head as if they know better than you. There's something particularly dehumanizing about it. Such things also come up in sectarian bigotry and out-grouping. I've even experienced a bit of that from being raised Catholic, though that sort of thing was much rarer. Certainly McCarthyism involved a lot of this speculation about what's "really" going on in your head or other people's heads.

> then maybe you should rethink your own lack of perspective.

Having been the recipient of a lot of bigotry in my lifetime, one common pattern I've seen is people arrogantly, confidently, even self-righteously jumping to conclusions about what's going on in my head while being spectacularly wrong. I think that's quite a profound perspective. It's also a perspective which is the foundation of caution against people who use phrases like "maybe you should rethink your own lack of perspective" in such a gaslighting fashion.


Can you please explain what you find highly disturbing?


Some of these comments here are disgusting. I thought HN would be better than this.

YouTube could start requiring validation for uploaders before they're monetized or allowed comments. Require a credit card on file or something. That'd likely help cut down on spam too.


I believe you can only monetize once you have 1000 subscribers.

However, I think you should re-read the article. The problem isn't the videos uploaded, it's the commenters.


I meant YouTube themselves shouldn't monetize unless they're certain that the content is advertiser-friendly. They have no problem cutting legit YouTubers off from ads, so why can't they also kill ads for uploads from children? It's just been creating problems for them.


>It's just been creating problems for them.

It's also been creating a cash cow for them. I also think there is just too much content being uploaded for them to accurately determine whether a video is advertiser-friendly.


Another user had a forward-thinking root idea that I think would be a good option for parents who legitimately wanted to share their family videos/let their kids send their friends videos.

Private social-network video hosting. Not outside-searchable. You can host your own or maybe pay for another host.

Only accessible by invite, etc. Parent-configurable, permissions-based access for children (e.g. children can't send invites, only parents and approved members like elder children or trusted relatives).

Essentially, it wouldn't involve existing *tubes.


I share a couple of albums on iCloud for that with my family. Works great.


That's a pretty good option.

But I imagine there are a lot of people who don't (or prefer not to) use Apple products or services.

I could see there being a decent opening for something with the general usability of *tube, but with greater configurable control, and private by default. As well as there being a free option they can run themselves if they're somewhat technical.


Speaking of pedos hiding in plain sight: around 16 years ago there was a Verizon bust in Canada of some pedo networks. One such network of sites fled to a new web host that had not heard of the preceding events.

They came to their new host as a BSD consultancy firm. It wasn't until the new host stumbled upon their HTTP traffic that they noticed some strange domain names, and later found a whole network of message boards set up for "people" talking about grooming children for sex.

On the surface it was a legitimate business; under the surface it was horrible. They were kicked out of their new hosting, and looking at their domains a few weeks later, they had found a new home with PRQ. :(

This YouTube stuff is also sad because these kids are uploading their videos with no other intent than to have fun. And now someone has to explain to them that their videos will receive special treatment because of sickos online.

Maybe leave the videos alone and teach kids not to answer strangers online instead?


If you want plain sight, you need look no further than the Catholic Church.


BSD as in Berkeley Software Distribution?


Business. Sustainability. Development.

I think... "S", could be for Services.


Lines like this tell me the author has no idea how advertising works these days: "The videos are also being monetised by YouTube, including pre-roll adverts from Alfa Romeo, Fiat, Fortnite, Grammarly, L'Oreal, Maybelline, Metro: Exodus, Peloton and SingleMuslims.com."

These companies don't choose to advertise on these videos specifically; YouTube is just placing the ads there based upon the viewer, etc. I imagine there are some exceptions for high-profile YouTubers and companies that pay directly to display their ads before their videos, but the accounts in question don't likely fall into that category.


Companies do not get to play that card in their defence.

They know full well they are scatter-gunning and that some small percentage of their ads will end up next to/in front of awful content. They know this - they've seen it happen time and time again. They continue to do it because it's cheaper to do so and occasionally take some PR flak and then blame YouTube.

Ultimately - your ad, your problem. If you value your brand so poorly that you don't care where you advertise, this is what happens.


My issue is that authors often try to name and shame the companies as if they had some say. Companies should definitely reach out to Youtube or their marketing company to correct the problem but every single advertiser could have their ads shown before that video. That's how the ad network works.


They are at fault because they choose to pay youtube to be placed in front of videos designed for pedos.

There are plenty of advertisers that pay the content creators directly for in-video ads, there's already a model where you can choose exactly what kind of content you want your ad featured in.

I'll say it again, these advertisers are choosing where to advertise.


>They are at fault because they choose to pay youtube to be placed in front of videos designed for pedos.

See, this is factually incorrect. Companies are paying Youtube to place their ads in front of viewers that are most likely to convert/in their target audience. A company generally doesn't say "Place my video in front of youtuber X,Y,Z."


No, they're just saying "Place my ad in front of x, y, z, even if it's a pedophile, even if it's a child watching Elsa pooping on Spider-Man, or even if the videos flagrantly infringe on copyrights."


The video isn't a pedophile video... the commenters are. Perhaps you haven't understood the problem here. These are harmless videos where creepy people identify small scenes where you can see something.


And what about the point?


Oh, the writer's comments sound spot-on to me. The idea is to provoke a reaction from these companies against YouTube. If your article provokes a reaction, it's a win for the author.


Well, I can guarantee you that YouTube will solve this problem. Why? Because major advertisers are pausing all advertising on YouTube. The only leverage we have against the Internet giants is their money supply.


Stop monetizing kids.


As a result of a few close friends having children, I was recently introduced to certain channels run by parents who seem to have had children specifically to be used as youtube props. It's pretty sad.

It's probably safe to assume child "influencers" have an even rougher road ahead of them than many child actors did. I wonder how many or few of the parents who are making a living doing this are putting money their children earned them aside for their children? Based on the sizes of the houses and pools in the background rapidly increasing over time in a couple channels my niece watches, I'm guessing not many.


It's bad when parents get hooked on the cash flow they receive from exploiting their children online:

https://en.wikipedia.org/wiki/FamilyOFive


Also, stop advertising to kids.

Experts in psychology and whatnot, with huge budgets, work tirelessly to make your kid intensely crave something it does not need.


So, do you want them to ban all child videos?


Yes.

And keep parents from uploading them too:

https://www.theatlantic.com/technology/archive/2019/02/when-...


Many of the children are uploading the videos themselves.


Youtube should enforce their own terms of service then.

Can a minor legally grant copyright to another entity by accepting TOS?


Frankly, that might not be a bad idea. There are some interesting consent issues surrounding this sort of thing.

As an example of why, this much-criticized article where a mommy blogger's fourth grader discovered she'd been the subject of a bunch of columns: https://www.washingtonpost.com/lifestyle/2019/01/03/my-daugh...


Banning children from uploading to large public sources sounds extreme but it could actually create really good market conditions for host-your-own services. A group of friends could have their own video host.


You mean like a containerized version of *tube people accessed as a service run for them, or encouraging people to actively host and maintain their own?

Maybe an ideal service would offer both— paid private and shareable containerized hosting and an open-source or free-download host-your-own option.


Pedophilia has always been an excuse for taking away people's rights. Good thing Tor isn't targeted this time.


Possibly in response to this article coming out, YouTube also ramped up one of their other things that has popped up in the past: auto-flagging videos with "CP" in the video title, removing channels, and in at least one case completely locking associated Google accounts (without apparent human involvement).

This hit at least two of the highest-profile Pokemon Go Youtubers this past weekend with videos related to the "Combat Power" of virtual monsters.


The somewhat overt nature of this brings a question to mind:

We know that our social networks have been invaded by foreign adversaries to sow chaos and division. Is there any research as to whether they've also sought to undermine us via promoting the degradation of our moral values as a society?

At a minimum, it would reduce our moral standing in the world, and potentially produce a numbing to or higher tolerance for other types of moral deficiency in which they might seek to engage. In general, it would be demoralizing to see this constant stream of depravity in your own culture.

Seems it would be potentially very effective and consistent with other lines of attack we've seen.


Not much need to hypothesize, it's sourced to the point Wikipedia has an article on it: https://en.wikipedia.org/wiki/Active_measures

Wikipedia does not explicitly call out moral warfare, but if you keep Googling on that term you can find a lot of chatter about it, of various levels of veracity. It's only a baby step beyond what is very firmly established; I don't find it hard to believe.


Yeah, I'm familiar with active measures in general. What you referenced is the well-publicized piece I mentioned.

I'm specifically talking about the morality line of attack, and whether there is similarly well-documented/sourced discussion of same (that is, beyond random Googling for which results might also include disinformation).


There is stuff that passes my credibility filter as "probably true", and it comes up early in the web searches: specific statements about specific policies from first-hand sources, people in the relevant positions during the Cold War, or at least I've seen no challenges that they aren't who they say they are.

The people/entities who would publicize this and make it "officially official" have every motive in the world not to, so you're not going to find something like a 60 Minutes report on it or anything. (I think that for all the "officially official" sources casually commit lies of commission all the time, their true power is in the lies of omission.)

I have to apologize for the vagueness, but the HN gestalt would not particularly care to examine the details of this matter too closely. It would result in... nontrivial cognitive dissonance.


[flagged]


Generating outrage is still a cleaner business model than monetizing pre-teen crotch shots, which is the business model you seem to be defending here.


That's a pretty bold strawman argument. I don't think anyone is arguing that YouTube should be monetizing pre-teen crotch shots. The way I read it, OP is pointing out a definite trend in which other mainstream media like to jump on any situation that they can use to attack YouTube and YouTube's ad revenue.

Wired in particular has a weird fascination with criticizing YouTube and YouTubers. It's a pretty transparent attack by one medium that is losing attention against another that has it. You can see it to some extent in how other media formats report on Facebook as well. It's not just a war for attention, it's a war to reduce the ad budget that gets spent on the competing platform.

In particular, for Wired, making articles like "Pewdiepie is a nazi" or "YouTube is full of paedophiles" isn't just about reporting, or even clickbait to attract views. They also specifically mention ads in these types of articles as a way to hurt YouTube's bottom-line ad revenue by trying to scare advertisers away from advertising on YouTube.


See the thing is that YouTube (and every other company whose business model is “serve ads next to user-generated content”) will let pretty much anything go through unless it is something that will generate huge fines, or will get advertisers to walk away. They are aggressively amoral corporations whose only motivation is profit, and they will promote anything that results in people seeing a lot of ads, regardless of whether what the ads are being placed next to is healthy for the viewers or for society at large.

Saying “hey big advertisers do you know your ads are showing up on this hideous thing” is one of the few ways available to try to get some sense of moral accountability to one of these platforms. Reporting individual pieces of content or individual users does nothing, if it’s drawing paid views there will be another person doing the same thing that the system will be quite happy to start promoting until they, too, get reported.


Yeah. I've basically concluded that YouTube is going to have to either A) be replaced by something else with a lot less censorship, B) remove all advertisements and charge a $5/year subscription for access to more than 5 videos a month, or C) come up with some other non-advertising way of monetizing that I haven't thought of.

Case in point: as long as YouTube is beholden to the whims of advertisers, free speech is a dying thing on YouTube. The same goes for how they are handling copyright strikes, etc. They should be taking an aggressive stance to crack down on people who abuse the reporting system (e.g. ban an IP if it has X false reports; use the Content ID system to actually detect whether something is fair use and whitelist those cases, which is easy to detect because it's based literally on how long the borrowed portion of the video is), instead of just assuming every report is 100% legit.
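
To be concrete about the fair-use point, here is a toy version of that duration heuristic (thresholds and inputs are made up, and Content ID's real match data isn't public):

    def presume_fair_use(matched_seconds, video_seconds,
                         max_fraction=0.10, max_seconds=30):
        # Whitelist matches that are short both absolutely and relative to
        # the whole video; anything above the line goes to a human review,
        # not an automatic takedown. Illustrative only, not legal advice.
        if video_seconds <= 0:
            return False
        return (matched_seconds <= max_seconds
                and matched_seconds / video_seconds <= max_fraction)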


In other words, they should operate on a presumption of fair use.


I'm saying that YouTube is successfully blocking 99.9999% of cases but journalists make their living by that 0.0001% context and nuance be damned.


That's a false dichotomy brother.


Although I understand that the comments section under the videos can be problematic for various reasons, what is the intrinsic issue with people being turned on by kids' videos? It's probably the worst sexual preference to have in terms of societal acceptance, but as long as it doesn't cause any harm to others - i.e., it's limited to watching innocent YouTube videos - it seems perfectly fine.


This -- people need to differentiate between "I think this is disgusting, but there is nothing illegal about this" and "this is illegal". You can't have your cake and eat it, in that you can't have a (relatively) free society with freedom of speech and freedom of the press and not cater to pedophiles in some way. This is the (totally worth it) price we pay for not having systematic state censorship.


I think that our demonization of pedophilia will be seen by future generations as one of our cultural failings. It pushes people into secrecy and hiding. This isolation and alienation can keep borderline criminals from seeking and receiving the help they need before they abuse children.

We should also be careful about removing free speech from minors to alleviate our own sense of disgust.

However, YouTube has created, monetized and helped spread a network of people encouraging sexual exploitation and abuse of minors. That is incredibly messed up and people at YouTube should face criminal charges for this.


This a thousand times over. See my other posts in this thread for more on this perspective.


>You can't have your cake and eat it, in that you can't have a (relatively) free society with freedom of speech and freedom of the press and not cater to pedophiles in some way.

Maybe, but you can still criticize a platform for a monetization algorithm that optimizes for pedophiles, collecting the cash and looking the other way. Although I think that, in doing so, we shouldn't absolve parents of their own responsibility, since often adults are pimping their children out for the views.


YouTube basically lost the monetization battle when it legitimized the view that advertisers should have any sort of say in what content gets shown next to their ads. 10-15 years ago that concept didn't exist in the online advertising space: publishers were first-class citizens, advertisers didn't get any say in anything, and advertisers would get banned more often than publishers for not meeting publisher standards (which I don't even think are a thing anymore). Now it's completely switched. Catering to advertisers is rapidly killing YouTube, and a large portion of the web, as a viable platform for free expression.

You are right, but by criticizing said platform, you are giving advertisers more leverage and basically helping to even further whitewash an already heavily censored internet.

It's not just children. Look at the comments on any video and you'll see all kinds of abusive/sexual-harassment comments. It has always been like this, and it will always be like this, unless you want an algorithm to detect the "offensiveness" of your comments, which will result in certain topics being completely banned unintentionally (think China's handling of the term "Tiananmen Square massacre" if you want a working example of this). And the worst part is, it's essentially advertisers that get to decide what content is OK for you to view. I'd rather take the lack of censorship any day and pay a subscription, thank you.


It's true that a lot of people go literally insane when they think about pedophilia. Hate to break it to you guys, but just like there exist plenty of straight males who aren't rapists, many of the people with an attraction to minors never act on it.

While many of these YouTube comments are toxic and/or also constitute harassment (which I don't condone and nor does the law), there are plenty of examples of legitimately harmless expression (fanfiction, forum discussions and articles about attraction to minors and how to deal with it, etc.) that are the subject of unimaginable amounts of hate and censorship online, where said hate and censorship is being conducted essentially on the basis of sexual orientation. Some companies (notably Medium and Reddit) are protective of this speech, while others (notably YouTube/Google) are not.


> Some companies (notably medium and reddit) are protective of this speech

Not Reddit anymore, if the recent permaban+unban of /u/holofan4life for a drawing of a fictional teenager in a swimsuit [0] is any indication. Granted it's a drawing and not text, but still.

0: https://imgur.com/a/BZinZM6


That's unfortunate


> Hate to break it to you guys but just like there are straight males who aren't rapists, many of the people with these attractions never act on them.

Not all straight males are turned on by rape. The appropriate comparison is rape fetishists or maybe even the more general umbrella of sexual sadism.


See, your reaction is exactly my point. In your mind, the closest, most analogous thing to a pedophile is a sadist. All a pedophile is, definition-wise, is a minor-attracted person (MAP). The rapey connotation is societally bound. Just like straight males as a group don't like to be thought of as rapists or rape fetishists, MAPs don't like to be thought of as rapists or even as people who fantasize about rape.

The confusion where you are technically right comes in because, legally, there is no such thing as a consensual sexual relationship with a minor, so someone who fantasizes about a sexual relationship with a minor is technically fantasizing about something that is legally rape. However, what many of them fantasize about is absent many of the trappings of what we would typically call rape. I've worked with some via psych studies in college, and many fantasize about what they think of as a legitimate consensual relationship. That's a far cry from what society would have you believe, and that's what I'm trying to highlight.


> See, your reaction is exactly my point. In your mind, the closest, most analogous thing to a pedophile is a sadist. All a pedophile is, definition-wise, is a minor-attracted person (MAP)

And all a sexual sadist is is someone who derives sexual pleasure from the suffering of others. And a rape fetishist derives sexual pleasure from thoughts about rape. You're the one who seems to think that has anything at all to do with how they act, as opposed to merely what excites them.

Edit: I think I see your point now. Your argument is that just because straight males are attracted to females does not mean that they are going to act on those impulses in an illegal or immoral way.

I still think it is a flawed comparison, because one could argue that being attracted to women doesn't mean being aroused by non-consent. That is of course also true of being aroused by children; however, as you have pointed out, the issue is that there isn't a legal (or, for most definitions, moral) path to fulfillment of that desire.

The comparisons I mentioned fit better because they too cannot be legally or morally fulfilled. Instead, as is the case with pedophiles, they explore the fantasy via various fictional media, imagination, and role-play. Yet these two groups do not provoke this irrational equating with people who do fulfill their desires in illegal and immoral ways.


Yes, you are getting me now. The additional info I would give you is that many MAPs don't even think about the sexual component explicitly, focusing their fantasies more on a romantic relationship, and those that do think of sex would vehemently argue that the relationship or encounter they are imagining is consensual, even if that is legally impossible. Violent, forced-rape fantasies among this population are much rarer than society would have you believe. Unfortunately, even in academia, much of the research that would show this likewise gets suppressed due to our moral biases against anything related to this category.

So in other words, there are plenty of pedophiles who are disgusted by the act of raping children, which largely contradicts society's preconceived notions about pedophiles. There are also many otherwise normal people with varying pedophilic inclinations. It's clearly a spectrum, just like the Kinsey scale, but more complicated and multivariate. Some people can only form attractions to people their age, some can form attractions to adults and minors, some can ONLY form attractions to minors, and the age and gender requirements vary widely from person to person. I wouldn't be surprised at all if it turned out that a double-digit percentage of the general population has at least a slight inclination towards this stuff, but the research that needs to be done to figure all of that out will never get done in the current social climate.

People in general don't realize that many pedophiles know they are pedophiles as early as age 11 (let that sink in, and imagine growing up like that), and we know almost nothing about their early experiences and inclinations during childhood and teen years largely because they are too terrified to reveal themselves. Do pedophiles like even younger children when they are age 11? Do they like a particular age their entire life? All of these are open questions that academia and our society are frankly too scared to address.


Pedophilia isn't a sexual orientation.

edit, in reply to "on what do you base that notion?"

What makes you question that notion?

https://en.wikipedia.org/wiki/Pedophilia

> Pedophilia (alternatively spelt paedophilia) is a psychiatric disorder

https://en.wikipedia.org/wiki/Sexual_orientation

ctrl+f "pedo": no hits
ctrl+f "minor": no hits (that are relevant in this context)
ctrl+f "child": same as minor, however this:

> There is no substantive evidence to support the suggestion that early childhood experiences, parenting, sexual abuse, or other adverse life events influence sexual orientation.

We know that sexual abuse as a child is a factor in becoming a pedophile, no? So that also fits.

And, not addressed at you: how come basically just playing dictionary and stating what should be obvious earns downvotes? What is going on here? This isn't the first time I'm getting a quite pungent vibe around this subject, e.g. https://news.ycombinator.com/item?id=19168928

(oh nice, I got throttled, so even though I wrote this reply 2 minutes after that question, instead of replying to it I had to make it this edit instead)


If I were a betting man, I would say that in 50-100 years it will be recognized as one (obviously it will still be illegal to act on it, but it will at least be scientifically recognized). So far scientists have found it just as hard to categorize neurologically and genetically as sexual orientation. There are some themes, but nothing obvious enough to justify its current DSM categorization. Prejudice is what keeps this from happening and will likely delay things for a long, long time.


That's like saying a table is actually a chair, we just find it hard to categorize it as such. It's saying nothing.


One implication would be that it should be LGBTQP instead of LGBTQ, (or M for MAP, maybe). There would definitely be implications.

But to the general point: pedophile literally means a person who is sexually attracted to minors. Full stop. It doesn't mean child rapist, and there are minors who are pedophiles, who know they are as early as 11, and who have to deal with society's bullshit eating away at their conscience their whole lives even if they never do anything wrong. The reason for the negative perception is that you only end up hearing about the rapists, because the other ones are too busy not doing anything wrong and keeping their orientation a secret due to stigma.

The current DSM criteria are a result of this bias. Not long ago homosexuality and bisexuality had the same treatment in the DSM (listed as disorders), and transgender people are still classified as having a gender identity disorder. So being labeled a certain way in the DSM means nothing when it comes to sex, because it's basically political at this point, and tons of researchers and psychologists realize this but say nothing. Those that do often can't publish their studies because bias is so ingrained in every facet of society and academia.

Also I upvoted you, because it's a good (albeit annoying and ultimately wrong in my opinion) argument. On HN generalizations in the form of short comments always get downvoted. It's dumb but that's the way it is.


Actually, the prevailing research shows that there is also no statistically significant influence of early childhood abuse. There are plenty, plenty of pedos who weren't abused. This is another common mistake. All scientific attempts to classify pedophilia have had about as much success as those trying to classify homosexuality.


On what do you base that notion?


This begs the question: if it were your children, how would you feel about it?


Not GP, but if they were my children, because they are small and innocent, I would pay attention to what they do online. They are your children, keeping them safe is primarily your responsibility, not Google's.


I agree with your first sentiment, absolutely. Your second, however, grossly oversimplifies the situation IMHO—and there we diverge greatly.


I'm not absolving Google or any company, they have a duty (and the commercial incentive) to solve this situation, but I think people here are magnifying the problem.

YouTube makes recommendations similar to what you search for or watch, and serves ads based on some black-box analysis. Pedophiles just slipped through the cracks in the system. The cracks need to be sealed, problem solved... until these people find new cracks to exploit. We can't kill them or get rid of them, so this cycle is perpetual. Excessive moral outrage only contributes to selling clicks and justifying ill-thought-out laws.


I still disagree. The potential for the harm to get worse is there. I don't think there's anything excessive about making a public display of the issue.

A limited amount of regulation surrounding certain services could be a good thing, implementation-pending. In the same way that I'm glad engineers who design bridges have regulations to abide by in their projects.

The reasons those regulations [choose your field of application] arose in the first place was because there was new knowledge informing an improvement in the [application] and yet there were those willing to close their eyes to the newly obvious design flaws because it was more profitable to do so, and the market simply wasn't correcting for it as some would believe it naturally will.

I didn't know this was happening. Hell, I hadn't even considered it a possibility. I don't pay close enough attention to the social aspects of the site. I'm glad it was in the news.



