Credit where credit is due, I learned about this from MattsWhatItIs:
Which is similar to the recent videos PayMoneyWubby has been exposing:
https://youtu.be/5PmphkNDosg and https://youtu.be/M78rlxEMBxk
It reminds me of a darker version of https://en.m.wikipedia.org/wiki/Elsagate.
I do want to say this MattsWhatItIs guy appears to be motivated by attention; he ignored the fact that YouTube was already looking into this and continues to push for people to go after advertisers.
The 'adpocalypse' showed that the best way to enact any meaningful change on YouTube is to get the advertisers riled up. So that's the lever they're going to focus on.
I think it's always going to be a double-edged sword at best.
This was wrong, but clearly nobody in their ads department actually used the targeting systems.
There is no way for L'oreal to automatically remove their ads from channels with disturbing comments. They would have to individually find and exclude these placements and build their lists like I have, and I believe they probably have underpaid and either overworked or incompetent people running their targeting, so that's unlikely to happen.
To be honest I believe ads should not be shown on children's youtube channels at all. Further, rather than reporting a video, I'd like channel creators to be able to report comments and get people banned from commenting on the site as a whole.
Detecting whether there is a child in the video is more difficult, but doable with current ML models. Determining why there is a child in the video, however - whether it is a family "fail" video or pedo material, for example - is about as impossible for AI as distinguishing between satire, hate speech and propaganda. It's not possible at all, as AI will, for the near future, totally lack context.
This distinction will require humans, and that is not viable at all for fb, youtube, twitter & co, as it is a huge cost... the saving of which is offloaded onto society in the form of, e.g., undermined democracies or psychological trauma in survivors of sexual violence.
That is exactly the point, they have this ability and are not using it. Not even to disable comments.
And banning keywords won't help; that's just playing whack-a-mole.
It doesn't matter why the child is in the video. The pedophile only cares that there is a child. Videos uploaded with children in them will be exploited, regardless of whether they are legitimate.
The article claims that there is nudity (via timestamp comments). Separate models could be used to score the video, where a child plus any other potentially sexual content (genitalia, nipples) withholds the video. A withheld video can be appealed (for scenarios such as "breastfeeding tips" or what-have-you), much like copyright strikes, which have an existing process.
Obviously the model might have problems discerning between a 17 and 18 year old, but it would catch the most egregious content.
I feel like you have missed the point. YouTube is actively helping people find this content and making money off of it.
As long as the child is not performing something sexually suggestive in the video ("popsicle challenge", wtf), it is probably OK.
The reason why they find licensed music in videos is because they have a financial (profit) motive for doing so. Find a licensed track, and get a few fractions of a penny every time someone watches that video. There is no financial incentive to blocking CP, so they don't bother. This is a real shame.
It can't detect context, however.
>The reason why they find licensed music in videos is because
It's technically easier to do so. ML is still new and still makes mistakes, which is why people complain about Content ID: it takes down videos that are "fair use".
Explain to me, Mr Expert, how a ML algorithm can understand the nuance of the law, interpret it flawlessly like a human would (say, in a court of law).
Unless you're saying these cases no longer require humans, courts and judges, and ML is at a point where it's _flawless_ and accurate 100% of the time.
It's a shame you're so technically weak-minded; I thought the Hacker News community would be educated in this regard.
Please go read about how ML works before making such comments in the future, they make you look really stupid.
You don't actually need to, at least initially. You only need to detect kids; then, once reports on comments on a video where the computer-vision algorithm detected kids in the frame cross a threshold that warrants a human look, the human can flag the video as containing elements of CP, which would kick in the following:
- Deeper ML analysis of the dependency graph of commenters and related videos.
- Blacklisting or law-enforcement notification for commenters who are violating the law.
- Training data that can be used to train more advanced algorithms.
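The triage step described above can be sketched in a few lines. This is purely illustrative: the field names, the report threshold, and the existence of a reliable "child detected" signal are all assumptions, not anything YouTube actually exposes.

```python
# Hypothetical triage: only videos where computer vision detected a child
# AND user reports on comments cross a threshold are queued for human review.

REPORT_THRESHOLD = 5  # arbitrary illustrative cutoff


def triage(videos):
    """Return IDs of videos that should be escalated to a human moderator."""
    queue = []
    for v in videos:
        if v["cv_child_detected"] and v["comment_reports"] >= REPORT_THRESHOLD:
            queue.append(v["id"])
    return queue


videos = [
    {"id": "a1", "cv_child_detected": True,  "comment_reports": 7},
    {"id": "b2", "cv_child_detected": True,  "comment_reports": 2},
    {"id": "c3", "cv_child_detected": False, "comment_reports": 12},
]
print(triage(videos))  # → ['a1']
```

The point of the two-gate design is that neither signal alone is trusted: the CV detection limits the pool, and the report count limits the human workload.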
Because there is no profit motive to do so, like there is with music licensing, they won't do it.
> It's technically easier to do so.
I'm not talking about determining context with ML. That is a hard problem, of course. But identifying "child human" with a computer vision algorithm is very simple; you then rely on human moderators to initiate the dependency-graph traversal.
> Please go read about how ML works before making such comments in the future, they make you look really stupid.
No need for the ad hominem. I'm not an ML expert, but I understand enough of the basics to know what is possible.
Even dental scans and X-rays can't give you a reliable chronological age for somebody.
In that context, I consider it extremely doubtful that some ML algorithm solved this whole problem by just using pictures/visual recording, that sounds a bit too much like magic/wishful thinking.
People would be a lot more willing to accept blanket and risky experiments to solve these problems if there was a viable and reliable reconciliatory/redemptive path for when those systems get it wrong.
For better or worse, YouTube hasn't focused on scaling that part of their system, so there will be apprehension and scepticism with any approach they take.
The most horrid thought occurred to me while reading this: that somewhere within Google there may be a glossy (autogenerated or otherwise) report detailing exactly how many people are watching these videos and falling into this particular gradient of the recommendation model. Perhaps even read and passed up as too difficult to extract value from by their ads team. A company of this size (YouTube, not just Google) couldn't possibly be oblivious to this.
Just advertise Spy Cameras, Thailand Vacation Packages, and Annual Passes to Disneyland, no?
It doesn't help that the guy who "discovered" this claims it is some kind of "YouTube wormhole" you supposedly fall into and can never escape from. Even though he pretty much did everything to make the results come out as they did (fresh YouTube account, using a VPN, looking up content that's already considered "soft-core" by many users).
But the reality is that these videos show nothing illegal or really that out of the ordinary. The fact that some people derive sexual pleasure from them, is something that can never be stopped unless you introduce something akin to mass-scale thought/mind-control.
Before it was YouTube videos or the Internet, it was plain old children's underwear catalogs printed on paper. Without these, or YouTube videos, those people would just do a Google picture search for "childrens underwear" and get unlimited pictures of children in equally revealing poses as in those YouTube videos.
What to do about that? How do you "fix" any of this? Particularly in an age where becoming a "YouTube personality/influencer" is increasingly peddled as some kind of viable occupation for young people and even children?
The seriousness of even a single case of pedophilia or child exploitation cannot be overstated. Not only is it ethical and moral to take it very seriously, but it's also the logical action because children are our future and if they are healthy and happy then the human race is healthy and happy.
There is no fixing the problem, but we can mitigate it. The first step would be, apparently, to educate the public, because comments like yours still exist, which is deeply concerning. The second step is up for discussion of course, but perhaps it would be useful to ban children under the age of N from appearing in videos for more than T amount of time, or perhaps ban it altogether.
I believe there is fixing, or strongly addressing, this harm: raise the standard of living, provide education and safe communication networks to children for identifying and reporting abuse (and importantly: feeling safe to do so), provide quality mental healthcare for those abused. Abuse often begets abuse: physical, verbal, and sexual.
I find your implication that "even a single case of pedophilia" is something to be damned morally and ethically disturbing (my interpretation being that you are implying a person who has the attraction, not a person who has acted on it). I believe that our incredible stigmatization of people who have pedophilic urges prevents them from getting the necessary counseling and help they need to not harm children. I believe that with a proper support network, they wouldn't harm children. Sadly, we seem to treat this criminally rather than medically.
Which is it? If you think you can have a 100% success rate you are horribly, stupidly mistaken. You WILL get innocents caught in the crossfire, you WILL see videos removed that are okay, and people WILL cry censorship.
People are demanding a flawless system, that never makes mistakes, that has 100% accuracy to be developed and released in the next week, and yet, we need a "free and open internet" otherwise Reddit gets angry and puts up hundreds of "net neutrality" posts.
Oh, what's that, Article 13 wants upload filters? BOO!! Yet they may catch this type of content.
Nobody likes that idea because it'd "ruin" the internet.
Which is it. This problem can ONLY be fixed with upload filters, potential censorship and mistakes which will affect a number of legit videos.
That okay with you? People seem pretty mad at youtube for their content ID and DMCA, just check r/videos.
This would increase 10 fold with any further automated system designed to attempt to filter certain categories of content.
People will have channels deleted and potentially their Youtube 'careers' ended by a machine. But r/videos hates this.
A problem may or may not have a solution or many solutions. And solutions have a relationship not just to the problem they solve but also the problem or problems they create.
This fundamental meta-problem is a problem to which there is no solution. This is why we argue. This is why we fight. This... is why we politick.
As the GP points out, there is no real exploitation here. There are harassing and demeaning comments and a general toxic culture of objectification, but limiting the actions of children out of some desire to protect them from the nastiness of the real world does them no favors and denies them their self-determination.
The feminist movement has been dealing with these same issues affecting adults for decades. It has found that this sort of "protection" doesn't work and makes the problem worse. Children should be taught and supported in dealing with this behavior. We should empower them to bring the asshole commenters to light, so they can be shamed and held responsible. There's no magic age when the harassment stops or becomes okay. The best we can do for our children is prepare them for life.
If you feel so hopeless about your impact on the world around you, please keep such depressing ideas to yourself
Can you verify this 100%? You're absolutely certain that there isn't a single video on YouTube of a child where a grown up might've instructed them to do something that seemed innocent but the adult found sexual? I understand what you're getting at with the rest of your comment, but to sit there and say that not a single child on YouTube has been exploited or abused is really, really reaching and deserves a cited source.
But it does seem fair in response to ask if you can prove, with the same 100% accuracy, that any one of the children in the videos identified by the article is being exploited and/or abused?
Fair, but why state it as fact if it's impossible to prove?
>But it does seem fair in response to ask if you can prove, with the same 100% accuracy, that any one of the children in the videos identified in the article is being exploited and/or abused?
Also a fair point, and I can provide at least two articles that addressed YouTube removing content involving the exploitation of children in the past. Granted, these aren't "the videos identified by the article", but they're strong evidence that, contrary to OP's claim, children have been exploited on the platform.
You could take the common line where if someone is driven to a compulsion though self-temptation, all responsibility lies with them. But if you encourage that compulsion knowing full well what it leads to, are you blameless?
I posit that knowingly providing tooling that actively seeks out, recommends on behalf of and encourages the behaviour of voyeuristic individuals interested in children, in an absolutely concrete way the provider of the tool is in part responsible for any child abuse that may result. When you're talking about a planetary-scale recommender, the scale of the viewers is terrifying -- and even if a small fraction of those compulsions are acted upon, that's far, far too many.
edit: apparently this line of thinking is so repugnant to some that it doesn't even deserve a reply
I read the position as: "The problem can be mitigated, but we shouldn't go so far as to actually try to police people's internal thoughts and feelings. There are some who are so overcome by emotions with this issue, that they would even advocate going that far."
The sexualization only happens in the context of the viewer mentally sexualizing the content. The vast majority of people who see these videos, without any further context, would simply assume those are just family videos.
In that context, it is quite perverse to have some people constantly fantasize about how certain content could be "abused" for sexual pleasure by other individuals and how best to go about denying them that pleasure. In reality, human sexuality is just generally fucked, and people can derive sexual pleasure from the weirdest of things, like naked feet in high heels.
Would you rather these people build secretive in-groups where they trade real hardcore material, supporting actual physical child abuse? Should we ban the depiction of animal violence in general because some people derive sexual pleasure from it?
I just see no practical solution to this that doesn't end up with a lot of quite arbitrary censoring rules, that would then need to be expanded. Case in point: Just do a Google image search on "children's underwear" and you will find plenty of similar stuff like in those videos.
Should that stuff also be all removed/deleted/censored? That's where the real slippery slope begins.
I mean -- I could give you literally dozens of "facts" -- so your comment is a strawman. It's not hysteria, and I am skeptical of anyone who denies abuse. As a victim of abuse, I really can't stand people who say such things.
Facts are good. We can corroborate facts. Non-facts can eventually be debunked.
> its not hysteria, and I am skeptical of anyone who denies abuse. As a victim of abuse, I really can't stand people who say such things.
As a victim of abuse, I still think everyone should exercise their skepticism, and not just victims and the outraged. If you advocate for skepticism, then you should not object to fair skepticism being directed at you as well. There is a great power in strong emotions and outrage, and with great power, comes great responsibility. Transparency is paramount. The best way forward is to avoid hysteria, and to go forward rationally.
It will forever be a battle between the two sides. The side that can force/convince the other becomes the "solution".
The only thing you can do is fight for the side you believe in, whichever side that is.
A. Auto take-down videos featuring minors with unusual amounts of sexual comments
B. Auto ban accounts that are repeat offenders of (a)
C. Require all uploaders have a real paper trail so they can be held accountable for what they upload
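Rules A and B above could be sketched roughly as follows. Everything here is an assumption for illustration: the thresholds, the tuple layout, and especially the `classify_comment` function, which stands in for a comment classifier that does not exist as described.

```python
# Hypothetical enforcement of rules A and B: take down videos of minors that
# attract many comments classified as sexual; ban uploaders flagged repeatedly.

FLAG_THRESHOLD = 10  # sexual comments before a video is flagged (assumed)
STRIKE_LIMIT = 2     # flagged videos before the account is banned (assumed)


def enforce(uploads, classify_comment):
    """uploads: iterable of (uploader, has_minor, comments). Returns bans."""
    strikes = {}
    banned = set()
    for uploader, has_minor, comments in uploads:
        sexual = sum(1 for c in comments if classify_comment(c))
        if has_minor and sexual >= FLAG_THRESHOLD:
            strikes[uploader] = strikes.get(uploader, 0) + 1
            if strikes[uploader] >= STRIKE_LIMIT:
                banned.add(uploader)
    return banned


# Stand-in classifier; in reality this would be a trained model.
flagged = enforce(
    [("u1", True, ["bad"] * 12), ("u1", True, ["bad"] * 15), ("u2", True, ["ok"] * 3)],
    classify_comment=lambda c: c == "bad",
)
print(flagged)  # → {'u1'}
```

Note the weakness the replies below point out: because the signal comes from commenters, not the uploader, the whole scheme is only as robust as its defense against brigading.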
The creator/uploader rarely has any influence over what people comment, yet you would punish them for having repeat offenders commenting under their videos. Such a system would be very easy to abuse to get people's channels banned.
This whole system would also heavily depend on how you classify "sexual comments", working on the assumption that communities wouldn't adapt by using invented slang.
While C is just straight up anti-privacy without any direct advantages. I have no doubt Google/YouTube/law enforcement currently have the capabilities of tracking down anybody using these services without any paper trails, that's in no way part of the issue.
You misunderstand. If the uploader keeps uploading videos of minors that garner mass sexual comments, the uploader gets banned.
Uploader uploads preteen girl deepthroating popsicle. Lots of pedophiles come along and express their pleasure in the comments with water squirting emoticons and such. Bam, video flagged and taken down. Uploader uploads preteen daughter doing handstands/nipslips. Pedophiles return to comments to express pleasure in their sick way. Uploader is now repeat offender and account is banned.
I don't see why this is a big problem. If it's a family video make it private.
ML can be used to train the algorithm on the latest slang.
> I have no doubt Google/YouTube/law enforcement currently have the capabilities of tracking down anybody using these services without any paper trails, that's in no way part of the issue.
Really? Why aren't they doing it? How do they track down people uploading via VPNs and such? Tying account to bank makes it much more difficult to evade law enforcement.
I understood you perfectly fine, that's why I said this approach is easy to exploit by simply spamming the videos of some uploader you don't like with "sexual comments".
The qualifier of "uploads videos of minors" does not work because there is no kind of ML/AI that could reliably recognize children and their ages, or determine whether what they are doing is "sexualized" or not.
For that, you would need human moderation, and that's not just impractical, it's simply impossible: every second, more than an hour of content is uploaded. People upload content to YouTube faster than any reasonable human workforce could ever hope to review it without generating a steadily growing backlog.
You'd need to have dozens of puppet accounts. Obviously you could take into consideration whether all the sexual comments are coming from one person/IP address.
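The brigading defense mentioned here can be sketched as a distinct-source check. The function name, tuple shape, and the `min_distinct` threshold are illustrative assumptions; a real system would also weigh account age, history, and so on.

```python
# Heuristic: before counting "mass sexual comments" against an uploader,
# check how many distinct accounts and IPs the flagged comments come from,
# to blunt brigading by a handful of sockpuppets behind one connection.


def plausibly_independent(flagged_comments, min_distinct=5):
    """flagged_comments: list of (account_id, ip). True if the flags
    plausibly come from independent users rather than one brigader."""
    accounts = {acct for acct, _ in flagged_comments}
    ips = {ip for _, ip in flagged_comments}
    return len(accounts) >= min_distinct and len(ips) >= min_distinct


brigade = [("sock%d" % i, "10.0.0.1") for i in range(20)]        # 20 accounts, 1 IP
organic = [("user%d" % i, "198.51.100.%d" % i) for i in range(20)]  # 20 accounts, 20 IPs
print(plausibly_independent(brigade), plausibly_independent(organic))  # → False True
```

This only raises the cost of abuse, of course: a determined attacker with many IPs (e.g. via VPNs, which come up elsewhere in this thread) still passes the check.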
In 2019, outrage mongering should automatically trigger skepticism -- especially if the subject matter is extra lurid and stigmatizing.
The problem is not having an issue with pedophilia. The problem is with bad actors exploiting the outrage amplified by the potential for paranoia surrounding the issue. Bad actors in Salem in the 1690's exploited outrage and potential for paranoia around witchcraft to take land from other people and take down rivals. Bad actors throughout history have leveled the charge of homosexuality, of having the "wrong" religion, of harboring the "wrong" politics, or of being from the "wrong" ethnic background to exploit outrage amplified by paranoia to take down rivals and to gain power through attention.
In 2019 most of those things no longer produce outrage like they did in the past, which is how things should be. Now, we are left with charges of racism and non-consensual sexual misconduct. Everyone agrees that those are bad. However, the potential for the outrage to be misused by bad actors is even greater as a result.
In 2019, the takeaway should be that great outrage should be met by great skepticism, and that important conclusions should be drawn in calmness and using evidence and logic.
There is this thing called "criticism". Or "feedback". GP gave an example of it. If you cannot distinguish between that and "regulating what other people think and feel", anti-Semitism, homophobia and McCarthyism, then maybe you should rethink your own lack of perspective.
When people are starting to go around and imagine what's going on in other people's heads, it's gone too far beyond "criticism" or "feedback."
> If you cannot distinguish between that and "regulating what other people think and feel", anti-Semitism, homophobia and McCarthyism, then maybe you should rethink your own lack of perspective.
One horrible thing about being gay-bashed, and I have experienced this personally, is having other people talk about what's going on in your head, as if they know better than you. There's something particularly dehumanizing about it. Such things also come into sectarian bigotry and out-grouping. I've even experienced a bit of that from being raised Catholic, though that sort of thing was much rarer. Certainly McCarthyism involved a lot of this speculation about what's "really" going on in your head or other people's heads.
> then maybe you should rethink your own lack of perspective.
Having been the recipient a lot of bigotry in my lifetime, one common pattern I've seen is people arrogantly, confidently, even self righteously jumping to conclusions about what's going on in my head while being spectacularly wrong. I think that's quite a profound perspective. It's also a perspective which is the foundation of caution against people who use phrases like "maybe you should rethink your own lack of perspective" in such a gaslighting fashion.
YouTube could start requiring validation for uploaders before they're monetized or allowed to enable comments. Require a credit card on file or something. That'd likely help cut down on spam too.
However, I think you should re-read the article. The problem isn't the video that was uploaded; it's the commenter.
It's also been creating a cash cow for them. I also think there is just too much content being uploaded for them to accurately determine if the video is advertiser friendly.
Private social-network video hosting. Not outside-searchable. You can host your own or maybe pay for another host.
Only accessible by invite, etc. Parent configurable, permissions-based access for children (eg. Children cant send invites, only parents and approved members like elder children or trusted relatives).
Essentially, it wouldn't involve existing *tubes.
But I imagine there are a lot of people who don't (or prefer not to) use Apple products or services.
I could see there being a decent opening for something with the general usability of *tube, but with greater configurable control, and private by default. As well as there being a free option they can run themselves if they're somewhat technical.
They came to their new host as a BSD consultancy firm. It wasn't until the new host stumbled upon their HTTP traffic that they noticed some strange domain names, and later found a whole network of message boards set up for "people" talking about grooming children for sex.
On the surface it was a legitimate business, under the surface it was horrible. They were kicked out of their new hosting and looking at their domains a few weeks later they had found a new home with PRQ. :(
This youtube stuff is also sad because these kids are uploading their videos with no other intent than to have fun. And now someone has to explain to them that their videos will receive special treatment because of sickos online.
Maybe leave the videos alone and teach kids not to answer strangers online instead?
I think... "S", could be for Services.
These companies don't choose to advertise on these videos specifically. Youtube is just placing the ads there based upon the viewer etc. I imagine there are some exceptions for high profile youtubers and companies that pay directly to display their ads prior to their videos, but the accounts in question don't likely fall into that category.
They know full well they are scatter-gunning and that some small percentage of their ads will end up next to/in front of awful content. They know this - they've seen it happen time and time again. They continue to do it because it's cheaper, and they occasionally take some PR flak and then blame YouTube.
Ultimately - your ad, your problem. If you value your brand so poorly that you don't care where you advertise, this is what happens.
There are plenty of advertisers that pay the content creators directly for in-video ads, there's already a model where you can choose exactly what kind of content you want your ad featured in.
I'll say it again, these advertisers are choosing where to advertise.
See, this is factually incorrect. Companies are paying Youtube to place their ads in front of viewers that are most likely to convert/in their target audience. A company generally doesn't say "Place my video in front of youtuber X,Y,Z."
It's probably safe to assume child "influencers" have an even rougher road ahead of them than many child actors did. I wonder how many or few of the parents who are making a living doing this are putting money their children earned them aside for their children? Based on the sizes of the houses and pools in the background rapidly increasing over time in a couple channels my niece watches, I'm guessing not many.
Experts in psychology and who knows what else, with huge budgets, work tirelessly to make your kid intensely crave something they do not need.
And keep parents from uploading them too:
Can a minor legally grant copyright to another entity by accepting TOS?
As an example of why, this much-criticized article where a mommy blogger's fourth grader discovered she'd been the subject of a bunch of columns: https://www.washingtonpost.com/lifestyle/2019/01/03/my-daugh...
Maybe an ideal service would offer both— paid private and shareable containerized hosting and an open-source or free-download host-your-own option.
This hit at least two of the highest-profile Pokemon Go Youtubers this past weekend with videos related to the "Combat Power" of virtual monsters.
We know that our social networks have been invaded by foreign adversaries to sow chaos and division. Is there any research as to whether they've also sought to undermine us via promoting the degradation of our moral values as a society?
At a minimum, it would reduce our moral standing in the world, and potentially produce a numbing to or higher tolerance for other types of moral deficiency in which they might seek to engage. In general, it would be demoralizing to see this constant stream of depravity in your own culture.
Seems it would be potentially very effective and consistent with other lines of attack we've seen.
Wikipedia does not explicitly call out moral warfare, but if you keep Googling on that term you can find a lot of chatter about it, of various levels of veracity. It's only a baby step beyond what is very firmly established; I don't find it hard to believe.
I'm specifically talking about the morality line of attack, and whether there is similarly well-documented/sourced discussion of same (that is, beyond random Googling for which results might also include disinformation).
The people/entities who would publicize this and make it "officially official" have every motive in the world not to, so you're not going to find something like a 60 Minutes report on it or anything. (I think that for all the "officially official" sources casually commit lies of commission all the time, their true power is in the lies of omission.)
I have to apologize for the vagueness, but the HN gestalt would not particularly care to examine the details of this matter too closely. It would result in... nontrivial cognitive dissonance.
Wired in particular has a weird fascination with criticizing YouTube and YouTubers. It's a pretty transparent attack by one medium that is losing attention against another medium that has it. You can see it to some extent in how other media report on Facebook as well. It's not just a war for attention; it's a war to reduce the ad budget that gets spent on the competing platform.
In particular, for Wired, making articles like "Pewdiepie is a nazi" or "YouTube is full of paedophiles" isn't just reporting, or even clickbait to attract views. They also specifically mention ads in this type of article as a way to hurt YouTube's bottom-line ad revenue by scaring advertisers away from the platform.
Saying “hey big advertisers do you know your ads are showing up on this hideous thing” is one of the few ways available to try to get some sense of moral accountability to one of these platforms. Reporting individual pieces of content or individual users does nothing, if it’s drawing paid views there will be another person doing the same thing that the system will be quite happy to start promoting until they, too, get reported.
Case in point: as long as YouTube is beholden to the whims of advertisers, free speech is a dying thing on YouTube. The same goes for how they are handling copyright strikes etc. They should be taking an aggressive stance to crack down on people who abuse the reporting system (e.g. ban an IP if it has X false reports, use the Content ID system to actually detect whether something is fair use and whitelist those cases, which is easy to detect because it's based literally on how long the borrowed portion of the video is), instead of just assuming every report is 100% legit.
We should also be careful about removing free speech from minors to alleviate our own sense of disgust.
However, YouTube has created, monetized and helped spread a network of people encouraging sexual exploitation and abuse of minors. That is incredibly messed up and people at YouTube should face criminal charges for this.
Maybe, but you can still criticize a platform for a monetization algorithm that optimizes for pedophiles, collecting the cash and looking the other way. Although I think that, in doing so, we shouldn't absolve parents of their own responsibility, since often adults are pimping their children out for the views.
You are right, but by criticizing said platform, you are giving advertisers more leverage and basically helping to even further whitewash an already heavily censored internet.
It's not just children. Look at the comments on any video and you'll see all kinds of abusive/sexual-harassment comments. It has always been like this, and it will always be like this, unless you want an algorithm to detect the "offensiveness" of your comments, which will result in certain topics being completely banned unintentionally (think China's handling of the term "Tiananmen Square massacre" if you want a working example of this). And the worst part is, it's essentially advertisers that get to decide what content is OK for you to view. I'd rather take the lack of censorship any day and pay a subscription, thank you.
While many of these YouTube comments are toxic and/or constitute harassment (which neither I nor the law condone), there are plenty of examples of legitimately harmless expression (fanfiction, forum discussions and articles about attraction to minors and how to deal with it, etc.) that are the subject of unimaginable amounts of hate and censorship online, where said hate and censorship is conducted essentially on the basis of sexual orientation. Some companies (notably Medium and Reddit) are protective of this speech, while others (notably YouTube/Google) are not.
Not Reddit anymore, if the recent permaban+unban of /u/holofan4life for a drawing of a fictional teenager in a swimsuit is any indication. Granted it's a drawing and not text, but still.
Not all straight males are turned on by rape. The appropriate comparison is rape fetishists or maybe even the more general umbrella of sexual sadism.
The confusion, where you are technically right, comes in because legally there is no such thing as a consensual sexual relationship with a minor, so someone who fantasizes about a sexual relationship with a minor is technically fantasizing about something that is legally rape. However, what many of them fantasize about is absent of many of the trappings of what we would typically call rape. I've worked with some via psych studies in college, and many fantasize about what they think of as a legitimate consensual relationship. That's a far cry from what society would have you believe, and that's what I'm trying to highlight.
And all a sexual sadist is, is someone who derives sexual pleasure from the suffering of others. And a rape fetishist derives sexual pleasure from thoughts about rape. You're the one who seems to think that has anything at all to do with how they act, as opposed to merely what excites them.
Edit: I think I see now your point. Your argument is that just because straight males are attracted to females does not mean that they are going to act on those impulses in an illegal or immoral way.
I still think it is a flawed comparison because one could argue that being attracted to women doesn't mean being aroused by non-consent. That is of course also true of being aroused by children, however as you have pointed out the issue is that there isn't a legal (or for most definitions moral) path to fulfillment of that desire.
The comparisons I mentioned fit better because they too cannot be legally or morally fulfilled. Instead, as is the case with pedophiles, they explore the fantasy via various fictional media, imagination, and role-play. Yet these two groups do not provoke this irrational equating with people who do fulfill their desires in illegal and immoral ways.
So in other words, there are plenty of pedophiles who are disgusted by the act of raping children, which largely contradicts society's preconceived notions about pedophiles. There are also many otherwise normal people with varying pedophilic inclinations. It's clearly a spectrum, just like the Kinsey scale, but more complicated and multivariate. Some people can only form attractions to people their age, some people can form attractions to adults and minors, some people can ONLY form attractions to minors, and the age and gender requirements vary widely from person to person. I wouldn't be surprised at all if it turned out that a double-digit percent of the general population has at least a slight inclination towards this stuff, but the research that needs to be done to figure all of that out will never get done in the current social climate.
People in general don't realize that many pedophiles know they are pedophiles as early as age 11 (let that sink in, and imagine growing up like that), and we know almost nothing about their early experiences and inclinations during childhood and teen years largely because they are too terrified to reveal themselves. Do pedophiles like even younger children when they are age 11? Do they like a particular age their entire life? All of these are open questions that academia and our society are frankly too scared to address.
edit, in reply to "on what do you base that notion?"
What makes you question that notion?
> Pedophilia (alternatively spelt paedophilia) is a psychiatric disorder
ctrl+f "pedo" no hits
ctrl+f "minor" no hits (that are relevant in this context)
ctrl+f "child" same as minor, however this:
> There is no substantive evidence to support the suggestion that early childhood experiences, parenting, sexual abuse, or other adverse life events influence sexual orientation.
We know that sexual abuse as a child is a factor in being a pedophile, no? So that also fits.
And not addressed at you, how come just basically "playing dictionary" and stating what should be obvious earns downvotes? What is going on here? This isn't the first time I'm getting a quite pungent vibe around this subject, e.g. https://news.ycombinator.com/item?id=19168928
(Oh nice, I got throttled, so even though I wrote this reply 2 minutes after that question, I had to make it this edit instead of replying to it directly.)
But to the general point. Pedophile literally means people that are sexually attracted to minors. Full stop. It doesn't mean child rapist, and there are minors who are pedophiles and know they are as early as 11 and have to deal with society's bullshit eating away at their conscience their whole lives even if they never do anything wrong. The reason for the negative perception is that you only end up hearing about the rapists because the other ones are too busy not doing anything wrong and keeping their orientation a secret due to stigma.
The current DSM criteria are a result of this bias. Not long ago homosexuality and bisexuality had the same treatment in the DSM (listed as disorders), and transgender people are still classified as having a gender identity disorder. So being labeled a certain way in the DSM means nothing when it comes to sex, because it's basically political at this point, and tons of researchers and psychologists realize this but say nothing. Those that do often can't publish their studies because bias is so ingrained in every facet of society and academia.
Also I upvoted you, because it's a good (albeit annoying and ultimately wrong in my opinion) argument. On HN generalizations in the form of short comments always get downvoted. It's dumb but that's the way it is.
YouTube makes recommendations similar to what you search or watch, and serves ads based on some black-box analysis. Pedophiles just slipped through the cracks in the system. The cracks need to be sealed, problem solved... until these people find new cracks to exploit. We can't kill them or get rid of them, so this cycle is perpetual. Excessive moral outrage only contributes to selling clicks and justifying ill-conceived laws.
A limited amount of regulation surrounding certain services could be a good thing, implementation-pending. In the same way that I'm glad engineers who design bridges have regulations to abide by in their projects.
The reason those regulations [choose your field of application] arose in the first place was that there was new knowledge informing an improvement in the [application], and yet there were those willing to close their eyes to the newly obvious design flaws because it was more profitable to do so, and the market simply wasn't correcting for it as some would believe it naturally will.
I didn't know this was happening. Hell, I hadn't even considered it a possibility. I don't pay close enough attention to the social aspects of the site. I'm glad it was in the news.