EU to make AI generated child abuse material criminal (europa.eu)
47 points by miohtama 3 months ago | 87 comments



no page loading for me. did a quick search on the article title and found, among others:

https://techcrunch.com/2024/02/06/eu-csa-deepfakes/

seems to be a working link. makes reference to the submission title.

of further note:

"The possession and exchange of “pedophile manuals” would also be criminalized under the plan — which is part of a wider package of measures the EU says is intended to boost prevention of CSA"

this could be chilling: outreach materials for children at risk could be misconstrued, or repurposed for grooming, so materials concerning adult interaction with children would become treacherous to possess


Didn't the US try this at some point with 3D CGI material? I think it fell through there. I believe in the EU such material is already criminal. OTOH I don't see any possible way they can stop it


> Didn't the US try this at some point with 3D CGI material? I think it fell through there

In the US, there is a legal distinction between child pornography and child obscenity. Both are criminal, and both are exceptions to the 1st Amendment – but the first is much easier to prove in court. In the 2002 case of Ashcroft v. Free Speech Coalition [0], SCOTUS ruled that (under the 1st Amendment) child pornography only includes material made with real children – so written text, drawings, or CGI/AI-generated children are not legally child pornography in the US; child pornography is only images/video [1] of real children. If someone uses editing software or AI to transform an innocent image/video of a real child into a pornographic one, that is also child pornography. But an image/video of a completely virtual child, one that doesn't (non-coincidentally) look like any identifiable real child, is not child pornography in the US.

What most people don't seem to know is that while a virtual child can't be criminal child pornography in the US, it still can be criminal child obscenity – which is rarely prosecuted, and much harder to prove in court, but which, if prosecutors succeed, can still result in a lengthy prison term. In 2021, a Texas man was sentenced to 40 years in prison over a collection of stories and drawings of child sexual abuse. [2] (Given the man is in his 60s, that's effectively a life sentence; he's probably going to die in prison.) If someone can get 40 years in prison for stories and drawings, there is no reason in principle why someone could not end up in federal prison for AI-generated images too, under 18 USC 1466A. [3]

[0] https://en.wikipedia.org/wiki/Ashcroft_v._Free_Speech_Coalit...

[1] maybe audio too? I'm not sure about that

[2] https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...

[3] https://www.law.cornell.edu/uscode/text/18/1466A


How can you define the age of a fictional character in a computer-generated rendering anyway? This is misguided and ill-conceived from top to bottom.


In the US, CSAM bans are on slightly shakier ground than in other countries because of the First Amendment (and the general trend in law over the past 100-ish years toward liberalized interpretations of 1A; pornography bans were rolled back significantly by various lawsuits around Hustler and its owner). They're grounded, more or less, in the notion that their creation is itself always a criminal exploitation of a minor, so trading in them is always encouragement of criminal creation of more such material (essentially, the idea that 1A doesn't apply because one is creating a marketplace for goods that are illegal to create).

This argument falls flat regarding synthetically produced CSAM and CSAM-adjacent material (no human beings are exploited in its creation; in fact, one could argue that creation of synthetic CSAM depresses the market for child-exploiting CSAM), so I wouldn't be surprised if the US can't find a way in its legal structure to ban such material; its protections against obscenity in general tend to be weaker than those of other nations, and if the content is merely obscene and not generated by harming a minor, it's harder to argue it should be banned (as opposed to shunned for being odious).


The EU is a lot of countries. The government of Denmark (for example) did a study in 2012 which failed to show how fake CP would lead to actual harm.

The US, on the other hand, is a different story. In 2005 one man was sentenced to 20 years' imprisonment for hentai. In 2016 a man was sentenced to 10 years' imprisonment for owning a coloring book.

Wikipedia has a few more cases in the article "Legal status of fictional child pornography".


You don't see any possible way? Really? How do governments stop anything else? Google "laws" and "enforcement."


To those asking: you are right that there is no victim. The issue they are trying to solve is that allowing this will (or at least may) harm society in the long term by normalizing the content. We've somewhat lost the ability to communicate this idea in our current individualist society.

In East Asia, for example, all porn is censored not because it is helping victims but as an attempt to enforce a certain level of morality in their societies, even though in many of these places there is no real legal consequence for distributing uncensored porn, and those countries are generally atheist. It is more about having the law so as to keep it from becoming accepted in society. We can argue whether that is a desirable value for society, but I don't think anyone is saying there is a victim for this crime.

It may also be worthwhile to compare this with the public indecency laws that exist here, which range from prescribing minimum attire in public to banning public defecation. There are many differences in practice between all these points, but I think they all touch on victimless crimes to some extent.


It's unsavory when an institution's default mode of acting in the face of issues is to enact bans instead of actually solving core issues in society.

In this case, shouldn't we solve child abuse (by, for example, promoting social safety nets) instead of spending already limited resources on victimless crimes?

Edit: clarification


My assumption here is that few will actually be jailed under whatever law is being created. If lots of resources are used to enforce it, then I think it will be a failure: not just wasted money, but also invasion of privacy and other issues. It should be more like shutting down clearnet websites that are obviously creating and distributing banned material, especially those that are making money from it. And I don't think social safety nets can prevent children from being abused. They certainly can't stop them from being abused by teachers and parents. I think the big issue here is that governments don't know how to frame this issue as anything but "someone is immediately being harmed by this action", which is not the case.


> My assumption here is that few will actually be jailed under whatever law is being created. If lots of resources are used to enforce it, then I think it will be a failure: not just wasted money, but also invasion of privacy and other issues.

First: rightly said, it's an assumption. Then, I'd like to highlight that resources are limited. This means that if you want to spend resources on X you have to take resources from Y, and is this trade-off really acceptable for this case in particular?

What outcomes can we really expect from a law like this? How do we know? What's the best and worst scenario? How will it be enforced?

I'd bet nobody can answer these questions with data supporting them. Including policymakers.

> And I don't think social safety nets can prevent children from being abused

Just today on the front page: https://news.ycombinator.com/item?id=39374152

Anyway, all of this is just speculation because research on this topic is banned in practical terms.


I saw that article earlier today but skipped it. From a brief skim and Ctrl+F it doesn't seem to say anything about abuse. I just have a hard time imagining how it would help even if there were data that seemed to indicate that it was so. As long as children are dependents they can be abused by those they depend on. That basically means their parents and teachers (or generally 'community leaders'). I agree that research is sparse, although I have seen that the proportion of men who are attracted to young girls is actually rather high. [0]

[0] https://www.sciencedirect.com/science/article/abs/pii/S00057... See graphs on pages 687 and 689


>I just have a hard time imagining how it would help even if there were data that seemed to indicate that it was so.

I was only assuming that higher happiness was correlated with children not being abused. It was just speculation on my part. Sorry, I should've clarified that.


I'll preface this by saying I am in favor of making AI child porn illegal. There are probably some details there that will need to be ironed out, but as a baseline it is a good idea.

In your text, however, there is something I find far more interesting for discussion:

> enforce a certain level of morality in their societies

I, personally, like this. I think a society should have a level of enforced moral behavior. But by going down this road, you can reach some undesirable conclusions super fast.


I'm not sure if you're being downvoted by people who disagree that we should enforce a level of moral behavior or by people who disagree that we reach some undesirable conclusions. I however think both points you make are correct. The truth is we always enforce some behavior, which is (almost) the definition of society. Usually we can frame these enforcements as giving or protecting someone's rights. But the rights themselves only exist as a result of our moral conclusions. There is no physical substance called human rights that can be seen or observed.

I think we can frame this as human rights as well, although it becomes more difficult. The current mainstream is that whoever is most directly affected has the right. For example, healthcare has been able to become defined as a human right because it affects those who can't afford it most directly, and those who lose out are generally seen by proponents of this right as already having enough or not being affected as much. But in this case we are taking away a right from someone – which, by the way, can almost always be described as giving a right to someone else – who is more immediately affected, in order to give a right to some other group.

And I think that is where human rights activists get upset with this legislation (ignoring privacy concerns). To them, the one who loses out because of a new human right must deserve to lose it, or the right must be taken away to serve a more immediate one. And they will generally not accept that the way one is born – taking that as true for the moment – can be considered as deserving of losing a right. And that's why I said we don't know how to phrase or justify this law properly.


Society already has a certain level of enforced moral behavior. Making murder or torture illegal is based in morality. Problems arise when we get to less universal cases. That some things are illegal in one society but legal in another is also often a consequence of a certain morality in that society. How much morality must be "universally accepted" in a society to make violating it something which should also be against the law is a contentious issue. Personally, I think it's good to err on the side of "let people decide for themselves" and only make it into laws in exceptional cases.


> I think a society should have a level of enforced moral behavior.

I, as a person with superior moral principles, completely agree. Following an old tradition, we should also put some psychopaths in charge of enforcement.


I never understood why fictional cp of any kind is illegal but highly violent content is not.

One can watch people getting tortured to death in "Hostel" or "Saw" in graphic detail on Amazon Prime.

Is it the innocence and defenselessness of children that gets society so riled up?


> Is it the innocence and defenselessness of children that gets society so riled up?

Yes.


Then why is the fictional depiction of dismembering a child legal, but the fictional depiction of giving a child an orgasm illegal?


It is probably the getting off on seeing those pictures that triggers the visceral response. If I make a movie of a child getting dismembered, I do not have the goal of creating a sexual reaction in the audience. In contrast, if I make fictional CP, I have the inherent goal of sexually exciting a certain category of people.


Sure, but when has repressing urges led to anything good? I don't think CP will turn anyone into a pedophile, might as well let people satisfy their urges in a non-harmful way.


> Sure, but when has repressing urges led to anything good?

I mean, we all repress certain behaviors to a certain extent. Some level of repression is healthy. I might have the urge to eat 5 kg of ice cream every day, but I keep it under control.

> I don't think CP will turn anyone into a pedophile, might as well let people satisfy their urges in a non-harmful way.

If you need CP to get off you are a pedophile by definition, because you are sexually attracted to children. I guess you meant watching CP will not automatically turn him into a child molester.

It is not that I agree with this new law. I do not see how it is enforceable. But I do see why people have a very negative view of it.


> If you need CP to get off you are a pedophile by definition, because you are sexually attracted to children. I guess you meant watching CP will not automatically turn him into a child molester.

No, I meant a pedophile. It's not like people aren't attracted to children just because they haven't seen a CP video; therefore allowing fictional CP should be OK (it won't convert anyone into a pedophile).

I guess there's an argument to be made that a pedophile watching CP might be tempted to become a child molester, but then we should debate that, not issue blanket bans on fictional CP.


> it won't convert anyone into a pedophile

Thanks for clarifying, I misunderstood your earlier point then.


so what was the goal of your child dismemberment movie?


As a comparison, currently illegal in Canada:

> The current law criminalizes possession of purely fictional material and has been applied in the absence of any images of real children, including to possession of fictional stories with no pictures at all, or vice versa, cartoon pictures without any stories.[4]

* https://en.wikipedia.org/wiki/Child_pornography_laws_in_Cana...


> It also includes the written depictions of persons or characters (fictional or non-fictional) under the age of 18 engaging in sexual activity.[2] Courts in Canada can also issue orders for the deletion of material from the internet from any computer system within the court's jurisdiction.[3]

That's too much in my opinion. What if there is a novel about teens experiencing it for the first time at 16 or 17? Or a novel about a rapist?


As someone who loves abstract thinking and debate for debate's sake (in pursuit of truth and nuance), all I can comment is that this topic is so far outside the Overton window that it's not possible to find any truth about it. In the US/Canada anything but full support for "Think of the children" will result in all-out ad hominem.

People cannot be honest about how they feel, sexually, because it's thought crime.

People cannot get help because others must report it to the police.

Researchers cannot study it (or release results contrary to the status quo) because it would be career-ending.

Reporters cannot tackle the subject because it would make them unemployable or ruin their publication.

Politicians must bow to it, and leverage it, because of the populist view.

Inmates cannot get rehabilitation because they're societally hated, and other inmates will kill them or commit deplorable acts against them. (This is socially acceptable behavior, even encouraged)


> In the US/Canada anything but full support for "Think of the children" will result in all-out ad hominem.

The other side of this is, "What's an acceptable number of crimes against children?" Implicitly, there's a choice: "This remedy is projected to reduce crimes against children by X%, but we're not going to do it because Y." Forgoing the projected marginal reduction is justified by the consequences of Y.

I'm perfectly ok with this calculus. If someone wants to say that right to privacy is more important than a projected reduction in crimes against children, more power to them.

What I'm asking for is honesty regarding this calculus. For some reason it short-circuits peoples' brains, and the consequences of taking whichever action end up making them unable to say, "Yes, the [projected] marginal reduction is what I'm willing to spend by not doing Y." Just say it.


The issue is that there's little research being conducted on the topic. There are multiple factors that make research extremely difficult, making progress in understanding the issues harder, thus undermining any policymaking.


That's a good point; it's not a very scientific field. The base rate for "trying something no one has tried before" is definitionally zero.

Any thoughts on the use of prediction markets, especially ones where predictor performance is tracked, in order to make better predictions on the results of legislative action?


I find it an interesting idea. In my humble opinion I'd just add a consideration: the returns/results of legislative action might not be objective, even for well-defined issues. They might not be as objective as money, because the results (data) have to be interpreted by participants.


Who is the victim of this “crime” here?


This is "thoughtcrime". The argument against outlawing thoughtcrime has never been that there isn't a victim. Of course there are victims of bad thoughts. The steel man argument is that we're better off without central thought control in spite of those real victims, because there are more and worse victims of authoritarian thought control.


This is probably controversial, but if we assume that pedos will (unfortunately) exist and will generate demand for this kind of material, isn't it better if it comes from an AI instead of real children?


This is an old discussion about "pretend paedophilia" using either art or (adult) actors, and AI essentially changed very little about it. The two views, very briefly, are:

- "It's a safer outlet and prevents actual child abuse, so it's a good thing."

- "It will encourage and enforce paedophilic tendencies and (indirectly) encourages actual child abuse, so it's a bad thing."

The last time I looked, the evidence is inconclusive. It's a difficult topic to research well, so I'm not expecting anything conclusive on this any time soon.

My own view is that most likely there are different kinds of paedophiles, and that different things will be true for different groups, because these types of things aren't that simple. This kind of nuance is even harder to research, especially on such a controversial topic fraught with ethics issues.

There's also the issue of training material, which is unique to AI. Is it possible to have AI generated child abuse material without training material of that kind of thing? I don't know enough about AI to comment on that.


Does the legality of fake material increase the demand for real material? Does the availability of fake material "awaken" or otherwise normalize desires that might have remained dormant? Studies have shown a link between violent porn and abusive behavior. A link, of course, does not mean causation, but given the potential for monstrous harm, I think we need to be wary of legalizing this kind of material. There's also the question of the training set used to generate this type of imagery.

However, I also think thoughtcrime is a very dangerous and slippery slope. It's not an easy question with an easy answer.


For it to come from AI, it needs to come from real children. A chicken-and-egg scenario.

Someone abuses a child, films it, and feeds it into an AI. Now they have that child's model.

Throw away the child and they're currently free of any charges. Of course that won't be enough, so repeat the process.

It's not like someone is creating a model in Blender and then running that through an AI. Not that that doesn't happen anyway.


Gen AI does not need to be trained on photos of nude children to produce photos of nude children. It can generate a flying pig soaring over Congress without ever being trained to do so.


It may not need to, but why would it not be trained on child imagery if the goal is photorealistic results?

If you had the opportunity to tune your AI on real photography rather than on self-generated imagery, and true photography of a pig produced higher quality with fewer defects on generation, why would you not go for it?


If a generative AI knows the concept of children and the concept of porn, it can generate porn with children in it (possibly with various degrees of success and realism). It’s not stuck and forced to produce only what was strictly in the training set. AIs are fundamentally extrapolation machines.


> For it to come from AI, it needs to come from real children.

Yes, but given that CSAM data already exists, and we can't go back in time to prevent it, there's no further cost to attain that dataset. Unlike all future real CSAM, which will be produced by abusing children IRL.

I see parallels with Unit 731 here. Horrible things have already happened, so what do we do with the information?


It's not about cost: why do new movies get produced when existing movies already exist?

Because of new content. If AI is being trained on real data and new content, then the datasets don't end up stale.


New movies get produced because people want to make and sell movies. They don't have to make movies that are 100% reality. Movies actually use special effects and CGI to fake all kinds of things that would be illegal in real life.

For example, there was a time when, to get a flood effect, filmmakers actually flooded a set; three extras died. Later on they were told they can't do that, but they can simulate it. Tons of movies show people getting overcome by floods, but no one dies in real life anymore.


> New movies get produced because people want to make and sell movies.

Same with CP.

But real movies still use real effects. It's just that a lot more of it is done on a green screen, as a cost-saving exercise and because of the demand for the movie to be out now, now, now.

If the same quality went into making films as it did in the past, the movie industry wouldn't be such a shovel of shite. Those were real, with real actors and real acting. Now you've got CGI; even so, scenes are still produced for real.


It feels really wrong to write this but what happens if someone makes a model that's "good enough" and there's no incentive to abuse children any more? Also, a lot of models are never trained on real pictures of what they generate.


If a "good enough" model ever came be it could split the pedophile category in to two. (On a basic level)

Those who seek sexual gratification from the abuse of a minor. The real deal.

And those who are aroused by the body of the minor, or watching the abuse of an minor.

If the model is "good enough" than you could potentially say that those who are interested in pedophilia probably won't seek the further extremes to fulfil their pleasure.

However, in the long run they are still pedophillac and the real deal will always be the more for those.


> what happens if someone makes a model that's "good enough" and there's no incentive to abuse children any more?

That isn't how anything works.


One thing that comes to mind is this: imagine someone is found with cp on a device. They could defend themselves by saying it is AI generated. Unless there is a reliable way to tell AI fakes from real ones, people could possibly use this defense.


If AI generated porn were indistinguishable wouldn't that almost totally eliminate demand for the real stuff?


But to generate faithful AI CP, it must mean the AI was trained on an actual CP dataset. So those who trained the AI would have some explaining to do.


You don't need to train on pictures of canine golfers to make highly convincing pictures of dogs driving golf carts on Mars. https://imgur.com/a/EIWUJYp The AIs are extremely good at mixing concepts.


I don't think that's necessarily true.

An AI can generate an image of a wizard with a frog on their head, and that doesn't imply that the training set had such an image.


Are you sure? I’d guess that AI can extrapolate from adult porn and non-sexual depictions of children.


So the AI will generate children with adult private parts?


Pretty sure there are non-sexual images of naked children too, such as in anatomy textbooks.


Unknown. For example, I have heard most offenders abuse their relatives, and I don't expect synthetic material to have any impact in this category.

Also, the only way to find out if this has any effect at all (positive or negative) would disgust and outrage many, as that test would require having a region where it's forbidden and a control group where it's allowed, and seeing which is worse.

I'm not sure how many people would try to lynch (let alone vote out) whoever tried to do that, but I'm sure it's enough that exact numbers don't matter.


My guess is that offenders abuse relatives because they are easier to access and manipulate, not because there is a true preference there. More a crime of opportunity than a pursued goal.


In that scenario, how tf would you know that "real stuff" was eliminated? Think, please.


Makes sense, but real people have real ages. Couldn't they just say the AI is a rendition of an 18-year-old with some hypothetical development deviation? You'd have to ban all AI porn, because the age can't be measured, as it's non-existent.


That would indeed be the probable next step for governments or intergovernmental organizations. Criminalize AI porn. Then criminalize regular porn.

The government is greedy in its lust for control and order in a chaotic world. It has a tendency to overreach, then overreach again (as we see in the overlap of privacy and counterterrorism).


Ah yes, the Japanese "1000-year-old dragon loli" gambit.

Which is actually a perfectly valid defense imo, as it's horribly dumb to incriminate real people because of fictional characters. Should everyone who has a copy of Stephen King's It go to jail because of child pornography? It makes no sense.


If the technology gets to that point, who needs the real thing?


People who are into it not because they like their kids young (i.e. "classic" pedos), but because they want to have (or feel they have) the power of causing pain. There is a real market for custom pedo videos; it's utterly insane.


> People who are into it not because they like their kids young (i.e. "classic" pedos), but because they want to have (or feel they have) the power of causing pain.

Stupid question but why take kids then and not adult women? Why take the risk of buying CP if you do not like the kids young?


I'm guessing the market will just serve them fake custom videos then ...


One argument I've heard is that it makes investigation of real cp harder.


Possibly, but I'd also speculate that it makes real cp less profitable, causing it to start disappearing. Similar to how real OF girls are going to rapidly lose their income because dudes running AI camgirls will increasingly outcompete them for attention and money.


You think the main reason cp exists is profit? Really?

Listen to the podcast "Hunting Warhead" before you make another comment so wildly uninformed on the topic anywhere.


Pretty sure that is a terrible reason to make laws for it. I heard the Bill of Rights has the same problem.


I'm also not sure how I feel about it. Guess I'd have to see some evidence of how people generating cp images affects real children, be it by encouraging actual acts or whatever. If it doesn't, or has the opposite effect, I would be against criminalising it, even though it is disgusting.


Luckily most other people in the world don't need further convincing.


I hear encryption does the same thing.


It does make it harder to investigate; the counterargument is that encryption is a literally unavoidable requirement for securing almost 100% of online activity, which is itself now critical to the functioning of modern economies.

For the moment, GenAI isn't.


"Anyone in the training dataset"?

A big unanswered question in the age of AI: how does a system of law work when breaking one law is bad, but the product of breaking many laws is totally exempt?

We're starting to see the milder form of this in debates around authorship and copyright. But when your AI model requires a shockingly large quantity of clearly verboten material as input, what is one to make of the output?


Real children, through the activity this kind of material promotes


This is the old 'violence in movies promotes violence in real life' argument.


You mean, in the same way as black metal promotes burning churches and video games cause mass shootings?


What is to stop someone from making a USB dongle that generates CP to politically assassinate a target?


What is to stop someone from making a USB dongle that has real CP on it to politically assassinate a target?


I presume the difference is: this USB has nothing on it except one small exe. Nothing to worry about. Then it just generates massive amounts of illegal content on your PC. But I guess that's not that different from a small exe reaching out to the internet and pulling in massive amounts of the same illegal material.


They may not wish to interact with real CP or the people that sell it. Typing a few words into a model is certainly easier.


Specificity and effectiveness.

When it's faked, it can be magnitudes more damning in the fake details, vs. spraying JPEGs of old Polaroids of a baby having its first bath.


No network traces. Can target offline PCs.


What's to stop people from framing others for any other crime you could think of, what's to stop people from committing crimes themselves? Technically nothing that's bulletproof, yet we have laws anyway. The world is messy.


The FBI already has ways of "discovering" CSAM on the computers of slippery suspects or obstreperous potential informants.



