Man Arrested for Creating Child Porn Using AI (futurism.com)
47 points by CharlesW 18 days ago | 101 comments



Obvious disclaimer first: I do not condone imagery of child abuse. I believe this guy has some kind of psychological problem and needs professional help, and AI-generated deepfakes can cause real harm, so proper regulation is needed.

With that out of the way:

> the generative AI wrinkle in this particular arrest shows how technology is generating new avenues for crime and child abuse.

Is it really child abuse if no children were involved? Does that mean that AI-generated imagery of some protected group being harmed causes actual harm to that protected group? Not saying it doesn't, but it may be worth thinking about.

> The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified, [...] that is a much harder problem to fix.

Sarcastic: people downloading and modifying open source software is a major problem indeed; hopefully a solution can be found.


> Is it really child abuse if no children were involved?

I think there is an empirical question one step beyond this. Does a pedophile who sees AI child porn get inured to it and then go on to try to act out fantasies with real victims? Or does this give pedophiles a way to satiate their desires indefinitely with only AI-based content, and lead to a lower portion abusing actual kids?

A second order issue is that distributing child porn is claimed to create demand for child porn which leads to more abuse. If there were no criminal penalties for purely AI-generated CSAM and the normal criminal penalties remained for CSAM that in any way derived from images of actual kids, would the cost-benefit difference push most consumers to demand only AI-generated stuff?

I'm not saying this is definitely the case, but I think it's at least plausible that more AI-generated CSAM would reduce actual sexual abuse of children, and from a harm-reduction standpoint, it should be beneficial to have more of it ... and it's also plausible that pedophiles being able to generate more and more extreme material at will would make things worse ... and it's also likely legally and institutionally impossible to do the studies to determine which of these is actually true.


Pedophilia seems, to a large degree, to be under-researched. Mostly because of the taboo, but even questions like "what portion of the general population are sexually attracted to pre-pubescent children" do not have good data.


I'm not sure about other countries, but in the USA, if you even broach the subject that the guilty individuals are often themselves extremely mentally disturbed, probably victims of sexual abuse at a young age themselves, and maybe should get some help while serving out whatever society deems "the proper amount of time," you will be ostracized from just about any group of people.


At the risk of whatever, I believe we do have good data on this sort of thing when it comes to adults.

It's often noted, if not utterly established, that "internet access" (aka porn access) strongly correlates with (and thus, perhaps causes) lower rates of sexual assault.


Allowing AI generated realistic CSAM while prohibiting real CSAM creates a real enforcement problem. Do prosecutors now need to find and identify depicted victims to prove a CSAM charge? Does believing material was AI generated serve as a defense?

We already have severe limitations on fictional depictions of this type of content so prosecuting AI depictions isn't anything particularly new.


> Do prosecutors now need to find and identify depicted victims to prove a CSAM charge?

Surely that would be a good thing if it incentivizes prosecutors to track down the purveyors and distributors instead of just stopping with easily targeted consumers. With nothing to establish beyond mere possession, by some metrics their performance is optimized to the contrary, much like the war on drugs.

> Does believing material was AI generated serve as a defense?

Believing that stolen property was legitimately acquired is a defense against a charge of possession of stolen property, as is plausibly claiming to have been set up. The alternative enables anyone with physical access to cause anyone else to be guilty of a crime, surely a net negative for society.

> We already have severe limitations on fictional depictions of this type of content so prosecuting AI depictions isn't anything particularly new.

Are you advocating its expansion to a general principle? Maybe Agatha Christie should have faced charges for the crimes committed by her characters.


> Surely that would be a good thing if it incentivizes prosecutors to track down the purveyors and distributors instead of just stopping with easily targeted consumers

It would make it harder to prosecute purveyors and distributors. Arguing from some amorphous 'incentive' under some unnamed "metric" is silly, since we have much better ways of creating incentives if you think the investigatory or prosecutorial priorities need to shift.

> The alternative enables anyone with physical access to cause anyone else to be guilty of a crime, surely a net negative for society.

No it doesn't as these defenses are unchanged.

> Are you advocating its expansion to a general principle? Maybe Agatha Christie should have faced charges for the crimes committed by her characters.

Work on your reading comprehension instead of making ridiculous claims. I'm not advocating anything. I am describing the current legal state in our country. If you are unfamiliar with the laws about creating fictional pornographic material with underage characters, then Google is your friend.


If watching this stuff makes you "hungry for more," then we just have to show gay people straight porn for long enough and they will become straight. Right?

You don't choose who or what you like. Unfortunately.


This comment is great, but I feel we're also leaving out the "human element" that is necessarily part of these sorts of questions. Ultimately, we, as a people, don't want pedophiles, we don't want children abused, etc. These, I feel, are not controversial statements. However, allowing AI-generated CSAM as a way to placate pedophiles does a few things:

Firstly, it normalizes this material, even to the tiniest degree you'd like to claim. If you bring something that was formerly illegal into the state of being legal, it becomes a part of our society. Weed is a fantastic example. It's on a great track to become a very casual drug of use in our society, and with each step forward along that path, it becomes less remarkable. When I was in high school, I was taught, by teachers and hired professionals, about the "dangers" of weed and other drugs. Now, I drive down the street and pass a couple of dispensaries selling that product and many derivatives of it, completely without drama. It is simply a thing that exists.

[And to not leave it merely implied, that's a GOOD thing.]

So, that being the case, are we as a society prepared to live in one where something like AI-generated CSAM is an accepted, to whatever degree, thing to have, sell, and create? Are we okay with that if it satiates pedophiles? Are we prepared to reckon with the consequences if it doesn't, and more children are harmed?

Secondly, I think we have to contend with the fact that now that this technology exists, it will continue to exist regardless of legality. This is one of the reasons I was so incredibly opposed to widespread and open-source AI in the first place, and at the risk of sounding like "I told you so," one of the concerns I outlined many times is that this technology enables people to create... anything, at a near industrial scale, be that disinformation, be that spam, be that non-consensual pornography, be that CSAM. I don't think a black-box program that can run on damn near any consumer PC and create photorealistic renderings of anything you can describe in text is inherently, by virtue of its being, a bad thing, but I do think it's something that we as a society are not ready for, and I was not alone in that thinking. But now it's here, and now we have to deal with it.


I'd have to look up the data, but the Netherlands has a long history of decriminalizing weed. However, decriminalization did not increase usage in the Dutch population.


> Ultimately, we, as a people, don't want pedophiles, we don't want children abused, etc.

We could say the same about violence and its presence in media.


I don't think you'd find nearly as unanimous agreement with that statement as with mine. I personally love violent media, both the action-oriented John Wick type stuff and shooter games, and I'm an avid fan of gruesome horror too.


This is a false dichotomy because neither option is valid. Pedos are born pedos, and they see and feel attraction to kids every single day. They are just as likely to act on these desires with or without CP.


That which is asserted without evidence can be dismissed without evidence.


yeah, thats what i just did


"no u"


>Or does this give pedophiles a way to satiate their desires indefinitely with only AI-based content, and lead to a lower portion abusing actual kids?

By that logic, if I see regular porn I would not be interested in having sex.

I assure you this is not the case, and your premise is flawed.


Way to dismiss a solid argument based on an anecdote. I wouldn't be so quick to conclude that porn has nothing to do with seeking out real-life sex, especially given that studies have widely found that the amount of sex young adults are having has been steadily going down since the end of the '90s.


>Way to dismiss a solid argument based on an anecdote.

We'll have to agree to disagree that it's a solid argument. I think it's weak and specious.


You don't exactly give any sources or justification for your arguments either.


Except that there is a good bit of data suggesting that access to porn does reduce sexual assault.


I don't think there is. I've seen data suggesting correlation but no causal studies.


Yea, that is moving goalposts.

A causal study in this case is basically impossible.

So at the moment, we have correlation studies compared to your anecdote….


>Yea, that is moving goalposts.

No it's not. The comment I replied to was the one suggesting there is a causal relationship.

My anecdote is a counterargument that shows the erroneous logic underlying the original claim.


> Not saying it doesn't, but it may be worth thinking about.

I think the path starts with asking, "How do things get normalized? What are the tools of normalization?"


You lost me. Why would normalization have any impact on child predators?

People aren’t going to be attracted to kids because they see such things. It’s like arguing gay porn is going to turn people gay.


Thank you for this clarifying point.

You're right. It's probably better to focus on things that have a proven record of impacting child predators or child predation.


Exactly my thoughts. It normalizes this messed up stuff.


From the first paragraph:

> A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography

I'd bet the distribution of CSAM is the crime that got him prosecuted, not modifying OSS.


The distribution is 100% what got him caught, and is the vast majority of prosecutions, but creation and possession is still separately illegal. And he didn't need to modify any OSS. Any image-generating AI is capable of generating such imagery. So too can anyone use Photoshop, or a paint brush, to create illegal imagery.


I can, will, and do create whatever imagery I want, no matter how I create it, and no matter what anyone else's opinion about that is. I will look at or think about that potentially depraved shit as often, hard, and long as I want, I will feel about that however I want, and no one in this big world of control freaks can or will do anything about any of that.


> no one in this big world of control freaks can or will do anything about any of that

Many sovereign citizens(1) have in fact been prevented from doing things society deems illegal, despite their insistence that the law does not apply to them.

(1) https://en.m.wikipedia.org/wiki/Sovereign_citizen_movement


Ew


In Australia, someone was jailed for possessing Simpsons cartoons in 2008:

   https://www.computerworld.com/article/1510365
That seemed like an absurd overreach at the time. But Australia has, until recently, always suffered from overreach in its censorship laws, so as an Australian I didn't find it surprising.

I always assumed the reason we didn't see the same thing in the US was the 1st Amendment. I guess it's too soon to know if this is just an aberration that will be fixed.


If the model was trained on CSAM then it’s definitely still CSAM, like the comment in this thread comparing it to cut cocaine, but if it wasn’t it sure seems like a thought crime.


> Is it really child abuse if no children were involved?

I hope that there are people out there who know the true answer.

It takes a special kind of person to seek the truth about something that people feel so strongly about. The truth doesn't care about us or how we feel. Finding it is a thankless task full of distractions and dead ends. If a perspective makes sense and rings true to me, that says more about me than it does about whether the perspective describes reality.


Doesn't matter if an actual child was abused or not. Under the PROTECT Act, obscene child porn is illegal to produce or distribute whether actual children were involved or not. The Supreme Court has ruled that material deemed obscene is NOT subject to First Amendment protection.


Incorrect. Obscene material can be restricted in terms of its distribution and advertising, but that is not an excuse to make it illegal to possess or distribute privately. This is why things like porn and vibrators are illegal to sell in stores in Texas, but you can still order them online.

The reason why child porn is illegal completely, not just restricted in its advertising and distribution, is that its production necessarily involves the abuse of actual children.

https://en.m.wikipedia.org/wiki/Legal_status_of_fictional_po...

> In response to Ashcroft v. Free Speech Coalition, Congress passed the PROTECT Act of 2003 (also dubbed the Amber Alert Law) which was signed into law on April 30, 2003, by President George W. Bush.[126] The PROTECT Act adjusted its language to meet the parameters of the Miller, Ferber, and Ashcroft decisions. The Act was careful to separate cases of virtual pornography depicting minors into two different categories of law: child pornography law and obscenity law. In regards to child pornography law, the Act modified the previous wording of "appears to be a minor" with "indistinguishable from that of a minor" phrasing. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.

Now, there is an argument that if the fictional content is indistinguishable from real abuse, it may be illegal. But that could probably be circumvented by providing the inputs and seed to generate the image, rather than the image itself.

I'm not super motivated to defend AI generated child porn, but the notion of banning it introduces ambiguity. Actual child abuse material can be identified because the people involved were born on a specific date and so whether or not it's illegal has a clear threshold. But AI generated content is not so easy to distinguish. The fictional characters don't have a birth date, so whether or not it's illegal becomes a lot more subjective.


> Is it really child abuse if no children were involved?

How can you prove that zero children were involved at any point?

How does the model generate CSAM without it either being in the training material or fed to the model as an input?


You're asking the wrong question. How can you prove that children were involved? In most developed jurisdictions, you must be proven guilty, not innocent.

> How does the model generate CSAM without it either being in the training material or fed to the model as an input?

How can a skilled artist (when forced to) draw CSAM without ever having seen CSAM? How can they draw something without having seen the exact thing before? Both humans and LLMs are able to extrapolate information.


> How can they draw something without having seen the exact thing before?

There has, in fact, been CSAM found in a version of the training set that was used to make Stable Diffusion 1.5, for example.

https://www.theregister.com/2023/12/20/csam_laion_dataset/


It somehow generates flying pigs without them existing. Substitute some other animal for pigs if your next argument is that flying pigs might be in the database.


> Is it really child abuse if no children were involved? Does that mean that AI-generated imagery of some protected group being harmed causes actual harm to that protected group? Not saying it doesn't, but it may be worth thinking about.

Deepfakes of real children is real children being involved.

Outside of that, my response is, “how would you create a compelling CSAM image with nothing resembling the target in the training data?”


>> how would you create a compelling CSAM image with nothing resembling the target in the training data?

(1) You feed it extra training data to suit the desired output.

(2) The AI combines bits from existing training data. It has non-pornographic content of children. It has adult pornography. Layer one over the other and voila. Crime. Most AI has zero images of cats doing calculus, but is very capable of generating such content.

(3) You manually aid the process. People forget how powerful standalone AI can be when one manually selects good output and feeds it back as a guide for the next iteration. This does not scale to a multi-task environment, but is very useful when accomplishing one specific task.


In theory, without the adult pornography to help seed step 2, how hard do you think it would still be to accomplish this? I.e., what if we somehow avoided training the models on any pornographic material or any material with children?


Deepfake porn of real people is already starting to be criminalized, children or not: https://www.criminaldefenselawyer.com/resources/is-deepfake-...

I think that's a good thing, though the law obviously needs more time to catch up.


Despite never having seen any CSAM, and despite my terrible drawing skills, I could probably draw something that would qualify as CSAM.


If a skilled human artist could generate convincing CSAM despite never having viewed it, a computer could too.


Deepfakes of real children aren’t mentioned here, so you know. The discussion is more general.


From TFA:

> Last year, the National Center for Missing & Exploited Children received 4,700 reports of generated AI child porn, with some criminals even using generative AI to make deepfakes of real children to extort them.

So, it’s definitely part of the conversation. Also, being a more general conversation means that it encompasses more specifics. Not being able to bring the general conversation back to specifics makes for a fairly useless discussion.


> Is it really child abuse if no children were involved?

The "harm" principle which is very popular these days to see if things are bad or good or punishable or not, I think it has a lot of problems. There are a lot of things that don't map onto this principle well. Social cohesion is a huge one. Something may not directly harm someone, but it can harm the fabric of society and many other things.


On the other hand, a government that makes victimless actions into major crimes also destroys social cohesion as it means the law is clearly detached from what is just.

By all means, fast track the death penalty for child rapists. Even make a constitutional amendment allowing cruel and unusual punishments for that single crime. Treat paying for actual abuse images as a worse crime than commissioning a murder. Sky's the limit for actual abusers. But (legally) punishing the creation of fake imagery is absurd, and in practice punishing possession seems to be unhealthy for a free society.


Yeah, obviously there needs to be a balance. But I think it's popular right now to be unbalanced towards allowing any and all victimless crimes, without thinking about the effects they do have on society overall and in the long term, versus on any particular person immediately.


Something can be legal while not being socially acceptable, and we can still strictly police related harms. Like with drugs, we can legalize them while still making e.g. smelling up an area with marijuana carry a public nuisance fine, or make littering with needles a misdemeanor (or more severe if someone gets injured).

If someone's only crime is being disturbing, the government doesn't need to get involved, but we can still not want to be around them and especially keep kids away from them.


I think that's a better balance than what we currently do.


Not saying I agree with legally permitting this kind of thing, but just addressing your comment:

If the government can throw you in prison for something, it better be for a good reason. If it's simply doing something "that doesn't harm anyone but is perceived as not conducive to social cohesion", that doesn't seem like a good reason. Many conservatives would say gay marriage, polyamory, tattoos, porn, foul language, and blasphemy are not socially cohesive. Insulting people or using slurs is not socially cohesive. Some countries imprison people for many of these things, but I'm glad the US doesn't.


Many people's instinct was to go here, which is maybe why our current society tips over into permissiveness. But a good balance can be reached without becoming brutal totalitarians.

One example is alcohol; a lot of countries struggle with controlling it. Technically it's a victimless crime. It's easy for people to see how it affects society, though. In some places banning it will be impossible due to local preference, but governments have creative ways to reduce usage. It's not totalitarian to run a campaign to try to reduce people's alcohol consumption, or to add taxes onto it, which are shown to reduce consumption and related incidents.


I agree with you, but I would rephrase it as "people should have the freedom to do certain harmful things without going to jail for it".

For example, insulting someone, or self-harming. Even things that "don't harm anyone" actually do; it's just that the scale is too small to be measurable at the individual level. For example, substance abuse. If one person does it, it's hard to measure the harm to society. But if 300M people do it, it's easy.


You don't want to use that argument, because that's the same argument that has been and will be used for a lot of things, like gay relationships, transsexualism, any and all drugs, violence in media, etc.


But just because it can be taken too far, doesn't mean the argument is bad. Likewise, the argument for a social safety net is good, taken too far you get Communism. Does it mean we throw the baby out with the bathwater?


It also doesn't apply to most laws. The law doesn't always require a victim or harm. Just look at speeding tickets or DUIs. You don't have to harm someone to wind up in jail


Both of those have clear harms, namely increased risks of traffic collisions.


That's not true. Increased risk is not harm, it's just a higher potential for harm.


It's no different than the logic behind medication regulation. Some substances are known to induce bad/undesirable/harmful behavior and need to have access restricted. There may be a scientific argument about whether or not the argument is correct in this particular case, but the legal paradigm is sound.

Yes, basically. You can absolutely criminalize fake images.


I would say the difference between drugs and images is that the first amendment doesn’t protect drugs. This then becomes an argument about the limits of the first amendment.


The US Constitution doesn't constrain French criminal law. But regardless, you're wrong in the US too: the first amendment absolutely does not "protect images", cf. the fact that you can be prosecuted for distributing CSAM at all. We've already crossed the Rubicon where we agree that the "speech" is illegal. The quibble under discussion here is whether or not the justification for the ban applies to synthetic images.


To be pedantic, that is only true until/if the Supreme Court rules that it is a violation of the first amendment. That is to say, it is simply the status quo and not a hard fact that the first amendment doesn't cover images and text/speech equivalently.


Distribution of fictitious CSAM does create harm, despite not directly involving real children.


How?


By cultivating a market for CSAM.


How is it different than the company making fake rhino horns to flood the market to fill the demand and reduce harm to real rhinos?

https://www.usatoday.com/story/news/factcheck/2021/06/10/fac...


I don't know enough about how the market for rhino horns works to say. If putting fake rhino horns on the market encourages a more thriving market for rhino horns that then places a premium on real rhino horns, it can indeed be harmful to supply fake rhino horns: the practice will then have a foreseeable net effect of killing more rhinos.


So if it's available for free, it's okay?


No, and for the same reason.


I guess renders or cartoons of child pornography are equally banned, so that's not a defence here, as the law stands.


I think they are legal in the US as long as the render or drawing is not depicting a real person.


On the opposite end of this spectrum, there is interest among digital forensics examiners in some kind of automated capability for detecting child porn. Such a capability would speed up the process and reduce the mental/emotional load on examiners having to deal with this sensitive type of content. Automated processing could also reduce the risks of this material being mishandled as evidence. In a report presented at the 2019 Digital Forensics Research Conference, a survey of forensics examiners showed heightened interest in AI/ML models for this application. They discuss some prior work, recent attempts, and challenges in reaching this goal.

https://dfrws.org/wp-content/uploads/2019/11/2019_USA_pres-a...

https://dfrws.org/wp-content/uploads/2019/06/2019_USA_paper-...


That leaves the question: how much authority do these models possess? See also: https://www.nytimes.com/2022/08/21/technology/google-surveil...


It's typical for authoritarians, not specific to AI. There are laws hiding all over the place that make drawing pictures illegal. Another Florida Man, Mike Diana, was convicted and sentenced for his (absurdly far from pornography or realism) zine-comic book Boiled Angel, which had a circulation of 300.

For 3 years of probation, he was banned from drawing at all, even for personal use.

https://cbldf.org/2016/09/mike-diana-case-still-resonates-in...


Florida defines child porn as depicting anyone under the age of 18, so all they need to do is say the AI image depicts a 17-year-old. Vaguely written laws are very expensive to fight.

Important callout beyond the headline: "Last year, the National Center for Missing & Exploited Children received 4,700 reports of generated AI child porn, with some criminals even using generative AI to make deepfakes of real children to extort them."


The original rationale for criminalizing the possession of child porn was that a crime was inherently committed in its creation, and possession is participation in that crime. I think this is a correct conclusion.

I can't see this rationale extending to generated imagery (deepfakes aside). No victim exists.

Vaguely gesturing at social harm is not principled enough, in my opinion. One can point to actual crime and actual harm for filmed and photographed child pornography. For generated imagery, one can only point to one's own personal revulsion as "harm".


Florida law defines anything under the age of 18 as "child porn", so if your catgirl doesn't look at least 30, you are probably heading to prison, an expensive court battle, or both.

Are we still pretending that we punish harm or can we admit that we just want to punish people for doing things we don't like?


People are not talking about how revolutionary this is. This isn't AI generating content to compete with artists. This is software running on standard hardware that turns electricity into material more illegal than cocaine. Just think of how that would impact the market for cocaine if suddenly everyone could easily make it at home from common ingredients.

We are spiraling very quickly towards a "media creation box", standalone software that will generate whatever content a person might want without any external connections. There will be huge societal ramifications. We think media bubbles are bad now, but just wait until everyone can live inside their own bubble filled with locally-generated content to match their increasingly warped world views.


Seems concerning, although the result would probably be similar if he drew it by hand.


I have not followed the topic since it's not my cup of beer, but what is the legal stance on hentai with characters that are depicted as under 18 (or whatever age of consent is relevant)?

edit: https://en.m.wikipedia.org/wiki/Legal_status_of_fictional_po...


I like how they use the loophole of "oh she's a 200 year old vampire in a child's body, so it's not actually a child".


Most likely illegal under the PROTECT Act. It would have to be ruled obscene under the Miller Test (which most material considered hentai probably would).



Crazy how the same people who say fake pictures can cause enough harm to warrant jail time are also fine with the dissemination of religious materials and violence porn on Netflix, all of which is causing way more harm. By a mile. It's not the harm; it's just you picking and choosing.


So now it's not downloading pics, but downloading AI models that can generate them. Are there people fine-tuning models with these illegal images? Or is it just a jailbroken model? In the former case the fine-tuner needs to be traced; the latter case is new legal territory.


I suspect that most open image models are capable of creating illegal images without fine tuning (though perhaps with significant effort). Models are capable of generating images containing subject matter that has never been depicted by any human ever. It's not hard to imagine that a model that can produce images of nude bodies could adapt the apparent age of those bodies.

The legal challenges here will be important to follow.


No new law is necessary. The PROTECT Act makes all sexually explicit imagery of a minor that does not have artistic or literary merit illegal to produce, distribute, or possess. It closes the "no actual children were involved, it's just fiction/cartoons" loophole.


The PROTECT Act specifically does not cover fictional content. "No actual children were involved, it's just fiction/cartoons" is a valid defense.

> The PROTECT Act adjusted its language to meet the parameters of the Miller, Ferber, and Ashcroft decisions. The Act was careful to separate cases of virtual pornography depicting minors into two different categories of law: child pornography law and obscenity law. In regards to child pornography law, the Act modified the previous wording of "appears to be a minor" with "indistinguishable from that of a minor" phrasing. This definition does not apply to depictions that are drawings, cartoons, sculptures, or paintings depicting minors or adults.[127][128][129][130] Furthermore, there exists an affirmative defense to a child pornography charge that applies if the depiction was of a real person and the real person was an adult (18 or over) at the time the visual depiction was created, or if the visual depiction did not involve any actual minors

https://en.m.wikipedia.org/wiki/Legal_status_of_fictional_po...


That stretches the definition of "loophole." It's like calling "truth" the libel loophole.

The harm is supposed to be to children.


Many countries criminalize the distribution of child porn, irrespective of whether it's real, drawn, sculpted, or sometimes even written.

If you are caught with loli hentai manga in New Zealand, expect the law to treat you like the chomo you are.

The reason why blanket bans are implemented is not just to punish actual harm, but to avoid creating a demand for material that harms children.

In the USA this kind of blanket ban, though desired by a majority of citizens, has historically not been feasible due to "that pesky First Amendment". So we say it's not about banning speech, it's about banning the abuse of children. And then we say that children get re-abused every time someone procures, distributes, or views this material. (Except when the FBI does it.) And then we're into semantic hair-splitting over when exactly child abuse has occurred: what if it's Bart Simpson? What if it's an AI deepfake? A photorealistic render of a fictional child? A Shadman cartoon drawing of a real minor e-celeb?

Other countries have eliminated this problem by banning the material altogether irrespective of whether it depicts real or fictional persons, whether photographed, drawn, or computer synthesized. Before 1969, this would even have been possible in the United States. The PROTECT Act gets us closer by criminalizing any material that flunks a Miller obscenity test, which the Supreme Court has ruled is NOT protected under the First Amendment.


In the US if I get caught selling you a gram of pure cocaine I get the same punishment as I would if I sold you a gram that’s only 20% pure. If I sold you a gram of some random powder and told you it is cocaine I am likely to be prosecuted all the same whether I knew it was fake or not.

That aside, the “fully synthetic CSAM with no children involved at all” idea relies very, very heavily on taking the word of the guy who you just busted with a hard drive full of CSAM.

His defense would essentially have to be “Your honor I pinky swear that I used the txt2img tab of automatic1111 instead of the img2img tab” or “I did start with real CSAM but the img2img tab acts as an algorithmic magic wand imbued with the power to retroactively erase the previous harm caused by the source material”

There is no coherent defense to this activity that boils down to anything other than the idea that the existence of image generators should — and does — constitute an acceptable means of laundering CSAM and/or providing plausible deniability for anyone caught with it.

The idea that there would be any pushback to arresting or investigating people for distributing this stuff boggles the mind. Inventing a new type of armor to specifically protect child abusers from scrutiny is a choice, not some sort of emergent moral ground truth caused by the popularization of diffusion models.


Generally in the US, you must be proven guilty, not proven innocent. The prosecution must prove that the defendant actually committed the crime. The defendant absolutely can claim plausible deniability.

Although currently it is not entirely clear whether mere possession should continue to constitute a crime by itself, or whether it should require actual child abuse (because the former used to imply the latter).


I did not mention conviction, only prosecution.

It’s not unreasonable to think “we should look into this guy that is publishing CSAM even though he’s posted it under hashtag #totallylegal”

I have a legitimate question for the defenders of this specific activity: If a bad actor were to produce real CSAM by harming children and then used an img2img model that creates very similar but distinct outputs, what do you call those outputs? CSAM? Art?

What (if anything) should happen when law enforcement sees those images?

How can the idea of “synthetic CSAM that harms no one” exist without some mechanism to verify that that is the case?



