It will be interesting to see if/how this will modify their argument "for the children". It is a nuanced discussion.

Is it a crime? Why / why not?

Is there a victim, is there harm being done?

Needless to say I personally find it appalling.




They (other than the cops investigating actual crimes against actual children) don't care about the children. They care about control. AI "CSAM" in particular (combined with deepfakes of adults made without the model's permission) is an emotional justification for more control in the form of draconian internet censorship and other preemptive measures.

Decades of megacorporation propaganda have paid off, and a fair portion of gen-z/y/x is hypervigilant about use without permission, even if the use is transformative and doesn't displace the original work. Their misguided concerns will play into the hands of the politicians seeking more control.

And it will do very little to prevent the spread of AI "CSAM"[1]. What are they going to do, regulate GPUs or any AI training software?

[1] Or any of the other concerns of today, like voice cloning or mimicry of personal likeness, like what just happened with Taylor Swift. The cat's out of the bag for images; OSS voice cloning is roughly at elevenlabs' quality, and it'll be surprising if there aren't, very soon, easy-to-use interfaces for voice cloning and tts.


Precedent for this exists: the US Supreme Court ruled [1] that completely virtual images are okay, but those derived from innocent real images of children would still be illegal.

So the AI images are likely illegal as well, because the training set would contain innocent child images that helped generate them.

Training data may also unknowingly contain actual child porn. If so then any naked/sex images derive partially from child porn. And many jurisdictions have "strict liability" laws...

Find one single illegal image in the training set and the entire model would be tainted. And how many may have slipped through human review? [2]

[1] https://web.archive.org/web/20201109024622/https://www.nytim...

[2] https://www.theguardian.com/technology/2023/aug/02/ai-chatbo...


It's not really clear to me what your position is. You state that you find it appalling, but what exactly: the fact that there is a problem with such generative AI, the discussion that might follow, or its consequences?

Obviously the powers that be will try to use this to their advantage, tightening control over the population a bit, such as by rendering AI models illegal except for a select blessed few (read: megacorp ones).

IMO the general response shouldn't be just defending the freedom position for freedom's sake, but finding viable alternative solutions that don't undermine the freedom of the population.

For example... we could try to use AI to detect whether images are AI generated or not :) And police should have the means to use AI, maybe given for free by the megacorps that benefit the most from it.
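A minimal sketch of that idea, assuming PyTorch/torchvision and a labelled folder of real vs. generated images (the data/{real,generated} layout and hyperparameters are illustrative, not a production detector):

    # Hypothetical sketch: fine-tune a small classifier on
    # "real" vs. "generated" images.
    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    data = datasets.ImageFolder("data", transform=tfm)   # data/{real,generated}/ is an assumption
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 2)        # two classes: real / generated
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:                        # one pass over the labelled data
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()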


There would have to be illegal images in the training set, right? Which I would imagine would make the whole model illegal, and therefore its outputs?


Not necessarily.

If the training data contained sexual images of consenting adults and legal images of children, many intelligent models could interpolate between the two.


Not necessarily: the algorithms are perfectly capable of extrapolating, which makes the argument that the synthetic images "harm children" (as the article repeatedly claims) hard to defend.

To be clear, I find child sexual abuse appalling, but maybe synthetic images would keep some people "satisfied" and leave the real children alone?


Pictures of naked kids aren’t necessarily illegal, or we’d be sending nearly every parent to prison (to take one example).

Besides, if it worked like that, training on anything under copyright would have a similar effect. These models would have a much bigger problem if they really could be "tainted" by a tainted training source. (Fingers crossed they can! But I doubt it)


For truly realistic CSAM, there would probably need to be some quasi-illegal images in the training set.

But "CSAM" also includes "character looks under 18" even if they look fully developed, which an AI model could do without training on actual CSAM.

Whether that makes the model illegal is a separate question. Is a LLM illegal if a clever prompt can get it to output a copyrighted poem? Is a human illegal if they can draw "CSAM"?


I have luckily never encountered any of this stuff, real or fake, so I may be hopelessly naive about what's depicted, but... I can ask for a picture of naked Donald Trump wallowing in mud and such a picture will be constructed even though there are no photos of Donald Trump wallowing in mud (naked or otherwise). So I don't think the training set necessarily contains illegal images.


I reckon this all comes down to a bunch of diligence/negligence judgements that will eventually be ironed out in the courts if necessary after some initial broad legislation, but as someone with no legal expertise, at least, it still seems pretty messy.

The inability to extract a training set from a model adds a lot of ethical ambiguity to generative images, as does the ambiguity in who's responsible for what the models produce. I think it would be utterly ridiculous to say that someone training models with CSAM has no culpability in what it produces (only for possessing the CSAM to begin with), but I also think it would be utterly ridiculous to hold people accountable for everything their models produce, given their flexibility.

What about writing a prompt that generates CSAM inadvertently? What if nobody involved intended to make CSAM but through some algorithmic shenanigans the prompt produced it? Should we legally require some amount of model testing before it's used? Would the tester be violating the law if the model failed the test, even if they reviewed every single image in the training set? Who's responsible for deliberately poisoned models with secret key terms, or for malicious data that is not CSAM but can trick the model into creating it?

Would some entity like Midjourney, which provides not only a model but a complete appliance for this process, be responsible for the images it produced? Does it matter if they authored the models they use? What if users can upload or train their own models? How does automation ethically affect these considerations?

Someone with legal expertise obviously would have a better grasp of these situations than I do, but I do know we’ve got a lot of growing pains en route with this technology.


I tend to think all these problems stem from making certain classes of fictional image illegal, and while that remains the case, all sorts of 'ridiculous' things can logically become serious offences. People have been convicted for possessing cartoons.

As far as I know it's still OK to make images of murder and torture.


No _known_ photos of said scenario.


> Is there a victim, is there harm being done?

While one might argue there's no immediate victim, the problem is a kind of sexual hedonic adaptation.

At gay saunas and sex shops it's well known that playing porn where people don't wear condoms greatly increases the incidence of condomless sex on the premises, and playing porn with condom users greatly increases condom use. Some saunas ONLY play porn with condom users for this reason.

I imagine these synthetic images would do the same thing. They are fueling a dangerous fire. Soon images won't be enough, and given that viewers are now desensitised through frequent exposure, it's easier to make the leap to reality.

If you’re a chocolate addict, having a hobby of smelling chocolate is a really dumb idea.


The images would desensitise them and normalise the fantasy. But there is a huge step between having a fantasy and taking it to reality. There are people who fantasize about being raped, but would hate for it to become reality.



