Cops bogged down by flood of fake AI child sex images, report says (arstechnica.com)
45 points by isaacfrond 10 months ago | 105 comments



So many commenters didn't read the article.

> The troubling case seems to illustrate how AI-generated child sex images can be linked to real criminal activity while also showing how police investigations could be bogged down by attempts to distinguish photos of real victims from AI images that could depict real or fake children.


sounds like doomers are still winning


I find child pornography abhorrent. But AI child pornography could actually help reduce child abuse - some abusers might get their fix from these images instead of needing actual children. This would be similar to the (yet unproven) claim that widespread access to porn has reduced physical sexual activity in general.


Serial killers and serial rapists tend to show a pattern of escalation, in both significance and frequency of the crime. I would not be shocked if crimes against children of this nature do as well.


I think most killers aren't serial killers though.


People were saying that for decades before generative AI was a glimmer in anyone's eye. Even the story we're commenting on is of someone who was doing both. It's time to retire that tired theory.


As with all interventions, it is a claim that can only be tested by comparing abuse rates in two otherwise very similar jurisdictions, where the only difference is the availability of such material. To think otherwise is to presume that all members of the target group are identical, which is implausible.

Even attempting to run a purely observational study of such a situation would be denounced, even if the researchers had no power to alter the laws.

If you did somehow manage to run such a study, I expect you'd find three things: (1) plenty of examples of people who were inspired to act due to the images they saw, (2) plenty of examples of people who used the images as a way out, (3) that there are way more abusers and victims than most people realise.


What you're saying is that the market share of the AI version, which does not involve additional exploitation of minors in its creation, is going up over time. The evidence is increasing that the theory is right.

If the AI version gets good enough, the supply of a substitute for real CP will go up and the price of real CP will go down, meaning there is less incentive to exploit real children.


That's assuming a lot of things that are shockingly poor choices to assume. Like that the only reason children are exploited is for photographs.


> Like that the only reason children are exploited is for photographs.

The unreasonable assumptions you assert I'm making are of your own imagination.


Your argument is built on assertions and assumptions that do not stand up to reality. It's not my imagination, it's your own words. AI will do nothing to reduce demand for CSAM because the premise is faulty.


If there's anything to learn from history, it's that desensitizing people isn't a good thing. Much like heroin wasn't a great way to get people off morphine.


In the 80s & 90s, many people feared that violence in video games, movies & music would desensitise young people to violence and lead to an increase in crime.

But the exact opposite happened. The generation who murdered and robbed their way through the streets of San Andreas committed much less IRL crime than their parents' generation.


I can think of a few meaningful differences between GTA and CSAM and a few key differences in policing since then.


Who are you seeing as being desensitized in this scenario? People searching the net for CP are already desensitized.


In Russia and Ukraine, simple possession of child pornography is not even illegal. Yet they still are the 2nd and 3rd top CSAM producing countries respectively...


While I agree in principle, it turns out that real CSAM may be used during the training phase and that obviously has to be prosecuted.


It turns out that for some people porn is a gateway drug to child abuse.

They get hooked and start looking for more and more extreme content; eventually that escalation has them watching child abuse material, and then they go from that to the real thing.

It might be why pedophilia seems to be more and more common post internet. If all you have is Hustler magazine you can't escalate to children.


The age of consent in the UK was 12 between 1275 and 1875, so I suspect "seems" doesn't map to "is".

And I vaguely remember an 80s or 90s survey which said something like 1 in 6 American women were victimised as minors.


> It might be why pedophilia seems to be more and more common post internet.

I suspect (admittedly without evidence) that it's not more common—it's just more socially unacceptable, and also easier to get caught.


How do they determine the images are generated and not real? It's not mentioned in the article.


It is still not particularly difficult to tell AI images from real ones. The problem is that "not particularly difficult" means minutes of a human looking closely at them. If you have a bunch of images, those minutes add up very quickly.


Not to mention all the people who already looked at the images that hopefully were filtered from the data set before training.

And none of these people likely received mental health care as compensation for that work.


Is it? I know in the early AI image days there were issues with the number of fingers or other weird things, but today there are plenty of images that I struggle to tell apart.


Never mind that in most places fake images of such scenes are also illegal and thus also need investigating.

Oh, and of course, "the police" consists of police officers. There's a joke where I live about the exam to become a police officer. There's just one question: "how much is 5+5?", and all answers between 2 and 17 EXCEPT 10 are correct ...


If the volume of novel images shot up 10x right after these tools started becoming accessible, that’d let you make a good guess that most of it’s not real.


It’s very obviously not going to be a meaningful legal distinction in terms of possession nor should it be when you think about it for more than 5 seconds.

It’s only particularly relevant in terms of trying to identify the source of abuse and trying to stop or interrupt it.


Under current US law, it is a very meaningful distinction: possessing pictures of real child abuse is treated differently from possessing purely virtual images. Ashcroft v. Free Speech Coalition (2002) makes this distinction very clear.

> The argument that eliminating the market for pornography produced using real children necessitates a prohibition on virtual images as well is somewhat implausible because few pornographers would risk prosecution for abusing real children if fictional, computerized images would suffice. Moreover, even if the market deterrence theory were persuasive, the argument cannot justify the CPPA because, here, there is no underlying crime at all. Finally, the First Amendment is turned upside down by the argument that, because it is difficult to distinguish between images made using real children and those produced by computer imaging, both kinds of images must be prohibited.

With the current composition of the court, I'm not convinced this precedent will hold up if a case reaches them. But until that happens, the law seems quite clear.


That only applies to child pornography which is not "obscene" under the Miller standard. I doubt that affects the legal status of the images being discussed at all.


> It’s very obviously not going to be a meaningful legal distinction in terms of possession nor should it be when you think about it for more than 5 seconds.

I've thought about it for 5 seconds, and it's not obvious to me.

The whole idea that possession of images is a crime is already a stretch, justified by the very real observation that the market for these images necessarily fuels abuse of real children.

But AI images cut that link.

I'm in complete agreement that the content is odious. But there's plenty of stuff that's odious and legal.


It feels like making a crime of possessing fake ivory items.


Why the fuck anyone would log in to argue that possession of child porn is some kind of grey area that’s up for debate is beyond me but you’re certainly not alone in this thread that’s for sure.


> Why the fuck anyone would log in to argue that possession of child porn is some kind of grey area that’s up for debate is beyond me but you’re certainly not alone in this thread that’s for sure.

1. Some people think free speech is a pretty good idea. And it doesn't really mean much if you only defend speech that you agree with.

2. There's pretty widespread agreement that drug crimes are inhumane and should be replaced with treatment. So-called "victimless crimes". IMO, AI CP has the exact same characteristics, but it's even less damaging to society - there's no chance that someone will get into a car crash because they're "using".

3. Convictions here absolutely destroy lives. The consequence of being convicted for possession of CP is not just jail (where people absolutely do get attacked by other inmates, oftentimes fatally), but also being put on the sex offender registry.

Do people who indirectly hurt children deserve this fate? I guess it's complicated. It does seem out of proportion to the punishments handed out to people who kill others with cars, for example. But I can certainly get on board with some sort of criminal consequences.

But if the "offender" doesn't hurt anyone, even indirectly, that does seem like an absolute nuclear warhead of a punishment for something that most people would struggle to explain why it should be a crime at all, other than "it's gross".


> 1. Some people think free speech is a pretty good idea. And it doesn't really mean much if you only defend speech that you agree with.

Child porn is not protected by the First Amendment, just as it does not protect speech calling for lawless action.

"Muh free speech" is very often the weakest argument in any kind of debate.


> Some people think free speech is a pretty good idea. And it doesn't really mean much if you only defend speech that you agree with.

I hate this argument when applied like this. There is no universe whatsoever where images of this nature constitute speech. Even when taken as the abstract principle and not the law. The images don't convey any message or idea. There's nothing to agree or disagree with. Free speech doesn't protect mouth sounds or information transfer in abstract, it protects expressions of opinion and ideas. Now, if you went to a protest for legalization and made a sign with the images or made a display at an art gallery then you have a case that it's part of your free expression.

There is an argument for making free speech protections purposefully overbroad because we don't think government can be trusted to evaluate the nuance in good faith, but this isn't one of those "well, it's arguable as to whether it constitutes speech" cases.

Not being speech doesn't mean it shouldn't be allowed, just that the reason it's allowed isn't free speech.


> I hate this argument when applied like this. There is no universe whatsoever where images of this nature constitute speech.

Do you think that ordinary porn constitutes speech, deserving of the protections of the First Amendment?


[flagged]


As odious as the content is, curtailing it gets into a mess of legal grounds, which is what the parent comment was trying to explain.

I understand getting riled up because of the content's nature, but you haven't improved the conversation by debating what specific legal grounds this type of speech in the US should be curtailed under. There's no direct harm to a victim; if a film maker includes a scene of a CGI baby being raped (for whatever reason they do it, be it for shock value, storytelling, whatever), should that also lead to a prosecution? That's the point where finding a valid legal ground enters; just defining it by "I don't like it" is not a valid legal reason.

I'm not American nor a free speech absolutist, much less trying to defend people looking at CP, but you do need strong arguments about the basis on which to make laws against this.


You asked.


It used to be about protecting the children. Now it's about thought crime. Once the link to protecting children is severed, the category of thought crime is free to expand without limit. Where does it end?


If there is no way to tell, then reasonable doubt will become much more prevalent.


It will be interesting to see how this plays out between two different scenarios:

1. Real-world photographs of people, minors or otherwise, that are altered via photoshop/AI/hand/etc. and then disseminated

2. Images, that appear or seem to be of minors, created entirely through synthetic means (photoshop/AI/hand/etc.) and then disseminated

Can drawings be illegal?



Thanks for this! If Wikipedia is correct, it seems that this is clearly answered.

> The PROTECT Act also enacted 18 U.S.C. § 1466A into U.S. obscenity law:[122]

> Section 1466A of Title 18, United States Code, makes it illegal for any person to knowingly produce, distribute, receive, or possess with intent to transfer or distribute visual representations, such as drawings, cartoons, or paintings that appear to depict minors engaged in sexually explicit conduct and are deemed obscene.

> Thus, virtual and drawn pornographic depictions of minors may still be found illegal under U.S. federal obscenity law. The obscenity law further states in section C "It is not a required element of any offense under this section that the minor depicted actually exist."

The difficult part of this is the phrase, "...that appear to depict minors..."


> Can drawings be illegal?

Yes, very much so.

https://en.m.wikipedia.org/wiki/Legal_status_of_fictional_po...

Wikipedia calls the US a legal grey area but it really isn't. It's just a long road to, "yep still illegal."


Synthetic AI images (2) would have to be trained on a corpus of real images (1), except in the very unlikely event that many child-porn images are generated by photo-realistic rendering of 3D animated models (CGI), where the 3D models are entirely created and animated by real content creators (i.e. entirely synthetic).

So I think the default assumption for most realistic AI-generated images, is that they were generated from at least some real photographs.

I am not an expert in AI images, but perhaps there is a way for AI to be trained on real pictures of children (innocuous, clothed, etc.), combined with real images of adult porn to create novel realistic child porn.


Hm, I'm thinking: if there's a way to convert a real image of a person into a manga version (there are tons of them on the internet), the reverse should be true. So you would just have to train it with manga (which is legal in Japan, for example), and then convert it to real. No real-people dataset needed.


Japan seems to be an interesting case, where real porn was censored, so it encouraged a thriving comic-book manga/anime/hentai cartoon porn scene, which seems to be legal there (you see sararīman reading them on the commuter trains!).

Censorship is the mother of metaphor - J.L.Borges

P.S. Japan does appear in the long list of legal exceptions in the Wikipedia article linked by sibling comments


I wonder if fake images will mean fewer attacks on real children.

This is a really hard issue to tackle!


Physically maybe, but knowing that someone is getting off to images of you as a young kid is still traumatising, especially if you still are a young kid when you find out.


We've had fake images for decades, with people justifying it the same way. "Drawings of children mean real children aren't being abused." I've yet to see the slightest shreds of evidence supporting that claim.


Please, don't think I'm trying to justify it. This thing is just plain nasty.

But these images do bring us some new (terrible) questions to ponder. Especially if the police can't differentiate between what's real and what's not.


They don't need to differentiate to charge you with possession of child abuse material - they can still do that for drawn images.

This just means that efforts to find and protect real children will become harder, as those children will be mixed in with photorealistic 'fake children'.

But possession will still (and quite rightly) be illegal.


It is a steady drumbeat of 'think of the children'. I have a daughter, and I would personally love it if politicians and their ilk stopped thinking about children so hard (and either found a better scapegoat, or stopped eroding rights and neutering new tech).

It is annoying and, frankly, it is keeping us from evolving as a species.


I am worried about the changes in laws this will bring.

As it stands, the state cannot force you to testify against your own interests. It is the state's responsibility to ensure it finds the evidence to present to the courts.

With situations like this, the chipping away of rights will begin.

How do the cops know which is real and which is fake? Make the convict tell.

The problem is a tech problem, which has to be solved by techies. Otherwise, we will end up in a quagmire in which privacy and protections against the state erode, or the AI technology itself will be heavily regulated / banned.


>The problem is a tech problem, which has to be solved by techies.

When has that ever been the way it worked? Or arguably should have worked?


> How do the cops know which is real and which is fake? Make the convict tell.

What would that accomplish? All you'll learn is that they're all fake.


TL;DR - Could fake AI child abuse content be getting pumped out as part of manufacturing consent among the general population to bring in more controlling policy, and then implement a totalitarian state?


They are already using child protection as an argument to implement totalitarian measures.

The most recent example that comes to mind is the European Union's Chat Control, which was meant to break all e2e encryption by enforcing on-device data scanning and fingerprinting in order to function.


It will be interesting to see if/how this will modify their argument "for the children". It is a nuanced discussion.

Is it a crime? Why / why not?

Is there a victim, is there harm being done?

Needless to say I personally find it appalling.


They (other than the cops investigating actual crimes against actual children) don't care about the children. They care about control. AI "CSAM" in particular (combined with deepfakes of adults made without the model's permission) is an emotional justification for more control in the form of draconian internet censorship and other preemptive measures.

Decades of megacorporation propaganda have paid off, and a fair portion of gen-z/y/x is hypervigilant about use without permission, even if the use is transformative and doesn't displace the original work. Their misguided concerns will play into the hands of the politicians seeking more control.

And it will do very little to prevent the spread of AI "CSAM"[1]. What are they going to do, regulate GPUs or any AI training software?

[1] Or any of the other concerns of today, like voice cloning or mimicry of personal likeness, like what just happened with Taylor Swift. The cat's out of the bag for images; OSS voice cloning is roughly at elevenlabs' quality, and it'll be surprising if there aren't, very soon, easy-to-use interfaces for voice cloning and tts.


Precedent for this exists [1]. The US Supreme Court ruled that completely virtual images are okay, but those derived using innocent real images of children would still be illegal.

So the AI images are likely illegal as well, because the training set would contain innocent images of children that helped generate them.

Training data may also unknowingly contain actual child porn. If so, then any naked/sex images it produces derive partially from child porn. And many jurisdictions have "strict liability" laws...

Find one single illegal image in the training set and the entire model would be tainted. And how many may have slipped through human review? [2]

[1] https://web.archive.org/web/20201109024622/https://www.nytim...

[2] https://www.theguardian.com/technology/2023/aug/02/ai-chatbo...


It's not really clear to me what your position is. You state that you find it appalling, but what exactly: the fact that there is a problem with such generative AI, the discussion that might follow, or its consequences?

Obviously the powers that be will try to use this to their advantage, tightening control over the population a bit, such as by rendering AI models illegal except for a select blessed few (read: megacorp ones).

IMO the general response shouldn't be just defending the freedom position for freedom's sake, but finding viable alternative solutions that don't undermine the freedom of the population.

For example... we could try to use AI to detect if the images are AI-generated or not :) And police should have the means to use AI, maybe given for free by the megacorps that benefit the most from AI.


There would have to be illegal images in the training set, right? Which I would imagine would make the whole model illegal, and therefore its outputs?


Not necessarily.

If the training data contained sexual images of consenting adults and legal images of children, many intelligent models could interpolate between the two.


Not necessarily: the algorithms are perfectly capable of extrapolating, which makes the argument that the synthetic images "harm children" (as the article repeatedly claims) hard to defend.

To be clear, I find child sexual abuse appalling, but maybe synthetic images would keep some people "satisfied" and leave the real children alone?


Pictures of naked kids aren’t necessarily illegal, or we’d be sending nearly every parent to prison (to take one example).

Besides, if it worked like that, training on anything under copyright would have a similar effect. These models have a bigger problem if they get “tainted” by a tainted training source. (Fingers crossed they do! But I doubt it)


For truly realistic CSAM, there would probably need to be some quasi-illegal images in the training set.

But "CSAM" also includes "character looks under 18" even if they look fully developed, which an AI model could do without training on actual CSAM.

Whether that makes the model illegal is a separate question. Is a LLM illegal if a clever prompt can get it to output a copyrighted poem? Is a human illegal if they can draw "CSAM"?


I have luckily never encountered any of this stuff, real or fake, so I may be hopelessly naive about what's depicted, but... I can ask for a picture of naked Donald Trump wallowing in mud and such a picture will be constructed even though there are no photos of Donald Trump wallowing in mud (naked or otherwise). So I don't think the training set necessarily contains illegal images.


I reckon this all comes down to a bunch of diligence/negligence judgements that will eventually be ironed out in the courts if necessary after some initial broad legislation, but as someone with no legal expertise, at least, it still seems pretty messy.

The inability to extract a training set from a model adds a lot of ethical ambiguity to generative images, as does the ambiguity in who’s responsible for what the models produce. I think it would be utterly ridiculous to say that someone training models with CSAM has no culpability in what it produces— only for possessing the CSAM to begin with— but I also think it would be utterly ridiculous to hold people accountable for everything their models produce, given their flexibility. What about writing a prompt that generates CSAM inadvertently? What if nobody involved intended to make CSAM but through some algorithmic shenanigans the prompt produced it? Should we legally require some amount of model testing before it’s used? Would the tester be violating the law if it failed the test, even if they reviewed every single image in the training set? Who’s responsible for deliberately poisoned models with secret key terms, or malicious data that is not CSAM but can trick the model into creating it? Would some entity like Midjourney that not only provides a model, but a complete appliance for this process be responsible for the images it produced? Does it matter if they authored the models they use? What if users can upload or train their own models? How does automation ethically affect these considerations?

Someone with legal expertise obviously would have a better grasp of these situations than I do, but I do know we’ve got a lot of growing pains en route with this technology.


I tend to think all these problems stem from making certain classes of fictional image illegal, and while that remains the case, all sorts of 'ridiculous' things can logically become serious offences. People have been convicted for possessing cartoons.

As far as I know it's still OK to make images of murder and torture.


No _known_ photos of said scenario.


> Is there a victim, is there harm being done?

While one might argue there's no immediate victim, the problem is a kind of sexual hedonistic adaptation.

At gay saunas and sex shops it's well known that playing porn where people don't wear condoms greatly increases the incidence of condomless sex on the premises, and playing porn with condom users greatly increases condom use. There are some saunas that ONLY play porn with condom users for this reason.

I imagine these synthetic images would do the same thing. They are fueling a dangerous fire. Soon images won't be enough, and given they're now desensitised through frequent exposure, it's easier to make the leap to reality.

If you’re a chocolate addict, having a hobby of smelling chocolate is a really dumb idea.


The images would desensitise them, and it would normalize them. But there is a huge step between having a fantasy and taking it to reality. There are people who fantasize about being raped, but would hate for it to become reality.


When other cultures try to impose their morals on the West, we say it is oppression (e.g. compulsory headscarves). When we try to impose our morals on others, that's somehow totally different, and they need to follow our rules even if they're in a totally different country.


Modern progressive discourse argues that when the West exports its culture it is native-erasure and neo-colonialism (i.e. racist). So I think your argument is very much up for debate. Sometimes a part of someone's culture is trash. Sure the Mayans built cool temples, but they were also ultra-violent.

It sure sounds a lot like you are defending pedos here...


Why would you think Mayans are any more violent than other contemporary or later civilizations?


The reputation of the Mayans was of large scale human sacrifice. As I am not a historian I have no idea how accurate this reputation is, but that reputation is a reason for thinking they were more violent than the average contemporary or later civilisations…

…although now I realise I have no idea how it compares to the mass burning of suspected witches in parts of europe.


Unjustified killings are still a thing. Are Mayans ultra violent but Russians not?


Perhaps I've missed the news story about Russian soldiers cutting out prisoners' hearts as blood sacrifices to the maize god…

Regardless, modern violence is also condemned when it happens. Unless done by the in-group, then apologists try to brush it away as necessary for some reason or another. If your point was meant to be about the latter, I'd agree, though it didn't come across like that on the first reading — those sacrifices (and indeed the in/out group difference) are the "why" in "Why would you think Mayans are any more violent than other contemporary or later civilizations?"


"Modern progressive discourse" is an US culture export. Also is using "racism" as a substitute for "bad".

And what would be a pedo by US standards might be legal and fine in other countries (looking at Japan lolicon situation).


Hehehehehe, my code of conduct starts with "No USA cultural imperialism allowed" (https://codeberg.org/ltworf/international_code_of_conduct/sr...)

Some people loved it; others told me I'll never find a job because I'm a bad person who can't work with others (I have a job).

edit: in case that wasn't clear, it still doesn't allow child porn…


Ironically, in trying to transcend a conflict you never needed to transcend because you're not even a part of it, you just sound like you picked a side.


It is purely based on past experience of who opens bug reports to complain about things.


And you chose to react to it by publishing something about it. It wasn't inevitable. You can always ignore irrelevant issues.


A code of conduct is compulsory these days, or GitHub nags you about lacking one.

And I couldn't pick one of those that say "no skill discrimination", since that is just wrong. So since I had to write my own, I might as well address the issues I've encountered.


Yeah that's my whole point. You made a choice to position yourself in a debate where you don't even have a voice and in pretending to transcend it you just sound like one of the sides in it.


Your whole point is saying that I'm wrong, while having no coherent argument for what you're saying.

Ok.

Can you come up with an argument next time?


Nah, it's pretty clear. You are pretending to transcend an American political issue, in a debate where your opinion doesn't matter in the slightest, but failed and simply took a pretty tired position.


My opinion does matter :) The code of conduct is there to inform people that if they want to contribute to projects where that code of conduct is adopted, acting like you are doing now will get them a ban.


Good thing you have that power to make you feel good huh


You get horny telling random people on the internet "your opinion doesn't count". I write free software to help others.

We aren't the same.


Hey, you're the one imagining things about other people's genitals. But if you need this kind of reassurance to feel self-important, by all means. I'm glad we're not the same.

edit: lol cowsay-pervert? sure buddy you're changing the world haha


> Hey you're the one imagining things about other people's genitals

Just not yours :D

I don't have a need of reassurance. I just don't care about the opinion of some moron who thinks he's clever :)

Sure… why not sort the projects by popularity and check the ones with millions of downloads instead? Maybe because you're just a sad waste of atoms :)


Wow, truly sad how much projection is going on. You should get your pressure checked, buddy. Cheers


Who is “law enforcement”? Not “cops”, as per the YCombinator headline. You can’t get a cop to do a thing about online content. I would really like to know who is feeling bogged down. I have a serious case for them in which a person was actually drugged, trafficked, and extorted.


If this is real, please hit up ICE C3 [1]. They specialize in investigations for this type of thing, specifically sexual exploitation and trafficking.

[1] - https://www.ice.gov/partnerships-centers/cyber-crimes-center


If a child, contact NCMEC (National Center for Missing and Exploited Children)

https://report.cybertip.org/

24/7 phone line: +1-800-843-5678

Otherwise/in addition, FBI is a good bet.


You need to contact FBI child enforcement immediately.

FYI you could have some liability if you delay.


She is an adult. Yes, adults get trafficked, especially good-looking ones facing health and economic challenges. She has a daughter. Children are hurt when adults are trafficked; no one gets this. The trafficked party was then used to extort me by someone actually pretending to be “cops”, with cross-platform analytics and data broker access.

It's the craziest thing you could ever have imagined, and only an extremely rich tech narcissist could come up with what they did to a woman already being trafficked. They did so to secondarily traumatize me.


This will be a shitstorm... How do you determine the age of an AI-generated persona?


If it looks like an 8-year-old, you won't convince the FBI that it's actually a DnD character who is 106 years old under a spell of rejuvenation, and even if you do, it won't matter for your sentencing.


I mean sure... but that's because there's an actual person photographed there. What happens when you get into "does (s)he look 17 or 18 in this photo?" with no way to prove anything, since the person does not exist?


I imagine the same way they do for actual photographs in which they haven’t identified the victim, which I’d guess is a pretty big percentage. Stuff like this is never cut-and-dried, but the law deals with ambiguous situations all the time. The test for what even constitutes pornography compared to, say, medical images involving genitals is a good example.


How would they differentiate generated images of people who look young from generated images of people who are underage?



