The data used to train those models is specifically filtered to remove sexual content, so the model can't generate porn because it has no idea what porn looks like, beyond the few samples that made it past the filter.
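In case it's not obvious what that filtering looks like in practice: conceptually it's just an NSFW classifier run over every image/caption pair before training starts. A minimal sketch in Python (the `nsfw_score` function and the 0.5 threshold are hypothetical stand-ins, not any lab's actual pipeline):

```python
def filter_training_set(samples, nsfw_score, threshold=0.5):
    """Keep only image/caption pairs whose NSFW score is below the threshold.

    `nsfw_score` stands in for whatever classifier the lab actually uses.
    Because no classifier is perfect, a few false negatives slip through --
    the handful of samples that "made it past the filter".
    """
    return [(image, caption) for image, caption in samples
            if nsfw_score(image) < threshold]
```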
Why is it that sexual content is so frowned upon in this space? If this were a content publishing platform I would understand that advertisers don't want it, but this is literally dictating to people what is good and bad. I just don't understand this Puritan outrage over text-to-image porn generation.
Because you can't control what the model is going to output in response to a query. The model is trained to respond in a way that is aligned but there is no guarantee.
Since we certainly don't want to show generated images of porn or violence to someone who didn't specifically ask for them, the easiest way to ensure that doesn't happen is to just not train on that kind of data in the first place.
The worst that can happen with a model trained on "safe" images is that the image is irrelevant or makes no sense, meaning you could deploy systems with no human curator on the other end, and nothing bad is going to happen. You lose that ability as soon as you integrate porn.
Also, with techniques like inpainting, the potential for misuse of a model trained on porn/violence would be pretty terrifying.
So the benefits of training on porn seem very small compared to the inconvenience. I don't think it has anything to do with puritanism; it's just that if I'm the one putting dollars and time into training such a model, I'm certainly not going to take on the added complexity and implications of dealing with porn just to let a few people realize their fetishes, at the risk of my entire model being undeployable because it outputs too much porn or violence.
Uh, have you seen American/European mainstream pornography? It's already pretty violent (e.g. face slapping, choking, kicking, extreme BDSM).
I just don't see why this stuff is allowed and protected by the law (if it's not recorded and published, it's illegal) and then we're suddenly concerned about what text can do.
Just one of the many double standards I see in Western society.
> uh have you seen American/European mainstream pornography? it's already pretty violent
That's not at all what I am talking about. What I am saying is that such a model would give everyone the ability to create extremely realistic fake images of someone else within a sexual/violent context, in one click, thanks to inpainting. This can become a hate/blackmail machine very fast.
Even though DALL-E 2 is not trained on violence/porn, it still forbids inpainting user-uploaded pictures with realistic faces to prevent abuse, so now imagine the potential with a model trained on porn/violence.
Someone is eventually going to do it, but back to your initial question about why it's still not done yet, I believe it's because most people would rather not be that someone.
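For what it's worth, the gate described above is conceptually just a face detector run on the uploaded image before an inpainting request is accepted. A rough sketch using OpenCV's stock frontal-face detector (purely illustrative, not OpenAI's actual check; a production system would use something far more robust):

```python
import cv2

# Stock Haar-cascade frontal-face detector that ships with opencv-python.
_FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def allow_inpainting(image_path: str) -> bool:
    """Refuse inpainting on uploads that appear to contain a human face.

    Illustrative only: the policy logic is what matters -- if a face is
    detected in the user-supplied image, the inpainting request is rejected.
    """
    image = cv2.imread(image_path)
    if image is None:
        raise ValueError(f"could not read image: {image_path}")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) == 0
```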
One example risk is someone using computer-generated content to extort money, demand ransom, etc. The cheaper and easier this becomes, the more likely it is to be weaponized at scale.
But wouldn't the ability to auto-generate blackmail material mean the value of blackmail would fall? Just from a supply and demand perspective, it makes sense to me that deepfaked kompromat would trade at a serious discount, especially if everybody knows it could have been generated by an AI.
Someone like Trump would just shrug and say the pee tapes are deepfaked. I don't think it's possible for AI to bypass forensics either. So, again, this narrative that "deepfake blackmail" would be dangerous makes no sense.
I think it's less for Trump level people and more for basic scams. Imagine just automating a Facebook bot to take someone's picture, edit it into a compromising scene, and message them saying you'll share it with their friends if they don't send you some Bitcoin. This gives you a scalable blackmail tool.
Of course, after a while it'll probably stop working, but there will be a period of time where it can be done profitably and a longer period where it will be obnoxious.
And, of course, you could probably always use the tool to scare children, who, even in the future, might not know that everyone would shrug off the generated pictures.
> The cheaper and easier this becomes, the more likely it is to be weaponized at scale.
...and the more people will become aware of it and stop believing in the "fake reality".
Ensuring this technology is only available to a tiny subset of the population essentially hands all the power to distort reality to that tiny group of people.
Because it's a lot more annoying for your innocuous content to be rendered as porn when the AI happens to interpret it that way than it is for you to be unable to render your pervy desires intentionally.
I imagine a large part of it is that it could generate photorealistic child porn (also "deepfake" porn of real people), and there's not really a good way to prevent that entirely while also allowing generalized sexual content, AFAIK. There's probably some debate about how big a problem this really is, but no one wants their system to be the one in the news stories about how it's popular with pedophiles. That was the issue AI Dungeon ran into.
I'd guess that, for general-purpose companies, it's an area full of legal ambiguity and potential for media outrage, so it's just not worth the risk. However, given the evidence of human history, it's certain that someone with an appetite for exploiting this niche will develop exactly that kind of tool.
So no, your "friend" can't use it for that.