Civitai doesn't show the pornographic models/LoRAs unless you're logged in, but it's there.
That being said, the article is an awful sensationalist hit piece. It's as accurate as saying "Andreessen Horowitz Invests in YouTube, Which Profits from Schizophrenics Making Videos of Racist Tirades" or "AH Invests in GitHub, Which Profits from Software Banned in China and Iran"; true strictly speaking, but completely misrepresenting the purpose of the service.
CivitAI seems to have improved the filtering on the logged out experience, then.
Which, yay, big plus; porn slapping visitors in the face used to be a big deal.
It's still definitely there, and 404 clearly has a paid account, since they posted about deliberately violating the nonconsensual porn policy for science with the paid onsite generation services.
> LoRA: Low-Rank Adaptation of Large Language Models
> An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency.
It's a smaller (space-wise) model that's much easier to train, in terms of both computation and input data. It accomplishes a specific style or goal when combined with a full-sized model.
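For anyone wondering what that looks like concretely, here's a minimal sketch of the rank-decomposition idea in PyTorch (names, rank, and scaling are illustrative assumptions, not the reference implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer and learn only a low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # pre-trained weights stay frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # original output plus the trainable low-rank correction
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # only A and B train: a tiny fraction of the full weight matrix
```

Only the A and B matrices get saved and shared, which is why LoRA files are megabytes instead of gigabytes.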
Which is a good thing for the non-porn SD world, because a lot of the improvements over the base models in human anatomy in non-porn-focussed third-party models come from merging porn-focussed models in with other trained models.
(Which parallels the role of nudes in other visual arts domains.)
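If you're curious how those merges work mechanically, it's usually just a weighted average of checkpoint weights. A rough sketch, assuming two Stable Diffusion checkpoints with identical keys and the safetensors package (file names and blend ratio are made up):

```python
from safetensors.torch import load_file, save_file

a = load_file("model_a.safetensors")
b = load_file("model_b.safetensors")
alpha = 0.3  # how much of model B to blend in

# simple weighted merge; real tools also handle mismatched keys and per-block ratios
merged = {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}
save_file(merged, "merged.safetensors")
```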
Somebody is targeting this Horowitz person, whom I neither know nor care about.
They dug and dug, and found something to attack them with: a connection, through investment, to a company allegedly doing something illegal and deeply immoral.
Let me put extra emphasis on this: Horowitz isn't allegedly doing something illegal and immoral. A company he is investing in is allegedly doing something illegal and immoral.
We're supporting this lynching campaign by tolerating it and giving it attention.
I feel like the long term equilibrium of stuff like this is that in the not too distant future, anyone will be able to easily generate weird deepfake inappropriate porn / content for anyone. It seems unavoidable.
I might argue this could be good in that this sort of stuff will become so commonplace that it won't be a big deal and people will learn to (be forced to) shrug it off. Up until now-ish, the scarcity of celebrity nude leaks and revenge porn type stuff made it somewhat of a forbidden fruit. I like the idea of destigmatizing this sort of stuff so that it loses power.
That said, I'm sure there will be some casualties along the way, e.g. vulnerable teens who are picked on by mean girls/boys, semi-celebrities/public figures who are targeted by online trolls, etc. I hope that the prevalence of this sort of stuff enlarges the conversation around it, increasing the reach of strategies to deal with it psychologically, and significantly reduces its impact.
I don't know many celebrities or politicians, but I suspect that after they see a few weird / porn related photos of themselves, they stop caring.
I disagree with the tone of the headline. CivitAI is a good resource, and the rest of the AI world could learn from it, for example for LLM LoRA/grammar hosting.
But I am concerned, for a different reason.
Civitai's enshittification potential is sky high. They are hosting loads of enormous models and images for free, on a very spiffy, heavy website. There's no way that's sustainable.
> Civitai's enshittification potential is sky high.
I share these same concerns.
What this space desperately needs is for models to be shared via torrent. As it is today, Civit.ai has had problems with creators deciding to take down their own models, and suddenly the people who were using them have no place to find them.
I suspect we'll see more and more lockdowns on what models are deemed "appropriate" and it would be nice to have a decentralized alternative.
Not to mention torrenting is the only rational way to sustainably host such large files at no cost.
Uh, I don't want to see an unmoderated, distributed CivitAI. That would be an unspeakable wasteland.
And there already are torrents of SD models. Just filter by the .safetensors filetype.
It would be nice if their hosting was decentralized, though. Users seem to hoard LoRAs and merges in personal collections anyway; might as well help distribute them so Civit doesn't have to enshittify so quickly.
This is the next pearl clutching headline that people will be reading over and over for any models that generate images.
"Oh no, the model can generate porn!"
"Oh no, it can generate realistic photos of people!"
"Oh no, it can generate violent or racist images!"
I'm so sick of people arguing that we should just shut down AI research or commercialization because it could be used maliciously. Think of how many grocery store products could be used maliciously if someone wanted to. Should we not allow people to buy bleach? Should we not allow knives to be sold?
People need to get over their misgivings about this new technology and start taking accountability for the fact that models can be abused like any other tool. That doesn't mean we should just not give anyone access to them.
Is the article suggesting that it should be shut down entirely? I didn't read that anywhere. I would actually think that it's asking for basic accountability and due diligence.
> Is the article suggesting that it should be shut down entirely?
No, it's misleading by selective focus (like the previous article it references) to get people to come to that conclusion themselves, which is a much more effective persuasive technique than an explicit demand: with an explicit demand, people assume your evidence was selected to support the demand, which may have an ulterior motive, while they are less likely to do so if your presentation is superficially merely informative but actually selectively crafted to motivate a particular conclusion.
The problem here is assuming that civitai would be the ones who are accountable. I don't think that's the case since they're just hosting a collection of perfectly legal Stable Diffusion checkpoints (aka "models").
The people who download a checkpoint and a LoRA (a special kind of sub-checkpoint or "model modifier" if you will) of some celebrity are the ones who would be generating something like porn using that celebrity's likeness. Not civitai.
In this instance civitai is providing tools that can be used to generate unauthorized porn just like Home Depot provides tools that can be used to make illegal weapons.
Is it illegal to make a LoRA of someone's likeness without their permission? No. Because until it is used to generate images which are then published it's like a gun in a safe, doing nothing... Just sitting there.
There are no federal laws protecting likeness anyway (only certain states have them). So civitai has no real obligation there.
Is it ethical to make a model of someone's likeness without their permission though? I don't know. To me, it's akin to taking a picture of someone in public... If they're in a public space they're "broadcasting" their likeness.
You can make a really great LoRA of anyone's likeness with just a handful of images. If you're skilled you can probably do it with just one image (using some controlnet stuff).
I think there's room for allowing people to control their own likeness while not fully banning AI content generation.
I mean, do people have a right to privacy? Is there some line between having privacy and infringing on someone’s freedom of expression through AI content generators that you would accept?
If I take your picture and record your voice in public and make a blockbuster movie starring you using this technology, you wouldn't think that would be an issue? Presuming you are not a member of SAG-AFTRA.
Sorry you are tired of people voicing legitimate concerns; perhaps you can tune them out and use your own likeness in pornography to help these companies speed things along.
People have been using various chemicals for cleaning and knives for cooking for thousands of years. They are embedded in our culture. LLMs only became popular a year ago and the farthest back you can go with their origins is the 1950s. It is not at all unreasonable for people to say "hey should we require that this new tool have safeguards built in so it isn't horribly abused?", especially when you consider that these are not just tools that have been fashioned from the natural world, these are products provided by companies that are in some cases gleefully amoral. A space moving quickly is reason for more criticism and caution from the general public, not less.
And if safeguards cannot be designed into your tool, then that is malicious design.
Unsure why your comment is in the grey. I'm with you. When they first came on the scene a few months back they had some nice tech-meets-the-real-world articles, but the headlines and articles attached seem to have gone downhill rather quickly.
"Nonconsensual AI porn" is a weasel term because it implies that it should be necessary to get someone's consent to create fake porn using their faces.
Nonconsensual porn laws generally aren't restricted to commercial use, and some include fake images with intentional and recognizable use of likeness (some also don't; it's a mixed bag).
Yes, commercial use of likeness is also an issue, and it may or may not be violated simply by distributing something using the likeness on a commercial website like civitai.
Right now CivitAI is basically Stable Diffusion social media. It doesn't seem profitable at all, even if they are selling user data or getting subs or whatever.
But if they start charging for downloads or whatever, they may cross a line into making money off likeness, and not just "hosting user content" like Facebook and Twitter and any oldschool image hosting service gets to say they do.
How can that possibly be enforced though? You can't really stop people from drawing or photoshopping stuff for personal use, and this is essentially a further extension of the technology.
Well said. If you fancy using my likeness as a dartboard, or in a meme, or as a Photoshop asset, or painted on a canvas, or drawn by AI, or mistakenly randomly generated, etc, great! Have fun. Not my circus, not my monkeys.
I'm not entitled to categorically own/forbid using a look. That's nonsense and leads to self-inflicted quandaries: How do I know a video of unknown provenance contains me, not a dead ringer that gave consent? How different must a depiction be to not require my consent? 9 pixels? 30%, whatever that means? At least an eye color change?
It's impossible to consistently enforce, presumptive, and effectively thought-policing a concept. In short: it's absurd.
> How do I know a video of unknown provenance contains me, not a dead ringer that gave consent?
> It's impossible to consistently enforce, presumptive, and effectively thought-policing a concept. In short: it's absurd.
I mean, come on. It’s fine to disapprove of the law, but this isn’t some uniquely difficult thing that the legal system couldn’t possibly handle. It’s certainly nowhere near the level of complexity and ambiguity of, for instance, criminal fraud law, where things like the intent of the accused and the “reasonable person” are routinely crucial elements.
Actually, it is arguably a higher level of complexity, because while intent is not normally an element of the right of publicity, it has been looked to, along with effect, to disambiguate which cases that aren't simple image or voice likenesses (such as voice impersonations) are nevertheless covered.
Across the internet spreads a noisy video. It's pornography with an absolute dead ringer celebrity face. There are no context clues -- physical SMT, celebrity references, video provenance, etc.
I mean, if you don’t know who is responsible, what do you ever do? What if you find a dead body but no clues about how they died? What’s uniquely tricky about this particular type of crime?
“In the United States, the right of publicity is a state law–based right, as opposed to federal, and recognition of the right can vary from state to state.”
So, the USA-specific answer is that it depends on the specific US state(s) whose law is relevant to the action in controversy.
There are countries with national rights in this area, but the USA is (and your source highlights this) not one of them.
The irony of course is that people are only able to create deepfakes of non-celebrities because social media has already gotten the average user very comfortable with letting go of their privacy.
Taking a pornographic movie and putting someone's face in place of one of the actors' does not violate their privacy in any way, since nothing was shared that wasn't already public before (their face).
Sharing a photo on Facebook doesn’t imply a public license for any pervert to use it for pornography. Don’t complain when that gets codified in law either — people like you are way too cavalier with other people’s livelihoods.
People are too entitled in trying to own a look. Can I spread porn fakes of myself? What if I look identical to Taylor Swift? Do I lose my right to free expression?
Blame Disney, not me. And I'd argue most individual people are interested in controlling what is done with their own visage more than anything. It's the same legal logic as revenge porn laws. If you are my enemy, all I have to do is find your sibling's Instagram account and I could make an entire <yourlastname>hub website.
> What if I look identical to Taylor Swift? Do I lose my right to free expression?
Taylor Swift's corporate legal team would already do a pretty fine job of excising that right from you. No additional legislation needed.
Now, if you are trading commercially on the appearance similarity in a way which presents, either explicitly or implicitly, your images as Swift's, then you open yourself up to right of publicity claims in some jurisdictions, and the same may be true of revenge porn laws in some jurisdictions, even without the context being commercial.
If you genuinely believe that my kosher, not-Swift deep fakes would ever legally survive -- regardless of context/provenance claims -- then I have a bridge to sell you.
No, they could not easily sue and win, especially if no money was exchanged and the video wasn't presented by the maker as actually being of the person in question. There are plenty of porn sites whose sole purpose is deepfaking rich and powerful people performing sex acts. Those sites use the word FAKE or something akin to that in the title/intro of the video to add some protection.
Have you used Civit.ai? While it does cater to audiences interested in "adult" content, I don't think any of its active users would characterize it as "a deepfake marketplace trading in celebrity and private-person identities".
Civitai is just a popular place to host stable diffusion models / LORAs / textual inversions, not some kind of fake nudes marketplace. It's the huggingface of stable diffusion.
But that wasn't the choice; the choice was to create an open hub for basically everything related to Stable Diffusion, with limits for a broad range of illegal things, but without policies against things that are a step or two removed from that, because even one step out, the "well, combined with other things it could be used badly" standard bans everything.
The highlighted bounty feature is technically accurately, but misleadingly, described. Yes, people can and do post requests for models of real celebs there, but they also post requests for fictional character models, for concept models not tied to particular real or fictional characters, for technical assistance with model training, and for people to generate showcase images for existing models.
So the choice was not screening for it. Regardless, without moralizing, a choice was made. This is a shocked pikachu situation.
[edit] The fact is we've had this kind of content for a long time, and we've developed techniques for identifying and removing it. There are great nudity-detecting ML models. I suspect the choice was made in part ideologically, and in part to drive sensationalist coverage, and it looks like it's working.
No, they screen, remove, and ban people for policy violations. (And that has improved over time, at least, the violations that have slipped through have dropped over time.)
What isn't a policy violation is separate things which could be combined together, or with other things, to make something that would be a policy violation.
Some are available early exclusively to paid members, and there are onsite model training and generation services, which are paid (in a virtual currency that can be bought with cash).
Well, huggingface and tons of others offer stable diffusion training services. Google essentially hosts dreambooth with options for a subscription. That's nothing unique.