
This is really excellent work. The fact that it is seemingly easy to deprogram LLMs makes me hopeful. I wonder whether that will lead to more barriers in the future, though, like eliminating "harmful" content already at the dataset level.



Cleansing the dataset is what has made the last two releases of Stable Diffusion duds. Even though it's much less technically advanced, the latest uncensored version still beats everything else out.


> Cleansing the dataset is what has made the last two releases of Stable Diffusion duds.

Which last two releases? SDXL is very much not a dud. SD 2.x was (2.1 less so than 2.0, but not enough to make up for it).

SD 1.5 still has a bigger ecosystem of fine-tunes, etc., and it's less resource intensive, so it's superior for some work, but SDXL is rapidly catching up in ecosystem support in a way that 2.x never did.


Go on CivitAI and sort by popular... There is an awful lot of nudity and anime, and SDXL struggles with both. For censored SFW image generation, DALL-E 3 now beats out SDXL by a wide margin. The SDXL resource requirements are somewhat of an issue as well: 8 GB cards barely work, and that is still the largest consumer market segment.

SDXL's only real differentiation now is the ability to locally host and avoid the OpenAI / Microsoft censorship filter. Leaning into that would be a smart decision, although maybe it conflicts with Stability's attempts to raise money.


> Go on CivitAI and sort by popular... There is an awful lot of nudity and anime, and SDXL struggles with both

Whether or not the base model does them well, there were (and very quickly after release) far more resources (checkpoints, LoRA, TI) for both built on SDXL than there ever were on the 2.x base models.


Let's be honest, the NAI leak is what really made it blow up. Ever seen the front page of civit.ai with the filters turned off?


The same technology they used allows for reintroducing the concepts into the model (rough sketch below), which I guess is as "bad" as removing safety, making the entire security theatre pointless.

Plus, any combination of concepts that are harmless on their own could be harmful together, so really, no.

Now, maybe they are making the argument that generative models are too dangerous to be given to anyone at all except a few government-blessed gatekeepers, but such an argument would probably need proof.
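
A minimal sketch of what such a reintroduction could look like, assuming a LoRA-style parameter-efficient fine-tune on a frozen layer (plain PyTorch; the layer sizes, rank, and training data here are purely illustrative, not taken from the paper):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a small trainable low-rank correction."""
        def __init__(self, base: nn.Linear, rank: int = 4):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # original weights stay fixed
            self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))

        def forward(self, x):
            # original output plus the trainable low-rank update
            return self.base(x) + x @ self.lora_a.T @ self.lora_b.T

    layer = LoRALinear(nn.Linear(768, 768))
    opt = torch.optim.Adam([layer.lora_a, layer.lora_b], lr=1e-3)

    # Toy loop: a handful of examples of the "removed" concept is enough to
    # steer the frozen layer's output back toward it via the adapter alone.
    x, target = torch.randn(16, 768), torch.randn(16, 768)
    for _ in range(200):
        loss = nn.functional.mse_loss(layer(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()

The point being that only the tiny adapter needs training, which is why putting the safety solely in the weights is theatre: whatever was edited out can be cheaply nudged back in by anyone with the model and a few examples.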


I'm not sure it makes me hopeful that anyone can have a horrible AI in their pocket.



