
I’m getting an NSFW warning on the sample prompt provided by the app itself:

> Photo realistic oil painting of a baby wearing a bib sitting at a wooden table in front of an egg. Painting by Leonardo da Vinci in the style of Mona Lisa (1519). The baby has a laughing expression. Medium shot with a tilted frame. The person is to the left of the table. Warm lighting. Photorealism.




Same for me. Got an NSFW warning for "A ten year old boy daydreaming"


If what the model wanted to generate based on that prompt is considered not safe for work, you know what the training data must’ve been like...


The prompt "A ten year old boy daydreaming" is a bit haunted, as it's not specific enough to generate good results.

But running that prompt does not actually produce NSFW content, contrary to what you suggest. Most of these NSFW "blockers" simply act on keywords like "ten year old boy" rather than trying to understand the output.

Here are 9 images generated from the prompt "A ten year old boy daydreaming" with various settings. None of them are good (bad prompt + no tuning on my side), but none of them are NSFW either: https://imgur.com/a/FcBv05w

(Slightly off-topic: there seems to be a lot of misinformation in this thread, from misunderstanding the training data to how the NSFW content blocking that some platforms implement actually works.)
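
For what it's worth, the difference between the two approaches is easy to sketch. The snippet below is purely illustrative (BLOCKED_TERMS, nsfw_image_score and the threshold are made-up stand-ins, not anyone's actual filter): a keyword filter rejects the prompt before anything is generated, while a vision-based check scores the image that actually comes out.

    # Naive keyword filter: flags the *prompt* on substrings alone,
    # without ever looking at the generated image.
    BLOCKED_TERMS = ["baby", "boy", "girl"]  # illustrative list only

    def prompt_is_flagged(prompt: str) -> bool:
        lowered = prompt.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    # Vision-based check: score the generated *image* instead.
    # nsfw_image_score stands in for any image classifier, e.g. one run
    # on embeddings of the output picture.
    def image_is_flagged(image, nsfw_image_score, threshold: float = 0.5) -> bool:
        return nsfw_image_score(image) >= threshold

    print(prompt_is_flagged("A ten year old boy daydreaming"))  # True: "boy" is a keyword hit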


I was just being snarky. I didn’t actually think the content filter takes into account what the model would output; I assumed it was just a really bad keyword-based filter.

Thanks for sharing these examples. Good to see what the model actually produces with that prompt.


It is a vision-based NSFW classifier, not a keyword-based one.


More likely, the NSFW classifier is tuned to accept a lot of false positives, because that’s the only way to reliably block the outputs they actually care about, even though it means flagging content that isn’t NSFW at all.
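
A toy example of that tradeoff (all scores, labels and thresholds below are made up): a low decision threshold blocks everything genuinely unsafe but also sweeps up harmless images, while a high threshold lets unsafe images slip through.

    # Hypothetical classifier scores for six generated images
    # (higher = "more likely NSFW"), paired with whether they truly are unsafe.
    scores_and_labels = [
        (0.05, False), (0.20, False), (0.35, False),
        (0.45, False), (0.60, True),  (0.90, True),
    ]

    def outcome(threshold):
        false_positives = sum(1 for s, unsafe in scores_and_labels if s >= threshold and not unsafe)
        missed_unsafe   = sum(1 for s, unsafe in scores_and_labels if s <  threshold and unsafe)
        return false_positives, missed_unsafe

    print(outcome(0.7))  # (0, 1): strict threshold lets one unsafe image through
    print(outcome(0.3))  # (2, 0): lax threshold blocks all unsafe images but flags two safe ones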


Well, it is true that a 10 year old can be quite distracting at work. However, so is playing with Stable Diffusion models in general. (Which is probably why we are getting that message on basically every prompt.)

;)


What?


That’s a dark thought right there.


Same for me.

"a river between mountains"


Well, that one makes more sense than some of the others...


Seems to be the word "baby". It worked for me after removing it.


Same for me. I don't even understand what could be detected as NSFW.


I think he false-flagged the script.


Remove the word "egg".



