
Yeah, although isn't it strange that it never accidentally inserts white people into results like this? https://x.com/imao_/status/1760159905682509927?s=46

I’ve also seen numerous examples where it outright refuses to draw white people but will draw black people: https://x.com/iamyesyouareno/status/1760350903511449717?s=46

That isn't explainable by a system prompt.




Think about the training data.

If the word "Zulu" appears in a label, it will be a non-White person 100% of the time.

If the word "English" appears in a label, it will be a non-White person 10%+ of the time. Only 75% of modern England is White and most images in the training data were taken in modern times.
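As a toy illustration of that frequency argument (the 25% figure is assumed from the population numbers above, not measured from any actual training set):

    # Toy model: if ~25% of images labeled "English" in the training data
    # show non-White subjects, a typical 4-image batch will usually
    # contain at least one. The probability here is an assumption.
    p_non_white = 0.25   # assumed share among modern "English"-labeled images
    batch = 4            # common batch size for image generators

    expected = p_non_white * batch                   # 1.00 per batch
    p_at_least_one = 1 - (1 - p_non_white) ** batch  # ~0.68

    print(f"expected non-White images per batch: {expected:.2f}")
    print(f"chance of at least one per batch:    {p_at_least_one:.2f}")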

Image models do not have deep semantic understanding yet. This is an LLM calling an image model API, so "English" + "Kings" are treated as separate concepts, and you get 5-10% of the results as non-White people, per the training data. A rough sketch of that kind of pipeline follows the screenshot below.

https://postimg.cc/0zR35sC1
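For what "an LLM calling an image model API" looks like in practice, here is a minimal sketch of that two-stage pipeline. Everything in it (object names, methods) is hypothetical, not Gemini's actual internals:

    # Hypothetical two-stage pipeline: an LLM rewrites the user's request,
    # then a separate image model renders the rewritten text. The image
    # model never sees the original request, only the caption-like prompt.
    def rewrite_prompt(llm, user_prompt: str) -> str:
        # Any system-prompt instructions (e.g. diversity nudges) get
        # baked into the caption at this step.
        return llm.complete(f"Write an image caption for: {user_prompt}")

    def generate_images(llm, image_api, user_prompt: str, n: int = 4):
        caption = rewrite_prompt(llm, user_prompt)
        # The image model matches caption words to its training
        # distribution: "English" and "kings" act as separate features.
        return [image_api.render(caption) for _ in range(n)]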

Add to this massive amounts of cherry picking on "X", and you get this kind of bullshit culture war outrage.

I really would have expected technical people to be better than this.


It mostly inserts people of color when you ask for Japanese people as well, so it isn't just the dataset.


Yes, it's a combination of blunt-instrument system prompting, training data, and cherry-picking.
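To make "blunt-instrument system prompting" concrete, here is a guess at the shape such an instruction might take. This is purely illustrative, not Google's actual prompt:

    # Illustrative only -- not the real system prompt.
    SYSTEM_PROMPT = (
        "When generating images of people, depict a diverse range of "
        "ethnicities and genders, unless the user explicitly requests "
        "otherwise."
    )
    # Applied unconditionally, a rule like this rewrites "English kings"
    # the same way it rewrites "software engineers" -- which would explain
    # the screenshots upthread.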



