If the word "Zulu" appears in an image's label, the person depicted will be non-White essentially 100% of the time.
If the word "English" appears in a label, the person depicted will be non-White 10%+ of the time: only about 75% of modern England is White, and most images in the training data were taken in modern times.
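A toy way to see the point: if the model just samples demographics in proportion to how often they co-occur with a label in its training data, the label frequencies alone produce these rates. The numbers below are made up to mirror the percentages claimed above, not real training-set statistics.

```python
import random

# Hypothetical label demographics -- illustrative only, loosely mirroring the
# percentages claimed above, not real model or training-set statistics.
LABEL_DEMOGRAPHICS = {
    "Zulu":    {"non-White": 1.00, "White": 0.00},
    "English": {"non-White": 0.25, "White": 0.75},
}

def sample_depicted_person(label: str) -> str:
    """Sample a depicted demographic in proportion to training-data frequency."""
    dist = LABEL_DEMOGRAPHICS[label]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

# Over many generations, "English" alone yields non-White people about a
# quarter of the time under these assumed numbers -- no intent required,
# just frequency matching.
samples = [sample_depicted_person("English") for _ in range(10_000)]
print(samples.count("non-White") / len(samples))  # ~0.25
```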
Image models do not have deep semantic understanding yet. The setup is an LLM calling an image-model API, so "English" and "Kings" are treated as separate concepts, and you get 5-10% of the results depicting non-White people, in line with the training data.
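Here is a minimal sketch of that two-stage pipeline. The function names and behavior are assumptions for illustration, not the actual Gemini internals: an LLM front-end passes flat text to a separate image model that matches words independently rather than understanding the combined phrase.

```python
# Hypothetical two-stage pipeline: an LLM front-end hands a flat prompt string
# to a separate image-model API. Neither stage models the other's semantics.

def llm_front_end(user_prompt: str) -> str:
    """Stand-in for the LLM stage; it can only rewrite text.

    Whatever it emits is handed off as a flat string -- the image model
    never sees the LLM's reasoning about what the phrase means.
    """
    return user_prompt

def image_model_api(prompt: str) -> list[str]:
    """Stand-in for the image model: it matches each word against its
    training data independently, so "English" and "Kings" act as separate
    cues rather than the single historical concept "English Kings"."""
    cues = prompt.split()  # no joint semantics, just independent terms
    return cues            # placeholder for the generated pixels

print(image_model_api(llm_front_end("English Kings")))
# ['English', 'Kings'] -- each cue pulls from its own training distribution,
# so "English" contributes its ~5-10% non-White base rate to the output.
```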
I’ve also seen numerous examples where it outright refuses to draw White people but will draw Black people: https://x.com/iamyesyouareno/status/1760350903511449717?s=46
That isn’t explainable by a system prompt alone.