With bias, it's helpful to define what property is desired, and then measure the discrepancy from that.
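One way to operationalize "measure the discrepancy" is to compare the distribution the model actually produces against the distribution you decided it should produce. A minimal sketch (the labels and counts here are made up, and total variation distance is just one of several possible metrics):

```python
# Hypothetical bias measurement: pick the desired property (here, that
# output labels match the input population's distribution -- an assumption;
# other fairness criteria exist), then measure the discrepancy from it.
from collections import Counter

def distribution(labels):
    """Relative frequency of each label."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def bias_discrepancy(desired_labels, observed_labels):
    """Total variation distance between the desired and observed label
    distributions: 0.0 means no measured bias, 1.0 means maximal."""
    desired = distribution(desired_labels)
    observed = distribution(observed_labels)
    keys = set(desired) | set(observed)
    return 0.5 * sum(abs(desired.get(k, 0) - observed.get(k, 0)) for k in keys)

# Toy data (invented): inputs were split evenly, but the model's
# outputs skewed heavily toward one group's features.
inputs  = ["light"] * 50 + ["dark"] * 50
outputs = ["light"] * 90 + ["dark"] * 10
print(bias_discrepancy(inputs, outputs))  # 0.4
```

The point isn't the particular metric; it's that once the desired property is written down, "how biased is this?" becomes a number you can track instead of an argument.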
it seems to me that it's really situation-dependent and complex, and you hit tough philosophical issues about the nature of fairness and justice pretty quickly.
on the other hand, with this model, i'm willing to say "i know it when i see it": if the model did not default to white facial features, it would be less biased.
The harder biases to address are going to be the ones where the AI reinforces existing undesirable patterns. E.g., statistically, certain minority groups are more likely to commit petty crimes. If you replaced police with AI robots, those robots would automatically label people from those minorities as more suspicious. That sucks, and the solutions are likely much more complex.
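That reinforcement dynamic can be made concrete with a toy simulation (every number here is invented, and "patrols follow past recorded arrests" is an assumed policy, not a description of any real system): even a small gap in underlying rates gets amplified when you only record crime where you patrol, and patrol where you've recorded crime.

```python
# Toy feedback-loop sketch: two groups with slightly different (invented)
# petty-crime rates. Patrols are allocated in proportion to past *recorded*
# arrests, and arrests are only recorded where patrols go. Using expected
# arrest counts keeps the simulation deterministic.
true_rate = {"A": 0.10, "B": 0.12}   # B's true rate is only 20% higher
recorded = {"A": 1.0, "B": 1.0}      # seed both groups with equal history

for day in range(200):
    total = recorded["A"] + recorded["B"]
    for group in ("A", "B"):
        patrols = 100 * recorded[group] / total        # patrol share follows history
        recorded[group] += patrols * true_rate[group]  # expected arrests observed

share_B = recorded["B"] / (recorded["A"] + recorded["B"])
# B accounts for ~55% of true offending (0.12 / 0.22), but its share of
# recorded arrests drifts far higher as the loop compounds day after day.
print(round(share_B, 2))
```

The recorded data ends up far more skewed than the underlying behavior, and a model trained on those records would inherit the amplified gap, which is the hard part.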
I'm not sure what you mean when you said the model defaults to white features.
Obama is the child of a white parent and a black parent, so he has characteristics of both; the algorithm should therefore allow either outcome. How people categorize that mix by default depends on the culture doing the categorizing.