I would argue that it's beside the point. The argument is that the ML systems we build now and are looking to deploy in real-world systems have biases baked into them. For example, you do not want a lender-auditing system used by banks to incorporate features that end up being proxies for race because the data used to train it was biased.
White men are the largest demographic in the US, so neural networks will have a bias towards them. It really isn't that complicated.
That is super unlikely, considering that women are born slightly more often than men and live longer.
Asians make up less than 5% of the population in the US.