> Identifying which ML models _actually running in production_ cause systemic discrimination (e.g. as you mentioned poor image recognition, bail predictions, etc.) is exactly focusing on real issues that... cause systemic discrimination.
There's nothing systemic about these issues. I already mentioned it's a data problem. Nothing new. It's very easy to build a fair image recognition system by representing all demographics in the training data. Even then, AI systems will continue to make mistakes. Some AI ethics researchers cherry-pick those mistakes to justify their entire research.
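To be concrete, here's a minimal sketch of what "representing all demographics" could look like in practice, assuming you have per-example group annotations (the group labels, counts, and embeddings below are all made up for illustration):

```python
import numpy as np

# Hypothetical example: rebalance a labeled image dataset so every
# demographic group contributes the same number of training examples.
# `groups` stands in for real per-example annotations.
rng = np.random.default_rng(0)

n = 10_000
features = rng.normal(size=(n, 128))            # stand-in for image embeddings
groups = rng.choice(["a", "b", "c"], size=n,    # deliberately skewed distribution
                    p=[0.80, 0.15, 0.05])

# Downsample every group to the size of the smallest one.
target_per_group = min(np.sum(groups == g) for g in np.unique(groups))

balanced_idx = np.concatenate([
    rng.choice(np.flatnonzero(groups == g), size=target_per_group, replace=False)
    for g in np.unique(groups)
])

balanced_features = features[balanced_idx]
balanced_groups = groups[balanced_idx]

for g in np.unique(balanced_groups):
    print(g, np.sum(balanced_groups == g))       # each group now equally represented
```

Downsampling to the smallest group is just the simplest lever; oversampling the smaller groups or collecting more data for them are the other obvious options.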
> Some AI ethics researchers cherry-pick those mistakes to justify their entire research.
This is a weird statement. It's like saying police cherry-pick criminals to justify their existence.
Do you not believe in harm reduction? Don't you think some part of AI research should be dedicated to minimizing the mistakes that, as you put it, "AI systems will continue to make"?
Thanks for the references. I will check them out once I get a chance. I do know one of these papers, and from my understanding the modeling bias concerns underrepresented features or the long tail, which again can be thought of as a data problem that can be solved with better data collection.
I do agree that real-world datasets are often biased because they reflect the real world... and there are indeed modeling approaches to address such issues (e.g., designing a loss function to up- or down-weight certain types of examples). There's nothing new about this; it's been known in ML for decades.
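For reference, a minimal sketch of that kind of re-weighting, assuming a PyTorch setup (the class counts, logits, and labels below are invented for illustration):

```python
import torch
import torch.nn as nn

# Inverse-frequency class weights: the loss pays more attention to rare classes.
class_counts = torch.tensor([9000., 800., 200.])            # imbalanced classes
class_weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(32, 3)                 # stand-in model outputs
labels = torch.randint(0, 3, (32,))         # stand-in ground truth
loss = criterion(logits, labels)            # rare classes are weighted up

# Per-example weighting works the same way: take the unreduced loss and
# scale each example before averaging.
per_example = nn.CrossEntropyLoss(reduction="none")(logits, labels)
example_weights = torch.ones(32)            # e.g. up-weight a particular subgroup
weighted_loss = (per_example * example_weights).mean()
```

Inverse-frequency class weights are the textbook version; per-example weights let you target specific subgroups rather than whole classes.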