
Aequitas – An open source bias audit toolkit for machine learning - lainon
https://dsapp.uchicago.edu/projects/aequitas/
======
opwieurposiu
I find the premise that different groups should expect the same percentage of
interventions highly suspect. Imagine we have a program that distributes
seeing eye dogs. This toolkit would discover that sighted persons have a 0%
chance of getting a dog, while blind persons have a 50% chance. Oh the
injustice!
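The objection here is essentially about raw statistical parity versus conditioning on need. A minimal sketch (made-up data, not the Aequitas API) of the distinction:

```python
# Hypothetical illustration: a naive statistical-parity check flags the
# seeing-eye-dog program, while conditioning on need does not.

# Each record: (group, needs_dog, received_dog) -- invented toy data.
records = [
    ("blind", True, True), ("blind", True, False),
    ("blind", True, True), ("blind", True, False),
    ("sighted", False, False), ("sighted", False, False),
    ("sighted", False, False), ("sighted", False, False),
]

def rate(rows):
    """Fraction of rows that received the intervention."""
    return sum(r[2] for r in rows) / len(rows) if rows else 0.0

blind = [r for r in records if r[0] == "blind"]
sighted = [r for r in records if r[0] == "sighted"]

# Raw statistical parity: compares rates with no regard to need.
print(rate(blind), rate(sighted))  # 0.5 vs 0.0 -> flagged as "disparity"

# Conditioning on need: compare rates only among those who need a dog.
needy = [r for r in records if r[1]]
needy_by_group = {g: [r for r in needy if r[0] == g]
                  for g in ("blind", "sighted")}
for g, rows in needy_by_group.items():
    print(g, rate(rows) if rows else "no one in need -- no comparison")
```

Which conditioning variables are legitimate is exactly the contested part: for guide dogs, blindness is obviously a fair condition; for, say, recidivism scores, the "need" proxy may itself encode historical bias.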

------
Macuyiko
FairML
[https://github.com/adebayoj/fairml](https://github.com/adebayoj/fairml) and
algofairness
[https://github.com/algofairness/BlackBoxAuditing](https://github.com/algofairness/BlackBoxAuditing)
are some similar, earlier projects in the same space.

------
to_bpr
If the goal is equity of outcome above all else, ignoring any differences
present in the data, then why are we bothering to invest so much time, money,
and effort into this area?

~~~
Macuyiko
See [https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing)
for a well-known example in this space.

This article:
[https://www.nature.com/articles/d41586-018-05469-3](https://www.nature.com/articles/d41586-018-05469-3)
also highlights the issues pretty well IMO.

It's not just about letting the data speak. The data is gathered by someone
and carries historical bias we might now disagree with; models are chosen by
people, using parameters set by people and evaluations designed by people.
It's about making sure models are predictive, usable, and fair.

~~~
Alex888
Predictive, usable and fair. Pick any two!

