
Setting fairness goals with the TensorFlow constrained optimization library - atg_abhishek
http://ai.googleblog.com/2020/02/setting-fairness-goals-with-tensorflow.html
======
bobcostas55
Anyone looking into this stuff needs to read "The impossibility of “fairness”:
a generalized impossibility result for decisions"[0]

One type of fairness is always trading off against another, and any
presentation suggesting you can just magically increase the fairness of your
classifier is straight-up lying.

[0]:
[https://arxiv.org/pdf/1707.01195.pdf](https://arxiv.org/pdf/1707.01195.pdf)
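The flavor of the linked result can be shown with a toy calculation (the numbers below are invented for illustration, not taken from the paper): if two groups have different base rates, a classifier with identical error rates (TPR and FPR) in both groups necessarily has different precision (PPV) between them.

```python
# Toy illustration of the kind of tension the linked paper formalises.
# All numbers here are invented for illustration, not from the paper.

def ppv(base_rate, tpr, fpr):
    """Precision (positive predictive value) via Bayes' rule:
    P(y = 1 | predicted positive)."""
    true_positives = tpr * base_rate
    false_positives = fpr * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# Identical error rates for both groups...
tpr, fpr = 0.8, 0.2

# ...but different prevalence of the positive class in each group.
print(ppv(0.5, tpr, fpr))  # group A, base rate 0.5 -> precision 0.8
print(ppv(0.1, tpr, fpr))  # group B, base rate 0.1 -> precision ~0.31
```

Equalising the error rates pushes the predictive values apart (and vice versa) whenever base rates differ; that is one concrete instance of the trade-off described above.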

~~~
asdfasgasdgasdg
It's not surprising to me that it's impossible to simultaneously minimize three
different functions over the same parameters. That would essentially imply they
are the same function, right? But while that is a mathematical impossibility
result, it doesn't imply that making things _more_ fair is impossible. An ML
algorithm that doesn't consider fairness at all might be adjusted to improve on
all three of the proposed metrics without violating the findings of that paper.

~~~
unishark
If the objective function were separable in the different parameters, you could
optimize over each of them independently with ease, but that's obviously not
the case very often in life.

In general, optimizing multiple objectives (or one objective subject to
additional constraints) achieves a compromise between them. If you are trying
to simultaneously maximize equality of opportunity and equality of outcome,
you will just reach a trade-off where neither side is happy. On the other hand,
if you were starting out just trying to optimize for profit, then adding a
constraint or additional objective just means you make less profit in pursuit
of that other objective. I suppose that could be called fairer, if you're
trading off profit for fairness.
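The profit-versus-constraint trade-off above can be sketched with a toy penalty-method example (this is my own illustration, not the TFCO library's API): minimise a cost subject to a constraint by adding a penalty term, and watch the unconstrained objective worsen as the penalty weight grows.

```python
# Toy penalty-method sketch (my own example, not the TFCO API):
# minimise f(x) = (x - 3)^2 subject to x <= 1. As the penalty weight
# grows, the solution respects the constraint, but the original
# objective gets worse: the trade-off described above.

def solve(penalty_weight, steps=10_000, lr=0.001):
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3)                    # gradient of (x - 3)^2
        if x > 1:                             # penalty active only on violation
            grad += 2 * penalty_weight * (x - 1)
        x -= lr * grad
    return x

for w in (0.0, 1.0, 100.0):
    x = solve(w)
    # weight 0 recovers the unconstrained optimum x = 3; larger weights
    # pull x toward the constraint boundary at 1, raising the objective.
    print(f"weight={w:6.1f}  x={x:.3f}  objective={(x - 3) ** 2:.3f}")
```

With weight 0 the solver ignores the constraint entirely; with a large weight it honours the constraint almost exactly at the cost of a much worse objective value, which is the "less profit" outcome the parent comment describes.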

------
jrumbut
Constrained optimization is great and probably helpful but calling it a path
to fairness misses what makes something fair.

Fairness is not a result; it is a process. Any time people show up with a model
they built in a closed room that is going to make important decisions for or
about other people, fairness is impossible.

Fairness is the ability to appeal decisions, fairness is taking part in
building the model, fairness is voting on whether the model is suitable and
giving consent before your data can be fed to the model in development or
production.

Optimization strategies cannot achieve any of these things. We're not one
little numerical tweak away from fairness.

------
antpls
Question: In the last chart, we can see that the constrained model reduces the
FPR for both categories, old and young. However, the "old" category still has
more than twice the FPR of the young category, so how did this make the model
fairer? (I would expect an equal FPR between the two categories for it to be
fair.)

~~~
Eridrus
As linked in the comments above, there are a lot of different ways to define
fairness and it is impossible to satisfy them all at once.

One way you could argue this example is fairer: the gap used to be ~10%, now
it is ~5%.
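For anyone wanting to check such numbers themselves, a per-group FPR can be computed directly from labels and predictions; the tiny dataset below is invented for illustration, not the article's data.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), i.e. the error rate among actual negatives."""
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean())

# Invented toy data: labels, predictions, and an age-group tag per example.
y_true = np.array([0, 0, 0, 0, 1, 1, 0, 0, 0, 1])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1, 0, 1])
group = np.array(["old"] * 5 + ["young"] * 5)

for g in ("old", "young"):
    mask = group == g
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```

Both the absolute per-group FPRs and the gap between them are worth reporting, since "fairer" can reasonably refer to either.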

------
ThePhysicist
This probably won't solve all fairness problems, but having such a library is a
good step in the right direction. It's great that they're making it available
for free!

------
throwaway97345
An AI model is another word for a statistical model of reality: a small
scientific theory that actually works.

It is the most unbiased thing you can have. It's pure applied science/maths.
You are only interested in the inputs that tell you something, and in figuring
out which ones do, and to what degree.

You don't trust intuition or common knowledge. You don't trust anything that
might introduce bias. You just look at the data and build the best decision
mechanism that can be derived from it.

The degree to which you feel you need to "adjust" that is the degree to which
you denounce science. You prefer your bias over science.

Today it's probably the cultural marxism / structural oppression narrative
that inspires such manipulation. At other times it would have been
infallibility of the papacy, or some other brain construct.

If you feel the need to adjust an ML model, be aware how strongly you feel
that way. That's the degree to which you reject science.

Then proceed. Or reverse course. But be honest with yourself about what you're
doing.

~~~
IanCal
You are missing the point of the entire article here. Quite aside from the
fact that you can obviously have biased models because of their design or
because of their training data, the key misunderstanding in your comment is
this:

> You just look at the data and build the best decision mechanism that can be
> derived from it.

How do you define "best"? That's the issue. Not all errors are equal, not all
distributions of errors are equivalent even if the total is the same.

~~~
throwaway97345
> How do you define "best"? That's the issue.

That's very simple with the example in the article: who can pay back the loan
best? You just fear the answer and would rather twist up your reasoning.

~~~
IanCal
And that classifier is bad at determining who can pay the loan back when
looking at the blue group.

Even taking a very short-term, entirely selfish view, this can be bad for the
loan company. It becomes clear that people in the blue group are being denied
loans they could well afford, and so people in that group start moving to other
providers.

If the blue group is geographically clustered, this may mean losing large
portions of business in certain areas, resulting in shutting down local
offices if there's a physical presence (e.g. banks).

This is also entirely aside from legal concerns.

The best model is rarely found by simply optimising a basic measure with no
context.

