
What If Algorithms Could Be Fair? - pekalicious
https://humanreadablemag.com/issues/0/articles/what-if-algorithms-could-be-fair
======
daenz
I'm not sure I follow their car crash diagram and explanation. They've laid
out that one ethnicity might prefer red cars more than others, and drivers of
red cars tend to get into more crashes, and that training ML with "red cars"
as a feature would lead to a bias against that ethnicity. I got that part.
What I don't get is how the creation of the "risky behavior" node can be
assumed to have a completely uniform distribution of ethnicities inside of it.
The author has no problem saying that an ethnicity can have one causal
behavior (purchasing red cars) but not another (being riskier drivers). This
seems logically inconsistent.

~~~
bitL
There is a strong push for "fairness"; see e.g. the "Toronto Declaration". I
think all it would do is completely halt progress in AI and install
bureaucracy at the lowest decision levels, paralyzing ML research as a whole.
Nobody seems to consider that we are in a clash of different cultures with
different sensitivities, and that there is no single common platform for
stating what is "fair". I am worried that the loudest voice will set the
trend and we will have some insanity enforced all the way down. There are
even calls to ban "blackbox" ML, basically allowing only trivial models in
any kind of decision making.

If members of my nation get drunk more often than those of some other, then
while it's offensive to say I am 34% a drunkard, on average it might hold;
rather than forbidding this type of inference, I'd rely on more signals to
figure out what kind of person I am specifically, for individualized
decisions. The authors bypass this problem by adding "risky behavior", which
is not contained in the input dataset, and modeling it as a hidden variable
in Bayesian inference. But "risky behavior" might be correlated with
ethnicity and red cars anyway, just not visibly so. So if my nation is 34%
drunkard but the neighboring one is only 11%, the conditional probability
will likely still be higher for my nation, merely obfuscated by the use of a
Bayesian hidden state. I am not sure why that would improve fairness.
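To make the objection concrete, here is a minimal sketch with invented numbers: if the latent "risky behavior" variable is itself correlated with group membership, marginalizing over it still yields different crash probabilities per group. The probabilities below (34%/11% risky, and the crash rates given risk) are assumptions for illustration, not figures from the article.

```python
# Toy numbers (invented): the hidden "risky behavior" variable is
# correlated with group membership, echoing the 34% vs 11% example above.
p_risky = {"A": 0.34, "B": 0.11}            # P(risky | group)
p_crash_given = {True: 0.30, False: 0.05}   # P(crash | risky)

def p_crash(group):
    """Marginalize out the hidden variable:
    P(crash | g) = P(crash | risky) P(risky | g) + P(crash | not risky) P(not risky | g)."""
    pr = p_risky[group]
    return p_crash_given[True] * pr + p_crash_given[False] * (1 - pr)

for g in ("A", "B"):
    print(g, round(p_crash(g), 4))
# Group A: 0.30*0.34 + 0.05*0.66 = 0.135
# Group B: 0.30*0.11 + 0.05*0.89 = 0.0775
```

The hidden state hides the mechanism but not the group-level disparity, which is exactly the commenter's point.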

~~~
barry-cotter
> There is a strong push for "fairness", see e.g. "Toronto Declaration". I
> think all it would do is completely halt progress of AI and install
> bureaucracy to the lowest decision levels, paralyzing whole ML research.

It would only paralyze those who pay attention to the Toronto Declaration.
You’re right, because you can’t make ML fair: the universe isn’t fair, and
fairness is a property of human judgements about facts. The facts remain the
same regardless of one’s feelings.

[https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/sl...](https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf)

AI Ethics, Impossibility Theorems and Tradeoffs
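The linked slides concern impossibility theorems of the Kleinberg/Chouldechova kind: when base rates differ between groups, a classifier calibrated within each group cannot also equalize false positive rates. A toy numeric sketch (all score distributions invented for illustration) shows the tension directly:

```python
# Each group is a list of (score, fraction_of_group). Within each score
# bin, the fraction of true positives equals the score, so the classifier
# is calibrated in both groups by construction.
groups = {
    "A": [(0.2, 0.5), (0.8, 0.5)],   # base rate 0.5
    "B": [(0.2, 0.8), (0.8, 0.2)],   # base rate 0.32
}

def false_positive_rate(bins, threshold=0.5):
    """P(predicted positive | actually negative) for a thresholded score."""
    negatives = sum(w * (1 - s) for s, w in bins)
    false_pos = sum(w * (1 - s) for s, w in bins if s >= threshold)
    return false_pos / negatives

for g, bins in groups.items():
    print(g, round(false_positive_rate(bins), 4))
# A -> 0.2, B -> ~0.0588: calibrated in both groups, yet unequal FPRs.
```

No choice of threshold fixes this while base rates differ; the fairness criteria themselves are mutually exclusive, which is the "tradeoffs" part of the title.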

~~~
candiodari
Except no two humans have matching ideas about what's fair, which means each
is unfair from the other's perspective.

Humans are in reality much less fair than algorithms.

------
speedplane
Facebook developed its "Look Alike" platform to advertise things to people
who "looked like" an advertiser's current followers. Then it was deployed for
hiring, home loans, and housing. The "algorithm" here just amplified whatever
biases the advertiser had to begin with. It's pretty unbelievable that
Facebook did not recognize this was a problem until they were sued over it.

Making a system fair at the very least requires the people designing the
system to be fair. It's pretty clear that still does not happen, so I'm
pretty skeptical of those who claim it's just around the corner.

------
remote_phone
Algorithms are based on statistics, or essentially stereotypes. The concept of
fairness is something that can’t be adequately injected into an algorithm
because it completely depends on what “fair” means and how that changes over
time. What is “fair” now won’t be fair in 10 years.

It used to be considered fair to let people smoke when they wanted. Then it
was considered fair to have smoking sections and non-smoking sections in
restaurants. Now it’s considered fair to ban smoking entirely in restaurants
and most public places.

------
throwaway72873
If the algorithm charges higher car insurance premiums to men, does that mean
it is fair?

~~~
perl4ever
I think the whole point is that what causal relationships you assume matter,
and they do not _have_ to be derived from correlations. And they _should_ not,
in order to be "fair".

You have a choice of whether or not you believe being male causes car
insurance claims. That is independent of the statistical correlations. Ten
times a day people say correlation is not causation, but a hundred times a
day, I see people implicitly insisting that it necessarily is.

~~~
TeMPOraL
It's not so much that people think correlation implies causation as that, in
many practical models, it's correlations you care about, not causation.

If I'm running an insurance agency and not a public policy advocacy, and my
data keeps showing that men have a higher accident rate than women, I can
just ignore causation and build my actuarial tables on that. I don't need a
causal model here, at least not until I want to optimize my models further
still, and there are diminishing returns on that.

~~~
perl4ever
This makes no sense to me. Everything depends on your causal model. You can't
just not have one; if you don't have one, you are treating correlations as
causative indiscriminately.

Suppose (just as a toy example) that being young causes accidents, and the
population of men is younger, but being male does not cause accidents. You are
going to charge mature men too much and lose that business to a competitor
with a correct causal model.

This is quite separate from the correlational data.
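The toy example above can be simulated directly. All the numbers here are assumptions for illustration (70%/30% age split, 20% vs 5% accident rates): age causes accidents, sex does not, yet sex correlates with accidents because men skew younger, so a sex-only pricer overcharges mature men.

```python
import random
random.seed(0)

# Toy causal model (numbers invented): only age affects accident risk,
# but men skew younger, so sex correlates with accidents anyway.
def simulate(n=100_000):
    rows = []
    for _ in range(n):
        male = random.random() < 0.5
        young = random.random() < (0.7 if male else 0.3)       # men skew younger
        accident = random.random() < (0.20 if young else 0.05)  # only age matters
        rows.append((male, young, accident))
    return rows

def rate(rows, pred):
    """Accident rate among rows matching the predicate."""
    sub = [r for r in rows if pred(r)]
    return sum(r[2] for r in sub) / len(sub)

d = simulate()
print("men:         ", round(rate(d, lambda r: r[0]), 3))          # ~0.155
print("women:       ", round(rate(d, lambda r: not r[0]), 3))      # ~0.095
print("mature men:  ", round(rate(d, lambda r: r[0] and not r[1]), 3))
print("mature women:", round(rate(d, lambda r: not r[0] and not r[1]), 3))
```

Mature men and mature women come out with nearly identical risk (~5%), yet a model conditioning only on sex would price mature men at the ~15.5% pooled male rate, which is the business the competitor with the correct causal model picks up.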

The insight I get from the article is that the "correctness" of your causal
model can incorporate social justice or political correctness, without being
objectively mistaken, because causation is not defined by measured
correlations.

------
lainga
Are there no fair algorithms? I urge the authors to at least give Bogosort
another chance!

