
AI expert calls for end to UK use of ‘racially biased’ algorithms - davidfoster
https://www.theguardian.com/technology/2019/dec/12/ai-end-uk-use-racially-biased-algorithms-noel-sharkey
======
casion
I assume this might be an unpopular opinion, but shouldn't these programs be
evaluated on some metric of success rather than on the racial makeup of their
selections?

If the software is accurate at picking choices that match the set goals, then
it would seem that the software is doing its job and is 'blameless'; the
racism is on the side of those who set the goals and/or the people claiming
'racism' because a person of a given skin colour was selected.

If it could be shown that the software has a low success rate, yet it's
relied on despite racial bias, then I would think few rational people could
argue against its removal. (edit: s/for/against)

Edit: I suppose that I'm asking for better auditing of the results vs the
desired outcomes. Race seems like a red herring unless the metrics show that
these systems actually perform poorly.
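
A minimal sketch of what I mean by auditing (all numbers made up): compare
the system's success metric overall and per recorded group, rather than
arguing from anecdotes.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)   # e.g. recorded ethnicity (hypothetical)
    actual = rng.integers(0, 2, n)  # ground-truth outcome
    # A system that is right 90% of the time, independent of group.
    pred = np.where(rng.random(n) < 0.9, actual, 1 - actual)

    print("overall accuracy:", (pred == actual).mean())
    for g in (0, 1):
        m = group == g
        acc = (pred == actual)[m].mean()
        # False-positive rate: flagged positive but actually negative.
        neg = (actual == 0) & m
        fpr = ((pred == 1) & neg).sum() / neg.sum()
        print(f"group {g}: accuracy {acc:.2f}, FPR {fpr:.2f}")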

~~~
MaxBarraclough
> If the software is accurate at picking choices that match the set goals,
> then it would seem that the software is doing its job and is 'blameless';
> the racism is on the side of those who set the goals and/or the people
> claiming 'racism' because a person of a given skin colour was selected.

Well, no.

It's not enough to simply decline to input racial data into the machine-
learning system. The system may discover a proxy for race, and make decisions
based on that proxy. Name and address, for instance, may be enough to give a
good idea as to someone's race.

We expect human judges and parole-boards, for example, to discount racial
information, even if there are demonstrable correlations associated with race.
If the machine-learning system is not built with that discounting in mind, it
isn't going to do so of its own accord, as it was of course built to detect
and exploit correlations.
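
Concretely, here's a minimal sketch with synthetic data and scikit-learn,
assuming it's available (feature names like postcode_index are hypothetical
stand-ins): even with the race column excluded, the model's predictions end
up tracking race through the correlated proxy.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    race = rng.integers(0, 2, n)  # never shown to the model
    postcode_index = race + rng.normal(0, 0.3, n)  # correlated proxy
    income = rng.normal(0, 1, n)

    # Historical labels that themselves encode a racial disparity.
    y = (0.8 * race + 0.1 * income + rng.normal(0, 0.5, n)) > 0.8

    X = np.column_stack([postcode_index, income])  # race column excluded
    model = LogisticRegression().fit(X, y)

    # Despite never seeing race, predictions track it via the proxy.
    pred = model.predict(X)
    print("positive rate, group 0:", pred[race == 0].mean())
    print("positive rate, group 1:", pred[race == 1].mean())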

> If it could be shown that the software has a low success rate, yet it's
> relied on despite racial bias, then I would think few rational people could
> argue for its removal.

I don't follow. If the software has been unsuccessful, that seems like a good
reason to remove it. If, additionally, it has been making decisions on racial
grounds, that seems like another good reason to remove it.

> Race seems like a red herring unless the metrics show that these systems
> actually perform poorly.

As I hope I've made clear above, this isn't right. It's not just about
performance; it's also about ensuring the system does not make decisions on
the basis of an individual's race.

~~~
lkschubert8
A good example of this that I saw recently was AI used for underwriting health
insurance. In order to make the algorithm "unbiased" in terms of race, they
removed race from the input dimensions. It then started biasing heavily along
"frequency of doctor visits", which turns out to have a strong relationship
with race. My takeaway is that if equality of outcome is part of the goal, it
has to be part of your training objective rather than something you get by
ignoring the input, which seems obvious in hindsight.
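
Something like this is what "part of your training" can mean in practice - a
rough sketch, all data synthetic, with an assumed demographic-parity penalty
(weight lam) bolted onto a plain logistic-regression loss:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000
    group = rng.integers(0, 2, n)              # sensitive attribute
    visits = group * 2 + rng.normal(5, 1, n)   # proxy ("doctor visits")
    X = np.column_stack([visits, np.ones(n)])  # feature + bias term
    y = (visits + rng.normal(0, 1, n) > 6).astype(float)

    w = np.zeros(2)
    lam = 5.0  # assumed penalty weight

    for _ in range(2000):
        p = 1 / (1 + np.exp(-X @ w))  # predicted probabilities
        grad = X.T @ (p - y) / n      # plain logistic-loss gradient

        # Demographic-parity gap: difference in mean prediction by group.
        gap = p[group == 1].mean() - p[group == 0].mean()
        s = p * (1 - p)               # sigmoid derivative
        dgap = (X[group == 1] * s[group == 1, None]).mean(0) \
             - (X[group == 0] * s[group == 0, None]).mean(0)

        # Step on the gradient of: logistic loss + lam * gap^2.
        w -= 0.1 * (grad + lam * 2 * gap * dgap)

    p = 1 / (1 + np.exp(-X @ w))
    print("mean prediction by group:",
          p[group == 0].mean(), p[group == 1].mean())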

~~~
slowmovintarget
If you're looking for equality of outcome, why go to the trouble of training a
system in the first place? A pre-selected distribution applied arbitrarily is
just as good.

Frequency of doctor visits seems a good indicator of how likely an insurance
company is to have to pay out on a policy. If you want equal outcomes, you end
up setting a higher premium for everyone and skipping the underwriting
altogether. The next part of the discussion would be subsidizing the higher
premium for those who can't afford it. Just keep in mind that because there
are fingers on the scale, fewer people can afford the "equalized" insurance
than could afford the honestly evaluated coverage. (Prices are higher for
everyone, so the pool of subscribers ends up being smaller if subscribers can
opt out.)

It becomes a never-ending cycle of controls and impositions leading to
collapse. Have the realistic underwriting, lower prices, and subsidize those
(overall) lower premiums when needed.
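
The parenthetical is easy to see with a toy adverse-selection loop - all
numbers made up, and "buy if the premium is at most 1.5x your expected cost"
is an assumed willingness-to-pay rule:

    # Two made-up risk groups: head counts and expected annual claims.
    groups = [{"n": 800, "cost": 100}, {"n": 200, "cost": 600}]

    # Start with everyone in the pool, then re-price until it stabilizes.
    pool = groups
    for _ in range(5):
        total = sum(g["n"] for g in pool)
        premium = sum(g["n"] * g["cost"] for g in pool) / total
        pool = [g for g in groups if 1.5 * g["cost"] >= premium]
        print("premium:", premium,
              "subscribers:", sum(g["n"] for g in pool))
    # The flat premium starts at 200, the low-risk subscribers opt out,
    # and it settles at 600 with an 80% smaller pool.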

------
bsenftner
Racial bias in facial recognition has only ever been an issue for the
non-leaders in the facial recognition industry. And to the point: Amazon,
Microsoft and Google have NEVER been industry leaders in FR - they just have
enormous marketing budgets. Any article you see where a police force is using
Amazon's FR is an article about that police force being duped by marketing
into expecting "best in class" from an organization that does not even rank
in the industry as a serious player. The industry leaders in FR can easily be
identified by going to the NIST Face Recognition Vendor Test (FRVT) web site
and viewing the ranked test results from the FR vendors who want to work on
Federal government contracts.

Also, the media treatment of FR is beyond pathetic; pretty much every article
I encounter is pulp-crime-level fiction. FWIW, I'm lead developer of one of
the industry-leading FR applications, typically within the top 4 in the NIST
vendor rankings.

------
throwGuardian
Notice how the article is very light on direct examples of actual bias -
that's not a bug, it's a feature, intentionally adopted to create fear,
uncertainty and doubt (FUD). If anyone wants to FUD anything, just start with
this article as a template, find-and-replace the subject from AI to
insert_here, change the name and quotes of the subject-matter expert, and
voilà, you've got yourself a hit piece.

We should be upvoting more substantive writing. Serious accusations of racism
should be backed up by real evidence, not by the quotes and opinions of
someone parroting the author's narrative.

------
stolenmerch
Every time this topic comes up, I'm confused about how the algorithms are
biased. Which algorithms? Isn't it actually the labeled training data that is
biased, by not including enough samples of non-white faces? What mechanisms
are in place that prevent literally everyone from just re-training on better
data? Why does my toy facial recognition software written in javascript
detect every face I throw at it?
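
For what it's worth, the re-training step is mechanically simple. A minimal
sketch (dummy embeddings, hypothetical group labels) of rebalancing a dataset
by oversampling the underrepresented group before training:

    import numpy as np

    rng = np.random.default_rng(0)

    def rebalance(X, groups):
        """Oversample so every group appears as often as the largest one."""
        uniq, counts = np.unique(groups, return_counts=True)
        target = counts.max()
        idx = np.concatenate([
            rng.choice(np.flatnonzero(groups == g), size=target, replace=True)
            for g in uniq
        ])
        return X[idx], groups[idx]

    # Dummy data: 900 samples of group 0, only 100 of group 1.
    X = rng.normal(size=(1000, 128))          # stand-in face embeddings
    groups = np.array([0] * 900 + [1] * 100)
    Xb, gb = rebalance(X, groups)
    print(np.unique(gb, return_counts=True))  # both groups now 900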

Many of these pieces smell phony. I'm certainly not saying there isn't
institutional racism at work here, but I think we need way more detail to
evaluate these claims.

~~~
rswskg
This. These are articles with a long reach, written by people who are
operating far beyond their understanding. Surely the Guardian can find a
quality technical writer to dig into this?

