
Algorithmic Bias Was Born in the 1980s - jonbaer
https://spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/untold-history-of-ai-the-birth-of-machine-bias
======
microtherion
Apparently, the creator of that algorithm literally and consciously added bias
against women and non-Caucasians to his program, which made it pretty much an
open and shut case.

The modern approach would be to take some NLP machine learning kit, which
would do a wonderful job learning and reproducing the human assessors' biases,
without any overt evidence (because the reasoning is always opaque), without
ever being explicitly instructed to do so, and possibly even without the
author of the program being aware of it.

~~~
sologoub
This is where the creator has a very specific duty to be careful about what
features are fed into the learning algorithm - including names, gender, and
other protected-category info is a huge no-no.

The NLP part is complex because language and dialect differences can reveal a
lot about the individual and her/his background.

If you have to analyze non-standardized inputs, some personalized data is
going to leak. The question is how much it affects the outcome.
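A minimal sketch of the first step described above - dropping protected
attributes before training (column names are hypothetical). Note that this
does not remove proxy features that correlate with protected attributes,
which is exactly the leakage the comment warns about:

```python
# Columns we refuse to feed into the model. Hypothetical names;
# adjust to whatever the actual dataset calls them.
PROTECTED = {"name", "gender", "ethnicity"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record with protected attributes removed.

    This only removes the explicit columns; proxies (dialect, postcode,
    school name, ...) can still leak the same information.
    """
    return {k: v for k, v in record.items() if k not in PROTECTED}
```

For example, `strip_protected({"name": "A. Patel", "gender": "F", "score": 7})`
keeps only the `score` field.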

------
vichu
On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the
machine wrong figures, will the right answers come out?" In one case a member
of the Upper, and in the other a member of the Lower, House put this question.
I am not able rightly to apprehend the kind of confusion of ideas that could
provoke such a question.

-Charles Babbage, _Passages from the Life of a Philosopher_ , 1864

~~~
bonoboTP
I don't like this anecdote. A charitable reading of the question would be
whether the machine has any means of input verification, error
detection/correction mechanisms etc. Things that are pretty common and
reasonable.

~~~
theoh
I don't like it either.

I think Babbage saw the machine as implementing a mathematical function, which
is why he thought the question was ridiculous.

I suspect that the people who asked the question did _not_ realise that it
just calculated a function and were thinking of the behaviour of machines more
generally, e.g. their ability to take in rough materials and produce a fine
result, or maybe also some kind of canalization whereby approximate inputs
would produce a valid and precise output.

~~~
wzdd
Babbage fairly clearly knew this and was playing up his public persona of
"irascible genius who must have everything precisely quantified" for laughs.
His autobiography is full of this kind of self-aware self-mockery; another
funny example is "Every moment dies a man, every moment 1 1/16 is born"
([http://www.uh.edu/engines/epi879.htm](http://www.uh.edu/engines/epi879.htm))

In other words, the quote isn't written to illustrate the stupidity of
politicians, it's written to have a laugh at contextually-oblivious engineers
like Babbage.

~~~
theoh
Looking at the context, I don't buy this explanation. It's ingenious but a bit
too subtle to be true.

[https://en.wikisource.org/wiki/Page:Passages_from_the_Life_o...](https://en.wikisource.org/wiki/Page:Passages_from_the_Life_of_a_Philosopher.djvu/83)

------
a-dub
They say that those who don't learn from the past are doomed to repeat it, yet
somehow statistical data driven algorithms that literally learn from the past
provide guarantees of repeating it.

~~~
mannykannot
Nice paradox! It is resolved, I think, by there being two different concepts
of 'learn' here.

~~~
a-dub
Seems that the next iteration of the Turing Test will account for this...

------
basetop
Algorithmic bias really was born a long time before then. There were
"computers" and "algorithms" long before the 80s. Perhaps algorithmic bias in
the age of personal computers was born in the 80s.

Is the code available for review? I didn't see any mention of it being
available in the article. Maybe if ieee has it, they can post it to github?
Did the guy hard code a list of "european" names directly into the code or did
he store it in a file or even a db? Did it just check the surnames or the
first and middle names also?

Also, is bias still in the college admissions system in the UK? I know we have
a form of it in the US.

~~~
mannykannot
To be clear, the existence of bias for "european" names or against women does
not necessarily imply a store of explicitly "european" or feminine names. If
the training data reflects that bias, the program derived from it is also
likely to do so.

------
Sol-
Seems a bit different from modern day algorithmic discrimination. In the 2016
pro publica example, for instance, the algorithm doesn't explicitly consider
race but just picks up the fact that black inmates more often have other
features correlated with reoffending (they are younger, for instance). Now
that might very well indicate that there's a systematic bias against younger
black people in the training data and that they are more likely to be singled
out for arrest than others, but the algorithm did an okay job given the
unbalanced arrest prevalence in the training data. Of course, there's still a
lesson of not blindly trusting the algorithm but trying to understand why the
data caused it to make its decisions.

But at least no one went to such lengths as to encode ethnicity as a separate
feature to weigh in the process. I wish the article gave more detail about the
algorithm - was it a manually constructed decision tree or what? Because I
wonder what the guy was doing. Surely even in the 70/80s you should have been
aware that encoding ethnicity as some explicit variable is very
discriminatory.

------
chihuahua
It would be interesting to see how the code classified names as European/non-
European. Given that apparently no ML was involved, I assume it was based on a
list of European names, and if your name is not on the list, you're not
European and you lose 15 points? Or some kind of pattern matching (regex or
similar), again if your name doesn't match, you lose 15 points? It seems very
strange that someone would go to such lengths to explicitly code this kind of
bias.

~~~
BeetleB
>It seems very strange that someone would go to such lengths to explicitly
code this kind of bias.

It's a fun intellectual exercise - like any other in academia. It's not as if
they were working in a cutthroat commercial venture with hard deadlines.

A simple rule would be to rule out certain orderings of characters. A name
beginning with "Ng", or "Kp", etc. Also, given that this is the UK, just put
in lots of popular Indian names (Patel, Singh).
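The rules described above can be sketched in a few lines. This is purely
illustrative: the actual St. George's code and its lists are not public, so
the prefixes, surnames, and the 15-point penalty (mentioned elsewhere in the
thread) are stand-ins for whatever the real program did:

```python
# Hypothetical reconstruction of the kind of crude surname rule the
# comment describes - NOT the original algorithm.
NON_EUROPEAN_PREFIXES = ("Ng", "Kp")
COMMON_SOUTH_ASIAN_SURNAMES = {"Patel", "Singh"}

def looks_non_european(surname: str) -> bool:
    """Return True when the surname trips one of the crude heuristics."""
    return (surname.startswith(NON_EUROPEAN_PREFIXES)
            or surname in COMMON_SOUTH_ASIAN_SURNAMES)

def score(surname: str, base_points: int = 100) -> int:
    """Dock 15 points when a heuristic fires (penalty value assumed)."""
    return base_points - 15 if looks_non_european(surname) else base_points
```

The point of the sketch is how little code such a rule takes - a prefix
tuple and a set lookup - which makes it plausible as an academic-style
"intellectual exercise" rather than a large engineering effort.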

------
csours
> After all, Franglen had tested the machine against humans and found a 90 to
> 95 percent correlation of outcomes

------
bayonetz
False. Algorithmic bias was present from the very beginning. One way or
another, directly or indirectly, algorithms only do what people tell them to
do.

~~~
mannykannot
This is too trite a dismissal. The problem is that it is all too easy to not
fully comprehend all of the implications of what you are telling the algorithm
to do.

