
Can We Keep Our Biases from Creeping into AI? - bipr0
https://hbr.org/2018/02/can-we-keep-our-biases-from-creeping-into-ai
======
hagbardceline
I suppose this all depends on how you are framing/thinking about the question.

There is a certain amount of bias inherent in anything
designed/architected/engineered/constructed by humans - the artifact is 'in
our image', as it were.

Are you talking about feeding a machine learning system human-created
data/information, to be turned into information/knowledge by the system? That
has our fingerprints all over it as well.
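As a minimal sketch of that point (entirely hypothetical data, not from the article): if the human decisions a system is trained on are skewed by group, even the simplest learner fitted to those decisions replays the skew.

```python
# Hypothetical example: human labeling bias passing straight through to a
# learned model. The "model" here is just the majority label per group,
# but the effect holds for richer learners too.
from collections import Counter, defaultdict

# Made-up historical hiring decisions by human reviewers; candidates are
# identical except for group membership, yet the labels are skewed.
training_data = (
    [("group_a", "hire")] * 8 + [("group_a", "reject")] * 2 +
    [("group_b", "hire")] * 2 + [("group_b", "reject")] * 8
)

def fit(rows):
    """Learn the majority decision for each group."""
    votes = defaultdict(Counter)
    for group, label in rows:
        votes[group][label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit(training_data)
print(model)  # → {'group_a': 'hire', 'group_b': 'reject'}
```

The model never sees anything but the human decisions, so the reviewers' skew is the only signal it can learn.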

Cognitive bias? Well, are we talking about what is an 'acceptable' result to a
human, or about the underlying process? Take Google's DeepDream, the machine
vision system that seems to see the world as dogs - its cognition is certainly
different, and the results are of value to humans as an
illustration/visualization of that different approach, as well as having a
certain aesthetic, I suppose. The bias lies in the entire point of the
question and in the specialization of the system to provide an answer. But as
to the details of the analysis and results? I don't think those have bias in
the sense you may be thinking of.

Can you be more specific about what is meant by bias?

------
dlwdlw
I'd argue that human bias is actually a source of truth. "Higher" levels of
truth are often codified in a way very similar to logical constructs. Human
knowledge is biased but flexible and adaptable; codified law is fairer and
has less bias, but can only deal with exact situations.

I wonder whether black-box AI internally has an isomorphism to what is
basically a gigantic codification of logic - with the training process just
discovering those rules - or whether it's fundamentally something very
different.
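One way to poke at that question on a toy scale (my own illustration, not anything from the thread): train the simplest possible learner on examples of a logical function and read the learned parameters back as a rule. Here a bare perceptron trained on AND ends up with weights that amount to a numeric rediscovery of the "both inputs on" rule.

```python
# Toy probe of the codification question: does training just rediscover
# an explicit logical rule? A perceptron fitted to AND examples does.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Plain perceptron learning rule on binary inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)

# The learned parameters encode "fire only when both inputs are on".
learned = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([learned(x1, x2) for (x1, x2), _ in and_data])  # → [0, 0, 0, 1]
```

Of course this says nothing about whether a deep network's internals decompose so cleanly; it only shows the isomorphism exists in the simplest case.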

