
Copyright Law Can Fix Artificial Intelligence’s Implicit Bias Problem - miraj
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3024938
======
rspeer
I consider it very important to fight bias in AI, but I fundamentally do not
understand this article's take.

The thing stopping AI researchers from obtaining unbiased training data is not
that we're waiting for lawyers to give us permission. The thing stopping us is
that the right training data is already hard to find, and unbiased training
data is the hardest of all to find because it doesn't exist.

Google, for example, does not need permission. They can acquire and train on
basically any data they want if they put their mind to it. And Google is
_completely shit_ at deploying unbiased AI. (They write great blog posts and
presentations about it! Then they don't do it in their actual products.)

You won't just find a naturally unbiased dataset. The way to fight AI bias is
deliberately and artificially, like in Bolukbasi et al. [1]

[1] [https://arxiv.org/abs/1607.06520](https://arxiv.org/abs/1607.06520)
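The core move in Bolukbasi et al. is to identify a "gender direction" in the embedding space and project it out of words that shouldn't carry gender. A toy sketch of that neutralization step (the vectors here are made up, and the real paper derives the direction from a principal component over many definitional pairs, not a single he−she difference):

```python
import numpy as np

# Toy embeddings for illustration only; real systems use word2vec/GloVe vectors.
emb = {
    "he":         np.array([ 1.0, 0.2, 0.0]),
    "she":        np.array([-1.0, 0.2, 0.0]),
    "programmer": np.array([ 0.4, 0.9, 0.3]),
    "homemaker":  np.array([-0.5, 0.8, 0.2]),
}

# Simplified gender direction: a single he-she difference, normalized.
g = emb["he"] - emb["she"]
g = g / np.linalg.norm(g)

def neutralize(v, direction):
    """Remove the component of v along the bias direction."""
    return v - np.dot(v, direction) * direction

for word in ("programmer", "homemaker"):
    emb[word] = neutralize(emb[word], g)

# Occupation words now have zero projection onto the gender direction.
print(np.dot(emb["programmer"], g))
```

The point is that the de-biasing is an explicit, deliberate operation applied to the learned vectors; it doesn't fall out of the data on its own.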

------
brighteyes
The term "implicit bias" is a poor choice here.

First, it appears only in the title, not the abstract, and for good reason:
they are not talking about implicit bias, which as a concept only makes sense
when contrasted with explicit bias. Humans can have implicit and explicit
bias. An AI can just have "bias" (well, at least until we get true human-level
general AI, at which point we'll be able to talk to it and differentiate what
it does from what it says it does).

Second, "implicit bias" brings in a lot of unfortunate connotations. It is
tied up with the IAT (implicit association test) [1], which is highly
controversial. That controversy has no meaning here (again, since there is no
explicit bias to contrast it with); it only hurts.

Unless I'm missing something, I can only guess that they use the term to grab
attention, which is cynical and sad.

[1] [https://en.wikipedia.org/wiki/Implicit-association_test#Crit...](https://en.wikipedia.org/wiki/Implicit-association_test#Criticism_and_controversy)

~~~
hasenj
Quoting the paper, an example of this "terrible" implicit bias is associating
men with programming and women with housekeeping.

What exactly does an AI that's free from biases look like? Will it presume,
for instance, that all men and all women are the same, all the time?

How is that AI at all?

Correct me if I'm wrong, but I think the whole point of AI is to find patterns
in the data that can help make decisions that lead to a desired outcome.

If you're not using real-world data, are you really doing AI?

~~~
rspeer
AI is just a tool, a way to automate making more kinds of decisions. The
decisions are not _better_ or inherently _more right_ by virtue of taking a
human out of the loop. In fact, they're usually worse, because humans are
intelligent for real and computers are not.

But the decisions are automated, and that makes more things possible. And
sometimes you need to deal with the effects of automation.

Nobody could reasonably declare an AI "free from biases", but you can work on
particular aspects. Here's an example: Meetup de-biases their recommender
system so that it doesn't automatically recommend knitting instead of coding
meetups if you are a woman. [1]

Yes, in the data, there are relatively more women going to knitting meetups
and fewer going to coding meetups, but it doesn't _have_ to use that data or
its correlates to make predictions. In fact, it shouldn't. If you do like
knitting, it can still learn that from your actual personal preferences and
not infer it bluntly from your gender.

[1] [https://civichall.org/civicist/meetup-counters-invisible-sex...](https://civichall.org/civicist/meetup-counters-invisible-sexism/.WdflQbxqvIY.twitter)

~~~
hasenj
A little off topic but I find most recommendation systems to be abysmal
failures. It's like, as soon as you look into something, they start assuming
you're very interested in it, and they don't show recommendations for anything
other than things related to what you've looked into before.

