
The Code We Can't Control - pron
http://www.slate.com/articles/technology/bitwise/2015/01/black_box_society_by_frank_pasquale_a_chilling_vision_of_how_big_data_has.html
======
yummyfajitas
This is an interesting illustration of correlation vs causality. By
"causality", I mean links which remain even after you change the composition
of your inputs. See the do calculus, which is one of the better attempts to
codify it:

[http://arxiv.org/pdf/1305.5506v1.pdf](http://arxiv.org/pdf/1305.5506v1.pdf)

[http://ftp.cs.ucla.edu/pub/stat_ser/r402.pdf](http://ftp.cs.ucla.edu/pub/stat_ser/r402.pdf)

The do calculus attempts to describe causality as the correlations which
remain present when you change the underlying composition of the sample set.
In a certain sense, that's equivalent to saying it's the correlations which
are not caused by Simpson's paradox.

[https://en.wikipedia.org/wiki/Simpson%27s_paradox](https://en.wikipedia.org/wiki/Simpson%27s_paradox)

So an interesting (but more difficult) question is whether the effects
described here are causal or merely correlative. Personally I consider a
causal link to be problematic, while a correlative one is completely
innocuous. But regardless, it's important to distinguish the two.

~~~
pron
Causality here makes absolutely no difference. These aren't research programs
-- these are programs designed to increase revenue or some other arbitrary
composite variable. They're not interested in _why_ the correlations are
there, only whether they can be used to increase revenue.

The problem here is one of agency. Even true causality in humans is subject to
change -- these are not laws of nature here but snapshots of human society,
which changes all the time through political action -- but computers
inadvertently perpetuate the existing condition, because they have no agency
or interest in pursuing change. Perhaps this should be programmed: we could
program computers for, say, affirmative action so that changing the status quo
is one of their goals. But whether we do or not, the important thing to
understand here is that once computers make decisions that influence humans,
they are carrying out _some_ political agenda (even if that is maintaining the
status quo), whether we consciously program them or not.

~~~
yummyfajitas
If a program has affirmative action built in, it is indeed serving a political
agenda. AA is a _causal_ relation: if (race == black) score += 10.

But if it is merely programmed to show ads for "Think Like a Man Too" to
people who watched "Think Like a Man" based on a rule about sequels, there is
no agenda there beyond profit.

In both cases they are disproportionately targeting black people, but in the
latter case it's only a composition effect. If you run the sequel-targeting
program on a group of Asians who enjoyed "Think Like a Man", you'll get the
same outcome. If you run the AA program on a group of Asians, you won't. AA is
a causal relationship on race, whereas the sequel rule is not.
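A minimal sketch of that distinction, with hypothetical field names: intervening on race (the do-operator, modeled here as flipping the race field while holding everything else fixed) changes the AA score but leaves the sequel rule's output untouched.

```python
# Hypothetical sketch: two decision rules, one causal on race, one not.

def aa_score(person):
    # Affirmative-action rule: race is an input, so intervening on race
    # (changing it while holding everything else fixed) changes the output.
    score = person["base_score"]
    if person["race"] == "black":
        score += 10
    return score

def sequel_ad(person):
    # Sequel rule: race is never consulted. Any group with the same
    # viewing history gets the same ads, whatever its racial makeup.
    if "Think Like a Man" in person["watched"]:
        return "Think Like a Man Too"
    return None

viewer = {"base_score": 50, "race": "black", "watched": ["Think Like a Man"]}
counterfactual = dict(viewer, race="asian")  # the do(race) intervention

assert aa_score(viewer) != aa_score(counterfactual)    # causal on race
assert sequel_ad(viewer) == sequel_ad(counterfactual)  # not causal on race
```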

From previous interactions with you, I gather that you want statistical
equality rather than equal treatment. That's great, but many people only care
about equality of treatment. The do-calculus (or similar tools) helps with the
latter problem, not the former.

~~~
pron
> But if it is merely programmed to show ads for "Think Like a Man Too" to
> people who watched "Think Like a Man" based on a rule about sequels, there
> is no agenda there beyond profit.

But those aren't the dynamics. It's like there's this whole universe out there
with its own forces and dynamics, but you're saying -- nah, I don't care about
that, I just care about the current snapshot. Suppose for a second that every
school taught blacks to like that movie you referred to. You've just created a
(causal!!) proxy for race. So now, if decisions pertaining to your future
income are based on your having watched the movie, do they all of a sudden
become free of ideology? First we create a proxy, then we discriminate based
on the proxy, and finally we claim no political agenda because we're "only"
discriminating on that proxy.

> but in the latter case it's only a composition effect.

Why does that make a difference? Why do you think targeting correlations is
somehow less political than targeting race directly? Your direct intentions
are not what matters here. What matters here is what future your actions
serve. Any directed (i.e. non-noise) changes (or lack thereof) to human
society caused by humans are politics[1].

> I gather that you want statistical equality rather than equal treatment

Quite the opposite. If there's a society with _built-in_ mistreatment of
certain races, then that mistreatment must first be addressed. You see equal
treatment only because you don't try to learn how society works. Politics
changes society _all the time_. What society looks like now is a result of
yesterday's politics, and what it will look like tomorrow is a result of
today's. You're simply suggesting implementing conservative politics that
_don't_ try to change society today. But there's absolutely no objective
advantage to the current situation over a previous or a future one. That's
what's known as the naturalistic, or is-ought, fallacy.

What I find quite interesting is that, as a scientist, you take absolutely no
interest in learning how things work and how we can change them. You want to
hack computers but not society. How is that _not_ a political agenda?

P.S.

I tried finding a good starting point for learning the topic (of politics), or
at least understanding what issues researchers are looking into and what
they've found. I think a good introduction is:
[http://en.wikipedia.org/wiki/Power_(social_and_political)](http://en.wikipedia.org/wiki/Power_\(social_and_political\))

[1]:
[http://en.wikipedia.org/wiki/Politics](http://en.wikipedia.org/wiki/Politics)
and in particular
[http://en.wikipedia.org/wiki/Politics#Politics_as_an_academi...](http://en.wikipedia.org/wiki/Politics#Politics_as_an_academic_discipline)

~~~
yummyfajitas
I don't really know what you mean by political - I think you are defining it
pretty expansively. What, exactly, would count as NOT political to you?

 _If there's a society with mistreatment of certain races built in, then that
mistreatment must first be addressed._

It's a bit unreasonable to ask adwords engineers to layer on some post-hoc fix
many miles downstream when they can't even measure the mistreatment or
reasonably predict the effect of the fix. It's also completely unfair to the
_individual humans_ (individual humans are the only ones I care about) who
didn't mistreat anyone and are now obligated to pay for the actions of someone
else (in the form of fewer clicks on ads).

And of course, built into all of this is the assumption that statistical
disparities are all caused by mistreatment. That is unlikely to be true.

Incidentally, as per the description given in your edit, unintended
consequences of ad serving algorithms can only be interpreted as "political"
in the broadest sense. So can anything else. So if you just mean "political"
in the sense of "anything that ever affects anything", then sure - ad serving
is political.

~~~
pixl97
From Wikipedia on the first line of politics.

>Politics is the practice and theory of influencing other people on a global,
civic or individual level.

So yes, ad serving clearly falls under that scope.

~~~
yummyfajitas
Pretty much anything is - buying food, negotiating with an auto driver, asking
a girl for a date. As I said, in the broadest interpretation, everything is
political and the word is now meaningless.

~~~
pron
This again? The word is not meaningless just because you say it is (after
insisting on misinterpreting it). This is a discussion at the intersection of
technology and political science, a vast academic field you clearly have no
knowledge of and no interest in learning the basics of. Obviously, the
complete definition is nuanced. What you said right now is as knowledgeable
(and -- excuse me -- as intelligent) as someone saying, in a discussion of
physics, "general relativity is meaningless". I simply don't understand why
you insist on arguing matters in a field of study in which you have absolutely
no clue, and no interest. Last time you asked me for papers that you had no
intention of reading. So I've found something more manageable for you:
[http://en.wikipedia.org/wiki/Power_(social_and_political)](http://en.wikipedia.org/wiki/Power_\(social_and_political\)).
You can manage reading at least that. Then, look up "Politics" in Wikipedia.

~~~
yummyfajitas
The more ideas a word describes, the less information using it conveys.
Similarly, "a human" conveys far less information than "pron", and "an animal"
conveys even less information than "a human".

~~~
pron
Thank you. Nevertheless, the word "political" conveys just as much information
as needed to study it as a field -- like the word "physics". Physics covers a
lot (in fact, it encompasses politics, too, so it carries even less meaning),
and yet there are some things in nature that you'd study through the prism of
physics while others through the prism of biology, so the word physics seems
to be quite useful to define a field of study. So is the word politics (or
political).

There are, of course, many sub-types of politics -- as you'd have known if
you'd bothered to look it up on Wikipedia. But your intentional ignorance
combined with obsessive pedantry makes it very hard to provide the full
science with every HN comment. Like I've said so many times: either learn this
topic or don't, but why do you insist on flaunting your ignorance with such
vigor? You seem almost proud of how little you know and how little you wish to
understand.

~~~
yummyfajitas
If you tell me an action is either physical or political (how you define it),
I get no new information. Why waste words on it?

No science is necessary for anything I've said - you haven't actually made any
falsifiable claims. I don't know why you keep bringing it up.

~~~
pron
The categorization of something as "political" is not meant to be falsifiable
or objective, but to serve as a useful tool in studying it. It's exactly like
saying that studying planet motion is best done by applying physics, rust is
best studied with chemistry (so we say it's a chemical process), and mitosis
is best studied by biologists (i.e., it's a biological process). But
all of those are physical, and the latter two are chemical, and yet the
categorization helps. Is it always absolute? No. Carbon nanotubes might be
studied by both physicists and chemists, and ATP by both biologists and
chemists.

When we say an action is political, we mean it is best analyzed through
political sociology/psychology, i.e. by using the techniques developed to
study power dynamics in society, and drawing on the knowledge learned in that
field.

Usually, those categorizations aren't even contentious. No chemist is offended
when we say that metabolism is a biological process, even though it is _also_
chemical. But I think that in this case (of politics), many people -- and
especially here on HN -- don't even know what the word means, let alone its
academic definition.

And consider this. If you were to travel in time to, say, Mesopotamia, and
tell them that fire was a chemical process, they wouldn't know what you're
talking about. In fact, they may have used the word "chemical" to mean
material or something. But you know that the process is really one of
reactions among molecules -- they wouldn't even know molecules exist. So too
in politics. The power dynamics shaping our society are often hidden from
those who aren't trained to see them.

When we say "the invention of the fork was a political action that caused..."
(we don't, but the invention of the fork was just something I studied once so
it came to mind) everybody understands what that means, and no one is
offended, because people who know about power in society immediately grasp
what we're talking about. Those who don't might either not understand at all,
or even push back -- like you have -- and say something like, "but that was
purely technological", which is not unlike saying that lightning is a spear
of a god. You need to learn -- even a little bit -- to see politics at work,
and that Wikipedia article on power is an OK place to start.

------
morgante
> unless you consider invasive personalized ads a benefit.

I do. I'm much happier that my Facebook includes ads for New Relic and
MailChimp than condoms and sports.

More generally, I don't understand how algorithms or profiles can be
prejudiced. If they're inaccurate, there is a strong economic incentive to
improve accuracy over time. If they're accurate, that alone should be a
defense—if your history suggests you are likely to default on a loan,
companies should have every right not to loan you money.

~~~
zimbatm
The algorithms are usually statistical, which by definition makes them
prejudiced. Targeted ads only mean ads targeted at some group that they
believe you belong to. Maybe some day each person will be exactly profiled,
but that's not the case right now.

The other issue is that humans are much more permeable to their environment
than we would like to admit. We are influenceable. Ads invade our environment
constantly for that reason: their goal is to influence us into buying
something. They can take us in directions that we want to avoid. What if a fat
person is trying to lose weight? Ads have measured that this individual is
likely to buy cake and present him with delicious assortments everywhere he
goes. It's beneficial for the companies, but what about that individual?

The algorithms can only be accurate to who you are, not what you want to
become.

~~~
icebraining
 _What if a fat person is trying to lose weight? Ads have measured that this
individual is likely to buy cake and present him with delicious assortments
everywhere he goes. It's beneficial for the companies, but what about that
individual?_

Conversely, what if the algorithms have decided that the person is a good lead
for a local gym, and those ads push him or her into losing weight?

We're all influencing each other, constantly. In a way, banner ads and such
are some of the most honest ways of going about it, since they're clearly
delineated as such.

 _The algorithms can only be accurate to who you are, not what you want to
become._

If aforementioned fat person searches for "weight loss" and Google starts
showing ads for that, haven't the algorithms been accurate to what (s)he wants
to become?

~~~
marketforlemmas
I don't think the issue is whether or not ads should prompt them to lose
weight. I think the crux of the issue is that we (humans) are ceding this
judgment (which has political and societal ramifications) to the ad-matching
algorithm. Some people (myself included) find that bothersome.

Of course not every example of a targeted ad is going to be controversial
(like advertising merchandise of a sports team that I like versus the one that
is just popular in my area), but it's problematic that we don't have any sort
of machinery in place to control/modify the examples that are
politically/societally important.

At least if that machinery were in place, we could transfer the agency of
these decisions to the humans who run Google, rather than to the algorithms
that are just running in the background.

------
Rapzid
I'm not convinced the article established the computer program "definitely is"
racially biased. The claim was thrown out there and then not substantiated. I
guess the article needed to be racially charged.

~~~
pron
This is because you (like many others) are unfamiliar with what racism means.
Racism (when applied to processes rather than individuals) means practices --
intentional or not -- that result in an unequal distribution of power among
races. If a computer program perpetuates or increases the unequal distribution
of power among races, then that program is racist. The same definition, BTW,
applies to sexism, where that refers to unequal distribution of power between
the sexes.

~~~
yellowapple
But what about that program is actually racist?

The first paragraph establishes that the program bases its decisions on
indicators of financial responsibility. It then makes a decision based
(apparently) on race alone. This would require race to be a recorded indicator
of financial responsibility as decided by the programmer (or his/her
employer).

Had race not been recorded at all by the computer, then it would not have made
a "racist" decision (for it would be based entirely on individual merit,
rather than any particular connections with others of a specific demographic),
and this article would therefore be turned over onto its head and rendered
entirely moot.

~~~
pron
This is a bit of a philosophical question, and I don't think you should take
it too literally, but here's what I think. If race is one of the variables,
then, due to correlations created by society, race might be learned as a
negative signal, which is racist. If it isn't one of the variables, it might
still be inferred. For example, if your purchasing history is an input, there
is probably a correlation between purchase of cosmetics for blacks and chances
of incarceration, which, in turn, affects your chances of loan repayment. The
correlation between being black and buying, say, hair-care products for blacks
is direct and causal (and completely non-racist -- there is no effect on
power), so this could be an almost perfect, and neutral, proxy for race, and,
in fact, might be the best and simplest correlation the program could come up
with.
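A synthetic sketch of that proxy effect (every distribution below is invented for illustration): race is never an input to the "model", yet a purchase that correlates strongly with race splits the sample along racial lines anyway.

```python
import random

random.seed(0)

# Invented population: race is never shown to the scoring rule, but a
# purchase that correlates strongly with race leaks it anyway.
people = []
for _ in range(10_000):
    in_group = random.random() < 0.2  # True = member of the targeted race
    buys_product = random.random() < (0.9 if in_group else 0.02)  # near-proxy
    # Suppose society imposes worse repayment odds on that group:
    repays = random.random() < (0.6 if in_group else 0.8)
    people.append((in_group, buys_product, repays))

def repay_rate(rows):
    return sum(repays for _, _, repays in rows) / len(rows)

buyers = [p for p in people if p[1]]
non_buyers = [p for p in people if not p[1]]

# The "race-blind" rule only sees the purchase, yet it splits repayment
# rates almost exactly along racial lines:
print(repay_rate(buyers), repay_rate(non_buyers))
assert repay_rate(buyers) < repay_rate(non_buyers)
```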

------
yellowapple
I stopped after the first paragraph, when the premise of the entire article
seemed to suddenly base itself upon some random jump from lots of indicators
of financial responsibility to race alone without any apparent connection. If
the program was designed to base its decisions on financial responsibility
indicators, then the _only_ way for it to become "racist" is if race is also
recorded as a financial responsibility indicator, which would thus indicate
that the _programmer_, rather than the program, is racist.

Basically, the first paragraph effectively creates a strawman; I can only
assume that the rest of the article seeks to defeat that particular strawman.

