
Don’t ask if artificial intelligence is good or fair, ask how it shifts power - MindGods
https://www.nature.com/articles/d41586-020-02003-2
======
ghostcluster
> A year ago, my colleagues and I created the Radical AI Network, building on
> the work of those who came before us. The group is inspired by Black
> feminist scholar Angela Davis’s observation that “radical simply means
> ‘grasping things at the root’”, and that the root problem is that power is
> distributed unevenly.

I'm a bit uncomfortable with this kind of political posturing being published
in _Nature_, the preeminent science journal. It seems the exact wrong forum
for it.

~~~
xmprt
The article is low on examples, which doesn't lend it much credibility, but I
can sort of understand where she's coming from because Google's AI has had
similar issues in the past. Namely, YouTube classifies LGBTQ+ content as not
advertiser-friendly and therefore doesn't recommend it as much and often
demonetizes it.

~~~
ramblerman
Would that not be based on the fact that that content is somehow generating
less revenue?

Or you believe there is a more sinister reason?

~~~
tobr
Videos with certain words in their titles might be flagged and can't have ads.
As I understand it, they're trying to prevent pornographic content from being
monetized, and their list of dirty words includes neutral LGBTQ+ terms like
"gay" and "lesbian". This has been verified by experiments where people upload
the same video with different titles. [1]

So it’s yet another opaque system that might not have a sinister intention,
but has a pretty sinister outcome.
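
(As a hypothetical sketch - the actual system isn't public - the kind of naive
title filter that would produce exactly this behavior is a few lines of code:)

    # Hypothetical reconstruction; YouTube's real classifier is not public.
    BLOCKLIST = {"gay", "lesbian", "porn", "xxx"}

    def advertiser_friendly(title: str) -> bool:
        # Flag any title containing a blocked word, regardless of context.
        return not (set(title.lower().split()) & BLOCKLIST)

    print(advertiser_friendly("Our wedding video"))          # True
    print(advertiser_friendly("Our lesbian wedding video"))  # False, same video

Which is exactly why re-uploading the same video under a different title flips
the monetization decision.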

1: https://www.vox.com/culture/2019/10/10/20893258/youtube-lgbtq-censorship-demonetization-nerd-city-algorithm-report

~~~
Nasrudith
Reminds me of another case of the opposite with Xbox Live, where "gay" and
"lesbian" were being censored to try to prevent their then-prevalent use as
terms of abuse.

To be frank, to me the algorithm seems to be a scapegoat for society's own
fucked up norms and practices. It isn't a moral process, and expecting it to
act like one isn't reasonable - it would be like demanding a cargo value
calculator stop listing concentrated drugs as the most profitable commodity to
transport per unit of mass and volume because addiction ruins lives and
overdoses end them. It is terrible, but it is accurately describing the status
quo.

------
Isinlor
I'm not sure if machine learning researchers have tools to analyze how the
minutiae of their research, e.g. proposing a new activation function for an
ANN, will impact shifts in power in the future. I personally haven't the
slightest clue how institutions like the police across the world operate and
what power dynamics govern them.

IMO academia is really bad at impacting anything, mostly because they are a
couple of steps removed from the actual engineering and management that lead
to impactful deployments. Not to mention that academia overall is extremely
naive - they failed for some 20 years to stand up for themselves when it comes
to research publication. Some researchers even manage to lose the rights to
access their own research without paying big corporations... For anyone
interested in raw power plays, academia is like stealing candy from a kid.

The biggest impact academia could have is on education and that's probably
where the focus should be concentrated.

------
khawkins
Clearly, the goal of this article is to shift power into the hands of the
author and her ideologically aligned community.

~~~
tehjoker
Well, yes. If you don't like those that wield power you must contest it.
Anyone that aims to accomplish a particular goal, such as more equitable AI,
will need to challenge the ideology of power and seek to impose their own
view.

~~~
SpicyLemonZest
Generally people accomplish goals by presenting intellectual arguments why the
goals are good and should be worked towards. Engaging in power struggles to
impose one's view by force is certainly an option that exists, but it's
generally seen as toxic behavior and I do not think we should encourage it.

~~~
thundergolfer
If we read the same article, nowhere is there a suggestion of "[imposing]
one's view by force".

Your comment seems to suggest otherwise.

~~~
SpicyLemonZest
"Impose their own view" comes from the comment I was responding to, and the
commenter did not seem to be suggesting an imposition where "those that wield
power" would have a choice in the matter.

~~~
thundergolfer
I don't think you're reading their comment charitably. They're using "impose"
in the context of politics being a contest.

Winning a democratic election necessarily "imposes" upon the losers some
decisions of the winners, eg. higher taxes. That's not authoritarianism,
that's just politics.

~~~
logicchains
>Winning a democratic election necessarily "imposes" upon the losers some
decisions of the winners, eg. higher taxes. That's not authoritarianism,
that's just politics.

This doesn't change that it's the majority imposing its will on the minority
by force.

~~~
tehjoker
This is all true, and it's also true that those in power use power to serve
their own interests and the interests of their group. That's why power must be
contested. If you're neutral, power is serving you or else you'd be incensed.

~~~
tehjoker
Responding to the comment below: Well, then the stakes for you are low enough
that you don't care, you feel too powerless to protest, or the decisions are
in some sense fair. Very often people are subjected to decisions that are not
fair and they should dissent.

------
meowface
The core arguments, as far as I can tell, are:

>Researchers should listen to, amplify, cite and collaborate with communities
that have borne the brunt of surveillance: often women, people who are Black,
Indigenous, LGBT+, poor or disabled. Conferences and research institutions
should cede prominent time slots, spaces, funding and leadership roles to
members of these communities.

And:

>Remarkably little research focuses on serving data subjects. What’s needed
are ways for these people to investigate AI, to contest it, to influence it or
to even dismantle it. For example, the advocacy group Our Data Bodies is
putting forward ways to protect personal data when interacting with US fair-
housing and child-protection services.

I'm unable to find any other concrete suggestions or arguments. There are some
things like "In addition, discussions of how research shifts power should be
required and assessed in grant applications and publications", but "how
research/AI shifts power" seems to be a very fuzzy and ill-defined concept.

People aren't going to agree on what power is, who possesses it and who
doesn't, what does or doesn't constitute shifting or non-shifting of power, to
whom, how, and why. Let alone exactly how their research's tiny piece of the
puzzle fits into it all.

I think it'd be clearer if this contained specific claims or requested
specific legal rights beyond broad terms like "shifting power": anonymization
of data, regulations on how companies and the government can collect and use
people's data, how and when law enforcement can use automatic recognition
systems, etc. Maybe an auditing organization for companies' and governments'
heuristic systems (AI or otherwise), to look for type I errors (YouTube
flagging LGBT+ videos as not suitable for minors comes to mind). Maybe some
way to determine what technologies could be ripe for abuse.

It's like making your motto "our organization is dedicated to ensuring AI
benefits the people, not the elites". Okay... who are the elites, who are the
people, how could the elites and people be benefitted or harmed, etc.

~~~
ponker
Lol. “Communities that have borne the brunt of surveillance” and then a list
starting with “women.” Men are surveilled at vastly higher rates than women,
and a huge majority of police brutality is against men (specifically, poor and
black men). Next up is “Indigenous” which is also laughable, as at least in
the US the federal government clearly couldn’t give two shits about what is or
isn’t happening to Native American people.

------
nil-sec
The fraction of people in AI working on problems that need to consider
diversity/fairness/etc. is rather small. Yes, those people, those specific
applications, should be designed with care, should be overseen properly and,
in the best of all worlds, should be unbiased. However, the recent discussion
around this topic is framed as if AI research in general were somehow
unethical and should be informed more by minority populations. While this is
something to strive for, for the sake of having a more just society, the
impact of diversity on an AI system that, say, classifies mitochondria in
yeast is not clear to me. I'd argue the majority of problems in AI right now
are of this form and not of the form addressed by the author.

If you want to push quotas and increase minority representation in
occupations, do so. If you want to address specific biases in very specific
applications, then do that. Don't use the latter to make an argument for the
former, though. These are two different issues, and while I recognize the
author sees a link here, I'm not so certain about that. To be clear, I think
both are worthy goals. But it's somewhat disingenuous to make an inherently
political argument, which people may disagree with, and justify it by
overgeneralizing a specific, niche technological-ethics question to an entire
field.

~~~
joe_the_user
I don't think you're making much of an effort to describe the variety of AI
applications that are appearing. The thing about AI is that it thrives on
data, and lots of large institutions have data that they use to make decisions
about people. That's where fairness questions come in.

Googling randomly, I see:

AI Applications: Top 10 Real World Artificial Intelligence Applications

Marketing, Banking, Finance, Agriculture, Health Care, Gaming, Space
Exploration, In Autonomous Vehicles.

About half of those seem like situations where fairness questions enter:
banking and finance, plausibly (who gets a loan); marketing, plausibly (who
gets sold what); health care, plausibly (who gets treated, what groups' data
is and isn't used for what, etc.). The others, not so much.

~~~
nil-sec
In industry, yes. I had the impression this article was aimed at academia and
research in AI.

~~~
joe_the_user
Well, academia pretty much has to study AI applications.

AI is not a field like physics, which can be roughly separated into
theoretical tools and applications of those tools.

AI is about creating heuristics: approximations to data that "generalize"
while keeping how that generalization works vague. Essentially it's a very
leaky abstraction, so researchers need to be concerned with how that leaking
happens and what its implications are.

~~~
nil-sec
I happen to work in AI research and what you are saying isn’t true. There is
theoretical machine learning and applications of it. They are distinct.

The former is largely task agnostic and deals with fundamental issues such as:
how to train networks in an unsupervised way, how to do hard statistical
inference for intractable models in an approximate but good-enough manner, how
exactly gradient descent behaves and why it works, whether there are better
ways of optimizing NNs, what about pruning? The list goes on.

On the other hand you have applications of AI to other fields that require
their own research. In biology, for example, you may want to segment cells and
their compartments, design better point spread functions for your microscope,
or classify cell types. These are applied problems.

Again, as stated in my original post, striving for more diversity is a good
thing and should be, and is, done. Why make it about AI ethics and bias,
though, when large portions of this field have no point of contact with it?

~~~
joe_the_user
In your research, how do you define the process of "generalizing"?

I understand theoretical research exists, but I think it's problematic that
theoretical researchers imagine that a kind of "generic" problem exists, when
what actually exists is a variety of specific test sets.

I mean, is SOTA on ImageNet or whatever dataset a theoretical or an applied
question? What theoretical research in AI is so theoretical that the question
of data sets doesn't appear?

~~~
nil-sec
Let me be very concrete: there is currently a lot of research being done in
so-called "contrastive learning". This is an unsupervised technique in which
you train a network to build good representations of its input data, not by
explicitly telling the network "this is a dog" but by telling it "this image
is different from that image and similar to this other one".

How this works, why this works, and coming up with the technique itself are
all data agnostic. All you have done so far is write down a function
f(x) -> y and a loss L(x,y), with x the input and y the output, specifying
your model.
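
(For concreteness, a minimal sketch of one common instance of such a loss -
the InfoNCE objective - assuming a PyTorch-style setup; the names here are
illustrative:)

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z_a, z_b, temperature=0.1):
        # z_a, z_b: (batch, dim) embeddings of two views of the same
        # batch; row i of z_a matches row i of z_b ("similar"), every
        # other row counts as "different".
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature  # pairwise similarities
        targets = torch.arange(z_a.size(0), device=logits.device)
        # Cross-entropy pulls matching pairs together, pushes the rest apart.
        return F.cross_entropy(logits, targets)

Nothing in this function refers to classes, demographics or any dataset
semantics at all; it only compares embeddings, which is what makes the
technique itself data agnostic.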

Of course you use a specific dataset to train your model in the end and see if
it works. But the model and the technique itself are not grounded in any
specific dataset and thus nothing in this model perpetuates bias.

Now usually the next step for you as a researcher is to evaluate the
performance of your model on a test set. Let's take ImageNet. Now there are
3 situations.

A) Your Train set is biased & your test set is biased in the same way.

B) Your train set is biased & your test set is not.

C) Your train set is unbiased & your test set is biased.

By "biased" I mean any kind of data issue such as those discussed in this
article, e.g. no women in the class "doctor".

In situations B) & C) your model won't work well, so you actually have an
incentive to fix your data. This will happen to you in production if you train
your tracker only on white people, say.

Situation A) is likely what's happening with ImageNet and other benchmark
datasets. In this case your model learns an incomplete representation of the
class "doctor" and learns the spurious correlation that all doctors are men.
This will work on the test set because it's equally biased.

You go, get good test results and publish a paper about it, unaware of the
inherent dataset biases. (You could have done all this also on MNIST or a
thousand other datasets that do not have any issues with societal biases
because they are from a totally different domain, but that’s another point).
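
(A toy numerical sketch of situation A, assuming numpy and scikit-learn; the
data generator is invented for illustration. A model that latches onto a
spurious feature looks great on an equally biased test set and falls apart on
an unbiased one:)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, p_spurious):
        # Invented data: a weak "real" feature plus a spurious feature
        # that agrees with the label with probability p_spurious.
        y = rng.integers(0, 2, n)
        real = y + rng.normal(0, 2.0, n)
        spurious = np.where(rng.random(n) < p_spurious, y, 1 - y)
        return np.column_stack([real, spurious]), y

    X_tr, y_tr = make_data(5000, 0.95)   # biased train set
    X_a, y_a = make_data(5000, 0.95)     # test set with the same bias (A)
    X_b, y_b = make_data(5000, 0.5)      # unbiased test set (B)

    clf = LogisticRegression().fit(X_tr, y_tr)
    print(clf.score(X_a, y_a))  # high: the bias goes unnoticed
    print(clf.score(X_b, y_b))  # much lower: the honest picture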

In this entire process of coming up with the model, training it and evaluating
it, there is no point at which the researcher has any interest in, or benefit
from, working with biased datasets. Furthermore, besides potentially
overestimating the accuracy of your model, there is nothing in here that would
hurt society or further perpetuate biases. That is because models are
generally not designed to work on a specific dataset.

Again, this is a different story when you use your model in production. In
this case you are in situation B) or C), and herein lies the crux. If you can
make money from this bias, or maybe it perpetuates your own biases, well, you
might keep it like that. This should be fixed. Here now is a real argument for
why there should be diverse populations working on AI systems that are used in
industry.

Of course, having diverse populations in science is also our goal - not to fix
our datasets, but to do better research.

------
mtgp1000
>In my view, those who work in AI need to elevate those who have been excluded
from shaping it, and doing so will require them to restrict relationships with
powerful institutions that benefit from monitoring people.

I am not going to choose who I work with based on their race/sexuality/gender.

>Researchers should listen to, amplify, cite and collaborate with communities
that have borne the brunt of surveillance: often women, people who are Black,
Indigenous, LGBT+, poor or disabled.

Right, we know, only privileged straight white men escape the clutches of
surveillance.

All these ideologues are doing is setting the stage for the marginalization of
whites, a global minority and soon to be a minority in the US. I can't believe
this kind of casual discrimination has not only been normalized, but is openly
and explicitly encouraged.

This is not ok.

Edit: you know, the same way that it takes generations to build wealth, it
takes generations to build knowledge, especially institutional knowledge. If
we just hand over a chunk of our industries to people because of their
identities, by definition we will be even less of a meritocracy - and that's
bad for all of society.

~~~
082349872349872
What is "soon"?

(https://en.wikipedia.org/wiki/Demographics_of_the_United_States#Projections
suggests "A report by the U.S. Census Bureau projects a decrease in the ratio
of Whites between 2010 and 2050, from 79.5% to 74.0%")

https://en.wikipedia.org/wiki/The_Rising_Tide_of_Color_Against_White_World-Supremacy
was almost exactly a century ago. Might be worth a read before we go stampede
cattle through the Vatican.

------
xlm1717
“Once men turned their thinking over to machines in the hope that this would
set them free. But that only permitted other men with machines to enslave
them.” - Dune

~~~
Nasrudith
That is pretty ironic, given that the sentiment comes from actual slavers who
justify it by appeal to their ancestors' fight against the machines.

------
young_unixer
Since when is Nature an outlet for political propaganda and racial
discrimination? Seriously, what a useless article.

~~~
collyw
It is concerning to see Nature viewing everything from such a postmodernist
perspective of power dynamics.

------
jbob2000
The best thing you can do with powerful technology is to give it away for
free. Then everyone is on even footing and you don’t get an abusive power
imbalance.

~~~
RhysU
Nuclear proliferation is a counterexample. Its danger goes up with the number
of states that have the tech.

~~~
catalogia
North Korea has nuclear weapons and they seem to be the looniest state around
(Russia, America, and China also have nuclear weapons); I think this horse has
left the stable.

~~~
082349872349872
Take a look at the trajectories North Korea has used in their missile tests.
Their leadership may well be as loony as claimed, but whoever is picking
parabolas knows how to speak softly and loft a high apogee.

(similarly, no matter how loony their current CINC may be, whoever in the USAF
redeployed B-52s from Andersen AFB to Minot AFB also understands aposematism)

------
Wolfenstein98k
Marxian analysis is the only analysis now, apparently.

~~~
cambalache
It is going to get worse. If you think carefully, then, to quote Feynman,
"There is plenty of room at the bottom".

Do you want examples?

A Simon Bolivar statue was vandalized in Florida. Now, Bolivar was no saint,
but he is a South American hero; he died in 1830, and now he has been
tarnished just by being called a "white slaver" (he was not). Well, let's see:
you can now come after the name Bolivia, named after him; you can also include
Colombia, America, the Philippines, El Salvador, the Dominican Republic and
many, many more. I have seen several prominent people just shy of outright
requesting that.

The Oscars are going to change their eligibility criteria, supposedly based on
the British awards. Quoting from an NYT article: "All entries in two British
film categories, outstanding British film and outstanding debut by a British
writer, director or producer, are now required to increase representation to
meet at least two of four diversity standards, like 'onscreen representation,
themes and narratives,' and 'industry access and opportunities,' among
others." Do you know what this means for art? Bye Celine, bye Kafka, bye
Joyce, bye Kurosawa, bye Kubrick, bye Lynch, bye Tarantino and so on. Only
wholesome, clean, politically conscious entertainment will be recognized. Now,
what does this remind you of?

None of the current power-holders will be affected by all these changes, so
there is no essential change. The big traditional companies are already on
board, Silicon Valley is on board, the savvy political parties are on board,
Hollywood is on board, the media too. What this will do is disenfranchise and
silence any dissenting, dissonant voice and create a moralist culture which
will dictate, in top-down form, the "correct way" to do things and interpret
the world. You disagree? No Facebook for you, no YouTube, no Mastercard, no
Visa, no Stanford, no Netflix, no New York Times, no BBC, nothing for you.

~~~
TheCoelacanth
It seems like you are drastically overstating what the BFI's actual
requirements are [1]. Basically, if you hire some people outside of
London/South East England and offer some paid internships and training to
under-represented groups, you have passed.

[1] https://www.bfi.org.uk/sites/bfi.org.uk/files/downloads/bfi-diversity-standards-criteria-2019-07-23.pdf

~~~
cambalache
I was just quoting the New York Times article. If what you say is true, then
the rule is meaningless and unnecessary, and only included to virtue-signal.

------
bsenftner
This view is an improvement over the status quo view held by most
technologists.

~~~
krona
The 'technologist' would typically (I would argue) want to _remove_ bias from
data, not introduce it to make a model less effective at its intended purpose.

The article suggests making models politically aware, which is probably
standard practice in many parts of China.

~~~
ardy42
> The 'technologist' would typically (I would argue) want to remove bias from
> data, not introduce it to make a model less effective at its intended
> purpose.

This quote from the OP directly addresses that attitude: "When the field of AI
believes it is neutral, it both fails to notice biased data and builds systems
that sanctify the status quo and advance the interests of the powerful. What
is needed is a field that exposes and critiques systems that concentrate
power, while co-creating new systems with impacted communities: AI by and for
the people."

> The article suggests making models politically aware, which is probably
> standard practice in many parts of China.

While that's a technically correct statement, it misses the point by a mile.
Firstly, it's impossible to make non-political models: at a minimum they embed
the politics of the status quo, which is often falsely confused with the
absence of "politics" (for the same reason it's hard to see the mountain
itself when you're standing on top of it). Secondly, what makes a model
designed to embed the politics of the Communist Party of China objectionable
is not that it embeds _an_ instance of the class "politics," but rather the
characteristics of that specific instance.

~~~
DougBTX
> This quote from the OP directly addresses that attitude

It doesn't directly address it; the parent was talking about intention, the
article is talking about outcome. Smooshing them together produces nonsense:
"when someone wants to remove bias from data, they will fail to notice biased
data".

Surely it isn’t a given that someone trying to remove bias will believe that
all bias has been removed?

------
SpicyLemonZest
"Fair" and "good" are certainly tricky words to narrow down, granted. But to
say they're not even worth discussing - that the _only_ question to ask is
whether AI shifts power to groups I like from groups I don't like - is to let
politics swallow the entire field and make collaboration across ideological
boundaries impossible. That's not going to lead to good outcomes for anyone.

~~~
GaryNumanVevo
Typically, ethics is devoid of politics. A simple apolitical question: is this
research going to harm anyone?

It's open-ended and simple enough to answer.

