
Google cancels AI ethics board in response to outcry - minimaxir
https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board
======
dang
We merged the comments from
[https://news.ycombinator.com/item?id=19578963](https://news.ycombinator.com/item?id=19578963)
into this thread.

The previous discussion on the story is
[https://news.ycombinator.com/item?id=19567290](https://news.ycombinator.com/item?id=19567290).

------
redwood
This is a shameful situation. We all need to wake up, take a step back, and
realize what's happening here: increasingly, you can't have a conversation
about one issue unless you are a hundred percent pure, in the eyes of a
frankly self-proclaimed judge, jury, and executioner, on 100% of your other
views.

I am of the left myself, but on a meta-level what we're doing is essentially
making it impossible to ever find middle ground on issues if we're unable to
talk to folks we don't agree with on 100% of things.

This is a dangerous trend for all of us because it makes it impossible to
find common ground in an open democracy. Even if half the country holds, in
our eyes, essentially deplorable views, we need to recognize that they feel
the same about us, and it's simply not constructive to be unable to find
common ground.

~~~
kuzehanka
You articulated so well what I've been observing.

With the widening political divide, we find ourselves in this weird situation
where any decision makers are swarmed with moral outrage attacks unless
they're bastions of politically correct ideology. An increasing number of
topics are taboo because merely opening up a discussion around them summons
the moral outrage army. Moral outrage has become the most effective social
manipulation technique, and morality long ago stopped playing into the
equation.

The entire point of ethics discussions is to capture the views of the
population in a proportionate manner and try to come to a set of mutually
positive outcomes. Bonus points for not stomping on minority views. Instead,
even on HN, we have people claiming that their views are the Correct views
and the opposition should not even be given a voice.

No matter how disagreeable you may find some particular viewpoint, if it
happens to be held by a non-negligible number of people and you marginalise
them, it will eventually result in tremendous blowback.

I don't know how we got here; it's a bit scary.

~~~
fixermark
I'm underwhelmed. In this specific example, the individual in question
could've gotten out ahead of the press cycle and explained herself. That she
didn't suggests the beliefs attributed to her probably are her beliefs, and
yeah---that's unacceptable for this role. This isn't thoughtless litmus-
testing; this is "Google has a track record of errors regarding its trans
userbase and is a company that specifically wants to do better there." If she
doesn't see trans people as valid, she's the wrong fit for the role.

~~~
raxxorrax
> that's unacceptable for this role

Do we really need to play facts <-> opinion?

In a discussion about ethics you include as many viewpoints as possible, or
it becomes dishonest immediately.

You are not a priest or something like that. It has taken us too long to get
past accepting presumed authority.

~~~
xnyan
I don't accept people who advocate suppressing the rights of trans people as
members of society. I'm fine if you judge me for that or think I'm moralizing,
because I am. I can live with that.

~~~
belorn
In almost all political disagreement there is an underlying fear or concern
going unaddressed. Simply shutting people out only amplifies it further.

For example, a very common topic in practically all discussion around trans
rights in the US is the issue around toilets. An obvious solution is unisex
toilets, but the right never accepted it and the left abandoned it, so it is
now a centrist political view. On both the left and the right, there seems to
exist a fear that unisex bathrooms are more likely to cause crime, though
there is no data to actually support it. Thus, rather than settle the issue,
they fight over it.

If we could convince both sides, maybe by using data from countries which
overwhelmingly use unisex bathrooms, we could settle those fears.

------
vilhelm_s
I thought this Twitter thread by Kelsey Piper was pretty good:

> as I look through her history I am mostly confused why she was chosen in the
> first place. She doesn't have any background in tech. She doesn't have any
> prior writing or research related to AI. No background in the other topics
> that come up - surveillance, data privacy, international security. She's
> mostly a culture warrior, so no wonder now we're having a culture war.

[https://twitter.com/KelseyTuoc/status/1113544871292182528](https://twitter.com/KelseyTuoc/status/1113544871292182528)

~~~
telltruth
Interestingly, everyone who hasn't done anything technical but wants to ride
the AI train chooses to become an "AI ethics" person these days. You can look
up a vast number of these AI ethics "experts" giving talks on this subject
without ever having trained a model for anything. So apparently the bar for
this "field" is zero experience in anything technical, but a great ability to
hold a microphone and induce FUD in public.

~~~
ecshafer
I would trust an AI ethicist with a PhD in philosophy or ethics if their
ideas were coherent. Tech backgrounds are not necessarily required.

~~~
telltruth
The big problem is that these people don't understand what current tech
really is. The media and Musk have hyped this up as "AI" but it's not even
remotely AI. These people go around presenting slides as if the sky is
falling and induce massive FUD in the general public. There is no real AI as
far as anyone technical is concerned. So the whole "AI ethics" deal is a
great way for non-technical people to get on the AI train and command massive
salaries. There are a few areas where policy is needed, like surveillance and
detecting model bias, but they are few, and they do require awareness of
actual capabilities and an understanding of the tech.

~~~
matt4077
The central example of AI ethics is a self-driving car deciding whom to run
over. That example is perfectly possible even with today's technology. I
don't know what this continuation of playing-chess-doesn't-take-intelligence
griping is supposed to accomplish, except as posturing by people conflating
perpetual contrarianism with insight.

~~~
michaelt
See, that exact example is why I look askance at a lot of the field of 'AI
ethics'.

I mean, human drivers' education doesn't cover choosing who to kill in
unavoidable crashes. Isn't that because we believe crashes where the driver
_can't_ avoid the crash, but _can_ choose who to kill, are so rare as to be
negligible?

IMHO much more realistic and pressing AI ethics questions surround e.g. neural
networks for setting insurance prices, and whether they can be shown not to
discriminate against protected groups.

~~~
DebtDeflation
> See, that exact example is why I look askance at a lot of the field of 'AI
> ethics'

The main focus of "AI ethics" needs to be on model bias and how to counter it
through transparency and governance. More and more decisions, from mortgage
applications to job applications, are being automated based on the output of
some machine learning model. The person being "scored" has no insight into
how they were scored, often has no recourse to appeal the decision, and in
many cases isn't even aware that they were scored by a model in the first
place. THIS is what AI Ethics needs to focus on, not navel-gazing about who
self-driving cars should choose to kill or how to implement kill switches
for runaway robots.
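
To make the bias point concrete, here's a minimal sketch of the simplest
audit this implies: compare a scoring model's approval rates across groups
and flag a large gap, per the "four-fifths rule" used in US employment law.
This isn't anything Google runs; the scores, groups, and threshold below are
all hypothetical.

    import numpy as np

    def approval_rates(scores, groups, threshold=0.5):
        """Approval rate per group, given opaque scores from any model."""
        scores, groups = np.asarray(scores), np.asarray(groups)
        return {g: float(np.mean(scores[groups == g] >= threshold))
                for g in np.unique(groups)}

    def disparate_impact_ratio(rates):
        """Lowest group approval rate divided by the highest. Values
        below ~0.8 are a conventional red flag, not proof of bias."""
        values = list(rates.values())
        return min(values) / max(values)

    # Hypothetical mortgage-model scores for applicants in two groups.
    scores = [0.9, 0.7, 0.4, 0.8, 0.3, 0.6, 0.2, 0.45]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

    rates = approval_rates(scores, groups)   # {'a': 0.75, 'b': 0.25}
    print(disparate_impact_ratio(rates))     # 0.333... -> a red flag

Real audits are far messier than this (proxy variables for protected
attributes, base-rate differences, competing fairness definitions), which is
exactly where a board with actual technical depth would earn its keep.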

------
nosseo
Hey HN, I'm the author of this article (also the precursor predicting this,
which was on the front page yesterday). My impression is that the best place
to look for an explanation is actually the Facebook post by Luciano Floridi:
[https://www.facebook.com/floridi/posts/10157226054696031](https://www.facebook.com/floridi/posts/10157226054696031).
My sources at Google just couldn't see the panelists constructively working
together on a panel at this point. Obviously, protests by Google employees
played a role too.

~~~
geofft
> _My sources at Google just couldn't see the panelists constructively
> working together on a panel at this point._

Well, that's the point, isn't it? If the AI ethics council is meeting four
times a year for a couple hours at most, and has people who can't even agree
on "Which bathroom should this person use," how are they going to produce
productive advice for actual hard questions that _haven't_ been well-
explored?

There is a place for debate between people who don't agree on worldview. This
council was never going to be it.

~~~
educationdata
Is "Which bathroom should this person use" an easy question? If it is really
easy, why it is so controversial? You may think it is an easy question, in
reality it is an actual hard question.

~~~
matt4077
"Some say you shouldn't exist. Others say you have a right to live. Hard
question."

~~~
93s6oz
That is polarising. An extreme take. I haven't seen anybody make the argument
that trans people should be killed, just that men should not be able to use
women's bathrooms. In this case, that's the argument. Turning it into
something like that is not going to help anybody.

~~~
pjc50
> I haven't seen anybody make the argument that trans people should be killed

Brunei?

~~~
sadris
I think their law applies to homosexuals, not transsexuals.

~~~
mike00632
If I were trans I don't think I would take my chances of them making a
meaningful distinction.

------
everdev
This article is a little light on details, but there's a more in depth article
here:

[https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board](https://www.vox.com/future-perfect/2019/4/4/18295933/google-cancels-ai-ethics-board)

The initial complaint was around comments from 1 of the 8 board members: Kay
Coles James

The complaint was published Apr. 1 in a Medium post:

[https://medium.com/@against.transphobia/googlers-against-transphobia-and-hate-b1b0a5dbf76](https://medium.com/@against.transphobia/googlers-against-transphobia-and-hate-b1b0a5dbf76)

As evidence the post lists 3 tweets from Kay Coles James:

[https://twitter.com/KayColesJames/status/1108768455141007360](https://twitter.com/KayColesJames/status/1108768455141007360)

[https://twitter.com/KayColesJames/status/1108365238779498497](https://twitter.com/KayColesJames/status/1108365238779498497)

[https://twitter.com/KayColesJames/status/1108365238779498497](https://twitter.com/KayColesJames/status/1108365238779498497)

~~~
m1el
The last two links are the same.

~~~
everdev
Good catch. Here's the 3rd tweet:

[https://twitter.com/KayColesJames/status/1100488434500685824](https://twitter.com/KayColesJames/status/1100488434500685824)

------
ananth22by7
I fail to understand. Are "comments" and "views on issues" enough to cause
such a stir? People's sensitivity boggles my mind. Can we not say things
without having to walk on eggshells? What kind of unforgiving and judgemental
culture are we living in?

~~~
dx87
If it makes you feel any better, not everywhere is like that. I work at a
mid-sized technology consulting company on the east coast, and I've heard
multiple managers say that they hate when they have to manage developers from
the west coast. My manager told us about a time she got a nasty email from
the lead developer of a team because she started an email with "Hey guys"
instead of a gender-neutral greeting.

~~~
smelendez
That is regional, though. I'm from New York and I've addressed groups of older
female relatives as "you guys" without anyone batting an eye.

In other parts of the country that's not common: people see "guys" as
gendered and feel like you're forgetting they're there.

The word "dude" as a form of address is gender-neutral and business-
appropriate in some places and not others.

It's a hard problem: "y'all" works in some places but sounds goofy and
affected in others. Same with "folks."

~~~
tiglionabbit
It's not really regional. I have noticed this most from folks who are
transitioning genders. Some are OK with "they", but some prefer folks to go
above and beyond to affirm their new gender, thus objecting to "dude" and
"guys".

As a result, I've gotten into the habit of saying "y'all" and "folks" even
though I'd otherwise never have used these words.

~~~
seanmcdirmid
It is regional outside of the USA: "guys" is quickly gaining gender
neutrality in the UK, Australia, India, .... "Y'all" or "folks" won't cut it
in those places.

~~~
swish_bob
Folk is fine in the UK and works well, as does "everybody".

------
itg
Why can't leadership at Google stand by any of their decisions, instead of
caving in to the demands of a small yet vocal group of people?

~~~
geofft
Engineering at Google objected, and leadership at Google can't do anything
without engineering at Google.

~~~
sadris
They could fire them? They're paid for their labor, not their opinions.

~~~
geofft
Sure, Google has and always has had the option of deciding that engineers
should shut up about their opinions and just work and not ask questions. I
suspect Google management realizes that isn't going to work. They've been able
to hire and retain good people with the "bring your whole self to work,"
"don't be evil," etc. policies. If they want to try "We're no different from
Oracle, but with brighter colors and maybe better pay," they shouldn't expect
to continue being more successful than Oracle. They can fire their best people
and hire some mediocre replacements.

------
mc32
So what now? No ethics board at all and it just proceeds without ethics
oversight?

Isn't that state worse than having someone you disagree with on some
principles, but who could in theory provide constructive criticism where the
politics don't intersect?

I mean, making some jobs obsolete or making decisions on recidivism etc. are,
I think, distinct from what a panelist thinks about an unrelated issue.

Moreover, this person didn't represent a majority view. Presumably the other
panelists would have had good arguments against her controversial views.

~~~
effics
Oh, I think the problem might just be a little bit more complicated than that.

In a sense ( _and this is a gross exaggeration, but just to frame the
concept_ ), an ethics panel formed by Charles Manson, Ted Bundy, John Wayne
Gacy, Jeffrey Dahmer and David Berkowitz would not be an improvement over
nothing at all. It would be a step backwards, and the world would be worse
for tolerating it.

This is not to say that any of the people involved in this particular episode
are abominable monsters, far from it, but to drive home the point: whether
you pick the right people or the best people really matters, and it does make
a difference.

In some respects, I'm not sure I'd want individuals with a vested interest
and enthusiasm for AI to play watchdog over appropriate, responsible
behavior.

In a way, an ethics board is something of a no-fun zone. It would likely make
more sense to invite members from areas that run counter to industry wonks,
since AI experts might prove tone-deaf to self-policing concerns. Does that
make sense?

We don't want to stifle the best parts of progress, but an ethics board
shouldn't be made of people inclined to rubber stamp Skynet, because they'd
tend toward seeing AI as progress by default.

~~~
creaghpatr
A comparison worthy of a throwaway account, right there

------
Reelin
> “It’s become clear that in the current environment, ATEAC can’t function as
> we wanted. So we’re ending the council and going back to the drawing board,”

The current environment is so overly toxic that I doubt it will be possible
for any process that might produce reasonably useful results to be run in an
open manner. This is particularly troubling given the number of very plausible
potential problems posed by AI in the near-ish future. Unfortunately it also
seems unlikely to me that any government regulation that emerges will be at
all competent.

~~~
raxxorrax
I think the ethics question is already behind us, because most people like
to think about artificial general intelligence. The little routines that are
separate, but together begin to quantify every aspect of our lives? Welcomed
with open arms... Privacy rights could help a lot...

------
throwawaysea
I don't get why Google caves to pressure from internal activism. Only 2000 of
98000 employees signed the letter. And they are themselves a very skewed
cohort, unrepresentative of the general population of customers Google has.

~~~
swang
eh you know that doesn't mean 96000 opposed this though? do you yourself
actually care other than the fact that a specific group of people complained?

~~~
throwawaysea
Sure, but it doesn't mean 96000 are in agreement either. So why wouldn't
leadership just trust their own judgment and stay their course? That seems not
just spineless, but also irresponsible since it means they are willing to
discard all prior careful deliberation and research at the drop of a hat.

> do you yourself actually care other than the fact that a specific group of
> people complained?

Yes, I want a diverse set of views on such a council, representing multiple
areas of the political spectrum - that is, left, moderate, and conservative
views rather than just the far-left. There are also a wide range of opinions
out there on topics like the use of AI for military purposes or on the modern
transgender movement (which is the subject of controversy relating to this
council). Only 8% of America are progressives after all
([https://www.theatlantic.com/ideas/archive/2018/10/large-majorities-dislike-political-correctness/572581/](https://www.theatlantic.com/ideas/archive/2018/10/large-majorities-dislike-political-correctness/572581/)). And all this is still without
consideration of the worldviews that exist around the globe, which is
important given Google has customers globally.

~~~
swang
what is kay cole james' expertise on ai ethics that she should be on the
council?

is there no other person in computer science, mathematics, or privacy
research, or some professor, who is a conservative without this baggage of
saying shitty things? and like one of the staff at vox said, why bring in
someone who is entrenched in the political culture war in the first place?

you want a set of diverse views. assuming she wasn't actually anti-trans,
anti-lgbtq, do you think this woman knows enough about ai to represent the
views of conservatives?

let's say this ethics council actually had power and could dictate what
google or alphabet as a company could do in regards to ai. do you trust her
to understand the topic at hand and then also address conservative concerns?
let's say they make a ruling and she was too ignorant of the issue at hand
(maybe those sneaky leftists used language that hid their leftist agenda).
what is the likelihood of outrage from the "far-right" that the council was
set up to be some pro-left google cabal because they brought someone with
zero knowledge of the situation to represent the conservative side?

~~~
jmcgough
They probably didn't bring her on for her expertise, they brought her on
because she's a powerful conservative. Google wants politically powerful
friends on both sides of the aisle in light of how much criticism there's been
of them lately.

~~~
mike00632
Piling on against a minority of people who can't really fight back is not
being "powerful." It's cowardly and weak. It's also _unethical_.

~~~
jmcgough
I didn't say that what they were doing was a good thing. The AI ethics council
wasn't intended to do much but provide some nice PR and political connections.

------
AndrewKemendo
I'm on an IEEE AI ethics standards working group, and I can tell you that
these kinds of boards, without deep expertise in technology and
logic/philosophy, don't end up producing anything of much value.

Human ethics is hard, messy, and forever changing. Codifying human ethics
into systems that can be tested and implemented is what politics and economic
philosophy are all about. It's ungodly hard and not something that can be
taken lightly.

------
RcouF1uZ4gsC
The Heritage Foundation is very much in the mainstream of conservatism in the
United States; they are not a fringe right-wing organization. This very
public protest over the appointment of the Heritage Foundation's president to
the AI ethics panel, and Google's subsequent dissolution of it, deepens
conservatives' mistrust that Google and Silicon Valley in general are hostile
to them, and the feeling that Silicon Valley does not even want to give them
a voice in the debate.

Given that conservatives make up between 35% and 40% of American adults, and
that the way Senators and Electoral College votes are allocated gives them
more political power than those numbers alone would suggest, I fear that this
alienation by Silicon Valley will prove detrimental in the long term.

[https://news.gallup.com/poll/225074/conservative-lead-ideology-down-single-digits.aspx](https://news.gallup.com/poll/225074/conservative-lead-ideology-down-single-digits.aspx)

In the near term, I think this will cause more conservatives to embrace the
need for a government AI ethics oversight panel as opposed to a Silicon Valley
selected one.

~~~
kevingadd
There's an argument to be made on the political basis, but unfortunately the
choice of conservative was poor given her complete lack of qualifications and
her unnecessarily toxic history of public speech. It's really worthwhile to
vet your selections in advance to ensure there are no militant anarchists,
antisemites, homophobes, or anything else in their historical tweets or
public speeches; regardless of their expertise on the subject (in this case,
also none), such a history prevents them from doing their job without
interference.

There are numerous conservatives you could find to do the job who wouldn't
have set off this controversy.

~~~
jcims
Maybe they thought that any conservative they picked would have a similar
history, but that her diversity points would buy her a pass.

------
stareatgoats
We need an independent AI ethics oversight panel (not just for Google),
period. But with everything ethical so tainted with insanely polarized
politics it is unfortunately impossible to envisage such a board operating
with the required general legitimacy. Which doesn't bode well for human
control over technology in the future.

------
compiler-guy
Not only could the panelists themselves not work together constructively;
Googlers were so enraged by the inclusion of certain people that any
recommendation with those people's sign-off would have been DOA.

~~~
andrenth
This is the cause of Googler rage:
[https://www.frontpagemag.com/fpm/273360/how-african-american-grandmother-enraged-1000-daniel-greenfield](https://www.frontpagemag.com/fpm/273360/how-african-american-grandmother-enraged-1000-daniel-greenfield)

~~~
skj
I tried to read it, to see what she did that was considered objectionable, but
I couldn't get past the trite putdowns the article slipped in all over the
place about lefties and millennials. I was three pages deep before I gave up.

~~~
SamReidHughes
I found a tweet where she opposed a law that would force women’s sports
leagues to let men play, require people to let men use women’s bathrooms, and
force doctors to sterilize teenagers.

~~~
skj
Any possibility you're going to link to any of that sauce so we can decode it?

~~~
SamReidHughes
Looks like it's already in this thread.

------
hollasch
We should establish a committee to establish the Ultimate Human Purity Test.
Use this to select an ecclesiastic convention of the resulting saints to
deliver papal encyclicals on the topic. This might be what it takes to remove
from sinners the freedom to speak.

------
kenneth
@dang — came through the Reuters article. I prefer it to the new Vox article,
since it is factual and objective instead of editorialized and biased.

Vox's article implies Google was in the wrong. Reuters makes no such
implication.

I have no opinion on the topic at hand, but I am saddened that journalism
these days falls far too often into the trap of lynch-mob coverage. I
would've preferred to see the objective and factual article win over the
opinion piece.

------
helen___keller
In the information age, people perceive authority and trustworthiness
differently. This can be both a good thing and a bad thing.

What we really need is a solution to the problem of `building diverse expert
opinion` that doesn't rely on a panel of flawed humans, because in the 21st
century it's obvious the public will refuse to trust said expert panel when
it conflicts with their worldview in an entirely unrelated way (e.g. a
member's view on climate change).

I don't think there's a point in fighting against the public outcry on this,
as many commenters here seek to do. People don't trust authority the same way
in the information age, and the world needs to adapt to that.

------
antpls
I admire what Google is trying to do, but this looks more like an AI lobby to
defend Google's interests in the lawmaking process than an altruistic move.

I guess it's good enough if it sparks discussion and a debate about ethics
more generally.

------
mensetmanusman
They should just work with local medical ethics boards at Stanford.

As a thought experiment, AI interactions with humans could be considered a
class of medical experimentation. That would have interesting output, and the
system already exists...

------
knolax
a.

------
zuse
"[T]he inclusion of a drone company executive had raised debate over use of
Google’s AI for military applications."

Lovely.

~~~
Reelin
I share your distaste for militarized AI (unless I mistake you?), but if AI
_are_ to be used in such applications then competent representation and
examination of such use cases by a public ethics body would seem to me to be a
good thing.

~~~
swang
yes, if you dislike militarized AI you should discuss it.

the woman in question who runs the drone company is ex-military, so i would
think she would be in favor of militarizing drones (albeit, to be fair, her
drone company currently does not do that) and thus probably shouldn't be on
the board in the first place

~~~
Reelin
What? No. The fact that she's ex-military and runs a drone company is
precisely why she _should_ be on such a board.

If there is no one on the board with the relevant knowledge and experience to
competently represent a given use case, then the board will likely be unable
to produce results relevant to such use cases. For example, if I form a board
to hash out software version control system best practices but actively
exclude experts on distributed VCS such as git and mercurial, then the
resulting "best practices" are unlikely to prove useful for anyone actually
using a DVCS in reality.

My point here is that excluding her almost certainly won't actually do
anything to prevent the development of militarized AI. Rather, it will simply
reduce the likelihood that anything the board puts out has influence on such
matters.

------
tanilama
AI ethics is a non-issue and a self-inflicted hype machine anyway.

------
dangerface
If you want people to believe something unreasonable, stop trying to reason
with them; this is the result of deplatforming.

Resorting to ostracisation is proof that your point of view is unreasonable,
which then makes their point of view seem more reasonable, as there is no
counter.

This society seems obsessed with the idea of ostracisation and deplatforming;
it's ultimately a breeding ground for fascism and bigotry.

