
Google offers to help others with the tricky ethics of AI - elsewhen
https://arstechnica.com/tech-policy/2020/08/google-offers-to-help-others-with-the-tricky-ethics-of-ai/
======
jerrysievert
> _Google’s new offerings will test whether a lucrative but increasingly
> distrusted industry can boost its business by offering ethical pointers_

Isn’t Google a big part of why there is so much distrust to begin with? Seems
like asking your oxy dealer for detox advice.

~~~
RobLach
Opiate dealers often stock Narcan.

~~~
aabhay
But only the worst dealers would sell it; most would just keep it or give it
to you. It’s the quintessential olive branch — “Keep buying drugs from me, I’m
the nice one”

------
theptip
A lot of knee-jerk Google-hating in this thread, which I think is unfounded in
this specific context. Google’s AI safety and AI bias toolchain is by far the
most robust that I’ve seen. Seems to me that they are investing much more
heavily than other players.

Note that this is a different issue than data privacy, which Google rightly
takes flak on. AI bias refers to questions like “if I train a network to
(sentence criminals, price insurance, ...), how do I detect if the resulting
predictions are racially biased?”
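
For instance, a minimal version of that check (sketched here with made-up
predictions, not Google's actual tooling) is to compare an error rate such as
the true-positive rate across groups:

```python
import numpy as np

# Minimal bias check (illustrative only): compare a model's true-positive
# rate across two demographic groups. A nonzero gap flags a disparity
# worth investigating.
def tpr(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return y_pred[y_true == 1].mean()

# Hypothetical labels/predictions for two groups (1 = favorable outcome).
a_true, a_pred = [1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 0]
b_true, b_pred = [1, 0, 1, 1, 0, 1], [1, 0, 0, 0, 0, 1]

gap = tpr(a_true, a_pred) - tpr(b_true, b_pred)
print(f"true-positive-rate gap between groups: {gap:.2f}")
```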

The conversation on this subject by lay folks is rife with statistical
ignorance, and Google has done good work communicating and clarifying the
conversational starting points. This is a hard issue because it takes ethical
trade-offs and forces you to specify mathematically exactly how you want to
handle inequality, which is a subject that most people haven’t thought through
rigorously, and would rather hand-wave away with virtuous sound bites.

For example, see [http://research.google.com/bigpicture/attacking-discrimination-in-ml/](http://research.google.com/bigpicture/attacking-discrimination-in-ml/)

~~~
_Nat_
That example ([https://research.google.com/bigpicture/attacking-discrimination-in-ml/](https://research.google.com/bigpicture/attacking-discrimination-in-ml/)) isn't particularly compelling.

As their example shows, there're two traditional approaches: profit-maximization
and group-unaware.

* Maximizing profits gets the most profit, but treats people differently based on their group.

* Group-unaware treats everyone the same regardless of their group, but can generate far less profit.

The example presents two alternatives: "_demographic parity_" and "_equal
opportunity_". Presumably the authors would argue that these may be superior
choices because they generate nearly as much profit as profit-maximization
while having a plausible argument for being socially responsible.

This seems a bit off.

Fundamentally, we might say that there're 2 kinds of discrimination: fair and
unfair. For example, it might be fair to discriminate for objective reasons,
but it'd be unfair to discriminate for non-objective reasons (e.g., bigotry).

The advantage of profit-maximization is that it takes full advantage of fair-
discrimination while fully avoiding unfair-discrimination; the drawback is
that it does discriminate.

The advantage of group-unaware is that it fully avoids all discrimination; the
drawback is that it sacrifices fair-discrimination, causing it to yield the
lowest profits.

The two alternatives proposed in that example seem to get the best of both
worlds because they're basically just cloning profit-maximization, but with
slight concessions to plausible-sounding criteria for equality to dodge
perceptions of unfair-discrimination.

Here're the tricks:

* In " _demographic parity_ ", everyone has the same odds regardless of group. This would appear to be the same thing as profit-maximization if the risk/rewards were the same, but since it ignores them, it ends up being basically " _profit maximization, but ignoring different risk /rewards_".

* In " _equal opportunity_ ", both groups get the same true-positive rate. This again seems to sacrifice some of the risk-vs.-reward information, but with a slightly different skew.

So for the privileged group (Orange) vs. the disadvantaged group (Blue):

* Profit maximization: $32,400, with thresholds of 50 vs. 61

* Group-unaware: $25,600, with thresholds of 55 vs. 55

* Demographic parity: $30,800, with thresholds of 52 vs. 60

* Equal opportunity: $30,400, with thresholds of 53 vs. 59

To me, that looks like 3 ways to discriminate, all yielding roughly the same
profit and thresholds -- using either of the 2 proposed alternatives gives up
a little bit of the profit in exchange for a pleasant-sounding rationale.
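
To make the comparison concrete, here's a minimal brute-force sketch of all
four strategies. The score distributions, payoffs, and group sizes are
invented for illustration; they are not the figures from Google's demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: credit scores (0-100) and repayment outcomes for two
# groups. These numbers are for illustration only.
def make_group(n, mean_good, mean_bad, frac_good):
    n_good = int(n * frac_good)
    good = rng.normal(mean_good, 10, n_good)    # scores of repayers
    bad = rng.normal(mean_bad, 10, n - n_good)  # scores of defaulters
    scores = np.clip(np.concatenate([good, bad]), 0, 100)
    repaid = np.concatenate([np.ones(n_good), np.zeros(n - n_good)])
    return scores, repaid

orange = make_group(1000, 65, 45, 0.6)  # "privileged" group
blue = make_group(1000, 55, 40, 0.5)    # "disadvantaged" group

GAIN, LOSS = 300, -700  # profit per repaid loan, loss per default

def profit(group, t):
    scores, repaid = group
    approved = scores >= t
    return (GAIN * np.sum(approved & (repaid == 1)) +
            LOSS * np.sum(approved & (repaid == 0)))

def approval_rate(group, t):
    return np.mean(group[0] >= t)

def true_positive_rate(group, t):
    scores, repaid = group
    return np.mean(scores[repaid == 1] >= t)

ts = range(101)

# Max profit: pick each group's threshold independently.
t_o = max(ts, key=lambda t: profit(orange, t))
t_b = max(ts, key=lambda t: profit(blue, t))

# Group-unaware: one shared threshold for everyone.
t_s = max(ts, key=lambda t: profit(orange, t) + profit(blue, t))

# Demographic parity / equal opportunity: among threshold pairs whose
# approval rates (or TPRs) roughly match, keep the most profitable pair.
def constrained(metric, tol=0.02):
    pairs = [(to, tb) for to in ts for tb in ts
             if abs(metric(orange, to) - metric(blue, tb)) < tol]
    return max(pairs, key=lambda p: profit(orange, p[0]) + profit(blue, p[1]))

dp = constrained(approval_rate)
eo = constrained(true_positive_rate)

for name, (to, tb) in [("max profit", (t_o, t_b)),
                       ("group-unaware", (t_s, t_s)),
                       ("demographic parity", dp),
                       ("equal opportunity", eo)]:
    print(f"{name:20s} thresholds {to:3d}/{tb:3d} "
          f"profit ${profit(orange, to) + profit(blue, tb):,}")
```

With invented data the exact numbers differ from the demo's, but the shape of
the comparison is the same: four threshold pairs, four profits.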

What I dislike about this is that it seems entirely superficial. The proposed
alternatives engage in roughly the same level of fair-discrimination (and none
of them engage in unfair-discrimination, which wasn't given in the example at
all) to generate roughly the same level of profit.

---

> The conversation on this subject by lay folks is rife with statistical
> ignorance, and Google has done good work communicating and clarifying the
> conversational starting points. This is a hard issue because it takes
> ethical trade-offs and forces you to specify mathematically exactly how you
> want to handle inequality, which is a subject that most people haven’t
> thought through rigorously, and would rather hand-wave away with virtuous
> sound bites.

That's exactly what this looks like!

In this case, the virtuous sound bites are "_Demographic Parity_" and
"_Equal Opportunity_". They both worked out to be mostly the same as simple
profit-maximization, but if someone in a disadvantaged group protests that
they're being discriminated against, they'd probably find it difficult to
follow the math far enough to sustain their complaint.

~~~
theptip
I appreciate the detailed object-level analysis of that case, but my point was
aimed more at the meta-level — in order to have a conversation about AI
bias, we need language and examples to start from. Google is building these
fairness metrics into parts of its cloud ML toolchain, and is investing in
peer-reviewed research in this area. That’s more than most other companies are
doing, and it should be applauded.

At the object level, I think you perhaps oversimplify with “Fundamentally, we
might say that there're 2 kinds of discrimination: fair and unfair.” What we
think is fair is the crux of the whole issue, and there is no agreement on
that distinction. For example, if we price risk at the “true” risk of default,
and this means that fewer black people get loans, is that fair? Some think it
is, and some think it is not, even though I hope most would agree there is no
racist intent in the decision itself. The economic context in which these
decisions are made already contains the impact of past racism, which can be
perpetuated or redressed by decisions made now.

I also disagree with your characterization of these types of parity as sound
bites. My definition of a sound bite is a conceptually thin phrase that sounds
good, but does not contain much content. These are the opposite; they are
specific jargon with a precise meaning that is explained in detail in the
article. You could call them “Foo” and “Bar” and their usefulness in the
conversation would be the same. (Sure, “equal opportunity” has been deployed
as a sound bite elsewhere without any rigorous definition, but I hope you’d
agree that this article at least fleshes out one possible concrete definition,
even if you don’t think it is the right one. I’m sure there are others - and
now we can further the field by writing them down, citing this piece, and
making an objective critique!)

~~~
_Nat_
> My definition of a sound bite is a conceptually thin phrase that sounds
> good, but does not contain much content.

Yeah, same. And to my eye, this whole thing's paper-thin; it strained my
suspension of disbelief not to dismiss it as an obvious joke or a bad attempt
at a PR stunt, and I'm honestly still undecided.

Part of the issue is that its conclusions are absurd. We can invent ethical
constraints to arrive at max-profit by pulling things out of thin air; the
procedure's analogous to _p_-hacking. For funsies:

1. Come up with a huge portfolio of ethical-sounding constraints. Include
basically any ethical-sounding argument imaginable, even if it seems stupid or
bad for business (the numbers can be hacked; we just need the pretty words).

2. Come up with model-transform strategies. For example, instead of
calculating the threshold, calculate the log-threshold or the threshold-
entropy. (Again, don't think logically -- we're hacking the math, so just come
up with random things.)

3. Come up with model-hybridization strategies. For example, construct a
meta-model that uses 50% of the threshold + 25% of the inverse-log threshold +
20% of the inverse-entropy threshold + a flat 5% (because, hey, why not?).

4. Whenever we want to maximize profit, just do so with simple profit-
maximization. No "_ethical_" constraints.

5. Fit millions (or more) of different hybridized models, combining all sorts
of model-transforms and ethical-constraints at random, and optimize each to
approach the simple profit-maximization from Step (4).

6. Stop upon finding an "_ethical_" solution that effectively equals simple
profit-maximization.

7. Generate a description of the "_ethical_" solution, and write a report
on how your company's using that particular ethical strategy as part of its
latest, ongoing campaign to promote social equity.

8. Just do simple profit-maximization. Or the "_ethical_" solution;
whatever, they're literally the same thing. But if anyone asks, be sure to
tell them it was for ethics.
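
The loop those eight steps describe is trivial to mechanize. Here's a toy
sketch; the constraint pool, the transforms, and the baseline threshold are
all invented, and "optimization" is just keeping whichever random hybrid lands
closest to the unconstrained baseline:

```python
import random

random.seed(42)

# Invented pool of ethical-sounding threshold "transforms" (Steps 1-2).
CONSTRAINTS = [
    ("demographic parity", lambda t: t),
    ("log-threshold equity", lambda t: 0.98 * t + 1.0),
    ("inverse-entropy fairness", lambda t: 1.03 * t - 1.5),
    ("flat 5% social dividend", lambda t: 1.05 * t),
]

BASELINE = 50.0  # Step 4: the plain profit-maximizing threshold.

def random_hybrid():
    """Step 3: mix a few random constraints with random weights."""
    parts = random.sample(CONSTRAINTS, k=random.randint(2, 3))
    weights = [random.random() for _ in parts]
    total = sum(weights)
    name = " + ".join(f"{w / total:.0%} {n}" for w, (n, _) in zip(weights, parts))
    threshold = sum(w / total * f(BASELINE) for w, (_, f) in zip(weights, parts))
    return name, threshold

# Steps 5-6: search until a hybrid is indistinguishable from the baseline.
name, t = min((random_hybrid() for _ in range(100_000)),
              key=lambda h: abs(h[1] - BASELINE))

# Step 7: write the press release.
print(f'Our "{name}" framework yields threshold {t:.2f} '
      f"(plain profit-maximization: {BASELINE}).")
```

Which is the point: if the "ethical" solution is selected for matching
profit-maximization, the ethics label is doing no work.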

While we're being ethical, we could even shift some paradigms to envision
new modes of engagement with socioeconomically disadvantaged stakeholders to
promote a synergistic realignment between our commitment to ethical outreach
and disrupting the zeitgeist with our social media blitz to rapidly accumulate
surprisingly liquid social capital, indelibly scrawled across the human
blockchain of love and mutual respect, validated in the proof-of-work of
billions of sapient hearts yearning to emerge from the trials-and-tribulations
of our modern era into the welcoming bosom of a new tomorrow.

But who cares about pointlessly defining jargon for logical inconsistencies?
It's hollow and meaningless; at most, it might fool people before they realize
that it's logically inconsistent, but that'd just be lying. It doesn't seem to
have an actual use.

------
fencepost
Gah. I'm experimenting with an iPhone as a second phone to dip my toes into
the iOS ecosystem because, despite their other flaws, they've at least learned
the lesson "don't be f__ckn' _creepy_", which Google has not.

Being able to count on probably 5 years of OS and security updates instead of
the mishmash of abandoned handsets or potential security nightmares with
hobbyist firmwares is another nice feature.

------
ooobit2
I made it two paragraphs before I remembered that the preface of Google's code
of conduct was stripped of the line "Don't be evil." It's now the last line,
which to me reads as _doing the ethical thing is an afterthought_. The much
more ambiguous "Do the right thing" is in Alphabet's corporate code.

We need the ethics of AI to be an offshoot of open source. It's tough to
suggest that in an environment where enough people can pressure a group to
adopt objectively unethical policies, but at the very least open source
communities do not have the trigger of financial risk.

Google is going to need to make money from this "Ethics as a Service"
eventually. And that is an obvious, all too obvious conflict of interest for
what Google envisions.

Maybe they _can_ do this. I'm not 100% objecting to the idea. But I'd like to
suggest they start with YouTube. Ads on the platform have quadrupled since
November 2019. Creators are playing a lottery with monetization every other
upload, and they're making as little as 1-5% compared to YouTube's ad-share
program 10 years ago. Start there.

~~~
twitch-chat
As far as the ethics of AI goes, would the "don't be evil" slogan be better
than "do the right thing"?

I'm sure people may interpret things differently, but "doing the right thing"
would mean Google making AI that has a positive net impact on society, while
"don't be evil" could simply mean good for Google but neutral to everyone
else.

~~~
galuggus
It's easier to know what is wrong than what is right.

------
rbecker
Reminds me of
[https://en.wikipedia.org/wiki/Comics_Code_Authority](https://en.wikipedia.org/wiki/Comics_Code_Authority)
- a sneaky way for a private entity to act as de-facto arbiter and censor.

------
throwaway316943
Please don’t

------
spodek
Google helping on ethics: the blind leading the blind!

Some fun history: I thought the phrase originated with the Christian bible:

> _" when the blind lead the blind, both shall fall into the ditch"_

but Wikipedia
[https://en.wikipedia.org/wiki/The_blind_leading_the_blind](https://en.wikipedia.org/wiki/The_blind_leading_the_blind)
traces it to the Upanishads:

> _" Abiding in the midst of ignorance, thinking themselves wise and learned,
> fools go aimlessly hither and thither, like blind led by the blind."_

------
_Nat_
> Longer term, the company may offer to audit customers’ AI systems for
> ethical integrity, and charge for ethics advice.

If Google'll act like any other company in a free market, then this'd seem
like a neat offering. Maybe it'll be a helpful service, or maybe it'll be a
flop; but, worst-case scenario, just don't use their service if it's no good.

But the language of "_auditing_" and "_charg[ing] for ethics advice_"
sounds almost like someone's laying the groundwork for regulatory authority
over AI.

------
_alex_
Ah yes, that paragon of ethical data use, Google.

~~~
revel
Next up: "Media accuracy and journalism standards" by Facebook

------
adamlangsner
Google offers help with ethics. Love this headline! So funny!

------
SMAAART
Step 1. Do no evil

Step 2. JK

------
seneca
Google is in such a strange state. I deal with Googlers several times a week,
and their defining characteristic is hubris. It seems like their arrogance
blinds them to the fact that their credibility has been in a free-fall for
years. Publishing articles like this is just laughably tone deaf.

