
A new course to teach people about fairness in machine learning - grey-area
https://www.blog.google/technology/ai/new-course-teach-people-about-fairness-machine-learning/amp/
======
humanrebar
I'll point out that broad categories of fairness have nothing to do with AI/ML
and everything to do with institutions ensuring fair outcomes.

For instance, we would be appalled if a court found that the conviction rate
for people like the defendant was extremely high, so the trial was going to
start with a presumption of 80% guilt.

What's the problem? It's not one fixed by better modelling or a better bias
percentage. The problem is that fairness in many contexts is about processes
and institutions. We expect courts to follow principles like due process and
equal protection under the law.

Companies like Google can't jump right to social science and social
engineering without thinking about fairness and justice in _these_ terms.

Can a user confirm that the data about them is accurate? If not, what can they
do about it?

If a business has its account algorithmically cancelled, what is the appeals
process? How is it fair?

When companies like Google start getting into social science, the problem gets
extremely complicated fast. For example, it starts looking like a judicial
system in parts, but one that isn't established and maintained by a legitimate
government.

~~~
k__
Can't they just remove unfair data from the training set?

~~~
atlantic
Which raises the question: how do you identify unfair data? If the data is not
false or inaccurate, you shouldn't tamper with it.

On the other hand, if you already know at the outset the result that you would
like your algorithm to produce, then why bother with machine learning at all?
Just hard-code your output.

------
bicubic
So what's happening lately is that tech giants have gotten into the business
of dictating how the world _should be_. No one was consulted about it, it was
not put to a vote, it simply began, and over the last few years it's
increasing in amplitude.

The problem with them using words like 'fairness' is that anyone who disagrees
with what they're doing is immediately put on the defensive and labelled as
'unfair'. This is far from the truth, though: what society today is actually
doing is trying to figure out exactly what 'fair' should mean. Google et al.
just took it upon themselves to start pushing their particular brand of
fairness onto society and acting as if it's the most natural, widely accepted
common sense.

Google's idea of fairness is to build ML solutions that don't accurately
predict the world, but rather inject Google's biases into the predictions in
order to bias some real-world process, and ultimately change the distribution
of some factors in the real world according to Google's image of how things
_should be_. Not enough <demographic> represented in the workforce? Bias the
candidate-selection model to equalize it. Some <demographic> is over-
represented in crime statistics? Bias the jail-sentencing model to be more lenient
on them and hope it equalizes their representation in stats over time.

This is not inherently a bad mechanism, but it's an extremely powerful
mechanism with far-reaching consequences, and it is currently controlled by a
small number of people at the top of the US tech industry, with no checks or
oversight. I worry that there's a strong possibility that the long-term
effects of messing around with these levers have not been considered. We have
done serious unintentional damage with far smaller levers, like the Australian
rabbit plague.

I don't know what 'fairness' should actually mean in modern society, but we
definitely need to be aware that the US tech industry is hijacking the term to
fit its agenda, and needs to be checked. And the exact nature of the biases
they're injecting should definitely be made transparent and put to public
debate.

~~~
sytelus
Your viewpoint is not correct. Many classes of decisions we make need to be
independent of race, color, religion and gender. This is not Google’s idea;
it’s the US Constitution plus laws in many states. As ML is increasingly used
for making decisions such as whom to give loans to or whom to hire, many
models may end up judging a person based on exactly these attributes,
violating those laws.

~~~
bicubic
Well, yes. That's exactly why the debate around 'fairness' has begun.

ML will easily infer gender/color/religion from any other features about an
individual. It should not be too surprising that those features play a big
role in identity. Consider the recent news about Amazon throwing out a model
which kept learning to discriminate on gender no matter how much they tried to
blacklist gender-related features. They ended up with a model that could
determine the gender of a job applicant just from subtle language cues in how
they write their CV - usage of words like 'captured' and 'executed' which were
used more frequently by one gender.

ML is worthless if blinded to anything that might give away a sensitive
feature about an individual because that's virtually everything. And the
current generation of ML is making it really clear that the world as a
whole discriminates on these features all the time, whether directly or
indirectly.
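
A crude way to see how easily the "blinded" features leak a protected attribute
is to probe for it directly. This is just my own sketch, not anything from the
course; the file name and column names are hypothetical, and it assumes the
remaining columns are already numeric:

    # Probe: how recoverable is gender from features that supposedly exclude it?
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("applications.csv")          # hypothetical dataset
    X = df.drop(columns=["gender", "hired"])      # the "blinded" features
    y = (df["gender"] == "female").astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
    print(f"AUC for recovering gender from 'blinded' features: {auc:.2f}")
    # An AUC well above 0.5 means the remaining features act as a proxy for
    # gender, so a downstream model can still discriminate on it indirectly.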

These are the facts: the world is not evenly distributed. The question for
society now is where do we go from here, and what exactly should 'fair' mean.
Do we introduce reverse bias into processes to equalize the output
distribution? Do we change the criteria considered by various processes to try
to influence the output distribution? Do we look at the underlying social
causes of the non-uniform distribution and try to address those? Do we just
leave things as they are? Do we legislate to get ML out of critical social
processes? Should we maybe first address much more obvious cases of unfairness
that don't even require ML to identify, like massive and growing wealth
disparity? Google's idea of fairness is only one of these choices, and it does
not represent a public consensus on the subject - the public was never
consulted. And yet the public is increasingly being impacted by this issue.

[https://www.reuters.com/article/us-amazon-com-jobs-
automatio...](https://www.reuters.com/article/us-amazon-com-jobs-automation-
insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-
women-idUSKCN1MK08G)

[https://www.cnbc.com/2018/03/02/google-accused-in-lawsuit-
of...](https://www.cnbc.com/2018/03/02/google-accused-in-lawsuit-of-excluding-
white-and-asian-men-in-hiring-to-boost-diversity.html)

~~~
KirinDave
> ML will easily infer gender/color/religion from any other features about an
> individual. It should not be too surprising that those features play a big
> role in identity. Consider the recent news about Amazon throwing out a model
> which kept learning to discriminate on gender no matter how much they tried
> to blacklist gender-related features. They ended up with a model that could
> determine the gender of a job applicant just from subtle language cues in
> how they write their CV.

Ah yes, subtle cues like the words "women's" and "girl's".

> ML is worthless if blinded to anything that might give away a sensitive
> feature about an individual because that's virtually everything. And the
> current generation of ML is making it really clear that the world as a
> whole discriminates on these features all the time, whether directly or
> indirectly.

It's not a problem that the network could identify that fact. It's a problem
that the training data for Amazon contained this, because it's pretty clear
there is an illegal and unfair bias in Amazon hiring. The data, from the
stories we have, didn't include anything but successful hire data. No credible
notion of "success" for post-hires has been indicated here.

> Google's idea of fairness is only one of these choices, and it does not
> represent a public consensus on the subject - the public was never
> consulted. And yet the public is increasingly being impacted by this issue.

Have you looked at the courseware? The entire courseware is about how these
systems optimize to fit data, not to fit truth. They explore numerous ways in
which that subtlety creates surprising or undesired outcomes.

It's very difficult to imagine a world where more awareness of the limitations
of mathematical models will lead to an undesirable outcome. Which one of these
core tenets from the syllabus are YOU opposed to?

* Engage with a diverse set of users and use-case scenarios, and incorporate feedback before and throughout project development. This will build a rich variety of user perspectives into the project and increase the number of people who benefit from the technology.

* Model potential adverse feedback early in the design process, followed by specific live testing and iteration for a small fraction of traffic before full deployment.

* Consider augmentation and assistance: producing a single answer can be appropriate where there is a high probability that the answer satisfies a diversity of users and use cases. In other cases, it may be optimal for your system to suggest a few options to the user. Technically, it is much more difficult to achieve good precision at one answer (P@1) versus precision at a few answers (e.g., P@3).

* Ensure that your metrics are appropriate for the context and goals of your system, e.g., a fire alarm system should have high recall, even if that means the occasional false alarm.

Most of these are just general best practice. It's a course designed to
supplement the depth and breadth of education in a world increasingly staffed
by ML practitioners who learned the entirety of their courseware from 2 free
Coursera courses and a bunch of Siraj Raval videos.

You're talking about "fairness" as if this course makes ML-based policy
decisions or even addresses them substantially. This is a category error.

------
jf-
The video lecture calls for validation of ML results with regard to
“appropriate social behaviour”. Is this actually a call to hide results that
are politically inconvenient? A concrete example would help here.

~~~
NewEntryHN
A bias is simply a non-uniform distribution, across the values of a feature, of
whatever you're trying to predict. Since the data used by Google in training AI
comes from users, it's likely to contain all sorts of human and social biases,
adding noise to what they're trying to predict.
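
As a toy illustration of that definition (the numbers and column names here are
made up, not anything from Google's data):

    # Check for a non-uniform outcome distribution across the values of one feature.
    import pandas as pd

    df = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    # If these per-group rates differ a lot, the label is "biased" with respect to
    # this feature, whether that skew reflects reality or how the data was collected.
    print(df.groupby("group")["approved"].mean())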

~~~
jf-
Indeed, I know what a bias is and what the common cognitive biases are, though
they always bear repeating. I’m referring to a specific statement at the end
of the video lecture, where the viewer is encouraged to validate ML results
against “socially appropriate behaviour”. This is somewhat ambiguous, and can
be interpreted as meaning that the viewer should suppress any result that may
appear taboo. I’m wondering if this is the intention behind the statement.

If so, this is not to be encouraged. It would be the equivalent of the
sciences rejecting results when they don’t conform to current theory. Go to
extra lengths to validate the results, sure, but don’t throw them away out of
hand.

~~~
pdkl95
This isn't about "appearing taboo"; it's about remembering that data is always the
product of the environment that created it. If you are using e.g. housing
data, naively applying the data directly bakes in, for example, the decades of
redlining, blockbusting, restrictive covenants, etc. Many cities are still
segregated today, decades after redlining/etc ended.

Let's say you are writing ML related to loan applications or deciding if
someone gets parole. Do you want to "accurately" look at the historical data?
Or should the decision _also include_ the context of that data with an eye
towards making a decision that is more socially appropriate than perpetuating
the racism of the past?

~~~
jf-
I take your point about misleading historical data, but I’m not sure that was
the sentiment behind the statement. I took it more to mean “if this feels
transgressive to you, don’t publish it!”. Your explanation is a valid
observation of a phenomenon, but the “socially appropriate” part feels
shoehorned in.

------
mbrumlow
> (fairness) how to define it

Not even a paragraph in and I see problems with this. Seems somebody's bias has
already gotten in their way.

This is the first step - redefine the meanings of words to fit your agenda.

I could be wrong about the intent. But when you can redefine fairness, then you
can make it fair to take from one person and give to another, or to deny one
person and grant access to another based on things that don't matter.

Also. The attack on human bias is totally silly. It's a key trait that has
made us and other species successful. Without these biases we might pick the
wrong fruit, eat it, and die. We might try to cuddle up in a cave with bears in
the winter.

Our entire existence and our reasoning skills are based on biases.

Also. Never in my life have I seen ripe or unripe bananas and described them
by their color. Is this an education problem?

But even if I had, there is nothing wrong with pointing out when something is
atypical. Besides, if I plan on eating bananas in a few days I might want the
"green" ones. Or if I were going to make a yummy smoothie, the "brown" ones are
the ones I want.

Also bananas are a bad analogy for what they were getting at. Every banana has
the potential to be green, then yellow, then brown. So none of these states
are out of the ordinary.

I don't need or want to be unconsciously retrained by ML. I think we are going
to need laws against big corporations using their power and tech to change
social constructs. It's not their place, and we can never be sure they are
doing it for good or even that the outcome will be good for humankind. It's
also a power that can be abused. We already have internal documents coming from
Google suggesting swaying public opinion on political candidates. And this type
of research is exactly where you would start.

~~~
majos
What? This isn't about _re_-defining fairness, but defining it in the first
place. Seriously, try to come up with a rigorous technical definition of what
a fair decision process means. It's not easy. But we need such a definition
(or definitions).

It mattered less in the past because humans made all the decisions so we could
fall back on fuzzy qualitative notions. We can't do that when algorithms make
decisions.

~~~
humanrebar
It's worse than that. Often fairness is a process, not a formula or metric.

------
nobody271
Is this for anyone or is it just posturing? I mean, anyone doing ML knows that
if you train an algorithm on a biased dataset it will, surprise, learn to be
biased. It's not at all inherent or restricted to unfair treatment of any
group. It's a more general problem. By focusing on "fairness" you are
basically overfitting to one narrow case of the larger problem of not selecting
good datasets or not having good data.

~~~
nkozyra
Huh? Humans make the conscious decision to override instinctual and
observational impulses all the time in response to empathy and emotion. It is
a huge foundational factor in society. Why would we NOT want autonomous agents
to have the same capacity? There's a reason things like The Trolley Problem
come up in AI courses: our agents have to play reasonably in a human world.

~~~
sokoloff
I find the classic Trolley Problem interesting mostly because it exposes the
extremely limited capacity that humans have to act rationally. I would expect
a trivial ML/AI implementation to "correctly" solve the Trolley Problem and I
_would not_ want to consciously program our future AI systems to suffer from
the same limitations that cause humans to struggle with the Trolley Problem.

By programming them this way (avoiding programming in human frailties for the
purpose of better "fitting in"), I think we can more quickly expose and
correct certain areas where us bags of mostly salt water act suboptimally.

~~~
nkozyra
The whole point of the trolley problem is there is no correct solution. It's a
negative sum game. It's an ethical problem that has no solution. There is also
no guarantee that a machine would kill one in lieu of five. Why? Training
data.

The bigger point I'm making is that people think AI acts with some form of posthuman
reason. At this time no such thing exists: decisions are still bound to human
bias, and we should at least consider optimizing with regard to the rules of a
modern society.

~~~
sokoloff
My intentionally off-hand use of "correctly" was meant to indicate a belief
that killing one innocent is strictly better than killing five and so if a
machine is faced with choosing exactly one of those, the problem boils down to
a comparison and a conditional jump, IMO.
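
Purely as a toy illustration of that "comparison and conditional jump" framing
(nothing from a real system, and the counts are made up):

    # If the objective is strictly "fewer deaths", the classic trolley problem
    # reduces to a single comparison.
    def choose_track(deaths_if_red: int, deaths_if_blue: int) -> str:
        return "red" if deaths_if_red < deaths_if_blue else "blue"

    print(choose_track(1, 5))  # -> "red"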

Humans are constrained by knowing that, for the rest of their lives, they would
dread the knowledge that they optionally and intentionally took an overt
action which resulted in the death of an innocent person (to save five, but
they would know they killed one when they could have done nothing and had
"clean hands"). Machines don't need to suffer from this.

I would hope that if we program safety-controlling systems that we wouldn't
replace the compare and conditional-jump with "halt and catch fire".

~~~
nkozyra
No offense here, but none of that has anything to do with the dilemma. Human
response would be biased in myriad ways. Agent response would be biased in
myriad ways. There is no correct solution. The lasting impact on a person (and
lack thereof in an agent) has no bearing on the decision.

~~~
sokoloff
Really? Why is the original/classic trolley problem in any way interesting if
the future weight of the decision on the human has no bearing? (That's a
serious question.)

In the problem statement, an actor has control over whether this 1 person or
those 5 people die. If they choose red, then 1. If they choose blue, then 5.
(Failing to make a choice is choosing blue.) If after playing the scenario,
they knew someone would use a Men In Black pen on them to completely forget
what happened, is there a rational argument against choosing red?

(I realize this might read like trolling, but I genuinely don't find the
original problem a difficult one if you eliminate the human guilt aspect.)

~~~
Kaveren
Utilitarianism is very dangerous. Our current societal system isn't really
based on utilitarianism as I see it, and I don't think we want to live in a
world where a pure version of that school of thought is allowed to have any
power.

Actively choosing an action to deliberately kill someone else could be weighed
negatively by the machine; it would depend on the programmer.

I don't think you can apply the word "rational" here: if you don't take a
utilitarian position, not deliberately killing the person could be
axiomatically rational. It's not just about guilt.

~~~
sokoloff
Fair point on the misuse of "rational"; thanks.

------
sytelus
There seems to be big confusion about what bias and fairness are, and this
course doesn’t seem to help. Fairness is ensuring that people are not
deliberately punished for things they have no control over. Typically this
translates to making sure that decisions made by governments and employers do
not depend on race, color and gender. The US Constitution adds religion into
the mix, and some EU countries may also want to include sexual orientation,
disability, age or political orientation, but the minimum required by US law
in most states is those four attributes.

Every ML model can have a different bias on different features. That’s what
makes a model a model. Fairness means you should make sure that your model is
not biased on the above _specific_ attributes, and explicitly test for it.
This is not some liberal propaganda by tech but is required by law in many
instances, even where ML is not involved. This only applies to models used for
certain classes of decision making, for example loan eligibility, hiring or
FaceID login. It doesn’t apply to things like breast cancer prediction or
shopping recommendation systems.

~~~
btown
It's more than this, though. For instance, is the presence of the word
"captured" (e.g. in a dataset of performance reviews) correlated with gender?
This was certainly the case for Amazon:
[https://www.theguardian.com/technology/2018/oct/10/amazon-
hi...](https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-
gender-bias-recruiting-engine) . If so, even if you don't include gender as a
feature itself, your outputs may end up being biased (in the technical sense)
by gender. If you want to correct for this, then you might want to _include_
gender in your analysis in a structured way in order to determine: are there
better features that apply relatively equally to indicate a superstar of
either gender? Personally, I don't know what the battle-tested approaches in
the ML literature are to do this type of thing, so this course would be
helpful to people like me!

At the end of the day, it all boils down to this: you can't be "blind" to
race, color, religion, and gender if information is leaked to your evaluator
(human or otherwise) via a side channel. You might get away with this legally
(especially if there's no intent to discriminate), but if you really want to
do it right, you have to engineer systems that _minimize_ those side channels.
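
For the "captured" example, a crude first check might look like this. It's only
a sketch; the file name and column names are hypothetical, and a real audit
would need far more care:

    # Does a supposedly neutral text feature act as a side channel for gender?
    import pandas as pd

    df = pd.read_csv("reviews.csv")               # hypothetical dataset
    df["uses_captured"] = df["text"].str.contains(r"\bcaptured\b", case=False)

    # A large gap between these rates means the feature leaks gender into any
    # model trained on it, even if gender itself is excluded as an input.
    print(df.groupby("gender")["uses_captured"].mean())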

~~~
Pengy7
> If so, even if you don't include gender as a feature itself, your outputs
> may end up being biased (in the technical sense) by gender.

Part of the problem is people using the same word to mean multiple things. For
instance, "bias" has a precise mathematical definition in the context of
statistics:
[https://en.wikipedia.org/wiki/Bias_(statistics)](https://en.wikipedia.org/wiki/Bias_\(statistics\))
. And this sentence makes no sense with that definition. In fact, with linear
models it is mathematically impossible to make a "worse" model (in terms of
mean squared error) by including more variables (like gender, age, race,
etc...).

> you can't be "blind" to race, color, religion, and gender

Also I am not sure that this train of thought actually leads to where we want
to go. A perfect model isn't necessarily blind to these features; a perfect
model treats everyone as an individual.

~~~
yorwba
> In fact, with linear models it is mathematically impossible to make a
> "worse" model (in terms of mean squared error) by including more variables
> (like gender, age, race, etc...).

That's only true if you mean the mean squared error on the training data,
which is not usually a good indicator of model quality. Instead you should use
the mean squared error on test data, which gets worse if you add non-
predictive variables to the input.

If there are non-predictive variables, the linear model with the lowest
expected square error should assign exactly zero weight to them, equivalent to
the situation where those variables don't exist. But training on a finite
sample, that "exactly zero" outcome is extremely unlikely (as in, the
probability is 0) if the non-predictive variables vary at all. That variance
allows identifying individual data points, even though the relationship is
completely random and doesn't help generalize to unseen data. In other words,
the model overfits to noise.
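
A minimal sketch of that effect (my own toy example, not from the thread):
fitting ordinary least squares with and without purely random extra columns and
comparing the error on the training data against the error on fresh data.

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, n_noise = 50, 10_000, 30

    def make_data(n):
        x = rng.normal(size=(n, 1))             # one genuinely predictive feature
        noise = rng.normal(size=(n, n_noise))   # features unrelated to the target
        y = 2.0 * x[:, 0] + rng.normal(size=n)
        return x, noise, y

    x_tr, z_tr, y_tr = make_data(n_train)
    x_te, z_te, y_te = make_data(n_test)

    def fit_and_mse(train_X, test_X):
        w, *_ = np.linalg.lstsq(train_X, y_tr, rcond=None)
        return np.mean((train_X @ w - y_tr) ** 2), np.mean((test_X @ w - y_te) ** 2)

    # Training MSE can only go down when columns are added; test MSE typically goes up.
    print("1 feature,  train/test MSE:", fit_and_mse(x_tr, x_te))
    print("+30 noise,  train/test MSE:", fit_and_mse(np.hstack([x_tr, z_tr]),
                                                     np.hstack([x_te, z_te])))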

------
kyleperik
So why isn't there anything about teaching fairness in statistics?

To trot out the classic oversimplification, ML is just glorified curve fitting.
But in reality, it's not like we created a special new species with
intelligence that we have to carefully teach not to be racist. _We_ just have
to be careful not to be racist with it.

------
hashr8064
I think google should take a bit of its own advice here. Google "american
scientists", "american mathematicians" and tell me how these are not hugely
biased results. I work in NLP and there is no way you get this without forcing
your data/algorithms to return these types of results.

This isn't a normative statement, just descriptive. Whether or not google
should bias its results is a completely different discussion.

~~~
nkozyra
1) I'm trying to figure out what NLP has to do with this problem. This is a
classic collaborative filtering "problem."

2) I think Google is acutely aware that their results are driven by human
behavior and thus are biased. It's the nature of its design.

~~~
hashr8064
Can you explain how this is collaborative filtering as opposed to a classic IR
ranking problem? CF would suggest they are somehow getting user ratings of
these scientists, but either way it's going to boil down to a similarity metric
basically. So I guess for me, I can't imagine how user data is creating these
rankings and I'm pretty confident using IR techniques on the datasets they
have would not return these either, ergo, they are likely tweaking the factors
themselves to return results that are "less biased" i.e. less representative
of the underlying distribution and more normally distributed aka politically
correct.

But if you have a better theory of how 10 of the first 20 "american
scientists" are black and 5 are women, I'd be interested to hear it.

~~~
yorwba
Check Baidu:
[http://www.baidu.com/s?ie=utf-8&wd=%22american+scientists%22](http://www.baidu.com/s?ie=utf-8&wd=%22american+scientists%22)

Result #6 is the list of African-American inventors and scientists on
Wikipedia. Unless Baidu has the same ideological biases as Google (would be
strange), the most likely explanation is that it's driven by n-gram
frequencies.
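
A crude way to see the frequency effect yourself (my own sketch; corpus.txt and
whatever counts come out of it are hypothetical):

    # Count how often "american scientists" appears with a preceding modifier
    # versus bare, in some text corpus.
    import re
    from collections import Counter

    text = open("corpus.txt", encoding="utf-8").read().lower()
    matches = re.findall(r"(\w+[- ])?american scientists", text)
    counts = Counter(m.strip("- ") if m else "<bare>" for m in matches)
    # If "african-american scientists" dominates, a ranker driven by phrase
    # frequency will surface those pages first for the query "american scientists".
    print(counts.most_common(5))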

~~~
hashr8064
Yes, precisely: that's what I would expect from an NLP system, because it will
find "African American" and, I would expect, "Chinese American", etc. in
documents more frequently than a plain "American", much like what this article
mentions with bananas and no one ever mentioning yellow. Still, the approach
would have to be pretty naive not to recognize that "X-American" is a subset of
"American". It would be like not recognizing that a query for "anonymous
function" is something different than a query for "function".

Here's the underlying data at duckduckgo:
[https://duckduckgo.com/?q=american+scientists&t=h_&ia=list](https://duckduckgo.com/?q=american+scientists&t=h_&ia=list)

I'm still interested in a possible technique which could lead to this type of
bias without it being explicit (or requiring Google to have an extremely naive
approach).

~~~
sidibe
I don't see why you can't just accept that that naive approach is their
approach? Those two words almost always occur together as part of "-American
scientist." This happens to work very well in general for search engines. I
don't think Google or DuckDuckGo is hoping their image page for "American
scientist" just returns African Americans and is therefore subtly changing
its algorithm to that end.

~~~
Udik
> I don't see why you can't just accept that that naive approach is their
> approach?

I very strongly doubt their approach is based on substring search. They're
obviously using a knowledge graph. And if you try a search for "American
economists" or "American philosophers" the results look much more expected,
either the "American" in this case is not a substring of "African-american" or
they simply thought that economy and philosophy aren't as worth of an equality
boost as STEM disciplines.

~~~
sidibe
> I very strongly doubt their approach is based on substring search.

You don't think Google search is using 2-grams? Do you think they're
conspiring with other search engines?
[https://www.bing.com/images/search?q=american+scientist](https://www.bing.com/images/search?q=american+scientist)

~~~
Udik
Good point, you're right (and btw, that's a crappy result from Bing!). As a
verification, I did another experiment:

[https://www.google.com/search?q=usa+scientists](https://www.google.com/search?q=usa+scientists)

apparently gives results from the same knowledge graph and displays them under
the same heading, but orders them differently from:

[https://www.google.com/search?q=american+scientists](https://www.google.com/search?q=american+scientists)

So apparently Google, while understanding the search query, still orders the
results by the words used to express it, and in the second case clearly
privileges African-Americans because of the "American" substring. My bad.

------
Hendrikto
Unequal outcome != bias. Most SJWs don’t seem to understand this. Of course it
can be an indicator that there may be bias, but it is _FAR_ from a proof
thereof.

I can easily write you a completely unbiased “model” predicting who committed
crimes:

    
    
    def probability_of_committing_crime(human_population):
        return 1 / human_population
    

This is 100% unbiased and treats everybody the same. It’s also 100% useless...

~~~
nkozyra
The problem is that people view purely statistics-based crime probability as
"unbiased" because... math. It's not! Most bias is deeply hidden, and our
training data comes directly or indirectly from some form of human
subjectivity.

The more autonomy plays a role in our lives, the more aware we need to be of
how internal bias from human sources has affected things like predictions.

And please, there is nothing more flippant and off-putting than railing
against "SJWs," this is a topic that demands thoughtfulness.

~~~
tomp
> The problem is that people view purely statistics-based crime probability as
> "unbiased" because... math. It's not!

Can you explain what you mean? “Biased” usually means not using _just_
statistics but also extra-statistical personal convictions. What does
“unbiased” mean for you?

~~~
nkozyra
Where do the numbers come from? What is the context of collection of data?
What is the context of the classification of training data?

There are hundreds of questions like that which apply. It's not necessarily
the algorithm that carries the bias (though it certainly can be); it's the
data.

------
deepnotderp
Fairness (tm) by Google.

------
retox
Do a Google image search for "new actress" without the speech marks. Or
"American inventor"

~~~
jf-
The “American Inventor” result is easily explained by being a substring of
“African American Inventor”. No conspiracy necessary, it’s just a substring
hit.

~~~
eecks
Google really needs to improve their search quality then.

Searching for razor doesn't give results for Occam's razor.

~~~
jf-
Because there aren’t that many pages about it. Also a white American inventor
would probably just be referred to as an inventor, without reference to race
or nationality. A black inventor would be seen as out of the ordinary, and so
be referred to as an African American inventor, stacking the deck in favour of
those results.

~~~
eecks
This is a guess: I don't think that most people would think "American
inventor" when someone says "Inventor"

------
gaius
How about fairness in taxation? What do Google know about that?

------
Rainymood
Before Google starts lecturing people on fairness, maybe they should look in
the mirror and lecture themselves on what's good and evil ...

------
908087
"Fairness" as defined by Google.

Ethically bankrupt advertising corporations have no business teaching people
about ethics.

------
realpinkie
A theoretical ML paper came out recently that directly ties the notion of
bias/fairness in machine learning to the tuning of the bias parameter in deep
neural network architectures. The authors used stochastic backpropagation to
classify an unbalanced dataset (85% class 0, 15% class 1) and show, through a
series of topological cross-validation techniques, that very small amounts of
perturbation in the bias parameter result in a significant increase in the
overall bias of the model towards the minority class (class 1).

~~~
p1esk
Which paper?

------
jlawson
I read HN with showdead on. There's some pretty unjustifiable censorship going
on in this thread.

I'll just call their attention to their own rule on this: "Please respond to
the strongest plausible interpretation of what someone says, not a weaker one
that's easier to criticize. Assume good faith."

The weak interpretation of these dead comments is that they're just racists
who hate dark-skinned people. There are much stronger interpretations and
those should be discussed.

Especially in the context of Google, where it's been demonstrated over and
over through video leaks, court filings and the Damore firing, that a single
and very heavy-handed ideology has power and will not allow discussion. From a
centrist position is seems obvious Google's internal thought process will be
itself heavily biased, and the possibility of certain factual hypothesis being
true about the physical world (not normative statements on what to do) will be
denied for ideological reasons. It's like a group of fundamentalist Christian
scientists studying geology. This needs to be acknowledged in any discussion
of Google, ML, and 'bias'.

~~~
atlantic
How do posts become dead? Is it because of downvotes, or rather due to some
form of editorial intervention?

