
Fearful of bias, Google blocks gender-based pronouns from new AI tool - eplanit
https://www.reuters.com/article/us-alphabet-google-ai-gender/fearful-of-bias-google-blocks-gender-based-pronouns-from-new-ai-tool-idUSKCN1NW0EF
======
jakelazaroff
Alternate wording: "Google's Gmail autocomplete no longer assumes gender."

From the article:

> Gmail product manager Paul Lambert said a company research scientist
> discovered the problem in January when he typed “I am meeting an investor next
> week,” and Smart Compose suggested a possible follow-up question: “Do you want
> to meet him?” instead of “her.”

So really, this is a fairly reasonable decision: the tool simply no longer
infers gender from signals such as profession.

The article doesn't address this, but I wonder what it would do given an
explicit gender, e.g. "She's the investor I am meeting next week."

~~~
hueving
So what does it follow up with? English doesn't have a non-gendered way to say
"Do you want to meet him/her?"

Is the idea just to eliminate all useful suggestions that contain genders?

~~~
hrnnnnnn
To my ear, as a 34-year-old native English speaker from Scotland, the phrase
"Do you want to meet them?", using the non-gendered singular pronoun, sounds
absolutely fine and not strange in the slightest, and always has since I was a
child.

~~~
elliekelly
If Google had made this change quietly I suspect no one would have even
noticed.

------
bitcharmer
When will this madness end? There will always be people offended by something.
I'm pretty sure there are individuals claiming that NOT suggesting their
pronoun deprives them of their identity, dignity or whatever.

This is getting ridiculous.

~~~
thaumasiotes
> I'm pretty sure there are individuals claiming that NOT suggesting their
> pronoun deprives them of their identity, dignity or whatever.

This would be similar to how feminist movements in languages that
systematically distinguish men from women push to get the language changed so
that women are referred to as men ("there shouldn't be any difference between
a waiter and a waitress"), while feminist movements in languages that don't
draw the distinction push to get the language changed so that women are
referred to explicitly as women ("we should acknowledge that women can be
waiters too").

If feminism demands one of these things, it surely can't also demand the
other.

~~~
VikingCoder
...OR....

Different groups of people call themselves "feminists," and don't agree with
each other about changes they'd like to see in the world.

It's kind of odd to me that people (like you, apparently) don't understand that.

~~~
thaumasiotes
What is it that you think I don't understand?

I think they all agree that they'd like other people to change as an
acknowledgement of their power.

I also think that they can't both use the same justification to argue for
opposite changes. What did I miss?

~~~
VikingCoder
"they" is not a homogeneous group.

"they" are different groups that might use the same label. They can absolutely
use the same justification, to argue for opposite changes.

Their justification could be as simple as, "We want to be respected," and
different groups of feminists disagree with each other about how best to
achieve that goal.

Sure, it's even worse if you personally land in the crosshairs, but from your
example it sounds like the feminists are working in different places with
different languages. And perhaps the problem of gaining respect requires
different solutions in different areas.

------
hadrien01
> Google’s technology will not suggest gender-based pronouns because the risk
> is too high that its “Smart Compose” technology might predict someone’s sex
> or gender identity incorrectly and offend users

Nobody cares when touch keyboards predict the wrong pronouns, so why would the
same service in an email client be any different?

~~~
bluetidepro
I agree. It seems a bit hyper sensitive to me. Esp when it's a machine that's
predicting it, and not a human.

~~~
levesque
Bias is one of the new hot topics in ML. People act surprised that a face
recognition software trained on a database of white people is unable to
properly classify pictures of people from all around the world. The algorithm
is only learning the data it was fed.

------
chickenfries
This is fine. Guessing gender based on things like name is terribly error
prone. Imagine if you have a name like Taylor. But I’m sure everyone who wants
to be annoyed by this will find a way to be annoyed.

~~~
Bartweiss
Now that you point it out, I'd imagine that aside from any bias, pronoun
choice would also be one of the least _accurate_ suggestions offered. When
somebody writes "my cousin is in town, do you want to meet", it's basically
impossible to do better than a coin-flip chance - and that's if offering a
pronoun next was even correct. So it's no real surprise that they didn't spend
a bunch of effort on debiasing a feature that would still be wrong if they
succeeded.

------
brobdingnagians
> Men have long dominated fields such as finance and science, for example, so
> the technology would conclude from the data that an investor or engineer is
> “he” or “him.”

Err, so the AI correctly predicts that statistically you probably intend
"him", but we limit the utility of the tool because that would be
discriminatory? I know it will get it wrong sometimes, and you can say it
reinforces stereotypes, but if it will get it right most of the time based on
strict statistical inference, seems like it could at least be configured. It
seems to be a case where the AI is a bit too accurate... I don't disagree with
their decision, I think given the circumstances it's actually a brilliantly
safe move and keeps out of the fray as much as possible.

> “The only reliable technique we have is to be conservative,”

I know this is just whimsical, and a horrible logical equivocation on my part,
but that's kind of funny that a large tech company decided being conservative
might actually be useful in some cases to help protect against liberal
outrage...

~~~
damnyou
Yes, the goal of the modern Civil Rights Movement has basically been to say
"in these areas, even though Bayesian inference produces statistically valid
results, we as a society have decided that it's immoral to use it."

What axes are OK to use Bayesian inference on and what are not is a
philosophical and historical question with lots of practical implications
across politics, economics, actuarial science etc. It's really worth thinking
about. But here's a start: in general, people are more forgiving of inferring
data from mutable traits than immutable ones.

------
lifeformed
Is it just me or does it seem like the only people getting upset in these
situations are the people offended by others trying not to offend others? It
just doesn't seem like a big deal either way. It's weird that people
automatically view actions like this as some sort of moral command, and thus
feel so strongly about it. At the end of the day, it's just a design choice to
reduce inaccuracies.

It's not something I would've thought to do myself, but I can see the
reasoning for it. Bias in AI is a real issue, and it's wise to consider it
earlier than later. Sometimes it's something more culturally visible, like in
this situation, but oftentimes it can be much more subtle and insidious. This
step isn't going to fix much but it's part of a bigger effort to make
considering bias one of the priorities. Ignoring bias in our AI will make our
AI more human in all the bad ways.

On the other hand, Smart Compose already seems like a bad idea to me. It's
good for their AI but bad for humans. This pronoun action feels like a micro
optimization for this technology's social impact, while the entire feature
itself is a small net harm for society, in my opinion. It's a subtle dampening
of our personal voice and nuances.

~~~
DogPawHat
Yeah, people are getting worked up about a good faith trade off in software
features made by a company trying to create products useable by, preferably,
every single person on the planet. And since gender is something we are -
right now, whether you like it or not - revising our understanding of, it
makes sense to wait until you have a better solution to the problem.

I mean, autocorrect can only take you so far anyway; you're going to have to
change it a bit regardless, and I assume most people with two brain cells know
the gender and preferred pronouns of the people mentioned in the convo and
know how to use them properly even if they don't have a clue about the pronoun
debate. This is an edge case, but still one where they saw value in avoiding a
mistake.

------
topynate
When's the Hebrew language version coming out? There are many languages for
which this "solution" is completely impracticable. On the other hand, there
should be no problem with Turkish or Persian.

~~~
mc32
They’ll have a fun time with Korean where depending on speaker and audience
different word declensions are used.

------
walterbell
Which pronoun(s) will be used by Google AI to address future AI tools and
robots?

[https://www.wired.com/story/robot-gender-stereotypes/](https://www.wired.com/story/robot-gender-stereotypes/)

> Robots don’t have genders—they’re metal and plastic and silicon, and
> filled with ones and zeroes ... The problem is that even if a robot isn’t
> gendered, and even if it doesn’t look human or even animal, you’ll tend to
> want to gender it.

------
skrowl
Great, now Skynet is going to believe there are 38,000 genders when it goes
online.

------
JacobJans
I don't know whether other people do this, but when using voice recognition
software, such as Google's, I "learn" what it is likely to understand, and
then modify my statements to be more easily understood by the software. This
is very useful when sending text messages, for example. I am able to quickly
send a message in just a moment.

However, because of Google's limitations, the language I use is modified.
Their "AI" is shaping my use of language, and thus, how I communicate with
others. Multiply this by millions (billions?) of people, and this could have a
real impact on culture.

What is Google's responsibility, in this regard? Certainly they shouldn't
ignore the ways their technology could affect society. They should be making
conscious, deliberate decisions. This is dangerous territory, and not easy to
navigate. I am glad they seem to recognize that.

------
cabaalis
Gender is a big thing to get wrong in person. I am a man; if someone referred
to me as "she" or "her" I would correct them. I wouldn't expect any less from
anybody.

When it's a machine doing it though, some unknown piece of metal spitting out
a form message, it seems it would be easier to forgive.

------
gp7
For people who are eager to think this is a don't-assume-gender charade: try
misgendering a cis person next time you're out, and see how much they don't
care. _You_ are who this is for.

~~~
MockObject
I don't follow. I get misgendered quite commonly. It doesn't bother me at all.
Is it supposed to?

~~~
gizmo686
I've never been offended by getting misgendered, but hearing people get
misgendered does, at times, bother me in the sense that it can make the
sentence difficult to understand.

------
Jorge1o1
Heaven forbid that the millions of mindless sheeple using “Smart” Compose
actually have to read and edit their email before hitting send.

If Google really cared about combatting bias, they would just use “she/her.”
At least it would be grammatically correct.

It’s not about taking a political/social stance to them, it’s about making
life _convenient_ for their users by removing any form of cognitive
processing necessary.

------
CJKinni
I'm confused by any outrage over this.

It's a tool that's meant to predict things, and they weren't able to
successfully predict something that is used extremely frequently in lots of
conversations. So they decided, after 'several' other attempts, that they were
best off not trying to make suggestions.

If you can't do something right after repeated attempts, don't do it.

------
feefie
I read the article and I find this fascinating. I am happy they are giving it
some thought. Pronoun political correctness (a.k.a. thoughtfulness or
respectfulness) aside, if our society's training data leads AIs to make
assumptions, we should be aware of it and be careful to detect it.

An incorrect pronoun isn't going to kill someone -- but we are gradually
handing over more and more decisions to technologies that rely on AI/ML.
Perhaps investigations into incorrect pronoun assumptions can lead to
improvements in assumption errors in other areas (e.g. you think your self
driving vehicle doesn't need radars and that only using cameras is sufficient?
Just because it looks like a big fluffy cloud doesn't necessarily mean you can
safely fly/drive through it).

Back to just pronouns though: if someone says "My teacher assigned me to read
Act I of Macbeth tonight", most people would avoid a reply that uses he/she
until an indication was given or they'd just ask "Who's that, Ms. McFadden?
Yeah, I know, she assigns way too much homework!". If a human can be smart
enough to get it right, then I'm glad folks are working on AIs getting it
right too (or, for now, not making assumptions until they can get it right).

The fact that AIs are advanced enough for us to be thinking about these kinds
of details is wonderful! :)

------
SolaceQuantum
I'd like to point out what low-hanging fruit this is for Google as a way to
address gender imbalance and gender inequality, especially given Google's much
larger issues with its policies around inappropriate sexual behavior. More
significantly, you'd think Google could work on establishing better policies
and a more equal workplace culture.

------
lujim
Anyone care to guess what the ratio is between trans people that are offended
by the pronoun issue vs liberal college kids that are offended on behalf of
the trans community?

Yes I realize that saying this on hacker news or reddit is grounds for heavy
downvotes and hate mail but good god people. This is all political gaming, not
compassion.

------
awkward
If automatic suggestion tools had more than a markov chain like understanding
of language they likely would have been able to identify this as a use case
and had a culturally appropriate UI. As is, completely blocking references to
gender is the only way to fix the issue.

------
vowelless
Is this gender pronoun controversy primarily in English speaking populations?
In other languages, almost every word is gendered. How will they tackle this
problem of assuming gender in them?

------
thwoawayyowser
Wait... So because the real-world data doesn't align with our ideology, we're
going to tell it to ignore the facts?

~~~
DogPawHat
And what exactly is the real world data actually saying and how does it
disprove Google's "ideology"?

~~~
MockObject
It's saying that word X is usually associated with "her", and Y with "him".
Often, reality has a sexist bias.

------
1023bytes
replace(/\b(him|her)\b/gi, "them")

Here you go Google, I fixed it for you

~~~
npcdevops
you nailed it, Step 1: Over engineer the problem to virtue signal for a
culture war win. Step 2: Maintain control of culture. Step 3: Profit

------
adamnemecek
It’s getting too much... this whole ideology will self-implode.

------
emiliobumachar
I, for one, applaud this move.

These times are indeed hyper-sensitive, and ridiculous exaggerations do occur.
This is not one of them.

Gender dysphoria is a real, crippling, dangerous mental illness. Maybe in the
future there'll be some pill or simple brain surgery that fixes it. Today,
there's none of that.

Gender reassignment surgery generally helps. Calling people their preferred
gender helps, as a complement or substitute to surgery. It's a hacky,
inelegant solution, perhaps disgusting to some, but _it fucking works!_ Going
the extra mile to avoid misgendering people is reasonable.

"Imagine if we could give depressed people a much higher quality of life
merely by giving them cheap natural hormones. I don’t think there’s a
psychiatrist in the world who wouldn’t celebrate that as one of the biggest
mental health advances in a generation. Imagine if we could ameliorate
schizophrenia with one safe simple surgery, just snip snip you’re not
schizophrenic anymore. Pretty sure that would win all of the Nobel prizes.
Imagine that we could make a serious dent in bipolar disorder just by calling
people different pronouns. I’m pretty sure the entire mental health field
would join together in bludgeoning anybody who refused to do that. We would
bludgeon them over the head with big books about the side effects of lithium."

From [http://slatestarcodex.com/2014/11/21/the-categories-were-mad...](http://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/)

