
Cloud Vision API will not return gendered labels such as 'man' and 'woman' - jawngee
I just received an email from Google Cloud Platform:

Hello Google Cloud Vision API customer,

We are writing to let you know that starting February 19, 2020, the Cloud Vision API will no longer return gendered labels such as 'man' and 'woman' that describe persons in an image when using the 'LABEL_DETECTION' feature.

What do I need to know?

As you know, the Cloud Vision API can perform feature detection on a local image file for the purpose of identifying persons by sending the contents of the image file through 'LABEL_DETECTION'.

Currently, when you request the API to annotate an image with labels, if you use this feature on images with people, it may return labels describing them in an image with gendered terms, like 'man' or 'woman'.

Given that a person's gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.

https://ai.google/principles/
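For context, a LABEL_DETECTION request is just an annotate call with that feature type. A rough sketch of the REST request body (the image bytes are a placeholder, and the request is only built here, not sent):

```python
import base64
import json

# Placeholder image bytes; a real call would read an actual image file.
image_bytes = b"\x89PNG..."

# Shape of a Vision API annotate request using the LABEL_DETECTION feature.
request_body = {
    "requests": [
        {
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
        }
    ]
}

# This body would be POSTed to
# https://vision.googleapis.com/v1/images:annotate?key=API_KEY
print(json.dumps(request_body, indent=2))
```

After the change, the labels coming back from such a request simply won't include 'man' or 'woman'.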
======
0x1221
> Given that a person’s gender cannot be inferred by appearance, we have
> decided to remove these labels in order to align with the Artificial
> Intelligence Principles at Google, specifically Principle #2: Avoid creating
> or reinforcing unfair bias.

Isn't this just outright wrong? A person's gender _can_ be inferred by
appearance, and with extremely high accuracy at that.

~~~
rl3
While I tend to agree with you from a technical standpoint, I can't help but
think Google is actually doing the morally correct thing here.

If organizations are creating systems using that API, most will just roll with
man/woman labeling. If those systems are important, a lot of trans and non-
binary people will end up fucked over.

At the end of the day, if the deprecated functionality is still needed, just
set a threshold on masculine vs feminine. It's a far more elegant approach,
even if more work.
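A rough sketch of that thresholding idea, assuming a hypothetical model that returns a continuous presentation score instead of a hard label (the score range, function name, and cutoffs here are all made up for illustration):

```python
# Hypothetical: suppose a model returns a score in [0.0, 1.0], where higher
# means more masculine-presenting. Downstream code picks its own cutoffs
# instead of relying on a hard man/woman label from the API.

def presentation_bucket(score, low=0.35, high=0.65):
    """Map a continuous presentation score to a coarse bucket.

    Scores between `low` and `high` are deliberately left ambiguous so the
    system never forces a binary label onto borderline cases.
    """
    if score >= high:
        return "masculine-presenting"
    if score <= low:
        return "feminine-presenting"
    return "ambiguous"

print(presentation_bucket(0.9))  # a clear-cut case
print(presentation_bucket(0.5))  # left ambiguous on purpose
```

The point is that the ambiguous band is a policy decision made by the integrating system, not baked into the API.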

~~~
brodouevencode
I always get it confused: is it sex or gender that people can assign to
themselves?

~~~
foldr
Are you really confused about this, or just rhetorically confused? You can
answer the question by googling "sex vs gender".

------
comsolo
Another instance of SJW types causing problems in the coding world (although
the vast majority of actual coders aren't on the SJW train).

The fact is gender can be determined by appearance in 99.5% of circumstances
or more, which is probably higher than the overall accuracy of their ML
labelling system.

The idea of "avoiding unfair bias" is highly vague and problematic since,
theoretically, nearly any statement or view beyond "particle X is at Y"-type
statements carries a somewhat "unfair" bias reflecting the lens of the human
experience, which has developed over millions of years of biology and
thousands of years of culture.

Finally, seeing as 99% of people would not be negatively affected by this
feature remaining whatsoever, the slight negative impact on the other
(intersex-type) 1% or so is minimal, and the total impact of removing the
feature is mildly annoying for many users, this seems a lot like irrational,
hollow virtue signalling.

~~~
hluska
A more charitable interpretation would be that classifying gender is hard. It
is so hard that humans have trouble classifying gender purely with sight. If
humans have so much trouble, how can we teach machines?

~~~
esyir
Classifying gender is easy. Humans do it correctly with near-perfect accuracy,
typically failing only on edge cases. Furthermore, most cultures already bake
in signaling of gender in various forms (behavior, attire, etc.).

Humans don't have much trouble, except against adversarial/outlier examples.

~~~
hluska
Can you provide some citations for what you’ve written?

~~~
esyir
First, please hold yourself to the same standard that you're holding me to. I
see no citations to your (paraphrased) "identification of gender is
exceedingly hard for humans". That goes heavily against most people's
intuition and thus carries a higher burden of proof.

Now, 0.6% of Americans identify as transgender
[http://williamsinstitute.law.ucla.edu/wp-content/uploads/How-Many-Adults-Identify-as-Transgender-in-the-United-States.pdf].
That means using sex prediction alone, we can achieve a
theoretical maximum of 99.4% accuracy here. This ignores the fact that
transgender people often signal their gender via behavior or clothing, or attempt to
obtain the physical characteristics of their assumed genders as well, both of
which could further improve performance.
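Quick sanity check on that number:

```python
# ~0.6% of US adults identify as transgender, per the Williams Institute
# figure cited above. Predicting gender from apparent sex alone would then
# be wrong on at most that fraction of people.
transgender_share = 0.006
baseline_accuracy = 1.0 - transgender_share
print(f"{baseline_accuracy:.1%}")  # 99.4%
```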

~~~
hluska
I asked for your help. Why did you see fit to get rude? What the hell is wrong
with society that people like you think this is okay? Toxic responses like
this ruin tech because, instead of teaching, you attack.

Loads of research suggests seven-year-olds are about 70% accurate at predicting
gender from pictures. It takes a couple of decades to get that into the
ranges you've cited. That doesn't sound easy. If it is, I'd love to read about
it.

Why be toxic?

------
he11ow
Wow, the responses to this thread are such an excellent marker for skin
colour. I don't know the Google stats, but when it comes to labeling black
women as male, Amazon and IBM get it wrong 30% of the time. Microsoft errs in
21% of test cases. As for Google's rates, not all that long ago they were
labeling black people as gorillas, so... [0]

Put it another way, how impressed would your boss be if you were to say "Yo,
I've nailed this. I've got a kick ass algorithm, check this out. Oh, yeah, it
fucks up 30% of the time."

Bias in AI is bad, but what makes it much much worse is arrogant attitudes
claiming it doesn't exist.

And you know what, that speaks to the idiocy around AI more than anything. All
these debates on AGI are moot, because people are already so keen to believe
they've got it all figured out that it doesn't even matter if the actual tech
is any good at all.

[0] https://www.wired.com/story/photo-algorithms-id-white-men-fineblack-women-not-so-much/

~~~
raxxorrax
> Wow, the responses to this thread are such an excellent marker for skin
> colour.

So is pronouncing on the effects of bias and prejudice. You can almost
guarantee the person is white without using any AI.

I was, and still am, pretty critical of AI, since the quantification of every
property of people will probably not be very enjoyable for anyone. What
changed my mind a bit is that there are people who want to employ it to
improve reality. That is far worse, and I hadn't had that on my radar before.

------
fyp
My google ads profile correctly identifies me as male:
[https://adssettings.google.com/](https://adssettings.google.com/)

I imagine the high level business use cases for these machine learning APIs
are similar. For example, analyze shopping mall camera footage to figure out
demographics shopping there and coordinate the (socially accepted and
segregated) men/women department stores accordingly.

This is such a profitable use case that, cynically, I can't see why they would
cripple their API like this, regardless of their ethical stance.

------
mateo1
That's just hilarious. We live in an era where the statistical differences
between the appearance of men and women are better understood than ever
before, thanks to artificial neural networks, and we can in fact pinpoint the
exact combinations of low-level features that make a face look like a man or
woman. And then you have people in such a deep denial of reality they pull off
this type of joke. Sure, you cannot identify sex with absolute certainty from
a photo alone, but ignoring fringe cases you can guess with very high
confidence.

------
emilfihlman
It's funny how ads can be targeted at men or women, but everything else is
just "nooooo, we can't do that!".

I wish the ad buyers would flex that money muscle a bit and put Google in line
again.

------
MidnightRaver
I wonder if this is actually a marketing gimmick to invoke outrage and
therefore clicks? The change certainly reduces the usefulness of the API by a
fraction of a fraction of a percent.

~~~
rl3
No, it's legitimate. Google is the last company that needs marketing gimmicks,
let alone ones based on outrage.

------
codesternews
Are we becoming more stupid?

