Cloud Vision API will not return gendered labels such as 'man' and 'woman'
47 points by jawngee on Feb 20, 2020 | 39 comments
I just received an email from Google Cloud Platform:

Hello Google Cloud Vision API customer,

We are writing to let you know that starting February 19, 2020, the Cloud Vision API will no longer return gendered labels such as 'man' and 'woman' that describe persons in an image when using the ‘LABEL_DETECTION’ feature.

What do I need to know?

As you know, the Cloud Vision API can perform feature detection on a local image file for the purpose of identifying persons by sending the contents of the image file through ‘LABEL_DETECTION’.

Currently, when you request the API to annotate an image with labels, if you use this feature on images with people, it may return labels describing them in an image with gendered terms, like 'man' or 'woman'.

Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.

https://ai.google/principles/
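
For context, this is the kind of call that's affected. A minimal sketch using the Python client library (assuming google-cloud-vision is installed and credentials are configured; the filename is a placeholder):

    from google.cloud import vision

    # Read a local image and send it through LABEL_DETECTION.
    client = vision.ImageAnnotatorClient()
    with open("photo.jpg", "rb") as f:
        content = f.read()

    response = client.label_detection(image=vision.Image(content=content))

    # Each annotation carries a description and a confidence score;
    # 'Man'/'Woman' will simply no longer appear among the descriptions.
    for label in response.label_annotations:
        print(label.description, label.score)

Presumably the same call now just returns neutral labels like 'Person' where it might previously have returned 'Man' or 'Woman'.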




> Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.

Isn't this just outright wrong? A person's gender can be inferred by appearance, and with extremely high accuracy at that.


While I tend to agree with you from a technical standpoint, I can't help but think Google is actually doing the morally correct thing here.

If organizations are creating systems using that API, most will just roll with man/woman labeling. If those systems are important, a lot of trans and non-binary people will end up fucked over.

At the end of the day, if the deprecated functionality is still needed, just set a threshold on masculine vs feminine. It's a far more elegant approach, even if more work.
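
Something like this, as a rough sketch — the scores would have to come from a model you build or find yourself, since the API no longer exposes them, and the label names and threshold here are made up:

    # Hypothetical: map presentation scores from your own model to a label,
    # falling back to a neutral label when confidence is low.
    def presentation_label(masc_score: float, fem_score: float,
                           threshold: float = 0.9) -> str:
        if masc_score >= threshold and masc_score > fem_score:
            return "masculine-presenting"
        if fem_score >= threshold and fem_score > masc_score:
            return "feminine-presenting"
        return "person"  # not confident enough either way

    print(presentation_label(0.97, 0.02))  # masculine-presenting
    print(presentation_label(0.55, 0.45))  # person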


Automatic gender detection should certainly not be used for data entry where the data is of any real consequence (even ignoring trans/non-binary people), simply because it can't be proven to be perfect.


I think, yeah, it's easier to use masculine/feminine. Something like long hair, soft skin, pink cheeks, and more curves reads as feminine; more angular features (jaw, cheekbones, shoulders) read as masculine.

Humans have trouble with this too; e.g., many people crushed on Taylor Hanson.


I always get it confused: is it sex or gender that people can assign to themselves?


Are you really confused about this, or just rhetorically confused? You can answer the question by googling "sex vs gender".


Another instance of SJW types causing problems in the coding world (although the vast majority of actual coders aren't on the SJW train).

The fact is gender can be determined by appearance in 99.5% of circumstances or more, which is probably higher than the overall accuracy of their ML labelling system.

The idea of "avoiding unfair bias" is highly vague and problematic, since theoretically nearly any statement or view beyond "particle X is at Y"-type statements carries a somewhat "unfair" bias reflecting the lens of the human experience, which has developed over millions of years of biology and thousands of years of culture.

Finally, given that 99% of people would not be negatively affected by this feature remaining, and the very slight negative impact on the other (intersex-type) 1% or so is minimal at worst, while removing the feature is mildly annoying for many users, this looks a lot like irrational, hollow virtue signalling.


>the other (intersex type)

The biggest group likely to be impacted is trans people, not intersex people.


Still less than 1% of the population, and that number could dwindle if and when treatment sorts out the underlying problems with the mental wiring causing the gender dysphoria, and the current trend dies out.


Redheads are less than 2% of the population, and yet if you built a classification system that failed to identify them as people, that wouldn't be seen as acceptable either.

Trans people have been around forever, and aren't going away just to make image classification less complex. It's not a "trend", treatment options just happen to be more available and the internet makes communities more visible now.


> Redheads are less than 2% of the population, and yet if you built a classification system that failed to identify them as people, that wouldn't be seen as acceptable either.

So the solution would be to not identify _anyone_ as people? As a ginger, I doubt that.


The solution would be to accept that the classification algorithm was too error-prone to be usable for non-toy applications.


We are going in circles. An error rate of 2% (heck, even 20%!) would not make a classifier unusable; that was what one of the grandfather posts was arguing.

https://adssettings.google.com/u/0/authenticated

Here Google can infer my gender, age, and personal interests just from my search history. I am sure this is not perfect either, but it is still immensely useful for advertising.


Not the best example, given Google lets you toggle your ad segments when they get them wrong and isn't basing that guess on physical presentation; the ad segment "male" is just a comment on the things you search and click on.


Why are you imposing the error tolerances the user should have? Shouldn't that be their choice?


If someone wants to build and train their own model that assigns a binary gender based on pictures, that's their own choice, and because of that freedom they can't force Google to do it for them.


Right, it's just that usually the concept with goods and services is that you're paying someone else to do it for you because you either can't or don't want to.


If I'm the author of the tool, I can impose whatever I want, no? It's not like people have a God-given right to have Google tag photos with genders for them.


So the best argument for doing it is that you have a legal right to?


The argument in this case is that the potential for misuse outweighs the potential benefits.


Depends on the application. For something like approximating demographics of retail store traffic, this kind of issue would probably be fine. I don’t think anybody expects these models to be right 100% of the time anyway.


A more charitable interpretation would be that classifying gender is hard. It is so hard that humans have trouble classifying gender purely by sight. If humans have so much trouble, how can we teach machines?


Classifying gender is easy. Humans do it correctly with near-perfect accuracy, typically failing only on edge cases. Furthermore, most cultures already bake in signaling of gender in various forms (behavior, attire, etc.).

Humans don't have much trouble, except against adversarial/outlier examples.


Can you provide some citations for what you’ve written?


First, please hold yourself to the same standard you're holding me to. I see no citations for your (paraphrased) claim that "identification of gender is exceedingly hard for humans". That goes heavily against most people's intuition and thus carries a higher burden of proof.

Now, 0.6% of Americans identify as transgender [http://williamsinstitute.law.ucla.edu/wp-content/uploads/How...]. That means using sex prediction alone, we can achieve a theoretical maximum of 99.4% accuracy here. This ignores the fact that transgender people often signal their gender via behavior or clothing, or attempt to obtain the physical characteristics of their assumed genders as well, both of which could further improve performance.
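
(Spelled out: even a perfect apparent-sex classifier would mislabel at most that 0.6%, so gender accuracy tops out around 1 − 0.006 = 99.4%.)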


I asked for your help. Why did you see fit to get rude? What the hell is wrong with society that people like you think this is okay? Toxic responses like this ruin tech, because instead of teaching, you attack.

Loads of research suggests 7-year-olds are about 70% accurate at predicting gender from pictures. It takes a couple of decades to get that into the ranges you've cited. That doesn't sound easy. If it is, I'd love to read about it.

Why be toxic?


Should we not teach machines to do things, or attempt to do things, if they can't do a better job than humans?


Wow, the responses to this thread are such an excellent marker for skin colour. I don't know the Google stats, but as far as labeling black women as male, Amazon and IBM get it wrong 30% of the time. Microsoft errs in 21% of test cases. As for Google's rates, not all that long ago they were labeling black people as gorillas so... [0]

Put it another way: how impressed would your boss be if you were to say, "Yo, I've nailed this. I've got a kick-ass algorithm, check this out. Oh, yeah, it fucks up 30% of the time"?

Bias in AI is bad, but what makes it much much worse is arrogant attitudes claiming it doesn't exist.

And you know what, that speaks to the idiocy around AI more than anything. All these debates on AGI are moot, because people are already so keen to believe they've got it all figured out that it doesn't even matter whether the actual tech is any good at all.

[0] https://www.wired.com/story/photo-algorithms-id-white-men-fi...


> Wow, the responses to this thread are such an excellent marker for skin colour.

As is pronouncing on the effects of bias and prejudice. You can almost guarantee the person is white without using any AI.

I was, and still am, pretty critical of AI, since quantifying every property of people will probably not be very enjoyable for anyone. What changed my mind a bit is that there are people wanting to employ it to improve reality. That is far worse, and I hadn't had that on my radar before.


My google ads profile correctly identifies me as male: https://adssettings.google.com/

I imagine the high-level business use cases for these machine learning APIs are similar. For example, analyzing shopping mall camera footage to figure out the demographics shopping there and coordinating the (socially accepted and segregated) men's/women's department stores accordingly.

This is such a profitable use case that I cynically can't see why they would cripple their API like this, regardless of their ethical stance.


That's just hilarious. We live in an era where the statistical differences between the appearance of men and women are better understood than ever before, thanks to artificial neural networks, and we can in fact pinpoint the exact combinations of low-level features that make a face look like a man's or a woman's. And then you have people in such deep denial of reality that they pull off this type of joke. Sure, you cannot identify sex with absolute certainty from a photo alone, but ignoring fringe cases you can guess with very high certainty.


It's funny how ads can be targeted at men or women, but everything else is just "nooooo, we can't do that!".

I wish the ad buyers would flex that money muscle a bit and put Google back in line.


I wonder if this is actually a marketing gimmick to provoke outrage and therefore clicks? The change certainly reduces the usefulness of the API by only a fraction of a fraction of a percent.


No, it's legitimate. Google is the last company that needs marketing gimmicks, let alone ones based on outrage.


Are we becoming more stupid?


[flagged]


>effectively denies my biological masculinity

How does it do that? All they are doing is no longer using perceived masculinity as a metric for determining whether you might be male or female.

>I'll not bother making a fuss

And yet here we are


A post on HN is a bit different from the mind-melting, full-spectrum "there's no such thing as men and women, guyz!" subversion efforts afoot.


> And yet here we are

A two sentence post on HN is "making a fuss"???


>In a way I find it offensive that this service effectively denies my biological masculinity

All it's saying is that it won't make gender classifications based on perceived masculinity/femininity.

Just because you can automatically classify certain features to some degree of accuracy doesn't mean you should. A company like Google also has an ethical obligation to think about how the technology will be used.



