Amazon scraps secret AI recruiting tool that showed bias against women (reuters.com)
317 points by wyldfire 63 days ago | 407 comments



The eye-opening thing here is not that the AI failed, but why it failed.

At the start the AI is like a baby: it doesn't know anything and has no opinions. By training it on a set of data, in this case a set of resumes and their outcomes, it forms an opinion.

The AI becoming biased tells us that the "teacher" was biased too. So Amazon's recruiting process actually seems to be a mess: the technical skills on the resume amount to zilch, while gender and the aggressiveness of the resume's language matter most (because that's how the human recruiters actually hired people when a resume came in).

The number of women and men in the data set shouldn't matter (algorithms learn that even if there was only one woman, if she was hired then it will be positive about future female candidates). What matters is the rejection rate it learned from the data: the hiring process is inherently biased against women.

Technically one could say that the AI was successful, because it emulated Amazon's current hiring practice.


> The number of women and men in the data set shouldn't matter (algorithms learn that even if there was only one woman, if she was hired then it will be positive about future female candidates).

This is incorrect. The key thing to keep in mind is that they are not just predicting who is a good candidate, they are also ranking by the certainty of their prediction.

Lower numbers of female candidates could plausibly lead to lower certainty for the prediction model, as it would have less data on those people. I've never trained a model on resumes, but I definitely often see this "lower certainty on minorities" thing for models I do train.

The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

Now, I'm not saying that Amazon's data isn't biased. I would not be surprised if it were. I'm just saying we should be careful in understanding what is evidence of bias and what is not.


It's wrong even if their model doesn't output a certainty (not all classifiers do). Almost all ML algorithms optimize the expected classification error under the training distribution. So if the training data contains 90% men, it's better to classify those men at 100% accuracy and women at 0% accuracy, than it is to classify both with 89.9% accuracy. Any unsophisticated model will do this.
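
A toy sketch of that effect (entirely made-up data, scikit-learn only for convenience; the two groups' patterns are deliberately made to conflict so that one linear model can't fit both):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20000
    is_female = rng.random(n) < 0.10            # 10% of the training resumes
    signal = rng.normal(size=n)
    # Contrived on purpose: opposite relationships, so fitting one group hurts the other.
    hired = np.where(is_female, signal < 0, signal > 0)

    X = np.column_stack([signal, is_female.astype(float)])
    pred = LogisticRegression().fit(X, hired).predict(X)

    print("accuracy on men:  ", (pred == hired)[~is_female].mean())  # close to 1.0
    print("accuracy on women:", (pred == hired)[is_female].mean())   # much worse

The overall error is what the loss optimizes, and the 90% group dominates it.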

gp: "The number of women and men in the data set shouldn't matter (algorithms learn that even if there was only one woman, if she was hired then it will be positive about future female candidates)."

This is false for typical models.


> The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

This is not true.

Probabilistically speaking, if we are computing P(hiring | gender), lower certainty means there is higher variance in the prior over women. However, over a large dataset, the "score" would almost certainly be equal to the mean of the distribution, and be independent of the variance.

In simpler words, if there was a frequency diagram of scores for each gender (most likely bell curves), then only the peak of the bell curve would matter. The flatness / thinness of the curve would be completely irrelevant to the final score. The peak is the mean, and the flatness is the uncertainty. Only the mean matters.


There's not enough information about how their ML algorithm works, nor how large their dataset was for any of the above reasoning to be justified. Fwiw, many ranking functions do indeed take certainty into account, penalizing populations with few data points.
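
As one hypothetical example (numbers invented, and this is just one common ranking scheme, not necessarily Amazon's): rank by a lower confidence bound on the hire rate, and a group with few data points scores lower even when its observed rate is identical.

    import math

    def score(hires, total, z=1.645):
        p = hires / total                                   # observed hire rate
        return p, p - z * math.sqrt(p * (1 - p) / total)    # (rate, lower bound)

    print("large group:", score(300, 1000))   # 30% rate, lower bound ~0.28
    print("small group:", score(3, 10))       # 30% rate, lower bound ~0.06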


If they were using any sort of neural networks approach with stochastic gradient descent, the network would have to spend some "gradient juice" to cut a divot that recognizes and penalizes women's colleges and the like. It wouldn't do this just because there were fewer women in the batches, rather it would just not assign any weight to those factors.

Unless they presented lots of unqualified resumes of people not in tech as part of the training, which seems like something someone might think reasonable. Then, the model would (correctly) determine that very few people coming from women's colleges are CS majors, and penalize them. However, I'd still expect a well built model to adjust so that if someone was a CS major, it would adjust accordingly and get rid of any default penalty for being at a particular college.

If the whole thing was hand-engineered, then of course all bets are off. It's hard to deal well with unbalanced classes, and as you mentioned, without knowing what their data looks like we can only speculate on what really happened.

But I will say this: this is not a general failure of ML, these sorts of problems can be avoided if you know what you're doing, unless your data is garbage.


> It wouldn't do this just because there were fewer women in the batches, rather it would just not assign any weight to those factors.

That's exactly the issue we are talking about here. Women's colleges would have less training data so they would get updated less. For many classes of models (such as neural networks with weight decay or common initialization schemes) this would encourage the model to be more "neutral" about women and assign predictions closer to 0.5 for them. This might not affect the overall accuracy for women (as it might not influence whether or not they go above or below 0.5), but it would cause the predictions for women to be less confident and thus have a lower ranking (closer to the middle of the pack as opposed to the top).
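
Rough sketch of that mechanism with made-up data (an L2-regularized logistic regression standing in for "weight decay"; graduates of both colleges are hired at the same rate, but one college is ~100x rarer in the training set):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10000
    common_college = rng.random(n) < 0.20    # seen ~2000 times
    rare_college = rng.random(n) < 0.002     # seen ~20 times
    # Same high hire rate for graduates of either college, 50% for everyone else.
    hired = np.where(common_college | rare_college,
                     rng.random(n) < 0.9, rng.random(n) < 0.5)

    X = np.column_stack([common_college, rare_college]).astype(float)
    model = LogisticRegression(C=0.1).fit(X, hired)   # fairly strong L2 penalty

    print("P(hire | common college):", model.predict_proba([[1, 0]])[0, 1])  # ~0.9
    print("P(hire | rare college):  ", model.predict_proba([[0, 1]])[0, 1])  # pulled toward the base rate

Same underlying hire rate, but the rarer signal ends up with a weaker weight and a less confident prediction, i.e. a lower spot in a confidence-sorted ranking.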


I don't think I'm with you. A neural net cannot do this - picking apart male and female tokens requires a signal in the gradients that force the two classes apart. If there's no gradient, then something like weight decay will just zero out the weights for the "gender" feature, even if it's there to begin with. Confidence wouldn't enter in, because the feature is irrelevant to the loss function.

A class imbalance doesn't change that: if there's no gradient to follow, then the class in question will be strictly ignored unless you've somehow forced the model to pay attention to it in the architecture (which is possible, but would take some specific effort).

What I'm suggesting is that it's likely that they did (perhaps accidentally?) let a loss gradient between the classes slip into their data, because they had a whole bunch of female resumes that were from people not in tech. That would explain the difference, whereas at least with NNs, simply having imbalanced classes would not.


Supposing "waiter" and "waitress" are both equally qualifying for a job, and most applicants are men, won't the AI score "waiter" as more valuable than "waitress"?


Not generally. The entire point being made is that whether one feature is deemed to be more valuable than another feature depends not just on the data fed into the system but also on the training method used.

Specifically, the gp is pointing out that typical approaches will not pay attention to a feature that doesn't have many data points associated with it. In other words, if it hasn't seen very much of something then it won't "form an opinion" about it and thus the other features will be the ones determining the output value.

Additionally, the gp also points out that if you were to accidentally do something (say, feed in non-tech resumes) that exposed your model to an otherwise missing feature (say, predominantly female hobbies or women's colleges or whatever) in a negative light, then you will have (inadvertently) directly trained your model to treat those features as negatives.

Of course, another (hacky) hypothetical (noted elsewhere in this thread) would be to use "resume + hire/pass" as your data set. In that case, your model would simply try to emulate your current hiring practices. If your current practices exhibit a notable bias towards a given feature, then your model presumably will too.


How did you control for these things? Wondering what patterns there are that people use to prevent social discrimination.

Seems challenging since much of AI, especially classification, is essentially a discrimination algorithm.


There are a few ways you can tackle this issue: 1) have the same algorithm for each group, but train separately (so in the end you have two different sets of weights); 2) over-sample the group under-represented in the data; 3) make the penalty more severe for guessing wrongly on female than male applicants during training; 4) apply weights to the gender encoding; 5) use more than just resumes as data.

This isn't an insurmountable problem, but it does require more work than just "encode, throw it in and see what happens".
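
For what 2) and 3) can look like in practice, here's a minimal sketch with synthetic placeholder data (scikit-learn used purely as an example; this says nothing about Amazon's actual stack):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    gender = np.where(rng.random(n) < 0.15, "F", "M")  # imbalanced applicant pool
    X = rng.normal(size=(n, 10))                       # placeholder resume features
    y = rng.random(n) < 0.3                            # placeholder hire labels

    # 2) Over-sample the under-represented group until both are equally common.
    idx_w, idx_m = np.flatnonzero(gender == "F"), np.flatnonzero(gender == "M")
    boost = rng.choice(idx_w, size=len(idx_m), replace=True)
    over_sampled = LogisticRegression().fit(
        np.vstack([X[idx_m], X[boost]]), np.concatenate([y[idx_m], y[boost]]))

    # 3) Or keep the data as-is and weight errors on the smaller group more heavily.
    w = np.where(gender == "F", len(idx_m) / len(idx_w), 1.0)
    re_weighted = LogisticRegression().fit(X, y, sample_weight=w)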

Amazon only scrapped the original team; it formed a new one in which diversity is a goal for the output.


Or: don't include gender in the training data.


They didn’t. It was discovered through other signals (mention of membership in “women’s” clubs, etc.).


So they did. It should be obvious that if you don't want to include gender, then you have to sanitize gender-related data.


That's not as easy as one might think.

Machine learning generally doesn't have any prior opinions about things and will learn any possible correlation in the data.

It could for example discover that certain words or sentence structures used in the resume are more likely to be associated with bad candidates. Later you find out that <protected class> has a huge number of people who use these words/structures while most other people don't.

And now the AI discriminates against them.

ML will pick up on any possible signal including noise.
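
To make that concrete, here's a minimal made-up example: gender is never given to the model, but a token that (in this synthetic data) appears only on some women's resumes picks up the historical bias anyway.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20000
    is_female = rng.random(n) < 0.5
    proxy = is_female & (rng.random(n) < 0.3)   # e.g. "women's chess club" on the resume
    skill = rng.normal(size=n)
    # Biased historical outcomes: same skill distribution, lower hire rate for women.
    hired = rng.random(n) < np.where(is_female, 0.2, 0.4) + 0.1 * (skill > 0)

    X = np.column_stack([skill, proxy]).astype(float)   # note: no gender column
    model = LogisticRegression().fit(X, hired)
    print("weight on the proxy token:", model.coef_[0][1])   # comes out negative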


More than that, though. Graduates of all-women colleges were also caught. If you're using school as a data point, that's extremely hard to sanitize.


Then what is the purpose of this? At some point you want this thing to "discriminate" (or "select", if that's a better word) between people based on what they have done in life. Which is not negative per se.


But you don't want it to select based on gender.


Would it though? A school name is essentially just that, no gender information there, even with the "women" prefix. If you discriminate between other schools, you can do the same with those. FWIW there could be a difference in performance which the ML finds.


It would. Just because it's not explicitly looking for a "W" in the gender field doesn't mean it's not able to determine gender and discriminate based on that. The article and the discussion is all about how these things, despite not explicitly being told to discriminate based on gender, or race, or any number of factors, can still end up gathering a set of factors that are more prevalent among those groups, and discriminate against those people all the same.


>despite not explicitly being told to discriminate based on gender, or race, or any number of factors

Then this is completely useless. You want this "AI" to discriminate based on a number of things. That's the whole point. You want to find people that can work for you. If a specific school or title is a bad indicator (based on who you have hired so far), then it just is that.


> The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

I don't think that's true. "No bias" means that gender is irrelevant (i.e. its correlation with outcome is 0%). Therefore the system shouldn't even take it into account - it would evaluate both men and women just by other criteria (experience, technical skills, etc), and it would have equal amounts of data for both (because it wouldn't even see them as different).

You need bias to even separate the dataset into distinct categories.


> "No bias" means that gender is irrelevant

False. If we're talking about the technical statistical definition, bias means systematic deviation from the underlying truth in the data -- see this article by Chris Stucchio with some images for clarification:

https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what...

"In statistics, a “bias” is defined as a statistical predictor which makes errors that all have the same direction. A separate term — “variance” — is used to describe errors without any particular direction.

It’s important to distinguish bias (making errors with a common direction) from variance which is simply inaccuracy with no particular direction."


I think the comments I replied to mean bias as in “sexist bias”.


Bias as in racism, sexism, etc, has multiple definitions, some of which are mutually exclusive.


Well, it was clear that _you_ think so.

My point was that you should consider the meaning of the word under which the post you're replying to is correct, especially given that the author was claiming specific domain experience.


The original was:

> The lower certainty would in turn lead to lower rankings for women even without any bias in the data.

your post said:

> If we're talking about the technical statistical definition, bias means systematic deviation from the underlying truth in the data

So I think my interpretation is correct, even though it's not "the technically statistically correct usage". You were referring to the bias of the algorithm (i.e. the mean divergence from the mean in the data), whereas we were referring to the "hiring bias" evident in the data. In fact, your "bias" was mentioned as "lower rankings for women" - i.e. "the algorithm would have (statistical) bias even without (sexist) bias in the data" and I was replying that I think that's false.


Question: So technically, the AI is not biased against women per se, but against a set of characteristics/properties that are more common among women.

I'm not trying to split hairs (or argue), as much as further clarify the difference between (the common definition of) human bias and that of statistical bias.


Correct.

Computers are very bad at actually discriminating against people; they will pick up a possible bias in a statistical dataset (i.e., <protected class> uses a certain sentence structure and is statistically less likely to get or keep the job).

Sometimes computers also pick up on statistical truths that we don't like, e.g. you ask an ML model to classify how likely someone is to pay back their loan and it picks up on poor people and bad neighborhoods, disproportionately affecting people of color or low-income households. In theory there is nothing wrong with the data, after all, these are the people who are least likely to pay back a loan, but our moral framework usually classifies this as bad and discriminatory.

Machine Learning (AI) doesn't have moral frameworks and doesn't know what the truth is. The answers it can give us may not be answers we like or want or should have.

On a side note: human bias is usually not that different, since the brain can be simplified as a Bayesian filter; there are predictions about the present based on past experience, re-evaluation of past experience based on current experience, and prediction of future experience based on past and current experience. It's a simplification, but most human bias is based on one of these, either explicitly social (bad experience with certain classes of people) or implicit (tribalism).


> the brain can be simplified as a Bayesian filter

I agree with everything else in your post, but just wanted to note that while this is true to some extent, the brain is much less rational than a pure Bayesian inference system; there are a lot of baked in heuristics designed to short-circuit the collection of data that would be required to make high-quality Bayesian inferences.

This is why excessive stereotyping and tribalism are a fundamental human trait; a pure Bayesian system wouldn't jump to conclusions as quickly as humans do, nor would it refuse to change its mind from those hastily-formed opinions.


> the AI is not biased against women per se

I think I'd make the claim a bit less strongly -- we don't know if there is statistical bias or non-statistical/"gender bias" in the data; both are possible based on what we know.

However exploring the statistical bias possibility, the simple way this could happen is if the data have properties like:

1. For whatever reason, fewer women than men choose to be software engineers

2. For whatever reason, the women that choose to be software engineers are better at it than men

(Note I'm just using hypotheticals here, I'm not making claims about the truth of these, or whether it's gender bias that they are true/false).

Depending on how you've set up your classifier, you could effectively be asking "does this candidate look like software engineers I've already hired"? If so, under the first case, you'd correctly answer "not much". Or you could easily go the other way and "bias" towards women if you fit your model to the top 1% where women are better than men, in our hypothetical dataset.

This would result in "gender bias" in the results, but there's no statistical bias here, since your algorithm is correctly answering the question you asked. It's probably the wrong question though!

Figuring out if/when you're asking the right question is quite difficult, and as the sibling comment rightly pointed out, sometimes (e.g. insurance pricing) the strictly "correct" result (from a business/financial point of view) ends up being considered discriminatory under the moral lens.

This is why we can't just wash our hands of these problems and let a machine do it; until we're comfortable that machines understand our morality, they will do that part wrong.


The article didn't specify how they labeled resumes for training. You're assuming that it was based on whether or not the candidate was hired. Nobody with an iota of experience in machine learning would do something like that. (For obvious reasons: you can't tell from your data whether people you did not hire were truly bad.)

A far more reasonable way would be to take resumes of people who were hired and train the model based on their performance. For example, you could rate resumes of people who promptly quit or got fired as less attractive than resumes of people who stayed with the company for a long time. You could also factor in performance reviews.
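
Something along these lines (column names and the scoring rule are entirely invented, just to illustrate the kind of label construction being described):

    import pandas as pd

    employees = pd.DataFrame({
        "resume_text": ["...", "...", "..."],
        "months_tenure": [4, 30, 72],
        "avg_review": [2.1, 3.4, 4.5],            # say, on a 1-5 scale
        "left_within_year": [True, False, False],
    })

    # Crude target: long tenure and good reviews score high, quick departures score 0.
    employees["target"] = (
        0.5 * employees["avg_review"].rank(pct=True)
        + 0.5 * employees["months_tenure"].rank(pct=True)
    ) * (~employees["left_within_year"])

    # The resume text would then be featurized and regressed against this target.
    print(employees[["months_tenure", "avg_review", "target"]])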

It is entirely possible that such model would search for people who aren't usually preferred. E.g. if your recruiters are biased against Ph.D.'s, but you have some Ph.D.'s and they're highly productive, the algorithm could pick this up and rate Ph.D. resumes higher.

Now, you still wouldn't know anything about people whom you didn't hire. This means there is some possibility your employees are not representative of general population and your model would be biased because of that.

Let's say your recruiters are biased against Ph.D.'s and so they undergo extra scrutiny. You only hire candidates with a doctoral degree if they are amazing. This means within your company a doctoral degree is a good predictor of success, but in the world at large it could be a bad criterion to use.


I'm not an ML guy, but reading this, it almost sounds like the training data needs to be a fictional, idealized set, and not based on real-world data that already has bias slants built in. Possibly composites of real-world candidates with idealized characteristics and fictional career trajectories. Basically, what-my-company-looks-like vs what-I-want-it-to-look-like. I'm not sure this is even possible.

It's an interesting question. On one hand, a practical person could argue: "Well, this is what my company looks like, and these are the types of people who fit with our culture and make it, so be it. Find me these types of candidates."

VS

"I don't like the way may company culture looks, I would rather it was more diverse. This mono-culture is potentially leaving money on the table from not being diverse enough. I'm going to take my current employees, chart their career path, composite them (maybe), tweak some of the ugly race and gender stats for those who were promoted, and feed this to my hiring algorithm."


> the training data needs to be a fictional, idealized set, and not based on real world data that already has bias slants built in

That'd be great, but in this case (as in most ML cases) the idea is not "follow this known, tedious process" but instead "we have inputs and results but don't know the rules that connect them, can you figure out the rules?"

> this is what my company looks like

In tech hiring, no one wants the team they have...they want more people but without regrets (including regretting the cost)


> You're assuming that it was based on whether or not the candidate was hired. Nobody with an iota of experience in machine learning would do something like that. (For obvious reasons: you can't tell from your data whether people you did not hire were truly bad.)

It's a fine strategy if all you're trying to do is cost-cut and replace the people that currently make these decisions (without changing the decisions).

I agree that most people with ML experience would want to do better, and could think of ways to do so with the right data, but if all the data that's available is "resume + hire/no-hire", then this might be the best they could do (or at least the limit of their assignment).


A reasonable assumption but, in practice, false. Many companies believe (perhaps correctly) that their hiring system is good. Using hiring outcomes would be a reasonable dependent variable, especially if supply is lower than demand, performance is difficult to measure, or there’s a huge surplus of applications which need to be cut down to a smaller number of human assessed resumes.


Men are promoted quicker, and more often, than women.


There was a company meeting one year at Amazon when they proudly announced that men and women were paid within 1-2% of each other for the same roles. It completely missed the point which you raise.

I want to see reports of average tenure and time between promotions by gender. I suspect that the reason we don't see those published is that the numbers are damning.


Or possibly no one did a study of sufficient size that passed peer review.

It's also not hard to make the pay gap 1-2% just like it's not hard to make it 25% (both values are valid). Statistics is a fun field. Don't trust statistics you didn't fake yourself.

Amazon could easily cook the numbers to get to 1-2%; I doubt anyone checked whether the process for determining that number is unbiased and fair and accounts for other factors.


I didn't write anything about promotions. I mentioned tenure and performance reviews.

If you had a way to accurately predict that some company would systematically downrate you and eventually fire you or force you to quit, would you want to interview there? If you were a recruiter in that company and could accurately predict the same, would it be ethical for you to hire the candidate anyway?

This is not to say that I approve of blindly trusting AI to filter candidates, but the overall issue isn't nearly as simple as many comments here make it out to be.


Does it correlate with performance?


And how is performance measured?

Aggressive behavior is considered admirable in men, and deplorable in women. Many women I know have noted comments in their performance reviews about their behavior - various words that can all be distilled to "bitchy".


And then you take your experience, connections and expertise to leave and start your own company where none of this happens.

But is that what we see in real life?

I don't have data or sources at hand, but I'd bet top dollar that the F:M ratio is even more lopsided in favor of men among founders than among employees[0].

[0] Not using the word CEO, because that can be appointed for somewhat arbitrary reasons.


citation needed


Downvoters, please explain. The statement makes sense when you look at tech, where there are more men than women. So it may appear that more men are getting promoted compared to their female counterparts. But that doesn't mean men >>> women; it's just statistics at play.


> For obvious reasons: you can't tell from your data whether people you did not hire were truly bad.

Many companies are fine with false negatives in their hiring process. Better to pass on a good candidate than hire a bad one.


This also means that if you hire unqualified women only because they are women, then your AI will have bias against women.


This seems to assume that performance evaluation is itself free from bias.


This doesn’t seem to be a reasonable conclusion. There is no reason to assume the AI’s assessment methods will mirror those of the recruiters. If Amazon did most of its hiring when programming was a task primarily performed by men, and so Amazon didn’t receive many female applicants, they could be unbiased while still amassing a data set that skewed heavily male. The machine would then just correctly assess that female resumes don’t match, as closely, the resumes of successful past candidates. Perhaps I’m ignorant about AI, but I don’t see why the number of candidates of each gender shouldn’t increase the strength of the signal. “Aggressiveness” in the resume may be correlated but not causal. If the AI was fed the heights of the candidates, it might reject women for being too short, but that would not indicate height is a criterion Amazon recruiters use in hiring.


This is a subtle point but worth stating -- AI does not mirror or copy human reasoning.

AI is designed to get the same results as a human. How it gets to those results is often very, very different. I'm having trouble finding it, but there was an article a while back trying to do focus tracking between humans and computers for image recognition. What they found was that even when computers were relatively consistent with humans in results, they often focused on different parts of the image and relied on different correlations.

That doesn't mean that Amazon isn't biased. I mean, let's be honest, it probably is; there's no way a company this large is going to be able to perfectly filter or train every employee and on average tech bias trends against women. BUT, the point is that even if Amazon were to completely eliminate bias from every single hiring decision it used in its training data, an AI still might introduce a racial or gendered bias on its own if the data were skewed or had an unseen correlation that researchers didn't intend.


The whole aim of the AI was to make decisions like the recruiters did -- that is explicitly what they were aiming to do. It might be worth reading the article as it addresses your two ideas (the aim of the project and the fact that the training set was indeed heavily male).


Hey. I did read the article. It doesn’t support the conclusion OP is drawing. The aim of the AI is to “mechanize the search for talent”. It doesn’t care to, nor have any means to, make decisions “like the recruiters did”. Obviously machines don’t make decisions like humans do. They’re trying to reverse engineer an alternate decisions making process from the previous outcomes.


> The aim of the AI is to “mechanize the search for talent”. It doesn’t care to, nor have any means to, make decisions “like the recruiters did”.

This is why AI is so confusing. All "AI" does is rapidly accelerate human decisions by not involving them, so that speed and consistency are guaranteed. It is not a replacement for human decision making; it is a replacement for human decision making at scale.

If we can't figure out how to do unbiased interviews at the individual level, then AI will never solve this problem. Anyone that tells you otherwise is selling you snake oil.


> If we can't figure out how to do unbiased interviews at the individual level, then AI will never solve this problem. Anyone that tells you otherwise is selling you snake oil.

I wonder to what extent people want to solve it and perhaps more importantly whether or not it can be solved at all...


This is all happening before the interview, even. The AI, as far as I can see from the article, was just sorting resumes into accept/reject piles, based on the kinds of resumes that led to hire/pass results in the hands of humans.


So the recruiters may or may not have been biased, but if the previous outcomes were (based on the candidate pool) then the AI is sure to have been "taught" that bias.

Unless Amazon is willing to accept a) another pool of data or b) that the data will yield bias and apply a correction, the AI is almost guaranteed to be taught the bias.


Yep, I agree a skewed dataset is not good for the task of correcting an unequal distribution and is likely to maintain or even increase it.


Aren't the "previous outcomes" past hiring decisions though?


Yes, but you have to know what pool you started with. As an overly simplistic example, if a bank used historical mortgage approval records from primarily German neighbourhoods to train an AI, it might become racist against non-Germans, even though that's just an artifact of the demographics of the time. I think it just shows how not ready for prime time AI is.


A control question to check whether you're making a certain intellectual mistake.

The data set will also have skewed heavily against people named "David". Probably only ~1% of the successful applicants.

Would you also expect the machine to be biased against candidates named David?


What if people named David got hired 10/100 times in the past but people named Denise only got hired 6/100 times?

Hiring practices as expressed in the data get picked up by the machine and applied accordingly. As such, David is predicted to be a better hire than Denise.

This is not about "David" vs. "Denise", but how the machine learning process will aggregate and classify names. David and David-like names will come out on top while obscure names it has no idea how to deal with (0/0 historically) will probably be given no weighting at all.

Sorry "Daud!" Our algorithm says David is better.


I would expect the AI isn't fed names as an input, but rather things Amazon wants to weigh like experience, awards and education.


This isn't correct. The worry isn't that a single group is small; it's that a single group is large (basically, if one group is large, you can get by ignoring all the smaller groups).

This is most common with binary problems.


I'm going to make a supposition here, but one of the first things I think they did (especially when trying to fix the AI) was to balance and normalize the data so that there would be no skew between the number of men's and women's records in the data set.

If my supposition is correct, then the other parameters are at fault here, of which gender and the language used stick out.

Another supposition I'm going to make is that they even removed gender from the data set so that the AI didn't know it, but cross-referencing still showed "faulty" results due to hidden bias that the AI can pick up, like the language used.


If they did normalize the data across gender, then you’re correct it may indicate bias on Amazon’s part. But I don’t know about that. The article doesn’t provide enough information. I think it should be obvious, to Amazon as well, that if you want to repair inequality in a trait (gender) you can’t use an unequal dataset to train a machine to select people. I just don’t think it follows that machine bias must mirror human bias.


Did you read the article?

(Serious question. Not intended as snark. Genuinely wondering if I'm missing some deeper current in your post?)


Twice. It doesn’t support OP’s conclusions.


"they could be unbiased while still amassing a data set that skewed heavily male" - this sounds like a self contradiction


Is the NBA biased against white guys?


I don't know - is it? What is the difference between bias and inferring information from skewed data?


Bias, to me, is the active (perhaps unconscious) discrimination based on a trait. Skew is an unequal distribution of that trait as a result of bias in favor of other traits, historical circumstances, or anything other than discrimination.

The NBA wants good basketball players. If they happen to be white, I imagine they'd draft them with equal enthusiasm as any other player. So no, it isn't.


Do you have some information not present in the article? There seem to be some assumptions on the training process in your comment that are not sourced in the article.

I'll don my flak jacket for this one, but based on population statistics I believe a statistically significant number of women have children. A plausible hypothesis is that a typical female candidate is at a 9-month disadvantage against male employees and that that is a statistically significant effect detected by this Amazon tool.

Now, the article says that the results of the tool were 'nearly random', so that probably wasn't the issue. But just because the result of a machine learning process is biased does not indicate that the teacher is biased. It indicates that the data is biased, and bias always has a chance to be linked to real-world phenomenon.


Does Amazon give 9 months of parental leave, or are you saying women employees are disadvantaged for their entire pregnancy?


Ah. Sorry, silly me. A quick search suggests 20 weeks, so ~4.5 months.

Obviously I don't have much specific insight, so maybe there is a culture where they don't use leave entitlements. But if there are indicators that identify a sub-population taking a potentially 20-week contiguous break, it is entirely plausible that it would turn up as a statistically significant effect in an objective performance measure. All else being equal, a machine learning model could pick up on that.

The point isn't that it is the be-all and end-all, just that the model might be picking up on something real. There are actual differences in the physical world.


The term "AI" is over-hyped. What we have now is advanced pattern recognition, not intelligence.

Pattern recognition will learn any biases in your training data. An intelligent enough* being does much more than pattern recognition -- intelligent beings have concepts of ethics, social responsibility, value systems, dreams, and ideals, and are able to know what to look for and what to ignore in the process of learning.

A dumb pattern recognition algorithm aims to maximize its correctness. Gradient descent does exactly that. It wants to be correct as much of the time as possible. An intelligent enough being, on the other hand, has at least an idea of de-prioritizing mathematical correctness and putting ethics first.

Deep learning in its current state is emphatically NOT what I would call "intelligence" in that respect.

Google had a big media blooper when their algorithm mistakenly recognized a black person as a gorilla [0]. The fundamental problem here is that state-of-the-art machine learning is not intelligent enough. It sees dark-colored pixels with a face and goes "oh, gorilla". Nothing else. The very fact that people were offended by that is a sign that people are truly intelligent. The fact that the algorithm didn't even know it was offending people is a sign that the algorithm is stupid. Emotions, the ability to be offended, and the ability to understand what offends others, are all products of true intelligence.

If you used today's state-of-the-art machine learning, fed it real data from today's world, and asked it to classify people into [good people, criminals, terrorists], you would end up with an algorithm that labels all black people as criminals and all people with black hair and beards as terrorists. The algorithm might even be the most mathematically correct model. The very fact that you (I sincerely hope) cringe at the above is a sign that YOU are intelligent and this algorithm is stupid.

*People are overall intelligent, and some people behave more intelligently than others. There are members of society that do unintelligent things, like stereotyping, over-generalization, and prejudice, and others who don't.

[0] https://www.theverge.com/2018/1/12/16882408/google-racist-go...


We are pattern recognition machines. If you consider pattern matching unintelligent, then machines are more intelligent than we are, since they rely more on logic than pattern matching.

For the black man = gorilla problem, an untaught human, a small child for instance, can easily make the same mistake. Especially if he has seen few black people. And well educated adults can also make the mistake initially, even if they hate to admit it.

However, in the last case, a second pattern recognition happens, one that matches the result of the image classifier with social rules. And it turns out that mixing black men and gorillas is a clear anti-pattern, and anything that isn't certain is incorrect.

Unlike us, computer image classifiers typically aren't taught social rules, so like a small child, they will say things without a filter. That will probably change in the future for public-facing AIs.

Not stereotyping is not a mark of intelligence, it is a mark of a certain type of education. And I don't see why it couldn't be done with the usual machine learning techniques.


> social rules

I claim it isn't just social rules -- part of that is empathy, which is a manifestation of intelligence that I think is beyond pattern matching.

If a white person were mislabeled as a cat, it would be a cute funny mistake. Labeling people as dogs, not so much. Gorillas, even worse. Despite that gorillas are more intelligent and empathetic than cats. Oh, and bodybuilder white celebrity boxing champion as a gorilla, may actually be okay. The same guy as a dog, no. It makes no sense to a logic-based algorithm. But humans "get it".

A human gets it because they could imagine the mistake happening against them, with absolutely zero prior training data. You don't need to have seen 500 examples of people being called gorillas, cats, dogs, turtles and whatever else.

If you want to say that a hundred pattern recognition algorithms working together in a delicate way might manifest intelligence, I think that is possible. But the point is one task-specific lowly pattern recognition algorithm, which is today's state of the art, is pretty stupid.


> We are pattern recognition machines.

That's just one function. That's not the entirety of what the brain (and body) does.

> If you consider pattern matching unintelligent,

What do you think pattern matching IS? Round ball, round hole does not require intelligence. It requires physics. The convoluted Rube Goldberg meat machine we use to do it doesn't change what it is. Making choices of will and approximations are more signs of intelligence, imo.


"a worldview built on the important of causation is being challenged by a preponderance of correlations. The possession of knowledge, which once meant an understanding of the past, is coming to mean an ability to predict the future." - Big Data (Schonberger & Cukier)

so, knowledge now is allegedly possession of the future, rather than possession of the past.

This is because the future and past are structurally the same thing in these models. Each could be missing, but re-creatable links.

Also, conflicting correlations can be shown all the time. If almost any correlation can be shown to be real, what's true? How do we deal with conflicting correlations?


They didn't scrap it because of this gender problem. That wasn't why it failed. They scrapped it because it didn't work anyway.

Note the title is "Amazon scraps secret AI recruiting tool that showed bias against women" not "Amazon scraps secret AI recruiting tool because it showed bias against women". But I guess the real title is less clickbaity - "Amazon scraps secret AI recruiting tool because it didn't work".


The same AI should be applied to hiring nurses and various other fields which show population skews in gender, as well as fields which are not skewed. I'd be curious as to the outcome.


It failed because rationally interpreting gender data leads to politically incorrect conclusions.


How did you come to the conclusion that gender was the most important factor, rather than skills or aggressiveness?


I don't think that's what the parent was claiming; the parent says "gender and aggressiveness" were most important, and that the skills listed on the resume provided such an unclear signal for actual hires that they were not picked up by the AI.


Without regard to this particular issue, you also have to concern yourself with the bias of the person determining if the AI has a bias.


> The AI becoming biased tells that the "teacher" was biased also.

That doesn’t follow.


Someone had to decide on the training material. Note that saying that they had bias does not mean that they acted with malicious intent; most likely they didn't. That doesn't change the outcome, however.


Thanks for spelling this out, I think this is exactly how to look at this.


Hold on here. This article seems to have buried a pretty important piece of information wayyy down in the middle of the text.

> Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.

Granted, an article isn't going to get as much attention without an attractive headline, but that seems a far more likely reason to have an AI-based recruiting recommendation system scrapped. The discovery of a negative weight associated with "women's" or graduates of two unnamed women's colleges is notable, but if it's tossing out results "almost at random" then... well, there seem to be bigger problems?


The media is no longer reporting things. You can't make money with reporting. The media is actively creating narratives, and one of the narratives that people are fed nowadays is that women are victims.

Men and women are pitted against each other.

Due to the way the media has evolved people consume their own biases and most often just read the headlines.


I mean, yeah you're not wrong. I try not to be too cynical about the whole thing even if I think the narrative is suspect. Yes women and minority representation in tech is a potential issue but I really want to know more about the AI recommendation system for potential hires. Especially if it was giving out spurious recommendations.

It's Amazon; I can't imagine how many millions went into something like that. We'll almost certainly not get a postmortem, but it's definitely intriguing.


No longer? Read some history, start with yellow journalism and muckraking.

Learning history will teach you more about things today than any news source.


The media has always been about narratives.


The article leaves a lot open to interpretation, including what was expected of the tools... that could range from providing some beneficial hints to replacing all hiring. As you rightfully remark, having the tool rank candidates for highly specific jobs and their tech requirements would be a great achievement. But it is also a big challenge, so they probably were aiming at something more basic initially. Building models for broad categories like "manager" or "box packer" and hoping they will detect soft skills or work ethic seems more achievable. Thus the additional star rating that can be used for hiring and provides some value.

Now, having known limited capabilities isn't great. But those can and will be worked on. Unknown/unexpected biases won't, which makes finding them important.


(Disclaimer: I am an Amazon employee sharing his own experience, but do not speak in any official capacity for Amazon. I don't know anything about the system mentioned in this article.)

I am a frequent interviewer for engineering roles at Amazon. As part of the interview training and other forums, we often discuss the importance of removing bias, looking out for unconscious bias, and so on. The recruiters I know at Amazon all take reaching out to historically under-represented groups seriously.

I don't know anything about the system described in the article (even that we had such a system), but if it was introducing bias I'm glad it's being shelved. Hopefully this article doesn't discourage people from applying to work at Amazon - I've found it a good place to work.

To say something about the AI/ML aspect of the article: I think as engineers our instinct is "Here's some data that's been classified for me, I can use ML/AI on it!" without thinking through all that follows, including doing quality assurance. I think a lot of focus in ML (at least in what I've read) has been on generating models, and not nearly enough focus has been on generating models that are interpretable (i.e., give a reason along with a classification).


It seems like they did think it through, though? And that's why it's being shelved. I don't really see what the story is here. It seems like the whole process worked exactly as it should - Amazon tried something, it had some unintended consequences, they caught it, and shelved it.


Agreed. There is no story here.


The story is, some ML researchers did their job properly and detected ethical issues before they became a problem. That's more rare than you'd think.


Yes. Software engineers taking ethics seriously and not letting technical enthusiasm blind them is news, not normality.


I like the counterpoint, but it's not clear that the ML researchers were the ones who pulled the plug on the project.


This will be unpopular but I don't care. What is the evidence that the source data for this 'AI' is biased because the men it came from did not want to hire women? Is there a reserve of unemployed non-male engineers out there? If so what evidence is there of that?

Technical talent is both expensive and a rare commodity for tech companies. The non-male engineers I've worked with have always been exceedingly competent, smart, and their differing perspectives invaluable. If there was an untapped market of engineers you'd better believe every tech company would be taking advantage of it.


Yeah - I'm not even expressing an unpopular opinion, just asking a (leading) question: where are all these women who are chomping at the bit to get into _technical_ positions like programming but find themselves being turned away by biased recruiters? I've never even seen somebody _claim_ that they were a woman who couldn't find a tech job, just people wondering where all the women were.


Last place I worked at had to invest in additional resources to hire several females because the office was mostly male and applicants were overwhelmingly male. We did eventually find a few great female applicants, but it took a lot of work and a lot of time dedicated specifically to that goal.

There does not exist some magical undiscovered pool of talented female engineers that are being turned away by biased recruiters. It's hard enough to find any sort of talented engineers regardless of other factors. Shit, it's not uncommon to recruit from other countries and cover relocation costs these days.


Just being female is a qualification at your workplace?


I worked at a company that prided itself as a meritocracy and we produced phenomenal value for our customers. We eventually got purchased by a huge corporation, which took over our hiring processes, and told us we could no longer assess a candidate's technical skill level. Soon after that, our best employees started leaving and they were being replaced with people that improved our diversity numbers, but had very little technical capability. If you were female, black, or LGBT, you were hired on the spot. Some of the female candidates were good, but the majority of the new hires were a dead weight and the productivity of the office tanked. I am all for equality, but it's sheer nonsense to hire based on gender, race, or sexual orientation.


Not the parent, but in my current company it definitely helps to be a woman: a bigger referral bonus for referring a diversity candidate (post joining), and performance targets for managers related to hiring and promoting diversity candidates.

Please note there is no active discrimination against men, but the preference in some cases would be to hire a woman. Gender is the only criterion for diversity here.


How is a preference for hiring women not discrimination against men?

Would a preference for hiring men be discrimination against women?


a rather curious question... what prompted it?


The person they're asking said that they dedicated extra resources to try and hire female candidates because the office was all males. That implies that they were hiring specifically for gender, not for any specific skillset.


The parent post? Evidently, for whatever reason, male applicants and coworkers are a problem for their organization, so they go out of their way and invest in "additional resources" to hire female candidates from the hiring pool. They're not hiring on merit, but on gender.

How exactly do you hire for female candidates without discriminating against the "overwhelming" body of male applicants? I would be really interested to know how this goes on behind the scenes: do you have open positions but cherry pick female candidates while disregarding male candidates from the get-go?


If it's anything like conferences trying to increase diversity numbers, they don't specifically target one gender, they focus their recruiting efforts on sites and locations that typically have a much higher representation, like a girls college, or a girls hacker group. That way they aren't actively saying "we won't accept male applicants", but they ensure that 90% of the applicants will be from the target demographic.


>We did eventually find a few great female applicants, but it took a lot of work and a lot of time dedicated specifically to that goal.

How is this not sexism? Replace female with male in the above quote and tell me it isn't sexist.


Diversity is generally accepted to be a positive thing. Targeting women specifically furthers that goal. Targeting men specifically furthers no goal.

So yes, sexism is not symmetric. Same with race.


How much money and time goes into hiring more female septic tank cleaners? Garbage men? Construction workers? Truck drivers? If sexism isn't symmetric then feminism isn't about equality.


You start looking in different places for your applicants.


Can you share a little more what you did specifically to find female candidates?


Not the person you responded to, but a few hiring managers at my office found that once they changed up the language in the job listing, it made a huge improvement in female applications. If your staff is predominantly male then your wording will skew a particular way that may not seem inviting. Glassdoor has a write-up on how to go about removing gender bias with links out to studies: https://www.glassdoor.com/employers/blog/10-ways-remove-gend...


They don't need to be currently unemployed. There are certainly enough women currently employed as programmers in the US to fill all the currently open Amazon positions, it's just it would be positively insane for Amazon to offer twice the salary and work-from-home to female candidates just to have prettier numbers.

Amazon (and a lot of other big non-diverse companies) are therefore hoping what is actually happening is that they are the women's first choice already, but turn them down for some reason, and that they would not have to change a thing about their work process to attract more women, except to start seeing them.

It's obvious why companies like thinking that way, and it's possible to some extent it's true. However, the fact is, if they're playing like this it's a zero-sum game and it is not actually going to improve the diversity numbers.

On my part, I've been wondering. If all these companies want to know where the women who can code are... why not just ask them? Why do you never see a "Female Programmer's Career Survey" with questions such as "On average, over the last ten years, how long have you been looking for a job?" "Would you accept a new job for a $5k raise?" "Have you ever dropped out of a hiring process because of sexism?" Take it out there in the open. Ask the real questions.


They all stopped joining up around 1985. https://www.npr.org/sections/money/2014/10/21/357629765/when...


What came first? Was the cast of Revenge of the Nerds all male because nerds were males and art was depicting reality, or did only men start becoming nerds because this movie had an all-male cast? I get being envious of people who had a head start with computers and feeling behind. I had zero coding experience before my first CS class in college. It sucked, but I grinded through it. It's relatively easy to catch up with determination. Probably going to be an unpopular opinion.


I'll start by saying that I agree that determination can overcome a late start, but familial and societal pressures can be really difficult to overcome. My wife comes from a small southern town and wanted to be a writer and college professor, but was constantly ridiculed and chastised by teachers and family because she is a woman, so they said she should just be a nurse, or get married and be a housewife. Her family refused to help with her college education, and her dad and mom would tell her that she should quit her masters program to cook and clean while I was at work. She didn't listen to them, but a lot of her female cousins do, and just get stuck in shitty marriages and dead-end jobs, never doing what they want.

Programming is also manufactured to be a "male" profession. It used to be that researchers were male, and programming was considered women's work, like doing data entry. When companies like IBM discovered that the best programmers tended to be anti-social males, and that females tended to leave careers earlier to start a family, big tech companies focused all their recruitment on males until programming became a "male" job. It's similar to how light beer used to be considered a woman's drink, but beer companies started running ads showing football players and other masculine figures drinking it, and now it's acceptable for anyone to drink.



I think you mean that boatloads more men started signing up and drowned out female recruitment which plodded along at the original pace.


It's taking a little longer than it should, but people are finally starting to realize that the actual reason there aren't as many women in software is because they've chosen not to be. They're wired differently and therefore have different interests.


I see this argument a lot but your exposition is less inflammatory than most. Please note this: when you tell a mixed-gender group of people that all women are "wired" to not like programming, you are telling the women who do like programming (which, let's be real, is most of them on Hacker News) that they're in effect not real women (or defective women, in an engineering sense). Or that they don't count.

Since you're presumably not a woman, and they are, they object to your seeming to be taking it upon yourself to tell them what a woman is.

It's not unlikely that most women don't want to be in software. Most men don't want to be in software! But hundreds of thousands of women are, in fact, working as programmers. Whenever tech news talks about hiring women, presumably, they're talking about hiring these women. Why pretend they don't exist?


"Since you're presumably not a woman, and they are, they object to your seeming to be taking it upon yourself to tell them what a woman is."

That idea is complete poison. In a debate where you're both completely uninformed, the anecdotal evidence of experience is relevant. In any discussion where reason, numbers, research are involved, the gender/race of the person making the argument is irrelevant to the argument. This current idea that only women can talk about female issues, only X about X is pre-enlightenment tribalism.

It presupposes conflict between the tribes too. If in my philosophy I believe that through reason and evidence I can understand your point of view and experiences, there's the possibility of agreement. If you really believe that I can never talk about issues affecting your tribe, or arrive by reason at an underlying truth, there's no point in talking. We might as well just fight to see whose tribe can impose power on whose.


I'm sorry, I don't follow how your argument (which is valid) is a response to mine. I said people tend to object to being imposed an identity by people who do not share it ("all women hate programming", "all Yankees eat too much"). (And to a lesser extent they also object to the same thing by ingroup members, but are less comfortable expressing it. If my parent was in fact a woman, her viewpoint would most likely still be pretty unpopular.)

Arguments being inflammatory does not make them invalid. It is fine to have conversations that include inflammatory arguments, if they are made politely, which parent did. But ignoring that the argument is inflammatory, and/or that the group the argument refers to are in fact intelligent people involved in the discussion that may have themselves an opinion, is lacking in empathy. Rhetoric is founded in empathy; that is why it is an art, and not merely a technique.


>you are telling the women who do like programming (which, let's be real, is most of them on Hacker News) that they're in effect not real women

woah what? no that is not at all what he's saying. If someone tells me "most american men like the NFL" and I don't like the NFL, I would be insane to take that as someone telling me "you're not a real man" and think they're trying to "tell me what a real man is." I can see how someone who is perpetually trying to be a victim might take such a hardline stance, though.


I hope you're actually an American who doesn't like the NFL so you can tell me how well the analogy follows.

The conversational equivalent using the NFL example would go something like this:

"Why are there no Americans at my favorite chess forum?"

"Americans like the NFL. They're just more into brute force and camaraderie, especially American men. Chess can't really appeal to them. I mean, back in the Neolithic, a modern day chess grandmaster, if he managed to not burn in the sun and see an angry cougar 15 meters away, would probably have died. American men are just closer to their nature. I have this cousin who's American, I have attempted for years to get him to play chess and no dice. Not during the season, anyway."

"I mean sure but there are American chess clubs? Some Americans like chess? Surely the cougar thing is not relevant to my original scope?"

Now, you're a chess-loving American. You're not at the first poster's forum because you can't be on all the forums in the world, and it's true that love of chess is not exactly common in the US. However, how would you feel reading the description of how apparently literally everyone else in America loves the NFL? Would you feel proud to be American? Would you feel like the poster who made the NFL comment is likely to be an American man? Would you be more, or less, likely to read his further arguments on different or related topics, for example, his opinion on American politics?

While I'm sure that there are plenty of women who don't mind those arguments being made, and I think to an extent this forum would select for those women anyway, it remains an argument that is, in essence, 1) prone to being misinterpreted and 2) a little blind to your audience. (the subject is chess - you are talking to a chess fan - for all you know the chess fan also loves the NFL)

And those arguments are less effective than other arguments, for example, the kinds that 1) don't make assumptions of their audience and 2) are closely related to the topic.

(I realise this digression itself is totally off-topic and I'm sorry. I'm not interested in monologuing or haranguing the poster or anything. I hope it, and the little NFL parable, showed you and the people who usually make that argument without thinking about who reads it, a slightly different side, and that you may consider it if in the future you should have longer debates on relevant subjects with people whose opinion and background you don't know a priori. And thank you to anyone who read it.)


I reread this three times, and what a contrived way to look at the world. You are the one coming up with all these labels, and trying to project it into other people's arguments.

Let me play the same game: I know a black guy who is president. Does that mean all black men are political? What about the one who just wants to play sports. Will we now call him an athletic black man, instead of just a black man...

You can generate endless arguments like this depending on your choice of anecdote and label.


My summary of the most recent comments in this thread:

Dirlewanger: group X have property Y
arandr0x: members of group X without property Y might be offended by "group X have property Y"
courir: as a member of group X without property Y I'm not offended by "most members of group X have property Y"
arandr0x: <a reiteration of the previous comment>
ramblerman: <rhetorically> does saying a member of group X has property Y mean all members of group X have property Y?

I think you (ramblerman) have logically inverted the main claim, which is why it doesn't seem to make sense. Behaviour in line with arandr0x' comment seems perfectly reasonable to me - few people take well to poorly fitting generalisations.


> that all women are "wired" to not like programming

But nobody has ever said that. They say that women are more likely to be.


But there are fewer women in software now than there were 30 years ago. Are women today "wired differently" from their mothers?

Single-generation changes in behavior aren't genetic. They're social.


> But there are fewer women in software now than there were 30 years ago.

That's not true. Firstly, it's more like 40-50 years ago.

Secondly, there are far more women doing software development, but the gender ratio is dramatically different.

Thirdly, that's because male interest exploded with the advent of personal computing in the 80s.

Lastly, "programming" as a profession used to be regarded as an offshot of secretarial work, which was dominated by women.

The facts around women in STEM are polluted with a lot of bizarre narratives.


Okay, so TWO generations. Big deal. It still dismisses the "born that way" nonsense argument.

"'programming' as a profession used to be regarded as an offshoot of secretarial work, which was dominated by women". Which begs the question of why women dominated secretarial work (and still do), while as programming became a more respected and better paying profession, it became male-dominated.


> Okay, so TWO generations. Big deal. It still dismisses the "born that way" nonsense argument.

It really doesn't. Unless you seriously think punch card programming is the same as modern programming, or that the fact that the secretaries who did programming were women somehow provides data on the relative strengths and inclinations of women and men for programming work at that time.

Look, it's clear that you have no idea of the breadth and depth of data available on this subject, and a trite "sexism/oppression" narrative explains hardly any of it. For instance, the fact that as a nation becomes more egalitarian, the gender disparities in STEM increase, ie. Nordic countries have worse gender disparities than here, despite having less sexism, and oppressive countries like Iran actually have gender parity in STEM fields.

If you want to actually learn about this subject, I suggest reading: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.0018...

The fact is, there's good evidence that women are naturally less interested in STEM-like fields due to a well-known psychological attitude on things vs. people. That attitude explains facts like why medicine and law have achieved approximate gender parity overall, but surgery is still dominated by men, while pediatrics and family law are dominated by women.


Why are you assuming I have no idea of the data available? Because I question the default narrative?

I do find it interesting and noteworthy that gender disparities have grown in STEM while shrinking in other fields. But I believe my explanation accounts for that - that STEM has become more prestigious, which draws men, which forces out women.

The "well known psychological attitude" is begging the question, which seems par for the course on responses here. Is this psychological attitude biological, or social? And if it's biological, how do we explain significant changes in professional proportions that have happened over a mere one or two generations? It seems like a very poor explanation for what you're asserting, contradicting your own stated facts.

If it's social, however, we're back to my explanation - as the prestige of formerly female-dominated careers rises, they become more attractive to men, to the point where men dominate them. It's a much simpler explanation, with no contradictions.


> Why are you assuming I have no idea of the data available? Because I question the default narrative?

Because you're throwing out wild, unsupported speculation to salvage your narrative, and the original post of yours to which I replied had at least 4 elementary factual errors.

> But I believe my explanation accounts for that - that STEM has become more prestigious, which draws men, which forces out women.

That's not an explanation at all. Why would prestige drive away women? Just because there are men there? Or you think men drawn to prestige don't want women around? Or you think men just flood into any field that has some form of prestige thus drowning out women? So then why aren't the careers they left suddenly dominated by women because all the men left for more prestige? And where are all these men coming from since we have rough equal numbers of men and women? Why are janitors and dangerous jobs dominated by men since those aren't prestigious?

The fact that you think this explains anything or is free of contradictions is frankly bizarre, and just reinforces my point that if you're really interested in this field, you need to read more and speculate less.

> The "well known psychological attitude" is begging the question, which seems par for the course on responses here. Is this psychological attitude biological, or social?

Likely both, since there's plenty of evidence of things vs. people in toddlers, and this innate preference no doubt gets reinforced and magnified.

In the end, your scoffing at the original poster and "subtly" implying that he's sexist for a remark that is actually well grounded in facts is exactly the problem with debating people on this subject.

Yes, there is sexism in STEM, just like there is in most other fields, but sexism didn't keep women out of medicine or law, they just pushed through and staked their claim. The fact that women haven't done this for STEM which is far less of an old boys' club already suggests something else is at play, and the fact that the same trends are seen across disparate cultures already suggests strongly there's a universal component.


Sexism kept women out of medicine and law for centuries. It's only very recently that this has changed. Women were not even admitted to Harvard Law School until 1950.

I do think there's a universal component, though, as sexism is seen across virtually all cultures.


> Sexism kept women out of medicine and law for centuries. It's only very recently that this has changed. Women were not even admitted to Harvard Law School until 1950.

You're equivocating. You know very well that the type of sexism that kept women from working in virtually all professions, including law and medicine, is not the type of sexism we're discussing now.


> Sexism kept women out of medicine and law for centuries.

Have you noticed how this is inconsistent with your prestige argument?


> that STEM has become more prestigious, which draws men, which forces out women.

Is it competition that is forcing women out, or men?

> as the prestige of formerly female-dominated careers rises, they become more attractive to men, to the point where men dominate them

What do you mean by dominated, is it the number of people, or is it something else?

Are you trying to say that once a career path becomes female dominated, men should stay out?


"Programmer" used to be the title that goes with using a keypunch to turn a flowchart into a deck to submit to the operator. That job had low status because it sucked, for the same reason that spending all day typing someone else's words sucked. Eventually we could afford to automate that job away. "Systems analyst" and "programmer/analyst" are the titles for independent design work we should be comparing to today's developers.


But you're not comparing the same thing generation to generation. The job of developer has changed massively: the number of developers, the expectations, the salary. Society has also changed, and not just in culture but in income distribution, etc.

If tomorrow we say that you have to do 30 chinups to be a waitress, and the job will involve regular fistfights then we count the number of waitresses by gender and say "it must be cultural", we're kind of missing the point. Or if we say "OK now waitresses make 200k and are respected" and watch the numbers shift.


"The job of the developer has changed massively".

What, pray tell, has changed that made the job more attractive to men, and less attractive to women? You need to be able to answer that question if you're going to make a causal assertion.


I wasn't around two generations ago to make the comparison, but I imagine that with the higher income has come much higher expectations that you'll be in the office 12 hours a day and on weekends. You also have much higher wealth in western nations, which correlates with a higher ability to seek jobs that fit your preferences. Back in the day most people didn't go to uni and had a much smaller choice of positions. There are hundreds of ways the world and the job are very different, and you're flipping the argument to say that I have to assert the one specific causal link. If you're proposing the argument "it's misogynist culture; as evidence, compare two generations ago", then it's more the case that you need to demonstrate that the conditions and the job are the same, for your link to be valid. Or that all the ways they're different are irrelevant, which they're just obviously not.


> Back in the day most people didn't go to uni and had a much smaller choice of positions.

Not the parent and I wasn't around either, but I think accessibility of education counters your point, not supports it.

More egalitarianism should in theory be more favorable to women.

Back then I imagine it was much harder to program without access to university computers and education materials.

More women get higher education than men compared to 35 years ago.

More incentives (monetary and otherwise), combined with lower barriers to entry should also be favoring the supposedly disadvantaged.

And yet the drop in the F-M ratio since the late 1980s has not been overcome, last time I checked.


More egalitarianism should be favorable to women /if/ you assume a priori that they are mostly disadvantaged through lack of access to education/resources, and the real expected outcome distribution is 50-50.

If you assume that there are underlying differences in interests and aptitude, more egalitarianism allows these differences to be expressed more since women are more free to eg. choose a career working with people, like medicine or law. http://www.thejournal.ie/gender-equality-countries-stem-girl... It also raises the bar for inherent aptitude to get into/(the top of) a career, since you're competing against a much wider pool of talent.

The point I was making to the parent was that his point "cross-generation drop in ratio proves it's cultural" doesn't hold up, because there have been many changes across those generations, you're comparing apples to oranges.


Nonsense. I don't know where you work, but where I work, 12 hour days and weekends are clearly not the norm. We put in 40 hour weeks like any other profession.

I first learned to program 35 years ago. It wasn't fundamentally different then. Hell, we still use programming languages that were in wide use 35 years ago, like C and the unix shell. The kind of thinking required hasn't changed.

So, based on my 35 years of experience, the conditions and the job are basically the same. So again, I challenge you - how is the job different now?


The parent conceded we're talking 50 years not 35 years, and my point wasn't only that the job is different but that society is different.

how is the job different? The pay is much higher and so are the entry requirements; that's the biggest difference. You don't get assigned to program punch cards as part of your secretarial role, you have to actively get educated and good to choose it as a career.

I think there are far more places now expecting crazy hours, but that's anecdotal; I don't have numbers on it. But the languages are different (mostly), the tooling is different, the deployment is different, the scale is different.


One big change is that many traditionally male high paying jobs like doctors, lawyers, etc. have become much more open to women. Some, like publishing, have become majority women to the same degree as Tech.

One very likely explanation is that many women who wanted to be independently financially successful had few choices other than tech back decades ago. Now they have many other choices.


> But there are fewer women in software now than there were 30 years ago. Are women today "wired differently" from their mothers?

But are they really? Do you have any data/reference to back that up? I was under the assumption that there are more people, both men and women, working as programmers than 30 years ago.


I don't think that's quite true, I think this is more of a nurture vs nature thing.

My father got me into programming, and my colleagues who are women also have a backstory with someone supporting their interest in development.


For me it was my grandfather, but yes, I also got into programming at least partially due to nurture, not nature.

Just teach kids to code, boy or girl. Not all of them will like it, not all of them will be good at it. But I think a lot more girls would be into it and good at it if they were introduced to it before college.

Tailor it to the kid's interests. My first programs were more socially oriented. When I was 5-6 years old, all my programs were made-up conversations with the computer, where your answers were stored and parroted back by the computer to show that it was "listening". Maybe a boy would have been less into that and more into something mathier, like LOGO instead of BASIC, but it was what was interesting to me at that age. The computer was a form of imaginary friend for an introverted kid like me.


Yup. Someone telling us that programming was something for us - not just being exposed to media and advertising and social pressures that continually suggest (and did even more so in the 80's and 90's) that computing was for socially inept males.


Oh, there are a bunch of us, even here in the SF Bay Area. Trouble is, we're older than 35, or don't have degrees from "top" schools, and/or don't have the "passion" for bizarre extended hiring rituals. I could staff an entire dev team with non-male people within a week.


> we're older than 35, or don't have degrees from "top" schools, and/or don't have the "passion" for bizarre extended hiring rituals

This hits home so hard.


So what do you do now? Btw some men in tech are also over 35 and tired of hiring rituals.


After 15 years of front-end dev, I now work in retail. Some of my other peers are scraping by with Uber/Lyft. Some are muddling through as housewives or substitute teaching.

And, yes, Bay Area tech hiring is needlessly hostile for men over a certain age as well.


Ageism, OK - but I still find it hard to believe that you can't find a job if you can code. Maybe competition or demands are especially high in the Bay Area?


I’m a bit confused... surely going through a needlessly bizarre hiring ritual is worth it compared to driving for Uber or being a substitute teacher?


I don't have any sort of degree beyond high school and I have been a programmer/engineer for almost 15 years. Do I work at top tech companies? No. But that doesn't mean people like us can't be hired in the industry.


SW engineering can be a pretty brutal job psychologically - there's a reason people are burning out. Interviewing is particularly bad.

I guess most men have thick skin or got lucky, so they don't see that, instead they think it's this dream job and everyone should partake in its wonderfulness.

I believe that the lack of women in tech is explained by societal bias against women and the nature of the job.


> I could staff an entire dev team with non-male people within a week.

I think the recruiting team at my company would very much be interested in speaking with you.


I agree with you, but I have to point out, because it's so common: The "this will be unpopular, but I don't care" preface is, I feel, about as damaging to the perception of whatever you're about to say as "I'm not racist, but". To make an effective point, I think you should avoid, as the first thing you say, painting yourself as an underdog brave enough to speak out by preemptively criticising your audience's reactions that haven't even happened yet. That's not to say you should never admit those observations - rather, I would reserve such broad criticism of people's opinions for a separate train of thought or conversation.


Also, the "this will be unpopular" seems like it inevitably turns out to be virtue signaling for the "un-PC" people who will then aggressively make the thing popular anyway.


> What is the evidence that the source data for this 'AI' is biased because the men it came from did not want to hire women?

One issue that keeps happening is an over-emphasis on CS-related questions. There are many great engineers I've worked with who didn't do a CS degree, and even though they are brilliant thinkers and talented engineers, too many times the interview question is "solve this problem using <pet CS 101 lesson, like red-black trees>".

And the number of people who are hired who can barely communicate effectively is still shocking. Very few interview questions focus on communication outside the technical realm.

So you can argue there is a bias in recruiting, simply because different people have different criteria for which traits and skills are best to look for - even though everybody has the same goal, hiring the "best".

I'd also caution about taking Reuters too seriously though. Seems that they've only focused on the gender issue, but this is the money quote:

> With the technology returning results almost at random, Amazon shut down the project, they said.


> One issue that keeps happening is an over-emphasis on CS-related questions

No. If you go down that path, then you are implying that women do in fact perform worse at CS-related questions. That's a much bigger can of worms than the bias being implicated here.


Hopefully we can at least agree that those questions are limited in effectiveness, and often have no actual relation to how good an engineer is. It varies of course.

Sometimes, they seem more like a secret handshake you need to memorize to get into the boys club than actually useful engineering. Who hasn't had to revise some of these before applying for a job 3 years out of college?

What it does do is effectively exclude applicants who didn't study CS, or who haven't heard of and memorized "cracking the coding interview".

Assuming `fake CS questions == good engineer` is a huge mistake, but one I keep getting downvoted for every time I mention it. Most rebuttals are usually something like "it's the best system we have", something I find unsatisfying.


There is an interesting tangent in this thread where we can wonder what it would reveal if "coding interview" type CS tests were administered along with standard IQ tests (or the application included SAT scores). Do coding tests predict work performance better or worse than an IQ test? If worse, are they merely "culture fit" bias filters meant to retain the ingroup? If better, is it because culture fit actually matters, or because there is in fact some CS-specific skill or set of assumed knowledge that matters in programming that goes beyond logic?

While I understand that using IQ tests as hiring predictors is itself a problem, I'm interested in the interplay in predictive ability between the two classes of tests. I think everyone would agree that any primarily intellectual timed test that was _less_ salient to work performance than an IQ test should be binned. What would happen to our interviews then?


Using an IQ test in order to hire is actually illegal. If only the laws were that simple..


It may imply this on a long enough timescale, but as it stands it implies more that there is historically a larger pool of capable female employees without CS degrees than there are candidates with them. Which is demonstrably true.


I don't understand how your example is evidence that the data came from men who didn't want to hire women.

These interview processes exist sure, and I personally find them idiotic, but is there evidence that they disqualify women more than men?


From this article/Amazon - no. From my personal experience, yes.

After we moved to a logic-based test, we were able to hire several more women from interesting disciplines including psychology, math, and biology. The tests involved technical problems written in a general way; for example, a thread-scheduling problem was rewritten to instead involve painters, rooms, and drying time. We were able to hire 4 women on a ~50 person team in a very short time, and it worked out pretty well.


"Untapped market of engineers" yes it exists. The majority of my female friends with STEM degrees ended up as high school teachers. I had several older people suggest to me that teaching should be my preferred career choice because it was more flexible than a programming job (wtf...)

"every tech company would be taking advantage of it" - nope, no one is. I don't know why but my guess is its hard to admit you're doing hiring wrong, hard to hire people who think differently than you, etc.


To note, there are more women than men in high school teaching positions, so what you're seeing might not be that STEM has a bias against women but that teaching has a bias for women (or any other out of a very big set of possible conclusions)

see: https://stats.oecd.org/Index.aspx?DataSetCode=EAG_PERS_SHARE...


Maybe being a high school teacher is simply more pleasant than working in tech to some people? I don't think your example shows there is an untapped talent pool.

Of course, in general, you can make a job more attractive (raise salaries, roll out red carpets, install slides...), and you will attract more people. That doesn't prove those people were an untapped talent pool.

Presumably there is a price that would make a high school teacher consider working in tech again. That doesn't imply companies should be willing to pay that price.


So why are the "some people" more often female than male?

I keep seeing all these explanations that are just begging the question.


Why did they choose being a high school teacher?

Are there specifics about hiring processes in tech that bias against or scare away female candidates?


Two comments:

1. The issue is certainly bigger than hiring. In the many years between birth and looking for a job, there are a lot of societal pressures that will impact what eventual careers people end up in.

2. Hiring managers are people. They are not perfect. They have biases. If someone expects an engineer to look, talk, and act a certain way, that can impact their decision making completely independent of the fact that they want to hire the best people for their company.

Bonus third point: I still see a whole lot of "We want to make sure that the hire fits on the team." This is completely natural, and comes with its own set of built-in biases.


Some reasons why I disagree with this polemic:

1. There's no reason to expect that these women will be unemployed - they just won't be working for Amazon. That's all we know. No point going looking for them.

2. You can't assign intent to hiring decisions made in the training data - there's no reason to believe that men (and why single them out?) "did not want to hire women". Maybe they did. Maybe they have no idea that they're biased - maybe the women making such hiring decisions are just as biased. We have no idea.

3. The evidence that the AI is biased, is that.... the AI is biased. Which means that the training data is biased. Why that is, is a great question - it may reflect unconscious bias in the hiring process, or more obvious old-fashioned biases. It may reflect that the model amplifies some minor bias in the training data and turns it into something much bigger. We don't know.

So yeah, it's biased - the question is why.


"The non-male engineers I've worked with have always been exceedingly competent, smart, and their differing perspectives invaluable" - that is really an (anecdotal) evidence that there is a bias indeed. If the recruiting was all unbiased than the quality of existing male and female force would be the same - if female workers are of higher quality than it means that they need to pass higher requirements.


I didn't mean to imply the males I worked with weren't equally good. I was simply pointing out that, in my (anecdotal) experience, I couldn't see any reason non-males couldn't do the job well.


So everyone you have worked with has always been exceedingly competent and smart?


Obviously not. Every time I've worked with someone incompetent they tend to get fired rather quickly and I don't work with them anymore. As far as female vs. male goes I've worked with way more males than females and I just haven't happened to work with any females that have been fired for incompetence. Perhaps my level of judgement on other's performance is too lax but I tend to focus on my own performance and less on that of others. That's management's job, not mine.


So there were a few incompetent males, no incompetent females - but you believe that it is just a statistical fluke. It might be - but it is still evidence for the bias, just very weak. I noted it because it seemed that you used it as an argument against bias - which it isn't.


This is my anecdotal experience. Obviously, other people have other experiences and the reality of the whole is probably different. The simple fact that there are more males than females in a job is not evidence of a bias perpetuated by sexist men.


Another corollary should immediately follow: If an average woman in the industry is that much better than an average man, where are all the female-only companies?

Does nobody want money in a capitalist society?


How much is 'that much'? The effect is probably small but, what is more important, it is not about a shifted centre of (a Gaussian) distribution - but rather about a higher threshold usually applied to women.

This is of course very much dependent on the distribution shapes and I am too lazy to make a thorough analysis - but:

Let's assume that on average females were 10% more efficient programmers - but with the effort to find one female programmer you can find 10 male programmers. How much more effort do you need to find a 10% better programmer - twice as much as for the average one? Even if it were 8 times harder, it would still make more sense to look only for men than only for women. Of course the optimal way would be to be unbiased and look for any gender.
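To put rough numbers on that intuition, here's a minimal back-of-the-envelope sketch; the Gaussian quality distribution and its N(100, 15) parameters are purely assumed, as are the 10%/10x figures above:

    from statistics import NormalDist

    quality = NormalDist(mu=100, sigma=15)        # assumed quality distribution, illustrative only

    p_average_or_better = 1 - quality.cdf(100)    # 0.50
    p_ten_pct_better    = 1 - quality.cdf(110)    # ~0.25

    print("candidates screened for an average-or-better hire:", round(1 / p_average_or_better, 1))  # ~2
    print("candidates screened for a 10%-better hire:", round(1 / p_ten_pct_better, 1))             # ~4

So under these assumptions a 10%-better hire costs roughly twice the screening of an average-or-better one, which is why the 10x difference in pool size dominates the comparison.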


Sorry for the late reply, I was away from civilization.

I think that depends on what your hiring process looks like in general. You might, for example, apply a benevolent (from a certain POV) form of discrimination and simply start filtering applicants with something like a naive 20-line Python script that matches applicant names against female names from a dictionary and pushes them to the top of your applicant stack, so to speak.

And there are less tangible or directly measurable, but nonetheless important benefits to hiring women for a business: You can get free publicity and marketing if you run a successful women-only shop, there is a significant demand in the liberal media for female success stories and you can ride that wave.


An untapped market of engineers would be tapped...

...if and only if...

...there were no other factors at play that cause that market to remain untapped.

For further rational thinking, consider this. If there's a bias, it doesn't mean women won't get hired. It just means they won't get hired for the best positions. Everyone else gets Amazon's cast-offs.


If there was an untapped market of engineers you'd better believe every tech company would be taking advantage of it.

That's exactly the same argument used to justify every regressive policy: if X were true, then rational action Y would happen.

But that's the point of racism and sexism: rational action Y doesn't happen, due to the -ism.


Do you really think white men care so much about keeping down other groups of people that they would prioritize it over making more money and having more access to good workers? That sounds like a conspiracy theory to me.


The whole point being made is that prejudice/bigotry is not rational. Just look at the phenomenon of redlining. Black home buyers from certain areas were absolutely prevented from acquiring mortgages even though it would have been a source of profit for local banks. There are people out there who won't hire female candidates in technical fields due to the misguided belief that 'women aren't good at math/science'.


It doesn't have to be be deliberate.

Unconscious bias is a thing.


It doesn't need to be deliberate or conscious. It can be incidental.

Or maybe women just aren't as smart as men.


Nobody thinks that. You're way outside the Overton window.

Why is the health care field heavily biased in favor of female nurses and doctors? Are women smarter than men when it comes to biology/anatomy?


You're right, nobody thinks men are smarter than women. And it isn't true.

So, let's think about why we see gender roles in employment. Why are there so few women software engineers? One possible explanation is that women just aren't smart enough. If you don't believe that (and I don't), then you need another explanation. Maybe it's because of sexism. But if you don't want to believe it's sexism (as the OP implied), then what is it? They're not too dumb, and the hiring process isn't sexist, so why? And that's where hands come up empty.

That leads to nonsense like the person on this thread who said women are "wired differently", which presumably makes them less suitable. Which is just a polite way of saying women are too dumb to program, without facing the reality that that's exactly what it means.


> And that's where hands come up empty.

Except they're not, they're only empty if you haven't done any reading in this field.

> That leads to nonsense like the person on this thread who said women are "wired differently", which presumably makes them less suitable.

That was your supposition, not the only interpretation of those words. In fact, the weight of the evidence seems to support his statement, but similar to Damore, people like you are just fond of attacking reactionary strawman interpretations of the words actually employed.

> Which is just a polite way of saying women are too dumb to program, without facing the reality that that's exactly it means.

No it's not. "Wired differently" can mean many things, only one of which refers to competence.


> And that's where hands come up empty.

Maybe anti-male sexism prevalent in the health care and education fields is causing women to prefer those fields.

Fix the sexism in health care/education. Elementary teachers should be 50% men. Nurses should be 50% men. Instead those fields are 90%(!) women! That is a HUGE level of bias and discrimination


Possibility 1: Female-dominated fields discriminate against men.

Possibility 2: Those fields are female-dominated because they can't get into male-dominated fields.

So what do the pay and prestige look like for female fields, vs male fields? Well, take medical. Nurses (low prestige, low pay) are >90% female. Doctors (high prestige, high pay) are about 70% male.

This suggests to me that there's indeed a huge level of bias and discrimination, but not in the way you think.


Possibility 1: Male-dominated fields discriminate against females.

Possibility 2: Those fields are male-dominated because they can't get into female-dominated fields.

Men do not work as teachers because the media has painted men as "sex crazed". Most mothers would be uncomfortable with having a male 4th grade teacher for their daughter.

Many women would be uncomfortable having a male gynecologist or a male nurse helping them deliver their baby.

> Doctors (high prestige, high pay) are about 70% male.

Sorry but this breaks your narrative: 60% of new MDs each year are female. However: female MDs are more likely to quit the profession or go part time in order to raise kids. Again, this might show anti-male discrimination because it is not socially acceptable for male doctors to quit work to stay home with the kids.

---

The above suggests to me that there's indeed a huge level of bias and discrimination, but not in the way you think.


You have a number of issues with your narrative. "Quit the profession or go part time in order to raise kids". So what other reasons do women have for quitting the profession, other than because men are too victimized to be stay at home dads?

-- edit: fwiw, I googled stats. According to the American Association of Medical Colleges, 2017 was the first year ever that female medical school enrollment was greater than male medical school enrollment. I also went to graduation by year as far back as 2002, and it has always been more men than women. So yeah, your statistics are bullshit. Care to offer a source? --

And mind you, being a stay at home parent is considered a low-prestige, low-pay role. To the extent that it's discouraged for men, that's a result of a sexism that puts men in a dominant role and demeans them for doing "women's work".

The idea that men aren't teachers because the media paints them as sex-crazed is absurd. The gender disproportion of teachers existed long before the media mentioned such things at all. And you offer no evidence whatsoever for the assertion.


> Men do not work as teachers because the media has painted men as "sex crazed". Most mothers would be uncomfortable with having a male 4th grade teacher for their daughter.

> Many women would be uncomfortable having a male gynecologist or a male nurse helping them deliver their baby.

And what is your opinion of the above bit of my previous post (since you avoided that in your answer?)


My opinion is it's not worth deigning to answer.


The way you're ranking occupations has an implicit bias. Let's rank them for work/life balance. Nurses are busy and work long hours, but when the work day is over, they go home until the next shift. Doctors go home, and possibly get paged to come right back.

Is it possible men and women weight values differently when selecting occupations?


> So what do the pay and prestige look like for female fields, vs male fields?

What is the payoff for pay and prestige and is it the same between genders?

I think the overwhelming evidence suggests it is not the same and that women value different things than men.


> And that's where hands come up empty.

there seems to be a presupposition here that the 'natural' proportion of women software engineers is 50%.


There's a presupposition that the natural distribution of intelligence is gender-neutral. Which suggests that the unequal distribution of software engineers by gender has a cause other than intelligence.

So what is the cause, then? Is it biological, or social, or random chance? "Random" doesn't seem likely, especially given how many other professions are male-dominated, and the relative economic and social power of those roles, compared to female-dominated professions.

"Biological", if it doesn't map directly to intelligence, needs another cause - something that can be measured. Do you have a suggestion for this? I don't.

"Social" is the most likely reason, but how is "social" different from "discrimination"? How do you define a social cause for men dominating the industry that can't be readily interpreted as discriminating against women?


is there a reason why you're so intently focused on the metric of intelligence here, as if it's the end-all-be-all of psychological factors?

I work in personality psychology research, so this whole IQ-centric line of reasoning is very dubious to me. There are many other influential psychological factors involved in people's lives that aren't (as far as we know) a direct result of nurture, and which, taken together, often make a more significant contribution to people's lives than their score in the single dimension of IQ. Learning disabilities and affective/mood disorders are a big example of this, and personality traits are just as impactful in how a person's life unfolds, regardless of intelligence.


It doesn't need to be IQ-based. I'm dubious about any sort of "genetic" argument for why some fields are dominated by men, and others by women. The shift in programming from primarily women to primarily men is evidence for that, imho - if the leanings are genetic, why a change over the course of one or two generations?


>if the leanings are genetic, why a change over the course of one or two generations?

A trait not being the direct result of nurture does not imply it's the result of a traditional, long genetic process, and this is something that we're only just beginning to scratch the surface of with epigenetics, so it's unlikely that such questions will get definitive answers anytime soon. That being said, the observation that a trait may be determined at birth only suggests that the trait is heritable, but not that it's genetic; those are two separate concepts, and heritability allows for much more variation from generation to generation, such as the case of children of immigrants from poor countries generally being taller than their parents when they're raised in western countries (which is likely due to improved nutrition enabling the full expression of their heritable height).

For example, you could ask the same question about whether the increase in learning disabilities and affective disorders within the past few generations in western societies is also "genetic". The default answer there of course, is that these conditions were only formalized as officially recognized diagnoses recently, and that such traits are only known to be heritable anyway (i.e. there are no definitively known "autism/adhd/etc genes" as of yet), so they're likely caused by the combination of the environment enabling the expression/observation of heritable predispositions. We can then similarly propose a null hypothesis to the male/female divide with the observation that western societies have only recently attempted to become more egalitarian by making various fields more equally attractive than they used to be, along with technological advances creating even more of such equally attractive opportunities, leading to heritable traits expressing themselves more noticeably through choices in the overall job market. In other words, being a professional "gamer" wasn't a viable job option 500yrs ago, but neither was being a professional "camgirl" either (to use two distinct, yet similar and stereotypically gendered "modern" occupations), but being a farmer was, in which case equal male/female distributions among farmers would've been the result of an underlying bottleneck in the pipeline, rather than the lack of one.

To suggest that this issue is either purely "genetic" or purely "social", is severely oversimplifying the matter.


> nobody thinks men are smarter than women

Actually I grew up hearing exactly the opposite - that girls are smarter than boys and that girls "mature" faster than boys.


> They're not too dumb, and the hiring process isn't sexist, so why? And that's where hands come up empty.

There is strong evidence that women are on average more interested in "people" and men more interested in "things". Several references in http://slatestarcodex.com/2017/08/07/contra-grant-on-exagger...


I don't believe there is a heavy bias in favor of female doctors, I believe it is a field still majority male, although becoming almost equal. Nurses were traditionally the only actual healthcare profession open to women so it makes sense they would be overrepresented there.

Male nurses now actually can find they have an advantage in hiring because they often have an easier time with the lifting and physical labor being a nurse often requires.

My point is the comparison between nursing and programming is not strong.


Biases are rarely knowingly done, that's why they're biases. The proof is in the pudding though. The model Amazon came up with was biased against women. That suggests that female candidates in the dataset were discriminated against.


That is very interesting. What are the stats for unemployed female engineers? Is there a shortage of female representation because female engineers aren’t being hired or is there a shortage because there are actually fewer female engineers?

There is a shortage of male therapists and kindergarten teachers: is that because males aren’t being hired or because there are fewer of them in existence?


A lot of us drop out mid-career. I'd argue there's more a shortage of female engineering directors, VPs, CTOs.


Also airline pilots, nurses, michelin-starred chefs...


"What is the evidence that the source data for this 'AI' is biased because the men it came from did not want to hire women?"

Nobody is saying that the bias arose because the training data was created by men who didn't want to hire women. That's a fear-mongering straw man.

What people are saying is that there was bias in the training data selected, and so the algorithm exacerbated that bias. Thus, being a cautionary tale about the training data you feed to these things.

" If there was an untapped market of engineers you'd better believe every tech company would be taking advantage of it."

You're assuming rationality where there really is no cause to do so.


This is a direct and clear example of bias, which made it easy to flag the ML algorithm. But what about ML algorithms that are conferring benefits on groups in less obvious contexts? What about groups that are not so easily identified as being protected classes by simple, human-understandable model features? What about cases where the features are just merely correlated with a subpopulation of a protected class?

If we're being honest, a system only needs to be in a decision-making capacity for discriminatory behavior to warrant scrutiny, since in many cases human operators will not be able to identify the specific features being used to make decisions about people -- the features could be highly correlated with some subpopulation of a protected class. If you take that to be true, the question reduces to what decision-making roles ML algorithms have that could be discriminatory, and it's hard to argue this is not a massive part of their current and expected roles.

I think this is going to be a long, winding ethical nightmare that is probably just getting started with human-digestible examples such as these. One can imagine things like this one being looked back on as quaint in the naivety with which we assume we can understand these systems. Where do we draw the line, and how much control do we give up to an optimization function? Surely there is a balance -- how do we categorize and make good decisions around this?

As far as I know, a cohesive ethical framework around this is pretty much non-existent -- the current regime is simply "someone speaks up when something absurdly and overtly bad happens."


> What about cases where the features are just merely correlated with a subpopulation of a protected class?

This is just Simpson's paradox [1], which is notoriously hard to identify because you have to compare the overall with the breakdown. As you say, current AI probably already has such biases.

[1] https://en.wikipedia.org/wiki/Simpson%27s_paradox
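In case the link is too dry, here's a tiny made-up hiring example of the paradox (the numbers are invented and have nothing to do with Amazon): every team hires women at an equal or higher rate, yet the overall rate for women is lower.

    # team: {group: (applicants, hires)} -- invented numbers
    applications = {
        "infra":    {"men": (800, 480), "women": (100, 60)},   # 60% vs 60%
        "research": {"men": (200, 20),  "women": (400, 48)},   # 10% vs 12%
    }

    def overall_rate(group):
        apps  = sum(applications[team][group][0] for team in applications)
        hires = sum(applications[team][group][1] for team in applications)
        return hires / apps

    print("men overall:  ", overall_rate("men"))     # 0.50
    print("women overall:", overall_rate("women"))   # ~0.22, despite equal-or-better per-team rates

The aggregate flips because, in this toy data, women mostly applied to the more selective team - exactly the overall-vs-breakdown comparison that is so easy to miss.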


> What about cases where the features are just merely correlated with a subpopulation of a protected class?

This question can be rephrased as "is there a difference between de facto and de jure discrimination?"

My answer is no, causality doesn't matter here: if feature A is a good predictor that some person belongs in group B and not group C, then filtering out feature As is effectively the same as filtering out only group Bs.
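A minimal synthetic sketch of that point (made-up data and features, nothing to do with Amazon's actual system): drop the protected attribute from the inputs entirely, and a model can still reproduce the historical disparity through any feature that merely correlates with it.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000
    group = rng.integers(0, 2, n)             # 0 = group B, 1 = group C (hypothetical labels)
    proxy = group + rng.normal(0, 0.5, n)     # e.g. a hobby/keyword score that correlates with group
    skill = rng.normal(0, 1, n)               # genuinely job-relevant signal

    # Historical labels are biased: group B was hired less often at equal skill.
    hired = (skill + group + rng.normal(0, 1, n) > 0.5).astype(int)

    X = np.column_stack([skill, proxy])       # note: group itself is NOT a feature
    scores = LogisticRegression().fit(X, hired).predict_proba(X)[:, 1]

    print("mean score, group B:", scores[group == 0].mean())
    print("mean score, group C:", scores[group == 1].mean())
    # The gap persists: filtering on the proxy is effectively filtering on the group.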


Ok, so if you're hiring professional arm wrestlers, and your model looks at bicep muscle mass, is that discrimination because it selects against women?

If you're hiring therapists, and your candidates take a personality test, and your ML model weights the 'nurturing' feature highly, is that discrimination because it selects against men?


Underlying your examples is the implication that a preference shouldn't be considered discriminatory if the trait being selected for correlates with fitness. I agree with this position!

What I don't agree with is the assumption that, in this case, the preferred traits do correlate with fitness, since there's at least one — gender — for which this model is biased even though it has no apparent correlation.


Ya, I just mean to say that uncorrelation with fitness is an important qualifier.


Separate it one step further -- hiring decisions are all too easy to pin as discriminatory. What if a site shows ads for a special offer on protein shakes for people with higher bicep mass since they are in the target market? Is that discriminating against women?


How the hell are you targeting ads with "bicep mass"?


> the features are just merely correlated with a subpopulation of a protected class

The article notes that Amazon's system rated down grads from two all-women's schools. But it immediately occurs to me to wonder what the algorithm did with candidates from heavily gender-imbalanced schools, which could be much harder to spot.

RPI's Computer Science department is about 85% male, while CMU's is just over 50% male. CMU's CS department is also considered one of the best in the world, and presumably any functional algorithm that cared about alma mater would respond to that. So if the bias ends up being "because of CMU's gender ratio, CMU grads with gender-unclear resumes are advantaged slightly less than otherwise would be", how on earth would someone spot that?

Once you're looking for it, you could potentially retrain with some data set like "RPI resumes, but we adjusted their gendered-words rate" and see if you get a different outcome on your test set. But that's both a labor intensive task, and one that's only approachable once you already know what you're looking for. And even if you do see a change, you'd still have to tease it out from a dozen other hypotheses like "certain schools have more organizations with gendered names, and the algorithm can't tell that those organizations are a proxy for school".
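For what it's worth, a cheaper probe than retraining is to leave the model alone and perturb only the inputs: swap the gendered words and see whether the score moves. A rough sketch - the word list and the model_score function are placeholders, nothing Amazon-specific:

    GENDER_SWAPS = {"she": "he", "her": "his", "women's": "men's",
                    "sorority": "fraternity", "softball": "baseball"}

    def swap_gendered_words(resume: str) -> str:
        # Crude, one-directional token swap; a real probe needs much more careful text handling.
        return " ".join(GENDER_SWAPS.get(tok.lower(), tok) for tok in resume.split())

    def gender_sensitivity(model_score, resumes):
        """Average score change when only gendered words are swapped."""
        deltas = [model_score(swap_gendered_words(r)) - model_score(r) for r in resumes]
        return sum(deltas) / len(deltas)

    # usage, given whatever scoring function you already have:
    #   print(gender_sensitivity(model_score, held_out_resumes))

It only catches keyword-level effects, though - proxies like school or club names slip right past it, which is the point above.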

Of course, the counterpoint is that human decisions can't be scrutinized any better, and it's not entirely clear they're less arbitrary or more ethical. At a certain point algorithmic approaches are being scrutinized because they're slightly transparent and testable, so running them on a range of counterfactuals or breaking down their choices is hard rather than impossible. I suspect that's true, but it doesn't really comfort me - humans at least tend to misbehave along certain predictable axes we can try to mitigate, while ML systems can blindside us with all sorts of new and unexpected forms of badness.


A subtle point you may have missed: Amazon knew about and accounted for the gender bias; they scrapped the tool because of all the biases that they couldn't identify and were leery of. Most of your suggestions seem to be solving for the known biases, which I believe they did.

Also, knowing some people who worked on this, they were VERY cognizant of re-encoding biases from the start of the project; it was one of the main reasons they thought the project might fail.


I did not at all get from the article "amazon knew about and accounted for the gender bias".

"Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory." I read that as a very different statement - as written, Amazon corrected two specific instances of keyword gender bias by hand, but couldn't reliably prevent further bias (including gender bias) from arising. That's where tricks like "ask the system to classify gender, and then un-train via that data" come in.

(I don't mean you're wrong, just that if gender bias was accounted for more generally, the article should have said so.)

That said, I think our disagreement might just be a miscommunication on what went wrong in the first place. If you know some people involved, maybe you can help clarify the situation?

The article totally fails to explain why "most engineering resumes are from men" led to an algorithm that downrated female resumes. "Most applicants had brown hair" does not produce a system that downrates blondes if you tell it hair color. So the question is - was the training data biased against female applicants (in which case why wasn't it caught before specific outputs needed modification?), or did something else altogether cause this issue (in which case what?)


I believe the system was trained on successful candidates, and "successful" means: they were retained and likely promoted after hire over the next 3 years.

If they only trained on who was hired they wouldn't really know if those were good hires.


All of that is true, but I think the most important question is: compared to what? ML is substantially more transparent than human decision makers. Human decision makers will actively lie to you. ML is a major step forward in correcting these sorts of biases, by making interpretable (relative to humans) models in the first place.
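As a small illustration of "interpretable relative to humans": for a linear text model you can simply list which terms push a score up or down, which no human interviewer will ever do for you. Toy data and labels below, purely so the example runs - this is not Amazon's pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    resumes = ["captain of the chess club", "womens soccer team captain",
               "built distributed systems", "executed marketing campaigns"]
    hired = [1, 0, 1, 0]    # toy labels only

    pipe = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(resumes, hired)
    vec, clf = pipe.named_steps["tfidfvectorizer"], pipe.named_steps["logisticregression"]

    weights = sorted(zip(clf.coef_[0], vec.get_feature_names_out()))
    print("most negative terms:", weights[:3])     # terms that lower the predicted score
    print("most positive terms:", weights[-3:])    # terms that raise it

Whether the audit that follows is easy is a separate question, but at least the question can be asked of the model directly.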


At best AI amplifies existing patterns and biases when handling repetitive work. Over and over we hear how Facebook, Twitter, Google, and others will solve the problem of problematic content and bad actors through AI and neural networks. It's a fraud and the digital Potemkin village of our era.


AI learns from the training data it's given and copies any biases this data exhibits. Pretty much all software today uses ML in some form to improve their services. I feel it's here to stay and not bad by default. We just have to make sure we are aware of its current limitations.

Facebook is already auto-flagging content this way but it's just a very hard problem (even for humans).


> AI learns from the training data it's given and copies any biases this data exhibits

I hate to sound like "that pedantic guy", but I'd argue that the quote above is only partially true. It's the case that some subset of AI techniques "learn from the training data it's given and copies any biases this data exhibits". There are AI techniques that aren't based on supervised learning from a pre-existing training set. That doesn't mean that those techniques can't wind up adopting the biases of their human overlords, but I believe some aspects of AI are less susceptible to this kind of bias, than others.


Call me cynical, but I find it amusing that no one points out the fact that engineering work is, to say the least, laborious and dry for most people. That's the reason there are so few people who have other options (females, upper-middle-class people, people of means) in the scene. There are so many lowish-paid, low-status, unsought-after sectors where the majority of workers are male, say janitors at universities; why do I never see any discussions of that bias?


You're saying that software engineers are in the role because they have no other option? No upper-middle-class people in the position? What are you talking about?

You're missing the point anyway. The article made it pretty clear that this AI amplified biases humans already have about women applicants to tech positions. Stating your own biases about women make no sense to the topic, or to the argument you seem to be trying to make.

Janitors are worth talking about as well (women in the same job usually have a different title with less authority and less pay), but high-status, highly-paid, highly influential jobs are where it's most important to avoid bias, and so we talk about those more.


And if you think programming is high-status, high-paying, and highly influential, why do we import immigrants to do it? You think the talent pool of some hundred million Americans with the most extensive education system is insufficient?


> You're saying that software engineers are in the role because they have no other option? No upper-middle-class people in the position? What are you talking about?

There are enthusiastic people who start with enthusiasm and manage to keep it, but most begin with practical concerns. And most of those who don't begin with enthusiasm stay with those practical concerns.

And yes, rich people, the upper echelon, people of means are rare in the industry.


Because, though logical, it's a highly unpopular thing to discuss. Narrative has replaced critical thinking in many areas, this being one.


Cynical, or wrong? There's a clear history of women in software engineering from the era when it was considered a low-class, low-skill job. The first "programmers" in the modern sense of the word were women (the ENIAC programmers), and the lead software developer for Apollo 11 was a woman. There's so much evidence that far more is at play here than simply "engineering is laborious and dry [so women must not want to do the work]" that I'm willing to state it falls under Wikipedia's citing rules for common knowledge: I don't need to cite that the capital of France is Paris any more than I need to cite this.

This is the same line of already-refuted reasoning behind the "I'm just asking questions" posture in the infamous Google memo.

To answer my original, rhetorical, question: It's not cynical. It's wrong.


While I agree about the history of female programmers in the early days, I'd like to point out one thing you've neglected: programming work 60 years ago is not comparable or even relevant to that of today. They are different in nature. What's even further from reality is the claim that programming back then was low-paying or low-status; I'd say it was elite. After all, most people then had no access to tertiary education, let alone computers.


I agree that programming is very different today than it was in the '50s and '60s, but I absolutely disagree that it was high-paying or elite. Programming was considered a step above secretarial work[0], and women were actively encouraged to enter the field because they were "naturally suited to planning".

[0] https://www.smithsonianmag.com/smart-news/computer-programmi...


> There's so much evidence to suggest that there is far more at play here than simply "engineering is laborious and dry [so women must not want to do the work]" that I'm willing to state it falls under wikipedia's citing rules for globally known knowledge: I don't need to cite that the capital of France is Paris any more than I need to cite this.

What about citations proving that the "software engineering when it was considered a low-class low-skill job" is the same profession as programming in the past 30 years? Or at least that it has the same difficulty and processes?

Btw, using phrases like "clear history", "so much evidence" (especially when you cite one(!) arguable data point), and "already-refuted" does not convince anyone that you're right. It is, at best, annoying.


What about citations proving that programming is dry, tedious, or boring? I am under no obligation to engage with a comment made in bad faith as if it weren't.


> What about citations proving that programming is dry, tedious, or boring?

From what I've seen, this is a standard assumption among ordinary people. And it doesn't target only programming; any office job that involves "sitting at a computer all day" gets that reputation.

And that could easily not have been the case in the '50s (I really don't know). And the profession has clearly evolved (anecdotally, many think it has gotten worse). So your assumptions are really not that obvious. Sorry if that comes across to you as "arguing in bad faith".


That's a fine assumption for ordinary people to make, but I think it's quite fair to respond to somebody commenting on Hacker News as if they have more than a layperson's understanding of the technology industry.

Are you just arguing for the sake of arguing? You're not engaging with my points in any meaningful way. Can we be done with this thread?


Yeah, talk about arguing in bad faith, then move the goalposts (when laypeople think the job is too hard or boring, they won't choose it when choosing what to study; seems pretty obvious?) and claim the other side is "arguing for the sake of arguing".

You are not engaging in this debate in any meaningful way; maybe stop arguing on HN? You are not convincing anyone...


You don't see discussion of that because you're on a tech-industry website, not a janitorial forum. And the percentage of women in janitorial work is nearly twice the percentage in tech, per the BLS: https://www.bls.gov/cps/cpsaat11.htm
