> Damore then started making HR policy proposals. We use a 50/50 gender ratio as an indicator that a particular field is free from bias. It's one thing to propose that 50/50 is not the natural ratio to end up with, but until Damore can propose a model that predicts another number, proposing HR policy changes puts the cart before the horse.
This seems to assume that the only way to measure or achieve equitable hiring is to measure the representation of identity groups across a given position and make sure it tracks their makeup in the general population. It's not clear to me that there aren't other acceptable methods of trying to make things equitable.
For example, you could check that applicants from different identity groups succeed in being hired at about the same rate. That's a practice that should direct an organization towards equitable results whether the reality is that women are underrepresented because of sexism in hiring or the reality is that women are represented in different proportion because of the endeavors they tend to prefer. And also for a reality that's a mix of both (which I suspect is the way of things).
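As a rough sketch of what "succeed at about the same rate" could mean in practice, here is an illustrative comparison of selection rates between two applicant groups. The counts, field names, and the 0.8 threshold (the "four-fifths rule" heuristic sometimes used in disparate-impact analysis) are assumptions for the example, not a claim about any real process.

```python
# Illustrative sketch: compare hiring (selection) rates across applicant groups.
# All numbers here are hypothetical.

def selection_rate(hired, applied):
    """Fraction of applicants from a group who were hired."""
    return hired / applied

def parity_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one.
    A ratio near 1.0 suggests the groups succeed at similar rates;
    the 'four-fifths rule' heuristic flags ratios below 0.8 for review."""
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi

rate_women = selection_rate(hired=30, applied=200)  # 0.15
rate_men = selection_rate(hired=90, applied=600)    # 0.15
print(parity_ratio(rate_women, rate_men))           # 1.0 -> similar success rates
```

Note that this measures the hiring step in isolation: it can look fine even when the applicant pool itself is skewed by upstream factors, which is the mixed reality the comment suspects.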
Also: if the primary accepted standard becomes to match representation in a position with an identity's representation in the population, it seems pretty likely that it would become more difficult over time to predict a "natural" ratio.
I completely agree, which is precisely why I believe the role of the employment process ought to be to screen everyone by metrics that are fair (in a way that everyone agrees on) and transparent. 'All fronts' hopefully means fixing every issue encountered simultaneously at the local level.
If instead we're adding a 'fudge factor' based on race, gender, or other measure of 'privilege', we're just hoping that fudge factor in hiring makes up for problems elsewhere, and it can paradoxically make things even worse.
Think about a lot of the (often very well justified) complaints that minority and other hires have with the current situation: they feel, or they feel that other people believe, that they are simply a 'diversity hire' who doesn't deserve to be there. They feel constantly pressured to 'prove themselves' under the suspicion that the bar was 'lowered to let them in'. And the entire structure of un-blinded affirmative action exacerbates the situation, because nobody is allowed to know how big the fudge factors are, neither the minorities nor the dominant group. Under that situation, how can there be anything but suspicion and mutual distrust?
Under a provably blinded hiring process, none of those should be an issue, because the process is completely transparent and agreed to ahead of time.
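To make the idea concrete, here is a minimal sketch of what "blinding" an application might look like: identifying fields are stripped before reviewers see it. The field names are hypothetical, and a real process would also have to scrub identifying cues from free-text material (resumes, cover letters), which is considerably harder.

```python
# Illustrative sketch: blind an application before review.
# Field names are hypothetical assumptions for the example.

IDENTIFYING_FIELDS = {"name", "gender", "age", "photo_url"}

def blind(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed,
    so reviewers see only the agreed-upon, job-relevant signals."""
    return {k: v for k, v in application.items() if k not in IDENTIFYING_FIELDS}

app = {"name": "A. Candidate", "gender": "F",
       "years_experience": 7, "test_score": 88}
print(blind(app))  # {'years_experience': 7, 'test_score': 88}
```

The "provably" part is the transparency: the set of removed fields and the scoring metrics are published and agreed on before anyone is evaluated.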
Other people have said this much more eloquently than me:
'Fair & transparent' and 'blind' are two different things, and neither are subsets of each other.
A 'blind' hiring process _can_ be akin to focusing only on the most immediate causal relationships when faced with a densely connected graph.
I do agree that 'fudge factors' are clumsy at best, where all candidates are scored, and then an arbitrary number is added to candidates' scores based on race/gender/etc.
However, 'fudge factors' have already existed in history. For a completely different example outside of hiring practices: redlining[1] was an explicit practice of denying services/mortgages to city neighborhoods based on their racial makeup.
So, what now? There have been decades of racist 'fudge-factoring' in real estate and urban development. Is the right approach to fudge-factor the other way? Or is it to be 'blind' and to look purely at the financials of each individual/organization?
Obviously this is a different scenario than hiring, and cannot necessarily be directly applied back onto hiring practices. However, we can separate out a) one way to correct for historical/systematic 'fudge factors' from b) whether or not this can apply to hiring.
I would argue that yes, you need fudge factors to correct previous problems.
It should be fair and transparent, I agree, but it will not be very clear-cut. In complex systems (densely connected graphs of causality), the only clear-cut processes are those that create problems or ignore them. Fixing complex problems is always messy.
As many people have pointed out before, making the hiring process blind doesn't do what you seem to think it'll do. There was a famous study (can't find it right this second on mobile) where researchers found that the ratio of black hires to white hires decreased when their resumes were submitted with all identifying information scrubbed.
The results of studies like this depend strongly on the context... both the method used, and the existing hiring environment you're comparing to.
We've run a similar study and for the company we were hiring into we found blinding in that specific case had no effect on race or gender but drastically improved socio-economic diversity. The hiring company already had equitable hiring on gender & the candidate group wasn't racially diverse enough to make a conclusion.
Would I generalise that result to all organisations? No way, and neither should you.
If you can find the study you're thinking of I'd be interested to look at it.
I don't know the answer but if I cared to guess, it might be because the talent pool for orchestra performers had significantly more gender parity than the talent pool competing for elite engineering jobs at Google.
Edit: Google says that their diversity platform is non-discriminatory because they're not changing their standards, but rather looking harder for qualified diversity candidates (paraphrasing). This makes the gargantuan and probably unwarranted assumption that there are a lot of these candidates not applying and that 'looking harder' will find them.
Maybe there is pressure to hire from minority groups then. It makes sense that minority candidates are on average less qualified objectively if there is affirmative action earlier in the process (for example at the school admission level).
Google's hiring rate between men and women does seem to match the relative rates that men and women graduate with CS degrees, which suggests that they're at least not discriminating at that level.
There are still arguments to be made that either more aggressively recruiting women (fattening their pipeline, even if it's zero-sum versus other players in the field) or accepting a higher rate (yes, "lowering the bar", which most colleges do quite aggressively and people seem mostly okay with) could be positive moves on many axes.
More productive overall measures involve equalizing the educational pipeline, which IMO is the real solution. Google invests heavily in that, too, though, so I'm pretty happy with their multi-pronged approach.