While it is true that using an algorithmic process to select candidates may introduce discrimination against protected groups, it seems to me that it should be much easier to detect and prove than with earlier processes that kept human judgement in the loop.
You can just subpoena the algorithm, feed it test data, and observe the results. You can even feed it synthetic data: swap "stereotypically Black" names into real resumes of other races, or, in this case, add "uses a wheelchair" to a resume. (Of course in practice it's more complex, but hopefully this makes the point.)
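A minimal sketch of that kind of swap test. Everything here is hypothetical: `score_resume` stands in for whatever model gets subpoenaed, and the toy lambda is a deliberately biased stand-in, not any real product.

```python
def counterfactual_gap(score_resume, resume_text, original, substitute):
    """Score a resume, then re-score it with one demographic signal
    swapped, and return how much the score moved."""
    baseline = score_resume(resume_text)
    swapped = score_resume(resume_text.replace(original, substitute))
    return baseline - swapped

# Toy biased "model" for illustration only: it reacts to the name.
toy_model = lambda r: 0.8 if "Emily" in r else 0.6
resume = "Emily Walsh. 5 years Java. B.S. in CS."
gap = counterfactual_gap(toy_model, resume, "Emily Walsh", "Lakisha Washington")
# A nonzero gap on otherwise identical resumes is exactly the observation
# you can never cleanly make with a human reviewer.
```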
With a human, you can’t really do an A/B test to determine if they would have prioritized a candidate if they hadn’t included some signal; it’s really easy to rationalize away discrimination at the margins.
So while most AI/ML developers are not currently strapping their models to a discrimination-tester, I think the end-state could be much better when they do.
(I think a concrete solution would be to regulate these models to require a certification with some standardized test framework to show that developers have actually attempted to control these potential sources of bias. Google has done some good work in this area: https://ai.google/responsibilities/responsible-ai-practices/... - though there is nothing stopping model-sellers from self-regulating and publishing this testing first, to try to get ahead of formal regulation.)
Which is part of the reason that discrimination doesn't have to be intentional for it to be punishable. This is a concept known as "disparate impact". The Supreme Court has issued decisions that a policy which negatively impacts a protected class and has no justifiable business related reason for existing can be deemed discrimination regardless of the motivations behind that policy.
 - https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.
This is not true. The IQ tests in the mentioned Griggs v. Duke Power Co. (and similar cases) were rejected as disparate impact specifically because the company provided no evidence that they led to better performance. To quote the majority opinion of Griggs:
> On the record before us, neither the high school completion requirement nor the general intelligence test is shown to bear a demonstrable relationship to successful performance of the jobs for which it was used. Both were adopted, as the Court of Appeals noted, without meaningful study of their relationship to job performance ability.
However, the sort of insidious discrimination at the margin I was imagining is things like "equally good resumes (both meet all requirements), but one had a female or stereotypically Black name". Interpreting resumes is not a science; humans apply judgement to pick which ones feel good, which leaves a lot of room for hidden bias to creep in.
My point was that I think algorithmic processes are more testable for these sorts of bias; do you feel that existing disparate impact regulations are good at catching/preventing this kind of thing? (I’m aware of some large-scale research on name-bias on resumes but it seems hard to do in the context of a single company.)
That is a common example, but it is much broader than what goes on a job ad. For example, I have heard occasional rumblings about how whiteboard interviews are a hiring practice that would not stand up to these laws (IANAL).
>My point was that I think algorithmic processes are more testable for these sorts of bias
Yes, this is true, but that doesn't really matter. If there is consistent discrimination happening at the margins, that will be evident holistically. If that is evident holistically and there is no justification for it, that is all we need. We don't need to run resumes through an algorithm to show that discrimination is happening at an individual level. We just need to show that a policy negatively impacts a protected group and that the policy is not related to job performance.
>do you feel that existing disparate impact regulations are good at catching/preventing this kind of thing?
I think the bigger problem than the regulations is that there is an inherent bias against these types of cases actually being pursued. First, it is difficult to identify this as an individual, so people don't know when it is happening. Additionally, people fear the retribution that would come from pursuing this legally. People don't want to be viewed as a pariah by future employers, so they often will simply move on even if their accusations are valid.
It is true that extreme bias/discrimination will be evident, but smaller bias/discrimination, particularly in an environment where the pool is small (say, black women for engineering roles) is extremely hard to prove for a human interviewer. Your sample size is just going to be too small. On the other hand, if you have an ML algorithm, you can feed it arbitrary amounts of synthetic data, and get precise loadings on protected attributes.
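As a sketch of what "precise loadings" could mean in practice, score matched synthetic pairs that differ only in the protected attribute and average the difference. The `model` and the resume fields here are hypothetical stand-ins, not any real system.

```python
import random

def attribute_loading(model, make_resume, n=10_000):
    """Average score shift attributable to a protected attribute,
    estimated from matched synthetic resume pairs."""
    diffs = []
    for _ in range(n):
        base = make_resume()                       # attribute absent
        marked = dict(base, wheelchair_user=True)  # attribute present
        diffs.append(model(marked) - model(base))
    return sum(diffs) / n

# Toy model with a hidden flat penalty, purely for illustration.
toy = lambda r: 0.1 * r["years"] - (0.3 if r.get("wheelchair_user") else 0.0)
random.seed(0)
loading = attribute_loading(toy, lambda: {"years": random.randint(1, 10)})
# loading recovers the hidden -0.3 penalty almost exactly; no human
# interview process can be probed with 10,000 matched pairs.
```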
> Disparate impact in United States labor law refers to ...
ELEMENTS TO ESTABLISH ADVERSE DISPARATE IMPACT UNDER TITLE VI
Identify the specific policy or practice at issue; see Section C.3.a.
Establish adversity/harm; see Section C.3.b.
Establish disparity; see Section C.3.c.
Establish causation; see Section C.3.d.
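For the "disparity" element, one common first screen is the four-fifths rule from the EEOC's Uniform Guidelines (a Title VII guideline; treating it as a screen here is my assumption, not part of the Title VI manual quoted above):

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Flag possible adverse impact when one group's selection rate
    falls below 80% of the other group's rate (EEOC four-fifths rule)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# 50 of 100 applicants selected in one group vs. 30 of 100 in another:
# the ratio is 0.6, so the policy gets flagged for a closer look.
ratio, flagged = four_fifths_check(50, 100, 30, 100)
```

This is only a screen; establishing adversity, disparity, and causation still requires the fuller analysis the list above describes.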
Because I'll tell you, there's millions of landlords and they blindly trust FICO when screening candidates. Maybe not as the only signal, but they do trust it without testing it for edge cases.
Yes, you can A/B test the model if you can design reasonable experiments. You still don't have a general discrimination test, because you have to define what a reasonable input distribution is and what reasonable outputs are.
If an employer is looking to hire an engineer with a CS degree from a top-tier university, and they use an AI model to evaluate resumes, and it passes Black candidates at a rate very similar to their share of graduates from those programs, is the model discriminatory?
There are still hard problems here because any natural baseline you use for a model may in fact be wrong and designing a reasonable distribution of input data is almost impossibly hard as well.
“We applied best practices in the field to limit discrimination” should not be an adequate legal defence if the model can be shown to discriminate.
To clarify further, just because you tried to prevent discrimination doesn’t mean you should be off the hook for the material harms of discrimination to a specific individual. Otherwise people don’t have a right to be protected against discrimination they only have a right to people ‘trying’ to prevent discrimination. We shouldn’t want to weaken rights that much even if it means we have to be cautious in how we adopt new technologies.
Not for individual candidates, no. But you can introduce a parallel anonymized interview process and compare the results.
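If you do run the two pipelines in parallel, the comparison reduces to a standard two-proportion test. This is a sketch with a normal approximation and illustrative pass counts, not a full statistical workup:

```python
from math import sqrt, erf

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """z statistic and two-sided p-value for a difference in pass rates
    between the named and anonymized pipelines (normal approximation)."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 40/100 pass with names attached, 60/100 anonymized.
z, p = two_proportion_z(40, 100, 60, 100)
# p comes out well under 0.05, so the gap is unlikely to be noise.
```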
As soon as you do this, they're revealed to exploit only statistical coincidences and highly fragile heuristics embedded in the data provided. And likewise, they turn out to be pretty universally discriminatory when human data is involved.
> The format of the employment test can screen out people with disabilities [for example:] A job application requires a timed math test using a keyboard. Angela has severe arthritis and cannot type quickly.
> The scoring of the test can screen out people with disabilities [for example:] An employer uses a computer program to test “problem-solving ability” based on speech patterns for a promotion. Sasha meets the requirements for the promotion. Sasha stutters so their speech patterns do not match what the computer program expects.
Interestingly, I think the second one is problematic for common software interview practices. If your candidate asked for an accommodation (say, no live rapid-fire coding) due to a recognized medical condition, you would be legally required to provide it.
This request hasn’t come up for me in all the (startup) hiring I’ve done, but it could be tough to honor this request fairly on short notice, so worth thinking about in advance.
Pretend your colleague had a cast and couldn't type for a few weeks. Is that person going to get put on the time-sensitive demo where 10k SLOC need to be written this week? Or the design/PoC project that requires far less SLOC but nobody knows if it will work? Or the process improvement projects that require a bunch of data mining, analysis, and presentation?
It’s not hard to find ways to not discriminate against disabilities on short notice. The problem is, at least in my experience with these YC start-ups who did not, there’s so much naïveté combined with self-righteousness that they’d rather just bulldoze through candidates like every other problem they have.
When you apply to be a carpenter they don't make you hammer nails, when you apply to be an accountant they don't have you prepare a spreadsheet for them, etc.
I don’t work (even in interviews) for free.
A quick coding test is something that any place where people are expected to know how to code has to do; doing it through one of those platforms seems perfectly reasonable, and I'm happy to do it.
Writing fizzbuzz is not "working for free" any more than any other form of interviews.
And is the "when you apply to be a carpenter" sentence really true? I've heard of the interview process for welders being "here's a machine and two pieces of metal, I'll watch".
For example, I am a frequent customer of U-Haul. I learned to not use the branch that’s closest to me, because some employees there are really slow with computers, which makes checking out equipment very slow, and frequently results in a long line of waiting customers. Driving 5 extra minutes saves me 20 minutes of waiting for employees to type in everything and click through the system.
And this is freaking U-Haul. If you're a software engineer, slow typing is also a productivity drain: a 3-minute email becomes a 6-minute one, a 20-minute Slack conversation becomes a 30-minute one, etc. It all adds up.
Consider another example: police officers need to do a lot of typing to create reports. A fast typing officer can spend less time writing up reports, and more time responding to calls. That makes him more productive, all else being equal. Of course it would be silly to consider typing speed as a sole qualification for a job of police officer (or, for that matter, a software engineer), but it is in no way unreasonable to take it into account when hiring.
Maybe. If you type 10,000 words per minute but your entire module gets refactored out of the codebase next week, is your productivity anything higher than 0?
Multiple times in my career, months or even years worth of my team's work was tossed in the trash because some middle manager decided to change directions. A friend of mine is about ready to quit at AMZN because the product he was supposed to launch last year keeps getting delayed so they can rewrite pieces of it. Maybe some people should have thought more and typed less.
If you spent less time typing that module that later went to trash, you are, in aggregate, more productive than someone who spent more time typing the same module.
This sort of argument only makes sense if you assume that there is some sort of correlation, where people who are slower at typing are more likely to make better design or business decisions, all else being equal. I certainly have no reason to believe it to be true. Remember we are talking about the issue in context of someone who is slow at typing because of arthritis. Does arthritis make people better at software design, or communication? I don’t think so.
Besides, the point seems to have been about interview practices. You know, those practices which are often quite removed from the actual on-the-job tasks.
What if I was disabled to the degree that I couldn't leave the house, but I could work remotely (an office job)? That's what accommodations are for.
There are many (and I know quite a few) people who are quite capable at their jobs and entirely computer-ineffective. As they're forced more and more to deal with confusing two-factor requirements and other computer-related things that we're just "used" to, they get discouraged and give up.
For now you can often help them fill it out, but at some point that's going to be unwieldy or insufficient.
A simple example of this is that often part-time hours would allow me to continue working and there is nothing in the ADA/EEO or FMLA that guarantees a worker's right to keep their job at part-time indefinitely. The obligation to perform to the level of a typical worker is squarely on the shoulders of the disabled. Good luck finding an employer generous enough to deal with all of your needs.
85% of the people with my diagnosis are unemployed despite the fact most of us want to work. The algorithms will probably help make sure it's more like 90-95% of us. But at least the stakeholder involvement process makes people feel like they have a voice even as they are categorically excluded from ever being fully-functional human beings.
And as I understand it, you don't really have a case without evidence that the hiring algorithm is discriminating against people with disabilities.
How would an individual even begin to gather that evidence?
There are three major kinds of evidence that would be useful here. Most useful but least likely: email inside the company in which someone says "make sure that this doesn't select too many people with disabilities" or "it's fine that the system isn't selecting people with disabilities, carry on".
Useful and very likely: prima facie evidence that the software doesn't make necessary reasonable accommodations - a video captcha without an audio alternative, things like that.
Fairly useful and of moderate likelihood: statistical evidence that whatever the company said or did, it has the effect of unfairly rejecting applicants with disabilities.
"this is the June 2020 version, this is the current version, we have no back ups in between" is acceptable if true. Destroying or omitting an existing version is not.
For a civil action, the burden of proof is "preponderance of evidence," which is a much lower standard than "beyond a reasonable doubt." "Maybe the weights are different now" is a reasonable doubt, but in a civil case the plaintiff could respond "Can the defendant prove the weights are different? For that matter, can the defendant even explain to this court how this machine works? How can the defendant know this machine doesn't just dress up discrimination with numbers?" And then it's a bad day for the defendant to the tune of a pile of money if they don't understand the machine they use.
Seems like the behavior becomes predictable, and then you have to retrain if you see suboptimal results.
You just run the same software (with the same state database, if applicable).
Oh wait, I forgot, nobody knows or cares what software they're running. As long as the website is pretty and we can outsource the sysop burden, well then, who needs representative testing or the ability to audit?
At most, I imagine, the plaintiff is allowed to do discovery, and then has to affirmatively prove discrimination based on that.
"Clarifies that, when designing or choosing technological tools, employers must consider how their tools could impact different disabilities;
Explains employers’ obligations under the ADA when using algorithmic decision-making tools, including when an employer must provide a reasonable accommodation;"
That seems backwards, at least in the US.
I think the Americans with Disabilities Act (ADA) requires notification (i.e., I need to talk to HR/boss/whoever about any limitations and reasonable accommodations). If I am correct, not answering the question "Do you require accommodations according to the ADA? yes / no / prefer not to answer" can legally come with a penalty, and the linked DoJ reasoning wouldn't stop it.
"Without proper safeguards, workers with disabilities may be “screened out” from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and"
"If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams."
This makes it sound like the employer needs to ensure their AI allows for reasonable accommodations. But if an AI is allowed to assume reasonable accommodations will be supplied, what incentive would an employer ever have to make it assume they won't be, given that they are legally required to supply them?
>> "Completion of this form is voluntary. No individual personnel selections are made based on this information. There will be no impact on your application if you choose not to answer any of these questions"
Your employer shouldn't even be able to know whether or not you filled it out.
"Self-identification is the preferred method of identifying race/ethnicity information necessary for the EEO-1 Component 1 Report. Employers are required to attempt to allow employees to use self-identification to complete the EEO-1 Component 1 Report. However, if employees decline to self-identify their race/ethnicity, employment records or observer identification may be used. Where records are maintained, it is recommended that they be kept separately from the employee's basic personnel file or other records available to those responsible for personnel decisions."
20+ years of environmental differences, especially culture? The disabilities themselves? Genes? Nothing about human nature suggests that all demographics are equally competent in all fields, regardless of whether you group people by race, gender, political preferences, geography, religion, etc. To believe otherwise is fundamentally unscientific, though it's socially unacceptable to acknowledge this truth.
>Remembering that we all have implicit bias
This doesn't tell you anything about the direction of this bias, but the zeitgeist is such that it is nearly always assumed to go in one direction, and that's deeply problematic. It's an overcorrection that looks an awful lot like institutional discrimination.
>Remembering that we all have implicit bias and it doesn't make you a mustache-twirling villain.
Except that if you push back against unilateral accusations of bias while belonging to one, and only one, specific demographic, you are effectively treated like a mustache-twirling villain. No one is openly complaining about "too much diversity" and keeping their job at the moment. That's bias.
What does exist shows, at best, mild correlation over large populations, but nothing binary or deterministic at an individual level.
To wit, even if your demographic group, on average, is slightly more or less successful in a specific metric, there is no scientific basis for individualized discrimination.
It's not "socially unacceptable to acknowledge this truth"; it's socially unacceptable to pretend discrimination is justified.
There absolutely is a mountain of research which unambiguously implies that different demographics are better or worse suited for certain industries. A trivial example would be average female vs male performance in physically demanding roles.
Now what is indeed missing is the research which takes the mountain of data and actually dares to draw these conclusions. Because the subject has been taboo for some 30-60 years.
>To wit, even if your demographic group, on average, is slightly more or less successful in a specific metric, there is no scientific basis for individualized discrimination
We are not discussing individual discrimination, I am explaining to you that statistically significant differences in demographic representation are extremely weak evidence for discrimination. Or are you trying to suggest that the NFL, NBA, etc are discriminating against non-blacks?
>It's not "socially unacceptable to acknowledge this truth"; it's socially unacceptable to pretend discrimination is justified
See above, and I'm not sure if you're being dishonest by insinuating that I'm trying to justify discrimination or if you genuinely missed my point. Because that's how deeply rooted this completely unscientific blank slate bias is in western society.
Genes and culture influence behavior, choices, and outcomes. Pretending otherwise and forcing corrective discrimination for your pet minority is anti-meritocratic and is damaging our institutions, as evidenced by the insistence of politicized scientists that these differences are minor.
A single standard deviation difference in mean IQ between two demographics would neatly and obviously explain "lack of representation" among high paying white collar jobs; I just can't write a paper about it if I'm a professional researcher or I'll get the James Watson treatment for effectively stating that 2+2=4. This isn't science, our institutions have been thoroughly corrupted by such ideological dogma.
Instead, we could give everyone the absolute best tech and social support, and only then evaluate performance, not of individuals, but of individuals+tech, the same way we evaluate a pilot's vision with their glasses on.
Or are you asking me to find a study which shows which specific cultural differences make large swaths of people more likely to, say, pursue sports and music versus academic achievement? Or invest in their children?
Again, the evidence is ubiquitous, overwhelming, and unambiguous. Synthesizing it into a paper would get a researcher fired in the current climate, if they could even find funding or a willing publisher; not because it would be factually incorrect, but because the politicized academic culture would find a title like "The Influence of Ghetto Black Cultural Norms on Professional Achievement" unpalatable if the paper didn't bend over backwards to blame "socioeconomic factors". Which is ironic because culture is the socio in socioeconomics, yet I would actually challenge YOU to find a single modern paper which examines negative cultural adaptations in any nonwhite first world group.
Further, my argument has been dishonestly framed (as is typical) as a false dichotomy, I'm not arguing that discrimination doesn't exist, but the opposition is viciously insisting, that all differences among groups are too minor to make a difference in a meritocracy, and anyone who questions otherwise is a bigot.
I am pointing out that, despite your claim that your viewpoint is rooted in science, you have no scientific basis for your belief beyond your own synthesis of facts which you consider "ubiquitous, overwhelming, and unambiguous".
You have a belief unsupported by scientific literature. If you want to claim that the reason it is unsupported is because of a vast cultural conspiracy against the type of research which would prove your point, you're free to do so.
I have repeatedly explained to you that the belief is indeed supported by a wealth of indirect scientific literature.
>You have a belief unsupported by scientific literature. If you want to claim that the reason it is unsupported is because of a vast cultural conspiracy against the type of research which would prove your point, you're free to do so.
Calling it a conspiracy theory is a dishonest deflection. It is not a conspiracy, it is a deeply rooted institutional bias. But I can play this game too: can you show me research which rigorously proves that genes and culture have negligible influence on social outcomes? Surely if this is such settled science, it will be easy to justify, right?
Except I bet you won't find any papers examining the genetic and/or cultural influences on professional success in various industries. It's like selective reporting, lying through omission with selective research instead.
But you will easily find a wealth of unfalsifiable and irreproducible grievance studies papers which completely sidestep genes and culture while dredging for their predetermined conclusions regarding the existence of discrimination. And because the socioeconomic factors of genes and culture are a forbidden topic, you end up with the preposterous implication that all discrepancies in representation must be the result of discrimination, as in the post that spawned this thread.
Disparate impact is often caused by discrimination upstream in the pipeline, not discrimination on the part of the hiring manager. Suppose that due to systematic discrimination, demographic X is much more likely than demographic Y to grow up malnourished in a house filled with lead paint. The corresponding cognitive decline amongst X people would mean they are less likely than Y people to succeed in (or even attend) elementary school, high school, college, and thus the workplace.
A far smaller fraction of X people will therefore ultimately be qualified for a job than Y people. This isn’t due to any discrimination on the part of the hiring manager.
When a generation of Americans force all the people of one race to live in "the bad part of town" and refuse to do business with them in any other context, that's obviously discrimination. If a generation later, a bank looks at its numbers and decides borrowers from a particular zip code are higher risk (because historically their businesses were hit with periodic boycotts by the people who penned them in there, or big-money business simply refused to trade with them because they were the wrong skin color), draws a big red circle around their neighborhood on a map, and writes "Add 2 points to the cost" on that map... Discrimination or disparate impact? Those borrowers really are riskier according to the bank's numbers. But red-lining is illegal, and if 80% of that zip code is also Hispanic... Uh oh. Now the bank has to prove they don't just refuse Hispanic business.
And the problem with relying on ML to make these decisions is that ML is a correlation engine, not a human being with an understanding of nuance and historical context. If it finds that correlation organically (but lacks the context that, for example, maybe people in that neighborhood repay loans less often because their businesses fold because the other races in the neighborhood boycott those businesses for being "not our kind of people") and starts implementing de-facto red-lining, courts aren't going to be sympathetic to the argument "But the machine told us to discriminate!"
If you refuse to hire a woman because she is a woman, you are discriminating. Fortunately, by historical standards, that is rare today.
Sadly, they insisted that they would only call me at an agreed time and not take any incoming call from me, even within the agreed-upon timeframe.
Why is this an important instance of AI bias against people with disabilities? I need to line up a voice interpreter who can do American Sign Language so I can understand what a group of Google interviewers is saying. And voice interpreters require a 3-way call that I can help set up.
I am capable of understanding them perfectly through lipreading; I lost all high-frequency hearing due to a childhood high fever. By sound alone I cannot tell the difference between B, V, P, D, Z, or E, but lipreading nails them all. I am an excellent speaker of the English language despite being technically and governmentally classified as Deaf.
So, have I been wronged? I still think so … to this day.
> Beginning on March 15, 2011, only dogs are recognized as service animals under titles II and III of the ADA.
For example, if I were hiring a programmer, and the programmer was technically competent but spoke with such a thick accent that I couldn't understand them very well, I'd be tempted to pass on that candidate even though they meet all the job requirements. And if it happened every time I interviewed someone from that particular region, I'd probably develop a bias against similar future candidates.
You wouldn't screen out a person who cannot speak, or cannot speak clearly, due to a disability of some sort. You'd use a different method of communication, as would everyone else, and it could really be the same for them.
On the other hand, if communication was clearly impossible and/or they needed to be understood by the public (customers), the accent may very well mean they cannot do the job, and that is not in the scope of things you can teach the way you can teach expectations about customer service.
The big difference is we can prove the bias in an AI. It's a very interesting curveball when it comes to demonstrating liability in the choice making process.
Usually showing that input data is biased in some way or contains a potentially bad field will result in winning a discrimination case.
If neither side can conclusively prove what the model is doing, but the plaintiff shows that it was trained on data that allows for discrimination and that the model is designed to learn patterns in its training data, then the defendant is on the hook for showing the model is unbiased. For the most part, people design input data uncritically, and some of the fields allow for discrimination.
That describes all data; there would be no need to show anything.
There was a paper a while ago by a team of doctors who wanted to use classifiers on X-ray images and FREAKED OUT when they realized that the first thing the classifier did was categorize every image by the race of the patient. As they note, it will always do this regardless of whether the race of the patient is present in the input data. (Because obviously, the race of the patient is part of the information conveyed by the structure of their body, which is what an X-ray shows.)
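A toy version of the same effect (synthetic numbers, not X-ray pixels): drop the sensitive attribute from the inputs, and even a trivial classifier recovers it from a correlated measurement.

```python
import random

random.seed(1)
samples = []
for _ in range(5000):
    group = random.randint(0, 1)            # sensitive attribute, never a feature
    feature = group + random.gauss(0, 0.5)  # physically correlated measurement
    samples.append((feature, group))

# A one-feature "classifier": just threshold the measurement at the midpoint.
correct = sum((f > 0.5) == bool(g) for f, g in samples)
accuracy = correct / len(samples)
# accuracy lands far above the 50% a genuinely blind model would get,
# even though the attribute itself never appears in the input.
```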
I am not saying we humans are perfect at handling that complexity; hence, when humans are the ones behind such decision making, it is rarely done by a single person.
I find it all very interestingly paradoxical, regardless of everything else.
You can attack the problem by going nuclear and omitting any data that could serve as a proxy for the discriminatory signals, but it's also possible to explicitly feed the discriminatory signals into the model and enforce that no combination of other data amounting to knowledge about them can influence the model's predictions.
There was a great paper floating around for a bit about how you could actually manage that as a data augmentation step for broad classes of models (constructing a new data set which removed implicit biases assuming certain mild constraints on the model being trained on it). I'm having a bit of trouble finding the original while on mobile, but they described the problem as equivalent to "database reconstruction" in case that helps narrow down your search.
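I can't vouch for which paper that was, but one well-known preprocessing method in this family is Kamiran & Calders-style reweighing: weight each (group, label) cell so that, under the weighted distribution, group membership carries no information about the label. A minimal sketch:

```python
from collections import Counter

def reweigh(samples):
    """samples: list of (group, label) pairs. Returns one weight per
    (group, label) cell that makes group and label statistically
    independent under the weighted distribution."""
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    cell_counts = Counter(samples)
    return {(g, y): (group_counts[g] * label_counts[y]) / (n * c)
            for (g, y), c in cell_counts.items()}

# Group "a" was hired 75% of the time in the training data, group "b" 25%:
data = [("a", 1)] * 30 + [("a", 0)] * 10 + [("b", 1)] * 10 + [("b", 0)] * 30
w = reweigh(data)
# Under these weights both groups have a 50% effective hire rate, so a
# model trained on the weighted data can't profit from the group signal.
```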
People have a limited tolerance. Then they start telling you to take care of yourself and strongly encouraging you to leave.
It’s why I switched to remote work before the pandemic. If a bad episode starts up I can cover for it much more easily.
I wonder if the answer to the disability question is something the AI uses when evaluating candidates, and if it has learned to just toss out anyone who says yes?
High School: Florida School for the Deaf and the Blind.
Other Experience: President of Yale Disability Awareness Club (2009-2011).
1. The act of discriminating.
2. The ability or power to see or make fine distinctions; discernment.
3. Treatment or consideration based on class or category, such as race or gender, rather than individual merit; partiality or prejudice.
You are talking about 2. The article is talking about 3.
3. is illegal in hiring. 2. is not.
It's just that simple. 2 creates categories implicitly.
Applying ML to hiring shows a profound lack of awareness of both ML and HR. Especially using previous hiring decisions as a training set. Like using a chainsaw to fasten trees to the ground.
Was it Amazon?
Anyway, this feels different to me; IIRC you can't ask disability-related questions in hiring aside from the "self identify" types at the end? So how would an ML model find applicants with any kind of disability unless it was freely volunteered in a resume/CV?
Or is that the advisory? "Don't do this?"
A few off the top of my head:
(1) Signals in how a CV is formatted or written (e.g. indicating dyslexia or other neurological variances, especially those comorbid with physiological disabilities)
(2) If a CV reports short tenure at companies with long breaks in between (e.g. chronic illnesses or flare-ups leading to burnout or medical leave)
(3) There are probably many unintuitive correlates with respect to interests, roles acquired, and skillsets. Consider what experiences, institutions, skillsets and roles are more or less accessible to disabled folk than to others.
(4) Most importantly: Disability is associated with lower education and lower economic opportunity, therefore supposed markers of success ("merit") in CVs may only reflect existing societal inequities. *
* This is one of the reasons meritocratic "blind" hiring processes are not as equitable as they might seem; they can reflect + re-entrench the current inequitable distribution of "merit".
This is a case where it may benefit a candidate to disclose any disabilities behind such an erratic employment pattern. I don't proceed with candidates who can't explain excessively frequent job hops, because it signals they can't hold a job for reasons I'd want to avoid hiring, like incompetence or a difficult personality. It's a totally different matter if the candidate explains their erratic employment with past medical issues that have since been treated.
And what if they haven't been? Disability isn't usually a temporary thing, or even necessarily medical in nature (it's crucial to see disability as a distinct axis from illness!). Hiring with biases against career fluctuations is, I'm afraid to point out, inherently ableist. And it should not be incumbent on the individual to map their experienced inequities and difficulties across to every single employer.
They are not meant to be "equitable"; they're meant to provide equality of opportunity, not equality of outcome.
This is important, and tricky: if we get across-the-board decreases in hiring the best person for the job, we end up with a less productive economy. That means our hiring practices compete directly with other aims, like solving poverty.
In machine learning this happens all the time! Stopping models from learning this from the most surprising sources is an active area of research. Models are far more creative in finding these patterns than we are.
It can learn that people with disabilities tend to also work with accessibility teams. It can learn that you're more likely to have a disability if you went to certain schools (like a school for the blind, even if you and I wouldn't recognize the name). Or if you work at certain companies or colleges who specialize in this. Or if you publish an article and put it on your CV. Or if you link to your github and the software looks there as well for some keywords. Or if among the keywords and skills that you have you list something that is more likely to be related to accessibility. I'm sure these days software also looks at your linkedin, if you are connected with people who are disability advocates you are far more likely to have a disability.
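To make that concrete, here's a toy simulation (all rates invented for the sketch): the training data contains no disability column at all, yet a model that scores on a single proxy feature reproduces the historical bias anyway.

```python
# Proxy-learning sketch with synthetic data. The "model" below only
# ever sees one CV feature (accessibility club membership) -- never the
# disability flag -- yet its scores track disability status.
import random
random.seed(0)

rows = []
for _ in range(1000):
    disabled = random.random() < 0.2
    # Proxy: disabled applicants are far more likely to list an
    # accessibility club on their CV (invented rates).
    club = random.random() < (0.7 if disabled else 0.05)
    # Biased historical decisions the model would be trained to imitate.
    hired = random.random() < (0.2 if disabled else 0.6)
    rows.append((disabled, club, hired))

def p_hire(club_value):
    """Empirical hire rate given only the proxy feature -- the simplest
    possible 'model' fit to the biased history."""
    hires = [h for d, c, h in rows if c == club_value]
    return sum(hires) / len(hires)

# Club members score markedly worse, and club membership tracks disability.
print(p_hire(True), p_hire(False))
```

Dropping the disability column from the training data does nothing here; the bias rides in on the proxy.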
> Or is that the advisory? "Don't do this?"
Not so easy. Algorithms learn this information internally and then use it in subtle ways. Like they might decide someone isn't a good fit and that decision may in part be correlated with disability. Disability need not exist anywhere in the system, but the system has still learned to discriminate against disabled people.
> For example, some hiring technologies try to predict who will be a good employee by comparing applicants to current successful employees. Because people with disabilities have historically been excluded from many jobs and may not be a part of the employer’s current staff, this may result in discrimination.
> For example, if a county government uses facial and voice analysis technologies to evaluate applicants’ skills and abilities, people with disabilities like autism or speech impairments may be screened out, even if they are qualified for the job.
> For example, an applicant to a school district with a vision impairment may get passed over for a staff assistant job because they do poorly on a computer-based test that requires them to see, even though that applicant is able to do the job.
> For example, if a city government uses an online interview program that does not work with a blind applicant’s computer screen-reader program, the government must provide a reasonable accommodation for the interview, such as an accessible version of the program, unless it would create an undue hardship for the city government.