
Beware of Automated Hiring - dredmorbius
https://www.nytimes.com/2019/10/08/opinion/ai-hiring-discrimination.html
======
ppod
There is too much performative expression of concern and not enough discussion
of actual factors. The problems are basically the same as for human bias, and
the automation of the process gives us an opportunity to examine them
transparently.

Having lacrosse or sailing as a hobby on a CV is probably correlated with job
performance, because those things are independently correlated with wealth and
greater educational opportunities earlier in life. It's pretty simple to
exclude them from a logistic regression. But what about education? A degree
from an Ivy League university is correlated with job performance for both good
and unfair reasons, and the correlation is very noisy. Which variables should
be allowed? Which ones does the human system use, and how are they weighted?
What causal models are appropriate? I don't know the answers, but the
conversation often doesn't even happen at this level. And this is just at the
level of transparent models, like graphical probabilistic models or structural
equation modeling. I generally think there's too much AI alarmism, but in this
case banning non-linear machine-learning methods trained on large, high-
dimensional data sets might be appropriate.
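
For concreteness, a minimal sketch in Python of how mechanically easy the
exclusion step itself is (the column names, data file, and outcome label are
all hypothetical); everything hard lives in deciding what belongs in the
feature list at all:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    cvs = pd.read_csv("applicants.csv")  # hypothetical historical CV data
    features = ["years_experience", "degree_ivy_league",
                "hobby_lacrosse", "hobby_sailing"]

    # Dropping the wealth proxies is the trivial part...
    X = cvs[features].drop(columns=["hobby_lacrosse", "hobby_sailing"])
    y = cvs["performed_well"]  # hypothetical outcome label

    model = LogisticRegression().fit(X, y)
    # ...but should "degree_ivy_league" stay in, and at what weight?
    # That's the conversation that usually doesn't happen.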

~~~
heavenlyblue
Wealthy people are also lazy and have a much higher tendency to politicise
their environments (just like they do with their parents when they need their
money).

Unlike a poor but qualified-enough candidate, they don't face the prospect of
going back to the sh*thole they came from, so they would definitely not be as
motivated.

------
blackbrokkoli
The fundamental problem here seems to be _scale_ (as so often in tech).

I don't think it is correct to say that algorithms "enabled discrimination
against job applicants"; if they had, we wouldn't already be having
discussions about ageism or about how much personal information should go on
a CV. Closed-loop systems are not something AI brought upon our world either,
otherwise the term "white privilege" would never have made it into public
consciousness. And there are certainly racist hiring managers, and
interviewers who mentally discard anyone who looks like their ex.

The key is: all these effects are fairly contained in comparison. Companies
can be sued, people can be fired, and even systemic biases can be evaded in a
lot of cases. I am not trying to say that e.g. sexism in hiring is harmless or
anything; I'm just trying to weigh two evils.

We do enter a disturbingly dystopian world, though, when the AI-hiring
start-up brings its fancy algo to the mighty gods of cloud computing and just
scales into infinity. Now suddenly hundreds of companies and millions of
people are hit by the same biases. With discrimination at that level, personal
damage compensation becomes a warm gesture at most, because the cultural
damage done by subconsciously establishing a group as subpar workers is
plainly immeasurable.

And if you think about it, this exact problem transcends hiring. You have the
right to kick everybody carrying something resembling a paint spray can out of
your mall to prevent vandalism. You may discriminate against some people who
were just trying to get something done about the flies in their kitchen, sure.
Happens. But if Google Maps, or Cloudflare, or Netflix does it, they are ever
so subtly nudging the tech interest, or the accessibility of information, or
the quality of life, or the probability of getting hired, down by a fraction
of a fraction of a percent. For a billion people, or for all Hispanic people
in the US, or for everyone using sub-$300 phones.

That is a very real, very dark problem we should be all aware of in its
greater context!

~~~
SolaceQuantum
I wonder if it's possible to audit AI for discriminatory practices. Of course
this would require a completely different set of legal processes (needing
access to the algorithm, needing access to enough data to prove a bias) that
could take decades to implement in a way that covers even a slim majority of
applicable, obvious cases like race, sex, pregnancy status, religion,
national origin, veteran status, and age.

Decades of the bar being just that much higher for the disadvantaged
demographic. Potentially, a generation of disenfranchisement, which we know
has generational effects. (The USA arguably still hasn't 'recovered' from
segregation's economic effects on the populace)
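
One partial precedent already exists in US employment law: the EEOC's
four-fifths rule, under which a selection rate for any group below 80% of the
highest group's rate counts as evidence of adverse impact. A toy sketch in
Python, with made-up group names and counts:

    # Four-fifths rule: flag if any group's selection rate falls below
    # 80% of the highest group's rate. Numbers here are invented.
    rates = {
        "group_a": 50 / 200,  # 25% of group A applicants selected
        "group_b": 30 / 200,  # 15% of group B applicants selected
    }
    impact_ratio = min(rates.values()) / max(rates.values())
    print(f"adverse impact ratio: {impact_ratio:.2f}")  # 0.60 < 0.80 -> flag

Applying that standard to an opaque model still requires access to the
selection data, which is exactly the legal-process gap described above.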

~~~
naniwaduni
This would require it to be generally possible to audit AI at all, which would
be a nice problem to solve.

~~~
big_chungus
DARPA has this same problem: most people aren't willing to just "trust the
machine", and without explainability it's hard to know where a model went
wrong, how to improve it, or who messed up. See XAI:
[https://www.darpa.mil/program/explainable-artificial-intelli...](https://www.darpa.mil/program/explainable-artificial-intelligence)

------
6gvONxR4sf7o
>So when a plaintiff using a hiring platform encounters a problematic design
feature — like platforms that check for gaps in employment — she should be
able to bring a lawsuit on the basis of discrimination per se, and the
employer would then be required to provide statistical proof from internal and
external audits to show that its hiring platform is not unlawfully
discriminating against certain groups.

How would this work? Without ground-truth labels of "should have been hired"
and "shouldn't have been hired", how can you demonstrate that your algorithm
isn't biased? I mean counterfactual labels like "we hired this person the
algorithm said not to hire, and they turned out good/bad."

Base rates will be different, which means any algo _will_ be biased in some
way (see the impossibility theorems in [0]), so the question is what
demonstration of non-bias will be sufficient without counterfactual data.
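
A toy instance of the trade-off, with made-up confusion matrices for two
groups whose base rates differ: precision comes out identical, but the false
positive rates end up far apart, so some fairness metric always breaks:

    # Made-up confusion matrices (tp, fp, fn, tn) for two groups.
    def summarize(tp, fp, fn, tn):
        base = (tp + fn) / (tp + fp + fn + tn)  # base rate of "good hire"
        precision = tp / (tp + fp)
        fpr = fp / (fp + tn)
        return base, precision, fpr

    for group, counts in {"A": (40, 10, 10, 40), "B": (16, 4, 4, 76)}.items():
        base, precision, fpr = summarize(*counts)
        print(f"{group}: base={base:.0%} precision={precision:.0%} FPR={fpr:.0%}")
    # A: base=50% precision=80% FPR=20%
    # B: base=20% precision=80% FPR=5%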

[0]
[https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/sl...](https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/slides.pdf)

~~~
jldugger
The burden of proof may need to be on the employer here. Much of the research
in this area revolves around extracting human-readable decision support, e.g.
"I recommend this person for hire 25 percent because they have experience in
devops and 75 percent because they were on the men's cross country crew."
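
For a plain linear scorer that readout is easy to produce: report each
feature's share of the candidate's total score. A sketch with hypothetical
feature names and weights:

    # Hypothetical model weights and a candidate's feature values.
    weights = {"devops_experience": 1.0, "mens_xc_crew": 3.0}
    candidate = {"devops_experience": 1, "mens_xc_crew": 1}

    contributions = {f: w * candidate[f] for f, w in weights.items()}
    total = sum(contributions.values())
    for feature, value in contributions.items():
        print(f"{feature}: {value / total:.0%} of the recommendation")
    # devops_experience: 25%, mens_xc_crew: 75% -- a readout a human
    # auditor (or a judge) can actually evaluate.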

~~~
vinceguidry
If the history of credit unions is anything to go by, then due process rules
will enshrine the ability of individuals to force the employer to do their
due diligence. But as this is initiated by the individual, only a relatively
small number of people will do this.

------
radcon
I applied to a few jobs at United Healthcare and they sent me HireVue
interview invitations. Wasn't sure what HireVue was so I looked it up and was
completely appalled that this sort of AI hiring technology is actually being
used already.

Of course they frame it as a benefit to you (interview whenever and wherever
you want!) but we all know their real motivation...

------
degenerate
Archive link to full article:
[http://archive.is/1szN3](http://archive.is/1szN3)

------
foxfired
The reason hiring gets automated is that there is a flood of people applying
to jobs. Here is a rough list:

1. Graduates looking for their first job

2. Unemployed people looking for work

3. Employed people looking for better work

4. Recruiters scouring the web

5. Automated systems that apply everywhere they can

6. Spam

People from 1-3 are lost in the noise of 4-6. Automation should only try to
discard the last three. That's it!

Automation will save you time and money in the hiring process. You will spend
that time and money in HR and employee management, which will end up costing
you more. The more you rely on automation, the less you will personally screen
people, which means you don't really know who you are hiring.

You are better off using my secret algorithm[1] to select candidates, then
personally reviewing their resumes.

[1]: SELECT * FROM candidates ORDER BY RAND();

~~~
bigwavedave
I once worked with a man who would take half of the resumes he'd receive and
throw them out without even glancing at any of them. His reasoning? "I only
hire lucky people."

~~~
ThrowMeAwayOkay
Wow!

------
blister
Holy Moly! The author of this article makes a very scary argument, further
shifting the burden of proof onto employers.

The problem with hiring is that the supply/demand ratio is hugely
disproportionate. We might have one position to fill and 100+ applicants. How
can any company protect itself against allegations of illegal hiring criteria
in that kind of environment? Right now the best protection is to say "we hired
the most qualified candidate", but even that argument is shaky. What if the
most "technically qualified" candidate performed the worst in the interview
process?

Hiring is scary enough already without bringing the law into the picture. :/

~~~
maxaf
Hiring decisions can be made defensible by openly publishing a transparent
process that reveals metrics at the end for all candidates.

~~~
Konnstann
If the hiring process is transparent, wouldn't job seekers just figure out how
to game it? Unless the process is kept confidential and only comes out in
lawsuits, this would only serve to further obscure which candidates can
actually do the job, as opposed to just getting hired.

~~~
maxaf
At FANG-style companies the decidedly non-transparent process is already well
known and is gamed on a regular basis.

~~~
Konnstann
AFAIK the only part that is gamed is the part that is fairly transparent: the
LeetCode problems.

------
arrosenberg
It would seem to me that this is another symptom of overly concentrated
industries. Automated hiring is driven by a desire to optimize: either to cut
costs, or to handle a situation where applicants outnumber openings by orders
of magnitude. Automated hiring is another concentrating, anti-people solution.
We should instead be using anti-trust law and regulation to create a system
that encourages a greater number of firms, distributed over a much wider
geography. That's the democratic solution to this problem, and it obviates the
need for these kinds of fixes.

I would also disagree with the premise of the article: the government has
shown time and time again that it is not good at compliance work. It is good
at investigating and regulating. Trying to audit these systems is an
impossible task that will end with the responsible agency captured by the
industry it oversees. The only permanent, self-executing solution is to make
the companies small enough that these systems are no longer viable.

------
dredmorbius
This article expresses one piece of a problem that increasingly suggests to
me that it's not hiring or job-hunting that's broken, but the model of
employment itself.

(For yet another example of broken hiring, see this story, also presently on
the HN front page: [https://medium.com/@bellmar/sre-as-a-lifestyle-choice-
de9f5a...](https://medium.com/@bellmar/sre-as-a-lifestyle-choice-
de9f5a82d73d))

Skills assessment (and assertion) is difficult. Skills _relevance_ is hard to
assess. Incorporation and business as risk-externalising systems have worked
rather too well (though the problem's hardly new), with much of employment
risk being shifted from employers to employees, and from large employers to
small ones (answering the anticipated small-business / startup response). The
entities most able to manage risk are those best equipped to shift it
elsewhere.

There are also the problems of offshoring, outsourcing, and shifting to the
modern variants of indentured servitude generally: employment-contingent
temporary-visa workers (H-1B in the US, with similar equivalents elsewhere);
gig, contingent, temporary, part-time, and other precarious employment
arrangements; and the tying of specific benefits (most especially healthcare)
to employment status.

Don't even get me started on pensions/retirement, professional ethics,
whistleblower protections, and labour organisation.

