Beware of Automated Hiring (nytimes.com)
66 points by dredmorbius 13 days ago | 51 comments





There is too much performative expression of concern and not enough discussion of actual factors. The problems are basically the same as for human bias, and the automation of the process gives us an opportunity to examine them transparently.

Having lacrosse or sailing as a hobby on a CV is probably correlated with job performance, because those things are independently correlated with wealth and greater educational opportunities earlier in life. It's pretty simple to exclude them from a logistic regression. But what about education? A degree from an Ivy League university is correlated with job performance for both good and unfair reasons, and the correlation is very noisy. Which variables should be allowed? Which ones does the human system use, and how are they weighted? What causal models are appropriate? I don't know the answers, but the conversation often doesn't even happen at this level. And this is just at the level of transparent models, like graphical probabilistic models or structural equation modeling. I generally think there's too much AI alarmism, but in this case I think banning non-linear machine learning methods using large high-dimensional training sets might be appropriate.
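To make the "exclude them from a logistic regression" point concrete, here's a minimal sketch in Python with scikit-learn. The data and column names are made up for illustration; the idea is just that with a transparent model you drop the wealth-proxy features before fitting and can still read the weights afterwards:

    # Sketch: fit a screening model only on explicitly allowed features,
    # dropping wealth proxies like "lacrosse"/"sailing" hobby flags.
    # Data and column names are made up for illustration.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    candidates = pd.DataFrame({
        "years_experience": rng.integers(0, 15, n),
        "relevant_certifications": rng.integers(0, 4, n),
        "hobby_lacrosse": rng.integers(0, 2, n),   # wealth proxy, not merit
        "hobby_sailing": rng.integers(0, 2, n),    # wealth proxy, not merit
    })
    candidates["performed_well"] = (
        candidates["years_experience"] + rng.normal(0, 3, n) > 7
    ).astype(int)

    wealth_proxies = ["hobby_lacrosse", "hobby_sailing"]
    allowed = [c for c in candidates.columns
               if c not in wealth_proxies + ["performed_well"]]

    model = LogisticRegression(max_iter=1000)
    model.fit(candidates[allowed], candidates["performed_well"])

    # The coefficients stay inspectable, which is the point of insisting on
    # transparent models over opaque non-linear ones.
    for name, coef in zip(allowed, model.coef_[0]):
        print(f"{name}: {coef:+.3f}")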


Wealthy people also tend to be lazy and have a much higher tendency to politicise their environments (just as they do with their parents when they need their money).

Compared to a poor but qualified-enough candidate, they don't face the prospect of going back to the sh*thole they came from if things don't work out, so they would definitely not be as motivated.


The fundamental problem here seems to be scale (as is often the case in tech).

I don't think it is correct to say that algorithms "enabled discrimination against job applicants". Otherwise we wouldn't have discussions about ageism or the amount of personal information that should go on a CV. Closed-loop systems are also not something AI brought upon our world, otherwise the term "white privilege" would not have made it into public consciousness. There are certainly racist hiring managers, and interviewers who mentally discard anyone who looks like their ex.

The key is: all these effects are fairly contained in comparison. Companies can be sued, people can be fired, and even systemic biases can be evaded in a lot of cases. I am not trying to say that e.g. sexism in hiring is harmless or anything, I'm just trying to weigh two evils.

We do enter a disturbingly dystopian world when the AI-hiring start-up brings its fancy algo to the mighty gods of cloud computing and just scales into infinity. Now suddenly hundreds of companies and millions of people are hit by the biases. With discrimination at that level, personal damage compensation becomes a warm gesture at most, because the cultural deterioration done by subconsciously establishing a group as subpar workers is plainly immeasurable.

And if you think about it, this exact problem transcends hiring. You have the right to kick everybody who is carrying something resembling a paint spray can out of your mall to prevent vandalism. You may discriminate against some people who are just trying to get something done about the flies in their kitchen, sure. Happens. But if Google Maps, or Cloudflare, or Netflix does it, they are ever so subtly nudging the tech interest, or the accessibility of information, or the quality of life, or the probability of getting hired, down by a fraction of a fraction of a percent. For a billion people, or for all Hispanic people in the US, or for everyone using sub-$300 phones.

That is a very real, very dark problem we should all be aware of in its greater context!


The other thing you didn't touch on: lack of variance.

As markets consolidate, most companies will use one, maybe three different AI products to rate people. So if an AI decides that you're not fit for work, you're done for - no matter where you apply, it'll be that one AI returning the exact same result. There's no variance of different people looking at applications.


The markets will sort it out, though. That filtered-out candidate has some inherent worth, and companies can take advantage of that. Competent hirers will then always use multiple AI screeners and experiment with hiring to make sure they're not losing out on talent that's been inaccurately filtered out.

That's assuming you never change your CV.

You're assuming that an inscrutable AI is basing its decisions off of something you can change on your CV.

Maybe it doesn't like your name, one of your past employers, or Oxford commas. It's effectively a black box, so who knows?


Well sure, but HR people are no less of a black box.

People have well-documented biases; besides reflecting the biases of humans, AI systems will also have arbitrary biases of their own that are hard to predict.

CVs are already a horribly low signal - people lie way too much on them. I wouldn't expect an AI to take into account anything you personally put on paper, just as insurance/credit-scoring AIs don't care much about what you put down on paper.

I believe you're missing the whole point. AI is often used as a fancy name for classifiers in general, including supervised and unsupervised learning algorithms.

Filtering the initial batch of candidates is basically a classification problem. It is also very time-consuming to perform by hand.

If recruiters are able to use supervised/unsupervised algorithms to filter out 90% of the initial candidates with a low misclassification error and without wasting valuable time, then they'll be able to operate more efficiently.

The trick is telling how far the automatic classification should go in helping pick the best candidates. Some companies use stupid fizzbuzz tests to weed out candidates based on a quantifiable index, abusing a metric whose value is negligible but which is very effective at weeding out a considerable portion of the initial candidates.
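As a rough sketch of what that filtering step looks like in practice (Python with scikit-learn; the features and labels below are synthetic stand-ins for CV-derived data), the recruiter's problem really is just thresholding a classifier's scores and then checking, on held-out data, how many good candidates the cut throws away:

    # Sketch: screen out the bottom ~90% of applicants by classifier score
    # and estimate how many genuinely good candidates that rejects.
    # X and y are synthetic stand-ins for CV features and past hire outcomes.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))                          # stand-in for CV features
    y = (X[:, 0] + rng.normal(size=2000) > 1).astype(int)    # stand-in for "worked out well"

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    scores = clf.predict_proba(X_test)[:, 1]
    cutoff = np.quantile(scores, 0.9)          # keep only the top 10% of applicants
    kept = scores >= cutoff

    # The misclassification that matters here: good candidates among the rejected.
    good_rejected = np.mean(y_test[~kept] == 1)
    print(f"good candidates among the rejected 90%: {good_rejected:.1%}")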


The case for automated discrimination is made very well by Cathy O'Neil in Weapons of Math Destruction (https://www.worldcat.org/title/weapons-of-math-destruction-h...). I recommend it strongly.

There are several classes of problems. One is of course intentional discrimination. The larger problems are likely either a failure to care about, consider, or attend to problems, or, most insidiously, side effects which arise entirely unintentionally.

The fact that much gradient descent machine learning is opaque to explanation means that such AI essentially becomes a new form of knowledge: like science, it provides answers, but unlike traditional Baconian scientific methods, it doesn't answer why or how, and fails to provide cause or mechanism.

Given use in increasingly complex, large-scale systems, without ready human review or oversight, this creates the conditions for numerous unfortunate and probable consequences.


It's already the case with credit-rating companies that slowly nudged themselves everywhere and are even being used by the government to check your identity. I bet that having a mistake on file at Experian can make your life a nightmare.

Once the market has consolidated itself there will be 2-3 companies controlling the vast majority of all hiring processes. At that point it will be easier to just ask what type of job their AI decided you would be a good fit for.

Think about how inefficient the hiring process is right now. Humans can only process so much information. There are only so many companies you can learn about and apply to. Most of the time people end up in a local maximum and aren't really happy with their job. In contrast, the AI has access to hundreds of thousands of companies and profiles that it can match with each other. They will claim that the AI knows what you truly desire and will find the best fit that will make you truly happy.

Welcome to the future, where you will get lessons in virtual school on how best to be perceived by the big AI.


And thus instead of being outwitted by a superintelligent AI maximizing paperclips, we end up with a pantheon of blind idiot gods, tended to by priests who don't understand them, and kept alive with human sacrifice.

I wonder if it's possible to audit AI for discriminatory practices. Of course this would require a completely different set of legal processes (needing access to the algorithm, needing access to enough data to prove a bias) that could take decades to implement in a way that covers even a slim majority of applicable, obvious cases like race, sex, pregnancy status, religion, nation of origin, veteran status, and age.

Decades of the bar being just that much higher for the disadvantaged demographic. Potentially, a generation of disenfranchisement, which we know has generational effects. (The USA arguably still hasn't 'recovered' from segregation's economic effects on the populace)


This would require it to be generally possible to audit AI at all, which would be a nice problem to solve.

DARPA has this same problem; most people aren't just willing to "trust the machine", and without explainability it's hard to know where it went wrong, how to improve it, or who messed up. See XAI: https://www.darpa.mil/program/explainable-artificial-intelli...

All the EEOC would need to do is to prove that there is disparate impact[1] based solely on the hiring outcomes of companies using AI to hire. This is what they already do.

[1] https://en.wikipedia.org/wiki/Disparate_impact
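And the screening test the EEOC applies to outcomes is largely arithmetic. A sketch of the usual "four-fifths rule" check on selection rates (the group labels and counts below are made up for illustration):

    # Sketch of the four-fifths (80%) rule: compare each group's selection rate
    # to the highest group's rate; a ratio below 0.8 is prima facie evidence
    # of disparate impact. Counts are made up for illustration.
    outcomes = {
        "group_a": {"applied": 400, "hired": 60},
        "group_b": {"applied": 300, "hired": 20},
    }

    rates = {g: o["hired"] / o["applied"] for g, o in outcomes.items()}
    best = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / best
        flag = "potential disparate impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.1%}, ratio {ratio:.2f} -> {flag}")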


Given that AI are often not explanatory (see my earlier comment in this subthread), access to the algorithm may not be strictly necessary. Though the ability to black-box test it against a wide range of possible inputs might be a good thing to aim for.
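A black-box audit along those lines can be as simple as holding a CV fixed, perturbing one attribute at a time, and watching the score move. A sketch in Python; score_cv here is a hypothetical stand-in for the vendor's opaque model (in a real audit it would be an API call):

    # Sketch of counterfactual probing of an opaque scoring system:
    # vary one attribute at a time and compare scores against a baseline.
    def score_cv(cv: dict) -> float:
        # Stand-in for the vendor's black-box model; in a real audit this
        # would be a call to the hiring platform. Dummy scoring so it runs.
        return 0.5 - 0.01 * cv.get("employment_gap_months", 0)

    base_cv = {"name": "Jamie Smith", "employment_gap_months": 0, "school": "State U"}

    probes = [
        {"name": "Lakisha Washington"},      # name swap
        {"employment_gap_months": 18},       # gap in employment
    ]

    baseline = score_cv(base_cv)
    for change in probes:
        variant = {**base_cv, **change}
        print(change, "delta:", round(score_cv(variant) - baseline, 3))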

>So when a plaintiff using a hiring platform encounters a problematic design feature — like platforms that check for gaps in employment — she should be able to bring a lawsuit on the basis of discrimination per se, and the employer would then be required to provide statistical proof from internal and external audits to show that its hiring platform is not unlawfully discriminating against certain groups.

How would this work? Without ground truth labels of "should have been hired" and "shouldn't have been hired" how can you demonstrate that your algorithm isn't biased? I mean counterfactual labels like "we hired this person the algorithm said not to hire, and they turned out good/bad."

Base rates will be different, which means any algo will be biased in some way (see these impossibility theorems [0]), so the question is what demonstration of non-bias will be sufficient without counterfactual data.

[0] https://www.chrisstucchio.com/pubs/slides/crunchconf_2018/sl...
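To see why base rates alone force a trade-off, here's a small numeric sketch (numbers invented): give two groups the exact same classifier error rates, and the precision of a "hire" recommendation still differs purely because the base rates differ, which is the crux of the impossibility results linked above.

    # Sketch: equal true-positive and false-positive rates across groups,
    # yet unequal predictive value whenever base rates differ.
    # Numbers are invented for illustration.
    tpr, fpr = 0.8, 0.1   # same classifier behaviour for both groups

    for group, base_rate in [("group_a", 0.5), ("group_b", 0.2)]:
        # P(actually good hire | flagged as good), via Bayes' rule
        ppv = (tpr * base_rate) / (tpr * base_rate + fpr * (1 - base_rate))
        print(f"{group}: base rate {base_rate:.0%}, precision of a 'hire' flag {ppv:.1%}")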


Burden of proof may need to be on the employer here. Much of the research here revolves around extracting human readable decision support, e.g. "I recommend this person for hire 25 percent because they have experience in devops and 75 percent because they were on the men's cross country crew."
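For a simple linear or logistic model, that kind of percentage breakdown falls straight out of the per-feature contributions. A sketch with made-up weights and feature names:

    # Sketch: turn a linear model's per-feature contributions into an
    # "X percent because of Y" style explanation. Weights/values are made up.
    weights = {"devops_experience": 1.2, "cross_country_crew": 0.4, "employment_gap": -0.6}
    candidate = {"devops_experience": 1, "cross_country_crew": 1, "employment_gap": 0}

    contributions = {f: weights[f] * candidate[f] for f in weights}
    total = sum(abs(c) for c in contributions.values()) or 1.0

    for feature, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"{feature}: {abs(contrib) / total:.0%} of the recommendation "
              f"({'for' if contrib >= 0 else 'against'})")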

If the history of credit reporting is anything to go by, then due-process rules will enshrine the ability of individuals to force the employer to do their due diligence. But as this is initiated by the individual, only a relatively small number of people will do this.

I applied to a few jobs at United Healthcare and they sent me HireVue interview invitations. Wasn't sure what HireVue was so I looked it up and was completely appalled that this sort of AI hiring technology is actually being used already.

Of course they frame it as a benefit to you (interview whenever and wherever you want!) but we all know their real motivation...


Archive link to full article: http://archive.is/1szN3

The reason hiring becomes automated is because there is a flood of people who apply to jobs. Here is a rough list:

1. Graduates looking for their first job

2. Unemployed people looking for work

3. Employed people looking for better work

4. Recruiters scouring the web.

5. Automated systems that apply everywhere they can.

6. Spam.

The people from 1-3 are lost in the noise of 4-6. Automation should only try to discard the last 3. That's it!

Automation will save you time and money in the hiring process. You will spend that time and money in HR and employee management, which will end up costing you more. The more you rely on automation, the less you will personally screen people, which means you don't really know who you are hiring.

You are better off using my secret algorithm[1] to select candidates and then personally reviewing their resumes.

[1]: SELECT * FROM candidates ORDER BY RAND();


I once worked with a man who would take half of the resumes he'd receive and throw them out without even glancing at any of them. His reasoning? "I only hire lucky people."

Wow!

Holy moly! The author of this paper makes a very scary argument for further shifting the burden of proof onto employers.

The problem with hiring is that there is a hugely disproportionate supply/demand ratio. We might have 1 position to fill and 100+ applicants. How can any company correctly protect themselves against allegations of illegal hiring criteria in this type of environment? Right now the best protection is to say "we hired the most qualified candidate", but even that argument is shaky. What if the most "technically qualified" candidate performed the worst in the interview process?

Hiring is scary enough already without bringing the law into the picture. :/


Well, the law is already in the picture; discrimination lawsuits aren't a new thing. And it's also true that just because you encode certain things in an algorithm (or just let an algorithm learn from existing discriminatory practices), you can't absolve yourself of responsibility.

There are practices for employment advertising/recruiting that certainly highlight issues in a new way. Is doing on-campus recruiting or advertising in media that targets specific young demographics illegally discriminatory? Probably not. (IANAL) Is targeting young people using Facebook algorithms OK? Dunno.


> Hiring is scary enough already without bringing the law into the picture.

This is at the root of a lot of social issues lately.

One person, let's say it's a person of colour, is afraid of not being able to get a job at all.

Another person, who is doing the hiring, is afraid of someone holding them accountable for actually hiring qualified applicants.

This is a very asymmetrical pair of concerns. But if you're doing the hiring, it feels like the worst thing in the world to have someone second-guess your decisions.

You are so busy worrying about this that you don't even consider how scary it is to look for a job, and have robots tell you that you aren't getting an offer because your diction and facial expressions don't match those of people who the company has already hired.


Hiring decisions can be made defensible by openly publishing a transparent process that reveals metrics at the end for all candidates.

If the hiring process is transparent, wouldn't job seekers just figure out how to game it? Unless the process is confidential, only comes out in lawsuits, and isn't disclosed to the public, this would only serve to further obscure the candidates who can actually do the job, as opposed to those who can merely get hired.

At FANG-style companies the decidedly non-transparent process is already well known and is gamed on a regular basis.

AFAIK the only part that is gamed is the one that is fairly transparent, the leetcode problems.

They aren't really trying to hide it, they send you an email with a page or two full of topics and links to help you prepare for the interview. The current state where people practice for their interview is exactly what they want.

Yes. But that's already the case and is usually a useful screening criterion anyway. I.e., I don't actually care if you smoke pot or not. But I care whether or not you are smart enough to lie about it during an interview or job screening process, and whether you can quit long enough to pass a drug test. Because that's relevant.

If you are intelligent enough to game the system into looking like you meet all of my criteria, you can probably also "game the system" into actually meeting my criteria on the job too.


That would seem to have a major chilling effect on hiring practices across the board. For what it's worth, the reduced friction for companies in the hiring process, i.e., not having to explain every decision, moves the process along fast enough to keep the economy churning. I'm not sure that would be the case if all that bureaucracy were installed into the process.

I can't wait to see the objective breakdown explaining how I scored 5 out of a possible 7 in "how much they like me".

The point of a transparent process would be to make the rubric as non-subjective as possible. A "how much they like me" criterion, if it were still part of the decision, would become very obvious if/when a company selected some candidate that was not in fact "the best" according to their transparent rubric.

But how do you put team fit into a rubric? It's absolutely an element - a team of 1Xers will be more productive than a team of 10Xers who all can't stand each other. I wouldn't want to be part of a team where everyone has an excellent objective mark but is just a bunch of jerks. What part of the objective rubric accounts for "can't play well with others"?

One way is to define a set of bullet points that are representative of the team’s culture. This isn’t always easy, but is a worthwhile exercise even outside the context of hiring. Think: a localized version of the Joel Test. These values can be objective (“we strive for 100% test coverage”) and subjective (“we value communication over process”). Every candidate can be ranked relative to these values. The most important factor here is to rank every single candidate against every single value. This will most likely lead to additional questions for some candidates, such as, “tell us about a time when you found communication or process more helpful than the other, and how it made you feel”.

It's easy to frame "team fit" as "a team of 1Xers will be more productive than a team or 10Xers who all can't stand each other," but I am going to put Fleetwood Mac's "Rumours" album on and say "citation needed."

Also, "team fit" is very nice when used in good faith, but all-too-often, hiring for "culture fit" or "team fit" ends up being a euphemism for "valuing a monoculture over valuing competence."

Here's HN discussing this exact subject: https://news.ycombinator.com/item?id=3868873


I may have exaggerated with the values, but if we can't both agree that people who work well with each other are more productive than those who don't, I don't know what to tell you.

While it may be used poorly, it does still have value, and that's why it exists at all. If judging someone's personality and attitude wasn't important, there'd be no need for in-person interviews at all.


Well, you've already reframed the point you're trying to make. You just said "people who work well with each other are more productive than those who don't", but that's tautological: the very definition of "people who work well together" is that they are productive, and the definition of "don't work well together" is that they aren't.

Whereas, in the comment I was responding to, you talked about "people who can't stand each other." You haven't made the case that if people can't stand each other, then they necessarily can't work productively together.

Furthermore, you seem to imply that if you can learn something about someone's personality and attitude, then you can judge whether they will be productive working with other people.

That's a perilous endeavour. It may seem intuitively obvious, but it isn't by a long shot. There are lots and lots and lots of examples of people who don't like each other or enjoy socializing or joking with each other who nevertheless get things done working together.

Furthermore, when people set out to build a framework around interviewing someone, or asking them a bunch of questions and from there working out their "compatibility" with other people, we always end up with something like Myers-Briggs.

That kind of thing has been conclusively debunked as a signal for hiring or not hiring people. But most people aren't even as quasi-objective as Myers-Briggs. Most people just go with their intuition, which they couch as "experience."

When you ask such people, they tell you they are very good at hiring people. But of course, they don't really know that in an empirical sense. They don't measure their false negatives, they have no idea how many people they disliked turned out to be productive workers.

They don't work from a reproducible framework. In the end, they're just working their own biases and crediting their expertise for all the successful hires while downplaying unsuccessful hires, and ignoring outright no-hires who turned out to be successful elsewhere.

At the end of the day, you're running a business or a movement or whatever, and people have to put aside their likes and dislikes and execute on the mission. Most people can do that just fine. There are some exceptions, people who are toxic, but honestly those are the exceptions.

I find, with my n=1 sample set, that when you hire for competence, and hire people who have demonstrated competence in the past, they almost always get along well enough to do their jobs.

And when they don't, there's a little thing called management that you use to get people back on track. In my n=1 experience, people who have competence out the yin-yang but are so toxic that they cannot be managed and must be fired are rare, and easy enough to detect in the normal course of interviewing that there is no need to constantly ask myself if the person I'm interviewing can get along with the team.

If they can make it through the interview process without insulting everyone or having a temper tantrum, and if they demonstrate competence in programming, designing, and communicating their technical rationale, my experience is that they're nearly always going to get along well enough to G$D.


> the very definition of "people who work well together" is that they are productive, and the definition of "don't work well together" is that they aren't.

I don't agree.

Working well together suggests they cooperate to reach a common goal, regardless of their individual productivity.

Not working well together suggests that they might even waste their time attacking each other instead of working towards a common goal, which has a negative impact on a project regardless of how productive each individual might be.


The disparate impact doctrine doesn't allow adverse effects that could cause unintentional discriminatory impact on protected-class minority groups. This doctrine existed for decades before algorithmic decision-making became a fad. As for supply/demand ratio arguments, the doctrine considers the "outcomes" relative to the input. That is, if you are able to give a reasonable explanation for your outcomes, you wouldn't be labeled illegal.

If the most qualified candidate bombed the interview, they wouldn’t get an offer this time, but probably would in the next interview when their performance reverts to their average level of competence.

Some companies place rejected candidates on de facto blacklists, to avoid wasting time on a recruitment process that's likely to end in another rejection.

This is particularly common when the hiring process is delegated to third-party HR/recruitment services, where recruiters are averse to the possibility of appearing ineffective in the eyes of their client.


That’s that company’s choice of how to conduct recruiting. Other (perhaps more competitive) companies say “you can re-apply after N months.”

I have a hard time seeing that as improperly discriminatory any more than departed employees having an “eligible for re-hire” bit set to false.

“Plenty of other fish in the sea” works both ways.

Maybe turn the question around. If someone bombed the interview, is there some additional process a company should go through to ensure they aren’t in fact the best candidate? What form might that type of inquiry take and why wouldn’t we just use that instead of the interview process for the company’s purpose in selection?


It would seem to me that this is another symptom of overly concentrated industries. Automated hiring is driven by a desire to optimize - either to cut costs and/or to handle a situation where applicants outnumber openings by orders of magnitude. Automated hiring is another concentrating, anti-people solution - we should be using anti-trust laws and regulation to create a system that encourages a greater number of firms, distributed over a much wider geography. That's the democratic solution to this problem, and it obviates the need for these types of solutions.

I would also disagree with the premise of the article - the government has shown time and time again that it is not good at compliance monitoring. It is good at investigating and regulating. Trying to audit these systems is an impossible task that will wind up with the responsible agency captured by the industry. The only permanent, self-executing solution is to make the companies smaller, such that these systems are no longer viable.


This article expresses one piece of the problem that's increasingly suggesting to me that it's not hiring or job-hunting that's broken, but the model of employment itself.

(For yet another example of broken hiring, see this story, also presently on the HN front page: https://medium.com/@bellmar/sre-as-a-lifestyle-choice-de9f5a...)

Skills assessment (and assertion) is difficult. Skills relevance is hard to assess. Incorporation and business as risk-externalising systems have worked rather too well (though the problem's hardly new), with much of employment risk being shifted from employers to employees, and from large employers to small ones (answering the anticipated small-business / startup response). The entities most able to manage risk are those most equipped to shift it elsewhere.

There are also the problems of offshoring, outsourcing, and shifting to the modern variants of indentured servitude, generally: employment-contingent temporary-visa workers (H-1B in the US, with similar equivalents elsewhere). And of gig, contingent, temporary, part-time, and other employment conditions, as well as the tying of specific benefits (most especially healthcare) to employment status.

Don't even get me started on pensions/retirement, professional ethics, whistleblower protections, and labour organisation.



