Having lacrosse or sailing as a hobby on a CV is probably correlated with job performance, because those things are independently correlated with wealth and greater educational opportunities earlier in life. It's pretty simple to exclude them from a logistic regression. But what about education? A degree from an Ivy League university is correlated with job performance for both good and unfair reasons, and the correlation is very noisy. Which variables should be allowed? Which ones does the human system use, and how are they weighted? What causal models are appropriate? I don't know the answers, but the conversation often doesn't even happen at this level. And this is just at the level of transparent models, like graphical probabilistic models or structural equation modeling. I generally think there's too much AI alarmism, but in this case I think banning non-linear machine learning methods trained on large high-dimensional training sets might be appropriate.
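Excluding the easy proxies really is straightforward in a transparent model. A minimal sketch of the idea, dropping the hobby column before fitting a logistic regression (all features, data, and weights here are hypothetical toy values, not a real hiring model):

```python
import math
import random

# Hypothetical toy data: each row is (hobby_sailing, ivy_degree, years_exp);
# the label stands for "performed well on the job". None of this is real data.
ALLOWED = [1, 2]  # keep ivy_degree and years_exp; column 0 (the hobby proxy) is excluded

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(rows, labels, cols, lr=0.1, epochs=200):
    """Plain gradient-ascent logistic regression over the selected columns only."""
    w = [0.0] * (len(cols) + 1)  # one weight per allowed column, plus a bias
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            xs = [x[c] for c in cols]
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, xs)) + w[-1])
            err = y - p
            for i, xi in enumerate(xs):
                w[i] += lr * err * xi
            w[-1] += lr * err
    return w

random.seed(0)
rows = [(random.randint(0, 1), random.randint(0, 1), random.uniform(0, 10)) for _ in range(200)]
labels = [1 if 0.5 * r[2] + r[1] > 3 else 0 for r in rows]  # the hobby plays no causal role
w = fit_logistic(rows, labels, ALLOWED)
print(len(w))  # 3 learned parameters: ivy_degree, years_exp, bias
```

The hard part the comment points at is exactly what this sketch glosses over: deciding which columns belong in `ALLOWED` when the remaining features (like the degree) are themselves partial proxies.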
Compared to a poor but qualified-enough candidate, who can't afford to go back to the sh*thole they came from, they definitely wouldn't be as motivated.
I don't think it is correct to say that algorithms "enabled discrimination against job applicants". If that were true, we wouldn't have discussions about ageism or about how much personal information should go on a CV. Closed-loop systems are also not something AI brought upon our world; otherwise the term "white privilege" would not have made it into public consciousness. And there are certainly racist hiring managers, and interviewers who mentally discard anyone who looks like their ex.
The key is: All these effects are fairly contained in comparison. Companies can be sued, people can be fired and even systemic biases can be evaded in a lot of cases. I am not trying to say that e.g. sexism in hiring is harmless or anything, I'm just trying to weight two evils.
We do enter a disturbingly dystopian world when an AI-hiring startup brings its fancy algo to the mighty gods of cloud computing and just scales into infinity. Now suddenly hundreds of companies and millions of people are hit by the same biases. With discrimination at that level, personal damage compensation becomes a warm gesture at most, because the cultural deterioration done by subconsciously establishing a group as subpar workers is plainly immeasurable.
And if you think about it, this exact problem transcends hiring. You have the right to kick everybody carrying something resembling a spray-paint can out of your mall to prevent vandalism. You may discriminate against some people who were just trying to get something done about the flies in their kitchen, sure. Happens. But if Google Maps, or Cloudflare, or Netflix does it, they are ever so subtly nudging tech interest, or information accessibility, or quality of life, or the probability of getting hired, down by a fraction of a fraction of a percent. For a billion people, or for all Hispanic people in the US, or for everyone using sub-$300 phones.
That is a very real, very dark problem we should be all aware of in its greater context!
As markets consolidate, most companies will use one of maybe two or three AI products to rate people. So if an AI decides that you're not fit for work, you're done for: no matter where you apply, it'll be that one AI returning the exact same result. There's no variance from different people looking at applications.
Maybe it doesn't like your name, one of your past employers, or Oxford commas. It's effectively a black box, so who knows?
Filtering the initial batch of candidates is basically a classification problem. It is also very time-consuming to perform by hand.
If recruiters are able to use supervised/unsupervised algorithms to filter out 90% of the initial candidates with a low misclassification error and without wasting valuable time, then they'll be able to operate more efficiently.
The trick is telling how far the auto-classification algorithms should go in helping pick the best candidates. Some companies use stupid FizzBuzz tests to weed out candidates based on a quantifiable index, abusing a metric whose predictive value is negligible but which is very effective at weeding out a considerable portion of the initial pool.
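The first-pass screen described above reduces to a plain ranking-and-threshold step. A sketch under invented assumptions: `score()` is a hypothetical stand-in for whatever supervised model the recruiter trained, and the candidate features are made up:

```python
# Sketch only: first-pass CV screening as a ranking/threshold problem.
# score() is a hypothetical stand-in for a trained model's decision function,
# and the candidate features are invented.
def score(candidate):
    return 2 * candidate["years_exp"] + 3 * candidate["referral"]

def filter_candidates(candidates, keep_fraction=0.1):
    """Keep only the top keep_fraction by model score -- the '90% filtered out' case."""
    ranked = sorted(candidates, key=score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

pool = [{"name": f"c{i}", "years_exp": i % 10, "referral": i % 2} for i in range(100)]
shortlist = filter_candidates(pool)
print(len(shortlist))  # 10 of 100 survive the auto-screen
```

Everything contentious lives inside `score()`: the 90% who are cut never get a human look, so whatever proxy the model latched onto is applied silently at scale.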
There are several classes of problems here. One is of course intentional discrimination. The larger problems are likely a failure to care about, consider, or attend to problems, or, most insidiously, side effects that arise entirely unintentionally.
The fact that much gradient descent machine learning is opaque to explanation means that such AI essentially becomes a new form of knowledge: like science, it provides answers, but unlike traditional Baconian scientific methods, it doesn't answer why or how, and fails to provide cause or mechanism.
Given their use in increasingly complex, large-scale systems without ready human review or oversight, this creates the conditions for numerous probable unfortunate consequences.
Once the market has consolidated itself there will be 2-3 companies controlling the vast majority of all hiring processes. At that point it will be easier to just ask what type of job their AI decided you would be a good fit for.
Think about how inefficient the hiring process is right now. Humans can only process so much information. There are only so many companies you can learn about and apply to. Most of the time people end up in a local maximum and aren't really happy with their job. In contrast, the AI has access to hundreds of thousands of companies and profiles that it can match with each other. They will claim the AI knows what you truly desire and will find the best fit that will make you truly happy.
Welcome to the future where you will get lessons in virtual-school on how to be best perceived by the big AI.
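For what it's worth, the matchmaking fantasy above maps onto a classic, fully transparent algorithm. A toy Gale-Shapley stable-matching sketch, where every candidate, company, and preference list is invented for illustration:

```python
# Purely illustrative: large-scale "AI matchmaking" between candidates and
# companies framed as stable matching (Gale-Shapley). All names and
# preferences here are made up.
def stable_match(cand_prefs, comp_prefs):
    """Candidates propose in preference order; returns {candidate: company}."""
    free = list(cand_prefs)                   # candidates not yet matched
    next_pick = {c: 0 for c in cand_prefs}    # next company each candidate proposes to
    engaged = {}                              # company -> candidate
    rank = {co: {c: i for i, c in enumerate(p)} for co, p in comp_prefs.items()}
    while free:
        cand = free.pop(0)
        company = cand_prefs[cand][next_pick[cand]]
        next_pick[cand] += 1
        if company not in engaged:
            engaged[company] = cand
        elif rank[company][cand] < rank[company][engaged[company]]:
            free.append(engaged[company])     # company trades up, old match is freed
            engaged[company] = cand
        else:
            free.append(cand)                 # rejected, try next preference
    return {c: co for co, c in engaged.items()}

cand_prefs = {"ann": ["acme", "beta"], "bob": ["acme", "beta"]}
comp_prefs = {"acme": ["bob", "ann"], "beta": ["ann", "bob"]}
print(stable_match(cand_prefs, comp_prefs))  # {'bob': 'acme', 'ann': 'beta'}
```

The catch, of course, is where the preference lists come from: a real system would infer them from opaque models, which is exactly where the biases discussed upthread creep back in.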
Decades of the bar being just that much higher for the disadvantaged demographic. Potentially, a generation of disenfranchisement, which we know has generational effects. (The USA arguably still hasn't 'recovered' from segregation's economic effects on the populace)
How would this work? Without ground truth labels of "should have been hired" and "shouldn't have been hired" how can you demonstrate that your algorithm isn't biased? I mean counterfactual labels like "we hired this person the algorithm said not to hire, and they turned out good/bad."
Base rates will be different between groups, which means any algorithm will be biased in some way (see the impossibility theorems on fairness metrics), so the question is what demonstration of non-bias will be sufficient without counterfactual data.
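The impossibility point can be made concrete with a few lines of arithmetic. Using hypothetical numbers: if a model is forced to have the same recall and precision for two groups whose base rates differ, their false-positive rates cannot also come out equal:

```python
# Toy numbers illustrating the impossibility results (Chouldechova; Kleinberg
# et al.): with different base rates, equalizing precision (PPV) and recall
# (TPR) across groups forces unequal false-positive rates.
def fpr(base_rate, tpr, ppv):
    """Implied false-positive rate given base rate, recall, and precision.

    From PPV = TP / (TP + FP): FP = TP * (1 - ppv) / ppv, with TP = tpr * base_rate.
    Then FPR = FP / (1 - base_rate).
    """
    return tpr * base_rate * (1 - ppv) / (ppv * (1 - base_rate))

# Same recall (0.8) and precision (0.7) for both groups, different base rates:
group_a = fpr(0.5, 0.8, 0.7)   # base rate 50%
group_b = fpr(0.2, 0.8, 0.7)   # base rate 20%
print(round(group_a, 3), round(group_b, 3))  # 0.343 0.086 -- unequal FPRs are forced
```

So "our model has identical accuracy metrics for every group" is not a coherent demand across the board; some fairness criterion has to give, and without counterfactual hire/no-hire labels you can't even measure which one gave.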
Of course they frame it as a benefit to you (interview whenever and wherever you want!) but we all know their real motivation...
1. Graduates looking for their first job
2. Unemployed people looking for work
3. Employed people looking for better work
4. Recruiters scouring the web
5. Automated systems that apply everywhere they can
The people from 1-3 are lost in the noise of 4-5. Automation should only try to discard the last two. That's it!
Automation will save you time and money in the hiring process. You will spend that time and money in HR and employee management, which will end up costing you more. The more you rely on automation, the less you will personally screen people, which means you don't really know who you are hiring.
You are better off using my secret algorithm to select candidates, then personally reviewing their resumes:

    SELECT * FROM candidates ORDER BY RAND();
The problem with hiring is that there is a hugely disproportionate supply/demand ratio. We might have 1 position to fill and 100+ applicants. How can any company correctly protect themselves against allegations of illegal hiring criteria in this type of environment? Right now the best protection is to say "we hired the most qualified candidate", but even that argument is shaky. What if the most "technically qualified" candidate performed the worst in the interview process?
Hiring is scary enough already without bringing the law into the picture. :/
There are practices for employment advertising/recruiting that certainly highlight issues in a new way. Is doing on-campus recruiting or advertising in media that targets specific young demographics illegally discriminatory? Probably not. (IANAL) Is targeting young people using Facebook algorithms OK? Dunno.
This is at the root of a lot of social issues lately.
One person, let's say it's a person of colour, is afraid of not being able to get a job at all.
Another person, who is doing the hiring, is afraid of someone holding them accountable for actually hiring qualified applicants.
This is a very asymmetrical pair of concerns. But if you're doing the hiring, it feels like the worst thing in the world to have someone second-guess your decisions.
You are so busy worrying about this that you don't even consider how scary it is to look for a job, and have robots tell you that you aren't getting an offer because your diction and facial expressions don't match those of people who the company has already hired.
If you are intelligent enough to game the system into looking like you meet all of my criteria, you can probably also "game the system" into actually meeting my criteria on the job too.
Also, "team fit" is very nice when used in good faith, but all-too-often, hiring for "culture fit" or "team fit" ends up being a euphemism for "valuing a monoculture over valuing competence."
Here's HN discussing this exact subject: https://news.ycombinator.com/item?id=3868873
While it may be used poorly, it does still have value, and that's why it exists at all. If judging someone's personality and attitude wasn't important, there'd be no need for in-person interviews at all.
Whereas, in the comment I was responding to, you talked about "people who can't stand each other." You haven't made the case that if people can't stand each other, then they necessarily can't work productively together.
Furthermore, you seem to imply that if you can learn something about someone's personality and attitude, then you can judge whether they will be productive working with other people.
That's a perilous endeavour. It may seem intuitively obvious, but it isn't by a long shot. There are lots and lots and lots of examples of people who don't like each other or enjoy socializing or joking with each other who nevertheless get things done working together.
Furthermore, when people set out to build a framework around interviewing someone, or asking them a bunch of questions and from there working out their "compatibility" with other people, we always end up with something like Myers-Briggs.
That kind of thing has been conclusively debunked as a signal for hiring or not hiring people. But most people aren't even as quasi-objective as Myers-Briggs. Most people just go with their intuition, which they couch as "experience."
When you ask such people, they tell you they are very good at hiring people. But of course, they don't really know that in an empirical sense. They don't measure their false negatives, they have no idea how many people they disliked turned out to be productive workers.
They don't work from a reproducible framework. In the end, they're just working their own biases and crediting their expertise for all the successful hires while downplaying unsuccessful hires, and ignoring outright no-hires who turned out to be successful elsewhere.
At the end of the day, you're running a business or a movement or whatever, and people have to put aside their likes and dislikes and execute on the mission. Most people can do that just fine. There are some exceptions, people who are toxic, but honestly those are the exceptions.
I find, with my n=1 sample set, that when you hire for competence, and hire people who have demonstrated competence in the past, they almost always get along well enough to do their jobs.
And when they don't, there's a little thing called management that you use to get people back on track. In my n=1 experience, people who have competence out the yin-yang but are so toxic that they cannot be managed and must be fired are rare, and easy enough to detect in the normal course of interviewing that there is no need to constantly ask myself if the person I'm interviewing can get along with the team.
If they can make it through the interview process without insulting everyone or having a temper tantrum, and if they demonstrate competency programming, designing, and communicating their technical rationale, my experience is that they're nearly always going to get along well enough to get things done.
I don't agree.
Working well together suggests they cooperate to reach a common goal, regardless of their individual productivity.
Not working well together suggests that they might even waste their time attacking each other instead of working towards a common goal, which has a negative impact on a project regardless of how productive each individual might be.
This is particularly common when the hiring process is delegated to third-party HR/recruitment services, where recruiters are averse to the possibility of appearing ineffective in the eyes of their client.
I have a hard time seeing that as improperly discriminatory any more than departed employees having an “eligible for re-hire” bit set to false.
“Plenty of other fish in the sea” works both ways.
Maybe turn the question around. If someone bombed the interview, is there some additional process a company should go through to ensure they aren’t in fact the best candidate? What form might that type of inquiry take and why wouldn’t we just use that instead of the interview process for the company’s purpose in selection?
I would also disagree with the premise of the article - the government has shown time and time again that they are not good at compliance. They are good at investigating and regulating. Trying to audit these systems is an impossible task that will wind up with the agency responsible captured by the industry. The only permanent, self-executing solution is to make the companies smaller such that these systems are no longer viable.
(For yet another example of broken hiring, see this story, also presently on the HN front page: https://medium.com/@bellmar/sre-as-a-lifestyle-choice-de9f5a...)
Skills assessment (and assertion) is difficult. Skills relevance is hard to assess. Incorporation and business as risk-externalising systems have worked rather too well (though the problem's hardly new), with much of employment risk being shifted from employers to employees, and from large employers to small ones (answering the anticipated small-business / startup response). The entities most able to manage risk are those most equipped to shift it elsewhere.
There's also the problem of offshoring, outsourcing, and shifting to the modern variants of indentured servitude, generally: employment-contingent temporary visa workers (H-1B in the US, similar equivalents elsewhere). And of gig, contingent, temporary, part-time, and other employment conditions, as well as of the tying of specific benefits (most especially healthcare) to employment status.
Don't even get me started on pensions/retirement, professional ethics, whistleblower protections, and labour organisation.