Hacker News
New Research Finds That Algorithms Are Better Than Humans at Hiring (bloomberg.com)
86 points by T-A on Nov 19, 2015 | 43 comments



Very relevant snippet:

"Looking across 15 companies and more than 300,000 hires in low-skill service-sector jobs, such as data entry and call center work, NBER researchers compared the tenure of employees who had been hired based on the algorithmic recommendations of a job test with that of people who'd been picked by a human."

The headlines are far too general and deceptive. It's like saying "computers are better than humans at playing games" because computers beat humans at chess.


Well said. Has any research been done in non-low-skill sectors? I wonder whether the same algorithmic approach would be as successful.


We should expect machines to be better than humans when there's a bunch of standardised, static, objective criteria, e.g. number of calls made and such. It's like looking at sports, where it's also quite clear that Messi and Ronaldo are in a league of their own. Any kind of personal hunch is likely to be either wrong, or something that data could quantify more precisely.

The thing is, performance is often subjective, and it's often not possible to tell how a team will change when it adds a member. (In football you can at least try to look at national/club team differences, or stats that changed after a player changed clubs.) When people change jobs, they often change roles, and the firms they are joining have different positions in the market and different aims.


This is your human anti-algorithm bias showing. Just because something is more subjective doesn't mean an algorithm can't do a better job than a human. :-) You have simply assumed that it can't, because you don't know how to measure it. In other words, it's an argument from ignorance (which is not surprising; AfI is often used to justify unwarranted beliefs).

It's actually quite possible that an algorithm learns the actual preferences of a person better than the person himself; for example, due to exponential discounting of the future, a person may make a hiring decision based on immediate impression and discount other warning signals, which will turn out to be wrong in the long run.

(Downvoters, can you please explain yourselves? Maybe we can have an interesting discussion. Thanks.)


Sure, I'll give it a shot, although I'm not amongst your downvoters.

There are two ways to build a system like this: one requires no training and uses a heuristic. That is, you come up with a formula that you think will rank candidates well, plug input variables into it, and sort candidates based on the score. Such a system cannot learn, it is fixed; it can only be improved by a human tweaking it. The fact that we have no way to measure our success is irrelevant.

The other way is to build a learning system, and a learning system needs to be trained. This means there must be either a set of training data (of positive examples with specified outcomes) or an objective goal function that can be computed on the output over which an optimization can be run.

Without these things (i.e., when the results are too subjective and we're unable to assign a reliable score to the outcome), machine learning is impossible, and the algorithm can never be trained to perform better than a human.
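As a rough illustration of the two approaches, here is a minimal sketch. Everything in it is made up for the example: the two candidate features, the hand-tuned weights, and the synthetic "stayed past 90 days" outcome label. It contrasts a fixed heuristic score with a tiny logistic regression trained on labelled outcomes:

```python
import math
import random

def heuristic_score(speed, years):
    """First approach: a fixed, hand-tuned formula (weights are arbitrary).
    It cannot learn; only a human tweaking the weights can improve it."""
    return 0.7 * speed + 0.3 * years

def train_logistic(data, lr=0.1, epochs=500):
    """Second approach: a minimal logistic-regression trainer.
    It requires labelled outcomes (the training data the comment describes)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (speed, years), stayed in data:
            z = w[0] * speed + w[1] * years + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(stayed)
            err = p - stayed                 # gradient of the cross-entropy loss
            w[0] -= lr * err * speed
            w[1] -= lr * err * years
            b -= lr * err
    return w, b

def predict(w, b, speed, years):
    z = w[0] * speed + w[1] * years + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic training set: in this toy data, retention depends only on
# experience, not typing speed. Features are scaled to [0, 1].
random.seed(0)
data = []
for _ in range(200):
    speed = random.uniform(0.0, 1.0)
    years = random.uniform(0.0, 1.0)
    stayed = 1 if years > 0.5 else 0
    data.append(((speed, years), stayed))

w, b = train_logistic(data)
```

On this synthetic data the hand-tuned formula keeps favoring the fast typist no matter what the outcomes say, while the trained model shifts its weights toward experience to match the labels. That is the point made above: without labelled outcomes (or a computable goal function), the second approach has nothing to optimize.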


OK, so it seems to me that you pretty much agree with my argument, but you don't like my assumption that even if there is subjectivity, there can be a learning algorithm. Is that correct?

I think even for 100% subjective decisions we can construct an algorithm as follows. Take the same inputs that the person making the decision (for example, to hire) uses, and feed them to the algorithm (or at least a reasonable number of those parameters). As the outcome, take the subjective judgment that the same person (or committee, what have you) makes after a certain period of time, say 3 months, when it has become more obvious whether the original decision was good or bad. Then you can train your algorithm (though it will, obviously, depend on the specific people involved). And I think it could perform better than the person, for the reason I outlined.


Until, of course, there is a machine that learns what the most important things to learn are.


To play devil's advocate, there's certainly a limit to computer-based searches. If the computer's model (which was made by a person or persons) is under-specified, then it will produce suboptimal results. At least with a person, there is always the possibility of integrating new facts on the fly.


"...better than humans at hiring replaceable drone-serfs in an inhumane, inhuman industry which burns people out like they were incandescent bulbs", for sure.


Robots hire people who develop robots that replace people.


Robots hire people that have to act like robots :-(


I guess this is the new age evolution.


How do you define a better employee, though? I mean, employers might want to hire people they like, not people who are better at the job. Skill can also be subjective. Sometimes it's about having good relationships inside a company.

I'm all for hiring people based on their skill, experience and potential, but I don't think employers will want to let an algorithm do the job of HR. Many hires are based on social contact and first impressions, and I don't think anyone wants to change that idea of a company being a group of people getting along.

Being able to hire whoever you want is an important liberty. If people let algorithms do the job, they might not like it. I guess it's part of the "machines will decide for us" debate.

Anyway, I'm unemployed, so by no means could I be against change of any sort.


> Being able to hire whoever you want is an important liberty.

Being an important liberty does not mean that it is necessarily a good idea. What (kind of "scientific") experiment would convince you that computers are better at hiring than people?


I think this is exactly what Elon Musk was talking about. I disagree that AI can be dangerous in general, but in this case I completely agree: human judgement is better, because which employees you choose should be a human decision, not an algorithmic one.

That said, I think AI should give suggestions AND be able to explain why one candidate is better than another. What worries me is letting AI black boxes determine the best candidates based on past hiring statistics, without letting HR understand why.

> Being an important liberty does not mean that it is necessarily a good idea.

Well, it's how society has been working until today. Would you let science decide instead of people? I certainly would not. I agree that politicians and people in general don't listen to science enough, but letting science drive us seems like the other extreme.

I personally think AI is great; what worries me is people using it without understanding it, and in history that has happened very often.


Honestly the only experiment I can think of is to put the software on the market and let it compete with traditional hiring. Companies already buy a variety of HR services, and don't need to know whether the job on the other end is being done by a person or a computer. The better service will win the market in the long run.


Hiring who you want is agency and power. Being powerless hurts, and giving us less agency makes us more powerless.

Applying for a position with a human similarly gives us agency and power. It's a bit of a competition: whether we pass or fail is based on how we did in the interview.

We give these things up, and that harms us. One of the big fears about Communism in the post-war era was that the Party would dictate what you did for work and how you would get paid. This fear came from that loss of agency, but Communism still had support in some quarters because of the idea that at least everyone would get paid.

Algorithmic hiring at its extreme would lead to the same loss of agency in the hiring and applying process. The algorithm would be the one that dictated whether you got a job, and if your ability to get future jobs is dictated by your past jobs, and your past jobs are dictated by the algorithm, you are powerless. If you're really interested in computers but you worked a few summers doing mechanical work in college because that was the work that was available, well, maybe you're better qualified to be a mechanic now, so you'll never get an opportunity to work with computers. Maybe it is strictly better for the mechanic shop to hire you than a new junior mechanic, and maybe it's strictly better for the other company to hire a person who has more experience in IT.

But what if the person that they hire for IT actually wants to be a mechanic, they just had jobs doing IT work? In the same way, the company keeps you on as a mechanic because you've got experience, and it's still better for the company to hire someone like him for the IT position.

This might be more efficient for companies. But is it better for people? My question is really: how do you define "better"?

It's certainly easy to say that it's more efficient. But cold efficiency is the stuff that scared us in the Cold War; luckily, Communism failed to really take hold because that cold "efficiency" was actually inefficient and people were starving. Implementing it in a capitalist society is even worse: you get all the bad that comes with a cold, uncaring hand dictating your fate; it probably IS going to be efficient; and you've got no promise that the people left behind, after this efficient system is done allocating all the labor that's necessary, will be cared for. How will an algorithm rate a person who is 25 and has been unemployed since college? They'd certainly be a high risk compared to someone who has been working steadily. Bottom of the pile. And what if that person has been unemployed because the algorithm filtered them out of jobs, just because at the time other applicants were better suited to the task?

As humans, we can take a chance on people like that. We might even know that they're not the perfect person for the position, but maybe we'll have "a good feeling" about them. We might make a poor hiring decision, but we might elevate a human because of it at the expense of company profits.


The thing with algorithms in such use cases is that once people know the algorithm (even partially), they will start gaming it. See Google's ranking algorithm and the whole industry of companies that do SEO.


Which is what people do with PRP systems now. I knew one guy where I worked who was going for promotion; as my team leader remarked, of course he hadn't done any real work in the last six months.

I also know someone who spent £1,000,000 and 15/16 man-years on a project to redevelop an existing system in Oracle. It added no shareholder value, but it ticked a box on the promotion track.


I think that's already a problem, and it doesn't matter who enforces the criteria. You can get a lot of applicants who can answer your computer science questions but still fail at writing a simple FizzBuzz.


Which, if the algorithm makes sense, will lead to better employees - do you think this is bad?


It is the same story as key performance indicators in employee assessment. People stop working for the good of the enterprise and instead focus on growing their indicators.


If that is possible, we've proved that the performance indicators don't indicate what you defined as "performance" but something different, and thus they should be dropped.


The point is that the algorithm isn't what people are currently trying to get past; they are trying to get past human hiring. When they switch to a strategy of gaming the algorithms instead of the humans, the algorithms can become worse than nothing, depending on how quickly the information race goes.


Of course computers are better than humans when both have the same rigid, standardized data.

The elephant in the room is that in an in-person interview, a human captures much more data than the computer will ever see. We make dives into the candidate's background and skills, dives that you can't put into a data model.

Of course that doesn't matter much for low-skill jobs.


The thing about computers is they aren't easily charmed. Interviewers sometimes are, so on occasion you get people who shouldn't be in a job hired solely on looks, attitude or the ability to smooth-talk. Very often these hires don't work out and/or cause problems. I know I've seen this more than a few times, and I bet you have too.


You say that as if it's a bad thing ; )

The thing about being charmed is: there's a reason you like it. Charmers are likely to get along with the team and make the clients happier. Sure, you need to be good at whatever you are doing, but charm is an inherent part of your skillset.


Charm can also be used to hide a multitude of sins. Some of my worst working experiences have happened after taking on projects sold to me by charming managers. Beware charm!


I guess that's where the term CONfidence man comes from: they charm your pants off, and then rob you. Still, I'd argue it's more a net asset than a liability. I guess that's how your managers got where they were, though: by charming.


Psychopaths are notorious for their apparent charm and charisma.

It isn't usually a good idea to hire one. They will destroy any team they work in, and if given enough power will kill any company they're employed by.


I think some psychopaths can be quite productive in healthy teams. It's just that they're the first to exploit any flaw for their own benefit.


I suppose a follow-up would then be to test

1: hires only made by algorithm vs

2: hires that had human initial screening and then a human interview, and maybe vs

3: hires that had algorithm initial screening and then human interviews.

I think we often assume that #2 gives us the best chance at the right cultural/skill fit, and fall back to #3 when we don't want to give our recruitment departments enough money for the staff needed in #2.

But it's possible that the in-person interview ends up affecting the interviewer's biases such that #1 provides a better overall result than #2 or #3.

Complications certainly ensue when, as others said, the applicants (or the new Professional Application Assistant industry that will pop up) learn how to game the algorithms.



Hmmm. Are the best hiring managers attracted to such jobs? I would not enjoy interviewing so many people.


Perhaps the algorithms should be used (or statistical analysis of hiring retention) to gauge hiring managers... :-)


Of all the jobs that will soon be eaten by computer programs, recruiting specialists and HR candidate screeners are on my short list of those for whom I feel the least amount of sympathy.

I hate the hiring process. I simply can't wait until all the stupid, irrationally-biased humans are removed from it entirely. And I'm reasonably certain that no algorithm is going to ask me to whiteboard a red-black tree while interviewing for a company that hasn't ever coded a single library-quality data structure.


At first, I thought about this very negatively.

Then, I remembered how awful and idiotic most of my HR managers were. The parameters they used to select people were all based on their own biased upper-class background.

Now, an algorithm, if well-written, could grasp many other parameters that would be much more relevant. Innate and applicable abilities an HR manager would never consider.


Okay, let's say you lose the genetic gamble and you test poorly on these algorithms. I've seen no claim that they are infallible; they just find correlations. If the correlation is not 1, the test is measuring some people poorly (false negatives).

So, you actually do quite well at work, but the test puts you in the low category. You are basically unhirable, or at only a fraction of the salary of your peers.

You end up with things like that recent Google hiring story that was all over social media: the author of the package they were hiring for was declined because he didn't pass some whiteboard algorithm thingy. At least that was random; if he interviewed again, he'd have a chance of getting hired. Tests like these would render a section of humanity unhirable.

At least now, even though I am subjected to randomness, I can eventually get a job and prove myself. Then I can take that proof and use it to get other jobs. Not in Silicon Valley as a programmer, perhaps, where they ignore your resume in favor of 'solve this 20 year problem in 30 minutes on a whiteboard while you pretend you didn't just study this and are working it through for the first time', but in the rest of the world.


I have worked for companies making hiring software. Most HR people are swimming in a sea of applicants and still do something as dumb as a raw keyword search of the resumes in their DB, then call people with almost no other reference.

That is if you are lucky and doing direct-hire work; if you are looking for temporary work, the question at the top of everyone's mind is how recently you applied.

I remember seeing databases with 400k+ employees at firms with maybe 200 active staff, always working to recruit more people instead of using the data they already had.


This finding has been known for 50 years and it is very general. Meehl's (1954, http://psycnet.apa.org/psycinfo/2006-21565-000) little black book on clinical vs. actuarial prediction sets the stage. The basic premise has been replicated many times. Simple algorithms and regression based prediction consistently outperform human judgement. Nothing new here...keep moving.


This is the real story, from the article: "The median duration of employees in these jobs isn't very long to begin with, about three months."

Service work is scandalously undervalued. I would think that if the technocrats were motivated, they could create tax codes and whatnot to gradually correct this (socialism), and then we would also see an accompanying decrease in illness and crime, as well as the birth of a real discussion about the purpose of public education, which is now a farce that at best looks for models that will give one group of kids a temporary edge over their neighbors. If we will always need some amount of service work, then we will never need all students to "get ahead (of their peers...)", and this would be so much more tolerable if we would finally begin to value the entire bell curve.


I think algorithmic hiring can be a self-fulfilling prophecy.

Putting people in situations that are not optimal for them can cause them to grow. It might also lead to situations where people are less productive, or where they leave the job early, either because it is not the right fit or because they've built skills and found a better job elsewhere.

But in all of these situations the people grow. They've learned about why the job isn't right for them, they've built skills, even if they didn't have perfect skills to start with.

Choosing the person who is the best fit for the job is more efficient for the company, but it means an employer is less likely to take a chance on you when you're looking to broaden your horizons, even when you're truly eager to do so, because an algorithm says you have some risk factors.

But there's a cost to society there: you start working in one career, and now the system feels you're optimized to keep working there, so changing jobs is risky. If a run of bad circumstances leads to you being laid off a few times, your average employment duration goes down and you become a higher risk, meaning you get fewer offers from the best jobs and more offers from more desperate employers, where conditions are more likely to lead to a shorter length of employment.

A question is, what is the value to society of the most efficient hiring decisions? It keeps us from making hiring mistakes, but mistakes are things we learn from. It keeps us from taking hiring risks, but risks are something that we are occasionally rewarded for. It maximizes efficiency, which reduces the number of jobs necessary. It concentrates wealth.

I'm not saying that we should strive to be inefficient. I mean, that's easy to do, we could just hire the first applicant to any position. But I do think there's value in remaining human. I don't think we make better decisions than an algorithm in terms of maximizing the value of the hire. But I do think we can make more human decisions, which can't go into an algorithm because it is so subjective and dependent on an individual's personal experience.

But that's not what these algorithms are for. They're for hiring 5,000 people instead of 5,500. That's fine for the person who wants to profit off the work of those 5,000 people. But it's less interesting when that efficiency just leaves 500 jobs off the table.

I am not saying that an algorithm is bad, or is less efficient than human decision making. I'm questioning what we should value in society. I'm asking what we should give up. To let an algorithm dictate hiring practices is different than something like improvements to robotics allowing 10% more widgets to be made per factory worker.

It's taking something from us. It's removing human agency. Sometimes that is good; for instance, removing agency from human drivers can be good because it protects society by causing fewer accidents on the road. But what is the good of removing human agency from hiring? It only benefits really large hirers, it makes labor more efficient, and in these situations the people who are already wealthy make more money. It also limits the agency of the people applying for the job. No longer can you do better in an interview, or convince someone to take a risk on you. Your position is firmly set by your personal details and past, which are set in stone and might already have been decided algorithmically for you.

We give up a lot, and the benefit goes to a few. Is it better for us to do that? I don't know. But I think with less and less labor needed, and more and more concentration of wealth, I'm not sure if it's worth giving up our control of both the hiring and applying process to algorithms, even if they are beneficial to the company hiring. Why throw away our ability to make our own decisions for so little, even if they are "better" ones?


moneyballed human resources



