
New Research Finds That Algorithms Are Better Than Humans at Hiring - T-A
http://www.bloomberg.com/news/articles/2015-11-17/machines-are-better-than-humans-at-hiring-top-employees
======
ucaetano
Very relevant snippet:

"Looking across 15 companies and more than 300,000 hires in _low-skill
service-sector jobs_ , such as data entry and call center work, NBER
researchers compared the tenure of employees who had been hired based on the
algorithmic recommendations of a job test with that of people who'd been
picked by a human."

The headlines are far too general and deceptive. It's like saying "computers
are better than humans at playing games" because computers beat humans at chess.

~~~
DaveWalk
Well said. Is there any research done at the non-low-skill sectors? I wonder
if the same algorithmic approach would be successful?

------
lordnacho
We should expect machines to be better than humans when there's a bunch of
standardised, static, objective criteria, e.g. the number of calls made.
It's like looking at sports, where it's also quite clear that Messi and
Ronaldo are in a league of their own. Any kind of personal hunch is likely to
be either wrong, or something that data could quantify more precisely.

The thing is, performance is often subjective, and it's often not possible to
tell how a team will change from adding a member. (In football you can at
least try to look at national/club team differences, or stats that changed
after a player changed clubs.) When people change jobs, they often change
roles, and the firms they are joining have different positions in the market
and different aims.

~~~
asgard1024
This is your human anti-algorithm bias showing. When something is more
subjective, that doesn't mean an algorithm can't still do a better job than a
human. :-) You have just assumed that it can't, because you don't know how to
measure it. In other words, it's an argument from ignorance (which is not
surprising - AfI is often used to justify unwarranted beliefs).

It's actually quite possible that an algorithm learns the actual preferences
of a person better than the person himself; for example, due to exponential
discounting of the future, a person may make a hiring decision based on
immediate impression and discount other warning signals, which will be wrong
in the long run.

(Downvoters, can you please explain yourselves? Maybe we can have an
interesting discussion. Thanks.)

~~~
astazangasta
Sure, I'll give it a shot, although I'm not amongst your downvoters.

There are two ways to build a system like this. One requires no training and
uses a heuristic: you come up with a formula that you think will rank
candidates well, plug input variables into it, and sort candidates by the
score. Such a system cannot learn; it is fixed, and can only be improved by a
human tweaking it. The fact that we have no way to measure our success is
irrelevant.

The other way is to build a learning system, and a learning system needs to be
trained. This means there _must_ be either a set of training data (of positive
examples with specified outcomes) or an objective goal function that can be
computed on the output over which an optimization can be run.

Without these things (i.e., when the results are too subjective and we're
unable to assign a reliable score to the outcome), machine learning is
impossible, and the algorithm can never be trained to perform better than a
human.
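Roughly, the two kinds of system described above could be sketched like this
(all feature names, weights, and candidate data here are invented for
illustration, not taken from the article or any real system):

```python
# Approach 1: a fixed heuristic. A hand-tuned formula ranks candidates;
# there is no training, and only a human can improve the weights.
def heuristic_score(candidate):
    return (2.0 * candidate["years_experience"]
            + 1.5 * candidate["test_score"]
            - 0.5 * candidate["job_hops"])

candidates = [
    {"name": "A", "years_experience": 3, "test_score": 8, "job_hops": 1},
    {"name": "B", "years_experience": 1, "test_score": 9, "job_hops": 0},
]
ranked = sorted(candidates, key=heuristic_score, reverse=True)

# Approach 2: a learning system. It needs labelled outcomes (e.g. tenure
# in months for past hires) to fit its weights; without such labels there
# is nothing to optimize. A minimal least-squares fit by gradient descent:
def fit_weights(features, outcomes, steps=1000, lr=0.01):
    w = [0.0] * len(features[0])
    for _ in range(steps):
        for x, y in zip(features, outcomes):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w
```

The heuristic happily ranks candidates with no outcome data at all; the
learner only becomes useful once labelled outcomes exist, which is exactly
the distinction the comment draws.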

~~~
asgard1024
OK, so it seems to me that you pretty much agree with my argument, but you
don't like my assumption that even if there is subjectivity, there can be a
learning algorithm. Is that correct?

I think even for 100% subjective decisions we can construct an algorithm as
follows. Take the same inputs that the person making the decision (for
example, to hire) uses, and feed them to the algorithm (or at least try to
capture a reasonable number of parameters). As the outcome label, take the
subjective judgment that the same person (or committee, what have you) would
make after a certain period of time, say 3 months (when it becomes more
obvious whether the original decision was good or bad). Then you can train
your algorithm (though it will, obviously, depend on the specific people
involved). And I think it can possibly perform better than the person, for
the reason I outlined.
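A minimal sketch of that training setup, with entirely made-up features and
labels (the feature names, numbers, and the logistic model are my own
illustration, not anything from the article):

```python
import math

# Hypothetical training data: each row is what the decision-maker saw at
# hiring time, e.g. [test_score, interview_impression, warning_signals];
# each label is the same person's revised judgment 3 months later
# (1 = good hire in hindsight, 0 = bad hire in hindsight).
X = [
    [1.0, 0.9, 0.2],
    [0.8, 0.4, 0.1],
    [0.3, 0.9, 0.8],   # charming interview, but many warning signals
    [0.9, 0.2, 0.3],
]
y = [1, 1, 0, 1]

def train_logistic(X, y, steps=2000, lr=0.1):
    """Fit weights so that sigmoid(w . x) approximates the later judgment."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        for x, label in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (label - p) * xi for wi, xi in zip(w, x)]
    return w

w = train_logistic(X, y)
```

On this toy data the learned weight on the warning-signals feature comes out
negative: the model ends up penalising exactly the signal that the immediate
impression discounted, which is the effect described above.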

------
JulianMorrison
"...better than humans at hiring replaceable drone-serfs in an inhumane,
inhuman industry which burns people out like they were incandescent bulbs",
for sure.

~~~
lugus35
Robots hire people that develop robots that replace people.

~~~
Roboprog
Robots hire people that have to act like robots :-(

------
jokoon
How do you define a better employee, though? I mean, employers might want to
hire people they like, not people who are better at their job. Skill can also
be subjective. Sometimes it's about having a good relationship inside a
company.

I'm all for hiring people based on their skill, experience and potential, but
I don't think employers will want to let an algorithm do the job of HR. Many
hires are based on social contact and first impressions, and I don't think
anyone wants to change the idea of a company being a group of people who get
along.

Being able to hire whoever you want is an important liberty. If people let
algorithms do the job, they might not like it. I guess it's part of the
"machines will decide for us" debate.

Anyway, I'm unemployed, so by no means could I be against change of any sort.

~~~
wolfgke
> Being able to hire whoever you want is an important liberty.

Being an important liberty does not mean that it is necessarily a good idea.
What (kind of "scientific") experiment would convince you that computers are
better at hiring than people?

~~~
jokoon
I think this is exactly what Elon Musk was talking about. I disagree that AI
can be dangerous in general, but in this case I completely agree: human
judgement is better, because which employees you choose should be a human
decision, not an algorithmic one.

Although I think AI should give suggestions AND be able to explain why this or
that candidate is better. What I'm worried about is letting AI black boxes
determine the best candidates, based on previous hiring statistics, without
letting HR understand why.

> Being an important liberty does not mean that it is necessarily a good idea.

Well, it's how society has worked until today. Would you let science decide
instead of people? I certainly would not. I agree that politicians and people
in general don't listen to science enough, but letting science drive us seems
like another extreme.

I personally think AI is great; what I'm worried about is people using it
without understanding it, which has happened very often in history.

------
sudeepj
The thing with algorithms in such use cases is that once people know the
algorithm (even partially), they will start gaming it. E.g. Google's ranking
algorithm, which whole companies now exist to game through SEO.

~~~
wolfgke
Which, if the algorithm makes sense, will lead to better employees - do you
think this is bad?

~~~
reacweb
It is the same story as key performance indicators in employee assessment.
People stop working for the good of the enterprise and instead focus on
growing their indicators.

~~~
wolfgke
If this is possible, we've proved that the performance indicators do not
indicate what you defined as "performance" but something different and thus
should be dropped.

------
guard-of-terra
Of course computers are better than humans when both have the same amount of
rigid, standardized data.

The elephant in the room is: in an in-person interview, a human captures much
more data than the computer will ever know. We make dives into a candidate's
background and skills, dives that you can't put into a data model.

Of course that doesn't matter much for low-skill jobs.

~~~
jqm
The thing about computers is they aren't easily charmed. Interviewers
sometimes are, so on occasion you get people who shouldn't be in a job, hired
solely on looks, attitude, or the ability to smooth-talk. Very often these
hires don't work out and/or cause problems. I know I've seen this more than a
few times, and I bet you have too.

~~~
benten10
You say that as if it's a bad thing ; )

The thing about being charmed is: there's a reason you like being charmed.
Charmers are likely to get along with the team and make clients happier. Sure,
you need to be good at whatever you're doing, but charm is an inherent part of
your skillset.

~~~
osullivj
Charm can also be used to hide a multitude of sins. Some of my worst working
experiences have happened after taking on projects sold to me by charming
managers. Beware charm!

~~~
benten10
I guess that's where the term CONfidence man comes from: they charm your
pants off, and then rob you. Still, I'd argue it's more of a net asset than a
liability. I guess that's how your managers got where they are, though: by
charming.

------
vellum
The PDF is here:
[http://cep.lse.ac.uk/seminarpapers/19-03-15-DL2.pdf](http://cep.lse.ac.uk/seminarpapers/19-03-15-DL2.pdf)

------
riskneural
Hmmm. Are the best hiring managers attracted to such jobs? I would not enjoy
interviewing so many people.

~~~
sageikosa
Perhaps the algorithms should be used (or statistical analysis of hiring
retention) to gauge hiring managers... :-)

------
logfromblammo
Of all the jobs that will soon be eaten by computer programs, recruiting
specialists and HR candidate screeners are on my short list of those for whom
I feel the least amount of sympathy.

I hate the hiring process. I simply can't wait until all the stupid,
irrationally-biased humans are removed from it entirely. And I'm reasonably
certain that no algorithm is going to ask me to whiteboard a red-black tree
while interviewing for a company that hasn't ever coded a single library-
quality data structure.

------
gtpasqual
At first, I thought about this very negatively.

Then, I remembered how awful and idiotic most of my HR managers were. The
parameters they used to select people were all based on their own biased
upper-class background.

Now, an algorithm, if well-written, could grasp many other parameters that
would be much more relevant. Innate and applicable abilities an HR manager
would never consider.

~~~
RogerL
Okay, let's say you lose the genetic gamble and you test poorly on these
algorithms. I've seen no claim that they are infallible; they just find
correlations. If the correlation is not 1, the test is measuring some people
poorly (false negatives).
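That point can be illustrated with a toy simulation (all numbers here are
invented, including the 0.6 correlation): whenever the test score correlates
imperfectly with true performance, some genuinely good performers land below
the cut-off.

```python
import random

random.seed(42)

def false_negative_rate(n=10_000, rho=0.6, threshold=0.0):
    """Fraction of genuinely good performers (true skill above average)
    whom a test correlated with skill at level rho scores below the
    hiring cut-off."""
    rejected_good = good = 0
    for _ in range(n):
        skill = random.gauss(0, 1)                           # true performance
        noise = random.gauss(0, 1)
        score = rho * skill + (1 - rho ** 2) ** 0.5 * noise  # observed test result
        if skill > threshold:
            good += 1
            if score <= threshold:                           # rejected despite being good
                rejected_good += 1
    return rejected_good / good

rate = false_negative_rate()
```

With a correlation of 0.6, roughly 30% of above-average performers score
below the cut-off in this toy model; only a correlation of exactly 1 would
drive that to zero.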

So, you actually do quite well at work, but the test puts you in the low
category. You are basically unhirable, or at only a fraction of the salary of
your peers.

You end up with things like that recent Google hire thing that was all over
social media - the author of the package for which they were hiring was
declined because he didn't pass some whiteboard algorithm thingy. At least
that was random - if he interviewed again he'd have a chance of getting hired.
Tests like this would render a section of humanity unhirable.

At least now, even though I am subjected to randomness, I can eventually get a
job and prove myself. Then I can take that proof and use it to get other jobs.
Not in Silicon Valley as a programmer, perhaps, where they ignore your resume
in favor of 'solve this 20 year problem in 30 minutes on a whiteboard while
you pretend you didn't just study this and are working it through for the
first time', but in the rest of the world.

------
chewyshine
This finding has been known for 50 years and it is very general. Meehl's
(1954,
[http://psycnet.apa.org/psycinfo/2006-21565-000](http://psycnet.apa.org/psycinfo/2006-21565-000))
little black book on clinical vs. actuarial prediction sets the stage. The
basic premise has been replicated many times. Simple algorithms and regression
based prediction consistently outperform human judgement. Nothing new
here...keep moving.

------
fish55
This is the real story, from the article: "The median duration of employees
in these jobs isn't very long to begin with, about three months."

Service work is scandalously undervalued. I would think that if the
technocrats were motivated, they could create tax codes and whatnot to
gradually correct this (socialism). Then we would also see an accompanying
decrease in illness and crime, as well as the birth of a real discussion about
the purpose of public education, which is now a farce that at best looks for
models that will give one group of kids a temporary edge over their neighbors.
If we will always need some amount of service work, then we will never need
all students to "get ahead (of their peers...)", and this would be so much
more tolerable if we finally began to value the entire bell curve.

------
zeidrich
I think algorithmic hiring can be a self-fulfilling prophecy.

Putting people in situations that are not optimal for them can cause them to
grow. This might lead to situations where people are not as productive, or
where they leave the job early, either because it is not the right fit or
because they've built skills and found a better job elsewhere.

But in all of these situations the people grow. They've learned about why the
job isn't right for them, they've built skills, even if they didn't have
perfect skills to start with.

Choosing the person who is the best fit for the job is more efficient for the
company, but it means that you're less likely to have someone take the chance
when you're looking to broaden your horizons, even when you're truly eager to
do so, because an algorithm says that you have some risk factors.

But there's a cost to society there, you start working in one career, and now
the system feels you're optimized to continue working there, changing jobs is
risky. If you have a run of bad circumstances that lead to you being laid off
a few times, your average employment duration goes down and you become a
higher risk factor, meaning you get fewer offers from the best jobs, and more
offers from more desperate employers, which have a higher chance of conditions
that might lead to a shorter length of employment.

A question is, what is the value to society of the most efficient hiring
decisions? It keeps us from making hiring mistakes, but mistakes are things we
learn from. It keeps us from taking hiring risks, but risks are something that
we are occasionally rewarded for. It maximizes efficiency, which reduces the
number of jobs necessary. It concentrates wealth.

I'm not saying that we should strive to be inefficient. I mean, that's easy to
do, we could just hire the first applicant to any position. But I do think
there's value in remaining human. I don't think we make better decisions than
an algorithm in terms of maximizing the value of the hire. But I do think we
can make more human decisions, which can't go into an algorithm because it is
so subjective and dependent on an individual's personal experience.

But that's not what these algorithms are for. They're for hiring 5000 people
instead of 5500 people. That's fine for the person who wants to profit off the
work of those 5000 people. But it's less interesting when that inefficiency
just leaves 500 jobs off the table.

I am not saying that an algorithm is bad, or is less efficient than human
decision making. I'm questioning what we should value in society. I'm asking
what we should give up. To let an algorithm dictate hiring practices is
different than something like improvements to robotics allowing 10% more
widgets to be made per factory worker.

It's taking something from us. It's removing human agency. Sometimes that is
good; for instance, removing agency from human drivers can be good because it
protects society by causing fewer accidents on the road. But what is the good
of removing human agency from hiring? It only benefits really large hiring
operations and makes labor more efficient; these are situations where people
who are already wealthy make more money. It also limits the agency of the
people applying for the job. No longer can you do better in an interview, or
convince someone to take a risk on you. Your position is firmly set by your
personal details and past, which are set in stone and might have already been
decided algorithmically for you.

We give up a lot, and the benefit goes to a few. Is it better for us to do
that? I don't know. But I think with less and less labor needed, and more and
more concentration of wealth, I'm not sure if it's worth giving up our control
of both the hiring and applying process to algorithms, even if they are
beneficial to the company hiring. Why throw away our ability to make our own
decisions for so little, even if they are "better" ones?

------
Dowwie
moneyballed human resources

