
Ask HN: Has there been any academic research into software interviewing? - CSMastermind
Some common interview types are algorithm problems, pair programming exercises, take-home assignments, etc.

Has there been any research into the predictive power of such assessments? Is there any evidence that a particular type of question correlates well with job success?
======
barry-cotter
I doubt there's been much research on software interviewing specifically, but
interviewing/candidate selection for jobs is a large part of personnel
psychology. A recent meta-analysis of what the field knows is below.
Basically, general mental ability (g, IQ) is the single best predictor, work
sample tests work well, structured interviews add predictive power on top of
IQ tests or work samples, and unstructured interviews are a trash fire.

> The Validity and Utility of Selection Methods in Personnel Psychology:
> Practical and Theoretical Implications of 100 Years of Research Findings

> This article summarizes the practical and theoretical implications of 85
> years of research in personnel selection. On the basis of meta-analytic
> findings, this article presents the validity of 19 selection procedures for
> predicting job performance and training performance and the validity of
> paired combinations of general mental ability (GMA) and the 18 other
> selection procedures. Overall, the 3 combinations with the highest
> multivariate validity and utility for job performance were GMA plus a work
> sample test (mean validity of .63), GMA plus an integrity test (mean
> validity of .65), and GMA plus a structured interview (mean validity of
> .63). A further advantage of the latter 2 combinations is that they can be
> used for both entry level selection and selection of experienced employees.
> The practical utility implications of these summary findings are
> substantial. The implications of these research findings for the development
> of theories of job performance are discussed.

[https://www.semanticscholar.org/paper/The-Validity-and-Utility-of-Selection-Methods-in-%3A-Schmidt/235f263383afe7cda38ef9a49ceb5dc1090f4769](https://www.semanticscholar.org/paper/The-Validity-and-Utility-of-Selection-Methods-in-%3A-Schmidt/235f263383afe7cda38ef9a49ceb5dc1090f4769)
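
To make the "paired combinations" concrete: the combined validity of two
predictors follows from the standard two-predictor multiple-correlation
formula. Below is a minimal Python sketch; the inputs are illustrative values
roughly in line with the meta-analysis (GMA validity ~.51, work sample
validity ~.54, and an assumed predictor intercorrelation of ~.38), which
happens to reproduce the quoted .63.

```python
from math import sqrt

def combined_validity(r_y1: float, r_y2: float, r_12: float) -> float:
    """Multiple correlation R of two predictors with one criterion,
    given each predictor's validity (r_y1, r_y2) and the predictors'
    intercorrelation (r_12)."""
    r_sq = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    return sqrt(r_sq)

# Assumed, illustrative inputs: GMA ~.51, work sample ~.54, r_12 ~.38.
print(round(combined_validity(0.51, 0.54, 0.38), 2))  # -> 0.63
```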

~~~
dcolkitt
Can't agree with this enough. The most consistent finding in industrial
psychology is that general cognitive ability is by far the best predictor of
job performance in virtually every role ever tested.

Even in jobs that don't seem cognitively demanding, like janitorial work or
the infantry, higher-IQ candidates almost universally do better, even if they
start with much less experience.

The takeaway, as it applies to modern software hiring, is that skillset is way
overemphasized. It's quite common to see job ads that focus heavily on the
company's specific tech stack. ("Must have experience with Rails, React,
Travis, and AWS")

It's much better to cast a wider net and try to find the smartest people,
wherever they are. High-IQ people can easily re-tool their specific skillset.
What's interesting is that this is much closer to how the most successful tech
firms, like Google, tend to hire.

~~~
kayoone
I can't agree with this. High IQ can't be everything. Wouldn't this value
experience and knowledge (like everything you learned in university) at zero?
Imo there are a lot of great and productive software engineers who wouldn't do
well in a whiteboard algorithm interview. The people who tend to do well are
the ones who have a lot of practice with that kind of problem.

~~~
pps43
> Wouldn't this value experience and knowledge (like everything you learned in
> university) at zero?

Having switched professions twice, I now use practically nothing of what I
learned in university from about the second year onward. Math and, to some
extent, physics are still relevant, but that's pretty much it.

It might be painful to admit, but the practical value of that rather
specialized knowledge that took me several years of hard work to obtain is
pretty much zero now.

~~~
klntsky
> It might be painful to admit, but the practical value of that rather
> specialized knowledge that took me several years of hard work to obtain is
> pretty much zero now.

It's sad that your professors didn't tell you beforehand that you were going
to learn how to learn during that time, not just study a particular technology
stack.

~~~
jonfw
This raises the question: are classes based around studying a particular
technology stack the most effective way to teach somebody how to learn?

~~~
ghaff
The short answer is probably no. And the fact is that top universities mostly
aren't so focused on teaching whatever language or framework is the flavor of
the day.

That said, you need tools of some sort if you're going to actually build
things as opposed to just learning, say, algorithms in pseudo-code. And it
probably makes sense to use some fairly standard language to do so. There's
not much point in making things deliberately obscure by making students use
some language that the professor designed for his PhD thesis.

------
neilk
Interviewing.io is a startup that does some research into candidate
evaluation. To their credit, they have even posted research with negative
results.

[http://blog.interviewing.io/posts/](http://blog.interviewing.io/posts/)

~~~
kayoone
I had the opportunity to be in a small interviewing workshop with Aline. She
really knows what she's doing, and I believe the tech interviewing world would
be a better place if more companies adjusted their processes based on
Aline's/Interviewing.io's insights.

~~~
pmiller2
I don’t really see what’s special about their process other than the
resume-blindness aspect. That’s been validated elsewhere by Triplebyte. Other
than that one aspect, the process is exactly like your standard technical
phone screen, except that you have to do a really easy HackerRank problem to
get invited to the platform.

If I am wrong here, someone please correct me.

~~~
leeny
Hey, Aline here. Unlike Triplebyte, we offer people free, anonymous mock
interviews with engineers from companies like Google, Facebook, etc.
Basically, you get on the platform, practice, and then, if you do well in
practice (again, real interviews, not coding challenges), you can book real
interviews with top companies. Those interviews are also anonymous, which
means that if you do poorly, you don't have to unmask.

~~~
pmiller2
Well, I disagree with your use of the word “anonymous.” Gender, national
origin, and ethnicity are frequently easy to guess (at least at the level of
“is not a white, American-born male”) via voice. I know you tried voice
masking and it didn’t help equalize the results wrt gender, and I don’t have a
better suggestion, so I’m not blaming the process for not attempting to remove
bias. This criticism applies to both the mock interviews and the real
interviews.

Bias creeps in in other ways, too. I have done both kinds of interviews on
interviewing.io as a candidate. You probably remember how Instacart held
interviews on your platform under the terms that the first step was just an
informational chat, and anyone who met the criteria would then get a real
technical interview. I did the informational chat and was then not interviewed
after deanonymizing. And, BTW, after I emailed support I never got my
technical interview, nor was I given any sort of resolution or information
about what happened.

It’s a good experiment, but it falls short of what I’d call “anonymous” and
doesn’t really remove very much bias in the overall process.

~~~
leeny
We've stopped letting companies do informational chats precisely for that
reason. It's all technical from here on out. I'm not a fan of the
"informational chat" approach at all.

------
lemony_fresh
When I was self-studying social psychology, I came across social facilitation
theory and found it applicable to understanding interviewing. People have a
hard time performing complex tasks while other people are watching because
their working memory is busy monitoring the social situation. This varies by
individual. For tasks that have been rehearsed, like a musical or dance
routine, people perform better with an audience. Politeness theory is
applicable as well: people want to be liked and at the same time do not want
their autonomy challenged. My pet theory is that the whiteboarding epidemic is
a status hierarchy game, essentially communicating "if you want to work here,
you are going to do what I say."

~~~
emaginniss
I disagree completely with your theory. I have been on both ends of the
whiteboard interview an almost uncountable number of times during my career.
What it has always come down to, in my mind, is a collaborative process to
solve a problem: the one performing the task is trying to solve the problem,
while the giver of the task makes suggestions and points out weaknesses before
they get out of control. I'm sure there are sadists who just want to watch
people squirm while trying to solve an extremely complicated problem, but I
don't think I've ever run into that in an interview, and it has never been
what I've put people through.

~~~
mancerayder
I've had whiteboard interviews where the interviewer stays silent while taking
notes.

I failed 100% of them, with various, strangely random feedback from the
HR/recruiter.

I've come to conclude it's the hiring mechanism of socially incompetent /
unempathetic sciency types. And it was always a dev from a different group
than the one I was applying to.

No thanks.

------
sampo
To correlate with job success, you need to define and measure job success. A
sub-question: How's the academic research into measuring programmer job
success?

~~~
srfilipek
Easy: LOC / hour.

</sarcasm>

~~~
organsnyder
On an inverse scale, sure. Only the best coders have net-negative LOC/hour
metrics.

~~~
szatkus
This would end up with a lot of code golf.

~~~
thanatropism
Code golf probably correlates with IQ.

------
crispyambulance
"job success" is something that happens in the future relative to a "job
interview".

To be able to truly predict whether or not someone will be successful means
that the interviewer would have to able to foresee how that person will
interact with others in their workgroup , rise to challenges that don't yet
exist, and be motivated to remain for a "long-enough" engagement with the
company (whatever that means). Predicting the future is just damn hard, and
it's even harder when nebulous desired outcomes such as "job success" are
used.

To make matters even more difficult, this problem also has "another half" to
it which people tend to ignore: the employer doing the interviewing.

These types of questions seem to take the point of view that an employer is
presented with a bowl of fruit (candidates) and all they have to do is be able
to select the best fruits.

But it doesn't work that way. You may not get the fruit you want: that fruit
may want to go elsewhere, or others might have already grabbed it. The fruit
that looks good now may end up rotten a short time later. Some fruit that
looks undesirable today may be awesome later. You may grow tired of apples and
desire pears, but you've already filled your pantry with apples.

What would happen if some organization were able to "figure this out" and
truly optimize their candidate selection using interview techniques? I am not
so sure the result would be easily distinguishable from what other similar
competitive employers are doing. Predicting the future can only go so far.

------
dudul
One thing I've been wondering for a long time when I see the
quasi-sado-masochistic relationship between engineers and hiring processes is:
are bad hires really that 1) costly and 2) frequent?

In my 12+ year career, I've worked with so-so engineers, but never truly bad
ones who would ruin a project. And even the ones who weren't great, what
damage did they really cause? I've seen many more companies fail because of a
bad product, bad sales strategy, bad market fit, or a bad business model than
because of engineering teams using the wrong programming language or not
building microservices the right way.

I find this constant obsession around identifying "rockstars", avoiding "bad"
engineers at all cost to be truly unhealthy.

~~~
rightbyte
Let's define "bad" as the second-worst to fourth-worst engineer in your class
of 30 engineering students who made it through.

I would say bad engineers tend to know they are bad and either tend to _go
into management_ or coordination roles, or grab a subset of tasks they get
good at, like being the only one in the company working with, e.g., a specific
third-party system and doing support and integration for that.

Bad engineers who have worked with the same system for three years are way
better with it than super-engineers who have never touched it, at least for a
couple of months.

As you said, I have never really had problems with bad engineers messing
things up.

EDIT: Reluctance to hire engineers and penny-pinching, on the other hand, have
messed up some projects.

~~~
webmaven
_> Bad engineers who have worked with the same system for three years are way
better with it than super-engineers who have never touched it, at least for a
couple of months._

It's when that bad engineer starts protecting "their" turf at the expense of
the rest of the org in order to provide themselves with job security that
things start to really go south.

~~~
rightbyte
Yeah, but that's not really a big problem where I live, where firing works on
a LIFO queue. At stack-ranking US corps on the other side of the Atlantic, I
imagine it's another story...

------
vthallam
I don't know of any academic research, but Triplebyte is trying to understand
this better. Their interview process involves a mix of algorithm problems and
pair programming over Hangouts.

Also, there's no one-size-fits-all theory. There are a few companies where
every millisecond you save by writing a better algorithm makes a big
difference, and many more where it doesn't matter. Take-home assignments and
pair programming have all been tried and are still used by some companies, but
the majority still rely on whiteboards, and it seems to work fine.

~~~
EpicEng
>it seems to work fine

How do we know that? Interviewing is nearly a total crapshoot from what I can
tell, based on my 13 years of experience (~8 of them actively involved in the
process, 3 as a decision maker). I have yet to find a method that weeds out
people who can whiteboard but can't deliver, or those who seem to have a great
personality and work ethic but stop showing up to work and/or throw tantrums
when they don't get their way.

Obviously some people emit red flags like the Sun emits energy, but I don't
believe you can assume it's "working fine" without some baseline definition of
"fine" and a corresponding study.

~~~
vthallam
> How do we know that?

As the sibling comment mentions, companies are able to build large-scale
systems with engineers who got in through these kinds of interviews. I am not
saying this is the right way or the wrong way, but it definitely works.

Of course, when you do this, you miss out on some amazing candidates, but the
big companies that started this can afford to do that because of the insane
number of applications they get.

~~~
Dangeranger
A larger problem is the cargo-culting of whiteboard interview techniques by
smaller companies that do not get the volume of applicants that FAANG do, yet
believe they will succeed anyway.

------
bellevue
Made an account just for this:

A professor I know from undergrad did research that, with some adaptation,
could essentially evaluate whether or not people would work well in software
engineering situations. It's centered more around whether team members would
work well together, but it's relevant nonetheless: Malte Jung.

------
perfunctory
I don't think it's necessary. Good software engineers are so scarce you should
just grab the first adequate one who comes your way. When I was in the
position of interviewing candidates, 9 out of 10 couldn't solve the most
trivial of programming tasks.

~~~
bartread
> 9 out of 10 couldn't solve the most trivial of programming tasks.

This has consistently been my experience.

Nevertheless, the ability to code is necessary but not sufficient. You're
going to want to dig into character at least a little.

- How are they in a team context?

- Do they treat others with respect?

- Do they act with integrity?

These kinds of attributes are just as important as whether or not they can
code, and I've been badly bitten when they've been lacking.

------
Scoundreller
Not software development, but professional schools (e.g. medical schools) have
published research on which admission selection methods have predictive value
both in school and later in clinical practice. Lots of long-term follow-up.

I’ve seen some programs do away with references, resumes and even interviews.

The programs that I've seen remove interviews usually replaced them with
standardized stations built around different scenarios.

E.g.: “Explain to someone how to wash their hands if they never have before.”
If you can do that well, you can probably handle some health procedure you’ll
later learn about and have to explain.

~~~
ghaff
I haven't looked for any recent research in this area, but when I saw some way
back when, it was, in one sense, somewhat depressing. I forget if it was for
undergrad or MBA programs but, as I recall, the correlations were basically to
SATs/GMATs and class rank/grades. Of course, your results will vary depending
on how you define success, but basically hard quantitative measures had the
most predictive value.

~~~
Scoundreller
But what outcome were they measuring? Grades and rank in the MBA program?

A good student will probably continue to be a good student.

But how to select which students will become good _practitioners_ may be a
different story.

~~~
ghaff
This was a long time ago and I don't remember the details. One commonly used
quantitative workplace success metric is compensation/salary, which is not
necessarily unreasonable for MBA programs but can be more problematic in other
areas.

And you're right. It's not really surprising that getting good grades/test
scores at one level of school correlates reasonably well with the same thing
at another level. Which I imagine is one reason universities/grad programs
generally don't optimize purely on that metric: their objective isn't solely
cranking out students who do well in school.

ADDED: This, by the way, is more or less just a variant of general
intelligence measures being correlated with a lot of outcomes. The SAT and so
forth aren't intelligence tests, but I'm sure they're highly correlated with
them.

------
artemisyna
I know of at least one big tech company that has done such research, and it
informs the ratio of different types of interviews for each level.

I imagine it'd be really hard for an academic research institute to do this.
Firstly, they'd have to focus on technical interviewing specifically.
Secondly, they'd have to get some big tech companies to share information
about trajectory and whatnot for employees once they've been hired -- I
imagine the extra work of building those pipelines/anonymizing the data/etc.
probably wouldn't be worth it for a lot of companies.

------
cm2012
Tokenadult used to post the data on this on every hiring thread. Here's a
link:
[https://news.ycombinator.com/item?id=4613543](https://news.ycombinator.com/item?id=4613543)

------
kaiju0
I think a prior study is needed on how job postings correlate with actual
work. Once you have a stable picture of the initiating side from an academic
standpoint, you can begin to look at the receiving side of the equation.

------
jkukul
I once bookmarked a blog post [1] which attempts to review existing research
on the topic. The bottom line, I think, is that unstructured interviews have
the least predictive power. So whatever your interviewing process is, make
sure it's structured and similar for every candidate.

[1] [https://erikbern.com/2018/05/02/interviewing-is-a-noisy-prediction-problem.html](https://erikbern.com/2018/05/02/interviewing-is-a-noisy-prediction-problem.html)
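
For what "structured and similar for every candidate" can look like in
practice, here's a minimal sketch in Python; the questions, scale, and anchors
are hypothetical, not taken from the linked post.

```python
from dataclasses import dataclass

# Hypothetical fixed question set: every candidate gets exactly these.
QUESTIONS = [
    "Walk me through a recent bug you diagnosed and fixed.",
    "Design a rate limiter and discuss the trade-offs.",
]

# Anchored 1-4 scale so different interviewers score the same way.
ANCHORS = {
    1: "no workable approach",
    2: "partial approach, heavy hints needed",
    3: "working approach, minor gaps",
    4: "strong, well-reasoned approach",
}

@dataclass
class Rating:
    question: str
    score: int  # must be a key of ANCHORS
    notes: str

def overall(ratings: list[Rating]) -> float:
    """Identical aggregation for every candidate: mean anchored score."""
    assert {r.question for r in ratings} == set(QUESTIONS)
    assert all(r.score in ANCHORS for r in ratings)
    return sum(r.score for r in ratings) / len(ratings)

print(overall([
    Rating(QUESTIONS[0], 3, "solid bisection of the problem"),
    Rating(QUESTIONS[1], 2, "needed hints on token buckets"),
]))  # -> 2.5
```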

------
azhenley
This paper isn't about interviewing, but is very relevant:

What makes a great software engineer?
[https://dl.acm.org/citation.cfm?id=2818839](https://dl.acm.org/citation.cfm?id=2818839)

HN discussion:
[https://news.ycombinator.com/item?id=15892898](https://news.ycombinator.com/item?id=15892898)

------
tschwimmer
It's unlikely that much academic research will happen here. Companies view
hiring pipeline data as proprietary because they fear competitors may learn
from it and gain an advantage. Even if researchers agreed to anonymize the
data, I doubt BigCos would agree to share, because it could still be used to
benefit a competitor.

There is likely to be internal research at some companies.

------
mratzloff
I haven't found any academic research about this specific area, although there
is of course a wealth of research into interviewing in general. The closest
analog to a technical exam is probably in the finance industry.

My suspicion, however, is that these kinds of problems have limited predictive
success and yield a lot of false negatives.

------
simplify
It's not academic, but we just published a guide that goes over the pros/cons
of each type.

[https://medium.com/@cspa_exam/a-guide-to-building-your-own-technical-interview-95c32f1d43d2](https://medium.com/@cspa_exam/a-guide-to-building-your-own-technical-interview-95c32f1d43d2)

------
saltvedt
[https://leanpub.com/leprechauns](https://leanpub.com/leprechauns)

------
hguhghuff
Are you asking because you are thinking of building a product around this
idea?

