Ask HN: Has there been any academic research into software interviewing?
206 points by CSMastermind on Apr 25, 2019 | 122 comments
Some common interview types are algorithm problems, pair programming exercises, take-home assignments, etc.

Has there been any research into the predictive power of such assessments? Is there any evidence that a particular type of question correlates well with job success?

I doubt there’s been too much on software interviewing specifically, but interviewing/candidate selection for jobs is a large part of personnel psychology. A recent meta-analysis of what they know is below. Basically, general mental ability (g, IQ) is the single best predictor, work sample tests work well, structured interviews can add predictive power on top of IQ tests or work samples, and unstructured interviews are a trash fire.

> The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years

> This article summarizes the practical and theoretical implications of 85 years of research in personnel selection. On the basis of meta-analytic findings, this article presents the validity of 19 selection procedures for predicting job performance and training performance and the validity of paired combinations of general mental ability (GMA) and the 18 other selection procedures. Overall, the 3 combinations with the highest multivariate validity and utility for job performance were GMA plus a work sample test (mean validity of .63), GMA plus an integrity test (mean validity of .65), and GMA plus a structured interview (mean validity of .63). A further advantage of the latter 2 combinations is that they can be used for both entry level selection and selection of experienced employees. The practical utility implications of these summary findings are substantial. The implications of these research findings for the development of theories of job performance are discussed.
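The "paired combinations" figures in that abstract follow from ordinary two-predictor multiple-correlation algebra. As a rough sketch (illustrative only: it uses the single-predictor validities of about .51 for GMA and .41 for integrity tests reported in the 1998 paper, and assumes the two predictors are essentially uncorrelated, which that literature also reports):

```python
import math

def multivariate_validity(r_y1: float, r_y2: float, r_12: float) -> float:
    """Multiple correlation of a criterion y with two predictors 1 and 2."""
    return math.sqrt(
        (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    )

# Illustrative values: GMA validity ~.51, integrity-test validity ~.41,
# and an assumed near-zero correlation between the two predictors.
print(round(multivariate_validity(0.51, 0.41, 0.0), 2))  # 0.65
```

Because integrity tests barely correlate with GMA, the two validities combine almost like orthogonal components, which is why that pairing adds the most on top of GMA alone.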


Can't agree with this enough. The most consistent finding in industrial psychology is that general cognitive ability is by far the best predictor of job performance in virtually every role ever tested.

Even in jobs that don't seem cognitively demanding, like janitorial work or infantry, higher-IQ candidates almost universally do better, even if they start with much less experience.

The takeaway, as it applies to modern software hiring, is that skillset is way overemphasized. It's quite common to see job ads that focus heavily on the company's specific tech stack. ("Must have experience with Rails, React, Travis, and AWS")

It's much better to cast a wider net and try to find the smartest people anywhere. High-IQ people can easily re-tool their specific skillset. What's interesting is that this is much closer to how the most successful tech firms, like Google, tend to hire.

The research says structured interviews and work-sample tests have similar predictive power, and combined they have yet more. The takeaway shouldn't be IQ über alles.

The research shows that structured interviews do have better predictive performance than unstructured interviews. But that effect is entirely mediated by their higher correlation with IQ.

In other words, structured interviews are better because they're less noisy measures of intelligence. The takeaway very much is IQ über alles.

[1] https://digitalcommons.unomaha.edu/cgi/viewcontent.cgi?refer...

I am not familiar with the research, and don't have time to review it right now. However, it seems like common sense to me that there are some factors beyond intelligence that matter such as motivation, interpersonal skills and character traits.

I think you're right but those are really hard to measure in an interview. Motivation especially.

Your grasp of the research is out of date.

From the 1998 meta-analysis:

> The most well-known conclusion from this research is that for hiring employees without previous experience in the job the most valid predictor of future performance and learning is general mental ability ([G M A ], i.e., intelligence or general cognitive ability; Hunter & Hunter, 1984; Ree & Earles, 1992).

> Work sample measures are slightly more valid but are much more costly and can be used only with applicants who already know the job or have been trained for the occupation or job.


From the 2016 meta-analysis by the same author:

> Overall, the two combinations with the highest multivariate validity and utility for predicting job performance were GMA plus an integrity test (mean validity of .78) and GMA plus a structured interview (mean validity of .76)

Work sample tests do not work as well as the old research suggests.

Table with effects linked below


More recent paper


I can't agree with this. High IQ can't be everything. Wouldn't this value experience and knowledge (like everything you learned in university) at zero? Imo there are a lot of great and productive software engineers who wouldn't do well in a whiteboard algorithm interview. The people who tend to do well are the ones who have had a lot of practice with that kind of problem.

> Wouldn't this value experience and knowledge (like everything you learned in university) at zero?

Having switched professions twice, I now use practically nothing of what I learned in university from about second year forward. Math and to some extent physics are still relevant, but that's pretty much it.

It might be painful to admit, but the practical value of that rather specialized knowledge that took me several years of hard work to obtain is pretty much zero now.

> It might be painful to admit, but the practical value of that rather specialized knowledge that took me several years of hard work to obtain is pretty much zero now.

It's sad that your professors didn't tell you beforehand that you were going to learn how to learn during that time, not just study a particular technology stack.

This raises the question: are classes based around studying a particular technology stack the most effective way to teach somebody how to learn?

The short answer is probably no. And the fact is that top universities mostly aren't so focused on teaching whatever language or framework is the flavor of the day.

That said, you need tools of some sort if you're going to actually build things as opposed to just learning, say, algorithms in pseudo-code. And it probably makes sense to use some fairly standard language to do so. There's not much point in making things deliberately obscure by making students use some language that the professor designed for his PhD thesis.

Studying math is the best way to improve your logical thinking abilities.

Learning how to learn is a specialized set of skills that wasn't taught at the university level when I was in school (2010).

I thought that was fairly common knowledge about a great deal of university education.

I bet it wasn't a total waste. The real point of school, particularly university, is to learn how to learn rather than learn a bunch of facts or tools. Attending university probably made it a lot easier for you to switch professions twice.

I learned how to learn way before graduating.

The waste was close to total, but this is only obvious in hindsight. Given the information I had at the time my choices were rational.

I read the parent as talking about narrow skill sets that a smart and motivated individual can pick up pretty quickly. If you're looking for a senior developer you probably don't want to hire someone who has never programmed even if they're widely acknowledged as really smart and an expert in ball bearing design.

And obviously applies in lots of areas. If I'm looking for someone to head up a digital marketing or marketing research initiative, someone whose only work experience is software development is probably not the best choice no matter how smart and motivated they are.

OTOH, as you get to the level where people are more managers than practitioners, their specific skill sets presumably start to make less of a difference.

Added: And, as peer noted, people do shift careers significantly all the time. I never really directly used my undergrad (or grad) engineering degrees all that much.

So, what we know from industrial psychology isn't that experience doesn't add value. It's that high IQ people learn faster. This makes sense when you remember that intelligence is broadly defined as the ability to learn.

While Alice may have fewer years of experience than Bob, she might have effectively more experience if she absorbed understanding at a faster rate. An inexperienced, high-intelligence person usually starts at an initial disadvantage, but manages to "get up to speed" quickly.

This also underscores the particular importance of intelligence in software. The field is constantly awash in new technologies, where nobody has had time to accumulate extensive chronological experience. So, it's really important to find people that can absorb new concepts quickly.

[1] https://www.sciencedirect.com/science/article/pii/S019130851...

My interpretation is that among the pool of people who meet the minimum requirements and are interested in that position/career, IQ is the best predictor. If you're just picking random high-IQ people off the street, I don't think they'll do very well in a software engineering job.

> Wouldn't this value experience and knowledge (like everything you learned in university) at zero?

I haven't read the research myself, but if they are only looking at interviews, this could be explained by the fact that people with absolutely no prior knowledge would be filtered out before reaching the interview.

What is the correlation between IQ tests and algorithm interviews? Do people with better IQ scores do better on algorithm interviews?

I wouldn't expect that. Algorithm interviews as they're practiced tend to favor memory abilities rather than cognitive ones. It's hard to come up with standard algorithms on the spot if you haven't been exposed to them.

I do know that cognitive abilities and long term memory abilities aren't correlated. Cognitive abilities and short term / working memory are positively correlated though.

I work at Google; based on personal experience I would say it does a good job of setting a minimum bar in terms of intelligence and coding ability. However, it might reject some people unfairly.

A little bit of preparation helps a lot, but beyond a certain point preparation doesn't help anymore. All interview questions require solving a new problem in the interview itself.

Depending on the interviewer and question, the difference between a hire and no-hire recommendation can be quite marginal. There is a huge luck factor involved.

With or without months of preparation on some of the exact problems interviewers use ("grinding LeetCode")?

The trick here would be separating high IQ people who have prior algorithmic experience from those who do not, because that's a much bigger influence on performance.

Do note that in the United States, the use of IQ tests for hiring is potentially legally fraught.

A SCOTUS ruling[1] has found that IQ tests are assumed to disfavor minority employees, and therefore using them as a major factor in hiring decisions may run afoul of the Civil Rights Act. There are ways around this, but generally speaking you have to prove the specific test you are administering either does not disfavor minorities, or else show that it is directly related to the specific position you're hiring for.

[1] https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.

I am not a lawyer, do not take legal advice from randos on the interwebs. You should consult with a Real Actual Professional on how to deal with this risk. There have been both laws and court cases since the above that impact the ruling, which I am not qualified to analyze.

IQ tests don't measure creativity. Can I assume these studies suggest creativity is not very relevant to job performance?

That is so obvious - just hire people smarter than you.

But that is so hard: we have pride, we have "company culture", we are stubborn, we are talking about "A players", etc. etc.

So yes - try to make interviews structured and organized so that there is no random noise.

Interviewing.io is a startup that does some research into candidate evaluation. To their credit, they have even posted research with negative results.


I had the opportunity to be in a small interviewing workshop with Aline. She really knows what she's doing, and I believe the tech interviewing world would be a better place if more companies adjusted their process based on Aline's/Interviewing.io's insights.

I don’t really see what’s special about their process other than the resume-blindness aspect. That’s been validated elsewhere by Triplebyte. Other than that one aspect, the process is exactly like your standard technical phone screen, except that you have to do a really easy HackerRank problem to get invited to the platform.

If I am wrong here, someone please correct me.

Hey, Aline here. Unlike Triplebyte, we offer people free, anonymous mock interviews with engineers from companies like Google, Facebook, etc. Basically, you get on the platform, practice, and then if you do well in practice (again, real interviews, not coding challenges), you can book real interviews with top companies. Those interviews are also anonymous, which means that if you do poorly, you don't have to unmask.

Well, I disagree with your use of the word “anonymous.” Gender, national origin, and ethnicity are frequently easy to guess (at least at the level of “is not a white, American-born male”) via voice. I know you tried voice masking and it didn’t help equalize the results wrt gender, and I don’t have a better suggestion, so I’m not blaming the process for failing to remove bias. This criticism applies to both the mock interviews and the real interviews.

Bias creeps in in other ways, too. I have done both on interviewing.io as a candidate. You probably remember how Instacart held interviews on your platform under the terms that it was just an informational chat, and anyone who met the criteria to interview would then get a real technical interview. I did the informational chat and was then not interviewed after deanonymizing. And, BTW, after I emailed support I never got my technical interview, nor was I given any sort of resolution or information about what happened.

It’s a good experiment, but it falls short of what I’d call “anonymous” and doesn’t really remove very much bias in the overall process.

We've stopped letting companies do informational chats precisely for that reason. It's all technical from here on out. I'm not a fan of the "informational chat" approach at all.

Thank you for saying that! It was great to meet with you and the rest of the team =)

^^ this. Aline Lerner runs a solid content strategy over there, focusing on data journalism.


^ Where do you report bugs around here?

That reply button's in italics?!

It seems you can italicize your reply button by putting a star at the end of your comment

Please don't anyone report this bug, let the italics roam free

whoops, thank you.

Has anyone here done interviews with interviewing.io? How was it, did it lead to a job?

Yes and yes. Interviewing.io was great for kicking some rust off the ol' interviewing tires and also led to an offer that I accepted.

I have a longer reply to a different HN thread with more detail about my experiences here: https://news.ycombinator.com/item?id=11679844

I think reading 10-20 anecdotes is far more useful than an academic paper.

After meeting my PhD professor, I stopped trusting academia for truth.

I spent some time digging through academic research last year for a chapter of a book I was writing. There was some interesting work both relating directly to software development and collaboration and other topics more broadly. There was also a lot of unreadable crap that was pretty disconnected from the real world.

Academia's disconnect with practical things is amazing.

Perfect example of a field where the quantification of performance killed all good intentions.

The dishonesty to "prove" something was shocking.

Now I skip the abstract and go straight to the data.

When I was self-studying social psychology I came across social facilitation theory and found it applicable to understanding interviewing. People have a hard time performing complex tasks while other people are watching because their working memory is monitoring the social situation. This varies for each individual. For tasks that have been rehearsed, like a musical or dance routine, they perform better. Politeness theory is applicable as well, people want to be liked and at the same time do not want their autonomy challenged. My pet theory is that the whiteboarding epidemic is a status hierarchy game essentially communicating "if you want to work here, you are going to do what I say."

I disagree completely with your theory. I have been on both ends of the whiteboard interview an almost uncountable number of times during my career. What it has always come down to in my mind is a collaborative process to solve a problem where the one performing the task is trying to solve the problem, while the giver of the task makes suggestions and points out weaknesses before they get out of control. I'm sure there are sadists who just want to watch people squirm while trying to solve an extremely complicated problem, but I don't think I've ever run into that in an interview and it has never been what I've put people through.

I've had whiteboard interviews where the interviewer stays silent while taking notes.

I failed 100% of them with various, strangely random feedback by HR/recruiter.

I've come to conclude it's the hiring mechanism of social incompetents / unempathetic sciency types. And it was always a dev from another group than the one I was applying for.

No thanks.

> it has never been what I've put people through.

How can you tell? Is this always based on how you experienced the interview, or do you ask candidates afterwards how they felt?

(no insult intended, but) you made an empirical statement, and I'm curious where your evidence is from.

In that case, why use a whiteboard instead of a collaborative text editor like coderpad, repl.it, or even gnu screen?

Testing in an environment completely different from 'production' isn't going to give a super strong signal on collaborative problem solving imo.

Would you allow junior developers to whiteboard a highly qualified and accomplished CTO candidate? No. It is acceptable for peers who will be competing with the candidate in the promotions tournament or the hiring managers to conduct whiteboarding exercises.

> "if you want to work here, you are going to do what I say."

I recently started pointing that out, in a roundabout way, when I run interviews.

Why? If a candidate starts arguing about XML versus JSON, the candidate misses the point when I bring up XML. (Hint: I only discuss XML because it allows discussing widely known concepts that are very general and have little to do with XML versus JSON.)

Or, if a candidate says to me, "why can't we use async/await" in a particular question, the candidate also misses the point of the question. (Hint: I'm trying to test knowledge of concepts that's easy to test when doing something without async/await.)

Interview questions are always contrived and end up as an "if you want to work here, you are going to do what I say" exercise. If a candidate can't work within the constraints of a short interview exercise, then what will happen when the constraints of the job come into play? We never get to use our favorite patterns and APIs all the time.

Wow, if I was in an interview where the interviewer was that closed-minded, I would definitely turn down an offer but I might just walk out. If they are unwilling to consider even discussing useful patterns in an interview, working there must be hell.

To correlate with job success, you need to define and measure job success. A sub-question: How's the academic research into measuring programmer job success?

I've always thought that surveys of team members and managers could be a pretty cheap and effective way to do this (in a research setting, where you aren't basing pay or contract renewal on it). It may not give you much in an absolute sense, but should give you a good idea of the relative ranking of programmers.

Of course software engineering research in general has always been a little unimpressive because what you really need is several teams completing projects, some using method A and others using method B. This could cost millions, and would only settle the bet between methods A and B.

Lacking this, we're left with the publicly available research that relies on unconvincing proxies for success and very small samples. It's better than nothing but far short of what we ought to have given how important software is.

> This could cost millions, and would only settle the bet between methods A and B.

Even worse: it would only settle the bet between methods A and B in a particular context.

Google publishes a lot of their work at https://rework.withgoogle.com/.

Also, Google's former head of people (Laszlo Bock) founded a company that is focusing on that area: https://humu.com/.

While I wasn't always a fan of Google People/HR procedures while I was there, I did always appreciate the data-driven/research approach Laszlo and the People team took.

How about academic research into the value of academic research in programmer job success?

I figure that eventually it'll be similar to the Voight-Kampff test and kept on file forever while being shared with all potential employers.

I think the baseline test from 2049 would be more appropriate:

"Because they have to kill their own kind, they constantly need to be assessed as to whether their work is having some kind of moral impact on them."


Hillel Wayne has been writing about this a bit on Twitter. Some threads: https://twitter.com/hillelogram/status/1120495752969641986 https://twitter.com/hillelogram/status/1119709859979714560

In general, there's a lot of research, though with some weird gaps, and a lot of results (though not all) are inconclusive.

Related: how do you even define the type of job and compare the person hired to the job?

A noob could be a successful hire without being ultra-productive compared to, say, a more experienced hire.

And that doesn't account for all the places hiring / thinking they need / having hired "rockstar" developers... to maintain their CRUD app.

Usually they look at future reviews or just ask managers how well that individual is performing.

Easy: LOC / hour.


On an inverse scale, sure. Only the best coders have net-negative LOC/hour metrics.

This would end up with a lot of code golf.

Code golf probably correlates with IQ.

Reaching the objective as efficiently as possible... yeah, sounds about right.

Hard Mode: Removed LOC / hour.

Hardest mode: non-written code

Social-engineering your PM to remove requirements, yep.

My PM loves removing stuff from tickets.

Most of the time, what people want isn't even what they want.

I really think this is the best we've got so far.

So the secret to being a 10X programmer is ":set tw=8", making your maximum line length 8 characters and forcing you to be 10 times more productive than rival devs who're stuck at the old 80-character standard. Got it.

Or you can move code around, creating large diffs, and when asked why, just say "I'm familiarizing myself with the code" -- real story: my boss says that person is the most productive programmer.

It seems gaming trivial metrics works in this case.

1. No one will be allowed to do that if you work in a team environment.

2. No one has been able to come up with a better alternative.

I wasn't implying it's a perfect metric or even a reasonable one. If you really need a quantifiable metric, then that's all we got.

I think reading code delineated into 8-character lines, written by somebody else, with no whitespace and single-letter variable names, is the punishment reserved in hell for the very worst of developers.

Also known as reading Arthur Whitney's code.[0]

[0] http://kparc.com/b/

I hope hell is as engaging and rewarding as that.

The only difference between hell and heaven is that in one, you end up understanding what you read. ;-)

"job success" is something that happens in the future relative to a "job interview".

To be able to truly predict whether or not someone will be successful means that the interviewer would have to be able to foresee how that person will interact with others in their workgroup, rise to challenges that don't yet exist, and be motivated to remain for a "long-enough" engagement with the company (whatever that means). Predicting the future is just damn hard, and it's even harder when nebulous desired outcomes such as "job success" are used.

To make matters even more difficult, this problem also has "another half" to it which people tend to ignore: the employer doing the interviewing.

These types of questions seem to take the point of view that an employer is presented with a bowl of fruit (candidates) and all they have to do is be able to select the best fruits.

But it doesn't work that way. You may not get the fruit which you want, that fruit may want to go elsewhere or others might have already grabbed the fruit you would have wanted. The fruit which looks good now may end up rotten a short time later. Some fruit that looks undesirable today, may be awesome later. You may grow tired of apples and desire pears, but you've already filled your pantry with apples.

What would happen if some organization was able to "figure this out" and truly maximally optimize their candidate selection using interview techniques? I am not so sure it would be easily distinguishable from what other similar competitive employers are doing. Predicting the future can only go so far.

One thing I've been wondering for a long time when I see the quasi-sado-masochistic relationship between engineers and hiring processes is: are bad hires really that 1) costly, and 2) frequent?

In my 12+ years career, I've worked with so-so engineers, but never truly bad ones that would ruin a project. And even the ones that weren't great, what damages did they really cause? I've seen many more companies failing because of a bad product, bad sales strategies, bad market fit, bad business model than engineering teams using the wrong programming language, or not building microservices the right way.

I find this constant obsession around identifying "rockstars", avoiding "bad" engineers at all cost to be truly unhealthy.

Yes, and the people behind the "bad product, bad sales strategies, bad market fit, bad business model" never get "tested" on anything during job interviews.

Let's define "bad" as the second-worst to fourth-worst engineer in your class of 30 engineering students that made it through.

I would say bad engineers tend to know they are bad and either go into managing/coordination roles or grab a subset of tasks they get good at, like being the only one in the company working with, e.g., a specific third-party system and doing support and integration for that.

Bad engineers that have worked with the same system for three years are way better with it than super-engineers that have never touched it, for at least a couple of months.

As you said, I have never really had problems with bad engineers messing things up.

EDIT: Reluctance to hire engineers and saving pennies for a dollar on the other hand have messed up some projects.

> Bad engineers that have worked with the same system for three years are way better with it than super-engineers that have never touched it, for at least a couple of months.

It's when that bad engineer starts protecting "their" turf at the expense of the rest of the org in order to provide themselves with job security that things start to really go south.

Yeah, but that's not really a big problem where I live, where it's a LIFO queue for firing people. At stack-ranking US corps on the other side of the Atlantic, I imagine it's another story...

I strongly agree. A good engineer can compensate for bad product management and occasionally bad leadership, but can't do much about all the other things you listed.

There's no way to guarantee a successful engineering hire, but organizations try to because their culture is so blameful or fearful of conflict that all the incentives are misaligned.

I believe most organizations copy the Google, Facebook, etc. hiring processes with the goal of adopting an "industry standard process" because they don't know how to conduct an interview process that's tailored to their own organization, and they don't want to think too deeply about it.

Having a coherent interview process means dealing with your organizational baggage: understanding it and either resolving it or crafting the interview process to select for people who can integrate well with it. Usually it's the latter, since most of the issues start at the top.

Google, Facebook, etc. have spent a lot of time and money crafting a good interview process for their organizations: specifically what they value and don't value (even if the individual interviewers aren't always fully aware of what the organization is selecting for). They can also support a lot of false negatives.

But the copycats don't have those same needs or candidate pool, so it always comes across as a bit disconnected to me.

It's very easy to see how it can damage or ruin a project. It shows once you need to implement a big feature to address some new business need that can be critical for the company to survive: you have a large code base and you must not break anything. The clock is ticking, the rivals don't sleep, the market doesn't wait.

Or you can just waste all the time firefighting the consequences of bad software.

It's not the only factor, but it can matter. The majority of companies I worked for suffer from this. What really mitigates the problem is that competitors have the same problem on average.

I don't know of any academic research, but Triplebyte is trying to understand this better. Their interview process involves a mix of algo problems and pairing over Hangouts.

Also, there's no one-size-fits-all theory. There are companies where every ms of time you save by writing a better algorithm makes a big difference, and most where it doesn't matter. Take-home assignments and pair programming have all been tried and are still used by some companies, but the majority still rely on whiteboards and it seems to work fine.

> it seems to work fine

How do we know that? Interviewing is near a total crapshoot from what I can tell, based on my 13 years of experience (~8 of them actively involved in the process, 3 as a decision maker). I have yet to find a method that weeds out people who can whiteboard but can't deliver, or those who seem to have a great personality and work ethic but stop showing up to work and/or throw tantrums when they don't get their way.

Obviously some people emit red flags like the Sun emits energy, but I don't believe you can assume it's "working fine" without some baseline definition of "fine" and a corresponding study.

> How do we know that?

As the sibling comment mentions, companies are able to build large-scale systems with engineers who got in through these kinds of interviews. I am not saying this is the right way or the wrong way, but it definitely works.

Of course when you do this you miss out on some amazing candidates, but the big companies that started this can afford to do that because of the insane number of applications they get.

A larger problem is the cargo-culting of whiteboard interview techniques by smaller companies who do not get the volume of applicants that FAANG do, yet believe they will succeed anyway.

Some are. Some fail to pull off even moderately difficult projects. I didn't mean to say that interviewing is literally a coin flip, but I can also dig a trench with a pickaxe; that doesn't mean it's a good way to do it.

> How do we know that?

Because businesses are still able to solve problems, service customers, etc. Of course, there is room for improvement.

Unsatisfactory as the whole hiring process can be, I don't actually believe that throwing a pile of resumes in the air and picking up a handful at random would work as well.

I don't follow. Are you suggesting random = whiteboard?

Some people are suggesting that nothing really works. I suggest that our imperfect processes are still probably a lot better than randomness.

Ah I see. Agreed!

Are a lot of jobs really algorithm development, though? Every job I've had just used existing algorithm implementations for sorting, FFT, etc. I feel like to do well at software interviews I have to brush up on dynamic programming, sorting, greedy algorithms, and so on, even though I've almost never had to develop a serious performant algorithm on the job.

Companies tend to use algorithm questions for new college grads not because that's what the job entails, but because that reflects what the candidate's background was about (college).

If someone is fresh out of a 4-year program that was all about data structures, algorithms, operating systems, etc., then demonstrating mastery of those topics tells an employer that the person is smart and can learn large amounts of complex technical material.

I agree that algo questions are useful for recent grads. I find it more frustrating when those questions are given to candidates for senior positions with 10 or more years of experience. Staying fresh on the big-O of a bubble sort and on rebalancing a binary tree ends up being extra work on top of building enterprise software that solves real-world problems day to day.
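For anyone who hasn't sat one of these recently, this is roughly the sort of thing a senior candidate is expected to recall cold (a rough sketch, not production code):

```python
def bubble_sort(xs):
    """Classic bubble sort: repeatedly swap adjacent out-of-order pairs.

    Worst/average case O(n^2) comparisons -- the canonical "know your
    big-O" interview staple, even though in practice you'd call sorted().
    """
    xs = list(xs)  # don't mutate the caller's list
    for i in range(len(xs)):
        # After pass i, the last i elements are already in place.
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs
```

Trivial to write when you've been drilling it, and almost never relevant to shipping enterprise software.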

>Are a lot of jobs really algorithm development though

Probably not "a lot", but personally I have _only_ worked at companies where they are our bread and butter (biotech and finance).

Made an account just for this:

A professor I know from undergrad did research that could, with some adaptation, evaluate whether or not people would work well in software engineering situations. It's centered more on whether team members would work well together, but relevant nonetheless: Malte Jung.

I don't think it's necessary. Good software engineers are so scarce you should just grab the first adequate one who comes your way. When I was in the position of interviewing candidates, 9 out of 10 couldn't solve the most trivial of programming tasks.

> 9 out of 10 couldn't solve the most trivial of programming tasks.

This has consistently been my experience.

Nevertheless, the ability to code is necessary but not sufficient. You're going to want to dig into character at least a little.

- How are they in a team context?

- Do they treat others with respect?

- Do they act with integrity?

These kinds of attributes are just as important as whether or not they can code, and I've been badly bitten when they've been lacking.

Not software development, but professional schools (eg medical schools) have published research about which admission selection methods can have predictive value both in-school and as a clinician. Lots of long follow-up.

I’ve seen some programs do away with references, resumes and even interviews.

The programs that I’ve seen remove interviews usually replaced them with standardized stations presenting different scenarios.

E.g.: “Explain to someone how to wash their hands if they never have before”. If you can do that well, you can probably explain some health procedure you’ll later learn about and have to explain.

I haven't looked for any recent research in this area, but what I saw way back when was, in one sense, somewhat depressing. I forget if it was for undergrad or MBA programs but, as I recall, the correlations were basically to SATs/GMATs and class rank/grades. Of course, your results will vary depending on how you define success, but basically the quantitative hard measures had the most predictive value.

But what outcome were they measuring? Grades and rank in the MBA program?

A good student will probably continue to be a good student.

But how to select which students will become good practitioners may be a different story.

This was a long time ago and I don't remember the details. One commonly-used quantitative workplace success metric is compensation/salary, which is not necessarily unreasonable for MBA programs but can be more problematic in other areas.

And you're right. It's not really surprising that getting good grades/test scores at one level of school correlates reasonably well with the same thing at another level. Which I imagine is one reason universities/grad programs generally don't optimize purely on that metric: their objective isn't solely to crank out students who study well in school.

ADDED: This, by the way, is more or less just a variant of general intelligence measures being correlated with a lot of outcomes. The SAT and so forth aren't intelligence tests, but I'm sure they're highly correlated with them.

I know of at least one big tech company that has done this, and it informs the ratio of different types of interviews for each level.

I imagine it'd be really hard for an academic research institute to do so. Firstly, they'd have to focus on technical interviewing specifically. Secondly, they'd have to get some big tech companies to share information about trajectory and whatnot for employees once they've been hired; I imagine the extra work of building those pipelines, anonymizing the data, etc., probably wouldn't be worth it for a lot of companies.

Tokenadult used to post the data on this on every hiring thread. Here's a link: https://news.ycombinator.com/item?id=4613543

I think a prior study is needed in how job postings correlate to actual work. Once you have a stable initiating side from an academic standpoint you can begin to look at the receiving side of the equation.

I once bookmarked a blog post [1] that attempts to review existing research on the topic. The bottom line, I think, is that unstructured interviews have the least predictive power. So whatever your interviewing process is, make sure it's structured and the same for every candidate.

[1] https://erikbern.com/2018/05/02/interviewing-is-a-noisy-pred...

This paper isn't about interviewing, but is very relevant:

What makes a great software engineer? https://dl.acm.org/citation.cfm?id=2818839

HN discussion: https://news.ycombinator.com/item?id=15892898

It's unlikely that any academic research will happen. Companies view hiring pipeline data as proprietary because they fear competitors may learn from it and gain an advantage. Even if researchers agreed to anonymize the data, I doubt BigCos would agree to share it, because it could still be used to benefit a competitor.

There is likely to be internal research at some companies.

I haven't found any academic research in this specific area, although there is of course a wealth of research into interviewing in general. The closest analog to a technical exam is probably in the finance industry.

My suspicion, however, is that these kinds of problems have limited predictive success and yield a lot of false negatives.

It's not academic, but we just published a guide that goes over the pros/cons of each type.


Are you asking because you're thinking of building a product around this idea?
