
We analyzed thousands of interviews on everything from language to code style - emilong
http://blog.interviewing.io/what-really-matters-in-technical-interviews-we-analyzed-thousands-of-interviews-on-everything-from-language-to-code-style-heres-what-we-found/
======
Jemaclus
> Furthermore, no matter what, poor technical ability seems highly correlated
> with poor communication ability – regardless of language, it’s relatively
> rare for candidates to perform well technically but not effectively
> communicate what they’re doing (or vice versa), largely (and fortunately)
> debunking the myth of the incoherent, fast-talking, awkward engineer.

My interpretation of this is that the ability to communicate clearly about
code (whether the candidate wrote it or not) correlates with high technical
ability. Does this suggest that rather than having the interviewee write code
on the spot, one could hand them some code they've never seen before, ask
them to reason about it aloud for 30 minutes, and then gauge their technical
ability by how clearly they communicate about the code?

In other words, could you replace live-coding with "here's some code, tell me
about it"?

~~~
lettergram
That's actually how I conduct interviews. It's been immensely successful at
weeding out candidates within 5 minutes, although most interviews run 45-60
minutes.

Basically, I give them fewer than ten lines of code and ask what it does,
where a couple of bugs are, what they would name the function, etc. Then we
talk about how to improve it. I'd say less than 20% of the people I interview
pass. It's amazing how few people can find a bug and communicate it.
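
For a flavor of the format, here's a hypothetical snippet of roughly that
size (not one of the actual questions), with a couple of planted bugs:

    def last_index_of(items, target):
        # Intended: return the last index of target in items, or -1.
        # Planted bug 1: range starts at len(items), which is out of bounds.
        # Planted bug 2: the loop stops at 1, so index 0 is never checked.
        for i in range(len(items), 0, -1):
            if items[i] == target:
                return i
        return -1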

Half the people don't even tell me what they're thinking. And no matter how
many times I try to work with them, act like their buddy, or whatever, they
just kind of shut down. They think silently in their heads and don't talk
through the problem at all.

~~~
watwut
"Half the people don't even tell me what they are thinking."

Maybe they are introverted or simply need to think before they talk. Around
50% of people are introverted (fewer in the USA). Many people are like that
and quite skilled at the same time. Moreover, some environments punish
errors, so people who worked or studied in them tend to be conditioned to
think before talking.

"And no matter how many times I try to work with them, act like their buddy,
or w.e. they just kind of shut down."

You are not buddies; you are an interviewer about to decide whether they get
hired. Many people shutting down might mean that they are not comfortable
juggling the "buddy" social role and the "serious job interview" social
expectations simultaneously.

~~~
_0ffh
I'm not sure about introverted. I am, but when I get excited about a topic, I
can be as loud as the next person. As long as I feel I have something to
contribute, that is. I can rant for two minutes straight and then suddenly and
unexpectedly shut up when I've made my point. People are sometimes surprised
and ask why I suddenly stopped talking. I'll answer: "I've made my point."...
:3

So my bet is on error avoidance.

~~~
flukus
I'm pretty similar, but solving a bug in an interview question is hardly
something I'd get excited about.

~~~
_0ffh
I agree. Still, the point I was trying to get across is this: while
introverts do not strive to take center stage for its own sake, they are
perfectly capable of doing so for any good reason. Introversion is not
shyness.

------
skylark
Overall, the data lines up with my own intuition, but I thought I might throw
my own interpretation into the ring.

One of the biggest keys to doing well in technical interviews is to
completely separate the problem solving from the coding. The strongest
candidates will discuss the problem and solve it at an abstract level using
diagrams. Once satisfied with the solution, they'll code the entire thing,
making few mistakes.

I think this is what drives most of those metrics. Strong candidates submit
code later and have a higher chance of it being correct because they take the
time to problem-solve upfront. Their thought process seems clearer because
there isn't the iteration of "this should work, let me code it, oh no wait,
that's wrong, let me erase this now..."

~~~
omot
I think this applies to actual software management too. The people with the
most code commits aren't the ones who get promoted. The ones promoted are
usually the ones who really scope out the problem; once they're sure of a
complete solution, they code out their plans at a steady pace.

I think Einstein's the one who said: "If I had an hour to solve a problem I'd
spend 55 minutes thinking about the problem and 5 minutes thinking about
solutions."

~~~
ozim
Yes, the guys who have a lot of commits (and who don't think the problem
through long enough) end up with commits like:

"Fix" "Real fix" "Fix the fix because of fix"

Then you know you don't want them on your team. It really bites on tight
deadlines, when you have to push something to production but the poor guy
still needs to land that really, really last fix.

------
NumberCruncher
>> On average, successful candidates interviewing in Python define 3.29
functions, whereas unsuccessful candidates define 2.71 functions. This
finding is statistically significant.

The "average" is too sensitive to outliers and should not be used for such a
comparison...

[Edit] Being bored, I calculated the Kolmogorov-Smirnov statistic based on
the chart. It is between 10% and 10.5%. The number of defined functions seems
to be a significant but weak indicator.
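
For anyone who wants to reproduce that kind of check, a minimal sketch using
scipy (the samples are made up, since the raw data isn't published):

    from scipy import stats

    # Hypothetical per-interview counts of defined functions -- invented
    # numbers standing in for the unpublished dataset.
    successful = [3, 4, 3, 2, 5, 4, 3, 3, 4, 2]
    unsuccessful = [2, 3, 2, 3, 1, 3, 2, 4, 2, 3]

    # Two-sample Kolmogorov-Smirnov test: the statistic is the largest gap
    # between the two empirical CDFs, and the p-value estimates how likely
    # such a gap would be if both samples came from one distribution.
    result = stats.ks_2samp(successful, unsuccessful)
    print(result.statistic, result.pvalue)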

~~~
leeny
We actually did the KS test as well, but we omitted the results for narrative
clarity. Our KS test statistic is also < 0.05.

~~~
NumberCruncher
I would recommend also putting the more sophisticated results (like the K-S
tests etc.) in the blog post. For people interested in technical interviews
and charts, they may be more important than narrative clarity.

------
janwillemb
The title is quite clickbaity: "We analyzed thousands of technical interviews
on everything from language to code style. Here’s what we found."

What's wrong with this, I think, is that a (journalistic) title should give an
ultra-condensed summary of the main point of the article. This title suggests
that the authors gathered a lot of data but didn't find much.

(I find myself quite intrigued by clickbaity titles somehow, sorry for that.)

------
zebraflask
A lot of this article rings true to my experience. And I agree with the
comments dinging live coding tests: those are the worst. I can't code
effectively unless I'm calm and can concentrate, and these things seem almost
designed to throw you off balance.

It's even more galling when you have a healthy GitHub portfolio that they
refuse to even look at in favor of a quiz (this has happened to me recently).

~~~
graphememes
To note a few things about the interview and hiring process (I do a lot of
it; I've done about 20 interviews in the past two days): the reason for the
quiz, rather than your portfolio (unless it's extremely outstanding and you
are fine doing a live coding questionnaire), is so that we can judge all the
candidates equally and fairly.

It gives us a common baseline. Each candidate does the same thing, we have a
good idea of what we are looking for, and the rest of it tells us how you
think, how you approach your work, and how you organize it. Best of all, we
can compare that to how others do it.

Now, that's not to say everyone uses quizzes this way, but that is what we
use them for.

---

Before, as an engineer/manager, I hated giving "live coding" tests when they
weren't relevant. Doing "algorithms" or "palindrome" or "sliding window dns"
or "O(n)" exercises when you're hiring for a front-end or a management
position screams to me that the people doing the interviewing don't know
what they actually want.

Instead, quizzes or live coding exercises that are relevant, like "tell me
how to access all the elements in this particular element and traverse the
children to apply some styling," are much more useful: they show me the
thought process, the ability to retain information, and recall. They also
show communication ability when the candidate gets stuck and asks for help or
uses me as a sounding board.

It's not always about your implementation, but about how you handle the
situation and communicate.

~~~
zebraflask
I agree with you about irrelevant questions, and out of the dozen or so
coding quizzes I've done over the past several years, mostly with startup
companies, irrelevant questions are unfortunately what I usually got. One
company advertising a front-end role sensibly asked front-end-related
questions; that was the exception.

And the idea that quizzes provide a fair point of comparison comes across to
me as putting process ahead of substance. Interviewing isn't meant to be fair
to everyone (only one person gets the job, after all), so it's not like
handing out cookies and stickers in middle school. It's meant to determine,
in part, whether the person is capable of producing working code. If a person
can provide samples that prove it, requiring an artificial quiz really is a
slap in the face to a lot of good candidates.

~~~
graphememes
Well, a good interview process has earlier steps that evaluate the
individual's substance; certainly, if someone approaches an interview with
nothing but "here, do this quiz," that's not beneficial.

What's important to us is the person: their ability to communicate and learn,
their interests, and then their skill set and whether what we have is a good
fit for them and for us.

Focusing on a single part of the interview without stepping back and
reviewing the whole isn't very useful.

------
Radim
_" Poor technical ability seems highly correlated with poor communication
ability"_

Yep. Articulate attention is the name of the game ("communication ability"
sounds a little nebulous).

If you can't organize your thoughts, bring them to the forefront of your
attention, and name them, you're likely bad at handling abstractions. And
abstractions are at the core of "technical ability" -- the ability to name
things, find the appropriate abstraction boundaries, and chisel structure out
of chaos.

Articulate speech is the greatest human invention for a reason.

Testing for that (plus conscientiousness -- can you pay attention to details
and get shit done?) during interviews makes perfect sense.

------
FLUX-YOU
If you filter the interviews to only those where the interviewee:

- liked the person

- rated the questions 3 or 4 stars

- gave the interviewer 3 or 4 stars for being helpful

do the trends still hold?

How do those trends compare to interviews where the interviewee:

- disliked the person

- rated the questions 1 or 2 stars

- gave the interviewer 1 or 2 stars for being helpful
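
For concreteness, a hypothetical pandas sketch of that split (the column
names and file are invented, since the raw data isn't public):

    import pandas as pd

    # Invented schema, for illustration only.
    df = pd.read_csv("interviews.csv")

    liked = df[
        df["liked_interviewer"]
        & (df["question_rating"] >= 3)
        & (df["helpfulness_rating"] >= 3)
    ]
    disliked = df[
        ~df["liked_interviewer"]
        & (df["question_rating"] <= 2)
        & (df["helpfulness_rating"] <= 2)
    ]

    # Compare, e.g., success rates across the two slices.
    print(liked["success"].mean(), disliked["success"].mean())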

------
thenanyu
Judging by this graph in the article, and somewhat counter to the article's
claim:
[https://plot.ly/~aline_interviewingio/952.png?share_key=Htks...](https://plot.ly/~aline_interviewingio/952.png?share_key=HtksI4SyhXQRrGVr7Qo06i)

It looks to me like interview length _is_ correlated with success rate. If
your interviewer stops before 60 minutes, there's a bias toward successful
interviews. It seems like the interviews that end up being "no"s tend to get
hard-stopped right at the one-hour mark.

------
pklausler
I like to ask one question that probes basic analytic ability and a second
question that probes programming aptitude. Generally, the first question
takes either 3-5 minutes or the whole 45-50. It's usually a problem of the
form "write a predicate (Boolean-valued) expression that is true when..."
applied to something simple, and it's a basic test of being able to use
relations and logical operations to characterize a situation. It's depressing
how many great-looking candidates with awesome degrees, resumes, and
phone-screen performances get stuck trying to describe how to tell whether
two calendar entries (just start/end times) conflict with each other.
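
One standard answer, sketched in Python with hypothetical field names: two
entries conflict exactly when each one starts before the other ends.

    from datetime import datetime
    from typing import NamedTuple

    class Entry(NamedTuple):
        start: datetime  # hypothetical calendar-entry shape
        end: datetime

    def conflicts(a: Entry, b: Entry) -> bool:
        # Treating entries as half-open intervals [start, end), they
        # overlap iff each one starts before the other ends.
        return a.start < b.end and b.start < a.end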

------
javabean22
Here is a hint: if you aren't a fresh graduate, avoid companies that make you
code in a browser under time pressure.

------
crobertsbmw
None of the graphs are loading for me. It says, "If the problem persists,
open an issue at support.plot.ly." Unfortunately, I have to pay money to file
reports...

~~~
emilong
Looks like this has been fixed.

------
tptacek
One of the language results doesn't make sense. It claims that it matters,
significantly, whether you solve interview problems in Java when the hiring
company is a Java shop, but not when the hiring company is a C++ shop.

But that's backwards. It is in fact fairly difficult for a high-level-language
programmer to pick up C++, and facility with C++ (or at least C) is a common,
accepted requirement at C++ shops. A C++ shop that hired candidates without
regard for their aptitude _at C++_ would have real problems.

~~~
blacksmythe
The data suggests that it is easier for a C++ programmer to make a good
impression at a non-C++ shop than at a C++ shop, where the interviewers are
likely to test you on C++ edge cases.

It's interesting that this effect does not show up for Java programmers.

~~~
dkersten
I think this makes sense, because C++ is seen as a difficult beast. Non-C++
shops will be impressed that you know _any_ C++, while a C++ shop will want
to dig much deeper and make sure you are sufficiently advanced at the
language to do the job. Non-C++ shops won't dig as deeply, either because it
doesn't seem relevant (you're not going to be using C++ anyway, so you just
need to show programming ability, not C++ mastery) or because the
interviewers don't know C++ well enough themselves, since they don't use it
in their jobs. They won't ask you about in-depth edge-case language features
(why would they, if they don't use the language?), but a C++ shop will care
about all of these things and will have people who use C++ and can ask
in-depth questions.

Why isn't it the same for Java? I'm not sure. Perhaps it's because Java has
fewer gotchas as a language (certainly a lot less undefined behaviour, fewer
weird memory-related gotchas, no templates, no multiple inheritance, etc.)
and because C++ has this "it's a difficult language" prestige that Java
doesn't.

------
bovermyer
There's just one problem: this assumes that code challenges are present in all
(engineering) interviews.

------
leeny
Graphs have been fixed! Sorry about that, HNers.

~~~
Terr_
There's something awry about the "fewer code execution errors" graph: I count
an odd number of columns when they ought to be in pairs.

Clicking on the graph to go to plot.ly and viewing its "data" tab, it looks
like there's a blank X value for:

         text                                                                          y                   x
    0    bucketed_success_rate: -0.02<br>pct: 0.1<br>`Interviewer Would Hire`: False   0.0987654320987654

------
Joboman555
These people are not particularly good at interpreting statistics.

~~~
jaclaz
>These people are not particularly good at interpreting statistics.

Maybe that is a tad too harsh, but the use of "big difference" and
"significant" does not seem justified by the actual data:

>On average, successful interviews had final interview code that was on
average 2045 characters long, whereas unsuccessful ones were, on average, 1760
characters long. That’s a big difference! This finding is statistically
significant and probably not very surprising.

An average of 1760 vs. an average of 2045 characters means an overall mean of
around 1900 characters, so the two groups sit at roughly 1900 +/- 7%, and the
difference is so small that _anything_ could cause it.

A difference of 200 characters could come from merely naming variables a, b,
c, etc. vs. FirstUserChoice, DefaultArrayIndexingField; you know what I mean.

Same goes for:

>On average, successful candidates’ code ran successfully (didn’t result in
errors) 64% of the time, whereas unsuccessful candidates’ attempts to compile
code ran successfully 60% of the time, and this difference was indeed
significant.

As I see it, 60% and 64% as averages are almost exactly the same number and
carry very little practical significance. Maybe it is just me missing some
nuance...
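
To put numbers on that intuition, a quick back-of-the-envelope check (the
per-group sample sizes are invented, since the post doesn't give them):

    import math

    # Invented sample sizes; the post doesn't publish them.
    n1 = n2 = 2000
    p1, p2 = 0.64, 0.60

    # Pooled two-proportion z-test, computed by hand.
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se

    # Cohen's h, a scale-free effect size for proportions.
    h = 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

    print(f"z = {z:.2f}")  # ~2.6: "significant" with groups this large
    print(f"h = {h:.3f}")  # ~0.08: well below the 0.2 "small effect" line

With a couple of thousand interviews per bucket, a 4-point gap clears the
significance bar while the effect size stays tiny, which is exactly the
distinction between "statistically significant" and "big difference."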

------
snissn
What is the blue bar on top of the page?

~~~
FLUX-YOU
pace.js or something similar to it

