
Technical interview performance is kind of arbitrary - leeny
http://blog.interviewing.io/technical-interview-performance-is-kind-of-arbitrary-heres-the-data/
======
brightball
With programmers, the single easiest way to identify good candidates (in my
experience) is sheer interest in what they do / desire to learn. This is a
learn-every-day field, and if you're interested in what you're doing, you're
going to do a lot better at it. It's hard to apply yourself mentally to
something that you don't have a good level of interest in. Given that it's a
learn-every-day field, people with that level of interest will realistically
be able to learn anything they need to solve the problem you're hiring them
to solve.

The only real differentiating factor is your tolerance for ramp up time. I
expect a programmer to be able to pick up a new language or database within a
couple of weeks (tops) in most cases. If I'm hiring full time, that's
something I'll tolerate. If I'm hiring a contractor, I'm going to be uneasy
about paying high hourly rates for him to learn the job.

The single most effective way that I've found to interview for "interest" is
to just get them talking about something they've done before and ask them to
go deep into the details. You get everything you need from watching somebody
talk, with a smile on their face, about how they solved some problem in a
creative way that makes them show some pride. Doesn't really matter what the
problem was - whether it was a business problem, code problem, or hardware
problem.
The important thing is the level of attention to detail in addressing it.

I've been using this technique for about 8 years now and while I don't make it
the exclusive criterion for hiring, every person I've ever hired who has passed
that part has ended up in my "great hire" category.

~~~
jkyle
> I expect a programmer to be able to pick up a new language or database
> within a couple of weeks (tops) in most cases.

They may be able to hack around, write a for loop, track down a bug... but
you're not going to get the same caliber of work from someone who first saw
Python two weeks ago compared to someone who's been using the language for 5
years on real projects.

~~~
bsder
> but you're not going to get the same caliber of work from someone who first
> saw Python two weeks ago compared to someone who's been using the language
> for 5 years on real projects.

Careful. You probably need to define your terms more clearly.

A CS student who knows Java (4+ years of experience) is going to turn out
_very_ different programs from a 20 year veteran of Erlang who is just
learning Java.

Even with bad Java idioms, the veteran is very likely to be turning out much
better code because he is thinking about the underlying architectural issues
(failure modes, recovery, concurrency) with far more experience.

Yeah, I watched this play out in real time when I paired them. It was actually
really enlightening and entertaining.

I also found out I didn't know as much about either Java or Erlang as I
thought I did.

~~~
deangiberson
"Even with bad Java idioms, the veteran is very likely to be turning out much
better code because he is thinking about the underlying architectural issues
(failure modes, recovery, concurrency) with far more experience."

Careful with that assumption. While there may be instances where this is true,
I've met many veterans that couldn't think outside the small specialty they
had become locked into.

Idioms are powerful in that they shape how you think about a solution within
a fixed language. They shape your thinking, and the shape of your thinking
changes what solutions you can conceive of. A veteran that shows interest in a
broad range of topics will have more failure experience and will be able to
offer better results.

~~~
blackflame7000
I too have found this assumption to not always be the case. At my company
there are certainly some people who qualify as vets yet insist on writing
purely procedural C style code and have yet to adopt modern paradigms like OO
design simply because they are so far removed from their education.

~~~
Can_Not
To be fair, they might be on to something. I've programmed my whole life in OO
and recently started looking into FP. I can see the veterans thinking that OO
is a fad after seeing things like BeanFactoryFactoryFactory (I don't program
in Java myself and have never encountered a use case for a "Factory"). Beyond
that, what value do you get from adding functions to structs? I personally see
the value, but I also see the value in not doing that. I would recommend
teaching them OO while you yourself study FP and the criticisms of OO. After
that, it may be easier to evaluate which is better for your use case. It
sounds like the vets have already cornered you out of OO, but you also
definitely don't want to force the wrong tool onto a project just because it's
the tool you understand best.

------
staunch
Most interviewers don't ask enough technical questions to have any idea what a
candidate knows or doesn't know. If their _one_ or _two_ questions happen to
be something the candidate knows well, they'll call them a genius. If they
happen to not know, they'll label them an idiot.

You can learn a lot more from 20+ rapid-fire questions than from forcing a
candidate to eke out an answer to something they're not familiar with. And
once you establish the areas they're familiar with, you can ask them truly
useful questions.

The key is to look for people who have strengths and not worry at all about
gaps in their knowledge. Anyone who has earned genuine expertise in one area
will be able to do so in other areas.

The other big mistake most interviewers make is forgetting about the "Curse
of knowledge":
[https://en.wikipedia.org/wiki/Curse_of_knowledge](https://en.wikipedia.org/wiki/Curse_of_knowledge)

I've seen people research the answer to a question before an interview and
then expect candidates to be equally informed without that advantage.

~~~
TallGuyShort
Along these lines, I start out with very broad questions. Something like,
"tell me how you'd troubleshoot a web service that's suddenly not accepting
connections / suddenly performing badly." Different candidates will focus on
different aspects of that problem depending on their background: low-level
networking, cloud environments, application-level problems, databases etc.
Based on their resume, I like to see if their expertise matches up with their
experience. I like to see that they don't consider ONLY things within their
expertise. I like to see if they can make reasonable guesses outside of their
expertise without BS'ing me. I like to see if they have a good approach to
exploring areas they're unfamiliar with and general problem-solving. I like to
see if they recognize how valuable it is to have diagnostic / monitoring /
change control tools in place and to have done pro-active testing. This can be
tough to do with coding problems, but I still try to give problems that should be
familiar to experienced low-level C developers as well as high-level Python
web devs and tailor my expectations to their experience. I'm more concerned
with how well they've learned based on what they've done than if they've
already learned what I'd like them to do.
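
The lowest-level branch of that troubleshooting question ("is anything
accepting TCP connections at all?") can be sketched in a few lines of Python.
This is only the first probe a candidate might describe, not a full diagnosis,
and the host/port would of course be whatever service is misbehaving:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A useful detail to listen for: a refused connection fails immediately (nothing
is listening on the port), while a saturated or firewalled service tends to
time out instead, which already narrows down where to look next.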

~~~
8873872782
This is a very good way for senior engineers to interview candidates.

Newer engineers interviewing candidates will typically have difficulty doing
this well.

~~~
TallGuyShort
That's fair - I wouldn't be a good judge of this if I didn't have really good
breadth myself. Often a candidate in his own area of expertise will go beyond
what I can fairly judge, and I end up having to do a bit of reading afterwards
to confirm and/or assume they were correct.

------
senekerim
I have been to lots of interviews, on both sides of the table. I find most
interviewers unprepared to evaluate the person for the role, and instead
exercise their own biases, stroke their egos, etc. It's largely a voodoo
practice that we'll look back on and laugh at as a civilization at some point.

~~~
colmvp
I wonder how many employ Kahneman's recommendation based on his book,
"Thinking, Fast and Slow":

> Suppose that you need to hire a sales representative for your firm. If you
> are serious about hiring the best possible person for the job, this is what
> you should do. First, select a few traits that are prerequisites for success
> in this position (technical proficiency, engaging personality, reliability,
> and so on). Don't overdo it — six dimensions is a good number. The traits you
> choose should be as independent as possible from each other, and you should
> feel that you can assess them reliably by asking a few factual questions.
> Next, make a list of those questions for each trait and think about how you
> will score it, say on a 1-5 scale. You should have an idea of what you will
> call "very weak" or "very strong."

> These preparations should take you half an hour or so, a small investment
> that can make a significant difference in the quality of the people you
> hire. To avoid halo effects, you must collect the information on one trait
> at a time, scoring each before you move on to the next one. Do not skip
> around. To evaluate each candidate add up the six scores ... Firmly resolve
> that you will hire the candidate whose final score is the highest, even if
> there is another one whom you like better — try to resist your wish to
> invent broken legs to change the ranking. A vast amount of research offers a
> promise: you are much more likely to find the best candidate if you use this
> procedure than if you do what people normally do in such situations, which
> is to go into the interview unprepared and to make choices by an overall
> intuitive judgment such as "I looked into his eyes and liked what I saw."
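
The procedure in the quote is mechanical enough to write down directly. Here is
a minimal sketch; the six trait names and the candidates' scores are
hypothetical, while the 1-5 scale, the per-trait scoring, and the "hire the
highest total" rule come from the passage above:

```python
# Six dimensions, scored one at a time to avoid halo effects.
TRAITS = [
    "technical proficiency", "engaging personality", "reliability",
    "communication", "problem solving", "initiative",
]

def total_score(scores: dict) -> int:
    # Sum exactly one 1-5 score per trait; a missing trait is an error,
    # not a zero, so every candidate is judged on the same dimensions.
    return sum(scores[t] for t in TRAITS)

def best_candidate(candidates: dict) -> str:
    # Firmly resolve to hire the highest total, not the one you "liked".
    return max(candidates, key=lambda name: total_score(candidates[name]))

candidates = {
    "A": dict(zip(TRAITS, [4, 3, 5, 4, 4, 3])),  # total 23
    "B": dict(zip(TRAITS, [5, 2, 4, 3, 5, 3])),  # total 22
}
```

The point of the exercise is that the ranking is fixed by the sums, so an
overall intuitive impression formed during the interview can't quietly
reorder it afterwards.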

~~~
Someone1234
I like that.

But just to be a little bit contrarian for the sake of conversation: has it
been shown that ignoring your gut instinct, or even who you like most, is
inherently wrong? That kind of testing definitely removes a lot of bias, but
it just assumes that biases are inherently bad.

There certainly ARE bad biases (sexism, racism, etc). But you may also need to
work with the individual you're recruiting, so how well you think you can get
along with them or work with them is likely important too.

Plus if this method was common, do you think candidates would prep
specifically for those scores? Seems like something very easily gamed. Not to
mention that scoring itself can be biased.

As I said at the start, I actually like that, and think they're on the right
track. But I'd likely want to combine it with something LESS subjective to get
a fuller picture of a candidate.

~~~
colmvp
Your question is one I've been exploring since the New Year.

One thing to consider is that the gut is a collection of relationships and
groupings based on experiences and cognitive processes.

This is great for some circumstances. I don't need to take a tally of all the
stats to ensure I'm making the most educated choice (based on reviews,
calories, health safety, longitudinal studies, etc.) of which sandwich to buy
at the deli.

On the flip side, our gut can betray us. One of the chapters of Malcolm
Gladwell's Blink tells the story of New York cops whose instant reaction
caused the wrongful death of an innocent person, because the heuristics that
formed assumption after assumption were just flat-out imprecise. Signs that
should've flagged them to check their assumptions actually only reinforced
their view.
From their perspective they truly believed their conduct was in line with what
they expected. In Thinking, Fast and Slow, Kahneman provides numerous gambles
where our intuition guides us to make suboptimal or imprecise choices due to
loss-aversion, endowment theory, et al.

I have to go, so I don't have a perfect answer for you. But I will share two
Kindle highlights from a book I'm reading called Rational Choice in an
Uncertain World:

> Our decision-making capacities are not simply “wired in,” following some
> evolutionary design. Choosing wisely is a learned skill, which, like any
> other skill, can be improved with experience. An analogy can be drawn with
> swimming. When most of us enter the water for the first time, we do so with
> a set of muscular skills that we use to keep ourselves from drowning. We
> also have one important bias: We want to keep our heads above water. That
> bias leads us to assume a vertical position, which is one of the few
> possible ways to drown. Even if we know better, in moments of panic or
> confusion we attempt to keep our heads wholly free of the water, despite the
> obvious effort involved compared with that of lying flat in a “jellyfish
> float.” The first step in helping people learn to swim, therefore, is to
> make them feel comfortable with their head under water. Anybody who has
> managed to overcome the head-up bias can survive for hours by simply lying
> face forward on the water with arms and legs dangling—and lifting the head
> only when it is necessary to breathe (provided, of course, the waves are not
> too strong or the water too cold). Ordinary skills can thus be modified to
> cope effectively with the situation by removing a pernicious bias.

> The greatest obstacle to using external aids, such as the ones we will
> illustrate in this chapter, is the difficulty of convincing ourselves that
> we should take precautions against ourselves as Ulysses did. The idea that a
> self-imposed external constraint on action can actually enhance our freedom
> by releasing us from predictable and undesirable internal constraints is not
> an obvious one. It is hard to be Ulysses. The idea that such internal
> constraints can be cognitive, as well as emotional, is even less palatable.
> Thus, to allow our judgment to be constrained by the “mere numbers” or
> pictures or external aids offered by computer printouts is anathema to many
> people. In fact, there is even evidence that when such aids are offered,
> many experts attempt intuitively to improve upon these aids’ predictions—and
> then they do worse than they would have had they “mindlessly” adhered to
> them. Estimating likelihood does in fact involve mere numbers, but as Paul
> Meehl (1986) pointed out, “When you come out of a supermarket, you don’t
> eyeball a heap of purchases and say to the clerk, ‘Well, it looks to me as
> if it’s about $17.00 worth; what do you think?’ No, you add it up” (p. 372).
> Adding, keeping track, and writing down the rules of probabilistic inference
> explicitly are of great help in overcoming the systematic errors introduced
> by representative thinking, availability, anchor-and-adjust, and other
> biases. If we do so, we might even be able to learn a little bit from
> experience.

------
maxaf
Take home tests FTW. The thinking goes as follows:

1\. If the candidate can't be bothered to complete a 2-4 hour (depending on
claimed seniority) code test in the language of their choice, we can't be
bothered to talk to them.

2\. If the candidate does reasonably well by completing the code test somewhat
on time (with a fat margin allowed for them, well, having a life) and within
parameters of the task, they're invited for a mostly non-technical onsite
meet-and-greet.

3\. During the meet-and-greet we make sure that the candidate isn't an axe
murderer, is able to hold a quasi-technical conversation, and that both sides
aren't immediately scared of each other.

4\. The meet-and-greet can also include some low-key architecture discussion.
Any nerds worth their salt will be able to conduct this line of questioning
without making it obvious that an interview is taking place. Hopefully this
isn't a critical step, as a good take-home code test will require the
candidate to spend a little time designing or architecting their solution.

After the above has taken place, it should be pretty clear whether the
candidate in question is a fit or not. Note that this process is by design
missing the useless traditional CS questioning component, contrived
problem-solving exercises, and the whiteboard code beatdown.

~~~
chrisper
>1\. If the candidate can't be bothered to complete a 2-4 hour (depending on
claimed seniority) code test in the language of their choice, we can't be
bothered to talk to them.

A good reason not to work at your company. Why would I want to invest 4(!)
unpaid hours into something where I am not even considered seriously yet?

I recently had a coding challenge which was not only vague, but also took up
two hours of my time. The end result was... nothing. Not even a "thank you,
but we chose someone else." Ever since then, I've chosen not to bother with
long coding challenges anymore.

~~~
maxaf
Would you have preferred to spend a whole day on an inconclusive onsite
interview? Or perhaps a phone screen during which you're asked to implement a
hashtable for the umpteenth time?

As I said in another comment somewhere in this thread, one day people will
learn to do this right. Hopefully this will happen before programmers as a
profession have decided to never take code tests again.

~~~
turar
Yes. The onsite means the company is investing just as much time and effort
into the process as the candidate.

~~~
brianpan
It's not just the investment. In a 4 hour take-home I learn close to nothing
about the company I am applying for. In a 4 hour on-site I see the workspace,
talk to employees, etc.

------
marknutter
Technical interviews are a form of hazing. Engineers often suffer from
imposter syndrome, especially during an interview. Those who have already been
hazed and accepted to the club will turn around and put potential candidates
through the same humiliating process. And what's worse is that demonstrating
you have superior capabilities in one area or another can be seen as a threat
to the interviewer and they may give you a thumbs down based purely on their
own insecurities. What ends up happening, just like in college fraternities,
is that everyone ends up being similar, both culturally and in terms of
abilities.

If I had it my way I would do away with the interview process altogether and
do something more akin to an internship. Potential employees could start their
engagement with a company by working (for pay, mind you) on a very limited
basis to solve actual problems that need solving (i.e. "write an algorithm
that's 10% more efficient", "create a tooltip that's aware of the viewport in
React", etc). Based on their output, their engagement could be ramped up until
they are brought on as a full-time employee. That way it ends up being
completely merit-based. You can either solve these problems or you can't. And
whether or not they ultimately become an employee doesn't matter, because both
parties are compensated along the way.

This would obviously put the burden on the company to boil its problems down
into smaller, isolated efforts but that's something all companies should be
trying to do anyways. In the end, they just want some code written that will
end up solving some problem for their customer.

~~~
kelvin0
Hazing? Really?

Hazing is the practice of rituals and other activities involving _harassment_
, _abuse_ or _humiliation_ used as a way of initiating a person into a
group...

[https://en.wikipedia.org/wiki/Hazing](https://en.wikipedia.org/wiki/Hazing)

~~~
marknutter
Yeah, that sounds about right.

~~~
kelvin0
Maybe you should tell us which companies haze the potential candidates so we
can steer clear of them? At least some hints?

~~~
kafkaesq
Overly complex algorithm questions (to which the interviewer expects an
immediate, near-optimal solution -- even though they were open in the
literature for years before those solutions were found); reasonable enough
questions that just aren't articulated properly (this happens amazingly
often); gratuitous brain teasers, or "Carnac the Magnificent"[1] questions of
any sort; the sheer duration of some of these one-way, "prove to me you aren't
a liar or an idiot, while I play with my cellphone" sessions (6-8 hours over
multiple visits); and then, quite often, dealing with the candidate in a
desultory fashion afterwards -- all qualify as borderline hazing, in my book.

[1] [https://raganwald.com/2015/05/08/carnac-the-magnificent.html](https://raganwald.com/2015/05/08/carnac-the-magnificent.html)

------
rjzzleep
Constantly confused by this: so much arguing back and forth, and yet the
single easiest way to deal with it is to take a stripped-down real-world
problem and give it to a whole bunch of different candidates. Some people
adopt this, some don't, and some argue that it's meaningless and go back to
the standard silly interview patterns of algorithm questions and meaningless
complex FizzBuzz alternatives.

tptacek summarized this in his hiring post:

[http://sockpuppet.org/blog/2015/03/06/the-hiring-post/](http://sockpuppet.org/blog/2015/03/06/the-hiring-post/)

My personal conclusion is that most companies don't want this for two reasons:

1\. culture fit is more important for people in a rigid hierarchical
structure, partly because an out of the box thinker could be dangerous for
that structure. too much questioning authority, too much pointing out flaws.
It's much easier to have a good worker bee than to wonder why you need 40
employees to build an automated gif platform.

2\. in most companies everyone is very reluctant to make decisions. for
example management struggles with clear direction because it opens them up to
the question of liability. if they make a decision and it's wrong they might
get fired. HR works the same way: if HR passes a resume along, they want it to
hit a list of keywords, so they can cover their asses if the candidate turns
out to be a bad hire.

Basically everyone is so scared to make a mistake that they make a lot more
mistakes trying to avoid them.

In the opening of Cracking the Coding Interview, the author talks about how
they don't really care about false positives and negatives; they just want
those to stay below a certain threshold. But consider the hiring scale of
Google compared to a small company, and suddenly those things matter.

One bad hire can be toxic. And basing your hiring strategy on something a huge
behemoth with infinite money does is kind of silly, imho.

------
richardwhiuk
Isn't this exactly what you'd expect? E.g. if I get a random cohort of
developers whom all of their peers would say are amazing, and I subject them
to a series of tests, they will do differently well at the different tests.

For example, if I set a test where you have to write some Java, and half the
candidates haven't written any Java, surely they'd do worse on the test than
the other half?

Or is there a belief in the industry that there's some scale on which we can
absolutely rank all developers - front end, back end, full stack, mobile,
desktop, embedded? That sounds like a surprising belief which would require
extraordinary evidence.

~~~
dilemma
People do believe such things, yes, despite how absurd it is as you've shown.

------
mentatseb
The conclusion is misleading due to 2 wrong assumptions:

1\. The population is heterogeneous: interviews test different skills. Not all
interviews test the same set of skills, which would be necessary for comparing
interview scores, because scores are aggregates of these skill tests.
Different job opportunities mean different skills to test, so it seems
reasonable to assume that people's evaluations vary for different job
opportunities, and thus their scores vary for different interviews.

2\. The observations are not statistically independent: past interviews may
influence future interviews. People may get better at passing interviews or
conducting interviews over time. This would impact their score. It would be
good to study the evolution of individual scores over time.

While (1) should strongly limit the conclusions of the study, the complete
analysis may simply be irrelevant because of (2) if the statistical
independence of observations is not demonstrated. Sorry guys, but this is
Statistics 101.

~~~
leeny
(1) We listened to most interviews on the platform to establish homogeneity.
Interviews were across the board, language agnostic, and primarily algorithmic
in nature.

(2) We actually looked into this and noticed that time didn't really affect
performance. Usually, people did their interviews over a pretty short time
span and then found a job. Or, people were already experienced interviewers
and had kind of hit a plateau. You can see the raw data and how it oscillates
wrt time in the footnotes.

------
Xyik
Not too surprising when you consider there isn't really a standardized
guideline, and every interviewer asks questions of varying difficulty.
Sometimes interviewers don't even ask candidates the same question, and
instead tailor them based on the candidate's resume and experiences. I've had
interviews as simple as writing a function that outputs whether two strings
are anagrams, others that tested dynamic programming knowledge, and others
that tested my knowledge of concurrency. At the end of the day, it's luck of
the draw which interviewer you get and what questions he decides to ask you.
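
For contrast with the harder questions, the anagram check mentioned above
really is a few-line problem. One common sketch (normalizing case and spaces
is my own assumption; some interviewers want a strict character-for-character
check):

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    # Two strings are anagrams when they use the same characters the
    # same number of times; counting is O(n) vs. O(n log n) for sorting.
    def normalize(s: str) -> Counter:
        return Counter(s.lower().replace(" ", ""))
    return normalize(a) == normalize(b)
```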

------
Gratsby
Technical interviews are silly exercises IMO. You have a multi-month ramp up
period and you have all kinds of environment specific things to deal with at
just about every employer. Nobody in real life gets asked to program on stage
or forced to answer esoteric questions as if you're in some sort of math
competition.

If you don't have confidence in someone's ability based on their experience
and their interview but you did like them, give them a task to accomplish
offline. See if their results are anything like your results would be, and
bring them in to see how they respond to feedback both negative and positive.

I've seen too many interviews that go along the lines of "How would you rate
your Java on a scale of 1-5?" "5" "So how would you fix the problem if your
cache hit rate on SomeObscureCommercialProduct went from 94% to 82%?" "Forget
that guy. Huge ego. Doesn't know anything."

I did run into one company that had an interesting process for technical
validation. They actually hire people for two weeks as contractors and have
them work with the team. Then they hold a vote and decide whether to extend an
offer.

------
dms105
The real weaknesses of technical interviews are:

1) They usually just measure the amount of effort a person has put into
studying interview questions. Whether or not the ability to do this translates
to being a better engineer is debatable.

2) An interviewer almost always exercises some form of personal bias, whether
it be educational, personal, etc. This doesn't always show up in written
feedback, but the interviewers with stronger personalities usually dominate
interview debriefs, and often influence others into hire/no-hire decisions.
This is especially prevalent in smaller startups where the process is more
informal, things move quickly, and decisions are based more on gut feelings.

~~~
iolothebard
I had an 800+ page study guide while living in SF for the ridiculous
technical interviews they'd put you through. Then you'd see their code base
and realize
they've never actually done anything that would remotely resemble best
practices.

The best part though was realizing if you didn't answer that one question
exactly how the "brilliant" person interviewing you wanted it answered, you
were done. So at that point I'd quiz them on all their shortcomings that were
evident based on what they'd told me.

In my experience hiring it's not that difficult. Good attitude, good aptitude,
genuine interest in the field (software engineering), ideally interest in the
applied product (if not a software product), decent communicator and good
hygiene. If they've done well in those areas, I've never had to let someone
go.

~~~
stale2002
Ever thought about releasing this study guide that you've made? I would love
to take a look at it/ send it to some friends!

~~~
iolothebard
Lol, no. It's mostly just an amalgamation of all the technologies and stupid
fucking interview questions that you get.

I have 10-12 different documents. CS/OO (basics for reminders with some simple
algorithms), .Net, SQL (I always forget this shit, it's why I have data layer
and use ORMs + LINQ), LINQ, jQuery, Vanilla JS, HTML5+CSS,
Architecture/Patterns, ASP.Net vs MVC, Tuning, etc.

Haven't updated them in a few years because I left the bubble. My interviews
in the midwest are typically an hour or two at most with very little quizzing.
At worst they ask you to make them a small simple app (which I find irritating
but better than SF interviews).

------
grillvogel
technical interviews are a joke. the majority of the time they exist so the
interviewer can try to feel smart and subject the interviewee to whatever
whimsical problem they found on the internet. how often do you do group coding
on a whiteboard in your actual job? at one interview I was criticized for
sitting and thinking about a problem for a minute without just blindly jumping
into attempting to solve it. also tons of people are great at solving toy
interview problems but can't debug their way out of a paper bag.

~~~
marknutter
I once was called into an emergency meeting by the CEO of the company I was
working for at the time. When I entered the room, all of the top brass were
seated around the table, some visibly agitated. The CEO proceeded to hand me a
single black whiteboard marker as he stated "the fate of the company depends
on you". The problem was outlined by a fellow engineer and it was explained
that I had 10 minutes to solve it, or we might risk going out of business.

With that, I took a deep breath and went to work furiously scribbling away on
the board while everyone watched with anticipation. As I closed the final
bracket, and stepped back to examine my work, audible gasps could be heard
from around the room. The grizzled old CTO whom everyone was a little scared
of broke the silence at last: "God dammit, Mark!" I caught a few nervous
glances from some of the less technical folks. You could hear a pin drop when
he continued:

"you just saved the god damn company!"

A slow clap started, and soon everyone joined in with a round of applause and
cheering. Over the outburst of joy and adoration the CEO shouted to one of the
developers who had gathered outside of the conference room, "get this code
into production!" Soon the whiteboard was being whisked away by a group of
smiling engineers as they navigated with great purpose through a sea of high
fives and back slaps.

And then, with a twinkle in his eye, the CEO shook my hand, and pulled a crisp
$100 bill out of his wallet. As he handed it to me, he said "I knew from the
moment I heard about your interview that you would be a great asset to this
company, Mark. Go home and relax for the rest of the day and take your wife
out for dinner tonight. You've earned it."

So you see, being able to solve complicated problems on a whiteboard in a
crunch time _is_ a valuable skill.

~~~
turkishrevenge
I believe you, but this really reads like something on /r/thathappened.

~~~
codeisawesome
I had the same reaction as you - until I realised it was sarcasm.

Bravo, parent, bravo.

------
minimaxir
The headline is "interview performance is kind of arbitrary," but the data
solution proposed in the article is "interviewers rate interviewees in a few
different dimensions," which is not any _less_ arbitrary.

I appreciate there is an appendix addressing this issue, but it does not
absolve the issues with the analysis, especially since the appendix uses a
"Versus Rating" to justify the statistical accuracy of the system, which is
also calculated somewhat arbitrarily. (Since the Versus Rating is _derived
from_ the calculated interview score, wouldn't it be _expected_ that the two
have a relationship?)

The fact that the results of the non-arbitrary score are centered around 3
out of a max of 4 (instead of the midpoint of 2) implies a potential flaw or
bias in the scale criteria. (The post notes that people who get a 3 typically
move forward; selection bias may also be in play, since companies would not
interview unskilled people in the first place.)

That's not to say that the statistical techniques in the analysis themselves
are unimpressive though. I particularly like the use of FontAwesome icons with
Plot.ly.

~~~
rewqfdsa
> the data solution proposed in the article is "interviewers rate interviewees
> in a few different dimensions," which is not any less arbitrary.

This article's headline reveals the spin the HN community is trying to put on
it. Many people in SF like to attack the very idea of meritocracy, because if
you admit that meritocracy is a good idea, you're implicitly endorsing the
idea that people have different levels of merit, and thus that between-group
differences in representation might be due to something other than
discrimination.

~~~
minimaxir
The real issue I have is that only a few of the other comments in this HN
thread are engaging with the data analysis presented in the article; the rest
are just taking the headline as gospel.

------
kbd
I had a really bizarre interview recently where, after the initial recruiter
phone screen, I was rejected based on an in-person half-hour very simplistic
paired coding exercise, only met with one person, and wasn't asked about my
(imo very strong) resume once. I must have said something foolish at some
point, which is on me, but the point is: interviews can be hit or miss.
Fortunately you only need one hit.

~~~
kybernetikos
It does sound like a bad interview experience, but I will say that I have very
little interest in what a candidate writes on their resume. There are
candidates with amazing resumes that can't do much and people without much
that they can talk about publicly that are amazing. If the interview is low
signal, the resume is even worse.

~~~
kbd
> There are candidates with amazing resumes that can't do much...

That's certainly true. I understand not taking resumes at face value; I've
interviewed developers (and DBAs) and I'd always pore over their resumes to
look for both padding and meat to ask about. It can be insightful to ask about
a big accomplishment someone claims as their own, only to find out that they
actually affected only a small part of it. However, in this case I'm honestly
not sure whether my interviewer even read my resume.

------
mirceal
3 things I search for:

Fundamentals: CS basics. I don't nitpick on details. It's more about whether
you've heard of a concept and whether you could figure out how and when to use
it.

Structure: I want to see a structured approach to problem solving. It doesn't
matter if your code is perfect, and it doesn't matter which programming
language you want to use.

Curiosity: You need to be curious about things. Asking the "why".

------
lostcolony
It may not be arbitrary, but if there is a signal, it's buried deep in the data.

I've turned down candidates who had impressive technical resumes, who had
worked in startups that sold, who had been hired on as consultants at various
places, etc, because they were unable to solve simple algorithms in a simple
manner, and their code was atrocious. Does this mean they're "bad developers"?
No. If we were a consulting firm or a startup they might well be worth it;
where the important thing is getting code out the door quickly, and to have
something that works, even if it's not easily maintainable. But I was hiring
for a position that required someone who would keep solutions simple and
maintainable ('craftsmanship' rather than productivity, if you will. Note that
the former does not necessarily preclude the latter, but it's the trait that
was necessary, and was lacking).

Google optimizes for people with strong algorithmic knowledge. It's debatable
whether they need everyone to have that, but certainly, many shops don't.
Again, I've hired people with no formal CS background, because most of my
job's problems don't require you to have deep algorithmic knowledge (the ones
that do we can have others address, or work together on).

We know that people can fail one technical interview, while being radiant in
another, and the reality is that what we're looking for, and what others are
looking for, are often different. That creates a lot of variance in the data
when we compare them.

------
kearneyandy
Great post and awesome interactive graph! props

I'm curious about the interviewer community. Specifically, things like how
interviewers are vetted and how often they come back to conduct interviews. It
would be cool if there were a community of interviewers working toward the
betterment of the process, but I could see many interviewers conducting only 1
or 2 interviews before dropping out. I see in the appendix that there are
those who do more, but no indication of what percent leave quickly.

A better drinking game might be when a candidate offers a data structure they
know nothing about. Would a red-black tree work here? No.. I guess not.

------
siliconc0w
I dunno - if you look at the data, there are fairly clear clusters of people
who are 'probably good' and 'probably not so good'.

'Programming' is necessary but not sufficient for product engineering and that
is what most of these interviews are trying to tease out. Good companies will
balance out 'programming' with other rounds like 'technical design' or 'pair
programming' or even non-technical rounds with business analysts or product to
gauge general ability.

------
pklausler
I have learned that an (in)ability to program "in the small" correlates very
well with an (in)ability to program in the large, and now ask mostly simple
questions whose answers are things like one-line Boolean predicates that test
for well-defined conditions. It is paradoxically easier for an inept candidate
to fake his way through an algorithm design question than it is to fake the
coding of a simple test like "determine whether two closed intervals [a,b] and
[x,y] overlap each other".
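The interval test described above really is a one-line predicate; a minimal sketch in Python (the function name is mine):

```python
def intervals_overlap(a, b, x, y):
    """True if the closed intervals [a, b] and [x, y] share at least one point.

    Two closed intervals overlap exactly when each one starts
    at or before the point where the other one ends.
    """
    return a <= y and x <= b
```

Candidates often reach for case analysis here; thinking about the complement (the intervals _don't_ overlap only when one ends strictly before the other begins) collapses it to a single predicate.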

~~~
Ocerge
I've moved to doing something extremely simple: just give me pseudo-code for
indexOf given a string and a character. If you can't write the 4-5 liner for
linearly searching a string for a character, then there's not much point in
moving further. It is shocking how many people get tripped up by it.
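The linear scan described above is only a few lines; a sketch in Python, assuming the usual return-negative-one convention for "not found":

```python
def index_of(s, ch):
    """Return the index of the first occurrence of ch in s, or -1 if absent."""
    for i, c in enumerate(s):
        if c == ch:
            return i
    return -1
```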

~~~
jcadam
Is it a sign of insecurity that I just popped open vi to make sure I could
solve this? :)

Where I work, we've been trying to fill a vacancy for a senior dev for months.
Our one whiteboard question is similarly easy: find the largest integer in an
array (the interviews have become easier since I was hired on). And yet...
I've been watching an endless parade of people with decades of experience who
apparently don't know how to write a for loop :(
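For reference, the largest-integer question is just one loop; a sketch in Python, avoiding the built-in max so the for loop stays explicit:

```python
def largest(nums):
    """Return the largest value in a non-empty list, using a plain for loop."""
    best = nums[0]
    for n in nums[1:]:
        if n > best:
            best = n
    return best
```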

~~~
pklausler
And yet they can somehow get through a telephone screening interview.

------
JasonCEC
As a potential candidate, all of the standard complaints ring true - but once
you're on the other side of the equation and need to hire people, your range
of options for interviewing is not nearly as wide or as clear as it would seem
from the outside.

1) Take home test: OK for performance metrics, bad for "getting to know" the
candidate, and terrible for selling the candidate on your company

2) Daylong interview: Expensive, requires interrupting our team, needs a fully
planned and well executed itinerary - but is perfect for getting to know
someone, getting the feel for their personality and interests, and is the best
way to sell someone on the opportunity.

3) Work sample: we usually do this for interns[1] and pair it with a ~1 hour
conversation (either before or after, doesn't really matter to us) on what the
company is like and what they would be working on. Obviously, work samples
suffer from the same deficiencies as a take home test for cultural fit and the
like, but it's the best we can do for interns!

[1] [https://gastrograph.com/blogs/gastronexus/interviewing-
data-...](https://gastrograph.com/blogs/gastronexus/interviewing-data-science-
interns.html)

~~~
rm_-rf_slash
I've said elsewhere on this thread, but my absolute favorite technical
interview method is to set up a contract to pay the candidate to create a
feature or fix a bug on your codebase, and have it merge and work as expected.

It lets the candidate feel (literally) valued by your company, and you get to
learn _how they do the job you are hiring them to do._

~~~
hire_charts
Agreed that this is a great approach, but the major downside is that you are
limiting yourself to candidates who either:

a) have the time to do real work outside of their day job and aren't on a
restrictive contract preventing them from moonlighting, or...

b) are currently unemployed

~~~
rm_-rf_slash
I was a college student at the time, long on time and short on cash. I was
also far less experienced than I am now, so there were a lot of new things for
me then, like those fascinating "callback" thingies I had to figure out.

Nowadays, if I were given roughly the same assignment, it would take me about
3-5 hours at most, which, at my graduated-and-employed hourly rate, is
actually quite in line with what I was paid. So it paid as much as I would
have charged for a Saturday hack.

------
dudul
In the HN echo chamber, there isn't a day without some blog post/article
describing how our interview process is BS, interview is broken, etc.

I don't necessarily dispute this state of affairs, but does anyone know how it
compares to other fields/professions? How about interviewing a lawyer? Or a
doctor? Or an account manager? Or a product marketer? Are developers the only
ones with a "broken" interview process?

~~~
grillvogel
most other job interview processes don't involve solving a series of riddles

~~~
rewqfdsa
In most other jobs, candidates don't vary in skill by a factor of 10x or more

~~~
pklausler
In most other jobs, it's harder to earn a PhD in the field while not being
able to perform basic skills competently. But not computer science. It's
damned depressing how few new grads can write code to, say, sort a linked
list. So the factor is way larger than 10x. We're not differentiating the
great programmers from the merely competent; we're struggling to distinguish
the capable from the inept.
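For the curious, "sort a linked list" doesn't require anything fancy at whiteboard scale; an insertion-sort sketch in Python (the class and helper names are mine):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def from_list(vals):
    """Build a singly linked list from a Python list; returns the head (or None)."""
    head = None
    for v in reversed(vals):
        head = Node(v, head)
    return head

def to_list(head):
    """Collect a linked list's values back into a Python list."""
    out = []
    while head is not None:
        out.append(head.val)
        head = head.next
    return out

def sort_list(head):
    """Insertion-sort a singly linked list in O(n^2); returns the new head."""
    result = None  # head of the sorted portion built so far
    while head is not None:
        nxt = head.next
        if result is None or head.val <= result.val:
            # New smallest element: becomes the head of the sorted list.
            head.next = result
            result = head
        else:
            # Walk the sorted list to find the insertion point.
            cur = result
            while cur.next is not None and cur.next.val < head.val:
                cur = cur.next
            head.next = cur.next
            cur.next = head
        head = nxt
    return result
```

Merge sort would bring this to O(n log n), but in an interview the pointer handling above is usually the point of the question.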

~~~
dragonwriter
> In most other jobs, it's harder to earn a PhD in the field while not being
> able to perform basic skills competently. But not computer science.

I think that you are confusing "computer science" with "software development".
It may be easy to get a Ph.D. in computer science and be poorly suited for
work in the software development industry, but that's no more surprising than
the fact that people can get Ph.D.'s in economics and be poorly suited for
work in the finance industry.

Computer science is, obviously, related to software development, and in many
cases it's possible to take a CS degree that _is_ focused on software
development, but CS _in general_ is not software development, and CS degrees
_in general_ are not vocational degrees in software development.

The best software developers may need to have extensive knowledge of CS, but
merely having extensive knowledge of CS doesn't make you even a competent
software developer.

~~~
pklausler
All that I'm saying is that it's possible to get a degree in CS from some
schools without being able to demonstrate a mastery of concepts like pointers
or recursion.

------
LanguageGamer
The problem is, whether or not interview performance is consistent, we still
don't know how or when it's correlated with on-the-job performance, and that's
the sort of thing you would need to actually help people make better hiring
decisions.

Does interviewing.io have any plans to collect employee performance metrics
from companies that hire via their platform? Is that something companies would
be willing to cooperate with?

~~~
RogerL
We know big companies go through tons of people. Isn't stack ranking supposed
to eliminate 10-20% a year?

All this noise about avoiding bad hires ignores the elephant in the room: the
companies making these claims bounce as many or more people as the companies
that don't use these foolish interviewing practices.

------
jorgecurio
Technical interviews are part of why I've moved away from engineering
positions. I'm looking at product management jobs, thinking that I could use
what I learned over the past 6 years working on a SaaS alone. However, I found
the exact same shit: even more technical interview questions that require
whiteboard code writing.

There might be some merit to why they are doing this but it's impossible for
me to engage companies that discount real world product experience in favor of
rote memorization.

So far it's a pretty tough nut to crack; a lot of product manager interviewers
don't seem to know what they are doing, instead relying on the law of large
numbers and on how great their fucking product is, blah blah blah (it isn't).

It's a bit worrying, since some companies seem to be hiring product managers
for some subjective end goal of an improved product and improved sales... they
want one person to give the credit to, and the same person to take all the
blame. Another huge red flag is when managers outright tell you they have no
idea what to do, so they just get someone else to outsource their thinking to.

------
innertracks
Just finished a 2nd phone interview with a company yesterday (it was somewhat
technical) and an online timed tech test this afternoon. The first interview
was with an internal recruiter and was non-technical.

I've been impressed. They've been very straightforward about the tech
evaluation, with no trick questions, and respectful of my time. Their
interview process is selling me on the company.

------
aprasad91
This is why we at Lytmus believe the most effective way to assess a candidate
is to see how they perform on a real work sample. We've built a
virtual-machine-based platform that allows candidates to showcase their skills
in a real development environment with working code bases (web, mobile, data,
systems, etc.). Most interviewing methods, like algorithmic challenges, often
only provide signal on a discrete skill that can be acquired through practice.
What matters is whether or not you can actually work on real-world projects:
understand an existing code base and perform on the job, as opposed to on an
interview coding challenge. Google's SVP of People Ops Laszlo Bock also writes
about the ineffectiveness of indirect tests and their weak correlation with
on-the-job performance.

------
alive2007
Minor question: is it just me, or is the "Results of Interview Simulations by
Mean Score" chart a bit difficult to parse? I understand that observing the
behavior of any single cohort involves looking at the endpoints of the
cohort's curve at the line x = n, where n is the number of simulations you
wish to observe (the right point of the curve at x = n is the P(fail) of the
worst performer in the cohort at n simulations; the left point is the P(fail)
of the best performer), which is why the gap between endpoints within a single
cohort decreases as n increases. But it seems counterintuitive to graph it
this way -- shouldn't the information be plotted with P(fail) as a function of
# of simulations, rather than the other way around, seeing as the latter is
the independent variable?

------
uiri
The data seems to show the opposite to me - despite scores being all over the
place, the mean is very reliable. When a 2 or lower is considered a fail,
those who consistently rate ~2.5 fail about half of their interviews while
those who consistently rate ~3.0 fail only 10%. Of course, the probability
that a candidate has failed at least one interview approaches 1 as they are
subjected to more and more interviews. That the test has both false negatives and false
positives does not invalidate the test. In fact, that the test is accurate
despite the false positives and the false negatives ought to do the opposite.
If a single bad interview invalidates a candidate for company A, that doesn't
mean that the candidate won't go on to pass all of their interviews with
company B.
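The "approaches 1" observation above is just the complement rule; a quick illustration, under the simplifying assumption that each interview is an independent pass/fail with a fixed failure probability:

```python
def p_at_least_one_fail(p_fail, n):
    """Probability of failing at least one of n independent interviews,
    each with failure probability p_fail."""
    return 1 - (1 - p_fail) ** n

# Even a strong candidate who fails only 10% of the time will,
# over 20 interviews, very likely fail at least once.
for n in (1, 5, 10, 20):
    print(n, round(p_at_least_one_fail(0.10, n), 3))
```

Which is the point: a single bad interview at company A says little about how the candidate's run of interviews at company B will go.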

------
agentgt
I haven't interviewed in some time but one thing I absolutely hate about
technical interviews is "white board" coding.

For some reason whiteboards intimidate me. I have terrible penmanship and a
complete inability to plan how much space I'll need for writing things. Then
there is the fact that the markers seem to have a high failure rate whenever I
use them.

Perhaps I'm the only one who feels that way. I have even begged some
interviewers to just watch me use a laptop instead, but the offer is typically
refused. Maybe things have changed now?

------
fecak
Great post as always Aline. I'd be most curious as to how well anonymity was
kept. Did interviewees identify their employers, schools, or any other
information that might create bias while in the interview itself?

I've been recruiting for a long time, and I'm rarely shocked about the result
of an interview - maybe a few times a year. There are tons of possible
explanations for that, and lots of possible explanations for your results as
well.

Keep up the great work.

~~~
leeny
Most of our interviews didn't have people unmasking until after the feedback
step.

------
dilemma
The problem with hiring is that hiring decisions are centralized, causing huge
workloads for the decision maker. To reduce the load, arbitrary processes and
voodoo tests are used, always with the same poor results.

Instead, the team hiring should themselves interview candidates and make
decisions on who to hire, because it requires personal knowledge that you
can't get from tests.

------
robodale
Good, because I suck at technical interviews, but built my own SaaS offering
(working the corporate job until my customer base is large enough).

------
macscam
Super true, and probably easy to bypass if you're a data viz wiz. It's more
burdensome for web devs who don't have the math/algorithms chops, but I know I
have to learn it.

