
A Humility Training Exercise for Technical Interviewers - ammon
https://triplebyte.com/blog/a-humility-training-exercise-for-technical-interviewers
======
steelframe
Over the past several years I've become deeply involved in interviewing at my
company (hint: it's big and really good at things like search and ads).
I've done hundreds of technical interviews here, and I also teach a class to
train employees how to interview. I'm also involved in evaluating candidates
who have gone through the interview process.

In my position, I have occasion to read a ton of assessments written by
interviewers. Some of the most striking are the ones where the interviewer is
cocksure that they completely nailed the candidate's utter incompetence.
It's usually an interviewer who's been asking the same
question dozens of times over a year or more. They've seen every variation of
performance on the question, and they've completely forgotten what it was like
for _them_ when they first encountered a question like that.

It's a total lack of empathy at that point, and if the candidate doesn't exude
near-perfect interviewing brilliance on that specific question, the
interviewer judges them as essentially worthless. Interviewers like that
sometimes even get snarky and rather unprofessional in their writeup:
"Finally, time ran out, mercifully ending both my and the candidate's misery."

If I were to diagnose one of the causes of this phenomenon, I'd say it is
bias. The interviewer best remembers the candidates who performed
exceptionally well on their question, triggering the availability heuristic.

There are tactics that I think can be effective at busting those biases. One
might be to put an upper limit on the number of times an interviewer is
allowed to ask any given question. Once they've asked maybe 20 or 30
candidates the same question, it's spent. They have to move on to something
substantially different.
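
As a back-of-the-envelope sketch (nothing my company actually runs; the cap
and names here are made up), retiring questions could be as simple as a
counter:

    from collections import defaultdict

    MAX_ASKS = 25  # somewhere in the suggested 20-30 range

    ask_counts = defaultdict(int)  # question id -> times asked so far

    def pick_question(question_bank):
        """Return the least-worn question that hasn't hit the cap."""
        fresh = [q for q in question_bank if ask_counts[q] < MAX_ASKS]
        if not fresh:
            raise LookupError("every question is spent; write new ones")
        choice = min(fresh, key=lambda q: ask_counts[q])
        ask_counts[choice] += 1
        return choice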

There are some other experiments I'd like to run. One of them is to have
interviewers go through a one-hour interview themselves for every 50 or so
interviews they give. Maybe start with interviewers who have a track record of
being especially harsh on candidates who don't give a flawless performance on
the question they've been asking for a while. The idea is to see if we can't
bubble up some empathy.

~~~
magnetic
I wonder if one way to counter this bias would be to make interviews an
exercise in teamwork rather than the lopsided relationship they are now.

What I would suggest is that when the interviewer and candidate reach the
"quiz or coding exercise" part, have them pick a question from a website that
provides a question at random, and let both work together towards a solution.

This matches more closely with what they will end up doing anyway if the
candidate is hired, and will remove the "I've seen 20 different ways to solve
this" bias while also generating empathy for the candidate when the
interviewer him/herself also struggles with a fresh problem.

Of course, the interviewer can try to lead the candidate and can hold back
from telling the solution right away if s/he can clearly see it, but if not
this could set the stage for a fairly realistic way to sample all kinds of
qualities, from technical, to communication, to empathy, to teamwork, etc.

I think it would also be less stressful from the candidate's point of view: a
candidate often feels like an interview seems unfair because s/he is being
asked about something that the interviewer has had a chance to review and
prepare much in advance.

When interviewers ask trick/complicated questions, I sometimes wonder what
would happen if they were to ask the same question to the rest of their own
team. Are they expected to answer it well? Would they know? Think about the
places you've worked and the questions you've given at interviews: do you
think your own colleagues would have aced them?

Obviously, this doesn't completely remove the additional stress on the
candidate, since the interviewer's job isn't on the line, but I think it would
provide a more balanced assessment of the candidate's abilities and
personality traits.

In practice, I'm not sure interviewers would be very receptive to such a method,
as it could turn an interview into a stressful event for them.

~~~
jack_h
I've had a very similar idea to this as well.

It all stemmed from one particular interview I had years ago. The interviewer
presented me with a very simple problem, and I solved it. He then stepped up
the difficulty a bit. This kept happening, and as it got harder he started
acting more like a co-worker, with the two of us bouncing ideas off each other.

Granted he knew the solution, but the mere fact that he presented himself not
as an interviewer judging my performance but as a co-worker helping to solve a
shared problem made that one of my favorite interviews.

~~~
akvadrako
That does sound pretty good - what company was it at?

~~~
jack_h
It was for Amazon. I will say I had 4 or 5 other interviews at Amazon that day,
and most of them weren't nearly as good; one was actually atrociously bad.

They may have changed their interview process by now, though; I only
interviewed with them that one time, and it was quite a few years ago.

------
cgearhart
This is a great suggestion that closely aligns with my own thoughts and
experiences with technical interviews.

I've had some recent experience with technical interviews, and my first big
takeaway was that the current interview process is broken largely because it's
too common for the interview to never surface the _strengths_ of a candidate,
and instead only to highlight their weaknesses. For every candidate, there is
literally an infinite number of things they do not know. Surfacing those
deficiencies has no purpose or value unless those weaknesses are _directly_
relevant to the job role, which is hardly ever the case when talking about
DS&A questions.

The second lesson I learned is that interviewers need more training, because
there is a _vast_ difference between good and bad interviewers, and almost
all of it comes down to communication skills. If we don't finish a warm-up
problem because it takes me 30 minutes to decode and understand the question
that the interviewer is trying to ask... that's a problem.

~~~
hinkley
We hired eight contractors all at once. It was a two-day marathon of interviews.

One candidate clearly thought she'd flunked the interview within ten seconds
of when I decided to recommend her. She got stuck on the problem (totally
wrong answer from the code due to a typo) and started to crumple but
immediately went into the debugger to try to figure it out. She ran through a series
of perfectly reasonable diagnostics trying to zero in on the problem. I didn't
even care if she found it at that point because I could see that she would get
it eventually, and probably every one after that. You don't get to see the
engineering discipline the same way if you use the whiteboard.

People who can solve their own problems can often help other people solve
theirs. I don't want to add someone who is nice but needs my help all day.
That might fluff my ego, but it doesn't make us go faster.

I switched teams shortly thereafter (I was hiring my replacements) so I didn't
get to work with her much, but I know she stayed on through the first contract
renewal (not everybody did), so she must have worked out.

~~~
scottlamb
> You don't get to see the engineering discipline the same way if you use the
> whiteboard.

I don't think I agree. There's absolutely something to be said for the comfort
of a familiar environment, but I think the interviewer should be able to
emulate the compiler/debugger/runtime for the question they're asking. (Many
of the most successful interviewees can do this themselves; they write down
the program state on the whiteboard and step through it in their head.)
Interviewers should be able to say "you get a SIGSEGV" and ask what the
candidate would do. If the candidate says "I'd run gdb", they should be able
to say "it says the crash was at this line", emulate break/print statements,
and such. In some ways, it's slower than the candidate doing things, and more
awkward to go through a human. In others, it's faster, because the interviewer
can/should speed up the process by forgiving small syntax errors, saying "oh,
you're bisecting? it's here", etc.

I do this sometimes when interviewing. I find though that the people who can
successfully use a debugger (or me as a debugger) tend to have relatively
minor errors in their code anyway[1]. It's pretty rare for someone to have a
completely incorrect algorithm and figure that out from debugging.

[1] forgetting a guard on an if for an empty data structure, forgetting to sort
numerically instead of lexicographically, some dumb typo, etc.
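
As a tiny illustration of the second slip in [1] (strings sort as text, not
as numbers):

    nums = ["9", "10", "2"]
    print(sorted(nums))           # ['10', '2', '9']  -- lexicographic, wrong
    print(sorted(nums, key=int))  # ['2', '9', '10']  -- numeric, as intended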

~~~
afarrell
> but I think the interviewer should be able to emulate the
> compiler/debugger/runtime for the question they're asking

How does a human emulate the UI of a debugger? It seems like you would have
much lower information bandwidth and thereby inevitably end up unable to
present all the info at once the way a terminal window can.

~~~
scottlamb
(Sorry, I missed your reply day of, so I'm replying much later.)

Yes, you're right. It's a bit awkward. I certainly wouldn't want to do my
regular work by whiteboard + dictation. And if I were designing my company's
interview approach I might allow candidates to bring in a laptop to work on a
toy problem in their chosen IDE.

But my point is that I don't think debugging skills are completely impossible
to test in this way, and the forced interaction allows you to learn what piece
of information they're looking for and why. If this is how you have to
interview, you might as well find the best aspects of it and use them.

I think a lot of the key to getting useful signal from an interview is to ask
a simpler problem. If you ask someone to write and debug really complex code
in 45 minutes, you'll just find out if they can write code under pressure
really fast. That's a great skill, but I care more about communication: being
able to ask good requirements questions, describe the data structure/algorithm
so that teammates will understand it, teach people how to work through
problems, etc. I think overly complex coding takes away the time I spend
examining those things. Likewise, there are a few criteria besides "coding"
and "communication" which I also want time to focus on.

When I look through the notes from the full interview panel, my questions
often seem to be simpler than others. My feedback appears to correlate more
strongly with actual hiring than average, so I think it's persuasive. Of
course, I don't know if it correlates well with how people would actually
perform if we hired them. I don't even know if the people we did hire are
performing well, because I work at a big company and the people I hire
generally don't end up on my team.

------
o10449366
Unfortunately, some insecure developers sign up to be interviewers precisely
to feel superiority and power over interviewees. I've both worked with and
been interviewed by some of these people; they seem particularly prominent in
the Valley. Anecdotally, Google was the worst of the big tech companies, with
cocky, condescending interviewers who came in wearing Ivy League sweaters. By
contrast, all of my interviews at mid-level tech companies were equally
difficult technically, but much more pleasant and engaging.

~~~
steelframe
Some of the people who I would like most to interview at my company are opting
out of interviewing because of their own self-doubt.

~~~
megy
Why would you interview someone you already worked with? What?

~~~
tomnipotent
All interview material should be run through mock interviews with internal
developers before asking anyone outside the company. This is the first step to
preventing legitimately stupid questions from making their way into your
process.

------
ammon
I mostly focus on consistency in this post (how to make a group of
interviewers more consistent). Of course, what actually matters is accuracy
(predictive utility of the interview). Obviously those are not the same thing
(you could run 100% consistent interviews by just having everyone always grade
“strong no”). However, I think (based on running the interview team at
Triplebyte) that inconsistency is the primary obstacle to accuracy in
practice. So I end up spending the majority of my time focusing on how to make
interviewers more consistent.
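
To make "consistent" concrete: one standard way to quantify it is inter-rater
agreement. Here is a minimal sketch using Cohen's kappa on hypothetical
verdicts (illustrative only, not a claim about the metric we actually use):

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater_a)
        observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        # Chance agreement: both raters pick the same label independently.
        expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical verdicts from two interviewers on the same ten candidates.
    a = ["hire", "no", "hire", "no", "no", "hire", "no", "no", "hire", "no"]
    b = ["hire", "no", "no", "no", "no", "hire", "no", "hire", "hire", "no"]
    print(cohens_kappa(a, b))  # 1.0 = perfect agreement, 0.0 = chance level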

~~~
thaumasiotes
In psychometrics, the field that studies the same thing you're trying to do,
the concepts you're referring to as "consistency" and "accuracy" are known as
"reliability" and "validity".

It's somewhat striking to me that you seem so worried about these concepts but
you don't seem to be aware of the normal terms for them. How much does
TripleByte try to inform itself of the existing research in this field? To
what extent does TripleByte seek to incorporate psychometric results about
what kinds of tests are likely to have high reliability and construct
validity?

And one more, more specific question:

> what actually matters is accuracy (predictive utility of the interview)

What is it that you're trying to predict? You could be trying to find
employees _who will be good employees_ , which would put TripleByte in the
business of credentialing, or to find employees _who will pass interviews at
other companies_ , which would make TripleByte a recruiting agency. In the
past, Harj has been explicit that what TripleByte wants to predict is whether
a candidate will successfully pass the hiring process at another company,
regardless of how well that hiring process performs. Is this still true?

~~~
trishume
I’m guessing they know the terms; ‘consistency’ and ‘accuracy’ are just more
common and easier to understand, without sacrificing meaning. I’ve talked to
some TripleByte engineers and been very impressed by the sophistication of
the statistics and experimental methodology they use.

~~~
thaumasiotes
Using an experimental methodology isn't something you should be impressed by
in itself. Experimenting means you don't know what you should be doing. That's
great if no one knows what you should be doing and you're trying to figure it
out; it's less great if everyone else knows what you should be doing, but you
don't.

The psychometric literature is pretty robust.

~~~
trishume
One specific conversation I had was about how they read the literature on the
best adaptive testing systems and then developed improvements tailored to
their specific data and the advantages they had as a real-time online test.
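
(For flavor, here is a minimal sketch of the adaptive-testing idea using the
standard two-parameter item response model; the numbers are made up and this
is not their actual system:)

    import math

    def p_correct(theta, difficulty, discrimination=1.0):
        """2PL item response model: the chance that a candidate of ability
        theta answers a given item correctly."""
        return 1 / (1 + math.exp(-discrimination * (theta - difficulty)))

    def next_item(theta_estimate, items):
        """Adaptive step: serve the most informative item, i.e. the one whose
        success probability is closest to 0.5 at the current estimate."""
        return min(items, key=lambda d: abs(p_correct(theta_estimate, *d) - 0.5))

    items = [(-1.0, 1.2), (0.0, 0.8), (1.5, 1.0)]  # (difficulty, discrimination)
    print(next_item(0.2, items))  # picks the item nearest the candidate's level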

Another conversation was specifically about the psychometric literature: the
big meta-analyses of the predictiveness of different testing factors on job
performance, how that influenced the experiments they ran early on, and how
they homed in on what they do today. They also covered the downsides they had
discovered in various methods that people commonly suggest they're ignoring.

I came away from those conversations extremely impressed with TripleByte's
employees and competence as an organization. They definitely think about this
stuff.

------
mherdeg
Interesting idea:

> Then, as the interview progresses, do exactly this. About half the time give
> your best answer. The other half of the time give an intentionally poor
> answer. ...

> What this does is free your co-worker to be 100% honest. They don't know
> which parts of the interview were really you trying to perform well.
> Moreover, they are on the hook to notice the bad answers you gave. If you
> gave an intentionally poor answer and they don't “catch” it, they look a
> little bad. So, they will give an honest, detailed account of their
> perceptions.

This reminds me of the second part of the Rosenhan experiment [
[http://psychrights.org/articles/rosenham.htm](http://psychrights.org/articles/rosenham.htm)
]:

> The following experiment was arranged at a research and teaching hospital
> whose staff had heard these findings but doubted that such an error could
> occur in their hospital. The staff was informed that at some time during the
> following three months, one or more pseudopatients would attempt to be
> admitted into the psychiatric hospital. Each staff member was asked to rate
> each patient who presented himself at admissions or on the ward according to
> the likelihood that the patient was a pseudopatient. A 10-point scale was
> used, with a 1 and 2 reflecting high confidence that the patient was a
> pseudopatient.

> Judgments were obtained on 193 patients who were admitted for psychiatric
> treatment. All staff who had had sustained contact with or primary
> responsibility for the patient – attendants, nurses, psychiatrists,
> physicians, and psychologists – were asked to make judgments. Forty-one
> patients were alleged, with high confidence, to be pseudopatients by at
> least one member of the staff. Twenty-three were considered suspect by at
> least one psychiatrist. Nineteen were suspected by one psychiatrist and one
> other staff member. Actually, no genuine pseudopatient (at least from my
> group) presented himself during this period.

There is a version of this exercise you could do where you _say_ you are
intentionally giving bad answers and give none!!!

~~~
retsibsi
> There is a version of this exercise you could do where you say you are
> intentionally giving bad answers and give none!!!

This might seem clever, but it could backfire. If the other person trusts you,
they will take it as axiomatic that some of your answers are bad; therefore,
if all of your answers are actually pretty good, they will desperately look
for nits to pick, and possibly end up making criticisms that they don't really
believe in (or at least wouldn't have believed in when unbiased). This can
take you from one extreme (too polite/respectful/humble to be critical) to
another (finding things to criticise no matter what), skipping the middle
ground that you really want.

------
rhizome
I'd be interested to see what happens if an interviewee turns the tables under
the guise of "interviewing the company," which is a concept promoted from time
to time in interviewing threads.

When your interviewer is telling you about their role in the company and a
little about their history (if they have one), say it's a dev interview
because why not, ask them how they rate their programming skill on a scale
from 1 to 5. Ask them why they left their last company. Ask them what the most
difficult thing they've ever achieved is.

~~~
rhexs
The interviewer usually makes some sort of excuse then gets right back to the
actual reason they're in the room with you: leetcode 101.

~~~
jungler
You could ask the interviewer to whiteboard something for you, too.

~~~
klodolph
I mean, sure, that’s a possibility. But some people are just really good at
whiteboarding problems and explaining solutions. Sometimes these people end up
being technical interviewers. If you’ve been a technical interviewer for a
while, you don’t find the experience stressful. After a while of doing
technical problems they start to seem easier and easier, even when presented
with evidence to the contrary (when candidates find them difficult).

Speaking as someone who is (1) a technical interviewer (2) good at
whiteboarding problems and (3) has near zero fear of public speaking, I think
the big exercise in humility is to figure out ways to get evidence for the
interviewee’s skill set even when it’s dissimilar to my own.

------
malvosenior
This is great. I seriously doubt most interviewers could pass the technical
tests they give out if they were incurring the huge cognitive load of
interviewing for a company while trying to do it.

I'd love to see actual pair programming between interviewer and interviewee,
where they're assigned a random (small) code project and have to work on it
together, with neither having prior knowledge. It would level the playing
field a bit and is much closer to actual working conditions than being forced
to write code under duress and close real-time examination.

~~~
Texasian
Pivotal does this. The interview is usually just a "day on the job".

------
sonnyblarney
Very good points; I agree with them.

However, I don't think this is about 'ego' or even 'humility'; those are not
the right words.

It's a lack of contextual understanding, both in terms of self-awareness and
of the interviewee's plight.

I think the premise can be taught.

Also, I think interviews can be structured to find qualities independent of
background.

+ Questions that don't measure a person's ability to 'memorize algorithms'
are a good start.

+ Allowing devs to pick their language of expression, i.e. sometimes they are
more comfortable in one lang than another.

+ Don't get syntax/code structure confused with the abstract problem, if
that's what one is going for. Google has a nice interview example [1].

+ Open-ended questions with many possible turns allow a 'good' thinker to
just go a lot further and be more impressive, while at the same time allowing
junior devs to still walk through and complete something. The Google example
is again good here.

+ Time/on the spot: one of the worst issues. Personally I'm about 50/50.
Sometimes 'in the flow', sometimes not, but given just a little bit of time,
I'd surely be fine on most things. For this reason, giving interviewees an
intro to the problems, and as much time as they want to think about them
before the interview starts, might be worthwhile as well: 'Let us know when
you want to go over a solution.' This could work well for pedantic things
such as 'here's some code, find some bugs' or 'how would you structure this
differently', etc.

[1]
[https://www.youtube.com/watch?v=XKu_SEDAykw](https://www.youtube.com/watch?v=XKu_SEDAykw)

------
nobodyandproud
I went through one of TripleByte's interview processes. The interviewer was
smug and condescending.

Thankfully it was remote and not during my work hours, so little was lost.

------
klrr
Why not pick a problem the interviewer has not solved, and let the
interviewee and interviewer solve it together? This shows how the candidate
thinks and how they work with others. It will also give the interviewer a
more unbiased view of the difficulty of the algorithm or CS problem.

------
paulie_a
Give pointers if necessary in a technical interview and see what questions
they ask. If they are completely flubbing it and you already know it's a
no-go, call it off. It's a little shocking to hear, but within a few minutes
they will generally be grateful and understanding that you didn't waste two
more hours of their time.

------
garrettr_
“What this does is free your co-worker to be 100% honest. They don't know
which parts of the interview were really you trying to perform well.”

Since there was no mention of it in the post: this is called “randomized
response,” and it is a building block for modern privacy-preserving
protocols, e.g. RAPPOR, which is used in Google Chrome:
[https://security.googleblog.com/2014/10/learning-statistics-with-privacy-aided.html](https://security.googleblog.com/2014/10/learning-statistics-with-privacy-aided.html)
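
A minimal sketch of the classic version of the trick (Warner-style randomized
response for a yes/no question; the numbers are made up):

    import random

    def randomized_response(truth, p_honest=0.5):
        """Answer honestly with probability p_honest; otherwise uniformly at random."""
        if random.random() < p_honest:
            return truth
        return random.random() < 0.5

    def estimate_true_rate(responses, p_honest=0.5):
        """Invert the noise: observed = p_honest * true + (1 - p_honest) * 0.5."""
        observed = sum(responses) / len(responses)
        return (observed - (1 - p_honest) * 0.5) / p_honest

    # 10,000 respondents, 30% of whom would truthfully answer "yes".
    truths = [random.random() < 0.3 for _ in range(10_000)]
    answers = [randomized_response(t) for t in truths]
    print(estimate_true_rate(answers))  # ~0.3, though no single answer is trustworthy

No individual answer reveals the truth, but the aggregate does, which is the
same property the interview exercise exploits.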

------
User23
There are some factors that affect interviewee performance that are seldom
considered. The stand-out I noticed is that the quality of whiteboard coding
is a function of whiteboard size: the bigger, the better.

------
rb808
Nice. I also feel like I need humility training to work with and evaluate
newbie devs. They can't do anything right. I'm not sure if it's just hard to
find people who know what they're doing, or if I've forgotten what it's like
in your first few years.

------
tacomplain
I don't really agree that ego = 1/knowledge; my knowledge has been increasing
over time while my ego has fluctuated in an uncorrelated way.

------
antoinevg
Brilliant.

------
mentally_broken
.

~~~
ben_jones
Don't. Work. For. FAANG. Money. Isn't. Everything.

------
bra-ket
Why is interviewing not automated yet?

~~~
SketchySeaBeast
How would you determine if the person is a good fit culturally via an
automated system?

~~~
bra-ket
Have a human-to-human interview for the behavioral and background questions,
and automate all the whiteboard/leetcode parts with boilerplate Q&A.

~~~
steelframe
Oftentimes the most valuable signal comes from how the candidate got to an
answer rather than the exact answer they arrived at.

~~~
bra-ket
When you apply to graduate school, they don't ask you to solve calculus
problems on a whiteboard to get the 'signal'. The signal comes from 1)
standardized tests, 2) prior work, 3) recommendations, and 4) behavioral
interviews.

