In my position, I have occasion to read a ton of assessments written by interviewers. Some of the most striking assessments are the ones where the interviewer is cocksure that they completely nailed the candidate's utter incompetence. It's usually an interviewer who has been asking the same question dozens of times over a year or more. They've seen every variation of performance on the question, and they've completely forgotten what it was like for them when they first encountered a question like that.
It's a total lack of empathy at that point, and if the candidate doesn't exude near-perfect interviewing brilliance on that specific question, the interviewer judges them as essentially worthless. Interviewers like that sometimes even get snarky and rather unprofessional in their writeup: "Finally, time ran out, mercifully ending both my and the candidate's misery."
If I were to diagnose one of the causes of this phenomenon, I'd say it is bias. The interviewer best remembers the candidates who performed exceptionally well on their question, triggering the availability heuristic.
There are tactics that I think can be effective to bust those biases. One might be to put an upper limit on the number of times an interviewer is allowed to ask any given question. Once they've asked maybe 20 or 30 candidates the same question, it's spent. They have to move on to something substantially different.
There are some other experiments I'd like to run. One of them is to have interviewers go through a one-hour interview themselves for every 50 or so interviews they give. Maybe prioritize interviewers who have a track record of being especially harsh on candidates who don't give a flawless performance on the question they've been asking for a while. The idea is to see if we can't bubble up some empathy.
What I would suggest is that when the interviewer and candidate reach the "quiz or coding exercise" part, have them pick a question from a website that provides a question at random, and let both work together towards a solution.
This matches more closely with what they will end up doing anyway if the candidate is hired, and it removes the "I've seen 20 different ways to solve this" bias while also generating empathy for the candidate when the interviewer also struggles with a fresh problem.
Of course, the interviewer can try to lead the candidate and can hold back from telling the solution right away if s/he can clearly see it, but if not this could set the stage for a fairly realistic way to sample all kinds of qualities, from technical, to communication, to empathy, to teamwork, etc...
I think it would also be less stressful from the candidate's point of view: a candidate often feels that an interview is unfair because s/he is being asked about something that the interviewer has had a chance to review and prepare well in advance.
When interviewers ask trick/complicated questions, I sometimes wonder what would happen if they were to ask the same question to the rest of their own team. Are they expected to answer it well? Would they know? Think about the places you've worked and the questions you've given at interviews: do you think your own colleagues would have aced them?
Obviously, this doesn't completely remove the additional stress on the candidate, since the interviewer's job isn't on the line, but I think it would provide a more balanced assessment of the candidate's abilities and personality traits.
In practice, I'm not sure interviewers would be very receptive to such a method, as it could turn an interview into a stressful event for them.
It all stemmed from one particular interview I had years ago. The interviewer presented me with a very simple problem, and I solved it. He then stepped up the difficulty a bit. This kept happening, and as it got harder he started acting more like a co-worker, with us bouncing ideas off each other.
Granted he knew the solution, but the mere fact that he presented himself not as an interviewer judging my performance but as a co-worker helping to solve a shared problem made that one of my favorite interviews.
They may have changed their interview process at this point though, I only interviewed for them that one time and it was quite a few years ago.
In the past I was hired on the back of simply sitting down with the team lead, and us pair programming to implement what he happened to be working on at the time.
That gave a good idea of how quickly I could start contributing, how easy I was to communicate with, whether I was going to be a snob about the existing code, whether I had relevant pre-existing knowledge of the technologies being used, and whether I had any interesting insights to offer. There was also a more standard interview process accompanying this (which I doubt I excelled at), but the pair programming exercise gave a real-world insight into exactly what I had to offer and what I was like to work with.
The problem is that getting the answer correct shouldn't be the main goal of the quiz question. Rather, it is an opportunity to see how the candidate thinks, and how the person is able to work with someone else (i.e., the interviewer) to solve a given problem.
It could also be a way to make sure the candidate understands a few concepts of their field (complexity, memory pointers, etc.).
Mainly what matters is the process, not the solution.
I always ended those 1:1 sessions by asking if the team member would be comfortable working with people who couldn't get to the correct answer. And if they could, what they would want to see from a candidate working on the question.
Power interviewers do a lot of interviews. They can do them in their sleep and crank them out like an assembly line. But assembly lines aren't nuanced, and I fear power interviewers lose the ability (or desire?) to assess candidates through their performance instead of strictly assessing the performance.
Power interviewers concern me. I think they end up with too much influence over a company's hiring practices.
I've heard of people at big companies who conduct thousands of interviews a year. This gives them a lot of sway in company hiring practices and culture.
The problem, though, is that these large companies are hiring at scale. Growth plus attrition yields a lot of hires. Google itself just announced in their earnings call nearly 20k more employees in the past year. That means 100k+ interviews conducted. It's hard to have every interview stay personal and nuanced at that scale.
I don't disagree with the rest of your thesis, but this seems off by an order of magnitude. They would have to conduct 4 interviews every working day to reach even 1000 interviews. Counting the time needed to write feedback for every interview, that person would be a full-time interviewer who occasionally writes software/does product management/project management.
Unless you're speaking of a small group of people who conduct disproportionately more interviews than everyone else (Pareto distribution) - a dozen interviewers could easily rack up 1000 interviews between them.
I admit it's hearsay, but yes, I've heard that some people supposedly conduct 1,000+ interviews.
To be fair, I'm sure some of those are the online "solve this coding puzzle in 30 minutes" type that can be watched later at 3x speed and don't require human interaction. I don't know the ratio, maybe the power interviewers are heavily sandbagging with those.
This is an interesting point, because I feel like that's how large companies treat employee performance in general. I often hear stories of talented engineers being treated as cogs or passed over for promotions, such as the classic protobuf maintainer example.
Perhaps the assembly line fashion of interviews is reflective of how large bureaucracies treat employees as a whole.
I would be sad if I had to switch questions after 20 or 30 candidates, because I find it necessary to invest substantial effort in calibrating a new question. Before I ask a candidate a new question, I try to use it to mock interview at least 10 people I have worked closely with. Iteration is always required to tune the difficulty and complexity of a question, so the total time invested in a question can be quite high.
One way that I keep questions well-calibrated is that I use them to mock interview other members of the interview panel. This serves at least two purposes: one is to constantly remind myself what realistic answers sound like, and another is to help the rest of the panel understand the areas my question will cover.
I find that — for me — this type of mock interviewing and the subsequent retrospective cultivate empathy for candidates. I think this avoids the sort of bias you're observing.
(NB: Some of this may be specific to the kinds of questions I ask; I care less about the initial answer a candidate gives than their ability to self-assess their answer or incorporate feedback to improve their solution.)
If you're interviewing correctly, your job as an interviewer is to ensure that the candidate succeeds during the interview. For example, one question might require checking whether two intervals overlap. There are multiple ways to check if they do. One can enumerate all the possible ways they can overlap (first interval fully contained within the second one, second interval partially overlapping with the first one, etc.), but this quickly grows into a complicated conditional. At that point, if the candidate struggles, you can ask, "How would you check that two intervals don't overlap at all?", which is a much easier test that the candidate can then invert.
What the interviewers should be looking for is whether the candidate can think through a problem, can split it into smaller steps, can solve each step, and is able to integrate each small step into a complete whole.
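As a sketch of that hint (the function name and interval representation are my own choices, not from the thread), the negation collapses the case analysis into a single comparison that the candidate can then invert:

```python
def overlaps(a, b):
    """Return True if closed intervals a and b share at least one point.

    Instead of enumerating every containment/partial-overlap case,
    check the far simpler negation: two intervals fail to overlap
    exactly when one ends before the other begins.
    """
    a_start, a_end = a
    b_start, b_end = b
    no_overlap = a_end < b_start or b_end < a_start
    return not no_overlap

print(overlaps((1, 5), (4, 8)))   # True: they share [4, 5]
print(overlaps((1, 2), (3, 4)))   # False: there is a gap between 2 and 3
```

Note that with closed intervals, touching endpoints (e.g. (1, 3) and (3, 5)) count as overlapping; the candidate stating that assumption out loud is exactly the kind of signal the interviewer is after.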
You could apply that same process with the interviewee.
It would be a great way to see how collaborative they are.
i.e. you say "I've never actually worked through this problem myself, so I don't know how deep the rabbit hole is, but let's give it a crack and see what we come up with together."
Sounds a lot more like real-life to me than a manufactured question you've already worked through to the nth degree.
Without establishing objective criteria with which to evaluate candidates, it's much easier to fall back on decision making processes that are prone to unconscious bias.
A manufactured question may seem impersonal, but that's the point.
Sometimes, all the contestants will miss a prompt, and Alex Trebek comes across as a bit smug or incredulous when reading out the correct response. ("No one? No one knew this? The correct response is....") I may have occasionally yelled at my television, "You have been the host for decades! They have to give you all the answers in order to run the show!"
And that's what it is when you think you're smart just because you know most of the answers on "Jeopardy!" The game is different when you're watching it (or hosting it) than when you're actually playing it competitively.
The interview processes that are designed to stress or fluster a candidate can make the divide between host and players even worse. There is nothing the candidate can do to get the interviewer fired. Almost anything the interviewer does can result in a no-offer.
Then go interview with them. They don't stand next to you and engage with you while you're working at the whiteboard. Instead, they sit at the opposite end of the table and type out your code line by line on their laptops so that at the end they can see if whatever you scribbled actually compiles.
The laptop requirement just encourages disengagement. I knew the problem and how to get to the optimal solution for my first round at Google so it didn't affect my performance, but I was pretty disappointed because my interviewer, between typing up the lines I wrote on the board, was completely occupied answering email and Slack messages and barely said anything or made eye contact aside from the occasional "mhm" or nod. Had I gotten stuck at any point I don't think I would have been able to get any help, certainly not any help like I would get from a coworker in a truly collaborative scenario.
$300k+ total comps are a big part of it
I would take a deep breath and go at it, thankful that this unfriendly person isn't breathing down my neck while I'm solving the problem.
Although that hardly constitutes a test of how someone will work in a collaborative environment.
Real collaborations have a lot of tangents, and rabbit trails, and clarification requests, and elucidation sidebars, and restating what has already been established, and re-asking the core questions to see if they have been answered. If the other person is just nodding and saying "Mmhmm. Go on.", then that's no "true [Scotsman]" collaborator. It's a typical collaboration, though; there are two people in the room, and between the two of them, someone is doing the work.
My experience was that the video might show the ideal interview but not the default one. The default was tense silence and "okays" that barely restrained their judgement. The linked video seems to give an entertainingly wrong impression.
Or perhaps you got to find out exactly what it's like to work with that particular person.
I think one fix is to have an explicit criteria scoring a given question, and training interviewers to not rely too heavily on a single question.
This is so incredibly unprofessional I'm shocked that this large search and ad company wouldn't pull this interviewer out of the rotation.
I hope this was just a joke. If I received that in feedback for a candidate I'd replace that interviewer in a heartbeat.
The fact that the organisation not only employs people who behave like this in a professional setting but also tolerates it doesn't reflect very well on the organisation. I understand it's difficult to control for the variation in personalities at a large company, but it's not too difficult to set a culture where a base level of professionalism is expected and is the norm. I'm not sure how reviews work at your company, but I sincerely hope comments like this officially flag the person as not suitable for management, and possibly not suitable to continue in long-term service at the company.
I could tell you which questions were asked at MTV this past week and by looking at the last month of these posts for a specific office it's incredibly easy to come up with the most frequently asked questions and memorize their optimal solutions. This means that if you're someone who doesn't partake in these communities that you'll enter these interviews at a significant disadvantage because your performance on that problem will be compared to everyone who was able to (legitimately or illegitimately) arrive at the optimal solution in 45 minutes.
Even the most common questions aren't common enough that you're sure to encounter one on an interview loop.
1) At the office that my friend (who is also an interviewer) works, there is a pool that the interviewers collaboratively built and generally select from. They aren't required to choose from this pool, but 95% of them do because they contributed to it.
2) At MTV there's either a pool or the number of interviewers is so low that the same questions are repeated over and over at high frequency for several weeks, making it trivial to know which ones are likely to show up. I know because the questions I found on the sketchy Chinese forums were exactly the ones I encountered on my onsite interview.
I'm unclear if 1 is referring to a specific office pool, which would be weird, or the global knowledge base I mentioned, but that has lots of questions, more than one could hope to memorize in any reasonable time frame.
I would like to throw an experiment your way. As you mentioned, the interviewer and the interview questions themselves can have their own bias. How about pre-screening questions? First, you would pool the possible set of interview questions. Then, once a month, a meaningful subset (or subsets) of 'regular' interviewers is created. Each subset gets one question from the pool and is required to complete it in 'interview-like conditions' (closed room, no internet, just pen and paper). If the question has a poor answer rate with existing employees (i.e., the interviewers), throw it out. It would be great if they explained what they did and did not like about the question and what would have made it easier. Would this process increase their empathy towards the interviewee? It needs to be frequent so it is a constant reminder.
Another, likely chaotic, option is to have the interviewer be randomly assigned a question from the pool. The question cannot have been their own submission. This way, both the interviewer and interviewee are seeing it for the first time and have to work toward a goal (team work).
Changing topics from interviewing to interview metrics... I would be interested in how interviewee - interviewer age differences affected outcomes.
I've ignored every potential job ad I have seen from said search and ads company ever since.
Would it also make sense to have every harsh interviewer be interviewed by all the other harsh interviewers, and see how well they all perform on each other's questions? In each case, incentivize the interviewer to stump the interviewee, but using only the same questions they use in real interviews with prospective candidates, and link the interviewee's performance to their bonus or something. Put the harsh interviewers in the same situation, with a similar amount at stake, as prospective candidates, and remind them that nervousness and the unnatural setup relative to typical working conditions can also be a factor in interview performance.
I find it curious that you think this has to do with the particular question. This kind of bias comes up all the time, and it will certainly be in place even the first time you ask a particular question.
Simply growing in one's career and being around mostly other experienced people can quickly lead to one forgetting what it was like to be new to the field.
It's total lack of empathy at that point, and if the candidate doesn't exude near-perfect interviewing brilliance on that specific question, the interviewer judges them as essentially worthless.
That's going way too far. Having poorly calibrated standards doesn't imply lack of empathy. Having an unreasonably high bar doesn't mean you think anyone under the bar is worthless.
How about just recording data about interviews and using it as feedback to make the system better? If there are outlier interviewers with unreasonable interview-to-hire rates, deal with that situation directly.
The question is not "is this person good?" or "can they do the work?" The mandate is not "hire people who can do the work," and rejection is not "we don't think you can do the work." The mandate is "hire people we're confident that we're excited about, even with the imperfect information we can afford." Like a competitive college, we're going to pass on a lot of applicants who are probably fine. It's not a reflection on their worth as people or as professionals. It's a pragmatic tradeoff in the design of a machine.
It behooves all parties to cast their nets far and wide, and to not get emotionally invested in any particular match before the offer stage.
My first dev job was at a place that had a whole team like this. They would get themselves psyched up for interviews and code reviews talking about making someone cry today.
It was pretty disturbing.
I'm glad most devs aren't like this, but sadly, there are still too many who are.
For example, let's say a previous candidate that was brought in as an L4, rose to L6 extremely quickly while constantly getting great performance reviews - maybe you could look back at the type of questions asked, and the type of feedback that candidate received.
Additionally, let's say a previous candidate ended up being a non-performer and was quickly let go, you could also look back at their initial interview.
Maybe everything could be fed into ML, and once you have a model in place you could start getting signals based on a candidate's replies to behavioral interview questions, or certain characteristics displayed during their systems design interview, etc.
- Review candidate's resume and side projects to get to know more about them. This helped with getting to know their work and finding things to discuss beyond the interview exercise.
- Before diving into the exercise I would outright tell the candidate what I was looking for in regards to the exercise; e.g. "I'm not looking for a complete/perfect answer, but I'm looking to have a conversation about the pros/cons and edge cases. You can write on the whiteboard as little or as much as you want, but either way let's have a discussion. Let me know if you feel stuck at any point, and I'll make sure to let you know if you are/aren't on the right track. If you don't know/remember something, just ask me; it's fine, nobody knows everything."
- I didn't put too much weight on whether the candidate gave a complete answer or not, or how much I had to help them. I basically asked myself a simple question: "Do I feel like this person would be a productive contributor here and are they someone I would be able to work with?"
- I always did my best to go into the room with a relaxed and conversational attitude. I was there to pass the candidate and not to fail them for random reasons.
- I passed most people and only failed some when it was somewhat obvious to me that they really lacked some very basic foundational pieces, or when I felt like they weren't someone I would want to work with (for various reasons).
Towards the end of my career there I started to really dislike interviewing, because I would personally put so much effort into passing candidates, but they wouldn't get hired because other interviewers left feedback like "I helped the candidate too much" or "they didn't even know what a TRIE tree was" or "they struggled with X" or "the solution had bugs" or "the solution wasn't complete".
The interesting thing was that the same interviewers who failed candidates for seemingly random reasons were also the ones who left confusing feedback or not enough feedback. The same interviewers also either treated candidates poorly or showed boredom and agitation.
Care to share the name? The search engine I currently use is garbage, it routinely ignores half my query.
This would only be effective if the interviewee faces similar consequences for failing that interview, e.g. losing their job (the equivalent of an actual interviewer not getting an offer), or at the very least losing the ability to interview moving forward.
They might also fix their comically bureaucratic promotion committee process at the same time. Seems like a win-win.
Maybe it's time to step back and admit that Google-style interviewing is itself intrinsically toxic and inhuman? It brings out the worst in everyone, reduces candidates to rote memorization and tortured puzzle solving, and very sincerely does not reflect the realities of the job (not even at Google).
> There are some other experiments I'd like to run... to see if we can't bubble up some empathy.
I don't know dude, it doesn't sound like you're the authority on empathy if you think you can "bubble up some empathy."
As far as making fun of the preposterous egotism of the Google interviewing process is concerned, that line could definitely make it to Silicon Valley in the parody.
I mean, I don't really know. If you aspire to be better at recruiting--at being a people person of some kind--you gotta not say stuff that sounds utterly, hilariously disconnected from what normal human beings say.
I've had some recent experience with technical interviews, and my first big takeaway was that the current interview process is broken largely because it's too common for the interview to never surface the _strengths_ of a candidate, and instead only to highlight their weaknesses. For every candidate, there is literally an infinite number of things they do not know. Surfacing those deficiencies has no purpose or value unless those weaknesses are _directly_ relevant to the job role–which is hardly ever the case when talking about DS&A questions.
The second lesson I learned is that interviewers need more training because there is a vast difference between good and bad interviewers–and almost all of it comes down to communication skills. If we don't finish a warm-up problem because it takes me 30 minutes to decode and understand the question that the interviewer is trying to ask...that's a problem.
I get submarined by people who use pretty open-ended problems as their warmup question. Clearly they didn't think about how many corner cases a production version of the solution would require. I'm not in R&D very often. My job is to make production-ready versions of solutions. I can't just turn off thinking about corner cases (and really, I have no interest in learning that habit, because it just seems like a liability).
This train of thought goes through my head while I'm also trying to answer the question and ask follow on questions and then I end up wondering about the chops of the interviewer. Are they a corner cutter? They picked this person as a spokesman for the group. Is the whole group a bunch of hacks?
One candidate clearly thought she'd flunked the interview within ten seconds of when I decided to recommend her. She got stuck on the problem (totally wrong answer from the code due to a typo) and started to crumple but immediately went into the debugger to try figure it out. Ran through a series of perfectly reasonable diagnostics trying to zero in on the problem. I didn't even care if she found it at that point because I could see that she would get it eventually, and probably every one after that. You don't get to see the engineering discipline the same way if you use the whiteboard.
People who can solve their own problems can often help other people solve theirs. I don't want to add someone who is nice but needs my help all day. That might fluff my ego, but it doesn't make us go faster.
I switched teams shortly thereafter (I was hiring my replacements) so I didn't get to work with her much, but I know she stayed on through the first contract renewal (not everybody did), so she must have worked out.
I don't think I agree. There's absolutely something to be said for the comfort of a familiar environment, but I think the interviewer should be able to emulate the compiler/debugger/runtime for the question they're asking. (Many of the most successful interviewees can do this themselves; they write down the program state on the whiteboard and step through it in their head.) Interviewers should be able to say "you get a SIGSEGV" and ask what the candidate would do. If the candidate says "I'd run gdb", they should be able to say "it says the crash was at this line", emulate break/print statements, and such. In some ways, it's slower than the candidate doing things, and more awkward to go through a human. In others, it's faster, because the interviewer can/should speed up the process by forgiving small syntax errors, saying "oh, you're bisecting? it's here", etc.
I do this sometimes when interviewing. I find though that the people who can successfully use a debugger (or me as a debugger) tend to have relatively minor errors in their code anyway. It's pretty rare for someone to have a completely incorrect algorithm and figure that out from debugging.
It's usually something minor: forgetting a guard on an if for an empty data structure, forgetting to sort numerically instead of lexicographically, some dumb typo, etc.
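To illustrate one of those minor errors (the example is mine, not from the thread): in Python, sorting numeric strings without a key compares them lexicographically, which is exactly the kind of small bug a quick debugger or print-statement session surfaces.

```python
nums = ["10", "9", "2", "21"]

# Bug: strings compare character by character, so "10" sorts before "2".
print(sorted(nums))           # ['10', '2', '21', '9']

# Fix: supply a key so the values compare as integers.
print(sorted(nums, key=int))  # ['2', '9', '10', '21']
```

The algorithm (sorting) is completely correct in both calls; only the comparison was wrong, which is why candidates who can diagnose this kind of thing tend to have otherwise sound solutions.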
How does a human emulate the UI of a debugger? It seems like you would have much lower information-bandwidth and thereby inevitably end up not really presenting all the info at once that a terminal window is able to.
Yes, you're right. It's a bit awkward. I certainly wouldn't want to do my regular work by whiteboard + dictation. And if I were designing my company's interview approach I might allow candidates to bring in a laptop to work on a toy problem in their chosen IDE.
But my point is that I don't think debugging skills are completely impossible to test in this way, and the forced interaction allows you to learn what piece of information they're looking for and why. If this is how you have to interview, you might as well find the best aspects of it and use them.
I think a lot of the key to getting useful signal from an interview is to ask a simpler problem. If you ask someone to write and debug really complex code in 45 minutes, you'll just find out whether they can write code under pressure really fast. That's a great skill, but I care more about communication: being able to ask good requirements questions, describe the data structure/algorithm so that teammates will understand it, teach people how to work through problems, etc. I think overly complex coding takes away from the time I spend examining those things. Likewise, there are a few criteria besides "coding" and "communication" which I also want time to focus on.
When I look through the notes from the full interview panel, my questions often seem to be simpler than others. My feedback appears to correlate more strongly with actual hiring than average, so I think it's persuasive. Of course, I don't know if it correlates well with how people would actually perform if we hired them. I don't even know if the people we did hire are performing well, because I work at a big company and the people I hire generally don't end up on my team.
Had this experience recently. In the two-hour-or-so interview, the interviewer bragged and boasted about himself for a good hour and a half. In the 30 minutes of question-and-answer time I got, he would routinely and very rudely interrupt me to tell me, in a humiliating tone, that I was wrong.
I know not everyone in that company would be as humiliating as this interviewer, but it says something about a company if they hire such people and put them in the chair of an interviewer, no less.
It's somewhat striking to me that you seem so worried about these concepts but you don't seem to be aware of the normal terms for them. How much does TripleByte try to inform itself of the existing research in this field? To what extent does TripleByte seek to incorporate psychometric results about what kinds of tests are likely to have high reliability and construct validity?
And one more more specific question:
> what actually matters is accuracy (predictive utility of the interview)
What is it that you're trying to predict? You could be trying to find employees who will be good employees, which would put TripleByte in the business of credentialing, or to find employees who will pass interviews at other companies, which would make TripleByte a recruiting agency. In the past, Harj has been explicit that what TripleByte wants to predict is whether a candidate will successfully pass the hiring process at another company, regardless of how well that hiring process performs. Is this still true?
The psychometric literature is pretty robust.
Another was specifically on the psychometric literature: the big meta-analyses of the predictiveness of different testing factors on job performance, how that influenced the experiments they did early on, and how they homed in on what they do today, as well as the downsides they discovered of various methods people commonly suggest they're ignoring.
I came away from those conversations extremely impressed with TripleByte's employees and competence as an organization. They definitely think about this stuff.
> Then, as the interview progresses, do exactly this. About half the time give your best answer. The other half of the time give an intentionally poor answer. ...
> What this does is free your co-worker to be 100% honest. They don't know which parts of the interview were really you trying to perform well. Moreover, they are on the hook to notice the bad answers you gave. If you gave an intentionally poor answer and they don't “catch” it, they look a little bad. So, they will give an honest, detailed account of their perceptions.
This reminds me of the second part of the Rosenhan experiment [ http://psychrights.org/articles/rosenham.htm ]:
> The following experiment was arranged at a research and teaching hospital whose staff had heard these findings but doubted that such an error could occur in their hospital. The staff was informed that at some time during the following three months, one or more pseudopatients would attempt to be admitted into the psychiatric hospital. Each staff member was asked to rate each patient who presented himself at admissions or on the ward according to the likelihood that the patient was a pseudopatient. A 10-point scale was used, with a 1 and 2 reflecting high confidence that the patient was a pseudopatient.
> Judgments were obtained on 193 patients who were admitted for psychiatric treatment. All staff who had had sustained contact with or primary responsibility for the patient – attendants, nurses, psychiatrists, physicians, and psychologists – were asked to make judgments. Forty-one patients were alleged, with high confidence, to be pseudopatients by at least one member of the staff. Twenty-three were considered suspect by at least one psychiatrist. Nineteen were suspected by one psychiatrist and one other staff member. Actually, no genuine pseudopatient (at least from my group) presented himself during this period.
There is a version of this exercise you could do where you say you are intentionally giving bad answers and then give none!
This might seem clever, but it could backfire. If the other person trusts you, they will take it as axiomatic that some of your answers are bad; therefore, if all of your answers are actually pretty good, they will desperately look for nits to pick, and possibly end up making criticisms that they don't really believe in (or at least wouldn't have believed in when unbiased). This can take you from one extreme (too polite/respectful/humble to be critical) to another (finding things to criticise no matter what), skipping the middle ground that you really want.
When your interviewer is telling you about their role in the company and a little about their history, ask them questions back. Say it's a dev interview: ask them how they'd rate their own programming skill on a scale from 1 to 5. Ask them why they left their last company. Ask them what the most difficult thing they've ever achieved is.
When interviewing someone to be my manager, I asked him why he was leaving, since he spoke with such joy about his current job... It was money. We found out the company he was at got funding a few days later, and although we made an offer, he rejected us.
I've interviewed at a company where the environment didn't seem good at all. I always ask what's good about working at a place and what isn't so good, and I found people to be honest about it. After thinking it over a bit, I didn't think it was a good fit and decided not to continue.
So when interviewing, don't be arrogant. You need help; that's why you are looking for people.
When interviewing candidates, my main criteria are:
- Can this person help us?
- Can I deal with working with them?
Speaking as someone who is (1) a technical interviewer (2) good at whiteboarding problems and (3) has near zero fear of public speaking, I think the big exercise in humility is to figure out ways to get evidence for the interviewee’s skill set even when it’s dissimilar to my own.
We've found that the three best predictors for good hires are Curiosity, Ability to Self-Learn, and Ability to Listen.
Good hires will generally have 2 or 3 of them.
I'd love to see actual pair programming between interviewer and interviewee, where they're assigned a random (small) code project and have to work on it together, neither having prior knowledge. It would level the playing field a bit and is much closer to actual working conditions than being forced to write code under duress and close real-time examination.
However - I don't think this is about 'ego' or even 'humility' - I think those are not the right words.
It's a lack of contextual understanding, both of one's own 'self-awareness' and of the interviewer's plight.
I think the premise can be taught.
Also, I think interviews can be structured to find qualities independent of background.
+ Questions that don't measure a person's ability to 'memorize algorithms' are a good start.
+ Allowing devs to pick their language of expression, i.e. sometimes they are more comfortable in one lang than another.
+ Don't get syntax/code structure confused with the abstract problem, if that's what one is going for. Google has a nice interview example.
+ Open ended questions with many possible turns allow for a 'good' thinker to just go a lot further, and be more impressive while at the same time allowing junior devs to still walk through and complete something. The Google example is again good here.
+ Time/on-the-spot pressure - one of the worst issues. Personally I'm about 50/50: sometimes 'in the flow', sometimes not, but given just a little bit of time, I'd be fine on most things. For this reason, giving interviewees an intro to the problems and as much time as they want to think about them before the interview starts might be worthwhile as well: 'Let us know when you want to go over a solution.' This could work well for pedantic things such as 'here's some code, find some bugs' or 'how would you structure this differently', etc.
Thankfully it was remote and not during my work hours, so little was lost.
Since there was no mention of it in the post, this is called “randomized response,” and is a building block for modern privacy-preserving protocols e.g. RAPPOR, which is used in Google Chrome: https://security.googleblog.com/2014/10/learning-statistics-...
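For anyone unfamiliar with the term, here's a minimal sketch of the classic randomized-response mechanism (Warner's coin-flip version, not RAPPOR itself): each respondent answers truthfully with some probability and otherwise reports a fair coin flip, and the aggregator inverts the known noise to recover the population rate. Names and parameters here are illustrative, not from any particular library.

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.5) -> bool:
    """With probability p_truth report the true answer;
    otherwise report the result of a fair coin flip."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_true_rate(responses, p_truth: float = 0.5) -> float:
    """Invert the noise: observed_yes = p_truth * true_rate
    + (1 - p_truth) * 0.5, solved for true_rate."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100k respondents, 30% of whom truly answer "yes".
random.seed(0)
truths = [random.random() < 0.3 for _ in range(100_000)]
noisy = [randomized_response(t) for t in truths]
estimate = estimate_true_rate(noisy)  # close to 0.30
```

No individual response reveals that person's true answer, yet the aggregate rate is recoverable with an error that shrinks as the sample grows, which is the same trade-off RAPPOR builds on.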
A lot of companies now rely on tools such as Codility / Leetcode / HackerRank for technical screening, or their own in-house tests.
and automate all the whiteboard/leetcode parts with boilerplate Q&A