Why phone and whiteboard interviews are not the best way to evaluate candidates (krishicks.com)
64 points by tush726 on Feb 26, 2015 | 65 comments

Whiteboard interviews can be good, but there's "One weird trick" to getting them right.

Let the person code WITHOUT YOU BEING THERE.

Interviews are stressful. The author says "My heart rate jumped" ... his heart rate jumped because his body released a bunch of adrenaline in reaction to a stressful event.

Adrenaline prepares the body for fight or flight, and part of this preparation is re-routing blood to critical systems ... muscles, heart and lungs ... and things like digestion and higher-level cortical function are not required for fight or flight.

Programming requires higher level cortical thought - which is disabled during a stressful situation.

Give the person a programming challenge, tell them to code it on a machine (or the whiteboard for the inferior option) and then leave the room and tell them you'll be back in 10 minutes.

Let them relax, then their brain can think.

This is pretty much analogous to urinal "stagefright" where your ability to pee (for those afflicted) goes right out the window as soon as another person steps up to an adjacent urinal. Likewise, my ability to do anything coherent on a keyboard drops to a "uhm what was I doing?" as soon as anyone starts hovering behind my shoulder. Doubly so for interviews. I wonder if there's a correlation between those susceptible to urinal (or stall perhaps...) stagefright and inability to code on show.

Amusing. I absolutely suffer from the urinal stagefright. I'm also terrible at whiteboard coding. Or playing piano in public. I recall from childhood the time my father invited me to play a song I knew by heart in front of one of his friends. I got a couple bars into it, flubbed a few keys and then the song just fell completely out of my memory. I had nothing.

Pretty much any time I feel like I'm being evaluated by someone, my brain goes into an overly defensive, self-critical mode where I spend more cycles trying not to screw up than I do actually performing. It's generally a disaster.

Strangely, I discovered my cure many years ago when I was able to crank out Mozart's Rondo Alla Turca in front of an entire wedding party. Under just the right level of inebriation, alcohol seems to dampen the self-critical part of my brain just before it starts affecting much else.

Unfortunately, it takes quite a bit of alcohol for this effect to kick in. And sadly for me, this hack is a bit harder to pull off in a job interview situation without coming off like a high-functioning alcoholic. It certainly helps with the phone interviews though...

Ballmer peak then ;)

I'm no fan of whiteboard coding, but besides checking for basic coding ability, the most important aspect of the interview is to see how candidates think. Which is why most interviewers will ask you to "think aloud".

Also, it is very important to "manage" the process, because the candidate may get stuck and feel frustrated, thus performing worse, or may settle on a sub-optimal solution when they could actually do better. Both these issues can be solved with some interactivity, using hints and probing questions, which cannot be done if you're not in the same room.

The author is absolutely right that interviews that require someone to code over the phone, on paper or on a whiteboard bias against a certain part of the developer population. Unless it is something people actually need to do for the position, you would be wise to remove "paper" coding from your hiring pipeline.

But, there is also a large part of the developer population that has the same reaction that the author did to "paper" coding, to any pair programming, including pairing with devs they've worked with for years. Developers in this population, who know that they are in it, generally will decline any job that actually involves large amounts of pair programming so you should reveal that you do lots of pairing early in your hiring pipeline.

If you do not actually do pair programming a lot in practice, this technique is no better, and in some ways worse, than "paper" programming.

My policy on any programming test is that it should be take home, non-timed and non-observed. Unless your environment has real, hard requirements about "how" someone writes code putting fake constraints on that part of the filtering process only biases against potentially great hires.

I couldn't agree with you more. To pile on here, I'd like to add that when you force somebody to do an observed pair programming interview, several things still do not match what will most likely be the day-to-day process. First, and most obviously, their environment will not be what they're used to, at all. Any pair tools out there are most likely very different from their normal setup. This immediately sows discomfort, no question. Second, let's be honest here: We all use documentation and google extensively when programming. That's simply due to the fact that we can't possibly remember all syntax and available functionality. But we know where to look. Doing that during an observed interview is usually a no-no.

As an example: I recently applied at Trello. I nailed their programming challenge to be able to submit the resume, and I gave a thorough explanation for my solution in the submission. I got a call from HR and they scheduled an online interview where I would be sharing a code window with the interviewer and I'd have to solve a succession of problems. Shit. All of these teasers were easier than the initial problem I solved to get the interview in the first place, but with a spotlight on me and an implicit time limit that I could feel ticking past with the interviewer's silence, I went down in flames. I have no doubt that I'd be perfectly capable for the job itself, but that doesn't matter because I couldn't dance the right jig.

I have since accepted employment with a company whose interview process cared more about what I have done and what I can do, rather than forcing me to participate in an unnatural process.

Devil's advocate here - how do you control against candidates plagiarizing code or cheating? Do you ask them about the code sample onsite in such a way as to determine if they understand it?

I'm not sure where I stand on this, but I've heard many hiring managers complain that great phone and work-sample candidates often fizzle out entirely when tested on the sample in person - and not due to jitters, but due to e.g. not being able to comment their own code or explain one of their functions.

I will say that I think a "homework" style coding test has always felt like a more fair assessment and been more enjoyable for me personally. I think a homework style work sample is probably one of the best methods for hiring, as long as it 1) won't take longer than an hour or so and 2) is technically exhaustive of typical job functions.

That said, I know brain teasers are hated, and I dislike them too, but I really think most programmers should know standard algorithms and data structures. I don't think the current model of whiteboard coding is perfect, but knowledge of algorithm design is valuable for writing performant code.

If you are in an environment that is large enough for your programming problems to be widely available, you are also large enough to build mechanical plagiarism detection systems (which are a pretty widely researched field oddly enough). As an aside, if you have that problem with submitted work, you also have it with whiteboard/paper problems. How do you distinguish between a "good" candidate and a "well coached" one in the whiteboard setting? I personally think if your hiring pipeline is generating lots of "cheaters" you need to address the candidate collection not the candidate evaluation. You are fishing in the wrong ponds, so to speak.
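For what it's worth, a mechanical check along those lines doesn't have to be elaborate. Here's a minimal sketch in the spirit of MOSS-style systems (my own illustration, not anyone's actual pipeline): normalize identifiers, take k-token shingles, and compare submissions by Jaccard similarity.

```python
import keyword
import re

def shingles(source: str, k: int = 4) -> set:
    tokens = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", source):
        # Normalize identifiers so renamed variables don't fool the check;
        # keywords stay, since they carry the program's structure.
        if re.match(r"[A-Za-z_]", tok) and not keyword.iskeyword(tok):
            tok = "ID"
        tokens.append(tok)
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def similarity(a: str, b: str, k: int = 4) -> float:
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

copied = "def add(a, b):\n    return a + b"
renamed = "def add(x, y):\n    return x + y"
unrelated = "class Stack:\n    def __init__(self):\n        self.items = []"
print(similarity(copied, renamed))    # 1.0 -- renaming alone doesn't hide it
print(similarity(copied, unrelated))  # low score for genuinely different code
```

Real systems (winnowing, fingerprint indexes) scale this to thousands of submissions, but the core signal is the same.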

I personally have no problem with candidates getting help on submitting their programming problem. I view effective help-getting as a fundamental tool in a developer's toolkit, so that in and of itself doesn't bother me.

I think you can do an onsite code review, where the candidate presents their code sample, teaches you how to use it, explains the tradeoffs, responds to criticism, etc. This should be just like what you are doing for your "real" code reviews (if you don't do "real" code reviews, how do you know that any of your colleagues understand the code they write?)

Over the years, I've grown a bias against even caring about that second step, because I have found so few people that do well on the programming problem part and then do poorly on the second part due to anything other than stage fright, but I understand I have an extreme position on that.

I'd usually say the first part isn't that interesting compared to the second. Any take home challenge to some extent just tests how willing someone is to burn several hours generating code to maybe pass a stage in an interview.

How would you feel about, say, providing a candidate with a chunk of [source] code and 30 minutes to read through and grok it. Then they can lead a discussion/review about their thoughts on its structure, and so on. This can move in to their preferred implementation, or just focus on consequences and risks etc. Essentially switching positions from the usual stage 2 where you're reading code and leading.

I know reading code is in some ways easier than writing - but to me understanding foreign source is sometimes harder than writing a fresh tool in your own style etc. Being able to distill intent, aim etc.

> how willing someone is to burn several hours generating code to maybe pass a stage in an interview.

We are well into the area where it is my biases talking and not any sort of well researched finding, but I don't believe in developers who can turn on/off writing good code. Either a developer writes good code, or they don't. Further, I take getting my next job very seriously. I take hiring my next colleague equally seriously. I'd hope that this seriousness (in combination with me asking for a reasonable amount of work sample) would encourage everyone involved to put their best foot forward.

> How would you feel about, say, providing a candidate with a chunk of [source] code and 30 minutes to read through and grok it.

I have never tried this so I don't have any experiential recommendations. My instinct is that this would be problematic, not because it is wrong to judge someone's ability to learn foreign code, but because in the evaluation step it would be very hard to distinguish someone who learned the foreign code quickly from someone who is great at talking about stuff generally.

If I wanted to judge someone's foreign-code ability, I would, as part of the code project, expect them to interact with a provided set of foreign code. I'd set explicit boundaries about how much they could or could not change it and then judge the candidate on how their new code interacted with the legacy code as well as what and how they changed the legacy system.

> I know reading code is in some ways easier than writing

I don't think that is in the least bit true. I think evaluating a candidate's ability to interact with legacy code is a hugely valuable thing to filter on in most environments, it's just very hard to design a fair and repeatable work sample problem around it.

How do you feel about testing algorithm design as a component of, or alongside a more "project based" homework test (project based meaning something like "Develop a few classes to interact with an API like so...").

Do you think developers need any grounding in the fundamentals if it isn't explicitly part of their job description?

Finally, do you administer the homework test via a third party like HackerRank or develop your own problem and simply email it to them?

I quite like algorithm design to be one of the criteria for judging submitted work samples. Designing the work sample is very difficult, but the good ones have some box canyons that are easy to fall into without proper attention to algorithms.

I personally have found my computer science academic background invaluable in my career, and think that foundational computer science is useful in nearly any software development position. That said, some of the very best developers I've worked with do not have that foundation, so I do not think it should be used as a selection criteria.

I have historically used email for work samples and then fed them into whatever code review system the company is using. If I were setting up a big hiring pipeline right now I would probably do research about outsourcing the mechanics of collection/testing/code review collaboration to a 3rd party. That said, sending candidates through your real workflow can expose problems on your side, which is a nice side benefit.

Your method has one huge drawback - time consumption on the candidate's side. Before doing this test I would recommend a simple preliminary test to quickly eliminate obvious misfits. It also aligns with Joel's recommendation[1].

In the end, there are too many factors you can't uncover before hiring, so at some point more testing is meaningless. My experience is that everybody has a preferred way of working and preferred topics, and letting them do just that yields the most benefit.

[1] http://www.joelonsoftware.com/articles/GuerrillaInterviewing...

I absolutely think that your work sample project should have reasonable time requirements. I shoot for "a candidate that we would want to hire should be able to do this in 2-5 hours over the course of a week or 2".

If a candidate is unwilling to invest that much time in our process, that is perfectly within their rights and I respect that. But I'm not going to sacrifice my time for them either. If I need to cut time commitments from my hiring pipeline I will do it at the interview component. I would gladly trade 4 hours of work sample time for a half day of onsite interviews for instance.

As far as eliminating obvious misfits, I'd argue that a mechanical suite of tests (I like to provide say 70% of them to the candidate) gets much lower false negatives and much higher true positives than any "simpler" test you can devise.

The Guerrilla guide is a very good starting point btw, but I think it misses the central problem, which is, interviewing is not a reliable way to filter candidates for software development jobs, no matter how you do it.

I prefer that the initial interviews be about knowing what the candidate wants out of the job he's applying for, how it fits into his master plan etc. Then I want to try and figure out if I really want to spend a third of my waking life working with this person. Only towards the end of the process, and only if I'm not too sure about the candidate's ability to do his job (eg: did not demonstrate interest in keeping himself up to date; was not able to articulate how X adds value over Y; etc) will I go into a full-on technical interview.

Sounds weird right? But the thing is, I'm not hiring rocket scientists so I don't need the best of the best, just someone who knows his stuff and isn't afraid of saying "I don't know (but I will find out)" when that's the case. I also take great care in planning work and defining deliverables so as to reduce the pressure on my colleagues' workday and there is hardly ever a situation where we need to rush to meet a deadline. If or when that does happen, I find that the culture of trust and a certain amount of friendship does wonders for the team's spirit.

And you know what? Maybe I'd like to work for Google or McKinsey. However, I suck at brainteasers.

I've worked in research since 2002. I always solve the problem. I'm the one you go to, after the other teams ran into trouble (I work rather hard to position myself as that).

I take the details of your issue, and offer to come back to you. Then I brainstorm and come back with a proposal. I will probably discuss the problem with several other specialists, one or two at a time.

Ideas occur at a Poisson rate with time spent thinking. I find running helps to focus the time spent thinking.

My most creative problem-solving ideas I've had while peeing. I think it is the interruption.

These screening interviews will rule me out, with high probability. Fine.

> These screening interviews will rule me out, with high probability

This sounds like a problem that you could put your mind to solving.

The solution is like the blog author's (maybe it was a different blog I read this morning): practice interview questions. Study for the exam.

Yeah, I can do that. Mmmm. And maybe I pass. But other companies won't make me do that...

So, if the aim was screening for people that want to work for Google above all else, then, yes, the screening totally works. But... Google would downsize you and your project coldly if there were financial reason to do so, so...

Not a big fan of Google here, but while they may kill your project in cold blood, I highly doubt they'd downsize you with it.

The other part you should consider is that Google and others of their whiteboarding ilk will likely pay you much more than other companies, except maybe finance. So if other factors don't prevent you, maybe it's worth jumping through the hoops for the pay upgrade.

>I've worked in research since 2002. I always solve the problem.

Those two sentences conflict heavily with each other. In research it is very rare to solve open problems. So either you are researching trivial things or your definition of "solving the problem" includes assuming it can't be solved.

The best coding interview I ever did was a pair programming exercise using a laptop with one of my prospective co-workers. We worked on a problem for 45 minutes, we conversed naturally about the problem, and there was very little stress.

At the end both of us came away with an understanding of the other's skills and how we worked as part of a team.

I've since used a similar exercise when interviewing people as I've found I get a much better sense of what the candidate can do than placing them in front of a whiteboard.

While I don't want to say if phone/whiteboard interviews are good or not, I can give you some pointers that worked great for me in the past.

1) Always, ALWAYS before writing a line of code say that you are trying to solve the problem first, by whatever means necessary and only then will you refactor.

2) Solve the problem by any means necessary. O(n*n)? Not freeing up resources? Allocating an array that is 100000 elements long just in case? No problem. Solve the problem.

3) Explain what you did/how you did it

4) Ask (if there is time) to refactor the code to be more idiomatic. Keep explaining that you wanted to solve the problem first, so later you can think of the right way to organize the code, and not the other way around.

I never had an interviewer complain about solving the problem in that order, nor has one cut me off (even when we ran out of time) when I suggested refactoring the code at the end. Also, solving the problem first and then making the code idiomatic takes a bit of the stress off, since you don't have to write and rewrite each line every time you see something wrong.
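To make the order concrete, here's a toy illustration of steps 2 and 4 above (a made-up example problem, not from any actual interview): first a correct-but-crude pass, then the refactor.

```python
# Problem (hypothetical): return the first value that repeats in a list.

def first_duplicate_naive(items):
    # Step 2: solve by any means necessary -- O(n^2), but correct.
    for i in range(len(items)):
        for j in range(i):
            if items[j] == items[i]:
                return items[i]
    return None

def first_duplicate(items):
    # Step 4: refactor once it works -- a set brings this down to O(n).
    seen = set()
    for x in items:
        if x in seen:
            return x
        seen.add(x)
    return None

print(first_duplicate_naive([3, 1, 4, 1, 5]))  # 1
print(first_duplicate([3, 1, 4, 1, 5]))        # 1
```

Both versions give the same answer; the point is that the ugly one buys you a safety net before you spend interview minutes on elegance.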

I did a whiteboard-based interview recently. I am very, very bad at these, mostly because I tend to freeze up. Fortunately, there were two parts, and by the second part, I was more comfortable, and did significantly better [0]. Your point about being clear about refactoring and talking through the problem is critical, I think. It also helps if the problem starts small (write a function to reverse a string) and builds in complexity. That allows the interviewee some time to find their sea legs, as it were.

[0] During the first part, I completely, totally, forgot about SQL joins. No, really.
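The "starts small and builds in complexity" structure might look like this (the stages here are my own illustration, not the interview described above):

```python
# Stage 1: reverse a string.
def reverse(s: str) -> str:
    return s[::-1]

# Stage 2: reverse the order of words, keeping each word intact.
def reverse_words(s: str) -> str:
    return " ".join(s.split()[::-1])

print(reverse("abc"))                   # cba
print(reverse_words("hello hn world"))  # world hn hello
```

The first stage lets the candidate find their footing; the second reuses the same idea with one extra wrinkle, which is exactly the ramp that helps against freezing up.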

That's very interesting—I'm much less interested in a candidate solving a problem in that situation, than I am in seeing how they approach it. That includes things like 'Hey, you made an unusable O(n!) algorithm, what gives?'

That's always made clear though. I guess it goes to show that there's rarely one approach to this sort of thing.

To be fully honest, that is usually how I solve problems anyway :)

I start with the problem, whatever it may be, then I 'solve' it by whatever means necessary. This lets me iterate very very quickly over various solutions if needed without much concern about pretty code, even if it is a simple api call, parse json, display something in the view. When I have that working, I refactor, sometimes I rewrite, I make the UI nice, I write tests, etc.

In an interview, I believe you either want to see if the person can code/create an algorithm or you want to see their thought process on solving problems. If the latter, you don't need a whiteboard and a conversation with someone will work as well. If the former, I would do as I mentioned in the previous post. The O(n!) question for me wouldn't be important. I would probably reply "Yes, I know, let me just finish this part"... "Back to the O(n!), you are right/wrong, but it seems to work this way; what we can do is change that for loop into a something/something, reducing it to O(2n)." And rewrite that part of the code.

When I was interviewing people, after a quick phone screening, we would ask someone to come over, have a quick chat with them (30 minutes maximum) and then give them 2 quick exercises. 1) Given a simple piece of code (200 lines or so), we would ask them to refactor it, based on some parameters like future extensibility, not being tied to some implementation, etc. 2) A written spec, with 3-4 unit tests that would verify the spec, and we would ask them to write the implementation (max. time 3h; good candidates finished it in 30-45 minutes, bad ones couldn't finish it in time). With those two exercises we could quite easily 'judge' the competence of the candidate. The ones that did very well would go home with job offers (or would have one when they got home), bad ones were rejected, and intermediate ones, depending on team feedback, would be invited back to discuss the code so we could make up our minds.
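A sketch of what that second exercise could look like (the spec here, a run-length encoder, is my invention for illustration): the interviewer hands over the spec as executable tests, and the candidate fills in the implementation.

```python
# The "written spec": a handful of tests the interviewer supplies up front.
def test_spec():
    assert rle_encode("") == ""
    assert rle_encode("z") == "z1"
    assert rle_encode("aaabcc") == "a3b1c2"

# Candidate's implementation: run-length encode, e.g. "aaab" -> "a3b1".
def rle_encode(s: str) -> str:
    if not s:
        return ""
    out, run_char, run_len = [], s[0], 1
    for ch in s[1:]:
        if ch == run_char:
            run_len += 1
        else:
            out.append(f"{run_char}{run_len}")
            run_char, run_len = ch, 1
    out.append(f"{run_char}{run_len}")
    return "".join(out)

test_spec()
print("spec passed")
```

The nice property of this format is that "done" is unambiguous: the tests pass or they don't, which keeps the judgement focused on how the candidate got there.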

ps: I actually wrote the exercises and everyone already at the company did them as well and passed.

It definitely is strange that we still see these interviews when they don't do a particularly good job in terms of predicting success on the job.

On the other hand, I'm on the opposite end of the spectrum from the author I suppose, I actually really enjoy interview questions. If anything I wish I would encounter those sort of questions day to day, but sadly real programming is much less demanding intellectually for most of us.

I recently had a similar waterboarding, err, whiteboarding experience.

My first solution to the first part of the problem was laughably complex (but still correct). A few moments later I arrived at the much simpler version the employer was likely looking for.

The interviewer should've immediately seen that I was affected negatively by being put on the spot. I am an introverted person after all, and I don't program on a giant whiteboard with someone constantly looking over my shoulder. So if my job is to work with someone new every day on a programming problem, someone I am not entirely comfortable around, then I guess I'm off the list?

The second part of the problem was more complex, and I eventually worked through it, with some leading questions by the interviewer.

I did trip up once or twice. Oops, that one part of the program was O(n/2). I did not arrive at that one correctly.

I demonstrated I'm comfortable with multiple languages, can reason intelligently about a program and come up with some solutions, iterating on them while demonstrating understanding of big-O notation, but the employer cares about none of these things even if they tell you beforehand that they do. That's pretty much a lie. The employer is simply lazy and has decided the best metric for weeding out candidates are toy problems on a whiteboard.

I basically built an entire product to help make the phone-coding situation better. It's called CoderPad (https://coderpad.io) and it basically tries to mimic a more natural coding environment inside the browser, for the purposes of conducting interviews. For example, if you're used to Ruby, you'll get a full REPL with debug information and tab-assist, etc.

It obviously can't do too much for the psychological side of the problem, but I believe that getting the interview environment as close as possible to your natural coding environment is definitely a needle-mover when it comes to candidate comfort during interviews.

We use a similar tool during phone screens: http://rextester.com/runcode Doing so has dramatically improved the screening process for us. This site allows real-time multi-person interactive coding sessions in a variety of languages (note: If you try the "live cooperation" mode, you need to click the link yourself as well as give it to others. This wasn't obvious for some people. )

As for whether or not coding problems on the phone or the whiteboard are a good idea, in my opinion they are when applied properly. There is no way I am going to hire someone for a software development position without seeing them write some code, and there is no way I am hiring someone for an embedded C position without knowing that they understand how pointers work. There are not that many reliable ways to do this in a controlled environment. We know that everyone can use google to find answers - I'm more interested in where people are starting from.

There is a balance though. I do not ask interviewees to solve brain teasers; that sort of problem only serves to increase stress in the interview. Most of the problems I ask are simple, such as printing out multiplication tables or simple string manipulation, or linked list type questions for embedded C. When doing these on a whiteboard, I am very clear up-front that trivial errors like missing punctuation or spelling do not matter. I am interested in seeing how the candidate approaches the problem logically, and it is surprisingly obvious what sort of candidate you are dealing with when seeing them attack a problem.
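A problem like the multiplication table really is that small; a plain sketch of one possible answer (this is my illustration, not the interviewer's actual rubric):

```python
def multiplication_table(n: int) -> str:
    # Pad every cell to the width of the largest product so columns align.
    width = len(str(n * n))
    rows = []
    for i in range(1, n + 1):
        rows.append(" ".join(f"{i * j:>{width}}" for j in range(1, n + 1)))
    return "\n".join(rows)

print(multiplication_table(3))
# 1 2 3
# 2 4 6
# 3 6 9
```

Watching whether a candidate reaches for nested loops, worries about alignment, or asks about the expected output format tells you far more than whether the semicolons are right.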

> There is no way I am going to hire someone for a software development position without seeing them write some code

How often do you watch your currently hired developers as they actually produce the code? How does the best of them handle you standing over their shoulder judging their typing technique?

I'm unfairly giving you a hard time because you seem to have a sensible approach to hiring, but you should definitely know that watching people code impacts the code they write. I've also found little evidence that people who are good at coding in front of someone necessarily produce a better final product than people who need some space to code.

There's a need to watch the candidate code at least once, and at least for simpler problems, simply because if you don't watch them you can never be sure they're not cheating by getting real-time help from someone else.

This gets trotted out a lot as a reason for "watching" someone code, and I don't find that to be very compelling.

1 - I don't think "cheating" is rampant. Obviously opinions will vary, but if you think that you are getting a lot of cheaters in your hiring pipeline, I would suggest the problem is in the funnel part not the evaluation part.

2 - I view getting help on a programming problem to be a good thing. I suppose in certain environments you need to make sure that your hiring candidates are not collaborating with others, but I'd be surprised if that was common. The idea that it is bad to seek outside help on work problems is some weird vestige of our education system, not an actual day to day issue with most developer jobs.

3 - It is much more important to find out if a candidate understands the code they've submitted and has thought about the trade-offs they've made (or didn't), as well as whether they can explain it to the people around them. This is true regardless of how the code was produced. We suss this out with our colleagues all the time, and at least in my professional career it's never been via watching someone work. We have tools like code review, discussion, documentation etc. to do that.

>The idea that it is bad to seek outside help on work problems is some weird vestige of our education system, not an actual day to day issue with most developer jobs.

You do need some knowledge up front and immediately accessible. However, almost no one agrees what that body of knowledge should be per position (and it's never going to be the same across positions or companies). Part of that is because you need to be able to practically communicate to someone about the things he's going to be responsible for. The other parts are actually doing the job and reducing the ramp up time.

The more accommodating positions will need less up-front knowledge (e.g. hiring any junior developer). They're okay if you don't know a few basics but you need to demonstrate something. You can't be too accommodating or what is the difference between him and someone from another field with no knowledge?

You also can't require too much knowledge up-front, or you just end up hunting for purple squirrels. Everyone (that is sane) already accepts that there is some ramp up time.

'Hitting the ground running', which is what people with a lot of up-front knowledge should be able to do, should be a luxury, not a requirement. Asking it of someone who isn't prepared is asking for a technical debt loan from the mafia loan sharks. The reason hiring managers seek people who can do this is because it means more work done in the same amount of time.

The conventional way to break into the industry is a CS degree and an internship, but that's not set in stone (contrast: medical doctors/scientists/etc). However, the industry still accommodates the self-taught crowd and other quirky origins because the body of knowledge we require to work in this industry is just so diverse and not agreed upon at all.

I give computer (coding) interviews instead of whiteboard interviews, and the candidates are allowed to freely use the compiler. This should be more natural than whiteboard interviews and closer to actual programming work. The candidate may tend to be less communicative than in a whiteboard interview, but I can still observe what they are typing and ask them questions.

I would really appreciate it if you could describe some of your experiences in this in more detail. How many approximately have you given? How is the success rate? How many do you turn away, and why? Have you been able to match up key successes and disappointments with particular signals in the interview, and what were they?

There wasn't any A/B testing since when I think about whiteboard vs computer, I already have the notion that if you can write code on a whiteboard, you can write it more easily and naturally on a computer. I've given 5-10 such interviews.

Went through two of these recently. Despite programming for more than a decade since university, I noticed the unusual stress level and awkward situation (coding in google docs or a plain editor with no debugging ability) while the interviewer is umming and aahing (way to be judged) as you're thinking your way through a problem with stops and starts. I managed to solve all the problems given, but failed to get past the phone interview, without any feedback as to why. Thinking retrospectively, I'm almost never in that situation, and pair programming with real tools and problems is not approximated at all by these types of phone or hangout screens. Neither are the time-based coding tests like codility. That being said, the big software companies spend lots of money and time on hiring and swear by this process, so we're in a "like it or leave it" situation.

I think the lack of highlighting and indentation support in some of those tools is a pain and adds unnecessary mental overhead.

Whether a candidate can reason about the correctness of their code without using a debugger is a useful data point though. Both in a more abstract way and by manually tracing at least part of the program instead of stepping through it with a debugger. If you struggle to manually trace code in your favorite language, that raises some questions in my mind.

I used to share the same sentiment about whiteboard interviews and the CS "fundamentals" (read: brainteasers) interviews. Two years ago, I failed spectacularly in an interview for a Pinterest internship -- I couldn't answer a single technical question.

Fast forward to this year, and I've breezed through every interview with Google and other companies. I spent months preparing for the interviews, but the preparation was one of the most rewarding things I could have done as a developer at my stage. I feel far more competent as a developer with a solid grasp of algorithms and data structures. I enjoyed the preparation so much that I started reading white papers on the more cutting-edge CS theory :)

I have hiring responsibility for a startup. Honest question... Over the period of time when you were preparing for interviews, how much meaningful code did you ship? More? Less? Algorithms and data structures are dandy, but I would have a hard time hiring somebody with this knowledge but no history of shipping production code.

Well, I'm still an undergraduate. I was pretty much coding 24/7, but it was hackerrank, leetcode, codechef problems. My side projects slowed down tremendously during this time; however, once I started them back up, I was working at a much faster pace, producing higher quality code, and using design patterns/algorithms/data structures that scaled. For instance, there was a problem on my senior project that had me stumped for weeks before I stopped the project to focus on studying for interviews. I immediately knew how to solve the problem once I resumed the project.

On top of my personal projects, I've had great internships, which treated me like a full-time engineer. I shipped production code regularly during those.

Discussing almost anything highly technical over the phone is usually a disaster. When's the last time you solved a production problem without reference materials over the phone? Sounds hard right? Whiteboards are a crap shoot - we can all come up with lots of problems that are difficult to whip up without any reference materials but what's the point? Neither method reflects how anyone actually does their job so get rid of it.

Every skill takes time to develop yet most people think they are good at giving interviews with zero experience, research, experimentation or methodology. Allow a typical junior employee to start interviewing candidates and it will go right to their head as they are given their first taste of power over others in a professional setting. It takes time to learn how to interview properly, figure out the specific attributes you want to screen for a particular role, remove your own personal biases and plain get over the fact that you have the power over someone else's career for probably no good reason.

My philosophy on interviewing is that as an interviewer you want to avoid screening for anything that is not reflective of the actual job duties and to also avoid anything that the candidate is expecting to be asked. If you ask typical programming questions then you are likely screening for candidates that have the most time to prepare for interviews which screens out the lazy but also screens out the no-nonsense doesn't like to waste time types. Ask questions that get to the core of what interests them and figure out if they will be motivated enough to learn what they need to get the job done. Then get back to them quickly!

I don't do code interviews over the phone, but I always do a phone interview. It's just a smell test to see if the person has obviously over-stated their experience, and make sure they have some exposure to the technical things we care about.

Once they pass that gate, we do an interview with our dev team. That way everyone gets a chance to talk to/hear the candidate and get a feel for whether they want to work with them. As part of that interview we do a group coding exercise, usually using Pex4Fun (http://pex4fun.com/default.aspx?language=CSharp&sample=Chall...). Basically, we give the interviewee time to start working on it, and if they get stuck, we start whiteboarding the problem as a group and solve it together.

Again, we want to see how the person works and whether we think they will fit well on the team.

After the interview is over, we take a vote and if the result is not a unanimous yes, then we pass. It's tough, but I've had about 80% of interviewees tell me that it's the most fun they've ever had in an interview. So far, we've been very happy with the results.

> I've had about 80% of interviewees tell me that it's the most fun they've ever had in an interview.

Did you ask them this during/immediately following the interview? That may... skew your results.

Then again, if they're offering that insight without being prompted, that's a very good sign. I think the execution of this strategy, and the general tone of the people involved is very important.

Yeah, it has always been unprompted. We all try to treat it as a fun exercise, occasionally with beer, although I've never had an interviewee accept the offer of any drink other than water.

I also go out of my way to make sure I communicate with the candidate. We take the voice vote immediately after the interview, and I call the candidate within 24 hours. If there are any no votes, I make sure to tell the candidate why, give them pointers on the specific areas they would need to improve to work here, and point them at resources they can use to improve.

We're in a relatively small market, and it pays to be nice to everyone. Anyone we interview I am pretty much guaranteed to run into at some sort of dev event. And just because someone wasn't a good fit today, it doesn't mean they won't be the perfect candidate next year or two years from now.

Best challenge I had at an interview was drawing UML diagrams.

First I talked to the CEO, who liked me and said: "Now you're going to be asked a few technical questions by our lead developer." The dev came in, we both hated on Oracle a bit, and he told me to draw a UML diagram of a system of classes he would describe. When he told me I didn't need to go into that much detail, the CEO looked impressed and I got the job.

So strange...

Being a developer myself, I agree with most of this, though I learned it in a different way. When we started RemoteInterview.io, we started as a real-time coding interview tool [0]. Even though it's really helpful for the interviewer to watch the candidate code in real time, we soon realized that a number of candidates don't like to code under stress.

We then developed RemoteInterview.io Tests [1]. It's a screening platform based on coding tests, where candidates solve the challenge offline, whenever they want. Some candidates still don't like having time limits, but I personally think it is important to have a deadline for every task because, in the real world, no project has infinite time.

[0] https://www.remoteinterview.io/features-interview [1] http://blog.remoteinterview.io/post/109864123971/screen-cand...

What sort of time limits are you talking about? If it's on the order of "we'd like to have this back in the next week or so, unless you have something else going on?", I'll buy it.

If it is "you need to complete this problem within 2 hours of the time we, on our server, observe an arbitrary event", that would be a complete non-starter for me. A) those deadlines quite simply do not exist in the real world, and B) it adds a whole level of stress for some candidates, much like the stage-fright stress, that is unnecessary.

I think it completely depends on the task given. If it's a quick question or some MCQs, I think giving X hours is fine.

If it's something like "design this project/prototype" then a week should work.

Adding anecdote: I know of a team that uses test-driven hiring, and they have found it to be a good indicator for subsequent performance.

For javascript, you can probably set up a qunit jsfiddle (e.g. http://jsfiddle.net/IrisClasson/RMh78/)

I think phone and whiteboard interviews are great for selecting the very smartest of the developer population, which is why companies like Google use it. Although they hardly reflect production environments, if someone is able to ace brain teasers in 45 min interviews, they must surely be smart enough to figure out pretty much anything technical, a priori.

However I absolutely agree that phone/whiteboard interviews are not sufficient for determining a candidate's fit, because it doesn't say much about the person's work ethic, time management, and ability to manage large projects - which is a thing that many "born gifted" smart people struggle with, in my experience.

In my experience, I've seen some incredibly smart people (the kind who ace those brain teasers) make some incredibly bad technical gaffes.

Like an architect who chose synchronous HTTP as a protocol for a service that later required the ability to cancel long running operations in progress (he unfortunately didn't realize why hitting the browser stop button wasn't the same as cancelling the process on the server).

Or a senior dev who thought that a JSP scriptlet getting the current time would show the client's time, instead of the server's time.

There is no guarantee that people who ace brain teasers will necessarily be able to figure out technical issues, although they are often excellent at sounding like they have figured out technical issues when they leave the details for the people who get the real work done.

Those two examples sound like simple knowledge mistakes, not critical thinking failures. There is a difference between someone who is not too familiar with the HTTP protocol and someone who just doesn't have the mathematical mind for translating real world tasks into precise steps that a computer can understand.

Right. Google et al have never been concerned about false negatives when hiring. They have more than enough applicants to worry about whether a few good people get turned away.

I personally dislike whiteboard interviews as well. Some time ago they asked me in an interview to code on the whiteboard in perfect Java, no pseudocode allowed. I'm not sure why you should want that... I've also done some challenges for different companies. Most of the time they are something like "create X in N amount of time". Those I like; they make so much more sense to me, as you can see the result and talk about it, especially when you can do it from your own home on your own computer.

  no pseudocode allowed. I'm not sure why you 
  should want that....
Some interview candidates, when invited to write pseudocode, will write pseudocode imprecise enough that you can't tell if they will make good programmers.

For example, if a candidate is asked to write a bubble sort and they write: "Take the input array, go through every element comparing it to the one after it and, if they're in the wrong order, swap them. Repeat until you get through without any swaps."

Did I see the candidate think about off-by-one errors, like missing the last item or exceeding the array's bounds? Did the candidate think about common edge cases, like zero-item and one-item arrays? Did the candidate think about whether the function would work if the array contained two identical entries? Did the candidate check their loop will eventually terminate? Does the candidate know how to write a working computer program, rather than describing an algorithm in vague terms?

With the pseudocode answer I can't tell if the candidate did those things, because the solution doesn't show me what I need to see. And yet, if you've asked them to use pseudocode, they've done exactly what you asked of them.

That's why my preference is to say "use a real language and tell me the tests you'd perform on it, but I'm not fussed about semicolons and exact names of library functions." - it tends to get you a solution that lets you tell if the candidate knows how to program.
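To make the contrast concrete, here is what a real-language bubble sort answering those questions might look like; JavaScript is chosen arbitrarily, and this is one sketch of an acceptable answer, not the only one:

```javascript
// Bubble sort: repeatedly swap adjacent out-of-order pairs until a full
// pass makes no swaps. Sorts the array in place and returns it.
function bubbleSort(arr) {
  let swapped = true;
  while (swapped) {      // terminates: every swap strictly reduces inversions
    swapped = false;
    for (let i = 0; i < arr.length - 1; i++) {  // i + 1 stays within bounds
      if (arr[i] > arr[i + 1]) {                // ">" leaves equal entries alone
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swapped = true;
      }
    }
  }
  return arr;
}

// The edge cases the parent asks about:
bubbleSort([]);         // zero-item array: inner loop never runs
bubbleSort([1]);        // one-item array: nothing to compare
bubbleSort([2, 2, 1]);  // duplicates: sorts to [1, 2, 2] without looping forever
bubbleSort([3, 1, 2]);  // general case: sorts to [1, 2, 3]
```

Unlike the prose version, this makes the loop bounds, the termination argument, and the handling of empty, single-item, and duplicate-entry arrays visible on the board.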

The best interview I had was with my current employer. It was a half hour of discussion, some white boarding (just FizzBuzz), then an hour of "pair programming". What was great is that the three of us sat together, I wrote the code, but we asked each other questions and talked through it. It felt more like a real world scenario than a test. It was relaxing and enjoyable.
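For readers unfamiliar with it, FizzBuzz is deliberately trivial: print 1 to N, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. One possible JavaScript solution (a generic sketch, not the one from that interview):

```javascript
// FizzBuzz: returns the lines for 1..n as an array of strings.
function fizzBuzz(n) {
  const out = [];
  for (let i = 1; i <= n; i++) {
    let line = "";
    if (i % 3 === 0) line += "Fizz";   // multiples of both 3 and 5
    if (i % 5 === 0) line += "Buzz";   // naturally concatenate to "FizzBuzz"
    out.push(line || String(i));       // fall back to the number itself
  }
  return out;
}

console.log(fizzBuzz(15).join("\n"));
```

Its value as a warm-up is exactly that it is easy: it filters for basic fluency without adding the stress of a brain teaser.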

I'm curious if anyone else has had the experience of a pair programming exercise for their interview?

I have done them, given them, and compared them to other interview techniques. I personally quite like them, but I like brain teasers, hostile PhD-style defenses, and whiteboard "jam" sessions as well.

My experience with them as a tool in the hiring pipeline? Less compelling. Any work-sample evaluation is better than techniques that don't involve work samples, and pairing seems to pain fewer developers than on-paper coding does. But unless you actively pair a lot during your day-to-day work, pair programming is still fundamentally different from the kind of work you are actually looking for, and there are lots of developers who work poorly in paired environments.

My current employer asks you to do an hour-long code test and submit it to them. The next stage is a phone interview to discuss what you did for the code test, along with a few questions around it.

I feel this takes some of the pressure away while still giving insight into how someone works technically.

This is largely what I always recommend.

- Conversation about the company/position, not even a screen, usually more of a sales job, but done by someone who can answer actual real on the ground questions about the position and the hiring process.

- Standard take home programming problem.

- Code review session.

That is all. My own personal bias has come down on the side that the code review session isn't actually important either, but I get looked at like I have three heads when I suggest zero interview-style settings in a hiring pipeline.

Playing devil's advocate here. If there are two kinds of great programmers, those who can handle programming in a stressful situation and those who cannot, then given a choice I would want to hire the one who can handle the stress.

Why? You have a duty to protect your employees from harm in the workplace and this includes protecting them from stress.

This would hold if programming was done using a whiteboard. Luckily it is not.

> If there are two kinds of great programmers

There aren't. If there were, they wouldn't be split on the axis of "programming in a stressful situation".

Even in development environments where front-line stress can be a part of the job, it's not full-time, and the good developers work on 1) reducing the amount of time the organization spends in those modes and 2) removing dependence on human performance from the triage out of those modes. The reason is that performance under stress is not actually repeatable. Everyone breaks eventually, and pushing the "eventually" back is not a cost-effective way to mitigate the risks.
