I'm not opposed to interviews. I just think you need to factor out the programming from them.
I'm currently working with a roomful of cognitive scientists on ways to unpick some of the bias from recruitment. Be good to have a chat at some point if you're interested.
tptacek's process is also going to work a lot better for candidates who aren't comfortable in high-stress whiteboard coding situations. The OP's process is designed around the assumption that the candidate can be made comfortable; tptacek's process is designed around the assumption that some candidates can't be made comfortable, but you shouldn't reject them (and in fact you may desperately want to hire them if you could find out how good they are), so you need to find another way.
Although the OP's article states that he is prepared to offer a computer for anyone who prefers it, how many candidates are confident enough to tell an interviewer that they don't want to do whiteboard coding? My experience is that candidates tend to be incredibly submissive during interviews so they're not going to speak up about something like that.
Based on the OP's article, I didn't get the sense that they have a rigorous rubric or that they meticulously record objective data points for every candidate to establish a reliable dataset over time.
All that said, I do feel that the OP's example question is pretty good (for that domain of expertise, at least) and better than what I typically see in these sorts of articles.
The number of candidates who have told me that they want a computer rather than a whiteboard is small but not zero.
I do give the same few problems over and over again, and I do take extensive notes, but I am not keeping track of a specific set of metrics that I track across candidates. I have a pretty good sense of where the "middle of the pack" candidate is. The process could be more scientific, yes.
I've gotten better at it with age, but the improvement in my interviewing skill is completely unrelated to my improvement in programming skill, though I believe both are still improving into my early 40s.
Speaking on behalf of younger me, and probably others like younger me:
Doing the work on a computer is scarcely better than coding on a whiteboard, and in some ways it is worse. It isn't my computer. It is set up all different, it would take me hours to set it up like mine, the keyboard is all weird, what the fuck is the ctrl key doing over there, fucking Lenovos, all of these are trivial differences in isolation but in aggregate they make the work seem just as foreign as doing it on a whiteboard especially because I already feel like I am "under the gun".
Also (and by far most importantly and relevant even if I had brought my own computer in): I'm being "watched". Even if you aren't actively watching me (or rather, younger me), I am effectively being watched if I'm sitting in some room at your company banging out code. I'm all up in my head about how this is taking too long, how long ago did I start, how long are they expecting this to take, etc, I never get anywhere even close to the "flow" that I commonly get into when I am doing real work, the whole experience is just completely unlike the experience I have when doing actual work and feels like torture.
And then lastly (drifting somewhat out of scope of both the original blog post and Thomas' blog post), another issue I've always had with these types of interviews is that I'm somewhat of a subconscious worker, in the sense that when I am presented with a thorny technical problem, the way I often best deal with it is by not really thinking about it (consciously), but rather going for a walk, taking a nap or just otherwise zoning out and letting the solution sort of materialize out of my subconscious. This is obviously completely incompatible with any kind of traditional programming interview, where the interviewer is almost obsessively looking to "see the work" in how you're thinking about the problem, and I can't just ask them to let me go take a walk for 15 minutes and get back to them with the solution they're looking for (generally - I'm sure some will respond saying they'd be okay with this, but practically speaking we all know it wouldn't fly in the vast majority of interviews). An extremely common occurrence for me, back when I did not interview very well, was that I'd be driving back from the interview and would suddenly have completely fantastic answers for all of the questions that didn't go very well on-site: a technical interview version of l'esprit de l'escalier.
Having said all of this, I'm not one to suggest anyone who interviews in any specific way is objectively wrong (for their own needs). If whatever system you use generates enough hires to get the work done, then so be it; but I can really relate to a lot of the problems highlighted in Thomas' article, so at the very least I hope people interviewing with more traditional methods than he has been working towards realize they are certainly filtering out some really great hires (which is maybe okay for them, but like Thomas I believe ultimately bad for the industry if we continue with what we have now as the standard).
I'd be interested in hearing how you improved your performance in interviews, if you wouldn't mind sharing?
Perhaps most importantly (and this is something often mentioned by people who give interview advice, but easy to ignore if you're a "meritocracy"-minded techie), I've gotten really good at just sort of taking control of the interview and leading it where I want it to go (while still being sure to display how I would be valuable to the company), rather than being a passive question-answer-er. But getting good at this is also pretty much just down to experience.
Worse, it's almost never well implemented. It seems almost designed to crumble under the smallest misapplication - a badly worded question, a dry whiteboard marker, or an unsociable interviewer can destroy a candidate. None of this is conducive to good hiring.
It would seriously change my worldview if it turned out that Google, Apple, Facebook, Microsoft, Dropbox, Evernote, Amazon, Airbnb, Uber, Square, PayPal and countless other firms that hire their engineers using some variation of whiteboard coding were proven wrong in their practices by one person.
Horrible, and about the direct opposite of a question that will put someone at ease. Questions that put someone at ease are ones with nothing that can be construed as something you are assessing their answers to, like asking them about their visit to the area or whether they enjoyed lunch. Break the ice, tell them about what you work on, see if they need a break, and then start digging into stuff. I also like to tell the person up-front what we'll be covering in the interview so there are no surprises. Walking in the door and asking them a question they could interpret as asking them to reflect on their interactions with the previous interviewers makes me cringe.
Instead, I found that people became surprisingly flustered - including one who said that "like a lot of things on there, it sounds cooler than it is" and went on to question his own commitment to the field. I asked about the project because it sounded awesome, and, when I pressed him to please tell me about his role, it was awesome.
I think that many people are not prepared to discuss every item on their resume. I don't think that's great, but I don't want to base a large amount of my decision on that fact. This question threw them off for the rest of the interview, so I've stopped asking it.
Tell them to pick a project or two and prepare to talk about it ahead of the interview.
Actually, many of the issues I have with the current "standard" interviewing process come down to the fact that it's a "school test" environment instead of a "prepare for a specific task" environment. The latter is what real work is actually about.
Because learning 64-bit architecture is impossible for somebody who has proven that they can learn a field, perform research, and successfully write and defend a dissertation? '32 < 64' is beyond them?
I've had candidates with PhDs in computer science who thought that pointers on 64-bit operating systems were two bytes wide. Do they believe that there are only about 65,000 possible addresses before the allocator fails? Not likely, because that's ridiculous. So how could they hold that belief? Because they understand so little about pointers that they don't understand the implications of their false beliefs.
I need someone who is going to command the salary that a PhD-level candidate will demand to be able to hit the ground running.
I guess it is lucky for us that when they interviewed they didn't have an interview question that involved hash tables, because they would have been bounced for not hitting the ground running.
I bet I know a lot about a field that you don't. I wouldn't bet that you couldn't walk in and pretty quickly become useful, given that you've learned other things, and at one time I didn't know this field either yet somehow, somehow, learned it. I think you would do just fine; and I think somebody that can get a PhD in CS can handle the math that 2^16 < 2^64, even if they haven't thought about the implications re allocations.
For example: I'm a midrange DevOps/Sysadmin/Whatever-it-is-these-days. I will likely never be a senior. The (real) seniors are the folks who go home and mess with hardware and pull new toys apart to see what chips they were made with and so forth. I like to go home and play games. I do my job competently, but I don't have the underlying knowledge that those guys do. If I was up against one of those in a job application, then you'd be crazy to hire me at the same salary as a senior - there'd have to be a pretty big black mark against the senior, like being completely unpersonable or abusive or somesuch. Or the role used some of the other things I'm decent at. Perhaps a colloquial way to put it is that I live these things, but the seniors live and breathe them :)
But it tells you a lot about the number of "just 5 min" things they don't know.
I was once asked what load average means (and I didn't know), because my background wasn't in sysadmin work.
Whether you know one particular fact or not is immaterial, but it's a good proxy.
Your hiring decision should not be based on only that, of course, but let's say it's a pixel in a picture.
Personal attacks are not allowed on Hacker News.
Not sure that I agree on the choice of this specific "red flag"; on the other hand, having a PhD in CS is a red-ish flag to me per se.
I would be interested in hearing your reasons for that.
But in my experience, people with advanced "pure CS" degrees seem to be focused on the process, not the result. I mean, they went into CS because they like to tinker with type systems, elegant algebraic concepts, nice abstract problems -- and not just as a hobby; they like it so much that they committed several years of their lives to it. Not that this is a bad area of research, but generally when you look for a software engineer you look for someone not only smart but also pragmatic, who can get stuff done in a simple and efficient way, and this is kind of the opposite.
Of all PhDs, in my experience the people with degrees in areas that use programming as a tool rather than a goal in itself, e.g. physics/EE, make the best software engineers.
If I ask somebody ten factual questions and they know the answers to five of them, how should I interpret that? Sure, it would be easy for me to just fill in those particular gaps in their knowledge. But assuming my questions were well-chosen, it's reasonable for me to expect that they'll continue to hit similar gaps in the future, about 50% of the time.
That's not to say that a good interview consists of drilling someone on facts; more often, they come up implicitly in the context of solving a problem. And as an interviewer, I'll look much more favorably on a candidate who recognizes what they don't know, because I give them the benefit of the doubt and assume that in a real-world situation, they would be able to do the necessary research. But even so, if there are basic things they don't understand, it's a red flag because it tells me they're encountering these concepts for the first time.
In most cases it would be a silly waste of time, but it would also have saved me from one or two truly horrific jobs with skill-free managers.
summary: ask the open-ended question ‘what was the most recent interesting project in which you were heavily involved?’ and then focus most of your technical questions on the system they then describe. If a programmer can’t talk thoroughly about a topic which they chose themselves and are purportedly very interested in, they probably aren’t right for the job (at whatever company I’m working for, anyway!)
The benefits are that you won't be randomly hitting a gap in their knowledge, you get a better idea of what they feel are their greatest strengths, and because you can raise your expectations about the quality of their answers, you can probe very deeply and find out how good they are at communicating, and whether they are actually blagging.
The problems you can encounter are: they might end up running the interview if you aren't careful; you must be very engaged and perhaps talk about things you aren't that knowledgeable about yourself; and it is harder to compare one candidate with another (which is VERY important at large companies where you must justify your decisions to HR).
A lot of developers I know, when in interviews, say, "Well, the team did this and we did that." You always want to use singular personal pronouns: "I did this," or "The team expected me to do such and such."
This is something a lot of potential hiring managers like to hear. If they have to ask what you did, many believe your role or your work may not have been that important.
Is that what you want, though?
Q: On a scale from 1 to 10 (1 being low) how do you rate yourself on "Programming Language Du Jour"?
I also like to ask them to choose a past project to talk about, because then it's easy to see what they are passionate about. Or to easily tell if they are choosing to talk about something because they think you'll find it cool.
I pointed out that Bjarne Stroustrup once rated himself 7 out of 10. I wasn't as good as him, not even nearly as good, which left me the field of 5 and below.
I can guarantee that some ham-fisted chancer who'd flicked through the "Teach Yourself C++" chapter summaries put himself down as an 8, though.
If the interviewer can distinguish between these two categories, then what value does the question add? If the interviewer cannot and is relying on the answer to this question (even partially), then it's an example of relying on a false signal.
Whenever I'm asked this question, I always respond with the following questions:
1) What does a "10" mean?
2) What does a "1" mean?
3) What's the difference between 5, 6 and 7?
4) (Optional, this could easily come across as being snarky) What's the range of levels in your team/org/dept for this metric?
I like to interview people across a range of subjects (as I have not had the chance to hire a pure specialist). So knowing what they feel confident in answering will help me tailor the questions to what their perceived strengths are.
So I think it gives me a feel for your confidence level in a language. If you flipped through a "Teach Yourself C++ in 24 Hours" book once and then rate yourself a 10 on a scale from one to 10, then not answering basic C++ questions will raise red flags.
If they answer a "1" on something (and I reassure them that answering 1 is perfectly OK), then I can feel a lot more comfortable giving them more hints, or a lot more supporting detail, in a question they have indicated they are not very strong in.
It's very easy to filter out these guys, though. I had one such candidate come to my interview: on a scale of 1-10 (10 = best), the guy put 8s, 9s and 10s for several programming languages (C++, C#, Java, a few others); and that guy was fresh out of college.
Then during the interview it turned out the guy knew exactly nothing; even when I tried to talk with him about the most basic concepts of OOP (like what a class or an object is), he couldn't provide any meaningful answer to lead the conversation further.
I have no idea how he hoped to pass an interview, perhaps he thought there wouldn't be anything beyond the online form to fill.
But the self-assessment on a scale of 1-10 just seems like a quick way out for the interviewer and can only serve to find fault with the candidate and to me, doesn't seem like it provides much information.
For example, what does 10 mean? Does that mean that they have mastered the language (and all of its minute details, implementation, etc.) and would find it impossible to learn more? Knowing that, the only possible individuals who would think about rating themselves a 10 are:
1) The language designers themselves. (Possibly not even - see the comment below about Bjarne Stroustrup)
2) Those that think they know the language so well that they would rate themselves a 10.
You're probably not interviewing people from group (1). (Or if you were, you wouldn't ask them this question as it would reflect more on yourself than the candidate) So, if someone rates themselves a 10, you can probably assume they are in group (2) or have otherwise misunderstood the question. I don't know for sure, but I feel there are better ways to rate a candidate other than asking them a question, that, if they answer "10" on, means they aren't suitable and are likely overrating themselves.
So where does that leave us? Most people who are comfortable with the language are likely to rate themselves a 6, 7 or 8, being conservative and not wanting to expose themselves to a tricky follow-up question should they self-rate too highly. Again - this seems to be leading toward gotchas, and seems adversarial.
If you're going to ask for the self-rating, at least limit it to something less fine-grained: Beginner, Novice, Expert/experienced/whatever, so you have an idea for the candidate's comfort level with a particular language. A 1-10 rating is way too granular (what's the difference between 3 and 4?).
But really, I think asking the candidate to rate themselves is really just a spin on the "What's your greatest weakness?" question - it's asking the interviewee to do the interviewer's job.
When they do give themselves a midrange answer N, the appropriate follow-up is "What do you know that a (N-1) doesn't?" and/or "What does a (N+1) know that you don't?" Either of these will let you calibrate your internal rating system against their self-assessment. It will also let the candidate demonstrate their understanding of the topic in a non-adversarial way. The follow-up questions might convince them that they should change their initial ranking as well, which is also a useful calibration of their knowledge and meta-knowledge.
On the off chance they say 10, well, they better know just about everything.
> what is something that a seven would have difficulty with?"
My honest answer is "this is an ill-posed question and there's no way of stating for certain that feature X or library Y is something every FooLang developer who's not as good/informed as me doesn't know."
But of course that won't get me the job. So I just pick based upon my perception of your personality/background from the 5-8 range. When asked for justification, I choose the most esoteric-but-not-inconsequential thing I know and bullshit for a minute or two while being personable. Whelp, that was a useful signal as long as your goal was to find a professional bullshitter...
Concretely, I've created stand-alone implementations and hacked on compilers for a couple of languages, but would not have a non-BS answer to this question for those languages. Mostly because I'm totally unfamiliar with the most recent set of popular libraries in that language's ecosystem, and also there are certain language features that I know how to implement but don't know the canonical usage patterns/idioms. Your standard FooLang dev could probably hack together certain programs in their sleep that would take a week for me to write. But then for other programs -- or especially if you needed a static analysis or optimization -- I could whip it up in a couple of days where the standard FooLang dev would probably take weeks just getting up to speed on the language spec and some particularly thorny parts of the implementation. Hm. Apples and Oranges. And what if you want a static analysis/optimization that exploits idiomatic use of FooLang's whizz-bang BarLib? Hm. Not sure who has the upper hand.
I know you'll say "but that's the point -- I want to hear exactly that reasoning so I know what you know/how you assess language familiarity." But this is all meta -- performance on this question crucially depends upon 1) ability to bullshit (i.e., come up with plausible-sounding hypotheses and give plausible reasons for them, quickly and on-the-spot -- this is not a particularly crucial skill in software development); and 2) ability to intuit the purpose of the question.
In conclusion, I'm not really sure what the point of this exercise is, except to select for bullshitters or people who read your HN posts.
> Make them calibrate the scale for you.
You're not just asking for a calibration, you're asking for a rubric and a scale. That makes a lot of sense as an interview task if you're hiring a manager who will be doing lots of interviewing or a business processes/HR processes expert/researcher. I'm not sure why this is a relevant skill for a software engineer, though.
I feel like I haven't experienced this enough in interviews. Mostly I assumed that if I'm applying then interviewer already thinks that I have a good view of the company (or otherwise I wouldn't want to work there right?), but it would be nice to get a better outlook of the company. Maybe I'm just asking the wrong questions.
This is the essential cowardice of employers. Interviews are insufficient contexts to judge if someone can help move your business forward. Screen the incompetent, but just hire people and don't be afraid to let them move on if they don't work out after 2/4/6 months.
- paid work for a week
- writing a patch for a project or making a simple tool (a proxy for the job), then inviting them to talk about it.
This sounds like a pretty typical software engineer interview process. The fact that these kinds of interviews suck, don't produce good hires, and yield a ridiculous number of false negatives has been beaten to death.
Unless you're Google (or Facebook), you are not getting thousands of applications a day. You don't need to emulate their hiring process. They do it for a reason (practicality) whereas it seems others do it out of pure snobbery. Whiteboard coding is worthless (barring simple fizzbuzz tests which can be done over the phone anyway). I mean how pompous does this shit sound:
"All candidates eventually figure out that they will need to add an auxiliary data structure that maintains a map from a 32 bit integer to a 64 bit pointer, because of the aforementioned ten pound sack. Do they know that there are off-the-shelf map classes? If not, do they have confidence that they could write one?"
Eric Lippert is originally from Microsoft though, so I guess that shouldn't surprise me. After all, they're the ones that started asking "Why is a manhole cover round?" in software engineering interviews.
Much respect goes to tptacek, who outlines a much better and more enlightened alternative.
For some, this is not good enough proof though. A local star running an obscure company is certainly more enlightened -- because, you know, he wrote a blog post on hiring.
There's some evidence that the iPhone in your pocket, for example, may have been built via forced manual/child labor in Taiwan/Vietnam/China etc. The fact that the iPhone is a revolutionary (and generally game-changing) product doesn't somehow add to the virtuousness of how it was built.
It has been beaten to death, but it is nowhere near to being established as "fact".
1) If they provide a github/bitbucket/sourceforge link, look at what's there. Skip step 2.
2) Literally anything else. It won't work anyway.
"We need a web-dev, you know node.js?"
"okay, let's do this"