Hacker News

Hi, author here, thanks for your feedback. I think you might be right, thinking about rewriting that paragraph a bit.

For me it's been really obvious in interviews whether someone knows what they're doing or not. But probably I've just been lucky in this so far.

Looking forward to reading your post about it. Haven't read it yet, but if it's not in the post, could you give a few examples of what people said, that turned out to be bullshit? And the other way around, too? Not questioning you, just genuinely curious.

I think your key point is less that testing is easy than that you should let the candidate prove their worth in their own way. That sounds key to me. Your original quote is… yeah, “easy” happens, but it’s not always elementary.

Testing a candidate is actually very candidate-dependent, I found out; however, it is a lot less hard when you let the candidate pick their own approach (and you supplement it subtly). I’ve found it easier than most interviewers do, because they tend to run scripted, one-size-fits-all tests.

I always ask candidates to talk about what they’ve done (I’m in data science, so university projects are typically relevant to day-to-day work; standard CS might not be so lucky). If there are omissions or ellipses, I prod a bit.

The best candidates can easily juggle between the business objective, the method, architectural and technical choices, and how they scaled a release from prototype to full-fledged platform through marginal improvements. It typically takes me one question, two follow-ups, and 10 minutes to know they are strong. From there, I can shelter them from more technical tests, or at least apologise for the process; ask if they are comfortable with white-boarding, etc.

The worst candidates are not that different: they can sweet-talk their way through, but might not be able to move up or down the abstraction ladder. You can probe whether they are comfortable receiving feedback (based on how they react when you re-frame the question) and what their learning style is (based on how excited they get when I ask for details about a specific implementation).

If you are uncertain, which is the case for most candidates, I like to put them in front of what I have been working on recently. It makes Legal a little nervous (it really shouldn’t), but it puts them in a rare but effective situation: I might not know as much as they do. I’ve probably had more time to explore, but I can’t dismiss any idea they suggest: either I’ve had the same one, or it’s new to me and worth trying. I’m not the most structured worker, so I often have a bug, something that could be improved but isn’t a priority, some high-level conceptual concern, etc. I make sure they go for either something that excites them or one of the problems I identified earlier.

I rarely need more than an hour to have a clear idea, including data preparation.

Hey, thanks for your detailed response. I agree with a lot of what you said, really helped me put my thoughts together.

I'm really not comfortable talking about specific patterns of questions and answers that turned out not to be predictive, because I feel like they'd be traceable (or at least appear, to the people involved, traceable) to particular candidates.

Happy to answer any other questions you might have about our erstwhile process at Matasano (which is more or less our current process at Latacora).

Right, totally understand.

Thanks for the offer! I'll read your post some time tomorrow and might send you a PM, if that's ok.
