Typically, I ask questions that give me an idea about the candidate's depth of experience and awareness. For example, "I see that you've spent X years using Subversion for source control. What are your opinions on trunk-first development vs. branch-first development?" I ask questions that speak to practical experience and design ability, without getting into too much depth. For example, "Pretend that you're using an OO language to build an application for <insert purpose here>. Just off the top of your head, what are some classes that you would expect to see in your class diagram?". Etc.
I find that this level of questioning is a much better screening tool than trick questions about programming language quirks or the minutiae of frameworks, or cliche puzzles about how many golf balls fit on an airplane. However, I groan and roll my eyes when I hear people challenge the need for technical interviews at all. Yes, they are necessary.
Having performed a thousand interviews by now, I am awestruck by how poor the software development talent pool is. I am aware that the Bay Area is overrepresented in HN's readership, and that crowd tends to take for granted the talent level found in the technical equivalent of Mecca. However, I assure you that the rest of the planet is dominated by sleepy line-of-business developers... who have all the passion beaten out of them in the first 5-10 years, and spend the rest of their career just phoning it in and not growing.
I sometimes ask candidates what the letters "MVC" stand for. The successful response rate is around 50/50. I ask candidates to briefly explain the advantages of the Model View Controller pattern, and only 10-20% can field the question. We bring in Java and C# candidates who have been working with their respective language for 10 or 15 years, and they get COMPLETELY EXPOSED during the face to face round when asked a series of basic certification exam style questions. Nothing tricky, just core fundamentals.
People who post here are not the norm. The "norm" is atrocious. So yes, unfortunately we all must endure technical interviews... to filter out people who have enough confidence or personality to excel in the other interview segments, yet are utterly useless.
Let's put it this way: the more experience I've had with interviewing, the more selective I've been in my own job searching, and the more aggressive I've been in my salary negotiations. If you are really good, and are located outside of San Francisco, then you are worth your weight in gold and should value yourself accordingly. You wield tremendous leverage once you make a strong showing in a technical interview.
Actually, this demonstrates a problem right here. It's not clear to me what you mean by these terms. A quick Google search ("trunk-first vs branch-first svn") suggests that you're not using common terminology.
After thinking about it for a minute or two, what I _think_ you're asking is "should all development happen on a common branch, or should developers create separate branches for individual features/fixes, merging back into the mainline when finished". And, indeed, I'd be happy to have a conversation with you about this.
But if it takes me a couple of minutes to figure this out while just sitting here at home, in the pressure of an interview, I'm probably going to fumble, or say "I don't understand what you mean", which will make me wonder if I've blown the entire interview. This despite the fact that I've been programming for 20 years and have used a number of version control systems (CVS, ClearCase, P4, Subversion, and most recently Git).
Have you worked for a company that strives to do as much development as possible in 'trunk', creating a release branch near deployment time for production bugfixes? Have you worked for a company that preferred to branch at the outset of new development, merging back to 'trunk' periodically? What did you find to be the strengths and weaknesses of each approach?
Their answer lets me read between the lines and glean much more information about their work history. Have they worked in settings where multiple work streams were in development simultaneously? Do they have substantial experience in collaborating without trampling on shared resources? If so, then they usually mention the different pain points in merging. I don't consider there to be a "right" or "wrong" answer to this question, and not having a clever answer certainly doesn't disqualify someone from being a strong programmer. But it does help to level-set, and identify candidates who think like team leads or might be well suited for responsibility beyond raw coding.
However, I don't want too much parenthetical text injected into a long sentence. So you get "trunk-first vs. branch-first". :)
I feel like dev interviews are full of questions like this, things that require some expertise and experience to answer, but do virtually nothing to predict on-the-job performance.
When you ask an interview question, you are pricing candidates. That's obvious when you think about it: you're screening, and so your questions alter the supply of candidates that will hit the bottom of your screening processes. Fewer selectees -> poorer employer BATNA -> higher prices.
Do you really want to price candidates based on how they use version control tools? How much are you willing to pay extra for people who have a lot of experience with different VC methodologies? Are you sure the weight of your questions about VC match up with the (hopefully minimal) premium you're hoping to pay for VC expertise?
However, if you list "10 years experience with <Technology X>" on your resume, then you should absolutely be prepared to discuss your experience with <Technology X> at a high level. Not anal minutia or contrived trick questions, but certainly you should be able to respond to an open-ended question about the basics with enough context to show that you weren't lying to pad your resume.
More importantly, as I explained in the parent comment, I am interested to see if their response reflects experience with multiple teams working on parallel or overlapping efforts within the same codebase simultaneously. Everyone says that they have experience like that, but if you poke a bit deeper you find that half the time it's exaggerated. They may have worked in a context with multiple teams, but not ones affecting the same area of the application or system. If you have had legit experience of this kind, or at least show a high level of insight in talking through the issues that can arise, then you might be considered for a team lead role sooner than you otherwise might have been. As I said earlier, it's not a "right or wrong" question that can disqualify you from being a capable programmer... it's a "level setting" question that helps gauge which level of responsibility you might start out with.
I think that's an unfair interpretation of what StevePerkins is saying.
I don't get the impression from him that VCS knowledge in particular will make-or-break a candidate. He's saying that IF the candidate put it on his resume, then the candidate himself is the one who opened that door for discussion.
Out of the infinite list of programming topics to discuss, what are some options to narrow it down? Well, the candidate (through his own volition) put <Topic X> on his resume... so... let's talk about Topic X! It doesn't matter what Topic X happens to be (whether it's "svn", "parallel algorithms", "Ruby", "TDD", "cloud scaling", whatever). What matters is that the candidate is the one who thought it important enough to highlight it. From there, it's reasonable to think it's something the candidate is already comfortable with discussing in depth. If not, he shouldn't put it on the resume.
Your response makes it seem like StevePerkins is playing Alex Trebek with random Jeopardy topics and asking "gotcha" questions. It's not random -- the source is the candidate's resume. It certainly seems fair and reasonable to discuss any topics the candidate put on his resume. Imo, it's also fair to augment with questions that are not represented on his resume (but that's a different discussion from StevePerkins's example.)
If you have a significant amount of experience then hopefully you have seen enough different situations to be aware of the advantages and disadvantages of each approach.
It's really a question to figure out whether or not you can reason about high level concepts.
For example, the 1-in-a-million candidate who has used git with a CI/CD configuration (note: I'm also outside SV) versus the candidate who uses TFS ("git? You mean GitHub? Yes.").
This question is a trifecta of ineffective candidate screening tactics:
(a) It's a technical screening question, one a strong candidate could get wrong, based on a technical aptitude that is trivial to teach on the job and thus rarely worth paying a premium for.
(b) It's a subjective technical question, for which reasonable engineers can have differing opinions, which means it's an outlet for interviewer subconscious bias. Did the interviewer just eat lunch? Candidates will do better on this question if they have.
(c) It's a tea-leaf-reading question about engineering/team management: it's superficially and overtly about technology, but subtextually about a bunch of other things. Let's hope the candidate realizes that.
A typical interview lasts about 60 minutes. Let's say it takes 10 minutes to pursue this particular line of inquiry. That's 16% of your interview you're spending with a question that greatly rewards people who are good at talking about technology. Worse, if you ask that question early to a quiet but excellent candidate, you can psych them out, which means you pay for that version control question in every other question you ask.
It is totally reasonable to assess soft skills and team compatibility. But you have to design an interview that does it. You can't improv it based on candidate resumes.
Avoid questions like this.
This thread started with someone saying they're pretty good at interviews. It turns out that they try to assess soft skills with technology questions based on the luck-of-the-draw of candidate resumes. Candidates with effective resumes will have an easy time passing these interviews. From the comments on this thread, that obviously sounds reasonable to some people.
I submit: those people are not competing for talent. They may think they are, but the real contenders in this market won't be OK with letting good candidates slip past just because their job-hunting skills aren't finely tuned. In fact, they'll do the opposite: those candidates are steals in this market.
The original commenter acknowledged that when he said that his experience was that the market was full of poor candidates. If that's the case, you especially can't afford to filter out effective devs because they fail to impress you when they explain how they use version control, or when their resume overstates their facility with version control.
Do you believe that technical phone pre-screens are ineffective in general? From what I've read Matasano doesn't pre-screen candidates, but provides complementary study materials instead. Is that because the subject matter is specialized? Would you approach hiring web dev roles differently?
From your remarks, it sounds like you would reject the practice of asking open-ended interview questions (e.g. describe your workflow, describe a typical day, describe a recent project) due to interference from the interviewer's bias. What, if any, value do you place on open-ended questions?
We do free-form technical interviews, but only to try to detect candidates who really aren't ready for the work-sample challenges. Our in-person interviews are standardized and try to evaluate consulting/architecture skills that are hard (impossible?) to measure without people. These involve open-ended intermediary questions, but the final answers are structured.
The bottom line is this: you cannot compare candidates using free-form interviews. To compare candidates who are going to be doing similar work, you have to put them through similar questions. Thus, free-form interviews have no evaluative value.
And it's relevant to a job like sales. (That's unsurprising because a job interview is actually a sales meeting.) And many management jobs do have a sales aspect -- you have to justify budgets, sell work inside the company, et cetera.
But if explaining our work to outsiders were a particularly important or routine skill for programmers, we wouldn't be so bad at it. And, on average, we are bad at it. Because our actual on-the-job communication, which we practice all the time, is largely written and asynchronous, taking place on media like Slack or Github. It relies on plenty of job-specific shared knowledge, domain experience, and jargon, and it all happens in the shadow of a job-specific shared codebase that is supposed to speak for itself -- the whole point of software is to build something that works by itself -- but is also perpetually unfinished.
There are social skills that are important to have on a software team, but it's difficult to judge them in an interview. Interviews are staged events.
Challenging a candidate to defend a resume in an interview is like asking them to do improv comedy, and selects for many of the same factors: Verbal gracefulness, comfort in the spotlight, the ability to seamlessly change the subject, and the amount of time spent in rehearsal. Good candidates rehearse their resumes. We get to write them ourselves, after all, and with practice we learn to design them with hooks that lead into our best material.
Oh, but surely they'll write documentation? Programmers, by and large, seem to suck at that too, which is why we have tech writers. But somebody still needs to explain to the tech writer what's happening! And only a few companies seem to carry tech writers for internal only products.
To put it simply, if I ask somebody how their code works and they say "Go away for an hour while I write documents" I think I'd rather not work with them. Or worse, they ask me for help debugging but can't tell me what they're trying to do. No thank you.
"Hello, coworker! Did you enjoy the cake we both got to eat the other day in the company cafeteria?"
"By the way, I'd like to ask you a question, and don't worry: This isn't an interview or anything, so if you can't answer me right away, or if your answer lacks grace, it's not as if you'll lose your job."
"Anyway, coworker: I found this code, which you wrote while working for my company, under the direction of my company's management, and which solves a problem that my company actually has, and which builds upon my team's platforms, languages, and coding standards, and which might even link directly to my code, and which both of us have had a moment to read and think about and which is right in front of us on this monitor. How does this code work?"
"Also, can you help me debug this code I have here? It builds atop the code I showed you last week, and is written in the same language that we all use, and attempts to solve a problem you've seen before – which is not a coincidence, because you were the person who asked me to solve the problem."
These questions are incredibly relevant to our work, but interviews can't cover them. Candidates are not our coworkers and they share none of our context. Instead, interviews are, at best, an exercise in prediction. In practice, they are often an exercise in magical thinking.
During the workday, people aren't being constantly judged. They don't implement functions on whiteboards without unit tests, solve brainteasers out loud during stand-up meetings, or implement quicksort from memory. They do have to explain code to coworkers, but not to people who don't understand the problem space, the language, the background, or the constraints. These rarely-exercised feats of skill are valuable -- sales is valuable -- and our gut feeling is that such feats are somehow related to relevant job skills. But gut feelings are often wrong. And not every job is in sales.
Depending on the experience of the candidates he's trying to acquire/interview, this may be appropriate to indicate that they have either experienced different ways of developing at different companies, or have an interest in software development practices wider than just "The way I've done it is the way I was told."
I'd consider it an 'indicator' question, indicating that the candidate has either experience in or interest in software delivery and the way that different organisations work/operate. It's probably not a make-or-break question, but if a candidate has a good number of years under his belt and hasn't at least heard of some different kinds of practices, then they might be the "do what I'm told but no more" coder that that organisation wants to avoid.
You might want to build an interview process that selects candidates who belong to the latter set. I'm a little baffled why you'd want to select from the former set in preference to the latter.
I'm not trying to get in an ego stroking contest here, I'm just providing anecdata.
I worked with SVN for ~4 years and git for ~4 more. What StevePerkins was asking was perfectly clear to me after about three seconds of thinking. Of course, in an interview, I would be sure to parrot back my understanding of the question. :)
FWIW, you and I understood his question to mean the same thing.
Of course this again biases against people who suck at interviewing (too nervous or whatever).
It's not a terminology question, it's a concept question. A good candidate should be able to recognize abstract concepts regardless of the words used to describe them. Even in an interview setting.
I think I'm a pretty good programmer and I've got a reasonable body of work on GitHub to back it up, but I've often failed technical interviews because I go to pieces under the pressure and my brain just stops working. I've been a dev for > 15 years but if anything I've got worse at interviewing over time. I only apply for positions that I genuinely think I'd be good at, but the technical interview gives the impression that I'm a clueless idiot. From my (admittedly selfish) point of view, the approach proposed in the article seems much better than the process you describe here.
And remember, even though the primary skill you're interviewing for is coding, they are also testing you for communications ability. Being able to clearly and concisely explain what you're doing is at least as important as being able to get things done.
Plus, as one other commenter points out elsewhere in this thread, once you've been on the other side of the table for a while, you get a much better idea of where your own strengths lie (and what you're up against, which can instil a lot more confidence than you might think!).
I like to ask people about side projects, and what is the latest interesting thing that they've been learning in their own time. However, my issue with the "Your-GitHub-Is-Your-Resume" nonsense is that it falls apart past the age of 30. Before I got married and had kids, I used to spend almost every waking moment of personal time (and about half of my employer's time, to be honest) working on areas of personal interest. I wrote a handful of minor open source frameworks that are too obsolete to be worth mentioning, a set of Java bindings to the wxWidgets library, authored a book, and served as technical reviewer for a stack of other books.
Then I got married, and became a father.
Now, quite frankly, I don't do that anymore. I just can't. It isn't a matter of passion, it's a matter of physics. My GitHub account includes some small personal apps that I tinker with from time to time (e.g. a diet and exercise tracking application), but I would be MORTIFIED if someone thought that was a representative sample for job application purposes. It's a fairly trivial web app, that I could just as well have written 10 years ago.
In fact, the type of work that you do later in your career really doesn't lend itself to showcasing in a small GitHub portfolio. You work on Node.js websites, with whichever client-side data binding framework is popular this week? Awesome, GitHub would be perfect for showcasing an example of that. But your most recent projects include integrating a dozen microservices with an AMQP broker and Apache Camel, or using Spark to crawl a mass of data in a Cassandra cluster? Large scale development just doesn't lend itself as well to representation in a personal GitHub portfolio.
So side projects may be a useful screening tool with junior level candidates for web developer positions, but I think it's a naive suggestion for more senior level candidates in more complex domains.
The notion of having an on-site coding exercise is better in my opinion, but that's not a panacea either. Anything worthwhile would probably take a matter of hours, turning your on-site interview into an all day affair for which the candidate would have to take a full day of PTO from their current job. There is NO WAY that I would submit to something like that unless I was already very far into the recruiting process with a particular company, and very much interested in working there already. You're not going to reach that point unless you already have technical screening tools in an earlier stage of the process, so there you are right back at the original problem.
I do agree with the article, as well as many of the comments here, that "whiteboard exercises" are a fairly pointless tool (e.g. "write a recursive function to traverse this tree", or "walk me through your process for guessing how many golf balls could fit in an airplane"). Even though my company does a bit of that too, I grumble about it and refuse to incorporate any of that into my stage of the process. However, almost all of the alternatives that I've heard suggested seem to be proposed by very young junior-level devs, who lack perspective on what their career path and mindset are going to look like 10 years down the line.
I wonder if there are any careers where you only have to get good at the thing you're doing rather than a bunch of meta-things.
I think it's great when the practice of some discipline or craft has beneficial effects in other areas of our lives.
If you're interested, the way I got over it was by just accepting the fact the interview process is not perfect and in the end I have no idea what these people are looking for. That being the case, I just do my best and enjoy the fact I get to work on algorithmic questions usually much more interesting than what I would get to see while working. I guess I try to just have fun!
It really seems like a lot of people freak out because they are afraid of rejection. I don't want to get stereotypical about how nerds are antisocial or whatever, but I definitely found that once I stopped caring whether I was going to get hired or not, I started doing and feeling a lot better. I actually started enjoying interviews because I get to see interesting questions, see what other people are working on, and because I am junior, learn about more technologies and why they decided to use them.
Really, it's more like "99% of the people still searching, who don't have a documented contribution history, and don't have a network that lets them find good jobs in a heartbeat, who must resort to this to find a job, can't answer basic questions."
Most of the talent pool is working and has to be pried away; they won't show up through HR channels.
Not a diss, by the way; I count myself in that set, though I am still not "on the market", and still don't fail at fizzbuzz-style questions.
But I've been thinking about the marginal value of each additional technical question. In other words, how much additional relevant information about a candidate do you get with each increasing level of difficulty?
For instance, suppose you ask fizzbuzz and the candidate has an easy time of it. Then you ask about building/searching a binary tree, which the developer manages, but only after fumbling around a bit. Then you get into finding cycles in linked lists or graphs, and the developer takes a crack at it, but would need to look it up. Or maybe the developer gets it, and the interviewer asks about finding all permutations of a string...
How much more do you learn by going from fizzbuzz to binary trees? How much more do you get by asking about cycles in linked lists? And so on…
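For what it's worth, the cycle-detection rung of that ladder usually comes down to Floyd's tortoise-and-hare. A minimal sketch in Python (the `Node` class is just a toy singly-linked list for illustration):

```python
class Node:
    """Toy singly-linked list node for the example."""
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Floyd's tortoise-and-hare: advance one pointer by one step and
    another by two; if the list loops back on itself, the fast pointer
    eventually laps the slow one. O(n) time, O(1) extra space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

Whether a candidate can produce that from memory arguably says more about recent interview prep than about day-to-day ability, which is rather the point being made above.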
Just for the record, I can't stand technical interviews, and I dislike them so much that I'm considerably less inclined to apply for and interview for new jobs because I feel that I've studied for my data structures and algorithms midterm one time too many. I just don't want to re-load merge sort into short term memory again.
However, in spite of all this, I'm still somewhat sympathetic to interviewers. The truth is, if you really don't have a lot of direct evidence about a developer, you truly are at risk of hiring someone who can't code.
My problem with Silicon Valley hiring practices isn't that they have a process that leads to a high false negative rate. They should do what they feel is best for their company. My problem is that they do this while complaining about a critical shortage of developers so severe that it endangers the entire tech economy.
Am I totally unfit for developing software?
If not, what is the correct answer that will prevent me from getting COMPLETELY EXPOSED (in your words) if you ask me this MVC question in an interview?
Every interviewer has their own pet question that the candidate is COMPLETELY UNQUALIFIED if they don't know.
My short answer to "What is MVC?" is that it's a popular web development trend, but without much substance to back it up. Every MVC project that I've been supporting/maintaining was a nightmare, taking 3x-5x longer to get stuff done than what I consider reasonable.
MVC = Model-View-Controller. It's usually for web development. You structure your code in a way that separates the data logic (Model), the user presentation logic (View), and the glue that holds the bits together (Controller).
There are many popular MVC frameworks that let you churn out a lot of mediocre websites quickly (angular.js, Ruby on Rails, Zend, Django (Python), and many others). The MVC framework does a lot of routine coding for you, but it comes at a price. The code can become a nightmare to maintain, especially if you use the MVC framework incorrectly. If you do something outside what the MVC framework provides, it can become a handicap rather than a help. The MVC framework usually demands you write your whole application in its preferred style, such as spreading out your code in various directories with forced naming conventions. MVC frameworks tend to be poorly documented compared to the underlying language (for example, compare the Zend documentation to php.net).
The point of it is two-fold: separation of concerns, and DRY. But the reason it's so ugly on the web, when it produced elegant code for GUIs, is that web controller schemes are not really MVC.
Errm, it's a pattern often used for UI (CLI and GUI) programming that was first commercially introduced in Smalltalk-76 (in 1976). [This predates Windows and Mac by many years.]
MVC can be done, with varying degrees of difficulty, in any computer language. You don't even need a web development framework to implement it! ;-)
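To illustrate that point, here is a rough sketch of the triad in plain Python, no web framework involved. The class names (`TaskModel`, `TaskView`, `TaskController`) are made up for the example:

```python
class TaskModel:
    """Model: owns the data and business rules; knows nothing about display."""
    def __init__(self):
        self.tasks = []

    def add(self, title):
        self.tasks.append(title)

class TaskView:
    """View: turns model state into output; holds no data of its own."""
    def render(self, tasks):
        return "\n".join(f"- {t}" for t in tasks)

class TaskController:
    """Controller: translates user input into model updates, then asks
    the view to re-render."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def handle_add(self, title):
        self.model.add(title)
        return self.view.render(self.model.tasks)
```

The model never imports the view, and the view holds no state; the controller is the only piece that knows about both. That separation, not any particular framework, is the pattern.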
iOS/OS X seems to do real MVC, with lightweight reusable view objects managed by ready-made controller classes. The obvious benefits are minimal memory and streamlined data flow. You only ever make/use the view objects you really need, you can recycle them to save memory, and - in OS X especially - controller objects include useful ready-made convenience methods.
Web frameworks often seem to do something that looks like MVC if you squint and don't think too hard. But when you're forced to use separate languages for logic (js), markup (HTML), and view design (CSS), and handle separate client/server environments, and there's inevitable overlap between all of the above (jquery etc) because stuff doesn't "just work", and you probably have yet another layer as a DB driver, it gets very complicated very quickly - to no great benefit.
The web has become a snarly ball of warty epicycles. That's why native is so popular - you get one common language for logic, views, and data, with a clean-ish interface to a remote server if you need one.
When everything works together like that, you can think seriously about MVC.
When the core concerns are all over the place already, it's a mess before you even start.
Which is not to say you can't work with it and build cool stuff - more that you can't just airdrop in a design pattern from a different tradition without thinking really, really hard about what, why, how, and what happens a year from now.
Separation of concerns and DRY are both good. MVC doesn't succeed at either, except in idealized corner cases.
A lot of widgets in Windows, for instance text fields, encapsulate the view (it's where the text is displayed), the model (which keeps the text in the control), and the controller (the widget handles the UI for entering text, and catches and handles the mouse, etc.).
(In angular, when you edit data in the view, the model is immediately updated, which is why it's MVVM and not MVC.)
There's lots of ways to decompose a program, each with its own strengths and weaknesses. MVC is a particular decomposition that is useful for interactive applications. The components are named Model, View, and Controller. An example of an advantage of this decomposition is that you can apply a different user interface (e.g. a command line one) by exchanging the View, without needing to change the Model.
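A toy sketch of that advantage, with hypothetical names: the same model rendered by two interchangeable views, so a command-line front end costs one small class rather than a rewrite:

```python
class Counter:
    """Model: holds state and rules, knows nothing about presentation."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

class PlainTextView:
    """A command-line flavored view."""
    def render(self, model):
        return f"count = {model.value}"

class HtmlView:
    """A web-flavored view over the exact same model."""
    def render(self, model):
        return f"<span>{model.value}</span>"
```

Swapping `PlainTextView` for `HtmlView` touches no model code, which is exactly the kind of advantage a candidate could name when asked about the pattern.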
Other answers here have focused on MVC as it applies to web development. But MVC is from the 70s, predating the web by many years! If you look beyond the trend, you see ideas rooted in an effort to architect software well, and see MVC as just one design, one particular decomposition. The key idea is not MVC itself, but the principles behind it. Being able to place ideas into larger contexts shows mastery, in my view.
To be honest, I'm not entirely sure that the MVC example would be a good question to ask to a younger candidate. If you had been around during the early days of web development (i.e. CGI scripts based on Perl or raw PHP, or Java where all of your logic is crammed into a JSP page), then you could say all sorts of things about the shift toward MVC being the norm. However, if you came along after that shift, and every web framework you've ever seen has been some variation on MVC principles, then it's a bit like asking a fish to describe water.
Anyway, if you're just getting started in the field, and haven't had much interview experience yet, then try not to let it get to you. Interviewing involves a TON of rejection, and that never really stops. If interviews ever start feeling easy, then it means you're selling yourself short... and you should be interviewing for more senior positions with higher salaries.
To get started though, I would suggest simply doing a Google search on "programmer interview questions", or something more specific to your background (e.g. "java interview questions"). This is exactly what the majority of your lazy interviewers are doing to come up with their questions anyway. As you encounter questions that you can't answer, then look them up and read about it (e.g. http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93cont...).
Big-O, data structures, and having a good idea of how computers work and what matters for performance, from both a theoretical and a practical perspective, matter infinitely more to us than knowing how to play object-oriented BS bingo or subversion usage patterns.
If you know what MVC stands for, you know enough about OO patterns for me.
Not going to argue that we got it the right way(tm), just wanted to give you another perspective.
1) You've got a code base large enough that they matter;
2) You wrote it initially without using something like MVC;
3) And then it grew into a ridiculous monstrosity, which you then cleaned up by breaking it into models, views, and controllers.
At that point, if it turns out well, you start to "know it in your heart" instead of it just being a nebulous concept that you read in a blog/textbook/whatever somewhere.
It's good you don't ask for a definition. Even the inventor of MVC thinks it's rarely been applied as intended (see the foreword by Trygve Reenskaug).
Short of quoting a single book's definition of them, word for word, you'll find hundreds of different definitions of those two terms online.
So definitions are bad; rather, ask the interviewee for an "understanding" or an "explanation".
If you look at the answers, aside from a few that are really poorly phrased, they all fundamentally say the same thing.
If I were interviewing someone and asking them about polymorphism, I would expect a definition which shows understanding and perhaps an example which explains to me that they really do understand it in concrete terms. (I'd probably give them bonus points for mentioning what the Greek words mean, but only because someone who knows that is likely to be someone I'd enjoy working with.)
Inheritance is certainly a dicier subject. I'm not 100% sure I'd ask about either polymorphism or inheritance in an interview nowadays (though I certainly have in the past), but if I were asking about the latter, I'd probably phrase it in the specific context of a particular language - or perhaps request a comparison of how inheritance differs between two languages. I guess, really, I'd actually want a discussion about inheritance versus composition - but, having asked for just that in the past, it was quite surprising to me (at the time) how many people had literally no idea what the term "composition" meant (despite, I'm sure, using it on a daily basis).
I know what the Greek words mean, but it would never in a million years occur to me to mention it in an interview; I would have no way of knowing without being told that you would assign positive value to such.
My experience: exactly the opposite.
Your experience comes from Matasano, a company where people get paid to hack, a dream job for many techies, with offices in New York, Chicago, and Silicon Valley.
Is it surprising that your experience is exactly the opposite? I would imagine there's a huge self-selection factor at play here with regards to who applies to work for each company.
(None of this is to say Steve's company isn't awesome - I'm sure it is - just that recruiting is going to be much more difficult).
p.s. - I agree with you that tech interviews are terrible - I just don't think anyone has it figured out yet, and what worked or didn't work for you at Matasano is not guaranteed to work or not work for the rest of the industry.
If you read Thomas's recent post on Matasano's hiring process, they were looking to funnel people in. Steve's process seems pretty clearly designed to filter people out.
Compare the idea of sending would-be candidates books to study and the idea of an interview process that COMPLETELY EXPOSES candidates. One is trying to bring people in, the other is trying to drive them out.
Somehow tech hiring has often become an adversarial process. It shouldn't be shocking to us that this approach makes it hard to find good people.
How can I know if an employer is worth 1-3 months of my free time until I meet my potential future coworkers? Demanding a candidate invest 1-3 months of free time before you meet them seems like an insult to me.
If your job uses a niche skill, why demand candidates be an expert already before you hire them?
I do have a problem with passing on candidates who were sold on our subspecialty, had an aptitude for it, but could not pass an interview on it "cold".
Are you hiring people who are brilliant, or people who are so desperate that they'll spend a couple months preparing for one interview?
Not your cup of tea? Totally fine. Not everyone is interested in doing security.
2. Package it, with all of the assets and utilities needed to get it running with "vagrant up".
3. Carve out some feature/features from the application, and replace them with stub functionality.
4. Deliver the vagrant app and a functional spec to candidates. Have them implement the missing feature.
5. Devise a scoring rubric (unit test coverage, lines of code, algorithms used, safe/unsafe APIs, performance, whatever). Mechanically evaluate candidate submissions.
6. (Optional) Devise a 15-20 minute on-site interview component to verify that the candidate actually did the work. We didn't bother with this, and multiplied the size of our team (NCC is the largest software security firm in North America) and had 100% retention. But it's a big concern for some people.
The reason is, of course, self-selection bias. On average, the people who applied to the tech college were just stronger students than the people who applied to the state college. This is likely because applicants to the tech college were more likely to be people whose primary goal was to pursue an academic discipline that interested them, while the state school had a good number of students applying who wanted to party and attend football games. If you were to look at students who applied to BOTH schools, you would actually see a lower acceptance rate for the tech school than the state school.
I suspect that the same factor is at play here. I suspect that the average quality of a candidate who applies for a software security engineering position is much higher than the average quality of a candidate who applies for an enterprise software development position. The software security engineering position is an esoteric position that is more likely to attract applicants who are enthusiasts or at least very interested in the field. The enterprise software development position is more likely to attract anyone with a tech background looking to phone it in and collect a paycheck.
If this is true, while both Thomas and Steve have a difficult time hiring developers, they have a difficult time for fundamentally different reasons. Thomas is like the tech school - his difficulty is with getting candidates to apply. Steve is like the state school - his problem is with separating qualified candidates from "phone it in and collect a paycheck" candidates.
This is why I am skeptical of Thomas thinking he's got a much better way of hiring figured out. Maybe he does for companies that focus on security - judging by his Linkedin profile, he's only worked for companies focused on security. This is certainly valuable, but he has no idea what it is like for Steve.
By the way, I think many of us have seen this self-selection bias at play even within our own companies. Post a primarily Java job and a primarily Scala job and then try to tell me that the same principles apply for filling both positions.
Invariably, one of the top replies boils down to, "Hey! I'm a 20-something in the valley, and I see it completely different!".
"After a couple years of this I could tell which companies to worry about and which not to. The more of an IT flavor the job descriptions had, the less dangerous the company was. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening-- that's starting to sound like a company where the technical side, at least, is run by real hackers. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried."
I'm not sure if what you're saying isn't true, or if I'm just biased and not able to see Atlanta from an outsider's perspective.
I left because I wanted to live in a big city, not in suburbia. Atlanta has some of the plusses of a city, but they're relatively limited compared to LA, SF, NY, DC. It's more a collection of suburbs than a city.
If suburban life is what you want, Atlanta is fine for that. Honestly, most of Silicon Valley is suburbs, outside of SF itself. And they aren't as appealing as Atlanta suburbs, with many more trees and a lot more space. If that's what someone wants, Atlanta is a better choice than pretty much anywhere in California.
One of the best things about California, particularly in LA and the Bay Area, is that there are more people living their lives according to their own unique desires than anywhere else I've ever been. Having "odd" choices about your lifestyle or having crazy dreams and aspirations is the norm here, and that's something I want around me. It's a plus for me to be surrounded by that kind of thing. There's some of that in Atlanta, but it's more localized to certain areas. Here it's everywhere, just how things are.
But I do get your point about "odd" lifestyles. We're certainly getting weirder here, but I don't think it can yet compare to what you see in NYC or the Bay Area. Still, there's a lot to like here, and cost of living alone is a great argument for Atlanta.
Atlanta is by far the technology hub of the southeastern United States. It has no competition to the west until you reach Houston (and even that is mostly specific to the energy industry). It faces only mild competition from the Research Triangle region of North Carolina, and beyond that it's a technology DMZ all the way to Chicago or the northeast.
It's home to Georgia Tech, perennially one of the top 5 engineering schools in the nation, and pumps out thousands of new programmers per year in addition to the thousands who migrate here. It has a highly diverse business base, not tied to any one predominant industry, and its Buckhead and Midtown districts have one of the most thriving startup scenes that you'll find outside of SV or NY.
Of course, if a Georgia Tech grad is in the uppermost percentiles, then odds are he or she would be enticed by Google, or a west coast startup. I'm certainly not saying that Atlanta is in the same league as San Francisco, or even New York (at least not its finance/quant opportunities). But my earlier point was that these environments are extreme outliers, and virtually no other city is comparable to that either.
I think what some of the replies to your post are forgetting is that there are no right or wrong answers to these questions. The questions are about gauging the experience of the candidate so you can make an appropriate offer if you want to hire them. It's probably not a huge deal if you couldn't describe MVC but gave an in-depth answer to the OOP question, for example. It is a problem, however, if several questions reveal major holes in your knowledge and experience.
I wonder how the success rate would change if you defined "model," "view" and "controller," and asked why developers might choose to organize projects in such a way.
My guess is that most working developers would get the concept right away and might even be able to relate it to other named models we've seen--we might just all exist in a web bubble where MVC is trendy because of the levels of abstraction that exist in our common toolsets.
Even Cobol's ancient "divisions" seem to have a similar organizational structure (http://en.wikipedia.org/wiki/COBOL#Features) and so, I'm sure, did a lot of old-school, green screen terminal apps.
Our computers are generally fast enough, and our tools good enough for what we're asking them to do, that we can often just define data models and user views fairly abstractly--index this, make that an input box with rounded corners and indented text--and we can often avoid delving into the low-level details of how data is stored or pixels are laid out on the screen. I'm sure it would be possible to describe an operating system, or a video game engine, or a DBMS in terms of which sections of code model data, handle the meat-and-potatoes "business logic", and communicate with the outside world, but you'd probably need deeper layers of abstraction to actually plan how the code would be organized.
We also have enough fast memory that we can basically pass data from one level to another without issue and don't have to deal with abstraction-breaking optimizations like processing data while a disk head or spinning drum is in the right place or before an old video game system's tiny RAM chip is full.
But I think programmers who worked with those kinds of systems could quickly jump into web and app development, and if they couldn't, it's probably not because they don't know what MVC stands for (although it's possible it's a useful proxy).
"Why do we keep doing whiteboard interviews, even though they tell us little about how good the applicant is at actual software development?"
Let me guess, they work for a "big box software factory/consultancy".
They don't program, they just click buttons on Eclipse.
That should be your filter.
I suspect that it is very likely that this process ends up excluding very high-level producers.
Which implies, almost by definition, that they will become much better at the process over time and eventually become very good at gaming it, therefore making it even less reliable.
And I agree with you that threshold competence in the general applicant pool is so low that some form of technical screening is absolutely essential. My problem is how best to do it. For one thing, it has taken me far too long to recognize that the pool of interview questions and plausible answers is widely available and apparently studied by many applicants prior to technical onsite interviews. Their answers are quickly given and so polished that we bring them on, then fairly soon find that answers to those interview questions are pretty much all they know.
If a "good" candidate is a 1-in-100 find, then each false negative means you have to look at another 100 candidates.
Also, if you decrease your false negative rate by more than you decrease your false positive rate, you're actually hiring MORE bad candidates (while spending more time interviewing). I.e., every time you pass on a good candidate, that gives you another chance to make a mistake and hire someone bad.
Your odds of hiring one specific "bad" candidate may be small, but if most candidates are "bad", that actually makes it more likely to hire someone bad each time you pass on someone good.
(1) loss of an entire person-year, or even two, because annual reviews need to accumulate evidence for HR to fire (may be less time if you are a startup without real "HR")
(2) amount of cleanup other people have to do after that new bad hire
(3) loss in morale for the good people on your team, who now perceive your company's hiring process as "broken"
(4) delays + bugs introduced in the product because actual work didn't get done, or got done badly, despite your filling up your headcount
(5) amount of money lost in salaries, signing bonuses, office space and benefits (typically > $200K)
(6) amount of productivity lost as good people on the team waste time trying to "ramp up" your false positive
(7) emotional stress caused to good people, who are now left wondering about their own job stability, and to managers who wasted months on paperwork and a lot of explaining
(8) emotional stress caused to the false positive being fired, who had moved across the country for you, bought a house on a mortgage, and had three school-going children
(9) Most likely, if you are a big company, the false positive didn't actually get fired, because the hiring manager never wanted to admit it. S/he was encouraged to join another team or role, or even learned political tricks to get promoted, contributing to an ongoing bozo explosion
(10) I could go on and easily justify probably a 3X-10X loss compared to the case where the false positive was avoided
1 - Many companies quote their hiring rate over the total resumes received, which is wrong. I'll use 10% as the hire rate over full interviews conducted, which is the average of the 5%-15% seen at most companies.
2 - This is bad math. Assuming random trials, it would actually be 5 person-days on average, but the intuitive approach doesn't produce entirely bad results here, so we will go with that.
3 - http://guykawasaki.com/how_to_prevent_/
* 99% of applicants are bad
* 50% false negative (you pass over about one good developer for every good developer you hire)
* 1% false positive (one out of a hundred bad devs can snooker you into hiring)
In that scenario, you're twice as likely to hire a bad dev as a good one. And if you halve your false positive rate by increasing your false negative rate by 50%, you're still twice as likely to hire a bad dev, it will just take you twice as much work.
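A quick numeric check of that scenario (a sketch; the rates are the hypothetical ones in the bullets above, and the helper name is mine):

```python
def expected_hires(n_good, n_bad, fp_rate, fn_rate):
    """Expected good and bad hires from a pool, given error rates."""
    return n_good * (1 - fn_rate), n_bad * fp_rate

# 1000 applicants: 99% bad; 50% false negatives, 1% false positives
good, bad = expected_hires(10, 990, 0.01, 0.50)
assert round(bad / good, 2) == 1.98      # ~2 bad hires per good hire

# halve the FP rate to 0.5% while raising FN by 50% (to 75%)
good2, bad2 = expected_hires(10, 990, 0.005, 0.75)
assert round(bad2 / good2, 2) == 1.98    # still the same ~2:1 ratio...
assert good2 == good / 2                 # ...at half the yield, i.e. twice the work
```

The ratio of bad to good hires only depends on the ratio of the two error rates (weighted by the pool mix), which is why halving both rates together changes nothing but the interviewing effort.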
Even if you hire 10% of the candidates you on-site interview, that says nothing about your actual false positive or false negative rate. For all I know, he could be weeding out all the top candidates at the pre-interview stage, and then hiring the best of a mediocre group of people.
It's easy to measure your false positive rate: the people you are forced to fire (or wish you could fire, if not for corporate bureaucracy).
It's harder to measure your false negative rate. The only way you could measure it is to pick a random sample of people who fail your interview, AND HIRE THEM ANYWAY. (However, that could be a lawsuit risk. It would be unfair to the people you hire despite their failing the interview. A small business couldn't afford to do it; only some huge corporation could run the experiment.)
Also, I doubt the ability of most businesses to identify the best performers AFTER THEY ARE HIRED and working there for a couple of years.
I feel you are truly confused about FP and FN. Whether 99% of developers out there are bad, or whether you hire 10% of the candidates you interview - both of these quantities are independent of FN and FP. FN says that you are turning away X good people, and it is in turn independent of FP, which ultimately decides how many bad developers you would eventually end up hiring, regardless of the other quantities I mentioned. See here: http://en.wikipedia.org/wiki/Confusion_matrix
It's not easy to measure FN, FP, TN or TP. Even good people can fail for unrelated reasons, like a bad manager, and bad people may succeed despite mediocre skills. Looking at who you had to fire or who got promoted doesn't give accurate measurements at all, although it may serve as a weak proxy. The scenario I described was hypothetical, to point out that the cost of FP is far higher than the additional hiring cost due to FN.
I'm using standard terminology here. There are plenty of textbooks and articles on the confusion matrix, precision, recall, ROC, etc. I'm not sure what definitions you are using to arrive at the conclusion that FN increases the number of good hires (it only increases effort).
FP = probability, given a bad candidate, you will hire him
FN = probability, given a good candidate, you will pass
Suppose 100 bad candidates and 10 good candidates, with FP = 10% and FN = 10%.
You make 10 bad hires and 9 good hires.
Now raise FN to 20%, holding FP constant: you make 10 bad hires and 8 good hires.
So increasing FN lowers your yield.
Your statistic (good hires / total hires) tells you nothing about your actual FP or FN values.
If you don't get it, I'm not wasting time on you anymore. You are very dangerous. You think you know statistics, but you don't.
Using the math from that link, if you decrease FN, then PPV increases.
First, FP and FN are not probabilities. They are just unbounded counts. This may feel pedantic, but in a moment I'll show you why it's critical. Let me draw the confusion matrix first (G = good candidates, B = bad candidates, H = hired, NH = not hired):
      H    NH
G  |  TP   FN
B  |  FP   TN
What you are referring to as probabilities are actually the False Positive Rate (FPR) and the False Negative Rate (FNR), which are defined as follows:
FPR = FP / (FP + TN) = FP/B
FNR = FN / (TP + FN) = FN/G
precision = P(G|H) = TP/H
So how do we get TP to calculate precision if we only know FPR, FNR, G and B? I did a little equation gymnastics using the above and got:
TP = G - G*FNR
H = TP + FP = TP + FPR*B
So now you can plug this into the above equation for precision and find that, as you increase FNR while holding FPR constant, precision goes down. So you are actually correct. Although it might look like an unnecessary exercise versus following intuition, the above equations can actually help calculate the exact drop in precision; multiply that by the cost of FP vs. FN to find the operating sweet spot. On my part, I need to do some soul searching to figure out why this didn't occur to me before :).
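Those identities are easy to sanity-check numerically (a sketch using the notation above; the pool sizes and rates are arbitrary):

```python
def precision(G, B, fpr, fnr):
    """P(good | hired), from pool sizes and error rates."""
    tp = G * (1 - fnr)       # TP = G - G*FNR
    h = tp + B * fpr         # H = TP + FP, where FP = FPR*B
    return tp / h            # precision = P(G|H) = TP/H

# holding FPR constant, precision falls as FNR rises
assert precision(10, 100, 0.10, 0.10) > precision(10, 100, 0.10, 0.50)
```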
if you decrease your false negative rate by more than you decrease your false positive rate, you're actually hiring MORE bad candidates
First, false negatives (FN) and false positives (FP) are independent of each other. FP estimates how many bad developers you would end up hiring regardless of your FN. FN determines how many good developers you would turn away regardless of your FP. If you are confused about this, well, these numbers are part of the appropriately named "Confusion Matrix". I would highly recommend reading up on Wikipedia (http://en.wikipedia.org/wiki/Confusion_matrix) or any textbook before you jump in commenting and throw Bayesian equations around, because you are certainly not using the right terminology. Also, both of these are again independent of the actual % of bad developers out there (i.e. whether the market is 99% bad or 1% bad doesn't matter; FP alone determines how many bad developers you would end up with).
Next, it might be actually easier for you to think in terms of precision and recall instead of FP/FN. Interviewing process is nothing but classification problem and P/R is standard way to measure its performance. Again Wikipedia is your friend to brush up on that.
A classic situation in classifier performance is referred to as the precision-recall tradeoff. You can plot it on an ROC curve and choose your operating point. The way you typically do that is by quantifying how much you would get hurt by a loss in precision (~ more FP) compared to an increase in recall (~ less FN). You plug the costs into the equation and decide your operating point. For companies that can rapidly deal with FP, increasing recall may make sense, and vice versa. However, in most cases, the many other costs I listed should prevent you from lowering your precision too much.
Why should it take 2 years to fire someone? That sounds like a corporate bureaucracy problem.
I had never thought of it this way before but this is an instance of Bayes's rule. If the false negative rate goes too high and the percentage of good programmers is small then yes, the process could actually increase the odds of a bad hire.
Try it out with some numbers.
10100 candidates, 100 are "good".
Suppose you have 2% false positives and 1% false negatives.
You hire 99 good candidates and 200 bad candidates.
Suppose now you have 0.5% false positives and 90% false negatives. (You decreased your false positive rate by 4x but increased your false negative rate by 90x. This is typical for employers who look for every little excuse to reject someone.)
You hire 10 good candidates and 50 bad candidates. Your "good hire" percentage went down, and you're churning through a lot more candidates to meet your hiring quota!
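Plugging the parent's numbers into the same bookkeeping confirms the arithmetic (a sketch; the rates are the hypothetical ones above):

```python
good_pool, bad_pool = 100, 10_000          # 10100 candidates, 100 good

# 2% false positives, 1% false negatives
good_hires = good_pool * (1 - 0.01)        # 99 good hires
bad_hires = bad_pool * 0.02                # 200 bad hires

# 0.5% false positives, 90% false negatives
picky_good = good_pool * (1 - 0.90)        # ~10 good hires
picky_bad = bad_pool * 0.005               # 50 bad hires

# being far pickier made the "good hire" fraction WORSE
assert good_hires / (good_hires + bad_hires) > picky_good / (picky_good + picky_bad)
```

With the lenient rates, 99 of 299 hires (~33%) are good; with the "every little excuse" rates, only 10 of 60 (~17%) are.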
So, "it is better to pass on a good candidate than hire a bad candidate" is FALSE if you wind up being too picky on passing on good candidates.
Assuming you can identify losers and fire them after a year or two (with decent severance to be fair), you're actually better off hiring more leniently.
It's also even worse when you realize that the candidate pool is more like:
10200 candidates, 100 are "good", 100 are "toxic", and the toxic people excel at pretending to be "good".
Also, the rules for hiring are different for a famous employer and a no-name employer. Google and Facebook are going to have everyone competent applying. If you're a no-name startup, you'll be lucky to have 1 or 2 highly skilled candidates in your hiring pool.
When you make a false negative, you never find out that you passed on someone amazing.
When you make a false positive, it's professional embarrassment for the boss when he's forced to admit he made a mistake and fire them.
So the incentive for the boss is to minimize false positives, even at the expense of too many false negatives. The boss is looking out for his personal interests, and not what's best for the business.
What you're attempting to do works well for hypothetical drug testing or terrorists but not for hiring developers (or anyone else). With the numbers you used you're proposing that less than 1% of all candidates are "good" - nobody would reasonably set the "good" threshold to include only the top 1% of developers.
First, the pool of active job-seekers skews weak (unless you really think we are terrible at hiring as an industry). Even if, on a given day, all developers starting a job search had a skill level matching the average of the population, the good developers will find jobs faster, leaving the 4th, 5th and 6th job applications to come from developers who did not manage to get hired after applying in a couple of places at most. So yes, your talent pool on any particular day, just due to this effect, is far worse than the average talent in the industry.
Then there's the fact that bad developers are fired or laid off more often than good ones, so they are added back to the pool more often. Typically, companies make bigger efforts to keep good developers happy than the ones they consider hiring mistakes.
And then there's the issue with the very top of the market being a lot about references and networking. In this town, no place that does not know me would give me the kind of compensation that places that do know me would. I'll interview well, but nobody will want to spend top dollar on someone based just on an interview. In contrast, if one of their most senior devs says that so-and-so is really top talent, then offers that would not be made normally start popping up. The one exception is 'anchor developers', people that have a huge level of visibility, and you still won't get them to send you a resume at random. You will have to go look for them, at a conference, user group or something, and convince them that you want them in the first place.
My current employer has a 5% hire rate from people interviewing off the street, and that's not because our talent is top-5%, but because you go through a lot of candidates before you find someone competent. We've actually tested this: interviewers don't know the candidates, even when they were referred by other employees. But, as if by magic, when there's a reference, the interviewee is almost always graded as a hire.
Based on my experience, most employers don't hire better than "Pick a candidate at random".
Also, if you had an employee that was super-brilliant, why would you tell someone else so they can hire them away from you?
Based on the people I've worked with over the years, I say that the actual skill distribution is:
5% toxic - These are the people who will ruin your business while deflecting blame to other people.
25% subtractors - These are the people who need more attention and help than the amount of work they get done. In the right environment, they can be useful. (Also, this is mostly independent of experience level. I know some really experienced people who were subtractors.)
60% average - These people are competent but not brilliant. These are solid performers.
9% above average - They can get 2x-5x the work done of someone average.
1% brilliant - These are the mythical 10x-100x programmers. These are the people who can write your MVP by themselves in 1-3 months and it'll be amazing.
You first have to decide if you're targeting brilliant, above average, or average. For most businesses, average is good enough.
If you incorrectly weed out the rare brilliant person, you might wind up instead with someone average, above average, or (even worse) toxic.
Actually, when my employer was interviewing, I was surprised that the candidates were so strong. There was one brilliant guy and one above-average guy (My coworkers didn't like them; they failed the technical screening, which makes me distrust technical screening even more now). They wound up hiring one of the weakest candidates, a subtractor, and having worked with him for a couple of months my analysis of him hasn't changed.
There is no reasonable definition of average that would only allow for 9% above that (or 10% including the 1% you marked as brilliant). Average is usually considered as either the 50th percentile (in which case you would have ~50% above this) or some middle range (e.g. 25th - 75th percentile).
Since you said 60% are average we'll consider an appropriate range as average, the 20th - 80th percentile. That leaves you with 20% of applicants below average and 20% above. Your math falls apart real quick when we're dealing with distributions like 20%/60%/20% instead of 99.5%/0.5%.
[As an aside, the toxics and brilliants are outliers, they should be fairly obvious to a competent interviewer (and as someone who previously spent a decade in an industry where nobody conducts interviews without adequate training I'll be the first to say most interviewers in our industry are not competent)].
So "average" is not really a meaningful term. I mean "average programmer" as "can be trusted with routine tasks".
Behind every successful startup, there was one 10x or 100x outlier who did the hard work, even if he was not the person who got public credit for the startup's success.
If you're at a large corporation and trying to minimize risk, hiring a large number of average people is the most stable path. You'll get something that sort of mostly works. If you're at a startup and trying to succeed, you need that 10x or 100x person.
Consider a set with 20 elements and an average of 10. 5% are toxic. 25% are below average. 60% are average. 10% are above average.
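One concrete multiset satisfying that description (the specific values are illustrative, not from the original comment):

```python
# 1 toxic (5%), 5 below average (25%), 12 average (60%), 2 above average (10%)
scores = [0] + [6] * 5 + [10] * 12 + [25] * 2

assert len(scores) == 20
assert sum(scores) / len(scores) == 10     # mean is exactly 10
assert sum(s > 10 for s in scores) == 2    # only 10% sit above the mean
```

A skewed distribution like this - a few large outliers pulling the mean up to the mode - is exactly the shape that lets "above average" be rare.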
I didn't say it was impossible to construct a set that would yield only 10% as above average, I said there "is no reasonable definition of average" - if you feel the above set accurately represents the distribution of the caliber of developers then we clearly have very different opinions of what's "reasonable."
Or one could replace the mathematical term average with the word ordinary. Is it possible for 60% of developers to be ordinary?
That would depend on what set of developers we're looking at:
All developers - this will be very bottom-heavy, people [usually] get better with experience and there's obviously a lot less people that have been doing this for 20 years than having been doing it for two. Additionally people who are bad at a profession are more likely to change careers than those that are good (this is by no means an absolute, I wouldn't even go as far to say most bad engineers change professions, I'm just saying they're more likely to - further contributing to higher caliber corresponding well to years of experience).
Developers with similar experience - this is much more useful as there's not much point comparing someone who's been doing something for decades with someone on their first job. I would expect this to be a fairly normal distribution.
Developers interviewing for a particular position - applicants will largely self-select (and the initial screening process would further refine that) so this group will largely have similar experience (i.e. you're typically not interviewing someone with no experience and someone with 25 for the same job). But it won't match the previous distribution because, as someone else commented, the bad ones are looking for work more often (and for a longer period of time). Do the interviewees you wouldn't hire outnumber the ones you would? Yes, definitely. Do they outnumber them by a factor of a hundred to one? Definitely not. Ten to one? Probably not - if they do it probably represents a flawed screening process causing you to interview people you shouldn't (or not interview the people you should) rather than an indication that only one out of every ten developers are worth hiring.
If you substitute with these terms:
- 5% toxic
- 25% subtractors
- 60% competent
- 9% exceptional
- 1% brilliant
...then there's no reason to apply (or defend!) the mathematical definition of "average." And I think those numbers actually seem somewhat reasonable, based on my own exposure to working developers in various industries. What this doesn't account for is the "FizzBuzz effect," where ~95% of the people who are interviewing at any one time (in a tight market) tend to be from the bottom end of the spectrum.
Even within the broader pool of programmers, the line between subtractors and competent is very project-dependent, in my opinion. For some levels of project complexity, the line might actually invert to 60% subtractors and 25% competent, while for far less complex projects, it might be 5% subtractors to 80% competent.
In the former case I'd want an exceptional developer, while in the latter the exceptional developer probably wouldn't even apply, or would quit out of boredom.
For example, if you assume that 90% of your employees are "good" and 1% are "toxic", what does that tell you about the candidate pool and/or your interview process?
If I were the boss and had a "toxic" employee, I'd just dump them rather than waiting. I've been forced to work with toxic people because I'm not the boss, and I've noticed that toxic people are really good at pretending to be brilliant.
Over the years, I've also worked with a couple of people who singlehandedly wrote all of the employer's key software. I also worked with several people who wrote a garbage system but conned everyone else into thinking it was brilliant.
If 90% of candidates are "good", then why waste time with a detailed technical screening at all? Just interview a couple and pick the ones you like the best.
I thought I was pretty good so a few years ago I moved to Silicon Valley to get a startup job. Boy was I wrong. Even though I had created real applications and knew MVC, etc. etc. I was basically told I was worthless because I didn't know algorithms, unit testing, etc. etc.
So I moved back home (Tel Aviv) and started my own company (lead gen market). Programmed everything myself. Last year I did over $2 mil (70% margins) and this year I am on track for $3.5-$4 mil in revs.
The technical interview was the best thing ever for me because if I had passed I wouldn't be where I am today.
For example, YouTube engineers probably curse that the thing was made in Python, because in a 1,000-engineer org, making changes can break things elsewhere that you wouldn't be aware of. Something more statically typed, like Java or Go, would break at the compile step instead of at run time.
Python was fine when the project was 1-20 engineers, but it became a liability later on.
That is why people want you to spend a week or two learning something better and more maintainable, so you won't curse your future self later.
But if you're making small build-utility-type things, then it's fine. Or contact websites for small firms. Or a quick test project, etc.
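The compile-time vs. run-time point above can be sketched in a few lines. This is a minimal, invented example (the function and data are not from any real codebase): in Python, a type mismatch buried in the data only surfaces when the offending line actually executes, whereas a compiler - or a static checker like mypy - would flag it before anything ships.

```python
def total_duration(events):
    # Assumes each event is a (name, seconds) pair - nothing enforces it.
    return sum(seconds for _name, seconds in events)

# The "7" is a latent bug: a string where an int was expected.
events = [("build", 42), ("deploy", "7")]

caught = None
try:
    total_duration(events)
except TypeError as err:
    # Only discovered when this code path runs - possibly in production.
    caught = str(err)

print(caught)  # e.g. unsupported operand type(s) for +: 'int' and 'str'
```

In a 1,000-engineer org, the person who added the string and the person whose code crashes on it are usually not the same person, which is the whole complaint.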
I really don't understand why more people don't do this.
It doesn't even matter if it is their own code. The fact that they will have to talk intelligently about it means that it will accomplish its task.
I used to get this issue interviewing with VLSI chip companies all the time. "Do you know Perl?"--"Yes, I do, but I don't know YOUR particular favorite subset of that write-only language" doesn't go over well in interviews.
After getting tired of getting dinged on questions about a language that I had used almost daily for 5 years, and that I abandoned for good reason, I took an old program of mine in Python and ported it to Perl. I bring both.
Now, I have something concrete to talk about, can actually compare the points of two languages, and most interviewers realize that "Gee, you're probably better at this than I am."
This shifts the conversation from "Technical Jeopardy!" to "Ah, you know how to program, let's look.", "I did this. Here's why.", "Oh, that's interesting. What does that do?", etc.
We wound up switching to giving code challenges instead.
I think you missed this part. Anybody can go on github and grab a repo and pick a specific section of code to talk about. This still accomplishes the goal of finding out if the candidate can understand code and communicate about it.
Either way, you're asking the candidate to develop code, and then explain their decisions in a meaningful way.
If you want to chat more about it, hit me up. me (at) ericharrison.info
.... tactics sims for militaries ?
For the on-site interview, we ask candidates to bring their laptop. We advise them to use a typical, comfortable development environment. We've seen candidates struggle with the latest Ubuntu, installed that morning to impress us. So we don't want that.
During the hands-on interviews, we ask to see a side project, or any code they are mostly responsible for, and familiar with. We ask them to explain code, maybe change something, refactor a test, etc.
What we don't tell candidates is that this part of the interview is also about how they use their tools. It's important to see how good they are with their editor and command-line tools, whether they can type well, whether they get easily distracted, and so on.
One of the most effective interviews we do is for project planning. A problem is explained in detail and the candidate is asked to design a solution. Not in code, but to talk it through in detail, drawing or writing docs/stories if needed. This phase helps show us how they break down a project, ask questions, negotiate features, and look for opportunities to reduce complexity. Bonus points for making a pen-and-paper wireframe or throwaway prototype.
What we refuse to do is the "puzzle" problems, whiteboard code (which makes no sense), or tricky technical questions. We instead want to find people that use best practices, don't re-invent the wheel, tackle problems pragmatically, and are good with their tools.
Over time, this approach seems to work well. However, we also discovered that we have to re-train and test our own interviewers. Without that step, the process can change unexpectedly, become inconsistent, or unfair. Don't just assume your staff is interviewing well - take time to check it out and help them get better.
Fail. You just killed the quality of your pool.
You've just negatively screened against people with a life (experienced, 30+ years old, generally with a family) and screened for people with no life (aka single, male 20-somethings).
For example, if John Carmack hadn't been able to release his employer's source code (like many people), your process would screen him out (no GitHub or side-project presence).
That kind of works both ways, though, doesn't it? As a 30 year old developer with a family, I don't want to waste time interviewing somewhere that has the expectation I'll work outrageous hours. If I get filtered out early in that process because I'm not involved in multiple OSS projects or whatever, all the better for me.
That being said, resumes are a really shitty way to get an idea of someone's experience and talent. If I'm hiring someone and they can show me a github profile or blog entries or a Stack Exchange profile or slides from a local user group presentation or anything besides their resume, it really helps me get to know them better, and -- all else being equal -- will probably set them apart.
On the other hand, we've also hired people that have not much happening with their public github profile. We still ask to see code they might have worked on (job-related). If they can't show that, we try to have them pair with us on something simple. We have hired people in this situation, but I'm sure we've missed others that may have worked out.
If a young Carmack came through our interview process, I'm pretty sure our team would recognize his talents when asking him to go through someone else's code.
As an interviewer, I simply don't have an hour or two to evaluate someone's project. I want to see a strong resume and ask a few questions to understand whether the resume is real or not. A simple whiteboard question or two is great for that. You can always see whether the person is capable of writing code and it always brings up questions around the choices they made.
As an interviewee, I'd rather not be engaged in a live coding session: the whiteboard is a very efficient tool for getting an idea across, without sweating over syntax errors, searching for API documentation, and whatnot. Test projects - I have a job, thank you very much.
These are people with whom you'll be working for years. If you can't spare a few hours to make sure you are finding the right person, what does that say to a candidate?
I don't know what this says to a candidate, but I surely hope they're mature enough to understand that there's nothing personal about it. They will be treated with dignity and courtesy during the whole process, but that's all I can promise.
I know that when I'm on the candidate side of the fence, it makes a big difference to me if a company's interviewers seem like they've actually taken the time to get to know who I am, instead of walking into the room and that being the first time they've checked out my resume. I realize it's not personal and I don't expect it, but if a company/interviewer puts in the extra effort, it's a definite positive.
It is easy to dismiss something that you have never experienced yourself. Some people are good at whiteboard interviews and some are not. In my experience, the ability to be good at whiteboard interviews is mostly independent of how good the person will be at their job. The questions are either too difficult to avoid the stress-blanking issue, or too easy to weed out the barely-capable-but-not-someone-you-would-ideally-like-to-hire candidates.
We currently do pair-programming interviews with ping-pong. I write a test, you write the production code, then you write a test and I write the production code. We discuss programming issues as we go. This seems to work well for people who are familiar with TDD and real pair programming (as opposed to: one guy codes and the other guy watches them). It does not work so well for people who have never tried it, though. We actually do most of our work this way, so finding people with this experience is good. Unfortunately, I think we've almost certainly missed good talent just because they don't immediately grasp how this process works. We do some hand-holding at the interview, but then you are always wondering about the person's actual ability.
@jimbobimbo, if you don't mind some unsolicited advice, I would recommend that you revisit your stance on maintaining a few non-job projects. A portfolio that shows what you can do on your own is unbelievably valuable (in terms of real dollars) when you are looking for the next job. I have gotten good jobs in the past simply because someone stumbled across my portfolio and decided to hire me. The time required is not all that bad - say, an hour or so 3 times a week for the next year or two would give you a really excellent portfolio. It also allows you to experiment with techniques and tools that would be too risky to introduce into a work project without some experience. Building that experience outside of work also makes you much more valuable even if you don't switch jobs. I think you will find that while it is difficult to find the time, the payoff is excellent. Avoiding low-priority overtime at work and replacing it by investing in yourself is a good way to start.
Having said that, my portfolio is currently a complete mess and I really need to spend some time on it ;-)
"Non-job projects" != "test projects". I do have non-job projects and showcase them when interviewing, but I would not take a test project as part of the interview process, unless I have really good reasons to do so.
I wouldn't present on a technical topic without preparation, because I'm not going to do a good job of explaining it or communicating the ideas - I'd be wasting the audience's time. Same with interviews - you need to be prepared to explain your work and put your experience in as favorable a light as possible.
I think one of the reasons so many developers feel the process is "broken" is because it's all they know (i.e. most developers didn't have a previous profession).
For most jobs it's not practical or possible to get any insight into how a person will do that job prior to hiring them. Whiteboarding isn't intended to perfectly mirror "real life coding" - it's intended to give some insight into one's ability to write software to solve a problem. It's not perfect (and it can certainly be done extremely poorly!), but it shouldn't be dismissed as broken or useless any more than one should suggest actors shouldn't have to audition for roles.
After a couple of job interviews in recent times my personal inclination to invest a lot of time has gone down by a huge amount. For my first application I even took the time to contribute to one of their open source projects (as they asked on their job application page). Didn't get a job - how many times am I supposed to invest that much time for a job application?
Is determining technical skill even the biggest problem? I think my GitHub account shows I can code, even if it mostly contains small projects - but vastly more complex than FizzBuzz. I'm not even afraid of whiteboard interviews (if I had a dollar for every time I was asked to implement Quicksort or compute Fibonacci numbers recursively, I'd probably have ten dollars by now).
Yet I don't get hired. So my conclusion is that there really isn't that much of a talent shortage. Not enough to let companies look beyond my age or my lack of passion, anyway (my answer to the trunk vs branch first development question would be "I don't care much", although I could probably blab about presumed pro and contra arguments).
Edit: just looked up Starfighter again. In a way I am excited, as I have recently decided that only online games that have an API really interest me. However, it sounds like it would require a huge time commitment, too. Wouldn't the time be better invested in side projects for GitHub?
What I don't get is all the whining that seems to happen from interview candidates who seem to think they are hot stuff and deserve to get hired but blame it on a bad interview process. It's not like the interviewers aren't aware of the shortcomings of the technical interview.
I have yet to learn of another industry where people routinely blame interviewers for their being rejected. I mean it seems like in other industries people are just hired almost solely based off resumes, and while everyone also realizes it sucks, they don't seem to think the interviewer is an idiot for not hiring them.
IMO successful startups are the ones where founders had the skill to identify and hire the right talent. What I don't understand is why big companies are not creating a separate division of engineers whose primary task is hiring. Is it because, if such a division existed, the people there would eventually lose their skills, being cut off from mainstream engineering tasks? I am not sure. But it seems to me that this can be solved by treating hiring as an engineering task. What's wrong with seeing the hiring process as another piece of code that needs to be carefully thought out and polished?
But what about positions where you will be expected to give presentations, do pair-programming, or mentor junior developers? I think whiteboard interviews can be a measure of your ability to take technical concepts and illustrate/explain them clearly to a team.
I used to really hate whiteboards, but as I grew into positions that required more leadership, I realized that I personally needed to develop better presentation skills. If a potential employer were to test me on that, a whiteboard test wouldn't seem unreasonable to me.
So I have mixed feelings on this. Not every hire needs to be capable of being a teacher or delivering a solid keynote speech.. some positions would be best filled by someone with those skills.
If you want to measure their ability to pair program ... pair program with them.
If you want to measure their ability to mentor ... have them teach you something they know.
If you want to know how someone does X, and measuring X directly is easy, then "please do Y as a proxy" is never a good approach.
I have never, in my 25+ year career, had to whiteboard a problem where the other person knows the answer and I don't (laughably, I've given correct answers and been told they were wrong), with hundreds of thousands of dollars at stake. It just has nothing to do with the job or with on-the-job performance.
These are behavioral attributes that are important - much more important than a simple binary test.
Furthermore, I think that beyond basic FizzBuzz questions, complex interview questions are fine, but someone failing to reach an objectively correct solution to an objectively difficult problem is just one signal among many. The goal of asking the questions should be to gather lots of other signals about how you approach writing software, not to pronounce you right or wrong.
One interviewer tried yelling at me for no reason, just to see if I would react or flinch.
White-boarding is good for high level design and architecture but not for actual code.
These are indeed useless technical interview questions.
But that doesn't mean there aren't good technical coding interview questions. I tend to favor the kind that presents a candidate with some example data and a set of constraints or patterns, and asks them to write code that analyzes the data for those patterns or constraints and reports them. This sort of problem is ubiquitous in my domain. Importantly, I always choose problems for which there are multiple possible attack angles, not just one. And I don't give a hoot about syntax errors, or what language they use.
This sort of question gives you a good sense about the candidate's analytical capability (breaking down the "word problem"), and their ability to translate their problem-solving thought process into code. Because there are always multiple angles of attack, candidates have some leeway to exercise creativity.
In the end, it's not terribly important to me that they get the optimal solution. I do care whether they demonstrate strong analytical capability in the literal sense, meaning they can decompose the problem and the associated programming exercise into their logical parts and implement them. I also look for good communication skills in the questions they ask when reasoning through the problem - this is something that only an in-person technical interview can reveal, AFAICT.
There are probably many smart candidates that don't do well on these questions because they're just having an off day, or nervous, or don't perform well under pressure. I sympathize with them, because I've been there and felt all of the above.
But if the goal of a technical interview is to assess a candidate's analytical and coding abilities, and their ability to do both simultaneously, there is no shortcut I know of to just giving them a role-relevant problem to work on.
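A hypothetical instance of the "data plus constraints" question described above (the scenario, record format, and names are all invented here, not the commenter's actual question): given a sequence of log records, report every user who violates a simple ordering constraint. There are multiple attack angles - a single pass with a set, sorting by user, etc. - which is the property the commenter asks for.

```python
def checkout_without_login(records):
    # One possible attack: a single pass, tracking who has logged in.
    logged_in = set()
    violations = []
    for user, action in records:
        if action == "login":
            logged_in.add(user)
        elif action == "checkout" and user not in logged_in:
            violations.append(user)
    return violations

records = [
    ("alice", "login"),
    ("alice", "checkout"),
    ("bob", "checkout"),   # violates the constraint
]
print(checkout_without_login(records))  # → ['bob']
```

The interesting part in an interview isn't this code; it's watching the candidate decompose the word problem into the constraint check it implies.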
On test projects: I generally expect companies (after a pre-screening) to put similar effort into me as I put into them. If you have me do a two-day project, I'd expect one person (preferably my future boss or a future colleague) to show me a "company-fitting" solution and take the time to discuss my work and theirs. If you're not willing to spend that time, I'll most likely think you don't respect my time. Which is a factor in choosing a company. I understand you are busy; I hope you understand that I am too.
Edit: not a native speaker, sorry for bad English
This might be a viable strategy for people who have a well-established career/credentials/references (etc), but for junior-level candidates still trying to prove themselves (such as myself) I can't see this working out too well.
To me, it's absolutely ridiculous to expect to hire a junior developer and have them come with a fully-developed set of skills. If they do, that's great. But if you're hiring someone fresh out of school, you've got to be approaching it as hiring someone that you're going to train and mentor. For me, the number one thing I look for in a junior developer is the ability to learn.
Here's an example from early in my own career. I was just finishing up my 3rd year of a combined CS/EE program, and looking for a summer job. I got in touch with a biology lab that needed a developer for the summer to build some (very cool) software to support the neurophysiology experiments they were doing. I looked at the job requirements and thought "well, I don't know most of this stuff, but I'm sure I could pick it up."
The interview progressed like this:
> Do you know Python?
I'd heard about it, but have no real experience with it. I downloaded it last week and started playing with it though, and it doesn't seem too different than other languages I've used.
> How about VisionEgg (neurophysiology module for Python)?
Well, I downloaded it at the same time I downloaded Python. I've managed to get a window to open up, and I'm displaying a square that's got a cool animated habituation pattern on it. (Note: 1 week prior, I had no idea what a habituation pattern was) I do have a bit of OpenGL experience from a class I took, and that's the underlying library that VisionEgg uses.
> Well, so far, you're the only applicant who has actually made an effort to look at the specific tools we're using here. I've got one other applicant coming this afternoon, but unless they somehow have more experience with these tools than you do, the job is yours.
It turned out to be a great experience, and they hired me back on the following summer. I went from being a total Python noob to contributing patches back into VisionEgg. I think most junior positions should probably progress like that. Give me a keen junior developer, and let me shape and mould them into a not-junior developer.
The flip side to this: if you're expecting the person to be productive on day 1 or 2, you'd better be hiring someone with experience. Whether or not they have code they can show you, they should be more than capable of going into serious detail about past projects they've worked on (within the confines of NDAs and such, of course).
1. Show them a problem with your product along with the code. For front-end developers it could be how form validation errors are being presented to users.
2. Ask them to figure out why the code is doing this and observe them troubleshoot the code.
3. Tell them to fix the problem in the code and observe them apply a fix, test, and debug it.
4. Ask them to architect a better solution to the problem and to explain what makes it better. What would be the drawbacks to their solution?
The benefit of this approach is that you can directly evaluate how someone solves problems, rather than just how well they communicate or how knowledgeable they are. It also helps me judge how fast they are. Sometimes I ask people to estimate how long it would take to solve, and time them. The last step helps establish how they think through their solutions and how well they can communicate their ideas.
Additionally, I get to see how quickly they might get up to speed with our codebase and I can hear other solutions to some of the technical problems we are facing which is incredibly beneficial as a small startup.
The worst is the NYC tech scene where they have the exact same standards (which they cargo-cult from the west coast), but they decided not to bring over that aspect where engineers are respected and valued. Instead they borrowed the west coast interview and combined it with the east-coast finance style where programmers are considered clerical workers and cost centers.
NYC is actually a fun city to interview in because, since it combines so many different cultures, you can't even study for an interview: 5 different companies will give you 5 different interviews. Finance companies still love brainteasers and they _love_ mutexes. Seriously, if you're interviewing at a finance shop, just memorize Java Concurrency in Practice and the producer/consumer Wikipedia article, because they are reading questions from there - even though when you show up on your first day you'd have been better off having read Spring in Action and Headfirst Enterprisey Design Patterns or whatever.
In general I love how these companies have elite hiring standards but incredibly mediocre interviews. You get asked the same questions over and over again by these companies. Find the biggest sum in a list, find two items in a list that sum to a given number, sort a list of integers/strings but keep integers where integers were and strings where strings were, copy files with ids to all servers with ids, etc. What's the difference between an abstract base class and an interface (by a Python/Node shop), what is a closure, etc etc. I've heard so many of the same questions repeated over and over. They have "high bars" for their candidates but apparently their interviewers can just use the first link off of Google or re-use whatever Facebook was asking in 2010 when they got rejected by them.
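For reference, the "two items in a list that sum to a given number" question mentioned above has a standard one-pass answer, which is part of why it's so over-asked - a sketch:

```python
def two_sum(nums, target):
    # Classic hash-map solution: for each value, check whether its
    # complement has already been seen; one pass, O(n) time and space.
    seen = {}  # value -> index
    for i, n in enumerate(nums):
        if target - n in seen:
            return seen[target - n], i
        seen[n] = i
    return None

print(two_sum([3, 9, 12, 7], 10))  # → (0, 3), since 3 + 7 == 10
```

When the canonical answer fits in ten lines and tops the first page of search results, the question mostly measures whether the candidate has seen it before.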
And then at the end we have a "tech talent shortage". Whatever happened to that part where we claimed a false negative wasn't a big deal? (glad the article calls this attitude out).
Tech hiring is totally broken and then they claim there's a shortage. There definitely is a shortage of qualified interviewers, not so much a shortage of qualified candidates. I have a friend who I mentored into the industry, she's very smart but absolutely a junior engineer, and yet she starts a new job and was doing interviews with _no_ training 3 months later. They just threw her into the lion's den and expected her to figure it out.
Coding tests are another great one because the exact same people who talk about the importance and value of data throw all of that out the window when it comes to evaluating them. There is no calibration or standards, generally. One person is offended by a hardcoded file path but doesn't care about whether you have tests, and another person is the direct opposite. Many people expect you to write extensive optimizations for the 100Kb input file you were given, another person sees that as absurd premature optimization. Whether you make it through a code screen is entirely tied to whether your coding style happens to jibe with whoever is reviewing you.
Again, the West Coast is better because at least compensation packages reflect the hoops you have to jump through and the monkey dances you have to have learned on cue. On the East Coast, anybody paying attention is desperately trying to get into management by the time they're 30, because to do otherwise is to be humiliated and infantilized during the interview process _and_ during your tenure working there. My simple solution to all this nonsense is to make sure your CTO and head tech managers are being made to jump through all the same hoops with all the same standards. Drop this whole "Oh, the CTO is a _manager_ role, he shouldn't have to worry about all that." (again, more of an East Coast attitude).
Another incorrect thing tech companies claim that benefits them at the expense of labor, which engineers brainlessly parrot: "false positives are expensive because firing is hard". No, it isn't; I've seen many people fired very easily the second they're not up to standards. At worst they're instantly let go, because it's at-will employment. At best, they're given some absurd Performance Improvement Plan that establishes a paper trail so they can be fired without severance. And if your employer tries to tell you people have come off of those things, I have news for you: people lie, and liars are good at reaching senior management positions.
Something else is how you can put in hours and hours of your life, and get _no_ feedback, because telling you how you scored on an interview might expose them to legal liability. Let's be real, the legal liability is when they reject qualified people for things like "culture fit", and if your interview process exposes you to legal liability, maybe it's because it's illegal and unethical.
Another thing tech companies should consider: instead of paying insane money to recruiters to act as pushy salespeople who try to dupe engineers into low-paying positions, maybe just redirect that money to the engineer instead. It's 2015; the days when sleazy recruiter types could fast-talk an engineer into a position that's not good for him are over, because we have the internet and we can read your terrible Glassdoor reviews.
I'm convinced one huge reason all this happens is to discourage job-hopping, because in this market, liquidity would probably help salaries move up faster.
I know I sound bitter, but again, these are the exact same companies running to the taxpayers to spend hundreds of millions of dollars of middle-class Americans' money to solve their tech shortage crisis. Yet they absolutely refuse to evaluate their own hiring processes. And a huge chunk of engineers just eat up all this dogma about how hiring is hard and these processes are necessary, and don't think for one second that all these processes are designed to please the employer at the expense of engineers being treated well. In a few years, there will be another recession and we're all going to be "rightsized" away, so demand to get treated well while you still can.
"Now, this does require one huge prerequisite: every candidate must have a side project that they wrote, all by themselves, to serve as their calling card.
I don’t think that’s unreasonable. In fact, I think you can very happily filter out anyone who doesn’t have such a calling card. (And lest I be accused of talking the talk without walking the walk: I am very happily employed as a full-time software engineer; I travel a lot, and I write books, along with this here weekly TechCrunch column; and I still find the time to work on my own software side projects. Here’s my latest, open-sourced.)"
The now-routine requirement of having side projects in 2015 is the job posting's equivalent of the bachelor's degree being the new high school diploma/GED. If every programmer did it, then every programmer would in theory be far more qualified for the interesting jobs where more difficult things happen, but would ultimately fall into the same trap of still not having done enough compared to the people with the side-project programming equivalent of a master's.
This will become a vicious cycle until companies with more experienced programmers realize that life is not about programming, and most of the stuff you're working on is not all that important, with even your average (or slightly below average) programmer being capable of doing the work. In most cases, it's a job like any other. I fear that this may never happen.
I think some of these average companies should recognize themselves for what they are and be more accepting of average candidates. Give some new people in the industry a chance, train some entry-level people, especially if the work you're doing is not really cutting edge tech but just web apps or data analysis stuff that bright but not world-class people can learn with practice.
Also, companies should just focus more on candidate experience. I interviewed with and was rejected by Facebook a few years back. They did expect me to jump through hoops, but in general they had a few original questions, they gave a great candidate experience with polite recruiters, they put me up in a nice hotel and expensed it instantly, and I knew that if I did get through those hoops they were going to pay me a lot of money. I didn't get it, but I felt fine afterwards.

I'm angry at all the average companies that don't do any of that but think they're entitled to put you through the same grinder. My only big criticism of companies like Facebook is that they should give more feedback, so you feel like you have something to improve upon.

Also, I know Zuckerberg campaigns hard for H1Bs, but when Facebook is paying people what they're paying, I assume he genuinely does want to find the world's best and is not just trying to undercut labor. Although most H1Bs are in fact about undercutting labor, and the simple solution is to change it to an auction rather than a lottery and give workers more time to look for new jobs before they have to leave. (Note: I do not work for Facebook; I interviewed there once and got rejected, but had a massively more positive experience with them than with most companies. Google is also an excellent company to interview for. I'm sure there are some others, but not many.)
Walk yourself through that for a second. You're saying a company that's hiring would intentionally make the process difficult for applicants in order to help their competitors retain their employees? That doesn't make sense. Even if they conspired with said competitors for this purpose, they would simply be ceding the hiring advantage to anyone who wasn't conspiring with them.
Occam's razor suggests a simpler answer: it turns out that identifying good software workers is a hard problem.
Do you realize that many "competing" startups have the same VC firms investing in them, firms that don't want bidding wars over engineers?
They want new talent but they don't want their current talent bouncing around for more money.
> They want new talent but they don't want their current talent bouncing around for more money.
Of course everyone wants to retain employees, and I'm sure that many will stoop to less-than-ethical means to do so. I never suggested that the possibility of a conspiracy between companies was crazy, just pointed out why it's ineffectual. Google and Apple can maybe make it work, because there's no substitute for those names on a resume. But once you start going down the brand name ladder a bit, it is (a) infeasible for the large number of companies to conspire together effectively and (b) far more likely that some companies will refuse to conspire, and those companies will get the upper hand in the hiring market.
To use a crude analogy:
Burger King and Starbucks don't mind if a KFC or whatever opens across the street, because while the newcomer is "competition", it isn't a big deal, and may even bring in more customers overall (because now that area is a "food zone").
They will, however, all fight tooth-and-nail against raising the minimum wage, because that results in higher costs for everyone.
Incidentally, I think you're wrong about "one huge reason all this happens is to discourage job-hopping" ... I think it's actually just that people don't know what else to do, and this is what they've always done. It's how they were interviewed, and they got the job.
The places that asked me things I didn't know were probably looking for somebody with different experience than mine, so that's fine with me too, even though it would be better if they evaluated my experience from my resume.
So I don't see the author's problem. If the questions in the interview are too hard, you are probably applying above your level or in the wrong field. I will most definitely not spend my time working on some programming test or doing some side project for "show-and-tell" to please you.
Heck, he appeals to other professions allegedly not doing interviews, yet it's even harder to imagine them doing the things he suggests. What kind of side project would a doctor have? A civil engineer? A manager? A pilot? A chef? A lawyer?
I'm sure it's stressful if you don't know what you claim you do. Then again, if you list six programming languages on your resume and can't XOR-decode a string in whichever one you choose from that list, you deserve to sweat a little.
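For reference, the baseline task mentioned above really is small. A sketch in Python (the function name and sample key are invented for illustration):

```python
def xor_decode(data: bytes, key: bytes) -> bytes:
    """Decode data that was XOR-encrypted with a repeating key.

    XOR is its own inverse, so the same function both encodes and decodes.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"hello, world"
key = b"k3y"
ciphertext = xor_decode(plaintext, key)   # "encoding" is the same operation
assert xor_decode(ciphertext, key) == plaintext
```

A candidate genuinely comfortable in any language on their resume should be able to produce the equivalent in a few minutes.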
For more experienced candidates, we've swapped out most of the whiteboard questions for several other types of interviews, including in-depth technical discussion of one of their past projects, a discussion of how to architect a non-trivial system, and a more soft-skills interview on how they work with others, break down projects, and so on. This gives us a much rounder picture of a candidate, and allows candidates who are very good at something to shine a bit more.
I was stunned by the end of the interview because they got out everything one would want to know about how good a person is at their everyday job.
The worst interviews I face are when people try to test my algorithm skills, which is generally a test of how much I could memorize from careercup.com.
When the candidate is ready he/she announces it and we review the solutions together.
It is not perfect, but I find it much less taxing than being asked to solve a technical problem on a whiteboard in front of an interviewer.
But put me in front of 3 guys I've never met whose purpose for being in the meeting is to expose every single one of my weaknesses, push me up in front of a whiteboard and ask me to explain the best way to solve a maze puzzle, and I'm going to freeze up and fail the interview. The reason is, this is simply not how I work. I work by sitting at my desk, thinking about the problem presented, and writing code to have the computer do all the work. It may very well be organizationally convenient to have these psychological lynchings occurring, but I highly doubt it gets the absolutely best candidate in the door.
I look forward to new solutions to the problem of finding qualified people; in my case, I feel I unjustly failed an interview at a company where I could have been quite productive. Such is life in the modern software world, alas.
I also don't think hiring by committee decision makes much sense. It reduces the one-on-one evaluation time by the hiring manager. If you have 6 people doing 30-minute interviews of a candidate, everyone gets a few good questions in, then the decision comes down to a consensus of gut feelings. Plus, the manager can deflect blame for bad hires.
But changing this requires reorganizing people and job duties in a company, and they'd rather just look for the next hot interviewing trend. (Edit) But as far as trends go, looking at real work is at least better than contrived exercises.
I'm not buying that argument. I would, however, agree that recommendations and references create a lack of diverse ideas/approaches. As for a lack of racial/gender diversity: that would indicate that the people in the minority just aren't reaching out or participating in the community.
They should come out to the non-discriminatory user groups (i.e., groups that don't cater to a particular gender, such as xNUG/xJUG, etc.).
What I've found, in others and in myself, is that I've held myself back from going to meetups because I didn't feel immediately welcome the first time, or because I didn't think I'd know anyone. However, you can't expect everyone to welcome you with open arms on the first go. It's easier to blame the issue on racism/sexism than to admit that you may just be shy. (That being said, I try to introduce new people at my meetups to the group and to each other.)
It seems like figuring out problems like this is what marketing is all about.
Except for the creepers issue (which can only be solved by bringing it up with the organizers), that's on the attendee. You can't make connections if you're not there. Don't blame the organizers for that.
Think you can pass a hardcore whiteboard interview? Go for it.
Would you rather spend a day and solve a real problem we have? Let's see what you've got.
Think you have the attitude, just not yet the knowledge to match our expectations? Do you have time for an internship? OK, cool. Let's play.
One size fits all is a bad idea for interviews, just as it is for how teachers test their students.
Interviews should really be limited to check personality, and not much more imo.
So this works great for people who partake in such things, but there are clearly still many great developers (perhaps the vast majority) who don't have this type of extensive public profile. These are the applicants I fear false negatives for the most, and the candidates for whom I am least confident in my interview techniques.
Could it be that the proper way to technically interview such a candidate is to offer them a "fellowship" to work on an open-source project? Point them at any project (used by the company or a favorite project of their own) that they'd like to contribute to, and offer them a small stipend and enough time to make a legitimate contribution to open source? And use that as the technical portion of the interview?
In my bubble things aren't perfect, but they're quite different.
1) College graduate: Keep the traditional algorithm programming tech interview. But allow the use of a laptop (no whiteboard)
2) 2 - 7 years engineer: Require a github side project or give a home coding test. Interview will focus on discussing the project implementation. Also ask them to add a simple feature.
3) 7 - 15 years engineer: Ask them to come to the office and show them some bad code from the company source control. Ask them to explain why it's bad and to refactor it to something better.
4) Express route: Instant offer, no interview. Only "interview" is by a VP and it's more about trying to convince the candidate to join the company instead of the other way around.
The first 3 scale according to typical life situation and industry experience. The amount of free time tends to decrease from college graduation through one's 30s and beyond. For example, a college graduate has lots of free time. Someone in their 20s has less, but still a significant amount. People in their 30s and beyond tend to have little free time due to family responsibilities.
But the interview styles also match what they should know by that point. A college graduate doesn't know anything about real industry coding so the typical algorithm coding interview is ok. Someone with 2 - 7 years experience should be good at writing lots of code. But they don't yet have the experience to know that sometimes deleting code is better than adding more. They also don't have enough experience yet to read code well and refactor, their "code smell" sense is not yet developed and they think adding more code is the solution to everything. An industry veteran of 7 - 15 years should be able to read code well, spot all the issues, and be able to refactor into something better. These skills can only be gained after years of experience.
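A refactoring exercise of the kind described for the 7-15 year bracket might look something like this (a hypothetical before/after sketch in Python; the function names and data shape are invented for illustration):

```python
# Before: the style of code a candidate might be shown. Deep nesting,
# no early exits, intent buried three conditionals down.
def total_paid_bad(orders):
    total = 0
    for o in orders:
        if o is not None:
            if o.get("status") == "paid":
                if "amount" in o:
                    total = total + o["amount"]
    return total

# After: a flat comprehension makes the filtering conditions and the
# aggregation obvious at a glance, with identical behavior.
def total_paid(orders):
    return sum(
        o["amount"]
        for o in orders
        if o is not None and o.get("status") == "paid" and "amount" in o
    )

orders = [{"status": "paid", "amount": 10}, None, {"status": "open", "amount": 5}]
assert total_paid_bad(orders) == total_paid(orders) == 10
```

The point of such an exercise is less the final code than hearing the candidate articulate *why* the original is hard to maintain.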
So the 3 interview styles scale well with the free time candidates are likely to have, while testing whether they've really grown as engineers. The last one, the "express route", is reserved for referrals from the company's best engineers. For example, if the company has this tech ladder:
junior engineer, engineer I, engineer II, senior engineer, senior engineer II, principal engineer:
Then only a senior engineer II or above who has been at the company for at least 3 years would be allowed to make ONE express-route referral per year. The reasoning is that an engineer has to be in the industry for a while to meet other good engineers, and 3 years is enough to learn the company culture. This express-route system would effectively give the company a competitive advantage. For example, imagine a very good engineer who is already employed. They are busy and don't want to go through any kind of interview process. But if they are given an instant offer, that greatly reduces the barrier, and they are much more likely to seriously consider switching. The main idea here is that good engineers have a lot of options but not much free time to interview, and your good engineers should know a few of them at other companies, so the express route is a way to get more good engineers who would usually not consider moving because they don't want to spend time interviewing.
I tried hiring the normal way (CV, white board) and had one candidate worth speaking to in over 200 applicants.
We decided to try something else. We stopped reading any cover letter or CV and wrote that in the job ad. We asked candidates to build a simplified version of what the job was about, taking about 4 hours, then to send us the code which we'd discuss with them over the phone at their convenience. Anybody whose code passed would be hired, and if people couldn't be bothered because they were already famous they could send us their libraries. Nobody took the Starcraft best of 7 option :(
Why we did this:
- having to write code eliminates those who can't, or who can't be bothered; not having complex unrelated hoops to jump through eliminates those who are merely good at career climbing;
- we replaced the hours candidates would normally spend dealing with HR or travelling to a site for something ("cultural fit") that overselects the polished anyway, with something people we want would enjoy doing (solving a problem, writing code);
- no pressure, and access to your "external brain" (Google, your own libraries), unlike with a whiteboard; this allows the anxious to pass, along with those who outsource a lot of their knowledge to the machine (like me);
- you can tell a lot about how people will approach their job from the way they approach the small version of it; documentation, structure, abstraction levels, choice of libraries, and if you're wrong about the candidate's reasons you can always clear it up in the chat;
- it's fair because everybody has to tackle the same problem;
- no discrimination since it's all ability based: I took to Skype-text-chatting with candidates instead of voice calls mostly because I don't like talking on the phone. It was only when she sent us her passport scan for the contract that we realised one of our candidates was female. Another team member didn't have a CV; it's because he was just finishing high school, as we discovered when we talked to him, but his code was good enough, so he got the offer and skipped university.
I've never had to put out another job ad (not for myself anyway) because I have so many qualified candidates left over from that round, and from the team network. Surprisingly, we didn't get spammed; the requirement to post code seems to have been mostly understood and we only had a half dozen "dear Sir/Madam"s.
So I disagree with you, but YMMV. We were hiring for a small, distributed team solving relatively well known problems. Requirements are probably different at Google or Facebook.
Think about it another way. How much time do you spend/waste when applying to a standard corporate job?
I graduated at the height of the Lehman fallout. I really wanted to work in finance. I applied to over 200 companies and did a seemingly endless string of interviews. In one case, a fund interviewed me an incredible 17 times for an analyst position (about half of which were "technical", i.e. "here's a bunch of accounts, tell us what you think of the company"). On every occasion that led to an offer, I spent way more than 4 hours travelling, talking with HR, doing various rounds of interviewing with various team members, having "social" coffees with other same-level folks... hell, just the automated screening tests, essays, and forms for somewhere like Goldman Sachs would take a good couple of hours if you did them properly. I still know many experienced people for whom a job search during an economic downturn means sitting for months at home doing these stupid screening tests and perfecting their cover letter (after all, putting the wrong company name in your letter is enough to get you rejected, because of course you really, really want to work at this particular bank and none of the competitors, even though the reputation, work, and compensation are identical).
I think a lot of more experienced people realised the trade-off and thought this was quite a positive signal on our part - at least based on the number of people who were willing to quit incredibly well paid jobs for our pretty average Asian retailer with no equity and a third of the pay. I had one prominent member of the open source community slam us publicly for "wasting people's time with free work" whilst one of his work colleagues was gleefully submitting his solution to the task...
I have seen a lot of PM-ish/weak programmers complain about technical interviews. Yes, whiteboard interviews are not perfect and the false-negative rate is typically high, but companies that want nothing but the best care less about false negatives and more about false positives. Anyone who has worked long enough in software engineering professionally knows the enormous cost of false positives. These companies also typically care less about which languages and frameworks a candidate knows at the moment. They instead put huge weight on evidence of whether a candidate can analyze a challenging problem, apply computer science, and obtain a solution. Why? Because these companies are actually working on problems that do require deep, hard computer science.
Most - perhaps 80% - of software development houses are not like that. Programmers at those companies have never bothered to think about collisions in hash tables, and they often wonder why anyone would care when libraries take care of everything. They never need to create data structures for trees, graphs, or even linked lists, because simple things have always gotten the job done. They wonder why they are being asked questions about those "arcane" things when nobody really uses the computer science they were taught at school.
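For context, the hash-collision fundamental mentioned above fits in a few lines. Here is a toy sketch of separate chaining in Python (the class and method names are invented for illustration):

```python
class ChainedHashTable:
    """Toy hash table that resolves collisions by separate chaining:
    each bucket holds a list of (key, value) pairs."""

    def __init__(self, n_buckets: int = 8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Two keys with the same index land in the same bucket: a collision.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:          # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

With only a couple of buckets, every lookup walks a chain, which is exactly why collision behavior drives hash table performance.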
That's the culture problem. Those 80% of companies tend to copy the interview process of the other 20% even though it is meaningless for what they do. Interviewers typically choose their questions from algorithms textbooks or even internet searches. Then they get stuck on a question for a decade and proudly mention it as their "favorite question". That's completely wrong. I almost always ask CS questions that I myself needed to solve to get my job done. I typically retire my questions within a month, because by then I've probably found another problem where applying the right algorithms and data structures was the key. When I ask these questions of candidates: (1) there's little chance the question is covered in books like Cracking the Coding Interview; (2) there's no risk that the candidate is just reciting a memorized brain dump; (3) I know the candidate is smarter than me if s/he can solve it within an hour under pressure; (4) if they fail miserably, I know this person could have screwed up our project just last month had they been hired.
Bottom line: ask questions from your actual day job. It takes a lot of skill and effort to abstract away the incidental complexity and form a good interview question, and that's why being an interviewer is hard. Whiteboarding works if you do that. It doesn't work if you are copying companies that do a different kind of work than you do. It most certainly doesn't work if you never actually needed to solve the problem you are asking in order to get your own job done.
People can learn technical stuff and grow professionally, but chemistry is harder to fix. And, contrary to how some programmers might see it, even code is communication and, a bit surprisingly, subject to chemistry. You can talk to some people via code. Not all people.
I consider the technical part merely as a way to filter out people who seem to be missing something blatantly obvious. I'm fully aware that my system will sometimes generate false negatives but there's no such thing as a fully objective interview.
I have a few technical things or topics that I usually ask about and I expect every candidate to know at least a few of them but certainly not all. I generally go forward topic by topic until the candidate has at least N things s/he knows about. Of those that the candidate does know about, I ask more and more details until I run out of questions or the candidate runs out of answers. If I'm getting strong signals early that the candidate is likely to be ok, I'll just try to finish up the technical part quickly.
Then I proceed to the most important and revealing part, which is asking about a project that the candidate is really proud of, work or hobby. In the best case I get a lecture, down to the details, on something the candidate built that s/he's still excited to explain, on the coolest things s/he managed to build into that project. A good programmer pours so much passion into some, possibly minuscule, part of what s/he's building that you should surely be able to draw some of it back out.
Another important part is asking about hobby projects: if a particular candidate doesn't program in his/her spare time, I'll probably go with any other candidate who does, unless the first is exceptionally strong otherwise.
I try to remove pressure from the interview by acknowledging that, given some rough preliminary filtering, I could go wrong either way. Maybe I reject a good candidate because I don't know as much myself. Maybe I accept someone who's really nice and seems to know about things, passing my filters, but who turns out to be unable to produce much code when it comes down to it. All of that will happen some day.
The trial period is there so that both parties can revert an obviously wrong decision. We haven't needed it yet, but I greatly prefer postponing some responsibility until later and relaxing the actual interviews as much as possible. People don't like to be grilled and I don't like to grill people, not only because it's exhausting but because it doesn't seem to be a particularly effective indicator.
There will never be a silver bullet for interviewing because there will never be a silver bullet for meeting people the right way, but if I were to suggest one thing it might be to listen more and ask fewer questions. By listening, I don't mean letting a babbling candidate take over the interview. By listening, I mean figuring out who this person is, where s/he's coming from, and where s/he seems to be going.
Being the fastest and most accurate shooter isn't much of a value unless you know what you want to shoot, what needs to be shot, and why.