Way too many questions get labeled (on the internets) as "puzzles" and "brain teasers" when they are simply abstract algorithm questions. Crossing rivers with chickens and wolves, and/or figuring out the prisoners with the one room and the light-bulbs and all that crap are brain teasers and are totally useless. Impossible questions are not brain teasers. If I asked you how many tires are in the city of Chicago, and you said 1000, what should I think? What does that tell me?
Even still, way too many programmers decide to get intellectually offended (at least on the internet) when they hear about some problem that may involve, say, dropping light bulbs off a building to find the highest floor from which they can be safely dropped. This is nothing but an algorithm development question for a problem that doesn't have an off-the-shelf solution. This is testing your algorithmic thinking skills.
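To make that concrete: the classic two-bulb variant of this question (the number of bulbs and floors here are my assumptions, not the comment's) has a well-known optimal strategy — with k drops available the first bulb can probe floors k, then k + (k-1), and so on, so k drops cover k(k+1)/2 floors. A minimal sketch:

```python
def min_drops_two_bulbs(floors: int) -> int:
    """Worst-case number of drops needed with two bulbs.

    With k drops, the first bulb probes floors k, k + (k-1), ...,
    so k drops cover k*(k+1)/2 floors in the worst case.
    """
    k = 0
    covered = 0
    while covered < floors:
        k += 1
        covered += k  # each additional drop extends coverage by k floors
    return k

print(min_drops_two_bulbs(100))  # 14 for the 100-floor version
```

There is no off-the-shelf library call for this; arriving at the triangular-number insight is exactly the algorithmic thinking the question is probing.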
And as always I hear that phrasing such and such algorithm question "in real terms" is better than phrasing it abstractly. I've never seen a compelling argument that this is true, except for the faux outrage people have about puzzle questions. Phrasing it abstractly is quicker, simpler, and clearer. It removes all the hemming and hawing about implementation details. It removes answers like "doesn't Intel have a library that does that?". It gets to the point, and quickly.
There is a second problem with that angst, and comments like: "I just can't muster any enthusiasm for completely random arbitrary puzzles in the face of so many actual, real world problems.". It is that we are computer scientists and abstract problem solving is something we should care about. Abstraction shouldn't be some barrier that makes it hard for you to think about a simple problem. Abstracting the real-world details away from you shouldn't get you emotional.
Besides, if I had an interview that was only a water cooler talk about something in the past, Jeff Atwood would smack that out of the park. And then I'd end up with a shop full of Jeff Atwoods. wink.
In theory, theoretical questions are great. In practice, they are fiddly little things that must be asked precisely correctly to match the "correct" answer the interviewer has memorized, are frequently botched, and almost invariably hinge on a "gotcha" answer — honestly, I would rather just see a coder actually code. They also have a laser-like focus on an aspect of the job that in most cases will make up less than 1% of your work, no exaggeration.
I've done some algorithms work at my job, but in all my years it has always been replacing O(n^2) or O(n^3) solutions with O(n) or O(n log n) in very straightforward ways. Had I chosen an O(n log log n) answer, it would have been wrong for being radically more complicated.
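For illustration, here's the flavor of that kind of straightforward replacement — an O(n²) pairwise scan swapped for an O(n) hash-set pass. The duplicate-check task is my example, not the author's:

```python
def has_duplicate_quadratic(items):
    """O(n^2): compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """O(n): remember what we've seen in a hash set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

The two functions return identical results; the second just stops redoing work the first one throws away.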
Basically, my problem with the puzzle questions is that they reflect misplaced priorities. You could ask gotcha puzzle questions (whose answers, by the way, I've probably already read and memorized...), or you could ask the interviewee to write a script in their favorite language to count the words or whatever, and learn both everything the puzzle question would have taught you and whether they can code.
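A "count the words" screener of that sort might look like this (a sketch; the comment doesn't spell out the exact task):

```python
import re
from collections import Counter

def word_counts(text: str) -> Counter:
    """Case-insensitive word frequencies."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

counts = word_counts("the quick brown fox jumps over the lazy dog, the end")
print(counts.most_common(1))  # [('the', 3)]
```

Ten minutes of this tells you whether someone can decompose a small problem and actually produce running code.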
This may be a side effect of the fact that I refuse to spend eight hours interviewing someone. I want it done in less than half an hour unless there's a good reason to make it go longer (which does happen). Puzzle questions are a waste of time in that context. I guess if you're planning on wasting a candidate's entire day, you can afford to waste time on puzzles. (I have read the research saying we make hiring decisions far faster than in eight hours, which matches my experience, so why sit there and fiddle around all day after the decision has been made?)
I find it easy to believe that many people make a decision in far less than eight hours of interviewing; however, I do not believe that all interview processes lasting eight hours are a waste of time for the candidate and the interviewers.
To pick an example, what about a process involving twelve different interviewers, each of whom takes between a half hour and an hour, with the process spread over a certain amount of elapsed time? Can we really assert that we cannot make a better decision with this process than with having one interviewer spend thirty minutes with the candidate?
Puzzles and psychometric testing are useful, but they have a lot of limitations. However, if a job involves a high level of responsibility (e.g. people's lives), you may want to assess a candidate's response to stress or crisis. Again, it's not perfect -- you never truly know how someone will respond in a novel situation -- but in those circumstances you might want all the information possible.
References, by and large, are useless.
Cultural fit is incredibly important, but very few interviewers or interview techniques assess it adequately. There is so much bias involved that it's hard to evaluate well.
So what are the keys for me? Validating what they have on their CV. If they have experience with a particular technology, I ask about it. I propose problems they are likely to encounter every day on the job, and see how they go about solving them. From what I've seen, abstract problems do not exercise this sufficiently.
Where to from there? Well, if what they have been doing is a fit for the job, that's great. If there is a gap, you assess what it will take to bridge it. How a person has conducted their career is a good indicator of how achievable that is.
You are replying to a comment suggesting that there may be value in a process using multiple interviewers over a total time of eight hours.
It could well be that one, some, many, or even all of the multiple people involved use your suggested process, and that it takes a total of eight hours to get a good read on past performance.
So... is your comment orthogonal to mine?
Ahha - yeah - I think I started to reply and wandered off topic :)
I think it was more in reply to the OP's original comment about puzzles being a waste of time. I think 8 hours is fine - if it's progressing in the right areas.
I tend not to like panel interviews, but a broad range of interviewers isn't a bad idea - the best interviews I've had usually include walking the floor a bit, seeing the environment, chatting to potential teammates - although I'd worry that it would be hard to get a consensus if too many people are involved.
Marissa Mayer says, in all their studies of the Google interviewing process, references are the best indicator of future performance.
This may well have been the case for Google, but probably isn't for many others.
You also skipped out on something I explicitly mentioned, which is that there are times I go over when it is called for. (Recall that the median interview result is "obviously no".)
Also bear in mind that when I talk about "wasting time", I'm also concerned about the candidate's time. Spending 8 hours interviewing someone you decided "no" on in the first 30 seconds is doing nobody any favors. That people defend this practice boggles my mind. (I don't know if you're defending that or not, but I've seen it defended elsewhere.)
(Also note I did not say you should turn them out in 30 seconds, that's not respectful either. They should have a chance to convince you you are wrong. It's rare, but it happens. But the third hour vs. the fourth hour is unlikely to produce any changes.)
Time is money, and that includes the candidate's time. The idea of flying people out for a two-day interview cycle actually sort of angers me unless they're planning on compensating me for it, and I mean with more than a few free meals and a hotel stay.
I didn't ignore that, I was thinking about it when I suggested people taking a variable amount of time to perform an interview.
Spending 8 hours interviewing someone you decided "no" on in the first 30 seconds is doing nobody any favors. That people defend this practice boggles my mind. (I don't know if you're defending that or not, but I've seen it defended elsewhere.)
I am absolutely not defending that practice. If, for example, you plan to have six people do consecutive interviews and you also give each person a veto, there is no reason to have person two through five do the interview if person one votes "NO HIRE."
On the other hand, if your policy is to have a discussion after each interview where the interviewer raises concerns that subsequent interviewers may wish to address, then person one might terminate their interview but persons two through five might feel it's still worthwhile to continue. Or perhaps not.
The idea of flying people out for a two-day interview cycle actually sort of angers me unless they're planning on compensating me for it, and I mean with more than a few free meals and a hotel stay
I think you are mixing your strategy for getting hired with a discussion about the best strategy for hiring. I always try to remember that it is not my job to hire myself, therefore when putting together an interview process it may make sense to do things that would cause me personally to decline to pursue the opportunity. To give a very simple example, I personally do not like writing code in an interview. I strongly dislike trying to "Guess the coding style the interviewer is thinking of." I never know if what I write will be not clever enough or too clever, especially when given something ridiculously trivial to implement. I feel a lot of pressure.
Nevertheless... I have had very good results asking candidates to write code in the interview process. Oh well!
I certainly may have brought my own feelings into the multi-day marathon issue, but it's for the purpose of making sure I understand the other side of the table. I think not appreciating giving away two days of your time is of a different nature than not liking to write code in an interview. The latter seems pretty clearly like a "suck it up" problem, if you know what I mean. Personally, I consider it a given that the interview is unpleasant for the interviewee; all I can do is minimize that, never eliminate it, because the stress and uncertainty are sufficient on their own to make the process unpleasant.
In interviews when I ask code problems, I try to do my best to work with the interviewee. They get to pick the language, and I'm generally not looking at any style issues at all (if by "style" we mean "things that have no impact on running code"). If anything I'm a little too accommodating when I also ask them to choose their own data structures for the problems in question. This throws a surprisingly large number of people. (It's the price of allowing people to pick their implementation language; I'm not going to go write a separate test problem for each of the ten languages I might reasonably expect to get an answer in, especially when I don't even know them all. I actually feel I've sort of blundered into a good test here; a surprising number of people know "arrays" and nothing else and choose very bad representations for things of their own free will.)
Indeed, programmers as a whole really have no clue about the biases they inject into interviewing somebody else. The knee-jerk dogmatism that underlies the various "religious wars" in programming is merely a large, easy-to-see manifestation of the same fundamental myopic expectation mechanism. :-(
I wrote back, "What is the maximum number of coins that you can process in 5 weighings to find the heavy one? If you know the answer to that, then you know that I know the answer to your question."
I didn't hear back. I guess they didn't understand the algorithm or they thought I was a wise-ass. Of course I could have been filtered through HR.
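For reference, the answer that reply is hinting at: a balance weighing has three outcomes (left heavier, right heavier, balanced), so n weighings can distinguish up to 3^n coins when exactly one is heavy — 243 coins in 5 weighings. The implied ternary search, simulated with a plain list of weights:

```python
def find_heavy(weights):
    """Index of the single heavy coin, using about log3(n) weighings.

    Each round splits the candidates into three near-equal groups and
    'weighs' the first two against each other; the balanced case
    implicates the third group.
    """
    lo, hi = 0, len(weights)
    while hi - lo > 1:
        third = (hi - lo + 2) // 3
        a = sum(weights[lo:lo + third])            # left pan
        b = sum(weights[lo + third:lo + 2 * third])  # right pan
        if a > b:
            hi = lo + third                  # heavy coin in left group
        elif b > a:
            lo, hi = lo + third, lo + 2 * third  # heavy coin in right group
        else:
            lo = lo + 2 * third              # heavy coin in the off-scale group
    return lo

coins = [1] * 243
coins[200] = 2
print(find_heavy(coins))  # 200
```

With 243 coins the loop runs exactly five times, matching the 3^5 bound.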
Questions of the form "How many X are there in Y" are vanilla sizing questions, which are not first-order relevant for coding but are crucial for even the most basic engineering. The notion behind these questions is that an interviewee should be able to take a topic they're nominally familiar with (their favorite beverages, their home state), find a candidate correlation, and extrapolate (there are Y million people in my state, and the average consumer consumes X beverages per unit time, so roughly X × Y million beverages per unit time).
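As a worked example of that extrapolation, with every input an outright guess (none of these numbers come from the comment):

```python
# Fermi-style sizing: beverages consumed per year in a hypothetical state.
population = 10_000_000            # assumed state population
beverages_per_person_per_day = 2   # assumed consumption rate
days_per_year = 365

estimate = population * beverages_per_person_per_day * days_per_year
print(f"{estimate:.1e} beverages/year")  # prints 7.3e+09 beverages/year
```

The point of the exercise is choosing defensible inputs and multiplying them out, not landing on any particular number.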
Questions of the form "You have X consumable Ys; how do you accomplish task Z within those X Ys?" (also phrased as "how few consumable Ys do you need to accomplish Z?") are about resource allocation or consumption. They are frequently used to ask about O-notation without formulas and without programming/scripting interviewees telling you "oh, I don't remember how to write bubble sort... but that's OK because every library has all kinds of sorts now", or to find out whether system administrator candidates know about automation tools and how to apply them.
The questions are very much not about rote memorization of brain teasers, and if the interviewer can sniff that behavior out it's _very_ negative: the interviewer most probably has no interest in the number of gas stations in Texas and will never be at the leaning tower of Pisa with two lightbulbs and an afternoon to spend. The idea is to ask one question that can test:
1) a candidate's willingness to pick up a new or unusual challenge
2) their ability to adapt what they're familiar with to solve a problem that they don't know off hand
3) the thought process involved in arriving at the conclusion
Studying a list of "brain teasers" and their answers is like attempting to study a test's answer key -- it may get you through the door but it's very dishonest and a disservice to yourself as you cheat yourself out of an opportunity to expand your mind in a very relevant way using facts that aren't directly related.
Take yourself back to 1989 or so, when very few companies asked brain teaser questions, so there were very few people who studied brain teasers specifically to ace job interviews. The question for an interviewer was whether being good at brain teasers correlated with being good at a programming job in a statistically significant manner. Combine that with a number of other interview techniques, each of which correlates well, and you may have had something.
If you must search for reasons and meanings, it was probably because the kind of people who studied brain teasers for fun also happened to have a certain knack for the math-y problem-solving part of programming. Or they actually enjoyed solving problems for fun. Or both. I am not suggesting that we need to prove that the skill of solving a brain teaser was an important programming skill, it could be there was a root talent which expressed itself in programming and brain teaser abilities simultaneously. Or that some event in their past opened them up to learning two different and unrelated skills. All that mattered was whether you could measure a correlation.
But that was then. Today, I avoid such questions entirely. However, that is because IMO it no longer correlates well. Mostly because it has become popular enough that interviewees game the system by swotting up on brain teasers, so being good at them no longer means you have innate talent for math-y problem solving, nor does it mean you enjoy solving problems for fun.
So in summary I say that being good at brainteaser problems in a job interview no longer even proves that you're good at brain teaser problems, or like solving them for fun, but I disagree that it has never conveyed useful information to an interviewer.
Also, in my opinion a lot of them are "aha" questions - once you get them they're obvious. They don't require deductive thinking, but rather that you "get" this particular question. A good example of this might be "Why are manhole covers round?" The answer is that round covers are bigger than the hole no matter how you turn them, and thus can't fall in. It's hard to deduce your way to this conclusion, yet once you get it, it's obvious.
Questions such as "how many pianos are there in New York?" require a bit more deductive thinking, but I'm sceptical as to how much they say about someone's ability to code.
But, as you point out, if there's a correlation, however small, and you get other loosely correlated data, it adds up.
First, determine the requirements of a manhole cover (strong enough for semis to roll over it, movable for access, big enough for most people to fit in the hole) and the possible issues (theft, wear/tear/destruction, falling into the hole).
With these requirements, it's pretty easy to design an acceptable manhole cover:
1. The manhole cover must be at least 2' in diameter to allow people to go into the hole.
2. The manhole cover should be thick metal, both to increase life and to prevent theft (some manhole covers weigh more than 80 lbs).
3. The manhole cover should have a hole/catch mechanism in it to help open it.
4. The manhole cover should not be able to fall into the hole, so it should be a shape that can't fall through its own opening (circles are the only shape I know of with this property).
In the real world, many manhole covers do not follow all of these requirements (I've seen triangle-shaped ones), but this design outline describes the manhole covers near me pretty accurately.
I would argue the 'spirit' of the question (determining whether or not the applicant can use deduction to find an answer to a tricky problem) is still extremely important--it's just that this type of problem (and especially the manhole cover example) has been gamed.
The infamous retort is this parody: http://hebig.org/blogs/archives/main/000962.php
But that really illustrates the trap of almost all interview questions. The interviewer is obsessed with the correct answer, when in reality it's the process of arriving at any answer that matters.
My personal beef with "Aha!" questions is that if given one that I actually solve on the spot, I can't tell you much about how I got it. It just came to me.
The only exception I can remember is a question about finding whether a linked list has a cycle in it. The "proper" answer involves a pair of cursors, one of which operates at double the speed of the other. I came up with a much less efficient but equivalent solution because the problem reminded me of something we did with Turing machines in college back in the 80s.
So in that particular case I could actually explain how I arrived at an answer. Now that I think about it, I think I'm a pattern-matching machine. My algorithm for solving problems is to perturb them until they resemble something I've seen before.
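The pair-of-cursors answer described above is usually called Floyd's tortoise-and-hare. A minimal sketch with a hand-rolled node type:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Return True iff the list starting at head contains a cycle."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # one step
        fast = fast.next.next     # two steps
        if slow is fast:          # the cursors can only meet inside a cycle
            return True
    return False

# usage: a -> b -> c -> b forms a cycle
a, b, c = Node("a"), Node("b"), Node("c")
a.next, b.next, c.next = b, c, b
print(has_cycle(a))  # True
```

The less efficient equivalent the commenter describes (e.g. recording visited nodes in a set) is also a perfectly valid answer; the two-cursor trick just uses O(1) space.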
- The manhole cover should be thick metal - in some third-world countries this is actually a problem, because heavy manhole covers have substantial value as scrap metal and thus get stolen. The heavier they are, the more cash they bring in.
- circles are the only shape I know of that follow this - a Reuleaux triangle can't fall into the hole either. Incidentally, this is also the shape of the rotor in a Wankel engine. (http://en.wikipedia.org/wiki/Reuleaux_triangle)
Good engineers, much like good artists, simplify the solution as much as possible while still meeting its requirements.
The downside to hiring people who are good at solving complex problems, especially when they're young, is that they'll tend to solve problems with extra complexity, to make things more interesting than the simple solutions would be. It's good for them in the short term, bad for the company in the long term, and someone else's problem to maintain.
On another point, you can't just get 'good at puzzles' and nothing else. It's really not possible. Even if you're only learning tricks and techniques for cracking these things, that type of thinking is applicable elsewhere. It doesn't even have to be algorithms. Working your brain is a reasonable thing to do, and just signaling that you are willing to do so can be of interest to an employer.
This is where you need to start injecting things like "Don't you think?..." or "I hadn't thought about it that way, tell me more...". That kind of interaction will become critical day to day, and it also helps test the depth of the candidate.
I guess that's why I don't like puzzles - it feels too much like a trick (also, I'm terrible at them). What I usually look for are teammates - people I can work together with to crack problems.
I guess it just comes down to picking something that draws that out - something realistic, but without too many preconceptions on the interviewer's part.