My phone interview with Google had something I would probably describe as a brain teaser (which is probably why I didn't get a follow-up interview). It was something along the lines of explaining how an insect in a bottle could jump out if the bottle was frictionless. Something like that. Anyway, I don't remember the specifics well enough to make a strong statement, but I would argue that this isn't an absolute in either direction -- I bet a certain class of question is used less, but some that border on brainteaser are still used.
I would agree with you that that's a brainteaser -- or at least a stupid question that shouldn't be asked.
However, brainteasers are indeed banned and they always have been. The problem is that what you and I call a brainteaser isn't what everyone else calls one. Thus, a brainteaser could still be asked, despite the ban.
How did you answer that question? -- I honestly don't understand what could possibly be learned from asking it.
Deductive reasoning? Logical thought patterns? Creative thinking? It seems like there are a lot of better ways to find out if someone is capable of such things.
Using a swimming technique to push off the air, the bug could either gain momentum by using the bottle's sloped bottom as a kind of half-pipe, or start moving in a circular path until it built up enough speed to begin moving up the bottle's side.
Alternatively, the bug could fly out if it's that kind of bug. Or it could knock the bottle over by rocking it.
Clearly I'm Google material because of my "bug in frictionless jar" degree.
I fully admit there might have been a detail I'm neglecting that made it more clearly useful, but I do know for a fact it didn't fall in the "market estimation" bin, either. At least not fully or obviously.
I was asked how I would roll out a kernel upgrade to a datacenter on the Moon. I thought that was a little silly (though I get that it was an easy way to provide strict constraints on a problem).
Asking "how would you estimate the number of cars sold in a year?" is not a brainteaser. It's an estimation / market sizing question. You can deduce a reasonable estimate from the population size and other assumptions.
A brainteaser is something like "A man pushed his car to a hotel and lost his fortune. What happened?" That's a brainteaser -- and nothing Google would ask.
I got asked the "cars sold in a year in California" question. I answered by saying I would look for a car that was new two years ago and one that was new this year, then take the difference of their license plate numbers and divide by two.
Took the interviewer a while but they finally figured it out.
Chuck is assuming (reasonably) that license plate numbers are assigned in sequential order to all new cars.
So if 1000 cars are sold in a year, maybe he'll see plate #267 from Year 0 and plate #1796 from Year 2. Rounding, (1800 - 300) / 2 = 750 cars per year, close enough.
Exactly. In California, private plates are of the form /[0-9][A-Z]{3}[0-9]{3}/: you can think of it as one digit base 10, then three "digits" base 26, then three digits base 10. There are letter combos they don't use, like SUX or FUX, etc., but each of those only reduces the pool by 1000 vehicles per banned letter sequence, so overall it's a relatively small hit on the estimate. My wife pointed out you can be more accurate if you include the month of the renewal in the registration (it's on the plate as well) to get the total number of months in your sample.
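For the curious, here's a minimal sketch of the plate arithmetic; the plates, the sequential-issue assumption, and the two-year gap are all illustrative, not real data:

    # Treat a CA plate [0-9][A-Z]{3}[0-9]{3} as a mixed-radix number:
    # one digit base 10 (most significant), three letters base 26,
    # then three digits base 10 (least significant).
    def plate_to_ordinal(plate):
        lead = int(plate[0])
        letters = plate[1:4]
        trail = int(plate[4:7])
        letter_value = 0
        for ch in letters:
            letter_value = letter_value * 26 + (ord(ch) - ord('A'))
        return (lead * 26**3 + letter_value) * 1000 + trail

    # Hypothetical plates: one first seen two years ago, one new this year.
    old = plate_to_ordinal("6XQW482")
    new = plate_to_ordinal("7ABC124")
    print(f"~{(new - old) / 2:,.0f} cars/year")  # -> ~808,821 with these made-up plates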
"How would you estimate the number of cars sold in a year?"
This boggles my mind. Where? For what purpose? Is the expectation that the candidate googles that question?
I would google lots of different things until I found some answers (car manufacturers' sales reports from car magazines, news articles that may have the answer, literally searching for the phrase "number of cars sold in a year", etc.).
If you want me to talk through it, it's just mind-numbing to make asinine assumptions. Market sizing? Obvious assumptions? What?
I can't understand how anyone can say with a straight face that that question is straightforward and not ambiguous.
The ambiguity is what makes these Google interview questions brain teasers, to me at least.
But that's probably the last item in a list of reasons why Google wouldn't hire me :)
Ambiguity is something a programmer needs to deal with at work. Riddles are not. Open ended questions are exactly what you want to use to evaluate someone.
Riddles fail because there is an expected answer, not because the form of the question is ambiguous.
Some coding questions as asked in interviews are actually riddles in practice because the interviewer is unskilled. Those 'coding questions' are worse than the obviously bad riddles because it is harder to tell that the feedback is slanted.
I wonder what Google has explored internally along the lines of prompting the user with clarifying questions. Google Suggest and Spell Check exhibit the level of intelligence needed, I think.
Is there anyone who has read one of the readily available interview books and still can't answer one of those estimation questions? I'm sure the answer to that question is 'yes', but I am also sure that their utter inability to prepare for an interview or to reason on their feet will be discovered in the interview by other means.
Software Engineering interviews will focus on your standard coding, algorithm, and system design questions...
Why are algorithm questions still being asked in a high-pressure environment? Very few people actually work on algorithms once hired, and in my experience it has never been a good indicator of actual development competency. As DHH states: http://37signals.com/svn/posts/3071-why-we-dont-hire-program..., unless you're hiring someone to code algorithms, it's not useful.
As a Director at AppNexus, I've done my best to reverse this trend by asking what I consider competency questions, such as: "On a scale of 1-10, 1 being novice, 10 being creator of said technology, how would you rate yourself?" Then, based on this answer, I'll ask a question at that level. I find that most people screened don't actually know the basic fundamentals of the technologies they list.
After you've gotten the basics down, you can then get into system design or thinking questions. In cases where code analysis is necessary, I think it's much better to present a sub-optimal pre-written function and ask the candidate what the function does, and whether and how it can be improved. That way I know whether they understand code, and whether they're competent enough to improve it.
The faster we move away from these algorithmic questions the better.
"Such as "On a scale of 1-10, 1 being novice, 10 being creator of said technology, how would you rate yourself?" Then based on this answer I'll ask a question at that level. I find that most people screened don't actually know the basic fundamentals of the technologies they list."
See, and I hate questions like that.
If you're going to ask candidates to rate themselves in a technology, don't ask for a number. What I think a 7 is might be different from what you think it is. Just ask the candidate how comfortable they feel in a language. Words work better than numbers.
Also, you should know that many candidates are really scared by someone asking them to rate themselves on a scale from 1 to 10. You're kicking off an interview with a degree of intimidation that's probably unnecessary.
You're right, though, that many people don't know the fundamentals of the technologies they list. This doesn't mean that they're bad engineers; there's just a lot of confusion about when you should list a technology on your resume.
Remember that a lot of companies aren't particularly looking for competency in a particular language. Thus, drilling into the specifics of a technology doesn't assess much for them.
How do you account for the Dunning-Kruger[1] effect when asking your "How do you rate yourself?" question? I would guess that answers of "6 or 7" could span a really wide range of actual knowledge. Does that agree with your experience?
Right. Even if someone is miscalibrated, we can quickly get to the proper calibration. Though I stress that ten means being the creator of said technology; for example, in the case of PHP, a ten would entail actually being able to extend PHP via C/C++. In my experience, most people calibrate pretty well initially or after an initial miscalibration.
Ask for code. TALK about algorithms at a high level. I have an aptitude for reading people, but my day two impressions of coding with a new person tend to lead to valid assumptions about them months later.
I've found that algorithms are one of those things that don't come up often in your daily work, but when you need them, you really need them, and otherwise you'll end up delivering a substandard product. It's very much like Joel Spolsky's article on hitting the high notes.
I've found that in my work, a tricky algorithmic problem comes up on average about once a month. It's a tiny fraction of the total work I do - but if I couldn't solve algorithmic problems, then there are whole projects that I simply couldn't tackle. We'd either do something substandard for users, or we'd throw bodies at the problem (which still happens a lot, unfortunately), or they'd need to bring in someone else to do my job.
Moreover, I've found that there's a cultural shift that happens when enough people are familiar with algorithm design within a company. A lot of other places I've worked are basically "Make the user fill out a form, dump it in a database, format it for display." When a significant fraction of your employees are capable of hitting the high notes, this becomes "Make the user interact with your system in the manner they're most comfortable with, extract meaning out of their ordinary behavior, compute interesting results, and show it to them." The complexity moves from the user to the software, and as a result, users would rather use your software. There are whole subsystems within Google Search - synonyms, spelling, refinements, authorship, snippets, ranking, translate, voice search, etc. - that would never have happened if there weren't a critical mass of people willing to dive into difficult problems.
Personally, I'd find these questions a lot more palatable if they'd change the format a little. Instead of asking "how many gas stations are there in Manhattan" why not ask "what steps would you take to estimate how many gas stations there are in Manhattan"?
The latter makes it clear that this isn't some sort of brain-teaser; it's a question about process. You could also phrase it as "what information would you need to estimate how many gas stations there are in Manhattan?" This in particular would be a good question for people who will be doing a lot of work that involves some speculation, such as long-term product roadmaps, long-term expansion planning, looking at new markets, etc.
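As a concrete illustration, the process-focused version of an answer might look like the following sketch, where every input is an assumed placeholder rather than real Manhattan data:

    # Process sketch for "estimate the number of gas stations in Manhattan".
    # Every figure is an assumption to be stated and then refined.
    population = 1.6e6               # Manhattan residents (roughly)
    cars_per_person = 0.2            # low ownership in a dense city (assumed)
    fills_per_car_per_week = 1       # (assumed)
    fills_per_station_per_day = 300  # pumps * customers per pump (assumed)

    weekly_demand = population * cars_per_person * fills_per_car_per_week
    stations = weekly_demand / (fills_per_station_per_day * 7)
    print(f"~{stations:.0f} stations")  # -> ~152

The interesting part is which assumptions the candidate flags as shaky (commuter traffic, people who fill up outside Manhattan), not the final number.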
BTW, I've been to Manhattan and noticed that there are remarkably few gas stations there for a city of its size, so whatever estimate you come up with will probably be very, very wrong.
Changing the wording of a question can dramatically change the outcome, and which skills are tested.
"how many gas stations are there in Manhattan" actually makes sense as a question for a venture capitalist firm employee expected to make investment decisions, as the leap from "statement" to "this is something I need to estimate" is absolutely key to the role, and hence something you'd want to test for. I'm sure others can think of much better examples of this kind of thing.
Rather than massive re-wording of the question, you can instead change the framing of the question. The scariest aspect of such questions is that they might come out of the blue, or immediately after a number of completely different questions. Mental inertia will then dictate that you stall massively and find the question completely confusing.
Imagine instead that the interviewer stated, "We are going to ask some questions that will test for reasoning and deductive skills that you will use on a regular basis in the role." This is similar to changing the wording, but more like real situations: you know you have a meeting with clients (so you're prepared), but they'll ask questions in a difficult and somewhat obtuse manner.
"BTW, I've been to Manhattan and noticed that there are remarkably few gas stations there for a city of its size, so whatever estimate you come up with will probably be very, very wrong."
Yes, and that's precisely the sort of thing you're supposed to take into account. Again, you're looking to get in the right ballpark. They're not looking for precision.
Asking "what steps would you take" wouldn't be quite the same question. They want you to actually take those steps to come up with a number.
Ultimately, I'm not necessarily defending estimation questions. I'm just saying that they are, in fact, asked because interviewers don't consider them to be brainteasers.
According to Laszlo Bock, senior vice president of people operations at Google, "We found that brainteasers are a complete waste of time." How did they determine that without asking candidates any brainteasers? Clearly, they did at some point--probably before the author's direct experience.
This article is really just a means to promote the author's books and prevent readers of the original article from presuming that her books are obsolete now (they're not).
(1) I was at Google when the study was done. It wasn't before my direct experience.
(2) I've confirmed with a bunch of people currently at Google that, indeed, nothing has changed. This study was done 5+ years ago.
(3) My books don't focus on brainteasers. Thus, if people believe these changes at Google are real, then this would actually make my books seem more relevant, not less.
(4) If brainteasers are banned, this doesn't mean that no one has ever asked them. Some people break the rules (because they're unaware of them or because they don't feel that a particular question is a brainteaser). Thus, Google could, theoretically, study how effective brainteasers are even while they are banned.
(5) What Laszlo is saying is provably incorrect. He's saying estimation questions are brainteasers and that they no longer ask such questions. This is false. If he wants to define these questions as brainteasers, he's welcome to do that. However, he would then be wrong about Google continuing to ask brainteasers. By his definition, Google absolutely does ask these "brainteasers" frequently.
(6) What I really suspect is going on is that he misremembered the study (a reasonable thing to conclude, given #5). It was, after all, done 5+ years ago. I don't think the study ever actually looked at brainteasers. I read the results, and I don't remember anything about brainteasers. (They did look at interview scores and job review scores though.)
(7) Huh? It's "just" a means to promote my books? That's a huge leap. My books aren't even mentioned anywhere except for in my bio. You could argue that it indirectly promotes my books, but then basically everyone ever writing anything is promoting their stuff. And, even so, you couldn't say that it's just to promote their stuff. This article is a means to counter a lot of the myths around Google hiring practices. I don't like candidates walking into interviews misinformed.
I personally think that "estimation" questions (sometimes called "Fermi problems", since Enrico Fermi was famous for asking them) are a great way of gauging an individual's thought process.
They aren't about getting the right answer, but rather about seeing how one breaks down a seemingly difficult question into simpler pieces and determines what can be reasonably estimated. They really don't fall into the category of "brain teasers" which involve some sort of trick.
However, the statement that software developers are not asked these types of questions is not universally true, nor should it be. In an SDE interview, I was asked an estimation question that was both interesting and relevant to the position (although I signed an NDA, so I'm not going to give specifics).
In my case, there were several pieces of required information that I realized I couldn't reasonably estimate on the spot, so I gave a description of how I could ballpark them from quick measurements.
I'm a little surprised that none of the comments thus far, nor the article itself, have used the term "Fermi problems". Whether or not someone calls them a brainteaser is up to them, I guess. I too think of brainteasers as trick questions with some gotcha answer that you normally wouldn't think of based on how the question was phrased. Fermi problems aren't in that category for me either.
Try something like: I have 100 billion webpages and wish to build a hashtable-like structure that maps each word to a list of the documents that contain that word. How many computers do I need to hold this hashtable?
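A hedged back-of-the-envelope for that one, with every number an assumption:

    # Sizing the inverted index described above. All inputs are assumptions.
    pages = 100e9             # 100 billion webpages
    postings_per_page = 1000  # average word occurrences per page (assumed)
    bytes_per_posting = 8     # doc id plus overhead per entry (assumed)
    ram_per_machine = 64e9    # usable bytes of memory per machine (assumed)

    total_bytes = pages * postings_per_page * bytes_per_posting
    machines = total_bytes / ram_per_machine
    print(f"{total_bytes / 1e12:,.0f} TB, ~{machines:,.0f} machines")
    # -> 800 TB, on the order of 12,500 machines

Whether that comes out to 5,000 or 50,000 machines depends entirely on the assumptions, which is rather the point of the question.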
Market estimation questions are primarily asked in non-engineering interviews at Google, like PM, PMM, etc. I think a market estimation question is a very good way to test how a person thinks. There are no right or wrong answers; mostly the candidates are being tested on their ability to think in ambiguous situations, which come up in your real job all the time. For example, you are trying to launch a product in a new market and you have to estimate sales, marketing dollars, and headcount for the 1st year -- how do you go about it? Are you asking the right questions?
For anyone wondering what relevance an estimation question could possibly have in real life, imagine it's 2004 and you're about to launch GMail. How many hard drives are you going to need?
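A sketch of how that estimate might go; aside from GMail's well-known 1 GB launch quota, every figure below is an assumption for illustration:

    # Illustrative 2004-style capacity estimate. Inputs are assumptions.
    users = 10e6          # first-year users (assumed)
    quota_gb = 1          # GMail launched with 1 GB per user
    fill_fraction = 0.1   # average fraction of quota actually used (assumed)
    replication = 3       # redundant copies of each mailbox (assumed)
    drive_gb = 250        # capacity of a 2004-era hard drive (assumed)

    drives = users * quota_gb * fill_fraction * replication / drive_gb
    print(f"~{drives:,.0f} drives")  # -> ~12,000 drives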
And my point is that there's about a zero percent chance that the person making that decision, or even giving significant feedback on it, is going to be within 20 feet of the engineers who will be implementing it.
Or, more practically, that they would do a gradual rollout (as they did) and increase capacity as they go. All questions for devops, the product manager, or some type of "Architect" working with analytics to estimate demand and volume.
To imply that a single engineer in a room is supposed to know that, with no information about Gmail's infrastructure (technologies, storage, indexing, user volume, mail volume, etc.), seems absolutely fucking stupid. If they want to hear me think aloud about these things and know that I can be cognisant of them... well... there are better ways of doing that than asking me stupid questions that it's stupid for me to even try to answer.
As someone else said, simply changing the question from "How many gas stations are there in NY" to "How would you estimate the number of gas stations in NY" makes it entirely different, IMO.
> Or, more practically, that they would do a gradual rollout (as they did) and increase capacity as they go.
What if, increasing capacity as they go, they discover that they're going to need 140 billion dollars worth of hard drives?
Before the rollout can happen, someone has to decide whether or not to greenlight the project. And that person needs to come up with an estimated cost. Before the rollout begins.
> To imply that a single engineer in a room with no information [etc]
I don't know, I guess I have a very pessimistic view of interviews asking these sorts of questions almost vindictively.
I'd prefer questions and problems and challenges that I'm likely to face and address as an engineer.
If Google hires engineers to answer "brain teasers" that either require a dozen exceptions/qualifications or some vague random nonsense answer, then more power to them.
(But I doubt they do; they wouldn't be where they are if their hiring practices didn't work to some degree.)
That's really not the case at Google (or, I would hope, at any company). The engineers would be very much involved with this decision, largely driving the ultimate conclusion.
I was definitely asked a "cutting the cake" brain teaser question when I interviewed. Cut a cake fairly into X pieces using Y slices.
And it was a reasonable question that may or may not have knocked me out of the job, as long as by "reasonable" you mean it's fair to serve some people the bottom half of a cake and other people the top half.
But I of course don't know if that's what knocked me out.
If you were interviewing for a software engineering position, that question makes sense -- it reads to me like an algorithm problem (albeit a three-dimensional one). There are also a couple of other roles for which it would make sense.
The big problem with that question is "cake". Mappings change our entire perception of a scenario (the mass-grave-filling problem of Tetris being a classic example). Cakes have tops, bottoms, fillings, decoration, and so on. This makes a perfectly reasonable question into something much nastier.
Yep. I don't like questions like that, mainly because -- whether it's really an algorithm question or not -- candidates will feel it's a brainteaser. Given that there are more than enough questions that won't be perceived as brainteasers, there's no reason to ask one that might be.
This is the same issue I have with the egg drop problem. It can be logically deduced and so it's sort of a fair question for a software engineer. But, given the abundance of more relevant algorithm questions, there's just no reason to ask it.
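For reference, the egg drop problem does yield to a short dynamic program. Here's a minimal sketch, assuming the standard formulation (a given number of eggs and floors, minimize worst-case drops):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def min_drops(eggs, floors):
        """Fewest drops guaranteeing we find the highest safe floor."""
        if floors == 0:
            return 0       # nothing left to test
        if eggs == 1:
            return floors  # must scan upward floor by floor
        # Drop from floor k: the egg breaks (search the k-1 floors below
        # with one fewer egg) or survives (search the floors - k above).
        return 1 + min(
            max(min_drops(eggs - 1, k - 1), min_drops(eggs, floors - k))
            for k in range(1, floors + 1)
        )

    print(min_drops(2, 100))  # -> 14

So it is deducible, but it rewards having seen it before.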
Ultimately what it comes down to is this: brainteasers are banned and have always been (or at least for a very, very long time). However, since everyone defines brainteasers a little differently, you could still get a question that you feel is a brainteaser.
Thanks for writing this, Cracking the Coding Interview has been one of my go-to books these last few weeks. A must read for any new grad, and the coding problems / answers are very insightful.
The prisoner's dilemma is a basic game theory concept (with a lot of interesting math inside it). It isn't a brainteaser, and there isn't even an "answer".
No, it's not. Brainteasers are banned and have been forever (or at least since 2005). However, everyone has a different definition of what a brainteaser is. You think the Prisoner's Dilemma is a brainteaser, but other people (as you see in the comments below) do not.
I don't know about Google in particular but I've always been bothered by the market sizing question.
In part, it's because the question seems to have come out of consulting, where a common task is to size a market for which there is no commonly available data. So, understanding how someone might go about doing that seems reasonable on its surface.
But in fact, I've met too many consultants who seem inclined to build "castles in the air", spinning market details about theoretical markets without properly defining either the product or service or its buyer. And since these are markets for which there is by definition no solid data (that being why the consultant was asked to size them in the first place), their estimates are never validated (or if they are, it is so long after the consultants have left that they never hear about it).
At the same time, no one ever seems to mind if the answer to such questions is off by a country mile when the actual size of the market is known, so long as the thought process was rigorous and shows the right kind of logical decomposition of the problem. But this justification for the value of the question also bothers me, as it seems to test more whether you can come up with convincing-sounding bullshit than whether you can correctly estimate a given value. This, again, may be an accurate measure of whether someone can become a consultant, but I never liked the notion that we should judge people on how well they can spin bullshit.
Finally, the question also seems to gauge the interviewee's willingness to "play along" with what the interviewer is asking. The questions are often on seemingly random topics (piano tuners in New York or gas stations in Wisconsin) and are often not directly related to the domain. In the real world, you would probably find a list of such things on the internet, or conduct some basic research to answer the question. Or the correct response might be something like, "We can make a rough estimate, but without more solid data than a few random facts rubbed together to come up with a market size, maybe we shouldn't be pursuing this market." In which case, the contrived example serves to let the interviewee demonstrate that they are the type of person who will enthusiastically pursue whatever random intellectual exercise they have been assigned by the interviewer, so long as there is a chance of getting the interviewer's approval.
All of the things that the question tests for, then, seem to not be characteristics you would actually want in someone if they were to answer the question in the real world. In the real world, you might do a back of the envelope calculation, sure, but you would also do a lot of research on the internet, conduct a lot of interviews and surveys, and/or conduct evaluations of competitors to understand and size a market.
I would argue that if one is unable to do the back of the envelope estimate, then they would have a difficult time doing the rigorous calculation / determination. The main difference between the two is the quality of the numbers that go into them, right?
I agree though, that overconfidence in one's ballpark estimate is not to be desired. But, that's more useful information gained from the question, not less.