
Google Hasn’t Changed Their Interview Questions - gaylemcd
http://blog.geekli.st/post/53477786490/sorry-folks-google-hasnt-changed-their-interview
======
ubercore
My phone interview with Google had something I would probably describe as a
brain teaser (which is probably why I didn't get a follow up interview). It
was something along the lines of explaining how an insect in a bottle could
jump out if the bottle was frictionless. Something like that. Anyway, I don't
remember specifics well enough to make a strong statement, but I would argue
that this isn't an absolute in either direction -- I bet a certain class of
question is used less, but some that border on brainteaser are still used.

~~~
sharkweek
How did you answer that question? -- I honestly don't understand what could
possibly be drawn from asking that.

Deductive reasoning? Logical thought pattern? Creative thinking? Seems like
there are a lot of better ways to find out if someone is capable of such
things.

~~~
makerops
"Quit wasting my fucking time google-interviewer."

"Correct. You're hired."

~~~
yelnatz
If only it were that easy.

------
seunosewa
Today I learned that Google's brain teasers are internally known as
"estimation or market sizing questions"!

~~~
gaylemcd
Asking "how would you estimate the number of cars sold in a year?" is not a
brainteaser. It's an estimation / market sizing question. You can deduce a
reasonable estimate from the population size and other assumptions.
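
A back-of-the-envelope version of that deduction can be sketched in a few
lines; every figure below is an assumed round number for illustration, not
real sales data.

```python
# Fermi-style estimate: new cars sold per year in the US.
# Every number is an assumed round figure, not real data.

us_population = 330e6              # assume ~330M people
people_per_car = 2                 # assume roughly one car per two people
cars_on_road = us_population / people_per_car

car_lifetime_years = 15            # assume a car lasts ~15 years on average

# In steady state, yearly sales roughly replace the fleet:
# sales/year ~= fleet size / average lifetime.
cars_sold_per_year = cars_on_road / car_lifetime_years

print(f"~{cars_sold_per_year / 1e6:.0f} million cars sold per year")
```

The point is the decomposition, not the final number -- a different guess for
any input just shifts the answer, and the interviewer is watching the chain of
reasoning.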

A brainteaser is something like "A man pushed his car to a hotel and lost his
fortune. What happened?" That's a brainteaser -- and nothing Google would ask.

~~~
proksoup
"How would you estimate the number of cars sold in a year?"

This boggles my mind. Where? For what purpose? Is the expectation that the
candidate googles that question?

I would google lots of different things until I found some answers (car
manufacturers sales reports from car magazines, news articles that may have
the answer, literally for the phrase "number of cars sold in a year" etc,
etc).

If you want me to talk through it, it's just mind-numbing making asinine
assumptions. Market sizing? Obvious assumptions? What?

I can't understand how anyone can say with a straight face that that question
is straightforward and not ambiguous.

The ambiguity is what makes these google interview questions brain teasers, to
me at least.

But that's probably the last item in a list of reasons why Google wouldn't
hire me :)

~~~
DanBC
The ambiguity is what makes it relevant to search.

What does a user mean if they enter [how many cars sold in a year?]

The ambiguity shows that the specific answer is less interesting than showing
your working; having the discussion about how to get an answer.

~~~
proksoup
That's more what I was hoping for, thanks :)

I wonder what Google has explored internally along the lines of prompting the
user with clarifying questions. Google Suggest and Spell Check already exhibit
the level of intelligence needed, I think.

------
asnyder
Sounds like she's still perpetuating the problem:

 _Software Engineering interviews will focus on your standard coding,
algorithm, and system design questions..._

Why are algorithm questions still being asked in a high-pressure environment?
Very few people actually work on algorithms once hired, and in my experience
they have never been a good indicator of actual development competency. As DHH
states, unless you're hiring someone to code algorithms it's not useful:
[http://37signals.com/svn/posts/3071-why-we-dont-hire-programmers-based-on-puzzles-api-quizzes-math-riddles-or-other-parlor-tricks](http://37signals.com/svn/posts/3071-why-we-dont-hire-programmers-based-on-puzzles-api-quizzes-math-riddles-or-other-parlor-tricks)

As a Director at AppNexus I've done my best to reverse this trend by asking
what I consider competency questions. Such as "On a scale of 1-10, 1 being
novice, 10 being creator of said technology, how would you rate yourself?"
Then based on this answer I'll ask a question at that level. I find that most
people screened don't actually know the basic fundamentals of the technologies
they list.

After you've gotten the basics down you can then get into system-design or
thinking questions. In cases where code analysis is necessary, I think it's
much better to present a sub-optimal pre-written function and ask the
candidate what the function does, and whether and how it can be improved. In
this way I know whether they understand code, and whether they're competent
enough to improve it.

The faster we move away from these algorithmic questions the better.

~~~
morpher
How do you account for the Dunning-Kruger[1] effect when asking your "How do
you rate yourself?" question? I would guess that answers in the range "6 or 7"
could span a _really_ wide range of actual knowledge. Does that agree with
your experience?

[1][http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect](http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect)

~~~
pisarzp
I understand that he just asks a real question at that level to calibrate your
answer. It's just faster to start from some reference point.

~~~
asnyder
Right, even if someone miscalibrated, we can quickly get to the proper
calibration. Though, I stress ten as being the creator of said technology. For
example, in the case of PHP, ten would entail actually being able to extend
PHP via C/C++. In my experience most people calibrate pretty well initially or
after an initial miscalibration.

------
autarch
Personally, I'd find these questions a lot more palatable if they'd change the
format a little. Instead of asking "how many gas stations are there in
Manhattan" why not ask "what steps would you take to estimate how many gas
stations there are in Manhattan"?

The latter makes it clear that this isn't some sort of brain-teaser; it's a
question about process. You could also phrase it as "what information would
you need to estimate how many gas stations there are in Manhattan?" This in
particular would be a good question for people who will be doing a lot of work
that involves some speculation, such as long-term product roadmaps, long-term
expansion planning, looking at new markets, etc.

BTW, I've been to Manhattan and noticed that there are remarkably _few_ gas
stations there for a city of its size, so whatever estimate you come up with
will probably be very, very wrong.

~~~
mdisraeli
Changing the wording of a question can dramatically change the outcome, and
which skills are tested.

"how many gas stations are there in Manhattan" actually makes sense as a
question for a venture capitalist firm employee expected to make investment
decisions, as the leap from "statement" to "this is something I need to
estimate" is absolutely key to the role, and hence something you'd want to
test for. I'm sure others can think of much better examples of this kind of
thing.

Rather than massive re-wording of the question, you can instead change the
framing of the question. The most scary aspect of such questions is that they
might come out of the blue, or immediately after a number of completely
different questions. Mental inertia will then dictate you stall massively and
find the question completely confusing.

Imagine instead that the interviewer stated "We are going to ask some
questions that will test for reasoning and deductive skills that you will use
on a regular basis in the role". This is similar to changing the wording, but
more like real situations. You know you have a meeting with clients (so you're
prepared), but they'll ask questions in a difficult and somewhat obtuse
manner.

------
codex
Yes, they did--just not recently.

According to Laszlo Bock, senior vice president of people operations at
Google, "We found that brainteasers are a complete waste of time." How did
they determine that without asking candidates any brainteasers? Clearly, they
did at some point--probably before the author's direct experience.

This article is really just a means to promote the author's books and prevent
readers of the original article from presuming that her books are obsolete now
(they're not).

~~~
gaylemcd
(1) I was at Google when the study was done. It wasn't before my direct
experience.

(2) I've confirmed with a bunch of people currently at Google that, indeed,
nothing has changed. This study was done 5+ years ago.

(3) My books don't focus on brainteasers. Thus, if people believe these
changes at Google are real, then this would actually make my books seem more
relevant, not less.

(4) If brainteasers are banned, this doesn't mean that no one has ever asked
them. Some people break the rules (because they're unaware of them or because
they don't feel that a particular question is a brainteaser). Thus, Google
could, theoretically, study how effective brainteasers are even while they are
banned.

(5) What Laszlo is saying is provably incorrect. He's saying estimation
questions are brainteasers and that they no longer ask such questions. This is
false. If he wants to define these questions as brainteasers, he's welcome to
do that. However, he would then be wrong about Google continuing to ask
brainteasers. By his definition, Google absolutely does ask these
"brainteasers" frequently.

(6) What I really suspect is going on is that he misremembered the study (a
reasonable thing to conclude, given #5). It was, after all, done 5+ years ago.
I don't think the study ever actually looked at brainteasers. I read the
results, and I don't remember anything about brainteasers. (They _did_ look at
interview scores and job review scores though.)

(7) Huh? It's "just" a means to promote my books? That's a huge leap. My books
aren't even mentioned anywhere except for in my bio. You could argue that it
indirectly promotes my books, but then basically everyone ever writing
anything is promoting their stuff. And, even so, you couldn't say that it's
_just_ to promote their stuff. This article is a means to counter a lot of the
myths around Google hiring practices. I don't like candidates walking into
interviews misinformed.

------
morpher
I personally think that "estimation" questions (sometimes called "Fermi
problems" since Enrico Fermi was famous for asking them) are a great way of
gauging an individual's thought process.

They aren't about getting the right answer, but rather about seeing how one
breaks down a seemingly difficult question into simpler pieces and determines
what can be reasonably estimated. They really don't fall into the category of
"brain teasers" which involve some sort of trick.

However, the statement that software developers are not asked these types of
questions is not universally true, nor should it be. In an SDE interview, I
was asked an estimation question that was both interesting and relevant to the
position (although I signed an NDA, so I'm not going to give specifics).

In my case, there were several pieces of required information that I realized
I couldn't reasonably estimate on the spot, so I gave a description of how I
could ballpark them from quick measurements.

~~~
yelnatz
How many golf balls can you fit in a bus?

How many plumbers in New York?

~~~
gaylemcd
Try something like: I have 100 billion webpages and wish to build a
hashtable-like structure that maps from each word to a list of the documents
that contain that word. How many computers do I need to hold this hashtable?

------
rajksarkar
Market estimation questions are primarily asked in non-engineering interviews
at Google, like PM, PMM, etc. I think a market estimation question is a very
good way to test how a person thinks. There are no right or wrong answers;
mostly the candidates are being tested on their thinking abilities in
ambiguous situations, which happen in your real job all the time. For example,
you are trying to launch a product in a new market and you have to estimate
sales/marketing dollars/headcount for the 1st year -- how do you go about it?
Are you asking the right questions?

------
raldi
For anyone wondering what relevance an estimation question could possibly have
in real life, imagine it's 2004 and you're about to launch GMail. How many
hard drives are you going to need?
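
As a sketch of how that estimate might go -- every figure below is a
hypothetical round number, not GMail's actual 2004 planning data:

```python
# Hypothetical 2004 GMail capacity estimate. All figures assumed.

expected_users = 10e6              # assume 10M users in year one
quota_bytes = 1e9                  # the famous 1 GB per-user quota
utilization = 0.10                 # assume users fill ~10% of quota on average
replication = 3                    # assume 3x replication for durability

total_bytes = expected_users * quota_bytes * utilization * replication  # 3 PB

drive_bytes = 250e9                # assume 250 GB drives, circa 2004
drives_needed = total_bytes / drive_bytes

print(f"~{drives_needed:,.0f} hard drives")
```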

~~~
drivebyacct2
That's a complicated question with a complicated, nuanced, business-driven
answer.

~~~
raldi
And yet someone had to answer it.

~~~
drivebyacct2
And my point is that there's about a zero percent chance that the person
making that decision, or even giving significant feedback about it, is going
to be within 20 feet of the people in question who will be implementing it.

Or more practically that they would do a roll-out release (as they did) and
increase capacity as they go. All questions for devops, the product manager or
some type of an "Architect" working with analytics to estimate demand and
volume.

To imply that a single engineer in a room is supposed to know that without any
information about Gmail's infrastructure (technologies, storage, indexing,
user volume, mail volume, etc.) seems absolutely fucking stupid. If they want
to hear me think aloud about these things and know that I can be cognisant of
them... well... there are better ways of doing that than asking me stupid
questions that it's stupid for me to even try and answer.

As someone else said, simply changing the question from "How many gas stations
are there in NY" to "How would you estimate the number of gas stations in NY"
makes it entirely different, IMO.

~~~
raldi
_> Or more practically that they would do a roll-out release (as they did) and
increase capacity as they go._

What if, increasing capacity as they go, they discover that they're going to
need 140 billion dollars worth of hard drives?

Before the rollout can happen, someone has to decide whether or not to
greenlight the project. And that person needs to come up with an estimated
cost. _Before_ the rollout begins.

 _> To imply that a single engineer in a room with no information [etc]_

Who implied that?

~~~
drivebyacct2
I don't know, I guess I have a very pessimistic view of interviews asking
these sorts of questions almost vindictively.

I'd prefer questions and problems and challenges that I'm likely to face and
address as an engineer.

If Google hires engineers to answer "brain teasers" that either require a
dozen exceptions/qualifications or some vague random nonsense answer, then
more power to them.

(But I doubt they do; they wouldn't be where they are if their hiring
practices didn't work to some degree.)

------
jerrya
I was definitely asked a "cutting the cake" brain teaser question when I
interviewed. Cut a cake fairly into X pieces using Y slices.

And it was a reasonable question that may or may not have knocked me out of
the job, as long as by reasonable you think it's "fair" to serve some people
the bottom half of a cake and other people the top half of a cake.

But I of course don't know if that's what knocked me out.

~~~
mdisraeli
If you were interviewing for a software engineering position, that question
makes sense - that reads to me like an algorithm problem (albeit a three-
dimensional one). There's also a couple of other roles for which it would make
sense.

The big problem with that question is "cake". Mappings change our entire
perception of a scenario (the mass-grave-filling problem of Tetris being a
classic example). Cakes have tops, bottoms, fillings, decoration, and so on.
This makes a perfectly reasonable question into something much nastier.
~~~
gaylemcd
Yep. I don't like questions like that, mainly because -- whether it's really
an algorithm question or not -- candidates will _feel_ it's a brainteaser.
Given that there are more than enough questions that won't be perceived as
brainteasers, there's no reason to ask one that might be.

This is the same issue I have with the egg drop problem. It can be logically
deduced and so it's sort of a fair question for a software engineer. But,
given the abundance of more relevant algorithm questions, there's just no
reason to ask it.

Ultimately what it comes down to is this: brainteasers are banned and have
always been (or at least for a very, very long time). However, since everyone
defines brainteasers a little differently, you could still get a question that
_you_ feel is a brainteaser.

------
delasher
Thanks for writing this, Cracking the Coding Interview has been one of my go-
to books these last few weeks. A must read for any new grad, and the coding
problems / answers are very insightful.

------
vsla
I aced the teaser back in college and still didn't get the internship... What
does that tell you?

~~~
diminoten
That you didn't ace the teaser!

Or that there are other parts to a Google interview besides the teaser that
can determine your employability at Google.

~~~
gaylemcd
Yup. Exactly.

------
nakedrobot2
I had dinner with 4 googlers, one of whom was very proud to ask the Prisoner's
Dilemma to lots of interviewees.

Sorry but this article is false.

~~~
gohrt
The prisoner's dilemma is a basic game theory concept (with a lot of
interesting math inside it). It isn't a brainteaser, and there isn't even an
"answer".

------
johnobrien1010
I don't know about Google in particular but I've always been bothered by the
market sizing question.

In part, it's because it seems to have come out of consulting, where a common
task seems to be to size a market for which there is no commonly available
data. So, understanding how someone might go about doing that seems reasonable
on its surface.

But in fact, I've met too many consultants who seem inclined to build "castles
in the air", spinning market details about theoretical markets without
properly defining either the product or service or the buyer of that product
or service. And since these are markets for which there is by definition no
solid data (since that is why the consultant was asked to size them in the
first place), their estimate is never validated (or if it is, it is so long
after they have left that they never hear about it).

At the same time, no one ever seems to mind if the answer to such questions is
off by a country mile when the actual size of the market is known, so long
as the thought process was rigorous and shows the right kind of logical
decomposition of the problem. But this justification for the value of the
question also bothers me, as it seems to be testing more whether you can come
up with convincing-sounding bullshit than whether you can correctly estimate a
given value. This, again, may be an accurate measure for whether someone can
become a consultant, but I never liked the notion that we should judge people
on how well they can spin bullshit.

Finally, the question seems to also be gauging the interviewee's willingness
to "play along" with what an interviewer is asking. The questions are often on
seemingly random topics (piano tuners in New York or gas stations in
Wisconsin), and are often not directly related to the domain. In the real
world, you
would actually probably find a list of such things on the internet, or conduct
some basic research that can be done to answer such a question. Or the correct
response might be something like, "We can make a rough estimate, but without
more solid data than a few random facts which we've rubbed together to come up
with a market size, maybe we shouldn't be pursuing this market". In which
case, the contrived example serves to allow the interviewee to demonstrate
that they are the type of person who will enthusiastically pursue whatever
random intellectual exercise they have been assigned by the interviewer, so
long as there is a chance of getting the interviewer's approval.

All of the things that the question tests for, then, seem to not be
characteristics you would actually want in someone if they were to answer the
question in the real world. In the real world, you might do a back of the
envelope calculation, sure, but you would also do a lot of research on the
internet, conduct a lot of interviews and surveys, and/or conduct evaluations
of competitors to understand and size a market.

~~~
morpher
I would argue that if one is unable to do the back of the envelope estimate,
then they would have a difficult time doing the rigorous calculation /
determination. The main difference between the two is the quality of the
numbers that go into them, right?

I agree though, that overconfidence in one's ballpark estimate is not to be
desired. But, that's _more_ useful information gained from the question, not
less.

