
In Head-Hunting, Big Data May Not Be Such a Big Deal - Esifer
http://www.nytimes.com/2013/06/20/business/in-head-hunting-big-data-may-not-be-such-a-big-deal.html?pagewanted=all&_r=0
======
edent
The original article - [http://mobile.nytimes.com/2013/06/20/business/in-head-
huntin...](http://mobile.nytimes.com/2013/06/20/business/in-head-hunting-big-
data-may-not-be-such-a-big-deal.html) Distraction-free reading, without all
the annoying cruft of Quartz.

Fascinating use of "Big Data" to cut through the bullshit. Wonder if it will
change anything. I suspect the "tough" interview plays well into a company's
PR.

~~~
donohoe
We built Quartz to be as distraction free as possible - mind telling me what
the "cruft" is?

~~~
engtech
1\. When I go to this link: [http://qz.com/96206/google-admits-those-infamous-
brainteaser...](http://qz.com/96206/google-admits-those-infamous-brainteasers-
were-completely-useless-for-hiring/)

It shows several stories at once, not just the Google brainteaser story.

2\. In general, people would rather read the real article instead of a summary
of the article. When someone submits a summary of an article to a site like HN
or reddit, it is usually flagged as blog-spam because we'd rather read/support
the original content than a summary with questionable value.

3\. For long form articles, nothing beats reading the print-preview page to
get rid of all the sidebars, comments, and ads. Look at the print preview
page: it is not possible to get more distraction-free than that. Any other
format has more distractions.

Even aside from that, the New York Times has some of the best information
architecture in the business. These are the guys who did NYTProf. Their web
team is awesome.

[http://www.nytimes.com/2013/06/20/business/in-head-
hunting-b...](http://www.nytimes.com/2013/06/20/business/in-head-hunting-big-
data-may-not-be-such-a-big-deal.html?pagewanted=all&_r=0&pagewanted=print)

4\. Some visual issues I had with quartz:

4.1: No left/right whitespace around images.

4.2: I see a vertical scroll bar in the middle of my screen on Firefox.

4.3: The black header bar, which is fixed and stays on the screen all the
time even though it conveys no useful information to me.

4.4: A bunch of text blurbs on the left side of the screen that convey no
useful information to me.

You say you're trying to be as distraction free as possible, but that's not
actually true because it isn't possible to have your business model and be as
distraction free as possible. The print preview page is as distraction free as
possible.

~~~
donohoe

      1. ... It shows several stories at once, not just the google brainteasers stories.
    

It shows one article initially. It will load the next one as you scroll down
and approach the end. This is not counted as a Page View unless you actually
continue down into it (you'll notice the URL change at that point).

    
    
      2. In general, people would rather read the real article
      instead of a summary of the article.
    

I would argue that this is a "real article". The NYT piece was 8 questions
and answers. This article is based on just one of those questions - and
expands on it. I'm not an editor/writer so I'll avoid going deeper, but
that's my take-away.

    
    
      3. For long form articles, nothing beats reading the print-preview page...
    

Tru dat.

    
    
      Even aside from that, the New York Times has some of the best 
      information architecture in the business. These are the guys 
      who did NYTProf. Their web team is awesome.
    

I used to work there :)

    
    
      4. Some visual issues I had with quartz:
      4.1: No left/right whitespace around images.
    

The Featured Image (between headline and text) is meant to be full-width up
to a max. Inline images should have left/right whitespace.

    
    
      4.2: I see a vertical scroll bar in the middle of my screen on Firefox.
    

Can you email me a screenshot (email in profile)? There are a few Firefox
specific bugs we're working on this week. This may be one of them.

    
    
      4.3 The black header bar which is fixed and stays on the screen 
      all the time even though it conveys no useful information to me.
    

True. Intentional. It can be expanded, which reveals the large site map.
There are big pros and cons to hiding it. It's an ongoing conversation.

However we used to have it disappear altogether and people complained about
that too....

    
    
      4.4: A bunch of text blurbs on the left side of the screen 
      that convey no useful information to me.
     

It's a list of headlines - that's all that is meant to be conveyed.

    
    
      You say you're trying to be as distraction free as possible, 
      but that's not actually true because it isn't possible to 
      have your business model and be as distraction free as possible.
      The print preview page is as distraction free as possible.
    

I'm confused. That doesn't make much sense to me. Yes, I am saying that we
intend to be "as distraction free as possible" - I'm not sure that I have to
add a big asterisk that covers "within the confines of an ad-based business
model" any more than I should add "within the confines of a browser running a
web site that's not a book". I'm not trying to be snarky; I just find it hard
to know what to make of what you said exactly.

Also - take a look at the ads... do we have them all over the place? Nope -
we have them at the end of an article - not in between, not embedded, not
inline. That's important.

We are not perfect, but we aspire to continuously improve. The focus is on
the user and the reading experience, but with the recognition that we have to
pay the bills for 20 or so editors and journalists across five (maybe more?)
countries. (I'm not counting devs, sales, HR, etc. in that.)

~~~
chinpokomon
The pages also won't show in my favorite Android HN client:
[https://play.google.com/store/apps/details?id=com.airlocksof...](https://play.google.com/store/apps/details?id=com.airlocksoftware.hackernews).
I always have to open in a browser.

------
Jabbles
I don't understand people's problem with estimating. It's a useful skill.
Perhaps it would be better if the questions actually related to technology,
rather than golf balls - but the principle is the same.

For instance - "how many hard drives does Gmail need?" requires a rough guess
of how many users Gmail has (if you're interviewing at Google, you should
know it's 1e8-1e9), how much space each one takes (probably nowhere near a
gigabyte on average - let's say 1e8 bytes), and the current capacity of hard
drives (1e12 bytes).

Then you can say that they probably need 1e5 hard drives, link it to
redundancy, availability, deduplication, backups etc. You can comment that
it's feasible to build a datacenter with that many hard drives.

No one cares that the actual number is 12,722 - but you've demonstrated a
broad set of knowledge about the current state of technology. Saying "dunno -
a billion?" is not going to get you anywhere, and with good reason.
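The arithmetic above fits in a few lines - all figures are the round guesses from this comment, not real Gmail numbers:

```python
# Fermi estimate of Gmail's drive count, using the comment's own guesses.
users = 1e9             # assumed Gmail users (1e8-1e9; take the high end)
bytes_per_user = 1e8    # assumed average mailbox size (~100 MB)
bytes_per_drive = 1e12  # assumed drive capacity (~1 TB)

drives = users * bytes_per_user / bytes_per_drive
print(f"~{drives:.0e} drives")  # ~1e+05 drives
```

The point isn't the number; it's that each factor is stated explicitly so the interviewer can challenge any one of them.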

The Monopoly question is crap, though.

I'd like to know how useful [http://google-tale.blogspot.com/2008/07/google-
billboard-puz...](http://google-tale.blogspot.com/2008/07/google-billboard-
puzzle.html) was.

~~~
rorrr2
The problem is, the interviewers often judge how accurate your estimation is,
and not the fact that you know (the highly flawed) Drake Equation.

These estimates are completely useless in real life, because in real life
nobody guesses how many drives you need for GMail, or how many gas stations
there are in LA.

~~~
ig1
I'm not sure where you've worked, but resource estimation has been pretty
important for most greenfield projects I've worked on.

It's also good for sanity testing: being able to spot that something is out
by an order of magnitude is a useful skill, as it can allow you to catch
problems early on.

~~~
rorrr2
Data-backed estimation is completely different from random guesses you will
make during an interview.

Not only that, even if your guesses are decent, multiplying them can drive you
orders of magnitude in the wrong direction.

~~~
ig1
The underlying data might be different, but the process is the same: you need
to figure out what the contributing factors are, how they relate, and
establish upper and lower bounds for the values you're assuming.

Once you have data you can make corrections to those bounds, but other than
that the process is the same.

It's a skill that a lot of first-time startup founders lack. They have no
idea how to estimate the market size for their startup; you need to
understand the process of how to build an estimation model.

~~~
Ziomislaw
The process is not the same: in one case you have real data, in the other you
pull the data out of your arse.

~~~
ig1
It sounds like you build estimation models by looking at the data you have and
combining it together to try and figure out your goal.

The disadvantage with that approach is that you often end up missing factors
(because you don't have the data to hand) and end up with a suboptimal model.

In the same way that a lot of startups end up analyzing user behaviour by page
analytics rather than user analytics simply because Google gives them page
analytics.

It's a good idea to know how to do both top-down and bottom-up estimation
models, as best practice is to make estimations using several different models
and compare the results.
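The bounded-guess process described above can be sketched as interval propagation; every figure below is made up purely for illustration:

```python
# Propagate (low, high) bounds through a multiplicative estimation model.
def mul_bounds(*factors):
    """Each factor is a (low, high) pair; returns the combined (low, high)."""
    low, high = 1.0, 1.0
    for lo, hi in factors:
        low *= lo
        high *= hi
    return low, high

# e.g. market size = population * adoption rate * yearly spend (all guessed)
low, high = mul_bounds((1e6, 5e6), (0.01, 0.05), (20, 100))
print(f"between {low:,.0f} and {high:,.0f}")  # between 200,000 and 25,000,000
```

Note the spread: three modest-looking intervals multiply out to a 125x range, which is exactly why comparing several independent models helps.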

------
moron4hire
It's a crutch. Nobody knows how to interview. Interviewing properly is a lot
of work. There are two kinds of people who can do interviews - people who
have knowledge of the job and people who have time to interview - and they
are so infrequently the same people. These sorts of questions were appealing
because they were easy: a way to not spend a lot of time on interviewing, or
a way to not need a lot of knowledge about the job.

And these things are important, because job candidates are not people; they
are OEM replacement parts being ordered from Pep Boys. Call up the recruiter
and requisition a J6-252: Programmer, seasoned 5 years, with degree from MIT.
Oh, those ones are too expensive? Guess I'll take the knock-off version, but
I refuse to pay full price!

Hopefully, because it's Google saying it, everyone will cargo-cult on this
bandwagon too.

------
tokenadult
From the original New York Times article that Quartz has linkspammed here: "On
the hiring side, we found that brainteasers are a complete waste of time. How
many golf balls can you fit into an airplane? How many gas stations in
Manhattan? A complete waste of time. They don’t predict anything. They serve
primarily to make the interviewer feel smart."

Long before this was reported in the New York Times, this was the finding of
research in industrial and organizational psychology. A valid hiring procedure
is a procedure that actually finds better workers than some different
procedure, not a hiring procedure that some interviewer can make up a
rationale for because it seems logical to the interviewer. We have been
discussing home-brew trick interview questions here on Hacker News for more
than a year now.

[https://news.ycombinator.com/item?id=4879803](https://news.ycombinator.com/item?id=4879803)

Brain-teaser or life-of-the-mind interview questions do nothing but stroke the
ego of the interviewer, without doing anything to identify job applicants who
will do a good job. The FAQ on company hiring procedures at the Hacker News
discussion linked here provides many more details about this.

~~~
donohoe

      From the original New York Times article that Quartz has linkspammed
    

No, this is linkspam:

    
    
      Link spam is defined as links between pages that are present for 
      reasons other than merit.[9] Link spam takes advantage of 
      link-based ranking algorithms, which gives websites higher 
      rankings the more other highly ranked websites link to it. 
      These techniques also aim at influencing other link-based 
      ranking techniques such as the HITS algorithm.
    

Source:
[http://en.wikipedia.org/wiki/Linkspam#Link_spam](http://en.wikipedia.org/wiki/Linkspam#Link_spam)

Let's be clear: there is link spam, and then there is writing an original
piece based on information from elsewhere.

The NYT article is about 8 questions and answers from a HR person at Google.

The "puzzle" aspect is 1 of those 8 questions.

From that, Quartz references it, links directly to the piece, and then
expands upon it and links out to other relevant and related information.

~~~
jbapple
I think you should have made it clear in this comment that you work for
Quartz.

~~~
donohoe
I take it for granted so I forget - but I think it's spelled out very clearly
in my profile, so I don't have to put an asterisk every time I comment.

~~~
jbapple
What percentage of readers of your comment do you think click through to your
profile?

Of the comments you read, for what percentage do you view the author's
profile?

~~~
donohoe
I take your point. I click through to most people - but I do not think that is
the norm.

------
Udo
There are questions that are actually fun and I can sort of see them starting
a conversation with the right kind of interviewer that tells both parties a
lot about who they're dealing with. From the article:

    
    
      > How much should you charge to wash all the windows in Seattle?
    

Basic economics estimating - probably not that useful and a bit dull, but hey
why not. At least the problem has several angles to it that might be fun to
explore.

    
    
      > Design an evacuation plan for San Francisco
    

That's a nice one. Kind of open-ended, a lot of things to consider, a lot of
ideas to be had.

    
    
      > How many times a day does a clock’s hands overlap?
    

Why? What happens to the interview after you counted them (possibly on a
whiteboard)? It's a dead end and the question is dull.
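For what it's worth, even the counting needs no whiteboard - the minute hand moves 6 degrees per minute and the hour hand 0.5, so the minute hand gains 5.5 degrees per minute and laps the hour hand once every 360/5.5 ≈ 65.45 minutes:

```python
# How many times per day do a clock's hands overlap?
relative_speed = 6.0 - 0.5  # degrees the minute hand gains per minute
overlaps_per_day = 24 * 60 * relative_speed / 360  # laps completed in 1440 min
print(overlaps_per_day)  # 22.0 - i.e. 22 overlaps, 11 per 12-hour cycle
```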

    
    
      > A man pushed his car to a hotel and lost his fortune. What happened?
    

Now this has the potential to be great or absolutely horrible, depending on
the intent behind the question and the nature of the interviewer. If it's
taken as a "fill in the blanks" kind of challenge it would be a fun way to
explore the candidate's imagination. But I'm guessing it's not. It's probably
one of those "clever" questions that have only one "right" answer that makes
no real sense except creating a few moments of uncomfortable silence.

    
    
      > You are shrunk to the height of a nickel and your mass is proportionally reduced so 
      > as to maintain your original density. You are then thrown into an empty glass blender. 
      > The blades will start moving in 60 seconds. What do you do?
    

Again, this could be a fun physics and chemistry question and I see a couple
of possible solutions that might or might not work out - might be fun
exploring them. But again, it _sounds_ more like a trick question with one
standardized answer. Bad.

The problem with trick questions and standardized answers is that the nature
of the question makes the candidate uneasy, and even if they eventually
figure it out, nobody will have learned anything in the process. It's more
like a hazing than a hiring interview.

~~~
fduran
> A man pushed his car to a hotel and lost his fortune. What happened?

Horrible question - it took me a few seconds to figure out that they were
talking about the game Monopoly.

~~~
yen223
Not sure what the point of this question is. It seems all it does is test
whether the candidate has heard of Monopoly.

~~~
jimmaswell
I'm familiar with Monopoly and I didn't think of Monopoly when I read the
question.

------
raldi
I've never seen any citation that Google _ever_ used these kinds of
questions. Especially the idiotic one about pushing a car to a hotel. I think
it was just an urban legend and a good piece of linkbait.

There must be thousands of people on HN who interviewed at Google over the
years. Did anyone ever get a question like this?

~~~
drgath
From the article:

> "On the hiring side, we found that brainteasers are a complete waste of
> time"

That implies Google has some data to back it up, whether they themselves
previously asked those types of questions or derived it by some other means.
Either way, they aren't doing themselves any favors in dispelling that urban
legend. Most people reading that will just assume they used to ask
brainteasers, but no longer do.

~~~
gwern
Yes, it's a rather strange objection to make... How could Google possibly use
data on its employees to disprove that brainteasers work - if they weren't
using brainteasers at some point?

------
litewulf
When I interviewed at Google 5 years ago they weren't using those
brainteasers.

There are many posts online about the actual, CS-y questions you can expect
in a Google interview; I had just assumed that the mentions of brainteasers
were merely urban legend.

~~~
inopinatus
I've interviewed at Google. Years and years ago. I didn't get the job.
Similarly, no brainteasers, but something worse: they made me write
syntactically correct code on a whiteboard. I had never written code without
using a keyboard; it turns out I just didn't have the neural pathways for
anything else. My brain kind of seized up. I specifically recall failing to
recognise the Fibonacci sequence (especially horrifying given that I read
mathematics at Edinburgh). Things went downhill from there.

Ever since, whenever I've interviewed someone, I ask them to demonstrate their
strengths to me first.

~~~
eternauta3k
What do you mean by "read mathematics at Edinburgh"?

~~~
lmm
It's standard (or slightly pretentious) British English; I guess the US
equivalent would be "majored in math at Edinburgh" (which would be equally
incomprehensible to a Brit).

~~~
richbradshaw
It's not really pretentious – it kinda depends on what university you went to.
I typically say 'studied', but my friends who went to other unis say 'read'. I
would take 'read' as a pretentious term.

~~~
shawabawa3
> It's not really pretentious

> I would take 'read' as a pretentious term.

I'm confused...

~~~
dasil003
I think he meant "wouldn't". It's not pretentious per se; it just would be
interpreted that way by an American, because we wouldn't use that phrasing,
and therefore we can only imagine it being spoken in an upper-class English
accent, pinky fully extended.

~~~
inopinatus
Quite so. Having been raised by the BBC World Service I actually do have a
somewhat received pronunciation, albeit gently deflected by many years abroad.

The disposition of my pinky, however, shall remain a mystery.

~~~
dasil003
If only it were tea time in Australia. Blast, foiled again.

------
bane
I was contacted by a Google recruiter a few months ago. I had no intention of
changing my day job at the time, but for shits and grins I went through a
couple of phone interviews. The position they were hiring for wasn't an area
I have any experience in (the recruiter had made a mismatch), but I thought
the questions were reasonable for somebody who works in that field, and they
were kind of fun. They were quizzy, but could be practical. It was a
management position, so there weren't any coding questions, but things like
basic cost estimating, that sort of thing.

I had fun and wouldn't mind doing it again; it didn't feel like a bunch of
stupid random brain teasers like I've experienced before (how many t-shirts
would it take to make a seaworthy sail? why are manholes round? etc.).

~~~
mikestew
"It was a management position so there weren't any coding questions"

Interesting; when I interviewed for a management position (test manager) it
was nothing _but_ coding questions, including the infamous "reverse a string"
question. ("Would you like that optimized for space or speed? In-place, or do
I get a buffer? Can you tell I've heard this a zillion times before?") I can
understand wanting a test manager to be more than an empty suit, but _yoiks_.

~~~
bane
There are different kinds of management positions other than software
development management. HN is notorious for forgetting that.

------
freework
This topic/discussion reminds me of a movie I saw recently It was called "That
guy...who was in that thing". It is a documentary about working actors. Not
Big time superstars like Tom Cruise, but the small time 'character' actors.

Anyways, there was one part in the movie where they start talking about
auditions. All four or five of the actors interviewed in the movie spoke
unanimously badly about the typical audition process. Some quotes, taken from
memory:

"I love acting, but I hate auditioning"

"You've seen my demo reel, you've seen me when I was on Star Trek, you know I
can act, then why not just give me the part? Why make me go through this
tedious audition process"

"90% of acting is reacting. You can't fully demonstrate your full acting
abilities when you're standing in front of a panel of producers 'acting' out a
scene that consists of 5 lines of dialog"

What the actors were saying about how they hate the audition process reminded
me a lot of my own frustrations with tech interviews and hiring. Making an
engineer do puzzles like FizzBuzz is a lot like making an actor act out a 20
second scene without any time to prepare or a proper "scene partner" to act
alongside.
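For anyone unfamiliar, FizzBuzz - the puzzle mentioned above - is the trivial screening exercise: print the numbers 1 to 100, but "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for both. A minimal sketch:

```python
# The classic FizzBuzz screen in a few lines.
def fizzbuzz(n):
    out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
    return out or str(n)

print(", ".join(fizzbuzz(n) for n in range(1, 16)))
# 1, 2, Fizz, 4, Buzz, Fizz, 7, 8, Fizz, Buzz, 11, Fizz, 13, 14, FizzBuzz
```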

I wish I could link to a YouTube of the movie, but I can't find one. It's on
Netflix though.

~~~
tbrownaw
_Making an engineer do puzzles like FizzBuzz is a lot like making an actor act
out a 20 second scene without any time to prepare or a proper "scene partner"
to act alongside of._

FizzBuzz is self-contained tho, so maybe a better comparison would be to
asking for a dramatic poetry reading?

------
sergiosgc
They aren't using the brain teasers right. The idea is not to create a
barrier to entry, nor is it to stress the candidate. The objective of the
brain teaser is to have the candidate think slowly enough that the
interviewer can observe how he approaches a problem.

It's hard, when using problems that are common, to really understand how the
candidate gets to the answer. Often he's building on pre-solved subproblems
he encountered in his professional life, so the resolution process didn't
even occur at the interview.

I personally don't use brain teasers, because they stress out valid
candidates who do not work well under pressure. However, I think teasers,
when properly used, are valid tools in an interviewer's toolbox.

~~~
dasil003
Totally agree with this. The essential skill of a software engineer,
regardless of position, is to be able to approach any problem, no matter how
unfamiliar or intractable, and formulate a means of attacking it and
verifying the solution. The right type of brainteaser can be a great way to
demonstrate this, provided: A) the interviewee hasn't heard it before; B)
it's meaty and not reliant on some flash of insight (the manhole cover
question is absolute garbage); C) you are able to capture the thought process
in sufficient detail, either by verbally talking it out or writing it down or
whatever.

This has the potential to reveal a certain high-level problem-solving
ability, the lack of which will not necessarily be exposed by more concrete
"write pseudocode for X" interview questions. What I mean is that there is a
continuum of skills, ranging from rote copying of solutions all the way
through synthesizing solutions to business problems and designing
architectures to fulfill a malleable list of requirements.

A mediocre engineer can inch their way up the continuum through raw
pattern-matching ability (which humans excel at) without ever attaining
mastery of the high-level abstractions that drive the implementation details.
Such engineers can appear tremendously productive at the ground level, but it
is dangerous for a technical organization to have many of them, because they
tend not to see where technical debt is piling up and can often paint
themselves into corners by not considering the bigger picture. Knowing
someone has strong reasoning skills, from very-high-level human tasks on
down, is a good hedge against this.

------
nchlswu
I took an I/O psychology course during school, and a chunk of it dealt with
interviewing and finding the best candidates (from an employer standpoint and
an equity standpoint), as lots of people who took the course tended to pursue
further education with the idea of obtaining an HR-related certificate.

The comment about brainteasers vs. structured rubrics is sort of surprising
to me, given Google's reputation for quantitative data. Speaking from a very
high level, structure was really what was emphasized for interviews. It's
interesting how culture can get in the way of proven 'fact', and I love that
Google is using its own (much larger) data sets to make these improvements
and in/validate other research.

------
jmillikin
How to drive clicks in four steps:

1\. Invent a bunch of silly riddles that a non-technical reader might accept
as tech interview questions.

2\. Pull a major tech company out of a hat (today it's Google), and claim with
no evidence that their interviews are based around silly riddles. The article
will be cited for years as proof that people working at $COMPANY are weird and
obtuse.

3\. Wait a couple years. Ignore all evidence that $COMPANY does not use silly
riddles in interviews.

4\. Once traffic on the original article dies down, write another article
claiming $COMPANY has "admitted" silly riddles aren't useful for interviews.

~~~
pathy
I see you didn't read the article. The basis for the article is an NYT
interview with an SVP at Google, claiming that the brainteasers are not
useful (among other things). Surely that is a good source? I haven't got a
clue whether Google actually used these kinds of questions, but the interview
sure seems to suggest it.

[http://www.nytimes.com/2013/06/20/business/in-head-
hunting-b...](http://www.nytimes.com/2013/06/20/business/in-head-hunting-big-
data-may-not-be-such-a-big-deal.html?pagewanted=all&_r=0)

That said, the riddles listed are of course a bit clickbaity, but they did
not conjure the story out of thin air.

~~~
jmillikin
The same list of riddle questions has been circulating for at least twenty
years. Before Google existed, it was credited to Microsoft. I know they've
been explicitly banned at Google for many years, and I have seen no evidence
that they were ever in common use at either company.

------
ShabbyDoo
A problem I see with many of these sorts of questions is that they often
require the candidate to have some supposedly common knowledge which is not
required for the job itself. Cryptic word games surely are much more difficult
for a non-native speaker of the language in use. Questions related to facts
about cities probably require local geographic knowledge. Surely the
evacuation plan for SF must consider the capacity of various bridges? Someone
who has lived in northern CA for most of his life would have a much easier
time thinking through the logistics of moving people off a peninsula. And, of
course, there's the Monopoly question (which I had to Google).

I like estimation questions in general for many of the reasons other
commenters have cited. However, I wish those using them would consider the
knowledge implicitly required of a candidate.

------
cousin_it
> _Years ago, we did a study to determine whether anyone at Google is
> particularly good at hiring. We looked at tens of thousands of interviews,
> and everyone who had done the interviews and what they scored the candidate,
> and how that person ultimately performed in their job. We found zero
> relationship._

Can we see the study?

Also note that performance on the job is a noisy measurement, because people
who get to work on impactful projects (through luck or people skills) get
rated higher than others. I wouldn't be surprised if interview scores were a
better measurement of "true" skills.

~~~
jasonwocky
> _I wouldn't be surprised if interview scores were a better measurement of
> "true" skills._

Possibly, but in a sense "true" skills don't really matter. What matters to
Google, ultimately, is Google's opinion of the worker. It's almost certainly
skewed / flawed / distorted in some way from the individual's true skills, and
that's unfortunate but mostly a fact of life.

------
lobotryas
Sounds great, although as with any retraction I doubt this will be enough to
stop the spread of interview puzzles. Even I'm guilty of asking my share
before I realized that the only thing that matters about a candidate is
whether they can sit down and start writing code (and the quality of said
code).

~~~
objclxt
Google still ask puzzles: they just don't ask _brainteasers_.

For example, _write a program to find every possible word in a given Boggle
board_ is a puzzle, but one you're going to solve by coding...rather than "how
many piano tuners are there in New York", which is a rather different matter.
I've interviewed on-site with Google several times, and always found the CS
puzzles to be challenging but fair.
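A minimal sketch of that Boggle exercise - depth-first search from every cell, pruned against the dictionary's prefixes (the board and tiny word list below are made-up illustration values):

```python
# Find every dictionary word that can be traced through adjacent board cells.
def solve_boggle(board, words):
    # Precompute every prefix of every word so dead-end paths can be pruned.
    prefixes = {w[:i] for w in words for i in range(1, len(w) + 1)}
    rows, cols, found = len(board), len(board[0]), set()

    def dfs(r, c, path, visited):
        word = path + board[r][c]
        if word not in prefixes:
            return  # no dictionary word starts this way; prune the branch
        if word in words:
            found.add(word)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                    dfs(nr, nc, word, visited | {(nr, nc)})

    for r in range(rows):
        for c in range(cols):
            dfs(r, c, "", {(r, c)})
    return found

board = ["cat", "ode", "rst"]
print(sorted(solve_boggle(board, {"cat", "code", "cats", "dog"})))
# ['cat', 'code']
```

The prefix-set pruning is what makes this tractable; without it the search visits every path on the board regardless of the dictionary.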

~~~
drgath
> Google still ask puzzles: they just don't ask brainteasers.

Isn't the only difference between "brainteasers", "puzzles", and real
engineering challenges just the usefulness of the result?

I get what you are saying though. Asking someone challenges rooted in
technology seems so much more useful and natural than something involving ping
pong balls and Lake Michigan.

------
darrellsilver
The best book on hiring, no doubt, is Who: [http://www.amazon.com/Who-The-A-
Method-Hiring/dp/0345504194](http://www.amazon.com/Who-The-A-Method-
Hiring/dp/0345504194)

We used it to build our hiring process for
[http://www.thinkful.com/](http://www.thinkful.com/) and it consistently
proves valuable.

We also use it to help our students prepare for job interviews.

~~~
dpritchett
I'm seriously put off by any talk of topgrading and 'a/b/c player' ranking.

Have you gathered much data on the success of this book's approach in your
firm? I'd love to hear a positive take on it.

~~~
freework
Agreed. I think anyone can be an "A" player under the right conditions. Under
different conditions, the same person can be a "D" player. I know I've had
jobs where I was the wonderboy regarded as an A player all around. I've had
other jobs where I was the black-sheep "F" player who gets fired after one
week of employment.

~~~
dfriedmn
First commenter's co-founder here: sure, you definitely need to screen for
culture fit. There are great people who would be bad fits here. That said, we
want people who have succeeded in most positions they've held in the past. If
there are two people who have each had 4 jobs in their career, you're way
more likely to pick the better one if you favor the one who outperformed in 3
of those 4 jobs rather than 1 of the 4. When you combine that with looking
closely at their experience as it fits the role, and their fit with the
culture, you have a complete screening process.

------
mgkimsal
"How many gas stations in Raleigh?"

I had a couple of questions like this at a couple of interviews more than a
few years back now. In both cases, I sat for a minute and asked a few
questions back, like "Do you mean the city limits of Raleigh, or the metro
area?" and "How do you define a gas station - do we include public-only, or
private fueling places?". Part of this was buying some time, because the
question caught me off guard, but I think my questions back caught him off
guard a bit too.

That interviewer told me I was the only person who asked clarifying questions
before blurting out an answer or walkthrough. Another was "take this marker
and design a house on the whiteboard for me". So I took the marker and asked
questions like "How many people will live here? Do you want one story or two?
Do you need a garage/shed/basement?" And again, I was told I was the only
person who'd asked questions before starting to draw.

I don't think the intention behind those brain teasers was necessarily to
determine how you react to those sorts of problems, but it may have been a
useful determining factor for some interviewers nonetheless.

~~~
jacques_chester
> _That interviewer told me I was the only person who asked clarifying
> questions before blurting out an answer or walk through._

I once got negative feedback from an interviewer: I'd asked too many questions
about the questions.

~~~
dennisgorelik
That probably means you two should not work together.

~~~
jacques_chester
Definitely. We both dodged bullets.

------
troni
Every time I click any link on HN that points to qz.com I get QZ without any
reference to the article in question. Currently it points to "Why Tesla wants
to get into the battery-swapping business that’s failing for everyone else"...
in Chrome. Firefox seems to work. Terrible website.

------
troymc
These sorts of questions didn't start with Google. They're known as Fermi
Problems for a reason: they're named after Enrico Fermi, the physicist.

[http://en.wikipedia.org/wiki/Fermi_problem](http://en.wikipedia.org/wiki/Fermi_problem)

Knowing how to quickly estimate something _is_ useful.

I imagine that Larry Page does a few quick estimates every day. How many Loon
balloons would it take to bring Internet to 90% of Africa?

But not everybody at Google has a job like Larry Page. It's gotten to be a big
company full of accountants, HR people, and other jobs that don't require much
thinking in unfamiliar territory.

In other words, guesstimation is a useful skill, but not for every Google
employee, so it's not going to show up as useful on average.
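The "gas stations in Raleigh" question elsewhere in this thread is exactly this kind of Fermi estimate. A minimal sketch of how one works, where every number is an assumed round figure for illustration rather than real data:

```python
# Fermi estimate: gas stations in a mid-sized city.
# Every number below is an assumption chosen for illustration, not real data.

population = 500_000       # assumed metro population
cars = population // 2     # assume roughly one car per two residents

fills_per_week = cars * 1  # assume one fill-up per car per week

# Assume a single station handles ~1,000 fill-ups per week.
stations = fills_per_week / 1_000

print(round(stations))  # → 250
```

The point of asking it, presumably, is the decomposition into defensible sub-estimates rather than the final number.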

------
tonylemesmer
Some of the more flippant sounding ones could be useless but I thought the
idea of the simpler ones (how many golf balls etc.) is to get a feeling for
how people's minds work and whether they can make sensible best guesses in the
absence of concrete facts and make judgements based on those guesses. Weed
out the ones who have no appreciation for how the volume of a golf ball
relates to the size of a bus.

Good logical thinking shown here could indicate an ability to rapidly
prototype systems without getting hung up on too fine detail.

~~~
bengillies
On the other hand, demonstrating evidence for being able to rapidly prototype
systems without getting hung up on too fine detail also indicates an ability
to rapidly prototype systems without getting hung up on too fine detail. And
it does it much more directly (i.e. there is an obvious link rather than a
tenuous one at best) and with much less stress, awkwardness and mind games.

~~~
tonylemesmer
fair point :) I guess I work in an industry (design engineering) where my
interviews have only ever consisted of a "nice chat"

------
rekatz
I think you'll find this response by @gayle to be spot on. SORRY, FOLKS:
GOOGLE HASN’T CHANGED THEIR INTERVIEW QUESTIONS BY GAYLE MCDOWELL, EX-GOOGLE
ENGINEER & HIRING COMMITTEE MEMBER
[http://blog.geekli.st/post/53477786490/sorry-folks-google-
ha...](http://blog.geekli.st/post/53477786490/sorry-folks-google-hasnt-
changed-their-interview)

------
echion
Insofar as this interview speaks to the relevance of brainteasers to actual
software development / engineering, it fails to provide a meaningful topic of
conversation. It surprises me that nobody's pointed out that at best the
conclusions are relevant to engineering "leadership" performance, rather than
-- as I expected for "Google" and "head-hunting" -- coding performance. Sure,
people skills and team skills are important, but if you're going to get good at
selecting for leadership and ignore selecting for productivity, to the extent
they're not related you're not going to be very good at creating and
maintaining software. Although software isn't 100% of Google's success and
coding productivity isn't 100% of software success, it's pretty important.

------
cwesdioner
Why are they talking about "Big Data" rather than just "data"? I doubt the
data sets they used were so large that they could not be easily analysed on a
cheap laptop using normal statistical packages.

When trying to work out what best predicts job performance, the quality of
your data is by far the most important thing to focus on. I would very much
like to know more about the details of their internal studies. There are a lot
of difficult problems in trying to use statistics to improve interview
processes. One of the big problems is that you will always have a truncated
sample of only those people who were selected: you would then expect the
importance of certain variables, such as GPA or test scores, to be lowered
because those who scored lower on such metrics will have had compensating
characteristics...
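The truncated-sample problem described above (statisticians call it restriction of range) is easy to see in a toy simulation; the noise level and the 10% hiring cutoff below are arbitrary assumptions, not figures from any real study:

```python
# Toy simulation of restriction of range: a screening score that strongly
# predicts performance in the full applicant pool looks much weaker when
# you can only observe the people who were hired.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Performance = score + noise, so the true relationship is strong.
scores = [random.gauss(0, 1) for _ in range(10_000)]
performance = [s + random.gauss(0, 1) for s in scores]

full = corr(scores, performance)  # whole applicant pool

# Only the top-scoring 10% get hired, so only they are observed later.
cutoff = sorted(scores)[int(0.9 * len(scores))]
hired = [(s, p) for s, p in zip(scores, performance) if s >= cutoff]
truncated = corr([s for s, _ in hired], [p for _, p in hired])

print(round(full, 2), round(truncated, 2))  # correlation shrinks after truncation
```

This is one reason GPA can look unpredictive inside a company even if it mattered at the hiring gate.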

------
contingencies
_Google reputation in shreds; funds transparent PR stunt_

~~~
fatjokes
Huh? Google's reputation in shreds? How so? If you're referring to NSA issue,
that's the American government's reputation you're talking about. Google (and
almost every other big tech company) was simply compelled to follow the law.

------
fnordfnordfnord
If you're a hiring manager who uses these things, you should know that I (try)
to train/prepare my students to answer them. I do think there is some utility
in watching how people approach an unconventional problem, but don't be too
impressed with people who can solve them easily, compared to those who don't
do well the first time they see them. I see a huge improvement in the quality
of answers of most students, once students know it is a gag and once they've
been shown how to estimate things. Most students are constrained by having
been in a learning environment that provides them with well-defined boundaries
within which to form their answers. IMO failing to perform well with these
problems is not always a failing of the student so much as of their educators.

------
chevas
The article also mentions:

"It’s also giving much less weight to college grade point averages and SAT
scores"

In 2004 I interviewed for a Creative Maximizer position. I received a glowing
review from my brother who was a Googler. I studied all the ins-and-outs of
adwords back then and the British interviewer confirmed: "You did very good on
the assessment" (which was working through real ads that needed to be
maximized). My opinionated experience has been that in these kinds of
situations, Brits embellish less than Americans.

However, she told me that my college GPA was "a major question mark" because
it was 2.99 and Google only hires people with 3.0 and above (I didn't know
what I wanted to do in college). Looking back I'm glad I was never hired, but
that burned me bad for a while.

------
drawkbox
The only true way to tell if an interviewee will be a good employee is actual
work/product output with the right amount of responsibility. You want
product-focused people, not just coders looking to code lots of tricks to compete.

Contract to hire is one way, another is what they have done previously as a
good predictor. It is a risk for sure but that really is the only true way in
the end.

Plenty can be gained from just letting the interviewee talk, and maybe looking
at some code they have written previously while they talk about it.
Whiteboard coding should not apply, as it puts many coders completely out of
their element.

The type of person they are can't really be detected correctly until they are
in the team and delivering, because everyone is selling themselves in an
interview.

------
31reasons
I think companies would be better off hiring people not based on their IQ or
skill level but by hiring people who love what they do, have done side
projects and achieve flow in their work. People who achieve flow in their work
will work harder and are more creative than others because they enjoy the
process of solving problems. So the interview process should be to identify
how often a given candidate achieves flow (as defined by Mihaly
Csikszentmihalyi).

[http://www.ted.com/talks/mihaly_csikszentmihalyi_on_flow.htm...](http://www.ted.com/talks/mihaly_csikszentmihalyi_on_flow.html)

------
alok-g
>> After two or three years, your ability to perform at Google is completely
unrelated to how you performed when you were in school, because the skills you
required in college are very different. You’re also fundamentally a different
person. You learn and grow, you think about things differently.

While the analysis is correcting some beliefs about interviewing techniques,
do I sense them drawing a conclusion again not supported by data? How did they
conclude that the lack of correlation is "because" the skills required are
different and people think differently a few years out of college?

------
ArekDymalski
This is great news both for Google and the candidates - as long as the
behavioral indicators for the competencies are defined right, according to the
actual goals and tasks of the job.

------
mossplix
So how useless exactly were they? As long as you are looking for a "right"
answer, not a correct one, they are a very good metric for testing
problem-solving skills.

~~~
rjd
I got off to a real bad footing in a job interview using a question like that
once, with a guy looking for a 'right' answer. It didn't go down well when I
challenged his assumption.

He asked how many plumbers worked in the city, to which I replied that you
could check the industry registry for qualified plumbers, and probably filter
them by city. There was silence, then the question was clarified to how many
'plumbing businesses' were there, not individual plumbers.

To which I replied that you could check the company registrar's office, but it
was impossible to calculate exactly, as so many plumbers work full time while
also running businesses of their own as free agents. A very unimpressed look
came across the guy's face and I was told there was a very simple way to find
out, and asked to try again.

I sat in silence for 30 seconds or so trying to think of something that would
be more thorough than the registry offices, then offered a few alternatives
like tax department records and the government statistics office - all things
I could think of that would keep fine-grained data. But I could see the guy
growing impatient with me, so I stared at him and asked him what a better
metric was than what I had offered.

After a few moments I was told the correct answer was to check the phone book,
any practicing plumber business would be listed.

Startled by what seemed like a completely faulty answer, I pointed out what
seemed obvious to me... not every business needs to have a public listing...
some deal directly as subcontractors... some could be umbrella companies for
subbies... again, some are free agents... some might use unlisted
cellphones... not everyone is a legal company, and not all plumbers were
qualified. It was a terrible way to get a dataset you could rely on.

Anger swept across the guy's face and I was told sternly that I was wrong, the
data was perfectly suitable, on to the next question... which was all downhill
from there, as he didn't want to hear my answers and didn't challenge me back,
just ripped through the rest.

To this day I laugh whenever I think back to that interview. It was probably
the most uncomfortable interview I've ever been in.

~~~
DanBC
So, is Google admitting the questions are hopeless, or are they saying that
their interviewer's reactions to the answers to those questions are hopeless?

Because fixing interviews is harder than just working out what questions to
ask.

~~~
rjd
I dunno about Google, but my experience was more copycat behavior by someone
who didn't get the purpose of it, I think... maybe I was partly to blame as
well, as I pushed back expecting to be challenged more... not just told I was
wrong.

It ended up worse than useless for both of us involved.

~~~
coopdog
Dodged a bullet - anyone can see your answers were at least as good as the
phone book one. Nothing is worse than a manager who relies on authority to
back up their flawed decisions just to spare their own ego. Sounds like a
toxic org culture.

------
ryguytilidie
This always seemed so overhyped to me. I did hundreds of interviews at Google
and I never once asked anyone a question anything like the ones described. It
was generally stuff like "oh hey, you're going to do deep work on our unix
systems? What is the difference between kill and kill -15?" We also didn't
care about GPA. This all seems like super old information if it was ever true
at all.

~~~
iyulaev
_What is the difference between kill and kill -15?_

Well, for one, the second isn't syntactically correct.

I can't decide what's worse, brainteasers or brainless trivia questions.

~~~
spudlyo
Pedantic nonsense. Most of the time when you're using kill you're going to be
using the bash shell built-in, where it _is_ syntactically correct.

 _kill [-s sigspec | -n signum | -sigspec] [pid | jobspec] ..._

Even so, with /bin/kill, -sigspec is still valid and common usage, even if it
is not documented in the manpage.
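For reference, `kill` sends SIGTERM (signal 15) by default, so `kill PID` and `kill -15 PID` name the same signal. A quick sketch using a throwaway `sleep` job:

```shell
# `kill` with no signal argument sends SIGTERM, which is signal 15,
# so the three forms below deliver the same signal.

sleep 300 &          # throwaway background job
pid=$!

kill "$pid"          # default: SIGTERM
# kill -15 "$pid"    # same thing, numeric form
# kill -TERM "$pid"  # same thing, named form

wait "$pid"
status=$?
echo "exit status: $status"   # 128 + 15 = 143 for a process killed by SIGTERM
```

So the "trivia" answer is that there is no difference in the signal sent; the forms differ only in how explicitly they spell it.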

------
uxwtf
"Design an evacuation plan for San Francisco." Why not Mountain View? At
least it would be useful for Google... if the big one comes.

~~~
brazzy
How's an evacuation plan going to help you with an earthquake?

~~~
6d0debc071
Escaping after the services are all shot to hell?

------
theboss
I'm still a student and like to interview at a lot of places, shop around, and
keep practicing my interviewing skills.

I STILL go to interviews where I am ONLY asked these kinds of questions...
It's embarrassing. If you ask me these questions for a 2-hour-long interview,
then I'm not going to work for you... it's that simple.

------
X4
Why so serious? Isn't hiring about maxing out the potential of a company?

Anyone can help maximize it. I know for myself that having 'clue' reduces
self-esteem, which can be balanced by having the right co-workers. End result:
maxing out the potential.

A(Technical intelligence) + B(Social Intelligence) = (Innovative potential)

~~~
X4
Why does Google want the PERFECT candidate - isn't anything less better?
Aren't they OK with 90% of the human population?

------
return0
So a lot of "difficult" manager decisions can be equally well solved by the
toss of dice.

------
hernan604
Those types of questions are plain stupid. They won't take you anywhere and
won't solve any problems.

------
voltagex_
I had to go look up what quotidian meant - it means "everyday" or "ordinary".

------
victorlin
Well, at least, they know how stupid it is now.

------
edwardliu
Oh really? what a surprise.

