How Google Hires (wired.com)
296 points by Libertatea on April 7, 2015 | 271 comments

The problem with "Tell me about a time when ..." questions is that they test the strength of self-congratulatory memory more than they offer a clear picture of valuable job skills. These questions assume the interviewee has a vivid set of occurrences for which they can commend themselves. A good candidate who frequently leads, salvages, and solves problems better than the rest probably doesn't store a list of these occurrences as special times they did good work; they think of them as the norm, and may have trouble praising themselves in detail. In fact, I'd be more wary of candidates who have a go-to story for these questions, as it may mean it's the type of thing that doesn't happen regularly.

> may have trouble praising themselves in detail

Yes, that happens quite often. The really good interviewers know that and understand that their job is to help the candidate present his/her skills. That means additional questions, rephrasing the question, patience, empathy, and above all: taming premature judgements.

It's really hard, but it can and should be done if the interview is to mean anything.

When I used to interview at Google (non-Eng interviewing), I found the "Tell me about a time when..." questions hugely helpful. The initial answer is just the start of a conversation where I as an interviewer could go deeper and deeper into the skills they exhibited, their thinking processes, how similar these situations were, etc.

If you just take their initial response and move on to the next question, you fail as an interviewer. You've got to go deeper... that's also the way that you can quickly spot a bullshitter or someone making a proverbial mountain out of a molehill project.

All good interviewing is done this way. The goal of the interview is to get into the thought process of the interviewee. You just need them to give you a thread to start pulling.

I think a lot of candidates think they're immediately screwed if they don't have a compelling answer to every question. It's not seen that way on the other side of the table, at least not when I'm there. Often the interviewer himself is just trying to find a foothold that he can take to really start digging into something. Good interviewers throw out a bunch of questions that are designed to get you talking until you start talking about something they can sink their teeth into. Bad interviewers have no idea what to do and throw out stupid, random questions that sound like job interview questions to them, accept your answer, and then do it again for the entirety of the 30-60 minutes. At the end of your interview, they'll make the same decision they would've made if they'd only seen your picture and were told they had to decide from that (unless you managed to offend the interviewer during your responses ;) ).

Any nervousness I had as a candidate was permanently resolved after I started interviewing people and seeing how others reacted to interviews. Most interviewers are not disqualifying you for all the stupid crap you imagine. If you're good and they pass, they're either that incompetent in which case you don't want to work for them anyway, or they have many good candidates and the choice is then based on unknowable differences in perception like perceived likeability or other superficialities.

+1. People who have been in management (or are "people" people) are often good speakers that do very well at a surface level, so it requires digging deeper to get a richer picture of what they were thinking and what they did.

Totally agree. Also, not everyone is a good storyteller. I'd find it difficult to pick one such occurrence ("Tell me about a time when ...") and describe it vividly in the blink of an eye. My answers tend to be laconic and concrete, especially when I'm nervous (which is somewhat natural during interviews). Such questions might be great for managerial roles, but not so good for technical ones imho.

The real "best interview technique" question is not so much black-and-white structured-vs-unstructured. It depends a lot on the type of job you are interviewing for. Structured interviews work really well for engineering jobs, particularly software engineering. However, for marketing, sales, finance, assistant, etc. jobs, unstructured interview questions work quite well. These are the jobs for which "Tell me about a time when..." actually works.

I don't understand. Isn't the point that your example good candidate will prepare for interviews by reviewing those occurrences such that they are vivid, but a bad candidate will be unable to do so regardless of how much they prepare?

No, the idea is that people who find it easy to praise themselves may actually do fewer praiseworthy things. Like when I sit on my butt for weeks, then wash the dishes one time and constantly remind my wife of that one time I washed the dishes. Whereas she might wash the dishes every night and never ask for praise.

In that situation, if someone asked "tell me what special activities you've done in the past month", I might point to that one time where I washed the dishes, whereas my wife, not seeing dish washing as anything impressive, might say she's done nothing special. Even though I only washed the dishes once, I come off looking better there since I have an example I can point out.

Simply saying "I did my job every day" isn't good enough. However, not doing your job except on one occasion makes that one occasion seem special.

In your example, "I washed the dishes once or twice" is not going to impress the interviewer and is quite revealing. (This is true for the more general version of your anecdote as well.)

No, but you'd come up with a story to go along with it. You didn't just do your job building a website for a client, but rather you had a client with a pressing need for a custom site and they were on a tight deadline. Normally your team wouldn't work contracts like this, but you thought it would be a good challenge and a way to test your skills. You had to work late nights for a few days, but you managed to get the site done in time and on budget. Sounds pretty nice, it's good that you were selfless and spent some of your personal time to help the client on such a tricky project.

What actually happened was you just sat there browsing reddit until the deadline had almost arrived, then worked like crazy trying to get the site finished. Normally your team wouldn't work contracts like that (which means normally your team would do their job properly). And it was a challenge and a test of your skills, because normally you don't do jack squat at work. In fact, the reason you're looking for a new job is because if you don't quit your current employer will fire you.

Now you have a great story to tell an interviewer about how you completely fail to do your job properly. But by dressing up the words a little bit and using some creative euphemisms, you can make it sound like you went above and beyond. And it's not really a lie, because you did go above and beyond... eventually. You just didn't have to do that if you had worked properly. All you have to do is leave out the fact that you're a massive slacker.

So you get the job because you told an awesome story of how devoted you are, and all you had to do was leave out one single fact (that the deadline was only tight because you procrastinated). Meanwhile your really awesome coworker tells every single fact 100% truthfully, and you get the job instead of him, because your story sounds better.

It's not too hard to tell a story about how you wanted to do something great, as though you actually did it. It's like the pseudocode of programming.

I think this is a good point, but an even more likely failure mode is just claiming credit for someone else's accomplishment. People who are good at self-aggrandizement have an advantage on these kinds of questions.

If the interviewer follows up with all the team members who were actually there it wouldn't work, but I kind of doubt this happens a lot.

Yeah, but "I took on the undesirable task of fixing all the shitty excel cleaning because no one else was going to do it" isn't going to score you many points in a job interview.

It makes sense that this could be the case, but do you have evidence that it actually is?

For example, one could make the argument that they favour people with good situational recall, because they're better able to think about occasions where they've done something that applies given the criteria specified. Or that it favours people who have depression, as they tend to be more self-aware and self-critical, so have a larger bank of answers readily available.

I disagree with "A good candidate who frequently leads, salvages, solves problems better than the rest probably doesn't store a list of these occurrences".

Unless one is talking many years prior, these situations are still in the memory. The question isn't meant as an approach to rehash a particular solution. It is meant to set ground work to explore decisions, options presented, etc.

I've written a bit about this: http://sockpuppet.org/blog/2015/03/06/the-hiring-post/

The worst thing you can do is unstructured interviews. Unfortunately, that's exactly what almost every company does: they create ad-hoc committees of workers and have them each design an interview, often on the fly, for each candidate.

Everything you do to add structure and remove degrees of freedom from interviewers will improve outcomes. Every company that hires more than one tech worker in a year should be working on this problem, and hopefully documenting their result.

Our structured interview process differed from Google in that we (a) completely standardized interviews and (b) designed the questions in those interviews to minimize the need for free-form follow-up questions ("Tell me about a time..." and then digging into the answer to mine conclusions). We tried to flip the script: instead of having the interviewer ask followups of the candidate, we put candidates in a position to ask lots of questions of the interviewer, within a framework that generated lists of "facts" that we could record and compare to other candidates.

Example: "I am going to describe a trading system in broad strokes. I know lots of things about how it works, and I want you to ask me enough questions so that you can roughly diagram it on a whiteboard, capturing its components." Then, later, "I want to have a conversation where we rank these components in order of sensitivity to latency." The interviewer captures the answers, but the back-and-forth is dominated and mostly directed by the candidate.

The problem I see with the Google strategy is that it combines interviewer-directed free-form questions with an open-ended and somewhat fuzzy question that really evaluates less the candidate's ability to do the work and more their presence of mind during the hostile, stressful interview process.

Things I positively responded to in this article:

* The notion that we come to snap judgements about candidates within minutes of the interview and are then hostage to confirmation bias for the rest of the interview

* The effectiveness of combinations of evaluation techniques; for us, a combination of work-sample testing and scripted interviews was extremely effective.

I wonder if it's common or I am just unlucky or unskilled...

I've been out of work for 11 months. I've been on a large number of interviews, and it seems like after that many failed interviews, the problem is probably with me.

The most frustrating thing is the lack of consistent information about why I didn't get hired. Some people said I had good theoretical grasp but can't code. Others said I could code really well but didn't have the grasp of fundamentals they wanted. Most stay silent.


I've grown to hate and resent myself.

I've conducted a lot of interviews. In my 23 yr career I've done on avg 10 a year. Early last year I was in Yahoo's hiring war-room and did more than I care to remember.

All that to say; I'd love to conduct a few mock interviews with you. My contact info is in my profile.

Do it. Contact me. Seriously.

Edit: this offer extends to anyone that wants help. I am a front-end engineer so that means you'll get more mileage out of me for JS, CSS, and HTML; but I am totally willing to help with the subjective side to interviewing as well.

Thank you for doing this. You've improved my impression of humanity today, and inspired me to do the same.

(And you also, papercruncher)

As someone who does 8 interviews a day and loves to help candidates understand what they did wrong and how they can improve; this is really, really nice of you.

Did you mean "on avg 10 a [month]" ? 230 over a 23-year career doesn't seem like all that many.

Maybe not heaps, but consider you won't hire at entry-level, and unless you beeline straight into management you're unlikely to be conducting anywhere near 10 a month, ever.

That doesn't agree with my (somewhat limited) experience at all. I don't know if my experience is atypical or yours is, though.

I interviewed more people than that per year while I worked at Google, and I'm doing even more now that I'm at a smaller company.

As a software engineer you are doing more than two interviews a week on avg?

I've worked at small places, where we might only hire 5 people a year; and larger places which were large enough to leave it to the leads (engineer-managers overseeing 20 working engineers)

Don't hate yourself, interviewing is a skill that can be improved with practice. I'm not hiring right now but if you're in SF, I'd be more than willing to give you a mock interview and all the honest feedback you want. Email is in my profile.

I think the best way to get over that is to make something on your own that is so great that everyone wants to hire you without even doing an interview!

The funny thing is that you may not even need a job once you do something great on your own :)

If you are short of money, you can always get a simple part time job to survive.

Last time I went out interviewing, I whipped up a fun little Android game, put it on my phone, and made a point of finding a way to slip it into the conversation. It was great because:

1) It let me set the tone and path for much of the interview, since so many places do unstructured interviews.

2) It was a concrete demonstration of my skills.

3) Java and mobile are both hot technologies, and the game also made effective use of other common stuff like OpenGL, multithreading, networking, database storage (via sqlite), etc.

4) Spending a few minutes playing a game sets a relaxed tone for the entire interview, which makes things easier for everyone involved.

Structured interviews really are a brilliant strategy. Whenever I interview, I do my best to subtly direct the interview in a way that exposes my strengths and leads the conversation into areas that I am most comfortable with (and trust me, whipping out a concrete example full of technologies that you're absolutely comfortable with helps). A structured interview, to an extent, would allow the employer to retain better control of the interview (whether or not they realize it), which is probably to their benefit. For instance, I teach Java, C#/VB, and some other modern languages at a local college after work, do tons of C / embedded / network / etc. stuff at work, and do digital electronics as a hobby ... so if you let me push the interview in those technical directions, I'm at an advantage.

This is great.

I am always surprised when I meet mobile developers who have never created a mobile app for themselves.

The game is called 'aqua balls' and is on google play. I haven't updated it in almost 3 years, though ... right after I finished it I started having trouble with my wrists and finger joints, which meant no more coding for fun (I took a team lead job and cut out a lot of typing at work too). My brother wrote and maintains the other app published by 'woggle' (he also did all of the art for aqua balls), so it is still updated regularly. He's actually a D.O. IRL, so this is 100% hobby for him.

> I think the best way to get over that is to make something on your own that is so great that everyone wants to hire you without even doing an interview!

This is a good idea.

I have no idea what people would want, what I could make that people would want, or what open problems exist that I could tackle. Every idea I have, someone else already has a better solution for.

I keep trying and put it on github, but it's mostly for me; no one looks at it.

Okay, here is an idea.

The Julia people are complaining that their language is great but it is not being picked up because their standard libraries are missing a ton of functionality.

Start knocking out some standard library functionality.

Sure, you may not be interested in Julia. But there are plenty of other projects out there that are understaffed. IPython Notebook needs developers. Octave needs somebody to write a good front end. It goes on and on.

The advantage is that you don't have to be super creative (compare Julia's libraries to Python, and start implementing something that isn't yet in Julia), nor predict the future (meaning, you can make a brand new X, but if the world goes to Y, you will never get picked up). What you write will be used by (at least) thousands, and you will have to write production quality code to get your pull requests accepted. I'd be impressed with anyone that did that, even if I had no interest/need in the project that they contributed to.

I would like to proffer a different strategy: Pick something that you are passionate about, good at, and comfortable with. Then push your personal boundaries and learn something new while making some app/widget/whatever, even if it's been solved/done before. Why did you do it? You wanted to stay up to date, learn something, and it was fun (and all of this will be true).

Whatever you made will probably be something pretty sweet because you chose something close to your heart, not something that was simply 'unsolved' or 'needed doing'.

Employers will love that you chose to challenge yourself to build new skills as a hobby, and because you chose something that you are passionate about, you'll probably also have a good amount of enthusiasm while you talk about it, which is equally important.

That sounds like a great idea. Do you have any specific links or resources regarding limitations of the Julia standard library?

Where is your github? People bothering to respond to your comments on HN might be the only ones you will be able to get to glance at your code ... but I bet they will. I would have if I had been able to find your github.

Are you showing it to anyone? Put it in your profile, for starters.

Pick an open source project that interests you, preferably one you personally use, and become active in their community. The issue/bug tracker of almost any open source project has a list of feature requests and bugs to work on. You get to add open source contributor to your resume, and become a "subject matter expert" by virtue of that.

You've got about a billion comments like this in your history.

The previous advice has been good.

If you opened up more about your specific troubles I think more on HN might help diagnose.

Maybe put some stuff on github, who cares how awful it is or if no one will use it .... The following statement is not globally true but I think it is probably true in your case --- you won't be able to succeed until you fail some more.

If there was a "hoboon" github we could look at when responding to your comments, the advice could be more specific.

You're right. I talk about this often. I'm going a little crazy, and maybe angry.

Yeah yeah, it's all good. We cool.

Show us your code.

Frustrated and wanting to improve your skills -- fine.

Hate yourself -- hmm, not good.

Everyone struggles and it's very hard. Looking in the mirror and figuring out what you can do better, not being satisfied with what you've achieved: I'm all for that. It's how we grow. But you cannot allow your assessment of yourself on a single dimension to become your assessment of your entire worth. I'm certainly not saying we're all special flowers, and our professional life is hugely important. But it isn't all we are, and those skills aren't static anyway.

Also: If you want feedback, you can do things to improve your chances of getting it. Be incredibly positive in your responses -- "thanks for the opportunity!" People are obviously reluctant to talk with someone they've rejected, so signalling that you won't make this hard for them can help. If you have guesses at the problem, or suspicions you'd like to rule out, suggest them. Statements like "I think I need to build up Z" or "I worry that I come across as Y" might give you useful yes / no answers, if they are set up properly.

Think also about informational interviews as a means for seeking out fit. If you're just having coffee with someone, about what they look for and how people succeed, they are much more likely to be brutally honest than they are after rejecting you. If I'm having coffee, I can politely tell you why your resume or body of work look off, because I think that's helping you.

May I humbly recommend this resource?


Don’t hate yourself. I refuse to hate myself. I know my many skills and powerful talents. Hate the system.

Where do you live?

Regarding snap judgements, I've had multiple interviewers write me off within the first five minutes. It's just silly and a waste of everyone's time.

Example: Interviewer asks me a question. I start brainstorming out loud because they coached me to talk through my ideas. Of course some of my ideas are not going to work. The interviewer will shoot the first wrong idea down immediately and peg me as an idiot for the rest of the interview, even if my second idea is the perfect solution.

This is the exact opposite of an interview I used to run in my old biology lab. We used to sit down a new candidate in front of a microscope and ask them to do some cell transfers and counting. They had to finish the task.

We right away knew if they were persistent, skilled, problem solvers, and if they were meticulous.

If they pulled it off, maybe you knew that. If they didn't -- is it because they suck? Or because they're incredibly nervous and just got put in an environment to which they were not acclimated, and told to do work?

I may not be a scientician, but I'm pretty sure there's something in the Big Book of Science about controlling for variables...

They may have been good, but we have no way of knowing that if they are nervous. I doubt too many employers have this figured out either. If the candidate is nervous and doesn't perform, there isn't too much that can be done about it other than maybe looking at past history.

We were also rigorously testing to see if they could get acclimated to the environment. They would have to do this routinely since there were so many new lab techniques to be learned.

I think it's the opposite. I would much rather have only my and the candidate's 5 minutes wasted, versus a whole hour, when it's very apparent in those first 5 minutes that they don't have the technical skills we need for the job.

Presumably you consider how drastically your whole process has failed if you bring someone in for an on-site interview who demonstrates within 5 minutes that they lack all the technical skills required for the job.

Well, mistakes happen - a phone screen can only get you so far, especially in a seller's market where your talent pool is pretty shallow. Though I think a shared google docs interview is probably the best way to go for the initial screen; I just haven't given or taken one like that yet, so it must still be relatively rare.

We came to the conclusion that phone screens are pretty close to worthless. We'd still have been very upset if we ended up in a face-to-face interview with someone who had no business interviewing for the job. You need to do more than just phone screen (actually: you should eliminate phone screens).

(I don't know who's downvoting you but I upvoted.)

I'm really fascinated by your replies on this thread; seems you build a really solid process there.

I'm curious about the phone screens - how do you screen out the candidates that are grossly unqualified so that they don't progress to the interview stage?

In other words, how do you screen out candidates like those Joel describes here: http://www.joelonsoftware.com/articles/ThePhoneScreen.html

By my last year at Matasano we were literally discarding the results of phone screens. We did them largely as a pro-forma exercise, and because they made team members more comfortable with the process as a whole.

We virtually never selected out candidates based on phone screens. We had a work sample process that kicked in after phone screens, and that cost us almost nothing to run. So we never had an incentive to prevent a candidate from going through that process.

Unlike phone screens, the work sample results were strongly predictive. You could bomb phone screens, ace work samples, and end up coming in for in-person interview.

(We did "nudge" candidates we felt wouldn't do well on the work sample tests, solely out of concern for not wasting their time, but anyone who wanted to proceed was able to).

Phone screens are a waste of time.

Spolsky did a really good job of documenting the best practices within the framework of unstructured interviews as they were practiced 10 years ago. The problem isn't Spolsky's tactical suggestions; it's that the strategy they're a part of is being discredited.

I'm curious how often you had people voluntarily drop out of the process without completing the work sample, and whether that frequency would change if you didn't have a phone screen. (I take it from your questions that you never actually stopped doing phone screens).

If my first contact with a company is them asking me to put in hours of my time, when it costs them "almost nothing to run" (your words) then I'd be inclined to pass.

The company I work for has a work sample exercise, and we intentionally place it after a phone screen and first interview, because we feel it is (and appears) more fair to the candidates.

From our point of view, we'd love to put it up front, as it is the best source of information we get, but if it caused us to lose good candidates before we even started (and we believe it would) that would be a show-stopper.

Most of our candidates drop out before the work sample. On the other hand, almost none of our candidates are qualified to work for us when they initially apply.

We make it really clear that there will be work samples before people even apply. I think the real question would be "how many qualified applicants don't bother applying because of the work sample," which is a question we can't answer. Given the paucity of unemployed qualified infosec folks, we're comfortable with the tradeoff.

I’m curious exactly how many times a candidate gets to apply to Matasano.

Thomas describes the Matasano process as costing “almost nothing,” but that includes running a couple exploit training websites and sending an 852-page book to applicants. Which I’m not inclined to go through immediately, because I need money right now.

Do I have to do the microcorruption.com and the cryptopals.com and The Web Application Hackers Handbook before even trying the technical screens and challenges? Or should I try seeing if my existing web application security best practices and rusty MIPS assembly experience are enough to get to where I have enough breathing room to do these exercises?

> I’m curious exactly how many times a candidate gets to apply to Matasano.

Undefined. Personally, I applied twice, ended up coming to work here the second time through.

> Do I have to do the microcorruption.com and the cryptopals.com and The Web Application Hackers Handbook before even trying the technical screens and challenges?

Nope. But they might help, and you may want to anyhow; they're fun :)

No! You absolutely do not need to do Microcorruption or Cryptopals.

On the other hand, if you made something you could give to anyone who applied and was little to no work for you how many good candidates would you pick up that you wouldn't have given a phone screen to?

I think that varies quite a lot based on company profile and role-type.

We phone screen anyone who has a resume that's vaguely relevant. If you're applying for a software engineering role and have never done any development, then we'll reject straight away, but that almost never happens.

We simply don't have a high enough profile to receive a flood of applications that we need to filter before we phone screen.

And for a software engineering role, evaluating the work test takes as much time as the phone screen (sometimes more). We're not simply checking whether you can produce the right output for FizzBuzz (checking that could be automated); we're looking at your choice of algorithm, design trade-offs, unit testing approach, etc - things that require a human to be involved.
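To make the "checking could be automated" point concrete, here's a minimal sketch of what an automated FizzBuzz output check might look like. The function names are hypothetical, not from any company's actual grading harness; the point is only that comparing a candidate's output against the canonical sequence is mechanical, while judging design trade-offs is not.

```python
def expected_fizzbuzz(n):
    """Generate the canonical FizzBuzz output for the numbers 1..n."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

def check_submission(candidate_output, n=100):
    """Mechanically compare a candidate's output lines to the expected sequence."""
    return candidate_output == expected_fizzbuzz(n)
```

Everything this script can verify (right output, right order) is the uninteresting part of the evaluation; the human reviewer is there for everything it can't.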

Different companies and different roles will have different time investments.

A CTF style task can be evaluated more simply (at least on first pass) - did you capture the flag.

And if you are looking for a few amazing candidates from amongst a mass of poor ones, then you can afford to optimise your process so that you test every candidate at as low a cost as possible, even if that causes some candidates to self-select out.

I have heard much the same about dating, so I don't think it is necessarily about a process failing, as the way we judge people based on first impressions.

Curious how you could judge that a candidate doesn't have the technical skills you need for the job in 5 minutes? Can all the tasks your employees do on the job be completed in 5 minutes?

Sometimes you get applicants that literally don't know what a compiler is, don't know what a variable is. That sort of thing. That kind of person can probably be ruled out in the first 5 minutes for sure.

Of course I think everybody would agree that if such a person progressed to the interview stage, there was a failure somewhere.

Thanks for your hiring post article -- it's one of the best things I've read about what interviewing should be and I've shared it with countless friends and colleagues in the last month.

Thank you!

> Our structured interview process differed from Google in that we (a) completely standardized interviews

Google interview questions are leaked so often that standardization is virtually impossible...

I don't see why it's so hard to just standardize on a formula for generating questions, rather than a static question.

There's a simple method of doing this that doesn't require hard AGI to generate the questions, either: just standardize on a list of the microskills the job requires. Then, get the interviewer to create a small question to test each microskill. Keep previous interviewers' questions together with the microskill in the list as examples for the next interviewer.
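As a rough illustration of that idea (the data shapes and skill names here are invented, not anything Google or the commenter actually uses), the "list of microskills with accumulated example questions" could be as simple as:

```python
# Hypothetical sketch: a microskill list that doubles as a question bank.
# Each microskill accumulates the questions past interviewers wrote for it,
# so the next interviewer generates a fresh question from the same template
# rather than reusing a leaked static one.
microskills = {
    "pointer-arithmetic": [
        "Given a char* into a buffer, compute the index of the element it points to.",
    ],
    "off-by-one-reasoning": [
        "How many integers are in the half-open range [a, b)?",
    ],
}

def record_question(skill, question):
    """Append a newly written question under its microskill."""
    microskills.setdefault(skill, []).append(question)

def examples_for(skill):
    """Prior questions an interviewer can consult when writing a new one."""
    return microskills.get(skill, [])
```

The standardized artifact is the skill list plus its growing set of examples; each individual question stays disposable, so a leak costs one question, not the whole rubric.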

Why does that matter?

Your question isn't very good if you are selecting for people that know your question vs are smart/skilled.

Even if I were not fit for a job with you, or a position in your company, I would LOVE to interview just to experience what this is like.

> (a) completely standardized interviews and (b) designed the questions in those interviews to minimize the need for free-form follow-up questions

Doesn't this just make the system easier to game? It's like SAT prep. SAT scores correlate with the amount of time/money spent on preparation, not your actual knowledge.

The process of answering matters far more than the answer. It's not a graded multiple-choice test.

Out of curiosity, how do you know if your interview process is better? Is there some scientific method? A lot of what you write makes sense, but how can I convince myself that I am not falling prey to confirmation bias?

I really enjoyed this article, thank you for writing it. I'm going to marinate on the things you talked about and see if I can try to influence some change where I work.

Let me know if there's anything I can do to evangelize. We got this to work, and it was pretty amazing to watch.

> All our technical hires, whether in engineering or product management, go through a work sample test of sorts, where they are asked to solve engineering problems during the interview.

I disagree that a whiteboard session can be considered a work sample test. What's wrong with 'complete steps 1, 2 & 3 and I'll be back in 30 minutes'?

For engineering, we've actually found that a 2-3 hour, single-problem test is a better predictor, combined with three or four 30-minute face-to-face sessions (usually two on Skype prior to the onsite test, then two follow-ups). Alone time gives the candidate room to digest and solve the problem on their own, which is more 'real world' and shows what they can do independently. Some questions are intentionally vague to test for 'able to make decisions with little information'. There is no single right answer, but we will ask why you made the decisions you did. This frees up our staff's time tremendously and weeds out those that may need a lot of hand holding early on.

This is exactly correct. A work-sample test needs to capture the normal parameters of the work. If team members aren't routinely called on to solve programming problems in a high-stakes stick-the-landing-or-you're-fired exercise on a whiteboard in front of an audience, then the prediction the test is trying to make is confounded by all those factors.

* Eliminate live audiences

* Let people work in the environment they'll be able to choose on the job

* Ideally, let people work in their own comfortable environs, even if they won't be able to do that on the job

* If you're worried about cheating, build that assessment into your in-person followup interview

How often did people cheat in remote work-sample tests?

I never once saw it happen.

I helped a friend cheat.

Originally he'd simply asked if I could be there whilst he did the test so that he could talk aloud and bounce ideas off of me, as it was timed and he gets nervous in tests and was afraid he'd not think clearly.

He froze. Totally.

I took over and did the test for him, and he aced it and was offered the job at the top salary band.

He is a perfectly good engineer and the company was very satisfied with their hire, but he never did that test. The test in its entirety was completed by me.

I cannot imagine this is such a rare thing with remote technical tests.

I believe both of you, but our process did/does nothing to overtly catch cheaters, we hired directly off its conclusions, we hired at a rate faster than most VC-funded cash-flow-positive YC companies, including in SFBA, and we never let anyone go (nor did anyone ever quit while I was there) once we made an offer.

(Matasano is also not a company where it's easy to duck attention and coast; the tempo is 2-3 week engagements that wrap up with metrics that everyone cares deeply about).

The conclusion I draw is that cheating just isn't as big an issue as people think it is.

I've seen it several dozen times. A solution to one of our sample tests made it out to GitHub, and that was all she wrote. For a while we kept the same sample test as a honeypot, but disqualifying half of the candidates for cheating was tiring.

I still agree it's the best approach.

> disqualifying half of the candidates for cheating was tiring.

That actually sounds awesome. You could reduce your interview evaluation overhead by nearly 50%!

Meh. IT unemployment has been about 2% for YEARS. I can get a good job on a team I'll enjoy working with in less than a week. If you're a top tier IT name then it might be worth jumping through hoops, but I will wind up at companies with the least hassle in the hiring department. 15 years in the industry has told me that there's no strong correlation between hiring practices and team quality.

Precisely why I hit the back button if my employment application is through something like Taleo. My time is valuable, too valuable to waste on a resume rabbit hole.

Actually, that brings up a really great point. This could be an interesting article. Considering how much the web focuses on 'conversion of web leads' and making the entire process seamless, you'd think you could quantify the candidates lost (and put a dollar figure on it) simply because the interface wasn't friendly and turned candidates away. However, all too often I feel that the goal for everyone involved but the hiring manager is a seat filled.

Anecdotally, I was prompted 'Zip code required' on a multinational company's talent portal. I tried to fix and submit for at least 60 seconds before I finally had to modify the CSS to show the input so I could put in a value, as the Zip input was set to display:none. I figured at least there would be less competition for the role.
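The console workaround looks roughly like this (the field name and value are invented; on the live page you'd grab the real node via `document.querySelector` in dev tools):

```javascript
// Sketch of un-hiding a required form field that was set to display:none.
// `el` stands in for something like document.querySelector('input[name="zip"]');
// the selector and zip value here are guesses, not the portal's real markup.
function revealAndFill(el, value) {
  el.style.display = 'inline-block'; // undo display:none so the field renders
  el.value = value;                  // fill it so required-field validation passes
  return el;
}

// Simulated element object (a real run would operate on the live DOM node).
const zipField = { style: { display: 'none' }, value: '' };
revealAndFill(zipField, '94103');
```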

It's possible that your application was the only one... in months... so nobody was even looking at these applications.

You did not get a reply in the end, right?

not a peep :)

That's a killer idea for a front-end developer test.

Front-end developer may not want to work in the place that makes such silly mistakes.

If you deliberately made it that way and told your candidates, it might work well. If you're a web dev and can't figure out how to un-hide a field, then you don't deserve an interview.

Definitely intentional, more as a gauge of problem-solving ability. Fix the bugs, submit the test, get an interview.

Similarly, I send my resume out as a PDF. If they ask for it in Word format, I know it's going into that kind of system. I may or may not send it in Word format, but it's definitely a mark against them.

I have questions for both you and IndianAstronaut, then. I see statements like this frequently. I also see people talking about desiring to work at places like Intel, Qualcomm, Nvidia, and other such large tech companies that violate one or both of your rules. I assume you are not in that group? Would you really turn down one of those companies if a recruiter approached you and asked for a Word resume or told you to fill out a Taleo profile?

I kind of hinted at that in my original post. If it's a named company that I explicitly desire to work for then I might be willing to jump through a couple of hoops. Generally speaking, though, the name on the sign is not a strong indicator of whether or not I'm interested in working there. I'm usually looking for some combination of salary, technology stack or cool project, and strong/intelligent team. Having a recognizable company name on my resume is a very very low priority.

If a recruiter approached me, then sure. The recruiter is at least keeping an eye on my resume. Otherwise I will not apply. Taleo is a black hole.

Same here! It's a definite alert.

Even more irritating is that recruiters don't appear to know the difference between C#, C, C++ and Obj-C.

> weeds out those that may need a lot of hand holding early on.

i.e. You are using interviews in order to avoid paid in-house training.

True, but there's a difference between paid in-house training and being able to investigate problems on your own without constantly bothering everyone on the team to do your thinking.

The latter is a candidate I try to avoid like the plague.

I'd agree with everything but "alone". While having an interviewer (or more than one) staring at you while you solve a problem is unnerving, I'd also expect to be able to elicit requirements during the exercise, either before, during or near completion - because that's how it has always happened on my past assignments.

Do you let your interviewees reach out (possibly via IM) to your interviewers with specific questions?

Yes, we all have skype on our machines and I show them how to dial my phone extension if they need anything.

Edit: also, for clarification, we discuss the problem ahead of time to make sure they understand it and can ask any questions. For my group it's always a basic full-stack problem. Here's a mockup and a basic install of Visual Studio and MSSQL; show me you can CRUD some data and present a decent UI. Then we'll talk about how you did it.

I had heard horror stories of Google's hiring practices: long timelines, brainteasers, GPA, top schools, etc. Hence I had never really considered Google as an option. Then a recruiter got in touch and scheduled the interviews. I wasn't sure if I even wanted to go in. But I did. And it turned out to be the best interview experience I ever had. By far. I didn't get the offer, and despite that I felt great about the process and myself. I called up all my talented friends and encouraged them to apply at Google. They have definitely gotten this right.

As with most things, anecdotes are anecdotes. There are many horror stories and many great experience stories. But the bottom line was that it is important to Google and that was why they had folks go through a pretty detailed training class and they often took candidate feedback through the hiring committees back to the interviewers.

Disclosure: I worked there for four years and interviewed a lot of people.

I've also done an interview at google and would have to agree.

I decided to take another offer. I'm sure Google's offer would have been great, but the timeline was taking too long, and I wanted to start at the other place, so I didn't wait for it (2 weeks at that point).

I had a similar experience. Google's interviews were generally great (and the interviewers were kind -- it might have helped that I generally did well in them), but the overall process was far slower than some startups and I eventually had to shut down the process at Google to take another offer that was too good to turn down.

The bottom line is that, unless you request everything to be expedited from the beginning, it can take a few months to finish the entire process with Google. Even requesting a schedule change to be made would take about half a week for me. At least that was my experience -- I can't speak for others.

>it can take a few months to finish the entire process with Google.

That is not too bad for such a large company, really. Personally I have always worked for small software companies where the hiring process frequently consists of one interview, a bit of a programming test, and an offer, all concluded in 72 hours (frequently less). Compared to that, a few months seems slow, but I once had a friend get hired as a trader at a huge international financial organization, and that took a glacial six months.

What's more, there was the expectation that you would more or less just walk out on whatever job you happened to have accepted in the interim, get on a plane, and fly to wherever they wanted you as soon as you finally received an offer. Looking back on it, I suspect that agreeing to that was itself part of their selection criteria.

I had heard similar things and wasn't really considering Google either, but a friend there recommended that I apply, so I did. The recruiter I interacted with was moderately (but, I suppose, understandably) condescending, and the phone interviewer combative; he apparently hadn't bothered looking at my resume, assuming I was still in college (I've been out of college for a decade).

How Google actually hires:

Lean heavily on brand name and the preconceived prestige of a google position, give the same stock interviews that most companies give that are heavily tilted towards false-negatives rather than false-positives, and then pay well.

Speaking as somebody whose SO works at Google, I think it's fair to say that Google does not hire the best people, but hires a lot of people who think they are. CLs being delayed by a week or longer because of nitpicks regarding comment capitalization/grammar/wording, "why didn't you follow the exact idiom of this totally unrelated library/code component that I wrote," or just pure spite is commonplace.

Also, memes have almost completely subverted the English language at Google. Really, the window into Google that I have is hardly showing me "Best of the Best" material.

It's too bad that Google has likely permanently ruined their HR image. They have developed an overwhelming reputation for arbitrary interviews and prolonged hiring decisions.

I'd like to think I'm a pretty good developer and have made a positive impact everywhere I've worked. Yet I'd never even consider applying to Google, because I know exactly how that process would go down.

1) Most likely result: I never hear back at all, because I didn't attend Stanford/MIT. (Never mind that I got accepted but didn't attend due to better financial aid elsewhere.)

2) If they do decide to interview me, I won't hear about it for months.

3) I'll have to drill on data structures and efficiency of sorting algorithms I've literally never implemented outside of an interview or academic setting.

4) Even having drilled on those arbitrary questions, I'll likely fail the interview because after a multi-day gauntlet of questions a random engineer didn't like the particular assumptions I put into my Fermi model of golf balls.

5) Of course, if they ever bother to let me know I failed, it won't be for months.

(Note: this is all based on actual Google hiring experiences.)

Based on my actual experience:

1) I went to community college, dropped out, then finished my degree at a state school 10 years later. Google didn't care.

2) I have no idea how long it took them to decide to interview me. Once they decided I got a call. People I've referred recently have received _very_ prompt responses, though years ago sometimes people would sit "in the system" for a while.

3) It is good to brush up on algorithms, like for most other SWE interviews. You do end up using this stuff regularly, though. Even if, say, you're "only" doing UIs, it's how you avoid accidental n^2 algorithms in the middle of your UI code, which piss off your users. I don't get the attitude that algorithms are just academic. They matter. Besides, "use a hash table" is often the correct approach in interviews and non-academic code ;)

4) I have no idea what a Fermi model of golf balls is, but I've never asked or been asked such a thing. And "fail" is such a black and white term. Candidates are scored, and those with marginal scores the first time often get another chance later on.

5) Not my experience in general. Sometimes the interviewers lag on getting in their feedback for non-hires, which is really unfortunate, but the recruiters are pretty on top of things.
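A minimal sketch of the accidental n^2 from point 3 (the duplicate-finding task and data are invented; any array-membership check inside a loop has the same shape):

```javascript
// Quadratic by accident: indexOf() rescans the array for every element,
// so finding duplicates this way is O(n^2).
function dupesSlow(arr) {
  return arr.filter((x, i) => arr.indexOf(x) !== i);
}

// Linear: a Set gives O(1) average-case membership checks.
function dupesFast(arr) {
  const seen = new Set();
  const out = [];
  for (const x of arr) {
    if (seen.has(x)) out.push(x);
    else seen.add(x);
  }
  return out;
}

const items = [1, 2, 3, 2, 4, 1];
dupesSlow(items); // [2, 1]
dupesFast(items); // [2, 1]
```

Both return the same answer on small inputs; the difference only shows up as the list grows, which is exactly why it slips into UI code unnoticed.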

You say your post is based on actual hiring experiences, but I bet that the negative experiences get much more attention than the positive ones. People don't generally write blog posts about how wonderful their interview was.

1) Maybe it's different for internships, which is the only time I ever applied to Google.

2) I agree that algorithms are important, to the extent that you know the importance of "use a hash table." I don't think being able to implement quicksort on a white board is important.

I don't think these interviewers expect quicksort verbatim (though they usually don't ask for a run-of-the-mill algorithm).

They want to see your thought process as you tackle an unfamiliar problem. Do you think about the input and the end-goals explicitly? Do you consider multiple different ways of solving it, and cover pros/cons? Do you consider where issues may occur, and attempt to mitigate them? That's what they want to see.

I've heard from one senior engineer at Google that he likes to ask candidates to sketch out the implementation for a red/black tree on the spot.

I guess I could go dig up my old data structures and algorithms book that I haven't touched in 15 years to refresh my memory, but I don't want to work at Google badly enough to bother with that.

I can totally relate. When a Google recruiter approached me and told me that they were looking for Python and also Java developers, I was potentially interested in the Python position (being a Ruby dev). But then I learned that my interview would be conducted in Java and they would generously give me one week to brush up my Java skills. I laughed and declined. I definitely have better things to do with my evenings than brushing up my Java skills so I can work a Python job...

Really? The general policy, as I've been told, is to let the interviewee use whatever language they're comfortable with as long as the interviewer is comfortable with it too. I even give the option of using sufficiently detailed pseudo-code.

Well I guess nobody in the team I was contacted to work for was comfortable with Ruby. Screenshot of the actual email: https://www.dropbox.com/s/4g01mqxycyf08u4/Screenshot%202015-... I let you draw your own conclusions.

I actually had a very good interview experience at Google even though I didn't get an offer. One interviewer and I clearly did not get along, but that happens at other companies too.

Same here. The recruiter was super nice and it all went very quickly. I got to a second interview, but then didn't make the cut. I understand why; I screwed up those interviews pretty badly. It was my first technical interview ever. But, the experience was great and I'm much better at tech interviews now.

But, unless you were being an ass to that interviewer, whether you got along or not is not a measure of your quality as an employee.

Presumably you were being polite and trying to develop a rapport.

That means you didn't get along because the interviewer brought their own baggage or narrow mindedness into the interview.

This is very common and shows the interview process was incompetent.

That may sound like a strong word, but if you were qualified and they didn't hire you because of what this guy said, then they are, by definition, incompetent.

Maybe arbitrary is a nicer word to use, but it's the same thing.

You seem to have an incredible grasp on what went on in that interview, along with how common it is.

The poster responded to someone who publicly offers interview advice. That affects probabilities in the poster's favor, I imagine. https://www.ocf.berkeley.edu/~kelu/interviews/

More importantly, conditions like "unless you were being an ass to that interviewer" make it an entirely sensible comment, in my view. I remember strongly disliking how fellow interviewers had zero empathy. They generally have more power in that situation, and therefore more responsibility for reasonable outcomes.

I definitely agree with your observation that empathy is important, and an interviewer has responsibility for creating an environment in which the interviewee is comfortable.

I also agree with the commenter that interviewers can be very arbitrary, and that is not good.

I only took issue with "That means you didn't get along because the interviewer brought their own baggage or narrow mindedness into the interview. This is very common and shows the interview process was incompetent."

The commenter doesn't know what went on in that room, or that this is what happened in this case.

Apply during a down-cycle.

Most of the hiring snafus at Google are because they use a large population of temp recruiters, whose contracts may not be renewed before the candidates they're sponsoring get offers. When that happens, the candidates are often left in limbo.

When there's a recession, all the temps are laid off, and you only get the permanent recruiters who now fear for their jobs. They are extra incentivized to a.) make sure the candidate gets through the process, so that they have a reason for their job to exist and b.) make sure the candidate has a good experience, so they get a commendation. And c.) they're all experienced, so you don't get the kind of gratuitous screw-ups you might with temps.

The hiring bar is higher during down cycles, but that can also work in your favor as well; your resume doesn't just show "Worked at Google", it shows "Started at Google when nobody was hiring", and you occasionally get comments like "So how long you been at Google? Oh, that means you started in...January 2009, wow, you must be smart to have gotten through the hiring process then."

This is a bad idea: there was a hiring freeze during the last down-cycle, and Google tried to cut costs while keeping all its engineers. (The biggest change I felt was that bottled mineral water was replaced with ultra-clean water from the river in Zurich.)

when are the "down cycles" and "up cycles" ?

Current status: up cycle

The next down cycle: You'll know it when you see it

Yeah, you'll know it when you see it. Some indicators:

The press never writes articles on startups, except to report their demise.

Everybody is seemingly in a hiring freeze at once. (But don't let that stop you! Many times, companies that are reported to be "in a hiring freeze" are actually still hiring for the right candidate.)

VC firms start telling their portfolio companies to conserve cash, shutter lines of business, and lay off people.

Plans for new corporate headquarters are shelved indefinitely.

You start hearing "Well, at least I still have a job" from friends.

You start hearing "Fuck, I just got laid off. Can I crash at your place for a month while I find a cheaper apartment?" from friends.

Searching for new jobs starts to seem dauntingly frightening - what if they go under? (In the context of this thread, this is ridiculous - Google is not going under.)

Also, bubble talk stops.

Most people will go out of their way to invent reasons why a disadvantageous evaluation system is fundamentally broken. For example, people who don't do well on standardized tests (e.g. SATs) typically say "I'm bad at test-taking" or "SATs are stupid" despite overwhelming evidence that SATs are a good predictor of general intelligence.

It's worth considering the possibility that while you may be a great developer, you're not as good as you think you are with respect to the caliber of people that work at Google.

(Note -- I don't work at Google and didn't do spectacularly well on standardized tests; but after working with many algorithms whizzes over the years I've learned that I'm not nearly as good a programmer as I once thought).

If Google primarily worked on developing novel sorting algorithms, I'd agree that this is a great process for them. Heck, maybe it is a great process for them—they just do an astoundingly poor job of explaining who they want to work at Google.

(Also, please recall that literally the only time I personally applied to Google was the freshman year of college. This isn't a case of personal sour grapes.)

By the way, I also don't think their hiring process is fundamentally broken. Just pointing out that this is the reputation it's acquired.

Since you brought up the SAT, it's an absolutely perfect and effective system with zero flaws—which I did spectacularly well on.

> despite overwhelming evidence that SATs are a good predictor of general intelligence

I never heard this before. I thought that SATs and other standardized tests heavily correlate with background / race. Which to me, means it's not a good indicator of intelligence, but rather education.

That depends on whether or not you think intelligence is an inheritable trait.

Who cares how well it does at predicting general intelligence? If we wanted to give people a test to figure out their intelligence, we would give intelligence tests. If we are going to use it for college admissions, it should predict how well one would do in college. It fails mightily at that job (correlation coefficient of .2 when compared against first-year grades).


If SATs were some sort of general intelligence assessment, it is unlikely that a $500 Kaplan course would significantly increase your scores (which they do).

That depends on whether you consider 30 points significant or not. [0]

[0] http://www.wsj.com/articles/SB124278685697537839

Like any other study, you need to look at the source of that data. I'd trust a study re: SATs sponsored by "National Association for College Admission Counseling" about as much as a study about pollution sponsored by the "American Petroleum Institute".

It's a small anecdote, but my high school got a grant to do a pilot program to incorporate SAT test prep into the school program back in the 90s. IIRC, the average score went up 100 points vs. the PSAT. With the old version of the test, I went from the 1200s (80th percentile) to the 1400s (95th). Writing was an optional test then, and the test prep didn't cover it, but I was already familiar with the writing process from AP courses.

30 points, IMO, would represent prep focused exclusively on test strategy. For example, with tests like the SAT, answering a question wrong comes with a higher penalty than not answering.
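That penalty works out neutrally for blind guessing on the old five-choice format (a back-of-envelope sketch, assuming the classic scoring of +1 for a right answer, -1/4 for a wrong one, 0 for a blank):

```javascript
// Expected value of guessing on a question with `choicesLeft` options
// remaining (old SAT scoring: +1 right, -1/4 wrong, 0 blank).
function guessEV(choicesLeft) {
  const pRight = 1 / choicesLeft;
  const pWrong = 1 - pRight;
  return pRight * 1 + pWrong * -0.25;
}

guessEV(5); // ~0     -- blind guessing neither gains nor loses, on average
guessEV(4); // 0.0625 -- eliminate one choice and guessing becomes profitable
```

So the strategy-prep takeaway is "guess whenever you can eliminate at least one choice," which is exactly the kind of gain that reflects coaching rather than intelligence.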

If you drill on vocabulary, tune your writing to line up with the scoring methodology and are familiar with the structure of math problems asked, you're golden. But knowing those things doesn't grant you greater general intelligence.

I don't think NACAC is equivalent to the API. It's not like college counselors are the ETS—in fact, many counselors complain about how much some students focus on the SAT.

As for your anecdotal evidence, it sounds pretty flawed to compare results on the PSAT to the SAT directly—I also got a much higher score on the SAT without doing any studying at all, probably because they're scored and weighted entirely differently. Moreover, if this comparison was done over a year (e.g. sophomore to junior year), the results are likely even more flawed—there's too much confounding development in that year to attribute the increase to SAT prep.

Anecdotally, I know I did much better than all of my friends who spent months studying for the SAT and drilling on vocabulary, math, and strategy. If the SAT is so game-able, they should have overcome me.

I interviewed over a period of three months in 2007. At the end of the interviews, they said I did great. The recruiter called me and said they would be extending me an offer the next day.

I was ecstatic and told my family and was planning on how I would tell my current employer.

The next day the recruiter called me and said that they were rescinding this and wouldn't be offering me the job. He said he was not given a reason why.

Then, over the next ~4 years, they called me no less than 4 times for the same damn position. I had to remind the recruiter about the first interview experience, after which they said OK, they didn't want to continue the process.

After the 4th time, I angrily said to the recruiter, "You have called me for this position 4 times! Either give me the damn position or delete my damn phone number from your system!!"

They haven't called me since.

I was worried about this as well (to a point that when Google reached out to me, I told them thanks but no thanks, given all this). They eventually wore me down and the process was actually quite painless. I have been here for 3 years now, quite happily.

They wore you down?

They tried to wear me down, to the point of contacting me multiple times a week, via every means of contact possible, for weeks on end, until I threatened to take them to court because they were harassing me after I asked them to stop contacting me.

Every single one of those contacts, by the way, was by a used-car-salesman type of "recruiter" who would blow smoke and not answer questions, etc.

There was no way I was going to ever work for google because of their lack of ethics... but this didn't matter to them. And their persistence despite being told that only confirmed the view.

Edit: Sorry my experience doesn't fit your desired view of Google. But if pointing out facts that don't fit someone's ideology gets me constantly downvoted here, what is the purpose of participating in this site? Is this only for circle jerks?

I was mightily tempted to downvote you, not for the substance of your comment but for the complaints about downvoting. Up until the complaining it was interesting.

They responded to all my questions as honestly as they were able to.

I wonder when these "experiences" happened. I went through Google's process this semester looking for an internship position and it was no different than the dozen other companies I've gone through. Things might have changed since the days of random Fermi questions.

Edit: correspondence with the recruiter was no less timely than it was with other companies, and the only major difference I can think of is the host-matching process we went through.

Maybe they have. That's exactly why my comment was that it's too bad Google has developed this reputation.

Yes, I was confused because that's how your comment starts out - it sounds like you're looking at this reputation like it's a thing in the past. Then you go on to say you'd never apply because you "know exactly how that process would go down." Which sounds like something you're thinking now and even in the future. I'm getting downvotes so maybe this makes no sense to anyone else but I don't see why you'd say "it's a shame" they've ruined their reputation in the past and then go on to say that you're sure things are going to be the same now and in the future.

By the way, since I'm getting the downvotes already anyways, I might as well say that you sound a little salty. I'm sure no one bases hiring decisions entirely on what university you went to.

Furthermore, all of the news articles we read love emphasizing the weird questions you might get once in a blue moon, the quirky perks you get for working there, etc. but in reality I doubt the interviews and work experience are much different from any other company (I asked my interviewers about this as well). Just some food for thought before you go accusing them of having "developed an overwhelming reputation" - a lot of this is the result of media hype, not their practices per se. Even though I agree that being employed at Google is probably overhyped, there's no need to be so antagonistic to the company for it.

To be clear, I'm just not sure what it's like there now. Maybe it's still bad, maybe it's improved a lot. Probably it's a bit of both, depending on who you encounter.

My point is that even if they have improved (maybe they have!), their past reputation makes me reticent to even bother seeing if they have.

> I'm sure no one bases hiring decisions entirely on what university you went to.

Former Google managers have explicitly told me that this filter is used for some positions at the resume screening level.

Sorry if I sounded salty. I have a great job and probably wouldn't work at Google even if they offered—just trying to frame the common perspective.

Facebook is still like this. They ask all the dumb questions, like how do you detect what color hat someone is wearing. Then if you don't do well, they won't let you know.

Was this your experience? I just went through an SDE loop last week (still waiting to hear the results), but I got no "aha" questions. 2 strictly coding questions and 1 open-ended design question (with pseudo-code). It was very straight forward.

I did not have at all this experience interviewing @ Facebook for a mobile position.

Questions were reasonable and the interviewer gave nudges / prompts when it was obvious you were stuck on something.

They really seem to take recruiting very seriously and it showed.

How do you detect what color hat someone is wearing?


Sadly not AI detecting hats and then finding colours based on lighting of faces and rest of scene.

My response would be: Let me take a selfie.

> 1) Most likely result: I never hear back at all, because I didn't attend Stanford/MIT. (Never mind that I got accepted but didn't attend due to better financial aid elsewhere.)

> 2) If they do decide to interview me, I won't hear about it for months.

My gut feeling is that with the rise of Facebook, Google has really changed its hiring practices. Now they don't want to lose out on good talent to FB; and dicking around for months is a sure-fire way to lose that talent, because FB sure doesn't.

So, I'd like to think this article is more about hiring practices than it is about Google, but I'll bite.

>> 1) Most likely result: I never hear back at all, because I didn't attend Stanford/MIT. (Never mind that I got accepted but didn't attend due to better financial aid elsewhere.)

I didn't go to college at all, and I've been invited to interview there by engineering managers. If I didn't have absolute golden handcuffs at my current job, I might have considered it.

Furthermore, I know plenty of people who haven't gone to any college and work at Google. If you don't hear back, it has nothing to do with the college you went to. Even if a single interviewer had this bias against you, the larger hiring committee would review each of the interviews you had with different people.

To name a couple of examples - David Byttow, founder of Secret, was hired as a Software Engineer at Google without any degree. Michal Zalewski is in an engineering director role without a degree. Once you reach the interview, the sole reason for a hire/no-hire decision is how you did in the interview process. Literally nothing on your résumé will disqualify you at that point.

>> 2) If they do decide to interview me, I won't hear about it for months.

You won't hear about the interview date for months? I admit this can be frustrating for large companies, but no one I've spoken to told me it literally lasted months. The longest I've heard of was a month and a half, from first interview to hearing news.

How are you submitting your résumé? Are you just applying to the job and hoping for a recruiter to pick it up, or actually emailing a hiring manager at Google? If you apply without trying to get the attention of a decision maker it always takes longer because that's how the process works at large companies.

>> 3) I'll have to drill on data structures and efficiency of sorting algorithms I've literally never implemented outside of an interview or academic setting.

While algorithmic questions are asked, they do not make up the entirety of the technical interviews at Google. Generally speaking from what I have seen, questions are not "implement a linked list", questions are "implement this program" and you are free to choose which algorithm is best. However, I acknowledge that interviewers might not always follow this.

>> 4) Even having drilled on those arbitrary questions, I'll likely fail the interview because after a multi-day gauntlet of questions a random engineer didn't like the particular assumptions I put into my Fermi model of golf balls.

Fermi questions are actively discouraged by the Hiring Committee, and questions involving these will be thrown out when evaluating an interviewer's analysis of a candidate. As someone elsewhere in the thread mentioned, this would involve a strongly worded email from the Hiring Committee afterwards.

The first thing to understand about Google's hiring process is that you are interviewed onsite using five or so 1:1 sessions. The interviewer has to perfectly transcribe everything that happens and needs to be able to explain their decision to the Hiring Committee. The Hiring Committee will not consider answers from an interviewer like, "He couldn't tell me how many stoplights are in Los Angeles" or answers like "He didn't give a good enough answer about why he wants to work here."

I realize this seems like "my anecdata is better than your anecdata", but what you are saying is actively negative and contrary to a lot of the literature about Google's hiring practices. I'm not saying their hiring practices are perfect - far from it. But you seem to be ascribing malice to their methodology when it's really not the case.

> Furthermore, I know plenty of people who haven't gone to any college and work at Google. If you don't hear back, it has nothing to do with the college you went to.

The author of "Cracking the Coding Interview", who worked at Google, has publicly said that, at least in her hiring committees, your education and particularly the prestige of the school you went to was very important.

That book was published four years ago; that's plenty of time for things to change, and there have been news stories that Google is putting less weight on academic performance because the data shows it doesn't matter that much. (http://www.nytimes.com/2014/02/23/opinion/sunday/friedman-ho...)

> said that, at least in her hiring committees, your education and particularly the prestige of the school you went to was very important

In all of the hiring committees I've participated in I can't remember a case where the candidate's school was a significant factor in the hiring decision. Even for new grads.

It's possible that it's given more weight in the pre-screening process, but once you get to interviews it just doesn't matter much at all. At least that's my experience as both an interviewer and a hiring committee member.

I can't imagine that'd be true for someone who has been around the block. I'm pushing 40, I sincerely doubt they'd care about where I did my undergrad.

There are all sorts of reasons why they wouldn't take me, but there's no way that's one.

Google is looking for people with interesting stories. They're big on diversity. A straight-laced bloke like me from a regular family hasn't a chance without that MIT/Stanford sheepskin. But if you were adopted into a trisexual human/Klingon line marriage... well, Google could use someone with your fresh perspective, even if you only went to clown school. Hell if you went to clown school you're a shoo-in.

Want to work at El Goog? Cultivate as much weirdness as you can without becoming a criminal or jeopardizing your competence.

Is anybody else annoyed by these kinds of articles confusing Fermi problems (how many piano tuners in Seattle or golfballs in school bus) with Brainteasers (why manhole covers are round, or crazy questions about perfectly rational guys on an island wearing hats)?

It seems to me that those types of questions are very different in terms of what they're testing for. Fermi problems in particular show how you might go about approaching a problem, and there may be many different correct answers or ways to approach it, and the goal isn't to get to the "right answer".

Brainteasers, in contrast, either test whether you can recall or figure out an extremely specific problem, or whether you're good at solving certain types of logic puzzles under pressure.

It may well be that both of these types of questions don't give useful information about a candidate, but they are vastly different overall.

What annoys me is articles perpetuating the myth that such questions are still asked at technical interviews.

Google banned them from its process ten years ago and most large companies did as well.

Definitely not a myth at all.

Disclaimer: recent Google interviewee.

The person who interviewed you is going to write their brain-teasery questions into their interview feedback and get a strongly worded email from the Hiring Committee.

Statistically, across multiple interviewers, Google does not allow these questions.

The definition of brain teaser is pretty vague. Many algorithmic interview questions are of the you-need-to-have-seen-it-before variety.

This is certainly true; the optimal method for finding a cycle in a linked list is unlikely to be a solution you'll stumble on during an interview.

Yeah but just having two pointers and incrementing one by 2 is easy enough.
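For anyone following along, the two approaches under discussion can be sketched roughly as follows. This is an illustrative sketch in Python (both the naive visited-set version and Floyd's two-pointer trick), not anything from an actual Google interview:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def has_cycle_naive(head):
    # Naive approach: remember every node visited. O(n) extra space.
    seen = set()
    node = head
    while node is not None:
        if node in seen:
            return True
        seen.add(node)
        node = node.next
    return False

def has_cycle_floyd(head):
    # Floyd's tortoise-and-hare: advance one pointer by 1 and the other
    # by 2; they can only meet again if the list loops. O(1) extra space.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

The two-pointer version is exactly the "incrementing one by 2" idea above: easy to code once you've seen it, much less likely to be invented cold in a 45-minute slot.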

You actually indirectly highlighted one of the problems with these types of "brain teaser" questions.

Condition 1: If you are actually as brilliant as Donald Knuth, and independently derive Floyd's cycle-detection algorithm, then obviously you must have cheated, because only Donald Knuth could have come up with that sort of thing in 20 minutes.

Condition 2: If, on the other hand, you aren't brilliant like Donald Knuth, you'll probably come with the naive solution using a visited data structure of some sort, in which case you're stupid because you can't come up with the optimal algorithm.

In either case, you bombed with that interviewer.

Condition 3: Cheat. Do the naive algorithm first, then have an "ah-ha" moment that magically gives you the optimal algorithm, because you actually knew it beforehand. I suspect, but can't prove, that some hires get in this way. During my time as a PhD researcher studying deception under a related NSA grant, I routinely found that a) people are horrible at lie detection, and b) people greatly overestimate their ability to detect lies.* The perfect way to game the system!


Condition 4: Inform the interviewer that you're aware of the cycle detection algorithm, and get another brain teaser that reduces you to Condition 1 or Condition 2 (and if less than ethical, Condition 3). Oops.

Ideally, you want interview questions where the candidate can start at Condition 2 (the naive solution) and, without deus ex machina, eventually reach Condition 1 (the optimal one), perhaps with the interviewer giving some hints along the way. Better still is to start with a problem that has a reasonable naive solution, and then slowly modify the problem specification for increasing complexity ("Now pretend this is an arbitrary graph instead of a tree, what would you have to change?").

Finally, Google maintains a list of banned questions that includes such brain teasers (technically, they maintain an entire question pool), but unfortunately, interviewers don't seem to check it frequently enough, and so brain teasers persist (even in 2015).

* If you're fascinated by lie detection, start with scholarly publications from Aldert Vrij, and work from there.

Maybe to solve optimally, but most algorithm questions should be able to be answered with the knowledge attained in a Data Structures and an Algorithms course.

Was this for a software engineering role? I've never been asked this, and I'm pretty sure during interview training you are explicitly told not to ask those sorts of questions.

No, not software engineering - product management. That said, the position in question was still of a fairly technical nature as far as I understood it.

Sure, but coding doesn't apply to the position you applied for. Fermi questions are a good way of determining cognitive ability and problem solving, especially when coding or algorithmic questions aren't appropriate. I've gone through both PM and SWE interviews at Google, and am currently a SWE and interviewer. My PM interview had a Fermi question, and I thought it was enjoyable and appropriate for the position.

So I'll try to respond to this with as little bias as possible considering I was an interviewee and you have self-identified as an interviewer.

I am not debating the utility of Fermi questions, I can see how they might be useful and/or might be harmful during the interview process. My statements have been simply that my experience differed from what others have been saying, in that I definitely had that type of question during an interview with Google, so clearly they cannot be "against the rules" or anything like that.

That said, these types of questions are a bit like any standardized test (such as the SAT/ACT, etc.), which may or may not be strong indicators of cognitive ability/problem solving depending on who you ask. I think there is enough controversy over standardized testing to be able to at least say that solely relying on such methods, especially in a high-stress situation or even due to cultural differences, might come with some drawbacks and not be an accurate indicator for all candidates.

Lest I forget, Google is a business, and if such tools are what help Google find the candidates it wants, then so be it. It might also be an indicator to candidates about what kind of organization Google is. As a business, the organization will usually prioritize its desires/needs/benefits over those of the candidate - it's not a charity, and I get that. All I am saying is, it may just be that they are excluding certain diversity or individuals unnecessarily without realizing it. Perhaps that is the motivation behind the reported change in attitude towards such types of questions, I'm not sure.

It may or may not be that coding skills were relevant to the specific position. That said, in my experience, product management in software companies in particular is not stovepiped in such a way that you need not have any experience in coding. In fact, I think that some of the best product managers in such companies have coding experience, business experience, hardware/software/etc., and/or cross-disciplinary skill-sets. Perhaps such strong candidates don't fit the standard model, I'm not sure.

Which is why I specified "technical" interviews.

Brain teasers are not frowned upon as much for non-engineering interviews, but they are absolutely banned for engineers (source: former Google employee with 300+ interviews during my time there and also former hiring committee member).

Can you describe the differences in product management positions at Google? Which ones are technical and non-technical?

No, unfortunately, I can't give you a useful answer here. It seems that, in the first rounds of interviewing, you might not be interviewing for a specific position initially, rather, a function/role (for example, product management or software engineering).

In my case, I was clear that I was unable to relocate, which left only one specific nearby position as a possibility. The reason I thought that specific position had technical responsibilities was the description of said responsibilities in the job posting. Also, I was told that I was contacted by Google in large part because of my technical background, but during the interview, that background was not discussed or explored.

Sorry I couldn't be of more help!

In general, our product managers are technical -- almost all have a computer science degree, many have worked as engineers. When going through the initial interviews you are usually not interviewing for a particular position, but rather as a "generalist."

After the hiring committee has decided whether or not to hire you, your specific background will be matched with specific openings around Google. Naturally, some products require less technical expertise than others.

In the article they say that despite being banned, they still sometimes show up in their own Google interviews.

Yes, they are different.

What I think ends up happening with those Fermi problems (or other problems) is that the interviewer knows one specific answer and then ends up rejecting alternative answers.

That is true of a surprising number of concrete-seeming programming questions, too, which is another reason you should standardize your questions.

I know Google has interviewers copy down character-for-character what is written on the whiteboard for their whiteboard programming questions. Presumably this is so that their hiring committees can dispassionately review what the candidate produced, instead of relying on the interviewer to report success or failure.

Yes, but for something slightly more complex than a "fixed answer" problem I would expect some flexibility in the answer.

(Especially because of limited time in an interview and the candidate is under pressure)

Which is why they could be completely different from brainteasers in principle, yet end up the same in the form used in interviews. People are very, very bad at evaluating on the spot whether a solution and thought process that differ from their own are actually valid. Usually they are, but the reaction is: "Nope, that's not how you do it. What was he thinking?!"

You'd have to capture the answer and then study it later, possibly multiple times, to overcome any "Hm, this doesn't sound right, why didn't he think of x, y and z" feeling and understand that x, y and z are just your bias, not some kind of "correct" part of every answer. I've never seen that happen.

It looks like there is a trend starting in the software industry to encourage empirical hiring. This particular article follows about a month after another hiring post reached the front page of HN.[1]

I really agree with it, and I hope more people are looking at it seriously instead of fixating on the fact that the word "Google" is in the title. Giving all the candidates the same questions and the same exact interview methodology is much more fair and empirical than simply having an interviewer wing it (which is virtually certain to bring in bias). Most interviewers I know think they are better than the average interviewer due to illusory superiority cognitive bias [2]. However, when it comes down to it, you cannot easily judge the difference between candidates if you ask one a completely different question than another. This goes against all the principles of psychometric testing, yet it is still ubiquitous because no one has bothered to empirically look at whether or not they're really interviewing in a rigorous way.

There is a serious issue in the industry right now where otherwise capable people fail interviews due to their appearance, manner of speaking or other harmless idiosyncrasies. It's because interviewers are very personally attached to their subjective methods, and they tend to really enjoy having personal ownership over the interviewing process instead of surrendering control to a standardized script. This trend looks like the software hiring equivalent of a professor grading papers without reading the name attached to the paper: if we can have several candidates answer the same exact questions and perform the same exact activities in an interview, it makes it much easier to determine who is the real "best candidate" when it comes time to compare their results.

If this really takes off, the only remaining problem as I see it is designing interviews that accurately correlate to the job activities.

[1]: http://sockpuppet.org/blog/2015/03/06/the-hiring-post/

[2]: http://en.wikipedia.org/wiki/Illusory_superiority

From their article it looks like what they are doing is pseudo-science at best. Hopefully not harmful pseudo-science, but it could be actually harmful. If one is not careful enough it is rather easy to replace more-or-less working common sense with some horrible pseudo-scientific approach. For example, optimizing for 'individual employee performance numbers'? Without consideration that, say, adding yet another employee with 'a good number' may actually diminish the total sum?

Regarding prediction of 'employee performance numbers' based on 'interview scores': it is not surprising that the result is completely random. As far as I understand it, in big companies performance numbers don't show the actual contribution of an employee. At best they show "how this person was able to use the resources around them to achieve goals" - note, mostly by drawing on the human resources of informal social networks. But usually performance numbers are just random. And at worst they could be negatively correlated with actual contributions.

From the article: "A good rule of thumb is to hire only people who are better than you. Do not compromise. Ever."

Is it only me who sees this as an unsustainable goal that will likely lead to idealism-driven results?

When interviewing, I tend to be impressed with folks who seem better versed and smarter than me, but realistically I also feel there's a potential risk that those folks may not be sufficiently challenged in our organization.

Wouldn't the (unrelenting) drive to find people who are "better than you" lead to some of the problems in bias-matching and with first impressions dominating your perception?

Sounds a bit dogmatic and contrary to the entire focus of the article which espouses a more rigorous, measurable and evidence-based approach to hiring.

Here's a mathematical model of it:


The reason it converges to some relative percentile (apparently just under the 90th) is because of the noise in the interview process. If people could reliably measure the quality of an applicant and consistently hired only people better than them, they'd only hire the very best (100th percentile) candidate, who probably wouldn't want to work there. But because people's judgments are off, they end up getting folks who are good, but probably not absolutely best - but they also avoid the bozos, because it's pretty unlikely that a bozo would appear better than half the people you know.
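As a rough illustration of that kind of model (not the parent's actual model; the noise level, pool size, and seed here are made-up assumptions), a quick simulation of "hire only candidates whose interview score beats the current employees' average" with noisy interviews:

```python
import random

def simulate(n_hires=500, noise=0.15, seed=0):
    """Hire candidates (true skill uniform in [0, 1]) only when their
    noisy interview score beats the current employees' average skill."""
    rng = random.Random(seed)
    employees = [rng.random()]  # seed the company with one random hire
    while len(employees) < n_hires:
        candidate = rng.random()                     # true skill
        perceived = candidate + rng.gauss(0, noise)  # noisy interview score
        if perceived > sum(employees) / len(employees):
            employees.append(candidate)
    return sum(employees) / len(employees)

print(simulate())  # mean true skill: well above 0.5, but never reaches 1.0
```

The bar ratchets up as hires accumulate, but interview noise keeps letting in good-not-perfect candidates, so the workforce settles at a high percentile rather than the 100th.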

The rubric presented "hire above the company mean" doesn't align with the dogma in the wired article "always hire people better than you, don't compromise". It's the latter I question.

Also the assertion that "it's pretty unlikely that a bozo would appear better than half the people you know" sounds just full of assumptions that may not be valid.

Just thinking from a non-Google point of view, not everyone wants to work at new_untested_startup or boring_postIPO_company. It seems unrealistic to a) set hiring targets and b) keep raising the bar unless you have incredible cachet and a large pool of candidates to work with.

So what does the rest of the industry who don't have that luxury do?

>I also feel there's a potential risk that those folks may not be sufficiently challenged in our organization.

I think Google is unconcerned with that option. They can get amazing candidates to do run-of-the-mill work by simply paying more and promising more than other companies. For them, keeping a great talent from joining a different company is potentially worth the increased compensation.

> For them, keeping a great talent from joining a different company is potentially worth the increased compensation.

This is logically true, but it's hard to imagine how the human interviewers/hiring-committees would consciously consider that.

>A good rule of thumb is to hire only people who are better than you.

Reminds me of another adage:

>Never be the smartest person in the room.

I interviewed with Google several times over the last decade. The last time I had more than a dozen interviews, including two visits to their campus ... and they took about 6 months to get back to me with a decision ... which was no.

Can't pretend I was happy with the process. Sort of got the sense that I was a replaceable commodity, which I'm sure I am. They didn't particularly care about me. I wasn't applying for a technical role, and I probably had the same qualifications as dozens of other candidates, so they really didn't care about what I thought about waiting for months at a time with radio silence.

I'm at a start-up company now, and very happy. I'm working on a Google-X style moonshot, and I know if I was at Google I would have no chance of working on one of their Google-X projects, because everyone at Google is trying to work on one of those.

I also get the sense, based on stories, that there is a lot of politics now in Google (as there must be in most big organizations), and so somebody with no political skills, like me, is better off in a start-up.

Now, if only they gave me feedback on what I did wrong - that would be more useful, I think. As it stands, however, that is calling for a lawsuit. Sigh. Catch-22.

When I was in college, the #1 source for interview feedback for me was when my friends and I would get back from on-campus interviews and "compare notes". Afterwards we'd practice interviewing and being interviewed by and solving problems together.

Getting a "reason" from a prospective employer isn't really helpful; it's usually not "oh if you knew what a skip list is then you would have gotten the job". It's more, "are you a better fit than the other candidates", and actually talking to them can help you gauge your strengths and weaknesses relative to them.

Laszlo Bock talked about that in an interview in The Guardian[0]. Here is the quote:

> After six weeks of this, 99 are rejected. They’re not told why. “If somebody just breaks up with you,” Bock says, “that’s not the time to hear: ‘And really, next time, send more flowers’… For the most part people actually aren’t excited to get that feedback, because they really wanted the job. They argue. They’re not in a place where they can learn.”

0. http://www.theguardian.com/technology/2015/apr/04/how-to-get...

Just stop replying after you tell the candidates their deficiencies wrt the position. It's not that much different than what companies do now where they stop replying immediately after the interview. Even if the candidates are in denial and argumentative at first, they will have time to reflect on a specific issue. If 9/10 employers tell you that your breath is terrible, maybe it's time to investigate your morning bathroom routine regardless of how perfected you think it is now.

Minimal upside, lots of downside. Most hiring managers (reasonably) are vigilant about avoiding post-interview drama, which is what this is an invitation for.

I understand. During my job search I was getting pretty frustrated at the little feedback I received during the interview process. Some interviews were more qualitative "we want to watch your coding process" where there was no feedback from the interviewers and a stone wall from the company after the interview, which led me to question whether or not I actually knew the skills on my resume as well as I had represented them. (As a fresh college grad, I didn't, but it wasn't obvious at the time.)

Facebook gives feedback to interview candidates. Even if the candidate is in a bad mood when the bad news comes, the information is still valuable weeks and months later after Google is gone.

I got extremely detailed feedback from Facebook, so this isn't unheard of.

Someone should start a consultancy similar to Pivotal, where they come in and teach you how to interview. Most SMBs are terrible at it.

There are also different 'best' engineers for the role/company. A company may say they want the 'best' but they really want an engineer that is going to stick around and solve their boring problems in boring ways. Even the edgiest startups likely have mostly boring problems to solve.

I see it as sort of a moneyball situation. You're looking for value. You can spend a lot of money and time searching for that unicorn 10x engineer when you could have hired 2-3 3x engineers and overall spent a lot less time and money and got your product out the door months earlier.

Isn't this the Nth PR article about Google hiring practices? To be honest, Google's technical hiring bar is no different from FB, Twitter, MS, Amazon, LinkedIn, Dropbox, etc., but they are the only company consistently coming out with PR fluff about hiring the best. This is not only disingenuous but frankly (to put it crudely) makes Google look like an attention-mongering whore. I guess the Google PR machine has to keep this show going to keep mind-share among potential recruits.

I can give you Facebook, Twitter, LinkedIn and Dropbox. But, there's no way Microsoft and Amazon have the same hiring bar as Google. This is based on my own personal experience interacting with them and what I've heard from friends.

Would be very interesting to hear your take on the differences...

The bar is lower or higher for MS/Amazon?

The problem, however, is that most standardized tests of this type discriminate against non-white, non-male test takers (at least in the United States).

Can anyone explain this? I don't understand the explanation that follows.

The famous example of this is the so-called "Regatta Question" in which an SAT question assumed a knowledge of Crew -- an elite sport that would have been a rather obscure reference for minorities from the inner-city. [1]

1: http://articles.latimes.com/2003/jul/27/local/me-sat27

Isn't this more an issue of class than race, though? Even with the argument that members of a certain race tend to belong to a certain class, it doesn't change the fact that it's still inherently a class and environment issue.

Like, from your post, I would say that Crew is an obscure reference for anyone living in the inner-city, not just minorities.

In this specific instance, it's really excluding all but the elite. I remember learning what crew was for the first time while visiting colleges, and I grew up in middle-class suburbs.

The idea is that if something negatively affects protected group(s) disproportionately, then it doesn't matter whether the mechanism by which it does so is via class -> strong race/class correlation. It's still discriminatory.

I don't think anyone was arguing that they weren't discriminatory, or at the very least I wasn't trying to argue that. The claim was that it was discriminatory towards non-whites. I'm claiming it's discriminatory to anyone regardless of race, and instead it's more dependent on their income level and the environment they grow up in.

I'm a white male but I know that personally I had no clue what Crew was during my high school years. Anecdotally, I went to school in an area where the public school population majority was, well, what we consider the minority when discussing race relations.

Also, although racism is quite possible, the lack of social mobility may explain why race and social class are closely related (i.e. if you inherit both race and social position we'd expect a society to look like this)

That would assume that race and class can be entirely separated. I'd say that they cannot.

But even if they cannot be separated (a conversation for another time) these questions are still difficult for anyone living in the inner city, especially those in the lower income class, not just those who are minorities, wouldn't you agree?

One very old question does not prove anything about the actual cultural bias of the SAT, now or then! Research on this topic mostly shows that cultural bias is not a serious problem: http://lp.wileypub.com/HandbookPsychology/SampleChapters/Vol...

Only if you stretch the definition of "white man" to include Asians, who actually tend to out-perform whites on standardized tests. But of course it actually has nothing to do with race and everything to do with having parents who value education above all else.

Consider, some groups are more willing to guess when given incomplete information and the SAT promotes guessing. If there was a larger penalty for wrong answers other groups would do better. Another issue is retaking the SAT tends to increase scores as you keep your best individual score.

Individually adding ~30 points might mean little, but when you look at large groups of people such small changes become important.

I don't see what's wrong with the guessing mechanism.

If you cannot eliminate any answers, then your average expected score gain will be the same as not answering.

If you can eliminate choices, then your average expected score gain will be proportional to how many answers you've eliminated.

If a group fails to realize this, I would say the SAT was successful in measuring their cognitive abilities in this regard.

Is the SAT about [whatever it's testing], or is it about how good you are at estimating your confidence in your partial guesses?

The penalty is designed so that you will not gain more points than you deserve if you guess and had no idea which answers were wrong.

If there are 5 choices and you randomly guess, you have 4/5 chances to get -1/4 and 1/5 chance to get 1 point. That means if you had no idea which answer is correct, you will net 0 points over the long run, and this will be the same as leaving the question blank. 4/5 * -1/4 + 1/5 = 0.

If you are able to eliminate 1 choice, you will on average net 1/16th of a point more by picking a random answer among the remaining choices (1/4 * 1 - 3/4 * 1/4 = 1/16). If you are able to do that, then you deserve the extra points, because it took more knowledge for you to eliminate it.
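The expected-value argument above can be checked with a quick sketch (exact fractions, using the 5-choice, -1/4-penalty scheme described in the thread):

```python
from fractions import Fraction

def expected_gain(choices_left, penalty=Fraction(1, 4)):
    """Expected points from a random guess among the remaining choices,
    with +1 for a right answer and -penalty for a wrong one."""
    p_right = Fraction(1, choices_left)
    return p_right * 1 - (1 - p_right) * penalty

# Blind guess among all 5 choices: nets nothing on average.
print(expected_gain(5))  # 0
# Eliminate one choice, then guess among the remaining 4: +1/16 per question.
print(expected_gain(4))  # 1/16
```

So the penalty makes a blind guess exactly neutral, while any elimination tips the expected value positive.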

Anyone that doesn't realize this deserves to have a lower score, and if you can't eliminate answers then you also deserve a lower score. Bringing in racial and gender biases into this is ridiculous.

It's a completely arbitrary choice that happens to benefit white males. Of course people are going to bring up racial and gender bias issues.

I just showed you it wasn't an arbitrary choice and that it has a mathematical basis.

Someone who has the ability to eliminate at least 1 choice understands the question better than someone who doesn't know what the question is asking at all, and is appropriately rewarded for it.

If they removed the penalty and automatically guessed for questions left blank, that would have the same effect mathematically, but women would score better. (I.e., blank questions would be worth 1/(number of choices) of a point on average, and educated guessing would improve your score on average.)

So, it is an arbitrary choice.

That's the most awful suggestion I've ever heard.

You want a test to introduce MORE randomness just to please 1 group of people on a test that is supposed to measure your math ability including the ability to understand their simple guessing penalty?

You want everyone to think in the back of their mind that they got screwed by the SAT's random number generator or that some idiots hit the jackpot and get a much higher score than they should have?

It all depends on where you're coming from. The SAT test designers might want to promote the idea that you should take and pay for the test several times to get lucky; I suspect this might be why they had pro-guessing scoring in the first place. They could also give you 4 points for the right answer and 3 for a blank question vs. 0 points for a wrong answer, which would penalize people who randomly guessed.

In any case this specific rule happens to benefit white men more than other groups so clearly that's going to bother people. As to the math idea, they score the English section independently so having people do math as part of the English section seems counter intuitive.

PS: I am a white male that happened to crush the SAT, but I also accept the test was biased in my favor.

Again you are completely missing the point. The SAT is not "pro" guessing. It is mathematically neutral.

If you get rid of guessing completely, then there's no way to differentiate between people who have no clue what the question is and people who actually have a clue. Giving 3 points for a blank question rewards people who are clueless and punishes people that aren't.

If you really did crush the SAT I am rather confused how you fail to understand any of this.

PS: this thought just ran past my mind: it seems like you actually think your scoring scheme is mathematically fair. Someone who is able to eliminate 1 of the 5 answers will average 1 point per question (4 points / 4 remaining choices). Someone who cannot eliminate any answers gets 3 points per question??? Where is the logic in that?
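The inversion the commenter is pointing at can be made concrete with a small sketch of the proposed scheme (4 points right, 0 wrong, 3 for a blank, numbers taken from the thread above):

```python
from fractions import Fraction

def ev_proposed(choices_left=None):
    """Expected points per question under the proposed scheme:
    4 for a right answer, 0 for a wrong one, 3 for a blank
    (choices_left=None means the question is left blank)."""
    if choices_left is None:
        return Fraction(3)
    # Random guess among the remaining choices.
    return Fraction(1, choices_left) * 4

print(ev_proposed())   # 3 -- clueless test-taker who leaves it blank
print(ev_proposed(4))  # 1 -- eliminated one of five choices, then guessed
```

Under this scheme, knowing enough to eliminate a choice and guess is expected to score worse than knowing nothing and leaving it blank.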

If you crushed the SAT they must have removed the probability questions these days lol.

is there any evidence for this theory or did you just make it up?

From a quick google:

"We find that when no penalty is assessed for a wrong answer, all test-takers answer every question. But, when there is a small penalty for wrong answers and the task is explicitly framed as an SAT, women answer significantly fewer questions than men."


Retaking the test: http://philvol.sanford.duke.edu/documents/SAN01-20.pdf

PS: Making an unbiased test is really hard; the SAT comes reasonably close, but it's not there.

The core of it (as I understand it) is that these standardized tests are written by groups of white men. Thus, the "standard" they are working to is, in fact, not all that standard, but is actually a self-replicating formula that makes sure that the next series of people to create the standardized tests are also white men.

Pragmatically, this conclusion comes from looking at how various demographics do on the SAT, versus how they perform in college after being accepted. Once you establish that this is true (since performance in college is the "ground truth"), then you can go back and look at reasons WHY this happens.

Put another way, the discrimination is the fact, demonstrated by a disparity in prediction vs. outcome. The reasons why this happens are hypotheses.

I don't think anyone is going to touch that; I will point out, though, that one wouldn't expect the SAT to predict college performance, because it is used to sort people into less and more competitive schools, and within those schools people select their majors (and some majors are more cognitively demanding than others).

Some of it is the guessing that other people mentioned. Some of it is vocabulary. Since there are correlations between race and wealth, certain words (say Commodore) may be more quickly understood by some than others.

My biggest fear is that this article and this book leads to people blindly copying the wrong parts of Google's hiring process for the wrong reasons, the way 20% is treated in this Dilbert comic:


Google interviews a lot of people in any given week. Not everything in this article applies to Engineering interviews; it's a broad overview. The way the interview process is designed needs to be interpreted in that context.

What I'm really excited about is patio11 and tptacek's Starfighter. I really want it to be the "Khan Academy" of interviewing -- best-in-class, various progressions, and really good suggestions on what puzzle to tackle next. Up until now, I've been directing people at the USA Computing Olympiad's training server, but its one-size-fits-all approach doesn't resonate well with people who don't have confidence in puzzle-solving (and give up on the first problem) or people who don't have the leisure of n years of high school/college to work through all the problems (e.g. women who are going through HackBright and similar accelerated learn-to-code programs).

The desire to create an 'objective' measure for programming capabilities is really sort of ridiculous. Given how much of a team sport software development is, it completely underestimates so many of the key factors that go into being an effective developer.

Unfortunately, as developers, we're keyed in toward quantifying everything, even if it's to our own detriment.

I cut my google interview short when I discovered that their process was to interview me, then bin me, then pick who I'd be working with after I'd gone through a pretty arduous process. I told them that I was interviewing them as much as they were interviewing me, and I had no interest in 'getting the job', then being told after the fact who I'd be working with. The idea that I'd keep interviewing them without even knowing who I'd be working with just seemed absurd to me.

The idea that there is one single "objective" interview process is ridiculous. The idea that we should be vigilant about bias, subjectivity, and nondeterminism is the opposite of ridiculous.

I fully agree! What I see is a rush toward quantifying everything that's quantifiable and discarding everything that isn't, which is terrible.

For what it's worth, if you really wanted to interview Google, and you were in college, an internship is the way to go. If you get considered for full-time after an internship they usually give you the option to go back to the same team.

But if you already have been in industry a while, then you don't qualify for an internship and that's a bit broken.

Just to be clear: there are three of us; Erin and I started the firm.

Everyone claims they want to hire the best, but their hiring process is all about filling the pipe with unwashed masses and then putting them through a grueling filtering process.

All of these companies, including Google, are following a silly, company-centric process. Putting junior engineers in there to make candidates jump through hoops to get a job? Why are you even bringing in people you don't already know can do FizzBuzz? Bring in people who couldn't have the resume they have without being decent programmers. Google is doing cattle calls? Seriously? That reflects badly on them.

You should have senior people review the resumes. They should be able to tell from the resume whether the candidate is a good fit or not. Seriously. I can. Bring them in, spend the interview time talking to them. Ask them about a project they are proud of or liked or was challenging and get them to explain something technical to you. That's all it takes.

Then spend a significant amount of time selling them on your company and why they should want to work there. They should be asking you as many questions as you're asking them!

I do like to ask a little brain teaser, but it's relatively quick. If you're making them write code, you've failed. I'm dead serious about this. I've hired a lot of people, never asked them to write code, and had them turn out to be great hires. I've never hired someone who couldn't code.

I've seen people lie on resumes (I actually got a resume from someone who claimed to be on a team I'd led, but he hadn't been!). It should take very little time to figure out whether they're lying on their resume.

Cultural fit is very important, but people apply that wrongly. They seem to think "I'm a nerdy white male who hates the new star wars trilogy, so they should too". Wrong. Cultural fit is about finding the guy who will show up to help you move without being asked simply because you mentioned you were moving and he's the kind of guy who jumps in and does shit like that. The kind of person who is brilliant but also able to communicate those brilliant ideas with others without it always being about drama. The kind of person who has enough backbone to improve the final product. The kind of woman who takes bugs and gets them cleared even though she could have reassigned them to someone more appropriate, simply because she knows that other person is overloaded.

You don't find that on a white board.

I would agree. I recently went through a phone interview with $BIG_COMPANY. I'm not actively looking for a job, but the opportunity was interesting (I'd be able to learn some things that aren't possible at the small companies I've worked at for the last decade).

I had explained to the recruiter that I had spent the last several years programming almost exclusively in C on embedded systems. The call (~45 minutes) was spent doing a programming exercise involving string manipulation, where the interviewer was essentially silent while I dealt with the details of getting string manipulation in C correct (the actual problem was trivial from an algorithm standpoint, and could be banged out in Python or a similar language in ~10 minutes with access to a REPL to check the details). I wasn't about to try to remember the proper syntax for another language on the fly (especially since the interviewer didn't want me to use anything other than a shared text buffer during the interview), and didn't have the prototypes for various string-related functions memorized, so I'd bet I came across as incompetent in the interviewer's eyes.

Never mind the fact that they could have asked interesting questions about how I wrote a rather complex piece of an IP stack from scratch recently, and successfully deployed it to various customers.

I suppose if you have a big enough candidate pool and a reasonable compensation package, you'll find someone acceptable with this approach - but you'll spend a lot of interviewer-hours doing it. To be honest, I don't have anything against them, but I'm unlikely to accept another interview if their recruiters call again.

I feel like everyone obsesses over google's hiring practices. I hesitate to be negative over this, but I am so incredibly tired of it. To my mind there's no point to any of this.

Just about everyone reading these articles will never work for google. I'll never work for google. Anyone qualified to work there doesn't need these write ups, and for everyone else, myself included, it won't matter.

Have they changed this process recently? It was only 3 years ago they were trumpeting their brain teasers in the WSJ.


Yes, from a 2013 interview with Laszlo Bock, senior vice president of people operations at Google:

"We found that brainteasers are a complete waste of time." "They don't predict anything. They serve primarily to make the interviewer feel smart."


Well, I interviewed a couple of times for Google (2009 and 2012; they turned me down both times, at different stages of the process), and it seems like their process is, obviously, tailored for their particular use case.

They have a TON of applications, so they are more worried about filtering and discarding people than about reaching out to candidates, or even encouraging people to apply. Their offer (prestige, great perks, great salary, etc.) is obvious to start with, so they don't bother going after you. You are the one who should prove worthy and eager to work for them.

I guess that's similar to places like Ivy league Universities, etc...

That's ok. It obviously works for them. It's just that I think it's not the most common case for all companies.

Except they did "bother to go for" me, to the point where they didn't stop until I threatened them with a harassment lawsuit.

Google does not have a lot of prestige, they have a long record of questionable at best ethics.

However, people who are just out of college are much less likely to be aware of this, and thus more likely to apply.

Which means google does reach out to higher skilled, higher experience people like me.

They pursued me more aggressively than any company has ever in my career.

By the way, if you feel you need to prove worthy and eager to work for a company, then your esteem of the company is out of place. You are likely going to end up taking a worse job or taking worse compensation because you aren't valuing yourself highly enough.

So, the Senior Vice President of People Operations at Google thinks that:

"All our technical hires, whether in engineering or product management, go through a work sample test of sorts, where they are asked to solve engineering problems during the interview."

is a work sample test.

Does Google hire the best people?

We took that out of the title since it's so baity.

All: please let's not argue about that ludicrous title, which I'm sure the author had nothing to do with. At first glance, this piece looks a lot more substantive than the usual posts about hiring. Let's discuss the strongest bits. (Come to think of it, that's the Principle of Charity applied to articles.)

Not exactly. They hire the people who are best at interviewing at google, and they believe their process correlates the 2 things well. I'm not so sure it does, but I am not in HR/recruiting so I haven't studied the problem.

This ^. I'd venture to even suggest that this strict and narrow hiring focus results in many of the shortcomings of their products. Can't have it all, I guess.

Like everyone, they hire the best, the average, and the worst in some proportion. No matter how you interview, you really don't know what their contribution will be until X days after the hire date. And X is often a random value.

The title really needs a [citation needed] (Since "the best" was removed from the title, this is what it was referring to)

Best is really dependent on many factors

(And it's pointless to hire "the best people" then set them on boring tasks, which seems to happen a lot at Google)
