
> (ie, work sample tests), I do not understand why any tech company does developer hiring in any other way.

Because every method of hiring has its own biases and tradeoffs. Requiring work samples biases against programmers who aren't interested in doing a "homework assignment". For example, I don't enjoy whiteboard interviews but I'd rather write some fragments in front of the interviewer and "think out loud" instead of working on a multi-hour (or multi-day) coding project. Writing on a whiteboard is just not that big of a pressure cooker to me.

One could argue that the majority would prefer self-paced work samples instead of whiteboard interviews and you're optimizing for a wider candidate pool. That's fine but it's still a tradeoff. I think you're running a smaller scale boutique firm so work samples makes sense for you.

For a company like Google, which is more selective[1] than Harvard University, I don't see a compelling need to rely on work samples. That company is so desirable to programmers that any "work samples" Google tried to standardize on would get leaked and endlessly discussed on Stack Overflow (see FizzBuzz as precedent). If Google has to keep swapping out work samples to stay ahead of the street knowledge, it loses the ability to compare work samples across the years, which was one of the motivations for using that method in the first place.

At their scale and selectivity, "study your algorithms book and prepare to be grilled on the whiteboard" seems to work best for them. Yes, they generate a lot of false negatives, but that's offset by ... their scale and selectivity. They can afford the false negatives. If their screening methods were truly terrible, I don't see how the label "ex-Googler" would have the currency it does in the marketplace.


It's easy to be selective. I can ask all comers with a degree to roll three dice, and if they don't roll three sixes, I consider them unworthy. Only about 0.5% of candidates will pass. That's a pretty selective first screening, isn't it?
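The dice figure checks out; as a quick sketch, the pass rate is just (1/6)^3:

```python
# Probability that a candidate rolls three sixes with three fair dice.
p_pass = (1 / 6) ** 3  # 1/216

print(f"pass rate: {p_pass:.4%}")  # pass rate: 0.4630%, i.e. roughly 0.5%
```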

Selectivity is only good if it selects for traits you want, and, in practice, what you want is not a uniform skillset. So being selective for one trait, any trait, is already a bad idea.

Even the basics of the interview format make a difference. At other companies, all coding interviews are timed at 45 minutes or so, including the problem statement. It's a bit like going to an episode of Chopped. It's certainly selecting for 'can you think under pressure' along with 'are you friendly enough that your interviewer will help you when you are getting something wrong without docking your score'. But that's not good for a careful candidate who is trying to build sturdy, reliable code. So guess what: the company's codebase has few tests, things fail often when deployed to production, and reliability is a problem. It's not that they are willingly selecting against careful programmers; it's a consequence of the interview process.

The moment you are hiring a lot of people, and you think you have a single model that works, you are hiring from a cookie cutter, and you'll get gingerbread men.

The "no homework" thing is such a red herring.

Companies that do work samples minimize on-site coding interviews, because the whole point of offsite work sample testing is that most candidates can't provide an accurate picture of their aptitude in an on-site coding interview.

No interviewing software developer prefers an extra 6 hours of on-site interview to an off-site coding challenge.

The problem developers have with "homework problems" is really a problem with bullshit companies that won't provide timely feedback on their submission. Nobody wants to do homework that goes into a black hole, only to find out 6 months later that they were in consideration for the role. That's a problem with all hiring companies, and it is orthogonal to the structure of the interview itself.

The problem with take-home assignments isn't how long they take - it's how many you end up doing.

Resume-to-offer attrition rates are often as high as 99.9%. In-person interview-to-offer attrition rates are rarely worse than 90% and often in the 30%-50% range.

The thing about homework is that it costs companies basically nothing (one form email followed by an automatic grader in some cases), so they can afford to give it as early as possible - strictly by the numbers, this means offers cost on average 1000x8 hours. Whereas in the in-person interview case, the big cost is much later in the process, so it's more like 10x8 hours per offer.
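Plugging in the attrition figures above gives the asymmetry in candidate-hours per offer (a back-of-the-envelope sketch with round hypothetical numbers, assuming an 8-hour exercise in both processes):

```python
HOURS_PER_EXERCISE = 8

# Homework is given early, so ~1000 candidates do it per eventual offer
# (the 99.9% resume-to-offer attrition rate cited above).
homework_hours_per_offer = 1000 * HOURS_PER_EXERCISE  # 8000 candidate-hours

# On-site interviews come late in the funnel: ~10 finalists per offer
# (the ~90% interview-to-offer attrition rate).
onsite_hours_per_offer = 10 * HOURS_PER_EXERCISE      # 80 candidate-hours

print(homework_hours_per_offer, onsite_hours_per_offer)
```

The 100x gap in aggregate candidate time is the cost that falls on applicants rather than the company.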

Worse yet, this cost falls heavier on developers that have more trouble getting jobs, who are probably less likely to be able to weather that cost. Not a great situation.

>No interviewing software developer prefers an extra 6 hours of on-site interview to an off-site coding challenge.

Sure, Google/Microsoft have the legendary "6 hour" marathon interviews with multiple teams and lunch in the middle, but smaller companies don't have the bandwidth to mess around with all-day interviews. It's ~2 hours. This thread shows I'm not the only one who prefers onsite whiteboards over homework projects, so the "no developer prefers" claim is too absolutist.

>The problem developers have with "homework problems" is really a problem with bullshit companies that won't provide timely feedback on their submission.

Regardless of whether the feedback arrives within 10 minutes or 10 days of submission, if I apply to 4 companies, I don't want to do 4 homework projects. I simply don't. At least with 4 onsite interviews, I see glimpses of the office and meet the interviewers.

This is just not true. I talk to interviewing developers every day, and very few of them are interviewing at Google.

The final round of on-site interviews usually eats a whole day, and is preceded by a whole battery of time-wasting phone interviews, which themselves eat as much time as a coding challenge and often involve coding with someone else over the phone or Skype.

There are companies that are not Google that have multiple rounds of on-site interviews.

Are they not only getting very desperate candidates?

No. Everyone interviews this way. The very most marketable candidates leverage personal brand and connections to dodge a lot of it, but you have to be good at sales to do that.

Perhaps in the US, but here in the UK, in my experience, companies have shorter onsite interviews. (except for the American ones...)

> No interviewing software developer prefers an extra 6 hours of on-site interview to an off-site coding challenge.

I'd prefer it, since if I pass the coding challenge, I'm likely in for an airplane flight and an on-site interview anyway, so I've still lost that time from my life in addition to the time spent on the coding challenge. Unless the coding challenge is conducted INSTEAD of an on-site hazing session, in which case, well, E-mail in profile!

You're always going to get flown out. But the hazing session might take 2 hours instead of 8, and won't involve you coding on a whiteboard.

Very good companies can tell you over the phone after you finish work sample challenges what your odds are of getting an offer.

The sensible solution, which I have seen done very successfully before, is to add a 1-1.5 hour on-site/Skype step in which the candidate walks through the solution and is asked to add a small feature. It proves the code was written by the candidate, shows some of the personal qualities the homework didn't, and makes it far easier for the candidate to perform well, because they are familiar with the code; they wrote it!

It's not an expensive investment for what you get, and since the meeting should happen soon after the code is written, it's easy to provide basic feedback.

> For a company like Google Inc who is more selective than Harvard University

Nitpick, but I'd love to see a citation on that. Google, like every technology company on the planet (modulo a small error value) goes to great lengths to recruit and hires lots and lots of people.

See my previous footnote about them receiving 1 million resumes to fill 1,000 slots. If that's true, it is more selective than any Ivy League university.

Here's another one showing 75,000 resumes received in one week: http://www.sfgate.com/business/article/Google-gets-record-75...

When you have the luxury to pick the best, from the best of the best, you don't have to follow the advice given by HN armchair quarterbacks like us.

No. That's not how a market works. Google can in fact maintain high and consistent standards for incoming developers. But because their processes are so dysfunctional, they overpay to do it.

And engineer headcount costs are a huge component of their business --- so much so that they've been accused of breaking the law to collude with other companies to avoid competition in hiring!

Google is succeeding in spite of the waste and unreliability of their hiring processes, not because of them.

>Google can in fact maintain high and consistent standards for incoming developers. But because their processes are so dysfunctional, they overpay to do it.

But there's no evidence that switching to self-paced work samples would cost them less. With Google's popularity, they'd get more false positives from candidates who copied the code from widely disseminated previous projects. False positives cost money.

Your medium size firm with smaller volume of candidates won't have that problem of increasing false-positives.

Sure, with whiteboard interviews, the rejected candidates (and even ex-Googlers) can write a "brain dump" blog with blow-by-blow algorithm questions but history seems to show that these don't work so well as cheating mechanisms.

What kind of work sample project could Google realistically design for 10,000 programmers to complete? (It can't be as hard as "solve this Clay Millennium problem" or as easy as "reverse this string", and anything between those two extremes is trivial to copy to GitHub.) How often do they need to redesign the work sample? What about objective "comparisons", which were touted as a feature of that method? What about the programmers who don't want to do the work sample? (They do exist!) Is there also a cost to filtering them out?

It's great that you're enthusiastic about work samples and want more companies to adopt them, but I see no slam-dunk evidence that they are the universal best method for every company.

Be careful about the arguments here.

Google is manifestly too capriciously selective in their interviews. They are infamous for turning down people who go on to do excellent stuff elsewhere. Watching acquaintances navigate their process makes me believe the infamy is well-earned.

That's an argument that is separable from the question of how best to design the Google interview process. I think Google's interview process should be work-sample based, like I think every tech company's should be. But you could argue that either way. We can both agree that Google's challenges are sui generis. We're both reasonable and might disagree on the rest of it.

Where I don't think reasonable people can disagree is on the question of whether Google overpays for engineers. They do. They pay a tax for the luxury of being capricious about who to hire. They can afford to do that, but most companies can't.

Finally, some validation that my interpretation of their process is not insane.

One thing that really rustles my jimmies is the constant assertion that "false negatives are (effectively) free". I think Google and the companies who hire like them seriously underestimate how much this costs them, both the direct costs of spending so much to ultimately reject people and the indirect costs from the work that is not getting done or being foisted on another overloaded engineer.

I worked in a small microcosm of Google's market (Google wants the "best of the best" software developers, and we wanted qualified app pentesters; both are a small subset of the overall market for software talent).

My experience is that running a hiring process that lights up only on the kind of talent that qualifies itself with a standard tech interview is ludicrously expensive. People that do well in standard tech interviews can work anywhere they want. If you can only hire those people, you are competing for talent with the wealthiest (or most overfunded) tech companies in the market.

Fortunately: performance in tech interviews is in fact not a good proxy for programming ability (in fact, there are ways in which being good at interviews can obscure deficits in candidates), and you can get ridiculous discounts to the price Google pays for talent if you don't try to hire people the dumb way Google does.

The root issue is that the costs of bad hiring are even higher than that. For a rapidly-growing company, bad hires can snowball into a bad organization.

Apparently this is where I will disagree with Tom, because I think the costs of a bad hire are often overstated. Part of it is loss aversion, and part of it is inability to quantify risk. In fact, it seems like the industry is at a point right now where they would rather repeat this mantra than put effort into understanding the scope of the problem.

The costs of a bad hire are overstated. The cost of systemic bad hiring may grow geometrically. If that estimation is correct (bad hires result in more bad hires) then no price is too much to avoid a bad hire.

And consider the US Air Force. They have an unlimited number of young people willing to learn to fly jets. Qualified candidates even. So they can filter any way they like, and still not run out of applicants. So their filters seem capricious, because they sometimes are. And it doesn't matter.

Is Silicon Valley in that position? Depends upon who you are. I don't think Google for instance is running out of candidates.

I suspect that the idea that the costs of bad hires are huge comes from not identifying bad hires early. If you hire a jerk who makes poor coding decisions and fail to oust him for a year or more, the effect could be devastating. If said jerk is spotted in a month or so, it may not be so costly.

And really there are levels of bad hire. There's the bad hire who really doesn't know how to do the job and will never get good enough to do it. That person should be very easy to spot with not too much effort in the interview process. The dangerous hire is someone who on the surface can do the work but has a toxic attitude and/or doesn't learn/grow. I'm not sure putting someone through a torturous interview process is going to root that person out.

I think where a lot of the elitist hiring is at now is that many companies aren't so much trying to filter out people who can't do the job as holding out for who they think are rockstars. So they put candidates through the wringer with the thought that what's left will be rockstar material.

But many highly-qualified individuals won't jump through hoops for any but the top tier companies, and sometimes not even then. Furthermore, by definition, rockstars make up a very small percentage of the devs out there. What are the chances that every company who is holding out for elite coders even has any applying? Especially considering that elite people probably don't jump around often.

I agree, and that's even more reason that companies should use work sample tests in preference to on-site coding interviews.

The fact they have to filter down from a large number of applications to a small number of roles doesn't say anything about how selective they are.

    SELECT TOP 1000 * FROM resumes ORDER BY received_date 
will produce 1000 job offers but says nothing about selectivity.

> filter down from a large number ... to a small number ... doesn't say anything about how selective they are.

But that's what the definition of "selectivity" is for database retrieval. Selectivity == n_rows_selected/n_row_count. The "larger number" was the denominator and the "small number" was the numerator.

Your example SQL is not consistent with your previous sentence:

  SELECT TOP 1000 * FROM resumes ORDER BY received_date 
Notice that the total row count of resumes appears nowhere in your isolated example. So yeah, we don't have the denominator needed to determine selectivity.

For examples of Harvard and Google, we know the denominators (the total applications and total resumes). Therefore we know the selectivity.
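A minimal sketch of that ratio, using the 1,000,000-resumes-for-1,000-slots figure from upthread (the Harvard numbers are approximate outside figures, not from this thread):

```python
# selectivity = n_selected / n_applicants, per the ratio definition above
google_selectivity = 1_000 / 1_000_000    # offers / resumes
harvard_selectivity = 2_000 / 34_000      # approximate admits / applications

print(f"Google:  {google_selectivity:.2%}")   # Google:  0.10%
print(f"Harvard: {harvard_selectivity:.2%}")  # Harvard: 5.88%
```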

I suspect you're mixing up "mathematical selectivity" from "decision process selectivity" because Google's internal decision tree for hiring might look to outsiders as "black box" or nonsensical.

The denominator has no relevance to how selective your hiring process is. Your process does not become more exacting and precise simply because you received more applicants.

>denominator has no relevance to how selective

Thank you for confirming that you were using the colloquial sense of "selectivity", which doesn't require knowing the denominator, rather than the mathematical sense, which does.

You're using "selective" like this definition: http://www.merriam-webster.com/dictionary/selective

I was using "selective" like this: http://www.programmerinterview.com/index.php/database-sql/ca...

The following wiki page ranks college selectivity and it absolutely requires the denominator to do so: https://en.wikipedia.org/wiki/Rankings_of_universities_in_th...

That wiki page orders the ranking on mathematical selectivity and not colloquial selectivity. My previous comment of Google Inc being more "selective" than Harvard is to be interpreted as mathematical selectivity. Sorry for not stating it more explicitly to prevent confusion.

So you don't care about on what basis selection is performed?

If your assertion is just that Google rejects a higher proportion of applicants than Harvard, that's... not at all interesting. The lottery rejects an even larger proportion of applicants for its 'grant' program, but I'm not going to try to learn anything from how it goes about picking winners.

And still they've yet to hire a single UX expert...

Part of the reason why is that it's super-easy to apply to jobs online. With just a couple of clicks (especially via LinkedIn) you can apply for whatever you find listed. This is an area where Taleo & similar provide enough friction to make it somewhat painful comparatively. So you end up with thousands of people submitting applications for each of the positions Google lists. Most are not serious candidates (or at least not worth seriously considering).
