Hacker News
Why I Don’t Talk to Google Recruiters (yegor256.com)
892 points by kesor on Feb 21, 2017 | 652 comments



Unlike most of HN (it seems), I like hearing from recruiters, because despite the very low signal-to-noise ratio, there's always that remote chance that one of them could be able to set me up with a "dream job". It's zero cost to me to politely reply to a recruiter and ask for more info, and I try to at least respond to everyone. What I've found is that they must have a lot of candidates they're juggling because falling out of the funnel is surprisingly easy!

It's amazing how often "Hey, thanks for reaching out, I'm interested. Can you tell me more about the role?" ends the conversation right away. Probably over 50% of the recruiters who contact me never reply after that very polite and neutral response.

Many who do keep the conversation going have not read my profile or resume carefully, so I'll give them a summary of the types of work I'm actually interested in, which is never what they contact me for, and politely decline to move forward with the (usually way too junior) role they are looking to fill. That will almost always end the engagement.

Sure, it's a lot of noise, but filtering is very cheap: the time it takes to reply. My actual success rate with recruiters is probably pretty average. Of the eight or so jobs I've had in my ~20 year career, about three came through recruiters: two via in-house staff, one via an external recruiter.


I take it one step further: the email address on my resume is a black hole. Its only purpose is to feed an autoresponder that kicks back a warm, generic email thanking the recruiter for their time, acknowledging they have a difficult job, and laying out my requirements for any position: what I am and am not interested in doing, my salary/hourly/per-project requirements, my location requirements (100% remote), etc. At the bottom, there's another email alias along the lines of 'yesireallydidreadthisgiganticemail2017@mydomain.com' that goes straight to me. I ask recruiters not to email that address unless they've read the whole thing and their position matches my requirements.

I get 200+ emails to the catch-all autoresponder a month, and maybe 1 (qualified!) reply to the 'it's really me' alias every six months. About once a month, I get an email to that alias from a recruiter expressing joy, amusement, and thanks for spelling out what I'm looking for so early in the process. All in all, it's a far more pleasant way to go about passively searching for The Next Big Role(tm).


Have you ever actually got a job that you accepted through this method? Also, what wizardry did you use to get 200+ recruiter E-mails a month? Do you just have a highly optimized resume out there on all the job sites or something?


Yes, I've gotten my last 3 non-consulting jobs through this method. Before I switched to FT consulting, pretty much all of my jobs were through recruiters. The 'wizardry' is basically having a well put together resume (I've had several people/orgs/etc take a look at it over the years and give me optimization tips), and I maintain profiles at Dice, Monster, and Indeed. That's... about it. I avoid LinkedIn like the plague because I detest the company and its practices, but I'm sure folks who are less picky than me could do something similar on LI and EASILY get 200+ contacts a month.


I would be interested in seeing what your resume looks like, at least on a generic anonymized level.


No problem, what's a good email for you?

Edit: heck with it, I've got nothing to hide. Enjoy: https://www.fuzzy-logic.org/file/Lee_Whalen_Resume.pdf

So folks don't abuse my poor auto-responder, here's what would happen if you hit the email in that resume: http://bit.ly/2lxsly3


1) I'm impressed by your technique, and I intend to copy it.

2) The numbers in that autoresponder caused my jaw to hit the floor. I thought I was well compensated but apparently there is a lot of room for me to grow!


Thank you very much, I appreciate it!


I get between 30 and 50 a month, and all I really did was set up a LinkedIn profile.


> all i really did was setup a linkedin profile

AND work in IT. As a housing architect I received zero offers from recruiters despite having relevant experience. On the other hand, I have been contacted several times by IT recruiters once I listed my (irrelevant) IT side projects there.

In short: it's not you, it's IT.


Incredible--would you mind sending me your LI profile (privately of course, email in my HN profile)? I'd love to see an example of a profile that generates that much interest, even if it's mostly low signal.


I think location is very important with these sorts of profiles. I'm on LinkedIn, Indeed, and CareerBuilder.

I normally get nearly zero recruiter emails, but late last year I changed my location preferences on Indeed and/or CareerBuilder (I don't remember which) from my hometown to Washington DC, and suddenly I was deluged with 20+ recruiter emails per month; more right after I update something on my profile.

Strangely, 1/2 of the emails are for locations far away from DC. I might try changing my preferences to San Jose and see how many more I get.

My "profile" is pretty much just my resume. Experience seems to be another important factor.


Maybe I am mistaken, but I thought that much recruiter interest on LinkedIn was pretty typical. I live in the Midwestern United States, and what was previously mentioned is also true here from my observations. Most of it comes down to having the right keywords and tags, I believe. Having .net/java/mobile in your profile nets a lot of messages where I am. Words like Scala/Python/Node/Ruby/etc get you a bit more.

80% of it is for jobs within the surrounding city, 10% for within the state and 10% out of state. That said, most of these jobs you could also find without the recruiters as well, but sometimes ones from internal recruiters (and if you're lucky a developer/dev manager) are useful.


Interesting. I suppose if I were to ever get 20+ recruiter contacts per month I would retract my previous comment about replying to each of them being cheap time-wise.

Admittedly, I'm not in the market for a developer position, and I deliberately down-play my development experience in my profile, which probably reduces my contact count substantially. I should conduct an experiment wherein I stuff my resume and LinkedIn profile with programming languages and framework keywords for a month and record whether it has an effect on recruitment volume. I suspect it would.


Another interesting bit I've noticed is coworkers getting some of the same recruiter spam from the same recruiter. Seems some of them just blast everyone working for a particular company and hope to get a reply.


It's honestly not that incredible.

Just follow all the guidelines that LinkedIn gives you, so that your profile is an "all-star" and make sure you have a bunch of connections (500+).

An email a day is fairly normal at least for SF engineers.


Can you send me a link to your profile? I'd like to increase the interest in mine.


I think you're right to thank them for their difficult job. I seriously got bored just reading the description of your setup. What percent get through to the second email? Is it like 10%? 50%?


(1 per month + 1/6 per month) / (200 per month) ≈ 0.58%.

The more important nugget is that he is contacted about two attractive opportunities a year from this passive method, each with a fairly high probability of leading to an offer if he is interested. This enables him to:

1. Accurately appraise his worth to companies.

2. Quickly scale to a much higher number of interesting opportunities, via the ~14 worthwhile recruiters per year who already value his conduct (even if only 2 per year have opportunities with an appropriate fit), by actively enlisting their aid if he becomes dissatisfied with his current employer/role (or they become dissatisfied with him).

3. Identify hiring trends in his field.

I for one think it is a brilliant strategy, and I'll probably adopt it myself!


You did get bored, didn't you? He detailed the numbers in his post.


They said 200 emails/month to the general address and one every six months to the second address, so that's well under 1%.


I'd pay at least a few dollars per month / tens of dollars per year for this service.

I'm not kidding. Dear HN reader, please steal this idea!


Dude, you can do it yourself with a gmail throwaway. 'hireme.myname@gmail.com' or similar, set your vacation auto-responder appropriately, and have 'realdeal.myname@gmail.com' auto-forward to your real address.
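The routing logic behind the two-alias funnel described above is tiny; here's a minimal sketch (the addresses are the hypothetical ones from the comment, and in practice you'd wire this into a mail filter such as Sieve, procmail, or a Gmail filter rather than run it standalone):

```python
# Hypothetical aliases from the comment above.
CATCH_ALL = "hireme.myname@gmail.com"    # resume address: gets the canned auto-reply
REAL_DEAL = "realdeal.myname@gmail.com"  # buried in the auto-reply: forwards to you

def route(recipient: str) -> str:
    """Decide what to do with an incoming recruiter email."""
    if recipient == REAL_DEAL:
        return "forward-to-inbox"    # recruiter read the whole thing; worth a look
    if recipient == CATCH_ALL:
        return "send-autoresponse"   # kick back the requirements boilerplate
    return "drop"                    # not one of ours
```

The point is that the filter, not you, absorbs the 200+ emails a month; only mail to the second alias ever reaches a human.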


I know how to set up 2 email accounts and forward one to another email. "this service" would presumably be more than allocating 2 email accounts.


If everyone does this, then recruiters will simply start to spam the yesireallydidreadthisgiganticemail2017@mydomain.com emails without reading the autogenerated "profile".


What about 1% of people doing this? Or even 0.5%? That is easily enough to live off, and enough to slip under the radar.

To me, getting this done on interesting domains seems to be the hard part. For people with their own domain, setting up a separate server to handle email for it is some hassle. You can't really do this on a generic domain either, because that looks a lot less professional. Signing people up for Gmail accounts might work, but that's probably against Google's EULA. I'd guess the same for other webmail services that are at least somewhat professionally acceptable.

Best way I see is to give people with their own domain as easy a time as possible to set up DNS correctly. Getting through DKIM and SPF reliably seems like a minefield though.
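For what it's worth, the DNS side is only a few TXT records; a rough zone-file sketch looks like this (the domain, selector, and key are all hypothetical placeholders):

```
; SPF: declare which servers may send mail for the domain
mydomain.com.                 IN TXT "v=spf1 mx include:_spf.google.com -all"

; DKIM: publish the signing key's public half under a selector
mail._domainkey.mydomain.com. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"

; DMARC: tell receivers what to do when SPF/DKIM checks fail
_dmarc.mydomain.com.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@mydomain.com"
```

The minefield part is less the records themselves than getting every outgoing path (your server, any forwarding, any webmail client) to actually sign and align with them.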


then we make them go through an animated slideshow and do a quiz to get to an email address


And when that gets rigged by some wicked OCR, then a Super Mario simulation where the princess is the realdeal@email.com, and every 10 coins or every level would get them an additional resume-info nugget to consume.

Okay maybe I took it too far.


Turns out Bowser was just trying to hire a competent plumber.


But at least a few people would be employed making this system.


Got to keep that arms race going!

You didn't take it too far. Someone should build the first recruiter-focused online video game, where the prize at the end of each level is the contact details for a more and more suitably qualified candidate.


Can I buy more coins directly in the simulation?


And we have AI that can play Super Mario now.


Thank you for your contribution


My partner works at a recruitment company. Here's the thing: most recruiters are not very good at what they do and are only chasing placements. To get the most out of recruiters you need to be more pro-active, and find a recruiter or two who really understand your skill set and career ambitions. Build a relationship with them, and don't waste time with the other 90% of batch mailed crapshoots.


> Here's the thing: most recruiters are not very good at what they do and are only chasing placements.

Related anecdote: there was a technology recruiter who was constantly calling me at work, despite the fact that my work number is not listed anywhere* and that my LinkedIn profile has clear instructions not to do that.

I decided to look him up on LinkedIn. Guess what his previous job was? Debt collector.

* I figure the recruiters get around that by calling the front desk and asking to be transferred to me.


HAhahahaha debt collector. We all gotta eat somehow. Tech industry casts a wide, frothy net when business is booming!


This is true. I'm an engineer that started recruiting six months ago part-time and was able to bill six figures pretty quickly. I realized that colleagues in recruiting had a really tough time making efficient matches.

The best recruiters can not only make efficient matches, but they also connect the dots to reach out to matches in the pool of passive candidates they talked to when a new role opens up.

Even better than that is when I'm able to actually push back with companies and make an impact on the hiring process to get someone hired.


How'd you get started doing recruiting? I've been talking with recruiters lately and I find myself doing enough tech explaining to recruiters that I've wondered if I could consult for them. Interested to hear about your experience.


Feel free to e-mail (in profile). Started by trying to build tools for recruiters and interviewing recruiters for customer feedback. Found a good fit with a recruiter who placed my whole NY team before our startup got acquired and he offered me a consultant gig. Really enjoying it so far. Great way to monetize a mix of engineering career consulting + staying on top of startup trends.


Interesting! Do you feel like as you get further and further from having done the actual work (that you're helping recruit for) that you'll still be as effective?


I'm still coding professionally (js eng) and stay on top of trends! But, I don't think coding in the trenches will make me more effective. I've had a few eng. jobs, most of my friends are engineers, and I'm really passionate about job trends, job satisfaction, and employee retention.

With that foundation I'm more effective as I see more career trajectory data points when I talk to candidates that specialize (data, security, devops, ml) or ladder up (vp, manager, cto). Then I can provide even more value to new candidates with the career counseling approach.


I don't doubt that this is the way to find a good recruiter. However, if I am going to put in the time to vet multiple recruiters in order to find one worth working with, why would I not just spend that time finding a job?


It's hard to get an accurate pulse of the market if you spend only a month every few years searching through AngelList/Indeed/etc.

I work with about 70 companies (only in NYC), from pre-launch to almost-IPO, and a new role opens up to us every week. It's helpful to find a recruiter who knows you and the market well enough to curate jobs you want and tell you about opportunities that might not even be listed or on your radar.


Yup, we need Meta-recruiters: recruiters that can find the right recruiter for you!


But how to keep out the spammy Meta-recruiters? We'll need Meta-Meta recruiters!


Quis recruitiet ipsos recrutes?


Ah yes, good ol' Juvenal. The Meta Poet himself.


Head hunting is very profitable. If you get a recruiter email about a company from someone who isn't at that company, it's _always_ junk. 100% of the time, it is a waste of time. Ignore those, and look for the ones that come from the actual company.

The company has its own recruiting team that you are going to be dealing with anyway, and that team isn't playing the numbers game or shuffling candidates between many different companies trying to collect a (usually very large) hiring bonus. Third-party recruiters just throw as many people as they can at as many companies as they can until they get a bite.

The fee for a senior employee can be up to a FULL YEAR of the hired employee's salary. As that employee you don't pay it, but it's not something that benefits you either. On top of that, the contracts often have clawbacks: if the employee leaves within a certain time, the recruiter has to give back their fee.

The reason they don't respond after the first email is that they need someone who is going to be very proactive and actually motivated to get hired. Otherwise their candidate catapult might hit the target but is less likely to stick.


>If you get a recruiter email for a company that is not that company, it's _always_ junk

Not true. Companies hire recruiters to find employees. Not all companies (especially startups) have their own internal recruiting department.

The flip side is true, though: as a company trying to hire people, every recruiter email that says "I have a perfect candidate for you" is junk. They just invent those people or send resumes of people who aren't even on the job market.


Recruiting (and hiring) is a core competency for every company. Especially for startups.

A founder should personally be handling recruiting until the company is big enough to have their own internal recruiting department.


Massive difference between sourcing candidates and interviewing/making hiring decisions. If a founder is doing the former, she's probably wasting a massive amount of time, and there is a huge opportunity cost there. Recruiters have their place.


" If you get a recruiter email for a company that is not that company, it's _always_ junk. 100% of the time, it is a waste of time."

This is false. My last two roles were both initiated by 3rd-party recruiters.


Yes, it is possible to get hired through 3rd-party recruiters; if they never got people hired, they wouldn't make money or even try. But it is still not to your benefit.

If you don't have any way for companies to target you themselves, then 3rd-party recruiters might be more effective than targeting companies directly yourself.

However, if you are in the boat where you are getting many recruiter emails a week, my advice is sound. They are all junk, and the only way to get any signal out of the noise is to look for the ones directly from the company doing the hiring.


100% true. I almost always reply to recruiters with my rate and ask if they can afford it "since most companies that pay recruiting fees cannot pay full rates."

Tumbleweeds every time.


I make it a point to reply to recruiters. But there are many different types of recruiting email. I completely ignore (and sometimes even filter straight to the trash) blatantly shotgunned form emails.

If the email shows any effort whatsoever, mentioning a project I worked on, mentioning my current job, pointing out the role in question would fit my skills and it actually does, basically anything at all that suggests it's not just a form email sent to hundreds of people, then I will reply.

However, there's a third type of recruiting email that shows the person on the other side really is directly targeting you. It usually comes from the person who would be your manager, or at least someone who has a direct stake in the company (say the CTO or CEO at a small company). These emails I take seriously and appreciate. One of these led me to my current job.

Oh and I don't appreciate recruiting emails that have tracking links in them. Usually I will politely respond that I don't appreciate the tracking.


A CEO of a small company once sent me what might have been a really interesting opportunity and I was going to reply but then I noticed the tracking links and realized it was just a very-well-crafted form letter. I stopped considering it then.


Tracking pixels on emails aren't always an indication of a form letter. Marketing folks use lots of services that add tracking to all their outgoing email. (Whether that's OK or not for you is a different matter, I'm just pointing out that it could have still been an individual email.)


Keep in mind - the CEO in question could be writing personal e-mails out from an ATS / Sourcing tool that helps keep track of communications.

Tracking links are not always == shotgun approach


Tracking links are always extremely bad form, though. I'd be interested in hearing from the other side why such behaviour would be considered acceptable.


I think of acceptable behaviour in terms of a blacklist rather than a whitelist. I don't have a problem with tracking links because I don't see a reason to have a problem with tracking links.


By tracking links do you mean custom-generated links to a certain page that reply back to some software showing that you clicked the link?


Yes, exactly. Usually they are in the form of http://sometrackingwebsite.com/someid?nextUrl=https://youtub....


A colleague of mine gave me some excellent advice here: she redirects recruiters. I am very happy with my current job, so I'm not generally interested in emails from recruiters. However, I do have many friends who are at jobs doing things below their potential. It's always good to see if the job fits anyone's profile, even vaguely, and redirect the recruiter to them.

(I don't redirect all recruiters, though, so there's still a filter)


> I am very happy with my current job

The best time to interview is when you're very happy with your current job.

Zero pressure, zero commitment and potentially huge upside in terms of pay and title increase.


As a young person with zero other commitments to worry about (kids, loans, etc), I'm really enjoying what I work on (and get paid pretty well) and that's enough for me for now. This could change in the future. I'll keep your advice in mind in case this situation does change :)


My conversations have usually gone like this:

"Hey, you look like a great candidate for $AWESOME_JOB"

"Great, let's talk"

"Oh, you're not really what they're looking for, can I interest you in $GARBAGE_JOB or $BASEMENT_AT_YELLING_CORP"

But by then I've showed interest, so I start getting calls. Not worth it.


Seen that, but more often than not it's:

I see you're working as $ROLE in $COMPANY_A. How would you like the exact same $ROLE in $COMPANY_B? Or worse, How would you like $ROLE-1 in $COMPANY_B.

Sorry but it's going to take at least $ROLE+1 to get me to uproot my life and go through that interview gauntlet again.

Of course, as I said in another post, the calculus changes entirely if you're unemployed and need "something, anything".


Or, even better, would you like to be hired for $UNRELATED_ROLE.

My company has 'Linux' in its name, and apparently recruiters think that means I'm a sysadmin (I'm a dev).


At a location 3000 miles away.


I have the same experience - it's easy to turn down if you are not interested, or if what they propose is a bad fit. I also think it is good to remember how extremely lucky we are to work in a field with such high demand. I have several friends in technical fields (who are not developers) looking for new jobs, and they have to work really hard to find anything. As developers, we can just sit back and wait for employers to come looking for us.


Now consider this approach with spam.

To paraphrase you, there's always that remote chance that one of the Nigerian Princes could actually need your help.

I used to do a similar thing to you, but it's too much work now given the levels of job spam ("Oil pipeline engineer" roles, simply because my CV has the word "engineer" in it, prefixed with "Software"... lazy recruiter, that's bad!). Basically, if they can't make the effort, why should I? I guess the answer is, "Because there's always that remote chance that one of them could be able to set me up with a 'dream job'"..


Recruiters and Nigerian scams aren't comparable. People legitimately get jobs through recruiters; I got my dream job through a recruiter who found me. Nobody gets the Nigerian Prince's money.


My logic is that if something needs to be actively sold (pushed via recruiters), it's most likely average at best. I'm betting that the best jobs don't go through recruiters (or maybe they do, if that's company-wide policy, but the hiring manager already has a candidate in mind when they post the ad).


You've hit on a truth here. Most of the jobs available through recruiters are what I like to call "dog jobs": the ones that aren't filled internally and don't instantly attract a line of top candidates because they are so good or pay so well--the ones that NEED someone to sell them. Those are the jobs that end up on job boards and that recruiters are trying to fill--not the awesome ones.

Think about it like the real estate market (in normal markets, not Silicon Valley). Some houses sell before they even hit the market. A few also sell after the realtor does a few private showings. The rest are the "dog properties" that go on the MLS and need heavy marketing to sell.


> The rest are the "dog properties" that go on the MLS and need heavy marketing to sell.

That's... a strange way to look at the housing market. The norm is to list your property and then see what bids you get. Putting your house on the market is not some weird trick to pass a crappy house on to a bunch of rubes, or am I misunderstanding you?

Like, how do you reliably find a willing buyer pre-listing, and how do you know that's a good price (other than just blindly trusting your agent), if you don't even bother listing it? Listing is as much about price discovery as it is about finding more buyers.


The point is that the best homes (top 10%) rarely need to go on the market.

The hotter the market, the more realtors know buyers willing to pay a lot for the first home that meets all their needs. Hence more homes are sold before being listed.


> The point is that the best homes(top 10%) rarely need to go on the market.

I think that's locale-specific. Except in the 15 million+ bracket, auctions (here in Melbourne, at least) tend to get the best money for the seller.


The housing market in the Bay Area is very different from the housing market literally everywhere else in the US (except in some ways Manhattan). Pre-listing sales are very rare outside of SFBA and Manhattan, and the vast majority of domestic residential sales (>70%) are of MLS-listed houses/units, even in major metros like Chicago and LA.

The "dog properties" don't get listed on MLS at all; there is a cost to listing on MLS and the dog properties generally don't generate enough interest to justify the expense (and usually don't even attract a realtor willing to invest the effort).


This assumes that the Venn diagram of "top candidates" and "people actively looking for jobs" overlaps significantly. The best candidate for a job may not be looking for a job so companies pay recruiters to find them.

If the quality that companies were getting from regular applications was better than the quality they get from their recruiters why would companies pay to have recruiters?


I think we're agreeing with each other. Recruiters are needed if the job does not fill itself naturally, and a job will tend to get the candidate flow it deserves.

If there's an awesome job available out there, with way above average pay, benefits, great opportunities for advancement, good work/life balance, etc., you'll fill it with a top talent. You're not going to need a recruiter. Word will get around even to people who are not actively looking, trust me. Similarly, if you have an "average pay for an average worker" kind of job, you'll find that average worker.

When you have a ho-hum average job but you want top talent, then you're going to need that recruiter because the job needs to be actively sold.

I invite anyone who works at a company that pays 3X average salaries or is well known for being an unbelievably great place to work to reply and tell me they have trouble hiring.


Google and Facebook both pay reasonably above-market and have high employee satisfaction, and receive more resumes in a month than they could possibly hire in the next 10 years. They also both have ginormous internal recruiting departments.

They could definitely fill their need for new software engineers naturally, but presumably have data that the quality of candidates they get by bothering a substantial fraction of the world's software engineers on ~a yearly basis gets them better applicants and engineers.


> They could definitely fill their need for new software engineers naturally

Why do you think so? Just because you receive X resumes doesn't mean they're all qualified to work there.


Jobs do not fill magically. Most of us are too busy with work, side projects, or family to track whether your amazing company has a hot new opening.

Above-average people are usually treated well in their jobs. If you want to hire them, you need to actively approach them and lure them into applying. Even then, they will not bother to refresh algorithm questions for whiteboarding.

Below-average people are looking for jobs because they are on a PIP, or they have a toxic relationship with management, or they will never get promoted in their current role and need to change jobs.


I don't think it is really trying to sell the position, it's trying to find quality engineers who aren't looking.

Really talented engineers already have jobs and may not be actively looking to move, but if the right opportunity for the right company presents itself then they might consider it.


Wait! There's actually money!? :)


I've been responding with "Sorry I'm pretty happy where I'm at right now, but can I keep your email address and let you know when I do start looking for something new". My goal is to have a giant mailing list of recruiters when I do start looking for work again.


I've found that working with multiple recruiters can be a nightmare if they are all working the same geographic market.

Due to my current situation, I tend to only deal with recruiters who are local to where I live; unless the job opening allows for telecommuting, or it is a "too good to pass up" situation (I have yet to see one), I will generally pass.

Instead, I currently only work with a couple of local recruiters. I have told both what I expect for interviewing (I prefer a practical interview with tests - not a whiteboard pressure interview), and I keep both informed what interviews the other has sent me on, so they aren't both submitting me to the same position opening.

Going beyond 2-3 recruiters in such a situation can and will lead to a tracking nightmare: keeping all of them in sync so you aren't submitted to the same opening - either at the same time, or worse, after you have already interviewed once and weren't successful.

I do, however, try to review the contacts I get from recruiters, and if I feel they might be useful in the future, I tell them so and keep in contact (even if it is just a LinkedIn or email connection) and let them know I am interested in the future. That's usually enough for them to keep me in their DB when future potential offers come up.


Most of the recruiters I've talked to work for one and only one company.


In my experience, recruiting is a fairly cut-throat industry with a very high turnover rate of staff, so once you come to use your list you may find that a fair proportion bounce (or are redirected to someone else). It can't hurt though!


I find these calls quite flattering. I haven't been on the market or done any computer consulting since 2013, but got a call from a recruiter just yesterday.

It's a little less flattering when they can't correctly pronounce my name or the state I live in.


I like getting recruiter emails because it kind of means I am marketable, even if some of the recruitments just become anecdotes I can tell at a dinner party with friends... (you know, one of those totally weird recruitments).

But mostly I want them because I want to see what these companies expect from their candidates, the technology they are using, and the salary range. DevOps / SRE / Production Engineers are in high demand.


Do the typical recruitment emails you see specify salary? The ones I get tend not to.


Certainly some do. They may say "Cloud Engineer, 150k!" when they mean up to 150k. Or they will tell you $80,000 to $120,000.

Note that many recruitments come from agencies, so they are likely hiring consultants, meaning your paycheck doesn't come from the actual client company.


> filtering is very cheap: the time it takes to reply back.

See, I see that time as very expensive.


I think OP's reply 'I am happy to interview but only with the hiring manager I will be directly reporting to' is quite polite.


I tend to agree with you and try to at least be cordial with recruiters in my region. I have gotten 2 jobs from recruiters, both being external recruiters.

Once I applied for a job that had a really obscure job requirement which I met. The job was already filled but I spoke to the recruiter, who was really nice and ultimately found me a different job in another skill set about a month later.

The other was the traditional thing where a recruiter reached out to me. I actually didn't like the recruiter, but I liked the job and ended up pursuing it anyway.


I always give recruiters a fair go with their pitches (I'm between contracts now and today I had maybe 10 recruiters pitch me positions) - Most of them are pretty good at their job and they come up with decent stuff. You just have to ask the right questions to probe them and understand if the opportunity is actually good for you before you take it to the next phase.

As an engineer you should have a clear picture of the kinds of companies that you want to work for. As you get older, your selection criteria should improve and become more detailed.


Agreed, this makes perfect sense. I'll even go further and let them put me up in a hotel / buy my plane ticket even if I'm not particularly interested. Going to interviews is a lot of fun.


Many years ago I sent out resumes looking for work as an MCSE. Never mentioned Novell. Guess what? A recruiter emailed me looking for a Novell engineer... He fell out of my funnel real quick.


I respond to most if not all of them.

The only ones I tend to ignore are the recruiters who are bringing positions that are far out of my geographic areas and are not remote.


It's definitely not "zero cost." It may be a low cost, but as with any choice, there is always an opportunity cost.


There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

Let's presume this is out of preference and not ability. It's a pretty basic concept. If your preference stops you from learning something as basic as this as a programmer, then it doesn't seem likely that you will be motivated to keep up with even more abstruse concepts.

Nearly every programmer nowadays knows that naive string concatenation is inefficient, and so they should use a stream or something like that. I'd rather hire someone who knows exactly why it's O(n^2) and why adding to the end of an array that doubles when it expands is O(n) amortized. Why? Because a different but analogous situation might well come up in a programming job, and the person who likes to think about such things is more likely to spot the potential problem and avoid it altogether! The fact that the op would actually feature the above sentences as a large text excerpt sets off the "Dunning-Kruger" alarm for me.

That said, the op still has a good point. There is considerable organizational disconnect being displayed here. Those big companies would do well to have developers or a puzzle website do the initial filtering, rather than waste people's time by alternatively telling them they're supposedly wonderful, then supposedly horrible.
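The concatenation contrast above can be sketched in Python (CPython happens to optimize `+=` on strings in some cases, so read this as an illustration of the principle rather than a benchmark):

```python
def naive_concat(parts):
    # Each += may copy everything built so far: O(n^2) total work.
    s = ""
    for p in parts:
        s += p
    return s

def amortized_append(parts):
    # list.append doubles its backing array as needed (O(n) amortized),
    # then "".join performs a single final O(n) copy.
    buf = []
    for p in parts:
        buf.append(p)
    return "".join(buf)

text = amortized_append(["Hello", ", ", "world"])  # "Hello, world"
```

Both produce the same string; the difference is only in how many bytes get copied along the way.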


I agree. Personally, I'm a bit horrified at the idea that someone who wants to be an "expert at object oriented design" would turn up their noses at "binary-tree-traversing questions". It sounds like a formula for aspiring Architecture Astronauts.

It's also not realistic, given the wheat-to-chaff ratio out there, for a manager to interview all the candidates directly without an elaborate screening process, although the 'get interviewed by some generic programmer who has a cheat sheet for the Hard Questions you are working through on the fly' approach seems like an anti-pattern. I have talked to all of our group's hires as a near-final step.

There's a valid point in there somewhere, though. The recruiters that Google uses locally, for example, seem clueless enough to not care that their approach is quite offensive to someone who is not junior and/or desperate for a job. For several years, every few months, I got a come-on from our local Google office that seemed to be implying that working for Google was So Neat that I would probably want to drop 2-3 levels down the SE -> Senior SE -> Staff SE -> Senior Staff SE -> Principal Engineer chain, AND I would only have to carry a pager 1 week out of 4 as a Site Reliability Engineer. Well, hey, that sounds like a real step up from running a team doing something I really like at my current employer and getting to dream up SIMD tricks and algorithms all day! I'll get right on that. My very own pager!


Well said. The problem is, many recruiters actually wouldn't understand that sarcasm. And that's the problem the OP is talking about. For a lot of recruiters, they are so excited to be working at a big tech firm that they don't understand why anyone else would be any less excited.


  It's also not realistic, given the wheat-to-chaff ratio
  out there, for a manager to interview all the candidates
  directly without an elaborate screening process
Sure, but that screening process should be completed before you invite a candidate for an on-site interview.

I'm a hiring manager. If I say someone should spend 8 hours attending an in-person interview before I'll spend 5 minutes reviewing their resume/github, I'm saying my time is a hundred times as valuable as theirs. I can understand someone taking umbrage at that - especially someone who had better skills than me. And I want to hire people with better skills than me.


I broadly agree with you, but let me nitpick a bit, because nothing suits HN like a pedantic quest for truth.

8 hours vs 5 minutes is a bit extreme, for a start.

Secondly, the time a hiring manager needs to spend reviewing resumes includes unsuccessful candidates as well - so it's not 5 minutes, it's 5 minutes * |all_the_vaguely_plausible_candidates|. There might be 20 or 30 resumes in the pile for some of these jobs. So the manager might be 100-150 minutes in, in order to get to that 1 person who needs an "8 hour interview" (or whatever that is; I suppose places that fly you somewhere might be burning 48 hours or more - I know people in Australia who have been flown to the US for job interviews!). If you live in town it might be more like 2-4 hours. So the ratio isn't quite as extreme as you make it out to be for the manager, even if it seems unfair to count time spent reviewing all the other candidates' resumes/githubs.


There's also the issue of developer titles not being standard.

As you say, at Intel: SE -> Senior SE -> Staff SE -> Senior Staff SE -> Principal Engineer

IBM has: Engineer -> Staff -> Advisory -> Senior -> Senior Member of Technical Staff -> Distinguished Engineer.

I'm sure other companies have different ranks too.

Unless you dig into the technical ladder of each company, it's hard to say that Staff SE is the same everywhere.


Fair enough, and there's a legit argument that even identical titles at two given companies don't necessarily mean an automatic equivalence. That being said, when the scales are roughly equivalent, someone approaching you with a job offer 2-3 grades below where you are now is just embarrassing - you know, it's typical to offer a promotion to get someone to jump ship.


This sort of invites the question of what you WOULD be interested in working on (and what size team you've been leading) if you were to consider leaving.

Asking as a Googler who occasionally gets to dream up a neat trick here or there (and leads a team of pretty clever people, too!).


I'm happy where I am, honestly, and am not fishing for someone to flatter me, and am certainly not going to go into the answers to that question on a public forum. They approached me, I was minding my own business.

The main thing I guess is that a recruiter would have to be asking a question more like yours and less like "hey, how do you feel about an exciting job in the new field of SRE where you only have to carry a pager 1 week in 4" to make me feel like it was even a respectful approach. They had even grasped that I was at Intel as a result of a fairly successful exit, but... hey... maybe some people want that pager.

As it stands, it was pretty much a "Hey $NAME, we like think it's good that you did a graduate degree at $SCHOOL. Congratulations on your $RECENT_CAREER_MILESTONE. If you ever tire of your $MEANINGLESSLY_SENIOR_SOUNDING_JOB at $MASSIVELY_INFERIOR_COMPANY_TO_GOOGLE you should join our generic hiring process".

(not hating on SRE, mind you - but no-one sane looking at my LinkedIn profile or anything else would think I have that skill set or would be willing to retrain from the bottom to develop it)


Conjecture: the majority (by far, probably on the order of 75%-80% or more) of programming and engineering problems to be solved in a typical company or typical application will not see significant differences in performance by selecting a naive implementation.

For example, in your string concatenation example, the naive solution is good enough except in situations where large numbers of strings are to be concatenated in a given run, or the strings are huge, etc., relative to the compute environment. Does it really matter if one uses an O(n^2) solution when a few dozen (or even a few hundred) such operations are applied on "small enough" strings in a given execution on modern computer hardware? No, it just doesn't.

Select candidates who are keenly interested in and knowledgeable about algorithms for positions where it's important (i.e. part of the regular course of development), not because of that slight chance there is one edge case where one adjustment to a more efficient algorithm might possibly be useful at some indeterminate point in the future.
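A rough back-of-the-envelope sketch (with made-up numbers) of where that "good enough" threshold sits:

```python
def chars_copied_naive(lengths):
    # Repeated `s += p` copies the whole prefix each time, so the
    # total copying is the sum of the running lengths (the O(n^2) term).
    total, size = 0, 0
    for n in lengths:
        size += n
        total += size
    return total

# A few hundred small strings: under a megabyte of copying -- negligible.
small = chars_copied_naive([20] * 300)        # 903,000 chars
# A million of them: the quadratic term dominates -- about 10^13 chars.
big = chars_copied_naive([20] * 1_000_000)
```

The crossover depends on hardware and workload, but the arithmetic makes the "small enough is fine" claim concrete.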


> Select candidates who are keenly interested in and knowledgeable about algorithms for positions where it's important (i.e. part of the regular course of development), not because of that slight chance there is one edge case where one adjustment to a more efficient algorithm might possibly be useful at some indeterminate point in the future.

Well put! At the end of the day, it's way more important to get your product out to customers/users rather than holding a microscope over every line of code you write and endlessly obsessing over big O numbers. It's fine if you're in academia, but it can be a death knell if you're working in a crowded domain. You can always go back and improve code later; customers/users are not going to come back later.


Customers won't necessarily come back after you rewrite your product to deal with all the perf problems your naive implementations caused.

I'm all for striking the right balance with perf concerns, but in my experience that's not "just deal with it later". Giving no thought to performance is as bad as micro optimizing at the beginning.


True. It's a balance, it's hard, and there is no silver bullet. I believe I should learn from my mistakes and move on rather than have rabid obsession over big O. If something is obvious or caught by code review, fix it then and there; otherwise move on.


> I believe I should learn from my mistakes and move on rather than have rabid obsession over big O.

It's not even an obsession. Those pieces of knowledge could be fully mastered in about a half hour. It's first principles knowledge, like chemistry or thermodynamics. I can tell in a minute that something like "Solar Freakin Roadways" is a scam, whereas so many people gave that scam literally millions of dollars. Knowledge is literally power, in a very concrete way that expresses itself directly as dollars!

Instead of having some basic first principles knowledge to spot such things ahead of time, benefiting from the collective experience of your field, you'd rather just labor in ignorance and run into everything for the first time and get out the profiler? Imagine scaling that up to the size of the Amazon or Google workforce?


We agree to disagree.

I absolutely don't disagree with optimizing when it's obvious (two for loops instead of one, etc.) All I'm saying is that, when you have thousands of lines of code (millions?) and even more complicated/intricate connections in your head of different modules, all the while when trying to reach a deadline, things aren't as obvious. You have to make compromises, it's inevitable.

Amazon or Google did not scale up magically in one go by following established guidelines. I am very certain they had growing pains and I know that they had "hacks"[0] while figuring out the right solution for the problem at hand.

[0] As attested by a former Googler who works at my current workplace


> I absolutely don't disagree with optimizing when it's obvious (two for loops instead of one, etc.)

What we disagree on is how far to expand one's knowledge of what constitutes "obvious." Many people would say that the scam nature of "Solar Freakin Roadways" was far from obvious. Other people would smack their foreheads and ask why some people didn't pay attention in middle school and high school physics!


Details do not matter until they show up. Take string concatenation as an example again. Suppose you want to highlight particular words while a user is actively typing characters in a dialog. If you naively concatenate on every character, the user may see responsiveness issues, or at least waste CPU cycles/battery unnecessarily. Small cases like this add up in a big project.

My conjecture is that a big fraction of programs could get a significant performance boost, noticeable by end users, if they were developed by good developers who have some knowledge of algorithms. You don't need to be an expert, but knowing the very basics of time complexity and the fundamental data structures is necessary for every programmer IMHO.


I tend to run into more critical issues than performance, like shipping the product/feature at all, meeting deadlines, writing maintainable code, and writing documentation.

In your highlighting example, instead of working with someone who will optimize, I'd much rather work with the guy who will realize we can just debounce the dialog, write a one line comment about the performance issue, and then move on to the next thing.

Maybe I'm biased though. I've worked with the performance guy; he was brilliant and the code was clever and highly performant, but he was also slow and didn't write clear, maintainable code. We didn't ship in time and I had to find a new job when the project got scrapped. Damned if it wasn't fast though.
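For what it's worth, the "just debounce it" fix is only a few lines. A minimal sketch using a timer (the class and names are mine, not from any particular framework):

```python
import threading

class Debouncer:
    """Run fn only after `delay` seconds with no new calls -- e.g.
    re-highlight once the user pauses typing, not on every keystroke."""

    def __init__(self, fn, delay):
        self.fn = fn
        self.delay = delay
        self._timer = None

    def call(self, *args):
        if self._timer is not None:
            self._timer.cancel()   # a newer keystroke resets the clock
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()
```

Each new call cancels the pending one, so the expensive work runs once per pause rather than once per keystroke.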


> In your highlighting example, instead of working with someone who will optimize, I'd much rather work with the guy who will realize we can just debounce the dialog, write a one line comment about the performance issue, and then move on to the next thing.

As long as you are aware of the issue with naive string concatenation, you can use a string buffer or a mutable string – it is trivial to solve in a few more lines of code. What's more difficult is deletion from the previous text. If that happens often, you will need a rope or a skip list, which is challenging to implement from scratch. In the latter case, I would leave a comment without implementing the optimal solution and let the profiler decide later.


> We didn't ship in time and I had to find a new job when the project got scrapped. Damned if it wasn't fast though.

Sounds more like a project management issue to me, than a problem with that coder.


Three man team and he was the lead. Yes, it was a project management issue.

I'd rather see basic project management skills than basic knowledge of algorithms. Unfortunately, the former is much more rare.


Sure. But at the scale of e.g. Google, Amazon, etc. how often do you think they are performing these lower level computations?

From my experience: very frequently.

So it's useful at these corporations for all engineers to have this understanding, I think.


From my admittedly limited experience at Google, given the size of the input data even a "small" project deals with, those little algorithmic inefficiencies really add up quickly. You have to do a lot of optimizing and use clever data structures to make something work at all.


I haven't worked at any of these companies so I wouldn't know how often they're performing these computations. My conjecture is broad, of course, and a statistical suggestion at that: it seems unobjectionable to claim that for some companies the "typical application" lives in the 20%-25% "outlier" region most of the time.


Where I work, our core rendering algorithm, which runs several times on every page load, used naïve string concatenation. When I initially wrote it I knew string concatenation was inefficient but never got back to it, and for 99.9% of cases it didn't matter! We finally hit that 0.1% case, and after optimizing the algorithm, it didn't make a dent in our overall performance numbers at all.

The biggest impact to our performance has been switching one JSON serialization library for another, no algorithmic knowledge needed, just basic benchmarking skills.
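That kind of library swap is cheap to evaluate up front. A minimal benchmarking sketch using only the stdlib (the payload shape and the alternative library are placeholders):

```python
import json
import timeit

# Representative payload -- substitute something shaped like your real data.
payload = {"items": [{"id": i, "name": f"widget-{i}"} for i in range(1000)]}

def bench(fn, number=200):
    # Total seconds for `number` runs of fn; compare like for like.
    return timeit.timeit(fn, number=number)

t_json = bench(lambda: json.dumps(payload))
# To evaluate a candidate replacement, benchmark the same call shape, e.g.:
# t_other = bench(lambda: other_lib.dumps(payload))
print(f"stdlib json: {t_json:.3f}s for 200 dumps")
```

Benchmark both serialization and deserialization on your own data; relative results vary a lot with payload shape.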


Your one anecdote obviously proves everything once and for all. The biggest recent performance gains in my personal project had to do with switching to WebRTC and switching libraries.

Knowing when you can just slap in the naive string concatenation and move on is also a useful result of the right skills. If you had better profiling tools/skills, I posit that you wouldn't have had to do the useless optimization.


What made you reimplement the rendering algorithm and/or switch JSON libraries (e.g. were these changes backed by data/measurements)?


> Conjecture: the majority (by far, probably on the order of 75%-80% or more) of programming and engineering problems to be solved in a typical company or typical application will not see significant differences in performance by selecting a naive implementation.

True. And people who clutter their code by using some fancy "StringStream" class when they could've just used a concatenation are also part of the problem!

Saying that programmers don't need a basic knowledge of algorithms is like saying drivers don't need to know how to back up a trailer or do a 3 point turn. Over 90% of the time, you don't need it. But when you do, you really do!


Well, sure, but one person's "basic" is another's "esoteric," and "when you do" is something I think is relatively infrequent. I cared deeply about this sort of thing when I was writing code for HPC modeling of physical systems, but in a very specialized and narrow context. Since I've left academia I have rarely encountered a problem where understanding the implementation details behind complexity was really necessary.

Knowing the typical time and space complexity for an algorithm (without even understanding one bit about the implementation details) in a given category might be considered too trivial to be even basic, but I submit it's about the most "advanced" piece of knowledge for what a typical engineer really must know to write good, high quality and performant enough code.

Most can get by using a given language's library implementation of an "obvious" data structure and naive algorithms around it. For example, in Python, the "obvious" structure to map keys to values is a dict, and iteration over the keys or item tuples is a typical access pattern. An engineer doesn't need to know whether the dict insert or lookup is amortized O(1) or any other value, and he doesn't need to know that the space required is O(n), etc. In almost every case he's likely to put it to use for, the dict is fine.
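If you ever do want to see the asymptotic difference the dict is hiding, it only takes a few lines (a rough sketch, not a rigorous benchmark):

```python
import timeit

n = 100_000
keys = list(range(n))
d = {k: k for k in keys}

# Worst-case membership test: the last element.
t_dict = timeit.timeit(lambda: (n - 1) in d, number=500)     # amortized O(1)
t_list = timeit.timeit(lambda: (n - 1) in keys, number=500)  # O(n) scan

print(f"dict: {t_dict:.4f}s  list: {t_list:.4f}s")
```

On any ordinary machine the list scan is orders of magnitude slower, even though both lines read almost identically.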


> Knowing the typical time and space complexity for an algorithm (without even understanding one bit about the implementation details) in a given category might be considered too trivial to be even basic, but I submit it's about the most "advanced" piece of knowledge for what a typical engineer really must know to write good, high quality and performant enough code.

I think we're both saying that it's generally a good level of 1st Principles knowledge for programmers to have.


"Good" and "necessary" are different things, though. I consider it "good" to know as much as possible about things I'm interested in. I don't consider it "necessary."


It's not about using this knowledge all the time.

It is about your base knowledge (which should include algorithms) and knowing those algorithms when its matters.

If you have the choice of two identical software engineers and one knows 1 algorithm more than the other, he/she wins.


>Let's presume this is out of preference and not ability. It's a pretty basic concept. If your preference stops you from learning something as basic as this as a programmer, then it doesn't seem likely that you will be motivated to keep up with even more abstruse concepts.

This same exact argument could be used to require every interview candidate to know assembly.

>and the person who likes to think about such things is more likely to spot the potential problem and avoid it altogether!

I find that developers who constantly get caught up in the performance of individual algorithms will waste incredible amounts of time optimizing them and will miss major optimizations that come from seeing the system as a whole.

For example, I would rather take the developer who points out that a particular program can make use of the fact that most of the code doesn't need sorted widgets rather than the one who spent the whole time optimizing the widget sorting algorithm.


> This same exact argument could be used to require every interview candidate to know assembly.

Yes, and that argument could be used to require every interview candidate to know quantum physics.

At some point you draw a line, and Google/Amazon/Microsoft/etc. have very clear pictures of where their respective lines should be. It seems to work well for them.


> At some point you draw a line, and Google/Amazon/Microsoft/etc. have very clear pictures of where their respective lines should be. It seems to work well for them.

Think about it this way. 1st principles knowledge, like algorithms, isn't at all useful 90% of the time. But 1% of the time, not knowing it will cause very out-sized penalties in efficiency or debugging costs. Now take the frequency of the occasional 1st principles usefulness penalties and multiply it by all of the developer-hours at Google/Amazon/Microsoft.

That is why Google/Amazon/Microsoft do that!
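The multiplication in that argument is easy to make concrete; the numbers below are invented purely for illustration:

```python
# Hypothetical expected-cost estimate of rare "first principles" gaps.
devs = 10_000                   # size of the engineering workforce
incidents_per_dev_year = 0.5    # rare: one costly gap every two years
hours_per_incident = 40         # debugging/efficiency penalty when it bites

expected_hours = devs * incidents_per_dev_year * hours_per_incident
# 200,000 lost hours/year -- roughly 100 full-time engineers.
```

Even a low per-developer frequency multiplies into a large absolute cost at that scale.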


>It seems to work well for them.

There is no proof that they are actually preventing bad engineers with this process or that they couldn't get much better engineers and fewer flops by fixing it.

People want to work there in spite of these irritating hiring processes. I frequently travel via plane but that doesn't mean I think the TSA process is good.


> This same exact argument could be used to require every interview candidate to know assembly.

Have you ever done the standard Comp Sci compiler implementation class? Do you write C++ and use the C++ standard library? If you answered yes to both questions, then you should know from first principles how just about everything in the standard library is implemented, and can use those tools with complete knowledge of when they can fail or must be used differently.

> I find that developers who constantly get caught up in the performance of individual algorithms will waste incredible amounts of time optimizing them and will miss major optimizations that come from seeing the system as a whole.

Those developers are merely exhibiting yet another kind of ignorance.

> For example, I would rather take the developer who points out that a particular program can make use of the fact that most of the code doesn't need sorted widgets rather than the one who spent the whole time optimizing the widget sorting algorithm.

It's the developer who has a clue about algorithms who is more likely to make the valuable insight. The needless optimizer is just doing some Cargo Cult algorithms analysis. (Probably motivated by signalling geekiness, not producing useful results.)


I'd love to agree with you - it would be good to my ego - but I just don't think you're right.

People create value with software when they use code to solve problems.

For some people, that means tackling a gnarly complex problem and by a combination of wits, experience and education, come up with an efficient and elegant solution. Typical example: someone working in a specialized role in back-end (i.e. not user-facing) systems in a large company (small companies can't usually afford specialized roles). Concrete example: V8 JIT engineers.

For other people, it means looking up from the keyboard, figuring out the company strategy and product fit, and seeing what can be most efficiently (in terms of effort) put together to improve or align both. Creating a proof of concept that gets buy-in for a more in-depth solution.

The second category almost always delivers much more value than the first. And there's a spectrum in between, a spectrum of people who are all very good at what they do, all very good at creating value; the spectrum extends from the guts of the machine all the way out to the interface with the customer.

I've worked in some very different industries in my career, ranging from compiler engineer to web app developer. Compiler engineers leave their fingerprints on far more code, and the job is intellectually stimulating. But realistically, most of the reward of the job comes from the fun of the job itself. As a web app developer, I apply my knowledge of compiler techniques to things like SQL generation from filter predicates represented as ASTs, for efficient display of the user's data. But I could only deliver that because I saw the possibility of connecting the back end we had with an Excel-like experience we could give the user; and I had to take it on.

I've seen other people take a bunch of open source components - not knowing in depth how they worked - and put them together to create surprisingly credible solutions very cheaply, the kinds of solutions that win 7-figure SaaS deals. You don't get there with your knowledge of binary trees or compiler principles.


> I've seen other people take a bunch of open source components - not knowing in depth how they worked - and put them together to create surprisingly credible solutions very cheaply, the kinds of solutions that win 7-figure SaaS deals. You don't get there with your knowledge of binary trees or compiler principles.

But having some basic algorithms knowledge might make the difference between delivering such a solution that runs reliably and quickly enough and not.


>It's the developer who has a clue about algorithms who is more likely to make the valuable insight.

The ability to implement the sorting algorithm in the C++ standard library is completely orthogonal to having this insight.

An understanding of complexity analysis is really all it takes to have that insight. Memorizing a bunch of data-structure algorithms has almost no bearing on this ability.


> The ability to implement the sorting algorithm in the C++ standard library is completely orthogonal to having this insight.

Weak example! Exactly when would you want to use a shared_ptr, and when would you absolutely not want to use one? Why? And what pieces of 1st principles knowledge would let you simply know that in about a minute?

> An understanding of complexity analysis is really all it takes to have that insight. Memorizing a bunch of data-structure algorithms has almost no bearing on this ability.

Note that in this thread and others I've been advocating for First Principles knowledge. If you have that knowledge, then you'd probably know several of the most basic algorithms and their analysis from having learned that. I'm not advocating that people just be able to spit out a memorized text!

http://v.cx/2010/04/feynman-brazil-education


Has modern CS education been too little first principles knowledge, too much memorization or something else that detracts from understanding the fundamentals? Maybe that's the problem.


> This same exact argument could be used to require every interview candidate to know assembly.

You're not thinking big enough. My usual extension of the argument is to suggest that part of the interview process should be to give the candidate a bucket of sand and some ore, and make them smelt everything and build their own CPU from scratch.

Because, y'know, it's important to cover fundamentals!


That sounds like a really fun interview. "Given a bucket of sand, build a machine that computes pi to the nth decimal".

(I'd give people extra credit for coming up with creative and interesting variants based on Buffon's needle, rather than building the full infrastructure to do lithography).


> This same exact argument could be used to require every interview candidate to know assembly.

...because knowing assembly is actually useful to a high level developer? If it were, then yes, I'd say they should know assembly too. But I know assembly; I've written entire published games in assembly language. And yet I don't believe knowing it is actively useful any more. Knowing the basic concepts like how strings, integers, and floating point values are stored and compared, yes. You typically learn that as you're learning algorithms and data structures. But you can learn those concepts using C; actually knowing assembly language fluently is overkill today.

So there's no slippery slope argument to be made here.

> I find that developers who constantly get caught up in the performance of individual algorithms

Straw-man. [1] Developers who understand how to optimize can, at the same time, make intelligent decisions on when to optimize. Developers who are overly focused on minutia that isn't important are solving the wrong problem, certainly. But part of the skill of optimization is knowing when to do it.

The problem is that if you don't know how to optimize algorithms, if you don't understand big-O notation and its implications, then you won't really get how to optimize at either the individual algorithm level or at the system level. Because the concepts are the same across all the levels of complexity.

Yes, some developers can get obsessed with optimizing the wrong things. That's why experienced developers will profile before spending a lot of time optimizing.

But put a bunch of developers who ignore big-O together and you'll end up with code like the Quora app: If I delete a paragraph in the app it can take 10 seconds to finish deleting it. Sometimes the app will update once or twice with parts of the paragraph deleted. I'm not writing a book in the app; the N can't be more than a thousand or so for the entire answer. Even JavaScript can iterate over a thousand characters in milliseconds.

My guess? They've accidentally used an O(n^3) algorithm where they delete one character at a time and copy the entire message, re-concatenating it every time. Experienced developers wouldn't even consider writing that code to begin with, instead using something like ropes when dealing with text that's being actively edited, because that's what you do with text in an editor. [2] There's even already a JavaScript implementation they could have used off the shelf [3] (I'm assuming that the app is hybrid and running in JavaScript; if it's actually native then, well, it's quite an achievement for it to have such poor performance).

But you have to be at least passingly familiar with algorithms to even know that it's a likely problem.

And you know what? I don't always obsess over "the most optimal" algorithm for every problem. Sometimes the more optimal algorithm for large N will require more overhead for the small N that we're dealing with, and the brute force algorithm will not only be "just fine," it will be faster and require less work AND less memory overhead. And sometimes N is just always going to be too small to worry about.

I've sometimes just used a quick-and-dirty algorithm only to discover that its behavior was far worse than I had guessed (something that should be instant is taking seconds), but then because I do understand algorithms it takes me 5 minutes to rewrite for better time complexity, and the performance glitch vanishes.

And that's who Google is trying to hire, at least in general. It's not like I don't understand object oriented design as well.

[1] https://en.wikipedia.org/wiki/Straw_man

[2] https://en.wikipedia.org/wiki/Rope_(data_structure)

[3] https://github.com/component/rope


...because knowing assembly is actually useful to a high level developer? If it were, then yes, I'd say they should know assembly too. But I know assembly; I've written entire published games in assembly language.

To expand on what you are saying: If you're doing "high level development" on a business app in C++ 11 then knowing assembly and having been through a bog-standard undergraduate CS Compiler Development course will actually help you use a whole bunch of things in the C++ standard library, because you can easily guess how such things would be implemented from first principles. Otherwise, things like smart pointers are just "magic" that must be used in a bunch of disconnected, abstruse ways. Having the first-principles knowledge lets you easily know when to use another shared_ptr, when to use a reference, and when not to use one -- all from first principles, no arbitrary memorization required!
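For example, once you know that shared_ptr is just a pointer plus a shared reference count, its rules stop being arbitrary. A toy model of the idea (sketched in Python for brevity; Python of course already refcounts natively, so this is purely illustrative):

```python
class SharedPtr:
    """Toy model of C++'s shared_ptr: a pointer plus a shared count."""
    def __init__(self, obj, on_destroy=None):
        self._obj = obj
        self._count = [1]          # shared between copies, like the control block
        self._on_destroy = on_destroy

    def copy(self):                # what copy-constructing a shared_ptr does
        p = SharedPtr.__new__(SharedPtr)
        p._obj, p._count, p._on_destroy = self._obj, self._count, self._on_destroy
        self._count[0] += 1
        return p

    def release(self):             # what the destructor does
        self._count[0] -= 1
        if self._count[0] == 0 and self._on_destroy:
            self._on_destroy(self._obj)

    def use_count(self):
        return self._count[0]

destroyed = []
p = SharedPtr("file handle", on_destroy=destroyed.append)
q = p.copy()              # refcount -> 2
q.release()               # refcount -> 1, resource still alive
p.release()               # refcount -> 0, on_destroy fires
print(destroyed)          # ['file handle']
```

Once you've seen this, "when do I pass a reference instead of copying the shared_ptr?" answers itself: a reference doesn't touch the count.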

Hell, having that kind of knowledge even benefited someone working in Smalltalk back in the day.


> If you're doing "high level development" on a business app in C++ 11 then knowing assembly and having been through a bog-standard undergraduate CS Compiler Development course will actually help you use a whole bunch of things in the C++ standard library

Agreed, and my understanding of assembly does help in those ways. I actually feel like compiler design was one of a very few classes in college that really, really taught me something.

But if I'm hiring a Python or JavaScript developer, I'm not going to require they know assembly. If only because that would restrict the hiring pool so much that I'd likely never find an employee.

A senior developer, though, probably should understand what's happening at the low level. And most teams should have a senior developer to keep the team from making rookie mistakes. There are plenty of programming jobs that can be done with less skill or deep knowledge. Like the ones that are discussed in the Wired article on coding being the next "blue collar" job. [1]

[1] https://www.wired.com/2017/02/programming-is-the-new-blue-co...


Paraphrase: you have to compromise due to market realities. Isn't that an indictment of the poor state of training in our field?


Yes and no.

I think that the lead or "surgeon" (to quote The Mythical Man Month) in charge of any app or service development project should be well trained. Anyone who brings in a junior developer (anyone without a CS degree or equivalent, or with less than 5 years of professional experience) to lead a project is asking for trouble.

But I think that average developers don't need the full training any more than a nurse needs to have a full medical degree. There are certainly things that I don't want to have to do, and while I might be able to do them better in some way than a more junior developer, they just don't matter enough.

It is a Catch-22, though: Just having a CS degree and five years of experience doesn't make a developer competent. I'd love to see some kind of certification to help separate the wheat from the chaff, so that non-experts could distinguish a top developer from a mid-tier developer.

But none of the certifications I'm aware of do anything aside from test that you've memorized the right buzzwords associated with a particular technology, and as such having such a cert is almost useless.

So yes, I'm arguing for certifications despite the fact that I view certifications as useless. If we had good certifications, maybe we could help establish a better way to distinguish developer skills. I would love to see that, but I admit it's selfishly motivated: I'm really good at programming and technical tests, so under any such regimen I'd likely end up with the highest ratings, except where domain-specific knowledge was required.


But I think that average developers don't need the full training any more than a nurse needs to have a full medical degree.

A nurse needs to have enough 1st Principles knowledge plus specific training to keep from making egregious mistakes. In fact, where X is a profession, an X needs to have enough 1st Principles knowledge plus specific training to keep from making egregious mistakes.

The fact that our "field" keeps producing X without that minimum level means something is broken. It would be like the nursing field producing nurses who didn't know how to spot a Tension Pneumothorax, because most nurses don't have to deal with that 99% of the time.

So yes, I'm arguing for certifications despite the fact that I view certifications as useless. If we had good certifications, maybe we could help establish a better way to distinguish developer skills.

The fact that you entertain this thought is an indication that something is not quite right with our field.


>The fact that you entertain this thought is an indication that something is not quite right with our field.

Agreed. But try as I might, I can't come up with a solution that I'd actually support the implementation of.

If you have any ideas about how we can get from where we are to where we should be, I'd be interested in hearing them. Having the IEEE license developers at varying levels of skill sounds like a start, and requiring Internet-visible software to be written by certified developers sounds nice...until you think about open source projects, which thrive in large part on free effort. And until you think about the fact that software traverses borders fluidly. And what about entrepreneurship? Should we tell people they can't write apps unless they have the certification? Or that they need to pay an expert to audit their apps?

And then there's the fact that big companies will do whatever they can to cut costs, including hiring developers in countries without certification requirements if you don't motivate them otherwise. (Maybe a DMCA-style safe harbor, where only companies that use a development team with sufficient certifications are protected against lawsuits for defects; otherwise people can sue them for exploits that result from their software?)

I have lots of ideas, as you can see above. I just don't like any of them. There probably needs to be a constellation of rules in place, set by a professional organization and enshrined as legal requirements... But half of the rules I imagine wanting I also, under other circumstances, would despise.


>...because knowing assembly is actually useful to a high level developer?

It's about as relevant as implementing quicksort.

>My guess? They've accidentally used an O(n^3) algorithm where they delete one character at a time and copy the entire message, re-concatenating it every time.

Or it's actually a javascript event handler that is doing something else expensive per char delete. It's more important to know to profile rather than try to memorize every algorithm used by a system and guess which one is causing the problem. Once you identify where the bottleneck is, you can search for "efficient algorithms to do X".

People who profile and fix bottlenecks are much more valuable than people who think they have implemented the most efficient algorithm for every operation in their program.
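In Python, for instance, that profile-first workflow is only a few lines with cProfile (illustrative; the "suspect" function here is made up):

```python
import cProfile
import io
import pstats

def build_naive(n):
    # The suspect: repeated string concatenation in a loop.
    s = ""
    for _ in range(n):
        s += "x"
    return s

prof = cProfile.Profile()
prof.enable()
build_naive(50_000)
prof.disable()

out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(3)
report = out.getvalue()
print(report)   # the top entries show where the time actually went
```

Once the hot spot is identified, searching for a better algorithm for that one spot is the cheap part.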


> It's about as relevant as implementing quicksort.

...did you actually read my comment? Because I went on to say that I didn't think it was relevant.

That said, understanding how quicksort works is important, not because you'll be called on to implement it, but because there are key strategies used in its implementation.

And yes, I can implement quicksort without looking up the algorithm. It's really not that hard.
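For instance, a from-memory sketch (not in-place, so it sidesteps the two-pointer partitioning, but the divide-and-conquer strategy is the point):

```python
def quicksort(xs):
    # The key strategy: partition around a pivot, recurse on each side.
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    less    = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```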

> Or it's actually a javascript event handler that is doing something else expensive per char delete. It's more important to know to profile rather than try to memorize every algorithm used by a system and guess which one is causing the problem. Once you identify where the bottleneck is, you can search for "efficient algorithms to do X".

Sorry, but no, wrong answer. Partial credit for suggesting profiling. Let me take this apart:

> Or it's actually a javascript event handler that is doing something else expensive per char delete.

If they're calling a function that triggers an event handler for every character deletion when they're deleting a block of text, then they're Doing It Wrong. Yes, profiling is important. But calling delete one character at a time for a block of text is plain lazy.

> Memorize every algorithm used by a system

Umm... this is a text editor we're talking about. There are very few algorithms, and it shouldn't be a case of "memorize them" as much as "observe them." If there are more than 3-4 algorithms, then it should also include "draw a picture of how they interact," if your abilities at modeling them in your head are insufficient.

But someone who is strong at algorithms should be able to just see what's wrong with a text editor that deletes characters one at a time. It should be obvious that it's likely to be a problem, and they shouldn't do it to begin with.

> People who profile and fix bottlenecks are much more valuable than people who think they have implemented the most efficient algorithm for every operation in their program.

Both are necessary, and as I've said elsewhere, "developers that are anal about optimizing everything" is a straw man; good developers know when optimization is important, and good developers can write more optimal code without making it harder to read. Just knowing how to optimize doesn't automatically make you optimize every single routine, nor does it mean everything is more complex. Half the time when I optimize something the code ends up cleaner and easier to read, in fact.

And it's not always about bottlenecks. Sometimes it really is about the algorithm, and if you don't know your algorithms, you won't be able to recognize this. As said above, it's a prime example of the Dunning-Kruger effect: If you don't know algorithms, you don't even know what you don't know. Claiming you don't need to know algorithms just reinforces the point. Sorry.

In another comment on this post I described an optimization I did where no easy-to-fix bottleneck was slowing things down; it required a high-level algorithmic change. [1] In about an hour I sped it up by a factor of about a thousand. Ultimately I was doing the same things, but I was able to improve the algorithmic efficiency.

And someone strictly looking for bottlenecks can look at that code forever and not see the algorithmic change needed, because the code itself was written to be pretty optimal; it was a high level change to the algorithm that made it so much faster.

[1] https://news.ycombinator.com/item?id=13701931


This is a great post and I completely agree. It is amusing, however, to note that Quora is known in the competitive programming community for having developers quite strong at algorithms; I guess sometimes that doesn't make it through to the software.


I'd put money on the fact that those competitive-programming Quora developers aren't working on the Android app. Or even on the Android/mobile API back-end, which seems to fail frequently -- and lose information you've entered into the app, which doesn't do obvious things like queuing up failed requests to be retried later.

So. Much. Fail.

The Amazon Alexa/Echo app is similarly poorly programmed. It takes like 10 seconds just to boot up, which you need to do if you want to look at your shopping list -- and if the app gets pushed to the background, you need to boot it up again. Sigh.


> For example, I would rather take the developer who points out that a particular program can make use of the fact that most of the code doesn't need sorted widgets rather than the one who spent the whole time optimizing the widget sorting algorithm.

And I'm pretty sure that the person who doesn't want to learn binary-tree traversal will never be able to point that out. That's the point you missed in the post you replied to.


Instead of requiring someone to know a b-tree, how about just teaching it to them in an interview, and then walking through an exercise to see if they get it? That'd be more impressive to me. If someone claims to know what a b-tree is, test them on a more advanced concept that builds on using a b-tree that they are unlikely to know. Then teach them that and see if they get it. Make the interview more about working together and capacity to learn, and less about computer science trivia night at the pub with no beer.

Your ability to efficiently teach people stuff that you claim is important is a chance to hold yourself accountable for how well you actually know something AND whether it is actually important at all. If an interviewer taught me something I didn't know, and then quizzed me on it, I'd be impressed as heck, and I'd wanna work with that person (if they were not an asshole).


This has worked quite well for me. I tend to ask one of two types of algorithm questions:

1. Pick a simple but relatively obscure data structure, something they are unlikely to have crammed the night before. I always start by asking the candidate if they are familiar with it; the answer is almost universally "no". I then pull out a wikipedia printout and a notepad, and spend 10 minutes explaining it to them, with diagrams. Once they are sure they have understood the basic operations, I ask them to implement one of the basic operations we just walked through. The amount of code here should be ~10 lines; keep on adding simplifying assumptions until it is.

2. Pick a fundamental data structure from a high-level language runtime, something you would use without thinking: python lists, javascript objects, that kind of thing. I emphasise that familiarity with said language is not required, and list some real-world properties and performance characteristics (access speed, iteration, ordering, mutation), as well as common and less-common use cases. I then ask the candidate how they think it is implemented under the hood (i.e., if you wanted these semantics in a language that didn't natively provide them, how would you do it?) I often don't have a full answer myself, so we brainstorm the requirements and implementations together; no code, just open-ended discussion. The best candidates have often emailed me afterwards, having looked up an actual open-source implementation.
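For question type 1, something like a Bloom filter fits the bill (a hypothetical example -- not necessarily one I use): obscure enough that few candidates have crammed it, and each basic operation is a few lines once the structure is explained:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash positions over an m-bit field."""
    def __init__(self, m=256, k=3):
        self.bits, self.m, self.k = 0, m, k

    def _positions(self, item):
        # Derive k bit positions from k salted hashes of the item.
        for i in range(self.k):
            digest = hashlib.sha256(("%d:%s" % (i, item)).encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits >> p & 1 for p in self._positions(item))
```

The candidate only has to write `add` or `might_contain` after the walkthrough, which keeps the exercise about comprehension rather than recall.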


Without knowing how well it works, I like this approach. It's thoughtful and engaging. Not perfect, and maybe that's a good thing.


I really like this approach; can I interview with you? :P


I've had interviewers try to school me on something that I wasn't super well versed on, and my internal thought process was "If I wanted to learn the finer points of xsd, I'd go dig in on my own and learn it. Right now I'm not that interested in you teaching it to me."

Maybe not the healthiest attitude ...


Definitely a good idea to decouple rote memorization from the interview process.


I do like your suggestion, but

> Make the interview more about working together and capacity to learn, and less about computer science trivia night at the pub with no beer.

Understanding b-trees is pretty useful. If you work with relational databases, most questions concerning query performance require some understanding of b-trees.

I wouldn't ask a candidate to implement b-trees (or a sorting algorithm, or red-black trees, etc.), but asking about their basic properties seems reasonable. Or better yet, asking about the performance of a realistic SQL query which uses a b-tree index.
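A quick way to see this in practice is SQLite, whose indexes are b-trees. The exact EXPLAIN QUERY PLAN wording varies by SQLite version, but the scan-versus-search distinction is the point:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, email TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(i, "user%d@example.com" % i) for i in range(1000)])

query = "SELECT * FROM users WHERE email = 'user500@example.com'"

# Without an index, the planner falls back to a full table scan:
before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]
print(before)   # e.g. "SCAN users" (wording varies by version)

# SQLite indexes are b-trees; the same query becomes an O(log n) search:
con.execute("CREATE INDEX idx_email ON users(email)")
after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]
print(after)    # e.g. "SEARCH users USING INDEX idx_email (email=?)"
```

A candidate who understands b-trees can explain why the second plan stays fast as the table grows, and why the index also helps ORDER BY and range predicates on that column.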


> If your preference stops you from learning something as basic as this as a programmer, then it doesn't seem likely that you will be motivated to keep up with even more abstruse concepts.

No one is arguing that there isn't any value in knowing CS. Rather, the argument basically is that for the vast majority of developers, studying algorithms is a net loss because it's time that could be better spent learning more valuable skills.

If you're the person whose job is creating Redis, then yeah, you should probably know something about algorithms and data structures. For most other people, having a cocktail-party level of familiarity with that stuff and a good understanding of the phrase "tools not rules" is probably good enough.


Rather, the argument basically is that for the vast majority of developers, studying algorithms is a net loss because it's time that could be better spent learning more valuable skills.

But the op is basically implying that you don't even need the "cocktail party level familiarity." Knowing just enough first principles chemistry to know how CO2 and CO are produced can save your life. It's one thing to just know the rule you shouldn't run your car in a closed garage. It's another thing to know the general conditions where you might be producing CO, so you also know not to run your laser cutter in a poorly ventilated room.

First-principles knowledge is a set of tools for reasoning about the world, your program, your software libraries, etc. Programmers thinking they don't need even the most basic algorithm knowledge is basically the anti-intellectual stance of eschewing first-principles knowledge.


I find your stance well considered and it intrigues me because I don't think I agree with it. However I might agree in principle and differ on what exactly it means for someone to know first principles.

I've been through a CS education track, including algorithms and complexity, and my day job has been software development for over 5 years since I graduated college. In practice I have been able to understand complexity tradeoffs when selecting approaches and building implementations, have been able to recognize shortcomings when others have done the same, and have been able to improve upon what was built. The difficulty I have encountered has never been in the implementation of an algorithm. I have even taught the first principles of complexity to others sufficiently well that I have seen them make informed design decisions.

I will openly admit that not once, during all of that time, have I ever been able to whiteboard an optimally efficient (or nearly so) algorithm implementation _and be confident it was such_. Mind you, I'd love to be able to, and I'm definitely not arguing that understanding complexity isn't useful knowledge. However I would contend that being able to bang out an optimal b-tree implementation from memory on-demand isn't first principles knowledge, but is merely _trivia_ ability.

I would further argue it's not even first principles knowledge that is most important, but rather a combination of critical-thinking, enough self-awareness to notice when you've exceeded your current knowledge/experience, and sufficient humility to fix that lack rather than plod along with blinders on.

Granted, I might be less competent than I believe, but then, in practice, it would seem that incompetence can lead to genuine success. Perhaps the scope of my 5 years real-world experience is so narrow that I've simply not had the chance to encounter a situation where others would say "if you're not able to whiteboard a b-tree, you're not the right person to solve this problem". But then, I'm a full-stack developer that regularly uses Java, Javascript, and Python and have occasionally had reason to use R and C#, so it seems unlikely that my experience is that exceedingly narrow.

Sorry that this became a bit rambling, it's just a topic that's always intrigued me in how divisive it can be.


However I would contend that being able to bang out an optimal b-tree implementation from memory on-demand isn't first principles knowledge, but is merely _trivia_ ability.

a) What the heck is an optimal B-Tree? (Don't you have to tune the parameters?) b) I would also contend that it's trivia, and merely a way of getting at the ability to apply First Principles knowledge.

I would further argue it's not even first principles knowledge that is most important, but rather a combination of critical-thinking, enough self-awareness to notice when you've exceeded your current knowledge/experience, and sufficient humility to fix that lack rather than plod along with blinders on.

So you're not disagreeing with the value of 1st principles knowledge. Then you cite an even deeper kind of knowledge. Where exactly are we disagreeing here?


I'm curious to meet these devs that claim that even having a basic understanding of algorithms/data structures fundamentals is not worth their time. Something tells me that the work they do is more related to design/front end and as a consequence they've either never needed to use computer science fundamentals or have been able to get away with naive implementations due to working on trivial problems.

Usually I don't bother engaging in arguments with these people, because I know that the standard algorithms interviews will weed them out so that I never have to deal with them. But I've started to notice a trend of more of these people finding their way into the "big 4" type companies and ending up on large-scale/infrastructure projects where they make very critical mistakes; see https://news.ycombinator.com/item?id=13700452


I think you're misquoting the article, and the "Dunning-Kruger" is uncalled for. Here's the full quote:

> If she would have started her email with "We're looking for an algorithm expert," we would never have gotten any further and would not have wasted our time. Clearly, I'm not an expert in algorithms. There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

We should avoid the notion that naive string concatenation is a problem that impedes our ability to deliver software; except for niche applications, this is not the case.


> We should avoid the notion that naive string concatenation is a problem that impedes our ability to deliver software; except for niche applications, this is not the case.

The attitude that really basic Computer Science concepts like algorithms and algorithmic complexity are irrelevant is exactly why software projects are so frequently FUBAR.

The Dunning-Kruger reference is spot on. The original author doesn't even know what they don't know, or why they should want to learn it. Being proud of ignorance of a cornerstone topic isn't something to be celebrated.


I would say, in my experience, most projects are messed up by a lack of separation of concerns--on many levels--followed by different teams maintaining the same code base, each with their own standards.

Being generally messy and not considering the overall design hurts way more than missing basic CS does.

I have seen plenty of performance issues related to CS knowledge but a lot of times those are masked by messiness.


Can you provide an example of an open source code base which is messed up due to the developers' inability to understand algorithms?


I don't have an open source example; I tend to actively avoid such projects, and so I tend to accumulate lists of the ones that seem well engineered rather than the opposite.

The Quora app, though: If I'm writing a reasonably long answer and I delete a paragraph, it can take more than 10 seconds to complete. There's some profound inability to understand algorithms in there somewhere, I can guarantee it.

There was just a headline on Hacker News a few days ago about a "reject!" function, if I remember correctly, that ended up with an n^2 algorithm for several major Ruby releases because someone fixing a bug didn't understand the consequence of their change. I don't have the link, though.

EDIT to fix the name of the Ruby function and to add the link. [1] Thanks to user nighthawk454 who dug it up!

[1] https://news.ycombinator.com/item?id=13691303


Given that the Quora interview process is reputedly difficult [0], and not lacking in algorithms questions [1][2], this is quite a puzzling situation.

[0] https://www.quora.com/Which-companies-have-really-hard-algor...

[1] https://www.quora.com/challenges

[2] http://www.businessinsider.com/heres-the-test-you-have-to-pa...


I work at a "big 4" company that's also known for having difficult algorithms questions and I can confirm that sub-par developers still pass through. After talking to a lot of my friends in the industry, my theory is that most devs these days are basically just using sites like leetcode to train and memorize implementations of algorithms which end up being the same questions used by interviewers that use leetcode as a question bank.

I'm not against this type of training -- I'm very familiar with it and have competed in ACM ICPC, TopCoder, etc. -- but what I've noticed is that there seems to be a divide between the type of devs who are merely brute-forcing, solving as many questions as possible in order to build their own solutions bank, and the ones who take a more structured approach: learning problem-solving paradigms and algorithms/data structures in a way that reinforces why they're used in the situations they are, and how they can be adapted and composed to solve larger problems.

If you want horror stories: I've worked with a dev who didn't understand the concept of passing by value vs. by reference. This was a person being paid 6 figures who has been writing code for cloud infrastructure with a fundamental misunderstanding of the underlying memory model. I've also had devs who didn't even know that an abstract data type like a map could be implemented with either a hash table or a self-balancing search tree, and therefore had no idea when you should/can/can't use one or the other. This type of thing wouldn't really bother me if I was talking to a front-end dev, since I imagine most of what they do is solve design problems, but it's very worrying when these are people working on the lowest layer of a public cloud infrastructure...
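For anyone wondering what that map tradeoff actually looks like, here's a sketch in Python (a sorted list of keys stands in for a self-balancing search tree, which the stdlib doesn't provide):

```python
import bisect

# A hash table (Python's dict) gives O(1) average point lookups,
# but its keys have no useful ordering for range queries.
ages = {"carol": 41, "alice": 30, "bob": 25}

# A sorted structure gives O(log n) lookups, plus cheap ordered
# iteration and range queries -- which a hash table can't answer
# without scanning every entry.
keys = sorted(ages)                     # ['alice', 'bob', 'carol']

def range_query(lo, hi):
    # All keys in [lo, hi), via two binary searches.
    return keys[bisect.bisect_left(keys, lo):bisect.bisect_left(keys, hi)]

print(range_query("a", "c"))            # ['alice', 'bob']
```

Knowing which implementation backs your map is exactly what tells you whether a range query is cheap or a full scan.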


See, your case is exactly when I would expect to be quizzed for algorithmic complexity and the like. And I would most likely fail! I understand pass by reference / pass by value but I would have to study hash tables (which I'm familiar with) and self balancing search trees (which I have heard of, but have not, to my knowledge, encountered).


Interesting.

I wonder what the history of their mobile app is, though? It feels like it's a hybrid app, and it feels like it's developed iOS-first, Android-as-afterthought, so it may be the ugly stepchild of the team that gets no attention. "Does it build on Android? Ship it!"

Also, if it is hybrid, it may be a JavaScript team -- and JavaScript still has that "top developers love to hate it" aura [1], so it may be that some of the least talented Quora developers populate the team? (Apologies to the Quora app team, but ... 10 seconds to delete a paragraph?)

The worst of the problems seem to be when I'm also using the SwiftKey keyboard, so they may not see them internally. Somewhere along the way they're doing something pathological, though, because SwiftKey works like lightning in other apps, and only in the Quora app can get bogged down and end up so slow that I have to disable it and use the standard keyboard just to type anything at all. There's also a bug where I sometimes can't hit enter after a link I've pasted, and again I have to switch to another keyboard.

SwiftKey may be reacting to notifications that the text has changed, for instance, and Quora is sending 200 change notifications, one for each character changed, and then SwiftKey is querying the entire source document again each time... Or some such. Don't know.

It's not just SwiftKey related issues, though. Whatever they were using for rich text editing has been broken in different creative ways every few months, where edits wouldn't take, or what you're actually seeing would vary from what gets posted, or you can't edit the link text, or you can't GET OUT of the link text, so all the new text you type ends up part of the link, or ... It's been broken in so many ways that I've lost count. A recent bug I encountered was when I was pulling the text down to get to the top of what I'd written, I scrolled it down once too often, and that triggered a "refresh" ("drag down from the top" to refresh) which lost all of my text. Sigh.

The Quora app is really in need of a solid team experienced in app development, robust data handling, and proper UI behaviors. If it has competitive programmers working on it, they're solving the wrong problems.

[1] I used to be guilty of this, but JavaScript got better, linters now help prevent the worst legacy JavaScript issues, I changed my opinion in large part, and I use TypeScript now anyway. :)



Yes, this link! Thanks!


> The attitude that really basic Computer Science concepts like algorithms and algorithmic complexity are irrelevant is exactly why software projects are so frequently FUBAR.

You must mean "some software projects" and not "frequently." And who's proud of ignorance? Ignorance of what? And, what's more, why should it be considered a negative if someone isn't concerned about, or is even proud of, ignorance of certain things?


Other people have cited numbers, but in my opinion, most software projects (greater than 75%) end up:

* Over budget in dollars or time or both

* Fragile

* Insecure (sometimes profoundly so)

* Having major UI issues

* Missing major obvious features

* Hard to extend

* Full of bugs, both subtle and obvious

Pick any five of the above at least. Keep in mind that most software projects are internal to large companies, or are the software behind APIs, or are otherwise niche products, though there are certainly plenty of mobile apps that hit 4-7 of those points.

Who is proud of ignorance? Original author:

> There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

He proudly states that he will never be interested in learning these topics. These topics that are profoundly fundamental to computer science.

Why should it be considered harmful? Because anyone who is writing software should always be learning, and should never dismiss, out of hand, interest in learning core computer science concepts. They should be actively seeking such knowledge if they don't have it already. It's totally Dunning-Kruger to think that you don't need to know these things. His crack about learning "object oriented design" instead made me laugh: As if knowing OOD means that you don't need to know algorithms. To the contrary, if you don't understand the fundamentals, you can create an OOD architecture that can sink a project.

It's like the people who brag of being bad at math -- only worse, because this is the equivalent of mathematicians being proud of their lack of algebra knowledge.


So if the developers knew how to traverse a binary tree, how many of that 75% would succeed?

There is something I've mutated to my own liking called the 80/20 rule. In software, you will spend 20% of your time getting the application to 80% of its maximum potential performance. You can then spend the remaining 80% of your time gaining the extra 20%. If that is cost effective for your company, then by all means do it. If it isn't, 80% is just fine and you've cut your development costs by 4/5ths.

For me, being able to "rote" an algorithm on some whiteboard falls into the last 20%, maybe. Collection.Sort, whatever it uses, is good enough. Hell, you get about 78% of that 80% by using indexes properly on your RDBMS.


> So if the developers knew to traverse a binary tree, how many of the 75% succeed?

I would say it's necessary but not sufficient. There is no perfect interview strategy. But for projects that are entirely developed by people who can't traverse a binary tree, I'd say the odds of failure are very, very high.

Sure, a strong developer can hit that 80% of performance quickly (in probably less than 20% of the time), but a weak developer who doesn't know how to optimize probably won't even make 5% of "maximum potential performance."

I had to work with a tool once that had a "process" function that would take 2-3 minutes to do its work. It was used by level designers, and it was seriously impeding their workflow. The tool was developed by a game developer with something like 10 years of professional development experience (he was my manager at the time), and he thought it was as fast as it could go -- that he'd reached that maximum potential performance, with maybe a few percentage points here or there, but not worth the effort to improve. He didn't want to spend another 80% of his time trying to optimize it for a minor improvement, so he left it alone.

I looked at what it was doing, figured out how it was inefficient, and in less than an hour rewrote a couple of key parts, adding a new algorithm to speed things up. Bang, it went from minutes to 200ms. No, I'm not exaggerating. Yes, I had a CS background, and no, the experienced developer didn't.
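As a hypothetical illustration of the kind of rewrite that produces speedups like that (not the actual tool code, which I can't reproduce here; names and structure are invented), consider replacing a linear scan buried inside a loop with a hash-based lookup:

```python
def dedupe_quadratic(items):
    """O(n^2): the list membership test scans the whole list each time."""
    seen, out = [], []
    for item in items:
        if item not in seen:     # O(n) scan per element
            seen.append(item)
            out.append(item)
    return out

def dedupe_linear(items):
    """O(n): the set membership test is O(1) on average."""
    seen, out = set(), []
    for item in items:
        if item not in seen:     # O(1) average per element
            seen.add(item)
            out.append(item)
    return out
```

At n in the hundreds of thousands, the first version takes minutes and the second takes milliseconds -- the same order of improvement as the minutes-to-200ms fix described above.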

If you end up with accidental n^3 algorithms baked into your architecture, 5% performance in development can be 0.1% performance in the real world, or worse as your N gets large enough. And yes, that's even if you index your data correctly in your database.

And that's when your site falls down the moment you have any load, and you end up trying to patch things without understanding what you're doing wrong. In my example above I improved the speed by nearly a factor of a thousand. In an hour. That can easily mean the difference between holding up under load and falling over entirely, or the difference between being profitable (running 4 servers that can handle all of your traffic) and hemorrhaging money (running 4000 servers and the infrastructure to scale dynamically).
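To make "accidental n^3" concrete, here is a hypothetical Python sketch (invented names, not from any real codebase) of three individually innocent-looking linear scans composing into a cubic report:

```python
def orders_for(customer, orders):              # O(n) scan
    return [o for o in orders if o["cust"] == customer]

def total_for(order, lines):                   # O(n) scan
    return sum(l["amt"] for l in lines if l["order"] == order["id"])

def report(customers, orders, lines):          # O(n) x O(n) x O(n) = O(n^3)
    return {c: sum(total_for(o, lines) for o in orders_for(c, orders))
            for c in customers}
```

Pre-indexing orders and lines into dicts up front turns each lookup into O(1) and the whole report into O(n); the point is that nothing here looks expensive in isolation, which is exactly how cubic behavior gets baked into an architecture.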

Which is why you need a strong developer to run the project to begin with. Maybe you're that strong developer; I'm not really speaking to you in particular. But I know a lot of developers who just don't have the right background to be put in charge of any projects and expect that they'll succeed.


You must mean "some software projects" and not "frequently."

In the 2000s and prior, it was common knowledge that the majority of software projects failed. Even if they succeeded on paper and shipped, they weren't actually used. By some estimates, it was something like 75% of software projects.

If you look at the contents of "ecosystems" like Steam and the various app stores, you'll see much the same. Most of the software out there is a de facto failure, and much of it is due to the ignorance of the programmers resulting in substandard programs.


Do you have a citation supporting the claim that those failures were due to poor performance and not far more significant problems, like failing to correctly model the actual business problem or handle changes? That era was dominated by waterfall development, which is notoriously prone to failure due to the slow feedback loop.

This is highly relevant because one not uncommon problem with highly-tuned algorithms is the greater cost of writing that faster code and then having to change it when you realize the design needs to be different, especially if pride + sunk cost leads to trying to duct tape the desired feature on top of the wrong foundation for a while until it's obvious that the entire design is flawed. That failure mode is especially common in large waterfall projects where nobody wants to deal with the overhead of another round.


Very large numbers of shovelware apps in the iPhone App Store had problems with crashing.

This is highly relevant because one not uncommon problem with highly-tuned algorithms is the greater cost of writing that faster code and then having to change it when you realize the design needs to be different

Can you give me a specific example of this? I'd say, give me a specific example, and most likely, I'll give you a reason why the software architecture of that example is stupid.


> Do you have a citation supporting the claim that those failures were due to poor performance and not far more significant problems like failing to correctly model the actual business problem or handle changes?

"Poor performance" isn't the only negative result of using an insufficiently experienced developer, or a developer who doesn't have a full grounding in algorithms and data structures.

Someone without a full CS background (and without the ability to remember much of that background) is likely to know a few patterns and apply them all like a hammer to a screw. This leads to profoundly terrible designs, not only when you take performance into account, but finding the best model for the actual business problem, handling changes, permuting data in necessary ways, and other issues.

> That era was dominated by waterfall development which is notoriously prone to failure due to the slow feedback loop.

Extreme programming was the first formalized "agile" approach, and the very first agile project to utilize it was a failure. [1] A big problem was performance, in fact:

> "The plan was to roll out the system to different payroll 'populations' in stages, but C3 never managed to make another release despite two more years' development. The C3 system only paid 10,000 people. Performance was something of a problem; during development it looked like it would take 1000 hours to run the payroll, but profiling activities reduced this to around 40 hours; another month's effort reduced this to 18 hours and by the time the system was launched the figure was 12 hours. During the first year of production the performance was improved to 9 hours."

Nine hours to run payroll for 10,000 people. We're not talking about computers in the '70s with magnetic tapes. This was 1999 on minicomputers and/or mainframes. If that wasn't a key algorithmic and/or architectural problem, then I would be amazed.

When you're designing a system using agile, it often ends up with an ad hoc architecture. Anything complicated really needs BOTH agile and waterfall approaches to succeed. You need to have a good sense of the architecture and data flow to begin with, and you need to be able to change individual approaches or even the architecture in an agile manner as you come across new requirements that you didn't know up front.

> This is highly relevant because one not uncommon problem with highly-tuned algorithms is the greater cost of writing that faster code and then having to change it when you realize the design needs to be different

I'm going to say [citation needed] for this claim.

I gave an example in another thread of having improved the speed of a game development level compiler tool by a factor of about 1000, with no major architectural changes, and it took me about an hour.

At the same time I performed a minor refactor that made the code easier to read.

Bad design == Bad design. That's it. Good design can include an optimized algorithm. A good design tends to be easy to extend or modify.

Very rarely it makes sense to highly hand-tune an inner loop. The core LuaJIT interpreter is written in hand-tuned assembly for x86, x64, ARM, and maybe other targets. It's about 3x faster than the C interpreter in PUC Lua, without any JIT acceleration. That is a place where it makes sense to hand-tune.

> pride + sunk cost leads to trying to duct tape the desired feature on top of the wrong foundation

That's another orthogonal error. It makes me sad sometimes to throw away code that's no longer useful, but I'll ditch a thousand lines of code if it makes sense in a project.

Every line of code I delete is a line of code I no longer need to maintain. I'm confident enough in writing new code that I don't feel any worry about deleting old code and writing new. This is as it should be. Sad for the wasted effort, yes. But I know that the new code will be better.

[1] https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...


The C3 system only paid 10,000 people. Performance was something of a problem; during development it looked like it would take 1000 hours to run the payroll, but profiling activities reduced this to around 40 hours; another month's effort reduced this to 18 hours and by the time the system was launched the figure was 12 hours. During the first year of production the performance was improved to 9 hours.

And I happen to know for a fact that naive string concatenation was a big part of the performance problem! (Bringing it back to my original comment.)
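For anyone unfamiliar with the pattern, a minimal illustration in Python (just the shape of the problem, not the original code):

```python
def build_naive(parts):
    s = ""
    for p in parts:
        s = s + p          # each + copies the whole accumulated string: O(n^2) total
    return s

def build_joined(parts):
    return "".join(parts)  # single allocation and one pass: O(n) total
```

(CPython nowadays special-cases in-place concatenation on a uniquely referenced string, so the naive version isn't always quadratic there, but in most languages and runtimes the general lesson holds.)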


You really think that the majority of failures on Steam and app stores are due to lack of basic algorithm performance knowledge?

A project can die a thousand deaths before performance becomes its death knell.


The large numbers of iPhone apps that crashed constantly before ARC made memory management easier are good examples of bad programming killing a project.


In every single company I've been at for the last 15 years, I haven't had any algorithmic problems to solve; most programming jobs are just about making software as quickly as possible and fixing performance issues only when they appear.

I really don't think that every programmer in Amazon or Google is faced with algorithm problems everyday, that's probably localized to some projects that deal with those, not the majority of programmer population.

Expecting every programmer to know how to solve a given problem on a whiteboard is like expecting every lifeguard to know how to do a water escape (because it involves swimming AFTER you solve the problem of tied hands).


I agree. Most companies SAY they want algorithm experts, API ninjas, and computer science wizards, because it sounds like the right kind of thing to say. But most of the actual work out there is either 1. plumbing some data from one layer in the stack to the other (typical CRUD app) or 2. writing glue code to integrate one vendor's middleware to another vendor's middleware, or 3. fixing bugs in some 10 year old legacy application, or if you're lucky, 4. getting an application to work barely well enough in order to ship it on time. An in-depth knowledge of traversing binary trees is not required for any of this stuff.


But most of the actual work out there is either 1. plumbing some data from one layer in the stack to the other (typical CRUD app) or 2. writing glue code to integrate one vendor's middleware to another vendor's middleware, or 3. fixing bugs in some 10 year old legacy application, or if you're lucky, 4. getting an application to work barely well enough in order to ship it on time.

And most of a lifeguard's job is sitting around blowing a whistle at troublemakers. Does that mean you'd be fine with a lifeguard who only has those two actions in his skillset? Also, as someone who has held a number of jobs that involve your 1 through 4 above: knowing basic algorithms is beneficial to the job. You don't need that knowledge often, but when you do, it has outsized benefits. Also, you wouldn't know that, unless you had that knowledge while doing that job. If you didn't have that knowledge, you wouldn't know how it would have benefited you.

The fact that "in-depth knowledge" is applied to "traversing binary trees" makes me sigh and feel uneasy.


I'm not sure the lifeguard analogy is apt. There are plenty of programming jobs where algorithm knowledge is helpful but not strictly necessary. You and I might not find ourselves in them but they exist.

It's more about testing the range of skills that will actually be applied. To toss in another incomplete analogy: If you're interviewing someone to build you a deck in your backyard don't spend 90% of the interview time asking him how he'd chop down trees for wood and manufacture the nails. You probably also don't need to ask him his opinion on the local building height restrictions or if he knows how to install an attic fan. You want to know that he can build a deck.


I'm not sure the lifeguard analogy is apt. There are plenty of programming jobs where algorithm knowledge is helpful but not strictly necessary.

I think a lot of the above analogy mismatch and "not strictly necessary"-ness is because an application slowdown is much less severe than a drowning death. But if you account for that difference, it is very apt.

If you're interviewing someone to build you a deck in your backyard don't spend 90% of the interview time asking him how he'd chop down trees for wood and manufacture the nails. You probably also don't need to ask him his opinion on the local building height restrictions or if he knows how to install an attic fan. You want to know that he can build a deck.

But "how he'd chop down trees for wood and manufacture the nails" isn't apt at all! As a programmer, understanding algorithms is much more like 1st principles knowledge behind understanding building materials. Your contractor needs to know the structural properties of different kinds of wood, the properties of the soil in your backyard, and some basic chemistry/engineering/physics. Otherwise, she might put the wrong materials together and invite galvanic corrosion, or use the wrong materials in the wrong context and invite premature wood rot. You'd want a carpenter to understand wood harvesting, milling, and treatment, and how each affects the properties of wood. Could you get by most of the time, just knowing how to saw boards and drive nails with power equipment? Sure. But someone with the right knowledge is going to avoid the unlikely but very costly pitfall and save her customer thousands of dollars.

You want to know that he can build a deck.

On time and under budget. She's more likely to do that if she's armed beforehand with certain experience and 1st principles knowledge. All things being equal, the contractors who built decks on time and under budget had good trades instruction or otherwise had the right experience and/or 1st principles knowledge. Also, if you look at the longevity and long term structural integrity of deck projects, you will likely find such a correlation.

(Also, it would be a really good idea if your contractor had a knowledge of local building regulations concerning the deck.)


I haven't had any algorithmic problems to solve; most programming jobs are just about making software as quickly as possible and fixing performance issues only when they appear.

I sense a contradiction between the 1st clause and the last clause.

Expecting every programmer to know how to solve a given problem on a whiteboard is like expecting every lifeguard to know how to do a water escape

Expecting every programmer to know the basics of recursion and about 10 basic algorithms and the reasons behind their time/space complexity analyses is like expecting every lifeguard to know what a drowning person looks like (nothing like what they show in movies and TV shows), to be able to tread water without using their arms, and to know how to tow a panicked, struggling swimmer without getting kicked in the stomach.

A programmer who can't explain the three bits of knowledge in my comment above is like a lifeguard who simply knows how to swim and doesn't know what a drowning person looks like. Over 90% of the time, all you have to do is sit in the tower chair and blow your whistle at troublemakers, but in the occasional instance, you will have to know something special, with a higher cost than usual if you don't know.


I like the analogy of jet fighter pilots vs. airline pilots. Most programmers are metaphorically airline pilots, doing a routine job that is more about following the processes than elite skills, but some people wish that they were doing something more complex than they are.


And yet, there are still occasions where the airline pilot needs to draw on deeper skills. We know this from many deaths averted by good pilots, and needless deaths resulting from poorly trained pilots.

The failed landing of Asiana Airlines at SF and the Air France Airbus that dropped into the ocean are two good examples.

Most programmers are metaphorically airline pilots, doing a routine job that is more about following the processes than elite skills, but some people wish that they were doing something more complex than they are.

Programmers who think they can only get by with the "routine" skill level are like airline pilots who think it's just fine for them to do the checklists, run the autopilot, and know nothing else. Who would you rather have piloting your plane?



> Nearly every programmer nowadays knows that naive string concatenation is inefficient, and so they should use a stream or something like that.

You and I live in different worlds, or have vastly different definitions of 'programmer.'


> I'd rather hire someone who knows exactly why it's O(n^2)

"How do we satisfy unlimited wants with limited resources?" - economics to the rescue again.

Everyone would love to hire Linus Torvalds for that matter, but there are plenty of programming jobs where you don't really need to, or don't want to, or can't spend that kind of money. There are plenty of useful, productive positions for people who aren't up on their O notation. The important thing is to be clear what kind of person you're looking for.


Everyone would love to hire Linus Torvalds for that matter

Torvalds is very lucky to have found his niche because he is unemployable - the first sweary rant at a teammate and he'd be shown the door. Well except maybe at Uber.


I suspect you've got the cause and effect backwards. More likely, because he found that niche where he does not have to learn to moderate, he does not.


That's like the old advice that you should always watch how your date treats the waiter...


I would not hire Linus Torvalds. For all his technical capability (which I've followed as a linux and git user for some 25 years now) many of his externally visible statements are not acceptable for any modern workplace (this has been discussed a number of times, I know people have differing opinions, but if Torvalds worked at a modern corporation with a real HR department, his words would get him in trouble.)


I strongly suspect that Linus Torvalds does not care at all about "real HR departments."


> Nearly every programmer nowadays knows that naive string concatenation is inefficient

Now that's what I call naive! Or maybe hopeful. Just saying you're giving an awful lot of credit here...


Just saying you're giving an awful lot of credit here...

I did spend a while as an expensive "consultant" where often I was hailed as a genius just because I applied that tidbit of knowledge and sped up the application by some large fraction. However, I've also noted that there are some mainstream programming communities where people know this about string concatenation and spread the knowledge around.


There are vast amounts of basic information in the world that provides opportunities. In turn, knowing that information provides the opportunity to learn new information that is now "basic" as a result of learning the prior information.

To believe any concept is so basic that everyone should know it seems unlikely, since if everyone knows it, the odds of being able to find someone who knows it are high and the odds of gaining value from it are small.


It is a basic question about one very narrow skill set that is relevant for some, but not all, programming jobs. It is trivial only for people who had to slog through only-sort-of-relevant CS degrees, who may or may not be the most productive programmers. I could just as easily say "we should be asking every programmer to handle a six-box grid layout" on a white board. It is just as trivial, it is relevant to the same percentage of programmers, but if that were the intro criteria I would probably have a problem hiring people to do the algorithms work.


It is a basic question about one very narrow skill set that is relevant for some, but not all, programming jobs.

I could just as easily say "we should be asking every programmer to handle a six-box grid layout" on a white board.

It's a skill set that's relevant to almost all programming jobs. Quick: give me an example of how time/space complexity can be relevant to layout manager code. (Can't do it? Dunning-Kruger just reared its ugly head again.)

You don't necessarily need to write your own layout manager. But to properly shop for one for some demanding use cases, you may well benefit from just cocktail-party level first principles knowledge, so you can pick the right library. Or, do you just have faith that Apple or [Big Company] knows what it's doing, and assume that everything can handle everything you throw at it. That works over 90% of the time. It's the exceptional case where you need the 1st principles, and such cases usually come with outsized penalties if you don't know.
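One hypothetical answer to that quiz, sketched in Python (an invented example, not any real toolkit's behavior): some layout containers measure each child twice, once to negotiate sizes and once to finalize them. Nest such containers and the measure count grows exponentially with depth:

```python
def measure_calls(depth):
    """Count measure() invocations for containers nested `depth` deep,
    where each container measures its child subtree twice."""
    if depth == 0:
        return 1                         # a leaf measures itself once
    return 1 + 2 * measure_calls(depth - 1)
```

measure_calls(d) works out to 2^(d+1) - 1, which is why deep nesting of double-measuring containers can make a UI crawl even though every individual layout pass looks cheap.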


> Can't do it? Dunning-Kruger just reared its ugly head again.

@stcredzero, I've seen your name pop up time and again on this thread. I don't think you realize how rude you're being.

You clearly think that Data Structures 101 is important knowledge, to the point of saying that most software project failures are caused by lack of that knowledge. Others in this thread take a more moderate position.

Can you accept that this is a difference of opinion, not a sign of incompetence, and stop being so rude about it?


I don't think you realize how rude you're being.

I am being what I need to be. I find it remarkable that some people are receiving this information and prioritizing protecting their ego, rather than investigating what motivates such sentiments. Perhaps that difference should be exploited in interviews?

You clearly think that Data Structures 101 is important knowledge, to the point of saying that most software project failures are caused by lack of that knowledge. Others in this thread take a more moderate position.

Let me clarify: I think that Data Structures 101 is important knowledge, to the point of saying that a significant number of software project failures are caused by lack of that knowledge.

Others in this thread take a more moderate position.

They either don't have the knowledge and don't appreciate what they don't know. Classic Dunning-Kruger. Or, they have the knowledge but haven't been in the right kind of projects where someone not knowing really bit them hard. ("It never happened to me, so it must not be that big a deal.")

Find me someone who has the knowledge and who has been "bit hard." You can find them in these threads. Read what they say carefully. It's much more than a matter of opinionated debate.


> why adding to the end of an array that doubles when it expands is O(n) amortized

Not to be overly nit-picky here, and technically O(n) is correct as well (it's also the un-amortized worst case of insertion), but you probably meant to say that the amortized time is O(1), or θ(1) to be even more precise :)


to be even more precise :)

Or, you should demonstrate the self awareness to realize that you're just playing with the ambiguity of English in a hastily written comment. Exercise: in what precise context would time/space complexity be O(n), and in what precise context would it be O(1)? (Though the emoticon is probably an indicator that you already knew all this.)


I'm well aware how to analyze a dynamically growing array (each insert "pays" for two moves in the future), but I'm not sure what you're getting at. The amortized complexity of appending an element to the end is θ(1), worst-case non-amortized θ(n) when the growing needs to happen.
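Concretely, the amortized argument can be checked by simulation (Python, purely illustrative):

```python
def growth_copies(n):
    """Simulate a doubling array: count element copies caused by
    capacity growth over n appends."""
    cap, size, copies = 1, 0, 0
    for _ in range(n):
        if size == cap:        # full: double capacity, copy every element
            copies += size
            cap *= 2
        size += 1
    return copies
```

growth_copies(n) stays below 2n for every n, so the total work for n appends is O(n) and the amortized cost per append is θ(1), even though the worst single append costs θ(n).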


Then do that N times. Then re-read the text and think of the context being discussed.


Actually, there are lots of skills and competencies that are valuable on engineering teams other than the ability to have all basic algorithms on call in your head at all times. If you filter the engineers you are willing to hire based on any one factor (unless you are looking for that specific specialty), then you are missing out on a variety of perspectives and experience that could make a better team. You don't need everyone on your team to be an algorithm expert. If you don't recognize that there are a huge variety of other skills necessary to successful software projects, then you are the one experiencing the Dunning-Kruger effect, not the op.


Point of clarification: I don't want everyone to be an algorithm expert. I do want every programmer to have the basic background knowledge. If you are passingly familiar with binary trees and recursion, then you can figure out the answer from First Principles. If you are not familiar with the skills at that level, and don't have the basic skills you're supposed to get by studying such things, then perhaps one would characterize that as merely memorizing.

Actually, there are lots of skills and competencies that are valuable on engineering teams other than the ability to have all basic algorithms on call in your head at all times.

Where did I ever say such a preposterous thing? Please provide a quote.

And again, if you understand algorithms at a basic level from First Principles, then you don't have to memorize every algorithm like baseball card trivia. The important thing is having the background knowledge. If you don't think such knowledge is foundational background, then you probably don't know enough to know that. Hence: Dunning-Kruger.


Optimal algorithms are not the most important thing a software developer should know. It depends entirely on the type of job. In most cases, experience with "design patterns" is more important.


I'd rather hire someone who can actually get the job done in a timely manner. We can optimize it later, especially after profiling to know exactly what we need to optimize, and especially if there are security implications that mean we can't ruthlessly optimize everything. People who optimize too soon end up with solutions that are hard to change later, or solutions that don't even do the job they're supposed to. It's all too easy to get a half-working version that's super fast by focusing on the speed part more than the working part. https://alibgoogle.wordpress.com/2011/02/18/scheme-vs-c/ is worth a read. "Real efficiency comes from elegant solutions, not optimized programs. Optimization is always just a few correctness-preserving transformations away."

There are some "free" optimizations I'd expect a lot of seasoned programmers to be familiar with, like as you mention the string concatenation issue, but those are easy to catch during code review. Similarly for certain optimizations that are even 'freer' in the sense that modern compilers will take the easier-to-read (if naively slower) version and produce the fast version, I expect to not have to tell people during code review that the 'slower' version isn't slow because the compiler does the right thing.


Often enough, implementing the optimal solution is not really more time-consuming than implementing the brute-force solution. Often it's not about micro-optimizations, but about the difference between O(n^3) and O(n log n).
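A small Python illustration of that point: the better-complexity version takes no longer to write than the brute-force one.

```python
def has_duplicate_quadratic(xs):      # O(n^2): compare every pair
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_duplicate_nlogn(xs):          # O(n log n): sort, check neighbors
    ys = sorted(xs)
    return any(a == b for a, b in zip(ys, ys[1:]))
```

Both are three lines; choosing the second is a matter of knowing it exists, not of extra effort.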


I guess I think of programming in a big company as a team exercise, so the hiring pipeline can take advantage of specialization and good-enough economics, as well as teaching people more on the job. It's a large burden to demand everyone on the team know the same first principles (and of course there may be disagreement on what those are).

Even if I personally want to know as much as I can, and get along with others who do too, I'm not really going to hold it against a team member if they produce a poor-complexity algorithm that I (or someone else -- I think it is good to have at least one person on the team with algorithm knowledge) notice in review and can suggest an easy alternative to. If they do it again, for the exact same thing, I might raise an eyebrow and remind them of the repeat mistake. If they do it a third time I might write them off as incapable of learning new things.

But this pattern applies to other aspects of software too. Getting people to write testable code can itself be a hassle if they haven't been exposed to that before. Or something as dead simple as getting people to use the rest of the team's coding style guidelines: I've seen interns who still had problems with that near the end of a summer internship, and I wouldn't be very happy if it was a senior colleague...


The problem is when the goal is to get something out of the door as soon as possible. In such cases developers tend to settle for the first thing that comes to their mind, which is rarely an elegant solution.

These require experience and time spent carefully designing the system. A profiler shows you what is slow in your current implementation, not how to convert it to a more elegant one.


I think you got it right. The fundamental issue seems to be that the recruiter flattered the candidate and made him/her think that they had genuinely looked through their profile and determined that they would be a good candidate. Most recruiters will not do this, simply because their priorities are not aligned that way. I think the OP revealed a very important mechanism: recruiters' job is just to get bodies in the system, which needs more people continuously for replacement and expansion.


Whenever I hear a programmer talk badly about algorithms it leaves a poor impression with me. It means they have not taken the time to do basic research about their field and shows their aversion to researching new topics.

If you look at the current AI, machine learning, and deep learning fields it's an absolute must that you have some experience with algorithms.


I have done professional programming for decades. I have never written code to traverse a binary tree. I almost certainly never will.

If I ever have to write it, it's a pretty simple thing to look up. This piece of algorithm trivia has no relevance for software engineering jobs in 2017.

There is of course one reason to learn it: It keeps coming up in interviews.
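For reference, the thing being looked up really is short. An in-order traversal in Python (one of several standard ways to write it):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node):
    """Visit left subtree, then the node, then right subtree.
    For a binary search tree, this yields values in sorted order."""
    if node is None:
        return []
    return inorder(node.left) + [node.val] + inorder(node.right)
```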


Could you explain what you mean by stream?


>Nearly every programmer nowadays knows that naive string concatenation is inefficient, and so they should use a stream or something like that. I'd rather hire someone who knows exactly why it's O(n^2)

I'd love to hire a programmer that knows this is actually O(n^3), and not O(n^2).

(Well, it is if you are naively concatenating N strings, each of length N characters).

(Had a manager who insisted it was exponential. Wisely did not argue with him).

(I await attempts to recruit me based on this comment).
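To spell out where n^3 comes from under that reading, a Python counting aid (assuming every concatenation copies the whole accumulated string, with no runtime special-casing):

```python
def char_copies(n):
    """Character copies when naively concatenating n strings of length n."""
    total, acc_len = 0, 0
    for _ in range(n):
        acc_len += n        # accumulated length after step i is i*n
        total += acc_len    # the concatenation copies the whole result
    return total
```

char_copies(n) is n * n(n+1)/2, i.e. Θ(n^3). With n strings of constant length instead, the same loop gives Θ(n^2), which is how both answers can be defended depending on what "n" counts.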


I'd love to hire a programmer that knows this is actually O(n^3), and not O(n^2).

Hint: Don't ever work for or hire anyone who would exploit the ambiguity of the English language in a hastily written comment to gain the false appearance of geeky superiority. (Exercise: how would you interpret the situation referred to in the comment to wind up with O(n^2), and how would you interpret it to wind up with O(n^3)?)


While I wrote it without much seriousness, I think my comment does highlight a problem with the whole discussion here.

People like to boast about how important it is to know these kinds of complexities, but in reality they just use mental heuristics to come up with them (which is how we got n^2 and exp(n) from my manager).

For string concatenation, is it important they know how to do it in a linear fashion? Or is it important that they can actually calculate the complexity? Or do you want an in-between where it is OK that they do not really know the complexity, but can tell you that it is definitely super-linear?

I've seen people be fussy that candidates should be able to explain why it is n^3 (or n^2 or whatever) in a somewhat rigorous fashion. I say "somewhat" because when I then turn to them and ask them to derive the exact expression (not just in big-Oh notation), they will usually fail. Then they will jump through hoops trying to explain why it is important to know it in big-Oh notation but not all that important to be able to derive the exact formula.

(There is, of course, a camp that does not believe much in big-Oh and actually prefers to know the leading constant, etc. My example is not contrived.)

I find people will nitpick to their level of granularity, and it is always easy to come up with some justification for their level of preference. But to others, it is just nitpicking.

For me, it is important to know that this is super-linear, whereas one can do it linearly. I don't care if the candidate knows it is n^3 or n^2. That is also why I did not point out my manager's error at saying it is exp(n).


The reason I won't work at Google is because Google is incapable of hiring the engineers I want to work with. It's not a matter of whether I could pass that interview; it is whether I want to work with the code of people who can pass that interview. Whether I want to get code reviews from people who can pass that interview. Whether I want to rely on the code of people who can pass the interview not to break down in interesting and novel ways.

I have seen the worst apps written by "Very Smart People" who obviously had never built an Android application before. It doesn't matter how smart you are, the first time you do something it will suck. I have had catastrophic failures caused by premature optimization, because locking a tool into a fancy algorithm before you know where the actual bottleneck is is a recipe for disaster. I have seen so many problems caused because people couldn't take feedback or didn't ask for help, because they were so wrapped up in being The Kind Of Person Who Knows The Answers.

Frankly, passing algorithm questions is a great way to signal that I probably don't want to have to deal with your code.

Personally, I love working with people coming out of the good agencies because building 15 or 30 applications from scratch in an environment with strong mentorship and rapid feedback is, in my experience, more likely to produce a good programmer than all the smarts in the world.


> The reason I won't work at Google is because Google is incapable of hiring the engineers I want to work with

That's a very bold (even arrogant, sorry) statement about 57,000+ (googled it) employees.

So 57,000 are worse devs than you? That's what I hear from your statement; correct me if I am wrong.

Many extremely talented people want to work for Google. They will take the tests, even if they don't agree with the hiring system at all.

Maybe they want to work at Google despite its incapability (according to you) in the hiring and testing process.

> it is whether I want to work with the code of people who can pass that interview

Have you found any correlation between poor code and being able to pass Google's interview?

(edit: new lines)


I don't think he/she means that they are worse. We all want to work with developers who can help us learn and get better. From what I understand, he/she just means that Google's hiring process has no correlation with the quality of code people produce. The process is, by design, incapable of selecting people based on the skills he/she wants in coworkers, though Google sometimes ends up hiring capable engineers anyway, with no credit to its process.


57,000 employees does not equal 57,000 engineers, by the way. Google has a lot of people that do things other than code.

> So 57,000 are worse devs than you? That's what I hear from your statement; correct me if I am wrong.

The idea that you can be objectively better or not better at programming, and that it's decidable through one or two simple factors is kinda wrong and goes with OP's point.


Since Google is an advertising company that even hires their own chefs, I would not expect 100% of their employees to be software developers.

It's more on the order of 20k developers, not 57k.

https://www.quora.com/How-many-software-engineers-does-Googl...

>Have you found any correlation between poor code and being able to pass Google's interview?

cough cough Angular 2


> cough cough Angular 2

There is an enormous difference between poor code and a library you don't like. I may not be a huge fan of Angular 2, but it's certainly not poorly written code.


Back in the day I worked for a company that had, in a single division, more engineers than Google has employees :-)


Who? IBM?


BT's Systems Engineering division, effectively the heir to Tommy Flowers.


> So 57,000 are worse devs than you?

Probably not. There's plenty of better workers than me who I wouldn't want to work with.


I never said that they were bad coders, or that I wouldn't want to work with some Google employees. I said they are incapable of hiring many developers I do proactively want to work with, because of their hiring process. It's who they exclude that is the problem; not who they hire.

I am also specifically not passing some objective value judgments here. They may very well be hiring the best developers for what they are doing: they just aren't capable of hiring the type of developers I most enjoy working with. This is a subjective preference, not an objective one.

There are absolutely talented developers who don't value working with the kinds of developers I want to work with and they won't care that Google can't hire them. I have enjoyed working with Google developers in other contexts, but many of them wasted a bunch of time and energy jumping through Google hoops because they decided it was worth it. It is my opinion that they would be better developers if they had spent that time on something useful instead.

Google makes it easy for people I don't care about working with to get hired, and hard for people I proactively want to work with to get hired. Nothing about that is about my skill at all.


>So 57,000 are worse devs than you? That's what I hear from your statement; correct me if I am wrong.

He never said they are worse.

>Many extremely talented people want to work for Google.

"Most people do/think XYZ."

Do I have to explain why that's not a good argument?

>They will take the tests, even if they don't agree with the hiring system at all.

Other people don't mind shitty treatment, so what?


> He never said they are worse

This is why I asked for clarification. The OP said 'Google is incapable of hiring the engineers I want to work with', which implies that the engineers he is looking to work with are NOT there: Google is incapable of hiring them, and since they are top-quality devs, someone else (more capable) did.

> Do I have to explain why that's not a good argument?

Yes, please explain, because you only took the first sentence of my argument. The second sentence was that those "many people" are going to do what it takes, even if it means taking 'stupid' tests, smug interviewers, and other large-corp bureaucracy (which to me personally is more concerning, but is not specific to Google; it applies to any large corp). And why? My guess is that it's probably so cool, curious, and challenging to work on something like Google Maps/Balloons/self-driving cars that they will be willing to make a lot of sacrifices and tradeoffs. THAT was my argument.

> Other people don't mind shitty treatment, so what?

Please explain; I didn't understand your point.

(edit: spacing)


>Yes, please explain, because you only took the first sentence of my argument. The second sentence was that those "many people" are going to do what it takes, even if it means taking 'stupid' tests, smug interviewers, and other large-corp bureaucracy (which to me personally is more concerning, but is not specific to Google; it applies to any large corp). And why? My guess is that it's probably so cool, curious, and challenging to work on something like Google Maps/Balloons/self-driving cars that they will be willing to make a lot of sacrifices and tradeoffs. THAT was my argument.

"Many people want to go to Heaven, and they are going to do what it takes, even if it means answering stupid questions, pretending to feel bad about trivial things, and some other large-org bureaucracy."

My guess is that it's probably so cool, curious, satisfying to be in Heaven that they will be willing to make a lot of sacrifices and trade-offs.

"Many people voted for Trump"; my guess is that it's probably so cool, curious, and challenging to finally tackle real issues that they will be willing to make a lot of sacrifices and trade-offs.

That's stupid, right? But why?

>My guess is that it's probably so cool, curious, and challenging to work on something like Google Maps/Balloons/self-driving cars that they will be willing to make a lot of sacrifices and tradeoffs.

Those people might believe that it's cool, curious, whatever. But then you should end up with "many people think X about Y", not "Y has a property X (because many people think it does)".

>This is why I asked for clarification. The OP said 'Google is incapable of hiring the engineers I want to work with', which implies that the engineers he is looking to work with are NOT there: Google is incapable of hiring them, and since they are top-quality devs, someone else (more capable) did.

He doesn't want to work with them because they are unpleasant to work with (or be around in general), not because there is an issue with their programming capabilities.


> and since they are top quality devs, someone else (more capable) did.

That's an interesting assumption to make. I know many programmers who are excellent at what they do and have outright refused offers from Google, Amazon, and Facebook; it's usually on the principle that such large, bureaucratic organizations are quite defective, insofar as they are adept at suffocating those who are happiest with maximal flexibility and responsibility.

There's a group who jump from contract to contract, startup to startup, because the thrill of new challenges and the near absence of organizational barriers is worth the uncertainty in compensation.

It also helps that a few aren't willing to relocate to the USA.


Just as a matter of correctness: only about a third of Google employees are engineers/developers.


How many Google Engineers do you know?

I know quite a few of them. All of them, without exception, are incredibly brilliant engineers. You might think this is just anecdotal evidence, that's correct. But also think that Google Engineers are responsible for many top quality, world class software projects. How could that be possible without really good engineers?

Yes the interview process is tedious and can be frustrating. But negating the fact that they have some of the best software engineers in the world is just silly.


Q: How many Google engineers does it take to screw in a lightbulb?

A: Build great content.

https://twitter.com/dr_pete/status/623981710003187712


I don't get it.


It's the standard Google answer to every question.


> It's not a matter of whether I could pass that interview

Rrright.

> Personally, I love working with people coming out of the good agencies

What does that mean? What agencies? What makes these people good to work with?

I'm seriously flabbergasted by most of the points you make.


What is it with developers and their code-centric view of employment? Any idiot can "write code" in 2017. It's not particularly difficult. You might do it slightly better, but your profession is still not, by a long shot, "difficult".

Can you work in a team? Can you communicate? Do you dress in a presentable manner? Can you explain technical ideas simply? Do you know anything beyond cutting code?

It's simply not good enough to be some lone-ranger ubercoder anymore. Coders are a dime a dozen.

I'd actually rather take someone with a degree in statistics or physics, let them learn to code on the job, and be able to apply them to an order of magnitude more problems than your typical mathematically illiterate, self-styled genius who can write a bit of code.

Programming is the easy part, stop deluding yourself into thinking what you do is difficult. It isn't.


> Frankly, passing algorithm questions is a great way to signal that I probably don't want to have to deal with your code.

Wow that's some serious sour grapes.


What are these "agencies" you speak of, and how do you tell the good ones from the bad?

(This is a serious, non-ironic question: I don't know what you mean, but it sounds interesting.)

Edit: I guess you mean recruitment agencies? How does one find the "good ones," other than through bitter experience and/or luck?


He means dev agencies that contract for projects. Agencies act as general contractors, augment your staffing, or provide consulting services. They build whole or partial products, often for things you use on a daily basis. For example, there's an agency that has relationships with many Y Combinator companies and builds most of their mobile and web apps.


Thanks.

Any idea how one finds the "good" agencies? Is it a networking thing?


You hire agencies the same way you hire people.


But Google's code quality is quite high...


I don't know about their code quality, but we consume some of their data via APIs, and they periodically change their non-nullable data types to nullable, seemingly at random. Fun times hotfixing those in prod.


When data comes in over the wire, you probably shouldn't be making assumptions about what's nullable. You should write a parser that fails gracefully if it receives unexpected input.

It's the robustness principle.

https://en.wikipedia.org/wiki/Robustness_principle


> When data comes in over the wire, you probably shouldn't be making assumptions about what's nullable.

I have to work with their APIs often (as well as Amazon, Facebook and a couple other big players, they're all equally awful).

Their documentation will tell you a type is an integer and will never be null. They will do this for 3 months, and then change it to send a string that can be nullable or empty string, without telling you or updating API versions. It goes beyond robustness. It's the "BigCorp will actively lie to you in their documentation" principle.


You're missing the point. Sure, you should make it fail gracefully on unexpected input. But Google shouldn't change APIs in a way that causes third-party apps to fail at all.


Third party apps wouldn't fail if they were correct in the first place. Failing gracefully on unexpected input is the way a competent engineer would structure their program.


Is there a functional difference between "my production code suddenly stopped processing work because it rejected a newly nullable input value" vs "my production code suddenly stopped processing work because it crashed on a newly nullable input value"?


If the crash gives an attack vector for a security threat then there is.

Personally, I think "Be conservative in what you do, be liberal in what you accept from others" is a bad policy in the modern world. It should be "Be conservative in what you do, and be strict, as if you don't trust them, in what you accept from others."


That's a valid point if it's security-critical. My initial response was with regard to grandparent comments arguing over semantics, when the end result in both cases was "successful processing ceased because of an API change."


'Liberal in what you accept from others' in this case would mean being prepared to handle nullable JSON fields.
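A minimal sketch of what that could look like (Python; the field name `item_count` and the fallback-to-default behavior are invented for illustration, not taken from any real API):

```python
import json


def parse_item_count(payload: str, default: int = 0) -> int:
    """Read a documented 'non-null integer' field defensively.

    Tolerates the field arriving as null, an empty string, or a
    stringified number, and falls back to a default on anything else.
    """
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return default  # not even valid JSON: fail gracefully
    value = data.get("item_count")
    if value is None or value == "":
        return default  # the "never null" field arrived null/empty anyway
    try:
        return int(value)  # accept 42 as well as "42"
    except (TypeError, ValueError):
        return default


print(parse_item_count('{"item_count": 42}'))    # 42
print(parse_item_count('{"item_count": null}'))  # 0
print(parse_item_count('{"item_count": "7"}'))   # 7
```

Whether to default, skip the record, or raise an alert is a policy decision; the point is that the decision gets made deliberately rather than by a crash.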


Ideally it's more like "my production code skipped over 1% of batch jobs because it didn't expect the null values" or something like that. Handling errors gracefully is a bit of an art unto itself.


Indeed it is an art.


Production code that rejects a suddenly nullable input value is more deterministic than "something crashed" and can be handled better (triggering a watchdog or a notification) rather than relying on someone paying attention to the crash logs.


"Failing gracefully" is still failing. I don't want my app to fail, gracefully or otherwise.


Funny you point this out. Google's JSON Style guidelines[1] say this:

>If a property is optional or has an empty or null value, consider dropping the property from the JSON, unless there's a strong semantic reason for its existence.

[1]: https://google.github.io/styleguide/jsoncstyleguide.xml?show...


That's a different issue. The commenter is referring to keys that can take on null.

This style guide entry suggests a course of action in the event that one of these keys currently has a null value.

The style guide isn't prohibiting nullable keys.


Yes, some of their APIs and tools are a bit shoddy, e.g. unable to parse an XML sitemap to spec, and that spec was only a 3-page RFC!


Have you ever worked with the Android SDK?


In my opinion, Android's code organization is a result of questionable architectural constraints applied early in the development of the platform.


Isn't that exactly the point roguecoder made?

> I have had catastrophic failures caused by premature optimization, because locking a tool into a fancy algorithm before you know where the actual bottleneck is is a recipe for disaster.

EDIT: Or, are you saying that it was from the decisions made before Google acquired Android?


I think that highly depends on what you value in code.


That's a bit biased.


I certainly wouldn't want to work with you!


amen!


It's the London Cabbie (1) method. They're not looking to fill any particular role; they're just looking for smart people (for whatever value of "smart" fits their biases).

They just need as many warm bodies as possible to ram through their test so that a few trickle out the bottom of the funnel to keep the ranks from shrinking. If too many started getting hired, they'd add competitive basket weaving to the skillset if that's what it took to balance it.

(1) http://news.nationalgeographic.com/news/special-features/201...


> They're not looking to fill any particular role. They're just looking for smart people

I've found this to be decidedly not true, from many recruiter contacts. They have a role in mind they're trying to fill and if you point out that it's far more junior than what you're looking for, the conversation's over.

Think about how most companies hire. The hiring manager fights internally and finally gets a req for a very specific role. They have to fill that role. Having a generally smart person would be great, but they have this immediate need and can only hire one person. And once that req is filled, no more hiring until the next req.

I'd love to see a company literally just looking to snap up smart people and then have them come in and kind of define their own role, one where they can add the most value. Nobody does this!


> I'd love to see a company literally just looking to snap up smart people and then have them come in and kind of define their own role, one where they can add the most value. Nobody does this!

I don't know about letting people "define their own role", but a lot of the biggest companies are constantly hiring without a specific role in mind. If you've got thousands of employees then your best strategy is just hire the smartest people you can find, and figure out how to use them effectively - you have enough roles available that you'll find something for anyone to do, and while not every role you need filled is someone's dream job, a combination of internal mobility between teams and great benefits and pay will attract a lot of good candidates.


The problem, from my perspective, is that I (as a candidate) have certain things I want to work on. You might have something I can do, but is it something I want to do? I don't want to be hired into a general pool; I want to be hired for one of the specialties I am interested in.


The problem is that what interests you now may not be economically valuable next year. They want you there past one project, because the future is too murky to guarantee your current role will continue to be needed.

Google et al. are hiring generalists because they want people of a mindset such that when the entire special purpose they were hired for dissolves, they're willing and eager to ramp up on a new project and new challenges instead of saying "Well, the foo project is closing down and foo's pretty much what I wanted to do, so I guess I'm going to quit (and take all the knowledge and skills Google spent time and money to train me up on with me)."


You are taking a much too narrow view of "specialization". If project foo is really the only project at Google that uses the skills I like and know best, I am probably not going to apply there.

I get that there are people who don't care (much) what kind of code they are working on. If those are the people Google exclusively wants, that's fine. I just won't have a place there. I do not think this is a generally-advisable strategy, though, as the GGP I was responding to indicated. In fact, I'd argue that well-crafted teams of specialists are superior to, but harder to staff than, teams of generalists.


The funny part is that for all the work Google has done building a hiring process that searches for talented generalists, the actual work they have to offer is really boringly specialized. Most of what's going on in there is just stacks upon stacks of web-service middleware, endlessly slinging RPCs back and forth at each other. If that's your bag, then yay, Google is a great place to work! - but for me it was really discouraging to realize, four months in, that no more than ten per cent of Google was doing anything I was ever likely to care about. I didn't last long, and wouldn't go back (not that they'd be likely to want me).


Ah, I see what you mean and I think I agree; "well-crafted teams of specialists" are, hypothetically, one of the advantages of attempting to tackle a problem as a startup company as opposed to pulling the resources together within a large corporation to skunkworks the problem (though in terms of eventual success or failure, I don't know where to place that advantage relative to, say, having the luck / foresight to choose a problem that the market needs to solve and doesn't have a good solution already in the pipeline somewhere [and will need to solve long enough for a startup to reach minimum viable product] and the startup's capacity to trade equity for a staff that will work their bloody asses off).


I definitely don't think that the generalist hiring practice is the best for all purposes - just that it makes a lot of sense for the bulk of the hiring at a large company, and that most the big tech companies do it.

The big tech companies do also build well-crafted teams of specialists where appropriate (for instance, Google's Deep Mind team is all AI specialists). I don't know what sort of hiring process they use for those roles, but I would wager the process is a little more personalized, and also substantially more selective.


I have seen this 'fungible bricklayer' approach at Amazon and have to tell you it couldn't be more misguided. It may work for generic web software, where the learning curve is not as steep, but the moment you are dealing with vertical technologies like wireless or multimedia it falls flat on its face. The byproduct is poorly thought-out software full of band-aids. Unfortunately, this kind of nonsense often becomes pervasive where management itself is very non-technical (Google may be an exception, though).


I've seen the opposite, from close observation on both sides.

The recruiter may start out with a role that they are trying to fill, but they have plenty of other roles that they could be happy to put you in. And furthermore for the right candidate, they don't have to figure out the role up front.

When I was hired by Google, SRE poached me out of a pipeline to a different group. (Accepting that offer was a mistake on both sides, but it was a learning experience.) When I was hired by Amazon, I was contacted for a job in Seattle, then got hired in Orange County.

That said, plenty of candidates start the process by being too arrogant about what they can demand. Recruiters know to look for that and stop wasting their own time taking these people seriously. That's why pointing out that a proposed role is too junior for you is likely to result in being dropped.


Interesting, thanks for the insight. Funny how two people can look at the same recruiting environment in the same industry and have two totally different experiences.

> That said, plenty of candidates start the process by being too arrogant about what they can demand. Recruiters know to look for that and stop wasting their own time taking these people seriously.

Maybe, but I think it saves both sides time. If I'm shopping for a new sports car and a salesman approaches me with a great deal on a 10 year old pick-up truck, it's helpful for both of us if I make my expectations clear right away.

Most Recruiters: "I have this role you'll be great for! Let's see if it's a great match!"

Better Recruiter: "What are you looking for? I work with a lot of companies and probably have a great match!"


Yeah, I've seen those recruiters. And they tend to not be honest about who they are actually working for. So, for example, you'll get contacted by someone who is recruiting for a job at a big name company, from someone who isn't from that company and doesn't know what is actually available from there. They only know about the one role that they were told about.

These things are generally a mess if you start out that way. Waste of time and money on both sides.


I'm interested in your experience with SRE. What did you like and dislike, and why was it a mistake?


SRE is a difficult role because you need someone with the mindset of a sysadmin and the skills of a programmer. So Google (at least used to) will hire one and hope to find the other half. As a result a large fraction of people hired into the position are not a fit. And Google had no mechanism to identify and rectify this common problem.

I am a programmer. I tend to dive into things in depth and context switching is unusually hard for me. A sysadmin needs to be able to operate on relatively limited knowledge and rapid context switching is par for the course. I was therefore a poor fit in that role.

After I left I was shocked at how many people I met who had known someone else whose story matched mine pretty closely. I would hope that Google has solved this organizational problem since. But rumor from those I know that are still there indicate that things have gotten worse over time, not better, so I don't think that it has.

However it was quite educational. I wish things had worked out differently but I definitely learned a lot that has served me well since.


I think it's tough hiring for an SRE role. Much tougher than a programming role. Although in Google's SRE book, they say the interview bar is lower than for their software engineers. Odd!

You're looking for the sort of person who could set up a startup from the data center up. On top of that, they need to be good coders so they can automate everything, and they must also understand the applications running on the things they build. It's generally a much more in-the-trenches job: relying on limited information, and much harder to test things in isolation since there are so many moving parts, plus things you can't change or automate easily (network vendor appliances, and so on).


The coding standard for SRE hiring at Google is lower, but the other requirements are much higher than for a generalist SWE role, or aren't considered for one at all.


Nobody does that because being smart is of no guarantee that this person can be of any use to a business.


Are you entirely excluding university recruiting from this? Because consulting firms and investment banks routinely recruit smart people without any demonstrable experience doing what those companies plan to train them to do.


That's almost exclusively Ivy. We need a Yale guy, any Yale guy; it doesn't matter who, we just need one for the list of accomplishments. Then we can use our new "Ivy atmosphere" to get that specific guy from Stanford or whatever.

The only place hiring generic warm bodies from random State U is Starbucks.


Consulting firms and investment banks recruit plenty out of my alma mater, UCLA (a public, state university that isn't considered part of the Ivy Plus). They did the same at University of Texas, where I went to graduate school, another public, state university.


You make it sound as if some other hiring method provides a guarantee.


Google is a little bit of both. At least from having gone through the process (and ultimately turning down an offer due to the ridiculousness of the process). There's a massive funnel where they're trying to bring in a bunch of smart people, then teams basically say "I have this position available, who has gone through the funnel that is a skills match for our specific teams".

> I'd love to see a company literally just looking to snap up smart people and then have them come in and kind of define their own role, one where they can add the most value. Nobody does this!

At least with hiring out of college, a lot of companies do the "Let's snap up smart people, then train them to do the specific job we need done". In my experience, this has been particularly prevalent among the big consulting firms.


I think Valve works a bit like that from what I've read.


There are a few different processes imo.

I have been through what you describe. A recruiter reaches out and tries to get you in their funnel. For me, it's a no-go because of my college background. (Unsexy school, shitty grades.)

A family member was pursued by them very aggressively after making some presentations at significant conferences and getting praise in a book written by a high level business exec.

This family member received borderline-harassing levels of outreach from Google recruiters. It's pretty obvious there is some sort of targeted hunting list with KPIs attached to it.


And in theory it should totally be worth it as a candidate unless they are coasting on reputation. Otherwise the supply of candidates would dry up. In practice it wasn't for me, but I refused to tell them how much I made just to see what would happen.

If you interview at Google and they ask how much you make you should probably tell them or you might get a weak non-negotiable offer. Unless what you make is already far less than their "standard offer."

It would be a shame to waste all that effort by sticking to negotiation techniques that maybe don't work on Google for regular candidates.


Why not just lie to them and tell them you make 20-30% more than you really do?


Habit, mostly, I guess. I always implicitly assume lying carries the risk of getting caught and then becoming known as a liar. I don't evaluate on a case-by-case basis whether I should lie.

Also, the experience of being caught lying is emotionally and mentally draining for me, so I stopped way back, in childhood. I wouldn't say I'm completely fib-free, but if you are even remotely entitled to the truth, you're going to get it. If I'm going to lie, I have to be OK with getting caught and not care. That is a high bar.


>but if you are even remotely entitled to the truth, you're going to get it

This is a negotiation, they aren't entitled to know how much you make nor do they have a good method of checking up on it.


Refusing to tell your prospective employer how much you want to get paid is a 'negotiation tactic'?


I thought it was pretty well understood that you should never name a number, and if you're forced to, you should say 5 times the number you had in mind.


That's absurd. You should name the price you expect, and not go below your minimum.

Would you hire a plumber who refuses to name his rate, and then, when pushed, tells you a number 5 times higher than normal? Why do you think any other employment negotiation is different?

If you "never name a number", the person on the other end is going to know you're inexperienced and operating on cargo-cult mythology, and will simply take advantage of you.


The 5 times part is surely a joke, but not naming a number is like HN-promulgated salary negotiation 101

http://www.kalzumeus.com/2012/01/23/salary-negotiation/ https://www.twilio.com/blog/2016/02/patrick-mckenzie-on-sala...

It would definitely be good if someone wrote a counter-response to this prevailing meme.


If you have a good idea of the price you expect, based on a clear understanding of what you can command, then this is an excellent idea.

The standard "don't name a number" advice is based on the assumption that most candidates don't have that. They know what they earn right now, but they don't know what they could be earning. In that situation, you want to sweat some information out of the recruiter by getting them to name the number.

There are other ways you can get a bit of edge.

In a previous round of job hunting, I let a recruiter persuade me to apply at Amazon, even though I was 85% sure I didn't want to work there. They made me an offer substantially higher than my salary at the time, because they're desperate to hire, because nobody wants to work there, because they're a shitshow. I could then take that offer to the other companies I applied to, where I actually did want to work, as a starting point in salary discussions.

In my most recent round, one recruiter said to me "Company X want to know how much you currently earn, because they can't offer above 130k for this position and they don't want to waste their time if you're on more than that". 130k may not sound like a lot to you over in the Valley, but it was getting on for twice what I was making at the time here in London. "Well," I said in the most noncommittal voice I could manage, "130k would probably be okay".


What currency is the 130k in? If they made you move here to SV, it may not be a raise.


Pounds. It involved moving just over a mile from my old job.


"how much you make" i.e. disclosing current or previous salary, not how much you want to make.


Yeah, I engaged a recruiter who seemed pretty decent at the outset. Then, when I expressed interest in 2 positions, they demanded that I provide detailed salary information for the last 3 jobs I have worked.

I expressed disinterest in doing that, though I had no issue with providing salary expectations. But I made it clear that this behavior was completely unprofessional and unbefitting of a recruiter.

I then told them in no uncertain terms that as long as they have this policy on their books, do not call back. If however, they change this, I would consider them again.


There is no standard way to interview a software engineer. Whenever one of these threads comes up we see multiple posters explaining their process, and while each process has its upsides and downsides, no two are exactly the same.

For a company the size of Google, with the number of applicants they receive, I would assume that an interview standard is absolutely necessary. It's not perfect, but for 95% of developers out there you know EXACTLY what you're going to get when you interview at Google. I think there is something to be said for that. Google recruiters tell you what the interview will be like, they give you study materials, and they are pretty gracious with scheduling. If you don't like the process, that's fine, but I think Google in particular has done a good job of standardizing their process. It may not work for individual cases, but I would assume it works well for the company.


The problem with the way Google interviews is that, despite being heavily standardized, the process is not sensitive to non-algorithmic skills and talents that interviewees have. It will _only_ pass candidates who are unusually good at algorithm puzzles, on whiteboards, under time pressure.


When I was interviewed for a job at Google I was told there would be 5 different interviews measuring 5 different things:

Pure Algo (which I view as "let's see that your CS grades were earned and not bought")

Pure Coding (let's see if you have a reasonable coding style and design and are proficient with at least one language)

Algo & Coding (a bit of both that makes sure you can solve a simple problem and code it [i.e. work entirely through a problem])

Software Design (so more on your skill in designing code - finding the right abstractions and interfaces etc.)

Systems Design (your ability to design complete, large scale systems at a very high level)

And this is more or less what I got, which seemed fair and logical to me. I agree that it is not tailored to the interviewee's individual skills (for example, your special training in cyber security is unlikely to give you an extra edge), but it makes sense for "good overall software engineer", and if you aren't also a good overall software engineer in addition to your special training in cyber security, then you are probably not what Google is trying to find. Also, it does look at your skills in coding and design which are both non-algorithmic.

Now whether or not this experience is shared with other interviewees, or matches what you are looking for is something else :X


This seems to be the type of employee Google is looking for, in which case, it works great.


Maybe it's not what they are looking for. Maybe it's only the easiest test, like looking for the keys under the lamp post. It could also be that this way of hiring people turned the company in a specific direction: more attention to the sw and less to the customers.


Or vice versa: the company always cared more about the sw than the customers, and the hiring methodology reflects that.


It seems to be working fine for them.


Is it?



> it is not sensitive to non-algorithmic skills and talents that interviewees have.

I work at Google and do interviews (though I don't enjoy them). We do ask questions around domain expertise, software design, etc. It's not all just coding and algorithms.

A good question:

1. Has a low enough floor that a poor candidate can still make some progress and not feel like they are doing poorly and get stressed out.

2. Has a high enough ceiling that a strong candidate doesn't blow through it in five minutes.

3. Has a smooth ramp between those points.

4. Doesn't rely on too much domain-specific knowledge, so that a candidate who happens to have a random gap in their background isn't totally hosed.

5. Is concrete enough that the interviewer can capture that feedback in a way that the hiring committee can easily understand.

6. Isn't well-known outside of Google as a stock interview question, so that candidates can't game us by just learning the answer.

7. Isn't asked by any of the other interviewers the candidate sees.

8. Can be explained and worked through in about twenty minutes.

In case it isn't obvious, it is really really hard to find good questions that pass that gauntlet. Questions do tend to skew towards smaller-scale algorithm coding questions because I think those tend to survive that gauntlet better than most other questions.

    > unusually good at algorithm puzzles
Interviewers are trained to not ask "puzzle" or "trick" questions. Not only are trick questions a shitty experience for the candidate, they are a shitty experience for the interviewer too. My job in an interview is to get as much data as I can about a candidate in order to provide information to the hiring committee. If I ask you a trick question, I get about one bit—in the binary sense—of data from you: did you find the trick or not?

I don't think you have to be unusually good at algorithms. I basically read some Wikipedia articles and spent a few hours in the hotel cramming Algorithms in a Nutshell, and I managed to squeak through.

That time was incredibly well-spent. Since then, I have relied on that algorithm knowledge way more than I expected to, and have since spent more time learning algorithms and data structures because I can clearly see it's made me a better programmer.

    > on whiteboards,
That part is hard. We allow candidates to use a laptop too, if they prefer, or both. My experience is that candidates who use the whiteboard, at least for the earlier "design" parts of the question, tend to do better than the ones who go straight to typing.

We need to learn how you think, and putting a screen in front of people tends to make them clam up. If all I see is the code you write and you don't explain your thinking behind it, I don't get much data.

    > under time pressure.
That part is really hard too. The reality is that interviewers and candidates have a limited amount of time they can put into this process. Keep in mind that most candidates are currently employed and don't want their job to know they are interviewing. Many of them travel to interview. There are only so many hours.


My experience interviewing at (YouTube.. at the Google campus) was similar to the original post. 3 phone screens (which I must have done OK on), and then they flew me out to CA. I went through the interview process.. got tripped up on the 'puzzle' questions (which I thought was bullshit for a Python programming job.. but whatever), and then didn't get the job. I actually agree with what was said in the original post: this process probably works well for the type of employee Google/YouTube wants to hire.. shame on me for not doing my homework.. I guess.

However, I can tell you what definitely was a complete turn-off, was the odd obsession of seemingly everyone there with where I/(you) went to school. At least 5 times: me: 'I live near Princeton'. they: 'oh, did you go to Princeton??!!! (elation)'. me: 'No.. I went to a state school'. they: 'oh.. ' me: (silently) 'yeah.. I'm sorry. It gets better.' Again, whatever.. in truth, I appreciated the opportunity to interview.. no big deal.

What was even weirder after that was, for no less than 6 years, to be 'actively' recruited (again) by Google. My position each time was: thanks, but not interested in going through another bullshit interview where I'm continuously reminded that I didn't go to Princeton. I'm just not Google material.

In all seriousness, in my opinion, Google is one of the best companies in the world and is run and staffed by the best minds in the industry. My hat's off to them for all of their accomplishments. I just don't want to work there (even if I could :)


    > However, I can tell you what definitely was a complete turn-off,
    > was the odd obsession of seemingly everyone there with where I/(you)
    > went to school.
Ugh, that's really frustrating. I'd like to think they were just searching for common ground, but jeez.

Candidates should be judged based on what they know, not where they happened to have acquired that knowledge.

I am a college dropout and I had no idea what a rare breed I was at Google until after I got hired. (I came from the game industry where college degrees weren't as important.) There are so many over-achievers here, that I think many Googlers have never even considered that someone may not have gone to one of the top ten CS schools in the US. Especially in Mountain View, literally everyone they know probably has.

It's a weird bubble.


"It's a weird bubble."

So you imply they're doing a bubble sort on candidates?


Speaking as someone who's done these interviews (but was never, for the record, asked about anything other than coding/algorithms): I think the core premise is flawed. The idea that a good developer is somebody who can regurgitate knowledge they learned on Wikipedia in a few hours from memory is just not a good base for determining if the engineer is good. More importantly, a developer who cannot regurgitate the knowledge from memory is not necessarily a bad developer, or, frankly, in any way otherwise distinguishable from the guy who can.


> The idea that a good developer is somebody who can regurgitate knowledge they learned on Wikipedia

I don't think Google's hiring people think that that's what a good developer is, as much as they think it positively correlates to good developers.

(I don't know to what degree that's true, but it's how I assume they arrived at the current interview process.)

> More importantly, a developer who cannot regurgitate the knowledge from memory is not necessarily a bad developer

This is also a valid concern, but Google is much more concerned about false positives than false negatives. Missing out on a good candidate is a bummer. Hiring a bad one can be a nightmare. So the process is skewed to avoid the latter even if it costs the former.
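To make that asymmetry concrete, here's a toy expected-cost model (my own made-up numbers, not Google's): if a bad hire costs 10x what a missed good hire costs, a stricter filter wins even though it rejects many more good candidates.

```python
def expected_cost(pass_rate_good, pass_rate_bad, cost_missed_good, cost_bad_hire):
    """Expected cost per (one good, one bad) applicant pair:
    rejected good candidates cost you a missed hire,
    accepted bad candidates cost you a bad hire."""
    return (1 - pass_rate_good) * cost_missed_good + pass_rate_bad * cost_bad_hire

# Lenient filter: passes 90% of good candidates but also 30% of bad ones.
lenient = expected_cost(0.9, 0.3, cost_missed_good=10, cost_bad_hire=100)
# Strict filter: passes only 50% of good candidates, but just 5% of bad ones.
strict = expected_cost(0.5, 0.05, cost_missed_good=10, cost_bad_hire=100)

print(lenient, strict)  # 31.0 vs 10.0: strict wins despite losing good hires
```

The rates and costs are illustrative only; the point is just that when `cost_bad_hire` dwarfs `cost_missed_good`, the optimum shifts toward strictness.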


> I don't think Google's hiring people think that that's what a good developer is, as much as they think it positively correlates to good developers.

I agree. However, I think they are wrong... I'll expand below.

> This is also a valid concern, but Google is much more concerned about false positives than false negatives. Missing out on a good candidate is a bummer. Hiring a bad one can be a nightmare. So the process is skewed to avoid the latter even if it costs the former.

This is an oft-repeated line about Google's hiring process, and in fairness, I think it's oft-repeated because insofar as it reflects Google's belief that their process results in good developer hires, it is true.

However, my suspicion is that the phenomenon going on here is not "losing out on some, but not all, good developers in order to weed out bad ones"; rather, it's "losing out on a certain kind of good developer in order to weed out bad ones." That is, I'd conjecture that the good developers who can (or want to) memorize algorithms and regurgitate basic CS knowledge are one kind of capable dev, and the good developers who rely on tools to a greater degree are another kind of dev. Call them types A and B.

I don't have the two segregated into neat categories - because this is just a suspicion, based on people I know who work at Google, and my own experiences - but I think it's roughly along this line: Developers in category A have an innate desire to learn about and understand computer science on a theoretical level, as much as or moreso than a practical level. Such a person may or may not enjoy building things as much as they enjoy learning about how to possibly build things. Developers in category B (I'd include myself in that group) don't care about theory as much as practice; they do what is necessary to get the job done. Now, neither group hates theory or practice, they just have preferences about which one to spend their time on.

For an organization to really be successful, I'd argue, you need a mix of types A and B (tending more towards one or the other depending on the type of entity). If Google is weeding out most or all of type B, they are doing themselves a disservice - and I would argue that insofar as many of the common complaints about Google (services created then abandoned, poor support, poor attention to bugs/issues, etc.) are true, if this theory is also true, it would help explain why. The practical-preference developer wants to make things work and keep them working; the theoretical-preference developer wants to discover new things and constantly expand her knowledge. Both aims are good, but you cannot have one to the exclusion of the other as an organization.


> That is, I'd conjecture that the good developers who can (or want to) memorize algorithms and regurgitate basic CS knowledge are one kind of capable dev, and the good developers who rely on tools to a greater degree are another kind of dev. Call them types A and B.

I really dislike this characterization. I'm a googler. I've never tried to, or needed to, memorize algorithms. I don't know if this characterization comes from misunderstanding, rationalization, or what, but in my experience at least, neither do most of my coworkers.

That is, I at least don't recall using a rote algorithm when interviewing. In fact, in one of my interviews (not at Google, but I could see a similar question happening there) I had to derive a solution to a problem in a space I was totally unfamiliar with (locking and multithreading).

It seems like, if you assume (incorrectly) that somehow you can't cheat the system, Google is selecting for people who can solve unfamiliar, complex, problems by applying first principles. That is, I think, orthogonal to the idea of 'theoretical or practical' computer scientist.


We have something interesting here, in that I'm looking at this from the perspective of someone the Google system rejected (and who later decided/rationalized/realized participating wasn't really worthwhile) and you're looking at it from the perspective of somebody it embraced.

Naturally, it wouldn't boil down so easily in practice to these two types, even if they are the correct ones. Everybody is a mix of both (and plenty of other things); plus, we have to assume that, like you say, occasionally somebody incompetent or strongly type B "cheats" the system and gets hired.

But I guess the question is: How, to you, does "selecting for people who can solve unfamiliar, complex problems by applying first principles" not seem to jibe with my definition of category A?


Indeed we do.

So, I think my biggest issue with your A/B dichotomy is that I don't see it. That is, in school, in myself, in coworkers, there isn't this large group of people who are trying to do all the theory at the expense of practicality, as opposed to this group of stuff-accomplishers who aren't theoreticians.

I mean, occasionally those people exist, both the "fuck it I'm going to sit down and type until it works" people and the "I must understand this concept before I write a single line of code" people. But as you say, everybody is a mix of both, and I think most people are nearer the middle than the sides, which makes the possibility of mild bias towards one side or the other a lot less harmful than you maybe expect.


The thing is that as an outsider looking in, I see a lot of evidence that Google has this dichotomy and suffers from it. I mentioned some of it in my posts above, but in general, a lot of the company's pain points from my perspective - and from the perspective of employees less satisfied with their experiences[1] - fit what we'd expect from such an enterprise.

Here's an example of what I mean: Google is a company with incredible tech and quite a lot of money, but its products tend to fall into a clear pattern. First, create something awesome, then fail to support it, then fail to monetize it, and eventually it falls by the wayside and is discontinued. There are so many apps Google has created that fit this mold that people have done meta-analyses on how long the average Google product lasts.[2] Obviously, not all Google's products fit this mold - but I think Google is unique in being able to support this pattern, fiscally and from a development-fatigue standpoint.

This pattern is exactly what I would expect to find at a company that is mostly based around theory and concept, and that is either inexperienced at or disinterested in support work after the initial execution.

[1] https://www.quora.com/What-are-the-disadvantages-of-working-... [2] https://www.theguardian.com/technology/2013/mar/22/google-ke...

It's probably true that most people, even at Google, are somewhere in the mid-range. However, a large enough group of people with a small bias will result in a shift in the policy of an organization. I still think it's fair to say that, if my dichotomy exists, it would affect Google's institutional behavior even if the degree of difference between people is low on average. That is to say that while you may not see it, it also may not be obviously, or even at all, visible from the level of one person in one part of the organization, whereas its effects are visible from both within and without. It's kind of like dark matter - you don't know what's happening in someone's head, so you can't observe it directly, but you can predict outcomes based on hypotheses and see what comes true.


Interesting, when I read those answers, what I see is more or less people saying "management can sometimes be downright bad, and is often stupid". I very much doubt that many of the people (broadly) deciding which products to support and which to drop were hired via a new-grad-like interview process.

For things like moonshots/other bets, the "build something awesome but don't monetize it" makes a lot of sense, since the entire point of that division seems to be "build something awesome and see if it is sustainable too". More often than not, unfortunately, the answer is no.

In fact, if anything, I'd reverse the cause and effect in your idea. If we presuppose that there is this dichotomy in people and it affects Google's motives and goals as a company, then this would influence the tech hiring practices to be the way they are, not vice versa.


> I very much doubt that many of the people (broadly) deciding which products to support and which to drop were hired via a new-grad-like interview process.

I'm confused as to why you doubt that. Google's been around for almost 20 years. Their interviewing process has been around since at least 2003.[1] It's 100% possible that somebody was hired, ended up in higher management, and graduated to making driving decisions (at least over a particular area) in that time, unless your suggestion is that nobody who enters as a new grad ever stays long enough to get to that level, which at Google (vs other SV companies) seems unlikely.

[1] http://jeremy.zawodny.com/blog/archives/000616.html

> For things like moonshots/other bets, the "build something awesome but don't monetize it" makes a lot of sense, since the entire point of that division seems to be "build something awesome and see if it is sustainable too". More often than not, unfortunately, the answer is no.

It does make sense, it's more the messaging around those kinds of projects that tends to get lost. You end up in situations where thousands or sometimes millions of users are relying on a "beta" product, which has no monetization strategy, and then gets scrapped. It's a pattern that's still unpleasant for end users and not great for Google's reputation.

> If we presuppose that there is this dichotomy in people and it affects Google's motives and goals as a company, then this would influence the tech hiring practices to be the way they are, not vice versa.

That's a great point, and likely, assuming of course that Google started this way (I think it likely did, given its founders' backgrounds). It creates a self-perpetuating cycle, though, which still ends up being problematic.


> I don't think you have to be unusually good at algorithms. I basically read some Wikipedia articles and spent a few hours in the hotel cramming Algorithms in a Nutshell, and I managed to squeak through.

Either a lie or you are unusually good at algorithms. People work > 50 hours on Leetcode, CTCI, etc. to try and get a job at Big 4. A couple of hours just reading Wikipedia articles isn't even close to the amount of effort people put into 'gaming' the system these days.


Sorry, I should have been less flippant here.

That was the only prep I did for the interview. I was a senior software engineer at EA at the time, with about a decade of professional software experience.

I don't have much of an academic background, but I had written and shipped quite a lot of code by that point in time.

The cramming did help—several of the algorithms I read about were either new to me or I hadn't seen in ages—and some of them did come up in the interview. (I've also used almost every one of them in my work at Google since, strangely enough.)

My point was just: if you are already a good enough engineer to be successful at Google but don't "interview well" because of whiteboard experience or algorithms, it's pretty easy to shore up those two things. You don't need to spend a decade at a monastery meditating on Knuth.

If you're a decent coder, you've already done way harder things. Breaking down a big messy problem into pieces you can code and test is hard. Shipping applications is hard. Breadth-first search is not hard.
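For what it's worth, the "breadth-first search is not hard" claim checks out. A minimal sketch in Python (over a hypothetical adjacency-list graph, nothing Google-specific) fits in a dozen lines:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search over an adjacency-list graph.
    Returns nodes in the order they are first reached."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()          # FIFO queue gives breadth-first order
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:  # mark on enqueue to avoid duplicates
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Example: a small diamond-shaped directed graph
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(graph, "a"))  # ['a', 'b', 'c', 'd']
```

Compare that to decomposing a messy production problem: the whole trick here is one queue and one visited set.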


Unless you're near genius-level smart. Then you will easily get offers from all the Big 4 without much prep.


Here's the problem I have with interviewing, speaking as someone who is already at a Big 4 company and has gone through this process. I know exactly what types of things I need to prep for, and even as someone who's competed in competitions like ICPC and TopCoder, I'm always worried that I'll get the type of interviewer who will just ask obscure questions and not go to the level of depth you mentioned when it comes to working with the candidate to assess their problem-solving skills, instead just checking whether they know the solution to some trick question: the kind that most people, not knowing the solution beforehand, would almost never be able to figure out within 30-60 minutes.

I'm planning on interviewing with Google in a few months just because of all the great things I've heard about their infrastructure and engineering culture, but due to these concerns I've budgeted to use my current vacation time just to get back into prepping the way I did when training for ICPC just so that I'll have confidence knowing that most interviewers probably have never studied algorithms at that level and that I'll have more confidence in not being potentially tripped up by these types of interviewers.

Most of my friends, even those at Google, have tried to convince me that this plan is overkill, but I just can't get myself to leave the fate of my future in the hands of some random interviewer and then regret not doing this level of preparation if I get rejected...


"I basically read some Wikipedia articles and spent a few hours in the hotel cramming Algorithms in a Nutshell, and I managed to squeak through."

Then you must be much smarter than the average Silicon Valley software engineer. Congrats on being born that smart. You should thank your parents.


I don't really know what "smart" means, but, yes, I am very lucky in a lot of ways, some of which are hereditary.


You're missing something important about my comment. The issue isn't the hair-splitting difference between what you call "puzzle or trick questions" and what you describe as "a good question" (with your eight criteria). The issue is that Google style interviews ONLY ONLY ONLY ask coding questions in a pressure cooker environment.

I've interviewed at Google and the interview experience was sufficient to convince me that I'm not a culture fit. Maybe that was the point. I tend to thrive in work environments where my "soft skills" matter more. Seems like that's not even on the radar at Google.


> the issue is that Google style interviews ONLY ONLY ONLY ask coding questions in a pressure cooker environment.

By "pressure cooker", do you mean in terms of time, or more social intensity? There's not much Google can do about time. Like I said, it's not fair to ask candidates to dump weeks of their life into the hiring process.

If it's about the social intensity, I agree, that's hard. Many programmers are introverted and being "on" for an interview is really really difficult. It's one of the reasons I don't like doing interviews—it's hard for me on the other end too.

We are trained to try to make it as relaxing and pleasant an experience as we can, but there's only so much we can do. The reality is you're in a 1-1 conversation with someone, with a fixed time bound, and where your performance may alter the trajectory of your life.

That is a stressful experience, no two ways about it.


I got part way through one interview process with Google -- never again. I'll stick to buying lotto tickets, the odds of 'winning' are about the same and the payout is higher.

Not to mention lotto isn't quite the tremendous time-suck that playing the Google interview game is (I mean, sometimes you get stuck waiting in line behind the old lady buying $500 worth of scratch tickets so you can buy your one powerball ticket, but still...)


There is no proof that joining Google is a lottery ticket (i.e., that whether you get in is luck). I have interviewed at Google, Amazon, and Msft, all multiple times, and have passed every single one of their interviews multiple times (i.e., I've never failed). If you want to attribute it to luck, then luck is when preparation meets opportunity.


You think preparation alone can get you in the Big4? Obviously you are very smart if you passed every single one of their interviews multiple times. Hard work alone won't get you there. Just curious, so where do you work now?


I do think that hard work and preparation can land you a job at these companies. I went from doing a manual labor job and never making over $33k/yr, to busting my hump 4 years in University to land these jobs. I also recognize that interviewing is its own skill that needs to be worked at, and simply doing LeetCode questions or reading Intro to Algorithms by CLRS isn't everything.

I can honestly say that I have never had all the interviews go my way during a loop. I have been given problems where I could only find the naive solution. I have been in an interview where I made it through the first teaser question only to be completely stumped by the second question. And I have made really ridiculous coding mistakes, like incrementing the string variable instead of the index variable. But I also manage to never beat myself up about these mistakes, always talk about what I'm thinking during the interview, listen and try to apply the hints that the interviewer gives me, and exchange pleasantries with the other person during the interview.

I dress well, always with a fresh haircut and a clean-shaven face, wear deodorant and take a shower the morning of an interview, and walk with good posture, because I do think that these things matter.

I do also believe in the interviewer anti-loop (see Steve Yegge's blog[1]), but I also think that it doesn't really matter unless you really want to work for a specific one of these large tech companies.

I currently work at Microsoft and just finished interviewing with a couple of the other large tech companies and have pending offers from them. I don't know, maybe I am smart, but when I see people like Andrei Alexandrescu or Mark Russinovich speak, or watch a video of Grace Hopper explain how she demo'd nanoseconds to military generals, I feel like an idiot compared to those people. I personally think you obviously have to have problem solving skills, but also gumption and a can-do attitude goes a long way.

[1] http://steve-yegge.blogspot.com/2008/03/get-that-job-at-goog...


The problem here is that he wasn't pursuing it, they were pursuing him. Maybe the issue lies in the actual recruitment process, and not the stupid interview questions.


I am not a standardized programmer, so I have no interest in Google's standardized interview. I tell their recruiters I am not interested, as I doubt they would hire me anyway, given I don't fit their standardized system. When you design your system for one specific person type, that's all you will get.


Yep. Your path into the company (were you interested in joining it) is:

1) Found your own company to do a specific thing.

2) Prove out that you can do the thing and do it well, and furthermore have the thing be valuable enough that it would be worth acquiring, but not valuable enough that it can operate as an independent company that covers its own costs.

3) Google decides that what you're doing needs to be part of their portfolio, and they buy your company and staff.


What I don't understand is that this person hasn't even had a Google interview... They literally say

"I won't accept an interview at Google because: 1. I had a bad interview at Amazon (???) 2. I read a story on HN about someone having a bad interview at Google"

I'm sorry but that's an awful excuse. As you mention, Google works at an insane scale, and as foolproof as you try to make a process, there will always be a very small chance that it'll fail. Doesn't mean the whole process is bad.

And there's the whole bias that you'll only hear from horror stories and never from success stories.


> They literally say

You cannot claim that someone "literally" says something and then make up a fictionalised quote that mischaracterizes what they're saying.

None of that "quote" appears in the article. Therefore the term literally and the quotation marks are at best misleading.


I've been working as a software engineer for quite a few years now and I'm sure I'm a much better software engineer now than when I graduated. I now have years of experience solving varied real-life problems with real-life constraints (budgets, deadlines, ...), using a wide variety of technologies, working with a lot of different people, ...

However, I'm also pretty sure I'd have had a better chance at the Google interview right after graduation than now, when all the theoretical stuff was fresh in my brain. So I'd have had a better chance while being (in my opinion) a less useful engineer.


> However, I'm also pretty sure I'd have more chance on the Google interview right after graduation than now, when all the theoretical stuff was fresh in my brain. So I'd have more chance while being a (in my opinion) less useful engineer.

The thing to understand is that Google is more interested in not hiring potential bad hires than in hiring every good hire. Part of the reason the interview process works the way it does is to filter for people willing to put in some effort. If you want a job at Google, it's not a big deal to spend 3-5 hours a week for a few months getting sharp on the basics. If you're not willing to do that, then that's a good sign that you're not really invested, and may not be the best hire.

Google will totally lose some good hires this way of course. But their philosophy is explicitly that it's better to lose a good hire than make a bad hire. I think they're probably right.


*handwave* It's a tradeoff.

What may have atrophied a bit after being out of academia for a little while is the jargon and some of the nuance (i.e. can you still reason formally about big-O notation; do you still know the names and relative behavioral constraints of all the fundamental data structures, even the ones you personally haven't had to touch for a decade because your specific industrial domain hasn't really necessitated them---I can't tell you the last time I coded my own hash function rather than just using whatever the default "map" / "dictionary" / "object" is in the language I'm using).

What you will have an advantage on is the real-world stuff that an undergraduate new-grad may not have ever seen yet, like "How do you actually build a system that can move data from a web-based UI with authentication through to a database or backend-processing infrastructure and then back out to the user (and how do you scale out a system like that when it starts to bottleneck)", "How do you monitor and maintain a system after you've built it," and (probably the biggest gap between graduate-knowledge and real-world knowledge) "Cool algorithm; how do you verify it's right?" A surprising number of undergrad programs still don't teach unit, functional, or integration testing as a significant aspect of software engineering.


>Part of the reason the interview process works the way it does is to filter for people willing to put in some effort

That's the sales pitch, but it's actually just because there is no other interview approach that works at scale for the volume of candidates they get. If they didn't have a stack of 100,000 resumes, they would throw this ridiculous process out the window in a heartbeat and actually interview people for the position.


> If they didn't have a stack of 100,000 resumes, they would throw this ridiculous process out the window in a heartbeat and actually interview people for the position.

Absolutely.

If they had far fewer resumes then they'd be much less willing to pass on potentially good hires, and more willing to risk bad hires. You make it sound like we're disagreeing, but this is 100% consistent with what I said.


You're not saying the same thing at all. Your statement assumes the process eliminates bad hires, while GP's point is clearly that it doesn't.

I'm not agreeing or disagreeing with either side, but you clearly aren't in agreement.


"it's not a big deal to spend 3-5 hours a week for a few months"

I think I can get further ahead than a job at Google for that kind of time investment. I don't mind big expenses, I just expect corresponding rewards.


I mean, all the big tech companies use a similar white board/algorithm centric hiring process. If you're interested in working for one of them, 40-60 hours is really not a big investment to be in a good position to apply to all of them.

I totally understand if that sort of job just doesn't interest you, but I'd be really curious as to where you think that 40-60 hours will get you a better ROI.


If you already have a good job? Exercise maybe.


Similar story: a few years back I was interested in applying to TopTal for side work. As a senior engineer working with well known companies, I thought the acceptance process wouldn't be as bad as claimed on their homepage.

Passed the first personality interview. The second was a coding challenge. I told the interviewer several times that I had not studied algorithms, and that if the coding challenge involved them, I would prefer to drop out and not waste anyone's time. I was very happy to say that multiple times - I know what I don't know. The interviewer goaded me into taking the challenge anyway, saying I'd "definitely be fine and pass."

The next day I attended the timed coding challenge. Three algorithm puzzles that were in no way insignificant. I had Google at my disposal and still could not solve a single problem - although I came close on one involving permutations of chess pieces.

Unreal. At least Yegor had the good fortune of not being directly lied to by the recruiters!


> There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

Interview process: complete success.


> If she would have started her email with "We're looking for an algorithm expert," we would never have gotten any further and would not have wasted our time. Clearly, I'm not an expert in algorithms.

This phrase precedes the one you mentioned. He's saying the process he has been through completely ignored his background and field.


The notion that traversing a binary tree requires expertise in algorithms seems questionable to me. I view basic tree traversal as table stakes for a developer with more than a year or two of experience (IIRC, it's discussed in Intro to Algorithms in most CS curricula).

OP says that his field is object-oriented design, which links to a page where he announces that "We've started work on a new programming language". The idea that a computer scientist/software engineer would embark on building a new programming language but also state that they don't know how to solve tree traversal problems and "will never be interested in learning them" seems strangely incongruous, given that building a language almost certainly involves working with ASTs.

Specifically, OP pushed a commit in late 2016 that contains code working with stacks, linked lists, and an AST class for the EO language.

https://github.com/yegor256/eo/commit/6df2281d8b0163b9f9e1b8...

I guess I just don't believe that interview questions about trees are ignoring the author's background and field.


But optimizing for them? I've been out of school over 7 years; I'm currently a tech lead. I can guarantee you that, while I can likely solve most binary tree related questions you'd care to throw at me, it will take a lot more work, thought, and likely mistakes on my part now than it would have when I first got out of school. It's not an area I've spent -any- part of my development career on, nor is it an area I find particularly interesting. Whereas when I left school, I had actively been dealing with such problems within the past couple years, -and- had boned up on them specifically to prep for interviews, since I had nothing else in my favor.

Hiring a junior with such criteria might make sense; hiring a more seasoned person, probably not so much. Because the seasoned person will just not be as compelling as the more junior person, due to having spent the past few years working on other things.


Funny you bring up the CS curricula.

In response to not knowing the speed of sound as included in the Edison Test, Albert Einstein replied:

"[I do not] carry such information in my mind since it is readily available in books. ...The value of a college education is not the learning of many facts but the training of the mind to think."

This is even more true today with the vast trove of easily searchable information called the Internet. A competent programmer should have high-level knowledge of different algorithm types, but should not strive to memorize their implementations (e.g. for interviews), since it is trivial to look them up.


That story about Einstein doesn't illustrate what you think it does. Einstein was a theoretical physicist, not an experimental physicist; there's no reason for him to have known the precise value. However, you can bet that Einstein knew the theory quite thoroughly (review his writings if you believe otherwise) and would have been rather less than impressed by anyone calling themselves a physicist who did not.

And aside from that, let's face it, none of us here are Einsteins.


I'm not sure I buy that distinction. Do you believe that experimental physicists know every single formula and can recite it off the top of their head? Because unless they use those formulas constantly in their work, they end up having to look them up.

Same thing with algorithms like tree traversal. You should know the theory behind them, but unless you regularly implement them, there is zero reason to commit their implementation to memory to the extent that you can casually write them on the white board during an interview.

>>And aside from that, let's face it, none of us here are Einsteins.

Even more reason to not try to memorize something unless you use it constantly.


But imagine Einstein having to look up the derivative of x^2. I'd say that's a pretty close equivalent to knowing tree traversal in CS. It should just come naturally.


I've been working in the programming industry for ten years and have never had to implement tree traversal. I could probably scrape together a naive implementation in a relatively short amount of time, but it's certainly not something I've memorized. But then, I'm a programmer, not a computer scientist.


If one is a computer scientist, yes. If one is a programmer, not at all.


Sounds like Einstein was a Sherlock Holmes fan.


I agree; especially when designing programming languages, it's a bit weird to not know how to traverse a tree.

There is one thing I want to point out though. Before starting my algorithms course I knew how to solve problems recursively. I had done project euler tasks and what not. I just never knew the words abstract syntax tree or binary search tree. I could guess what they were prior to the course but I didn't know what they were off the top of my head.

Do you think it's possible that a self-taught engineer would know how to solve these problems, given time and enough questions to the interviewer? Or would they maybe get frustrated with "How do you balance a search tree?" questions under pressure?


> Do you think that it's possible that a self taught engineer would know how to solve these problems given time and enough questions to the interviewer?

It's very easy for new interviewers to come up with problems that are easy for them (because they know the subject), but very difficult for the interviewee. Avoiding that is a priority for me, but I still strive to evaluate how the candidate approaches problems.

My strategy is to keep my questions as close to first-principles as possible, making the problem more about programming approach than algorithmic recollection. I had a Google interview once where they asked me to implement pivot tables, and I had to sheepishly ask: "What's a pivot table?" That set me back somewhat, and I try to avoid that in my own questions.


> I view basic tree traversal as table stakes for a developer with more than a year or two of experience (IIRC, it's discussed in Intro to Algorithms in most CS curricula).

You're mixing two unrelated things: education and experience.

You will be taught binary trees in a CS program, but not every programmer studied CS.

If you learned binary trees in CS, you will forget them after a few years of experience, because skills that are not used are forgotten.

So yeah, the Amazons/Googles of the world are mostly looking at recent CS graduates (most of the set of programmers who know many algos) and the very small set of programmers who deal with algos every day.


And a third category: people that train specifically to pass the Amazon and Google exam.

I know several people that did, taking several months off to prepare (and got in), and yes, you have to re-learn all the CS algorithms which will probably be irrelevant again after you pass the interview.


> Specifically, OP pushed a commit in late 2016 that contains code working with stacks, linked lists, and an AST class for the EO language.

Most people can work with Vectors/DynamicArrays, linked lists, etc., but they don't know how to implement one, or even how it works in detail under the hood.


I don't think traversing binary trees satisfies the criteria for "algorithms expert."

It would have been different if he had been asked questions that only algorithm experts could answer, but these are really basic questions.


I'd be interested in hearing from people who work at Google and get to figure out a cool algorithm for solving an interesting puzzle-problem more than about once a year. I bet there aren't very many.

In my experience, the stuff you do in this kind of interview has very little relation with the stuff you do in the actual job. (I wish it did! I love those algorithmic puzzles.)


I don't know about the average but at Google I probably write a new call to std::set::find twice an hour, and it's important to know that std::set is a red-black tree, and how std::set::find is implemented and how much that costs and so forth. Fluency with data structures and algorithms is not some kind of brain candy. It's absolutely necessary to get the job done.


I've been a developer for 30 years, helped start a half dozen companies. Since college I've never written a single tree traversing algorithm, and would need to Google red-black tree to know what it is.

There is a whole world of development that is OS specific, UI specific, shipping commercial applications. Shipping quickly is always a higher priority than performance, because adequate performance is usually trivial to implement with hash tables/arrays/linked lists.


You don't generally need to write tree-traversal algorithms because other people have already done so and you can use their implementations. But you should still understand the tree traversal algorithm, so you know what the thing you're using is doing, what its complexity is, etc. And if you understand it, then you should be able to reimplement it yourself.


I'm not entirely sure adequate performance is good enough when you work at the scale of Google...


The performance bottleneck is not always where you think it is.


That's why Google measures.


That's right. Adequate performance will keep thousands of extra cores burning cycles 24 hours a day.


>because adequate performance is usually trivial to implement with hash tables/arrays/linked lists

Are you sure about that? What if your code has to work correctly for a billion-plus people, 99.999% of the time, on all different kinds of connections?


Correctness is orthogonal to performance.

I would also argue that making a very clever algorithm do the correct thing (and keeping it correct in the future) is more difficult than doing it with a simpler method/algorithm.


I understand the performance characteristics of red-black trees very well. I couldn't implement one in 30 minutes on a whiteboard.


After preparing and studying interview questions on data structures, algorithms, etc. with the help of guide books like https://books.google.com/books/about/Cracking_the_Coding_Int... you definitely can!


Neither could I, but that was also well beyond the scope of my Google interviews.


Yes, but jeesus, here we are discussing an algorithm that I first heard about in literally my first CS course on algorithms, in my first year at university, some 20 years ago. And I had to manipulate red-black trees on the blackboard. I haven't seen them since in my 20 years of work in the industry.

If you need to write code so performance-critical that the choice of tree structure matters, then this interview question sure as hell isn't going to find out if you are cut out for it.


I've interviewed many candidates for one of the big 4 and sat on the decision loops too. 'Attribute substitution'[1] is one thing that stands out in these interviews: essentially, interviewers substitute the larger question of judging the candidate's suitability for the job at hand with a simpler, easier-to-judge (and defend!) problem of "solve this particular coding problem that I have 'normalized' over many candidates". Also, deep domain knowledge is often hard to judge.

1. https://en.wikipedia.org/wiki/Attribute_substitution


One does not need to know how to write an algorithm on a whiteboard to understand it. It seems like it would be better to ask what a red-black tree is, what its time complexity is, and what would be a good situation to use one.

In fact, I would argue that one could know how to write one and have no idea of its practical use.


It pretty much has no practical use that a different data structure couldn't serve more efficiently.


What in the world are you talking about, RB trees are incredibly useful.


There's a well known analysis that shows they're asymptotically equivalent to B-trees. You can tune a B-tree to better exploit cache locality. Other kinds of ordered binary trees are far less complicated to code and have similar performance characteristics.


Maybe I am missing something but if you're using set::find so often, why not an unordered_set?


Figuring out a cool algorithm is not the same as learning 1000 algorithms by rote to prepare for an interview. Just because someone doesn't feel like doing test prep for an interview doesn't mean they don't have the brain capacity to actually solve the business problem when it comes up.

If in my day-to-day I need to use an AVL tree, I will get a library for it, I won't be reimplementing it from scratch every single time.


This is what has confused me for a long time. I interviewed at Flipkart, an Indian e-commerce major (ex?), and their process was horrible. In the second round they asked us to implement a tic-tac-toe program; some machines didn't even have compilers. I had Python because mine was a Linux machine, and I was rejected because the "code contained bugs and didn't run". The third round was algorithms. I think these companies don't know how to hire, which is why they stick to algorithms. In real life nobody is going to implement an algo from scratch. If I can learn Go in a week, and learn how to write a webapp in 2-3 weeks having never written one before, why does it matter if I don't know how to implement some random algorithm?


> If in my day-to-day I need to use an AVL tree, I will get a library for it, I won't be reimplementing it from scratch every single time.

Completely agree. How will you know you need an AVL tree, versus, say, a Red-Black tree? What are the performance characteristics of each? Why pick one over the other? When I interview someone, I ask algorithm questions to find out how they reason about the algorithms, not so I can watch them implement one.


OK, I'll bite: this is not something the vast majority of engineers ever need to know.

They are both balanced binary trees with the same big-O complexity. Constant factors are different, but if and when you care about that, they're both binary trees so it should be easy to switch one implementation for another. In practice you're unlikely to care because most of the time you won't be working on performance-critical code. All speculation about performance is hearsay unless you run some benchmarks.

Google search brings up this discussion: http://discuss.fogcreek.com/joelonsoftware/default.asp?cmd=s... There's plenty of interesting stuff there, but it rather reminds me of medieval theologians debating how many angels can dance on the head of a pin.

There's a much, much more important distinction: binary trees (of all kinds) versus sorted arrays. There are many cases where a std::vector will be a lot faster than a tree, due to cache coherency, and use much less memory too.

So, discussing AVL trees and red-black trees in an interview is a waste of time. All it tells you is whether somebody once studied them (and remembers their studies), or possibly just memorized them the night before. Knowing that somebody studied those algorithms (except you don't know, because they may just have crammed) would be a positive signal, but it doesn't actually tell you whether their CS course covered useful, up-to-date topics like cache-friendly algorithms and data structures.


> There are many cases where a std::vector will be a lot faster than a tree, due to cache coherency, and use much less memory too.

This is also what annoys me: so much ink is devoted to big-O complexity and so on, when on modern processors "less optimal" algorithms often end up a lot faster because of how the hardware works. You frequently end up writing the same thing three different ways so you can test which one is actually fastest given your language/compiler/toolchain/processor.

In a book-level discussion, whether a comparison in your algorithm evaluates to true or false in a predictable or unpredictable pattern makes no difference to its performance. Write it out in code, however, and the branch predictor of your CPU will be a LOT happier (and faster) if you arrange for all the "false" and "true" comparisons to happen in streaks rather than at random.


I'm not sure I understand your point -- possibly we're agreeing?

To clarify, it seems to me the interviews tend to focus on things like "what's a good data structure for the social network in a Facebook clone?" but that's only 1% of the actual job, 5% tops.

It's true that those technical design decisions are very important, but they're not decisions that every engineer needs to make on their own on a day-to-day basis. They come up fairly rarely in a typical project and usually multiple people will be involved.

I don't have any easy answers -- I don't know a good way to interview for the other 95%+ of useful skills, like communicating well within a team, testing, debugging, benchmarking, writing documentation, good source control practice, knowledge of standard tools, meta-knowledge of how to learn about standard tools, how to evaluate new tools, system administration, dealing with production panics, etc.


My point was more along the lines that even if you do need to figure out cool algorithms as part of your job, interviews as they are now don't seem to be testing for that, but more for "can you rote-learn these specific algorithms".

I think some of this is why, some time ago, there was a wave of interviews with "off the wall" questions, but those ended up fairly vilified, which I guess is why they aren't as common now. Personally, I would get more of an understanding of how somebody thinks by giving them an unexpected question than by asking them something they could have studied for.


You don't have to memorize 1000 algorithms. Honestly you could have a really good understanding of when and how to use about 10 algorithms and probably do very well on most interviews.


The problem is that each company will ask you a different set of algorithms, and you're not sure which they'll ask, so you effectively have to memorize a LOT more than 10 if you want to be prepared for whatever they throw at you (not literally 1000, but maybe 40 or 50), which takes considerable time. And that's in addition to all the other preparation for unrelated questions they could end up asking you: architecture, OS, process, design patterns, optimization, advanced language-specific quirks and gotchas, terminology, 3D math and graphics (if games), databases, networking, behavioral questions, solving math problems you've never encountered before, questions about your past projects, etc.

When they passed on me after my Google interview, the HR person actually told me "You can try again in 18 months. There are quite a few people who study for it during those 18 months and get it the next time!"

Yeah, I don't want to study for 18 months to get a job at Google, sorry.


The other downside of that is: in 18 months they're going to hassle you for an interview again, whether you want one or not.

I have a friend who was interviewed at Google, rejected, then interviewed again a year later, rejected again, and then a year later a Google recruiter got in touch to see if they'd like another interview. They said it started to feel like a cruel joke.


Oh yeah, recruiters from Google have contacted me three times since then. Actually the most recent recruiter said I could contact her whenever I felt like having another interview.

I might eventually try going through the gauntlet again, but I wasn't quite willing to move to the west coast during that time (new relationship, new home, new puppy, work was going well, I didn't really need more big life changes at that point).

I'm starting to consider a job change but I'm still not sure I want to move to the west coast. I'd probably have to downshift my home quite a bit out there.


You probably won't even get to use all 10 in your first two years of work. All the computer science you use in 99% of a typical programming career could be described in a single conference presentation. (Not that one would master all of it that fast, but it would describe what you need to know and what you don't know.)


You might not need to implement it, but you needed to know enough to know what an AVL tree is, why you might (or might not) need it, and to be able to do some basic validations on the library you selected to make sure that it does what you are hiring it to do.


Agreed, but let's assume you don't know what an AVL tree is or what the trade-offs are between AVL, red-black, or other types of trees: how long does it take, in this day and age, to find that information if you know what you are doing? I mean, you could allow your interviewee access to a computer, and based on the type of googling they do it should be fairly obvious whether they know what they are talking about.

In your day-to-day you have Wikipedia, StackOverflow, HN, and so on: software development hasn't changed so qualitatively in the past decades as to require multi-day interviews when, 15 years ago, a single 30-45 minute interview was more than enough.

It would actually be interesting to compare the length of, say, a google interview a year after company inception compared to now. I am sure that despite the fact that nowadays the impact of a bad hire would be minuscule compared to back then, the interview process is way, way, way longer and more difficult.


> how long does it take in this age to find that information if you know what you are doing?

I think we're discussing fluency. If you are hiring someone to edit books written in English, do you want a person who has to look up the meaning of "present participle", or do you want someone who just _knows_ what it means? I am sure that most people can figure out what "present participle" means in a few minutes with a search engine but those aren't the people I want as the editor.


My problem is that I forget things very quickly. I've implemented optimized k-d trees and VP trees from scratch, but I forget the complexity of searching, rebalancing, etc. Even though I've implemented it before. I normally just Google something simple like that when I need to "re-remember". I suppose interview performance at those types of companies is kind of like high school and college — just cram all that stuff into memory beforehand and forget it right after the test.

My most recent interview was very well done and not like an "algorithm quiz" at all. I had to bring my laptop and give a short overview of a project I had recently worked on. I think this really plays to the candidate's strengths.


I so totally agree! Forgetting stuff is as important as learning new stuff, because it forces you to learn/re-learn quickly.

I wrote millions and millions of lines of code in my 30-year career. I was once a top-shot Mac developer, and once a top-shot computational geometry expert, and an OpenGL expert, and all of that, and I've forgotten most of it... If someone asked me a trick question, I'd look like a fool.

Simply because I don't need it. My strength over all these years is my capacity at learning stuff quickly, and that doesn't come up in a trick-question interview.

So for these data structure questions: in 30 years, I had to implement a binary tree exactly once for any sort of problem in my job. It's not the same for hashes, lists, queues, and many others, but trees? Quite frankly, they're a CS toy. You know they're on the shelf in case they're needed, but they very rarely get a dusting.


> In my experience, the stuff you do in this kind of interview has very little relation with the stuff you do in the actual job. (I wish it did! I love those algorithmic puzzles.)

This is what confuses me. I tend to take the problems I get during an interview as representative of the work I would be doing, and then get turned off by the job immediately.

At least at the PhD level, you'd expect that the job would be using more of my unique expertise, I'm not a BST expert (though I can write one when I need to, it definitely isn't "warm" in my brain).


I used to enjoy giving interviews because it was fun coming up with these little problems, and discussing them with smart people. Because I didn't otherwise get to do that very often at work! But I felt guilty about giving a false impression to interviewees about what the work actually involved...


For a complete success, they would need a way to filter out people without algorithm knowledge early on. A day of interviews already costs them real money, especially at the scale they are doing this.


Yeah maybe you are right on that point. At Google at least the filtering by stage seems to be: Recruiter phone interview: do you speak English? Technical phone interview: have you ever programmed a computer before? Technical on-site interview: do you know a few things about data structures and algorithms?


What's funny is that Google seems to think they're really good at interviewing -- for example, see this Washington Post puff piece: https://www.washingtonpost.com/business/capitalbusiness/an-i... Possibly they are, but it's hard to see solid evidence one way or the other. I guess they can't be really bad in the sense of hiring mostly bad candidates, as that would surely cause big problems before long. But they could be about the same as other big bay area companies.


Why do they try to "recruit" him in the first place, then, and "waste" recruiter time and phone-screen time on him and people like him? Remember: this isn't people applying to Google out of the blue: this is people Google has actively courted to apply.

I did a couple rounds of their interviews a while back just to see if they really were as bad as people said they were (yup!). And that was in response to a very persistent recruiter emailing me over and over and over again to apply, despite my knowing there was zero chance I'd be a fit for the job they asked me to apply to.

And more generally: I don't really care about binary trees. They're simply not relevant to the work I do. I suppose if someday the only way I can get a job is to buy a book with all the stock interview questions in it and memorize the answers, then that's what I'll do. But if somebody's hiring me, I want them to hire me for the things I know that aren't standard interview questions anyone can memorize.

The way I try to phrase this to people who don't get it is: my metaphorical working set in memory does not include basic algorithms (that's what libraries are for), and does not include brainteasers or riddles. It does include a lot of arcana about how web application stacks work, the network and gateway protocols they use, the points where security and reliability issues are most likely to happen and how to avoid those issues, the application and database architecture patterns most often needed, available libraries and frameworks which implement them, pros and cons of different approaches, etc., etc. I could, if so inclined, probably take the metaphor all the way and outline which things are in the equivalent of CPU cache vs. the equivalent of main memory.

So sure, I could page all that out to disk and forcibly cram a binary-tree answer into my brain's L1 or L2 cache, or load up one of the dynamic-programming questions Google loves (tip if you want to work for Google: memorize the Wikipedia article on longest common subsequence, even if it will never ever be relevant to anything you'll ever do, because they will have you do it in an interview). But that would be actively harming my own usefulness, and I'm not willing to do that for Google or for anyone else.
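(For anyone curious what's actually being crammed: longest common subsequence is a standard dynamic-programming table. A minimal Python sketch, with variable names of my own choosing:)

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b."""
    # dp[i][j] holds the LCS length of the prefixes a[:i] and b[:j].
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                # Matching characters extend the best prefix LCS by one.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop one character from either string.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(lcs_length("ABCBDAB", "BDCABA"))  # the textbook example; prints 4
```

Which rather proves the point: it's a dozen lines you can look up in any textbook, not a measure of day-to-day usefulness.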

Which I guess raises the question: why do you apparently want me to harm my own usefulness just to try to get a status-symbol passing grade on a Google interview?


Here's the thing; nothing personal about you, but a true fact about Google. Nobody at Google could possibly care less about your opinions on "application and database architecture" because Google does not have application and database architecture that even slightly resembles anything you have ever seen outside of the company. In fact, the less you think you know about architecture the better it would be for all involved, if you were to go work at Google. Also, all those things you know about makefiles and maven and ant and ansible and chef? Nobody cares. Google doesn't use those tools. Google has lots of famous engineers, and while I believe their expertise and knowledge are valued, their outside experiences generally are not.


Google also increasingly has a reputation as a company that hires top-tier CS Ph.Ds and puts them to work building CRUD web apps.

They have this reputation because, it turns out, not every engineer at the company needs to be able to rebuild all of Google from first principles in order for it to keep running. And even at Google scale there's not enough interesting greenfield work to do, or de novo problems to solve, to keep all those "famous engineers" busy all the time. The same is true at most companies -- you need a mix of talents and skillsets, and you need to match people to roles based on that. The Google approach -- putting people into drudge work and paying them enough to hope they won't quit -- is a grossly inefficient use of talent and knowledge. Which is another reason I wouldn't want to work there!


Arrogance at its finest.

I'm not arguing that Google doesn't have a huge scale, but I'm pretty sure that Microsoft, Facebook, Amazon and most likely the NSA also have unique systems and challenges.


> Microsoft, Facebook, Amazon and most likely the NSA

And all of them (well, maybe not the NSA; not sure on that one) do the entire "algorithms quiz".


> In fact the less you think you know about architecture the better it would be for all involved, if you were to go work at Google. Also all those things you know about makefiles and maven and ant and ansible and chef? Nobody cares. Google doesn't use those tools. Google has lots of famous engineers and while I believe their expertise and knowledge are valued, their outside experiences are generally not.

What distinction are you making between "expertise", "knowledge", and "outside experiences?" The outside experiences are the basis of the expertise and knowledge. If you understood, used, and contributed to other existing build systems, you are then in a good position to understand, use, and contribute to Google's in-house build system, because you have useful knowledge about the problem domain of building software. And so on.


> It does include a lot of arcana about how web application stacks work, the network and gateway protocols they use,...

This was my second phone screen at Google, for what it's worth.


If you read and understood the article, this comment is really disingenuous. It does not in any way reflect the sentiment of the writer.


Depends what the job description is, surely. And if the developer isn't interested in working on these things, surely it's failure on Google's part to effectively target who they are approaching?


The job description is "Software Engineer at Google." They hire for some special roles from time to time, but then they approach people differently (i.e. on a personal level); otherwise, they hire people they can ideally move easily from project to project.


Maybe, or maybe they are missing out.

Binary-tree traversal is a rote-memory sort of thing. Somebody who can come in and play 20 questions about an algorithm and answer them perfectly shows me nothing other than that they studied that algorithm and have a good memory. It does not give me any insight into how they solve problems that have not already been solved for them. It won't show me how they do under stress with a completely open-ended question.

I don't work at Google so I won't speak to their daily work, but you should very rarely be writing your own binary tree or traversal code. If you do find yourself on that task, you hopefully won't be doing it from memory or without a reference, because unless it is the ONE thing you know like the back of your hand, you are likely going to make mistakes.

I also feel that anybody who has actually been productive on a large successful project does not have time to commit to memory things that can be looked up when needed. There have only been a few times in the last 10 years that I actually needed to implement any of these algorithms, and those were very low level, in a kernel driver where libraries are not so abundant. Can I remember all the fine details today? No. Did I build a system that runs on hundreds of thousands of Linux systems across the world? Yes. If I were interviewed on the fine details of implementing some of these algorithms and data structures, would I pass? Not likely. Does this make me a bad programmer? Depends who you ask. If you ask the guy interviewing me, yes. If you ask my boss and the many customers who use my product, no.

Memorizing algorithms does not make you a good programmer. Knowing how to apply and when to apply them does. The hard part was done when the algorithm was created, not regurgitating an implementation.

Now you might say, hey, they should at least know binary trees! Sure, maybe they should have some knowledge there, but there will always be some algorithm you don't know. I would feel completely differently if Google said "Come ready to talk about these 5 algorithms and maybe implement one or two." That falls in line fairly well with the author of the article. At least then somebody can say "Hey, I don't have time" or "Sorry, not interested," and both sides can have their expectations in sync.

Now don't get me wrong. We are talking about Google and Amazon, so yeah, somebody there might need to know something about these algorithms. But I doubt they all do. In the few interviews I have had at these organizations, I was not even convinced that all the people interviewing me could pass the round of 5 had they interviewed each other. I should note that not all the people who interviewed me gave me that impression; I did meet some really nice people whom I thought I would enjoy working with.

I also don't think these companies take into account that some people just interview badly. There is a lot of stress involved. While I think people should be able to work under stress, the type of stress these interviews cause is something entirely different. I live a fairly stress-free and carefree life, but when I went to interview at Google my stress levels were so high that I felt sick and worn out for days after. What I am getting at is that I am presented with hard problems and short deadlines daily at work, and not once have those challenges caused me any real stress. On the other hand, trying to figure out what the interviewer was really asking, and cramming to memorize algorithms, caused so much stress that I did not sleep for days before the interview. It's not that I was up studying; it's that my mind was racing and I could not sleep if I wanted to. I just lay there in bed staring at the ceiling. I think they call that anxiety.

> Interview process: complete success.

It was a failure because they wasted everybody's time. Had they been more upfront about what they were looking for, he could have declined without it costing the company money and him time and stress.


I would feel completely different if Google said "Come ready to talk about these 5 algorithms and maybe implement one or two".

Very good point. The amount of memorization involved in the process is excessive. It'd be helpful if each candidate was told what subset of all possible problems they should focus on.


Binary-tree traversing is a rote memory sort of thing.

But the real value is in being able to analyze an application's performance and deal with concepts like recursion.

I also feel that anybody who has actually been productive on a large successful project does not have time to commit to learning things that can be looked up when needed.

Some important concepts aren't the kind of thing that you "look up" then read a one paragraph blurb for 3 seconds. The pitfalls of concurrency and the pitfalls that ACID transactions are supposed to protect against are two more examples.


> But the real value is in being able to analyze an application's performance and deal with concepts like recursion.

Implementing a delete on a binary search tree, or traversing one, has nothing to do with analyzing performance. If you want to ask questions that require recursion, don't make them dependent on algorithms that require rote memorization. Or at least have an alternative recursion question ready if the candidate says, "Sorry, I don't remember all the rules for this algorithm." Or even be ready, as an interviewer, to go over how the algorithm works and see if the candidate picks it up quickly. If you really wanted to talk about performance, you could talk about WHEN to use a given algorithm and WHY. You don't want to ask questions that have literally ONE (maybe two) correct answers. You don't want to ask questions where the answer to WHY is "because that is how the algorithm dictates it." You want to ask questions that are open to interpretation, where you can then ask the candidate why they chose that solution. That gives you much more information.

> Some important concepts aren't the kind of thing that you "look up" then read a one paragraph blurb for 3 seconds. The pitfalls of concurrency and the pitfalls that ACID transactions are supposed to protect against are two more examples.

Let's not conflate implementing algorithms from rote memory with not wanting to answer any technical questions. You can't just come in and start talking about ACID transactions and concurrency as if that is what people have been objecting to. Those are valid topics, and I doubt most people would have any qualms talking about them, because they are concepts, not rote memorization. You have to have a good understanding of the problems to know where the pitfalls are and how to spot them. Implementing an algorithm rarely exercises higher-level concepts; it's mostly regurgitating something like "traverse the left subtree of the root node, access the node itself, then recursively traverse the right subtree of the node ....".
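To be fair about how little is actually being recited there: written out, that in-order rule is only a handful of lines. A minimal Python sketch (the `Node` class and names are mine, purely for illustration):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order(node):
    # Left subtree first, then the node itself, then the right subtree.
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

# A tiny BST:   2
#              / \
#             1   3
tree = Node(2, Node(1), Node(3))
print(in_order(tree))  # [1, 2, 3]
```

Which is exactly the point: reciting it proves memory, not judgment about when a tree is the right structure at all.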

I think what most people are suggesting is that there is too high a penalty for not having been exposed to a given algorithm. It is not that these people are incapable of understanding the algorithm, or even of implementing one. They simply don't remember it. I think this is because a large number of people working at Google are fresh out of school; they literally have no experience other than the rote memory required to pass their classes (yes, I simplified that a bit).

I will say that I did find a correlation between the age of the interviewer and the type of question asked. The older 30+ and 50+ guys who interviewed me asked practical questions not tied to a specific algorithm. The youngest interviewer I had was a complete hothead who opened the interview by saying he was "not your normal interviewer" and so was going to ask the hard questions. He then proceeded to ask me a rote-memory question about an algorithm. Either I had memorized it or I hadn't; that is not a hard question. The hard questions have nothing to do with a defined algorithm. They are often open ended and leave lots of rope for you to hang yourself with.


If you want to ask questions that require recursion don't make them dependent on algorithms that require rote memorization.

Yes, surely! Have the algorithm explained, or even diagrammed for them with pseudocode.

Let's not conflate algorithm implementation from rote memory with not wanting to answer any technical questions. You can't just come in and start talking about ACID transactions and concurrency as if that is what people have been objecting to.

Let's not conflate the need for a good background with rote memory. I object to rote memory being rewarded by interview questions. I also object to programmers who maintain they don't need to know any of this stuff at all, because they can just Google it and spend a few seconds reading the Wikipedia article. Where in the Dickens did you get the idea I'm Captain Rote-man?

One purpose of training in a highly skilled field is to teach people how to avoid the egregious gotchas. Go ahead and read my other comments in this thread and on this site. I'm not the all conquering champion of rote memorization you seem to think I am.


My apologies. I must have misinterpreted your reply.

If I read them in a different mindset, then I can take them as a complement to what I said: that knowing how to analyze application performance, and concepts like recursion, are more important than rote memory, and that interviews should focus on things that can't be "looked up" and read in a one-paragraph blurb for 3 seconds. I can fully agree with that.

Once again, I am sorry I misread your reply and dumped a wall of text on you.


You might enjoy this Feynman story:

http://v.cx/2010/04/feynman-brazil-education


Thanks for this. I did enjoy it!


Also, seems to be some personality problems (although I of course don't know the guy). No need to go as far as binary trees.


> I learned my lesson two years ago, when Amazon tried to recruit me. I got an email from the company that said they were so impressed by my profile and couldn't wait to start working with me. They needed me, nobody else. I was naive, and the message did flatter me.

I learned mine with AWS as well:

Scheduled call. They forgot to call. Waited like an idiot for an hour. Ok fine big company yadda, yadda.

Had the phone interview. Liked me, called me onsite.

Before coming onsite I was sent the Leadership Principles and told to learn them, as I would basically be quizzed on them. I had 2 offers in hand already and was told they'd get back to me at most 2 days after the interviews.

Got to the site. Future manager who was supposed to interview me not there.

Whiteboard questions, solving some algorithm puzzles. "Tell me about your worst failure". Most of the people I talked to I would not even have wanted to work with. People did not seem happy and kept warning me about how hard it is to work there, and so on. (Subconsciously, perhaps, telling me to stay away?)

Lunch time comes, and I figure at least I'll eat lunch with my future team. So I wait, and wait, and nobody shows up. I ended up wandering the hallways exploring, hoping someone would ask me if I was supposed to be there.

Then I got a bit snarky after that and kind of gave up on the chance of wanting to work there.

Went home. It took them 3 weeks to call me. But I wasn't surprised by that point.


LOL, your experience was much like mine. They sent me a ton of stuff to study pre-interview, which I thought was an odd way to bias my answers (I attribute that to the recruiters trying to game the system to get more of their finds hired and look good). When I got there the door was locked, and I was trapped in an ante-room with another candidate, waiting for 45 minutes. Then interviews with a bunch of randoms, half from other teams. Only one whiteboard test, which I apparently didn't do well on.


Scheduled call. They forgot to call.

Back in the 90's and early 2000's in non Silicon Valley tech companies, this was Simply Not Done. I find Silicon Valley tech culture a little loose when it comes to interpersonal morals.


Shitty experience of nobody showing up aside, I know that at Google the hiring manager is deliberately kept out of the selection process, to reduce bias.


That makes sense then. But in my case I was definitely looking forward to talking to him and he was supposed to be there.


I think when a company reaches a certain size it is hard to expect that there is such a thing as "a manager looked at my github/qualifications and decided they need me so they will tailor the interview to my skillset".

At this point it feels like the interview process is becoming more of a combination of a hazing ritual and a lottery than something actually useful for ascertaining whether the person interviewed would be a good match for the position.

The longer the interview is, also, the more likely one will be discarded because one of the many interviewers is having a bad day, or because after several hours of whiteboarding one can understandably draw a blank on a simple question they would've waltzed through 4 hours prior and be failed because of that. How long before interviews also contain an anti-doping panel to weed out candidates trying to improve their odds?

It is strange though that in a country where there is at-will employment one is basically told that hiring the "wrong person" could destroy the company or something and so the interviewer has to make really really really really sure that the candidate is absolutely ok.

I personally think the risk of hiring somebody who can't cut it after three months and having to let them go is worth it, compared to passing on the candidate who doesn't do well at whiteboarding but will prove to be great when tasked with real business problems that take more than a few minutes to solve.


It is strange though that in a country where there is at-will employment one is basically told that hiring the "wrong person" could destroy the company or something and so the interviewer has to make really really really really sure that the candidate is absolutely ok.

What it does destroy is the recruiter's reputation and KPI charts.


I think when a company reaches a certain size it is hard to expect that there is such a thing as "a manager looked at my github/qualifications and decided they need me so they will tailor the interview to my skillset".

Why? As a matter of fact, I have a friend who met a manager at Burning Man, then was hired into Google in just such a manner. He described the Google interviewing process as "a breeze!" My reaction: What!?


Being an agency recruiter in 2017 is becoming impossible in that fewer and fewer people are even willing to talk to you, which is one reason I've diversified into resumes and coaching. I have almost 20 years in the industry, led a large successful users group for 15 years, have published tons of content for tech pros (often critical of recruiters), and have a pretty solid reputation in the industry. I write targeted and friendly approaches (no surprise calls at your desk), and I still have difficulty getting responses from people. I can't imagine how difficult it must be for people who don't have my 'cred' with developers, but the signal-to-noise ratio makes most people filter out all messages.

So for the most part, I've stopped reaching out to new candidates. Outreach is mostly futile, so my time is better spent doing other things to try and get potential candidates to come to me instead. It goes completely against what recruiting is built upon (always be recruiting, recruit everybody, source, smile and dial, etc.), but when people approach me the outcomes are much more positive.

When I got into recruiting (circa 1998), the primary skill of a recruiter was just identifying potential candidates. There was no LinkedIn, nobody had websites, no social media. The skill was calling into a company and finding out who the Java developers were by navigating different people, and in my case I did this without lying to anyone (there was rusing, but no outright lies).

Now identifying people is ridiculously easy (identifying actual talent is a little different), and so easy in fact that we've turned into an industry of spammers. This model doesn't seem sustainable, and I expect agency recruiters will continue to be replaced by better systems. Nobody seems to have gotten it just right to this point, but some are getting close.


I used to respond to just about every recruiter email/call. Problem is, as you noted, that so much of what I receive these days is spam that I've stopped bothering to reply to most of them.

Now, occasionally I'll receive a well crafted email that was obviously specifically targeted to me and I'm so impressed that I have to reply :)


Most of the responses I do get mention that the person replied because of the attention to detail in the message. Even the well-crafted messages still don't get much response; I imagine they're deleted by most people unless they are actively looking, or pretty set on doing so rather soon.


While 50% reply rate from engineers is possible with the right targeting/email†...

I agree with you–sourcing is not worth it. Attracting people is much better.

†have made this happen with multiple clients


I don't know how it has changed since I left but when I was at Google and interviewing folks the idea was to find smart people who could get things done and then figure out what to do with them if they decided to join. As a result it was literally impossible to have your "future manager" interview you, because that person was only identified after you accepted[1].

But "recruiting" in the sense the author discusses is really more like a lottery for people calling themselves recruiters than it is actually finding of talent.

There are a number of these people that email hundreds or perhaps thousands of people, while simultaneously creating a "resume" out of information they found out about them online and submitting that to companies. Then when the company says "we'd like to see Bob" then they go back and figure out who Bob was and they contact Bob and try really really hard to get Bob to take an interview with the company, and if this match works they pocket anywhere from 15 to 25% of Bob's annual salary as a finder's fee.

So, low-overhead work that can be pretty easily automated, with the occasional outsized payout? That is the recipe for a fishing business, and that is what we get.

I got an email from a friend who had my "resume" cross their inbox. They emailed me to tell me my resume skills had really plummeted, what with a bit of my LinkedIn page, some Github repos, and snippets of broken English mashed together. I tied the source back to one of the "recruiters" who had sent an email saying they had companies asking for me to apply (no, I didn't respond). We speculated about whether the resume mashup had been done in-house or if there was some Turker somewhere doing 'resume assembly from accessible data'.

Bottom line is that there is money to be made so people will come out and try to get that money.

[1] There were some exceptions of course and certain skills or disciplines but it was the general rule.


As a hiring manager at Amazon, I'm super excited to talk to candidates. A new job is, after all, an incredibly important life decision on the same level as getting married or buying a house. And as a manager, making sure I hire the right people to build the kind of diverse, respectful, collaborative, representative, professional, mentoring, balanced and focused team that I can is the most important thing I can do on a daily basis.

In fact, I spend a non-trivial amount of my day personally looking for, and reaching out to, prospective candidates, with no recruiter involved.

With that in mind, Amazon recruits so widely that I couldn't possibly talk to every prospect; there just aren't enough hours in the day. So I think it's important to have recruiters doing the basic screens, and am delighted to take over from there.

Anyone who wants a job at one of the big companies should understand that there's going to be some bureaucracy on the intake side just because the volume is unbelievable. If you feel like you've had a bad experience with Amazon in particular, please reach out and let me know and I'll see if there's anything I can do about it.


That NYT expose has scared anyone you would want to hire away from your pipeline altogether.

https://www.nytimes.com/2015/08/16/technology/inside-amazon-...


That article does get raised quite a bit. Fortunately I've never seen behavior like that in my org. If I ever did, I would escalate it all the way up to jeff, because working like that would be unacceptable and frankly incompetent.


The recruiters at Amazon are pretty bad in general. They don't give enough details about positions, salary, or anything else to make it worth discussing. I get contacted a few times a year and I always ask to just be added to the 'do not contact' list, but then the next year I'm always contacted again. At this point the teams at Amazon are such a crap shoot that it's not even worth talking to them (plus the pay is trash most of the time if you don't want golden handcuff equity). I always turn them down, but even that first contact is a pretty bad experience.


Nice! You've managed to stick almost all the corporate recruiting empty buzzwords in a single message.


That's not very charitable. I get where you're coming from, but in this case, I assure you I mean every word, and I'm happy to put candidates in touch with my direct reports so they can verify that for themselves.


To contrast and say something surprisingly positive about Amazon. A local Amazon engineering manager looking to hire people sent me a note and suggested we discuss over a coffee. While I didn't want to pursue the opportunity I definitely left with a much better opinion of Amazon.

Also contrast with Microsoft when I had a series of technical interviews with members of the team I would be working with. I ended up going with a different opportunity but again I left with a good impression.

The Google hiring process sucks IMO. I've been approached a few times. I recall one phone coding interview where the Google engineer was condescending and trying to prove he's smarter than me. It seemed like he was in a bad mood or something, that can happen I guess. In a later recruiting attempt a Google recruiter called me, said he'd follow up, and then left Google to go work for Apple without having anyone follow up with me. Yet another recruiter, when he heard what area I'm interested in, said I've no chance because everyone in Google wants to work there. One thing that should automatically give you pause is a recruiter in Texas while the jobs are in California. Clearly that recruiter doesn't interact closely with the teams he's hiring for.

Google has gotten away with this for so long because they can be picky. Every open position has 100's of candidates. But times will change. They certainly have a lot of good people but they also have a lot of less good people. I've worked with some ex-Google people and they're really just like everyone else. Conversely there are lots of very good/smart people who don't work for Google...


This also solves another problem. Many orgs have their recruiters slot a candidate into a particular role really early on in the process (possibly even before the first phone screen). So you're not applying for a job at FooCorp, you're applying to be a Level 2 Engineer on the WidgetSearch team. By the time the third interview finishes, it turns out you're a bad fit, and the process stops. This doesn't mean there's not a great role for you inside the company, just that the role you were tagged for in their recruiting software wasn't it.

By having the future direct manager involved early, they can hopefully say "I need an algorithm person, not an OOP person. Go talk to Carla, she's been looking for someone like this" sooner.


They should ask you whether you want just to join the company in any role or you want to be working in a specific role.


Regarding his canned response:

> Thanks for your email. I'm very interested indeed. I have nothing against an interview. However, there is one condition: I have to be interviewed by the person I will be working for. By my future direct manager.

This kinda misses the point. I can't speak for any companies besides Google, and this is all my personal sense and not the company line, but anyway: speaking with a hiring manager who wants you is actually less useful than speaking to a disinterested stranger. Hiring at Google is done by committee and actually regards feedback from a hiring manager with some suspicion. Here's why:

Google allows for a lot of mobility. Internal transfers are common, projects are scaling up all the time, sometimes new engineers don't gel with their teams and want out, and we strongly prefer to keep people within the company when projects scale down. Given this, hiring an employee solely on the basis of an enthusiastic manager's word is a recipe for lowering the overall level of engineering. Even if Yegor had the strong support of a manager who wanted him, he'd still have to go through the same raft of interviews and be judged by the same hiring committee.

Alright, fine, I hear you say, but if a manager likes you, surely that's still a positive signal? Well, in the best case scenario, sure: you're talented, the manager is wise enough to spot a good engineer when they see one, and they express their assessment honestly. But the best case scenario involves the manager acting as though they were a disinterested outsider, so why not get the opinion of an actual disinterested outsider? However, what if the manager is short-staffed and just needs whatever help they can get to meet their goals? What if the endorsement is made on some biased basis, such as going to the same school, or an assessment of talent in an area unrelated to engineering?

For all their warts, whiteboard-based interviews are the state of the art in assessing the capabilities of a large number of candidates within reasonable time and cost. They're not foolproof, and smaller companies can afford to (and should) use more bespoke methods. However, at the scale of Google and friends, they're unfortunately the best thing available. But I won't lecture hacker news on scale. We all know expectations have to be adjusted as systems grow.


You miss the point of the article. Yegor doesn't want to be put on a team using technology completely orthogonal to his expertise. He isn't interested in spending 3-5 hours per week for a few months to cram on algorithms study so he can use it once on his interview day.

Many other companies hire by role; there is no ambiguity about what a software developer will end up working on. Facebook and Google use centralized hiring. Yegor's point is that this uncertainty regarding team placement is a strategic disadvantage.

Despite what you might think, Object Oriented Design isn't going away anytime soon. There is no need for Facebook or Google to prematurely optimize away the risk of an obsolete engineer at the expense of rejecting domain experts for an important project.


Personally, my biggest issue with interviewing at large companies like these, where you're interviewed by people you aren't necessarily going to be working with, is that not everyone is in a position to interview people. I personally hate interviewing other people and having to be responsible for their futures. I was never trained for it, and I don't ever want to deal with it. I just want to code and make sure the product I'm working on is coming along smoothly.

I've had countless interviews where I could immediately tell upon entering the room that the interviewer does not want to be there at all. It's a lose-lose situation: for me, because I get demoralized instantly, and for him, because he's already got a chip on his shoulder for having had to do this in the first place. Everything I say from that point on is an uphill battle.

I think only managers/the very few people who are truly trained should interview people, as that is kind of the essence of a manager's job: making sure they have a happy, solid team. I don't even mind the technical questions. I just want to be in a room with someone who is actually potentially excited to have me there, not grilling me in a depressed fashion.


Facebook has a "shadower" who is in your room just to learn to be a better interviewer. A shadower pairs up with the interviewer for each round.

You can't just say inexperienced interviewers shouldn't interview; how would anyone ever get experience?


The worst part for me is how long the process takes, and how much time is invested for an interview process where the success rate is so low.

My recent experience with AMZN was:

- get contacted by recruiter, schedule a call with recruiter a few days later

- take a take-home coding test a few days later

- talk to the recruiter again a few days later, who tells me I did well on the coding test

- talk to another recruiter a few days after that, get scheduled for an all-day in person interview 4 weeks in the future

- cram Cracking the Coding Interview for 4 weeks

- go to the interview all day, hear at the interview that I was close but didn't pass; they recommend I try again in 6 months.

All in all, that's a pretty big time & mindshare investment.


I remember doing a phone screen with a company who shall remain nameless, but it was a phone company that has a three letter name and two of those letters are the same :-). It went pretty well, and I predictably got no response back from them. A month or so later I accepted another job, moved my family across the country, and started the new job. About 2 weeks into my new job, Three Letter Company's recruiter calls back and says the phone screen went great! Can I come in for a round of in-person interviews?

I LOLed over the phone, it was a riot.

EDIT: On the other extreme hand, it's a HUGE turn-off when a recruiter makes an incorrect assumption about my time availability in the initial contact. Stop me when you've seen this one: "Your background looks great! Please tell me what time slots today would work for you in order to discuss this role." Wow, really? You're assuming I have time to call you RIGHT NOW?


Maybe not for many people here on HN, but for a very large number of engineers, a job opportunity at Google/Amzn/MS/FB etc. is the once-in-a-lifetime type. I have not met anyone in my circles who would balk at the interview process there.


for a very large number of engineers, a job opportunity at Google/Amzn/MS/FB etc. is the once-in-a-lifetime type

Meh. Being a developer is being a developer. Not to say that certain specific companies don't sometimes have specific challenges or perks that would be of unique interest to somebody, but by and large, there's no real reason to think of Google/Facebook/Amazon/$anybody_else as a "dream job."

I used to think that way, and after working at two of my "dream job" companies, I've realized that it's pretty much never what it's cracked up to be.

Really, the world is SO much bigger than just GoogBookHooSoftCart... I'd encourage people to NOT put those companies on any kind of pedestal.


Your best opportunity as a young engineer is always with a startup with smart leadership. You'll get to do far more and learn far more. Google/Amazon/FB/MS etc are full of smart people and offer a more 9-5 type job, but you will find it much harder to do interesting things.


When I graduated, I knew the people at Google/Amazon/FB/MS were smart and that the job would be okay at the very least. How should I have gone about figuring out which ones were "a startup with smart leadership"? I remember being overwhelmed by everyone; everyone came across as smart.

So I counter your advice by saying the best opportunity for a young engineer is likely a large company that is well-known for having generally smart people. At the very least, I learned what I did and didn't want for the rest of my career.


Interviews are a two way street. For most companies you should be able to talk to the hiring manager and other developers that you would be working with. Ask yourself whether as a junior developer if you will be able to learn from them. This includes technical skills, soft skills, and personality.

From my own experience: I interviewed with a small company right out of school where I would have been the second or third developer (I forget now). One reason I did not pursue it is that I seemed to know more about development best practices than their existing developers; for example, they did not use source control at all. While I would have had the opportunity to make a big impact on the product, I don't believe I would have grown much.


Other than the obvious ageism problem, you could just cross out "young".

This is assuming that smart leadership equals no deathmarches. Deathmarches are for the young and gullible.


From my experience, either your profile is deemed interesting and you'll get contacted by these companies from time to time (Amazon being the most aggressive) or they'll never get back to you, even if you try pretty hard.

There isn't really a "once in a lifetime" email from a recruiter. Once you've interviewed with them, you'll have plenty of opportunities to interview again.


If Amazon (or probably anyone for that matter) has some reason to believe you have a short timeline, you can expedite all this. Both times I interviewed with Amazon (I boomeranged and worked there in two separate stints), I had other loops/offers from other co's, and went from initial contact w/ Amazon to an offer in ~4 days.


Wow, you're lucky they even told you at the interview that you were close. Google can take many weeks to get back to you, from what I've seen.


My last interview loop I got the feeling that some companies just say it's 'close' to get you to apply again in 6 months. Unless they give actionable, solid feedback (FB always does, at least with me), I consider it a binary.

Story time: At <X>, I knew after the culture fit round that I wasn't getting an offer. I had middling coding rounds and a terrible culture fit round with the hiring manager. The recruiter claimed that it was close and the team was arguing about the hire, while the person that referred me let me know that they were passing because the hiring manager vetoed everyone with a 'no hire' about 2 hours after I left their building. I had no qualms with the reject, I truly felt a misfit in the interview, but the recruiter play was funny. Someone reached out to me 4.5 months later asking if I wanted to interview :)


I'd also like to add my own experience dealing with recruiters from Amazon, Facebook, Google, Netflix.

They are liars. They will most definitely e-mail you whatever it takes to get you into the first part of the funnel. They'll tell you it's a "special project" or try to make it seem like somehow you're exactly what they are looking for. It's not true; I've been down this route a few times and have easily been able to spot the deception by the second phone call. I do wonder how many brilliant people are abjectly turned off by these tactics and don't make it into these companies. But I rather think that's a good thing; call it bio-diversity. No one company should get too many very smart people.

And, unlike the OP, I can actually solve the bizarre algorithm questions they ask, although I 100% believe they tell you absolutely nothing about a person except their ability (read: desire/motivation) to read and comprehend a basic text on the subject. No, you don't need this knowledge (unless specifically required) to be effective at your job, despite what others here are posting about.

Tech companies, once they reach a certain size, become little more than meat shops: corporate behemoths, not unlike other large companies that run the same dog and pony shows but for different kinds of human talent and capital. This is not to say they can't be good places to work, or have cool and creative teams within them, but your chances of getting hired into one have nothing to do with the recruiting process, unless the hiring manager or manager of a specific team reaches out to you directly. If it's a recruiter doing the reaching, you'll be forced through the opaque funnel and likely lied to or manipulated in some way. It's just the nature of their job.

By far the best way to find a great job is to:

If you don't already have contacts in the industry or niche you want to work in: Go to work for a startup in the space to build your network and domain expertise.

If you do have a contact, have this person refer you from the inside. In many cases your interview process will be completely different. Following this I've actually not had to deal with recruiters much at all; maybe only as a formality.


No, you don't need this knowledge (unless specifically required) to be effective at your job, despite what others here are posting about.

Pretty much like how the Asiana Airlines pilots didn't really need to know how to manually land a plane.

https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214


That's a ridiculous analogy.


A few years ago I decided to never do these algorithm, fizz buzz and whatever tests again in order to get a job. These tests have almost nothing to do with my daily work as a software developer.

I could quite easily ask the interviewer/dev to solve a seemingly simple algorithm where I'm almost 100% sure they would fail, even if I gave them 6 hours to solve it.

I've seen so much horrible code in my life, there is no fizz buzz test that can prevent you from that. I am wary of companies doing these tests as it tells me they have no clue what my work is about.

If a company wants to hire me they can look at some of my production code and the live results of it. If that is not good enough for them I'm not interested.
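(For scale, the kind of fizz buzz screen being dismissed above really is tiny. A throwaway Python sketch, purely for illustration:)

```python
def fizzbuzz(n):
    """Classic screening exercise: for 1..n, emit 'Fizz' for multiples of 3,
    'Buzz' for multiples of 5, 'FizzBuzz' for both, else the number itself."""
    out = []
    for i in range(1, n + 1):
        word = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(word or str(i))  # empty string is falsy, so plain numbers fall through
    return out

print(fizzbuzz(15)[-1])  # FizzBuzz
```

Which is exactly the point being made: passing it tells you almost nothing about someone's day-to-day work.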


> I should have told her that I didn't want to be interviewed by some programmers, because I would most certainly fail. There was no need to try. I wanted to be interviewed by the person who really needs me: my future boss. That person will understand my profile and won't ask pointless questions about algorithms, simply because he or she will know what my duties will be and what kind of problems I will be capable of solving, if they hired me.

While I agree with the author about algorithm questions being relatively pointless, my sense is they don't know about the team matching process. After I interviewed with Google, they presented me with a list of eight teams who were interested in me and asked me to rank them in order of preference. They then had me come back a second day and meet with the managers of the top four teams I selected. At the end of the day, I ranked the teams again. The team I ranked the highest who also wanted to hire me was the team I would have joined if I had accepted my offer.

I much prefer the current process where you have one day of general interviews, and then go back a second day and just talk with the managers of the teams you would be interested in. It would only make the process much more of a hassle for everyone involved if you had to be interviewed by the managers of every team for which there was mutual interest.


The lesson based on your experience could be the advice you gave. But it could also be:

Don't get your expectations too high. If you have achieved something in your programming career already, you've done it. Something to be proud of. "Bad" interviewing practices of top companies, and the outcomes that follow, don't and shouldn't undermine your self-worth. Just take the whole interviewing experience as a chance to meet new people, have a mentally stimulating exchange, and that's it. If they hire you, great; you were able to capitalize (in one very specific, and by far not the only, way) on your achievements. If they don't, you go back to doing the great stuff you're already doing.


> Just take the whole interviewing experience as a chance to meet new people, have a mentally stimulating exchange, and that's it.

You're not wrong, but if one needs to put a premium on their time it's a little more complicated than that. Particularly if you need to take time off and travel for an interview, it'd be nice if the whole thing was more than a minor social experience.


Yeah I should've mentioned that. Well I guess I did in a way when I started by saying either the lesson from the blog post or the lesson that I talked about.

To summarize, if you put a premium on your time then we reach the conclusion in the blog post. If you are willing to reduce the premium, then we have the alternative conclusion that I suggested.


At Netflix I was interviewed by the manager I'd work for (in fact, I was interviewed by two managers, as I had the option to join one of two teams). As part of recruiting I had many conversations with my potential manager about what exactly I'd do, and how my skills can be best put to use. I ended up picking Netflix, and it's been great working here. I wish more companies interviewed in this way.

I've had the other type of experience at a major tech company, similar to what was described in this post. Interviews that were generic and not focused on what I'd been told by the recruiter the job was. I spent all day exercising brain cells from University classes a decade ago, rather than my industry experience. I remember feeling odd about it afterwards: that's the first time I've had to recall those skills since University, yet I've been working in this role for a decade and haven't needed to remember that doing the actual job...


I've been thinking for a while about trying to transition from mechanical engineering to software engineering, but are all the interviews as miserable as HN makes them seem? I feel like this interview structure is a form of deterrent. I had a quick call with one of the recruiters from Apple for a hardware engineer position. He said the interview would take about a week and that I'd need about a month to prepare... I didn't call back because that doesn't even seem reasonable to me.

I've interviewed many times (for mechanical engineering positions) at companies from big (10K+ employees) to small (<50 employees), and they usually involve a one-day remote and a one-day on-site interview. There would be a couple of technical questions and some technical discussions, but nothing like "solve this Navier-Stokes equation for creeping flow from memory." I don't think any competent manager would even ask me that.


There is a big difference between computer science as taught and software development in practice. These hiring practices are there to weed out people who just can't design applications or write code with any proficiency.


But can't that be said about every discipline of engineering? I could be wrong, but these hiring practices seem particularly prevalent in software roles compared to the other major disciplines of engineering. I'm more curious about what is different about software engineering/development roles that requires these methods. The number of applications or the size of the company doesn't seem like a logical reason, since I've seen equally sized (employee-wise) companies weed out and select for specific roles pretty well (relatively).


Places like Google give you a guide on how to prepare, so it consists of stuff like algorithms and data structures. I do wonder sometimes how much it is tailored to getting people who are fresh out of college rather than more experienced. My take is that you basically have to spend time preparing and studying to get past those types of questions; then you finally talk one-on-one with the people you'd likely work with, and it's smooth sailing for experienced people. But the entry bar can be pretty tough to get past with zero prep.


Why should skilled developers with extensive track records in their field be forced to use their free time to cram for college-style exams? Other professions dealing with exact sciences/engineering don't seem to have this problem.


While I agree with the sentiment, I wouldn't call it "college-style exams", to be honest. Colleges would be quite different if exams were conducted one-on-one, and the instructor were willing to talk you through the problem and cared more about your train of thought and approach than the exact solution. In fact, I am pretty sure that if you just knew the correct solution and the answer to the coding interview and simply wrote it down silently on the whiteboard (which would be the best outcome on a college exam), you would be rejected instantaneously.


>simply wrote it down silently on the whiteboard (which would be the best outcome on a college exam), you would be rejected instantaneously.

No, jesus. Some people need to switch to a presentation mindset and out of a problem-solving mindset. This is completely normal in life.

I'm starting to think most interviewers are terrible only for not having a decent amount of social interaction with strangers. Let's just admit it's 90% hazing ritual at this point so we can get on with putting up guides to get through it.


Oral examinations used to be fairly common, back when the student/examiner ratio was low enough. There's a legend that the Cambridge "Tripos" examination gets its name from the tripod that candidates sat on during that examination:

https://en.wikipedia.org/wiki/Tripos#Etymology


I used to interview for Google. If a candidate knew an answer immediately I'd be impressed, but I'd always have a harder variation in my back pocket.

And if you write out the answer in silence, be prepared for a "tell me how it works" follow-up.


Mostly false. Other areas have professional exams, but they're generally industry standards rather than given by the hiring company. This is more efficient if you're a candidate, since you only need to take the test once. (Or more times if you don't pass the first time.) My housemate just got a professional engineer's certificate after working in the field for about five years.

Of course, if we had this in swe land, it would probably end up looking a lot like those silly certificate programs in soon-to-be-obsolete tech of yesteryear...


This is actually a brilliant idea.

> soon-to-be-obsolete tech of yesteryear

Not if it focuses on things like basic algorithms and an understanding of basic data structures.

Then it would be timeless.

And if that makes employers more confident in my abilities, then I will gladly do that.

Because it's a lot easier to study for a standard test than a bunch of different tech interview scenarios.


We already have this, it is called a CS degree.


I agree. If you take the time to interview at Google or Facebook and you actually want the job, it's 100% worth it to bone up on Cracking the Coding Interview as well as the papers they have published about their systems.

Cracking the Coding Interview is damn near encyclopedic, and many interview questions you are asked that have a "trick" to them will probably have something you can adapt from the book. Go in knowing most of the tricks and it's a huge edge.


From what I see around me, there are only 2 ways to get hired by Google. Either you are a fresh CS graduate, or you need to take 2-3 months off for fulltime study.


You probably didn't intend this, but your comment reads like you're trying to give advice to the author on how to pass the screening. The author explains how these initial mails set wrong expectations. They specifically mention having zero interest in learning the algorithmic skills tested for in the screening process.

It's a dishonest approach and the response laid out in the article works just fine to weed out shady recruiters doing this.


You aren't going to interview with your future manager because they don't know who your future manager is. They need to hire hundreds of people, they aren't going to try to get separate candidates for each role, and they aren't going to have you interview with a hundred different managers.

My company is experiencing this as we grow; we have too many openings to have each team try to recruit for its open spots. We need a more general queue of talent to hire from, so that each interview we do can count toward 10 open spots.


> You aren't going to interview with your future manager because they don't know who your future manager is.

I generally agree when it comes to cold resumes. However, in this case, OP is talking about being actively recruited. In that case there ought to be a specific manager that was looking for a specific type of person. Otherwise the recruiter is just wasting everyone's time.


I recently interviewed and accepted a position at Amazon. Prior to the interview the recruiters did a great job of preparing me for what to expect. My direct manager was also part of the interview team, so I got to meet them. Overall it was a very pleasant experience, which is part of the reason I decided to join.


This is a classic example of what we call a blogrammer and self-serving bias.

Classic CS (algorithms, data structures, and the basics of FP) is a must. The questions about subtleties of C++ syntax are, perhaps, less reasonable (though they are aimed at catching an experienced dev rather than a self-proclaimed wizard), but ignorance of basic CS is a major red flag.


I get so many obnoxious unsolicited recruiter emails that I have set an email autoresponder that triggers when the word "opportunity" comes in from someone I don't know. Here's the template:

-----

Hi,

I get a lot of messages like these. Since time is valuable, I promise to fully read your original email, plus any job web pages that describe the opportunities you are hiring for, for $100. You can PayPal me at <your email here>.

For $500, I'll even do a coding exercise or come in for an onsite interview.

Cheers,

<Yournamehere>

-----


The interview processes at a lot of those big companies favor fresh university graduates - Other than that, they are completely random. I bet that if they forced existing employees to take a slightly different variation of the test again, most of them would fail - Someone could be an expert with graph algorithms, but if a question popped up about decision trees and minimax adversarial search, would that graph expert still be able to solve the problem in time?

My personal experience with the interview process at a couple of these companies is that even if you CAN solve the problems, you will usually run out of time - Unless you practised that specific problem recently.

Also, they tend to favor engineers who can come up with solutions quickly as opposed to engineers who can come up with optimal solutions.

They could look at stuff like open source projects you created/worked on, past successful companies you've worked at, etc... But no, they don't care about that; instead they prefer to have this random selection process which favors experts in specific algorithms or thinking styles which often have nothing to do with the company's core business.


Another data point of ridiculousness:

I fairly often receive emails from recruiters who have 'read my profile' and found me to be 'a great fit' for some 'senior ...' role.

Had they actually read my profile, they'd have found me to be a student whose only relevant experience is as an intern, and who has accepted an offer for a graduate position.

Certainly not a fit for senior anything, but it makes me wonder how far it would go if I replied OK - do companies using these recruiters get hopelessly underqualified candidates to interview on a regular basis, and still stick with them?


> There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

They are just looking for smart people. If you ever looked it up, it's 5 lines of code. That's the whole point: they're not forcing you to learn something

a) so esoteric you will never encounter it in your programming career, or b) so complicated that you'd have to spend days practicing it.

In my opinion a lot of these questions are disguised IQ tests wrapped in algorithms since they can't blatantly ask "brain teaser" questions anymore.
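(For what it's worth, the "5 lines" claim roughly holds up. A recursive in-order traversal in Python; the `Node` type and names here are illustrative, not anything from the thread:)

```python
from collections import namedtuple

# Illustrative binary tree node; a real codebase would have its own type
Node = namedtuple("Node", ["value", "left", "right"])

def in_order(node, visit):
    """Visit every node left-root-right; the body is about five lines."""
    if node is None:
        return
    in_order(node.left, visit)
    visit(node.value)
    in_order(node.right, visit)

# For the tree 2 with children 1 and 3, in-order yields sorted order
tree = Node(2, Node(1, None, None), Node(3, None, None))
seen = []
in_order(tree, seen.append)
print(seen)  # [1, 2, 3]
```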


You mean like the famous invert-a-binary-tree whiteboard exercise that the Homebrew author failed?

Which led them to refuse a job to the guy who actually developed software used by those with higher IQs to come up with tree-inversion algorithms?
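(For reference, the exercise in question, swapping every node's children, is tiny once stated. A hedged sketch; the `Node` class here is made up for illustration:)

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def invert(node):
    """Mirror the tree in place: swap left and right children at every node."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

t = invert(Node(1, Node(2), Node(3)))
print(t.left.value, t.right.value)  # 3 2
```

The hard part on a whiteboard is recalling the shape under pressure, not the code itself, which is arguably the parent's whole complaint.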


Yes, I completely agree with you that their recruiting system has a lot of false positives, but I was just clarifying that algorithm questions are really IQ questions, which, as you rightly point out, is a flawed proxy for predicting how well someone will perform in their role.


>If you ever looked it up it's 5 lines of code.

Multiply those 5 lines of code by all the possible algorithms they could've asked about instead of this one. That's what you need to know walking into the interview, which seems kinda stupid.


They can definitely ask brain teasers, they decided to stop because it proved useless.

Some questions are fair, but there's a reason most candidates spend weeks/months practicing. Good luck pulling off something like a topological sort if you haven't studied the algorithm recently.
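(The parent has a point: even a textbook exercise like topological sort takes real recall under pressure. A minimal sketch of Kahn's algorithm, with illustrative names, assuming the graph maps each node to the nodes it points to:)

```python
from collections import deque

def topo_sort(graph):
    """Kahn's algorithm: return the nodes of a DAG in dependency order."""
    # Count incoming edges for every node
    indegree = {n: 0 for n in graph}
    for targets in graph.values():
        for t in targets:
            indegree[t] = indegree.get(t, 0) + 1
    # Start from nodes with no prerequisites
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for t in graph.get(n, ()):
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(indegree):
        raise ValueError("graph has a cycle")
    return order

print(topo_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))
# ['a', 'b', 'c', 'd']
```

Simple enough once you've seen it; very hard to reinvent cold at a whiteboard.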


> Clearly, I'm not an expert in algorithms. There is no point in giving me binary-tree-traversing questions; ...I'm trying to be an expert in something else, like object-oriented design, for example.

I've always been surprised by the focus on algorithms in interviews. Maybe I'm an outlier but in my entire career I very rarely have to implement any complex, low-level algorithm - usually I can just use a standard library. But almost every day I'm using object oriented design to refactor code or meet new business requirements.


It's like this person was inside my head. I agree basically word for word and do the same thing; I cannot be bothered anymore. Like he suggests: if they're telling him his profile is so great and "impressive", why must they schedule a regular interview and reject him on a whim? Waste of time.


Yeah it kind of sucks, but from Google's perspective it's absolutely necessary. So many people talk the talk, run a blog, have a neat looking resume/website/GitHub and just cannot perform. Unless you're an undisputed rockstar in a specific area and they're hiring exactly for your expertise you can't expect anything else. It's a huge waste of time to custom tailor to each candidate when 90% won't receive an offer.

> There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

Take an afternoon and skim Cracking the Coding Interview before applying. There's no reason a competent engineer shouldn't be able to solve questions like that. You know exactly what's expected of you, show some initiative.


> Take an afternoon and skim Cracking the Coding Interview before applying. There's no reason a competent engineer shouldn't be able to solve questions like that. You know exactly what's expected of you, show some initiative.

I think the core issue here is whether the prospective employee should have to show some initiative. My initial reaction is that of course they should, but when you're fielding job offers from numerous companies, some of which don't require you to go out of your way to pick up new knowledge specifically and only so that you can pass an interview, why not just dump Google and go somewhere else? A few years ago Google was probably the most desirable place anyone could work, I really don't think that's true any more.

Overall, it sounds like the system is working. Developers that don't much care for Google's methods won't go and work at Google. Those that are very passionate about working at Google for the sake of working at Google will jump through those hoops.


Disclaimer: I have been interview-trained twice at Google, but I don't like the interview process. Although I work there, my opinions have nothing to do with Google; they're just my own.

I think the process selects well for new grads who have recently completed algorithm courses.

I think the process selects well for people who are comfortable with doing work on a whiteboard.

I think the process selects well for people who are motivated enough to work at Google that they read and study books like you mentioned for weeks, and do a bunch of practice self-study interviews before coming.

Whether that set of people overlaps significantly with what makes a company do well -- I don't know. I'm not sure.

But I am pretty sure that the reverse assumption, that people who don't pass those bars are not good candidates, is an arrogant position that only a company of Google's standing and size can/should get away with.

What I don't get is the trend of small companies, startups, and the like copying this interview process. It is neither a match for them nor a way to find the kind of motivated, culture-fitting, creative people you need in a smaller company.


>I think the process selects well for people who are comfortable with doing work on a whiteboard.

If I am not mistaken, you can choose to do the exercises on a Chromebook in an IDE instead of the whiteboard, so there's that.


> There is no point in giving me binary-tree-traversing questions; I don't know those answers and will never be interested in learning them.

I'm with him up until "never be interested..."; that's a piss-poor attitude. I've never needed to know big-O complexities for work, and I've only scraped the surface of needing/using various data structures, so I get annoyed at some of the heap/tree trivia questions as well. But if I were in a situation at work where I needed a tree, of course I'd stop coding and read up on how binary trees work and use them to my benefit.


You can't figure out whether someone can perform by asking algorithm questions; that's rather a waste of time. Instead, they could be given a task to write some real code to solve a certain problem, and not in 2 hours but within a day, say, and then evaluate the result. That can show how people perform. Not puzzle questions about problems that people spent years breaking their heads over to crack.


Those are not at all the questions asked. They don't ask you to prove A*; they ask you to filter a binary tree based on some predicate. It's much closer to real life than people on HN like to admit: your coworker wrote some code using some data structure and you have to figure it out and modify it.
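(One plausible reading of that kind of question, collecting the values in a tree that satisfy a predicate; the `Node` class and names here are made up for illustration:)

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def filter_tree(node, pred):
    """Return all values in the tree satisfying pred, in pre-order."""
    if node is None:
        return []
    hit = [node.value] if pred(node.value) else []
    return hit + filter_tree(node.left, pred) + filter_tree(node.right, pred)

tree = Node(4, Node(2, Node(1), Node(3)), Node(7))
print(filter_tree(tree, lambda v: v % 2 == 0))  # [4, 2]
```

Nothing exotic, which is the parent's point: it's closer to "read and modify your coworker's code" than to proving theorems.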


"Cracking the Coding Interview" doesn't seem like a book you skim. It seems like a book you buy a gigantic whiteboard for and stay up until 3AM every night for 6 months with it.

But maybe I'm just not very smart?


> Take an afternoon and skim Cracking the Coding Interview before applying.

First, I know this isn't a black or white issue but the problem I have with this is that I'm wondering if they are hiring people with the skills they want or are they hiring people with the skill to succeed in their test.

In theory, they should be the same. In practice... I don't know.


I'm convinced that they are. Sure, it's not perfect, but the set of qualities that impress me in a whiteboard are generally the same qualities that I look for in an engineer. Can you communicate well? How do you approach problems you've never seen before? Can you write readable code without thinking too hard about it?

I'm not really selecting for people who are really, really good with binary tries -- I'm selecting for people who display those qualities, the question is pretty much incidental.


Today, I read something very interesting on the Slack engineering blog on an older post - https://slack.engineering/a-walkthrough-guide-to-finding-an-...

Many candidates think they need to find someone currently at Slack to “get their foot in the door.” Rest assured this is not the case; in fact most of our hires have come from people who have applied via our careers page. We take all applications seriously. We care deeply about diversity at Slack and when you only hire from your current employees’ networks, you tend to get a homogenous set of candidates.

Kind of the opposite of how Google hires. I wonder which one is the right approach.


I've turned down actual offers because even some medium-sized companies won't tell you which team you'll be working on when the offer is extended.

I get that things can be fluid, but it's such a meat machine.


That is when you know they just see you as another cog.


These perspectives are very negative. An alternative view is that there are tons of interesting problems and projects and applications to be worked on, and to some degree at least you get a say in which ones you work on.


I did a second phone interview at Amazon once. I have several years experience and understood the algorithm for the problem fine but got tripped up on the syntax a little. I was interviewing for a specific team and I could tell the interviewer was just looking for an excuse not to hire me. Perhaps I dodged a bullet in not getting on that team, but now I have to wait before I can apply anywhere else at Amazon.


The amount Amazon spends on airfare for these wasted trips must be staggering. I'm in Chicago and I don't know very many people who haven't gone on this strange pilgrimage to Seattle.


Amazon has free shipping with Prime.


> some programmers who didn't know a thing about my profile asked me to invent some algorithms on a white board for almost four hours.

Yep. Did the phone interview after weeks of back-and-forth, told them what I was interested in, told them I was only interested in working on such-and-such, then they drop me in the standard "write memcpy for us" interview. The last interview was some dude with a weirdly confrontational attitude who wrote some assembly on the board and said "Find the bug. It took us 3 months."

If someone like David Presotto or Rob Pike want to recruit me directly, fine, but after similar experiences with Amazon I've decided the typical mega-startup interview process is for the birds.


> The last interview was some dude with a weirdly confrontational attitude who wrote some assembly on the board and said "Find the bug. It took us 3 months."

Not great, but there is a world of difference between, on the one hand: (1) there is a bug; (2) it's in these 30 lines of code, and on the other hand: something's wrong, and the problem might be somewhere in your two million lines of code.


> The last interview was some dude with a weirdly confrontational attitude who wrote some assembly on the board and said "Find the bug. It took us 3 months."

Sounds like he's narrowed it down for you! But if it's in asm, the bug is usually just a typo.


Assembly? Did you apply for some low-level embedded-systems job? For most software engineering jobs, my jaw would drop and I'd freak out if someone asked me to look at an assembly bug. I probably would look at it and try to figure the shit out, but oh lord.


A thought occurred to me that maybe these types of interviews aim to filter for specific people. An H-1B from Asia will have the motivation to memorize "Cracking the Coding Interview" as well as the pressure to stay and say yes to everything. This creates a distortion at the IC level. At the management level, the filters create another distortion toward cultures that heavily weight status and group standing (rather than getting a paycheck and doing the minimum to stay). At the final leadership level there isn't even another filter from the managers; the candidates come through entirely other channels, usually network-related.


Interviewing only with your eventual manager is massively inefficient. Imagine I'm a hiring manager with one headcount available. Let's say Google gets about 5000 applications per day and hires 50 people per day, so I need to interview around 100 people to fill that spot. If two of those people meet my criteria, the extra ones get rejected anyway. Even more time is wasted than with the algorithm question approach (and honestly, stuff like tree traversal comes up all the time, even in client application code, so you might still get asked about it).


Engineers/managers from somewhere in the organization are still needed to phone screen and interview people that make it through the initial recruiter bar. If people in your team were to interview applicants for jobs on your team wouldn't it take the same amount of time on average?


Part of the problem is that these companies don't know who your direct manager will be, since they don't know where you'd best fit! Or worse, by the time you finish your notice period, they've re-orged and another team needs engineers more than the originally-intended team.

One of the skills they're looking for is adaptability/flexibility, to offset the terrible resource planning.

The message I really get from this post is that they'd be happier in a more structured, predictable environment, and there are plenty of places like that, old+new, small+large.


Am I the only one that kind of forgets this sort of stuff, like binary-tree traversal? What I mean is that I forget the implementation (let's say in C) very often, and have to go back to a textbook/Google and spend time re-implementing it. While conceptually I can explain and present everything, at any time, and show you that I understand a specific data structure or algorithm, its use cases, strengths, and weaknesses, when I fire up a terminal and try to write it straight from my head it takes time. And I always feel like I have to "relearn" the implementation and think about it in a specific way.
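That "have to relearn it" feeling usually applies to the iterative form more than the recursive one. A sketch of both in Python (all names here are mine, just for illustration):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):
    """Recursive in-order traversal: left subtree, node, right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def inorder_iterative(root):
    """The same traversal with an explicit stack -- the version
    most people have to re-derive at the whiteboard."""
    result, stack, node = [], [], root
    while stack or node is not None:
        while node is not None:   # walk as far left as possible
            stack.append(node)
            node = node.left
        node = stack.pop()        # visit the node, then go right
        result.append(node.value)
        node = node.right
    return result

tree = Node(2, Node(1), Node(4, Node(3), Node(5)))
print(inorder(tree))            # [1, 2, 3, 4, 5]
print(inorder_iterative(tree))  # [1, 2, 3, 4, 5]
```

The recursive version is five lines you can reconstruct from the definition; the explicit-stack one is exactly the kind of thing that evaporates between interviews.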

These questions sound pretty daunting and stressful to do, at Google, on a whiteboard, in front of strangers whose temper and mood can vary a lot. I am still a student, and while working for a company like Google sounds awesome, with a lot of smart people and cool stuff to do, its "scary" recruiting techniques and that "cog in the machinery" feel are what turn my excitement down. Especially because I am not so good in the academic sense of learning. I tend to be one of those who work on their own and hack with things in their free time, where my results in the curriculum aren't as great as people expect them to be, but once it comes down to conversation/discussion/implementation people are often surprised and kind of confused.


No, you are not alone. The expectation is that you've reviewed and practiced these subjects before going in. With a few weeks of practice you should be able to remember them well enough to write them out on a board. But yeah, it can be tough; the interviews are stressful and not good for introverted candidates or those who struggle under pressure. (I'm in the latter group; practice helps.)

It's not a great system, but it's the best they have come up with so far. It's frustrating because it's got a high false-negative rate. Failing it isn't an indication of your on-the-job skill.


I like how every week one of these posts comes up.

So many posts telling the world that they don't want to be the algorithm guy -- but the world doesn't care. They forget the maxim: "Not everyone gets to be an astronaut when they grow up."

https://cdn.shopify.com/s/files/1/0535/6917/products/potenti...


But an "algorithm guy" kind of IS the astronaut, no? Most work is line of business, CRUD, wiring up web forms, etc.


Maybe he should talk to Google's contract recruiters. I got interviewed by my manager, my interview was extremely relevant to the tasks that have since been expected of me... it was good times.

Between the perks, the pay, and getting to work on various projects that always have me learning new and exciting things about front-end development I think the fifteen minute investment I made in exchanging pleasantries with a recruiter has paid off. Though that could just be my morning coffee talking.


Well, in their defense (Google/Amazon/Facebook, etc.), a team leader or any other particular person cannot interview 500 applicants, so either more people have to be interviewers or they have to be far more selective with the initial resume screen (and resumes can be faked to a large extent).

I don't like algorithm questions either, but at least they optimize for time; the alternative is to have people in for a few days, or spend a day doing chat interviews and risk hiring a bullshitter.


> a team leader or any other particular person cannot interview 500 applicants

While, yes, it often takes more than one interview, and often several interviewers, to find a desirable candidate, the cost isn't that high. Given that any candidate will see probably around 4 interviewers anyway, I don't think asking that the manager be one of them is that big a deal. I have to agree with that part of the article (though I disagree with it on the whole): meeting your manager-to-be is important. (And I don't think one understands this until one has had a bad manager.)

In a prior job, we would also omit the manager interview if it was certain the candidate wasn't going to be a pass.


They don't need to interview 500 people for their generic jobs. Filter people early based on CV and some phone screens and then talk to a couple people and select someone reasonable.


Amazon won't hire you just for one role. The company allows free movement between teams by SDEs. If you're an SDE, you're an SDE, and you're welcome to change teams so long as the new manager will take you.

If you don't know algorithms, maybe that's okay for the team you're joining. But hiring you means screwing over the team you're going to be on in 3 years. That's why they interview that way.


Nit: I believe you can't move more often than once every 30 days or so. ;)


When I joined, it was once every 12 months. All I know is that they removed that rule recently.


I think you're missing the point - those things that you don't like about the Google interview process are meant to eliminate bias. Your hiring manager doesn't interview you because they might make compromises. People interviewing you don't review your profile because they might have a bias if they know about you.

All of the things that make those interviews effective are the things that you are complaining about.


I did an interview with Google a few years ago, and I still don't know what position they wanted me for. I made my preferences pretty clear, but I was interviewed by random engineers unrelated to what I wanted to do at Google.

I guess they just want software engineers as workforce. They don't really care about what you want and your skills, if you are very good at something you will probably be good at something else.


I think most young engineers are surprised to find that after they've navigated the hiring gauntlet at BigCorp, they were never interviewing for a specific position to begin with. They were interviewing to be thrown into a pool of new hires.

When they show up for their first day of work, they basically roll the dice to determine whether they go to team A (interesting work) or team B (shit work).


That reply has a lot of worth to it. I very nearly worked for Yahoo many years ago and was very excited about it, but it fell apart at the last minute. In hindsight, my big red flag should have been how evasive they were every time I asked to talk to someone specifically from my team. I was given offers from multiple specific teams and had to choose one before I was given the details of the offer (i.e. salary and other benefits), so I had assumed there was a specific team. In the end, there turned out to be a massive disconnect between my qualifications and what they were going to have me do. I really didn't appreciate it, especially after I had tried so hard to do more specific preparation on my end. Now I look at this standard reply and I think it's genius. I'll go the extra mile for my employer, but if they're that impersonal right off the bat, it's probably not a good match. Sure, they're running at scale and maybe anything more isn't practical, but clearly it's not for me.