Hacker News
Ask HN: Can engineers from Google or Facebook solve whiteboard questions easily?
142 points by franzwong 11 months ago | 199 comments
I don't know any people from those companies. I am just curious.

I work for Google and have never been good at the "Here's an NP-hard problem you haven't heard of, write correct code for nlogn solution on whiteboard in language of choice" question. I had to train extensively (reading CLR, practicing) to be able to pass.

However, large numbers of engineers at Google are very good at solving whiteboard questions. A lot of it comes from practice, a lot comes from knowing the common problems and solutions, and a lot comes from knowing the algorithmic primitives from which modern algorithms are born.

For many, whiteboard questions are "fun"- in the same way my parents like to do crossword puzzles for the intellectual exercise, Googlers like to discuss challenging, abstract-but-related-to-ads-or-search questions.

I have worked at Google for 7 years. I am a SWE and mostly agree with the parent.

I am, I think objectively, a pretty good software engineer and often couldn't solve my peers' interview questions in the time allotted. I try to do my part by conducting interviews with questions that I think demonstrate the ability to actually code and think abstractly, as opposed to regurgitating some random algorithm that has been memorized. I ask a simple CS 100-200 level, one-word-answer question about an algorithm and then move on to a coding question.

Despite the difficulty of some of the questions that others ask, often the interviewer wants the interviewee to pass and so will throw a hint. It's important to keep communicating, because that is how those hints will be delivered.

Ironically, I never interviewed because I came in from an acquisition. Yes, I was very lucky.

> I had to train extensively (reading CLR, practicing)

I didn't know what "CLR" was, so I searched around: Cormen, Leiserson, Rivest's "Introduction to Algorithms" [1].

[1] https://en.wikipedia.org/wiki/Introduction_to_Algorithms

Yes, it's usually abbreviated as CLRS.

Apparently they added an author since I last studied. Damn, that was over 10 years ago.

Yeah, quite a bit must have changed, or more likely evolved, through the years, so a lot of the pseudocode and algorithms have been modified or added. For instance, topics on dynamic programming, memoization, and the like have gotten more emphasis. There have also been several revisions to examples in basic tree algorithms, and some additions too. You could consider brushing up on a few things as a casual read, especially the new topics. Also, another book I found great was Skiena's Algorithm Design Manual.

[1] https://www.quora.com/What-is-the-difference-between-CLRS-se...

That's OK, I don't plan to read any CS books in the future, with the exception of The Art of Computer Programming, which I feel, after many years of training (including a decade as a SWE at Google), I'm finally ready to enjoy.

Oh yes, one of my bucket list kinda goals in life is to be able to get to a level where I can grok TAoCP at a visceral level and understand the underlying mathematics used to derive the algorithmic complexities and how they all function. I think any computer science person who is passionate about the topics should give it a try at some point in their lives.

Common Language Runtime!

> Here's an NP-hard problem you haven't heard of, write correct code for nlogn solution on whiteboard in language of choice

I guess that would be a trick question then, as nobody has ever solved an NP-hard problem in linearithmic time.

I should have said "n log n heuristic approximate solution" (my defense: I'm actually a biologist pretending to be a computer scientist).

Many problem solutions at Google are effectively heuristic approximate solutions to NP-hard problems (or less hard but interesting problems).

There's no trick, they just want to know you can figure out how to compute things quickly so they can run online in a server or a batch job. And the things that need to be computed are often problems with high time or space complexity and the exact answer doesn't need to be known.

But I suppose appropriate heuristic solutions are typically only "pondered about" and not "found" at the whiteboard, because they require extensive testing with real-world data to see if they are actually efficient in practice.

Usually in situations like this, the person asking the interview question has already built the system in production that uses that heuristic and seen that it worked. With a good set of rules of thumb, a smart person should be able to propose reasonable heuristic solutions that have a chance of working.

No, that's not necessarily how it works.

Often (most?) of the time, heuristic solutions are also found by complexity theorists, and they usually can prove that their solution is efficient for problems with certain characteristics, e.g. random data, or data that is non-random in a particular way. These are then applied by industry when they come across problems with those characteristics.

Yes, that's simplifying the problem by making assumptions about the data. Except you often don't know what assumptions you can make. Better to check the assumptions using the data before devising an algorithm for the given type of data.

I agree in principle. But imo in reality, what actually happens is:

1. Most data sets are roughly similar (or at least fall into one of a few cases).

2. Most algorithms are designed beforehand, on ideas of what data could be, not necessarily for actual data.

Take for example sort algs. Sure, you could look at your data and devise a sort that exactly matches it most effectively. But in reality, in most cases, people just use quick sort, which is proven to be efficient on most data sets.
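For the curious, the textbook version fits in a few lines. A sketch, not a production sort (real standard libraries use tuned variants like introsort or Timsort):

```python
import random

def quicksort(xs):
    # Randomized pivot gives expected O(n log n) on any input,
    # sidestepping the quadratic worst case of a fixed pivot choice.
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]          # pivot duplicates
            + quicksort([x for x in xs if x > pivot]))
```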

I think that was meant to be a hyperbole.

Indeed, although it's not unheard of for mathematics lecturers at university to throw in a question within a twenty-question problem sheet for their course that is actually equivalent to some important unsolved problem, on the off chance that one of their students sees things in a totally new way and makes a ground-breaking discovery. Solving something NP-hard in linearithmic time would certainly qualify...

My favorite was this question, quoted verbatim from a test in my biophysics grad program:

What is the time-resolved fluorescence of a fluorophore in 4 dimensions?

The solution, explained to me by my friend who was a very smart physicist, was the first time I heard the word "tensor" (this was a long time ago, long before machine learning used that word) and I was obsessed with the idea that there was all this cool math you could use to solve hard problems.

Do you have a cite/example of a lecturer doing that on an exam? Are there any examples where that has been successful?

I had never heard of that happening! It would be extremely cool if it ever worked.

Not an exam but:

>An event in Dantzig's life became the origin of a famous story in 1939, while he was a graduate student at UC Berkeley. Near the beginning of a class for which Dantzig was late, professor Jerzy Neyman wrote two examples of famously unsolved statistics problems on the blackboard. When Dantzig arrived, he assumed that the two problems were a homework assignment and wrote them down. According to Dantzig, the problems "seemed to be a little harder than usual", but a few days later he handed in completed solutions for the two problems, still believing that they were an assignment that was overdue.[4][7]

>Six weeks later, Dantzig received a visit from an excited professor Neyman, who was eager to tell him that the homework problems he had solved were two of the most famous unsolved problems in statistics


I audited the University of Chicago's undergrad "hard-track" real analysis course (I forget the name, but IIRC there was an easier version of the course available).

The problem sets would have several questions you were expected to solve, and then often 1-3 questions marked with 1 or 2 stars. I forget exactly the language, but there was some explanation that you probably couldn't solve the 1-star problems, and solving the two-star questions could be someone's PhD thesis, as it would require novel insight into an unsolved problem.

The point was that it was a class for young aspiring mathematicians many of whom were used to being the smartest person around, and this helped them confront problems they couldn't solve without feeling like total failures. Also at that kind of school, you never know, someone might be able to solve one.

Not quite the same, but David Huffman was given a choice between writing a term paper on finding the optimal binary code and a final exam. Just before he was about to give up on the term paper, he solved the term paper problem by inventing Huffman Coding[1].

[1] http://www.huffmancoding.com/my-uncle/scientific-american
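The greedy step he found — repeatedly merge the two least frequent subtrees — is small enough to sketch. A toy Python version (function and variable names are mine, not from the article):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code by merging the two least frequent subtrees."""
    freq = Counter(text)
    # Heap entries: (frequency, tiebreaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        count += 1                          # unique tiebreaker for the merged node
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

Frequent symbols end up with short codes, rare ones with long codes, and no code is a prefix of another.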

My own Huffman story: I took Huffman's class, Cybernetics, in college (thinking that it would teach me how to build robots that interfaced with humans). However, the class was mostly about how to build circuits, and Karnaugh maps weighed heavily. The final problem on the final test was a tricky Karnaugh map, and I got it wrong. I knew that you could form squares that "wrapped" around the top and bottom edges, but he put in a square that wrapped around the four corners:

  1 0 0 1
  0 0 0 0
  0 0 0 0
  1 0 0 1

and I totally missed that the symmetry applied in both directions at the same time (I'd call that a tricky question rather than a trick question).

That class led to a ton of imposter syndrome that took me decades to overcome. Thanks, Huffman.

Not an unsolved problem, but allegedly a new solution: The Gauss sum of integers from 1 to 100.


A teacher of mine, who studied maths at Oxford, was given a problem sheet on his first day and was told to finish by the time of his first tutorial. He made no progress and was reduced to tears as he thought he was incapable at Oxford. Turns out, he was given a set of unsolved problems on purpose :)

this was known as nerd hazing in my grad school.

Supposedly this person proposed the correct structure for diborane in his homework.


So now you have to know, or invent, a list of heuristics that tend to fetch the results first.

> algorithmic primitives from which modern algorithms are born.

What is a good way to learn more about these?

I find my problem mapping methodology often leads me to remember these three books:

Aho and Ullmann: Foundations of Computer Science (old but really, really, really good)

Steven Skiena: Algorithm Design Manual

Abelson: Structure and Interpretation of Computer Programs

After a quick search, this came up:

Foundations of Computer Science, full book: http://infolab.stanford.edu/~ullman/focs.html

Also turns out Skiena's website is a very large resource!


This book: https://www.amazon.com/Data-Structures-Algorithm-Analysis-C-... Don't let the "in C++" scare you. If you google around you'll find free copies of older editions online.

The only reason, it seems, to even learn them is to pass an interview...

What an utterly stupid and broken process. All it's testing is recall: that you've memorized some algorithms that 99% of developers will never need. Especially when these are really hard problems that people have studied for years; to think that anyone who hasn't memorized the solution is working this stuff out in an hour on a whiteboard is craziness.

> The only reason it seems to even learn them is to pass an interview...

While I agree that the process is broken, a solid understanding of algorithms and data structures is very important for developers at my company. And nobody knows all, or even most of the algos or data structures! But we do have large amounts of data and some pretty complex analytic problems, and a poor design can cost a lot of time and AWS $. Likewise over-engineering something small is a waste of time not just in writing but maintaining it.

I agree that most e-commerce or business web development probably doesn't need this level of formal skill.

Whiteboard questions are useful, but not for figuring out trick questions. Instead for figuring out how the candidate approaches problems.

Those look like software design skills. Also, the type of systems you describe use something like Hadoop, with things like Hive and Pig on top of it. Once you get the data out you use Python or R, and you generally work with a standard library. That's really SQL + general programming.

What new problems are you working on that require you to invent novel algorithmic techniques?

Even with Hadoop, there's still algorithmic decisions that you have to make like whether to choose a stripes or pairs strategy when synchronization is needed[1], or choosing appropriate data structures for real time queries [2]. Sure something like Hive and Pig might work well enough for certain queries, but for some of the more complex queries where writing bare MapReduce or Spark is needed then these data structures and algorithms concerns pop up quite fast.

[1] https://lintool.github.io/bigdata-2018w/slides/didp-part02b....

[2] https://lintool.github.io/bigdata-2018w/slides/didp-part09b....
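For anyone who hasn't seen the pairs-vs-stripes distinction from [1], here is a rough sketch of the two mapper strategies for word co-occurrence counting (toy single-process code, not a real MapReduce job):

```python
from collections import Counter, defaultdict

def neighbors(tokens, i, window):
    # words within `window` positions of tokens[i], on either side
    return tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]

def pairs_mapper(tokens, window=2):
    # "pairs": emit ((w, u), 1) records -- many small key-value pairs
    out = Counter()
    for i, w in enumerate(tokens):
        for u in neighbors(tokens, i, window):
            out[(w, u)] += 1
    return out

def stripes_mapper(tokens, window=2):
    # "stripes": emit (w, {u: count}) associative arrays -- fewer,
    # larger records that aggregate in the mapper before the shuffle
    out = defaultdict(Counter)
    for i, w in enumerate(tokens):
        for u in neighbors(tokens, i, window):
            out[w][u] += 1
    return out
```

Stripes trades memory in the mapper for less shuffle traffic; which wins depends on vocabulary size, which is exactly the kind of algorithmic decision the parent means.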

MapReduce is filled with algorithms. The core system is basically a merge sort engine (the shuffler). In fact, if you ever wondered about the question "sort 2M integers in 1M RAM", well, that was Jeff asking people if they could understand that shuffle was out-of-core.
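The "sort 2M integers in 1M RAM" idea is out-of-core merge sort: sort runs that fit in memory, then stream a k-way merge over them. A toy sketch (obviously not Google's actual shuffler, just the shape of it):

```python
import heapq

def external_sort(chunks):
    # Each chunk is small enough to sort in RAM ("map output runs");
    # heapq.merge then does a streaming k-way merge holding only one
    # element per run in memory at a time -- the essence of the shuffle.
    runs = [sorted(chunk) for chunk in chunks]
    return list(heapq.merge(*runs))
```

In a real system the sorted runs live on disk and the merge streams from files, but the algorithm is the same.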

Another example is lexicographic range sharding, which uses reservoir sampling to compute optimal tablet key split points by doing a constant-space-and-time heuristic sampling over the keyspace.
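Reservoir sampling itself is tiny — the classic Algorithm R keeps a uniform sample of size k over a stream of unknown length, in one pass and O(k) space:

```python
import random

def reservoir_sample(stream, k):
    # Keep the first k items; thereafter, replace a random slot with
    # probability k/(i+1), which keeps the sample uniform at every step.
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = random.randrange(i + 1)
            if j < k:
                sample[j] = x
    return sample
```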

I used to think MR was just brute force, but it has many levels of algorithms. Probably too many: at some point it became hard to analyze how the system worked because of the various kinds of hedging and recovery strategies.

I noticed that talk comes from a university. I understand that's a totally different domain of work.

Secondly, you are not inventing any algorithm there. You are only using algorithms invented by others.

Thirdly, you are only deciding what solution works better.

Lastly, in an interview you have to invent this algorithm in 45 minutes.

None of this requires you to invent a new algorithm. At least not in 45 minutes. I doubt the person giving that talk did it so quickly himself.

Does your standard library have persistent functional thread-safe radix trees? Because mine didn't, so we had to write one.
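For anyone wondering what "persistent" means here: an update returns a new version that shares structure with the old one, so old versions stay valid and reads need no locking. A minimal path-copying sketch — a plain character trie, far simpler than a real radix tree, and entirely my own illustration:

```python
class Node:
    __slots__ = ("children", "value")
    def __init__(self, children=None, value=None):
        self.children = children or {}   # char -> Node, never mutated
        self.value = value

def insert(node, key, value):
    """Return a new trie; unmodified subtrees are shared with `node`."""
    if not key:
        return Node(node.children, value)
    child = node.children.get(key[0], Node())
    new_children = dict(node.children)          # copy only the path
    new_children[key[0]] = insert(child, key[1:], value)
    return Node(new_children, node.value)

def lookup(node, key):
    for ch in key:
        node = node.children.get(ch)
        if node is None:
            return None
    return node.value
```

Only the nodes along the updated path are copied, so each insert is O(key length) extra memory while every prior version remains readable.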

Even if it didn't, that's really like an exception to most common use cases of those tools.

Which is precisely why testing candidates on such questions doesn't make much sense.

I understand every once in a while something like that needs to get written, but again that's like an exception.

As an employer though, what do you want to happen when your employee runs into an exception? Do you want them to flounder and have no idea how to solve it or do you want them to figure it out by doing some mildly novel design work?

now I'm curious. What's the use case?

Even after choosing a technology (like Hadoop), there's a lot of difference between the best and the worst algorithm to solve a problem.

For example, the reduce phase of MapReduce is pretty much "apply the algorithm you want on this list," and that's where knowing good list algorithms (for example) shines.
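To make the "apply the algorithm you want on this list" point concrete, here's a toy model of the shuffle plus reduce phase (my own illustration, not a real framework API):

```python
from itertools import groupby
from operator import itemgetter

def reduce_phase(pairs, reducer):
    # The shuffle groups (key, value) pairs by key; the framework then
    # hands each reducer a key and its list of values. Which algorithm
    # you run on that list -- sum, max, a median sketch -- is up to you.
    pairs = sorted(pairs, key=itemgetter(0))
    return {key: reducer([v for _, v in group])
            for key, group in groupby(pairs, key=itemgetter(0))}
```

For example, `reduce_phase([("a", 1), ("b", 1), ("a", 1)], sum)` is word count; swapping `sum` for a smarter aggregate is where the algorithmic choices live.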

Sorry, in almost all cases it's really about choosing among several ways of joining the data, not writing SELECT *, not writing cross joins, and learning SQL well.

Which is why I ask again: please list your problem which is so novel it requires you to invent a novel algorithm.

Please note statistics is a science that has existed for centuries now. Unless you are in a university, it's highly unlikely you have a problem that will need you to work on something that novel.

I don't understand your point.

It seems to me you mix up statistical approaches and algorithms, meaning you completely ignore the fact that code runs on computers, and that computers have mechanical characteristics, making two implementations of the same statistical approach wildly different in performance.

I used to work at a company where some guys spent literally weeks inventing a fast way to do a dot product over huge vectors. Which doesn't mean they invented the concept of dot product.
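Even in pure Python you can see the "same math, different implementation" effect — and the real wins (SIMD, cache blocking, tuned BLAS kernels) are far bigger than this toy gap:

```python
from operator import mul

def dot_loop(a, b):
    # Straightforward element-by-element loop: one interpreted
    # iteration per multiply-add.
    s = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_builtin(a, b):
    # Same dot product, but the loop runs inside C via map/sum,
    # which is measurably faster for large vectors.
    return sum(map(mul, a, b))
```

Both compute the identical mathematical dot product; timing them with `timeit` on million-element lists shows the gap. The engineers in the story weren't reinventing the dot product, they were engineering its implementation.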

>>I used to work at a company where some guys spent literally weeks inventing a fast way to do a dot product over huge vectors.

Very nice that you brought up this point. This is exactly what I was trying to point out. If a team of full-time working people took weeks to arrive at a working solution, despite a massive body of open knowledge available as a reference to prior solutions, there is absolutely no way a candidate can even arrive at a decent direction toward the solution in under 45 minutes in an interview situation.

No one is saying algorithms are not useful.

But there is a very big difference between knowing an algorithm and inventing an algorithm. Testing the former is no indication of the latter, as illustrated by your own example.

There is nothing about learning how basic data structures and algorithms work that won't help you in your day to day development.

You might be "fine" without it in a certain category of job, but rather than be dismissive about it you could be excited about having a massive wealth of information that you can use to do your job better.

There will always be something else to learn, whether it be how to apply Domain Driven Design to your work or how to work with graphs when munging data. It's always worth it.

[edit] Run on sentence

> There is nothing about learning how basic data structures and algorithms work that won't help you in your day to day development.

Having a basic understanding of these things is NOT the same as being able to regurgitate them on a whiteboard, under the stress of the interview process.

Exactly, I don't need to know this stuff off the top of my head. I can look it up...

The reason we test for things is to verify you have enough domain knowledge to "look it up." A bad interviewer will ask you to write quicksort on the board, because if we've already gotten to the point of "implement quicksort," yeah, we can just look that up. However, if you're given a collection of nodes and asked to do something with them, that's a bit more interesting. If you know nothing about graph traversal, or have never seen a tree (both not uncommon in people claiming to be principal engineers in my industry), how do you even know what to search for? Best case, you're smart and waste time coming up with a new solution from first principles; most likely it just never gets done.
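The "given a collection of nodes, do something with them" task usually boils down to a standard traversal. A sketch of what a candidate who knows what to search for would reach first — plain breadth-first search:

```python
from collections import deque

def reachable(start, neighbors):
    # Breadth-first traversal over an adjacency map
    # (node -> list of nodes it links to).
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Knowing the name "BFS" turns an open-ended question into a five-minute lookup, which is the interviewer's point.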

"these are really hard problems that people have studied for years"

Because of this. So that when you encounter a problem, you know how it can be solved efficiently. If you disregard existing knowledge, you are likely to spend a lot of time cooking up something complex and half-assed that creates an unnecessary maintenance load.

Just as music is composed from fundamentals, most problems in computing can be decomposed so that familiar approaches can be applied to them. I think the categorization in Aho's "Foundations of Computer Science" is pretty much to the point on which fundamental formulations are practical to know.

>>If you disregard existing knowledge, you are likely to spend a lot of time cooking up something complex and half-assed that creates an unnecessary maintenance load.

Unless you are stuck coding on an island during a vacation, not one person on earth faces this kind of situation.

Almost anybody doing anything beyond basic beginner-level coding work knows to look up a solution on the internet if their solution is not scaling.

"Almost anybody doing anything beyond basic beginner-level coding work knows to look up a solution on the internet if their solution is not scaling"

This applies to some practical areas, but some are murky and need more than agile google-fu to wade through the algorithmic jungle to a solution that provides added value to the end user.

There is a discipline- and domain-dependent threshold where the internet suddenly becomes obscure and unhelpful when one is trying to find solutions. In these instances either you are solving the wrong problem, or the problem is quite complex and/or does not have a closed-form/ready solution. The best thing that can happen in these instances is that one can limit one's input domain just so that it maps to some algorithmic concept or another. It's not rocket science, but in these instances having an intuitive feel for big-O behaviour and various algorithmic constructs is really, really helpful.

Sometimes you just need to implement your own fudged QR decomposition and understand a little bit of numerics, or build custom octrees and whatnot. Couldn't really do it without my books. I do CAD and computational geometry stuff, YMMV.
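For context, a "fudged QR" at its most naive is just Gram-Schmidt orthogonalization. A textbook sketch (classical GS, my own illustration; production code would use Householder reflections for numerical stability, and this assumes a square, full-rank matrix given as a list of rows):

```python
def qr_gram_schmidt(A):
    # Returns Q as a list of orthonormal columns and R upper triangular,
    # such that A = Q @ R (with Q's columns stored as lists).
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, q in enumerate(Q):
            # project the original column onto each earlier q (classical GS)
            R[i][j] = sum(qk * ck for qk, ck in zip(q, cols[j]))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q)]
        R[j][j] = sum(vk * vk for vk in v) ** 0.5
        Q.append([vk / R[j][j] for vk in v])
    return Q, R
```

Understanding "a little bit of numerics" here means knowing when this version loses orthogonality and a stabler variant is needed.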

In your area of work, the domain is computational geometry, so asking questions there is perfectly fine.

In fact, that is exactly what I was trying to imply. Questions need to be about the area of your work, rather than testing candidates on proxy subjects totally irrelevant to their jobs.

I'd learn it just for the knowledge, I have no desire to go into the interviewing process again. I agree that interviewing is a broken process.

I would say it depends. I have been working in this industry for 12 years, but I don't need to use that knowledge. However, if you read the source code of popular frameworks and systems, they use a lot of those algorithms and data structures.

What better way is there to try and get the best assessment of someone’s coding ability in a 45 minute timeframe?

IMO, not doing it. Why do it at all? There are much more important factors than someone's ability to do a whiteboard exercise while under stress. You can e.g. check their GitHub, or ask them to build a simple example project at home.

This. Test them using the tools they will be using every day.

You aren't really testing their coding ability, though; you are testing their ability to remember how to implement a very specific algorithm.

Here's my real question: How often do you solve those kind of whiteboard problems in your daily work? Obviously, you'd never be forced to solve them in 45 minutes while being judged, but how often are you actually solving computer sciency kind of problems in general as opposed to solving more software engineering type of problems?

Almost never. I use a whiteboard, but only to draw architectural diagrams. Very few engineers at google actually do hardcore CS on a regular basis.

I work at Google and no it's not easy for us either. I've heard many of my coworkers joke that if they went through the interview process again they'd probably fail.

That said I see two kinds of interviewers. One kind (the good kind) takes a medium difficulty question and uses it to explore the candidate's coding, algorithms, communication, and problem solving skills.

The other kind has a super hard question with a single algorithmic solution that tells you nothing about what kind of engineer the person would be. This kind of interviewer takes too much pleasure in being a gatekeeper to Google.

> I've heard many of my coworkers joke that if they went through the interview process again they'd probably fail.

I’ve heard this joke at a lot of tech companies. Also, “the hardest thing I’ve ever done at my job was pass the interview.” Just shows how divorced from day-to-day work skills interviewing has become.

I think most tech companies severely overestimate the cost of false positives, particularly in the USA where it’s trivially easy to fire someone who turns out to be a dud.

Also think it represents an opportunity: if you could reliably figure out who made it to the interview stage at these top companies but did not get an offer, you could forego your recruiting expense, simply hire all those people and likely end up with a well above average team, minus the few false positives.

Tech interviews, coming from someone transitioning into the field, are blowing me away. One national retailer I spoke to does two phone screens, then an onsite round with nine coworkers. I've never seen anything like this in my previous field (mechanical engineering).

It's quite common in the tech industry. Some teams at FANG companies (and wannabe FANG companies) are known to reject a candidate if even a single interviewer gives bad feedback. Say they did 9 rounds of interviews plus a bar-raiser round: they will still reject you if one interviewer gave bad feedback.

There is also the concept of a 'machine coding round' at a few companies here in Bangalore, where you are given about an hour to implement a fairly medium-sized application.

You could write a perfectly working application, but if the evaluator thinks it's too simple or too complicated for their taste, it's still a straight reject.

It's almost like they want cookie-cutter candidates who have practiced interviewing day in and day out for years.

Real-world practical experience is largely considered useless.

> I think most tech companies severely overestimate the cost of false positives, particularly in the USA where it’s trivially easy to fire someone who turns out to be a dud.

I don't think that's actually true. People who are net-negative on a team may be very personable and hence emotionally difficult to fire. And of course the U.S. does have a number of protected classes, who can be very difficult & expensive to fire for cause (because the assumption is that they were not fired for cause). I'm a member of a protected class, and I often worry that it actually makes it more difficult for me to get hired: will folks be willing to take a chance on me, knowing that were I a jerk I could make life difficult for them? I'm not a jerk (I think!), but they can hardly know that!

> People who are net-negative on a team may be very personable and hence emotionally difficult to fire.

Those people are rare and generally caught by much lighter screening.

> And of course the U.S. does have a number of protected classes, who can be very difficult & expensive to fire for cause (because the assumption is that they were not fired for cause).

I don't think you are using "for cause" in the proper legal sense. Firing "for cause" is indeed a time-consuming and potentially expensive process because of the effects on things like unemployment benefits. Firing "without cause" is not, and is permissible in almost every state.

What you are describing are potential accusations of illegal discrimination. This is largely orthogonal to cause.

>I'm a member of a protected class [...]

Technically speaking, we all are, no? We all have genders, sexual orientations, races, ethnicities? Of course you probably meant that you fall into a minority group in one of those classes, but pretty much anyone could allege they were discriminated against based on their protected characteristics, even if sometimes it's not a particularly believable claim.

I'd be curious what the pipeline of candidates looks like at these top places. You're probably correct that they are being overly critical and rejecting plausible candidates. They may be overestimating their needs, but they are also probably inundated with overconfident chumps.

I bet if you could select only the people who made it to the in-person interviews at these companies, which usually happens after multiple weed-out filters, you’d have a way-above-average cohort.

> I've heard many of my coworkers joke that if they went through the interview process again they'd probably fail.

I don't see how companies that claim it's hard to find talent can be OK with a system that would reject more than a small portion (<5%) of their current productive engineers.

Edit: fixed word sense ambiguity

I see it like getting into a top college. Would your average college student be able to get a 2300+ (1500+ ?) SAT score if they had to take it again? Probably not unless they re-studied and prepared for it - which I think is the same answer for FAANG engineers and whiteboarding questions.

Also, good SAT scores don't always translate into good college students and the same could be said for good whiteboarding skills don't translate into good software engineers or vice versa.

That's why standardized testing is becoming more irrelevant in college admissions.

Because they think false positives are really bad and you can always reapply.

> I don't see how companies that claim it's hard to find talent can be OK ...

I don't either, but it's an easy game to play: start a company with an impossibly difficult goal (e.g. build a more popular social network than Facebook in two years, colonize Neptune, cure cancer) and then complain that there isn't enough talent in the labor pool. Finally, blame the government.

I got the same response from an old friend who has worked for Google for the better part of a decade.

FWIW my understanding is that the Hiring Committee tends to discard feedback from interviewers like your latter type here.

I had such an a-hole interviewer the last (and hence final) interview I did at Google. His attitude was ... oh .. you have a PhD, let me show you how smart I am.

Just like getting into YC isn't necessary to be a successful founder, getting into Google isn't necessary to be a successful engineer. There are tons of companies out there, and many pay better than Google.

> many pay better than Google

Those would be? The only one I have actually seen (claimed) numbers out of that can beat Google is Facebook.

Give feedback to your recruiter. Interviewers can be forced into more training or taken off rotations.

The one that got to me admitted first thing that he had already been told to knock it off, shrugged it off, and was still in the rotation anyway.

I didn't feel like giving that feedback to the recruiter would help if it was already known well in advance. That and a lot of other signals in two interview cycles seemed to indicate that the recruiter and/or Google were perfectly happy wasting my time, and I've not really felt like bothering to interview with Google since.

Is there any mechanism like Olympic diving competitions, i.e. where the highest and lowest scores are discarded?

They don't do this as a matter of course, but they do look at the question that was asked and various factors surrounding the interview. They certainly can and do throw out interviews that don't generate any signal.

I also work at Google and what mystifies me when I do interviews is just how consistently people manage to screw up even the medium difficulty questions. I used to love asking this question until it was banned:


This is a great question because it has several levels of solutions:

1. A recursive solution that iterates over all sequences, exponential time and logarithmic in space. Interviewing for T3 (entry level, think college grad), getting only this far in 45 minutes earns a "leaning no hire" in my book.

2. The above solution except with memoization, which reduces the runtime to linear but requires linear space. This one earns a "leaning hire" from me.

3. A dynamic programming solution that takes linear time and constant space. This one gets a "hire" to a "strong hire," depending on the elegance of the implementation and the strength of the discussion we have around it.

4. A logarithmic time solution involving adjacency matrices, matrix multiplication, and binary decomposition of numbers. I've never seen anyone get this and I only learned about it after months of asking this question.
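The banned question itself isn't quoted here, but the ladder of solutions described — exponential recursion, memoization, linear-time DP in constant space, then logarithmic time via matrix power and binary decomposition — is exactly the ladder for a Fibonacci-style linear recurrence. A sketch using Fibonacci as a stand-in (the actual question was presumably a counting variant of the same structure):

```python
from functools import lru_cache

def fib_naive(n):
    # Level 1: plain recursion, exponential time.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Level 2: memoization -- linear time, linear space.
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_dp(n):
    # Level 3: bottom-up DP -- linear time, constant space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_matrix(n):
    # Level 4: O(log n) via powers of [[1,1],[1,0]] and the
    # binary decomposition of n.
    def mul(X, Y):
        return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
                [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]
    result, base = [[1, 0], [0, 1]], [[1, 1], [1, 0]]
    while n:
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[0][1]
```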

It's shocking how few candidates actually got as far as solution 1. I kinda want to open this up to the HN comments section, because this does not strike me as a brutally difficult problem, but the rate at which people fell on their faces when presented with this question suggests either something about our interview process or about the candidates we interview.

EDIT: A clarification about “getting as far as” a particular solution: I’m not sitting there stone faced taking notes, I provide hints and prods and suggestions to push candidates toward more optimal solutions. I want the candidate to get as far as possible because it feels good to watch someone succeed, but oftentimes they run out of steam or take forever to implement their solution. In particular with respect to solution 1, if that’s as far as a candidate can get despite my recommendations, that’s not a good sign for their algorithmic ability.

I mean I'm surprised you don't see why this question is plain stupid.

I've had to deal with an algorithm problem of this difficulty once in my entire work life, and I was given a week's time to figure out the best way to go about it.

Your problem creates an unrealistic standard for a software developer. You don't get to see something like their code quality, which is a lot more important in a candidate than their ability to solve an arbitrary algorithms question, especially since these algorithm questions are now being studied for the sake of interviews rather than to learn something.

It's shocking to me how many engineers I come across that think having an algorithms based interview actually gives them good results.

>>I mean I'm surprised you don't see why this question is plain stupid.

It's more psychological than technical.

Once a person knows how to do a thing, they start to think anyone who doesn't is stupid.

Their intelligence becomes the baseline for measuring others' intelligence.

I agree with this.

My question always is: why do you think this is important for the position you're hiring for? Does being able to answer this question mean they're actually a good fit for the position?

This is turning into the situation I've seen in places with serious entrance exams for universities. For example, in Iran a lot of people who aren't especially talented, but have great memorization skills and can afford to grind mindless practice exams, are making it into higher education, while creative, talented, but less fortunate people are kept out.

There are entire businesses built around practicing algorithm questions now, this is not a good sign.

I wouldn't be able to answer this question in the time allotted and would heavily discount its signal if I were on the hiring committee with this question in the candidate's packet. It tests for algorithm memorization, which is not a hiring signal.

What memorized algorithm does this test for? As far as I know there is no algorithm to do this, and this interview tests for algorithm design rather than memorization.

You said above that you have set LNH for a T3 writing out the obvious solution to your question. The problem is that optimizing that solution comes in one of two ways:

1. having seen the specific class of problem before and knowing, as you're writing that solution, to stop and switch to the class of solution you've seen before that would solve the problem more efficiently, or

2. finishing the obvious, poorly performing solution and then optimizing it like a real engineer would do. The problem is that there's not going to be enough time for this to occur in the interview session. The act of writing out the poorly performing solution will consume the entire session.

Therefore, the only way someone would have passed your interview question would have been if they had seen this class of problem before.

This is a fair point, even though it doesn't answer my question. When I interview I split up the coding and the algorithms assessments: if someone quickly and correctly implements the naive solution, I consider myself justified in grading them as solid in coding and moving on and having an algorithmic design discussion for the rest of the interview so I can speak more to their algorithms abilities.

If someone fails to design anything more sophisticated than number 1 despite my guidance, or takes an unreasonable amount of time to implement it, that’s a signal.

Also, let's keep in mind a hiring recommendation isn't a judgment of a person's abilities as an engineer, it's a judgment of how badly we should be trying to hire them relative to other candidates. My LNH is a statement that the candidate is solid but not stellar enough to extend an offer to. If our hiring pool were restricted or we needed to hire more people, my assessment would be softened accordingly.

I'm really glad someone is seeing this for what it is. Too often people claim that you should go for the naive solution first, and then iterate, when that really doesn't work in the dynamics of a normal interview.

> I kinda want to open this up to the HN comments section, because this does not strike me as a brutally difficult problem, but the rate at which people fell on their faces when presented with this question suggests either something about our interview process or about the candidates we interview.

Because whiteboard coding is stressful and hard. I still kick myself for almost failing to get a basic sorting question right back in 2009, because the interviewer disguised it as a question about horse racing and it was interview 8 at the end of the day. (I still ended up getting an offer, but I turned it down.)

> It's shocking how few candidates actually got as far as solution 1

How many people in the real world sit around and practice dynamic programming questions all day?? The answer is ZERO!

>>How many people in the real world sit around and practice dynamic programming questions all day??

You would be surprised how many people actually do this.

In fact, for a lot of people the whole purpose of a day job is to pay a salary while they practice interview questions to hop to the next job.

The bad news is many companies think these sorts of puzzle-solving skills are the pinnacle of software engineering skill and pay top salaries for them, so these people don't even worry about being bad at their current jobs.

Do this for a few years, pad your resume with big brands, and then you are automatically considered something special and qualified for big title promotions.

Oh trust me I know. I was being a smart ass and pointing out that only people that don't live in reality do this stupid shit. Dynamic programming interview questions, in particular, are the epitome of the absurdity of technical interviews.

It is >0, and those people make more than me :(

>A logarithmic time solution involving adjacency matrices, matrix multiplication, and binary decomposition of numbers.

can you post a link to this solution? sounds cool.

edit: lol i just realized this is simply multiplying the adjacency matrix and what you mean by binary decomposition of numbers is just exponentiation by squaring. it's funny now that i realized i'll probably never figure out the dp solution.

edit2: got it. number of length n paths for some key u is sum of n-1 paths across "knight neighbors" of u. base case is length 1 path which is just # of neighbors.

it's funny this problem is designed in a particular way. using the keypad instead of a chessboard lets you write down the graph by hand rather than actually procedurally generating it (which is awkward with lots of literal edge conditions to check).

Yeah this is one of the "standard" uses for matrix multiplication. Another one is calculating the N-th fibonacci number in logarithmic time (or any linear recurrence).
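To make the Fibonacci version concrete, here is a minimal Python sketch, using exponentiation by squaring on the 2x2 matrix [[1, 1], [1, 0]] (whose n-th power has F(n) in its off-diagonal entries):

```python
# N-th Fibonacci number in O(log N) matrix multiplications, using
# [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]].
def fib(n):
    def mul(a, b):
        return [[a[0][0]*b[0][0] + a[0][1]*b[1][0],
                 a[0][0]*b[0][1] + a[0][1]*b[1][1]],
                [a[1][0]*b[0][0] + a[1][1]*b[1][0],
                 a[1][0]*b[0][1] + a[1][1]*b[1][1]]]
    result = [[1, 0], [0, 1]]   # identity matrix
    base = [[1, 1], [1, 0]]
    while n:                    # binary decomposition of n
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[0][1]         # F(n)
```

The same skeleton works for any linear recurrence: only the base matrix and its size change.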

How would you rate this solution? Assuming we have a static keypad grid.

  long moves(int start, int len) {
    assert(len > 0);
    long[] cur = new long[10];
    for (int i = 0; i < 10; i++) {
      cur[i] = 1;
    }
    long[] nxt = new long[10];
    while (len-- > 0) {
      nxt[0] = cur[4] + cur[6];
      nxt[1] = cur[8] + cur[6];
      nxt[2] = cur[7] + cur[9];
      nxt[3] = cur[4] + cur[8];
      nxt[4] = cur[3] + cur[9] + cur[0];
      nxt[5] = 0;
      nxt[6] = cur[1] + cur[7] + cur[0];
      nxt[7] = cur[2] + cur[6];
      nxt[8] = cur[1] + cur[3];
      nxt[9] = cur[4] + cur[2];

      for (int i = 0; i < 10; i++) {
        cur[i] = nxt[i];
      }
    }
    return cur[start];
  }
Very good! However, there is an off-by-one error, as the exercise asks for the path length, not the number of nodes. If you came up with that in 45 minutes without googling, you are very strong indeed.

Fun exercise. This is how far I got after about 45 minutes: https://gist.github.com/bjourne/7e45d41e8e4f5518bb92f6df7978... I wasted half the time not getting that you don't need to store the paths, only the counts! But this was in my quiet home, and I would probably flunk in an interview setting where you have to talk while you code.

I'm still going to consider myself a pretty good coder. Perhaps not Google material, but pretty damn good still. :)

English is a rich and fuzzy language. I have more trouble translating problems like this into human than I do solving them.

Sometimes interviewers will respond to questions about the question. Others will only repeat the question as-stated. With the latter, I'd be sunk.

Pólya's first principle.

It becomes brutally difficult in the context of a time-limited interview. You have the pressures of: 1) anxiety about limited time, 2) anxiety about hidden "tricks" or "cleverness", 3) inter-personal anxiety.

It's that last one where I think algorithms/whiteboard coding entirely fails to see the trees for the forest. You would think that an industry which seems to lean heavily towards introverts and people in various places on the autism and social-disorder spectrums would not rely so much on high-pressure inter-personal communication scenarios for interviewing.

Which is not to say that soft skills don't matter: they absolutely do, and in most industries job interviews are much more focused on soft skills. The point is that you are testing soft and hard skills together in a highly charged mix.

If I want to sit down and play with an algorithms problem, I'd probably like to do it in a quiet room with no one watching/looking over my shoulder, and if I'm writing anything for most of the thought time it's scribbles and doodles, not algorithm-appropriate diagrams. "Programmer's Tourette's" of a sort implies that I'd probably be swearing at it for a while as I work to make sense of it. Asking a complicated algorithms question in a whiteboard scenario requires me to "play-act" a more linear string of thoughts, which diagram nicely, than I would use in an actual puzzle situation, while trying to be personal/presentable/well-spoken. I can certainly do that, but that mix of social skills and problem solving skills is very rarely needed in real life, where we mostly compartmentalize such things to meetings versus coding time.

Then add in the ticking clock anxiety, that I don't feel like I have time to breathe, much less patiently tease out a solution to the problem.

Then add in the exhaustions of jet lag and trying to do all of that for 8 hours.

I feel like a lot of these sorts of questions, I could solve the problem rather well on my own time, or I could be a great, sociable, and presentable candidate with smart thoughts but maybe not immediate solutions. You want to test for the former, but present only situations where the latter works well.

I feel like I'm not far from a baseline human on inter-personal skills, and even am somewhat more extrovert than the average programmer, and knowing that I have problems doing "both" the soft and hard skills for an eight hour interview day, I sometimes can't imagine how anyone in this industry gets hired because it's almost like we've optimized interviews to be intentional hells for programmers by trying to test hard skills in high pressure soft skills environments.

I very much believe that the technical interview culture as it exists today meets the definition of hazing [1], and that one day the industry will look back on it and wonder what was it thinking.

[1] https://en.wikipedia.org/wiki/Hazing

>>Then add in the ticking clock anxiety, that I don't feel like I have time to breathe, much less patiently tease out a solution to the problem.

The worst part about this is if you get the first attempt wrong. Then you have very little time to start again and get to a working solution.

By then your anxiety begins to multiply.

I was asked this question at a company (unicorn, but lower bar than Google, got the offer) and I only described #1. Thanks for calling me stupid though!

That wasn’t my intention, and I updated my comment to clarify. Did your interviewer offer suggestions or just sit there?

It may not have been the intention but I'm honestly aghast at the utter elitism coming out of Google and Facebook. I get that you are "superior humans" that are plucked from top institutions making 25% more than me at Amazon, but you don't have to keep emphasizing it.

My interviewer offered suggestions. Finally I described memoizing it (can be done with 2 lines in Python with a dictionary) though he was more interested in how to do it without extra space by utilizing a BFS that uses the input list as a queue. Then he ran off for another meeting.
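Assuming the question is the keypad knight-moves problem from upthread (the neighbor table below is inferred from the code posted in that subthread), the memoization the commenter describes might look like this sketch, with `lru_cache` standing in for the hand-rolled dictionary:

```python
# Memoized recursion for counting knight-move sequences on a phone keypad.
# lru_cache plays the role of the dictionary: each (key, moves) pair is
# computed once, collapsing the exponential recursion to linear time.
from functools import lru_cache

NEIGHBORS = {0: [4, 6], 1: [6, 8], 2: [7, 9], 3: [4, 8], 4: [0, 3, 9],
             5: [], 6: [0, 1, 7], 7: [2, 6], 8: [1, 3], 9: [2, 4]}

@lru_cache(maxsize=None)
def count_paths(key, moves):
    if moves == 0:
        return 1  # one way to make zero moves: stay put
    return sum(count_paths(n, moves - 1) for n in NEIGHBORS[key])
```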

> I'm honestly aghast at the utter elitism coming out of Google and Facebook

I presume it's Google and Facebook's extremely high hiring bar that you're attributing to elitism? Let me attempt to reframe that for you in a way that I hope will let you let go of some of the anger I'm picking up on:

There's no denying that Google and Facebook target top-tier institutions and put their candidates through hell. However, I don't attribute this to elitism. I personally got my start at a less-than-stellar CS program, and I go out of my way to recruit there as an alum. I've detected no elitism from any recruiters, interviewers, or hiring managers, and I've seen a number of people hired as interns and full time engineers as a direct result of my outreach. If Google were as elitist an organization as you seem to think, I and the people I've helped get hired would never have had a chance.

So then what explains the high bar? Simple necessity. I can't share details, but the number of applications we receive for every opening is staggering. What's more, when we approach people in industry and academia, we see a high rate of interest. The blunt truth is that we are more selective about hiring because we can be. When a hundred people (not real figures) apply for each available posting and your hiring pipeline is staffed well enough to interview all of them, you'd better believe you're going to hire the best one in that pool. This isn't a Google thing, this isn't a Facebook thing, this is a rational actor thing.

So no, I don't think you're stupid for coming up with a suboptimal solution, I have no reason to think you're a bad engineer, and I wouldn't reject you because of your background. However, given a pool of candidates of whom some got as far as solution 3 and some got as far as solution 1, I'm going to more enthusiastically recommend hiring the ones who got solution 3.

>I presume it's Google and Facebook's extremely high hiring bar that you're attributing to elitism?

Yes and no. It's the sense that I'm considered inferior and stupid for being at a lower-tier company than Facebook or Google. I can see it in every condescending hiring post about how easy it is to pass interviews without seeing similar problems before, switch companies and save for retirement, and how nice it is to work at a company that treats you like an ubermensch.

>I've detected no elitism from any recruiters, interviewers, or hiring managers, and I've seen a number of people hired as interns and full time engineers as a direct result of my outreach.

I have absolutely heard of individuals from top-tier schools getting easy questions while I got tougher ones, cases where people from top programs and other top companies (like Facebook) got the benefit of the doubt in borderline situations, etc.

In some ways it's even worse than at Facebook; at least at Facebook there isn't a hiring committee after a team match like there is at Google now.

Working at FB, +1 to all of this.

Personally I find that I can get a lot more useful signal from simple practical problems (eg "take a .txt file as input, modify all the words which contain the letter 'q' to be uppercase, write the results to a new file") which have a lot of room for expansion (What if we instead want to capitalise each word where the word after it contains a 'q'? What if we only want to modify the third instance of each word? What if the .txt file is too large for RAM? What if a single 'word' is too large for ram and the letter q is right at the end of the word?) -- starting out with something that any programmer should be able to solve in one or two minutes, and then cranking up the difficulty; adding in bits where a strong algorithmic coder has an opportunity to come up with a more efficient solution, but it isn't just a binary "has the candidate memorised the appropriate algorithm yes/no?".

(Example question made up off the top of my head because NDA, I haven't actually given much thought as to whether it's a good question or not, just giving an example of "question which starts simple and then you can add complexity on top")
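The warm-up task above might be sketched in Python as follows; the file names and the exact word-splitting convention are my own assumptions, not part of the original question:

```python
# Sketch: uppercase every word containing the letter 'q' in src and write
# the result to dst. Splits on single spaces so spacing and newlines
# survive the round trip; a real solution would pin down "word" precisely.
def upcase_q_words(src, dst):
    with open(src) as fin, open(dst, "w") as fout:
        for line in fin:
            words = [w.upper() if "q" in w.lower() else w
                     for w in line.split(" ")]
            fout.write(" ".join(words))
```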

What would the correct solution be for the questions involving an input too large for RAM? This is new to me.

"the correct solution" implies that there is one particular correct solution - there are a lot of options, and I'd accept any answer if the candidate can come up with a reasonable justification for it. One example would be to process the data in fixed-size chunks of eg 1KB instead of trying to process the entire file at once. Pros: no matter how weirdly malformed your data is, your memory use is constant. Cons: what if that 1KB boundary line cuts a word in half? - then the candidate can either handle that new edge case, or switch to a different approach that doesn't suffer from that problem. Again, either option is valid if they have a solid explanation for what they are doing and why they are doing it.

I often do, both for fun and to prepare myself to give a question I'd like to use in an interview. The thing about interview questions at Google is there's a very fine line they need to walk: if they're too easy they give no useful information about the candidate, and if they're too hard they can never be solved in 45 minutes, again yielding no useful information. You also need to be able to provide some hints, because the vast majority of candidates need a prod in the right direction now and then.

As someone with access to the entire canon of Google interview questions, I've gone through the 20 or 30 with the highest rating, and none of them are ridiculously hard. The general pattern is either 1. a coding problem, meaning a relatively easy algorithm with plenty of opportunities to screw up the implementation so we can see how well you code and how well you catch your own errors, test your code, etc and 2. an algorithms problem that's really just a dressed-up version of some basic algorithm like BFS or dynamic programming so we can see if you can make the leap from theory to application.

Obviously I'm at an advantage here because I've seen most of the questions in use today, but whenever I encounter a new one I can intuit what's being asked and what it'll take to solve. I'm sure other companies have different sorts of questions that optimize for different things, but for our questions once you do enough of them you can sort of do any of them.


By the way, in my experience, those seem to be the biggest pitfalls for both job candidates at Google and programmers I meet in the real world. Sure, you crammed algorithms before you came here, but can you apply them in any real-world way? Sure you know your web/mobile app framework back to front, but will you hit a wall when asked to do something novel? Sure you can write a for loop to iterate over a data set, but will you fall on your face when that data set is petabytes in size?

I've become good at Google's questions because they generally test for those two things: nitty-gritty coding skills and an ability to generalize basic algorithms. The best candidates are those who know their algorithms inside and out, have the ability to break down and transform a problem until they can apply something in their toolkit, and have the coding chops to translate their concept into an implementation. All skills which, you may notice, make for a good engineer.

If you've ever wondered why we still do whiteboard interviews, that's why. Except you can use a chromebook now if you prefer, but you get the point.

I kinda agree but think that you are overselling the signal generated. We have internal data that we can't discuss here that throws some doubt on the signal from whiteboard interviews of the type described above.

What CAN you tell us about the internal data? Why does the process persist if there's data showing it's flawed?

Cribbing from a famous quote: this interview methodology is the worst except for all of the others. We don't know what a systemic, better interviewing system would be.

I don't agree with your Winnie hijack unless you add "systematic" to that as well, like you did in the next sentence.

Perhaps the answer is that there is no systematic interviewing system that actually works well. It seems like trying to create one was the origin of all this pain, and the cargo-culting by companies that don't need a systematic process has amplified it.

Interviewing is an inherently social, inherently subjective process. Such things are highly resistant to systematic approaches.

Interesting, I've found my opinions align pretty well with who gets hired and who does well, but I'm interested in this research. Will reach out internally.

> if they're too hard they can never be solved in 45 minutes, again yielding no useful information

I strongly disagree with the assertion that you get no useful information from an unsolved problem. The goal of giving a candidate a problem should not be to have them solve it. The goal should be to learn how they approach a problem, think it through, and work on solving it. Actually coming up with a solution is the least important part.

As an interviewer I've intentionally given people problems that would take more time than they have. I'm always upfront about it, making sure they know they may not solve it and that they should get as far as possible while explaining their choices.

As an interviewee I've been dinged for not solving problems. The reason was that I spent most of my time talking through my approach and planning how I'd solve it. Without proper planning you get sloppy code, technical debt, and missed deadlines. Planning seems to be greatly undervalued in tech, especially in small startups.

> As an interviewer I've intentionally given people problems that would take more time than they have

What are your experiences with this? I've thought about doing this but decided against it because I figure the candidate's anxiety and stress levels are already high enough without the knowledge that they've been given a problem that's impossible for them to solve.

It works well when you set proper expectations. I make sure they know they are unlikely to finish and that we're doing it to learn their process, not the solution.

> Sure you can write a for loop to iterate over a data set, but will you fall on your face when that data set is petabytes in size?

I also asked a question on HN yesterday. It looks like there are more questions on data structures and algorithms for big data. However, the books I've read don't focus much on this area. As I haven't worked on big data before, that would also be a challenge for me.

Here's a few pointers on big data:

* You obviously need more than one machine to do computation. Things become more difficult as you go from one machine to many, so learn about parallel programming tactics like MapReduce[1] and Flume[2].

* Storing stuff is also much more difficult. You'll either be working with distributed filesystems like GFS [3] or Colossus [4], NoSQL databases like bigtable [5], or SQL-like databases like Spanner [6] and F1 [7]

* Algorithms are tricky, because things that take no time at all on a single machine get to be very difficult when done on multiple machines. I don't actually know of any resources on this one, but if you know the algorithms and learn distribution tactics like I outlined above, it should make more and more sense with practice.

* Finally, you need to know your operating systems and networks. What operations are cheap (RAM access, local disk access, most local computation, same-cluster network operations, streaming remote disk access) and what operations are expensive (longer network hops, random-access remote disk operations). You also should be able to ballpark (to an order of magnitude) things like latencies of operations, dataset sizes, QPS's per machine, etc.

Also, when reading the papers, pay more attention to the interface than the implementation. One of the hardest things is transitioning from being spoiled by the local OS interface, where everything is cheap, to the distributed world, where seemingly random things are expensive or impossible.

[1] https://static.googleusercontent.com/media/research.google.c...

[2] https://ai.google/research/pubs/pub35650

[3] https://static.googleusercontent.com/media/research.google.c...

[4] https://www.systutorials.com/3202/colossus-successor-to-goog...

[5] https://static.googleusercontent.com/media/research.google.c...

[6] https://static.googleusercontent.com/media/research.google.c...

[7] https://static.googleusercontent.com/media/research.google.c...
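As a toy illustration of the MapReduce shape from the first pointer, here is a single-process Python sketch; real systems like MapReduce and Flume distribute each of these phases across many machines, which is where all the difficulty lives:

```python
# Toy, in-process word count in the MapReduce shape: map each record to
# (key, value) pairs, shuffle (group) by key, then reduce each group.
from collections import defaultdict

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # In a real system this is the expensive cross-machine phase.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    return {k: sum(vs) for k, vs in groups.items()}

counts = reduce_phase(shuffle(map_phase(["a b a", "b a"])))
# counts == {"a": 3, "b": 2}
```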

Thanks. Appreciate the links!

> The best candidates are those who know their algorithms inside and out, have the ability to break down and transform a problem until they can apply something in their toolkit, and have the coding chops to translate their concept into an implementation.

I mean, I guess, but in the many hiring discussions I've been on I found that the interviewers who asked those questions and based their feedback on them had very noisy signals. They frequently gave "lean no hire" feedback on our best hires. The interviewers who actually asked questions relevant to the work had much better signals.

>>Sure you can write a for loop to iterate over a data set, but will you fall on your face when that data set is petabytes in size?

I understand what you are trying to say here. But how many candidates are capable of inventing path-breaking algorithms in 45 minutes, just to crack a job interview? I'd guess only a handful of people in the world could achieve a feat that amazing, inventing a totally novel algorithm from scratch in 45 minutes. If I'm not wrong, the very people who invented these algorithms went through years of research to publish them.

Now, if you are not testing this, and just testing whether the person knows an existing algorithm, then really this is a scaled-up version of testing the ability to memorize multiplication tables.

There is nothing novel in merely knowing an algorithm. And you also have not tested a person's ability in inventing an algorithm either.

> There is nothing novel in merely knowing an algorithm. And you also have not tested a person's ability in inventing an algorithm either.

The goal of this is to test whether the interviewee can recognize that this is a case in which to apply the algorithm. It is pattern recognition, not memorization, that is being tested here.

Whether that is particularly useful is of course the million dollar question.

>>The goal of this is too test whether the interviewee can recognize that this is a case to apply this algorithm in.

That's the easy part. Everyone knows that if you have to sort a large data set, you partition it in some way. Searching a huge data set requires building a tree on some criterion beforehand and eliminating whole branches at every node. Or you use a graph algorithm to figure out shortest paths and the costs associated with them. Or, if you face a situation where you recompute many results, you memoize. Or writing nested for loops is a very bad idea. These are easy things to spot.

But they won't clear you in the interview if you don't get the mechanics right. Knowing that a BST helps with sorting doesn't cut it. They want you to tell them how you would traverse a tree in some 'level ordered' fashion, or list different ways you could order a graph along with the exact heuristic. From here on, this is pure memorization.

There is huge difference between inventing a new framework of thought even at an abstract level, and knowing the precise mechanics of balancing a tree.

i don't understand why people do this. you're raging against a machine. do you think you're going to change this person's mind on how effective whiteboard questions are?

No, but it helps to highlight the pointlessness of these methods in hiring people for a real world project.

I keep hearing how startups use multiple online-judge algorithm screening sessions to filter and hire the best. But the candidates just can't do any real-world work, because they are busy practicing interview questions for their next job while they are in their current jobs.

In other words, you are optimizing to hire people who are good at clearing interviews, not at doing their day jobs, because anyone practicing these algo questions all day to clear your interviews is necessarily neglecting their current job.

>No, but it helps to highlight the pointlessness of these methods in hiring people for a real world project.

for whom? who is the intended audience of your response?

I currently work for Google, and enjoy whiteboard interview questions. When the questions are well designed there should be a relatively trivial, brute-force solution. As that solution is explored, it should highlight a bottleneck (time, space, etc.) that leads naturally to a more elegant solution. I really enjoy the excitement of finding the inflection point between the two. In addition, be aware that Google now offers Chromebooks for interviews at many locations, in case folks are more comfortable typing than writing. A recruiter can confirm if that option will be available. (See https://careers.google.com/how-we-hire/interview/#onsite-int...)

From the average skill level of Google engineers I know, I would say yes, most would do a very solid job on the average interview question.

However, you probably wanted to know if that means that they would all get back in if re-interviewed, and the answer is no. Google is famous for tolerating huge numbers of false negatives in exchange for a small reduction in false positives. So, by definition, many would fail, even if they are capable Google engineers.

When I interviewed, I knew nothing of this process. Which was probably good; it would have made me a lot more nervous if I had known what lottery I was about to enter. In fact, I was pretty arrogant: seeing that typical questions were "just CS", and thinking my CS knowledge was pretty solid, I didn't even do any prep work. So I was probably pretty lucky to get in.

My own assessment of the 5 interviews I had was that 3 of them went very well. 1 went so-so (the interviewer assumed I was a webdev and asked me all sorts of specific server balancing questions, whereas I have worked mostly on compilers and game engines; I managed to do ok-ish since I do know distributed systems). The final one I felt went badly, since I actually didn't manage to solve the problem the interviewer asked (it was one of those more puzzle-y questions), but I guess I somehow did ok, since I did what you're supposed to do in such a situation: show that you're a good communicator and problem solver even when you don't know the answer.

I was disappointed in all these interviews since I was hoping for interesting questions specific to me (e.g. how did you implement this type inference algorithm?), which never came.

Of all the interviews I've conducted for Google since, I've only had a 1:40 ratio of seeing people being accepted, even though plenty seemed highly capable to me. Usually some other interviewer didn't have quite as good a time with them as I did. You really need to make all of them happy to pass.

To other answers that joke that these interviews are the hardest thing you'll do at Google, I'd like to disagree. At least in the teams I've worked, I've done things that thoroughly stretched my abilities, way beyond the level of interview questions. Of course they're of a very different nature, but that is the point: interview questions are very distilled, idealized, and problem solving in the real world is very messy, and to my mind at least, much more interesting.

What do you mean by "whiteboard questions"?

The ones I had at my FB interviews are questions I would describe as "any competent engineer should be able to solve most of these in an interview slot" sort of questions — Simple data structures (sometimes explicit, sometimes implicit as part of the problem statement), simple strategies (breadth first, depth first), and a basic understanding of time (and space!) complexity.

Of course, you can hike up the difficulty a tonne (e.g. by replacing plain binary trees with red/black trees) and progressively fewer people will be able to solve those, but in my experience that's not common in an interview environment at FB.

Ultimately, the fact that the interview process includes whiteboard questions means that people at those companies _do_ perform decently (for some definition of decently) at those problems — that's how people are selected.

That’s because the FB process explicitly optimizes for speed. I recently finished a loop where I solved both problems optimally in 2 of the interviews and only 1 in the last, and I was rejected.

The speed required makes me skeptical many people can solve these problems in the required time without luck of having seen similar problems before.

Did they tell you this? I feel like a lot of candidates think they nailed it when really they missed important things. I'm not sure I've ever had a candidate solve my question fully and optimally. This of course is nowhere near necessary to get a good score but I think that thinking "yep this is as good as it gets" is a common error.

The hire rate is also super low at the majors so it could have also been noise.

Are you at FB?

One of them did tell me this after his round (he said we got further than he thought we would); the other one explicitly said my solution was optimal and what he was looking for.

The 2-questions-per-interview "rule" is an open secret now among people who have interviewed there. I've never heard of someone who only answered 1 per round getting a follow-up or an offer.

*Also, they generally don't move onto a second question if they don't think the first one was good enough.

The question I recently got that I found annoying was this: Given input (4, 2, 3, -1, 5) and output (-30, -60, -40, 120, -24) what's the function between them?

I don't actually mind doing whiteboard interviews even on minimal prep. I've gotten a few of the "flagship" company offers and failed a few. I think part of it is internalising that nothing is going to guarantee you anything. You can always hit a question you bomb on and have an interviewer that will move on or bury you for it.

Phone screens are 2-4 questions and on-sites can add another 6-10. A 5% bomb rate for an on-site question will put you into the 60-70% pass rate.

I'm confident that if I do all the interviews I'll land atleast one offer but if you're putting all your eggs in one basket you're going to have a rough go of it.

I also got annoyed with the "closest pair of points" problem [1]. It needs some geometry observations to prove how the time complexity can be achieved. I was just thinking, "Are they testing my geometry?"

I don't know how questions are created and filtered for interviews.

[1] https://en.wikipedia.org/wiki/Closest_pair_of_points_problem

I agree that it's not a great problem, but it's worth knowing: some problems were designed to have an obvious answer and to allow the proctor to guide you to the ideal one.

This has the nice property of acting both as a fizzbuzz and as a sample of the "how do I work on hard problems" stage. The first part is pretty pass/fail; the second is observational. Unfortunately this requires a thoughtful proctor, which isn't always the case.

If I had a candidate who couldn't do the (n^2)/2 solution, that's pretty bad. It's nested fors, man. Bang it out in like 10 lines.

Still though, I agree, not great.
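For the record, the (n^2)/2 brute force really is just the nested fors (a sketch; names are mine):

```python
from math import dist  # Python 3.8+

def closest_pair_brute(points):
    """O(n^2) closest pair: compare every pair, keep the minimum."""
    best = None
    best_d = float("inf")
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if d < best_d:
                best_d, best = d, (points[i], points[j])
    return best
```

The divide-and-conquer O(n log n) version is where the geometry observations come in; this baseline needs none.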

What is the answer: y = -120/x?

(I hope I didn't mess up the original numbers).

The output at index i was the product of the full input sequence excluding the element at index i.

The followup question was how do you do it in linear time.

Product of all elements except self. Usually a follow-up to this question is to disallow the divide operator.
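FWIW, the standard divide-free, linear-time version is two running-product passes (a sketch; the function name is mine):

```python
def product_except_self(nums):
    """out[i] = product of all elements except nums[i], no division.

    Pass 1 fills in prefix products; pass 2 multiplies in suffix products.
    O(n) time, O(1) extra space beyond the output."""
    n = len(nums)
    out = [1] * n
    prefix = 1
    for i in range(n):              # out[i] = nums[0] * ... * nums[i-1]
        out[i] = prefix
        prefix *= nums[i]
    suffix = 1
    for i in range(n - 1, -1, -1):  # multiply in nums[i+1] * ... * nums[n-1]
        out[i] *= suffix
        suffix *= nums[i]
    return out
```

On the thread's example, product_except_self([4, 2, 3, -1, 5]) gives [-30, -60, -40, 120, -24].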

Yep, so I take it you've seen it. Did you feel it was a good/fair question? I thought the follow-up was but the original a little arbitrary.

I don't like the way it was phrased. It's better in the original format of just: given a list of ints, return a list where each elem is the product of all except that elem.

I can't really judge; I would be happy to have received this in an interview because I've seen a variation of the problem on LeetCode.

I just went through the interview loops at several FAANGs, first time doing this, and got several offers, including G. I used to be super scared of even applying, for fear of rejection, because I thought I wasn't smart enough. I am self-taught, with a lib arts degree from an average school and a low GPA.

But I actually enjoyed learning these problems and figuring out solutions, and as time went on it got easier and easier. I enjoyed the interview process and had a good experience everywhere except FB.

My suggestion to those who are striving for FAANG is just to keep practicing the questions, try to have fun with it, and go into the interviews with a positive, collaborative attitude.

> A 5% bomb rate for an on-site question will put you into the 60-70% pass rate.

I'm not sure I follow the math here, can you expand a little?

I just punched .95^10 (.59) and .95^6 (.73) into Google. I mean, they're made-up numbers; the point I was trying to get at was that you can do fairly well on these things, but you only need to fail one question to fail in total.

But if you do even two interviews your chance at getting one offer jumps to 1-.4^2 thru 1-.3^2 (84-91%).

(Am I out of practice? Did I do that wrong?) -- I tapped it out painfully on my phone.
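Spelled out, with the same made-up per-question and per-loop rates as above:

```python
# Pass chance for an on-site if each question independently has a
# 5% chance of being bombed, and any single bomb sinks the loop.
p_pass_10 = 0.95 ** 10      # ~0.60 for a 10-question on-site
p_pass_6 = 0.95 ** 6        # ~0.74 for a 6-question on-site

# Chance of landing at least one offer across two independent loops,
# if each individual loop fails 40% of the time.
p_one_offer = 1 - 0.4 ** 2  # 0.84
```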

I suppose the math checks out, but I disagree with the model: the whole point of doing multiple interviews is to get a variety of opinions based on the knowledge that bombing an interview just happens. You're assuming that bombing one disqualifies you, but that's not true.

Yea, sure? I sorta agree and disagree but it's a little beside the point.

My point was addressing the frustration with whiteboard interviews. It sucks to feel qualified and still get rejected. That doesn't necessarily mean they all vastly outclass you (they might!). If you treat interviewing like applying to colleges (3 or 4 big schools, a few backups), you're going to be a lot less burned by the system.

Sure, hate the system; we've covered blaming the interviews. But that's not helping anyone right now.

As to the model, in my experience if you do sufficiently poorly on one and leave any doubt on another no one interview will save you. Sure you could add in the probability of failing two vs one. But that's numerically the same as just making that .95 a .9. It's a made up number, so what. The conclusion is kinda the same: doing pretty well can still lead to a failure for any given job, but you can still do pretty well in your job hunt.

I worked for FB. I like the whiteboard problems and many other engineers like them too, and I enjoyed discussing or solving similar problems over lunch. That's probably why they're so prevalent in interviews -- I don't believe they actually predict a candidate's success in a company. Btw in college, I used to compete on websites like topcoder, spoj, etc for fun. The interview processes of many companies drastically favor ppl who did that in college, at least while hiring in the <10 year experience range.

edit: also worked for google, same story

edit2: I believe most people can learn to solve these problems because a lot of it is mixing and matching algorithms/techniques that you've practiced. It just takes time and practice and exposure to a wide variety of problems and crucially, confidence. I've seen people who somehow strongly believe that they're not the "algorithmic type" and will never be able to solve these even if they spend months working on it. :( I've tried and failed multiple times to convince those ppl that this is a very learnable skill.

And there are people who just find this stuff boring/mundane and also don't think it's worth allocating months for prep. There are lots of other great opportunities, so why bother?

> I don't believe they actually predict a candidate's success in a company.

They often don't which is why they shouldn't be used.

How do I force myself to find doing those problems fun? To me they just seem painful, because they really expose how stupid I am even if I know the basics.

Start simple: find a list (with answers). Don't try to "solve" them; just read the questions and try, within a minute or so, to name a data structure or algorithm -- something you'd expect most undergraduate intro classes to cover -- that seems likely to be part of the solution. Check your work, keep track of which ones you got right/wrong, and move on to the next question.

Re-visit the ones you got wrong first. Try them again, repeat until you've gotten them all right (this might just be because you remember the solution from looking it up, that's fine, you'll have plenty of time to forget by the time we get back to it :).

Now, revisit the ones where you had the right data structure on the first go. HOW would that data structure help? Go a level deeper on actually trying to solve it.

I can't guarantee this will be fun, but it should be fast. A few minutes here or there on any given problem, going one level deeper each time you visit it.

Eventually you'll have the whole algorithm for one of the problems, likely using some common data structure like a hash map, tree, or heap.

Code it in a language with a good set of algorithmic primitives, use every tool the library gives you (don't worry about remembering how the data structures work on this pass). Hopefully with the whole algorithm in mind and all of the tools at your disposal, this is "just plumbing".
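As a concrete illustration of "just plumbing" (a hypothetical problem, not one from this thread): the k most frequent items falls almost entirely out of the standard library, the hash map and heap included:

```python
from collections import Counter
import heapq

def top_k_frequent(items, k):
    """Return the k most frequent items, most frequent first.

    Counter is the hash map; heapq.nlargest is the heap. All that's
    left for us is wiring them together."""
    counts = Counter(items)
    return heapq.nlargest(k, counts, key=counts.get)
```

Knowing the whole algorithm first (count, then select by frequency) is what makes the coding step feel this mechanical.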

Rinse, repeat.

Somewhere in there, do the same with the complexity analysis (time and space!).

Next time, try it where you have to build the data structure by hand.

Then one day you'll see a problem not on your original list, and you'll immediately think "red-black tree with a hash table lookaside buffer, but I wouldn't want to have to build the RB tree by hand if they asked me to, so AVL tree because the complexity is the same and I can bang out the code like it's what I do every morning before breakfast".

Like I said, it might not be fun (at least at first), but it is a way to break up the practice into digestible chunks. Also it builds on the strategy that I've seen work best in interviews -- if you treat each step of iterative deepening of the solution as a point where you'd talk with the interviewer about what you're thinking and why, you give them a lot to go on about your thought process and a chance to course correct if you misunderstood the question or your heuristic for how to approach it simply came up wrong.

For someone who is preparing for interviews, this is really good advice. Thank you.

It's okay if you don't find these problems to be fun. Some people like crossword puzzles, some don't. If you can't solve a problem, it doesn't say anything about you, except that you haven't yet practiced using that particular algorithm/technique enough. It most definitely does not say that you are stupid, although some ppl would have you believe that.

I'd recommend that you only solve problems that are extremely easy, on websites like https://www.spoj.com or https://www.topcoder.com, or old Google Code Jam questions. Pick a single website (I recommend Code Jam) and do only the easy ones for a few months. It's hard to argue against the happiness that comes when your program passes the tests. The point is not to sharpen your skill but to get used to the feeling that your program passes ALL the test cases.

If you get to a stage where you're completely sick and bored of easy problems, pick a slightly more difficult problem with basic algorithms like plain BFS or DFS. Try it for just a day or two, and when you can't solve it, read the published solution. Try to understand it, apply it manually to the test cases, and once you're sure that you've completely grasped the solution, implement it and see if it passes all the test cases. Take it slowly, and return to easy problems often.

This could take you pretty far in terms of enjoying the process. :) At least that's how it worked with me -- I liked the high of getting a program "accepted".
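For anyone rusty on "plain BFS": it's about a dozen lines on an adjacency-list graph (a sketch; the names are mine):

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Plain BFS on a dict-of-lists graph; returns hop count, or -1
    if goal is unreachable from start."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if node == goal:
            return depth
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return -1
```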

I have the same problem. I can't say my approach works or not because I am still trying.

My approach is to keep writing the basic algorithms, data structures, and common questions again and again, once every 2-3 weeks. When I know every part of them, then I can start twisting them or going deeper.

I am not that top talent. Sometimes even a 0-based or 1-based array can trip me up for a while.

It might be more fun if the problems are a bit more game-like. I got my start with Project Euler, which I was doing for fun looooong before I even thought about interviews. There are lots of other resources like that now, I think. I've heard people mention HackerRank.

I am a SWE at Google, and yes, most Googlers are pretty good at solving whiteboard problems. It's not like we are born with those solutions in our heads, but we enjoy solving those questions especially with other people. It's a lot of fun to "explore" answers to these questions, and that's what I think interviews are all about.

I know HN hates whiteboard problems, but just like anything, you get good with it if you're having fun solving them.

We don't hate whiteboard problems because they aren't fun. They're obviously fun for some people.

We hate them for interviews because they're a terrible signal for hiring, they're often poorly administered, they're humiliating, they discriminate against introverts, and they've become frustratingly common.

They unfairly mess with people's livelihoods. It is a really big, life-altering issue for some people.

Yet billion-dollar companies with complex technical challenges, like Facebook, Google, Microsoft, and Uber, were somehow able to manage with such a terrible signalling tool for engineers. /s

And yet, nobody has a significantly better tool.

I also want to see how you guys solve these problems. People in YouTube videos already know the answer; they just tell you how to solve it. But I can't see how they approached getting to the answer. I think that part is the most interesting and valuable part for a non-talent like me.

In my experience, you can't learn that skill by watching others. It is a practice, like playing chess, or writing Math proofs. So IMHO the only way is to train. Start with simple problems, and continue from there. Bang your head against problems for a few hours before looking up the solution. Over time, you will get better.

Disclaimer: I'm myself not very good at algorithmic puzzles. For a job application I took a vacation to train solving coding tasks, and was getting better every day. I got that job, but guess that for a Google application I'd rather have to train for 3 months.

>They just told you how to solve it. But I can't see how they approach to get the answer. I think that part is the most interesting and valuable part for non-talent like me.

I ran into this too when I was preparing for a loop, especially for more complicated DP and recursion problems.

I found a website called gorecursion [0] on Reddit and he is very good at describing these solutions from the naive case to the optimized case very well. This helped me quite a bit, but I still didn't get the offer.

[0]: http://www.gorecursion.com/

How do I force myself to find them fun?

Maybe start with the easy ones; you get a sense of accomplishment when solving a problem, and build on from there.

I came across a guy on Reddit a while back who was currently employed at Facebook. He thought he'd try making a video of him doing one of Facebook's interview questions thinking it'd be easy for him to do. He stumbled through the problem and didn't end up solving it in the timeframe that would have been allowed.

It was good of him to still post it but it showed that someone currently employed at Facebook would have been screened out at the phone call stage had he applied again without any preparation.

I remember that video. I thought it was hilarious, because the question was so easy that anybody who has done LeetCode would be able to identify it and code it in like 5 minutes. I've gotten it at 4 different companies!

would love to watch this, can you share the link?

There is a great video lecture where a guy talks about how tough his committee was, and they denied a series of packets only to be told that the packets were theirs! Very reassuring.

I have no info about whether Google/Facebook programmers can solve whiteboard questions easily, so I expect this comment to be downvoted, but ...

I'm consistently surprised by the fact that, despite hiring programmers who are the world's most elite, Google's products and services are just kind of ... plain/average/boring. The products and services are far less amazing than Google's engineers.

It's like Google takes in geniuses but outputs kind of boring, standard, average quality products and services with a surprising number of problems and deficiencies.

It looks like they enjoy creating things rather than making them popular with end users. Google+ is a good example.

A friend who transferred internally to a SW engineering position at Facebook mentioned they probably couldn’t get in the front door.

This is basically the selection criterion for getting a job, so why wouldn't they be?

Any good examples of “whiteboard questions”, I’m not super familiar with the phrasing.

Invert a binary tree, check if a linked list has a loop, find the lowest common ancestor of two elements in a binary search tree.
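The linked-list loop check, for example, is usually expected as Floyd's two-pointer walk (a sketch; the node shape is an assumption):

```python
def has_cycle(head):
    """Floyd's tortoise-and-hare: advance one pointer by 1 and one by 2;
    a loop exists iff the fast pointer ever laps the slow one.
    Assumes nodes expose a .next attribute; O(n) time, O(1) space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```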

Actually I meant "whiteboard interview questions". I think you can search for those on Google.


I worked with a guy, Bob, in our college computer room. Bob had returned to get his BS after many years in industry. He had great stories. He also had great advice. He taught me that you can learn a lot about how a company will treat their employees by paying attention to the way they treat you during the interview. He suggested several signs that indicated it was best to just "walk away".

If you go into a restaurant for a professional lunch meeting and they have a ball-pit / swing set /etc. it might not be the most professional place to hold the meeting. Just walk away.

If you go to a 5 star restaurant and they want you to wear a clown nose (Everybody Does It!) Just walk away.

The big scam is "We hire the best and the brightest". "We mandate a whiteboard test (Everybody Does It)". It is my opinion (based on experience) that companies that insist on a whiteboard test, despite years of programming on your resume and open source code, are either Ball-Pit companies (who don't know what it means to be a professional) or Clown-Nose companies (who know what it means to be a professional but still treat you like a commodity). Just walk away.

When you object, please reference one study in a journal that shows that whiteboard tests are effective in any measure when used during interviews.

I understand why Google does it. I've even interviewed there. They get thousands of resumes (which they never read; at least nobody had at my interviews). They have to weed out the "learn coding in 6 weeks" crowd. But you're a professional. You should expect to be treated like one. You don't ask your surgeon to cut up hams in an interview. You don't ask your builder to nail two boards together. Professional accountants are not interviewed by bank tellers and asked to count change.

Google, like Microsoft before it (remember the "why are manholes round?" quizzes?), has combined the marketing claim "We hire the best and brightest" with the ball-pit mentality of whiteboarding. Google HR (sorry, People Operations?) SHOULD be embarrassed, but apparently they, and upper management, LIKE ball-pits.

A real professional interview involves a discussion about what the company does, what it current problems are (aka why are they hiring), and a discussion of if you, as a professional, have the skills to solve the problem. What do they need? How can you contribute? It is likely that your interviewer has no idea what you are being interviewed for and cannot answer these basic questions.

We, as professionals, should consider whiteboarding as an insult.

Just walk away.

I'd have to "just walk away" from the entire industry, since I've never once in over twenty years had an interview which did not involve a demonstration of my ability to reason through a problem and communicate a proposed solution using conversation and a whiteboard. I am a professional, yes, and thinking about problems and communicating about solutions is my job; why should I be offended that someone wants me to show them how I work?

Why would I ever want to hire someone incapable of demonstrating their skills in such a simple, straightforward way? If you can't solve a toy problem and explain your solution, or at least have a conversation about the constraints and tradeoffs as you work on it, how am I to imagine there is any chance you could be a productive teammate?

That's nice in theory. But it doesn't fit the facts.

1) The whiteboard questions are unrelated to the job opening.

The interviewers have no idea why you are there. They don't know what your resume says. They don't know who you might be working for. They don't know what you might be working on. Would you hire someone to work for you without knowing if they were going to clean your house, install new lighting, or rebuild your kitchen? How exactly would you interview a person if you don't know why you need their skills? Or how their skills are related to the task? Asking a carpenter to "whiteboard" the steps to clean a bathtub during an interview is insulting.

2) The interview is being conducted by people who have almost no job experience.

My experience is that the interviewers are recent college graduates, and that the whiteboard questions are related to their data structures class. They are focused on details like worst-case behavior of the algorithm and whether you can quote the log-log-n asymptotic performance. This entirely ignores the fact that the actual behavior depends a lot on the actual measured performance. Take, for example, my paper on garbage collection (http://daly.axiom-developer.org/TimothyDaly_files/Bozm84.pdf), where the actual data access pattern matters. Theoretically strong algorithms usually exhibit bad behavior because of data access patterns, CPU caches, look-aside buffering, and/or branch prediction failures.

3) There is no known relation between being effective and being able to present information.

I know people who are strongly autistic, to the point of not making actual eye contact or giving any verbal feedback at all, appearing to be "not paying attention"... and then create beautifully crafted excellent code. I also know people who "talk a good game" and couldn't write a "hello world" page in html. Please point me to any study that shows that whiteboarding is an effective way to judge candidates. Even Google's HR head claims it is ineffective.

4) There is a ball-pit / clown nose aspect to the interview.

"Everybody does it" so wear the clown nose. I've had a fair number of interviews in my 48 year career. Most of them were with people who needed something done and I could demonstrate my ability to do that. Interviews exist for a reason. Asking me to whiteboard Knuth-Bendix because you're writing an editor is reasonable. But the same question for a customer service job would be nonsense.

These "clown-nose" interviews are simply insulting and a waste of time. Walk away.

Your theory does not fit my facts, and I can't relate to your "clown-nose" metaphor. I respect the fact that you've been working in the industry almost twice as long as I have; but if the environment you describe were as universal as you are making it out to be, I would expect to have encountered it by now (and I have interviewed at both Google and Facebook, as per the original question, along with plenty of others).

With regard to your point #1, it's not that the questions themselves must necessarily be related to the specific job one is interviewing for; it is that the process of communicating about problems and collaborating in the investigation of potential solutions represents the core of the profession. I have no idea what you mean by "no known relation" in point #3; there certainly are niches in which someone capable of nothing more than writing code might be able to eke out a living, but for the most part software engineering consists of interpreting user needs and definitions of problems, investigating those problems sufficiently to derive specifics, designing solutions which make appropriately balanced trade-offs, and communicating plans and progress in order to ensure that the proposed solutions satisfy customer needs. Sure, there's also a little typing of code every now and then, but if that's all someone brings to the table their potential contribution is very limited.

I am not aware of any studies which provide any meaningful data about software interviewing practices; we are all guessing. I would take the complaints about the use of whiteboards and toy problems in technical interviews more seriously if the people making such arguments were able to present a superior alternative, but the proposals I've heard sound even more like your "clown-nose" concept than the system we're used to.

The environment I describe is not universal, nor did I claim it was. In fact, I am advocating walking away from the subset of companies that do whiteboard interviews.

About your reply for point #1, I agree that communication and collaboration are important. I have picked up chalk (I'm old) during an interview in order to draw diagrams related to robot paths. But you'll note that I decided that the chalk-talk was an effective way to communicate essentially visual information. A "whiteboard" interview is one where the interviewer decides that I need to use a whiteboard to communicate. Coding is not a whiteboard activity and the medium choice is inappropriate. I rarely choose to communicate algorithms with an interpretive dance for that very reason.

About your reply to point #3, in any large company it is highly unlikely that a single, new-hire person would have such a broad scope of activity. Hiring a person of that scope would cause me to invest in background checks and presentations by the person showing their prior projects, designs, and outcomes. I certainly would learn nothing by asking them to reverse a tree on a whiteboard in 20 minutes.

As to your final point, I did present a superior alternative: have the hiring manager talk to the person about the job opening and the skills expected. Ask the person to provide related background information from their prior experience. Ask them how they might approach a current or previous problem related to the job opening. If THEY want to use the chalkboard to communicate, that's fine.

An interview tells you a lot about a company and how it treats employees. If you see a "sea of desks" in a supermarket-sized open-plan floor you can assume you're "just another ball in the ball-pit". If they hold meetings in a fisherman's net strung from the ceiling (I wish I made that up, but no...) then you can assume they have a playground mentality. If they are proud of their "80 hour weeks" you can assume your pay rate is only half what it claims. If "free food" is given as the best reason to work there, you can assume the work is painful and boring. And, to the original point, if the interview process resembles a frat-house hazing-style "paddle line" of unrelated interviews, you can assume a frat-house "good culture fit" mentality. Every frat house strives for a monoculture (as in, "we only hire the best and the brightest").

But if you're a professional, working every day to improve your skills and breadth of knowledge and experience, then respect yourself. Walk away.

Just to add facts to my argument, here is a posting from today's Hacker News about how to prepare for a Javascript interview: https://github.com/IamManchanda/algorithms-javascript

Walk away.

Yes they absolutely can.

The other comments on this post seem to cast doubt on your assertion that they all can.

Also, solving them in an interview is radically different from solving them for fun.

I know a lot of people at top companies. All of them have an innate ability to do these algorithm problems better than people like me - they can crank out solutions quickly and get offers almost everywhere they try.

Yes because they only hire the absolute best

Top 1% of intelligence

Intelligence is hard to define. There's no scientific, widely-agreed-upon definition.

Even if there were, algorithm tests on whiteboards would be a poor test, because they also involve prior knowledge of the task and a very specific type of intelligence.

In my experience, social intelligence is a better predictor of success in any workplace, anyway.

"Intelligence is hard to define."

and then you type:

"In my experience, social intelligence is a better predictor of success"

Something tells me you don't work at google :D

About 8 years ago, I got an interview at Google. I didn't really even want to work there... I just wanted to see what the place was like. So I made it through the phone screens, went to the interview, and I failed. It was still a good experience!

"I got an interview at Google. I didn't really even want to work there... I just wanted to see what the place was like"

... sure

I wanted to see if I could get in.

I don't like large corps, period.

Same, except about 1 year ago instead.
