I have used multiple of these in my career. I have implemented a few of them. I was never asked about any of them in interviews.
I would not expect people to know these, or even be able to explain them by heart, either. You just don't use them or think about them often enough to internalize them. The same goes for any complex algorithm, for that matter.
This kind of interview only tests how much a person has prepared for an interview, not how well they can apply this knowledge in real life, or think outside the box when required.
A system design interview doesn't ask you for algorithms. It's basically block diagrams. It's often the interview with the highest variance, because a successful one requires the interviewer not only to ask a fair question (e.g. one that isn't niche to their current job), but also to recognize when a solution works even though it isn't their implementation. This doesn't require any knowledge of algorithms beyond what types of algorithms exist. That's pretty much it. It's also the hardest to prepare for, because you could be asked pretty much anything, and there aren't many medium-level design docs out there for complex internet-scale systems.
Now, an algorithm design or coding interview will ask you to develop an algorithm with some level of implementation, but if not having memorized an algorithm is going to sink you, then you have a crappy interviewer and shouldn't take the job anyway. A good interviewer doesn't want you to simply come in and spout memorized factoids; they want you to develop the solution. A good interviewer will recognize that you memorized the answer, and ask a different question until you reach something you don't know. Only then will the interview be useful.
Everyone should get to the solution by the end. It's just a question of how many hints you need, and how explicit they have to be.
The list of algorithms is a complete grab bag of random stuff. There's nothing tying them together.
> A good interviewer will recognize that you memorized the answer, and ask a different question until you don't know it. Only then, will the interview be useful.
100%. You want to get to the boundary of the candidate's knowledge as quickly as possible.
It's worth noting that the source of this is someone who is hawking a pricey "cracking the system design interview" type book. I feel like these types of posts are intended to create anxiety in the reader - "oh my god, I had no idea what a quad tree even was?!" I think much of the cottage industry that has sprung up around "cracking interviews" is predicated on maintaining a constant state of anxiety as well. If you look at something like Leetcode, it's the same thing: there are 2,000+ questions there. So many that you can never feel confident. The job of preparing is never done. As long as people have anxiety about everything they don't know, and can't build any sense of confidence, they will be inclined to fork over their money in an attempt to assuage the anxiety created by such material.
Having done several system design interviews, I've found most of them are badly executed. When I asked for detailed requirements, they couldn't disclose them. Yet in the end they had those hidden requirements, and I failed because my system didn't meet them.
It's a bad way to conduct a system design interview: having a design in mind and hoping the candidate will reach it with limited knowledge and guidance.
I interviewed for a company once where I got a real dataset from their business and just a vague assignment to analyze the data and find something that could be optimized better. So I came up with a suggestion on how to improve factor A, but they told me later that what they were looking for was for the candidates to come up with suggestions on how to improve factor B.
Why factor B was deemed more important than factor A wasn’t something that possibly could have been understood from the dataset alone.
My solution is to produce a design in my head, and figure out its limitations. Then I ask for each limitation if this limitation will be a problem. If it is, then I figure out how to iterate the design to eliminate that limitation. Once I don't know of any unverified limits, then I start explaining my design, and explain how I know it will perform to spec.
This is also a good way to approach system design in the real world. Don't simply go with the first solution that looks like it might work. Also don't add nice-looking features on general principle. Instead, sketch out the simplest thing that works in your head, figure out the limitations, check whether each is OK, and then improve the design if it doesn't work yet.
That only works if the interviewer cooperates with the candidate. In my experience, some interviewers just say "nuh-uh" or "let's continue", or keep repeating the original ambiguous requirement without stating additional requirements or giving any feedback.
Some interviewers like to move the goalposts. If I encounter those, I'll go ahead and give the simplest solution and state a general idea of what its limits are. Then when the goalposts are moved on me, I can iterate on the design for the new rules.
If an interviewer cannot tell my design skills from that, then I figure that the company failed, not me.
Lately I've been going the other way as an interviewer. I try to get the candidate to deeply explain their own system designs: starting point, requirements, conflicting priorities, compromises, etc. Alternatively, we can pivot slightly to solving the same class of problem with my system.
I can almost always find a relevant overlap between my background and theirs, which helps. And (I think) it's beneficial for them to talk about their experience and existing decision making as opposed to hypotheticals. As a bonus, you can also gauge their ability to communicate complex or nuanced ideas in a succinct manner.
This has been my experience as well. These interviews should be open-ended with no right answer as long as you can reasonably support your design and discuss its tradeoffs. Unfortunately, many interviewers have a set solution / structure / technology already in mind.
It takes a skilled interviewer to deal with and evaluate that level of nuance on the fly. It's much easier to scale an interview process with checklists.
This was my first time seeing geohashing. Can't it result in some lopsided squares? As described, it uses a standard rectangular projection, but those are inaccurate due to the Earth being spherical, so the areas at the top and bottom will be much larger.
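For anyone else seeing it for the first time: a geohash is just alternating longitude/latitude bisection with the bits interleaved and base32-encoded, which is why the cells are rectangles in lat/lon space rather than equal-area patches on the sphere. A minimal sketch (illustrative, not a reference implementation):

```python
# Geohash base32 alphabet (digits plus lowercase letters, minus a, i, l, o).
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lon, precision=6):
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, code = [], []
    even = True  # geohash starts with a longitude bit
    while len(code) < precision:
        rng, val = (lon_range, lon) if even else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            bits.append(1)
            rng[0] = mid  # keep the upper half
        else:
            bits.append(0)
            rng[1] = mid  # keep the lower half
        even = not even
        if len(bits) == 5:  # every 5 bits become one base32 character
            code.append(BASE32[int("".join(map(str, bits)), 2)])
            bits = []
    return "".join(code)

# Well-known test vector (57.64911, 10.40744 encodes to "u4pruydqqvj").
print(geohash(57.64911, 10.40744))  # → "u4pruy"
```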
To be fair I've heard of most of these in my career but they are fairly specialized. If you are conducting a system design interview and expecting the candidate to know one of these I'd suspect there is a bit of bias there (unless that is their area of study).
Oddly, I only found out about a few of these rate-limiting algorithms yesterday, when I was looking at rate-limiting Elixir processes for a small load-testing tool. Merkle trees, bloom filters, and Raft/Paxos are prevalent in crypto, so if you read about those topics you'd know them. HyperLogLog is an HN favorite.
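Of the rate-limiting family, the token bucket is probably the one worth internalizing. A minimal sketch (in Python rather than Elixir, and with a pluggable clock so the behavior is easy to see; all names here are mine):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: tokens refill at a fixed
    rate up to a cap (the burst size); each request spends one token."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, clamped to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Drive it with a fake clock to make the refill behavior visible.
t = [0.0]
bucket = TokenBucket(rate=5, capacity=10, clock=lambda: t[0])
print(sum(bucket.allow() for _ in range(12)))  # 10: burst allowed, then throttled
t[0] += 1.0                                    # one second passes
print(sum(bucket.allow() for _ in range(12)))  # 5: one second of refill
```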
Failed an interview for a test infra engineer role earlier this year because I couldn't implement a trie from memory -.-'
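For what it's worth, the dict-of-dicts version is small enough that there's little to memorize; a sketch:

```python
class Trie:
    # Minimal trie sketch: each node is a dict of child nodes,
    # with a sentinel key "$" marking the end of a complete word.
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # sentinel: a word ends here

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

t = Trie()
t.insert("car")
t.insert("card")
print(t.contains("car"), t.contains("ca"))  # True False ("ca" is only a prefix)
```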
I was expecting to see idempotent consumer, circuit breaker, and a few others that I’ve used frequently when designing systems. Maybe not interesting enough to make the list?
> but why a picture of a table instead of a table?
I guess they drew the whole thing in an image editing tool and exported it as one image. A table would've required much more effort: multiple images, styling, placement, and many more web requests to load them.
Definitely not. If they did then articles would no longer be reliable. People wouldn’t write custom HTML that renders correctly in all browsers for all devices.
I feel that if more interviews involved this sort of algorithm instead of the ultra-niche / only situationally useful, there would be way less opposition and much more signal.
At work, I have written and then used a bunch of these in production, just for the narrow scope of things I work on. Might be indicative of me being in a bubble though.
Please, ask me to make a bloom filter or show how consistent hashing enables sharding! I'll pass on yet another "Implement String.reverse()"
It's less that it's "hard" and more that it's generally not a sensible operation unless performed on a known, well-restricted domain (or unless you're up for flipping an image of rendered text, I guess).
Combining characters are the most obvious problem.
Consider the string "naïve" written two ways: in the first, "ï" is a single code point; in the second, it's an "i" followed by a combining diaeresis. Both are visually identical. Reverse them naively, and in the first the diaeresis correctly stays attached to the i, while in the second it incorrectly moves to the v. To do it right, you have to scan through the string and keep each base character together with its combining characters, in order.
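That scan is a few lines in Python with the stdlib `unicodedata` module; a sketch (this handles combining marks only, not full UAX #29 grapheme segmentation, so e.g. emoji ZWJ sequences would still break):

```python
import unicodedata

def reverse_graphemes(s):
    # Group each base character with its trailing combining marks,
    # then reverse the groups instead of the raw code points.
    clusters = []
    for ch in s:
        if clusters and unicodedata.combining(ch):
            clusters[-1] += ch  # attach mark to its base character
        else:
            clusters.append(ch)
    return "".join(reversed(clusters))

decomposed = "nai\u0308ve"  # "naïve" as i + combining diaeresis
assert reverse_graphemes(decomposed) == "evi\u0308an"  # mark stays on the i
assert "".join(reversed(decomposed)) == "ev\u0308ian"  # naive: mark jumps to the v
```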
This has not been my experience, and I give "systems design" interviews.
Generally the entire interview is, "Let's design an X", where X is some kind of system that has a substantial software component. The goal is to see how a candidate handles an open ended design problem of undefined size when we only have 45-60 minutes to discuss it.
For example, do they expect to be handed requirements? Do they ask what the requirements and use cases are? How much detail do they go into? Can they strike the right balance of depth and breadth of the requirements? (Or better yet, can they ask the right questions for guidance on what level of depth I'm looking for?) Once they've settled on a design, can they at least write pseudo code to demonstrate how the software would work? Can they come up with reasonable test cases? Do they even think about how they're going to test the system at all? Did they design for testability or was that an afterthought?
Some of the most effective systems for X include things that feel trivial or simple on the surface, but have some layer of hidden complexity that's only apparent once you really start thinking about how it works, where different people may have different expectations or assumptions about the desired system behavior. For example, I frequently use a garage door opener. It seems simple enough until you start enumerating the different states the system can be in, what should happen in each state when the button is pressed, what happens if multiple buttons are pressed at the same time, how to integrate the safety laser sensor which is supposed to stop the door if something is under it, etc.
More senior engineers tend to ask lots of clarifying questions and probe the bounds of the design and expectations of the customer before they start. They usually end up in good shape, because they discovered the potential blind alleys early in the discussion, their questions helped them understand the right scope for the system design, etc. It usually works out well with some very minor refactoring along the way.
More junior engineers tend to assume a LOT of the requirements and use cases if the system's operation sounds simple, and jump straight into coding without establishing these things explicitly. This is almost always a mistake, because they discover important design considerations too late and end up having to do major refactors for things that could have been identified at the very beginning, if only they'd taken more initiative and asked more questions.
For example, I recently gave this interview to a junior with 3 years of experience. He didn't ask many questions up front, and only after 45 minutes did we come to realize that he assumed the garage door remote would have 2 buttons, one to explicitly make the door go up and one to make the door go down. I (the customer) had assumed there would be one button that would toggle directions based on the door state, but this never came up or was asked about before he launched into a complex design involving 7 different classes and several circular dependencies.
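For what it's worth, the one-button behavior fits in a tiny state table. A sketch under my own assumption of the semantics (pressing mid-travel reverses the door; real openers often stop first, which is exactly the kind of ambiguity a candidate should surface):

```python
from enum import Enum, auto

class Door(Enum):
    CLOSED = auto()
    OPENING = auto()
    OPEN = auto()
    CLOSING = auto()

# Single-button toggle: the next state depends only on the current one.
TOGGLE = {
    Door.CLOSED: Door.OPENING,
    Door.OPENING: Door.CLOSING,  # assumption: reverse while moving
    Door.OPEN: Door.CLOSING,
    Door.CLOSING: Door.OPENING,
}

def press(state):
    return TOGGLE[state]

state = Door.CLOSED
state = press(state)
print(state)  # Door.OPENING
```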
Well, I don't know how you conduct the interview, but in my experience on the interviewee side, the interviewer usually won't disclose information unless it's explicitly asked for, which is bad. In your garage door example, it's better to disclose the one-button detail when asked "are there any more requirements?" or about any constraints. If you only answer when asked "is it using one button or two?", that's bad practice.
This practice puts those with only decent knowledge of the problem domain at a disadvantage, because the interviewer doesn't disclose the requirement even when asked (unless asked very specifically). In the junior's case, it's possible he knew of garage doors with two buttons and assumed the same, since you didn't disclose any information about it (assuming he asked). And thinking about it, one-button vs. two-button operation shouldn't be the top priority to ask about anyway.
I'll happily disclose anything they ask about, but the goal I'm trying to get to is that you arrive at requirements through conversation. Starting with a blank slate, saying "give me all the requirements", and expecting that list to be complete, accurate, and never-changing is, at best, a complete fantasy when dealing with actual customers. If you did that to a real customer, half the time they would stare blankly back at you with no idea what to say.
Now, if that's the level they engage at, that might be OK for a junior role. I expect juniors to need projects spelled out in excruciating detail, and not to deviate from exactly what they were told to do. That's what, by definition, makes them junior.
A senior engineer on the other hand, can generally be given vaguely defined tasks and has the initiative to figure out what actually needs to be built in the first place, before launching into building something.
I don't have a specific performance threshold for passing or failing the interview. It's about gauging where a candidate's skills are on a spectrum, then seeing if that skill level lines up with their experience and the job level they are interviewing for.
If you have senior experience and are interviewing for a senior role, my expectations are higher. If you perform at a junior level, then either we'd offer you a junior role instead or not make an offer. If you have junior experience and do well, then it might be time to make the leap to a senior role, or we'd make you a very attractive junior offer because we see that you're likely to advance quickly.
It can be used for a software architect role, but in general I'm just trying to probe systems design ability regardless of the job level. Expectations differ by role, obviously. If a software architect didn't do well on this question, for example, it would very likely be a "no hire", but a junior engineer's performance just needs to be appropriate for their level.
I want to see how many people can recite Paxos/Raft or operational transformation in a system design interview... If you meet such people, start throwing gold at them, because you've found unicorns!
When I brought up HyperLogLog or count-min sketch in system design interviews, I usually ended up explaining the data structure from scratch because the interviewer had no clue what it was.
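The count-min sketch at least is small enough to explain in a few lines. A minimal version, using salted SHA-256 as the per-row hash functions (my own choice for the sketch):

```python
import hashlib

class CountMinSketch:
    # Minimal count-min sketch: d rows of w counters. Each item
    # increments one counter per row; the estimate is the minimum
    # across rows, so it can over-count but never under-count.
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, row, item):
        digest = hashlib.sha256(f"{row}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.width

    def add(self, item, count=1):
        for r in range(self.depth):
            self.rows[r][self._index(r, item)] += count

    def estimate(self, item):
        return min(self.rows[r][self._index(r, item)] for r in range(self.depth))

cms = CountMinSketch()
for _ in range(3):
    cms.add("x")
print(cms.estimate("x"))  # 3 (exact here; with more keys it may overestimate)
```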
system design interviews are weird. the best systems are designed by doing as much research as possible on relevant systems, techniques and tools; then using experience/skill to separate the good from the bad to come up with a solid plan, not some stupid real-time whiteboard performance.
that said, this is a collection of nice solutions to interesting technical problems that have come up in practice over the last 15 years. :)
It’s effective at ruling out people who haven’t even heard of any of them. Though I can’t imagine really expecting people to know more than three or four of these, certainly not in detail.