We've moved away from system design interview questions on my team. Instead, we ask people to diagram something technical they understand well, and the team digs in with questions to gauge the depth and breadth of the candidate's understanding. For us it works much better. It shows how well candidates follow instructions, and it lets you explore their knowledge of something technical they claim to understand. Our philosophy is that how well you understand something you claim to know well is indicative of the depth and breadth of your technical knowledge in general. There are plenty of opportunities in this to ask about design and learn how people think about it. I think it helps eliminate both false negatives and false positives.
I like this... in theory. But I don't think I'm allowed to give you a system diagram and detailed explanation of systems I've worked on, because obviously they're proprietary. That seems like a problem.
How do your candidates normally work around this? Does everyone just talk about hobby or open-source systems they've worked on?
Show me a system you’ve worked on that wasn’t put together from knowledge from previous education and jobs (yours and others’), documentation from vendors, Stack Overflow, code generators, etc.
I would hate to be a stakeholder in a company whose value depends on employees not talking about and reusing the incremental skills and knowledge gained while working there.
I'm pretty sure I learned a full set of new skills at my current job.
I don't think knowledge of any particular API is especially useful compared to the more generic "can you read docs and use APIs" skill, which transfers directly from something like hardware work making an FTDI USB chip go brrr to making DynamoDB work nicely. But I definitely didn't know anything about DynamoDB before doing a server/web job.
Async distributed systems and FPGA VHDL have some overlap in how you build things, but an interview based on my old FPGA skills likely wouldn't show much of my server and database systems design skills.
Yup. I’m outside DC. Somewhere near half my software engineer (and adjacent-field) friends are under contract to either the military or the NSA/CIA, most of them with TS/poly clearances. Ask one of them what they’re working on and the most you’ll get is something like “big data on platform X”.
If platform X is something public, they could talk about the platform. But, talking about any project specifics, implementation details, etc would be a felony.
98% of what you'd be inclined to diagram out and explain for a system design interview isn't remotely proprietary. People are generally looking for a high-level overview of a system you've worked on to see a) that you've really worked on it and b) you can explain it such that it makes sense.
Sure, folks will ask you to drill down here and there for more detailed info, but if someone pokes on something where the details really _ARE_ proprietary, it's fine to just say that and offer to drill down somewhere else.
> 98% of what you'd be inclined to diagram out and explain for a system design interview isn't remotely proprietary
Maybe we're just on different wavelengths, but literally every single thing I work on that would be a good candidate for this sort of interview is extremely proprietary...and I'm just working on regular ol' backend services at [generic big tech].
That would fall into the 2%. At least in my experience, the 98% number is in the ballpark. If you are working on backend services in big tech, what you are building is quite different from most software systems.
This comment is a prime example of how the non-technical founder understands the world.
Proprietary systems are company IP. Talking about the design of such a system is restricted by the NDA, and that is exactly what the exercise described above asks the candidate to do.
You hire for the skills to develop your own proprietary system. The employer doesn't own the skill. The employer owns anything those skills produced for the employer.
You wouldn't want your employees giving the recipe to your secret sauce away to your competitor, right? That's what everyone is talking about.
[Client]->[Cloudfront]->[Load balancer]->[ECS service in a VPC]->[RDS/other internal systems]
* Replace the above with any other cloud provider, or just commercial/open source tools that do the same job running in a VPS or whatever *
I can't see how any of this could be claimed to be protected by an NDA or similar, as this is just a highly standard architecture.
If you work on a proprietary system that has a specific architecture just don't mention that of course. I think the above allows us to discuss things pretty well, talk about introducing caching, why each layer is there, TLS termination, authentication/authorization concerns and implementation, secrets management...
> you hire for the skills to develop your own proprietary system. The employer doesn't own the skill.
Oh OK! Then I guess a good engineer should be able to make it through a standard systems design interview process, and shouldn't complain about it!
I am glad you agree that it is totally fine to ask engineers these types of questions, and there is no problems with NDAs that will get in the way of them demonstrating basic skills!
That was my point entirely.
Yes, I totally agree that an engineer should be able to go through a tech interview.
And the engineers who are complaining about the fact that they have an NDA are wrong, and they should stop complaining.
If the NDA is so significant that it prevents you from doing an interview, then you are worthless as an employee anyway. But really that should never happen.
If your employees go to a competitor and make a secret sauce that tastes the same, how do you judge what is the source of knowledge?
They can have used their skills to come up with a recipe that tastes the same because the requirements led to that outcome, or they simply copied your recipe.
If somebody from OpenAI goes to a competitor and creates something like ChatGPT, where is the line crossed that shows that company IP is transferred? The employee knows how to create the system because he knows how ChatGPT works. If he recreates a similar system but doesn't say that it is a copy of ChatGPT, is that applied skill or is that a violation of the NDA?
If it is a violation, how could he forget the structure of ChatGPT to genuinely come up with a new structure?
Legislatures write rules, people enter contracts, sue for perceived breaches and courts rule on these disputes. People then adjust behaviors to get desired results without incurring legal liabilities. Results vary across jurisdiction. Such questions can't be answered in principle. California famously won't enforce non-compete clauses, and that apparently helps innovation. China caught up industrially with the West within one generation by flouting every IP rule we have here. On the other side of the debate, patent trolls...
The flaw in this argument is assuming that you could do a good job of diagramming a system without describing the use cases of the system. My web dev skills are not proprietary, but the processes and systems I have worked on are.
The engineering knowledge isn’t under NDA but there’s no way I could fill an hour long interview about the systems I work on without getting into enough specifics about our business processes to violate NDA.
If I were asked a question like this I’d wonder if I were being tested to see how loose-lipped I am with sensitive information.
I get it... I tend to think in point solutions as well. But getting stuck there is the problem; moving past that, to abstract point solutions into a systems design that covers the architecture of the solution and the 'why' behind the design choices, is what gets you out of NDA-land. None of the stuff one level up is proprietary. Only the specifics of how it was applied in that problem domain are.
It’s the difference between being asked “how would you design a Twitter clone?” and “how did you design [current employer’s system]?”
The first is generic (unless you happen to be a current Twitter employee). No NDA impact. The second is asking you to reveal details of a proprietary system covered by NDA (or if you’re a Fed contractor, covered by various classification laws).
I'm not a lawyer, but wouldn't anything you or anyone else developed on company time at another company be owned by that company and thus proprietary?
In practice I can't imagine the spirit of any law would be violated by doing an architecture diagram, but it seems likely (to me, at least) that the letter of the law would be.
Sure. But the level of description that you're giving in an interview can be pretty generic/non-proprietary. If that wasn't the case, then you really can't reuse any knowledge from job to job and that's just not so.
Sure, every company is going to have their 'special sauce' components that ARE proprietary, but nobody's going to expect you to unpack those.
I doubt it's common, but my friend had an experience where they kept trying to direct him towards specific technical topics of particular interest to the business.
He worked for their competitor, but not in the capacity the interviewer was trying to delve into, so in addition to being scummy, it was pointless and annoying.
Most people barely consider this. In the old days, before leet code interviews, I had people give me code samples. One guy gave me a code sample from some telecom carrier's billing system. It wasn't good. It was also bad that he gave us a proprietary code sample, so he got no points for that either.
> I'm not a lawyer, but wouldn't anything you or anyone else developed on company time at another company be owned by that company and thus proprietary?
In theory yes, in practice ... if that's true, why would anyone ever hire you for your experience? You wouldn't be allowed to use it.
(coming from the standpoint of a historically non-technical individual contributor who is leaning towards technical now, and asking because I want to learn):
But isn't that exactly why you as an interviewer would pose a problem, set up the context that is potentially similar to the work your project would require, and see how the interviewee navigates that?
Yes, it is. I would want to know how a person solves my problems.
And I sure would hope they bring all their experience to bear! Just because they learned about CDNs at their previous job and “everything is proprietary”, I don’t want them to suddenly forget how CDNs work.
Very little is actually unique between software businesses. We’re mostly just doing data bureaucracy.
A system diagram of a CRUD SaaS isn't going to be a critical business secret the way it might be for a company doing stream processing of marketing/advertising data off of Twitter, or video/radar analytics, or real-time fintech trading. Many companies are a Python/Django monolith on Postgres with Redis and/or memcached. That system diagram isn't going to let you disrupt the online medical services industry, but it's what a lot of them use.
We call this out in our pre-interview handout. If you've got an NDA, choose something that isn't included in your NDA or figure out a way to describe it at a high level. We caution candidates that if we dig in and we run into something blocked by the NDA it creates challenges with our evaluation.
It doesn't necessarily have to be something that you're working on. It just has to be something that you understand well enough to describe the design of and how things are inter-connected.
It's a reasonable hurdle; an employee who previously signed an NDA (or anyone, for that matter) needs to be able to design new systems without regurgitating verbatim parts of an old one under NDA.
So it's really not a big ask to navigate their own NDA and come up with something interesting to talk about at an interview.
I’m going to be way more strict about confidentiality with an interviewer I don’t know at all than with a trusted colleague. For all I know you get beers every week with the CTO of my current employer’s largest competitor.
Besides which there’s a way more interesting variation on the question, which is “how would you design our product?” Got that one in my last big tech interview and quite enjoyed it.
It’s not a handicap. We’re not asking people to diagram their current systems. We don’t advocate for people to do that. You could just walk us through an HTTPS request and it would work, if you understood HTTPS requests well enough.
There’s no penalty. There are plenty of technologies that you’re working with that aren’t under NDA. We’re not asking you to diagram what you’re working on; we’re asking you to diagram something technical that you understand well.
There have to be parts of what you work on that are not covered by NDA. Solving problems like this would be indicative of someone we would want on our team.
When I worked at Apple they made it abundantly clear that EVERYTHING was under NDA. Even things like “what version of the standard library do you use”. Or “what are computers for”. And that any minuscule transgression of the NDA would result in me being immediately deported, divorced, and likely put in front of a firing squad.
They can’t tell you that you can’t explain how Kubernetes works, or NGINX, or the JVM. Any one of those would be a perfectly acceptable starting point.
The problem you're asking them to solve is a legal one, but you are not interviewing lawyers. I have never worked for a single employer that didn't require an NDA, and the NDAs are always written in a way that can be easily interpreted as "do not tell anyone anything about what you do here".
The question is not “tell me what you do in your current job.” The question is “describe a technology or system that you understand well.” Do you understand how the underlying systems work? You don’t have to reveal that your current company uses Kubernetes in order to describe how Kubernetes works; your current job can’t keep you from doing that. Kubernetes is just an example here; you could pick any technology that you’re comfortable with.
You're asking them to explain a technology they use while expecting them to tiptoe around what they're actually allowed to say about how they applied the technology. And you say this replaced your system design interviews. It sounds to me like someone explaining what Kubernetes is won't do nearly as well as someone who can explain in-depth how they applied Kubernetes to their job.
When I interviewed with Apple they said pretty much everything they do is proprietary and considered secret. I imagine for some people who have worked there for years it would be hard to avoid breaking the NDA.
There’s no place where we ask you to diagram the system that you’re working on. We ask you to diagram a system you understand well. Are you using message queues, container orchestration, network components, cloud infrastructure, Java/Rust/Go/Python/etc.? Tell me about one of those components and how they connect together in theory, rather than the specifics of your system. Everyone who works in technology works with, mostly, complicated interconnected systems. The point of this exercise isn’t to design a system, or to tell me about a system you’ve designed, so much as it is to help the team understand the breadth and depth of your technology knowledge. Good engineers typically find creative ways to help us understand their knowledge without disclosing NDA material.
I feel like I'm either losing my mind or people are being deliberately obtuse now.
He's not saying they ask you to give a full overview of the way Apple architects their systems, he's saying they ask you to explain how a basic Kubernetes deployment might look, or how you might go about setting up some kind of backup system, or literally any of a million different technical things that you might have knowledge of and interest in.
How could things like this possibly be under NDA? If you weren't able to use any of this knowledge, you would be completely crippled in your work.
Ok, yes, because that’s college-level material. The subject here is Staff Engineer (L6) interviewing, so nearly all the demonstration of value will need to come from a high-level professional context.
The only really important details to keep secret are business-specific algorithms, so talking about the overall architecture of a system is usually okay, as long as you leave those out. If you're building very special architectures where the architecture itself is the business secret, then I suppose you need to figure out how to talk about something else similar. I just talk about the systems I worked on in generalities: we get some telemetry from some devices, process it, store it, and so on, covering only the mechanical details and not the meaty algorithmic parts (we use a JSON API to submit the telemetry, then we store it in a table, then we take that data in another component and run it through our "algorithm", etc.).
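The generic pipeline described above can be sketched in a few lines without touching the proprietary part at all. This is a toy illustration, assuming nothing about any real system: the "algorithm" step is deliberately reduced to a trivial per-device average, exactly the kind of placeholder you'd substitute in an interview.

```python
import json

telemetry_table = []  # stand-in for the storage layer (a table)

def submit_telemetry(payload_json):
    """The JSON ingestion endpoint: parse, sanity-check, and store a reading."""
    record = json.loads(payload_json)
    assert "device_id" in record and "value" in record
    telemetry_table.append(record)

def run_algorithm():
    """Placeholder for the proprietary processing step; here, a per-device average."""
    totals = {}
    for rec in telemetry_table:
        totals.setdefault(rec["device_id"], []).append(rec["value"])
    return {dev: sum(vals) / len(vals) for dev, vals in totals.items()}

submit_telemetry('{"device_id": "a", "value": 10}')
submit_telemetry('{"device_id": "a", "value": 20}')
print(run_algorithm())  # {'a': 15.0}
```

The interview conversation then happens around the mechanics (ingestion format, storage schema, where processing runs), while the box labeled `run_algorithm` stays closed.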
We say specifically “diagram a technical system you understand well.” It’s going to be challenging to work in technology if the only technical system you understand well is the one you’re currently working on and only the parts that are under NDA.
I have the opposite question from other people in the thread - this seems pretty open-ended.
Just as an example, I do bioinformatics work, and "technical system" for someone I'm interviewing (as I understand your question) might encompass any of:
- a workflow manager: how it works under the hood, common patterns for modularity/reuse or configuration
- a specific pipeline: the stages of cleaning and quantifying raw data, the rationale for various stages and the tools chosen to execute them (including how to benchmark various methods against each other)
- a specific third-party tool: theoretical details of the algorithm, practical considerations for when to apply one over another
- familiarity with common general-purpose packages or APIs (e.g. pandas, numpy, scikit-learn)
- QC: common metrics for QC'ing various experimental data, how to decide if data is "good enough", how to troubleshoot biological vs technical (lab) vs technical (computational) sources of error
- biology: technical details of an experiment, the technologies generating the data, or the underlying biology
- data analysis: how to choose and make relevant figures, any of many data-science/ML topics (e.g. clustering), connecting data to relevant domain questions
- devops: data storage/management on HPC or cloud, HPC job schedulers, any of many cloud topics (e.g. IAC, setting up a database or cloud execution of a pipeline)
I'm more curious about how you guide a candidate towards selecting a system that gives you the most relevant perspective on their thinking and skillset - do you provide any suggestions, or are there typically pretty evident choices based on the role?
For our more junior roles, we leave it more open-ended because candidates may not have a knowledge base similar to the panel's. For more senior roles, we advise that if they pick technologies pertinent to the role and likely to be understood by senior/lead/staff engineers, the process will go better for both candidates and panel members.
If I were in your position, one of understanding many complex systems, I would advise picking an area where you feel strong and the panel is likely to have some entry point.
So draw the block diagram of something you worked on in school. Or, I don't know, a radio or something, if you're a hardware person.
Someone who answers, "I can't draw anything for you because everything I know about is somebody's private IP," isn't going to get the job. At least, not if I'm doing the interviewing.
You must know something about something else. Right?
Or just interview some place that doesn't give this kind of interview, like most companies. It's pretty trivial to get hired even if you can't talk about technical specifics along the lines of an architecture diagram.
This is why this question is so informative. There are a ton of people in this thread arguing that they can’t describe any technologies they understand because of NDAs.
"Great, that'll do nicely. Vacuum cleaners are complicated and interesting machines. What can you tell me about how a vacuum cleaner is built and how it works? Also, vacuum cleaners are pretty old tech. Is there room for further innovation in that area? If so, what angles would you explore if you worked at a vacuum cleaner company?"
What is the vector you’re concerned about here? You sketch out a diagram so well that the company you’re interviewing with takes a photo and secretly integrates it into their stack? Then your past employer catches wind of this, realizes you were the only person in the world to understand this system, so they go after you for damages? Sorry man, I don’t see it.
I understand that you see your past work as “proprietary”, but in the real world the vast, vast majority of us do not work on systems shrouded in such secrecy. There’s nothing interesting or proprietary about CRUD infrastructure and I genuinely can’t think of a situation where you’d be exposing yourself to any real risk by explaining it to a new employer.
In my first semester as a freshman at an American university, I took a quiz in my Calculus class that had a question based on American football. Despite never having played the sport, or even knowing the rules all that well, I aced the exam.
Yes, it is not that hard to keep the proprietary business logic separate from the system design.
See, the reason OP's approach works is that it can even be applied to something you haven't worked on professionally, but built purely as a passion project on the side. If you're saying "oh, what if I don't have side projects": you do realize people spend/waste 3+ months grinding these interviews, plus tons of mock interviews with companies they have no intention of joining. What does it say if you'd rather spend valuable personal time that way instead of building something fun?
In fact, I'd even say that if you haven't built a side project, I'd still use OP's technique and ask the candidate to talk about and whiteboard a system they would like to build, then dive deeper from there. Heck, I'd even tell them this before the interview!
I put in 60 hour weeks at my current job, except for a month before I change jobs, where I put in an additional 2 hours/night to prep for interviews, because that’s what will benefit me for every other company.
I care about my career and my family, so I don’t have the luxury of spending it on passion projects outside of work. I’d rather spend the precious little time I have left on my real hobbies and family/friends.
It doesn’t communicate that I’m a bad engineer. On the contrary it communicates that I give my all at work.
But I never said you had to spend all your life on passion projects. How is leetcode fair game, but asking how you would build something you'd like to build (even if you never have) somehow an indication that I think you're a bad engineer? I see today's system design interviews as pretty much leetcode anyway (and I'm asking this as a FAANG interviewer).
Having said that what you said about spending 60 hours at work tells others you will be a much better employee. Which is probably the signal they are looking for?
I am not going to make judgments about people's loyalty. If someone chooses to work 60-hour weeks, more power to them. If people don't want to do side projects, that is perfectly fine too. Seriously, my motivation wasn't to suggest I'd penalize those without a side project. It came from my observation (across several L5-L7 candidates) that:
1. Despite the "let us work together" intent, it is still an exam where you are penalized for taking too long, making a "wrong" assumption, or missing a few bits.
2. Regardless of the above, when confronted with a random question it is easy to get very nervous unless you have built a similar system from 1 to 1B users, or have done tons of practice interviews with companies.
So, apart from interviews being hard purely because of supply and demand, I wanted to see if you could learn just as much from a more open-book approach (with either a side project, or the candidate knowing the question ahead of time, say the night before). It could even be more inclusive that way!
One place I interviewed with last year did something similar in that it was more of a BYOSD (bring your own system design). So it meant I got a chance to think through complex systems I've worked on, mock up a diagram beforehand, and then present it to the interviewer. They then drilled down on a lot of components, why decisions may have been made, etc. Sort of like what you're saying.
Out of all the similar interviews I did last year for that type of round, I enjoyed that style the most. Rather than your typical "build me an {ecommerce site, social network, video streaming site, url shortener}".
Square/Block (used to?) ask only 1 system design question and the recruiter would let you know ahead of time so you could prepare for it. Probably because it allows for deeper questioning and answering of relevant technical skills.
It’s hard to tell between those who know their stuff, and those who are good at memorising all the various courses that tell you what to say. By telling the candidate what system to design you make the latter’s job easier.
If someone can quickly learn enough about a system that they can convince a domain expert they understand it well in a deep technical discussion, they're probably a good candidate for hiring, since your business likely consists of many complex systems, and the candidate will theoretically be able to onboard faster.
I’ve seen first-hand people hired who aced the system design interview but then failed to get much done. Building systems in the real world is much more than a few high-level components and algorithms: it’s all the edge cases, tricky stakeholders, ambiguous and changing requirements, and sequencing and coordinating small pieces of work.
Attaining high-level understanding and speaking convincingly about it are entirely different skills from knowing how to build it. Interviews are always a proxy anyway; your job will never solely consist of convincing people of your expertise. Sooner or later you have to exercise it somehow. Unless you're the CEO, that is. The higher you go, the easier it is to make a career out of being in the right places at the right times.
If you can't distinguish between those two types of people by asking additional questions about their design, are you capable of accurately evaluating them in the first place?
I work for a big tech co, and our interview training explicitly says to not ask candidates about systems they have built in the past, but to ask them to build brand new systems from scratch.
This seems completely backwards to me. It's like saying "hey, you know that relevant on-the-job experience you have? We don't want to hear anything about it; here's a made-up scenario instead".
Ok ok, that is a bit cynical. Asking candidates to design novel systems can give you some good insights, and it helps with people who don't have experience building systems yet. Still, it seems to me that asking candidates to describe real-world systems they have actually built is much more useful for checking their skills than building imaginary ones.
One argument against this is that candidates can "cheat" by preparing in depth a made up example. But how is that much different than the current approach?
I imagine a concern is that many candidates who are under NDA with current and past gigs would not be willing to share detailed system design info. Do you want to penalize candidates who are careful about leaking confidential IP? Given that, the fair evaluation is to start from a blank slate and see what they would build given the requirements and constraints shared in the interview.
> asking candidates to describe real world systems that they have actually built is much more useful at checking their skills than building imaginary systems.
Except those systems are owned by other companies and I have signed agreements not to talk about them.
Few people at software companies are designing systems from scratch nowadays. If they are, they'll be gluing together third-party middleware applications like database systems, queues, proxies, load balancers, etc.
A general understanding of computer architecture, programming language theory, networking and common protocols and formats is more valuable. At that fundamental level, they won't be using recent fancy buzzwords to sound cool, but could be using their brain to come up with something original.
In 2023, knowing which AWS database to choose, even if it's hosted for you, is probably fine at the system design level. Knowing when to use a queue or a load balancer is fine even if you didn't engineer the queue yourself. The company you're going to probably isn't implementing their own either.
I agree that you need basic networking and protocol understanding etc still. And that being able to talk through problems conceptually is important.
Nobody is going to design one of these systems from scratch during an interview, and few candidates have designed them themselves. But for something like YouTube, one could say: "OK, to start simply, I would have a frontend form to submit videos, which sends them to a video conversion queue behind an API that calls back when conversion completes, or can be polled for progress." What YouTube really does is obviously much more complicated, but the follow-up questions can be things like: "OK, your form works, your video site becomes enormously successful, and your transcoder is overloaded. What would you do?" "Well, I could parallelize the consumers of the queue to some large number, and scale based on load..."
So the answer isn't going to be "I would architect my own massively parallel converter database and binaries written from scratch to process all the various formats," even though that is probably closer to what is done in reality. But senior and lead engineers should indeed understand tradeoffs, parallelization, queues, and data storage concerns. These are given as questions because people generally know, from a use-case view, how such systems work.
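A toy sketch of the scaling answer above: transcode jobs go onto a queue, and you "parallelize the consumers" by turning a worker-count knob. Everything here is invented for illustration (`transcode` is a stand-in for the real, much heavier conversion step, and `NUM_WORKERS` is the thing you would autoscale on load in practice).

```python
import queue
import threading

jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def transcode(video_id):
    # Stand-in for the expensive video conversion step.
    return f"{video_id}.mp4"

def worker():
    while True:
        video_id = jobs.get()
        if video_id is None:      # sentinel: shut this worker down
            jobs.task_done()
            return
        out = transcode(video_id)
        with results_lock:
            results.append(out)
        jobs.task_done()

NUM_WORKERS = 4  # the knob you turn (or autoscale) when the transcoder is overloaded
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for vid in ["cat1", "cat2", "cat3"]:
    jobs.put(vid)
for _ in threads:
    jobs.put(None)   # one shutdown sentinel per worker
jobs.join()
print(sorted(results))  # ['cat1.mp4', 'cat2.mp4', 'cat3.mp4']
```

The producer never changes when you scale; you only add consumers, which is exactly the property the interview answer is gesturing at.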
When I interviewed at Facebook my system design question was for a Yelp-ish clone that would be rolled out to Facebook as a whole. The interviewer told me the design had to be scalable to 1 billion daily users on day 1.
> We ask people to [discuss] something technical they understand well and the team digs in and asks questions to understand depth and breadth of the candidates understanding.
This is how I conduct all of my interviews. I can't stand the gotcha-centric, pitfall-laden Jeopardy contest format of interview interrogations.
We think this is the fairest way to evaluate candidates. I once had a standout interview that was very gotcha-centric. I didn't think the way it was conducted was fair or respectful of my time, and I don't think it did a good job of assessing how well or how poorly I would have done in the job. It seemed indicative of poor management. I think it's very important as managers that we respect people and their time, always. This feels like the fairest way we've come up with so far to do that. We do two technical exercises, diagramming being one of them. We've had really great results, and it takes ~1 hour. That seems fair.
> It gives you a chance to explore depth and breadth of their knowledge on something technical that they claim to understand.
Assuming you know enough about what they're doing to actually dig in; granted, maybe you're only considering interviewing people who are working on things you're somewhat familiar with.
Ultimately, I don't see how this is different from a technical design interview - in that you can be susceptible to hiring charlatans, and pass on people who'd rather say they don't know something than try to BS most of it on the spot to sound impressive.
The two people could have the same knowledge. You're more likely to hire the charlatan. Maybe that's what you want :shrug:
In my own experience interviewing people about systems they have worked on (as opposed to hypothetical on-the-spot design), we have been highly successful in rooting out charlatans, as you put it. They very rapidly get hand-wavy and are only able to give the most shallow of answers. In a few cases they will go in-depth, but reveal truly bizarre decisions or a lack of understanding of the platform.
Good senior engineers can usually go fairly deep, they are honest about where they don't have as much knowledge, and they are usually candid about what was good and what in retrospect was a hack or a bad decision.
We've had very few false positives. It's a conversation, and we understand and appreciate when people say they don't know. We understand that really good engineers say "I don't know how that works." If all of your answers are "I don't know how that works," that'll be red flag enough for the team. The team of interviewers has a very wide and deep knowledge of technologies (it's kind of a virtuous cycle). We call out in the pre-interview handout that if you pick something super obscure it'll hurt your chances, as the team will struggle to ask good probing questions.
The nice thing about this is you're giving the candidate a choice about what they want to describe. Good engineers will choose appropriate topics.
Is this sort of interview common in the industry? This is the first time I've seen someone describe a technical interview I know I could pass with flying colors.
That just seems rude... but to the larger point, there are a lot of signals you can pick up if you have the ability. If you have zero ability to read people and can literally only judge whether they are answering abstract technical questions correctly, you as an interviewer would probably be better replaced by a web form.
As a former (reformed?) non-coding architect, non-coding architects are a scourge on the industry. I know you didn't say "non-coding" but if it's a separate job, they're very likely not spending enough time coding.
Seniors should be able to build large, interconnected systems. IMO that's a basic skillset required to reach that level, and part of why I roll my eyes when I see people with 5, 4, 3 years of experience claiming to be seniors.
People can claim to be seniors because the gates for getting that title are pretty permissive. But even if you had the conviction to re-evaluate yourself ("do I deserve to be a senior engineer?"), you'll find tons of opinions on what it means to be senior, what knowledge they should know, and what they should bring to the table before claiming that. It's difficult to build yourself a widely accepted roadmap.
You have to watch out for people being too exclusionary though, almost getting into no-true-scotsman territory.
For instance: you can be a valuable contributor to a large, interconnected system, explain each part of it, but maybe not have the skills to build it from scratch. I would call that senior, but I wouldn't say architecting it from scratch is a basic skillset of this position. My opinion is if you could build large systems from scratch for a company, I'd say you probably need a title and pay higher than senior.
If you did want to roll architecture into a senior job, I'd imagine the pay is higher than for non-architecting seniors, but it's hard to quantify the architecture contribution to the salary's bottom line. If you took the non-coding architect's salary and added it to the senior's, most of us would be clearing like 300k.
Personally, I'm pessimistic around job duties because you'll find so many companies looking for do-everything one-man armies at lower-end wages.
Architecture and coding are two distinct skillsets. Most senior developers haven't touched a book about architecture in a long time, if ever, but still feel competent about such topics, falling into the "you don't know what you don't know" trap. This is as problematic as the architect with rusty coding skills, or one who never worked as a professional developer beforehand. Coding should be a small part of the architect role, but that role is usually reserved for very complex systems where the majority of the work is no longer coding; there is enough to fill a full-time job, otherwise there is no need for an architect at your company.
Why are books important? Senior engineers should be seeing design documents and code reviews that involve architecture and design patterns all the time.
Sure, there's room for external learning, especially at companies where software isn't the primary focus, but almost none of the best software architects I know spend much of their time reading about architecture.
Speaking personally, I've learned more about architecture from reading code and system design interview prep materials than I ever did from any book. The closest I found to a useful book was the "Architecture of Open Source Applications" series, which is essentially reading code with a senior eng walking you through it.
I hear "Why are books important?" pretty often when guiding developers. All of the best architects I know spend a lot of time honing their skills. That won't happen if all you see is the company you're working at. The worst architects I've worked with stopped learning outside of their job tasks. It's also a step to understand that design patterns and system design are within the core competency of the senior developer, and that the role of the architect has many other responsibilities.
While I'm sure that the best employees make their entire lives about work skills, I just can't imagine that being a fulfilling life personally. If I have to spend my nights reading books about work, outside of work, then I don't want to be the best. I have other hobbies I'm more interested in than "becoming a better employee."
Now, I do feel differently about academia. I could understand doing fundamental research and dedicating most of your waking hours to those problems. But I guess I just feel like the research mission is infinitely more valuable than the corporate one. I say this having never had the chance to really contribute anything meaningful to any company I've worked for, and never having worked for a company whose mission felt personally fulfilling to me.
I'm glad that there are people who work their butts off for work in important places but I just can't imagine doing that myself and not just disintegrating with regret when I turned 65.
> I just can't imagine that being a fulfilling life personally. If i have to spend my nights reading books about work, outside of work, then I don't want to be the best. I have other hobbies I'm more interested in than, "becoming a better employee".
I'm the guy who reads books about my career (not about my work) outside of work (in my free time) and during working hours as well. I love my career (computer science) and I love reading well-known tech/cs books. I don't do it to be the "best employee". I couldn't care less about what I do at work (I, like 99% of the people here, work for a totally useless tech company that nobody would care about if it disappeared tomorrow). What I do at work is stupid distributed systems in Go + Kafka + postgres + k8s (totally uninteresting stuff, but it pays very well). I genuinely enjoy reading books by Stevens, Kerrisk, Kleppmann, etc., just like I enjoy reading sci-fi/drama/etc. novels. I usually end up learning stuff that's actually useful at work, and once in a while I apply such knowledge there, but I do that only for the raises and promotions.
There are people out there who don't give a fuck about tech companies, but care deeply about tech.
I somewhat take issue with this. (3+ decades of very hands-on (read: coding) architecting, including some learning experiences in orbit. I've been coding and doodling since my teenage years.)
A lot of what non-coding architects traditionally brought to the table has been taken over by experts designing OSS protocols, data formats, etc. Take things like data frames, which are now exploding in the ML space. Seniors today, agreed, should be able to integrate (sub)systems, and that may in fact be enough. But a competent systems architect (even one who has never touched code) should also be able to define data formats, patterns of data movement between subsystems (for, say, optimal performance), the actual computing platform considerations, etc. Also, when you are too close to the code, things sometimes degenerate into debates about tools.
Naturally my points here gain more validity as system size (or its open-ness requirements) increase.
I always thought that non-coding architects still had at least some background in coding. What does the path of a never-coded architect look like?
Possibly effective doodling in the design space of systems? I personally hacked at untold number of experiments with different approaches, and was doodling like mad over at least a decade. I have bookshelves full of notebooks covered with object graphs, system sketches, etc. It's not just coding.
The nice thing about it is that it scales to whatever level you need it to. We've had junior candidates with no real Ops experience diagram synthesizers that they've created. Then we ask questions to gauge their level of understanding. We have a general idea of what it means to be at each of the clearly delineated levels on the team and we ask questions to what seems like the appropriate level. We've had candidates come in to interview for senior positions and during the interview the team drilled down to what we felt like were lead candidate levels and we offered the candidate a lead role instead of a senior.
We call out in the pre-interview handout exactly what we are going to do and what we expect and we let the candidates bring what they think is best. For senior, lead, and staff candidates we call out that bringing something that the team is familiar with as well, will be beneficial to the candidate and to the team.
It's really allowed us to do our best evaluation of candidates in a reasonable amount of time. We have two technical exercises and they take ~1 hour total. We may still be getting false negatives, but we're not getting false positives. Hire hard, manage easy, right?
Reverse system design interviews have a lot of untapped potential. They just started getting a bit more popular in the last year or so (however, most companies haven't caught on to the trend yet).
I had a "reverse" system design interview at Amazon back in the 2000s. Possibly before the modern "forward" system design interview became so popular.
I also remember circa 2012-2014 being required to conduct what are now considered typical modern system design interviews (i.e. as an interviewer) while employed at Amazon, and 90% of (even senior) candidates had no idea how to even begin to approach the "design a system to do X"-type questions. I think this is partly because far fewer people in typical software jobs were building distributed systems back then anyway, and partly because all the YouTube videos and guides like yours didn't exist yet, so nobody was doing the kind of dedicated prep and rote-learning that seems increasingly expected nowadays.
Back then, when the problem was too far from the candidate's real work experience, and they weren't willing to make educated guesses and ask clarifying questions in order to move forward, it was often a struggle (for both of us) to pivot the interview towards something the candidate could tackle and demonstrate their design experience. ("Tell me about a system you designed", i.e. the "reverse" approach, wasn't an acceptable alternative in our rubric.)
Now, just like what happened with coding interviews vs. Leetcode, it seems that a more common challenge for the interviewer is telling the difference between a candidate who is applying real experience and understanding vs. regurgitating/performing what they read in a system design interview prep guide. But that's assuming one is actually more valuable than the other, and I don't have proof of that.
To be honest, since I have to go through preparing for the standard style of interview before a job search anyway, all these novel forms seem like annoyances.
We’ve found that candidates prefer this because it doesn’t require a ton of prep. Talk about something you know well in a conversation with other engineers. Do some minor diagramming as you go to help us understand. It’s thirty minutes with no gotchas.
This sounds like an enjoyable interview, even from the applicants point of view.
Do you follow up on your decisions to see how it held up regarding false positives you did accept, if any? I mean as related to the heuristics, not performance review generally.
When doing IT work, people often say "Well, I don't know anything about computers," which I generally found pointless, as with about 30 seconds of talking I can tell whether you know anything or not. Your story passes my smell test, as getting someone talking is 90% of telling what they know.
People with broad and deep understanding of technology can typically gauge whether someone knows their stuff within 30 minutes of them describing a system and answering questions about it.
This sounds like a type of interview that respects my time, which makes me think there is a good chance the company as a whole will respect my time and humanity as well.
This is exactly what we’re going for. It lets candidates know that we value their time and have worked as hard as we can to create an interview process that lets us gauge whether or not you’d be a good fit for the job in the least amount of time as possible.
best way of doing interviews imho. apart from anything else, it gives you a chance to show that you know how to design/implement a system that is perhaps more complex than the one you are being interviewed for
We've found that candidates who better understand how things are connected make better candidates and operators. We need some way to gauge how well they understand how things are connected. This is as fair of way as we've come up with.
I totally relate to that. In the university setting, I see many students more interested in learning keywords than the key concepts behind those keywords. Think of preferring to learn "k8s" instead of "resource manager". Today, very few of them know that etcd is one of the key components of k8s.
I guess this behavior is similar to the transformation that happened in other non-CS fields.
I don't know if you work in FAANG or not, but most FAANG interviews just don't work this way. So, as much as I like this style, it's just one company and definitely not the norm. The rest of us are stuck with "design Uber, design WhatsApp" crap...
I don’t like this approach because it doesn’t provide measurable or repeatable criteria for comparing candidates. It also suffers from a sort of Gell-Mann amnesia where a candidate who spouts good-sounding BS about a topic you’re not familiar with is assumed to be competent across the board.
The process is repeatable. We ask all candidates for the same role the same questions and ask them to do the same technical tasks. The other component of our technical interview has something more similar to scoring.
We’ve found that this diagram exercise is actually a lot harder to BS your way through because the expectation is that we’re going to probe into your answers and it’s supposed to be something you understand well.
I am on the interviewer side of ~1 system design interview per week and have done probably around 100 of them at Google and other companies.
Most of the advice here is good, although it could be condensed quite a bit.
The biggest red flag for me as an interviewer is when candidates list off concepts or technologies they don't understand. DO NOT say things like "we can't do this because of the CAP theorem" or "we should add a cache to reduce latency" unless what you are saying is true and you can explain why this is true (the guide mentions this).
The biggest green flag is when the candidate obviously has thought deeply about related problems and made design decisions based on real world constraints and requirements. If you bring up a lot of related things you have worked on, explain why you made the decisions you did in those scenarios, if its obviously related to the question, and if you don't spend too much doing it, then you will definitely get a positive recommendation from me.
The purpose of the system design interview is to convince me that you could build, or lead a team to build, the thing that we are discussing at a company like the one where you are interviewing. I feel like I can easily see through interview practice or coaching, so I don't recommend spending more than an hour or so "preparing" for system design interviews. Unless you are interviewing somewhere with very inexperienced interviewers...
No offense meant, but your comment already shows your inherent bias.
You are looking for people who have solved specific system design problems. That means you are already being overly narrow in your selection criteria.
I have a background in quant dev and HFT for example. When I was doing system design for big tech (at L6 level), I found that the kind of system design questions asked had very little overlap with my own experience. Because the fields are different and the problems are different.
I absolutely had to prepare by spending a lot of time familiarizing myself with big data/internet scale type stuff. Like spending months studying the DDIA book. Which is a lot different from low latency systems. And there's no way I would have passed those system design interviews without doing so.
The stark reality is most of the people passing these have prepared heavily and you probably don't pick up on that - because part of the practice (e.g. mock interviews) is to make you sound natural.
Now that I'm in the system, the other stark reality is that little of this system design matters. Most of the challenge is understanding the internal technologies and figuring out who to talk to to get stuff done (like pretty much every big company).
I think ultimately system design, just like LC interviews, are a proxy for "can you study hard and pass exams". Which is supposed to be a loose proxy for what makes you successful on the job. I have mixed feelings on whether it gets the right people for the job or not.
> Now that I'm in the system, the other stark reality is that little of this system design matters. Most of the challenge is understanding the internal technologies and figuring out who to talk to to get stuff done (like pretty much every big company).
Having been on the "support" side of an engineering org (supporting teams building product features) the thing I've noticed is that if you don't maintain a certain "density" of people with these system design skills things start to go off the rails. People start showing up with designs resting on some very fundamental misunderstandings or designed with no eye towards simplicity and things turn dysfunctional quickly.
It's worth noting that there are also better and worse designed system design interviews. Good ones give space to demonstrate "taste" as well as technical chops.
Well for my own startups I don't do system design interviews. I don't think they're perfect and there are better ways of measuring the same thing if you have more time per candidate.
Big companies like Google have to sometimes choose mediocre processes that are Lindy and agreed on. System design interviews are less bad than coding interviews IMO, and they are much more scalable for Google.
I think I'm better at spotting genuine experience vs preparation than most interviewers because I've done a lot of interviews. On the other hand I don't think I can tell 100% of the time, and it's not black and white to begin with.
Another point, your specific background may not matter for L6 roles. I want to know if you can build the sort of thing you will be building in the role you're interviewing for. I don't care if you're good at building something else. This is less true for L4 and L5.
> Another point, your specific background may not matter for L6 roles. I want to know if you can build the sort of thing you will be building in the role you're interviewing for. I don't care if you're good at building something else
Why would you narrow your pool so much, especially when specific experience is usually not what defines higher engineering levels but rather architecture, leadership and communication skills?
If someone is actually skilled in system design they would be able to come into a new domain, understand the constraints and design a system within those constraints.
People seem to superficially pattern match on experience that looks similar, but doesn't really have that much overlap a lot of the time.
My thinking is that most things are far more specific than we realize, so why not mostly forget about specific domain knowledge and focus on applicable high level skills? Domain knowledge is a nice bonus, but you're almost certainly going to have to learn things on the job regardless.
Well I mean your experience isn’t any different than applying for a job that uses Python when you’ve only used C++.
Sure, they can train you to learn Python, but they’re specifically looking for a Python developer. They’re not looking for a generic “person that can learn stuff” when there are many Python developers out there that can pick up the job day one.
In your case, you had to put in a lot of work because you essentially were switching specializations.
No but for a popular job listing in a well known company with hundreds of applicants, there are going to be dozens of people who have similar experience as you, plus have the exact qualification that they are asking for, putting you at an automatic disadvantage.
I don’t agree because not all interviews work that way, especially for teams that write cpp and c — they don’t let candidates interview in JS for instance.
I had a high level eng join a domain specific team I was on and he didn’t do well because he didn’t know the domain very well. He had general skills around leadership and communication that someone of his level should have, and it kept him afloat, but for a high level technical IC you need to be at worst fluent in the domain and at best a technical leader in the group.
At lower levels of IC generalist is fine, the blast radius of your technical decisions is smaller so it’s less risky to learn on the job.
But yeah, not knowing the domain you’re going to join, whether it’s a type of work or a programming language, you won’t be as effective.
> I don’t agree because not all interviews work that way, especially for teams that write cpp and c — they don’t let candidates interview in JS for instance.
I think this is a bit of a special case since these languages are largely restricted to specialized domains these days. I don't think many people are deciding between building a Web app with Python or C.
> Most of the challenge is understanding the internal technologies and figuring out who to talk to to get stuff done.
This is something interesting that often gets skipped due to the "sterile," solve-the-puzzle nature of system design questions. 90% of system design problems are tightly connected to how your org works internally. Can you depend on a particular team, or is it better to provide the requested functionality yourself? This is very often neglected, but it makes or breaks things.
Imagine you have to build a complete traffic system architecture for a city that will start being settled in a year. It will likely be an ongoing thing for you: you will observe how the street layout works and adjust, build new streets, make a city highway from point A to point B, and so on, but over time. Doing all of it a priori is going to go horribly badly.
> Now that I'm in the system, the other stark reality is that little of this system design matters. Most of the challenge is understanding the internal technologies and figuring out who to talk to to get stuff done (like pretty much every big company).
A few years ago, I did my round of "big tech" interviews as a manager and was somewhat naive in that I did zero prep work, but I did have almost 2 decades of actually building systems. I found system design to be the "easy" and fun part of the process (including at Google). However, if I didn't have experience with scaling and redundancy in web applications, it wouldn't have been nearly as fun.
That said, once I was in the door (different big tech co), there was absolutely zero need for any of that knowledge. Now that I'm back at a startup, it's all relevant again!
Like another commenter pointed out, you were interviewing for what basically amounts to a "tech lead" role without relevant experience. I'm not sure the other L4-5s on the team would be happy if someone less experienced were hired into a TL role.
Big tech interviews aren't team specific or even experience specific (with some exceptions). Teams do a wide variety of work, including stuff similar to low latency C++/Java, and team selection doesn't happen until after passing interviews.
It's not really a question of relevant experience at all - it's about passing the standardized exam that is a big tech interview. Experience comes more into play when doing team selection afterwards.
Even if you do have the exact kind of experience they're looking for, the interviews usually are built around an assumption like "this system needs to be built from day one to scale to massive traffic" which is not the way projects usually are approached in the real world.
I'm also a former interviewer from Google. I typically interview people for L6+ roles.
What's being called "bias" here may simply be "experience" and a fundamentally different understanding of the System Design interview's purpose.
Side note: Systems Design interviews are reserved for "senior" level candidates (L5+). It is a significant inflection point for expectations, as senior-level employees are expected to navigate through and resolve ambiguity. These interviews are not about determining if the candidate can or has solved a particular problem. When a candidate has a particular solution in mind for the presented problem, they better be prepared to explain and justify why.
(Take everything I say with a grain of salt, as there's no guarantee that an arbitrary interviewer you encounter shares the following understandings)
While Google's interviewing process for tech ladders is deliberately designed to minimize any potential interviewer bias (i.e. the interviewer's role is structured to ask, as a starting point, approved questions and take notes --sometimes verbatim-- on the candidate's responses for the hiring committee to make a hiring decision), the Systems Design interview is the one type of interview that does and --should-- rely on the interviewer's judgment.
The same Systems Design interview question can be given to candidates across a range of target levels.
What may be frustrating for junior candidates in particular is that unlike leetcode or Cracking the Coding Interview questions, these questions are not intended to be, and cannot be, "completely solved." There are no specific "correct" answers. There is no book of solutions to be memorized for such questions. This is deliberate, knowing that people try to memorize answers for interviews. It does not mean a candidate cannot, or should not, practice how to show their experience.
The tenets of systems design questions in engineering interviews are to:
0) Foster collaboration. The interview is about understanding how the candidate goes about solving a problem and understanding their experience solving problems (with others).
1) Give the candidate an open-ended question that is not meant to be memorizable, nor exhaustively solvable within the allotted time. This allows an interviewer/hiring committee to observe and judge a candidate's problem solving approach and experience, versus memorization.
2) Give the candidate an opportunity to show their experience -- The question and approach should be sufficiently broad to allow the candidate to surface areas where they have particular depth from their past work experience, and the interviewer to probe/explore those depths.
The interviewer's evaluation of the candidate responses and performance during a systems design interview should include the interviewer's expectations of what a candidate would at least ask or address with the presented problem. Better yet, the interviewer should include what they would expect a candidate for a given target level to address, and further specify what additional things an L+1, L+2, etc would have addressed.
Ultimately systems design interviews are not about a candidate's answers for "What" or "How" to build ______, but surfacing a candidate's judgement skills and understanding of the "Whys" along the way.
It would be great if every interview was like this. But I see system design, even at top-tier companies, as similar to leetcode: an algorithmic problem with a standard solution, where the interviewer is looking for that standard solution. There are more degrees of freedom, but not really. Interviewers are looking for memorized answers from the same few prep materials.
Unfortunately, that type of experience is becoming more and more common, and that speaks to the interviewers and their lack of understanding of the whys of the systems design interview process itself...
I suppose this is one more sad byproduct of the title/level-inflation or skill-dilution that has been happening across industry, as well as the 'gamification' of the interview process on both sides.
Some interviewers can also end up being lazy as well.
> The purpose of the system design interview is to convince me that you could build, or lead a team to build, the thing that we are discussing at a company like the one where you are interviewing. I feel like I can easily see through interview practice or coaching, so I don't recommend spending more than an hour or so "preparing" for system design interviews. Unless you are interviewing somewhere with very inexperienced interviewers...
I don't agree with this because I bombed a couple interviews till I spent the time going through the prep. I distinctly remember an interview completely going off the rails because I was asked about a retail Web site and said "honestly I'd usually start with a simple app server - SQL database setup to start; no reason for the complexity of anything more complex till you've proven you've got the traffic and have scaling problems." That's my honest opinion, borne out by real-world experience, for most people thinking of doing something like that, but it obviously wasn't what the interviewer wanted to hear and it's not like I would have been capable of giving a fancier design; I just didn't know how you're supposed to approach the question.
Well I know that now because I watched a bunch of prep videos and then easily passed interviews despite my experience not significantly changing in the intervening time. But that's my point; if you don't prep you have to just guess which aspects of this pretend scenario you're meant to be taking seriously and which are OK to gloss over. I had other interviews go off the rails because I offered too much detail about a smaller part of the system and discussion of it dominated the entire time but even though the interview kept engaging this discussion it wasn't actually what they wanted. "Don't bother prepping because you can't; it's just measuring your real abilities" is just not right, in my opinion.
A cache is such a red flag when I'm doing a system design interview.
I ask people to design a system but the system itself is very write heavy. Like 50 writes per read imbalanced. People will often suggest a cache because it's in the standard interview prep but not actually think about why they need it. They just go right to "I need a cache somewhere".
> I ask people to design a system but the system itself is very write heavy. Like 50 writes per read imbalanced. People will often suggest a cache because it's in the standard interview prep but not actually think about why they need it.
I mean, it's a cool thought process to go "Ooh, maybe I can store up repeated things so I don't need to repeat an expensive access process often!" It's a byproduct of modern computing hardware that we frequently have the luxury of not thinking about caching.
I don't think you should punish people for considering a cache. Write caching is a thing, it might just not be applicable for the system you're considering.
Well yeah, if you can say all this in the interview then you will be fine. 90% of candidates will instead read "add cache to improve performance" in their interview prep guide and repeat it verbatim.
What I am saying here is that it's not true in all cases, so candidates should avoid blurting out rehearsed sentences like that.
Caches usually reduce load on the thing behind the cache. They can sometimes reduce latency in ways that matter, but often don't. For example if the p99 requests all miss in cache, then the cache won't help.
Assuming your scenario of 99% cache misses, that means you need a cache whose read latency is less than 1% of the cost of a cache miss. There are plenty of ways to design such a cache so that you still get a net performance benefit, even in your greatly exaggerated scenario.
One very simple example of an incredibly cheap cache that is almost certainly going to cost less than 1% of the cache miss latency is a bloom filter. Bloom filters can be tuned to be incredibly space efficient as well.
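One way to read that suggestion: a bloom filter in front of the expensive store lets you skip the lookup entirely for keys that were never written, at a tiny, fixed cost per check. A minimal sketch, purely illustrative (the class, parameters, and keys are made up, not from any library):

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter: fast, space-efficient 'definitely absent' checks."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a plain int used as a bit array

    def _positions(self, key):
        # Derive several bit positions per key from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means definitely absent (skip the expensive lookup);
        # True means *maybe* present (go do the lookup).
        return all(self.bits >> pos & 1 for pos in self._positions(key))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True
print(bf.might_contain("user:999"))  # almost certainly False
```

The trade-off is that a bloom filter can return false positives (so a "maybe" still costs a real lookup) but never false negatives, which is why it can only short-circuit misses, not serve hits.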
Please don't be so dismissive of people who answer questions by giving generally good and standard advice, just because you have some silly corner case that you want to play "gotcha" on.
You are talking about minimizing the expected or average latency, which is very rarely important. The actual goal will depend on the broader system but typically you want to minimize the latency of the worst 95%, 99%, 99.9% etc of requests. Whether or not the cache reduces latency depends on the distribution of the cache hits. In practice, items that miss a simple LRU cache often have the longest latency in other parts of the system, in which case the cache would not reduce p99 latency.
EDIT: I recommend reading the Google SRE book or the famous "The Tail at Scale" article.
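The mean-vs-tail distinction is easy to make concrete with toy numbers: if the slowest requests are exactly the cache misses, a 95% hit rate slashes the mean but leaves p99 at the full miss latency. (The latencies below are invented for illustration.)

```python
# Toy latency model: 95% of requests hit a cache (1 ms), 5% miss (100 ms).
latencies = sorted([1.0] * 95 + [100.0] * 5)

mean = sum(latencies) / len(latencies)
p99 = latencies[len(latencies) * 99 // 100]  # 99th percentile, nearest-rank

print(f"mean = {mean:.2f} ms")  # 5.95 ms
print(f"p99  = {p99:.2f} ms")   # 100.00 ms
```

The cache cut the mean by ~17x, yet the 99th-percentile user still waits the full 100 ms, which is the parent's point about hit distribution mattering more than hit rate.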
Well folks, you heard it here. Average latency is rarely important and if someone presents a very standard and common way to reduce latency of a system, you should dismiss them because they should be aware that your specific system that they have no knowledge of would result in 99% of cache misses.
I used to work at Google long ago in the platforms division; in fact I worked on the BigTable cache (among other things, mostly related to performance). It would be very sad indeed if today's SRE book dismissed caching as a vital and standard optimization strategy and instead played all kinds of gotchas with potential candidates, as I believe you're doing.
That said, the article you linked does not in any way support your claim about caching; on the contrary, it hardly discusses caching at all. It's as if you just wanted to dump a document you thought I wouldn't read as a way to be dismissive.
What's your ldap? I can look you up and see what you were working on, and whether it required making design decisions around reduced average system latency.
And yeah this is exactly my point. If you present a "standard and common" solution that isn't applicable to the question I actually asked, and if you blindly apply solutions without thinking about what problem is actually being solved, then that's bad in an interview.
Average latency is usually not the thing you want to reduce because it's not representative of what any actual user is experiencing.
Why would you do that? Now you're being weird and kind of creepy.
With that said, to the best of my knowledge it is the same as my HackerNews handle, so you are welcome to find whatever info you'd like on that, but please understand your request is very creepy and awkward.
If you need to reduce average latency and the cache would actually do that, sure. Cache doesn't magically make all your network latencies drop, though.
yeah, not sure what is too hard to understand about that. Perhaps it depends on the context but I highly doubt you can go wrong by saying a cache should be added to reduce latency.
The way it typically falls apart for my candidates is I ask about optimizations, they say X could be cached, then I ask about cache invalidation and lo and behold, it didn't even occur to them that the cache needs to be invalidated at some point. So they blurt out something along the lines of doing the entire expensive operation again and checking it against the cache, or something similarly nonsensical.
The problem isn't demonstrating understanding of what a cache does conceptually, in fact any SWE worth their salt ought to be able to explain caching. The problem is if the candidate's decision to add a cache causes the system behavior to become obviously incorrect upon an ounce of further inspection, because this demonstrates a lack of foresight about local maxima vs global maxima, and systems are all about trade-offs at the macro scale.
Caching is an easy thing to blurt out in an interview setting, but not all problem spaces benefit from caching and caching often isn't the only available solution, or the most ideal one.
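For illustration, here is a minimal sketch of the pattern these comments describe: explicit invalidation on the write path, with a TTL as a backstop. All names and numbers are made up; a production cache would also need to handle concurrency and thundering herds.

```python
import time

class InvalidatingCache:
    """Cache invalidated explicitly on writes, with a TTL backstop."""
    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry_timestamp)

    def get(self, key, compute):
        entry = self.store.get(key)
        if entry is not None:
            value, expires = entry
            if time.monotonic() < expires:
                return value  # fresh hit
        value = compute()  # miss or expired: recompute once
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

    def invalidate(self, key):
        # Called by the write path, so readers stop seeing stale data
        # as soon as the underlying value changes.
        self.store.pop(key, None)

db = {"user:1": "Alice"}
cache = InvalidatingCache()
print(cache.get("user:1", lambda: db["user:1"]))  # "Alice" (miss, computed)
db["user:1"] = "Alicia"
print(cache.get("user:1", lambda: db["user:1"]))  # "Alice" (stale hit!)
cache.invalidate("user:1")
print(cache.get("user:1", lambda: db["user:1"]))  # "Alicia" (recomputed)
```

The middle call is exactly the failure mode being criticized: without invalidation wired into the write path, the cache serves stale data until the TTL happens to expire.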
I honestly can't decide which would be worse: a developer who literally doesn't know what a cache is or one who installs caches with bad invalidation policies.
In practice, "latency" is too vague to be what matters. In many systems it's the worst 1% or 0.1% of latencies that matter. Adding a cache will improve latency in those cases only in some situations.
Usually adding caches helps by reducing load on the service behind the cache.
> I feel like I can easily see through interview practice or coaching, so I don't recommend spending more than an hour or so "preparing" for system design interviews.
I'm very surprised by this recommendation which doesn't match my experience at all. First, lots of candidates don't have experience building such systems, so they need to learn by reading books and articles. Second, the candidates need to be familiar with the structure of the interview and what's expected from them, and that takes practice too.
As an interviewer, my job is to differentiate between people who have experience vs people who have just read books and articles. I interview mostly L6+, so I will not recommend people who I don't believe have real experience building complex systems at scale.
I think it only takes an hour or so to become familiar with the structure. I can tell during the interview if that's the problem and I will be very patient in explaining what I'm looking for in that sense.
What is the difference between real experience and books in your opinion? How similar does the experience need to be to the problem at hand for it to be relevant?
I can't say about other companies but I have personally done many, many of these interviews over the years and can say with 100% confidence that someone who has spent days/weeks/months studying system design interview material but not had any actual experience building such systems in a production setting will get weeded out in the first few minutes.
Now some interview prep might actually help a lot of people, but it will help you better present your existing knowledge, not replace it.
I may be a counter example, but I did very well at system design interviews at both Google and Meta (level L5), with no experience building such systems. As a matter of fact, my resume didn't mention any such things and everybody was aware I didn't have that experience - recruiters, interviewers. A lot of candidates don't have that background (there are lot of things to do in our field besides building Twitter newsfeed and Google search autocomplete), and companies don't want to exclude them.
Also, you can build knowledge during your preparation. It's not only about "faking" it.
>I feel like I can easily see through interview practice or coaching
How do you spot it? I've had candidates with stellar performance on system design interviews, and I've never had any suspect of them (and the resumé also seemed pretty strong in those cases). But now I'm curious if I missed something.
It's possible in those cases that the candidates were really good.
The main tells are a mismatch between apparent practical experience and apparent knowledge of the design space. I always dig into a few specific technical aspects of the design as far as possible to see how deep the candidate can go.
For example, if there's a queue I'll ask what would happen if in production the queue starts to fill, and how to mitigate. Most candidates reflexively suggest making the queue bigger, some suggest adding more downstream capacity to drain faster. People with real experience will have encountered this problem and know that you have to spend time investigating to find the root cause of the backpressure, otherwise you could add capacity to random things without reducing the queue occupancy. Are your queue consumers getting throttled by some downstream service? Are your workers CPU bound? The "right" answer here is basically "we should figure out why the queue is filling up before changing anything", but very few candidates answer that way.
Another kind of mismatch is the red flag I listed, where candidates will basically list buzzwords or ideas regardless of whether or not they are related to the question I asked. For example if I ask a question where the user-facing API is very simple (ie, write opaque data to a log) then they start talking about the tradeoffs between graphql vs REST, that's a bad sign.
> People with real experience will have encountered this problem and know that you have to spend time investigating to find the root cause of the backpressure, otherwise you could add capacity to random things without reducing the queue occupancy. Are your queue consumers getting throttled by some downstream service? Are your workers CPU bound? The "right" answer here is basically "we should figure out why the queue is filling up before changing anything", but very few candidates answer that way.
This is the opposite of what "people with real experience" do, at least in production-down situations. The first step is to mitigate, the second step is to root cause. If there's some no-brainer step that has a chance of alleviating the issue while you root-cause, and is unlikely to make things worse, you should take it.
Your point is good, but in making it, you sound like an adversarial interviewer.
You and OP are actually agreeing with "the correct answer is it depends and now let's discuss context". This echoes my experience as interviewer too. It's a red flag when the candidate responds with "the correct answer". That's what OP is calling out.
I'm replying here because I get the impression you're looking for "the right answer" as you see it: "the first step is to mitigate then do root cause". You're right! But it also could be too adversarial.
> Your point is good, but in making it, you sound like an adversarial interviewer.
Most interviewers are adversarial.
Let's be honest - if an interviewer wants you to pass a system design interview, they'll make it work. I see this with particular candidates all the time. If we want the person to make it through - we'll let them get through. If we don't want them to get through - no amount of correct and behaviorally appropriate answers are gonna make them get through.
Pretty much this. You can have experience and also perform well in a system design interview. However, passing some of these rounds involves chance. Some interviewers are looking for specific answers. For example, some interviewers are looking for you to be knowledgeable about recent papers specific to the heavy-hitters problem. Or the two ways to best support lynchpin objects. It’s not so much as dropping well-known building blocks to construct a system but rather the interviewer asking you to design a system requiring specific characteristics you might not have any specialization in building.
This might be true of SRE roles, but not really true for most engineers, particularly not where I currently work. I wasn't completely clear but what I'd be asking for here is how to predict and design for bottlenecks, not how to mitigate active disasters.
Also making a queue bigger is a bad reflexive response to queues being full. I was very involved in the SEV review (postmortem) process at Facebook and witnessed lots of cases where bad situations were made much worse by misguided hasty responses.
> Also making a queue bigger is a bad reflexive response to queues being full.
Depends on the function of the queue. If it's a dead-letter queue, there's literally no downside besides a likely-trivial amount of cost. If the problem is upstream of the queue, and the queue is actively feeding well-functioning consumers, yeah, it could make things way worse. This is where being an experienced engineer comes in. Also having 2-person approval like you would for any code changes. Point still stands that you should take low-risk actions to mitigate if they're available to you before root-causing.
I do, but I don't interview SREs. "Production down" situations are hopefully rare, so the purpose of the interview is to understand if the engineer can plan ahead by adding monitoring, logging, and knobs, and to know how to design for scale. I don't really ask about what to do in a fire because many people don't have that experience.
It feels cool to me that my nontraditional degree for software (ChemE) drilled this into us and now it comes up in my current job:
Rate of input - rate of output = rate of accumulation
It’d be fine to just up the queue capacity if the input is just inconsistent/spikey but averages to the input, but under steady state (or steadily increasing growth) higher input, you must increase the output rate or you will outgrow whatever capacity the system has
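The mass-balance identity is easy to demonstrate in a toy simulation: extra capacity rides out a balanced-on-average input, but under a sustained input/output imbalance any finite capacity eventually overflows. (Function name and rates are illustrative only.)

```python
def queue_depth_over_time(input_rate, output_rate, steps, capacity):
    """Rate in - rate out = rate of accumulation, stepped per time unit."""
    depth = 0.0
    for _ in range(steps):
        depth = max(0.0, depth + input_rate - output_rate)
        if depth > capacity:
            return "overflow"
    return depth

# Balanced rates: depth stays bounded, so more capacity would be wasted.
print(queue_depth_over_time(input_rate=10, output_rate=10,
                            steps=100, capacity=50))  # 0.0

# Sustained imbalance: accumulates 2 per step, overflows any fixed capacity.
print(queue_depth_over_time(input_rate=12, output_rate=10,
                            steps=100, capacity=50))  # "overflow"
```

Raising `capacity` in the second case only delays the overflow; only raising the output rate (or shedding input) changes the steady-state outcome.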
I take your word for it that they’re trying to trick you or something - it’s not my story. But it kinda comes off like you’re denigrating people for not knowing something.
If they bring up graphql, I expect them to say something like "none of the advantages of graphql are applicable in this case, so it may only make sense to use it if it's already well supported. All else being equal it's probably not a good fit."
The key is that the response should reflect the actual design we're discussing, it should be more specific than something that could be said in any interview.
In an interview it is a negative to not know something. It's a bigger negative to not know something and try to hide that, or to be ignorant of the ignorance. "I don't know" is a fine answer in an interview. "I don't know, here's where I would look" is better
One of the "heavy practice/coaching/leetcode grinding" signals is when the interviewee immediately dives into proposing a solution and discusses facets of that solution in great detail without bothering to ask clarifying questions or discussing assumptions and the risks/consequences of those assumptions.
I think the opposite is true. An experienced engineer not as versed in interview prep might hear the question to design something, and go ahead and show how they would design it based on their experience doing it in the real world.
The leetcode grinder would almost certainly ask clarifying questions since it’s heavily emphasized as part of the rubric in pretty much all system design interview prep material.
Of course, the non leetcode grinder might ask those questions, but even the social cues that you’re allowed to ask clarifying questions might not be understood.
In my experience, I’ve seen a lot of people say that systems design test the seniority of a candidate, but then stick to a rubric that the leetcode grinder has memorized and the senior engineer might miss several important points on despite having the experience because they didn’t understand all the unwritten rules of the song and dance.
The business goal of interviews is not to select the best one, but to narrow down candidates with an acceptable fit for the role.
An interview system with a high false-negative rate is not seen as problematic if the false-positive rate is also low. The cost of not hiring a very good candidate is way lower than the cost of hiring a bad one (even considering the opportunity cost in most cases).
It's funny cuz the leetcode grinder also learns that "asking clarifying questions" is part of the right answer structure too. So the cat-&-mouse game becomes more nuanced, but I do agree that it usually does become apparent that less-experienced candidates are _blindly_ following a strategy they learned somewhere. Interestingly this is not _inherently_ wrong since if you think about it, don't experienced people do the same thing? playbooks and checklists etc.
But TLDR: the hollowness of each step of the approach does become apparent in my experience.
Yup, exactly this. The easiest way to weed out bad candidates is to listen to all the buzzwords they rattle out and ask them to explain a single one and how it is relevant to their design.
My first instruction to candidates always is – only talk about things you know. If your system design interview prep book told you that you should use time series databases for a certain category of questions but you don't actually know how they work, you will be screwed by trying to incorporate them in your design. Just use MySQL instead and tell me its pros and cons for this use case.
I bet I can condense this even more: it's yet another FAANG shibboleth that starts out as a good idea and then evolves into a ritual candidates need to perform until they can identify that they are "in".
Google Algorithm/Whiteboard interview has morphed into a generation of robots that can solve any LC problem but are incapable of building anything real in software.
Currently I think there are more books out there on "passing the systems design interview" than there are on actual systems design.
I got such fatigue from scrolling through so very much prologue, introduction, expectation-setting, and general "here is what we are going to say and here is how you may expect we will say it" fluff that I gave up, pages in, before getting to any apparent content.
I like some parts of part 1 too, like this paraphrased quote from a senior FB engineer “I’ve learned more about distributed systems reading the internal interview wiki at FB than anything I ever actually built myself… the actual systems at scale are designed by platform teams and most engineers don’t have to think about the impact of dumping a billion messages on a queue, because the infra team will handle it”.
Goes to show what a ridiculous game we’re all forced to play in tech.
That is why the infra team get paid the big bucks (bigger than normal SWE) and are basically untouchable in layoff rounds. I know FB was still in full hiring mode for infra folks, even in the middle of layoffs / hiring freezes. I assume this is the same at the other FAANG / FAANG-adjacent companies.
Hey, founder of interviewing.io here. We actually did zero thinking about SEO. We had someone look at this guide at the end through an SEO lens, but then we decided that that would make the content not as good and would make the guide less readable.
(As an aside, I think in recent years, the spirit of SEO has become more about just making good stuff and less about hacking...)
Anyway, it wasn't for SEO. It's hard to edit stuff well because we're so passionate about what we wrote here, and we probably should do another pass.
These people don’t have to think about SEO because it’s already been embedded into them. BS corporatespeak at their very core, the perfect type to sell interview gaming.
I will read this as it seems really good.
However, it always feels like a catch-22 problem to me, in the sense that, if you've never worked at a large company before, you can't claim to have truly used or understood large scale systems properly to work on them or design them in production.
I got rejected from a large tech company after failing system design and this was exactly their reasoning: "you seem to know the concepts and the theory really well, but, it was obvious to the interviewers that you lack the real world experience and hiring you would be a risk".
Obviously I... knew or at least tried to know the theory because I PREPARED. I read books, articles, practiced mock problems, watched YouTube for a few weeks, etc, etc. I just tried to "drill" the knowledge that was not there into my brain for the purpose of passing the interview. I guess that's what a lot of people do.
So yeah, it left me really down for a few months and feeling bad about myself and like the world sucks. So, how are you all out there "faking it till you make it" in a convincing way?
Because I bet that no engineers at a FAANG will alone design a full scale system on their own ever, so I doubt the usefulness of the "signal" these interviews give, but, ofc, that's just me, someone who prepared for months and failed
I think to be frank, this gamification of interviews is BS.
I’ve built actual production systems at scale. The fact that I need to follow a guide to “demonstrate” my ability to someone who 9/10 hasn’t built (or couldn’t build) anything at scale in production shows where we are in the absurdity matrix.
I get what you are saying, but I have done tech recruiting for a long time and the reality is that everyone claims they have built complex production systems at scale. Everyone has lofty resumes with the choicest buzzwords. Everyone can come up with war stories about how they single-handedly kept the internet alive. And everyone can write the names of a dozen references who can vouch for all of this.
The moment you start to test these claims, 90% of these candidates cannot write a line of code or describe a simple distributed system by drawing a few boxes on a whiteboard. So the current process is what we end up with.
And once in a while some genius superstar 10x programmer comes along who is too good for these silly interviews... well that's fine we don't need that arrogance either.
> I get what you are saying, but I have done tech recruiting for a long time and the reality is that everyone claims they have built complex production systems at scale
Then we need a better test, but the gamification and badgering isn’t effective imo.
> And once in a while some genius superstar 10x programmer comes along who is too good for these silly interviews... well that's fine we don't need that arrogance either.
It’s not arrogance to not want to be judged by someone who literally hasn’t done and probably could never do what you’ve accomplished.
The unspoken secret is that most software engineers in the industry (even at FAANGS) can really only add features to existing systems. They can’t build green field.
> Then we need a better test, but the gamification and badgering isn’t effective imo.
The educational field has been trying to create tests that cannot be gamed for over a century. The problem is that "gamification" is just a form of studying. In general, folks want tests to have the following properties.
1. Consistency of measurement across test-takers for a defined set of skills
2. Relatively short time-bounds (few hours).
In job hiring, the approach that tries to sacrifice (2) is contract-to-hire, just let the person do the work for a month! Turns out that people with in-demand skills aren't super interested in this.
Companies with more than a handful of engineers don't want to sacrifice (1) both out of fear of lowering the hiring bar too and because it may open them to discrimination lawsuits.
Once you have both of these constraints the number of "test permutations" (along with their relevant evaluative criteria) become limited enough that people can study them and thus gamification begins.
Cool then go design a better process that picks out the best engineers on the planet without asking them any engineering questions. Clearly you are smarter and have accomplished more than anyone at any company you have interviewed with, so it should be a cakewalk for you. Heck you can make billions by finally cracking the tech interview problem.
To be honest, you seem like the one with an issue. You're rejecting the general norm which is at this point painful, and everyone agrees, but it's the best thing we have. Your excuse to rejecting it is your own ego shooting outwards -- "99% of people haven't done what i have." That's cool and all, and I'm sure you're a very smart person, but just by the fact that you wrote this, I can tell you might be very painful to work with and would already be a huge red flag. Kill your ego and your experience in the world will be much better
Frankly, you don't know anything about what it's like to work with me.
But to answer more genuinely, it's a fact that most developers have never and will never work at web scale. Also, if you have ever worked at a FAANG, you would know that a majority of the topics involved in these hazing loops aren't directly used in the day-to-day work of most developers.
Rather than focusing on me, focus on the point I was making about the deficiencies in the hiring loops of tech companies.
You're asking the wrong question. You want the people with stories about how they broke the internet and remained on the same team or even advanced.
Those are often the people with enough skill that they were put in a position that they could break it in the first place, and who provided enough value that even afterwards they were still allowed the opportunity to do it again.
I like to work with companies that make a lot of profit from their five-figure customers, where a single machine is enough. But we can make it three behind an LB. I absolutely hate businesses whose infra is a pile of message brokers, lambdas, and queues, yet make little money.
Addendum (since this got quite a few upvotes): for the latter, they split a team of 8 (already a small team relative to the whole org) into two teams of 4 and broke a monolith app into two microservices. Around the time I saw that, the company started to have more and more managers. Guess what: it isn't even breaking even yet!
I guess I'm not sure if you're suggesting otherwise, but gamification is not unique to interviews. It's a natural response to any system that relies on metrics or measurements. The request to design a theoretical system is gamification as much the guide for answering it is. These games exist because it elicits answers that can be used to learn about someone's real experience level.
It's great that you feel confident relying on your real world application but it takes a long time to get repetition on building things at scale in production. Practicing system design is a way to shorten that learning loop.
Yeah, I don't think I want a job that asks such questions. I've done the webscale BS. I've done actual architecture that exists in reality. At this stage in my career there's nothing an interviewer can do but insult me.
You might say that this is hubris about my own skill but it has nothing to do with how good I am. My resume has a LOOONG track record of consistent work in the industry. Call my references, do some actual DD on me, then ask me real questions related to actual things.
If leetcode and all these other service went straight to hell tomorrow the world would be a better place. This entire "interview" industry is propped up by a bunch of leeches capitalizing on a recently extremely popular field. Then, injecting their BS to make the process harder thereby earning them dollars. The linked article is a perfect example. It's actually an advertisement if you look close enough designed by these exact leeches. This is just a reimagining of the bloodsuckers who run SAT/GRE/GMAT prep services and "ex-ivy-league recruiter consultant" bullshit. They are the same people and the only place they belong is all the way at the back of the breadline. That may be too generous for them anyway. There are better people that deserve the bread.
There's a good chance at my career phase I have more experience than the interviewer. They "level the playing field" by asking me these stupid things. That is why it is insulting. I almost want to leave the industry entirely than have to do the process one more time. I don't need a 7 phase 360 interview with everyone including the CEO's cousin to insure I am a "culture fit". It was never like this before. It needs to stop.
Companies already look at resumes, and the interview is a good way to double check things in a pass/fail manner. It's not like the SAT where you take a single test that sorta determines where you go, but even that is useful as a broad measure. Like if I'm in a top school's admissions and see 1800/2400 (idk the new scoring), there'd better be an explanation. If Amazon saw my resume and I couldn't explain to them generally how I'd architect the backend for their lockers, something would be wrong. And sometimes something is wrong; I interview someone who clearly doesn't know how to code and must've lied on the resume. I don't know how else you're supposed to do it.
At the higher levels, they're also testing for humility.
These interviews are not gutchecks. They are shibboleth checks.
I can design you a nice document, do the research, put the pieces together, etc with the big picture. I may not know a ton about AWS or another cloud provider but I can put the document together that describes how it will be looking when it's done. That is architecture. Somewhere between UML and word documents.
What these interviews are checking, and the one you are describing, is whether or not I can parrot the correct code-words. Lambda, elastic cache, all this other nonsense. That's the purpose of the bloodsuckers I mentioned above. If you can train someone with little architecture experience to pass a senior level interview by just saying the right things and knowing the right hype tech then you're not hiring architects you're hiring grifters.
It's a problem that is endemic in this industry. You can't "gut check" 30 years of architectural experience unless you're legitimately asking the core questions of architecture. Every interview I've been in has had me studying stupid buzzwords from cloud technology, and in every interview I am asked how to use these technologies. As an example, I was once turned down because I didn't use Kafka. I knew what the underlying technology was and suggested using it, but the fact I didn't say Kafka eliminated me. The reason? I can only guess, but it's likely because the interviewer doesn't know much and was looking for a way to get into a debate over Kafka vs protobuf vs whatever instead of discussing actual planning of a system. These debates are resolved after I take the problem back to my desk and think about it for a week. Not in an hour. In an hour the best I can give you is a block diagram with maybe some very rough fleshed-out detail.
The humility check should be bi-directional. It has been my experience that interviewers tend to be the least humble people at a company. The power dynamic is obvious and it's not in the character at most startups where a "senior" engineer high on hopium can settle themselves into their rightful place.
> I may not know a ton about AWS or another cloud provider, but I can put together the document that describes how it will look when it's done. That is architecture. Somewhere between UML and Word documents.
> What these interviews are checking, and the one you are describing, is whether or not I can parrot the correct code-words. Lambda, ElastiCache, all this other nonsense.
Maybe we've just had very different architecture interviews. All the ones I've given or received were the way you'd want. I've never specified a cloud product in these. At most I might say "let's use something like Postgres." Amazon, for instance, didn't care that I couldn't name any of their products.
The Kafka example sounds awful, I'm sorry. On the other hand, it sounds like you dodged a bullet.
> There's a good chance at my career phase I have more experience than the interviewer.
This happens a lot.
I was once in the process for Google and was asked a time series system design problem. I said, “Let's throw this in a time series db.” The interviewer had no idea what a time series db was. Mind you, this was an interview for Google Cloud, and the person worked on Spanner. They had also been at the company for 2 years (by their own admission).
How can someone less experienced than you gauge your competency for a position that requires more experience than they have?
That’s before wondering why they’re asking a question to gauge my competency when they don’t even understand the nuance of the question themselves.
An interviewer working on Google’s enterprise NoSQL db but not knowing (at least on a surface level) the breadth of NoSQL dbs doesn’t seem crazy to you?
The tech interview circuit has really become a bunch of people asking questions that they couldn’t really answer themselves without a rubric in front of them (whether leetcode or system design).
To be crude, it’s a bunch of nerds hazing for jobs.
If my house had a pest problem, I would need to hire an expert in pest-control. I need to do that without being an expert myself. How should anyone be able to hire someone with more experience than themselves, in your view? I've sometimes had to 'hire my own boss'.
Maybe the criteria for the problem was less "did they check these boxes" and more "could they be a collaborative mentor willing to work with even the junior members on the team" in the context of designing a system.
> If my house had a pest problem, I would need to hire an expert in pest-control.
The result from the expert is that you as the layman can look around and see no pests (which anyone with their naked eye can do)… you’re not judging them on their knowledge of pesticides.
> Maybe the criteria for the problem was less "did they check these boxes" and more "could they be a collaborative mentor willing to work with even the junior members on the team" in the context of designing a system.
This is a fantastical maybe. The interview was system design.
Why not design it? Anyone can say to throw in a time series db, but what about its internals? How do people judge if you can build or add a feature without asking you to design something?
If you can't be in a room with a more junior engineer and explain your ideas in simple language by drawing a few boxes on a whiteboard, then yeah you are probably not a good fit for these positions.
If you want to explain what a timeseries db is to someone who has no clue in a 45 minute (already time constrained) systems design interview (where you’re supposed to be actually showing how you would design a system), be my guest.
The issue could be time management, and every database has the same set of features but different implementations.
One of my teammates keeps talking and sometimes loses context. We usually step in and get back on track. He has improved a lot in the past few months. You might be in a similar boat.
I know how to give concise descriptions of complex problems. The issue was that the interviewer didn’t understand the proposed tool.
Every db definitely does not have the same set of features. The read and write characteristics of dbs vary widely, the underlying storage and indexes vary as well.
There’s a huge difference in use cases between Postgres (relational), Redis (in-memory cache), and InfluxDB (time series).
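The read/write difference is easiest to see with the time series case: writes are append-only and arrive roughly in timestamp order, and the dominant read is a time-range scan, which is why engines like InfluxDB organize storage by time rather than by key. A toy sketch of that access pattern (illustrative only, not any real engine's API):

```javascript
// Toy append-only time-series store. Real engines add compression,
// time-based partitioning, and downsampling on top of this basic shape.
class TinyTimeSeries {
  constructor() {
    this.points = []; // stays sorted because writes append in timestamp order
  }
  // Writes are appends -- no in-place updates, unlike a relational row.
  insert(ts, value) {
    this.points.push({ ts, value });
  }
  // The dominant query: "all points in [from, to)". Binary-search for the
  // start, then scan sequentially -- cheap because data is time-ordered.
  range(from, to) {
    let lo = 0, hi = this.points.length;
    while (lo < hi) {
      const mid = (lo + hi) >> 1;
      if (this.points[mid].ts < from) lo = mid + 1; else hi = mid;
    }
    const out = [];
    for (let i = lo; i < this.points.length && this.points[i].ts < to; i++) {
      out.push(this.points[i]);
    }
    return out;
  }
}
```

A relational engine would answer the same range query through a B-tree index; the point is that a time-series engine gets it almost for free from its write path.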
You are supposed to structure it. I know their internals at a high level, but I work with the internals of one OSS engine. I know how their read/write paths vary, and the data structures for storage, locking, request scheduling, checkpointing algorithms, the scheduler, etc.
The point is you have to structure your knowledge in a way the other person can understand. There is no value for any company if you can't share whatever knowledge and experience you have in a structured format.
Not everyone is familiar with internals of commonly used tools but the underlying patterns or concepts remain the same.
If you can't explain that, then you are lacking in communication skills, or maybe no one ever gave you feedback. It's just a matter of time.
Well said. I’m nearing 30 years industrial experience. The last interview I had was 6 engineers and the HR chick all in the room grilling me. Next time I’ll get up and leave.
"You can pass system design interviews even if you’ve never designed distributed systems before. If you have copied files between machines with drag-and-drop, you are halfway there. If you implemented clients or servers or have opened network connections, you’ve got this. This guide will teach you the most important 20% of information that will appear 80% of the time in system design interviews. By the end of this guide you won’t be an expert, but you’ll be well on your way to being a better engineer and a much better interview candidate."
A better engineer, no less! Wow, this must be some magical guide. I have to read it now. Also what does this say about our industry. "You want to be a surgeon but don't have the experience? Ever cut a tomato? You're halfway there. If you've ever made a sandwich, you've got this. Nurse!"
The reality is you can become an engineer with no degree, whilst you can't become a surgeon with no degree. This criticism of the writing dismisses the fact that a lot of people want to go for that promotion or higher tier but, due to imposter syndrome or lack of confidence, abstain. This guide is to help them overcome those fears and level up. If you have an issue with people leveling up, good luck to you.
That trivializes our profession and hard-earned experience simply because a degree is not a requirement. It is ridiculous to say drag-and-dropping something is halfway to being a systems designer. And the anecdote about "experience at facebook" was such nonsense. "I worked at facebook doing something other than distributed systems, and geez, I learned nothing about distributed systems". "QED!"
p.s. I missed this ad hom bit (no, not the issue here)
> If you have an issue with people leveling up, good luck to you.
Just because you work somewhere doesn’t make you an expert at what they do. My Home Depot cashier is incapable of installing the carpet I bought from them. There are levels. Everyone wants to pretend they have FB or Google scale when they are building only to find out they have Ruby on Rails scale after year 4. They don’t need a Stanford grad. They need a feature developer or someone who can break apart the monolith so they can scale beyond a single server or group of servers.
Our profession is trivial and full of smoke and mirrors, and barely deserving of profession. It's full of people who have literally no clue about what they are doing creating systems which screw over other people.
Just because you're all smoke and mirrors doesn't mean everyone else is.
I agree with you that there's lots of impostors that have no business being called an "engineer". But that hardly means it's not deserving of a profession.
I don't know how this is elsewhere but in Germany you cannot become an engineer without a degree. You can work somewhere in an engineering field but you are no engineer.
And BTW here computer scientists / programmers / developers are officially also not engineers.
> "You want to be a surgeon but don't have the experience?"
Well, what do surgeons have to do beforehand? Study. This is no different. Most of tech doesn't require fine coordination motor skills that need to be trained in practice, so this comparison is way off.
Also, like the excerpt you quoted says, "you won't be an expert" after reading the guide. A surgeon is an expert by any definition.
A senior interviewer's guide to the system design interview*.
I've speed-read the existing two sections, and while informative, I'd argue that maybe 10 or more of the 12 core design concepts are things that someone with a formal CS education should have picked up - you don't need to be a senior software engineer.
The barrier is knowing how these things can be pieced together to make a system from scratch, which is the more difficult part, as it is a much rarer occurrence in most people's experience. Even seniors may not have much experience setting up large-scale systems (depending on your definition of senior), so at the end of the day anyone who studied or memorized the material is good enough to pass - practical experience or not.
I'd much rather have a high level view of an existing or theoretical system, be provided with some issue that occurs, and be asked for ways to diagnose and remediate said issue. Forget the dance around setting the system up. This is similar to the practice of providing existing code in an interview, describing a bug with the code, and watching the interviewee debug and fix it - but with systems. It mirrors actual work more closely.
I remember once I was interviewing for a senior position where the ask was something like "design a general purpose system for a REST layer on top of an ORM."
I had just finished implementing an OSS solution that did exactly that, including some upstream changes we made to improve the system, so I walked through exactly what we did, challenges we faced, etc.
I walked out of the interview feeling as though the interviewers and I didn't speak the same language; they might've said the same. Not only did I not get the job, I never got another communication from the company.
To me, senior level interviewing, especially at smaller companies, is fraught. A full 1/3rd of the companies I've interviewed at would not have been a good "culture fit," and I'm a picky interviewer. I expect, like coding tests etc, these interviews are not aligned with the real travails of the position - growing talent, managing time, triaging, and ensuring stability/capability.
The interviewer usually can only rise to the level of themselves, especially when assessing skills.
System design is hard, the necessary skills can only be earned, and there are many avenues to bad system design masquerading as good that the people who implemented it just don't recognize.
It takes a combination of humility and a degree of the skill in question to recognize the interviewee is more proficient at the said skill, and many are lacking in either or both.
I’ve got an interview coming up for a senior role on a UX engineering team (design system, component libraries, etc). The interview is just the standard set of backend system design and leetcode problems.
It’s a pretty big red flag to me that the skills being evaluated are so tangential to the actual job. It doesn’t give me any confidence that the people I will be working with have the skillset I consider important, or that the team and I will approach problems in a compatible way.
I suspect your fears are well-placed. Whoever is hiring this team has no idea how to hire for those roles and that suggests that those roles might not be valued within the larger company. Though sometimes the hiring manager has just lucked out and hired a great team so far. Still, will that team be listened to? Will that team be respected?
Issue with system design is that our industry is so highly opinionated about things. You design something to the best of your abilities, it works (though may have its pros and cons) and then you go present it to a crowd of engineers (HN, say) ... and you get destroyed. There will always be people telling you that these were dumb choices and that that's obvious. And if you go and do what they say is the obviously better solution, then the exact same thing will happen with another crowd.
Our field is too immature to actually agree on things being good or bad.
> Interviewers want to engage you in a back-and-forth conversation about problem constraints and parameters, so avoid making assumptions about the prompt.
As someone who has conducted countless interviews at a FANG, I cannot stress this point enough. The ability to just talk about a problem makes a big difference between a mediocre interview and a great one. Besides, it's what makes an interview fun or lame.
Problem is: as candidate you don't know if "engaging in a back-and-forth conversation about problem constraints" is something the interviewers want. Some interviewers want it, but it's not always like that. And, no, usually discarding an entire company just because the interviewer on duty doesn't like "engaging" is BS.
It was some time ago and a limited sample, but at one point there was a Google recruiting presentation at my school.
When asked a reasonably simple question regarding expectations during interviews, the Google employees present each gave a drastically different answer (even after hearing their colleagues).
The group as a whole covered the whole spectrum of possible responses, which led me to believe that there was definitely a luck-of-the-draw factor in who you get.
Perhaps they are more adherent to a rubric in 2023 which leads to these study guide type posts. Large company bullshit has never been my jam though so I never applied to find out myself.
Well good thing you don't have to read their mind. Communicate. Ask them. Like the above poster said, you are there to have a conversation, not deliver a lecture.
They usually want it, or at least it's neutral. Just make sure you have enough time to actually solve the problem and don't ask for ridiculous hints, and nobody is going to fault you for talking.
Are these interviews turning into a "can you regurgitate some memorized information" formality?
As an interviewer: This is always a red flag to me, because being a successful software engineer is much more than memorizing common truisms. (I spend a lot of time cleaning up misinterpreted "truisms.")
As a candidate: If I'm expected to regurgitate truisms, it's a flag that trying to do things "right" will be opposed by people who don't understand how computers / information work; and a lot of friction will come from trying to make something work versus make something fit an inappropriate ideal.
I do believe this is probably useful for people interviewing, but on the other hand it reads like mostly complete bullshit and largely social signalling rather than measuring any sort of skills.
The fact that you can be coached to pass these interviews with zero practical experience designing systems (Even toy ones or hobby projects) shows that there is very little technical skill involved.
Someone who has designed a real production system is still likely to fail these interviews unless they understand the social nuances that interviewers are looking for in a system design round.
This is, unfortunately, painting an overly rosy view of the interviewer.
A lot of the time your interviewer is just going to be some senior person with an interview guide. They'll be asking you to design Netflix, without any experience with video or streaming themselves.
Rather than "trying crazy stuff", I've found that an important step is asking a few questions early in the interview to see if they understand relevant concepts. You can't take it for granted.
Yeah they want a canned answer, like they want some standard algorithm for leetcode. You need to be as standard as possible so they can check their boxes
Interviewing is a two way street. You can judge the company as much as they judge you. If you find yourself in front of such an interviewer then move on somewhere else.
I think it's funny how they say you don't need experience to pass one of the interviews. Last time I went in to one of these as an experienced engineer I probably said all sorts of things they didn't want to hear like there's no point building a scalable system until you're sure your product has traction. That and I kept trying to extract imaginary requirements. I still have no idea what they wanted to hear so I just name-dropped architecture paradigms. Didn't get the job.
> Last time I went in to one of these as an experienced engineer I probably said all sorts of things they didn't want to hear like there's no point building a scalable system until you're sure your product has traction.
So.. you argued with the interviewer about their question being 'invalid'. That's not going to be a winning strategy. If you have to insist on making comments like these, a better way to express it would be:
"Well, I know the context here is that we're designing a new, small-scale system. In cases like these, I think it's normally most helpful to get an MVP out-the-door, and not worry about scalability. However, in a situation where we _were_ concerned about scalability, I'd start by..."
> That and I kept trying to extract imaginary requirements.
This is more on the interviewer. If you ask for requirements, they should provide them. If they've done it a while, they might have a generic set, but they should at least be willing to make them up on the spot. In lieu of that, there's no rule that says you can't make them up on your own. E.g.:
You: "How many concurrent clients do we need to support?"
Them: "I dunno, just make it scalable."
You: "Ok, let's assume it's 10k and we want to make sure we can scale that horizontally in a roughly linear fashion up to 100k.."
I don't know why you assume I was rude about it. I phrased it all equivalently to how you said. A lot of my background was in tech services so I'm very well practiced in asking these kind of questions. They pretty much waved off all of them saying it wasn't relevant.
> They pretty much waved off all of them saying it wasn't relevant.
Well, if they are asking how to build a highly scalable system, then statements about how to build a system that isn't highly scalable would not be relevant.
Context is still relevant though. "Highly Scalable" means something different if you're working on core AWS infra vs an app to be used by 100k people at the same time for instance.
I'd expect the interviewer to engage and set some helpful boundaries (and the interviewee if they have the experience to talk about what changes between the two situations)
Well, the sad truth is that a lot of _interviewers_ actually suck at interviewing, so sometimes an interview devolves into a bewildering game of what this particular person wants to hear today based on their mood, especially with the open-ended questions. It is not always possible to suffer through this process politely and constructively.
If the candidate is exposing flaws in your assessment and making you think, that should be a positive signal, not a negative one. Unfortunately people have egos and generally don't take kindly to situations like that.
I read all 4 parts in the pre-released version on the interviewing.io discord. Here's my feedback:
Loved how much they go into the actual interview dynamics and phrases to say, whereas other resources wave away interacting with the interviewer as an implementation detail. Haven't seen that anywhere else on the web and it's super helpful. The framework is also much more fleshed out than, for instance, the HiredInTech system design guide.
However, the 12 tech ideas section was a little too dumbed down for me, though it might be helpful for someone with less experience. I also noticed a few typos (MangoDB, cache vs hash) and told them, and they said they'll fix it in the next version of the guide.
- The system design interview changes depending on the company. At Facebook the interviewer said one sentence to me in the whole interview, no feedback or anything, so be prepared for nasty ones. At Google they hate it if you use a particular product to solve your problem, and everything needs to be backed up by numbers; they'll ask you how many servers you'd need for your solution ("bill of materials").
- Expanding on the last comment, it's common in books and courses to see the basic math of number of users -> estimated usage -> requests per second and storage needs, but I've never seen anyone carry the estimate through to servers by computing the processing (CPU/RAM) needed.
- This guide, like almost all resources, takes on the most common case of web design: client-server request/response (plus queuing). There are other, completely different paradigms that show up less frequently and are harder to have experience with, like streaming and batch processing.
- As with all guides, given space/time limitations, some key concepts are hand-waved with a recipe-based approach.
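On the point about server estimates: carrying the usual users -> RPS math one step further to a machine count is just more arithmetic, as long as you state your throughput assumptions out loud. A sketch where every number is an assumption picked for illustration:

```javascript
// Back-of-envelope "bill of materials": extend the usual users -> RPS
// estimate to a server count. Every number here is an assumption.
const dailyActiveUsers = 10_000_000;
const requestsPerUserPerDay = 20;
const secondsPerDay = 86_400;

const avgRps = (dailyActiveUsers * requestsPerUserPerDay) / secondsPerDay;
const peakRps = avgRps * 3; // assume peak traffic is ~3x average

// Assume one core sustains ~200 req/s for this workload, and we
// provision to ~60% utilization to leave headroom for failover.
const rpsPerCore = 200;
const targetUtilization = 0.6;
const coresNeeded = peakRps / (rpsPerCore * targetUtilization);
const serversNeeded = Math.ceil(coresNeeded / 8); // assume 8 usable cores/box

console.log({ avgRps: Math.round(avgRps), peakRps: Math.round(peakRps), serversNeeded });
```

The absolute numbers matter less than showing the interviewer which assumptions (per-core throughput, peak-to-average ratio, utilization target) drive the result.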
I develop desktop and embedded applications, so I do different work than distributed systems engineers, but I wonder - do you often build systems from scratch and therefore need to have all this knowledge?
Because in every job I've worked in the past 10 years, I landed in an established project that was developed years ago, and my work was maintenance and adding new features. Most of these projects were too big and complicated for one person to know how they work in every detail - even the people who were there from the very beginning always said something like "after all these years I don't know how half of the things are implemented".
So, from my perspective, an interview question where someone asks me to design a complex system entirely on my own, in details, is just stupid, as I am pretty sure I will never do anything like that in my life. But maybe webdev is a different story?
I've only read up to Part Two so far but find the guide extremely helpful; it resonates with other resources (i.e. Grokking the System Design Interview) and with my own experiences of what I believe is the very broken interview process we face today.
Highly recommend. Interviewing.io is one of the best resources for those doing prep today and the team behind it is awesome.
I love the content that interviewing.io puts in this category on their YouTube channel. What was especially eye opening is seeing two senior engineers that normally conduct systems design interviews taking the driver's seat. https://youtu.be/Zi0pPkiFemE
I think there may be an error in part 2 of this guide where it talks about ACID.
To my understanding, the C (Consistency) in ACID is referring to the structure of the database. In other words, a SQL table can't be updated in such a way that violates the column definitions and constraints defined by the table.
In this guide, it seems to be talking about strong consistency vs eventual consistency when discussing ACID, which, to my understanding, is a different topic entirely, and refers to the timeliness of accurate reads.
Yes. The Consistency property means that at the end of every transaction, the resulting data state in the DB will be consistent with the schema definition, including any constraints, triggers, etc. In other words, no transaction will result in invalid data with respect to the schema definition.
This is similar to soundness in programming languages.
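A toy model makes the distinction concrete: the C in ACID says a transaction either commits to a state satisfying every schema constraint or rolls back entirely - it says nothing about when replicas see the write. A sketch (not a real engine, just the invariant):

```javascript
// Toy illustration of the C in ACID: a transaction commits only if the
// resulting state satisfies every constraint; otherwise it rolls back
// and leaves the prior state untouched. This is about data validity,
// not about replica read timeliness (strong vs eventual consistency).
function makeTable(constraints) {
  let rows = [];
  return {
    rows: () => rows.slice(),
    // Apply a batch of inserts as a single all-or-nothing transaction.
    transact(newRows) {
      const candidate = rows.concat(newRows);
      for (const check of constraints) {
        if (!candidate.every(check)) {
          return { committed: false }; // rollback: `rows` is unchanged
        }
      }
      rows = candidate;
      return { committed: true };
    },
  };
}

// A constraint analogous to SQL's `CHECK (balance >= 0)`.
const accounts = makeTable([row => row.balance >= 0]);
```

Here `accounts.transact([{ id: 1, balance: -5 }])` would be rejected, leaving the table in its prior, still-consistent state.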
As someone that is still in undergrad but will be entering the labor market in a few years, this entire process makes me incredibly anxious to think about. Reading what some people go through to find decent jobs has been a huge stressor throughout my undergrad career. I was lucky enough to secure a good internship this summer (which took months of searching and getting ghosted). I’m not excited to run the rat race again once I graduate—that’s for sure!
Luck plays a huge factor, but generally the crowd on HN are opinionated and bitter. That said, if you're really brilliant and way above your peers, the luck factor drops significantly. Anyway, your experience will likely vary. Don't worry too much.
A lot of system design guides do expound on the fundamental concepts of system design, such as scaling, design patterns, etc. But explaining how to actually approach the problem in front of the interviewer, and what they are and aren't looking for, is what makes this guide unique. Otherwise, by the time the candidate gets to the meat of the problem, the interviewer might have already lost interest. So great job, guys!
First off, I'm not disparaging the content, I'm sure its good and useful in interviewing at most companies.
...that said, this kind of thing annoys me. You shouldn't have to study for an interview. Either you meet the requirements and have the experience or you don't. Reading books and stuff like this is really a cheat by the person being interviewed. You may have studied enough to "pass the test", but you really don't have a deep, usable knowledge of the material and will likely forget much of what was studied.
When I do interviews I make sure the candidate is aware that studying will be a pointless endeavor, and if I can tell your answers come from one of the interview prep books I'll end the interview early. My interviews ask questions that will tell me if you REALLY know the subject. For example, if JavaScript is a requirement, I'll throw this at you:
  const a = [1,2,3,4,5,6,7,8,9];
  const b = a.filter(x=>1);
  if(!!1 && b.some(e=>e>6)){
    foo();
  }
and ask you to tell me what it does. It's totally weird code, and seeing it in real code would be highly unacceptable, BUT if you really know JavaScript you will be able to tell me what it does in less than 30 seconds. Immediately, if you're going for a senior position.
"I'm known for hiring the best engineers". BUT if you're really known for hiring the best engineers, please show us your accomplishments! You just told us you look for trivia which means you're great at hiring engineers who are good at language trivia.
That set has a pretty small overlap with "best" engineers. "Best" itself has a small overlap with "useful" engineers.
Your trivia knowledge is great to impress like minded people but irrelevant if the problem space doesn't depend on it.
PS: Someone could consider you a pretty junior engineer if you ship:
"if(!!1 && b.some(e=>e>6)){
foo();
}"
because you're increasing the risk of error in a codebase that will likely be maintained by people of varying skills. You write code to solve problems, and you write code so regular earth humans can grok it and manipulate it to solve problems.
Writing code with a high risk of introducing problems and misunderstanding does not signal senior in many domains.
This is not trivia. For better or for worse truthy evaluation is a fundamental part of the language which is what this code snippet appears to be primarily testing.
I don't see how you could consider the question they posed to be trivia for someone who knows JavaScript. Abuse of truthy evaluation is very rampant. All the JS codebases I've worked on are filled with `if (!myArray.length)` and the like. I would be very sad if anyone with a year of JS experience couldn't get this problem.
By trivia, I mean it can be thoroughly explained in a paragraph or two.
Separately, if it's the kind of thing that can be learned in less than a year of experience, it's not helpful for determining whether someone should be hired as a senior engineer.
You're saying "you shouldn't have to study for an interview because you either have the experience or you don't" and then you're equating being able to answer a trivia question with experience.
The irony is extremely rich. Being able to answer any particular question cannot measure if someone has the requisite experience. The only thing it tells you is that they can answer that question.
Your judgement of them based on your question is extremely subjective and doesn't tell you whether they "REALLY know the subject" or not. You are just testing your own biases which may yield good results, but call a spade a spade.
> When I do interviews I make sure the candidate is aware that studying will be a pointless endeavor and if I can tell your answers come from one of the interview prep books I'll end the interview early. My interviews ask questions that will tell me if you REALLY know the subject. For example if JavaScript is a requirement I'll throw this at you
How can you tell if someone had simply memorized a few pieces of JS trivia and is regurgitating it to answer your question? The code is not that complex or "totally weird".
I've never written a line of Javascript in my life but that test looks trivial.
a.filter(x=>1)
guessing that arrow looking thing is syntax sugar for a lambda that always returns a 1, then it's implicitly cast to boolean true in the filter, so nothing gets filtered out and b ends up just the same as a.
!!1
not not 1 -> true
b.some(e=>e>6)
at least one element of b greater than 6, clearly true
so both arms of the && are true, so foo() gets called.
I don't think this is a very good test if someone like me can pass it just by making educated guesses.
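For anyone who wants to check the reasoning rather than trust it, the snippet runs as described once `foo` is instrumented:

```javascript
// The interview snippet, with foo() instrumented so we can observe
// whether it actually runs.
let fooCalled = false;
function foo() { fooCalled = true; }

const a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
// filter's callback returns 1 (truthy) for every element, so nothing
// is filtered out and b is a shallow copy of a.
const b = a.filter(x => 1);
// !!1 coerces to true, and b contains 7, 8, 9, so some(e => e > 6)
// is true; both sides of && hold and foo() is called.
if (!!1 && b.some(e => e > 6)) {
  foo();
}
```

Note that `b` is a new array with the same elements, not the same array object - one of the few details the "educated guess" walkthrough above could have gotten wrong.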
It's rational for most candidates to study interview prep books because that is what the majority of companies want. Not saying I like it, but why would you want to filter out candidates who are just using basic common sense?
I think it's a combination of a lot of interviewers priding themselves when they "stump the chump" and a possibly subconscious optimization to find folks who won't upset the status quo.
I'm very excited to see a guide that focuses on the process of the interview itself related to system design. There are plenty of "how to design twitter" resources out there, but for many of us, we really need to better understand the system design interview process itself. Thanks so much for putting this together, can't wait to see parts 3 & 4!
I love a few things about this guide. Firstly, that it acknowledges the fact that interviewing for a thing is different than actually doing the thing. Secondly, that it takes the position that system design interviews are (or should be) more about _how_ you approach a problem than about knowing the details of distributed systems.
This generally seems like good advice, but it does veer occasionally towards suggesting ways to appear good at system design when actually the point of a system design interview is to show the interviewer how good at system design you actually are.
This is an important distinction because it is not necessarily the case that the only way to pass a system design interview is to come across as being brilliant at it. I will happily take on a developer who shows that they have some good system design instincts but lacks a mature approach to engineering decisionmaking (assuming their other interviews show that they seem teachable and they have the other prerequisite skills). Not every developer is going to be a lead system designer. The interview is to help us figure out what kind of developer you are.
The repetition of "this is different from code" strikes me as odd.
The way I see it, it's identical to code, it just involves more moving parts.
> If two experts designed the same system, you would see two different designs, beautiful and aesthetic in their own way and both as “correct” as the other (and with the accompanying justifications to support them).
That is nonsense. Systems can be optimally designed too, just like code. Even if you have two systems that both meet the requirements for features, performance, reliability, and cost, one is usually objectively better by surpassing the requirements more than the other.
Would you like solution A that is identical to solution B in every way, but can be run with half the hardware, or do you want solution B? Of course you would want solution A.
Interviewing (especially system design) is a skill you have to learn these days. It's also a performance. You can complain about how flawed it is all day, and believe me I do (I'm a staff FAANG swe), but you gotta play the game. Knowing how to methodically break down the problem into smaller components and be able to confidently present the information in a manner that's easy to follow/collaborate is actually a real on-the-job skill of any experienced engineer. You have to be able to demonstrate this in an interview. The people at interviewing.io are legit and this guide is a great prep tool
I really like that the guide is still being worked on and has left some breadcrumbs for readers about what to expect in the future. I am prepping for interviews and really like how this guide is structured!
Since we’re talking systems here and reasoning from some provided constraints:
> We began by listening to 30+ hours of system design interviews and system design lessons. We then performed data analysis to identify 50+ of our highest rated interviewers.
At an average of 30 minutes per interview, that's about 60 interviews in the data set this is based on. That seems like a small sample from which to draw general advice. It doesn't mean the advice is bad, but it likely biases heavily toward a subset of system and interview styles.
Can we stop saying that the 'over engineering interview' is a 'system design interview'?
Most real life problems don't get solved by adding queues and moving to an event driven architecture with thousands of micro-services.
What I'd expect from a senior engineer is a deep technical understanding of how a queue works, for example: epoll, poll, I/O multiplexing, non-blocking I/O, readiness notification.
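To make that concrete, here is a minimal sketch of the readiness model those terms refer to, using Python's standard `selectors` module (which picks epoll on Linux and kqueue on BSD/macOS). The socket pair and message are illustrative only, not from any particular system:

```python
import selectors
import socket

# A connected socket pair stands in for a producer/consumer pipe.
reader, writer = socket.socketpair()
reader.setblocking(False)
writer.setblocking(False)

# Register the read end for readiness notification.
sel = selectors.DefaultSelector()
sel.register(reader, selectors.EVENT_READ)

writer.sendall(b"job-1")  # "enqueue" a message

# The core of an event loop: block until a registered fd is readable,
# then drain it without ever blocking on the read itself.
data = b""
for key, events in sel.select(timeout=1.0):
    data = key.fileobj.recv(4096)

sel.unregister(reader)
reader.close()
writer.close()
```

The point of the readiness pattern is that one thread can wait on many descriptors at once and only touch the ones the kernel reports as ready, which is the mechanism underneath most queue and server implementations.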
best system design interview advice i got (and put to successful use in loops with goog and fb) was to not try to give the "right" answer -- it's like a professor grading papers on a topic they've taught countless times, you have more of an opportunity to fail than you do to succeed. instead, be decisive and give answers or approximations for any questions that arise, and bring your unique perspective to the problem -- in my case i have a background in mobile app development, so i spent a lot of time speaking to those use-cases.
if you try to cover every base and fret about everything being optimal, you're going to barely have time to get started.
Performative interviews are bullshit. I’ve been working as an Engineer for 20 years - successfully. But you want me to “whiteboard” like a trained monkey?
What are you suggesting? That these companies have poor system design? That they don't need to hire smart technically minded engineers because their problems aren't technical?