Interviews are really a dumb game these days, so if you want to game them you can go with a statistical approach:
* Practice questions by company on LeetCode: sort by frequency over the last 6 months and work down the list, doing maybe 75-100. The list updates once a week.
* Search for the company on the LeetCode forums and sort by most recent. If a question is not on LC yet it will likely get posted there, so you can get really fresh intel.
I've read of people doing this and getting like 6/6 questions they've seen before.
No, if you really want to game it you sign up for membership on Chinese forums where people post the questions word for word minutes after completing the interview. That or work exclusively with private recruiters that tell you the questions verbatim because they have a vested interest in you passing.
Interview questions don't rotate that frequently, especially for smaller companies or more specialized roles, and a $60 membership for a month will buy you internal referrals and potentially land you hundreds of thousands of dollars of value in a new position.
As an interviewer, this is so incredibly frustrating - we don't change our questions often and because of that the questions and answers are all over these forums. With that said, it is incredibly easy to spot someone cheating - they often write the most perfect optimal solution from start to finish, helper functions first, often with the same function names as the forums themselves. The trick I've learned is to ask "why" - "why did you decide to use a linked list for this problem?" - the real cheaters often freeze up and can't give an explanation at all. It's rare, but still too common.
If you've seen the question/answer before just say so! I will totally appreciate the honesty and it goes a long way.
> If you've seen the question/answer before just say so! I will totally appreciate the honesty and it goes a long way.
Why? You're testing their ability to produce the right answer to a given problem - not their problem solving ability. To that end it shouldn't matter if they've seen the problem or not.
I always find it hilarious when recruiters say that "getting the optimal solution isn't everything." I've failed numerous interview rounds where due to time constraints or implementation details I'm not able to completely code the optimal solution, but I am able to talk/walk through each step of it in pseudocode with the interviewer. By your own criteria, being able to clearly explain the solution and demonstrate an understanding of the different tradeoffs should count for much more than just being able to copy/paste the solution from memory, but I've never advanced in any round that finished without a "working" piece of code.
Honestly, the one thing I appreciated about FB/Meta's recruiters is that they were always honest about the process and what was expected - 2-3 Leetcode mediums/hards in 45 minutes and they only care about optimal solutions. I much prefer that to disingenuous sentiments of "getting the right answer is important, but we also want to see your thought process and how you might work with another engineer on the team."
> "getting the right answer is important, but we also want to see your thought process and how you might work with another engineer on the team."
That's how it works in all the companies I've hired in.
It doesn't matter if you don't get to the end of the problem, I just need to see you can think and that you know how to code.
Do you think your daily job will require more than that?
And believe me, this is enough to filter out plenty of bad apples.
Poor performance in my experience was never about not being able to solve a technical problem, it was always personal issues / not having motivation / hating the environment.
> the one thing I appreciated about FB/Meta's recruiters is that they were always honest about the process and what was expected - 2-3 Leetcode mediums/hards in 45 minutes and they only care about optimal solutions
A counter data point: I passed Google's interview a few months ago. In one round, I was asked to solve a certain kind of bipartite graph matching problem; not being a fresh new grad with ACM medals, I obviously couldn't solve it in polynomial time and just implemented a brute-force solution in the end. In another round, I came up with a solution the interviewer said they'd never seen before, although it could be slightly slower than the "optimal" solution he had in mind, depending on the input size.
As an interviewer at my last company, I always made sure the solutions were well motivated, and I have rejected candidates who could write down the perfect solution but couldn't explain why. If a candidate asked me for the specific runtime I was looking for, I would probably just reply "don't worry about it, just try your best" or "let's focus on correctness first and worry about efficiency and other trade-offs later".
Testing for problem solving ability is hard, but that's still one of the key signals we wish to extract whenever possible.
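(For anyone who wants to picture the brute-force fallback: it's roughly this kind of backtracking - a minimal sketch in Python, with made-up names and a toy input, not the actual interview problem:)

```python
def max_matching(adj, n_left, n_right):
    """Brute-force maximum bipartite matching: try every assignment.

    adj[u] lists the right-side vertices that left vertex u may match.
    Exponential time - the 'just enumerate' fallback, not the polynomial
    augmenting-path solution an interviewer might have in mind.
    """
    used = [False] * n_right

    def best_from(u):
        if u == n_left:
            return 0
        best = best_from(u + 1)          # option 1: leave u unmatched
        for v in adj[u]:                 # option 2: match u to a free neighbor
            if not used[v]:
                used[v] = True
                best = max(best, 1 + best_from(u + 1))
                used[v] = False
        return best

    return best_from(0)

print(max_matching({0: [0, 1], 1: [0], 2: [1, 2]}, 3, 3))  # -> 3
```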
> Why? You're testing their ability to produce the right answer to a given problem - not their problem solving ability. To that end it shouldn't matter if they've seen the problem or not.
Pretty sure most people want to test problem solving ability, and hopefully your problem solving ability solves the problem correctly. If your method of solving the problem is to find the answer online and repeat it... that may not be how the company wants you to solve their problems.
Really? In my position as a senior member on my team, one of the biggest mentoring costs is just teaching junior members how to search and find answers for themselves.
Yes, day to day I'm not copy and pasting huge blocks of code from Stack Overflow, but when I need an answer and I don't immediately know it my first move is always to search internally or externally for others who may have already shared it.
Why is being able to effectively search for answers not considered good problem solving?
> Why is being able to effectively search for answers not considered good problem solving?
Memorizing interview questions is decidedly not "effectively searching for answers".
Effectively searching for answers requires breaking the problem down into separate pieces that you can actually search for. This is one of the skills that can actually be demonstrated during the interview. And then showing that you can also come up with the solution (or be guided towards it in discussion with the interviewer) is the natural way to round it out, instead of "ok, now google for this sub-problem while I'm watching you".
Go ask your manager if you should copy and paste something you found on a Chinese forum as 100% of the output you generate. Nothing original, no actual thought of your own, JUST copy-paste.
Searching and finding answers for yourself is not the same as blindly copy/pasting, which is basically what the scripted forum answers are.
Why not just throw out the questions? They seem useless, easy to cheat, and, as commenters are suggesting, there's a "pay to win" component. What are you really getting from these questions? Why is software engineering so different from other industries when it comes to interviews? Is there really a positive to this type of interviewing? Why not do what everyone else does: look at previous work (of which software has a lot more, considering GitHub), ask some questions relevant to the job, and see if the person is a personality match. Most jobs don't need the best of the best.
I'm not convinced the coding interviews have improved upon the standard interview style in any meaningful way.
> If you've seen the question/answer before just say so!
I did that once at a FAANG interview; instead of honesty credits, I felt like the interviewer just got annoyed at having to come up with another question.
I recently interviewed with a startup. They had "outsourced" the first round to a 3rd-party firm that specializes in conducting tech interviews (mostly algorithm rounds).
The interviewer posted an LC question and asked me to read it. Since I was already logged into my LC account, he first asked me to show whether I had solved it. I said I had. He then posted 9-10 LC questions one by one, all of which I had solved (I was doing LC regularly then). In the end he got tired and posted a question from another website (HackerEarth) which I hadn't solved. We ended up spending ~5 minutes just going through different LC questions, and he was disappointed that I had solved all of them.
I have also faced situations where I recognized an LC question I had solved before but couldn't solve it in an interview setting, mostly because of the pressure.
> If you've seen the question/answer before just say so!
In my experience failing to answer the alternative question you give me has (on average) a much more negative impact than pretending I don’t know your question (especially when I can explain it).
Yep, I did it a couple of times and I didn't get any credit, just got harder questions that the interviewer did not practice in a long time. You should only disclose this if you're getting the same question in the same interview round from the same company, or maybe if you're back for another round a couple of years after failing.
As an interviewer though, there were a few instances where I just told people "let's do this problem anyway, don't worry" and the candidates didn't always do a good job.
> As an interviewer though, there were a few instances where I just told people "let's do this problem anyway, don't worry" and the candidates didn't always do a good job.
This is what I've done as an interviewer. But then, my questions actually lend themselves to that. Something that the sample LeetCode questions just don't.
In my experience (as an interviewer), LeetCode-style questions are "gotcha" tests: either you know "the trick" or you don't, and there's no real constructive value.
In contrast, I prefer exercises like: let's write Tic-Tac-Toe, Snake, or a Twitter clone (no DB, everything in memory) in 30/60 minutes, together, on your computer, with your IDE, your software, Google, and your language of choice. I get to do a quick coding session with the person and see their weaknesses and strengths, and even if they have done it before, looking at their real coding style is super useful.
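(To make it concrete: one of the small, checkable pieces that falls out of the Tic-Tac-Toe exercise is a win checker. A minimal Python sketch, purely illustrative:)

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None.

    board is a 3x3 list of lists containing 'X', 'O', or None.
    """
    lines = (
        board                                         # rows
        + [list(col) for col in zip(*board)]          # columns
        + [[board[i][i] for i in range(3)],           # main diagonal
           [board[i][2 - i] for i in range(3)]]       # anti-diagonal
    )
    for line in lines:
        if line[0] is not None and line.count(line[0]) == 3:
            return line[0]
    return None
```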
> there were a few instances where I just told people "let's do this problem anyway, don't worry" and the candidates didn't always do a good job
Which should be a clear indicator that interviews aren't only judging problem solving skills, but also the candidate's ability to withstand the pressure of being watched and judged while they solve complicated problems.
For the interviewer, it's just another day and sure, they "want the candidate to succeed" and all that, but for the candidate, their future and livelihood are on the line and that's tough for some of us to just ignore while we focus on the not-easy school quiz problem of merging overlapping intervals or whatever.
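(For reference, that interval problem has a standard sort-and-sweep answer; whether you can produce it calmly under observation is another matter. A minimal Python sketch:)

```python
def merge_intervals(intervals):
    """Merge overlapping intervals: [[1,3],[2,6],[8,10]] -> [[1,6],[8,10]]."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # overlaps: extend last
        else:
            merged.append([start, end])              # no overlap: start new
    return merged
```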
Yeah and often the interviewer won’t be prepared with a backup question so you waste time for them to find one. It sucks that it puts the interviewee in a worse position for being honest.
An advantage of having standard interview questions. Plus, the questions we ask lend themselves to extensions (also predefined), so you can still see how candidates reason about and solve those. And given that our interviewers are familiar with them, we can calibrate across many candidates. We also have a pool of people who maintain the standard interview questions.
Yeah I worked for a company that did all of those things as well, it didn’t change the fact that the interviewer was always less prepared with the second question, and that there’s always wasted time switching between questions. The true nightmare was when a candidate knew two questions in a row, or when they didn’t realize they knew a question until 5-10 minutes in.
In a perfect world every interviewer would calibrate for those minutes lost, but that calibration is fuzzy at best unless it’s built in to your scoring rubric.
> If you've seen the question/answer before just say so!
In reality no one will do this, though. There is way too much of an incentive on the candidate's side to lie and work through it as if they haven't seen it before.
> I will totally appreciate the honesty and it goes a long way.
Not far enough to get a job in most cases, at which point, after the rejection, you and I are likely done and won't talk again. On the remote chance we cross paths again, the honesty might be worth another interview (if you're still in that position), but not much more. Person-to-person, I know the honesty is appreciated, but here's why candidates are unlikely to be honest, and why that's not a moral/ethical failing:
You're an agent of the company with a power dynamic over candidates when interviewing. You can't be honest with us via feedback, even if you wanted to, because there's potential legal problems for the company if you are. Many companies don't give feedback as policy for these reasons. So, knowing that I know that you can't be completely honest with me, I only hurt myself by being honest by telling you I've seen the problem before.
If I already knew the answer but can't answer 'why?', then that's on me - I likely don't fully understand the solution in that case. Reject those people, sure. But they have no obligation to be honest with you. Similarly, if I already knew the answer and disclosed that fact, I might get a tougher question. So there is a real risk that I hurt my chances by being honest.
The risk of getting a tougher question is important. For it to be really fair, you'd have to grade these questions and determine if the next question is of similar difficulty. But your interview and problem grading process (if you have one) is not likely to be disclosed to me. The honesty would be appreciated, but we both know that interviewers aren't going to share those details chiefly because it defeats the purpose of the test, but also it'd end up on a forum if you're a well-known company leading to more cheating.
If you were giving a well-known test developed by professionals (GRE, GMAT, LSAT, SAT), there are significant resources to show me that those are fair tests developed by academics and other professionals. I would trust those testing admins to substitute questions of a similar difficulty. Were that the situation with your interview, the risk of getting a tougher question by being honest is significantly diminished because I know you have a bank of questions with accurate difficulties attached.
---
I'm not arguing that you should change your process. A candidate unable to answer 'why?' is a perfectly good reason to say the candidate failed the question and maybe the interview. I'm arguing that this really shouldn't be considered cheating or a moral/ethical failure.
If you had honest experience with the question at hand, it seems a little weird to go into an interview and basically say "I already know I can answer that one. Give me a new one."
Google Translate is sufficient. They’ll do it pretty carefully, complete with “pretend you get stuck at this specific point, and if you get asked why to use a hashmap, act baffled for a moment, and then say X”.
It’s ridiculously obvious when people have seen the question before.
The way we do it is like this: we have like 3 or 4 different small variations on each question. Such that the solution is measurably different, in quite telling ways, but that the given problem looks almost identical.
In one specific case the given is identical, but there are 3 variations to the question based on how the candidate asks questions about it (if they don’t ask questions the question is not possible to solve as we don’t give all of the information you need.)
We started doing it this way precisely because we kept running into people who would have 3 nearly perfect interviews and one “hard fail”, and we eventually realized it was because they were so good at faking that they’d seen it before but if they hadn’t seen it before they bombed it hard.
So now that we have the “variations”, at least once a month someone will “hard fail” the interview because they’re obviously cheating and will quite literally give the right answer to the wrong question, just rote memorizing it.
Your candidates show that after learning how to solve a problem, they can demonstrate they're able to solve it.
Have you considered just hiring candidates and then training them, or expecting them to learn approaches that are new to them?
Right now, you're pretending that your company needs random puzzles solved, and they're pretending that they're able to solve random puzzles without looking them up in an algorithms book.
What's the point of this whole theatre?
I get that your ego is enjoying that, but is that providing your company really any value?
Along with what the sibling said, I'm not interested in tricks and puzzles, but I am interested in how people take in and handle new information. I'm not going to pretend like my job is bleeding edge or remotely novel tech. I do CRUD with SQL.
It's impossible for anyone to be an expert in every application that my team handles. The key for us is that we try to keep our applications relatively simple with how data moves from point to point. Orienting yourself with new environments and applications significantly increases productivity here. It's always good to have people who can recognize and apply logic to patterns, but knowing how to ask questions is important. It isn't about the "gotchas". It's about what happens after the person is stuck. We try to make sure our applicants can make some assumption or ask clarifying questions about ambiguous portions.
Being able to implement a solution after having already been shown how to do it is sufficient for many roles. But not all. Some roles require that you are also able to figure out solutions to problems you have never seen before.
If your company requires only the former that's fine. But if you also require the latter, that's fine too and it's ok to test for it in your interviews.
If a company generally requires candidates to be overqualified for their intended role, that's a bit dumb. But I imagine that such a problem would eventually be fixed by the free market (supply of workers at various qualification levels vs. demand for said qualifications).
> Your candidates show that after learning how to solve a problem
The way I read it, they’ve shown they can learn the solution to a problem.
It’s like asking “what’s 43 × 57?”, getting “2451” as a reply and, from there, assuming they’ll be able to calculate “42 × 58” or “41 × 59” too. If they memorized just “43 × 57 = 2451”, that conclusion may be (and likely is; most people who know how to multiply won’t memorize such products) incorrect.
"Your candidates show that after learning how to solve a problem, they can demonstrate they're able to solve it."
The parent of your post is talking about a situation where they've demonstrated they've memorized a solution to a particular problem, not that they can solve it. And that it was the wrong solution, which they didn't notice. It's that last bit in particular, combined with not being able to adapt or create a solution for the actual problem, that is the hard fail.
I can't imagine my personal interview questions & style are common enough to have shown up on any of these sites, but I have personally witnessed two people (out of a few dozen) who knew only and exactly what was on their school curriculum, but were completely incapable of stepping outside of it. I come at interviews from the point of view that I'm trying to discover what they can do, not what they can't, so when I say that, bear in mind that I didn't hand them a problem and smugly watch them fail for 30 minutes, I actively tried to formulate a problem they could solve and failed. I've also witnessed someone persistently applying a book solution that was the wrong solution to the problem. Perfect depth-first search, but it should have been breadth-first, they couldn't seem to understand my explanation of that, and they shouldn't have put it in a tree in the first place. (It would have worked if the correct search was applied but it was massive overkill. It was something like finding a substring in a string... yes, you can represent that as a tree problem, but you're only making things worse.) They nailed the definition of graph and basically just wrote out the depth-first algorithm as quickly as they could write... but it was wrong, and the moment they stepped off the path they were completely lost.
I also don't do brainteasers like this, I focus a lot more on very practical things, so we're talking more like "failing to write code that can take the string 'abcd,efg' and split it on the comma without hardcoding sizes, either handwritten or by calling library code". I really want to start an interview on some kind of success but every once in a while a candidate gets through that I simply can't find the place to start building successes on at all.
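(For concreteness, the kind of answer I'm happy with - the library call or a handwritten loop, either is fine. Sketched in Python since the language doesn't matter:)

```python
def split_on_comma(s):
    """Split a string on commas without hardcoding any sizes."""
    parts, current = [], []
    for ch in s:
        if ch == ',':
            parts.append(''.join(current))
            current = []
        else:
            current.append(ch)
    parts.append(''.join(current))
    return parts

print(split_on_comma('abcd,efg'))  # ['abcd', 'efg']
# or simply: 'abcd,efg'.split(',')
```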
(You have to remember the "evaporative cooling" effect when reading interview stories. Of the good candidates that even go through an interview process, they do one round of interviews and get snapped up immediately. People who have good-looking resumes but, alas, graduated with insufficient actual skills interview over and over again. The end result is that the average interviewee, anywhere, is not very good. One of the preceding interviews I'm referring to emotionally wrecked my day; I felt that bad for someone who had clearly put in immense amounts of (all the wrong kinds of) work but whom I couldn't even remotely hire. But there was nothing I could do about it.)
I should also mention I run a style of interview where if I ask you a question and you blast it out of the park in a few seconds, great, you (metaphorically) get full points and then I move on to the next harder variant on it. And I can always make up a harder variant of something we can talk about for an hour or two. If I can't find something you can't blast out of the park in an hour or two, well, unless you're completely insufferable or your salary expectations are something I can't meet, expect the offer shortly. But what you'll be "blasting out of the park" will be something like a full round-trip delivery of a service with code-as-infrastructure and a database and user logins and a solid security design and so on and so on, not solutions to Project Euler problems.
Having been on both sides of interviews, but fortunate enough to not have to do leetcode interviews, I have a genuine question for those that do.
Why do them?
Are you really facing those problems frequently enough at FAANG to need to know them? Is it uppity engineers? Gatekeeping? Or are you just getting so many applicants that you have to filter somehow, and leetcode interviewing has some nice properties (easy to administer remotely, can be done by engineers, strong pass/fail criteria)?
Genuine interest. Ignore the negative subtext, that's just me on this subject.
It's probably partially gatekeeping. "I had to invent a unique sorting algo in this interview so you will too."
But also, it's a good measure of someone's ability to take an abstract problem and solve it; an LC problem has lots of small decision points that demand critical thinking. Like you said, we need a filter, and it's really easy to use LC as one.
That said, I've worked at FAANG with co-workers who had trouble using iterators or properly assessing complex Boolean logic (and I'm not talking about needing de Morgan), so sometimes LC skills are needed on the job. Getting a signal that "this person can't write loops" means "we don't trust this person not to write an infinite loop", however rarely that day comes.
There are enough programmers who want FAANG jobs, it's easy enough to apply, and the pay is high enough that you can afford to gatekeep on intro-to-Java-level data structures and algos. Maybe leetcode-hard is unnecessary, but easy should be doable.
Because if you otherwise like the candidate you can help them along and if you don't like them you can just watch them sweat, but then claim your interview process has some level of technical rigor.
I really don't get shareholders at big tech companies.
They could hire a few teams of talented engineers and trove of cheap developers and forget all of this.
Almost nobody needs leetcoding engineers; 0.1% of their tech is hard, and the rest is all forms and APIs, as is most of the industry.
The lack of self-awareness shown by this interviewer is staggering. You're right, they seem to relish wasting their (paid) and applicants' (unpaid) time. I won't hold my tongue. The people that do this are total assholes, if not downright sociopaths.
Why do you call it cheating? Did the candidate actively copy a solution by deception or were they just very well prepared - to the point that they quickly recognised and solved the problem which they had seen before?
Perhaps I should throw out my CLRS and Skiena and invent everything in those books from scratch?
It's an arms race because people like you have turned it into one. You're not solving Nobel-prize-winning problems, so stop expecting people to magically invent novel algorithms on the spot for things they've never seen before.
As someone who has studied and passed before in this manner, and is now an interviewer, I have a simple solution that other companies should follow:
for at least one round of interviewing, let me (the interviewer) use my own custom question, where the goal is not so much to solve it but rather to reason out loud, collaboratively, about many different aspects of the question.
I like to use 3D graphics as a domain that candidates most likely haven't seen before, but that sufficiently motivated/smart ones can hold their own in. If someone doesn't quickly and intuitively grasp that a shape is a collection of faces, and a face is a collection of points, and a point is a collection of vertices, I'm not sure they have what I am personally looking for (even though we don't do any 3D graphics in our project).
There's a certain irony to exploiting an ethically questionable method to obtain a job and, once you've obtained that position, using your authority as a gatekeeper to attempt to prevent others from entering the same way.
I can't speak for them, but I would assume that jsiaajdsdaa still considered themselves qualified for their positions - even if they employed ethically questionable methods for passing the interviews. In the end isn't that all that matters given that's what these interviews are supposedly trying to measure?
What part of jsiaajdsdaa's process do you think is more efficient? To me it sounds less objective ("the goal is not so much to solve it") which seems like it would make the process less efficient when assessing candidates.
I think as a general rule the idea that you give someone something that they are unlikely to solve because they are not completely familiar with the problem space does measure some things well - specifically, how one performs with something new and unknown (a critical programming skill) and their ability to abstractly reason as opposed to having memorized some information.
Although it's not that useful for other things related to the job.
That said I was talking about your comment that it was somewhat unethical; they didn't so much think that they gamed the system but that the system was so inefficient at doing what it should do that someone had to fit themselves to the system to get in.
Or tyrants gatekeeping future tyrants, or mature bears killing/eating younger bears to make their own life easier in the future (food and access to sex).
This is the first time I’ve come across this definition of a point. Geometrically, a point is defined as a zero-dimensional object (or something similar), if I recall correctly.
Besides, I don’t see how it’s intuitive at all! It isn’t to me at least.
The larger point being: if you pick such a random domain without calibration, you will run into arguments/discussions like this during an interview. I'm not sure what data point could be derived from such a discussion when you are looking for someone who can write decent, maintainable code.
I must say that I’m terrible with geometry/graphics which hasn’t stopped me from creating value through software development in my domain, online payments.
If your intention is to gather signal about collaboration, then I suggest picking something you are likely to collaborate on day to day - say, code review or architecture review. You could discuss why such and such an architectural pattern is useful, under what conditions, what the pitfalls to watch out for are, etc.
A vertex is a point. So a point being a collection of vertices doesn’t make sense. A point in the sense you mean is a collection of real numbers equal to the number of dimensions of the space it’s in.
> A point in the sense you mean is a collection of real numbers equal to the number of dimensions of the space it’s in.
Not necessarily. In 3D graphics, it is common to represent points with homogeneous coordinates, where points in N-dimensional space are represented by N+1 real numbers. Using 4x4 matrices [0] to describe affine transformations of 3D points is very convenient.
(Agreed with your overall point though. Just goes to show how different some fundamental perceptions/definitions can be.)
> The extra real isn’t really part of the definition of the point in space though
It actually is. It's generally assumed to be equal to one, but it need not be.
> isn’t necessary to store to apply a 4x4 matrix
...if you assume it is equal to one, yes.
However, actually representing the fourth component is both mathematically sound and occasionally useful. For example, the midpoint of two affine points, such as (1, 2, 3, 1) and (3, 6, 9, 1) is actually just their sum: (4, 8, 12, 2), which represents the same 3D point as (2, 4, 6, 1). The fourth component can also be zero, in which case you describe a point at infinity in the given direction.
But yes, if you only use homogeneous coordinates for chaining 3D transformations, storing the extra component is pointless.
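(A tiny sketch of that midpoint trick, in Python, since it tends to surprise people; `normalize` is just an illustrative helper name:)

```python
def normalize(p):
    """Convert a homogeneous point (x, y, z, w), w != 0, to affine form."""
    x, y, z, w = p
    return (x / w, y / w, z / w, 1.0)

a = (1, 2, 3, 1)
b = (3, 6, 9, 1)
mid = tuple(ai + bi for ai, bi in zip(a, b))  # (4, 8, 12, 2)
print(normalize(mid))  # (2.0, 4.0, 6.0, 1.0) - the same 3D point
```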
I don't understand how a point is a collection of vertices. Are you talking about X, Y, and Z coordinates?
This is a good example of how ambiguity kills an interview and reduces it to quickly figuring out what the interviewer is talking about so I might have a decent chance of solving the problem with the time remaining.
My experience with interviewing at Google, Facebook, and Amazon can be reduced to "What the hell are you talking about?"
> If someone doesn't quickly and intuitively grasp that a shape is a collection of faces, ..., I'm not sure that they have what I am personally looking for
I was doing the same to weed out bad candidates - asking them something they should know, something logical and basic - but I got bad feedback once and was asked to instead focus my questions on the strong points described in the CV. I mean, for the practical part, the candidate wasn't able to count the unique words in a text file in 30 minutes. I thought opening files, reading strings, and splitting were so basic that anyone should know them.
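(For reference, the kind of thing I was hoping to see, give or take - a minimal Python version, assuming whitespace-separated, case-sensitive words and a hypothetical input.txt:)

```python
with open('input.txt') as f:
    unique_words = set(f.read().split())
print(len(unique_words))
```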
I program in C++ daily and wouldn't know the currently accepted way to read lines from a text file in it off the top of my head. It's simply not something I ever have to do. A good candidate should still manage to figure it out in 30 minutes, but your programming experience is most likely a lot less universal than you think.
They didn't specify needing to read the file line-by-line. You could read the whole file at once. There might not even be any new line characters in the file. You invented a requirement.
They specified "strings", which I interpreted as lines in a text file. But it doesn't actually matter, because I could make fundamentally the same comment no matter how words (or "strings") are separated.
Are you very knowledgeable in 3D graphics yourself? Some of the replies here suggest you may have superficial knowledge (which is what I would have). If you choose an example domain where you have some knowledge but no deep knowledge, what happens if your interviewee happens to have significantly deeper knowledge than you? I worry that person might end up sounding overly technical, or even like they're BS'ing their way through the interview.
There is also another way of creating 3D graphics that does not involve faces and vertices, so developers with that background would be at a loss (or at an advantage, depending on your viewpoint).
This is exactly the type of question that is worst for interviews. It's a completely uncalibrated, completely subjective, esoteric type of question where you can't say exactly why you liked a candidate or why you didn't. There's no data underneath it except "I liked how the conversation went."
It completely gives an advantage to candidates who know 3D and completely gives a disadvantage to candidates that know nothing about it. Even worse, they are relying on YOU to explain it to them. Who's to say you are sufficiently qualified to teach people the basics of 3D graphics enough such that they can answer your questions? Have you been calibrated or judged on your ability to teach even the basics of 3D graphics? Or are you assuming that you're good enough?
It's completely inappropriate and relying completely on you to determine a candidate's qualifications based on nothing except your feelings. It's a horrible question and I seriously hope this is not entertained at all at your company.
You have some good points, but can you please make your good points without crossing into personal attack? I'm sure you didn't mean to, and I'm sure you have good reason to feel strongly, but what you posted here is already aggressive and a step away from what the HN guidelines call for.
I do not believe the hard data you are looking for really exists.
Interviewing is assessing what value an individual will bring to a project. Ideally we want this assessment to not depend on the interviewer, and to be a one-dimensional real number so it's easy to make comparisons.
But the reality is:
- future performance depends on the team (and more broadly on everything else in the business), which varies unpredictably and is usually not part of the evaluation anyway.
- future performance also depends on how the project itself is going to evolve, which is hard enough to evaluate in itself (not just the timeline but oftentimes the involved technologies)
- assessing the value of anything is its own can of worms (it's intrinsically subjective; you can't measure "value" in SI units)
Like everyone else, I prefer data over opinions, but this may be a case where it is ultimately safer to rely on as many individual opinions as possible and weigh them, with just enough process in place to avoid common biases (friends-of-friends, judging on looks, country of origin, gender, ...).
I've seen big companies that take hiring very seriously rely equally on preset metrics (based on prewritten questions, thus easily gamed) and on the gut feeling of several interviewers who are free to ask whatever additional questions they like, and I think that's the correct approach.
So I'm not a part of this world at all and I'm super fascinated by this perspective. I hear your point entirely. How do you counter the issue with pre-structured interviews where the questions get distributed and you end up with candidates who learn how to answer your exact question (but lack the skills to actually be dynamic in their job, or even do their job)?
I used to interview for a big company, and I could tell some candidates already knew the questions, but I'm a bad interviewer myself as I tend to rate most candidates 7+/10; I very rarely found coding-illiterate candidates.
When I detected that a candidate knew the questions, I'd switch the order and introduce new questions I didn't usually ask.
The interview we did was quite extensive on Java basics, and if the candidate somehow managed to learn all that stuff by knowing the questions, that was a pretty good sign anyway.
There was just this one time I found a candidate who aced every question I asked, even OOP and design patterns. I made up a code-design challenge on the spot, and the guy was suddenly not so brilliant, but he still managed to hold his own. I didn't penalize him for my impression that he knew the questions; he was way better than the usual candidate anyway.
Perhaps the stakes were not as high as in the USA, but I can say I never found someone who would have failed without knowing the questions in advance; I did fail people who were trying to cheat on the phone interview, but those were rare cases.
My impression is that technical interviews just filter out programming-illiterate programmers and people who don't give a f#ck -- as a general rule.
There's also the 30% of cases where the interview is designed to show the candidate that he's not as senior as he thinks and to lowball his salary request.
And some companies have very high technical interview standards because of some cultural inferiority complex (we are just like Google, you see...).
If I could answer that question, I would be a billionaire already.
Programming interviews are almost exactly akin to actors' auditions. Just because you flunk an audition doesn't mean that you're a bad actor. Also, auditioning takes a special skill, and it's not very much like being a real actor. But they still do it to this day. Programming interviews are similar.
The best hiring model I can come up with is the Netflix model. Pay top of the market, hire and fire people quickly if they don't meet expectations, with a generous severance. Have high expectations, reward the ones that can fulfill those expectations, and quickly get rid of those that don't. It's ruthless, but the Netflix engineers I know love working there.
Yeah, I would say a much bigger issue than algorithm questions in interviews are interviewers who assume all programmers follow a particular path (usually the one they followed) and discriminate against those-that-did-not-follow-particular-path.
Computer science is a massive field that people enter through many different and unique ways. If you're trying to gatekeep and force everyone to enter through the same gate that you entered, you should not be an interviewer.
3D graphics is not a good moat, I agree.
There are much better moats - such as recursion. There's no way I'd let someone onto my team if they don't get recursion, even though we almost never use it :)
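(The bar I mean is roughly "can you write, and trust, something like this" - a toy example in Python:)

```python
def flatten(xs):
    """Recursively flatten nested lists: [1, [2, [3]]] -> [1, 2, 3]."""
    out = []
    for x in xs:
        if isinstance(x, list):
            out.extend(flatten(x))  # recursive case: x is itself a list
        else:
            out.append(x)           # base case: x is a plain element
    return out
```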
Interesting. I never expected to see 1P3A mentioned on HN. It's mainly a forum for overseas Chinese to discuss their lives and so on. Around this topic they have also developed side features like COVID-19 statistics and a system for school applicants.
Is this common among immigrant communities? For example do Vietnamese or Indian or Nigerian communities exist to give each other exclusive support on finding jobs or other such advantages?
Yes. And American immigrant communities in other countries. The size and formality will generally correlate with the size of the immigrant community. From real-world social networks to Facebook groups to dedicated sites and forums.
Many people say this. But the reality is that solving 100 LC questions and actually understanding the solutions well enough to solve a variation of the problem is a lot of work, especially if you are working full-time. I wouldn't call that "gaming it", just the usual study and hard work.
If "gaming it" just means passing by studying - a repeatable process that anyone (with a CS degree) can follow - that just means the interview process is quite well designed.
The interview process is a test of endurance, not intelligence. And it should be exactly that, since software engineering is mostly an exercise of endurance and focus.
Every time a friend of mine QQs about failing a FAANG interview, I give them the study prescription that worked for me. The reaction is always the same: they don't do it. If you can't get past these interviews, you probably won't make it past a year at these high-performance companies anyway, because you actually have to exhibit the same work ethic that interview studying requires.
I felt the same when I failed a leetcode interview. The reason behind my annoyance was that I had spent a considerable amount of time over the past few years improving my skills in the areas I thought mattered (practicing TDD, design patterns, learning FP, etc.). I felt disappointed that none of it mattered compared to an artificial set of challenges set up in an unreal environment.
I can't say for sure that such challenges (array manipulation, backtracking, etc.) are not typical for a software job, but I know that they aren't relevant for my line of work. Even if they are, the challenge of solving problems in a live coding session is not the same for everyone. Different people need different settings to perform or excel.
But now I am committing myself to leetcode and getting better at it, though it has become one among a hundred other things I want to get good at. At some point I will feel the burnout. But I sincerely hope the acquired skills are useful in my professional setting.
Skills such as design patterns do matter. There are separate rounds for testing just that - think design rounds (LLD or HLD) - but you usually have to pass the algorithm rounds first to get to that stage.
I don’t know if it’s as useful a signal as you imply. For me, it’s easy to be motivated studying leetcode: small, self-contained puzzles with just the right amount of challenge and immediate gratification. Actually doing a FAANG job can be a slog where it takes months to see results from your work.
I can get hired as a software engineer wherever, but I’m only mediocre at doing the job. I’m not the only person I know like this.
I don't want to underestimate the skill of being good at LC. In the past couple of months, I have solved 200+ LC problems and I have been interviewing vigorously too. It's a tough skill for sure, but beyond a certain point you start seeing patterns you have applied before, and once you are in the flow it becomes slightly easier.
I only wish the real Software Engineering job was as simple as being good at LC, because it clearly isn't.
Sure. But I’m (by my own estimation) a mediocre software engineer who is very above-average at solving fun coding puzzles, and therefore at interviewing. Usually these threads are full of people complaining that they are good engineers who aren’t good at interviews; I’m suggesting that the opposite isn’t uncommon, even if it’s more rarely admitted.
Getting something working in the beginning feels very different from a coding puzzle. During the next phase of optimizing something is where it gets interesting. Understanding the hardware and figuring out the right way to make full use of it via memory tricks or specialized processor instructions always felt more like a puzzle. Unfortunately, I feel engineers are seldom given the opportunity to go deep down the optimization route once something is working reliably, since it's already getting the job done.
> I can get hired as a software engineer wherever, but I’m only mediocre at doing the job.
I feel exactly the same.
Mind if I ask how you deal with this?
I recently left software (not sure if temporary or permanent yet) and I'm pursuing tutoring in an unrelated field. So far I'm liking it more because I feel better than mediocre.
Haven’t really figured it out yet. Working on something I care about seems to help, although it’s not always easy to find. Having a strong, trusting relationship with my coworkers also helps. Easy to get detached otherwise.
Having other sources of meaning in life keeps me going during periods where my career isn’t going as well, gives me perspective and keeps me from getting depressed (I’m prone to it).
Working at top companies has helped me meet a lot of amazing people, including many of my closest friends, so I’m grateful for that at least.
Also, this gets thrown around a lot on HN, but if you’re brilliant at hard programming puzzles but not good at engineering jobs you might have ADHD. I do. Medication and/or ADHD-targeted treatments and accommodations could help. They’ve been modestly helpful for me.
Software engineers who QQ about how unfair algorithm interviews are are clearly out of touch with how difficult and truly unfair interviews are in other high-paying industries like law or medicine (in the US), where getting interviews is based on pedigree and one or two rejections can permanently bar you from top firms/positions.
There are always things that can be improved about interview processes, but many software engineers display great immaturity when it comes to trading a few weeks of time studying content that should already be partially familiar to them from school for six figure salaries/raises.
I don’t think anybody is complaining about the initial barrier to entry. These interviews would be more like asking a surgeon who’s worked for 10 years in one hospital to pass an anatomy multiple choice quiz when transferring to another hospital
While it's true that asking an experienced surgeon basics of anatomy is worthless, please take into account that all surgeons at least operate on humans. If you take 10 random developers, chances are most of them worked on completely different levels and domains throughout their careers (e.g. embedded dev, backend, frontend, devops will have vastly different skills).
> but I’ll take it over how finance jobs are where it’s all about connections and where you interned when you were 19
Is this really different from Big Tech? If you don't go to the right schools, internships, whatever, it can take years to get the right employers on your CV to be called for an interview.
Not true in my experience. I went to a no-name state school, graduated with a mediocre GPA and an unremarkable local internship, and I still got the same interview and job opportunities as my friends who went to Berkeley and other top schools. I know people at top tech companies who never finished their college degree.
I don't know of anyone at a top law/medicine/consulting/finance firm like this.
The average employment length at a FAANG is 2 years or less. A job at a single top law firm is an endurance battle that can span your career. Switching firms is looked at differently, often negatively.
Once you land that job in finance or as a lawyer, you are on track to be set for life. At a FAANG, after a few years you'll get pushed into management or pushed out.
You are just as "set for life" in big tech as big law. In fact, if you're looking at the last decade, big tech won hard considering tech stocks and how big law froze (and even slightly cut) salaries during the great recession.
What is the relevance of this whataboutism? Some other industries have bad or worse interview practices as well. That doesn't make this process good. What is the point you're trying to make?
It's immature to think we can do better because others have it worse? Rubbish.
That's pretty presumptuous. Do you truly believe that working at a FAANG company requires the same amount of 'endurance and focus' as working at a non-FAANG company _and_ spending several hours every week practicing coding skills?
The major frustration is that it's endurance at a mostly irrelevant task. If the problems were more relevant, active software engineers wouldn't need to study outside their day-to-day duties, no?
If somebody has been doing something for 20 years and an assessment test can yield dramatically different results with 20 minutes of preparation, then the assessment test is total bullshit. It's going to yield a false characterization and is fundamentally garbage input
I know that's pretty strongly stated, and it matches the strength of my convictions here. I've tried many methods. Asking someone to figure out that something is a sliding-window problem is a garbage test.
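(For anyone outside this game: the "trick" being tested is pattern recognition - once you see it's a sliding window, the code is mechanical. A minimal fixed-size window in Python:)

```python
def max_window_sum(nums, k):
    """Max sum over any k consecutive elements, in O(n)."""
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add newest, drop oldest
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9 (5 + 1 + 3)
```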
Problem is, there is not a lot of incentive for people looking for a job to use LC and the like to actually understand algorithms and data structures, and with good reason:
Many interviewers do not ask the questions to check for thought process or problem solving ability; they treat it like some TV quiz: ask question, get answer, compare answer to the note in hand, applaud if it's the same answer, next question. Why? Because it's a lot easier to sit there watching the candidate squirm at the whiteboard, while thinking about what's for lunch, than engaging the candidate and gasp talking to him/her.
This creates incentive for people taking these BS interviews to learn-for-the-test: Get a list of the current top50 questions asked regularly at interviews (there are resources for that) and memorize them.
Why? Two reasons:
1. It is a lot easier than understanding the concepts and purpose of different algos and data structures.
2. Trying to solve them by applying actual understanding runs the risk of getting stuck on an unfamiliar problem, or producing a slightly sub-par solution instead of the "correct" answer, and getting booted out despite demonstrating the exact thing, aka "problem solving ability", that the interviewers allegedly look for.
And, unsurprisingly, because there is money involved, an industry has sprung up around this: pay-for-tech-interview-training is a thing, including regular updates on popular questions.
The result of course: Companies running the risk of hiring people who are great at answering LC questions but fail when they actually have to solve a problem where they cannot copypaste the solution from SO.
I’ve gotten interview questions I’d recently solved and my problem was it was too easy. I had trouble acting like it was the right amount of struggle. Is there a trick for that?
I've interviewed folks for FAANG roles. If you know how to solve the problem already, just tell the interviewer up front. Either they have another question or they will go deeper into a discussion about why and how you solved it the way you did, testing it, other approaches and why they are or are not good tradeoffs, etc.
It's pretty obvious to interviewers if you've solved a problem before, and we appreciate the honesty. Interviews are not adversarial; they're to see if a candidate is a good fit for the role and dishonesty is never a good fit.
"We expect you to study and be prepared for algorithmic questions, BUT NOT TOO PREPARED! Only just enough. We will give you a random question of our choosing, but if you already studied this, then you will be deducted points, unless you tell us, so that we can ask you a question you've never studied before."
A true interview would give the candidate their choice of question with the expectation that they know how to solve the question. It makes it a lot fairer for everyone involved.
I was always kind of confused about how advice for interviewers always comes along with warnings of the type "Some people talk a good game about programming, but they can't actually do it. You have to watch out for these people. Never hire one. This is the biggest trap you can stumble into as an interviewer."
After getting some interview coaching, I think I understand how this complaint arose. The whole problem is an artifact of the interview format, in which the design intent is for the candidate to be unable to solve the problem on their own. So you get scored based on how well you can charm the answer out of your interviewer while making them feel like everything they said was your idea. Instead of a test of how well you can program or solve math problems, it's a test of how good you are at cold reading. ( https://en.wikipedia.org/wiki/Cold_reading )
And unsurprisingly, a test of cold reading will end up delivering people who are good at cold reading without reference to whether they're good at other things. If you want to avoid this problem, just start giving assessments that don't involve interaction with the interviewer.
A good way to avoid this conundrum is to ask questions that are worth solving in real life.
If the candidate breezes through the discussion because they've actually had to solve the problem before, then their victory is well earned.
If, on the other hand, it's an academic question in the same vein as the data structures or algorithms puzzles you find on $interviewprepforum, then the fact that they've solved it before tells you very little.
I don't think there's much difference either way. In both cases, a candidate gained a large advantage in a way that tells you very little about their ability.
I think this is why contrived questions gained popularity in the first place - they eliminated noise due to candidates randomly having solved similar problems before (that and "real life" problems usually can't be explained and solved in 1 hour).
I think this is on the right track. The best interviews I've been a part of involve the interviewer asking a question they don't fully know the answer to. Then the interview turns into a conversation where both parties are trying to work together to formulate an answer. The candidate should be scored based on how constructive that conversation is.
Be honest: a candidate that works through a seemingly novel problem correctly is getting a much better review than another candidate that admitted they've done the question before and breezes through it. And this is assuming your interview round doesn't get thrown out entirely to begin with.
The interviewer's goal is to evaluate the interviewee accurately.
The interviewee's goal is to be evaluated inaccurately.
If you really need an example, then look at it this way:
1. The interviewee's goal is to be hired.
2. Assuming there is no conflict of goals, then the interviewer's goal is to hire the interviewee.
3. This immediately implies that the interview is a pure waste of time. You can just make the hire without having the interview.
If you don't believe that interviews are -- in every case -- nothing but a waste of time, then you must reject one of the two premises. You either believe that (1) The interviewee does not wish to be hired, or (2) there is a conflict between the interviewee's goals and the interviewer's goals.
We can make a similar observation purely by knowing that interviewers sometimes reject interviewees.
>> The interviewee's goal is to be evaluated inaccurately.
> Only if the interviewee doesn't think they should be hired.
Nope. The interviewee would always like to be evaluated as better than they actually are, regardless of whether they meet the notional hiring threshold.
> These aren't necessarily adversarial.
The goals you list are still necessarily adversarial, because the interviewee's goal is always to get an offer and the interviewer's goal is to stymie them.
The interviewer's goal is to evaluate the interviewee accurately.
More specifically: the interviewer's goal is to minimize (1) false positives and (2) the expenditure of the company's resources.
Meanwhile, candidates hope to be evaluated "fairly", which is in direct conflict with criterion (1). They also naively expect to be treated "decently", which is in direct conflict with criterion (2), and which explains why employer-side ghosting is so widespread, along with other abusive practices like piling on lengthy take-homes, etc.
> which is in direct conflict with criterion (2) and which explains why employer-side ghosting is so widespread, along with other abusive practices like piling on lengthy take-homes, etc.
I don't think concerns about resource expenditure actually explain ghosting. I think that happens despite what the company would prefer, because the people involved find it unpleasant to notify candidates of a rejection.
Lengthy take-homes are easier to explain by reference to resource concerns.
Minimizing false positives is not a fundamental goal of interviewing -- accuracy is a goal in all settings, while minimizing false positives isn't. But minimizing resource expenditures is; you're right about that.
> It's pretty obvious to interviewers if you've solved a problem before, and we appreciate the honesty.
> ..
> dishonesty is never a good fit.
If you rejected every such “dishonest candidate”, the paid services offering candidates practice questions would have gone out of business long ago.
Given the popularity of such services, have you considered the possibility that you overestimate your ability to spot candidates who have solved the problem previously?
I understand the sentiment about honesty and finding a legitimate good fit. Neither interviewers or candidates are perfect. But there's something off about this, isn't there? If you ask a candidate a question, and they answer it well, they've done what is asked of them. Why should they put themselves in a position of vulnerability just because they're well prepared?
To some extent it paradoxically sounds like you guys want applicants to not have studied basic algorithm problems, since the problems you find if you try to study are the same problems you guys ask in interviews.
Just tell them it’s familiar. Depends on the company, but they can often tell. If you blast through a tough question and then struggle on easier ones, it’ll show up in hiring committee (or equivalent).
It could easily cost you the offer if they think you’re being dishonest.
If you say it’s familiar and they ask you to do it anyway, just solve it very quickly and thoroughly. They’ll give you points for your performance and probably ask you another.
Just go through the steps out loud, start with a really naive solution, say what's wrong with it, go on to the next. You can usually go through at least 3 levels of that. Also spend some time verifying the specification of the problem, ask about any potential edge cases. If you really know the problem well, go into maybe some extensions of it or harder versions.
The interviewers are ideally trying to get a sense of how you think through problems, not just that you can spit out an answer you know. At least that's what they say.
It's not the struggle, it's the questions you ask, and what you share about your thought process trying to get to a solution. You might work yourself into a hole solving what you think is a simple array manipulation problem, but then realize some edge cases require you to switch to a heap/priority queue, etc.
Please don't cross into personal attack, no matter how wrong someone's interview preparation strategy is or you feel it is. Perhaps you don't feel you owe them better, but you owe this community better if you're participating here.
Almost all advice online about interviewing is written from the point of view of the candidate. Sometimes this is good advice, but sometimes it devolves into some kind of astrology, where candidates are just guessing how things work.
I've done 300+ interviews at FAANG, so I can share a bit of advice from the interviewer's side. The caveat is that this is based on how I conduct interviews, so YMMV with other people.
* Always ask clarifying questions. Almost all problems have some level of ambiguity that you need to clarify. This is intended. The more senior you are, the more important this becomes.
* Explain your approach before you begin, and even while you code. This gives the interviewer a chance to help you if you are down the wrong path, or even just understand what you are up to.
* If you get stuck, ask for help! Think of this as a pair programming session rather than an interview. I much prefer a candidate who gets stuck, asks for hints, gets unblocked and writes good code over one who doesn't ask for help and writes not-so-good code.
* Caveat: There is a tension with asking questions. There is such a thing as too much help, e.g. I need to explicitly tell you how to solve the problem. Use judiciously.
* Take the interviewer's hints. The interviewer has probably asked this question dozens of times and knows it inside out. If they give you a hint, 99% of the time they are on to something.
* Personally I don't like "hard LC" problems. Instead I prefer medium difficulty problems, and spend extra time probing the candidate's coding skills. This includes:
1. Write tests.
2. Handle corner cases/incorrect inputs.
3. Discuss how to scale the code.
4. Discuss how to refactor the code.
* If nothing else, write the simplest brute force solution you can think of. As an interviewer I need a coding sample to evaluate the candidate. Trying and failing to implement an optimal solution is worse than implementing a correct brute force one.
As an interviewer I explicitly ask candidates NOT to do this (or to only do it if they want). For some reason there is this expectation in coding interviews. I challenge anyone who pushes their interviewees to do this to sit down during one of their own coding sessions and vocalize the stuff they are coding while doing it. In my opinion it is stupid and unnecessary.
When my interviewees start solving the coding tests and begin "thinking out loud" I tell them: "you don't need to do this here, focus on your code, and once you are done I'll ask you to explain some parts to me". I invariably hear a huge sigh of relief from them haha.
Luckily my problem-solving thought process is already like a conversation, so saying out loud what I'm thinking is just vocalizing what's already going through my head.
The more I read HN comments the more I realize that not everyone reasons through problems "verbally" (even if silently or internally); some use a different thought process entirely. To me, coding has always been like speaking a very specific and pedantic language.
I would find it interesting if other people were able to share their own mental models for coding, if that's even possible.
I appreciate the caveat. Caveats are almost never given when people give advice and it is exactly the thing that people get stuck on when they take the advice to heart.
To understand why, here is another bit of my philosophy: the aim is to find a good match between candidate and company. The interview is a (very flawed) proxy for this, so don't overindex on it.
For instance, recently I was inclined for a candidate although they didn't make it past brute force. The reason is that they did very well in the behavioural round and wrote a very good brute force solution. Also, during coding, they explained in detail the internals of some data structures (e.g. hash maps), which shows they know their stuff. Lastly, they had an intuition about how to approach the problem optimally, even if they didn't manage to write the code.
> After being given the algorithm problem, ask for the specific runtime your solution will need to have. Almost certainly, the interviewer will tell you.
In my experience, interviewers will rarely tell you the runtime of the optimal solution.
> In my experience, interviewers will rarely tell you the runtime of the optimal solution.
I agree. As an interviewer, I genuinely want each candidate to do well and I think that, on balance, setting a specific complexity goal is likely to do more harm than good.
For a strong candidate, it could limit their opportunity to shine by narrowing down the solution space and discouraging them from exploring tradeoffs (e.g. runtime vs memory).
For a weaker candidate, it could mean no solution at all instead of a suboptimal solution. I generally choose my problems such that they admit multiple solutions of varying degrees of efficiency/sophistication. It is generally not a hard requirement that a candidate find the optimal solution in order for me to score the interview favourably.
In my experience, "no solution at all" is already the default. When people choke on questions it is because they get stuck trying to guess at what you want, and panic thinking it must be some clever trick they don't recognize. Becasue we've all seen a million tricks and forgotten most of them.
If you're happy with just getting it done quick and dirty, at least as a start, then telling them so is obviously better than not doing so. Beyond that, anything more about what you want will help trigger their thinking in terms of the right approach. Unless of course you really are looking for some clever trick (and I have certainly seen those annoying questions).
What you do in leetcode is different from what you'd do in an actual interview though. In leetcode, I don't even bother with the trivial solution, as that would never pass the testcases. However, for an interview, I'd always start with the trivial brute force solution, at least to get out a base solution to evaluate, describe and then iterate on.
I don't ask questions that can only be solved with some clever trick. In my experience, I get the most signal by posing a question that has many solutions of varying degrees of efficiency.
This way anyone except the least qualified candidates can make some meaningful progress. I then gently guide the candidate to help them produce the best solution they're capable of in the allotted time.
I then score the interview on the basis of where they have landed, the process by which they got there and how much help they needed along the way (and of course the role/level they're interviewing for).
I’d turn that question back around at the candidate, unless it were for a junior candidate.
I’ll give hints if the candidate is struggling, but I won’t just come out and tell them something like this. If they pushed me hard enough at the start, I would tell them and then fail them on the algorithms/reasoning component of the interview.
When I implement something IRL (unless I'm implementing something from a paper or an algorithm that's well known), I typically can't just ask someone what the algorithmic complexity is. I have to determine it for myself.
Ed: to clarify a bit, in some cases the "non-optimal" solution is going to be the best one. And that's even before you start worrying about things like time/memory tradeoffs. When a candidate asks me questions along the lines of "which of the 2 input lists is bigger, what order of magnitude is the length, does XYZ fit in memory, etc", IMO it's a useful signal that shows they're aware of these tradeoffs.
The original question referred to "target runtime", which I take to mean "basic expected performance characteristics." E.g. "Nothing crazy -- should run on a million integers or less, in half a second or less, while requiring not much more than the array size (or a small multiple) in extra memory. And worst case should be not too far from average case."
From there, I can start to think about the running complexity (or whether it even matters, for the scale given). But if an interviewer won't even tell me that ... I'd assume they're just making candidates dance for the sake of making them dance, while they sit back and stare at their phone and occasionally interrupt with "hints".
Identifying a reasonable target runtime can be part of the problem. The longer they’ve been writing software, the more ambiguity I expect them to handle.
If the candidate wants to discuss performance, I’m obviously interested, but if they want me to serve a hint to them on a silver platter I’m just confused.
There’s more to an interview than algorithms (e.g. coding, communication), so it’s not a death sentence.
In the hundred or so interviews I’ve given, I’ve never had a candidate persist in forcing questions along these lines from the very start, even after I have turned the question back around at them. If you force me to give you hints out of the gate, something very weird is going on.
Because in the real world you don't know the runtime of the optimal solution, except for certain very well studied and simple problems. At best you might know the runtime of the best published solution. But in many cases your problem isn't something that anyone has studied in that kind of detail and it's going to be on you to determine whether your solution is satisfactory or whether you should go back and improve.
> Because in the real world you don't know the runtime of the optimal solution,
The question specifically referred to target performance, which IRL is almost always known and freely communicated (in at least ballpark terms).
As in: "Find the median of this array of 10k integers in .01 seconds please. Not 10 billion, just 10k". You say: "OK, sort and done."
IRL if someone at your job says: "Write some code to find the median of an array of integers. I'm not going to tell you how many, even the order of magnitude. Could be 10k, could be 10 billion." You say "WTF?" They say: "I don't care about whether you get the right solution or not! I just want to see how you think about a problem."
I think you'd know where to tell them to put that question.
Being told what to do is for juniors; anyone intermediate or above should be able to recognize most compute waste and fix it without being told what they are supposed to do or how fast the end result is supposed to run.
It isn't "telling them what to do"; it's basic professionalism.
If someone has cleared their day to come into your office and talk to you -- you should respect their time. Don't make them guess at the goalposts, don't use implicit or hidden metrics. If they ask you what the expected latency is for an API response, or whether they should worry about hostile string injection attacks -- just tell them.
As you would in a normal, respectful, professional setting.
+1, got me thinking about this one Elon quote that's roughly about how we're trained in school to solve the problem given to us, even if it's not a valuable problem to solve. You can often save more time and complexity by realizing the 'problem' isn't really a problem than by any solution.
The thing that people like Elon sometimes forget is that most companies have far less room for first-principles multidisciplinary problem solving than for the people who then implement the plan that comes out of said problem solving. It can still be helpful, but it comes with the added cost that every time you ask someone to do a task you have to explain the entire decision tree that led to the task, and then even if they do come up with a valid improvement you probably have to throw out a bunch of other decisions that other people are already working on implementing.
What I'd actually ask as a candidate is what to optimize for. Are we looking at an N of 100 and a simple solution will do, or are we looking at an N of a million? Maybe memory is the constraint.
At least show the candidate that you're thinking pragmatically. The goal in the real world isn't to write the "fastest" algorithm, it's to write the most appropriate one.
Exactly. The only scenario in which I actually reveal the optimal complexity is when they got the best solution and are still struggling to see if there's a better one. It's better if they understand that it is optimal but I don't let people waste more than a minute or two on this. If their solution is not optimal I just say something like "Is this optimal?" or "Is there a better way?", and take it from there as a dialog.
Agree. When I'm interviewing I'm trying to evaluate if the candidate will be able to do a real job. In the real world you're coming up with answers to a novel problem. There's no way to know what the complexity of the optimal solution is (unless you can prove it, which doesn't happen outside of a classroom).
If you throw out a time complexity most interviewers will tell you if they expect something better. For big companies, it's always a crap shoot. For smaller companies, if the person isn't willing to have a conversation with you it's a sign they would be difficult to work with.
Not just throwing out a time complexity to see what sticks, but if you sketch out an approach and give the complexity then I agree, they will probably either tell you “implement that approach” or ask “can we do better?”.
There's a bit of a "draw the rest of the owl" thing going on here. If you have the experience to confidently eliminate the likely approaches that don't work, then you must know quite a bit about those approaches and their applications.
And in particular, I'm not sure I can agree on eliminating recursion in the anagrams problem. He just says
> Recursion – No way to apply recursion to the problem.
but...there is? One plausible approach: I sort each of the strings and put them in a trie. A fancy-sounding thing, but really just a data structure which allows me to rephrase "someString in myTrie" as the recursive "someString is null || (head(someString) in myTrie.keys && rest(someString) in myTrie[head(someString)])". I'm not arguing that this is better than a hash table, though I think it probably is on some workloads - just that it's not possible to rule this out and solve the problem by pure process of elimination.
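For concreteness, here's a minimal Python sketch of that trie idea (all names are mine, and I've added an explicit end-of-word marker the pseudocode glosses over):

    def insert(trie, word):
        # Insert the sorted form of a word into a nested-dict trie.
        node = trie
        for ch in sorted(word):
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def contains(node, chars):
        # Recursive membership test over the sorted character list:
        # empty input checks the marker; otherwise recurse on head/rest.
        if not chars:
            return "$" in node
        return chars[0] in node and contains(node[chars[0]], chars[1:])

    trie = {}
    for w in ["pin", "nip", "tap"]:
        insert(trie, w)
    print(contains(trie, sorted("pin")))  # True: "pin" and "nip" both sort to "inp"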
That's the nature of thought frameworks. You need to fill in the gaps yourself. It's still useful to have an organized approach. In my experience, people can know how well each technique would be suited to the task at hand if asked. But, they rarely consciously go through each technique they know and consider if it's the right approach. Most people would be well served to do this more often.
Btw, this is a big failing in interviewers who help. Asking the right questions (what data structure could you use here?) is incredibly helpful. So helpful that you're not testing them on a slow moving, relevant skill (structured thinking) but instead a fast moving, less relevant skill (what data structures do you know?). Bad move.
I kind of agree, in that a good candidate should probably already be thinking "what data structure should I use here" no matter what the problem is. Just always think about what data structure to use. Every problem requires one.
More in the vein of what you're getting at though, I think helping a nervous candidate by asking obvious questions like this can be helpful: "What data structure would you recommend using for this problem?", or "What programming paradigm do you think would fit this problem best?". It's a bit of an ice breaker, and can lead to more interesting conversation than the candidate just sitting there, quietly thinking (admittedly, possibly not a good sign for a candidate if they do this, but interviews should be built around making it a good experience for the candidate as well).
I have completely the opposite opinion of your second paragraph. I’m fine with candidates quietly thinking in an interview. It’s a big part of the job. I’m not optimizing for interesting, but for signal. And dropping the signal of “can get themselves going with structured thinking” is not worth it to make it a better experience for the candidate in that moment. I’m okay with a candidate feeling uncomfortable while they’re failing an interview. If anything, I think it’s better than the standard which is to give hints, act like everything is great and then thumbs down them later. Makes people so confused. There’s also a whole thing about who gets hints fastest, confirming biases. But that’s another giant thread.
This type of top down learning doesn't work for interviews.
Just like how you can't learn math by reading high level summaries written by others.
You have to grind through it and see the connections yourself.
The author's lack of depth shows when he says that no graph means you can rule out DFS.
I also disagree on asking interviewer for the expected runtime. You're needlessly adding another constraint on yourself by doing that. You should ask for the input size and then figure out the best runtime that you can reach. A working suboptimal solution is more likely to get you to the next round than an incomplete attempt at an optimal solution.
Am I crazy or is the first answer terrible? I sat down and wrote out an answer for the problem and I initialized one integer and one timestamp. It should be O(1) time and O(1) memory easily, right? I'm seeing comments saying they'd use an array or a hash table -- why are you using any data structure? You don't need to remember how many times it was called 61 seconds ago; just keep a timestamp and the last time you reset the timestamp. If you need it to be clock-minutes instead of rolling (it doesn't; whose clock, anyway?), then just round the timestamp. The obvious follow-up question is "ok, what if it needs to be limited to 10,000 calls per minute?" You're going to keep 10,000+ elements in some collection for no reason? I wouldn't immediately reject a candidate for coming up with this answer but I would seriously question their thought process on this one.
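For what it's worth, a minimal Python sketch of that counter-plus-timestamp idea (class and names are mine; note, per the replies below, that this is a fixed window rather than a true sliding one):

    import time

    class FixedWindowLimiter:
        # Allow at most n calls per fixed window; O(1) time and memory.
        def __init__(self, n, window=60.0):
            self.n = n
            self.window = window
            self.count = 0
            self.window_start = time.monotonic()

        def allow(self):
            now = time.monotonic()
            if now - self.window_start >= self.window:
                self.window_start = now  # reset the window
                self.count = 0
            if self.count < self.n:
                self.count += 1
                return True
            return False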
In fact it's kind of ironic that his first example exposes how bad this advice is: by trying to meta-game, starting from a few example data structures, he shows he completely misses the solution that is both simpler and much more performant.
I don't think you can have a "sliding window" of time just based on that.
The next time you call the function, you need to count how many of the old calls are still "unexpired". This number (potentially) gets lower with each passing quantum of time.
How can you do that without holding a timestamp for each call? Please clarify if I misunderstood you.
TLDR: You have a maximum of N credits; as time passes you earn credits at a rate of N credits per window_size, but if the time since the previous request is less than window_size/N you lose 1 credit.
I don't think you can get more than 2*N requests through in any sliding window without tripping the filter, and you can't sustain an average of more than N requests per window_size without tripping it either.
I think it's a better solution than the one the author asked for, because when you are rate limiting, you and the client may not have exactly the same time, and you might have edge cases like the client batching 60 requests in a few ms every minute. If there is some time-jitter in the requests you may have 120 requests in 59.9 seconds. (Bonus question: what time-stamping of the requests should be used?)
My solution is more forgiving: it allows the client to use all of its rate credit without risking tripping the filter, as long as it respects the intent of the rate limit.
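A rough Python sketch of the credit idea (the class name and the cost-of-1 simplification are mine; the window_size/N penalty variant described above would adjust the spend accordingly):

    import time

    class CreditLimiter:
        # Token-bucket sketch: start with N credits, regain credit at a
        # rate of N per window_size, spend 1 credit per accepted call.
        def __init__(self, n, window_size=60.0):
            self.n = n
            self.rate = n / window_size        # credits regained per second
            self.credits = float(n)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.credits = min(self.n, self.credits + (now - self.last) * self.rate)
            self.last = now
            if self.credits >= 1.0:
                self.credits -= 1.0
                return True
            return False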
That's a better solution in general IMO, but the author's approach can guarantee that you'll never go over N calls in a sliding window rather than fixed windows. I don't believe that's possible with the timestamp + count solution. Gotta bring up both solutions and ask the interviewer what they want :)
What's the problem with a sliding window and the timestamp + count solution? lastTime is the timestamp of the last call. newTime is the timestamp of the incoming call. If newTime - lastTime > 60 seconds then you're good to proceed: set count to 1, change lastTime to newTime, and go on. Otherwise, check whether count is less than n and proceed accordingly (incrementing count if so). This accomplishes the sliding window, and rounding down to the last whole minute handles the fixed window - right?
There’s also two additional programming techniques you should be aware of:
* Dynamic Programming
</snip>
How often do folks here use dynamic programming techniques in their professional lives? My own niche is systems programming. Dynamic programming is an important technique, sure, but given how rarely I see it used in practice compared to, say, statistical estimation it feels very overrepresented in interviews. But, maybe that's my own niche being unusual.
An interesting point. I think most of today's systems tend to be "stateless" and store state in a database, which would make DP pretty cumbersome, if not impossible, to manage.
The only places where such processing is required involve specific kinds of computation, and in most of those scenarios you'd be better off using an algorithm already implemented in an existing library.
Most things in interviews are wildly overrepresented compared to their usage though. I haven’t done any shortest path, binary search, topological sort, or many other things in most of my time in industry. I would have a hard time remembering even one instance for many algorithms.
I don’t have a niche. I’m full stack. I do everything related to web apps and been at six companies doing that all in different fields. (Seed startups to large public companies)
Maybe if I was in computer vision then I’d use some of these algos more often. But those usually require a special masters or PhD to get an interview anyway. So, not really applicable to the overall industry tbh.
Thanks! For what it's worth, I'd say that full stack work is a specific niche in the field. It's a much different area of concern than the things I work on day to day.
It's hella common for SV though. The overwhelming majority of jobs here and startups are based around it. So, unless you're doing something close to hardware - you're likely working on a web app.
This doesn't mean you're just making pretty widgets for a browser.
Yeah, it's not commonly necessary for real world engineering, but it's certainly good to know, at least as a mental exercise. There's a nice free algorithms textbook used at UC Berkeley that covers the concept pretty well: http://algorithmics.lsi.upc.edu/docs/Dasgupta-Papadimitriou-...
I usually reach for caching first because it’s more intuitive (for me and my reviewers) and because we often have well-known size bounds and relaxed enough performance requirements that make an “analytical” solution unnecessary.
Caching creates a mature product, no matter what your product roadmap says. If a rewrite is the tool of last resort, caching is the second to last. Like pruning shears, once you use them, your world of options collapses considerably. All of those opportunities are gone and nature won't let you put them back.
Critically, caches are global shared state. That's why cache invalidation is the hardest thing. Global state also means people stop trying to have a data flow/architecture. None of these call trees need to advertise that they use a particular value because I can just grab it from a global if I need it. You don't know where things are actually used anymore because they all come from cache lookups, and because the lookup is so cheap (under typical system load, but catastrophic under high load), people don't even try to hold onto previously acquired data. They just fetch it again. Which is another boundary condition for cache invalidation (what if half a transaction has the old value and half the new value?)
They make flame graphs essentially useless. Sometimes immediately, or once the above starts happening. You quickly have no clear idea what the cost of anything is, because you never pay it, or if you try to pay it you end up amplifying the cost until there is no signal. Once people are promiscuously fetching from the cache, they don't even attempt to avoid duplicate lookups, so activities will end up fetching the same data 6 times. If you turn off the cache entirely for perf analysis then the cost skyrockets and the flame chart is still wrong.
You are back to dead reckoning and manually adding telemetry to potential hotspots to sort it out. This is very slow, and can be demoralizing. Quickly this becomes the fastest our app will be for a very long time.
All great points! For the type of work I'm thinking of (mostly offline data processing) I use caches in a much more limited way than you suggest. A short-lived local LRU cache on an expensive pure function over immutable data can significantly reduce our resource costs without adding significant complexity to the code. In some cases, DP would be more efficient but would require a mode of thinking and abstraction that's very different from most of our code.
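In Python terms that can be as small as this (illustrative only; the function and the size bound are placeholders):

    from functools import lru_cache

    @lru_cache(maxsize=1024)            # bounded, so memory stays predictable
    def normalize(record: tuple) -> tuple:
        # stand-in for an expensive, side-effect-free transformation
        return tuple(sorted(record))

    normalize((3, 1, 2))                # computed once
    normalize((3, 1, 2))                # served from the cache
    normalize.cache_clear()             # scope the cache to one batch/job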
Glad to hear that the software engineer selection method is determined by this very rigorous process of character assessment. No wonder big corporations produce such amazing teams.
I am not gonna die on a hill defending these types of interviews but they are infinitely better than the alternatives. Alternatives are often subjective criteria like "culture fit" which can lead to nepotism/other types of biases or hiring being done by sorting resumes based on the prestige level of the college of the applicant and just running down the list from the top (which happens in other professions like law).
Of course the interview process is (or at least is supposed to be) multifaceted, and passing these problems is just one facet of the process. A great candidate can bomb one of these questions but I wouldn't necessarily reject them just because of that (although that happens in practice in the industry), and I wouldn't necessarily okay someone just because they aced one of these questions.
I'd take complaining about these type of problems any day over complaining about only being able to get a job in certain companies by having connections or having gone to an Ivy League school.
> Alternatives are often subjective criteria like "culture fit"
There's a vast territory of approaches that are neither leetcode hazing nor culture fit tests. And which, while nuanced, are basically objective.
Work sample evaluations do quite well, for example. As do in-depth technical discussions about ... just about any subject the candidate claims to know about.
Neither of which have anything to do with "culture fit".
>As do in-depth technical discussions about ... just about any subject the candidate claims to know about.
Having attempted to do this after reading all the HN screeds, I found that this failed horribly. You'll find a large number of people who seemingly know the theory, but can't execute worth a damn. They can happily talk on and on about normalization, regularization, imbalanced datasets and so on. Then they fail fizzbuzz.
You can't communicate experience like you can communicate theory; if you could, it would be called theory instead of practice. Theory is great since it is something we can communicate, but whatever instincts you developed working on a big codebase can't be communicated. Some people think you can, and there are lots of talks about these things, but in practice you can't teach practice; you need to experience it yourself. And if you distil practice into pure enough droplets of wisdom that others can recognize as the truth without being in your in-group, guess what: that is what we call theory!
Talk about how the candidate used technology X in practice, what was their role in the project, what worked in practice and what not, how they would approach the bad decisions in retrospective of the lessons learned while using X.
> Work sample evaluations do quite well, for example. As do in depth technical discussions about ... just about any subject the candidate claims to know about.
How do you compare candidate A, whose preferred subject is 1, against candidate B, whose subject is 2? And how do you get the candidate to talk sincerely and not bullshit around the topic, especially when testing ML - a modern alchemy more or less, with no definitive answers, just a bag of tricks?
I have had problems with controlling the waste of time and directing the talk towards useful topics because the candidate wouldn't stop the bullshit talk. Maybe other fields have less bullshit maneuvering space.
I think it's better to go with a list of basic questions that remains the same between candidates. Only if they ace the basics I test for depth. Depth can be tricky to evaluate, especially for people who are not very bad - maybe just in ML.
In small companies or startups I have come to believe the quality and objectivity of the interview is a fair reflection of the maturity of the thought leaders at the company. I then use that assessment as a primary indicator to drop out.
As someone who has current stable employment I have the freedom and flexibility to fail at numerous interviews with only the minor frustration of time wasted. If I were out of a job and a bit more desperate I would just lie through my teeth on all elements of a job interview except education and employment history, like a super brown-nose narcissist. Since almost all programming interviews are maximally subjective, intentionally injected bias goes both ways and is easily applied.
Unfortunately subjective criteria are not alternatives to objective criteria but are generally applied in conjunction with them. From my experience: over the past year and a half I've been interviewing at several companies, and I've done over 20 different code challenges. Even though I passed 90% of those code challenges, only 1 company gave me an offer, and that's the company I'm working for right now. Other companies just rejected my application because I wasn't a "cultural fit" or something.
I would also add heaps/priority queues to this list. They don't come up as often as HashTables/LinkedLists but come up often enough.
If you wanna be thorough (esp. if you are applying at companies known for harder interviews) I would also practice backtracking problems where you are doing a full exhaustive search of the problem space (often O(k^n) or O(n!) complexity). Yes, these are often mostly just DFS + recursion, but they have subtlety to them that requires practice for most people. They tend to be tricky to get right in an interview setting without practice: due to the really bad time complexity of most problems of this type, they almost never come up in production, so they're not even on the radar of most engineers.
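If it helps, here's the canonical shape of such a problem: permutations via DFS + recursion with explicit undo (a sketch; names are mine):

    def permutations(items):
        # Exhaustive backtracking search: generates all n! orderings.
        result, current, used = [], [], [False] * len(items)

        def backtrack():
            if len(current) == len(items):
                result.append(current[:])   # record a complete permutation
                return
            for i, item in enumerate(items):
                if used[i]:
                    continue
                used[i] = True
                current.append(item)
                backtrack()                  # explore this choice
                current.pop()                # undo it (the "backtracking")
                used[i] = False

        backtrack()
        return result

    print(len(permutations([1, 2, 3])))  # 6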
There's a specific reason I didn't mention priority queues in the post. In most cases, anything you can do with a heap you can do with a balanced binary search tree instead! A balanced BST has O(log(n)) insert and deletion, which is the same as a traditional heap. The only advantage a traditional heap has is that you can construct a heap in O(n) time, whereas building a tree takes O(n log(n)) time.
Of course there are even more niche data structures like a Fibonacci heap which have O(1) insertion, but you will have to get extremely unlucky to get asked about a Fibonacci heap in an interview.
One more thing: a heap can be easily packed contiguously in memory, while the tree nodes are typically dynamically allocated.
This helps with allocation pressure, which is often the "hidden" cost people don't pay enough attention to, especially in garbage-collected languages where the allocation might appear to be almost "free" (often a simple bump allocator), but the deallocation price is paid at later indeterminate time (when the GC runs).
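Python's heapq happens to illustrate both points: the heap lives in a plain contiguous list, and construction is O(n):

    import heapq

    data = [9, 4, 7, 1, 3]
    heapq.heapify(data)              # O(n) bottom-up construction, in place
    heapq.heappush(data, 0)          # O(log n) insert
    print(heapq.heappop(data))       # O(log n) delete-min -> 0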
In C++ (for example) getting the minimum and maximum is an O(1) operation [1][2]. I guess this is probably enabled through some extra bookkeeping and many other languages may do the same.
C++ has built-in binary trees (std::set/std::map) with fast iteration. The C++ binary tree is my favourite standard library data structure of any language I've used. It isn't the fastest out there, but it has so many useful things you can do with it: you can use it as a heap, you can use it as a dictionary, you can use it to sort things, and you can insert items while iterating through it (if they come after where you are in the tree, you will find them in your iteration), etc. It does so many things right that others don't. I'm sure other data structures are slightly faster, but the C++ binary tree is so powerful and still fast enough for basically every problem.
Before people start complaining about leetcode and how it doesnt exemplify skills:
it's a proxy for a combination of:
intelligence and how hard you are willing to study
the computer science knowledge shown is just a bonus
EDIT:
One last thing to throw in: it's pretty clear that there's a correlation between the top software companies and how hard their leetcode interviews are. You can claim all you want that it doesn't work, but Facebook and Google have very hard leetcode interviews and are known for the best software
They are known for the highest paying salaries + providing stability relative to a high paying start-up. They aren't known for the 'best' software (how do you even quantify that) and Google specifically has a reputation for not maintaining software and letting products die. Generally not a hallmark of good software.
Also, the computer science asked about in these questions is a very small subset of computer science. I've never been asked an image processing question, non-trivial concurrency/parallelism questions, or numerical optimization questions (all things I've actually used in my job; I've never had to do strange linked list manipulations, unfortunately). Those are all CS or CS-adjacent but never get asked in my experience. I've also never been asked low level networking questions.
Instead it's just tricky graph questions and list/tree manipulation questions (that aren't that hard, but they are incredibly boring). CS is such a huge field, it's truly baffling that the technical interview questions at Google and Facebook are so myopic.
You have to be a genius to correctly solve a "hard" unseen leetcode problem in 15 mins. Facebook is notorious for expecting candidates to regurgitate solutions to problems in 15 mins. Intelligence plays a lesser role than exhaustive and painful practice, which involves solving the same problem multiple times. It's a full-fledged examination that expects you to excel while being constantly watched and judged.
>It's a full-fledged examination that expects you to excel while being constantly watched and judged.
and interrupted, frequently, because that's also perfectly normal.
It's comical how this industry now thinks these arcane and often quite difficult DS&A interview processes are reasonable, and how normalized it's become for people to study for weeks before applying to a new position, sacrificing evenings and weekends just for job mobility. These processes are not even proxies for intelligence anymore, except maybe for the very few people so wired that they could miraculously jump through a handful of these with optimized solutions in 15 minutes without ever having seen and solved the problem before. I've worked with very intelligent people who qualify as geniuses, and they couldn't solve these problems under these conditions, especially not multiple of them without having at least seen the problem before.
> One last thing to throw in: it's pretty clear that there's a correlation between the top software companies and how hard their leetcode interviews are. You can claim all you want that it doesn't work, but Facebook and Google have very hard leetcode interviews and are known for the best software
How do you know that Facebook and Google have best software because of the way they do their interviews?
At their size (number of employees, number of people who want to work there) they could probably just randomly pick (or add here any other way of selecting candidates) software engineers and still get some of the best in the market that will create amazing software. Not saying this is what is happening but I am just providing an alternative explanation to underline that the conclusion hard interview => best software needs more evidence to be true.
I think the way interviews are done is a function of:
- company culture (first and foremost)
- size (how big is the company and how many people they are hiring) and churn
- their beliefs about building software (some people believe math is required, some people think engineering is required, some people believe no prerequisite is required)
- employer attraction: how much/how many people want to work there
- availability/support of employed engineers to be part of interview process
- how it started (usually big companies inherit their conception of interviews from the original founders, as they were the ones hiring the Cs)
- the country culture where the C level and top management is located
The problem is that soon enough, all these mediocre and wannabe companies started copying the FAANG process, thinking: "If we interview like FAANG, we must be like FAANG. Or at least candidates will think we're like FAANG." And: "If your process rejects 99 percent of all applications, then that means we're hiring the top 1 percent."
Google and Facebook are known for the best software? They are known for "some" exceptional software, but most of Google's stack is crap. Facebook we can hardly know, but they rely heavily on buying startups.
Yeah, but it's a "how much do you want this job?" test. Which is not necessarily crazy: if you want to retain people in the firm, it makes some sense to have them feel they worked hard for it, plus that they might not get through the eye of the needle next time.
Or even a "how hard do you throw yourself into overcoming hurdles" test, which would be more useful. Because the attractiveness of a job correlates somewhat inversely with your value to the company, since weaker applicants would be paid beyond what their ability commands.
> but Facebook and Google have very hard leetcode interviews and are known for the best software
...are they? Whenever I go on Facebook, I always have a slow, unresponsive, buggy experience just using the website, and that's on Firefox with 32GB of RAM and a 10th-gen Core i7.
I.e., if someone isn't willing to grind leetcode, their intelligence is irrelevant.
If someone is willing to grind leetcode, then yes, their intelligence is a secondary variable.
Regardless, such styled interviews are intended to minimize the number of false positives (since the companies have so many applicants it's safer to err on the side of "no"; there are more in the pipeline), which means they will also generate a tremendously high number of false negatives.
I have also worked with many companies that are pretty huge, cash-positive, with exploding growth and very complex software as their product. They made a rule of not hiring using DS&A after accidentally coming across a tier-3 college student with no knowledge of DSA who still did pretty well (and didn't jump ship either), and changed their strategy to: only test them for what you need from them.
I will give it to Google and FB that their technical scale is much much larger than that company, so they might demand engineers who can understand that.
> Facebook and Google have very hard leetcode interviews and are known for the best software
This has got to be a joke of some sorts. More like the worst software. Sure Google is super rich, they have unlimited engineers who produce a lot of code. Almost all Google products I used were crap, buggy, bloatware.
Most of the engineers at those companies (and really all companies) don't work on those projects. Some people work on truly complex projects, but those people are a tiny fraction of the entire workforce. It wouldn't even make sense for a company to allocate people that way.
Also, I'd contest the statement that Google or Facebook works on the most complex software. They don't work in fintech, medical, hard real time that I know of (waymo does, but they've been spun out), and many many more fields of SW, HW and CS.
They work on hard stuff, but don't discount the complexity that other companies deal with.
I mean, do you speak from experience or is this just more conjecture? What people on the outside fail to realize is that while individual projects might not seem complex (maybe frontend engineering at FB is less complex than PyTorch), it's the scale of the systems you have to orchestrate that is incredibly complex.
couch it however you want but you're deluded if you think your projects stack up against even a BS grad at FAANG if you're discounting their abilities just because they got in through grinding LC.
This might get you OK marks on the "problem solving" portion of a coding interview. But a programming interview will also look for signals of other skills -- did you communicate your approach? Really good candidates will verbally communicate their solution before writing code. The best often write out how a particular data structure changes with some iteration.
If you're a senior engineer, a coding interview should include some demonstration of correctness. Maybe you write one primitive test harness for a positive test case and add comments or empty methods enumerating a few other test cases.
You think "I'd use a well tested 3rd party library for this problem" would eliminate you from the running on an interview with this question?
This, and many other of these algorithm questions, are asking people to come up with solutions to problems solved umpteen times with well designed and tested libraries available for all of them.
A sign of competence as a software engineer would be to use them.
Each successful call puts the current timestamp in array[i] and increments i (wrapping around modulo the array size). On each attempted call, you check whether array[i] (the oldest stored timestamp) is more than 1 minute old. This way you are only ever checking one array element instead of potentially deleting multiple list items and counting.
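Something like this sketch, assuming a circular buffer with wrap-around (class and names are mine):

    import time

    class RingBufferLimiter:
        # Sliding-window limiter: a ring of the last n call timestamps;
        # each attempt inspects exactly one slot.
        def __init__(self, n, window=60.0):
            self.window = window
            self.stamps = [float("-inf")] * n  # slot i holds the oldest timestamp
            self.i = 0

        def allow(self):
            now = time.monotonic()
            if now - self.stamps[self.i] < self.window:
                return False               # the n-th previous call is still fresh
            self.stamps[self.i] = now      # overwrite the oldest slot
            self.i = (self.i + 1) % len(self.stamps)
            return True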
The solution described in the article is likely to be extremely wasteful in both time and memory, by allocating a queue entry for each call, and then O(n) scanning and dropping stale entries on each successive call.
Tabulating call count by division(s) of time would be less obviously problematic.
> The solution described in the article is likely to be extremely wasteful in both time and memory, by allocating a queue entry for each call, and then O(n) scanning and dropping stale entries on each successive call.
Even if n elements are scanned in the worst case, the expected time it takes to perform such a scan is O(1). This is because we perform a scan on each insertion and O(n) elements are deleted in a scan of length n. That means the number of elements scanned is proportional to the number of elements inserted.
> Tabulating call count by division(s) of time would be less obviously problematic.
Tabulating call count by division(s) of time doesn't quite work. If 9 calls are made in the last second of one division of time and 9 more calls are made in the first second of the next division, you will hit 18 calls in a two-second interval, which is over the limit of 10 calls for any one-minute interval.
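To make the amortized argument above concrete, here's a sketch of the timestamp-queue approach under discussion (names are mine):

    import time
    from collections import deque

    class QueueLimiter:
        # One timestamp per call; stale entries are evicted on each attempt.
        # Each timestamp is enqueued and dequeued at most once, so the
        # eviction work is amortized O(1) per call.
        def __init__(self, n, window=60.0):
            self.n = n
            self.window = window
            self.calls = deque()

        def allow(self):
            now = time.monotonic()
            while self.calls and now - self.calls[0] >= self.window:
                self.calls.popleft()
            if len(self.calls) >= self.n:
                return False
            self.calls.append(now)
            return True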
Do that for rate limiting and it'll be super easy to DoS you - which just shows how fundamentally you can't brute force your way out of algorithmic questions despite the article's suggestion (or how interviews are fundamentally flawed, but that is another debate ;)
> The function should have expected O(1) performance.
> Do that for rate limiting and it'll be super easy to DoS you
How so? The rate limiter has the same performance as, for example, a hash table. Operations are usually O(1), but are periodically O(n). It's not like every service that uses a hash-table is DoS-able.
Genuinely curious: does "expected" in English necessarily mean average? My understanding was that "expected" depends on the application domain, so it can mean best, worst, or average case.
If you have an implementation that is O(N) in worst case it's (theoretically) DoSable since an attacker would always hit that case - so the expected complexity in case of an attack is O(N).
A trivial solution that is O(60) = O(1) in the worst case is to store the number of calls for each second in a fixed-size array and loop over it. With some arithmetic you can even avoid looping over it.
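A sketch of that fixed-array variant, assuming second granularity (names are mine):

    import time

    class BinnedLimiter:
        # Second-granularity sliding window: one counter per second of the
        # window; the worst case is a scan over window bins, i.e. O(60) = O(1).
        def __init__(self, n, window=60):
            self.n = n
            self.window = window
            self.counts = [0] * window
            self.seconds = [0] * window   # which absolute second each bin counts

        def allow(self):
            now = int(time.monotonic())
            i = now % self.window
            if self.seconds[i] != now:    # bin is stale: reuse it for this second
                self.seconds[i] = now
                self.counts[i] = 0
            total = sum(c for c, s in zip(self.counts, self.seconds)
                        if now - s < self.window)
            if total >= self.n:
                return False
            self.counts[i] += 1
            return True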
I wouldn't call it penalty, but cost, and if credit = N, then I assume the cost of one call would just be 1. So:
gain = N * (elapsedTime / window_size);   /* elapsedTime: time since the previous call */
credit = min(credit + gain, N);
if (credit >= 1) {
    credit--;       /* spend one credit on this call */
    log(...);
} else {
    return;         /* not enough credits */
}
Your approach has the nice property that after the initial burst, you get regularly spread out log messages, whereas the linked list approach will stay bursty.
I was trying to express the sliding window constraint as a violation of the pigeonhole principle: to violate the constraint you have to have at least N+1 intervals shorter than window_size/N within the window_size.
But I'm not really sure of the value it adds over simply taking a cost of 1 as you suggest. When the times between requests are spread more than window_size/N apart, they don't cost credit. This means you have a smaller number of "hot" clients (clients who are not at full credit) to monitor.
The second idea that often occurs in problems with sliding windows is the telescoping sum, which is hiding implicitly in the sum of elapsed times.
Probably safe to say you aren't having to rate limit over a thousand endpoints from a single spot, so the linear scan sounds like a non-issue. That said, I'd hesitate to get too fancy in client-side rate limiting. It sounds like an area that will hide bugs that are stupid hard to deal with.
I don't quite see it that way, although maybe we read the problem as something different. Suppose the rate limit is 500 per second; a client makes a burst of 500 requests in one second, sleeps two seconds, then another burst of 500. Isn't the author's planned approach going to end up doing a bunch of bogus work?
- putting 500 timestamps into a queue
- inspecting 500 queue entries and removing each
- putting 500 new timestamps into a queue
Depends on the requirements. If it's enough that the client doesn't make >N requests within the same "calendar second", then counting in bins is enough, no need to store timestamps. But then you could have a client send N requests in the second half of second k and N requests in the first half of second k+1, so you'd have a 1 second period (that spans two "calendar seconds") with 2N requests allowed. Since such rate limits are often not extremely exact, just an order of magnitude, this quantization/rounding effect may be negligible and the speed may be worth it. But maybe not always. The original problem statement did not say that it's fine. Probably it's a good sign if an interviewee asks if this would be okay too.
I'd assume a linked list was used to be able to drop a lot in a single shot. In particular, seems safe to assume the queue is sorted by time. Such that as soon as you see a time that is outside the window, you can drop all of the rest in one shot.
The page wasn't loading for me, so I can't see the proposal. Just don't let a linear scan scare you, for small n.
There are various approaches that might or might not improve the runtime outlook (maybe a priority queue, maybe a list that supports fast random access for binary or exponential search, etc.) but wouldn't the only "fast" path to drop a bunch of entries hinge on moving the head reference and pushing the work onto the GC? One might start to think about something like a ring at the point where this would start to matter.
Asserting small n seems like a way to wash out of coding interviews that are focused on identifying and avoiding costly naive solutions. Interviewers are like as not to say "okay, now the rate limit is 5,000/sec/customer/operation, and there are 100,000 customers."
I'm assuming a mutable linked list where you can just set the rest from any point to null.
Though, I confess I don't see benefits of linked list for this, all told.
Sounds like the intent is to scan forward until you hit the limit of calls, or see you have hit the boundary, at which point you set the next pointer at the current node to null? Is that what you are describing?
One thing that I have noticed that FAANG doesn’t do well is: help people with sleep issues.
It’s not their job [0], but I do think there are some talented insomniacs that are great employees (given that they can mostly work on their own schedule, except for meetings). I remember that I interviewed for Facebook and I was so sleep deprived, I could barely write a for-loop. Ironically, I passed that interview but I couldn’t pass others that required some more creative thinking or more memorization.
[0] While it is not their job, FAANG companies do mention that they are accommodating candidates with mental health issues, and I’d argue that insomnia (diagnosed or undiagnosed) should be one of them.
The anagram solution leaves the actual hard part unfinished^. Also, to solve in O(n) you need a bit array (or a database like Postgres that can do an XAND on a string).
> Sort the characters of the words alphabetically. Since anagrams are all made up of the same letters. This will give us the same string for any pair of words that are anagrams of each.
Correct.
> Produce a dictionary of the number of times each letter occurs in each word.
Well, yeah...algorithms aren't the hard part for this kind of question. "Produce a dictionary of the number of times each letter occurs in each word." is heading in the wrong direction. You want an encoding that does not care about the number of times each letter occurs in each word.
Take a string of zeroes, which is (maxWordlength*26) long.
Each segment of 26 represents a compositional letter from the word. So 100000 00000 00000 00000 00000 is 'a'
Now you can create your dictionary from the sorted word indexes.
pin becomes inp becomes
000000 00100 00000 00000 00000
000000 00000 00100 00000 00000
000000 00000 00001 00000 00000
same as nip (becomes inp, etc)
You can add 26 zero sets, to pad out to the length of longest word, if words are variable length.
^We're going to assume normalized words, which are all lower case and no punctuation, comprised of English letters from the 26 character alphabet. Getting the set of all of the possible words of any given length, is also quite an exercise.
> "Produce a dictionary of the number of times each letter occurs in each word." is heading the wrong direction. You want an encoding that does not care about the number of times each letter occurs in each word.
How so? "aab" is an anagram of "baa", but they are not anagrams of "ab".
You aren't "counting" the number of characters. You're making an encoding that includes that information, without the aggregation. The aggregation is irrelevant as is the order of duplicate characters.
>which is (maxWordlength*26) long. Each segment of 26 represents a
>^We're going to assume normalized words, which are all lower case and no punctuation, comprised of English letters from the 26 character alphabet. Getting the set of all of the possible words of any given length, is also quite an exercise.
I hate solutions like this because they'd never be even close to viable in the real world.
You could use a lookup table to turn the nth letter of the alphabet into the nth prime number, then multiply those primes together. This "hash" would be unique up to reordering of the primes/letters.
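A quick sketch, assuming lowercase a-z input (in Python, arbitrary-precision ints mean the product can't overflow):

    # nth letter -> nth prime; products are equal iff the multisets of letters are.
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
              43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]

    def anagram_key(word):
        key = 1
        for ch in word:
            key *= PRIMES[ord(ch) - ord("a")]
        return key

    print(anagram_key("pin") == anagram_key("nip"))  # True
    print(anagram_key("aab") == anagram_key("ab"))   # False: multiplicity matters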
Sorry, I don't understand your comment. Why would the proposed "naive" solution of generating a dictionary not work in O(n) time? What is gained by this bit encoding?
What dictionary? How do you generate it (handwaving notwithstanding, as mentioned)? It all comes back down to encoding, rather than aggregation of letter count.^
> XAND does not exist?
XAND being a convenient way to mean binary AND. I thought it was easy to infer. Taking the binary encodings, you can do a binary AND to determine if a set is an anagram subset (or full anagram) with more flexibility.
^Create a hashmap (any searchable K/V will do, redis, postgres, etc). For each word, check to see if encoding exists in set. If not, add key of encoding and value of word. If it exists, you have a match on the search word and the found key's value. Aggregate on a hashmap there for your final output, to avoid adding the same words multiple times.
> What dictionary? How do you generate it (handwaving notwithstanding, as mentioned)? It all comes back down to encoding, rather than aggregation of letter count.^
OK, first, I'm not sure your original post presents the right reading of the blog post. Sorting and using a dictionary are given in the blog post as two separate solutions, not two components of one solution. (Sorry if I'm mistaken – but it seems to read this way to me?)
Regarding your concerns about dictionary generation: I might be missing something, but this doesn't seem so hard? The first thought that comes to mind is to associate to each word a 26-tuple of integers that counts the number of appearances of each letter of the alphabet. This can be done in O(N) time, since it's an O(1) operation for each word (length of words is assumed to be bounded above by some constant). Then
1) Create a (multi)-dictionary with these tuples as keys. Insert is O(1) in the average case, you're doing it N times, so this step is O(N).
2) Go through the dictionary by key and delete all singletons. Again O(N).
3) Print the dictionary by key to get the groups of anagrams.
This is O(N) on average. It's not immediately clear to me whether one can do O(N) in the worst case – are you claiming that your way does that?
[Of course, this is not so far off from what you're doing. But it illustrates the claim that bit arrays are not necessary.]
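For what it's worth, steps 1-3 above fit in a few lines of Python (assuming lowercase a-z words; names are mine):

    from collections import defaultdict

    def group_anagrams(words):
        groups = defaultdict(list)
        for w in words:
            counts = [0] * 26
            for ch in w:                       # O(1) per word: bounded length
                counts[ord(ch) - ord("a")] += 1
            groups[tuple(counts)].append(w)    # tuple keys are hashable
        # drop singletons, keep the groups of anagrams
        return [g for g in groups.values() if len(g) > 1]

    print(group_anagrams(["pin", "nip", "tap", "pat", "dog"]))
    # [['pin', 'nip'], ['tap', 'pat']]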
After practicing a bunch of Leetcode questions (around 300), watching videos from hiring managers who claimed to work at FAANG, and even watching FAANG's own example interviews, I am actually more confused about whether Leetcode is the thing they are looking for or not. Here is what I observed:
- Some Leetcode questions require tricks or complex algorithms that you absolutely won't find in the span of 45 mins. If you do, either you are an ultra genius who can derive and modify a variation of Kadane's algorithm in 45 mins (even though it is a simple one), or you have just seen the question before and somewhat memorized it. Obviously the answer is the latter. In this case, what are FAANG companies actually looking for? Proof that a candidate has finished hundreds of questions?
- I've seen example interview videos on YouTube, and they all present easy-to-medium Leetcode questions. The onsite interview at FAANG is mostly hard questions. So there is a disconnect between how FAANG actually interviews and how these interview samples are conducted. And again, optimally solving a hard question you've never seen before requires way more than 45 mins, unless you've seen the question before, which goes back to no. 1.
- Hiring managers or recruiters that gave advice. Saying stuff like "talk a lot, talk to the interviewer, interview is a 2 way street" and other feel-good self-help stuff that has nothing to do with the actual interview and don't help at all. Claimed that what the companies are looking are your thought process. Nope, lies. What companies are looking is whether you can solve this problem optimally in 45 mins. If you have thought process, speaking during the interview, etc, and you couldn't even finish the question in brute force, then you fail. If you able to finish the question with brute force, but not optimal, you also fail.
- Experienced senior engineers that definitely rusty on Leetcode and just did basic DS&A brushup and definitely couldn't pass Leetcode hard, got offered with TC way higher than these fresh grads.
So, I'm quite confused on what are the companies really looking for? I think I concluded that FAANG companies are indeed practicing what they say "eliminating false positives and have a really high false negatives". Therefore the whole interview Leetcode fiasco is just a number's game.
A few things:
- If you are a senior engineer with a track record at other companies, then you don't need Leetcode to prove your worth. Statistically, you are already proven and can perform on the job.
- If you are not a senior engineer from another FAANG company, then you just have to play the numbers game. I.e., keep interviewing (and Leetcoding) until the combination of your studying, the mood of the interviewer, and the questions that come up on that particular day rolls in your favor.
For the second problem, I had an idea for a different hash function. Basically map each letter to a prime number (e.g. c = 5, d = 7) and then multiply the values in a word together. I guess depending on the length of the words it might overflow. But I thought it was interesting.
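For what it's worth, a quick sketch of that hash, assuming lowercase a-z input. In Python the product can't overflow, since integers are arbitrary precision; in a fixed-width language it easily could:

    import math
    import string

    # map 'a' -> 2, 'b' -> 3, 'c' -> 5, ... (the first 26 primes)
    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
              43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]
    PRIME_OF = dict(zip(string.ascii_lowercase, PRIMES))

    def prime_hash(word):
        return math.prod(PRIME_OF[c] for c in word)

    # by unique factorization, two words are anagrams
    # iff their prime products are equal
    assert prime_hash("listen") == prime_hash("silent")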
I gave this very solution in a Google interview many years ago. It does assume the input characters come from a limited set, e.g. the English alphabet. The interviewer said they hadn't heard that answer before; it wasn't what they were expecting, and it took them a while to be satisfied that it would work. I was offered the job but didn't end up taking it.
Rate limiter: I would probably use a hash table with this structure -- called[yyyy-mm-dd][hh-mm] -- and then increment the entry for that minute, for example called[2022-01-02][22-01]++, dropping the previous day's entries at the end of each day.
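Taken literally, something like this sketch (the limit of 10 calls per minute is hypothetical, and as the replies below point out, this counts fixed windows rather than rolling ones):

    from collections import defaultdict
    from datetime import datetime, timezone

    LIMIT = 10                 # hypothetical: 10 calls per minute
    called = defaultdict(int)  # "yyyy-mm-dd hh:mm" -> call count

    def allow():
        minute = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
        if called[minute] >= LIMIT:
            return False
        called[minute] += 1
        return True   # (dropping yesterday's entries omitted for brevity)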
Post doesn't mention this, but one other common pattern I've seen requires a tree data structure, and those interview questions can be solved with recursion. Basically, once you have a tree data structure, you write an f(node) that simply calls f(left) and f(right) recursively -- or for-each(child in children) { f(child) } for non-binary trees.
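As a hypothetical illustration of the pattern, counting the nodes of a tree:

    class Node:
        def __init__(self, *children):
            self.children = list(children)

    def tree_size(node):
        # f(node) simply calls f(child) on each child
        if node is None:
            return 0
        return 1 + sum(tree_size(c) for c in node.children)

    print(tree_size(Node(Node(), Node(Node()))))  # 4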
> Rate limiter: I would probably use a hash table with this structure -- called[yyyy-mm-dd][hh-mm] -- and then increment the entry for that minute, for example called[2022-01-02][22-01]++, dropping the previous day's entries at the end of each day.
Why on Earth? Given that the timestamp cannot randomly jump backwards, you can achieve the same results by just storing the timestamp of the first call after a reset, and the number of calls sharing the same date/hour/minute, resetting it to 0 when the minute advances.
That said, it won't be entirely what was asked, since it won't have the rolling behavior.
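Sketched out (the limit of 10 per minute is again hypothetical), it needs only two variables instead of a growing table:

    import time

    LIMIT = 10            # hypothetical fixed-window limit
    window_minute = None  # minute of the first call after a reset
    count = 0

    def allow():
        global window_minute, count
        minute = int(time.time()) // 60
        if minute != window_minute:   # the minute advanced: reset
            window_minute, count = minute, 0
        if count >= LIMIT:
            return False
        count += 1
        return True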
> Rate limiter: I would probably use a hash table with this structure -- called[yyyy-mm-dd][hh-mm] -- and then increment the entry for that minute, for example called[2022-01-02][22-01]++, dropping the previous day's entries at the end of each day.
As Bradley said, this implementation allows more than 10 calls within a rolling one-minute window. If it's called 9 times in the last second of one minute and 9 times at the start of the next, that's 18 calls within a two-second interval.
I agree with your answer on the rate limiter, but I think it depends on whether the spec is for rolling or fixed one-minute windows. The question is ambiguous, but the sample answer implies that we want to track rolling windows.
If that's the case, an improvement on the answer is to use a ring buffer instead of a linked list, which allocates a fixed set of memory up front (slower to initialize) but then is allocation-free on each invocation.
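A minimal sketch of that idea, assuming a limit of 10 calls per rolling 60-second window (both numbers hypothetical):

    import time

    class RollingRateLimiter:
        def __init__(self, limit=10, window=60.0):
            # fixed allocation up front: a ring buffer holding the
            # timestamps of the last `limit` calls
            self.limit, self.window = limit, window
            self.times = [float("-inf")] * limit
            self.i = 0  # index of the oldest stored timestamp

        def allow(self, now=None):
            now = time.monotonic() if now is None else now
            if now - self.times[self.i] < self.window:
                # the oldest of the last `limit` calls is still inside
                # the window, so one more call would exceed the limit
                return False
            self.times[self.i] = now            # overwrite the oldest
            self.i = (self.i + 1) % self.limit  # advance to the next oldest
            return True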
Maybe the title should be "An Algorithm for Passing Algorithm Programming Interviews". This wouldn't help you pass our Programming Interview in the slightest.
Or don't agree to interviews that include algo challenges, leetcode/hackerrank nonsense, array shuffling shenanigans. Choose companies that put thought into their evaluations.
Just my $0.02: I'd much rather spend a few weeks refreshing basic data structures and algorithms than do the "non-algo" challenges I've gotten, because those feel way less objective than questions with a correct answer.
Most of the time the alternative to algorithm questions is some form of take-home assignment that's usually multiple hours long and that you get no feedback on or chance to correct.
For example, I interviewed with a company that asked me to design a program to parse some CSV data, and I was told that in the next round I would pair with an engineer to expand the code. The parsing was trivial in Python, and I submitted my solution with full unit tests and what seemed to me a sensible design, but it was as useless an exercise as asking me algorithm questions, because on the job I'm almost always going to adjust my coding style and paradigms to the context and needs of the company.
I commented my code to explain that under different constraints on the data size or API I would make different design choices than the ones I submitted, but it didn't matter: I received a rejection less than 24 hours later and never got to the round where I would actually walk through the code with an engineer. It was a complete waste of multiple hours of my time spent trying to produce thoughtfully designed code. I'd much rather bang out a breadth-first search in 45 minutes.
On one hand I like take-home exercises because they are closer to the real-world job and I can guarantee the quality of my deliverables; so far I have passed every single take-home exercise thrown at me. On the other hand, the time I've spent on them is abysmal. Summed up, I must have spent over a month working on take-home exercises, and even though I passed all of them, in the end no company gave me an offer, because I wasn't a "cultural fit" or something. One company even ghosted me after having me do a one-week take-home exercise in which I had to build an entire app, front end and back end, consuming an open data API with a million rows.
I still prefer take-home exercises because they are more realistic and make more sense than timed algorithmic exercises. But I wish companies would do their cultural-fit screening _before_ having candidates spend their time on code challenges.
The interview is about interpersonal skills as well, even in the case of "technical" tests.
I've given specially crafted answers based on what I could tell about the interviewer and what he wanted to hear.
Your solution might have looked simplistic/smug and that's why they rejected you.
Was that a good reverse filter for you?
For me it wouldn't have been, and I'd have given them the sensible solution within the scope of the interview.
I had the same thoughts, but they changed over time. Without leetcode, it quickly comes down to very specific questions about the interviewer's favourite thing and its quirks. A complete hit or miss.
I would add tries to this list - I was asked trie-based questions in 4 of the companies I interviewed with over the last couple of years, including a FAANG.
The "algorithm" I wrote in this post is a formalized version of some problem solving techniques I picked up from the classic book "How to Solve It"[0]. In particular "working backwards". You start from the goal you want and see what you know that is applicable to getting you to goal.
Right? When I'm interviewing, I care most about the thought process and the ability to communicate in depth about something technical; the actual solution isn't really that important at all.
This is the thought process. I use something very similar and have done well in interviews.
I found I did better in interviews when I showed my thinking less, or adapted it to the interviewer. Stating the things I'm not doing is useful, but too risky. Just hearing the words "linked list" in a problem that can't be solved with linked lists (even in the sentence "a linked list doesn't seem right here") startles some interviewers. A high enough percentage of them that I don't want to roll that die six times during a round.
The silliest example is I like to work on problems by writing code. Yes, even if I don't understand the problem perfectly. Seeing some code helps clarify my thinking and I throw most of it away.
Freaks. People. Out.
Writing down a wrong line of code may as well be blasphemy. Strangely, that's not the case with comments. So my technique in interviews was to write comments that were as equivalent as possible to code, then convert them to regular code when I felt things were close enough. The biggest challenge in Python is not making them so similar that it becomes obvious what you're doing.
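A hypothetical example of what that looks like, comments first, then the converted code:

    # group the words by their sorted letters
    # append each word to the dict entry for its key

    # ...which, once things feel close enough, becomes:
    words, groups = ["eat", "tea", "tan"], {}
    for word in words:
        key = "".join(sorted(word))
        groups.setdefault(key, []).append(word)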
lol yeah having been through these loops and training to do these loops it really seems like most people say they want to see “thought process”, but what they’re really looking for are the usual cues you see in all the prep info online.
E.g. ask some clarifying questions before coding, write or suggest a brute force, and then write an optimized algo.
Without clarifying questions, you often assume something about the problem that's not actually the case. And a brute-force solution is often very useful for learning more about the problem domain. Personally I really like doing a bad solution before a better one, especially in a setting that's not completely whiteboard, where I can run and test my code: having a brute-force solution around to debug against is useful.
Not sure I understand your concern. Are you asking about the worst case? I believe all hash tables are O(n) worst case, because of the possibility of collisions (or the need to resize on an insert). Most people talk about amortized performance though (i.e. when you've succeeded in making the worst case a rare event). Amortized time is obviously dangerous in a real-time system (e.g. games), where it manifests as stuttering or latency spikes, or potentially even worse if you have to respond to hardware events in a timely manner for proper operation.
It's possible to gradually resize a hash table to prevent spikes.
For collisions, if you use a good hash then you can adjust your load factor and chaining strategy to make sure it's never a problem. Where 'never' is 'less likely than the computer spontaneously self-destructing'.
> It's possible to gradually resize a hash table to prevent spikes.
Can you provide details? The only trick I know of is the same one as with arrays, where you overallocate. That still doesn't get rid of spikes, and the C++ STL and Wikipedia both list the worst-case time as O(size of hash table). It's also not enough to just have capacity: hash tables will often abort the search for a slot once they've probed far enough, in favor of resizing and rehashing the table.
> For collisions, if you use a good hash then you can adjust your load factor and chaining strategy to make sure it's never a problem. Where 'never' is 'less likely than the computer spontaneously self-destructing'.
Yes, but the vast majority of implementations don't do a good job here. Howard's "Types Don't Know #"[1] paper is extremely well thought out, and I have yet to see it adopted (except by Rust, which has a good track record of adopting good ideas like this, having had the benefit of coming late to the game). I think it's safe to say that most hash tables don't have good worst-case times (which, combined with the resizing behavior of insertions, means you get super-linear worst-case times on insertion).
> Can you provide details? The only trick I know of is the same one as with arrays, where you overallocate. That still doesn't get rid of spikes, and the C++ STL and Wikipedia both list the worst-case time as O(size of hash table).
When you want to grow your hash table, go ahead and allocate a bigger but blank chunk of memory. (You can either design your memory allocation system to make this fast, or you can prerequest the memory when you're x% away from full.)
Put your new item in the big chunk of memory.
Every time you search for something, check both chunks.
Every time the hash table has to allocate a new slot, move over four items from the small chunk of memory to the big chunk.
Eventually the big chunk of memory will be 65% full and the small chunk will be empty, and then you deallocate the small chunk.
And that'll work forever just fine.
The ability to shrink is only slightly more complicated, exercise for the reader.
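A toy sketch of the policy in Python. Dicts stand in for the raw chunks of memory and the trigger for starting a resize is abstracted away, so only the "two tables, move a few items per insert, check both on lookup" mechanics are shown:

    class GradualHash:
        MOVES_PER_OP = 4  # items migrated per insert

        def __init__(self, old_table=None):
            self.small = dict(old_table or {})  # the chunk being drained
            self.big = {}                       # the bigger, blank chunk

        def insert(self, key, value):
            self.big[key] = value        # new items go in the big chunk
            for _ in range(self.MOVES_PER_OP):
                if not self.small:       # migration finished
                    break
                k, v = self.small.popitem()
                self.big[k] = v

        def lookup(self, key):
            # every search checks both chunks
            if key in self.big:
                return self.big[key]
            return self.small.get(key)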
> It's also not enough to just have capacity: hash tables will often abort the search for a slot once they've probed far enough, in favor of resizing and rehashing the table.
Either use methods that won't abort, or use a secure hash and tune the hash table so you can make a statistical guarantee that the chance of an emergency resize is a million times smaller than the chance of the computer spontaneously self-destructing.
Edit:
> C++ STL and Wikipedia
It's no surprise the STL wouldn't do something as fussy as keeping two most-of-a-hash-tables behind the curtain. As for Wikipedia, it has a whole section on how you don't have to resize all at once; the O(n) at the top is an oversimplification.
The trick with big-O and hash tables is to remember that they're amortized O(1). Imagine a bag with the cost of resizing sitting inside it, then spread that cost across all the inserts. With capacity doubling, the total resize work over n inserts is O(n), so each insert pays O(1) on average.
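A quick way to see it, assuming capacity doubling and counting the total work for a million inserts:

    n, cap, work = 10**6, 1, 0
    for i in range(n):
        if i == cap:      # table full: double it, copying `cap` items
            work += cap
            cap *= 2
        work += 1         # the insert itself
    print(work / n)       # ~2: bounded by a constant, i.e. O(1) amortized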