* Practice questions by company on LeetCode: sort by frequency over the last 6 months and work down the list; do maybe 75-100. The list updates once a week.
* Search for the company on the LeetCode forums and sort by most recent. If a question is not on LC yet it will likely get posted there, so you can get really fresh intel.
I've read of people doing this and getting like 6/6 questions they've seen before.
Interview questions don't rotate that frequently, especially for smaller companies or more specialized roles, and a $60 membership for a month will buy you internal referrals and potentially land you hundreds of thousands of dollars of value in a new position.
If you've seen the question/answer before just say so! I will totally appreciate the honesty and it goes a long way.
Why? You're testing their ability to produce the right answer to a given problem - not their problem solving ability. To that end it shouldn't matter if they've seen the problem or not.
I always find it hilarious when recruiters say that "getting the optimal solution isn't everything." I've failed numerous interview rounds where due to time constraints or implementation details I'm not able to completely code the optimal solution, but I am able to talk/walk through each step of it in pseudocode with the interviewer. By your own criteria, being able to clearly explain the solution and demonstrate an understanding of the different tradeoffs should count for much more than just being able to copy/paste the solution from memory, but I've never advanced in any round that finished without a "working" piece of code.
Honestly, the one thing I appreciated about FB/Meta's recruiters is that they were always honest about the process and what was expected - 2-3 Leetcode mediums/hards in 45 minutes and they only care about optimal solutions. I much prefer that to disingenuous sentiments of "getting the right answer is important, but we also want to see your thought process and how you might work with another engineer on the team."
That's how it works at all the companies I've hired at.
It doesn't matter if you don't get to the end of the problem, I just need to see you can think and that you know how to code.
Do you think your daily job will require more of you than that?
And believe me, this is enough to filter out plenty of bad apples.
Poor performance in my experience was never about not being able to solve a technical problem, it was always personal issues / not having motivation / hating the environment.
A counter data point: I recently passed Google's interview a few months ago. In one round, I was asked to solve a certain kind of bipartite graph matching problem; not being a fresh new grad with ACM medals, I obviously couldn't solve it in polynomial time and just implemented a brute-force solution in the end. In another round, I came up with a solution the interviewer said they've never seen before, although it could be slightly slower than the "optimal" solution he had in mind depending on the input size.
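For reference, the brute-force fallback described here fits in a few lines. This is a generic sketch (the adjacency-dict representation, mapping left vertices to lists of right vertices, is my own assumption), not the actual interview question:

```python
def max_matching_bruteforce(adj, n_left):
    """Brute-force maximum bipartite matching: for each left vertex,
    either leave it unmatched or match it to any still-free neighbour.
    Exponential time, but simple enough to get right under pressure;
    an augmenting-path algorithm like Hopcroft-Karp would do the same
    job in polynomial time."""
    best = 0

    def assign(u, used):
        nonlocal best
        if u == n_left:
            best = max(best, len(used))
            return
        assign(u + 1, used)          # option 1: leave u unmatched
        for v in adj.get(u, []):     # option 2: match u to a free right vertex
            if v not in used:
                used.add(v)
                assign(u + 1, used)
                used.remove(v)

    assign(0, set())
    return best
```

On small inputs this agrees with the polynomial-time algorithms; e.g. `{0: [0, 1], 1: [0]}` has a perfect matching of size 2.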
As an interviewer in my last company, I always made sure the solutions were well motivated, and have rejected candidates who could write down the perfect solution but couldn't explain why. If I were to be asked by the candidate for the specific runtime I was looking for, I would probably just reply with "don't worry about it, just try your best" or "let's focus on correctness first and worry about efficiency and other trade-offs later".
Testing for problem solving ability is hard, but that's still one of the key signals we wish to extract whenever possible.
Pretty sure most people want to test problem solving ability, and hopefully your problem solving ability solves the problem correctly. If your method of solving the problem is to find the answer online and repeat it... that may not be how the company wants you to solve their problems.
Yes, day to day I'm not copy and pasting huge blocks of code from Stack Overflow, but when I need an answer and I don't immediately know it my first move is always to search internally or externally for others who may have already shared it.
Why is being able to effectively search for answers not considered good problem solving?
Memorizing interview questions is decidedly not "effectively searching for answers".
Effectively searching for answers requires breaking the problem down into separate pieces that you can actually search for. This is one of the skills that can actually be demonstrated during the interview. And then showing that you can also come up with the solution (or be guided towards it in discussion with the interviewer) is the natural way to round it out, instead of "ok, now google for this sub-problem while I'm watching you".
Go ask your manager if you should copy and paste something you found on a Chinese forum as 100% of the output you generate. Nothing original, no actual thoughts of your own, JUST copy-paste.
Searching for and finding answers yourself is not the same as blindly copy/pasting, which is basically what the scripted forum answers are.
I'm not convinced the coding interviews have improved upon the standard interview style in any meaningful way.
I did that once at FAANG interview, instead of honesty credits I felt like the interviewer just got annoyed by having to come up with another question.
The interviewer posted an LC question and asked me to read it. Since I was already logged into my LC account, he first asked me to show if I had solved it. I said I did. He then posted 9-10 LC questions one by one, all of which I had solved (I was doing LC regularly then). In the end he got tired, and posted a question from another website (Hackerearth) which I hadn't solved. We ended up taking ~5 minutes just going through different LC questions and he was disappointed that I had solved all of them.
I have also faced situations where I have seen an LC question that I solved but couldn't solve it in an interview setting, mostly because of the pressure.
In my experience failing to answer the alternative question you give me has (on average) a much more negative impact than pretending I don’t know your question (especially when I can explain it).
As an interviewer though, there were a few instances where I just told people "let's do this problem anyway, don't worry" and the candidates didn't always do a good job.
This is what I've done as an interviewer. But then, my questions actually lend themselves to that. Something that the sample LeetCode questions just don't.
In my experience (as interviewer) LeetCode kind of questions are "gotcha" kind of tests: Either you know "the trick" or you don't, but there's no real constructive value.
In contrast, I prefer tests like: let's write a Tic-Tac-Toe, Snake, Twitter clone (no DB, in memory), etc. in 30/60 minutes together on your computer, with your IDE, your software, Google, and language of choice. I can do a quick coding session with the person and see their weaknesses and strengths, and even if they have done it before, looking at their real project coding style is super useful.
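For what it's worth, the interesting core of such a Tic-Tac-Toe session is usually the win check. A minimal sketch, assuming the board is stored as a row-major list of 9 cells (that representation is my choice, not the commenter's):

```python
def winner(board):
    """board: list of 9 cells, each 'X', 'O', or None, in row-major order.
    Returns the winning symbol, or None if nobody has three in a row."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None
```

Enumerating the eight winning lines up front keeps the check to one loop, which is the kind of design conversation this interview format is meant to surface.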
Which should be a clear indicator that interviews aren't only judging problem solving skills, but also the candidate's ability to withstand the pressure of being watched and judged while they solve complicated problems.
For the interviewer, it's just another day and sure, they "want the candidate to succeed" and all that, but for the candidate, their future and livelihood are on the line and that's tough for some of us to just ignore while we focus on the not-easy school quiz problem of merging overlapping intervals or whatever.
In a perfect world every interviewer would calibrate for those minutes lost, but that calibration is fuzzy at best unless it’s built in to your scoring rubric.
In reality no one will do this, though. There is way too much of an incentive on the candidate side to lie and work through it as if they haven't seen it before.
It's not "cheating".
> If you've seen the question/answer before just say so!
Haha good one.
Not far enough to get a job in most cases, at which point, after the rejection, you and I are likely done and won't talk again. On the remote chance we cross paths again, the honesty might be worth another interview (if you're still in that position), but not much more. Person-to-person, I know the honesty is appreciated, but here's why candidates are not likely to be honest and simultaneously have no moral/ethical failing during interviews:
You're an agent of the company with a power dynamic over candidates when interviewing. You can't be honest with us via feedback, even if you wanted to, because there's potential legal problems for the company if you are. Many companies don't give feedback as policy for these reasons. So, knowing that I know that you can't be completely honest with me, I only hurt myself by being honest by telling you I've seen the problem before.
If I already knew the answer, but can't answer 'why?', then that's on me - I likely don't fully understand the solution in that case. Reject those people, sure. But they have no obligation to be honest with you. Similarly, if I already knew the answer and disclosed that fact, I might get a tougher question. So there is a real risk that I hurt my chances by being honest.
The risk of getting a tougher question is important. For it to be really fair, you'd have to grade these questions and determine if the next question is of similar difficulty. But your interview and problem grading process (if you have one) is not likely to be disclosed to me. The honesty would be appreciated, but we both know that interviewers aren't going to share those details chiefly because it defeats the purpose of the test, but also it'd end up on a forum if you're a well-known company leading to more cheating.
If you were giving a well-known test developed by professionals (GRE, GMAT, LSAT, SAT), there are significant resources to show me that those are fair tests developed by academics and other professionals. I would trust those testing admins to substitute questions of a similar difficulty. Were that the situation with your interview, the risk of getting a tougher question by being honest is significantly diminished because I know you have a bank of questions with accurate difficulties attached.
I'm not arguing that you should change your process. A candidate unable to answer 'why?' is a perfectly good reason to say the candidate failed the question and maybe the interview. I'm arguing that this really shouldn't be considered cheating or a moral/ethical failure.
The way we do it is like this: we have like 3 or 4 different small variations on each question. Such that the solution is measurably different, in quite telling ways, but that the given problem looks almost identical.
In one specific case the given is identical, but there are 3 variations to the question based on how the candidate asks questions about it (if they don’t ask questions the question is not possible to solve as we don’t give all of the information you need.)
We started doing it this way precisely because we kept running into people who would have 3 nearly perfect interviews and one “hard fail”, and we eventually realized it was because they were so good at faking that they’d seen it before but if they hadn’t seen it before they bombed it hard.
So now that we have the “variations”, at least once a month someone will “hard fail” the interview because they’re obviously cheating and will quite literally give the right answer to the wrong question, just rote memorizing it.
It’s an arms race. And one that I enjoy.
Your candidates show that after learning how to solve a problem, they can demonstrate they're able to solve it.
Have you considered just hiring candidates and then training them, or expect them to learn approaches that are new to them?
Right now, you're pretending that your company needs random puzzles solved, and they're pretending that they're able to solve random puzzles without looking them up in an algorithms book.
What's the point of this whole theatre?
I get that your ego is enjoying that, but is that providing your company really any value?
It's impossible for anyone to be an expert in every application that my team handles. The key for us is that we try to keep our applications relatively simple with how data moves from point to point. Orienting yourself with new environments and applications significantly increases productivity here. It's always good to have people who can recognize and apply logic to patterns, but knowing how to ask questions is important. It isn't about the "gotchas". It's about what happens after the person is stuck. We try to make sure our applicants can make some assumption or ask clarifying questions about ambiguous portions.
If your company requires only the former that's fine. But if you also require the latter, that's fine too and it's ok to test for it in your interviews.
If a company generally requires candidates to be overqualified for their intended role, that's a bit dumb. But I imagine that such a problem would eventually be fixed by the free market (supply of workers at various qualification levels vs. demand for said qualifications).
The way I read it, they’ve shown they can learn the solution to a problem.
It’s like asking “what’s 43 × 57?”, getting “2451” as a reply and, from there, assuming they’ll be able to calculate “42 × 58” or “41 × 59”, too. If they memorized just “43 × 57 = 2451”, that conclusion may be (and likely is; most people who know how to multiply won’t memorize such products) incorrect.
The parent of your post is talking about a situation where they've demonstrated they've memorized a solution to a particular problem, not that they can solve it. And that it was the wrong solution, which they didn't notice. It's that last bit in particular, combined with not being able to adapt or create a solution for the actual problem, that is the hard fail.
I can't imagine my personal interview questions & style are common enough to have shown up on any of these sites, but I have personally witnessed two people (out of a few dozen) who knew only and exactly what was on their school curriculum, but were completely incapable of stepping outside of it. I come at interviews from the point of view that I'm trying to discover what they can do, not what they can't, so when I say that, bear in mind that I didn't hand them a problem and smugly watch them fail for 30 minutes; I actively tried to formulate a problem they could solve, and failed.

I've also witnessed someone persistently applying a book solution that was the wrong solution to the problem. Perfect depth-first search, but it should have been breadth-first, they couldn't seem to understand my explanation of that, and they shouldn't have put it in a tree in the first place. (It would have worked if the correct search was applied, but it was massive overkill. It was something like finding a substring in a string... yes, you can represent that as a tree problem, but you're only making things worse.) They nailed the definition of a graph and basically just wrote out the depth-first algorithm as quickly as they could write... but it was wrong, and the moment they stepped off the path they were completely lost.
I also don't do brainteasers like this, I focus a lot more on very practical things, so we're talking more like "failing to write code that can take the string 'abcd,efg' and split it on the comma without hardcoding sizes, either handwritten or by calling library code". I really want to start an interview on some kind of success but every once in a while a candidate gets through that I simply can't find the place to start building successes on at all.
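Concretely, the bar described here is something like the following (Python chosen purely for illustration), either with library code or a hand-rolled loop:

```python
def split_on_comma(s):
    """Hand-rolled split: no hardcoded sizes, handles any input length."""
    parts, current = [], []
    for ch in s:
        if ch == ',':
            parts.append(''.join(current))
            current = []
        else:
            current.append(ch)
    parts.append(''.join(current))
    return parts

# The library call gives the same result:
assert split_on_comma('abcd,efg') == 'abcd,efg'.split(',') == ['abcd', 'efg']
```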
(You have to remember the "evaporative cooling" effect when reading interview stories. Good candidates who even go through an interview process do one round of interviews and get snapped up immediately. People who have good-looking resumes but, alas, graduated with insufficient actual skills interview over and over again. The end result is that the average interviewee, anywhere, is not very good. One of the preceding interviews I'm referring to emotionally wrecked my day; I felt that bad for someone who had clearly put in immense amounts of (all the wrong kinds of) work but whom I couldn't even remotely hire. But there's nothing I could do about it.)
I should also mention I run a style of interview where if I ask you a question and you blast it out of the park in a few seconds, great, you (metaphorically) get full points and then I move on to the next harder variant on it. And I can always make up a harder variant of something we can talk about for an hour or two. If I can't find something you can't blast out of the park in an hour or two, well, unless you're completely insufferable or your salary expectations are something I can't meet, expect the offer shortly. But what you'll be "blasting out of the park" will be something like a full round-trip delivery of a service with code-as-infrastructure and a database and user logins and a solid security design and so on and so on, not solutions to Project Euler problems.
Why do them?
Are you really facing those problems frequently enough at FAANG to need to know them? Is it uppity engineers? Gatekeeping? Or are you just getting so many applicants that you have to filter somehow, and leetcode interviewing has some nice properties (easy to apply remotely, can be done by engineers, strong pass/fail criteria)?
Genuine interest. Ignore the negative subtext, that's just me on this subject.
But also, it's a good measure of someone's ability to take an abstract problem and solve it. There are lots of mini decision points in a LC problem to think critically about. Like you said, we need a filter, and it's really easy to use LC for that.
That said, I've worked at FAANG with co-workers who had trouble using iterators or properly assessing complex Boolean logic (and I'm not talking about needing de Morgan), so sometimes LC skills are needed on the job. So getting a signal that "this person can't write loops" means "we don't trust this person not to write an infinite loop", however rare that day comes.
There are enough programmers who want FAANG jobs, it's easy enough to apply, and the pay is high enough that you should be free to gatekeep for someone who understands intro-to-Java-level data structures and algos. Maybe leetcode-hard is unnecessary, but easy should be doable.
I really don't get shareholders at big tech companies.
They could hire a few teams of talented engineers and a trove of cheap developers and forget all of this.
Almost nobody needs leetcoding engineers, 0.1% of their tech is hard, the rest is all forms and api, as is most of the industry.
Objectively, a waste of everyone's time and energy is what it is.
This sounds like focusing on the wrong thing in the interview process.
Perhaps I should throw out my CLRS and Skiena and invent everything in those books from scratch?
It's an arms race because people like you have turned it into one. You're not solving Nobel-prize-winning problems, so stop expecting people to magically invent novel algorithms on the spot for things they've never seen before.
Some people are good at it and some are not.
>We started doing it this way precisely because we kept running into people who would have 3 nearly perfect interviews and one “hard fail”
So which is it? Obvious they've seen the question before, or only obvious after they fail on an unseen problem...
Your arrogance and hostility is hilarious.
For at least one round of interviewing, let me (the interviewer) use my own custom question, where the goal is not so much to solve it but rather to reason out loud collaboratively about many different aspects of the question.
I like to use 3d graphics as a domain that candidates most likely haven't seen before, but sufficiently motivated/smart ones can hold their own in. If someone doesn't quickly and intuitively grasp that a shape is a collection of faces, and a face is a collection of points, and a point is a collection of vertices, I'm not sure that they have what I am personally looking for (even though we don't do any 3d graphics in our project).
Hmm, well, that wouldn't be very ethical, would it? Guess we'll just have to leave things not working very well.
What part of jsiaajdsdaa's process do you think is more efficient? To me it sounds less objective ("the goal is not so much to solve it") which seems like it would make the process less efficient when assessing candidates.
Although for other things related to the job not that useful.
That said I was talking about your comment that it was somewhat unethical; they didn't so much think that they gamed the system but that the system was so inefficient at doing what it should do that someone had to fit themselves to the system to get in.
This is the first time I’ve come across this definition of a point. Geometrically, a point is defined as a zero-dimensional object (or something similar), if I recall correctly.
Besides, I don’t see how it’s intuitive at all! It isn’t to me at least.
The larger point being: if you pick such a random domain without calibration, you will run into such arguments/discussions during an interview. I'm not sure what data point could be derived from such a discussion when you are looking for someone who can write decent, maintainable code.
I must say that I’m terrible with geometry/graphics which hasn’t stopped me from creating value through software development in my domain, online payments.
If your intention is to gather signal for collaboration then I suggest picking something you are likely to collaborate on a day to day basis. Let’s say code review or architecture review. You could discuss why such and such an architectural pattern is useful, under what conditions, what are the pitfalls to watch out for etc etc.
Not necessarily. In 3D graphics, it is common to represent points with homogeneous coordinates, where points in N-dimensional space are represented by N+1 real numbers. Using 4x4 matrices  to describe affine transformations of 3D points is very convenient.
(Agreed with your overall point though. Just goes to show how different some fundamental perceptions/definitions can be.)
It actually is. It's generally assumed to be equal to one, but it need not be.
> isn’t necessary to store to apply a 4x4 matrix
...if you assume it is equal to one, yes.
However, actually representing the fourth component is both mathematically sound and occasionally useful. For example, the midpoint of two affine points, such as (1, 2, 3, 1) and (3, 6, 9, 1) is actually just their sum: (4, 8, 12, 2), which represents the same 3D point as (2, 4, 6, 1). The fourth component can also be zero, in which case you describe a point at infinity in the given direction.
But yes, if you only use homogeneous coordinates for chaining 3D transformations, storing the extra component is pointless.
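The midpoint example above checks out in a few lines of plain Python (bare tuples standing in for whatever vector type a real engine would use):

```python
def add(p, q):
    """Component-wise sum of two homogeneous 4-tuples."""
    return tuple(a + b for a, b in zip(p, q))

def normalize(p):
    """Bring a homogeneous point (x, y, z, w), w != 0, back to w == 1."""
    x, y, z, w = p
    return (x / w, y / w, z / w, 1.0)

a = (1.0, 2.0, 3.0, 1.0)
b = (3.0, 6.0, 9.0, 1.0)
mid = add(a, b)                     # (4, 8, 12, 2): the sum is the midpoint
assert normalize(mid) == (2.0, 4.0, 6.0, 1.0)
```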
This is a good example of how ambiguity kills an interview and reduces it to quickly figuring out what the interviewer is talking about so I might have a decent chance of solving the problem with the time remaining.
My experience with interviewing at Google, Facebook, and Amazon can be reduced to "What the hell are you talking about?"
Why do you believe now that the path you took is no longer ideal for other candidates?
I was doing the same to weed out the bad candidates - asking them something they should know, something logical and basic - but I got bad feedback once and was asked to instead focus my questions on the strong points described in the CV. I mean, for the practical part the candidate wasn't able to count the unique words in a text file in 30 minutes. I thought opening files, reading strings, and splitting were so basic anyone should know them.
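For reference, that exercise fits in a few lines of Python under one reasonable reading of "unique words" (whitespace-separated, case-sensitive); clarifying the exact spec is arguably part of the exercise:

```python
def count_unique_words(path):
    """Count distinct whitespace-separated words in a text file."""
    words = set()
    with open(path, encoding='utf-8') as f:
        for line in f:
            words.update(line.split())
    return len(words)
```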
It completely gives an advantage to candidates who know 3D and completely gives a disadvantage to candidates that know nothing about it. Even worse, they are relying on YOU to explain it to them. Who's to say you are sufficiently qualified to teach people the basics of 3D graphics enough such that they can answer your questions? Have you been calibrated or judged on your ability to teach even the basics of 3D graphics? Or are you assuming that you're good enough?
It's completely inappropriate and relying completely on you to determine a candidate's qualifications based on nothing except your feelings. It's a horrible question and I seriously hope this is not entertained at all at your company.
If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit more to heart, we'd appreciate it. Note this one:
"Have curious conversation; don't cross-examine."
But the reality is:
- future performance depends on the team (and more largely on everything else in the business), and that varies unpredictably and is usually not part of the evaluation anyway.
- future performance also depends on how the project itself is going to evolve, which is hard enough to evaluate in itself (not just the timeline but oftentimes the involved technologies)
- assessing the value of anything is its own can of worms (it's intrinsically subjective; you can't measure a physical "value" in SI units)
I prefer data over opinions like everyone else, but this may be a case where it is eventually safer to rely on as many as possible individual opinions and weight them, with just the amount of process in place to avoid common biases? (friends-of-friends, judging on look, country of origin, gender, ...)
I've seen big companies that take hiring very seriously rely equally on some preset metrics (based on prewritten questions, thus easily gamed) as well as the gut feeling of several interviewers who are free to ask whatever additional questions, and I think it's the correct approach.
When I detected the candidate knew the questions, I'd switch order, introduce new questions I didn't usually ask.
The interview we did was quite extensive on the java basics and if the candidate somehow managed to learn all that stuff by knowing the questions, that was a pretty good sign anyway.
It was just this one time I found a candidate who aced every question I asked, even OOP, Design Patterns, I made up on the spot a code-design challenge, the guy suddenly was not so brilliant, but still managed to hold his own and I didn't penalize him for my impression that he knew the questions, he was way better than the usual candidate anyway.
Perhaps the stakes were not so high as in the USA, but I can say I've never found someone who would have failed without knowing the questions in advance; I did fail people who were trying to cheat on the phone interview, but those were rare cases.
My impression is technical interviews just filter programming-illiterate programmers and people who don't give a f#ck -- as a general rule.
There's also the 30% of cases where the interview is designed to show the candidate he's not as senior as he thinks he is and to lowball his salary request.
And some companies have very high technical interview standards because of some cultural inferiority complex (we are just like Google, you see...).
Programming interviews are almost exactly akin to actor's auditions. Just because you flunk an audition doesn't mean that you're a bad actor. Also, auditioning takes a special skill and it's not very much like being a real actor. But they still do it to this day. Programming interviews are similar.
The best hiring model I can come up with is the Netflix model. Pay top of the market, hire and fire people quickly if they don't meet expectations, with a generous severance. Have high expectations, reward the ones that can fulfill those expectations, and quickly get rid of those that don't. It's ruthless, but the Netflix engineers I know love working there.
Is there any programmer that started coding in his teenage years that didn't at some point try to do 2D drawing in code?
Computer science is a massive field that people enter through many different and unique ways. If you're trying to gatekeep and force everyone to enter through the same gate that you entered, you should not be an interviewer.
The interview process is a test of endurance, not intelligence. And it should be exactly that, since software engineering is mostly an exercise of endurance and focus.
Every time a friend of mine QQs about failing a FANG interview, I give them the study prescription that worked for me. The reaction is always the same: they don't do it. If you can't get past these interviews, you probably won't make it past a year in these high performance companies anyways. Because you actually have to exhibit the same work ethic that interview studying requires.
I can't say for sure that such challenges (array manipulation, backtracking etc.,) are not typical for a software job but I know that they aren't relevant for my line of work. Even if they are, the challenge of solving problems in a live coding session is not the same for all. Different people need different settings to perform or excel.
But now I am committing myself to leetcode and getting better at it, but it has become one among hundred other things I want to get good at. At some point of time, I will feel the burnout. But I sincerely hope the acquired skills are useful in my professional setting.
I can get hired as a software engineer wherever, but I’m only mediocre at doing the job. I’m not the only person I know like this.
I only wish the real Software Engineering job was as simple as being good at LC, because it clearly isn't.
I feel exactly the same.
Mind if I ask how you deal with this?
I recently left software (not sure if temporary or permanent yet) and I'm pursuing tutoring in an unrelated field. So far I'm liking it more because I feel better than mediocre.
Having other sources of meaning in life keeps me going during periods where my career isn’t going as well, gives me perspective and keeps me from getting depressed (I’m prone to it).
Working at top companies has helped me meet a lot of amazing people, including many of my closest friends, so I’m grateful for that at least.
Also, this gets thrown around a lot on HN, but if you’re brilliant at hard programming puzzles but not good at engineering jobs you might have ADHD. I do. Medication and/or ADHD-targeted treatments and accommodations could help. They’ve been modestly helpful for me.
There are always things that can be improved about interview processes, but many software engineers display great immaturity when it comes to trading a few weeks of time studying content that should already be partially familiar to them from school for six figure salaries/raises.
Leetcode sucks, but I’ll take it over how finance jobs are where it’s all about connections and where you interned when you were 19.
Is this different than Big Tech really? If you do not go to the right schools, internships, whatever it can take years to have the right employers in your CV to be called for an interview.
I don't know of anyone at a top law/medicine/consulting/finance firm like this.
Once you land that job in finance or as a lawyer you are on track to be set for life. At a faang after a few years you'll get pushed into management or pushed out.
It's not the same.
(also the whole concept of "Up or out" comes from Big Law/Consulting... https://en.wikipedia.org/wiki/Up_or_out)
You are just as "set for life" in big tech as big law. In fact, if you're looking at the last decade, big tech won hard considering tech stocks and how big law froze (and even slightly cut) salaries during the great recession.
Because if you're hiring 40% of your current headcount a year and some people leave that gets you to very short average tenure very quickly.
It's immature to think we can do better because others have it worse? Rubbish.
I know that's pretty strongly stated, and it matches the strength of my convictions here. I've tried many methods. Asking someone to figure out that something is a sliding window problem is a garbage test.
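For context, the "trick" being tested is noticing that a single two-pointer pass over the input suffices. The textbook instance is longest substring without repeated characters (a generic example, not any specific company's question):

```python
def longest_unique_substring(s):
    """Length of the longest substring with no repeated characters,
    via the classic sliding-window pattern: advance the right edge one
    character at a time, and jump the left edge past any repeat."""
    last_seen = {}   # char -> most recent index
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```

Either you recognize that the window never needs to move backwards, giving O(n), or you don't, which is exactly the "know the trick or fail" dynamic being criticized.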
Many interviewers do not ask the questions to check for thought process or problem-solving ability; they treat it like some TV quiz: ask question, get answer, compare answer to the note in hand, applaud if it's the same answer, next question. Why? Because it's a lot easier to sit there watching the candidate squirm at the whiteboard, while thinking about what's for lunch, than engaging the candidate and, gasp, talking to him/her.
This creates an incentive for people taking these BS interviews to learn for the test: get a list of the current top 50 questions asked regularly at interviews (there are resources for that) and memorize them.
Why? Two reasons:
1. It is a lot easier than understanding the concepts and purpose of different algos and data structures.
2. Trying to solve them by applying actual understanding runs the risk of getting stuck on an unfamiliar problem, or producing a slightly sub-par solution instead of the "correct" answer, and getting booted out despite demonstrating the exact thing, aka "problem solving ability", that the interviewers allegedly look for.
And, unsurprisingly, because there is money involved, an industry has sprung up around this: pay-for-tech-interview-training is a thing, including regular updates on popular questions.
The result, of course: companies run the risk of hiring people who are great at answering LC questions but fail when they actually have to solve a problem where they cannot copy-paste the solution from SO.
It's pretty obvious to interviewers if you've solved a problem before, and we appreciate the honesty. Interviews are not adversarial; they're to see if a candidate is a good fit for the role and dishonesty is never a good fit.
"We expect you to study and be prepared for algorithmic questions, BUT NOT TOO PREPARED! Only just enough. We will give you a random question of our choosing, but if you already studied this, then you will be deducted points, unless you tell us, so that we can ask you a question you've never studied before."
A true interview would give the candidate their choice of question with the expectation that they know how to solve the question. It makes it a lot fairer for everyone involved.
After getting some interview coaching, I think I understand how this complaint arose. The whole problem is an artifact of the interview format, in which the design intent is for the candidate to be unable to solve the problem on their own. So you get scored based on how well you can charm the answer out of your interviewer while making them feel like everything they said was your idea. Instead of a test of how well you can program or solve math problems, it's a test of how good you are at cold reading. ( https://en.wikipedia.org/wiki/Cold_reading )
And unsurprisingly, a test of cold reading will end up delivering people who are good at cold reading without reference to whether they're good at other things. If you want to avoid this problem, just start giving assessments that don't involve interaction with the interviewer.
If the candidate breezes through the discussion because they've actually had to solve the problem before, then their victory is well earned.
If on the other hand it's an academic question in the same vein as the data structures or algorithms puzzles you find on $interviewprepforum, then the fact that they've solved it before tells you very little.
I think this is why contrived questions gained popularity in the first place - they eliminated noise due to candidates randomly having solved similar problems before (that and "real life" problems usually can't be explained and solved in 1 hour).
I'm struggling to imagine a definition of "adversarial" that would make this true. You have two parties with conflicting goals.
The interviewee's goal is to be evaluated inaccurately.
If you really need an example, then look at it this way:
1. The interviewee's goal is to be hired.
2. Assuming there is no conflict of goals, then the interviewer's goal is to hire the interviewee.
3. This immediately implies that the interview is a pure waste of time. You can just make the hire without having the interview.
If you don't believe that interviews are -- in every case -- nothing but a waste of time, then you must reject one of the two premises. You either believe that (1) The interviewee does not wish to be hired, or (2) there is a conflict between the interviewee's goals and the interviewer's goals.
We can make a similar observation purely by knowing that interviewers sometimes reject interviewees.
Only if the interviewee doesn't think they should be hired.
I think a better way to think of this is:
1. The interviewer's goal is to hire somebody that will provide value at the company, using the hiring criteria as a way of judging it.
2. The interviewee's goal is to get an offer at a company that makes sense for their career goals.
These aren't necessarily adversarial.
> Only if the interviewee doesn't think they should be hired.
Nope. The interviewee would always like to be evaluated as better than they actually are, regardless of whether they meet the notional hiring threshold.
> These aren't necessarily adversarial.
The goals you list are still necessarily adversarial, because the interviewee's goal is always to get an offer and the interviewer's goal is to stymie them.
More specifically: the interviewer's goal is to minimize (1) false positives and (2) the expenditure of the company's resources.
Meanwhile, candidates hope to be evaluated "fairly", which is in direct conflict with criterion (1). They also naively expect to be treated "decently", which is in direct conflict with criterion (2) and which explains why employer-side ghosting is so widespread, along with other abusive practices like piling on lengthy take-homes, etc.
I don't think concerns about resource expenditure actually explain ghosting. I think that happens despite what the company would prefer, because the people involved find it unpleasant to notify candidates of a rejection.
Lengthy take-homes are easier to explain by reference to resource concerns.
Minimizing false positives is not a fundamental goal of interviewing -- accuracy is a goal in all settings, while minimizing false positives isn't. But minimizing resource expenditures is; you're right about that.
"Unpleasant" to send a standard form letter? I don't buy that.
I do agree though that it's what they prefer. As a reflection of how they are, and how they look at people.
> dishonesty is never a good fit.
If you rejected every such “dishonest candidate”, the paid services offering candidates practice questions would have gone out of business long ago.
Given the popularity of such services, have you considered the possibility that you overestimate your ability to spot candidates who have solved the problem previously?
It could easily cost you the offer if they think you’re being dishonest.
If you say it’s familiar and they ask you to do it anyway, just solve it very quickly and thoroughly. They’ll give you points for your performance and probably ask you another.
Beyond that though, you can just be honest and say you know a problem and they can either pick another or just talk about it anyway.
Fraud on resumes is very common.
I've done 300+ interviews at FAANG, so I can share a bit of advice from the interviewer's side. The caveat is that this is based on how I conduct interviews, so YMMV with other people.
* Always ask clarifying questions. Almost all problems have some level of ambiguity that you need to clarify. This is intended. The more senior you are, the more important this becomes.
* Explain your approach before you begin, and even while you code. This gives the interviewer a chance to help you if you are down the wrong path, or even just understand what you are up to.
* If you get stuck, ask for help! Think of this as a pair programming session rather than an interview. I much prefer a candidate who gets stuck, asks for hints, gets unblocked and writes good code, rather than one who doesn't ask for help and writes not-so-good code.
* Caveat: There is a tension with asking questions. There is such a thing as too much help, e.g. I need to explicitly tell you how to solve the problem. Use judiciously.
* Take the interviewer's hints. The interviewer has probably asked this question dozens of times and knows it inside out. If they give you a hint, 99% of the time they are on to something.
* Personally I don't like "hard LC" problems. Instead I prefer medium difficulty problems, and spend extra time probing the candidate's coding skills. This includes:
1. Write tests.
2. Handle corner cases/incorrect inputs.
3. Discuss how to scale the code.
4. Discuss how to refactor the code.
* If nothing else, write the simplest brute-force solution you can think of. As an interviewer I need a coding sample to evaluate the candidate. Trying and failing to implement an optimal solution is worse than implementing a correct brute-force one.
As an interviewer I explicitly ask candidates NOT to do this (or to do it only if they want). For some reason there is this expectation in coding interviews. I challenge anyone who pushes their interviewees to do this to sit down during one of their own coding sessions and vocalize the stuff they are coding while doing it. In my opinion it is stupid and unnecessary.
When my interviewees start solving the coding tests and begin "thinking out loud" I tell them: "you don't need to do this here; focus on your code, and once you are done I'll ask you to explain some parts to me". I invariably hear a huge sigh of relief from them, haha.
The more I read HN comments the more I realize that not everyone reasons through problems "verbally" (even if it's silent or internal) and uses some other thought process. To me, coding has always been like speaking a very specific and pedantic language.
I would find it interesting if other people were able to share their own mental models for coding, if that's even possible.
It's fairly common to make a hire decision with one person not inclined.
To understand why here is another bit of my philosophy: the aim is to find a good match between candidate and company. The interview is a (very flawed) proxy for this, so don't overindex on it.
For instance, recently I was inclined for a candidate although they didn't make it past brute force. The reason is that they did very well in the behavioural round and wrote a very good brute-force solution. Also, during coding, they explained in detail the internals of some data structures (e.g. hash maps), which shows they know their stuff. Lastly, they had an intuition about how to approach the problem optimally, even if they didn't manage to write the code.
In my experience, interviewers will rarely tell you the runtime of the optimal solution.
Regardless, very interesting blog post.
I agree. As an interviewer, I genuinely want each candidate to do well and I think that, on balance, setting a specific complexity goal is likely to do more harm than good.
For a strong candidate, it could limit their opportunity to shine by narrowing down the solution space and discouraging them from exploring tradeoffs (e.g. runtime vs memory).
For a weaker candidate, it could mean no solution at all instead of a suboptimal solution. I generally choose my problems such that they admit multiple solutions of varying degrees of efficiency/sophistication. It is generally not a hard requirement that a candidate find the optimal solution in order for me to score the interview favourably.
If you're happy with just getting it done quick and dirty, at least as a start, then telling them so is obviously better than not doing so. Beyond that, anything more about what you want will help trigger their thinking in terms of the right approach. Unless of course you really are looking for some clever trick (and I have certainly seen those annoying questions).
This way anyone except the least qualified candidates can make some meaningful progress. I then gently guide the candidate to help them produce the best solution they're capable of in the allotted time.
I then score the interview on the basis of where they have landed, the process by which they got there and how much help they needed along the way (and of course the role/level they're interviewing for).
I’ll give hints if the candidate is struggling, but I won’t just come out and tell them something like this. If they pushed me hard enough at the start, I would tell them and then fail them on the algorithms/reasoning component of the interview.
You know, like IRL. What this is supposedly about.
Again, the question referred to the target performance, not the optimal performance.
Ed: to clarify a bit, in some cases the "non-optimal" solution is going to be the best one. And that's even before you start worrying about things like time/memory tradeoffs. When a candidate asks me questions along the lines of "which of the 2 input lists is bigger, what order of magnitude is the length, does XYZ fit in memory, etc", IMO it's a useful signal that shows they're aware of these tradeoffs.
From there, I can start to think about the running complexity (or whether it even matters, for the scale given). But if an interviewer won't even tell me that ... I'd assume they're just like making candidates dance, for the sake of making them dance. While they sit back and stare at their phone, and occasionally interrupt with "hints".
If the candidate wants to discuss performance, I’m obviously interested, but if they want me to serve a hint to them on a silver platter I’m just confused.
I agreed with you up until this point, which seems unnecessarily harsh to me
In the hundred or so interviews I’ve given, I’ve never had a candidate persist in forcing questions along these lines from the very start, even after I have turned the question back around at them. If you force me to give you hints out of the gate, something very weird is going on.
The question specifically referred to target performance, which IRL is almost always known and freely communicated (in at least ballpark terms).
As in: "Find the median of this array of 10k integers in .01 seconds please. Not 10 billion, just 10k". You say: "OK, sort and done."
IRL, if someone at your job says: "Write some code to find the median of an array of integers. I'm not going to tell you how many, not even the order of magnitude. Could be 10k, could be 10 billion." You say "WTF?" They say: "I don't care about whether you get the right solution or not! I just want to see how you think about a problem."
I think you'd know where to tell them to put that question.
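For the 10k case above, "sort and done" really is the whole answer, which is the point; a minimal sketch:

```python
def median(values):
    """Median via sorting: O(n log n), trivially fast at n = 10,000."""
    if not values:
        raise ValueError("median of empty sequence")
    s = sorted(values)
    mid = len(s) // 2
    # Odd length: middle element. Even length: mean of the two middles.
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
```

At 10 billion elements the calculus changes entirely (streaming, selection algorithms, approximation), which is exactly why the target size matters.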
If someone has cleared their day to come into your office and talk to you -- you should respect their time. Don't make them guess at the goalposts, don't use implicit or hidden metrics. If they ask you what the expected latency is for an API response, or whether they should worry about hostile string injection attacks -- just tell them.
As you would in a normal, respectful, professional setting.
What I'd actually ask as a candidate is what to optimize for. Are we looking at an N of 100 and a simple solution will do, or are we looking at an N of a million? Maybe memory is the constraint.
At least show the candidate that you're thinking pragmatically. The goal in the real world isn't to write the "fastest" algorithm, it's to write the most appropriate one.
And in particular, I'm not sure I can agree on eliminating recursion in the anagrams problem. He just says
> Recursion – No way to apply recursion to the problem.
but...there is? One plausible approach: I sort each of the strings and put them in a trie. A fancy-sounding thing, but really just a data structure which allows me to rephrase "someString in myTrie" as the recursive "someString is null || (head(someString) in myTrie.keys && rest(someString) in myTrie[head(someString)])". I'm not arguing that this is better than a hash table, though I think it probably is on some workloads - just that it's not possible to rule this out and solve the problem by pure process of elimination.
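A minimal sketch of that trie idea (the names `insert`/`contains` are my own, and nested dicts stand in for trie nodes; this is an illustration, not a claim about the best approach):

```python
def insert(trie, word):
    """Insert a string into a nested-dict trie."""
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})
    node["$"] = True  # end-of-word marker

def contains(trie, word):
    """Recursive membership test, phrased as in the comment above:
    a string is in the trie iff it is empty (and we sit at a word end),
    or its head is a child and its rest is in that child's subtrie."""
    if not word:
        return "$" in trie
    head, rest = word[0], word[1:]
    return head in trie and contains(trie[head], rest)

# Two words are anagrams iff their sorted forms are equal, so querying
# with a sorted string asks "have we seen an anagram of this?"
trie = {}
for w in ["listen", "google", "banana"]:
    insert(trie, "".join(sorted(w)))
```

So `contains(trie, "".join(sorted("silent")))` is true, since "silent" is an anagram of "listen".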
Btw, this is a big failing in interviewers who help. Asking the right questions (what data structure could you use here?) is incredibly helpful. So helpful that you're not testing them on a slow moving, relevant skill (structured thinking) but instead a fast moving, less relevant skill (what data structures do you know?). Bad move.
More in the vein of what you're getting at though, I think helping a nervous candidate by asking obvious questions like this can be helpful: "What data structure would you recommend using for this problem?", or "What programming paradigm do you think would fit this problem best?". It's a bit of an ice breaker, and can lead to more interesting conversation than the candidate just sitting there, quietly thinking (admittedly, possibly not a good sign for a candidate if they do this, but interviews should be built around making it a good experience for the candidate as well).
Just like how you can't learn math by reading high level summaries written by others.
You have to grind through it and see the connections yourself.
The author's lack of depth shows when he says that no graph means you can rule out DFS.
I also disagree on asking interviewer for the expected runtime. You're needlessly adding another constraint on yourself by doing that. You should ask for the input size and then figure out the best runtime that you can reach. A working suboptimal solution is more likely to get you to the next round than an incomplete attempt at an optimal solution.
In fact it's kind of ironic that his first example exposes how bad this advice is: by trying to meta-game, starting from a few example data structures, he shows he completely misses the solution that is both simpler and much more performant.
The next time you call the function, you need to count how many of the old calls are still "unexpired". This number (potentially) gets lower with each passing quantum of time.
How can you do that without holding a timestamp for each call? Please clarify if I misunderstood you.
TL;DR: You have a maximum of N credits; as time passes you earn credits at a rate of N credits per window_size, but if the time since the previous request is less than window_size/N you lose 1 credit on net.
I don't think you can have more than 2*N requests in any sliding window without tripping the filter, and you can't sustain more than an average of N requests per window_size without tripping it either.
I think it's a better solution than the question asked by the author because, when you are rate limiting, you and the client may not have exactly the same time, and you might have edge cases like the client batching 60 requests in a few ms every minute. If there is some time jitter in the requests, you may have 120 requests in 59.9 seconds. (Bonus question: what time-stamping of the requests should be used?)
My solution is more forgiving: it allows the client to use all its rate credit without risking tripping the filter, as long as it respects the rate intent.
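The credit scheme described above is essentially a token bucket. A minimal sketch (the class name and the injectable `now` clock are my own choices, not from the thread):

```python
import time

class TokenBucket:
    """Credit-based limiter: at most `capacity` credits, refilled
    continuously at capacity / window_size credits per second.
    Each allowed request spends one credit."""

    def __init__(self, capacity, window_size, now=time.monotonic):
        self.capacity = capacity
        self.rate = capacity / window_size  # credits earned per second
        self.credits = float(capacity)      # start with a full bucket
        self.now = now                      # injectable clock for testing
        self.last = now()

    def allow(self):
        t = self.now()
        # Earn credits for the elapsed time, capped at capacity.
        self.credits = min(self.capacity,
                           self.credits + (t - self.last) * self.rate)
        self.last = t
        if self.credits >= 1:
            self.credits -= 1
            return True
        return False
```

Because state is just two floats per client (credits and a timestamp), there is no per-request timestamp log to keep, which is what makes it cheaper than a sliding-window log.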
There are also two additional programming techniques you should be aware of:
* Dynamic Programming
The only places where such types of processing will be required is when doing specific kinds of processing, and for most scenarios you'd be better off using an algorithm already implemented in an existing library.
Maybe if I was in computer vision then I’d use some of these algos more often. But those usually require a special masters or PhD to get an interview anyway. So, not really applicable to the overall industry tbh.
This doesn't mean you're just making pretty widgets for a browser.
Many people think DP and caches are synonymous, unfortunately.
Critically, caches are global shared state. That's why cache invalidation is the hardest thing. Global state also means people stop trying to have a data flow/architecture. None of these call trees need to advertise that they use a particular value because I can just grab it from a global if I need it. You don't know where things are actually used anymore because they all come from cache lookups, and because the lookup is so cheap (under typical system load, but catastrophic under high load), people don't even try to hold onto previously acquired data. They just fetch it again. Which is another boundary condition for cache invalidation (what if half a transaction has the old value and half the new value?)
They make flame graphs essentially useless. Sometimes immediately, or once the above starts happening. You quickly have no clear idea what the cost of anything is, because you never pay it, or if you try to pay it you end up amplifying the cost until there is no signal. Once people are promiscuously fetching from the cache, they don't even attempt to avoid duplicate lookups, so activities will end up fetching the same data 6 times. If you turn off the cache entirely for perf analysis then the cost skyrockets and the flame chart is still wrong.
You are back to dead reckoning and manually adding telemetry to potential hotspots to sort it out. This is very slow, and can be demoralizing. Quickly this becomes the fastest our app will be for a very long time.
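The DP-vs-cache distinction drawn above can be made concrete: a DP memo table is state scoped to a single computation, not shared global state, so there is nothing to invalidate. A minimal sketch using coin change as the example (my choice of problem, for illustration):

```python
def min_coins(coins, amount):
    """Classic bottom-up DP: fewest coins summing to `amount`, or -1.

    The table lives and dies inside this one call. Unlike a global
    cache, no other code can observe it, mutate it, or go stale on it.
    """
    INF = float("inf")
    table = [0] + [INF] * amount  # table[a] = fewest coins for amount a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and table[a - c] + 1 < table[a]:
                table[a] = table[a - c] + 1
    return table[amount] if table[amount] != INF else -1
```

The overlapping-subproblem reuse happens entirely within the call's own table, which is the property the parent comment says global caches destroy.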
Of course the interview process is (or at least is supposed to be) multifaceted, and passing these problems is just one facet of the process. A great candidate can bomb one of these questions, but I wouldn't necessarily reject them just because of that (although that happens in practice in the industry), and wouldn't necessarily OK someone just because they aced one of these questions.
I'd take complaining about these type of problems any day over complaining about only being able to get a job in certain companies by having connections or having gone to an Ivy League school.
There's a vast territory of approaches that are neither leetcode hazing or culture fit tests. And which, while nuanced, are basically objective.
Work sample evaluations do quite well, for example. As do in-depth technical discussions about ... just about any subject the candidate claims to know about.
Neither of which have anything to do with "culture fit".
Having attempted to do this after reading all the HN screeds, I found that this failed horribly. You'll find a large number of people who seemingly know the theory, but can't execute worth a damn. They can happily talk on and on about normalization, regularization, imbalanced datasets and so on. Then they fail fizzbuzz.
I then put the programming screen first.
How do you compare candidate A with preferred subject 1 with candidate B with subject 2? And how do you get the candidate to talk sincerely and not bullshit around the topic, especially when testing ML - a modern alchemy more or less, no definitive answers, just a bag of tricks?
I have had problems with controlling the waste of time and directing the talk towards useful topics because the candidate wouldn't stop the bullshit talk. Maybe other fields have less bullshit maneuvering space.
I think it's better to go with a list of basic questions that remains the same between candidates. Only if they ace the basics do I test for depth. Depth can be tricky to evaluate, especially for people who are not very bad - maybe that's just in ML.
As someone who has current stable employment, I have the freedom and flexibility to fail at numerous interviews with only the minor frustration of time wasted. If I were out of a job and a bit more desperate, I would just lie through my teeth on all elements of a job interview except education and employment history, like a super brown-nose narcissist. Since almost all programming interviews are maximally subjective, intentionally injected bias goes both ways and is easily applied.
If you wanna be thorough (esp. if you are applying at companies known for harder interviews), I would add practicing backtracking problems where you are doing a full exhaustive search of the problem space as well (often O(k^n) or O(n!) complexity). Yes, these are often mostly just DFS + recursion, but they have a subtlety to them that requires practice for most people. They tend to be tricky to get right in an interview setting without practice: due to the really bad time complexity of most problems of this type, they almost never come up in production, so they're not even on the radar of most engineers.
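As a concrete instance of the O(n!) exhaustive-search pattern mentioned above, here is a minimal permutations backtracker (one of the canonical problems of this type):

```python
def permutations(items):
    """Enumerate all permutations via backtracking: n! outputs.

    The pattern: choose an unused element, recurse on the remainder,
    then undo the choice so sibling branches see a clean state.
    """
    result, path, used = [], [], [False] * len(items)

    def backtrack():
        if len(path) == len(items):
            result.append(path[:])  # snapshot the completed ordering
            return
        for i, x in enumerate(items):
            if used[i]:
                continue
            used[i] = True
            path.append(x)
            backtrack()
            path.pop()       # undo: this is the "back" in backtracking
            used[i] = False

    backtrack()
    return result
```

The subtlety the comment alludes to is exactly the undo step: forgetting to restore `used`/`path` after the recursive call is the classic bug.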
Of course there are even more niche data structures like a Fibonacci heap which have O(1) insertion, but you will have to get extremely unlucky to get asked about a Fibonacci heap in an interview.
This helps with allocation pressure, which is often the "hidden" cost people don't pay enough attention to, especially in garbage-collected languages where the allocation might appear to be almost "free" (often a simple bump allocator), but the deallocation price is paid at later indeterminate time (when the GC runs).
Seems way easier/more time-efficient to just use the built-in heap/priority queue of the language's standard lib.
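For instance, in Python the stdlib `heapq` module covers the usual interview heap needs directly (a minimal sketch):

```python
import heapq

nums = [7, 2, 9, 4, 1, 8]

# k smallest in O(n log k): no hand-rolled heap needed
smallest_three = heapq.nsmallest(3, nums)

# Min-priority queue: heapify in O(n), pops in O(log n)
heap = list(nums)
heapq.heapify(heap)
first = heapq.heappop(heap)  # smallest element

# Max-heap idiom: push negated keys, negate again on pop
maxheap = [-x for x in nums]
heapq.heapify(maxheap)
largest = -heapq.heappop(maxheap)
```

Knowing these idioms is usually enough; a Fibonacci heap, as noted above, is vanishingly unlikely to come up.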
1. Look it up in the hash table,
2. Look it up in the literature
it's a proxy for a combination of:
intelligence and how hard you are willing to study
the computer science knowledge shown is just a bonus
One last thing to throw in: it's pretty clear that there's a correlation between the top software companies and how hard their leetcode interviews are. You can claim all you want that it doesn't work, but Facebook and Google have very hard leetcode interviews and are known for the best software.
Also, the computer science asked about in these questions is a very small subset of computer science. I've never been asked an image processing question, non-trivial concurrency/parallelism, or numerical optimization questions (all things I've actually used in my job; I've never had to do strange linked list manipulations, unfortunately). Those are all CS or CS-adjacent but never get asked in my experience. I've also never been asked low-level networking questions.
Instead it's just tricky graph questions and list/tree manipulation questions (which aren't that hard, but they are incredibly boring). CS is such a huge field, it's truly baffling that the technical interview questions at Google and Facebook are so myopic.
and interrupted, frequently, because that's also perfectly normal.
It's comical how this industry now thinks these arcane and often quite difficult DS&A interview processes are reasonable, and how it's been so normalized that people just study this for weeks before applying to a new position, sacrificing evenings and weekends just for job mobility. These processes are no longer even proxies for intelligence, except maybe for the very few people so wired that they could miraculously jump through a handful of these with optimized solutions in 15 minutes without ever having seen and solved the problem before. I've worked with very intelligent people who qualify as geniuses, and they couldn't solve these problems under these conditions, especially not multiple of them, without having at least seen the problem before.
At their size (number of employees, number of people who want to work there) they could probably just randomly pick (or add here any other way of selecting candidates) software engineers and still get some of the best in the market that will create amazing software. Not saying this is what is happening but I am just providing an alternative explanation to underline that the conclusion hard interview => best software needs more evidence to be true.
I think the way interviews are done is a function of:
- company culture (first and foremost)
- size (how big is the company and how many people they are hiring) and churn
- their beliefs about building software (some people believe math is required, some think engineering is required, some believe no prerequisite is required)
- employer attraction: how much/how many people want to work there
- availability/support of employed engineers to be part of interview process
- how it started (usually big companies inherit their conception of interviews from their original founders, as they were the ones hiring the Cs)
- the country culture where the C level and top management is located
Which is basically how the LC craze got started.
...are they? Whenever I go on Facebook, I always have a slow, unresponsive, buggy experience just using the website, and that's on Firefox with 32 GB of RAM on a 10th-gen Core i7.
I.e., if someone isn't willing to grind leetcode, their intelligence is irrelevant.
If someone is willing to grind leetcode, then yes, their intelligence is a secondary variable.
Regardless, such styled interviews are intended to minimize the number of false positives (since the companies have so many applicants it's safer to err on the side of "no"; there are more in the pipeline), which means they will also generate a tremendously high number of false negatives.
This has got to be a joke of some sorts. More like the worst software. Sure Google is super rich, they have unlimited engineers who produce a lot of code. Almost all Google products I used were crap, buggy, bloatware.
so sayeth the people on the outside of companies building the most complex software in the world.
Also, I'd contest the statement that Google or Facebook works on the most complex software. They don't work in fintech, medical, hard real time that I know of (waymo does, but they've been spun out), and many many more fields of SW, HW and CS.
They work on hard stuff, but don't discount the complexity that other companies deal with.
Really? Google APIs I've used have been 70% crud, 30% great. Facebook has been consistently good, eg PyTorch and React.
And grind, grind, grind, grind, grind.