The most utterly absurd instantiation is the dev that's 100% sure that FAANG is going to fail soon because they're passing on people like him/her that are just so talented that they don't need to prove it with any assessment. There's also the MGTOW-esque subgroup that eschews such interviews and will proclaim it proudly, along with their salary.
Personally, I'd be embarrassed to be making a less-than-obscene dev salary (which is still upper middle class anywhere in the world) while complaining that I deserve more for no added effort.
What's funny is there are other professions where the same sort of multimodal distribution in compensation exists (big law vs everyone else, competitive specialties vs general practice in medicine, IB and PE vs everyone else in finance) and I have never seen members of those communities complain as much as developers do.
I have a friend who's trying to break into FAANG (as a DS) whom I've been coaching. He has a stats PhD and basically no software training whatsoever outside of R. Since DS loops still have LC questions, it's been very hard for him. I have never once heard him express resentment over the process: he fails an interview and just goes back to practicing. He's also a first-gen immigrant from a very poor part of the Middle East. Admittedly it's hard to attribute his perseverance to any particular thing, but naive intuition says some of it must be his humility, something software as an industry seriously lacks.
As someone who mostly agrees with you, I still bemoan the LC grind.
This is mostly because I think the software market has grown to the point where there’s a distinct market for “plumbers” vs. “engineers”. I consider myself a “plumber” who understands how to piece together various frameworks and languages in the most efficient manner, while “engineers” are the ones who actually invent the new frameworks and need to genuinely understand DSA to do their jobs. They feel like distinct skills, since I’ve tackled what I consider engineering problems multiple times and failed, yet I keep getting rewarded with higher and higher salaries for doing what I consider plumbing.
Unfortunately there seems to be an ego component: no company will admit that they need software “plumbers” more than software “engineers”, because it makes them and their employees look bad.
I just wish I could be ranked and rewarded on my “plumbing” skills, and only ranked on my “engineering” skills if I were looking for a career change.
That's not even getting into how hilariously unrealistic "trial periods", for instance, are. Why am I going to leave a job for just a trial period without any more thorough attempt to check fit beforehand? Why would I invest that much time in side work before quitting, when I could get a FAANG offer instead for a single-digit number of days of brushing up?
exactly. i mean like try to apply to IB at a BB and see if anyone even skims your resume without an ivy+ on it. they can't fathom how good they have it because they've never done anything else (which is, of course, what breeds entitlement).
like you don't get it: go here https://www.hudsonrivertrading.com/careers/job/?gh_jid=35621... and find the dropdown for school ("what university are you currently attending").
Other professions have guilds (not called that, but effectively that) who qualify membership and enforce various rules and standards on their members.
Maybe one day we'll have a Guild of Software Developers that provides this service and we won't need to deal with shitty interview tests. But until that day, maybe we shouldn't be comparing ourselves to professions that do have this and therefore don't face the same problems.
The other professions have a "closed shop" - you cannot work in that profession without joining that association.
e.g. in the Anglosphere, you cannot practice as a lawyer without being a member of the relevant Bar Association, and to join that you must "pass the bar" which is an exam.
If we had this in Dev, then we wouldn't be able to write code for money until/unless we were a member of the Software Dev Association, and they wouldn't accept us until/unless we'd passed a rigorous exam (passing the bar is something that takes years of study) to prove that we could code. Then we wouldn't be facing ridiculous "but can you actually code?" tests during interviews.
But creating that closed shop has always failed (so far). Employers don't want it, and new coders don't want it. It's only us old hands who expect to be grandfathered into it that kinda like the idea.
>In real life, the ones who will do so are mostly just the ones who it hurts.
you think people grinding LC that make it through are hurt by the process?
>the insanity that's making them miserable
as far as i can tell, sloth, avarice, and greed are making devs miserable (not some bogeyman called LC).
Well sure, except the first part of any job is coming up to speed on the domain, the codebase, who the key people are, etc. So if you want to see how someone really performs, you're going to have to wait a while past that initial period. And that means you're going to have a lot of people who won't make it past the trial needing support.
Companies do in fact engage in a process of getting someone in and doing an actual job, they're called internships. The problem is that these arrangements typically involve someone dedicating some non-trivial amount of their time to "babysitting" (not necessarily in the literal sense, but in the sense that a newbie doesn't have historical context or familiarity with processes and workflows, even if they are by all other measures bright individuals w/ actual experience under their belts). Either you spend inordinate amounts of time setting up and maintaining a contrived bubble where a candidate can operate cleanly for a very short period of time, or you're looking at very long evaluation times (requiring a week or more of time from a candidate, who generally already is gainfully employed).
It's also worth pointing out that this process is incredibly expensive. One hour of time from a full-time employee doesn't really cost much beyond the few minutes lost to context switching (i.e., not much worse than the person going out to buy a coffee). But actually setting up a paid one-week period for a senior-level candidate would cost thousands of dollars to evaluate a single candidate. It's completely unworkable at a large company that conducts dozens of interviews or more per week.
It’s unworkable for anyone who is already employed. What am I going to do: use a week of vacation for this “interview”? (The fact that I’d be paid is immaterial: I lose vacation time.) Plus, the more senior you are, the more of your compensation is in equity. Will I get a week’s worth of an FTE’s salary and equity just for doing the interview? And at what job level? The highest for which I’m being considered?
This wouldn’t be viable for private companies because it would bloat the cap table with a massive number of insubstantial stockholders; and I believe that companies run into SEC compliance issues when they have more than a certain number (1000?) of stockholders.
Additionally, when I’m looking for a new position, I don’t just interview with one company. I typically aim to get between five and six offers (on a ~5 year cadence) to get an understanding of the spectrum of market rates for engineers, and to see what options are available. There’s no way I could spend weeks interviewing.
Maybe I would consider this if it were my “dream job”, but I’m not sure the signal from such a trial would be better than that from a typical interview.
Many companies have substantial technology stacks of proprietary infrastructure that you need to learn and become proficient with in order to be effective at your job. At recent employers, engineers have felt it takes 1-2 years to feel fully up to speed and effective using the company’s tech stack. With this kind of ramp up time, a one-week trial would only be useful with a make-work fake project.
My current employer spends six weeks full time just training all engineers who join the company - before they even begin to have actual responsibilities for doing work on their team.
I couldn't in practice see small companies choosing to go through the hassle of issuing stock to candidates for a week's worth of work though. It would be a lot of legal hassle for a process that's problematic for both sides.
Plus, you might end up with the problem of people interviewing at every hot startup that does a "1-week job trial" hoping to get some early stock at the next company that becomes a unicorn in 10 years.
Even if that’s a feature you’re looking to “force” into your recruiting program, you’re probably better off getting a “trust fund baby” work force by targeting overseas volunteer work.
For very large companies, it's still really bad to hire bad candidates, but it's also important to be efficient. You need multiple opinions on candidates, but the interviewers can't spend a whole day each; it's too expensive. On the other hand, you have a basically unlimited pool of candidates. So you do whatever's both fast and unlikely to produce bad "hire" decisions, and supposedly a series of algorithm puzzlers does a good job of being fast and producing a low false-positive rate. I'm not sure if that's actually true, but that's the argument. I am sure that the process rejects a huge number of perfectly good programmers, though.
I definitely believe you can get away with no coding questions like this for smaller organizations. But once you get to a big enough scale, all the other methods have too many issues (e.g. a large class of hiring methods which essentially skip anyone who currently has a job - trial periods). You can't afford to limit your hiring pool too much when you're big enough.
Allow a team to come together, demonstrate an ability to work together, and define a project (hopefully one of interest to the acquirer, if not in the output then in the process that develops it), then hire based on proven ability.
Or at least that's one interpretation, and amongst the more charitable, of the practice.
I'm not sure I still do, though.
It's true that this stuff rarely comes up and often doesn't matter for most real-world applications... but when it does, it matters a lot. The difference between O(n^2), O(n), and O(log n) is huge for big values of n. Being able to go straight to a (nearly) optimal solution, instead of hacking something together, waiting for it to burst at the seams, and then having to fix it, or being able to spot someone else doing that, can save weeks or months of development at a time.
It's a bit like saying it's silly for pilots to have to know what to do in the case of engine failure or stall from memory and regularly practice in a simulator, because that stuff rarely happens and most of the time they're just using autopilot. That's true, it usually doesn't matter, but when it does... it matters a lot.
Coding Floyd–Warshall shows, importantly, that you can code, and generally signals at least one of two things:
1. You studied hard for the interview, you're willing to do difficult and unpleasant intellectual work for rewards and do so in a structured enough manner to get results.
2. You have such a deep understanding of graphs and graph algorithms that you were able to reconstruct Floyd–Warshall in 45 minutes. This implies time will not be an issue when you see an opportunity to apply these problems, and you can probably handle much harder problems as well.
Filtering for these two traits might work very well depending on the hiring needs of a particular company.
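For reference, the algorithm in question is short once you know the idea; a minimal Floyd–Warshall sketch in Python (the edge-list input format is just one illustrative choice):

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths on vertices 0..n-1.
    edges: iterable of (u, v, weight) directed edges."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
    # Classic triple loop: allow paths through intermediate vertex k,
    # growing the set of permitted intermediates one vertex at a time.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The point of asking for it live isn't the dozen lines; it's whether the candidate knows why the k loop has to be the outermost one.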
1. You're fresh out of school or you have a lot of time on your hands.
2. You memorized some interview preparation book.
I do agree that a candidate that has shown some effort preparing for an interview is a positive. Other than that this seems like a silly game.
The flip side is the candidate that memorized the book, prepared like it's a test at school, and is useless on the job, can't operate at all outside that memorized domain, can't write the simplest bit of real world code.
Which is handy because Google et al don't really want employees who look at the in-office perks and wonder why they'd be appealing at all if you're just going to go home at the end of the day.
Consider a world where every L6+ manager candidate has a family and kids. However, only 5% of these can make time to do the LC stuff (revising most likely if they've been managing for a while).
So your company will end up filtering out 95% of candidates immediately, but people in the company will say that there's no problem as all of the successful candidates have a family and kids, so it can't be biased.
Don't get me wrong, I'm a fan of technical screening for managers, but I'm making the point that you can't look at the output of a funnel and conclude that everything's fine, you need to look at the complete funnel to understand this.
I think this leaves out the possibility of high mastery. It is instructive to think about algorithm skills as similar to those for algebra. If the most advanced algebra you ever solved was linear equations, remembering how to solve a linear equation may be difficult. But if you're skilled at solving advanced calculus problems by hand, your understanding of algebra is deep enough that you could re-derive the process as necessary. Many of us with a lot of math experience will never forget how to solve linear or quadratic equations barring steep cognitive decline, and the same is true for certain classes of algorithms. Personally, I've studied graphs and graph algorithms beyond what you'd find in CLRS or other introductory algorithms textbooks, and as a result I can re-derive many classic graph algorithms on the fly, no memorization necessary. I have done this in interviews, as have people I know.
1. Glue together whatever services are available and seem to make sense.
2. Throw increasing amounts of traffic at it and check what fails
3. Try to patch around failure point
4. blame the service provider for "not scaling"
5. Ask for more and more machines
6. Exclaim how great their design is on the basis of the cool class hierarchies and interfaces they wrote. Teach "junior devs" their "OOP design skills and patterns"
7. Go to step 2
They think of themselves as great engineers who scale things. The reality is they are absolute shit engineers who have no clue what they are building, frequently have crazy O(n^2) loops in their code, and don't recognize a graph problem even when it's staring them in the face.
1. They needed a way to rerun MapReduce jobs to regenerate data for the previous 30 days. It was a straightforward graph topological-sort problem. Instead they built a crazy solution using multiple instances of Airbnb's Airflow. One server ran out of memory for these Airflow instances, so they allocated a whole damn cluster of machines to run 30 instances of Airflow! The job of the cluster? Issuing MapReduce jobs to the Hadoop cluster. They needed a cluster of machines just to issue commands to Hadoop. They gave an elaborate tech talk about "scaling to 30x capacity" to process 30 days of data. Meanwhile, my team was processing 3 years of data. We considered hamake for our data-rebuild process. Eventually, I wrote a 70-line Scala topological-sort routine that did the job for us, because it gave us more flexibility than hamake and linked to our code libraries.
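The Scala routine isn't shown, but the core of such a rebuild scheduler is just Kahn's algorithm; a minimal sketch in Python (the job names and dependency map in the usage are made up):

```python
from collections import deque

def topo_sort(deps):
    """deps maps each job to the jobs it depends on.
    Returns jobs in an order where every dependency precedes its dependents,
    or raises ValueError if there is a cycle."""
    jobs = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {j: 0 for j in jobs}      # unmet dependencies per job
    dependents = {j: [] for j in jobs}   # reverse edges
    for job, ds in deps.items():
        for d in ds:
            indegree[job] += 1
            dependents[d].append(job)
    ready = deque(j for j in jobs if indegree[j] == 0)
    order = []
    while ready:
        j = ready.popleft()
        order.append(j)
        for nxt in dependents[j]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(jobs):
        raise ValueError("dependency cycle")
    return order
```

With `topo_sort({"report": ["clean"], "clean": ["ingest"]})`, ingest comes out before clean, which comes out before report; each job can then be submitted to the cluster as its dependencies complete.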
2. Some guy wrote a data pipeline to summarize a day's results that took 24 hours to run, thanks to 40 different joins. I rewrote it to use a single group-by. The code ran in tens of minutes. He didn't know how Hadoop worked, or about external sorting, beyond writing arbitrary SQL. So much for looking up algorithms when required.
3. A guy presents beautiful OOP code that has major race conditions in a distributed environment. I point out the race condition. He keeps proposing hacks to "solve the problem". He doesn't understand transaction processing, locking algorithms, or what options are available to him. He hadn't heard of read-write locks, or their pros and cons. He doesn't understand the difference between local locks and distributed fault-tolerant locks. I explain it to him. He eventually comes up with a solution using ZooKeeper that would need ZooKeeper to process thousands of updates per second. I give up.
"DSA, what DSA? They are for kids. I will look up an algorithm when I need it".
Sounds like you understand locking and distributed locks. Cool. Your next project is a video encoder. Write an arithmetic coder in a 20-minute interview? Motion detection? Your next project is a chess engine. Write minimax with alpha-beta cutoff in an interview? Your next project is public-key cryptography ... a distributed k/v database ... Can you write a bloom filter for me in an interview? Gradient descent in some machine-learning application?
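For scale, here's what one item on that list amounts to; a minimal minimax-with-alpha-beta sketch in Python, where the list-of-lists game tree is a stand-in for real move generation and evaluation:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta cutoff over a toy game tree.
    A node is either a number (a leaf score) or a list of child nodes."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will never allow this branch
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff: the maximizer will never choose this branch
    return value
```

On the tree `[[3, 5], [2, 9]]` with the maximizer to move, this returns 3, and the 9 leaf is never even visited.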
If you're such a great software engineer, which likely you are, you should be able to do all the above (at least somewhat), by doing your research, and writing code.
I'm not saying hire bad engineers. I'm saying the game is silly. Sure, a good engineer (and people that aren't so good) can play the game. Also not arguing with the person that said if your goal is to get paid better and you want that job then play the game. Doesn't make it less silly.
> Having those guys memorize those won't make them better engineers, right?
They can't realistically memorize all the algorithms, they have to develop an understanding of graphs and algorithmic complexity to be able to get through. They should be able to map arbitrary problems to a corresponding graph problem. These are absolutely fundamental. At least I should be able to communicate to them, "use a topological sort" and they should be able to understand what I am talking about.
Sure, someone could set aside their job for a year, memorize everything, and maybe get through the interview loop. No filter is perfect; but DSA and complexity analysis are a bare minimum.
> Your next project is a video encoder. Write an arithmetic coder in a 20 minute interview? Motion detection? Your next project is a chess engine. Write minimax with alpha-beta cutoff in an interview? Your next project is public key cryptography ... A distributed k/v database... Can you write a bloom filter for me in an interview? Gradient descent in some machine learning application?
Funnily enough, none of these questions is asked in a FAANG interview at all, so I don't know why you're constructing a strawman and demolishing it. The questions actually asked come from basic CS201; the ones you list are very specialized.
FWIW, I have already built a bloom filter, MinHash, and consistent hashing at work, and I am a machine-learning engineer. So I can implement an in-memory consistent-hash store, bloom filter, MinHash, count-min sketch, and gradient descent comfortably in an interview setting.
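A bloom filter, for instance, is short once you know the trick; a minimal sketch in Python (the bit-array size and hash count here are arbitrary; real deployments size them from expected element counts and target false-positive rates):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array.
    False positives are possible; false negatives never happen."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        # Derive k independent positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Usage is just `bf.add("hello")` and then `"hello" in bf`; membership answers "definitely not present" or "probably present".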
For motion detection, a two-dimensional derivative function should work; a simple delta between two images is a basic implementation. Huffman coding is not too complex to write. The basic entropy formula is pretty simple; in fact, most ML classifiers minimize cross-entropy loss, so I am very familiar with entropy. Minimax I have partly forgotten, but if the interviewer prompts me with some high-level details I should be able to do it.
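That frame-delta idea fits in a few lines; a sketch in Python, treating grayscale frames as lists of lists of 0-255 ints (the threshold value is arbitrary):

```python
def frame_delta(prev, curr, threshold=10):
    """Naive motion detection: count pixels whose absolute
    difference between two equally sized grayscale frames
    exceeds a threshold."""
    moved = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            if abs(p - c) > threshold:
                moved += 1
    return moved
```

A real encoder would do this per macroblock and search for the best-matching offset, but the per-pixel delta is the kernel of it.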
I have actually been asked to derive gradient descent in a Google staff-eng-equivalent interview, and also to demonstrate that gradient descent converges to the global minimum for the single-layer perceptron. Mind you, this was a specialist ML IC6+ position, not a new-college-hire loop.
You need to prove that the Hessian is positive semidefinite. I did the basic setup but couldn't complete the derivation. However, I got an offer.
Lots of people get good grades in data structure and algorithm courses where the test basically looks like these interviews. You get some problem that "asks" for some algorithm to be used, you implement it, done. Out of those people, maybe 10%-20% are going to be great programmers. IMNSHO. And I'm probably being generous. How do I know? I hire those people. Everyone I hire has great grades in those courses. And some of them, many years from now, are gonna be awesome. And I don't hire everyone that has good grades.
Most of those are exactly the people you're complaining about and they'll ace your interview. Why are you asking them a dynamic programming question and then complaining about their usage of Zookeeper or whatnot? These things have nothing to do with each other.
I didn't say those are the questions asked in FAANG interviews.
I think if ML is your domain, and you present yourself as an expert in this stuff, then technical questions along those lines are fair game. And you're right that you should know your stuff. And you should also have the ability to work in a different domain. I've written Huffman encoders a few times in my life (as far as I can recall never in a work setting) and I could maybe write one in an interview. But I can't write an arithmetic coder in an interview without looking it up. Can you? I mean what exactly is the point here? Sure, the stuff you do day in and day out, you should demonstrate that you're able to do it. The stuff that you don't do, you should demonstrate that you're able to understand this is another domain, research, and then do stuff.
This is a little bit like comparing PhDs to people without PhDs. If you're a machine learning PhD you will know a ton about the domain (and you're also forced to learn a ton about some adjacent domains). Are you a better programmer or software engineer? I don't think so. It's like comparing a mechanical engineer to a Physics PhD: these guys are not interchangeable. CS PhDs, those guys with the knowledge you seem to think is so important at the top of their mental stack, tend to build those terrible pieces of software you're complaining about. (Obviously I can't generalize; some are also awesome engineers, just like some Physics PhDs might be great at machine design.) Don't get me wrong, I have the utmost respect for PhDs and I've worked with some brilliant scientists in different domains. I don't want most of them to write software ;)
Getting back to the point: the FAANG interviews focus on basic graph algorithms, etc. Getting hired at a FAANG doesn't imply you are a great engineer; that's not something you can figure out in 5 interview loops. It means you have the tools to understand basic CS201 concepts and are not the "I will look up the algorithm when I need it" engineer, which is absolutely a recipe for disaster.
And these algorithms are very relevant. Apart from topological sort, bloom filters, etc., I have also used dynamic programming on the job: to tokenize product titles, minimizing the entropy of the final inferred tokenization of a product before indexing. I can't even begin to imagine working with an engineer who can't understand that naive tokenization is exponential and that the standard trick for such a problem is DP. These are basics; knowing them doesn't make you a great engineer, it means you have the fundamentals covered and don't need to be hand-held through the implementation details of systems. It's the difference between telling an engineer "we use DP to minimize the tokenization entropy" and sitting with them for an hour walking through every step like it's some kind of magic. The standard FAANG interview doesn't even cover probability that well, so the expectations about which concepts engineers need to know are fairly lenient.
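The product-title problem itself isn't spelled out here, but the standard DP it alludes to is the word-segmentation recurrence; a sketch in Python where `cost` is a hypothetical stand-in for the entropy score:

```python
def best_segmentation(s, cost):
    """Split s into tokens minimizing the total of cost(token).
    Trying all 2^(n-1) split points naively is exponential; the DP
    below is O(n^2) subproblems (times the cost of scoring each one)."""
    n = len(s)
    INF = float("inf")
    best = [INF] * (n + 1)   # best[i] = min cost to segment s[:i]
    best[0] = 0.0
    cut = [0] * (n + 1)      # cut[i] = start of the last token in s[:i]
    for i in range(1, n + 1):
        for j in range(i):
            c = best[j] + cost(s[j:i])
            if c < best[i]:
                best[i], cut[i] = c, j
    tokens, i = [], n
    while i > 0:             # walk the cut points back to recover tokens
        tokens.append(s[cut[i]:i])
        i = cut[i]
    return tokens[::-1]
```

With a toy cost that charges 1 for known tokens and 10 per character otherwise, `best_segmentation("usbcable", cost)` comes out as `["usb", "cable"]`.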
As for why I'm complaining about ZooKeeper: ZooKeeper is built on a Paxos-style consensus protocol, which carries an extremely high penalty for writes. A person who thinks everything is a black box, that he doesn't need to know about algorithms and will "look them up" when required, is actually never going to look up any algorithm. Like I said, he won't recognize a graph problem even when it's staring him in the face. I have given several examples of graphs, DP, etc. being used on the job. And you completely ignore the basic problem with the guy shipping lost updates from race conditions.
1. He doesn't realize this is a standard problem, covered in a FAANG system design interview.
2. He doesn't look up standard solutions and cooks up his own hacks, using a central database to record state. His hack has more race conditions.
3. I have to tell him the standard solution to this is locks
4. He ignores my suggestion and finds out about ZooKeeper. He thinks it's a key-value store like Cassandra. He doesn't realize that ZooKeeper is built on a Paxos-style consensus protocol and has a very high write cost. Because he doesn't care about algorithms; everything is a black box to be glued together.
5. He judges good software engineering on the basis of object-hierarchy design, design patterns, etc. Algorithms and data structures are irrelevant to the job, except for vectors and hash maps; there are API services to do everything else.
This was an engineer at a non-FAANG company. Having worked at both FAANG and non-FAANG companies, my observations:
1. FAANG engineers are generally higher quality and way faster at execution.
2. Excellent non FAANG engineers are plenty, but at a non FAANG they are fewer to be found. Very often, they end up at a FAANG a few years down the line.
3. FAANG hires don't necessarily have large-scale system design skills. The interview can't really pick that up; that's what promotions and annual reviews are for.
4. The FAANG loop is a classifier, just like any other interview loop, with both Type I and Type II errors. So it will both reject good candidates and accept bad ones. Any criticism of FAANG interview loops that doesn't engage with Type I and Type II errors is just emotional hyperventilation. None of the alternative proposals even try to show how they could possibly do better than the FAANG loop.
This is true, and absolutely matters. But I would suggest that developing the intuition to recognize a problem as being of a particular class, and even going further as to being able to look at the provided solution and intuit whether or not it is optimal (or optimal-enough), is a different thing than calling up that optimal algorithm from your limited brainspace.
Like: if you understand the principles of what you're doing, Google isn't a much slower recall mechanism than your memory, and it's less likely to have misremembered something.
I'm guilty of it myself.
> It's a bit like saying it's silly for pilots to have to know what to do in the case of engine failure or stall from memory and regularly practice in a simulator, because that stuff rarely happens and most of the time they're just using autopilot. That's true, it usually doesn't matter, but when it does... it matters a lot.
Well, we now know what happens when pilots forget the runaway stabilizer trim procedure (turn off the stab trim).
It's a quick, easy test of understanding, but why not show them some O(n^2) code and ask "how would you speed this up?"
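A concrete version of that question might look like this: show the candidate the quadratic loop and ask for the linear fix (function names are illustrative):

```python
def has_duplicate_quadratic(items):
    # O(n^2): compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    # O(n) expected: one set-membership check per element.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

The conversation about why the second version wins (hashing, memory trade-off, what happens if items aren't hashable) tells you more than whether they memorized an algorithm.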
In interviewing, it is hard to produce real world problems that don't require a huge amount of context and details of some obscure software stack, so we go with abstract problems which are kind of irrelevant to real work, but the problem solving approaches you need to take are very relevant. Thinking about the requirements, thinking at the right level of abstraction, defensive coding, to name just a few.
Communication and whatnot are all on top of that, and yes, absolutely do need to be evaluated.
I'll also note that MAANG companies have extremely sophisticated software stacks (and multiple ones for each company!) that were built by people who were hired using that method.
If it is wrong, it's working pretty darn well.
There are many things I would change if I could, but we do have an existence proof that, in the aggregate at least, the people this interviewing method selects for can build amazing software stacks.
More generally, the absurd success of most of Big Tech has not come from better technology but extremely high margins facilitated by software. Apart from the core ranking/ads problem that M and G solve, most of it doesn't need to be that complicated.
I would venture that lots of the cool technology stuff is designed to keep engineers happy so that the money train continues.
Certainly the one I was at built a lot of cool stuff, but there was so much reinventing of the wheel that I suspect that the core business and products could have been served by a much much smaller engineering org.
It's also a common vocabulary shared across engineers, no matter their seniority or where they graduated from. And knowledge of algorithms and data structures demonstrates that someone could learn it in the first place.
Especially for junior hires, it's an excellent predictor of success.
Problems like designing "the most popular package manager on macOS", for example?
I think what the author finds strange is that not only do many people still believe that LC (or whiteboarding) are effective proxies for general problem solving ability -- they seem to tacitly believe they are better measures than demonstrated, real-world problem solving achievements. Even problems that are significantly more multifaceted and nuanced.
If it is wrong, it's working pretty darn well [for MANGA].
The problem is, these tests (and extensive rounds of them, with onerous time limits) are used by companies that (inasmuch as they would like to believe otherwise) definitely are not MANGA and certainly are not offering MANGA levels of compensation.
Yes, those fall through the gaps, but people with that kind of extremely strong, publicly viewable history are much more the exception than the norm.
Not to split hairs - but it does seem to be a peculiar choice of words, there.
I just don't see why the DSA filter has come to be considered so golden: not merely demonstrating some baseline capacity for this skillset, but positively mastering a sequence of concocted performative rituals around it (specifically: reciting these algorithms in front of groups of strangers at a whiteboard; "talking" large blocks of code to someone over a phone; or doing online tests under completely unrealistic time pressure) has, in some corners at least, come to categorically dominate all other selection criteria in the hiring process.
(Actually I do see why -- but it all comes down to factors not related to the intrinsic importance of these skills, or to whether the candidate can effectively do the job).
Perhaps not; I think the "publicly viewable" part is going to be the sticking point most of the time. Most people don't have significant or impressive side projects they can show off to prospective employers; their work is at prior companies.
Anyway, I can see why DSA interviews are popular. IIRC studies show IQ tests and work sample tests are two of the most effective interviewing techniques, and algorithmic whiteboard coding splits the difference between the two, and does so in a way that's relatively platform/stack-agnostic. Especially useful since IQ tests themselves are of dubious legality.
And lastly, as a cathartic bonding ritual: "You went in on this high-pressure, slightly (or sometimes very) humiliating performance rite with us together and it looks like you've passed! Or at least it seems now you do not have completely damaged DNA. Now we are ready to forgive and accept each other as equals, or some approximation thereof."
Nah. Parlor tricks would be the old riddle questions about manhole covers and light bulbs being warm. It's difficult to gain competency in algorithmic whiteboarding questions without being smart and at least half decent at coding. They're not perfect gateway questions, but most of the alternative suggestions I've heard here or on Reddit are worse.
> And lastly, as a cathartic bonding ritual: "You went in on this high-pressure, slightly (or sometimes very) humiliating performance rite with us together and it looks like you've passed! Or at least it seems now you do not have completely damaged DNA. Now we are ready to forgive and accept each other as equals, or some approximation thereof."
Having worked at Amazon and Google, I really don't see this. People bond over other parts of the company culture, but the entry process, not really. I guess people do talk about the painful Google recruiting process, but that's mostly how fucking long and drawn out it is, people don't seem to really mind the questions themselves overmuch.
And without learning what amounts to a secondary career skill in the art of doing a specific song and dance in front of a room of often ill-prepared and apathetic strangers. Who often grossly overestimate their ability to curate and conduct these sessions.
BTW "song and dance" is not meant pejoratively but refers specifically to the "art" of: talking to a room full of randoms while you are allegedly doing original thinking and coding; also, pretending that the problem is novel to you, even making fake pauses and saying "hmm" now and then to make it seem so (when, per the company's instructions, you prepared assiduously for the interview, with the precise goal in mind of anticipating as many of these problems as possible); dealing with their less than helpful interruptions and basically pedantic suggestions; etc.
Which is what the whiteboard process seems to optimize for best.
There are forums, mentioned in other instances of this perennial thread, where people trade suggestions in precisely this extremely valuable (by whiteboard performance art standards) skill.
Inverting a binary tree means reversing the order of the sequence of elements that it represents. Thus inverting a binary tree is simply swapping the left and right children of every node, recursively down the tree.
You could do this with a recursive algorithm by modifying the tree in-place or by constructing a new tree. I don’t see any obvious candidate optimizations other than the fact that it’s straightforward to do in place (e.g. using std::swap in C++, or mem::swap in Rust).
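The in-place recursive approach described above is short enough to sketch (a minimal illustration, with a hypothetical `Node` class standing in for whatever tree representation the interview uses):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def invert(node):
    """Mirror the tree in place by swapping the children at every node."""
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node

root = Node(2, Node(1), Node(3))  # in-order traversal: 1, 2, 3
invert(root)
print(root.left.value, root.right.value)  # 3 1
```

The whole job is the one swap line; as the comment notes, there isn't much room for optimization beyond doing it in place.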
However, I don’t think this is as good an interview question as other potential questions. See my other comment in the thread for one that I think is better. The reason is that programming tasks like this are pretty uncommon for a professional software engineer to encounter.
Better questions (IMO) are tasks that professional software engineers are likely to encounter while doing their job at the company — or representations of them, while testing knowledge of A&DS as necessary.
Even if a company does an unusually large amount of work with binary trees, that company is likely to have a number of such algorithms and data structures already implemented. If you’re working at Google, I can’t imagine that you would need to build a binary tree data structure or an algorithm for inverting one; undoubtedly these implementations already exist within their standard libraries.
Personally, I can’t recall ever encountering a binary tree in business logic while working as a professional engineer. I wouldn’t expect to unless I was building something like a collections library or a specialized solution to a large-scale problem, like implementing a novel data store of some kind.
And if I was doing that kind of work I’d break out references (e.g. Knuth as a starting point, and any relevant recent research), and review open source software for comparison or as a candidate for directly solving the problem (SQLite, MySQL/InnoDB, LevelDB & BDB, Apache Commons libraries, etc.)
I'm not so sure. One aspect is even recognizing that the task can be done by a known algorithm. Not knowing the algorithms may make one mistakenly believe that the opportunity to apply algorithms never came up.
> I can’t imagine that you would need to build a binary tree data structure or an algorithm for inverting them
I'm sorry, but they are so trivial that it takes more work to understand the documentation for the library version than it takes to just write one yourself.
Though personally, if a candidate flunked the binary tree question, but had a very successful project, I'd take a closer look before rejecting him. For one thing, I'd want to have a look at the project's source code. I'd also want to verify he'd actually written the code for that project.
A company like Google/Facebook(Meta)/Amazon doesn't want many redundant copies of binary tree implementations per language in their code base. In general they'd want one implementation with a suite of algorithms that can operate on it.
Maybe if your implementation is very specific to the task at hand it might make sense; otherwise I'd expect to use a binary tree via something like a sorted container interface. For example, in Java, something like java.util.TreeMap, which I understand is implemented as a binary tree.
Unless you were building a compiler or data store or something very specific and optimized, I see the need for building one as unlikely.
I agree with you about things like "convert this string to a number", where it's too easy to get wrong (like not handling overflow) or be inefficient. But binary trees (and linked lists), like I said, are so trivial it is just not worth the bother.
Before a tech interview, good candidates will spend a large amount of time doing 'homework', which is just practicing answers to the asinine questions tech interviewers ask that are usually barely relevant to the position, if at all.
Equally, if not more important, is the candidate's experience or ability to navigate legacy code, testing philosophy, handling incomplete/inconsistent requirements, and attention to detail, among other traits. Of course these won't merit the same attention in an interview.
On the one hand firms have difficulty finding good candidates, on the other hand they are giving ridiculous tech interviews that aren't a great judge of what kind of employee they will be.
That is why I am staying put for the time being; hate the interview process, hate being asked stupid questions about solving problems that I will never have to solve in real life.
Ask me about projects I have worked on, ask me about code I have developed, ask about my experience, etc. Meanwhile, I am still looking for the development job where I will be required to solve the Tower of Hanoi problem....
In fact, for the current job, I got through the initial interviews and was then told I needed to take an online 'skills assessment' test to move forward. I told the interviewer (nicely), 'Forget it, I don't want the job if I have to take the test; if my resume and track record aren't enough then maybe this isn't right for me.' (Keep in mind that I have a 30-year track record; I'm not a recent college grad.)
Guess the hiring manager agreed, because they hired me anyway (still haven't taken the test).
I have another very talented SWE friend who has been doing leetcode exercises off and on for months and is struggling to start doing applications even though they do not like their current work and are undercompensated.
They fear they will fail algo / data structures and yet this person has an excellent public track record of quality contributions and comms in FOSS.
From what I've observed leetcode screens are problematic enough that they are causing market inefficiency.
i'm not particularly smart, in fact i struggle at work routinely, but i know what i have to do clear interviews, so i do the grind and it has worked well for me. (to a certain definition of "well")
now that i think of it, it has only worked well for me because actual talented swe's don't put effort into interviews the way people like me do.
Adding a new project that must be executed on as a prerequisite to being seriously considered for a job, when you are obviously capable, can be a hard sell.
Especially given programming challenges are not that fun for people used to delivering features.
I know that for talented SWEs, having a leetcode round sink an interview process can be invalidating.
Companies that rely on single algo/ds interviews also discourage potentially great employees when they totally ignore evidence of capability.
The company wants candidates to grind ds/algos, but it won't compel its own employees to look at public contributions.
Totally agreed; I know of many similar situations. At the same time, though, the question is whether there’s any other system that’s better in practice.
“Capitalism is the worst system, except for all the others” etc.
You want a controlled trial or natural experiment where two large successful software companies experiment with two different approaches to hiring over a number of years holding all other factors equal. That’s hard but may happen at some point in our lifetimes.
- download this OSS project - explain to me where / how this API works, how deep does the call stack go. What tests exist for it?
- write me the spec for that call. discuss.
I am not sure why that's trivial.
One mistake some interviewers make is implicitly assuming that candidates can somehow conjure the same level of context from first principles, or that a specific algorithm might be familiar or reusable outside of its original context. Another mistake is "looks-like-me" bias.
For example, I happen to have a lot of context on a very specific algorithm that underlies basically every modern web framework, but if I wanted to evaluate a candidate on web performance, I'd look at performance optimization as an open-ended problem domain rather than drilling them on the particulars of this specific algorithm. In fact, out in the world of web framework performance, the most novel advancements come not from revisiting the algorithms but from looking at the problem domain from entirely new angles that had not even been considered before.
Over time I've learned that I'd kind of rather lean a little more towards the easier side than the harder for writing code during an interview, because the interview is unpredictably stressful. But at the same time, as prioritizing communication and a degree of thoughtfulness has become more important (which has ended up with me bopping over to a devex job where I am now), I've leaned more heavily on "let's talk through XYZ and suss out how you discursively approach the problem" types of interviews. Which definitely selects for a particular audience, but it's one more useful for the roles I've hired for.
On my last job search I had one interviewer state (very proudly) the systems design question I was being asked was an actual problem his team had to solve. I don't doubt the veracity of his claim at all, but it probably wasn't solved by a single person under the time constraints and pressure of an interview.
Most likely someone on that team spent hours or days researching and designing potential solutions before drafting a design document that was shared and discussed with others, perhaps informally or perhaps in a meeting (or over the course of multiple meetings) where tradeoffs were considered among people with deep knowledge of the existing system and problem space. Expecting a candidate with only superficial (at best) knowledge of your current system to come up with the same or similar solution on their own in 30 or 40 minutes seems a bit unrealistic to me.
So in the context of an interview, I'm trying to treat the interviewee like a colleague who I'm coming to with a problem I'm having, so we can come up with a solution together. That often involves drawing things out on a whiteboard: not code, but more diagrams to describe the problem. Then we come up with ideas on how to do it, under various constraints that I share.
Usually I have in my pocket 2-3 different approaches that we tried when we did it ourselves, and I'm looking for: can you understand the tradeoffs between these different approaches, do you understand how they work, and are you capable of implementing them to test and cross-compare them?
It's interesting work.
DSA style problems are good if used correctly.
Give a candidate a good challenge, preferably with some requirements that can have different interpretations, ask them questions, help them and collaborate with them.
The analysis of the whole exercise should be your hiring determination not whether the problem was actually solved.
Did the candidate:
- stick with the problem?
- ask meaningful questions about requirements?
- ask for help from a senior when stuck?
- explore different avenues to solve the problem?
- handle criticism?
Of course it is great if they solve the problem, but often that only shows you that they have memorized algorithms. (We have the internet these days.)
I think being able to handle criticism is an important trait to look out for. Many times interviewees will defend their method too aggressively.
But if you have to pick between two people who are equally pleasant to work with, but one was able to solve the problem, you'll probably pick the one who solved the problem. When the competition is fierce, the candidate would probably still have to spend some time practicing on these problems.
- I phrase the question in a way that has some semblance of day-to-day relevance. That is to say that at some point in the process of coming up with a solution, the ability to apply a relevant data structure will come up, but it will be in service of an end goal that looks like the deliverable of a sprint task.
- I come into the interview aware of multiple solutions and I am open to any of them.
- I pace feedback so that the candidate actually solves the problem by the end of the interview, no matter their level (which does mean, in some cases, literally spelling out the step to unblock themselves).
The rationale is that solving a hard DSA question doesn't give me all that much signal in and of itself. Watching a candidate bang out something with a level of complexity a little higher than fizz buzz is usually sufficient to evaluate whether the candidate has familiarity with the language. The choice of idioms and APIs can tell me things about their relative level of expertise with the stack (i.e. it can generally be safely assumed that an already-employed candidate can hold their own, and the question for me is more along the lines of "to what extent").
During the course of an interview, I can usually pick up a distinct and noticeable difference in focus between candidates, especially surrounding topics related to proactiveness/curiosity (e.g. does the candidate have understanding of aspects one abstraction level lower than the API they usually use, are they aware of well known pros and cons of some specific idiom, does their argumentation seem derived from personal experience vs parroted from a hivemind, etc). This tends to correlate surprisingly accurately to how much autonomy and growth they demonstrate on the job.
"Hardcore" DSA evaluation only really comes in as a criterion to determine whether the candidate is of very high quality, once most other criteria have already been evaluated as acceptable/desirable. These nice-to-have criteria come into play in some cases where I want to advocate for the candidate when the evaluation panel is split due to one seemingly bad session (possibly due to factors such as nervousness or mixed signals), or inversely when the role logically demands a higher bar but the panel is situationally incentivized to hire down to meet a quota.
I've been told by several candidates that they appreciate my interviewing style, and conversely, I feel like I get a much better feel for the candidate than strictly evaluating DSA skills and nothing else.
Why not ask them real questions about the real sorts of things they're going to do, and through those see if they stuck with the problem, asked meaningful questions, etc.?
DSA problems don't bring anything to the table here, because it's trivial to substitute them with something meaningful. I'd even go so far as to say that we work bizarrely hard at asking uselessly inappropriate questions. It is easier to ask questions related to the real job.
Now, sometimes there are compromises in algorithms (I can think of 3 different O(n log n) sorting algorithms, and I think I've forgotten a dozen more), and so the company has a proprietary library with an implementation that makes different compromises. They should have benchmarks and pros and cons written down (and ideally be open source), so that anytime someone asks about the algorithm they can point out why they have their own, and possibly contribute it to the standard library if everyone agrees on a different compromise.
These days the vast majority of the simple algorithms have already been written. I might need someone to design a more complex algorithm, but those either take months of thought or are just a sequence of simple algorithms.
I don't care that you really really want a solution to P = NP; you can't have it. If you try to force it, you will pay the consequences.
I don't care that you really really want a single standardized interview for what is essentially several dozen distinct positions, even if they all "use computers". You can't have it. If you try to force it, you will pay the consequences.
If you suggested that companies should have completely standardized interviews for salespeople, lawyers, and executives, all because they "talk to people", you'd be laughed at. Change it to "talking to computers" and they lose the thread.
There are folks who are great at their job (called "software engineering") and their work never invokes their knowledge of algorithms. There are also folks who spend day in and day out tuning and designing new systems and deep knowledge of algorithms is essential (we also call this "software engineering").
20 years ago, all you had to do was get a "certification" in some technology and you'd immediately be hired to work on it. That led to a lot of really bad people getting hired because all they did was learn how to pass a test. Today, hiring in tech doesn't require any piece of paper at all. But that means that instead of relying on some standard format to prove your baseline body of knowledge, the interview process now has to do it ad-hoc, and it never seems to fit the role's actual requirements.
And nowhere in the industry have we ever required training on how to actually do a job. Does the candidate know what an SDLC is? Do they write ADRs? Have they juggled multiple changes in flight on a team with large codebases? Do they have a solid grasp of the strange subtle quirks of their tech? Have they learned to be judicious in their decisions and weigh the many long-term pros and cons? Have they ever developed any project with a team?
Trades typically require an industry board to certify them, and then often require years of apprenticeship under journeymen or masters. I think these two would go a long way towards leveling the incredible amounts of variation in candidates and eliminate these ridiculously ill-suited interviews.
Unlike at startup or enterprise dev shops, at the scale of FAANG those sorts of problems (e.g. turning n into n*2 and the like) in a backend are very likely to cause crashes or cause something essential to time out. If you haven't worked on a service with 5-10K or more servers, you aren't likely to really get how quickly and how frequently things go bad if you aren't very careful.
There aren't many if any engineers designing bridges who haven't had the entire curriculum. If you want to practice independently as an engineer then you have to get a license which involves taking a hard test which recaps your entire undergrad curriculum and maybe more. I peeked over my brother's shoulder when he was studying for the EIT. Not easy to pass. And like doctors and lawyers they have to keep up with their professions to renew their licenses. So, no.
Rare these days, that kind of mess unless the entire department is off kilter. Contemporary practice is to team up, and be in constant communication about the code base, requirements, approaches. Someone in the team or adjacent will know the proper approach and be able to communicate it to the others.
Or you hire a 10x like John Carmack and he codes the thing. No need to constantly have meeting and waste everyone's bandwidth.
That, of course, assumes the department understand what a 10x is and how to attract them.
So you end up with 6-month coding bootcamp "graduates" claiming to be "engineers" alongside folks who graduated from long and hard engineering programs (MIT MEng comes to mind) and who had to complete things like this as assignments.
I recall someone from a bootcamp writing a cascade of nested if-else statements, 6 levels deep in some places. Then someone with a real CS background told him that he was basically building a finite state machine, to which the other dev responded that “he didn’t need fancy things, just for the function to work”.
That's the main reason companies use algorithmic questions when filtering out candidates (because it checks for an understanding of the fundamentals, something bootcamp grads will lack and that will take years to acquire) and why every job interview has questions like these (there's no certification process, anyone can claim the title, so the company must do the vetting every time).
We paid them for the week like a contractor and made an evaluation at the end of the week. It weeded out people who were great at code but a bad fit culturally, and confirmed the people who worked out great.
Sometimes we couldn't get a whole week to commit, but just a few days -- but these days made a huge difference.
Now, I work at a FAANG company and _hate_ the interview process. Give me 3 months and I think I could teach just about anyone how to pass a CS interview (without being specific to any particular FAANG company), but that doesn't mean they would be good engineers.
They are loosely related but different skills.
> but bad culturally
Unless your company can afford the month-long interview process for every candidate that the author suggests, this is the best we have.
If the current leetcode style of interviewing is the best that SV can come up with, despite having some of the best engineers and thinkers in the world and billions of dollars to spend, that's really sad.
At some point you have to accept that there is no ideal way to interview, and that there are fundamental time/precision/recall trade offs. Big tech optimizes for low time and high precision, which means you certainly will end up with low recall (false negatives).
I’m sure there are improvements to be made (and maybe the specific leetcode style is not ideal), but I don’t think the billions of dollars are relevant. It’s like saying it’s sad that tech companies haven’t improved on O(n log(n)) sorting despite their billions — it’s not possible.
Griggs applies to any hiring criteria or practice that has a negative impact on a protected class compared to people outside the class (given the set of protected classes, this is essentially any hiring criteria or practice) without sufficient evidence of probative value on job performance. It is neither limited to things very much like IQ tests, nor are IQ tests any harder to justify than any other element of a hiring process.
For all their failings, leetcode-style interviews are probably less culturally-sensitive than standardized IQ tests. They probably do have (unintentional) disparate impact, but this seems like a really hard thing to measure (and possibly correct for).
Yes, the existence of readily available statistics may make the unequal impact easier to show.
> For all their failings, leetcode-style interviews are probably less culturally-sensitive than standardized IQ tests.
“Culturally sensitive”, maybe, but they almost certainly have quite large unequal impacts adverse to protected classes (including “age, if over 40”), though the absence of external data raises the cost of proving the unequal impact. Also, certainly less demonstrably predictive of job performance in software development.
> They probably do have (unintentional) disparate impact
Probably, that, as well, but I don't think the age discrimination function largely is unintentional to start with.
Just because code challenges are not relevant to job duties doesn’t mean the results are irrelevant to job performance. They are a proxy intelligence test. General intelligence is the best predictor of success in almost any role (not just software engineering).
This is what confounds people. They think the interviews are designed that way because they are supposed to be representative of the job — I don’t believe that’s the case. They are that way because they provide a strongly correlated signal of performance after hire, and big tech has decades of data for all different interview types. I’m very confident that if they had a better interview circuit that could be done in a ~day, they would be doing that. Obviously even at big tech, referrals and recommendations count for a lot.
While I don’t like the low recall, I do think that “invert this binary tree” probably has less bias than a quiz on technology, or a design conversation (that seems way more susceptible). Perhaps it has a bias for a particular kind of computer science education and thinking, but at least that’s not a protected class. I’m not seeing the age connection, but I could imagine e.g. a computer science education in different countries emphasizing different skills over years (and leading to some candidates with a leg up on the “tests”).
This may be the case. But even if it does act as an effective proxy--and I'm not sure it does; my worst yeses in interview loops have been very adroit programmers who I passed against my better judgment when my "not sure I want to be around this person every day" bells were ringing subtly in the back of my mind--then it has a different problem. You've now set expectations with the interviewee that oh yeah, we do hard stuff here. Then they go frob knobs or write frontend stuff all day.
(This actually happened to me at my first job. I didn't know any better, of course. The hiring manager pumped up my tires with all the difficult scaling work, etcetera etcetera. Then I was writing HTML into templates for the first six months I was there because they needed a body to plug into the role.)
As humans with a strong confirmation bias, it is extremely difficult to tell what's behind your feelings for those worst yeses. It could have been the case that the candidate had red flags, you saw them, but couldn't articulate them. Or it could have just as easily been the case that the candidate had a cultural and/or communication style that was different from your own, and they also happened to perform poorly after being hired. It's important to remember that no interview process is going to yield perfect results: there will be false negatives and false positives, you can only move the trade-offs while simultaneously ensuring that you're avoiding any conscious or unconscious discrimination against protected class to the fullest extent possible. That's a hard problem to solve.
Google also found that interview performance isn't a good predictor of on-the-job performance.
Sorry, but it's hard to take this style of interview seriously after hearing about these things.
People repeat things like this without really understanding statistics and priors and what that assertion really means. Assuming that statement is true, it applies only to Googlers who have been hired by the hiring process, i.e. if you are beyond the cutoff threshold of the interview process, the ranking within that subset is not determined by the interview performance. That in no way implies that among all the interviewees, including the rejects, the job performance would not have been correlated with their interview performance, had they been hired, hypothetically.
Inverting a binary tree is not the kind of task that a software engineer is likely to have to perform at most companies. I think companies can come up with better questions that still involve algorithms and data structures but better correspond to problems that a professional software engineer might actually need to solve.
For example, to repeat a comment I made elsewhere in this thread, ask them to serialize a binary tree to a byte stream, and deserialize the same tree from a byte stream. There are plenty of edge cases in this problem; you learn whether they understand a data structure (binary tree) and how to implement it, as well as how to write algorithms that operate on it (potentially a recursive descent on the tree; potentially a visitor, depending on how the candidate designed the tree).
This also isn’t a problem I’d directly expect to solve during regular work, but it’s useful and close enough: serializing/deserializing objects (data structures) is something I’ve had to do in reality plenty of times. (Even if you’re using an RPC library or IDL, you often have to “serialize“ in the sense of translating into the business object into one generated by the IDL or RPC client)
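The serialize/deserialize question described above can be sketched in a few lines; this is just one possible format (a 1-byte presence flag followed by a 4-byte big-endian value, written in pre-order), and the `Node` class is a placeholder for whatever tree type the candidate designs:

```python
import io
import struct

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def serialize(node, out):
    """Pre-order walk: write a presence flag, then the node's value."""
    if node is None:
        out.write(b"\x00")  # absent-child marker
        return
    out.write(b"\x01")
    out.write(struct.pack(">i", node.value))  # 4-byte big-endian int
    serialize(node.left, out)
    serialize(node.right, out)

def deserialize(stream):
    """Rebuild the tree by consuming the stream in the same pre-order."""
    if stream.read(1) != b"\x01":
        return None
    (value,) = struct.unpack(">i", stream.read(4))
    left = deserialize(stream)
    right = deserialize(stream)
    return Node(value, left, right)

buf = io.BytesIO()
serialize(Node(2, Node(1), Node(3)), buf)
buf.seek(0)
tree = deserialize(buf)
print(tree.value, tree.left.value, tree.right.value)  # 2 1 3
```

Even this toy version surfaces the edge cases the comment alludes to: the empty tree, values that don't fit the fixed-width encoding, and truncated or corrupt input streams (which the sketch above does not defend against).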
- Implement a LRU cache
- Implement a rate limiter
- Read and process a very large file line by line
- Combine two sorted streams
- Calculate the sum of a specific section of an MxN grid
- Parse and solve a math equation
- Find the nth most common word in a blob of text
These are all pretty realistic representations of what a software engineer can be expected to do day to day.
But that's not the same as knowing by heart how to implement non-trivial algorithms. There's the green lumber fallacy.
To have a feel for the 'right' data structures and algorithms to use for specific problems is a key skill. But then the detailed algorithm is only one Google away these days, although I would expect someone to be able to derive the simple ones themselves.
Once you've built a dozen houses you may remember these things implicitly, but you never need to remember them. You do need to know why they matter, how to look them up, and how to adapt them to your particular job.
If you're a builder I might expect you to know when to use a steel beam rather than timber even if you don't know by heart what grade and size for any random load (I don't know anything about building but that feels like the right level).
Likewise, if you're a software engineer I expect you to know what's a hash table, a binary tree, a linked list, etc., what are their pros and cons, and when you might want to use each of them. But in general I don't expect you to be able code a tree inversion off the top of your head. Obviously expectation of detailed and specific knowledge has to depend on previous experience and role.
Say you need a builder for a house that an architect designed with wooden beams. Then you interview the builder, and ask them what kind of beam they should use for a theoretical house, and then ask them all sorts of questions about steel beams. Well they don't have the expertise to answer the first question, and the second question doesn't apply to this job at all!
Of course there's a body of knowledge that needs to be known. But I think we need to better codify what knowledge is needed by which people to do what kinds of work. And then have a way to establish that they know how to 'do the work', versus just having a body of knowledge.
> I agree trial work periods may not scale
Great to see the author uses the Green Lumber fallacy to argue against leetcode-style interviews. Now I'm going to guess he also must have skin in the game, otherwise he would only be an empty suit doing armchair recruiting.
Let's say we want to do work trial periods. I tell you what actually happened to us: we opened an internship position and we had ~1600 applicants. Since having 1600 trial periods is impossible, we need a way to weed this down to a manageable number. Congratulations, now you moved the problem of "who do we hire" to "who do we invite for a trial period".
The software engineering interview isn't perfect, but sadly trial periods are not the answer.
So it's always frustrating to find a new job because I know I'll have to spend weeks practicing on problems that will have no relevance to the posts I'm applying to. To me, it's a waste of time.
I think companies that use competitive programming problems (that are irrelevant to their engineers' day-to-day jobs) end up hiring people who are very good at interviews and perhaps not as good at their actual jobs.
Of course, there are jobs which deal heavily with algorithms and optimizations, for which these types of interviews are relevant. But the article is not talking about those cases.
> The “green” in green Douglas fir refers to the fact that it has been newly cut (it has not been dried), just like someone who is new at something is referred to as green.
This was interesting/funny to me, since I was sure that the name "green lumber" comes from the fact that it is green if you look at it (being plant material which was recently alive). I'm not saying a full-size tree will be green like your kitchen garden basil stalks inside, but e.g. under the bark there are green hues.
I would never have thought it came from the "green = n00b" connection. Am I the mistaken one, now?
"but software development is not data structures and algorithms"
Do you know what memory alignment is? Padding? Branching? The performance implications of a cache miss? These questions are never asked!
Two nested for loops, O(n²), where you considered memory alignment and your CPU cache size when choosing the data structures and defining your structs, will perform better than an O(log n) algorithm with a high cache-miss rate and branching all over the place.
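As a rough illustration of the padding point (not the cache-miss point), here's a sketch using Python's `ctypes`; the field names are made up, and the exact sizes depend on the platform's alignment rules, but on a typical x86-64 machine reordering the fields shrinks the struct.

```python
import ctypes

# Field order affects struct size because the compiler inserts padding
# so that each field sits at an offset matching its alignment requirement.
class BadOrder(ctypes.Structure):
    _fields_ = [
        ("flag_a", ctypes.c_char),   # 1 byte, then ~3 bytes of padding
        ("value",  ctypes.c_int),    # 4 bytes, needs 4-byte alignment
        ("flag_b", ctypes.c_char),   # 1 byte, then tail padding
    ]

class GoodOrder(ctypes.Structure):
    _fields_ = [
        ("value",  ctypes.c_int),    # 4 bytes at offset 0
        ("flag_a", ctypes.c_char),   # small fields packed together
        ("flag_b", ctypes.c_char),
    ]

# On a typical platform: sizeof(BadOrder) == 12, sizeof(GoodOrder) == 8
```

Same data, different layout; the "bad" ordering wastes memory and therefore cache lines.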
In my day to day I work on a garbage collected language (which I think are great, don't get me wrong) and I'm tired of seeing programmers thinking that memory is free, GC is free, syscalls and networking are magic...
Software engineering culture is broken in general. From the education phase to the "real world".
Am I missing something? The only excuse I can imagine is interview anxiety, which is a real problem that deserves accommodation (bad explanations of the problem may play a role too, but you should be able to ask good clarifying questions). But if someone is really incapable of this very minor feat of programming, then I'm sorry but an interview process is right to reject them.
 Even if you don't literally write recursive Java methods, which you probably shouldn't, understanding recursive logic is critical all over the tech stack.
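For instance, walking an arbitrarily nested structure (a JSON config, a DOM, a directory tree) is inherently recursive. A minimal Python sketch of that shape of logic (my own toy example):

```python
# Recursively collect every string value in an arbitrarily nested
# structure of dicts and lists -- the same shape of logic used when
# walking JSON documents, DOM trees, or directory hierarchies.
def collect_strings(node):
    if isinstance(node, str):
        return [node]
    if isinstance(node, dict):
        return [s for value in node.values() for s in collect_strings(value)]
    if isinstance(node, list):
        return [s for item in node for s in collect_strings(item)]
    return []
```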
I have never used that in my career as a dev (despite being hired for it once). Others hand that to me. I've also never gotten any credit for whether customers like what was built.
Homebrew is a great product. But in a tech firm, the product manager gets the credit for its greatness.
These kinds of questions are great for leveling an applicant. How a junior dev answers these questions will be very different from what a senior dev has to say.
Disagree. I'd say that both are needed skills for software. Software is made in layers, like a cake. Maybe DSA-style problems are like grinding the flour (or making the oven itself?) and the other building skills (which, I agree, seem like different skillsets) are about decorating the cake and mixing the dough or something.
So in practice nobody is grinding their own flour; the flour is bought already processed (in programming this'd correspond to importing the algorithms from a lib).
That being said I also have serious beef with "architects" whose primary capabilities are programming in powerpoint and being professional XML developers.
They argue that this is the only way to weed out all these candidates. It is MAANG's version of the SAT.
> the best solution is to have trial work periods. There’s no better way to see how someone performs at the job than having them actually do the job.
Agreed. But how do they implement this?
I can't figure out if it's a strong aversion or an inability - the end result being the same in both cases, at best some approximation gets implemented instead, and the analysis / feature / etc just doesn't get done in the worst case.
Why not Alphabet - just a guess, but they haven't really pushed Alphabet branding.
Software development is almost entirely about understanding what others want and then translating them into Data structures and algorithms. Author could have meant software development is not about implementing already implemented data structures and algorithms. imo better questions would have a candidate using the DSAs than implementing them. eg: questions that require them to use binary search, bitsets etc
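In that spirit, a question could ask the candidate to *use* binary search rather than write one. A hedged Python sketch using the standard library's `bisect` (the function name here is my own):

```python
from bisect import bisect_left

# Using binary search rather than implementing it: membership test
# on a sorted list in O(log n) instead of a linear scan.
def contains(sorted_items, target):
    i = bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target
```

The candidate demonstrates they know binary search exists, what it requires (sorted input), and when it pays off, without reciting the implementation.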
"These test-like interviews mistake the trees (data structures and algorithms) for the forest (building software)."
Should say: "interviews mistake the B-trees for the forest"
The other side of it is the ridiculous notion some developers have of not having to meet these standards to get the best jobs.... two sides of the same coin that starts with an unhealthy attachment to your profession as your identity.
Are there any great pieces of software written by ex-ICPC champions?
When you're young and just starting your career, you typically have nothing but time, and doing programming challenges on evenings and weekends can even be fun. As you get older, life happens and you have children to take care of, older relatives to help out, a spouse who'd like to see you every so often, etc. Practicing leetcode-style problems in your 30's and 40's means less time with your family, offloading more of the childcare and housework onto your spouse, putting off household chores and repairs, and so on. So you end up with a hiring process that's incredibly hostile to older workers while being a relatively easy lift for younger workers with fewer responsibilities.
And, as someone in their 30s with two young children, I find time for interview prep easier than at any other point in my life. As you get older you should get better at learning. Also, who doesn't have chores and jobs in their early twenties? You should get better at managing your time as you get older.
Guy who made Quora did ICPC or IOI
GP probably deaf
This was on Twitter, after all.
It's FAANG. Don't let Facebook whitewash its rightfully tainted name.
For the most part, I avoid the data structures-style questions. We have limited time as interviewers, and while these show off a candidate's knowledge in areas we may need very rarely, I'd rather have someone who is productive at using what the framework we write code in provides. I stick with "when is it appropriate to use 'X' vs 'Y'" style questions but usually don't have to ask them; the candidate reveals that knowledge (or lack thereof) through other questions, and I hate whiteboard interviews. I don't penalize the candidate for lacking knowledge they're never going to use on the job, but I like to know what areas they have a deep understanding of. Before I ask these questions, I'm careful to point out "I don't care if you know the answer to this or not -- you might encounter that once in 10 years, here -- but if we do encounter it, it's helpful to know if you have a deeper understanding than others on the team." For the most part, though, it's a waste of time and I'd rather spend that time on the next two points.
My preference is to ask the candidate (very carefully) to provide a link to anything they've written that they can legally share openly. My explanation includes all of the following: I am not interested in judging them on the code they send me -- in fact, I want their worst. I want the code that they don't care about, that they wrote in a hurry to solve some random problem that they had and that "just needed to work and do that one thing". I'll use that code as a starting point to discuss refactoring -- i.e. "What if you wanted to take this code and make it worthy of a product you'd sell". If there's absolutely nothing that they are comfortable sharing, I ask them to take a few minutes and find a project in the framework we're interviewing for on GitHub that we can use for that purpose.
As for the "give them the actual work", we have a single-sheet project we give to every candidate, Junior, Mid or Senior. It's a "to do" app with basic requirements: it must use a database of some kind for storing/retrieving data (SQLite is called out as an option, but anything relational is allowed), and it must allow adding, marking complete, unmarking complete and deleting. We additionally coach Senior/Mid developers to include things that would be important for a completely released application, with examples (none of which are required), such as documentation, instructions for starting/setting it up, and installers (with advice not to take this too far, as that's one hell of a rabbit hole). Depending on the outcome of the "actual work", we might follow up in a similar manner to ask them questions about their choices.
I've seen everything from a To Do app that used SignalR to perform its operations, allowing multiple users to watch the check-list get updated in real-time (complete with a WiX installer, OWIN hosted web service running as a Windows Service with everything wired up and a doc generation project) to a simple HTML-template based solution that is done in four files. Of course, we check the usual sources for obvious evidence of whole-sale copying, but we're not terribly strict here, either (this point is made on the spec sheet -- it's such a simple app, it shouldn't require a lot of research/copy-pasta to build).
The first time I did this, I left off that "legally" part, and the candidate shared code that, while not representing anything their employer would care about (basic library code that probably every programmer has written from time to time) and certainly not anything the company I worked for was interested in "stealing", carried a copyright that they did not own. Unfortunately, it disqualified the candidate (not my choice in this case, but the company I was working for at the time had concerns that the employee might introduce liability we didn't want). And I don't want any candidate to think I'm asking them to break their contractual obligations in order to get a job -- we operate ethically and don't want to imply otherwise.
I'm always careful to say "comfortable" so as to allow them to not have to feel bad about not having any code written "outside of work". I have met many software developers (albeit often at the Junior/mid skill levels) who simply don't care to write software outside of work. At the senior level, I've met several software developers who haven't written anything publicly shared in a while -- they're working too much at their current job, have families, and might not have the time to devote to things outside of that. Interviewing sucks, and anything I can do to disarm the person and get them talking/excited will help me to evaluate them better.
 The dude really wanted the job. OK, I lied, the dude was me, but it was rare that I wouldn't include these things in an app I was writing for my current employer, so those parts were automatic (and easy at that point) for me -- it felt wrong not to include them. I used the documentation to explain my choices, mostly, which avoided a second technical interview.
Design a binary tree containing integers. Design a function to serialize this to a byte array (or byte stream); and design a function to deserialize that same output back into its original tree form.
I thought this was completely reasonable and also very practical, as it tests data structure & algorithm knowledge to the extent that is likely to come up during real programming tasks, in a context that is entirely plausible as well (needing to serialize data in order to store it or pass it between systems, etc.)
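For reference, a hedged sketch of one way to answer it in Python: a pre-order walk with explicit null markers. The `Node` class and the byte format (one marker byte, then a big-endian 32-bit int) are my own choices, not part of the original question.

```python
import struct

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def serialize(node):
    # Pre-order walk: 0x00 marks an empty subtree; 0x01 prefixes a node,
    # followed by its value packed as a big-endian 32-bit signed int.
    if node is None:
        return b"\x00"
    return (b"\x01" + struct.pack(">i", node.val)
            + serialize(node.left) + serialize(node.right))

def deserialize(data):
    def read(pos):
        if data[pos] == 0:
            return None, pos + 1
        val = struct.unpack_from(">i", data, pos + 1)[0]
        left, pos = read(pos + 5)   # 1 marker byte + 4 value bytes
        right, pos = read(pos)
        return Node(val, left, right), pos
    return read(0)[0]
```

The pre-order-with-null-markers encoding makes the round trip unambiguous without storing any lengths or indices.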
By comparison to the article, I had to look up what it means to “invert” a binary tree since I hadn’t heard that term in a while. It seems like something you’d be more likely to do with a binary search tree than an arbitrary binary tree, but the operation makes sense on both.
If you realize that inversion aka reversal is as simple as swapping the left and right edges on every node — either in-place or by constructing a new tree - then the problem is actually fairly simple.
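To make that concrete, a minimal Python sketch of the swap (assuming a trivial `Node` class of my own invention):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    # Swap the left and right children at every node, recursively.
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node
```

That really is the whole trick, which is why the problem is easy once the term is understood.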
… as long as the candidate has a clear understanding of what "invert" means; if I were asking this question I would make sure they clearly understood it. From my perspective a term that would make more sense is "reverse" (the collection of elements represented by) a binary tree (but if "invert" is the standard CS term then shrug, as long as the goal is clear to the candidate).
However, I'm not a fan of interview questions that require a "flash of insight", even one as simple as "oh, this question just means swapping left and right at each node", since candidates might get tripped up looking for traps that require something more complex than the obvious.
Also I think that kind of task is rather removed from the kind of problem that we typically work on as software engineers on a daily basis. Serializing and deserializing data structures is something that I do in one fashion or another not infrequently - usually not with custom code but I think a competent programmer should be able to write that code.
Binary trees and algorithms on them are not something that come up very often in practice in my experience. They might come up if you were building a collections library or a particularly optimal solution to a large scale problem.
Otherwise, I don’t think I’ve seen a binary search tree in userland business logic the entirety of my professional career. On the other hand I’ve definitely had to write serialization/deserialization functions for object graphs. Although this is less common now that there are a variety of good serialization libraries and RPC tool kits, I find it’s often still necessary to convert between their generated structures and the native ones used by business logic.
In conclusion: the question seems like one that I would expect a competent developer to be able to solve, as long as they're given a clear understanding of what the problem actually means (i.e. explain what it means to invert a tree and don't take off points for not knowing the term), but it's not a good problem IMO because it's not the kind of code one would typically need to write while solving routine business problems.
All that being said, Google also rejected me during my last round of interviews; I believe this was because I was transparent with the recruiter about my interviewing at other companies and offers that I had, and Google’s offer (per the recruiter) would have been for considerably lower compensation. They said they did not want to compete on compensation because it would be unfair to their existing employee population, and so - per my best read of the situation (there was no discussion about my interview performance, and rather about this) - they decided not to make an offer that was lower on compensation, and potentially also on comparative level. So I might not be the best person to comment on Google‘s hiring practices. During a previous interview they did give me an offer though so shrug. (I interview periodically to benchmark comp and stay sharp)
- The work sucks because existing employees don't know DSAs
- We want people to re-implement and refactor with better DSAs
- We want to know the candidate studied for the interview (took it seriously)
- We actually do want them to know DSAs when joining the team
- It's the least you can do when applying for a CS job
Most recruiters provide links and resources, the material isn't a mystery.
I understand the time-space tradeoffs of the various STL collections, and the Java collections. In 35 years, that's been all I have needed. (And, if it does come up, why spend months memorizing what I can spend minutes googling?) I am not a huge outlier.
Interview for what you need. Anything else is wasteful. Do your people spend most of their time trying to squeeze the absolute most efficiency out of their data structures and algorithms? If so, yes, interview for that. If not, though, then don't interview for that.
Leetcode interviews when that doesn't match the work are just abuse. "Here, take months of your spare time learning to jump through this hoop that's actually irrelevant to the job." That's abuse. The only way that makes any sense is if you need employees that you can continue to abuse after you hire them. And if so, then I don't want to work for your company.
As I said, FAANGs are an exception. They need people who can go from n (log n)^2 to n log n. It makes a huge difference to them. If that's your company, then I'm not talking about you.
[Edit: Or perhaps I should say that far more companies interview as if they operated at that scale than actually operate there.]
I think that's precisely it: it's the least you can do. Students are taught data structures and algorithms because they don't know anything else. They can sort numbers and munge strings... and that's all.
They don't know any frameworks. They don't know any problem domains. They don't know how to debug, or read an API, or identify code smells.
But they have to write something, and if you want that something to be more than three lines long you have to make them do something complicated with the incredibly limited domain of knowledge they have. Thus, data structures and algorithms.
Nobody ever needs to calculate Fibonacci numbers. But if they can't write a recursive descent compiler, or even handle an HTML parser, they have to learn to write recursive code on something. Thus, they do something trivial.
And then they forget it, because they've applied that knowledge to something more useful. Much as you never do long division by hand, even though you had to learn how to, and you would possibly screw it up if handed one now.
A student getting a BS had better know a lot more than the least of it. Test it if you can't think of anything else they might know, but it's not going to tell you if they know anything that matters.
And the engineer with more experience is even less likely to have used that freshman-year stuff recently. As you say, it can tell you that they cared enough to cram it again... but they're cramming it because they don't actually need it.
I don't understand why people hate DS&A so much. Just do it. Get money, jump ship, get more money, it also makes you a better engineer. It gives you better compensation, it helps you, helps everyone, helps your team, your company. It helps avoid wasting time as well on both sides.
There is no downside of studying DS&A and Leetcode, if there is, the downside is minimal.
And yes, you can have DS&A knowledge without handicapping your other software engineering knowledge. It is a fallacy to think that a Leetcode monkey wouldn't be able to code resilient, robust software with good variables and good readable, maintainable code, and vice versa. Like, seriously. Why is this even an argument?
And after the DS&A interview, there are behavioral questions. If someone is a crazy person, ideally the behavioral questions would weed that out.
I don't understand why people prefer "here take home questions/work for 5 hours for free" over DS&A interview.
Here is the problem with these kinds of articles. Majority of them are written by people who hate these kinds of interview questions, and by extension, they probably aren't that good at it.
There are articles that sing praise of DS&A interview but of course it doesn't get here on HN because it is not controversial and the number of bitter SWE who don't have DS&A knowledge overwhelms those who do.
Not saying the author doesn't have DS&A knowledge, just my generalization.
And yes, of course there are people who excel at software engineering without DS&A, but that's not the point of the DS&A interview. The point of the DS&A interview is to be the fastest possible system for vetting engineers without wasting time on either side. It accomplishes its purpose nicely. Engineers aren't as non-fungible as they think. Deal with it.
It is a skill that you will lose if you spend a year without interviewing/leetcode practicing.
Sure I’ll do it, but you may as well test me on painting famous oil artwork from memory too.
> I don't understand why people prefer "here take home questions/work for 5 hours for free" over DS&A interview.
The alternative is hundreds of hours of speculative investment memorizing books/websites of algos. People will take full time courses on passing these interviews.
This is precisely my objection. Time is the one thing you can't make more of, so to have companies force you to learn and maintain a skillset that's completely irrelevant to doing real work is incredibly demoralizing.
But they don’t, and probably won’t unless I shift to much lower level work, so it’s a ton of extra work and stress for the sake of being able to pass an interview.
Graph problems are pretty common, even in frontend stuff. Often times I have to write my own library.
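For example, a plain BFS of the kind that shows up when asking "what depends on this module?" (a hedged sketch; the dependency-graph shape here is hypothetical):

```python
from collections import deque

# Breadth-first traversal over a dependency graph: find every component
# reachable from (i.e. transitively affected by) a changed module.
def affected(graph, start):
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen
```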
Learn once, use it many times, rack up many offers. It's like cheating mode.
- Personal time taken that is not relevant to the job?
- Being able to demonstrate that you know how a build system works, and the details enough to show you've had to actually solve issues with it, may not be long-lasting, but it does show you learn your tools.
>> I don't understand why people prefer "here take home questions/work for 5 hours for free" over DS&A interview.
- Awfully presumptive to assume they should be unpaid. I'd also take an unpaid, but interesting and novel engineering problem, over the unpaid hours of grinding leetcode that is considered interview prep.
>> Here is the problem with these kinds of articles. Majority of them are written by people who hate these kinds of interview questions, and by extension, they probably aren't that good at it.
- I'm a hiring manager and I hate them, from both sides of the interview table, because it tells me nothing about the candidate that I want to know. I've done them, I've both passed and failed them (entirely dependent on the question, my frame of mind at the time, if it's one I've done before or where I can easily come up with the 'clever' solution, etc); I also recognize they're measuring a very specific thing (your willingness to do bullshit prep work), that isn't job related.
I'll also add, the same arguments in favor of DS&A questions apply to those lateral-thinking brain teasers that no one uses any more, the "how many manhole covers are there in NYC" style of thing.
The "how many manholes in NYC" question is hand-wavy and imprecise. Totally different from DSA.
I have to disagree. Doing LeetCode problems is a huge waste of time for me; I'm not interested in a career as a "professional interviewee". Yes, I'm totally leaving money on the table because of it but I refuse to participate in their 5 monkey experiment.