Green Lumber Fallacy in Software Engineering (chrisbehan.ca)
131 points by weekendvampire 4 days ago | 222 comments





It's shocking to me how people don't get bored of rehashing this same tired opinion over and over again and discussing it in hundreds of comments. There is a blog post like this every other day on the front page of HN, and it always generates the exact same discussion.

I have said this before: it's group therapy for devs who feel entitled to oversized salaries just because they have tenure. Young grads accept the process as is. People who are either happy with their comp or comfortable putting in the work do not lament the process - they just accept it and proceed. Only devs that have been doing it a while, whose salaries have plateaued, that either never learned data structures or can't muster the will to review, bemoan the LC grind.

The most utterly absurd instantiation is the dev that's 100% sure that FAANG is going to fail soon because they're passing on people like him/her that are just so talented that they don't need to prove it with any assessment. There's also the MGTOW-esque subgroup that eschews such interviews and will proclaim it proudly, along with their salary.

Personally, I'd be embarrassed to be making a less-than-obscene dev salary, which is still upper middle class anywhere in the world, while complaining that I deserve more for no added effort.

What's funny is there are other professions where the same sort of multimodal distribution in compensation exists (big law vs everyone else, competitive specialties vs general practice in medicine, IB and PE vs everyone else in finance) and I have never seen members of those communities complain as much as developers do.

Edit:

I have a friend who's trying to break into FAANG (as a DS) who I've been coaching. He has a stats PhD and basically no software training whatsoever, outside of R. Since DS loops still have LC questions, it's been very hard for him. I have never once heard him express resentment over the process - he fails an interview and just goes back to practicing. He's also a first-gen immigrant from a very poor part of the Middle East. Admittedly it's hard to attribute his perseverance to any particular thing, but naive intuition dictates that some of it must be his humility - something that software as an industry seriously lacks.


> Only devs that have been doing it a while, whose salaries have plateaued, that either never learned data structures or can't muster the will to review, bemoan the LC grind.

As someone who mostly agrees with you, I still bemoan the LC grind. This is mostly because I think the software market has grown to the point where there's a distinct market for “plumbers” vs “engineers”. I consider myself a “plumber” who understands how to piece together various frameworks and languages in the most efficient manner, but “engineers” are the ones who actually invent the new frameworks and who need to actually understand DSA to do their jobs. They feel like distinct skills, since I've tackled what I consider engineering problems multiple times and failed, but I keep getting rewarded with higher and higher salaries for doing what I consider plumbing.

Unfortunately there seems to be an ego component: no company will admit that they need software “plumbers” more than they need software “engineers”, because it would make them/their employees look bad.

I just wish I could be ranked and rewarded on my “plumbing” skills, and only ranked on my “engineering” skills if I were looking for a career change.


From friends and acquaintances in other industries with similar income distributions, "knock out 4-to-6 programming problems in a short amount of time" is one of the least-arbitrary and most accessible ways to jump from one group to the more highly paid one. Not to mention that if you can knock out those problems, you may not even be required to have a degree or other formal credentials.

That's not even getting into how hilariously unrealistic "trial periods", for instance, are. Why am I going to leave a job for just a trial period without any more thorough attempt to check fit beforehand? Why would I invest that much time in side work before quitting, when I could get a FAANG offer instead for a single-digit number of days of brushing up?


>least-arbitrary and most accessible ways to jump from one group to the more highly paid one.

exactly. i mean like try to apply to IB at a BB and see if anyone even skims your resume without an ivy+ on it. they can't fathom how good they have it because they've never done anything else (which is, of course, what breeds entitlement).


Do people get a FAANG interview with no recognizable credentials? Surely they have to sift the CVs somehow?

For the F in that list, at least, definitely. Though that was because they used a different metric (plus a decade of experience) to sift/choose me for an interview, despite my lack of tertiary education.

How did you get the interview? They must get a lot of junk applications; what did you write to make them actually look at you instead of sending you straight to the trash, and then decide to interview?

easiest way to short-circuit the process is to reach out to someone through LinkedIn and ask for a referral. if you're really who you claim to be in your profile here then you shouldn't have any trouble getting an interview.

Well I'm not looking for myself. Was wondering if it's the case that they'll interview any CS grad?

I mean, recruiters get paid sizable commissions for each hire they make, and they really don't care where you went to school as long as they can sell you to the company. If you can sell yourself to a recruiter (a low bar) then you can get an interview.

of course they need credentials, but an undergrad degree from a decent state school is a much more widely available credential than ivy+. hell, FB hires high school grads into their FBU program (for the gap summer between high school and uni).

like you don't get it: go here https://www.hudsonrivertrading.com/careers/job/?gh_jid=35621... and find the dropdown for school ("what university are you currently attending").


> I have never seen members of those communities complain as much as developers do.

Other professions have guilds (not called that, but effectively that) who qualify membership and enforce various rules and standards on their members.

Maybe one day we'll have a Guild of Software Developers that provides this service and we won't need to deal with shitty interview tests. But until that day, maybe we shouldn't be comparing ourselves to professions that do have this and therefore don't face the same problems.


I think you are referring to Professional Orders (https://en.wikipedia.org/wiki/Professional_order) and, depending on where you live, there may actually be one. In my country, computer engineers have a professional order they can join after graduation, in exactly the same way as all the other kinds of engineers they've studied with. Unlike in some other engineering fields, though, the order does not require active membership in order to work.

There've been a few attempts to get this going. So far none of them have worked (see also: any kind of union specifically for tech workers).

The other professions have a "closed shop" - you cannot work in that profession without joining that association.

e.g. in the Anglosphere, you cannot practice as a lawyer without being a member of the relevant Bar Association, and to join that you must "pass the bar" which is an exam.

If we had this in Dev, then we wouldn't be able to write code for money until/unless we were a member of the Software Dev Association, and they wouldn't accept us until/unless we'd passed a rigorous exam (passing the bar is something that takes years of study) to prove that we could code. Then we wouldn't be facing ridiculous "but can you actually code?" tests during interviews.

But creating that closed shop has always failed (so far). Employers don't want it, and new coders don't want it. It's only us old hands who expect to be grandfathered into it that kinda like the idea.


When the status quo is objectively stupid, it is right and just to complain about it. In real life, the ones who will do so are mostly just the ones who it hurts. So what do you want to happen here, if it's not just for devs to shut up about the insanity that's making them miserable?

While we might be able to argue that alternative interview strategies could be better under some set of assumptions, the status quo is clearly not stupid. These companies are hiring thousands of people and pulling in stupid amounts of money. The system works. Could it be better? Maybe, maybe not. But it clearly does work.

Man, imperialism sure has worked. Doesn’t mean it’s the right thing to do. Jfc.

i have no clue what you're saying

>In real life, the ones who will do so are mostly just the ones who it hurts.

you think people grinding LC that make it through are hurt by the process?

>the insanity that's making them miserable

as far as i can tell, sloth, avarice, and greed are making devs miserable (not some bogeyman called LC).


Article says "There’s no better way to see how someone performs at the job than having them actually do the job." and argues for a trial work period.

Well sure, except the first part of any job is coming up to speed on the domain, the codebase, who the key people are, etc. So if you want to see how someone really performs, you're going to have to wait a while past that initial period. And that means you're going to have a lot of people who won't make it past the trial needing support.


Yeah, that is one of those claims that sound nice in theory but kinda come crashing down when the rubber meets the road.

Companies do in fact engage in a process of getting someone in and doing an actual job; they're called internships. The problem is that these arrangements typically involve someone dedicating a non-trivial amount of their time to "babysitting" (not necessarily in the literal sense, but in the sense that a newbie doesn't have historical context or familiarity with processes and workflows, even if they are by all other measures bright individuals w/ actual experience under their belts). Either you spend inordinate amounts of time setting up and maintaining a contrived bubble where a candidate can operate cleanly for a very short period of time, or you're looking at very long evaluation times (requiring a week or more from a candidate, who generally is already gainfully employed).

It's also worth pointing out that this process is incredibly expensive. One hour of time from a full-time employee doesn't really cost anything more than the few minutes lost to context switching (ie. not really much worse than the person going out to buy a coffee). Literally setting up a paid one-week period for a senior-level candidate would cost, optimistically, a few thousand dollars for evaluating a single candidate. It's completely unworkable in a large company that conducts dozens of interviews or more per week.


> Literally setting up a paid one-week period for a senior-level candidate would cost, optimistically, a few thousand dollars for evaluating a single candidate.

It’s unworkable for anyone who is already employed. What am I going to do: use a week of vacation for this “interview”? (The fact that I’d be paid is immaterial: I lose vacation time.) Plus, the more senior you are, the more of your compensation is in equity. Will I get a week’s worth of an FTE’s salary and equity just for doing the interview? And at what job level? The highest for which I’m being considered?

This wouldn’t be viable for private companies because it would bloat the cap table with a massive number of insubstantial stockholders; and I believe that companies run into SEC compliance issues when they have more than a certain number (1000?) of stockholders.

Additionally, when I’m looking for a new position, I don’t just interview with one company. I typically aim to get five or six offers (on a ~5-year cadence) to get an understanding of the spectrum of market rates for engineers, and to see what options are available. There’s no way I could spend weeks interviewing.

Maybe I would consider this if it were my “dream job”; but I’m not sure that the signal from such a trial would be better than from a typical interview.

Many companies have substantial technology stacks of proprietary infrastructure that you need to learn and become proficient with in order to be effective at your job. At recent employers, engineers have felt it takes 1-2 years to feel fully up to speed and effective using the company’s tech stack. With this kind of ramp up time, a one-week trial would only be useful with a make-work fake project.

My current employer spends six weeks full time just training all engineers who join the company - before they even begin to have actual responsibilities for doing work on their team.


The maximum number of stockholders for a private corporation is 2000, and it excludes stock awarded as part of employee grants, so that point is not a factor. But your other points still aptly demonstrate the utter lunacy of this idea.

Thanks for the information. I was not aware that employee compensation was exempt from that limitation.

I couldn't in practice see small companies choosing to go through the hassle of issuing stock to candidates for a week's worth of work though. It would be a lot of legal hassle for a process that's problematic for both sides.

Plus, you might end up with the problem of people interviewing at every hot startup that does a "1-week job trial" hoping to get some early stock at the next company that becomes a unicorn in 10 years.


This might change when a lot of people are remote. I could see people working a second job temporarily to see if they want to jump ship.

Trial periods and take-home assignments also exclude vast swathes of society: anyone who can’t risk an uncertain job, doesn’t have lots of extra time, is currently employed (compared to a college student getting their first job), or can’t go without insurance for a period (in the case of trial periods).

Even if that’s a feature you’re looking to “force” into your recruiting program, you’re probably better off building a “trust fund baby” workforce by targeting overseas volunteer work.


For very small companies, hiring someone with the right skills who will work well with the team is a crucial, "this might make or break our company" decision. I've seen a couple of companies who solve this by having the candidate spend a day pairing with someone on a real problem. When this works (whatever's being worked on is small enough to be understandable, the language/platform/domain/tool isn't completely foreign to the candidate, etc), it's a really great signal.

But for very large companies, it's still really bad to hire bad candidates, but it's also important to be efficient. You need multiple opinions on candidates, but the interviewers can't spend a whole day each; it's too expensive. But on the other hand, you have basically an unlimited pool of candidates. So you do whatever's both fast and is unlikely to produce bad "hire" decisions, and supposedly a series of algorithm puzzlers do a good job of being fast and producing a low false positive rate. I'm not sure if that's actually true, but that's the argument. I am sure that the process rejects a huge number of perfectly good programmers, though.


Agreed. This idea could only be put forth by someone who has never had to do the work involved in hiring a software engineer.

Or at least never had to do the work involved in hiring a hundred or a thousand software engineers.

I definitely believe you can get away with no coding questions like this for smaller organizations. But once you get to a big enough scale, all the other methods have too many issues (e.g. a large class of hiring methods which essentially skip anyone who currently has a job - trial periods). You can't afford to limit your hiring pool too much when you're big enough.


The increasing prevalence of the acqui-hire seems to me to be one manifestation of this.

Allow a team to come together, demonstrate an ability to work together, and define a project (hopefully one of interest to the acquirer, if not in the output then in the process which develops it), then hire based on proven ability.

Or at least that's one interpretation, and amongst the more charitable, of the practice.


If that is what they really want, then do a 1099 paying task that takes 40 hours over a month's time, with a weekly check-in meeting to ask questions not solvable via email. I don't love that idea, but at least it is respectful of everyone's time.

But then you're not actually working there. That's just an incredibly arduous and expensive take-home.

I used to have a similar perspective to the author's: that the leetcode-style questions / data structure / algorithm stuff they ask in interviews, which seems to have little to do with day-to-day corporate software development, is all a big waste of everyone's time.

I'm not sure I still do, though.

It's true that this stuff rarely comes up and often doesn't matter for most real-world applications... but when it does, it matters a lot. The difference between O(n²), O(n) and O(log n) is huge for big values of n. Being able to go straight to a (nearly) optimal solution, instead of hacking together something, waiting for it to burst at the seams, and then having to fix it... or being able to spot someone else doing that, can save weeks or months of development at a time.
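
To make that gap concrete, here's a toy sketch in Python (purely illustrative; the function names are made up): checking a list for duplicates pairwise versus with a hash set.

    def has_duplicate_quadratic(items):
        # O(n^2): compares every pair; fine for tiny n, hopeless for large n.
        return any(items[i] == items[j]
                   for i in range(len(items))
                   for j in range(i + 1, len(items)))

    def has_duplicate_linear(items):
        # O(n): a single pass with a hash set (assumes the items are hashable).
        seen = set()
        for x in items:
            if x in seen:
                return True
            seen.add(x)
        return False

At a million items, the first does on the order of half a trillion comparisons while the second does a million set lookups.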

It's a bit like saying it's silly for pilots to have to know what to do in the case of engine failure or stall from memory and regularly practice in a simulator, because that stuff rarely happens and most of the time they're just using autopilot. That's true, it usually doesn't matter, but when it does... it matters a lot.


When it comes up, do you need to code it up in 20 minutes, or is looking the algorithm up in a textbook ok? If all you want is to test that sort of knowledge, just ask the knowledge question. I don't need to memorize Floyd-Warshall and be able to code it on demand to be able to recognize when it's needed and look it up. I'm sure I remembered it back in school; I haven't used it since, but if the day comes I'll look it up...
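
(For what it's worth, the thing you'd look up is short. A minimal, textbook-style Floyd-Warshall sketch in Python, not production code:)

    def floyd_warshall(n, weights):
        # weights: {(u, v): edge weight} over nodes 0..n-1.
        INF = float("inf")
        dist = [[INF] * n for _ in range(n)]
        for i in range(n):
            dist[i][i] = 0
        for (u, v), w in weights.items():
            dist[u][v] = min(dist[u][v], w)
        # After iteration k, dist[i][j] holds the shortest i->j path
        # using only nodes 0..k as intermediates.
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return dist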

I do think there's a real place for interview questions where you identify algorithmic opportunities in real world problem scenarios quickly, but you also may be missing some of the incentives for testing the way it's currently tested:

Coding Floyd-Warshall importantly shows that you can code, and generally signals at least one of two things:

1. You studied hard for the interview, you're willing to do difficult and unpleasant intellectual work for rewards and do so in a structured enough manner to get results.

2. You have such a deep understanding of graphs and graph algorithms that you were able to reconstruct Floyd-Warshall in 45 minutes. This implies time will not be an issue if you see an opportunity for these problems, and you can probably handle much harder problems as well.

Filtering for these two traits might work very well depending on the hiring needs of a particular company.


I would say:

1. You're fresh out of school or you have a lot of time on your hands.

2. You memorized some interview preparation book.

I do agree that a candidate that has shown some effort preparing for an interview is a positive. Other than that this seems like a silly game.

The flip side is the candidate that memorized the book, prepared like it's a test at school, and is useless on the job, can't operate at all outside that memorized domain, can't write the simplest bit of real world code.


I mean it's also a pretty good filter by age: if you're middle aged, working and have children, then you're not going to really have the time to get through memorising a bunch of leetcode.

Which is handy because Google et al don't really want employees who look at the in-office perks and wonder why they'd be appealing at all if you're just going to go home at the end of the day.


If you're a good middle aged engineer, you should have enough former coworkers happy to hire you that you don't need to get on the leetcode treadmill.

This is such a funny thing to read, because I see new L6 and 7 tech managers hired (above me) all the time. They're often middle aged people with kids. They don't bemoan this stuff (and yes, while they have fewer of them, they still need to solve a whiteboard problem or two to get hired)

But you're seeing the outcome of a selection process. You might feel differently if you saw the top of the funnel.

In what way? If you mean that there might be management wizards who are just fantastic but we skip, perhaps. But that isn't the point I was making, just that despite what people think, these interviews don't actually filter out experienced people with families and kids. Those kinds of people get hired regularly.

Well you don't actually know that, because you don't know what the top of the funnel looks like.

Consider a world where every L6+ manager candidate has a family and kids. However, only 5% of these can make time to do the LC stuff (revising most likely if they've been managing for a while).

So your company will end up filtering out 95% of candidates immediately, but people in the company will say that there's no problem as all of the successful candidates have a family and kids, so it can't be biased.

Don't get me wrong, I'm a fan of technical screening for managers, but I'm making the point that you can't look at the output of a funnel and conclude that everything's fine, you need to look at the complete funnel to understand this.


> The flip side is the candidate that memorized the book, prepared like it's a test at school, and is useless on the job, can't operate at all outside that memorized domain, can't write the simplest bit of real world code.

I think this leaves out the possibility of high mastery. I think it is instructive to think about algorithm skills as similar to those for algebra. If the most advanced algebra you ever did was linear equations, remembering how to solve a linear equation may be difficult. However, if you're skilled at solving advanced calculus problems by hand, your understanding of algebra is so deep that you could re-derive the process as necessary. Many of us with a lot of math experience will never forget how to solve linear or quadratic equations barring steep cognitive decline, and this is also true for certain classes of algorithms. Personally, I've studied graphs and graph algorithms beyond what you might find in CLRS or other introductory algorithms textbooks, and as a result I can re-derive many classic graph algorithms on the fly, no memorization necessary. I have done this in interviews, as have people I know.


I have worked with engineers who treated their work as gluing together various services to get the results required. They were absolutely shit engineers, whose approach to building services for massive scale was simply:

1. Glue together whatever services are available and seem to make sense.

2. Throw increasing amounts of traffic at it and check what fails

3. Try to patch around failure point

4. blame the service provider for "not scaling"

5. Ask for more and more machines

6. Exclaim how great their design is on the basis of the cool class hierarchies and interfaces they wrote. Teach "junior devs" their "OOP design skills and patterns"

7. Go to step 2

They think of themselves as great engineers who scale things. The reality is they are absolute shit engineers who have no clue what they are building, frequently have crazy O(n^2) loops in their code, and don't recognize a graph problem if it's staring them in the face.

Examples:

1. They needed a way to rerun map-reduce jobs to regenerate data for the previous 30 days. It was a straightforward graph topological-sort problem. Instead they built a crazy solution using multiple instances of Airbnb's Airflow. One server ran out of memory running these Airflow instances, so they allocated a whole damn cluster of machines to run 30 instances of Airflow! The job of the cluster? Issuing map-reduce jobs to the Hadoop cluster. They needed a cluster of machines just to issue commands to Hadoop. They gave an elaborate tech talk about "scaling to 30X capacity" to process 30 days of data. Meanwhile, my team was processing 3 years of data. We considered hamake for our data-rebuild process. Eventually, I wrote a 70-line Scala topological-sort routine that did the job for us, because it gave us more flexibility than hamake and linked to our code libraries (the core idea is sketched after these examples).

2. Some guy wrote a data pipeline to summarize a day's results that took 24 hours to run, thanks to 40 different joins. I rewrote it to use a single group-by. The code ran in tens of minutes. He didn't know how Hadoop or external sorting worked, beyond writing arbitrary SQL. So much for looking up algorithms when required.

3. Guy presents beautiful OOP code that has major race conditions in a distributed environment. I point out the race condition. He keeps proposing hacks to "solve the problem". He does not understand transaction processing, locking algorithms, or what options are available to him. He hadn't heard of read-write locks, let alone their pros and cons, and doesn't understand the difference between local locks and distributed fault-tolerant locks. I explain it to him. He eventually comes up with a solution using ZooKeeper that would need ZooKeeper to process thousands of updates per second. I give up.

"DSA, what DSA? They are for kids. I will look up an algorithm when I need it".


You're missing the point. What you're describing has nothing to do with memorizing data structures and algorithms. Having those guys memorize those won't make them better engineers, right?

Sounds like you understand locking and distributed locks. Cool. Your next project is a video encoder. Write an arithmetic coder in a 20 minute interview? Motion detection? Your next project is a chess engine. Write minimax with alpha-beta cutoff in an interview? Your next project is public key cryptography ... A distributed k/v database... Can you write a bloom filter for me in an interview? Gradient descent in some machine learning application?

If you're such a great software engineer, which likely you are, you should be able to do all the above (at least somewhat), by doing your research, and writing code.

I'm not saying hire bad engineers. I'm saying the game is silly. Sure, a good engineer (and people that aren't so good) can play the game. Also not arguing with the person that said if your goal is to get paid better and you want that job then play the game. Doesn't make it less silly.


You are missing the point

> Having those guys memorize those won't make them better engineers, right?

They can't realistically memorize all the algorithms; they have to develop an understanding of graphs and algorithmic complexity to get through. They should be able to map arbitrary problems to a corresponding graph problem. These are absolutely fundamental. At the least, I should be able to say "use a topological sort" and have them understand what I am talking about.

Sure, someone could set aside their job for a year and memorize everything, and they might just get through the interview loop. But it is impossible to have a perfect filter; DSA and complexity analysis are a bare minimum.

> Your next project is a video encoder. Write an arithmetic coder in a 20 minute interview? Motion detection? Your next project is a chess engine. Write minimax with alpha-beta cutoff in an interview? Your next project is public key cryptography ... A distributed k/v database... Can you write a bloom filter for me in an interview? Gradient descent in some machine learning application?

Funnily enough, none of these questions is asked in a FAANG interview at all; those are very specialized questions, while the questions actually asked are from basic CS201. So I don't know why you are constructing a strawman and demolishing it.

FWIW, I have already built a bloom filter, minhash, and consistent hash at work, and I am a machine learning engineer. So I can implement an in-memory consistent-hash store, bloom filter, minhash, count-min sketch, and gradient descent in an interview setting comfortably.
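
(To illustrate how small some of these are: a minimal bloom filter sketch in Python. The sizing and hash choice are arbitrary, just for illustration:)

    import hashlib

    class BloomFilter:
        # m bits, k hashes derived from SHA-256; false positives are
        # possible, false negatives are not.
        def __init__(self, m=1024, k=3):
            self.m, self.k = m, k
            self.bits = bytearray(m // 8 + 1)

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))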

For motion detection, a 2-dimensional derivative function should work; a simple delta between 2 images is a basic implementation. Huffman coding is not too complex to write. The basic entropy formula is pretty simple; in fact, most ML classifiers minimize cross-entropy loss, so I am very familiar with entropy. Minimax I have partly forgotten, but if the interviewer prompts me with some high-level details I should be able to do it.

I have actually been asked to derive gradient descent in a Google staff-eng-equivalent interview, and also to demonstrate that gradient descent converges to the global minimum for the single-layer perceptron [1]. Mind you, this was a specialist ML IC6+ position, not a new-college-hire loop.

[1] You need to prove that the Hessian is positive semidefinite. I did the basic setup but couldn't complete the derivation. However, I got an offer.


I know I'm being trolled but I'll try a serious answer.

Lots of people get good grades in data structure and algorithm courses where the test basically looks like these interviews. You get some problem that "asks" for some algorithm to be used, you implement it, done. Out of those people, maybe 10%-20% are going to be great programmers. IMNSHO. And I'm probably being generous. How do I know? I hire those people. Everyone I hire has great grades in those courses. And some of them, many years from now, are gonna be awesome. And I don't hire everyone that has good grades.

Most of those are exactly the people you're complaining about and they'll ace your interview. Why are you asking them a dynamic programming question and then complaining about their usage of Zookeeper or whatnot? These things have nothing to do with each other.

I didn't say those are the questions asked in FAANG interviews.

I think if ML is your domain, and you present yourself as an expert in this stuff, then technical questions along those lines are fair game. And you're right that you should know your stuff. And you should also have the ability to work in a different domain. I've written Huffman encoders a few times in my life (as far as I can recall, never in a work setting) and I could maybe write one in an interview. But I can't write an arithmetic coder in an interview without looking it up. Can you? I mean, what exactly is the point here? Sure, the stuff you do day in and day out, you should demonstrate that you're able to do. The stuff that you don't do, you should demonstrate that you're able to recognize as another domain, research, and then do.

This is a little bit like comparing PhDs to people without PhDs. If you're a machine learning PhD you will know a ton about the domain (and you're also forced to learn a ton about some adjacent domains). Are you a better programmer or software engineer? I don't think so. It's like comparing a mechanical engineer to a Physics PhD; these guys are not interchangeable. CS PhDs, those guys with the knowledge you seem to think is so important at the top of their mental stack, tend to build those terrible pieces of software you're complaining about (obviously I can't generalize; some are also awesome engineers, just like some Physics PhDs might be great at machine design). Don't get me wrong, I have the utmost respect for PhDs and I've worked with some brilliant scientists in different domains. I don't want most of them to write software ;)


First off, you are going off on a tangent with questions that are never asked in a FAANG junior engineer interview. I should have just ignored that tangent, but whatever.

Getting back to the point, the FAANG interviews focus on basic graph algorithms etc. Getting hired at a FAANG doesn't imply you are a great engineer; that's not something you can figure out in 5 interview loops. It means that you have the tools to understand basic CS201 concepts and are not the "I will look up the algorithm when I need it" engineer, which is absolutely a recipe for disaster.

And these algorithms are very relevant. Apart from topological sorts, bloom filters, etc., I have also used dynamic programming on the job, to tokenize product titles before indexing, minimizing the entropy of the final inferred tokenization of a product. I can't even begin to imagine working with an engineer who can't understand that a correct naive tokenization is exponential, that the standard trick for such a problem is DP, etc. These are basics; they don't mean you are a great engineer. They mean that you have the basics covered and don't need to be hand-held through the implementation details of systems. It's the difference between telling an engineer "we use DP to minimize the tokenization entropy" and sitting with him/her for an hour walking him through every step, like it is some kind of magic. The standard FAANG interview doesn't even cover probability that well, so they have a fairly lenient expectation of the concepts engineers need to know.
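
(To show the shape of that DP: the sketch below substitutes a generic per-token cost function for the actual entropy objective, so the names and details are assumptions, not the real system:)

    def best_segmentation(s, cost):
        # Split s into tokens minimizing total cost; cost(token) returns a
        # float (float("inf") for disallowed tokens). Enumerating all
        # 2^(n-1) splits is exponential; this DP makes O(n^2) cost calls.
        n = len(s)
        best = [float("inf")] * (n + 1)
        split = [0] * (n + 1)
        best[0] = 0.0
        for i in range(1, n + 1):
            for j in range(i):
                c = best[j] + cost(s[j:i])
                if c < best[i]:
                    best[i], split[i] = c, j
        # Walk back through the recorded split points.
        tokens, i = [], n
        while i > 0:
            tokens.append(s[split[i]:i])
            i = split[i]
        return tokens[::-1]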

As for why I am complaining about ZooKeeper: because ZooKeeper is built on a Paxos-like consensus protocol, which carries an extremely high penalty for writes. A person who thinks everything is a black box, that he doesn't need to know about algorithms and will "look them up" when required, is actually never going to look up any algorithm. Like I said, he won't recognize a graph problem even when it is staring him in the face. I have given several examples of graphs, DP, etc. being used on the job. And you completely ignore the basic problem with the guy implementing lost updates with race conditions.

1. He doesn't realize this is a standard problem, covered in a FAANG system design interview.

2. He doesn't look up standard solutions and cooks up his own hacks, using a central database to record state. His hack has more race conditions.

3. I have to tell him the standard solution to this is locks.

4. He ignores my suggestion, and finds out about ZooKeeper. He thinks it's a key-value store like Cassandra. He doesn't realize that ZooKeeper is built on a Paxos-like protocol and has a very high write cost. Because he doesn't care about algorithms, everything is a black box to be glued together.

5. He judges good software engineering on the basis of object hierarchy design, design patterns, etc. Algorithms, data structures, etc. are irrelevant to the job, except for vectors and hash maps; there are API services to do everything else.

This was an engineer at a non-FAANG company. Having worked at both FAANG and non-FAANG companies, my observations:

1. FAANG engineers are generally higher quality and way faster at execution.

2. Excellent non-FAANG engineers are plentiful, but at any given non-FAANG there are fewer of them to be found. Very often, they end up at a FAANG a few years down the line.

3. FAANG hires don't necessarily have large-scale system design skills. The interview can't really pick that up. That's what the promotions and annual reviews are for.

4. The FAANG loop is a classifier, just like any other interview loop, with both type I and type II errors. So it will both reject good candidates and accept bad ones. Any criticism of FAANG interview loops that doesn't understand the concept of type I and type II errors is just emotional hyperventilation. None of the alternative proposals even care to show how they could possibly be better than the FAANG loop.


> It's true that this stuff rarely comes up and often doesn't matter for most real-world applications... but when it does, it matters a lot. The difference between O(n²), O(n) and O(log n) is huge for big values of n. Being able to go straight to a (nearly) optimal solution, instead of hacking together something, waiting for it to burst at the seams, and then having to fix it... or being able to spot someone else doing that, can save weeks or months of development at a time.

This is true, and absolutely matters. But I would suggest that developing the intuition to recognize a problem as being of a particular class, and even going further to being able to look at a proposed solution and intuit whether or not it is optimal (or optimal enough), is a different thing from calling up that optimal algorithm from your limited brainspace.

Like--if you understand the principles of what you're doing, Google isn't a much slower recall mechanism than you are, and it's less likely to have misremembered something.


Over and over, I see code written by star professionals that you've heard of, reinventing the bubble sort.

I'm guilty of it myself.

> It's a bit like saying it's silly for pilots to have to know what to do in the case of engine failure or stall from memory and regularly practice in a simulator, because that stuff rarely happens and most of the time they're just using autopilot. That's true, it usually doesn't matter, but when it does... it matters a lot.

Well, we now know what happens when pilots forget the runaway stabilizer trim procedure (turn off the stab trim).


I mean this is also what happens when pilots aren't following their checklists though.

Checklists in an emergency are the same idea as Dealer's Strategy in blackjack. If you've got nothing better, or are under so much stress you can't think, or don't have time to think, do Dealer's Strategy. But sometimes, pilots are needed to recognize when the checklist isn't going to work, and can innovate on the job. That's why we still have pilots on airplanes, as a computer can follow a checklist.

if i need a toposort in my proprietary task scheduling system, or fisher–yates shuffle in my proprietary ranking system, what do you recommend? this isn't hypothetical - i've had these things come up in the last half at work.
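
(for concreteness, the shuffle half of that is tiny once you know it. a python sketch, names made up:)

    import random

    def fisher_yates(items):
        # In-place uniform shuffle: every permutation equally likely.
        for i in range(len(items) - 1, 0, -1):
            j = random.randint(0, i)  # j drawn uniformly from 0..i inclusive
            items[i], items[j] = items[j], items[i]
        return items

the subtlety is the 0..i range; the naive "swap with any index" variant is biased, which is exactly the kind of thing worth having internalized.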


Bubble sort is really bottom of the barrel.

what was your point? that it's silly to reinvent the wheel over and over. so i gave you an example of a wheel (two) that often needs to be reinvented.

Knowing that O(n²) is a problem and that an O(n) algorithm exists matters. But why do they have to know how to write that O(n) algorithm themselves?

It's a quick, easy test of understanding, but why not show them some O(n²) code and ask "How would you speed this up?"


Friendly reminder that O(2n) = O(n)

The test isn't so much about the ability to invert a binary tree (for example), but the ability to problem solve complicated problems where the obvious solution is probably not the best one.

In interviewing, it is hard to produce real world problems that don't require a huge amount of context and details of some obscure software stack, so we go with abstract problems which are kind of irrelevant to real work, but the problem solving approaches you need to take are very relevant. Thinking about the requirements, thinking at the right level of abstraction, defensive coding, to name just a few.

Communication and whatnot are all on top of that, and yes, absolutely do need to be evaluated.

I'll also note that MAANG companies have extremely sophisticated software stacks (and multiple ones for each company!) that were built by people who were hired using that method.

If it is wrong, it's working pretty darn well.

There are many things I would change if I could, but we do have an existence proof that, in the aggregate at least, the people this interviewing method selects for can build amazing software stacks.


If we're going to use Meta for the acronym, then for God's sake can we call them MANGA?

More generally, the absurd success of most of Big Tech has not come from better technology but from extremely high margins facilitated by software. Apart from the core ranking/ads problem that M and G solve, most of it doesn't need to be that complicated.

I would venture that lots of the cool technology stuff is designed to keep engineers happy so that the money train continues.

Certainly the one I was at built a lot of cool stuff, but there was so much reinventing of the wheel that I suspect that the core business and products could have been served by a much much smaller engineering org.


According to Jim Cramer (who came up with the FAANG acronym), we should now be using MAMAA (Meta, Apple, Microsoft, Amazon and Alphabet).

https://finance.yahoo.com/news/not-faang-mamaa-jim-cramer-16...


I kinda agree with that, but MANGA is such a great acronym that I'd like to keep it.

> which are kind of irrelevant to real work, but the problem solving approaches you need to take are very relevant

It's also a common vocabulary shared across engineers, no matter their seniority or where they graduated from. And knowledge of algorithms and data structures demonstrates that someone could learn them in the first place.

Especially for junior hires, it's an excellent predictor of success.


But the ability to problem solve complicated problems where the obvious solution is probably not the best one.

Problems like designing "the most popular package manager on macOS", for example?

I think what the author finds strange is that not only do many people still believe that LC (or whiteboarding) is an effective proxy for general problem-solving ability -- they seem to tacitly believe it is a better measure than demonstrated, real-world problem-solving achievements. Even problems that are significantly more multifaceted and nuanced.

If it is wrong, it's working pretty darn well [for MANGA].

The problem is, these tests (and extensive rounds of them, with onerous time limits) are used by companies that (inasmuch as they would like to believe otherwise) definitely are not MANGA and certainly are not offering MANGA levels of compensation.


> Problems like designing "the most popular package manager on macOS", for example?

Yes, those fall through the gaps, but people with that kind of extremely strong, publicly viewable history are much more the exception than the norm.


Does it have to be not just "strong" but "extremely strong" evidence?

Not to split hairs - but it does seem to be a peculiar choice of words, there.

I just don't see why the DSA filter has come to be considered so golden that one's ability not just to demonstrate some baseline capacity for this skillset, but to positively master a sequence of concocted performative rituals around it (specifically: reciting these algorithms in front of groups of strangers at a whiteboard; "talking" large blocks of code to someone over a phone; or doing online tests under completely unrealistic time pressure) has, in some corners at least, come to categorically dominate all other selection criteria in the hiring process.

(Actually I do see why -- but it all comes down to factors not related to the intrinsic importance of these skills, or to whether the candidate can effectively do the job).


> Not to split hairs - but it does seem to be a peculiar choice of words, there.

Perhaps not; I think the "publicly viewable" part is going to be the sticking point, most of the time. Most people don't have significant/impressive side projects that they can show off to prospective employers; their work is at prior companies.

Anyway, I can see why DSA interviews are popular. IIRC studies show IQ tests and work sample tests are two of the most effective interviewing techniques, and algorithmic whiteboard coding splits the difference between the two, and does so in a way that's relatively platform/stack-agnostic. Especially useful since IQ tests themselves are of dubious legality.


Yeah, that's what I see it as -- in part a quick stand-in for a higher-end IQ test (or whatever one wants to call "innate" intellectual ability). As well as a way for the company to signal that of course they're all of very high innate intelligence themselves, and that they know how to test you for it, too (by asking you to do what amounts to a few carefully rehearsed parlor tricks).

And lastly, as a cathartic bonding ritual: "You went in on this high-pressure, slightly (or sometimes very) humiliating performance rite with us together and it looks like you've passed! Or at least it seems now you do not have completely damaged DNA. Now we are ready to forgive and accept each other as equals, or some approximation thereof."


> by asking you to do what amounts to a few carefully rehearsed parlor tricks

Nah. Parlor tricks would be the old riddle questions about manhole covers and light bulbs being warm. It's difficult to gain competency in algorithmic whiteboarding questions without being smart and at least half decent at coding. They're not perfect gateway questions, but most of the alternative suggestions I've heard here or on Reddit are worse.

> And lastly, as a cathartic bonding ritual: "You went in on this high-pressure, slightly (or sometimes very) humiliating performance rite with us together and it looks like you've passed! Or at least it seems now you do not have completely damaged DNA. Now we are ready to forgive and accept each other as equals, or some approximation thereof."

Having worked at Amazon and Google, I really don't see this. People bond over other parts of the company culture, but the entry process, not really. I guess people do talk about the painful Google recruiting process, but that's mostly how fucking long and drawn out it is, people don't seem to really mind the questions themselves overmuch.


It's difficult to gain competency in algorithmic whiteboarding questions without being smart and at least half decent at coding.

And without learning what amounts to a secondary career skill: the art of doing a specific song and dance in front of a room of often ill-prepared and apathetic strangers, who often grossly overestimate their ability to curate and conduct these sessions.

BTW "song and dance" is not meant pejoratively but refers specifically to the "art" of: talking to a room full of randoms while you are allegedly doing original thinking and coding; also, pretending that the problem is novel to you, even making fake pauses and saying "hmm" now and then to make it seem so[1] (when, per the company's instructions, you prepared assiduously for the interview, with the precise goal in mind of anticipating as many of these problems as possible); dealing with their less than helpful interruptions and basically pedantic suggestions; etc.

Which is what the whiteboard process seems to optimize for best.

[1] There are forums, mentioned in other instances of this perennial thread, where people trade suggestions in precisely this extremely valuable (by whiteboard performance art standards) skill.


This actually seems like a pretty simple problem, to be honest, where the obvious solution is the best one.

Inverting a binary tree means reversing the order of the sequence of elements that it represents. Thus inverting a binary tree is simply swapping the left and right children of every node, recursively down the tree.

You could do this with a recursive algorithm, either by modifying the tree in place or by constructing a new tree. I don't see any obvious candidate optimizations, other than that it's straightforward to do in place (e.g. using std::swap in C++, or mem::swap in Rust).
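
Concretely, the in-place version is only a few lines. A Python sketch (Node is a hypothetical minimal class, not any particular library's):

    class Node:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def invert(node):
        # Mirror the tree by swapping children at every node; O(n) time,
        # O(height) stack depth.
        if node is not None:
            node.left, node.right = invert(node.right), invert(node.left)
        return node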

However I don’t think this is as good of an interviewing question as other potential questions. See my other comment in the thread for one that I think is better. The reason is that programming tasks like this are pretty uncommon to encounter as a professional software engineer.

Better questions (IMO) are tasks that professional software engineers are likely to encounter while doing their job at the company — or representations of them, while testing knowledge of A&DS as necessary.

Even if a company does a lot of work with binary trees in particular, that company would be likely to have a number of such algorithms and data structures already implemented. If you’re working at Google then I can’t imagine that you would need to build a binary tree data structure or an algorithm for inverting them; undoubtedly these implementations already exist within their standard libraries.

Personally, I can’t recall ever encountering a binary tree in business logic while working as a professional engineer. I wouldn’t expect to unless I was building something like a collections library or a specialized solution to a large-scale problem, like implementing a novel data store of some kind.

And if I were doing that kind of work I’d break out references (e.g. Knuth as a starting point, plus any relevant recent research) and review open source software for comparison, or as a candidate for directly solving the problem (SQLite, MySQL/InnoDB, LevelDB & BDB, Apache Commons libraries, etc.).


> The reason is that programming tasks like this are pretty uncommon to encounter as a professional software engineer.

I'm not so sure. One aspect is even recognizing that the task can be done by a known algorithm. Not knowing the algorithms may make one mistakenly believe that the opportunity to apply algorithms never came up.

> I can’t imagine that you would need to build a binary tree data structure or an algorithm for inverting them

I'm sorry, but they are so trivial that it takes more work to understand the documentation for the library version than it takes to just write it yourself.

Though personally, if a candidate flunked the binary tree question, but had a very successful project, I'd take a closer look before rejecting him. For one thing, I'd want to have a look at the project's source code. I'd also want to verify he'd actually written the code for that project.


They are trivial to implement, but not desirable if you want your project to integrate with the company's existing codebase and be able to draw on a large base of preexisting algorithms.

A company like Google/Facebook(Meta)/Amazon doesn't want many redundant copies of binary tree implementations per language in their code base. In general they'd want one implementation with a suite of algorithms that can operate on it.

Maybe if your implementation is very specific to the task at hand it might make sense; otherwise I'd expect to use a binary tree via something like a sorted container interface. For example, in Java, something like java.util.TreeMap [1], which I understand is implemented as a binary tree.

Unless you were building a compiler or data store or something very specific and optimized, I see the need for building one as unlikely.

[1] https://docs.oracle.com/javase/7/docs/api/java/util/TreeMap....


On the other hand, I also like to make my modules standalone, and not have 20-30 dependencies.

I agree with you about things like "convert this string to a number", where it's too easy to get wrong (like not handling overflow) or be inefficient. But binary trees (and linked lists), like I said, are so trivial it is just not worth the bother.


I think this phenomenon is at least partly responsible for why so many devs stay in the same position and why they are difficult to find/hire.

Before a tech interview, good candidates will spend a large amount of time doing 'homework', which is just practicing answers to the asinine questions tech interviewers ask that are usually barely relevant to the position, if at all.

Equally if not more important are the candidate's experience and ability to navigate legacy code, testing philosophy, handling incomplete/inconsistent requirements, and attention to detail, among other traits. Of course these won't merit the same attention in an interview.

On the one hand, firms have difficulty finding good candidates; on the other hand, they are giving ridiculous tech interviews that aren't a great judge of what kind of employee someone will be.


>>partly responsible for why so many devs stay in the same position

That is why I am staying put for the time being; I hate the interview process, hate being asked stupid questions about solving problems that I will never have to solve in real life.

Ask me about projects I have worked on, ask me about code I have developed, ask about my experience, etc. But I am still looking for the development job where I will be required to solve the Tower of Hanoi problem...

In fact, for the current job, I got through the initial interviews and then was told I needed to take an online 'skills assessment' test to move forward. I told the interviewer (nicely): 'Forget it, I don't want the job if I have to take the test; if my resume and track record aren't enough then maybe this isn't right for me.' (Keep in mind that I have a 30-year track record; I'm not a recent college grad.)

Guess the hiring manager agreed, because they hired me anyway (I still haven't taken the test).


I have a very talented SWE friend who only left a now-public unicorn because they were recruited by a former colleague at a startup whose interview process they knew wouldn't involve leetcode.

I have another very talented SWE friend who has been doing leetcode exercises off and on for months and is struggling to start applying, even though they do not like their current work and are undercompensated.

They fear they will fail algo / data structures and yet this person has an excellent public track record of quality contributions and comms in FOSS.

From what I've observed leetcode screens are problematic enough that they are causing market inefficiency.


leetcode difficulty is vastly exaggerated. people study 4 whole years of their lives in college to increase their chances of making money; why not spend around 5 months on leetcode, which can perhaps get you much farther than anything from college?

i'm not particularly smart, in fact i struggle at work routinely, but i know what i have to do clear interviews, so i do the grind and it has worked well for me. (to a certain definition of "well")

now that i think of it, it has only worked well for me because actual talented swe's don't put effort into interviews the way people like me do.


I suspect some people want to make time but have family or other obligations.

Adding a new project that must be executed on as a prerequisite to being seriously considered for a job, when you are obviously capable, can be a hard sell.

Especially given programming challenges are not that fun for people used to delivering features.

I know that for talented SWEs, having a leetcode interview sink an otherwise promising process can feel invalidating.

Companies that rely on single algo/ds interviews also discourage potentially great employees when they totally ignore evidence of capability.

The company wants candidates to grind ds/algos, but it won't compel its own employees to look at public contributions.


People who are good at the grind are good at the grind. These are also often the people who succeed in life; intelligence is vastly overrated.

> From what I've observed leetcode screens are problematic enough that they are causing market inefficiency.

Totally agreed; I know of many similar situations. At the same time, though, the question is whether there's any other system that's better in practice.

“Capitalism is the worst system, except for all the others” etc.

You want a controlled trial or natural experiment where two large, successful software companies experiment with two different approaches to hiring over a number of years, holding all other factors equal. That's hard, but it may happen at some point in our lifetimes.


It is possible to ask non-asinine coding questions in interviews, though, and it seems to be a good idea to do so if you want to ensure that you're hiring somebody who can program a computer. Simple five-minute loop-free "write an expression" problems are effective filters.

Interesting interview questions might be:

- Download this OSS project. Explain to me where / how this API works; how deep does the call stack go? What tests exist for it?

- Write me the spec for that call. Discuss.


Why would you ask someone to do something absolutely trivial?

Why is it trivial to take an unfamiliar codebase and dive into it, hunting down calls, working through the stack to understand what a particular function does and how? This is a hugely important part of familiarising oneself with a codebase, and it leads to many useful discussions. Who should you talk to if there is a problem in a function halfway down? If I want to add a parameter to the top function, what might have to change down the stack?

I am not sure why that's trivial.


You could see it as a modern FizzBuzz. Many candidates will fail at such a triviality.

This is the #1 cause for me staying in my current position that I hate.

I've heard bitching about some of the interview questions our team asks before, but here's the thing: each of those questions is about a problem our team actually had to solve before, reformulated into an interview-style question. Yes, we've had to use Hamming distances, worry about the scaling (N log N vs N^2) of particular solutions, use error-correcting codes, interesting data structures and all of that. Is that most of the job? No, we do a lot of more boring stuff too, but the algorithms and data structures are definitely a part of it. I don't want someone who can only glue pieces together, developing novel tools to solve the problem is important too.
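
(For anyone unfamiliar: the Hamming distance between two values is the number of bit positions at which they differ. A minimal Java sketch for 32-bit ints — the class name is mine:)

    class Hamming {
        // XOR leaves a 1 bit wherever the inputs differ; count those bits.
        static int distance(int a, int b) {
            return Integer.bitCount(a ^ b);
        }
    }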

One needs to be especially careful with adapting experience to interview questions. Unlike the candidate, you probably were already immersed in the problem domain for months or years and have good context in what techniques exist and are relevant to the problem domain.

One mistake some interviewers make is implicitly assuming that candidates can somehow conjure the same level of context from first principles, or that a specific algorithm might be familiar or reusable outside of its original context. Another mistake is "looks-like-me" bias.

For example, I happen to have a lot of context on a very specific algorithm that underlies basically every modern web framework but if I wanted to evaluate a candidate on web performance, I'd look at performance optimization as an open-ended problem domain rather than drilling them on the particulars of this specific algorithm. In fact, out in the world of web framework performance, the most novel advancements come not from revisiting the algorithms but from looking at the problem domain from entirely new angles that had not even been considered before.


Every time I feel "this would be a good interview question!" it is not. It's usually something juicy that I chewed on for two days. How can I expect a candidate to solve it in 45 minutes?

I've had that same feeling on that side of the table, too. Like some of the stuff I've run into is really cool and really rewarding and is really tempting because it would make an interview something more fun for me. Because I know the answer.

Over time I've learned that I'd kind of rather lean a little more towards the easier side than the harder for writing code during an interview, because the interview is unpredictably stressful. But at the same time, as prioritizing communication and a degree of thoughtfulness has become more important (which has ended up with me bopping over to a devex job where I am now), I've leaned more heavily on "let's talk through XYZ and suss out how you discursively approach the problem" types of interviews. Which definitely selects for a particular audience, but it's one more useful for the roles I've hired for.


I bet you have. But how long did you have to solve it? Did you have access to the internet, talking out ideas with your colleagues, coffees? I agree that you can’t simulate a real scenario in an interview, but you can also acknowledge that the process is a little cartoonish.

Interviewees are encouraged to use Google, code on a laptop rather than on a whiteboard, etc. Sadly, I only get an hour to interview people rather than a week. But I have had people complain about "completely unrealistic problems unrelated to the real work" when the problem is something I literally had to solve 3 months ago.

To be fair, you could just give them the question ahead of time. Email them 3 days before their interview with the question and say see you soon. That is much closer to actual conditions

That sounds far more reasonable.

> each of those questions is about a problem our team actually had to solve before

On my last job search I had one interviewer state (very proudly) the systems design question I was being asked was an actual problem his team had to solve. I don't doubt the veracity of his claim at all, but it probably wasn't solved by a single person under the time constraints and pressure of an interview.

Most likely someone on that team spent hours or days researching and designing potential solutions before drafting a design document that was shared and discussed with others, perhaps informally or perhaps in a meeting (or over the course of multiple meetings) where tradeoffs were considered among people with deep knowledge of the existing system and problem space. Expecting a candidate with only superficial (at best) knowledge of your current system to come up with the same or similar solution on their own in 30 or 40 minutes seems a bit unrealistic to me.


The context is more like this: we regularly have internal brainstorming sessions when we run into an interesting or tricky problem, to come up with ideas on how to solve it.

So in the context of an interview, I'm trying to treat the interviewee like a colleague who I'm coming to with a problem I'm having, so we can come up with a solution together. That often involves drawing things out on a whiteboard: not code, but more diagrams to describe the problem. Then we come up with ideas on how to do it, under various constraints that I share.

Usually I have in my pocket 2-3 different approaches that we tried when we did it ourselves, and I'm looking for: can you understand the tradeoffs between these different approaches, do you understand how they work, and are you capable of implementing them to test and cross-compare them?


And then you hire the ones that can properly treat a formal meeting with massive power differential just like a casual chat with equals! Smart.

Is your team hiring? Sounds like fun.

What kind of stuff does your team work on? Seems like you have to solve a lot of diverse problems.

Scientific computing. We write software to do domain-specific scientific analysis on customers' machines. So we have a diverse set of needs of the specific analyses to do: sometimes we just use algorithms out in the open literature, sometimes we have to develop them ourselves. We've also had substantial work in devops, because we have to release software packages that not-very-savvy academic users can deploy and run on their own wildly-varying machines.

It's interesting work.


The post arrives at the wrong conclusion.

DSA style problems are good if used correctly.

Give a candidate a good challenge, preferably with some requirements that can have different interpretations, ask them questions, help them and collaborate with them.

The analysis of the whole exercise should be your hiring determination not whether the problem was actually solved.

Did the candidate:

- stick with the problem?

- ask meaningful questions about requirements?

- ask for help from a senior when stuck?

- explore different avenues to solve the problem?

- handle criticism?

Of course it is great if they solve the problem, but that often only shows you that they have memorized algorithms. (We have the internet these days.)


In reality though, you learn how to solve a bunch of categories of problems on your own time, then put on a show where you go from the naive solution to the correct one. It's not as if someone is going to figure out dynamic programming during the interview, and once you know those concepts it is pretty apparent where to apply them. Honestly, I probably wouldn't even come up with the trick for the 3sum problem during an interview. Maybe I am dumb or maybe this is common, but the only reason I pass is because I've done the leetcode grind. The interview is just performative.
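
(For context, the 3sum "trick" referred to is the well-known sort-plus-two-pointers approach, which drops the problem from O(n^3) to O(n^2). A Java sketch, simplified to just report whether a zero-sum triple exists — the names are mine:)

    import java.util.Arrays;

    class ThreeSum {
        // True if any three entries sum to zero.
        static boolean hasZeroTriple(int[] nums) {
            Arrays.sort(nums);                    // O(n log n)
            for (int i = 0; i < nums.length - 2; i++) {
                int lo = i + 1, hi = nums.length - 1;
                while (lo < hi) {                 // two-pointer scan, O(n) per i
                    int sum = nums[i] + nums[lo] + nums[hi];
                    if (sum == 0) return true;
                    if (sum < 0) lo++; else hi--;
                }
            }
            return false;
        }
    }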

I agree with everything you said. The best interviews I've had are those where I felt like I was working together with the interviewer.

I think being able to handle criticism is an important trait to look out for. Many times interviewees will defend their method too aggressively.

But if you have to pick between two people who are equally pleasant to work with, but one was able to solve the problem, you'll probably pick the one who solved the problem. When the competition is fierce, the candidate would probably still have to spend some time practicing on these problems.


I interview candidates at a company with a relatively high volume of interviews and I use DSA criteria in my interviews. But here's what I do:

- I phrase the question in a way that has some semblance of day-to-day relevance. That is to say that at some point in the process of coming up with a solution, the ability to apply a relevant data structure will come up, but it will be in service of an end goal that looks like the deliverable of a sprint task.

- I come into the interview aware of multiple solutions and I am open to any of them.

- I pace feedback so that the candidate actually solves the problem by the end of the interview, no matter their level (which does mean, in some cases, literally spelling out the step needed to unblock themselves).

The rationale is that solving a hard DSA question doesn't give me all that much signal in and of itself. Watching a candidate bang out something with a level of complexity a little higher than fizz buzz is usually sufficient to evaluate whether the candidate has familiarity with the language. The choice of idioms and APIs can tell me things about their relative level of expertise with the stack (i.e. it can generally be safely assumed that an already employed candidate can hold their own, and the question for me is more along the lines of "to which extent").

During the course of an interview, I can usually pick up a distinct and noticeable difference in focus between candidates, especially surrounding topics related to proactiveness/curiosity (e.g. does the candidate have understanding of aspects one abstraction level lower than the API they usually use, are they aware of well known pros and cons of some specific idiom, does their argumentation seem derived from personal experience vs parroted from a hivemind, etc). This tends to correlate surprisingly accurately to how much autonomy and growth they demonstrate on the job.

"Hardcore" DSA evaluation only really comes in as a criteria to determine whether the candidate is of very high quality when most other criteria have already been evaluated as acceptable/desirable. These nice-to-have criteria come into play in some cases where I want to advocate for the candidate when the evaluation panel is split due to one seemingly bad session (possibly due to factors such as nervousness or mixed signals), or inversely when the role logically demands a higher bar but the panel is situationally incentivized to hire down to meet a quota.

I've been told by several candidates that they appreciate my interviewing style, and conversely, I feel like I get a much better feel for the candidate than strictly evaluating DSA skills and nothing else.


What is unique about "DSA style problems" that makes it so that only through them can you see if candidates stuck with the problem, asked meaningful questions, etc.?

Why not ask them real questions about the real sorts of things they're going to do, and through those see if they stuck with the problem, asked meaningful questions, etc.?

DSA problems don't bring anything to the table here, because it's trivial to substitute them with something meaningful. I'd even go so far as to say that we work bizarrely hard at asking uselessly inappropriate questions. It is easier to ask questions related to the real job.


The meaningful question for most DSA problems is "does my standard library already have an algorithm for this?" If no, the proper follow-up is "okay, I need to head to my library for a few hours to research whether there is a known answer that I can copy - will I be given time to propose the addition of the algorithm to my language's standard library so nobody else needs to do this again?" I don't want someone to write an algorithm in a couple hours, I want someone to write the one implementation that is correct.

Now sometimes there are compromises in algorithms (I can think of 3 different n log n sorting algorithms and I think I've forgotten a dozen more) and so the company has a proprietary library with an implementation that makes different compromises. They should have benchmarks and pros and cons written down (and ideally be open source) so that anytime someone asks about the algorithm they can point out why they have their own, and possibly contribute it to the standard library if everyone agrees to a different compromise.

These days the vast majority of the simple algorithms have already been written. I might need someone to design a more complex algorithm, but those either take months of thought or are just a sequence of simple algorithms.


Most of the substitutes you might want to pick are specific to a particular domain (webdev, for example). If you run a big company, you want to give a generic interview that gives you signal on a candidate's fit with a wide variety of teams that touch many domains. So you abstract away the domain-specific parts of the task until you have something that is broad enough to apply to many subdomains... and now you're doing DSA interviews, because DSA is exactly the bits of coding that are generic, abstract, and applicable to many domains.

Just as in programming, if you've abstracted away to the point that you're no longer even doing the task you started with, you've failed.

I don't care that you really really want a solution to P = NP; you can't have it. If you try to force it, you will pay the consequences.

I don't care that you really really want a single standardized interview for what is essentially several dozen distinct positions, even if they all "use computers". You can't have it. If you try to force it, you will pay the consequences.

If you suggested that companies should have completely standardized interviews for salespeople, lawyers, and executives, all because they "talk to people", you'd be laughed at. Change it to "talking to computers" and they lose the thread.


In the physical world there are folks who design bridges, dams, buildings, telescopes, furniture, and a million other things. All of those have unique requirements and skill sets. In software, we also make a huge number of categorically different systems each with its own skill set, but we call all of it 'software engineering.'

There are folks who are great at their job (called "software engineering") and their work never invokes their knowledge of algorithms. There are also folks who spend day in and day out tuning and designing new systems and deep knowledge of algorithms is essential (we also call this "software engineering").


I completely agree, we need better compartmentalization/specialization and a better way to target specific skillsets and experiences. Not knowing how to hire the right people is a source of so many problems in the industry.

20 years ago, all you had to do was get a "certification" in some technology and you'd immediately be hired to work on it. That led to a lot of really bad people getting hired because all they did was learn how to pass a test. Today, hiring in tech doesn't require any piece of paper at all. But that means that instead of relying on some standard format to prove your baseline body of knowledge, the interview process now has to do it ad-hoc, and it never seems to fit the role's actual requirements.

And nowhere in the industry have we ever required training on how to actually do a job. Does the candidate know what an SDLC is? Do they write ADRs? Have they juggled multiple changes in flight on a team with large codebases? Do they have a solid grasp of the strange subtle quirks of their tech? Have they learned to be judicious in their decisions and weigh the many long-term pros and cons? Have they ever developed any project with a team?

Trades typically require an industry board to certify them, and then often require years of apprenticeship under journeymen or masters. I think these two would go a long way towards leveling the incredible amounts of variation in candidates and eliminate these ridiculously ill-suited interviews.


I've spent a lot of time fixing or working around messes caused by people who don't know algorithms and data structures. If you don't know about those problems then you're likely to solve one without recognizing it and do a poor job of it. Worse, if you don't have data structures and algorithms training you're unlikely to be able to reason about time and space complexity.

Unlike at startup or enterprise dev shops, at the scale of FAANG, those sorts of problems (e.g. turning n into n^2 and the like) at the backend are very likely going to cause crashes or something essential to time out. If you haven't worked on a service with 5-10K or more servers you aren't likely to really get how quickly and how frequently things are going to go bad if you aren't very careful.

There aren't many, if any, engineers designing bridges who haven't had the entire curriculum. If you want to practice independently as an engineer then you have to get a license, which involves taking a hard test that recaps your entire undergrad curriculum and maybe more. I peeked over my brother's shoulder when he was studying for the EIT. Not easy to pass. And like doctors and lawyers they have to keep up with their professions to renew their licenses. So, no.


> I've spent a lot of time fixing or working around messes caused by people who don't know algorithms and data structures.

That kind of mess is rare these days, unless the entire department is off kilter. Contemporary practice is to team up, and be in constant communication about the code base, requirements, approaches. Someone in the team or adjacent will know the proper approach and be able to communicate it to the others.


> Contemporary practice is to team up, and be in constant communication about the code base, requirements, approaches. Someone in the team or adjacent will know the proper approach and be able to communicate it to the others.

Or you hire a 10x like John Carmack and he codes the thing. No need to constantly have meetings and waste everyone's bandwidth.

That, of course, assumes the department understands what a 10x is and how to attract them.


Another problem nobody wants to publicly acknowledge is that unlike in law, medicine or even classical engineering, everyone can call himself a Software Engineer!

So you end up with 6 month coding bootcamp "graduates" claiming to be "engineers" alongside folks who graduated from long and hard engineering programs (MIT MEng comes to mind) and who had to complete things like this [0] as assignments.

I recall someone from a bootcamp writing a cascade of nested if-else statements, 6 levels deep in some places. Then someone with a real CS background told him that he was basically building a finite state machine, to which the other dev responded that “he didn’t need fancy things, just for the function to work”.

That's the main reason companies use algorithmic questions when filtering out candidates (because it checks for an understanding of the fundamentals, something bootcamp grads will lack and that will take years to acquire) and why every job interview has questions like these (there's no certification process, anyone can claim the title, so the company must do the vetting every time).

[0] https://www.cs.cmu.edu/afs/cs/academic/class/15213-f10/www/l...


As a previous CTO and hiring manager, we did do trials, and it worked great. At the time I found an article showing that traditional interviews resulted in an 18% true-positive expectation of how they would perform, while a week-long trial got you to something like 73%.

We paid them for the week like a contractor, and made an evaluation at the end of the week. It weeded out people who were great at code but bad culturally, and surfaced people who worked great.

Sometimes we couldn't get a whole week of commitment, just a few days -- but even those days made a huge difference.

Now, I work at a FAANG company and _hate_ the interview process. Give me 3 months and I think I could teach just about any CS grad how to pass it (without being specific to any particular FAANG company), but that doesn't mean they would be good engineers.

They are loosely related but different skills.


  > but bad culturally
Like people who are already employed?

We did a trial week with someone who turned out to be technically excellent, but the feedback I got from existing employees who worked with the candidate was that they really didn't like the candidate (so we passed). This didn't come out in the culture screen we did prior to the week.

Data structures and algorithms questions are a perfect proxy for (1) is the candidate smart and (2) can the candidate write moderately complex code (arrays, hash maps, pointers, nested loops). 90% of candidates will fail 2, and you will get a good idea of the rest with 1.

Unless your company can afford a month-long interview process for every candidate which the author suggests, this is the best we have.


> Unless your company can afford a month-long interview process for every candidate which the author suggests, this is the best we have.

If the current leetcode style of interviewing is the best that SV can come up with, despite having some of the best engineers and thinkers in the world and billions of dollars to spend, that's really sad.


Why is this sad? Standardized IQ tests are notoriously difficult to get right and may be legally questionable (Griggs v. Duke Power Co.).

At some point you have to accept that there is no ideal way to interview, and that there are fundamental time/precision/recall trade offs. Big tech optimizes for low time and high precision, which means you certainly will end up with low recall (false negatives).

I’m sure there are improvements to be made (and maybe the specific leetcode style is not ideal), but I don’t think the billions of dollars is relevant. It’s like saying it’s sad that tech companies haven’t improved on O(n log(n)) sorting despite their billions — it’s not possible.


> Standardized IQ tests are notoriously difficult to get right and may be legally questionable (Griggs v. Duke Power Co.).

Griggs applies to any hiring criteria or practice that has a negative impact on a protected class compared to people outside the class (given the set of protected classes, this is essentially any hiring criteria or practice) without sufficient evidence of probative value on job performance. It is neither limited to things very much like IQ tests, nor are IQ tests any harder to justify than any other element of a hiring process.


The point there is standardized. If you apply a standardized test, then it becomes feasible to measure and argue the disparate impact.

For all their failings, leetcode-style interviews are probably less culturally-sensitive than standardized IQ tests. They probably do have (unintentional) disparate impact, but this seems like a really hard thing to measure (and possibly correct for).


> If you apply a standardized test, then it becomes feasible to measure and argue the disparate impact.

Yes, the existence of readily available statistics may make the unequal impact easier to show.

> For all their failings, leetcode-style interviews are probably less culturally-sensitive than standardized IQ tests.

“Culturally sensitive”, maybe, but they almost certainly have quite large unequal impacts adverse to protected classes (including “age, if over 40”), though the absence of external data raises the cost of proving the unequal impact. Also, certainly less demonstrably predictive of job performance in software development.

> They probably do have (unintentional) disparate impact

Probably, that, as well, but I don't think the age discrimination function is largely unintentional to start with.


You are committing a corollary to the green lumber fallacy described by the article.

Just because code challenges are not relevant to job duties doesn’t mean the results are irrelevant to job performance. They are a proxy intelligence test. General intelligence is the best predictor of success in almost any role (not just software engineering).

This is what confounds people. They think the interviews are designed that way because they are supposed to be representative of the job — I don’t believe that’s the case. They are that way because they provide a strongly correlated signal of performance after hire, and big tech has decades of data for all different interview types. I’m very confident that if they had a better interview circuit that could be done in a ~day, they would be doing that. Obviously even at big tech, referrals and recommendations count for a lot.

While I don’t like the low recall, I do think that “invert this binary tree” probably has less bias than a quiz on technology, or a design conversation (that seems way more susceptible). Perhaps it has a bias for a particular kind of computer science education and thinking, but at least that’s not a protected class. I’m not seeing the age connection, but I could imagine e.g. a computer science education in different countries emphasizing different skills over years (and leading to some candidates with a leg up on the “tests”).


> They think the interviews are designed that way because they are supposed to be representative of the job — I don’t believe that’s the case.

This may be the case. But even if it does act as an effective proxy--and I'm not sure it does; my worst yeses in interview loops have been very adroit programmers who I passed against my better judgment when my "not sure I want to be around this person every day" bells were ringing subtly in the back of my mind--then it has a different problem. You've now set expectations with the interviewee that oh yeah, we do hard stuff here. Then they go frob knobs or write frontend stuff all day.

(This actually happened to me at my first job. I didn't know any better, of course. The hiring manager pumped up my tires with all the difficult scaling work, etcetera etcetera. Then I was writing HTML into templates for the first six months I was there because they needed a body to plug into the role.)


I don't doubt anything about what you're saying, but I'm not sure what argument you're making. Yes, false negatives exist. Will another kind of interview have better rates of false negatives? At the big tech firm where I work, there are many different kinds of interviews, and HR definitely has all the data about what correlates with job performance. I'm not saying coding-style interviews are perfect or even good, but there has got to be a reason they still do them.

As humans with a strong confirmation bias, it is extremely difficult to tell what's behind your feelings for those worst yeses. It could have been the case that the candidate had red flags, you saw them, but couldn't articulate them. Or it could have just as easily been the case that the candidate had a cultural and/or communication style that was different from your own, and they also happened to perform poorly after being hired. It's important to remember that no interview process is going to yield perfect results: there will be false negatives and false positives, you can only move the trade-offs while simultaneously ensuring that you're avoiding any conscious or unconscious discrimination against protected class to the fullest extent possible. That's a hard problem to solve.


Your sadness is not actionable. It will remain status quo until someone thinks of something better, regardless of ambient sadness levels.

Silicon valley seems to be doing just fine with this system in place. If there is a better process, let's formalize and test it. Here's a hint - large companies have spent millions/billions of dollars over the years doing just that, and this is what the results show.

I bet they have spent a lot of money on it. How about those folks at Google that, when they were unknowingly reviewing their own packets from when they originally interviewed, decided that they wouldn't even hire themselves?

Google also found that interview performance isn't a good predictor of on-the-job performance.

Sorry, but it's hard to take this style of interview seriously after hearing about these things.


> Google also found that interview performance isn't a good predictor of on-the-job performance.

People repeat things like this without really understanding statistics and priors and what that assertion really means. Assuming that statement is true, it applies only to Googlers who have been hired by the hiring process, i.e. if you are beyond the cutoff threshold of the interview process, the ranking within that subset is not determined by the interview performance. That in no way implies that among all the interviewees, including the rejects, the job performance would not have been correlated with their interview performance, had they been hired, hypothetically.


Why not ask the candidate to solve a problem of the type that they are likely to encounter while actually working at your company: the kind of task that you would ask them to perform on day one?

Inverting a binary tree is not the kind of task that a software engineer is likely to have to perform at most companies. I think companies can come up with better questions that still involve algorithms and data structures but better correspond to problems that a professional software engineer might actually need to solve.

For example, to repeat a comment I made elsewhere in this thread, ask them to serialize a binary tree to a byte stream, and deserialize the same tree from a byte stream. There are plenty of edge cases in this problem; you learn whether they understand a data structure (binary tree) and how to implement it, as well as how to write algorithms that operate on it (potentially a recursive descent on the tree; potentially a visitor, depending on how the candidate designed the tree).

This also isn’t a problem I’d directly expect to solve during regular work, but it’s useful and close enough: serializing/deserializing objects (data structures) is something I’ve had to do in reality plenty of times. (Even if you’re using an RPC library or IDL, you often have to “serialize“ in the sense of translating into the business object into one generated by the IDL or RPC client)
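
(A sketch of one way to do it in Java — a preorder walk with explicit null markers, which naturally covers the empty-tree and single-child edge cases; the Node shape and names are my own framing, not the interviewer's:)

    import java.io.*;

    class TreeCodec {
        static class Node {
            int value;
            Node left, right;
            Node(int value) { this.value = value; }
        }

        // Preorder walk: a 0 byte marks a null child, 1 marks a present node.
        static void serialize(Node n, DataOutputStream out) throws IOException {
            if (n == null) { out.writeByte(0); return; }
            out.writeByte(1);
            out.writeInt(n.value);
            serialize(n.left, out);
            serialize(n.right, out);
        }

        // Reads back exactly what serialize wrote, rebuilding the same shape.
        static Node deserialize(DataInputStream in) throws IOException {
            if (in.readByte() == 0) return null;
            Node n = new Node(in.readInt());
            n.left = deserialize(in);
            n.right = deserialize(in);
            return n;
        }
    }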


"Invert a binary tree" is something that keeps coming up in discussions like this one but isn't a real question that is often asked. Some of the questions I have personally come across in coding interviews with large companies (Google, Uber, Square, Twitter, LinkedIn):

- Implement an LRU cache (see the sketch after this list)

- Implement a rate limiter

- Read and process a very large file line by line

- Combine two sorted streams

- Calculate the sum of a specific section of a MxN grid

- Parse and solve a math equation

- Find the nth most common word in a blob of text

These are all pretty realistic representations of what a software engineer can be expected to do day to day.
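
(To calibrate the difficulty of the first one: in Java an LRU cache has a famously short answer via LinkedHashMap's access order and eviction hook. A sketch — the class name and capacity framing are mine:)

    import java.util.LinkedHashMap;
    import java.util.Map;

    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LruCache(int capacity) {
            super(16, 0.75f, true); // accessOrder=true: get() moves entries to the back
            this.capacity = capacity;
        }

        // Called on put(); evicts the least-recently-used entry once over capacity.
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }

Whether an interviewer accepts the library version or wants the hash-map-plus-doubly-linked-list version built by hand is, of course, part of what these threads are arguing about.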


Bingo. Smart people understand the domain of interview questions (which we know is contrived) and typically can study enough to implement the solutions. It's not always about getting the question right - it's about demonstrating you are smart, but also these questions are core CS fundamentals, so it also tests that they have a formal education. Anyone can learn Git or JS, but how many can apply a programming language to a complex problem on the spot?

Data structures and algorithms range from important to very important in software engineering, and I think it's quite right to include questions on them in interviews.

But that's not the same as knowing by heart how to implement non-trivial algorithms. There's the green lumber fallacy.

To have a feel for the 'right' data structures and algorithms to use for specific problems is a key skill. But then the detailed algorithm is only one Google away these days, although I would expect someone to be able to derive the simple ones themselves.


Data structures and algorithms are just parts, like screws and nails. When you're building a house, you don't need to remember if you should use 2 18-penny nails or 3 16-penny nails, or if you can use screws, and what kinds. What matters is that you know how to go look up the building code, find out what's required, use those parts, know how to verify they're installed correctly, and know the consequences to doing it wrong.

Once you've built a dozen houses you may remember these things implicitly, but you never need to remember them. You do need to know why they matter, how to look them up, and how to adapt them to your particular job.


This is not a good analogy, as it seems to go too far into the details.

If you're a builder I might expect you to know when to use a steel beam rather than timber even if you don't know by heart what grade and size for any random load (I don't know anything about building but that feels like the right level).

Likewise, if you're a software engineer I expect you to know what's a hash table, a binary tree, a linked list, etc., what are their pros and cons, and when you might want to use each of them. But in general I don't expect you to be able code a tree inversion off the top of your head. Obviously expectation of detailed and specific knowledge has to depend on previous experience and role.


I think we're in agreement. My point is that there's certain questions that certain roles shouldn't be expected to have the expertise to answer, and certain answers that make no difference to the job being hired for.

Say you need a builder for a house that an architect designed with wooden beams. Then you interview the builder, and ask them what kind of beam they should use for a theoretical house, and then ask them all sorts of questions about steel beams. Well they don't have the expertise to answer the first question, and the second question doesn't apply to this job at all!

Of course there's a body of knowledge that needs to be known. But I think we need to better codify what knowledge is needed by which people to do what kinds of work. And then have a way to establish that they know how to 'do the work', versus just having a body of knowledge.


We are still in the early days of the profession, and standard libraries are pretty small. Software sometimes requires fabricating parts that don’t exist, at least not on your platform or for your use case. None of us do it every day, but it’s tough to use people who can’t ever do it.

This! Data structures obviously do matter for software engineering. But memorizing the solutions to a handful of random, difficult, data structure problems isn't very useful for a real job and shouldn't be part of the hiring process.

> I’d take working with someone who’s great at naming functions and variables over someone who can code a solution to the knapsack problem.

totally agree


> but the best solution is to have trial work periods. There’s no better way to see how someone performs at the job than having them actually do the job.

> I agree trial work periods may not scale

Great to see the author uses the Green Lumber fallacy to argue against leetcode-style interviews. Now I'm going to guess he also must have skin in the game, otherwise he would only be an empty suit doing armchair recruiting.

Let's say we want to do work trial periods. I tell you what actually happened to us: we opened an internship position and we had ~1600 applicants. Since having 1600 trial periods is impossible, we needed a way to narrow this down to a manageable number. Congratulations, now you've moved the problem of "who do we hire" to "who do we invite for a trial period".

The software engineering interview isn't perfect, but sadly trial periods are not the answer.


I still don't get how OP (and every person that agrees with him) seems to conveniently avoid tackling all the logistics of their proposed solution.

Personally, I dislike competitive programming interview problems. Competitive programming, when viewed as a sport, requires training and practice. It's a muscle that you must constantly train. If you stop for a few months, it may take you a few days or weeks to get back to the previous level.

So it's always frustrating to find a new job because I know I'll have to spend weeks practicing on problems that will have no relevance to the posts I'm applying to. To me, it's a waste of time.

I think companies that use competitive programming problems (that are irrelevant to their engineers' day-to-day jobs) end up hiring people who are very good at interviews and perhaps not as good at their actual jobs.

Of course, there are jobs which deal heavily with algorithms and optimizations, for which these types of interviews are relevant. But the article is not talking about those relevant cases.


While trying to read more about the actual case of the trader who didn't know what he was trading, I found [1] which contains this quote:

> The “green” in green Douglas fir refers to the fact that it has been newly cut (it has not been dried), just like someone who is new at something is referred to as green.

This was interesting/funny to me, since I was sure that the name "green lumber" comes from the fact that it is green if you look at it (being plant material which was recently alive). I'm not saying a full-size tree will be green like your kitchen garden basil stalks inside, but e.g. under the bark there are green hues.

I would never have thought it came from the "green = n00b" connection. Am I the mistaken one, now?

[1] https://with.thegra.in/green-lumber


It's the other way around: the "green = new" is derived from the usage of green timber, where newly cut timber is literally green.

See also "green cheese". "Green" has metaphorically meant "fresh, new, immature" in English for a long time. I don't think that quote is claiming that it _came from_ "green = n00b" but that it shares the same source.

    "but software development is not data structures and algorithms"  
Now I know why there is so much bad software out there... you just gutted the core of what SD is.

The biggest problem for me is that this style of interview focuses on theoretical efficiency. And the discussions about the interview process only care about "real world engineering" vs "interview questions". We are discussing algorithms and data structures. What about actual performance? Your code will run on real hardware. Are we interviewing considering that?

Do you know what memory alignment is? Padding? Branching? The implications in performance of a cache miss? These questions are never asked!

Two nested for loops (O(n^2)) where you considered memory alignment and your CPU cache size when choosing the data structures and defining your structs will perform better than an O(n log n) algo with a high miss rate and branching all over the place.
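
(To make that concrete: a Java 2D array is stored as an array of contiguous rows, so these two sums do identical O(n^2) work, but the second jumps between rows on every access and misses cache far more often. Actual numbers depend on the machine; a rectangular matrix is assumed:)

    class Traversal {
        // Walks each row contiguously: cache-friendly.
        static long rowMajor(int[][] m) {
            long sum = 0;
            for (int i = 0; i < m.length; i++)
                for (int j = 0; j < m[i].length; j++)
                    sum += m[i][j];
            return sum;
        }

        // Same result, same big-O, but hops across rows: cache-hostile.
        static long colMajor(int[][] m) {
            long sum = 0;
            for (int j = 0; j < m[0].length; j++)
                for (int i = 0; i < m.length; i++)
                    sum += m[i][j];
            return sum;
        }
    }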

In my day to day I work on a garbage collected language (which I think are great, don't get me wrong) and I'm tired of seeing programmers thinking that memory is free, GC is free, syscalls and networking are magic...

Software engineering culture is broken in general. From the education phase to the "real world".


So a lot of the more complicated DS&A questions seem ridiculous, but I had to check on "inverting a binary tree" to see if it was really as simple as it sounded. It is. Just swap left and right, recursively. It's one of the easiest possible tests of being able to grok recursion[0]. You should be able to do this, even if you're self-taught or early in a college degree program.
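
(The whole thing, as a recursive Java sketch — the Node shape is assumed:)

    class Inverter {
        static class Node {
            Node left, right;
        }

        // Swap the children at every node, recursively.
        static Node invert(Node n) {
            if (n == null) return null;
            Node tmp = n.left;
            n.left = invert(n.right);
            n.right = invert(tmp);
            return n;
        }
    }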

Am I missing something? The only excuse I can imagine is interview anxiety, which is a real problem that deserves accommodation (bad explanations of the problem may play a role too, but you should be able to ask good clarifying questions). But if someone is really incapable of this very minor feat of programming, then I'm sorry but an interview process is right to reject them.

[0] Even if you don't literally write recursive Java methods, which you probably shouldn't, understanding recursive logic is critical all over the tech stack.


I agree with this. But asking the candidate to invert a binary tree is one thing and asking the candidate to flawlessly solve back to back leetcode "hard" level problems is another. I think the latter is mostly what people lament, where you end up doing so much prep that it boils down to "I've seen this one before" and you proceed to regurgitate what you memorized.

My point is that people lament the latter while using the former as an example. This makes everyone dumber.

I recently went through two interview loops at big tech companies and the DSA questions are not very hard. The typical question starts super easy and then layers on more complexity as time and skill allows. If you are nervous about interviewing because of that, just try a few "easy" leetcode questions or the first ~5 days of advent of code, it really is about that level. Don't be intimidated, you do NOT have to be perfect to get an offer. If you know what a priority queue/heap is you will be fine. Yes practice, but don't slack on preparing for the 1-2 system design interviews and 1-2 behavioral interviews (for senior candidates). System design especially I personally think is way harder because it's (typically) candidate driven and there's no fixed finish line, just tradeoffs and more tradeoffs.
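
(As an example of that level: "combine two sorted streams" from the list upthread falls straight out of knowing what a heap is. A Java sketch that merges k sorted iterators — the names are mine:)

    import java.util.*;

    class MergeStreams {
        // Repeatedly emit the smallest head among the streams.
        static List<Integer> merge(List<Iterator<Integer>> streams) {
            PriorityQueue<int[]> heap =  // each entry is {value, stream index}
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
            for (int i = 0; i < streams.size(); i++)
                if (streams.get(i).hasNext())
                    heap.add(new int[] { streams.get(i).next(), i });

            List<Integer> out = new ArrayList<>();
            while (!heap.isEmpty()) {
                int[] head = heap.poll();
                out.add(head[0]);
                Iterator<Integer> s = streams.get(head[1]);
                if (s.hasNext()) heap.add(new int[] { s.next(), head[1] });
            }
            return out;
        }
    }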

A lot of it is that building successful software requires product knowledge.

I have never used that in my career as a dev (despite being hired for it once). Others hand that to me. I’ve also never gotten any credit for whether customers like what was built.

Homebrew is a great product. But in a tech firm, the product manager gets the credit for its greatness.


Excessively complex DSA questions are not very useful for me as an interviewer. I prefer to offer a fairly simple test problem that has room for elaboration. For example, when evaluating SQL skills, looking at how the candidate would model the mapping between companies and stock ticker symbols is instructive. You can start with a simple flat table, extend to one-to-many relationships, discuss when normalization and denormalization make sense. You can keep elaborating this, too--add in multiple exchanges to get many-to-many, and so forth. This lets you see how the candidate evolves a design in the face of changing requirements, which is critical.

These kinds of questions are great for leveling an applicant. How a junior dev answers these questions will be very different from what a senior dev has to say.


This sounds like the system design round at many companies.

I am a frontend web developer and I still get these types of questions. It's incredibly stupid. I don't think I've passed any of them, yet I'm still making 250k, so whatever. The most I have to think about data structures is deciding when to use a map, set, or array.

> solving DSA-style problems and building software are two different things

disagree. I'd say that both are needed skills for software. Software is made of layers, like a cake. Maybe DSA style problems are like grinding the flour (or making the oven itself?) and the other building skills (which, I agree, seem like different skillsets) are about decorating the cake and mixing the dough or something.

So in practice nobody is grinding their own flour; the flour is bought already processed (in programming this'd correspond to importing the algorithms from a lib).


I agree to a certain extent. It has been, in my mind, the big difference between impactful engineers and engineers that just do a lot of stuff: it seems that many people can write code, but being able to conceptualize the software at the layer of DSAs allows one to flow fluidly across multiple levels of abstraction and aspects of the problem more seamlessly.

That being said I also have serious beef with "architects" whose primary capabilities are programming in powerpoint and being professional XML developers.


> All the MAANG companies use algorithmic and system design style questions as their main metric for hiring candidates. The internal justification for doing so is likely a combination of “this is what everyone else does” and “it’s a quick way to evaluate someone’s skill”.

They argue that this is the only way to weed out all these candidates. It is MAANG's version of the SAT.

> the best solution is to have trial work periods. There’s no better way to see how someone performs at the job than having them actually do the job.

Agreed. But how do they implement this?


One amusing thing I've noticed about MAANG companies (with perhaps the exception of Google) is that despite passing the technical interview, the employees do not actually do any difficult or tricky implementations when something like that is actually needed.

I can't figure out if it's a strong aversion or an inability - the end result being the same in both cases, at best some approximation gets implemented instead, and the analysis / feature / etc just doesn't get done in the worst case.


This is highly off topic so I apologise to everyone - but this is the second time I’ve seen MAANG and it’s obvious what it is, but why has the public decided to replace the F with an M, when we haven’t replaced the G with an A?

I'm just going with what the article used. I think MANGA is another acronym and is more pleasant to write or pronounce.

Why not Alphabet - just a guess, but they haven't really pushed Alphabet branding.


> software development is not data structures and algorithms.

Software development is almost entirely about understanding what others want and then translating that into data structures and algorithms. The author could have meant that software development is not about implementing already-implemented data structures and algorithms. IMO better questions would have a candidate use DSAs rather than implement them, e.g. questions that require them to use binary search, bitsets, etc.
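
(In Java terms, that's the difference between asking someone to re-derive these and asking them to reach for the library. A sketch of the "use, don't implement" flavor — the scenarios are invented for illustration:)

    import java.util.Arrays;
    import java.util.BitSet;

    class UsingTheLibrary {
        // Use binary search, don't write it: is `id` in a sorted array?
        static boolean contains(long[] sortedIds, long id) {
            return Arrays.binarySearch(sortedIds, id) >= 0;
        }

        // Use a bitset, don't build one: mark which small ints were seen.
        static BitSet seen(int[] values, int universe) {
            BitSet bits = new BitSet(universe);
            for (int v : values) bits.set(v);
            return bits;
        }
    }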


Author wrote:

"These test-like interviews mistake the trees (data structures and algorithms) for the forest (building software)."

Should say: "interviews mistake the b-Trees for the forest"


I can't help but feel that this makes a better point about the lack of intrinsic merit in trading and, more generally, other profit-making pursuits.

I very much agree. But this is written in a somewhat "this is bad" style and misses the "you should do this, instead" part. This article contains both: https://talktotheduck.dev/debugging-the-technical-interview-...

people in software tend to be introverts who sink a lot of their life and self-worth into the job. We see the effects of this in the gatekeeping by elite software engineers who firmly believe the vast majority of degree-holding, gainfully employed software engineers are complete idiots lucky to have a job - which is ridiculous.

The other side of it is the ridiculous notion some developers have of not having to meet these standards to get the best jobs... two sides of the same coin, both rooted in an unhealthy attachment to your profession as your identity.


sure, but if performance on those types of questions is a good predictor of success in the job then I'd go on asking them. Probably it isn't, but just saying "I won't need this in my job so I won't ask it in an interview" is short-sighted.

> I’ve never heard of an ex ICPC competitor go on to build a great piece of software, yet I’m certain every ex-ICPC competitor would outperform the best software creators on interview-style questions.

Are there any great pieces of software written by ex-ICPC champions?


If interview tests were truly useful, wouldn't companies want to assign them continually to their existing employees? Constantly keeping their skills sharp, proving their competence without any confounding variables.

The leetcode style of interviewing is essentially a legal way to practice age discrimination.

When you're young and just starting your career, you typically have nothing but time, and doing programming challenges on evenings and weekends can even be fun. As you get older, life happens and you have children to take care of, older relatives to help out, a spouse who'd like to see you every so often, etc. Practicing leetcode-style problems in your 30's and 40's means less time with your family, offloading more of the childcare and housework onto your spouse, putting off household chores and repairs, and so on. So you end up with a hiring process that's incredibly hostile to older workers while being a relatively easy lift for younger workers with fewer responsibilities.


It's almost a way to see how much time and effort someone will spend preparing for something. If someone spends a lot of time preparing for a job interview, it is not a stretch to think they will spend a lot of time on work assignments.

And, as someone in their 30s with two young children, I find time for interview prep easier than at any other point in my life. As you get older you should get better at learning. Also, who doesn't have chores and jobs when they are in their early twenties? You should get better at managing your time as you get older.


> I’ve never heard of an ex ICPC competitor go on to build a great piece of software

Telegram.


Also, Craig Silverstein, 1993 ICPC winner and Google employee number one?

https://en.wikipedia.org/wiki/Craig_Silverstein


I know a guy who had won gold medal in International Olympiad in Informatics (for high schoolers) - he's now the CEO of Korea's largest food delivery company.

Vitalik Buterin did IOI I think and made Ethereum

Guy who made Quora did ICPC or IOI

GP probably deaf


Google’s first hire (Craig Silverstein) was an International Collegiate Programming Contest champion. Make of that what you will.

The real question is why do we keep doing this type of interview even though it seems everyone knows this is a problem?

Because they're not a problem?

I think "invert a binary tree on a whiteboard" was meant to be a joke, not taken literally.

This was on Twitter, after all.


Good article but

> MAANG

It's FAANG. Don't let Facebook whitewash its rightfully tainted name.


I agree with the author, mostly, though I don't have as much of a problem with the deeply technical questions he's suggesting one shy away from -- my problem is with how the responses are handled by the interviewer. And as far as the whole "give them the actual work" approach, I love it -- I do it, and I stand by it, but don't abuse the candidate's time in the process.

For the most part, I avoid the data structures-style questions. We have a limited time as interviewers and while these show off a candidate's knowledge in areas we may need very rarely, I'd rather have someone who is productive at using what the framework we write code in provides. I stick with "when is it appropriate to use 'X' vs 'Y'" style questions but usually don't have to ask them; the candidate reveals that knowledge (or lack thereof) through other questions, and I hate whiteboard interviews. I don't penalize the candidate for lacking knowledge they're never going to use on the job, but I like to know what areas they have deep understanding of. Before I ask these questions, I'm careful to point out "I don't care if you know the answer to this or not -- you might encounter that once in 10 years, here -- but if we do encounter it, it's helpful to know if you have a deeper understanding than others on the team." For the most part, though, it's a waste of time and I'd rather spend that time on the next two points.

My preference is to ask the candidate (very carefully) to provide a link to anything they've written that they can legally[0] share openly. My explanation includes all of the following: I am not interested in judging them on the code they send me -- in fact, I want their worst. I want the code that they don't care about, that they wrote in a hurry to solve some random problem that they had and that "just needed to work and do that one thing". I'll use that code as a starting point to discuss refactoring -- i.e. "What if you wanted to take this code and make it worthy of a product you'd sell". If there's absolutely nothing that they are comfortable[1] sharing, I ask them to take a few minutes and find a project in the framework we're interviewing for on GitHub that we can use for that purpose.

As for the "give them the actual work", we have a single-sheet project we give to every candidate, Junior, Mid or Senior. It's a "to do" app with basic requirements: Must use a database of some kind for storing/retrieving data (Sqlite is called out as an option, but anything relational is allowed), it must allow adding, marking complete, unmarking complete and deleting. We additional coach Senior/Mid developers to include things that would be important for a completely released application with examples (none of which are required), such as documentation, instructions for starting/setting it up, installers (with advice not to take this too far as that's one hell of a rabbit hole). Depending on the outcome of the "actual work", we might follow-up in a similar manner to ask them questions about their choices.

I've seen everything from a To Do app that used SignalR to perform its operations, allowing multiple users to watch the check-list get updated in real-time (complete with a WiX installer, OWIN hosted web service running as a Windows Service with everything wired up and a doc generation project[2]) to a simple HTML-template based solution done in four files. Of course, we check the usual sources for obvious evidence of wholesale copying, but we're not terribly strict here, either (this point is made on the spec sheet -- it's such a simple app, it shouldn't require a lot of research/copy-pasta to build).

[0] The first time I did this, I left off that "legally" part and the candidate shared code that -- while not representing anything their employer would care about (basic library code that probably every programmer has written from time to time) and certainly not representing anything the company I worked for was interested in "stealing" -- carried a copyright that they did not own. Unfortunately, it disqualified the candidate (not my choice in this case, but the company I was working for at the time had concerns that the employee might introduce liability we didn't want). And I don't want any candidate to think I'm asking them to break their contractual obligations in order to get a job -- we operate ethically and don't want to imply otherwise.

[1] I'm always careful to say "comfortable" so as to allow them to not have to feel bad about not having any code written "outside of work". I have met many software developers (albeit often in the Junior/mid-level skill levels) who simply don't care to write software outside of work. At the senior level, I've met several software developers who haven't written anything publicly shared in a while -- they're working too much at their current job, have families, and might not have the time to devote to things outside of that. Interviewing sucks and anything I can do to disarm the person and get them talking/excited will help me to evaluate them better.

[2] The dude really wanted the job. OK, I lied, the dude was me, but it was rare that I wouldn't include these things in an app I was writing for my current employer, so those parts were automatic (and easy at that point) for me -- it felt wrong not to include them. I used the documentation to explain my choices, mostly, which avoided a second technical interview.


One employer that I respect gave me what I thought was one of the best technical interview questions that I have seen at any recent company where I have interviewed:

Design a binary tree containing integers. Design a function to serialize this to a byte array (or byte stream); and design a function to deserialize that same output back into its original tree form.

I thought this was completely reasonable and also very practical, as it tests data structure & algorithm knowledge to the extent that is likely to come up during real programming tasks, in a context that is entirely plausible as well (needing to serialize data in order to store it or pass it between systems, etc.)

By comparison to the article, I had to look up what it means to “invert” a binary tree since I hadn’t heard that term in a while. It seems like something you’d be more likely to do with a binary search tree than an arbitrary binary tree, but the operation makes sense on both.

If you realize that inversion aka reversal is as simple as swapping the left and right edges on every node — either in-place or by constructing a new tree - then the problem is actually fairly simple.

… as long as the candidate has a clear understanding of what “inverse” means — if I was asking this question I would make sure they clearly understood it. From my perspective a term that would make more sense is “reverse” (the collection of elements represented by) a binary tree (but if that’s the standard CS term then shrug - as long as the goal is clear to the candidate).

However, I'm not a fan of interview questions that require a "flash of insight" -- even one as small as "oh, this question has a simple solution: swap left and right on each node" -- since candidates might get tripped up looking for traps that require something more complex than the obvious algorithm.

Also, I think that kind of task is rather removed from the problems we typically work on as software engineers day to day. Serializing and deserializing data structures, by contrast, is something I do in one fashion or another not infrequently -- usually not with custom code, but I think a competent programmer should be able to write that code.

Binary trees and algorithms on them do not come up very often in practice, in my experience. They might come up if you were building a collections library or a highly optimized solution to a large-scale problem.

Otherwise, I don't think I've seen a binary search tree in userland business logic in the entirety of my professional career. On the other hand, I've definitely had to write serialization/deserialization functions for object graphs. Although this is less common now that there are a variety of good serialization libraries and RPC toolkits, I find it's often still necessary to convert between their generated structures and the native ones used by business logic.
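
For example -- an illustrative sketch only; the class and field names here are made up -- that conversion layer is usually just a pair of boring functions:

    from dataclasses import dataclass

    @dataclass
    class Order:
        # The "native" type that business logic works with.
        order_id: str
        cents: int

    class OrderMessage:
        # Stand-in for a class generated by an RPC/serialization toolkit.
        def __init__(self):
            self.id = ""
            self.amount_cents = 0

    def to_message(order: Order) -> OrderMessage:
        msg = OrderMessage()
        msg.id = order.order_id
        msg.amount_cents = order.cents
        return msg

    def from_message(msg: OrderMessage) -> Order:
        return Order(order_id=msg.id, cents=msg.amount_cents)

Tedious, but it keeps the generated types from leaking into the rest of the codebase.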

In conclusion: the question seems like one that I would expect a competent developer to be able to solve, as long as they're given a clear understanding of what the problem actually means (i.e., explain what it means to invert a tree and don't take off points for not knowing the term) -- but it's not a good problem, IMO, because it's not the kind of code one would typically need to write while solving routine business problems.

All that being said, Google also rejected me during my last round of interviews. I believe this was because I was transparent with the recruiter about my interviews at other companies and the offers that I had, and Google's offer (per the recruiter) would have been for considerably lower compensation. They said they did not want to compete on compensation because it would be unfair to their existing employees, so -- per my best read of the situation (there was no discussion of my interview performance, only of this) -- they decided not to make an offer that was lower on compensation, and potentially also on comparative level. So I might not be the best person to comment on Google's hiring practices. They did give me an offer after a previous interview, though, so shrug. (I interview periodically to benchmark comp and stay sharp.)


Reasons we test for data structures and algorithms:

- The work sucks because existing employees don't know DSA

- We want people to re-implement and refactor with better DSAs

- We want to know the candidate studied for the interview (took it seriously)

- We actually do want them to know DSA when joining the team

- It's the least you can do when applying for a CS job

Most recruiters provide links and resources, the material isn't a mystery.


The material is irrelevant, at least for most software engineers. (For a FAANG, not so much. But for most places? Irrelevant.)

I understand the time-space tradeoffs of the various STL collections, and the Java collections. In 35 years, that's been all I have needed. (And if something beyond that does come up, why spend months memorizing what I can spend minutes googling?) I am not a huge outlier.
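
That working knowledge mostly cashes out as picking the right container -- e.g., a trivial illustration in Python:

    # Membership tests: O(n) on a list, O(1) on average on a set.
    names_list = ["ada", "grace", "edsger"]
    names_set = set(names_list)

    assert "grace" in names_list  # linear scan
    assert "grace" in names_set   # hash lookup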

Interview for what you need. Anything else is wasteful. Do your people spend most of their time trying to squeeze the absolute most efficiency out of their data structures and algorithms? If so, yes, interview for that. If not, though, then don't interview for that.

Leetcode interviews when that doesn't match the work are just abuse. "Here, take months of your spare time learning to jump through this hoop that's actually irrelevant to the job." That's abuse. The only way that makes any sense is if you need employees that you can continue to abuse after you hire them. And if so, then I don't want to work for your company.

As I said, FAANGs are an exception. They need people who can take an algorithm from O(n (log n)^2) down to O(n log n). It makes a huge difference to them. If that's your company, then I'm not talking about you.


You're making it sound like only FAANGs operate on such a scale.

I didn't mean to do so. But I would say that the large majority of software engineers work on stuff that doesn't operate at that scale.

[Edit: Or perhaps I should say that far more companies interview as if they operated at that scale than actually operate there.]


It's the least you can do when applying for a CS job

I think that's precisely it: it's the least you can do. Students are taught data structures and algorithms because they don't know anything else. They can sort numbers and munge strings... and that's all.

They don't know any frameworks. They don't know any problem domains. They don't know how to debug, or read an API, or identify code smells.

But they have to write something, and if you want that something to be more than three lines long you have to make them do something complicated with the incredibly limited domain of knowledge they have. Thus, data structures and algorithms.

Nobody ever needs to calculate Fibonacci numbers. But if they can't yet write a recursive descent parser, or even an HTML parser, they have to learn to write recursive code on something. Thus, they do something trivial.
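
To be concrete, the classic classroom exercise is deliberately trivial -- e.g., in Python:

    def fib(n):
        # Naive recursion: the point is practicing recursion,
        # not computing Fibonacci numbers efficiently.
        return n if n < 2 else fib(n - 1) + fib(n - 2)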

And then they forget it, because they've applied that knowledge to something more useful. Much as you never do long division by hand, even though you had to learn how to, and you would possibly screw it up if handed one now.

A student getting a BS had better know a lot more than the least of it. Test for it if you can't think of anything else they might know, but it's not going to tell you whether they know anything that matters.

And the engineer with more experience is even less likely to have used that freshman-year stuff recently. As you say, it can tell you that they cared enough to cram it again... but they're cramming it because they don't actually need it.


If I had a dollar for every article that rails against DS&A interviews....

I don't understand why people hate DS&A so much. Just do it. Get money, jump ship, get more money. It also makes you a better engineer, it gets you better compensation, and it helps everyone -- you, your team, your company. It avoids wasting time on both sides, too.

There is no downside to studying DS&A and Leetcode; if there is one, it is minimal.

And yes, you can have DS&A knowledge without it handicapping your other software engineering knowledge. It is a fallacy to think that a Leetcode monkey can't write resilient, robust software with good variable names and readable, maintainable code, and vice versa. Like, seriously. Why is this even an argument?

And after the DS&A interview, there is a behavioral interview. If someone is a crazy person, ideally the behavioral questions would weed that out.

I don't understand why people prefer "here take home questions/work for 5 hours for free" over DS&A interview.

Also, it is the "Green Lumber fallacy" to think that knowing "how webpack 1, 2, 3, 4, and the nth JavaScript build system works" will make you a better engineer. That kind of knowledge isn't long-lasting, and it is also quite easy to learn relative to DS&A.

Here is the problem with these kinds of articles. The majority of them are written by people who hate these kinds of interview questions, and who, by extension, probably aren't that good at them.

There are articles that sing the praises of DS&A interviews, but of course they don't make it onto HN because they aren't controversial, and the number of bitter SWEs who don't have DS&A knowledge overwhelms those who do.

Not saying the author doesn't have DS&A knowledge; that's just my generalization.

And yes, of course there are people who excel at software engineering without DS&A, but that's not the point of the DS&A interview. The point of the DS&A interview is to be the fastest possible system for vetting engineers without wasting time on either side. It accomplishes that purpose nicely. Engineers aren't as non-fungible as they think. Deal with it.


My objection is maintaining an entirely unrelated skillset that I have never encountered in my day-to-day work.

It is a skill that you will lose if you spend a year without interviewing or practicing leetcode.

Sure, I'll do it, but you may as well test me on painting famous oil artworks from memory too.

> I don't understand why people prefer "here take home questions/work for 5 hours for free" over DS&A interview.

The alternative is hundreds of hours of speculative investment memorizing books and websites full of algorithms. People take full-time courses just to pass these interviews.


> My objection is maintaining an entirely unrelated skillset that I have never encountered in my day-to-day work.

This is precisely my objection. Time is the one thing you can't make more of, so to have companies force you to learn and maintain a skillset that's completely irrelevant to doing real work is incredibly demoralizing.


Yes, well said. If these skills maintained themselves through day-to-day work after being learned, I wouldn't mind interviews centered around them.

But they don’t, and probably won’t unless I shift to much lower level work, so it’s a ton of extra work and stress for the sake of being able to pass an interview.


But I do get to implement these in some of my day-to-day work. Not every day, of course, but every other month there are problems that I use my DSA knowledge to solve.

Graph problems are pretty common, even in frontend stuff. Oftentimes I have to write my own library.
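
For instance -- an illustrative sketch only, with made-up module names -- ordering UI modules by their dependencies is a plain topological sort:

    from graphlib import TopologicalSorter  # stdlib, Python 3.9+

    # Each module maps to the set of modules it depends on.
    deps = {
        "app": {"router", "store"},
        "router": {"store"},
        "store": set(),
    }

    # Dependencies come out before their dependents.
    print(list(TopologicalSorter(deps).static_order()))
    # -> ['store', 'router', 'app']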


imagine if you were actually an oil painter...you pass the interview with flying colors and show up to work and there is nary a brush in sight

"Your job in the team will be to apply the gesso to the raw canvas".

I came here to basically say this. You want the big salary at the fancy company? Show me you put in the work to earn it.

The alternative is way, way worse for both the companies and the candidates, while DSA is basically all upside for everyone -- but especially, especially the candidates.

Learn it once, use it many times, rack up offers. It's like cheat mode.


>> "There is no downside of studying DS&A and Leetcode, if there is, the downside is minimal."

- Personal time taken that is not relevant to the job?

>> Also, it is the "Green Lumber fallacy" to think that knowing "how webpack 1, 2, 3, 4, and the nth JavaScript build system works" will make you a better engineer. That kind of knowledge isn't long-lasting, and it is also quite easy to learn relative to DS&A.

- Being able to demonstrate that you know how A build system works, in enough detail to show you've actually had to solve issues with it, may not be long-lasting knowledge, but it does show you learn your tools.

>> I don't understand why people prefer "here take home questions/work for 5 hours for free" over DS&A interview.

- Awfully presumptive to assume they should be unpaid. I'd also take an unpaid, but interesting and novel engineering problem, over the unpaid hours of grinding leetcode that is considered interview prep.

>> Here is the problem with these kinds of articles. The majority of them are written by people who hate these kinds of interview questions, and who, by extension, probably aren't that good at them.

- I'm a hiring manager and I hate them, from both sides of the interview table, because they tell me nothing about the candidate that I want to know. I've done them; I've both passed and failed them (entirely depending on the question, my frame of mind at the time, whether it's one I've done before or one where I can easily come up with the 'clever' solution, etc.). I also recognize they measure a very specific thing (your willingness to do bullshit prep work) that isn't job related.

I'll also add that the same arguments in favor of DS&A questions apply to those lateral-thinking brain teasers that no one uses anymore -- the "how many manhole covers are there in NYC" sort of thing.


I mean, you can have all of it, you know. Like learning JS and then Webpack etc., with DSA as well. I do it. Everyone else does it.

The "how many manholes in NYC" questions are hand-wavy and imprecise. Totally different from DSA.


> The point of the DS&A interview is to be the fastest possible system for vetting engineers without wasting time on either side

I have to disagree. Doing LeetCode problems is a huge waste of time for me; I'm not interested in a career as a "professional interviewee". Yes, I'm totally leaving money on the table because of it but I refuse to participate in their 5 monkey experiment.



