
My career path in tech so far has been junior engineer -> mid level -> senior -> tech lead.

As I've progressed through that path, I've had literally no opportunities to apply the stuff I've learnt on HackerRank, save for a couple of interviews.

At this point I'm convinced that the focus of those interviews was entirely wrong. Once you pass those tests and get hired, you will be dealing with tons of legacy code riddled with dumb queries, questionable code quality, and very often the wrong stack for the problem at hand.

I'd be OK with modern interviews if they reflected the job you will be doing to a reasonable degree. But knowing the state of the industry I feel that they fall somewhere between gatekeeping and a bait & switch scheme.

If you haven't spent enough time practising those problems, you will look like an idiot. If you have spent enough time practising them, your expectations will be high, and you will feel scammed when you read the steaming shit your new employer wants you to improve.



Bingo.

It should be telling that there are a number of, usually for-profit, resources for prepping for just the interview process:

* leetcode

* Cracking the Coding Interview

* Elements of Programming Interviews

* Interview Kickstart, which costs like $5500, is a bootcamp just for passing interviews

* educative.io

* algoexpert.io

That being said, I might just play along with the DS / Algo interview game. Having an interview process that is artificially arduous and way outside the expectations of the job duties means that the job market will stay artificially inflated on the demand side. Hiring leetcode monkeys who can barely deploy and manage a k8s cluster on AWS means the demand for my skills will be higher than if the interview process actually worked. I can complain all day to my bosses about how hard it is to hire (it's not that hard) and meanwhile rip their HR budget to shreds when I threaten to leave unless I get paid more (I've done that before; I recently got a promotion and a 50% raise on base alone).

So yeah, I'll gatekeep as long as others play too. I hate the game, but I will play it to my advantage.


I know you think you’re making a “cynical but rational” kind of argument here, but man, this comment is a huge fucking bummer.

You’re acknowledging that you have both the ability and the knowledge to change (your small part of) a shitty system, and instead you’re pulling up the ladder behind you. That sucks.

You should not let yourself feel good about that, even if it’s “economically rational” or whatever.

You’re making things just a little worse for everyone but yourself — actually, even for yourself, too — and trying to excuse it with “well, that’s just the way things work!”


You'll feel better when you see some of my other comments. I'll paste one here for you:

> What’s your suggested alternative?

In my experience, any alternative to leetcode provides a better signal-to-noise ratio. You are better off asking trivia questions.

Additionally, this is the common narrative among leetcode advocates - "If no leetcode, then what?" As a hiring manager, you think harder and do better, that's what.

> Please remember that while some of us would do well and prefer some live pair programming or debugging sessions, it stresses the hell out of some folks and penalizes them unfairly.

So does leetcode. And a candidate who is truly a good fit for the role would be more at ease in that kind of interview than when facing a leetcode question they haven't seen before.

> Take home tests penalizes people with families and other responsibilities...

So does studying for leetcode for hours.

> ...and probably has a racial bias as well in countries where different races have different societal loads.

Share some sources before making a claim this ridiculous.

> In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job, as much as any other method can be when we look for unbiased markers.

In my 6-ish years of experience, this is false. I have actually tested this theory in practice, and through my anecdotal experience, leetcode is a poor signal for job performance.

I can usually accurately screen a candidate for success in about 45 minutes. And, unlike with leetcode, you can't pretend you haven't seen the question before, because I can push the candidate to their limit, which is where you will find hire / no-hire signals. I love talking about this stuff, but I find that my engineering peers do not give a shit and would prefer to stay in leetcode hell.


What do you suggest instead?


I mean, I don't even have to suggest; I can let the OP suggest instead.

> Having an interview process that is artificially arduous

Make your interviews less artificially arduous.

> way outside the expectations of the job duties

Re-design your interview questions to be more in line with the actual job.

> leetcode monkeys who can barely deploy and manage k8s cluster on AWS

See previous suggestion.

If you want my personal suggestions on interviewing better:

- Interview for problem-solving process, not just for correctness. A candidate's ability to solve an algo problem in 45 minutes is much less important than their ability to calmly, thoroughly, and clearly explain the way that they think.

- Tune interviews to the role at hand. This should be obvious, but for some reason full-stack web devs still have to pretend like dynamic programming is a big part of their daily work.

- More systems interviews. I find that it's much harder to bullshit/memorize here, at least not for very long.

- Interviews based on actually building stuff. "Hey, here's a super simplified version of a feature someone in your role built last month. Can you walk us through how you would build the same thing?"

Interviewing is hard. It's not that fucking hard. When someone is out here complaining about what they see as obvious flaws in their process, and then decides that the best course of action is to "gatekeep as long as others play too," it just reeks of laziness and self-interest.

This does not get better unless individuals decide to start making it better. That's my biggest suggestion, really; don't become a cog in the machine. Every single interview that you, personally, conduct has a small but meaningful impact. It's up to you to have that impact be positive or negative.


> leetcode monkeys

I hate this from someone who manages interviews. They are leetcode monkeys because they are not dumb: this is what gets them rewards and jobs. If he changes the criteria, they will change what they spend time on.

But, instead, he chooses to insult them ... for doing exactly what he rewards.


> Tune interviews to the role at hand. This should be obvious, but for some reason full-stack web devs still have to pretend like dynamic programming is a big part of their daily work.

This personally hurt :)

Completely agree with you on everything you've written.


This means all salaries go back to $100k. The salaries in Silicon Valley normalize with India's.


I don't hate leetcode, but I'd really rather do interviews in one of these ways:

1. Rely on referrals from people I trust. If somebody I've worked closely with and respect highly says that Person X is awesome, then just hire Person X. This obviously has massive problems with creating an insular culture and only works for people with a sufficient professional network, but it is incredibly high signal.

2. Have a long conversation with the candidate. Discuss their prior experience and war stories. This can be gamed, but I suspect is harder to game than leetcode.

Both of these options have a big problem as they scale, which is that interviews do not transfer. You need the hiring manager to do this interviewing. Making every hiring manager at a 100,000 person company do their own sourcing and interviewing is going to be a mess. Megacorps want to allow anybody to interview and then once somebody passes those interviews, they'd be able to join any team. I have mixed feelings about this approach.

So the best option IMO is two 2-3 hour pairing sessions. Refactor some code, develop a feature, and diagnose a bug. Have the person comment on the architecture of the code or system they used and describe large scale changes they'd make to improve it or prevent it from arriving at this state.

This requires more pre-work since you need to create an entirely working fake system and environment. And since there is a huge incentive to practice for these things, people would definitely sell knowledge about the fake system. It is also higher variance when there are new interviewers, since they represent a larger portion of the total interview panel.


Make the pie bigger.


Personally, I'd much rather have a strong understanding of Algo/DS than deploy K8s clusters.

To me, K8s is not only an annoying external cognitive load, but one that will probably be irrelevant within the decade.

I use my knowledge of graphs quite frequently, and what was true about graphs in the '60s is still true today.


What is that supposed to be telling? There are quite a few industries with profitable neighbor industries built around breaking into them. Even ones with licensing and blah blah blah instead of whiteboards.

Hell, the cynics would call the entire current university system one! And if not that, then at least the massive industry around getting into better schools...


Why don’t other STEM fields have those neighbor/parasite industries?


Science typically requires advanced degrees, publications, etc. With just a Bachelor's, you're not doing much of anything a layperson would consider you a "scientist" for. Getting those degrees serves as the hurdle/filter.

Math is the same as science. You're probably not getting hired as a mathematician if you just graduated with your Bachelor's.

[Non-software] engineering has all the professional certification, shadowing, etc. that is required to get very far in the field.

Software is the only STEM career you can get into right out of high school, completely self-taught. Yes, obviously a degree helps and it's much easier with one, but for most STEM careers not having an advanced degree is a non-starter, let alone not having a degree at all.


Oh, boy, now let me tell you about medical school.


They do. Google “VLSI interview prep”


> I can complain all day to my bosses about how hard it is to hire (its not that hard) and meanwhile rip their HR budget to shreds when I threaten to leave unless I get paid more (I've done that before, I recently got a promotion and 50% raise on base alone).

Kudos for that. I may be making a similar move soon.


The hiring rigmarole is just an IQ test in disguise.

The only downside: you can prep for IQ tests, and spend unlimited time doing the prep.

If only someone would come up with a way to control for how long they prepped for this: one week on hackerrank, or 2 years of courses.

I bet those who learn the rigmarole faster will also pick up k8s and other such fairly ephemeral trivia rather quickly.


> The hiring rigmarole is just an IQ test in disguise.

Do you have any data to back a claim that ridiculous?

I hear this bullshit all the time, yet nothing to back it up.

If you think IQ would be a good hiring signal, then just administer an IQ test instead of LC questions.


I agree with you in that we should drop this nonsense and administer IQ tests legally.

Until laws change, though, this rigmarole will continue. Administering IQ tests is illegal; that's why this rigmarole exists in the first place.

The military isn't subject to silly laws like that, and they do administer IQ tests directly, and place talent accordingly.


The reason this complaint never changes anything, despite being very popular, is that these interviews serve two purposes that it doesn’t address: 1) they’re an intelligence test; 2) they’re a perseverance/conscientiousness test.

Will you take the time and put in the work to pass one of these interviews? That measures perseverance and conscientiousness. Having put in the time, can you recall and adapt what you learned to perform complex tasks on a whiteboard in a stressful environment? That’s an intelligence test.

In combination, these criteria measure the famed "smart and gets things done" in a way that more practical lines of questioning might not. Or at least that’s the theory.

For the record, I’ve only interviewed at one FAANG company, got to the on-site interviews, did not get an offer (in case you think I’m secretly patting myself on the back here).


> The reason this complaint never changes anything, despite being very popular, is that these interviews serve two purposes that it doesn’t address: 1) they’re an intelligence test; 2) they’re a perseverance/conscientiousness test.

No argument on perseverance, but I don't think it measures conscientiousness. At least, not how I understand the term, which has connotations of intrinsic motivation rather than extrinsic.

Now, if measuring how a person responds to extrinsic motivation is your goal, then this type of interview does just fine. It measures a person's ability to delay gratification for a later reward (because you can have perseverance without that; call it stubbornness or determination, e.g. "I'm going to figure this out even if it kills me").


I agree, I don't think it measures conscientiousness well either. There's not a lot of room for careful thinking and trial and error in a timed, observed, pass/fail, 45-minute interview.


They are a memory test, not an intelligence test. If you truly wanted to measure intelligence, it would have to be based on a priori knowledge and not on the ability to memorize algorithms and their implementation.


The notion that somebody could simply "memorize algorithms" and then apply that information on-the-fly to questions they won't know in advance -- demonstrating adaptiveness, creativity, and the application of general problem-solving knowledge to novel problems -- and that this doesn't measure intelligence is not credible.

I'd bet my net worth that the ability to perform well on these interviews positively correlates with IQ.


I don't think it is as you describe, though; the leetcode questions in interviews are usually just slightly rehashed. What would be interesting is giving someone a puzzle they hadn't seen before. My wife can't remember formulas or algorithms for anything - she defends that by saying she can just look things up if she needs them. That said, we did Advent of Code last December and she came up with quite a few optimal solutions from first principles (albeit slowly). She'd suck at getting through a leetcode interview. I'm quite lucky in that I remember these things really easily but really struggle with truly novel problems; I just reach for my toolbox. I don't believe these interviews correlate with genuine intelligence, just with a particular type of memory and question recognition.


There is a bit of reasoning backwards from conclusions here.

For example, you will likely find that applicants rejected numerous times by FAANG have higher age-adjusted body-fat levels and more dental cavities than successful candidates. And of course these measures negatively correlate with IQ too.

So what? Correlation doesn't necessarily make for reasonable selection criteria. Variance is too high.


Algorithm design is a skill you can learn. It probably correlates with IQ, but you have to control for experience to see it. Experience is certainly more important than intelligence.

In my experience, algorithm design is primarily pattern matching. You have a toolkit with a set of abstractions for modeling the problem and a set of algorithmic techniques for solving it. If you have the right tools in the toolkit, potential solutions will jump out. You then pick a promising solution and figure out the details. If you don't have the right tools, you have to go back to reading or start building new tools from basic principles. That can take hours, days, weeks, months, or years, if you succeed at all.

I guess this is similar to proving mathematical theorems, but I haven't done much of that after grad school.


> primarily pattern matching. You have a toolkit with a set of abstractions for modeling the problem and a set of algorithmic techniques for solving it. If you have the right tools in the toolkit, potential solutions will jump out. You then pick a promising solution and figure out the details. If you don't have the right tools, you have to go back to reading or start building new tools from basic principles. That can take hours, days, weeks, months, or years, if you succeed at all.

But you're claiming this is somehow not "intelligence"?


Intelligence is usually understood as a relatively fixed factor that cannot be trained effectively. Algorithm design is mostly about the patterns you already know.


IME how many patterns you were able to rote-memorize is a very small factor in how good you are at algorithm design; the limiting factor is more likely to be your ability to filter out irrelevant details, abstract out the essence of the problem, and see how that relates to your library of patterns, and that part is mostly innate rather than trainable.


> and then apply that information on-the-fly to questions they won't know in advance

But that's the whole point of LC grinding: to claw your way through enough of these problems until, lo and behold... there ends up being an 80 percent chance that any given "medium" problem thrown at you -- is one that you have in fact seen and worked on, already.

I'm just guessing at the 80 percent figure. But it seems pretty clear from the cult of LC grinding that one of the goals of this strategy is not just to learn general techniques for solving these problems -- but to get a significantly high percentage of these problems under your belt already, or nearly so.

While also learning to deploy just the right grunts and moans and pauses to make your interviewer think you're seeing the problem for the first time, of course.


I feel like people who only memorize would not be able to pass the contemporary tests at most of these big companies.


I feel like you don't know what you're talking about. That's not meant as an insult but to highlight that "feeling" doesn't matter, because my feelings about your statement have no bearing on its accuracy or content.


"I feel like" in this context is a synonym for "it is my belief that," so your response is a non sequitur based on a misunderstanding of a colloquialism.


If you aren't interviewing with the team or hiring manager with the opening, and are instead just getting assessed by a random SWE in a giant company, that seems subject to a lot of variance.

Why don't they just conduct a proctored test? Doesn't make any sense.


I really think you nailed it, will have to borrow this point in the future.

A lot of it is about gauging the candidate's aptitude, and of course their willingness to put in the work.

It's like those kids in math class that say, "I'm never going to use this in real life."


Those kids are right. School math is mainly an attrition machine. Most students, including in STEM fields, look forward to forgetting all of their school math on graduation day. The math that engineers need is baked into their high level software. Most engineering teams have a "math person" who handles any higher math related problems that crop up. I'm one of those people at my workplace.

I was a college math major, but believe that the math curriculum is ripe for reform.


This is a chicken-and-egg problem that I believe exists in computer engineering/developer positions because of i) the extremely diverse backgrounds of hires (some of them not being engineers or mathematicians or physicists per se, i.e. they could be coming in with a bio background (ML usually), no degree at all (perhaps a bootcamp, usually front-end work), or other), ii) a lot of businesses doing low-hanging-fruit work with little novelty, iii) a lot of cost being thrown at "buy more hardware/server time," and iv) low quality being not just tolerated but sought.

Because of the usual lack of rigor -- e.g. most companies will not require you to prove that the system will be up 99.9999% of the time, that the load balancing is optimal, or that the abstract queue in the system will not diverge (when it inevitably does, we will just reboot and keep doing so until we get more funding) -- the expectations and targets are also set to avoid it.

Most developer positions are essentially doing, over and over, the civil-engineering equivalent of designing one- or two-story buildings.

There are positions that will exercise one's engineering skills (including math). However, the culture and expectation of the "math person" does not promote them. Graphics, cryptography, distributed/Byzantine systems (I don't mean using AWS here), optimization, ML (new algorithms/improvements), and maps are some areas where one's CS/math/engineering skills can be used.

But note that when any X of those areas become trendy there is the gut reaction of an "intro to X for programmers" coming out that is a couple of hand-wavy sketches, some sketchy intuition -- most often wrong in the details -- and an API guide.

Having done my undergraduate in Europe, every year was math heavy; math was the core, not the attrition machine. And I do see this mentality in the U.S. startup industry, where a lot of the time the Edison approach is taken: trying every single filament-width choice on a mock-up instead of doing a quick calculation.

Having taught math to undergrads in the U.S., and from discussions with people who TA'd, I would agree with you that the math curriculum needs to be revised from the ground up, as it is extremely inadequate. On the other hand, most software "engineer" positions do not require any engineering to be done.


I don't believe it is a chicken-and-egg problem. I believe there isn't that much demand in the industry for solving "Math" problems.

From an (software even!) engineering perspective, nobody needs to solve the same Math problem twice. All "Math problems" are either solved or are open problems (this is almost by definition). The ones which are already solved do not require re-solving again, which means that in the vast majority of cases a competent (software) engineer only needs to learn how to apply them. I don't know about traditional engineering, but in software, often it's just a matter of importing a library and calling an API. The software development ecosystem has developed to a point where you don't need to know how to implement things, you just need to know what you need. Unless you're an expert specialist in a particular field, you often can't do better than that, because whatever idea you have, most likely somebody more competent than you has already written an open source library for that.

I don't think it's useful to contrast general incompetence with a lack of mathematical ability -- though the two sometimes correlate somewhat, if only because general intelligence often manifests in mathematical ability. There are simply too many unproven assumptions in the chain from "math education" to "writing better software". If anything, good software engineers need to be ready to learn, and learning maths outside of formal schooling is just one of the many things they need to do. The general lack of quality in software is a supply and demand problem -- as long as FAANG can afford to pay 200k+ USD to fresh grads with no experience, companies with a smaller budget must make a trade-off between lowering quality and not delivering at all.

Ironically(?) the market is actually responding. People doing interview prep are learning the minimum subset of mathematics and engineering required for a high-paying software job. Then you get people complaining about grinding leetcode. Imagine what happens if FAANG companies add abstract set theory to their standard interview questions! (I'm willing to bet this would lower software quality...)


> The math that engineers need is baked into their high level software.

Well, someone has to write that software.

When people used to say, "I'm never going to need to know this", my teacher would reply "You're right you won't, but the smart kids will"


Indeed, but I think we could revise the curriculum without causing a shortage of "math people" relative to existing needs. What school math teaches right now is a lot of expression manipulation, and it's all done by hand. That's not how people do math after school, even mathematicians. And it creates a distorted view of math, since it's limited to problems that have closed solutions using relatively straightforward algorithms.

I'd like to see school math place roughly equal emphasis on:

1. Arithmetic, i.e., expression manipulation

2. Computation, both numeric and symbolic

3. Learning from data

4. Theory, i.e., things like sets and proofs

Items 2 and 3 are things that people can use throughout their lives, even outside of STEM careers, and could be blended with the science curriculum. Items 1 and 4 would serve the needs of "math people" and academic mathematics.


Can't that be taught in higher education, like most skills that are only required in a certain area? It's not just "the smart kids will," but "the kids who go into the mathematics field."


> math that engineers need is baked into their high level software.

This is only true if you’re working in a mature business with stable usage patterns. Even then, using lower level APIs can be easier and more flexible.


Who writes the software that has the math baked into it?


Reading comments like this makes me think that the real problem is that recruiters have never worked in engineering so have no idea what the job is actually like.


Recruiters have nothing to do with the structure or format of the interview, so I'm not sure I follow.


If we define willingness to cram, cram, cram as "conscientiousness" - then sure.

The idea that this is a meaningful proxy for "ability to get things done" -- about that I'm not so sure.


I interview senior SWEs at a big tech company (I've done in the order of a few hundred interviews, which were a mix of coding and system design sessions).

I'm always open to hearing about ideas to improve my interviewing, but I find that threads like this are repetitive and not really actionable.

One common misconception I keep seeing is the idea that the interview result revolves around solving the question. IMHO, that's a bad approach (which is unfortunately taken by many interviewers) and in fact it's an incomplete way of assessing candidates, given that most big tech companies have a list of competencies and/or values that are supposed to be evaluated. Big tech companies usually want interviewers to assess on multiple dimensions, ranging from communication quality to specialization fit.

As an interviewer, you're supposed to think of the question as a vehicle to go over these topics, not as a pass/fail end goal in and of itself. In my interviews, I pace my hints with the goal of having the candidate complete the question, even if it means dictating answers to unblock them. There's nothing worse than sitting there wasting time feeling smug while the candidate is stuck on something. Time is short, so it's much better to move on quickly to different topics than to linger on something unproductive.

If you're on the interviewer side, my number one advice would be this: question the calibration of the question bank. It's usually a kitchen sink of everything the person writing it could think of, without regards for time. You might already feel that you don't have time to cover everything that the question is supposed to cover, let alone all the non technical criteria that leadership asks for. So instead, cover a few actual technical concepts (syntax, recursion, what have you) to rule out complete incompetence, and then take liberties to go into topics that do cover a wider variety of aspects. For example, discussions about testing, refactoring or debugging can all be productive ways to spend time in a coding interview.

</two-cents>


"One common misconception I keep seeing is the idea that the interview result revolves around solving the question."

Candidates know this.

The issue is:

Interviewers who do their job, build a broad-based picture of a developer's skills, and give fair ratings are rare.

The perception is that 75% of interviewers don't even bother. They're tired, they're unmotivated.

Their annual bonus doesn't reflect "conducts fair and balanced job interviews."

They underclock their brain to 10% of its maximum, and at the end of the hour just look at how shiny and perfect the final piece of code is.

Despite whatever the intentions of the system, I don't think anyone has ever hired a candidate with messy code who spoke and reasoned well.

My extremely strong perception, having never conducted a big tech company interview, is that a candidate who submitted a shiny piece of clean code at the end of a technical interview will always be first pick. No matter how much they mumbled.

A picture is worth a thousand words.

A shiny perfect coding solution is a perfect image in the mind of the interviewer.

Interviewers are human, fallible people. When they're doing the end of day calibration on interview performance, having a shiny perfect solution changes their perception of a candidate utterly.


I had an interview a few weeks ago that ended in an offer and was basically a verbatim Spiral Matrix problem from LeetCode (they had changed the starting direction of the spiral but that's it). Didn't run it once because we ran out of time, and when I pasted it into LC after the interview, noticed several errors to the point where it's honestly probably better we didn't run it. The interviewer was super engaged the whole time.
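For anyone who hasn't seen it, Spiral Matrix asks you to return a 2D matrix's elements in clockwise spiral order. A minimal sketch of one common approach (shrinking boundaries); this is my own illustration, not the parent's interview code:

```python
def spiral_order(matrix):
    """Return the elements of a 2D matrix in clockwise spiral order."""
    result = []
    if not matrix or not matrix[0]:
        return result
    top, bottom = 0, len(matrix) - 1
    left, right = 0, len(matrix[0]) - 1
    while top <= bottom and left <= right:
        # Left-to-right along the top row, then shrink the top boundary.
        for col in range(left, right + 1):
            result.append(matrix[top][col])
        top += 1
        # Top-to-bottom along the right column.
        for row in range(top, bottom + 1):
            result.append(matrix[row][right])
        right -= 1
        # Guard against re-visiting a single remaining row or column.
        if top <= bottom:
            for col in range(right, left - 1, -1):
                result.append(matrix[bottom][col])
            bottom -= 1
        if left <= right:
            for row in range(bottom, top - 1, -1):
                result.append(matrix[row][left])
            left += 1
    return result

# spiral_order([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# -> [1, 2, 3, 6, 9, 8, 7, 4, 5]
```

The two guard checks near the end are exactly where the subtle bugs the parent mentions tend to hide.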

When I'm conducting interviews, I'll do LC easies or mediums depending on the role we're hiring for - basically a couple easies for a new grad or someone with just a couple years of experience. One medium for a mid-level. If you're a senior we're not doing a coding exercise, we're having you teach us a system you've designed in the past, then we're going to change some of the core assumptions/requirements and see how your design changes. This has worked super well for us in the past, especially at the senior level. I would say most of the mid-levels we've hired since I've started doing this did not have a working solution at the end of the interview.


The "Merge Intervals" question, or variations on it, makes a particularly good interview question.

Off-by-one errors. Test developers on the off-by-one errors.

Whatever your skillset, it's truly the fairest tool in the book, I think.
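To illustrate the off-by-one angle: a minimal sketch of Merge Intervals (my own, assuming closed intervals where touching endpoints like [1, 4] and [4, 5] should merge; that boundary decision is exactly the trap candidates fall into):

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals after sorting by start."""
    merged = []
    for start, end in sorted(intervals):
        # `<=` merges touching intervals; `<` would keep them separate.
        # Which one is correct depends on the problem statement.
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

# merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]])
# -> [[1, 6], [8, 10], [15, 18]]
```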


That sounds even worse in terms of setting realistic expectations for what the job will involve day-to-day. How many times is someone actually going to redesign a system like that?


Respectfully disagree. Teaching and explaining the current system to new and lower-level folks is arguably one of the most important skills for a senior developer. As for the system, they're not redesigning anything, they're teaching something that (as a senior) they should already know, as if we're juniors. Think typical box-arrow system architecture diagram, DB schema, that sort of thing.


Explaining the system is certainly something that happens, but in terms of hours/month I'd say it's something you do less of than leetcode-like coding work. Actually reconsidering your core design in response to some changing fundamentals, which sounded like it's what you're testing in this interview, is something that you do a few times a year at best.


Yeah, this is admittedly a problem. I don't have a great solution other than relentless education, which is... hard. Especially at scale.

From a cultural side, some companies have a competency pillar that is supposed to embody the idea of being a "good person". In mine, we call it "Citizenship", it's tied to performance appraisal and interviewing is one way to develop that competency.

As for code cleanliness, I've passed candidates with messy or even broken code before, if the candidate demonstrated competence in enough other aspects and the code quality issues can be attributed to nervousness or running out of time half way through a refactor or whatever.


I guess the question is, what's the proportion either side?

Is it 25% will do it properly, 75% don't? Am I too cynical about tech interviews, or too optimistic?

I'd also add, for a couple of companies, Google's approach springs to mind.

I think for places where, you know, they do search algorithms, knowing how to print the spanning tree of a graph is probably an important thing to know.
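(For the curious, "print the spanning tree of a graph" can be as simple as a BFS that records each node's discovery edge; a sketch, with the adjacency-dict representation being my own choice:)

```python
from collections import deque

def bfs_spanning_tree(adj, root):
    """Return the edges of a BFS spanning tree of a connected undirected graph.

    `adj` is a dict mapping each node to a list of its neighbors.
    """
    visited = {root}
    edges = []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in adj[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                edges.append((node, neighbor))  # tree edge: how we first reached neighbor
                queue.append(neighbor)
    return edges
```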


My understanding is that googlers typically don't know/care what role the candidate is applying for (outside of maybe the hiring manager round). There are arguments both for and against their methodology.

Personally I think there's a strong correlation between necessity and hiring quality (i.e. if teams really need the help, they typically engage candidates more meaningfully than the Google-style clinically impartial methodology)


That makes sense.

Sounds a bit like what my old timer buddy says.

Google's KPI is hiring talent away from competitors who physically live in the San Francisco bay area. And the ads/ML teams that they drench in their 1%ers.

Everyone else is window dressing.


Oh the focus of the interview is very good.

It's Google's interview format after all (actually Microsoft's, but operating systems are hard as well).

The great big sleight of hand trick behind all of this is that the entire industry has copied the interview process of a single, extremely large, company.

And Google makes $$$ writing software for *extremely complex graph and search algorithms*.

It's a great interview process. For Google.

For everyone else, it's holding up a great big sign that says "I'm a moron and I copied my homework assignment from Wikipedia."


I went to an internal job interview about 5 years into working for the company. They were asking me all sorts of textbook questions, defining stuff like polymorphism and explaining some sorting algorithms. I bombed. Many of these terms and algorithms had never come up during my 5 years, and I'd forgotten many since they hadn't been used since college.

I wanted to ask the interviewer if they actually use these terms in meetings or used the algorithms, but I didn't. Since it was internal, I already knew the answers would be that they didn't, and I didn't want to appear confrontational.

I was internal, a high performer on my team at the time, and you're going to ask me this useless stuff?


Haha. At some companies, they made the interviews tougher for internal candidates... Amazon used to require a full loop, but the last time I changed jobs there (I no longer work there), all it took was conversations with some senior engineers and somebody reviewing my work sample.

And I totally agree with the absurdity of the questions we ask in tech interviews. At my current employer, at least, we use simplified versions of real problems we faced in the past and place more emphasis on how candidates communicate, rather than throwing leetcode medium/hard problems at them and letting candidates self-select based on how much prep work they put in.


Well, there are really two ways it works at my company. The other way is managers making backroom deals, and then it's just some conversation with the tech lead that counts as the interview for HR's sake. So sort of similar to what you described.


The next big tech interview craze, I have long thought:

Here is a fucked up environment.

Here are logins for each role.

Here is your team.

Here is your role.

Each of you has a chance at each role. There are three of you. So three challenges.

The environment is experiencing the following symptoms. Diagnose, report, fix.

---

Then throw them at some broken stack with admin rights and have them figure out why it's broken and fix it.

Create various stack models to throw at teams based on team skill-set.

Never hire individuals. Hire strong individuals who can work in a team.


I was recently given something like this as a phone screen. I was provided an ip address and some credentials and I was expected to ssh in and get some python scripts working.

I was only given this after passing a difficult HackerRank though.


I'd kill for something like that. Don't have me create and solve some arbitrary math problem in code syntax; this isn't quiz bowl, and work isn't a scholastic decathlon. This is tech: give me broken technology, tell me its expected behaviors, give me space to figure out what makes it tick, and I will fix it.


At least have empathy for your predecessors! How often have you chosen an architecture and stack to solve a problem, only to have the slow but constant shifting sands of management's requirements completely undermine the foundation of what you've built, and been given no extra time to migrate and make it right instead of adding on another layer of shortsighted hacks?


I totally agree with you. Some of the content there seems relevant to CS in general, but I hate the fact that you need to study content that you won't use on the job in order to pass an interview. It doesn't make sense to me. I've seen new grads perform as well as senior engineers with these Leetcode-style questions so I've stopped using them to assess engineers. Further, I know we're specifically trying to assess technical ability in an interview, but that's not the whole picture for an engineer. They need to be able to take feedback well and provide good mentorship also. I think we miss this.

Inspired by some realistic interviews that I did recently, I've been working on something[1] that aims to equip companies to hire engineers using realistic assessment methods. It scales like Hackerrank, but I think it's fairer.

[1] https://devscreen.io/


> I've seen new grads perform as well as senior engineers with these Leetcode-style questions so I've stopped using them to assess engineers

To me, this seems to be reasoning from an unproven premise. As tech becomes ever more lucrative, it honestly wouldn't surprise me that the average competence of new grad hires will skyrocket - supply of software engineers is going to increase and each subsequent generation will have more experience with technology.


> I've seen new grads perform as well as senior engineers with these Leetcode-style questions so I've stopped using them to assess engineers.

Good point, it could start a real crisis if the industry collectively admitted that senior engineers are no better than new grads.


do you have a demo video or something for this?


Not yet - I'm going to add one to the homepage this week and also support a free trial.


What’s your suggested alternative? Please remember that while some of us would do well in, and prefer, live pair programming or debugging sessions, they stress the hell out of some folks and penalize them unfairly. Take-home tests penalize people with families and other responsibilities, and probably have a racial bias as well in countries where different races carry different societal loads.

In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job, as much as any other method can be when we look for unbiased markers.

This is similar to how the entrance exams to the IITs in India are almost completely disconnected from what you actually end up having to know; the rigor and preparation they demand still end up reasonably selecting for well-performing students anyway. Not perfect, though. Any system can be gamed, but it works alright.


> What’s your suggested alternative?

In my experience, any alternative to leetcode provides a better signal-to-noise ratio. You are better off asking trivia questions.

Additionally, this is the common narrative among leetcode advocates - "If no leetcode, then what?" As a hiring manager, you think harder and do better, that's what.

> Please remember that while some of us would do well and prefer some live pair programming or debugging sessions, it stresses the hell out of some folks and penalizes them unfairly.

So does leetcode. And a candidate that is truly a good fit for the role would be more at ease with the interview than if they saw a leetcode question they haven't seen before.

> Take home tests penalizes people with families and other responsibilities...

So does studying for leetcode for hours.

> ...and probably has a racial bias as well in countries where different races have different societal loads.

Share some sources before making a claim this ridiculous.

> In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job, as much as any other method can be when we look for unbiased markers.

In my 6-ish years of experience, this is false. I have actually tested this theory in practice, and through my anecdotal experience, leetcode is a poor signal for job performance.

I can usually accurately screen a candidate for success in about 45 mins. And, unlike leetcode, you can't pretend you haven't seen the question before, because I can push the candidate to their limit, which is where you will find hire / no-hire signals. I love talking about this stuff, but I find that my engineering peers do not give a shit and would prefer to stay in leetcode hell.


> You are better off asking trivia questions

Bet you most people would disagree with just trivia.

> So does studying for leetcode for hours.

You honestly do not have to do this to pass the LC interview. I didn't.

Certainly not as much as a takehome test.


> You honestly do not have to do this to pass the LC interview. I didn't.

You were able to prep for 2-4 hours and still pass LC interviews? I find this hard to believe given that I, and peers I know, took much longer to prep.


Having just passed Google's algorithmic interviews, and established a leetcode account specifically to prepare for that, I have a good record of how much leetcode I did:

3 easy problems, 4 medium problems, and 3 hard problems.

It would make sense to count 2 additional hard problems that I worked on (with pencil and paper) but didn't submit a working solution for.

This is more than four hours of work, but it's much less work than people suggested was necessary. (For example, my recruiter strongly believed that candidates needed a minimum of 5 weeks to prepare for an interview.) I wasn't really able to make myself do more leetcode, because whenever I started working on it, what I really wanted to do was to write up proofs that the algorithms were correct, not to submit a solution to one and move on to the next one. (OK, it passed all the test cases, but what if that's just a coincidence?)

But writing up a formal proof can easily chew up most of your day, and it does nothing for you in terms of practicing getting the code out. So I found the whole process fairly demoralizing.


> (For example, my recruiter strongly believed that candidates needed a minimum of 5 weeks to prepare for an interview.)

Your recruiter believes this for a reason and they have probably seen many candidates go through the process.

Not saying you're a liar, but if you got into G with that little prep, I have some follow up questions:

* Where did you level when you got your offer? L3 vs L4 vs L5 makes a big difference.

* How many LC questions were you given per round? I would expect 2 is the norm.

* Did you have optimal solutions or were you nudged to a solution by the interviewer?

* Did you pass any other FAANG / leetcode interviews?

* What's your educational background? Do you have a computer science / STEM degree from MIT or similar?

* Did you crush the system design interview?

* How do you feel behavioral interviews went?


I can answer for myself as GP, didn't interview with G but got into a similar company.

> L3 vs L4

Slightly above L3 would be the equivalent ($200k-$260k salary band)

> * How many LC questions were you given per round? I would expect 2 is the norm.

6 interviews, 5 were technical -> 3 were algorithmic style, 2 were domain specific. Usually they had 2 questions, but really the first question was very easy and often related to the second one.

> * Did you have optimal solutions or were you nudged to a solution by the interviewer?

No clue, there was one I was struggling on because string parsing is annoying in C++. Not much nudging. I believe my solutions were generally pretty efficient.

> Did you pass any other FAANG / leetcode interviews?

Yes, the only other one I was in. I found actually getting interviews (as entry-level) to be much harder than passing the interviews once you got them.

> What's you educational background? Do you have computer science / STEM degree from MIT or similar?

Yes to the second and I did take a DS&A class in sophomore year of college.

> Did you crush the system design interview?

Probably was my weakest.

> How do you feel behavioral interviews went?

Well, I am good at talking.


> Not saying you're a liar, but if you got into G with that little prep, I have some follow up questions

First off, I didn't get in. I passed the algorithmic interview, I was told to expect a round of "team fit" interviews, and then they notified me that instead of having team fit interviews they were just rejecting me. From what I understand, after passing the algorithm round you're supposed to collect "statements of support" from team leaders at Google, and then your file with the statements of support included goes to a centralized hiring committee. I would guess the difference between passing the algorithm round with no team fit interviews and failing the algorithm round is largely in whether you get rejected before or after that committee sees your file at all.

I can answer the questions anyway:

What level were you applying for?

L3.

How many questions per round?

Well, this depends on what you mean by "round". I had a day consisting of 5 interviews with 5 people. One of those was the behavioral interview; the other four were algorithm questions. Each (non-behavioral) person asked one question, for a total of four questions.

Did you have optimal solutions or were you nudged by the interviewer?

Question by question:

- Near-total failure.

- An interview involving a lot of verbal problem solving verging on system design. But technically not a system design interview. L3s don't get those. The problem was less difficult (which makes sense since the design of the interview allows less time for it) and I didn't need much in the way of help. Preparation was relevant here; one of the things I had done to prepare was to write out a binary search, which came up in this interview.

- A question for which I had a good solution offhand, which the interviewer nudged to a perfect solution.

- A question for which the interviewer did not want an optimal solution. He specifically remarked to me that when he gives this question and the candidate appears to be familiar with the solution, he substitutes a different question. This interviewer spent some time talking to me once my solution was written (and I had taken help from him). I had used a brute force approach. He remarked that my approach was atypical, but that the typical approach was also a brute force solution, and he described the optimal solution for me.

Have you passed any other FAANG / leetcode interviews?

No. I don't do well in interviews. I have worked for Amazon in the past; I got in by winning a programming contest they held at my school. Notably, you do not have to speak to anyone to win a contest. I did not continue to Amazon after graduating for a couple of reasons, but it seems to have been more or less a guarantee for everyone who was hired under that program and chose to continue there.

What's your educational background?

I have a BA in computer science (and in math; double major) from a poorly-regarded school. You might find it relevant that, although I didn't go to a good school, I had good standardized test scores: 36 ACT; 1600 SAT; 800 on each of 3 SAT subject tests.

Did you crush the system design interview?

L3s don't get those.

How do you feel behavioral interviews went?

Iffy.


> What’s your suggested alternative?

I generally give (mostly) simple questions where I explain how to solve them. Assuming you're a competent programmer in the language we're dealing with, you should be able to figure out how to solve the problem in your head in a minute or two. The trick is that I'm gauging accumulated skill, not memorized algorithms.

For example, I'll explain how to use a data reader, explain the constructor for a simple object, and provide a SQL statement. Then you need to write the code that reads a few values from the data reader and constructs an object. After that we discuss edge cases and error handling, and whether you should throw an exception or return null. Easy-peasy! (And if it's not easy-peasy, you really shouldn't be here.)
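A rough Python analogue of that exercise, using sqlite3's cursor as a stand-in for a data reader (the `User` class, table, and query are made up for illustration):

```python
import sqlite3

class User:
    def __init__(self, user_id, name, email):
        self.user_id = user_id
        self.name = name
        self.email = email

def read_user(cursor, user_id):
    """Fetch one row from the users table and construct a User, or return None if not found."""
    cursor.execute("SELECT id, name, email FROM users WHERE id = ?", (user_id,))
    row = cursor.fetchone()
    if row is None:
        return None  # the "return null or raise?" edge case the follow-up discussion is about
    return User(row[0], row[1], row[2])
```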

Another example: I'll explain an older API for running a lambda in a background thread, and then an API for running a lambda in the main (UI) thread. (Mac, Windows, iOS, and Android all have the same threading model for their UI.) Easy-peasy if you understand basic scoping and threading. (And if it's not easy-peasy, you shouldn't be working in any job where you have to deal with threads.)
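In Python terms (standing in for the platform APIs described; the queue plays the role of the UI thread's message loop):

```python
import threading
import queue

main_thread_tasks = queue.Queue()  # stand-in for the UI thread's message loop

def run_in_background(work, on_done):
    """Run `work` on a background thread, then schedule `on_done(result)` on the main thread."""
    def worker():
        result = work()
        # Can't touch the "UI" from this thread; hand the callback back to the main thread.
        main_thread_tasks.put(lambda: on_done(result))
    threading.Thread(target=worker).start()
```

The scoping question the interview is probing: `worker` closes over `work` and `on_done`, and the inner lambda closes over `result`, so the value survives the thread hop.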

In both cases, the candidate can ask as many questions as they want, and I'll happily steer a candidate.

The point is, I'm not relying on memorized algorithms or requiring prep. What I'm doing is relying on, and judging, accumulated skill. Only a novice will have trouble with my questions, and I wouldn't use these questions when hiring interns or entry-level engineers. If you've programmed with a database before, you should understand the basic concept of a data reader. If you've programmed for a while, you should understand scope. And, if you've programmed with threads, you should understand running a lambda on another thread. (You probably should understand lambdas, too.)


> I generally give (mostly) simple questions where I explain how to solve them. Assuming you're a competent programmer in the language we're dealing with, you should be able to figure out how to solve the problem in your head in a minute or two. The trick is that I'm gauging accumulated skill, not memorized algorithms.

That's a decent way of interviewing. It depends on the skill of the interviewer though, I can easily imagine someone handling it very poorly.


Take home OR coding challenges is a false dichotomy.

> In the end, even if these coding challenges might not reflect the actual work you do, they might still be predictive of success in the job

They are snake oil. You cannot predict success at some job by asking candidates to prove their ability at something that is only tangentially related to the actual job.


General mental ability is general, that’s the point.

> The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years of Research Findings

> On the basis of meta-analytic findings, this paper presents the validity of 31 procedures for predicting job performance and the validity of paired combinations of general mental ability (GMA) and the 29 other selection procedures. Similar analyses are presented for 16 predictors of performance in job training programs. Overall, the two combinations with the highest multivariate validity and utility for predicting job performance were GMA plus an integrity test (mean validity of .78) and GMA plus a structured interview (mean validity of .76). Similar results were obtained for these two combinations in the prediction of performance in job training programs. A further advantage of these two combinations is that they can be used for both entry level hiring and selection of experienced job applicants. The practical utility implications of these summary findings are substantial. The implications of these research findings for the development of theories of job performance are discussed.


> General mental ability is general, that’s the point

I've seen PhDs melt under pressure at two different jobs. Which tells me that an exceptional IQ or work ethic doesn't necessarily equate with success in the industry. Those two people had something in common: poor communication skills, both input and output. If your brain cannot process a coworker's feedback efficiently and cannot spit out feedback efficiently, the logical / mathematical side of your brain is not going to make up for it.


Do you have any insights on what kind of "integrity test" they used?


The system works "alright" if the end goal of the system is to whittle down candidates in a 100:1 ratio. As someone who has been through both the JEE nonsense and passed FAANG interviews, I really question if this is at all necessary.

In the case of the JEE, kids spend anywhere from 2-6 of their teen years preparing for the exam. Leetcode prep, on average, takes a few weeks to a few months depending on the person. And you're going to have to repeat the prep every time you interview (maybe spending less time than you did in the beginning, but the ramp-up takes time).

I see only one "scalable" solution to this problem: Something like a bar exam that people can take at regular intervals. This could have different levels that could unlock different jobs. Companies require that you pass the exam at a certain level and if there are too many applicants, a lottery picks the lucky numbers.


> they might still be predictive of success in the job

Since that assumption is plucked from thin air, we can throw back the opposite argument: 'they might be predictive of failure in the job'.

The writers of the bad code and questionable choices the OP mentioned were likely hired on the strength of a fantastic HackerRank score, and yet failed to produce good output.


grinding leetcode is just as bad, if not worse, than a take-home test


Perhaps not everyone has to grind it so much. (I would wager they're the same people who regularly find themselves thinking about computational and space complexity, and about using different data structures and algorithms for problems, vs the ones who never see reason to!)

A take home test requires spending free time for everyone. A whiteboard interview requires variable amounts of free time. I would wager that, on average, the people inside companies using these processes require less time to "grind leetcode" than the people outside those companies do.

You gotta be able to convince the folks on the inside, not the ones on the outside. To them, it being hard to "grind" might even seem like a good thing! "It's hard to cram for this compared to other sorts of tests!"


I’ve passed almost every interview loop I’ve had without ever studying leetcode. I have a (maybe uncharitable) suspicion that people who find they need to study are spending all their time at work gluing software together without actually writing any from scratch.


Knowing the data structures and algorithms is different from actually passing these interviews. Interviews are designed in a way that you are going to have to commit a lot of stuff to memory that you would have just looked up if you were just coding up a problem.

A common example brought up here on HN is binary search. It is usually simple to figure out that a problem requires binary search, and even which variant. But the differences between the variants are really small, and you can very easily make a mistake when implementing one.
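To illustrate: the two most common variants, leftmost vs. rightmost insertion point, differ by a single comparison, which is exactly where the mistakes creep in (a sketch; these mirror the semantics of `bisect_left`/`bisect_right` in Python's standard library):

```python
def lower_bound(a, target):
    """Index of the first element >= target in sorted list `a` (leftmost insertion point)."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < target:   # the only line that differs between the variants
            lo = mid + 1
        else:
            hi = mid
    return lo

def upper_bound(a, target):
    """Index of the first element > target in sorted list `a` (rightmost insertion point)."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] <= target:  # "<=" instead of "<" -- easy to flip by accident
            lo = mid + 1
        else:
            hi = mid
    return lo
```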

I have passed enough of these interviews myself and used to think similarly (all you need to do is learn the fundamentals and the rest will follow) but my hit rate went up substantially when I realized that I had to do the same problem again and again and it almost became part of my muscle memory. Looking at a new problem, analyzing it and then coming up with a novel solution takes a lot of time, and there are several mistakes you can make on the way. The interviewing culture today is to reject "false positives" so anything apart from a perfectly coded response results in a mixed/negative response.

This whole leetcode thing is a farce and has to end. Let's be honest, the Google guys used it because they wanted "like-minded people". In short, they discriminated based on their own criteria and successfully brushed it off as a "fair and unbiased" way of selecting people.


(Speaking as someone who uses theoretical computer science fairly often) I wonder what the solution to this would be, honestly. It seems like there are a lot of jobs where we just expect that gluing things together is an acceptable, productive, necessary task to be done. Should we interview for these differently? Can we avoid a stigma being attached to this (productive) work?


I think we should automate this work away, or at least figure out what’s preventing that. Boilerplate is a problem, not a good investment of human effort.


Seems like most of the industry is made up of such roles these days.


> grinding leetcode is just as bad, if not worse, than a take-home test

For leetcode, I can study once for a week or so, and then do a bunch of interviews at different companies, where each onsite is gonna take me a single day (4-5 hours or so).

For take-home projects, it usually takes way more time than that (about a whole week of a few hours every day). And it doesn't reduce the more I study or do those projects.

So no, thank you, I would rather take a single day to do a full loop with a company, as opposed to spending a week doing the same. Back when I was heavily interviewing last year, I could easily do leetcode interviews with about 1-2 companies a week without breaking a sweat. I did a take-home project a couple of times, and I am never going to do it again. The time sink on take-home projects is insane, and it is a much weaker signal. Given a lot of people here heard those stories last year where people were even trying to cheat on leetcode interviews (by asking other people to take those for them, and they eventually got caught), I imagine the cheating on take-home project is much more common and is much more difficult to detect.


Having had quite a bit of experience on both sides of the equation, you’re absolutely right.

The very act of extrapolating how a person would perform based on a brief snapshot in time seems flawed. It is unreasonable to task the interviewer with making a good decision with barely any data.

I used to think that a well calibrated question bank or interview style could be a reliable signal. But the moment you ask something specific is the moment you don’t ask everything else. A poor performing but good candidate’s strengths could lie in the latter category and a well performing but bad candidate’s strengths could be in the former.

There needs to be a reliable and credible way to learn about a person's strengths and weaknesses based on their track record.

I’d weight credible reference checks, previous projects and credentials more than their actual interview performance.


The whole psychology around tech interviews tends not to be focused on the right things. At least in the general sense, a lot of it appears to be people copying others without understanding the motivations.

The theory of interviewing should be to ask questions that are as predictive as possible e.g. success on the interview correlates as closely as possible with success on the job. The strengths needed will vary greatly by the role, company, and so on.

But what you tend to see is that everybody has a handful of "pet" questions that they ask over and over, without applying any strict rationale for why they're predictive of success.

Personally, I think any good interview question must be both iterative and highly correlated with actual work. Questions that hinge on a single "trick", judged on that basis, make for a poor interview. Questions that focus on data structures or algorithms that aren't relatively common are also poor. Questions that lack real-world applications: also poor.

At least for me, I have a handful of questions, but one of my basic ones is to implement a certain data structure. And honestly, I care more about seeing the naive solution than the highly optimized one... because I care first and foremost that the candidate can code proficiently. The naive solution is super easy and requires little theoretical strength... but it becomes very obvious how proficient of a coder they are by seeing them write out the solution. How quickly can they write it? Do they make syntax errors? How long to grasp the intention of the question?

Some write the solution in 5 minutes, some take 40 minutes. That gap alone is probably the most strong predictor.

There is a more theoretically optimal solution, but I just follow up at the end and we'll talk through what it could be. But if somebody can quickly hammer out a naive solution with good code quality, I weight that at about 90%.
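As a concrete illustration of the naive-first idea (the LRU cache is my example, not necessarily this interviewer's question): the naive version is a few minutes of straightforward coding, while the "optimal" follow-up (hash map plus doubly linked list, or `collections.OrderedDict`, for O(1) operations) is a design conversation.

```python
class NaiveLRUCache:
    """Naive LRU cache: O(n) operations via a plain list, easy to write in minutes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # list of [key, value] pairs, most recently used last

    def get(self, key):
        for i, (k, v) in enumerate(self.items):
            if k == key:
                self.items.append(self.items.pop(i))  # mark as most recently used
                return v
        return None

    def put(self, key, value):
        for i, (k, _) in enumerate(self.items):
            if k == key:
                self.items.pop(i)  # drop the stale entry before re-inserting
                break
        self.items.append((key, value))
        if len(self.items) > self.capacity:
            self.items.pop(0)  # evict the least recently used entry
```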

But my company also needs strong coders more than particularly theoretical people. One of my best coworkers bombed a tree interview question I asked... but I could tell from all the supplemental details that he was very strong.


Seems like the consensus is that the engineering interview is broken, but no one has come up with a better method? You’d think with the war for talent these days, if there were a better way to screen candidates, companies would use it.


> Seems like the consensus is that engineering interview is broken, but no one has come up with a better method?

These sort of "consensus"-es appear for most sorts of screening processes where the majority of people are screened out. See: any form of standardized testing.


Self-plug but I think I have an answer

https://haig.io/post/most-underrated-software-skillset/


> I'd be OK with modern interviews if they reflected the job you will be doing to a reasonable degree. But knowing the state of the industry I feel that they fall somewhere between gatekeeping and a bait & switch scheme.

Yep,

Back in the 90s and early 2000s, the interview problems I got were often the actual problem I was meant to solve on the job. Which is to say that I don't think there's any barrier to coming up with questions like that. Of course, if the job is as you describe, putting people through hoops is indeed a way to avoid actually telling people what they'll do.


> Back in the 90s and early 2000s, the interview problems I got were often the actual problem I was meant to solve on the job.

Really? In the early 2000s, half the questions I got for dev jobs were completely irrelevant riddles and nonsense like "which US state would you remove?"


> I have had literally no opportunities of applying the stuff I've learnt on HackerRank

I have. Many times.


Yes, the opportunities exist, but they're exceptional.

For the most part, you will be wiring up CRUD APIs or trying to figure out why your button is misaligned by 12 pixels. Those problems also pay better, because those are business problems. The HackerRank problems have mostly been solved.

Think about that - you think Edsger Dijkstra or Tony Hoare ever made more than a Facebook SWE?


Tony Hoare works at Microsoft as a Principal Researcher, as well as having won over 5 million dollars just in awards. I'm quite sure he makes more than a Facebook SWE.

Dijkstra won a similar amount of money in prizes and worked at UT Austin which according to their 1985 docket paid 110k a year ($287,418.96 adjusted for inflation) and also worked in private industry in a research role, so once again... likely made more than a FB SWE.


Alright, alright, I stand corrected. Here's your upvote.


You're still right overall though. Most programmers don't get to use the theory they (should still) know


Picking 2 famous computer scientists was not a good comparison on my part. I'll own that.


If you are THE Walter Bright, I have no doubt you have. Respect


There can be only one.



