
> Keeping those skills sharp even when you’re not job hunting.

Can I just add "Leetcoder" as a part time job on my resume? It is amusing how in this industry, your ability to interview will not at all be improved by spending time doing your job unless it is algo heavy. It is a separate job entirely.




The reality is that industrial psychology has consistently found that fluid intelligence is the best predictor of job performance. Even more so than experience.[0]

Whiteboard interviews are used, not because they have a lot of relevance to the job, but because they're a hard-to-fake indicator of high intelligence. Somebody who shows a lot of mental fluidity with complex data structures, almost assuredly can quickly master any specific tech stack. Even more so than a less intelligent person with years of experience in that specific tech stack.

It's the same reason that the military selects infantrymen based on word synonyms and fast mental arithmetic. What does that have to do with loading a rifle or marching in formation? Not a whole lot specifically, but whenever they've lowered the intelligence standards, the results have been disastrous.[1]

There are other important qualities, for certain. Honesty, resilience, hard work, enthusiasm, being a team player, reliability, etc. But almost all of those qualities are easy to fake and hard to assess in a 30-minute conversation. People who are hired on these qualities are usually tapped from a pre-existing network, not selected from the recruiter funnel. If you've only got a half hour to choose a candidate, then pretty much your only reliable option is to test their intelligence.

[0] https://psycnet.apa.org/doiLanding?doi=10.1037%2F0022-3514.8...

[1] https://en.wikipedia.org/wiki/Project_100,000


Doing well on whiteboard interviews can be a sign of high intelligence (a true positive result). But what signal does doing poorly on them give? Did you measure the candidate's intelligence (true negative), or their preparedness, or maybe their anxiety levels (false negative)? I guess the answer is that it does not matter as long as you have a large enough true positive candidate pool. And FAANG has this.


Sure it matters. Who doesn't do well due to stress? I think you'll find women, minorities and basically a lot of people outside of the cultural norm/majority do poorly.

That's not an argument that you can eliminate test anxiety, but certainly if we can reduce it without destructively eliminating the high value candidate pool we should do so.


What you are saying is an ethical/moral consideration and it is a valid one. When I said it doesn't matter, I meant that FAANG is commercially not incentivised to account for the anxiety factor. Even if they miss out on good candidates due to their anxiety, they can still choose from plenty of good candidates who don't suffer from anxiety. Given how many people are aspiring to join FAANG, I'd assume it's a buyer's market for these companies.


I'm not sure that's true. What you describe is the case for any individual candidate, but doesn't account for how those candidates interact with one another. Specifically, it has repeatedly been shown that more diverse teams will consistently outperform more homogeneous ones.

If your company only hires above the 95th percentile but its method for finding that quality cutoff is heavily skewed towards a certain demographic, in the long run that company may well perform worse than a company that hires at the 80th percentile but without the demographic skew.


> Specifically, it has repeatedly been shown that more diverse teams will consistently outperform more homogeneous ones.

From the data I've seen made public, FAANG are usually more diverse than other companies.


I think you'll find men, people in majorities and lots of people on average will do poorly, also.


The ratio that's important for those huge companies is the "true positive"/"false positive" one.

I guess if you make the test hard enough, always change it, and add some unhealthy dose of suspicion over the candidate's behavior, you can maximize that ratio. But I'm not confident FAANG even looks at it. It's much more likely that they just postulate that a hired employee is a competent one and go on with their day.

From the candidate point of view, of course, things are very different, and the negative results are much more important.


Companies don't care about false negatives; they do care about false positives, because a bad hire is concretely lost revenue, whereas not hiring someone costs them comparatively little.


"Whiteboard interviews are used, not because they have a lot of relevance to the job, but because they're a hard-to-fake indicator of high intelligence."

By now it mostly correlates with effort spent preparing. There are dozens of websites, companies, consultants and message boards where one can train for those interviews. Whiteboarding is a skill like any other; it can be trained, faked, etc.

"But almost all of those qualities are easy to fake and hard to assess in a 30 minute conversation."

Perhaps it's not possible to assess somebody's future job performance in a 30m conversation.


Before taking algos/data structures I did terribly on whiteboards. After, I do well. From this experience, they seem like reasonable indicators of a CS grad's intelligence. I doubt I am much better a software engineer in my day-to-day job, but I do think this toolkit unlocks a second, separate level of engineering ability: coding core algorithms for novel software.

I just have a problem with the suggestion that learning this one set of skills really boosts my fluid intelligence. I'm probably just as intelligent as I was before.

But I do get the correlating signal as a cheap, low-false-positive approach to selecting good candidates.


Doesn’t that require a lot of interpretive skill on the part of the interviewer?

And for the interviewer to not arrive with the expectation that you will pass the same hazing ritual they passed to deserve that big paycheck?

It’s been a long time since I interviewed at a FAANG but I don’t think my whiteboard mojo played much of a role. It was more down to my personality fit with the interviewers, and the correctness of my algo answers. Which were still pretty softball way back then.

Everything I’ve read and heard since then makes me think it’s much more about those things now, and even less about general mental agility.


That infantryman comment is blatantly false. The literal bare-minimum entry-level job in the military is infantry. If you pass the ASVAB with the lowest score possible, you are infantry without a say in the matter. The army wants dumb people to get hit with bullets first. Then the laborer types, then the admins. That's why those "better" jobs are harder to get.


That "lowest score possible" for the Army is the 31st percentile on the AFQT, which is a proxy for an IQ score. That means about 100 million Americans are too stupid to carry a rifle. My understanding is that the real score is actually higher than the advertised minimum.


I've taken the ASVAB and it is definitely not a measure of IQ. Middle school children who at least know simple algebra would have no problem scoring fairly well.

The ASVAB iirc was designed to determine critical thinking and mechanical skills. If you aced it, you either studied or shouldn't be in the army.


If fluid intelligence was truly perceived as the most important thing, then people would just skip the coding entirely and try to directly measure fluid intelligence. There are companies more than happy to run IQ tests and the like to screen candidates. Probably a lot cheaper than the current hiring processes.

The fact that they aren't trying that, and are instead attempting to get a measure of coding ability, suggests they don't really believe in the approach.


It's a lot easier to demonstrate the business necessity of a whiteboard test than IQ tests.

Administering an IQ test would be legally risky and could have a disparate impact on protected classes.

Griggs v. Duke Power Co.[0] is the Supreme Court case on this topic.

[0] https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.


Thank you for putting this eloquently. It's very popular to just hate technical interviews which involve white boarding.


> “but because they're a hard-to-fake indicator of high intelligence”

Surely you realize large companies are full of people whose friends just told them what the questions were going to be? Memorizing 4 problems is still easier than cramming 40.


In what universe are whiteboard interviews used this way, as opposed to the interviewer expecting an exact answer regurgitated instantly?


I'm sure many interviewers do cargo-cult whiteboard interviews, but the core intention of the archetypical whiteboard interview is to spring a complex question on the candidate that they have never seen before and observe how the candidate thinks. If the interviewer is asking questions that the candidate already knows the answer to, the interviewer is literally doing it wrong. It's not a trivia challenge, and the interviewer should only ask questions that they are confident the candidate hasn't ever seen before. That's not to say that Leetcode-style preparation doesn't work, but it works by tricking the interviewer into thinking that the candidate has solved the problem very quickly, when in fact the candidate actually already knew the solution.


Once you "memorize" a core set of data structures and algorithms, you can apply them to a variety of "novel" problems. In reality there's little memorization of solutions, and much memorization of useful techniques (data structures, algorithmic approaches) that transfer across problems. The more you do, the more you'll have at your disposal for "novel" problems. The catch is that many of these techniques, despite their deceptive simplicity, are not intuitive at all; quite a few resulted in publications when they were first discovered.


5 years ago the fast/slow pointer technique for cycle detection could have been impossibly hard, now I see questions that just assume the candidate knows this. Every year the bar gets higher.
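For anyone unfamiliar, the fast/slow pointer (Floyd's "tortoise and hare") cycle check mentioned above is roughly the following; this is a minimal Python sketch, with `Node` as an assumed singly linked list type:

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    # Advance one pointer by 1 and one by 2 each step; if the list has
    # a cycle, the fast pointer eventually laps the slow one and they
    # land on the same node. If fast reaches None, the list is acyclic.
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

The non-obvious part, and arguably what made it a hard question, is realizing that the two pointers must meet inside a cycle rather than skipping past each other.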


Your comment made me feel better about myself, thanks.


I've had this discussion during interviews as a disarming technique. Candidates will sometimes express nervousness at the beginning and say stuff like "I'm terrible at interviews" or "it's been a while since I've had to do this kind of programming," and I try to respond by making it clear that I understand that the whiteboard coding has very little in common with the actual job and that I'm not judging them, so when I dutifully proceed to ask them to invert a linked list, they hopefully don't feel as judged when they approach the problem. Seems to help at least a bit. Mind you, I'd prefer NOT asking those questions, but I'm not empowered to make that change at my workplace and I'm honestly not sure what I'd replace it with if I were.
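The "invert a linked list" question referred to here is usually asked as iterative reversal; a minimal Python sketch (with a hypothetical `Node` class, since no particular language is implied):

```python
class Node:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse(head):
    # Walk the list once, re-pointing each node's `next` at the node
    # before it; `prev` ends up as the new head.
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev
```

It is a fair example of the genre: trivially small, yet easy to fumble under pressure if you've never worked through the pointer swaps before.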


> I'm not judging them

I don't understand - what is the point of you interviewing them if you aren't there to judge them? What are you doing it for then?


When a candidate bombs an interview, I don't think "wow, what a shitty engineer, how are they even employed?" I think, "wow, interviewing sucks."


Why bother interviewing if you aren’t learning anything about the person but only about how bad your interviews are?


Interviewing should normally have two results: the candidate was able to demonstrate the appropriate skills, or the candidate was not. Not being able to demonstrate the right skills in the interview does not mean that the candidate is a bad programmer or even a bad fit for the job. It just means that I was not able to verify their bona fides in that hour.

That's the difference between "that interview didn't go well" and "you are a bad programmer." One's a major blow to self confidence, and the other's a bummer. It's also a helpful mindset shift for both the interviewer and the interviewee: both of us are working together to achieve the goal of proving your capabilities. My job is to give you as many opportunities as possible to do so.


Why do you think that whenever we discuss interviews on HN, there are several highly upvoted comments complaining about how many incompetent people show up for interviews and how this basically makes coding interviews necessary?

I think it's because most interviewers consider people that failed their interview bad programmers.


Trying to be charitable, I think that is about people failing FizzBuzz not LC Hard.


There are several hills I would strongly consider dying on, but interview reform is not one of them.


So you’re not going to answer the question I guess?


You get classroom training, shadow hours, supervised hours, and certification on a particular type of interview. From that day forward any open slot on your calendar is fair game for Recruiting to schedule, up to 3 per week. You might get away with a "No" RSVP if you have a good excuse, but not in general. It's part of your job.


Three a week? What SVP did your group piss off? Or are they back to the 4 rejects per hire game?


It was like that during hyper growth and also during massive attrition in tougher times. Now it’s more like 0-1.


This is a beautiful take on life. Thanks.


I think one of the biggest problems we have today is that we use the same word for estimating or evaluating a characteristic or ability of a person, as we do for evaluating the person themselves. I am evaluating your ability to perform this complex task in this situation, and it has nothing to do with my value for you as a person.

You may not care whether I value you as a person, but at that point, that's your problem and not mine.

I'm not even exaggerating, I believe this is a major source of strife in modern societies.


Trying to understand their thought process, is the candidate coachable, can the candidate accept feedback, does the candidate at least understand the underlying ideas, etc.

The more prestigious the employer, the less forgiving they are of your shortcomings.


> is the candidate coachable, can the candidate accept feedback, does the candidate at least understand the underlying ideas

...isn't that judging? Making a judgement on whether a candidate is coachable is judging, isn't it?


The word "judging" has multiple variant meanings. It is being used to mean two different things in this subthread.

Judging whether a candidate is coachable, their thought processes relevant to the job, understands the ideas etc is what the interview is for, you're right.

But "I'm not judging" doesn't refer to that. It refers instead to the concept in the word "judgemental". Some definitions of that are "having or displaying an overly critical point of view" and "thinking, speaking, or behaving in a manner that reflects a critical and condemnatory point of view".


Apologies are due, sir. Two things happened. First, I hastily read through the question and rushed my response. Second, just a few minutes prior I finished a workout and was quite exasperated.

The use of the term "not judging" is a bit strange, without further context.


> I'm honestly not sure what I'd replace it with if I were.

Just to theorize: how about having the interviewee write a narrowly scoped toy project that's relevant to the position? (Edit: On site, as a replacement for white boarding.) A JS to-do app for a front-end engineer, for instance. Just a short thing you and the candidate can attack together in a couple hours. The candidate can write "normal" code, google things, clarify requirements, use their own architecture/patterns/style to reach the solution, etc. You could even add/change a requirement midway through the session, too. That would be realistic, and test for brittleness (and maybe be mean).

I don't interview people, so this might be hopelessly naive, or maybe untenable for some types of positions. But that interview process strikes me as a far more representative microcosm of an actual job than regurgitating LeetCode solutions or hardcore algo arcana.


I worked at a company that did this. This company always pair programmed. So the interview took place at a pairing station. We gave the candidate a choice of about 10 toy problems to solve, and then we coded for 2 hours. They could use Google, their preferred IDE, whatever they wanted.

Of all the companies I have worked at, this was by far my favorite interviewing technique. But, I think it works best when you pair with the candidate. So if pairing at your company is not done, it might not be a good fit.


> Just a short thing you and the candidate can attack together in a couple hours

Unless your hiring pipeline is absolutely perfect (you pretty much get great candidates all the time and almost no bad ones) this doesn't scale at all.


Could you explain why it scales worse than current typical interview sessions?


On one hand, it would currently double the time commitment from me (and my interviewing peers) per candidate. That would be a tough pill to swallow.

Interview processes are designed mostly for the employer, not the candidate. That's just the reality.


Because a candidate can grind leetcode and prepare for a huge number of companies.

On the other hand, something like a homework assignment is typically good for a single company, and if you fail, it's time and effort and emotional energy down the toilet.


Homework assignments are great if you:

Are extremely prestigious or have compensation clearly above market rates.

Candidates only have so much time to dedicate to homework so they will sort companies and work on homework assignments from top to bottom.

Homework assignments are easy to grade (some of that can be automated!) and are a low commitment from the employer side. But that low commitment sends a signal that the candidate isn't really valued.

Having an on-site with engineers sends a much better signal as it tells the candidate he's worthy of discussing with a real engineer on company time rather than some gradings software.

Of course, if your recruiting pipeline is very poor (low signal to noise ratio, majority of applicants can't code) then sure, weed out with a homework assignment.


Agree 100%. I've decided to just reject any company that requires a homework assignment unless it's a FAANG or nearly FAANG level of prestige and/or comp. Otherwise, it really is a waste of time when there are many, many, companies that do not require homework.


I've done tons of interviews over the past 10 years. It was surprising to me the number of times the experience on a resume didn't line up with what the interviewee showed through conversation and small whiteboard exercises.

I get that it's not very real-world representative, and that's why homework is used instead by some companies.

Maybe the answer is to give the applicant a choice? Either be prepared to convince me you know what you claim, or demonstrate it with a take-home problem.

Personally, I'd prefer a day of in-person-pair programming as part of the interview. I'd get a better feel for the team and their workflows. If it's a total horror show, it's good to know up front :)


What does your applicant pipeline look like?


I think I misled you. I meant for the toy project idea to be an on-site, swap out replacement for a typical white board session. The time commitment should remain unchanged. Maybe I should edit the original comment. Sorry!


Couple of hours vs 30-40 minutes.

I prefer to have multiple short interviews with different engineers rather than a single multi-hour long one with only one engineer. You get better feedback and it helps against bias.

Not only that but a "toy JS app" implies the candidate knows JS at all. If it's college recruiting, you are better sticking with algorithms since you know that they know it.


To be clear, the goal is to watch the candidate build something using the prerequisite skills expected for the position. I'm not prescribing 3 hour sessions, or JavaScript. Target a 45 minute session. Target whatever tools, languages, etc are expected for the role. Maybe that means being language agnostic. Whatever is appropriate.

Watching folks make something -- for any significant duration -- is perhaps a better estimator of a candidate's genuine job ability than testing how well they approximate an algorithm textbook. Well...that's the assumption, anyway.


Now I get it. Thanks


We need to band together and stop rewarding companies who pull this shit in interviews. Leetcode problems are not applicable to 99% of the actual work in most software engineering roles. There are other, gentler ways to find out if someone knows what binary trees, dynamic programming, sorting, and big O notation are.


Not being directly relevant to the role also has its benefits. It means that you can interview for a job without necessarily having experience in the company's exact problem domain or with the framework that they happen to be using this quarter.


True, but the algorithms challenges are not very representative of the actual work we do in our industry. They are a nice way to filter people who are not willing to spend a lot of time studying challenges to have job interviews though.


> They are a nice way to filter people who are not willing to spend a lot of time studying challenges to have job interviews though.

They filter for general problem solving skills. Whether you acquired those skills through a degree, self-study, a week of leetcode or 3 months of leetcode will be up to your individual background and aptitude. It's a hiring process - filtering for something is the whole point. Personally, I prefer being tested on more general stuff (like algorithm problems) over most of the alternatives.


They filter for memorization. Most leetcode problems you will not be able to solve in the allotted time without first being familiar with the problem or one very similar to it. Because most of those problems (and many CS algorithms) hinge on a certain "trick" to solving them.

The way to pass a leetcode interview is to know exactly what the interviewer is asking and then pretend like you've never seen this problem before. Then you amaze them by pulling a rabbit out of a hat. We are all a bit dumber for putting up with this charade.


Or they are just seeing how good your pattern recognition is, and how well you apply an efficient known solution. The problem is our industry is full of people who stake their whole self-worth on intelligence and then pick learned skills to judge too harshly anyone who hasn't worked through algorithms and data structures. You miss a lot of smart and great engineers who don't have that skill set. But for hiring, it's about reducing false positives. So you might as well ignore those who don't at least have that algo/DS toolkit.


I think that you're misrepresenting the process of solving these problems. Recognising that you need to use a certain data structure or that a problem falls in a larger class of similar problems is a skill that you can develop. Just because you get better at it with practice, doesn't mean that it's the same as memorization or that there is some special trick to solving each problem.


> They filter for general problem solving skills.

Not necessarily saying you're wrong, at least not when it comes to general trends, but nonetheless this definitely needs a citation. Is there really that much of a correlation between 1) general intelligence and 2) happening to find the "trick" corresponding to a typical tech interview problem?


Speak for yourself. I love Leetcode. It's a standardized process that I can study for and basically be guaranteed to get a job in a subset of companies that pay extremely highly.


One tried and true way to “band together” is to form a union, but somehow I don’t think I’ll see that in my lifetime for my industry.


What use would a union be for such a highly paid and fluid industry? Genuinely curious.



