Tech sector job interviews assess anxiety, not software skills: study (ncsu.edu)
1782 points by sizzle 23 days ago | 1141 comments



All: please don't miss that there are multiple pages of comments. The top few subthreads have become so large that they fill out this first page entirely. You have to click 'More' at the bottom to see the rest (there are over 700 at this point).

https://news.ycombinator.com/item?id=23848039&p=2

https://news.ycombinator.com/item?id=23848039&p=3

We're working on performance improvements that will hopefully allow us to go back to HN's original style of one big page per thread (not infinite scroll, don't worry). In the meantime please look for those 'More' links when the total number of comments is over 250 or so.


I conducted a couple hundred interviews for my first FAANG employer, and I was constantly amazed at the percentage of candidates with years of Microsoft or Facebook experience on their resumes who apparently did not know how to program. I always thought, "huh, guess I know why they quit after 3 years; amazing that they all lasted this long."

Then I interviewed for another company and utterly bombed. It became suddenly clear to me that I had been an idiot. Of course nearly all of those candidates were perfectly good programmers. They had had shit interviewing days, probably mostly due to nerves, but they probably would have mostly been perfectly good employees. How frickin' arrogant I had been for concluding that people who couldn't solve incredibly high stakes algorithm riddles on a whiteboard in 45 minutes with enough speed and flair were somehow not qualified to be my coworkers.


I was on the other side - I flew out to FB during a vacation earlier this year. I got there, could explain the solution but something was off and I kept losing the thread when I went to convert it to actual code.

The interviewers didn’t know I’d gotten a call from my cat sitter that morning and one of my cats had to go in for an emergency check up. (He ended up being fine, but I didn't know that at the time.)

Later that week I got the rejection from FB and then, 15 minutes later the positive response from Google. Really made me realize how much impact the day has on an interview outcome.


A pro tip for anyone reading this who ever ends up in a similar situation: just postpone the interview. I promise you that everyone will understand. Everyone gets that these things are a little goofy[1] and wants to set people up for success and not doom them to failure.

1. Yes, there is a larger and more complicated discussion to be had about making the whole system better. But that isn't gonna happen in the short term. This is advice for dealing with the situation as it exists today.


Yeah I didn’t realize how distracted I was until I was there and just blanked.

I also have real problems with traveling to interviews since my other half has issues being alone at home, hence doing the interview while we were visiting her family. Even then, me being gone was highly stressful for her.

Google was willing to let me interview with folks local to us, which really helped. I’m curious if more companies will conduct remote interviews in a post-Covid world too.


So far I've managed to avoid traveling for interviews by just telling them that I prefer doing it over the internet.


If you explain extenuating circumstances to your recruiting contact, you probably can get a chance to reinterview sooner than the usual cool off period (especially if you did well in the earlier rounds).


It's worth emphasizing, as a hiring manager: By the time you're in an interview, I want to hire you. I'm sitting there hoping to write "Strong hire" on my feedback form.


I think this is true as a hiring manager, but isn't necessarily true for an interviewer, who may be a senior engineer applying a very strong filter to the "who would I want on my team" test.


Uh, unless you saw a better candidate earlier in the day/week.


I'm senior enough that if I rate two people strong hire, I'm capable of getting more budget. This has in fact happened recently -- we couldn't decide between two great candidates, so we offered and hired both.


That depends on the size of the company, no?


Yes, and in some cases the company needs one person now, and another one a bit later. Then it can be nice to have a list of people who did well, to contact again.


Same. If someone makes it that far, I (and several others) have already decided that they stand out. So I'm secretly rooting for them the moment the interview starts.


I mean this would apply to going on a first date as well. First impression is what matters and once the impression has been ruined, all bets are off.

It is not specific to the interview process.


Rescheduling a date or an interview at the last minute doesn’t make a great first impression either.


I would LOVE to get my 2 hours back in a day because of a last-minute interview cancelation. The only person who would be truly irritated is the recruiter, whose job depends on meeting candidate quotas. Even then, they want you to be successful.

Trust me anyone reading this, if you are not feeling well, or something is distracting you, please, please, do feel free to reschedule your interview, even last minute.

Once you go through the process, and it's a no-hire, that is recorded in the system and you won't be allowed to try again for some extended period of time. At my company, it's 1 year.


This sounds like something it would be good for companies to inform applicants about, hmm

I think I too wouldn't have "dared" to postpone, in some/many cases


The obvious counter to this is: Which will give the worse impression: Canceling for a good reason or showing up and being distracted and performing poorly? Companies like the usual FAANG are nice enough to merely give you a 1 year embargo. In other companies I've worked at, you interview for a team and if you do poorly, the team will simply never interview you again (although you could interview for a different team at the company).

Also: Do you really want to date someone who doesn't care that your cat may have a life threatening problem?

Finally, even if it's not that serious (cat emergency): Do you really want to work with/date someone who is that susceptible to this bias? I know it's the norm to fall for first impressions, but for me, it's also a signal of problems I'll have with them in the future.

BTW, my cat had a serious health problem once. She needed lots of immediate care for a few months (at home, away from work). And the vets were telling us that even if she got through it for now, it would come back soon and we should really consider euthanasia - the condition would eventually kill her - not many cats would last a year with it, and chances are she would be in pain for much of it.

And this all started a few days before I started my new job.

I went straight to my new manager (before my official start date), and explained to him that the cat was my priority - I could take a leave of absence or whatever was needed, but I needed to figure this out (either euthanasia, or time off for treatment options, etc) and could not be anywhere close to 100% at work.

Fortunately he was sympathetic and said I should not worry and do whatever was necessary.

The job turned out to be crappy, and I stayed there longer than I should have entirely because of this.


For a small company it matters, but for a FAANG interview the person rescheduling your interview is completely disconnected from the people making the hiring decision so it would literally not make a difference.


I rescheduled the second date with my now wife at the last minute. Worked out well for us, but I was so tired from work that there’s no way that date would have gone well (busy period of a project at the time).


It doesn't give a good first impression, but failing miserably is somehow even worse...


I think knowledge workers will have to learn more about human psychology in the future.

I know I go thru waves of high productivity and low productivity. That productivity seems to be closely related to the ability to concentrate in my environment, sufficient sleep, how closely I've followed my daily rituals, fewer stressors in the past few weeks, etc.


Sorry to hear! I have a similar (sort of) story.

While waiting in the Uber reception area I got a call from a "tax office" telling me about suspected tax fraud transactions (it was a fraudster call, but imagine me at that moment) a minute before I was called in for an interview.

Yeah, I was really kicking ass at that interview (NOT!)


The same is true for design/UX job interviews. Well, at least for me. I got laid off a couple of months ago, and have been interviewing pretty steadily since then. I have yet to receive an offer.

I already know where my strengths and weaknesses are. I'm fine with 1 on 1s and all that, but when it comes to the technical part of the interview process like a take-home design project or a live design challenge, I usually fail.

I know I have the skills. My work has always been well-received, and I've always received good marks on reviews and feedback, but I haven't been able to translate it to the interview process.

I've tried to assess where I go wrong, and I think I panic and put too much effort into trying to figure out what they want to see or come up with something innovative rather than just doing the work and following my normal approach.

The stress of the time crunch (and sometimes not knowing anything about the product or problem) only adds to my inability to improve in this space.

--

When I've sat on the other side of the table, I know that most interview processes aren't that well-structured or set up to actually evaluate the candidate. Even when there's a good process, it's often too easy for one person to influence the results.

So sometimes I tell myself that it was a crappy interview process and it just wasn't set up for someone like me to succeed. But I still end up stewing over it and feeling depressed for a few days.


The depression hits those hardest who put their identity into what they do. Often, I don't realize how dependent I am on positive feedback until it hits. Lately, I've tried to remember in those moments that I am a diverse person and that if I looked in the mirror and saw the worst developer ever, I would be OK.


If you’ve been looking for a while I think it’s totally normal to get some deep depression regardless of how you identify. Interviewing is stressful and draining and feels incredibly arbitrary. Also, at least in the US, your employment is absolutely tied to your quality of life; unemployment benefits are a joke and you’re on your own after they run out. I’d be depressed as well.

Not saying I disagree with you though, I think identifying heavily with your job makes it even worse.


That's a good point. I probably put way more into my identity as a designer than I should or care to admit. Thanks, gives me something to think about and work on.


Hang in there dude, I think the coronavirus and remote working really messed with the traditional hiring process.

The fact that you are getting steady interviews speaks for itself: you have the design chops and experience. I think people don't know how to interview design candidates well as it is; removing the physical cues you get from an in-person interview makes it even harder to assess a designer. Also, it could just be that companies are flying by the seat of their pants, and a lot have reduced their workforce with this second wave of infections.

High probability it's not you, it's them and their broken process during covid-19. Keep your design skills sharp and keep building your portfolio and use this time to work on concepts that you are passionate about that haven't come up in a 9-5, that will help you until the right opportunity comes along. You got this!


Thanks, I appreciate the words of encouragement!


>I panic and put too much effort into trying to figure out what they want to see

My approach for take home challenges is this. Often I complete them and never even get any feedback from the firm beyond a form rejection letter. If that is going to be the case anyway I might as well make something I'm proud to have in my portfolio, even if it doesn't line up with exactly what they are asking for. That transforms the work from something that I'm doing for them to something I'm doing for me. I know that's hard when you are unemployed and have bills to pay, but if at the end of the day you have some work that at least meets with your expectations you can retain that bit of your pride.


This made me think: what if we posted take-home assignments on GitHub, along with the company name and status - accepted/rejected? This is a testimony of your work as well as of the hiring company - if the code is good and you are still rejected, that is a red flag to developers.


This. Adding to it, front-end engineering has an identity crisis now. A friend of mine is a web dev who started back in the days when Javascript was for form validation.

When JS frameworks and SPAs took over the web, he adapted. But being a UI engineer, he cares more about the craft's visual aspects than about the technicalities of JS.

Although he has a proven record of work, he is now dealing with interview questions like "what are the problems with closures", "when do you use function.apply()", and "immutability".
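
For anyone who hasn't bumped into these, here is a minimal sketch (in TypeScript) of the kind of thing those questions are usually getting at - the classic var-in-a-loop closure gotcha and a bare-bones Function.prototype.apply call. Illustrative only; these are not his actual interview questions.

  // Closure question: with `var`, all three callbacks close over the same
  // binding of `i`, so this logs 3, 3, 3 rather than 0, 1, 2.
  const callbacks: Array<() => void> = [];
  for (var i = 0; i < 3; i++) {
    callbacks.push(() => console.log(i));
  }
  callbacks.forEach(cb => cb()); // 3, 3, 3 -- declaring `let i` gives 0, 1, 2

  // apply question: spreading an argument array onto a function with
  // positional parameters (the pre-spread-operator idiom).
  function sum(a: number, b: number, c: number): number {
    return a + b + c;
  }
  console.log(sum.apply(null, [1, 2, 3] as [number, number, number])); // 6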


For a live design challenge I recommend slowing yourself down and asking for more input from your interviewers. If you appear relaxed and ask for input from others, you will seem more open to both collaboration and constructive criticism. My two cents, and good luck.


Do you have a (substantial) GitHub? Why not mass-apply for positions and refer people to that instead of accepting laborious technical tests?


Have done over 1,000 interviews for a FAANG competitor that IPO'ed in the past 10 years. Completion time (in minutes) of a coding puzzle is essentially the only objective measurement taken during these interviews, and is sadly the primary datapoint fed into the hiring manager's decision. The hiring managers usually adjust the cutoff based upon whatever quota they want and/or whether they're expecting people to leave. I've worked directly with a VP of Engineering and that's really all he wanted to hear.

The whole process is missing a feedback loop to the evaluators, resulting in the OP's situation. In my own situation, it took several years after that mess of being forced to shotgun candidates to pick apart what in the experience was actually predictive of success and what wasn't. To this day, the most successful hires from my perspective were those who just really wanted the job and wanted to contribute to the product; definitely no strong correlation with "leetcode" ability.

It's not just that the coding quizzes are poorly administered, but that the hiring managers and leadership industry-wide have been obstructive to getting evaluators feedback (for starters, what did a candidate's other offers look like? where were they 3 months after the interview?) and especially tight-lipped on sharing interview questions. That last bit is particularly obnoxious because people monetize their insider knowledge either through things like Rooftop Slushie or they take a more Gayle Laakmann angle. (Just to be clear: while many questions are "protected under NDA," many questions are also stolen from other companies -- interviewers asking them are already violating their own NDAs when asking them, or potentially when feeding the question bank of their new employer).

What you can do is go to your manager in your 1:1 and ask for interview feedback. Ask what happened to candidates 3-6 months after the interview, even if they were hired (perhaps not just to your team). Start to get feedback, and then work from there. You'll probably do better interviews, and at least curtail some of the information arbitrage culture in the C-suite.


Occam's razor is: algorithm tests are specialised IQ tests, and that version of IQ is actually a very reliable indicator of performance (compared to the other signals available).

I'm not exactly sure I buy it, but it seems like the simplest explanation.


It's not just nerves either. All interviews, but especially FAANG ones, are random. The interviewer might be racist, sexist, etc. They might have gotten divorced that day. The question they ask you is something you haven't heard of but they spent their PhD on. You can even do everything correctly, but the interviewer doesn't like the way you did it.

The leetcode stuff is just masochism. You will spend X months beating DS&A into your brain until someone can pull out a random question and you immediately know what questions to ask, what steps to take, and how to write it out on a whiteboard.


It's difficult to remove that randomness from the process. As I see it the only reliable approach for both sides, given that structure, is volume - you just do/conduct a lot of interviews and evaluate/get evaluated on the top percentile.


I agree with you for the most part, but Google goes out of its way to force the interviewer to just hand in an emotionless, gender-neutral description of the candidate's code and coding performance to a hiring committee that makes the decision.

This kinda bit me in the rear when I applied to Google a couple of times. As a nerdy American male, aged 22 to 40, with above-average people skills (for nerds) I got along great with everyone I talked to. But the hiring committee would get a sloppy page of half-functional garbage code, along with a note like "Seems like a great candidate". I found another FAANG company I fit into a lot better anyway, so it all worked out great


You've been arrogant for years and you know what? So have I. I and many, many people in this industry have been doing this for years.

What's different between you and many other people is that you admit and face your bias rather than justify it.

I'm seriously curious what Google interview board members have to say about this study. People like Gayle Laakmann have been saying things to justify the whole process for years; but now that there's actual science, what do they have to say in the face of it?


I interviewed a lot of people while I was at Google, and sometimes the people were clearly, clearly too anxious to interview. Like one fresh grad whose hands were shaking. I thought about how much pressure the kid must have been under.

You try to help them relax, but you don't have much time, and the whole thing is just so unnatural.

Other places I've since been at, and interviewed with, give you a laptop, a problem to solve, and some time. You can focus on the problem instead of the situation much better, and I think that really helps. As an interviewer, it also helps to keep some distance from the candidate. It's too easy to try to micromanage the interview and keep the candidate uneasy. But there's a natural "don't interrupt someone when working" instinct that keeps interviewers more distant when the candidate's programming.

Coderpads on video conference, I think, are pretty good when the candidate has a good space to work in. They're in comfortable territory and you can see them by their keystrokes. The interviewer feels like they're getting more accurate data about how the candidate really operates. I just wish it could do keyboard shortcuts better -- emacs users like me hate seeing Ctrl-N bring up a new window.

The issue of IDE/compiler/keyboard/keybindings is also there, but that's easier to work with.


Microsoft interview. I was just graduating college, and was super anxious walking into my first interview. The interviewer realized that writing on the whiteboard was going to be awkward for the problem he was asking me to solve, so he let me just sit down at his computer while he hung out to the side and watched me work in Notepad. I think that was the best way to do that interview, because the second I started writing code my brain flipped into code-writing mode and everything relaxed. One of the better interviews in my career.


Nowadays that's "favoritism" in an interview and it wouldn't fly. Everyone must pass through the same meatgrinder (that, surprise, fails people other than white men more often).


She will just say that the process is designed to minimize false positives at the expense of more false negatives. A bad hire is more expensive than a no-hire.

The problem is this isn't what gets you the smartest people on the planet. A small number might be interested in DS&A to that level, but most are not. They will learn enough to know how to google what they need and move on to what they are interested in. Smart people are constantly getting job offers from coworkers and bosses who move to other companies if they don't start their own.

The few smart people who do put in the effort necessary to go to the FAANG companies always leave after getting the golden sticker on their resume. They leave behind them a residue of SAT preppers who have PhDs in inverting binary trees.


From my experience it won't get you the smartest people on the planet, but it will get you people who _think_ they are the smartest people on the planet.

At Google I've worked with some very humble but also super intelligent awesome engineers, people much smarter than I. Definitely the majority of my interactions. But I've also worked with people who clearly took the "Google hires the smartest people" company line, and their own success at it, a little too personally.

I don't think it's a good message to send. Although to be fair, I haven't heard it internally in a while.

Also I don't know if it's really a gold sticker anymore. Nor do I think it's true that the smart people leave. Some do, but honestly, there are some damn brilliant people at Google who have found their brilliant corner and produce brilliance there.

Or some people here who are just brilliant at playing Big Company. Unfortunately there's more of that all the time.


> She will just say that the process is designed to minimize false positives at the expense of more false negatives. A bad hire is more expensive than a no-hire.

That's what they keep saying. However, it remains to be proven that regurgitating answers on a whiteboard in 45 minutes implies that the candidate is not a false negative, or even a true positive to begin with.


Someone who is able to immediately understand and write out the solution to 80% of hard DS&A questions is at least able to write code. The question is if this is honestly better than generating your own FizzBuzz. I don't think so. I have heard rumors that Google has studied employee performance predictions based on interview score and found no correlation. I would bet this is an open secret and no one knows what to replace it with that isn't enormously expensive.
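
For contrast, a FizzBuzz-level screen is roughly the following - a minimal sketch of the "can you write any code at all" baseline being compared against hard DS&A questions:

  // FizzBuzz: print Fizz for multiples of 3, Buzz for multiples of 5,
  // FizzBuzz for multiples of both, the number otherwise.
  function fizzBuzz(n: number): string[] {
    const out: string[] = [];
    for (let i = 1; i <= n; i++) {
      if (i % 15 === 0) out.push("FizzBuzz");
      else if (i % 3 === 0) out.push("Fizz");
      else if (i % 5 === 0) out.push("Buzz");
      else out.push(String(i));
    }
    return out;
  }
  console.log(fizzBuzz(15).join(", "));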


> no one knows what to replace it with that isn't enormously expensive

That's the crazy thing about this. FAANG has near-infinite resources, relatively speaking. Google could be A/B testing their interviewing process, experimenting with new approaches, experimenting (as the OP study did) with eye-trackers and other tech to try to gain insights on candidate behavior. They have the "engineering for the sake of engineering" and "innovation for the sake of innovation" culture that allows for moonshots and boondoggles. I don't think they're exactly known for following "if it ain't broke, don't fix it", given how many times they kill off products only to recreate them later on, especially their chat apps.

Yet they stick to the traditional whiteboarding method because-- why?


Maybe Google has studied all of those things and found whiteboarding to be the best predictor. The company seems to be doing well, so something must be working with their hiring. They dropped GPA/college requirements so they're clearly open to non-traditional backgrounds.


> The company seems to be doing well, so something must be working with their hiring.

People use this argument often, but is the technical competence of engineers - specifically their skill at passing the interview process - the explanation for their economic success? And not market penetration, near-monopoly positions, great UX, etc.?

Not to mention, as we've seen in the high-profile Facebook SDK crashes this past week, share price doesn't necessarily mean product quality.


>Not to mention, as we've seen in the high-profile Facebook SDK crashes this past week, share price doesn't necessarily mean product quality.

These events are noteworthy because of how rare they are for such a large software-based company.


It was the second time it's happened in a matter of months - the first time was in May.

It's also noteworthy because of how destabilizing this was for the immense number of apps that depend on that software.


It means you're someone willing to spend two to three months of your life sucking a little dick for a chance to work at FAANG... the companies so good they've recommended 15 vacuum cleaners after purchasing one. How many hands do they think I actually have?


The answer is sixteen, you'd know that if you'd spent a bit of time practicing your addition interview questions.


Damn it, back to grinding leetcode!


Boy, am I the only one who thinks working at FAANG is pretty great and worth the time practicing the interview?

The compensation is much better. The management is much better. The work life balance is much better.

I moved from SEA. Everything is worse there.

I read this and feel like people have unrealistic standards. FAANG isn't good enough? Working there for a year probably puts your wealth in the top 0.1% of the world. That's not good enough?


That's why people join unicorns. All the comp and potential equity upside of FAANG, less bureaucracy, greater urgency, more sexy work or technologies.


We don't care about reality, merely perceptions. "Well I humiliated the person for an hour in front of a whiteboard" is the new "Nobody gets fired for buying IBM."


I've seen people say it's like a form of IQ test and I agree with that.

Generally speaking, I think more intelligent people will have an easier time learning the patterns in algorithms.

It's not the best filter, but it's easy to see why it's an attractive option for companies.


Intelligence is multi-dimensional. Even more so in a working environment. I can't believe algorithmic intelligence is a relevant skillset for 99% of projects. Including projects inside Google.


I agree, which is why I said it's not the best filter, but I think it's good enough that companies are happy with the people they hire.


Gayle Laakmann specifically has a business empire dedicated to helping people get through the process. Regardless of the actual merits of the process, her opinion is going to be biased.


Quite a racket, being part of setting up this gauntlet and then selling books on how to get through it. Some might call that a conflict of interest, but there's no such thing as ethics for the nobility so to hell with what the peasants think.


I don't think it's relevant. Her business is designed to get you to game and pass the interview regardless of whether or not the interview is effective.

Therefore her business interests are orthogonal to her opinions on "interviews." She may still be biased, but I don't think business interests color her bias.


I'm not a googler (but I've interviewed there) and I disagree somewhat with the original premise. I think whiteboarding does filter out bad candidates but also filters out good candidates who fail the anxiety test. Google, with their massive pipeline, probably doesn't care about the false negatives enough to change; they still find enough people to hire. It's the companies with smaller pipelines that need to scoop up these good candidates.


I used to be reasonably good at algorithmic stuff in university 20 years ago. Since then I hardly ever practice. It's not that I am bad at them now, but it's just so hit and miss whether an answer will come to me in time, you might as well toss a coin. If I was doing algorithmic stuff every day I have no doubt I would be better. Usually what happens is I get out of an interview and an answer (or a better answer than I gave) pops into my head. Like I say, might as well toss a coin.


It seems you cannot see FAANG's perspective on this

There are other people who always fail those coin tosses


In general, hiring practices probably optimize for filtering out bad (or at least somewhat speculative) hires over passing on a potentially really good candidate (especially if there are any concerns to go along with the overall positives).

Of course, if the pipeline is massive (whether it's jobs, schools, etc.) this tendency gets amped up even more and anyone who doesn't come across as pretty much perfect on all dimensions--whether they are or not--is going to get dinged.


I've trained for interviewing twice at Google. I don't volunteer to give the interviews, though, because the interview I was trained to give, I would never pass.

And sadly, this is common and recognized at Google, was mentioned in interview training, and is just said to be part of the fact that the bar gets higher, or something, mumble mumble.

I find this disturbing.


Every year I've noticed the questions get harder too. A hard question in 2012 (off the top of my head, the two-pointer solution to linked list cycle detection) might be considered an easy question today.
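
For anyone who hasn't seen it, that 2012-era question boils down to something like this - a minimal sketch of the two-pointer (Floyd "tortoise and hare") approach, with a node type assumed purely for illustration:

  // Minimal assumed list node type.
  class ListNode {
    next: ListNode | null = null;
    constructor(public value: number) {}
  }

  // Fast pointer advances two steps per iteration, slow one; if there is
  // a cycle they must eventually meet on the same node.
  function hasCycle(head: ListNode | null): boolean {
    let slow = head;
    let fast = head;
    while (fast !== null && fast.next !== null) {
      slow = slow!.next;
      fast = fast.next.next;
      if (slow === fast) return true;
    }
    return false; // fast ran off the end: no cycle
  }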


From what I have read from Gayle Laakmaan, they know this throws away many good engineers, but it is acceptable to them because it doesn’t let bad ones through.


I've definitely repeated that line in the past to justify tech industry interviewing practices - but sometimes it goes beyond "setting a high bar" to something more toxic: an interviewer using the interview as a chance to show off how smart _they_ are, rather than assess the candidate; deliberately creating high-stakes "pressure cooker" environments because, you know, that's just how we work here and they should get used to it; and so on. These things then get rolled into the same justification: after all, we're an exclusive club of ultra-smart 10x ninja pirate Jedi, which of course means we should have the default assumption that others don't belong in this club.

I wonder now: for every "bad one" not let through by this sort of process, how many other "good ones" look at it and say "no thanks"? How many hear about this sort of hazing and are dissuaded from applying in the first place? How many experience it one too many times and just quit the industry altogether? How many of those "bad ones" are even objectively bad, and not just having a bad day or intimidated by a process rife with both intentional and unintentional hostility? In other words: are we actually assessing what we claim to assess with _any_ predictive accuracy, and is the collateral damage to company reputation and the pool of available candidates - a damage often hidden to the individual companies that impose this process - even worth it?


I think it goes far beyond that. I'd bet dollars to donuts that you will find both protected and unprotected groups overrepresented in the false negative pile. Do women have more test anxiety than men? If so, guess what your process is selecting for. Do minorities have more self-doubt than majorities? Same thing. And so on.

And that doesn't address the file drawer effect. I'm 54. I can code a binary tree, hash table or what have you, if I have to, but I am not as practiced as somebody fresh out of school, because that heavy lifting is done by libraries, and I'm busy contributing original mathematical algorithms that no one ever asks about because they don't have the background to understand the explanation. So I don't go on FAANG-type interviews. I know I will fail, and if I don't, my offer will be predicated on that apparently lower performance. At best I will have an utterly miserable experience. So I self-select out, mostly on age.

This is not good, for so many reasons.


> I can code a binary tree, hash table or what have you, if I have to, but I am not as practiced as somebody fresh out of school, because that heavy lifting is done by libraries, and I'm busy contributing original mathematical algorithms that no one ever asks about because they don't have the background to understand the explanation.

This made me chuckle, because it's absolutely how it is.

Some other comments here talk about how nobody creates new algorithms these days unless they are a researcher. But it's not true at all! Like you I'm creating new algorithms and other tricky techniques all the time, but I'm not fresh on the stuff I learned at school... or so I thought.

I have been really surprised to find some screening interviews recently asked me shockingly trivial questions. So simple that I stumbled over myself trying to simplify the answers, thinking "I know too much about this topic and need to keep it simple", and "can these questions really distinguish candidates?".

I did the TripleByte online test recently and found the questions much simpler than I expected, including those in languages I've never seen before. That is not TripleByte's reputation. I see people writing about how difficult they found the questions. (Admission: I didn't score all 5s, but I can't figure out why unless it's timing as I think I answered them all correctly.)

So I'm thinking, perhaps it's not so bad, people just make it sound bad because there's a wide variation in people's knowledge, abilities and expectations.

If you are thinking you might apply for something like a FAANG or other hard-reputation company, and feeling put off by the horror stories, I would say, just take a look at the old stuff a bit for a refresh, then give it a try and you might be pleasantly surprised to find their reputation is because people less skilled than yourself found it hard. Their "hard" might not be hard for you.

You'll probably still fail the interview if it's a FAANG because they are so selective, but I would bet my dollars to donuts that it would be more refreshing than miserable if you're not attached to passing.

I know your point isn't about yourself, it's about bias in hiring, including bias due to perception by the candidates, but I wanted to address that side point about feeling there's no point applying. That sounds like anxiety to me. (I have it too, I'm trying to get over it.)


>And that doesn't address the file drawer effect. I'm 54. I can code a binary tree, hash table or what have you, if I have to, but I am not as practiced as somebody fresh out of school, because that heavy lifting is done by libraries, and I'm busy contributing original mathematical algorithms that no one ever asks about because they don't have the background to understand the explanation.

Ooohhhh this is frustrating. Nothing is more frustrating than telling an interviewer about something cool you've done, and realizing they don't understand, but also don't care.


Yes, I remember one guy asking what I would do when a page was going slow. I explained how I would work backwards, checking the network tab in the browser, making sure it was the endpoint that was the bottleneck, blah blah blah, down to the database. He just wanted me to say "use explain" - which I would have gotten to if he had bothered listening a couple more minutes. Then he wanted me to do a "challenge" that was expected to take around 10 hours. No thanks, you didn't convince me I want to work for you.


But they're not aware that the filter is just anxiety.

Google is basically using anxiety to filter good candidates and eliminate false positives, which works in a sense but is still highly illogical.

Why not use a technical filter to filter for technical candidates? Anxiety seems like a pointless filter... how does that even eliminate false positives?


It would seem it might be 'letting in' a whole load of people who don't have appropriate anxiety responses. There are some situations where anxiety is perfectly normal, and perhaps even useful, but if you screen out people who have normal anxiety, what impact does that have on your company operations and culture?


I'm a pretty anxious person, so much so that I've pretty much retired instead of doing any more technical interviews. I had a discussion with a very anxious but brilliant software developer a few years back, and we came to the conclusion that as long as we don't let our anxiety get too far out of hand, it's actually kind of a superpower in the job setting. That's because that niggling anxiety you get in the back of your head as you're coding generally makes you more careful about how you're designing and testing your code. You're more careful about security, safety and correctness. Whenever I get anxious while coding I stop and ask myself if maybe it's a message that I need to listen to - maybe it's trying to tell me to tread carefully because I'm entering an area that's potentially problematic.

People without that niggling sense of anxiety always in the background, the folks who are easily going to ace the programming interview because their anxiety levels are so low, those folks, I theorize, are potentially not going to be so careful. Now you could argue that that's a plus in many situations - a startup that needs to get code out the door right away, for example. But it really depends on the domain. For critical systems in applications like healthcare, avionics or robotic control I think the anxious coder is the one you want.

Companies that completely weed out the anxious programmers, as you imply, will have a different culture with perhaps too much emphasis on risky behavior.


> But they're not aware that the filter is just anxiety.

Could this be deliberate? Working with people with anxiety problems is often a pain in the ass, as they won't report problems or ask for help for fear of seeming stupid.

Confident people are far easier to work on a team with.


This is essentially the point of a lot of hiring and performance evaluation theory.

E.g. it's acceptable (but not ideal) not to promote talented commanders. It's catastrophic to promote someone to General who isn't ready or is unsuited.


Software engineers are not commanders or generals, and it is NOT difficult to fire someone in the United States of America. Everyone in this industry has an at-will contract.


However, high rates of employees getting fired is bad for culture. People are always anxious about their jobs - if they see a few colleagues fired for nonperformance, a number of them will be driven to anxiety. (I am one of these people, and being anxious about job security ironically makes me much worse at my job)


One of the things I've liked about freelancing/contracting is that I know I'm going to get "fired" (we call it "successful completion of the project" but its the same thing, just that they would like to have me work for them again). And since it happens on a regular basis, I also am practiced in finding a new job and I know about how long it will take and what types of prospects to look for. It's really reduced job anxiety for me, although I never had too much.

Although as a counterpoint, it's also easier to get work, because part of the value I'm providing is that you can "fire" me whenever you want with no hard feelings. So nobody does "leetcode interviews", because it's not like we're getting married, like with an employee. So there's much lower risk to hiring someone, because a bad hire doesn't infect anything, because you expected it to be temporary in the first place.


Right, but I also think that FAANG is overestimating how many of these "bad apples" (a.k.a. slightly above average developers) there would be, how bad they would be, and how terribly difficult all the work they do is.


When you have your pick of the bunch, of course you're going to look at slightly above average as "bad".

I'm not sure how much engineer skill matters though. One slightly above average engineer might be fine. But if half your org is made up of slightly above average engineers, I feel like there will be a knock-on effect.


I don't know, maybe my view of slightly above average is inflated, but I think of that bunch (which I might be a part of) as being easily pulled in by good technical leadership. If only to make their life ultimately easier / less painful. You don't need 60-70% amazing engineers to do that. But I guess they can do whatever they want. They have infinite money and unfortunately a good reputation (in many cases unwarranted). They've become like the Harvard or Yale of tech. Oh, you went to Google? Great, come on in and join us.


And yet, time and time again we see that Generals promoted during peace-time rarely have the skills necessary to lead men under intense combat conditions. The best generals are identified on the battlefield.


Coming from a military background, I can confidently say that promotions at that level are more the end of a political process than a true assessment of skills...


Over the past few years in tech, I have interacted with and observed (as a fly on the wall) a wide variety of recruiters, hiring managers, etc. from companies as small as a dozen people (usually around the time a founder/CEO brings in external "support").

There exists an incredible amount of arrogance and ego among them by and large. The best are humble and curious while maintaining clear and direct communication. To be fair, they are often tasked with making important judgement calls sans complete information, though that poor approach is the fault of the leadership/culture.


At the start of my third year as a student, I biked through a record-worthy rainfall to my first interview with a small local games studio. Showed up completely soaked, but sat down relaxed and feeling reasonably confident. (I had done a handful of Unity and OpenGL projects.)

The interview started with a bit of small talk; I chatted with one of the guys about Brood War and Starcraft 2. Later, during the technical interview, they asked me about the difference between private, protected, and public, and said I was the only student they had interviewed who had answered correctly, which was wild and honestly stunned me for a moment.

They liked how, in one of my scripting examples, I had written some fun dialog, and I mentioned I did writing as a hobby, which seemed like a plus. I talked about which classes I enjoyed the most and how they were challenging/interesting.

I did not get a job offer, and learned from a friend that I had come off as "depressed and disinterested" (I don't think they realized he was going to relay that info to me...). All I can do is guess that it was a combination of showing up completely drenched and sharing how much I enjoyed those "challenging" classes, which made it seem like I would quickly get bored of scripting...

After that, I bombed an interview for a QA role because I went into it completely unprepared for just how much it would differ from a programming interview.

Point being those first two experiences got in my head and I basically had anxiety over anything job-hunt related for the next 2.5 years.

When I did get a job, I was extremely happy with the interview process (even though I felt I did poorly on it). Here's more or less how it was structured (TL;DR):

  - Office tour/chat with a programmer who had referred me
  - quick 5 minute introduction to senior programmers in charge of the technical interview  
  - 1 hour alone in a meeting room with a laptop and 4 written questions (a generous amount of time)  
  - ~15 minutes reviewing my answers with the senior programmers  
    - importantly, they gave me a chance to talk about my answers and when I got something wrong they would simply state that it had an error and see if I could spot it  
  - 15 minutes on C++/memory/performance/behavior quirks (important stuff in AAA games)  
  - ~30 minutes talking about stuff I had worked on  
    - occasionally they would mention something related I hadn't heard about and explain it while gauging how well I could follow along

Basically, based on my own interview experience/anxiety, if I had to choose an interview method, it would be very similar to what I just mentioned. Seeing a familiar face and the introduction followed by the alone time did a lot to ease my anxiety. The time where I felt most comfortable was talking about the stuff I had done, because I was very familiar with all of it/it was easy to talk about.

The process seemed less focused on where I had gaps in my knowledge and more focused if I had a decent amount of knowledge in general and if I had the ability to recognize and correct the gaps in my knowledge.

Sorry if it's a bit of an info dump, but it's something I think back to a lot.


I like that you give an example of what kind of interview you do like.


Additionally, companies doing common-pool hiring are worsening the situation even further. One bad interview day or one failure to solve a complex riddle means that you're not good enough for any team within that company until the cool-off time passes, which is typically 6 months to 1 year. The system of cool-off periods is equally brutal.


I've heard HR optimizes processes that limit Type I errors (false positive) at the expense of Type II errors (false negative).

I've often wondered if this is because, from the HR perspective, false positives are much more costly. I.e., those who would be helped by limiting the false negatives are project managers who are too downstream in the process for HR to care.
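
A rough back-of-the-envelope sketch of that asymmetry, with made-up cost figures purely for illustration (none of these numbers come from any real company):

  // Assumed costs: a bad hire (Type I, false positive) is taken to be far
  // more expensive than re-running a search after rejecting someone who
  // would actually have been good (Type II, false negative).
  const COST_BAD_HIRE = 150_000;        // hypothetical Type I cost
  const COST_MISSED_GOOD_HIRE = 20_000; // hypothetical Type II cost

  // A stricter bar trades more Type II errors for fewer Type I errors.
  function expectedCost(pFalsePositive: number, pFalseNegative: number): number {
    return pFalsePositive * COST_BAD_HIRE + pFalseNegative * COST_MISSED_GOOD_HIRE;
  }

  console.log(expectedCost(0.10, 0.20)); // lenient bar: 19000
  console.log(expectedCost(0.02, 0.50)); // strict bar: 13000, cheaper under these assumptions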


You only need to fill a position with a person that is good enough.

If you reject someone who would have been a good fit but still hire someone good eventually, that’s ok.

A bad hire is not fun, not for HR, not for the hiring manager, not for the colleagues and ultimately not for the person who got hired. Especially if they had to move for the job, possibly with family.

A bad hire has massive negative consequences for many people. Of course you want to avoid that, if that comes at the cost of not hiring someone who would have been great occasionally, that’s unfortunate but acceptable.


I don’t disagree, but “good enough” comes with its own risks. That may be fine for an organization that has a lot of inertia and is biased towards the “maintaining” side of the house. However, I think it would make things more difficult if the intent was bold, transformative change.


The bar for "good enough" can of course be quite high.

If you're hiring people at the very top or with rare skillsets, like building browser engines, building databases or people whose skill is at author/contributor level to relevant technology (for the job in question), you're probably not going to go with the usual process anyway as there is less uncertainty.


Fun fact: Google & other tech giants have poached a lot of game engine designers to optimise datacenters at truly obnoxious salaries, because their fundamental skill is making complex systems of interacting entities run as efficiently as possible. It's causing problems in the games industry because they can't compete.


Understanding this is the most important part of interview process, in my opinion. Large companies want to minimise bad hires, small companies are more open to odd CVs and novel approaches. Market yourself appropriately.


Yes this. Hiring the wrong person can be ridiculously expensive, especially if they linger on in a role for years corroding the culture.


obligatory request for citation


Not trying to be snarky but isn’t it intuitively more expensive assuming you’ll try to fire and then rehire correctly? It’s like asking “why does it cost more to go through a process twice rather than once?”


Do you really need a citation for this obvious fact?


This isn't StackExchange.


I think the problem is HR itself. Few companies should be large enough to warrant full-time HR, but even small companies have many. HR are specialists in expanding the disconnect between the people the company thinks it needs and the people who are willing to work for it.


You say that, but look through this thread. HR has a solid understanding of what a given company's large-scale objectives are when hiring. As evidenced by most replies in this thread, technicians on the ground do not.


The evidence is that it is a hard problem that is not solved. No amount of people who can't do the task demonstrates the existence of experts who can. Google and Microsoft have shown us examples of failing at simple HR tasks for years or decades, with HR budgets that dwarf a small company's entire payroll.

What virtually every attempt shows is that it is better to hire people who are roughly qualified and see what happens, managing the results instead of filtering. Having an HR strategy for hiring leads to creating biases (i.e. requirements based on your current workforce) that get you an overly similar and therefore collectively incompetent workforce.


What some of us do now is at-home exercises with no deadline, and then set up the interview as a code review of the work after it's handed back.

It gives so many advantages:

- people will have some fun trying to solve a real issue / build a little piece of software
- they will show how good they are at Stack Overflow, general methodology, cleanliness, test coverage, deployment and even git (how many we see who can't even gitignore their project files, make a build script that works, or test for real)
- horrible candidates who can't even cheat will not be able to lie their way around a fake resume; cheaters will bomb the code review quite fast
- the candidate will also discover their future teammates' general work attitude during the review, and if the reviewer is good, will be locked in faster than with a "how many billiard balls can you put in a Boeing 747" type of interview
- the candidate will even LEARN during the code review => you get something out of the interview even if you fail

Since I was interviewed that way myself - doing my little project on my wife's pregnancy bed, smiling like an idiot at solving the challenge, then falling in love with my manager showing me new tricks during the review - I'll never run any other kind of interview myself.


I recently did one of these where I was paid at half the hourly rate of the job offer. There was an initial unpaid assignment that was difficult but totally related to most engineers' day to day work. That took less than 30 minutes. Anyone who passed that got the paid assignment which took a few hours.


If I had options I wouldn't even do a 30-minute assignment. I'd just go with someone else. I'm reading about employers giving assignments a lot in this thread, but what do you do about the fact that the people you really want just don't have a reason to waste that time?


There was no interview. Not even a phone call. After I submitted the 3 hour paid assignment they hired me by giving me access to Slack and we just started working together. One of the smoothest and easiest "interviews" I've ever done and I got paid for everything except for a 30 minute challenge that helped solidify my understanding of git.

> people you really want just don't have a reason to waste that time

I'm "really wanted". In the past week I have had to say no to two clients calling me for follow up work. Where do you think I wasted my time?


This seems like a very nice process.


It was great and I enjoyed working with them. That's why even though interviewing is broken at most companies I enter the process assuming the best and with an open mind. I'm disappointed 90% of the time, but you don't want to throw out a gem just because you usually dig up worthless stones.


That's why I think take-home exercises are a much better way to evaluate candidates. Give them the exercise and enough time and you will figure out:

1) How good they are
2) How much they actually care about the job offer -- the more detail and the more passion they put in the exercise, the more into the job they are.


Give 20 candidates take home exercises, they're all passionate and put a lot of work into it, then you'll reject 19 of them.

The input required for this type of interview is heavily skewed in favor of the company. No thanks.


Why not a drastically different approach - i.e. instead of an active technical interview process on the company's side, just put up a few dozen open source projects/libraries that are used inside the company and leave them as a technical interview sandbox.

Keep brownie points on pull requests to unlock a "you gained <<passed technical interview>>" badge a la Stack Overflow, or do a one-off evaluation based on contributions when a candidate applies. Stack Overflow contributions can equally be taken into account to gain points, and arguably they have a badge system that does the job for you already.

GitHub could introduce something similar and wipe out the "anxiety interview" completely, if they wanted to.

There is really no reason to do a technical interview when you can inspect open source work. For people who don't have contributions yet, open source libraries/projects by the company should be made available.

Seems like win-win for everybody.


Unless you are looking for a "senior developer with x years of experience". I have a life, hobbies and family. I don't care to spend most of my free time fixing code in some obscure library I'm never going to use, only to stand a chance of getting a different job in the future.

How in the world did current developers EVER get ANY job? Who gives developers the chance to grow these days?


It's a "foot in" problem, I feel. Shit companies have shit interviews and give out shit workloads to MAYBE get a foot in the door.

You can follow along or be proactive, for instance start a new github account just for interviews, do code wars challenges, write a blog and just link your writeups, github and blog to applications. Along with your CV with major projects etc. of course.

Most people I know who work at big firms got calls from friends or friends of friends to interviews, so spending time socializing the local hacker circles is probably going to be worthwhile as well.

If you have a life, hobbies and family, you should still be able to schedule 1-4 hours a week for job hunting, much more if you don't have a job of course.

It's not pretty or much fun, but it's certainly doable.


Not the worst approach, and as long as it's in the spirit of cooperation might even be good.

UX people could tune or at least comment on the interface, security people could fuzz or pentest the library... however, for a lot of actual problems in actual open source libraries it requires a crapton of internalization -- again, for free. Granted, it's for open source, but it could be argued it's an even WORSE approach for the company, since they're getting free benefit from expertise and work.

Also, I'm a private person, I have splintered online identities, I'm not going to give my 'hobbyist' github account in a professional context, and I'm also certainly not going to make a github account just for interviews and applications.

I'm lucky enough to have connections that will probably keep me employed as long as I like, and the tales of interviews and free work required just to get a chance seem insane.

4 hours... okay, I can sort of see that, I might set myself a limit of 1 hour and tell them this was my approach, this was how far I got. I've heard of 30-60 hour workloads given to applicants to 'top five' companies and it's just ridiculous. As are the stories of interviewers who have no technical skills themselves and are just looking for a canned response (Google, looking at you).

I was going to say that the best approach I've seen are companies that put up interesting puzzles and security challenges on their website. The technical skills required aren't _that_ high, but certainly require an amount of ingenuity, persistence and just plain being interested in tinkering that they feel suits their company profile. The applicants can solve or try to solve these challenges, do writeups and probably get interviews just based on those.

Granted, I can also understand that someone looking at 50 companies all giving them 4-10 hour workloads to MAYBE get an interview might feel frustrated and overloaded. It's a classic chicken-and-egg problem. If I were to start looking for a job from scratch I'd probably solve a bunch of code wars challenges and security CTFs, make a blog and just link my writeups and blog to applications.


Totally agreed. I've been in the position to evaluate several take-home exercises over the years and they always produced good results.


Doesn't that tell you that the method of hiring is flawed in the first place?

If you care to weed out bad candidates, what help are 100% perfectly fine take-home exercises? The only thing that tells you is that all of your candidates were viable in the first place. Just go into an interview with THAT assumption and ask the right questions.

Why do so many companies assume that people who try to get hired for a certain job don't meet the basic requirements? Is that really the rule rather than the exception? That's just a bad-faith assumption.

Just make sure your hire fits into your culture and give them sufficient probation time to grow into your codebase.


> > produced good results

> what help are 100% perfectly fine take-home exercises? The only thing that tells you is, that all of your candidates are viable in the first place.

I think you might have interpreted that, in a different way than what was meant.


Could you provide a bit more info? How long were the exercises? Were the applicants evaluated more based on their results or possible writeups? How much did their CV weigh compared to the take-home exercises?


Aren't you worried about filtering out good candidates who have enough options that they won't spend time on your test?


Where engineering fields have a disproportionate number of individuals with ASD (née Asperger's), and where anxiety plays a large part in the syndrome, it's no surprise that many, many good engineers are terrible at interviewing.


I am focusing on the algorithmic part, which is the only thing I can influence in my company's interviews. I wish we would evaluate candidates with some pair programming, but that's never going to happen.

> incredibly high stakes algorithm riddles

Yeah there you go. This is your problem (Google/Facebook I assume?). It's more than questionable to expect candidates to solve complex algorithmic problems on a whiteboard in 45 minutes. You should ask moderate algorithmic questions that offer many solution approaches and accept solutions that are "good enough". It is not about having a candidate gain some magic insight that some MIT student had 30 years ago during his master's thesis (and reproducing it in 20 minutes, lol). It's about giving a reasonably complex problem, one that is not too easy but also not too difficult, and that allows you to focus evaluation on:

* Does the candidate write proper code that isn't far from compiling? (If they can't, they don't have the experience. It's like not being able to write without a spell/grammar checker. Small mistakes are fine. Big mistakes indicate lack of understanding.)

* Does the candidate have a structured approach to problem solving? (All the time I see people start writing code immediately and get completely lost in what the problem even was. This is a red flag to me and baffles me every time.)

* Does the candidate debug his code and walk me through it? Does he find obvious bugs while doing so, and can he convince me that his code works? (If not, and that happens often, it's another red flag.)

* Can he rank the speed of his solution against other theoretical solutions? Let's say they found an N^2 algorithm. I usually ask if there is anything faster (even if there isn't). This shows whether they have some decent fundamentals in CS and are able to think about the boundaries of an optimal solution and how far they are from it. This is something people without a CS major usually can never do, and unfortunately not too many with a CS major can either. It's kinda relevant though: if you optimize for performance and have no clue what the theoretical limitations are, then you are grasping at straws.

There is one big secret for getting A LOT out of easy questions:

I start to modify the problem statement and see if they understand how this changes their solution and their algorithm. People who don't have a good grasp of CS will fail miserably at this task, because this isn't something you can memorize.
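
To make that concrete, here is the flavor of question I mean, using the classic two-sum problem as a stand-in (a hypothetical example, not one of our actual questions): the brute force is N^2, a hash map gets you linear time, and follow-up modifications ("what if the input is already sorted?", "what if it doesn't fit in memory?") quickly show who actually understands the solution versus who memorized it.

    # Brute force: check every pair, O(n^2).
    def two_sum_brute(nums, target):
        for i in range(len(nums)):
            for j in range(i + 1, len(nums)):
                if nums[i] + nums[j] == target:
                    return (i, j)
        return None

    # One pass with a dict of previously seen values, O(n).
    def two_sum_fast(nums, target):
        seen = {}  # value -> index
        for j, x in enumerate(nums):
            if target - x in seen:
                return (seen[target - x], j)
            seen[x] = j
        return None

    print(two_sum_brute([2, 7, 11, 15], 9))  # (0, 1)
    print(two_sum_fast([2, 7, 11, 15], 9))   # (0, 1)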


You've just slightly improved upon a very broken way of interviewing engineering candidates. I've been paid by clients happy with the results to do everything from cryptography to writing a compiler / interpreter. Very computer sciencey stuff at times. The devs I work with consistently rate me highly and I often get put in mentoring positions. I couldn't tell you what N^2 is. I learned it in uni. In decades of programming it has come up in many interviews and never once on the job. Maybe in your industry it's important. But if not, please consider re-imagining your interview process. Having hired many developers over my career, including at one time for my own business, I can assure you there are far better ways of getting high-quality signal out of candidates than trying to improve upon what you experienced as a candidate.


"N^2 is bad, except when it isn't" is my take on it.

In uni I maybe had one week of doing big-O evaluations; it never came up and I never got interested. I've seen 20,000-word debates on Reddit on whether something is n, n², or log n... and to my knowledge nobody ever learned anything from those.

In the workplace I've several times run into the problem of "This is taking way too long," developing tools and methods to measure and drill down, then figuring out if it can be improved upon or should be left as-is for now.
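
That measuring step doesn't need to be fancy; a minimal sketch of the kind of thing I mean (slow_report here is just a placeholder for whatever code path is taking too long):

    import cProfile
    import pstats

    def slow_report(records):
        # placeholder for the "this is taking way too long" code path
        return sorted(records, key=lambda r: r["total"])[:10]

    records = [{"total": i % 997} for i in range(200_000)]

    cProfile.run("slow_report(records)", "prof.out")
    pstats.Stats("prof.out").sort_stats("cumulative").print_stats(10)

Look at where the time actually goes first, then decide whether it's worth improving or fine to leave as-is.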

Honestly, discussions and articles like this leave me absolutely terrified of interviews. My first and only technical interview was my future boss leaving me alone for 30 minutes in a room with a laptop: "Code something you like yourself."

That man was brilliant.


> In the workplace I've several times run into the problem of "This is taking way too long," developing tools and methods to measure and drill down, then figuring out if it can be improved upon or should be left as-is for now.

To play devil's advocate, someone with the word Senior in their title would probably be able to recognize Big O issues, skip the "this is taking too long" step, and code it efficiently the first time. This is industry dependent, as well, obviously. I've been doing low-latency C++ for 15+ years, and it's not really something you can compromise on.

The mistake, I think, is skipping otherwise good candidates because they don't immediately see these issues. We should put more emphasis on identifying these weaknesses and finding the right mentors to train them up. We should be hiring more junior and mid level engineers, with the assumption that they will learn and be up to speed in a few years. This has been my approach to interviewing, but I'm often vetoed by other team members.


The problem with Big O complexity is that it just isn't the reason most things are slow. Usually, your input is either so large that you physically can't run a polynomial solution, or your input is so small that it doesn't matter. Speed increases usually come from caching, avoiding indirection, removing slow API calls & properly making use of your hardware.
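
For example, the single most common fix isn't a cleverer algorithm at all, it's just not recomputing (or re-fetching) the same thing twice. A minimal sketch, assuming the expensive call is pure (exchange_rate is a made-up stand-in for a slow API call):

    import time
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def exchange_rate(currency):
        time.sleep(0.5)  # stand-in for a slow API call or database query
        return 1.1

    total = sum(amount * exchange_rate("EUR") for amount in range(10_000))
    # Only the first call pays the 0.5 s; the other 9,999 hit the cache.
    # Same big-O as before, very different wall-clock time.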


Pivotal does a pair programming interview: the morning with one person on a theoretical problem and the afternoon with another on a real problem.

I liked the experience and the process, in the morning it was a blast. We really clicked.

In the afternoon I had to use the same setup as the guy I was collaborating with -- Vim with his bindings and Golang. I had never used Golang before, and even though I was proficient with Vim, his bindings were quite opinionated.

I felt the guy from the afternoon had zero interest in pair programming with me. He wasn't collaborating and was looking at his cellphone the whole time.

The problem was to write a functional test for a CLI that would spawn a web service in Golang, parse some json and run some assertions.

Golang had some quirks with JSON and newline encoding that I spent my whole afternoon on. It is something he could have unblocked me on, as it's a very specific problem; instead he kept telling me to read the official Golang docs, not even to google the error.

As someone being interviewed I followed what he asked me to do and worked in his environment but it wasn't a representative of how I would have worked.

That said, I feel it could work! I just should be able to judge the person who pair programmed with me instead of just being judged :) the person in the morning would be a 10/10 and in the afternoon a 4/10.


For three years or so I taught week long corporate training classes at Apple, Microsoft, Facebook, eBay, PayPal, Cisco and Intel. Every week, week after week, real rocket-science level Android and iOS deep-dive in to the guts of the operating system classes dealing with firmware and device drivers and file systems. In the classes we didn't build apps, we built operating system components. Small to large classes, thousands of software engineers from senior and up, over the course of those three years.

I would stand up in front of 80 or more engineers, go into esoteric areas of the operating system and talk eloquently and at length about the structures and code found there, and live code on a projector, answering questions about the code and debugging the code of everyone in my class as they followed along. Day after day. 40+ hours a week. It was intense. I became an independent advisor to teams and a consultant for some of the groups on how to bring up Android on new devices, create device drivers, and so forth.

In some of those classes, I wondered how some of the people I was teaching were actually able to hold down their jobs.

Months later, after I stopped teaching, I interviewed at Intel and PayPal and Cisco. I bombed every interview. I was, by all accounts, unable to program my way out of a wet paper bag.

Everyone has off days.


It's also sort of ironic how studies repeatedly show that high performing teams are cognitively diverse teams. Even though research shows this, the hiring strategy of most technology companies is hiring new employees with the same skills and traits as current employees.

https://hbr.org/2017/03/teams-solve-problems-faster-when-the...


This is a critical point that a lot of hiring misses. We talk about building great teams, but the hiring requirements are static and never seem to consider who's currently on the team and what mindsets or backgrounds are missing.


I much preferred our style of (technical) interviews (I conducted a few dozen over the years). First there's a review of the CV, LinkedIn, and GitHub; if that looks good, a first interview - just get to know the person, very casual. If that clicks, the second interview is the technical interview - we did that by giving the candidate homework, 4-8 hours' worth of tasks close to their day-to-day work, usually something with a REST API, some basic math/arithmetic, and a front-end / forms, with plenty of room for the candidate to play.

It was received pretty positively; especially early on (2012-2015) we had a lot of candidates indicate they had never done anything with JSON or REST before. (later on they mentioned having done nothing with XML before, lol).

And we got to see some interesting solutions and creativity; one guy we hired did it all in J2EE, while we were a Spring fanclub. But the code was sound, he showed that he understood how it worked, etc.

The conversations during the technical interviews were entertaining as well.


> basic math/arithmetic

Can I ask for some example(s) of what type of questions you asked?


I call these high stakes interviews "the perfect dismount".

You can feel the pressure mount - any algorithm that doesn't run on the first or second try (with a couple of quick fixes) and you're being mentally discounted.

The same goes for silence - if you're thinking through something and take abnormally long, you're discounted.

The Perfect Dismount. A 10.0 is what's needed to land a FAANG job.


I just pulled out my laptop, and if they weren't cool with the solution I googled in 30 seconds, they could test my critical thinking in another manner. That was my critical thinking and resolve under pressure.

These are nothing but shit tests.

What are shit tests? When somebody fucks with your head to see how you will react, what you are experiencing is typically a (series of) shit test(s). Everyone has been shit tested, gets shit tested and will continue to be shit tested; We use shit tests to make value judgements about people, likewise they can be used to determine how people cope under pressure. The underlying mechanism of shit tests is to test your mettle.


I learned this lesson a few years back when I interviewed for the CTO role of an educational software company - the main part of the interview was a presentation on how to launch a new product, which I had to prepare and present to the management team.

That went very well and I was feeling pretty good.

Then the CEO had a quick chat and happened to ask me a very simple technical question and my brain would not work - I completely failed to do it even though it was trivial.

I think that's how I discovered that mental context switching is a real thing - I had spent a few hours preparing and giving a presentation and then when asked a simple technical question those parts of my brain were completely offline. I think the shock of not being able to do it had a big effect on me and I went to bits (possibly the first time I've ever done that).


> I think that's how I discovered that mental context switching is a real thing

You had to go to that level to understand that? I am glad you did realize it, but I am surprised it was that late in your career.


I'd probably heard about it but I'd never had it really hit me like that before.


How did the CEO react / what did they say? If I may ask.


The best advice I ever got about interviews is to figure out ways to prove yourself wrong about what you think about the candidate. Although it's good advice for a candidate who seems too perfect, this advice really shines when someone is bombing.


That advice seems like unnecessary mental gymnastics.

Just take an unseen technical problem, solve it for the very first time in front of a colleague, then double your time as the expected time for candidates in an interview.

When you're under time-pressure, you actually spend more time thinking about how much time you have left, rather than thinking about how to solve the problem.


It's not mental gymnastics. The point is to challenge yourself as an interviewer to reveal a diamond in the rough. Some people suck at interviews and are great at coding. I've hired a lot of them and we've made a lot of money together. I've never had to fire anyone, so I'm not getting false positives either.

> Just take an unseen technical problem, solve it for the very first time in front of a colleague, then double your time as the expected time for candidates in an interview.

That gives very poor signal in my experience due to the time pressure. Your process is now selecting for people who are good at taking tests instead of selecting for good engineers.


I did my first programmer hiring for my startup several years back. Thankfully I came across the Guerrilla Guide to Interviewing[1] article by Joel Spolsky, and that helped greatly.

I interviewed 6 candidates within a week, and hired one. Never regretted the outcome.

I set up the test code like so: an input variable, a comparison variable, and an empty function. The function takes the input variable as input, and its output is compared with the comparison variable.

The interviewee would be asked to fill in the body of the function and play with it until it gave the expected output. I set up a computer with two screens, the IDE on one and Google on the other. Two chairs, and some coffee.

First I chatted with the interviewee for about 10 minutes, trying to make them as comfortable as possible. Then, before the test, I stressed very much that I am also a programmer, that I know how awkward it is to be coding while someone watches, that that alone would make me make silly mistakes, and that I expected the same from them and it was completely OK.

Afterwards, I encouraged them to use Google plentifully, and to feel free to stay silent or to explain as they went.

So I got to assess their fluid intelligence, their ability to break the task down and progress efficiently, their English proficiency[2] (if they used Google in English), their usage of keywords, their choice among Stack Overflow responses, etc.

In the end I rejected a guy with 8 years of experience on his CV, and hired a junior. Looking back, that turned out to be an amazing decision, the best I could have made, for the work went well with the junior, and I had the (mis)fortune of working side by side with the 8-year guy several years later.

PS 1: The task was to write a recursive function to traverse a multidimensional array and find out whether the first letter of every "value" was a capital letter. PS 2: The work was to maintain and develop a mid-scale SaaS project along with me.

1: https://www.joelonsoftware.com/2006/10/25/the-guerrilla-guid... 2: Country's mother language was not English, and English proficiency was not good on average.
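
(For anyone curious, here is a minimal sketch of the PS 1 task, in Python purely for illustration, assuming the "multidimensional array" is a nested structure of dicts/lists of strings:)

    def all_values_capitalized(data):
        # Recursively walk a nested structure and check that every
        # string value starts with a capital letter.
        if isinstance(data, dict):
            return all(all_values_capitalized(v) for v in data.values())
        if isinstance(data, (list, tuple)):
            return all(all_values_capitalized(v) for v in data)
        if isinstance(data, str):
            return bool(data) and data[0].isupper()
        return True  # ignore non-string leaves

    print(all_values_capitalized({"a": ["Hello", {"b": "World"}]}))  # True
    print(all_values_capitalized({"a": ["Hello", "world"]}))         # False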


Thank you for posting this.


Couple that with 45 minutes, five times over! I was flown down for a Facebook interview from the east coast. It was my first tech interview and it was with Facebook. I was so exhausted by it, and then the next day I had an interview at Uber. I started that one already pretty tired.


I'm so glad this realization is the top comment on HN.

I think FizzBuzz can actually be a reverse test: if an experienced programmer can fail fizzbuzz in an interview, then maybe it's the interviewer who screwed up.


Senior/Principal Engineers at FAANGs may not even code all that much in their day to day. The difficult decision that gets faced when interviewing those folks is whether they could walk into a fresh environment, and stack, and immediately earn the trust of their team.

In practice whiteboard interviews assess this pretty well. A new senior engineer must immediately be shipping optimal, tested, and well designed code. Otherwise you'll have Junior Engineers coaching Senior Engineers and the Junior Engineers will just get frustrated and leave. Worst case, you'll have an "architect" who never learns how to actually deliver independently in the environment.


Any significant system will take months to learn. Your statement is probably only true for some trivial system.

A senior engineer comes with years of experience designing and coding systems that took years to develop and run at scale. Given time to learn, they should be just as good at solving day-to-day bugs, but if that's what you assign them, you are wasting the time and money you spent on that experience. If you don't value experience, you are wasting your time and money hiring for it.

It's pretty normal to have more junior engineering staff help new senior staff learn the ropes. It's also common for those less experienced staff to learn from the more experienced people even in that situation. I'm not saying senior hires should be respected from the first second because they are over 30 or have a long resume.


Learning the ropes of a new company is fair and being slower at first is to be expected, but having to explain to a senior hire that code under review needs to be tested isn't as fair. Similarly, a new senior hire should be able to hop into a code review and catch problems/suggest improvements early in the process.

These two behaviors aren't as dependent on knowledge of the system as they are on core coding competencies. The whiteboard interview reasonably works to assess these, but it requires a lot of prep and is prone to false negatives.


I'm pretty angry with the person that you were. I'm frustrated with the people who look up a question, see the exact solution and walk through and then expect the candidate seeing the problem statement for the first time to walk through it in the same way.

There are questions being asked that full academic papers have been written on. What's even worse is typically I get interviewers that are not experienced+prepared enough to give hints or work with you.

I am happy that you've had an epiphany and can see where you were wrong. If you are ever in a position to build a quality team, interview people for the skills to work with and teach others, a drive to optimize, experience, and a willingness to learn. You'll have a fairly good group of candidates to pick from among those who don't do well in these coding interrogations.


I bomb on technical interviews the same way because my mind just freezes and can't get through even simple problems.

I had the same question you mention. "This is a problem a paper was written on, are you expecting me to derive a whole paper from first principles in half an hour, or are you expecting me to have seen it before? If it's the latter, why not let me Google, as I normally would?"

I feel that saying "I would solve this problem with <algorithm>" should be good enough, and if I'm asked to implement it, I should be able to copy paste the Wikipedia pseudocode and start converting.


> I had the same question you mention. "This is a problem a paper was written on, are you expecting me to derive a whole paper from first principles in half an hour, or are you expecting me to have seen it before? If it's the latter, why not let me Google, as I normally would?"

> I feel that saying "I would solve this problem with <algorithm>" should be good enough, and if I'm asked to implement it, I should be able to copy paste the Wikipedia pseudocode and start converting.

Long ago, milo.com posted a challenge to codeeval.com (both websites now dead) which turned out to be an example of the "assignment problem". That is, given a set X, another set Y, and a payoff function f(x,y), assign each element y in Y to exactly one element x in X such that the total payoff from all assignments is maximized.

I tried to solve it, realized I didn't know how, and was eventually able to look up the solution in a couple of papers. There is an algorithm called "the Hungarian algorithm" which will do it.

So I wrote up the Hungarian algorithm and sent it in. This was good enough to get a phone interview with the company.

I tried to prepare for the screen by going over the Hungarian algorithm to the point that I could myself give the proof that it correctly solved the problem. This was fun. But in the phone interview, it was barely mentioned at all. According to the interviewer, I passed that step by just mentioning the name "Hungarian algorithm".
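
(If anyone wants to see what "just mention the Hungarian algorithm" looks like in practice today, here's a minimal sketch with SciPy and a made-up payoff matrix; recognizing the problem is the hard part, the code is the easy part:)

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # payoff[i][j] = f(x_i, y_j): made-up payoffs for assigning y_j to x_i
    payoff = np.array([
        [4, 1, 3],
        [2, 0, 5],
        [3, 2, 2],
    ])

    rows, cols = linear_sum_assignment(payoff, maximize=True)
    print(list(zip(rows.tolist(), cols.tolist())))  # [(0, 0), (1, 2), (2, 1)]
    print(payoff[rows, cols].sum())                 # total payoff: 4 + 5 + 2 = 11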


That's better than asking you to come up with the complete solution, but it's still a bit of a trivia question. On one hand, it's good to be able to recognize algorithms so you can look up their solutions instead of wasting your time on trying to reinvent them, on the other hand, how often are problems this algorithmic?

Maybe I've had a particularly depressing career, but 99% of it consisted of various ways of interacting with a database.


> That's better than asking you to come up with the complete solution, but it's still a bit of a trivia question.

How? You have unlimited time to find the answer; it happens before you make any contact with the company at all.


Oh, I misunderstood the situation, sorry. I thought it was on an interview.


Yeah, in certain circles the hard part is figuring out that the problem is the assignment problem. Then you're supposed to just know that the most commonly used algorithm for it is the Hungarian algorithm, maybe its complexity and that's it, because you can find code for it in most languages and so you just need to call the function with appropriate inputs.


I had that exact same experience with milo.com. Small world.


Well, I found their challenge because they posted it to HN.

Everyone inside Milo seemed to think that their hiring process was hugely better than what happened elsewhere, including by producing better hires. But the approach -- set a difficult challenge instead of an easy challenge, such that, if someone passes the challenge, you will almost certainly hire them -- still seems to be vanishingly rare.


Oh, was that their logic? I don’t even recall if they actually interviewed me, despite passing their challenge. As I recall, I was rejected for not enough experience.


I find that hard to believe, since I was hired despite experience totalling less than a year of internship.

However, there might have been external reasons. I was hired after they were acquired by eBay, and apparently they had preexisting approval for a certain number of hires that year that they didn't end up meeting. If you were earlier or later (pretty likely!), maybe they might not have been so cavalier.


That could be. I could also be misremembering. In any case, I definitely wasn’t offered a job, despite solving the problem in exactly the same way you did.

I’m not complaining. Who knows? Getting rejected might have been for the best.


The funny thing about this is:

They're not looking for someone that is eager enough to show they're willing to work through it even though they have no idea how to proceed.

They are looking for someone that got lucky enough to be ready to answer that particular problem, but won't stand up to them that the ask was a bit too much for a reasonable interview that is intended to see if you're a fit for the company and skilled enough to get the work done.

They aren't looking for someone who will dress them down and tell them it's incredibly insulting to ask for something that took an academic a long time to come up with. [Although that's a rough, fairly unfriendly approach, it shows a lack of desperation, confidence in one's knowledge, and a good understanding of the difficulty of the task at hand (heck, that's a good signal that they won't underestimate points during planning).] (There's a singly linked list question that Amazon used to ask that qualifies for this.)


Exactly agreed, and because my inclination is to say "I've never used this in previous jobs and will never use it here", I don't think it's useful for me to be interviewing for jobs.

A friend of mine, when asked to implement an AVL tree, actually asked an interviewer when the last time he had to implement an AVL tree on the job was. He wasn't hired.


They must be willing to eat their own dog food or else the question won't make any sense for the interview.

The department should run the interviews through their own employees first. If some questions cause an "interview anti-loop" (a set of other employees S who would not hire a given employee E), it's time to revise those questions.

Revise and rehearse these practice interviews within the department, and do it blind, until all of S are willing to "hire" each other for their respective roles.


I've never had this prep for interviewing people.

Most of the time you see the resume a few days before. HR may say a few things: "don't say this, don't say that."

Most companies take the rejection as a point of pride. "We've rejected 100s of candidates."


Asking you to visualise & manipulate a novel data structure you don't have practice with is a feature, not a bug. It's a specialised IQ test.


And what is asking you to reinvent on the spot an algorithm that took scientists years to come up with?


Very stupid, and shows the interviewers don't understand the point of the process either.

I don't like the structure, I think it's very flawed, but asking you to do something you do regularly defeats the purpose.


I don't think it's that complex. I think they're just cargo-culting. Technical interviews were designed to test certain things (brainpower, grasp of fundamental data structures) and not other things (workflow, ability to write out large volumes of trivial code). I think companies just copy that and don't realise what they're really testing for.


Don't forget, not only are they looking for someone who already just knows the answer, they frequently aren't hesitant to penalize someone who says "Fair warning: I've already seen this problem."


You're misunderstanding what they're trying to test. They're testing your raw ability to manipulate algorithmic structure in your head. It's important that you have to visualise the data structures and manipulate them without being able to look it up. It's a brainpower test, not a workflow test.


Which is fine if your job is really going to be centred around developing algorithmic software in a unique or high pressure way. The problem is most software engineering day jobs are nothing like this, so the test is not a good filter.


I always say that you aren't qualified to work on the hardest problems you can solve, so to check if someone can do the job you need to ask them to do something harder than you expect them to encounter. People who work at the edge of their ability make tons of mistakes and are very slow, and it isn't very fun for them either.

It is the same as with a warehouse worker: if you expect them to regularly lift 20kg, you don't just test whether your hires can lift 20kg at their best, because then you will get a lot of people who simply can't do the job for 8 hours a day. Instead you'd make sure they can lift 40kg or so, to ensure that lifting 20kg regularly won't be a problem.


Yeah maybe, but these hardcore algorithmic "leetCode" things are like asking the warehouse worker to design a logistics system instead of seeing if they can carry 20kg boxes. The vast, vast majority of software jobs out there involve more soft skills (requirements, planning, scrum and other processes) and "plumbing" libraries/frameworks/data layers together than implementing algorithms from scratch. I'm not saying you'll never have to do it, but it's going to typically be a small proportion of your day job, so why test this stuff so much at interview stage?


The point is that designing an iteration that does something should be a trivial operation requiring no thought at all. If you can do leetcode problems then we can be sure that you can do loops without effort; otherwise it might be that you struggle with them, and as a programmer that will really hurt your productivity. Testing whether you can do loops doesn't test whether you can do loops easily, so we test something requiring a bit more thought.

Edit: And I think that you'll find that most leetcode problems are pretty easy if you are an expert at doing loops and recursions.

> I'm not saying you'll never have to do it, but it's going to typically be a small proportion of your day job, so why test this stuff so much at interview stage?

It isn't a small portion of your time spent at Google, at least. Almost no engineers at Google ever talk to non-technical people; product managers or engineering managers do that. Most engineers work to reduce CPU usage of backend servers, or build scalable features for backend servers, or similar stuff. Even high-level decisions like deciding what service to use require more engineering hours spent refactoring the thing than engineering hours spent deciding what to use, so there is much greater need for people who can code than for people who can make decisions.

And if you argue that Google is bad at making new good products, that isn't the fault of their software engineers since they don't decide what to build. Instead it is the fault of VP's, and those don't do white boarding interviews.


I mean, I can see why they would want to test both. Horsepower is a good indicator.


So much agreement here. Who hasn't been rejected by some 25 year old who's been with the company all of 6 months and who asks silly questions straight from LeetCode? I have literally been asked questions that were PhD theses 30 years ago.

How is any of that supposed to be part of a good interview process? My only solace here is that those things are red flags on the company side, so I might have dodged some bullets.


Wouldn't it be a more balanced approach if the technical problem were presented on the spot to __all__ interview participants, not just the candidate?

It's a technical job; it must be a collaborative effort in finding solutions! Aren't we looking for a team member, not a lone ninja?

Sure, the candidate has to be more vested in advancing the process, but the fact that there's no known-ahead answer would potentially surface not only technical skills, but also personal ones, and ... at both ends of the table.

Not to mention, the "house-experts" could find such sessions stimulating, to say the least... Imagine, you get back to your desk after interviewing a candidate, and your team mates ask you "so what was the tech problem this time?", "could you crack it?", "did he/she crack it?".


I like it. However, I have had an interviewer at a "top Chicago tech place" (roll your eyes at that one) who basically did an in-person imitation of leetcode: say the problem, look at you, don't give help when asked, stay emotionless.


Very intriguing idea. I’m thinking about how logistically this could actually work. It would build a sense of camaraderie as well.


My experience has been quite opposite.

I worked for a FANG and conducted over 100 interviews. Most of that time I thought I was asking really hard questions and wasn't sure I would have been able to answer them if I hadn't seen the question before.

I later interviewed at a different FANG, surprised myself and answered harder questions than I typically gave and answered them better than most of the people we hired.

Not sure what to make of that.


Looks like being an interviewer taught you a thing or two about the test content.


That sounds like spending a lot of time exposed to, and walking through, certain types of problems led to you getting good at solving those types of problems.


People bomb, including good people. That is sadly part of the system.

Once I had this candidate. Damn, I know my questions are deceptively simple on purpose, but he literally couldn't do anything. Not even a trivial brute force. Not even any related simple knowledge questions. Couldn't tell an average from a median. The only good thing I could write in the feedback was "seems to know some basic syntax".

It was quite a learning moment to read that everyone else was praising how this was the most brilliant candidate they've seen in years. Well, mine was the interview just before lunch, in a schedule that was atypically later than usual. I guess starting an interview after most people start their lunch break was asking for it. He also got hired - apparently the committee agreed that "can't think when hungry" is not really that important a flaw.


One trend I've noticed that is markedly different from when I started programming in 1998 is how dependent we've become on programming via Google/StackExchange searches.

I'm not sure I know how to write anything from scratch anymore, because I just search/read/alter/test. The breadth of what I work on is 100x wider than it used to be, and so I've become absolutely dependent on quickly reading docs, copying code found online, and then deep diving into testing and rolling it out the door.

I deliver a TON more, but I'm convinced I'd bomb literally every interview I go to today. Today, for example, I updated our firewall rules, updated some Java business logic, tweaked some javascript and PHP on the site, reversed an odd macro email-phishing virus we got, configured a new RDS server VM for clients, and updated some network GPOs. 20 years ago, I would never have worked on ALL of those things in the same week - let alone the same day.

Maybe we should interview with small take-home tasks to be submitted with some write-ups to test how people research, reason, and write-up problems, rather than writing code on the spot on a whiteboard. ...but maybe that's just me.


I've been doing this for a LOOONG time. Early on, the core skill for a programmer was intensive in-depth knowledge of the language being used. We all talked a good game about what we called "reusable software," but yeah. Talk.

Now things are TOTALLY different. Doing a good job requires extensive, encyclopedic, knowledge of what's out there, how well it works, and how to integrate it. npm, nuget, maven, anaconda, we all know the drill. In a day's work it's much less necessary to implement an algorithm from memory.

What should programmers know from memory? Simple stuff like fizzbuzz. The difference between TCP and UDP. What DNS is for. If SQL data's involved the difference between JOIN and LEFT JOIN.
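
(And by fizzbuzz I really do mean this level of simple; a minimal sketch in Python, for the sake of argument:)

    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)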

And, if they're experienced, they must be able to describe intelligently how their own stuff fit into the bigger system they worked on. Because there's always a bigger system.

If they're being hired to be a tech rep to an *sshole customer, possibly their performance under pressure matters a lot. Otherwise, not.

This business of high stakes quiz questions serves just one purpose: feeding the ego of the interviewer. "We have high standards!"

Yeah, so high that we wouldn't hire Jesus of Nazareth to be a storyteller.


That is exactly my current employer. Our main office is in a small town, so there aren’t that many candidates to begin with (a job stays open for 6–9 months and we get maybe 15 applicants during that time)

Our software is utter trash because the business is very clannish, fragmented, and the people in charge either never ever wrote code or did it 15 years ago.

And yet we fail 95% of the technical candidates we interview because they don't meet our "high" standards. We won't even consider hiring remotely within the country (it would have to be within the country, since we're making government software), even though an overwhelming majority of the company and tech team has been working from home 80–100% since March... No, we insist on rejecting the handful of candidates who apply in our small town. Insane.


>Doing a good job requires extensive, encyclopedic, knowledge of what's out there

I wouldn't say that. What I feel like I've accumulated in knowledge over time is I can look at a new problem and without having the solution/encyclopedic knowledge, I can know in my gut that (a) there is a solution, and (b) the solution I'm looking at is not the solution.

Merely being able to look up things on the internet without these two things means you will either give up too early or settle on the wrong solution.

This is the most succinct way I can describe what I think >a decade of experience is worth compared to being fresh out of college.


I describe this as known-unknown vs unknown-unknown (credit to Donald Rumsfeld)

Unknown unknowns: The things that you don't know are possible and thus don't even know how to start solving. You will accidentally reinvent the wheel, most likely very badly. Or give up right away.

Known unknowns: An experienced developer most likely unconsciously knows (has read on HN/Reddit/Book/Blog some time in the last 20 years) that a thing they encounter has been solved already, at least partially.

This allows them to start searching for the answer with a few key words, because they know it's possible but don't know how it can be done.

The same comes with having a network of people who know different things: I know the bare minimum about professional pentesting, but I know a few people who are absolute pros in the field.


> Known unknowns: An experienced developer most likely unconsciously knows (has read on HN/Reddit/Book/Blog some time in the last 20 years) that a thing they encounter has been solved already, at least partially.

I feel like my brain is full of "I read a headline about this once" tidbits, in part because of HN. It's like I don't know the answer, but I know the question has been solved before so I know there is an answer.


Yup, a lot of developers grow into generalists that know a lot of things about a wide range of subjects; they suck in a technical / algorithm interview, but they can solve most problems. They may not know all of NPM's options (for example), but they know how to quickly find an answer if they are confronted with a problem.


>>This business of high stakes quiz questions serves just one purpose: feeding the ego of the interviewer. "We have high standards!"

It's scary how true this is. I know one manager who routinely rejects perfectly good candidates for making negligible mistakes in the interview. All on the grounds of "We have high standards!"


Haha. I worked at a place that did not start out as a software company but now had a software team (they built industrial machinery). The director of the electrical and software department was completely clueless of course. He insisted that since we are an engineering company, an engineering degree was required to work here. And that meant any candidate for a software engineering position was automatically rejected if they only had a computer science degree since it didn't have the word "engineering" in the title.


It sounds like the process works! … at preserving the candidate from your organization. :)


I honestly don't agree with this. The worst programs are written by people who know how to plug a million and one things together, but can't drill down and analyse the algorithmic implications of what they're doing. Electron runs like shit and inhales RAM because it was programmed by people who don't have a solid understanding of fundamentals. They understand a huge number of horizontal abstractions but they have no concept of how it looks vertically.

Knowing how to maximally exploit a CPU is way more important than knowing eight different Javascript frameworks if good software is your objective. And frankly, learning Node is way easier than figuring out how to structure basic, bare-bones Javascript so that it leverages your L1 cache.

And therein lies the problem. How many interviewers dock marks for iterating over columns, instead of rows? Because that matters, a huge amount. How many interviewers would give credit for "how can you speed this up?" if the interviewee said, "write it in C, and simplify the datastructures you want me to use so we maximise sequential lookups over basic arrays, to maximise cache usage." They'll look at you like you have three heads.

"Don't you know Big N complexity is the only thing that really matters if you're looking for speed?" - then you get Electron.


> Electron runs like shit and inhales RAM is because it was programmed by people who don't have solid understanding of fundamentals.

I feel like this is a huge oversimplification. I’d argue that for sufficiently large projects (say, Chromium) every single programmer who works on it could have a strong understanding of performance fundamentals but the product itself could still have performance problems because there are necessary levels of abstraction involved and no one person has the entire codebase in their head. Performance problems could come from emergent behavior that could not have been predicted by the original engineers.

But, as for implementing an app on top of Electron. Say I need to write a cross-platform app with a consistent style and fluid animations. I know how to write a NEON implementation for ChaCha20 for fun, or write a zero-copy binary property list parser that's faster than anything I've found. I have a decent understanding of performance. That doesn't help me write this cross-platform app. I don't have the time to write a novel native UI toolkit that allows me to share code while maintaining platform consistency and native performance. I don't have time to fix the memory consumption of Chromium. I'm going to write my Electron app in JavaScript, carefully, and hope the performance will be acceptable (but not great). I don't see how that can reflect poorly on any individual programmer.


I disagree. As evidence: games are way more complex, have enormous teams, and manage to simulate an entire game world while rendering hundreds of thousands of polygons 60 times every second. Twitter crashes my browser.


But I don't think that it has _anything_ to do with the individual programmers inherently understanding performance better.

There is likely a huge organizational focus on performance and more people expressly dedicated to the task because it is a higher priority for Epic. It is directly correlated to Epic's ability to deliver a successful product to game companies, whereas for browsers it is slightly more orthogonal to their success.

LLVM has a bunch of contributors who likely understand the intricacies of machine code better than either of us, and:

> Each LLVM release is a few percent slower than the last.

> The larger problem is that LLVM simply does not track compile-time regressions. While LNT tracks run-time performance over time, the same is not being done for compile-time or memory usage. The end result is that patches introduce unintentional compile-time regressions that go unnoticed, and can no longer be easily identified by the time the next release rolls out.

https://nikic.github.io/2020/05/10/Make-LLVM-fast-again.html


Electron is a running joke because of how horrible the performance is. Again, I don't buy it. I think it's much more a matter of "they can't" than "they don't want to."

I think if those LLVM contributors understood compilers better, they wouldn't be introducing those issues. Look at Jon Blow's achievements on Jai. LLVM is the bottleneck for his entire compiler at the moment, and he's planning to replace it with what... Three people or something? But I also think there are structural issues there. I don't think a top-down project from a large company gets to use that excuse.


That's because GPUs are insane and purpose built. For an honest benchmark, you'd only look at instructions executed on the CPU which won't be any rendering, but stuff like world ops, AI, game flow. Some of those are not really that taxing.

As an example, Stellaris had performance issues that were CPU bound for a while and it's gotten better. It's not a visually demanding game after all.


So run Electron like a game. Leverage the GPU, if that really is the issue. Or run it like Qt, which manages to avoid that problem.

I just think that's a weak excuse, especially given most games have tremendous complexity running through the CPU too.

(As a side note I don't think paradox games are a good example. They're famously poorly optimised.)


Think about the requirements for market success of the different pieces of software. Games must run fast, and any performance you save can go back into better graphics. Performance is crucial. Electron is hugely successful despite not being very performant.

Developers love thinking about performance. It's fun and asks us to use so many of our skills. However, for many products other quality aspects are more important, like maintainability, extensibility and expediency of development.

That performance often falls by the wayside in terms of business priorities ends up being reflected in the typical developer skill set.


I don't know if I agree. It's plausible, but also these businesses are monopolies, so even if they churn out shit it doesn't undermine their market position.

But I think Electron in particular is a symptom of the fact that GUI frameworks are in general horrible. There was a market gap for usability because using everything else was obtuse, and now we're paying for it.


Which businesses are monopolies? Slack surely isn't. If there's a market for faster chat clients because that's what people want then looks like a business opportunity?

FWIW all of Slack's competitors are also Electron (rocket.chat et al.), so it seems like you possibly know something that other engineers don't and might have a competitive advantage!


Google is a monopoly.

If someone released a GPU-accelerated version of Electron that had the performance of the GUI in basically every video game, it would destroy Electron. Now, would it be popular? I'm not sure, but let's say the answer is no. Ok, now imagine Google released that.


The likelihood that people who created Electron don't understand caches or cache locality is close to zero. That's an oversimplification. Electron is a large project that does many things… there are many reasons why it could have been slow. Today there are many Electron based apps that are quite decent.

> Knowing how to maximally exploit a CPU is way more important than knowing eight different Javascript frameworks if good software is your objective. And frankly, learning Node is way easier than figuring out how to structure basic, bare-bones Javascript so that it leverages your L1 cache.

Learning Node isn't easier than learning how to structure basic, bare-bones code that leverages L1 cache. In fact, it's quite the opposite. You can learn cache locality as a concept in much less time than you would learn Node (or any programming ecosystem for that matter).


I don't buy it. Unreal Engine is a larger project that does many more things, and the speed at which it does them compared to Electron is not even in the same universe.


How can you compare unreal engine with electron ?! That makes no sense at all.


The fact that people think they're somehow fundamentally incomparable is the problem.


Seriously. This whole part of the comment tree has been fabulously enlightening. Great insight into the psychology of how we end up with chat clients that do less with 20x the resources than entire operating system + office suites of yesteryear.


I feel you were trying to be snarky, and I didn't really follow what you were trying to imply! Regardless, I would like to hear your opinion if you don't mind explaining it better.


I’d figured it was mostly cheap companies driving the broadly terrible performance of modern software for often fairly small benefits to speed and dev cost, but it turns out there’s a much stronger contingent of software developers with not just a tolerance for business-driven trade-offs, but a strongly enabling attitude toward the whole thing, apparently not seeing what they’re doing the same way I do at all. Adding this information, observed software quality makes a lot more sense to me now.


I'm not sure it's the cheap companies or developers driving this. Even companies that pay really well produce pretty slow software (Gmail, etc.). It's mostly the ecosystem in which software is developed.

As someone else said, businesses hire software engineers to solve business problems. Speed often isn't a priority. Fast software will make the experience perceivably better, but it's hard to get on a sales call and tell a potential customer that your application loads much faster when they are looking for feature X. And most companies buying software are looking at a checklist of features first and then maybe the experience.

You also need to put in the time to make software fast and then keep it fast. It's not something you can tack on at the end—if you want something to be fast, that has to be carefully considered from the very beginning. By the time people notice something is too slow, you're burdened by half baked architecture and a monstrosity of a codebase. At that point, it's near impossible to really make anything fast.


> most companies buying software are looking at a checklist of features first and then maybe the experience.

I think the hugely successful SalesForce proves this point. Just one standout among many such products.


I would argue they won because of marketing, not because they had unique functionality. Features are not that hard to build, most basic tooling can be cloned with a relatively small budget.


Most enterprise businesses win because of better sales and distribution, but your product ultimately has to support your sales. Salesforce had both (and other advantages). How does sales work in medium to large B2B? You'll get a RFP with a list of questions, you'll often get a list of features that company X wants. Naturally, as a company, you'd focus on building those features instead of optimising speed.

If the market values fast software, you'll see a lot more fast software. There _is_ a lot of unnecessarily slow software these days, so it _is_ starting to happen. Think Superhuman, Linear.app, etc. — an entire category of software based primarily on the idea of being fast, aimed at the power user. If these apps succeed, you'll see a lot more of them. If the market doesn't value fast software, most commercial software will be slow.


I don't really agree with the premise. Sales by definition exploits the fact that the consumer can't differentiate between products on their own. That leaves a lot of leeway for broken software and misleading marketing.

I don't think companies choose to make bad software as a tradeoff. Salesforce had enough budget to build out those features and not be terrible. I think in general, when software is bad it's because the company wasn't able to make it good. And I think the distinction between those categories is important.


SalesForce ticks enough checkboxes. Competing products may tick the boxes too, but without great marketing that won’t matter, the company that can sell their product will win.

My point was that SalesForce has fairly poor user experience and performance, but does offer the features. It has the additional advantage (for SalesForce) of quick lock-in once the customer commits to it. SalesForce has improved the user experience over time because they have the customer base and revenue to allow that.


Software developers are hired by business to solve business problems.

Sometimes optimizing for speed is the problem to solve, sometimes it isn't.

I have worked accelerating algorithms in VHDL with a PCI-e interface, embedded linux without a MMU because a MMU uses too many logical gates, digital signal processing systems (FFT, goertzel, sigma delta filters) that had to process a lot of data under uS and etc.

Nowadays I work more in the devops space, full stack dev and whatnot. I have worked with a lot of technologies, different constraints, different teams, different companies (12 in total) in different industries.

Trying to paint all businesses and developers as bad or cheap because business requirements do not align with your view isn't really fair.


Twitter failing to load on a stable connection and crashing the browser if you scroll too far is not the result of business tradeoffs. It's a failure to achieve their own goals.


Are you sure there's not something wrong with your environment? Twitter has lots of users (me included) and is not a problem I have seen in any platform I access twitter from.


I also hit errors following links to twitter more often than not, reliably, on all sorts of environments. It feels like they’re doing some kind of cache-related routing based on headers. Often “refresh” doesn’t fix it, but hitting enter in the address bar does. Seen on iOS, macOS, Windows, and Linux, mostly in very normal browsers (I’d expect mobile Safari in particular is at least on their top-5 most-seen browsers). I’d assumed they were trying to annoy me enough that I install their app. I’ve seen many others report the same kind of reliable, very frequent errors.

Notably, I’m also not logged in, which might be another thing they’re trying to get me to do and another difference between those who hit this more often than not, and those who rarely or never see it. It’s been like this for years.


That sucks! I heard reddit also does things differently when you're logged in/ logged off. I haven't had a bad experience with these platforms myself but it sucks that it is so bad for other people.


The page loading thing used to happen to me regardless of login status, but it's less frequent now. The crashing was mostly just on mobile - that's still a problem.

If I remember right the app would also crash but it could take bigger loads. Not sure, I uninstalled it.


> Trying to paint all business and developers as bad or as cheap because business requirements do not align with your view isn't really fair.

Yes, that would be unfair.


Software is all about trade-offs. I would actually recommend the book Software Engineering at Google, since you mentioned them so many times: https://www.amazon.ca/Software-Engineering-Google-Lessons-Pr...

The first chapter is all about that, and about how software engineering isn't just programming. You seem to be thinking in terms of programming, and I believe you're missing the bigger picture.


I understand what you're getting at but I think you're seriously underestimating the implications of what I said.


Performance matters when it matters. Part of the job is knowing when it matters and adjusting the approach accordingly.


Efficiency automatically matters for something like Electron because if it's not well-engineered, it will produce monstrosities like Slack.

If Slack was written in Qt with the same budget it would just be fast. Even with effort we don't have Qt apps that behave like Slack.


If Slack were built in Qt it wouldn't ever have hit the market or adapted to changes. It would be an order of magnitude more expensive too.

Do you feel that you somehow know something that nobody else knows? It's all about trade-offs. Being able to move quicker seems more important to businesses than writing more performant software.


It's a chat client, it's really not complicated. Even if we take for granted that they genuinely had to ride on top of Electron to get to market (I think the back-end is probably much more complex), they have more than enough money at this point to rewrite from scratch.

Note that one person is literally doing that, with ripcord. Looks like shit but it illustrates the point.


It is really that complicated. They have a lot of features, integrations and platforms. I don't believe you understand how a chat client code base could be much more complex than Unreal.


Every multiplayer game on the market does 90% of what Slack does, as a side feature.


Can you give me one that integrates with my whole business?

Monitoring, metrics, deployment, etc.

I am also looking to play said game on all platforms! iPad, Android, iPhone, Mac, Linux, Windows, etc.

Would be great if the chat client supported markdown as well


I understand the point you're trying to make but arguing that World of Warcraft is less complex than the Slack client is not the hill to die on.


From your posts your view of complexity seems very limited to the art of programming and making complex, fast and correct programs.

Complexity for me is the software engineering part, which Software Engineering at Google covers really well, and I agree wholeheartedly. I would argue a game is simpler because it doesn't need frequent updates and doesn't live forever.

I work on systems at my company that are used at a very big scale, were written 30 years ago, and are still maintained / changed. Those systems, I would argue, are much more complex than any game.

Even though from your point of view it probably only updates files on disk :)


> Electron runs like shit

Does it? I've seen some electron apps that run like dogs, and others that seem perfectly performant. So, I'm curious... in what way does the framework run like shit? In what way could it be improved?

> How many interviewers dock marks for iterating over columns, instead of rows? Because that matters, a huge amount.

I think you mean the difference between SoA and AoS. I'm not sure that one is inherently better than the other, except that the former scales well for large homogeneous datasets.

I doubt optimising for DSPs/SIMD/big-data to maximize throughput and reduce cache-misses, is something the typical FAANG employee needs to know about. Could be wrong.

> "Don't you know Big N complexity is the only thing that really matters if you're looking for speed?"

It does, even if you cache-align and pool your data. And you can optimise your in-memory data-structures (and database IO) later, if profiling finds them to be a bottleneck.


>I've seen some electron apps that run like dogs

And almost no Qt apps run like that. "As long as I don't notice it most of the time, it's not slow." No, sorry, I don't agree. Frameworks should be fast. Slack is unforgivably slow.

>I think you mean the difference between SoA and AoS.

I mean literally in the interview when they do a nested for loop over primitives. Do they lose marks for going row-first? Why isn't that considered a basic rule? It's not complicated. Most interviewers don't even realise there's a difference (!!).
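
To make the traversal-order point concrete, here is a minimal sketch (my own illustration in Python with numpy; the array size and timing harness are arbitrary assumptions, and the same effect shows up in any language): walking a row-major 2D array row by row touches contiguous memory, while walking it column by column strides across the array and misses cache far more often.

  import time
  import numpy as np

  a = np.random.rand(5000, 5000)   # numpy arrays are row-major (C order) by default

  t0 = time.perf_counter()
  total = 0.0
  for row in a:                    # each row is a contiguous block of memory
      total += row.sum()
  t1 = time.perf_counter()

  for col in a.T:                  # each "row" of a.T is a strided column of a
      total += col.sum()           # same arithmetic, far worse cache behaviour
  t2 = time.perf_counter()

  print(f"row-first: {t1 - t0:.3f}s  column-first: {t2 - t1:.3f}s")

On a typical machine the column-first pass is usually several times slower: identical big-O, very different real-world cost.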

>I doubt optimising for DSPs/SIMD/big-data to maximize throughput and reduce cache-misses, is something the typical FAANG employee needs to know about.

Twitter is so bad it crashes my browser. It's 90% text! The only RAM hogs are auto-playing videos that disappear after you scroll past them. These engineers are paid half a million per year. Half the time the website doesn't even load (!!!). That's insane.

>And you can optimise your in-memory data-structures (and database IO) later

Ehh, skeptical. Unless your data & algorithms are structured as contiguous arrays of primitives to begin with you're going to have trouble refactoring the abstraction.


> Do they lose marks for going row-first?

It's not objectively wrong to do this. Suggesting that "SoA is always right", tells me that maybe you don't understand the trade-offs between expediency, readability and performance... or understand the trap of early optimisation. Always measure first before optimising, otherwise write code that's easier to read, and simpler to write.
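
To illustrate the "measure first" point, here's a minimal sketch of what that looks like in practice (my own example; cProfile and pstats are in the Python standard library, and hot_path/handler are made-up stand-ins): profile the whole path, see where the time actually goes, and only then decide what's worth optimising.

  import cProfile
  import pstats

  def hot_path():
      # stand-in for the code you suspect is slow
      return sum(i * i for i in range(1_000_000))

  def handler():
      data = [hot_path() for _ in range(5)]
      return max(data)

  cProfile.run("handler()", "profile.out")        # collect timings into a file
  stats = pstats.Stats("profile.out")
  stats.sort_stats("cumulative").print_stats(10)  # top 10 entries by cumulative time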

> Twitter is so bad it crashes my browser. It's 90% text!

Not sure how knowing how to micro-optimise cache-effects fixes these issues. Profiling and spending time on perf does, but they've decided it's just not worth their time.

> Ehh, skeptical. Unless your data & algorithms are structured as contiguous arrays of primitives to begin with you're going to have trouble refactoring the abstraction.

The keyword is refactor. Developers and organizations do it all the time - or at least they ought to.

Disclaimer: Used to work in the video games industry. Literally worked for years doing perf, and cache-level optimisation.


You're responding to things I didn't say.

No, I don't mean SoA, I just said that.

I don't think twitter is crappy because of cache misses, I think it's crappy because they don't have a solid understanding of what their code is doing. Ignorance of caching is another symptom.

If you build your game on an entity-component system, you can't "just refactor" that to make it contiguous. Your structure is pretty baked.


> I don't think twitter is crappy because of cache misses, I think it's crappy because they don't have a solid understanding of what their code is doing.

It's my hunch that you have little idea of what the code-bases for twitter are like, nor what factors are at play that affect usability issues.

> If you build your game on an entity-component system, you can't "just refactor" that to make it contiguous.

Moving to object pooling of components is very doable. There might be some extra work, like extracting a generic matrix hierarchy and physics primitives out, but you'd only do such a thing if you found that d-cache misses on v-table lookups, or i-cache misses due to update-code churn, were of real concern... and these could realistically be mitigated with homogenised object pools for the particular use case... though usually yes.

Edit: I'd expect anyone on an engine team, or technical programmers at a game dev shop, or maybe even someone working on a browser renderer to have such knowledge from day 0, rather than the average FAANG employee.


FWIW I work in a place which uses a lot of C++ and cares a lot about performance.

I have seen a lot of really bad Qt UIs as far as responsiveness goes. The signal/slot framework can get really hard to follow in huge code bases with lots of state. The whole thread interaction model in Qt is not ideal.

It is interesting because most people are now actually using PyQt or PySide, even though these are slower than pure Qt, because it is easier to adjust to business requirements with Python.

Oh, and I have personally rewritten some of these UIs in React and replaced them with a web browser which made them much more responsive.


> Half the time the website doesn't even load (!!!). That's insane.

I’d assumed Twitter showing me “an error occurred” on about 50-75% of inbound link-follows to their site, across multiple browsers on multiple operating systems, was an intentional “feature” to drive me to their app.


It's a deliberate business tradeoff.


Just here to say that VS Code works plenty fine on Electron and has handily beaten out other IDEs you might consider better software. It's always possible to use a tool badly, but it's not always the fault of the tool.


As someone who uses Emacs (which is a terrible piece of software in its own right), looking at the top feature requests for VS Code is like looking into the twilight zone. It lacks absurdly basic functionality.

Granted, they made Electron run fast enough to be usable but I wouldn't call it good software.


It might lack functionality that you use on a daily basis, but I don't think it lacks any objectively basic functionality. In my opinion this is proven by the fact that thousands (millions?) of programmers, including me, work with VS Code on a daily basis.


What's the alternative? All editors are kind of crap. Compare them to raw level of engineering you find in, say, a video game. It's a different world.

Imagine VS Code was exactly how it is now, but had the extensibility of Emacs. It would blow everything else out of the water. Why isn't it like that?


The extensibility of Emacs is insane, is why; it really is more like an OS that happens to have editor functionality. I think the only thing that even gets close is Eclipse, which has a ton of performance issues which I'm convinced are due to the (enterprise-y) way they try to achieve that extensibility. After that would be the IntelliJ platform; it's usually much faster and nicer to use but also has performance issues and is known for using a ton of RAM.

VS Code is doing well here compared to its competition. It is quite extensible, you can add extensions to it written in JavaScript that can add new UI and make changes to the editor view. It's also much more performant than either of the other two IDEs I mentioned, even after you load it up with extensions to replicate some of the most used features from those IDEs.


> Why isn't it like that?

Because most of us don't care. I need my code editor to just work out of the box. I want to be able to write code and install a few plugins to enhance the tools I'm using. Hacking my editor is a waste of my time and even if I wanted to, I'm certainly not going to learn lisp to do it.


Sublime Text is nice if you care about good engineering and performance.


your definition of good software here reminds me of the critic in Ratatouille who loves food so much he doesn't eat any of it. must be terrible living in a world where everything sucks as badly as you perceive it.


Hacker News, which is literally just text, can't display every comment in this post. There's a disclaimer at the top. Scroll up and read it.

How much money does the Y Combinator Web team receive? Do you think it's enough that they should be able to render... Text?


This might be off topic, but what basic functionality does Emacs have that VS code doesn't? I know there's SLIME and Org, but I don't think you meant that.


Difficult to answer directly, but scanning through the most popular feature requests, most of the non-VSCode-specific feature requests have been solved (somewhere) by Emacs:

https://github.com/Microsoft/vscode/issues?q=is%3Aopen+is%3A...

I will say that this was way worse a couple of years ago. They seem to have added a lot since then.


> Knowing how to maximally exploit a CPU is way more important than knowing eight different Javascript frameworks if good software is your objective

Which is why 90% of all software is very profitable crap.


It's not just you :) A much more realistic way to test people is to tell them to bring their laptop, point to a problem on the whiteboard and say "here, solve it, and use whatever resources you need."

I'm somewhat like you; I do a wide variety of tasks every day and simply can't keep all the different rules and syntaxes in my head. I'm terrible at tests anyway and would probably bomb a FAANG interview badly because of my lousy memory.

As an interviewer, I would rather someone be generally smart, have a good work ethic and lots of experience solving problems. In the long run that seems more useful than being able to write computer programs on a piece of paper in a conference room.


This has actually not been my experience. I generally agree with your reasoning and have tried to run coding interviews this way, thinking that it is most similar to doing real work. Candidates can use their own computer, their own editor and plugins, whatever framework and language they prefer, can google anything they need, etc.

It causes a surprising number of problems. A lot of candidates come in with their environments just very poorly configured, run into weird dependency errors, mismatched installed versions, no linting or syntax highlighting set up in their editor because they don't actually ever code on their personal computer, etc. It's not uncommon to spend half the interview watching someone debug their dev environment, or struggle to remember the basic syntax of their favorite framework since they aren't working on the same codebase that they're used to.

I've mostly come around again at this point and decided that even if I like this type of interview best in theory, focusing on standalone problems that can be done in coderpad with no dependencies is a more reliable approach.


> A lot of candidates come in with their environments just very poorly configured, run into weird dependency errors, mismatched installed versions, no linting or syntax highlighting set up in their editor because they don't actually ever code on their personal computer, etc. It's not uncommon to spend half the interview watching someone debug their dev environment, or struggle to remember the basic syntax of their favorite framework since they aren't working on the same codebase that they're used to.

Now THAT reflects the actual daily experience of working programmers.

All the maddening little configuration options to get things working correctly, which seem to take up far more time than actually writing code to solve customers' problems.


> A lot of candidates come in with their environments just very poorly configured, run into weird dependency errors, mismatched installed versions, no linting or syntax highlighting set up in their editor because they don't actually ever code on their personal computer, etc.

Isn't that a useful way to filter out candidates that you don't want to hire? Presumably they know in advance what kind of code you're going to ask them to write, so they have time to set things up beforehand.


Yeah, you could argue that this means these people aren't that serious about us, and we should be looking for people who are dying to work for our company specifically. Those people will surely spend hours preparing specifically for our interview, and triple-check they have followed all of our preparation instructions.

The reality is that for the past few years at least in SF, hiring engineers has just gotten more and more competitive. Most people who pass our interview will have multiple other offers, often some from companies with a bigger name brand than our own. At least in my opinion, most companies in this environment can't afford to have such strict purity tests. It's supply and demand.

This balance may be changing now given the current recession, it seems even in tech things have cooled off a bit the past few months.


I use repl.it for interviews. I preconfigure the environment and sometimes fix typos if I think they're getting stuck.


A lot of developers won't use their personal computer to develop on, only as a terminal.

It's actually bad practice in a lot of people's opinion; you should develop and test on a system with identical hardware/software to the one you will deploy on.


As the replies to this are getting at, it's really important that if you do this you don't require the candidate to use their own laptop.

You need to provide a laptop or like, a mac mini or something you know you can plug in and has baseline requirements for working on the problem you're giving them. And you need to make sure that the problem isn't complex enough that they can't do it in the allotted time if they struggle a bit with an editor they're not used to (including like, vim without their usual barrage of scripts).

Otherwise you're just trading one bias for another: winners will be people who have personal laptops set up for coding, which is not a universal trait of good employees who code.

Let them use a laptop they bring if they want, but be ready to supply them with something they can use if it doesn't work out.


Give me a Mac and I'll spend half of the time asking you how to do X on a Mac, keyboard, mouse, touchpad, menus and unfamiliar software.

Do you really have to test for the ability to use tools instead of the ability of reasoning? A whiteboard doesn't seem bad for that. I would like to understand how the candidates reason, how they interact with colleagues and yes, also how they handle stress.

Example from yesterday: I had to fix a problem on a software in production before the end of the day (it must be available today) and before a reasonable hour (I wanted to go out at night.) That means working fast, debug, find solutions, possibly develop workarounds that leave the system working even if the problem is still not fully understood and solved and, most important, don't panic.


> Give me a Mac and I'll spend half of the time asking you how to do X on a Mac

I worked at a place that got around this by giving the candidate a choice of their own laptop, a Windows laptop, a Mac laptop, and an Ubuntu laptop. Half a dozen different IDEs on each.

Of course, you have to find someone to take on the glamorous job of keeping track of all the laptops, and wiping them between candidates...

> Do you really have to test for the ability to use tools instead of the ability of reasoning? A whiteboard doesn't seem bad for that.

It depends: If you're a big believer in test-driven development, red-green-refactor, clear naming and good code structure, test coverage, use of version control - you don't tend to see those in whiteboard coding.


Hm, maybe. I was thinking, work in an environment you're familiar with, since that's a more realistic measure.

For example, I'm an Emacs kind of guy, so if they hand me a Windows laptop, do I limp along with Notepad? or some IDE that happens to be installed? Generally, as soon as I've settled in at a new job, I've customized my work environment with something as close to Linux and Emacs as possible -- if Windows, I'll install Cygwin if allowed, and/or Emacs and GNU utilities.


It definitely sounds like an opportunity, at the minimum, for the existing online interviewing sites to provide a service which installs a candidate's vim configuration (or emacs, not sure how that one works) from a public GitHub (or other host) git repository URL. They've already gone through the trouble of sandboxing the runtimes somehow, so it can't be that difficult to also sandbox the editor. Not sure how it would work for other IDEs, but that should at least support vim, and maybe others like VS Code and IntelliJ too.


I don't even have a personal laptop, and bringing the employer-provided laptop to interviews doesn't seem like the most sensible idea.


I have a personal laptop but I haven't used it in a few years. I spend every productive moment deep in the weeds on my employer's laptop!

I'm not sure I would even look like a proficient computer user on my personal laptop. It runs Windows, and at this point my macOS work environment is so heavily customized that I'd be bound to get tripped up just by the sudden unexpected need to use a mouse.


I'm glad my (former) employer has a scheme where after two years you can take over your "old" laptop for just €200. I'm at my new job right now with a 2017 MBP, which their current laptop budget just wouldn't even come near. Oh, and I got it serviced just before leaving: the speakers were broken, but there was a recall / extended warranty for that problem, so it got the whole top replaced.


Wait, your employer makes you pay to get a new work laptop, or they have a scheme where you can buy your old work laptop for $200? If they make you pay to work on up-to-date hardware, that's insane.


"take over" probably means that it becomes his personal laptop.


Thought so - wanted to double check.


One time I went to an interview and they had a laptop ready with something for me to do. The interviewer must have had some insanely powerful glasses, because the font on his computer was tiny (especially considering he was also quite old). That, plus a few other things, plus it being a Windows machine when I'd been off Windows for a couple of years, made it almost impossible to get anything working.


I like take home tests. Unfortunately, the one time I applied to a place that gave them, and apparently aced it, they wanted to validate it in the interview, by having me program in an environment and language I'd never used before. I get that everybody cheats at everything, but since some employers (all the ones that have ever hired me) hire without any coding demonstration, why can't you just take the risk and trust that a regular interview is enough?


No company should trust their employees. The risk of a type II error is too great. To certain kinds of people a type II error is the worst kind of error you can make.


I mean, every other company having a giant hole in hiring policy is not a good reason for them to have it. But them not letting you use tech you're comfortable with is a stupid way to verify.


It feels like the company is lacking a programmer who is decent at communication.

It can easily be solved by asking what parts of the code do, or how it works and why.


Why not give the take home test in advance, and then have the candidate explain their solution in the interview?


I once brought a laptop to an interview. Completely bombed it, because when asked to connect it to a projector I couldn't do it - I'd never used the video output on that laptop before and it was hidden behind a flip-down door that I didn't know about.


And that's it? Interviewer saw that as reason enough for you to fail the interview? Be glad you didn't get in!


In this case I was being asked to supply an example of a GUI I had worked on in the past, and the relevant program was installed on my laptop and there was no publicly installable version. I don't blame them at all, the problem was entirely mine.


More likely it's a time issue: Every minute spent fiddling with your laptop is a minute not spent on the coding problem that's being used to evaluate you.


I completely bombed an interview because they gave me a laptop but refused to supply a mouse. Plus the laptop touchpad scrolled in the opposite direction to what I was used to. I felt quite sorry for the interviewers who had to sit silent for ten minutes watching me try to copy a function from one file to another.


Mark and rikroots, I'm curious about how long it took for you to travel to the interview and back


Not long at all for me. In fact the company was next door to another place I had worked in the past.


Ok, good it was close.


>Maybe we should interview with small take-home tasks to be submitted with some write-ups to test how people research, reason, and write-up problems, rather than writing code on the spot on a whiteboard. ...but maybe that's just me.

I'm a huge fan of giving people tasks that somewhat resemble a real world problem they'd likely encounter or at least aligns categorically with their proclaimed resume feats and more importantly on top of that giving them the same environment they're comfortable working in on a daily basis which might be on a laptop in a quiet room from 6am to 10am in the morning.

However, I'm usually instantly met with the "oh well we don't want to expect too much of our interview candidates" mentality as if asking them to do a "take home" (rather than telling them to figure out how to take a day off of work for an onsite panel) is somehow grotesquely disrespecting their time and would make the company look bad if the broader community even got a whiff that it had been considered. I'm so confused. People easily identify that white-boarding interviews suck for more than just the interviewee. And yet any time a solution is proposed that involves doing something other than putting the candidate in a room in front of a whiteboard for an hour, it's shot down.

I've concluded it's a generational thing and the problem won't be fixed until we get a wave of hiring managers who weren't schooled on the Google interview process. But also note it's a self-perpetuating issue, so :shrug:.


"Take home" questions are heavily biased in favor of the employer because they're not required to put anything on the line.

An asynchronous test is definitely more flexible for all parties, but it also means the employer can move forward with far more interviews than they would be able to do in person, leading to more candidates wasting their time doing nonsense work when they have proportionally less of a chance of getting the job.

Possibly I'm just bitter because of the time I spent 4 hours on a take-home only for the company to ghost me (name-and-shame: Instacart). But at least for in-person interviews they have to buy you lunch before the ghosting can occur.


Speaking of Instacart, I scheduled an interview with them via interviewing.io, which I was told was a "technical chat," designed to get to know a little about me, and to answer any questions I might have. I had qualified for the interview by scoring well enough on phone screens on interviewing.io, so, this technical chat was supposed to be followed by a real, actual coding interview.

Guess what? It wasn't. They rejected me as soon as I de-anonymized to allow them to see my resume.


> Possibly I'm just bitter because of the time I spent 4 hours on a take-home only for the company to ghost me (name-and-shame: Instacart).

Arthena, a Y Combinator funded company, did this to me. It truly was a waste of time.


Take-home interviews should be paid, like $200, and not require more than 4 hours.


They should really be paid at the comp rate of the position you are hiring for (which is... a lot more than $50/hour for the Bay Area employers who have given me take-home interview problems).


Agreed. I am quite disillusioned by take home exams / online hackerrank style tests.


You got lucky. Arthena is a mess. Super disorganized and doesn't treat their people well.


So I've heard. I only went through with it because I have a good rapport with one of their employees who recently left for those reasons.


Worst for me was Rakuten, when they asked me to basically implement a complete ETL pipeline to populate a mini data warehouse. It was easily more than a week's worth of two-hours-per-day work that was supposed to be done within a week, and worse, they didn't even bother to provide feedback.


The worst experience I've had with a take home problem was when the interviewer did not even run the program I submitted. As I explained the solution it became clear that the interviewer had not even looked at or run the program. It would have been easy, it was in Python.

To top it off the interviewer then sprung a surprise leetcode problem on me that had to work and was required to be written in a language ill-suited to the task. I bombed it and the interview ended. No one ever even ran my submitted program.


If you think that's bad: I did my code submission and tested the crap out of it, then found out that the person "couldn't run it". We had a back and forth conversation about it. I even created a docker test bed with the right environment.

mvn clean package worked.

Turns out the guy was running mvn exec:java.


>"Take home" questions are heavily biased in favor of the employer because they're not required to put anything on the line.

Not to mention it doesn't test the collaboration aspects of real work. Imagine doing your job but you're not allowed to talk to your teammates in case of a blocker. How are we doing our jobs here?

But when it comes down to actually starting them, I will always pass if they come before any interviewing round. You don't yet have a stake in the process.


And maybe it excludes people who are short on time because they're taking care of kids? Maybe more often women.


The "easy" fix for this is to pay the candidate for the time spent on the take-home.


That has the down side of some potentially sticky legal issues. For instance, people on visas can't do paid work for any company other than the one sponsoring them. It introduces a small annoyance at tax time, as well.


You could ask them to name an institution for you to donate money to. It would at least be something to make the world better :)


This is actually not a bad idea. As a candidate, it still gives me the assurance that they aren't just throwing out these tests to 1000 applicants.


> However, I'm usually instantly met with the "oh well we don't want to expect too much of our interview candidates" mentality as if asking them to do a "take home" (rather than telling them to figure out how to take a day off of work for an onsite panel) is somehow grotesquely disrespecting their time and would make the company look bad if the broader community even got a whiff that it had been considered. I'm so confused. People easily identify that white-boarding interviews suck for more than just the interviewee. And yet any time a solution is proposed that involves doing something other than putting the candidate in a room in front of a whiteboard for an hour, it's shot down.

I would be (and have been) one of those people. Why? In short, likely because your take-home tasks are tedious, boring, and unrepresentative of real-life work. They are not whiteboarding, but they aren't necessarily any better.

I have done take homes as a candidate, and also have reviewed them on the hiring side. Most of the time, as a candidate, any take home I have done has been wasted time. Nothing I feel like I could count as a significant contribution, or something I could add to my resume. Take homes usually have some boring cookie cutter problem, with your most nitpicky code reviewers as the judges. Or it's something that's grotesquely oversize that's expected to be done in two hours (hedging with "but feel free to use up more time if you want"). As a reviewer, I see that it also leads to candidates doing ridiculous things like overusing decorators or metaclasses in an attempt to impress the reviewers.

Honestly, as a candidate, I would be 100% satisfied with a bitesize fix to an open source library that your company maintains, or even a bitesize fix to a closed source library under contract. Or pair the candidate with an employee who is working on a real, minor task at your company, and let the candidate "drive". Give me something real to do, not some contrived situation about an ordering system or building the next Twitter.

My mini-conspiracy theory about this is that most companies are so embarrassed about their codebases and internal process that they don't let prospective employees look at them until after they are hired.


> Honestly, as a candidate, I would be 100% satisfied with a bitesize fix to an open source library that your company maintains, or even a bitesize fix to a closed source library under contract. Or pair the candidate with an employee who is working on a real, minor task at your company, and let the candidate "drive". Give me something real to do, not some contrived situation about an ordering system or building the next Twitter.

It can take relative ages to become familiar with a code base, especially a large one, such that you can easily hop in and commit fixes that actually work. I'm not going to spend hours trying to gain context for the code, or deploying and testing it.

The employer is going to ghost 80% of their candidates anyway, and if your fix does work, you just did free work that benefits the company for little to no gain for yourself.


> It can take relative ages to become familiar with a code base, especially a large one, such that you can easily hop in and commit fixes that actually work. I'm not going to spend hours trying to gain context for the code, or deploying and testing it.

If none of their codebases are easy to jump into, then that might be a red flag in and of itself. One thing I suppose I have been lucky about is that I have worked for companies that have some (albeit very minor) open source presence. I'm not talking new features. IMO a small fix to an existing codebase can be as minor as adding a new CLI flag, or lightly modifying some existing behavior. In many cases, it's what you'd give a new intern as their first fix.

> if your fix does work, you just did free work that benefits the company for little to no gain for yourself.

Well the gain is that you learned to work with a real-life codebase. The thesis of my comment is that take-home projects do not often approximate a real life codebase well.


Interesting. I think we're getting into the implementation of take homes. My proposal is (1) that a take home technical challenge entirely replace any in-person spontaneous white board drills, and (2) that it only be issued to candidates who you're prepared to bring on-site or otherwise schedule to meet the team if they present you a working solution. I'm not advocating for them to be used as a low-pass filter and spammed to all candidates without some mutual engagement first. The expectation of the onsite is that everyone has reviewed your solution and people can discuss why you made certain decisions or how the introduction of new requirements might impact the design. Or you can talk about why you hate kubernetes or the mini pineboard cluster you built.

I disagree that the goal of an interview experience should be to necessarily give the candidate a new portfolio accomplishment. It's cool if it works out that way, for sure, but that can't be the expectation. I'm mostly interested in incrementally improving the status quo which also isn't focused around an interview day that bolsters your portfolio as an engineer.

I will say my favorite and most memorable interview experience was a phone screen followed by a take-home to implement bottles and kegs (a fizz-buzz game) in JavaScript, a language I was/am not proficient in. Come onsite day, we worked through a UX design problem and a database design problem, we pair-debugged some Ruby code, and then we wrote a multiplayer back-end for my fizz-buzz client, something I'd also never done before. It definitely ticked the "add something to your portfolio" box but also wasn't stressful in the same way flash 1-hr whiteboard sessions are, because I was bringing something to the table that we expanded on during the day-long session.


I like this solution a lot, but if they asked you to contribute to a public codebase, you'd be able to tell who the other interviewees were, and how many of them you were up against.


A number of small tasks one can do in a few hours _and_ add to one's resume is probably approaching zero...


I guess portfolio is a better word I might use? Like if you ask me to write an interesting API, I could convert it into a project for future work. If you ask me to write a cool script, I could put it into an existing "interesting scripts" repo I maintain.

I have never seen a take-home project yet that I could do this with. Also they come with scary "You may not show this to anyone else, ever!" disclaimers which are more off-putting.


I don't like take homes for two reasons:

1) It's less time spent talking with potential coworkers. Which for me, coworkers (and how we work together/get along) determine about 90% of how satisfied I am with my job (assuming pay/benefits are standard).

2) You're still going to require me to take a day off and stand up in front of a whiteboard anyway eventually. Every take home I've done is an early stage filter. The final round is still a whiteboard panel.


There comes a point in an interview where you're not sure you want the job given what you've seen, and any judgement of the candidate's ability once they've checked out is useless.

You can learn a lot about a person by what questions they chose to ask you, but only if it's not a thing tacked onto the end of a long interview. If they haven't gotten much time to ask you questions, they only have your interview questions to go on. Which you know are not representative of the kind of work you do (but then, why are you asking them?)

So because you only ever talked to them about things that could readily be had off the shelf, and this is the only information they have, they can't know if you're bad at interviewing or some psycho who spends all their time reinventing wheels and then demands that other people be impressed by their insanity.

The chances of the latter are bigger than you think. So if it's down to you or another opening, they'll take the other opening, and then you've wasted everyone's time.


Oh man, I can relate. I once did a take-home project which took around 6 hours and was then invited for a full day of whiteboarding with 10 different people throughout the day.

Didn't hit it off with the last interviewer so it was a no-go.

If a take-home is required I'd at least expect a code review over it and focus the rest of the interview process on soft skills rather than trivia questions that are answered essentially by the take-home exercise.


This is exactly the solution I'm imagining. You do a take home after the initial soft screen(s) but prior to coming on-site. Onsite the focus is on the work you did for the take home--discussion like "why did you model X that way?", "if a new requirement Y was added, how would it impact the design", etc. And of course all the normal soft talk and cultural stuff. Just don't put them in front of a board and ask them to solve something under pressure, please.


I had that exact experience with a startup and it was awesome! They also were very generous with the time given to solve it (two weeks).


Ten people? 10?! Maybe if it's really 5 slots with 2 interviewers (though that has its own set of disadvantages, on both sides). But that just sounds like a distressingly long day.


It was: 5 meetings plus lunch. Each meeting had at least 2 interviewers, and I had lunch with my prospective manager and another senior peer.

  1. 2 PMs - collab and project management questions.
  2. Trivia Q&A with 2 devs and tech projects.
  3. Lunch with Dir. Eng and senior dev.
  4. Whiteboard leetcode with a researcher and a junior dev.
  5. Systems design with Dir. Eng and a senior devops.
  6. Chat session for questions with VP of Eng.
Needless to say, I was burned out by the systems interview after having been in interviewing mode all day, and the interviewers were definitely mentally drained too, nitpicking a lot of low-level elements of the systems design task.


2. is the biggest thing for me. Exactly the same experience. I would be happy with take-homes if companies actually trusted them. The biggest reason I'm willing to do them is to avoid the whiteboarding.

Of course from their perspective I could be getting someone else to do it for me or something. Or maybe the take-home leaked (which becomes more probable as the company grows).


I’m surprised this is your experience.

At my current employer, that's exactly how we approach hiring. I send the candidate a take-home that broadly covers the basic tech skills they need to be effective in their role (implement a simple test API, some questions about scaling web apps in production, and a possible architecture sketch for a described system). It's expected to be around 4 hours of work. If they pass that, the interview is free of technical topics - we just talk about their past experience and assess whether they'd be a good fit for the team.

Is this not how other take-home assignments work?


I can confirm this has also been my experience. I have spent multiple hours on take-home tests only to be ghosted. In no case have I ever been able to skip the whiteboard hazing because I did well on a take-home.

I'm interested to know who your employer is, if you're comfortable saying.


Not comfortable sharing my exact employer, but it’s a small biotech in the Netherlands.


I've never seen it done that way -- just used as a pre-screen for an onsite that is the typical code-on-a-whiteboard scenario. Had one place that used an at home test + a one hour task in the office on a laptop they provided + several more whiteboard sessions (so long they split it over 2 days).


I too think that's how things should work if a take-home is involved, but I haven't experienced this yet. There are times I've gotten past the take-home and then had to do a technical _phone_ interview after - not even an onsite.


How many candidates do you have doing a take-home per position?


Roughly 5-10 I’d say. We are able to weed out a good chunk in the phone screen (which is done by me or another engineer, not a recruiter)


Yes, it should be. This is how we do it


Do you feel comfortable saying who "we" is?


> People easily identify that white-boarding interviews suck for more than just the interviewee. And yet any time a solution is proposed that involves doing something other than putting the candidate in a room in front of a whiteboard for an hour, it's shot down.

Most people take issue with the repetition of similar tests and evaluations for multiple companies. If A, B, and C were considered different stages of a technical interview, candidates are repeating A and B too often, whereas applying A and B once would suffice for several companies.

It's a problem that is realized when the opportunity cost compounds across interviews with multiple companies, and it's not something that a single company alone can fix. A more "Docker-ish" solution, where one pass through "A" lays the foundation for interviews at many companies (and not just the ones expecting you to practice Leetcode), might curtail the problem of using more time and resources than necessary.

But more than that, there is also a cognitive dissonance with the beliefs of how to reform a broken system. Programmers largely oppose industry-wide regulation while also expecting the vetting process to be more consistent and make more sense. This is a tough nut to crack because we seemingly want to have it both ways.


> Programmers largely oppose industry-wide regulation

...of programming (because the regulation would never be able to catch up to the state of the art), sure. Of HR practices? I don't think I've ever heard an engineering lead lament that their company's restrictive HR practices prevent them from testing candidates the way they want.

> and it's not something that a single company alone can fix

Sure they can: micro-credentials. Someone needs to be the CompTIA of tiny programming challenges, where you prove once-and-for-all (in a proctored situation) that you can solve FizzBuzz or whatever, and get that fact recorded in some database.

Hopefully, candidates could then put their MicroCredentialsCorp username on their resumes or into companies' application forms or what-have-you; and then first-stage HR automation (the same kind that matches resumes to keywords) could run through a bunch of RESTful predicate-endpoints checks like https://api.example.com/u/derefr/knows/fizzbuzz (which would return either a signed+timestamped blob of credential JSON, or a 404.)
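
A purely hypothetical sketch of what such a predicate endpoint could return (none of these names, fields, or services exist; the username and skill come from the example URL above, and the rest is just to make the idea tangible):

  import json
  import time
  from typing import Optional

  # Hypothetical store of proctored results; a real service would use a database
  # and an actual signing key rather than a placeholder string.
  VERIFIED = {("derefr", "fizzbuzz")}

  def check_credential(username: str, skill: str) -> Optional[dict]:
      """Return a signed, timestamped credential blob, or None (i.e. HTTP 404)."""
      if (username, skill) not in VERIFIED:
          return None
      return {
          "user": username,
          "skill": skill,
          "proctored": True,
          "issued_at": int(time.time()),
          "signature": "<base64 signature over the fields above>",
      }

  print(json.dumps(check_credential("derefr", "fizzbuzz"), indent=2))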

Hopefully also, allow anyone to create a test (maybe have the end of that HTTP route just represent the username+repo of a GitHub repo where an executable test for the micro-skill lives?), such that these micro-credentials can live or die not by central supply, but rather by market demand.

(I think someone suggested doing this with a blockchain once, but there's really no need for that.)


Unfortunately, nobody has cracked the problem of how to do credentials without creating a conflict of interest in favor of one of the parties (ok, maybe only in favor of the candidate). Or at least, nobody has created a way to communicate that lack of conflict of interest widely.


> Someone needs to be the CompTIA of tiny programming challenges, where you prove once-and-for-all (in a proctored situation) that you can solve FizzBuzz or whatever, and get that fact recorded in some database.

I don't think any of the major companies would accept such a solution. I mean, they make people who already passed their own interviews once re-interview if they leave and come back or apply again too long after they pass and decline. If a company isn't willing to trust its own interview process as a once-and-for-all test, why would they even consider doing so for a third party test?


Heh, another thing is a variable salary over a trial period. Like how I was recruited out of uni, on a 6-month cheap internship barely paying rent, with 0 commitment from the company to keep me. Long enough to evaluate me, see me contribute, and teach me part of the system; not long enough to "exploit" me.

We could scale this to 3 months on 80% salary (whatever the company is comfortable spending on a throwaway hire), with the full salary after the trial period, maybe even with back pay and a little ceremony to make it a big deal. That would kick my ass for three months, cost me nothing long term, put a big smile on my face, and let any company hire me without 15 interviews.


Isn’t that exactly what probation is for, although for full salary? To evaluate the new employee and lay them off if they don’t meet the requirements? Disagree with the discounted salary, if 20% for three months is going to break the bank, the company probably won’t be around much longer anyway.


This is theoretically what TripleByte is supposed to be. My experience with them was pretty horrendous though.


We do take-home, and tell the candidate to not use more than 4 hours on it and deliver it any time during the next week (this is to respect their time). The testing system logs their activity, so if someone decides to spend two days on it, we'll know. Not a problem as such, but it raises our expectations of the delivery. :)

Next stage is a technical interview of 60-90 minutes where we a) try to find what their strengths are and how deep they go, and b) ask them to explain how they solved the take-home.

b) is critical. Of course it tests for the skill that is critical in teamwork: communication ability, but it also lets us catch cheaters.

After this it's HR and reference checking.

It's worked well so far.


[flagged]


Yes, I did one recently, got told it should take 2 or 3 hours, so I spent a morning on it.

Got failed for a number of the most trivial reasons - it didn't meet PEP 8 (it was two whitespace errors, which I missed because I didn't have a PEP 8 checker installed on my home laptop).

A load of stuff about not mocking tests (setting that up would have doubled the scope of the project; the spec said that if you don't have time, describe what you would have added in terms of tests - which I did). I did provide the most-difficult-to-implement integration test to prove the system worked.

There was a complaint about providing an HTML webpage rather than a JSON endpoint (the spec described it as a "page" and said not to bother with styling - so it implied it was HTML). Plus a couple of other equally ridiculous complaints.

Don't waste your time with takehome tests without speaking to someone in engineering first. Sennder is the company if anyone else wants to avoid, I think we should be naming and shaming such companies.


Good God. That sounds awful.


Yes, if an employer doesn’t understand how to look at my GitHub then they aren’t worth my time.


I only used clever algorithms maybe, MAYBE, a dozen times in 12 years as a Java developer.

I know I can. In university I implemented Huffman compression (I know there is nothing special about it algorithm-wise) in Java and got the best runtime across 2 cohorts of ~100 students, and by that I mean orders of magnitude better; my runtime was on par with the best C++ programs I could find, and my program could work on arbitrary file sizes, while the fastest C++ program I could find at the time (20 years ago?) loaded everything into memory. I wrote all the data structures from scratch, because using Java's structures consumed a lot of memory and was way slower.

But the jobs I got in my country as a J2EE developer were much better paying, at that time at least, than any C++/algorithms stuff.

If I was younger, with the same single-mindedness I used to have, I'd absolutely spend 1-2 years tuning my algorithm skills so I could get one of those sweet FAANG jobs.

Or if I had the ego of my ex submediocre-redneck boss, who thought the Spring Framework was bullshit without actually knowing it, and got into programming competitions (only to be butthurt by his results because he did not put in the effort to actually practice algorithms and data structures), again, I'd study hard on algorithms and data structures.

Or if that was my hobby -- algorithms and data structures are 100% a valid hobby -- but it is simply not my hobby.

Sure, if I encounter a hard problem, I can look up the best algorithms, copy & paste, modify, optimize and optimize until I'm satisfied with the result, but there simply is no payoff for me to be an algorithms grand master.


> Today, for example, I updated our firewall rules, updated some Java business logic, tweaked some javascipt and PHP on the site, reversed an odd macro email-phishing virus we got, configured a new RDS server VM for clients, and updated some network GPOs. 20 years ago, I would have never worked on ALL of those things in the same week - let alone the same day.

If you're happy, then awesome, but the reason you would have never worked on all of those things before is because companies are loading up more responsibilities onto technical people to cut down on costs.

You carry the skills of 3 roles from 10 years ago: Sysadmin, developer, and reverse engineer, and yet you're likely only being paid for one.


Unless they're getting 120 hours of productivity out of him, then no - they just need a few generalists more than they need many specialists. Specialization is great, but only if all of the other necessities get covered.


This. I bombed my last interview because they were asking about something no one would ever remember off the top of their head. I said I'd just google it and then dropped "Never memorize something that you can look up." ― Albert Einstein


I like this quote, but I think the point comes across better if you rephrase it:

> Never _intentionally_ memorize something that you can look up.

I guess you can't quote Einstein, then.



Yup, the tasks that you repeat often enough will get stuck in memory.

In theory anyway, I always have to look up how to remove a remote git branch or how to do the euro sign on a Mac.


I have a standard preface to start a live coding or technical session with a candidate to account for this. I look up stuff all the time, and expecting candidates to not do so is cruel.

> We're going to go through a few technical exercises together. It's open book, open documentation, use whatever resources you would normally use — though please do not copy and paste complete solutions from StackOverflow and guess&check to reach our solution. It's open language, so please select whatever you're strongest in. If it is a language I am not familiar with, I will ask questions. Some of those questions may be naive. Use whatever editor and environment you're most comfortable in. Please ask questions of me. There is no specifically correct answer, though your solutions should hit a level of baseline correctness. These problems are designed to mirror some typical day-to-day problems you might face on the job; we're not looking to test rote memorization of _Hacking the Coding Interview_. We want working code, but some aspects or edge cases may remain in pseudocode as we iterate on these problems together. There are no designed 'gotcha' moments in these exercises, and if you feel there may be a 'gotcha' moment, ask clarifying questions. I would rather you ask questions than struggle through a solution.

It works well for modern development.


Maybe I should start saying (as a candidate):

> I understand you want to test my programming ability, and in order to do that, we should do it under conditions as close to real world as possible. Because I don't know every algorithm in existence, I'm going to use the internet like I do normally to look stuff up. And, I'm going to use my own laptop that I've brought here. Is that cool?

Not really, of course, but I'd like to.


Yes, you should be saying exactly that.

Just do it ahead of time instead of on the spot. You're probably emailing back and forth with a recruiter. Send them the exact comment you posted here, and ask that they make sure that your interviewers know about it.

It's amazing how little things can throw you off your game. I had a couple where they handed me a MacBook set up for "traditional" touchpad scrolling (where it works like a scrollbar instead of like a touchscreen).

Every time I tried to scroll it went the wrong direction!

Of course in an interview we aim to please, so I lived with it for a few minutes, but finally asked "Do you mind if I change your touchpad settings?" Of course that led to a brief discussion on the virtues of traditional scrolling.

This was not only precious time wasted, but a minor blow to my self-confidence. Imagine you tried to drive a car or fly an airplane and they had sneakily reversed the steering wheel or yoke and pedals! You would be lucky to survive.

In fact this is why the preflight checklist for a light plane always includes "controls free and correct".


Why would I trust my memory on how a utility works with an empty set when I can just look it up?

So many languages, frameworks, and libraries copy the same functionality and every single one does it differently.

The cost of looking it up is a little bit of humility. The cost of not doing so is a small chance of spectacularly breaking something for my entire team. Suck it up, apply the Precautionary Principle, and most importantly, get over yourself.


Most of my job depends on analyzing the runtime complexity and memory complexity. It's my passion. I don't memorize any language, but boy do I understand the small complexities and seemingly trivial interactions that have profound implications. When I interview, I ask them about what they enjoy about programming and how they approach problems that they're stuck on.


This makes me very nervous - I agree it is technically possible to make that volume of changes in a short amount of time now, but I don't think it is possible while still accounting for QA / testing and documentation. That is a massive amount of change in a short time without any team oversight.


I believe we should program the way you do nowadays. This is what people praised so much in the past as code reuse. This is what research people call standing on the shoulders of giants. We should reuse others' wisdom (well, you need to filter what is wisdom and what is not first, of course, which is not that straightforward itself). Most professions work with pre-manufactured components for the sake of... well, many things (it's a story of its own)!

Do it smartly: know what that component is, whether it is good for the purpose, whether it is the best for the purpose, and what its trade-offs are, but that should be the primary way, with a fallback to reinventing the wheel.

We will still have to provide custom-built solutions from scratch from time to time (in fact, in a lot of cases CS is still not mature enough for proper higher-level operations), but it's better to rely on tested and ready pieces.

Recruiters seem to have no clue how to select people for a particular position, therefore they go back to the ancient times of CS with this dumb whiteboard thing. I also feel like they expect that custom-made employees exist out there and they only need to wait for the perfect candidate, not really expecting someone to learn on the job, even though continuous learning is the industry norm, the inherent requirement. I've seen positions remain unfilled for more than 6 months, sending away partially fitting experienced candidates.

I failed at a technical interview where you must react instantly and accurately without any of the help normally available in everyday life (Google). After a looong time of programming and learning various approaches, I find myself continuously questioning myself, even on fundamental matters. Or is it something that comes with age? When I was young, self-confident and energetic, and the Google style of programming was far from invented, my fingers were spitting out huge amounts of brilliant code that did not work until I fixed all the stupid mistakes I had made, which took a frustrating amount of time.

Coding gives you a good amount of humility by showing you that you are to blame in every last case. Every time! Making a mistake, misunderstanding something, not knowing necessary elements, misjudging aspects, not checking the reliability of some component you rely upon - it always comes back to you, because the computer will do exactly what you tell it to do; it will always be you. Frustrating. I learned to doubt myself and verify seemingly evident matters, which does not play well in whiteboard interviews, as you might imagine (in fact I once had a mental whiteboard interview, just talking through solutions - how common is oral programming in practice?).

This is an unnatural and sinfully inaccurate way of judging how one will work in a position. Given the constantly changing (improving?) profession and the huge number of beautiful aspects it entails, keeping every bit of detail in your head so you can spit it out at a second's notice is far from a smart expectation. It has happened more than once that the right answer occurred to me after walking away from the interview, as the knowledge bubbled up out of the sea of experience. Once I told someone, sorry, I've never met that technique before, only to realize later it was an essential component in a task several years back; I had used it quite a lot for that one task.

I have not changed positions much (3-8 years spent in each place), but I have never been hired based on a whiteboard test; it was always trial tasks or a probation period, fulfilling the actual needs of the position (and leaving due to dissatisfaction every time). In fact I only encountered whiteboarding recently, and I find it hugely inadequate for judging competency.


> Couldn't tell an average from a median

Without meaning to attack you personally (especially in the context of the rest of your comment), a comment like this annoys me a bit.

I presume by "average" you mean arithmetic mean, but the median is also an average, and depending on the context the median might be a far more useful statistic than the mean. Confusing the mean and the median is one thing, and perhaps you actually used this terminology in the interview; "confusing" "the average" and the median isn't really worthy of comment, and sounds more like a breakdown in communication between interviewer and candidate rather than a lack of technical knowledge. It just seems vaguely hypocritical to me to be expecting a certain level of ability from the candidate and then using imprecise/informal terminology.


I know nothing about GP, but note that it is possible that English is not his first language, and the subtleties of the story and vocabulary might be lost in translation.

In my native language (French), there is no distinction like the one you describe: the technical term and the common term are exactly the same word, "moyenne". Until your comment, I assumed that in English you could use "average" and "mean" interchangeably.


My native language is English and I'm fluent and well educated.

Although I do know that "average" covers mean, median and mode, when I was young I was not taught that way. First I was just taught "average" with a formula, and many years later "median" and "mode". I don't think I noticed the term "mean" until I learned about median and mode.

You can see this reflected in scientific calculators used at school, where the button for calculating mean is labelled "Avg".

And in Microsoft Excel or LibreOffice Calc, the function to calculate mean is called "AVERAGE".

So nobody should be hard on themselves for not knowing the difference. You probably weren't taught the difference, and common tools work against it.

So far as I can tell, the common word for statistical mean is "average" in English, but it's not the correct technical language.


> So far as I can tell, the common word for statistical mean is "average" in English

You're quite probably right about common parlance..

> but it's not the correct technical language.

..but as you point out it's not really appropriate in a technical context, indeed I don't think I've come across the term "average" being used in this way in technical literature; I've normally either seen expectation (for probability theoretic cases) or mean (for statistical cases).

> So nobody should be hard on themselves for not knowing the difference.

Absolutely, which is why I think being hard on someone for not knowing the difference between "the average" and the median is similarly an issue.


You realize that an interview is not the same as writing a technical article, right? It should be a conversation between two potential coworkers, and, in that context, I think saying "average" instead of "mean" is totally fine.

Incidentally, "mean" doesn't quite cut it, if you want to be a stickler about it. There are several types of means: arithmetic, geometric, harmonic, etc. By that standard, you just failed the interview.


> "mean" doesn't quite cut it, if you want to be a stickler about it.

I'm going to assume, then, that you didn't read my original comment which started off this line of discussion? By that standard, you just failed the interview. ;)


Sorry, I dropped this! :P

Touche.


In most practical situations, your initial impression is correct. As a native English speaker, I've never seen or heard someone use "average" to refer to a statistical summary other than the mean. If it doesn't refer to the mean, it doesn't mean anything formal/technical at all. I read GP as "median is sort of an average in the broad sense", but this is honestly kind of a stretch.


I see it quite commonly, from technical discussions to newspaper article. When people say "the average income in the US" they are almost never talking about the mean.
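
A toy example with invented numbers shows why the two diverge for something as skewed as income (this is just my own illustration):

    import statistics

    # Hypothetical, deliberately skewed incomes: one very high earner
    incomes = [30_000, 35_000, 40_000, 45_000, 1_000_000]

    print(statistics.mean(incomes))    # 230000.0 -- dragged up by the outlier
    print(statistics.median(incomes))  # 40000 -- closer to the "typical" earner

Any long tail at the top pulls the mean up while barely moving the median, which is presumably why income reporting usually reaches for the median.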


I think they are. I doubt any newspaper articles use it to refer to the median, at least I've never seen them use it like that. Do you have any examples?


Almost every discussion about average salary or IQ uses the median. Even finding results for the mean salary is difficult. The first page of results for "mean US salary" gives a list of results for averages, many of which specify that they are actually the median.

https://www.google.com/search?q=mean+us+salary


Well, https://en.m.wikipedia.org/wiki/List_of_countries_by_average... as a counterpoint. Almost all the Wikipedia pages about average wage use mean.


Math definitions only..


French and English are strictly equivalent.

"Moyenne" is mean or average. The formal names are "arithmetic mean" and "moyenne arithmétique".

"Median" is "médiane". People frequently use any word but "median", in both languages, when they want to talk about the median.

https://en.wikipedia.org/wiki/Arithmetic_mean

https://fr.wikipedia.org/wiki/Moyenne_arithm%C3%A9tique


Almost correct.

The mean, median, and mode are three types of averages. None is 'the average'; though if you asked a random person on the street they'd probably give you the mean average.


On an average day?

Mode: On a typical day <<-- most common meaning for average in day to day conversation

Mean: On a day that is in the calculated middle (for some variance) <<-- most common meaning when we think the question is about math

Median: On a day that is the middle for all the possible days <<-- nobody thinks of this, but they think this is what the mean is.


This really depends on context. When evaluating a likelihood (prediction), "average" is generally colloquial for "expected median."


All you're pointing out here is that a candidate should be able to differentiate between average, mean, median, and mode, and should be able to swiftly correct "what's the difference between average and median" to "do you mean mean and median? because a median is a type of average".

Given that, showing more charity to the comment you're responding to would have been the correct course of action.


Yeah but will embarrassing the interviewer work well as a strategy to get hired?

Depends on the interviewer.


Do you want to work at a place where you cannot even define the words being used before solving a problem?


...I always learned in grade school math class that "average" = "mean", very specifically. I'm pretty sure there would have been test questions that required this knowledge.

It's very possible my schooling was wrong, but presumably I'm not the only one.


I have a graduate education in probability and statistics; in the technical literature, an "average" is an estimator which, subject to some bias, predicts the value a sequence tends to. My professor was fond of saying, "the function f(x) = 15 is an average, but since it's a constant function it will almost always be a terrible one" to drill this into our minds.

The arithmetic mean is colloquially called the average and differentiated from the median and mode, but technically speaking these are all averages. They are all estimators of central tendency. It would not be out of the ordinary for a statistician to ask, "which average do you mean?" for clarity. Which definition of average is best depends on what you're trying to measure.

I dislike this interview question because it's an area where it's 1) more trivia than practical knowledge, and 2) easy to not recognize someone who knows more about it than you do. It's like if I tried to test your knowledge of the difference between "affect" and "effect", and then you correctly used "effect" as a verb and I thought you were wrong.

If I repeated everything I just said about estimators in an interview, the interviewer might think I'm too ivory tower to realize that "of course he's just talking about the mean!" But then I could also ask why I'm being asked a question like this in a software engineering interview.

It's a language game where no one wins (somewhat in the sense of Wittgenstein).


> "the function f(x) = 15 is an average, but since it's a constant function it will almost always be a terrible one"

It has minimum variance, how can it be a terrible estimator?.. ;)


Hah, that's a good rebuttal. Well, it would be an excellent estimator if it were unbiased (or at least had very little bias). But if f is an estimator for the true average of a function F where E[F] = 100 (for example), it's very clearly not unbiased.

It is a bit tongue in cheek.
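
To put (entirely made-up) numbers on it, a quick simulation sketch: the constant estimator has zero variance but is hopelessly biased, while the sample mean pays a little variance for essentially no bias.

    import random
    import statistics

    # Toy simulation, all numbers arbitrary: the "true" expectation is 100.
    random.seed(0)
    trials = [[random.gauss(100, 20) for _ in range(30)] for _ in range(1000)]

    constant_estimates = [15.0 for _ in trials]          # f(x) = 15, regardless of data
    sample_means = [statistics.mean(t) for t in trials]  # the usual estimator

    # Zero variance, but centred nowhere near 100:
    print(statistics.pvariance(constant_estimates), statistics.mean(constant_estimates))
    # Some variance (roughly 20^2 / 30), but centred close to 100:
    print(statistics.pvariance(sample_means), statistics.mean(sample_means))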


> It is a bit tongue in cheek.

As was my reply ;)


It is a very precise function, I'll give it that.


Thank you for articulating this. This issue has many manifestations in the interview process. It can be so frustrating.


I worked at a company where I had done over 150 interviews, and the company did at least 10 interviews a day at the office I was at. We ran metrics on times of day, days of the week, and rooms.

Interviews held right before lunch, or at 4pm or later, scored worse across all candidates.

Certain rooms we knew were smelly, loud, or poorly converted from something like a storage closet also scored worse across all candidates.

We took some rooms out of the rotation and made sure everyone was well fed and had drinks. The effect is real.


Interestingly, there's a famous study showing those doing the judging "can't think when hungry" either. (Or, at least can't think unbiasedly).[1]

[1] https://www.pnas.org/content/108/17/6889


That study turned out to be quite flawed, because the cases were ordered [1].

In particular, cases were grouped by prison and within each prison, prisoners without an attorney went last. As the well-known saying goes, "the man who represents himself has a fool for a client": the pro se prisoners fare far worse than those with a lawyer. Judges tried to complete an entire prison before taking a meal break, and therefore ended one session with the statistically weakest cases and began the next with stronger set, thereby guaranteeing the result.

If you look at the original data, some other oddities pop out. A physiological process, like becoming hungry, should depend more on the wall-clock time than on the number of cases heard. However, "note that in an analysis that included both the cumulative minutes variable and the ordinal position counter, only the latter was significant."

[1] https://www.pnas.org/content/108/42/E833?ijkey=21207497c684f...


Ah, thank you! The more I look into a lot of these (in)famous behavioral psychology studies, many don’t hold up or aren’t replicable. As a layman who is simply interested in the topic, I appreciate the correction and insight!


Well, then. There's the solution. From now on, I'm bringing my attorney to all my on-site interviews. :P



Oh no, confusing median vs mean is a very specific interview question I've bombed in the past so hard that the interview almost immediately stopped!


> People bomb, including good people. That is sadly part of the system.

It seems like there's probably some value in a "Moneyball" company that figures out how to interview without most of the anxiety. Perhaps hiring a few people with remarkably high EQ who customize the interview setting, review the interviewees' work, and watch them program over a few days on their own computer and IDE.

I know I've interviewed people that have frozen up, but for whatever reason I didn't pause the interview and address the anxiety, but rather I tried to adjust the questions (which generally didn't work).


These people are usually called "references" :)

As for interviewing for a few days... Yeah. No. Not in a country with an abysmally bad vacation policy like the US.


The whole reason we're in this mess is that every company is trying to play moneyball. They're all convinced that by creating a unique interview process they'll find better talent than other companies and thus make more money. The end result is more and more obtuse hiring practices which have zero practical value.


I do think that the people giving the questions and interacting with the applicant should be different from the ones judging their results. This also gives you the opportunity to blind the judging for race, gender, etc. The proctor should be explicitly on the applicant's "side" IMO.


I used to 'sit in' on interviews, and basically say very little except to answer questions about working here or to serve as the 'linter' to get them unstuck.

You hear something very different from what the person who asked the question hears. And in one case I discovered that my coworker had been asking a question that he thought he knew the answer to and did not: a deep clone implementation (where he never asked about cyclic graphs).
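
For anyone curious what the cyclic-graph wrinkle actually looks like, here's a rough sketch in Python (my own illustration, not the question as he asked it): a naive recursive clone recurses forever on a self-referencing structure, so you keep a memo table mapping originals to their copies.

    def deep_clone(obj, memo=None):
        # Handles only dicts, lists and atoms -- enough to show the idea.
        if memo is None:
            memo = {}
        if id(obj) in memo:           # already cloned: reuse it, breaking the cycle
            return memo[id(obj)]
        if isinstance(obj, dict):
            clone = {}
            memo[id(obj)] = clone     # register before recursing into children
            for k, v in obj.items():
                clone[deep_clone(k, memo)] = deep_clone(v, memo)
            return clone
        if isinstance(obj, list):
            clone = []
            memo[id(obj)] = clone
            clone.extend(deep_clone(v, memo) for v in obj)
            return clone
        return obj                    # atoms (numbers, strings, ...) are shared

    a = [1, 2]
    a.append(a)                       # make the list refer to itself
    b = deep_clone(a)
    print(b[2] is b)                  # True: the copy preserves the cycle

(Python's copy.deepcopy does the same thing with its memo argument.)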


I assumed it rather than saying it out loud, but my idea is to have standard, vetted questions to reduce the risk of a question being malformed like that (unless you want to see how they make trade-offs in response to an overambitious spec). Your situation sounds rather different. Do you think my scenario would work better?


> He also got hired - apparently the committee agreed that "can't think when hungry" is not really that important a flaw.

I hope you suggested that this employee get tested for hypoglycemia. It'd probably greatly improve their quality of life to go from "my brain isn't working and I don't know why" to "my brain isn't working and I know exactly why; I need to drink some fruit juice."


Hah. "Did a tech interview, found out I had diabetes."


So is the moral of the story that this hire worked out or not? Was this a good coder?


That's another downside of the system: finding that out is too much effort, unless they happen to land on your team. But knowing how we hire only obviously overqualified people, I trust he did well.


(This subthread was originally a child of https://news.ycombinator.com/item?id=23848916)


> Couldn't tell an average from a median.

Shouldn't that be tell a mean from a median?


It should. There is a sibling thread about this ;)

