Then I interviewed for another company and utterly bombed. It became suddenly clear to me that I had been an idiot. Of course nearly all of those candidates were perfectly good programmers. They had had shit interviewing days, probably mostly due to nerves, but they probably would have mostly been perfectly good employees. How frickin' arrogant I had been for concluding that people who couldn't solve incredibly high stakes algorithm riddles on a whiteboard in 45 minutes with enough speed and flair were somehow not qualified to be my coworkers.
The interviewers didn’t know I’d gotten a call from my cat sitter that morning and one of my cats had to go in for an emergency check-up. (He ended up being fine, but I didn’t know that at the time.)
Later that week I got the rejection from FB and then, 15 minutes later, the positive response from Google. It really made me realize how much the day itself impacts an interview outcome.
1. Yes, there is a larger and more complicated discussion to be had about making the whole system better. But that isn't gonna happen in the short term. This is advice for dealing with the situation as it exists today.
I also have real problems with traveling to interviews since my other half has issues being alone at home hence doing the interview while we were visiting her family. Even then me being gone was highly stressful for her.
Google was willing to let me interview with folks local to us which really helped. I’m curious if more companies will conduct remote interviews in a post Covid world too.
It is not specific to the interview process.
Trust me anyone reading this, if you are not feeling well, or something is distracting you, please, please, do feel free to reschedule your interview, even last minute.
Once you go through the process, and it's a no-hire, that is recorded in the system and you won't be allowed to try again for some extended period of time. At my company, it's 1 year.
I think I too wouldn't have "dared" to postpone, in some/many cases.
Also: Do you really want to date someone who doesn't care that your cat may have a life threatening problem?
Finally, even if it's not that serious (cat emergency): Do you really want to work/date someone who is that susceptible to this bias? I know it's the norm to fall for first impressions, but for me, it's also a signal of problems I'll have with them in the future.
BTW, my cat had a serious health problem once. She needed lots of immediate care for a few months (at home, away from work). And the vets were telling us that even if she got through it for now, it would come back soon and we should really consider euthanasia - the condition would eventually kill her - not many cats last a year with it, and chances are she would be in pain for much of it.
And this all started a few days before I started my new job.
I went straight to my new manager (before my official start date), and explained to him that the cat was my priority - I could take a leave of absence or whatever was needed, but I needed to figure this out (either euthanasia, or time off for treatment options, etc) and could not be anywhere close to 100% at work.
Fortunately he was sympathetic and said I should not worry and do whatever was necessary.
The job turned out to be crappy, and I stayed there longer than I should have entirely because of this.
I know I go thru waves of high productivity and low productivity. That productivity seems to be closely related to the ability to concentrate in my environment, sufficient sleep, how closely I've followed my daily rituals, fewer stressors in the past few weeks, etc.
While waiting at UBER reception area I got a call from a "tax office" telling me about suspected tax fraud transactions (it was a fraudster call but imagine me at that moment) a minute before I was called in for an interview.
Yeah, I was really kicking ass at that interview (NOT!)
I already know where my strengths and weaknesses are. I'm fine with 1 on 1s and all that, but when it comes to the technical part of the interview process like a take-home design project or a live design challenge, I usually fail.
I know I have the skills. My work has always been well-received, and I've always received good marks on reviews and feedback, but I haven't been able to translate it to the interview process.
I've tried to assess where I go wrong, and I think I panic and put too much effort into trying to figure out what they want to see or come up with something innovative rather than just doing the work and following my normal approach.
The stress of the time crunch (and sometimes not knowing anything about the product or problem) only adds to my inability to improve in this space.
When I've sat on the other side of the table, I know that most interview processes aren't that well-structured or set up to actually evaluate the candidate. Even when there's a good process, it's often too easy for one person to influence the results.
So sometimes I tell myself that it was a crappy interview process and it just wasn't set up for someone like me to succeed. But I still end up stewing over it and feeling depressed for a few days.
Not saying I disagree with you though, I think identifying heavily with your job makes it even worse.
The fact that you are getting steady interviews speaks for itself: you have the design chops and experience. People don't know how to interview design candidates well as it is, and removing the physical cues you get from an in-person interview makes it even harder to assess a designer. Also, it could just be that companies are flying by the seat of their pants, and a lot have reduced their workforces with this second wave of infections.
High probability it's not you, it's them and their broken process during covid-19. Keep your design skills sharp and keep building your portfolio and use this time to work on concepts that you are passionate about that haven't come up in a 9-5, that will help you until the right opportunity comes along. You got this!
My approach for take home challenges is this. Often I complete them and never even get any feedback from the firm beyond a form rejection letter. If that is going to be the case anyway I might as well make something I'm proud to have in my portfolio, even if it doesn't line up with exactly what they are asking for. That transforms the work from something that I'm doing for them to something I'm doing for me. I know that's hard when you are unemployed and have bills to pay, but if at the end of the day you have some work that at least meets with your expectations you can retain that bit of your pride.
When JS frameworks and SPAs took over the web, he adapted. But as a UI engineer, he cares more about the craft's visual aspects than the technicalities of JS.
Although he has a proven record of work, he is now dealing with interview questions like "what are the problems with closures", "when do you use function.apply()", and "what is immutability".
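For readers who haven't faced this kind of trivia, here's a sketch of what those three questions usually probe. This is illustrative TypeScript I wrote for this comment, not questions quoted from any actual interview:

```typescript
// Closure pitfall: `var` is function-scoped, so all three callbacks
// close over the same `i`; with `let`, each iteration gets a fresh binding.
const varResults: number[] = [];
const fns: Array<() => void> = [];
for (var i = 0; i < 3; i++) {
  fns.push(() => varResults.push(i));
}
fns.forEach(f => f());  // varResults is now [3, 3, 3], not [0, 1, 2]

// Function.prototype.apply: call a function with a given `this` value
// and the arguments supplied as an array.
const max = Math.max.apply(null, [1, 9, 4]);  // 9

// Immutability: Object.freeze makes mutation a silent no-op
// (or a TypeError in strict mode).
const config = Object.freeze({ retries: 3 });
```

Knowing these is table stakes for JS trivia rounds, but none of it says much about whether someone can craft a good UI.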
The whole process is missing a feedback loop to the evaluators, resulting in the OP's situation. In my own situation, it took several years after that mess of being forced to shotgun candidates to pick apart what in the experience was actually predictive of success and what wasn't. To this day, the most successful hires from my perspective were those who just really wanted the job and wanted to contribute to the product; definitely no strong correlation with "leetcode" ability.
It's not just that the coding quizzes are poorly administered, but that hiring managers and leadership industry-wide have been obstructive to getting evaluators feedback (for starters, what did a candidate's other offers look like? where were they 3 months after the interview?) and especially tight-lipped on sharing interview questions. That last bit is particularly obnoxious because people monetize their insider knowledge either through things like Rooftop Slushie or they take a more Gayle Laakmann angle. (Just to be clear: while many questions are "protected under NDA," many questions are also stolen from other companies -- interviewers asking them are already violating their own NDAs, or potentially feeding the question bank of their new employer.)
What you can do is go to your manager in your 1:1 and ask for interview feedback. Ask what happened to candidates 3-6 months after the interview, even if they were hired (perhaps not just to your team). Start to get feedback, and then work from there. You'll probably do better interviews, and at least curtail some of the information arbitrage culture in the C-suite.
I'm not exactly sure I buy it, but it seems like the simplest explanation.
The leetcode stuff is just masochism. You will spend X months beating DS&A into your brain until someone can pull out a random question and you immediately know what questions to ask, what steps to take, and how to write it out on a whiteboard.
This kinda bit me in the rear when I applied to Google a couple of times. As a nerdy American male, aged 22 to 40, with above-average people skills (for nerds), I got along great with everyone I talked to. But the hiring committee would get a sloppy page of half-functional garbage code, along with a note like "Seems like a great candidate". I found another FAANG company I fit into a lot better anyway, so it all worked out great.
What's different from you and many other people is that you admit and face your bias rather than justify it.
I'm seriously curious what Google interview board members have to say about this study. People like Gayle Laakmann have been justifying the whole process for years; but now that there's actual science, what do they have to say in the face of it?
You try to help them relax, but you don't have much time, and the whole thing is just so unnatural.
Other places I've since been at, and interviewed with, give you a laptop, a problem to solve, and some time. You can focus on the problem instead of the situation, and I think that really helps. It also helps the interviewer keep some distance from the candidate. It's too easy to micromanage the interview and keep the candidate uneasy, but the natural "don't interrupt someone who's working" instinct keeps interviewers more distant while the candidate is programming.
Coderpads over video conference are, I think, pretty good when the candidate has a good space to work in. They're in comfortable territory and you can watch them work keystroke by keystroke. The interviewer feels like they're getting more accurate data about how the candidate really operates. I just wish it handled keyboard shortcuts better -- emacs users like me hate seeing Ctrl-N bring up a new window.
The issue of IDE/compiler/keyboard/keybindings is also there, but that's easier to work with.
The problem is this isn't what gets you the smartest people on the planet. A small number might be interested in DS&A to that level, but most are not. They will learn enough to know how to google what they need and move on to what they are interested in. Smart people are constantly getting job offers from coworkers and bosses who move to other companies if they don't start their own.
The few smart people who do put in the effort necessary to get into the FAANG companies always leave after getting the golden sticker on their resume. They leave behind a residue of SAT preppers with PhDs in inverting binary trees.
At Google I've worked with some very humble but also super intelligent awesome engineers, people much smarter than I. Definitely the majority of my interactions. But I've also worked with people who clearly took the "Google hires the smartest people" company line, and their own success at it, a little too personally.
I don't think it's a good message to send. Although, to be fair, I haven't heard it internally in a while.
Also, I don't know if it's really a gold sticker anymore. Nor do I think it's true that the smart people leave. Some do, but honestly, there are some damn brilliant people at Google who have found their brilliant corner and produce brilliance there.
Or some people here who are just brilliant at playing Big Company. Unfortunately there's more of that all the time.
That's what they keep saying. However, it remains to be proven that regurgitating answers on a whiteboard in 45 minutes implies that the candidate is not a false negative, or even a true positive to begin with.
That's the crazy thing about this. FAANG has near-infinite resources, relatively speaking. Google could be A/B testing their interviewing process, experimenting new approaches, experimenting (as the OP study did) with eye-trackers and other tech to try to gain insights on candidate behavior. They have the "engineering for the sake of engineering" and "innovation for the sake of innovation" culture that allows for moonshots and boondoggles. I don't think they're exactly known for following "if it ain't broke, don't fix it", given how many times they kill off products only to recreate them later on, especially their chat apps.
Yet they stick to the traditional whiteboarding method because-- why?
People use this argument often, but is the technical competence of engineers (specifically, at passing their interview process) the explanation for their economic success? And not market penetration, near-monopoly positions, great UX, etc.?
Not to mention, as we've seen in the high-profile Facebook SDK crashes this past week, share price doesn't necessarily mean product quality.
These events are noteworthy because of how rare they are for such a large software-based company.
It's also noteworthy because of how destabilizing this was for the immense number of apps that depend on that software.
The compensation is much better. The management is much better. The work life balance is much better.
I moved from SEA. Everything is worse there.
I read this and feel like people have unrealistic standards. FAANG isn't good enough? Working there for a year probably puts your wealth in the top 0.1% of the world. That's not good enough?
Generally speaking, I think more intelligent people will have an easier time learning the patterns in algorithms.
It's not the best filter, but it's easy to see why it's an attractive option for companies.
Therefore her business interests are orthogonal to her opinions on "interviews." She may still be biased, but I don't think business interests color her bias.
There are other people who always fail those coin tosses
Of course, if the pipeline is massive (whether it's jobs, schools, etc.) this tendency gets amped up even more and anyone who doesn't come across as pretty much perfect on all dimensions--whether they are or not--is going to get dinged.
And sadly, this is common and recognized at Google, was mentioned in interview training, and is just said to be part of the fact that the bar gets higher, or something, mumble mumble.
I find this disturbing.
I wonder now: for every "bad one" not let through by this sort of process, how many other "good ones" look at it and say "no thanks"? How many hear about this sort of hazing and are dissuaded from applying in the first place? How many experience it one too many times and just quit the industry altogether? How many of those "bad ones" are even objectively bad, and not just having a bad day or intimidated by a process rife with both intentional and unintentional hostility? In other words: are we actually assessing what we claim to assess with _any_ predictive accuracy, and is the collateral damage to company reputation and the pool of available candidates - a damage often hidden to the individual companies that impose this process - even worth it?
And that doesn't address the file drawer effect. I'm 54. I can code a binary tree, hash table or what have you, if I have to, but I am not as practiced as somebody fresh out of school, because that heavy lifting is done by libraries, and I'm busy contributing original mathematical algorithms that no one ever asks about because they don't have the background to understand the explanation. So I don't go on FAANG-type interviews. I know I will fail, and even if I don't, my offer will be predicated on that apparently lower performance. At best I will have an utterly miserable experience. So I select myself out, mostly on age.
This is not good, for so many reasons.
This made me chuckle, because it's absolutely how it is.
Some other comments here talk about how nobody creates new algorithms these days unless they are a researcher. But it's not true at all! Like you I'm creating new algorithms and other tricky techniques all the time, but I'm not fresh on the stuff I learned at school... or so I thought.
I have been really surprised to find some screening interviews recently asked me shockingly trivial questions. So simple that I stumbled over myself trying to simplify the answers, thinking "I know too much about this topic and need to keep it simple", and "can these questions really distinguish candidates?".
I did the TripleByte online test recently and found the questions much simpler than I expected, including those in languages I've never seen before. That is not TripleByte's reputation; I see people writing about how difficult they found the questions. (Admission: I didn't score all 5s, but I can't figure out why, unless it's timing, as I think I answered them all correctly.)
So I'm thinking, perhaps it's not so bad, people just make it sound bad because there's a wide variation in people's knowledge, abilities and expectations.
If you are thinking you might apply for something like a FAANG or other hard-reputation company, and feeling put off by the horror stories, I would say, just take a look at the old stuff a bit for a refresh, then give it a try and you might be pleasantly surprised to find their reputation is because people less skilled than yourself found it hard. Their "hard" might not be hard for you.
You'll probably still fail the interview if it's a FAANG because they are so selective, but I would bet my dollars to donuts that it would be more refreshing than miserable if you're not attached to passing.
I know your point isn't about yourself, it's about bias in hiring, including bias due to perception by the candidates, but I wanted to address that side point about feeling there's no point applying. That sounds like anxiety to me. (I have it too, I'm trying to get over it.)
Ooohhhh this is frustrating. Nothing is more frustrating than telling an interviewer about something cool you've done, and realizing they don't understand, but also don't care.
Google is basically using anxiety to filter good candidates and eliminate false positives which works in a sense but is still highly illogical.
Why not use a technical filter to filter for technical candidates? Anxiety seems like a pointless filter.... how does that even eliminate false positives?
People without that niggling sense of anxiety always in the background, the folks who are easily going to ace the programming interview because their anxiety levels are so low, those folks, I theorize, are potentially not going to be so careful. Now you could argue that that's a plus in many situations - a startup that needs to get code out the door right away, for example. But it really depends on the domain. For critical systems in applications like healthcare, avionics or robotic control I think the anxious coder is the one you want.
Companies that completely weed out the anxious programmers, as you imply, will have a different culture with perhaps too much emphasis on risky behavior.
Could this be deliberate? Working with people with anxiety problems is often a pain in the ass, as they won't report problems or ask for help for fear of seeming stupid.
Confident people are far easier to work on a team with.
E.g. it's acceptable (but not ideal) not to promote talented commanders. It's catastrophic to promote someone to General who isn't ready or is unsuited.
Although as a counterpoint, it's also easier to get work, because part of the value I'm providing is that you "fire" me whenever you want and no hard feelings. So nobody does "leetcode interviews", because it's not like we're getting married, like an employee. So there's much lower risk to hiring someone, because a bad hire doesn't infect anything, because you expected it to be temporary in the first place.
I'm not sure how much engineer skill matters though. One slightly above engineer might be fine. But if half your org is made up of slightly above average engineers, I feel like there will be a knock-on effect.
There exists an incredible amount of arrogance and ego among them by and large. The best are humble and curious while maintaining clear and direct communication. To be fair, they are often tasked with making important judgement calls without complete information, though that's the fault of leadership/culture taking a poor approach.
The interview started with a bit of small talk; I chatted with one of the guys about Brood War and StarCraft 2. Later, during the technical interview, they asked me about the difference between private, protected, and public, and said I was the only student they had interviewed who had answered correctly, which was wild and honestly stunned me for a moment.
They liked how in one of my examples of scripting something I had written some fun dialog and I mentioned I did writing as a hobby which seemed like a plus. I talked about which classes I enjoyed the most and how they were challenging/interesting.
I did not get a job offer, and learned from a friend that I had come off as "depressed and disinterested" (I don't think they realized he was going to relay that info to me...). All I can do is guess that it was a combination of showing up completely drenched and sharing how much I enjoyed those "challenging" classes, which made it seem like I would quickly get bored of scripting...
After that, I bombed an interview for a QA role because I went into it completely unprepared for just how much it would differ from a programming interview.
Point being those first two experiences got in my head and I basically had anxiety over anything job-hunt related for the next 2.5 years.
When I did get a job, I was extremely happy with the interview process (even though I felt I did poorly on it). Here's more or less how it was structured (TL;DR):
- Office tour/chat with a programmer who had referred me
- quick 5 minute introduction to senior programmers in charge of the technical interview
- 1 hour alone in a meeting room with a laptop and 4 written questions (a generous amount of time)
- ~15 minutes reviewing my answers with the senior programmers
- importantly, they gave me a chance to talk about my answers and when I got something wrong they would simply state that it had an error and see if I could spot it
- 15 minutes on C++/memory/performance/behavior quirks (important stuff in AAA games)
- ~30 minutes talking about stuff I had worked on
- occasionally they would mention something related I hadn't heard about and explain it while gauging how well I could follow along
The process seemed less focused on where I had gaps in my knowledge and more on whether I had a decent amount of knowledge in general and the ability to recognize and correct the gaps in my knowledge.
Sorry if it's a bit of an info dump, but it's something I think back to a lot.
I've often wondered if this is because, from the HR perspective, false positives are much more costly. I.e., Those who would be helped by limiting the false negatives are project managers who are too downstream in the process for HR to care.
If you reject someone who would have been a good fit but still hire someone good eventually, that’s ok.
A bad hire is not fun, not for HR, not for the hiring manager, not for the colleagues and ultimately not for the person who got hired. Especially if they had to move for the job, possibly with family.
A bad hire has massive negative consequences for many people. Of course you want to avoid that, if that comes at the cost of not hiring someone who would have been great occasionally, that’s unfortunate but acceptable.
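One way to see why HR leans that way is a back-of-the-envelope expected-cost calculation. All of the rates and dollar figures below are invented purely for illustration, not taken from anyone's data:

```typescript
// Hypothetical numbers, invented purely for illustration: why hiring
// pipelines tolerate false negatives far more than false positives.
const costBadHire = 200_000;    // assumed cost of a bad hire (salary, disruption)
const costMissedHire = 20_000;  // assumed cost of rejecting a good candidate

interface Process { falsePositivePct: number; falseNegativePct: number }

// A strict process rarely admits a bad hire but rejects many good candidates;
// a lenient process does the opposite.
const strict: Process = { falsePositivePct: 2, falseNegativePct: 40 };
const lenient: Process = { falsePositivePct: 15, falseNegativePct: 10 };

// Expected cost per candidate, using integer percentages to keep the
// arithmetic exact.
function expectedCost(p: Process): number {
  return (p.falsePositivePct * costBadHire + p.falseNegativePct * costMissedHire) / 100;
}

console.log(expectedCost(strict));   // 12000
console.log(expectedCost(lenient));  // 32000
```

Under these made-up costs, the strict process wins even with a 40% false-negative rate, which is roughly the incentive structure the comments above describe.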
If you're hiring people at the very top or with rare skillsets, like building browser engines, building databases or people whose skill is at author/contributor level to relevant technology (for the job in question), you're probably not going to go with the usual process anyway as there is less uncertainty.
What virtually every attempt shows is that it is better to hire people who are roughly qualified and see what happens, managing the results instead of filtering. Having an HR strategy for hiring creates biases (i.e. requirements based on your current workforce) that get you an overly similar and therefore collectively incompetent workforce.
It gives so many advantages:
- people will have some fun trying to solve a real issue / build a little software
- they will show how good they are at Stack Overflow, general methodology, cleanliness, test coverage, deployment, and even git (you'd be surprised how many can't even gitignore their project files, make a build script that works, or write real tests)
- horrible candidates with fake resumes will not be able to lie their way through; cheaters will bomb the code review quite fast
- the candidate will also discover their future teammates' general work attitude during the review, and if the reviewer is good, that will win the candidate over faster than a "how many billiard balls can you fit in a Boeing 747" type of interview
- the candidate will even LEARN during the code review => you get something out of the interview even if you fail
Since I was interviewed that way myself - doing my little project at my wife's pregnancy bedside, smiling like an idiot at solving the challenge, then falling for my manager as he showed me new tricks during the review - I'll never run any other kind of interview myself.
> people you really want just don't have a reason to waste that time
I'm "really wanted". In the past week I have had to say no to two clients calling me for follow up work. Where do you think I wasted my time?
1) How good they are
2) How much they actually care about the job offer -- the more detail and the more passion they put in the exercise, the more into the job they are.
The input required for this type of interview is heavily skewed in favor of the company. No thanks.
Keep brownie points on pull requests to unlock a "passed technical interview" badge à la Stack Overflow, or do a one-off evaluation based on contributions when the candidate applies. Stack Overflow contributions can equally be taken into account to gain points, and arguably their badge system already does the job for you.
Github could introduce something similar and wipe out "anxiety interview" completely, if they wanted to.
There is really no reason to do a technical interview when you can inspect open source work. For people who don't have contributions yet, open source libraries/projects by the company should be made available.
Seems like win-win for everybody.
How in the world did current developers EVER get ANY job? Who gives developers the chance to grow these days?
You can follow along or be proactive, for instance start a new github account just for interviews, do code wars challenges, write a blog and just link your writeups, github and blog to applications. Along with your CV with major projects etc. of course.
Most people I know who work at big firms got calls from friends or friends of friends to interviews, so spending time socializing the local hacker circles is probably going to be worthwhile as well.
If you have life, hobbies and family, you should still be able to schedule 1-4 hours a week to job hunting, much more if you don't have a job of course.
It's not pretty or much fun, but it's certainly doable.
UX people could tune or at least comment on the interface, security people could fuzz or pentest the library... however, a lot of actual problems in actual open source libraries require a crapton of internalization -- again, for free. Granted, it's for open source, but it could be argued it's an even WORSE approach for the company, since they're getting free expertise and work.
Also, I'm a private person, I have splintered online identities, I'm not going to give my 'hobbyist' github account in a professional context, and I'm also certainly not going to make a github account just for interviews and applications.
I'm lucky enough to have connections that will probably keep me employed as long as I like, and the tales of interviews and free work that's required to put up just to get a chance seems insane.
4 hours... okay, I can sort of see that. I might set myself a limit of 1 hour and tell them: this was my approach, this is how far I got. I've heard of 30-60 hour workloads given to applicants to 'top five' companies, and it's just ridiculous. As are the stories of interviewers who have no technical skills themselves and are just looking for a canned response (Google, looking at you).
I was going to say that the best approach I've seen are companies that put up interesting puzzles and security challenges on their website. The technical skills required aren't _that_ high, but certainly require an amount of ingenuity, persistence and just plain being interested in tinkering that they feel suits their company profile. The applicants can solve or try to solve these challenges, do writeups and probably get interviews just based on those.
Granted, I can also understand that someone looking at 50 companies all giving them 4-10 hour workloads to MAYBE get an interview might feel frustrated and overloaded. It's a classic chicken-and-egg problem. If I were to start looking for a job from scratch, I'd probably solve a bunch of Code Wars challenges and security CTFs, make a blog, and just link my writeups and blog in applications.
If you care to weed out bad candidates, what help are 100% perfectly fine take-home exercises? The only thing that tells you is that all of your candidates are viable in the first place. Just go into the interview with THAT assumption and ask the right questions.
Why do so many companies assume that people applying for a certain job don't meet the basic requirements? Is that really the rule rather than the exception? That's just a bad-faith assumption.
Just make sure your hire fits into your culture and give them sufficient probation time to grow into your codebase.
> what help are 100% perfectly fine take-home exercises? The only thing that tells you is, that all of your candidates are viable in the first place.
I think you might have interpreted that in a different way than what was meant.
> incredibly high stakes algorithm riddles
Yeah, there you go. This is your problem (Google/Facebook, I assume?). It's more than questionable to expect candidates to solve complex algorithmic problems on a whiteboard in 45 minutes. You should ask moderate algorithmic questions that offer many solution approaches and accept solutions that are "good enough". It is not about having a candidate reproduce some magic insight that an MIT student had 30 years ago during their master's thesis (and doing it in 20 minutes, lol). It's about giving a reasonably complex problem, not too easy but also not too difficult, that allows you to focus evaluation on:
* Does the candidate write proper code that isn't far from compiling? (If they can't, they don't have the experience. It's like not being able to write without a spell/grammar checker. Small mistakes are fine. Big mistakes indicate lack of understanding.)
* Does the candidate have a structured approach to problem solving? (All the time I see people start writing code immediately and get completely lost in what the problem even was. This is a red flag to me and baffles me every time.)
* Does the candidate debug his code and walk me through. Does he find obvious bugs while doing so and can he convince me that his code works? (If not, and that happens often, its another red flag)
* Can he rank the speed of his solution with other theoretical solutions? Let's say they found an N^2 algorithm. I usually ask if there is anything faster (even if there isn't). This shows if they have some decent fundamentals in CS and are able to think about the boundaries of an optimal solution and how far they are from it. This is something, people without a CS major usually can never do and unfortunately also not too many with a CS major. It's kinda relevant though, if you optimize for performance and have no clue what the theoretical limitations are, then you are grasping for straws.
There is one big secret for getting A LOT out of easy questions:
I start to modify the problem statement and see if they understand how this changes their solution and their algorithm. People who don't have a good grasp of CS will fail miserably at this task, because this isn't something you can memorize.
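To make this concrete, here is an illustrative example of such a question, with a brute-force and a faster solution. The problem choice ("two sum") is mine, not the commenter's:

```python
# "Two sum": given a list and a target, find indices of two elements
# that add up to the target. A classic question with several approaches.

def two_sum_brute(nums, target):
    """O(n^2): try every pair."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def two_sum_hash(nums, target):
    """O(n): remember seen values and look up the complement."""
    seen = {}  # value -> index
    for i, v in enumerate(nums):
        if target - v in seen:
            return (seen[target - v], i)
        seen[v] = i
    return None
```

A cheap "modify the statement" follow-up: what if the list is sorted (two pointers, O(1) extra space), or if the values arrive as a stream? Memorized answers fall apart there; understanding doesn't.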
In uni I maybe had one week of doing big-O analysis; it never came up afterwards and I never got interested. I've seen 20,000-word debates on Reddit about whether something is n, n², or log n... and to my knowledge nobody ever learned anything from those.
In the workplace I've several times run into the problem of "this is taking way too long," developed tools and methods to measure and drill down, then figured out whether it could be improved or should be left as-is for now.
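That measure-first loop can be as small as a couple of `timeit` calls; the string-building example below is mine, purely to show the mechanics:

```python
# Measure two candidate implementations instead of debating big-O.
import timeit

def build_plus(n):
    s = ""
    for _ in range(n):
        s += "x"
    return s

def build_join(n):
    return "".join("x" for _ in range(n))

t_plus = timeit.timeit(lambda: build_plus(10_000), number=50)
t_join = timeit.timeit(lambda: build_join(10_000), number=50)
# Compare t_plus vs t_join, then decide whether a change is worth it.
```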
Honestly, discussions and articles like this leave me absolutely terrified of interviews. My first and only technical interview was my future boss leaving me alone for 30 minutes in a room with a laptop: "Code something you like, yourself."
That man was brilliant.
To play devil's advocate, someone with the word Senior in their title would probably recognize big-O issues on sight, skip the "this is taking too long" step, and code it efficiently the first time. This is industry-dependent as well, obviously. I've been doing low-latency C++ for 15+ years, and it's not really something you can compromise on.
The mistake, I think, is skipping otherwise good candidates because they don't immediately see these issues. We should put more emphasis on identifying these weaknesses and finding the right mentors to teach them up. We should be hiring more junior and mid level engineers, with the assumption that they will learn and be up to speed in a few years. This has been my approach to interviewing, but I'm often vetoed by other team members.
I liked the experience and the process; the morning was a blast. We really clicked.
In the afternoon I had to use the same setup as the guy I was collaborating with: Vim with his bindings, and Golang. I had never used Golang before, and even though I was proficient with Vim, his bindings were quite opinionated.
I felt the guy from the afternoon had zero interest in programming with me. He wasn't collaborating and was looking at his cellphone the whole time.
The problem was to write a functional test for a CLI that would spawn a web service in Golang, parse some JSON, and run some assertions.
Golang had some quirks with JSON and newline encoding that I spent my whole afternoon on. It's something he could have unblocked me on, since it was a very specific problem; instead he kept telling me to read the official Golang docs, not even to Google the error.
As the one being interviewed I followed what he asked and worked in his environment, but it wasn't representative of how I would have worked.
That said, I feel it could work! I just should be able to judge the person who pair programmed with me instead of just being judged :) the person in the morning would be a 10/10 and in the afternoon a 4/10.
I would stand up in front of 80 or more engineers, go into esoteric areas of the operating system, talk eloquently and at length about the structures and code found there, and live-code on a projector, answering questions about the code and debugging the code of everyone in my class as they followed along. Day after day, 40+ hours a week. It was intense. I became an independent advisor to teams and a consultant for some of the groups on how to bring up Android on new devices, create device drivers, and so forth.
In some of those classes, I wondered how some of the people I was teaching were actually able to hold down their jobs.
Months later, after I stopped teaching, I interviewed at Intel and PayPal and Cisco. I bombed every interview. I was, by all accounts, unable to program my way out of a wet paper bag.
Everyone has off days.
It was received pretty positively; especially early on (2012-2015) we had a lot of candidates indicate they had never done anything with JSON or REST before. (later on they mentioned having done nothing with XML before, lol).
And we got to see some interesting solutions and creativity; one guy we hired did it all in J2EE, while we were a Spring fanclub. But the code was sound, he showed that he understood how it worked, etc.
The conversation during the technical interviews were entertaining as well.
Can I ask for some example(s) of what type of questions you asked?
You can feel the pressure mount - any algorithm that doesn't run on the first try or second (with a couple of quick fixes) and you're being mentally discounted.
The same goes for silence: if you're thinking through something and take abnormally long, you're discounted.
The Perfect Dismount. A 10.0 is what's needed to land a FAANG job.
These are nothing but shit tests.
What are shit tests? When somebody fucks with your head to see how you will react, what you are experiencing is typically a (series of) shit test(s). Everyone has been shit tested, gets shit tested and will continue to be shit tested; We use shit tests to make value judgements about people, likewise they can be used to determine how people cope under pressure. The underlying mechanism of shit tests is to test your mettle.
That went very well and I was feeling pretty good.
Then the CEO had a quick chat and happened to ask me a very simple technical question and my brain would not work - I completely failed to do it even though it was trivial.
I think that's how I discovered that mental context switching is a real thing - I had spent a few hours preparing and giving a presentation and then when asked a simple technical question those parts of my brain were completely offline. I think the shock of not being able to do it had a big effect on me and I went to bits (possibly the first time I've ever done that).
You had to go to that level to understand that? I am glad you did realize it, but I am surprised it was that late in your career.
Just take an unseen technical problem, solve it for the very first time in front of a colleague, then double your time as the expected time for candidates in an interview.
When you're under time-pressure, you actually spend more time thinking about how much time you have left, rather than thinking about how to solve the problem.
> Just take an unseen technical problem, solve it for the very first time in front of a colleague, then double your time as the expected time for candidates in an interview.
That gives very poor signal in my experience due to the time pressure. Your process is now selecting for people who are good at taking tests instead of selecting for good engineers.
I've interviewed 6 candidates within a week, and hired one. Never regretted the outcome.
I set up some test code like so: an input variable, a comparison variable, and an empty function.
The function takes the input variable as input, and its output is compared with the comparison variable.
The interviewee would be asked to fill in the body of the function and play with it until the function gave the expected output.
I set up a computer with two screens, put an IDE on one and Google on the other. Two chairs, and some coffee.
First I chatted with the interviewee for about 10 minutes, trying to make them as comfortable as possible.
Then, before the test, I stressed that I am also a programmer, that I know how awkward it is to be coding while someone watches, that this alone would make me commit silly mistakes, and that I expected the same from them and it was completely OK.
Afterwards, I encouraged them to use Google in plenty, and to feel free to stay silent or to explain as they went.
So I got to assess their fluid intelligence, their ability to break the task down and progress efficiently, their English proficiency (if they used Google in English), their choice of search keywords, their choice among Stack Overflow responses, etc.
In the end I rejected a guy with 8 years of experience on his CV and hired a junior. Looking back, that turned out to be an amazing decision, the best I could have made: work went well with the junior, and I had the (mis)fortune of working side by side with the 8-year guy several years later.
PS 1: The task was to write a recursive function to traverse a multidimensional array and find out whether the first letter of every "value" was a capital letter.
PS 2: The work was to maintain and develop a mid-scale SaaS project along with me.
PS 3: The country's mother tongue was not English, and English proficiency was not good on average.
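A quick sketch of that PS 1 task in Python; the exact input shape (nested lists/dicts of strings) is my assumption:

```python
# Recursively traverse a nested ("multidimensional") structure and
# check that every string value starts with a capital letter.
# The input shape is an assumption, not the interviewer's exact spec.

def all_values_capitalized(data):
    if isinstance(data, dict):
        return all(all_values_capitalized(v) for v in data.values())
    if isinstance(data, (list, tuple)):
        return all(all_values_capitalized(v) for v in data)
    if isinstance(data, str):
        return bool(data) and data[0].isupper()
    return True  # ignore non-string leaves (numbers, None, ...)
```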
I think Fizzbuzz can be actually a reverse test: if an experienced programmer can fail fizzbuzz in an interview, then maybe it's the interviewer who screwed up.
In practice whiteboard interviews assess this pretty well. A new senior engineer must immediately be shipping optimal, tested, and well designed code. Otherwise you'll have Junior Engineers coaching Senior Engineers and the Junior Engineers will just get frustrated and leave. Worst case, you'll have an "architect" who never learns how to actually deliver independently in the environment.
A senior engineer comes with years of experience designing and coding systems that took years to develop and run at scale. Given time to learn, they should be just as good at solving day-to-day bugs, but if that's all you assign them, you are wasting the time and money you spent on that experience. If you don't value experience, don't pay for it.
It's pretty normal to have more junior engineering staff help new senior staff learn the ropes. It's also common for those less experienced staff to learn from the more experienced people even in that situation. I'm not saying senior hires should be respected from the first second because they are over 30 or have a long resume.
These two behaviors aren't as dependent on knowledge of the system as they are on core coding competencies. The whiteboard interview reasonably works to assess these, but it requires a lot of prep and is prone to false negatives.
There are questions being asked that full academic papers have been written on. What's even worse is that I typically get interviewers who are not experienced or prepared enough to give hints or work with you.
I am happy that you've had an epiphany and seen where you were wrong. If you are ever in a position to build a quality team, interview people for the skills to work with and teach others, a drive to optimize, experience, and a willingness to learn. You'll have a fairly good group of candidates to pick from among those who don't do well in these coding interrogations.
I had the same question you mention. "This is a problem a paper was written on, are you expecting me to derive a whole paper from first principles in half an hour, or are you expecting me to have seen it before? If it's the latter, why not let me Google, as I normally would?"
I feel that saying "I would solve this problem with <algorithm>" should be good enough, and if I'm asked to implement it, I should be able to copy paste the Wikipedia pseudocode and start converting.
> I feel that saying "I would solve this problem with <algorithm>" should be good enough, and if I'm asked to implement it, I should be able to copy paste the Wikipedia pseudocode and start converting.
Long ago, milo.com posted a challenge to codeeval.com (both websites now dead) which turned out to be an example of the "assignment problem". That is, given a set X, another set Y, and a payoff function f(x,y), assign each element y in Y to exactly one element x in X such that the total payoff from all assignments is maximized.
I tried to solve it, realized I didn't know how, and was eventually able to look up the solution in a couple of papers. There is an algorithm called "the Hungarian algorithm" which will do it.
So I wrote up the Hungarian algorithm and sent it in. This was good enough to get a phone interview with the company.
I tried to prepare for the screen by going over the Hungarian algorithm to the point that I could myself give the proof that it correctly solved the problem. This was fun. But in the phone interview, it was barely mentioned at all. According to the interviewer, I passed that step by just mentioning the name "Hungarian algorithm".
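For what it's worth, today the assignment problem can be solved without hand-writing the Hungarian algorithm at all; SciPy ships a solver. The payoff matrix below is made up for illustration:

```python
# Solve a small maximization instance of the assignment problem
# with SciPy's built-in linear_sum_assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

payoff = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])
# rows[i] gets assigned to cols[i]; maximize=True maximizes payoff.
rows, cols = linear_sum_assignment(payoff, maximize=True)
total = payoff[rows, cols].sum()  # best total payoff (11 for this matrix)
```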
Maybe I've had a particularly depressing career, but 99% of it consisted of various ways of interacting with a database.
How? You have unlimited time to find the answer; it happens before you make any contact with the company at all.
Everyone inside Milo seemed to think that their hiring process was hugely better than what happened elsewhere, including by producing better hires. But the approach -- set a difficult challenge instead of an easy challenge, such that, if someone passes the challenge, you will almost certainly hire them -- still seems to be vanishingly rare.
However, there might have been external reasons. I was hired after they were acquired by eBay, and apparently they had preexisting approval for a certain number of hires that year that they didn't end up meeting. If you were earlier or later (pretty likely!), maybe they might not have been so cavalier.
I’m not complaining. Who knows? Getting rejected might have been for the best.
They're not looking for someone who is eager enough to show they're willing to work through it even though they have no idea how to proceed.
They are looking for someone who got lucky enough to be ready to answer that particular problem, but who won't stand up to them and say the ask was a bit much for a reasonable interview meant to see whether you're a fit for the company and skilled enough to get the work done.
They aren't looking for someone who will dress them down and say it's incredibly insulting to ask for something that took an academic a long time to come up with. (Although that's a rough and fairly unfriendly approach, it shows a lack of desperation, confidence in one's knowledge, and a good understanding of the difficulty of the task at hand; heck, that's a good signal they won't underestimate points during planning.) (There's a singly linked list question Amazon used to ask that qualifies for this.)
A friend of mine, when asked to implement an AVL tree, actually asked an interviewer when the last time he had to implement an AVL tree on the job was. He wasn't hired.
The department should run the interviews through their own employees first. If some questions cause an "interview anti-loop" (a set of other employees S who would not hire E) it's time to revise those questions.
Revise and rehearse these practice interviews within the department, and do it blind, until all of S are willing to "hire" each other for their respective roles.
Most of the time you see the resume a few days before. HR may say a few things: "don't say this, don't say that."
Most companies take the rejection as a point of pride. "We've rejected 100s of candidates."
I don't like the structure, I think it's very flawed, but asking you to do something you do regularly defeats the purpose.
It is the same as if you expect a warehouse worker to regularly lift 20 kg: you don't test whether hires can lift 20 kg at their best, because then you'd get a lot of people who simply can't do the job for 8 hours a day. Instead you make sure they can lift 40 kg or so, to ensure that lifting 20 kg regularly won't be a problem.
Edit: And I think that you'll find that most leetcode problems are pretty easy if you are an expert at doing loops and recursions.
> I'm not saying you'll never have to do it, but it's going to typically be a small proportion of your day job, so why test this stuff so much at interview stage?
It isn't a small portion of your time spent at Google, at least. Almost no engineer at Google ever talks to non-technical people; product managers and engineering managers do that. Most engineers work to reduce CPU usage of backend servers, build scalable backend features, or similar. Even for high-level decisions like choosing which service to use, more engineering hours go into refactoring than into deciding, so there is much greater need for people who can code than for people who can make decisions.
And if you argue that Google is bad at making good new products, that isn't the fault of its software engineers, since they don't decide what to build. It's the fault of VPs, and those don't do whiteboarding interviews.
How is any of that supposed to be part of a good interview process? My only solace here is that those things are red flags on the company side, so I might have dodged some bullets.
It's a technical job; it should be a collaborative effort in finding solutions! Aren't we looking for a team member, not a lone ninja?
Sure, the candidate has to be more vested in advancing the process, but the fact that there's no known-ahead answer would potentially surface not only technical skills, but also personal ones, and ... at both ends of the table.
Not to mention, the "house-experts" could find such sessions stimulating, to say the least... Imagine, you get back to your desk after interviewing a candidate, and your team mates ask you "so what was the tech problem this time?", "could you crack it?", "did he/she crack it?".
I worked for a FANG and conducted over 100 interviews. Most of that time I thought I was asking really hard questions and wasn't sure I would have been able to answer them if I hadn't seen the question before.
I later interviewed at a different FANG, surprised myself and answered harder questions than I typically gave and answered them better than most of the people we hired.
Not sure what to make of that.
Once I had this candidate. Damn, I know my questions are deceptively simple on purpose, but he literally couldn't do anything. Not even a trivial brute force. Not even any related simple knowledge questions. Couldn't tell an average from a median. The only good thing I could write in the feedback was "seems to know some basic syntax".
It was quite a learning moment to read that everyone else was praising him as the most brilliant candidate they'd seen in years. Well, mine was the interview just before lunch, in a schedule that ran atypically late. I guess starting an interview after most people had begun their lunch break was asking for it. He also got hired; apparently the committee agreed that "can't think when hungry" is not that important a flaw.
I'm not sure I know how to write anything from scratch anymore, because I just search/read/alter/test. The breadth of what I work on is 100x wider than it used to be, and so I've become absolutely dependent on quickly reading docs, copying code found online, and then deep diving into testing and rolling it out the door.
I deliver a TON more, but I'm convinced I'd bomb literally every interview I go to today. Today, for example, I updated our firewall rules, updated some Java business logic, tweaked some JavaScript and PHP on the site, reversed an odd macro email-phishing virus we got, configured a new RDS server VM for clients, and updated some network GPOs. 20 years ago, I would have never worked on ALL of those things in the same week - let alone the same day.
Maybe we should interview with small take-home tasks to be submitted with some write-ups to test how people research, reason, and write-up problems, rather than writing code on the spot on a whiteboard. ...but maybe that's just me.
Now things are TOTALLY different. Doing a good job requires extensive, encyclopedic, knowledge of what's out there, how well it works, and how to integrate it. npm, nuget, maven, anaconda, we all know the drill. In a day's work it's much less necessary to implement an algorithm from memory.
What should programmers know from memory? Simple stuff like fizzbuzz. The difference between TCP and UDP. What DNS is for. If SQL data's involved the difference between JOIN and LEFT JOIN.
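For completeness, the FizzBuzz baseline the comment refers to, as a minimal sketch:

```python
# FizzBuzz: the canonical "know it from memory" filter.
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
```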
And, if they're experienced, they must be able to describe intelligently how their own stuff fits into the bigger system they worked on. Because there's always a bigger system.
If they're being hired to be a tech rep to an *sshole customer, possibly their performance under pressure matters a lot. Otherwise, not.
This business of high stakes quiz questions serves just one purpose: feeding the ego of the interviewer. "We have high standards!"
Yeah, so high that we wouldn't hire Jesus of Nazareth to be a storyteller.
Our software is utter trash because the business is very clannish, fragmented, and the people in charge either never ever wrote code or did it 15 years ago.
And yet we fail 95% of the technical candidates we interview because they don’t meet our “high” standards. We won’t even consider hiring remotely (but within the country because we’re making government software) even though an overwhelming majority of the company and tech team has been working from home 80–100% since March... No, we’re insisting on rejecting the handful of candidates that apply in our small town. Insane.
I wouldn't say that. What I feel like I've accumulated in knowledge over time is I can look at a new problem and without having the solution/encyclopedic knowledge, I can know in my gut that (a) there is a solution, and (b) the solution I'm looking at is not the solution.
Merely being able to look up things on the internet without these two things means you will either give up too early or settle on the wrong solution.
This is the most succinct way I can describe what I think >a decade of experience is worth compared to being fresh out of college.
The things that you don't know are possible and thus don't even know how to start solving. You will accidentally reinvent the wheel, most likely very badly. Or give up right away.
An experienced developer most likely unconsciously knows (has read on HN/Reddit/Book/Blog some time in the last 20 years) that a thing they encounter has been solved already, at least partially.
This allows them to start searching for the answer with a few key words, because they know it's possible but don't know how it can be done.
The same goes for having a network of people who know different things: I know the bare minimum about professional pentesting, but I know a few people who are absolute pros in the field.
I feel like my brain is full of "I read a headline about this once" tidbits, in part because of HN. It's like I don't know the answer, but I know the question has been solved before so I know there is an answer.
It's scary how true this is. I know one manager who routinely rejects perfectly good candidates for making negligible mistakes in the interview. All on the grounds of "We have high standards!"
And therein lies the problem. How many interviewers dock marks for iterating over columns, instead of rows? Because that matters, a huge amount. How many interviewers would give credit for "how can you speed this up?" if the interviewee said, "write it in C, and simplify the datastructures you want me to use so we maximise sequential lookups over basic arrays, to maximise cache usage." They'll look at you like you have three heads.
"Don't you know Big N complexity is the only thing that really matters if you're looking for speed?" - then you get Electron.
I feel like this is a huge oversimplification. I’d argue that for sufficiently large projects (say, Chromium) every single programmer who works on it could have a strong understanding of performance fundamentals but the product itself could still have performance problems because there are necessary levels of abstraction involved and no one person has the entire codebase in their head. Performance problems could come from emergent behavior that could not have been predicted by the original engineers.
There is likely a huge organizational focus on performance and more people expressly dedicated to the task because it is a higher priority for Epic. It is directly correlated to Epic's ability to deliver a successful product to game companies, whereas for browsers it is slightly more orthogonal to their success.
LLVM has a bunch of contributors who likely understand the intricacies of machine code better than either of us, and:
> Each LLVM release is a few percent slower than the last.
> The larger problem is that LLVM simply does not track compile-time regressions. While LNT tracks run-time performance over time, the same is not being done for compile-time or memory usage. The end result is that patches introduce unintentional compile-time regressions that go unnoticed, and can no longer be easily identified by the time the next release rolls out.
I think if those LLVM contributors understood compilers better, they wouldn't be introducing those issues. Look at Jon Blow's achievements on Jai. LLVM is the bottleneck for his entire compiler at the moment, and he's planning to replace it with what... Three people or something? But I also think there are structural issues there. I don't think a top-down project from a large company gets to use that excuse.
As an example, Stellaris had performance issues that were CPU bound for a while and it's gotten better. It's not a visually demanding game after all.
I just think that's a weak excuse, especially given most games have tremendous complexity running through the CPU too.
(As a side note I don't think paradox games are a good example. They're famously poorly optimised.)
Developers love thinking about performance. It's fun and asks us to use so many of our skills. However, for many products other quality aspects are more important, like maintainability, extensibility, and expediency of development.
That performance often falls by the wayside among business priorities ends up being reflected in the typical developer skill set.
But I think Electron in particular is a symptom of the fact that GUI frameworks are in general horrible. There was a market gap for usability because using everything else was obtuse, and now we're paying for it.
FWIW all of Slack's competitors are also Electron (Rocket.Chat et al.), so it seems like you possibly know something other engineers don't and might have a competitive advantage!
If someone released a GPU-accelerated version of Electron that had the performance of the GUI in basically every video game, it would destroy Electron. Now, would it be popular? I'm not sure, but let's say the answer is no. Ok, now imagine Google released that.
Learning Node isn't easier than learning how to structure basic, bare-bones code that leverages L1 cache. In fact, it's quite the opposite. You can learn cache locality as a concept in much less time than you would learn Node (or any programming ecosystem for that matter).
As someone else said, businesses hire software engineers to solve business problems. Speed often isn't a priority. Fast software will make the experience perceivably better, but it's hard to get on a sales call and tell a potential customer that your application loads much faster when they are looking for feature X. And most companies buying software are looking at a checklist of features first and then maybe the experience.
You also need to put in the time to make software fast and then keep it fast. It's not something you can tack on at the end—if you want something to be fast, that has to be carefully considered from the very beginning. By the time people notice something is too slow, you're burdened by half baked architecture and a monstrosity of a codebase. At that point, it's near impossible to really make anything fast.
I think the hugely successful SalesForce proves this point. Just one standout among many such products.
If the market values fast software, you'll see a lot more fast software. There _is_ a lot of unnecessarily slow software these days, so it _is_ starting to happen. Think Superhuman, Linear.app, etc. — an entire category of software based primarily on the idea of being fast, aimed at the power user. If these apps succeed, you'll see a lot more of them. If the market doesn't value fast software, most commercial software will be slow.
I don't think companies choose to make bad software as a tradeoff. Salesforce had enough budget to build out those features and not be terrible. I think in general, when software is bad it's because the company wasn't able to make it good. And I think the distinction between those categories is important.
My point was that SalesForce has fairly poor user experience and performance, but does offer the features. It has the additional advantage (for SalesForce) or quick lock-in once the customer commits to it. SalesForce has improved the user experience over time because they have the customer base and revenue to allow that.
Optimizing for speed is sometimes the point; sometimes it isn't.
I have worked on accelerating algorithms in VHDL with a PCIe interface, embedded Linux without an MMU because an MMU uses too many logic gates, digital signal processing systems (FFT, Goertzel, sigma-delta filters) that had to process a lot of data in under a microsecond, etc.
Nowadays I work more in the DevOps space, full-stack dev and whatnot. I have worked with a lot of technologies, different constraints, different teams, different companies (12 in total) in different industries.
Trying to paint all business and developers as bad or as cheap because business requirements do not align with your view isn't really fair.
Notably, I’m also not logged in, which might be another thing they’re trying to get me to do and another difference between those who hit this more often than not, and those who rarely or never see it. It’s been like this for years.
If I remember right the app would also crash but it could take bigger loads. Not sure, I uninstalled it.
Yes, that would be unfair.
The first chapter is all about that, and about how software engineering isn't just programming. You seem to be thinking of it as programming, and I believe you're missing the bigger picture.
If Slack was written in Qt with the same budget it would just be fast. Even with effort we don't have Qt apps that behave like Slack.
Do you feel that you somehow know something that nobody else knows? It's all about trade-offs. Being able to move quicker seems more important to businesses than writing more performant software.
Note that one person is literally doing that, with ripcord. Looks like shit but it illustrates the point.
Monitoring, metrics, deployment, etc.
I am also looking to play said game on all platforms! iPad, Android, iPhone, Mac, Linux, Windows, etc.
Would be great if the chat client supported markdown as well
Complexity, for me, is the software engineering part, which Software Engineering at Google presents really well, and I agree wholeheartedly. I would argue a game is simpler because it doesn't need frequent updates and doesn't live forever.
I work on systems at my company that are used at a very large scale; they were written 30 years ago and are still maintained and changed. That system, I would argue, is much more complex than any game.
Even tho from your point of view it probably only updates files on disk :)
Does it? I've seen some electron apps that run like dogs, and others that seem perfectly performant. So, I'm curious... in what way does the framework run like shit? In what way could it be improved?
> How many interviewers dock marks for iterating over columns, instead of rows? Because that matters, a huge amount.
I think you mean the difference between SoA and AoS. I'm not sure that one is inherently better than the other, except the former scales well for large homogenous datasets.
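For anyone unfamiliar with the jargon, a tiny sketch of the two layouts (the `Particle` class and field names here are purely illustrative):

```python
from dataclasses import dataclass

# AoS (array of structures): one object per entity. Each entity's
# fields sit together, but any one field is scattered across objects.
@dataclass
class Particle:
    x: float
    y: float

aos = [Particle(float(i), 2.0 * i) for i in range(4)]

# SoA (structure of arrays): one contiguous array per field, which is
# the layout that scales well for bulk math over large homogeneous
# datasets.
soa_x = [float(i) for i in range(4)]
soa_y = [2.0 * i for i in range(4)]

# The same computation expressed against either layout:
aos_sum = sum(p.x for p in aos)
soa_sum = sum(soa_x)
```

Both layouts hold the same data; the trade-off is access pattern versus convenience.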
I doubt optimising for DSPs/SIMD/big-data to maximize throughput and reduce cache-misses, is something the typical FAANG employee needs to know about. Could be wrong.
> "Don't you know Big N complexity is the only thing that really matters if you're looking for speed?"
It does, even if you cache-align and pool your data. And you can optimise your in-memory data-structures (and database IO) later, if profiling finds them to be a bottleneck.
And almost no Qt apps run like that. "As long as I don't notice it most of the time, it's not slow." No, sorry, I don't agree. Frameworks should be fast. Slack is unforgivably slow.
>I think you mean the difference between SoA and AoS.
I mean literally in the interview when they do a nested for loop over primitives. Do they lose marks for going row-first? Why isn't that considered a basic rule? It's not complicated. Most interviewers don't even realise there's a difference (!!).
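To make the loop-order point concrete, a small sketch. (In a row-major layout like C arrays, the first ordering walks memory contiguously; in pure Python lists the cache effect is muted, but the access pattern is the same.)

```python
N = 256
grid = [[1] * N for _ in range(N)]  # each row's elements stored together

# Row-first: the inner loop streams through one row at a time.
total_row_first = 0
for row in grid:
    for value in row:
        total_row_first += value

# Column-first: each inner step hops to a different row's storage,
# which on a row-major layout is the cache-hostile order.
total_col_first = 0
for col in range(N):
    for row_idx in range(N):
        total_col_first += grid[row_idx][col]
```

Both compute the same sum; only the memory-access order differs.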
>I doubt optimising for DSPs/SIMD/big-data to maximize throughput and reduce cache-misses, is something the typical FAANG employee needs to know about.
Twitter is so bad it crashes my browser. It's 90% text! The only RAM-hogs are auto-playing videos that disappear after you scroll past them. These engineers are paid half a million per year. Half the time the website doesn't even load (!!!). That's insane.
>And you can optimise your in-memory data-structures (and database IO) later
Ehh, skeptical. Unless your data & algorithms are structured as contiguous arrays of primitives to begin with you're going to have trouble refactoring the abstraction.
It's not objectively wrong to do this. Suggesting that "SoA is always right", tells me that maybe you don't understand the trade-offs between expediency, readability and performance... or understand the trap of early optimisation. Always measure first before optimising, otherwise write code that's easier to read, and simpler to write.
> Twitter is so bad it crashes my browser. It's 90% text!
Not sure how knowing how to micro-optimise cache-effects fixes these issues. Profiling and spending time on perf does, but they've decided it's just not worth their time.
> Ehh, skeptical. Unless your data & algorithms are structured as contiguous arrays of primitives to begin with you're going to have trouble refactoring the abstraction.
The keyword is refactor. Developers and organizations do it all the time - or at least they ought to.
Disclaimer: Used to work in the video games industry. Literally worked for years doing perf, and cache-level optimisation.
No, I don't mean SoA, I just said that.
I don't think twitter is crappy because of cache misses, I think it's crappy because they don't have a solid understanding of what their code is doing. Ignorance of caching is another symptom.
If you build your game on an entity-component system, you can't "just refactor" that to make it contiguous. Your structure is pretty baked.
It's my hunch that you have little idea of what the code-bases for twitter are like, nor what factors are at play that affect usability issues.
> If you build your game on an entity-component system, you can't "just refactor" that to make it contiguous.
Moving to object pooling of components is very doable. There might be some extra work, like extracting a generic matrix hierarchy and physics primitives out, but you'd only do such a thing if you found that d-cache misses on v-table lookups, or i-cache misses from update-code churn, were of real concern... and these could realistically be mitigated with homogenised object pools for the particular use case... though usually, yes, they could.
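A minimal sketch of what a homogenised component pool can look like (all names here are illustrative, not from any real engine):

```python
import array

class TransformPool:
    """Positions for all entities stored contiguously; an entity just
    holds an index into the pool instead of owning its own object."""

    def __init__(self):
        self.xs = array.array("d")
        self.ys = array.array("d")

    def alloc(self, x=0.0, y=0.0):
        self.xs.append(x)
        self.ys.append(y)
        return len(self.xs) - 1  # handle the entity keeps

    def update_all(self, dx, dy):
        # One tight loop over contiguous storage, rather than chasing
        # pointers through per-entity component objects.
        for i in range(len(self.xs)):
            self.xs[i] += dx
            self.ys[i] += dy

pool = TransformPool()
handles = [pool.alloc(float(i), 0.0) for i in range(3)]
pool.update_all(1.0, 2.0)
```

The win is that the hot update loop touches one flat buffer per field instead of scattered heap objects.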
Edit: I'd expect anyone on an engine team, or technical programmers at a game dev shop, or maybe even someone working on a browser renderer to have such knowledge from day 0, rather than the average FAANG employee.
I have seen a lot of really bad Qt UIs as far as responsiveness goes. The Signal/Slot framework can get really hard to follow in huge code bases with lots of state. The whole thread-interaction story in Qt is not ideal.
It is interesting because most people are now actually using PyQt or PySide, even though these are slower than pure Qt, because it is easier to adjust to business requirements with Python.
Oh, and I have personally rewritten some of these UIs in React and replaced them with a web browser which made them much more responsive.
I’d assumed Twitter showing me “an error occurred” on about 50-75% of inbound link-follows to their site, across multiple browsers on multiple operating systems, was an intentional “feature” to drive me to their app.
Granted, they made Electron run fast enough to be usable but I wouldn't call it good software.
Imagine VS Code was exactly how it is now, but had the extensibility of Emacs. It would blow everything else out of the water. Why isn't it like that?
Because most of us don't care. I need my code editor to just work out of the box. I want to be able to write code and install a few plugins to enhance the tools I'm using. Hacking my editor is a waste of my time and even if I wanted to, I'm certainly not going to learn lisp to do it.
How much money does the Y Combinator web team receive? Do you think it's enough that they should be able to render... text?
I will say that this was way worse a couple of years ago. They seem to have added a lot since then.
Which is why 90% of all software is very profitable crap.
I'm somewhat like you; I do a wide variety of tasks every day and simply can't keep all the different rules and syntaxes in my head. I'm terrible at tests anyway and would probably bomb a FAANG interview badly because of my lousy memory.
As an interviewer, I would rather someone be generally smart, have a good work ethic and lots of experience solving problems. In the long run that seems more useful than being able to write computer programs on a piece of paper in a conference room.
It causes a surprising number of problems. A lot of candidates come in with their environments just very poorly configured, run into weird dependency errors, mismatched installed versions, no linting or syntax highlighting set up in their editor because they don't actually ever code on their personal computer, etc. It's not uncommon to spend half the interview watching someone debug their dev environment, or struggle to remember the basic syntax of their favorite framework since they aren't working on the same codebase that they're used to.
I've mostly come around again at this point and decided that even if I like this type of interview best in theory, focusing on standalone problems that can be done in coderpad with no dependencies is a more reliable approach.
Now THAT reflects the actual daily experience of working programmers.
All the maddening little configuration options to get things working correctly, and seem to take up far more time than actually writing code to solve customers problems.
Isn't that a useful way to filter out candidates that you don't want to hire? Presumably they know in advance what kind of code you're going to ask them to write, so they have time to set things up beforehand.
The reality is that for the past few years at least in SF, hiring engineers has just gotten more and more competitive. Most people who pass our interview will have multiple other offers, often some from companies with a bigger name brand than our own. At least in my opinion, most companies in this environment can't afford to have such strict purity tests. It's supply and demand.
This balance may be changing now given the current recession, it seems even in tech things have cooled off a bit the past few months.
It's actually bad practice, in a lot of people's opinion; you should develop and test on a system identical in hardware and software to the one you will deploy on.
You need to provide a laptop or like, a mac mini or something you know you can plug in and has baseline requirements for working on the problem you're giving them. And you need to make sure that the problem isn't complex enough that they can't do it in the allotted time if they struggle a bit with an editor they're not used to (including like, vim without their usual barrage of scripts).
Otherwise you're just trading one bias for another: winners will be people who have personal laptops set up for coding, which is not a universal trait of good employees who code.
Let them use a laptop they bring if they want, but be ready to supply them with something they can use if it doesn't work out.
Do you really have to test for the ability to use tools instead of the ability of reasoning? A whiteboard doesn't seem bad for that. I would like to understand how the candidates reason, how they interact with colleagues and yes, also how they handle stress.
Example from yesterday: I had to fix a problem on a software in production before the end of the day (it must be available today) and before a reasonable hour (I wanted to go out at night.) That means working fast, debug, find solutions, possibly develop workarounds that leave the system working even if the problem is still not fully understood and solved and, most important, don't panic.
I worked at a place that got around this by giving the candidate a choice of their own laptop, a Windows laptop, a Mac laptop, and an Ubuntu laptop. Half a dozen different IDEs on each.
Of course, you have to find someone to take on the glamorous job of keeping track of all the laptops, and wiping them between candidates...
> Do you really have to test for the ability to use tools instead of the ability of reasoning? A whiteboard doesn't seem bad for that.
It depends: If you're a big believer in test-driven development, red-green-refactor, clear naming and good code structure, test coverage, use of version control - you don't tend to see those in whiteboard coding.
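Even something this small is hard to demonstrate on a whiteboard: write the failing test first, then the code to make it pass (the `slugify` function here is just a stand-in example):

```python
import unittest

def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # In red-green-refactor these tests are written first (red),
    # then slugify is implemented to make them pass (green).
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Foo   Bar "), "foo-bar")
```

The habit being tested is the workflow, not the function itself, and whiteboards have no red or green.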
For example, I'm an Emacs kind of guy, so if they hand me a Windows laptop, do I limp along with Notepad? or some IDE that happens to be installed? Generally, as soon as I've settled in at a new job, I've customized my work environment with something as close to Linux and Emacs as possible -- if Windows, I'll install Cygwin if allowed, and/or Emacs and GNU utilities.
I'm not sure I would even look like a proficient computer users on my personal laptop. It runs Windows, and at this point my macos work environment is so heavily customized that I'd be bound to get tripped up just by the sudden unexpected need to use a mouse.
It can easily be solved by asking what a part of the code does, or how it works and why.
I'm a huge fan of giving people tasks that somewhat resemble a real world problem they'd likely encounter or at least aligns categorically with their proclaimed resume feats and more importantly on top of that giving them the same environment they're comfortable working in on a daily basis which might be on a laptop in a quiet room from 6am to 10am in the morning.
However, I'm usually instantly met with the "oh well we don't want to expect too much of our interview candidates" mentality as if asking them to do a "take home" (rather than telling them to figure out how to take a day off of work for an onsite panel) is somehow grotesquely disrespecting their time and would make the company look bad if the broader community even got a whiff that it had been considered. I'm so confused. People easily identify that white-boarding interviews suck for more than just the interviewee. And yet any time a solution is proposed that involves doing something other than putting the candidate in a room in front of a whiteboard for an hour, it's shot down.
I've concluded it's a generational thing and the problem won't be fixed until we get a wave of hiring managers that weren't schooled on the Google interview process. But, also note it's a self perpetuating issue too so :shrug:.
An asynchronous test is definitely more flexible for all parties, but it also means the employer can move forward with far more interviews than they would be able to do in person, leading to more candidates wasting their time doing nonsense work when they have proportionally less of a chance of getting the job.
Possibly I'm just bitter because of the time I spent 4 hours on a take-home only for the company to ghost me (name-and-shame: Instacart). But at least for in-person interviews they have to buy you lunch before the ghosting can occur.
Guess what? It wasn't. They rejected me as soon as I de-anonymized to allow them to see my resume.
Arthena, a Y Combinator funded company, did this to me. It truly was a waste of time.
To top it off the interviewer then sprung a surprise leetcode problem on me that had to work and was required to be written in a language ill-suited to the task. I bombed it and the interview ended. No one ever even ran my submitted program.
mvn clean package worked.
Turns out the guy was running mvn exec:java.
Not to mention it doesn't test the collaboration aspects of real work. Imagine doing your job but you're not allowed to talk to your teammates in case of a blocker. How are we doing our jobs here?
But when it comes down to actually starting them, I will always pass if they come before any interviewing round. You don't yet have a stake in the process.
I would be (and have been) one of those people. Why? In short, likely because your take-home tasks are tedious, boring, and unrepresentative of real-life work. They are not whiteboarding, but they aren't necessarily any better.
I have done take homes as a candidate, and also have reviewed them on the hiring side. Most of the time, as a candidate, any take home I have done has been wasted time. Nothing I feel like I could count as a significant contribution, or something I could add to my resume. Take homes usually have some boring cookie cutter problem, with your most nitpicky code reviewers as the judges. Or it's something that's grotesquely oversize that's expected to be done in two hours (hedging with "but feel free to use up more time if you want"). As a reviewer, I see that it also leads to candidates doing ridiculous things like overusing decorators or metaclasses in an attempt to impress the reviewers.
Honestly, as a candidate, I would be 100% satisfied with a bitesize fix to an open source library that your company maintains, or even a bitesize fix to a closed source library under contract. Or pair the candidate with an employee who is working on a real, minor task at your company, and let the candidate "drive". Give me something real to do, not some contrived situation about an ordering system or building the next Twitter.
My mini-conspiracy theory about this is that most companies are so embarrassed about their codebases and internal process that they don't let prospective employees look at them until after they are hired.
It can take relative ages to become familiar with a code base, especially a large one, such that you can easily hop in and commit fixes that actually work. I'm not going to spend hours trying to gain context for the code, or deploying and testing it.
The employer is going to ghost 80% of their candidates anyway, and if your fix does work, you just did free work that benefits the company for little to no gain for yourself.
If none of their codebases are easy to jump into, then that might be a red flag in and of itself. One thing I suppose I have been lucky about is that I have worked for companies that have some (albeit very minor) open source presence. I'm not talking new features. IMO a small fix to an existing codebase can be as minor as adding a new CLI flag, or lightly modifying some existing behavior. In many cases, it's what you'd give a new intern as their first fix.
> if your fix does work, you just did free work that benefits the company for little to no gain for yourself.
Well the gain is that you learned to work with a real-life codebase. The thesis of my comment is that take-home projects do not often approximate a real life codebase well.
I disagree that the goal of an interview experience should be to necessarily give the candidate a new portfolio accomplishment. It's cool if it works out that way, for sure, but that can't be the expectation. I'm mostly interested in incrementally improving the status quo which also isn't focused around an interview day that bolsters your portfolio as an engineer.
I have never seen a take-home project yet that I could do this with. Also they come with scary "You may not show this to anyone else, ever!" disclaimers which are more off-putting.
1) It's less time spent talking with potential coworkers. Which for me, coworkers (and how we work together/get along) determine about 90% of how satisfied I am with my job (assuming pay/benefits are standard).
2) You're still going to require me to take a day off and stand up in front of a whiteboard anyway eventually. Every take home I've done is an early stage filter. The final round is still a whiteboard panel.
You can learn a lot about a person by what questions they chose to ask you, but only if it's not a thing tacked onto the end of a long interview. If they haven't gotten much time to ask you questions, they only have your interview questions to go on. Which you know are not representative of the kind of work you do (but then, why are you asking them?)
So because you only ever talked to them about things that could readily be had off the shelf, and this is the only information they have, they can't know if you're bad at interviewing or some psycho who spends all their time reinventing wheels and then demanding that other people be impressed by your insanity.
The chances of the latter are bigger than you think. So if it's down to you or another opening, they'll take the other opening, and then you've wasted everyone's time.
Didn't hit it off with the last interviewer so it was a no-go.
If a take-home is required I'd at least expect a code review over it and focus the rest of the interview process on soft skills rather than trivia questions that are answered essentially by the take-home exercise.
1. Two PMs: collaboration and project-management questions.
2. Trivia Q&A with two devs, plus tech projects.
3. Lunch with the Dir. of Eng and a senior dev.
4. Whiteboard leetcode with a researcher and a junior dev.
5. Systems design with the Dir. of Eng and a senior devops.
6. Chat session for questions with the VP of Eng.
Of course from their perspective I could be getting someone else to do it for me or something. Or maybe the take-home leaked (which becomes more probable as the company grows).
At my current employer, that’s exactly how we approach hiring. I send the candidate a take-home that broadly covers the basic tech skills they need to be effective in their role (implement a simple test api, some questions about scaling web apps in production, and a possible architecture sketch for a described system). It’s expected to be around 4 hours of work. If they pass that, the interview is free of technical topics - we just talk about your past experience and assess whether you’d be a good fit for the team.
Is this not how other take-home assignments work?
I'm interested to know who your employer is, if you're comfortable saying.
Most people take issue with the repetition of similar tests and evaluations for multiple companies. If A, B, and C were considered different stages of a technical interview, candidates are repeating A and B too often, whereas applying A and B once would suffice for several companies.
It's a problem whose opportunity cost compounds when interviewing with multiple companies, and it's not something that a single company alone can fix. A more "Docker-ish" solution, where one round of "A" lays the foundation for interviews at many companies (and not just the ones expecting you to practice Leetcode), might curtail the problem of using more time and resources than necessary.
But more than that, there is also a cognitive dissonance with the beliefs of how to reform a broken system. Programmers largely oppose industry-wide regulation while also expecting the vetting process to be more consistent and make more sense. This is a tough nut to crack because we seemingly want to have it both ways.
...of programming (because the regulation would never be able to catch up to the state of the art), sure. Of HR practices? I don't think I've ever heard an engineering lead lament that their company's restrictive HR practices prevent them from testing candidates the way they want.
> and it's not something that a single company alone can fix
Sure they can: micro-credentials. Someone needs to be the CompTIA of tiny programming challenges, where you prove once-and-for-all (in a proctored situation) that you can solve FizzBuzz or whatever, and get that fact recorded in some database.
Hopefully, candidates could then put their MicroCredentialsCorp username on their resumes or into companies' application forms or what-have-you; and then first-stage HR automation (the same kind that matches resumes to keywords) could run through a bunch of RESTful predicate-endpoint checks like https://api.example.com/u/derefr/knows/fizzbuzz (which would return either a signed+timestamped blob of credential JSON, or a 404).
Hopefully also, allow anyone to create a test (maybe have the end of that HTTP route just represent the username+repo of a GitHub repo where an executable test for the micro-skill lives?), such that these micro-credentials can live or die not by central supply, but rather by market demand.
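A sketch of what the HR-automation side of that check could look like. The host, path shape, and response field are all assumptions taken from the comment above, not a real API:

```python
import json
import urllib.error
import urllib.request

API_BASE = "https://api.example.com"  # hypothetical credential host

def credential_url(user, skill):
    """Build the predicate-endpoint URL for one micro-credential."""
    return f"{API_BASE}/u/{user}/knows/{skill}"

def knows(user, skill):
    """True iff the endpoint returns a signed credential blob.
    A 404 means the micro-credential was never earned."""
    try:
        with urllib.request.urlopen(credential_url(user, skill)) as resp:
            blob = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
    # "signature" is an assumed field on the credential JSON.
    return bool(blob.get("signature"))
```

An applicant-tracking system could then call `knows("derefr", "fizzbuzz")` during its first-pass filtering.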
(I think someone suggested doing this with a blockchain once, but there's really no need for that.)
I don't think any of the major companies would accept such a solution. I mean, they make people who already passed their own interviews once re-interview if they leave and come back or apply again too long after they pass and decline. If a company isn't willing to trust its own interview process as a once-and-for-all test, why would they even consider doing so for a third party test?
We could scale this to 3 months on 80% salary (whatever the company is comfortable spending on throwaway) with the full salary after the test period, with maybe even back pay and a little ceremony to make it a big deal. That would kick my ass for three months, cost me nothing long term, put a big smile on my face, and let any company hire me without 15 interviews.
Next stage is a technical interview of 60-90 minutes where we a) try to find what their strengths are and how deep they go, and b) ask them to explain how they solved the take-home.
b) is critical. Of course it tests for the skill that is critical in teamwork: communication ability, but it also lets us catch cheaters.
After this it's HR and reference checking.
It's worked well so far.
Got failed for a number of the most trivial reasons. It didn't meet PEP 8 (it was two whitespace errors; I didn't have a PEP 8 checker installed on my home laptop, so I missed them).
There was a load of stuff about not mocking in tests (setting that up would have doubled the scope of the project; the spec had said that if you don't have time, describe what you would have added in terms of tests, which I did). I did provide the most-difficult-to-implement integration test, to prove the system worked.
There was a complaint about providing an HTML webpage rather than a JSON endpoint (the spec described it as a "page" and said not to bother with styling, so implied that it was HTML). Plus a couple of other equally ridiculous complaints.
Don't waste your time with takehome tests without speaking to someone in engineering first. Sennder is the company if anyone else wants to avoid, I think we should be naming and shaming such companies.
I know I can: in university I implemented Huffman compression (I know there is nothing special about it algorithm-wise) in Java, and I got the best runtime across 2 cohorts of ~100 students, and by that I mean orders of magnitude better. My runtime was on par with the best C++ programs I could find, and my program could work on arbitrary file sizes, while the fastest C++ program I could find at the time (20 years ago?) loaded everything in memory.
I wrote all the data structures from scratch; using Java's built-in structures consumed a lot of memory and was way slower.
But the jobs I got in my country as a J2EE developer were much better paying, at that time at least, than any C++/algorithms stuff.
If I were younger, with the same single-mindedness I used to have, I'd absolutely spend 1-2 years tuning my algorithm skills so I could get one of those sweet FAANG jobs.
Or if I had the ego of my ex submediocre-redneck boss, who thought Spring Framework was bullshit without actually knowing it, and who got into programming competitions (only to be butthurt by his results because he did not put in the effort to actually practice algorithms and data structures); again, I'd study hard on algorithms and data structures.
Or if that were my hobby -- algorithms and data structures are 100% a valid hobby -- but it is simply not my hobby.
Sure, if I encounter a hard problem, I can look up the best algorithms, copy & paste, modify, and optimize until I'm satisfied with the result, but there simply is no payoff for me to be an algorithms grandmaster.
If you're happy, then awesome, but the reason you would have never worked on all of those things before is because companies are loading up more responsibilities onto technical people to cut down on costs.
You carry the skills of 3 roles from 10 years ago: Sysadmin, developer, and reverse engineer, and yet you're likely only being paid for one.
> Never _intentionally_ memorize something that you can look up.
I guess you can't quote Einstein, then.
In theory anyway; I always have to look up how to remove a remote git branch, or how to type the euro sign on a Mac.
> We're going to go through a few technical exercises together. It's open book, open documentation, use whatever resources you would normally use — though please do not copy and paste complete solutions from StackOverflow and guess&check to reach our solution. It's open language, so please select whatever you're strongest in. If it is a language I am not familiar with, I will ask questions. Some of those questions may be naive. Use whatever editor and environment you're most comfortable in. Please ask questions of me. There is no specifically correct answer, though your solutions should hit a level of baseline correctness. These problems are designed to mirror some typical day-to-day problems you might face on the job; we're not looking to test rote memorization of _Hacking the Coding Interview_. We want working code, but some aspects or edge cases may remain in pseudocode as we iterate on these problems together. There are no designed 'gotcha' moments in these exercises, and if you feel there may be a 'gotcha' moment, ask clarifying questions. I would rather you ask questions than struggle through a solution.
It works well for modern development.
> I understand you want to test my programming ability, and in order to do that, we should do it under conditions as close to real world as possible. Because I don't know every algorithm in existence, I'm going to use the internet like I do normally to look stuff up. And, I'm going to use my own laptop that I've brought here. Is that cool?
Not really, of course, but I'd like to.
Just do it ahead of time instead of on the spot. You're probably emailing back and forth with a recruiter. Send them the exact comment you posted here, and ask that they make sure that your interviewers know about it.
It's amazing how little things can throw you off your game. I had a couple where they handed me a MacBook set up for "traditional" touchpad scrolling (where it works like a scrollbar instead of like a touchscreen).
Every time I tried to scroll it went the wrong direction!
Of course in an interview we aim to please, so I lived with it for a few minutes, but finally asked "Do you mind if I change your touchpad settings?" Of course that led to a brief discussion on the virtues of traditional scrolling.
This was not only precious time wasted, but a minor blow to my self-confidence. Imagine you tried to drive a car or fly an airplane and they had sneakily reversed the steering wheel or yoke and pedals! You would be lucky to survive.
In fact this is why the preflight checklist for a light plane always includes "controls free and correct".
So many languages, frameworks, and libraries copy the same functionality and every single one does it differently.
The cost of looking it up is a little bit of humility. The cost of not doing so is a small chance of spectacularly breaking something for my entire team. Suck it up, apply the Precautionary Principle, and most importantly, get over yourself.
Do it smartly: know what that component is, whether it is good for the purpose, whether it is the best for the purpose, and its trade-offs. But that should be the primary way, with a fallback to reinventing the wheel.
We will still have to provide custom-built solutions from scratch from time to time (in fact, in a lot of cases, CS is still not mature enough for proper higher-level operations), but it is better to rely on tested and ready pieces.
Recruiters seem to have no clue how to select people for a particular position, so they fall back to ancient CS rituals like this dumb whiteboard thing. They also seem to expect that custom-made employees exist out there and that they only need to wait for the perfect candidate, not really expecting someone to learn on the job, even though continuous learning is the industry norm, the inherent requirement. I've seen positions remain unfilled for more than 6 months while partially-fitting experienced candidates were sent away.
I failed a technical interview where you must react instantly and accurately without any of the help normally available in everyday life (Google). After a looong time of programming and learning various approaches, I find myself questioning myself continuously, even in fundamental matters. Or is it something that comes with age? Nevertheless, when I was young, self-confident and energetic, and the Google style of programming was far from invented, my fingers spat out huge amounts of brilliant code that did not work until I fixed all the stupid mistakes I had made, which took a frustrating amount of time.

Coding gives you a good amount of humility by showing you that you are to blame in every last case. Every time! Making a mistake, misunderstanding something, not knowing necessary elements, misjudging aspects, not checking the reliability of some component you rely upon: it always comes back to you, because the computer will do exactly what you tell it to do. It will always be you. Frustrating. I learned to doubt myself and verify seemingly evident matters, which does not play well in whiteboard interviews, as you might imagine (in fact I once had a mental whiteboard interview, speaking about solutions; how common is oral programming in practice?).

This is an unnatural and sinfully inaccurate way of judging how one will work in a position. Given the constantly changing (improving?) profession and the huge number of beautiful aspects it entails, keeping every bit of detail in your head, ready to spit out at a second's notice, is far from a smart expectation. It happened more than once that the right answer occurred to me after walking away from the interview, as the knowledge bubbled up from the sea of experiences. Once I told someone that, sorry, I had never met that technique before, only to realize later that it had been an essential component in a task several years back; I had used it quite a lot for that one task.
I did not change positions too much (3-8 years spent in one place), but I have never been hired based on a whiteboard test; rather by trial tasks or probation periods, fulfilling the need of the position (and leaving due to dissatisfaction every time). In fact I only met whiteboarding recently, and I find it hugely inadequate for judging competency.
Without meaning to attack you personally (especially in the context of the rest of your comment), a comment like this annoys me a bit.
I presume by "average" you mean the arithmetic mean, but the median is also an average, and depending on the context the median might be a far more useful statistic than the mean. Confusing the mean and the median is one thing, and perhaps you actually used that terminology in the interview; but "confusing" "the average" with the median isn't really worthy of comment, and sounds more like a breakdown in communication between interviewer and candidate than a lack of technical knowledge. It just seems vaguely hypocritical to me to expect a certain level of precision from the candidate while using imprecise/informal terminology yourself.
In my native language (French), there is no separate word for "average" as you describe it: the technical and the common terms are exactly the same ("moyenne"). Until your comment, I assumed that in English you could use "average" and "mean" interchangeably.
Although I do know that "average" covers mean, median and mode, when I was young I was not taught that way. First I was just taught "average" with a formula, and many years later "median" and "mode". I don't think I noticed the term "mean" until I learned about median and mode.
You can see this reflected in scientific calculators used at school, where the button for calculating mean is labelled "Avg".
And in Microsoft Excel or LibreOffice Calc, the function to calculate mean is called "AVERAGE".
So nobody should be hard on themselves for not knowing the difference. You probably weren't taught the difference, and common tools work against it.
It looks to me that the common word for statistical mean is "average" in English, but it's not the correct technical language.
You're quite probably right about common parlance..
> but it's not the correct technical language.
..but as you point out it's not really appropriate in a technical context, indeed I don't think I've come across the term "average" being used in this way in technical literature; I've normally either seen expectation (for probability theoretic cases) or mean (for statistical cases).
> So nobody should be hard on themselves for not knowing the difference.
Absolutely, which is why I think being hard on someone for not knowing the difference between "the average" and the median is similarly an issue.
Incidentally, "mean" doesn't quite cut it, if you want to be a stickler about it. There are several types of means: arithmetic, geometric, harmonic, etc. By that standard, you just failed the interview.
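For anyone who hasn't run into the other means: Python's statistics module ships all three, so a quick illustration with made-up numbers shows how they differ on the same data:

```python
import statistics

data = [2, 4, 8]

arithmetic = statistics.mean(data)           # (2 + 4 + 8) / 3 ≈ 4.67
geometric = statistics.geometric_mean(data)  # (2 * 4 * 8) ** (1/3) = 4.0
harmonic = statistics.harmonic_mean(data)    # 3 / (1/2 + 1/4 + 1/8) ≈ 3.43

print(arithmetic, geometric, harmonic)
```

They coincide only when all the values are equal; otherwise arithmetic ≥ geometric ≥ harmonic.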
I'm going to assume, then, that you didn't read my original comment which started off this line of discussion? By that standard, you just failed the interview. ;)
"Moyenne" is mean or average. The formal names are "arithmetic mean" and "moyenne arithmétique".
Median is "médiane". People frequently use any word but "median", in both languages, when they want to talk about the median.
The mean, median, and mode are three types of averages. None is 'the average'; though if you asked someone random on the street they'd probably give you the mean average.
Mode: On a typical day <<-- most common meaning of "average" in day-to-day conversation
Mean: On a day at the calculated middle (for some variance) <<-- most common meaning when we think the question is about math
Median: On the day in the middle of all the possible days <<-- nobody thinks of this, but it's what many people believe the mean is.
Given that, showing more charity to the comment you're responding to would have been the correct course of action.
Depends on the interviewer.
It's very possible my schooling was wrong, but presumably I'm not the only one.
The arithmetic mean is colloquially called the average and differentiated from the median and mode, but technically speaking these are all averages. They are all estimators of central tendency. It would not be out of the ordinary if a statistician asked, "which average do you mean?" for clarity. Which definition of average is best depends on what you're trying to measure.
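To make "depends on what you're trying to measure" concrete, here's a toy skewed sample (made-up, salary-like numbers) where the three averages disagree wildly:

```python
import statistics

data = [30, 35, 35, 45, 500]  # one outlier drags the mean up

print(statistics.mean(data))    # 129.0 -- pulled up by the outlier
print(statistics.median(data))  # 35   -- the middle value when sorted
print(statistics.mode(data))    # 35   -- the most common value
```

For income-like data the median is usually the more honest "average"; for, say, total cost projections the mean is the one you want.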
I dislike this interview question because it's an area where it's 1) more trivia than practical knowledge, and 2) easy to not recognize someone who knows more about it than you do. It's like if I tried to test your knowledge of the difference between "affect" and "effect", and then you correctly used "effect" as a verb and I thought you were wrong.
If I repeated everything I just said about estimators in an interview, the interviewer might think I'm too ivory tower to realize that "of course he's just talking about the mean!" But then I could also ask why I'm being asked a question like this in a software engineering interview.
It's a language game where no one wins (somewhat in the sense of Wittgenstein).
It has minimum variance, how can it be a terrible estimator?.. ;)
It is a bit tongue in cheek.
As was my reply ;)
Interviews right before lunch, or at 4pm or later, were worse across all candidates.
Certain rooms that we knew were smelly, loud, or poorly converted from something like a storage closet scored worse across all candidates.
We took some rooms out of the rotation and made sure everyone was well fed and had drinks. The effect is real.
In particular, cases were grouped by prison, and within each prison, prisoners without an attorney went last. As the well-known saying goes, "the man who represents himself has a fool for a client": the pro se prisoners fare far worse than those with a lawyer. Judges tried to complete an entire prison before taking a meal break, and therefore ended one session with the statistically weakest cases and began the next with a stronger set, thereby guaranteeing the result.
If you look at the original data, some other oddities pop out. A physiological process, like becoming hungry, should depend more on the wall-clock time than on the number of cases heard. "However, note that in an analysis that included both the cumulative minutes variable and the ordinal position counter, only the latter was significant."
It seems like there's probably some value in a "Moneyball" company that identifies a way to interview without most of the anxiety. Perhaps hiring a few people with remarkably high EQ who customize the interview setting, review the interviewees' work, and watch them program over a few days on their own computer and IDE.
I know I've interviewed people who froze up, but for whatever reason I didn't pause the interview and address the anxiety; instead I tried to adjust the questions (which generally didn't work).
As for interviewing for a few days... Yeah. No. Not in a country with an abysmally bad vacation policy like the US.
You hear something very different than the person who asked the question. And in one case I discovered that my coworker had been asking a question that he thought he knew the answer to and did not; a deep clone implementation (where he never asked about cyclic graphs).
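The cyclic-graph pitfall is easy to demonstrate. A minimal deep-clone sketch (hypothetical; handles only lists and dicts) uses a memo table, much the way Python's copy.deepcopy does, so that a cycle is cloned once and reused instead of recursing forever:

```python
def deep_clone(obj, _memo=None):
    # _memo maps id(original) -> clone; hitting an id already in the
    # memo means we've found a cycle (or shared node) and reuse the clone
    if _memo is None:
        _memo = {}
    if id(obj) in _memo:
        return _memo[id(obj)]
    if isinstance(obj, list):
        clone = []
        _memo[id(obj)] = clone  # register BEFORE recursing into children
        clone.extend(deep_clone(v, _memo) for v in obj)
        return clone
    if isinstance(obj, dict):
        clone = {}
        _memo[id(obj)] = clone
        for k, v in obj.items():
            clone[k] = deep_clone(v, _memo)
        return clone
    return obj  # immutable leaves can be shared safely

a = [1, 2]
a.append(a)          # cyclic: a[2] is a itself
b = deep_clone(a)    # b[2] is b, and b is a distinct object from a
```

Without the memo (registered before recursing), the `a.append(a)` case blows the stack, which is exactly what an interviewer who never asked about cycles would miss.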
I hope you suggested that this employee get tested for hypoglycemia. It'd probably greatly improve their quality-of-life to go from "my brain isn't working and I don't know why" to "my brain isn't working and I know exactly why; I need to drink some fruit juice."
Shouldn't that be tell a mean from a median?