>How could someone with 10 years experience on their CV be unable to start a new project in their own editor
This is something I need to study each time I want to change jobs.
I suspect that I will have to create a new project somewhere in the interview process but it is just something I very rarely do in my day to day job, if ever. I suspect that it is the same for many engineers.
FWIW, at my current company, the interview projects come preconfigured, so you only have to import them and start coding.
We have also already imported some popular libs.
Watching somebody rightly struggle to do that is:
- a pain for the interviewee
- awkward and uninteresting for the interviewer. I don't expect many people to have to create new projects often, so it is totally OK to have forgotten how
- a waste of time for both of us.
We also actually don't do algorithms.
We have a debugging interview, with a shitty codebase where you have to find the issues and correct them. When you are done with the bugs, you can collect your thoughts on what's wrong with that codebase so we can discuss how to do it better.
Another interview is to have to design a product similar to ours (a simplified version of course) and you have to come up with API and app technical designs.
> Watching somebody rightfully struggle to do that is :
Being watched usually makes me a horrible coder. I get so distracted by having a backseat driver that it can even be hard to type at times. The only thing I can whip up on the spot in a readable manner is something I already know pretty well, and if the interviewer can't keep up, it becomes a big conversation about "what are you doing again?"
Yes! I learned this in my first internship when I froze up on the terminal because I had a manager hovering. It's something that comes up a lot in my circles. I did feel a little better when I eventually taught that same manager something about SSH. That excited "Oh you can do that??" felt so good.
It can be pretty hard to make a stranger feel at ease in a stressful situation, so I can't pretend I have a 100% success rate, especially for the one screening interview we do, which is a one-hour pair programming exercise. However we do it, I am going to be there and look at what you are doing.
For the onsite though, most of the time, I explain the exercise and then I let people work while I do something else in the same room.
I stay there so they can ask questions if they are stuck but I am not hovering over their shoulders so they can concentrate.
> the one screening interview we do, which is a one-hour pair programming exercise.
I'm not too excited about the pairing part. Most pair interviews I've had feel like the interviewer is trying to shove me into a box that says "they arrived at the solution I like in the manner I was expecting". Most pairing feels like me going out of my way to connect with a robot interviewer with no imagination.
> For the onsite though, most of the time, I explain the exercise and then I leave people work while I do something else in the same room.
I love this. I'd love to see more interviewers do this.
Pairing is hard. It is the part of our process that I like the least.
We try to make it easier and fairer with a rule like "if the candidate reaches this step, it's an automatic pass" (and the step in question is not that far into the interview).
But there are still many person-to-person differences and I often have dilemmas.
It is very hard to differentiate between giving a pointer and handholding the candidate.
So sometimes I ask myself if I gave too much or too little help, since there are cases where the candidate would have gone far enough with just a little more of a push, and others where I feel that I helped them too much. On a 45-minute exercise, helping somebody avoid being stuck for 10 minutes can make a difference.
Another part that makes me uneasy is wondering if accents or cultural differences can make me act differently.
Like different micro-expressions or attitudes from what I would expect: can these influence me enough to go from pass to no pass more easily for some ethnic/cultural groups?
Is the person you’re pairing with googling things and actively helping you? That interview style is fun because then you’re problem solving together and you can see how well you can communicate about complex technical subjects.
Kudos to you for trying to alleviate that, but from my past experience you are very much in the minority - most interviewers behave like '12 Angry Men' judges.
I had never heard of this set-up-a-project interview test. It sounds stressful and awkward, but for me it would possibly be preferable to algorithms, because one of my strengths is reasoning about technologies. In group projects and when working with friends, I naturally take the pragmatic setup role.
Algorithms, on the other hand, I struggle with, because even tonight at dinner my sister was trying to remember the word for toxins moving up the food chain, something we learned 10 years ago.
Before she could finish her thought I was blurting out bio-magnification!
It made me wonder why I remember some things so well while others really do go in one ear and out the other.
I know to an interviewer it may seem like they're testing if we have knowledge that will be applicable to the job at hand but to me, it feels arbitrary.
I am adaptable. I am not a textbook.
When I am on the other end of that table, I will have already asked for a portfolio of sorts: school work, personal projects, anything that you want to represent you.
I will spend some arbitrary amount of time finding code snippets that are problematic or interesting and I will ask you to speak to them.
I feel that will be enough to know whether or not I want you on my team.
> When I am on the other end of that table, I will have already asked for a portfolio of sorts: school work, personal projects, anything that you want to represent you.
> I will spend some arbitrary amount of time finding code snippets that are problematic or interesting and I will ask you to speak to them.
> I feel that will be enough to know whether or not I want you on my team.
I like your approach but wonder if I would be successful if looked at through that lens—I have looked back at code written a couple years ago and was surprised to find that I was the author and would definitely be unable to speak to my choices then (though I could probably speak to how I'd do it differently now, which might be a useful signal in the interview).
Yes! That's what I'm getting at. It seems like technical interviews as I've experienced them (Microsoft being the biggest name I've tried so far) are knowledge based stress tests.
This doesn't make sense to me because once I become comfortable in a group, immediate stress isn't an issue anymore (background stress yes but that's another beast entirely).
I don't perform well while stressed, to the point that I pass out in acutely anxiety-producing situations.
My test (which is really just more of a traditional behavioral interview) looks for how you introspect and reason about design decisions that you've made in the past, something that I value in teammates over anything else.
It's a glimpse into how they will respond if I do have to ask them about why they did this or that in a merge request.
If you can reason about why you chose a foo over a bar 4 years ago then I'll feel pretty confident that you can speak to the decisions you made last week.
My thoughts will likely change as I get more industry experience but this is a good milestone for me to look back to.
It's not always easy to remember how it felt presenting your first programming project or landing your first technical interview. Or worse how it felt just before that stressful event.
We all have a tendency to think, wow, I guess I was so anxious over nothing. Is this what keeps the status quo in place? Hindsight bias.
> We have a debugging interview, with a shitty codebase where you have to find the issues and correct them. When you are done with the bugs, you can collect your thoughts on what's wrong with that codebase so we can discuss how to do it better.
I've always wanted to do this, but I couldn't convince any previous employers to pay for it.
> This is something I need to study each time I want to change jobs.
+1. It is not that I don't know how to do it, but I have to practice beforehand to have it on auto-pilot and not spend time on it during the interview. Otherwise I will panic seeing the clock ticking and my IDE freezing while indexing files, etc.
"Companies do code testing because they have encountered so many candidates who look good on paper, and may even be able to talk about computers convincingly, but can’t actually write a program when asked. Any program. At all."
I interview programmers and I've encountered this a few times, and yet, I still don't do code tests. I talk to the candidate, and you can easily ask the right questions to tell who can code and who can't. You can even get a gauge of experience level (junior, intermediate, senior), which you won't get from a code test.
We get a lot of English-as-second-language and shy developers (who can code), and I can still figure it out by asking pretty general questions about development. The idea that you need a code test to figure out who can code and who can't is false. I'm not arguing against it; I'm simply arguing that it's not strictly necessary.
What has worked well for me (at least for non-junior positions) is to just ask about their prior work. "You were on a team that developed x - great. Assume I'm an expert in that and am happy to geek out about the tiny details. What was a part that you specifically were responsible for? What was most fun/interesting about it? What was one of the challenges and how did you overcome it? What was something new you learned? If you started over today, what was a part you'd do differently, and why and how?" Really just pick something they seem excited about and dive really deep into it. "Why didn't you use y for that? How did you test it?" The key is really simple: find something they're passionate about and get them to talk about it.
The annoying thing about refining interview techniques is you only get a small view of effectiveness: you can't tell how many times when you didn't hire it was the correct choice vs passing on someone great, so I can't judge from that perspective. That said, I haven't regretted hiring anyone that did well on this style of interviewing, without another coding interview.
> I talk to the candidate and you can easily ask the right questions to tell who can code and who can't. You can even get a gauge of experience level (junior, intermediate, senior) which you won't get from a code test.
I also think I can do this, but I believe most people (myself included) introduce a lot of bias when doing this. I've spoken to developers who in 15 minutes I might have written off as not very qualified, but after hours/days have realised are great at what they do, and very experienced in types of software engineering that I have had less exposure to.
I think we have to be careful when judging candidates with what amounts to "gut feeling" like this, as it's typically biased, and not aligned within the hiring team.
The thing is: the method described in the post has the same problem. It just doesn't look like it. Most of these interview templates are designed by one or two people and then blindly applied. Sometimes there are a few modifications, but for the most part it's fixed.
Since every interviewee gets the same set of questions, people then start to think this is more scientific than "gut feeling". It is not; instead of your own biases, you have the biases of the template designer.
There are probably a few companies which extensively study whether their interview process actually predicts job performance for the people hired (the only subset they have access to, which is already a problem) and then try to readjust the process, but it doesn't seem to happen very often.
This also gets us back to the old problem that we have no idea what a good programmer is or how to evaluate job performance.
I think that question can be great, but it also puts a lot of pressure on the candidate to execute a rather complicated query on the spot. I've definitely encountered lots of interesting bugs in my time, but my initial reaction to this question is a total mind blank. Especially since "worst" could mean a lot of things, and what I really want is "worst bug I've encountered that I can say something interesting and impressive about".
I'd say it's not a bad idea to forward questions like that to the candidate in advance of the interview. That way, you avoid false negatives from people who just couldn't think of a good answer under interview pressure.
1. I'd struggle to answer it myself despite nearly three decades of fixing bugs. I could probably come up with some interesting ones after a few minutes of thinking, but most of the bugs that'd immediately pop to mind would just be recent bugs that were hard to explain without knowledge of the codebase.
2. There's no way to guide an interviewer on how to mark this question beyond gut feel. What exactly does a good answer to this look like? More importantly: what does a bad answer look like? Asking questions which are impossible to actually fail at, or for which the outcome is basically random and unjustifiable, is a very common failure mode for interviewers.
3. It tells you nothing about the candidate because they can easily just repeat a bug they read about on a blog last week. You can't check they actually debugged it themselves.
For instance I just searched for "most interesting software bug" and clicked the first relevant link, the second story on this Quora post could easily just be repeated verbatim to an interviewer over the phone:
If the interviewer wasn't alert they might be fooled by it. Candidates will quietly Google for answers to conversational questions whilst trying to avoid the interviewer noticing, among other things.
That's why I emphasise that designing good interview questions is quite hard.
I agree that's not a great question. I usually have a few simple questions that can tell you if your candidate really understands the platform they say they understand -- it doesn't take much.
I usually have one or two projects set aside specifically for new hires; the type of project that is maybe a month-long or longer but that doesn't really require any domain knowledge or big integrations. It allows them to have something to do when they're learning everything else. In the interview, I can ask them how they'd proceed with it and I always offer that I'm here to help them and will answer any questions they have. This is literally no different than what I would ask them on day one of the job if they got hired.
Is that more informative than whether or not they can reverse a linked list? I think so.
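For context, the linked-list question being dismissed here is the canonical whiteboard exercise; a minimal Python sketch of what it amounts to (the names are my own, not from any particular interview):

```python
class Node:
    """A minimal singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list in place; return the new head."""
    prev = None
    while head is not None:
        # Re-point the current node backwards, then advance to the next one.
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and read the values back.
node = reverse(Node(1, Node(2, Node(3))))
values = []
while node is not None:
    values.append(node.value)
    node = node.next
print(values)  # [3, 2, 1]
```

Whether a candidate can reproduce this in five minutes under observation is a different question from whether they can plan a month-long project, which is the point being made above.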
I think I like this variation the best, because it opens things up to bugs in tools that one has used, and other kinds of bugs that aren’t necessarily ones the candidate would have had to attempt to fix.
I think that's a better way of phrasing the question. However, I think my real issue is just that it's difficult to rummage through your mental database of past bugs in the space of a few seconds. (Realistically, it's awkward to pause for more than a few seconds before answering a question in a face-to-face interview.)
What would be the downside to giving the candidate a heads-up that you're going to ask this question?
> I interview programmers and I've encountered this a few times, and yet, I still don't do code tests. I talk to the candidate and you can easily ask the right questions to tell who can code and who can't.
How do you know? Is this repeatable -- e.g. can you write down the questions and the criteria such that someone who is not you can follow them and come to the conclusions you would?
Do you ask all candidates the same questions, or do you go with your gut and hope there's no systemic bias?
> I interview programmers and I've encountered this a few times, and yet, I still don't do code tests. I talk to the candidate and you can easily ask the right questions to tell who can code and who can't. You can even get a gauge of experience level (junior, intermediate, senior) which you won't get from a code test.
How do you know? Do you check your interview assessments against actual performance among those hired (measured how)? What about the ones you turned down, what makes you so confident that they actually couldn't program as opposed to just answering differently from how you expected?
> What about the ones you turned down, what makes you so confident that they actually couldn't program as opposed to just answering differently from how you expected?
I'm pretty forgiving -- heck it usually takes at least half the interview before the candidate's nerves settle down.
Being a developer involves being able to explain yourself clearly. You can't just sit in the dark and code, talking to no one. If I came away from an interview with the impression that you can't code then I don't want you. The candidate has to do some work too -- they are there to convince me to hire them, not just pass a test and get a grade.
It's much easier to "explain yourself clearly" to someone from the same cultural/social/... background. Doesn't that approach create a huge bias towards hiring people like you?
Literally every single candidate we get is from a different cultural/social background. I'm not sure why giving them a test first and then interviewing them would make any difference.
Obviously if you ignore the test results then the test doesn't make any difference. But if you assign some weight to whether their solution to a test problem worked (which is culture-independent, or at least less culture-dependent) rather than how well they explained themselves to a person from a particular culture (which is easier for people from that same culture), then you end up with a process that is much fairer across cultures. And similarly for non-culture aspects of a person's background.
If you can't explain yourself then I don't want to hire you. Developers are not sitting in a dark corner not interacting with anyone; they are working with the team, they are gathering requirements from end users, etc.
If you can pass a coding test but you can't express yourself in a way that anyone can understand, then that's a no-hire. If that's a cultural bias, so be it. I don't see how it can be any other way.
Now it's been my personal experience that culture doesn't matter too much. For more than half the team, English is a second language. HR will reject candidates for language reasons even before I get to them.
I agree 100% and I would argue that unless you are pairing together just to see how you work together and for some back and forth problem solving, no other code test provides a better signal than a good conversation.
Not set questions, even, just a good conversation. What issue have they run into with Framework X, how do they refactor, what's the weirdest bug they've ever had to track down, etc etc etc.
SV seems to enjoy grueling 7-10 round interviews these days. I had one a couple months ago that was over 40 hours (of my time) and over 20 hours (of their time, that I was aware of). I just can't believe that much time provides 20-40x the value of a simple relaxed conversation.
It's interesting. I've encountered a very different animal, even without having the candidate do a "homework test" or a "coding test" during the interview.
I have found that this different animal is competent, and certainly can code, but is of the very unproductive, slow type of developer.
And so my biggest pain point is rarely around a developer's competence during the interview process (i.e. Does this person know the material and are they able to do the job?) but rather instead around their overall productivity, their "capacity" to build software using their competence.
There are plenty of people in the world who know the material but are extremely "slow" in actually getting things done and problems solved in a day job. Unfortunately, the in-interview coding tests don't do a very good job of measuring the person's productive capacity to get problems solved, only whether or not they can solve some coding problem in a 30-minute timespan.
I could do that for my specific niche of programmers (C, embedded systems [for the systems I know], a bit of web stuff), but not in general for, say, a data analyst or an ML role.
I imagine I would ask for knowledge only practical experience could provide (interest in a topic is not the greatest teacher, but a decent one)
Better than fizz-buzz I think...
But you can never be sure either way. I would think that actually giving people a chance while defining quantifiable expectations would help here. Costly and time intensive perhaps, but I don't believe there to be a reliable shortcut. Especially not if the labor market is in need of more engineers.
It’s certainly possible but it’s obviously much harder to do. Let’s say I own a professional basketball team and need to make a hiring decision on a player who walks in the door. Am I going to be able to, with just an interview, determine whether they are a currently good player, or a formerly great player who has only been coaching in recent years and has forgotten how to play? Rather than trying to determine this just by asking questions, why not take the much easier approach of putting them on the court for 2 minutes to see how they play?
This is unrealistic because coding in an interview is not similar to coding in reality, at least for me. When I develop software I silently get to think and reason about problems, try things, and read things on Google. In an interview I am with a person I don't know staring at my hands, expected to talk and impress them, all while a timer is ticking.
Strongly agree. I know there is some literature on that, but I just wanted to add that for me the process of solving problems is very asynchronous, and usually the a-ha moment comes at the most unexpected times.
“Let’s say I own a professional basketball team and need to make a hiring decision on a player who walks in the door. Am I going to be able to, with just an interview, determine whether they are a currently good player, or a formerly great player who has only been coaching in recent years and has forgotten how to play?”
No, that would be phenomenally stupid. You would do what teams actually do, and look at their recent playing history.
Note that this is also entirely possible for engineers.
> Note that this is also entirely possible for engineers.
How exactly? Most people cannot take their code with them and show off. It belongs to previous employers, and please do not begin with the "show me your GitHub" - it's unrealistic to expect people to devote their life 100% to coding. Just as I wouldn't expect a carpenter to build houses for fun in his spare time, I do not expect programmers to spend their spare time in front of computers. In fact I believe that socially it's a benefit if they have other hobbies.
This is the problem with programmers: everyone gets obsessive about finding perfect unicorn universal solutions, and gives up on obvious stuff that works 95% of the time. Get out of your own way!
First, most programmers have at least some code they can show you. Stop fretting about the 5% who don’t. If you give up here simply because you have a theory about programmers “devoting their life to coding”, you lose.
Second, you call their references. Yes, really. I know it’s not “objective”. Do it anyway. You will learn a lot. Really.
Third, look around for other people they’ve worked with, who weren’t listed as references. Call some of them. See if you get different opinions.
If your candidate truly, honestly has no code available, no enthusiastic references, and you can’t find anyone who worked with them who will vouch for their skill, then do you really want to hire them?
Finally, yes: do a trivial fizzbuzz coding test to filter out the total fakes.
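For anyone unfamiliar, the FizzBuzz filter mentioned here really is trivial; a minimal Python sketch (any equivalent formulation works):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:        # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

The point is exactly as stated: it filters out candidates who cannot write any program at all, and nothing more.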
Listening to coders complain, you’d think that nobody else in the world hired anyone, ever. Let go of your need to make hiring a failure-proof system, and you’ll discover a bigger world.
The majority of my successful senior programmers do not have any private code to show off. Reasoning and the ability to think abstractly are important and impossible to judge from code they have written. I need programmers who can do those things in a reasonable time frame and in a group.
Most junior programmers have code to show, and most of the time I don't use it for anything. I'd rather do a FizzBuzz-level whiteboard exercise that we spend some time discussing and extending. It gives a far better picture of their future as a developer. I know this excludes some, but that's the trade-off I'm making.
I call references every time, but calling around willy-nilly is not a good idea. Apart from the fact that it's not legal, most of the time I get "Yes, I've worked with X as a developer for Y years" and nothing more.
”The majority of my successful senior programmers do not have any private code to show off.”
You’re exaggerating or you are an extreme exception.
Given the number of qualifiers (“successful senior programmers”, “private code”) I strongly suspect you’re just trying to find a reason to justify your opinion.
What I see in your comment is someone who has decided the answer, and refuses to consider alternatives.
More than half have no code "they own" or OSS contributions they would show off in an interview (the last part is important). That's neither an exaggeration nor an attempt to find a justification.
They are fathers, amateur musicians, or like to restore old cars. Just to mention a few things. Could they spend 5-10 hours weekly writing code and hacking things together? Yes, but that would be a significant chunk of their spare time, so understandably they do not.
> If your candidate truly, honestly has no code available, no enthusiastic references, and you can’t find anyone who worked with them who will vouch for their skill, then do you really want to hire them?
Graduates?
> no enthusiastic references, and you can’t find anyone who worked with them who will vouch for their skill
This will make your diversity even worse unless you're extremely careful with it.
Usually the only people who see engineers playing are their own team members, while basketball players play in public, in front of crowds, recorded simultaneously from many angles by video cameras.
Most junior-level developers these days have open source code to look at (as schools really push that). For higher-level developers you can ask more complicated and telling questions.
That's good and valuable, and it's laudable in its own right in a way that basketball isn't (since open source is a contribution to the intellectual commons) but it's not the same thing as working with someone. A lot of people only publish the projects where they were happy with the results, and often it's hard to tell how long someone took or what kind of help they had when they were writing something.
Unlike basketball, coding is not reproducible and has no clear rules nor set of tools even. Thus your analogy is flawed from the outset.
Additionally, a person who can adapt but scores lower on a specific test might actually be better at everything. As in your coach example, a slightly rusty player from years past might be better than a slightly better new guy with no track record. You're taking a gamble either way.
The lowest-rung coding test is only good for filtering out people who cannot code at all, and only if it is simple enough. They tend to be stupidly worded instead, though.
Anything tougher tests for specific tricks, tooling, and skills. Giving a longer take-home task is comprehensive, but it rules out people who are actually busy, presumably working, and thus already kind of tested.
> why not take the much easier approach of putting them on the court for 2 minutes to see how they play?
If every basketball team did that, every player would practice for that 2 minutes of play and you'd have no idea if they would handle an entire game.
The effect is the same for developers: interviewing developers practice for coding tests; memorize common interviewing patterns, quick solutions, and junior level algorithm stuff from their university days.
If you were interviewing a player, you'd probably look to their past experience and teams they played on, their positions on the court, and you'd check with their teammates.
> How could the candidate have started coding in one language, then decided actually they don’t know that language and should start over from scratch in a different one? How could someone with 10 years experience on their CV be unable to start a new project in their own editor? How could the candidate spend thirty minutes trying to generate a random number and still fail? This is crazy!
They panicked.
I would like to see a large company trying a replication study on their interview practice with their current employees. You'd need to make sure that the test was significantly different from the one used to hire them, and that they were put under sufficient economic and social psychological pressure - perhaps tell them that if they fail they will be fired and immediately escorted from the building (+), then make them do it with the CEO sitting behind them. I would expect over a 10% failure rate, including some people who are normally regarded as business-critical important team players.
You then have to hope that people learn the lesson "interviews are a poor predictor of performance" and not "some of our people are really impostors"..
(+) obviously don't actually do this, but like fire drills you need to convince people that it might be a real fire sometimes. I am also aware that there are ethics problems with performing stress experiments on employees without their consent, although nobody seems to care about this for interviewees
I would expect a far higher failure rate, maybe closer to 80-90%, especially at a successful company with happy employees.
The engineers will be focused on building actual systems and be totally unprepared for technical interviewing.
Engineers great in one domain will be interviewed by those in different areas with different views on what constitutes “basic” knowledge.
None of their work history within the company can be used as part of the evaluation—you must simulate the problem where nearly all your work is locked up behind an NDA.
> I would like to see a large company trying a replication study on their interview practice with their current employees.
IIRC Google did that and the results showed that more than half of employees wouldn't pass the interview. I can't really find the links for that because I heard it from close friends.
Technical interviews are disguised IQ tests. IQ tests are actually not allowed in the US based on a Supreme Court decision[1] that said they were discriminatory.
We know that cognitive ability has one of the strongest correlations with job performance.
So "technical interviews" that are heavy on algorithm puzzles are just a loophole to get around the IQ test issue. They aren't actually testing domain knowledge.
The Griggs decision explicitly places the same restrictions on college degrees that it does on IQ tests. Obviously nobody cares about this, and it turns out the Griggs restrictions aren’t that strict and most people manage to ask about college degree just fine. This changes the argument to be about whether there’s complicated case law around Griggs which makes employers figure that college degrees will probably be accepted in practice, but IQ tests probably won’t be.
There's not actually a prohibition on IQ tests in hiring, and in fact, during my recent job search, one company I interviewed with had me take an IQ test (it wasn't explicitly labeled as one, but it had exactly the sorts of questions you would find on one). The same company also had me go through a more typical algorithm-heavy technical interview later in the process.
There's historical basis and case law showing that IQ exams have been used as illegal racial barriers, and it's not much of a stretch to argue that your test could also be discriminating against a protected class. And it's potentially horrible PR.
> What is required by Congress is the removal of artificial, arbitrary, and unnecessary barriers to employment when the barriers operate invidiously to discriminate on the basis of racial or other impermissible classification...
> On the record before us, neither the high school completion requirement nor the general intelligence test is shown to bear a demonstrable relationship to successful performance of the jobs for which it was used. Both were adopted, as the Court of Appeals noted, without meaningful study of their relationship to job performance ability. Rather, a vice-president of the Company testified, the requirements were instituted on the Company's judgment that they generally would improve the overall quality of the workforce.
> The evidence, however, shows that employees who have not completed high school or taken the tests have continued to perform satisfactorily, and make progress in departments for which the high school and test criteria are now used. The promotion record of present employees who would not be able to meet the new criteria thus suggests the possibility that the requirements may not be needed even for the limited purpose of preserving the avowed policy of advancement within the Company.
> In the context of this case, it is unnecessary to reach the question whether testing requirements that take into account capability for the next succeeding position or related future promotion might be utilized upon a showing that such long-range requirements fulfill a genuine business need. In the present case, the Company has made no such showing.
> We know that cognitive ability has one of the strongest correlations with job performance
There is a good argument that the technical interview success is much more strongly correlated to the number of hours spent practicing Leetcode-type questions than to actual cognitive ability.
The technical interview is not the only stage in the interview process. It might be better thought of as a way to filter out applicants who have poor innate ability and/or poor learning skills.
I would argue that being able to study leetcode and then apply that new knowledge is an important skill in and of itself in this industry. Isn't that what all of high school and most of college is? Signalling that you are a quick learner and a hard worker is important.
>It might be better thought of as a way to filter out applicants who have poor innate ability and/or poor learning skills.
It’s a way to filter out people who haven’t had time to study leetcode-style quiz questions. It has no relationship to learning skills or innate ability. There is no way to distinguish between someone who hasn’t spent time grinding algorithms and someone who has but has poor learning skills.
>I would argue that being able to study leetcode and then apply that new knowledge you've learned is an important skill in and of itself in this industry. Isn't that what all of high school and most of college is?
Nope, it demonstrates that you had time to waste, nothing more. Candidates that built actual software to accomplish something are much more impressive because it shows they can write software attached to reality.
And just like the standardized test rigamarole, technical interviews can be gamed by investing in prep courses and materials for those who are willing to dump the money and time on them.
It's like saying "This guy spent 4 years in college trying to get a 4.0 GPA. Clearly, he only did it to game the system and get a good job/career. So I won't hire him"
A person willing to spend time and money preparing to succeed in a scenario that exists in real life (pass interviews), is one of the types of persons I want to hire.
The situation clearly becomes complicated when an ostensibly purely meritocratic measure becomes something that those with means can improve. If there were free, quality standardized-test prep / technical interview prep for all, and it were just a matter of dedicating enough spare time, then you might have a better case that it's good. Though you're also ignoring the fact that people have a limited amount of time and energy to devote to that, such as engineers with children.
I have recently passed several technical interviews at multiple large companies. The first one I botched completely. Then I worked through some algorithms courses from Princeton on Coursera and breezed through the rest of the interviews.
My point is, there _are_ free and good quality courses out there and if you have enough experience with actual work then it is just a matter of brushing up on the foundation. In the end I’m glad that I did.
Well, yes. However if you are already a good programmer then it will not take that much time, and if you are looking for a job then there is nothing wrong in expecting that you would prepare. And I would argue that it's not sunk time either. In my day to day job I do not use that many algorithms because most of the actually needed ones are hidden by the internals of the language, I got rusty and the refresher was useful.
>New data shows studying for the SAT for 20 hours on free Official SAT Practice on Khan Academy is associated with an average score gain of 115 points, nearly double the average score gain compared to students who don’t use Khan Academy. Out of nearly 250,000 test-takers studied, more than 16,000 gained 200 points or more between the PSAT/NMSQT and SAT …
But the company doesn’t really suffer if they don’t hire that disadvantaged developer who doesn’t have the resources to study for the test. Sure they have to spend some more $$$ on recruiting but there aren’t a ton of companies in America existentially threatened by their inability to hire these hypothetical engineers.
You're stating that the company doesn't lose out with false negatives. That's certainly true of big FAANG companies. But my original comment, and my mention of Goodhart's law in a descendant comment, also suggest that by creating a system that can be gamed by prep, you end up encouraging false positives. In the end, is a company of leetcode elites and new CS grads going to be more productive than one comprised of experienced, capable engineers who don't happen to do as well on those tests? Which I suppose is a question for a different study.
In my experience hiring, the diligence to sit down and practice code tests correlates very highly with being a good employee. Just like strong academic achievement at good schools.
That's not to say there aren't gems in the rough who don't interview well, or skipped college, or who didn't go to good schools. But there's a huge time cost to looking for them.
Given that the former is dealing with decreased earnings growth and the latter is dealing with decreased user growth, perhaps all options should be explored.
> A person willing to spend time and money preparing to succeed in a scenario that exists in real life (pass interviews), is one of the types of persons I want to hire.
This brings us to an interesting dilemma. Do you want to hire people who know the problem domain you're solving --- or people who really badly want to be hired?
> There is a good argument that the technical interview success is much more strongly correlated to the number of hours spent practicing Leetcode-type questions than to actual cognitive ability.
Where is that "good argument" substantiated?
People say exactly the same thing about IQ tests, probably because it's something they'd like to be true. But it's not backed up by the evidence.
Ten years ago, I couldn’t program, so I couldn’t do leetcode problems. Now I can. If I go on the market, I’ll probably study a bit, and get even better at leetcode problems. During that whole time, my cognitive ability will not have significantly changed.
There probably is a decent correlation between cognitive ability and performance on these tasks, but it’s obviously mediated by practice.
https://www.gwern.net/DNB-FAQ contains a rather extensive review of this study and its follow-ups, critiques, etc, and comes to the conclusion that Dual n-Back training probably doesn't work to improve fluid intelligence.
That's quite a leap there. There is no contradiction between thinking IQ tests work and thinking that DNB increases IQ; I thought it did, and it was a bitter experience learning first-hand how exactly the Replication Crisis works.
If you think I am an unreliable source, why don't you look at the other meta-analyses like Melby-Lervåg, or Sala? Or look at the criticisms made by researchers like Redick whose experiments turned out to be nulls? (Redick's most recent comment on DNB is titled "The Hype Cycle of Working Memory Training" https://journals.sagepub.com/eprint/U8YNVXKM5SKF6ZY9TQBR/ful... , so you can probably guess what he thinks of DNB now.) Or look at what the field thinks in recent studies - assuming you can find any recent DNB/IQ studies, as everyone has realized that it doesn't do anything.
The process can definitely reject good candidates who are out of practice, but it’d be pretty hard to do the practice and give good performances without some underlying skill.
But look at it this way: a candidate that didn't practice before the interview is either not that interested, a bit dull about basic human interaction, or cannot even understand what an interview is.
It's honestly like coming to an interview without a suit and tie. Sure, if you're a pretentious startup it'll horrify you, but anywhere else it shows that the candidate just doesn't understand what's happening: he could have made the cheapest possible first-impression effort and didn't even think about it, he came in sandals and a t-shirt instead.
Do you want him talking to clients? Can you afford him not talking to clients?
Is there anything wrong with preferring people who have spent a lot of time practicing leetcode?
Most jobs prefer people with certain kinds of experience vs raw cognitive ability. Raw cognitive ability doesn’t mean much outside of entry level jobs.
I don’t believe the interview process would be any more efficient or effective for software engineering roles if they started preferring pure “cognitive ability” to leetcode effort.
Assuming all other things are equal, obviously anyone will choose the candidate who can do these questions vs one who cannot.
The problem, obviously, is that all other things are seldom equal IRL. So the issue becomes: would you rather hire someone who has a solid GitHub profile but can't whiteboard algos, or someone who doesn't have any past portfolio but can ace whiteboard algos?
If you give me 20 years, I could probably get a PhD. Given ENOUGH time, anyone could theoretically achieve anything. But the differentiator between candidates happens when the clock is ticking.
I, and most other employers, are not looking for perfect answers. We want to know how you approach a problem under these constraints: short time, high pressure, unseen problem.
We want to know whether you are the type who chokes. It tells us that your desire to look good in front of someone (or conversely, your fear of looking bad in front of someone) fills up your mind more than your desire to solve a problem.
Would you choke if your mom asked you the same question? No, because you don't fear being judged, right?
We want a similar trait in our developers. We want you to GIVE VERY LITTLE SHIT about what we think of you, at least for the next 30 minutes. We want you to have enough control of your mind to TEMPORARILY ignore our existence and focus on the problem at hand.
Granted, such scenarios occur rarely IRL, but it tells us a lot about how much control the person has over their psyche, how honest they are about their limits, what they do when they reach their limit etc. It's as much a behavioral test as an IQ one.
Having said all that, not all companies need non-choking geniuses. Choking geniuses work fine for many.
> I, and most other employers, are not looking for perfect answers
I wouldn't agree with that. Most of the interviewers I've had, at the end of the day, didn't want to take the risk of a "hire" if the answer wasn't what they expected.
> We want to know whether you are the type who chokes. It tells us that your desire to look good in front of someone (or conversely, your fear of looking bad in front of someone) fills up your mind more than your desire to solve a problem.
I think that would be better written into the job requirements. I would think twice before going to an interview whose sole purpose is to manufacture an artificially stressful situation, on top of an already quite stressful one, just to see if I break or not.
Yeah, I can give zero shits if the stakes are low or if I'm not interested in the job or the company, for example. It's not that my ego gets hurt; by the time I go to an interview I am already emotionally invested in wanting to work for that prospective company.
All that said, people come from different backgrounds. Imagine interviewing someone who is constantly judged in life and/or has struggled with bad managers, and expecting them to be free as a bird in the face of failure.
The problem is that you're not paying enough for this. You're just not, even if you're FAANG. A smart dev can likely find a good answer for you---but it will almost always be true that it is a better answer, for them personally, to just go work somewhere else.
We are not paying you to SOLVE the problem with those constraints.
We are simply evaluating how you APPROACH a problem with those constraints.
If given such a problem, you throw your hands up in the air, or give me a lecture about how such a problem does not occur in real life, or how it doesn't represent your best-self, that itself tells a lot about you.
My question, as I repeat in other comments, is this: WHY do you feel stressed? Delve into your psyche for just one minute and it will boil down to fear of being judged as incompetent. All we are looking for is someone who can set aside this fear TEMPORARILY.
You don't feel pressure if your mom throws you this problem, right? Can you do the same even if a million people are watching, but just for 2 minutes?
Even if we throw you an NP-hard problem, can you objectively spend 2 minutes thinking about it, talk for another 2 minutes about how you'd approach it, and then say, 'This looks like it's beyond my skillset'? That clarity of thinking, mastery of fear, and blatant honesty is worth a hundred GitHub Apache open-source projects.
People who can't do this lack the maturity to understand that in real-life, nobody really gives a damn about our achievements and failures except us (and very close relatives of course).
I think you are kidding yourself if you think the majority of these companies are satisfied with an answer consisting of "This looks like it's beyond my skillset". All things considered, they are going to progress the candidate who can solve the problem because it's an early stage test.
I wouldn't feel pressure if my mom threw me the problem, but I still probably wouldn't be able to come up with a solution (or even approach anything useful in a reasonable amount of time) that would satisfy an interviewer because frankly, I'm terrible at it and haven't spent the time to get good, just like many, many other successful developers.
Stress is only one aspect of these that people dislike.
>All we are looking for is someone who can set aside this fear TEMPORARILY.
Why should I do this? Fear is a survival signal, why should I ignore it? It's not fear of the problem, it's fear of the company. And is this not a justified fear? You say not to be afraid of being judged, but isn't that the entire purpose of the interview?
Now, there is a scenario I can imagine where I have some pride invested and am willing to be a fearless engineering badass, the developer equivalent of a Marine. But the Marines are loyal back in return, never leaving a man behind, etc. I'd also happily attempt such a problem for my mom, because I know our relationship isn't dependent on this particular problem. Is your company going to be as loyal as the Marines, or love me unconditionally like my mom?
Why do you say that FAANG isn’t paying enough for this? They seem to be, since they are hiring more than enough engineers in this manner to build even trillion-dollar companies. I doubt they give two shits about the developers who decide to work elsewhere.
Sure, they hire a good # of devs---who then mysteriously want to go into management, leave to start a startup, or simply complain about their bosses. No one with actual agency and power in the matter chooses to stay in that position, once they understand what it is, what their options are, and aren't handcuffed by a mortgage. Or perhaps the company isn't as dysfunctional as the interview, and they stay---in which case it is just the interview that's miscalibrated, serving as an arbitrary hazing ritual.
Despite any problems they have, they are able to retain enough devs to get their work done. Are they losing out on even $1 of earnings because of their inability to hire devs? Even with the arbitrary hazing ritual in place?
> Is there anything wrong with preferring people who have spent a lot of time practicing leetcode?
If you care about diversity and inclusion, yes. This interview format prefers people with formal education, extra time, good mental health, no kids, etc.
This can easily eliminate some women + caregivers + those with kids, the self taught, people that struggle with mental health or learning disabilities (including ADHD), etc.
Anecdotal story:
Studying LC because I'm self taught led me into burnout, and I ended up with an ADHD diagnosis (it's common for women to get diagnosed at an older age anyway). My ADHD brain wants to do things it likes, and studying LC isn't one of them, so it burned me out faster. A former manager even told me to slow down and I didn't listen!
I struggled for a while with interviews and getting my current job. Each additional interview became extremely traumatizing for me (which is probably compounded by Rejection Sensitive Dysphoria that comes with ADHD). It's been almost a year since I got my job and I still worry about interviewing again in the future.
For me, being a woman in tech with ADHD makes it really hard to walk into an interview feeling confident. It's intimidating to be obviously different than the vast majority of my interviewers, and I'm so worried about whether I'll keep my train of thought and remember things or talk in a way that makes sense. And it spirals into worrying that I make women in tech look bad, because so many people seem to believe women are inherently worse.
I've noticed that exactly. IQ presumably is something inherent, but personal experience shows that technical interviews have nothing to do with IQ. They can be easily solved by building a dynamic-programming problem pattern matcher in your brain, which is what I built after doing 300 leetcode problems. I did not become smarter in general, or at coding at all; I just got good at mapping problems to binary trees or dynamic programming or string search.
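To illustrate what that kind of pattern matcher buys you: once you tag a prompt as one-dimensional dynamic programming, the code is almost mechanical. A minimal sketch in Python, using the classic climbing-stairs problem as a stand-in example (not necessarily one of the commenter's 300):

```python
def climb_stairs(n: int) -> int:
    """Count the distinct ways to climb n steps taking 1 or 2 at a time.

    Recognizing "count the ways to reach state n" as 1-D dynamic
    programming gives the recurrence ways(n) = ways(n-1) + ways(n-2),
    which rolls up into just two variables.
    """
    a, b = 1, 1  # ways to reach step 0 and step 1
    for _ in range(n - 1):
        a, b = b, a + b  # slide the window forward one step
    return b
```

The point stands either way: spotting the recurrence is a trained reflex, not a measure of general intelligence.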
I don't get how IQ could be considered inherent. It's a test, so results are determined by talent and training. A person who spent a month solving the types of problems present in an IQ test will certainly get a higher score. By the way, you did become better at coding by solving leetcode problems, because now you can easily spot them in the wild and use a proper algorithm instead of some naive solution.
Aren't IQ tests the same?
You just learn how to solve certain kinds of problems (series of images, series of numbers, and so on).
This obviously does not make you "smarter", it just makes you good at solving certain kinds of puzzles.
But "smart" is ill-defined and vague anyway.
That's the reason all real IQ tests (not the ones you find on the internet, the real ones) are secret/closely guarded: Knowing the tests beforehand skews the results.
So: yes, it is the same in that way and has been theorized as one of the reasons for the Flynn effect. Afaik there's also research into new types of IQ tests to mitigate this all the time, but a good IQ test has higher requirements than the typical interviewing test, so it takes longer.
It's really not. Knowing how to solve those leetcode questions has not improved my professional life one bit. I am not a better programmer; in fact, it probably made me worse. At my work I pretty much never deal with the kind of coding that leetcode requires. I'm more interested in writing larger-scale software than in throwaway toy problems that completely ignore code quality and maintainability.
In fact, I'm not even good at solving leetcode problems that well so I wasn't able to get a job at the big N companies...although it did help with passing the interview at one of the smaller companies.
It depends. Maybe if people devoted enough time to cramming algorithmic problems (more than they did in their CS classes, even), they could memorize enough solutions and see the patterns often enough to master this type of technical interview. One wonders: can that rote memorization plus pattern-matching process be trained into anyone, meaning this is not a measure of intelligence but of brute force, like memorizing Rubik's Cube solutions?
And then the practical concern arises- does that actually make anyone a better programmer, and benefit employers?
Maybe they are at Google, but hiring managers are not generally savvy enough to have the ulterior motive you're attributing to them; most of them are just doing what they were taught to do at previous jobs.
I think your comment is a pretty gross mischaracterization of the relevant Supreme Court ruling. It's definitely not the case that "IQ tests are actually not allowed in the US". From the wiki page you link:
> The Supreme Court ruled that under Title VII of the Civil Rights Act of 1964, if such tests disparately impact ethnic minority groups, businesses must demonstrate that such tests are "reasonably related" to the job for which the test is required.
I think it's pretty trivial for any company to argue that technical interviews are relevant to a developer job.
Note that the material you quote from Wikipedia is in error; the simple-to-meet "reasonably related" standard only applies in disparate-impact cases where the discriminatory effect is based on age over 40, which is not under Title VII. The Title VII test is a much higher bar: once the disparate effect is shown, the employer must demonstrate "business necessity", and even then disparate-impact discrimination can be found if the employer declined to adopt available alternatives with less discriminatory effect.
> I think it's pretty trivial for any company to argue that technical interviews are relevant to a developer job.
Is it? Can you show, right now, that your company's technical interviews correlate positively with job performance? Because from what I've seen, that's not commonly checked in companies, and even when it is, you don't hear "the technical interview is bullshit, let's do something else" as a result.
That's how I view them as well. It's why a lot of people can get through them by doing lots of Leetcode in the same way that one might do lots of logic problems in order to score higher on an IQ test.
Leetcode practice gets you through a particular kind of coding test. I would say that only covers ~half what a good coding test interview should cover. It makes you good at the "how", and solving algorithmic or textbook architecture questions.
The questions I prefer to ask are about writing a basic but readable and maintainable piece of code, maybe 50-100 lines long, that requires the candidate to talk about why they are making certain trade-offs, questions that look at when they would involve other team members in decisions (when do you ask a Product Manager or Designer for help?), questions that try to understand whether they can solve problems rather than the one problem in one language given to them in the interview. I typically do those interviews in Python but some of the best candidates have never written any Python – but their communication, intuition, and problem solving abilities were great.
For junior positions, I tend to ask very simple questions: reverse a string, or, given an array and an int, count how many times that number occurs (or variants thereof). Something that requires a loop and some ifs. It opens the possibility of endless discussions about extensions, refactoring, runtime, edge cases; it gives a solid basis on which to judge their ability.
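As a sketch of the counting variant (my wording, not necessarily the exact prompt), the expected baseline answer really is just a loop and an if:

```python
def count_occurrences(values: list[int], target: int) -> int:
    """Count how many times target appears in values."""
    count = 0
    for value in values:
        if value == target:
            count += 1
    return count
```

The follow-up discussion writes itself from there: empty input, the built-in `values.count(target)` alternative, what changes if the array is sorted, and so on.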
For senior positions, it's slightly more advanced. But if they make it through to second round, we have them bug hunt and refactor some piece of code.
Programming is a craft and I need to evaluate that craft.
At least there’s considerable variance in the problems on an IQ test so your intelligence is diversified across many problem types. The type of shit I’ve come up against in past interviews has often been essentially an all-or-nothing single problem. Which is why I now refuse technical interviews.
Advice on getting a job as a software engineer without a technical interview? Literally 100% of the dozens of interviews I've had in my life were coding interviews.
Have projects available online that showcase your ability/ingenuity/creativity. Just reject the technical interview and explain your reasoning, offer an alternative.
Works for me because I don’t want to work anywhere where interviews aren’t being conducted by people competent enough to handle fluid requirements.
I don't see how asking a potential software engineer to write a 30 line function in 30 minutes is bad. Being able to write code is part of the job description.
I don’t need to “prove” that I can write code by memorizing an algorithm. I have made more than enough code available on my GitHub which I’d be happy to explain in depth with anyone who is interested. Not interested? Great, you’ve made it easier for me to make my decision.
Depending on what those 30 lines are supposed to do and whether they have to be asymptotically optimal at doing it, writing down 30 lines of code while talking about every decision you’re making, to show the “thought process” might very well be more than a 30 minute task.
I didn't mean to imply that this is the only way, just that it's a way that has worked for me.
I've worked with embedded systems and with desktop software that was used by my employers (electronics manufacturers) rather than sold by them, and I have never been asked to code in an interview.
IQ Tests ARE allowed in the US, as clearly stated in the page you linked ... "Broad aptitude tests used in hiring practices that disparately impact ethnic minorities must be reasonably related to the job."
Technical interviews are NOT a loophole about IQ tests ... they are tests that are related to the job.
This is questionable. There is data suggesting job performance is not correlated with interview performance. Also, none of the questions in my last interviews had anything to do with my actual job, not even the language or the problems I solved (since I could pick any language to write the code in). The link between them is very arbitrary, imo.
I’ve interviewed many developers who pass a code test, get hired, and then can’t design a basic interface for an object. Or take forever getting through a large feature. Or can’t communicate clearly. Or go down the rabbit hole and implement something really complex and shiny but irrelevant. (Irrelevant isn’t always bad, but it does often expose a weakness in effectively interpreting user needs.)
Maybe 1 in 10 engineers who pass a code test end up being actually good software engineers. The yield definitely varies by the role.
Wait, if code tests aren’t perfectly predictive, then how do interviewers get feedback about failures (or even successes) of the screening process? They don’t, because recruiters and hiring managers tend to keep that to themselves.
The story is way way more complicated than this post alludes.
I know of at least a couple instances where the person failed the code test and then (according to LinkedIn) got a solid job somewhere else.
Triplebyte actually has some hard data on this trend. Many candidates definitely bomb a few initial screens, do well on later screens, and then burn out and even withdraw from later on-sites. I imagine there’s high variance. Any experienced recruiter I’m sure has observed the same pattern... bombing one code test is typically not definitive.
>bombing one code test is typically not definitive
How many times would you say someone has to bomb Triplebyte (and/or similar) to make it “definitive”?
And after said number of failures what is it exactly that is “definitive”? What is the conclusion you would make? What is the conclusion that the candidate should make?
When I work with students, they're able to get _somewhere_ after several tries, even if they're tuned out for the first 5-7 times or so.
Definitive is a relative term. It's the point at which you give up. If you give up early, that might be a good thing or a bad thing for you. It's a choice you have to make yourself.
There's also the result that at Google they found hires who had one bad score typically outperformed others; the hypothesis was that there was somebody at Google who was willing to fight for them. Having a supportive manager / peer is IMO a much greater predictor of success than passing code tests.
Triplebyte has posted that onsites typically have a 20-30% pass rate. That means a totally average developer can expect to do 3-5 onsites per offer, and there can be significant variance in that number depending on any number of factors, including what the candidate ate for breakfast. There’s very little information conveyed by the number of technical interviews someone has failed.
Yup. Just means the normal distribution applies to software engineers too, which makes perfect sense. A handful will be so god awful you'll wonder how it is they even have a 10 year career in the first place. A handful will be in the first couple of years in their career and you'd think they have been doing it for 10 years. Most are average.
But here's the kicker... in software engineering, average means bad. Because average code is not good code; only amazing code is good code. So all these questions and tests trying to figure out who's a good engineer are kind of rubbish. Mostly they will find average engineers. But companies seem to think "we only hire the best", all the while the trash fire slowly burns.
Technical interviews have been quite useful to me in the past.
Once I applied for a job, and received an online technical test where every single question used the wrong terminology for a bunch of stuff and half the "answers" in the multiple choice section were wrong, while some of the "wrong answers" happened to be right.
That technical interview served its purpose perfectly, assuming its purpose was to save me from accidentally taking a job that would have been a fucking train wreck.
> To see this more clearly, consider interviewing a pilot. After establishing basic bona fides, it would be reasonable to ask the candidate about what to do in various emergency situations. Emergency situations aren’t representative of the daily work of flying, but safety is important so nobody would accuse such an interviewer of asking irrelevant questions.
Minor nit about what is really a pretty good post. This paragraph sounds quite reasonable to this non-pilot, but do you notice what it doesn't say? It doesn't tell you whether pilots are actually interviewed this way.
Here's an actual example I know of, though it's not making the point the author wants to make. At an old job, every so often, a guy used to wander over from the other side of the yard and ask us to cut a few random pieces of pipe. We then carried them across the yard for a candidate pipe welder to weld them. If the welds looked good, they'd probably hire the candidate. It was pretty close to actual work you'd do, and the results seemed ok.
I'm sure there are examples that would suit his point and be real, not just speculation.
Oral exams are part of certification for pilots. Additionally, pilots have to keep log books, so you can simply examine their history and progress. Repeating that during a job interview is kind of redundant: either they are certified and current, or not. For software engineers, certifications are kind of meaningless. I actually treat them as a red flag, because in my experience it's the not-so-great consultants who tend to overemphasize certifications in their CVs, to cover up the fact that they haven't done a lot of great projects. Usually this is obvious from their CVs.
I've been on both sides of the table for technical interviews. If we're talking, that means CVs and linkedin & github profiles were scrutinized, etc. and we're now moving to the phase where we are going to mutually find out whether there's a basis for working together. Part of that is quickly verifying those things were accurate; but most of it is simply eliminating any red flags: any obvious reasons you should not hire someone.
I usually focus on just getting people to talk about what they've recently been doing and getting them to talk about things that they are interested in technically.
When I'm interviewed myself, I mentally reverse the roles. I already know I'm good enough; the interview is about determining whether the company is good enough for me. I've declined offers because of how the interview went or because I realized the interviewer would not have gotten past my own red flags: I'd never hire them.
This attitude works well on both sides of the table. Would I want to work for me based on how I'm behaving? Is my behavior making them more or less enthusiastic about working for me? Part of an interview is that it's a sales job. Imagine you get an excellent candidate and they have multiple job offers: how do you make them pick you?
Actually the pilot anecdote is nonsense. Emergency procedures are part of the procedures you're supposed to be able to execute, typically in a simulator.
A stronger anecdote would be asking an airline pilot to fly an aerobatic maneuver in an airliner like an outside loop or aileron roll.
(There are only a couple of aerobatic maneuvers done in an airliner. For example, when power is stuck at full throttle or the elevator forces the plane to climb, a quarter aileron roll can be done for recovery.)
> Despite all that I must finish by observing that regardless of our best efforts in the industry, hiring is still largely random. Well designed interview processes make it slightly better than random, which is why we do them.
This article is a defense of an asymmetric process that stresses out software developers and forces them to jump through hoops. Slightly better than random selection is a garbage pay-off for the nonsense I've experienced in technical interviews.
In which case framing the test as “technical interview” would be a bit dishonest. You should name it “stress environment coping test”.
But anyway you are right those moments do happen, but those times it’s usually “all hands on deck” kind of event where people are actively trying to help each other. Those are actually “fun” in a sense since you actually feel needed and encouraged rather than judged and scrutinized.
And each time something like this happens, it usually means a failure of the company process as a whole. You can expect it from startups, but definitely not from big companies. So if you honestly want to test a candidate on that, it raises a huge red flag for them that the company is mismanaged.
In effect you’re filtering for people that don’t know this - e.g. junior devs, or people who don’t care / tolerate it, so they won’t attempt to fix the mismanagement that allowed it to happen, the last thing you would want probably.
Thinking about it you could even stage such a contrived stress test, and actually hire people that _do_ flip out as those would probably be the ones that care enough to fix longstanding issues.
You're misattributing the cause of the stress. Any competitive job opening is intrinsically going to be stressful because there are X people interviewing for 1 position and you have to outperform the other (X-1) candidates. The interview format is irrelevant. The only "non stressful" way to do an interview is to hand out $500k/year job offers to every warm body that hands you their resume, no questions asked. Sorry, but that's not going to happen.
I think you're getting at the core of the situation. Ultimately the interview is going to be stressful because it can only end in two ways. But I think there are ways to mitigate it, as suggested by some of the proposals in other comments.
Another cause of stress is simply that the whole process is so damned opaque. There's not just a power asymmetry between hirer and applicant, there's also an information asymmetry. At the end of the day, you don't know what criteria were actually used to judge you. You don't know if you hit the mark and were just passed over in favor of a better candidate, or if you bombed without realizing it, or if one of the interviewers simply didn't like you. And at that point it's not just losing out on the offer, it's losing the time spent to get a rejection that blandly says you weren't the right fit. And sometimes they don't even remember to reject you.
Like the intrinsic stress of being at the mercy of an employer, it's part of the process that one just has to get used to. But one wonders if there's some better way. Perhaps pre-interview rubrics and post-interview feedback? Make applicants sign a waiver so they can't use it in a lawsuit. At least it'd be more transparent than the status quo.
Very much agreed. Opacity is the larger issue for me. Especially considering the disproportionate investment I have to make as an interviewee. And it seems people readily admit the process yields marginally better than random results - so it's pretty puzzling to see people defend it like there's not much to complain about.
I don't agree that interviews for "competitive" jobs are inherently stressful. Insofar as any interview is nerve-wracking, I agree that's unavoidable. But the point of hiring is finding anyone capable of doing the job, not overseeing a gladiator deathmatch. I get the sense that no one is vulnerable enough to admit that hiring is a toss-up and an almost intractable problem. Even more annoying is that experienced, competent workers in the industry tend to believe they'd be good at judging candidates, when there's little evidence that technical competence translates to hiring competence.
Dealing with stress well is a valuable skill. Many of those who don't manage their emotions well become toxic. I would personally rather let a few great engineers who cannot find balance through tough deadlines, un-paved roadmaps, and tricky social situations find jobs elsewhere than hire people who cannot manage their emotions.
Yes. That kind of stress would be like trying to fix an issue causing the company to hemorrhage money within 30 mins without being able to google anything and with others looking over your shoulder and wanting you to explain your thoughts. Oh and you have to get it correct on a whiteboard first
Sometimes there's no time to google, and you want to be talking to someone while you're doing it, as a sanity check. https://dealbook.nytimes.com/2012/08/02/knight-capital-says-...
$10M/minute is an extreme case, but these things happen on a smaller scale all the time.
I'm not saying this is the justification for coding interviews, just saying that it happens, and the ability to code something up quickly that also works correctly without having time to test it sometimes saves the day.
I do think coding interviews that people are complaining about are good, and getting in shape to do well in them is good for you. You can tell it's good for you just by the amount of complaining people are doing :)
What do you mean no time to Google? You're never going to be able to remember everything. Sure the ideal situation is that you're fixing a bug where you happen to know the exact syntax for every piece of every library you need.
But in the very likely situation where you don't, of course you have time to google.
>and the ability to code something up quickly that also works correctly without having time to test it sometimes saves the day.
Often because the company has optimized the hiring process for people who are good at doing this vs. people who are good at designing maintainable, durable systems.
If 30-minute, million-dollar fixes are a frequent enough occurrence to make aptitude at them the primary or even a major factor in your hiring decision, you have done something horribly wrong.
>I do think coding interviews that people are complaining about are good, and getting in shape to do well in them is good for you.
Having hired people using many different techniques, I think that these types of interviews are only a bit better than random chance. They are a separate skill from 99% of daily programming work, and they are gameable. I also think they are nothing more than a fad caused by people cargo culting Google.
Not a single other technical discipline widely uses this kind of stage-performance-based interview process. Not one.
>and you want to be talking to someone while you're doing it, as a sanity check.
That makes zero sense. Sure sometimes it helps to talk through a problem if you're working with someone who is also trying to solve the problem with you, and you might want to explain a fix after you solve the problem but before you push as a sanity check.
But if it's a hard problem you can kick rocks if you expect me to explain what I'm thinking about while I'm concentrating on solving it--while you're watching over my shoulder, and are not actively engaged in helping me solve the problem. You're just slowing me down.
Many problems have solutions that were worthy of publication and took a lot longer than 30 minutes for very smart people to solve. So if you don't already know the answer or the answer to a very similar problem then yes objectively they are hard problems.
But you are supposed to know the algorithms and approaches that apply to the problem, that's the whole point. It took humanity tens of thousands of years to come up with the first script, but nobody wants you to invent the English alphabet in an interview, you're expected to know it.
I also can't remember ever being asked to solve a problem whose solution would have merited publication in my lifetime, and I was born around the same time Unix was.
There's a difference between knowing which class of algorithms to use, or even which specific algorithm to look up in a given situation, and knowing exactly how to implement the specific pet algorithm of an engineer on a power trip without using reference materials. This kind of adversarial process is nothing more than a hazing ritual. Other industries think we're insane, and the vast majority of companies that do this style of interview do it because they want to do what Google does, not because they have any evidence it works.
>I also can't remember ever being asked to solve a problem whose solution would have merited publication in my lifetime, and I was born around the same time Unix was.
We've had different experiences then. I've definitely had to implement algorithms designed after 1971.
Software is very different from any other type of engineering I can think of, so I'm not surprised that the interviewing processes would be different. Two different aeronautical engineers for example may come up with two different solutions, but I don't think the better one would be better by a factor of ten, let alone something like a million which is common with software. If any aeronautical engineers read this I'd like to be corrected if I'm wrong.
I agree that for GUI work you probably don't need this, but for a back end position I would like a person for whom this kind of interview isn't a big deal to prepare for.
I'm curious to hear what the post 1971 algorithms in question are.
Wow you are really invested in the current interview process. I've never met anyone who defended it quite so vigorously.
Any engineering disciplines that deal with designing processes are very similar to software engineering. It's nowhere special enough to demand a completely unique hiring process.
Even if it were, I'd like to see some hard data to support the current interview best practices.
>I'm curious to hear what the post 1971 algorithms in question are.
Off the top of my head: you've had a question involving red-black trees? B-trees? Quad-trees and oct-trees?
For quad-trees and oct-trees: they are fairly common in gaming and not particularly time-consuming to implement.
For Red-Black and B-Trees, I've never heard of anyone being asked to implement the entire thing from scratch, but significant portions sure. Also explaining the implementation of significant portions.
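For red-black trees, the kind of "significant portion" described above is typically a rotation, the building block of rebalancing. Here's a hedged sketch in C (the struct layout and names are my own, not taken from any particular interview):

```c
#include <stddef.h>

/* Minimal tree node for illustration. */
typedef struct node {
    struct node *left, *right, *parent;
    int key;
} node;

/* Rotate the subtree rooted at x to the left; *root is updated if x
 * was the tree's root. Assumes x->right is non-NULL. */
void rotate_left(node **root, node *x)
{
    node *y = x->right;

    x->right = y->left;           /* y's left subtree becomes x's right */
    if (y->left)
        y->left->parent = x;

    y->parent = x->parent;        /* splice y into x's old position */
    if (!x->parent)
        *root = y;
    else if (x == x->parent->left)
        x->parent->left = y;
    else
        x->parent->right = y;

    y->left = x;                  /* x becomes y's left child */
    x->parent = y;
}
```

Explaining why the in-order sequence is preserved by this pointer shuffle is usually the real point of the question, more than the code itself.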
Here's a similar post from 2015 by tptacek:
>You're a 55-year old engineer who graduated from school in '81. You're being interviewed by a 24-year old engineer who graduated 2 years ago. You're asked, in an "algorithm interview", to explain some detail of the implementation of an AVL tree.
>In reality, despite the fact that they're covered in CS courses, virtually nobody uses AVL trees in industry. In fact, people barely use balanced binary trees at all, and when they do, they use red-black trees, and they use someone else's implementation, because they're a bear to debug.
>The 24-year old has an advantage with this question because they were recently taught about, and perhaps had to do exercises/homework/tests based on, AVL trees.
>That this happens is stupid, because it's very unlikely that conversance with AVL trees is going to make much of a difference to your on-the-job performance. Almost all the narratives you can come up with about this involve reading tea leaves and are shot down easily by better selection/hiring/interviewing techniques that answer questions more directly.
>This line of questioning can go too far; I don't, for instance, think knowing the big-O characteristics of an AVL tree is unreasonable (it's a balanced binary tree, and you should have the complexity of operations on those in resident set throughout your career).
Here's another comment from tptacek in the same thread. If you don't know who he is: you definitely want to hire him if you can. Him failing out of your hiring process would be a travesty.
>I was asked to do Bellman-Ford. I bombed that question, and was so shaken that I bombed the rest of the interview as well.
>The irony was, I'd just spent the preceding 2 years writing multicast link-state routing code. I could have coded a decent C++ Bellman-Ford in a couple hours, but couldn't pull it out of my head on the spot in an interview without babbling. (I tried to dodge by describing link-state LSP forwarding and then Dijkstra, but wasn't given credit for knowing the more sophisticated algorithm).
>Algorithm interviews suck. We should stop doing them entirely.
Those are exceptions, if it was common to get these kinds of questions I would have gotten at least one like that at one of the many interviews I've been in. Getting rid of coding interviews won't stop some interviewers from being jerks. I much prefer being rejected for bombing a coding interview, than killing it and then getting rejected because of the behavioral interview, where someone 20 years younger than me acts like they can figure me out by asking me a bunch of silly questions.
First you deny the experience; then, when presented with evidence, you claim the types of interviews people complain about are just rare.
>Getting rid of coding interviews won't stop some interviewers from being jerks. I much prefer being rejected for bombing a coding interview, than killing it and then getting rejected because of the behavioral interview
The same thing applies to your point. Coding interviews don't replace those issues they just add another hoop. What do you think behavioral questions, lunch interviews, and culture fit are?
>where someone 20 years younger than me acts like they can figure me out by asking me a bunch of silly questions.
Based on what I've observed this is what currently happens. 99% of companies have a terrible process for developing questions and evaluating responses.
And programming is pretty much the only field where it's the norm to be interviewed by someone much younger than you.
Here's the difference - when shit hits the fan everyone in the room wants to solve the problem and they are actively helping each other. Technical interview is completely different.
We conduct it as a working session. Our primary goals are to determine ability to reason about problems, apply the proper technique (not syntax), the ability to communicate with us about what they are doing clearly and to get a sense of how they work during the planning phase of a project. We are looking for a connection with the prospect more than a complete technical solution.
Worth saying, sometimes there’s nothing to google, or googling wouldn’t yield a significant answer - perhaps the issue lies in some module of fairly home grown code.
Google, or any other reference, is the hard drive, and the skill you have in your noggin is the cache. You wouldn't want to buy a machine that's all hard drive and no cache.
I think the issue is that some people either don't remember the stress of an interview or just don't get that stressed so they can't empathise with people that do get horrendously stressed in an interview.
I'm with you on this one: if working at a company induces interview levels of stress daily, or even regularly, then please don't hire me.
I've never had a whiteboard session, these type of interviews are not common in the non-start-up world in the UK. I just get really, really stressed in an interview.
To make matters worse, it seems that I project this aura of calm and self assurance, or so I've been told by two of my interviewers after they hired me, so that I don't even get a concession for obvious nervousness.
I agree that handling stress and managing emotions is a super valuable skill. But I'll hard-pass any interviews that try to stress-test my ability to maintain my emotions.
Yeah, that post didn't come out well. Either way, it's not hard to see that I was trying to convey that any company that's going to put me through the wringer to see when I break down isn't worth my time or frustration. Be upfront about what you want or stop wasting my time and energy. It's like the scene at the end of Charlie and the Chocolate Factory - or this video[1].
I failed a C++ test that was a requirement to get a job (30 minutes timed, multiple choice). I had written a C++ application that I had sold to thousands of customers (Fortune 100s in there too). I had mistakenly thought I could be a programmer at a real job.
Later, I sold $20,000 worth of source code that took me 60 hours to produce. The client thought it was easy to read.
I still don't think I'm a real programmer. The test sure seemed to prove that. But, people still buy my software today (I have other apps).
From my perspective, it seems the current technical interviews are looking for something in particular that you don't actually do later in the real job. Kinda like seeing if you can survive a skydive, when you'll actually only be packing chutes after hire.
I wish there was a way to say, up front, that there are different types of programmers, vs the catch all "programmer" job title. Because it sure seems like I'm typing out code. The computer has always accepted it.
I would not get too hung up on those multiple choice tests.
I've been programming professionally for over 25 years and ran into my first multiple choice test about 4 years ago.
In a very short space of time I managed to fail three (two C# and one C++) in quick succession.
Nowadays, if I come across a role that requires such a test I just politely refuse and move on.
FWIW, I think these tests are a waste of time (and I did so badly) because they don't actually test day-to-day programming skills.
The three tests I did were all very similar, with a lot of gotcha-type questions.
They asked things like: spot the obscure error in a very poorly written piece of spaghetti code, or say exactly what exception this code will throw, etc.
As a programmer you rarely have to deal with these issues directly, because your compiler finds the errors and the documentation gives you the exception details.
And in a way the tests knew this, because all three warned that you could not switch out of the test window while taking the test.
In other words, the test would have been a trivial exercise if you were actually allowed to use your day-to-day work tools.
I wouldn't take it personally. Language-specific tests sometimes focus on memorizable trivia instead of problem solving. Like, "How many generations does the garbage collector have?"
The fact that interviews are so standardized in this industry is a rarity, and for me, as someone interviewing, the pros outweigh the cons. It means you basically know exactly what to expect, and you have every resource, most of them free, to study with. I find that a much more meritocratic process than the way hiring is done in many other industries/professions.
The alternative is widely varying interviews from one company to the next with no idea what to expect, which leads to a mismatch in expectations and mostly wasted time. Take-home coding assignments are one alternative I see as well, but as someone interviewing that's a much bigger time commitment and something I would not want to do, especially if I'm interviewing at a few places.
The problem with take-home assignments is that while investing the time, the candidate doesn't know how it will be judged. Sometimes people want just a working prototype, but sometimes they want essentially production-quality, high-performance code.
I agree that the candidate should be able to demonstrate ability to write code, but I'm surprised that the author is unable to find interesting, short, diagnostically useful problems that crop up in the course of his day-to-day programming.
Often when writing code I think, "hmm, this problem would make a good interview question," so I keep a note of it. Here's a recent example from an accounting program where I was reporting debit and credit transactions: given a list of integers, remove all pairs where the magnitude is the same but the sign is opposite. E.g. given [-1, 4, 6, 1, -2, -4, 4], the result should be [6, -2, 4].
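One hedged sketch of that question in C (a greedy O(n^2) pass is plenty at interview scale; the function and variable names are mine):

```c
#include <stdbool.h>

/* Cancel each value against the first unmatched value of opposite sign.
 * Survivors keep their original order and are compacted to the front of
 * nums; returns the new length. Zeros are left alone, since zero has no
 * opposite sign. */
int remove_opposite_pairs(int *nums, int n)
{
    bool removed[n];                      /* C99 VLA, fine for a sketch */
    for (int i = 0; i < n; i++)
        removed[i] = false;

    for (int i = 0; i < n; i++) {
        if (removed[i] || nums[i] == 0)
            continue;
        for (int j = i + 1; j < n; j++) { /* find a partner to cancel */
            if (!removed[j] && nums[j] == -nums[i]) {
                removed[i] = removed[j] = true;
                break;
            }
        }
    }

    int k = 0;                            /* compact the survivors */
    for (int i = 0; i < n; i++)
        if (!removed[i])
            nums[k++] = nums[i];
    return k;
}
```

On the example input {-1, 4, 6, 1, -2, -4, 4} this leaves {6, -2, 4}: the -1/1 pair and one 4/-4 pair cancel, and the unmatched 4 survives.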
For more in-depth assessment, the author dismisses the idea of asking the candidate to do an all-day task or homework because it isn't respectful of the candidate's time. So why not just pay the candidate for that time?
The example question you give is exactly the type of "algorithmic" question people complain about, and that the article says is the only kind of question you can realistically ask. And for the same reasons.
> For more in-depth assessment, the author dismisses the idea of asking the candidate to do an all-day task or homework because it isn't respectful of the candidate's time. So why not just pay the candidate for that time?
First of all, it costs money. Significant money even, if you tend to interview a lot of people.
Secondly, it's often an accounting hassle to pay money - you can't just hand someone cash, it needs to be in the form of a salary, and is not always so easy to do (and some candidates would probably be put off by it, especially if they're already working, since having a second income can get complicated).
> So why not just pay the candidate for that time?
If i'm working full time I probably don't want to give you my entire weekend unless you pay me silly money.
Meanwhile - at least in the UK - if I'm earning under £100,000 I might well have to fill in a self-assessment solely because of your payment (I can't see you onboarding me onto your PAYE system for 8 hours of interview work). It's just not worth the hassle.
"Everyone who builds a team of developers, and I do mean everyone, rapidly gets used to people turning up to interview who cannot actually program computers, even under the most generous definitions of the term."
I've started doing a small whiteboard coding exercise during interviews, and over the past couple of iterations it's devolved down to something simple...and I mean really simple. And I'm watching senior developers fail the test.
For example: I'm trying to fill a bootloader developer position so I get a couple of senior embedded developers to the table. Passed the screen, HR likes them, yadda yadda. Some even have U-boot experience. Should be easy, I think.
So I'll ask them to perform a simple bit-flipping exercise on the whiteboard. Nothing insanely tricky and it's usually something that any register-level developer should be able to blurt out without thinking twice about it. And they are choking on the question.
It's kind of freaking me out. Is the test that hard? Is the developer that inexperienced or distant from doing this kind of work even though they're applying for it?
The whiteboarding in the problem setting might be the issue. How many engineers actually whiteboard regularly? It's as old-fashioned as waterfall design. Everything has a REPL shell (even Java!) now. Hell, even CloudFormation stacks let you fiddle with entire architectures through a command line. Engineers have always poked and prodded at things to learn and refine them. Whiteboards are for throwing around ideas, not for solving problems.
I spent most of my grad school years researching learning and memory. The medium and environment of the fact recall/problem solving can greatly affect performance. Now compound that with the stress of your standard technical interview and you'll get some flubs.
You might want to take a step back and consider how you're asking them to show competence in addition to what you're asking them to show competence of.
Would it really be any better if I handed them a laptop with Notepad or some IDE and asked them to type in their ideas?
What I'm asking is the bitbanging equivalent of "store 2+2 in a variable", so setting up an entire coding ritual for 3-4 lines of code isn't really what I had in mind.
To answer your question with a question - Would you be satisfied if they could solve the problem in that setting? Is making a laptop available to them for problem solving that much trouble?
Or, rather than looking for _any_ competent candidate, are you looking for someone like you and your coworkers, who can give direct answers without a pause? I ask because in one case it's more like an attempt to hire capable employees and the other is perhaps a subconscious attempt to maintain a monoculture.
You know that's a good question and, to be honest, if they could only solve this at a keyboard then I wouldn't be satisfied.
We constantly draw things out and scribble over them again when we're designing. If you can't translate ideas and communicate them to others, then it's not a good fit for me.
> whiteboard ... it's as old fashioned as waterfall design.
Call me old-fashioned then. I like to map out the data flow and process flow a bit. A little planning now saves quite a bit later. Call it pre-refactoring.
You don't go scuba diving, hiking, camping, etc, without a plan.
> It's kind of freaking me out. Is the test that hard?
You didn't show us the test so let me flip a coin...
One can get pretty far in embedded just knowing how to OR flags and mask & shift values. I guess that's something most embedded devs working at that level should be able to just blurt out. Is that what your test was about?
Then again, I'm sure you can find "senior embedded developers" who worked on the browser based interface to an appliance that runs Linux... and, well, the last few times I used U-Boot on our product I didn't really need to play flipper with bits. (Well, I did, even if I didn't need to.)
Here's the test. It's one I stole from an online test practice site but IMO it actually works well on a number of levels:
Give me a routine that takes in an unsigned integer, pick any size you want, and returns TRUE if the binary representation has two consecutive 1s in it.
Test data: 9 = 1001b = FALSE, 13 = 1101b = TRUE.
I'll even give them an example function prototype:
bool adjacent_bits(uint32 x);
As I see it there are multiple ways you can write this and I really don't care how you do it. If you start with a simple case (say, a for loop), get it working, then realize you can optimize it...that's even better. It even optimizes down further to a single inline math operation.
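For reference, the single-operation version presumably looks something like this (using the standard `uint32_t` in place of the prototype's `uint32`):

```c
#include <stdbool.h>
#include <stdint.h>

/* True if the binary representation of x contains two consecutive 1s. */
bool adjacent_bits(uint32_t x)
{
    /* x >> 1 lines each bit up with its right-hand neighbor, so the AND
     * is nonzero exactly when some pair of adjacent bits are both set. */
    return (x & (x >> 1)) != 0;
}
```

Checking it against the test data: 13 (1101b) shifted is 0110b, and 1101b & 0110b = 0100b, nonzero, so TRUE; 9 (1001b) shifted is 0100b, and 1001b & 0100b = 0, so FALSE.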
The key is that I can see you understand bit shifting and a bit operator in C. If you've done this kind of work as your resume claims, you should be somewhere in the zone on this without too much struggle.
I've seen candidates write triple-nested for loops. Bizarre interlocking bools trying to make a state machine. Hard-coded numbers that kill portability. Calls to printf where I asked for a boolean return. Attack ships on fire off the shoulder of Orion...
Perhaps, like you said, this helps weed out the people that just played with the GUI to configure something when I need people that got a driver working down at the register level. So I'm sticking with the test for now.
Perhaps, some of the time. I doubt it's the whole answer, though. I suspect that some of the answer is stress, and some is fakers, and some is people who have been in embedded and never had to do that. (I haven't for 10 years... but I'm sure I can still do it.)
why is it 'high-stress' ? desire to look good ? fear of being judged as a fool?
The ability to temporarily NOT GIVE A SHIT about what others think of you is a pretty valuable psychological/leadership indicator IMO.
I would want such folks, who may not be able to code up a binary-tree search, but would work through it in the same manner they would at home in pyjamas, to be team-leads or managers over time.
They all had a direct answer or something approaching it without even pausing to think about it.
It gives me reinforcing information that the problem isn't out of line, but you make a good point that one shouldn't pose a problem that nobody else at the table can solve.
People who can't get a job go to far more interviews than people who have one. Rejecting a large proportion of candidates at the first step is not necessarily a red flag.
Off topic, but my mind immediately went to WW2 German submarines when you wrote this. I was wondering how you were screening so many 90+ year old German men for about half a second. :P
It's the same power dynamic that will be in place forever, incumbents unable to recognize talent outside of their narrow perspective. The penalty is paid by the candidate, and the company will usually go along just fine, but it is a sad broken system for a lot of qualified people.
> The penalty is paid by the candidate, and the company will usually go along just fine, but it is a sad broken system for a lot of qualified people.
I don't understand how people don't get it: if this were a malfunctioning system, companies' margins/bottom lines would be affected and they would correct course. The fact that this trend not only persists but grows signals that it's actually effective.
A company I used to work at made ~$10Bn in profit per year and employed 300k people. This does not mean that all its practices were profit-making; some were demonstrably not, even admitted to be so by SVPs. I don't mean some investment that would pay off in the future, I mean cost-cutting exercises that went so far past the fat and into the bone (an SVP's words when we got a new CEO).
Another company I used to work at, this one employing 20 people, had a shockingly bad product. It looked dated, it was slow and it had all manner of process issues, in a word: Dysfunctional. The company was pretty profitable as it had excellent sales people. Working here really made me question a lot of things. How could a company with such a shitty, outdated, slow and unmaintainable product be so successful (for its size)?
The point is that the relationship between qualified people and financial success is far from linear.
If this were correct startups wouldn't have a chance to dethrone an incumbent, because companies would adjust course long before.
The thing is: Some people may even see this. But then they have to convince their boss. And maybe that boss has to convince their own boss. Or HR. Or both. And then you get into "but does this mean a part of our workforce is incompetent" territory and so on.
Something doesn't get corrected just because it affects the bottom line. Someone has to fight for it and with a topic like that you probably won't win many friends by doing it.
I suspect it's really difficult to tell from within whether your bottom line is being hurt or helped by the interview process.
The effects are not immediate, and you rarely get to see how the numbers would've looked if you had done everything differently. Of course, that goes both ways.
There's a lot of luck and a lot of pareto distributions for software development. There is no straightforward relationship between effectiveness and profit like there is for other careers (e.g. sales).
> but it is a sad broken system for a lot of qualified people.
The positions get filled. The candidate who gets the job is also qualified.
There is way too much bias from people who go through these interviews and don't get hired, not because they weren't competent or because the system is broken, but simply because someone else was better.
Someone who has talent in a way that the incumbent is unfamiliar with is far more likely to be able to demonstrate that talent in a coding test (where the interviewer can't deny that their code works, even if they don't understand it) than in a conversational interview (where the interviewer can easily dismiss anything they don't understand as bullshit).
>Fix a real bug or implement a real feature. Beyond the obvious copyright issues if you don’t get an assignment from the candidate,
Irrelevant if you've already fixed or implemented it and do not intend to use the candidate's code.
>there’s no good way to make this process repeatable,
Select fixed bug/feature, isolate code, create question, use in interview, iterate on task.
>no way to ensure the question always fits the available time,
Other than selecting the question such that it does and/or tweaking it such that it takes more/less time.
>and no way to ensure you see a good range of programming skills (e.g. the feature may simply involve copy/pasting some existing code with minor tweaks).
Other than selecting the question such that it isn't that.
>It also assumes you’re hiring Java developers for a Java codebase or Ruby developers for a Ruby codebase
I'm certainly not going to hire ruby developers for a java code base or java developers for a ruby code base, and I find it weird to assume that I would.
> I'm certainly not going to hire ruby developers for a java code base or java developers for a ruby code base, and I find it weird to assume that I would.
Learning a programming language is usually not the hard part unless it follows a completely new paradigm that you are entirely unfamiliar with, e.g. programming in an object-oriented language if you have only ever done imperative programming. I've interviewed people in languages they didn't know with great success; you just gotta ask the right questions, and sometimes their questions (or lack of them) are more important than their answers.
>Learning a programming language is usually not the hard part
The syntax and semantics can indeed be picked up in a matter of days. It's working within the ecosystem, circumventing all of the non-obvious pitfalls, knowing the shortcuts (and knowing what can't be shortcut), and knowing where all the important information is located that requires years of experience.
I've worked with developers who have switched languages and they often port across bad habits and quirks and try to make the new language more like their "home" language. Sometimes years later they're still trying to do things in a non-optimal way.
I also think that most people are more productive in their favored language (or languages) and you're unlikely to get as good quality work if you pull them away from it.
There's also a cultural element to programming languages that I think is underrated. I don't expect java programmers to have the same kind of outlook or approach to python programmers.
You simply cannot be arguing that companies who need a developer to maintain a java code base should take special care not to consider candidates' ability in java, just in case they don't know the language at all.
Yes learning a new programming language can be easy, so go do it before you apply for the job that’s looking for someone experienced in the language.
I disagree with the author's explanation as to why all questions are algorithmic. I suspect the real reason is due to the US dominance in tech and the popularity of algorithmics in US academia. Indeed the parts of computer science to do with logic, formal methods, semantics, mathematics of program construction etc are, I am told, often described in the US as "Eurotheory". This is a great shame. A new graduate arrives at Google fully trained in algorithms as mandated, but is unlikely to need to implement any of them during their career. It is far more likely that they will need to glue existing systems together, consume APIs, author APIs etc. "Eurotheory" would much better prepare them for this.
As an example, consider that we could be asking questions on API design: get candidates to work through their semantics/axioms (introductory forms to create objects, methods to transform and eliminate them). Get them to write out high-level testable properties such as the way objects compose, associate, commute etc. Using types to enforce invariants etc
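One way to make that kind of question concrete (this is a hypothetical exercise of my own invention, not from any real interview): hand the candidate a tiny operation and ask them to state and check its algebraic laws. A minimal sketch with an interval-`merge` operation:

```python
# Hypothetical API-design exercise: given a small operation, state its
# laws (commutativity, associativity) and check them over sample inputs.
import itertools

def merge(a, b):
    """Union of two inclusive integer intervals, assuming they overlap or touch."""
    return (min(a[0], b[0]), max(a[1], b[1]))

samples = [(0, 2), (1, 5), (4, 9)]

# Commutativity: merge(a, b) == merge(b, a)
for a, b in itertools.product(samples, repeat=2):
    assert merge(a, b) == merge(b, a)

# Associativity: merge(merge(a, b), c) == merge(a, merge(b, c))
# (holds here because min and max are themselves associative)
for a, b, c in itertools.product(samples, repeat=3):
    assert merge(merge(a, b), c) == merge(a, merge(b, c))

print("laws hold")
```

In a real interview you'd push further: which laws fail once intervals can be disjoint, and how would the API's types have to change to enforce that invariant?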
Yes, and the algorithmic tests are highly biased towards those who've focused on that (either through academics or by self-training afterwards), and a miss on these questions still doesn't tell you which is the correct story. Then again, this might not matter. Firstly, everybody knows these types of questions are to be expected, so everybody can prepare. Secondly, the problem (from an employer's perspective) might not be the false negatives, but the false positives.
This in turn suggests we may be generalizing a little too much, assuming there are universally applicable ways to interview software developers, when we really need to adjust the process to the software development context. If you're developing a web service of minor scale, algorithmic questions might be stupid (the knowledge needed is about networking, async, the web, and structure rather than solving math problems), but they might be relevant for Google (where they may be looking for developers with that mindset who can use it to come up with and solve big problems in a certain way; I don't know, because I lack insight into Google's developer strategy).
As engineers and developers my experience is we tend to love to generalize, even when it's not wise to do so.
Also describes the interview process this company uses:
> Firstly, how do we hire at R3? We’re recruiting for technical roles around the world as part of building an open source Bitcoin-inspired decentralised database. The interview process for developers consists of, firstly, a piece of code that you’re asked to send back a quick code review of (this is meant to take about 5 minutes), then a 30–60 minute long video chat+screenshare in which you may join from home and some coding is done in an editor and language of your choice*, then finally an invitation on site to meet the team and talk to senior management. The code test sometimes also includes design and ‘talky’ questions, depending on the nature of the candidate and precise job role: it’s not just coding. We’ve found this process to be pretty accurate whilst still being quite lightweight (compared to some hiring processes at least!).
All I would need added to that style of interview is something along the lines of:
"And if you have to Google things during the video call, go right ahead. But try to talk through your thought process."
So that I don't go into the interview feeling super nervous about the moment where I get to say, "I'm googling the bidict library because I forget the API. But I want to use a bidict because..."
We allow candidates to use Google if they need to during the interview. However, the interviewer is free to draw conclusions from that. If a candidate is routinely searching for things that someone using the language would use every day, that's going to count against them. If they need to look up something that is only rarely used, nobody will care.
There can be occasional disagreement about what's reasonable to look up. A surprising number of developers don't know how to read from files, and I've encountered a few that didn't know files are random access. Some people think that's a problem, others think it's expected. I tend to see ability to use files as a proxy for general experience, which for R3's projects matters a lot (we aren't writing generic database-backed web apps, there's a significant R&D component to it).
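For what it's worth, the random-access point is quick to demonstrate; a minimal Python sketch (file contents and offsets are illustrative only):

```python
# Files are random access: you can seek to any byte offset
# instead of reading from the start every time.
import tempfile, os

path = tempfile.mktemp()
with open(path, "wb") as f:
    f.write(b"0123456789")

with open(path, "rb") as f:
    f.seek(7)               # jump straight to byte 7
    tail = f.read(2)        # b'78'
    f.seek(2)               # jump back to byte 2
    middle = f.read(3)      # b'234'

os.remove(path)
print(tail, middle)
```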
This post describes an unusually good technical interview program, in which most of the tech-out is apparently done at home, the interview is prefaced with a short but useful work-sample code review test, standardized questions are used, and candidates are allowed to use their preferred tools and languages.
That's commendable, but I think the argument is still faulty.
As I understand it, the premise of the argument for interviews and against work-sample testing is that candidates won't perform work-sample challenges. I've interviewed almost as many candidates as this person has (and screened out far more based on challenges) and that simply hasn't been my experience. More importantly, the argument is incomplete. Stipulate for a moment that the "best" developers won't take work-sample tests (I'll come back to that)... and?
The point of work-sample tests is twofold: to stop being beholden to subjective impressions of how well candidates can perform the work you need them to do, and to provide a framework for tech qualification that's straightforward to iterate on. If your work-sample tests are sound and you're getting enough candidates to saturate your headcount requirements after setting a bar you're comfortable with, why obsess over who exactly you're hiring? The industry is full of developers with glittering resumes and lists of past achievements who are dead weight on teams†. Why assume the developers with the best marketing are the ones who will perform best on your team? Why not instead just spend the time to figure out what exactly a good performer is, and then directly test (and iterate on testing) for that?
Regardless, the belief that strong candidates won't do work-sample tests is pervasive. And for good reason: work-sample testing at most companies seems cargo-culted: "take-home tests" (that often follow tech-out interviews) that can only rule candidates out and never rule them in. I agree: that's bad. But the solution is straightforward:
* (1) Allot a budget to the entire technical qualification process, including all interviews and challenges, (2) try to make as much of the process objectively scored based on a rubric your team has committed to writing, and (3) ideally move as much of the process "offline" (so the candidate can perform it at home and without you watching) as possible.
* Make the work-sample challenges as determinative as possible. Candidates who do well on our work-sample challenges are presumed technically qualified; 1:1 interviews that follow it are largely pro-forma. At Matasano / NCC Group (while I was there) the bar for a technical interviewer to overrule the output of the work sample challenges was quite high.
Regardless of whether you use work-sample challenges or scripted, structured interviews, a bit of low-hanging fruit that nobody seems to pluck is: relentlessly keep candidates informed about what to expect in your process. Since about 2010 we've been buying books (expensive books!) for candidates and providing study guides to go with them; we also provide fairly detailed descriptions of what our challenges will be. We're even moving to a system now where we offer candidates a practice version of some of our challenges (if I could fake up the entire AWS API, I'd do practice challenges for all our challenges).
But most employers seem determined to make their tech-out process as much of a gotcha game as possible, and tell candidates almost nothing about the actual hard problems they'll be expected to deal with.
This post uses a framing that says whatever tech companies do to qualify candidates must be rational; after all, if they're wrong, they're raising their own costs. I respectfully disagree: engineering teams do not, as a general rule, optimize their processes for efficiency or shareholder value. Many teams literally use interviews as a hazing process (one they often exempt "in-network" hires from). Still more delegate interviewing out to the entire engineering team and exhibit little or no interest in establishing rigor or rolling back bias. But engineers are as biased as the rest of humans, and most of hiring today is not in fact rational.
Also: sorting algorithms make dumb interview questions.
† This is in part due to a dynamic in our industry that makes it easy to "gradually fail upwards": most companies can't fire fast enough after making a bad hiring decision to prevent the bad hire from accruing another positive line-item on their resume, so smart people can bounce from team to team for years without ever making a serious contribution and end up ahead.
> Make the work-sample challenges as determinative as possible. Candidates who do well on our work-sample challenges are presumed technically qualified; 1:1 interviews that follow it are largely pro-forma. At Matasano / NCC Group (while I was there) the bar for a technical interviewer to overrule the output of the work sample challenges was quite high.
I would argue that this worked for your hiring field because you had a secondary bar of "technical competence" but a primary bar of "motivated persistence". And your tests definitely set a bar for persistence.
The real problem most companies are currently facing is simply that no one wants to take the time to train someone. Most companies would be better served by training someone internally and then promoting them into the higher slot and then backfilling the more junior position where the stakes are lower.
The HR problem is that you have to fight to adjust someone's salary properly after that promotion. I think I have won an internal salary adjustment fight only once in all the years I have been responsible for hiring people. Hiring an outsider at $100K is easier than promoting someone and getting them a raise from $80K to $100K--and it's completely stupid.
I think there's a lot of truth to this, and to the argument that work-sample testing really only works as well as I like to say it does when you have an engineering culture that can metabolize new hires without much previous experience in your particular problem domain. But we started doing work-samples for the opposite reason: because we kept getting bamboozled by good interviewers --- a problem I know to be rife across the whole industry. I still think you're better off keeping most of the weight of your qualification process on practical challenges rather than interviews.
We have not generally had a problem recruiting people with experience in our field, but there's obviously some selection bias in that observation.
Here's the thing for me. I try to be in the situation of trying to find the next job while I still have the old job. That means I'm not desperate. That also means I'm still working a full-time job while I'm in the job-hunting process. For an interview, I'm probably going to have to take a vacation day. For an out-of-town interview, I might have to take two days. That's bad enough, especially since I might go through several different companies in the process of finding a job.
Now you want me to do a 2-4 hour work-sample test? And maybe 5 other companies do, too? No. Just no. I have a job, I have a life outside my job, and you're not going to abuse that by making me jump through this hoop. If that costs me your job, so be it. I don't need to work for people who will abuse me in the hiring process, no matter how valid they think their reasons are.
Your intent isn't to abuse me. Your intent is to weed out candidates who were making it past your interviews, but still couldn't code. But the effect on me is still abusive, no matter what your intent is.
The whole idea is not to make you have to take the vacation day to interview --- or at the very least, not to have to take the day unless you're assured you're technically qualified, and you're not going to get teched out on site.
If the choice is between taking a vacation day to come on site and code on a company laptop with someone staring over your shoulder, or doing almost nothing other than a take-home problem that you can do wherever and whenever you want, tell me how we'd be the company making it impossible for you to fit us into your schedule?
Here's the way interviewing used to work: You got a pile of resumes. You picked the ones that looked the best. You called them and asked them some questions. The two or three best of those, you asked to come in for an in-person interview.
Note how symmetric the time is. When you do a phone interview, they're on the phone and you're on the phone. It costs you as much time as it does them. When you invite them in for an interview, it costs you time to interview them, and it costs them time to be there. So you have an incentive not to waste the time of people you're not going to hire, because it also wastes your time.
But with a code assignment, you can ask 100 people to each do 2-4 hours, while you just compile their results and throw some test data at them. It takes them hours, and you minutes.
When you ask me to come in for an interview, I know that you're serious about me before I commit to that kind of time. You're serious enough for you to commit that kind of time. But with a code assignment, how serious are you about me? Are you giving the assignment to 100 people to fill one position? I have no way of knowing.
So the problem here is that, if you call me in for an interview, I expect that I am one of a small number of people. I expect that I have at least a 30% chance of landing the job. If you ask me to do an assignment as the first step, I don't think that. I think this is your first screening step, and therefore that I've got more like a 1% chance of landing the job. Now, maybe you don't operate that way, and my expectations don't match your statistics. (On the other hand, a real jerk of a company might lie about their statistics...)
So that's why I say that it's abusive. If you ask me to spend 4 hours for a 30% chance at a job, I'll accept that. If you ask me to spend 4 hours for a 1% chance at landing a job, get lost. I don't have time to jump through your hoops.
Just a thought: Have you considered narrowing the list of companies you are most interested in working for...? Let’s say someone in your top five list are interested in hiring you and then want you to spend four hours for the work test. Maybe then it would be worth it, even if the outcome is not certain?
> Regardless of whether you use work-sample challenges or scripted, structured interviews, a bit of low-hanging fruit that nobody seems to pluck is: relentlessly keep candidates informed about what to expect in your process.
Amen. And I'd add: let them know when they can expect to hear back from you at each step. It can be as simple as:
"Here's my email address. You should hear back from me by the middle of next week. If you don't, please follow up with me."
Then, of course, follow up, even if it's just to say, "Sorry, we enjoyed meeting you but we were lucky to interview a candidate who was a better match for us. Thank you and good luck!"
>>As I understand it, the premise of the argument for interviews and against work-sample testing is that candidates won't perform work-sample challenges.
My understanding, as I've heard from others, is that it's not that candidates won't perform work-sample challenges, but that they can't, due to various non-work obligations that result in lack of free time.
This leads to a situation whereby interview processes that feature work-sample tests discriminate against people with families and kids/dependents.
Well, I “can” perform these so called “work sample tests” that a lot of companies want to give out (sometimes even before getting to speak with anyone at the company). Here’s my issue with them:
* Either they’re strictly timed, introducing an artificial sort of pressure that’s rarely present on the job, or one ends up competing against people who spend 3 or 4 times the suggested amount of time. I don’t know about you, but I can do a way better job on most of these sorts of tasks if I spend a large multiple of the expected time on it, too.
* The tasks are often poorly specified, sometimes with the justification that it tests the ability to clarify requirements. I don’t think that argument holds water, because all “clarifying requirements” does when one doesn’t have ready access to folks who can answer those kinds of questions is introduce insane amounts of latency and frustration into the process.
* Even when the tasks are reasonably well specified, the grading rubric is often kept from the candidate. At work, if I don’t know if I or my team is building the right thing or doing it in the right way, I can go ask people (PM, tech lead, EM, another team member, etc.)
* Finally, most of these “take home assignments” I’ve seen that are supposed to be in the 2-4 hour range to finish don’t get me as a candidate out of 2-4 hours of giving impromptu algorithms lectures at the whiteboard. If the prize for putting in 2-6x as much time as I would to get to an onsite interview with another company is just to come onsite with your company and spend all day doing the exact same thing, then, no thank you.
I realize that few or none of these things apply to the process at Matasano, but there is another one that does: I can already get a job that pays reasonably well, doing things I already mostly know how to do, without spending hours studying a field I know little about.
That is arguably a feature and not a bug, so, I don’t mention it as a criticism, but rather as a reason why I probably wouldn’t do Matasano’s work sample test. For the type of people they’re looking for, that’s probably fine. It’s just that I’m not one of those people.
This is all well said. I wonder if you have thoughts on what employers might do about the "competition with completists taking 2 days on a 2 hour challenge" versus "pressure from timing people". We've taken the tack of telling people what our timing expectations are, but also not timing or tracking time (you could ostensibly have picked up the problem on a Monday, noodled for 15 minutes, decided you'd do a better job on Wednesday, picked it back up for 45 minutes, had a sudden business call, and then had to wait until Saturday to finish it; right now, we'd like that to just seamlessly work, but it obviously creates the effect you're talking about).
If you’re not going to track time in any way, then there’s no way to prevent anyone spending as much time on the problem as they have available. My best suggestion is to just ask people how much time they spent on it. The biggest problem with that is that by telling people the time expectations, you’ll probably get answers within or near that range.
I had one of these tests say they wanted the project delivered as a git repo and would use the timestamps on the commit history to figure out how much time was spent. I just laughed at that and figured the people who knew how to forge the timestamps would do that to make themselves look good.
The only alternative I can think of to explicitly tracking time is to just not give a time expectation and ask how much time they spent. That way, answers aren’t biased by you anchoring a range in their mind. This has other obvious disadvantages, but it would take care of the “candidate spent 2 days on a 2 hour project” problem.
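The timestamp "check" in the anecdote above really is trivial to defeat; git records whatever dates you hand it via environment variables. A sketch using a throwaway repo (names and dates invented):

```shell
# Demonstration that commit timestamps prove nothing about time spent:
# git happily records any author/committer dates you supply.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "candidate@example.com"
git config user.name "Candidate"
echo "solution" > answer.txt
git add answer.txt
GIT_AUTHOR_DATE="2020-01-01T09:00:00" GIT_COMMITTER_DATE="2020-01-01T10:30:00" \
  git commit -q -m "looks like a tidy 90-minute session"
# The log now shows the forged author date, not the real one.
git log -1 --format="%ad" --date=format:"%Y-%m-%d %H:%M"
```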
Also, take home assignments are often time intensive for the candidates but not for the company giving out the assignment. I’ve done remote coding assignments but then had my candidacy dismissed for such silly reasons as “you live in the wrong time zone” AFTER I had completed the assignment. I would have felt a lot better about that sort of nonsense (it’s unavoidable if you have done more than a few interviews) if I knew that the company had invested as many hours in my interview process as I had.
This is a legit concern. But I think it's rooted in processes that don't make work-sample challenges determinative, and for which candidates can only lose and never win from taking them. A firm that standardizes its whole process on work-sample challenges can't deploy them abusively and still recruit successfully.
Meanwhile, if you commit to relying on challenges, you have a pretty major incentive to put time in to help candidates through them (if everyone bombs out of them, you don't hire anyone; if you're not rigorous, you're hiring randomly and you know it). We'll spend $100 and several hours for blind candidates (or at least: for every candidate we'll actually let into our hiring process) because we need the process to work, and to actually highlight good fits, in order to staff the company.
As always, a qualification process run haphazardly won't work and will alienate people.
Again, my suspicion is that most companies that do take-home problems are cargo-culting them, and not actually hiring based on their results. Rather, they're relying on the same interviews everyone else does, and using the challenge as a very elaborate form of FizzBuzz.
Allocate N hours to your whole technical qualification process. A process that spends more of those hours on at-home work-sample problems will be more inclusive of people with time commitments than one that spends less.
It's also that lots of times, the hiring company hasn't actually put prerequisite thought into how to limit the scope of the exercises they send out.
They simply tell you "this should take you 4-6 hours" and then you're left trying to figure out the tradeoffs to make. Usually that 4-6 hour exercise can EASILY take double the time: trying to get the best possible solution, handling all the edge cases, going back and forth with the interviewer on questions about the exercise, overthinking what the person on the reviewing end is going to see as some irredeemable lapse of taste they don't share with you, etc.
Then after putting in all that work, you possibly get to go through an actual interview process, get a form rejection email or even better never hear back from the company at all ... after going through that 2-3 times you learn that the upside for you is low, and you could probably make more progress with other firms in that same time.
What I personally have arrived at is that if a company isn't paying me for my time and the exercise will take over 2 hours, I simply won't participate. We'll see how it works out.
TLDR; Companies need to properly limit the scope of their take home assignments.
Or you could just not spend more than 4-6 hours on the assignment. The role might just not be a good fit, right now. It seems strictly better to find that out in a work-sample challenge, than in an interview being asked to code radix sort from memory.
Even then, it’s sometimes hard to know how you’re supposed to be allocating that time.
Suppose the challenge is to scrape some movie reviews and cluster them. On the surface, this is fairly reasonable. If you’re familiar with some relevant concepts and libraries, it isn’t too hard to get something up and running. On the other hand, measuring document similarity is still an open research problem. Should I do something fancier than a bag-of-words representation? What about the clustering algorithm? Evaluation?
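To show the scale of the ambiguity: the "something up and running" baseline for a task like this fits in a couple of dozen lines. A bag-of-words cosine-similarity sketch (toy reviews of my own, no libraries):

```python
# Minimal bag-of-words baseline for document similarity:
# term-frequency vectors plus cosine similarity, nothing fancier.
from collections import Counter
import math

def bow(text):
    """Lowercase whitespace tokenization into a term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

r1 = bow("a gripping thriller with a gripping plot")
r2 = bow("a dull thriller with a predictable plot")
r3 = bow("the soundtrack is beautiful and the visuals stunning")

# The two thriller reviews should come out closer than the unrelated one.
print(cosine(r1, r2), cosine(r1, r3))
```

The open question in the comment is exactly how far past this baseline the grader expects you to go, and that is what the challenge description should state.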
I should show some kind of result. Is a simple tSNE plot enough to cover “visualization experience required”, or do I need a GUI that lets people tweak hyperparameters?
Maybe this is a sanity check to make sure I can code, so I should focus on documentation and tests. Or maybe I should make the scraper incredibly robust and scalable, since they kept talking about Big Data?
This isn’t an insoluble problem. Companies just need to be very specific about what they expect and how it’s going to be evaluated. If the point is code quality or ML chops, come right out and say so.
Thanks. I'm glad to hear your description of our process, it's nice to get some positive feedback.
> ... candidates won't perform work-sample challenges. I've interviewed almost as many candidates as this person has (and screened out far more based on challenges) and that simply hasn't been my experience
I'm curious what you mean exactly by work-sample challenge here - I guess you mean homework assignments, as a coding test is in effect a work-sample challenge of a sort.
The belief that senior candidates won't do them is based on observation of behaviour when asking them to take 'robot' automated interviews. The people with the most experience and best known companies on their CV (the sort of places that do hire carefully) were the most likely to drop out of the process at that point. We didn't do a big study because, well, we didn't want to lose more senior candidates, but there were enough samples to see a pattern. We haven't seen that problem with video-call interviews.
I guess you're asking more about either showing open source code, or homework assignments.
High quality open source code can be a good factor in favour of a candidate and it is checked if someone puts some on their CV. Unfortunately in the past I got burned by hiring a candidate based purely on code on their github repository and a Q&A style interview, without doing a live coding test, and the person turned out to be largely incapable of writing code to the necessary standard. We never figured out why his code on github seemed so much better. There's just so much more information you get from watching someone work, like whether it takes 10x longer than it should to produce that code, or how much help they need.
The issue with homework assignments can be summed up as - I wouldn't be willing to do them if I were looking for a job right now, unless I was completely unemployed and desperate. There are very limited hours in a day and most programmers looking for a new job already have one. Also, you can't prove it was actually them that wrote the code, and you can't tell how long it took them to do so.
I make no claim our process is the best possible, or even very good. It was just descriptive and seems to work well enough - other than the one bad hire where we didn't follow the standard process, we have a pretty uniformly competent team.
A work-sample challenge is a practical problem that mimics the actual work done on a job (the term comes from W. Edwards Deming back in the 1950s). Doing a Github-PR-style code review is a perfect example of a work-sample challenge; you can come up with any number of work-sample challenges that involve actually coding; a friend used them to recruit for his sales team by having them write outbound mails or handle simulated inbound requests.
Our work sample challenges at Latacora involve an assessment of a Django web application, an assessment of an AWS deployment environment hosting that application, and the construction of a simple scanning tool to spot the vulnerabilities discovered. At my previous firm, the challenges were similar – there was no programming challenge, but there was an assessment of a custom binary trading protocol.
I'd balk at robot automated interviews too. I'd be concerned that my responses wouldn't fit whatever weird template they were using; also, I'd be almost certain that any robo-tronic assessment thingy wouldn't be determinative of the outcome of the qualification process, but rather just an arbitrary hurdle I'd have to jump over before the "real" interview happened. Obviously, if my options include equally attractive companies that don't present that same extra hurdle, I'm not going to bother applying!
I also think there is certainly a class of senior developer that won't do practical at-home challenges at all, no matter how carefully designed, even if they're entirely determinative of outcome and even if they're calibrated carefully to make the interview process less logistically arduous (for instance, by reducing the time commitment for actual interviews from 4-6 hours down to a single hour, and assuring before the interview happens that you're going in already tech-qualified). Certainly, work-sample qualification isn't the norm in our industry, and senior developers also tend to move from job to job through networking and very warm introductions.
What I'll contend, though, is that there's a pretty significant adverse selection problem in that pool of senior developers. I don't doubt that the bulk of them have earned their experience and reputation, but there're also a lot of developers swimming in that pool that have failed upward into it, and move from job to job largely through interviewing skill. What I've learned (or think I have) is that there's virtually no correlation between someone's ability to talk about programming and their ability to effectively deliver code on a commercial project.
What makes the work-sample process so attractive to me is that the opposite phenomenon also seems to exist: in the pool of "not labeled senior" developers, there seems to be a usefully large number of exceptionally talented developers who have, owing to circumstance, simply not acquired the resume labels to navigate themselves into good jobs. Hiring from that pool is very risky if you rely on interviews! But what we learned over the last 10 years is that you can build practical challenges and get to a level of confidence where you can rely on them over interviews and resumes, and when you do, you can hire a lot of talented people that other firms can't, because their interview processes won't let them.
Candidates don't often appreciate the unequal disincentives for the interviewer in making the wrong call. For the company, hiring one poor programmer is usually worse than rejecting 10 good programmers. So a coding test which reduces the rate of false positives is useful, even if it increases the rate of false negatives.
That's certainly a valid stat that I've heard before, but one wonders, then, if the situation shouldn't also be improved by better mechanisms for managing existing employees. California, where much of the tech industry is located, is an at-will state. If a bad programmer is really so bad, then shouldn't it be easier to terminate such an individual? Because the alternative, rejecting 10 good programmers to get to a good one, seems just as much a waste of programmer time and resources as hiring a bad one. Perhaps there should be a probationary period with teeth: programmers could be hired as contractors for the first month.
The one point where I agree is, in my view, the sole reason why big companies use algorithm-style interviews.
FAANG-level companies can put out a job role and have hundreds, if not thousands, of people apply. Out of all of these people, there might be dozens with phenomenal CVs - relevant experience for the role, mastery of a programming language or two, public publications of their work, and provable experience through living projects.
You reach a point where it's not practical to compare these people through programming ability alone, and that's where the algorithm interview comes in. It allows companies to choose a suitable person even while alienating good developers, because the sheer volume of interest means they can afford to do so.
I just wish that more companies were honest about why this approach is taken, because it would stop startups asking you to find a join in two linked lists when the role itself is to maintain a shitty CRUD app. These companies cannot afford to alienate good developers because they're not getting the sheer amount of interest a Google or Facebook will get.
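For reference, the linked-list puzzle mentioned above is usually cracked with a two-pointer trick you would never reach for in CRUD work. A minimal sketch (the class and function names here are mine, not from any particular interview):

```python
class Node:
    """A singly linked list node."""
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def find_join(a, b):
    # Walk both lists; when a pointer runs off the end, restart it at
    # the head of the *other* list. Each pointer then traverses
    # len(a) + len(b) nodes in total, so the two meet at the join node,
    # or both reach None together if the lists never merge.
    p, q = a, b
    while p is not q:
        p = p.next if p else b
        q = q.next if q else a
    return p
```

Which rather proves the point: the trick is cute and memorable, and says nothing about whether someone can maintain a CRUD app.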
Again, just my opinion, but people need to embrace the fact that interviewing isn't perfect. A code test should purely exist to ensure that you can weed out people that cannot do ANY programming. Outside of that, use the interview itself to determine their fit for the role.
I think the vast majority of the issues come from the fundamental disconnect between what people want to believe they are hiring for and what they are actually hiring for.
I have been in this industry for over 25 years, from being a grunt all the way to three-letter jobs. The most uncomfortable question when deciding to open a job is "What are we trying to accomplish by opening this job?"
In my experience, if one cuts through the posturing, the answer is: "We need two developers with the following credentials, because we believe that by having those developers we can increase output -- more of the lines of code we need to generate to ship the feature we promised on the roadmap. They need to know Angular/Javascript/Css/Node/Go/React/Kafka/K8s/RMQ and be ninjas." Should one drill down into what that feature is, one would find: "It is a new dialog box; if a user clicks this button, some things will happen and X email would be sent. Our users want this. We currently use backbone but we will be using Angular in the future. The API is written in node, but we have a few projects to explore Go, and we are using RMQ for queues, though there's a team that is tinkering with Kafka." That's what the manager is actually hiring for.
So the real job requirement is:
"Need to get two butts in seats to build a dialog box to trigger an API action that would publish a message to a workflow queue. Must know a bit about backbone/css/javascript, nodejs and rabbitmq" [0]
Recognizing that one wants to put butts in seats rather than someone who can spearhead the API move from nodejs to Go would change the entire dynamics. Hardcore interviews for the 1st make no sense -- if a person can think, you can teach them how to write some javascript, mangle some backbone code, create a new route in an existing Express application and publish a message to a queue.
[0] Look at the hiring ads here. It looks like everyone is hiring Chief Architects that code.
> Companies do code testing because they have encountered so many candidates who look good on paper, and may even be able to talk about computers convincingly, but can’t actually write a program when asked. Any program. At all.
Maybe I'm reading too much into this and it's just quibbling over semantics, but maybe talking about computers is not talking about coding? I've never met someone able to pass for a programmer when they are not one. I've always felt just asking candidates to talk about their last or favorite project helps catch a glimpse of how they think about software problems.
A little anecdote: my brother is the first software engineer of the family
Father : "Son, what kind of computer should I get ?"
Son : "I don't know much about choosing computers"
Father : "Don't you write software for a living ?"
To me the biggest problem with algorithmic interviews is when candidates are asked to reproduce a well-established algorithm. Frequently, candidates will simply have memorized the answer without actually being good at algorithmic thinking. I've interviewed many candidates who thought I wanted them to reproduce how a certain data structure worked, although I wasn't asking for that. As soon as we'd deviate from the well-trodden path, they'd be completely lost. I think puzzle-style programming interviews that require algorithmic thinking and good problem-solving skills can be valuable. Especially talking through possible solutions and their trade-offs can be great. But only if we aren't testing whether the candidate is good at memorization and studied a lot.
> I think puzzle-style programming interviews that require algorithmic thinking and good problem-solving skills can be valuable
Think of it this way. Do the majority of applicants you have the luxury of interviewing think those questions can be valuable? I'm inclined to think they don't, so giving me one makes me less interested in your company. I might still do it, but it's a bit of a turn-off, like if you show up to an interview 5 minutes late.
Honestly, talk is cheap. If they hire people with programming tests and keep performance metrics on the people they hire (as most companies do), they could just publish how test performance predicts actual job skills. Until then, the null hypothesis reigns supreme.
I'm not sure why you say that here, but it's actually not a problem on HN as long as the article is interesting. I wrote about this recently: https://news.ycombinator.com/item?id=20186280.
When I conduct interviews, I ask for very simple programs like FizzBuzz to test logic. If the candidate succeeds, I focus on understanding whether they are passionate about their craft: what is the hardest problem they've encountered? What code are they proud of?
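For anyone unfamiliar, FizzBuzz really is as simple as it sounds, which is exactly why it works as a floor rather than a differentiator. A minimal version:

```python
def fizzbuzz(n):
    # Classic screening question: multiples of 3 print "Fizz",
    # multiples of 5 print "Buzz", multiples of both print "FizzBuzz",
    # everything else prints the number itself.
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

Anyone who can program at all will knock this out in a couple of minutes, which is the whole point: it weeds out candidates who can't write any code, and nothing more.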
Well, I have tens of thousands of lines of super-high-quality code, for multiple shipping apps and entire systems, used by thousands of people daily, out in the open-source domain, in dozens of repos. These have over a decade of detailed checkin commentary, with Doxygen and Jazzy docsets. I have hundreds of pages of documentation and blog posts, often intricately explaining my design process. I am constantly trying to learn the latest tech for my specialty. But I spend exactly zero time on LeetCode or HackerRank, so it's likely I'll fail Secret Santa. I doubt that I'm what you're looking for.
To be clear: if you fail a coding interview question and your self-judgement is accurate then either the interview question is a bad question, or the interviewer is a bad interviewer, or the process has a problem.
I don't think many (any?) of the people I've hired this way have spent time practicing coding interviews, let alone on HackerRank. The questions asked are simply not challenging enough to require drill. If you can code competently in your daily job you can pass them.
Also, I know that no one will ever read this, but I figgered I'd put it there for posterity.
It's amusing, when I say "I have a published open-source portfolio that consists of XXX LoC, ten years' worth of checkin history, in XX numbers of repos.", the response is "IF what you say is true..."
Look for yourself. I can back up every single claim I make.
I don't pretend that all my code is screamingly efficient or the best-written code on Earth, but what I do have is tens of thousands of lines of super-high-quality, well-documented code, graphical assets (including the original vector masters), clever designs, design documents (including things like OmniGraffle originals), Medium and blog postings (often going into very deep detail about my designs and architectures, and the decisions made while developing them), GitHub issues and responses, Apple RADAR reports, and multiple published, high-quality, well-branded apps on the App Store.
Sheesh. Don't take my word for it. See for yourself.
Now, THAT being said, why doesn't everyone have a portfolio?
You ever see a designer or graphic artist going to an interview? They bring along these big black cases (although nowadays, it's probably more like a laptop case). These cases contain drawings, printouts, raw materials and sketches, etc. They are called "portfolios." Even students, fresh out of school, have them. It's considered a requirement.
No design firm on Earth would ever consider ignoring one of these, and instead, pull out a matchbook with "Draw Spunky" on the cover, and insist on that being the hiring criteria.
Which is EXACTLY what most software shops are doing, these days.
You know what I mean, use questions straight out of textbooks that have only been in circulation in the last five or ten years.
I've found that it's easier all around if I simply don't hide that I'm over 40. It means a lot fewer responses to inquiries, but also a lot less wasted time. Avoids that crestfallen look when you walk in the door.
Apparently, the industry is crawling with folks that are under 35, but have the judgement and expertise you get from 30 years' of programming.
I'm also puzzled why folks would think they can learn more about me from a twenty-minute test, than by looking at ten years' worth of code in dozens of repos.
I was a hiring manager for many years. I would have drooled to see that kind of background on any one of my candidates. Still, I generally found that I was able to accurately judge their competence after about a half-hour chat (at least, I never seemed to get it wrong), and it was all about cultural fit, after that.
I suppose being an up-to-date engineer, myself, helped.
My first reaction to this was: "Damn... where's my ASCII table?"
Because this question can be taken two ways: "What comes after f in hexadecimal notation?" and "What is the UTF-8 code for 'G'?"
A very important thing to keep in mind for both sides in an interview is that misunderstandings will happen, for the most mundane and obvious things. How you respond to these will determine your success in interviewing. Give the benefit of the doubt until the other party has conclusively proven that they don't know what they're talking about.
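The two readings really do give different answers, which is easy to check. A quick Python illustration of the ambiguity described above:

```python
# Reading 1: "the number after f" in hexadecimal -- f is fifteen,
# so the next value is sixteen, written "10" in hex notation.
after_f_as_number = format(0xf + 1, "x")   # "10"

# Reading 2: "the character after F" -- one code point up in ASCII.
after_f_as_char = chr(ord("F") + 1)        # "G"

print(after_f_as_number, after_f_as_char)
```

Both answers are perfectly defensible depending on which question the candidate heard, which is exactly why the benefit of the doubt matters.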
It would be very weird to misunderstand this. I get that it's stressful during an interview, but "number" and "hexadecimal", as well as "f", all point exactly to 10, not 'G'.
It cannot be taken two ways, but of course you can slip on meaning, associate ideas, etc., and if you land back on your feet and joke about it quickly, nobody will mind.
I've had this happen quite a bit when investigating things, where I get the wrong idea from one step, and don't realize it until I've run with it for some time. This is a normal occurrence and we do it all the time. In an interview setting, with added stress of being watched and judged, the problem is magnified.
In interviews, I'll always throw in comments to help clarify if it looks like they're going off the rails, because letting them run on an incorrect assumption is only going to diminish your ability to judge their abilities. We all make mistaken assumptions all the time, about everything. The person who made no bad assumptions in an interview just got lucky.
I have to disagree: When you're hiring a person, you're hiring someone to join your team. Hopefully, we can agree that the majority of people in our field have the capacity to learn.
In fact, at many FAANGs, your first performance review year is a "freebie" because they know that you're not going to come in already knowing the role and the tools used internally.
So, then, what is the technical aptitude test (is it really anything other than that?) for? Well, it could be used to measure a baseline for basic programming skills; however, this isn't how most technical interviews are designed.
Interviews aren't designed around the premise that knowledge on the internet exists or that your colleagues could, oft, be your first line of support (or code reviews, if the company has a good culture). Hell, there are interviews where the candidate has to code in notepad (or other text editor equivalent) because they don't want the candidate availing of things like Intellisense.
If we look at it from a different perspective, though, we live in a time when we can now put our source code online for the world to see. We can publish packages for the world to consume. We can make changes to production software (FOSS) and documentation and that's easily traceable (if you're not obfuscating your identity through sixty different handles).
It seems we don't consider these viable avenues for assessing a candidate's ability before even asking them questions.
With that being said, are you really hiring the candidate because they would be good for the team or are you just hiring the candidate who can check off some interview-type question boxes?
I have interviewed a number of people in my day, and the most accomplished candidate we interviewed, whom we hired because he checked all of the technical boxes, ended up being a raging asshole and bringing team morale down considerably.
So, at the end of the day, you're not just hiring someone to fill a role. The single, isolated role is all but dead these days. You're hiring someone to join a team, and the human aspect should outweigh the technical aspect.
After all, just because someone checks all of the boxes, it doesn't imply that they would "be a good fit".
> It’s vastly preferable to asking someone to write code with a fat marker pen whilst standing up in a tiny conference room.
Or, you know, just use a whiteboard in the interview for the same purpose you use it in meetings:
- not at all
- for high level algorithmic descriptions, data flow or architecture sketching
(Of course, the UML being demonized is a bit of a misfortune because now boxes and arrows can mean anything, but hey, better than having a visual language that has been developed by a bunch of old CIS White males, according to tech diversity Twitter.)
> All day interviews. Many firms expect an interview for a developer to take an entire day, typically with between 5 and 8 separate interviews. This makes it hard for developers who already have jobs to attend.
This bothers me immensely. If you are job hunting, you want to apply to several jobs to get a number of offers and get an overall better deal. How do people manage this when they got a full time job? Do they literally take an entire week vacation to have 5 all day onsites?
If you're applying for your first job, you might treat job hunting like a fussy suitor problem and want to get five offers to get a good picture of your market value. You get a job where you're pretty happy with the salary, the benefits, and the work.
When you apply for subsequent jobs, you no longer need to take a large sample to calibrate your judgement; you only need to compare the job to your current one. And questions like salary and the nature of the work can be answered during the phone screen.
If changing jobs takes three days of annual leave and boosts my salary by $10,000 that seems like a pretty good trade to me.
> Do they literally take an entire week vacation to have 5 all day onsites?
That actually wouldn't be that bad, as the onsite is an advanced stage of the recruitment process. In the current state of the job market, it's more like an entire week of vacation to complete take-home assignments and online tests.
Over the span of the last 10 years I've probably interviewed between 80 and 100 engineers, at some stage in the process and even ran the whole hiring process in my last company.
There are points in this blog post I strongly agree with, like how disrespectful "homework" is, and the importance of letting a candidate code on their own machine, where they have all their shortcuts etc. set up and are able to be productive. But I've found the algorithmic coding challenge to be in equal parts harmful and useful.
The biggest problem, as I see it, is as software engineers we are strongly biased towards a rational understanding of the world. For example I've seen hiring teams argue _for_ a candidate they clearly don't like because that person aced the coding challenge. In such situations, as hiring manager, I then challenge the hiring team with "OK it's Monday morning. You had a bad weekend. And now you're in the office sat next to the candidate. How do you feel about that?" and usually the truth _then_ comes out and we avoid hiring someone that would destroy the team. The point here is we have a natural bias to see things in absolutes; zeros or ones. This _tends_ to lead to an over-focus on the coding challenge because it gives us the illusion of a black or white answer on whether we should hire.
When I interview people, I always ask myself "what's my gut feeling about this person?". What I've found is that my gut is sometimes wrong in the positive direction - I liked someone but they turned out to be a bad hire - but always right in the negative direction; if I have a bad gut feeling about someone that I can't fully explain, these days I always go with it. The "evidence" I have there was two hires that I had a bad feeling about: one I later had to fire myself, and the other stuck around for a couple of years until someone else fired them.
Otherwise, when it comes to assessing coding ability, these days most programmers have code on github, even if it's just "hobby" code. I'd rather review that, look at their commit messages etc. and get a feel for how they program when relaxed and not trying to impress.
And if we must make candidates do homework, how about contributing to the greater good and asking them to make a contribution to some Open Source project e.g. fix a bug in numpy or otherwise?
Algorithm puzzles are a good test for the ability to solve algorithm puzzles (which usually means having seen a similar puzzle before or guessing the right trick.)
Which is probably correlated with solving problems in algorithm puzzle (aka "programming") contests.
If solving algorithm puzzles or competing in "programming" contests is your company's core business, then probably it's a good interview test.
The problem I have with coding interviews isn't necessarily the algorithms-based questions. I don't have a problem with reviewing undergraduate data structures/algorithms as well as those related to my subfield (for example, someone who is familiar with DBMS implementation should know the basics of a B-tree even if knowing its exact implementation requires review).
Here are the problems that I have with coding interviews:
1. Often I feel that interviewers are looking for exact, optimal solutions rather than caring about how the interviewee actually approaches problem solving. Forget about being asked FizzBuzz-style questions or about being asked to delete a node properly in a binary search tree; in the interviews I've had in the past three years, I've encountered difficult Leetcode- or ACM International Collegiate Programming Contest-style problems where it's expected that I come up with an optimal solution within 30-45 minutes. It's even worse with companies that give you a hard-level Leetcode-style problem that is automatically graded instead of being examined by a human.
2. The lack of feedback about interviews that didn't go well makes the process difficult. At least on a standardized test you receive a score, and at least at the end of a final exam in class, you get your final class grade. Case in point: I've had two successful Google software engineering internships with great reviews from my mentors, so it's not as if I'm incapable of programming FizzBuzz or writing a for loop, but after three tries I haven't been able to get an offer for a full-time position there, despite making it to the on-site round each and every time. It's similar with other companies: I'm usually able to make it past the phone screen, and about 50% of the time past the initial programming question, but I have a hard time making it past the on-site round for software engineering positions.
3. The sheer breadth of possible questions to be asked. Not only are Leetcode-style puzzles "fair game," but also domain-specific questions. For example, suppose I'm interviewing for a position where I'm working on a DBMS. Despite having taken graduate-level courses on the implementation of databases and distributed systems, there is still a very large number of questions that I could be asked, including specific details of specific databases that I might not have been exposed to.
The frustration I have with the software engineering interview process is enough for me to want to change fields at times. Thankfully I've found the interview process for more research-style groups to be less about coding acrobatics and more about conveying previous research experiences and successes as well as proving competency and curiosity. I have a pleasant role as a research engineer where I'm doing very interesting research work while also maintaining my coding skills. Unfortunately such jobs are hard to find in industry, which means that one day I'm going to have to take up the gauntlet of software engineering coding interviews all over again.
But Google doesn't want you; you want Google. They want the process to be as annoying, unfair, and difficult as possible, because they have entire countries' worth of people who want to work there. Your internship was free, so, well, it was easier to sell you to management.
Go to a small mom-and-pop startup where you do absolutely everything from rebooting servers to convincing clients to pay you. That'll be a thousand times more valuable than 10 years at Google sleeping between desks pretending to change the color of a Gmail icon :)
Maybe not for a developer job, but for jobs that actually require public presentations, asking candidates to give a talk on some topic seems very reasonable. And, yeah, if they fumbled through it, that would be a mark against them if that's something they'll do as part of the job.
Writing may be more important, but presentation skills are absolutely important for some jobs involving communications.
I don't understand this mentality at all. If I'm hiring you to do X, I want some sort of confirmation that you can actually perform X. What's the alternative? Hiring the first person who applies?
The vital difference is that doctors have doctorates. Becoming a doctor is an incredibly rigorous process and you can be confident that anyone who is a doctor is competent. You don't need a demonstration of their skills.
Any shmuck can spend a week on leetcode and call themselves a programmer. There needs to be a way to sort out that type of applicant in a field where so many are 'self taught'
Generally when I start a new project I'll do it differently from last time. Might be a different language or framework, maybe it uses docker or not, maybe the current best practice or boilerplate has changed in the last six months. It might be different for people in agencies, but then they might have some code generator or starter pack for optimization.
I work at Pivotal, where we pair program with candidates on work that's usually actually in the backlog. In some cases you might have to replay a story you did already, if you don't have a new one that's well suited. One of the biggest benefits is that the candidate also sees what they are getting themselves into. I don't think it would work without pair programming, though, and it might make less sense in an environment where you won't pair once you've got the job.
This isn’t going to change in the near term, because the companies in the middle emulate the companies at the top, and the companies at the top are doing just fine with their current hiring processes. Sure, they’re missing out on some good people, but they have enough excess demand for employment that they can keep their cubicles well-stocked with enough C++/Java/Python coders to accomplish their business objectives and make tens of billions of dollars per year.
Telling them they should change is like telling Tom Brady he should change the way he holds the football — even if you’re right, he’s not going to listen to you while he’s still winning the Super Bowl every other year.