When this all began, it was with the justification that it's not about getting the right answer, it's about seeing how you think and approach a problem.
Companies don't even pretend this is the case anymore. Now it's just expected that you've rote-memorized the algorithm patterns, can talk about them with the right terms and key phrases, still pretend the candidate had some brilliant "aha" moment (when both sides know it's bullshit), and can write the solution on the board as fast as possible without errors.
A completely pointless process that's now heavily pushed by an entire industry of tech interview prep (websites, books, interview coaches, etc.). When I read a discussion about tech interviews now, I wonder how many participants are industry shills who want to keep pushing the narrative that the process is great and you just have to keep practicing and studying, using the right materials of course.
There's also the side effect of ageism. Who other than college grads or seniors in college has the free time to study for this process? Those with families and full-time jobs are of course going to have a hard time. Even if they manage to find some free time here and there, how can they compete with someone in college with no responsibilities who has time to grind hundreds of problems? The simple answer is they can't. The college grad will always look smarter and faster in the interviews.
- Brute force is often simple to understand. That directly translates into maintainable code.
- The business constraints on the input size may make the brute force an acceptable solution.
- If you need a better optimized algorithm, brute force is a great place to start. It's basically your test case generator for verifying that your more complex algorithm is right.
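That last bullet can be made concrete with a property test: run the obvious quadratic version against the clever one on random inputs and assert they agree. The maximum-subarray problem here is just an illustrative choice, not anything from the thread:

```python
import random

def brute_force_max_subarray(xs):
    """O(n^2) reference: best sum over all non-empty slices."""
    return max(sum(xs[i:j]) for i in range(len(xs))
               for j in range(i + 1, len(xs) + 1))

def kadane_max_subarray(xs):
    """O(n) candidate implementation (Kadane's algorithm)."""
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)       # extend the run or start over
        best = max(best, cur)
    return best

# The slow-but-obvious version acts as the test oracle.
for _ in range(200):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 12))]
    assert brute_force_max_subarray(xs) == kadane_max_subarray(xs)
```

If the clever version ever disagrees with the brute force on some input, that input is a ready-made failing test case.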
After they get brute force code written, the coding part is over and the rest of the interview is about discussing why it's brute force and what we could do about it. But no more coding is expected. It's a chance to be creative. If they want to talk about improving the algorithm that's fine, let's draw some diagrams. They want to throw more hardware at it? Sure, let's talk about how to scale that. Maybe they've actually seen a business problem before that was similar? Great, let's discuss how you solved it.
I still remember a recent interview I had with a team at Amazon.
That team had accomplished NOTHING (and I mean NOTHING OF SIGNIFICANCE) in the past 3 years, went through re-orgs and new managers, suffered high attrition, and had nothing to show for it.
Here I am with successful products, published code and a track record of delivering. I have the skills to help them out.
What happens? I get rejected because I didn't remember how to use a comparator in merging k sorted arrays and write that perfect code on a whiteboard.
WTF??!! Did hiring rote memorizers really help that team at Amazon? Why do they have nothing productive to show for 3 years despite having hired experts at tree traversals?
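For context, the problem in question is typically solved with a min-heap holding one head element per array. A sketch, using tuples in place of a custom comparator (the array index breaks ties so values never get compared against indices):

```python
import heapq

def merge_k_sorted(arrays):
    """Merge k sorted lists into one sorted list using a min-heap.

    Each heap entry is (value, array_index, element_index).
    """
    heap = [(arr[0], i, 0) for i, arr in enumerate(arrays) if arr]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, j = heapq.heappop(heap)
        out.append(value)
        if j + 1 < len(arrays[i]):
            # Refill the heap with the next element from the same array.
            heapq.heappush(heap, (arrays[i][j + 1], i, j + 1))
    return out
```

In practice `heapq.merge` from the standard library does exactly this, which is rather the commenter's point about writing it from memory on a whiteboard.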
Well, sure, but my experience has been that the technical interview exam is far more elaborate than this. My questions were more along the lines of finding all permutations of a set (usually with some twist, like find all permutations that, when combined, match another member of the set). Or finding all matching subtrees in a binary tree, or something depending on merge sort or quicksort. That sort of thing. Nothing impossible, but considerably more involved than simple recursion. And the teams that interviewed me clearly did expect to see largely working code written on a whiteboard.
Overall, it's elaborate enough that people do need to study substantially for their interview exams, and good developers may simply decide they don't want to waste the time on a job that might not even come through anyway. I once learned to do intricate integration by parts in calculus, but I wouldn't be especially interested in training up to do it for a data science interview. If it comes up for reals, I'll deal with it.
This is something the industry has inflicted on itself, which is why I feel a lot of irritation when I hear the same companies talk about a severe shortage of talented software developers.
Interviews are a mutual process, where the hiring team must do their best to persuade me to join. (The same way I do my best to persuade them to accept me.) Some of them utterly fail, even if they make an offer.
Indeed. Now the default seems to be companies sending out a HackerRank test consisting of 5 separate problems, a countdown timer, and zero human interaction.
I can't help but think that recruiters are playing a part in pushing this online HackerRank test bullshit.
It feels to me that as the quality of people working as recruiters goes further into the toilet, the reliance on the HackerRank puzzle test increases.
This depends on the problems being reasonable, though.
Sure, for various reasons, most programmers know the factorial, but I'd consider remembering the definition of some mathematical operation common only in certain domains (statistics and finance?), one that most people learn in high school and never use again, to be rote memorization.
Or you can use a search engine.
I actually don't have a problem with the data structures interview exam inherently, my problem is with how it's administered.
It doesn't bother me that actuaries have to take exams to demonstrate an understanding of linear algebra, vector calculus, stats, numerical analysis, and so forth. Nor does it bother me that they have to take these exams even if they took the material at a reputable college and earned a good grade. I think it's great that the actuarial field allows people from multiple educational paths to take the test, giving them a choice about how they learn the material.
But here's the thing - software developers have to take these "interview" exams over and over, at the whiteboard, every time we look for a new job. And unlike the actuarial exams, we take these tests under conditions of tremendous secrecy. We don't know if our examiners are qualified, or if their questions are vetted, or if the exam is consistently applied and graded. We don't have a clear and defined study path, aside from books like "Cracking the Coding Interview". And we get no feedback; hell, at times we don't know if our performance was evaluated at all.
I've considered this for a while, and I've come to understand that exam-based professions and institutions, such as the medical boards, the bar, and universities, have evolved a set of considerations, a kind of student bill of rights. Students must submit to the exams, but they have the right to a fair and consistently administered exam, evaluated by respected people in the profession or field; the questions should not be capricious or utterly unexpected; there is a preparation path; and they get feedback on why they did not pass. What we have in tech is the stress and burden for examinees, in many ways amplified (whiteboard coding, on the spot, is among the more stressful ways to take an exam), but the student bill of rights is largely non-existent. I think this is the deep and severe problem with tech interviews; until this changes, nothing can change meaningfully. And I don't just mean in interviews. I think this sort of thing is what is deeply rotten about tech, at the core of many of its problems. (In a field where there's a lot of suspicion about bias, are secret tests under capricious conditions a good idea?)
And we're expected to come back for more of this, over and over, and to just take it.
Oh, by the way, there's a shortage of software engineers, best solved by putting tech companies in charge of the immigration system so they can decide who is and isn't allowed to come to the US based on how well they do on these interview tests/how much they're willing to put up with these interview tests.
One way in which we're approaching the stress of interviews is to strip away the bad outcomes and allow people to approach interviews as "there is nothing to lose". Another way is experimenting with different types of interviews such as group interviews or project based interviews. Sometimes the issue with those experiments though is that they're even harder to standardize.
I think that in addition to what you said, we need to get to a measurable level of improvement. We try to measure the repeatability of our sessions in two ways: 1) when the same session is evaluated by two different interviewers, the ratings across dimensions should be equivalent, and 2) (this one is harder to control experimentally) when the same person does two interviews, they should be evaluated equivalently on the intersection of the sets of things that were evaluated.
By optimizing for these two ways of repeatability, we are hopefully going to move toward much more objective driven evaluations that capture the engineer's understanding of different areas and not just the spur of the moment.
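The first kind of repeatability is usually quantified as chance-corrected inter-rater agreement, e.g. Cohen's kappa. A minimal sketch (the rating labels here are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' labels.

    1.0 means perfect agreement; 0 means no better than chance.
    (Undefined when expected agreement is exactly 1.)
    """
    assert len(rater_a) == len(rater_b) > 0
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[label] * count_b[label]
                   for label in set(count_a) | set(count_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two interviewers scoring the same recorded session, per dimension.
interviewer_1 = ["strong", "strong", "weak", "strong", "weak"]
interviewer_2 = ["strong", "weak", "weak", "strong", "weak"]
kappa = cohens_kappa(interviewer_1, interviewer_2)
```

Raw percent agreement overstates repeatability when one label dominates, which is why the chance correction matters for this kind of evaluation.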
Based on what I've seen from your site, this does look like an improvement.
An interviewee can prepare properly, and take the interview once prepared, rather than doing this at arbitrary times that may be very busy, simply because an interview came up.
Because the interview results can be used in multiple places, there's no need to do this repeatedly for companies that have confidence in the exam (I hope!).
There are reassurances that the interviews are conducted by experienced engineers. I'd be interested in hearing more about this.
There is actual feedback, which is critical. I interviewed at Google, and my understanding is that there are actual numerical scores in a database for my performance in the various interview exams, but I'm not allowed to know what they are. To me, that's a huge problem, and I'm glad to see from your example that you provide feedback.
As far as the experience of the interviewers goes, we're pretty careful about it in our on-boarding process. Even more than experience, we look for open-minded engineers who are willing to question their evaluations and constantly work on improving the overall process.
Re: feedback at Google. It's hard for companies to give you access to feedback because of potential legal repercussions and also because it is very hard for them to deal with back and forth that would inevitably occur.
That's another big point that we emphasized early on that seemed like a crazy proposition - fully transparent interview process. That is, anything that a company would see, you see beforehand.
Although I agree with this point, it only explains why a bad practice occurs, not why it should happen. If anything, it indicates why google might not be a good institution to be conducting exams. What you're saying here (accurately) is that because Google could be sued for how it evaluates and uses its exams, they just keep the results secret.
To drive the point home - there is currently a class action forming against google for age discrimination. However, the process is opaque enough that people probably don't really know if there has been any age discrimination. One possible data point that might be of interest would be to see the exam results, or other information about how the exams were graded (I believe there are even images of the code I wrote stored in this database for google's record keeping - but not mine. Again, I'm not allowed to see them). So, one way to discriminate and get away with it would be to simply deny anyone access to the data used to evaluate them. That way, how would they ever know?
I understand this is true of all interviews, not just tech. But tech really is unusual (perhaps unique) in that a huge part of our "interviews" truly are exams. They're not "tell me about your experience". They're "write code to find all sub matrices in an NxN matrix with a positive determinant", often at the whiteboard, in 45 minutes.
If that sounds extreme or excessively suspicious, I'd like to point out that we're talking about a company that was clearly, glaringly guilty of an industry wide collusion to suppress wages and maintain no-hire lists. I'm hoping you've read those articles and seen those emails. It's reasonable for people, at this point, to be looking for some kind of outside regulation and transparency.
1. Don't ask questions that are hard to explain over the phone. For instance, I was asked if I'd used lambdas in C# and said, "Yes, extensively." The next question was, "Explain to me what a lambda is." I gave the textbook definition, and he was expecting more, so I simply said, "It's hard to explain lambdas over the phone; I can show you some source code I've written, and that way I'll be able to explain it better." I thought that was it and I'd be 'screened out', but I was moved on to the next round of on-site interviews!
2. Do ask questions about how the candidate has used the technologies that the job requires, and ask them to explain how they designed and implemented their most recent solution, and where they used what technology, and how. After initial answer, drill deep into areas that are important for the job posted, and from the answers, you can get a fairly good assessment of whether the candidate should be moved to the next stage of the process.
3. After passing the tech phone screen, I've received and completed 3 take-home coding exercises (1 each from 3 different companies), and in 1 of those companies, I was subject to white (water-) boarding exercises on site in addition to the take home test. The white boarding seldom goes well because of the environment and nature of the whole thing.
I found the take home exercises to be better than on-site white-boarding because the former is closer to how a future employee would work, than the latter.
Also, how a candidate does on algorithm and data structure questions is a false positive (or negative) because these can be practiced, rehearsed, perfected and hence gamed. Coding exercises, or specific question on the candidate's most recent project, on the other hand, cannot be 'gamed' and even if it was, you could tell from listening to the response, or from reading the code...
Hope this helps!
(In a previous role, I gave applicants the same -- very trivial -- question that they had already coded the answer for in a do-at-home screening. I wasn't secretive about this, either, I told them that this is the exact same question they had already answered. A significant percentage of them were unable to figure out the same answer they had provided a few days earlier.)
I never understood this. In a programming / software development job more than any other job, aren't you very likely to fail at delivering on the job, if you cheat on the interview and tests?
If you're pretty happy with your routine, the next step is review. Look at candidates who passed a phone screen but were later turned down at a follow-up stage. Is there anything you could change about the phone screen so that the later time and money could be saved? (If you have no one who passed the screen but failed the later stage, it sounds like you could just eliminate the later stage altogether.) Another interesting thing to review is candidates who didn't work out after hiring, or didn't work out as well as hoped. Again, was there anything at the phone screen that could have caught this? For those you can look at the other stages too. One example for me was an intern who didn't work out so well. We only do a couple of phone screens for interns, but the biggest missing skill I realized I could try testing for was the ability to dive into unfamiliar code they didn't write and make a few small changes in various places to fulfill some business goal. In other words, filter out people who can only program if they can keep the whole program in their head (which, for larger things, usually only works if you wrote everything yourself).
I think only google needs to decide this. Everyone will just follow what google does.
"If you have kids, you're too old to code and should move into management." This is basically what the industry is saying.
This is one reason why most interviewing is a subjective clusterfuck (not just software, I suspect). I'd bet really good money that numerous companies would reject Linus, Carmack, etc.
Hence, the handing out of graphs and other media meant to project an air of objectivity while reducing an engineer to a few quantities turns my stomach.
I make a point of not telling you. I want you to demonstrate that you're able to work out the requirements when you're given a task.
Some people will take a task, go off for a few weeks, develop ill-fitting solutions, and then say "well, you didn't say you wanted unit tests" or "you didn't tell me it had to support files that can't fit in memory".
Others will try to find out what's needed, with simple questions like "Roughly how big are these files?" and "Does this actually need to run in line with the front page? What if we load it asynchronously?"
I'm looking for the latter.
Some places value communication skills over technical skills. Others will value experience over credentials. I think it's unrealistic to expect that there is "one true way" to perform an evaluation (technical or otherwise). It all needs to be in the context of the team/department/organization you're hiring for.
That's the thing: all OTHER interviewing doesn't do this; other fields evaluate by talking to the candidates. Damn, I am going to get out of development.
This is why I get a laugh out of people complaining about whiteboard-style interviews. Far and away, tech still has one of the most intensely meritocratic hiring processes. Not perfect, sure, but better than almost every other industry. The majority of high-skilled jobs rely heavily on pedigree, education, social standing, references, sociability, etc. I'd much rather reject Linus for flubbing a BFS traversal than hire him for attending my alma mater and knowing a lot about golf.
Almost every industry, including tech, relies heavily on those things as pre-interview filters. Tech is not a meritocratic exception here.
My SO works in media and I can assure you the hiring process there is completely, thoroughly, perniciously subjective. Experiencing her job search really gave me appreciation for the fact that taking a test is 80% of my interview process.
The big tech firms are notorious for the same thing with a slightly different set of high-prestige schools, so I don't know what difference you are trying to draw?
I just did a quick check in LinkedIn, choosing four people at GS with "Trader" in their title. The four schools I got: UNC Chapel Hill; Duke; Williams College; Amherst College. I find that distribution hard to correlate with your statement.
"The only difference now is that you'll constantly have an ethical and deontological loop running in your head."
As a licensed Engineer (once again, PE in most jurisdictions; Quebec just can't agree on using the same terms as everyone else), you have an ethical, deontological, and legal obligation to ensure that your work conforms to the "state of the art". You constantly have to think about the consequences of your work on everyone downstream, from workers on the shop floor to the final customers, maintenance workers, etc.
It's not so much about the actual work (many drafters, technologists, etc could often do something an engineer has done), but about the profound care that went into making sure it was right.
In this instance if I were John I probably wouldn't want to work for the company anyways because they clearly have no idea how to hire.
Agile has the same issues.
I've yet to see someone tell me with a straight face that agile estimates hold anything like the value we want them to.
That agile estimation is fuzzy BS is a certainty. But it's a nice comforting blanket to help clients accept "we're just gonna start working on this, if you decide you don't want us to work on it anymore tell us to stop," which is important because that's the most cost-effective way for the client to spend their money, in many cases.
It's nowhere near as accurate as anything resembling an actual engineering estimation process, but ain't no-one got the time/money for that. It's probably a little better than the seat-of-pants "just give me a ballpark right now (but I'm totally gonna hold you to it, gee isn't it weird how we're always fighting fires and in crunch mode?)" that it's replacing as far as accuracy, but more importantly it makes that crap not happen (assuming it's functioning correctly).
BTW, you forgot the part about producing improving metrics to satisfy management and maybe get a raise (vs. 100% not getting one), or a nice story on your resume about how you improved team velocity 50% YoY through magic, which in the same way is "reducing an engineer to a few quantities" and equally "turns my stomach," as alexandercrohde put it. Most of the improvements I've introduced are immeasurable because they shift mindsets or provide technological leverage; their effects are so pervasive that no single metric captures them.
† if you're doing, say, 10 push-ups, instead of "1, 2, 3" try counting each number twice, like "1, 1, 2, 2, 3, 3". Magic, you've done 20. You know it's a mind trick, but it'll still work.
†† Going from email + phone + hallway communication, silo'd knowledge, and a single-point-of-failure bus factor to serious issue tracking, knowledge sharing, and automation on GitLab. Going from manual machine building to fully automated system provisioning and one-line deployment. Any associated metric would be either completely immaterial (e.g. confidence and well-being) or produce an off-the-chart ratio (I just computed one for fun: a ~1800% improvement in human cycles, not even counting failure modes; it's so far beyond ridiculous it doesn't even mean anything).
I know I need to look for a new job, but I hate the hiring process sooooo much that I keep procrastinating.
I'm a senior full-stack developer and have written loads of production software over 20 years. Who the heck from Refdash is going to assess me with some stupid algorithm question? Can he build shit himself? If so, he knows this whole process is irrelevant.
Sometimes I really want to quit this business; it has destroyed a lot of the fun and passion for coding already...
That's all I need to know about you. If you can do this well, I (or you) can fix all your bugs quickly. I (or you) can fix (most of) your performance issues quickly. If you can decouple, you can isolate whatever other areas of ignorance you have with ease.
How well can you decouple? Is it instinctual for you to riddle off code architectures and solutions that are decoupled? It takes time + a value system to acquire this skillset (so that it's instinctual) but once you have it, you can move mountains with code.
The sad thing is our industry doesn't create the conditions for acquiring this skillset. (And rarely, like in this scoresheet, do you see it measured in interviews.)
The usual refrain is "I didn't decouple here or there because, um, it was faster to take on the technical debt."
No, you didn't decouple here or there because you didn't spend several years falling on your face trying to decouple and failing. You were too busy getting paid to crank out whatever works in the shortest possible time, all the while saying "I didn't decouple here or there because it was faster to take on the technical debt." You didn't decouple there because you don't know how to decouple there.
But the key point is that it's an objective measure. Given two equally functional solutions to the same problem, one will be objectively more, less, or equally coupled than the other. It's a demonstrable, measurable skill set; the relative coupling of two separate solutions is generally speaking not some subjective measure.
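As a toy illustration that coupling can be counted rather than just felt, here's a crude fan-out metric over Python imports. The module names below are invented for the example; real coupling metrics (afferent/efferent coupling, instability) are more involved, but the principle is the same:

```python
import ast

def fan_out(source: str) -> int:
    """Crude coupling metric: count distinct modules a file imports."""
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return len(deps)

# A module that reaches into everything vs. one behind a single seam.
tightly_coupled = "import db\nimport ui\nimport billing\nimport mailer\n"
decoupled = "import billing_api\n"
assert fan_out(tightly_coupled) > fan_out(decoupled)
```

Given two working solutions to the same problem, numbers like this let you say which one is more coupled without appealing to taste.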
I think what these interviews test best is your ability to interact with your peers. Candidates who interface badly with other people would have trouble getting a good score. So yes some brilliant people will bomb those interviews. And some talkers will manage to pass one interviewer.
In total I would expect those interviews to indicate a minimum of professional knowledge in combination with good communication skills. That is valuable data without it being objective in an absolute sense.
Individual interview results for a single candidate vary wildly from day to day. I wouldn't want one bad day to sink my chances with a large number of companies at once; I'd much rather interview with each company separately if that were the case. Using this company would just be far too risky, even if I was 90% sure I'd ace the interview.
On the other hand, if my score was entirely private, and I needed to actively grant each company that was interested permission to view it, I would view this as a useful tool. A bad score would provide useful personal feedback for improving my interview skills in the future, and I'd be happy to send along a great score to potential employers.
Basically it's a good chance to practice and get feedback in the worst case, and a way to save some time and get your foot in the door in the best case.
disclaimer: I work for refdash
It'd be even better if I could TAKE the interview anonymously. Then I wouldn't need to trust Refdash on whether it was recording my information. Bonus: it's much easier to limit bias when you don't know any demographic information about the interviewee.
I understand that the downside here is that I could potentially take the interview many times until I generated a positive result. But maybe there's a happy middle ground somewhere.
Interviews are somewhat anonymous, the interviewer only gets your first name (added that too). We've discussed making it even more anonymous (voice modulation, hiding video, etc.) but it seems like this has been studied before and did not have much of an effect.
A follow up question could be: "Do you see some problems with running this with big numbers? How would you scale it?"
Beyond that, "it depends" on what you're doing with the results. It's very likely that the developers' time would be better spent removing the factorial from the calculation that uses it, than on making the factorial calculation better at calculating factorials.
Besides that, since you can only meaningfully compute 21 different factorials (0! through 20!) within the bounds of a 64-bit integer anyway, just pre-calculate the results and make it a lookup table. Returns in O(1).
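A sketch of that lookup table; 21! already exceeds what an unsigned 64-bit integer can hold:

```python
import math

# 21 entries: 0! through 20!. 21! > 2**64 - 1, so it can't fit.
FACTORIALS = [math.factorial(n) for n in range(21)]

def factorial64(n: int) -> int:
    """O(1) factorial lookup for every result that fits in 64 bits."""
    if not 0 <= n <= 20:
        raise ValueError("n! does not fit in 64 bits for n > 20")
    return FACTORIALS[n]
```

Anything beyond that range needs arbitrary-precision arithmetic anyway, at which point the cost model of the loop is the least of your problems.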
Not many real-world applications require precision greater than a double, but pure mathematics can calculate millions of [decimal] digits of an irrational number just for amusement. A cryptography application, for instance, would need full precision, whereas an engineering problem might be perfectly fine with fewer significant digits.
For example, if a candidate cannot implement a simple algorithm (like the factorial here), he very likely has poor skills.
If he can implement it without too much trouble, that's a good sign, but he only proved that he probably has some good skills.
Again, the signal is narrow but strong.
Point is, although there might be place for algorithm coding in developer interviews, I am willing to wager a large chunk of my salary that performing well in algorithm interviews doesn't necessarily correlate strongly to performing well on the job.
Why? Because performing well on the job needs to be properly defined in its context first.
Why on earth do you bring this bullshit sexism to this discussion? Of course I have a prototypical picture of a developer in my head. Like everyone. So what? I have interviewed developers and 99% of them have been male. I'm not arguing anything about that percentage here. Let's just assume _male_ developers here then if that makes you feel better.
Point is, although there might be place for algorithm coding in developer interviews, I am willing to wager a large chunk of my salary that performing well in algorithm interviews doesn't necessarily correlate strongly to performing well on the job.
You don't have a point. It doesn't matter how well they communicate, or anything else, IF THEY CANNOT CODE. It also doesn't matter if they can code if they cannot communicate. I'm _not_ saying that coding skill is THE ONLY skill they must have. Get that into your head already. It's ONE of the many skills they absolutely must have. Another is the ability to express their ideas clearly. There are others, of course, but we are discussing "programming tests in interviews" here.
In addition, I don't believe interviews or any tests during interviews can accurately predict job performance. I think there are a lot of studies that support this. Even though all the tests show green light, the person can fail in the actual job. Nothing guarantees that 100%. We get false positives. But that doesn't mean interview tests are useless!
Moreover, I think it's highly unprofessional to reinvent the wheel by writing most algorithms for production. I always look for a bulletproof library first.
Won't you use a calculator for 345 x 27?
Again, the idea of the coding test in interviews is not to produce production quality code. It is testing that the candidate's brain can write some code. Try to understand that.
Is it a perfect measurement? Of course not.
Also, the idea is to select an algorithm that doesn't require encyclopedic knowledge of different algorithms. Even (any) sorting algorithms may be too complex for this purpose.
You don't need a calculator. Use your brain and show your work.
Do you check for nulls in linked list traversals? Potential stack overflows? How about array out of bounds?
If yes, why don't you check their data type?
Do you now see why this process is fucked up? The candidate doesn't know what your definition of "ok" is.
No, the process is not fucked up. There is no one right answer. It is CLEAR that the candidate cannot write perfect code in an interview. But it can still provide valuable data.
Basically it can answer this question: "has the candidate got _any clue_ about programming?". Sometimes that has a lot of value.
Think about a candidate that cannot even write a function that calculates the length of a null-terminated string, for example. Doesn't that test tell you immediately that something is terribly wrong? I've seen these candidates.
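For reference, the exercise in question is a few lines. Here's the idea sketched in Python over a bytes buffer, since Python's own strings aren't null-terminated:

```python
def c_strlen(buf: bytes) -> int:
    """Length of a C-style null-terminated string in a byte buffer."""
    n = 0
    while buf[n] != 0:   # an IndexError here would mean no terminator
        n += 1
    return n
```

If a candidate can't produce something equivalent in any language they claim to know, that really is a strong negative signal, which is the commenter's point.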
When you get over a certain skill threshold, every candidate passes the coding test. After that, it's useless, of course. Then it's probably best to show some code and discuss.
NO SINGLE recruitment test can provide absolute certainty. Coding in interview is just one variable.
The interview process usually requires a portfolio, 2 phone interviews and one onsite interview.
I had to talk about previous projects and what was my problem solving process, followed by a case to solve. This was refreshing and way less stressful, a walk in the park compared to the white board interview.
I understand the challenges that hiring managers face, but who has time to spend a day at 5-10 companies to switch jobs? A lot of companies still offer only 10-15 vacation days a year...
I have limited information about interviews in other industries, but other fields are often not evaluated in the same way. Lawyers do not have a mock trial as part of an interview, so for some industries the tool would likely just be a video chat.
Also, as a minor correction this tool is Refdash.
This is a hard problem. Kindly think about it from the perspective of a technical interviewer who really has no idea whether the candidate on the other side of the table can program a computer or not and has to make a hire/no-hire call in 45 minutes.
Does anyone have any data that surfaces this information? All these companies like Triplebyte do is standardize incoming interviews, but what's the point if it has no bearing on whether or not the candidate is actually successful.
They may look at tickets completed, "buzz" around the individual, being assigned to a project that was highly profitable, staying late a lot, or any arbitrary thing they want to.
On the downside, scheduling seems awkward: you have to manually select your availability instead of picking a timeslot, and then the interview is scheduled for you. Personally, I prefer to select my own timeslots, as with interviewing.io; unfortunately, the latter, ironically, is almost always out of availability.
The fact that there's very little proven correlation between being able to jump through these hoops and being a good programmer is another story altogether, but I like this!
This is a fake feedback report for an interview, to show what companies and candidates can expect.
Thanks for your comment. I do think it's important particularly from a feedback perspective. Since companies don't give feedback, this is one of the first things to attempt to help candidates learn about what is being evaluated and how to improve.
I'm hoping this is a first step to finding out a better way to correlate those things, for now I think that things that help candidates find jobs are a good step :)
Somebody feel free to change my view - but I don't see why there can't be a standardized certification (could be broken up into specific areas of tech/comp sci/programming) that proves the applicant is competent in the industry standards, and just focus on the personal/soft skills/culture fit of the interview.
The real problem is that very good candidates are often not motivated to get certs, since they likely don't need them. But with the added benefit of speeding up the interview process, I think more people would do it, and it could then become a standard rather than a negative signal.
Welcome to the Cult of Hiring, I guess.
It is meant to highlight some of the criteria which Refdash uses to evaluate candidates and how companies and candidates can expect to receive that feedback.
disclaimer: I work for Refdash