Hired – Technical Interview Score (refdash.com)



Have we reached the tipping point yet? How much worse does the technical interview process need to get before the industry finally decides enough is enough?

When this all began, it was with the justification that it's not about getting the right answer, it's about seeing how you think and approach a problem.

Companies don't even pretend this is the case anymore. Now it's just expected: rote memorization of algorithm patterns, talking about them with the right terms and key phrases, pretending the candidate had some brilliant "aha" moment when both sides know it's bullshit, and writing the solution on the board as fast as possible without errors.

A completely pointless process that's now heavily pushed by an entire industry of tech interview prep (websites, books, interview coaches, etc.). When I read a discussion about tech interviews now, I wonder how many commenters are industry shills who want to keep pushing the narrative that the process is great, and that you just have to keep practicing and studying more, using the right materials of course.

There's also the side effect of ageism. Who other than college grads or seniors in college has the free time to study for this process? Those with families and full-time jobs are of course going to have a hard time. Even if they manage to find some free time here and there, how can they compete with someone in college with no responsibilities who has time to grind through hundreds of problems? The simple answer is they can't. The college grad will always look smarter and faster in the interviews.


Some comments here are asking what would be a better process. I think a big step is simply dropping the requirement to reach the optimal solution. If I were doing these interviews, brute force would be fine. If candidates can come up with a brute-force approach and write code for it, they completely pass my coding bar. Some may say that's a low bar, but it's not, because expecting anything more heads into rote-memorization-of-algorithms territory. It's also not realistic. In the real world, brute force is always the first solution, and it's the right first choice for a number of reasons:

- Brute force is often simple to understand. That directly translates into maintainable code.

- The business constraints on the input size may make the brute force an acceptable solution.

- If you need a better optimized algorithm, brute force is a great place to start. It's basically your test case generator for verifying that your more complex algorithm is right.

After they get brute force code written, the coding part is over and the rest of the interview is about discussing why it's brute force and what we could do about it. But no more coding is expected. It's a chance to be creative. If they want to talk about improving the algorithm that's fine, let's draw some diagrams. They want to throw more hardware at it? Sure, let's talk about how to scale that. Maybe they've actually seen a business problem before that was similar? Great, let's discuss how you solved it.


I really like this comment a lot because it mirrors my experience with brute force type solutions to problems. It's pretty much what I always start with because on my first pass I need to prove that the problem can be solved in the first place. Then that gives me a baseline where I feel comfortable making changes and trying to optimize. Like you said, very often because of business constraints the optimization only needs to go so far and so an "optimal solution" may not necessarily be the one that is fastest in benchmarks. It may be the one I can get out today that solves the business problem now.


+1 for the test case generator. Some of our libraries even have a dividing path between optimized and brute-force modes, just so we can be certain of correctness in unit tests and whatnot. Correctness first, with all the testing infrastructure in place and TODOs to go back later and lay down better algorithms or do general refactoring.
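
A minimal sketch of that pattern in Python (a hypothetical max-subarray example; the slow mode doubles as the test oracle for the optimized path):

    import random

    def max_subarray_brute(xs):
        # O(n^2) reference: try every non-empty contiguous range.
        return max(sum(xs[i:j]) for i in range(len(xs))
                   for j in range(i + 1, len(xs) + 1))

    def max_subarray_fast(xs):
        # O(n) Kadane's algorithm.
        best = cur = xs[0]
        for x in xs[1:]:
            cur = max(x, cur + x)
            best = max(best, cur)
        return best

    # Unit test: the slow-but-obvious mode checks the fast mode.
    for _ in range(1000):
        xs = [random.randint(-10, 10) for _ in range(random.randint(1, 20))]
        assert max_subarray_brute(xs) == max_subarray_fast(xs)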


I LOLed at your comment!

I still remember a recent interview I had with a team at Amazon.

That team had accomplished NOTHING (and I mean NOTHING OF SIGNIFICANCE) in the past 3 years, went through re-orgs and new managers, suffered high attrition, and had nothing to show for it.

Here I am with successful products, published code and a track record of delivering. I have the skills to help them out.

What happens? I get rejected because I didn't remember how to use a comparator in merging k sorted arrays and write that perfect code on a whiteboard.

WTF??!! Did hiring rote memorizers really help that team at Amazon? Why do they have nothing productive to show for 3 years despite having hired experts at tree traversals?
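
For the record, the textbook answer they were fishing for is a min-heap whose ordering plays the comparator's role. A rough Python sketch (tuple ordering stands in for the comparator):

    import heapq

    def merge_k_sorted(arrays):
        # Heap entries are (value, array index, element index); tuples
        # compare element-wise, which is Python's stand-in for a comparator.
        heap = [(arr[0], i, 0) for i, arr in enumerate(arrays) if arr]
        heapq.heapify(heap)
        out = []
        while heap:
            val, i, j = heapq.heappop(heap)
            out.append(val)
            if j + 1 < len(arrays[i]):
                heapq.heappush(heap, (arrays[i][j + 1], i, j + 1))
        return out

    assert merge_k_sorted([[1, 4], [2, 5], [3]]) == [1, 2, 3, 4, 5]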


This is a good comment. One thing I'd like to point out is that people often defend technical interview exams on the notion that if someone can't write a factorial function, that's a bad sign.

Well, sure, but my experience has been that the technical interview exam is far more elaborate than this. My questions were more along the lines of finding all permutations of a set (usually with some twist, like finding all permutations that, when combined, match another member of the set), or finding all matching subtrees in a binary tree, or something depending on merge sort or quicksort. That sort of thing. Nothing impossible, but considerably more involved than simple recursion. And the teams that interviewed me clearly did expect to see largely working code written on a whiteboard.
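
For concreteness, even the "plain" version of the permutations question is a fair chunk of recursion to produce at a whiteboard (a quick Python sketch; the interview twists get layered on top of this):

    def permutations(items):
        # Recursively pick each element as the head, permute the rest.
        if len(items) <= 1:
            yield list(items)
            return
        for i, x in enumerate(items):
            for rest in permutations(items[:i] + items[i + 1:]):
                yield [x] + rest

    assert sorted(permutations([1, 2, 3])) == [
        [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]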

Overall, it's elaborate enough that people do need to study substantially for their interview exams, and good developers may simply decide they don't want to waste the time on a job that might not even come through anyway. I once learned to do intricate integration by parts in calculus, but I wouldn't be especially interested in training up to do it for a data science interview. If it comes up for reals, I'll deal with it.

This is something the industry has inflicted on itself, which is why I feel a lot of irritation when I hear the same companies talk about a severe shortage of talented software developers.


Good thing you did not end up in a lame team like that, with unreasonable decision-making like you've seen.

Interviews are a mutual process, where the hiring team must do their best to persuade me to join. (The same way I do my best to persuade them to accept me.) Some of them utterly fail, even if they make an offer.


Coupled with the fact that most recent grads don't have much wisdom or common sense: somewhat less of a problem in the coding sense, much more so in the everyday business-environment sense. As a friend of mine used to have in his email sig: "You have a Ph.D. Good, don't touch anything."


>"When this all began, it was with the justification that it's not about getting the right answer, it's about seeing how you think and approach a problem."

Indeed. Now the default seems to be companies sending out a HackerRank test consisting of 5 separate problems, a countdown timer, and zero human interaction.

I can't help but think that recruiters are playing a part in pushing for this online HackerRank test bullshit.

It feels to me that as the quality of people working as recruiters goes further into the toilet, the reliance on the HackerRank puzzle test increases.


I still suppose it's a reasonable input filter. If you can solve a few simple coding problems, you're worth talking to face to face onsite.

This depends on the problems being reasonable, though.


I think the goal is to be able to sell engineers the way products are sold on Amazon. To do this, a spec sheet for each engineer is needed, and all these tests are an attempt to create a standardised way of generating spec sheets.


Memorization? It's a factorial function. You shouldn't need to study or memorize anything to implement a factorial function. If you can't come up with this algorithm off the top of your head, that legitimately tells you something about your problem-solving ability.


I'd still ask for the definition of factorial.

Sure, for various reasons most programmers know the factorial, but I'd consider remembering the definition of a mathematical operation that's only common in certain domains (statistics and finance?), and that most people learn in high school and never use again, to be rote memorization.


Ability to detect your deficiencies of knowledge and ask questions is hugely important, and is worth testing.

Or you can use a search engine.


Not the factorial problem in particular, but interview questions in general require some amount of prior knowledge of the techniques. Topics like math, bit manipulation, and permutations/combinations come up often in interviews, and giving optimized solutions requires prior knowledge.


Also, if it's legitimately the first time you're solving something like "all permutations", you'll be slower than someone who has done it before, and that might hurt you too.


Great question, when will we decide enough is enough?

I actually don't have a problem with the data structures interview exam inherently, my problem is with how it's administered.

It doesn't bother me that actuaries have to take exams to demonstrate an understanding of linear algebra, vector calculus, stats, numerical analysis, and so forth. Nor does it bother me that they have to take these exams even if they learned the material at a reputable college and got a good grade. I think it's great that the actuarial field allows people from multiple educational paths to take the test, giving them a choice about how they learned the material.

But here's the thing: software developers have to take these "interview" exams over and over, at the whiteboard, every time we look for a new job. And unlike the actuarial exams, we take these tests under conditions of tremendous secrecy. We don't know if our examiners are qualified, or if their questions are vetted, or if the exam is consistently applied and graded. We don't have a clear and defined study path, aside from books like "Cracking the Coding Interview". And we get no feedback; hell, at times we don't even know if our performance was evaluated at all.

I've considered this for a while, and I've come to understand that exam-based professions and institutions, such as the medical boards, the bar, and universities, have evolved a set of considerations, kind of a student bill of rights. Students must submit to the exams, but they have the right to a fair and consistently administered exam, evaluated by respected people in the profession or field; to exam questions that are not capricious or utterly unexpected; to a preparation path; and to feedback on why they did not pass. What we have in tech is the stress and burden for examinees, in many ways amplified (whiteboard coding, on the spot, is among the more stressful ways to take an exam), but the student bill of rights is largely non-existent. I think this is the deep and severe problem with tech interviews; until it changes, nothing can change meaningfully. And I don't just mean in interviews. I think this sort of thing is what is deeply rotten about tech, at the core of many of its problems (in a field where there's a lot of suspicion about bias, are secret tests under capricious conditions a good idea?).

And we're expected to come back for more of this, over and over, and to just take it.

Oh, by the way, there's a shortage of software engineers, best solved by putting tech companies in charge of the immigration system so they can decide who is and isn't allowed to come to the US based on how well they do on these interview tests/how much they're willing to put up with these interview tests.


I'm a co-founder of Refdash. Thanks for the thoughtful comment. I largely agree with things that you said. I think that interviews need to become significantly less stressful and more objective.

One way in which we're approaching the stress of interviews is to strip away the bad outcomes and allow people to approach interviews as "there is nothing to lose". Another way is experimenting with different types of interviews such as group interviews or project based interviews. Sometimes the issue with those experiments though is that they're even harder to standardize.

I think that in addition to what you said, we need to get to a measurable level of improvement. We try to measure the repeatability of our sessions in two ways: 1) when the same session is evaluated by two different interviewers, the ratings across dimensions should be equivalent, and 2) (this one is harder to control experimentally) when the same person does two interviews, they should be evaluated equivalently on the intersection of the sets of things that were evaluated. By optimizing for these two kinds of repeatability, we are hopefully going to move toward much more objective evaluations that capture the engineer's understanding of different areas and not just the spur of the moment.
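
Concretely, the first check could be as simple as this (a rough sketch with hypothetical dimension names, not our actual pipeline):

    def rating_agreement(ratings_a, ratings_b):
        # Mean absolute difference over the dimensions both interviewers
        # scored; lower means more repeatable.
        shared = set(ratings_a) & set(ratings_b)
        if not shared:
            return None
        return sum(abs(ratings_a[d] - ratings_b[d]) for d in shared) / len(shared)

    print(rating_agreement(
        {"coding": 3.5, "communication": 4.0, "algorithms": 3.0},
        {"coding": 3.0, "communication": 4.0, "debugging": 2.5}))  # -> 0.25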


Thanks for your response.

Based on what I've seen from your site, this does look like an improvement.

An interviewee can prepare properly, and take the interview once prepared, rather than doing this at arbitrary times that may be very busy, simply because an interview came up.

Because the interview results can be used in multiple places, there's no need to do this repeatedly for companies that have confidence in the exam (I hope!).

There are reassurances that the interviews are conducted by experienced engineers. I'd be interested in hearing more about this.

There is actual feedback, which is critical. I interviewed at Google, and my understanding is that there are actual numerical scores in a database for my performance in the various interview exams, but I'm not allowed to know what they are. To me, that's a huge problem, and I'm glad to see from your example that you provide feedback.


Yes, you get to fast-track through interviews at multiple companies.

As far as the experience of the interviewers goes, we're pretty careful about it in our on-boarding process. Even more than experience, we look for open-minded engineers who are willing to question their evaluations and constantly work on improving the overall process.

Re: feedback at Google. It's hard for companies to give you access to feedback because of potential legal repercussions and also because it is very hard for them to deal with back and forth that would inevitably occur.

That's another big point that we emphasized early on, one that seemed like a crazy proposition: a fully transparent interview process. That is, anything a company would see, you see beforehand.


> It's hard for companies to give you access to feedback because of potential legal repercussions and also because it is very hard for them to deal with back and forth that would inevitably occur

Although I agree with this point, it only explains why a bad practice occurs, not why it should happen. If anything, it indicates why google might not be a good institution to be conducting exams. What you're saying here (accurately) is that because Google could be sued for how it evaluates and uses its exams, they just keep the results secret.

To drive the point home - there is currently a class action forming against google for age discrimination. However, the process is opaque enough that people probably don't really know if there has been any age discrimination. One possible data point that might be of interest would be to see the exam results, or other information about how the exams were graded (I believe there are even images of the code I wrote stored in this database for google's record keeping - but not mine. Again, I'm not allowed to see them). So, one way to discriminate and get away with it would be to simply deny anyone access to the data used to evaluate them. That way, how would they ever know?

I understand this is true of all interviews, not just tech. But tech really is unusual (perhaps unique) in that a huge part of our "interviews" truly are exams. They're not "tell me about your experience". They're "write code to find all sub matrices in an NxN matrix with a positive determinant", often at the whiteboard, in 45 minutes.

If that sounds extreme or excessively suspicious, I'd like to point out that we're talking about a company that was clearly, glaringly guilty of an industry wide collusion to suppress wages and maintain no-hire lists. I'm hoping you've read those articles and seen those emails. It's reasonable for people, at this point, to be looking for some kind of outside regulation and transparency.


Any suggestions for how to improve the process? I've been conducting a lot of technical phone screens lately and I'm always interested in how to make it better for both parties.


I've been through a lot of phone screens and in-person interviews for a Senior Software Engineer (C# / .NET) position. Here's some feedback. YMMV.

1. Don't ask questions that are hard to explain over the phone. For instance, I was asked if I'd used lambdas in C#, and I said, "Yes, extensively." The next question was, "Explain to me what a lambda is." I gave the textbook definition, and he was expecting more, so I simply said, "It's hard to explain lambdas over the phone; I can show you some source code I've written, and that way I'll be able to explain it better." I thought that was it and that I would be screened out, but I was moved to the next round of on-site interviews!

2. Do ask questions about how the candidate has used the technologies that the job requires, and ask them to explain how they designed and implemented their most recent solution, where they used which technology, and how. After the initial answer, drill deep into areas that are important for the job posted; from the answers, you can get a fairly good assessment of whether the candidate should be moved to the next stage of the process.

3. After passing the tech phone screen, I received and completed 3 take-home coding exercises (1 each from 3 different companies), and at 1 of those companies, I was subjected to white (water-) boarding exercises on site in addition to the take-home test. The whiteboarding seldom goes well because of the environment and the nature of the whole thing.

I found the take home exercises to be better than on-site white-boarding because the former is closer to how a future employee would work, than the latter.

Also, how a candidate does on algorithm and data structure questions can be a false positive (or negative), because these can be practiced, rehearsed, perfected, and hence gamed. Coding exercises, or specific questions about the candidate's most recent project, on the other hand, cannot be 'gamed', and even if they were, you could tell from listening to the response or from reading the code...

Hope this helps!


Take home is great, except people cheat. People even cheat on early screening questions knowing that an interview is coming up.

(In a previous role, I gave applicants the same -- very trivial -- question that they had already coded the answer for in a do-at-home screening. I wasn't secretive about this, either, I told them that this is the exact same question they had already answered. A significant percentage of them were unable to figure out the same answer they had provided a few days earlier.)


Is take home great? If you have a full-time job and any sense of home life, is being asked to code a small project in your spare time (and on a limited time basis) sane? I did two this year, and tbh I think the next company that assigns me one, I'll just decline.


I think take-home projects are great for evaluating domain experience and general programming aptitude, but they should be relatively small. Anything that would take more than 1-2 hours is overkill, although sadly I see a lot of companies expecting you to build a full product (e.g. a production-ready iOS app with custom design and full test coverage).


The ones I've taken this year were two days of effort and five days of effort. Because I was so busy with other things, both submissions were littered with TODOs and hacks, and both were given no-pass with zero feedback.


N.B. the LOE estimates of 2 & 5 days were theirs, not mine.


> except people cheat.

I never understood this. In a programming / software development job, more than in any other job, aren't you very likely to fail at delivering on the job if you cheat on the interview and tests?


If cheating is defined as looking up the solution on Google, I frequently cheat while performing my job responsibilities.


But the money's good while it lasts, right? And you can always 'fake it until you make it' on the job too, right?


I like your style on this one, basically pulling out lines like: "I was on fire the day I solved it in the recent past, I just don't know why I can't pull it together today"


I made my own minor variant of https://sites.google.com/site/steveyegge2/five-essential-pho... but I think it's still good advice in general, and I'm fairly happy with how my version of the process has gone, given I only get 1 hour. If I had 90 minutes or 2 hours, I'd be happier. It doesn't matter now, though, since higher-ups at the company are deciding to revamp the process and remove all interviewer decision-making from it in terms of what to ask and how to conduct things, which is probably going to be both good and bad.

If you're pretty happy with your routine, the next step is review. Look at candidates who passed a phone screen but were later turned down at a follow-up stage. Is there anything you could change about the phone screen so that the later time and money could be saved? (If no one has passed the screen but failed the later stage, it sounds like you could just eliminate the later stage altogether.)

Another interesting thing to review is candidates who didn't work out after hiring, or didn't work out as well as hoped. Again, was there anything at the phone screen that could have caught this? For those cases you can look at other stages too. One example for me is an intern who didn't work out so well. We only do a couple of phone screens for interns, but the biggest missing skill that I realized I could try testing for was the ability to dive into unfamiliar code they didn't write and make a few small changes in various places to fulfill some business goal. In other words, filter out people who can only program if they can keep the whole program in their head (which, for larger things, usually only works if you wrote everything yourself).


> industry finally decides enough is enough?

I think only Google needs to decide this. Everyone else will just follow what Google does.


And the college grad will potentially be cheaper.

"If you have kids, you're too old to code and should move into management." This is basically what the industry is saying.


I don't know what to make of this. It looks like a potential headache. As engineers in interviews, we never get told in advance what we're being scored on (naming, speed, coming up with wonky edge cases, scalability to 1 million times the workload, versatility, mentioning a bunch of relevant buzzwords (bloom filters!), confidence?).

This is one reason why most interviewing is a subjective clusterfuck (not just software, I suspect). I'd bet really good money that numerous companies would reject Linus, Carmack, etc.

Hence the giving of graphs and other media that are meant to project an idea of objectivity, while reducing an engineer to a few quantities, turns my stomach.


To be fair, I'd reject Linus. I wouldn't want to work with someone like him, even though he's a well-known programmer who has led a well-known open source project very successfully. I don't think he'd make a good cultural fit at most of the places I've worked.


I think you missed the point.


> we never get told what we're being scored on in advance

I make a point of not telling you. I want you to demonstrate that you're able to work out the requirements when you're given a task.

Some people will take a task, go off for a few weeks, develop ill-fitting solutions, and then say "well, you didn't say you wanted unit tests" or "you didn't tell me it had to support files that can't fit in memory".

Others will try to find out what's needed, with simple questions like "Roughly how big are these files?" and "Does this actually need to run in line with the front page? What if we load it asynchronously?"

I'm looking for the latter.


I generally give all the important parts of the question up front. Having to fish for requirements isn't the point.


I like to leave at least something small ambiguous to see if clarification is requested. My clients rarely give unambiguous requirements, so I feel it’s pertinent to the job.


Depends on the role: if it requires high levels of autonomy and working with highly ambiguous product definitions (e.g. most freelancing jobs), then it's part of your role to constantly fish for requirements or even suggest your own.


100% agree with this. At any given company I've worked at, the recruiting/evaluation strategy has been different--largely because they _are_ looking for different things in candidates.

Some places value communication skills over technical skills. Others will value experience over credentials. I think it's unrealistic to expect that there is "one true way" to perform an evaluation (technical or otherwise). It all needs to be in the context of the team/department/organization you're hiring for.


> This is one reason why most interviewing is a subjective clusterfuck (not just software, I suspect).

This is the thing: all OTHER interviewing doesn't do this; they do it by talking to the candidates. Damn, I am going to get out of development.


FYI, most evidence suggests unstructured interviews (talking to the candidates) have very low predictive power about job performance and that work sample tests (take home problems and on-site coding) fare much better in that regard.


Where is the evidence?



> This is one reason why most interviewing is a subjective clusterfuck (not just software, I suspect).

This is why I get a laugh out of people complaining about whiteboard-style interviews. Far and away, tech still has one of the most intensely meritocratic hiring processes. Not perfect, sure, but better than almost every other industry. The majority of high-skilled jobs rely heavily on pedigree, education, social standing, references, sociability, etc. I'd much rather reject Linus for flubbing a BFS traversal than hire him for attending my alma mater and knowing a lot about golf.


> Almost every other industry relies heavily on pedigree, education, social standing, references, sociability, etc.

Almost every industry, including tech, relies heavily on those things as pre-interview filters. Tech is not a meritocratic exception here.


How many traders at Goldman Sachs don't have degrees from Penn/Harvard/Yale/etc? I bet not very many. Yet some of the best developers I know are self-taught or went to no-name schools.

My SO works in media and I can assure you the hiring process there is completely, thoroughly, perniciously subjective. Experiencing her job search really gave me appreciation for the fact that taking a test is 80% of my interview process.


> How many traders at Goldman Sachs don't have degrees from Penn/Harvard/Yale/etc?

The big tech firms are notorious for the same thing with a slightly different set of high-prestige schools, so I don't know what difference you are trying to draw?


> How many traders at Goldman Sachs don't have degrees from Penn/Harvard/Yale/etc?

I just did a quick check in LinkedIn, choosing four people at GS with "Trader" in their title. The four schools I got: UNC Chapel Hill; Duke; Williams College; Amherst College. I find that distribution hard to correlate with your statement.


Aren't these all expensive private "elite non-Ivy" schools? Maybe the OP was being too restrictive with the 3 listed schools, but I think the general point being implied is correct.


Chapel Hill is a relatively prestigious public school. The other three are prestigious private schools.


This is why I get a laugh out of developers who think tech has a meritocratic hiring process. The majority of real engineering (I've come to the conclusion that most software development, in the context you would identify with, isn't engineering, except inasmuch as the Orks in Warhammer are engaged in the practice) doesn't grill candidates to solve math or engineering trivia on a whiteboard in 45 minutes or less.


Please, do tell what lofty work you do that deserves the holy moniker of engineering.


I'm not trying to make a point here (namely, that programming isn't engineering), but for kicks, here's something I was told by an experienced (mechanical) engineer when I obtained my engineering license after 3 years of experience as a junior engineer (aka Engineer in Training in a lot of jurisdictions):

"The only difference now is that you'll constantly have an ethical and deontological loop running in your head."

As a licensed Engineer (once again, PE in most jurisdictions; Quebec just can't agree on using the same terms as everyone else), you have an ethical, deontological, and legal obligation to ensure that your work conforms to the "state of the art". You constantly have to think about the consequences of your work on everyone downstream, from workers on the shop floor to the final customers, maintenance workers, etc.

It's not so much about the actual work (many drafters, technologists, etc could often do something an engineer has done), but about the profound care that went into making sure it was right.


Hey, software developers do that too... Management just doesn't reward us for it.


I work in data science and machine learning. I interview and as the lead on the team I strongly advocate for practices that are very close to the engineering practices in real engineering, insofar as it's possible. My academic background included engineering (aerospace) and most of my undergrad friends went that route. We all have a good laugh at the practice of "engineering" in the software industry now and again.


I wasn't actually interested, mostly just trying to point out that your holier-than-thou attitude isn't really helpful to the discussion. I really doubt that you're some kind of data science deity because you got an engineering degree.


Suppose you have a job opening for a kernel hacker. You have John, who has zillions of non-trivial patches accepted upstream, but who "flubbed a BFS traversal". And you have Joe, whom you've never heard of, but who spent a whole year doing nothing but traversing graphs and trees to pass your interview. Who would you reject?


Whiteboard interviews are not for hyper specialized roles. They're for companies who get hundreds of applicants every week. Right tool for the job.

In this instance if I were John I probably wouldn't want to work for the company anyways because they clearly have no idea how to hire.


Problem is that most companies copy “the right tool” for all the jobs just because the famous big brother(s) do(es) it.


Totally agree. Classic fallacy of test confidence. The question is, why are we confident in the test to begin with...


> giving of graphs and other media that are meant to project an idea of objectivity

Agile has the same issues.


Would you like to elaborate? What "agile graphs" fail and why?


Burndown charts, planning poker, difficulty weights, priority tiers, team velocity over time, milestone progress bars.

I’ve yet to see someone tell me with a straight face that they hold anything like the value we want them to have.


Their value is that they make the client comfortable with estimates that are explicitly fuzzy BS, which is vital since there's no way it makes sense for the client to pay for the time it'd take to give them a non-fuzzy, non-BS estimate. It also deflects them from attempting to get non-fuzzy, non-BS estimates out of you in a fraction of the time it'd otherwise take—the reputation and authority of the Agile brand is what gets them to do this, not any actual utility it has.

That agile estimation is fuzzy BS is a certainty. But it's a nice comforting blanket to help clients accept "we're just gonna start working on this, if you decide you don't want us to work on it anymore tell us to stop," which is important because that's the most cost-effective way for the client to spend their money, in many cases.

It's nowhere near as accurate as anything resembling an actual engineering estimation process, but ain't no-one got the time/money for that. It's probably a little better than the seat-of-pants "just give me a ballpark right now (but I'm totally gonna hold you to it, gee isn't it weird how we're always fighting fires and in crunch mode?)" that it's replacing as far as accuracy, but more importantly it makes that crap not happen (assuming it's functioning correctly).


Precisely, IOW they're just for show and are basically GIGO when you're completely honest, but they're a psychological device required both externally (as leverage one way or another towards clients) and internally (it's motivating to see something go forward, even when you know it's completely bollocks†). Just to say that this definitely subscribes to the basic idea that they "are meant to project an idea of objectivity".

BTW, you forgot the part about giving improving metrics to satisfy management and maybe get a raise (vs 100% not getting one), or a nice story line on your resume about how you managed to improve team velocity YoY by 50% through magic, which in the same way is "reducing an engineer to a few quantities" and equally "turns my stomach", as alexandercrohde put it. Most of the things I've done to introduce improvements are immeasurable†† because they're mindset-shifting changes or technological leverage, so there is no metric associated, because of the pervasiveness of their effects.

† if you're doing, say, 10 push-ups, instead of "1, 2, 3" try counting each number twice, like "1, 1, 2, 2, 3, 3". Magic, you've done 20. You know it's a mind trick, but it'll still work.

†† Going from email+phone+hallway communication, silo'd knowledge and SPOF Bus Factor to a serious issue tracking, knowledge sharing, and automation on GitLab. Going from manual machine building to fully automated system provisioning and one liner deployment. Any associated metric would be either completely immaterial (e.g confidence and well being) or produce an off-the chart ratio (just computed one just for fun, that's a ~1800% improvement in human cycles — not even counting failure modes — it's so beyond ridiculous it doesn't even mean anything).


That is a bit ridiculous. It shows everyone how the project is tracking, and how in line with reality the estimates are. I am not sure why you guys find it so hard to see why that is useful.


Burndown charts and the like, one imagines.


The company I work for has turned into an Agile church. My productivity has dropped by more than 50%. Most of my colleagues have left already; now I'm stuck with Agile masters, coaches, and all the misery they come up with.

I know I need to look for a new job, but I hate the hiring process sooooo much that I keep procrastinating.

I'm a senior full-stack developer and have written loads of production software over 20 years of experience. Who the heck from Refdash is going to assess me with some stupid algorithm question? Can he build shit himself? If so, he knows this whole process is irrelevant.

Sometimes I really want to quit this business; it has already destroyed a lot of the fun and passion of coding.


Ditto. I'm taking steps to move into the business end of things.


How well can you decouple?

That's all I need to know about you. If you can do this well, I (or you) can fix all your bugs quickly. I (or you) can fix (most of) your performance issues quickly. If you can decouple, you can isolate whatever other areas of ignorance you have with ease.

How well can you decouple? Is it instinctual for you to rattle off code architectures and solutions that are decoupled? It takes time plus a value system to acquire this skillset (so that it's instinctual), but once you have it, you can move mountains with code.

The sad thing is our industry doesn't create the conditions for acquiring this skillset. (And rarely, like in this scoresheet, do you see it measured in interviews.)

The usual refrain is "I didn't decouple here or there because, um, it was faster to take on the technical debt."

No, you didn't decouple here or there because you didn't spend several years falling on your face trying to decouple and failing. Because you were too busy getting paid to crank out whatever works in the shortest possible time, all the while saying "I didn't decouple here or there because it was faster to take on the technical debt." You didn't decouple there because you don't know how to decouple there.


What do you mean by decoupling?


Aka, separation of concerns. There's a lot of literature on this.

But the key point is that it's an objective measure. Given two equally functional solutions to the same problem, one will be objectively more, less, or equally coupled than the other. It's a demonstrable, measurable skill set; the relative coupling of two separate solutions is generally speaking not some subjective measure.
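
One illustrative (hypothetical) Python example of what that difference looks like in practice:

    # Coupled: parsing, business logic, and I/O all live in one function,
    # so nothing can be tested or reused without a real file and stdout.
    def report_coupled(path):
        total = 0
        with open(path) as f:
            for line in f:
                total += int(line.split(",")[1])
        print(f"total: {total}")

    # Decoupled: the pure logic is separated from I/O and testable alone.
    def parse_amounts(lines):
        return (int(line.split(",")[1]) for line in lines)

    def report(path):
        with open(path) as f:
            print(f"total: {sum(parse_amounts(f))}")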


What has this proven, exactly? Has a study been done to show a strong correlation between these test scores and ACTUAL programmer productivity? Which two metrics are you correlating here? All I see is a score. Is this score valuable? I see nothing in the link outside of pure speculation or opinion to suggest that's the case.

And anyone who wants to rebut with "that's an impossible task" or "but we know what it takes to be a good programmer" is full of it. We don't. We have no real, documented, objective way of determining what a good programmer is. My definition is: creates a program, to spec, that is acceptably performant. Even that has tons of subjectivity and bias to it. Guess what: everyone has a different definition or bar, and frankly they are all full of our inherent biases.


While I don't know whether this has been studied, I would expect the interview results to correlate well with years of experience for example.

I think what these interviews test best is your ability to interact with your peers. Candidates who interface badly with other people would have trouble getting a good score. So yes some brilliant people will bomb those interviews. And some talkers will manage to pass one interviewer.

In total I would expect those interviews to indicate a minimum of professional knowledge in combination with good communication skills. That is valuable data without it being objective in an absolute sense.


I'm not convinced they correlate with years of experience. In fact, they may well inversely correlate. I give candidates a relaxed algorithm design screen (which roughly 70% pass -- I'm a soft touch) that includes a problem that is mildly academic in nature. New grads are the ones who tell me they already know it, and I have to move on to the next problem (which, amusingly, they never have experience with). Experienced people often have a vague recollection of what it does, but not enough of one to prevent me from learning something from the discussion about how to get it done.


What degree of control do I have over the sharing of my interview score? If I sign up for an interview with Refdash and get an outcome of "1.0/4, do not hire," is that information going to be sent to every employer Refdash works with?

Individual interview results for a single candidate vary wildly from day to day. I wouldn't want one bad day to sink my chances with a large number of companies at once; I'd much rather interview with each company separately if that were the case. Using this company would just be far too risky, even if I was 90% sure I'd ace the interview.

On the other hand, if my score was entirely private, and I needed to actively grant each company that was interested permission to view it, I would view this as a useful tool. A bad score would provide useful personal feedback for improving my interview skills in the future, and I'd be happy to send along a great score to potential employers.


The interview data is only ever shown anonymously to companies, even if you apply to a company with it. Only when the candidate and company agree to do an onsite are names revealed.

Basically it's a good chance to practice and get feedback in the worst case, and a way to save some time and get your foot in the door in the best case.

disclaimer: I work for refdash


Could I ask you to add that to your FAQ? Knowing that the information is anonymous would be super great.

It'd be even better if I could TAKE the interview anonymously. Then I wouldn't need to trust Refdash on whether it was recording my information. Bonus: it's much easier to limit bias when you don't know any demographic information about the interviewee.

I understand that the downside here is that I could potentially take the interview many times until I generated a positive result. But maybe there's a happy middle ground somewhere.


Thanks for the suggestion, just added it.

Interviews are somewhat anonymous; the interviewer only gets your first name (added that too). We've discussed making them even more anonymous (voice modulation, hiding video, etc.), but it seems this has been studied before and did not have much of an effect.

http://blog.interviewing.io/we-built-voice-modulation-to-mas...


Clearly the person didn't run the code. The factorial function returns an int. The factorial of 13 is already too big for a 32-bit int. The factorial of 21 is bigger than a 64-bit int.
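
Easy to check (Python's arbitrary-precision ints make verifying the fixed-width limits trivial):

    import math

    INT32_MAX = 2**31 - 1
    INT64_MAX = 2**63 - 1

    print(math.factorial(12) <= INT32_MAX)  # True:  12! still fits in 32 bits
    print(math.factorial(13) <= INT32_MAX)  # False: 13! overflows a 32-bit int
    print(math.factorial(20) <= INT64_MAX)  # True:  20! still fits in 64 bits
    print(math.factorial(21) <= INT64_MAX)  # False: 21! overflows a 64-bit int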


I think the code people have to write in interviews is not supposed to work/scale in the real world. The purpose usually is to test whether the candidate can implement a correct algorithm. Even though this is Java (?), it's used more like pseudo code.

A follow up question could be: "Do you see some problems with running this with big numbers? How would you scale it?"


A double only has 52 bits of precision, but in a factorial, every second number in the multiplication has at least one factor of two in it, so you could get to 22! that way without any loss of precision.

Beyond that, "it depends" on what you're doing with the results. It's very likely that the developers' time would be better spent removing the factorial from the calculation that uses it, than on making the factorial calculation better at calculating factorials.

Besides that, since you can only meaningfully do 22 different factorials within the bounds of a 64-bit architecture anyway, just pre-calculate the results and make it a lookup table. Returns in O(1).
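
A sketch of that lookup-table idea (using 0! through 20! as the range that fits a signed 64-bit integer exactly):

    import math

    # Precomputed once; 20! is the largest factorial below 2**63 - 1.
    _FACTORIALS = [math.factorial(n) for n in range(21)]

    def factorial64(n):
        if not 0 <= n < len(_FACTORIALS):
            raise ValueError("n! does not fit in a signed 64-bit integer")
        return _FACTORIALS[n]  # O(1) lookup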


What do you mean you can "only meaningfully do 22 different factorials within the bounds of a 64-bit architecture anyway"?


The 52-bit mantissa of a 64-bit double is only sufficient to represent all the significant figures of 22! and a 64-bit integer can only represent 20!, so if you want larger factorials with full precision, you need to use more bytes for your numeric representation.

Not many real-world applications require precision greater than a double, but pure mathematics can calculate millions of [decimal] digits of an irrational number just for amusement. A cryptography application, for instance, would need full precision, whereas an engineering problem might be perfectly fine with fewer significant digits.


So why ask people to write "code". What are you testing?


Personally, I think that asking people to write code in interviews brings very _limited_ data. However, from my experience, it usually is also a very _strong_ signal. Narrow but strong.

For example, if a candidate cannot implement a simple algorithm (like the factorial here), he very likely has poor skills.

If he can implement it without too much trouble, that's a good sign, but he only proved that he probably has some good skills.

Again, the signal is narrow but strong.


I don't mean to be offensive, but reading your statement alone shows me your bias. 1) You constantly referred to the developer as "he". You may not realize it, but the language you use is an indication of the prototypical picture of a developer you have in your head. 2) You are making a strong assumption based on one data point: your "experience". I am not trying to belittle it, but here's a counter. Based on my experience, I have hired developers who did great at algorithmic interviews, design, and coding. Guess what: they SUCKED as developers. They didn't understand architecture. They weren't willing to learn a new language or framework. They couldn't properly evaluate why a framework would perform poorly. They had terrible communication skills, which matter because their team didn't understand them and they couldn't make a strong case for why their point was right.

Point is, although there might be place for algorithm coding in developer interviews, I am willing to wager a large chunk of my salary that performing well in algorithm interviews doesn't necessarily correlate strongly to performing well on the job.

Why? Performing well on the job needs to be properly defined in its context first.


> I don't mean to be offensive but reading your statement alone shows me your bias. 1) You constantly referred to the developer as he. You may not realize it but the language you use is an indication of the prototypical picture of a developer you have in your head

Why on earth do you bring this bullshit sexism to this discussion? Of course I have a prototypical picture of a developer in my head. Like everyone. So what? I have interviewed developers and 99% of them have been male. I'm not arguing anything about that percentage here. Let's just assume _male_ developers here then if that makes you feel better.

> Point is, although there might be place for algorithm coding in developer interviews, I am willing to wager a large chunk of my salary that performing well in algorithm interviews doesn't necessarily correlate strongly to performing well on the job.

You don't have a point. It doesn't matter how well they communicate, or anything else, IF THEY CANNOT CODE. It also doesn't matter if they can code if they cannot communicate. I'm _not_ saying that coding skill is THE ONLY skill they must have. Get that into your head already. It's ONE of the many skills they absolutely must have. Another is the ability to express their ideas clearly. There are others, of course, but we are discussing "programming tests in interviews" here.

In addition, I don't believe interviews or any tests during interviews can accurately predict job performance. I think there are a lot of studies that support this. Even though all the tests show green light, the person can fail in the actual job. Nothing guarantees that 100%. We get false positives. But that doesn't mean interview tests are useless!


And is the signal still strong if the candidate has a bad day, or for some other reason cannot get the right focus in that specific moment?

Moreover, I think it's highly unprofessional to reinvent the wheel by writing most algorithms yourself for production. I always look first for a bulletproof library.

Won't you use a calculator for 345 x 27?


Nothing but "try applying again in 6-12 months" can fix the "candidate is having a bad day" cause of a false negative, so I don't know why you're bringing that up. Similarly, this thread is about code in the context of interviewing, not production code, so the whole side argument about when libraries are or aren't appropriate doesn't seem relevant...

As a candidate, I've been asked things like "write code to reverse this string", and I've answered "this_str[::-1] is probably what I'd use on the job. Do you really want me to write a str_reverse() function? Does it need to be in-place?" Each time the answer is yes, at least to the first question, so I do it, but I wouldn't expect it to go to production. When I'm on the interviewer side and made to ask questions like string reversal, I say "no, you don't have to do the lower-level implementation"; all I'm trying to verify is that someone can program, not that they can program arbitrary foo the way I want it done. But typically people start with the low-level function; even when I say "use whatever language you want" they still go for Java or similar.
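
(For reference, the lower-level answer they always want, as a Python sketch; on the job it really would just be this_str[::-1].)

    def str_reverse(s):
        # Two-pointer swap; strings are immutable, so work on a list.
        chars = list(s)
        i, j = 0, len(chars) - 1
        while i < j:
            chars[i], chars[j] = chars[j], chars[i]
            i, j = i + 1, j - 1
        return "".join(chars)

    assert str_reverse("interview") == "weivretni"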


Yes, I think it is fairly _reliable_. It reveals when a candidate can't really write code.

Again, the idea of the coding test in interviews is not to produce production quality code. It is testing that the candidate's brain can write some code. Try to understand that.

Is it a perfect measurement? Of course not.

Also, the idea is to select an algorithm that doesn't require encyclopedic knowledge of different algorithms. Even (any) sorting algorithms may be too complex for this purpose.


345 x 27 = 2415 + 6900 = 9315

You don't need a calculator. Use your brain and show your work.


>> I think the code people have to write in interviews is not supposed to work/scale in the real world. The purpose usually is to test whether the candidate can implement a correct algorithm

Do you check for nulls in linked list traversals? Potential stack overflows? How about array out of bounds?

If yes, why don't you check the data types too?

Do you now see why this process is fucked up? The candidate doesn't know what your definition of "OK" is.


I think you have misunderstood the purpose of these tests.

No, the process is not fucked up. There is no one right answer. It is CLEAR that the candidate cannot write perfect code in an interview. But it can still provide valuable data.

Basically, it can answer this question: "does the candidate have _any clue_ about programming?" Sometimes that has a lot of value.

Think about a candidate who cannot even write a function that calculates the length of a null-terminated string, for example. Doesn't that test immediately tell you that something is terribly wrong? I've seen these candidates.
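
That bar really is low; in Python terms it's something like this (simulating a C-style null-terminated buffer):

    def c_strlen(buf):
        # Count bytes up to, but not including, the terminating NUL.
        n = 0
        while buf[n] != 0:
            n += 1
        return n

    assert c_strlen(b"hello\x00junk") == 5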

Once you get over a certain skill threshold, every candidate passes the coding test, and after that it's useless, of course. Then it's probably best to show some code and discuss it.

NO SINGLE recruitment test can provide absolute certainty. Coding in an interview is just one variable.


I am really curious to know why there are businesses such as hired.com built around "interview" tools that rate software engineers only. What tools are available to evaluate other professions: 1) doctors, 2) engineers from other disciplines, such as chemical, mechanical, civil, and electrical engineers, 3) lawyers, 4) VPs/CEOs, 5) financial analysts?


I moved from SE to UX because I needed a change from coding 8 hours a day. That's one of the fields where I could easily transfer my skills.

The interview process usually requires a portfolio, 2 phone interviews, and one onsite interview. I had to talk about previous projects and what my problem-solving process was, followed by a case to solve. This was refreshing and way less stressful, a walk in the park compared to the whiteboard interview.

I understand the challenges that hiring managers face, but who has time to spend a day at 5-10 companies to switch jobs... A lot of companies still offer only 10-15 days of vacation a year...


The biggest thing is that there is more money in evaluating software engineers. There's higher demand than most other professions, so companies will pay more.

I have limited information about interviews for other industries, but other fields are often not evaluated in the same way. Lawyers do not have a mock trial as part of an interview, so tools for some industries would likely just be a video chat.

Also, as a minor correction this tool is Refdash.


Wild guess but maybe it’s easier to sell quantitative tools to industries that are already used to making decisions based on quantitative measurements in other parts of the business?


Don't just whine about bad programming interview practices. Suggest something better that wouldn't have more false positive hires (i.e., candidate can't program, but can pass interview), and maybe fewer false negatives (candidate can program, but gets filtered out anyway).

This is a hard problem. Kindly think about it from the perspective of a technical interviewer who really has no idea whether the candidate on the other side of the table can program a computer or not and has to make a hire/no-hire call in 45 minutes.


I have never seen a correlation between how candidates do in standard coding interviews (whiteboarding, algorithms, etc.) and how they perform 1 year after hiring.

Does anyone have any data that surfaces this information? All these companies like Triplebyte do is standardize incoming interviews, but what's the point if it has no bearing on whether or not the candidate is actually successful?


Well, the other half of the problem is that I think most companies are so disorganized that even if an engineer does great work, the company might not know it within a year.

They may look at tickets completed, "buzz" around the individual, being assigned to a project that was highly profitable, staying late a lot, or any arbitrary thing they want to.


I've tried Refdash twice as a candidate. The first time I used it, I couldn't connect with the interviewer because of issues with VoIP; the second time it went smoother, and the interviewer appeared to do a great job of providing written feedback after the interview.

On the downside, scheduling seems awkward: instead of picking a timeslot, you have to manually select your availability, and then the interview is scheduled for you. Personally, I prefer to select my own timeslots, as with interviewing.io; unfortunately, the latter, ironically, is almost always out of availability.


In 4-6 weeks I'll be interviewing again. If anyone asks me to implement an algorithm during an interview, I'm going to search for the answer and use it. I don't have time to waste, nor do I want to work for a company that wastes time, which is money. How can they afford me if they keep wasting it?


I'm assuming this is a fake interview to show off the interview feedback for your platform? If so, I think it looks great and is a huge part of what's missing from the current "standard" engineer interview. Receiving feedback like this would be incredibly helpful in figuring out what I'd need to work on to pass technical screens.

The fact that there's very little proven correlation between being able to jump through these hoops and being a good programmer is another story altogether, but I like this!


Not OP, but I work at Refdash.

This is a fake feedback report for an interview, to show what companies and candidates can expect.

Thanks for your comment. I do think it's important particularly from a feedback perspective. Since companies don't give feedback, this is one of the first things to attempt to help candidates learn about what is being evaluated and how to improve.

I'm hoping this is a first step to finding out a better way to correlate those things, for now I think that things that help candidates find jobs are a good step :)


Would an industry-approved certification be a viable replacement for on the spot technical interviews?

Somebody feel free to change my view, but I don't see why there can't be a standardized certification (it could be broken up into specific areas of tech/comp sci/programming) that proves the applicant is competent in the industry's standards, letting the interview focus on personal/soft skills and culture fit.


I think this is a good idea; it would speed up interviewing at companies, since you would be able to focus on the match. It does rely on companies actually knowing what criteria they are looking for and being upfront about it, which, unfortunately, is not common.

The real problem is that very good candidates are often not motivated to get certs, since they likely don't need them. However, with the added benefit of speeding up the interview process, I think more people would do it, and it could then become a standard rather than a negative signal.


Certs are worthless. Generally, the most terrible programmers out there are loaded down with certs.


Largely agreed about certs. But the actuarial exams aren't worthless. We may need to look to a new model.


Always embarrassing when, after deciding that academic results aren't a good enough indicator, the showcase achievement for "a better process" is a single tutorial question from an undergraduate class (or less than a thousandth of the evaluation we do of students while they study)...

Welcome to the Cult of Hiring, I guess.


Just for some context about this document, this is a fake interview with fake interview feedback.

It is meant to highlight some of the criteria which Refdash uses to evaluate candidates and how companies and candidates can expect to receive that feedback.

disclaimer: I work for Refdash


This seems to have a rather strong focus on speed.


Algorithmic complexity in general, which in the day-to-day life of a developer is rarely a consideration. I can probably count on one hand the number of times I've run into a problem that required the type of algorithm design demanded in these interviews, yet it forms the basis of hiring in many places. Crazy town.


crowd boos


This is horrible.



