The Problem with Using HackerRank as a Programmer Screening Tool (dreynaud.fail)
48 points by drdrey on Dec 19, 2017 | 51 comments



This does a nice job of explaining why it's not a good solution. For the company, it does not measure any kind of communication or collaboration ability (which is so important for programming). For the candidate, it does not give her/him any more information about people at the company or provide a positive, interesting experience.

As an engineer, you work with other humans first and machines second. Platforms like HackerRank seem to miss that, while better ones like interviewing.io put it front and center.


A code screening doesn't need to measure communication. It only needs to measure basic coding ability. You can either write fizzbuzz in a reasonable amount of time or you can't.
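
For reference, the bar being described here is genuinely low. A minimal FizzBuzz sketch in Python (any language would do):

    # Multiples of 3 print "Fizz", multiples of 5 print "Buzz",
    # multiples of both print "FizzBuzz", everything else prints itself.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)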

There will be plenty of opportunities to evaluate communication during the other stages of the interview.


Fuck it, don't bother with any job that uses these things.

It's laziness on the part of the employers.


I'd agree with you, but you have to interview probably 400-500 people, all with passable resumes, just to find basic competence (let alone the skills you might really need).

What if you're talking about interviewing thousands of people, a number of which are passable on paper?

It's a search problem, not laziness.


It's not lazy and it's great for employees too.

When I change jobs, I can't schedule bazillions of 1h phone screenings on top of my work time at my current company, only for the interviewer to not show up or to realize they don't really have a position available. I'd rather take a one-hour test from home, whenever I want.


Hi, OP here. Good point about the lack of interaction with people at the company, I did not consider that. I'm not familiar with interviewing.io, what's the hook?


Hey founder of interviewing.io here (saw we were mentioned and figured I'd jump in).

We provide engineers with free, live (and anonymous) interview practice. If you do well, you get fast-tracked at top companies.

I also agree with the lack of interaction being a problem. I've always thought about this in the context of value symmetry. If you are expecting people to put time into a coding assignment, you have to give them something useful back... especially in a market where good engineers are the ones who are scarce, not jobs.


It's amazing how people from different companies just come together online to discuss issues and how to solve them. One of the things I love about the Internet.


I don't recall HackerRank allowing reads and writes to the filesystem. I've never seen any stock test do that.

The default test settings generate code for parsing the stdin input and create the body of the program.
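
As a rough illustration (a hypothetical stub, not HackerRank's actual generated code), the scaffolding for a "sum N integers" task might look like this in Python:

    import sys

    def solve(numbers):
        # The candidate fills in the body; the harness handles the IO.
        return sum(numbers)

    if __name__ == "__main__":
        data = sys.stdin.read().split()
        n = int(data[0])  # first token: how many values follow
        numbers = [int(tok) for tok in data[1:n + 1]]
        print(solve(numbers))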

That company really went out of their way to make a terrible test. That's not representative of HackerRank.

I used to write tests for one company. The tool is great and totally flexible. A test should be taken as a representation of the company writing it, not as an example of the platform.


OP here, the test was definitely in HackerRank's library, not something custom. I went through the exercise as we were evaluating HackerRank internally, not as a candidate (important context I should probably have mentioned in the post).


That changes everything. It's not a real test at all.

There is a broad range of exercises, a lot of which are meant for coding competitions and are way too long and too difficult for a short screening test.

I'd advise you to keep the majority of the exercises dead simple. An exercise to print numbers from 1 to 100 will tell you a lot about the candidate. A 1h dynamic programming exercise will stay unanswered most of the time, and that doesn't tell you anything useful.


A HackerRank screening test is exactly that: an initial screening test. It tests basic aptitude and the ability to implement a simple program to spec. It's not a substitute for face-to-face pairing interviews and projects.

> 1. they expect input and output in a necessarily contrived form

> 2. there is no communication or creativity involved

1. aka a spec 2. not the job of a screening test

This whole post assumes the premise that a screening test needs to fulfill all the requirements of a full interview process.


As a hiring manager, that's just not a useful "initial screening test" for me.


Nailed it.

Let's not forget companies that send algo-heavy tests for front end roles (my personal favorite). Can't they come up with a front end version? Is it better that they don't?

I'd find it, and similar sites, less objectionable if they were used as interview "extra credit," rather than as gatekeeper screening tools.


I had a math teacher in middle school who put it pretty well: if you have two candidates that are equally qualified, but one knows calculus and the other doesn’t, you may as well go with the one that knows calculus.

From a hiring perspective, it cuts down the size of your candidate pool. Interviewing is expensive, so it’s better to choose from 5 people instead of 30.

From a personal perspective, I don’t want to work with you if you don’t know how to use a hashmap. There’s a difference between filtering duplicates efficiently and validating a binary tree, so it does the subject some injustice to lump it all together as irrelevant algorithms testing.
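
To make that concrete, the hashmap-level skill in question is roughly this (a short Python sketch of order-preserving duplicate filtering):

    def dedupe(items):
        # A set gives O(1) average-case membership checks, so the whole
        # pass is O(n) instead of the O(n^2) nested-loop alternative.
        seen = set()
        out = []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

    print(dedupe([3, 1, 3, 2, 1]))  # [3, 1, 2]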


In operations management there are two terms that come to mind for what you are describing. They refer to consumers’ behavior in deciding whether to buy a product. The first is ‘Order Qualifier’: the minimum feature set (of goods or services) a product needs before a consumer will consider purchasing it. The second is ‘Order Winner’, which differentiates one product from another once both have qualified, i.e. met the minimum requirements. You could apply the general principle to recruiting.

To me, the way your teacher described calculus as something that differentiates two equally qualified job candidates makes it sound like an Order Winner, but you then went on to say you should narrow your whole pool of candidates by it. You shouldn’t cut down your pool by an Order Winner, but by an Order Qualifier. As an example, if you’re hiring web developers, they may pass the Order Qualifier by having the necessary web development skills, and the Order Winner might be that they have good people skills. That may be backwards depending on what you need guaranteed more; if you feel they can be trained, you can certainly do it the other way around. It all depends on what you feel qualifies them and what differentiates them in the final choice.


I'm very careful about inferring information I didn't actually test for. If the output of my process ranks two people the same, I can't do anything but flip a coin for one and recommend hiring both. I can't go back to their resumes and see "ahh yes, this person knows calculus, might as well go with them," because the other person might know calculus too but didn't think it relevant to put on the resume.

If the process is giving equal outputs to multiple candidates, then you might need to fix it to be more granular. There's plenty of room for subjective criteria (e.g. a few ranks on a "seems nice to work with" metric), so it shouldn't be hard to have a process where equal outputs are unlikely.

On the other hand there's the concept of a journeyman in the craft world, anyone with that level of skill is basically interchangeable for the usual jobs. So you might by design have a test that just outputs "at least journeyman level" or not, and you can just hire the first one you find.


Some people are different and will differentiate candidates by sometimes seemingly unrelated things. I remember hearing a story in my Engineering Communications class that someone was selected for a job over an equally qualified candidate for putting down that they liked to play chess as a hobby. Apparently the person hiring liked chess and thought it stimulated the mind, and I believe they also thought it gave a more human feeling to the candidate.


It's impossible to rank two people the same. You will never encounter two people with the same profile, the same experiences in the same domains, who both want to do the same thing next.

Well, I suppose if we only look at first-year college grads, there are a lot of duplicates.


I honestly think this is just going with the easy solution. Doing interviews correctly is hard and time consuming, so the easiest thing is just to do what everybody else is doing, and that's some form of algorithm problem on a whiteboard.


> Let's not forget companies that send algo-heavy tests for front end roles (my personal favorite). Can't they come up with a front end version?

Front-end devs don't need to understand algorithms?


Plenty will tell you: not very often, day to day. The algo part isn't objectionable because it's algo or a test. It's objectionable because it doesn't test the skill set that would be used on the job.

I'm sure database admins would be less than thrilled to have to duplicate a mock in CSS. You wouldn't judge an electrician on their carpentry. It's a tired subject that a lot of companies still seem to have trouble grasping.


> Plenty will tell you: not very often, day to day.

People will tell you all sorts of things.

> It's objectionable because it doesn't test the skill set that would be used on the job.

Maybe they just say that because that's easier than investing the time to develop a good understanding of basic algorithms?

Don't get me wrong... I'm all about specialization of labor and all that, and I'm not saying every programmer in the world needs to be able to write a provably correct implementation of a PriorityQueue from scratch, at the drop of a hat. But I would argue that all developers, front-end or otherwise, benefit from algorithmic knowledge, and that it is reasonable to test front-end devs on that knowledge - to a point.

And given how much more complex "web apps" are becoming, with pretty complex SPAs delivering lots of complex logic that executes on the front end, I think that's become even more true over the past few years. It's not like "front end developer" means "person who does CSS styling and can add a few neat effects to a page using JavaScript".

> You wouldn't judge an electrician on their carpentry.

No you wouldn't, but that analogy really doesn't fit the present discussion at all.


I think we're missing a key piece of information. I'm referring to companies that want to hire for a specialized role, do no testing germane to the role, but do test competency for the job solely via a skill set that is tangential to the job itself. A lot of places do this.

I think it must be a relic of the Google-style interview cargo cult approach to hiring.


I'm sorry, but I don't care what you do—if you work in tech, you need to be able to demonstrate an ability to think algorithmically in order to solve problems with a computer. It's a) not acceptable to get by in tech without knowing anything about computers and b) not rocket science—anyone can learn!


What you're saying is true -- but at the same time it is a bit awkward that there are things you need to work on for interviews only. For instance, I have never had to use linked lists in an actual dev job (~6y experience), but they come up so often in interviews that you just have to practice things like loop detection.
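
For anyone who hasn't had to practice it yet: loop detection is usually answered with Floyd's tortoise-and-hare. A minimal Python sketch:

    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None

    def has_cycle(head):
        # Advance one pointer by 1 node and another by 2; if the list
        # loops, the fast pointer eventually meets the slow one.
        slow = fast = head
        while fast is not None and fast.next is not None:
            slow = slow.next
            fast = fast.next.next
            if slow is fast:
                return True
        return False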


It's a little awkward, but not so bad if it's within basics, like linked lists - which one may be rusty on but will generally know or re-derive - rather than really specific algorithms. But that's just a starting point, then it's about the interview and the communication and the interaction - exactly like the examples above of talking to real users or colleagues.


I have to ask. What roles were you in, and what languages did you develop in?


I was working on LLVM (C++ compiler) for 3 years and Java server backends the other 3 years. Those were the main products, with a side of Python, Groovy, Bash and JavaScript. In all fairness, I suspect LLVM uses linked lists internally for things like instructions in a basic block, but that was a level of abstraction under me.


Me?


The poster who said he never had to use a linked list in 6 years.


Ah, I see. I guess it all depends on the meaning of "use".


This is less useful than you think. What you need is the ability to look at a problem and solve it. Understanding the problem is a lot harder than most people (technical experts) realise.

Far too often, the technical expert solves a problem on the basis of their current solution space tools and not what the actual problem requires. In other words, they cannot think outside of the solution space box they are stuck in.

Knowing that there are all sorts of algorithms available for all sorts of variations of problems and knowing where you can get the information to implement a solution to your problem is a lot better than being able to reel off one or two algorithms and not know that there are other solutions.

Over the decades, I have become involved with various projects that were created by "programming gurus". They knew the systems involved and made solutions that showed off their "guru-bility". The problem: the systems were a nightmare to make changes to. They used "industry standard" practice and supplied a solution that forced people to change how they did their business, without actually considering the business at hand.

One such system was built using dynamically generated SQL. Had the original "guru" actually thought more carefully about the problem at hand, no dynamically generated SQL would have been needed and making changes to the system would not have been difficult. In that particular case, I was there doing "after the fact" technical and functional specs. As part of my package of documentation, I included a specification for rebuilding the entire system so that it would be very easy to maintain and change, and the runtime for each system run would have been (on my estimates) reduced to about 5% or less of the then-current runtime of 25 hours.
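
To illustrate the kind of difference I mean (a contrived sketch, not the actual system, with sqlite3 standing in for the real database):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (region TEXT, total REAL)")
    region = "EMEA"

    # Dynamically generated SQL: the statement text is rebuilt per run,
    # so there is no fixed query to read, grep for, or plan-cache.
    dynamic = "SELECT SUM(total) FROM orders WHERE region = '%s'" % region
    conn.execute(dynamic)

    # Static, parameterized SQL: one fixed statement, values bound separately.
    static = "SELECT SUM(total) FROM orders WHERE region = ?"
    conn.execute(static, (region,))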

To get everything back on track generally required rewrites (sometimes complete rewrites) to be able to extend these systems.

It is our responsibility to enhance the end-user experience not make life easy for ourselves.



> Special kudos to the guys who are able to solve a problem -with a perfectly optimized solution- in less time than it would take to actually read it =D

oh my


The problem is that it is completely unrelated to the job in most cases. That may put off some decent candidates.


Right, as I said above, it doesn't give you any sense of what the engineers at the company are like. Nothing wrong with algorithmic coding interviews, but they should be coupled with human interaction.


Wrong - it does. Such a process indicates that the engineers employed there didn't mind going through it and don't see a problem in imposing it on others, i.e. people with volatile visa/legal situations (e.g. H1B in the US, Americans in the EU, Russians/Ukrainians in the EU), who are probably awfully underpaid (I mean the 40-50k USD salary range in a developed EU country). Basically, the company is cherry-picking engineers willing to engage in a race to the bottom.


"Nothing wrong with algorithmic coding interviews"

I would say there is if the job doesn't involve any algorithms. You are testing for the wrong skills.


Got this HackerRank link from the company I applied to. Five questions from completely different domains (database query, graph algorithm, construct a regular expression, etc.), 120 minutes. Submitted blank solutions. You need cheap, compliant robots, not humans - stop looking for them.


I would contend that HackerRank has done a good job of screening and filtering on this occasion. Given clear instruction, the only skill demonstrated was insubordination.


Are they looking for a problem solver or an executor? I looked them up on LinkedIn: the manager has minimal online presence (the opposite of what is expected from me as a candidate) and many years at the same company; it's safe to assume the main motivation for staying is paying off a mortgage and an inability to find something else. The hiring person is a working student or intern. Why should I submit, or even trust that their stack/technology decisions will be optimal or even competent? I'm judged on the most detailed nuance of every decision in my entire curriculum, which I'm required to openly declare. Why shouldn't I judge them?


I would add that the topics given are fundamental to development. Refusing to do anything demonstrated that he doesn't want to, or can't, do any work for the position he applied to.


The topics, yes. But are on-demand, sprint-like quizzes on these topics, with context switching between all of them in a short timespan, really the requirements for the position? If so, the compensation must be correspondingly outstanding, right? In most cases, no, it isn't.

Seriously, submit your current workforce to this smartass test on the condition that they need to pass it to come into the office tomorrow.


Don't be so quick to judge. Answering any question might have been enough to pass the screen, but you will never know that because you didn't try.

My current workforce had to pass tests much deeper than that. Using database queries, graphs and regex is part of the day job.


> Answering any question might have been enough to pass the screen

Not in this area/location, and not with this kind of company. I know of cases where a score of 67% was too low (2 tasks at 100%, 1 task at 0% that was missing only a couple lines of code from a 100% solution). It's basically a partially automated filter, with non-technical people sending rejection emails based on the scores in a list, without digging into the solutions. I gave up on it not out of arrogance, but based on experience.


I feel like you took the specifics around how your evaluation was done and your choices regarding how you solved the problem and conflated those with the entire HackerRank platform, which seems at best disingenuous and at worst intellectually irresponsible.

Regarding the idea that "they expect input and output in a necessarily contrived form", they are giving you a problem to solve. You could say any expectation around input and output is contrived. As a professional, you will not always be able to control this, and need to be able to work against the constraints that are imposed on you.

Regarding the idea that "there is no communication or creativity involved", this is just flat out wrong. You might have had a bad experience due to the way the exercises were set up by the interviewers, but that's not because of HackerRank. What I did is limit the allowed languages and progress people from initial code puzzle screenings to pair programming sessions, where the communication and collaboration evaluation comes in. In every case (puzzles included), there is plenty of room for creativity. In fact, in my own hiring, I have seen a very wide range of different solutions—some very interesting and creative!

I found your choice of Bash to be pretty odd. I'm not sure I'd allow people to write their solutions in Bash, because I realize that a lot of it comes down to fiddling with things like the sed expression you described, which I don't care about. If, as an employer, I want to evaluate your problem solving skills, I'd prefer languages that abstract as much as possible away and force you to focus on the problem as a whole. If, as an applicant, I wanted to highlight my ability to solve problems with code, I would never in a million years choose Bash.

It seems like the way the company you applied to set up the HackerRank platform, mismanaged their communication with you, and selected and configured the questions combined with your choice of Bash, as well as your own personality idiosyncrasies (like getting stressed about the countdown instead of focusing on the problem & remaining calm) came together to create a perfect storm for you, but in my own experience, I've really enjoyed doing their puzzles. I've also had great success using their platform for my own hires and have found that the platform does a great job of highlighting exceptionally skillful engineers and reducing bias in the hiring process. Our team avoided several very bad hires who looked great on paper with the help of the HackerRank platform. So, I think that while it's easy to blame HackerRank, that's not where the real problem is IMO.

The thing with HackerRank is that a LOT comes down to the hiring manager. If s/he doesn't do a good job managing the candidate pipeline, communicating, setting up the puzzles, doing the pair programming sessions, running interviews, etc., then people are going to have experiences like yours. There's no magical platform that will solve for a bad hiring process.


Thanks for the detailed feedback! Here is what I mean by "necessarily contrived IO": in a more realistic setting, we might have to answer a question like "what host returns the most 500s?" coming in an email or on a support channel. Or one I actually had to answer a couple weeks ago: "what are our most frequently firing alerts?" That is all of the problem statement right there, not a detailed description of the formatting of the input and the output. Just a user with a question, so you have some work to do to understand what is being asked and to find the most time-effective way to answer it. I might have follow-up questions like "over what time period? For a particular service or all services of team X? Just in prod or in all environments?", etc. The user also didn't ask for a specific format; I just need to get an intelligible answer with reasonable effort. So based on the context, maybe Bash makes sense to answer a one-off question rapidly. In the context of a customer-facing app, maybe introducing some abstraction makes sense (again, context). This is what I would consider a more realistic "on the job" setting: you have to understand users, make trade-offs, and work within constraints, and it's all a little bit fuzzy. In contrast, an online judge necessarily imposes more rigid constraints because of the automated verification -- I don't really see a way around it.
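
For example, a quick one-off answer to the 500s question might be a few lines over the access log. Everything here is an assumption the user never stated: a combined-format log named access.log, with the host in field 1 and the status code in field 9:

    from collections import Counter

    counts = Counter()
    with open("access.log") as log:
        for line in log:
            fields = line.split()
            if len(fields) > 8 and fields[8] == "500":
                counts[fields[0]] += 1

    # Host with the most 500 responses, with its count.
    print(counts.most_common(1))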


(I work at HackerRank)

> an online judge necessarily imposes more rigid constraints because of the automated verification

The platform is evolving to not be as rigid anymore. For example, we have a product (in a closed beta) which assesses candidates on complete projects (e.g. build a Node.js app that implements a given API and behavior), with testing and scoring based on unit test cases.

This way, you are testing more real-world use cases and also allowing flexibility beyond straight IO.


That's still just as rigid; it's just increasing the scope of the deliverable so that a smaller portion of it has to be dedicated to complying with your rigidity.


Interesting, thanks for jumping in! I'd be happy to revise my judgment when I see the new product in action.



