As an engineer, you work with other humans first and machines second. Platforms like HackerRank seem to miss that, while better ones like interviewing.io put it front and center.
There will be plenty of opportunities to evaluate communication during the other stages of the interview.
It's lazy on the part of the employers.
What if you're talking about interviewing thousands of people, many of whom look passable on paper?
It's a search problem, not laziness.
When I change jobs, I can't schedule a bazillion one-hour phone screens on top of my work time at my current company, only for the interviewer to not show up or to realize they don't really have a position available. I'd rather take a one-hour test from home, whenever I want.
We provide engineers with free, live (and anonymous) interview practice. If you do well, you get fast-tracked at top companies.
I also agree with the lack of interaction being a problem. I've always thought about this in the context of value symmetry. If you are expecting people to put time into a coding assignment, you have to give them something useful back... especially in a market where good engineers are the ones who are scarce, not jobs.
The default test settings generate code for parsing the stdin input and create the body of the program.
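For readers who haven't seen it, the generated scaffolding typically looks something like this minimal sketch (hypothetical, not HackerRank's exact output), with only the body left for the candidate to fill in:

    import sys

    def solve(numbers):
        # Candidate fills in this body; the I/O plumbing is provided.
        return max(numbers)

    if __name__ == "__main__":
        data = sys.stdin.read().split()
        n = int(data[0])                        # first token: element count
        numbers = [int(tok) for tok in data[1:n + 1]]
        print(solve(numbers))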
That company really went out of their way to make a terrible test. That's not representative of HackerRank.
I used to write tests for one company. The tool is great and totally flexible. A test should be taken as a representation of the company writing it, not as an example of the platform.
There is a broad range of exercises, many of which are meant for coding competitions and are way too long and too difficult for a short screening test.
I'd advise you to keep the majority of the exercises dead simple. An exercise that prints the numbers from 1 to 100 will tell you a lot about a candidate. A one-hour dynamic programming exercise will go unanswered most of the time, and that doesn't tell you anything useful.
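For illustration, the dead-simple end of the spectrum is literally the whole exercise in a few lines of Python:

    # The entire exercise: print the numbers from 1 to 100, one per line.
    for i in range(1, 101):
        print(i)

Even here, whether the candidate gets the loop bounds right and how they approach such a trivial prompt is useful signal.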
> 1. they expect input and output in a necessarily contrived form
> 2. there is no communication or creativity involved
1. aka a spec
2. not the job of a screening test
This whole post assumes the premise that a screening test needs to fulfill all the requirements of a full interview process.
Let's not forget companies that send algo-heavy tests for front end roles (my personal favorite). Can't they come up with a front end version? Is it better that they don't?
I'd find it, and similar sites, less objectionable if they were used as interview "extra credit," rather than as gatekeeper screening tools.
From a hiring perspective, it cuts down the size of your candidate pool. Interviewing is expensive, so it’s better to choose from 5 people instead of 30.
From a personal perspective, I don't want to work with you if you don't know how to use a hashmap. There's a difference between filtering duplicates efficiently and validating a binary tree, so it does the subject some injustice to lump it all together as irrelevant algorithms testing.
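To make that distinction concrete, the "knows how to use a hashmap" bar is roughly this (a Python sketch; the set is the hash-based structure here):

    def dedupe(items):
        # O(n): hash-based membership checks, preserving first occurrence.
        seen = set()
        out = []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

    def dedupe_slow(items):
        # O(n^2): rescans the output list for every element.
        out = []
        for item in items:
            if item not in out:
                out.append(item)
        return out

That is a much lower, and much more job-relevant, bar than validating a binary tree.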
To me, the way your teacher described Calculus, as something that differentiates two equally qualified job candidates, makes it sound more like an Order Winner; but you then went on to say you should narrow your whole group of candidates by it. You shouldn't cut down your candidate pool by an Order Winner, but by an Order Qualifier. For example, if you're hiring web developers, they may pass an Order Qualifier if they have the necessary web development skills, while the Order Winner might be that they have good people skills. That may be backwards depending on what you need guaranteed more; if you feel they can be trained, you can certainly do it the other way around. It all depends on what you feel qualifies them and what differentiates them in the final choice.
If the process is giving equal outputs for multiple candidates, then you might need to make it more granular. There's plenty of room for subjective criteria; e.g., it shouldn't be hard to have a few ranks on a "seems nice to work with" metric, and so it shouldn't be hard to have a process where equal outputs are unlikely.
On the other hand, there's the concept of a journeyman in the craft world: anyone at that level of skill is basically interchangeable for the usual jobs. So you might, by design, have a test that just outputs "at least journeyman level" or not, and hire the first candidate who passes.
Well, I suppose if we only look at first-year college grads, there are a lot of duplicates.
Front-end devs don't need to understand algorithms?
I'm sure database admins would be less than thrilled to have to duplicate a mock in CSS. You wouldn't judge an electrician on their carpentry. It's a tired subject that a lot of companies still seem to have trouble grasping.
People will tell you all sorts of things.
It's objectionable because it doesn't test the skill set that would be used on the job.
Maybe they just say that because that's easier than investing the time to develop a good understanding of basic algorithms?
Don't get me wrong... I'm all about specialization of labor and all that, and I'm not saying every programmer in the world needs to be able to write a provably correct implementation of a PriorityQueue from scratch, at the drop of a hat. But I would argue that all developers, front-end or otherwise, benefit from algorithmic knowledge, and that it is reasonable to test front-end devs on that knowledge - to a point.
You wouldn't judge an electrician on their carpentry.
No, you wouldn't, but that analogy really doesn't fit the present discussion at all.
I think it must be a relic of the Google-style interview cargo cult approach to hiring.
Far too often, the technical expert solves a problem with the tools already in their solution space, not with what the actual problem requires. In other words, they cannot think outside the solution-space box they are stuck in.
Knowing that there are all sorts of algorithms for all sorts of variations of a problem, and knowing where to find the information to implement a solution, is a lot better than being able to reel off one or two algorithms while not knowing that other solutions exist.
Over the decades, I have become involved with various projects that were created by "programming gurus". They knew the systems involved and built solutions that showed off their "guru-bility". The problem: the systems were a nightmare to change. They used "industry standard" practices and supplied solutions that forced people to change how they did their business, without actually considering the business at hand.
One such system was built on dynamically generated SQL. Had the original "guru" thought more carefully about the problem at hand, no dynamically generated SQL would have been needed, and making changes to the system would not have been difficult. In that particular case, I was there doing "after the fact" technical and functional specs. As part of my documentation package, I included a specification for rebuilding the entire system so that it would be easy to maintain and change, and so that each system run would take (on my estimates) about 5% or less of the then-current runtime of 25 hours.
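To illustrate the difference for readers (a Python/sqlite3 sketch with a hypothetical orders table, not the actual system):

    import sqlite3

    conn = sqlite3.connect("example.db")

    def orders_dynamic(status, region):
        # Dynamically generated SQL: the query text is assembled at runtime.
        # Hard to grep for, hard to tune, and an injection risk.
        query = ("SELECT id FROM orders WHERE status = '%s' AND region = '%s'"
                 % (status, region))
        return conn.execute(query).fetchall()

    def orders_static(status, region):
        # Static, parameterized SQL: one fixed statement, values bound separately.
        query = "SELECT id FROM orders WHERE status = ? AND region = ?"
        return conn.execute(query, (status, region)).fetchall()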
Getting everything back on track generally required rewrites (sometimes complete rewrites) before these systems could be extended.
It is our responsibility to enhance the end-user experience, not to make life easy for ourselves.
I would say there is if the job doesn't involve any algorithms. You are testing for the wrong skills.
Seriously, submit your current workforce to this smartass test, under the condition that they need to pass it to come in to the office tomorrow.
My current workforce had to pass tests much deeper than that. Using database queries, graphs and regex is part of the day job.
Not in this area/location, and not with this kind of company. I know of cases where a score of 67% was too low (two tasks at 100%, one task at 0% that was only a couple of lines short of a 100% solution). It's basically a partially automated filter, with non-technical people sending rejection emails based on scores in a list without digging into the solutions. I gave up on it not out of arrogance, but based on experience.
Regarding the idea that "they expect input and output in a necessarily contrived form", they are giving you a problem to solve. You could say any expectation around input and output is contrived. As a professional, you will not always be able to control this, and need to be able to work against the constraints that are imposed on you.
Regarding the idea that "there is no communication or creativity involved", this is just flat out wrong. You might have had a bad experience due to the way the exercises were set up by the interviewers, but that's not because of HackerRank. What I did is limit the allowed languages and progress people from initial code puzzle screenings to pair programming sessions, where the communication and collaboration evaluation comes in. In every case (puzzles included), there is plenty of room for creativity. In fact, in my own hiring, I have seen a very wide range of different solutions—some very interesting and creative!
I found your choice of Bash pretty odd. I'm not sure I'd allow people to write their solutions in Bash, because a lot of it comes down to fiddling with things like the sed expression you described, which I don't care about. If, as an employer, I want to evaluate your problem-solving skills, I'd prefer languages that abstract as much as possible away and force you to focus on the problem as a whole. If, as an applicant, I wanted to highlight my ability to solve problems with code, I would never in a million years choose Bash.
It seems like the way the company you applied to set up the HackerRank platform, mismanaged their communication with you, and selected and configured the questions, combined with your choice of Bash and your own personality idiosyncrasies (like getting stressed about the countdown instead of staying calm and focusing on the problem), came together to create a perfect storm for you. In my own experience, though, I've really enjoyed doing their puzzles. I've also had great success using their platform for my own hires and have found that it does a great job of highlighting exceptionally skillful engineers and reducing bias in the hiring process. Our team avoided several very bad hires who looked great on paper with the help of the HackerRank platform. So, while it's easy to blame HackerRank, that's not where the real problem is, IMO.
The thing with HackerRank is that a LOT comes down to the hiring manager. If s/he doesn't do a good job managing the candidate pipeline, communicating, setting up the puzzles, running the pair programming sessions, running interviews, etc., then people are going to have experiences like yours. No magical platform will compensate for a bad hiring process.
> an online judge necessarily imposes more rigid constraints because of the automated verification
The platform is evolving to be less rigid as well. For example, we have a product (in closed beta) that assesses candidates on complete projects (e.g., build a Node.js app that implements a given API and behavior), with testing and scoring based on unit test cases.
This way, you are testing more real-world use cases and also allowing flexibility beyond straight I/O.
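As a rough sketch of what unit-test-based scoring can look like (Python here for brevity rather than Node.js, and apply_discount is a made-up project function, not part of the product):

    import unittest

    def apply_discount(price, percent):
        """Hypothetical candidate submission: one small piece of the project."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        # Each passing test case contributes to the candidate's score.
        def test_basic(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_zero_percent(self):
            self.assertEqual(apply_discount(40.0, 0), 40.0)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(10.0, 150)

    if __name__ == "__main__":
        unittest.main()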