Why HackerRank and other coding tests are ageist (efinancialcareers.com)
61 points by forrestbrazeal on Sept 6, 2017 | 94 comments



<<Most damningly, he says the questions posed by HackerRank and similar tests aren’t necessarily relevant to the work actually done in finance technology roles: “They’re the sort of problems you’re posed at university. – There’s a focus on data structures and algorithms, but this is less relevant when your success as a developer is in building complex enterprise systems.”>>

Exactly on point. Also, one gripe I have with HackerRank (I did some of the challenges to brush up on some language knowledge) is that a lot of the questions are poorly designed and obviously written by amateurs.

I still think that neither whiteboarding nor HackerRank selects for good developers. In the end you get people who studied hard to solve a certain class of problem that you'll never need in the real world. It wastes candidates' time, and it still shows nothing, except that your candidates know something unrelated to the job.


> similar tests aren’t necessarily relevant to the work actually done in finance technology roles

IMO, what's relevant for a role is the skill set most people currently doing that role have, and that's not always a good thing.

I've been the first shouting from the rooftops that we shouldn't ask people to know stupid little algorithm trivia to get a job. But the narrative for that quickly devolved to "data structures and algorithms are not important for day to day work". To which I honestly say bullshit. The industry is managing without it: between people who forgot everything from school, bootcamps, the self-taught, and people with alternative degrees, people who can do actual computer science on the job are a tiny, tiny minority. So the solutions those people come up with to day to day problems are considered a speciality (eg: "machine learning!", "data science!").

But if doing that shit was just second nature to everyone, how would it change the way we do things? What if category theory wasn't so scary? What if people weren't scared of threads? What if hashmaps were not magical black boxes people use because the person who wrote the code before them did?

The foundation is becoming pretty darn weak if you ask me.

I totally agree shit like HackerRank is poorly done. But I'm not sold that the skillset it tries to assess (poorly!) is that useless. We, as a community and an industry, essentially did everything we could to MAKE IT useless. That's different I think.


I've thought about this, I've thought about it a lot, and I still do. Thing is, this attitude can quickly send you tumbling down the rabbit hole.

So say I'm a Java developer. But surely in order to be a good Java developer I should know the innards of the JVM, right? To do that I now also have to know C++, because that's what the JVM is implemented in or whatever. But surely I can't know C++ well if I don't know C, right? But then I can't really grasp C well enough if I don't understand how assembly works, right? What about CPU instructions, different architectures, etc?

My opinion is that nobody should even try to drink what seems to be an ever expanding ocean of knowledge in IT. Let the people (in academia or wherever) who work on algorithms work on algorithms, let the people who write frameworks continue writing frameworks, and let the rest of us code monkeys use all those digested tools and technologies to do what we have to do on a daily basis. You don't ask a common plumber to describe Bernoulli's principle during an interview, do you?


Scott Hanselman has said a few times that two layers of abstractions is generally sufficient. I'm not sure if he stole it from someone but using your example, that means to be a great Java developer, you should have a good understanding of the JVM, and an adequate understanding of C++. Do you need C++ to be a good Java developer? No. If you've got a decade of experience in both, are you a better Java developer than the guy who learned it in school and that's all he knows? Probably, but not necessarily.

In addition, I think the argument falls apart the further down the stack you get. Granted I am no C expert, but I don't think you need to know any assembly to be a very good C developer.


When you're writing C you are usually thinking about how the compiler will translate it to assembly. A lot of C's features don't really make sense without understanding the underlying machine.

You can get away without knowing assembly, but you might as well just use a higher level language at that point.


I consider it from a problem-solving perspective: I don't need to know the intricate details of dozens of sorting algorithms or data structures. What I need to be able to do is solve the problems that get thrown at me in daily work. So if I get assigned, say, a performance problem I might, after some research, come up with the idea to use a different data structure to solve it. Or something else entirely! It depends on the problem and where my research leads me.

As a software developer I see myself mainly as someone who solves problems to help others do their work better. So, while I agree that having a strong CS background is definitely a benefit, I don't think it matters that much if you can't get binary trees just right after not needing to implement them for years.
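
To make that concrete, here's a toy sketch of the kind of fix I mean (hypothetical names, Python for brevity): swapping a list for a set when membership checks become the bottleneck.

    # Checking membership in a list is O(n) per lookup; a set is O(1) on average.
    def find_known_ids_slow(events, known_ids_list):
        # O(len(events) * len(known_ids_list)) -- fine for 100 items, painful for 100,000
        return [e for e in events if e in known_ids_list]

    def find_known_ids_fast(events, known_ids_list):
        known = set(known_ids_list)                  # build the set once: O(n)
        return [e for e in events if e in known]     # O(1) average per lookup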


> So if I get assigned, say, a performance problem I might, after some research, come up with the idea to use a different data structure to solve it.

Yup. And my feeling on that is that (assuming the perf issue is related to a core data structure), that -particular- perf issue shouldn't even have happened in the first place. It should have been obvious given the right background, and you could have spent that time doing something less fundamental.


Things evolve. The scale of problems evolves. A data structure that was handling hundreds of transactions crumbles at hundreds of thousands. Premature optimizations (and premature abstractions) are the root of all evil.


No argument there, but that's not what I'm arguing.

Do you have to reach for Google or a book to do a simple if/else statement? What about a for loop? Now what about the difference between a map and a set? Likely not; these are the basics used in any app.
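
For the record, the map/set distinction really is a couple of lines (Python here, but it's the same idea in any language):

    seen = set()    # a set stores unique keys only: "have I seen this before?"
    counts = {}     # a map/dict stores key -> value: "how many times, and what else?"

    for word in ["a", "b", "a"]:
        seen.add(word)
        counts[word] = counts.get(word, 0) + 1

    # seen == {"a", "b"}; counts == {"a": 2, "b": 1}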

Talking about premature optimization here implies there's some work to be done, things to think through, stuff that's not obvious. And that's totally true at a certain complexity level.

But what is "complex" and what is "obvious" isn't objective. It's purely a factor of what the "average software developer" knows.

What I'm arguing is that while at one point the pendulum was swinging too far one way, it's now too far in the other, and it's affecting software quality industry wide.


Corollary: a data structure that is instantiated a million times in small sizes is different from one huge instance with a million entries.


Testing the foundation is great, but that is not at all what the current interviewing system does. It hides the foundation in trick questions that require a moment of insight, a light bulb firing off, before your foundation is even useful for solving them.

For example, why can't they just ask: OK, given an array, find all the pairs in the array. Great, now what's the big O of that? Great...

Instead they ask some trick question that disguises the fact that you should solve this problem by finding all pairs etc.
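
Something as plain as this, a sketch of the "all pairs" question exactly as stated:

    def all_pairs(arr):
        # Every unordered pair of elements: nested loop, so O(n^2) pairs for n elements.
        pairs = []
        for i in range(len(arr)):
            for j in range(i + 1, len(arr)):
                pairs.append((arr[i], arr[j]))
        return pairs

    # all_pairs([1, 2, 3]) -> [(1, 2), (1, 3), (2, 3)], and the big O is O(n^2).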


I totally agree. My point wasn't that the interviewing system was great. Just that the problem with it isn't what it's trying to test, but how.

I have issues with people who dismiss the problem space entirely. "I don't need to know CS fundamentals to do my job". To which I say: you do. You're just doing a subpar job because -everyone else- is and that's the standard even though it's suboptimal.

(Note I don't have a CS background myself, and thought this for a long time, until I ended up working in CS-heavy environments and saw how "hard problems" were trivialized, so we could spend time solving "real" problems instead.)


Then the issue isn't with the tests necessarily, it's with employers using a test of Skillset A to screen candidates for a job that is 80% Skillset B and 20% Skillset C. That doesn't make the tests ageist (which I suspect was included mostly as click bait), it just makes the hiring managers at these companies bad at their jobs.

Raise your hand if hiring managers being bad at screening technical candidates for highly technical roles is surprising to you.


Agree, but what is the alternative from the standpoint of the interviewer? These types of challenges are simple to state, self-contained, easy to timebox.

For the interviewer for, say, an intermediate-level enterprise software backend position who needs to quickly sift through 50 applicants, what is the better option by way of code challenges?


> For the interviewer for, say, an intermediate-level enterprise software backend position who needs to quickly sift through 50 applicants, what is the better option by way of code challenges?

Code challenges are a dead end. You can have someone who passes every code challenge imaginable, but is absolutely awful to work with. Software development is more about communication and problem solving than anything else, so focus on those things first. Technical challenges are easier to overcome than personal ones! Here's my strategy:

- Resume smell test to filter out 90% of the applicants (spelling errors, run-on sentences, poor formatting, poor experience, etc. If the applicant doesn't care or know enough to fix these on their resume, chances are they will do the same to your code base)

- 20-minute in-person or on-camera interview smell test to filter out another 5% (punctuality, check for communication and personality problems, smoke test of technical knowledge)

- 1 hour+ in person interview to pick your candidate (deep dives into past projects, talk through or whiteboard a common problem in your industry, meet'n'greet with other team members)

- 3 month probation period to fully vet the new candidate and build trust.

There's no way that, out of 50 applicants, all of them are going to be equal; don't treat them that way. You need to fast-track the ones that are very promising and get them to meet with the team asap.


Take some Real World™ problems that are applicable to the position and create your own. I don't mean come up with something completely from scratch; literally comb through your code base to see what kinds of design patterns and algorithms you're using. Then generalize it.

It could be as simple as: Given a list L of accounts, return only accounts from L that match key K. That's programming 101 type stuff that is applicable to the job.
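
A minimal sketch of that question, assuming the accounts are plain records keyed by a hypothetical account_id field:

    def accounts_matching_key(accounts, key, field="account_id"):
        # Return only the accounts from the list whose chosen field matches the key.
        return [acct for acct in accounts if acct.get(field) == key]

    accounts = [{"account_id": 1, "owner": "A"}, {"account_id": 2, "owner": "B"}]
    assert accounts_matching_key(accounts, 2) == [{"account_id": 2, "owner": "B"}]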

Or it could be more abstract and open ended: given some user-submitted form data, what steps would you take to move the data from the browser's front end to the database? Assume the database has been created. Tailor the question to the technologies used on the job (Spring, .NET). This type of question gives you a whole host of information about the candidate, such as their overall knowledge of the framework, their areas of expertise, and potential weaknesses, as well as the ability to nudge the interviewee in the correct direction if they're stuck.

Interview questions don't have to be clever. They don't need to have obscure edge cases to trip people up. I firmly believe that interviews should be about gauging a candidate's ability to be functional on the job rather than their ability to solve the riddle (how many piano tuners are in Chicago?) or do non-applicable CS (create a script/program that can output the nth row of Pascal's Triangle [ignoring int overflow]).


> For the interviewer for, say, an intermediate-level enterprise software backend position who needs to quickly sift through 50 applicants, what is the better option by way of code challenges?

Patrick McKenzie (patio11) gave a lecture about this topic:

> http://businessofsoftware.org/2016/07/hiring-at-scale-patric...

If you are short of time, here is a summary:

> https://42hire.com/hiring-at-scale-c99f0665b893


Credentialing. Seriously. Have a Bar Association for developers. When anyone can do a few weeks of Googling and claim to be a software developer, you're going to get mostly crap candidates. Offload that process to a formal body, like lawyers, doctors, and engineers do. You can hire unlicensed developers, but you do so at your own risk. Then once you're talking to someone licensed in an interview, you can focus on just getting to know them and talking about their past work.

Edit: wow, I've clearly touched a nerve. Good thing your doctor is licensed to prescribe something for it instead of a rockstar named Chad who read about MongoDB once but can basically build Uber by this point.


Part of the problem is that the discipline is very wide, and there's different mixes of skills needed:

    +-----------+
    |\ business | - excel macros
    | \     +   |
    |  \  users | - web dev
    |   \       |
    |    \      | - back end dev
    |     \     |
    |      \    | - framework dev
    |       \   |
    |        \  | - tooling
    |         \ |
    | coding   \| - OS / compiler / database dev
    +-----------+
There's no simple cutoff that can mark someone out simply as a developer. And the diagram above doesn't reflect the fact that within each slice of the coding wedge, there's a different mix of technologies, different rates of technology turnover, requiring a different mix of e.g. aptitude to pick up new technologies quickly, vs depth of experience in existing technologies, quite apart from what the specific technologies are.


Conjoined triangles of success!


This is malign and dangerous. It has been shown that credentialing hurts the marketplace and consumers. Having a credentialing authority would not cause there to be a skill bar; it would just require all of us to pay money to a worthless authority.


> it would just require all of us to pay money to a worthless authority.

Today these authorities are universities.


HackerRank-style test are one of the few ways into a good career for lower-class people who have the skills but can't afford the credentials.


It has been shown where? Please don't forget that inexperienced practitioners also hurt the marketplace and customers. Every data breach because someone kept a plaintext password field in MySQL hurts the customers. Every lost hour of productivity because of shoddy code hurts customers.


Big companies already spend a lot of time and money interviewing candidates to figure out if they should be hired. Credentialing will turn that into either a standardized test (which is known to be inferior) or simply require a degree (which will exclude too many good candidates).


Under a hypothetical licensure system, companies would be more than welcome to hire rockstar Chad instead of someone from the local CS department. They would just, for example, be more liable for damages or something. We can think of all types of licensure schemes, with all types of consequences. What makes you think software engineer credentialing will automatically work out poorly? Moreover, have you also considered its benefits? Oftentimes the world exists in shades of grey, where a system has good parts as well as bad parts. Lawyer credentialing, for example, isn't perfect, but it's better than nothing at all.


Lawyer credentialing exists because individual clients are unable to vet lawyers otherwise. Same with doctors. In both cases, the consequences of getting a quack doctor or lawyer can be disastrous... death or imprisonment, for example. General contractors can cause quite a bit of damage to your home so they often have to be licensed and bonded, electricians can cause fires so they're licensed, and bad plumbers can do plenty of damage to your house too. All of these people are likely to be hired by private individuals.

A bad programmer could run up your AWS charges, or leak your clients' credit card numbers to an attacker, or other things like that, but these are just financial losses for businesses that don't do their due diligence (nobody's dying, going to jail, or having their house burn down).

> Moreover, have you also considered its benefits?

I don't think the benefits have been adequately explained to me. Maybe I could consider them if you told me what they are.


> doctors

Because the healthcare job market is such a success! Remember those accusations against tech companies for colluding to depress wages? Now imagine the same, but with State immunity!

The anti-trust class-action lawsuit Jung v. AAMC alleged collusion to prevent American trainee doctors from negotiating for better working conditions. The working conditions of medical residents often involved 80- to 100-hour workweeks. The suit had some early success, but failed when the U.S. Congress enacted a statute exempting matching programs from federal anti-trust laws.


I never said "let's make software licensure exactly like healthcare licensure". There are parts of healthcare licensure we could also criticize, but that's not my point. If you'd like to discuss specific parts of software licensure, I'm more than willing, but don't engage in faulty reasoning.


You don't get to decide how it'll shape up; public choice and markets don't work like that. The decision to mandate licensure unleashes forces that very often lead to the kinds of problems that I described above.

Unless you have a very good reason why it won't in this case, you're just playing with fire.


You are just adding a lot of useless bureaucratic overhead. Are you sure that the problem you are trying to solve will not just be replaced with an even worse one?


No, I'm not, but that's, to be rather frank, a stupid fucking reason not to try. :)


There is powerful opposition in our line of work to this type of thing, particularly the standardized tests. I don't fully understand it because I think that certification exams, etc. have clear, if limited value.

I think it stems from an intrinsic starting point among many developers that programming is a true meritocracy and that all hiring questions can be settled with a judicious study of what the person is truly capable of (eg. the fabled personal github account filled with side projects). And that anything other than "hacking on the code" is a waste of time.

Of course this isn't how the real world works. In fact almost no one has the time, energy, or inclination to do impressive side projects. Gauging skills on the basis of resumes and very short interviews is quite hard and time-consuming. Hiring decisions are made on the basis of essentially social interactions and personal references.

I do think there is room for an informal credentialing process in principle, but there are longstanding mental blocks that will need to be skirted. There will need to be leadership from the top to make this happen, ie. the best, most-respected programmers will need to take it seriously first.


I don't know from where it stems for the majority of opponents. For me, it stems from a complete lack of faith that the system would be well designed and even relatively free from corruption, and therefore that it would not make things much worse, while not actually solving the problem it's supposed to solve.

Certification exams already exist, and employers are free to require them to avoid the "crap candidates". Other problems, like security flaws affecting the users, can be better solved by imposing real penalties on companies.


Another solution is law. Establish a formal licensure body and then slowly make it necessary for things like government contracts, or even private contracts.


For the professions this is quite simple, as there are standards: doctors, accountants, etc. are all trained to standard qualifications that are often internationally recognised.

How could you do that for software engineers? There are so many programming languages, frameworks out there that are constantly evolving and companies have different demands.

I think it would be a really hard problem to generalise.


> There are so many programming languages, frameworks out there that are constantly evolving and companies have different demands.

An idea that I have thought of is to standardize on some programming languages/frameworks/demands that are known to be supported for a very long time (15 years?). If you need something from this buffet, you are fine and have the advantages that this provides. If you have special demands, you are on your own - which is not a problem per se, but as with all special demands this simply leads to increased risk or cost.


I would love to see some sort of standardisation and rationalisation within software engineering. It's so fragmented and ego driven. Every framework is better than the last, my programming language is better than yours. Thought is rarely given to longevity. I'd much rather work with a more dated ecosystem that I know will be around for the next 20 years than the continual paradigm shifts we endure just now.

I'm exhausted and frustrated with the field as it exists just now.


Fair point, but consider that software engineering is a MUCH younger profession than, say, accounting. Consider that "double entry book-keeping" dates back to around 1340, and accounting in general is even older than that.

We've had about 60 years to figure this stuff out, and arguably the ground is always moving beneath our feet as the nature and scope of the systems we build keeps changing.

> I'm exhausted and frustrated with the field as it exists just now.

Likewise, but I'm not sure what choice we have, except to hope Ray Kurzweil is right about some of his life extension ideas. If we can survive another 300-400 years, maybe things will be better.


I'm not sure of anything, but if I had to pick one thing to bet on it would be that none of us, or our children's children['s children] will be here in 400 years.


There are many jobs like that; it's not hard to find some Java shop maintaining some CRUD software for over a decade, with no signs of switching. Hell, a colleague of mine just got a decent offer to develop in Informix 4GL.


It exists: https://ncees.org/engineering/pe/software/

No one cares though.


I would love to sit for a PE exam, except my state requires you to apply for permission, and one of the requirements is 4 years of professional experience working under another PE, after you're already approved as an Engineer-in-Training, which itself required 8 years of experience if you don't have an engineering degree. 15 people sat for the software PE exam last year nationwide compared to nearly 1,000 for some other disciplines.

With those kinds of numbers, how is anyone with the 12+ years of experience required who doesn't personally know a software PE (of which it seems there are maybe 100 in the United States, based on the age of the test and pass rates) supposed to get the certification? I totally accept that my Poli Sci degree means I'll need more professional experience than a Comp Sci or Comp Eng graduate, however the requirement to work under another PE eliminates the vast majority of potential applicants.


Hence our dilemma.


I recently had a round of interviews for a senior engineering position.

Luckily it was on hackerrank, but only as a way to store the exercise instructions with a timer.

The test itself was not terribly difficult: build a small mobile app (I am a mobile dev). It was still enough to demonstrate how you can architect an app, good design patterns, clean code, etc.

After that I had another round of interview with that same company.

I feared that a ridiculous whiteboard session was on the menu. One of the engineers reassured me when I inquired about the content of this interview: "we think that whiteboards are bullshit, we just want to discuss".

In the end I was offered the job, with a really nice compensation package. I should mention that I live on another continent and did all these interviews remotely.

Is this a general trend in the Silicon Valley area, or was I just lucky to stumble on companies that don't believe in whiteboard questions?


I get the idea that these questions are exactly the same as the ones you get from clients and incompetent bosses, where the other side expects you to fill in the missing bits and show your brilliant intuition to solve the right problem.

They are entertaining to play with (solve), though I haven't encountered any serious algo problems in my line of work in 15 years. My friend, a game coder, always does.


We faced exactly this problem at the company I currently work for. When the company was just starting, we had the great idea to use interview questions "like the ones they ask in SV", and thus used algorithm-heavy problems when interviewing candidates.

Fast forward 4 years, and I realized that the dev team we have is basically a group of just-graduated Engineers or people with at most 1 year of experience.

My "eye opener" moment was when I realized that by asking for difficult algorithms (tree and graph traversal, subsequences, etc) the following was happening: The people that know the most about how to solve these types of problems are students, or people that had graduated recently. Not only that, but this groups is a very reduced sub-group from the recent graduates group, because these people dedicate their Uni free time to solve these problems and get in ACM, IOI competitions. Thus, they don't play with different technologies, they oftentimes don't feel confortable with shell, and finally, the type of systems they develop are optimized to be "used and thrown away".


In my network, many people that I would classify as bad, some of the worst developers I have ever worked with, ended up at Google. They are masters of this type of programming problem solving but could not build a straightforward, maintainable system to save their lives.


How much algorithm work does your dev team deal with on a day-to-day basis?


I'm in my early 50's. I've been getting paid well writing code since 15, and have been employed at (arguably) the top of the industry in my fields. Nobody at my level takes tests; none of us would pass them as they deal with utter bullshit.

Work for a few decades, at the highest levels of the intersection of business and technology, and you'll find complex and/or sophisticated algorithms often get removed due to support difficulties. Sophistication occurs within the core of the product, which needs to embody cutting edge features and performance, so unless you are the product's core architect, sophisticated algorithms are taboo. The vast majority of software is user interface, import/export logic and other aspects that are not the core of a product. Those non-core features and facilities need to be nimble and simple because they are transitory, and operated by juniors.

Frankly, the world where HackerStank is valid is a type of software company that is best avoided, as they treat you 100% like a cog, and not a human.


If a person in the workplace is asked to implement quicksort or some sophisticated algorithm on linked lists or binary trees, there are only two possibilities: this is a low-level and high-importance project for NASA/Intel/Lockheed Martin, or one is being teased.


I am in my 40s, have a relatively safe job, and am not looking for a change. A friend of mine told me to apply to a certain company and I did. They asked me to free up an hour one evening and they put me through HackerRank. And I failed. I failed big time.

My takeaway: surely we all need to revise if we seriously want to apply for a job, yet it did feel like the screening was testing how good I am at solving small problems quickly.

Yet my 25 years of experience tell me that we don't need to solve problems quickly; we need to solve them rightly. And that takes "mulling over".


I'm only 27 and I suck at these tests. This year alone I've run the gamut working frontend, backend (Spring), services (clojure), elasticsearch, and a bit of devops.

I tried one last night in JavaScript after a nine-hour work day and it kicked my butt. First I was thinking solution X but eventually realized Y was simpler, so then I'd wasted a bunch of time, because the pressure is on coding quickly instead of "mulling over" as you mentioned.

It was similar with a Ruby one, which I prepared for by running through the Ruby Koans. Lots of time spent looking up syntax, and the vast majority of the time I'm not starting entire repos from scratch, so a lot of simple but not oft-repeated things can trip you up too.

And, to be quite frank, when I'm at home, and I want to program, I want to work on my over engineered, terribly underdeveloped personal website where I get to do things I don't normally do, putz around with erlang and postgres, do the devops and all that other stuff.

Ahh the afflictions of the affluent!


HackerRank tests are very effective for companies in that they capture candidates who:

- Have great problem solving ability.

- Have low emotional intelligence or self-esteem (don't realize that the kinds of companies that make you do these tests are exploitative and will treat you like livestock).

- Don't have specific ambitions and don't place much value on their own time (prepared to spend tons of time studying/practicing for the tests).


The question, though, is how the hell do we find companies that don't do these tests? It seems everyone and their mom is doing it nowadays, even immigration officials :)



Most startups and some big companies like banks don't use these kinds of tests; their process is mostly based around peer review from other engineers in the company.


What I like about these tests, is that everyone can have a shot. You can't forge a university degree you don't have, or a 5-years work experience, but you can take a couple of months and work hard to practice your algorithmic skills. As far as ageism is concerned, I prefer to have the opportunity to show my skills through technical tests rather than seeing my application ignored.


> you can take a couple of months and work hard to practice your algorithmic skills

Unless you have kids, or a demanding job, or personal obligations, or you don't want to make learning sorting algorithms (for a job where you'll never need sorting algorithms) a full-time job on top of your current full-time job.

It's just pre-selecting for a group you're in as opposed to a group you're not (the royal you :)).


"For experienced programmers like Adler, the new tests aren’t insurmountable – all that’s required is an injection of time to learn the new Hackerrank tricks."

Well, yeah. I do that too and I've been out of university for 5 years.

You have to prepare for interviews and resharpen your skills when you're changing jobs; it's the same whether you're 28 or 48.

I understand finding time is harder when you have a family and all, but it should not take much more than a few hours to get back to a decent skill level, and then you get to the interview stage where you can really stand out with your actual real world experience.


If you're a successful working programmer you shouldn't need to study to pass your next interview only to forget it all again after a couple more years on the job.

That is why these tests are bogus. If they measured useful skills they wouldn't require any study.

I've been working with a lot of graph algorithms lately. I've already forgotten several I learned a couple months ago. That is OK.

The more interesting question for interviewees is where they would find the answer rather than whether they can implement algorithm X from memory.

If a candidate says "I'd use A* and maybe compare it to similar algorithms to solve the problem" that's enough. There's no reason they need that permanently committed to memory. Almost no one writes A* twice a year.
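
For reference, this is roughly what "I'd look up A*" buys you: a short grid-based sketch (Python, Manhattan heuristic), not something anyone needs to carry around in their head.

    import heapq

    def a_star(grid, start, goal):
        """Shortest path length on a 0/1 grid (1 = wall), 4-connected, Manhattan heuristic."""
        def h(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

        open_heap = [(h(start), 0, start)]           # entries are (f = g + h, g, node)
        best_g = {start: 0}
        while open_heap:
            f, g, (r, c) = heapq.heappop(open_heap)
            if (r, c) == goal:
                return g                             # cost of the shortest path
            if g > best_g.get((r, c), float("inf")):
                continue                             # stale heap entry, skip it
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < best_g.get((nr, nc), float("inf")):
                        best_g[(nr, nc)] = ng
                        heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
        return None                                  # goal unreachable

    # a_star([[0, 0], [0, 0]], (0, 0), (1, 1)) -> 2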


Yes, you should not have to do that.

But place yourself on the other side of the fence: you just opened a new position, there are a hundred applicants, and you have limited time to interview them all.

If you do spend 1 hour with each one of them (plus at least as much again preparing and debriefing), you have spent at least 200 hours on this task, not accounting for the back and forth to find the right schedule and all, probably spread over several months since most people are only available for interview from 17h to 20h or 12h to 14h.

Let's say you have 5 people dedicated to this, obviously technical ones for this kind of task, interviewing every day for 4 hours (2h interview + 2h of debrief/internal discussion). That's still a whole month of interviews, every day, for 5 people who are not HR and probably have better things to do, all for one not-that-long interview per candidate, for one position.

It's simply not efficient, and smaller organizations cannot handle it.

Alternatively, you can filter out 50% easily by making them take a programming test, with few false negatives overall. There's a non-negligible portion of false positives as well, but now from 100 applicants you have 50, so interviews are much easier to manage, you have more time to do them, and the overall quality of interviewees is much higher.

So yes, in an ideal world, everyone would know the exact level of everyone just by looking at a CV and people would not need to prove anything by a test, but that's not how it is for now.

If you have a better solution, I'm sure a lot of people would be keen to hear it, because recruiting is very costly and arguably one of the most sensitive parts of building a company.

There are solutions like referrals and such, which work very well when they do, but when you have a lot of positions to fill it's hard to find enough people.


> If they measured useful skills they wouldn't require any study.

I would be careful with this kind of statement. It can also show that most programming jobs don't actually require a sophisticated level of knowledge.


I think this is unfair.

As a crude comparison, it would be like a mathematician being asked to recite his times tables. Something he could probably do, given a bit of time and practice, but so outside the normal day-to-day stuff they normally do that it doesn't tell you much other than they spent time preparing. Which is fine if that's what you're looking to measure.

I think the point that the grandparent is trying to make is that if you take a group of developers with a good amount of experience and who everyone agrees are skilled in their field and administer a test with no warning/preparation, and none of them score very well, are they suddenly not very good developers or is the test not a very good predictor of whether someone is a skilled developer?


One programming job may require a completely different skillset than another.

Going over the Hackerrank algorithms, how many times do you do things like fancy bit manipulation or game theory algorithms for building a simple CRUD web app? There are certainly programming jobs where bit manipulation is common (systems level / embedded code is one I can think of) but from my experience it's pretty rare to XOR bits in Javascript land.

Same with things like data structures. Again: in the world of C / C++, certainly you'd use trees / linked lists / etc. I do think it is good to know about these concepts regardless, so that this type of thing is more than just vague black box magic, but in Javascript CRUD web app land all of that stuff is mostly "behind the curtain", so it isn't the most necessary skill to have.

Hackerrank in fact seems really poorly suited to web development positions. No tests on CSS, HTML, design patterns (yer model-view-* patterns, dependency injection, responsive web design, etc etc.), data transmission structures (eg JSON and XML), browser structure (eg the DOM), dynamic communication (eg Ajax/XHR, Websockets), etc.

Heck, Hackerrank may show that you can write some SQL, but ironically for something with "hacker" in its name, it can't show that you know how to mitigate the common "web hack" tricks (eg SQL injection, XSS vulnerabilities, URL fuzzing), something I would think would actually be way more relevant for any public CRUD web app.
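
(For what it's worth, the mitigation they could test fits in a couple of lines: parameterized queries. Python's sqlite3 stands in here for whatever driver the job actually uses.)

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

    user_input = "alice' OR '1'='1"   # a classic injection attempt

    # BAD: string formatting lets the input rewrite the query.
    # conn.execute("SELECT * FROM users WHERE name = '%s'" % user_input)

    # GOOD: placeholders keep the input as data, never as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    assert rows == []                 # the injection string matches no real user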


Isn't saying that a few hours is enough to get past it nearly the same thing as saying that it is just garbage?


If it takes a few hours for someone with actual programming experience vs. a year for someone without any experience whatsoever, it's not exactly garbage. Just like FizzBuzz isn't garbage. It doesn't tell you much, but it filters out the total frauds.
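
(For anyone who hasn't seen it, FizzBuzz really is this small, which is the whole point of the filter:)

    def fizzbuzz(n):
        # The classic screen: multiples of 3 -> "Fizz", of 5 -> "Buzz", both -> "FizzBuzz".
        for i in range(1, n + 1):
            if i % 15 == 0:
                print("FizzBuzz")
            elif i % 3 == 0:
                print("Fizz")
            elif i % 5 == 0:
                print("Buzz")
            else:
                print(i)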


I used to think that, but you'd honestly be surprised how many people it filters out successfully.

It is just an advanced version of FizzBuzz: even if it's not enough to determine whether someone is good or not, it definitely burns people who are not able to code at all. It also provides talking points for the interview afterward, about coding style and all.

Of course, it's not sufficient, but it's just one stage among others if the interview is done correctly, and it's a useful one.


So it's a step up over randomly pulling people right off the street?


> but it should not take much more than a few hours to get back to a decent skill level

It really depends where you apply. Some companies have interview processes that require a very specific preparation and it can be very time-consuming.

I applied once for a quant position. The interviewer didn't really care about my resume and we started directly with logic and probability puzzles (a few phone interviews). I was about 35 at the time, and this was the type of thing I would have been more comfortable doing when I was a 20-year-old math undergraduate. On one hand, I like that they put everyone on equal ground. On the other hand, I think it's a bit of a shame that they dismiss many years of experience.


These are all good points, but how does the author reach the conclusion that the coding tests are ageist? Ageism was only mentioned in the headline.


When he says the tests favor people that just graduated from school. Those that have been in the workforce for a while (older people) are focused on "real world" problems and not the academic problems seen on HackerRank. Any system that favors young people over old people is ageist.

I'm not saying I agree with the argument (old people have been complaining about young people for a very long time), but the logic is easy to follow.


Bigger software companies also typically require "experienced" candidates (read: not new grads) to pass a systems design interview, which is very subjective and typically fairly difficult. This essentially makes the bar for new grads lower.

The argument is that this is because they expect experienced developers to be able to enter at a higher level, but the cynical side of me thinks that it's because they really prefer new grads, and this tips the scales in their favor.

Remember, Zuckerberg said, "young people are just smarter". I fear that attitude is pervasive, but hidden under the surface; they're not going to just come out and say it, as that would be illegal.


You can be a new grad and have experience. I made a point to do work in college that was appropriate for my work field when I graduated. Appropriate summer and part time jobs helped quite a bit as well. Having a bunch of appropriate jobs and projects under your belt (especially a few apps in app stores and code up on github) goes a very long way.


Sure, I wasn't arguing that they're mutually exclusive. I was referring to the way that e.g. Google would refer to "experienced" or "industry" candidates.


In my experience, any whiteboard test of generic algorithms is more of a window into how the lead developer thinks about the people he manages than an actual test of programming ability. It almost always denotes a fixed, "I know best" mindset. It's usually an interview killer for me.

And it's just a poor way to assess skill. Learn about modulus once and you'll pass every other whiz-bang question somebody else thinks they're the first to give you.

You'll always learn more about a candidate's ability by just having a conversation about programming with them. Anyone worth their salt will not only be able to colloquially talk about their skillset, they'll enjoy it.


Hello all,

I am a bit late to this discussion, as Sarah Butcher (the author of the article) just informed me about this thread. This is Marc Adler, the person who was talked about in this article. It's great to see the discussion around HackerRank-type coding tests and the hiring process. There are probably some clarifications that need to accompany the article.

First, when I was interviewed for the article, there was no mention of the word "ageism". As someone in this thread surmised, it was probably inserted into the title for a bit of "shock value". The article was supposed to be about the correlation between HackerRank tests and the hiring of experienced people who have been coding successfully for a number of years on "real-world" problems. Many of you in this thread fall into that category.

Second, I have about 30 years of experience in the industry. I started out in the mid-1980s as a Windows developer at Goldman Sachs, writing equities trading systems, went on to form my own software company (which I had for ten years and concentrated on programming tools), and then continued as a developer/architect for more Wall Street companies. Among some of my roles were Chief Architect of Citigroup's equities division, and Chief Architect of MetLife (a global insurance company). I am currently CA at another big company that is not on Wall Street. All of this time, I have been coding. Recently, I developed an Uber clone in Scala/Akka/Play.

I am not completely against HackerRank-type tests for hiring junior devs or devs right out of college. But what I am against is the use of these coding tests to hire people who have 10, 20, or even 30 years of experience. When was the last time that you used dynamic programming algorithms in your job? When was the last time that you seriously thought about Big O? In my experience, a lot of apps use linked lists and hash tables. If you want a nice sorting algorithm, there is one that comes with one of Microsoft's C# assemblies. What is more critical is knowing how to find information and how to apply it in your day to day job. And that's the kind of developer I want to hire ... one who is resourceful and productive.

-marc


Hackerrank is a really good tool to separate the incapable from the capable, but not a very good one to separate the good from the best (or even the average from the good).

It's impressive the number of supposedly senior candidates that can't follow simple instructions from the platform or write a couple of lines of code in their language of choice to sort the words in a string or something similar.

I fully agree that multiple complex algorithm or puzzle questions are bad - they require a longer time investment than most candidates should be willing to devote to such a process, are distant from the reality of most programming tasks, and favor those who enjoy and practice programming puzzles.
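
(The kind of warm-up mentioned above, e.g. sorting the words in a string, is basically a one-liner in most languages:)

    def sort_words(s):
        # "banana cherry apple" -> "apple banana cherry"
        return " ".join(sorted(s.split()))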


I interview 20-30 people per week and send most of them Hackerrank tests. I'm a big proponent of vetting that people can write code, but 99% of the questions on HR are waaaay too "algorithmic". Had to go through almost all of them to find a few tests that are not about graph algos or dynamic programming – finally found a question that was based on regular expressions and a few other ones that I think are a bit more representative of real world challenges.

I really wish Hackerrank could add more problems or let employers add their own questions. Are there any good alternatives to HR?


Hmm, pretty sure you can add your own questions...

I like to add simple coding tasks, multiple choice stuff, and a free-form text field with 500 chars to explain something.


We (the company I work for) use Codility for some of our roles. I think you can add your own tasks/questions.


Big fan of Codility. They do have a lot of algorithmic questions as well, but they also have easy ones (which we use), and we tested them on our own staff prior to deploying (all then-current staff got perfect or nearly perfect). I would only use medium or above if the question tested something that the job absolutely needed in a candidate.

They also allow candidates to add their own test cases easily, which, for the ones that do, tends to be a very strong signal in their favor (they also tend to do better).


Big NOT-fan of algorithmic questions that could be summed up in one sentence but instead take a full page of text to describe.

In most of these coding tests the algorithm is easy, but figuring out the catch in the description is harder than the actual implementation (unless you're insanely limited and have to reimplement basics from scratch).


To get a true stress read, I think you would have had to tell your staff that they needed to pass this test to come back to work in the morning.


While I understand that having a first selection round may be a good idea to sort out whether a person can program or not, it amazes me that these sorts of recruiters seem not to read the applicant's resume.

I do relate to the article. For experienced developers, it's probably not the best way to recruit. Ask something related to the job, not the degree.


This isn't surprising. I remember getting the strong impression when poking around on Hackerrank (between the UX, question quality, etc.) that it was designed by people who just graduated from school.


> When you’ve got a history of having built a high frequency trading system for Citadel, Deutsche Bank, Morgan Stanley, you’ll know a lot about latency and order flow – but you might not know how to find a maximal duplicated substring within a palindrome on a HackerRank-type test.

I have only a vague idea of the requirements in financial technology, but aren't streams of financial data central to it? And repeating substrings correspond to pattern repetitions, so they might be important, especially the longest ones.

I think if you can find the longest repeating pattern in a stream of data, you should also be able to find maximal duplicated substrings in palindromes.

Maybe it's more about the way the problem is phrased rather than about the difficulty of coming up with a solution?
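
For what it's worth, a brute-force take on "longest duplicated substring" is short; presumably the test wants a fancier suffix-based approach, but as a sketch:

    def longest_duplicated_substring(s):
        # Try lengths from longest to shortest, remembering the substrings seen at each length.
        for length in range(len(s) - 1, 0, -1):
            seen = set()
            for i in range(len(s) - length + 1):
                chunk = s[i:i + length]
                if chunk in seen:
                    return chunk      # first hit at this length is a longest duplicate
                seen.add(chunk)
        return ""                     # no repeated substring at all

    # longest_duplicated_substring("abcabcd") -> "abc"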


TL;DR summary: programmers... err... software engineers discover that fundamentally most companies consider them to be totally interchangeable, just like janitors are fundamentally interchangeable or models walking down the catwalk during NY Fashion Week are interchangeable.

Since they did not save money during the twenty years that they have been paid quite well, they don't like the prospect.


Outside of certain bubbles (i.e. in most of the world) they have never been paid exceptionally; well, at most.


Outside that bubble they were paid just as well compared to the going salaries of other people living outside that bubble.


Are there any HackerRank-like apps out there that go beyond simple code challenges?


> Are there any HackerRank-like apps out there that go beyond simple code challenges?

What kinds of challenges do you have in mind?


You can contribute to open source projects for another kind of challenge


Contributing meaningfully to a big OSS project is a much bigger time investment. Not only do you have to understand a lot about the project, you also have to actually work on open issues in it.

And contributions to tiny projects tend to get passed over anyway.

Moreover, certain hiring agreements would consider it breach of contract or confidence.

Making your own GitHub portfolio or the like faces the same problems.



