
So there are multiple things that are true with people applying for development roles:

1. These "dev gate" programming challenges are filtering out senior devs, talented devs, creative devs etc. people who would be great at the role.

2. There are people applying for these roles who can't knock out a decent Fizzbuzz solution (in any amount of time).

3. For many roles, there's a flood of applicants.

Any solution to this needs to address all three of these things simultaneously, which seems astoundingly difficult.




I think my team has gotten pretty good at this. We address the "flood of applicants" by tossing out any resumes that don't have some sort of CS or programming on them (about 25%), and favoring, in order, people who have held a programming job, people who have done programming internships, and people who have taken CS classes. A decent GitHub profile will bump you up in the two latter categories.

Next is a five-minute phone screen. We're a Java shop, so I ask them something dumb, like "what's the difference between public, private, and protected?" Something any Java dev would know; I'm just trying to find out if they have ever actually used the language.[1]
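(For any non-Java readers, a minimal sketch of the distinction we expect an answer to cover; the class and field names here are made up:)

    public class Account {             // "Account" is just an example class
        public String owner;           // public: visible to all classes
        protected long balanceCents;   // protected: visible to subclasses and the same package
        private String pin;            // private: visible only inside Account
        int branchCode;                // no modifier: package-private (the default)
    }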

People that pass the phone screen get a Skype interview, where they write code in an IDE. The first half hour or so is chat and trivial problems like "sort this array" or "return true if this String starts with a letter between A-Z, inclusive." They're allowed to Google and use the standard libraries.
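(For the second one, the kind of answer we're happy with is nothing fancier than this sketch; the method name is made up:)

    // Returns true if the String starts with an uppercase letter between A and Z, inclusive.
    static boolean startsWithCapitalLetter(String s) {
        if (s == null || s.isEmpty()) {
            return false;
        }
        char first = s.charAt(0);
        return first >= 'A' && first <= 'Z';
    }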

Finally, we have a "close to real world" problem for them to work on. It's a standalone, mostly-toy CRUD application, and we'll ask them to add a feature that represents the kind of work they'd be doing. Again, they have their IDE of choice, Stack Overflow, etc.

I don't think we've had a bad hire since we started using this process. Have we turned aside some all-stars that interview poorly? Maybe, but the team we've built is really good at what they do, so I'm pretty happy with the results.

[1] We have hired Senior devs that don't know Java. One of our team leads was a C# guy for a decade or so, but he was smart and available, so we scooped him up. Java is easy to learn. Programming well isn't.


About your last comment: C# was explicitly made similar to Java (at least at first; it diverged over time, but the basics are close to being a superset of Java).

My first job using C# was in 2010, and I had been programming in Java since 2002 at that point. I was productive pretty much from the first day, and not because I'm a genius: C# is that similar to Java...

I suspect that if you'd hired a PHP or Python expert, they would have taken more time to get used to Java (not that that would have meant you shouldn't hire them!)


Oh absolutely. I used to joke that you could convert C# to Java by replacing "string" with "String" and changing the extension.

But really, most of the C-family of languages are similar enough that you can kind of hack your way through pretty quickly, and become mostly-productive in a few weeks.

The family of Go, Java, C#, Python, Kotlin, JavaScript, etc. are all similar enough in theory that "working code" is never more than a Google search for the right syntax away.


I have a degree in physics and 20 years of programming. Just out of curiosity, would I be weeded out in the first 25%?


> any resumes that don't have some sort of CS or programming on them

> I have a degree in physics and 20 years of programming

Depends, do you mention your 20 years of programming on your resume?


I thought that meant CS or programming classes.

I do have Java, Perl and C++ from SF City College - so I guess that counts.


With the communication skills I've seen so far, probably not.


After you've been coding for a few years, I think your degree is largely irrelevant. A guy with 20 years of programming knowledge has far, far more than a degree.

Similarly, a candidate could have graduated top of their class from the most prestigious university in the world, but if they've never been able to hold a job for more than six months, that tells me something.


Your last point is interesting and hits close to home for me, from the other end. My current gig is Python, but when I got hired I knew exactly zero Python. But I had been writing software a long time and in the same area (networking & automation), so they scooped me up. Let's just say it's worked out very well, for everyone involved.

I still don't know Java that well, though ;-)


Filtering for CS is short-sighted, IMO. You're going to see fewer and fewer folks holding a CS degree as more folks wise up to the trap of student debt in the US.

Instead, put folks on trial periods and keep the good performers.


I agree; that requirement was pushed down on me from on high. I think we're starting to loosen up about it, though.


It does scare me when experienced developers can't write FizzBuzz though, in any language of their choosing, in a reasonable time frame. The thing I tell myself is that there are so many "gotcha" style programming interview questions that people might just be nervous that there's some trap waiting for them in the question.
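(For anyone unfamiliar: FizzBuzz is about as small as coding exercises get; a plain Java version looks roughly like this.)

    // Print 1..100, replacing multiples of 3 with "Fizz", multiples of 5 with "Buzz",
    // and multiples of both with "FizzBuzz".
    public class FizzBuzz {
        public static void main(String[] args) {
            for (int i = 1; i <= 100; i++) {
                if (i % 15 == 0)      System.out.println("FizzBuzz");
                else if (i % 3 == 0)  System.out.println("Fizz");
                else if (i % 5 == 0)  System.out.println("Buzz");
                else                  System.out.println(i);
            }
        }
    }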

A few years ago I instituted an "interviewing code of conduct" for my teams with a few tenets that we've refined over the years, but the first one was "We will treat every candidate with respect and empathy in our interview process". The team has adopted some attitudes and techniques to do so while also not compromising on the talent or skills we are looking for. We've gotten very positive feedback on our interviews, so we think it's working.


Can you share some concrete examples of techniques?


I can tell you something that has worked very well for me:

1. Grab a subset of the competencies that you NEED to have, from https://sijinjoseph.com/programmer-competency-matrix/

2. From that subset, set up questions for the different levels. For example, for the source code version control (VCS) competency we have:

Lvl 1 questions:
- What is the difference between Git and GitHub?
- What is a branch?
- What is a pull or merge request?
- What is a commit?

Lvl 2 questions:
- Mention other VCSs besides Git.
- Mention good practices for a commit (short title, descriptive content, etc.).
- Describe one branching model (e.g. gitflow or another).
- What is a cherry-pick?
- Describe good practices for a pull/merge request.

Lvl 3 questions:
- What is the difference between a merge and a rebase? When do you use each?
- What is git bisect? How do you use it?
- Git vs. Mercurial or other DVCSs.
- What is a partial clone?
- What is a Git submodule?
- What are Git hooks?

3. Split your 1-hour interview into 2 parts: coding and Q&A. For the coding part, ask them to build something, on their machine, in the language of their choice, sharing their screen. The problem should be something for about an hour, split into steps (I usually do 6). With this problem you will score coding ability, cleanliness, defensive coding and similar stuff. I usually score between 1 and 3 (1 = below the average I have seen, 2 = above the average I have seen, 3 = amazing/impressive).

The second, 30-minute part is Q&A: use the questions to check whether they are a 1, 2 or 3 (or maybe a 0 if they cannot answer a Lvl 1 question).

Then combine those scores any way you want (average, or a weighted average with extra weight for the code or for some question) and see what % of the max score the candidate gets.
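(For illustration, the combination step is no more than something like the sketch below; the double weight on coding is just an example, not a recommendation:)

    // Sketch: combine a 1-3 coding score with 0-3 Q&A scores, weighting coding double,
    // and report the candidate's percentage of the maximum possible score.
    static double percentOfMax(int codingScore, int[] qaScores) {
        double total = 2.0 * codingScore;
        double max   = 2.0 * 3;
        for (int score : qaScores) {
            total += score;
            max   += 3;
        }
        return 100.0 * total / max;
    }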

The key during the Q&A is that you start with 1 or 2 Lvl 1 questions, and then if they answer them, you go to Lvl 2 questions, and if they get those, you go to Lvl 3 questions. The idea is to never make the candidate feel like they don't know anything. With experience you can get very good at seeing what level of questions a specific candidate will answer.


I've been on both sides of this. As an applicant, sometimes it's a crapshoot in terms of getting lucky that they asked me the right question. On the other hand, as a hiring manager or interviewer, I see lots and lots of candidates who can't write out a fizzbuzz solution, and it wastes a lot of my time (I'm a sucker and will take a lot of time guiding interviewees to the right solution, even on a simple problem when I know the interview is over, but I don't want to bruise their ego too much).

I don't have a silver bullet solution to this, but I think a few things go in the right direction:

1) Candidates should have to write code in interviews. But they shouldn't have to solve puzzle problems with "gotcha" solutions. If there's a specific trick that requires an "aha" moment, you are really testing how well a candidate solves puzzles under pressure, not how they code.

2) Test candidates on what they are best at. If someone has been working in C# for the last 5 years, don't ask them to whiteboard in Python, which they used in college. Picking up new languages/frameworks is quick for someone who knows what they are doing [0].

3) Offer candidates a choice between an in-person interview and a take-home coding test, the latter of which would take more time. Some candidates don't want to deal with doing a 6-hour take-home coding problem [1]. Other candidates suck at whiteboarding under pressure. So offering more options seems better.

[0] There are exceptions to this. You might have a unique problem and the budget/resources to hire a rockstar for a specific role. Desirable companies willing to dole out big salaries do this all the time. But much more often, I see companies offering average salaries for very very specific roles. One company near me told me that I was one of the best candidates they've seen, but they are looking specifically for someone with 1+ years of Java experience. I could have picked up the basics of Java in a month, and been fairly proficient in 2-4 months. Meanwhile, they are still looking to fill that role and it's been over 2 years.

[1] I've had a few companies that insist on this, but I haven't had a period of unemployment where I have the time for this. Good developers tend to be/stay employed, so if you are looking to hire senior devs, you probably need to consider their schedules. Unless I'm desperate to leave a job, I can only make so much time for interviews.


> they shouldn't have to solve puzzle problems with "gotcha" solutions.

I’ve never come across this myself, but I always figured that sort of interviewing would correct itself over time - if you ask questions that nobody is going to know the answer to, eventually when three years have gone by and you still haven’t hired anybody, you’re going to have to adjust your tactics.


There are different philosophies, but it comes down to how you want to balance type I and type II error, where:

Type I error == false positive == hiring someone who isn't qualified

Type II error == false negative == failing to hire someone who is qualified

I think a lot of companies are obsessed with minimizing Type I error. They really don't want to hire bad developers. As a hiring manager, your ass is on the line if you make too many of these mistakes (when your own manager asks, why are we paying 100k/year for someone who you are telling me isn't very good?). And perhaps you'll have to fire someone, which is painful for most people to do [0].

On the other hand, the costs of Type II error fly under the radar. Your manager comes to you and asks why you haven't hired anyone yet. "Well, I haven't found someone qualified yet" is the only answer you need to give. So it's easy to avoid culpability, and it's harder to measure the costs associated with the work not getting done (generally, it's easy to measure a developer's cost, which is their salary + benefits + the time they spend using others' time multiplied by those employees' compensation. It's much harder to measure the value of their output in most cases, unless they are working alone on a revenue-generating project).

I think there's a problem (at many companies) of people being held accountable for Type I but not Type II error. And so naturally, people worry more about Type I.

[0] On a tangential note, I had a boss once who made a good point to me. He encouraged me to take risks in hiring, but he said that the worst person to hire is someone who is mediocre. If someone is really bad, it's easier to fire them. If someone is really good, then everyone is happy. But if someone is bad, but not bad enough to fire, then they stick around and cause the most damage.


You would think that. But there's that saying about how "the market can stay irrational longer than you can stay solvent." It's true.

I've seen many places sit on job reqs for a year or two after I applied, when I would have done quite well. Sure, I may have needed to study for a week or two. Then they'd have had a solution ~11 months earlier.


I used to just look at someone's resume and then start asking them, in a collegial engineer fashion, about technical questions that come to mind about things they've worked on, and go from there, perhaps in speculative directions. But that's not good, now that everyone is concerned about real or perceived IP taint.

There's now a better way, now that a lot of people have open source contributions. Look at their open source contributions, especially if they're involved in public discussions as well as in code. Then you can followup and ask them about those.

Open source behavior is not necessarily quite the same as workplace behavior, especially if their participation was unpaid, and they had limited time, and the team dynamics were different, but there can be a lot of overlap.

(Simple example, using someone famous: if you did not know anything about Linus Torvalds, and didn't trust his resume and references, you could learn from looking at his open source participation that he knows how to code, has managed ambitious projects with cat-herding, is knowledgeable and conscientious, historically has had a very frank manner that some might find discouraging, and has recently reflected on that manner and is modifying it. If that isn't enough, start discussing a technical topic with him that doesn't seem to involve proprietary IP.)

One engineer taking a quick glance at open source participation, and then asking questions about that, is arguably more useful than the engineer spending the same amount of time asking some contrived question and sitting in a stuffy room while the candidate does a theatre performance under conditions that aren't representative of real work.

Also, before considering dumping many hours of take-home makework programming on someone, it's respectful to first take a look at their open source. (Especially with a person who does open source on the side. There's an extra frustration with take-homes, which is that they probably have backlogs of unpaid open source things they'd like to spend time on, and the take-home is hours of similar work in their free time, but it gets thrown away.)


It's a sad fact that interviewing sucks for both sides. Nobody trains us to conduct interviews, so we all just wing it.

I hate wasting time on candidates that can't answer rudimentary programming questions and I hate being in interviews where I'm asked questions which I feel are silly or irrelevant. I can't even trust my results sometimes because there's always that feeling that perhaps a candidate missed a question because I was a poor communicator.

Listening in on interviews with my team mates has really opened my eyes to the random nature of hiring. I'm certain I would not have passed the bar if interviewed by Team Member B instead of Team Member A.


> I can't even trust my results sometimes because there's always that feeling that perhaps a candidate missed a question because I was a poor communicator.

This sentence shows a level of awareness that is sadly uncommon; I don't believe I've encountered it in... thousands of interviews.


Not only that, but they sometimes even yield false positives, as in my experience:

I have been in charge of smallish engineering teams for around 5 years (as head of engineering for different startups). In the past, I used to do the three timed HackerRank questions as a first automated filter.

The problem is that, by testing for "fundamentals" (algorithms and data structures, really), I skewed my hiring pool toward the people who were best at those.

Who are the people best at solving those kinds of puzzle problems (like the ones on HackerRank, Codility, CodeFight, Codewars, etc.)? Most likely they are junior developers who are in university or recently graduated and spent their university free time on coding competitions.

The problem with that is that these are a very particular type of programmer: they are super-effective at writing tiny one-off code to solve a specific "closed" problem. They usually don't care about testing, readability, interactions, maintainability, etc., given that they optimize for time and for "passing the test cases".

Because of that, suddenly I had something like 10 devs who were very good at algorithms but very junior with regard to software engineering, architecture, maintainability, business understanding, etc.

Nowadays I have developed an automated challenge that 1. requires coding, 2. requires HTTP request interaction, and 3. requires thinking, and it allows me to filter out people who really don't know what they are doing ( https://paystand.ml/challenge/ ).
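(To give an idea of the kind of thing I mean by HTTP interaction, here is a minimal sketch using Java 11's built-in HttpClient; the URL is a placeholder, not the real challenge endpoint, and candidates use whatever language they like:)

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ChallengeClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api/step1"))  // placeholder URL
                    .GET()
                    .build();
            // Send the request and print the status code and body.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }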

WRT the 1-hour interview, I have always used a modified version of the programmer competency matrix ( https://sijinjoseph.com/programmer-competency-matrix/ ) to be as objective as possible, and to be able to score developers on a wide range of skills, not only on "they don't know how to solve the problem of returning a correct sub-tree from a BST within certain ranges". Sure, algorithms and data structures are part of the requirements, but knowing only a little of that should not disqualify you.



