
There are different philosophies, but it comes down to how you want to balance Type I and Type II error, where:

Type I error == false positive == hiring someone who isn't qualified

Type II error == false negative == failing to hire someone who is qualified

I think a lot of companies are obsessed with minimizing Type I error. They really don't want to hire bad developers. As a hiring manager, your ass is on the line if you make too many of these mistakes (when your own manager asks why you're paying $100k/year for someone you're now telling them isn't very good). And perhaps you'll have to fire someone, which is painful for most people to do [0].

On the other hand, the costs of Type II error fly under the radar. Your manager comes to you and asks why you haven't hired anyone yet. "Well, I haven't found anyone qualified yet" is the only answer you need to give. So it's easy to avoid culpability, and it's harder to measure the cost of the work not getting done. (Generally, it's easy to measure a developer's cost: their salary, plus benefits, plus the hours of other people's time they consume multiplied by those employees' compensation. It's much harder to measure the value of their output in most cases, unless they're working alone on a revenue-generating project.)
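
To make that cost arithmetic concrete, here's a minimal sketch in Python; the function and every number in it are hypothetical, purely to illustrate why the cost side is easy to compute while the value side isn't:

    def developer_cost(salary, benefits, colleague_time):
        # colleague_time: list of (hours consumed, that colleague's hourly cost) pairs
        return salary + benefits + sum(hours * rate for hours, rate in colleague_time)

    # e.g. $100k salary, $30k benefits, 120 hours of a $75/hr colleague's time
    # and 80 hours of a $90/hr colleague's time spent helping them
    print(developer_cost(100_000, 30_000, [(120, 75), (80, 90)]))  # 146200

There's no comparably simple formula for the value of their output, which is the asymmetry described above.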

I think there's a problem (at many companies) of people being held accountable for Type I but not Type II error. And so naturally, people worry more about Type I.

[0] On a tangential note, I had a boss once who made a good point to me. He encouraged me to take risks in hiring, but he said that the worst person to hire is someone who is mediocre. If someone is really bad, it's easier to fire them. If someone is really good, then everyone is happy. But if someone is bad, yet not bad enough to fire, they stick around and cause the most damage.



