Hacker News

The key word there is standardized. If you apply a standardized test, it becomes feasible to measure and argue the disparate impact.

For all their failings, leetcode-style interviews are probably less culturally sensitive than standardized IQ tests. They probably do have (unintentional) disparate impact, but this seems like a really hard thing to measure (and possibly correct for).




> If you apply a standardized test, then it becomes feasible to measure and argue the disparate impact.

Yes, the existence of readily available statistics may make the unequal impact easier to show.

> For all their failings, leetcode-style interviews are probably less culturally-sensitive than standardized IQ tests.

“Culturally sensitive”, maybe, but they almost certainly have quite large unequal impacts adverse to protected classes (including “age, if over 40”), though the absence of external data raises the cost of proving the unequal impact. They are also certainly less demonstrably predictive of job performance in software development.

> They probably do have (unintentional) disparate impact

Probably that as well, but I don't think the age-discrimination function is largely unintentional to start with.


You are committing a corollary to the green lumber fallacy described by the article.

Just because code challenges are not relevant to job duties doesn’t mean the results are irrelevant to job performance. They are a proxy intelligence test. General intelligence is the best predictor of success in almost any role (not just software engineering).

This is what confounds people. They think the interviews are designed that way because they are supposed to be representative of the job — I don’t believe that’s the case. They are that way because they provide a strongly correlated signal of performance after hire, and big tech has decades of data for all different interview types. I’m very confident that if they had a better interview circuit that could be done in a ~day, they would be doing that. Obviously even at big tech, referrals and recommendations count for a lot.

While I don’t like the low recall, I do think that “invert this binary tree” probably has less bias than a quiz on technology, or a design conversation (that seems way more susceptible). Perhaps it has a bias for a particular kind of computer science education and thinking, but at least that’s not a protected class. I’m not seeing the age connection, but I could imagine e.g. a computer science education in different countries emphasizing different skills over years (and leading to some candidates with a leg up on the “tests”).
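(For readers unfamiliar with the exercise, the "invert this binary tree" question referenced above is typically a few lines of recursion; a minimal Python sketch, with hypothetical names:)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    val: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def invert(node: Optional[Node]) -> Optional[Node]:
    """Mirror the tree by recursively swapping each node's children."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node
```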


> They think the interviews are designed that way because they are supposed to be representative of the job — I don’t believe that’s the case.

This may be the case. But even if it does act as an effective proxy--and I'm not sure it does; my worst yeses in interview loops have been very adroit programmers whom I passed against my better judgment when my "not sure I want to be around this person every day" bells were ringing subtly in the back of my mind--then it has a different problem. You've now set expectations with the interviewee that oh yeah, we do hard stuff here. Then they go frob knobs or write frontend stuff all day.

(This actually happened to me at my first job. I didn't know any better, of course. The hiring manager pumped up my tires with all the difficult scaling work, etcetera etcetera. Then I was writing HTML into templates for the first six months I was there because they needed a body to plug into the role.)


I don't doubt anything about what you're saying, but I'm not sure what argument you're making. Yes, false negatives exist. Would another kind of interview have a lower false-negative rate? At the big tech firm where I work, there are many different kinds of interviews, and HR definitely has all the data about what correlates with job performance. I'm not saying coding-style interviews are perfect or even good, but there has got to be a reason they still do them.

As humans with a strong confirmation bias, it is extremely difficult to tell what's behind your feelings about those worst yeses. It could be that the candidate had red flags, and you saw them but couldn't articulate them. Or it could just as easily be that the candidate had a cultural and/or communication style different from your own, and they also happened to perform poorly after being hired. It's important to remember that no interview process is going to yield perfect results: there will be false negatives and false positives. You can only move the trade-offs, while simultaneously ensuring that you're avoiding any conscious or unconscious discrimination against a protected class to the fullest extent possible. That's a hard problem to solve.
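The trade-off can be seen in a toy sketch (the scores and the threshold values below are made up purely for illustration): raising the hiring bar converts false positives into false negatives, and no threshold eliminates both.

```python
# Hypothetical interview scores for candidates who would have
# succeeded on the job vs. those who would not.
good_hires = [55, 60, 70, 80, 90]
bad_hires = [30, 40, 50, 65, 75]

def error_counts(threshold):
    # Good candidates scoring under the bar are rejected (false negatives);
    # bad candidates scoring at or above it are accepted (false positives).
    false_negatives = sum(s < threshold for s in good_hires)
    false_positives = sum(s >= threshold for s in bad_hires)
    return false_negatives, false_positives

for bar in (50, 70, 90):
    print(bar, error_counts(bar))  # stricter bar: fewer FPs, more FNs
```

Moving the bar from 50 to 90 here drives false positives to zero at the cost of rejecting most of the good candidates; the errors trade off rather than disappear.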



