
Code reviews are not one of the things GPT-4 is mediocre at; it's much better at them than it is at, for example, providing reliable information or writing code. Its pattern of strengths and weaknesses in code reviews is not the same as human patterns.

By 'red flags' I took copenjin to be referring to things like getting aggressive or defensive rather than making dumb mistakes, but I could be wrong about that. I guess there are also some mistakes so dumb that they'd be a red flag, and I have to admit that GPT-4 is somewhat subhuman at avoiding those.

If you play a board game with someone, you can assess their general intelligence and capacity for logic. All else being equal, having more general intelligence and knowing how to think logically do make you a better programmer. (If you just want to assess general intelligence, a much faster pair of tests would be reverse digit span and reaction time.) But those are far from the only things that matter, they're not enough to make you a great programmer, and they're not even among the most important factors. Other important factors in programming include things like knowing how to program, knowing how to listen, and being willing to ask for help (and accept it), which a board game generally will not test.
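
For the curious, a reverse digit span test just means presenting a digit sequence and having the subject repeat it back in reverse, with the sequence growing until they can no longer do it. Here's a minimal sketch in Python, purely illustrative; the parameters (max_length, trials_per_length) and the text-based presentation are made up for the example and not any standard psychometric protocol:

    import random

    def reverse_digit_span(max_length=12, trials_per_length=2):
        # Present progressively longer digit sequences; the subject types each
        # one back in reverse. The reported span is the longest length at which
        # they passed at least one trial before failing every trial at some length.
        span = 0
        for length in range(2, max_length + 1):
            passed = 0
            for _ in range(trials_per_length):
                digits = [str(random.randint(0, 9)) for _ in range(length)]
                # A real test would present the digits aloud and hide them;
                # printing them here is just for the sketch.
                print("Sequence:", " ".join(digits))
                answer = input("Type the digits in REVERSE order, no spaces: ").strip()
                if answer == "".join(reversed(digits)):
                    passed += 1
            if passed == 0:
                break  # failed every trial at this length, so stop
            span = length
        return span

    if __name__ == "__main__":
        print("Reverse digit span:", reverse_digit_span())

Which, of course, only measures working memory; that's kind of the point of the parenthetical: it's fast, but it tells you nothing about whether someone can program.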




> Its pattern of strengths and weaknesses in code reviews is not the same as human patterns

I agree with you, which is why I say they complement each other. But your reply to Baron implies that what they were suggesting wasn't a solution. I agree with the sentiment of the post about doing things in person. What I take from Baron, though, is that the process is much harder to fake with GPT, because the point of the code-review exercise isn't so much finding the bugs as watching someone perform the review (presumably over screen sharing and a video chat). You could do this completely virtually, but I think you're right to imply that the same task could then be optimized.

But at the end of the day, I think the underlying issue is that we're testing the wrong things. If GPT can do the job sufficiently well, then what do we need the human for? Well... the actual coding, logic, and nuance. So we need to really see how a human performs in those domains, and the interviewing process should adapt with the times. It's like a calculus exam that bans calculators but also includes a lot of rote, mundane, arduous arithmetic: that isn't testing the material the course is about, and it isn't making anyone a better mathematician, because any mathematician in the wild will still use a calculator (mathematicians and physicists are often far more concerned with symbolic manipulation than with numerals).

> Other important factors in programming include things like knowing how to program, knowing how to listen, and being willing to ask for help (and accept it), which a board game generally will not test

I agree the game won't help with the first part, but I thought I didn't need to explicitly state that you should also require a resume and ask for a GitHub if they have one, and I did explicitly say you should ask relevant questions. As for the latter points, I'm not entirely convinced; those are difficult skills to check for in any setting, and there's a large set of collaborative games that do require working and thinking as a team. I was really just throwing a game out there half as a joke, more as a stand-in for an arbitrary setting.

At the end of the day, interviewing is a hard process and there are no clear-cut solutions. But hey, we had this conversation literally a week ago: https://news.ycombinator.com/item?id=40291828



