I've long had a dislike for leetcode interviews, from both sides of the table.
Project-based interviews can also be challenging because they can end up filtering out folks who don't want to spend several hours building something in their free time.
I met up with a friend last year -- a very senior dev -- who had taken some time off to study leetcode exercises while preparing for FAANG interviews. It struck me that one of the key measures of whether a candidate is a fit for a team is rather synthetic.
CodeRev.app is a simple, lightweight tool that helps teams evaluate candidates using code reviews. While programming tends to be a solitary activity, code reviews are more open-ended, collaborative, and reflective of how a candidate communicates, interacts, and gives feedback day-to-day. It may also be a better yardstick for roles biased towards reading code rather than writing it (engineering manager, support, QA).
More interesting: as we come to rely on AI-generated code in our workflows, the ability to read generated code, evaluate its quality, and judge whether it is fit for purpose only grows in importance. Quickly spotting security flaws, logical flaws, and domain-specific gaps (auditing, logging, etc.) becomes a core skill.
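For example, an exercise in this vein might plant a couple of subtle issues in an otherwise plausible snippet and see whether the candidate flags them. Here's a minimal hypothetical sketch in Python (the function, schema, and planted flaws are all made up for illustration, not taken from CodeRev.app):

    import sqlite3

    def get_user_orders(db_path: str, username: str):
        """Fetch a user's orders. Looks fine at a glance -- review closely."""
        conn = sqlite3.connect(db_path)
        try:
            # Planted security flaw: string interpolation invites SQL
            # injection; the fix is a parameterized query
            # ("... WHERE username = ?", (username,)).
            query = f"SELECT id, total FROM orders WHERE username = '{username}'"
            rows = conn.execute(query).fetchall()
            # Planted domain gap: no audit log of who accessed whose order
            # data -- easy to miss, costly in regulated domains.
            return rows
        finally:
            conn.close()

A strong reviewer catches both in seconds, and that's exactly the skill that matters more as more of this code is machine-generated.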
More in-depth thoughts here: https://chrlschn.dev/blog/2023/07/interviews-age-of-ai-ditch...
I created a GitHub organization just for our hiring process, with a template repo used to publish a code review repo for each candidate. I feel there's benefit in using GitHub for the code review exercise because it's what we use on the job. But I'd like to try this dedicated tool as well. It's a worthwhile project. Thanks!