Show HN: CodeRev.app – Code Review as Interview (coderev.app)
150 points by CharlieDigital 9 months ago | 48 comments
I've long disliked leetcode interviews, on both sides of the table.

Project-based interviews can also be challenging because they can end up filtering out folks who don't want to spend several hours building something in their free time.

I met up with a friend last year -- a very senior dev -- who had taken some time off to study leetcode exercises while preparing for FAANG interviews. It struck me how synthetic this is as a measure of whether a candidate is a fit for a team.

CodeRev.app is a simple, lightweight tool that helps teams evaluate candidates using code reviews. While programming tends to be a more isolated activity, code reviews tend to be more open ended, collaborative, and reflective of how a candidate communicates, interacts, and provides feedback day-to-day. It may also be a better yardstick for roles that are biased towards reading code rather than writing code (engineering manager, support, QA).

More interesting is that as we come to rely on AI-generated code in our workflows, the ability to read generated code, evaluate its quality, and judge whether it is fit for purpose becomes more important. Being able to quickly identify security flaws, logical flaws, and domain-specific gaps (auditing, logging, etc.) will only grow in value.

More in-depth thoughts here: https://chrlschn.dev/blog/2023/07/interviews-age-of-ai-ditch...




I'm delighted to see further advocacy for code reviews in a hiring process. As a software engineering manager, dir of eng, and CTO at 3 companies, I replaced take-home tests with one-hour code reviews. The process gave me everything I needed to evaluate and hire great people, and everyone was happier. I explained to candidates (and my team) that I see software engineering as primarily a social practice. If I hire engineers for excellence in code reviews, it means they are technically skilled and also a good fit for a collaborative team.

I created a GitHub organization just for the hiring process with a template repo that was published as a code review repo for each candidate. I feel there's benefit in using GitHub for the code review exercise because it's what we use on the job. But I'd like to try this dedicated tool as well. It's a worthwhile project. Thanks!


> I see software engineering as primarily a social practice.

Yes! I would say that the ability to collaborate well is absolutely necessary (but not sufficient).


There's a great talk by Margaret Heffernan where she brings up the term "social capital"[0].

In my own experience working in overperforming small startups, I've always found that this layer of trust can accelerate and multiply the output of small teams that have a tight understanding of how each member's contribution can be maximized.

The social and collaborative aspect of software engineering is indeed highly underrated.

[0] https://ideas.ted.com/the-secret-ingredient-that-makes-some-...


Could you expand on the common points that differentiate candidates who succeed from those who fail?


Critical take below, but I appreciate the hard work spent in building this and I think it's going to help a lot of people.

This seems to be productizing a totally broken understanding of code reviews.

Anybody can stare at someone else's code and criticize it.

It's a well-known meme that many devs heavily criticize their own code 6 months later.

Code reviews in practice need to be about ensuring conformance to written team agreements on which practices to follow and the reasoning behind them, identifying emerging patterns that should be adopted as new standards by team agreement, and generally publicizing changes to the codebase so the whole team is aware of how the repo is evolving.

This isn't a code review product, it's just selecting candidates who think similarly to the publisher, ensuring a monoculture of thought.

However, great job on building this product! It's a great start and I wish you the best of luck!


> This isn't a code review product, it's just selecting candidates who think similarly to the publisher, ensuring a monoculture of thought.

I have to agree with this. It takes quite a lot of experience, maturity, and wisdom as a software engineer to learn to appreciate viewpoints that do not agree with your current view.


    > Anybody can stare at someone else's code and criticize it.
I think there is a distinct difference between "criticize" and "uncover" and it seems like it's pretty easy to distinguish a candidate that can only criticize and not uncover.

Part of this is that it depends on using good exercises to begin with. For example, find a PR that fixed a performance issue and use the before code as the exercise to see if the candidate spots the issue. Do they propose the same fix? Perhaps they can propose an even better fix.
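
To make this concrete, here's a minimal sketch of what such a "before" snippet might look like -- entirely hypothetical, not from any real PR -- with a planted performance issue:

    def find_inactive_users(all_users, active_ids):
        """Return the users whose id is not in active_ids."""
        inactive = []
        for user in all_users:
            # Planted issue: active_ids is a list, so this membership
            # test is O(m), making the whole loop O(n * m). The expected
            # fix is to build a set from active_ids once, up front.
            if user["id"] not in active_ids:
                inactive.append(user)
        return inactive

The interesting signal isn't just whether they spot it, but whether they ask how large the inputs actually get before proposing a fix.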

Another example might be to use code from before a refactor and ask the candidate for feedback to see how they might think about the code, why it should be refactored, and how they would propose the refactor. Will they come up with the same analysis as your team? Perhaps they have some approach that's entirely novel. Or they may completely miss the point of why the code should be refactored.

It seems that such an exercise can reveal a lot about the depth of experience of a candidate without many of the downsides of live coding exercises focused on leetcode or long take-homes that then require follow-ups to close the loop.

Like any tool, there's No Silver Bullet, but it can be another option to have.

   > However, great job on building this product! It's a great start and I wish you the best of luck! 
Thanks!


> Part of this is that it depends on using good exercises to begin with. For example, find a PR that fixed a performance issue and use the before code as the exercise to see if the candidate spots the issue. Do they propose the same fix? Perhaps they can propose an even better fix.

I think this exactly highlights the problem with this approach. Most senior engineers I've talked to would start by looking at metrics and traces rather than code. If those didn't exist, they'd start by adding instrumentation before trying to solve the performance problem. In fact, they'd consider a code-first approach a sign of a very junior engineer (or an out-of-touch EM). The original PR likely took all of that into account. Someone who figures it out code-first either has bad habits/approaches to performance issues (making them good at the wrong way of solving such problems) or just got lucky.


I'm against classic leetcode-style interviews -- asking questions wholly unrelated to any problems candidates will encounter on the job. And someone's ability to review code is definitely a useful thing to understand.

But I don't see how it tests whether someone can efficiently architect and build an appropriate solution to a problem. I think that's a pretty important thing to test. And that's what leetcode style questions are supposed to do, but have gotten completely out of hand (imo).

For what it's worth, I think great interview questions take problems you or others have solved (or tried to solve) at your company -- something the candidate might work on too -- and see what they can manage.

It'll start an on-topic conversation at the very least, and it gives a sense of their experience, seniority, and mental dexterity. If you have them fully implement it, you get a read on coding ability too.

I've worked with engineers that are pretty good at reviewing code, but not the strongest engineers for one reason or another (weaknesses in architecture or ability to get stuff done).


I think an important question here is whether you view architecture as an individual or team effort.

I've always viewed it as a team effort since it's rare for a single individual to understand all of the nuances -- especially a new hire -- of a particular application or domain space.

I don't view a candidate's architectural knowledge in a vacuum to be a meaningful sign of anything as there is insufficient context to determine whether a proposed design is optimal or not.


Totally fair.

Different positions require different interviews, and I think it aligns well with the argument that someone's ability to review code well is not a direct indication of their ability to be effective as an engineer.


> And that's what leetcode style questions are supposed to do

I am extremely doubtful about that.


I'm always happy to see efforts to improve the state of technical interviewing.

IMO this style of interview works better for more senior engineers, where you want them to think about performance issues, pitfalls, API considerations, maintainability, etc.

There's a separate but related workflow you could test which focuses on actually reading code to answer questions / debug issues. That's hinted at as one of the use cases in the FAQ ("roles like technical support or QA who may be tracing code, but not writing code"), but I think it's a sufficiently distinct workflow that it should be separated -- the difference here is reading a code sample, versus reading a pull request.

I don't think either is a full replacement for coding exercises -- I for one can passably read code in many more languages (and even make some small edits) than I'd be comfortable writing in (mostly due to lower familiarity with standard libraries, but I can always consult an API reference if I'm not sure about some function or type).


Code review is useful to a point, but it can become insanely frustrating to have your work delayed due to lack of resources or comments that are more about preference (or, worse, cargo culting) than function. It takes a team commitment on what matters and a managerial commitment that fast reviews are a priority.

As for feature suggestions: it's really good to get into the habit of stating the criticality of a comment, like a 1-5 score from nitpick to strong objection. Code review tools never have this, but it's a huge time/frustration saver.


That’s why, everywhere I work, I introduce Must, Could, Should.

It gives people the ability to express their opinion with coulds, improve for objective reasons with shoulds, and enforce in house conventions with musts.

The rule for a must is that to apply one you have to say yes to the question “would I apply this to code written by the CTO/Head of Engineering?” and to argue a must you have to answer yes to the question “would I query this from the CTO/HoE?”

It’s not perfect but it always makes code review a faster, less contentious process in my experience.

To be clear, I didn’t come up with this - I picked it up at my first real dev job and have carried it on.


There are only 4 cases where I have a non-optional comment:
* code has a bug
* code is not compliant with a style guide, API rubric, or similar
* architecture is flawed (usually due to skipping a design doc or review), introducing tech debt that wasn't agreed to
* missing test cases

Everything else is a suggestion I start with the word "nit" or "consider". I leave it up to the author to decide what action to take. Most often they take the suggestion, but not always. It's never been a problem either way, imo.


An issue for many self-taught developers: they may be good at leetcode and can build full-stack portfolio projects, but they've never worked in a "real commercial-grade" codebase, and this is a large cause of imposter syndrome. Have you considered gearing your platform to developers as well, where companies provide a small part of a codebase in a realistic setting that candidates might encounter on the job? I'm sure there are companies that would want to test for things that happen on the job vs. random problems (that seems to be your selling point over leetcode), and potentially browse well-performing candidates.


Like let companies post some sample code that anyone can pick up and review to provide feedback?


The same way HackerRank is used by companies for exclusive interview problems, but there are also many problems users can browse and do on their own. So here, too, companies could have exclusive exercises that they invite people to for interviews (like it is now), but also post public ones that anyone can try. And perhaps they could also browse the profiles/solutions of people who do the public ones.


As a recruiter for many years, and previously as a programmer, I can tell you that my gut feel says that might work.


I like the idea but I think competition will be pretty tough if it's positioned as a tool you'd use during a live interview. There's a number of free collaborative code sharing sites out there that don't require the candidate to sign up or sign into anything and that could be good enough to do this style of interview. You basically share a link with them to get started.

I've given lots of interviews through them: you share a ~100-line example, then you and the candidate go through it, positioned as a code review. You verbally chat about specific lines and can change the code as needed. This is the bare minimum you need to do this style of interview.

The interview itself is synchronous since it's done over a video call (webcam only since the doc is collaborative). This way a conversation can happen naturally and the candidate isn't blocked or getting hung up on the tools around interacting with the code. 99% of the time is spent verbally chatting about the code.

Now, if it's positioned as an asynchronous tool that someone fills in on their own and you review afterwards as if it were a submitted test result, that's interesting, but I think you'll miss out on good live discussions this way, and those discussions can really dictate a hire or no-hire. This is especially true for more junior or mid-level roles, where it's expected you'll be dropping lots of context-specific hints such as "I like where you're going with this; based on that, do you think there's an edge case waiting to happen on line 35?". You can have hints-as-a-feature for this tool, but they would be generic hints.


We’ve implemented a single-file code reading/refactor step in our interview process and it has been very effective.

I think you could have great success with this, good luck!


Thanks for the support!


Well done for launching this. This is in line with what I said before [0] about Leetcode being a less useful measure of a candidate's ability, given that it can be gamed easily via ChatGPT and LLMs that write code quicker and much better.

Code reviews, either closed-source or in the open (with optional evidence of contributions to open-source projects or having their own), make for a far better holistic measure of a candidate's ability than memorizing rehashed Leetcode/HackerRank questions that LLMs have already been trained on and that can be gamed quite easily, as shown in this experiment. [1]

As LLMs only get better, Leetcode, HackerRank, etc. will cease to be a good measure of programmer ability, other than being gamed into the ground.

[0] https://news.ycombinator.com/item?id=39209673

[1] https://interviewing.io/blog/how-hard-is-it-to-cheat-with-ch...


During the Engineering Manager interview setup with Google, you are given a choice between writing code (problem solving) or reviewing code. The latter is not any better, as you might be given only 10-15 lines of code to review. You end up guessing and filling in a lot of missing code, which I think is worse than writing code, where you have complete control.


I like a code review process better than a coding exercise for sure, but it’s not clear to me that it helps avoid LLM-enabled interview “performance enhancement”. Avoiding “take home” is a big step, but even in real time, determined LLM use can make someone look better in an artificial context than they will be in practice. And, as others have pointed out, being good at code reviews is one facet of development, and over-representing it to the detriment of other skills isn’t great. If your code review conversation spans all those areas, maybe that’s the best we can do?

I don’t have a great alternative (I’ve used a similar approach for years), and the app is certainly interesting. I might be giving it a try.

Yes, using LLMs is a skill too.


I had an interview where I was given a relatively small project -- just a basic multi-threaded web widget. Then a later part of the interview was a live code review where we went over the code I had written and worked out a bug or two.

I really enjoyed that interview process. I wish I had gotten that job.


Congrats on launching! I also dislike leetcode very much and truly feel like it can give neither a true positive nor a true negative.

Reviewing, bug fixing or even working a few hours/day in a codebase is, in my opinion, a better way to assess seniority, in every possible way.


Hi, I’m the creator of gitinterview.dev, which I primarily built to be able to run code review as an interview (async or sync).


This solves a candidate’s problem (hating leetcode), not the problem of the company that is presumably making the purchase.


I think many companies using Leetcode are doing so because they don't know how to hire engineers, so they're outsourcing the problem to Leetcode. In that respect, those same companies are unlikely to think through interviewing enough to choose a tool like this.


    > ...are doing so because they don't know how to hire engineers
I agree with this. I think teams no longer know how to actually select and filter good engineers, so they spend multiple weeks filtering candidates using take-homes, live assessments, follow-ups, panel interviews, and so on because it's what everyone else does.

Instead, this approach tends to yield the "least worst" candidates: the ones who have the time to waste going through a 3-week interview process.

The take-home is the most egregious. I think the solution is rather simple: what would knock your socks off if you saw it in a submission to a take-home? Could you design 2-3 questions that get right to that scenario and test for that output? It seems like it would be a far more efficient use of time.

I wrote a followup blog post on this: https://chrlschn.dev/blog/2023/10/your-interview-process-is-...


Completely agree, they aren’t really in the market.


I'd say sadly it is almost the inverse.

It's the companies that hire really high volumes that I see resort to leet code tests as sort of a great filter.

I rarely see small companies use leet code interviews, but that does not say much about the rest of their interview quality.


I disagree. Leetcode is less representative of your job than code review on a PR. Code reviews are an everyday activity, and mistakes in them can easily cause outages.

What's your argument that code reviews aren't a good representation of skills?


I’m sure you’re familiar with the arguments and don’t believe them. So I’ll just say this:

Clearly a code review is an easier task, and so provides a weaker signal.


Reciting all x86-64 opcodes and their possible encodings from memory is even more difficult, so it must be an even stronger signal. Why not test programmers solely on this essential knowledge? It’s almost unavoidable that their code will execute on an Intel/AMD CPU at some point, so it’s best to filter for programmers who have the strongest grasp of the fundamentals.


Easy to refute that. That test gives you 0 every time you run it. Leet code gives a 1 10-20% of the time.


If you let people know that they can get a $300k job by memorizing x86-64 opcodes, you will start getting >0 pass rates on this test. It’s a given.

When people have time to prepare for the test and the incentives are high enough, you only get what you measure. The leetcode fans don’t seem to understand this.


If that’s true then a large number of candidates should be passing leet code tests, right?


As someone who hires a ton of engineers, I wholeheartedly disagree with this.

I actually really like this concept. The main benefit is that this approach allows a more conversational interview and also lets you cover a broader range of real-world problems.

There is almost zero correlation between a candidate's ability to be an effective collaborator and their ability to solve leetcode problems. The former is a lot more important to us than the ability to churn out code.


More than that: a lot of candidates who are good at leetcode challenges are good at them because they specifically practice them.

Does this translate to creative problem solving and thinking? To productivity when encountering novel tasks? To being a good teammate? There is a stronger correlation with "having free time" and "solving leetcode" than being a great engineering teammate, IMO.


That’s been addressed. Practice vs. IQ is possible but rare, and represents skin in the game.

If you don’t think it shows problem solving, why would you prefer a test that involves no writing or creation?

Reading is very practicable, and there is the issue that you can’t evaluate someone smarter than you.


> Code review as interview

Shouldn't the description be the other way round? "Interview as code review"


This is a really cool idea. But is it me or does the text under Rationale look wonky?


I agree with your premise that code reviews are a good exercise in interviews. But I'm struggling to see what this app does that screen sharing an editor couldn't do. The Benefits section doesn't really sell it to me.

Another aspect that would be helpful is to let the candidate implement the suggestions they make directly in the code. For example, say that you deliberately introduce a bug or issue; then, in addition to them pointing it out, a follow-up question of "how would you fix it?", the discussion about that, and the final fix would be very beneficial signals. For this to work, it's not sufficient to have a read-only interface that only allows adding comments[1]; they should have access to a proper IDE.
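
As a purely hypothetical illustration (invented example, not from the app), you could plant an off-by-one like this, then follow "do you see the bug?" with "fix it right here":

    def moving_average(values, window):
        """Average of each window-sized slice of values."""
        averages = []
        # Planted bug: the range stops one slice short, so the last
        # window is never averaged; it should be
        # range(len(values) - window + 1).
        for i in range(len(values) - window):
            averages.append(sum(values[i:i + window]) / window)
        return averages

Watching someone go from "the last window is dropped" to actually correcting the bound tells you more than the comment alone would.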

Of course, they could always game this as they would a leetcode exercise, but I think that allowing candidates to use all tools at their disposal during interviews that they would use during actual work ultimately helps you judge their real performance. This includes their own editor, with autocompletion, AI assistance, access to documentation, online resources, etc.

We'll have to accept the fact that AI tools are here to stay, rather than set up artificial interview scenarios that don't reflect the reality of candidates' day-to-day work. The actual discussion during the interview will allow interviewers to judge performance and, perhaps more importantly, whether they want to work with this person or not.

[1]: Why would they need to type in their feedback during a live interview? I saw text comments in one of your screenshots. Are reviews meant to be conducted asynchronously?


Well, first, it's open source and free :) I hope that folks that find it useful fork it and build their own extensions or contribute to the project.

Second, it has some basic functionality to manage candidate workspaces and track comments, and originally there were plans for side-by-side comparison of candidates' feedback and detection of outliers.

I think that, generally, as AI-powered coding tools become more powerful and more prevalent, the productive programmers will be those who can quickly review the output for correctness in the domain. (Maybe at some point, with 1m+ context, this won't be an issue!)

You mentioned also being able to fix the issue - I'd argue that in most cases, if a candidate can spot the issue, then it's a matter of reading docs, checking StackOverflow, or even asking ChatGPT for a proposed fix. Spotting the issue seems the more interesting and challenging task, IMO.


I agree with this.

I see a lot of benefits in advocating this style of interview. However, I am struggling to think of what good a SaaS brings here.

For one, we use GitHub for all code reviews, and we'd want the interview to be as close as possible to the environment the engineers are familiar with. Any time we tried bringing another tool into the process, it only caused additional stress/friction for everyone involved.

Also, I found that constructing example code for interviews produces worse insights than using your actual codebase. We have a few codebases of lesser importance that we use for interviews. The extra benefit is that the candidate gets to see an approximation of how your actual codebase is structured.



