
Ask HN: How do you conduct code reviews? - mooseburger
I have some idle time and have been asked to look into producing a code review checklist. This is quite an interesting matter: to me, much like programming, code reviewing is a highly idiosyncratic, ineffable thing, and so I wonder whether the community here has somehow managed to formalize it.

There are obvious things to put in such a checklist (e.g. short and legible methods, checking for potential null reference exceptions), but sometimes even glaring bugs sneak through code review. A recent example: in a PR, I noticed a method call had been deleted and replaced with nothing, and asked the developer about it. The developer assured me that the functionality the method had provided was implemented elsewhere in the PR, and that indeed seemed to be the case.

It turned out that the other implementation did not in fact cover all the functionality the deleted method call had provided, and an entire feature was lost due to that PR. In light of this, one checklist item could be "when a method call is deleted, verify what the method did". Also, deleting that call left the method with no remaining references (dead code), yet the method itself was not deleted, so neither I nor the other reviewers saw its body in the diff; seeing it would have revealed that a feature was being lost. So another item could be "if method calls are deleted, check for dead code".

Do you have a formal method for conducting code reviews? What things do you look for?
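To make that failure mode concrete, here is a hypothetical Java sketch (the class and method names are invented for illustration; they are not from the actual PR). Deleting the only call site silently orphans the method, and the behavior lost lives only in the now-unreferenced body:

```java
// Hypothetical illustration: removing the only call site leaves the method
// body (and the feature it implemented) as dead code that never shows up
// in the diff.
class ReportService {
    String buildReport(String data) {
        String cleaned = normalize(data); // if this call is deleted...
        return "REPORT:" + cleaned;
    }

    // ...this method becomes dead code. The trimming/lower-casing behavior
    // lives only here, so deleting the call silently drops that behavior
    // unless a reviewer opens the (now unreferenced) method.
    private String normalize(String s) {
        return s.trim().toLowerCase();
    }
}
```

Most IDEs and static analyzers can flag unused private methods; wiring such a warning into CI would surface this class of mistake without relying on reviewer vigilance.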
======
beliu
In my experience, code reviews are more about knowledge sharing than
correctness checking (unit tests and CI are better direct correctness checks).
You prioritize understanding the code that has changed and hope you get
correctness as a side-effect of that understanding. If you try to directly
construct an exhaustive correctness checklist, you'll end up producing a CR
process that's overly burdensome and that no one actually follows. So create a
checklist for understanding the code, not checking correctness.

Here's the rough, hierarchical checklist I wrote up for myself at some point:
[https://gist.github.com/beyang/b3b494f028996b32ab3dae237bc1c...](https://gist.github.com/beyang/b3b494f028996b32ab3dae237bc1c2eb)

I don't go through this entire checklist for _every_ code review obviously,
but whenever there's a diff that's large/complex enough for me to say "hoo-
boy", I revisit it to remind myself of what I actually need to check in order
to obtain an adequate understanding of the change.

The other thing that matters is tooling. One of the annoying things about most
CR interfaces is they lack the capabilities of an IDE that actually let you
jump around and make sense of the code in a tractable manner. Shameless plug:
I helped build a browser extension that adds these capabilities back into most
of the mainstream CR systems:
[https://docs.sourcegraph.com/integration/browser_extension](https://docs.sourcegraph.com/integration/browser_extension).
People swear by it and I'd highly recommend trying it out.

To get back to your original scenario, it seems like you asked the right
initial questions, but stopped short of building a satisfactory understanding
of what was going on. You took the author's word for it, when you could've
asked a few more probing questions to check your (and their) understanding of
the change. At the end of the day, the threshold for "LGTM" is a function of
both how well you understand the code and how much you trust your teammates.
Better process and better tooling can help out a lot with the former, but the
latter is a human element that can't be ignored either.

------
pedro1976
In my previous job we went through the same process. A review involves two
parties (the author and the reviewers) with different goals and obligations.
Fast forward, we decided that the goal of a reviewer is to check

\- readability/maintainability

\- coding style

All other things (e.g. linting, formatting, simple error checks...) have to be
automated, because doing them by hand is super annoying.

> As a general rule, keep the process lightweight, it should not be a burden,
> make it a fun activity.

Here are some more questions that came up:

\- What's a good size for a review? -- Something up to 400 LOC; there are
papers on that topic.

\- How long can the author wait, i.e. how often do reviewers have to review?
-- We tried to establish a habit of doing reviews twice a day.

\- How many people have to accept? -- This question is interesting, because
review quality tends to drop as the number of reviewers grows.

\- Who reviews? -- Opt-in; if knowledge sharing is your goal, keep it open.
Some companies mark parts of the source tree with OWNERS files to make this
more explicit.

\- Are there mandatory reviewers? -- Yes, the owner should give their
approval, unless they are on holiday.

\- How do you resolve discussions (and are they efficient)? -- IMO, pairing on
non-trivial comments was the most efficient approach.

\- If you are unhappy, can you decline a review? -- That would be brutal.

\- How do you communicate/notify people?

\- How do you handle late comments? If you use pull requests, reviewers could
still comment?

\- To resolve a comment in the review, do you change the code and revoke all
approvals?

\- Do you want to enforce behavior -- just make it a fun activity :)

------
mav3rick
Things I look for -

\- Is the commit message coherent? Can I understand the intent of the CL from
the message?

\- If yes, is this commit doing too much? Should it be split?

\- Is it functionally correct, after a brief look at the code?

Now on to the implementation -

\- Are there any big functions?

\- Are there useless one-line functions?

\- Is there too much nesting with conditionals? Look for the snake pattern.
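As a sketch of what that "snake pattern" check means in practice (the validation rules here are invented for illustration), the same logic can be written with crawling nested conditionals or flattened with guard clauses:

```java
class Validator {
    // Nested "snake" version: every check adds a level of indentation.
    static String describeNested(Integer age) {
        if (age != null) {
            if (age >= 0) {
                if (age < 18) {
                    return "minor";
                } else {
                    return "adult";
                }
            } else {
                return "invalid";
            }
        } else {
            return "missing";
        }
    }

    // Flattened with guard clauses: identical behavior, no nesting.
    static String describeFlat(Integer age) {
        if (age == null) return "missing";
        if (age < 0) return "invalid";
        if (age < 18) return "minor";
        return "adult";
    }
}
```

In review, spotting `describeNested` and suggesting the guard-clause form is usually a quick, uncontroversial comment.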

Concurrency -

\- Make sure there is no cute "trick" used to guarantee safety. Message
passing is preferred; if that is not possible, fall back on basic locking
techniques and language primitives. For example, use ExecutorService in Java
rather than reinventing your own wheel.
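A minimal sketch of that advice, using `java.util.concurrent` instead of hand-rolled threads and locks (the sum-of-squares task is just a stand-in workload):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

class SquareSum {
    // Sum of squares computed on a fixed-size pool. Future.get() provides
    // the synchronization, so no manual locking is needed.
    static long parallelSquareSum(List<Integer> nums) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Long>> futures = nums.stream()
                    .map(n -> pool.submit(() -> (long) n * n))
                    .collect(Collectors.toList());
            long total = 0;
            for (Future<Long> f : futures) {
                total += f.get();
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }
}
```

In review, a hand-written equivalent with raw `Thread` objects and a shared counter guarded by `synchronized` (or worse, nothing) is exactly the kind of "cute trick" worth pushing back on.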

------
marktangotango
In 10 years I have found very little value in code reviews. I’ve seen orgs
expend tremendous amounts of time reviewing their code and still end up with
bug-ridden spaghetti and missed requirements.

What’s more valuable to me is design and implementation reviews, where you can
ensure people know about and are using the features provided by their language
and framework and not reinventing the wheel, poorly.

------
ken
I'd say that once you're at the point of formalizing reviews, you should have
requirements documentation, and then, most of this becomes just an instance of
"does the implementation meet the specification?".

When I worked in aerospace, every feature was linked to the requirement line
that it implemented. Then it's easy to identify functionality which was
removed (or left in) by mistake.

------
V-2
This should have been caught by (failing) tests. Code review is not the right
tool for the scenario you describe. There's an item for the checklist: is the
new (or refactored) code covered by unit tests that demonstrate its
correctness as well as the absence of regressions?
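A minimal sketch of such a pinning test, using plain Java assertions rather than a test framework (note that `java` must be run with `-ea` for assertions to fire); the `slugify` helper is hypothetical, standing in for whatever feature was lost:

```java
// Hypothetical regression test: pins down each piece of a helper's contract
// so that a refactor which silently drops part of it fails the build.
class SlugifyTest {
    static String slugify(String s) {
        return s.trim().toLowerCase().replace(' ', '-');
    }

    public static void main(String[] args) {
        // Each assertion documents one piece of the behavior the PR in the
        // original question would have needed to preserve.
        assert slugify("Hello World").equals("hello-world") : "lower-casing or dashing lost";
        assert slugify("  padded  ").equals("padded") : "trimming lost";
        System.out.println("all regression checks passed");
    }
}
```

Had the deleted method call been covered this way, the PR that dropped it would have failed CI instead of relying on a reviewer noticing the missing diff.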

------
ponyous
Besides code quality and logic checking, I think it's important to also check
if all the relevant docs have been updated. This means comments, README files,
external wikis...

