
Code Review Checklist - GarethX
http://blog.fogcreek.com/increase-defect-detection-with-our-code-review-checklist-example
======
duncanawoods
I recently read Atul Gawande's "The Checklist Manifesto", which covers applying
aviation-style checklists to surgery. The outcome was a 30% reduction in
post-surgical complications.

Reading it made me see how bad most software-process checklists are. They
tend to be laundry lists mixing obvious minutiae with vague, uncheckable goals.
They are not practical, executable tools but well-meaning documents that are
ignored at the error-prone moments when a checklist has the most potential
impact.

A checklist for a good checklist:

1. Plan for a specific trigger giving a single pause point of 30-90 seconds, at
which point the checklist is opened and executed

2. Decide whether it is a DO-CONFIRM or READ-DO list

3. Design to engage the brain rather than switch it off

4. 5-9 items at most

5. Don't include any action that is rarely forgotten; i.e., the criterion for
inclusion is the probability and cost of error. This is very different from
listing the most important steps or attempting to fully document a process.

6. Regularly test and measure the checklist for effectiveness

I would say Fog Creek's article is in the "read once" training-material
category and not a practical checklist. For a real checklist, I would
interview the team, examine recent commits, and look for the 5 most serious
_real_ issues you are experiencing that you need to eliminate from future
code. That is now a review checklist you can mandate, measure, and iterate on.

~~~
smackfu
Practically speaking, I think the parts of code review that could actually be
covered properly by a checklist could also just be done by software. The parts
that are "fuzzy" shouldn't really be in a checklist at all.

~~~
MereInterest
Perhaps automated tasks could be a single checkbox: "Has this task been
performed?"

For example, instead of having a check for whether braces are in the correct
spot, have a check for whether 'astyle' or a similar code formatter has been
run.
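The "has the formatter been run?" idea collapses a whole family of style items into one mechanical check: is the file identical to its formatted form? A minimal sketch, using a trivial whitespace normalizer as a stand-in for astyle (the function names here are invented for illustration):

```python
def format_source(text: str) -> str:
    # Stand-in for a real formatter like astyle: strip trailing
    # whitespace from each line and ensure a trailing newline.
    lines = [line.rstrip() for line in text.splitlines()]
    return "\n".join(lines) + "\n"

def formatter_was_run(text: str) -> bool:
    # The checklist item reduces to one yes/no question:
    # does the source match its own formatted output?
    return text == format_source(text)

if __name__ == "__main__":
    clean = "int main() {\n    return 0;\n}\n"
    dirty = "int main() {   \n    return 0;\n}\n"
    print(formatter_was_run(clean))   # True
    print(formatter_was_run(dirty))   # False
```

In practice you would shell out to the real formatter in a check/dry-run mode and diff the result, but the yes/no structure is the same.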

~~~
joshuacc
That can work, but ideally, a CI server would actually run all of those,
leaving only a handful of things that require human judgment in the checklist.

------
ddlatham
From my experience doing reviews and having my code reviewed, the most
important things are to understand:

      - What the intention of the change is (what is this trying to achieve)
      - What the code is actually doing

There are three levels of review I typically observe:

      1. Skim, find one or two comments to leave. LGTM!
      2. Review each line or method in isolation. Recognize style and
         formatting errors. Identify a couple of local bugs.
      3. Understand the purpose and the code.

#3 takes time. However, it's what it takes to find the issues where the
implementation doesn't actually accomplish what it set out to do (at least in
all cases), to recognize where the code is going to break when integrated with
other modules, and to suggest big simplifications.

#3 is also where you get one of the biggest review benefits - shared
understanding of the code base.

In my experience developing software, it's easy to have a sense of ownership
and care most about the code one personally writes. That is often reinforced
by the development process - you are responsible for delivering an issue.
However, we're much less likely to feel responsible for the code we are
reviewing. We don't answer for that deadline. As a result, it's often seen as
a favor to review someone's code (even when the policy is for all code to be
reviewed). It's also often not time allotted or planned for. So the tendency
is to rush them.

Do other people observe the same thing? Have people used a process which gives
more incentive to invest time in code reviewing well?

~~~
colinbartlett
My experience is similar.

At times, I have become very frustrated that I always contribute a #3 but
all I ever receive is a #1 on my code.

~~~
hckr1292
Ditto! I see code review as an opportunity to learn from other members of the
team and also to make sure I don't have to fix something tricksy in the middle
of the night.

I think that others are more (rationally) motivated by the desire to iterate
quickly and deliver a large volume of features and fixes in a short window of
time.

------
planetjones
It seems a bit of a lazy article to me. So apologies but my reply is only
brief and I am not going to spend a lot of time on it...

But... on the one hand it talks about programmers making 15-20 common mistakes.
Fine, create a checklist for those issues. And then at number 1 it asks "does
the code work?".

Of course it's common that the code doesn't work, but it may not be a binary
decision - requirements ambiguity, corner cases, dependent on data, etc.

I also have issues with the Documentation section - none of that will
necessarily help "stop more bugs"...

I don't think the purpose of the article is clear, and I don't think it's
particularly consistent between the title, the checklist, and the explanatory
paragraphs.

~~~
oneeyedpigeon
Yeah, unless you're Knuth, the answer to #1 is almost certainly "no".

------
burnout1540
Here are some other code review checklists, for comparison:

[https://wiki.apache.org/hadoop/CodeReviewChecklist](https://wiki.apache.org/hadoop/CodeReviewChecklist)

[https://gist.github.com/bwest87/10049924](https://gist.github.com/bwest87/10049924)

[http://www.mindfiresolutions.com/Code-Review-Checklist-238.p...](http://www.mindfiresolutions.com/Code-Review-Checklist-238.php)

~~~
godber
Cool, thanks for sharing!

I am very procedure oriented and have always liked checklists and templates. I
have just created a github repo to share checklists and templates here:

[https://github.com/godber/software_dev_templates_and_checkli...](https://github.com/godber/software_dev_templates_and_checklists)

Feel free to contribute if you think it's a good idea.

I've started it out with the Fog Creek checklist ... if the Fog Creek guys
don't like my having included it, I'll happily take it down.

------
duncans
10 lines of code = 10 issues.

500 lines of code = "looks fine."

Code reviews.

[https://twitter.com/iamdevloper/status/397664295875805184](https://twitter.com/iamdevloper/status/397664295875805184)

------
jstanley
While I have no problem with the concept, I think this list is rather sloppily
produced.

On some of the items a "yes" is good, and on some a "no" is good. A little
more thought could have made the list much easier to use.

------
herge
I was just thinking about code reviews and how shitty Fog Creek's Kiln is as a
code review tool. It has (or at least had) a weird "workflow" where you can't
approve or finish a review. Also, it is very hard to see the full list of all
the current reviews, because even 'approved' reviews still stay in the
list. And of course, browsing code in Kiln is very slow compared to GitHub.

All in all, it's subpar compared to GitHub pull requests or ReviewBoard.

------
dothething
Code reviewing is such a hilarious concept in programming. Everyone thinks
it's the most important thing, yet I've only seen rubber stamps and
uninterested reviewers.

I'd be so much happier if we could get rid of the entire notion and do up
front design and pair programming.

Also, oh my god that checklist. Anyone that has a business dedicated to
delivery is going to have to throw that thing out the window.

~~~
manifestsilence
I don't know if newer findings have contradicted the advice of The Mythical Man-
Month, but I seem to recall it claims testing finds ~50% of bugs whereas
review finds ~80%, with a key caveat being that the two methods tend to find
different bugs, so the overlap of using both is useful.

I think it also depends on the type of code and the consequences of failure. I
have seen good success reviewing data migration and modification code for
medical records because that stuff was considered important data and there was
a culture there such that if something went wrong, they would ask, "why wasn't
this caught in review?", so the process was taken seriously.

I agree with the pair programming sentiment though - I think it could be
looked at as a form of code review with a fresher context since you see
mistakes as they are made rather than puzzling out the thought process of the
other coder when it's cold.

~~~
RogerL
I've never been a big fan of code review for finding bugs. Sure, you can find
bugs, but the question is, how efficiently?

Long ago I learned a couple of simple rules that get my defect rate very close
to zero: asserts everywhere, and step through every line of code. Seriously.
Write two lines of code, fire up the debugger, and step through them,
inspecting each variable. Any tiny 'unit' that you write should go through the
debugger, where a unit might be that 5-line function you are writing, or a bug
fix, or whatever. It catches a vast number of bugs if you are writing
in a modular, perhaps functional style. Tons of side effects? Well, it's less
good at that. Asserts do 'okay' at finding those things, but of course the
time to find those bugs is longer.

This buys you a few things. First, you are 'reviewing' your code while you
write it, which is when it is freshest in your mind. It is hard to mentally
model the behavior of the computer, hence code reviews are mentally difficult.
The debugger ALWAYS shows you what the computer will do (bar things like nasty
thread interactions). The debugger does not lie. You get to see nasty things
like NaN or underflow that might just pass silently depending on your error
handling, which is again hard to find in a code review. And, of course, the
sooner you find bugs, the cheaper they are to fix - so the best time is about 10
seconds after you typed that line of code. After that, the costs grow either
proportionally or exponentially.
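The "asserts everywhere" half of this is easy to illustrate. A minimal Python sketch (the function and its contract are invented as an example), showing how dense pre- and postconditions make the silent-NaN case fail loudly at the point of entry rather than propagating:

```python
import math

def mean(values):
    # Preconditions: non-empty input, every element finite.
    assert len(values) > 0, "mean of empty sequence"
    assert all(math.isfinite(v) for v in values), "non-finite input"

    result = sum(values) / len(values)

    # Postcondition: the mean of finite values lies between min and max.
    assert min(values) <= result <= max(values), "mean out of range"
    return result

if __name__ == "__main__":
    print(mean([1.0, 2.0, 3.0]))  # 2.0
    # mean([1.0, float("nan")]) fails fast with "non-finite input"
    # instead of silently returning NaN to later computations.
```

The point is not this particular function, but that every small unit states its assumptions as executable checks, so a debugger step-through (or any test run) surfaces violations immediately.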

I know very few people who use this discipline, but it changed my life
when I took it up. Any time I slip and fail to do it, I rue it.

Pair programming gets you similar benefits. I find it harder to
think/talk/respond to another person when I am deep in thought (different
brain pathways, or so it feels), so I'm more a solo programmer by nature.
However, I would still step through the code when pair programming; I
guarantee that your mental model of your code is likely to be flawed. The
second person will reduce, but not eliminate, that threat.

I understand not all code bases/IDEs/whatever allow this (long startups, 5+
minute compiles, etc.), but if it is available, avail yourself of it. If you
program in a dynamic language and find yourself using a REPL all the time to
test your code, you are doing a form of this, but it is still better to test
the actual code in place.

------
dahart
Am I weird if I say "thank you" for the checklist? I think it's a good
starting point and has some helpful reminders.

As always, there are some really great and insightful comments and suggestions
here, but I'm a bit surprised about the overwhelming negativity around code
reviews and this post.

FogCreek's post is just a starting point, and they said so right in the post.
It wasn't intended to be taken as the end-all-be-all of code review
checklists, but that seems to be getting ignored, sometimes in favor of
unconstructive criticism.

It's a decent starting point, and the main takeaway is that there are lots of
angles to think about when reviewing commits, every single time, not just
whether the code works. It's easy to forget all the angles because code
reviews are hard to always do well, so having a checklist, even one with
redundant items or minutiae, really isn't the awful idea it's being made out to be.

The included video, which based on the comments I'd guess almost
nobody watched, also clarifies their process and intent, and addresses some of
the comments here, for example pair programming. Joel mentions pair
programming being the most extreme form of code review. In my own experience,
he's spot on when he says pair programming is really useful in certain
situations, and very hard to use frequently.

One angle that hasn't been discussed is that a long checklist is actually a
good thing when it comes to justifying time spent on code review to managers
(as well as to fellow coders). In my own experience, I've witnessed the attitude
that code review time is unproductive, and the thinking that code reviews
shouldn't take long. Having a specific process that demands a minimum amount
of attention is a good thing on its own, completely outside of the benefits to
the code and code quality.

Edit: spelling

~~~
hckr1292
True, this isn't designed to be an end-all solution. Based on the number of
comments and the range of opinions/supplemental links, I'd say you are spot on
about this being a great starting point.

Personally, I found the very idea of a checklist itself to be a helpful idea
that I'd like to use at work. The specifics are going to depend a lot on the
language, project, and team.

RE: long checklists -- I think that most of us commenting on this thread are
very passionate about code reviews, whereas our teammates are less so. I think
a short code review checklist is more likely to engage coworkers in code
review, period.

------
dankohn1
An automated checklist seems far superior for most of the items he covers.

We have had good success using Pronto to automatically run RuboCop, Rails Best
Practices, and Brakeman on CircleCI against every commit on GitHub. Then we
still have another team member review a branch before approving a merge, but
we know that all tests pass, coverage has not dropped, and there are no
obvious errors.

------
greggman
This is funny coming from Fog Creek, whose founder seemed to suggest such
checklists are bad... or I guess I'm reading between the lines. What I
remember taking away from Joel's blog was something like "the more steps you
add, the less likely they'll get done, and people will work around the process
you've prescribed". But maybe I misunderstood.

~~~
mhp
The most famous thing Joel has written was a checklist.
[http://www.joelonsoftware.com/articles/fog0000000043.html](http://www.joelonsoftware.com/articles/fog0000000043.html)

------
mks
The checklist oscillates between high-level (does it work?) and very specific -
even stuff an auto-formatter could take care of (braces).

Creating such a checklist is a great exercise, though - writing my own made me
think hard about what is important. Then I realized writing it down is not
enough - how do I know it is not a bunch of made-up rules? So I have also tried
to provide a rationale for each topic. At the same time, I have tried to keep
the list actionable.

You can judge if I did well here:
[http://www.michalkostic.com/post/104272540521/code-review-ch...](http://www.michalkostic.com/post/104272540521/code-review-checklist)

------
reledi
When I first started doing code review, there was a similar post on HN. I
thought it was a good idea and tried to use a checklist, but a lot of the time
you skip over items because they don't apply. And as you do more code review
you get better at identifying possible issues, reducing the need for a
checklist.

Nowadays you also have software (e.g. Hound) that does some basic checks for
you.

------
FrejNorling
I've just released the first private beta of a tool for managing reusable
checklists, like code-review checklists. For you HN readers: if you are
interested, you can try it out at
[https://app.firesub.com](https://app.firesub.com)

~~~
cmpb
Trying it out now. So far, I am enjoying how clean the interface is.

It really feels like a glorified TODO list (or TODO list manager), but I do
think that there is potential here. I would recommend adding features that
separate it from the usual TODO list, e.g.:

- Add flowchart features:

  - conditionals (if this, then go there)

  - subtasks

- Allow finer control over the contents of headers and items:

  - Currently, it looks like you can add descriptions which display directly
beneath the item or header (and it's very clean looking!), but it would be nice
if I were able to add HTML content (consider allowing markdown?)

- I may be using it incorrectly, but I accidentally started a list I was still
creating and kept adding items anyway. Consider adding the ability to push new
tasks from an already-started list back to the parent list from which it was
created.

- Other things? What are your currently planned but unimplemented features for
this application?

I could see myself (and my team) using this application if the bullets above
were addressed.

~~~
FrejNorling
Wow, thanks for the feedback!

Here are my comments on your suggestions:

- Flowcharts: I think I need a bit more data on how users use the app and on
what direction I should take the app and its features before implementing this,
but it's very possible I will in the future.

- Conditionals: Yes, I'm thinking of adding something like this, but at a
higher priority I will add Triggers, so a checklist can be started
automatically based on, for example, a time interval or a callback over HTTP. I
will also add Outcomes, so you can set up an action that should happen when a
checklist is completed, like triggering another checklist.

- Finer control over headers and items: I will work on this, with the goal
of making it as simple as possible to get started with the first checklist, but
over time being able to use more advanced features for creating items.

- Extended item details: Markdown is in the backlog. Also an item audit trail,
like "Frej changed the description from ... to ... (5 min ago)", and so on, so
you have a complete track record of changes.

- Merging items from started checklists into the parent checklist: This is
something I'm going to implement pretty soon, so that when a person performs a
checklist and discovers the need for a new item, or an obsolete existing
item, they don't need to make the change first on the active checklist
and then the same change on the parent checklist.

- On the roadmap for the near future: Triggers (see above), Outcomes, audit
trails for checklists & items, a "News feed", and a Notifications center are
things I have on the roadmap.

Thanks again for your thoughts!

/Frej

------
stevebot
It strikes me that most of these points can be automated (well, depending on
the language, perhaps). I prefer automating and blocking check-ins based on
criteria like these. Leave code reviews for design discussion.
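That blocking idea can be sketched as a pre-commit-style gate: run every automated criterion and refuse the check-in on any failure. A minimal sketch (the two checks and the 100-character limit are invented examples, not a recommended policy):

```python
def no_tabs(source: str):
    # Returns (passed, message) - an example automated criterion.
    return ("\t" not in source, "tab characters found")

def line_length(source: str, limit: int = 100):
    # Another example criterion: no line longer than the limit.
    ok = all(len(line) <= limit for line in source.splitlines())
    return (ok, f"line longer than {limit} chars")

def gate(source: str) -> bool:
    # Run every check; block the check-in if any one fails.
    passed = True
    for check in (no_tabs, line_length):
        ok, message = check(source)
        if not ok:
            print(f"BLOCKED: {message}")
            passed = False
    return passed

if __name__ == "__main__":
    print(gate("x = 1\n"))     # True: clean source passes the gate
    print(gate("\tx = 1\n"))   # prints a BLOCKED message, then False
```

Wired into a hook or CI job, a nonzero exit on `gate(...) == False` rejects the commit, leaving the human review free for design discussion.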

------
jader201
The first one doesn't belong on this list:

 _> Does the code work? Does it perform its intended function, the logic is
correct etc._

I believe this concern lies with testing (automated or manual). I don't
believe a code reviewer should even have to know about the requirements of the
feature/bug, and should focus more on the design/quality vs. whether it works.

Mostly because it's almost impossible to know this by just reviewing the code,
without having to run it -- and I certainly don't think a code reviewer should
have to execute the code.

~~~
howeyc
Maybe I'm weird, but I think whether it works should be #1 concern over
design/quality. I agree design/quality is a close #2 though.

Also, perhaps you need more practice reading code, so you are not so reliant
on a "TEST PASS" output from a tool to know whether a particular section of code
is likely working.

~~~
jader201
I certainly agree that whether it meets the requirements is important
(debatable whether it's more important than design/quality).

I'm just saying it's not the concern of a code review.

~~~
collyw
Am I the only person who works in a place with very half-arsed requirements? I
have to guess a lot of the time at what management actually want
from what they say. Code quality matters 6 months later when they say it isn't
working (as expected) and I have to check what the code is actually doing.

------
walkon
> Are all functions commented?

No.

~~~
mattcrwi
I saw that and didn't bother reading the rest.

------
gkop
The two stock photos have no place in this article and give it the impression
of a content-farm piece. I am really surprised to find such tackiness on Fog
Creek's blog. I guess that's what happens when you trust your employees and
impose no editorial process.

