
Ask HN: How do you review code? - skewart
I'm hoping to find ways to improve the code review process at the company where I work.

My team has a fairly standard GitHub PR-based process. When you have some code you want to merge into the master branch you open a PR, ask another developer or two to review it, address any comments they have, and then wait for one of the reviewers to give it an LGTM (looks good to me).

The problem is that there can be a lot of lag between asking someone to review the PR and them actually doing it, or between addressing comments and them taking another look. Worst of all, you never really know how long things will take, so it's hard to know whether you should switch gears for the rest of the day or not.

Over time we've gotten used to communicating a lot, and being shameless about pestering people who are less communicative. But it's hard for new team members to get used to this, and even the informal solution of just communicating a ton isn't perfect and probably won't scale well.

So, has anyone else run into similar problems? Do you have a different or better process for doing code reviews? As much as this seems like a culture issue, are there any tools that might be helpful?
======
dcraw
We had a similar problem and have made a lot of progress by reducing the size
of changes submitted for review. We used to submit code only when we thought a
feature was complete, and the median review was probably 1000 lines. Now we're
closer to 100-200.

This has a ton of positive effects:

    
    
      - Reviewing code is much less daunting
      - Reviewers give more detailed feedback
      - Developers get feedback on lower layers (such as model definition) before writing a ton of code that depends on it
      - Reviews come quickly enough that developers don't need to switch to another project before continuing
    

It was a bit difficult to make this transition. We started out by explaining
what we wanted to do (submit shorter diffs) and why, and people generally were
on board. But most people didn't know how to do it. It's not as simple as
submitting a diff whenever you write 100 lines of code, since the diffs need
to be coherent. Here are some techniques we used:

    
    
      - Use feature flags to commit unfinished features to master/trunk without exposing them
      - Submit model changes as their own diffs, then API changes, then UI changes
      - As you work on a feature, pick out parts that seem ready for review, stage and commit them, send for review, then branch again from that point and continue working.  This takes a decent amount of git-fu, so train folks accordingly.
    
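The last technique above can be sketched end-to-end in a throwaway repo. Branch names, file contents, and commit messages here are invented for illustration:

```shell
# "Send part for review, branch again, keep working" -- a runnable sketch.
set -e
cd "$(mktemp -d)"
git init -q -b main
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "initial"

# 1. Carve the model layer out onto its own branch and commit it.
git checkout -q -b models-part
echo "class Invoice: pass" > models.py
git add models.py && git commit -q -m "Add invoice model"
# (push models-part and open a small PR from it here)

# 2. Branch again from that point and continue while review is pending.
git checkout -q -b api-part
echo "def create_invoice(): pass" > api.py
git add api.py && git commit -q -m "Add invoice API"

# 3. If review feedback amends models-part, replay the follow-up on top:
#    git rebase models-part api-part
git log --oneline api-part
```

The `git init -b main` form needs git 2.28+; on older versions, run `git init` and then `git branch -m main`.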

Another thing to note is that our reviews are not at all evenly distributed
around the team. We have a team of 13 engineers, and it's probably an 80-20
split—80% of reviews are done by 20% of reviewers.

~~~
simonhfrost
I agree it's important to reduce code review sizes, but I try to avoid
approaches that leave incomplete code in production, such as standalone
domain changes and feature flags. If a feature is cancelled before it's
completed, there will be unused code checked in, which to me is technical
debt.

I prefer to split domain changes into different commits on the pull request so
they can be inspected individually - eg #1 Create API, #2 Consume API with
model, #3 View Logic. And for feature flags, try to figure out a possible
fully implemented subset of features - eg show new model properties to the
user that will in the future be editable.

~~~
tamana
Feature flags are a huge win for building reliable systems. You want them.
Incomplete implementations should be modular, like complete implementations.
Don't fool yourself, all code is tech debt. If you aren't writing modular PRs,
you are in for a world of pain down the road anyway.

------
partycoder
What you describe is:

* There's a group of people, and at least one is needed to review.

* The reviewer has to decide to either make an effort or freeride by letting someone else complete the review.

* If no review is done, then the group cannot benefit from progress.

That in game theory can be represented through a model game: the volunteer's
dilemma.

[https://en.wikipedia.org/wiki/Volunteer%27s_dilemma](https://en.wikipedia.org/wiki/Volunteer%27s_dilemma)

The fact that nobody is reviewing the pull requests reflects that your team
members are competing against each other, to the detriment of the group.

What you need to do is to redefine the PR process so there's a main assignee
(use the github assignee feature), and that is the person accountable for
completing the code review. If you diffuse responsibility among competitive
people, the group loses.

The same happens with technical debt. If the group is competitive and managers
are non-technical, fixing technical debt is not rewarded, leaving it to
volunteering. Technical debt will continue to grow until a person volunteers
to fix it. That person will most likely not be rewarded at all; they will
only make it easier for their competitors to compete.

Unfortunately this also denotes a crappy culture. I don't really recommend you
to spend your career in such shit culture.

~~~
eoin_murphy
Even worse, working on technical debt may be viewed as a negative thing ("if
it's working why did you change it?" said the phb).

I've yet to find a code review process beyond a trivial implementation that
scaled. The best results I've had were with a rotating buddy system. (Every
week/month) you review someone else's code and they yours.

~~~
partycoder
There are some static analyzer metrics as well as metrics you can obtain
yourself. These metrics relate to the cost of maintaining code:

- cyclomatic complexity

- statement length, function length

- number of singletons / global instances

- dead code

- duplicated code (IntelliJ has a good duplicate detector that anonymizes
variables and ignores whitespace and comments)

- number of known functions/types per file (coupling)

- number of commits per file (git-effort from git-extras)

Some of them are tricky to obtain. Making a simple script that parses import
statements and outputs a DOT file with a digraph is useful to map
dependencies, which is good to explain coupling.
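
As a rough illustration (the demo files and module names are invented), a few lines of shell can turn import statements into a DOT digraph:

```shell
# Toy dependency mapper: grep Python import lines, emit a Graphviz digraph.
# The generated src/ tree stands in for a real codebase.
set -e
cd "$(mktemp -d)"
mkdir src
printf 'import os\nimport models\n' > src/api.py
printf 'import os\n' > src/models.py

{
  echo 'digraph deps {'
  for f in src/*.py; do
    mod=$(basename "$f" .py)
    grep -Eo '^(from|import) +[A-Za-z_][A-Za-z0-9_.]*' "$f" |
      awk -v m="$mod" '{ printf "  \"%s\" -> \"%s\";\n", m, $2 }'
  done
  echo '}'
} > deps.dot
cat deps.dot    # render with: dot -Tpng deps.dot -o deps.png
```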

------
arcticbull
Our code review policy is to require a minimum of 2 people who did not
contribute to your changeset review and approve your code prior to merging.
We've been able to reduce the amount of time a review takes in four big ways:

(1) Automated style checker. By adding a pre-commit hook to validate code
style in an automated way, the amount of noise in PRs went down dramatically.

(2) Don't do architecture review in PRs. If you're going to make architectural
changes, do so in a lightweight design document and get approval there first.

(3) If it languishes, go and physically (or virtually) ask the specific people
to take a look. This works best in my experience when you find someone else
who's having trouble getting their code reviewed and offering a trade. This
person is then likely to review your code quickly in the future if you do the
same.

(4) Feature flags allow you to get incomplete but atomic and correct units of
work merged to master without showing them to your end-user.

This way, code reviews are not about style (much), not about architecture
(much) and simply about making sure you executed on the plan correctly. Turn-
around times went from days to hours in many cases.
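
Point (4) can start as lightweight as an environment-variable check. This is a deliberately simplistic sketch (the flag and function names are invented); real systems usually move flags into a config file or flag service:

```shell
# Minimal feature flag: merged code stays dark until the flag flips on.
new_checkout_enabled() {
  [ "${FEATURE_NEW_CHECKOUT:-off}" = "on" ]
}

render_checkout() {
  if new_checkout_enabled; then
    echo "new checkout"
  else
    echo "old checkout"
  fi
}

render_checkout    # prints: old checkout
FEATURE_NEW_CHECKOUT=on
render_checkout    # prints: new checkout
unset FEATURE_NEW_CHECKOUT
```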

~~~
wdewind
> (1) Automated style checker. By adding a pre-commit hook to validate code
> style in an automated way, the amount of noise in PRs went down
> dramatically.

Would you mind sharing your precommit hook? We have a simple one that prevents
console.log etc., but nothing super fancy.

~~~
bpicolo
[http://pre-commit.com/](http://pre-commit.com/) might help you out.
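
A guard like the console.log check mentioned above can also stay a few lines of plain shell. This is a sketch (the pattern and wiring are illustrative); save it as `.git/hooks/pre-commit` and make it executable:

```shell
# Refuse the commit if any listed file contains a forbidden pattern.
# A real hook would feed it the staged files, e.g.:
#   forbid 'console\.log' $(git diff --cached --name-only --diff-filter=ACM)
forbid() {
  pattern=$1; shift
  status=0
  for f in "$@"; do
    [ -f "$f" ] || continue
    if grep -n "$pattern" "$f"; then
      echo "error: '$pattern' found in $f (override with --no-verify)" >&2
      status=1
    fi
  done
  return $status
}
```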

------
mahyarm
At a place I worked at, we did over the shoulder code reviews. You go to
someone in person, ask them to do an OTS and you go over the code with them
explaining anything. It meant most code reviews took about 5-10 minutes and
were pretty fast.

People didn't review as deeply as with the traditional async process, but code
reviews were never an issue that delayed people and took about as much
interruption time as someone asking you a question about something.

The downside is that it didn't scale past a single office or to remote workers,
and it's hard to do things like automatic blocking reviewers.

----------

Now I work at a place that does traditional phabricator code reviews and has
all the problems you mention. But people do review more thoroughly and it does
scale past offices.

Personally I deal with it by messaging people asking them to do a review after
I post it, and talk about how it's important that we get reviews done quickly
during social situations.

Everyone gets annoyed how long reviews take when this is the case, so it's
easy to agree about. I also offer to give people code reviews when they
inadvertently complain about how long code reviews take, and they start to
appreciate a helpful & quick attitude to code reviews and start to
reciprocate.

~~~
arnorhs
I also worked at a place where we did OTS reviews. In my experience it was
really good, and it was easier to get deeper/more thorough feedback, since
you're basically forced to explain what your code is doing etc.

I'm really curious about why you feel people review more thoroughly when doing
an async CR? Do you think there was an issue with people not feeling
comfortable criticizing in person, or feeling like they'll just say "go on"
even when they don't understand a code block, just b/c they didn't want to
feel unproductive having the other person wait or something like that?

~~~
optforfon
I had a bit of a problem with OTS reviews b/c not everyone is as professional
when it's not in an open forum. We used GitLab merge requests with much better
results.

It's kinda hard to describe in words - but programming is an opinionated
profession and people have a bad habit of telling you that you should rewrite
stuff b/c of code-style differences and what in essence is non-substantive
feedback. They want everyone to look like the way they would write it. Not to
say it's always without merit.. but it's honestly kinda irritating after a
while b/c you feel you are being micromanaged and are losing ownership of your
work (when reviewed by someone who is senior to you)

~~~
Can_Not
You should be coding in the official company coding style, or one should be
written for you to follow. In my experience supervising junior devs,
reading code is like driving down a highway. When you submit code that is
written out of sync with 99% of the entire company codebase, it's like going
60mph and then suddenly the highway becomes a bumpy, rocky dirt field. It's not
really an opinion thing, it's a consistency thing. The time for opinions ended
a long time ago if you are not the first programmer.

You wouldn't join a law firm and start ignoring their legal document
guidelines, because their senior lawyers need to read your written documents
at the same fast speed they read everyone else's. We can spot your real
mistakes a lot faster if you stop cluttering the codebase with the numerous
cosmetic ones. Your seniors are pissed because you are paving a dirt road for
them to drive on. You shouldn't be upset that they are asking you to do the
bare minimum. If they don't have a formal style guide though, then they
deserve the dirt road.

~~~
arnorhs
well said

------
yason
Reviewing is switching gears as well. When you're coding you can't just jump
into a code review mode for a moment. It's not a culture issue, this is IMHO
inevitable for any work environment that requires deep concentration and
immersion into the work inside your head.

So you'll want to batch the reviews somehow, based on your schedule. On the
other hand, as a programmer, you don't want to be blocked by code reviews. So
you'll want to finish a somewhat self-contained set of changes, push them out
for a review, and continue on top of those locally. If the reviews suggest new
modifications, you'll do those and rebase your local new work on top of those.
It doesn't matter that much how the changes are reviewed as long as it's not
blocking work. There should be an emergency vent for the cases when unreviewed
changes do block significant other changes.

~~~
todorus
I've made the same argument about the need for a programmer to not be
disturbed. A colleague of mine has eloquently compared it to juggling: you
start with one ball, add another and it gets complex over time. Disturb him
and he starts over with one ball.

In my experience as a programmer I've grown to think that this argument only
holds partial value. An observation that helped is that programmers
hate to be disturbed, but can always manage to be on time for lunch or beers
on Friday (it's a Dutch corporate tradition to drink a few beers after the
workday on Friday).

I find myself, and others, to be capable of reviewing a PR within an hour. I
just need to be aware that when I'm done with a coherent unit of code, that I
should switch to review mode and check if there are outstanding PR's. To reuse
the metaphor: I've stopped juggling and can choose which set of balls to pick
up next.

So I do think it is a cultural thing. You need to commit yourself to have
reviewing be a part of the work process. I find IM services and PR's to cater
perfectly to the way the usual programmer works. He can ignore the message
while he's mentally keeping 5 balls in the air and look through his messages
when he's ready to start his next activity of the day.

~~~
yason
Everyone is winding down on a Friday. After Friday noon, everyone
avoids starting anything that will take hours. That's why they can leave the
office.

On the other hand, when we've had company parties at the office, I've seen
people go back to programming after a couple of beers because they just had
something on their mind that they need to finish.

People can certainly review something within an hour or so, but only
sometimes. If you just had another interruption your juggling balls are still
on the table and you haven't picked them up yet. So this is when you can
finish a code review ten minutes after someone requested it. But if the
request came in a couple of hours later, you wouldn't be able to take a look
at it until the next day.

Culture can affect whether programmers are able to get long stretches at all.
If they aren't, then they're obviously more available to short-term tasks such
as code reviews. On the other hand, I know people who work best when left
alone for days. There's no asking for a code review if the guy has been
juggling since Wednesday because if you do he'll have to spend the next week
as well working on the same task.

~~~
timr
_" Everyone is winding down on a Friday. After Friday noon, everyone avoids
starting anything that will take hours. That's why they can leave the
office."_

Yeah, that's true...but it doesn't explain why programmers are almost never
late for lunch. That's also been true, in my experience. And moreover, it's
nearly universally true that the folks who complain the loudest about
interruptions are the same ones who are johnny-on-the-spot for coffee-break,
games, etc.

In fact, a more cynical person might say that it's all about avoiding
_unwanted work_. Ahem.

Whether or not you believe me, the important point is that _yes_ , it's a
decision of culture: at some point, you establish a culture for your team. If
that culture includes a maximum latency for code-reviews, then the people who
don't like to do non-coding work for days at a time can go code in a cave
somewhere else. People who can't be "interrupted" with reasonable non-coding
communication activities tend not to be very good collaborators anyway, and
probably aren't a good fit for a growing team.

~~~
ozim
For me doing code review is not just looking at diffs in git.

Things that I can review by looking at diffs are not worth reviewing.

I have to at least check out the other branch, probably change my
configs (while stashing or saving a copy of my current working context), and
rebuild the solution so that code navigation works and I can at least move
around. Then I start running variables and code paths in my mind, which
pushes the cognitive load of my previous context out of my head.

Going for lunch requires locking my computer and maybe taking some money. Then I
eventually talk about stuff that is not a heavy load, just generic stuff.

------
pwmarcz
We had a similar problem - a lot of pull requests getting stuck in a back-and-
forth of remarks and fixes. The "social" solution was to increase the amount of
communication in the team: instead of everyone having their own tasks and
working in a silo, we made multiple people (ideally, the whole team)
responsible for a task, splitting work as we go. The technical solution was to
use pair programming instead, at least for some changes. Both worked really
well.

Pair programming achieves a similar result to code reviews (another person
knows the code very well, and is able to criticize it and improve on the
design). It also does it much better because the review is done as the change
is being made, and not after the fact, which improves the understanding of the
reviewer and allows them to give immediate feedback.

The old code review process still works as a fallback, and is especially good
for small but tricky changes: I need to do something novel, it's not a big
change but there are a lot of ways to do it, can you comment on my approach?

One other thing that we also ended up doing is "ping-pong pull requests": if
you give me a pull request, I don't just criticize it leaving the issues for
you to fix, I fix them and leave that for you to review. Then you can do the
same, etc. As a result, both people get better acquainted with the code,
instead of just one of them being the "owner" of a change.

~~~
satyajitranjeev
We tried a few methods and eventually went to pair programming, but never
tried ping-pong pull requests. It sounds interesting. What puts me off
pair programming is that it's not always feasible. If the other person who
could work with you is on another task, you spend a lot of time trying to play
catch-up. How well did the ping-pong work?

~~~
pwmarcz
It works well. I think with the original pull-request process we basically had
a mental block that only one person is allowed to write the code, and the
reviewer can only comment. This can be frustrating especially if you know
exactly what needs to be fixed. If you just go ahead and make the change, you
save both people's time.

I guess one place where it fails is that it's not very educational. If you're
tutoring a less experienced programmer, you don't want to fix the mess after
them, you want to teach them how to do it without you the next time around. In
that case, I either leave the fix to the author, or fix the issue as they
watch.

------
aaronharnly
+1 to the other comments about having linting and CI to reduce the nitpicks
and basic verification in PRs.

One technique I've used and liked is to have the PR author create a calendar
invite for the review, usually with enough lead time to be reasonable, eg the
following afternoon. This isn't an old-fashioned "everyone get in a conference
room" code review -- you still do it individually at your machine -- but it
allocates the time.

The author adds one or two people they would like as required reviewers,
and the rest of the team as optional. That helps the reviewers carve out time
to get it done, encourages people to RSVP as either reviewing or not, and
makes for a well-understood deadline for getting your comments in if you
want to have any influence on the PR.

It's not a perfect system, but might be worth giving a try if you are having
problems getting prompt sign-off on reviews.

~~~
takno
My experience is more as a roadblock than as somebody getting code live
(although I have been held up by my own cursed rules a couple of times). I'd
have to say that I really appreciate the calendar approach - it gives me a
time slot to think about the code, and more importantly it forces me to stop
prevaricating and make a decision so that I'm no longer blocking others.

------
Denzel
At a previous company, we adopted a system similar to Google's early review
process. Every repo would have two owners (for redundancy in case of sickness).
We'd put the emails of both owners in an OWNERS file. Then, whenever a pull
request would pop up, an automated process would send out an email to the
owners. It would continue badgering them, decreasing the period between
reminder emails in proportion to how long the pull request had been
outstanding.

This was a great, easy process that helped ensure a timely review for our pull
requests.
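
The escalation can be as simple as a step function over PR age. The numbers below are invented for illustration, not the actual schedule:

```shell
# Hours to wait before the next reminder email, shrinking as the PR ages.
reminder_interval_hours() {
  age_days=$1
  if   [ "$age_days" -le 1 ]; then echo 24
  elif [ "$age_days" -le 3 ]; then echo 8
  elif [ "$age_days" -le 7 ]; then echo 2
  else                             echo 1
  fi
}

reminder_interval_hours 0    # prints 24: fresh PR, one nag a day
reminder_interval_hours 5    # prints 2: aging PR, a nag every two hours
```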

~~~
protomyth
Interesting, that sounds like it works ok for two owners. I was at a place
where I was the code reviewer for C programs[1], and it was every scheduling
nightmare and stuck-in-a-fishbowl experience that you would assume. This was not
distributed, so I would do the review and then there would be a meeting where we
went over the review. It was not fast development, but it sure was documented.
One of two computer-related activities that almost got me killed[2].

1) but, interestingly, not allowed to write any since who would review mine?
sigh...

2) don't review really bad (MST3K-level bad) C code in a car with two C gurus
(one of which was driving)

------
mpermar
Our approach is very simple. Each feature or bug needs to be reviewed by one
person on the team, anyone with decent tech skills and enough knowledge of the
language.

For such a naive approach it is fundamental to teach people how to do the
reviews. We try to make people (both reviewers and "reviewees")
aware that reviewers are not meant to be judges. They are not meant to try to
impose their opinions and ways of doing things either. Reviewers can always
"suggest", obviously, but not mandate.

So a reviewer is expected to:

- Detect any fundamental mistakes that can cause severe malfunctioning, like
memory leaks, file leaks, bad performance signs and general bad smells.

- Use his or her business knowledge to flag possible missing facts in the
implementation.

- Check that there are tests for the feature/bug. If there are no tests, the
feature/bug is rejected.

- Check that there are enough tests to cover the functionality and bug that is
being addressed.

- Check that the build passes and there are no regressions.

So, for us, the key is to make the tests become the real low level reviewer.
And the reviewer needs to focus on higher level requirements instead of going
around the code checking every loop and condition.

Honestly, when I do a review, I start from the tests. Those are my docs and my
reviewing mates. They tell me if the feature/bug fix is reliable or needs to
be reworked. Then I try to use my expertise to help the "reviewee" rather than
to judge.

Would love to get feedback.

~~~
KuhlMensch
Hm, I don't believe any senior dev "suggested" a workplace out of
bad/unscalable practices. And realistically, if you _review_ my code and give
it 3/10, isn't that a judgement?

Part of the problem of bad code is solved by communicating as much as possible
what is expected - so everyone knows what 3/10 code looks like. And
importantly why 3/10 code is bad, and thus the motivations to create code that
is 10/10.

This softer "reviewing" approach could definitely have indirect benefits on
how criticism of code plays out. But if a workplace understands and practices
respect, then everyone should be able to get on with crafting a healthy code
base.

All this is underpinned by one important tenet of professionalism: good
communication.

------
andreasklinger
Had similar problems.

Atm our protocol works for us - maybe for you too:

    
    
       - It is expected that you do PR reviews every morning
       - It is expected that you wait for 2x +1's ([1])
       - It is expected that every comment is addressed either with a comment or code 
          - that way people can give 2 comments and give +1 and move on
    

Minor rule: "Code style" discussions are made explicit in code linters and
shouldn't (repetitive => linter rule) happen in PRs

Minor rule: If people don't like to review a PR, the PR is too big or not
properly prepared (explanations [2], no committer comments, no screenshots,
etc) or both

Most important rule: PR Reviews are meant to enable the committer to do a
better job - not to "code check"

Hope that helps! GL.

[1]:
[http://cl.ly/2M3K1I0s380m/Image%202016-04-03%20at%2019%3A04%...](http://cl.ly/2M3K1I0s380m/Image%202016-04-03%20at%2019%3A04%3A45.png)
(2 days b/c weekend; usual cycle time is 24-48h)

[2]:
[http://cl.ly/1w3s2y0m0T2H/Image%202016-04-03%20at%2019%3A12%...](http://cl.ly/1w3s2y0m0T2H/Image%202016-04-03%20at%2019%3A12%3A27.png)

------
verst
Thank you to the parent for asking this question. I have some related
questions for all engineers and engineering managers.

I'd love to hear your perspective from your current or most recent job:

- Do you think Engineering managers are aware how much time you spend on Code
Reviews?

- Do you track your number of / time spent on code reviews? Does your
manager?

- Are code reviews valued as much / more / less than writing new code? Why /
Why not?

- How are code reviews prioritized against writing code for bug fixes, or
implementing new features?

- How do you choose a code reviewer?

- Do you ensure code reviews are distributed across the team? If not, how
would you do it? How do you ensure you are not doing too many reviews
yourself?

~~~
skewart
OP here, you're welcome! I'm thrilled at how many awesome responses there are
here.

You bring up some really important questions that I think get at the heart of
the problems teams often have with code review. Even if it is valued in an
abstract way, it isn't really treated like a first class coding task. It's not
measured and planned in the same way that writing code is. It seems like a lot
of comments here talk about ways to quantify code review and bring it onto
more of an equal footing with writing.

On my team we all informally budget time for reviewing PRs, but we don't
really plan and track it with any rigor. I don't think any of us have a very
good idea how much time anyone else spends on code review, which can make
picking a reviewer awkward.

I'm curious to hear other people's responses to your questions.

------
arktisklada
We've tried scheduling 2-3 times per day where everyone should focus on PR
review. It's quite easy to get lost in one's own tasks. This helps maintain
the understanding that everyone needs to work together to move all
features/bugs/chores/etc. along.

We also have a slack channel for pull requests, where everyone subscribed gets
notified when someone posts a PR for review, or later for re-review.

I think timing and communication can be an issue on any team, especially with
distributed resources (I'm remote). We regroup regularly as a team to discuss
process, issues, and anything else on our minds. We experiment with solutions,
and try again and again until something sticks. It's not always perfect the
first time, but we end up with something that works for us. For now, we try to
hold each other accountable to our scheduled times, and ping (and re-ping) on
slack for reviews.

------
jwcrux
Like others have said, pre-commit hooks for linters are very helpful. Also, CI
is a _must_.

These alone can significantly speed up code review since you can be fairly
confident you won't break anything new. Then, you just have to focus on: "does
this fix the issue that was found?"

Then, it's just a matter of breaking up issues into meaningful, modular tasks
that can be taken care of without days of code review.

------
EliRivers
Ask someone to do a review? Why would you _ask_ , other than out of politeness
with the clear expectation that the answer "no" should come with a solid
business reason? It's their job. If someone isn't doing their job, then
management has either not made it clear to them what their job is, or
management needs to ask them why they aren't doing their job. To my mind, it
sounds like what you need is for management to get their act together, sort
out the review process and then enforce it.

I worked somewhere where a reviewer was assigned before code was cut. Once the
code was ready for review, the reviewer became the publicly visible person
responsible for that particular feature/bugfix/whatever.

People who had reviews sitting on their plate for too long would find
themselves being asked questions by management about why they weren't doing
their job and why they were holding things up [1]. We wouldn't put up with
someone getting round to doing the implementation whenever they felt like it;
likewise, we wouldn't put up with someone getting round to doing the review
whenever they felt like it. As it was, this never happened, because it's part
of the job.

[1] There are, of course, good and bad answers to this question, much as there
would be good and bad answers to the same question asked of the implementer
rather than the reviewer. The answer "Oh, I just haven't got round to it, I'll
get to it eventually" is as acceptable as you probably imagine.

~~~
adaml_623
This is a really interesting idea for me. Moving 'ownership' of the code away
from the coding engineer makes sense. A similar concept to having automated
test cases written by someone different to the implementer.

------
strictfp
We have three rules which help here.

1. You cannot start a new task before all reviews have been done.

2. You need to talk to the author somehow when reviewing.

3. If you've pair programmed you don't need to review.

The first rule makes people allocate time. The second rule takes focus off
minutiae and moves it more to the issue or improvement and suggested changes,
including architecture.

The last rule is there since we believe that reviewing is a natural part of
the cooperation when pair-programming.

~~~
why-el
For number 1 did you mean all reviews _that have been assigned to you_?
Because if you mean otherwise then it is impossible to follow once you hit a
certain size.

------
superuser2
This is one of the things I found most difficult/interesting as an intern.

It's really important to have the local, pre-code-review infrastructure to be
able to keep developing and testing in the same direction while a code review
is pending. I had very good experiences with Phabricator, which has support
for "dependent diffs" so you can create further separate diffs (sort of like
PRs) that build on as-yet-unreviewed diffs to avoid requests ballooning in
size while awaiting review.

We pester each other incessantly on the team chat to actually do the review.
We do cross-functional teams, so there's little dilution of responsibility -
only 1 or 2 people are clearly qualified to review your code, and they
generally do so in a timely manner.

~~~
takno
You can create PRs against branches other than master in github, which lets
you create dependent releases. Don't know if that helps at all

------
mhotchen
Initially we would just have everyone in the team on the PR, but I noticed a
trend of only a couple of people actually reviewing pretty consistently and
occasionally people waiting for feedback for a couple of days, even for single
line changes.

I solved the lag in my team by having "merge masters": dedicated people for
that week who will drop what they're working on to review. It rotates on a
weekly basis. We do also include the entire team (5 devs + team lead + QA) on
the PR and if anyone happens to be free they'll take a look. Our PRs tend to
be very thorough and in depth and will often end up with face-to-face
discussion as well.

------
mmozuras
We try to solve it in a couple of ways:

* Enable smaller PRs. Reviewing 50-150 lines of code is much less daunting. This can be done by using tools like feature flags, AB tests, github/scientist etc.

* Ask/communicate about the quality of PRs. Like: splitting big ones into multiple, writing good descriptions, including the problem and screenshots.

* Automate the trivial parts. We use Pronto (disclaimer: I'm the author - [https://github.com/mmozuras/pronto](https://github.com/mmozuras/pronto)), but there are other tools/services that can help achieve the same result.
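
The feature-flag part of the first point can be as small as a dict lookup; a
minimal sketch (the flag name and in-memory store are invented, a real setup
would back this with config or a flag service):

```python
# Minimal feature flag: merge incomplete work early, keep it dark in production.
FLAGS = {"new_checkout_flow": False}  # hypothetical in-memory flag store

def is_enabled(flag: str) -> bool:
    """Return True if the named feature flag is switched on."""
    return FLAGS.get(flag, False)

def checkout():
    # The new path ships in small PRs behind the flag; the old path stays
    # the default until the feature is complete and the flag is flipped.
    if is_enabled("new_checkout_flow"):
        return "new flow"
    return "old flow"
```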

~~~
sandstrom
Excellent tool (pronto), I've been looking for something like this!

Also looked at Hound ([https://houndci.com](https://houndci.com)), but it's
primarily a paid product, with no easy way of running it ourselves. Pronto
seems way better!

------
fishnchips
We use a standard GitHub process with a standard branching model (master and
feature branches) for reviews. We require that each change is a separate PR
and unless it's an emergency we require that each change is reviewed before
committing. We tried dedicated tools for reviews but found them too much of an
overhead.

Unlike some places with strict procedural requirements (e.g. Google with its
OWNERS and readability process) we depend on an honour system which given our
size (~30 devs) seems to work rather well. We assume that doing code reviews
is an important - if not the most important - thing in a dev's life so we
spare no effort in order for the code to be reviewed properly while also not
blocking others' work.

However, we heavily rely on tools to ensure that human reviewers spend their
time on things automated review can't figure out. These are generally linters
and code smells detectors (e.g. reek in Ruby) which run on CI (Codeship). We
use these helper tools to push statuses to GitHub, and we block merging
branches on CI passing, so if tests are failing - which might even be just
for a stylistic issue - the human code review process doesn't even start. We
also
use a hosted code quality analyser (codebeat - which is our own product) to
track quality over time, keep everyone on the same page and help devs and
reviewers focus on the right things.
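
Pushing a status from CI boils down to one authenticated POST against
GitHub's commit-status API; a sketch (the endpoint shape is GitHub's, while
the owner/repo/token values and the `lint/reek` context are placeholders):

```python
import json
import urllib.request

def build_status(state: str, context: str, description: str) -> dict:
    """Payload for GitHub's POST /repos/{owner}/{repo}/statuses/{sha}."""
    assert state in {"pending", "success", "failure", "error"}
    return {"state": state, "context": context, "description": description}

def push_status(owner: str, repo: str, sha: str, token: str, status: dict):
    # Fires the actual API call; merges can then be blocked on this context.
    url = f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}"
    req = urllib.request.Request(
        url,
        data=json.dumps(status).encode(),
        headers={"Authorization": f"token {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)

status = build_status("failure", "lint/reek", "2 code smells found")
```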

Overall what we found out is that code review is a learning process and very
thorough reviews for new devs help them ramp up very quickly so that
subsequent PRs require less and less attention.

------
tootie
It's up to the tech lead/scrummaster to enforce. If your code reviews are too
onerous, your feature branches might be getting too big which I think is bad
practice for multiple reasons. We do branch per story and keep each story
about as small as possible to deliver incremental value. Pull requests should
be small and frequent. This leads to easier code reviews, quicker integration,
tighter turnaround with QA and quicker delivery to product owners.

------
babo
We used to sit down together to do the review. The reviewer would set an open
timeslot for a possible review; the first person to grab it sits down and
they review it together. The calendar entry helps timebox the review process,
and the verbal communication speeds it up and helps avoid conflict.

One more addition, take care of the important bits only, don't fight over
coding style or such.

------
makeramen
We use phabricator which has a much clearer flow for blocking on code reviews
and you can easily see your list of diffs awaiting your review. I find the
workflow much better than GitHub. That said, communication is really key, and
that comes at an organizational and cultural level to define what's an
appropriate amount of nagging you can do to get reviews through.

~~~
fishnchips
I used Phabricator at Facebook and I loved it, if only for the macros (memes).

------
leighmcculloch
Here's my process:

1\. Open a PR when you're ready for review. The PR should have the following
attributes, which make PRs fast to review and reduce the back and forth,
since the reviewer has everything they need to provide a really good review:

\- Be for a single "What" and have a clear "Why". Both of these should be
explicit in the commit message.

\- Preferably be a single-commit PR.

\- Preferably be _less than 100 lines_ of changes. Keep them as small as
possible. If you see a refactor opportunity while working on something, `git
checkout -b do-the-refactor master` and make it a separate PR. People review
+10 -3 PRs eagerly and very, very fast, much faster than a +300 -200 PR that
contains refactors, fixes and other jibble.

2\. Drop the PR into a #channel and immediately switch to doing something
else. Your assumption in an asynchronous work environment is that you are as
interruptible as anyone else. This means you default to being interrupted and
pick up another piece of work rather than expecting someone else to be
interrupted.

3\. If I don't hear back within 24 hours, assign an individual and ask them to
put it in their "queue" of work for doing in the next 24 hours.

4\. If there's non-LGTM feedback, then make the changes in a new commit (so
it's clearly separate) and after that I jump back on some other work.

The takeaway is that I context switch in favor of not interrupting my
teammates. There are plenty of times I need to interrupt them, for help, code
design discussions, etc. PRs are not something I need to interrupt people
with.

You might think this process means it takes a long time to merge a PR, and
when looking at a single PR that's true. But I'm constantly working on 3-4
tasks at a time, interleaved, and using the above process I can still ship
multiple PRs a day.

------
timr
You're right that the problem is cultural at its root -- there's no one
solution to "making" programmers eat their vegetables. But it helps to have
many automated reminders, so that the nagging doesn't cause people to hate a
single member of the team: when a reviewer is added or a review is updated,
your tool should remind the reviewer(s) on a regular basis that new work is
pending.

I've been working on a better review tool for teams who have built their
process around GitHub pull requests
([https://www.omniref.com/code_review](https://www.omniref.com/code_review)),
and one thing I've learned from talking with teams is that GitHub's
notification system is just comically broken for a team review workflow --
completely overwhelming with emails, but in the wrong way (i.e. a notification
for every edit). People just ignore it. So perhaps you should check out a
different tool?

~~~
skewart
> GitHub's notification system is just comically broken for a team review
> workflow -- completely overwhelming with emails, but in the wrong way (i.e.
> a notification for every edit). People just ignore it.

Yup. I've found it to be pretty useless for exactly that reason.

Omniref looks awesome. In my experience one of the biggest bottlenecks for
getting things done is understanding code bases - whether it's new hires,
working with a new repo at your company, or digging into an open source
library you use to figure out a bug. I often wish I had better context for why
things were done the way they were. Integrating PR comments, and other
annotations, into the code, in a way that's easy to browse, sounds incredibly
helpful. I will definitely check it out.

~~~
timr
Thanks. Let me know if you have any questions or feedback! My email is tim at
omniref.

------
borski
We had exactly the same problem, so we built a bot for our Engineering room in
HipChat that pesters the entire room about how many pull requests are
currently open, what they are, and links to them, once every 2 hours (during
business hours).

It's worked really well. We've also extended that same bot to do other things
like remind you about support requests, etc.
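
The polling half of such a bot is a few lines; a sketch (GitHub's `pulls`
endpoint is real, the repo name and the room-posting side are placeholders):

```python
import json
import urllib.request

def fetch_open_prs(owner: str, repo: str) -> list:
    """List the repo's open pull requests via GitHub's API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls?state=open"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def format_reminder(prs: list) -> str:
    """Build the nag message the bot drops into the room."""
    if not prs:
        return "No open pull requests. Nice work!"
    lines = [f"{len(prs)} open pull request(s) need review:"]
    lines += [f"- {pr['title']} ({pr['html_url']})" for pr in prs]
    return "\n".join(lines)

# A scheduler (cron, or the chat bot's own timer) would run this every 2
# hours during business hours:
#   post_to_room(format_reminder(fetch_open_prs("acme", "app")))
```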

------
dochtman
We use Jira, GitLab and Slack in our agile teams at work. I've written some
automation that listens to GitLab webhooks and (a) announces new Merge
Requests in the #team-review channel, which all team members are in, and (b)
creates sub-tasks on the original Jira story or bug, which appears on our Jira
boards.
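
The announcement half of (a) is mostly mapping GitLab's merge-request webhook
payload to a chat line; a sketch (field names follow GitLab's MR event,
everything else is ours):

```python
def announce_merge_request(event: dict):
    """Return a #team-review announcement for a newly opened MR, else None."""
    if event.get("object_kind") != "merge_request":
        return None  # ignore push, note, pipeline, ... events
    attrs = event["object_attributes"]
    if attrs.get("action") != "open":
        return None  # only announce newly opened MRs, not updates/merges
    return f"New MR for #team-review: {attrs['title']} {attrs['url']}"
```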

But yeah, most of it is about setting expectations. I think our conclusion
was mostly that review times under ~24h are okay, so within that space people
can organize their own time.

That said, I have been trying to get my team members to create smaller
commits. I'm personally convinced that review time is exponential in commit
size, such that reviewing a branch with 20 small commits is much more
efficient than one squashed commit. That said, this hinges on actually doing
the work to create good commits: no fixup commits, one logical change per
commit, etc.

This has come up enough that I think I should write a blog post about it some
time.

------
x0x0
We have a slack channel for dev and people issue review requests in it. This
works best when most people can review the code obviously. There's also an
informal culture of trading (you promptly review my branch; I'll promptly
review yours.)

We cut some corners and assume devs are ok to address minor complaints w/o
further review. So you see "approved pending minor issues X, Y" which doesn't
require further review.

We do something akin to pair programming for reviews, though we don't pair
program at all. If you can, you sit with the reviewer and discuss. This often
leads to a joint effort to fix anything the review brings up and speeds
turnaround.

Finally, it helps if you get serious about a linter and, as much as possible,
software enforced code style.
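
Software-enforced style can start as a plain git hook; a sketch (flake8 is an
arbitrary stand-in for whatever linter your stack uses):

```shell
set -e
repo=/tmp/lint-hook-demo
rm -rf "$repo"
git init -q "$repo"
cd "$repo"
# Install a pre-commit hook that refuses commits when the linter fails.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
flake8 . || { echo "lint failed; commit aborted" >&2; exit 1; }
EOF
chmod +x .git/hooks/pre-commit
```

In practice you'd run the same linter in CI too, so the hook is a convenience
rather than the only gate.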

------
kazinator
We use Gerrit. Gerrit is a review system built on top of Git.

From the Git user's point of view, Gerrit is installed as a git remote. When
you develop some changes, you push them to the Gerrit remote, using a special
branch naming convention. Gerrit dynamically takes this push and converts it
to a review item, which is then subject to a web-UI-with-email-notifications-
based review workflow.
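
The push convention looks like this; a sketch against a plain bare repo
standing in for the Gerrit server (real Gerrit intercepts the magic ref and
creates a review item instead of a branch):

```shell
set -e
remote=/tmp/gerrit-demo-remote
work=/tmp/gerrit-demo-work
rm -rf "$remote" "$work"
git init -q --bare "$remote"       # stand-in for the Gerrit server
git init -q "$work"
cd "$work"
git remote add gerrit "$remote"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: example change"
# Gerrit's convention: push to refs/for/<target-branch> to request review
# of that branch, rather than updating the branch directly.
git push -q gerrit HEAD:refs/for/master
```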

A special "Change-ID:" item in the commit header identifies the change. This
lets you push revised versions of the commit: if the Change-ID matches, Gerrit
recognizes a push as being the new version of a previous push. These versions
are organized in the review process under one item. You can see what the
differences are between different versions and so on.

The review process involves multiple users in reviewer roles. They get
notified via an e-mail that there is something for review. Then they can look
at the changes, and approve or reject the commit. General comments can be
added to the item and comments can be attached to individual lines of code,
also, so you can easily say very specific things about specific pieces of
code.

We have automatic reviewers who participate also. For instance, if a kernel
change is proposed, then "checkpatch" is run on it automatically, and posts an
approval or disapproval to the Gerrit item. The code also has to build for
several targets and pass automatic smoke tests.

Commits are ultimately merged by users who have the power to do that. They
take into account all the other votes and then cherry pick the change.

Gerrit is far from perfect, but it provides a decent workflow, and is a huge
improvement over _ad hoc_ development. For instance, the Change-ID can be a
pain. It's assigned by Gerrit when you first push something, and then you must
remember to add that same Change-ID to the new version of the commit. I wrote
myself a script which generates a Change-ID; I stick it into the very first
commit, so Gerrit doesn't have to make one up for me.
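
Such a generator only has to match the shape Gerrit expects, an "I" followed
by 40 hex characters; a hypothetical one-liner:

```shell
# Hypothetical Change-ID generator: "I" + 40 hex chars, salted with time/PID.
change_id="I$(printf '%s-%s' "$(date +%s%N)" "$$" | sha1sum | cut -c1-40)"
echo "Change-ID: $change_id"
```

Pasting that line into the first commit message means Gerrit reuses the ID
instead of minting its own.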

To use Gerrit, users have to acquire a somewhat better than average
understanding of git; it's just a little bit more advanced than the "pull,
commit and forget" use cases whereby you pretend that git is SVN or CVS.
Things should be set up to prevent common mistakes; for instance, lock the
repository from direct pushes, so only pushes to Gerrit are allowed. I would
make a blank "Change-ID:" part of the commit message template, and give users
a way to generate these things, to avoid the common mistake of pushing a new
version of a commit without a Change-ID, causing Gerrit to generate a whole
new review item.

~~~
zb
All of the problems you mention are completely, 100% solved:

[https://m.mediawiki.org/wiki/Gerrit/git-review](https://m.mediawiki.org/wiki/Gerrit/git-review)

You're welcome ;)

~~~
liveoneggs
git review to submit, git review -d branch to review.. too bad the ux is so
bad

~~~
pekk
If this is all that's wrong with the UX, why not write two tiny one-line
scripts called gerrit-submit and gerrit-review?

------
tehwalrus
We use ReviewBoard, which allows you to tag both a group and specific members
of the team with review requests.

I generally tag more than one person, and if possible people who know that bit
of the code. I wait until I've had two signoffs (with verbal badgering if
people are being slow; we are all near each other in an open plan.)

I will often find myself holding out for the one person who "owns" that bit of
the code before pushing the patch, though.

One suggestion that we tried: _People agree to check for assigned reviews
first thing, and as they return from lunch. This way people aren't
interrupted from their development flow by the reviews._

~~~
bostik
I've looked at reviewboard and so far have found two major problems with it.

1) You cannot create accounts through the API. This makes it _really_ annoying
to test and evaluate.

2) It operates primarily on diffs. When you have a patch set to review, you
really want to see both the overall diff and the individual commits. Then,
when you want to update a review with new commits to address comments from
the previous round, the tools do not let you do this without manual
intervention. At least I haven't found a way to run `rbtools` for a review
update in fully noninteractive mode. Again, this acts counter to integration.

From (workflow) integration point of view these two are pretty big killers.
They are also the two reasons we don't yet have RB even in full-scale testing.

I _do_ want to use something that can import copies of git branches so
historical reviews are available without polluting the primary gitlab
installation with hundreds or thousands of stale branches.

~~~
chipx86
Hi, Review Board founder here. Let me address these real quick.

1) This has come up before. We'll probably add it. Most places that ask for it
really are wanting something like an auth backend that ties into another
service, like LDAP/ActiveDirectory (which are supported natively), a Single
Sign-On service (which we'll be adding support for in Power Pack, our add-on
product), or something more custom. For those who do want to add via the API,
it's something I imagine we'll do in the next major release or two. I'll make
sure it gets on the roadmap.

2) Review Board 4.0 (which is in development, alongside the upcoming 3.0)
features DVCS support that will make it a lot easier to update patchsets and
to keep them together as a whole without having to squash commits or create a
review request per commit or anything else like that. Reviewers can view each
commit individually, or the whole thing squashed, or a range squashed, and
we'll help visualize how the history of those commits evolve with each update
(such as adding new commits to HEAD, squashing commits together, etc.). It'll
be _really_ nice when it's done.

We're also looking at adding push-to-commit through our in-development Git
(and eventually Mercurial) wrapper service, rb-gateway, so that you can push a
branch there and have it appear on Review Board, and keep updating that branch
to update the review request.

And, we're prototyping support for pull requests (first for GitHub, and then
for others later), so that one can submit a pull request and have it basically
appear in Review Board. This is still in the very early stages, but it's a
priority. May not be relevant to your workflow, but thought I'd mention it,
because that's asked a lot.

Again, Review Board 4.0 is when a lot of this will make an appearance, but
I'll try to get account creation via the API to happen in 3.0. (Just added it
to our roadmap.)

~~~
bostik
Oh hi, this was an unexpected pleasure.

My use case for item 1 is a bit special, I admit. The pain point came up
when I tried to set up RB for testing, and interestingly enough both items tie
directly into each other.

\- Reviews need to be generated automatically when suitable trigger conditions
are met. This means that there has to be a dedicated user account in RB that
can create reviews on behalf of other users. Reviews also need to be
automatically updated when new changes land in the branch under review.

\- While setting RB up, it's not at all uncommon to do a round of
install/provision/repopulate iterations. Without the ability to generate users
and assign their permissions through an API the automation and iteration go
out of the window.

\- The account permissions for RB are very granular. I don't want to manage
these permissions via a web-UI, I want them populated and updated based on
reviewable commits. Hence, _everything_ must go through config management. Yet
another web-UI for managing yet another set of accounts and permissions is ...
increasingly repulsive.

I can understand where the permission and account creation friction comes
from. For you, RB is a product. (Hell, it _certainly is_ one!) For me, RB is
simply a tool, and as such it has to integrate with other tools. Code review
tool is an extremely important one, but without fluid integration with other
development workflow tools it becomes an additional burden to maintain. For
developers who don't need to worry about integration efforts, yet another
extra account adds to the cognitive overhead.

Furthermore, I realise your permission model is tightly coupled with the
underlying architecture, which itself builds on top of Django. Exposing the
kind of functionality I ask for requires exposing and nesting quite a few
internal API layers. I do genuinely see why that has not been a roadmap item
so far. Codewise that is a big undertaking.

As for item number 2... you just made me drool. I haven't seen a tool yet
that would let me easily see change comparisons with dynamic interdiff
windows.
Having visibility into how a change set has evolved, and what the compound
intermediate results have been _between any given steps_ would be a huge boon.
(No, `git diff ref1..ref2` is not ideal. On the other hand something like `git
show ref1 ref2 ref3..ref6 ref7..ref8` gets a lot closer.)

Once you have native DVCS support and push-to-commit for fluid integration
with other tools, RB has the ability to become "just another tool". I see
that as a good thing :)

I haven't yet tried phabricator (evaluating it is on our infrastructure team's
roadmap), but so far it looks like it tries to do too much.

You guessed correctly, the PR integration is not relevant for me, but I can
definitely see how it would better integrate with lots of teams' workflows. No
wonder it gets asked for.

> _Again, Review Board 4.0 is when a lot of this will make an appearance, but
> I 'll try to get account creation via the API to happen in 3.0. (Just added
> it to our roadmap.)_

Thank you. Thank you so much.

NB. I am a big believer in tooling and workflow. Related to that, I have
developed an increasing intolerance to anything that cannot be configured via
config management. Changes to infrastructure are no different to changes in
code - everything needs to be reviewable and versioned. (Which also means that
the infrastructure must be rebuildable with minimal manual steps.)

~~~
djtide
Hey Phabricator dev here, would love to see you take a better look at
Phabricator as well. I'm not sure that we really "do too much", as our
product is guided mostly by the tenets of code quality and developer
efficiency.
Specifically we feel it better to have these tools all integrated so
developers spend more time just building features. Some basic overview for
code review would be:

Differential - Pre commit code review

Audit - Post commit code review

Arcanist - command line, patch emitting lint, unit tests (pre commit)

Herald - Custom "business rules" for notifications (cc me on all JS diffs)

Owners - Mark code territory

Harbormaster - Continuous integration

Macro - I mean, code review with memes!

And yes, you can uninstall any application you don't want like Maniphest or
Countdown, or sadly even the meme generator.

~~~
bostik
Oh hi, and apologies for the slightly late reply.

> _would love to see you take a better look at Phabricator as well._

I will. Phabricator was brought to my attention some time ago, not
surprisingly around the same time I was looking at the alternatives. I think
I should expand on the "does too much" part a bit, because in using that
expression I clearly managed to upset you.

Let me state one thing upfront - good engineering workflow is _critical_. That
is also a reason why introducing new tools to existing workflows is so damn
hard. And that is precisely why I am looking at tools that integrate into, and
enhance the existing setup.

Flag days make everyone unhappy.

From looking at the existing documentation and "pitch pages" for phabricator,
I've had the constant feeling of getting overwhelmed. From the list of
features you mentioned, I for one don't really see much of a difference
between the distinct Owners and Herald modules. If a code review system has
the concept of marking code ownership, then of course automatic notifications
for incoming changes in that code should be standard practice.

But the moment CI gets implemented as a core module of a code review system, I
think it veers off into murky waters. You can't easily smoketest different CI
systems. They are tightly coupled with release and deployment tooling, and
carry the need for their own notifications. CI is a damn complex beast. I
would rather have a very good CI with plugins for the extra functionality than
a code repository / review system with a CI slapped on top. If you're going to
integrate a CI into code repo (gitlab, I'm looking at you), then you have to
aim _really_ high. As in - if you find a regression, and commit a test for it,
I would expect the integrated CI to be able to spin off an autonomous bisect
run to detect exactly when the regression was introduced.

But let's get back to the subject at hand. The few things I feel are core to a
really good code review system:

\- Lets you automatically review feature branches, and keeps track of all
the changes and comments locally; once the review is completed, the branch
can be nuked, but the review remains as a historical item and contains the
full history.

\- Default reviewer groups. If a change set touches multiple parts of the
code, the relevant people across teams ("owners", in phabricator I think)
should all be marked as automatic reviewers.

\- Related to the point above, the ability to automatically mark a changeset
with "I'm looking at it" if you're doing the review in the web-UI. That would
provide visibility for others in the reviewer group.

\- Mail in support. Most people get code review comments delivered via email.
It should be possible to reply to such a mail, and have the comments from the
mail integrate back into the review thread visible in the web UI. If I could
use my preferred mail client (mutt, in my case) to respond directly to code
review threads, I would be quite a bit happier.

\- Some kind of alerting mechanism for "problematic" reviews. If a change set
takes ages to review, it usually means that either the change is too complex
or too large - or possibly that the reviewer is overwhelmed.

\- Again, only slightly related to the previous step: if you are going to
integrate CI into a review system, it would be beneficial to see in the under-
review section when a change has landed in master that would cause a merge
conflict. The probability increases with code age, after all.

And that's just off the top of my head. I mentioned in the sibling comment
that I realised RB is built as a product. That comes with its own baggage. In
the case of Phabricator, I see an even more complex product. It appears to
have so many tentacles that even Lovecraft could have been envious.

There you go.

~~~
sytse
You're looking at GitLab and we're listening to you. We're aiming high with
GitLab CI: best CI system on the planet or bust. Your suggestion for
automatic bisecting is really interesting and we discussed it. The problem is
that we can't see a way to do this without making it language dependent, and
that is not an option for us.

------
49531
Our team has a similar bottleneck. A few weeks ago I attempted to solve the
problem during the static showdown hackathon, but I haven't touched the
project since the hackathon ended so it's pretty rough:
[https://ss16-obelus.firebaseapp.com](https://ss16-obelus.firebaseapp.com)

The thing we wanted was a news feed of sorts of outstanding PRs so we could
track who checked them off and which ones needed our attention first.

I guess it wasn't a big enough pain point to continue developing on it :(

Also I think a whole web app is a clunky tool for the job, something like a
chrome extension might be more appropriate.

------
wonnage
The problem I often run into is people making a bunch of comments but not
having the confidence to actually make a call (ship or no ship). I think this
has roots in the idea of code review as a way to cover your ass when something
goes wrong.

This would be nice: As a reviewer, your responsibility is to say whether the
code is shippable or not; the remaining content is simply justification

As a submitter, you determine who needs to see your code, and this is a
flexible thing - if you paired with the people who would be reviewing your
code anyway, maybe getting approvals from all of them is unnecessary, etc.

------
chvid
You could consider turning the process "on its head":

Do release branches instead of a peer reviewed master branch.

So you would have every developer continuously committing into a (CI'ed)
main trunk, and when you wanted to do a release you would carefully review /
quality assure the changes between the last release and the main trunk,
taking only the stuff ready for release into a new release branch and moving
fixes back into the main trunk.

This approach will give you a messier main branch (rather than messy
development branches) and move the reviewing and QA till the time of release.

------
sevilo
_The problem is that there can be a lot of lag between asking someone to
review the PR and them actually doing it_ \-- yes, I have definitely run into
this issue at my previous job. Some PRs took forever to get merged because
everyone else was so busy working on their tasks and nobody had the time to
do reviews.

The process at my current job: we have a soft limit on how many tasks can be
in the work-in-progress status at a time, which includes tickets that are
currently being worked on or being reviewed. Then there is a hard limit of 3
issues at a time that are test-ready. When we exceed these limits, developers
need to stop pulling in new tickets, and our priority becomes helping with
review or test issues, which moves things along. Instead of assigning your
issue to a specific person on the team, it is the team's responsibility to
move things along and get these reviews done, which eliminates constantly
bugging the same person for reviews.

Since my current job is front-end heavy, we also do preliminary QA when
we're reviewing code: we pull the branch in and make sure the obvious things
aren't breaking. I personally find it really helpful for understanding the
context of a PR, and it gives me a clue of what I'm trying to look for when
reviewing. Of
course this process is still not perfect, there can still be a lot of back and
forth, and sometimes you need to ask people to explain some of the context to
you. But overall I find this a fairly decent process so far; most
importantly, I feel like it gives everyone a sense of teamwork: no issue
should be the responsibility of a single person.

------
505
At one time I was part of a team of around 15 developers working on a
PostScript engine, mostly written in C, and included hundreds of test scripts.
It shipped on a number of multi-function printers. We started on it around
2002. Today we might have used Git. We used
[http://aegis.sourceforge.net](http://aegis.sourceforge.net), sadly stuck at
SourceForge for the time being.

Aegis is notable for (by default) recording and enforcing code and test script
review before permitting a commit. It has other virtues, and faults, and is no
longer widely used, AFAICT. Reviews were treated as transactions, and the
reviewer comments went into the project history. Also time spent on
development, reviewing, resuming development after 'failed' reviews, and more.

The engine is mentioned in passing on
[https://www.eid.toshiba.com.au/n_news_detail.asp?news_id=8](https://www.eid.toshiba.com.au/n_news_detail.asp?news_id=8).
It did a number of other page description languages as well.

We also used Aegis on a number of smaller projects.

~~~
nickpsecurity
Interesting. You're the first person I've seen here that used it. It was the
only one that meets most of Wheeler's requirements:

[http://www.dwheeler.com/essays/scm-security.html](http://www.dwheeler.com/essays/scm-security.html)

So, was it a good tool compared to subversion or centralized git? Work fast
and reliably enough? Not get in the way too much outside authorization checks?
Just getting feedback on field use here.

~~~
505
I don't think I know enough about subversion or centralised git. Aegis was
great for the first few years. For the main project, there was a perception
that it was slow and unreliable. But a lot of us loved the transactional
features and preservation of the timelines of comments for code review. Its
management of test scripts was very very valuable.

I'm not sure about your authorisation checks question. Maybe Aegis did get in
the way.

An afterthought about speed and reliability - Aegis is almost too
configurable. For instance, it relies on a 'file history tool' per project;
that can be SCCS, RCS, FHist, or anything that you can tell Aegis how to use.
We used the somewhat obscure FHist. It has the advantage that it was written
by the same person as Aegis, and so is extensively tested with it.

Our project had the main repository sitting on an NFS server, and in later
years around a dozen engineers working on one or two changes at any one time,
with those stored on each engineer's Linux workstation. That was slow and
sometimes fragile, but we never really explored other configuration options.

We accumulated hundreds of unit test scripts which had to pass on each
integration. In retrospect we should have worked harder to retire or refactor
the oldest ones. Having fewer or faster tests would have saved time and angst.

Aegis' implementation is very dependent on the Unix security model. We thought
about doing something to make it work on Windows, but quickly got discouraged.

I don't know much about security, but I'd guess you'd be able to discover
many issues.

Sadly, Peter Miller, the originator and I guess owner of Aegis, died a few
years ago. User and developer mailing lists were on the web last time I
checked.

~~~
nickpsecurity
Thanks for the feedback. Interesting. The UNIX dependence was one of my main
gripes. I had some workarounds but figured a clean-slate model or retrofit of
cross-platform SCM was better. I still wanted to eventually meet and get
insight from the author on such a project. Too bad he died. (sighs)

Back to clean-slate or retrofit of alternative as I figure Aegis might be too
complex to use for such a project if main mind behind it isn't available.
Still, they get credit for doing a lot of things right.

~~~
505
Yes. I might suggest you give one of the tutorials a try. He has several,
depending on which tools you're more comfortable with.

Creating a small project seemed to be quite a bit of work, and I think Peter
realised this and put effort into the examples.

You might also like his PDF on recursive Make,
[http://aegis.sourceforge.net/auug97.pdf](http://aegis.sourceforge.net/auug97.pdf).

------
barrkel
We use Jira to track development tasks, and have two states for review:
Awaiting Review and Under Review. Every branch (and thus every PR) has a
corresponding dev task. We configure Jira so it lights up the Review column in
red when there are too many items in it. It's a fairly unsubtle hint that
items in the column need to get reviewed before moving on to the next dev
task.

We also use Stash (now Bitbucket) rather than Github, so we can configure it
to trigger Jira state transitions.

Depending on how features get factored, you can end up requiring certain tasks
as pre-requisites before other tasks can get started. Then there may be direct
communication / nagging involved. But we try and cut things up a little bit
differently, so that there's less scope for lag to impact overall feature
delivery time. The main idea is to prevent PR reviews from being blockers for
multiple people.

------
slantview
We had a similar problem with lag for code reviews, so I wrote a bot to nag
the Hipchat (moving to Slack this week) channel and remind people that repos
had pending PRs that needed review. This helped quite a bit with getting code
reviewed.

Keeping PRs smaller definitely helped as well and now we typically have 3-5
PRs per day go in.
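A hypothetical sketch of such a bot's core; the repo names and message format here are illustrative, and a real bot would fetch open PRs from the GitHub API and post the result via a chat webhook:

```python
# Build the nag message for channels like HipChat or Slack.
# Input data is illustrative; a real bot would pull it from the
# GitHub API rather than hard-coding it.
def build_reminder(pending):
    """pending: list of (repo_name, open_pr_count) tuples."""
    lines = ["Pending code reviews:"]
    for repo, count in pending:
        plural = "s" if count != 1 else ""
        lines.append(f"  {repo}: {count} open PR{plural}")
    return "\n".join(lines)
```

Run on a timer, a message like this only fires when a repo actually has something pending, so it nags without being constant noise.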

------
Mongoose
A former colleague of mine wrote a great article on this topic which
summarized how the infrastructure group at Twitter thought/thinks about code
review:
[http://glen.nu/ramblings/oncodereview.php](http://glen.nu/ramblings/oncodereview.php)

------
joshribakoff
The company I work for is using microservices. I usually try to focus on how
the app behaves, and make sure the UI works & there are no bugs. I do also
read the code itself to make sure there are no blatant security issues, but I
try to focus my review on the app's behavior more than its code, since a
microservice can be easily rewritten later if any problems come to our
attention. If the person writing that app chose to copy/paste instead of
using inheritance, I'll assume they have a good reason. Code quality is
important, but no one even cares about it besides the developers. Usually
I'll also ask the developer if there are any parts of the code they'd like
constructive advice on; that way I don't come off as if I'm lecturing them
when I do critique their code.

------
rwmj
I find that reviewing code is a pain. Not just for me, but for the submitter
too if I take too long / I am excessively nitpicky about code style issues. I
don't think good reviewing tools exist right now. You can rope together
multiple tools which require constant babysitting, but there's no good low
maintenance all-in-one solution (or if there is, please suggest it!)

Anyway, given that caveat, here are my tips:

(1) Have a "checkpatch" script that can be run over the code _before_ it is
sent. This script should do all the nitpicky coding style stuff. If the
submitter didn't check their patch, you immediately reject it (some projects
automate this). Here is an example from the qemu project:
[http://git.qemu.org/?p=qemu.git;a=blob;f=scripts/checkpatch....](http://git.qemu.org/?p=qemu.git;a=blob;f=scripts/checkpatch.pl;hb=HEAD)
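A minimal sketch of the idea (not qemu's actual checkpatch.pl) might flag trailing whitespace and over-long lines; the limit and checks shown are illustrative:

```python
# Sketch of a checkpatch-style checker: return a list of style
# problems for a file's lines. In CI, a non-zero exit on any
# problem would auto-reject the patch before human review.
MAX_LEN = 80  # illustrative limit; use your project's style rule

def check(lines):
    """Return a list of style problems for the given file lines."""
    problems = []
    for n, line in enumerate(lines, 1):
        text = line.rstrip("\n")
        if text != text.rstrip():
            problems.append(f"line {n}: trailing whitespace")
        if len(text) > MAX_LEN:
            problems.append(f"line {n}: longer than {MAX_LEN} chars")
    return problems
```

Running it over each file in the diff, and rejecting on any non-empty result, keeps the nitpicks out of human review entirely.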

(2) Have something like patchwork, or github pull requests, or openstack
gerrit reviews, which tracks which submitters have submitted patches, which is
the latest version of their patches, and so on.

(3) Have some automation which runs the test suite against submitted patches
and immediately rejects any which fail.

(4) Have documentation which tells people how to submit patches, what scripts
to run, and encourages submitters to split their changes into reviewable
commits. This is more art than science, but good coders will eventually get
it. Here are qemu's instructions:
[http://wiki.qemu.org/Contribute/SubmitAPatch](http://wiki.qemu.org/Contribute/SubmitAPatch)

Now when you review a patch:

(5) Apply it to your tree and make sure it compiles. Enable warnings-as-errors
(e.g. -Werror) so no warnings can escape your notice.

(6) If the patch comes from a trusted submitter and looks sane, I generally
ACK it without a detailed review.

(7) If an earlier version of the patch was sent, go back and look at your
comments, then check everything that you asked to be fixed was fixed.

(8) Go through each commit, line by line, and check everything. This is the
tedious bit I'm afraid. But hopefully the previous steps will have rejected
the worst mistakes.

(9) Try to be level headed and provide constructive criticism (even if the
patch is objectively terrible). Also try to turn around patches quickly (I'm
very guilty of not doing this, so do as I say not as I do).

------
Xelank
What code review tool do people actually use for a PR-based workflow? My
company has been recently looking for better code review tools than
Bitbucket's.

A few key features I'm looking for:

\- Incremental review: show which files have changed since the last review
(ideally which lines)

\- Timeline view: see what has happened since you last reviewed it

\- Custom "PR Approved" rules + indicator: e.g. 3 approvals + build passed.
This gives us an easy way to see whether a change is ready to be merged and
released.
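A rule like that is simple to express; a hypothetical sketch of the merge-readiness predicate described above (names and the default threshold are illustrative):

```python
# Hypothetical merge-readiness rule: N approvals plus a passing build.
def ready_to_merge(approvals, build_passed, required_approvals=3):
    """True once the PR meets the team's merge bar."""
    return approvals >= required_approvals and build_passed
```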

A lot of these features can be found in (what seem to me to be) trunk-based,
pre-commit style tools like Phabricator and Gerrit. However, I'm not sure
whether the extra tooling (i.e. ramp-up cost and complexity) is worth the
return. Does anyone have experience with whether switching over yielded good
gains?

~~~
piotrkaminski
If you're willing to use GitHub, then
[https://reviewable.io](https://reviewable.io) has all the features listed
above and more.

Disclosure: I'm the founder.

~~~
Xelank
Reviewable is probably my favourite at the moment; it ticked all my boxes,
though it involves switching over to GitHub, which may or may not happen.

Do you have any more advanced examples of real code reviews using Reviewable?
There doesn't seem to be an easy way to find them (I found one particular
example for Rust, but I can't seem to find a way to navigate to other code
reviews of the same project)

~~~
piotrkaminski
I should do a better job of tracking "interesting" reviews, but for now the
easiest way to find some is to poke around the PRs of some open source
projects that have adopted Reviewable, for example
[https://github.com/servo/servo/pulls](https://github.com/servo/servo/pulls),
[https://github.com/cockroachdb/cockroach/pulls](https://github.com/cockroachdb/cockroach/pulls),
and [https://github.com/dolphin-emu/dolphin/pulls](https://github.com/dolphin-
emu/dolphin/pulls). Each PR will have a link to the corresponding review in
its description.

Let me know if you need anything else!

~~~
Xelank
Thanks for the links! Having a list of cool PRs definitely helps conversion :)

------
akeating
I find that timely feedback is essential to the PR process working
effectively. A few things can help:

\- Submit RFC PRs early, not intended to be merged, so architectural or
design issues can be socialized early

\- Break your commits into logical chunks

\- Open an epic branch and merge smaller commits into it. When ready, open a
new PR to merge the epic into master.

This does sound like a culture issue. Everyone should be empowered in this
process and everyone needs to be involved. I think of it as a
check-and-balance,
but we all have a responsibility to complete features to a high standard, in a
timely manner. If a feature is delayed because reviews are slow, we're all
responsible.

------
akurilin
If you can't reduce PR size, then at the very least you can make the process
less painful by splitting the work up with a tool such as
[https://reviewable.io/](https://reviewable.io/)

------
JesseAldridge
I wrote some software that counts the number of comments made on PRs on
GitHub, and then posts the results to our company slack channel:
[https://github.com/JesseAldridge/code_review_scoreboard](https://github.com/JesseAldridge/code_review_scoreboard)

I think that having visibility into who's actually putting effort into code
reviews is the first step toward getting everyone to put in more effort.
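As a sketch of the scoreboard idea (not the linked repo's actual code), tallying review comments per reviewer from GitHub-style comment objects might look like:

```python
# Count review comments per reviewer, most active first.
# Assumes GitHub-style comment objects: each is a dict with a
# 'user' dict holding a 'login' key.
from collections import Counter

def review_scoreboard(comments):
    counts = Counter(c["user"]["login"] for c in comments)
    return counts.most_common()  # sorted by comment count, descending
```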

Note that I'm the only one currently using the above software, and it works on
my machine... but it will likely take a bit of work to make it work on yours.

------
known
1\. Read the requirements/expectations document

2\. Read how much of the code is reused

[https://en.wikipedia.org/wiki/Pareto_principle](https://en.wikipedia.org/wiki/Pareto_principle)

------
mjbellantoni
Shameless plug: I've created a tool which helps engineering teams and their
managers surface actionable metrics out of their GitHub-based pull requests.
Basically, it surfaces some metrics about pulls which have been open too long
or which are too large to be effectively reviewed.

I'd love to get some feedback. You can find it here:
[https://www.pipelineinsight.com](https://www.pipelineinsight.com).

------
paulhallett
We (Lyst) recently shared our code review guidelines on GitHub:

[https://github.com/lyst/MakingLyst/tree/master/code-
reviews](https://github.com/lyst/MakingLyst/tree/master/code-reviews)

We've tried to strike a balance between keeping momentum and making sure
you're shipping good-quality code.

------
crummy
10K-LOC just wrote an article on this titled Effective Code Reviews:

[https://10kloc.wordpress.com/2016/04/03/effective-code-
revie...](https://10kloc.wordpress.com/2016/04/03/effective-code-reviews/)

------
cdevs
Pull request, one line changed:

    -My team has a fairly has a fairly standard github
    +My team has a fairly standard github

~~~
skewart
Haha, nice. Also a few lines later "has anyone else run I to similar
problems..." should be "has anyone run in to similar problems..." But HN won't
let me edit the text now so it's too late. All I can do is cringe at the
irony of having a couple of typos in a question about code review. I wrote
the question
on my phone from bed this morning, and I guess I wasn't fully awake.

------
sidcool
Pair programming or integrated code reviews in development workflow are
preferred.

------
ebbv
My team uses Fisheye + Crucible. We have an expectation that outstanding
reviews will be handled end of day and beginning of day (some people work
staggered shifts, so they may have reviews awaiting them when they start in
the morning.)

Additionally, we will do reviews for each other as we return from lunch or on
a break between issues, etc. as possible for each person.

Revisiting a revised review is handled the same way. If I put up a review and
there are defects found that need to be handled, I'll handle those and then
add the resulting commit(s) to the review and let the team know (though they
will check their Crucible inbox when they get a chance anyway.)

If a review is holding other things up we can be pests about it if necessary,
but it's up to the other devs to decide when they have time to take a look.
It's not up to me.

Since the reviewers and reviewees are all on the same team, we are all
motivated to take care of reviews. If you're not all on the same team, or
your team doesn't have a healthy "We're all in this together" attitude, that
could be problematic.

------
throwaway_exer
Development tools ultimately are there to help the developers.

For teams with a mix of abilities, something like gerrit can make sense where
approval is needed before a merge to avoid total chaos.

For a group of experienced developers, approval is a very expensive delay, and
merging should be allowed first and review done in parallel.

Where I work the only people who commit are experienced developers, so the
whole +2 gerrit thing is rather pointless, especially since jenkins does
syntax-checking.

You might spot the occasional "typo" in a commit, but are you really going to
tell the local language expert or framework author they're wrong about
anything else, when your opinion carries the same weight as theirs?

