
Pull request review mistakes - scottnonnenberg
https://blog.scottnonnenberg.com/top-ten-pull-request-review-mistakes/
======
hnruss
My team has benefited significantly from diligently reviewing pull requests.
It may seem like a waste of time at first, but I can't count how many bugs
we've caught this way. Team productivity is also increased due to knowledge-
sharing, as we have to understand more parts of the code in order to properly
review it. Reviews typically take between 10 mins and 4 hours, depending on
the size of the PR. We've also improved our competence with Git, as we have
tried to make the series of commits in a PR more understandable. PRs have
actually increased work satisfaction on my team, as we get much more feedback
on our work this way. Usually feedback is positive, but even when it isn't, we
work at it together until the PR looks good.

~~~
throwawayish
If there are multiple releases/branches, then I also recommend _very
carefully_ reviewing merges between branches, even the files/regions where git
(or whatever SCM is used) merged automatically with no conflicts. I've
personally seen multiple incidents now where, through review, bugs and
security issues were found in merges, ranging from subtle to catastrophic. I
even saw one catastrophic security issue introduced into a file git merged
with no conflicts.

(However, more issues are typically introduced by manual conflict resolution,
since we humans are also easily confused when doing it. Both problems are
relatively independent of language, although some specific instances might be
caught by a compiler or tests. If the latter don't catch them, it may be a
sign that some tests are missing.)

 _Be careful._

------
blauditore
About _4. Style over substance_:

Coding style (apart from formatting) can be crucial for understandability, so
it's important to have a critical look at it when reviewing. For instance,
consider the following snippets:

    
    
      void myMethod(FooBar arg) {
        if (arg != null) {
          arg.foo(something);
          for (int i = 0; i < arg.bars().size(); i++) {
            arg.bars().get(i).baz();
          }
        }
      }
    

vs

    
    
      void myMethod(FooBar arg) {
        if (arg == null)
          return;
    
        arg.foo(something);
        arg.bars().forEach(b -> b.baz());
      }
    

Even though they're functionally identical, they look wildly different. These
things can hardly be judged by an automated process like linting, so it's
important to check them manually.

~~~
glebm
> These things can hardly be judged by an automatic process like linting

Both of these (if vs guard clause, loop vs each) are enforced by the Rubocop
linter in Ruby.
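
(For reference, a minimal `.rubocop.yml` sketch covering those two cases — `Style/GuardClause` and `Style/For` are real RuboCop cops, though this exact configuration is illustrative:)

```yaml
# Illustrative .rubocop.yml excerpt
Style/GuardClause:    # flags nested conditionals that could be early returns
  Enabled: true
Style/For:            # prefers each-style iteration over for loops
  Enabled: true
  EnforcedStyle: each
```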

~~~
throwawayish
Today in: Broadly missing a point.

It's good and nice that there is one linter, for one particular language, that
catches 100% of these issues _in an example_, but that doesn't do much to
change the fact that most linters for most languages are unable to do this,
and that there are quite a few instances of these decisions where it is
doubtful that any linter would be able to catch them.

~~~
glebm
> most linters for most languages are unable to do this,

In the long run improving the linter is better for everyone. The examples
given by the OP as infeasible are feasible.

Focus on the design over trivial style issues. Your style guide should be 95%
covered by the linter. If it isn't, fix your linter.

> Broadly missing a point

My point is: underestimating static analysis will lead you to the wrong
conclusions.

------
LandoCalrissian
The frontend complexity one rings very true. I know they are largely talking
about CSS changes in the article, but I think sometimes experienced developers
can just discount the frontend as a whole.

At my job my coworker and I did a major refactor on part of our frontend
codebase. Probably 2,000 lines of code were changed and the only comment on
the PR was for additional unit tests on one method we had touched on the
backend. We brought it up multiple times in standup that this really needed to
be reviewed properly and we got radio silence. The PR was just passed through.

Your entire platform really does matter; it's important to make sure
everything works, not just the parts you understand.

~~~
hobarrera
I really try to find out how code review and testing work when interviewing
for a job. Whenever I get to the "do you have any questions for us" part, I
start asking about their workflow, review process, and test coverage, as well
as several more practical questions about how they deal with it ("what would
obviously get a PR rejected?").

The obvious ones I'd discard are the ones that don't use a VCS (I don't think
I'd take a non-git one either, and CVS is definitely a no-go).

Sometimes it's telling that no mention is made of tests, or of style guides,
when asking these questions. It really gives you a taste of the "code
culture" at that employer.

------
zachrose
I'd be very interested in a diffing tool that gave some broad overview of how
things have moved around. Otherwise, the kind of refactoring where you break
things apart just creates a pull request that's harder to get oriented in.
Which is a shame, because a code review that's harder to skim can
disincentivize the changes that are needed most.

Conversely, it's just so easy to add an if statement here or an early return
there, and over time you've got a 400-line method with 100 branches and an
Avogadro's number of possible paths. But you look at a two-line diff and say,
"Hey, this looks reasonable."

------
SoulMan
One of the problems we have faced is when multiple engineers do essentially
the same work (e.g. the Create and Update operations of a CRUD module) but
with very different class and method structures. Whoever makes it to master
first becomes the standard, and the others need to modify their code to
conform to that style. Everyone's style is correct in some sense, but the
reviewer needs to make sure the code reads as if it were written by one team,
not that one file uses a factory pattern while another uses a switch case.

~~~
quantumhobbit
Not that there isn't value in having everyone follow the same patterns, but I
find it gets emphasized over just getting work done a lot of the time. I've
seen projects waste so much effort making the code uniform instead of
delivering features. If the code works and is readable, who cares if it varies
somewhat from similar code.

~~~
jlg23
> I've seen projects waste so much effort making the code uniform

This one is usually an automated task. Everyone agrees on a coding style,
configures their development environment accordingly, and the first time you
work on a non-standard file, you convert it and commit (most of it is done by
your editor/IDE, if configured correctly).

> If the code works and is readable, who cares if it varies somewhat from
> similar code.

Me. I regularly have to clean up after lone warriors who talk like you have
"delivered features".

A coding style does not represent some universal truth about code aesthetics:
on the one hand it allows programmers to read code with a well-defined set of
expectations; on the other, it helps prevent some of the big annoyances for
reviewers.

The point is not to like a specific style but to adhere to it. It takes you 5
minutes to adapt your own code to the project's coding style; it would take
others 10 minutes to do so; and, in the unfortunately most common case, it
takes hours over a single year when nobody fixes your coding style and
everybody who has to touch your code trips over the same things.

------
rocky1138
One thing that's improved our results is asking to see the changes actually
working. The person has to demo them live to the reviewer. Many times a small
bug or hiccup occurs which would never have been seen by comparing diffs.

~~~
hobarrera
This sounds like it might work for PRs for huge features, but not for every
PR. In most cases, coordinating a live demo takes more time than writing the
entire PR did, especially if more than one person needs to see it.

~~~
tdumitrescu
What I dream of implementing at work (once our infrastructure is more fully
containerized) is an addition to the CI/CD pipeline where each pull request
doesn't just kick off tests but also spins up a full staging environment of
its own, a click away for reviewers/designers/PMs. If there were no resource
constraints, you'd get a separate one for each commit/push, so you could
compare the UX impact of individual changes.

~~~
deckar01
You don't have to wait until you migrate to containerized deploys. Last year
my team started deploying our project to folders named after the branch, and
set up one Apache rule to route all *.dev subdomains to the correct folder.
We call them virtual development environments, and they've worked really well
with our existing QA process.
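
(A hypothetical sketch of what such an Apache rule could look like, using
mod_vhost_alias — the hostnames and paths here are made up, and the actual
setup described above may differ:)

```apache
# Map <branch>.dev.example.com to /var/www/branches/<branch>
# Requires mod_vhost_alias; %1 is the first dot-separated hostname part.
<VirtualHost *:80>
    ServerAlias *.dev.example.com
    UseCanonicalName Off
    VirtualDocumentRoot /var/www/branches/%1
</VirtualHost>
```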

~~~
hobarrera
What about dependency changes (ie: a new package needs to be installed or
updated), or database schema updates?

This also sounds very specific (probably Apache+CGI?). It won't really work
for most stacks out there.

~~~
deckar01
The deploy script updates dependencies.

We're not using the Apache CGI plugin for Python to resolve dependencies,
just packages relative to the application root. I think most languages that
can import dependencies can import relative dependencies.

Although my specific setup may not work for everyone, I suspect there is an
equally simple solution for most projects. Containerization is definitely a
long-term goal, but it is not a prerequisite for automated deploys. The
changes you have to make to get deploys working on an existing server will
also be useful for containerization.

