
How to Do Code Reviews Like a Human - sidcool
https://mtlynch.io/human-code-reviews-1/
======
dahart
> Let computers do the boring parts

It's hard to overstate the positive change good test coverage can make to
your code reviews. You get to switch from discussing potential mistakes to
discussing strategy, almost wholesale.

> Never say "you"

Do use "you" when complimenting someone. Think twice when critiquing them.

There was a fabulous article that passed HN once on how to communicate with
optimism.
[https://news.ycombinator.com/item?id=12298898](https://news.ycombinator.com/item?id=12298898)

The main point I took away from it was to use specific, personal and permanent
language when complimenting someone. Use general, impersonal, and temporary
language when criticizing someone. Positives are things people get personal
credit for, negatives are forgivable temporary behaviors that can change and
everyone is prone to.

> Tie notes to principles, not opinions

Like most programmers, I automatically enjoy discussing principles. But most
often, principles _are_ opinions. I try to keep them out of code reviews when
I can (which is perhaps the majority of the time, but definitely not always).
I like to focus solely on what works and what needs to be done, if I can.
Being too "principled", in my experience, often leads to not looking for, and
not understanding, situation-specific reasons to go against a given principle,
or to prioritize one principle over another. Plus, almost everyone applies
their principles when they code; it's just frequently not the same principles
you think of when you review the code.

~~~
jschwartzi
Yeah, I've been asked to make an implementation worse before because it didn't
look like the reviewer's code. I used a static pure function to implement some
basic functionality and made that function public so that it could be unit
tested. The reviewer asked me to make the pure function impure and protected
and then use the friend keyword to couple the class to the unit tests. When I
asked him about it all of his responses were on the principle that we
shouldn't use too many techniques from the language, and we're already using
friend to do what I was doing with the static function.

Needless to say since this was a favor for the reviewer and his feedback was
basically "completely reimplement the solution for a totally arbitrary
reason," the pull request languishes to this day.

~~~
matwood
If I'm reading this right, it sounds like the reviewer was arguing for
consistency with the rest of the code base. If so, I can agree with the
reviewer, but I would have added a question: if the static function was
superior, how much work would it be to make them all consistent?

~~~
dahart
Being a programmer, it's _extremely_ likely that @jschwartzi was already
trying to be consistent in every way he knew how, and every way that mattered
to him.

Bringing up a random and surprise way to make something consistent in a code
review, for reasons not already written down and agreed upon in your team's
coding standard, should generally be deemed inappropriate.

I recommend codifying the axes of consistency in your org's coding standard
and style guide -- that's what they are, after all. Make sure everyone agrees.
If a new consistency rule seems important after multiple similar PRs, it
should go through a process where the whole team discusses whether it should
be adopted into the style guide.

In my experience, "consistency" is one of those abused opinion-principles.
Everyone has a different idea of what it means to be consistent. Some
consistency is a great idea, but in general, appeals to consistency that
aren't already in the coding standard should be prioritized lower than
simplicity, and lower than whether the code & tests work as designed.

~~~
matwood
The problem is that what I'm talking about is not style-based. In a simple,
made-up case: if I have a class that contains utility functions and I task
someone with adding a new utility function, I would expect it to end up in
the already existing class.

Making a new utility class for the new function, because the old class is too
large in the new programmer's opinion, breaks consistency. The class _may_ be
too large, but putting the new lone function off by itself is now an
exception that everyone else will have to deal with. That one inconsistency
then leads to others adding their own utility classes, and then you have a
big mess with some functions scattered across new utilityX, utilityY, and the
old utility.

As I said above, I would have handled the review differently and
suggested/asked if all of the functions involved could be refactored to use
the new improved method. In my example above, I would have asked if he/she
could go ahead and break up the utility class as part of the change to add the
new function. If the work was too much to bring it all up to date, then fit
with what already exists and put a ticket in to do the explicit refactor
later.

~~~
dahart
I understand and appreciate your point. And there are definitely situations
where I agree with your take. Also I'm glad you'd phrase the review as a
request and not a demand, that is closer to the right approach.

But, as an engineering manager, I'm making a procedural point about code
reviews, not disagreeing with your opinion. If the type of consistency being
requested in the code review is not in the coding standard, then it's not fair
game for blocking a PR or even making judgements, pure and simple. It's
certainly okay to discuss it, and for the reviewer to ask questions and see if
the submitter would like to improve consistency according to the reviewer, but
if it's not part of the standard, then it's not a rule.

"Consistency" is not a rule by itself anyway, it's a vague unspecified idea.
There's no such thing as 100% consistency in multiple pieces of code, unless
they're 100% copies. Every single file has unique differences that the team
has to deal with. There's not much evidence that being extremely nit-picky
about consistency improves safety or productivity, and in general, there's no
reason in advance to think the "exception" will cause anyone else any trouble.
Worrying about it may be premature optimization.

To be more specific to the example above, there are times when visibility of
functions should be limited, for safety reasons. Exposing functions
selectively using friend declarations is generally seen as good practice in
many situations, it's not a bad default choice. But, static public functions
absolutely have their place too, and moreover they are a simpler language
feature than friends. Simplicity is (to me) more important than consistency.
In the example above, @jschwartzi's reviewer should have asked the question
"hey do you think it would be more consistent to use friend instead of static
public?" even before making any actionable requests. And if the answer is no
and backed up with reasons, which it was, and if there's no rule in the coding
standard on this point, then the reviewer should have said okay and approved
it.

------
watwut
Nice one. It has always bothered me that code review is talked about as
something completely different from any other supervisory, controlling, or
people-managing work (in the sense of trying to make people do what you
want). It is not: done badly, it can demotivate people and rob them of any
motivation the same way any other supervisory function done badly can. The
most important point is to distinguish between bad code and "I would have
done that slightly differently". The two are not nearly the same.

Though, I don't think "we" is better than "you". The most maddening thing for
me was when a reviewer attempted to take complete control of and
responsibility for a task, as if it were not me who was supposed to do it in
the first place. It is not "us"; it is you telling me what to do and me then
doing it (oftentimes regardless of what I think, due to time pressure or
simply because the reviewer has higher status in the company), so let's not
lie about it. Speaking of feelings: let people feel like they did the work
when they did the task; don't take that from them.

The impact of this "actually it is our task, let's do it my way" attitude on
juniors was quite visible: they essentially stopped thinking for themselves
and stopped looking proud of their work. They became quite passive, as if
working in a factory; you did not see them make suggestions for how to make
something better even when they knew a way (based on personal discussion),
and they shied away from any dilemma or initiative.

------
wakamoleguy
Bookmarked, and I'm excited for part two.

Most of these points, I think, generalize to all code review workflows. Some
teams, though, put a high priority on individual ownership and accountability,
which creates a few interesting dynamics. First, code reviews were mostly
treated as nonblocking. With high trust, we assumed that most code was good by
default. From that perspective, it was also discouraged to drop everything to
do a code review. You were better off focusing on being productive until you
found a natural breaking point. Because the code was often already shipping
when it got reviewed, small style changes were typically moot and the focus
was higher level.

There's definitely a trade-off with that approach in terms of catching bugs
early, and it does depend on a high level of trust in developers. But I
didn't feel like the post's assumptions quite covered that.

~~~
KKKKkkkk1
I have never seen bugs caught in code review. I did see plenty of bugs go
through code review just fine. I'm sure bugs might be caught occasionally, but
I doubt code reviews are an effective tool for this.

~~~
nine_k
I have seen and prevented bugs in others' code, and by others in my code. I've
seen important misunderstandings unearthed by a few questions during code
review, leading to certain design changes.

If I never saw bugs during code review, it might be due to my peers producing
flawless code, or me not looking hard enough. I see the latter as more
probable.

------
watwut
One more thought: some code review comments are basically feature requests.
That is, they are improvements to functionality that were not required by the
original requirements (but may still be good ideas). It is often the case
that the reviewer has found another solution, or wants the feature even more
configurable, or something of the sort.

The best way to treat these is to open a new issue in whatever tracker you
use, instead of blocking an already-done feature and forcing a long-lived
branch. This way they can be prioritized properly, the PM or analyst gets a
say on whether they are in line with what the customer needs, and most
importantly they do not endanger the next release. We promised X to the
customer, and while X+Y might be better, holding the needed X hostage to the
just-proposed Y is not a good process.

~~~
slrz
Blocking such changes might be an entirely reasonable thing to do. I've seen
it a number of times.

CL author implements some feature, touching a library in the process. The
changes made to the library help them match their requirements but are awfully
specific to whatever feature it is they're implementing and don't make much
sense in the context of the existing library API.

Sorry, but NAK. Don't increase the API surface with one-off additions just
because it is the most convenient way to implement some feature. If you can't
make do with the existing API and don't think there's a nice, general
abstraction to add to the library, you might want to figure out the absolute
minimum API modification that lets you achieve what you want. It might
require more effort in the user-side code, but it is also that much more
likely to be acceptable.

~~~
watwut
Then the code review should say: this function does not match the library's
abstraction level. It should not say: make the default search string
configurable in settings, and while you are at it, refactor the settings
page. Nobody asked for a configurable default search string; the customer is
fine having it empty.

If your required abstraction level forces new requirements (instead of just
forcing code to be moved elsewhere or written differently while keeping the
requirements), then your abstraction is wrong.

------
mrgriffin
I mostly agree, with one exception: the author prefers "sucessfully ->
successfully" to "You misspelled ‘successfully.’". We have a lot of these
kinds of comments at work, and I find both equally frustrating. I wish
reviewers would just make those simple changes themselves; it's almost the
same amount of effort as writing a comment.

~~~
Bahamut
So you want reviewers to drop what they’re doing and take the time to pull in
your changes to fix the typo, break up commit history by adding their change,
instead of you fixing the mistake when it’s found, especially if it’s newly
introduced? That’s pretty inconsiderate/lazy.

Don’t be that teammate please.

~~~
dmoy
Where I work, the reviewer can seamlessly make a (suggested) edit to the
change request, which the author can accept with a single button click. So I
guess it depends on the toolchain you have available; it might be that GP has
better tools than what you're used to.

(To be clear, on submit the edit you make appears as though the author had
made it themselves, so it doesn't pollute the commit log at all.)

~~~
watwut
What tool do you use? That feature sounds cool.

~~~
dmoy
It is an internal only tool, idk of a public equivalent. Indeed a cool
feature.

------
hnruss
> Frame feedback as requests, not commands

On my team, PRs do not require approval before the author merges them. This
gives the author full autonomy: they can entirely ignore any or all feedback
if they so choose. (However, they are also held responsible if ignored
feedback results in problems after the work is merged.) As such, feedback is
always framed as a _suggestion_, not a command or a request. Suggestions must
be supported by evidence, or they're challenged (or ignored). It's a lot
easier to discuss a suggestion than a command or request.

