Hacker News
How to Do Code Reviews Like a Human (mtlynch.io)
110 points by sidcool on Oct 15, 2017 | 37 comments

> Let computers do the boring parts

It's hard to overstate the positive changes good test coverage can make to your code reviews. You get to switch from discussing potential mistakes to discussing strategy, almost wholesale.

> Never say "you"

Do use "you" when complimenting someone. Think twice when critiquing them.

There was a fabulous article that passed HN once on how to communicate with optimism. https://news.ycombinator.com/item?id=12298898

The main point I took away from it was to use specific, personal, and permanent language when complimenting someone, and general, impersonal, and temporary language when criticizing someone. Positives are things people get personal credit for; negatives are forgivable, temporary behaviors that anyone is prone to and that can change.

> Tie notes to principles, not opinions

Like most programmers, I automatically enjoy discussing principles. But most often, principles are opinions. I try to keep them out of code reviews when I can (which is perhaps the majority of the time, but definitely not always). I like to focus solely on what works and what needs to be done, if I can. Being too "principled", in my experience, often leads to not looking for, and not understanding, situation-specific reasons to go against a given principle, or to prioritize one principle over another. Plus, almost everyone is applying their principles when they code; it's just frequently not the same principles you think of when you review the code.

Yeah, I've been asked to make an implementation worse before because it didn't look like the reviewer's code. I used a static pure function to implement some basic functionality and made that function public so that it could be unit tested. The reviewer asked me to make the pure function impure and protected and then use the friend keyword to couple the class to the unit tests. When I asked him about it all of his responses were on the principle that we shouldn't use too many techniques from the language, and we're already using friend to do what I was doing with the static function.

Needless to say, since this was a favor for the reviewer and his feedback was basically "completely reimplement the solution for a totally arbitrary reason," the pull request languishes to this day.

If I'm reading this right, it sounds like the reviewer was arguing for consistency with the rest of the code base. If so, I can agree with the reviewer, but I would have added that if the static function was superior, it's worth asking how much work it would be to make them all consistent.

Being a programmer, it's extremely likely that @jschwartzi was already trying to be consistent in every way he knew how, and every way that mattered to him.

Bringing up a random and surprise way to make something consistent in a code review, for reasons not already written down and agreed upon in your team's coding standard, should generally be deemed inappropriate.

I recommend codifying the axes of consistency in your org's coding standard and style guide -- that's what they are, after all. Make sure everyone agrees. If a new consistency rule seems important after multiple similar PRs, it should go through a process where the whole team discusses whether it should be adopted into the style guide.

In my experience, "consistency" is one of those abused opinion-principles. Everyone has a different idea of what it means to be consistent. Some consistency is a great idea, but in general, appeals to consistency that aren't already in the coding standard should be prioritized lower than simplicity, and lower than whether the code & tests work as designed.

The problem is that what I'm talking about is not style-based. In a simple, made-up case: if I have a class that contains utility functions and I task someone with adding a new utility function, I would expect it to end up in the already-existing class.

Making a new utility class for the new function because the old class is too large in the new programmer's opinion is breaking consistency. The class may be too large, but putting the new lone function off by itself is now an exception that everyone else will have to deal with. That one inconsistency then leads to others adding their own utility classes, and then you have a big mess with some functions scattered across new utilityX, utilityY, and the old utility.

As I said above, I would have handled the review differently and suggested/asked if all of the functions involved could be refactored to use the new improved method. In my example above, I would have asked if he/she could go ahead and break up the utility class as part of the change to add the new function. If the work was too much to bring it all up to date, then fit with what already exists and put a ticket in to do the explicit refactor later.

I understand and appreciate your point. And there are definitely situations where I agree with your take. Also I'm glad you'd phrase the review as a request and not a demand, that is closer to the right approach.

But, as an engineering manager, I'm making a procedural point about code reviews, not disagreeing with your opinion. If the type of consistency being requested in the code review is not in the coding standard, then it's not fair game for blocking a PR or even making judgements, pure and simple. It's certainly okay to discuss it, and for the reviewer to ask questions and see if the submitter would like to improve consistency according to the reviewer, but if it's not part of the standard, then it's not a rule.

"Consistency" is not a rule by itself anyway, it's a vague unspecified idea. There's no such thing as 100% consistency in multiple pieces of code, unless they're 100% copies. Every single file has unique differences that the team has to deal with. There's not much evidence that being extremely nit-picky about consistency improves safety or productivity, and in general, there's no reason in advance to think the "exception" will cause anyone else any trouble. Worrying about it may be premature optimization.

To be more specific to the example above, there are times when visibility of functions should be limited, for safety reasons. Exposing functions selectively using friend declarations is generally seen as good practice in many situations, it's not a bad default choice. But, static public functions absolutely have their place too, and moreover they are a simpler language feature than friends. Simplicity is (to me) more important than consistency. In the example above, @jschwartzi's reviewer should have asked the question "hey do you think it would be more consistent to use friend instead of static public?" even before making any actionable requests. And if the answer is no and backed up with reasons, which it was, and if there's no rule in the coding standard on this point, then the reviewer should have said okay and approved it.

It would've required completely rewriting the solution. The reason we used friend functions in the past was when we had god objects that we needed to test. It did not make any sense here since the class under test had a single responsibility. It would also have made the implementation more complicated and would have tightly coupled the unit tests to the implementation of the class.
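The design being defended here can be sketched in a few lines. The thread's actual code was C++, so this is only an illustrative Python analogue with made-up names: the core logic lives in a pure, publicly visible function, and tests exercise it directly instead of reaching into the class's internals (the role `friend` would have played).

```python
# Hypothetical sketch of the design under discussion (the original was C++).
# The pure logic is a standalone public function, so tests can call it
# directly without any special access to Parser's internals.

def normalize_token(raw: str) -> str:
    """Pure function: same input always yields the same output."""
    return raw.strip().lower()

class Parser:
    def parse(self, line: str) -> list:
        # The class simply delegates to the pure helper.
        return [normalize_token(t) for t in line.split(",")]

# Unit tests target the pure function directly -- no coupling of the
# tests to Parser's implementation details.
assert normalize_token("  Foo ") == "foo"
assert Parser().parse("a, B ,c") == ["a", "b", "c"]
```

The trade-off named in the comment falls out of the sketch: routing tests through privileged access to the class ties them to its implementation, while testing the pure function leaves the class free to change.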

I mean you could just explain why you're not going to do that, and if he still refuses to sign off because of it, find another reviewer or escalate or both.

I agree that it's no good having a reviewer with bad priorities hang you up over them. But you're the one whose task it is to carry through if you can, and parsing blame doesn't ship code.

It's nice to have the luxury to have a PR languish because of obdurate code reviewers.

Does strategic speaking make any sense when you've both read these guides? (The answer is no.) I don't think it's normal to need a guide on how to communicate with people, and I don't think a guide can help you if you feel like you need one.

I don't understand your point, can you elaborate?

If I know you’re trying to manipulate me with some speaking strategy, it looks bad for you and fails

Man... wild. I sincerely have no idea what is bugging you, whether it was something I said or something in one of these articles. I'm honestly interested to hear why you're responding this way. I am open to a point of view that contradicts mine. I'm wrong about things as often as anyone. But it feels like you might have misunderstood something along the way and taken it personally? I don't know, feel free to clarify, if you want to discuss this. If you don't want to discuss this, how about we just stop and let it be?

It sounded like you thought the code reviews article at the top of this thread, and the optimism article that I linked were contradicting each other? Is that what you meant by strategic speaking not making sense after reading both of these articles?

As far as guides on how to communicate, I'm not sure what you mean about it not being normal. I can't even count how many best-selling books on communication there have been. Learning how to manage programmers is largely about how to communicate. Learning how to be a better programmer is as much about learning how to communicate with your manager and team mates as it is about learning programming language features. I spent 20 years in school learning about ways to communicate. I've since spent 20 years coding for corporations of various sizes, and all of them have had both mandatory and elective classes on communication. Learning communication styles & pitfalls seems absolutely normal to me. In my opinion, breakdowns in communication are the single largest problem in software development, and probably the single largest problem with the human race. I'd speculate that someone who thinks learning how to better communicate won't help them at all is most likely unaware of history and also frequently failing to communicate effectively.

Can't tell if this thread got meta or if this is a perfect example of the original article's point to not use "you". Eradicatethots' intention (I believe) was to say: "If I know [someone] is trying to manipulate me with some speaking strategy, it looks bad and fails."

Great point, I did assume it was me and not someone. Maybe he's not as agitated as it seemed. But the article's point was to avoid personal attacks, not about pronoun confusion, right? If you're right, then I made the mistake. Yours is the better, more generous benefit-of-the-doubt assumption anyway, so in retrospect I prefer it.

That said, I don't think it's fair to claim that avoiding personal attacks, and using language that helps people to be more optimistic is equivalent to manipulation that looks bad and fails. It's very, very hard to read @raganwald's blog post and take it as manipulative - did you check it out? You should!

I still don't understand "I don’t think a guide [on communication] can help you if you feel like you need one." Did I mis-interpret that one too, is there also a better assumption here than the one I made?

I agree, the thoughtful and intentional use of language is important (read through raganwald's post and it is a good explanation of that). Seems cynical to me to think of conversing as a strategic game of manipulation.

Didn't mean to agree with the previous poster – simply found it amusing that the idea of not needing to think about communication was immediately disproven by the statement being misinterpreted.

Nice one. It always bothered me that code review is talked about as something completely different from any other supervisory/controlling or people-managing (in the sense of trying to make people do what you want) work. It is not; it can demotivate people and rob them of any motivation the same way as any other supervisory function done badly can. The most important point would be to distinguish between bad code and "I would have done that slightly differently". The two are not nearly the same.

Though, I don't think "we" is better than "you". The most maddening thing for me was when a reviewer attempted to take complete control of, and responsibility for, a task, as if it were not me who was supposed to do it in the first place. It is not "us"; it is you telling me what to do and me then doing it (oftentimes regardless of what I think, due to time pressure or simply because the reviewer has higher status in the company), so let's not lie about it. Speaking of feelings: let people feel like they did the work when they did the task; don't take it from them.

The impact of this "actually it is our task, let's do it my way" on juniors was quite visible: they essentially stopped thinking for themselves and stopped looking proud of their work. They became quite passive, as if working in a factory; you didn't see them make suggestions on how to make something better even when they knew a way (based on personal discussion), and they shied away from any dilemma or initiative.

Bookmarked, and I'm excited for part two.

Most of these points, I think, generalize to all code review workflows. Some teams, though, put a high priority on individual ownership and accountability, which creates a few interesting dynamics. On one such team, code reviews were mostly treated as nonblocking. With high trust, we assumed that most code was good by default. From that perspective, it was also discouraged to drop everything to do a code review; you were better off staying productive until you found a natural breaking point. Because the code was often already shipping when it got reviewed, small style changes were typically moot and the focus was higher-level.

There's definitely a trade-off with that approach in terms of catching bugs early, and it does depend on a high level of trust in developers. But I didn't feel like the post's assumptions quite covered that.

I work in such an environment; it is pretty cool. We also have testers who test things completely independently of developers, so that helps with bugs. Independently means that my biases about what could break and what is important don't affect them.

I have never seen bugs caught in code review. I did see plenty of bugs go through code review just fine. I'm sure bugs might be caught occasionally, but I doubt code reviews are an effective tool for this.

I have seen and prevented bugs in others' code, and by others in my code. I've seen important misunderstandings unearthed by a few questions during code review, leading to certain design changes.

If I never saw bugs during code review, it might be due to my peers producing flawless code, or me not looking hard enough. I see the latter as more probable.

I've seen it go both ways enough times to know better than thinking code reviews are a silver bullet, but still be glad to have them and still prefer having detailed review be part of the culture over having it not be.

One more thought: some code review comments are basically feature requests. E.g., they are improvements to functionality that were not required by the original requirements (but may still be good ideas). It is often the case that the reviewer has found another solution, or wants it even more configurable, or something of the sort.

The best way to treat these is to open a new issue in whatever tracker you use, instead of blocking already-done features and forcing a long-lived branch. That way they can be prioritized properly, a PM or analyst gets a say on whether they are in line with what the customer needs, and most importantly they don't endanger the next release. We promised X to the customer, and while X+Y might be better, holding the needed X hostage to the just-proposed Y is not a good process.

This one hundred times.

I once had a reviewer who took this even further, asking for changes in code that were unrelated to my changeset but appeared in the diff. e.g. I add a parameter Y and get a comment like "parameter X should support feature Z".

Please just let me get my feature out!

Blocking such changes might be an entirely reasonable thing to do. I've seen it a number of times.

CL author implements some feature, touching a library in the process. The changes made to the library help them match their requirements but are awfully specific to whatever feature it is they're implementing and don't make much sense in the context of the existing library API.

Sorry, but NAK. Don't increase the API surface with one-off additions just because it is the most convenient way to implement some feature. If you can't make do with the existing API and don't think there's a nice, general abstraction to add to the library, you might want to figure out the absolute minimum API modification that lets you achieve what you want. It might require more effort in the user part of the code, but it is also that much more likely to be acceptable.

Then the code review should say: this function does not match the library's abstraction level. It should not say: make the default search string configurable in settings and, while you are at it, refactor the settings page. Nobody asked for a configurable default search string; the customer is fine having it empty.

If your required abstraction level forces new requirements (instead of just forcing code to be moved elsewhere or written differently while keeping the requirements), then your abstraction is wrong.

I mostly agree, with one exception: the author prefers "sucessfully -> successfully" to "You misspelled 'successfully.'" We have a lot of these kinds of comments at work, and I find both equally frustrating. I wish reviewers would just make those simple changes themselves; it's almost the same amount of effort as writing a comment.

I've never been in a culture where reviewers add commits to a topic branch they're reviewing! If I may, what have you found the pros and cons to be, compared to what I guess might be the more usual style?

We're pretty conservative with the things we add, it's 95% correcting comments/documentation and the other 5% minor (behavior-preserving) transformations.

We work in Python, so those transformations are things like using `collections.defaultdict` instead of a plain dictionary where it makes sense etc etc.
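A minor, behavior-preserving transformation of that kind might look like the following (a made-up grouping example, not the poster's actual code):

```python
from collections import defaultdict

words = ["apple", "avocado", "banana", "cherry", "cranberry"]

# Before: a plain dict with manual key initialization.
groups = {}
for w in words:
    if w[0] not in groups:
        groups[w[0]] = []
    groups[w[0]].append(w)

# After: the same behavior with collections.defaultdict, which
# creates the empty list automatically on first access to a key.
groups2 = defaultdict(list)
for w in words:
    groups2[w[0]].append(w)

assert dict(groups2) == groups  # identical result, less boilerplate
```

Because the result is identical, a reviewer can apply this sort of edit directly without changing what the code does.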

I mean, my IDE usually tells me if I spelled something wrong, and I try to use similar tooling to everyone else I'm working with, so this specific case might be rare in some shops. You would have to completely ignore the feedback your IDE gives you about your code, at which point you might have bigger issues at your code review if you don't even listen to some of your IDE's minor suggestions.

Or your misspelling might have produced a different, correctly spelled word.

So you want reviewers to drop what they’re doing and take the time to pull in your changes to fix the typo, break up commit history by adding their change, instead of you fixing the mistake when it’s found, especially if it’s newly introduced? That’s pretty inconsiderate/lazy.

Don’t be that teammate please.

Where I work the reviewer can seamlessly make a (suggested) edit to the change request which the author can accept with a single button click. So I guess it depends on the toolchain you have available, might be that GP has better tools than what you're used to.

(To be clear, on submit, the edit you make appears as though the author had made it themselves, so it doesn't pollute the commit log at all.)

What tool do you use? That feature sounds cool.

It is an internal-only tool; I don't know of a public equivalent. Indeed a cool feature.

Similar to dmoy our code review workflow supports making a (small) change while reviewing. We use GitHub PRs, and the edit function combined with choosing a commit message like "!fixup <original commit message>" and `git rebase --autosquash` (or `git merge --squash` when merging the PR) works well.

It's funny that you mention not wanting people to drop what they're doing as IMO that's one of the advantages of the approach I'm describing. The reviewer is already in the code, whereas the person being reviewed has often moved on to something else.

> Frame feedback as requests, not commands

On my team, PRs do not require approval to be merged by the author. This gives the author full autonomy. They can entirely ignore any/all feedback if they so choose. (However, they are also held responsible if ignored feedback results in problems after the work is merged.) As such, feedback is always framed as a suggestion, not a command or a request. Suggestions must be supported by evidence, or they’re challenged (or ignored). It’s a lot easier to discuss a suggestion than a command or request.
