It's hard to overstate the positive changes having good test coverage can make on your code reviews. You get to switch from discussing potential mistakes to discussing strategy, almost wholesale.
> Never say "you"
Do use "you" when complimenting someone. Think twice when critiquing them.
There was a fabulous article that made the rounds on HN once on how to communicate with optimism. https://news.ycombinator.com/item?id=12298898
The main point I took away from it was to use specific, personal and permanent language when complimenting someone. Use general, impersonal, and temporary language when criticizing someone. Positives are things people get personal credit for, negatives are forgivable temporary behaviors that can change and everyone is prone to.
> Tie notes to principles, not opinions
Like most programmers, I automatically enjoy discussing principles. But most often, principles are opinions. I try to keep them out of code reviews when I can (which is perhaps the majority of the time, but definitely not always). I like to focus solely on what works and what needs to be done, if I can. Being too "principled", in my experience, often leads to not looking for, and not understanding, situation-specific reasons to go against a given principle, or to prioritize one principle over another. Plus, almost everyone is applying their principles when they code; it's just frequently not the same principles you think of when you review the code.
Needless to say since this was a favor for the reviewer and his feedback was basically "completely reimplement the solution for a totally arbitrary reason," the pull request languishes to this day.
Bringing up a random, surprise way to make something consistent in a code review, for reasons not already written down and agreed upon in your team's coding standard, should generally be deemed inappropriate.
I recommend codifying the axes of consistency in your org's coding standard and style guide -- that's what they are, after all. Make sure everyone agrees. If a new consistency rule seems important after multiple similar PRs, it should go through a process where the whole team discusses whether it should be adopted into the style guide.
In my experience, "consistency" is one of those abused opinion-principles. Everyone has a different idea of what it means to be consistent. Some consistency is a great idea, but in general, appeals to consistency that aren't already in the coding standard should be prioritized lower than simplicity, and lower than whether the code & tests work as designed.
Making a new utility class for the new function, because the old class is too large in the new programmer's opinion, breaks consistency. The class may be too large, but putting the lone new function off by itself creates an exception that everyone else will have to deal with. That one inconsistency then leads to others adding their own utility classes, and then you have a big mess, with some functions scattered across the new utilityX and utilityY and the rest in the old utility.
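A hypothetical Python sketch of the drift described above (the module and function names are invented for illustration, not taken from the thread):

```python
# The existing, admittedly large, shared utility class the team already uses.
class StringUtils:
    @staticmethod
    def slugify(text):
        """Lowercase the text and join words with hyphens."""
        return "-".join(text.lower().split())


# The inconsistent alternative: a brand-new one-function home for the new
# helper. Callers now have to remember which utility class holds which
# function, and the next person is tempted to add StringUtilsY, and so on.
class StringUtilsX:
    @staticmethod
    def truncate(text, limit):
        """Cut text to `limit` characters, appending an ellipsis if cut."""
        return text if len(text) <= limit else text[:limit] + "..."
```

Neither class is wrong in isolation; the cost is the scattering itself, which is why folding the new function into the existing class (or doing the full break-up) usually beats a one-off exception.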
As I said above, I would have handled the review differently and suggested/asked if all of the functions involved could be refactored to use the new improved method. In my example above, I would have asked if he/she could go ahead and break up the utility class as part of the change to add the new function. If the work was too much to bring it all up to date, then fit with what already exists and put a ticket in to do the explicit refactor later.
But, as an engineering manager, I'm making a procedural point about code reviews, not disagreeing with your opinion. If the type of consistency being requested in the code review is not in the coding standard, then it's not fair game for blocking a PR or even making judgements, pure and simple. It's certainly okay to discuss it, and for the reviewer to ask questions and see if the submitter would like to improve consistency according to the reviewer, but if it's not part of the standard, then it's not a rule.
"Consistency" is not a rule by itself anyway, it's a vague unspecified idea. There's no such thing as 100% consistency in multiple pieces of code, unless they're 100% copies. Every single file has unique differences that the team has to deal with. There's not much evidence that being extremely nit-picky about consistency improves safety or productivity, and in general, there's no reason in advance to think the "exception" will cause anyone else any trouble. Worrying about it may be premature optimization.
To be more specific to the example above, there are times when visibility of functions should be limited, for safety reasons. Exposing functions selectively using friend declarations is generally seen as good practice in many situations, it's not a bad default choice. But, static public functions absolutely have their place too, and moreover they are a simpler language feature than friends. Simplicity is (to me) more important than consistency. In the example above, @jschwartzi's reviewer should have asked the question "hey do you think it would be more consistent to use friend instead of static public?" even before making any actionable requests. And if the answer is no and backed up with reasons, which it was, and if there's no rule in the coding standard on this point, then the reviewer should have said okay and approved it.
I agree that it's no good having a reviewer with bad priorities hang you up over them. But you're the one whose task it is to carry through if you can, and parsing blame doesn't ship code.
It sounded like you thought the code reviews article at the top of this thread, and the optimism article that I linked were contradicting each other? Is that what you meant by strategic speaking not making sense after reading both of these articles?
As far as guides on how to communicate, I'm not sure what you mean about it not being normal. I can't even count how many best-selling books on communication there have been. Learning how to manage programmers is largely about how to communicate. Learning how to be a better programmer is as much about learning how to communicate with your manager and team mates as it is about learning programming language features. I spent 20 years in school learning about ways to communicate. I've since spent 20 years coding for corporations of various sizes, and all of them have had both mandatory and elective classes on communication. Learning communication styles & pitfalls seems absolutely normal to me. In my opinion, breakdowns in communication are the single largest problem in software development, and probably the single largest problem with the human race. I'd speculate that someone who thinks learning how to better communicate won't help them at all is most likely unaware of history and also frequently failing to communicate effectively.
That said, I don't think it's fair to claim that avoiding personal attacks, and using language that helps people to be more optimistic is equivalent to manipulation that looks bad and fails. It's very, very hard to read @raganwald's blog post and take it as manipulative - did you check it out? You should!
I still don't understand "I don’t think a guide [on communication] can help you if you feel like you need one." Did I mis-interpret that one too, is there also a better assumption here than the one I made?
Didn't mean to agree with the previous poster – simply found it amusing that the idea of not needing to think about communication was immediately disproven by the statement being misinterpreted.
Though, I don't think "we" is better than "you". The most maddening thing for me was when a reviewer attempted to take complete control and responsibility over a task, as if it were not me who was supposed to do it in the first place. It is not us; it is you telling me what to do and me then doing it (oftentimes regardless of what I think, due to time pressure or simply because the reviewer has higher status in the company), so let's not lie about it. Speaking of feelings, let people feel like they did the work when they did the task; don't take it from them.
The impact of this "actually it is our task, let's do it my way" attitude on juniors was quite visible: they essentially stopped thinking for themselves and stopped looking proud of their work. They became quite passive, as if working in a factory; you did not see them make suggestions for how to improve something even when they knew a way (based on personal discussion), and they shied away from any dilemma or initiative.
Most of these points, I think, generalize to all code review workflows. Some teams, though, put a high priority on individual ownership and accountability, which creates a few interesting dynamics. First, code reviews tend to be treated as nonblocking. With high trust, the assumption is that most code is good by default. From that perspective, it's also discouraged to drop everything to do a code review; you're better off staying productive until you reach a natural breaking point. Because the code is often already shipping when it gets reviewed, small style changes are typically moot and the focus is higher level.
There's definitely a trade-off with that approach in terms of catching bugs early, and it does depend on a high level of trust in developers. But I didn't feel like the post's assumptions quite covered that.
If I never saw bugs during code review, it might be due to my peers producing flawless code, or me not looking hard enough. I see the latter as more probable.
The best way to treat these is to open a new issue in whatever tracker you use, instead of blocking an already-done feature and forcing a long-lived branch. That way they can be prioritized properly, the PM or analyst gets a say on whether they are in line with what the customer needs, and, most importantly, they don't endanger the next release. We promised X to the customer, and while X+Y might be better, holding the needed X hostage to the just-proposed Y is not a good process.
I once had a reviewer who took this even further, asking for changes in code that was unrelated to my changeset but merely appeared in the diff. For example: I add a parameter Y and get a comment like "parameter X should support feature Z".
Please just let me get my feature out!
CL author implements some feature, touching a library in the process. The changes made to the library help them match their requirements but are awfully specific to whatever feature it is they're implementing and don't make much sense in the context of the existing library API.
Sorry, but NAK. Don't increase the API surface with one-off additions just because it is the most convenient way to implement some feature. If you can't do with the existing API and don't think there's a nice and general abstraction to add to the library, you might want to figure out the absolute minimum API modification necessary that lets you achieve what you want to do. It might require more effort in the user part of the code, but is also that much more likely to be acceptable.
If your required abstraction level forces new requirements (instead of just forcing code to be moved elsewhere or written differently while keeping the requirements), then your abstraction is wrong.
We work in Python, so those transformations are things like using `collections.defaultdict` instead of a plain dictionary where it makes sense etc etc.
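As a minimal sketch of the kind of transformation meant here (a hypothetical word-count helper, not code from the thread):

```python
from collections import defaultdict


# Plain-dict version: the membership check clutters the counting logic.
def count_words_plain(words):
    counts = {}
    for word in words:
        if word not in counts:
            counts[word] = 0
        counts[word] += 1
    return counts


# Suggested transformation: defaultdict(int) supplies the missing-key
# default automatically, so the boilerplate branch disappears.
def count_words(words):
    counts = defaultdict(int)
    for word in words:
        counts[word] += 1
    return dict(counts)
```

Both functions return the same mapping; the review comment is purely about removing the hand-rolled default handling.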
Don’t be that teammate please.
(To be clear, on submit the edit you make would appear as though the author had made it themselves, so it doesn't pollute the commit log at all.)
It's funny that you mention not wanting people to drop what they're doing as IMO that's one of the advantages of the approach I'm describing. The reviewer is already in the code, whereas the person being reviewed has often moved on to something else.
On my team, PRs do not require approval to be merged by the author. This gives the author full autonomy. They can entirely ignore any/all feedback if they so choose. (However, they are also held responsible if ignored feedback results in problems after the work is merged.) As such, feedback is always framed as a suggestion, not a command or a request. Suggestions must be supported by evidence, or they’re challenged (or ignored). It’s a lot easier to discuss a suggestion than a command or request.