If no one is willing to accept a PR with that provision, the author needs to make their code clearer.
In my experience, people doing a code review don't want to bruise their egos by admitting "I don't understand this." So they check the code superficially and OK it.
Many teams would never actually put in the effort to trace issues back to individual failures in a code review process, when they could be spending the time just fixing the problem and moving forward.
The value trade-off you're suggesting is that the time and effort required to set up this process structure, and shift everyone's work habits over to this, is worth it in... maintenance?... at some point? Reduced risk of some sort?
This seems like a complicated position to argue. Can you be any more specific about where and how you see this type of exacting process adding business value?
Most reviewers tend to take the Socratic approach: asking questions to lead to an answer. In my opinion, and consistent with recommendations from books on general communication: don't ask questions without expressing your concern.
Q: Why did you do X here?
Completely valid and sincere response:
A: It solved the problem.
The coder at this stage has no idea why the reviewer is asking, and there could be many possible answers depending on the reviewer's concern; but since the reviewer did not state that concern, we are reduced to the simplest, noncontroversial answer.
Instead, the question should be:
Q: I'm concerned that doing X here has the following drawbacks: A, B and C. Was there a particular reason you chose to do X here?
Or even better:
Q: I'm concerned that doing X here has the following drawbacks: A, B and C. If you do Y instead, you don't have these drawbacks. Was there a particular reason you chose to do X here?
A: Yes, because although X has drawbacks A, B and C, Y has drawbacks D and E.
(And the conversation can continue).
The same problem shows up in depersonalized phrasings:
Q: Why is X done here?
Q: What is the reason to do X here?
Going a bit off topic, to a wider context: there is a general trend to avoid "I" and "you" in the workplace to produce a veneer of objectivity. The communication resources I've read mostly disagree with that stance. Issues that can be handled truly objectively are rarer than most people think, and an attempt to insist on a facade of objectivity often comes across as manipulative. Most people will suspect that what you are asking for is something you want, but that instead of coming out and saying so, you're trying to hide the fact.
Example of something that is clearly objective: "When program A is running and the following sequence of keys is typed, A crashes."
Example of something at the other extreme: "Using negative logic in code confuses some people". And the conversation often devolves to "I don't think so." "People who can't handle negative logic shouldn't be programmers." "Who exactly in the team is getting confused?" "There will be others who find the positive logic version of this piece of code harder to read."
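For concreteness, here is the kind of thing "negative logic" usually refers to; the order fields and the shipping rule below are hypothetical, purely for illustration:

```python
# Hypothetical shipping rule, written with negative logic:
# the condition is phrased in terms of what is NOT true.
def can_ship_negative(order):
    return not (order["cancelled"] or not order["paid"])

# The same rule with positive logic: stated affirmatively.
# By De Morgan's laws the two are logically equivalent.
def can_ship_positive(order):
    return order["paid"] and not order["cancelled"]
```

Whether one version reads better than the other is exactly the kind of subjective call being argued about.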
I've seen this pattern time and again. And in 80+% of cases, the person complaining about negative logic is the one who does not like it, but is working really hard to keep that fact from coming out. Instead, he appeals to some wider principle that, not surprisingly, is not uniformly agreed upon.
(BTW, if there is uniform agreement, or it is enshrined in coding standards, then the approach is fine).
The above is an obvious extreme but the really annoying cases are the more subtle ones.
What the books point out, and what I've come to agree with after reading and observing: for things that are not clear-cut rules (e.g. in the coding standards), you'll be more successful personalizing the feedback, in large part because you are being completely sincere. And given that most things are not that clear cut, this is the better approach. The books also offer guidelines/recipes for personalizing feedback without triggering defensiveness, which is the element you wanted to avoid.
(An aside: I'm speaking in situations where neither person has authority over the other - one is not a manager of the other, etc).
Another example: If you've ever heard someone respond with "Yes, but why do you care?" Or "What's it to you?", it is very likely this dynamic is at play. While I don't advocate the phrasing, I am now much more sympathetic to it. When someone says it to me, instead of getting annoyed, I now examine how I phrased what I said and usually find the flaws in my phrasing. Occasionally, I ask people the same question, although in a much more polite manner.
Compared to other hard disciplines, software is one of the more subjective ones. There's a lot of "religious" behavior among SW developers (think evangelizing of functional programming, OO, TDD, etc.). If I look at a typical set of coding guidelines, probably over 50% is completely subjective; that there is significant consensus on some of these rules does not change the subjectivity.
It is also the best forum to remind them that my code isn't perfect either, and that this is a process to help make us both better developers.
If something is so unclear that I cannot figure it out on my own, the code / commit message / cover letter likely need to be improved.
Nothing like git-blaming your way to a two-year-old commit and finding a commit message that actually explains the whys and trade-offs.
I agree that some things do need discussion, but code review tools typically have some sort of chat interface for that purpose. I do open source work and most of the time do code review over e-mail, which obviously works well for this purpose. [Edit: and of course calls work here as well! In any case, the result of the discussion should make its way into the code / commit messages / cover letter.]
Yes, absolutely this.
I think it's essential that the reviewer takes their time to inspect the code without distractions and without being rushed.
If you want to have a face-to-face code review, in my experience it works better to have that meeting after finishing the code review, so you can go over your review and discuss each point in greater detail.
Another thing that I maybe didn't make clear enough: it's important to be empathetic and to treat them as equal team members, even in a Sr-to-Jr review. Give them an opportunity to explain why they made certain choices and discuss all the pros/cons. Sometimes I even change my mind and agree with their choices once it's clearer what their objectives were.
1. Keep the changes small and focused
2. Ensure logical coherence of changes
3. Have automated tests, and track coverage
4. Self-review changes before submitting for peer review
5. Automate what can be automated
6. Be positive, polite and respectful
I've always believed there are 2 levels of code review, defined by their deltas:
1. Review against standards, defined by the delta between the code and shop standards, which should be well defined for everyone ahead of time. Things like formatting, white space, variable naming, conditionals, iterations, standard modules, etc. Every shop has (or should have) a standard way of doing a lot of these things. Much of this level of code review can be automated.
2. Review against expectations, defined by the delta between the code and the technical specs. Of course this requires technical specs, which many shops don't do. If you write technical specs, and peer review the specs, agreeing that this is what is expected, then this level of code review becomes much more straightforward. This includes things like business rules, logic, database updates, UX, interfaces and APIs with other apps, etc. This is much harder to automate, but offers a checklist to streamline the review and keep it objective. It also makes User Acceptance Testing easier down the line because code review has already eliminated a large subset of potential UAT rejections.
I'd love to hear how others handle these 2 levels of code review. Now that would be much more valuable than the list of feel-good platitudes offered by the OP.
Thank you for your feedback. The framework that you put down, of code reviews being of these 2 levels, is valid. The goal of this blog post, however, was to touch on the basic hygiene which makes the process more effective -- including both these two levels of review.
I would take a shot at writing a follow-up post to this, going deeper into the subjective aspects of peer code review. Thanks for reading!
Maybe not argue, but I still find this link useful. I will link it to more junior engineers in my team. Believe it or not, not everybody does self-review before they ask for peer review.
That's not hard to believe given the stream of shit that comes out of a feed, any feed.
But I'm appalled by y'all's assumption that #3 is a given. Why do you all believe that?
To me, including a test suite in your code base is a code smell.
If you have a problem with #3 you might as well have the problem with #5. You don't want to automate everything, only where it makes sense.
>> Got some complex business logic
So complex the only one who understands it is...who? A test?
You're dealing with humans who err very easily on a day-to-day basis. The dumb mistakes I've made because there was a lack of tests are countless.
Because it signals to me that the only one who understands this piece of business logic that we have applied to our code base is a test, which is a piece of code.
Why should I trust this piece of code but not this other piece? What pieces of code should I trust?
Me, I don't trust tests.
That...makes no sense. Tests do not understand rules, nor do they (or are they intended to) fully encapsulate them. They are intended to capture specific instances of application of the rules that have been verified with the person(s) defining expectations—the customer for acceptance tests or unit tests of key business functions, but possibly the team itself when the test is a unit test of an implementation detail.
The test isn't the only one that understands the rule; it's the output of a process that validates that the test writers (who are humans on the team) understand the rule, and it also provides concrete examples which aid anyone who comes later in understanding the rule.
To me, not having defined test scenarios (whether automated and thus part of the code base or not) is a signal that the team probably doesn't (and almost definitely doesn't if there has been turnover so that the implementors of the functionality are no longer on the team) fully understand the intended function of the code.
The only test that matters to me is the one where a customer says "Yes, good!"
That's a weird leap of logic. No, it does not mean that. It means that a human who understood that piece of logic wrote a test, so the test can in the future warn about issues before a human spends time checking whether it still works as it should.
>> No, it does not mean that. It means that a human who understood that piece of logic wrote a test
I do not trust a human to understand (1) a piece of logic, nor do I trust (2) a test written by someone who does not understand said piece of logic.
You'd put it somewhere else, then?
- "But my code base is so big and complex!"
Make it smaller.
Tests are also used to enforce certain edge cases (e.g. return nil when the input collection is empty instead of raising an exception), and in this regard allow you to assert things your language isn't able to express (because its type system is not as powerful as, for instance, Haskell's).
Tests also allow you to avoid stupid mistakes like using a string instead of a symbol.
And anyway, tests are just a way to durably persist the testing that a programmer always does anyway, be it in a REPL or with the few lines of code at the end of a file used while writing the code.
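The empty-collection edge case mentioned above can be pinned down with a test like this (the helper and its contract are hypothetical, just to illustrate the point):

```python
def first_even(numbers):
    # Hypothetical helper: return the first even number, or None for
    # an empty (or even-free) input instead of raising an exception.
    return next((n for n in numbers if n % 2 == 0), None)

def test_first_even_tolerates_empty_input():
    # The edge case the test durably records: empty input yields
    # None rather than an exception.
    assert first_even([]) is None
    assert first_even([1, 3, 4]) == 4
```

Once written, the test keeps enforcing that contract long after the REPL session that originally verified it is gone.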
Above some system size, certainty of correctness is merely probabilistic, and automated testing is what gives you assurance. Programming without testing is like driving without a seatbelt: sure, it is feasible, but it's insane to do it.
First thing before you do any sort of code review is that you NEED to have a well defined coding standard. This allows for rules of engagement. If you have no coding standard, then what happens is one person will say one thing, the other will say another, and the submitter is caught between two people giving contradictory comments based on opinion, not fact.
Having a single set of rules of engagement levels the playing field completely and makes it easier for everyone.
However, there was no written (or often even oral) statement of what it was supposed to do, so all I could do was spot syntactic cruft and offer cleaner code, and... that was about it.
To me, without a clear statement of the code's intent it is impossible to review that code meaningfully, yet I've never come across this most basic requirement in Agile: that the code must have a spec of some sort to review against.
What am I missing, or was their Agile practice simply wrong, or what?
The idea, imho, is that you are supposed to be intimately familiar with all the projects on your team, while only coding for your own. If you each follow each other's tech docs/specs, and put real, professional effort into good technical writing and into understanding your team members' communication styles, you can all have a shared understanding of the problem and of the solution architecture. Then, after the code is written, code review becomes a process of aligning on what the "right" solution is for the team, given the hyper-specific context of the one PR/feature/project/bug fix/whatever. Did it turn out like everyone expected? Why or why not? Does this align with the maintainability goals of our team? What will ongoing support for this look like? Does it move the needle for (any) clients? Does it unblock other projects/tickets? etc.
I don't really agree with the type of code review where you nitpick about adherence to arbitrary "style guides," read the code line by line to "try and find bugs," or ask for changes because you think something could be "cleaner" or whatever. The only code reviews that have real impact in my mind are the ones focused on how the code maps to real business goals.
@nradov: Nope, no 'user stories', whatever they are.
There were a lot of other bad decisions at the company. Some really stupid stuff. I don't suppose they'll last much longer. Thanks to you both. I'll spend this eve reading up on Agile.
I usually do this for changes before I have someone else review them anyway. Of course ideally someone else could review the change but if you don’t have someone you don’t have someone.
The main limitation here is that while this should help you with bugs, it’s not good for catching things which aren’t idiomatic or having someone point out a better way to do something.
But it’s better than nothing.
I recently came across https://www.pullrequest.com and have heard good things about them.