I love code reviews. With the right people, they can be useful both for achieving high standards of code quality and for improving your own engineering process. The bar is quite high, though; I think you need some combination of the following:
1. Both reviewer and reviewee are focusing on getting the best outcome possible, in good faith and with generosity.
2. The reviewer concedes that there can be equally valid approaches to a given problem, or differences of taste with respect to aesthetics. They do not try to gratuitously force their style upon the reviewee. The reverse must hold as well.
3. The reviewer makes practical suggestions and ideally accompanies their comments with snippets of code. They make themselves part of the solution.
4. The reviewer focuses on the substance of the pull request. That means putting in the time to understand the real logic, putting themselves in the shoes of the reviewee (with their help), and trying to challenge it so that the limitations of the approach are known and documented.
5. The reviewee makes an effort at slicing their request into readable and compact sub-PRs if needed. Similarly, the reviewer makes a best-effort attempt at starting their review within a reasonable time.
Those are central aspects of a productive engineering culture anyway. You only need to find someone else who shares those "values" (i.e. focusing on the best outcome possible as a sort of devotion to engineering, so to speak) to grow this attitude within your company.
> 2. The reviewer concedes that there can be equally valid approaches to a given problem, or differences of taste with respect to aesthetics. They do not try to gratuitously force their style upon the reviewee. The reverse must hold as well.
If my code is being reviewed, and the reviewer has opinions about how they would do it, I like it when they share their preference but also
1. consider if the code as submitted has the same result
2. share their personal preference for how they would have written it
3. explain why they prefer it over the code I wrote
4. not block the review on me rewriting the code to match their preference
I think it’s fine (and valuable) to comment “I hate this code, shipit” or (better) “ship it, if you change foo to bar then <explanation of why they prefer it>”.
The worst code review comments are “this isn’t performant/idiomatic/maintainable, fix it” or other statements of opinion presented as facts. This kind of faux-objective feedback is terrible, since a purported fact invites argument (or could simply be wrong). A statement of opinion like “I don’t like it” or (best) “I had to think really hard before I understood it” is great, since that’s exactly what someone diving into the code for a maintenance ticket or incident will be thinking, and they won’t have the original author on hand to walk them through the code.
Linking to articles or references that explain why you prefer a particular implementation also gives the author a learning opportunity & lets them save face when they post a follow-up comment: “thanks, didn’t know that”.
>The worst code review comments are “this isn’t performant/idiomatic/maintainable, fix it” or other statements of opinion presented as facts.
This behavior is usually symptomatic of an excessively dogmatic outlook. That is, I don't think it's the review style per se that's the problem - it's the person.
I find developers like this nearly impossible to work with. Not only will they unknowingly write poor code themselves (dogmatism typically leads to bad decisions, which lead to really poor code); they will likely try to force you to do the same.
Very rarely, I get something so badly made or so naively implemented that I just have to say no, then go help the person redesign the solution. That happens maybe twice a year, usually with a fresh employee.
If the code is not formatted to the linter, it should not have been submitted for review and I will probably reject it.
We have to hold our work to some kind of standard.
Otherwise, if a part is confusing, I do the "I did not understand this, can you please explain?" thing quite a bit.
I try to invite discussion rather than demand things be done to my taste, and I make it clear that a lot of my comments are suggestions for future work.
I try to never block correct code. I can usually find a few things that should definitely be changed (needs more comments, delete unreachable code, remove unnecessary/unhelpful extra variables, etc), so everything else is suggestion.
Usually the submitter takes my advice, but sometimes I am educated in the process (I usually ask for better comments in this case).
I think in such a situation, receiving a review is great, even though the downsides mentioned in the article ("what am I going to do while I wait for the review to come in?") are still present.
However, it doesn't do away with how unsatisfying performing a review is. If I spend a day doing code reviews, that is not an enjoyable day to me. This is regardless of how useful I feel it is; it's just not a job I enjoy doing. It'd be nice if there was some change we could make to make it a more pleasant aspect of the job.
There are definite upsides to having a detailed written record of a code review, but in terms of enjoyment, if I have some code to review I read through it, take some notes, grab a meeting room, and have a discussion with the person who wrote the code. I find that far less tedious than having to write up my critiques in detail. The written review in the tool can then just be terse reminders of the things we discussed.
I'm sure there are people that feel exactly the opposite, but it's worth trying.
Hmm, that does sound like a good idea that would make it a lot more enjoyable, with the added bonus of being able to use tone of voice and facial expressions to prevent remarks from being taken the wrong way.
I'm not currently in a position where I share an office with people whose code I review, but when I am again, I definitely want to try this. Thanks for sharing!
I want to second what OP said. A lot of the pain points people have with code review seem to come from limiting themselves to the review tool to understand the aim, scope, and rationale behind some decisions. You should definitely use other channels (1-1 discussion, grabbing coffee, a meeting room, a call, whatever works).
While you wait for code review you work on another ticket. What kind of screwy engineering process assumes that code review is instantaneous and doesn’t give you something else to do in parallel?
Nobody expects code review to be instantaneous - that's the problem. Working on another ticket involves context switching, and context switching negatively affects productivity.
Sure, that downside is probably offset by the benefits of code reviews, but that was exactly what I said: receiving a review is beneficial, it's just a bummer that there's the downside that it's not instantaneous.
You're going to be context switching at the point you make the pull request anyway.
And if you structure your day well, you can do the rest of the context switching at a point when it's not a burden anyway - such as making changes in response to code review first thing in the morning before returning to your other tickets.
Well, I guess we'll have to agree to disagree there. It's far easier to continue working on the same thing, even if I have to make a pull request (or simply have to write a commit) in between, as long as that's still related to the same change. If I have to switch between entirely different problems, that has more impact on my productivity than having to switch tasks while still working on the same problem.
Again, in practice I still do actually switch to a different problem if that allows me to get my code reviewed. It's just that I wished it didn't come with the productivity hit that I experience.
This still sounds to me like it might be a process issue around code review rather than code review actually being the problem. What else would you be doing on the same problem post-PR if you didn’t have to wait for code review?
Otherwise I'd be working on the next step of the problem? Which I can still do but, as OP mentioned, does lead to more work if feedback comes in on the first part while I'm already far underway with the second part.
Ah, we don’t do code review mid-project except for very large projects. For those large projects, we just keep working in the same branch and apply the code review changes when they come in. It’s not much of a disruption.
For preliminary foundational work on a big project, where you’d have to change everything if the early work changes in response to code review, one good solution is to do an informal design review before jumping in to get feedback on plans for data model, architecture, boundaries, and APIs from a high level.
...and if any of those four are not in place, it can be a nightmare. At a rough guess, less than a 1-in-16 (2^4) chance of a non-aggravating code-review process at any given company.
The author of this piece admits that they have to change the code they write in response to comments. This means the code review is flagging areas of improvement. Surely the resulting code is better than the original code (otherwise I'd expect the author to push back on the comments instead of implementing a bad suggestion).
Similarly, when acting as the code reviewer, the author is giving feedback to others (rather than just rubber-stamping the PR), and presumably that feedback is implemented.
So code review seems to be doing its job here. It's producing better code. Ultimately I think the problem is revealed at the end, where the author says they feel like code review is optional. If you think it's optional, then the delays inherent in review are going to feel like artificial slowdowns.
The better way to think about it is that code review simply isn't optional. Yes it can be a pain, and there's a lot of friction in the process, and our tooling kinda sucks for it. But it's a necessary step. I can't even begin to count how many bugs my team has avoided shipping due to code review. And I always get really annoyed when I run across a bug that should have been caught in code review but wasn't for whatever reason (perhaps a more junior coder did the review and therefore missed stuff that a more senior coder would have caught). Code review stops bugs before they happen. And the best bug is the one that was never shipped to customers.
> Surely the resulting code is better than the original code (otherwise I'd expect the author to push back on the comments instead of implementing bad suggestion).
Only if the author cares enough about the code. I've implemented things I know are bad in the past because the politics of pushing back aren't worthwhile. Especially if you work in an environment where your review is driven by the opinion of your colleagues (eg stack ranking).
"Code reviews are painful" are often a sign of bad team dynamics or a culture of politically motivated decisions.
Code review was mandatory at the last place I worked; they had a policy that any code you push that doesn't match the guidelines must be changed. The guideline was: no em allowed, must use rem. An old part of the codebase was assigned to me, and I found some things in the CSS I could improve (reuse of code, a small overflow bug).
Got a comment: you need to change em to rem.
I can't do that, because there is em all over the place and changing it can have side effects. The diff shows it as my code, but if you look you can see it is actually the old code just moved to the right place.
Those are the rules, fix them.
Since this code was part of a big release, and none of our work was allowed through unless I changed that one em to rem (even though I couldn't be sure there wouldn't be problems), I went ahead and rolled back my changes.
I fixed the overflow bug another way. No code reuse, ever, in that part of the codebase.
Right. I said that to him and to the frontend lead for the whole organization (a very big place): with a codebase this big you can't have that requirement. Do it with a small codebase and a team who all sit in the same room and know what's going on, sure. With 10-11 teams spread around, lots of different programmers, many of them remote, it can't be done. But no dice.
In fact, I had people specifically recommend to me at times: don't do that, because when it hits review with reviewer X it will be a problem. Sometimes you had to butter them up by asking their opinion beforehand (and even that wouldn't always work, because sometimes they would change their mind). Anyway, I'd better shut up and get to work - I can feel the bitterness returning!
There is something to be said for continuous improvement (my company has a policy of leaving the campground neater than when we got there, which is one of the reasons I enjoy working there so much), but there is a time and place for everything...
Not that this applies to your particular situation, of course, but I have seen places where that duplication was actually warranted - especially in CSS. E.g. some code was extracted and reused when it was not meant to be, just DRYing the code up, but those things turned out not to be the same thing, and it was rather painful when we had to change them independently.
As to your particular case - I'm sure the politics of the situation were pretty bad, but I've solved situations like this by having two separate commits/PRs: one that uses the old style to fix the problem, so the change is obvious, and the other to change all the old style to the new style. It helps the reviewer understand what's happening and moves the PRs along. Though this was in a much more liberal code review environment, of course...
Both em and rem refer to element sizing. Rem is relative to the base page font size, while em is relative to the containing element font size.
The consequence is that nested em specifications are cumulative: 1.2em of 1.2em of 1.2em gives 1.2^3, while 1.2rem is fixed anywhere it is used on the page.
This is not in itself good or bad; it is behaviour. Use it to desired effect.
As examples, I prefer to specify font size within elements (header, nav, aside, article, footer) in rem, font size of contained text (h1-h6, code, blockquote, pre, generally) in em, and width of text elements in em.
This allows for less CSS (most of my styles are a few dozen selectors, often less), as relative text doesn't require respecification, and content blocks are defined in terms of sensible, reader-focused, preferable widths, padding, and margins.
Much of that is personal preference: I design around article text, mostly, not wireframes or pixel-perfect layouts. The designs also tend to be robust.
Relative sizes of h1-h6 don't need to be respecified when used in header, footer, aside, nav, or .callout elements. And in .callout, they're automatically scaled to 2x 1rem * factor - the cumulative application of em sizing.
With simple DOMs, this can be useful. Complex DOMs and stylesheets may benefit from direct specification via rem.
The 'em' sizing is inherited by child elements, which can be either a feature or a bug, depending on interactions.
If you're aware of this and design for it it can be a useful feature, and enhances reusability. If you're unaware of this, or your design is complex, it's much more likely to be a bug. "Bug or not?" is entirely dependent on that context.
em is relative to the font size of the nearest parent, so you can have situations where the font size of an element is radically different depending on the parent it is placed in. This can of course be powerful if used correctly, but it takes someone really good to use it correctly, and perhaps it shouldn't be done on big teams, because big teams imply some people who aren't very good in some particular discipline.
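To make the arithmetic in this sub-thread concrete, here is a small sketch (in Python, since the thread has no code of its own) of how nested `em` declarations compound while `rem` stays fixed. The 16px root size is the common browser default, assumed purely for illustration.

```python
# Sketch: effective font size under nested `em` vs `rem` declarations.
# Assumes a 16px root font size (the common browser default).

ROOT_PX = 16.0

def em_size(factors):
    """`em` compounds: each nested element scales its parent's computed size."""
    size = ROOT_PX
    for factor in factors:
        size *= factor
    return size

def rem_size(factor):
    """`rem` always resolves against the root, regardless of nesting depth."""
    return ROOT_PX * factor

# Three nested elements, each styled `font-size: 1.2em`:
print(em_size([1.2, 1.2, 1.2]))  # 16 * 1.2^3, roughly 27.65px
# The same factor as `1.2rem` is fixed anywhere on the page:
print(rem_size(1.2))             # 19.2px
```

This is exactly the "1.2em of 1.2em of 1.2em gives 1.2^3" effect described above, versus rem's fixed resolution against the root.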
> where the author says they feel like code review is optional. If you think it's optional
I think these are two different things. The author says they don't enjoy code review, and that this is probably because it feels optional. From the post, I gathered that that's not what the author actually thinks - the first few paragraphs emphasise that they consider it an important part of the work. But apparently, that knowledge isn't enough to change the experience of reviewing.
> The better way to think about it is that code review simply isn't optional.
Of course it's optional. Tests are optional too. There are many projects and situations where trading code quality for development speed makes sense, especially when you're building MVPs, experimenting, or trying to validate market demand.
It's also often not worth it on small to medium sized projects where only one person is working on the codebase (think 3-person project: designer, front-end dev, back-end dev).
Larger projects with multiple people working on the same codebase are a completely different story. In those cases code reviews are valuable and highly recommended but still optional. The only thing that isn't optional is delivering a working product.
To me, code review is a combination of two things I would think we would try to avoid:
1. Testing implementation over behavior
2. Using a human being to test in a non-automated way
Proper test coverage and human QA are, to me, the most effective approach to not shipping bugs, because it really doesn't matter to the customer what the code does or doesn't look like if it runs. And I just don't believe that code review is an efficient means of catching that, when the user is in state X and tries to do Y to Z, the app blows up. That becomes most apparent when the code is running in the full context of the application, not when viewed as a git diff. A bug is far more often a perfect storm of seemingly innocuous but conflicting or insufficient behaviors than it is some block of code that is clearly broken at a glance.
There is something to be said about stylistic and architectural issues and code reviews do give an opportunity for asking questions and driving "better" code. But this is actually just another reason why I don't like code review. "Better" is subjective and there's always another level of "better" to be reached. If I held them to my own personal standards sometimes I'd be asking for a rewrite. So what do I do, suggest that they improve their code, but not to the full extent that I feel it could be improved?
There are so many bugs that I've caught in code review that would not have been caught by automated or manual testing. Two big reasons for that:
1. Automated tests are still written by a human. A human who doesn't understand the edge cases in their code isn't going to understand how to write tests for those edge cases either. And "have someone else write the tests" won't work either, because frequently these edge cases are a result of the implementation, not inherent in the problem, so without a thorough understanding of the code you wouldn't even know the edge case is there to be tested. The same goes for manual testing: the manual test cases are still constructed by a human.
2. Race conditions, or rare conditions that most people won't hit. These are rather unlikely to be uncovered by testing (whether automated or manual). Race conditions in general are hard to test, and rare conditions are, well, rare.
There are also cases like unexpected interaction paths that can produce improperly-handled states in the code, or bugs that don't present an obvious symptom to the observer but are still incorrect (the obvious example here is memory leaks, but it could also just be leaving the app in a bad state such that subsequent actions would fail - and you don't catch those because your test is done at that point).
> The author of this piece admits that they have to change the code they write in response to comments. This means the code review is flagging areas of improvement. Surely the resulting code is better than the original code (otherwise I'd expect the author to push back on the comments instead of implementing bad suggestion).
There's a potential problem with that idea: The assumption that, if several programmers involved offer different solutions, the best solution will prevail.
I think in many cases the accepted solution ends up being the one pushed by the person that holds more power, the one who can communicate better, or the person that 'negotiates' more aggressively.
I'm not saying that to argue against code reviews - It's just a friendly reminder that although as technical people we hope to achieve some objective level of quality, we can't escape office politics, human bias and general subjectivity.
You're right. I'd say it's an aspect of professionalism to get better at this as a reviewer. One of the things you need to try to do when you read somebody else's pull request is to put aside your ego and focus on the code.
I'm fortunate enough at my company to not have to submit to the code review bureaucracy - and I can justify it by having metrics which indicate that my code ships with fewer bugs than the code of other teams which do go through code reviews.
I think everyone should have some level of freedom to work in a manner which works best for them, rather than forcing everyone into the same process. Success should be measured by achievement of business goals and revenue.
For every anecdote you have about code reviews finding bugs, I have my own anecdote about people writing code going through a thorough code review process and the code crashes the first time an end user tries to use the functionality.
So yes, code review IS optional if you can justify it.
Saying "my code is better than other teams' even without review" does not prove that reviews are not helpful - maybe if you get your code reviewed you'll ship even fewer bugs, and if other teams don't review they'll ship even more bugs.
Further, eliminating bugs caused by any individual code commit is not really the goal behind code reviews in the first place. Finding bugs that the user notices on immediate use is an indictment of the QA process (whether manual or automatic).
Code reviews help maintain sanity in the code base, and are useful for sharing knowledge (both in terms of quality of code, and ensuring that there are at least 2 people who have some idea behind the code written). It provides an opportunity to have it read by someone who doesn’t share the same priors as you and helps you identify sections of code that may be confusing for another developer (including future you).
If Code review does lead to the identification of bugs that’s just a bonus.
There’s a reason Code review does not have a “run the code and test it” component to it and is entirely focused on code, and not whether it runs correctly.
>Code reviews help maintain sanity in the code base
But why delay the shipment of features/fixes to the customer for any of these reasons?
If code quality and human understanding are the actual rationale, it could make more sense to review code after it has shipped. That way, it would cease to be a bottleneck for developers or customers. Additionally, any bugs that appear in production could be accounted for during the refactoring process.
Saying that there is value to a practice is the easy part. We're forgetting to ask when to code review and whether or not code review should ever be a blocker to other processes.
Because in most environments it's unusual for a developer to go back and change the code after shipping it - they'll have, e.g., another high-priority bug to fix.
Reviewing and refactoring code prior to shipping helps to keep development going sustainably, reducing build up of technical debt.
> people writing code going through a thorough code review process and the code crashes the first time an end user tries to use the functionality.
This is not an argument against code review. This means that the change wasn't even ready for code review. Whoever put up those patches skipped the step before it, where they verified the functionality of their own code.
Code review shouldn't -- can't -- be a time for someone else to mentally (or actually) execute the code and sign-off on it being bug-free. It's a sanity check for read-/maintain-ability. You can generally validate that the patch isn't doing something stupid or dangerous, but it's 100% on the submitter to ensure that their code works. If you have people putting code up for review without testing it first, then you have a bigger (people) problem, that code review will never be able to help with.
> For every anecdote you have about code reviews finding bugs, I have my own anecdote about people writing code going through a thorough code review process and the code crashes the first time an end user tries to use the functionality.
Sounds like you have bigger problems than code review!
> Surely the resulting code is better than the original code (otherwise I'd expect the author to push back on the comments instead of implementing bad suggestion).
Sometimes the suggestions add very little or nothing. Pushing back (i.e. adding a back-and-forth) would be a waste of time, so you just do whatever the reviewer says.
> So code review seems to be doing its job here. It's producing better code.
As with a lot of things, balance is needed. Better code comes with a cost (time), and that time may have been better spent somewhere else.
Wouldn’t bugs be caught during dev testing, QA testing, automated testing, or while running on a staging server? It seems nuts to rely on a developer looking at a code diff to catch bugs in an app; it happens sometimes, but I generally think review is more about quality.
Code review doesn’t catch bugs. Sure, sometimes an experienced dev can spot a bounds error or race condition, but CRs are for knowledge sharing and style adherence.
Completely disagree. I catch bugs, and have my bugs caught, at least a few times a month.
Beyond the "this will likely deadlock without a timeout", "this does not clean up after itself on exception", "if the cache is not hot, this operation you assume is 1 sec will take 1 min" type thing — one of the strongest points of code review is having a conversation about "this code is hard to understand" and turning code that's correct if you think about it really hard into obviously correct code.
It takes a reasonably strong reviewer to get there though, and that's not always possible.
I review code everyday and it's my main way of finding bugs (besides automated tests). It saves us a lot of time that I can detect issues before asking testers to run the application and go through test scenarios.
For us, a failing unit test is almost always a bug. Lots of people write terrible unit tests: too much mocking, asserting that methods were called, etc. Brittle. Test the edge of your public interface. Supporting methods only need specific unit tests if they are sufficiently complex. If your unit's interface or its behavior changes, the test should fail. That is a good thing.
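A minimal sketch of that distinction, with invented names (`normalize_order` and `_strip_blanks` are hypothetical, not from the thread): the test exercises observable behavior at the public interface, so it survives refactoring of the helper, whereas a mock-heavy test asserting the helper was called would break on any internal change.

```python
# Sketch: test the edge of the public interface, not the internals.
# `normalize_order` and `_strip_blanks` are hypothetical examples.

def _strip_blanks(items):
    # Supporting helper: simple enough that it needs no dedicated test.
    return [item.strip() for item in items if item.strip()]

def normalize_order(raw_items):
    """Public interface: trim, de-duplicate, and sort order lines."""
    return sorted(set(_strip_blanks(raw_items)))

# Brittle style (avoid): patch _strip_blanks and assert it was called once.
# Behavior style (prefer): feed realistic input, assert on the output.
def test_normalize_order():
    assert normalize_order([" b ", "a", "b", "  "]) == ["a", "b"]
    assert normalize_order([]) == []

test_normalize_order()
```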
I think there is a lot of religion-like thinking when it comes to tests: many people swear by them even against all the day-to-day evidence that they are really not that useful at the application level.
I think if we are shipping a framework to thousands of users, it's great that it has unit tests; but if it's an in-house application built by a team of 10 developers, I doubt the tests are doing anything useful.
But what happens is that these best practices that are used by developers of libraries are then used blindly at the level of application code, without pausing to think if those are indeed best practices in that particular context.
Developing software at the leaves of the dependency tree (the application) is very different than developing libraries. Some things are the same while many others are not.
The end result in practice is a lot of wasted time and effort spent at the application level writing bad tests that yield no results, all in the name of the testing religion.
Because that is what it really is: an unprovable belief system with a set of rituals that you absolutely must follow blindly, or else.
Usually "unit tests" should be for verifying error code paths. These are valuable at the library/package level and maybe at some level of internal integration. 100% code coverage is likely useless most of the time. At the application level, testing user inputs against user expectations is highly valuable (perhaps more so than unit tests in a lot of contexts). It is this integration/acceptance testing that any application with users should have under automation. Any codebase lacking that risks breakage or regression at the cost of user experience (which, depending on your app, might be a fair trade-off).
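As an illustration of the error-code-path idea, here is a hedged sketch; `parse_port` is an invented helper, not something from the thread. The happy path is left to integration or acceptance tests, while the unit test targets the failure modes that are awkward to reach from user-level scenarios.

```python
# Sketch: unit tests aimed at error code paths, with the happy path
# left to integration/acceptance tests. `parse_port` is hypothetical.

def parse_port(value: str) -> int:
    """Parse a TCP port from a string; raise ValueError on bad input."""
    port = int(value)            # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_rejects_bad_ports():
    # Each of these should take a distinct error path through parse_port.
    for bad in ["abc", "0", "70000"]:
        try:
            parse_port(bad)
        except ValueError:
            continue
        raise AssertionError(f"expected ValueError for {bad!r}")

test_rejects_bad_ports()
```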
>When was the last time that you caught a significant bug with a test?
Tests only really catch bugs when you're refactoring the implementation of something while trying to keep its behavior the same. You're right, in many ways, tests exist to be broken. But they still serve their purpose. They either break because you unintentionally changed behavior and need to fix your code, or because you intentionally changed behavior and need to change your tests.
>When was the last time that you caught a significant bug with a test?
This happens at least once a week, and that's just for me on a ~15 person team.
I work on a compiler and most of our tests are in terms of user visible effects from running snippets. If any of them break, it's either a rare test harness failure or a real bug.
Whether or not any particular spec violation is significant is another matter of course, but they're definitely genuine bugs.
I think the author and many folks here have an unhelpful understanding of code reviews.
I suggest code reviews' primary purpose should not be to catch bugs or trivial style issues; automated tests and linters exist for those purposes.
Instead, code reviews should be primarily thought of as "human comprehensibility tests".
When we write code, we may have spent a significant amount of time thinking about the problem. This puts us in a bad position to gauge how readily someone in the future can reason about our code in order to fix or modify it safely and easily. Code reviews are a mechanism for showing empathy to our future selves, which accumulates over time into a much more pleasant coding environment.
Relatedly, comments on a PR are ephemeral. A rule of thumb I use: if a reviewer asks a reasonable question -- something someone in the future might wonder about when I'm no longer around -- I answer it by first improving the code.
1) Anything cosmetic gets a "nit" added to the comment.
2) When a review looks like mostly assets or boilerplate, I look through quickly for anything glaring and approve, while noting my assumption that the author has run the build and verified the additions.
3) If I'm in a hurry and the author is available, I grab them and do an over-the-shoulder review. You might be surprised how much time this saves.
4) Don't struggle with difficult sections of other people's code unless you have a vested interest in remembering this specific section of code. Ask about unclear areas.
5) Trust your CI process. Don't try to consider every edge case for your reviewee.
6) Develop a culture of small work units. Encourage people to integrate every day or two. This means reviews are small and easier to grok.
Code reviews are not a guarantee of correctness. They are another tool. We should find a good trade-off between effectiveness and cost to get the most value out of them, not strive for perfection.
These are all good tips, but I want to stress the importance of (6). Much of what the original post is talking about can be improved by adopting the habit of making small, incremental changes. As the author, you no longer have to wait "in the order of weeks" for the feedback and a potential rebase should pose little risk. As the reviewer, you don't have as much mental burden and most reviews can be done "in-between" bigger tasks, e.g. in the 15 minutes before a meeting or before lunch, etc.
As for 1): I highly recommend automating that away as much as possible. A strict syntax guideline and a toolchain that enforces it will go a long way.
Conceptually, I completely agree with (6). Practically, though, I found it really hard to do in some situations.
Specifically, when doing something where the solution is well known from the beginning, creating PRs that are small and easy to understand is easy.
When instead I have to solve difficult problems for which I don't already know the solution beforehand, and which require writing quite a bit of complex code -- e.g. multiple classes that work with each other, and which maybe interact with some complex external system that I or the team don't already know well -- I find it difficult to split the PR into smaller chunks, because I "evolve" the separate classes together while also doing exploratory testing.
"Production ready" code comes later in the process, and at that point I usually have quite a bit of code. I noticed that people don't like reviewing the non-production-ready code, but what's the alternative? In theory, after I found out what the solution is, I could go back and rewrite it better from the beginning, but that would just require too much time.
The answer to your problem of large commits is this:
1) Create a "research prototype" of your solution, but do not commit it -- or commit it to a separate branch (not master).
2) Create a plan for how to split that code into smaller steps. Separate the refactorings that will support your new feature from the actual feature (the part that changes functionality).
3) Commit the refactorings separately from your main feature commit. Keep every refactoring commit as small as possible.
4) At the end, commit your functionality-changing feature. If possible, split it into multiple commits too.
That should help with code review.
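The steps above can be sketched with plain git commands. Everything here is illustrative (the throwaway repo, file names and commit messages are made up); the point is the shape of the history you end up with:

```shell
# Prototype on its own branch, then small refactoring commits on the
# mainline, then the feature itself.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
main=$(git symbolic-ref --short HEAD)   # "master" or "main"

echo 'core logic' > app.txt
git add app.txt
git commit -qm 'existing code'

# 1) the research prototype lives on a separate branch, never merged as-is
git checkout -qb prototype
echo 'messy exploratory solution' >> app.txt
git commit -qam 'WIP: research prototype'

# 2+3) back on the mainline, land the supporting refactorings as tiny commits
git checkout -q "$main"
echo 'extracted helper' > helper.txt
git add helper.txt
git commit -qm 'refactor: extract helper (no behavior change)'

# 4) finally the feature itself, rewritten cleanly from the prototype
echo 'feature built on helper' >> app.txt
git commit -qam 'feat: add feature using the new helper'

git log --oneline            # small, separately reviewable commits
```

The reviewer then reads a short series of commits that each do one thing, instead of one blob that mixes exploration, refactoring and the feature.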
Remember, your team will have to review your code multiple times:
1) When you are making your commits.
2) When your code reviewer reviews it.
3) In the future, when you or other team members maintain your code and try to understand why you wrote it that way.
So optimize your commits to reduce code review time (even if that increases the initial code-writing time).
Thank you for your answer, but I don't think that would have been feasible -- the employer called me in precisely because the team was very slow, and adding so many additional steps would have made me very slow too.
I'm now changing contracts; hopefully with the new team it will be possible to find a better compromise. For example, I offered to walk the reviewer through my code, which I think would have helped a lot, but this was refused...
Your real choices are: "fast and buggy" vs "slow and correct".
What is your preference?
> I offered to walk the reviewer through my code
Walking the reviewer through your code is a good exercise. A code review of a complex change should be done in an interactive session (reviewer + author). There is so much for both sides to learn in such a session.
> Your real choices are: "fast and buggy" vs "slow and correct".
It's not a binary choice, it's a dial. And anyway, after more than 30 years of programming experience, I can tell you that I can write mostly correct code by diligently applying the test pyramid. In my experience, code reviews catch very few bugs that weren't already revealed by testing, if any.
Much more than correctness, I find them useful for verifying and improving the readability and maintainability of code -- which are important goals, but as always the cost needs to be measured against time and cost constraints. Depending on the project, being twice as slow to accommodate the reviewer may or may not make sense. What I'm looking for is more of an 80/20 solution: can I be 20% slower and still get 80% of the benefits of code reviews?
I think these points are very salient. I'd suggest adding another one: create a checklist for code review that is relevant to the issues your team faces. The checklist makes it much easier for you as a reviewer to perform evenly.
> 3) If I'm in a hurry and the author is available I grab them and do an over the shoulder review. You might be surprised how much time this saves.
One concern here is you may ask questions whose answers are then given to you, and you alone. If it's likely someone else in the future will have the same questions, the "answers" should be in the form of improved code.
> 4) Don't struggle with difficult sections of other people's code unless you have a vested interest in remembering this specific section of code. Ask about unclear areas.
I'm not clear on the distinction between "difficult sections" and "unclear areas". Either way, one of the outcomes of the review should be to make life easier for the folks who follow us.
I've worked in teams bad enough to make code reviews a real pain.
I've seen pretty horrible code constructs that actually seemed to work without bugs. What should I say? That the author of that piece of code should completely rewrite it to my taste? It would not only frustrate them but also slow down the entire team. So I often accept what is rubbish IMAO. And not just me: have you ever seen two code-buddies on a team always doing each other's code reviews and (quickly) accepting and merging everything they produce before anyone else can comment?
I've had numerous pointless discussions about why I wrote a specific piece of code, where the assessor (a junior dev) just wanted me to do things by the book as he'd learned it, without really understanding why I did it that way. There was no way to convince him; it ended up in an unresolved conflict that I could only solve by messing up my code to match his taste.
All these pointless discussions, which particularly affect the relationships you have with the co-developers on your team, are not always fruitful and can be quite destructive if you're not careful.
I do believe code reviews can be very important and valuable, but at the same time I've seen huge amounts of time wasted and frustration for nothing. My conclusion is that it's not always and only a good thing; it depends on the variables.
> I've had numerous pointless discussions about why I wrote a specific piece of code, where the assessor (a junior dev) just wanted me to do things by the book as he'd learned it, without really understanding why I did it that way. There was no way to convince him; it ended up in an unresolved conflict that I could only solve by messing up my code to match his taste.
I've left the place I worked at for over 3.5 years over stuff like this adding up. I'm starting my new job in a few weeks, getting paid more than at my previous job, and I will be working with people more senior than my previous team. All in exchange for the rapport I'd built over 3.5 years.
> If I spend a day doing nothing but reading code reviews, I'll end up feeling unsatisfied and unproductive. Because code review feels fundamentally optional -- even though I believe it's beneficial, it's something we've chosen to do, not something that we absolutely have to do in order for the project or business to keep operating -- it's more frustrating to find myself spending a large amount of time on.
This is where you need to change your mindset from writing code to delivering business value. At my first job we realized that tickets were actually spending longer in code review than in development - and therefore our primary bottleneck on delivering functionality was code review. So as a developer code review is actually your main job, and writing code is something you do in your downtime when you don't have any reviews available.
Once everyone in the team is making reviewing code their top priority (or at least, a higher priority than writing their own code), waiting for review becomes much less of a problem.
I'll add that code reviews are vulnerable to garbage in, garbage out. Code reviews that are magically 100% effective at catching problems will still be expensive in terms of time if all incoming code is garbage. By reducing upstream garbage, code reviews become less expensive, freeing up time for everything else, including coding.
Systematic code reviews are really a complete subversion of the agile spirit. Agile was about doing what makes the most sense for the project at any given time, and code reviews don't make sense all of the time, far from it.
I worked on projects with systematic code reviews, and while the reviews hardly caught any issues, they did add significant delays and endless discussions with some very unreasonable people who would systematically question every single line of code for no good reason.
I had to submit reviews and wait a day for them to be approved for things such as 3 lines of CSS changed.
Code reviews should be left for occasions when the code actually needs some review: for example, the developer changed a part of the code that he/she is not familiar with and wants the code to be checked by someone who is experienced in that part of the system.
Code reviews yes, but only when it really makes sense, and not by default.
Some of the things I've seen regarding code reviews are just way too absurd, and it adds to the overall demotivation of the team to have to do something absurd every day just because it's the rules.
Most of the time, these procedures are put in place by non-technical managers who have usually never written a single line of code in their lives.
While I agree with the spirit, I disagree in whole. "One line config change? I don't need a code review." At our org, even that needs a CR. It takes a few minutes generally to request it in our chat room. Usually, the other person sees no issue and you get their stamp that guards the build process. However! Multiple times (a small percentage), the person asks for a clarification on it and it turns out that a conflicting config needs updating or a comment is warranted or they remind you that this change will (maybe minorly) affect another team and asks if you reached out to let them know.
Can it get in the way? Yes, it can. So as a team, learn how to streamline. For us, usually someone is available for a "quick cr." Larger ones take longer. Maybe some changes are too small, but for our team (no CSS), we've not found a consistent candidate.
I love reviewing code and I love having my code reviewed. In my current team we make significant improvements to the code during the review phase, both shaking off bugs and making architectural improvements.
That said, I think that there are some prerequisites for productive code review. Changes are preferably small, reviews need to happen in a timely fashion, the surface area for style nitpicking needs to be minimized with linting and formatting tools and the team needs to be pretty accepting when it comes to different minor design decisions that the different team members might make.
If your change is 2000 lines of what could have been several atomic changes, and you get 50 comments about indentation and "you should do it this [slightly different way that the reviewer would have preferred, which doesn't significantly improve legibility or design]" after prodding the other team members for a week, then address the comments and wait yet another week (and also manually rebase the patch because of 10 conflicts) -- it's no fun.
1. The reviewer usually lacks the context to truly review the solution and all its implications, and instead gives only a cursory review focused on style and trivialities. Sometimes they're familiar enough with the code to give an insightful review, and there's benefit to commenting on even the small things, but I've seen countless major architectural problems pass review because the reviewer lacked the time and familiarity to notice them. This isn't helped by reviews typically being done on GitHub code diffs; you're seeing a couple of out-of-context lines, and it's hard to understand how it all fits together.
2. By the time code is ready for review, it's too late. Sure, you can stop the presses if you notice an egregious problem (not that anyone usually does; see #1). My team tries to keep its reviews small and frequent, but they still often contain one or more full days of effort. If the solution has major problems, catching them after the fact means a lot of wasted effort on rewrites. Often there's no time to fix it right, so someone hacks around the faults and ships a "good enough" product. The tech debt piles up quickly.
As a result of both of these, code reviews are often little more than a rubber stamp.
This isn't to say that I don't think code reviews have any merit. I think that code reviews on every single change are a waste of time. Periodic detailed reviews of architecture are valuable, and periodic team reviews of code can help spread good patterns.
Many of these problems are obviated by pair programming. It's real-time code review before it's too late. It's the single biggest benefit to pairing, although there are plenty of others (like sharing knowledge and building team bonds). Pairs are also subject to the same pitfalls, but at a far lower rate than solo developers.
1. You need to put context into comments. Code is written once but read many times, so you should optimize for the latter case.
2. "Too late" is when bug hits the client and affects income. If not, then it's not "too late", it's just "a bit late". 2-3% in effort increase to decrease number of bugs (increase customer satisfaction) by 15-20% is huge win. If you will not do that, your competitors will do that instead of you.
"Good cooking takes time. If you are made to wait, it is to
serve you better, and to please you."
^^ From the Menu of Restaurant Antoine, New Orleans, quoted in the beginning of The Mythical Man Month.
Good engineering takes time and deliberate effort. Invariably, it will sometimes be tedious, but it is for the benefit of the product. IMO that includes code reviews, and/or pair programming.
In my experience, the biggest source of friction is in comments over style and reviewers/reviewees that have an obstinate and opinionated stance on the matter. In my current project, we resolved this by simply requiring everything passes a linter (gofmt and ESLint in our case). Follow the linter, learn to like it, end of discussion. There's still structural / design style to fight over in reviews, but those are healthy fights IMO.
> Reviewing code is one of those things that I would enjoy if I had infinite time, but that I find a nuisance when I don't.
I can't really imagine any work that can be more important than code review for other people. This is the way Linus can have a 100x leverage on the Linux code base. This is the most important and hardest work in the Bitcoin code base as well, probably apart from cryptography research. I think the bigger problem is that code reviewing may be not well compensated, not that it's not important.
The compensation, in Linux anyways, is that others will review your code if you review theirs. So the motivation to review stuff on the mailing list is purely that you need someone to review your code to get it in.
It sounds like quid pro quo, but I found the level of review (e.g. for the DRM subsystem) to be excellent, even when it's one AMD engineer reviewing another AMD engineer's code in the AMD-specific code.
I get the sentiment. I'm currently in a small startup and not doing any code reviews as such. That's probably not ideal because I know I make mistakes.
Last year I was on a big project in a senior role and doing and receiving lots of code reviews. There's a whole etiquette around these things that you have to appreciate. First off, I got lots of feedback that was valuable.
My view with code reviews is that they should be timely and appropriate in scope. Reviewers should have some time to review, but not forever. I tend to do them early in the morning or before I leave, but I'm not likely to interrupt my workflow for them unless I'm asked to do so. Real time is not a reasonable expectation, but days of delay are also not reasonable. The smaller the pull request, the less time is acceptable. The bigger the change, the longer you allow for reviews.
Also, the bigger the PR, the higher the workload for reviewers. Keep your PRs small and create lots of them instead of dumping lots of big changes in one PR.
It also helps to classify review comments as blocking or not blocking. A valid outcome can be to file a follow up ticket for another change. Yes, it would be nice to refactor this bit of code but given that it was a two line code change that got the job done, maybe right now that is good enough. My view is that if CI tests pass and the code is better than before it was changed in some meaningful way, it should always be OK to merge.
Finally, sometimes there's an asymmetric relation with the reviewer and reviewee in terms of seniority. You can use this as a didactic tool to educate juniors. But it is also an opportunity to point out they did well; which I'd argue is just as important. Likewise, you have to place comments from juniors in context as well. They might feel a little intimidated commenting on your code and if they mean well, it's important to give them feedback on their comments. IMHO it is important that people speak up and feel safe to do so. A simple "You are right, I will fix this. Thanks for pointing that out." makes them feel good.
I've found that favoring full-time pair programming, and not reviewing solo work unless the author explicitly asks for it out of self-identified uncertainty or caution, has resulted in a better workflow with little to no actual hit to quality.
Additionally, proper test-driving and testing practices are incredibly effective ways to enforce high quality code. Again, this comes down to culture and practices. Teaching and encouraging good testing will pay off far more than enforcing "two pairs of eyes" type rules.
My experience is that code review is great at linting, nit-picking, and causing arguments about idiom. The times a bug was okayed with 8 different :thumbs-up: or "lgtm" on the PR on GitHub are uncountable. It's not that code review doesn't work, because I'm sure it often catches plenty of bugs; it's just that I have no reason to believe it's more reliable than pairing, or than letting authors ask for help explicitly when they need it.
Disclaimer: I spent my formative years at Pivotal Labs. Yadah Yadah Yadah it's a cult of pair programmers so something about a grain of salt.
Pair-programming 100% of the time sounds like a nightmare to me. I need uninterrupted focus -- something I can't get with someone else there constantly, even if we're working on the same thing.
This sort of thing is why I'm all in on strict, program-enforced code style. It closes off the single most common, most useless line of argument in code reviews that there is.
What I do wonder about is what else can be done to bring tooling into the process in a productive way. I'm imagining something like the Go and Rust playgrounds where reviewers actually contribute test-cases or something to the real code base.
The problem with code review is that it works. That's why we can't shake the industry's fascination with it. It works to catch bugs, but it's horrendously inefficient in doing so.
I ran some numbers on our code review process on a project last year. Of patches returned for modification, 5% contained a bug. 95% were for entirely stylistic changes. I find it hard to square that with being a good use of time.
As an industry, we're making the mistake Deming points out in Out Of The Crisis: we're attempting to inspect defects out of existence rather than improve our production process so that they never get created in the first place. We concentrate our efforts on being better at 100% inspection, and don't allow ourselves to think that a process without that inspection could be a hell of a lot better, if we invested the effort in figuring out what that might look like.
Pairing might be one answer here, although I don't think I've heard it talked about in those terms before.
First step to get right would be having automatic code formatters in place properly to cut out stylistic nits. With tools like Prettier, gofmt, yapf, Black and similar you can focus on the code itself, not how it looks. Of course it can take a while to find some common ground to some stylistic options (which is why I love opinionated formatters that don't give the choice) but it pays off quickly regardless and you gain huge wins with a tiny set of tooling.
For myself, I actually stopped caring about code style and just run code formatters in a pre-commit step (if possible). By entrusting all stylistic fixes to the formatter and focusing only on the code itself while coding, I ensure that I focus on the most important thing: getting stuff done.
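As a sketch of what such a pre-commit step can look like: the hook below reformats every staged file and re-stages it before the commit lands, so nothing unformatted ever reaches review. Everything here is illustrative -- `expand` (tabs to spaces) merely stands in for a real formatter such as gofmt, Prettier or Black, and the throwaway repo exists only to demonstrate the mechanism:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Install a hook that formats each staged file in place and re-stages it.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
for f in $(git diff --cached --name-only --diff-filter=ACM); do
    expand "$f" > "$f.tmp" && mv "$f.tmp" "$f"   # stand-in formatter
    git add "$f"
done
EOF
chmod +x .git/hooks/pre-commit

# Commit a badly formatted file; the hook fixes it on the way in.
printf '\tindented with a tab\n' > code.txt
git add code.txt
git commit -qm 'style handled by the hook, not the reviewer'
```

This is the same index-rewriting pattern that tools like lint-staged automate; with a real formatter wired in, style discussions disappear from the review entirely.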
That takes care of the lowest-level stylistic issues, but doesn't capture things like "why are we trapping errors here rather than in a central error handler there". The sort of question where the app will function either way, but reasonable people can have aesthetic disagreements, and a formatter is totally useless.
Code review was never about catching bugs. Not primarily, anyway. Code is written for humans to read (and modify), and a reviewer's job is to make sure the code can adequately fulfill that purpose. A reviewer is a proxy for the person who has to touch the code in the future – indeed, often they're the same person! Lumping all code quality issues under "stylistic changes" is just a disingenuous false dichotomy.
> Code review was never about catching bugs. Not primarily, anyway.
That's incorrect. Code review originated as a technique specifically to find defects, under the name Fagan Inspection. That we've found peripheral benefits is great, but the central justification is defect reduction.
> Code is written for humans to read (and modify), and a reviewer's job is to make sure the code can adequately fulfill that purpose.
I quite like that formulation, but it doesn't undercut the point that we only seem to be able to cope by having a second (third, fourth) pair of eyeballs inspecting every single line of code after it's been written, rather than working on systems to get it right first time.
> false dichotomy
It's not, though. There is a real difference. If you find a defect in the code, you've definitely prevented a bug from reaching production today. If you find a style problem in the code, you've maybe prevented next month's work from slowing down. If the project stopped tomorrow, none of the style issues would matter, and the code would continue working just fine.
The inefficiency you speak of is not inefficiency of code review, but rather the inefficiency of reaching a shared, deep agreement on what your code should look like and how it should work. Code review is just a place that it pops up.
Thinking of it as inspection alone is a disservice to the cultural value of code review. You want a process that teaches a team member to contribute with less inspection? Code review.
I've typically found that with a new team or team member, initially there are a lot of patches returned for modification. After 3-9 months, it tends towards 80% of patches being one-shot LGTMs, the remaining 20% having spec issues or substantive style issues (ie. this module structure will bite us in the butt because...)
This idea of communicating shared knowledge also points at more efficient ways to do that, if CR is a bottleneck:
- Technical onboarding - Google does a great job of this. Taking time to explain how to work with your technologies, and what the expected code style is.
- Linting + style guides - Arguing over style is dumb.
This is great as long as you've got a culture that tends towards trusting LGTMs. That's not inevitable; you can just as easily get to a point of escalating nit-pickery.
However, even if you get to a point where most PRs get waved through, it's still inefficient in the sense that finished code ends up sitting in a queue, waiting for its LGTMs before it can go to production. That can add days of delay, for no added value in the majority of cases.
What kind of stylistic changes are you talking about? If it is things like whitespace and brace placement style - yeah you are wasting your time. Decide on a style and have a tool enforce it.
But code reviews are for other things than just catching bugs: they are also for sharing knowledge, learning useful idioms from each other, discussing ideas and so on. I have learnt a lot from code reviews, even when the reviewers were junior to me. Feedback like "there is actually already a function for this" or "this can be done more simply if you use X language feature" has great value even if it is not directly catching a bug.
And why is it a bad use of time if it actually helps finding bugs? What method do you know that is more efficient?
> What kind of stylistic changes are you talking about?
It's the level above the decisions a tool can enforce which I've seen tending towards bikeshedding.
> But code reviews are for other things than just catching bugs...
I don't dispute that there is value here. I think that value is tremendously overplayed in comparison to the costs incurred. Or rather, I think the costs are underplayed.
> And why is it a bad use of time if it actually helps finding bugs?
Tweak the numbers slightly: would it be reasonable, for that purpose, at a rate of 1%? 0.1%? Think how much time has been spent on the reviews at that point, and how much delay has been injected between initial commit and release. There comes a point where you have to look at the process and think "this is not a worthwhile investment of everyone's time."
> What method do you know that is more efficient?
I don't. That's the point. As an industry we seem to have latched on to code reviews because they're one of the very few techniques which there's actually any evidence for, to the extent that questioning whether they're the best we can do is discouraged.
Code reviews should not devolve into bikeshedding. That is a problem of people not understanding the purpose and value of code reviews, not a problem with the concept of code reviews. Ignore the issue if it is not important or make a ruling and move on.
If your code reviews find bugs left and right, I would say it indicates a deeper problem: why are there so many easy-to-spot bugs in the first place? The number of bugs found is definitely not a measure of success. The purpose of review is to improve overall readability, competence, and code quality to a level where fewer bugs are introduced in the first place.
I agree that most code review change requests are style-related.
One reason I can think of this happening is that in order for one's brain to focus on just the parts of the CR that matter, stylistic oddities need to be sorted out. Ideally after some time, a reviewee would stop opening PRs that contradict the basic stylistic standards that the group adheres to (and thus stop wasting everyone's time). Ideally there would also be a linter to get everyone to that point as quickly as possible.
Imagine if you were running an assembly line, and every person in every shift did their welds significantly differently. It would be a waste of time for the inspection workers to have to continuously remind everyone that it's hard to check quality on a variety of different welding methods, rather than just one or two. It would be a better use of everyone's time to just agree on what kinds of welds are acceptable.
Code reviews are not only there to find bugs. They can "just" help to improve code readability and extensibility, so that fewer bugs are produced in the future and it's easier to develop new features.
Fascinating numbers. Do you know if that 95% number could be brought down by tooling? We have deployed fairly strict code style guides + included some linters and other tools to watch over code quality to reduce these conversations about style, but it obviously doesn't catch everything.
I used to hate code-reviews but now I'm a convert after:
(1) we brought in better people. It's fun to sit with competent people and trash-talk some code, and (2) after I pushed the team to use better tooling, which in our case is GitLab + Merge Requests. You can imagine the dark ages before that.
We're happy to hear that GitLab helped you improve your code-review experience! We documented some of the advice and best practices that might be useful when performing code review or having your code reviewed [0].
Code reviews aren't meant to find defects -- they are meant to prevent them. Every study I've seen on code or peer review effectiveness (and there are many) demonstrates that when code reviews are added to an org's development process, product defects are reduced, and reduced by far more than what is accounted for by bugs directly filed from code reviews. Knowing that code is going to be reviewed, and/or having to explain it, changes the way developers write code to begin with, for the better.
I'm a bit of a code review evangelist, so I'm surprised at the amount of negativity toward code reviews. Some of it sounds a bit self-centered: upset that reviews slow down "their" productivity or ability to self-express through code, rather than focused on the real purpose of reviews -- to improve the product and organization as a whole.
In one of the teams I worked in we had a policy to mitigate some of the things we found inefficient in our code reviews:
1- anything done in a pair doesn't need a code review.
2- any story that is more than 3 story points should be worked on in a pair.
3- if you are doing a code review and you find something that needs to change you don't leave a comment, you update the branch and ask the original branch owner to review your new commit.
#3 sounds like a really good idea to get the reviewer to put their money where their mouth is, so to speak.
Conversely, I could see that requirement causing particularly poorly-skilled developers to tie up a disproportionate amount of the more skilled developers' time.
I see your point, but we found that this way the review gets done more quickly, rather than turning into long discussions on the PR; before adopting this policy, the comments usually had pseudocode in them anyway.
Anyway, due to #1 and #2, what actually ends up needing a review is only minor or medium-scope changes, and most of the time the commit will be renaming a variable/method or adding a test case.
Of course, if you see the PR went in the wrong direction, you can still reject it.
For me, I've never had an issue context switching from various branches and PRs. After a moment or two of looking through my notes and my code I can quickly get back up to speed with what I was doing.
The issue I end up taking with code review is cases where reviewers end up forcing their own stylistic choices to the detriment of my time. Insisting that all equal signs be lined up, insisting that variables must be spaced in a very certain way etc. I'm diligent at following the style guide for whatever language I'm working in, but the most annoying code reviewer I had to deal with would specifically call out things that were up to the individual developers.
Auto code formatters for the win. If that's not an option, maybe add a line to the style doc that distinguishes "shall" vs "may" (i.e. which things are and are not grounds for kicking a change back).
The only time I've come to hate code review was when my counterpart (reviewer or author) was overly pedantic, with no hint of empathy in their writing. I don't know whether it was because of inherent insecurity or no previous experience writing in a friendlier manner, but boy oh boy, I was boiling inside arguing about useless things. The socially awkward seem to be present in high numbers among programmers, which makes these kinds of ordeals a bit of an agony at times.
I don't get this. A code review is a purely technical process. If they say your code is bad or wrong, then that is a purely technical discussion that has to be had if you feel they are wrong.
This might have been a cultural thing between me and the other party. I don't mind being very to-the-point and strict about programming, but when it comes off as condescending and "I know better than you", it's just something I can't stand. Or, a better example: they make petty remarks about your programming style while ignoring everything you write about theirs. I think because the other person was a bit older than me -- not a senior dev, but still old enough, apparently -- he felt he had to prove something to the younger guy.
But yeah, I'm going off the rails here. My point is that relationships are hard, and even in a seemingly neutral setting like a code review, tensions can arise. I don't know what to say about those occasions, but getting snarkier and snarkier in the comments is definitely not the way to go. Bad code reviews have been one of the most infuriating things I've come across in my developer career.
As you already said, this sounds like a people issue. Snark, sarcasm or belittling has no place whatsoever in a code review (or at a work setting in general, obviously). Sounds like you just got a bad colleague.
In other jobs, I accept that I'm occasionally going to be waiting. The goal is to produce value at the organization level, and that just isn't compatible with each individual operating at 100% efficiency at all times. It's egotistical of programmers to think that they should. It's unfair of managers to expect this of them. Amdahl's Law applies to humans, too.
The issue is procedural. If you hate that code review of an API implementation might require changing the interface, then you should do a review of the API design before you implement it. That should be the case with every phase. Any time you have to perform multiple steps and worry that review of a later step might cause an earlier step to be invalidated, that simply means you're reviewing too late, at too coarse a level.
Or not. Adding more reviews keeps the big-O factor down, but increases the constant factor. As an organization, you can pick if you want individual contributors to never have to backtrack, or if you want them to be able to use speculative execution. One way is more predictable, and the other way can be faster if you're good at guessing right.
I hate code review too, mostly for the power plays it brings. I had a reviewer force me to produce code I considered bad - but a prolonged discussion before the deadline would have meant more wasted time, so I did not argue. I have seen code review block code over subjective differences in variable naming (no convention was broken and both names were fine). I have had a reviewer demand a change, then, after I implemented exactly what he demanded, demand the next change on top of it; when I complained, he just shrugged and said, "now I think this is better".
I have seen people misuse code review to force standards they had just made up based on a blog post they read yesterday - only to change their opinion three weeks later.
I have seen a senior developer let a junior work alone for two weeks, only to iteratively force the junior to rewrite all of it to the senior's liking. A ridiculous waste of time, and cruel.
The core problem is that code reviewing is a leadership, supervisory/gatekeeper function like any other - kind of like being an editor - and should be treated as such. Even if reviewer and coder are formally on the same footing, or switch roles, the hierarchy is real while it lasts. So treat it as such, and expect people to treat it as such. Meanwhile, people in tech talk about it in idealistic terms that are sometimes true, but often not.
1.) Don't use code review to teach juniors; talk with juniors in advance and supervise them as they go. If they first learn about a major architectural problem in code review, you are doing your senior job wrong.
2.) Don't try to teach colleagues in code review, or use code review to force new standards on an ad-hoc basis. Talk about issues with the standards beforehand, and teach colleagues as you go about your day, without holding pull requests (and their authors' weekends) hostage. If your only communication about these issues is code review, you are doing it wrong.
3.) Don't assume that the code reviewer is automatically right by virtue of their role, and don't let the reviewer steal responsibility from the coder. Don't assume the opposite either (though assuming the opposite is better). Don't make discussion a taboo.
4.) A difference of opinion is normal and does not mean one of the people in the conflict is an idiot.
5.) Passive aggression does not belong in code review. Don't lie in code review comments either: "this is objectively horribly wrong" counts as a lie if you are using it to push a personal preference and are not ready to defend it. If your comment says or implies that someone is a lesser coder, it is OK for that person to challenge the assumption and defend themselves.
6.) Keep your own emotions out of it as much as possible. If the code is making you angry, is it objectively bad, or just something you are personally unused to (new syntax can be annoying even when it is good)? Distinguish between the two.
Code review should be about one thing only: is this good enough to go into the codebase? Is this requested change a blocker, or a minor thing (variable naming)? All the other functions can be helped a bit by review, but absolutely should not be seen as its purpose.
Of the choices that he proposes, #2 seems like the obvious solution to me - but he writes it off as too difficult.
Phabricator’s stacked diffs [0] work wonderfully well to support this exact workflow (it isn’t clear from the post which versioning system the author uses and what workflows it supports - maybe I missed it?)
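For anyone unfamiliar with the model: each logical step lives on its own branch, stacked on the previous one, and each branch gets its own small review. A minimal sketch in plain git - all repo, branch, and file names below are invented for illustration:

```shell
# Self-contained demo of "stacked" branches, similar in spirit to
# Phabricator's stacked diffs. Everything here is a throwaway example.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -q -b main
git commit -q --allow-empty -m "base"

# Slice 1: a mechanical refactor on its own branch (small, fast review).
git checkout -q -b step1-refactor
echo "helper code" > helpers.txt
git add helpers.txt
git commit -q -m "extract helpers"

# Slice 2: the real logic change, stacked on top of slice 1.
git checkout -q -b step2-feature
echo "feature code" > feature.txt
git add feature.txt
git commit -q -m "add feature"

# Each slice becomes its own review: step1 -> main, then step2 -> step1.
git log --oneline main..step2-feature
```

Reviewers see two focused diffs instead of one big one, and slice 2 can keep moving while slice 1 is still under review.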
Thanks for the link! I went down a small rabbit hole from this which ended up with a script that allows for similar style workflows even within the GitHub PR world:
I think the author is talking about a "forced" code review, since it is supposedly good practice to have a code review before you put something into production (which it is).
The current practice is for a coder to submit the entire change for review and get comments on every aspect the reviewer feels like commenting on. This is not an efficient process: it takes up a lot of time, and it also puts the reviewer in the position of re-solving a problem the current code has already solved.
I feel a good code review process should have two components:
1/ Self-review. A team/company should agree on baselines/conventions that any code they ship must meet, and ensure that any code not meeting them is rejected. This review should be done by the developer themselves rather than by a reviewer.
2/ A code review by an experienced dev, who checks that the libraries/custom functions used won't break something somewhere else, or won't conflict with some other change landing elsewhere that they know about. This way, potentially buggy code can be avoided, and the process can be faster as well. Of course, if they want to suggest a better algorithm, they can, but that should be optional.
I think the problem is more about workflow than code review per se. I have previously complained about exactly the same things as this article. If you factor in your CI pipeline as well, it can be hours from writing a patch until it's merged, and during that time you are in practice blocked from building on top of your previous commit. Because of this high friction, you won't bother to rename that variable or fix that whitespace, because it's not worth the time.
Instead, I thought it would help to make code reviews optional. It is dangerous, I wouldn't recommend it for everyone, and it only works if you have a good-quality CI pipeline in place. You allow people to push directly to master; your CI system continuously tests the latest commits, and only if something breaks does it report back, at which point the person who wrote that commit needs to fix it. I have never tried this in a real working environment, but I think it would decrease friction a lot while being only slightly worse in quality. Alternatively, instead of pushing to master, you could have a staging area that automatically pushes to master once the tests pass. That might even be better, actually.
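The staging-area variant could be wired up with a few lines of scripting in the CI job. A rough sketch only; the branch names and the `./run_tests.sh` entry point are assumptions, not a prescription:

```shell
# Hypothetical CI job: every push lands on staging; staging is promoted to
# master only when the full suite is green. All names are placeholders.
set -euo pipefail

git fetch origin
git checkout staging
git reset --hard origin/staging

if ./run_tests.sh; then
  git checkout master
  git merge --ff-only staging   # promote only tested commits, never rewrite history
  git push origin master
else
  echo "tests failed: staging not promoted" >&2
  exit 1
fi
```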
My only advice to reviewers is: If you say "do X", also explain the "Why?" behind it.
Beyond that one piece of advice, these points are also helpful:
- Zero code-style or formatting comments should be allowed in reviews. If it passes the linter, it's good.
- Comments about renaming classes/variables should be kept to a minimum. Unless someone is using an obviously bad name like `a`, it's probably a fine name.
When the right processes are in place, I am a huge believer that most of the concerns raised in code reviews can be fully automated. I usually find people suck up code review time with code style or concerns about conformance to team standards. This is really basic stuff that sucks the life out of people and isn't productive.
For me the residual benefit of code reviews is to invite people to examine how the code works so that it can be improved or simplified. By improve I mean objective concerns like execution performance, reduction of instruction size, security concerns, and so forth. I expect a very critical analysis. Don't be afraid to hurt my feelings because this code is about to go to production.
Unfortunately, oftentimes the proper automation controls are either not in place or the developers are fragile, timid souls, particularly in group settings. Code reviews can be wonderfully constructive, but many times aren't.
What this calls for is fixing/streamlining the process, and awareness of the context-switch costs of path-dependent processing.
The "have multiple projects" model - or, alternatively, the "scutwork" model - likely makes the most sense. A list of noncritical-but-evergreen tasks that don't require heavy cognitive load is useful: shoring up docs, drilling on procedures, interviewing, studying new skills or tools, etc. (Retail has a similar concept for sales associates during slack time.)
Another is to tighten or parallelise the review process, and to prioritise and reward it. Both familiarity and unfamiliarity with the code base can be useful -- quick assimilation on one hand, and fresh eyes plus expanded institutional awareness and knowledge (and an increased Bus Number) on the other.
I agree with the author in that I personally don't like code reviews much either, though I agree they are valuable.
To be more specific, I don't like 'remote' or 'detached' code reviews much; 'over the shoulder' code reviews are a completely different thing. Since you can actively communicate with the author or the reviewer, they tend to reduce the bikeshedding a lot, and the value of positive feedback increases.
I guess, ultimately, it depends on the team; if you're operating well together, classic code reviews might work as well. If the team is new, or the members have varying styles or levels of experience, over-the-shoulder reviews tend to pay off significantly.
I view code reviews as a good opportunity for learning, but I have one particular grievance with them: being asked to review a pull request that is alarmingly huge (e.g. touches more than a dozen files, +1000 LOC of changes, etc.).
I've never come up with a bullet-proof way to deal with this. It usually ends with me providing a rubber-stamp approval and some caveat commentary about "uncertainties", due to my inability to devote that much time to being rigorous. I feel bad every time I have to do this.
The problem may be exacerbated at another level too: for features, stories might not be broken down enough.
At one project there was a rule for large PRs: if the change exceeds a thousand lines, you have to bring review cookies. It keeps you mindful of PR size, and the rest of the team won't grumble as much.
How about a novel concept: there will always be some parts of your job that you don't enjoy. Tedious things, boring things, annoying things. All the different checklists, bureaucracy, reports, task tracking - there is a lot of annoying little stuff that we sometimes have to do.
Some of those things can be optimized or automated away. But some of them can't: it turns out that, although they're completely irrelevant 98% of the time, they're critical in the remaining 2%, and nobody knows which 2% that is, so we're stuck with these practices.
Folks, keep in mind that the value of code reviews, tests, etc. varies wildly from project to project.
- What kind of product are you building?
- How large is the project?
- How many developers are working on it?
- How experienced are the developers?
- How is the project being managed?
- Are there any hard deadlines?
- What is the budget? Burn-rate? Runway?
- Is the product established or are you just building the MVP and trying to validate market demand?
A two-person development team building an MVP with ever-changing requirements won't have the same workflow as an established company with a large development team and a bunch of existing users.
The core message is not "code reviews are a bad thing" but "being blocked on code review is expensive", and I agree that aspect can be an issue: a review can be a badly timed context switch for the reviewer, and we know context switches are expensive in terms of productivity, so in some cases it makes good sense to defer it to a later point in time. But that is valuable time lost for the author.
Have you tried pair programming? I've found that this practice is much more effective than code review. Pair programming encourages learning inside a team, interaction, and code quality. It also reduces time to market.
Pull request reviews are for merges into production (or staging, depending on your QA process), but you should always have a develop branch you can thrash on in case you are pushing a feature that is being worked on by another team (e.g. front end/back end).
Fix the workflow. Reviews are the best quality control measure out there, perhaps even better than unit tests.
On the one hand I get where you’re coming from, on the other hand I definitely broke our submit tracking by randomly inserting a space into a string in a commit I made months ago and my coworker just noticed it last week. Sometimes it’s nice just to have another pair of eyes glance over your diff to pick out obvious mistakes.
Code reviews have been both a useful medium for sharing new learnings among team members - ones that actually make the code elegant or perform better - and endless threads about conventions and nitpicking.
I think it makes sense to set a premise or template for a good peer-review exercise. New team members should be educated about these guidelines.
"There's no such thing as a bad book" - you can and should treat it as a learning experience and take away one positive thing. "I wouldn't do that", "If I see that again, I'd do it this way", etc.
QA incurs a cost. It's perfectly natural to dislike this cost, but that's also largely irrelevant: the question (in each specific case) is whether the benefits are worth the cost, or not.
Parts of this appear in other comments, but I find a couple of things help make code reviews enormously more efficient.
1) Automatic code formatters/linters before the code gets to review. When everybody knows there's a formatter, reviewers can spend less time looking for formatting issues. Generally, people are then more inclined to take a quick look at the code, because they know they're focusing on "just" the logic, which helps with getting a review sooner.
2) Smaller sets of changes up for review. Yes, this might mean more code reviews overall and more process, but it's way easier to catch a bug in a < 100 line change than in a 1000+ line change. This also means keeping the contents of the review as focused as possible. If you want to refactor some major chunk of code because it makes the actual thing you're trying to do easier, go for it, but do it in a code review of its own, separate from the logical changes. This makes reviews more straightforward for the reviewer to get through and makes them more willing to do it in the first place. I'm way more likely to put off a massive 30-file code review than a quick 5-line change, because I need way less focus to review the latter.
3) (this one might be controversial) Early code reviews where appropriate. If you're making a change that's more likely to be totally rejected, get it into code review as soon as the basic structure and logic are in place, and ask somebody to review it as quickly as possible - before the tests are ready and every todo is completed or whatever else. The review should be quick and dirty, but at least that way you know whether you should be proceeding down your current path at all or should change course. Again, this means more time spent reviewing, but it's a lot better than getting a review after a week saying that what you just wrote has already been done, or it totally misses the goal, or it has some critical issue, or whatever. This helps a lot with what to do while you're waiting for feedback, too. If you've got this type of code review up, you can keep working on the details while you wait. If your code review doesn't fall into this category, you're probably relatively safe to start working on the next thing, because at most you'll need to make some small changes to what you're working on, not throw it out altogether.
4) Making sure code review is considered a priority on the team. That includes the team lead prioritizing code reviews and encouraging others to prioritize code reviews. If the code doesn't get reviewed, it doesn't get shipped, so it's ultimately more important than writing new code.
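Point 1 in the list above can be enforced locally as well as in CI, so unformatted code never even reaches a pull request. A sketch of a git pre-commit hook (saved as `.git/hooks/pre-commit` and made executable), assuming `gofmt` as the formatter; swap in whatever tool your project uses:

```shell
#!/bin/sh
# Hypothetical pre-commit hook: block the commit if any staged Go file
# is unformatted. gofmt and the *.go glob are illustrative assumptions.
unformatted=$(git diff --cached --name-only --diff-filter=ACM -- '*.go' \
  | xargs -r gofmt -l 2>/dev/null)
if [ -n "$unformatted" ]; then
  echo "run gofmt on these files before committing:" >&2
  echo "$unformatted" >&2
  exit 1
fi
```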
The real problem with code reviews is when the reviewer isn't good. If they pick on trivia, make you redesign unnecessarily, are unpleasant, or are too busy, it's a problem.
Was nominal lead on a smallish project a couple months back, with 2 other folks. Worked with one before, one I hadn't worked with before.
We're all remote, and the 'new' guy started doing 'code review' on code before he even had a running environment. We'd taken over another codebase, and he was pretty up-front about wanting to adopt/enforce certain stylistic standards, which, to me, don't matter as much (on the PHP side of things, PSR-2, in this case).
We're all remote, so we had a call - a couple actually - and I indicated I didn't care all that much, but he was free to make changes as he saw fit, as long as 1) he got a working environment up and running, 2) he was also spending time writing some tests around the code as he explored it, to have a firmer understanding of what was going on, and 3) he didn't break any existing tests (or new ones that were developed).
I swear I don't think he actually had a working copy of the project, but he was still committing code. Not a lot, but... it broke tests. It broke stuff that wasn't 'tested' in an automated way yet but was testable by running the app, and I think he'd just not bothered to test it at all.
But... hey - look at the formatting! Look - spaces, not tabs! - how useful when someone else is having to go back and debug your crap. I received a couple small lectures on variable naming and the importance of descriptive variable names.
function getListOfUsersForCompany()
{
    $res = db::query('whatever query');
    return $res;
}
Apparently, that $res is confusing and 'bad practice' because it's 'hard to reason about' what's going on there. Literally two-line methods with an internal placeholder variable (which also had some working functional tests around them) - those were being criticized by someone who had committed test-breaking code multiple times.
In all this, I kept having to figure out how much of my reaction was because my code was being criticized versus what the criticisms actually were. I got accused of taking things 'personally', but I kept coming back to 'working tests broke with these changes', and in my mind, the idea that this code was considered somehow 'better' because it more closely followed a stylistic 'standard' was bothersome. Working tests should be a standard we strive for too, right?
I have worked with some folks who had a real eye for analysis - remote or colo pairing on problems has helped reason out issues that I know I've missed. And when I'm pairing or doing a requested review, cosmetic style tends to be the last thing I'll bring up, because it generally has the least impact on something. Much prefer to have someone spend time writing tests and/or docs and/or insightful comments vs reformatting. When I've refactored with someone to make something more testable, better naming/comments/docs tends to flow out of that refactoring anyway.
While I understand formatting is often not that big of a burden, especially with modern IDEs, it's also not terribly high-value on many projects. I'm primarily speaking of projects I've come into where there's already extant code - in this case, we had taken on a very weird hybrid of 2 PHP frameworks and 2 JS frameworks (some screens used raw Vue, some compiled Vue, and some compiled Angular).
Doesn’t sound to me like you are the problem. Sounds like your colleague has lost sight of the end goal and needs to refocus on what you’re supposed to be delivering.
(Unless the whole project was to reformat the code?)
I later learned, through someone else who'd worked with the 'new guy', that they'd had similar problems, and apparently he only really works well on his own. I'd been 'given' him on this project because the first person wasn't sure whether he (the first person) was at fault or the 'new guy' was. I was the test case that confirmed it.
By no means was the purpose to just 'reformat' (far from it!) :)
I have wrestled with this a couple of times over the last... 15+ years. When working with other folks, I'm normally not in full-time W2 arrangements - it's usually shorter-term contracting, so the dynamics are a bit different. That may also be why style/formatting is usually less important to me. As with the project above, I've seen many well-formatted projects with descriptive variable names that don't function, and horrific spaghetti code that "just works" but that people are afraid to touch.
Finding that middle ground is a never-ending journey, but I feel I've got a decent sense of how to get pragmatic results, and it usually involves repeatable sample data, repeatable processes, and tests and/or docs. Building those as a foundation makes stylistic changes a breeze, because the confidence to make "trivial" changes will be there. Coming into new codebases with no tests, sample data, or processes should be considered scary/risky, and when people don't seem to understand the risks involved, I get even more scared.
It took us 9 weeks on this project to get to a first push to production (and even then we ran into a few small bumps). But we'd practiced with tests (only dozens, not hundreds, at this point) and with pushing to staging, and as the client reviewed, we kept finding more and more bugs - things that had existed in the code for years, which we were seemingly the first people to exercise in front of the client. I'm sure their confidence in us was bruised a bit - we apparently just kept surfacing bugs(!) - but it feels like we provided more of a base in 9 weeks than we inherited from the previous 4 years of teams (automated builds, sample data, automated unit tests, automated browser tests, etc.).
Just did this a couple days ago and it's still very fresh/raw in my mind :)
Mediocre code review is a human linter; decent code review catches obvious bugs; good code review catches subtle bugs; great code review teaches you something. I’m always excited to learn (or teach) when there’s a better way to write something. Even if not adopted, the surrounding discussion is a boon to future readers.
If people are giving crazy suggestions, or resenting good ones, something is seriously wrong with your team.