
Confessions of a programmer: I hate code review (2010) - shubhamjain
https://blog.nelhage.com/2010/06/i-hate-code-review/
======
erwan
I love code reviews. With the right people, they can be useful both for achieving
high standards of code quality and for improving your own engineering process.
The requirements are quite high, I think you need some combination of the
following:

1\. Both reviewer and reviewee are focusing on getting the best outcome
possible, in good faith and with generosity.

2\. The reviewer concedes that there can be equally valid approaches to a
given problem or taste wrt. aesthetics. They do not try to _gratuitously_
force their style upon the reviewee. The reciprocal must hold as well.

3\. The reviewer makes practical suggestions and ideally accompanies their
comments with snippets of code. They involve themselves in being part of the
solution.

4\. The reviewer focuses on the substance of the pull request. That means
putting in the time to understand the real logic, putting themselves in the
shoes of the reviewee (with their help), and trying to challenge it so that
the limitations of the approach are known and documented.

5\. The reviewee makes an effort at slicing their request into _readable_ and
compact sub-PRs if needed. Similarly, the reviewer makes a best-effort attempt
at starting their review within a reasonable time.

Those are central aspects of a productive engineering culture anyway. You only
need to find someone else who shares those "values" (i.e. focusing on the best
outcome possible as a sort of devotion to Engineering, so to speak) to grow
this attitude within your company.

~~~
falsedan
Great points. Regarding point 2:

> _2\. The reviewer concedes that there can be equally valid approaches to a
> given problem or taste wrt. aesthetics. They do not try to gratuitously
> force their style upon the reviewee. The reciprocal must hold as well._

If my code is being reviewed, and the reviewer has opinions about how they
would do it, I like it when they share their preference but also

1\. consider if the code as submitted has the same result

2\. share their personal preference for how they would have written it

3\. explain _why_ they prefer it over the code I wrote

4\. not block the review on me rewriting the code to match their preference

I think it’s fine (and valuable) to comment “I hate this code, shipit” or
(better) “ship it; if you change foo to bar then <explanation of why they
prefer it>”

The worst code review comments are “this isn’t
performant/idiomatic/maintainable, fix it” or other statements of opinion
presented as facts. This kind of faux-objective feedback is terrible, since a
fact can be argued against (or could be wrong). A statement of opinion like “I
don’t like it” or (best) “I had to think really hard before I understood it”
is great, since that’s what someone diving into the code for a maintenance
ticket/incident will be thinking but won’t have the original author on hand to
walk them through the code.

Linking to articles or references that explain why you prefer a particular
implementation also gives the reviewee a learning opportunity & lets them save
face when they post a follow-up CR: “thanks, didn’t know that”.

~~~
pytester
>The worst code review comments are “this isn’t
performant/idiomatic/maintainable, fix it” or other statements of opinion
presented as facts.

This behavior is usually symptomatic of an excessively dogmatic outlook. That
is, I don't think it's the review style per se that's the problem - it's the
person.

I find developers like this nearly impossible to work with. They won't just
unknowingly write poor code themselves (dogmatism typically leads to bad
decisions, which lead to really poor code); they will likely also try to force
you to do the same.

~~~
danmaz74
Even dogmatic people can learn not to be dogmatic.

------
eridius
You know what I hate more than code review?

Shipping bugs.

The author of this piece admits that they have to change the code they write
in response to comments. This means the code review is flagging areas of
improvement. Surely the resulting code is better than the original code
(otherwise I'd expect the author to push back on the comments instead of
implementing bad suggestions).

Similarly, when acting as the code reviewer, the author is giving feedback to
others (rather than just rubber-stamping the PR), and presumably that feedback
is implemented.

So code review seems to be doing its job here. It's producing better code.
Ultimately I think the problem is revealed at the end, where the author says
they feel like code review is optional. If you think it's optional, then the
delays inherent in review are going to feel like artificial slowdowns.

The better way to think about it is that code review simply isn't optional.
Yes it can be a pain, and there's a lot of friction in the process, and our
tooling kinda sucks for it. But it's a necessary step. I can't even begin to
count how many bugs my team has avoided shipping due to code review. And I
always get really annoyed when I run across a bug that should have been caught
in code review but wasn't for whatever reason (perhaps a more junior coder did
the review and therefore missed stuff that a more senior coder would have
caught). Code review stops bugs before they happen. And the best bug is the
one that was never shipped to customers.

~~~
bryanrasmussen
so another story here:

Code review was mandatory at the last place I worked; the place had a policy
that any code you push that does not match the guidelines must be changed. The
guideline was that no em was allowed, you must use rem. An old part of the
codebase was assigned to me, and I found some things in the CSS I could
improve (reuse of code, a small overflow bug).

Got comment - you need to change em to be rem.

I can't do that, because there is em all over the place and changing it can
have side effects. The diff shows it as my code, but if you look you can see
it is actually the old code just moved to the right place.

Those are the rules, fix them.

Since this code was part of a big release, and none of our stuff was allowed
through unless I changed one em to rem, even though I could not be sure there
wouldn't be problems, I went ahead and rolled back my changes.

Fixed the overflow bug another way. No code reuse ever in that part of the
codebase.

~~~
rasteau
Yikes! Demanding a bug fix also remediate unrelated, extant style violations
is a good way to ensure nothing improves.

~~~
bryanrasmussen
right. I said that to him and to the frontend lead for the whole organization
(very big place) - codebases this big you can't have that requirement, do it
with a small codebase/team who all sit in the same room and know what's going
on sure. 10-11 teams spread around with lots of different programmers, many
also remote, it can't be done. But no dice.

In fact, people specifically recommended to me at times: don't do that,
because when it hits review with reviewer X it will be a problem. Sometimes
you had to butter them up by asking their opinion beforehand (and that
wouldn't always work, because sometimes they would change their mind). Anyway,
I'd better shut up and get to work, I can feel the bitterness returning!

------
koala_man
The arguments essentially boil down to:

* "I don't have time to review people's code, I just want to code"

* "I don't have time to have my code reviewed, because if it's bad I'd have to fix it"

This really mirrors people's complaints about unit testing:

* "I don't have time to write tests, I just want to code"

* "I don't have time to run tests. If they're failing I'd have to fix it"

It also mirrors the 19th century complaints from surgeons who didn't have time
to wash their hands, because they had a lot of surgeries to perform...

~~~
vfc1
The problem is that most tests break frequently not because of bugs, but
because of some environment dependency that changed.

The tests break and we have to fix them not because a bug was found, but
because something else changed that affected the test.

Most of the time spent with tests is not to catch bugs. When was the last time
that you caught a significant bug with a test?

~~~
sethammons
For us, a failing unit test is almost always a bug. Lots of people write
terrible unit tests: too much mocking, asserting that methods were called,
etc. Brittle. Test the edge of your public interface. Supporting methods only
need specific unit tests if they are sufficiently complex. If your unit's
interface or behavior changes, the test should fail. That is a good thing.
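That "test the edge of your public interface" point can be sketched in Python. The `Cart` class and names below are hypothetical, purely for illustration; the contrast is with a brittle test that would mock the internals and assert that some helper was called, which breaks on any harmless refactor.

```python
# Hypothetical example: assert observable behavior at the public
# interface, not internal method calls.

class Cart:
    """A tiny shopping cart with one public operation that matters."""

    def __init__(self):
        self._items = []  # internal detail; tests should not touch it

    def add(self, name, price, qty=1):
        self._items.append((name, price, qty))

    def total(self):
        return sum(price * qty for _, price, qty in self._items)


def test_total_reflects_added_items():
    # Only inputs in, total out: this test survives any refactor of
    # Cart's internals, and fails exactly when behavior changes.
    cart = Cart()
    cart.add("book", 12.50, qty=2)
    cart.add("pen", 1.25)
    assert cart.total() == 26.25
```

A mock-heavy test of `_items` or of which helpers `total()` calls would pass and fail for entirely different reasons than this one does.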

~~~
vfc1
I think there is a lot of religious-like thinking when it comes down to tests:
many people swear by them even against all the day-to-day proof to the
contrary that they are really not that useful at the application level.

I think if we are shipping a framework to thousands of users, it's great that
it has unit tests, but if it's an in-house application built by a team of 10
developers, I doubt that the tests are doing anything useful.

But what happens is that these best practices that are used by developers of
libraries are then used blindly at the level of application code, without
pausing to think if those are indeed best practices in that particular
context.

Developing software at the leaves of the dependency tree (the application) is
very different than developing libraries. Some things are the same while many
others are not.

The end result in practice is a lot of wasted time and effort spent at the
application level writing bad tests that yield no results, all in the name of
the testing religion.

Because that is what it really is: an unprovable belief system with a set
of rituals that you absolutely must follow blindly, or else.

~~~
sethammons
Usually "unit tests" should be for verifying error code paths. These are
valuable at the library/package level and maybe at some level of internal
integration. 100% code coverage is likely useless most of the time. At the
application level, testing user inputs to user expectations is highly valuable
(perhaps more so than unit tests in a lot of contexts). It is this
integration/acceptance testing that any application with users should have
under automation. Any codebase lacking that risks breakage or regression at
the cost of user experience (which, depending on your app, might be a fair
trade-off).
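A small, hedged illustration of that error-path point in Python; `parse_port` is a made-up helper, not anything from the thread. The happy path is one assertion; the error paths are where the unit test earns its keep.

```python
def parse_port(value):
    """Parse a TCP port from a string, rejecting out-of-range values."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port


def test_parse_port_error_paths():
    # One happy-path check, then every error path the function promises.
    assert parse_port("8080") == 8080
    for bad in ("0", "70000", "not-a-number"):
        try:
            parse_port(bad)
        except ValueError:
            pass  # expected
        else:
            raise AssertionError(f"expected ValueError for {bad!r}")
```

Tests like this verify the contract of a small unit, which is a different job from the input-to-expectation acceptance tests described above.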

------
rasteau
I think the author and many folks here have an unhelpful understanding of code
reviews.

I suggest code reviews' primary purpose should not be to catch bugs or trivial
style issues; automated tests and linters exist for those purposes.

Instead, code reviews should be primarily thought of as "human
comprehensibility tests".

When we write code, we may have spent a significant amount of time thinking
about the problem. This puts us in a bad position to gauge how readily someone
in the future can reason about our code in order to fix or modify it safely
and easily. Code reviews are a mechanism for showing empathy to our future
selves, which accumulates over time into a much more pleasant coding
environment.

Relatedly, comments on a PR are ephemeral. A rule of thumb I use: if a
reviewer asks a reasonable question -- something someone in the future might
wonder about when I'm no longer around -- I answer it by first improving the
code.

------
DubiousPusher
Some simple tips I use to make this better.

1) Anything cosmetic gets a "nit" added to the comment.

2) When a review looks like mostly assets or boilerplate, I look through
quickly for anything glaring and approve, while noting my assumption that the
author has run the build and verified the additions.

3) If I'm in a hurry and the author is available I grab them and do an over
the shoulder review. You might be surprised how much time this saves.

4) Don't struggle with difficult sections of other people's code unless you
have a vested interest in remembering this specific section of code. Ask about
unclear areas.

5) Trust your CI process. Don't try to consider every edge case for your
reviewee.

6) Develop a culture of small work units. Encourage people to integrate every
day or two. This means reviews are small and easier to grok.

Code reviews are not a guarantee of correctness. They are another tool. We
should find a good trade off between effectiveness and cost to get the most
value out of them not strive for perfection.

~~~
neathack
These are all good tips, but I want to stress the importance of (6). Much of
what the original post is talking about can be improved by adopting the habit
of making small, incremental changes. As the author, you no longer have to
wait "in the order of weeks" for the feedback and a potential rebase should
pose little risk. As the reviewer, you don't have as much mental burden and
most reviews can be done "in-between" bigger tasks, e.g. in the 15 minutes
before a meeting or before lunch, etc.

As for 1), I highly recommend automating that away as much as possible. A
strict syntax guideline and a toolchain that enforces it will go a long way.

~~~
danmaz74
Conceptually, I completely agree with (6). Practically, though, I found it
really hard to do in some situations.

Specifically, when doing something where the solution is well known from the
beginning, creating PRs which are small and easy to understand is easy.

When instead I have to solve difficult problems for which I don't already know
the solution beforehand, and which require writing quite a bit of complex code
- eg multiple classes that work with each other, and which maybe work with
some complex external system I/the team doesn't already know well - I find it
difficult to split the PR into smaller chunks, because I "evolve" the separate
classes together, while also doing exploratory testing.

"Production ready" code comes later in the process, and at that point I
usually have quite a bit of code. I noticed that people don't like reviewing
the non-production-ready code, but what's the alternative? In theory, after I
found out what the solution is, I could go back and rewrite it better from the
beginning, but that would just require too much time.

Didn't find a good answer yet.

~~~
dennisgorelik
The answer to your problem of "large commit" is this:

1) Create "research prototype" of your solution. But do not commit it or
commit it into separate branch (not master).

2) Create a plan how to split that code into smaller steps. Separate
refactorings that will support your new feature - from your actual feature
(that changes functionality).

3) Commit refactorings separately from your main feature commit. Keep every
refactoring commit as small as possible).

4) At the end commit your functionality changing feature. If possible, split
it into multiple commits too.

That should help with code review.

Remember, your team will have to code review your code multiple times:

1) When you are making your commits.

2) When your code reviewer reviews it.

3) In the future, when you or other team members maintain your code and try to
understand why you wrote it that way.

So: optimize your commits to reduce code review time (even if that increases
initial code writing time).

~~~
danmaz74
Thank you for your answer, but I don't think that would have been feasible -
the employer brought me in precisely because the team was very slow, and
adding so many additional steps would have made me very slow too.

I'm now changing contract, hopefully with the new team it will be possible to
find a better compromise. For example, I offered to walk the reviewer through
my code, which I think would have helped a lot, but this was refused...

~~~
dennisgorelik
> additional steps would make me very slow too

Your real choices are: "fast and buggy" vs "slow and correct".

What is your preference?

> I offered to walk the reviewer through my code

Walking the reviewer through your code is a good exercise. A code review of a
complex change should be done in an interactive session (code reviewer +
coder). There is so much for both sides to learn in such a session.

[https://dennisgorelik.dreamwidth.org/161605.html](https://dennisgorelik.dreamwidth.org/161605.html)

~~~
danmaz74
> Your real choices are: "fast and buggy" vs "slow and correct".

It's not a binary choice, it's a dial. And anyway, after more than 30 years of
programming experience, I can tell you that I can write mostly correct code by
diligently applying the test pyramid. In my experience, code reviews catch
very few bugs that weren't revealed by testing, if any.

Much more than correctness, I find them useful for verifying and improving the
readability and maintainability of code - which are important goals, but as
always the cost needs to be weighed against time and budget constraints.
Depending on the project, being twice as slow to accommodate the reviewer may
or may not make sense. What I'm looking for is more of an 80/20 solution: can
I be 20% slower, and still get 80% of the benefits of code reviews?

------
messit
I've worked in teams bad enough to make code reviews a real pain.

I've seen pretty horrible code constructs that actually seemed to work without
bugs. What should I say? That the author of that piece of code should
completely rewrite it to my taste? It would not only frustrate him/her but
also slow down the entire team. So I often accept what is rubbish, IMAO. And
I'm not alone: have you ever seen two code-buddies in a team always doing each
other's code reviews and (quickly) accepting and merging everything they
produce before anyone else can make a comment?

I've had numerous pointless discussions about why I wrote a specific piece of
code, where the assessor (a junior dev) just wanted me to do things by the
book as he learned it, not really understanding why I did it that way. But
there was no way to convince him; it ended up in an unresolved conflict that I
could only resolve by messing up my code to match his taste.

All these pointless discussions, which particularly affect the relationship
you have with your co-developers on the team, are not always fruitful and can
be quite destructive if you're not careful enough.

I do believe code reviews can be very important and valuable, but at the same
time I've seen huge amounts of wasted time and frustration for nothing. My
conclusion is that it's not always and only a good thing; it depends on the
variables.

~~~
therein
> I've had numerous pointless discussions about why I wrote a specific piece
> of code where (a junior dev) the assessor just wanted me to do things by the
> book as he learned it, not really understanding why I did it that way. But
> there was no way to convince him, it ended up in an unresolved conflict that
> I only could solve by messing up my code to his taste.

I left the place I'd worked at for over 3.5 years over stuff like this adding
up. I'm starting at my new job in a few weeks, getting paid more than I did at
my previous job, and I will be working with people more senior than my
previous team. All in exchange for the rapport I'd built over 3.5 years.

------
lmm
> If I spend a day doing nothing but reading code reviews, I'll end up feeling
> unsatisfied and unproductive. Because code review feels fundamentally
> optional -- even though I believe it's beneficial, it's something we've
> chosen to do, not something that we absolutely have to do in order for the
> project or business to keep operating -- it's more frustrating to find
> myself spending a large amount of time on.

This is where you need to change your mindset from writing code to delivering
business value. At my first job we realized that tickets were actually
spending longer in code review than in development - and therefore our primary
bottleneck on delivering functionality was code review. So as a developer code
review is actually your main job, and writing code is something you do in your
downtime when you don't have any reviews available.

Once everyone in the team is making reviewing code their top priority (or at
least, a higher priority than writing their own code), waiting for review
becomes much less of a problem.

~~~
PopeDotNinja
I'll add that code reviews are vulnerable to garbage in, garbage out. Even
code reviews that are magically 100% effective at catching problems will still
be expensive in terms of time if all incoming code is garbage. By reducing
upstream garbage, code reviews become less expensive, making more time
available for everything else, including coding.

~~~
dennisgorelik
Fortunately, code reviews improve the quality of incoming code: both the
original coder and the code reviewer learn how to code better during review.

------
vfc1
Systematic code reviews are really a complete subversion of the agile spirit.
Agile was about doing what makes the most sense for the project at any given
time, and code reviews don't make sense all of the time, far from it.

I worked on projects with systematic code reviews, and while the reviews
hardly caught any issues, they did add significant delays and endless
discussions with some very unreasonable people who would systematically
question every single line of code for no good reason.

I had to submit reviews and wait a day for them to be approved for things such
as 3 lines of CSS changed.

Code reviews should be left for occasions when the code actually needs some
review: for example, the developer changed a part of the code that he/she is
not familiar with and wants the code to be checked by someone who is
experienced in that part of the system.

Code reviews yes, but only when it really makes sense, and not by default.

Some of the things I've seen regarding code reviews are just way too absurd,
and it adds to the overall demotivation of the team to have to do something
absurd every day just because it's the rules.

Most of the time, these procedures are put in place by non-technical managers
who have usually never written a single line of code in their lives.

~~~
sethammons
While I agree with the spirit, I disagree in whole. "One line config change? I
don't need a code review." At our org, even that needs a CR. It generally
takes a few minutes to request one in our chat room. Usually, the other person
sees no issue and you get their stamp, which gates the build process. However!
Multiple times (a small percentage), the person asks for clarification, and it
turns out that a conflicting config needs updating, or a comment is warranted,
or they remind you that this change will (maybe minorly) affect another team
and ask if you reached out to let them know.

Can it get in the way? Yes, it can. So as a team, learn how to streamline. For
us, usually someone is available for a "quick cr." Larger ones take longer.
Maybe some changes are too small, but for our team (no CSS), we've not found a
consistent candidate.

------
boomlinde
I love reviewing code and I love having my code reviewed. In my current team
we make significant improvements to the code during the review phase, both
shaking off bugs and making architectural improvements.

That said, I think that there are some prerequisites for productive code
review. Changes are preferably small, reviews need to happen in a timely
fashion, the surface area for style nitpicking needs to be minimized with
linting and formatting tools and the team needs to be pretty accepting when it
comes to different minor design decisions that the different team members
might make.

If your change is 2000 lines for what could have been several atomic changes,
and you get 50 comments about indentation and "you should do it this [slightly
different way that the reviewer would have preferred, which doesn't
significantly improve legibility or design]" after prodding the other team
members for a week, then address the comments and wait yet another week (and
also manually rebase the patch because of 10 conflicts), it's no fun.

------
caymanjim
I agree with everything in the article.

My biggest problems with code review are:

1\. The reviewer usually lacks the context to truly review the solution and
all its implications, and instead gives only a cursory review focused on style
and trivialities. Sometimes they're familiar enough with the code to give an
insightful review, and there's benefit to commenting on even the small things,
but I've seen countless major architectural problems pass review because the
reviewer lacked the time and familiarity to notice them. This isn't helped by
reviews typically being done on GitHub code diffs; you're seeing a couple of
out-of-context lines, and it's hard to understand how it all fits together.

2\. By the time code is ready for review, it's too late. Sure, you can stop
the presses if you notice an egregious problem (not that anyone usually does;
see #1). My team tries to keep its reviews small and frequent, but they still
often contain one or more full days of effort. If the solution has major
problems, catching them after the fact means a lot of wasted effort on
rewrites. Often there's no time to fix it right, so someone hacks around the
faults and ships a "good enough" product. The tech debt piles up quickly.

As a result of both of these, code reviews are often little more than a rubber
stamp.

This isn't to say that I don't think code reviews have any merit. I think that
code reviews on every single change are a waste of time. Periodic detailed
reviews of architecture are valuable, and periodic team reviews of code can
help spread good patterns.

Many of these problems are obviated by pair programming. It's real-time code
review before it's too late. It's the single biggest benefit to pairing,
although there are plenty of others (like sharing knowledge and building team
bonds). Pairs are also subject to the same pitfalls, but at a far lower rate
than solo developers.

~~~
v_lisivka
1\. You need to put context into comments. Code is written once but read
multiple times, so you should optimize for the latter case.

2\. "Too late" is when a bug hits the client and affects income. If not, then
it's not "too late", it's just "a bit late". A 2-3% increase in effort to
decrease the number of bugs (and increase customer satisfaction) by 15-20% is
a huge win. If you don't do it, your competitors will do it instead of you.

------
ereyes01
"Good cooking takes time. If you are made to wait, it is to serve you better,
and to please you."

^^ From the Menu of Restaurant Antoine, New Orleans, quoted in the beginning
of The Mythical Man Month.

Good engineering takes time and deliberate effort. Invariably, it will
sometimes be tedious, but it is for the benefit of the product. IMO that
includes code reviews, and/or pair programming.

In my experience, the biggest source of friction is in comments over style and
reviewers/reviewees that have an obstinate and opinionated stance on the
matter. In my current project, we resolved this by simply requiring everything
passes a linter (gofmt and ESLint in our case). Follow the linter, learn to
like it, end of discussion. There's still structural / design style to fight
over in reviews, but those are healthy fights IMO.

------
xiphias2
"Reviewing code is one of those things that I would enjoy if I had infinite
time, but that I find a nuisance when I don't."

I can't really imagine any work that can be more important than code review
for other people. This is the way Linus can have a 100x leverage on the Linux
code base. This is the most important and hardest work in the Bitcoin code
base as well, probably apart from cryptography research. I think the bigger
problem is that code reviewing may be not well compensated, not that it's not
important.

~~~
stefan_
The compensation, in Linux anyway, is that others will review your code if
you review theirs. So the motivation to review stuff on the mailing list is
purely that you need someone to review your code to get it in.

It sounds like quid pro quo, but I found the level of review (e.g. for the DRM
subsystem) to be excellent, even if it's one AMD engineer reviewing another
AMD engineer's code in the AMD-specific code.

------
jillesvangurp
I get the sentiment. I'm currently in a small startup and not doing any code
reviews as such. That's probably not ideal because I know I make mistakes.

Last year I was on a big project in a senior role and doing and receiving lots
of code reviews. There's a whole etiquette around these things that you have
to appreciate. First off, I got lots of feedback that was valuable.

My view with code reviews is that they should be timely and appropriate in
scope. Reviewers should have some time to review but not forever. I tend to do
them early in the morning or before I leave, but I'm not likely to interrupt
my workflow for them unless I'm asked to do so. Real-time review is not a
reasonable expectation, but days of delay are also not reasonable. The smaller
the pull request, the less time is acceptable; the bigger the change, the
longer you allow for reviews.

Also, the bigger the PR, the higher the workload for reviewers. Keep your PRs
small and create lots of them instead of dumping lots of big changes in one
PR.

It also helps to classify review comments as blocking or not blocking. A valid
outcome can be to file a follow up ticket for another change. Yes, it would be
nice to refactor this bit of code but given that it was a two line code change
that got the job done, maybe right now that is good enough. My view is that if
CI tests pass and the code is better than before it was changed in some
meaningful way, it should always be OK to merge.

Finally, sometimes there's an asymmetric relation with the reviewer and
reviewee in terms of seniority. You can use this as a didactic tool to educate
juniors. But it is also an opportunity to point out they did well; which I'd
argue is just as important. Likewise, you have to place comments from juniors
in context as well. They might feel a little intimidated commenting on your
code and if they mean well, it's important to give them feedback on their
comments. IMHO it is important that people speak up and feel safe to do so. A
simple "You are right, I will fix this. Thanks for pointing that out." makes
them feel good.

------
zaksoup
I've found that favoring full-time pair programming, and not reviewing work
that was done solo unless the author explicitly asked for it because of
self-identified uncertainty or caution, has resulted in a better workflow with
little to no actual hit to quality.

Additionally, proper test-driving and testing practices are incredibly
effective ways to enforce high quality code. Again, this comes down to culture
and practices. Teaching and encouraging good testing will pay off far more
than enforcing "two pairs of eyes" type rules.

My experience is that code review is great at linting, nit-picking, and
causing arguments about idiom. The times a bug was okayed with 8 different
:thumbs-up: or "lgtm" on the PR in github are uncountable. It's not that code
review doesn't work, because I'm sure it often catches plenty of bugs, it's
just that I have no reason to believe it's _more_ reliable than pairing or
letting authors ask for help explicitly when they need it.

Disclaimer: I spent my formative years at Pivotal Labs. Yadah Yadah Yadah it's
a cult of pair programmers so something about a grain of salt.

~~~
barbs
Pair programming 100% of the time sounds like a nightmare to me. I need
uninterrupted focus - something I can't get with someone else there
constantly, even if we're working on the same thing.

~~~
rabidrat
Have you actually tried it? It's different than you think. (At least it has
been for me).

------
regularfry
The problem with code review is that it works. That's why we can't shake the
industry's fascination with it. It works to catch bugs, but it's horrendously
inefficient in doing so.

I ran some numbers on our code review process on a project last year. Of
patches returned for modification, 5% contained a bug. 95% were for entirely
stylistic changes. I find it hard to square that with being a good use of
time.

As an industry, we're making the mistake Deming points out in Out Of The
Crisis: we're attempting to inspect defects out of existence rather than
improve our production process so that they never get created in the first
place. We concentrate our efforts on being better at 100% inspection, and
don't allow ourselves to think that a process without that inspection could be
a hell of a lot better, if we invested the effort in figuring out what that
might look like.

Pairing might be one answer here, although I don't think I've heard it talked
about in those terms before.

~~~
petetnt
First step to get right would be having automatic code formatters in place
properly to cut out stylistic nits. With tools like Prettier, gofmt, yapf,
Black and similar you can focus on the code itself, not how it looks. Of
course it can take a while to find some common ground to some stylistic
options (which is why I love opinionated formatters that don't give the
choice) but it pays off quickly regardless and you gain huge wins with a tiny
set of tooling.

For myself, I actually stopped caring of code styles and only run code
formatters in a pre-commit step (if possible). By giving all the trust of
fixing stylistic issues to the formatter and just focusing on the code itself
while coding I ensure myself that I focus on the most important thing: getting
stuff done.
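
As a concrete sketch of the pre-commit idea: here python3's stdlib `json.tool` stands in for a real formatter like Prettier, Black, or gofmt, and the file path is just an invented example.

```shell
# Minimal sketch of a formatter pass such as a pre-commit hook might run.
# json.tool stands in for a real formatter; a real hook would loop over
# `git diff --cached --name-only` instead of a single hard-coded file.
set -e
printf '{"b":2,"a":1}' > /tmp/cfg.json            # an "unformatted" file
python3 -m json.tool /tmp/cfg.json > /tmp/cfg.tmp
mv /tmp/cfg.tmp /tmp/cfg.json                     # rewrite it in place
cat /tmp/cfg.json                                 # now consistently formatted
```

In a git repo this logic would live in a `.git/hooks/pre-commit` script (or be managed by a hook runner such as pre-commit or Husky), so style questions never reach the reviewer at all.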

~~~
regularfry
That takes care of the lowest-level stylistic issues, but doesn't capture
things like "why are we trapping errors _here_ rather than in a central error
handler _there_ ". The sort of question where the app will function either
way, but reasonable people can have aesthetic disagreements, and a formatter
is totally useless.

------
dotdi
I used to hate code-reviews but now I'm a convert after: (1) we brought in
better people. It's fun to sit with competent people and trash-talk some code,
and (2) after I pushed the team to use better tooling, which in our case is
GitLab + Merge Requests. You can imagine the dark ages before that.

~~~
btasovac
We're happy to hear that GitLab helped you improve your code-review
experience! We documented some of the advice and best practices that might be
useful when performing code review or having your code reviewed [0].

[0]
[https://docs.gitlab.com/ee/development/code_review.html#best...](https://docs.gitlab.com/ee/development/code_review.html#best-practices)

------
topkai22
Code reviews aren’t meant to find defects - they are meant to prevent them.
Every study I’ve seen on code or peer review effectiveness (and there are
many) demonstrates that when code reviews are added to an org’s development
process, product defects are reduced, and reduced by far more than what is
accounted for by bugs directly filed in code reviews. Knowing that code is
going to be reviewed, and/or having to explain it, changes the way developers
write code to begin with, for the better.

I’m a bit of a code review evangelist, so I’m surprised at the amount of
negativity toward code reviews. Some of it sounds a bit self-centered: upset
that it slows down “their” productivity or ability to self-express through
code, rather than focused on the real purpose of the reviews - to improve the
product and organization as a whole.

------
_shadi
In one of the teams I worked in we had a policy to mitigate some of the things
we found inefficient in our code reviews:

1- anything done in a pair doesn't need a code review.

2- any story that is more than 3 story points should be worked on in pair.

3- if you are doing a code review and you find something that needs to change
you don't leave a comment, you update the branch and ask the original branch
owner to review your new commit.

~~~
arwhatever
#3 sounds like a really good idea to get the reviewer to put their money where
their mouth is, so to speak.

Conversely, I could see that requirement causing particularly poorly-skilled
developers to tie up a disproportionate amount of the more skilled developers'
time.

~~~
_shadi
I see your point, but we found that this way the review gets done more
quickly, rather than turning into long discussions on the PR - before adopting
this policy, the comments usually had pseudocode in them anyway.

Also, due to #1 and #2, the only things that actually end up needing a review
are minor- or medium-scope changes, and most of the time the commit will be
renaming a variable/method or adding a test case.

Of course, if you see the PR went in the wrong direction, you can still reject
it.

------
fzeroracer
For me, I've never had an issue context switching from various branches and
PRs. After a moment or two of looking through my notes and my code I can
quickly get back up to speed with what I was doing.

The issue I end up taking with code review is cases where reviewers end up
forcing their own stylistic choices to the detriment of my time. Insisting
that all equal signs be lined up, insisting that variables must be spaced in a
very certain way etc. I'm diligent at following the style guide for whatever
language I'm working in, but the most annoying code reviewer I had to deal
with would specifically call out things that were up to the individual
developers.

~~~
sethammons
Auto code formatters for the win. If that is not an option, maybe a line in
the style doc that says "shall" vs "may" (ie, things that are and are not kick
backable).

------
tekkk
The only time I've come to hate code review was when my counterpart (reviewer
or author) was overly pedantic, with no hint of empathy in their writing. I
don't know whether it was because of inherent insecurity or no previous
experience writing in a friendlier manner, but boy oh boy, I was boiling
inside arguing about useless things. The socially awkward seem to be present
in high numbers amongst programmers, which makes these kinds of ordeals a bit
of an agony at times.

~~~
LandR
I don't get this. A code review is a purely technical process. If they say
your code is bad/wrong, etc., then that is a purely technical discussion that
has to be had if you feel they are wrong.

Why are you looking for empathy in code reviews?

Why are you getting feelings involved at all?

~~~
tekkk
This might have been a cultural thing between me and the other party. I don't
mind being very to-the-point and strict about programming, but when it comes
off as condescending, "I know better than you", it's just something I can't
stand. Or, a better example: they make petty remarks on your programming
style while ignoring everything you write about theirs. I think that because
the other person was a bit older than me (not a senior dev, but still old
enough, apparently), he felt he had to prove something to the younger guy.

But yeah, I'm going off the rails here. My point is that relationships are
hard, and even in a seemingly neutral setting like a code review, tensions can
arise. I do not know what to say on those occasions, but getting snarkier and
snarkier in comments is definitely not the way to go. Bad code reviews have
been one of the most infuriating things I've come across in my developer
career.

~~~
distances
As you already said, this sounds like a people issue. Snark, sarcasm or
belittling has no place whatsoever in a code review (or at a work setting in
general, obviously). Sounds like you just got a bad colleague.

------
ken
In other jobs, I accept that I'm occasionally going to be waiting. The goal is
to produce value at the organization level, and that just isn't compatible
with each individual operating at 100% efficiency at all times. It's
egotistical of programmers to think that they should. It's unfair of managers
to expect this of them. Amdahl's Law applies to humans, too.

The issue is procedural. If you hate that code review of an API implementation
might require changing the interface, then you should do a review of the API
design before you implement it. That should be the case with every phase. Any
time you have to perform multiple steps and worry that review of a later step
might cause an earlier step to be invalidated, that simply means you're
reviewing too late, at too coarse a level.

Or not. Adding more reviews keeps the big-O factor down, but increases the
constant factor. As an organization, you can pick if you want individual
contributors to never have to backtrack, or if you want them to be able to use
speculative execution. One way is more predictable, and the other way can be
faster if you're good at guessing right.

------
watwut
I hate code review too, mostly for the power plays it brings. I had a reviewer
force me to produce code I considered bad - but the prolonged discussion
before a deadline would have meant more wasted time, so I didn't argue. I have
seen code review block code over subjective differences in variable naming
(e.g. no convention was broken and both names were fine). I have had a code
reviewer demand a change, then, after I implemented exactly what he demanded,
demand the next change to that same code; when I complained he just shrugged
and said "now I think this is better".

I have seen people misuse code review to force standards they just made up
based on a blog post they read yesterday - only to change their opinion three
weeks later.

I have seen a senior developer let a junior work alone for two weeks, only to
iteratively force the junior to rewrite all of it to the senior's liking. A
ridiculous waste of time, and cruel.

So, with wrong politics and people it is hell.

~~~
watwut
The core problem is that code reviewing is a leadership, supervisory/
gatekeeper function like any other - kind of like an editor - and should be
treated as such. Even if reviewer and coder are formally on the same footing
or switch roles, it is momentarily one. So treat it as such and expect people
to treat it as such. Meanwhile, people in tech talk about it in idealistic
terms that are sometimes true, but often not.

1.) Don't use code review to teach juniors; talk with juniors in advance and
supervise them as they go. If they learn about a major architectural problem
in code review, you are senioring wrong.

2.) Don't try to teach colleagues in code review or use code review to force
new standards on an ad-hoc basis. Talk about issues in standards beforehand;
teach colleagues as you go about your day, without holding pull requests (and
their authors' weekends) hostage. If your only communication about these
issues is code review, you are doing it wrong.

3.) Don't assume the code reviewer is automatically right just by virtue of
the role, and don't let the code reviewer steal responsibility from the coder.
Don't assume the opposite either (though assuming the opposite is better).
Don't make discussion a taboo.

4.) Differences in opinion are normal and do not mean one of the people in the
conflict is an idiot.

5.) Passive aggression does not belong in code review. Don't lie in code
review comments either. "This is objectively horribly wrong" counts as a lie
if you are using it to push a personal preference and are not ready to defend
it. If you say or imply in a code review comment that someone is a lesser
coder, it is OK for that person to challenge the assumption and defend
themselves.

6.) Keep your own emotions out of it as much as possible. If the code is
making you angry, is it objectively bad, or just something you are personally
unused to (new syntax can be annoying even if it is good)? Distinguish the
two.

Code review should be about one thing only: is this good enough to go into the
codebase? Is this required change a blocker or a minor thing (variable
naming)? All the other functions can be helped a bit by review, but absolutely
should not be seen as its purpose.

------
terapixel
Of the choices that he proposes, #2 seems like the obvious solution to me -
but he writes it off as too difficult.

Phabricator’s stacked diffs [0] work wonderfully well to support this exact
workflow (it isn’t clear from the post which versioning system the author uses
and what workflows it supports - maybe I missed it?)

[0] [https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/](https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/)
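
For readers unfamiliar with the linked workflow, stacking can be approximated in plain git. Everything below - repo path, branch names, commit contents - is invented for illustration; Phabricator automates the equivalent re-stacking.

```shell
# Rough sketch of a stacked-diff workflow in plain git.
set -e
rm -rf /tmp/stack-demo
git init -q /tmp/stack-demo && cd /tmp/stack-demo
git config user.email dev@example.com && git config user.name dev
echo base > base.txt && git add . && git commit -qm "base"

git checkout -qb part-1                  # first diff in the stack
echo one > a.txt && git add a.txt && git commit -qm "part 1"

git checkout -qb part-2                  # second diff, built on the first
echo two > b.txt && git add b.txt && git commit -qm "part 2"

git checkout -q part-1                   # reviewer asks for a tweak to part 1
echo fixed >> a.txt && git commit -qam "address review on part 1"

git checkout -q part-2                   # re-stack part 2 on the updated part 1
git rebase -q part-1
```

The point is that "part 2" can sit in review while "part 1" absorbs feedback, and a single rebase re-stacks the dependent work instead of blocking it.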

~~~
jakecraige
Thanks for the link! I went down a small rabbit hole from this which ended up
with a script that allows for similar style workflows even within the GitHub
PR world:

[https://twitter.com/jakecraige/status/1056835104440545280](https://twitter.com/jakecraige/status/1056835104440545280)
[https://gist.github.com/jakecraige/51f9583f1f55cc0ecccfaa11e...](https://gist.github.com/jakecraige/51f9583f1f55cc0ecccfaa11e7a7c6b1)

------
ankit219
I think the author is talking about a 'forced code review', because it is
supposedly good practice to have a code review before you put something into
production (which it is).

The current practice is the coder submitting the entire change for review and
getting comments on every aspect the reviewer feels like. This is not an
efficient process: not only does it take up a lot of time, but the reviewer is
also put in the position of 'How would I solve the problem the current code
has already solved?'

I feel a good code review process should have two components: 1/ Self-review.
A team/company should agree on baselines/conventions that any code they ship
should follow, and ensure that any code not conforming to them will be
rejected. This review should be done by the developer himself, rather than by
a reviewer.

2/ A code review by an experienced dev, who checks that the libraries/custom
functions used won't break something somewhere else, or won't conflict with
some other change they know is going in elsewhere. This way potentially buggy
code can be avoided, and the process can be faster as well. Of course, if they
want to provide feedback on a better algorithm they can, but that should be
optional.

------
jezze
I think the problem is more about workflow than code review per se. I have
previously complained about the exact same things as this article. If you
factor in your CI pipeline as well, it could be hours from writing a patch
until it's merged, and during that time you are in practice blocked from
continuing to build stuff on top of your previous commit. Because of this high
friction you won't bother to rename that variable or fix that whitespace,
because it's not worth the time.

Instead, I thought it would help to make code reviews optional. It is
dangerous, I wouldn't recommend it for everyone, and it only works if you have
a good-quality CI pipeline in place. What you do is allow people to push
directly to master; your CI system continuously tests all the latest commits,
and only if something breaks does it report back, at which point the person
who wrote that commit needs to fix it. I have never tried this in a real
working environment, but I think it would decrease friction a lot and be only
slightly worse in quality. Alternatively, instead of pushing to master you
could have a staging area that automatically pushes to master if it passed the
tests. That might actually be even better.
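
That staging-area variant can be sketched with plain git plus a stand-in test suite. This is only an illustration - paths, branch names, and the "test suite" are all made up - not a real CI setup.

```shell
# Sketch of "push to a staging branch, auto-promote if the tests pass".
set -e
rm -rf /tmp/ci-demo
git init -q /tmp/ci-demo && cd /tmp/ci-demo
git config user.email ci@example.com && git config user.name ci
echo 'print("ok")' > app.py && git add . && git commit -qm "initial"
MAIN=$(git rev-parse --abbrev-ref HEAD)   # whatever git named the default branch

git checkout -qb staging                  # developers push here freely
echo 'print("feature")' >> app.py && git commit -qam "add feature"

if python3 app.py > /dev/null; then       # stand-in for the CI test suite
  git checkout -q "$MAIN"
  git merge -q staging                    # tests passed: auto-promote
fi
```

In a real setup the promotion step would be performed by the CI system itself (e.g. GitLab's "merge when pipeline succeeds" or a Bors-style merge queue), not by a local script.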

------
jatins
My only advice to reviewers is: If you say "do X", also explain the "Why?"
behind it.

Other than that, these points are also helpful:

\- Zero code style or formatting comments should be allowed on reviews. If it
passes the linter, it's good.

\- Renaming comments for classes/variables should be kept to a minimum. Unless
someone is using an obviously bad name like `a`, it's probably a fine name.

------
austincheney
When the right processes are in place, I am a huge believer that most of the
concerns raised in code reviews can be fully automated. I usually find people
suck up code review time with code style or concerns about conformance to team
standards. This is really basic stuff that sucks the life out of people and
isn't productive.

For me the residual benefit of code reviews is to invite people to examine how
the code works so that it can be improved or simplified. By improve I mean
objective concerns like execution performance, reduction of instruction size,
security concerns, and so forth. I expect a very critical analysis. Don't be
afraid to hurt my feelings because this code is about to go to production.

Unfortunately, oftentimes the proper automation controls are either not in
place or the developers are fragile, timid souls, particularly in group
settings. Code reviews can be wonderfully constructive, but many times aren't.

------
dredmorbius
What this calls for is fixing / streamlining the process, and awareness of the
context-shift costs of path-dependent processing.

The "have multiple projects", or alternatively "scutwork", model likely makes
the most sense: a list of noncritical-but-evergreen tasks _that don't require
heavy cognitive load_ is useful. Shoring up docs, drilling on procedures,
interviewing, studying new skills or tools, etc. (Retail has a similar concept
for sales associates during slack time.)

Another is to tighten or parallelise, _and prioritise and reward_, the review
process. _Both_ familiarity _and_ unfamiliarity with the code base can be
useful: quick assimilation on the one hand, and fresh eyes / expanded
institutional awareness and knowledge (plus an increased Bus Number) on the
other.

------
stefs
i agree with the author in that i personally don't like code reviews much
either, though i agree they are valuable.

to be more specific though, i don't like "remote" or "detached" code reviews
much; "over the shoulder" code reviews are a completely different thing. as
you can actively communicate with the author or the reviewer, it tends to
reduce the bike shedding a lot and the value of positive feedback increases.

i guess, ultimately, it depends on the team; if you're operating well
together, classic code reviews might work as well. if the team is new or the
members have varying styles or levels of experience, over-the-shoulder reviews
tend to pay off significantly.

------
miqkt
I view code reviews as a good opportunity for learning but I've one particular
grievance with them. Specifically, when I'm asked to code review a pull
request that's alarmingly huge (i.e. touches more than a dozen files, +1000
LOC changes, etc).

I've never come up with a bullet-proof way to deal with this. The situation
usually ends up with me providing a rubber stamp of approval and making some
caveat commentary about "uncertainties", due to my inability to devote that
much time to being rigorous. I feel bad every time I've got to do this.

The problem might be exacerbated at another level too. For features, stories
might not be broken down enough.

~~~
distances
At one project there was a rule for large PRs: if the change exceeds a
thousand lines, you have to bring review cookies. Keeps you mindful of the PR
size and the rest of the team won't grumble that much.

------
golergka
How about a novel concept: there will always be some parts of your job that
you don't enjoy. Tedious things, boring things, annoying things. All the
different checklists, bureaucracy, reports, task tracking: there is a lot of
annoying little stuff that we sometimes have to do.

Some of those things can be optimized or automated away. But some of them
can't: it turns out that although they're completely irrelevant 98% of the
time, they're completely critical in the remaining 2%, and nobody knows which
2% that is, so we're stuck with these practices.

------
rumcajz
In the end this boils down to the problem of interruptions.

People often discuss the topic with smalltalk and such in mind:
[https://heeris.id.au/trinkets/ProgrammerInterrupted.png](https://heeris.id.au/trinkets/ProgrammerInterrupted.png)

The code reviews go more to the core of the problem. Interruptions are often
necessary but, for the person being interrupted, they always suck.

How much should we try to learn to multitask? Where's the boundary where it
stops being useful and starts being toxic?

------
oldboyFX
Folks, keep in mind that the value of code reviews, tests, etc. varies wildly
from project to project.

\- What kind of product are you building?

\- How large is the project?

\- How many developers are working on it?

\- How experienced are the developers?

\- How is the project being managed?

\- Are there any hard deadlines?

\- What is the budget? Burn-rate? Runway?

\- Is the product established or are you just building the MVP and trying to
validate market demand?

A two-person development team building an MVP with ever-changing requirements
won't have the same workflow as an established company with a large
development team and a bunch of existing users.

------
carlsborg
The core message is not "code reviews are a bad thing" but "being blocked on
code review is expensive", and I agree that aspect can be an issue. It can be
a badly timed context switch for the reviewer, and we know context switches
are expensive in terms of productivity, so in some cases it makes good sense
to defer the review to a later point in time. But that is valuable time lost
for the author.

So whats a better approach?

~~~
Telichkin
Have you tried pair programming? I've found this practice much more effective
than code review. Pair programming encourages learning inside a team,
interaction, and code quality. It also reduces time to market.

------
weeksie
Pull request reviews are for merges into production (or staging, depending on
your QA process), but you should always have a develop branch you can thrash
on in case you are pushing a feature that is being worked on by another team
(e.g. front/back end).

Fix the workflow. Reviews are the best quality control measure out there,
perhaps even better than unit tests.

------
jasim
I echo most of these sentiments, but it might be possible to solve it once and
for all with an approach similar to Jane Street's Iron.
[https://blog.janestreet.com/ironing-out-your-development-sty...](https://blog.janestreet.com/ironing-out-your-development-style/)

------
aalleavitch
On the one hand I get where you’re coming from, on the other hand I definitely
broke our submit tracking by randomly inserting a space into a string in a
commit I made months ago and my coworker just noticed it last week. Sometimes
it’s nice just to have another pair of eyes glance over your diff to pick out
obvious mistakes.

------
let_var
Code reviews have been both a useful medium for sharing new learnings among
team members - the kind that actually make the code elegant or perform better
- and an endless thread of comments about conventions and nitpicking.

I think it makes sense to set a premise or template for a good peer review
exercise. New team members should be educated about these guidelines.

------
jlundborg
In my team, we have started using these rules to address the code review lag-
time:

* While you are waiting for code review, review someone else's code

* Aim for reviewing two CRs per CR you submit.

This has helped us both keep on top of the CR queue and to have something that
is not too distracting to do while waiting for review.

------
DrBazza
"There's no such thing as a bad book" \- you can and should treat it as a
learning experience and take away one positive thing. "I wouldn't do that",
"If I see that again, I'd do it this way", etc.

------
ckastner
QA incurs a cost. It's perfectly natural to dislike this cost, but that's also
largely irrelevant: the question (in each specific case) is whether the
benefits are worth the cost, or not.

------
mavelikara
(2010)

------
adwmayer
Parts of this are in other comments, but I find a couple of things help make
code reviews enormously more efficient.

1) Automatic code formatters/linters before the code gets to review. When
everybody knows there's a formatter, reviewers can spend less time looking for
formatting issues. Generally, people are more inclined to take a quick look at
the code then too, because they know they're focusing on "just" the logic,
which helps with getting a review sooner.

2) Smaller sets of changes up for review. Yes, this might mean more code
reviews overall and more process, but it's way easier to catch a bug in a <
100 line change than in a 1000+ line change. This also means keeping the
contents of the review as focused as possible. If you want to refactor some
major chunk of code because it makes the actual thing you're trying to do
easier, go for it, but do it in a first code review, separate from the logical
changes. This makes reviews both more straightforward for the reviewer to get
through and more appealing to pick up in the first place. I'm way more
likely to avoid a massive 30 changed files code review than a quick 5 line
change, because I need way less focus to review the latter.

3) (this one might be controversial) Early code reviews where appropriate. If
you're making a change that's more likely to be totally rejected, get it into
code review as soon as the basic structure and logic is in place and ask
somebody to review it as quickly as possible. This is before tests are ready
and every todo is completed or whatever else. The review should be quick and
dirty, but at least that way you know whether you should proceed with your
current path at all or change course. Again, this means more time spent
reviewing, but it's a lot better than getting a review after a week saying
that what you just wrote has already been done, or totally misses the goal, or it
has some critical issue, or whatever. This helps a lot with what to do while
you're waiting for feedback too. If you've got this type of code review up
then you can keep working on the details while you're waiting on feedback. If
your code review doesn't fall into this category, you're probably relatively
safe to start working on the next thing, because at most you'll need to make
some small changes to what you're working on, not throw it out altogether.

4) Making sure code review is considered a priority on the team. That includes
the team lead prioritizing code reviews and encouraging others to prioritize
code reviews. If the code doesn't get reviewed, it doesn't get shipped, so
it's ultimately more important than writing new code.

------
quickthrower2
The real problem with code reviews is when the reviewer isn't good. If they
pick on trivia, make you redesign unnecessarily, are unpleasant, or are too
busy, it's a problem.

------
flurdy
"Pair with people you like, code review people you don't like" Me, some years
ago.

Not black and white, but less code review is needed if a task was paired on,
if at all.

------
lowercased
Rant...

Was nominal lead on a smallish project a couple months back, with 2 other
folks. Worked with one before, one I hadn't worked with before.

We're all remote, and the 'new' guy started doing 'code review' on code before
he had a running environment. We'd taken over another codebase, and he was
pretty up-front about wanting to adopt/enforce certain stylistic standards,
which... to me, I don't particularly care about as much (on the PHP side of
things, PSR-2, in this case).

We're all remote, so we had a call - a couple actually - and I indicated I
didn't care all that much, but he was free to make changes as he saw fit, as
long as 1) he got a working environment up and running, 2) he was also
spending time writing some tests around the code as he explored it, to have a
firmer understanding of what was going on, and 3) he didn't break any existing
tests (or new ones that were developed).

I swear I don't think he ever actually had a working copy of the project, but
he was still committing code. Not a lot, but... it broke tests. It broke stuff
that was not 'tested' in an automated way yet but was testable by running the
app, and I think he'd just not bothered to test it at all.

But... hey - look at the formatting! Look - spaces, not tabs! - how useful
when someone else is having to go back and debug your crap. I received a
couple small lectures on variable naming and the importance of descriptive
variable names.

    function getListOfUsersForCompany() {
        $res = db::query('whatever query');
        return $res;
    }

Apparently, that $res is confusing and 'bad practice' because it's "hard to
reason about" what's going on there. Literally two-line methods with an
internal placeholder variable (which also had some working functional tests
around them) - those were being criticized by someone who had committed
test-breaking code multiple times.

In all this, I kept having to figure out how much of my reaction was because
_my_ code was being criticized, vs. what the criticisms were. I got accused of
taking things 'personally', but I kept coming back to 'working tests broke
with these changes', and in my mind, the idea that this code was somehow
'better' because it more closely followed a stylistic 'standard' was
bothersome. Working tests should be a standard we strive for too, right?

I _have_ worked with some folks who had a real eye for analysis - remote or
colo pairing on problems has helped reason out issues that I know I've missed.
And when I'm pairing or doing a requested review, cosmetic style tends to be
the last thing I'll bring up, because it generally has the least impact on
something. Much prefer to have someone spend time writing tests and/or docs
and/or insightful comments vs reformatting. When I've refactored with someone
to make something more testable, better naming/comments/docs tends to flow out
of that refactoring anyway.

While I understand formatting is often not that big of burden, especially with
modern IDEs, it's also not terribly high value on many projects. I'm primarily
speaking on projects I've come in to where there's already extant code - in
this case, we had taken on a very weird hybrid of 2 PHP frameworks and 2 JS
frameworks (some screens used raw vue, some compiled vue, and some compiled
angular).

~~~
mnd999
Doesn’t sound to me like you are the problem. Sounds like your colleague has
lost sight of the end goal and needs to refocus on what you’re supposed to be
delivering.

(Unless the whole project was to reformat the code?)

~~~
lowercased
I later learned through someone else who'd worked with the 'new guy' that
they'd had similar problems, and that apparently he only really works well on
his own. I'd been 'given' him on this project because the first person wasn't
really sure if _he_ (the first person) was at fault, or if the 'new guy' was.
I was the test case that confirmed it.

By no means was the purpose to just 'reformat' (far from it!) :)

I _have_ wrestled with this a couple times before over the last ... 15+ years.
When working with other folks, I'm normally not in 'full time w2' arrangements
- it's usually shorter term contracting, so the dynamics are a bit different.
That may also be why the style/formatting is usually less important to me. As
with the project above, I'd seen many well-formatted projects with descriptive
variable names that don't function. And... seen horrific spaghetti code that
"just works" but people are afraid to touch it.

Finding that middle ground is a never-ending journey, but I feel I've got a
decent sense of how to get pragmatic results, and it usually involves
repeatable sample data, repeatable processes and tests and/or docs. Building
those as a foundation will make stylistic changes a breeze, because the
confidence to make "trivial" changes will be there. Coming in to new codebases
with no tests, sample data or processes should be considered scary/risky, and
when people don't seem to have an understanding of the risks involved, I get
even more scared.

It took us 9 weeks on this project to get to a point where we made a first
push out to production (and even then we ran into a few small bumps). But
we'd practiced with tests (only have dozens, not hundreds at this point) and
pushing to staging, and as the client reviewed... we kept finding more and
more bugs - things that had existed in the code for years that we were
seemingly the first people to exercise in front of the client. I'm sure their
confidence in us was bruised a bit - we apparently just kept showing bugs(!) -
but it feels like we provided more of a base in 9 weeks than we inherited from
the previous 4 years of teams (automated builds, sample data, automated unit
tests, automated browser tests, etc).

Just did this a couple days ago and it's still very fresh/raw in my mind :)

------
weissguy68
Nelson is brilliant!

------
sid-
Coding is like sex: nobody wants to admit publicly that they suck at it.

~~~
malmsteen
Haha

------
geoffmunn
Code reviews, aka opportunities for dickheads to demonstrate how clever they
are.

"Did you know you can replace that entire function with a regex?"

~~~
tristanperry
I think this comes down to team culture and morale.

This certainly does happen in a team with a bad culture/morale, but IMO it's
just a symptom of the wider issues.

But yep, code reviews do make situations with bad morale worse IMO.

