Effective Code Reviews – A Primer (deepsource.io)
78 points by pplonski86 12 days ago | 58 comments





Whoever OKs a code review can be called upon at any moment to explain the logic of the change in front of the team. If you OKed it, you must understand it, right?

If no one is willing to accept a PR with that provision, the author needs to make their code clearer.

In my experience, people doing a code review don't want to bruise their egos by saying "I don't understand this," so they superficially check the code and OK it.


There are also places where the incentives just aren't there for in-depth code review. You could work in a culture where you are literally hurting a junior's chances of promotion by being tough on their code and asking for changes, because their peer on another team is getting twice as many tickets "done" since that team just "rubber stamps" LGTM on everything.

Many teams would never actually put in the effort to trace issues back to individual failures in a code review process, when they could be spending the time just fixing the problem and moving forward.


I would question whether that approach is appropriate. It will lead to fewer buggy releases, but it will also lead to lower velocity. Lower velocity can mean that, in the long term, the system ends up in a worse place than if it had gone through more iterations of work. As with many things, it depends on the company and the industry it is in.

Sorry for the double post; I had another thought here.

The value trade-off you're suggesting is that the time and effort required to set up this process structure, and shift everyone's work habits over to this, is worth it in... maintenance?... at some point? Reduced risk of some sort?

This seems like a complicated position to argue. Can you be more specific about where and how you see this type of exacting process adding business value?


Definitely guilty of this :(

What is completely missing, and IMO more important, is how to be the reviewer.

Most reviewers have a tendency to take the Socratic approach: asking questions to lead to an answer. In my opinion, and consistent with recommendations from books on general communication: don't ask questions without expressing your concern.

Example:

Q: Why did you do X here?

Completely valid and sincere response:

A: It solved the problem.

The coder at this stage has no idea why the reviewer is asking, and there could be many possible answers depending on the reviewer's concern; but since the reviewer did not state that concern, we are reduced to the simplest, noncontroversial answer.

Instead, the question should be:

Q: I'm concerned that doing X here has the following drawbacks: A, B and C. Was there a particular reason you chose to do X here?

Or even better:

Q: I'm concerned that doing X here has the following drawbacks: A, B and C. If you do Y instead, you don't have these drawbacks. Was there a particular reason you chose to do X here?

A: Yes, because although X has drawbacks A, B and C, Y has drawbacks D and E.

(And the conversation can continue).


I recommend not using words like "you" and "your". It can make people defensive. Try to talk only about the code and the changes, not about the person who wrote them.

Q. Why is X done here?

Q. What is the reason for doing X here?


I also love the wording "Can you help me understand the reasons behind X?" because it shows that you're a) just trying to understand/learn and b) you respect the other person, because you're asking for their help.

Your phrasing in this scenario is just as effective and matches my intent, and it makes sense in the wider context of a code review, where it is the reviewer's goal to query these things.

Going a bit off topic, and to a wider context: there is a trend in general to avoid "I" and "you" in the workplace to produce a veneer of objectivity. The communications resources I've read mostly disagree with that stance. Issues that can be handled truly objectively are rarer than most people think, and an attempt to insist on a facade of objectivity often comes across as manipulative. Most people will suspect that what you are asking for is something you want, but that instead of coming out and saying it, you're trying to hide the fact.

Example of something that is clearly objective: "When program A is running and the following sequence of keys is typed, A crashes."

Example of something at the other extreme: "Using negative logic in code confuses some people". And the conversation often devolves to "I don't think so." "People who can't handle negative logic shouldn't be programmers." "Who exactly in the team is getting confused?" "There will be others who find the positive logic version of this piece of code harder to read."

I see this pattern all the time. And in 80+% of the cases, the person complaining about negative logic is the one who does not like it, but is working really hard to avoid that fact coming out. Instead, he is trying to appeal to some wider principle that, not surprisingly, is not uniformly agreed upon.

(BTW, if there is uniform agreement, or it is enshrined in coding standards, then the approach is fine).

The above is an obvious extreme but the really annoying cases are the more subtle ones.

What the books point out, and what, after reading and observing, I agree with: for things that are not clear-cut rules (e.g. in the coding standards), you'll be more successful personalizing it, in large part because you are being completely sincere. And given that most things are not that clear cut, this is the better approach. The books also offer guidelines/recipes for how to personalize it while avoiding defensiveness, which is the element you wanted to avoid.

(An aside: I'm speaking in situations where neither person has authority over the other - one is not a manager of the other, etc).

Another example: If you've ever heard someone respond with "Yes, but why do you care?" Or "What's it to you?", it is very likely this dynamic is at play. While I don't advocate the phrasing, I am now much more sympathetic to it. When someone says it to me, instead of getting annoyed, I now examine how I phrased what I said and usually find the flaws in my phrasing. Occasionally, I ask people the same question, although in a much more polite manner.

Compared to other hard disciplines, software is one of the more subjective ones. There's a lot of "religious" behavior amongst SW developers (think evangelizing of functional programming, OO, TDD, etc.). If I look at a typical set of coding guidelines, probably over 50% is completely subjective; that there is significant consensus on some of these rules does not change the subjectivity.


Author here. Great point! Would love to take a shot at writing a follow-up focused on this.

I've found that reviewing the code together face to face (or over a video call) with the author is very effective. It allows you to discuss why they made certain decisions, why you recommend different choices, and the relative benefits of each option.

It's also the best forum to remind them that my code isn't perfect either, and that this is a process to help make us both better developers.


Online code review puts pressure on the reviewer to find the problems quickly so as not to take the other engineer's time. Usually I want to take my sweet time, look at the surrounding code, figure out things for myself. Sometimes I just want to stare at the screen for five minutes to think about it. That's really uncomfortable when there is another person on the other end of the line.

If something is so unclear that I cannot figure it out on my own, the code / commit message / cover letter likely need to be improved.

Nothing like git-blaming your way to a two-year-old commit and finding a commit message that actually explains the whys and trade-offs.

I agree that some things do need discussion, but code review tools typically have some sort of chat interface for that purpose. I do open source work and most of the time just do code review over e-mail, which works well for this purpose. [Edit: and of course calls work here as well! In any case, the result of the discussion should make its way into the code / commit messages / cover letter.]


> Online code review puts pressure on the reviewer to find the problems quickly so as not to take the other engineer's time. Usually I want to take my sweet time, look at the surrounding code, figure out things for myself. Sometimes I just want to stare at the screen for five minutes to think about it. That's really uncomfortable when there is another person on the other end of the line.

Yes, absolutely this.

I think it's essential that the reviewer takes their time to inspect the code without distractions and without being rushed.

If you want to have a face-to-face code review, in my experience it works better to have that meeting after finishing the code review, so you can go over your review and discuss each point in greater detail.


These are both good points that I neglected. In fact that's actually what I do: I never do a face-to-face review unprepared; I always have a list of things I've found that I want to review.

Another thing that I maybe didn't make clear enough is that it's important to be empathetic, and to treat them as equal team members, even in a senior-to-junior review. Give them an opportunity to weigh in on why they made certain choices and discuss all the pros/cons. Sometimes I even change my mind and agree with their choices once it's clearer what their objectives were.


I've had my code reviewed face-to-face when I was just beginning my career, and I think that it has helped me learn much more than I would have otherwise. At least for someone who has never written production-ready code (first internship / first job etc), it is hard to understand why a particular piece of code is "bad design" even though it seems correct logically.

As an alternative, I encourage developers on my team to liberally sprinkle comments on their own PRs which explain their logical path. Not only does it help me understand where they were going with something, it also serves as a bit of 'rubber duck' debugging -- more than once as I've explained a bit of my own code I've seen obvious ways it could be more elegant, or I've spotted potential bug situations.

The only problem I see is the synchronous nature: the reviewer and author have to be active at the same time. But yes, it helps.

Exceedingly superficial, unfortunately, as it seems all the content marketing on this blog is.

No real insight is offered here, just a list of "givens":

  1. Keep the changes small and focused
  2. Ensure logical coherence of changes
  3. Have automated tests, and track coverage
  4. Self-review changes before submitting for peer review
  5. Automate what can be automated
  6. Be positive, polite and respectful
Who could argue with these?

I've always believed there are 2 levels of code review, defined by their deltas:

1. Review against standards, defined by the delta between the code and shop standards, which should be well defined for everyone ahead of time. Things like formatting, white space, variable naming, conditionals, iterations, standard modules, etc. Every shop has (or should have) a standard way of doing a lot of these things. Much of this level of code review can be automated.
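
As an illustration, a minimal sketch of automating that first level, assuming a Python shop with flake8 installed (the script and the paths are placeholders, not a prescription):

  #!/usr/bin/env python3
  """Level-1 review gate: fail fast on shop-standard violations."""
  import subprocess
  import sys

  def run_style_checks(paths):
      # flake8 covers formatting, whitespace, naming and similar rules;
      # its exit code is non-zero when any violation is found.
      result = subprocess.run(["flake8", *paths], capture_output=True, text=True)
      if result.returncode != 0:
          print("Style violations found -- fix before requesting human review:")
          print(result.stdout)
      return result.returncode

  if __name__ == "__main__":
      sys.exit(run_style_checks(sys.argv[1:] or ["."]))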

2. Review against expectations, defined by the delta between the code and the technical specs. Of course this requires technical specs, which many shops don't do. If you write technical specs, and peer review the specs, agreeing that this is what is expected, then this level of code review becomes much more straightforward. This includes things like business rules, logic, database updates, UX, interfaces and APIs with other apps, etc. This is much harder to automate, but the spec offers a checklist to streamline the review and keep it objective. It also makes User Acceptance Testing easier down the line, because code review has already eliminated a large subset of potential UAT rejections.

I'd love to hear how others handle these 2 levels of code review. Now that would be much more valuable than this list of feel good platitudes offered by the OP.


Hi, author of the post here.

Thank you for your feedback. The framework that you put down, of code reviews having these 2 levels, is valid. The goal of this blog post, however, was to touch on the basic hygiene which makes the process more effective -- across both of these levels of review.

I'd like to take a shot at writing a follow-up post to this, going deeper into the subjective aspects of peer code review. Thanks for reading!


> Who could argue with these?

Maybe not argue, but I still find this link useful. I will send it to the more junior engineers on my team. Believe it or not, not everybody does a self-review before asking for peer review.


>> Believe it or not, not everybody does self-review

That's not hard to believe given the stream of shit that comes out of a feed, any feed.

But I'm appalled by y'all's assumption that #3 is a given. Why do you all believe that?

To me, including a test suite in your code base is a code smell.


Well, it depends; having zero testing sucks too. I would argue 100% TDD can be a code smell, especially if you try to unit test everything. Use testing where it makes sense. Got some complex business logic in some function with several different edge cases? Cover them in test cases; they will also serve as documentation.
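
To sketch what that looks like (the business rule, function and numbers here are made up, and I'm assuming pytest):

  import pytest

  def shipping_cost(order_total, is_member):
      """Hypothetical rule: free shipping at $50, or at $25 for members."""
      threshold = 25 if is_member else 50
      return 0 if order_total >= threshold else 5

  # Each edge case doubles as documentation of the rule.
  @pytest.mark.parametrize("total, member, expected", [
      (50.00, False, 0),  # exactly at the non-member threshold
      (49.99, False, 5),  # just under it
      (25.00, True, 0),   # members get the lower threshold
      (0.00, True, 5),    # a zero-value order still pays shipping
  ])
  def test_shipping_cost_edge_cases(total, member, expected):
      assert shipping_cost(total, member) == expected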

If you have a problem with #3 you might as well have a problem with #5. You don't want to automate everything, only where it makes sense.


>> Use testing where it makes sense.

I agree!

>> Got some complex business logic

So complex the only one who understands it is...who? A test?


I'm not sure what you're arguing here. If you have a complicated codebase, you write tests so that when you add other code, you have a bit more confidence that you're not breaking things. Furthermore, if a new person joins the team, they can use the old tests to understand what the hell is happening in the code; otherwise you just have an unadulterated mess of code to read through.

You're dealing with humans on a day-to-day basis who err very easily. The dumb mistakes I've made because of a lack of tests are countless.


Why do you think having tests is a code smell? Sure, you can overdo it, but testing is useful and it's wasteful to have humans do it where it isn't necessary.

>> Why do you think having tests is a code smell?

Because it signals to me that the only one who understands this piece of business logic that we have applied to our code base is a test, which is a piece of code.

Why should I trust this piece of code but not this other piece? What pieces of code should I trust?

Me, I don't trust tests.


> Because it signals to me that the only one who understands this piece of business logic that we have applied to our code base is a test, which is a piece of code.

That...makes no sense. Tests do not understand rules, nor do they (nor are they intended to) fully encapsulate them. They are intended to capture specific instances of applying the rules, instances that have been verified with the person(s) defining expectations: the customer for acceptance tests or for unit tests of key business functions, but possibly the team itself when the test is a unit test of an implementation detail.

The test isn't the only one that understands the rule; it's the output of a process that validates that the test writers (who are humans on the team) understand the rule, and it also provides concrete examples that aid anyone who comes later in understanding the rule.

To me, not having defined test scenarios (whether automated, and thus part of the code base, or not) is a signal that the team probably doesn't fully understand the intended function of the code -- and almost definitely doesn't if there has been turnover and the implementors of the functionality are no longer on the team.


Do you trust any documentation at all? A test's just documentation a machine can read and, to some extent, validate. Other docs are necessarily even less trustworthy, if they can't be automatically checked for accuracy.

>>Do you trust any

The only test that matters to me is the one where a customer says "Yes, good!"


Sure, a test suite doesn't replace that. It helps ensure that test returns "Yes, good!", instead of "Why did you break this thing that worked last week!", assuming you know anything about what the customers consider good.

The parent sounds correct. One can have an extensive test suite and still turn out a product that is useless to the customer. These are 2 entirely separate concerns: testing is a sanity check that the code does what it was written to do; user validation is a sanity check that the code is actually useful to a customer (which most certainly cannot be automated or programmed).

> Because it signals to me that the only one who understands this piece of business logic that we have applied to our code base is a test, which is a piece of code.

That's a weird leap of logic. No, it does not mean that. It means that a human who understood that piece of logic wrote a test, so that the test can warn about issues in the future, before a human spends time checking whether it still works like it should.


I disagree with the notion that this leap is weird.

>> No, it does not mean that. It means that a human who understood that piece of logic wrote a test

I don't trust a human to understand (1) a piece of logic, nor (2) a test written by someone who does not understand said piece of logic.


So how are you implementing code when you don't understand the logic of what it does? How do you make sure it does what it is supposed to do?

> To me, including a test suite in your code base is a code smell.

You'd put it somewhere else, then?


My point is, you shouldn't need one.

- "But my code base is so big and complex!"

Make it smaller.


Tests aren't written simply to assert that complex things behave as they should.

Tests are also used to enforce certain edge cases (e.g. return nil when the input collection is empty, instead of raising an exception), and in this regard they allow you to assert things your language can't express (because its type system is not as powerful as, say, Haskell's).
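
For example (a hypothetical helper, with Python's None standing in for nil), a test can pin down a contract the type system can't state:

  def newest(items):
      """Hypothetical helper: return the largest item, or None when empty."""
      return max(items, default=None)

  # "Returns None on empty input, never raises" is a contract most type
  # systems can't express, but a test enforces it durably.
  def test_newest_returns_none_for_empty_collection():
      assert newest([]) is None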

Tests also allow you to avoid stupid mistakes, like using a string instead of a symbol.

And anyway, tests are just a way to durably persist the testing that a programmer always does, be it in a REPL or with the few throwaway lines at the end of a file used while writing the code.


Some problems are irreducibly complex. There is no way to make the code smaller or simpler without losing essential functionality or performance.

This is so true. If the problem domain is so small that you can get away with no tests (only small programs or small teams can indeed get away with it, IMHO), then it cannot be a system of interest. For large systems (i.e. systems so large that no one human can fully grasp them), testing is required. Testing does not prove much about the system; it only encodes the programmers' assertions about how it is meant to work. Testing is merely a harness that constrains the flight domain of the code: it does not guarantee that the code within the flight domain is correct, only that there is a certain level of certainty about it.

Above some system size, the certainty of correctness is merely probabilistic, and automated testing is what gives you assurance. Programming without testing is like driving without a seatbelt: sure, it is feasible, but it's insane to do it.


Yes, I agree with your #1 completely.

First thing before you do any sort of code review: you NEED a well-defined coding standard. This provides rules of engagement. If you have no coding standard, then what happens is that one person says one thing, another person says something else, and the submitter is caught between two people giving contradictory comments based on opinion, not fact.

Having a single set of rules of engagement levels the playing field completely and makes it easier for everyone.


People love to write write write code and don't give a shit about size or how it impacts reviewers.

These are not givens. They're things that have to be drilled into newer devs.

In my experience code reviews need to be a conversation. I'm on a small scrum team of 3 developers and we all review each other's code. The major thing I've learned is that you need to depersonalize the code. Don't ask "why did you do this", or say "you did it this way two weeks ago and now it's failing in production". It's all about the we. I say "we" so much in my code reviews you'd think I was French. Also, I really try to make my tone that of someone who wants to learn vs. someone who is criticizing.

Honest question: A couple of years ago I worked for a startup. They used Agile, which mandates code reviews. "Brilliant", I thought, "that can only be good." I'd never really had code reviews before even though I've got decades in industry. I like the idea.

However there was no written (or often, even oral) statement of what it was supposed to do so all I could do was spot syntactic cruft and offer cleaner code, and... that was about it.

To me, without a clear statement of the code's intent it must be impossible to review that code meaningfully, yet I've never come across this most basic requirement in Agile: that the code must have a spec of some sort to review against.

What am I missing, or was their Agile practice simply wrong, or what? Genuine question.


Was this someone on your direct team? Were you not part of the spec-review/sprint-planning/etc. meeting(s) where the project goals for the code you're now reviewing were discussed? If someone just dumped a PR in your lap with no context and said "do a code review," then this sounds like just a really terrible process/implementation.

The idea, imho, is that you are supposed to be intimately familiar with all the projects on your team, while only coding on your own. If you each follow each other's tech docs/specs, and actually put real, professional effort into good technical writing and into understanding your team members' communication styles, you can all have a shared understanding of the problem and of the solution architecture/idea. Then, after the code is written, code review becomes a process of aligning on what the "right" solution is for the team, given the hyper-specific context of the one PR/feature/project/bug fix/whatever. Did it turn out like everyone expected? Why or why not? Does this align with the maintainability goals of our team? What will ongoing support for this look like? Does it move the needle for (any) clients? Does it unblock other projects/tickets? etc.

I don't really agree with the type of code review where you nitpick about adherence to arbitrary "style guides," read the code line by line to "try and find bugs," or ask for changes because you think something could be "cleaner" or whatever. The only code reviews that have real impact, in my mind, are the ones focused on how the code maps to real business goals.


@Benjammer: what you say sounds so unlike what happened that I can only take it that our process was a mess. Your final line about real business goals summed up a lot of what should have been, but wasn't, considered there.

@nradov: Nope, no 'user stories', whatever they are.

There were a lot of other bad decisions at the company. Some really stupid stuff. I don't suppose they'll last much longer. Thanks to you both. I'll spend this eve reading up on Agile.


"Agile" doesn't mandate code reviews. However many agile teams find that code reviews are helpful to improve quality and increase velocity. The user story acceptance criteria (confirmation) provides the clear statement of the code's intent. Every code review should be tagged with a specific user story number. (If the user story isn't clear then that's a separate problem.)

For a team that has low code quality, here's how I like to start my code review process. Not everyone will like it, but I find it effective at growing people's skills: https://www.5whys.com/articles/step-4-start-doing-code-revie...

If you're working in a language like Java or C++ which has good static analysis tools available then the developer should be required to resolve all static analysis issues first before submitting code for review. That way the human reviewers don't have to waste time manually marking simple issues.

Any idea how to do code review when you are a solo founder?

When I was the only guy on my team writing code, I found it helpful to put a bit of process in place: having a threshold for test coverage, and running some kind of analysis (flake8, for example) on the codebase. Another thing I did was let one of my friends look at a random piece of code in my codebase. He would suggest changes, but what helped more was that I wrote code more consciously than I would have if I did it all alone.
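
For the coverage threshold, a minimal sketch of what I mean, assuming pytest with the pytest-cov plugin ("myproject" and the 80% figure are just placeholders):

  # Fails the run when total coverage drops below the threshold:
  #
  #   pytest --cov=myproject --cov-fail-under=80
  #
  # or wrapped in a script so a build or pre-push hook can call it:
  import subprocess
  import sys

  sys.exit(subprocess.run(
      ["pytest", "--cov=myproject", "--cov-fail-under=80"]
  ).returncode)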

You could switch to working on some other component and then come back after a few days - I find that length of time often makes me fairly critical of my own code!

Unrelated to the original topic, but whenever I can I try to work on 3-4 distinct work items in parallel in a sprint, typically doing a context switch once a day. I've found that it really helps in letting ideas and solutions brew and mature over time, versus trying to 'close' a single item quickly. The context switch can be quite useful if you have hit a roadblock, or even just to break the monotony.

Yes, that's the approach that I'm taking right now. I'm a little afraid that with this method I'm not improving my programming skills.

Go through the motions of reviewing your own code separately from writing it. IOW, use a code review tool such as GitHub or Gerrit and create a PR or change request. Then review your code the way you might proofread a paper. I find that just the change of mindset, and looking at the code in a different context (the diffs in a browser window instead of the code in your editor), helps me catch mistakes.

I usually do this for changes before I have someone else review them anyway. Of course ideally someone else could review the change but if you don’t have someone you don’t have someone.

The main limitation here is that while this should help you with bugs, it’s not good for catching things which aren’t idiomatic or having someone point out a better way to do something.

But it’s better than nothing.


You could try posting the snippets you need peer-reviewed on Code Review Stack Exchange: https://codereview.stackexchange.com

I recently came across https://www.pullrequest.com and have heard good things about them.


Add a good static analysis tool such as SonarQube to your continuous integration pipeline. It isn't really a substitute for human code reviews but it can quickly catch a lot of common problems.


