How to do a code review (google.github.io)
1471 points by iamaelephant 14 days ago | 367 comments



Here's the part that resonated with me most:

  A particular type of complexity is over-engineering, where developers have made the code more generic than 
  it needs to be, or added functionality that isn’t presently needed by the system. Reviewers should be 
  especially vigilant about over-engineering. Encourage developers to solve the problem they know needs to be 
  solved now, not the problem that the developer speculates might need to be solved in the future. The future 
  problem should be solved once it arrives and you can see its actual shape and requirements in the physical 
  universe.
Note how Google does NOT say "make sure the code is properly architected". Instead they say "make sure the code is not over-engineered"! At top companies like Google, projects rarely fail because there isn't enough architecture. Instead, projects end up costing 10x to 30x more because of unnecessary complexity. Over-engineering is a trap that very good developers fall into all too often. I am glad to see Google has cautioned against this, because I can now point my fellow developers to this when they are about to fall into the same trap!


The quote, for people on mobile:

> A particular type of complexity is over-engineering, where developers have made the code more generic than it needs to be, or added functionality that isn’t presently needed by the system. Reviewers should be especially vigilant about over-engineering. Encourage developers to solve the problem they know needs to be solved now, not the problem that the developer speculates might need to be solved in the future. The future problem should be solved once it arrives and you can see its actual shape and requirements in the physical universe.


Thanks!

I agree so much. Zero-cost abstraction is a misnomer.


isn't "zero-cost abstraction" shorthand for "zero-performance-cost abstraction"? i.e. computer, not human, performance?

It is! Which is why I don’t like the term. It often masks the fact that it has a cost in complexity.

i've never read it that way! could you give an example of that?

Two examples in C++: virtual method dispatch via vtables (in C you'd implement the same mechanism manually with function pointers), and templated generic code. C++'s generic data structures and algorithms are faster than C's because they don't have to use indirection through pointers; the compiler creates specialized code for each type instead. That causes binary-size bloat, but the execution-time cost is low.

so what's the "cost in complexity" here? the complexity would be there whether you went with a hand-rolled or "zero-cost" version (at least that's the idea), except the "zero-cost" version should require less code and thus be easier to maintain.

Preface: I was also very glad to see this called out specifically, and think it's a great rule.

That said...

> Note how Google does NOT say "make sure the code is properly architected".

is not accurate. The very first paragraph on the same page is:

> Design

> The most important thing to cover in a review is the overall design of the CL. Do the interactions of various pieces of code in the CL make sense? Does this change belong in your codebase, or in a library? Does it integrate well with the rest of your system? *Is now a good time to add this functionality?*

Emphasis mine. But that sure sounds like they care about proper architecture. They just also care about avoiding over-engineering. They're not mutually exclusive.


> But that sure sounds like they care about proper architecture.

Of course they care about architecture—I don’t think anyone implied otherwise. But at good companies like Google good architecture is a given. Top developers often fall into the pit of over-engineering, and almost never under-architect. So at top companies code reviewers have to be vigilant about over-engineering and rarely have to worry about under-architecting.


>But at good companies like Google good architecture is a given.

Absolutely not. Google’s interview process and inflow of fresh graduates does not bode well for good architecture. Having spent time at G and FB, I can certainly tell you that employees at both are no better at architecting code in a sane way than SWEs at other companies.

Code architecture requires experience. Google does not.


I think the hiring bar at places like Google and FB is astronomically higher than at almost all other places - they definitely have more highly skilled people there on average.


Checking for memorized algorithms and an MIT degree do not guarantee good code.

I've seen more horrible code at a FAANG company than in many other places. Complete absence of overflow analysis in C++, race conditions, architecture astronauts, exploding code bases that no one understands any more.

Features are being added on a daily basis, basically what matters is a high line count.


You clearly haven't seen code at Google and its "base" library.

The process does not select for experience but mainly for the ability to remember recent academic programming puzzles. The first comment is correct, even though they do have many experienced developers who would be in a position to provide architectural advice.

The hiring bar at Google does not test for the ability to organize code coherently. The skill is completely unrelated to algorithms and data structures.

It’s like requiring candidates to deadlift 500 lbs and then assuming it means they can all run marathons.


Most teams at Google have senior devs that ensure this. The average level is way higher than other companies.


So for anyone, including me, the questions to ask when looking for the next job should be:

- Is `good architecture` a given in this team?

- What are they doing to avoid `over engineering`?

What are the specific questions one can ask to find out the answers to the above two?


what does the _CL_ notation indicate?


"changelist" is the term perforce uses for a a single commit before it gets committed. A bit like a PR in git. Google used perforce before rolling their own vcs, and they kept the perforce terminology.


Changelist, perhaps?


Thanks. This would be a nice thing to explain in the document that's open sourced.


The HN submission links directly to https://google.github.io/eng-practices/review/reviewer/ but if you start at the top, namely https://google.github.io/eng-practices/, then there's a Terminology section right there on the first page. Maybe the GitHub Pages theme used by this site should be changed to one where every page links to its parent page (right now there doesn't seem to be any way to navigate upwards except by editing the URL).


It sounds like walking a tightrope between over-engineering and falling into technical debt. If you design code that solves only the immediate need, it may need to be thrown away or extensively worked on / around when future needs come up. On the other hand, you can write code that solves future needs that never appear, and still fail to solve the actual needs that end up appearing.

For me, I would rather put a bit of additional effort up front gathering enough information about the direction of the project or its usage to better formulate an attack plan, so that it actually becomes more future-proof. But even that is subject to failure, so what do you do?


I think of it this way: How can I build this so that it only solves today’s problems but doesn’t make it overly difficult to solve tomorrow’s problems?

Loose coupling, dependency injection, composition over inheritance, and similar techniques tend to be good answers to this question in my experience.

In contrast, over-engineering attempts to solve tomorrow’s problems before they arrive and, if they arrive differently than predicted, makes them harder to solve, because when you have to change something it’s less clear which parts of the design were necessary to solve the original problem and which were only necessary for the future problem that was incorrectly predicted. Often you might end up having to rethink the entire architecture rather than having the relatively simple problem of adjusting things to meet new requirements.
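To make that concrete, here is a minimal Python sketch (the report/storage scenario and every name in it are invented for illustration, not taken from the thread): the report code depends only on a small injected interface, so it solves today's problem with today's backend without making tomorrow's backend harder to plug in.

    from typing import Protocol

    class Storage(Protocol):
        """The only thing report code is allowed to know about storage."""
        def save(self, name: str, data: bytes) -> None: ...

    class LocalDiskStorage:
        """Today's backend; a future CloudStorage would just implement save()."""
        def save(self, name: str, data: bytes) -> None:
            with open(name, "wb") as f:
                f.write(data)

    def publish_report(storage: Storage, name: str, rows: list[str]) -> None:
        # Depends on the injected Storage interface, not on a concrete backend.
        storage.save(name, "\n".join(rows).encode())

    publish_report(LocalDiskStorage(), "q3_report.txt", ["alpha", "beta"])

Nothing here anticipates a specific future backend; it simply avoids hard-wiring the current one, which is the cheap kind of flexibility.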


Very well said!

My own experience is that effort invested in removing restrictions and handling corner cases is generally well spent. It may not seem too onerous to have to remember that a particular function works only for nonempty inputs or when called after some other function, but in a large system where many operations have such restrictions, keeping track of them quickly overwhelms the capacity of human memory. Sooner or later someone is bound to forget, and it may even be the author of the code in question. I try to ask "what are people going to expect this code to do?" and then, if within reason, to make it do that (and failing that, to protect it with assertions). Alternatively, someone may be forced to make some other part of the system more complicated to work around the restriction, leading to excessive coupling.

I suppose this practice sometimes risks over-engineering, but I have found the risk to be worth taking. As you say, it makes the system easier to extend further.


Yeah absolutely. I think a big part of it is considering possible edge cases or failure scenarios and not necessarily solving them immediately but considering how your design might need to be changed in order to solve them. Many times I have found there’s a solution that requires little or no more work than a naive implementation but is far more robust to future changes.

To use inheritance as an example - if there’s a conceivable possibility that some future requirement might lead you to add the same functionality to other classes, you probably don’t want to have to deal with a long and convoluted inheritance chain when you could compose some set of functionality onto one class as easily as many classes.

Or in the case of dependencies, wrapping some third party mail sending library in your own generic API is barely more work than using the third party library directly and will pay dividends if you change mail providers in the future.

One very specific example that I worked on recently: In most cases there are one of X, but in a handful of cases there are many of X. My initial inclination was to branch off and handle the many case as an exception, but then I realized that you could eliminate that corner case entirely if you handled everything as a collection of X, even if in most cases that collection only has one item in it.
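A rough sketch of that last idea in Python (the attachment scenario is invented purely for illustration): normalize "usually one, sometimes many" into a list up front, so the rest of the code has a single path instead of a special case.

    def normalize_attachments(attachments):
        """Accept nothing, a single attachment, or a list; always return a list."""
        if attachments is None:
            return []
        if isinstance(attachments, list):
            return attachments
        return [attachments]

    def send(message, attachments=None):
        for item in normalize_attachments(attachments):
            print(f"attaching {item} to {message!r}")

    send("hello")                        # zero
    send("hello", "photo.png")           # one
    send("hello", ["a.pdf", "b.pdf"])    # many -- same code path for all three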


That's a pattern I learned a long time ago. It's much more future-proof to treat items as an array, even if it seems like over-engineering at first, because scope creep dictates it's more likely you are going to handle more items in the future, not fewer. And dealing with one item vs. many can change your architecture pretty drastically.

> Loose coupling, dependency injection, composition over inheritance, and similar techniques tend to be good answers to this question in my experience.

So, adding some extra architecture solves it?


In my experience it’s often just “different” rather than “more” but in some cases, yes, it might be a little more. It depends on where you’re at in a project. Very early on, it’s usually more. In a more established project where these patterns are already in use it’s usually not. Definitely an important thing to consider - it’s not a one size fits all solution.

It obviously depends on the language and your tooling too, but I find that it's generally much easier to refactor under-designed code to add functionality than to refactor code that was over-designed but doesn't fit the spec anymore.

For example, in Python, there's no need to go for a class when a simple function does the job. And once you do need a class, it's fairly trivial to swap one for the other, especially in a good IDE.
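For instance (names made up for illustration), the first version below is all most callers need on day one; if real state shows up later, the swap to a class is mechanical:

    # Day one: a plain function is enough.
    def render_greeting(name: str) -> str:
        return f"Hello, {name}!"

    # Later, when genuine state (templates, locale, caching) appears,
    # promote it to a class; call sites change only slightly.
    class GreetingRenderer:
        def __init__(self, template: str = "Hello, {name}!"):
            self.template = template

        def render(self, name: str) -> str:
            return self.template.format(name=name)

    print(render_greeting("Ada"))
    print(GreetingRenderer("Hi, {name}.").render("Ada"))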


Your goal should be to get to the destination as fast as possible.

"Lean" tells you that small batches reduce the waste.

As an analogy, think of looking ahead at a road that winds through cities and attractions until it finally becomes too distant to see. You feel that if you stay on the road, it will likely bring you to your destination.

Nevertheless you are not sure. As you move ahead it gets clearer where you are and where you will have to go.

So how far should you go before stopping?

It depends. It is all about how far you can see and how confident you are that it is the right road.

If you do not see too far ahead, it is probably a good idea to just reach the next city at hand and spend some time checking if you are on the right road. Otherwise you may waste time going too far on the wrong path. The further you go, the further you will have to backtrack.

But if you can see further ahead, you want to move faster and skip the pitstop.

This analogy only goes so far. But depending on what you are building and where you are in the process, and more importantly, how sure you are of what you actually need to build, you may spend less time planning and just build what is merely necessary vs. building a more complex architecture.

So you may say: "let's try to find out how far we have to go, and where, before we start." But even the pathfinding activity takes time...


I can tell you what I do: I allow the total cost to increase by no more than 10% for future proofing. It is all about controlling cost. It is not justifiable to spend more than 10% of the time for future proofing because you have no idea what the future is going to be. Of course if you do have some idea about requirements coming in the near future then it may be justifiable to spend more.


It’s more than a flat cost estimate. It’s more about risk mitigation. Trying to add multi-tenancy to a system that wasn’t designed for it can mean a major rewrite. Building the system from the beginning with the idea that there could be multiple tenants may cost 10% more, but the opportunity cost of not being able to address the multi-tenant market could be much bigger.


building a multi-tenant system without having at least two tenants from the get go means you're still likely to need that rewrite. The chances that the assumptions you made for your first tenant meet what your next one needs are pretty low

Does Google really practice what they preach though?

Just recently I was looking at Angular, Google's web frontend framework. Services, modules, directives, angular-specific markup. Coming from React, I find this grossly over-engineered.


Googler here.

In general, yes we do practice it. They are guidelines, not rules tho.

Is Angular grossly over-engineered? Depends on how you look at it.

Any code's complexity reflects the complexity of its use case and Angular, for example, needs to fit A LOT of use cases at Google. So yea, sometimes, I think it can be a big hammer for a small nail. In other cases, all that abstraction helps. I guess that's the nature of how code eventually matures over time.


> I guess that's the nature of how code eventually matures over time.

Not really. It’s a sign of too many responsibilities being given to one project. Feature bloat is a real thing.

As code matures it shouldn’t be getting more and more abstraction and APIs, it should be stabilizing with fewer changes. If the former is happening, it’s a sign it needs to be broken into multiple projects.


> If the former is happening, it’s a sign it needs to be broken into multiple projects.

Breaking something into smaller projects also has its own associated costs.


Yeah, based on my experience with android and tensorflow they fail spectacularly at it.

React does much less than Angular...

Defer all decisions until you have to make them.

Leave the code in a state where it's understandable.

i.e. don't over-specify - not just simple things like interfaces, but also things like premature decomposition that's exposed at a higher level for configurability.

While on the face of it having everything as small interacting functions means changing any single function is quite easy, changing overall behaviour might be very hard - it's hard to understand all the moving parts, and even worse, if the code split is wrong for future needs, it's very hard to change.


I've been pushing the idea that if you're getting meaningful feedback on design (and over-design) in your PR reviews then you've failed. That stuff should be shaken out long before you have working, complete code.


Is that a productive idea to push?

Not to say you're wrong, but some hard human problems to solve are:

- getting people to not be defensive during code reviews

- getting people willing to be critical (constructively) of their peers' work

Emphasizing that code reviews with productive design comments indicate a failing seems more likely to stop the comments, not to improve the design. Most people won't want to do the work multiple times and will quickly learn to do as you desire in the face of repeated code reviews with comments pointing out design problems.


There are times that I get into the flow while coding and end up solving a few future problems in addition to right now problems. I know that the code is over-engineered, but if it passes all of the tests and there's nothing obviously wrong with it I would probably check it in anyway. Over engineering never comes up in our code reviews but I think it would be a productive conversation if it did - even if the decision was still to accept the code as-is.

The programmers I work with can be defensive of their architecture, but they also love to talk about architecture generally. I think there's a "yes and" [0] way to bring this up that engages the whole team.

[0] https://en.m.wikipedia.org/wiki/Yes,_and...


And then you decide to get another job, and the next guy to maintain the code is utterly confused and misdirected because, let's be honest, there's no documentation, or there is but it's either sparse or not up to date, and there's no thorough unit testing (lucky if there is any). Not saying that's your case, but it happened to me personally to be on the other end a few times and let me tell you, it's quite an effort to ramp up on such projects.


Yes! Engineers should collaborate on the overall approach to a code change/addition with the reviewers before any PR is submitted. Those early discussions are extremely important since teams will converge to an appropriate solution quicker because the process is much more informal. These frequent early discussions also build team spirit, and if you're so lucky, individuals will start to click and the team will start writing code and designing software in similar ways. These early discussions are generally more jovial. The actual review can focus on details, and, if feedback was gathered and put into practice early, nobody will mind the occasional nitpick about punctuation and naming. These will feel earned, even welcome, the PR being a final touch of polish.


Out of curiosity how do those conversations happen before the PR?

Asking because we might have a different process for PRs or a different definition of what it means to submit a PR - on many teams I’ve been on it’s been encouraged to publish a WIP PR in order to facilitate these conversations. GitHub recently added a feature to formalize that but before this we used labels such as “WIP” and “Ready for review”


In person, or for a remote job, with screensharing and online chats. In my current job, it generally starts with the implementer reaching out and saying, I want to achieve xyz and think about doing abc. That's how the conversation gets started. Granted, sometimes you need code examples already ready-made to gather meaningful feedback, but that can be a personal branch, one-off patch, gist, WIP PR ... feedback can happen over slack, simple chat, comments on the WIP PR ... As long as the final ready-for-review PR doesn't catch the reviewers off guard in terms of overall design/approach.


WIP PRs are fine. Essentially the PR is used as a mechanism to clearly communicate an idea, rather than containing code that is intended to be merged into the codebase.

There shouldn't really be a PR review where the author is seeking input on naming/testing completeness and major design choices. Those two types of reviews are mutually exclusive.

Alternatives to WIPs include whiteboard discussions, pair programming, and written proposals.


For smaller changes it can happen over a coffee in the hallway. For large changes there are design reviews. For stuff in between you can schedule a meeting with a few concerned people.


> I've been pushing the idea that if you're getting meaningful feedback on design (and over design) in your PR reviews than you've failed. That stuff should be shaken out long before you have working, complete code.

Sometimes a prototype can help shake these things out, though. It's often okay to go an extra stage beyond what's been reviewed/approved (e.g., send out a prototype implementation before design review is complete) if you're willing to throw it away. If you'd be upset by someone saying "try this better design", you need to wait for a design review before writing any code.

Another thing to be careful about with a prototype is keeping it as far away from the normal production serving path as possible to avoid compromising the essentials of reliability, privacy, and security. (Ideas along those lines: separate production role, separate server, flag-enable it only in a test environment, dark-launch it, experiment-gate it, etc.)


The problem comes in when you can anticipate a problem, and can also anticipate not being given the resources to solve it then, even though you do have those resources now.

For example, we had a user who wanted a document produced in one specific format. They promised this was what they wanted and it wouldn't change. It changed 5 times as we were nearing release. So I over-engineered it to allow a template to produce the document that can easily be changed (compared to the release process for a code change). It ended up being used numerous times, despite the problem we were given clearly stating an unchanging template.
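As a rough illustration of that kind of escape hatch (the invoice fields below are hypothetical, not from the actual project), Python's standard-library string.Template lets the document layout live in data that can be edited without a code release:

    from string import Template

    # The layout is data, not code, so the "unchanging" format can change
    # without going through a full release cycle.
    DOCUMENT_TEMPLATE = Template(
        "Invoice $number\n"
        "Customer: $customer\n"
        "Total due: $total\n"
    )

    def render_document(template: Template, fields: dict) -> str:
        return template.substitute(fields)

    print(render_document(DOCUMENT_TEMPLATE,
                          {"number": "0042", "customer": "ACME", "total": "1,200 USD"}))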


I would argue that as soon as they asked to change the format the first time, the template solution was no longer over-engineering, as it had been demonstrated that the format can and will change. In general I wouldn’t call it over-engineering if the problem you are trying to solve occurred in the past and you have reason to suspect it will occur again. The problem with over-engineering is solving problems that have never occurred (and may never occur).

If the user did not have that problem wouldn't you have wasted engineering time?

It was a calculated risk, same as many others, and by far not the most wasteful event of that year even had it not been needed.

When you have very smart people, it may be difficult for those people to write simple, stupid code. Smart people like to write delightful, sophisticated and elegant solutions that address problems and use cases the code will never encounter. A library may be an exception.

I used to be one of them. "Growing up," I realized that striving for simplicity and adding complexity only when necessary is the ultimate sophistication.

Good engineering values and a shared vision of what "doing a good job" means help out a lot. If we prioritize "development speed" and "quality from a user perspective", extrinsic complexity clearly becomes a burden.

Instead, if these values are not shared, the main motivation for smart people may easily become "ego" and showing off with code that may look clever and support a lot more use cases, but that most likely did not need to be written, wasting time and adding unnecessary complexity with the risk of introducing cognitive overload.


> When you have very smart people, it may be difficult for those people to write simple stupid code.

If they can't write idiomatically, are they "very smart?" Code is a communication medium as much as a functional one, and if someone doesn't understand you when you're talking to them, such that you have to say, "well I use a cutting edge grammar, aren't you familiar with Lakoffian generative semantics? Imre Lakatos has a sick presentation on Vimeo," you're not maintaining communication skills. I imagine this could be what "unshared values" means.

The way I see it, it's like driving: people who don't stop at stop signs, who don't use turn signals, who don't know right-of-way rules are not good drivers. These things are a part of driving just as much as knowing where you're going and trying to run the Nurburgring in under 8 minutes. They also demonstrate a concern and respect for the health and wellbeing of your fellow drivers (not to mention pedestrians, cyclists, etc.).

This is not a matter of education, age, or maturity, it's manners. It's saying "please," and "thank you." It's parenting yourself if your actual parents didn't teach you. I'm not sure any of that can be chalked up to surmounting ego. "Ego" is just an excuse and probably not based on actual psychological concepts anyway.

> the main motivation for smart people may easily become "ego" and showing off with code that may look clever and support a lot more use cases, but that most likely did not need to be written, wasting time and adding unnecessary complexity with the risk of introducing cognitive overload.

"Very smart?" ;)


> where developers have made the code more generic than it needs to be, or added functionality that isn’t presently needed by the system.

I agree with this when it's internal interfaces. When you have public interfaces that you expect to have to support in a backwards-compatible manner for (ideally) years in the future, it's worth taking some time to think about how people might want to extend it in the future.


Within a monorepo like most code at Google, there isn't as much of a bright line between public and internal interfaces. If you don't get a public interface quite right on the first try, but it still has <~100 call sites, it's really easy to just make the change and fix all the callers. If your library has become popular and has substantially more call sites, then it's hard but still doable to fix.

This assumes that the interface is a function or method call within a binary. If it's an RPC, then making changes is much trickier, since you can't assume that both sides of the call were built at the same commit. This requires a lot of thought to make sure any changes to your RPC messages are both backwards- and forwards-compatible.
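As a toy sketch of that discipline (plain Python standing in for the real RPC machinery; none of this is Google's actual stack), new request fields get defaults so old clients keep working, and unknown fields from newer clients are ignored rather than rejected:

    def handle_request(message: dict) -> dict:
        """Toy handler standing in for one side of an RPC.

        Backwards compatible: "priority" was added later, so it must default
        sensibly when an older client omits it.
        Forwards compatible: fields this server doesn't know about yet
        (e.g. "tag" from a newer client) are simply ignored.
        """
        user = message["user"]                        # field that has always existed
        priority = message.get("priority", "normal")  # newer field, must have a default
        return {"status": "ok", "user": user, "priority": priority}

    print(handle_request({"user": "ada"}))                                    # old client
    print(handle_request({"user": "ada", "priority": "high", "tag": "new"}))  # newer client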


This only works at Google because most of the code is rewritten every few years [1]. The dynamics between tech debt and over-engineering in such a setting are not really representative of the rest of the industry.

1: https://arxiv.org/pdf/1702.01715.pdf


This is guidance for a code review, which happens well after the architecture is decided and is entirely the wrong place to systematically question architecture.


My #1, #2, and #3 rules are to get the code in the right place. You could vaguely say that is architecture-related - is it in the main app, a library, etc.? Those decisions can be tough for junior developers and can often be disputed by seniors ("we don't need a library yet").

"Architecture" is very vague and used a lot of different ways.


To be honest, after reading this document, I believe it contains MUCH more information than just how to conduct a code review.


> This is guidance for a code review, which happens well after the architecture is decided and is entirely the wrong place to systematically question architecture.

I think that is part of the scrum philosophy: management can question whatever it wants and whenever it wants to; if the timeframe explodes, that is the developer's fault.


That's kinda funny, because their interviews are exactly the opposite. Toy problems you'd never see in the real world, crazy abstracted solutions, O(n) demands, almost encyclopedic knowledge of data structures and algos.

Do as I say, not as I do I guess.


I suppose you just wanted to complain, but in case you're serious: interviews and code reviews fulfill entirely different purposes. Choosing the right data structure may or may not be over-engineering, it depends on the application.

Google absolutely wants developers who are capable of over-engineering.


>are capable of over-engineering.

Anyone can over-engineer. It’s neither a compliment nor a desirable quality.


> Google absolutely want developers that are capable of over-engineering.

Why, so, when they get on the job, they need to be told not to over-engineer?


Making code execute efficiently is not over-engineering, it is saving many millions of dollars and often a requirement to even launch.


But we should also mention that Google has some of the best software designers and architects in the world. You can't get a well-engineered library without someone experienced at design leading it. All of Google's open source projects [1] have significant design. And, if you look at the commit logs, you can see that their designs didn't emerge from thousands of little pushes. Someone led those projects and enforced design criteria from the beginning.

[1]: https://opensource.google.com/


I have worked on a Google project as a contractor and I can confirm that over-engineering was our biggest problem. I was also partially to blame, but at the time I did not know better.


Yes, I don't think I have ever seen a premature optimisation actually end up being beneficial when the time came to add new features to the project.


Premature optimization rarely helps, but well-thought-out flexibility/decoupling in core system components has had a significant positive effect on velocity down the line, and lack of the latter has repeatedly been shown to be disastrous.

I do believe, though, that it's really hard to discuss effectively, as there seems to be no good, common definition of what over-engineering actually is, except in retrospect.

I've seen teams that were so "good" at avoiding over-engineering and "architecture astronauts" that thousand-line functions with a Byzantine labyrinth of conditionals were preferred to even the most basic design.

With that said, what would you consider over engineering of the kind that never works, and in what kind of systems?


Too bad the Google developer APIs didn't go through that review. 200 lines of code (LOC) for a simple "Hello world", and that's using libraries that are probably a million LOC. It could probably just be an HTTP GET, run with a simple curl command.

Any particular APIs you have in mind? I find App Engine’s ‘getting started’ examples quite sane while they are ‘hello world’.

Try for example updating a column in a spreadsheet. Or uploading a file to Google drive.

For a proper discussion, I'm surprised no one has mentioned this principle from Extreme Programming, which possibly predates Google as a company.

https://ronjeffries.com/xprog/articles/practices/pracnotneed...


Overall architecture should have been discussed and agreed before writing any code, not at code review.


In an ideal world, yes. In practice this almost never happens. Architecture evolves as the code is written.

I don't think this is a good way to do things, but it's mostly what happens IME.


Looks like Magento2 could make use of this; every part of it is over-engineered, adding unnecessary complexity to make it look like an enterprise-ready application.

If you're making mistakes like over-engineering, are you actually a good developer, given that this is part of what makes code bad? Maybe you fall into the "expert beginner" category at that point.


I'll take a noob who doesn't understand basic syntax all day every day vs. the dev who finds a way to make everything complex. At least in the former the damage is limited.


Good devs understand how to simplify complex problems by breaking them into smaller components. Bad devs add complexity to already complex problems.


Breaking down a complex problem does not simplify it; you merely move the complexity to the graph of dependencies between smaller components.

I've seen this a lot, where it is broken down so much that it takes mental gymnastics to follow what's going on. It also results in spaghetti code with fragmented functionality.

In my experience, projects using Java are the worst offenders here.


It can happen when a really good coder gets bored.


Sometimes you can generalize the problem a bit to get a shorter solution.


Pretty much by definition you will be covering a wider surface of possible inputs and behaviours and so acquiring complexity


Here's a fun example from math which requires generalizing to get a good solution.

Suppose you have a 2^n x 2^n sized courtyard. You have one 1x1 statue, and unlimited L pieces (2x2 with a corner missing).

You would like to have a layout which places the statue in one of the centermost tiles and fills the rest with L pieces.

----------------

Solution

----------------

Define a layout function which allows for the empty tile to be in any corner. This is trivial for 2x2, as it's just an L piece.

By tiling these squares, you can solve for the next size up. Put three of them with their holes toward the center, keep the fourth square's hole at the outer corner, and fill in the three center holes with an L piece to get the bigger square (whose hole is again at a corner).

At the end, take four 2^(n-1) squares and put their holes toward the middle. Add one L piece to cover three of those center holes, and the remaining center hole is where the statue goes.

What was the point of that?

By generalizing your solve function to cover more cases than you strictly need, you can build up a structure from it in which it's easy to solve your actual goal.
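Here is a short Python sketch of that construction (function names are mine; the algorithm is the standard divide-and-conquer tiling the comment describes, generalized so the uncovered cell can be anywhere, which covers both the corner case used while building and the center case needed for the statue):

    def tile_courtyard(n, hole):
        """Tile a 2**n x 2**n courtyard with L-pieces, leaving `hole` uncovered."""
        size = 2 ** n
        board = [[0] * size for _ in range(size)]
        next_piece = 0

        def solve(side, top, left, hr, hc):
            nonlocal next_piece
            if side == 1:
                return
            next_piece += 1
            piece = next_piece
            half = side // 2
            quadrants = [(top, left), (top, left + half),
                         (top + half, left), (top + half, left + half)]
            # Each quadrant's cell that touches the centre of the current square.
            centres = [(top + half - 1, left + half - 1), (top + half - 1, left + half),
                       (top + half, left + half - 1), (top + half, left + half)]
            for (qr, qc), (cr, cc) in zip(quadrants, centres):
                if qr <= hr < qr + half and qc <= hc < qc + half:
                    solve(half, qr, qc, hr, hc)   # the real hole lives in this quadrant
                else:
                    board[cr][cc] = piece         # one arm of the shared centre L-piece
                    solve(half, qr, qc, cr, cc)   # treat that cell as this quadrant's hole

        solve(size, 0, 0, *hole)
        return board

    # 8x8 courtyard with the statue in one of the centermost tiles.
    centre = (4, 4)
    layout = tile_courtyard(3, centre)
    layout[centre[0]][centre[1]] = -1   # -1 marks the statue
    for row in layout:
        print(" ".join(f"{v:3d}" for v in row))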


And complex analysis informs number theory. An example from software development might be more useful.

> Over-engineering is a trap that very good developers fall into all too often.

I would not call them 'very good developers' in that case.


Over-engineering is not a trap in BigCo; it is simply a by-product of being KPI/Goal/Impact driven.

People tend to do it to get promo/bonus.


It’s an old person thing. We learned to program way before the internet, when one only had a language reference, some vague requirements, and a lot of time. Things took much longer, and communication tools didn’t exist. Maintainability was once a big deal.

These days, it’s easier to do a search, grab a snippet, and paste. There’s little need for reusability. Extrapolating a bit, I imagine in a few years, we’ll have one big ‘library’ of functions built into our IDE (built, and copyrighted by Google or Microsoft, of course).

Programming used to be skilled labor, but now it’s kind of dumbed down for higher productivity. A natural evolution.

Personally, I’m trying to re-train myself for the more modern rapid-fire programming. I’m not completely convinced that it results in better products, but it does feel good to be constantly committing.


This is great advice and isn't followed often enough, especially when reviewing code written by people new to an organization/team:

> If you see something nice in the CL, tell the developer, especially when they addressed one of your comments in a great way. Code reviews often just focus on mistakes, but they should offer encouragement and appreciation for good practices, as well. It’s sometimes even more valuable, in terms of mentoring, to tell a developer what they did right than to tell them what they did wrong.

I've seen cases where people got hundreds of comments (many of them minor, nitpicky) from more experienced developers and were discouraged by the sheer number of them. That most new developers naturally suffer from imposter syndrome is not helped at all by 100% critical code reviews.


I once worked on a team that specialized in very long litanies of code review comments... but they were able to bake this into their culture in fundamental ways such that it ended up being one of the most positive experiences in my software engineering career.

The basics of how they accomplished this was:

- The obvious- no personal / destructive attacks or insults, no cussing, no comments on any person's abilities.

- Having your code picked apart is a badge of honor, and you were expected / required to do the same to the most senior of your teammates when they submit code.

- Collective ownership- it's never "your" code, it's the team's code.

- Constant acknowledgement that our system is hard and complex, and it is really important that it works as expected / promised.

That last part was an ingredient I never found anywhere else. The opposite seems to be more common- every other team I've worked on tended to underestimate or even trivialize the difficulty and complexity of the systems they work on. By acknowledging that this work is hard for everyone involved, and that despite this, it must be well-done and function properly, the code review and the ensuing discussions became a very welcome and encouraged part of this process. It also helped defeat newcomers' imposter syndrome because this mentality was effective at making everyone feel like they had an important role to play, and that even the most senior folks often felt like a noob when they screwed up.


Sure, but “Hey I really appreciated how you did this thing here because it’s tidy / does thing X really well / takes into account future whatever” never hurts to throw in either!


Yup, I failed to emphasize how much I agree with that as well!


I found that really informal comments cut less deep: "hey man, this needs double checking; read up on $x and then reconsider this block" (gender aside) is far better received than "This is missing fundamental concepts around $x; read up on them, then rewrite this block".


> I've seen cases where people got hundreds of comments (many of them minor, nitpicky) from more experienced developers and were discouraged by the sheer number of them. That most new developers naturally suffer from imposter syndrome is not helped at all by 100% critical code reviews.

This, combined with a large portion of developers lacking social empathy, having poor communication skills, and (unfortunately) desiring to appear to be the smartest person in the room, can lead to severe demotivation and stress for both experienced and new team members.


> hundreds of comments (many of them minor, nitpicky)

This is one reason I liked Phabricator’s review system, which allowed drafting comments on an entire PR before submitting the comments. This allows you to be as nit-picky as you want when reading the PR, and then delete or modify any of them at the end. Instant submission of line-level comments, on the other hand...

I think reviewers should have the awareness to understand not to over-burden the person whose code they are reviewing.

For example, if I’m reading some very junior code and finding lots of issues, maybe I should delete all my comments about minor things like style and only leave the comments requesting major fixes like code design or potential bugs? When the developer re-submits the code after fixing the main issues, maybe it will be less frustrating to get a review back with “this looks great, can you just fix these small issues that don’t conform to our style guide and it’s good to go.”?


GitHub features the same draft system you described.


I thought theirs was immediate, but I was referring mostly to Bitbucket Server's system from previous experience with that.


GitHub has both. You can add an immediate comment, or add it to a draft that you post as one unit.


Critique and Gerrit (code review systems at Google) also do this.

Social empathy is a really important skill, especially when delivering feedback. However, the best way to improve as a junior developer is to welcome feedback, not panic.

I've seen junior developers welcome feedback or run from it, and I think that has as much to do with the reviewer as the author of the code.

On the flip side, I've learned almost as much from reviewing more senior engineers' code and just asking dumb questions or looking up functions in documentation that I didn't know about. I think there's huge potential to learn as the code reviewer.


I'm sure we've all worked in places where a senior would not like it if a junior reviewed their code, regardless of the junior's intent. But yes, there's always a tonne to learn.

IMO the first few code reviews should be done face to face so some rapport can be built. Receiving feedback from a 'human' is far easier to process than a faceless Github profile picture.


The face-to-face approach is definitely good for first reviews. It’s also worth starting with lower expectations and less critical feedback, then increasing over time. It takes time for people to feel comfortable receiving direct criticism from people they aren’t familiar with.


Hundreds of comments should raise a red flag. I think one of three scenarios is at play:

1. The commit/change is too large.

2. The reviewer is nit-picky.

3. The person that wrote the code made a lot of mistakes.

I really feel like if a reviewer is making 100 comments on a change, they're doing something wrong even if there are a lot of mistakes. That reviewer should really reach out to the person that wrote the code and talk to them human to human.


As someone who did a code review for a colleague and left over 100 comments... I would say that leaving those comments is handy as it's a single place I can check to see what comments I left and the state of the fix, or if it's not fixed yet.

If I just ended up talking to the guy, I'd have no record and he would have no record (unless he took notes) of all the places he has to fix.

The reason for 100+ comments was a junior developer's code and far too large a changeset.


I feel as though leaving 100 comments only addresses the code but not the coder.

Telling someone that the changes they made are too much for a commit would probably go a lot farther.

Identifying common architectural problems would also reduce what you need to communicate. Maybe they made a lot of mistakes, but I would find it difficult to believe there were 100 unique mistakes.


Want to know a great way to handle a subset of these? Good automated code linters. Google has a ton of formatters and linters that either auto-correct or suggest changes. To paraphrase a smarter Googler than me: people tend to just do what the bots say and don't care too much about it. The best way to impose your code ideas on others is via a linter.

Automated tools here have also drastically reduced the number of comments I get on CLs at Google, as people don't bicker about the little things as much anymore.


Something I've started doing that I picked up was prefixing my nitpicky comments with "Nit: ..." so that it's clear that certain comments are just minor suggestions, not that anything is necessarily wrong.

I'm usually okay with preemptively accepting code with only nit comments too just to signal that those comments are not too big of an issue (if at all).


My favorite way to do that has been to indicate “Not a blocker” to differentiate nits that I’m picking from things that would make me not approve the review.


The guide goes into this as well with the “LGTM/Comments” approval flow. It is something we’ve done organically but nice to see written up as well.


Agreed. I find this extra important when working remotely or with other people who work remotely. If you’re not getting any face-time, critique has to be balanced with positivity. In my experience it’s true even with very experienced team members, and even with the very serious & stodgy people who avoid chit-chat and claim to not be bothered by feelings & opinions.


Why is it that people feel so discouraged by loads of review, especially early on? I always had a good bit of imposter syndrome early on, but never considered quitting. I always assumed that you have a lot to learn, that it’s expected you’re going to suck at some level.


People have their own internal stories about how good they are, and all that feedback can be very painful.

It doesn't help that many comments are nitpicks and pedantry, made by people without much social empathy.

This is especially true when sending a patch for a high-level review of your proof of concept, and the next thing you know people are complaining about your formatting.


It can be hard to review badly formatted code. The reviewer has to put in effort to understand your change, but they have to put in more effort to parse the change if the formatting is bad.

"Why is this function's return value not being checked? Oh, turns out this person used a yoda conditional even though the rest of the code doesn't do that."

"Why is this block of code running even when the condition is false? Oh, turns out the else branch already ended and this part of the code is just indented wrong."

And so on. Even something as minor as `foo ()` vs `foo()` can stand out and act as a constant stream of mental speedbumps.

Claiming your change is high-level / only looking for feedback to the overall design doesn't change that. You're asking for a code review because you want the reviewer to read your code, but reading it is exactly the part that they're finding hard to do.

And it's not excusable, but the reviewer might be insulted that the reviewee is wasting their time and their feedback may be ruder / snappier as a result. After all, the reviewee could easily have put in the effort to run the auto-formatter, follow the existing code's style, etc.


Prose editors have various levels of edits: edits for content, edits for organization, copy editing, and finally proofreading.

Yes, one should avoid misspelling words when submitting a manuscript for a content edit, but wordsmithing all the sentences is a mistake.

Likewise, one should run an auto formatter before sending a proof of concept off for review, but a reviewer who is all tied in knots about low-level nits when looking at a general proof of concept is editing at the wrong level.

Otherwise you end up with a fully-baked, carefully written solution that satisfies all the nits, but was a giant waste of time because it takes the wrong overall approach.


The formatting & style could be automated via IDE configurations, so that when the code is submitted for review it is already clean. This lets the reviewer focus on the design/logic instead of formatting.


Autoformatters fix many things, but not all of these issues.


That’s where gofmt shines :-)


The best critical code reviews I've had focused on the design and completely ignored things like formatting.

On balance, formatting is an important part of code health. Using auto-formatting tools (gofmt and friends) should hopefully keep formatting issues from taking up time in code review.


Thanks For The Feedback[1] is a pretty decent book on the subject. Highly recommended if you're interested both in the mechanisms and improving how you receive/give feedback.

[1] https://www.amazon.com/Thanks-Feedback-Science-Receiving-Wel...


Everyone is different, and some people have a harder time with critical feedback than others. It could be that they have different past experiences, or that they just have different personalities.


>Why is it that people feel so discouraged by loads of review, especially early on?

Many people often think their skills and actions are them, instead of things they've acquired.

The question you're asking is oddly similar to asking why people get stressed and mentally suffer at all. It's a deep and complex subject. "Taking it personally" only scratches the surface.

Eg, I know someone who switched her career over it. I asked her why she left and she told me they kept criticizing her, and she took it as if they were attacking her and trying to get her to quit.


I can offer a personal anecdote here on my experience with code reviews, on both sides.

I was the primary architect and reviewer for a complex real-time mathematical application. When reviewing code, I was pretty much a tyrant: the code had to be correct, well tested, conform to the theory, interface with the rest of the system correctly, etc., in order to be allowed in. I remember leaving some pretty brutal reviews when the proposed design was different than what I thought it needed to be. I thought I was doing the right thing, the project lead thought I was doing the right thing, but maybe I was just making my teammates' lives hell.

In a subsequent job, I was on the receiving end: I was again the primary architect and maintainer, but still needed to seek review from a larger team (my component was part of a larger project they owned). The experience was not enjoyable: reviews sometimes took months (I sure wish I was kidding), sometimes were passthrough "LGTM! I don't understand it at all!", sometimes asking questions like "why is this mutex here" and then I have to spend 3 hours writing up an explanation for how threads and locks work in this case. I found that my mental model shifted: instead of committing small improvements here and there like cleaning up comments or renaming something I just... didn't. I didn't want to deal with a multi-day process of bugging someone to review (they were always busy), dealing with the roulette wheel of comments that might come up, the possibility that I might have to justify some minor thing that I don't even remember the reasoning for. It felt like making a PR opened you up to an uncomfortably invasive inspection, one where the reviewers look down their nose at you and ask you to elucidate why you chose to wear the red shirt today instead of the blue one, as if you're supposed to have some grand unified theory of shirt colors when the actual reasoning is "I thought red would work and it did". How are you supposed to justify why you didn't do all the things you didn't do?

I think an issue is that there's always a different approach that could be used and in a perfect world perhaps we'd iterate endlessly until we found the best one. I've seen plenty of systems that have a design very different than what I think I would do, but as it turns out those systems work too.

I've honestly become less convinced that code reviews are the answer. Is there a possibility for learning reviewer <-> submitter? Of course. Do some teams find code reviews to be hugely beneficial? I would assume so. But I don't know if an organization-wide mandatory absolutely-zero-exceptions is the way to go.


I share this sentiment about code reviews. Too often, they are time-consuming discussions over trivialities and/or taste. Also, like you said, it's way too easy for someone to ask one-liner questions which require long and time-consuming explanations. I would be fine with doing code reviews with people who I consider reasonable (e.g. I hand-picked them for a team). Otherwise, it is often a drag which doesn't improve the product that much.

At work, and especially when you are new, you are at the mercy of others in several ways, to an extent that usually eclipses almost anything else we expose ourselves to in our lives.

Being reviewed can thus be experienced as opening yourself up to critique by essentially anonymous "kingmakers", who at any time can make you look like a clown if they want to. This experience might not be rational, but I believe it's still extremely common, as an experience.

It's made worse by the fact that if the review culture is anything but great, it's not at all unlikely that some reviewers will try to take advantage of that - in one of many ways - in order to feel a bit better about themselves.

At our core most of us are social creatures, and thus most of us easily pick up on that danger/risk, including most that are otherwise not socially adept, as they have usually suffered bullying of some kind and have learned to avoid those exposed situations.


I never considered quitting, but it still feels bad. Putting your code out there for the first time feels vulnerable (sort of similar to publishing your writing or public speaking) and it feels bad to get shit on.

When you've been working for a while you can filter out the noise regarding formatting and other smaller issues (or, ideally, get autoformatting set up), and it gets easier to separate your ego from your code. When you're new though, and showing someone else your code for the first time it's hard to see a huge volume of what seems like substantial negative feedback on your code (and by extension yourself).

I realize the "right" solution here is that people shouldn't equate criticism of their code with criticism of themselves, but I think that's a huge thing to expect at first and contributes to making the field unwelcoming.


Your lead should really be setting you up for success on your first PR. I always made sure I paired with my new employees before they hit submit on their first few PRs. It is much easier to be empathetic when you are next to a person. It would also give me time to dive deep into certain aspects of our codebase that I felt they could handle in future sprints.


There are often multiple ways to skin a cat, with no particular way better in all respects than any other. Programmers end up designing code based on their own mental model, held in their head at that time, and would not really be able to explain that model to someone else. That someone else, in turn, has a different mental model of what the correct way is. Both ways are correct; just one of them is not what the reviewer would have done.

When the reviewer communicates this to the coder, this often comes across as criticism of code that is in reality perfectly fine.


I think one clear distinction is if the review makes you think (1) my code sucks or (2) these people despise me.

It's much harder to power through (2) than (1).


> I've seen cases where people got hundreds of comments (many of them minor, nitpicky) from more experienced developers and were discouraged by the sheer number of them.

Unfortunately many people stink at communication. The best thing a reviewer can do during a code review is ask questions. A reviewer who comes in and just says 'this is wrong' misses an opportunity to either learn themselves or actually teach the submitter. A much better approach is to ask the submitter why they did something certain way. What were their thoughts, and what did they see.

IME, asking questions leads to much better outcomes. Sometimes the submitter will just admit they were not thinking about anything and see their own errors. Upside, they found the issue themselves. Other times they'll explain some complexity or use case only they could see while deep in the code. Upside, I just learned something.

In general the best teachers ask questions. It doesn't put people's egos on the defensive, and tends to help people learn more effectively (the goal of any review IMO). I've seen this work on me in other scenarios like Jiu-Jitsu. My teacher will ask mid-roll or afterwards why I did something, no matter how stupid it was. Explaining my thought process helps me to better recognize the error, and I'm not immediately on the defensive. I'm then much more receptive when the teacher goes "that wasn't a terrible idea, but if you did this...".


Agreed. You don’t need fake praise but you should know whether something you did is just adequate or really good. You rarely have discussions about why something is good.


At Amazon, all code is reviewed. Comments are constructive and straight to the point. Standard courtesy applies, but anything more than that is left out. Unless there's something extremely interesting, overly cautious phrasing and "positive" comments are mostly noise. Also, people tend to specify when comments are nits.


Why the scare quotes around "positive"? I have lots of positive thoughts when doing code reviews. I don't write them into comments all that often, but sometimes I do. "Oh I didn't know about this API, nice find!" "Ah, nice approach, this is a big improvement." "Thanks for improving the test coverage!" Maybe this seems like unactionable "noise", but it isn't, it encourages future actions of the kind being positively reinforced. Plus it's free, it takes no time to write and no time to read comments like this, all it does is make the author's experience of reading the review a bit more pleasant. This is all upside, no downside. It would be crazy to discourage such a thing.


Depends how it comes across. If you're not careful it can come across as patronising.


Full disclosure, I work at Google, and I actually find positive comments mostly rare, but code review culture is very different team to team, so I can hardly speak for the entire company.

Personally I try to make them often, but I also try to be thoughtful about making them as to not come off patronizing like you mentioned.

Though a positive comment has never made me feel patronized in any situation, I think a good way to not come off as patronizing is to make the comment more personal than general, i.e.:

“This is a cool way to use the rest operator (or whatever thing), Ima steal this”.

Obviously it needs to be an authentic comment, but I think it’s a significant difference vs

“Great use of the rest operator!”

When the interpretation on the other end might be “oh so this person is surprised I know these basic language constructs”.

I learn things from doing code reviews all the time, and I like letting my coworkers know that I feel like I’ve gotten better at my job as a result of reviewing their contributions.

I think it’d be a shame if I didn’t let them know.


> it can come across as patronising

Sometimes, it is worth the risk of sounding patronising.


Agree, I love to point out a nice approach or a TIL, and I like that Google put it down in writing. If Amazon really considers it noise, it is very descriptive of their culture.


> Unless there's something extremely interesting, overly cautious phrasing and "positive" comments are mostly noise.

Not really. Positive comments help teach engineers which things they've done that conform to local best practices (and why) without them having to meticulously dig those up (assuming they're even documented). A lack of positive comments leaves engineers to learn them only by running afoul of them. Effectively it provides direction only when some threshold of badness is crossed, while leaving positive comments on good code (especially for new team members or junior engineers) provides a beacon pointing away from the badness threshold entirely.

Put differently: commenting on good code makes for swifter and less eventful code reviews by steering engineers away from the bad practices that make code challenging to review in the first place.

Also, it's just nicer to spend forty hours a week with people who demonstrably appreciate their peers' good work.


A code review is not the time and place to leave positive or convoluted comments. A code review is there to find issues and to report them clearly.

The only positive thing to report should thus be that all is fine if the reviewer did not find any issues.

Now, if you are conducting the review in a meeting then obviously you can make oral comments in passing. If you're using a software tool for reviews then all the comments should be on point. Nothing prevents you from talking to your coworker afterwards to spread some love if you want to.


> A code review is not the time and place to report positive ... comments.

Why not?

(Convoluted comments should be made more concise)


Because a code review is not to pat each other on the back, it is to inspect and report issues. It is already costly enough without going off topic.

As said, if you want to praise then you are free to do it offline.

This is a professional procedure in a professional setting, not warm words from an encouraging teacher at school... I don't want to have to go through comments that do not add any value to the exercise of finding issues, and I have never seen people leave such comments in 20 years.


> This is a professional procedure in a professional setting, not warm words from an encouraging teacher at school...

Citing a good practice in someone's code as "yes, please do more of this" alongside "don't do this please" is not, in my opinion, fluff.

> I don't want to have to go through comments that do not add any value to the exercise of finding issues, and I have never seen people leave such comments in 20 years.

So only pay attention to unresolved comments?


> Citing a good practice in someone's code as "yes, please do more of this"

That simply isn't the purpose of a code review.

Good practices should be documented externally, so you can check them consistently during code review ;) It's also quite useful to have a checklist when doing a code review.


I disagree, with a caveat. Assuming your review tool supports "Resolved" or "No Action Required" comments (ours does), it's rather easy to distinguish something that is informative from something that is actionable. I am now recalling that some review tools don't distinguish open/unresolved comments from resolved or informative comments, which would make this more of a trade-off than an obvious win.

People call positive comments noise? They put scare quotes around “positive”? This field really scares me sometimes.

Communicating positive feedback is crucial for teamwork, teaching, and passing ideas on.


Depends on level of seniority and amount of time the person has been on the team. Positive comments to a senior engineer about using a hash map instead of doing a huge linear scan are just noise.

Nurses don’t compliment each other for using a new pair of gloves on each patient.


If this is representative of the Amazon engineering culture, I'm gonna add this to the (already overflowing) pile of reasons I would never work there.

Feeling valued is linked to job performance. And if I notice any of a) a generally interesting piece of code b) the engineer being a boy scout and fixing something in that particular module to make his changelist better c) a novel/comprehensive way of testing the code automatically d) elbow grease to just go the extra mile in terms of doing a great job (without adding unnecessary complexity) I will call it out in the code review along with the regular 'fix this' feedback. As a principal engineer my word carries some amount of weight and I truly want the person to feel good about their work when they deserve it.

Another thing I often do is add a 'thank you for taking the time to do this' whenever someone slightly decreases the amount of tech debt (by refactoring, or removing code/complexity) when it was obvious it wasn't absolutely necessary to get their work item completed. Basically, whenever someone shows they're thinking strategically rather than tactically about their work, I want to make sure that person knows that I noticed and that I value that.

People aren't robots, we're all professionals and we all like to feel good about our work.


I did a pretty reasonable chunk of C++ code once years ago. I was junior level. Wanted a good review.

I got 21 comments to remove blank lines and none about the code. Thanks. Really useful. This was the team of "experts" on the code base.

Then a principal engineer reviewed some other code and oh look, actual useful comments.

Nitpicking can be worse than useless. It obfuscates or ignores real issues in the code, and it usually seems to be a symptom of the reviewer not being competent enough to actually review the code properly.


Sometimes people don't want to actually review but feel they have to leave comments.

Trivial comments are a sign of that.


I'd add two things, from a decade of experience at Google:

Review code in this order: protocol buffers, unit tests, headers, implementation. It's common for a new employee to be an expert on C++ or Java or whatever language, but it's very uncommon to meet anyone who knows how to define a decent protocol message. The tests should give the reviewer a very clear idea of what's happening in the code (and this should be congruent with the description of the change); if not, send it back to the author at this point. The headers should contain clear interfaces, types, and comments and should not contain anything that's not part of the API (when this is technically possible). Finally look in the .cc file; at this point the reviewer should see things they were already expecting and no funny business.
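
(To illustrate the "headers should contain clear interfaces, types, and comments" point, here's a rough, made-up sketch of the shape I mean; the class and names are invented for illustration, not taken from any real codebase:)

  // user_store.h -- a made-up sketch: only the public API, with comments, is visible.
  #ifndef USER_STORE_H_
  #define USER_STORE_H_

  #include <cstdint>
  #include <memory>
  #include <optional>
  #include <string>

  namespace example {

  // Maps user ids to display names. Not thread-safe; callers synchronize.
  class UserStore {
   public:
    UserStore();
    ~UserStore();

    // Returns the display name for `id`, or std::nullopt if the id is unknown.
    std::optional<std::string> FindName(int64_t id) const;

    // Inserts or overwrites the display name for `id`.
    void Upsert(int64_t id, std::string name);

   private:
    struct Impl;                  // implementation details stay in the .cc file
    std::unique_ptr<Impl> impl_;
  };

  }  // namespace example

  #endif  // USER_STORE_H_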

Any claims about speed, efficiency, or performance whether in comments or during the review must have accompanying microbenchmarks or they should be deleted. Wrong ideas about software performance abound, and even correct beliefs become incorrect with time, in which case the microbenchmarks are critical to evaluating the continued value of the code.
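
(For anyone who hasn't written one: by "microbenchmark" I mean a small harness like the sketch below, here using the open-source Google Benchmark library. The function and sizes are made up; it only shows the shape of the evidence being asked for.)

  #include <numeric>
  #include <vector>
  #include <benchmark/benchmark.h>

  // Hypothetical function whose "this is fast" claim needs evidence.
  static long SumAll(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0L);
  }

  static void BM_SumAll(benchmark::State& state) {
    std::vector<int> v(state.range(0), 1);   // input size chosen per run
    for (auto _ : state) {
      benchmark::DoNotOptimize(SumAll(v));   // keep the call from being optimized away
    }
  }
  BENCHMARK(BM_SumAll)->Range(8, 8 << 12);   // sweep input sizes from 8 to 32768

  BENCHMARK_MAIN();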

When sending a changelist for review, always clear all automated warnings or errors before wasting the reviewers' time. Nobody wants to see code that doesn't build, breaks a bunch of tests, doesn't lint, etc.


Re: performance claims, well put. I often see coding decisions and review claims made on untested or outdated facts. For Java specifically, the three most common things I run across are:

1) Decision to use some kind of 3rd party library that purportedly speeds up handling primitive types (Trove specifically. Almost never worth it, and it's extra painful because it doesn't plug in well into the rest of the standard library).

2) Using mutable objects to avoid the inefficiencies of allocation and copying with immutable patterns. Can lead to a brittle stateful design with hard to track down bugs.

3) Reusing serialization artifacts across the call stack to save copying/allocation, again like 2). Now you end up coupling the API interface to layers deep in the call stack making things harder to modify.

Also best review advice I got and follow: be the first reviewer yourself! Looking at a nicely formatted diff really lets you see your code in a new light, and usually I end up doing several iterations before I run out of things to tweak. Things like forgetting debug log statements or commented out code, or dev settings, as well as other easy but effective improvements.


> Also best review advice I got and follow: be the first reviewer yourself! Looking at a nicely formatted diff really lets you see your code in a new light, and usually I end up doing several iterations before I run out of things to tweak. Things like forgetting debug log statements or commented out code, or dev settings, as well as other easy but effective improvements.

I've been enjoying lots of comments in this thread, but unequivocally THIS. My company (for no particular reason) has two code review tools. Each team picks which one they like. Early in development I'll shelve a CL into the tool my team doesn't main, and just review and annotate diffs. It's a good place to keep TODOs and notes to myself.

Additionally, if I share it with my team, it's clear that it's "pre-review" quality and not to nitpick. "WIP" in the subject line helps, but being in an entirely different application really solidifies the point.

Then, when ready to go into real review, I've got a really good sense of how the CL looks as a diff, and where any remaining annotations need to be placed (that aren't truly a comment).


> Any claims about speed, efficiency, or performance whether in comments or during the review must have accompanying microbenchmarks or they should be deleted.

That's way too strict. If I want to suggest moving a statement that's inside a loop to the outside, I don't want to waste my time writing microbenchmarks showing that it improves performance.

If the suggestion is complicated or non-obvious, and it's in a really critical part of the code, then sure, write a benchmark. Outside of that though, it's rarely a good use of developer time.
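
(For concreteness, the kind of trivial hoist I mean looks roughly like the sketch below; the names are invented, and a compiler may well do this for you when the called code is visible and side-effect free.)

  #include <string>
  #include <vector>

  // Made-up helper; imagine it is non-trivial but returns the same value every call.
  static std::string BuildPrefix() { return "item: "; }

  // Before: the loop-invariant call runs once per element.
  static std::vector<std::string> LabelAllBefore(const std::vector<std::string>& items) {
    std::vector<std::string> out;
    for (const auto& item : items) {
      out.push_back(BuildPrefix() + item);
    }
    return out;
  }

  // After: the invariant call is hoisted out of the loop and runs once.
  static std::vector<std::string> LabelAllAfter(const std::vector<std::string>& items) {
    std::vector<std::string> out;
    const std::string prefix = BuildPrefix();
    for (const auto& item : items) {
      out.push_back(prefix + item);
    }
    return out;
  }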


I'm not saying you should write code in a performance-oblivious fashion. I am saying that if you want to do something non-canonical, or you want an exception to style guide or other established guidelines, or if you write a comment in your code that says "This function is faster than std::foo", then you must have evidence. In the absence of evidence, adhere to normal rules.

I came across a comment in google base libraries that said "This is faster because cache line is 32 bytes". It had been written by a very famous engineer, and it was even true in the days of the Pentium III processor. But at the time I found it it was not only false but the code as written was slower on modern CPUs than the shorter and totally obvious equivalent.


> I came across a comment in google base libraries that said "This is faster because cache line is 32 bytes".

You would want to ban that sort of comment?

If you see a piece of code written oddly, then someone telling you why they wrote it that way is very helpful. If later you have to refactor it then it's doubly nice to know the reason it was written that way doesn't apply any more.

I suspect your problem is with premature optimisation rather than commenting, and if so I imagine the majority of programmers would share your views. But if that is the case, banning comments that make it plain something may have been prematurely optimised doesn't seem like a good way of solving the problem.


> If the suggestion is complicated or non-obvious, and it's in a really critical part of the code, then sure, write a benchmark. Outside of that though, it's rarely a good use of developer time.

If it isn't worth your time to write a benchmark then it isn't worth my time to change the code.

Everyone thinks their own performance tweaks are simple and obvious, but I've rarely seen measurable performance boosts from comments in a typical PR.


You have to be careful though. The compiler can do a very good job of lifting statements (and many other optimizations), so it is generally a better use of developer time to try and write code that is clear than to write code that is believed (without evidence) to be more performant.


I agree - most folks overestimate the effectiveness of microbenchmarks. I work in a field where microbenchmarks often feel useless: computer graphics. Often we have to consider the entire pipeline of work. Writing to a texture in Stage 3 may seem perfectly fine, but it could be thrashing the cache in Stage 4 when the texture is being read from. Benchmarking them separately misses this.


You're right that many microbenchmarks can be noisy (and the environment you run them on can introduce more noise).

But if you're attending cppcon, two engineers in Google's production toolchain organization are giving a talk on "Releasing C++ Toolchains Weekly in a 'Live at Head' World" where they'll discuss (among other things) how we use various microbenchmarks to catch performance regressions.

https://sched.co/Sft4


Ah I’ll take a look!


I want to say that yes, if it's simple and straightforward enough or obvious then just do it - it should have been written like that in the first place.

BUT I think a more important consideration is readability and clarity. Make it work, make it pretty, make it fast - in that order. If there was anything I learned from a Go course some time ago (Ultimate Go iirc), it's that speed is a natural result of readable code. Plus if your code is good and well structured it becomes trivial to benchmark, identify and resolve any performance issues.


I agree that "must" is too strict. If I replace linear search with binary search, or a hashmap/hashset lookup, I don't need to write a benchmark to prove it improves performance. There is math, logic, and Big O analysis that allow us to reason about performance and speed without microbenchmarks.


If you did that to code in Google search you’d be required to run a special load test to prove you didn’t screw it up. Nobody should assume either of the things you just implied were obvious. In fact linear search is guaranteed to beat binary search for short vectors. O-complexity analysis is good for undergrads but the ONLY aspect of software performance that matters any more is cache behavior.


bt848 already said it, but you picked a really poor example. In many real world scenarios, a linear search beats a binary search due to a lack of overhead.

Along those lines, in C++, a vector very often performs better than a theoretically better data structure. I learned C++ fairly well from a very experienced guy in the company, and he said that you should always benchmark against a vector. I suspect if I were on his team and he were reviewing my code, he wouldn't let the code pass unless I had benchmarks showing that whatever data structure I picked was faster than a vector.

He wasn't against other data structures - he just wanted proof they would perform better. In quite a few cases, they didn't.
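
(A rough sketch of what "benchmark against a vector" could look like, using the open-source Google Benchmark library; the sizes and lookup pattern are invented, and real numbers will depend entirely on the workload.)

  #include <algorithm>
  #include <set>
  #include <vector>
  #include <benchmark/benchmark.h>

  // Linear scan over a contiguous std::vector.
  static void BM_VectorFind(benchmark::State& state) {
    std::vector<int> v(state.range(0));
    for (int i = 0; i < state.range(0); ++i) v[i] = i;
    const int needle = static_cast<int>(state.range(0)) - 1;  // worst case: last element
    for (auto _ : state) {
      benchmark::DoNotOptimize(std::find(v.begin(), v.end(), needle));
    }
  }
  BENCHMARK(BM_VectorFind)->Arg(8)->Arg(64)->Arg(4096);

  // The "theoretically better" node-based std::set, for comparison.
  static void BM_SetFind(benchmark::State& state) {
    std::set<int> s;
    for (int i = 0; i < state.range(0); ++i) s.insert(i);
    const int needle = static_cast<int>(state.range(0)) - 1;
    for (auto _ : state) {
      benchmark::DoNotOptimize(s.find(needle));
    }
  }
  BENCHMARK(BM_SetFind)->Arg(8)->Arg(64)->Arg(4096);

  BENCHMARK_MAIN();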


So now the requirement is not just to run a micro benchmark, but to run one with inputs that approximate the distribution of production inputs, along all possible axes. For many sorts of projects, this is totally unreasonable. It's easy to figure out how the code performs with a given input, it is much harder to figure out what the inputs really are like.

In this particular case I'd expect the change to be motivated by profiling of the real production instances. And that makes it pretty obvious how the change should be evaluated. "We're spending more time than is reasonable in linear scans, so the worst case inputs must be worse than expected. Switch to a data structure more suited to large inputs, and see if CPU use improves in the next rollout."


Yes, the size of the data matters, and linear search may be the right choice for many use cases and may outperform more advanced data structures. You learn it on the job as it is not a topic usually studied in universities. But I don't agree that we need to completely throw away theory, O complexity analysis where N - input size is the main variable. Microbenchmarks are hard to write correctly. Recreating the scenarios occurring in production is not always possible in the microbenchmark settings. You can write a microbenchmark that shows your code works great, but fails to perform in production because the input data is totally different. bt848 mentioned load test, which is not the same as microbenchmark. Load test that simulates production settings is a better tool to validate the optimizations.

>But I don't agree that we need to completely throw away theory, O complexity analysis where N - input size is the main variable.

No one here is agreeing with that.


> When sending a changelist for review, always clear all automated warnings or errors before wasting the reviewers' time. Nobody wants to see code that doesn't build, breaks a bunch of tests, doesn't lint, etc.

Generally true, but not when prototyping. As a reviewer, I would like to see the general idea before spending time on cleaning up. If the grand design is wrong, time spent on cleaning up would be a waste.


That's an oddly specific and narrowly focused set of recommendations to add to a set of guidelines that are otherwise very general and apply to a broad range of circumstances.


Sorry. I was responding mostly for this audience to the section about handling pushback. One common form of pushback I experienced was from authors who wanted an exception on performance grounds. These people were almost universally misled, so I honed a specialized skill of disabusing wayward C++ authors of naive beliefs about performance.


No it's very practical advice that I use day to day as well. Basically check the interface first. Implementations can be fixed but contracts are harder to change when clients depend on it.


The review should simply be gated on passing the automated bits (tests/lint/warnings/style checkers whatever).


It would help if they mentioned what a "CL" is somewhere before using it ubiquitously.


It refers to "changelist". Google uses a Perforce-like VCS internally, and they kept the Perforce terminology: https://www.perforce.com/perforce/doc.051/manuals/p4guide/07...


aka "Pull Request" in Github parlance.


aka "Merge Request" in Gitlab vernacular (which makes more sense than "pull request" in my opinion).


aka "patch" for the Linux kernel development community


aka "diff" or "revision" at FB .)


aka "CR" or "Change Request" at Amazon


Yeah, merge request seems like the best terminology of the ones I've seen.


A Perforce changelist is more like a git commit than a GitHub pull request.


The way Google uses it, a "CL" means both "a thing you sent out for pre-commit review" and "a submitted change". It's very similar to a GitHub PR, if you use a master branch and always squash your PRs when merging to master.


They do define it, see https://google.github.io/eng-practices/. It means "change list."


Where does that define CL?


Under "Terminology"


My bad. For some reason I was looking at this https://google.github.io/eng-practices/review/developer/ after opening a new tab.


It's defined on the front page, <https://google.github.io/eng-practices/>, under Terminology:

CL: Stands for “changelist,” which means one self-contained change that has been submitted to version control or which is undergoing code review. Other organizations often call this a “change” or a “patch.”


It's very interesting that so much of it is about the social aspects of engineering - a lot of it kind of reads like "don't be a jerk". In CS engineering classes, where presumably we would learn to become great engineers, I don't recall learning about any of this, and instead I remember the emphasis being on technical knowledge and accomplishment. I'd probably have been a better engineer in my early career if I'd understood how much the ability to exchange clear, constructive, and nonthreatening critique with peers actually mattered. It's reminiscent of how training in technical fields such as architecture and medicine often put the emphasis on technique over the service aspects of the job such as client interactions and bedside manner.


>It's very interesting that so much of it is about the social aspects of engineering

Code reviews are mostly in the social domain and not the technical domain. Other than bugs and performance[1], everything in a typical code review is an opinion. Hence, you need guidelines on how to handle differences in opinion. Different work environments structure code reviews differently, but often the worst is where a reviewer finds problems and the code review cannot be marked complete until the reviewer is satisfied. This often leads to "I have to change my code to the reviewer's preferred style" instead of "The reviewer found real issues". Other models exist and are better.

[1] Actually, performance conversations are very often based on opinions. Things like what level of performance is acceptable, etc.


Your comment resonates with me and a book I recently read - Coders by Clive Thompson. It too points out how the emphasis has been on pure code, pure logic, merit over individuals, code over opinions, and of course the hero developer who doesn't want anyone else to work on their code.

Which of course is not reality. I mean I get it, I too would, I think, love to be a hero developer and just immerse myself into code for weeks on end, but that's simply not the reality. I don't write code, I contribute to a customer-facing website / shop, and at the moment my time and energy would be better spent adding a date picker than to fantasize about rebuilding the back-end in Go.


Code review is a social activity and when you frame it as such it’s not so surprising that a lot of the discussion of it involves these social aspects.

I definitely agree that my experience was the same w/ regards to an almost exclusive focus on the technical aspects of programming. I only had one CS project in my senior year that involved building something with another person and even in that case it was only one other person.

I don’t think this problem is simple to solve or specific to programming. On the one hand, educators have to assess individuals, so it makes sense that they’d want to isolate work to the individual. On the other hand, work after education is never (I hesitate to generalize but this seems like a safe generalization) an individual activity, and the rare dreaded group project in school is the norm afterwards.

I am not sure what the solution to this problem is. Even in a situation where you have version control and can look at each individuals contribution to the code itself, unless 100% of the discussion around that code happens in comments on a platform like github, it’s still difficult to assess things like how much the input of one individual contributes to the output of another.

In addition to this, teaching effective approaches to technical problems is easy compared to teaching effective approaches to human interactions.


> I only had one CS project in my senior year that involved building something with another person and even in that case it was only one other person.

I had many group projects in school, but they didn't set it up so that we'd review each other's code. We'd just break it up into modules, split the work accordingly, and each worked on a different part.

A more fundamental limitation of school projects is that they have by definition a limited lifespan, so that the notion of code debt, the importance of ensuring that code be readable by current and future colleagues, etc. are irrelevant - in all the projects I've done in school, even group ones, I was the only one reading my own code and it would never be run after class finished. In such a situation, "improve overall code health" makes no sense. You write only what you need to pass the class, which is usually some kind of demo feature.

Maybe making contributions to well-established open source projects should be encouraged as part of the curriculum, at least to experience the receiving end of it...


> It's very interesting that so much of it is about the social aspects of engineering - a lot of it kind of reads like "don't be a jerk". In CS engineering classes, where presumably we would learn to become great engineers, I don't recall learning about any of this, and instead I remember the emphasis being on technical knowledge and accomplishment.

Yes. Dorm life is supposed to provide the social aspect. And to a lesser extent, real group assignment/projects.


>In general, reviewers should favor approving a CL once it is in a state where it definitely improves the overall code health of the system being worked on, even if the CL isn’t perfect.

This is a great rule of thumb


It’s a nice rule of thumb for CLs (changelists) that provide a bug fix or refactoring. But I think it’s a terribly useless rule of thumb when evaluating a CL that just implements a new feature.

Imagine some new features such as “support exporting to CSV” or “auto-generate avatars for new users”. Implementing them will necessarily add more code, and more complexity, to the codebase. And adding more complexity makes the code harder to work with – in other words, harms “the overall code health of the system”. Businesses develop new features because of the improvements to the product they bring and the reputational or monetary gains that come from that, in spite of the damage done to the codebase. The only way you could get a new-feature CL to pass by the standard you quoted (the standard of “improving the overall code health of the system being worked on”) is to refactor existing messy code written by someone else, and that would not be sustainable.

Does anyone see a way to interpret the rule, or have a preferred variation of the rule, that makes it useful for evaluating new-feature CLs?


Yeah, that was the first section I read and I raised an eyebrow, I hadn't thought of it like that yet. I can see how one would get paralyzed by a focus on perfection; I too have left idk, dozens of comments on a feature PR of a few hundred lines of code.

Mind you that was before Prettier, nowadays a lot of these niggly things are automatically identified and fixed in editors and a pre-commit hook.


In the "What Is Not An Emergency?" section:

> It is the end of the day on a Friday and it would just be great to get this CL in before the developer leaves for the weekend.

I laughed out loud because it reminded me of so many times I have seen it happen and then someone had to fix it over the weekend.

Who shares the same experience?


They didn't give the two examples that are most common inside Google: Google I/O is coming up, or annual performance reviews are coming up.

Strangely they also list as "not an emergency" rollbacks of clearly broken code, but that's an exception to review rules inside Google. Anyone can do a pure rollback of a change without getting the approval of the owners of the code, and there are automated tools that will roll back changes that appear to have broken at least 1000 tests, without human review.

Edit: There's even a button right in the code review web UI to roll back a committed change.


> Google I/O is coming up, or annual performance reviews are coming up.

As long as everyone admits it, sounds like those are actual emergencies :)


> at least 1000 tests

The threshold for this seems at least a little too high, doesn't it?


They have a lot of tests :-) Abseil-cpp alone has 2400 tests, at least in the open-source tree.

https://github.com/abseil/abseil-cpp/tree/master/absl/base


I was thinking low :)


Worse: The developer wants to get this checked in before they go on vacation.

And they really want to, because they don't want to have to do a bunch of merges when they come back...


Some places I've worked explicitly won't push/deploy/similar on Friday.


I've mostly worked for small industrial light manufacturers. First job I worked at they were always late with shipping stuff. And always taking stuff down to the UPS office after hours. After watching this for a few years I got the shop manager to adopt a rule. If it's not ready to be boxed up before lunch, it's not shipping today.

After they started enforcing that they got a lot more productive and started shipping on time much more often. Turns out the mad rush to meet an arbitrary deadline absolutely kills productivity.


Forgive me, I don't understand your last sentence. It sounds contradictory vs "they got a lot more productive".


What would happen is the shop would start thrashing with people switching from one task to another trying to hurry one unit through production. You eat the setup time for each production task for exactly one unit.


Yeah, I misread your comment as saying they rushed units in before lunch :D :D


They got fewer returns/defects when they stopped rushing.


I realize that now. I initially read it as they rushed to get things done before lunch every day. Oops!


Your choices are often between these two:

1. Higher throughput, higher latency and correctness on one side.

2. Lower throughput, lower latency and some risk of faults on the other side.

It's like the difference between TCP and UDP.

The question is: Where do you need to be? What are your requirements?


I worked at one (horrible) place where we explicitly only deployed on Friday afternoon.

The logic was we have fewer users over the weekend, so if something goes wrong, fewer people will notice.


It depends on what company you are.

Someone who administers some SAP thing for a huge supermarket chain tells me they do all of their upgrades on weekends because their users aren't at work.

Makes sense for them, but it would be a crazy thing to do at the consumer-focused interweb company that I work for.


I rewrote an app that used to only deploy on weekends because we do a lot of processing on weekday afternoons and nights. I changed the deploys to the mornings.

I justified it by saying that if something breaks we have everyone there to fix it. Plus I'm not going to spend my weekend working if I don't have to.

My app has global traffic. Sometimes stuff breaks Sunday night and Asia is the first to find out.


Ugh, I know the feeling - one client I worked for would only do deploys a few times a year if that, and only at night. I mean we'd get extra pay or free time for doing night shifts, but really, it's not good practice.


We had no-deploy Friday which made support & product people feel happy and devs dread the pileup on Monday. It has evolved into "don't deploy higher impact things on Friday" and "try not to deploy at all Friday afternoon". That gives us a half day to hear about issues and fix them before the weekend.


This implies they don't have a very good automated deploy, monitoring, and automated rollback solution.

The test of good CD is that you're OK with releasing on Friday at the end of the day.


I agree. I've never worked anywhere with really heavy-duty CI, as in I've never worked somewhere where automated rollbacks were either trusted in all cases or business-acceptable once something was live in production. (Which is to say I haven't worked at places that had really significant scale.) And CI is only as good as your test coverage, which is to say if something esoteric is overlooked in your test suite...

You are assuming that monitoring and rollback solutions are bug free, always on and have 100% knowledge of what's going on. That is never true.

That's a good policy. We currently don't deploy on Friday afternoons.


But it's such a trivial change...! What could go wrong?


But then Thursday just becomes Friday...


That's fine, because when something breaks the day after you've pushed, it's Friday and everyone's in the office. People are still going to rush the deadline, but putting the deadline earlier means you have time to deal with stuff that breaks after the deadline.


We call it a chuck-in.


What is good design, what is good naming, what is over-engineering, etc., etc. How on earth can a group of people agree on all those points? Look around here on HN, all those discussions. Do we have consensus? No, and that is OK, but it's not OK in a team doing code review.

There is only one way a group of people can have healthy code reviews, and that is when they like each other enough to accept what they don't agree on. But man, if one or more devs are not your best friend, you have a problem there. Because it is not about who has the best code or ideas, it's about who fits best in the group, no?

But I totally agree code that comes into a code-base needs to be checked. And IMAO only by 1 developer, the LEAD who is responsible for the code-base. No democracy, no endless discussions, no 6 people spending 1 hour a day to do this.

Personally I prefer to do only gigs where there is a lead dev. I try to stay away from the popular self-managing teams. I've seen enough horror, where a good developer had to leave the team because they were constantly frustrated: endless discussions about trivialities, and in the end it all became personal. Is it only me who sees this?


I agree with you. This "Agile" cancer that has infected the industry needs to die now. Someone here called it collectivized micromanagement and that's the best term I have heard for what's become of "Agile".

I don’t think you should accept bastardizations of terms into your lexicon.

Yes, acknowledge when someone means “micromanaging” when they say “agile”, and act accordingly. But don’t redefine the word in your own head, otherwise how can you even speak?

Like, what word do you use now for the basic principle of agile development now that you’ve changed it to mean micromanagement in your head?


I'd say Agile development is about "agorisation" of the development process. "Agorisation" is the act of making a place (virtual or not) amenable to proposing, discussing, and setting up and in motion categories that will enhance this development process in both qualitative and quantitative aspects, such that agents who collaborate into it can live through this process like a citizen through the city.

----------------------------------

category (n.)

1580s, in Aristotle's logic, "a highest notion," from Middle French catégorie, from Late Latin categoria, from Greek kategoria "accusation, prediction, category," verbal noun from kategorein "to speak against; to accuse, assert, predicate," from kata "down to" (or perhaps "against;" see cata-) + agoreuein "to harangue, to declaim (in the assembly)," from agora "public assembly" (from PIE root *ger- "to gather").


I don't view the term "collectivized micromanagement" as a bastardized term for what's going on at most places touting themselves as "Agile". In fact, I think it's a much more honest and accurate term for what's going on at those places.

Now, if you've been fortunate to work in places that more closely adhere to the original ideas and concepts that's great; count yourself fortunate.


> What is good design, what is good naming, over-engineering

A Philosophy of Software Design by John Ousterhout is a good book on these questions.


> what is good naming

It's mostly a "I know it when I see it" kind of thing. We know most of the time what bad naming looks like - it creates confusion, tells you nothing about the variable ("i", "n", "val", etc). I guess this is not a science (yet?) and we're currently at the stage of defining anti-patterns, out of which maybe we'll figure out some actual desired patterns.
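
(A toy before/after, made up purely for illustration, of the gap between names that tell you nothing and names that carry the intent:)

  #include <vector>

  // Before: the reader has to reverse-engineer what f, v, n, and val mean.
  static double f(const std::vector<double>& v, double n) {
    double val = 0;
    for (double x : v) {
      if (x > n) val += x;
    }
    return val;
  }

  // After: the same logic, but the names state what it does.
  static double SumAboveThreshold(const std::vector<double>& samples, double threshold) {
    double total = 0;
    for (double sample : samples) {
      if (sample > threshold) total += sample;
    }
    return total;
  }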


> if a reviewer makes it very difficult for any change to go in, then developers are disincentivized to make improvements in the future.

This is a problem I see far too often but it’s rarely talked about. Too often, engineers conflate “quality” code with “their” code. Code review turns from “what should we be doing here?” into “what would this reviewing engineer name this one unimportant variable here?”

There needs to be a happy medium between velocity and quality, and increasing velocity doesn’t necessarily mean decreasing quality.


I have never not seen code reviews end up being a massive social/cultural pain point for most people involved. Most people hate getting their code reviewed in depth. In situations where there isn't just one lead dev who is responsible for reviews, this leads to "Merge Request symbiosis", where two people uncritically approve each others requests so they don't have to deal with the third.


Code reviews by senior people are massively helpful for junior devs. It's one of the perks of being in a good big company that very few startups can replicate.


Agreed. I once left a job in large part because there wasn't anyone else to review my code.

It seems incredibly self-defeating, to quit over a lack of CODE REVIEW at any point in someone's career.

Hiring someone just to review your code is not a sane business decision, so you might want to think about how you overvalued that aspect. I would be very surprised if someone could make a business out of 3rd party code reviews, but stranger things have happened.


How is that self defeating from the programmers perspective? Code reviews can contribute to growth as a developer, ultimately helping ones career. When a developer desires such reviews, and a company is either unwilling or unable to provide them, why should they stay? There are plenty of other companies around to work at.

So suppose you met someone who graduated uni a year ago, was working as the sole developer at a non-profit, and wanted to learn to be better at recognising and writing well-structured code. Suppose they then asked you if they should stay at their job or go work for a company where they were working with other software engineers and you said they shouldn't. How would you advise them to develop their sense of good code style?

> I would be very surprised if someone could make a business out of 3rd party code reviews, but stranger things have happened.

Depending on how you define "make a business", this already happened. There are paid for code review and vulnerability scans. Sadly, I can't remember the companies that did them. I think one of them was by IBM... I saw them applied to new software (when it was nearly done) at two big, European companies. They were mostly worthless: The insights were barely above what Sonar gives you and many findings were "never gonna happen" edge cases.


I agree; my rule of thumb is to only review code if I was asked by the author to do so, and never to offer to review someone else's code.

Code is a bit like music, there is very little right or wrong, but a lot of flavours and opinions.


Very little wrong? You haven't seen enough code! I have seen everything in merge requests: From liquibase scripts that would have wiped the production database to production secrets in yml files.

Sure when you're the only one working on a project. Otherwise, the only right way of writing code is that agreed upon by the whole team. Any personal opinion, while affecting the 'opinion' of the team, does not matter.

Here's my workflow, and it works very well to get everyone on the same level:

- Each night, I go over all the commits of the day and do a code review.
- Each morning at 9:00, we go over all the comments on the commits with everyone, talk about them, and make sure everyone understands.

This allows me to explain more advanced or new concepts that one developer uses to make sure everyone understands. It allows me to introduce improvements and explain why they are beneficial, and it allows me to signal when people don't follow the standards we impose; giving not only the person who did it wrong a refresher, but it makes sure everyone is reminded of the way we should write code, etc.

The feedback has been amazing and everyone loves it because it helps everyone grow and they feel like they are allowed to focus on quality; due to time pressures, coders tend to let standards slip when under pressure. When you focus on quality every day, it stays with them, because they know, if they don't deliver quality, it will be in the daily review. So far, this has not hurt production speed at all and quality has gone up a lot since we started doing this.


This is an interesting approach. How many engineer hours do you think you spend each day doing this? If you're reviewing CLs as a group, it seems like it could get very expensive very quickly, no?

They last anywhere from 15 minutes to 90 minutes, depending on the topics, and it replaces a lot of other meetings/sharing that normally takes place.
