I worked at Google from 2014-2018. I'm extremely skeptical of these numbers.
Based on my experience working on three different teams and sending changelists to many other teams outside my own, I'd expect something more like 3-4 hours for small changes (<50 LOC) and 1-2 days for large changelists (500+ LOC).
Google has so many automated changelists that I'm wondering if that skewed their numbers. Like if someone just clicks through 10 auto-generated CLs and approves them all, did that count as a bunch of near-instant review cycles?
That said, I was a big fan of Google's code review process. I know people complained about the unnecessary level of process and the hassle of getting readability, but I found it all a great way to spread knowledge across teammates and improve programming skills.
If you want the least bias, you should find a reviewer who hasn't been deeply involved in designing the change. And for pieces of code that are small enough, reading and getting up to speed shouldn't take that much time. I prefer sending reviews to engineers who are familiar with that area of the code, but we don't do pair programming, so the implementation hasn't been seen before.
We do, however, do a lot of design work before a single line of code is written.
*also this is at google.
Sure, you can split out 9 little cleanups and typo fixes while working on a larger change, but it's still days to get the main change approved.
No, it means there is a cultural expectation / etiquette that you prioritize keeping your teammates unblocked by reviewing their code quickly, even if it means putting aside your own coding to do so.
This depends on the particular language and context, but simple, isolated, and usually small changes are preferred and encouraged. This allows relatively quick review for lots of things.
If I see a change that modifies a single function and adds a test to validate the change in functionality, that's very quick to review.
Modern Code Review: A Case Study at Google [pdf] - https://news.ycombinator.com/item?id=18035548 - Sept 2018 (64 comments)
While the number of changes is quite large, the sample size seems rather small for the interviews, and especially for the survey responses.
I was writing JS, while my sole teammate was a firmware engineer. He was not willing or able to meaningfully review my code, so I had to try to scrounge random reviewers from other teams. They naturally de-prioritized it, so it took forever. And because they were not involved in my project, they would only have local stylistic feedback.
Months in, I finally figured out that other engineers were just self-reviewing: you can add yourself as a reviewer and just click merge. So I started doing that too.
Yes, there have been times when code review wasn't super useful, but on the whole I never experienced the kind of dysfunction you're talking about, and merging changes without external review is a pretty big no-no.
Where? This certainly isn't possible in google3 (and wasn't possible in 2015 either).
That said, some changes have since been made to streamline the code review process for people who don't have readability and don't have teammates with readability.
I never dealt with readability, because Android did not require it. In fact that was part of their internal recruiting pitch: fed up with readability? Come work on Android! Heh.
Culturally, it was worse when Andy was in charge, but it was still true when I left in 2018.
In my N years of experience, very few engineers are in an environment where only they have meaningful context on what they're working on. Generally any non-trivial project worth staffing is worth having two headcount work in the space, even if one is a TL with divided attention.
You can break glass in emergencies by committing code TBR, or "to be reviewed"; however, this automatically escalates to the owners of the code in question plus your manager and director, and all TBRs have to be resolved by actual review within a short time. An author cannot use TBR to submit arbitrary code: they have to be an owner of the code in question (personally or transitively included in the OWNERS file) to TBR it.
You can read about this system here: https://cloud.google.com/security/binary-authorization-for-b...
A much easier process would go something like this: Suppose my teammate has readability in language X. They can tag review comments as readability-related or not. Once someone has N CLs under their belt, with the fraction of readability-related comments below some low threshold over the past M CLs, they are granted readability automatically, or only have to submit 1-2 CLs through a speedy readability process.
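The proposed auto-grant rule could be sketched roughly like this (a hypothetical illustration only; the function name, data shape, and the N/M/threshold values are all made up, not any real Google tooling):

```python
# Hypothetical sketch of the proposed automatic readability grant.
# All names and thresholds below are illustrative assumptions.

MIN_CLS = 20            # N: total CLs before auto-grant is considered
WINDOW = 10             # M: how many recent CLs to examine
MAX_READABILITY_RATE = 0.1  # "some low percentage" of readability comments

def should_grant_readability(cls):
    """cls: list of the author's CLs, oldest first, each a dict with
    'total_comments' and 'readability_comments' counts from reviewers."""
    if len(cls) < MIN_CLS:
        return False
    recent = cls[-WINDOW:]
    total = sum(c["total_comments"] for c in recent)
    readability = sum(c["readability_comments"] for c in recent)
    if total == 0:
        return True  # no review comments at all in the window
    return readability / total <= MAX_READABILITY_RATE
```

The appeal of a rule like this is that it turns readability into a byproduct of normal reviews rather than a separate gated process.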
Seems like once you've been certified, you don't need an extra person certified in 'readability' to review your code, which makes things easier.
1. LGTM (that is, any second set of eyes)
2. Readability Approval
3. Owners approval (similar to GitHub code ownership)
The CL author can provide readability and ownership: if I have Python readability and own the file I'm changing, anyone else can LGTM. But if not, at least one of my reviewers needs to provide the missing approvals.
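The three requirements above can be sketched as a small check (an illustrative model only; the `Person` fields and `can_submit` function are my own simplification, not actual Google tooling, and real OWNERS resolution is per-file):

```python
# Illustrative sketch of the three approval requirements described above.
from dataclasses import dataclass

@dataclass
class Person:
    has_readability: bool  # readability in the CL's language
    is_owner: bool         # covered by an OWNERS file for the changed files

def can_submit(author: Person, approvers: list) -> bool:
    """A CL needs: (1) an LGTM from any second set of eyes,
    (2) readability from the author or an approver,
    (3) ownership from the author or an approver."""
    lgtm = len(approvers) >= 1
    readability = author.has_readability or any(p.has_readability for p in approvers)
    ownership = author.is_owner or any(p.is_owner for p in approvers)
    return lgtm and readability and ownership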
There's a process for gaining readability in a particular language, where you send your normal code changes to be additionally reviewed by experts who give deeper feedback on language idioms/use. This process is contentious because Google is fairly strict about its style guides, and the readability reviewers' goal is to ensure that you are writing idiomatic code, not to get your changes submitted fast, so they'll often request changes and delay things (but this is known in advance, and for a high-priority change there are other avenues to get language approvals).
After you are granted readability, you no longer need approval from someone else with readability for changes in that language.
Personally, I found the process valuable for all of the languages I've gone through it in, but I can absolutely understand people's frustration, especially when dealing with deadlines or similar.
I didn't feel the same for Python though.
Essentially what it means is that, when writing code in certain programming languages, one of your code reviewers has to have proved that they're fluent in that language and the style guide for it.
Then they’ll finally rewrite the thing some day, after a massive and often unnecessary design doc filled with needless bike-shedding that ultimately proposes an overengineered solution to help the person make L+1.
Not everything, though. Infrastructure code and libraries are usually very high quality and very nice to work with.
As long as you’re only looking at like 30 lines it’s lovely. Then you zoom out to view the beast.
The paper itself says: "Employing lightweight, tool-based code review of code changes (aka modern code review)"; I think that substitution was a disservice. Practically all case studies are on modern processes so it's not really worth specifying.