
What Google Taught Me About Scaling Engineering Teams - danso
http://www.theeffectiveengineer.com/blog/what-i-learned-from-googles-engineering-culture/
======
hibikir
While I cannot judge how those practices work at Google, I have seen similar
practices fail miserably when applied in smaller companies.

Opinionated best practices stop being useful when the emphasis is on
opinionated rather than on best. If the cabal with the most political
influence is not also the strongest technically, standardization just
increases how much technical debt can be built up in a short period of time.

I have seen money spent on tools teams that went nowhere, for two reasons.
The first is that tools teams are attractive to people who do not want to
deal with customers, so they can attract the opinionated and antisocial. The
other problem is that generic tools really have to solve the problem very
well: if the tools team lacks engineering quality, you are spending a bunch
of money on infrastructure that will both fail to evolve at the speed of OSS
and get entrenched everywhere in your codebase.

Automated testing will only get you far if it is pragmatic. I have seen
millions of dollars spent on test automation that was extremely fragile and
never paid for itself, because it had an extremely short shelf life. Picking
the wrong tools for it makes it even worse.

Code reviews can be great, or they can be terrible. A code review that
searches for real problems in the code, or that genuinely considers
alternative solutions, is very valuable. But code reviews can quickly become
pissing contests used by people to impose their preferences on others: a
passive-aggressive way of asserting dominance over a team. You can see a lot
of that in proponents of the 5-minute code review: no actual analysis of the
code is done, but 5 minutes is plenty to use it as a tool for abuse.

So while the techniques described in the article can be very helpful, the
wrong implementation of them will just help you standardize on a monoculture
of bad engineering. So before implementing such things, think about how good
your engineering really is, and whether you are really better off making
people bend to your standards, or better off learning from the experience of
new people. Chances are you are not Google.

~~~
jrockway
You have to be careful about who you hire. If you do that well, then opinions,
code reviews, one shared repository, and so on, work great. If you hire a
bunch of assholes, then they won't work.

I do about 20 code reviews a week; there is never bullying and they are only
rarely rubber stamped.

(My favorite code review of my own code this week was one where I updated a
counter to use atomic.AddInt64 instead of sync.Mutex, but where I read the
counter, I just read the integer value directly instead of using
atomic.LoadInt64. Thirty seconds of the reviewer's time, and we avoided a
potentially difficult-to-track-down bug. No pissing match, no bullying. Just
better programs.)

~~~
kabouseng
20 code reviews a week? How many lines of code for each review?

~~~
jrockway
I looked in more detail and if I average it out over 2.5 years and only
include actual code I care about (not importing OSS packages, which I also
review), it's more like one a day.

I was too lazy to calculate a histogram, but let's go with ~100 lines a day.

~~~
kabouseng
Thanks for the reply.

------
remar
Speaking from my experience as an intern at Microsoft, I really wish there
had been more focus on training. I've heard a lot of praise about the quality
of codelabs from friends at Google and never received anything like that when
I was working. I was basically just given a project and expected to pick
things up by asking people or reading some half-outdated SharePoint pages.

From speaking with other interns it seems their experiences varied entirely
based on their teams. The impression I got was that it was really fragmented
across teams and that each team felt like it had its own way of doing things.

I'm not exactly sure how it is at Google but I've heard from a friend that the
entire company works on a single shared codebase. The project I was working on
had 3 different forks of the framework I was developing (that I knew of), each
maintained by different teams. Basically, the impression I got was that
everything just felt really fragmented and that I was only working with my
team rather than with an entire company.

I should mention that this was only my 2nd software development job ever, so
maybe this is normal for lots of companies and Google is one of the few doing
it right, but the experience didn't really leave me with much confidence in
the engineering environment/culture of the company.

I've heard tons of good things about the engineering culture at Google, so
I'm considering applying there.

~~~
mike_hearn
Ex-Google engineer here. The way Google does things is indeed very rare, your
experience at Microsoft is more typical.

Note that the path Google chose isn't easy! As the codebase got larger and
larger they had to basically design and build their own build system, a
distributed unit testing engine, and custom code refactoring and hyperlinking
tools, make major IDE modifications to Eclipse just to make projects
loadable, and eventually even build their own version control system, because
no other VCS scaled to the sizes and speeds they needed. They invested a TON
of time and money into allowing a single codebase to scale like that. It's
practical if you're sitting on a hosepipe of money and can hire brilliant
engineers and assign them to build tools, but it's probably not practical for
most companies, even though the resulting environment was quite pleasant.

That said, although the codebase was remarkably consistent, there were still
variations. The most notorious was the split between the C++ and Java
codebases. Some frameworks were written in C++ and bound into Java using
things like SWIG. Others were bound using cross-process RPCs. A lot of
smaller libraries were simply written and maintained twice. The culture was
different too. Some Java codebases went way over the top with excessive
abstraction and use of dependency injection, making them an absolute
nightmare for newcomers to understand or debug. Others were simpler and more
understandable (typically the older ones). The C++ side felt much more
lightweight, but on the other hand it was largely stuck in the 90s, with
memory management being entirely manual even in cases where conservative GC
could probably have helped avoid mistakes. When I left, C++11 was in the
process of being whitelisted one feature at a time.

The biggest problem I had with the culture at the time I left (and one reason
I decided to pack my bags after 7.5 years there) was that basic tasks were
becoming more and more bureaucratic. Often this was due to poorly designed
processes put in place in a panic after some PR disaster around privacy or
data handling, like Street View or Buzz. Sometimes it was just because some
engineer needed to redesign a widely used system, even though it wasn't
really broken, in order to prove they were being "impactful" and get
promoted. There was a running joke that there were two versions of every
tool: the deprecated one and the one that didn't work yet. It started out as
a lighthearted take on the company's rapid progress, and by the end it
unfortunately just reflected a sad reality.

As an illustrative example, around the time I started to step back, one member
of my team had spent on the order of 6-8 weeks attempting to simply download a
file via HTTP in production, from our own servers (it was a self-test). This task
involved filling out forms, arguing with the owners of the HTTP download
service (the old one didn't require this process but was deprecated),
discovering that the new version was hopelessly buggy and was breaking random
products for end users without anyone on those teams noticing, etc. This is a
task that could be accomplished in two minutes with a random Linux VPS and
"wget" but turned into an epic struggle against a Kafkaesque corporate
disaster zone. The problem was not that one team was performing poorly,
though: the problem was that the company had lost the ability to
pre-emptively catch this or even recognise that there was a problem. Most
team members were happy to just collect a paycheque each day; if they got
paid for filling out forms justifying why they needed the "fast reliable"
HTTP download service instead of the "slow unreliable" one, why worry? Best
not to rock the boat.

------
yayata
> One reason I was able to quickly become productive within Google was because
> the company had invested so many resources into training documents called
> codelabs.

Counter perspective here. The on-boarding training at Google for SWEs is
indeed very extensive, spanning several weeks with tons of material. For
example, the first code lab alone covers how all of Google's source is in a
single repository, how to check it out, how it gets built on a server, how to
do code reviews, what the coding style is, etc.

I threw it all out the window after my first two weeks. I was joining an
Android team, and as it turns out, Android works completely differently.
Different source repo, different SCM tooling, different build system,
different development workflow, different deployment mechanisms, even
different workstations.

And the company culture was different as well. For example, in the Life of an
Engineer class, they impressed upon us Google's culture of openness by poking
around the source tree for search, and showing how we already have access to
its code! But the Android source tree is tightly controlled, and my access
request required two weeks and VP approval. (It's still the case that the
majority of Google SWEs do not have access to Android source, or the Android
café menu.)

I eventually came up to speed the old fashioned way: poking around outdated
documentation, trial and error, and lots of bugging my neighbors. So in my
case, the Google on-boarding process was mostly useless and partially
misleading. This illustrates one way that "reusable training materials" can go
wrong, especially with a larger organization.

~~~
bsaul
From your experience, it just seems like the Android team was lacking that
kind of training material. I think your case shows it would have been a good
idea for them to have some as well, doesn't it?

~~~
jrockway
More like: when company A acquires company B, company B does not instantly
replace its culture with company A's. Especially if company B is quite
successful.

------
danso
> _One reason I was able to quickly become productive within Google was
> because the company had invested so many resources into training documents
> called codelabs. Codelabs covered the core abstractions at the company,
> explained why they were designed, highlighted relevant snippets of the
> codebase, and then validated understanding through a few implementation
> exercises. Without them, it would’ve taken me much longer to learn about the
> multitude of technologies that I needed to know to be effective_

Any Googler care to share what a "codelab" looks like? Always interested in
seeing innovations in documentation/onboarding (and I'm guessing it has
nothing to do with the now retired "Code Labs"?
[https://code.google.com/labs/](https://code.google.com/labs/))

~~~
jrockway
We have some public codelabs; the Dart ones come to mind first:

[https://www.dartlang.org/codelabs/darrrt/](https://www.dartlang.org/codelabs/darrrt/)

Basically, a tutorial. The internal codelabs have some logic to substitute
your username throughout the text (good for things like "create a test
directory"), but are otherwise very similar to the public ones.
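The username-substitution logic described above could be approximated with Go's text/template package. This is purely illustrative: the internal codelab tooling is not public, and `renderCodelab` is a hypothetical helper, not Google's actual implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderCodelab fills the reader's username into a codelab body, so
// instructions like "create a test directory" can name a real path.
func renderCodelab(body, username string) (string, error) {
	tmpl, err := template.New("codelab").Parse(body)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, struct{ Username string }{username}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, _ := renderCodelab(
		"Create a scratch directory at /home/{{.Username}}/codelab.", "alice")
	fmt.Println(out)
	// prints: Create a scratch directory at /home/alice/codelab.
}
```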

------
theforgottenone
It is very difficult for me to translate lessons learned at Google to
anything except Google. I have read a lot of blog posts from Googlers and
former Googlers who write from a perspective of, I guess, a luxury I have
literally never seen, even at MS during its golden age. For example, the idea
of code reviews and whitespace. I have never worked in an environment where
we had that kind of money to spend.

~~~
chetanahuja
_" code reviews and whitespace. I have never worked in an environment where we
had that kind of money to spend."_

Also my experience with Google coding practices. The code review process can
be excruciatingly long and painful (especially when you're starting out):
lots of back and forth over things like whitespace between operator and
operand, or whether to put the stream operator at the end of a continuing
line or at the beginning of the continued line, etc.

Someone else said that the Google practices are optimized for code
maintainability rather than developer efficiency. Absolutely true. Reading
code (at least C++, which I have experience with) at Google is a joy, while
writing (and submitting) code is at best a chore. At its worst it's bad
enough that I've seen a new hire (a senior engineer, known for delivering at
a high level at his previous company) go back to his old company after a
quarter or so of frustration with the process.

------
meanJim
"Invest in reusable training materials to onboard new engineers."

So important! Doing this really helps keep people on the same page and limits
the amount of time spent arguing about semantics or why we came to a certain
decision.

