
Google’s Gmail Outage Is a Sign of Things to Come
http://www.businessweek.com/articles/2012-12-17/google-s-gmail-outage-is-a-sign-of-things-to-come#r=rss
======
jonknee
This article is FUD. Complete nonsense, really. Continuous integration had
nothing to do with the Gmail outage (which, mind you, affected only the web
app; no emails were lost, and since my phone/desktop clients kept working I
only knew Gmail was down because of Twitter).

> The search giant reported that it conducted an update of its load-balancing
> software from 8:45 a.m. to 9:13 a.m. U.S. West Coast time, and after the
> problems were detected it managed to quickly roll back the buggy code. But
> this didn’t stop some people from questioning why Google would roll out a
> software update during peak e-mail hours on the West Coast.

Perhaps because the update wasn't to Gmail, it was to the load balancing
system? It's not uncommon to have issues arise in load balancing while under
load. I'd say the engineers did a good job dealing with the problem all things
considered--load balancing issues are notoriously tricky.
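For what it's worth, the usual way to limit the blast radius of exactly this kind of update is a staged rollout with automatic rollback. A minimal sketch, assuming a hypothetical fleet of load-balancer nodes (all names and the health check are illustrative, not Google's actual tooling):

```python
# Hypothetical sketch: staged rollout of a load-balancer config with
# automatic rollback when a batch fails its health check.

def health_check(node):
    # Stand-in for a real probe (e.g. an HTTP check against the node).
    return node.get("healthy", True)

def staged_rollout(nodes, new_config, old_config="old", batch_size=2):
    """Apply new_config in small batches; revert everything on failure."""
    updated = []
    for i in range(0, len(nodes), batch_size):
        batch = nodes[i:i + batch_size]
        for node in batch:
            node["config"] = new_config
            updated.append(node)
        if not all(health_check(n) for n in batch):
            # Roll back every node touched so far -- the same move
            # Google reportedly made within half an hour.
            for node in updated:
                node["config"] = old_config
            return False  # rollout aborted
    return True  # rollout complete

nodes = [{"config": "old"} for _ in range(6)]
assert staged_rollout(nodes, "new")
assert all(n["config"] == "new" for n in nodes)
```

The point of the batching is that a bad config only ever reaches a small slice of traffic before the check trips, which is presumably why the outage was partial and short rather than total.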

------
thezilch
When exactly is a good time for Google to release code? Facebook? Twitter?
These are global markets. There are over 1 billion smartphones in the world
alone, and less than a tenth are in the US; I'd wager most are used to
interact with one of these or similar big players that do "continuous
deployment."

~~~
jonknee
The best time to deploy is the time with the best engineers at the ready. If
you're updating component X, you should do so when the stakeholders of X are
online. It sounds like Google was able to solve things quickly, I don't think
deploying at another time would have been any better.

~~~
babarock
I'd even argue that the best time to deploy is during the best engineers'
working hours. At least you guarantee they're around when/if things go bad.

And by "best", I don't mean all-stars (à la Fitzpatrick, Norvig, Pike or
other Google celebs), but the engineer who actually followed the whole
project/patch from dev environment to production.

------
Groxx
In a nutshell: Gmail went down for 18 minutes because of continuous
deployment [citation needed], and they explain a bit of the rationale for
it. Then they come back to the headline, basically saying that as more
companies do continuous deployment, we might see more breakages, though
probably briefer ones.

Not the most focused article, but kind of cool that they wrote about such a
thing. Though the headline feels like pure linkbait.

They also missed my personal favorite attribute of continuous deployment:
isolated problems. If you deploy 10 features and something breaks, unless
they're completely orthogonal you now have more than one place to look/person
to ask/team to deal with. If you deploy one and it breaks, you know where the
problem lies.
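That fault-isolation property can be sketched in a few lines. A toy example, assuming a hypothetical pipeline where `passes_smoke_test` stands in for real post-deploy checks (the buggy feature is contrived):

```python
# Hypothetical sketch: deploying one change at a time makes the first
# failing deploy name its own culprit. All names are illustrative.

def passes_smoke_test(deployed):
    # Pretend feature "C" is the one that breaks production.
    return "C" not in deployed

def deploy_individually(features):
    """Deploy features one by one; a failure points at a single change."""
    deployed = []
    for feature in features:
        deployed.append(feature)
        if not passes_smoke_test(deployed):
            deployed.pop()      # roll back just this one change
            return feature      # the culprit is unambiguous
    return None                 # everything shipped cleanly

assert deploy_individually(["A", "B", "C", "D"]) == "C"
```

Deploy all four features in one batch instead, and the same failing smoke test leaves you bisecting four candidates across however many teams own them.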

~~~
jonknee
> Not the most focused article, but kind of cool that they wrote about such a
> thing

Why? GigaOm writes about that sort of thing all the time. (BusinessWeek is
just syndicating here, it's a GigaOm article: [http://gigaom.com/cloud/why-
you-should-expect-more-online-ou...](http://gigaom.com/cloud/why-you-should-
expect-more-online-outages-but-less-downtime/) )

