
Coinbase Incident Post Mortem: June 25–26, 2019 - jessepollak
https://blog.coinbase.com/incident-post-mortem-june-25-26-2019-1d08c1657cbc
======
cle
Lots of architectural and cultural problems IMO. Mixing different kinds of
queries on the same cluster, no auto-scaling on either the cluster or the web
servers, they seem to be okay with customer-impacting maintenance events
(seriously?!), and their "fix" for an event caused by cache misses is to add
more caching, which will make their system even harder to understand and
predict, increasing the likelihood of more severe and byzantine failures.

It's often an unpopular opinion around here, but this is why I prefer simple
hosted databases with limited query flexibility for high volume and high
availability services (Firestore, DynamoDB, etc.). It's harder to be surprised
by expensive queries, and you won't have to fiddle with failovers, auto
scaling, caching, etc. Design your system around their constraints and it
will have predictable performance and will scale more easily under unexpected
load.
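
For what it's worth, here's a minimal sketch of what that constraint looks
like with DynamoDB via boto3 (the table name and key schema are made up): the
read path is forced through the key schema, so an expensive cross-account
aggregation like the one in this incident can't even be expressed on the hot
path.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: partition key "account_id", sort key "created_at".
table = boto3.resource("dynamodb").Table("transactions")

def recent_transactions(account_id, limit=50):
    # The only read shape available is a bounded, key-based query, so
    # its cost is predictable regardless of how large the table grows.
    resp = table.query(
        KeyConditionExpression=Key("account_id").eq(account_id),
        ScanIndexForward=False,  # newest first, by sort key
        Limit=limit,
    )
    return resp["Items"]
```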

------
viraptor
> We have ensured that failovers for this cluster may only be initiated during
> rare, scheduled downtime, when there will be no impact on customers.

I hope all their hardware crashes are also scheduled for when there will be
no impact... This seems a bit backwards - unless you constantly exercise the
instantaneous failover, how do you know it works?

Edit: Actually it's worse - if you don't test the instant failover under a
full load, how do you know it's still instant then?

~~~
rednixion
1. Sounds like they are saying that until the changes to remove the cluster
from the request path land, failover cannot be trusted, so the only immediate
action that can be taken is to disable the automatic failover logic and stop
any live maintenance jobs that would need to take it offline. Not
unreasonable, depending on the timeline to implement the changes, when you
weigh the risk of a cluster-killing hardware failure (assuming at least a
one-node tolerance) against "undetermined" behavior inside your buy/sell
pipeline; having no failover is probably a great way to keep the sprint work
at the appropriate priority (a sketch of one way to pin the primary is
below).

2. Loadtesting? Accurate end-to-end loadtests are painful to bootstrap from
nothing and require an environment that can be expensive, but they're worth
their weight in gold depending on how critical downtime is, or on whether
everything in your platform is pulling from the same resource pool (i.e. an
appliance-based deployment).
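
On point 1: in a MongoDB replica set (assuming that's roughly their setup),
one way to guarantee no automatic failover is to give every member except the
intended primary an election priority of 0. A minimal pymongo sketch, with
made-up host names:

```python
from pymongo import MongoClient

# Hypothetical three-member replica set named "rs0".
client = MongoClient("mongodb://db1,db2,db3/?replicaSet=rs0")

cfg = client.admin.command("replSetGetConfig")["config"]
for member in cfg["members"]:
    # Priority 0 makes a member ineligible for election, so with every
    # member except the intended primary at 0, the set cannot fail over
    # automatically until someone reconfigures it.
    if member["host"] != "db1:27017":
        member["priority"] = 0
cfg["version"] += 1  # a reconfig requires bumping the config version

client.admin.command("replSetReconfig", cfg)
```

The obvious tradeoff is that a genuine hardware failure now leaves the set
without a primary until a human steps in, which is exactly the risk being
weighed above.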

------
twblalock
I suppose this is a good time to ask whether Coinbase thinks they are a bank,
or a brokerage, or both.

If they are a bank, this isn't the end of the world. I've had online banking
outages at "normal" banks. It is still a bad thing, but there are other ways I
can get my money, like going to a branch.

On the other hand, if Coinbase is like a brokerage, this is really bad. And
let's face it, most use of crypto is for investment and speculation purposes.
For trades to fail for half an hour is really bad. If they are running this
thing like a startup on MongoDB (_seriously?_), I don't see how anyone who
puts their money in can have any confidence of getting it back out.

~~~
wakeywakeywakey
> If they are running this thing like a startup on MongoDB (seriously?)

Do you base this on recent info (latest versions w/ Jepsen tests)? If so, what
specifically makes Mongo a "startup" db?

------
staticassertion
> Before the incident began, a background job performed an aggregation for a
> large number of accounts, causing excessive reads into cache and cache
> evictions for the cluster’s storage engine cache.

I thought this was interesting. I think caches can be so dangerous in an
incident - suddenly operations that are almost always constant time are
executing at a much different complexity, and the worst part is that this
tends to happen right when you're backed up (since old, uncached data is
suddenly pushing recent data out).

I think chaos engineering may be a good solution here, in lieu of better
architectures - see what happens when you clear your cache every once in a
while, how much your load changes, how your systems scale to deal with it.
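
As a toy illustration of that kind of experiment - assuming a Redis
read-through cache, with `load_from_db` standing in for the real slow path:

```python
import time
import redis

r = redis.Redis()  # assumed cache sitting in front of the database

def cached_read(key, load_from_db):
    # Read-through cache with latency measurement, so the cold-vs-warm
    # difference shows up in dashboards instead of mid-incident.
    start = time.perf_counter()
    value = r.get(key)
    if value is None:              # miss: fall back to the slow path
        value = load_from_db(key)
        r.set(key, value, ex=300)  # repopulate with a 5-minute TTL
    return value, (time.perf_counter() - start) * 1000  # latency in ms

# The chaos experiment itself: evict everything during a controlled,
# low-risk window and watch how latency and database load respond.
r.flushall()
```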

------
drefanzor
So basically price alerts lagged the system.

~~~
noitsnot
Completely shut it down. It's funny they DDoS'd themselves.

------
kvlr
TLDR: MongoDB

~~~
oconnor663
Sounds like a maintenance error and an application design error that led to
a flood of traffic. Doesn't sound MongoDB-specific.

~~~
mbell
I don't think there is enough information to come to a conclusion here, but I
doubt your proposed failure mode.

Based on my experience with largish data in Mongo, my guess would be that
the database size was >> RAM and, due at least in part to Mongo's design,
when the master failed over, the new master didn't have the working set
present in RAM. This led to a huge inrush of disk IO, resulting in all sorts
of badness until the RAM state sorted itself out.
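
If that's right, one mitigation (besides fixing failover itself) is warming
the new master before it takes traffic. A rough pymongo sketch, with made-up
host and collection names:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://new-primary:27017/")  # hypothetical host
db = client["exchange"]  # hypothetical database name

def warm_working_set(collection_name, n_recent=100_000):
    # Reading the most recently written documents (ObjectId order
    # approximates insertion time) pulls them into the storage engine
    # cache before live traffic faults them in from disk one by one.
    for _ in db[collection_name].find().sort("_id", -1).limit(n_recent):
        pass  # the read is the point; we only want the cache side effect

for name in ("orders", "balances", "price_alerts"):  # hypothetical
    warm_working_set(name)
```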

~~~
latchkey
My experience with mongo is the same. Works great, in RAM. As soon as you hit
disk, it falls over completely.

------
redis_mlc
> We take uptime very seriously, and we’re working hard to support the
> millions of customers that choose Coinbase to manage their cryptocurrency

No you don't.

- If you did, you'd hire a DBA team and they would be familiar with the
various jobs in your environment. But first your founders would have to have
respect for Operations, which will take a dozen more major outages.

The other major Coinbase outages have also been database-related, namely
missing indexes.

- If you did, you wouldn't be doing major database (or other production)
changes at 3 pm in the afternoon.

So let's cut to the chase. You prioritize features over Operations, and as a
result guinea-pig your users. Just like any other SF startup. So just admit
that to your end-users.

~~~
Klathmon
> you wouldn't be doing major database (or other production) changes at 3 pm
> in the afternoon.

Coinbase isn't dedicated to one timezone. They fully support tons of countries
and have customers all over the place.

Things like this have to happen at some point, and there are benefits to
doing stuff like this during "work" hours (like having all of your staff
online and available).

~~~
kerkeslager
> Things like this have to happen at some point, and there are benefits to
> doing stuff like this during "work" hours (like having all of your staff
> online and available)

That sounds like a benefit to Coinbase and not to any of their customers.

~~~
saalweachter
Eh, if Coinbase were the stock market and only open for trading for < 8 hours
a day, I'd ding them, but it's always 3pm somewhere. 3pm PDT is 6pm EDT is
10pm UTC.

~~~
bdamm
One can only hope that they know when their peak trading patterns generally
are.

------
nocitrek
Great post mortem. Well detailed, good work ethic there.

