
The Limits of the CAP Theorem - bdarnell
https://www.cockroachlabs.com/blog/limits-of-the-cap-theorem/
======
wwilson
It's kind of amazing how we have to have this discussion again every time
somebody designs a CP system with excellent availability.

I'll just come out and say it: the 'A' in CAP is boring. It does not mean what
you think it means. Lynch et al. probably chose the definition because it's
one for which the 'theorem' is both true and easy to prove. This is not the
impossibility result with which designers of distributed systems should be
most concerned.

My heuristic these days is that worrying about the CAP theorem is a weak
negative signal. (EDIT: This is not a statement about CockroachDB's post,
which doubtless is designed to reassure customers who are misinformed on the
topic. I'm familiar with that situation, and it makes me feel a deep sympathy
for them.)

(Disclosure: I work on a CockroachDB competitor. Also none of this is Google's
official position, etc., etc. For that, here's the whitepaper by Eric Brewer
that we released along with the Cloud Spanner beta launch
[https://static.googleusercontent.com/media/research.google.c...](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45855.pdf)).

~~~
oillio
I read the paper and I don't understand this passage:

    
    
      For example, any database cannot provide availability if all of its replicas are offline, which has nothing to do with
      partitions. Such a multi-replica outage should be very rare, but if partitions are significantly more rare, then you can effectively
      ignore partitions as a factor in availability. 
      For Spanner, this means that when there is an availability outage, it is not in practice due to a partition, 
      but rather some other set of multiple faults (as no single fault will forfeit availability).
    

My understanding was that 'Partition' in CAP was a bit of a misnomer. To a
running node, a partition of half the cluster is indistinguishable from half
the nodes failing. So, partition tolerance really covers partitions as well as
multi-node failures. Brewer wrote the original paper, so I will trust his
definitions. However, if P doesn't cover multi-node failures, it seems to
weaken the usefulness of CAP considerably. As is mentioned, in my experience,
partitions are very rare. Multi-node failures on the other hand are the
primary failure case I worry about.

(edit): I have thought about it some more, and this article really annoys me.
It reads like marketing material: "CAP doesn't apply to us because we are
Google, bitches."

There is an argument there, but I think the way Brewer makes the argument is
really weak. I would much rather they say: "We have built a really great CP
system. Also, because we are Google we are capable of 99.99958% uptime, so you
really don't need to worry too much about tiny edge cases where you will lose
A."

~~~
joncrocks
From what I understand of CAP (and I'm no expert by any means), 'partition
tolerance' is the ability of a system to reconcile itself in the event of the
system being split into partitions.

In these types of scenarios, different sections of the system are still
working as normal, and each have a different view of the network of available
nodes.

In a scenario where a set of homogeneous nodes in a system is split in two,
both halves are equally 'available', and so the system as a whole has to decide
what to do. If both sides present themselves as available, then they will be
making decisions based only on interactions with half of the nodes in the
system, and their views of the system as a whole will start to drift apart.

This is bad because at the point where they get reconnected again they may
well realise that the system as a whole is now not internally consistent. If
you think about a distributed database, then you can start having
conflicting commits and now your DB is FUBAR.
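
To make that concrete, here's the failure mode as a toy Go program
(everything in it is invented for illustration):

    package main

    import "fmt"

    // A toy key-value "replica" that keeps accepting writes while partitioned.
    type replica struct {
        name string
        data map[string]string
    }

    func (r *replica) write(k, v string) { r.data[k] = v }

    func main() {
        left := &replica{name: "left", data: map[string]string{}}
        right := &replica{name: "right", data: map[string]string{}}

        // During the partition, both sides stay "available" and accept writes.
        left.write("balance", "100")
        right.write("balance", "250")

        // On reconnection, the system discovers two irreconcilable histories.
        if left.data["balance"] != right.data["balance"] {
            fmt.Printf("conflict: %s says balance=%s, %s says balance=%s\n",
                left.name, left.data["balance"], right.name, right.data["balance"])
        }
    }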

You are right in thinking that from the point of view of a partition that it
can't know if the rest of the system is just partitioned away or has crashed
and will never come back.

But that simplifying assumption says: if you can ensure that nodes are much,
much more likely to go down completely than to actually be partitioned, then
things are easy, because you don't have to consider diverging system views and
how you might re-integrate them.

Or something like that, I ended up writing more than I was planning!

~~~
oillio
Right. My understanding is that the clever bit of CAP is that, from the
perspective of a single node, a true network partition is indistinguishable
from a bunch of nodes failing simultaneously. If, as a node, you find yourself
in a minority network (and cannot make quorum) you need to decide what to do.

Maybe you are on the losing side of a split, and there is a set of nodes out
there that can make quorum.

Or maybe 51% of the nodes crashed and what you see is all that is left of the
cluster.

Whatever you decide, it has to work for both those possibilities.
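
In code terms the rule ends up symmetric; a minimal sketch in Go (assuming a
fixed, known cluster size):

    package quorum

    // CanServe reports whether a node that can reach `reachable` members
    // (counting itself) out of `clusterSize` may keep acting on behalf of
    // the cluster. The test must be the same whether the missing nodes are
    // partitioned away or have crashed, because from this node's point of
    // view the two cases are indistinguishable.
    func CanServe(reachable, clusterSize int) bool {
        return reachable*2 > clusterSize // strict majority
    }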

So are you interpreting Brewer as saying that, in practice, we never have a
split? Just assume what you see is all there is of the network. However,
Spanner is a CP system. If you are willing to assume you will never need to
merge inconsistent data, wouldn't you go for AP?

~~~
acjohnson55
And what's "quorum" in a system where nodes are free to join and leave?

~~~
bdarnell
There's a lot of complexity here, but the short version is that nodes cannot
quite come and go freely. A replica set keeps track of its members; a quorum
of the existing members must vote to admit a new node, or to remove one.

Note that in both CockroachDB and Spanner a cluster contains many independent
and overlapping replica sets. The data is broken down into "ranges" (to use
the terminology of CockroachDB; Spanner calls them "spans"), each of which has
its own replica set (typically containing 3 or 5 members).
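
In heavily simplified form (a Go sketch; a real system does this as a
replicated configuration change, and all names here are invented):

    package membership

    // ReplicaSet tracks its own members; nodes cannot simply come and go.
    type ReplicaSet struct {
        Members map[string]bool
    }

    // quorum reports whether yesVotes is a strict majority of the current
    // membership. In a real system the vote is itself a replicated log
    // entry (e.g. a Raft configuration change); here it's just a count.
    func (rs *ReplicaSet) quorum(yesVotes int) bool {
        return yesVotes*2 > len(rs.Members)
    }

    // Admit adds a node only if a quorum of the existing members agreed.
    func (rs *ReplicaSet) Admit(node string, yesVotes int) bool {
        if !rs.quorum(yesVotes) {
            return false
        }
        rs.Members[node] = true
        return true
    }

    // Remove is symmetric: a quorum of the current members must agree.
    func (rs *ReplicaSet) Remove(node string, yesVotes int) bool {
        if !rs.quorum(yesVotes) {
            return false
        }
        delete(rs.Members, node)
        return true
    }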

------
thraxil
Martin Kleppmann's "A Critique of the CAP Theorem" lays this all out very
nicely and goes further, providing a better conceptual framework for
discussing the tradeoffs:
[https://arxiv.org/abs/1509.05393](https://arxiv.org/abs/1509.05393)

One of the best papers I've come across in the last few years.

------
Dave_Rosenthal
An older piece from FoundationDB (archived by odbms.org) that talks about the
same issues and comes to many of the same conclusions:
[http://www.odbms.org/wp-content/uploads/2013/11/cap-theorem....](http://www.odbms.org/wp-content/uploads/2013/11/cap-theorem.pdf)

I think the overloaded term "availability" has been a big source of confusion
for many trying to understand the implications of the CAP theorem at a simple
level.

For example, a simple Paxos implementation is "highly available" (it continues
working even when individual machines fail) but sacrifices "availability" in
the CAP sense.
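
A quick way to see the two senses side by side (a Go sketch; names invented):

    package paxos

    // faultsTolerated is the "high availability" number for a consensus
    // group of n replicas: how many machines can fail outright while the
    // group as a whole keeps serving. A 5-replica group rides through any
    // 2 failures.
    func faultsTolerated(n int) int { return (n - 1) / 2 }

    // CAP-availability asks a different question: must *every* non-failed
    // node keep answering? A node stranded on the minority side of a
    // partition has to refuse requests, so the system is not CAP-available
    // even though the majority side never stopped.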

------
itcmcgrath
It is refreshing to see an article from a distributed database vendor that
gives a reasonably good description of the trade-offs they make and why,
without all the nonsensical hyperbole claiming they're the best for
everything with no trade-offs*

* I've reviewed ~400 databases over the last month and it's surprising (?) how many of them are the best for every use case and are the [fastest|first|only|best]

------
ainar-g
Does anybody have experience with CockroachDB in production? Is it ready to
replace PostgreSQL as "the default database"? How does it handle querying and
updating big (>10GB) collections of data?

~~~
kevan
It only hit 1.0 six weeks ago[1]; I don't think we'll have a good sample size
of prod usage until the end of 2017 at the earliest.

[1]
[https://www.cockroachlabs.com/blog/cockroachdb-1-0-release/](https://www.cockroachlabs.com/blog/cockroachdb-1-0-release/)

------
YZF
I like to look at multi-core CPUs as examples. While in theory cores can
partition from each other or fail in myriad ways, the system is engineered
such that the probability of these failures is low enough that it doesn't
matter. If you lose a core or you lose an interconnect between the cores, you
lose the chip. Really, you can look at each transistor on a chip (any chip) as
a node in a distributed system; as long as the system is engineered not to
fail, you don't really think about CAP.

The more interesting trade-off is using consensus algorithms for availability
and durability. You can keep going as long as you have a quorum of nodes, but
you pay at least one extra RTT per write. Having multiple replicas (in either
consistent or eventually consistent systems) typically makes writes and
storage linearly more expensive, unless you use some sort of erasure coding.
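
As rough arithmetic (a Go sketch; the 10+4 code is just an example):

    package cost

    // With r full replicas, every logical byte is written and stored r
    // times, so cost grows linearly with the replication factor.
    func replicationOverhead(replicas int) float64 {
        return float64(replicas)
    }

    // With a (data, parity) erasure code the overhead drops to
    // (data+parity)/data; e.g. a 10+4 code stores 1.4 physical bytes per
    // logical byte instead of 3.0 for three full replicas.
    func erasureOverhead(data, parity int) float64 {
        return float64(data+parity) / float64(data)
    }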

~~~
xfer
In that sense, multi-core CPUs and the "logic cells" in an FPGA are not really
partition tolerant (i.e., they are CA).

------
falcolas
> In the event that the leaseholder is partitioned away from the other
> replicas, it will be allowed to continue to serve reads (but not writes)
> until its lease expires (leases currently last 9 seconds by default), and
> then one of the other two replicas will get a new lease (after waiting for
> the first replica’s lease to expire).

So, what happens to readers who are partitioned away from the node which holds
that data? Can they not read the data for that lease duration? If they can't,
then yeah, CP is a good description.

...

So the design doc seems to bear this out: reads must go to the leaseholder
until the lease expires. Nice.

EDIT: Design doc link:

[https://github.com/cockroachdb/cockroach/blob/master/docs/de...](https://github.com/cockroachdb/cockroach/blob/master/docs/design.md)
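
Here's my reading of that lease rule as a Go sketch (types and names are
invented; only the 9-second default comes from the post):

    package lease

    import "time"

    // Lease is a time-bounded grant to serve reads for a range.
    type Lease struct {
        Holder     string
        Expiration time.Time // e.g. 9 seconds after the grant, per the post
    }

    // CanServeRead: even a leaseholder cut off from its peers may keep
    // serving reads until its lease expires, because no other replica can
    // acquire a new lease before that moment.
    func (l Lease) CanServeRead(node string, now time.Time) bool {
        return node == l.Holder && now.Before(l.Expiration)
    }

    // CanAcquire: a would-be new holder must wait out the old lease, so
    // two leaseholders never overlap in time; that non-overlap is what
    // keeps reads consistent across the handoff.
    func CanAcquire(old Lease, now time.Time) bool {
        return !now.Before(old.Expiration)
    }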

~~~
YZF
If a partitioned node can serve reads and the other nodes can serve writes
then you must be reading stale data though.

~~~
Groxx
Yeah, that outcome seems pretty straightforward...

It's probably not hard to require that writes (which require a majority) also
require the lease-holder to ack the write, which seems like it'd solve this.
It's a bit odd that they don't mention anything like this, but it _is_ a
fairly short blog post.

A bit of lazy browsing didn't lead me to any more detailed descriptions of how
it handles partitions. Anyone else know?

~~~
falcolas
Writes can only get issued from the partition holder, in addition to reads. I
had to dig into the documentation links to find the design doc on github,
which detailed this behavior.

------
zimbatm
I have a new definition:

* CP is a database

* AP is a cache

Anyone pretending AP is a database is lying (unless it's a content-
addressable store) :p

~~~
jedberg
Cassandra and Riak are AP, and both can certainly be used as sources of truth.
You just have to move the "C" part up into your app, which may actually be a
better place for it, since what is "consistent" can be dependent on the data
and application of that data.
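
Concretely, moving the "C" up often looks something like this (a minimal Go
sketch, assuming a store that returns conflicting siblings; all names
invented):

    package appmerge

    // Versioned is a value plus whatever metadata the AP store hands back;
    // resolving conflicting siblings is the application's job.
    type Versioned struct {
        Value     string
        Timestamp int64 // e.g. a wall-clock or hybrid timestamp
    }

    // lastWriterWins is the cheapest app-level resolver: keep the newest
    // write, drop the rest. Note that it silently discards concurrent
    // updates; picking a smarter merge is exactly the per-application
    // judgment call. Assumes at least one sibling.
    func lastWriterWins(siblings []Versioned) Versioned {
        best := siblings[0]
        for _, s := range siblings[1:] {
            if s.Timestamp > best.Timestamp {
                best = s
            }
        }
        return best
    }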

~~~
irfansharif
Here is what Google has to say about 'moving the "C" part up into your
application':

“We also have a lot of experience with eventual consistency systems at Google.
In all such systems, we find developers spend a significant fraction of their
time building extremely complex and error-prone mechanisms to cope with
eventual consistency and handle data that may be out of date. We think this is
an unacceptable burden to place on developers and that consistency problems
should be solved at the database level.”[1]

[1]: [https://yokota.blog/2017/02/17/dont-settle-for-eventual-cons...](https://yokota.blog/2017/02/17/dont-settle-for-eventual-consistency)

~~~
hodgesrm
There's a quote from _Transaction Processing_ by Gray and Reuter about ACID
transactions that is quite relevant to eventual consistency:

"Without transactions, distributed systems cannot be made to work for typical
real-life applications."

This is as true now as it was 25 years ago, for exactly the reasons cited by
Google. The book is incidentally still a good read.

~~~
zeckalpha
Transactions are related to atomicity, not consistency. There are atomic AP
systems with transactions but without consistency.

------
marknadal
Key quote:

"The only time that a CAP-Available system would be available when a CAP-
Consistent one would not is when one of the datacenters can’t talk to the
other replicas, but can talk to clients, and the load balancer keeps sending
it traffic. By considering the deployment as a whole, high availability can be
achieved without the CAP theorem’s requirement of responses from a single
partitioned node."

It is true that, if you assume your client app is not important, a CP
system is the right choice. And I would also say this /was/ true up until
about 2004, when Gmail was released. But it definitely stopped being true in
2007, when the iPhone was released and you started having installed apps.

Since then, users have slowly grown to expect both mobile apps and SPAs to
work regardless of whether the servers work, regardless of load balancers,
regardless of connectivity.

If you look at the market trends, things are increasingly going in this
direction. From self-driving cars, to IoT devices, to drone delivery, to even
traditionally server-dependent productivity tools like gDocs and others -
people need to get work done even if the connection to your server doesn't
exist.

Will banking applications still need mostly server-dependent behavior? Yes. Is
CP still important? Yes. But it is biased to say that CP systems are better.
Choose the right tool for the right job. CockroachDB and RethinkDB are
definitely the right choice for a strongly consistent database, but they
aren't the right choice for everything. My database is an AP system, but it
should not be used for many apps out there. Neither of these are "better",
they are just tradeoffs you have to decide upon.

~~~
bdarnell
That's an important point. With mobile applications that support offline
usage, you can no longer assume a single global source of truth, and the
application as a whole is AP.

However, I'd argue that this tilts the balance even more in favor of a CP
database on the backend. Even when the client application is not executing
transactions on the database, consistency at the database level is what makes
it possible to support secondary SQL indexes that work without surprises. An
offline-capable mobile app buffers writes, moving the write to the server out
of the critical path so server-side write-latency is not as visible to the
user.
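
Roughly (a Go sketch with invented names):

    package offline

    // Write is a single buffered mutation.
    type Write struct{ Key, Value string }

    // WriteBuffer queues writes made while offline and flushes them when
    // connectivity returns, so the server round trip never sits on the
    // user's critical path.
    type WriteBuffer struct {
        pending []Write
        send    func(Write) error // e.g. an HTTP call to the backend
    }

    func (b *WriteBuffer) Enqueue(w Write) { b.pending = append(b.pending, w) }

    // Flush pushes buffered writes in order; on the first failure it stops
    // and keeps the remainder for the next attempt.
    func (b *WriteBuffer) Flush() error {
        for len(b.pending) > 0 {
            if err := b.send(b.pending[0]); err != nil {
                return err
            }
            b.pending = b.pending[1:]
        }
        return nil
    }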

~~~
marknadal
Yes, again, if you are doing some transactional behavior like two users buying
the last concert seat. However, making those apps offline-first is kinda
silly in the first place.

The types of apps that naturally fit mobile, client-facing behavior are ones
with more append-only data structures (Twitter, Snapchat, messaging, etc.).
Those apps benefit much more from an AP system than from a CP system, because
it makes the end user's (the client's) experience better and more available.

Again, the right tool for the job. And CockroachDB is certainly the right
choice for the right problem. Well written article, keep it up!

