I'll just come out and say it: the 'A' in CAP is boring. It does not mean what you think it means. Lynch et al. probably chose the definition because it's one for which the 'theorem' is both true and easy to prove. This is not the impossibility result with which designers of distributed systems should be most concerned.
My heuristic these days is that worrying about the CAP theorem is a weak negative signal. (EDIT: This is not a statement about CockroachDB's post, which doubtless is designed to reassure customers who are misinformed on the topic. I'm familiar with that situation, and it makes me feel a deep sympathy for them.)
(Disclosure: I work on a CockroachDB competitor. Also none of this is Google's official position, etc., etc. For that, here's the whitepaper by Eric Brewer that we released along with the Cloud Spanner beta launch https://static.googleusercontent.com/media/research.google.c...).
For example, no database can provide availability if all of its replicas are offline, which has nothing to do with partitions. Such a multi-replica outage should be very rare, but if partitions are significantly more rare, then you can effectively ignore partitions as a factor in availability.
For Spanner, this means that when there is an availability outage, it is not in practice due to a partition,
but rather some other set of multiple faults (as no single fault will forfeit availability).
(edit): I have thought about it some more, and this article really annoys me. It reads like marketing material: "CAP doesn't apply to us because we are Google, bitches."
There is an argument there, but I think the way Brewer makes the argument is really weak. I would much rather they say: "We have built a really great CP system. Also, because we are Google we are capable of 99.99958% uptime, so you really don't need to worry too much about tiny edge cases where you will lose A."
In these types of scenarios, different sections of the system are still working as normal, and each have a different view of the network of available nodes.
In a scenario where a set of homogeneous nodes in a system is split in two, both halves are equally 'available' and so the system as a whole has to decide what to do in that scenario. If both sides present themselves as available then they will be making decisions based only on interactions with half of the nodes in the system, and their views of the system as a whole will start to drift apart.
This is bad because at the point where they get reconnected again they may well realise that the system as a whole is now not internally consistent. If you think about a distributed database, then you can start having conflicting commits and now your DB is FUBAR.
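The conflicting-commits point can be made concrete with a toy sketch (a hypothetical example, not any particular database): two halves of a partitioned cluster each accept a write against the same key, and when the partition heals, neither value subsumes the other.

```python
# Hedged sketch of split-brain divergence: during a partition, both halves
# independently accept a write to the same record.
left = {"balance": 100}   # replica state on one side of the split
right = {"balance": 100}  # replica state on the other side

left["balance"] -= 30     # client A, routed to the left half
right["balance"] -= 80    # client B, routed to the right half

# After the partition heals, the two histories cannot both be kept as-is;
# the system now needs a conflict-resolution story it may never have planned.
assert left["balance"] != right["balance"]
```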
You are right in thinking that from the point of view of a partition that it can't know if the rest of the system is just partitioned away or has crashed and will never come back.
But what that simplifying assumption is saying is that if you can ensure that you are much much more likely to have the nodes go down completely rather than actually be partitioned, then things are easy because you don't have to consider diverging system views and how you might re-integrate them.
Or something like that, I ended up writing more than I was planning!
Maybe you are on the losing side of a split, and there is a set of nodes out there that can make quorum.
Or maybe 51% of the nodes crashed and what you see is all that is left of the cluster.
Whatever you decide, it has to work for both those possibilities.
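A decision rule that works for both possibilities is the usual strict-majority quorum test. This is a minimal sketch (my own illustration, not any specific system's code): a node may only make progress if it can reach more than half the cluster, and that test is correct whether the unreachable nodes are partitioned away or crashed.

```python
# Hedged sketch: the same majority check is safe whether the missing nodes
# are partitioned or dead, which is why quorum systems never need to
# distinguish the two cases.
def have_quorum(reachable: int, cluster_size: int) -> bool:
    # Strict majority: more than half of ALL nodes, not just the live ones.
    return reachable > cluster_size // 2

assert have_quorum(3, 5)        # 3 of 5 reachable: may commit
assert not have_quorum(2, 5)    # minority side: must stall
assert not have_quorum(2, 4)    # exactly half is NOT a quorum
```

At most one side of any split can hold a strict majority, which is what prevents the diverging-views problem described above.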
So are you interpreting Brewer as saying: in practice we never have a split, so just assume what you see is all there is of the network? However, Spanner is a CP system. If you are willing to assume you will never need to merge inconsistent data, wouldn't you go for AP?
What he's saying is that it's strictly CP, as it will handle partitions (there's a section further down the paper describing how they would be handled). But as P's hardly ever happen, because Google, it's pretty much CA (always consistent and available).
So yes, for all intents and purposes, he's saying "Yeah, CAP does apply but we're so good we can make P 'go away'."
Note that in both CockroachDB and Spanner a cluster contains many independent and overlapping replica sets. The data is broken down into "ranges" (to use the terminology of CockroachDB; Spanner calls them "spans"), each of which has its own replica set (typically containing 3 or 5 members).
It is not "kind of amazing" considering Eric B. felt compelled to write a follow-up to his CAP paper.
Isn't that equivalent to serializability, which is a guarantee that lots of DBs offer (though you can choose weaker ones instead)?
CockroachDB offers serializability (as the default isolation level). Most other SQL databases offer serializability as their highest isolation level, but default to something weaker.
Serializability and linearizability are not equivalent, although it's not always easy to devise a scenario in which the differences are apparent. The "comments" test in Aphyr's Jepsen analysis of CockroachDB is one such scenario: http://jepsen.io/analyses/cockroachdb-beta-20160829#comments
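One way to see the gap is with a toy history (my own illustration, not the actual Jepsen anomaly): a transaction that begins after another has committed, yet reads the pre-commit value. Such a history can still be serializable, because some serial order explains every read, while violating linearizability, which also requires the order to respect real time.

```python
# Hedged sketch: a history that is serializable but NOT linearizable.
commit_time = {"T1": 1.0}          # T1 writes x=1 and commits at t=1
begin_time = {"T2": 2.0}           # T2 begins afterwards, at t=2
reads = {"T2": ("x", 0)}           # ...yet T2 observes the OLD value of x

# The serial order T2 -> T1 explains every read, so this is serializable.
serial_order = ["T2", "T1"]

# Linearizability additionally demands the order respect real time
# (T1 committed before T2 began, so T1 would have to come first).
respects_real_time = serial_order.index("T1") < serial_order.index("T2")
assert not respects_real_time      # serializable, but not linearizable
```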
Note: it's been so long since I dealt with these things that I can't remember what the difference means in practice. :-)
One of the best papers I've come across in the last few years.
I think the overloaded term "availability" has been a big source of confusion for many trying to understand the implications of the CAP theorem at a simple level.
For example, a simple Paxos implementation is "high availability" (continues working even when individual machines fail) but sacrifices "availability" in the CAP sense.
* I've reviewed ~400 databases over the last month and it's surprising (?) how many of them claim to be the best for every use case and to be the [fastest|first|only|best]
The more interesting trade-off is using consensus algorithms for availability and durability. You can keep going as long as you have a quorum of nodes but you pay an extra rtt (at least). Having multiple replicas (in either consistent or eventually consistent systems) costs in linearly more expensive writes and storage (typically, unless you use some sort of erasure coding.)
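The storage-cost point can be made concrete with some back-of-the-envelope arithmetic (illustrative parameters, not from the thread): full replication multiplies raw storage by the replica count, while erasure coding achieves the same loss tolerance more cheaply.

```python
# Hedged sketch of the storage trade-off mentioned above.
def replication_overhead(replicas: int) -> float:
    # N full copies cost N times the raw data size.
    return float(replicas)

def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    # Reed-Solomon-style (k, m) coding stores k+m shards for k shards of data.
    return (data_shards + parity_shards) / data_shards

# 3-way replication: data survives the loss of any 2 copies, at 3.0x storage.
print(replication_overhead(3))
# RS(4, 2): data also survives the loss of any 2 shards, at only 1.5x storage.
print(erasure_overhead(4, 2))
```

The catch, consistent with the parenthetical in the comment, is that erasure-coded writes and degraded reads are more expensive, which is why it is "typically" plain replication.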
AP would be to keep trying to run the chip with severed connections between cores.
So, what happens to readers who are partitioned away from the node which holds that data? Can they not read the data for that lease duration? If they can't, then yeah, CP is a good description.
So the design doc seems to hold this up - reads must go to the lease holder, until the lease expires. Nice.
EDIT: Design doc link:
Writes require the lease too, so it is not possible for a quorum of nodes on one side of a partition to serve writes while a lease holder on the other side serves stale reads. This is a degenerate case of quorum leases (https://www.cs.cmu.edu/~dga/papers/leases-socc2014.pdf) for a single lease holder; in the future we're interested in supporting multiple lease holders to improve read latency (at the expense of write performance and availability).
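The invariant described here, that both reads and writes are gated on an unexpired lease, can be sketched as follows (names and structure are my own illustration, not CockroachDB's actual API):

```python
# Hedged sketch: gating reads AND writes on a single unexpired lease means a
# partitioned-away former lease holder cannot serve stale reads while a
# quorum on the other side accepts writes.
class RangeLease:
    def __init__(self, holder: str, expiration: float):
        self.holder = holder          # node currently holding the lease
        self.expiration = expiration  # lease validity deadline

    def valid_for(self, node: str, now: float) -> bool:
        return node == self.holder and now < self.expiration

def serve(op: str, node: str, lease: RangeLease, now: float) -> bool:
    # Reads and writes alike must go through the current lease holder.
    return lease.valid_for(node, now)

lease = RangeLease(holder="n1", expiration=100.0)
assert serve("read", "n1", lease, now=50.0)       # lease holder serves
assert not serve("read", "n1", lease, now=150.0)  # expired: must re-acquire
assert not serve("write", "n2", lease, now=50.0)  # non-holder rejected
```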
Writes becoming unavailable during a partition is a reasonable solution for a CP system.
It's probably not hard to require that writes (which require a majority) also require the lease-holder to ack the write, which seems like it'd solve this. It's a bit odd that they don't mention anything like this, but it is a fairly short blog post.
A bit of lazy browsing didn't lead me to any more detailed descriptions of how it handles partitions. Anyone else know?
* CP is a database
* AP is a cache
Anyone else pretending AP is a database is lying (unless it's a content-addressable store) :p
“We also have a lot of experience with eventual consistency systems at Google. In all such systems, we find developers spend a significant fraction of their time building extremely complex and error-prone mechanisms to cope with eventual consistency and handle data that may be out of date. We think this is an unacceptable burden to place on developers and that consistency problems should be solved at the database level.”
"Without transactions, distributed systems cannot be made to work for typical real-life applications."
This is as true now as it was 25 years ago, for exactly the reasons cited by Google. The book is incidentally still a good read.
Automatic with CRDTs, but there is a price to pay in storage and available data-structures to play with.
Manual, like Git.
Denial, aka Last Write Wins. Which is probably good enough for a cache, but has been used in other contexts.
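The CRDT option can be illustrated with the simplest example, a grow-only counter (a standard textbook CRDT, sketched here from memory): replicas merge by taking per-node maxima, so concurrent updates never conflict and merge order doesn't matter.

```python
# Hedged sketch of a G-Counter, the "automatic" conflict-resolution option.
class GCounter:
    def __init__(self):
        self.counts = {}  # node id -> that node's local increment total

    def increment(self, node: str, n: int = 1) -> None:
        self.counts[node] = self.counts.get(node, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> "GCounter":
        # Per-node max is commutative, associative, and idempotent,
        # which is exactly what makes the merge automatic.
        merged = GCounter()
        for node in self.counts.keys() | other.counts.keys():
            merged.counts[node] = max(self.counts.get(node, 0),
                                      other.counts.get(node, 0))
        return merged

a, b = GCounter(), GCounter()
a.increment("a", 2)
b.increment("b", 3)
assert a.merge(b).value() == 5   # same answer regardless of merge order
assert b.merge(a).value() == 5
```

The price mentioned above is visible even here: the counter only grows, and state is per-node rather than a single integer; richer data structures (sets with removal, maps) cost correspondingly more metadata.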
The initial comment was mainly intended as a joke since a lot of AP systems use the last-write-wins strategy.
I don't require last-write-wins. It's an immutable time series with no updates, no deletes.
"The only time that a CAP-Available system would be available when a CAP-Consistent one would not is when one of the datacenters can’t talk to the other replicas, but can talk to clients, and the load balancer keeps sending it traffic. By considering the deployment as a whole, high availability can be achieved without the CAP theorem’s requirement of responses from a single partitioned node."
It is true that if you assume your client app is not important, then a CP system is the right choice. And I would also say this /was/ true up till about 2004, when Gmail was released. But it definitely stopped being true in 2007, when the iPhone was released and you started having installed apps.
Since then, users have slowly grown to expect both mobile apps and SPAs to work regardless of whether the servers work, regardless of load balancers, regardless of connectivity.
If you look at the market trends, things are increasingly going in this direction. From self-driving cars, to IoT devices, to drone delivery, to even traditionally server-dependent productivity tools like gDocs and others - people need to get work done even when the connection to your server doesn't exist.
Will banking applications still need mostly server-dependent behavior? Yes. Is CP still important? Yes. But it is biased to say that CP systems are better. Choose the right tool for the right job. CockroachDB and RethinkDB are definitely the right choice for a strongly consistent database, but they aren't the right choice for everything. My database is an AP system, but it should not be used for many apps out there. Neither of these are "better", they are just tradeoffs you have to decide upon.
However, I'd argue that this tilts the balance even more in favor of a CP database on the backend. Even when the client application is not executing transactions on the database, consistency at the database level is what makes it possible to support secondary SQL indexes that work without surprises. An offline-capable mobile app buffers writes, moving the write to the server out of the critical path so server-side write-latency is not as visible to the user.
The types of apps that naturally fit with mobile apps, client-facing behavior, are ones that have more append-only data structures (twitter, snapchat, messaging, etc.). Those apps benefit much more from an AP system rather than a CP system, because it makes the end user's (the client) life better/more-available.
Again, the right tool for the job. And CockroachDB is certainly the right choice for the right problem. Well written article, keep it up!