Partition tolerance does not mean your distributed system can't be consistent and available because your network dropped one packet, or one node failed. What would be the point of such a definition? Instead, the CAP theorem implies that while the network is partitioned, consistency or availability must be sacrificed. In the case of the dropped packet, once it is retransmitted the partition is healed and progress can be made. Or in the case of the failed node, nothing says that the rest of the system can't be consistent and available, so that the system as a whole maintains that property. There is no requirement that the unavailable node be available.
Truly partition tolerant systems are those that continue to function in the face of a prolonged partition, and those are the systems that must sacrifice either consistency or availability.
"Consistent sometimes" is not the same thing as "Consistent" and "Available sometimes" is not the same thing as "Available" - and so "Consistent and Available sometimes" is not the same as "Consistent and Available".
I believe you might be guilty of confusing "Eventual Consistency" with "Consistency".
Funnily enough, no-one has found much use for "Eventual Availability" so far.
A distributed system is considered available if "every request received by a non-failing node [results] in a response." It does not mean you cannot retransmit or retry.
Similarly, the consistency guarantee only requires that there exist a total order on operations. Failures are ok, as we're allowed to retransmit, retry, and otherwise tolerate faults. There is no inconsistency, nor is anything eventual.
My point is that a "temporary" partition is just a fault, and as long as the fault is shorter than the allowed response time of the system, it doesn't make a difference.
The consistency guarantee requires that RW histories are compatible with some sequentially consistent history on a non-concurrent RW register. Defining a total order on operations is sufficient, I believe, but not necessary (does it matter what order two consecutive reads happened in?).
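The "sufficient but not necessary" point can be made concrete with a brute-force checker: a toy history of register operations is sequentially consistent if *some* ordering of it makes every read return the most recently written value, and swapping two adjacent reads never changes the answer. This is an illustrative sketch, not anyone's production checker:

```python
from itertools import permutations

def legal(order):
    """Is this total order a valid execution of a single RW register?
    Every read must return the most recently written value."""
    val = None
    for op, x in order:
        if op == "w":
            val = x
        elif x != val:          # a read that saw something else
            return False
    return True

def seq_consistent(history):
    """Brute force (fine for toy histories): does ANY total order
    of the operations form a legal register execution?"""
    return any(legal(p) for p in permutations(history))

# A history with two consecutive reads of the same write: the two
# reads can be witnessed in either order, so requiring one specific
# total order is stronger than necessary.
h = [("w", 1), ("r", 1), ("r", 1), ("w", 2), ("r", 2)]
assert seq_consistent(h)
```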
Paxos is, fundamentally, a quorum-based system that deals with reordering of messages. It sacrifices liveness for correctness - if the proposer does not hear back from a majority of nodes (in the case of, e.g. a partition), the protocol will not complete (availability is sacrificed).
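The liveness-for-safety trade can be sketched in a few lines. This is not real Paxos, just the quorum requirement in a proposer's prepare phase: without acknowledgements from a majority, the round simply cannot make progress, but nothing inconsistent happens either. All names here are illustrative.

```python
def prepare(acceptors, reachable, ballot):
    """Send prepare(ballot) to all acceptors; count acks from the
    ones the network actually lets us reach."""
    acks = sum(1 for a in acceptors if a in reachable)
    majority = len(acceptors) // 2 + 1
    if acks < majority:
        return None     # no quorum: stall (liveness lost), stay consistent
    return ballot       # quorum reached: safe to move to the accept phase

acceptors = ["a", "b", "c", "d", "e"]
# A partition cutting off three of five acceptors blocks progress:
assert prepare(acceptors, reachable={"a", "b"}, ballot=1) is None
# With a majority reachable, the round proceeds:
assert prepare(acceptors, reachable={"a", "b", "c"}, ballot=1) == 1
```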
My point is not that there is a 'vital packet' in every protocol, the omission of which will cause either a lack of availability or consistency (although I can certainly design protocols that way!) - it's that for every protocol there is a network partition which causes it to be either unavailable or inconsistent. That network partition might be dropping ten messages, or just one. Retransmitting would make sense, but in real life message failures are often highly temporally correlated :(
The proof of this, by the way, is in a very famous paper by Fischer, Lynch and Paterson called "Impossibility of Distributed Consensus with One Faulty Process". One takeaway is that a single slow-running process can take down any protocol. It may take a few missed messages, but only a single node...
Paxos sacrifices latency
“The network will be allowed to lose arbitrarily many messages sent from one node to another”
The theory behind all this really does hold this point up. I have another blog post with much more detail on the theory here: http://the-paper-trail.org/blog/?p=49, but I warn you it may be heavy going.
If a node you can contact is required to guarantee consistency, there will be some times that it will have to refuse your request because other nodes are not contactable.
The author's point was that in any distributed system there is a non-zero probability of a network failure. While both clients and server nodes can retry connections, there is a non-zero probability that the problem will persist longer than your "availability agreement" allows. In that case, you have a choice - return potentially inconsistent data or refuse the request.
What you seem to be arguing is that the probabilities of failure - in particular of repeated failure - while non-zero, are effectively zero. The author would disagree (as they point out, the probabilities combine exponentially as the number of nodes increases.) I think he's right and that you are wrong.
So it is tolerant of partition and it sacrifices availability in favour of consistency. So it is CP, not CA.
But what does it mean to tolerate a partition? As if the system has a choice?
Any CA system is claiming to be consistent and available as long as the network doesn't partition. That's the strongest statement you can make under the CAP theorem, and Paxos certainly falls in that camp.
My problem with the original article was that it claimed that any individual network or node failure was a partition affecting the consistency or availability of the system. Paxos is a clear counterexample to that, as it tolerates a lot more than that without sacrificing consistency or availability.
Once the network actually partitions (or half the nodes become unreachable), then you are correct. The CAP theorem comes into play again and we must sacrifice either C or A, and Paxos sacrifices A, refusing to respond rather than break consistency.
Or does any system subject to hardware or power failure fail to count as "available"?
If the network is allowed to drop packets there are times where either you must not respond to requests (as doing so would violate sequential consistency) or you must respond incorrectly, potentially with stale information.
The network partition that forces you to drop one of these guarantees might be quite dramatic - but note that, for example, a quorum system is only available on the majority side of its partition. Therefore if a single node is unable to deliver messages (due to a network partition event), it will not be able to correctly respond to requests and must either not respond to its clients or respond incorrectly.
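The majority-side observation reduces to a one-line rule: a side of a partition may serve requests only if it still holds a majority, because two disjoint majorities cannot exist. A minimal sketch, with made-up node names:

```python
def can_serve(side, cluster_size):
    """Only a strict majority of the cluster may keep answering;
    anything smaller must refuse or risk stale/conflicting answers."""
    return len(side) > cluster_size // 2

cluster = {"a", "b", "c", "d", "e"}
majority_side, minority_side = {"a", "b", "c"}, {"d", "e"}
assert can_serve(majority_side, len(cluster))
assert not can_serve(minority_side, len(cluster))
# In particular, a single node cut off from the rest must stop responding:
assert not can_serve({"a"}, len(cluster))
```

Note that `len(side) > n // 2` also guarantees at most one side of any split can ever qualify, which is exactly why quorum systems stay consistent through partitions.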
So then what am I missing? It sounds to me like the rest of your post is reiterating what you just called incorrect.
I think this is the tweet that prompted the post: http://twitter.com/JamesMPhillips/status/26502076366
Although it is pretty much the only choice for general purpose file sync, it's still not a good choice. It is difficult to train non-technical staff to deal with inconsistencies on resync, and they don't want to have to deal with it. I've had success with Synctus precisely because it guarantees consistency (it is CP, and any one node keeps the A for a given file). Of course, this only works for mostly-on systems.
Do I understand correctly?
What Redis Cluster will guarantee is that up to M-1 nodes, where M is the number of replicas per "hash slot", can go down and/or get partitioned.
So this is a form of "weak" tolerance to partition, where at least a given percentage of the nodes must remain up and able to talk to each other.
But in practice this is how most networks work. Single computers fail, and Redis Cluster will still be up. Single computers (or up to M-1) can experience networking problems, and Redis will continue to work.
In the unlikely event that the network is split into two halves, the cluster will start replying with an error to clients.
This means that the sysadmins have to design the network so that strange split patterns are unlikely - like A and B can talk to C, which can talk to D, and so on. In a high-performance network with everything well cabled and without complex routing this should not be a problem, IMHO.
In the rest of your post, you seem to be sacrificing consistency; one server is down, and thus not receiving any updates from the other servers when data gets updated.
I'm not sure you understood the point of the article, so I'll try to restate it: When part of your system goes down (and it will), you can choose between refusing requests, in which case you sacrifice availability, or serving requests, in which case you sacrifice consistency, since the part of the system which is down cannot be updated when you update data, or cannot be queried in the case of data which is insufficiently replicated. You _cannot_ choose both, since that would require communicating with the downed server.
In Redis Cluster there is no cluster data communication except for resharding, which only works when the whole cluster is up and is done by the sys administrator when adding a node.
So in normal conditions, a node will either:
1) Accept a query, or
2) Tell the client: no, ask instead 184.108.40.206:6380
All the nodes are connected only to keep the cluster state up to date. If too many nodes are down from the point of view of a single node, it will reply to the client with a cluster error.
What I'm sacrificing is only consistency, because at any given time there is only a single host getting the queries for a given subset of keys.
The exceptions are the resharding case, which is also fault-tolerant, and slave election (fault tolerance is obtained via replicas).
As a side note, clients should cache which node is responsible for a given set of keys, so after some time, when there are no failures or reshardings in progress, every client will directly ask the right node, making the solution completely horizontally scalable.
Dumb clients will just always do the ask-a-random-node + retry dance if they are unable to keep state.
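The two client behaviours just described - a caching client that learns which node owns which slot, and a stateless one that asks a random node and follows the redirect - can be sketched as a toy simulation. Everything here is made up for illustration: real Redis Cluster uses 16384 slots, CRC16 hashing, and MOVED/ASK replies, none of which are modelled faithfully.

```python
import random

SLOTS = 8                                    # toy value; Redis Cluster uses 16384
slot_owner = {s: ("node-a" if s < 4 else "node-b") for s in range(SLOTS)}
nodes = sorted(set(slot_owner.values()))

def slot_of(key):
    return hash(key) % SLOTS                 # stand-in for CRC16(key) % 16384

def query(node, key):
    """A node either accepts the query or says: no, ask instead <owner>."""
    owner = slot_owner[slot_of(key)]
    return ("OK", owner) if node == owner else ("ASK", owner)

def smart_get(key, cache):
    """Caches slot -> node, so steady-state traffic hits the right node."""
    node = cache.get(slot_of(key), random.choice(nodes))
    status, owner = query(node, key)
    cache[slot_of(key)] = owner              # learn (or confirm) the owner
    return status if status == "OK" else query(owner, key)[0]

def dumb_get(key):
    """Stateless: ask a random node and follow one redirect."""
    status, owner = query(random.choice(nodes), key)
    return status if status == "OK" else query(owner, key)[0]

cache = {}
assert smart_get("user:1", cache) == "OK"
assert smart_get("user:1", cache) == "OK"    # second call hits the cached owner
assert dumb_get("user:1") == "OK"
```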
Edit: there are few fields like this that are so totally in the hands of academia. My contribution is from the point of view of a humble hacker who can't follow complex math but who will try to be much more pragmatic.
This sacrifices availability. Remember, the cluster doesn't include only the servers; it also includes the clients, since ultimately the point of a database server is to provide the data to the clients upon request.
What I did was to stress the tradeoffs that my data model forces, as Redis handles complex aggregate data and an eventually consistent solution sucks in this context.
So Redis Cluster users will get a fast, scalable, consistent solution that will start throwing errors if the network goes down badly, but that will survive if a few nodes go bad or if there are small network problems affecting a small number of nodes. If this sounds too little available, please explain this to me:
We have a network with 10 web servers and 10 DB nodes.
The netsplit separates 8 DB nodes from all the rest, so the 10 web servers are able to talk with only 2 DB nodes.
I wonder how these two nodes will be able to handle all the traffic usually handled by 10 nodes. Netsplit tolerance is a myth if you don't specify very, very well under what conditions it holds.
Also, beware of confusing your local failure detector for a global one. The emergent behavior is a pain when you don't take this into account.
For a CA system, any node which is unable to assure global consistency reports itself as failed. It will neither return bogus results, nor hang indefinitely.
Reporting an error condition counts as an availability violation.
The reason failures are hard to detect in asynchronous networks is that permissible message transit times are unbounded; ie they refuse to acknowledge the presence of any partition. If you acknowledge the possibility of partitions, then your system is by definition not asynchronous.
This is bullshit. Per the definition quoted in the linked article, availability only means that "...every request must terminate.". It is not required that it terminate successfully.
No. Like you say, async means failures are hard to distinguish from delays. If a node's NIC catches fire, I'm pretty sure no messages are ever going to get delivered to it - hence it is partitioned from the network. It is very hard to tell whether it has failed, or whether it is just running slowly, in an async network.
"This is bullshit. Per the definition quoted in the linked article, availability only means that "...every request must terminate.". It is not required that it terminate successfully."
No. The definition of the atomic object modelled by the service doesn't include an 'error' condition. Otherwise I could make a 100% available, 100% consistent system by always returning the error state, which is thoroughly uninteresting. You have to read more than the quoted definition in the Gilbert and Lynch paper to start calling bs - it is very clear that the authors do not allow an 'error' response.
A curious inconsistency.
But still... ok, the bullshit is elsewhere.
For a distributed (i.e., multi-node) system to not require partition-tolerance it would have to run on a network which is guaranteed to never drop messages (or even deliver them late) and whose nodes are guaranteed to never die. You and I do not work with these types of systems because they don’t exist.
This is bullshit; it assumes that the only options are complete reliance on absence of failures, and tolerance of arbitrary partitions. Specifically, it claims that P(system fails) = 1 - (1 - P(any particular node fails)) ^ (number of nodes), and that therefore "the question you should be asking yourself is: In the event of failures, which will this system sacrifice? Consistency or availability?". There are plenty of real-life counter-examples to this; given a real-world (ie, at least partially synchronous) network, it is possible to maintain consistency and availability in the face of partition/failure of up to half-minus-one of your nodes. This blows the probability calculations completely out of the water.
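The numerical gap between the two models is easy to check. Below, `p` is a per-node failure probability, assumed independent (which, as noted elsewhere in this thread, real failures often are not); the article's implicit model counts the system as down whenever *any* node fails, while a majority quorum only goes down when more than half the nodes fail at once:

```python
from math import comb

def p_any_node_fails(p, n):
    """The article's model: any single node failure is fatal."""
    return 1 - (1 - p) ** n

def p_quorum_lost(p, n):
    """Probability that more than half of n independent nodes are
    down simultaneously (binomial tail)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p, n = 0.01, 5
assert p_any_node_fails(p, n) > 0.04    # ~4.9%: some node is down quite often
assert p_quorum_lost(p, n) < 1e-4       # losing 3 of 5 at once is very rare
```

Under these (admittedly generous) independence assumptions, the quorum system is several orders of magnitude less likely to lose availability than the "any failure is fatal" model suggests.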
You cannot, however, choose both consistency and availability in a distributed system. ... As a thought experiment, imagine a distributed system which keeps track of a single piece of data using three nodes—A, B, and C—and which claims to be both consistent and available in the face of network partitions.
Hey, see how that "in the face of network partitions" snuck in there? It's bullshit, you want "in the face of these specific kinds of network partitions", things like crashed nodes. Just enough to invalidate that abuse of statistics used to claim that Availability is impossible.
I don't agree - availability is a totally meaningless property if you are allowed to occasionally return "no, I won't process your request". Such a response communicates nothing about the state of the atomic object you are writing to or reading from, so you can always return it and trivially satisfy 'availability' if we define it this way.
To your other point - be aware that I didn't write the article, so I'm not speaking for the author. However, I think you're right that the article makes it sound a bit like a single failure or message loss will cause any protocol to immediately sacrifice availability or consistency. This isn't the case - all CAP does is establish that for every protocol, there exists a failure pattern that will force it to abandon one of the two.
For quorum systems, this means that a permanent partition causes one half of the partition to no longer be the majority, and therefore can no longer be consistent if it responds to any requests. Paxos is another example.
So you're right, there are particular patterns of partition that stop a protocol from functioning correctly. And many that don't - hence the term 'fault tolerant' has some meaning.
Avoiding these patterns in practice can turn out to be surprisingly tricky. High-performance systems can't afford to have too many participants, which means that the probability of a problematic failure is higher than we might like (five participants in a consensus protocol is already a lot for high throughput, yet just three failures are enough to block it). Failures are also often correlated, so independence assumptions don't hold as well as we would like. Machines crash. Networking gear fails.
There's no abuse of statistics here. The probability of a particular failure pattern can be engineered low, and at that point you must weigh the trade-offs of the cost of loss of availability / consistency vs. the effort you make to minimise the chance of occurrence. We are talking about edge cases here, and the implicit assumption is that the cost of hitting one of them is huge (and it often is). However if you run a cluster large enough, you hit edge cases all the time.
(Although you mention partial synchrony, note that most of these results are mainly applicable to asynchronous networks in the first instance).
Availability of the individual node, or of the service as a whole? So long as the nodes always answer quickly, can't I just ask a few different ones until I get a successful response (or conclude that the entire system is down)?
Let's imagine a system where you want to be 100% available for reads, for any number of failures less than N. Then you need to be able to submit every single write to every single node in the system; otherwise the failure of every node except a stale one will result in stale reads.
But then if a single node is partitioned from the network, we can't (correctly) be available for writes, because the system is incapable of sending updates to all replicas as required. It doesn't matter which node you ask.
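The thought experiment above can be sketched directly. This is a toy model, not any real replication protocol: reads may hit any single surviving node, so a write may only be accepted if it can reach every replica, and one partitioned node is enough to block writes.

```python
def write(value, nodes, reachable, store):
    """Accept a write only if every replica can be updated; otherwise
    refuse it (sacrificing availability) rather than leave a replica
    that would later serve stale reads."""
    if not nodes <= reachable:       # some replica can't be reached
        return False
    for n in nodes:
        store[n] = value
    return True

nodes = {"a", "b", "c"}
store = {n: 0 for n in nodes}
assert write(1, nodes, reachable={"a", "b", "c"}, store=store)

# Partition off one node: writes can no longer be accepted, because
# the cut-off node would otherwise serve a stale read later.
assert not write(2, nodes, reachable={"a", "b"}, store=store)
assert store["c"] == 1               # "c" still holds the old value
```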
The point is that every system has a failure mode like this. I take your point that it's not always just a single node failure that precipitates the abandonment of C or A, but that was never the point of the CAP theorem.
This lack of availability is different from the availability in the A of CAP, since that availability holds only so long as the network is not partitioned (by definition in a CA system).
Such a system might not be considered a distributed system at all (although it may still be distributing load), since a partition-intolerant system is effectively one system as far as the CAP theorem is concerned.
So it's essentially a special case of the CAP theorem, but it is still useful to describe it as CA.
Therefore it's not a CA system, but a C system.
If there is some time period during which requests are not responded to within a time bound, the system is not available then, and further is not a 'highly' or 100% available system. That is what the CAP theorem is talking about.
Consistency, similarly, is not a state but a property that holds across all responses. Either you return a consistent response to all your requests, or you don't. In the context of CAP, there is no middle ground.
3 nodes, 2 must agree. When the partition heals, the resolution process happens. It would have to be a SERIOUSLY bad network design and quorum setting that allows a quorum on both sides of the split.
It's just like eventual consistency. We're not talking days or even minutes. We're talking milliseconds/seconds of partition split. If you have a partition split for days, you have OTHER issues to address.