To go a little further, I suspect that with CP cases like banks, the real underlying problem is more fundamental. You can't realistically, under the laws of physics, have a hard notion of transactional ordering (did the account's money come in before it went back out?) without pinning down the concept of an account to a location. At least, not efficiently or quickly.
In other words, eventual consistency in the face of asynchronous remote actors never makes sense when your requirements dictate hard, consistent transactional ordering. You have to think of it as "the transactions happen in the order they arrive at the account's virtual location in New York". Treating them as globally distributed in nature is always going to cause logical problems at some level. If your database were eventually consistent, you'd have to build in some sort of after-the-fact safety checks with the ability to abort the outer transaction, at which point you've wasted a lot of effort patching over the wrong model.
So either you have a need for strong consistency guarantees (order really matters), in which case you have to pin the transactions' locality down (where do they meet up for their efficient strict ordering?) and CP is your model, or you don't (simpler things like social network updates), and you're better off with AP and eventual consistency to scale things out more easily and make it faster for everyone. And really, who cares if once in a great while a user-visible race happens and some people see a couple of posts in a different order than someone else does for a few minutes until things snap back into sync?
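As a toy sketch of the two write paths (my own illustration, with made-up names like home_ledger and replica_logs, not any real system's API): the CP-style write refuses to acknowledge until the account's single authoritative home has applied it in order, while the AP-style write acknowledges immediately and lets replicas reconcile later.

    # Hypothetical sketch only. "home_ledger" stands in for the account's
    # single authoritative location (the "virtual location in New York").
    from collections import defaultdict

    home_ledger = defaultdict(int)        # CP-ish: one place decides the order

    def cp_withdraw(account, amount):
        """Acknowledge only after the authoritative copy applies the op, in order."""
        if home_ledger[account] < amount:
            raise ValueError("insufficient funds")  # ordering lets us refuse safely
        home_ledger[account] -= amount
        return "ok"

    replica_logs = [[], []]               # AP-ish: any replica accepts writes

    def ap_post(replica_id, update):
        """Acknowledge immediately; replicas exchange logs and converge later."""
        replica_logs[replica_id].append(update)
        return "ok (will sync eventually)"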
> You can't realistically, under the laws of physics, have a hard notion of transactional ordering (did the account's money come in before it went back out?) without pinning down the concept of an account to a location
You've just made me realise that the universe itself is only eventually consistent - that's what all those weird quantum observer / wavefunction collapse events are.
Yep, that's what I was trying to get at, but failed in my explanation.
Everything's eventually consistent only within a sphere around the event that expands at the speed of light. No faster.
Even if the sun were to blink out of existence, we'd have about eight minutes (roughly 500 seconds) during which we wouldn't know. Gravity would still be there, holding Earth in place. The light would still warm us. Then, darkness, and we'd be flung out on a tangential course.
This isn't quite right. Even with special relativity and the speed of light, it's possible to build consistent systems and achieve consensus. Not-eventually-consistent doesn't mean instantaneous. The speed of light just sets a lower bound on the speed of consensus.
It's important not to overstate that bound, though.
Sure. CA fulfills that requirement. Of course, you throw away any semblance of partition tolerance.
Of course, a single machine guarantees there can be no partitioning, and makes it really easy to obtain consensus. It might not be terribly fault-tolerant, however.
CA is not really a valid/possible thing in the context of CAP. The original phrasing of the theorem was poor and the "choose 2" myth persists. You can choose to (or accidentally) give up C or A, but you don't get to choose not to have partitions. Not being partition tolerant doesn't really make sense (you're just broken?) if partitions are going to happen. A better phrasing of CAP is "in a network with partitions, a distributed system cannot be both consistent and available." (Note: this doesn't guarantee that you are one of C or A; you just can't be both C and A.) You can see that definition used in formal treatments, e.g. Theorem 1 in https://users.ece.cmu.edu/~adrian/731-sp04/readings/GL-cap.p...
(Briefly, note that the original article is critiquing that definition of availability in practice which is legitimate but not relevant to this sub-thread.)
What I'm saying is that EC is most definitely NOT a requirement of physics/the speed of light (which is what your original post claimed). The speed of light only sets a (theoretical) limit on how fast you can implement a consistent system.
The original Paxos paper ("The Part-Time Parliament") uses an analogy of a quorum of parliamentarians occasionally getting together in the same building and agreeing on something. Of course, it being the same building is arbitrary and doesn't actually matter, but it's easier to intuit that the speed of light isn't an insurmountable road-block at that scale.
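A toy way to see why the quorum trick works (my own sketch, not from the paper): as long as every round requires a majority, any two rounds must share at least one parliamentarian, so a later round can always learn what an earlier one decided.

    from itertools import combinations

    parliament = {"A", "B", "C", "D", "E"}
    majority = len(parliament) // 2 + 1   # 3 of 5 must take part

    # Any two majority quorums overlap in at least one member, so decisions
    # made in one round are visible to whoever forms the next quorum.
    for q1, q2 in combinations(combinations(sorted(parliament), majority), 2):
        assert set(q1) & set(q2), "two majorities can never be disjoint"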
Special relativity and the speed of light impose some fundamental lower bounds on the cost of consistency: https://en.wikipedia.org/wiki/Relativity_of_simultaneity . This theoretically manifests as lower bounds on the performance of consistency in distributed systems.
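As a back-of-envelope sketch (the distances are my own rough assumptions): the light-speed floor on a strongly consistent round trip between two far-apart replicas is easy to compute, and no amount of clever software makes it go away.

    C_KM_PER_S = 299_792.458              # speed of light in vacuum, km/s

    # Rough great-circle distances in km, assumed for illustration only.
    routes = {
        "New York <-> London": 5_570,
        "New York <-> Sydney": 15_990,
    }

    for route, km in routes.items():
        one_way_ms = km / C_KM_PER_S * 1000
        # A consistent write needs at least one full round trip before it can ack.
        print(f"{route}: at least {2 * one_way_ms:.0f} ms per consistent round trip")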