In practice, you can achieve consensus across multiple nodes with a reasonable amount of fault tolerance if you are willing to accept high (as in, hundreds of milliseconds) latency bounds. That's a loss of availability that's not acceptable to many applications.
This means that you can't build a low-latency multi-master system that achieves the "A" and "I" (atomicity and isolation) guarantees. Thus, distributed systems that wish to achieve a stronger form of consistency typically choose master-slave designs (with "floating masters" for fault tolerance); Google's Megastore, at the cost of ~140ms latency, is a notable exception. In these systems, availability is lost for a short period of time when the master fails. BigTable (or HBase) is an example of this: (grand simplification follows) when the tablet server (RegionServer in HBase) for a specific token range fails, availability is lost until other nodes take over the "master-less" token range.
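To make that availability gap concrete, here is a minimal Python sketch (all names hypothetical, nothing like BigTable's or HBase's actual API) of a "floating master" handoff: reads against a token range fail only in the window between the master's death and another node claiming the range.

```python
# Toy model of master-slave failover for token ranges.
# Hypothetical names throughout; this is a sketch, not a real system's API.
class Cluster:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.master_of = {}  # token_range -> node currently mastering it

    def assign(self, token_range, node):
        self.master_of[token_range] = node

    def fail(self, node):
        self.nodes.discard(node)
        # Ranges served by the dead node become master-less: unavailable.
        for rng, master in list(self.master_of.items()):
            if master == node:
                del self.master_of[rng]

    def read(self, token_range):
        master = self.master_of.get(token_range)
        if master is None:
            # This is the temporary loss of "A" the text describes.
            raise RuntimeError("range unavailable: no master")
        return f"served by {master}"

c = Cluster(["n1", "n2"])
c.assign("range-A", "n1")
c.fail("n1")                # availability lost for range-A ...
c.assign("range-A", "n2")   # ... until another node takes the range over
print(c.read("range-A"))    # prints "served by n2"
```

The point of the sketch is only the window between `fail` and the next `assign`: during it, reads raise instead of returning, which is exactly the short availability loss being traded for consistency.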
These are not binary "on/off" switches: see Yahoo's PNUTS for a great "middle of the road" system. The paper <http://research.yahoo.com/node/2304> has an intuitive example explaining the various consistency models.
Note: in a partitioned system, the scope of consistency guarantees (that is, any consistency guarantees, eventual or not) is typically limited to, at best, a single partition of a "table group"/"entity group" (in Microsoft's Cloud SQL Server and Google's Megastore, respectively), a single partition of a table (typical sharded MySQL setups), a single row in a table (BigTable), or a single document in a document-oriented store. Atomic and isolated cross-row transactions are impractical on commodity hardware (and are limited even in systems that mandate InfiniBand interconnects and high-performance SSDs).
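A toy illustration of why the guarantee stops at row boundaries; this is a sketch assuming a hypothetical in-memory store with one lock per row (not any real system's API). A single-row check-and-put is atomic, but a "transfer" across two rows is just two independent operations that can fail halfway:

```python
import threading

# Hypothetical per-row store: each row has its own lock, so a
# read-modify-write on one row is atomic, but nothing spans rows,
# mirroring BigTable's per-row guarantee.
class RowStore:
    def __init__(self):
        self._rows = {}               # row_key -> dict of columns
        self._locks = {}              # row_key -> per-row lock
        self._meta = threading.Lock() # guards the lock table itself

    def _lock_for(self, key):
        with self._meta:
            return self._locks.setdefault(key, threading.Lock())

    def check_and_put(self, key, column, expected, value):
        """Atomic within ONE row: write only if column currently == expected."""
        with self._lock_for(key):
            row = self._rows.setdefault(key, {})
            if row.get(column) != expected:
                return False
            row[column] = value
            return True

store = RowStore()
store.check_and_put("user:1", "balance", None, 100)
ok = store.check_and_put("user:1", "balance", 100, 90)  # single-row CAS: succeeds
# A cross-row "transfer" would be two independent check_and_put calls;
# a crash between them leaves the rows inconsistent, which is precisely
# the cross-row atomicity described above as impractical.
```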
[Disclaimer: I am a committer on Project Voldemort, a Dynamo implementation; in addition to Dynamo, I also find Yahoo's PNUTS and Google's BigTable to be very interesting architectures.]
The truth is simple: some applications can give up milliseconds (or even hours) of "A" for strong "C" (but not the other way around), and some apps can give up strong "C" for high "A" (but not the other way around). How difficult is it to accept this?
The tricky part is deciding when to give up "C" or "A", and where to draw the lines. There's no ready-made recipe for this, sorry. The Basho post seems to point in exactly that direction when it states that it will provide various options across the CAP spectrum. Smart companies deliver what their customers want.
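One concrete way systems expose such options is Dynamo-style tunable quorums (N replicas, R read acks, W write acks). The sketch below is an in-memory simplification, not Riak's or Voldemort's actual implementation: when R + W > N, every read set overlaps the latest write set, so reads see the newest value; shrinking R or W trades that guarantee for lower latency and higher availability.

```python
# Hedged sketch of tunable N/R/W quorums (hypothetical classes, not a real API).
class Replica:
    def __init__(self):
        self.value, self.version = None, 0

class QuorumStore:
    def __init__(self, n, r, w):
        assert 1 <= r <= n and 1 <= w <= n
        self.replicas = [Replica() for _ in range(n)]
        self.r, self.w = r, w

    def put(self, value, version):
        # Write to any W replicas (here: the first W, for simplicity).
        for rep in self.replicas[:self.w]:
            if version > rep.version:
                rep.value, rep.version = value, version

    def get(self):
        # Read from any R replicas (here: the last R), keep the newest version.
        reads = self.replicas[-self.r:]
        newest = max(reads, key=lambda rep: rep.version)
        return newest.value

# With N=3, R=2, W=2 (R + W > N) the read set must overlap the write set:
s = QuorumStore(n=3, r=2, w=2)
s.put("v1", version=1)
print(s.get())  # prints "v1": the overlap guarantees the latest write is seen
```

With R=1 and W=2 on the same data, a read can land entirely on a replica the write never reached and return a stale (here, empty) value; that is the "C" being traded away for cheaper reads.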
"But the world is eventually consistent, so you should always choose A." Classic non sequitur. Yes, the real world is weakly consistent, fractal, and uncertain, but we, as computer professionals, aim to build models (i.e., simplifications) of real-world processes. Now, try to model and automate all that uncertainty and inconsistency of the world when the deadlines are just around the corner!
Yours in perpetual discovery,
- Lil' B