Just to expand on this: the "C" in CAP corresponds (roughly) to the "A" and "I" in ACID. Atomicity across multiple nodes requires consensus, and according to the FLP impossibility result (CAP is a very elegant and intuitive restatement of FLP), consensus is impossible in a network that may drop or arbitrarily delay packets. The serializable isolation level requires that operations be totally ordered, and total ordering across multiple nodes requires solving the "atomic multicast" problem, which is a special case of the general consensus problem.
In practice, you can achieve consensus across multiple nodes with a reasonable amount of fault tolerance if you are willing to accept high latency bounds (as in, hundreds of milliseconds). That's a loss of availability that's not acceptable to many applications.
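To make the latency point concrete, here is a minimal sketch (the function name and the numbers are illustrative, not from any real system): a consensus round commits once a majority of replicas acknowledge, so the commit latency is governed by the median-slowest replica in the quorum, and cross-datacenter round trips push that into the hundreds of milliseconds.

```python
def quorum_commit_latency_ms(replica_latencies_ms):
    """Latency until a majority of replicas have acked a proposal."""
    n = len(replica_latencies_ms)
    majority = n // 2 + 1
    # The round completes when the majority-th fastest ack arrives.
    return sorted(replica_latencies_ms)[majority - 1]

# Five replicas: two local peers, three in remote data centers (ms RTTs).
rtts = [2, 5, 90, 140, 180]
print(quorum_commit_latency_ms(rtts))  # prints 90: the 3rd-fastest ack
```

Note that adding fast local replicas doesn't help if a majority still has to include a WAN hop.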
This means that you can't build a low-latency multi-master system that achieves the "A" and "I" guarantees. Thus, distributed systems that wish to achieve a stronger form of consistency typically (Megastore from Google being a notable exception, at the cost of ~140ms latency) choose master-slave designs (with "floating masters" for fault tolerance). In these systems, availability is lost for a short period of time when the master fails. BigTable (and HBase) is an example of this: (grand simplification follows) when the tablet server (RegionServer in HBase) for a specific token range fails, availability is lost until other nodes take over the "master-less" token range.
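The failover behavior described above can be sketched in a few lines (a toy model, with invented names, not the actual BigTable/HBase code): each token range is served by exactly one server, so requests to a range whose server has died fail until the range is reassigned, while other ranges remain available.

```python
class Cluster:
    def __init__(self, assignment):
        self.assignment = dict(assignment)   # token range -> serving node
        self.live = set(self.assignment.values())

    def read(self, token_range):
        server = self.assignment[token_range]
        if server not in self.live:
            # This is the window of lost availability for this range.
            raise RuntimeError(f"range {token_range} has no live server")
        return f"served by {server}"

    def fail(self, server):
        self.live.discard(server)

    def reassign(self, token_range, server):
        # In HBase this reassignment is driven by the cluster master.
        self.assignment[token_range] = server
        self.live.add(server)

cluster = Cluster({"a-m": "rs1", "n-z": "rs2"})
cluster.fail("rs1")
# Reads on "a-m" now raise, while "n-z" is unaffected...
cluster.reassign("a-m", "rs2")
# ...until the range is taken over by another node.
print(cluster.read("a-m"))  # prints "served by rs2"
```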
These are not binary on/off switches: see Yahoo's PNUTS for a great "middle of the road" system. The paper <http://research.yahoo.com/node/2304> has an intuitive example explaining the various consistency models.
Note: in a partitioned system, the scope of consistency guarantees (that is, of any consistency guarantees, eventual or not) is typically limited to, at best, a single partition of a "table group"/"entity group" (in Microsoft's Cloud SQL Server and Google's Megastore, respectively), a single partition of a table (the usual sharded MySQL setup), a single row in a table (BigTable), or a single document in a document-oriented store. Atomic and isolated cross-row transactions are impractical on commodity hardware (and are limited even in systems that mandate the use of InfiniBand interconnects and high-performance SSDs).
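A toy store makes the scoping rule above tangible (the class and its API are invented for illustration; none of the named systems expose this interface): multi-row writes are atomic only when they all land in one entity group, and cross-group transactions are simply refused.

```python
class PartitionedStore:
    def __init__(self):
        self.data = {}  # (group, key) -> value

    def commit(self, group, writes):
        """Atomically apply a batch of writes within ONE entity group."""
        # Single-partition commit: the whole batch lands or none of it does.
        self.data.update({(group, key): value for key, value in writes.items()})

    def commit_cross_group(self, writes_by_group):
        # The systems above don't offer this on commodity hardware.
        raise NotImplementedError("atomicity is scoped to a single entity group")

store = PartitionedStore()
# OK: both rows belong to the same entity group, so the batch is atomic.
store.commit("user:42", {"profile": {"name": "alice"},
                         "settings": {"theme": "dark"}})
print(store.data[("user:42", "profile")])  # prints {'name': 'alice'}
```

Application schemas end up being designed around this boundary: anything that must change atomically goes into the same group.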
[Disclaimer: I am a committer on Project Voldemort, a Dynamo implementation; in addition to Dynamo, I also find Yahoo's PNUTS and Google's BigTable to be very interesting architectures.]
Oh, great! Here we go again with that CAP flame war. In spite of strlen's well-written post, things always degenerate into a pointless discussion every time CAP is cited (Godwin's law, right?).
The truth is simple: some applications can give up milliseconds (or even hours) of "A" for strong "C" (but not the other way around), and some apps can give up strong "C" for high "A" (but not the other way around). How difficult is it to accept this?
The tricky part is deciding when to give up "C" or "A", and where to draw the lines. There's no ready recipe for this, sorry. The Basho post seems to point in exactly that direction when it states that it will provide various options across the CAP spectrum. Smart companies deliver what their customers want.
"But the world is eventual consistent then you always should choose A". Classic non sequitur. Yes, real world is weakly consistent, fractal and uncertain, but we, as computer professionals, aim to build models (i.e., simplifications) of real world processes. Now, try to model and automate all that uncertainty and inconsistency of the world when the deadlines are just around the corner!
Thanks for bringing some much-needed science to these proceedings, my man. Too many people mistake computer science for a therapy session where their feelings matter. Computer science has much in common with the honey badger. Think upon this and be enlightened.