UPDATE kv SET value = value + 2 WHERE key = 'world';
UPDATE kv SET value = value * 1.1 WHERE key = 'world';
An alternative mechanism could have been to create a log of changes. The order in which the changes are added to the log then determines the order of execution. That way, we don’t have to block the UPDATE, right?
Unfortunately, things are a little more complicated in an RDBMS. Each of the UPDATEs could be part of a bigger transaction block that may (atomically) roll back. Moreover, the RDBMS guarantees consistency, meaning the new value must be visible immediately after the UPDATE, yet that value cannot be determined until all preceding UPDATEs of the same row have committed or rolled back. So in the end the UPDATE has to block until every preceding transaction that modified the same rows is done and the new value can be computed, just as with row-level locks, except with more bookkeeping (and serialisation issues).
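A minimal two-session sketch of that blocking behaviour (table and column names are made up for illustration):

```sql
-- Session A
BEGIN;
UPDATE kv SET value = value + 2 WHERE key = 'world';
-- transaction still open; the row is now locked

-- Session B
UPDATE kv SET value = value * 1.1 WHERE key = 'world';
-- blocks here: the result depends on whether A commits or rolls back

-- Session A
COMMIT;
-- Session B's UPDATE now proceeds against A's committed value
```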
If you strip out some of the functionality of an RDBMS such as multi-statement transaction blocks or consistency, then you could potentially avoid locks, but Citus distributes PostgreSQL, which does provide this functionality.
The primary restriction for anything involving CRDTs is that no operation can "take back" something added by a previous operation, so the data in question can only evolve monotonically. It's great when you can get away with it, but it's unclear how often real workloads actually fit that constraint.
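As a rough illustration of that restriction (schema made up here, not how Citus or Cassandra implement counters): a grow-only counter (G-Counter) can be modelled as one monotonically increasing shard per node, merged by taking per-node maxima.

```sql
-- Hypothetical sketch of a G-Counter: each node owns one row and
-- only ever increases it, so the state evolves monotonically.
CREATE TABLE counter_shards (
    node_id    int    PRIMARY KEY,
    increments bigint NOT NULL CHECK (increments >= 0)
);

-- A node incrementing its own shard (node 1 here):
INSERT INTO counter_shards AS c (node_id, increments) VALUES (1, 1)
ON CONFLICT (node_id) DO UPDATE SET increments = c.increments + 1;

-- Merging two replicas is a per-node max (commutative, idempotent):
--   merged = GREATEST(replica_a.increments, replica_b.increments)

-- The observed counter value is the sum over all shards:
SELECT sum(increments) FROM counter_shards;
```

Note what's missing: there is no way to decrement without "taking back" a prior increment, which is exactly the restriction above.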
Have you made a compilation of your thoughts from the various research you've done? Would love to see them!
edit: Although to be fair that blog post is ancient and covers Cassandra 2.0, which is no longer supported. Counters were rewritten in Cassandra 2.1.