Mergeable replicated data types – Part I (acolyer.org)
104 points by telotortium 8 months ago | 21 comments



The "characteristic relations" of a datatype looks really interesting. It's similar to abstract data type (ADT) except you only care about modeling the state but not the operations.

And if you ignore operations, many datatypes reduce down to very simple mathematical objects. For example, even though sets and priority queues are very different in code, both only need the membership relation (R_mem) to be fully specified, because you can rebuild a priority queue knowing just its members. To specify a list/queue/stack, you additionally only need to know the ordering (they denote this relation as R_ob, for "occurs before").

The advantage of converting a datatype (bijectively) to this relational domain is that you can essentially do "state-based" merging. You only need to define: how to convert to the relational domain, how to merge within that domain (which is still rich enough for you to specify relational invariants you need preserved), and how to convert back to the concrete datatype. Contrast this with operation-based syncing, where you would need a quadratic explosion of definitions for how any two operations should merge together. Here you don't even need to handle any particular operation individually (e.g., if it works for push_front, you will likely get push_back for free). Essentially it is saying the semantics of an operation is captured by the delta caused to the characteristic relations of the datatype.
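
To make this concrete, here's a rough Python sketch of the idea for a queue (my own names and simplifications, not the paper's OCaml): abstract to R_mem and R_ob, merge each relation with a generic 3-way set merge, and rebuild a concrete queue.

    # Rough sketch of the relational merge for a queue (my names, not the
    # paper's OCaml). Abstract to R_mem and R_ob, 3-way merge each relation
    # as a set, then rebuild a concrete queue.

    def merge_relation(lca, a, b):
        # keep what everyone still agrees on, plus whatever either side added
        return (lca & a & b) | (a - lca) | (b - lca)

    def abstract(queue):
        mem = set(queue)                                   # R_mem
        ob = {(x, y) for i, x in enumerate(queue)          # R_ob, "occurs before"
                     for y in queue[i + 1:]}
        return mem, ob

    def concretize(mem, ob):
        # rebuild a queue by ordering the members according to R_ob
        remaining, out = set(mem), []
        while remaining:
            x = next(e for e in remaining
                     if not any((p, e) in ob for p in remaining if p != e))
            out.append(x)
            remaining.remove(x)
        return out

    def merge_queue(lca, a, b):
        (ml, ol), (ma, oa), (mb, obb) = abstract(lca), abstract(a), abstract(b)
        mem = merge_relation(ml, ma, mb)
        ob = merge_relation(ol, oa, obb)
        # crude way of keeping R_ob consistent with R_mem after the merge;
        # the paper handles this more carefully
        ob = {(x, y) for (x, y) in ob if x in mem and y in mem}
        return concretize(mem, ob)

    # lca = [1, 2]; one replica pushes 3 at the back, the other pops 1:
    print(merge_queue([1, 2], [1, 2, 3], [2]))             # [2, 3]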


CRDTs (and I guess MRDTs are no different in this respect) are often touted as an enabling technology for collaborative local-first applications (see e.g. recent discussion: https://news.ycombinator.com/item?id=21581444) but I fail to see how they are useful beyond the simplest of cases: the emphasis is on automatic convergence, but it seems almost impossible to preserve non-trivial application-specific invariants without manual merging (I am thinking of such applications as collaborative source code editing or vector graphics editing). Am I missing something?


The point is that often you don't need to. If you use Google Docs, chances are you will be editing it disjointly most of the time, and only occasionally will you edit the same bit of the document. Then you don't really care whether what comes out is a meaningful sentence; you care that all parties involved (regardless of the order in which they see the operations) see the same version of the document, so that they can fix it. That's what CRDTs buy you.

Without them, you have to use explicit synchronisation, which is a bad idea if you want local-first software (it doesn't work offline, it's needed even when editing disjoint bits of the file, etc.). Now there is a great paper by Victor Gomes, Martin Kleppmann, and Dominic Mulligan that proves (in a proof assistant!) that pretty much all the alternatives that do not use strong consistency are flawed in edge cases.

As for triviality of the operations, I know for a fact that Martin Kleppmann has a paper in the works that shows (and proves) how to do tree moves efficiently as a CRDT. That means you can basically do arbitrary edits on JSON. Hopefully, that should clear up any confusion about which applications it might be useful for.


Thanks for your comment!

> The point is that often you don't need to. If you use Google Docs, chances are you will be editing it disjointly most of the time, and only occasionally will you edit the same bit of the document. Then you don't really care whether what comes out is a meaningful sentence; you care that all parties involved (regardless of the order in which they see the operations) see the same version of the document, so that they can fix it. That's what CRDTs buy you.

Ok, so the hope is that semantic conflicts are rare and if they do happen, users can easily see them and promptly correct them. I guess this works well if the invariants are local and synchronization is fairly frequent (i.e. on the scale of seconds, not weeks).

> That means you can basically do arbitrary edits on JSON. Hopefully, that should clear up any confusion about which applications it might be useful for.

Not quite. Of course you can model everything in JSON, but the question remains as to how much additional implicit structure is needed atop the explicit tree-like structure that JSON provides and how easily this implicit structure is disturbed by merges.


I think the subtlety with CRDTs is that how you choose to represent your data model via CRDT data structures encodes the semantics of how conflicts are resolved. As a simple example, there are multiple variants of Set (remove-once, add-only, observed-remove, etc.) that have different semantics.

As a more concrete example, Apple Notes edits tables and resolves conflicts via CRDTs. Now for editing tables in a word processor, you can have multiple users moving, adding, and deleting columns, in addition to editing the text within a cell. (One user adds a row, another adds a column. One user deletes a column and another moves it. etc.)

If you just naively represent this as a list of lists, the semantics expected by the user breaks. (One user adds a row, another adds a column, and you have a short row with cells in the wrong place.)

Apple ended up representing the columns as an ordered set of ids (piecing this together from other CRDTs), the rows as a second ordered set of ids, and then the text blobs as a map from column id to a map from row id to a text CRDT object, with the additional assumption that any "missing" text is in the blank, initial state. (I might have this transposed, it's from memory, but that's the basic idea.)

It works, and works well, but a lot of thought and effort was put into modeling the domain as CRDTs to capture the required semantics.

(Here, for ordered set, you can use a list CRDT, drop duplicates, and tack on an add-only set of tombstones to keep move/delete conflicts from resurrecting rows/columns.)
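
A rough sketch of that shape (my own stand-in names, not Apple's code; OrderedSetCRDT and TextCRDT would be real list/text CRDTs in practice):

    # Stand-ins: a real implementation would use a list CRDT with duplicates
    # dropped for the ordered sets, and a collaborative text CRDT for cells.
    class OrderedSetCRDT:
        def __init__(self):
            self.items = []
        def insert(self, index, item):
            if item not in self.items:
                self.items.insert(index, item)
        def ordered(self):
            return list(self.items)

    class TextCRDT(str):
        pass  # placeholder; any "missing" cell is this blank initial state

    class Table:
        def __init__(self):
            self.columns = OrderedSetCRDT()   # ordered set of column ids
            self.rows = OrderedSetCRDT()      # ordered set of row ids
            self.cells = {}                   # column id -> {row id -> TextCRDT}
            self.deleted = set()              # add-only tombstones for ids

        def cell(self, col_id, row_id):
            return self.cells.get(col_id, {}).get(row_id, TextCRDT())

        def visible_grid(self):
            cols = [c for c in self.columns.ordered() if c not in self.deleted]
            rows = [r for r in self.rows.ordered() if r not in self.deleted]
            return [[self.cell(c, r) for c in cols] for r in rows]

    t = Table()
    t.columns.insert(0, "col-A")
    t.rows.insert(0, "row-1")
    t.columns.insert(1, "col-B")   # one user adds a column,
    t.rows.insert(1, "row-2")      # another adds a row: no cell ends up misplaced
    grid = t.visible_grid()
    print(len(grid), "rows x", len(grid[0]), "columns")    # 2 rows x 2 columns

Because rows and columns are identities rather than positions, concurrent row/column insertions compose, and the tombstone set is what keeps a deleted column from being resurrected by a concurrent move.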


"often" isn't that useful to me as an application developer. If I'm going to build local-first software and put it out on hundreds of thousands of desktops, once it's deployed, the database is outside my control.

Yes, I need convergence, but I also need to ensure application invariants are maintained.

I agree with the parent that a CRDT isn't that useful for local-first software unless you can encode arbitrarily complex programmatic logic in the transitions. That way I, as a developer, have control over how merges happen globally and over all invariants. It's too complex to reason about otherwise.

It's not clear to me if this paper gives me what I'm looking for -- I'm looking forward to digging into it.

There are other approaches to CRDTs that I'm more excited about: Statebox (https://github.com/mochi/statebox) from years ago is a CRDT that encodes arbitrary transitions and converges by rewinding and replaying in a consistent order. This is basically the same trick blockchains typically do to merge forks. Radicle (https://radicle.xyz/) is another related approach.
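
The basic shape, as I understand it (just a sketch of the rewind-and-replay idea with made-up names, not statebox's actual API):

    # Each replica keeps a log of (timestamp, id, op, arg) transitions; merging
    # is the union of the logs, replayed from the initial state in one
    # consistent order (timestamps, ties broken by the unique id).
    import uuid

    def merge_logs(*logs):
        return set().union(*logs)

    def replay(initial, log):
        state = initial
        for _ts, _id, op, arg in sorted(log, key=lambda e: (e[0], e[1])):
            state = op(state, arg)
        return state

    # arbitrary application-defined transitions
    def add(xs, x):
        return xs | {x}
    def remove(xs, x):
        return xs - {x}

    log_a = {(1, uuid.uuid4().hex, add, "x"), (3, uuid.uuid4().hex, remove, "x")}
    log_b = {(2, uuid.uuid4().hex, add, "y")}
    print(replay(set(), merge_logs(log_a, log_b)))   # {'y'}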

I think things like this are the better path forward for local-first software, personally.


The paper is based on 3-way merge of relations (sets) but I'm not sure it's uniquely defined. Imagine we have a data structure consisting of two sets A and B, with the invariant that B is a subset of A. Initially A = {1}, B = {}. Imagine this data structure is replicated across two computers. On the first computer, we add 1 to B; on the second computer, we remove 1 from A. Then we try to merge these changes. It seems that the merge result could be either A = {1}, B = {1} or A = {}, B = {} and I don't see how to choose between them.


I believe their counter only works when they have 2 replicas:

         0
        / \
    +1 /   \  +2
      /     \
     1       2
      \     /
       \   /
        \ /
         3  <= correct result
With 3 replicas it breaks apart:

             0
            /|\
           / | \
          /  |  \
         /   |   \
        /    |    \
    +1 /     |+100 \ +2
      /      |      \
     1      100      2 
      \     / \     /   
       \   /   \   /
        \ /     \ /
        101     102
          \     /
           \   /
            \ /
            203   <= incorrect result, 100 is accounted for twice.
                     It is accounted for twice because the lowest
                     common ancestor is '0'.
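
For concreteness, the 3-way merge I'm assuming in these diagrams is the usual "apply both diffs against the common ancestor" rule:

    def merge(ancestor, a, b):
        return a + b - ancestor    # i.e. ancestor + (a - ancestor) + (b - ancestor)

    print(merge(0, 1, 2))          # 3   (the two-replica case)
    print(merge(0, 101, 102))      # 203 (using 0 as the ancestor counts the 100 twice)
    print(merge(100, 101, 102))    # 103 (no double count, if 100 could be the ancestor)
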
My team solved that problem by creating virtual common ancestors. This is similar to what git does in some cases.


In this case 100 is the LCA, isn't it? Then you'd end up with 100 + (101 - 100) + (102 - 100) = 103, which is correct.


Yes, 100 is the LCA.

The problem I should have started with is that regular LCAs are not usable for conflict resolution, because there can be multiple different LCAs.

For example, in the graph at https://cs.stackexchange.com/questions/72538/lowest-single-c... which LCA do you use to merge 3 and 4?

So you have to use some sort of "unique LCA", which is 0 in the example I gave. But even that does not work, as I showed in the second example.


After looking at it for a bit I've determined that it's not so much a problem that you can't pick a unique LCA (you just need a guarantee that either works). However, you run into problems when you merge two nodes that already contain some merges. Consider the following example:

        0
        /\
    +1 /  \  +2
      /    \
     1      2
     | \  / |
     |  \/  |
     |  /\  |
     | /  \ |
     3      3
     | \  / |
     |  \/  |
     |  /\  |
     | /  \ |
     ?      ?
Here you've got two choices for the LCA (which aren't equivalent, but that aside); both will give you the wrong answer. The correct answer should obviously be 3, since 1 and 2 were each added to the counter exactly once.

Basically the problem is that the merge operation isn't idempotent, hence it doesn't define a semilattice, which seems to be what you want if you want to ensure consistency even over distributed systems.

They could have easily fixed this by picking a better merge operation such as a union or maximum, but it's not encouraging when someone picks an illustrative example that doesn't quite work.
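
One way to see the idempotence point: merging a state with itself shouldn't change it, no matter which ancestor you picked. The diff-based rule fails that unless the ancestor happens to equal the state; a maximum (being ancestor-independent) is a genuine semilattice join:

    def diff_merge(ancestor, a, b):
        return a + b - ancestor

    def max_merge(ancestor, a, b):
        return max(a, b)

    print(diff_merge(1, 3, 3))   # 5 -- not 3, so not idempotent, not a semilattice join
    print(max_merge(1, 3, 3))    # 3 -- idempotent (and commutative and associative)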


You do recursive LCA merge, like git. I give a hand-wavy argument on why the algorithm is semantically correct here: https://infinity0.github.io/msg-notes/causal/04-state.html#w...
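
A hand-wavy sketch of the recursive strategy (made-up names; git's "recursive" merge strategy does essentially this, building a virtual ancestor out of the multiple merge bases):

    def recursive_merge(a, b, lcas, three_way):
        # lcas(a, b) returns all lowest common ancestors, and is assumed to
        # work on virtual ancestors produced by earlier recursive merges too
        bases = lcas(a, b)
        base = bases[0]
        for other in bases[1:]:
            # several LCAs: merge them into a single virtual ancestor first
            base = recursive_merge(base, other, lcas, three_way)
        return three_way(base, a, b)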


Okay, I can see the recursive algorithm working provided the merge works correctly if you've got just 1 LCA.

I'm not entirely convinced that the merge for the 'counter' always works if you've got a single LCA though. It seems like it should obviously work, but I'm a bit concerned given that you can easily create a counter example where the formula doesn't work if you allow for multiple LCAs. So somehow having a single LCA is vital for the formula to work, and it's not entirely obvious to me why.

Anyway if you end up just calculating something equivalent to the union of the history (+some interpretation) then you're probably fine.


The recursive algorithm works for any number of LCAs, as I argued.

If you're referring to having multiple roots, there are two solutions:

(a) define your 3-way-merge primitive (the non-recursive version) to support merging two states against an imaginary "zero" that acts as the single root. So in the counter version you can imagine that everything "actually" starts from 0.

(b) arrange your application so there are never multiple roots. In some cases this is not really a constraint: usually one single person starts a document, and it doesn't make sense to merge something else in later. In some other cases (group chats, tree histories like git) this is probably not acceptable and you will need (a). But in the examples I just gave, a reasonable (a) is fairly straightforward to think of.


If it works for one, your algorithm works for any number, as far as I can follow.

It's a bit tricky to show that the 3-way merge always works fine if you've got a single least common ancestor. But yeah, it looks like you're fine, because any update that both have in common will also have reached the LCA (this was the insight I was missing that explains why a single LCA works and multiple LCAs do not if you naively apply the 3-way merge).


Wait, that doesn't look right. Why does 100 have two children? It seems to me that 1 should get merged with 100 to end up with 101, then that should be merged with 2 to end up with 103.


This is a distributed system.

You can have one replica that has 100 merge with another replica and get 101. Then another replica that also had the same 100 merges with yet another replica and gets 102.

Then when 101 and 102 merge, the lowest common ancestor is still 0, so you get 203.


Ah I see, sorry, that makes sense.


I like the phrasing "Mergeable Data" over CRDTs (I've been on the CRDT bandwagon for years now, with https://github.com/amark/gun ). Probably worth it as an industry if we shift language to "Mergeable".


Holy shit, the creator of GUN lurks HN? Huge fan.

Super cool, I first used the library back in 2015-ish and I remember being blown away by how innovative and unique it was.

I'm working on a serverless, distributed P2P compute network that's basically a PubSub event queue/job system using some hacks on top of Dat protocol right now for a side project.

Do you think I might be able to use or leverage GUN instead?


:D glad to know some HNers are fans!

Yeah, absolutely. Tho DAT is one of the better ones. Do you need mutability? That is probably the biggest differentiator.

Either way, you should come chat about it on http://chat.gun.eco/ super friendly active community discussing stuff like this all the time. :)




