My hunch is that because CRDTs are so much easier to grok than OT, engineers are empowered to make use case-specific improvements that aren't reflected in academic literature.
For example, the Logoot/LSEQ CRDTs have issues with concurrent insert interleaving; however, this can be solved by dedicating bit-space in each ID solely for follow-up inserts. The "Inconsistent-Position-Integer-Ordering" problem is solved by comparing ids level-by-level instead of in one shot.
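To make "level-by-level" concrete, here's a minimal sketch (the `Ident` shape and `compareIdent` name are mine, loosely modeled on Logoot-style positions as lists of (digit, site) pairs, not taken from any particular library):

```typescript
// Hypothetical Logoot-style position: a list of (digit, site) pairs.
type Ident = { digit: number; site: number }[];

// Compare two positions level by level, rather than concatenating the
// digits into one big integer (which is where the ordering bug creeps in).
function compareIdent(a: Ident, b: Ident): number {
  const len = Math.min(a.length, b.length);
  for (let i = 0; i < len; i++) {
    if (a[i].digit !== b[i].digit) return a[i].digit - b[i].digit;
    if (a[i].site !== b[i].site) return a[i].site - b[i].site;
  }
  // A shorter id that is a prefix of a longer one sorts first, like strings.
  return a.length - b.length;
}
```

The point is that `[2]` must sort after `[1, 9]` no matter how wide the second level is, which falls out naturally here but breaks if you flatten ids into one integer.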
Fundamentally, CRDTs have strong strategic advantages over OT. Given a document size N and a document edit history H:
* CRDT size is O(N) where OT is O(H)
* CRDT updates are O(log N) where OT is O(H^2)
In any nontrivial document, H ≫ N. This means CRDTs have much better perf than OT. Additionally, the best CRDTs (like Logoot/LSEQ) don't require tombstones, garbage collection, or "quiescence." The complexity burden is far lower.
To top it off, CRDTs are offline-capable by default.
For example, it's a logical contradiction to open with "OT's complexity is _generally_ governed by editing history (H), i.e. O(H)", conclude that "CRDTs have much better perf than OT", and then add the qualification that the argument only applies to the specific off-line editing scenario (which, btw, is also not true).
It's also inaccurate to base CRDT's complexity on N, the document size, i.e. the number of characters visible in the document. You need to include tombstones in the case of WOOT variants, or use garbage collection for RGA (which then requires vector clocks). These nuances are described in section 4.3.
OT's complexity is governed by O(c), where c is the number of concurrent operations, period.
For off-line edits, the short answer is that OT's complexity is also not O(H), if H is the number of character edits, because you can easily apply compression. Now, the longer answer: there is pretty strong spatial locality in the distribution of edits over a document -- we don't sporadically and randomly add or delete characters around a document; intuitively, most of the inserts will be adjacent characters (i.e. strings), and many of the deletes are over newly inserted strings. OT uses string-wise operations, hence it will compress n consecutive character insert ops down to a single string op. In addition, you can compress delete edits over newly inserted content, i.e. [insert 'To be or not yoo be' at 0, delete 'y' at 14, delete 'o' at 14, insert 't' at 14] => [insert 'To be or not to be' at 0]. This is the basic idea behind "operation compression", which is what OT would use to support off-line, non-real-time, asynchronous editing, whatever you want to call it. There is a better example here (http://www3.ntu.edu.sg/home/czsun/projects/otfaq/#_Toc321146...).
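To illustrate, here's a toy sketch of the commonest compression case: merging consecutive character inserts into one string insert. The op shape and names are mine, not from any particular OT library; canceling deletes against freshly inserted text takes more bookkeeping and is omitted here.

```typescript
type Op =
  | { kind: "insert"; pos: number; text: string }
  | { kind: "delete"; pos: number; len: number };

// Merge a run of character-wise edits into fewer string-wise ops.
// Only the commonest case is handled: consecutive inserts that extend
// the previous insert (i.e. ordinary left-to-right typing).
function compress(ops: Op[]): Op[] {
  const out: Op[] = [];
  for (const op of ops) {
    const last = out[out.length - 1];
    if (
      last && last.kind === "insert" && op.kind === "insert" &&
      op.pos === last.pos + last.text.length
    ) {
      last.text += op.text; // extend the pending string insert
    } else {
      out.push({ ...op }); // copy so we can safely mutate later
    }
  }
  return out;
}
```

Typing "To be" character by character produces five insert ops; `compress` collapses them to a single `insert "To be" at 0`.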
DOI 10.1145/2957276.2957300 doesn't deal with operation compression. If you believe it does, please point out where we might find it.
It is trivial to state opinions like 'OT is complex', but much harder work to back them up with concrete evidence. I think folks will find the discussion more informative without the constant hand-waving.
This discussion is of interest to folks both inside and outside of academia and the developer community brings valuable hands-on experiences to the table. So, let's keep it inclusive.
Part of the gap is that many academic papers model text operations as just single character edits. If you do that, copy+pasting some text into an editor can inject thousands of insert operations into the system all at once. But that design is really sloppy - nobody actually designs real world OT systems like that. A much better way to design text OT operations uses a list of skip, insert and delete parts. That way you can arbitrarily compose edits together. This way an entire pasted string just gets inserted in one operation and performance is fine. (Or if the user goes offline, you can merge all the edits they do into a single change, then transform & commit it all at once).
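To make that concrete, here's a toy sketch of such an operation shape and how applying it works (the component names are mine, not ShareDB's actual types):

```typescript
// An edit is a list of components walked left-to-right over the document.
type Component =
  | { skip: number }     // retain the next n characters unchanged
  | { insert: string }   // insert a string at the current position
  | { delete: number };  // delete the next n characters

function apply(doc: string, op: Component[]): string {
  let out = "";
  let i = 0; // cursor into the original document
  for (const c of op) {
    if ("skip" in c) { out += doc.slice(i, i + c.skip); i += c.skip; }
    else if ("insert" in c) { out += c.insert; }
    else { i += c.delete; } // deleted text is simply not copied over
  }
  return out + doc.slice(i); // implicit trailing skip
}
```

A paste is one `insert` component no matter how long the string is, and a whole offline session can be composed down to one such list before transforming and committing it.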
I've still never seen the OT algorithms of anything I've worked on have a meaningful impact on performance. Actually, thinking about it, I'm not sure I've ever even seen the OT code show up in profile traces.
OT aside, data sync in general is not a solved problem.
Maybe I am especially unlucky, but I have yet to see a chat app that has no sync glitches. A chat is not the most difficult data structure to sync, but I saw bugs in Slack, Telegram, Instagram, Gchat, Skype, others... WhatsApp does not even try to sync...
A lot depends on the diversity of your use cases and the number of the nines you expect from the system...
When we profiled our string OT implementation on a single 2-3 GHz CPU, at 2-3 CPI (Cycles Per Instruction), it was able to handle about 10M transformations per second. On a half dozen OT-based systems I've worked on, the OT implementation was never where we found the real engineering complexity or the performance bottleneck in working coeditors.
The significance of using a string-wise operation model in OT is underappreciated and can't be emphasized enough for real world systems. The first string-wise OT transformation functions can be traced back to "Achieving Convergence, Causality Preservation, and Intention Preservation in Real-Time Cooperative Editing Systems" (1998). We also found that skip, insert, and delete can improve compression of string edits.
Additionally, it is more common to see CRDT libraries that use character-wise operations and create an object for each character (e.g. y-js.org, github.com/google/ot-crdt-papers), which is not ideal.
Obviously there are also libraries such as Atom's Teletype that have string-wise operations.
The cynic in me feels like the CRDT vs. OT war misses the forest for the trees. What matters is missing features, and the feature that is most needed and least described is a systematic way to offer a diff editor matching the normal experience. Indeed, after having been offline a while, one wants to see and select how their changes will integrate into the shared resource.
One of the key messages we tried to get across is that academic research for co-editing ought to go beyond theoretical analysis and instead drive research by pushing the envelope in new areas of application and/or features. OT's evolution has been symbiotic with new applications, while we haven't seen this in CRDT (for coediting) research. The reality is that you'll find new CRDT papers every year that still (a) make fairly broad claims of benefits that are either theoretically dubious or not backed/validated by application/system implementations, and (b) dwell on technical issues that were resolved years ago. To move beyond this, we wanted to put these issues in context (a general transformation approach) and dispel the most common fallacies in theoretical analysis.
Related to the 'visual-merging' feature you mentioned, when I worked on making MS Word collaborative as a research project, one of the UX research questions was how to allow users to visualize the different combined effects of updates to objects (e.g. if you changed the background of object X to red and I changed it to yellow at the same time). We came up with a multi-versioning technique to display the potentially different combinations and help users select the one that's most desirable. It is definitely an interesting and challenging problem to consider for text documents.
Is that not the case?
This completely misses the main point. OT is complex, the theory is weak, and most OT algorithms have been proven incorrect (see http://hal.inria.fr/inria-00071213/). AFAIK, the only OT algorithm proved correct is TTF, which is actually a CRDT in disguise.
In contrast, the logic of CRDTs is simple and obvious. We know exactly why CRDTs converge. There are several papers proving that specific CRDTs are indeed correct.
Furthermore, I'm somewhat doubtful of the performance discussion, since Attiya et al. proved that there is a lower bound on the complexity of concurrent editing, independently of the technology used. See http://dx.doi.org/10.1145/2933057.2933090.
Disclaimer: I did not read the paper in detail, just skimmed over it.
To be specific, I will enumerate two examples.
First, in Logoot (https://hal.inria.fr/inria-00432368/document), the function generateLinePositions in section 4.2 has a serious error, which could lead to the failure of generating new identifiers. This flawed design can also be found in follow-up research (Logoot-Undo, https://hal.inria.fr/hal-00450416/).
Second, in TreeDoc (https://hal.inria.fr/inria-00445975/document), the function newPosID described in section 3.2 may generate incorrect ids. This function generates an id r between any two ids p and q (p<q) so that p<r<q. However, in line 5, if pn=0, then we would get r<p. Readers can refer to Figure 3: in the tree, the character dY precedes the character dZ in infix order; however, the id of dY (10(0:dY)) is greater than the id of dZ (100(1:dZ)), so their positions are inconsistent with their ids.
In the literature, CRDT researchers always claim their CRDT designs are formally proved.
However, there are various scenarios whereby CRDTs would have inconsistency issues as stated above.
In my opinion, there isn't much validity to these claims.
What did catch my eye in this paper is that the authors claimed to have used CoWord as a case study (and of course, claimed that the theorem prover showed CoWord doesn't satisfy TP2). I worked on CoWord, and it was never open-sourced, so how the authors got their hands on CoWord's transformation functions (which were inside a compiled C++ binary OT engine) for evaluation is truly troubling. Care to explain?
Re: the articles you cited: Molli et al.'s http://hal.inria.fr/inria-00071213/ (A) and their earlier paper: https://www.lri.fr/~mbl/ENS/CSCW/2013/papers/Imine-ECSCW03.p... (B), are unfortunately wrong about OT correctness, demonstrably so.
The formalism can obscure the underlying argument, but both A and B use the same reasoning, which goes something like this:
1. Assumption: An OT system's transformation functions must always satisfy TP1 and TP2, independent of the OT algorithm or protocol, otherwise they are incorrect.
2. Proof: take any existing OT system, disregard the OT algorithm or protocol, just look at the OT transformation functions in isolation, and show that the functions do not satisfy TP1 and TP2.
3. Conclusion: All known OT systems are incorrect.
The starting assumption of the proof is a false premise: OT's transformation functions do not always require TP2. The proof itself, which evaluates correctness on a subset of OT in isolation and then broadly generalizes to OT as a whole, is a sleight of hand (and sloppy theory work). Hence, the conclusion is plainly wrong.
TP2 is only required under a very specific pre-condition, and avoiding this pre-condition removes the need to preserve TP2 at the level of transformation functions. The TP2/CP2-avoidance strategy is commonly used in many OT algorithms and protocols: GOT, JUPITER, G-WAVE/DOCS, COT, TIBOT, etc., which are provably correct (see https://www.computer.org/csdl/trans/td/2016/03/07060680-abs.....). Transformation functions capable of preserving TP2/CP2 for string-wise text editing (with verified correctness and without tombstones or other meta-data) are also available (see https://dl.acm.org/citation.cfm?doid=2998181.2998252).
What I also find really dubious is the verification method used in A and B: paper B starts by using a theorem prover to definitively prove that all previously known OT systems are wrong. Then the paper proposes new transformation functions that the same theorem prover showed to be definitively correct. Paper A, which came after paper B by the same authors, used another theorem prover to show, again, that all previously known OT functions were wrong, definitively, including the ones that B had proposed as "definitively correct". So A basically contradicts the results in B, and both were definitively "proven correct".
Your solution is to combine GOTO with TTF. Actually, it is a combination of OT and CRDT.
Before I dive knee-deep into this, does anyone have any opinion on ShareDB or better alternatives? (It was listed in this post under its old name, ShareJS, as one of the more successful OT implementations out there, so that's a good sign.)
We wrote ShareDB at Lever, which was in the 2012 YC batch (iirc). We wrote it to allow realtime collaborative editing in our application of all our data fields by default. I'm still really proud of that work. ShareDB primarily uses JSON-OT, which lets you do realtime OT over arbitrary application data. The lever team has been maintaining & improving sharedb for the last few years, which is really lovely to see as I've moved on to other projects.
One nice thing about OT is that it's much easier to implement. The original quilljs author (Jason Chen) wrote the rich text OT implementation. ShareDB works with any OT code (so long as operations can be JSON stringified / parsed). And it's all hooked up in quill, which is super neat. And looking at the quill github, it looks like David Greenspan has been maintaining quill recently. David Greenspan is one of the original authors of Etherpad, which is one of the first web based text OT engines.
TP2 has been made into this monster that seems to scare anyone starting to look at OT. You don't need to worry about TP2 with a suitable protocol, whether with a server or fully peer-to-peer, and either solution is simple enough to implement. A fine point is that TP2 and intention-violation are distinct properties: you can easily achieve convergence by serializing the operations through a total order at every site (but you won't have intentions preserved). On the other hand, if you insert 'a' and I insert 'b' at the same location simultaneously, it's reasonable for both results 'ab' or 'ba' to be intention preserved, but without transformations we'd each see a different result (thus divergence).
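For anyone curious, the 'a'/'b' example above can be made concrete with the classic insert-insert transform. This is a sketch with my own names; real implementations also handle deletes, and here equal positions tiebreak on a site id:

```typescript
type Ins = { pos: number; text: string; site: number };

function applyIns(doc: string, op: Ins): string {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Transform `a` against a concurrent `b`: shift `a` right when `b`
// landed at or before it; equal positions tie-break on site id, so
// both sites resolve the conflict the same way.
function transformIns(a: Ins, b: Ins): Ins {
  if (b.pos < a.pos || (b.pos === a.pos && b.site < a.site)) {
    return { ...a, pos: a.pos + b.text.length };
  }
  return a;
}
```

With `a = insert 'a' at 0 (site 1)` and `b = insert 'b' at 0 (site 2)`, applying `a` then the transformed `b`, or `b` then the transformed `a`, both yield "ab": convergence plus preserved intentions.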
We've documented plenty of CRDT's performance and correctness issues in that article. I think a bigger app design question I would ask is whether you want to "lock in" your app's internal data model with a consistency-maintenance scheme -- there is real value in separation of concerns (SoC).
Like, algo x has issue X, algo y has issue Y, z has Z, so CRDT has issues X, Y and Z... and many things like that.
In referring to "CRDT", the paper qualifies upfront that it is "CRDT for co-editors" that we focus specifically on (title and 2nd sentence in the abstract). CRDT counters, sets etc are not in scope.
Why don't you compare them the other way around? Rope-based CRDT and a naive OT? String splicing is O(N), you may put it on either side of the scales. Hashtable-hosted Causal Tree is O(1); hashtable-hosted RGA is basically the same.
I find the general discourse of the article not so useful. I may guess, you overreacted to certain marketing messages coming from the CRDT community.
1. I think you are confusing the cost of OT with the cost of implementing standard editor operations (i.e., insert/delete characters/strings/objects).
Editors have existed before collaborative editing came along and folks have figured out how to build it to be performant using ropes, piece table or many other data structures -- optimized for their own set of needs (more discussions here https://news.ycombinator.com/item?id=15381886). OT is orthogonal to the choice of design/implementation of the underlying editor, but interacts with the editor through an abstract-data-type, which in the case of text editing is a character sequence. Lumping the cost of OT with the cost of editor operations is a common mistake I see.
2. Hashtables in RGA are used only for processing remote operations. Comparing local to remote processing cost is like comparing apples and oranges.
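To illustrate what the hashtable buys you on the remote path, here's a toy sketch (names and structure are mine, not from any RGA paper): integrating a remote insert becomes an O(1) id lookup plus a link splice, instead of walking the sequence. Concurrent-insert tiebreaking by timestamp is omitted to keep it short.

```typescript
// Sketch of an RGA-style sequence: a linked list of character nodes
// plus an id -> node hashtable used only when integrating remote ops.
type Id = string; // e.g. "site:counter"
type Node = { id: Id; ch: string; next: Node | null; removed: boolean };

class Rga {
  private head: Node = { id: "root", ch: "", next: null, removed: true };
  private index = new Map<Id, Node>([["root", this.head]]);

  // Integrate a remote insert: a hash lookup finds the causal
  // predecessor in O(1), no list traversal needed.
  remoteInsert(afterId: Id, id: Id, ch: string): void {
    const prev = this.index.get(afterId);
    if (!prev) throw new Error("missing causal predecessor");
    const node: Node = { id, ch, next: prev.next, removed: false };
    prev.next = node;
    this.index.set(id, node);
  }

  text(): string {
    let s = "";
    for (let n: Node | null = this.head; n; n = n.next) {
      if (!n.removed) s += n.ch;
    }
    return s;
  }
}
```

A local edit, by contrast, starts from a cursor position, which is exactly the apples-to-oranges distinction above.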
Elaborations from the article, on page 9:
We should further highlight that the use of position-based operations does not imply the text sequence must be implemented as an array of characters. The positional reference to the text sequence could be (and has commonly been) implemented in numerous data structures, such as an array of characters, the linked-list structures, the buffer-gap structures, and piece tables, etc. [8,69].
.. and on page 11:
In OT-based co-editors, for example, the choice of strategies (e.g. what native data structures or operation models to use) for implementing efficient document editing is completely left to application designers, and the support for real-time collaboration is layered above and interfaced with the editing application’s exposed abstract-data-type, which is, in the case of text editing, a sequence of characters [8, 21]
What is a rope-based CRDT?
What is the meaning of "String splicing is O(N)"? What does "String" refer to, and what is N?
CRDTs don't require an external causal-order-preserving communication service. That's kind of the whole point of CRDTs. At the same time this is what imposes certain limitations on how they can be used. But so is automatic conflict resolution in collaborative editing.
OT requires centralized servers to do intention resolution.
CRDTs are a set of operations that can work on any peer and resolve correctly, without extra coordination or central resolution.
The two things are about opposite as they can get. Some CRDTs can be used to make Google Docs style OT, but are not specific to OT use cases.
Source: Me, I'm the leading industry and open source expert on CRDTs which power the popular https://github.com/amark/gun decentralized database that the Internet Archive, D.tube (1M+ monthly visitors), notabug.io (1K/daily users) use. CRDTs have let us scale to 3K tx/sec on $99 hardware in P2P mesh networks.
And so humble too!
With a centralized server, you can consider every resolution as happening between 2 nodes (server and client). This means transform / catchup is as simple as a for loop.
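To make "as simple as a for loop" concrete, here's a toy sketch of the server-side catch-up (insert-only ops and my own names; a real server also handles deletes and op composition):

```typescript
type Op = { pos: number; text: string };

// Shift `a` right when the already-committed `b` landed at or before
// it; ties go to the earlier (server-ordered) op.
function transform(a: Op, b: Op): Op {
  return b.pos <= a.pos ? { ...a, pos: a.pos + b.text.length } : a;
}

// The whole server-side catch-up: transform the client's op over every
// op committed since the revision the client last saw, then append it.
function commit(history: Op[], clientOp: Op, clientRev: number): Op {
  let op = clientOp;
  for (let i = clientRev; i < history.length; i++) {
    op = transform(op, history[i]);
  }
  history.push(op);
  return op;
}
```

Because every client syncs against the one canonical history, this loop is all the resolution machinery the server needs.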
In contrast, in a decentralized context OT code needs to support Transform Property 2 in order to converge correctly in all cases. TP2 dramatically complicates the OT implementation, and you need a much more complicated resolution algorithm to merge arbitrary changes between nodes.
For text, this means you need:
- Tombstones (or something like it) - eg https://github.com/josephg/TP2/blob/master/src/text.coffee
- Either an operation prune function (anti-transform) or full history, forever.
- A resolution algorithm that can flatten DAGs of operations into a transformed list. This stuff gets really hairy and hard to implement efficiently. This is my attempt from a few years ago. It's correct, but super slow: https://github.com/josephg/tp2stuff/blob/master/node3.coffee
Implementing high performance OT with centralized servers is easy. Implementing OT in a decentralized network is hard to do correctly, and much harder to implement in a highly performant way. For decentralized systems, CRDTs are a much better approach imo.
Does your system always need to preserve TP2/CP2 to converge correctly? Contrary to what many folks and papers will claim, the answer is categorically no.
There are two general approaches to arrive at this desired outcome: (1) avoiding it or (2) preserving it. Let's look at the basics of what you need for each:
(1) Avoiding it - this can be done with a centralized server, as well as in a fully distributed, decentralized, peer-to-peer context. So I'm in agreement with josephg that a central server will avoid TP2/CP2, but I'd like to amend that: a decentralized avoidance strategy is available and just as easy to adopt. To back this up, this paper looks at TP2/CP2-avoidance systematically in representative protocols and algorithms: JUPITER, NICE, Google Wave, COT, TIBOT/TIBOT2. TIBOT/TIBOT2 are two fully decentralized algorithms/protocols that avoid TP2/CP2.
In addition, the same paper also answers the more interesting question of why these systems are able to avoid TP2/CP2, by giving the general conditions for TP2/CP2-avoidance. You can apply the conditions to design a new protocol or algorithm that avoids TP2/CP2.
(2) Preserving it - TP2/CP2 preservation usually refers to the task of writing transformation functions (e.g. between insert and delete, or delete and delete) that satisfy the aforementioned condition. This is a solved problem for string-wise operations, from 2016:
- You can take these string-wise transformation functions and use them with many of the existing OT algorithms in a fully-decentralized context and satisfy both TP1 and TP2.
- The transformation functions are free of tombstones (or their variants) and of other schemes (DAGs etc.) to ensure TP2.
Happy to say more about it if someone has an interest.
You can implement correct and performant OT with either a centralized server or a fully decentralized topology, using TP2/CP2-avoidance and/or TP2/CP2-preservation. I've built multiple production systems with all the varieties. I generally do prefer OT with TP2/CP2-avoidance on a server, but it's not because OT can't avoid TP2/CP2 without a server or preserve TP2/CP2; it's that there is not much real-world evidence showing clear benefits to moving a collaborative editing system to a decentralized topology, especially when most of the CRDT-based p2p proposals in papers that I've come across involve a server one way or another.
TP2/CP2 in OT is often stood up as a straw man so folks can throw stones at its correctness, but it is really a solved problem. The unfortunate outcome is that you see a ton of papers adopting this strategy to get published, and over time the conversation becomes really convoluted.
We discuss these ideas and try to dispel many of these confusions in detail in the arxiv manuscript. A good starting point would be Section 4.1.3: OT Solutions to the CP2 Issue and the FT Puzzle
Source: All your comments are downvoted to hell. Either you're an idiot or you're the next Galileo.