Big kudos for pulling it off!
> Two main candidates are Operational Transformation (OT) and Conflict-Free Replicated Data Type (CRDT). We chose OT and perhaps one day we will write down our thoughts on the ongoing OT vs. CRDT battle.
Please do, I would love to hear your thoughts on OT vs CRDT!
So, some of my colleagues see CRDT as a potential saviour. There was even a paper on CRDT for tree structures that we might use (can't find a link now). However, we'd rather go with a linear model for CRDT. That would make things a bit easier on the CRDT level, but it would require some really smart decisions about the types of changes, the structure of the linear model, and how it maps to the tree model (which we need after all). We know, though, that without an actual implementation it's just a theoretical discussion. Therefore, we started thinking about whether we could add CRDT "below" our current tree model. That might allow us to validate the concept in real life. Can't wait to do that :)
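A toy sketch of that linear-model idea, using my own made-up structures (nothing from CKEditor): flatten the tree into a linear sequence of open/close/text tokens that a sequence CRDT could order, then rebuild the tree model for rendering:

```typescript
// A tree flattened into a linear token sequence that a sequence CRDT
// (e.g. RGA) could manage. All names here are invented for illustration.
type Token =
  | { kind: "open"; tag: string }
  | { kind: "close" }
  | { kind: "text"; text: string };

interface TreeNode { tag: string; children: (TreeNode | string)[] }

// Rebuild the tree model from the linear model.
function toTree(tokens: Token[]): TreeNode {
  const root: TreeNode = { tag: "root", children: [] };
  const stack: TreeNode[] = [root];
  for (const t of tokens) {
    const top = stack[stack.length - 1];
    if (t.kind === "open") {
      const node: TreeNode = { tag: t.tag, children: [] };
      top.children.push(node);
      stack.push(node); // descend into the new element
    } else if (t.kind === "close") {
      stack.pop(); // back to the parent
    } else {
      top.children.push(t.text);
    }
  }
  return root;
}
```

The hard part the comment alludes to is exactly what this sketch glosses over: keeping the token sequence well-formed (every `open` matched by a `close`) under concurrent edits.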
BTW, in the meantime, we recommend this paper: https://arxiv.org/abs/1810.02137
Thanks to the author as well, for distilling so much information into such a readable document.
One thing I am wondering about is in the conclusion of the document, the author says:
> You’d be hard-pressed to use CRDTs in data-heavy scenarios such as screen sharing or video editing.
With regards to video editing, could it not be rather doable after all?
Consider a non-linear video editing system (NLVE) like Adobe Premiere Pro.
While the source material you are editing might run to several hundred gigabytes for a 4K feature-length film, the project file itself remains very small, because the video data is kept separate from the timeline data.
> Non-destructive editing is a form of audio, video, or image editing in which the original content is not modified in the course of editing; instead the edits are specified and modified by specialized software. A pointer-based playlist, effectively an edit decision list (EDL), for video or a directed acyclic graph for still images is used to keep track of edits.
Are these not in fact the sorts of data structures that would lend themselves the most naturally to being implemented as CRDTs? :D
As for the video source material used in the project, those files can be distributed by users outside of the editor software (for example via BitTorrent), or automatically by the editor software itself (for example via public IPFS or a private Dropbox folder). And if the collaborating users are on a shared network, the source files could simply live on a shared NAS.
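To make the EDL-as-CRDT idea concrete, here is a minimal sketch (all names invented, not from any real NLE): each timeline clip is a last-writer-wins register keyed by a unique clip id, and merging two project files is a union of clips with a per-clip merge:

```typescript
// Sketch: a timeline clip's properties as a last-writer-wins (LWW)
// register, and the whole EDL as a map of clip id -> register.
interface Clip { sourceFile: string; inPoint: number; outPoint: number; track: number }
interface Lww<T> { value: T; timestamp: number; replica: string }

function mergeLww<T>(a: Lww<T>, b: Lww<T>): Lww<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.replica > b.replica ? a : b; // deterministic tie-break on equal timestamps
}

// Merge two EDLs: union of clips, newest write wins per clip.
function mergeEdl(a: Map<string, Lww<Clip>>, b: Map<string, Lww<Clip>>): Map<string, Lww<Clip>> {
  const out = new Map(a);
  for (const [id, reg] of b) {
    const mine = out.get(id);
    out.set(id, mine ? mergeLww(mine, reg) : reg);
  }
  return out;
}
```

Because merge is commutative, associative, and idempotent, two editors can exchange project files in any order and converge; deletions would need tombstones, which this sketch omits.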
«Ditto is a library for using CRDTs.» 
> Code coverage: 100%
> Development team: 25+
> Estimated number of man-days: 42 man-years (until September 2018), including time spent on writing tools to support the project like mgit and Umberto (the documentation generator used to build the project documentation)
Software is expensive
or something like that ;-)
Compared to any physical project or product, software is so cheap!
So instead of trying to fix those issues, we tried a different approach.
Regardless, the pricing for their commercial offering seems really intense. $29/mo for 25 MAU? I feel like they're missing a few zeroes there. Maybe they meant 2,500 MAU? 25,000 MAU? The demo I played with makes it clear that the editor component is rather nice, but I would rather look at alternatives at that price. I would imagine the collaborative nature is what influences the price, but somehow Google Docs and MS Office do fine with collaboration as a core product feature without charging >$1/MAU just for that feature.
It makes sense to me that you can use their flagship editor product (and advertise CKEditor in the process), as long as you're publishing your work under the same copyleft license, so they don't need to feel like you've taken advantage of them for a commercial endeavor.
Under a GPL license, nothing is preventing you from charging your users, but it is also generally expected that nothing prevents those users from taking your source code, standing up their own instance of the tool, and maintaining it in-house.
In this way, it makes sense. You can either pay CKEditor for their work when you integrate it ... or, for no money, you can commit to not using the editor to hold your users and their data hostage.
(I'm over-dramatizing a bit, but also not. People regularly talk about freeing themselves from their Google Mail hostage situations, especially on HN.)
Maybe $29/mo is a lot for 25 monthly active users when you're operating a business at web scale, but it's really not a lot of money for a company with 25 employees that has built its own bespoke in-house collaborative solution, and managed to do it using CKEditor instead of rolling its own editor or cobbling one together from other free components. I don't know if that's the target market, but that's certainly the base case if your start-up company is "dogfooding" its own solutions.
If your users are not worth $1/mo + your other overhead costs, then maybe they don't need to get real-time collaborative editing? (Or maybe it doesn't need to come from your solution...)
CKEditor 5 is cool and all, but $1 USD per month per user is in our case way more expensive than our cloud infrastructure.
In our flexible plan, we always try to find a working licensing scenario with our customers, with volume discounts, especially for bigger projects. Also, note that MAU count is not the only licensing metric that’s offered, and we are happy to simply sit down with you, discuss your project and reach an agreement.
Somewhat related (Common Lisp is typically compiled): I found the "Lisp LGPL" / LLGPL license to be a clear way of resolving the inherent C-bias of the GPL license suite. It's essentially just the LGPL, but with a preamble that defines valid/invalid forms of proprietary software "linking" with an LLGPL library: http://opensource.franz.com/preamble.html Of note is a fun instance of the open/closed principle: your subclasses of a library's class are yours, but modifications to the base class trigger the LGPL licensing hook for your application.
I would assume that if you're serving CKEditor in your webapp, the GPL would apply and your whole webapp would need to be GPL'd.
Given they had other license options for version 4 (https://ckeditor.com/legal/ckeditor-oss-license/), I suspect they might just use the GPL only option for 5 to try and drive sales. Good luck to them, lots of companies are using Quill (BSD) instead.
The page doesn't seem to imply that the price for 25+ users would scale linearly; for that they have a "Contact Us" call to action for negotiating pricing.
25 MAU seems reasonable to me for the following use-case: Company A sells a custom CMS to Company B, with a CKEditor-powered admin area to be used by <25 of Company B's content writing staff.
I confirm that! It is just a specific offering for a small project where a 25-person team uses our editor. It's a fair price (it covers granting the commercial license, the right to use / change / hide our code, enterprise support, maintenance, warranties, indemnifications, etc.). For bigger projects we always try to implement a licensing scenario that works well for a given customer. Contact us and we will do our best to craft an optimal plan for you.
In this case, essentially all your front-end code would need to be GPL2+ as well (GNU considers dynamic linking to count for GPL purposes, and I have a hard time imagining you could sufficiently separate the CKE code from the rest of your code/site to satisfy them). If CKE requires any server-side code, your back-end code would technically need to be GPLed as well, but you'd never have to share it (back-end code is generally considered not to be distributed, hence the AGPL), so it would have no real effect and no one would know if you didn't.
Practically speaking, it also means that if you want to contribute code back to this code base, you would have to license your code in the same way, allowing redistribution under all GPL versions 2 and up.
We did some testing to see how big this issue is and it turned out that it's not dramatic. The main difficulty with graveyard purging is that you need to keep the undo in mind. Elements in the graveyard are used for undo, so garbage collecting will have to be strictly connected with limiting undo steps.
Add to that collaborative changes and it becomes a tougher problem. Therefore, we'd probably start from what seems to be the safest option – trimming the history stack and implementing a garbage collector that goes through all the possible reference points.
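For illustration, that "safest option" might look roughly like this (hypothetical structures of my own, not CKEditor internals): trim the history stack to a fixed depth, then drop any graveyard element that no surviving undo step still references:

```typescript
// Each undo step records which graveyard elements it needs to revert.
interface UndoStep { refs: Set<string> }

// Trim the undo stack, then garbage-collect graveyard elements that are
// no longer reachable from any remaining step.
function purge(
  history: UndoStep[],
  graveyard: Map<string, unknown>,
  keep: number
): UndoStep[] {
  const trimmed = history.slice(-keep); // limit undo depth
  const live = new Set<string>();
  for (const step of trimmed) {
    for (const id of step.refs) live.add(id); // collect all reference points
  }
  for (const id of [...graveyard.keys()]) {
    if (!live.has(id)) graveyard.delete(id); // unreachable -> safe to drop
  }
  return trimmed;
}
```

In a collaborative setting the `live` set would also have to include elements referenced by other clients' pending operations, which is where it gets tougher, as the comment says.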
Another Q for you if you're still around: did you have to make any changes to your algorithm for when it's running through the undo queue, or is it basically the same thing but in reverse? I found some operations could better preserve intention with some adjustments to the undo transform.
The crucial difference is that with undo you have a context (old document state). With collaboration, you don't. So if both users put a character at the same position, in collaboration the order of output doesn't matter (and can't be solved really). In undo it does, because a user remembers what was the order of the letters before they made a change.
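A toy insert-vs-insert transform illustrates the point (this is a generic OT sketch, not CKEditor's actual algorithm): with concurrent same-position inserts there is no context to consult, so the order can only be fixed by an arbitrary but deterministic tie-break, e.g. by client id:

```typescript
interface Insert { pos: number; text: string; client: string }

// Transform `op` against a concurrent `other` that was already applied.
function transform(op: Insert, other: Insert): Insert {
  if (
    other.pos < op.pos ||
    (other.pos === op.pos && other.client < op.client)
  ) {
    // The other insert lands before ours: shift right by its length.
    // The same-position case is decided purely by client id -- arbitrary,
    // but every replica makes the same choice, so everyone converges.
    return { ...op, pos: op.pos + other.text.length };
  }
  return op;
}
```

An undo transform, by contrast, can consult the remembered document state to restore the order the user actually saw, instead of falling back on a tie-break like this.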
Also is it possible for third party plugins to add new operations (defining their own conflict resolution against the existing types)?
In the end, we concluded that users would see defining deltas and transformations as too complicated and wouldn't use it. Later, we also decided to drop the deltas idea altogether and rewrote deltas into operations.
As an alternative solution, we provide the post-fixing mechanism. Your plugin can listen to all the changes done on the model and apply fixes if something has gone wrong. We used it for the famous "insert table row / insert table column" problem, to add the missing cell. This alternative is not as clean as explicitly providing a transformation algorithm but it is much easier to implement.
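A framework-free sketch of such a post-fixer for the table case (illustrative only, not CKEditor's actual fixer, which hooks into the model): after any change, pad short rows with empty cells so the table stays rectangular:

```typescript
type Table = string[][]; // rows of cell contents

// Post-fixer: make the table rectangular by adding missing cells.
// Returns true if it changed anything, so a post-fixing loop knows
// to run the fixers again until the model is stable.
function fixTable(table: Table): boolean {
  const width = Math.max(0, ...table.map(row => row.length));
  let changed = false;
  for (const row of table) {
    while (row.length < width) {
      row.push(""); // add the missing cell
      changed = true;
    }
  }
  return changed;
}
```

The appeal is exactly what the comment describes: a concurrent "insert row" + "insert column" can produce a row with a missing cell regardless of how the operations were transformed, and the fixer repairs it after the fact instead of requiring a dedicated transformation.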
The problem with defining your own operations is that you need to write transformations against all the other operations, which is a huuuuge amount of work. It took us a few years and we still see some room for improvement.
Companion reading (the approach taken in ProseMirror, which was implemented in significantly less person-years): http://marijnhaverbeke.nl/blog/collaborative-editing.html
But I can see that if the app is divided into multiple code files, then different coders can work on different files at the same time. Still, I prefer some form of code ownership, or at least locking multiple files just for me while I'm working on them.
I don't quite see how it would be much different for other kinds of text. While I'm writing this post right now I wouldn't want somebody else modifying its beginning while I'm writing its end.
Not saying there aren't any good use-cases, just can't think of any right now. So that's my question, what are they?
With text presentation and reading, you are ultimately limiting the number of things that can be seen that way. Though, you can of course expand your entity list to include more and more things. Headers, lists, sections, chapters, paragraphs, etc.
That is to say, I think this is already difficult for structured text. Adding more structure, in the forms of "requirements section" and such just compounds on that complication. I'm not sure it adds more.
Also wonderful blog post, actually read it in full, thanks!
The more you use a product, the more you visit related sites and materials on the web; so yes, ads.
I hope I'm wrong since I have some ideas I'd like to try it out with. Sending my customer's data to their cloud is probably a non-starter.
It was interesting to see how they approached moving from a linear data model to a tree structured data model. Is anyone aware of similar work with graph structured data, as you find in electronic design automation? Think collaborative schematic editing.
We achieved something similar to shareJS functionality quite early, to be honest. What proved to be extremely difficult was all the extras needed to smooth out the user experience. We would have had to (heavily) extend shareJS anyway, so it made more sense to go with our own solution, crafted for our needs.
Am I the only one who doesn't like other people typing through my work? Why not simply let the user choose when to merge their work with others?
Honestly, sometimes I feel collaborative text editors have been created "just because we can" or "just because we want to see if we can".
Certainly, there are different use cases and different expectations. What we call "offline collaboration" can be seen as a special case of real-time collaboration. In "offline collaboration" you also need to merge changes, so you also need conflict resolution. However, the longer you postpone merging, the more conflicts you accumulate and the better the conflict resolution algorithms need to be. That's one of the reasons why we introduced the new, semantic operations.
The other option is so-called change tracking or suggestion mode. That's on our roadmap as well and thanks to the platform we created it's not a big deal now. In fact, if we started with implementing suggestion mode (or "offline collaboration"), we might have ended up with an engine which is not ready for real-time collaboration. But once we have real-time collaboration we can now be quite certain about solving other issues.
we have to split our "collaborating" into Google Docs, and our "carving in stone" into JIRA and Confluence.
I'll say though that the "suggested edit" and "comment on a range" features, and the distinct edit modes (commenting, suggesting and approving edits, and free-for-all battle royale), are all valid modes for us all the time. (300-person SF + intl. software company)
Demo can be found at https://ckeditor.com/docs/ckeditor5/latest/features/collabor...
BTW any timeline on sunsetting the venerable CKEditor 4?
I was part of the CKEditor 4 team back when the CKE5 guys were laying down some of the first iterations of what is described in this blog post.
These abstractions should be in the OS really.
"OK, so to fix that, I'll just..."
That'll break in some other case. If you then work on fixing that up, in another 20 or 30 steps, you end up at these algorithms. Or something far worse, which is actually pretty likely if you try to bodge something together one hyper-local problem at a time.