It's an incredibly fun topic, and one that's also really challenging. It also gave me first-hand experience with fuzz-testing, where I could simulate all kinds of concurrent-edit conflicts and ensure both clients came out with the same end result. While the end client/server implementations are working, building a full-fledged editor on top was a lot more effort than I anticipated. I was too stubborn at the time to attempt to integrate with existing RTEs, though, so the most notable part of the project was the core lib, not the end product.
For those interested in implementing it for themselves, I can't recommend this piece by @djspiewak enough.
Arxiv meta: https://arxiv.org/abs/1810.02137
It’s well written imo. The conclusions chapter is very good.
“In this work, we have critically reviewed and evaluated representative OT and CRDT solutions, with respect to correctness, time and space complexity, simplicity, applicability in real world co-editors, and suitability in peer-to-peer co-editing. The evidences and evaluation results from this work disprove superiority claims made by CRDT over OT on all accounts.”
And it is also argued that OT is a more direct approach to the fundamental issues of realtime multiuser editing, and that this directness brings effectiveness.
HN discussion: https://news.ycombinator.com/item?id=18191867
Systems perspective: https://arxiv.org/abs/1905.01517
Correctness/theoretical perspective: https://arxiv.org/abs/1905.01302
I don't know how much they differ, though.
"After over a decade, however, CRDT solutions are rarely found in working co-editors or industry co-editing products, but OT solutions remain the choice for building the vast majority of co-editors. In addition, the scope of nearly all CRDT solutions for co-editing are confined to resolving issues in plain-text editing, while the scope of OT has been extended from plain-text editing to rich text word processors, and 3D digital media designs, etc."
Just after that it says: "The contradictions between these realities and CRDT’s purported advantages have been the source of much debate and confusion in co-editing research and developer communities." That's kind of my question/confusion: people writing CRDTs really talk them up, and I can't tell what's true in practice. I have no way to judge the accuracy of this paper myself (I don't know enough to really understand it), so I'm hoping someone who knows this stuff in a practical way can shed some light on it.
However, I think either could work. The task mostly maps onto editing a long string, right?
Here is a Quill.js coediting demo, without using a ShareDB backend: https://codepen.io/dnus/pen/OojaeN
Is the FROG repo you linked the repo of the wiki?
I'm just curious as it doesn't mention anything about it being a wiki in the readme (it's also a bit unclear how/what to use it for from the description).
Or go to https://icchilisrv3.epfl.ch/wiki/hn/Home?login=YOURNAME to test it immediately. (I added a tiny bit of content).
So just because your favorite editor can be used offline and it has an (online) collaboration feature, does not mean it is capable of being an offline-capable collaborative editor.
We make a powerful collaborative word processor for the browser (Zoho Writer https://writer.zoho.com) with complete offline support. This is one problem we'd like to solve in the long term. One way to solve this is to fallback to differential sync when syncing huge offline edits - and present the changes as tracked-changes (underlined and striked-out) to the user. This way the user has the power to resolve the conflicts, instead of the app messing it up by itself.
I'd like to hear if there are better ways to sync large offline edits, with the intentions preserved.
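A toy sketch of that diff-then-track-changes fallback, using Python's stdlib difflib on plain text (real rich text would need a structural diff; the `~~`/`++` markup is just a stand-in for strike-out/underline):

```python
import difflib

def tracked_changes(server_text, offline_text):
    """Render the offline edits against the server copy as tracked changes:
    deletions wrapped in ~~...~~, insertions wrapped in ++...++,
    so the user can accept or reject each change."""
    sm = difflib.SequenceMatcher(a=server_text, b=offline_text)
    out = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            out.append(server_text[i1:i2])
        elif tag == "delete":
            out.append("~~" + server_text[i1:i2] + "~~")
        elif tag == "insert":
            out.append("++" + offline_text[j1:j2] + "++")
        elif tag == "replace":
            out.append("~~" + server_text[i1:i2] + "~~")
            out.append("++" + offline_text[j1:j2] + "++")
    return "".join(out)

print(tracked_changes("the quick fox", "the slow fox"))
# "quick" shows struck out, "slow" shows inserted
```

This hands the conflict back to the user instead of letting the sync algorithm guess, which is exactly the appeal of the tracked-changes fallback for huge offline edits.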
This is another question -- does automatic syncing make sense for huge differences in users' contents? The answer might be no.
OTOH, isn't this a rare scenario? How perfect does it have to be? How much of a compromise can users accept? These are interesting questions which show that creating this kind of software isn't easy -- both from a technical and a UX point of view.
An offline editor, from the point of view of the app's software developer, might mean an editor that can sometimes survive a few minutes of lost connectivity, after which it will reconnect and possibly sync the team's work.
Offline from the perspective of a team of scientists writing a joint paper means that some of them will take their work truly offline, to a secluded mountain hut, for an extended period (week or month) and will come back with a complete rewrite of the text to the point of being unrecognizable to the other authors. And then the coauthors will disagree with most of it, and will want to revert some of the pieces and keep some other pieces.
And scientists are not the only people needing a joint collaborative environment. And not just for short papers. A few years ago, I was working in a larger team with a group of people doing revisions of a book, and there were so many revisions, and revisions of revisions of revisions, and zillions of comments, that splitting the book into chapters was not enough: Word got stuck even with 20-page chapters. We had to split the chapters into sub-chapters of 5-10 pages in separate files, so that the word processor would not get stuck with the myriad of comments and revisions.
These are all separate scenarios in collaborative writing, and there could be many others, so any solution should first explain what type of scenario it is really targeting. Automated sync of collaborative work is not always the best thing to do.
The issue is less about which fancy algorithm/data structure to use and more about even defining in human terms what would be expected in certain merge conflicts. Currently, we err on the side of raising a merge conflict rather than deciding to use one version for a certain change, but in practice most conflicts have a pretty obvious resolution when a human looks at the two conflicting versions.
Exactly. When it comes to rich text editing, which can have semantic trees (e.g. tables and tables within tables), merge conflicts are so tough to handle.
Consider this: one version deletes a column in a table, and the other version splits the column and adds a new row - by this point it's sort of impossible to find a meaningful representation of the table without manual intervention.
How would that be possible if none of the users can see what the others are doing? If you have three users that start with version 1 and they all make their own version 2, even the users don't know what the "correct" version 2 should look like. Or to put it differently, I don't even know what is meant by "intentions" in this scenario.
Of course, it depends on the collaborative editing solution. When it comes to OT -- if the implementation is correct, it doesn't really matter how many operations are queued for synchronising. So it doesn't really matter how long users were de-synced while creating their content. But this is only theory, and it applies more to the technical side, as there might be some semantic problems (intention preservation).
I believe the same is true for CRDT but I am not sure.
So, on one hand - if the editor works fine for a small batch of changes, it should also work fine for a big batch of changes. The reality is often harsher, though, and full of edge-case scenarios.
Well, OT in general does not need the server, but server-less implementations are more complex (there are more transformation functions to write: in addition to "inclusion transformation" you also have to write "exclusion transformation" algorithms).
I also wouldn't say that "OT is very simple to implement" - it is in its base form, for linear data, with a server in the network. But every enhancement brings a lot of complexity.
If two clients, your client (A) and another client (B), write the letters "a" and "b" respectively at the same time, then from either client's perspective its own letter came first, so on client A the state will be "ab" and on client B the state will be "ba". How do you solve that without a server/master? With a server/master, the server just has to increment a counter for each operation, and the clients can use that counter to know the order. So if "a" has counter 77 and "b" has counter 76, the state will be "ba".
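A toy sketch of that counter idea (hypothetical names, and ignoring the position transformation that real OT also needs): the server stamps every operation with an incrementing sequence number, and each client applies the log in stamp order, so all replicas converge on the same string.

```python
# Toy sketch: a central server assigns a monotonically increasing
# sequence number to each operation; every client replays the log in
# sequence order, so all replicas converge on the same string.

class Server:
    def __init__(self):
        self.counter = 0
        self.log = []

    def submit(self, op):
        """Stamp an operation with the next sequence number."""
        self.counter += 1
        stamped = (self.counter, op)
        self.log.append(stamped)
        return stamped

def apply_ops(ops):
    """Apply (seq, (pos, char)) insert operations in sequence order."""
    text = ""
    for _seq, (pos, char) in sorted(ops):
        text = text[:pos] + char + text[pos:]
    return text

server = Server()
# Client B's insert reaches the server first, so it gets the lower counter.
server.submit((0, "b"))   # seq 1
server.submit((0, "a"))   # seq 2

# Every client replays the same log in counter order and gets the same
# string. (The concrete result also depends on how positions are
# transformed, which this sketch deliberately skips.)
print(apply_ops(server.log))
```

The point is not which tie-break rule wins, but that a single authoritative ordering exists at all; without a server, the clients have to reconstruct an equivalent ordering themselves (vector clocks, unique IDs, etc.), which is where the extra complexity comes from.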
We're working on both offline mode and a generic rich-text data model with support for the major editors.
If anyone's interested, I'd happily do some Q and A here.
One of our value propositions is a unified backend for dealing with any JSON data (and eventually beyond), avoiding binding server-side code to a particular choice of UI component.
I guess one reason you could need custom types would be to ensure consistency - if two keys depend on each other, and one user sets one key, and the other user sets another key, and the document is now invalid, you'd need the engine to be able to reconcile at a higher level?
I am not sure if I understand you correctly here, but it's not really that. Could you give me a more concrete example?
The kind of problems for extra types are, for example: user A changes a paragraph to a list item and user B splits it. As a result you'd like to have two list items instead of a list item followed by a paragraph. This is impossible if you don't give more semantic meaning to the operations.
There are other problems though, as you mentioned - with invalid document. For example, you have this kind of a list:
User A outdents "Bar" and user B indents "Baz" creating a list like this:
In CKE5 this is an incorrect list (we don't allow indent differences bigger than one). This cannot be fixed through OT so we fix it in post-fixers which are fired after all the changes are applied.
The example cases for additional types / custom implementation are in this section: https://ckeditor.com/blog/Lessons-learned-from-creating-a-ri...
These content-preservation edge cases weren't possible to solve with what was available (at least at the time we started the project).
Even apart from that, ottypes/json0 was lacking some basic things, like moving objects. I see they came up with a new implementation recently (https://github.com/ottypes/json1) and it allows moving objects. Maybe the new implementation will solve some of these problems. However, it is in a "preview" state, and the last update was 2 months ago, so I am not sure how well it will be maintained.
Also, there are some edge cases when transforming ranges (which CKE5 uses to represent, for example, comments on text or content created in track changes mode). I don't want to bury you in difficult-to-understand examples, but if you are interested you might want to check the examples listed in the inline documentation for this function: https://github.com/ckeditor/ckeditor5-engine/blob/master/src....
As far as Quill.js is concerned, it is based on a linear data model, which brings limitations when it comes to complex features. Transformation algorithms for linear data models are much simpler, and there are more implementations and articles in this area. Everything depends on your needs. If Quill.js's feature set and functionality fit your needs, then the solution you chose is correct.
With CKE5, however, we didn't want to make any compromises. We needed complex structures for our features, and for having a powerful framework - to enable other developers to write whatever feature they want and have those features work in real-time collaboration. We wanted transformation algorithms that would handle all the edge cases. It is true, some of those cases are quite rare. And the old "10/90" mantra applies here, in this case "10% of use cases bring 90% of the complexity". But those edge cases happen and we didn't want to disappoint our users.
For example, imagine you have an application that has a list that must contain at least one element. Assume there are two elements in the list. A shared JSON data structure on its own (one that allows for immediate local edits) would allow two clients to simultaneously delete one element each. The end result is that the client app on both sides will become aware that the constraint was violated only when the remote operation comes in. Resolving this becomes difficult. What is the resolution strategy? Which of the two clients should initiate it? This is a contrived example for sure. But you run into things like this in various use cases, and occasionally you need either new data structures that encode these semantics, or you need an extendable system that allows you to customize constraints and resolutions.
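A toy sketch of that failure mode, assuming a hypothetical remove-wins set of list elements: each replica's local delete preserves the "at least one element" invariant on its own, but the merged state doesn't, and neither client can see that until the remote operation arrives.

```python
# Toy sketch: each replica deletes a different element from a two-element
# list. Locally, each delete preserves the "at least one element"
# invariant; the merged state does not, and neither replica can detect
# that until sync.

def merge(deleted_a, deleted_b, initial):
    """Merge as a remove-wins set would: take the union of deletions."""
    deleted = deleted_a | deleted_b
    return [x for x in initial if x not in deleted]

initial = ["item-1", "item-2"]

# Replica A deletes item-1; its local state ["item-2"] is still valid.
# Replica B deletes item-2; its local state ["item-1"] is still valid.
merged = merge({"item-1"}, {"item-2"}, initial)

print(merged)   # empty -- the invariant is violated only after merging
```

The merge itself is perfectly convergent; the problem is purely that convergence and application-level validity are different things, which is why generic JSON sync sometimes needs custom types or resolution hooks.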
That said, you can get pretty far with just JSON!
Essentially, they are doing OT on the DOM.
The big failing of co-editors is that currently they all silo into a single editor. That's OK for, e.g., co-editing stuff on a web platform (e.g. Google Docs), but it totally sucks for collaborating on code/documents you edit locally, forcing everyone to use the same editor/IDE. That ultimately fails because no one will use the same editor/IDE across a whole company/team/project, so although useful, I've never seen one take hold for anything but a very short amount of time in any company.
We desperately need (a couple of) well-defined protocols that get implemented across editors, and one to emerge as a widely supported winner, whatever its shortcomings.
It's much harder for rich text editors (RTEs) because the various RTEs vary widely in the exact subset of rich text features they support. One will support tables, and another will not. One will support video and another will not. One will use a linear position to describe where the cursor is, and another will use a DOM Range. This makes it very hard to support co-editing between different rich text editors. It's not impossible, just hard enough that most of us are not tackling it.
The data model could easily be defined in Convergence.
And that article is tangentially related to a recurring interest of mine (a shared editable 3d universe in which you can code collaboratively). If you happened to have an article comparing the merits of several server architectures/algorithms for collaborative text editing, I think many things can be related. I am now reading about CRDTs (What the hell am I doing?) and that's something I may use to edit a scene graph collaboratively!
In terms of editable 3d universes (whoah!), have you considered chucking together Three.js's JSON scene description (https://github.com/mrdoob/three.js/wiki/JSON-Object-Scene-fo...) with https://github.com/automerge/automerge, a CRDT for JSON? What could possibly go wrong?!
I won't claim it is practical or a good idea in any way, just an obsessive thing I want to try.
Right now I am staying in the python world, that I know well, and appreciate its introspection capabilities. I play with OpenGL and PyQt to display in 3d the code of objects loaded in memory. It was surprisingly easy to do. When I first tried, 15 years ago, that was more of a mess.
Now, I mentioned CRDTs, but I was not thinking of raw text - rather, using nodes of the scene graph as the characters. I like the idea of insertions and deletions that keep invisible tombstones around.
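A tiny illustration of the tombstone idea (hypothetical names, and not a full CRDT - it has no rule for ordering concurrent inserts): deletion only hides an element, so its unique id stays in the sequence and later remote operations can still address positions relative to it.

```python
# Tiny illustration of tombstones (not a full CRDT: no ordering rule for
# concurrent inserts). Deletion only hides an element; its unique id stays
# in the sequence, so remote operations can still refer to positions
# next to it -- the "characters" could just as well be scene-graph nodes.

class TombstoneSeq:
    def __init__(self):
        self.elems = []  # list of [unique_id, value, alive]

    def insert_after(self, after_id, uid, value):
        """Insert after the element with after_id (None means at the head)."""
        idx = 0
        if after_id is not None:
            idx = next(i for i, e in enumerate(self.elems)
                       if e[0] == after_id) + 1
        self.elems.insert(idx, [uid, value, True])

    def delete(self, uid):
        """Mark the element as a tombstone instead of removing it."""
        for e in self.elems:
            if e[0] == uid:
                e[2] = False

    def text(self):
        """Render only the live (non-tombstoned) elements."""
        return "".join(e[1] for e in self.elems if e[2])

seq = TombstoneSeq()
seq.insert_after(None, "a1", "H")
seq.insert_after("a1", "a2", "i")
seq.delete("a2")                    # "i" becomes an invisible tombstone
seq.insert_after("a2", "a3", "!")   # still addressable after deletion
print(seq.text())                   # "H!"
```

That last insert is the whole point: a remote peer whose operation was anchored to the deleted element still lands in the right place, instead of failing or drifting.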
There are a bunch of basic integration features that would also be fantastic, but if the product was literally just Google Docs, minus the WYSIWYG bits, and all plaintext w/ maybe some syntax highlighting thrown in, I'd be super happy.
Not plain text only but probably can be configured for that.
No syntax highlighting.
That is the subset of features you want. Everyone else wants a slightly different subset of features. Add all of those subsets together and you end up with the total feature set of Google Docs.
And the feature creep begins. Eventually you'll be asking for a full blown IDE.
IANAL, but if you want to use the product yourself (even as a company) that should be just fine, but you are neither allowed to host the product for someone else (commercially) nor to create a product based on their frontend code.
They also have a demo which you can try here:
The online word-processor provides contextual options without as much clutter as Google Docs.
However, I do note that we do not yet have a production-ready backend for collaborative editing. I would suggest using their service if possible; since we're developing an open source product, it would not be an option for us.
There is quite a learning curve when implementing new plugins, due to the strict separation of the model and data (commonmark in our case) and the user-facing views.
I wanted an editing experience similar to the Medium.com text editor: consistent, well-formed HTML output and formatting. At the time, Quilljs was one of the best open-source editors that produced consistent output regardless of the browser.
Also I didn't want to store html, I wanted to store operations in the database.
I have an app similar to Figma (basically real-time Sketch) and I needed just the real-time text editing feature.
But for other uses, it looks absolutely incredible.
From my understanding, an implementation of logoot is built to create the skeleton of a collaborative environment. From there on, you have the building block to make other things work with it, such as collaborative canvas.
I built one using Socket.io on Angular and used Ace Editor's own implementation of CRDT.
I used the following references to build it
To get started with the theory of the low-level CRDT algorithms, these two posts go into great detail.
I'd love to take it to another level if anyone here has mutual interest in this field of work.
I realize that the OP is about something that works with structured text, but the headline (as is) is just click-bait.
They built their custom app with the open source ProseMirror (http://prosemirror.net/) toolkit.
This figures. This is one of the ways 'open source' companies can make money.
So, the author sets about making an open-source collaboration server that inter-operates with the existing open-source editor?
But now the company can retaliate by adding features or inconsistencies to the closed source server and its integration into the UI, making the open-source server forever out-of-date and playing catch-up.
SubEthaEdit is a collaborative real-time editor designed for Mac OS X. The name comes from the Sub-Etha communication network in The Hitchhiker's Guide to the Galaxy series.
You can try a lot of them out in the demo online https://etherpad.org/
Not web-based though.
Imagine if Google Wave had a friends list similar to Telegram and each peer acted as a public cache for dead hypermedia.
One of the differences is real-time games do motion prediction, to hide latency. I haven't heard of editors doing so.
(this version will launch as soon as we complete the “pinning” / backup link to ipfs-cluster and make some UI tweaks)
Yes, if you do not mind using Google's infrastructure, Google Docs is easier & more featureful. There are some cases in which running on premises may make a difference, and Etherpad strives to fill that place.
Hits all those open-source client and server notes, and has the advantage of supporting everything that mediawiki does.
Sorry I can't remember what the plugin is called, I used it every day at my last contract though. Made pairing super easy.
As for the existing implementations of CKEditor 5 with real-time collaboration, I'm afraid since the collaborative editor is usually a component of a larger platform (publishing, e-learning, CMS, intranet, documentation management system etc.) it'd be difficult to find something like a publicly available real-life use case demo of it. Without much effort, you can check out the real-time collaborative editing demos (https://ckeditor.com/collaboration/real-time-collaborative-e...) that we have on our website.
A while ago we started building our own internal implementation of a collaborative document management system using CKEditor 5 and features such as real-time collaboration with track changes (something like Google Docs) for our own needs - hopefully, once we polish it a bit, we will be able to release it. We are working on publishing a few case studies with existing customers, too.
As for offline collaboration, our current solution switches the editor to read-only when you e.g. lose connectivity, and then the editing mode is back on when you reconnect, along with the changes that were made in the meantime by other users while you were offline. The platform is ready to fully support "offline" writing. We tested some more complex scenarios with one of the users being offline - and it works (though not with periods as long as days of being offline). We focused on delivering other collaborative editing features such as track changes or comments that were more requested by our customers. Full support for writing while being offline is still on our roadmap, though.
As for the answer to why the collaboration component is not Open Source: we have currently 40 people in total working on CKEditor (CKEditor 4 & CKEditor 5). Developing such a complex application takes a lot of time and resources and we simply need money to cover the expenses. Without that, we’d not be able to spend so much time on fixing bugs or bringing new features. Also, unfortunately, we learned the hard way that some of the biggest IT companies don’t want to help Open Source projects by spending even a single dollar on them, even if they use it in an application that brings them millions of dollars.
Regarding mobile support: CKEditor 5 works well on mobiles (with and without real-time collaboration). The comment you referred to was about the lack of an online demo of real-time collaboration on ckeditor.com for mobile devices. The reason behind this decision was that we are using the sidebar for displaying comments, which results in rather poor UX on mobiles. Fortunately, we are almost finishing the inline annotations plugin which will display comments inline, without using the sidebar. Feel free to assign a full Kiwi for mobile support :)
Thanks again for the article!
how did you solve the merge when somebody comes online again but creates a conflict with his changes?
We use Operational Transformation, so all the changes are stored as operations (with their most important property being path - the position where the operation happened).
When a user comes online, they try to re-send all the operations that are buffered and not yet sent to the server. Then it is a matter of transforming these operations by the new operations that happened in the meantime (when the user was offline). Of course, the user also has to accept the new operations.
This is a bigger / more complex version of the basic problem of the real-time collaboration. During real-time collaboration with all users online you might need to transform your operation by maybe several other operations at most. When you go offline, it might be tens or hundreds of operations but the problem is basically the same, just bigger.
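The heart of that transformation step, sketched for the simplest pair of operations (two concurrent inserts into plain text; real implementations cover many more operation pairs and a proper tie-break rule):

```python
# Sketch of the core OT step for the simplest case: transforming one
# insert against another concurrent insert into the same string. Real
# implementations also handle deletes, attributes, structural ops, etc.

def transform_insert(op, against):
    """Shift op's position if a concurrent insert landed at or before it.
    Each op is (position, text); ties break in favor of `against`."""
    pos, text = op
    other_pos, other_text = against
    if other_pos <= pos:
        pos += len(other_text)
    return (pos, text)

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "helo"
server_op = (3, "l")   # a fix made by someone else while the user was offline
buffered = (4, "!")    # the user's offline insert, at the old position 4

# Before re-sending, the buffered op is transformed against everything
# that happened in the meantime; its position shifts from 4 to 5.
buffered = transform_insert(buffered, server_op)
print(apply_insert(apply_insert(doc, server_op), buffered))  # "hello!"
```

An offline client just folds this transform over the whole list of missed server operations before re-sending its buffer - the same step as online collaboration, repeated hundreds of times instead of a few.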
The quality of the transformation algorithms plays a huge role in how well the users' intentions are preserved.
"Platform is ready" means that the theory behind our solution is correct and that we checked some moderately complex scenarios.
Edit: still, the base scenario is that the users write "online" and "offline" kicks in when you lose a connection (for hours, even). If we are talking about "everyone writes offline and then they magically merge" then I think this might be a totally different feature (and maybe even outside of the editor and inside the CMS).
Had no idea it had real-time collaboration... Wonder how it's done. Digging in now... :)
If you're using it, it would be great to get your thoughts on it and what particular things you like about it.
Update: ShareLaTeX uses a string-based collaboration algorithm (i.e. LaTeX is plain text, not a structured document format), and seems to be limited to max 2 collaborators (though that could be old info), so that explains why it didn't appear on my radar. I'll take it for a spin anyway, thanks!
Also, you might be unaware of CoCalc, which is open source https://github.com/sagemathinc/cocalc. CoCalc supports collaborative editing of many structured documents, including Jupyter notebooks, but not rich text at present.
Just to be clear, open source development of ShareLaTeX is ongoing and indeed has accelerated since Overleaf and ShareLaTeX joined forces. ShareLaTeX is split into many repos, and the one linked above is a 'coordinator' repo that doesn't get updated much. See https://github.com/sharelatex/web-sharelatex for one that is more active.
The free version of Overleaf/ShareLaTeX does have a restriction on the number of collaborators: you plus one other. Paid accounts support more collaborators, and you can also get more collaborators on the free account by referring others.
 https://www.overleaf.com/user/bonus (requires an account)
So yes, there are some use-cases when it is sufficient for collaboration, but most certainly not for the use-cases described in the blog post.
You could easily make tmux + vim web-based if you desire (I don't see why; I'd just use Mosh or SSH instead). For merging you could abstract the content to a Git repository. For multiple cursors you could try Wemux.
https://gun.eco/explainers/school/class.html (Cartoon Explainer!)
For a detailed deep dive, including RGA: