Xi-Editor Retrospective (raphlinus.github.io)
538 points by raphlinus on June 27, 2020 | hide | past | favorite | 157 comments

> There is no such thing as native GUI

I understand why you were disillusioned with the performance of native GUIs. But in other areas, such as accessibility, nativeness absolutely does help. I'm guessing you know this; I just want to make sure other developers get this message.

Here is a quite comprehensive list of the benefits of a native GUI for accessibility, particularly on macOS:


Very good point, and accessibility is important. We do have it on our longer-term roadmap for Druid, and until it's done, it's a very good reason to consider Druid an unfinished work in progress. I will make sure to discuss this in my follow-up blog post, which is about building GUIs. Thanks.

I'm looking forward to that follow-up post, especially the part about toolkits that make heavy use of the compositor to work around rendering limitations. I also wonder if the Chromium rendering engine is doing the same thing, and if not, how its performance compares to, say, WinUI and Cocoa.

The accessibility APIs can be programmed against by anyone. If there is a way to make something like this accessible to screen readers in some useful way, it would likely involve specific accessibility work either way.

> If there is a way to make something like this accessible to screen readers in some useful way

If by "something like this" you mean a text editor, then there's no question; they can be made accessible with a screen reader, and some complex ones (e.g. Visual Studio Code, Visual Studio, and Xcode) are accessible already.

> it would likely involve specific accessibility work either way.

For the main UI of a full-featured programmer's editor or word processor, that's definitely true.

What I want to avoid is developers using a custom GUI toolkit where the platform's native toolkit would do, because they want the best possible performance (regardless of whether the platform's toolkit is good enough), and forfeiting all of the accessibility benefits that the platform's native toolkit brings more or less automatically.

One of the cliches of modern business is: don't outsource your core competencies.

If you are developing a text editor, you should put the time into your text editor. It's the thing that distinguishes you from your competitors. And it's the part that users interact with the most. Your users are likely to ask for features, like vi keybindings, that you can't easily build with the OS-native text field anyway.

Incidental forms, like your preferences screen? Unless you have a lot of time and money to burn, the OS widgets will probably be better than yours.

Not sure about other platforms, but I like the way macOS/iOS uses the accessibility metadata to look up UI elements for testing. So in order to "access" a button in a test, you have to do the accessibility work.

You need it to take automated screenshots too.

Implementing enough accessibility to enable automated testing is a good start, but unless you're aware of the needs of people using screen readers and other assistive technologies (and possibly incorporate those requirements into your automated tests), you won't automatically get full accessibility just by exposing the information that your tests need in order to control the UI.

It's the same on Windows. Automated UI testing almost always uses the accessibility APIs, both to find controls and to interact with them.

Platform widgets do a lot of the accessibility work for you. In some cases you can get away with doing zero* work.

*Or very close to zero

I've been watching/rooting for Xi from a distance for a few years now. This is an excellent retrospective. Well written, but also very honest, especially about those all too typical project highs and lows.

I remember looking at one of Raph's early presentations and being both convinced by and impressed with the design decisions he made, the same ones he outlines here. After doing some of my own comparatively rudimentary research, I came to similar conclusions. It left me very excited to watch the project develop over time.

One thing I was wondering about is that the Xi website contains a list of front ends similar to the list in the GitHub repo, but includes a Flutter implementation for Fuchsia[1] that is now a dead link and missing from the GitHub list. He mentions it briefly in this retrospective as well. Any ideas what happened to it?

[1]: https://xi-editor.io/frontends.html

It was maintained for a little while after I left Fuchsia (almost two years ago now), but the pace of development on Fuchsia is such that if it's not actively maintained, it bitrots very quickly. And there aren't really any devices running Fuchsia that need text editing at scale.

>One of those was learning about the xray project. I probably shouldn’t have taken this personally, ...

I've run into that phenomenon a lot with various startup projects over the years. It's soul crushing, and often an immediate reaction is to assume certain projects are direct competition, rendering your project fruitless.

In retrospect, this ends up almost never being the case. In a way, you become your own worst enemy. The correct solution is to simply ignore it and press on, but this is certainly easier said than done—especially when morale is already low, and without the benefit of hindsight.

>Perhaps more important, because these projects are more ambitious than one person could really take on, the community started around xi-editor is evolving into one that can sustain GUI in Rust.

In the end, I still think that counts as a success. Perhaps it will prove to be an even greater success than the original goal.

Thanks for all these write-ups, they're excellent. Like other commenters, I've been watching at a distance for years now. Your work is much appreciated.

It's not a startup and the alternatives are not competitors, but I've had a similar experience affecting my motivation to work on Red Moon.

There are two other screen filter apps on F-Droid, and at one point I proposed that we [work together], either merging our apps or creating shared libraries. While the other devs initially expressed some interest, they weren't willing/able to put in the effort and nothing came of it. This has killed my motivation to work on Red Moon. It feels like a waste of time, duplicating other people's efforts.

That's not intrinsic to other people working separately to solve the same problem. If Red Moon were an educational project, or if it took a unique approach to screen filtering (like the other project I'm part of and [crowdfunding]), then I'd feel differently. But my main motivation was creating a FLO screen filter app, since previously I depended on the proprietary Twilight. And while there's a handful of valid approaches to screen filtering, the bulk of the effort is on UX, state transitions, and working around the kinks of Android, all of which are shared.

Note: I don't bear any hard feelings towards the other devs, genuinely -- the most frustrating bit was [lack of communication], and even that is totally understandable; it's hard to say no to something you like the idea of, even to yourself (eg, acknowledging you'll never get to that project near the bottom of your list).

[work together]: https://github.com/LibreShift/red-moon/issues/222

[lack of communication]: https://github.com/LibreShift/TransitionScheduler/issues/14

[crowdfunding]: https://wiki.snowdrift.coop/market-research/other-crowdfundi...

Aww, this is really disappointing to hear but an excellent read nonetheless. I always had Xi in the corner of my eye as a very interesting way to make a text editor that had a lot of good things going for it: modularity, native UI, speed…I think if it worked out it would have been really great. It's really sad to hear that it didn't. Thank you, Raph and the rest of the Xi contributors, for working on it for all these years; even if it didn't upend the text editor market I am appreciative of the effort and the lessons you've learned along the way.

Some other side conversations I've had about this that might be interesting and relevant:

* Native UI is not always the right choice. That might seem surprising coming from someone who pushes native UI extremely strongly in general, and I don't want to be misunderstood as recommending you choose something else for your next project (if you have to ask: don't). But in some very specific contexts, such as performant text editing/rendering, it can be problematic to use a high-level control. In fact, many of the text views I use are not using the standard platform controls but custom views doing their own drawing: Terminal.app does its own thing (it doesn't even use NSScrollView!), Xcode has the Pegasus editor written in Swift, iTerm draws text using Metal, and Sublime Text uses Skia. Platform controls are meant for the 99% of apps that don't need absolute control over performance or text rendering, and sometimes you can push them quite far. But in some cases it's appropriate to abandon them. (I will note that all of these examples handle actual text editing very, very well. If you don't know what the standard shortcuts for editing text are on my platform, you're not going to do it right.)

* I love that LSP exists and that it makes my "dumb" editor "smart". But having worked with it a bit, I think it might suffer from some of the same problems: it uses a client-server architecture and JSON to communicate, which means it is not all that performant and needs to keep track of asynchronous state in a coherent way. And not just the state of one document, but the states of many documents, all in different contexts and workspaces, all with their own compiler flags and completion contexts and cursor locations–it is really hard to do this correctly and quickly, although many language servers do an admirably good job. Perhaps it's just "good enough" that it's here to stay, or maybe it has enough support that it can have some of its issues ironed out with time.
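For a concrete sense of the wire traffic being discussed, here is a minimal Python sketch of the JSON-RPC shape LSP uses for incremental edits. The method and field names follow the LSP spec's `textDocument/didChange` notification; the URI, positions, and version number are made-up example values.

```python
import json

# A versioned, incremental edit notification, in the shape LSP defines.
# Every keystroke-batch round-trips through text serialization like this,
# and the server must track the document version to apply edits coherently.
notification = {
    "jsonrpc": "2.0",
    "method": "textDocument/didChange",
    "params": {
        "textDocument": {"uri": "file:///tmp/example.rs", "version": 42},
        "contentChanges": [
            {
                "range": {
                    "start": {"line": 3, "character": 0},
                    "end": {"line": 3, "character": 5},
                },
                "text": "hello",
            }
        ],
    },
}

wire = json.dumps(notification)  # what actually crosses the pipe
assert json.loads(wire)["params"]["textDocument"]["version"] == 42
```

The version field is the server's only defense against applying a change to the wrong snapshot of the document, which is exactly the asynchronous-state bookkeeping described above.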

Thanks for the kind words!

I agree with you that getting shortcuts (and other similar aspects) right is more important than which code is used to implement text editing. We're very much going to try to take that to heart as we implement more text capabilities in Druid.

Regarding LSP. Yes, while the functionality is there, it is very difficult to implement a really polished user experience with it. I think a lot of that is in the design of the protocol itself; it's hard to avoid annotating the wrong region when there are concurrent edits. It's interesting to think about how to do better, but I fear you'd be bringing in a lot of complexity, especially if you started doing some form of Operational Transformation. I would highly recommend that if someone were to take this on, they study xi in detail, both for what it got right and what was difficult.
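The "annotating the wrong region under concurrent edits" problem above can be made concrete with a small hypothetical sketch: an annotation computed against revision N arrives after the buffer has moved on, so its offsets must be transformed through the intervening edits (here, insertions only, for brevity) rather than applied verbatim.

```python
# Hypothetical illustration; names and the insertion-only model are made up.
class Buffer:
    def __init__(self, text):
        self.text = text
        self.revision = 0
        self.edits = []  # (revision, offset, inserted_length)

    def insert(self, offset, s):
        self.text = self.text[:offset] + s + self.text[offset:]
        self.revision += 1
        self.edits.append((self.revision, offset, len(s)))

    def transform_offset(self, offset, from_revision):
        # Shift a server-reported offset through every edit made since
        # the revision the server actually analyzed.
        for rev, at, length in self.edits:
            if rev > from_revision and at <= offset:
                offset += length
        return offset

buf = Buffer("fn main() {}")
server_saw = buf.revision        # server starts analyzing revision 0
buf.insert(0, "// header\n")     # the user keeps typing meanwhile
# Server reports: "highlight the token at offset 3" (against revision 0).
fixed = buf.transform_offset(3, server_saw)
assert buf.text[fixed:fixed + 4] == "main"
```

Handling deletions and overlapping edits is where this gets genuinely hard, which is where full Operational Transformation or CRDT machinery comes in.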

I want to echo this, it's disappointing, but I'm thankful to Raph and the team for all the work they put into it. I had dreams of being able to write an editor where I could concentrate on making a cool UI in a different language/framework and have Xi do all the hard stuff.

I think LSP will end up being the "good enough" solution for niche languages, while popular languages get dedicated IDEs/mega-plugins, à la the various language flavours of IDEA, Xcode, etc.

One important thing to note is that all your examples of custom text are non-rich editors. Text editing becomes astonishingly more difficult if proper rich text and i18n support is required. Then Core Text / TextKit is a pretty good solution, because anything else quickly requires years of work. The best alternative I know of is using a JavaScript-based editor in a WebView - or jumping to something like Qt.

Concerning LSP, I can assure you that my NeoVim with native LSP support is great. Maybe that's because the situation before was downright bad, so what we have now is just so much better.

On the github page https://github.com/xi-editor/xi-editor:

  JSON. The protocol for front-end / back-end communication, as well as between the back-end 
  and plug-ins, is based on simple JSON messages. I considered binary formats, but the actual 
  improvement in performance would be completely in the noise. Using JSON considerably lowers 
  friction for developing plug-ins, as it’s available out of the box for most modern languages,
  and there are plenty of the libraries available for the other ones.
4 years later:

  The choice of JSON was controversial from the start. It did end up being a source of friction, but for surprising reasons.

  For one, JSON in Swift is shockingly slow.
Surprising reasons?!

Yes. In this discussion and on Reddit, people still talk about binary vs textual as the source of the problem, but I've argued (based on empirical data) that the lexical details are not the reason for the performance problems.

Also, Swift is marketed as a fast language (also based on LLVM), yet in my measurements it's 20x to 50x slower than Rust for JSON processing. I found that surprising. Would you not?

It wouldn't be especially surprising to me.

Parsing JSON quickly is difficult, whether or not the language doing the parsing is fast. There's a reason that there are almost always libraries claiming faster JSON performance, regardless of the language.

Go is marketed as a fast language, and they're still trying to build a high performance JSON parser [0].

Parsing JSON will always be slower than most of the alternatives.

[0] https://dave.cheney.net/high-performance-json.html
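To put a rough number behind that claim, here is a tiny Python micro-benchmark sketch. The payload shape and size are arbitrary, and actual timings vary wildly by machine and runtime, so none are asserted; the point is only that even a modest payload forces the parser to touch every byte, allocate strings, and build a tree.

```python
import json
import time

# An arbitrary ~1 MB document shaped vaguely like editor line data.
payload = json.dumps(
    {"lines": [{"n": i, "text": "x" * 80} for i in range(10_000)]}
)

start = time.perf_counter()
doc = json.loads(payload)
elapsed = time.perf_counter() - start

assert len(doc["lines"]) == 10_000
print(f"parsed {len(payload)} bytes in {elapsed * 1000:.1f} ms")
```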

Swift’s slowness is kind of unfortunate, although I hear that there is work being done to remove a lot of the lifetime cruft that was going on behind the scenes to make it slow. And unfortunately I think the primary motivation for Swift’s performance is driven by its primary application, which is UI development, so “good enough” performance usually works…

WWDC has a couple of talks regarding runtime improvements and low-level coding, including unsafe-style coding with Swift.

I agree with you, it's weird that Swift JSON is so slow and binary isn't necessarily faster.

My exasperation is that you were surprised by JSON becoming an issue, when this was the most controversial design decision about the project. To me, it's completely unsurprising.

They're saying the reasons were surprising, i.e. they expected it to be controversial for different reasons.

> Surprising reasons?!

It is possible to write a blazing fast JSON serializer/deserializer in pretty much every programming language, and most languages have many of them.

Swift does not, and it does not seem that it will have one in the near future. This is quite disappointing, taking into account that Swift is one of the 3 major-platform native languages (C# on Windows, Swift on macOS, C on Linux).

So yeah, it is quite surprising that the major platform language in one of the main platforms in use has extremely poor JSON support. Even more surprising is that there is no path forward to fix that.

There are many pros and cons to using JSON, but performance was supposed to be a "non-issue" (if it isn't fast enough, we can make it fast). It turns out we cannot, because the platform is controlled by Apple and they don't want to.

> And it [syntax highlighting] basically doesn’t fit nicely in the CRDT model at all, as that requires the ability to resolve arbitrarily divergent edits between the different processes (imagine that one goes offline for a bit, types a bit, then the language server comes back online and applies indentation).

What's a rationale for having syntax highlighting server-side as opposed to client-side? I'm working on side project that uses ProseMirror and CRDTs through yjs, and the idea of having a server for syntax highlighting for editable text never occurred to me.

> Even so, xray was a bit of a wake-up call for me. It was evidence that the vision I had set out for xi was not quite compelling enough that people would want to join forces.

I think in hindsight this was a virtue. If you look at the xray project, it died after the Microsoft acquisition of Github: https://github.com/atom-archive/xray/issues/177. By not combining projects, it allowed for redundancy.

So, just syntax highlighting is not compelling to move out to a server. But part of my idea is that if you have async to hide server latency, so it's not slowing down typing, maybe you can do a much deeper analysis than just surface syntax. Right now you get basically instant coloring and then much slower but more accurate feedback that's generally comparable to an incremental recompile. I was thinking that a language server might be able to give you quick feedback on some aspects of your program before having to do all of it.

Perhaps an even better motivation is indentation. When I use editors with regex-based indentation (the norm), I'm regularly annoyed when it gets it wrong. Being able to budget a few milliseconds or tens of milliseconds to do indentation that exactly matches what rustfmt or gofmt would recommend would be worth it, imho.
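To illustrate why regex-based indentation goes wrong, here is a hypothetical one-line-of-context indenter in Python; the regex and the failing case are invented for illustration, but they mirror how typical editor indent rules work.

```python
import re

def naive_indent(prev_line: str, indent_width: int = 4) -> int:
    """Guess the next line's indentation from the previous line alone,
    the way regex-based editor indenters typically do."""
    current = len(prev_line) - len(prev_line.lstrip(" "))
    # "Line ends in an opener or a colon" => indent one level deeper.
    if re.search(r"[:{(\[]\s*$", prev_line):
        return current + indent_width
    return current

# Works for the common case:
assert naive_indent("def f(x):") == 4

# But with only one line of context it cannot know that this line closed
# a multi-line parameter list, so it indents relative to the continuation
# (returns 12) where a real formatter would say the body starts at 4:
assert naive_indent("        ) -> int:") == 12
```

A formatter-backed indenter (rustfmt/gofmt style) sees the whole syntax tree, which is exactly why it never makes this class of mistake.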

VS Code’s semantic tokens are a similar concept: an initial client side pass using a tmGrammar, then a language-server provided set of refinements.


Your server generally has a better idea about the language than you do, presumably.

I would only think that's the case when the language of the file you're editing is unknown, so you would have to do some server-side analysis of the entire file and make an educated guess. For example, https://github.com/github/linguist. But if the language is known in advance, then you could send the grammar rules to the client.

A server also means it still works when the language is not simple enough to condense into simple grammar rules :)

Had that feeling...

As a side note, I think more and more people are starting to realize that the async paradigm is not a panacea. That paradigm, or at least the way we implement it, might even mean more trouble than a synchronous counterpart.

Also, other editors that I feel will be sunset or not worked on at some point are Atom and Brackets.

> as a side note, i think more and more people are starting to realize that the async paradigm is not a panacea.

This is why libvim (from the oni2 project) is based on vim rather than neovim. Even aside from performance, it is a huge simplification if you can interact with an editor engine in the same process, synchronously. At some point we replaced simple function calls with baroque APIs accessed over localhost...

It's not that cut and dry.

See https://github.com/onivim/libvim#why-is-libvim-based-on-vim-...

It's mostly due to their build system.

I've seen that but Bryan said this just a few days ago:

> This was a smaller consideration - but more fundamentally, the model Neovim uses for input - queuing it on an event loop and handling it asynchronously - is at odds with what we required - to be able to process the input -> handle updates synchronously.


Atom already seems to have gone somewhat towards maintenance mode since Microsoft bought Github. Feels like there's a lot less by way of features being released and a lot more just keeping up with ecosystem/OS changes. I could certainly be wrong though.

Atom competes pretty directly against VS Code doesn't it? It makes sense that Microsoft would consolidate their efforts.

That's what I thought, too. Hopefully the team that was working on Atom has gotten/will get a chance to bring some of its components/features to VS Code if it doesn't already have them.

There was this one comment when Github Codespaces got released that that's where ex-Atom devs got moved to: https://news.ycombinator.com/item?id=23093150

I don’t see anything wrong with that, tbh. While it’s apples and oranges to compare atom to sublime, I think in an extremely fair and objective head-to-head comparison between VS Code and Atom, the latter is just a worse choice on all fronts. It came first, it filled a need, and now there’s something better.

(And both it and that something better are now owned/maintained by the same organization, making it even more of a no-brainer to slowly phase the inferior option out.)

Atom is open source. The community can always keep working on the project if they care.

Microsoft probably wants people to use vscode.

And given how Office loves React Native, and Microsoft is making it work across all major desktop OSes, with the team continuously bashing Electron's bloat in their talks, I look forward to the day when RN powers VS Code.

I don't think you can blame this on an async design. Right off it talks about having the keyboard input as a separate process, not knowing what version of the buffer input should apply to, using timeouts and I guess, doing everything over local IP. It almost reads like a list of what not to do.

Why not put the input into the window, put a version number with the buffers and keyboard input and use shared memory? Of course something isn't going to work if you don't confront the problems you have at a fundamental level.
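The version-stamped shared-memory idea could look something like the following Python sketch. The layout (two 8-byte little-endian header fields for version and length, then the text) is invented here purely for illustration; a real design would also need atomicity guarantees that a plain byte buffer doesn't give you.

```python
from multiprocessing import shared_memory
import struct

# Hypothetical layout: [version: u64][length: u64][text bytes...]
shm = shared_memory.SharedMemory(create=True, size=1024)

def publish(version: int, text: bytes) -> None:
    shm.buf[16:16 + len(text)] = text
    # Stamp the header last, so a reader never sees a new version
    # paired with old text.
    struct.pack_into("<QQ", shm.buf, 0, version, len(text))

def read(expected_version: int):
    version, length = struct.unpack_from("<QQ", shm.buf, 0)
    if version != expected_version:
        return None  # stale: the buffer moved on; re-fetch, don't guess
    return bytes(shm.buf[16:16 + length])

publish(7, b"hello")
got = read(7)    # reader holding the current version gets the text
stale = read(6)  # reader holding an old version is told it's stale

assert got == b"hello" and stale is None

shm.close()
shm.unlink()
```

The point is the shape of the protocol: version numbers travel with the data, so a consumer can detect that its view is stale instead of silently applying input to the wrong buffer state.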

I believe that his async code would have been far more maintainable / readable / expressive if it had used a reactive streams implementation such as colorless Kotlin coroutines + Flow. Sadly, Rust doesn't yet have one.

What are your thoughts on this topic @raphlinus ?

I think this is not about coding style for async code (coroutines vs. callbacks), but about general asynchrony between various processes/threads.

If you have any asynchronous background task it means the state of your application can change without the current thread having made any change. And your code needs to account for that possibility, which makes it a lot more complicated.

Front-end / interface needs to be synchronously controlled; it doesn’t feel right otherwise. I followed xi-editor, but it never was faster than Emacs at opening really large files and editing them, so it never had a use case for me.

I use Emacs full time, and I open large files in xi (>300 GB). When I handle files in the 500 MB-10 GB range in Emacs, I basically need to disable everything and use "text-mode" exclusively, and yet it still struggles if the file is only a couple of GB long....

> Looking back, I see much of the promise of modular software as addressing goals related to project management, not technical excellence.

I couldn't find this point discussed elsewhere in the comments. I think this is one of the most important sentences in the whole article. I know this is essentially just another perspective on Conway's law, but I think it is important to identify factors like this early in a project if at all possible so that human factors around development can be factored out of engineering decisions.

I think that this is especially critical for open projects that want to achieve long-term sustainability. Open communities don't have the resources to maintain all the packages. Even if they use the same build system and follow the same pattern, and even if most of the maintenance can be automated, some human being still has to deal with each of those systems separately, because they are part of the environment surrounding that individual part of the project.

> First, TextMate / Sublime style syntax highlighting is not really all that great. It is quite slow, largely because it grinds through a lot of regular expressions with captures, and it is also not very precise.

The tree-sitter framework provides a pretty good engine for syntax highlighting:


The assertion that gpu is required for good text rendering caught me off guard. I can't claim it is wrong, but it does feel like it should be wrong.

I sure wish it were wrong. If macOS’s text rendering primitives could be used asynchronously or concurrently the situation would be significantly better. I’ve spent probably hundreds of hours trying to make it fast but I’m convinced it’s only possible if you work at Apple (as they have done quite well with Terminal by using lots of hacks that are unavailable to me)

(As a person who just looked away from a nearly full-screened 4k iTerm2 running the metal renderer, thank you for your efforts)

Do you have any leads on the kinds of hacks they do? And does Emacs have to jump through the same hurdles? It looks fine to me. (Mayhap I'm looking at the wrong things?)

What do you mean by “primitives” here?

I think the point being made is that it is required for fast text rendering.

Still feels like that should not be the case. What changed so heavily in the last decade? 2d rendering used to be hella fast without the gpu. Right?

Rendering models & expectations changed.

Once upon a time (a decade ago) it was OK for apps to directly draw to the front window, compositing wasn't a super established thing. This saves on precious memory bandwidth, which CPUs didn't really get all that much more of over the last decade.

However, now that GPU composition has to happen, GPUs actually don't super like the linear formats that CPUs write to. They want swizzled textures, and they keep those formats private. And on systems without unified memory, the buffer the CPU writes into needs to be sent over to the GPU as well.

Meanwhile screen resolutions & pixel counts skyrocket, as did UI visual & animation expectations. Being responsive is a lot easier than being fluid. Especially if you're trying to be fluid on a high-resolution and/or high-refresh rate display.

And this is all while ignoring things like Apple's "high DPI" handling, which is to just say fuck it and downscale instead. Which means you're pushing resolutions far higher than the display's actual resolution quite commonly.

What kllrnohj says here. There are other trends as well. Because bandwidth was considered a very scarce resource, apps used to do very fine-grained dirty region tracking, and be very careful to update only what really changed (again, often by painting on the front buffer directly). Scrolling was often handled by an explicit bitblt operation, which commonly had hardware support even on very early computers.

For complex reasons, that's all been changing. For example, Metal on macOS doesn't even have a way to specify partial screen updates. Those optimizations are still valid, though, and in my ideal world we have both good support for partial invalidation and fast GPU rendering. Among other things, that would be really good for power usage. But to get there requires some attention to detail that seems to be mostly gone from the desktop UI space.

Do you have any good reads on the difference in format that you are referencing? (Linear versus whatever the gpu is doing?)

Even sending the buffer to the GPU for compositing makes sense as a problem, but it still feels like that should be faster than you would care about in a text editor.

I'm also on a Mac where, if I do something that is "GPU accelerated", I'm likely to get a frozen session. Such that most applications don't seem to need GPU help. Are they just using different parts of it?

> Do you have any good reads on the difference in format that you are referencing? (Linear versus whatever the gpu is doing?)

Linear is just your normal buffer where you index into a pixel at '(y * stride) + x', where stride is probably just the width * bytes per pixel.

But this isn't how GPUs store texture data. They swizzle it so that locality can be maintained well enough regardless of how the texture is rotated. https://fgiesen.wordpress.com/2011/01/17/texture-tiling-and-swizzling/ is a decent introduction. https://en.wikipedia.org/wiki/Z-order_curve has more of the general side of things.

There's then also framebuffer compression in addition to all this.
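For a concrete feel of what "swizzling" means, here is a tiny Python sketch of a Morton (Z-order) layout, the classic example of such a scheme; real GPU tiling formats are proprietary and more involved than this.

```python
def morton_encode(x: int, y: int) -> int:
    """Interleave the bits of x (even bit positions) and y (odd bit
    positions) so that 2D-adjacent pixels stay close in the 1D address
    space, regardless of traversal direction. 16-bit coords assumed."""
    result = 0
    for bit in range(16):
        result |= ((x >> bit) & 1) << (2 * bit)
        result |= ((y >> bit) & 1) << (2 * bit + 1)
    return result

# The four pixels of a 2x2 block land at four consecutive addresses,
# unlike a linear (y * stride + x) layout where rows are far apart:
assert [morton_encode(x, y) for y in (0, 1) for x in (0, 1)] == [0, 1, 2, 3]
assert morton_encode(3, 3) == 15
```

This locality is why GPUs prefer swizzled textures, and why a CPU-written linear buffer has to be converted (or copied and re-tiled) before the GPU is happy with it.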

Very likely, the specific thing you're experiencing is poorly-engineered switching between integrated and discrete GPUs. It's not a mac-only problem, see this thread for a somewhat horrifying story (follow last link): https://www.reddit.com/r/gigabyte/comments/91ld3o/aero_15x_d...

A good intro to layout transitions is: https://www.gamasutra.com/blogs/EgorYusov/20181211/332596/Ta...

The number of copies required to get a pixel from CPU space into photons today is ridiculous. It used to be you'd just write into a memory-mapped buffer, and your graphics card would scan out directly from that where it would get to the electron beam modulator in microseconds.

This all makes sense. But I would have imagined my machine can perform several hundred copies in the time it used to take to do one. Haven't memory speeds progressed a fair bit?

Such that I agree it would be better to do dirty tracking and sending just a small update to the screen. But the tricks to do that should be a lot easier than they used to be.

The gpu thing in my Mac is amusing just because if I run intellij, it will cause my machine to crash. If I run emacs? Not so much. Even if I am stressing the machine with several compiles or some silly pandas data frames. If I enable gpu accelerations in my browser? Expect instability. My video chat program just boasted that they use gpu for video. And for the past two weeks, it is common for the entire video system of my machine to hang during a video chat... Literally get a crash screen.

> Haven't memory speeds progressed a fair bit?

Not really, no. Over the last 15 years CPU memory bandwidth has increased around ~10x. Meanwhile monitor resolutions have also increased around ~10x, meaning per-pixel CPU bandwidth has been flat over the years, or even regressing a little. Laptops have become more common, yet they also lag on memory speeds, or are sometimes even just single channel. Yet they also tend to have the highest resolution displays, which is not a great mix if you're trying to keep CPU rendering viable.
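Some rough arithmetic (illustrative numbers, not measurements) shows the scale involved: a single full-frame copy of a 4K buffer at 120 Hz already costs about 4 GB/s of memory traffic, and a frame typically crosses the memory bus several times (draw, composite, upload).

```python
# One full 4K frame at 4 bytes per pixel, redrawn at 120 Hz.
width, height, bytes_per_pixel, hz = 3840, 2160, 4, 120

frame_bytes = width * height * bytes_per_pixel
assert frame_bytes == 33_177_600  # ~31.6 MiB per frame

traffic_per_second = frame_bytes * hz
print(f"{traffic_per_second / 1e9:.2f} GB/s for one full-frame copy")
# ~3.98 GB/s -- for a single copy, before compositing or GPU upload.
```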

As for your Mac issues I think you just have a broken computer. Software shouldn't be designed with the expectation that it'll be used on broken systems, that's not a realistic design constraint.

Agreed on my computer being broken. Only brought it up as evidence that not all apps are using the gpu. (And even then, I am not positive that is right.)

> 2d rendering used to be hella fast without the gpu. Right?

Define Hella fast.

If you want 16ms text rendering (~60 Hz) you pretty much need a GPU today. If you want 8ms (120Hz) text rendering, there is just no CPU that can handle it today with an OS running next to your text editor doing other stuff.

The whole point of Xi was being able to edit huge files (TB size) on 8K displays at over 120 Hz, and being able to resize the window, scroll, and resize text, with great fonts, without any delay or sluggishness.

Xi delivered in some of those things, and druid delivers on some of the others.

But essentially, there is no system today where you press a key and that key appears on your screen in less than 8ms. Xi achieves ~16ms on the right platforms - not the fastest but much faster than most text editors.

The idea behind Xi was to preserve that as the editor got more features, like syntax highlighting, and it did.

Do you have good benchmarks going over this? Still feels ridiculous. I have a multi core machine running at speeds well above my comprehension and you are saying they can't render simple 2d primitives at speed? That just blows my mind. What are they spending the time on?

I get that rendering a full display from scratch could be slow. But updating a single key press should not be. Scrolling? Sure. I guess. But even that feels like we over complicated something in the process.

I note that I do not disbelieve you. My asserting that it is ridiculous is that it sounds ridiculous.


> I have a multi core machine running at speeds well above my comprehension and you are saying they can't render simple 2d primitives at speed? That just blows my mind. What are they spending the time on?

Moving memory from RAM to the CPU, doing stuff in the CPU, moving the results back to RAM, moving those results from RAM to the GPU, etc.

The CPU is fast, but moving memory around is not, so the CPU and the GPU end up doing nothing most of the time, waiting for memory.


Raph's blog (linked at the top) has a bunch of articles about rendering latency. But just google typing latency; there are a couple of projects and tools that measure the time between a physical key being pressed and the letter being rendered on the screen (end-to-end). Beyond the memory copies, what usually happens is: an interrupt is triggered on the CPU, the kernel might take some time to context switch to catch it, context switching requires saving and restoring all registers, and registers have exploded in size over time (e.g. with AVX-512 you need to save quite a bit of memory); then the kernel notifies the application, which needs to do something with the key press, like scheduling a render into a frame buffer, etc.

So there are just quite a few bounces from here to there in the system, most of which deal with memory latencies, and memory latency is the part of the system that hasn't been getting much faster over the last 30 years.

> there is no system today where you press a key and that key appears on your screen in less than 8ms. Xi achieves ~16ms on the right platforms

How does one measure this? Say for the text editor I'm using (Emacs) or for Chrome/Google Docs?

Looks like this post is relevant: https://thume.ca/2020/05/20/making-a-latency-tester/

Evil people like me who use letters outside ASCII.

Screen resolution / 4k monitors? And maybe font complexity (i.e. lots of web pages seem to be downloading fonts now)

I was curious if font complexity could be it. Does feel like a good guess.

Resolution could also matter, but we did have higher resolution displays back in the day, as well. Feels like 2d acceleration somehow regressed.

Sublime Text 3 still exclusively uses CPU rendering. Unless you're working with really high resolutions (think 8k), CPU rendering is plenty fast.

Doesn’t Sublime Text have an option somewhere to use a GPU buffer or something once your resolution passes some reasonable limit (I think 2560)? What does that do?

From the settings:

  // Mac only. Valid values are true, false, and "auto". Auto will enable
  // the setting when running on a screen 2560 pixels or wider (i.e., a
  // Retina display). When this setting is enabled, OpenGL is used to
  // accelerate drawing. Sublime Text must be restarted for changes to take
  // effect.
  "gpu_window_buffer": "auto",

It's disappointing that things didn't work out.

I'd looked at Xi about a year ago when I was finally running out of patience with Atom's terrible performance, but the lack of a workable Windows solution pushed me to VSCode in the end.

> When doing some performance work on xi, I found to my great disappointment that performance of these so-called “native” UI toolkits was often pretty poor, even for what you’d think of as the relatively simple task of displaying a screenful of text.

I experienced the same thing. Text rendering performance is terrible with Cocoa, and even worse with Win32. Calculating text width is an order of magnitude slower than with FreeType.

That's why for my UI framework I also went from using "native" APIs to custom GPU rendering.

If your framework has a broad audience, please make sure it accounts for the full complexity of text rendering [1] and editing [2], particularly when it comes to internationalization. You may find that there are good reasons for Cocoa to be as slow as it is in the measurement phase, if not the rendering phase. And of course, please don't forget about accessibility.

[1]: https://gankra.github.io/blah/text-hates-you/

[2]: https://lord.io/blog/2019/text-editing-hates-you-too/

This is an excellent read.

I was always wondering if CRDT would really achieve encapsulation, but these kinds of things are very hard to know without trying. Thank you for your work!

Is there another modern editor like Xi that strictly separates the GUI front end (view) from the backend (model and controller)?

Interesting juxtaposition of client/server with modernity. It seems really obsolete, in reality. Every kind of client/server UI system has been tried before. 20 years ago the idea that UI elements would run in the display server while sending commands to model/controller backends (over CORBA!) had a lot of traction. Of course, the idea sucks, so all of these projects are dead and buried. People think something valuable is buried there, so they keep digging the idea up again.

The real problem with the idea is that the user expectations of editor performance are high and the computer can only just barely do it. People expect to be able to open a 5GB file, have it drawn really nicely on a display with 30 million pixels, insert 1 character at the beginning of the file while the syntax highlighting updates instantly and then save and exit instantly. It's actually a lot of work for the machine, no room for RPC overhead.
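Rough arithmetic on that workload, using the figures in the comment:

```python
# 30 million pixels repainted at a smooth 120 Hz:
pixels = 30e6
refresh_hz = 120
pixels_per_s = pixels * refresh_hz  # ~3.6e9 pixels/s just to keep the
# screen current, which is why there is so little room for RPC overhead.
```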

Is this not exactly how macOS does input dispatch? Input comes in from the HID subsystem in the kernel and is redirected by WindowServer to the right application via Mach which queues it on its runloop, and then it updates its view and tells WindowServer again via Mach to composite that to the screen?

I think perhaps libvim is like that? https://github.com/onivim/libvim

neovim is the pioneer here.


Thanks, I didn't realize neovim had this. I found this article that explains the embed feature: https://tarruda.github.io/articles/neovim-smart-ui-protocol/

I believe Onivim is in this camp. I don't know how well that's working out for them, or whether there are any others.

Thanks, this looks amazing. For others looking for more info: https://www.onivim.io/

It says something that you're asking for this when the article explicitly says he believes it was a mistake from the beginning.

That’s fine. I don’t completely agree with his analysis. Just because he couldn’t make it work doesn’t mean someone else can’t.

Maybe someone can make it work, but it's such a terrible idea that everyone else who doesn't use this oddball architecture will be running circles around them.

Reading the article, I took it that it's hard to get a smooth UI with a combination of async and a frontend/core split. No one likes laggy UIs, but I assume emacs/vim users can live with that.

Tangent/clarification: the regex slowness issue is almost certainly due to using Perl-style exponential-time regexes (which are required by Sublime syntax definitions) rather than "regular languages".

The syntect library mentioned uses fancy-regex, which is explicitly different from Rust's default regex crate, in that it supports regex constructs that require exponential backtracking:


> In particular, it uses backtracking to implement "fancy" features such as look-around and backreferences, which are not supported in purely NFA-based implementations (exemplified by RE2, and implemented in Rust in the regex crate).

I think "regexes" have gotten a bad name, so I call them "regular languages" now. That is, RE2, rust/regex, and libc are "regular language" engines while Perl/Python/PCRE/Java/etc. are "regex" engines.

In my experience, lexing with regular languages can beat hand-written code. In the case of Oil vs. zsh, it's beating hand-written C code by an order of magnitude (9-10x).




> First, TextMate / Sublime style syntax highlighting is not really all that great. It is quite slow, largely because it grinds through a lot of regular expressions with captures.

pedantic: I think the issue is probably more lookahead than captures? You can do captures quickly in a "regular language" engine.

> It may be surprising just how much slower regex-based highlighting is than fast parsers. The library that xi uses, syntect, is probably the fastest open source implementation in existence (the one in Sublime is faster but not open source). Even so, it is approximately 2500 times slower for parsing Markdown than pulldown-cmark.

Author of syntect here: This isn't why TextMate/Sublime/VSCode/Atom style regex parsing is slow.

The main reason is that the parsing model is applying a whole bunch of unanchored regexes to the unparsed remainder of the line, one after another until one matches, then starting again for the next token. This means each token parsed can require dozens of regex matches over the same characters. I implemented a bunch of caching schemes to cut down on this number, but it still tends to need many matches per token. It sounds like Oil's lexer does about one regex run per token, probably with a somewhat faster regex engine, and sure enough it's something like 40x faster than syntect.
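That model can be sketched in a few lines (hypothetical token rules, with Python's `re` standing in for Oniguruma); counting attempts shows how failed matches pile up:

```python
import re

# Hypothetical token rules, tried in order for every token,
# roughly like a TextMate-style grammar does.
RULES = [
    ("keyword", re.compile(r"\b(?:if|else|while)\b")),
    ("number", re.compile(r"\d+")),
    ("ident", re.compile(r"[A-Za-z_]\w*")),
    ("ws", re.compile(r"\s+")),
]

def lex_naive(line):
    """Counts how many regex attempts the 'try each rule' model costs."""
    tokens, pos, attempts = [], 0, 0
    while pos < len(line):
        for name, pat in RULES:
            attempts += 1
            m = pat.match(line, pos)
            if m:
                if name != "ws":
                    tokens.append((name, m.group()))
                pos = m.end()
                break
        else:
            pos += 1  # no rule matched; skip one character
    return tokens, attempts
```

On `"if x 42"` this makes 14 regex attempts to emit 3 tokens; the same characters are probed repeatedly by rules that can't match there.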

Oniguruma is actually pretty fast and anyhow most of the regexes in Sublime syntax definitions are written to not require backtracking, because Sublime has two regex engines and only uses the fast one when no backtracking is required. In fact fancy-regex should delegate directly to Rust's regex crate on these regexes but is somewhat slower than Oniguruma, for reasons I haven't yet looked into (edit: see Raph's comment, it's probably heavy use of captures, another thing Oil's lexer doesn't need).

Also note that byte-serial table-driven lexers have a speed limit of one byte per L1 cache round trip (~5 cycles), whereas faster lexers can take advantage of multi-byte instructions (even SIMD) and out-of-order execution to go much faster, hence why pulldown-cmark is 2500x faster than syntect rather than just 40x.

[edit: I should also clarify that, unlike these other parsers, syntect takes a grammar and can parse many languages, so how fast it is depends a lot on the grammar. I suspect the Markdown grammar in particular is slower than usual, given that pulldown-cmark runs at about 250MB/s and syntect on ES6 JavaScript (a complicated but well-implemented grammar) is about 0.5MB/s, so the Markdown grammar may be 5 times slower]
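A back-of-envelope check on that byte-serial speed limit (the 3 GHz clock is an assumed figure, not from the comment):

```python
# One byte per L1 round trip (~5 cycles, per the comment above)
# caps a byte-serial lexer's throughput on an assumed 3 GHz core:
clock_hz = 3e9
cycles_per_byte = 5
ceiling_mb_s = clock_hz / cycles_per_byte / 1e6
# ~600 MB/s ceiling; SIMD / multi-byte lexers can exceed it
```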

> The main reason is that the parsing model is applying a whole bunch of unanchored regexes to the unparsed remainder of the line one after another until one matches, then starting again for the next token. This means each token parsed can require dozens of regex matches over the same characters.

Hm but why can't you just OR them together? That is perfectly fine with a regular language engine. For example, I OR together about 50 different regexes here (including a whole bunch of constant strings) for the ShCommand mode:


Despite this big mess, everything "just works", i.e. the lexer reads every byte of input just once. re2c picks the alternative with longest match, and it picks the first match to break ties.


I suspect the reason that you can't do this is (1) Sublime was originally implemented with a Perl-style regex engine and (2) the order of | clauses matters more when using a backtracking engine. It doesn't have the simple rule that an automata-based engine has.

My claim is that you avoid performance problems by using a "regular language" engine. So I think what you point out supports this, even it might be a slightly different issue than "more backtracking".

I think you are saying that Sublime's parsing model prevents composition by |, which makes lexing slow, because it forces you to read each byte many times. (in addition to the captures issue, let me think about that)
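For what it's worth, the "OR them together" approach looks roughly like this with one alternation of named groups (hypothetical rules; note that Python's `re` is a backtracking engine that breaks ties by the first alternative listed, rather than longest match as re2c does):

```python
import re

# One combined alternation: the input is scanned once, and
# m.lastgroup tells us which rule matched.
COMBINED = re.compile(
    r"(?P<keyword>\b(?:if|else|while)\b)"
    r"|(?P<number>\d+)"
    r"|(?P<ident>[A-Za-z_]\w*)"
    r"|(?P<ws>\s+)"
)

def lex_combined(line):
    return [
        (m.lastgroup, m.group())
        for m in COMBINED.finditer(line)
        if m.lastgroup != "ws"
    ]
```

Listing `keyword` before `ident` is what keeps `if` from lexing as an identifier, which is exactly the "order of | clauses matters" point.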

Two reasons: captures, and it needs to know which of the regexes matched. I meant to but forgot to mention in my original comment that the reason Sublime's built-in highlighter is the fastest is that they wrote a custom regex engine which basically does the equivalent of or-ing them together but, unlike all other regex engines, properly handles captures and information about which one matched while doing so. The custom engine doesn't do backtracking, and it falls back to Oniguruma for regexes that use fancy features. So yeah, it's in theory possible; you just need to write a custom regex engine to do it.

re2c supports captures and it will tell you which regex matched. http://re2c.org/manual/manual.html#submatch-extraction

This is probably already implemented if it does exist, but I know with a bunch of fixed text strings you can create a NFA/trie thing using Aho-Corasick. Does such a thing exist for regexes (specifically: one that can match "all of them at once"), and is it used for the fast regex engine?
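Such a thing does exist. A minimal, unoptimized Aho-Corasick sketch (illustrative Python, not any particular library) extends a trie with failure links so the text is scanned once no matter how many fixed strings are in the set:

```python
from collections import deque

def build_automaton(patterns):
    """Minimal Aho-Corasick: a trie plus BFS-computed failure links."""
    trie = [{}]    # trie[state] maps char -> next state
    out = [set()]  # patterns that end at each state
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in trie[s]:
                trie.append({})
                out.append(set())
                trie[s][ch] = len(trie) - 1
            s = trie[s][ch]
        out[s].add(p)
    fail = [0] * len(trie)
    queue = deque(trie[0].values())  # depth-1 states fail to the root
    while queue:
        s = queue.popleft()
        for ch, t in trie[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in trie[f]:
                f = fail[f]
            fail[t] = trie[f].get(ch, 0)
            out[t] |= out[fail[t]]  # inherit matches from the fail state
    return trie, fail, out

def find_all(text, automaton):
    """All (start_index, pattern) matches, scanning the text once."""
    trie, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in trie[s]:
            s = fail[s]
        s = trie[s].get(ch, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits
```

Generalizing this NFA-style construction to full regexes with captures is exactly where it gets hard, as the replies discuss.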

You will probably find https://github.com/BurntSushi/aho-corasick/blob/master/DESIG... good reading. I believe captures get in the way of using the fastest of these NFA-style techniques, though. There's a comment from burntsushi to this effect in: https://lobste.rs/s/fq8uil/aho_corasick

ETA: Heh, I'm amused to find the latter link to be another point in what seems to be an extended conversation between Andy Chu and the Rust text-processing community :)

Ha yes, as far as I remember, my claim about Aho-Corasick had validity, but I definitely learned a bunch of things from that thread.

If you scroll way down you will see a benchmark I did for the "constant string" problem.

So you can see that re2c does scale better from 1000-6000 fixed strings than either RE2 or rust/regex. But I uncovered a whole bunch of other problems, like re2c segfaulting, the output being slow to compile, egrep blowing up, the non-regular heuristics of "grep" playing a role, etc.


    # grep is faster than both fgrep and the "optimal" DFA in native code
    # (generated by re2c).  I think grep is benefitting from SKIPPING bytes.

    # All times are 'user' time, which is most of the 'real' time.
    #        re2c compile | re2c code size | re2c match time | ripgrep time | RE2
    # n= 100         7 ms          11 KiB           1,566 ms         687 ms   1,398 ms
    # n=1000        66 ms          57 KiB           2,311 ms       1,803 ms   1,874 ms
    # n=2000       120 ms          93 KiB           2,499 ms       3,591 ms   2,681 ms
    # n=3000       204 ms         125 KiB           2,574 ms       5,801 ms   3,471 ms
    # n=4000       266 ms         159 KiB           2,563 ms       8,083 ms   4,323 ms
    # n=5000       363 ms         186 KiB           2,638 ms      10,431 ms   5,294 ms
    # n=6000       366 ms         213 KiB           2,659 ms      13,182 ms   6,397 ms
    # n=47,000   2,814 ms
    # NOTES:
    # - egrep blows up around 400 strings!
    # - RE2 says "DFA out of memory" at 2000 strings, because it exhausts its 8 MB
    # budget.  We simply bump it up.
    # - at 48,000 words, re2c segfaults!
    # - At 10,000 words, GCC takes 36 seconds to compile re2c's output!  It's 74K
    # lines in 1.2 MB of source.

I meant to blog about this but never got around to it ...

As mentioned, I think you would uncover similarly interesting things by benchmarking Sublime-like workloads with re2c's capture algorithm. They use some fundamentally different automata-based implementation techniques.

I replied elsewhere, but to answer more concisely: that's exactly what "regular language" / automata-based engines do, as opposed to Perl-style backtracking engines (which are more common).

Here are hundreds of regexes OR'd together so the lexer reads the input exactly once, not 100 times:


And in the lobste.rs thread linked below, I was basically saying for all practical purposes you can ignore Aho-Corasick and use the more general regex version. Since the "fgrep problem" (fixed strings) problem doesn't involve captures, you should get a DFA that runs at the same speed either way. (I don't recall if the compile time was longer but I don't think so, it is buried in the thread probably :) )


First, syntect can use both onig (an adaptation of Ruby's regex engine, the same one Sublime uses) and fancy-regex. The latter is yet another xi-instigated project, motivated by trying to do a better job at this.

Second, the design of fancy-regex is precisely to delegate to the regex crate when features like lookahead aren't needed. I had high hopes for the performance, but it turned out to be lackluster; a highly tuned backtracking engine beats it in most cases. And much of the problem is indeed that RE2-style approaches are a lot faster when captures aren't required. And, by nature, Sublime-style syntax definitions rely extremely heavily on captures. A bit of discussion (including comments from burntsushi) here:


Third, the question of whether "regular languages" would support faster parsing is somewhat academic, though perhaps you could build something. Though the languages being parsed often have some regular features, as a rule there are exceptions. Markdown is an extreme example, and cannot be parsed well with either regular languages or Perl-style regexes, though that hasn't stopped people from trying.

OK thanks for the clarifications. I responded to BurntSushi and others here:



I don't have experience with captures in re2c, but it's entirely automata-based, and they apparently implement something "fast" for captures and published it in this 2017 paper, so it's pretty new:


I'm sort of interested in benchmarking some Sublime-ish workloads with it, but I'm not sure I'll have time.


And the other claim from that comment is that while regular languages aren't powerful enough by themselves for syntax highlighting, you can add a trivial state machine on top of them (perhaps Vim-like), and get something more powerful and faster than Sublime's model.
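That "trivial state machine on top of regular languages" idea can be sketched like so (made-up states and rules): each state owns one combined alternation, and a couple of token kinds switch the state, which is enough to carry context, like being inside a string, across tokens:

```python
import re

# Each state has one alternation; some token kinds switch states.
# Assumes single-line input (the '.' fallback won't match a newline).
STATES = {
    "code": re.compile(r'(?P<string_start>")'
                       r'|(?P<keyword>\b(?:if|else)\b)'
                       r'|(?P<other>.)'),
    "string": re.compile(r'(?P<string_end>")'
                         r'|(?P<chars>[^"]+)'),
}
TRANSITIONS = {"string_start": "string", "string_end": "code"}

def highlight(line):
    state, pos, spans = "code", 0, []
    while pos < len(line):
        m = STATES[state].match(line, pos)
        spans.append((state, m.lastgroup, m.group()))
        state = TRANSITIONS.get(m.lastgroup, state)
        pos = m.end()
    return spans
```

Each character is consumed exactly once, by whichever state's single alternation is active, rather than being re-probed by a long rule list.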

I'd have to look at the captures vs. lookahead issue more closely, but the general claim is that I think Sublime has a bad parsing model based on a sloppy application of Perl-style regexes (e.g. if you are really forced to read every byte multiple times, that seems dumb, and can be avoided with regular languages)

The bigger problem I feel is with this:

> On the plus side, there is a large and well-curated open source collection of syntax definitions

Whenever I tried to get Java highlighting updated in GitHub for "var", my issue was closed minutes later by people who don't work at GitHub, telling me to open an issue with some random unpaid TextMate / Sublime repo.

Really the overall problem is not owning the front end. It means you can’t do things like fix syntax highlighting, at least as important to programming as high performance word wrap.

Thank you for these fantastically in-depth and introspective blog posts! It’s a pleasure to observe how a single, experimental project can spawn a dozen different research branches, and it’s something to aspire to in my own work. (I have probably missed many such opportunities due to laziness and lack of curiosity.)

Glad you enjoy them! I certainly plan to continue writing.

I (naively) would have thought that the IPC / multi-process model strongly conflicted with the extremely high performance goals (e.g. time between keypress and painting). Can anyone explain why my instinct there is wrong?

Maybe the time is just not right for Xi yet.

I wish that after druid becomes production ready you'll pick it up again.

The coolest thing about Xi is that some have attempted to make it a vim replacement, others an emacs replacement, etc. I wish it becomes a framework that can be used to build better text editors in general.

I also hope to be able to pick up text editing again, but make no promises. If not, then I hope Druid provides a solid foundation for others to build the dream editor I envisioned.

Too much over-engineering here.

It is always better to start with simple, dumb and reliable code, with minimal architecture, and then grow organically.

With experience, I tend to write the least smart code possible; I fell in love with the power of brute-forcing everything.

> With experience, I tend to write the least smart code possible

I'm always amused by the hn threads patting ourselves on the backs for in depth technical discussions[1], when so often the top comments are basically non-specific thought leader tweets.

For those that got this far, the article is well worth reading about a design space with difficult tradeoffs, and there are comments actually engaging with the content below this.

[1] today brings us multiple instances in https://news.ycombinator.com/item?id=23664067

I found some interesting ones in this thread!

From the little I knew about Xi, I came in with this perspective. The article changed my mind. He really was trying to "fix all the problems", as it were; it's right there at the start of the article with the issue of the Google keyboard not talking to text boxes, which is a real problem. That said, in his opinion, it didn't work, especially with the GUI being separate from the core.

I guess my point is he was trying to "correctly solve" a hard problem, which does require engineering. He didn't seem to set out to solve a dumb problem that didn't need solving (Yet Another Text Editor).

What would be the point of building something that already exists?

The thing is, that is the story of the existing big editors, and we know where that story goes.

No, where does it go? Are the big editors unsuccessful?

They are unscalable.

I love Emacs, and it continues to be my editor of choice, but it doesn't seem the design is amenable to adding good threading primitives. Neovim seems like a success story in this regard, though Vimscript is a much less pleasant extension language than Elisp, which is not perfect, but is at least a real programming language.

There may be a way forward for Emacs, building a sandbox that looks like a full Emacs instance to existing Elisp, and slowly factoring out the whole-editor blocking issues; but that might be almost as complicated as starting fresh, with fewer benefits.

> Are the big editors unsuccessful?

No, but if that's the only criterion, we already have plenty of those.

My first test of any text editor is to open something like a 4-gigabyte log file with a one-gigabyte line. If the editor works fast, it's good. So far very few editors pass this test, so most are not suitable for general use, only for some niche use like editing tiny text files.

I am curious which editors pass this test, as pretty much everything I have tried chokes on long lines, even things like nano, vim, Sublime Text…

It's still disappointing, and I'd totally understand if someone wanted to fix this.

I think pagers - less, for example - are somewhat OK with long lines? Not much else deals gracefully with multi-megabyte lines.

I finally stopped on Windows EditPad, but it's shareware and its UI is somewhat weird, so I won't recommend it for everyone, but as a general text editor, it works for me.

Emacs has made some major improvements with long lines in master, and some of that will come out with 27.

I read the previous comment as sarcasm...

That is a bizarre measure for a text editor.

It was sarcasm.

Perhaps this is a cultural side effect?

You mean working at Google or in the Android team?

I've rarely seen a worse example of absurdly over-engineered code than the Android codebase...

The latter includes the former, so yes.

Lol clearly salty at not being at Google and comments about the "culture". Stick to Amazon where people don't get pee breaks or cry due to pressure or get fired for speaking their mind.

Am I the only one on HN who thought this was a bad idea all around from the beginning? I remember that people were very enthusiastic back then because of the Rust hype and the author's pedigree, so critics were quickly silenced.

For me, picking Rust and the JSON-based IPC were huge red flags and this post-mortem confirms that. But what I find odd is that many are still not willing to accept the conclusion that using Rust for a UI-intensive app is a bad idea. And using multiple languages in a project and splitting in back-end vs. front-end or lib + UI is also quite popular around here, unjustifiably in my opinion when one considers the extra complexity involved.

I was quite enthusiastic about it, and don't see even now why using Rust for a UI-intensive app would be a bad idea in general. If doing an app with a "monolithic" design, the lack of good frameworks is a problem, but that's improving, so it wouldn't be a good idea if you want to see results right now, but I'm pretty confident that's gonna change.

However, I fail to see how the choice of Rust is relevant in this particular project, as the UI part was meant to be programmed in any language, making Rust an irrelevant part of the equation.

On another note, I'm baffled why you are downvoted. I find your comment negative, but not unreasonable.

> For me, picking Rust and the JSON-based IPC were huge red flags and this post-mortem confirms that. But what I find odd is that many are still not willing to accept the conclusion that using Rust for a UI-intensive app is a bad idea.

This postmortem says Rust was, and still is, a great fit for the problem domain for which it was used.

Where do you reach the conclusion that Rust is bad for UI-intensive apps? That claim appears to have no relation to the post-mortem as Xi didn't use Rust for the UI in the first place? Things might have been better if Rust was used for the UI if anything, as attempting to be native instead was one of the problem areas.

How does the claim that it's a great fit reconcile with the fact that the project which was supposed to prove the said great fit was shut down? In the end real world is the only yardstick by which projects can be reasonably measured.

However if we want to remain positive, we could say like Edison that this project did not fail and is merely attempt 1 of 10000 at learning how not to build UI apps with Rust. :)

> Where do you reach the conclusion that Rust is bad for UI-intensive apps?

I've reached that conclusion by applying logic and looking at the state of Rust and its capabilities as a language and the scarcity of UI apps built in Rust.

I don't have any opinion on Rust but reading about microservices, JSON and IPC in the context of a high-performance editor did make me raise an eyebrow.

I wasn't there when xi-editor started, but writing a text editor in Rust and splitting the backend from the front-end still seems like a good idea. Why? You could compile Rust to Web-assembly -- getting a more performant text editor than if it was written in Javascript -- and then develop your browser or native frontend in HTML/CSS/JS.

No, you're not. But I wouldn't say something like that beforehand, because there's nothing wrong with such a project succeeding if that's what ended up happening. Being able to say "I told you so" isn't worth it.

Although I personally think Rust is fine. It's just that I thought the frontend and backend split is too complex. I am looking for something simpler than neovim, personally, and xi didn't tick that box for me.

Right, I didn't want to be a killjoy either, but the goal of postmortems is honestly appraising what happened.

Not mentioning Rust as a cause for failure at all is mildly disappointing but at the same time mostly expected given the lack of self-reflection in the community. I would have also liked to see some thoughts on how this debacle could have been avoided. What were the mental processes that made the author(s) ignore some pretty serious alarm signs about the design? How can these be better controlled?

One of the design goals is:

> CRDT as a mechanism for concurrent modification.

I am genuinely wondering if concurrent modification is really, truly needed?

I didn't know about Xi, but JSON, RPC, multiprocess, microservices, Async... are all red flags.

What is the successful text-editor project in Rust now? Xray seems to have been abandoned too.

Honest and thoughtful retrospective, obviously with no hidden agenda, and written with humility. Fantastic stuff.

I was personally really intrigued by the ambitions and design decisions in Xi, but it somehow lacked the one thing I can’t live without in a non-terminal editor - a tree view, at least when I first tried the OSX version.

Perhaps that changed, but you’d think that would take higher feature precedence over something like collaborative editing right?

This was solving problems nobody really has, so its fate was predictable. Here's what I think could take off like a rocket: a VSCode-like experience that runs completely in terminal, and therefore does not require a "remote" of any kind. Better yet if it takes most of the same plugins, to reuse the immense amount of work people have done there (and are not going to re-do for some hotshot new editor).

Note how none of the above mentions "async", "ropes" or "crdt".

I don't want to speak too much for raphlinus here [1], but I don't think it was ever the goal to make an editor that gets a lot of downloads like a startup pitching a product. All the things you've listed are non-goals, or at least lower-priority goals.

The goal was to make an editor based on sound technology principles, and that included investigating said technology principles. It turned out some of those principles were bad ideas, and that's that.

[1]: My involvement with xi was limited to being part of conversations he had about rendering performance on Windows in the winapi crate's IRC channel.

This is a bit complicated. My main goal was to make something good. I would have liked it to be so good that lots of people would use it, but I wasn't optimizing for popularity, and, indeed, had I been, I would have done quite a few things very differently.

Also, thanks for those early winapi discussions. The attitude of Rust towards winapi is one of the early reasons I started seeing Rust as being viable for actually building GUI.

That's a long way of saying it's a 'research project' or an 'experimental text editor' written in Rust.

What you're asking for exists: https://github.com/neoclide/coc.nvim

"Step 1: Install NodeJS"

I think what got people excited about Xi with ropes and CRDT and Rust was that it was going to be a modern codebase with performance. Now that Moore's Law is buried next to Dennard scaling, we need less software written with time-to-MVP in mind and more software written with performance in mind. Software is used a lot more than it's created. If I looked at editors like emacs and vi/vim/neovim, and tried to calculate the person-decades spent making them vs. the person-millennia spent using them, it's quite clear that optimizing for the single-core performance we have today pays off tomorrow, when single-core performance is only 10% faster.

I think it had a lot of goals that matched what I'd like from my text editor:

* Using native technologies

* Fast

* Modular and extensible

* Open source

And I'm not disputing that. But one goal eclipses all others:

* Usable

That is correct, hence why I never used Xi. But if it did end up usable I could imagine myself switching to it.

This is how you write Xi Thought, and if you contribute enough code you can become the Xi Dada :-)

> it might be able to safely save the file, but you can also do that by frequently checkpointing

It is always amazing to discover niches where the conceptual and practical power of logging has not yet penetrated.
