I agree that everything written there makes sense but I can't get hyped for trailers anymore. No release date, no screenshots, let me know when it's ready :)
> The key insight was that modern graphics hardware can render complex 3D graphics at high frame rates, so why not use it to render relatively simple 2D user interfaces with an immediate mode architecture?
While I appreciate this approach and think it's a great idea over using something like Electron, this almost comes off as some kind of obvious discovery they've made. Nothing about this is new or even interesting, as this has basically been the status quo in games for ages now.
Edit: I realized this comes off as attacking the project itself. The project _is_ interesting because it seems like most "advancements" in text editing recently have been locked up in web technologies. I only mean their approach to rendering is not that interesting or exciting, which honestly might be a good thing
> this almost comes off as some kind of obvious discovery they've made
Except that it's not?
We are currently in a weird path-dependent state for GUIs. Because GUI toolkits are such a pile of detailed work, it takes a lot of effort to create good general ones. The ones we currently have were all designed prior to ubiquitous 3D performance, have a strong dependency on object-orientation, and were often designed around constrained CPU and memory.
None of this holds today. However, we still use the same toolkits.
The immediate-mode GUI stuff is interesting, but throws away all OS integration which means accessibility is right out the window. Responsive GUIs (see: every DAW (digital audio workstation)) all throw away GUI toolkits and implement their own.
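For anyone unfamiliar with the term, the core of an immediate-mode GUI is that widgets are plain function calls re-issued every frame; no widget tree survives between frames. A minimal sketch of the idea (all names here are made up for illustration, not any real toolkit's API):

```rust
// Minimal immediate-mode sketch: widgets are plain function calls,
// re-issued every frame; the only persistent state is the application's.
struct Ui {
    mouse_pos: (f32, f32),
    mouse_clicked: bool,
    draw_calls: Vec<String>, // stand-in for real draw commands
}

impl Ui {
    fn new(mouse_pos: (f32, f32), mouse_clicked: bool) -> Self {
        Ui { mouse_pos, mouse_clicked, draw_calls: Vec::new() }
    }

    // A "button" is not an object: it draws itself and reports a click,
    // all in one call. Nothing about it is retained past this frame.
    fn button(&mut self, label: &str, rect: (f32, f32, f32, f32)) -> bool {
        let (x, y, w, h) = rect;
        let (mx, my) = self.mouse_pos;
        let hovered = mx >= x && mx <= x + w && my >= y && my <= y + h;
        self.draw_calls.push(format!("rect {label} hovered={hovered}"));
        hovered && self.mouse_clicked
    }
}

fn frame(ui: &mut Ui, counter: &mut i32) {
    // Application state lives outside the UI; the UI is rebuilt each frame.
    if ui.button("increment", (0.0, 0.0, 100.0, 30.0)) {
        *counter += 1;
    }
}
```

The upside is that the UI is always a pure function of current state; the downside is exactly the one above: the OS (and a screen reader) never sees a widget tree at all.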
Not to mention that 2D rendering with 3D hardware is NOT a solved problem. The NV_path_rendering extension is not available in Vulkan or DirectX, as far as I know. Text shaping is decidedly video card unfriendly. The number of "vector" renderers can be counted on one hand--and you don't need any hands at all to count the 3D native ones.
I welcome all the experimentation in this space. I just really hope we get some convergence to something interesting soon.
> While I appreciate this approach and think it's a great idea over using something like Electron, this almost comes off as some kind of obvious discovery they've made. Nothing about this is new or even interesting, as this has basically been the status quo in games for ages now.
That it's been the status quo in games is not really relevant.
What is probably more relevant is that this is not the first project to go in that direction (e.g. alacritty is a GPU-accelerated terminal). And while isolation from the platform, and handling everything yourself, tends to be a bonus for a game (which is rather isolated from the system anyway), it can be more problematic for other programs, especially heavily text-based ones (like... an editor).
I agree it seems like a strangely ho hum realization. And I’m definitely not attacking the project, I just signed up for the waitlist.
Just wanted to add that it’s an especially odd framing, given that this appears to be Atom’s successor, and VSCode—the editor which many Atom users adopted—is both Electron and heavily reliant on GPU rendering.
Yes, I use Raylib for toy games, and for toy apps. Raygui, the Raylib GUI library, is an immediate-mode GUI. It doesn't seem to hog any resources for the frame updates, but I lower the frame rate for apps where I don't need very high refresh rates on state changes, such as moving stuff around or changing an input in a dialog or text box. I've been playing with immediate-mode GUIs for years (nuklear, imgui, etc.), so I wasn't sure what was novel about this. There are tricks to store scenegraphs and only update what changes, which pushes you away from true immediate mode anyway. I am more curious about how they render fonts at scale. Do they implement sub-pixel rendering or SDFs to achieve variable-resolution clarity?
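For context on the SDF option: the trick is to store a signed distance to the glyph edge in the texture and reconstruct a crisp edge at any scale with a smoothstep around the 0.5 iso-line in the fragment shader. A rough CPU-side sketch of just that math (the smoothing constant is illustrative; real renderers derive it from screen-space derivatives):

```rust
// Sketch of the SDF coverage math a fragment shader would run.
// `dist` is the value sampled from the distance-field texture,
// remapped so 0.5 sits exactly on the glyph outline.
fn sdf_alpha(dist: f32, smoothing: f32) -> f32 {
    // smoothstep(edge0, edge1, x): clamped Hermite interpolation
    let edge0 = 0.5 - smoothing;
    let edge1 = 0.5 + smoothing;
    let t = ((dist - edge0) / (edge1 - edge0)).clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}
```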
I think the point here is that we've worked ourselves into a corner with web technologies. The web is very reliant on GPU rendering, but when building an app with DOM + CSS, you don't have direct access to all that power. You have to be very careful to construct your application in such a way that it can be efficiently rendered, and actually use that GPU power.
Basically, they're getting rid of the middleman: the web. Think about most modern web frameworks (React, Angular, etc.): they're basically immediate-mode GUI frameworks, but we do "virtual diffing" in order to translate our immediate-mode-like code into the DOM (a retained-mode UI framework), and then the web renders the DOM via immediate-mode-like patterns again. Pretty strange if you think about it.
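To make the "virtual diffing" point concrete, here is roughly what React-style reconciliation does, reduced to a toy (flat lists of text nodes compared by position; real implementations diff keyed trees):

```rust
#[derive(Debug, PartialEq)]
enum Patch {
    Replace(usize, String), // set the node at index to new text
    Append(String),         // add a node at the end
    Truncate(usize),        // drop nodes past this length
}

// Diff two "virtual DOM" snapshots into a patch list, so the
// (expensive) retained DOM is only touched where something changed.
fn diff(old: &[&str], new: &[&str]) -> Vec<Patch> {
    let mut patches = Vec::new();
    let common = old.len().min(new.len());
    for i in 0..common {
        if old[i] != new[i] {
            patches.push(Patch::Replace(i, new[i].to_string()));
        }
    }
    for n in &new[common..] {
        patches.push(Patch::Append(n.to_string()));
    }
    if new.len() < old.len() {
        patches.push(Patch::Truncate(new.len()));
    }
    patches
}
```

The whole diff step exists only to bridge the two paradigms: your code produces a fresh snapshot each render (immediate-mode-like), and the patches mutate a retained tree.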
Our stacks of technology are getting in the way of creating good software. So yeah, it's an old idea, but in light of the state of the art in IDEs and text editors today, this is an interesting approach.
Note, this is a problem in the first place because of how good the web is. The web has so much tooling, cross-platform support, and documentation / learning resources that it's being overused in places where it's very inefficient, simply due to momentum and mindshare. When you have a hammer...
Now, one of the biggest reasons for the success of the web is cross-platform support, but it's not just cross-platform support. It's truly the only write-once-run-anywhere platform. Most other cross-platform frameworks require you to think about the native environment at some point, but that's very rare on the web because of its robust standard library, including media access, networking, and even more rarely used APIs like WebMIDI. Not to mention that web design culture allows you to ignore the UI patterns of the native OS you're targeting, allowing more cross-platform savings.
If we can somehow manage to throw away the DOM + CSS (except, perhaps, for decoration) and replace them with immediate mode frameworks, combine that with WebAPIs for interfacing with hardware, and decouple those from JS (maybe via WebAssembly), then we have a competitor to the web for complex cross-platform software that is actually interesting. Perhaps Zed and their Rust-based GUI framework could help push us in that direction.
Edit:
Another aside: the performance of immediate-mode GUIs. You could argue that retained-mode UI can be made to perform better than immediate mode, but I think the right combination of caching and framework-level optimizations to allow partial updates is the solution.
You might say "that's not immediate mode though?", and sure, but what we care about here is fast applications that are easy to build. We've discovered that writing code via declarative patterns has huge benefits over "object soup", so our goal should be to make those patterns as fast as possible. I think the DOM and retained-mode GUI are just a result of the Java style of object-oriented obsession.
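As a sketch of the caching I mean: the framework keeps last frame's expensive results (layout, shaped text) keyed by their inputs, so an immediate-mode frame that re-declares an unchanged widget pays only for a lookup. This is a toy illustration, not any particular framework's design:

```rust
use std::collections::HashMap;

// Toy cache: expensive per-widget work is keyed by its inputs, so
// re-declaring an unchanged widget next frame costs a hash lookup.
struct LayoutCache {
    cache: HashMap<String, usize>, // input text -> computed "layout" (here: width)
    computes: usize,               // how many real computations we did
}

impl LayoutCache {
    fn new() -> Self {
        LayoutCache { cache: HashMap::new(), computes: 0 }
    }

    fn layout_width(&mut self, text: &str) -> usize {
        if let Some(&w) = self.cache.get(text) {
            return w; // cache hit: no recomputation this frame
        }
        self.computes += 1; // stand-in for expensive text shaping/layout
        let w = text.chars().count() * 8;
        self.cache.insert(text.to_string(), w);
        w
    }
}
```

Calling `layout_width("hello")` sixty times a second does the real work exactly once; that's the sense in which "immediate mode every frame" doesn't have to mean "recompute everything every frame".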
One thing that has me interested is that they seem to be working with the person/people behind tree-sitter. Tree-sitter is pretty amazing tech and vastly outperforms traditional text parsing. There are a couple of impressive YT videos demonstrating it.
3/4 team members are Atom veterans (which is to say GitHub, which is to say Microsoft)
What’s unclear to me is whether this is an internal project backed by those deep pockets, or a new and independent company? I hate to say it but that makes a big difference in the likelihood that this project will be around in five years, vs the (many, many) other new editors we’ve seen crop up in recent years
They look like they’ve got a great game-plan, but from the outside it seems like a tough business to get into. Best of luck to them either way!
It's the team who built an open-source editor at GitHub and they are building a new editor. Cool... I want to check it out.
Sign up for a waitlist? Log in? Seems like they're building a business around an editor... Cool, we have a number of those. I'll wait to see when they have a product I can evaluate, instead of when they're talking about what they're going to do.
I haven't been sold on the reality of using collaborative editors. I have tried the jetbrains offerings, and for me it didn't make pairing easier (the primary use case I think of). However I am interested in being sold on the idea.
For those who have found collaborative editing to be a killer feature I would love to hear about your experiences.
I find it very useful when mentoring, especially when working remotely. I think one of the best ways to train someone to program in Rust (or any language really) is to sit side-by-side and write code while answering questions and explaining why one approach was chosen over another. Having the less experienced developer sometimes take the wheel and drive is also important.
I vigorously agree that remote mentoring and training is a prime scenario for a collaborative editor, and I've used VSCode for it (in both roles).
OT... but I would go a little further than you and say that in my experience, I prefer to have the learner drive almost all the time, and to have the mentor step in and type only when necessary.
If they did they wouldn't be on the HN homepage. "We are planning to write a text editor based on an existing GUI toolkit, stay tuned" doesn't sound all that exciting, even if the result would be identical.
> The main thing CRDT buys you is the ability to propagate edits through a mesh-like network, exploiting local connections, as opposed to requiring a central server. I think that's appealing if you're actually editing in such an environment. For large scale organizations, I think it does make sense to treat the set of analysis and review tools as a form of collaborative editing, but those are also cases where a centralized server is completely viable.
> The flip side of the tradeoff is that you have to express your application logic in CRDT-compatible form. How do you represent the merge rules for soft spans as a monotonic semi-lattice?
> So I come to the conclusion that the CRDT is not pulling its (considerable) weight. When I think about a future evolution of xi-editor, I see a much brighter future with a simpler, largely synchronous model, that still of course has enough revision tracking to get good results with asynchronous peers like the language server. The basic editing commands, including indentation and paren matching, are more simply done as synchronous operations, and I think avoiding that complexity is key.
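For readers wondering what "monotonic semi-lattice" means in that quote: a state-based CRDT's merge must be commutative, associative, and idempotent (a lattice join), so replicas converge no matter what order updates arrive in. The textbook toy example is a grow-only set, where merge is just set union (a hedged sketch of the general idea, not xi's actual data model):

```rust
use std::collections::BTreeSet;

// Grow-only set: the simplest state-based CRDT. `merge` is a lattice
// join (set union) -- commutative, associative, and idempotent -- so
// replicas can exchange state in any order and still converge.
#[derive(Clone, PartialEq, Debug)]
struct GSet {
    items: BTreeSet<String>,
}

impl GSet {
    fn new() -> Self {
        GSet { items: BTreeSet::new() }
    }

    fn add(&mut self, item: &str) {
        self.items.insert(item.to_string());
    }

    fn merge(&mut self, other: &GSet) {
        self.items.extend(other.items.iter().cloned());
    }
}
```

The difficulty the post is pointing at: set union is trivially a join, but expressing something like "merge rules for soft spans" in a form with these algebraic properties is not, and that's the weight the CRDT makes you carry.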
Why does every commercial editor seem to desperately believe we all want this?
Why would I, as an actual developer, want "pair programming but now there's lag"?
I didn't even like the pairing fad in the first place, I found it awkward, anxiety-inducing, and slow. Adding an additional software dependency just seems worse.
> Why does every commercial editor seem to desperately believe we all want this?
It's one of those implementational-catnip ideas that steadily converge toward collective bikeshedding whenever they come up, but which are really tricky to scale/sell/market in practice.
So, it hasn't caught on and has been soundly disproven.
Or maybe it's that it hasn't caught on and then done the whole "oh, look at all these other companies that were so ahead of their time" thing...
Reading this description - a fast native code editor with collaborative editing as a key feature - reminded me of Hydra/SubEthaEdit, which first came out in 2003! Talk about being ahead of your time.
This seems really interesting from a technical point of view. However, they seem to be assuming that developers will place a very high value on low input lag. This seems especially doubtful in the context of a collaborative editing tool, where network delays are likely to dwarf any input lag deriving from the event handling and rendering architecture of the editor.
This leaves me wondering what the key advantage of this product will be over VSCode Live Share.
As a looooong-time user (/addict) of emacs, probably my two biggest complaints are slowness and bugginess.
So I find an absolute focus on speed to be quite appealing.
I 100% agree with their philosophical positions: An editor should always be able to reflect input in the next screen refresh.
This might seem like a trivial thing to care about, but things that are more responsive just feel better to use. For something that I use all the time (too much, really), that's very significant.
(And w/r/t the network case, you still show your local input immediately; of course, input from other players is necessarily lagged, but you don't care about that.)
Input lag has been imperceptible for me with all text editors that I've ever seriously used for coding (Vim, Emacs and VSCode). I appreciate that some people are genuinely more sensitive to input lag than others. However, I have to suspect that it's a minority concern. It doesn't seem like the first thing to mention on a marketing page.
>of course, input from other players is necessarily lagged, but you don't care about that
But I do care about network lag. It's a much greater source of inconvenience (to me) than an imperceptible (to me) lag in registering my key presses.
> But I do care about network lag. It's a much greater source of inconvenience (to me) than an imperceptible (to me) lag in registering my key presses.
With a clever implementation, you should see your local input reflected instantly, and other people's input will arrive with some delay. However, since you don't actually know when other people started the input, as long as you have a reasonably fast network, it should be "imperceptible" to you. (There's nothing to perceive!)
> Input lag has been imperceptible for me with all text editors that I've ever seriously used for coding
Here's a very specific example: try using emacs to edit the C++ source file for the C# Thrift generator. For extra agony, try using the iedit emacs extension to change a symbol throughout the file. It's extremely laggy.
Emacs in fundamental mode without any extensions is responsive.
> I appreciate that some people are genuinely more sensitive to input lag than others. However, I have to suspect that it's a minority concern.
Different strokes for different folks! I for one am happy to see an editor with that focus. (We'll see if it lives up to it!)
I also have a theory that even people who think they aren't sensitive to input lag actually might be -- I'd love to do some sort of study where people are divided into two groups to do some editing task for 8 hours, one group with an editor that deliberately has, say, a 100ms delay in input processing vs one group that doesn't, and afterwards survey each group about emotional state, fatigue, etc.
> For extra agony, try using the iedit emacs extension to change a symbol throughout the file. It's extremely laggy.
The buffer data structure that Emacs uses -- a gap buffer -- has a fundamental incompatibility with making multiple "simultaneous" edits. Any attempt to do it in Elisp is going to be hacking (in a good way) around that. Performance will definitely degrade for a large buffer, because it has to copy around most of the buffer contents for each edit. Full explanation here if you're curious: https://nullprogram.com/blog/2017/09/07/
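To illustrate why: a gap buffer keeps a movable hole at the edit point, so repeated edits at one location are O(1), but moving the gap a distance d copies d characters -- which is exactly what N scattered "simultaneous" edits force it to do over and over. A minimal sketch (real implementations grow the gap when it fills; this one doesn't):

```rust
// Minimal gap buffer: text stored with a "gap" at the edit point.
// Inserting at the gap is O(1); moving the gap a distance d copies
// d characters, which is why scattered multi-edits degrade badly.
struct GapBuffer {
    buf: Vec<char>,
    gap_start: usize,
    gap_end: usize,
}

impl GapBuffer {
    fn from_str(s: &str) -> Self {
        let gap = 16;
        let mut buf = vec![' '; gap]; // the gap, followed by the text
        buf.extend(s.chars());
        GapBuffer { buf, gap_start: 0, gap_end: gap }
    }

    // Moving the gap shifts every character in between -- the costly part.
    fn move_gap_to(&mut self, pos: usize) {
        while self.gap_start < pos {
            self.buf[self.gap_start] = self.buf[self.gap_end];
            self.gap_start += 1;
            self.gap_end += 1;
        }
        while self.gap_start > pos {
            self.gap_start -= 1;
            self.gap_end -= 1;
            self.buf[self.gap_end] = self.buf[self.gap_start];
        }
    }

    fn insert(&mut self, pos: usize, c: char) {
        self.move_gap_to(pos);
        // (A real implementation grows the buffer when the gap fills.)
        self.buf[self.gap_start] = c;
        self.gap_start += 1;
    }

    fn text(&self) -> String {
        self.buf[..self.gap_start]
            .iter()
            .chain(self.buf[self.gap_end..].iter())
            .collect()
    }
}
```

Edit in one place and it's fast; make one edit at the top of a large buffer and one at the bottom, alternating, and each edit pays for moving the gap across nearly the whole buffer.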
Tangential, but I definitely disagree w/ the blog post about the value of multiple cursors vs just macros and rectangle edits, tho. Multiple cursors are just so much easier to work with and get right. You're editing blind when recording a macro, which is tricky, and if you mess up, you have to start over again from zero.
Network lag in a collaborative editing environment definitely is perceptible in practice. Though in theory you can't be 100% sure that the other person isn't just pausing randomly in the middle of typing a word, the statistical signature of lagged typing is quite distinctly perceptible.
(By the way, I am fully aware that there need not be any delay in showing one's own input locally. That does not mean that network lag ceases to be an issue when editing collaboratively. It just means that you can speculatively display the probable result of your own edits before you've synced with the other person's edits.)
Apart from that, of course you are right that it is just a question of how many people find input lag to be a significant problem in their current text editor of choice. The issues with Emacs, as you note, have to do with the low performance of various language modes rather than the rendering architecture of the editor. (Though I guess maybe there is some missed opportunity for parallelism imposed by Emacs's architecture?)
> Network lag in a collaborative editing environment definitely is perceptible in practice. Though in theory you can't be 100% sure that the other person isn't just pausing randomly in the middle of typing a word, the statistical signature of lagged typing is quite distinctly perceptible.
If we imagine all possible network conditions, this is definitely true for some of them. But if you have a good network, I don't think you're going to be able to tell. Consider that you can even do something like Stadia which does round-trip input lag to render a full-frame of a video game, and it often works pretty darn well. Something like the one-way lag between when another person starts typing (an event you can't know the start of) and when it arrives on your display just isn't going to be noticeable if you're both on at all reasonable network connections, imo.
> The issues with Emacs, as you note, have to do with the low performance of various language modes rather than the rendering architecture of the editor. (Though I guess maybe there is some missed opportunity for parallelism imposed by Emacs's architecture?)
Yes and yes. (Also, it's a contributor to the bugginess of emacs -- dynamically typed and (often) dynamically scoped(!) lisp code is not conducive to correctness. It does, however, lend itself amazingly well to extensibility/flexibility.)
But anyway, plain emacs w/ no language modes is indeed fast and fairly bug-free, but... what's the point?
The goal is to be able to have a full-featured editor that remains fast and bug-free. :-)
I've personally noticed an ever increasing responsiveness throughout Emacs since switching to the native compilation versions in the early pandemic. I have an Ethernet connection with a 2ms ping for my typical remote Emacs sessions and I feel the difference when I occasionally have to work from faraway locations or through fluctuating WiFi networks that add another 30ms on top. Some colleagues use mosh as their terminal to improve the experience in the presence of lag.
I guess not everything would need to lag. As long as you and your peers are not working on the exact same row, a slight network delay might not matter too much, especially if your own typing doesn't lag?
Yes, exactly. You should be able to show your own input immediately. For others input, you can't tell the difference between human lag and network lag, provided the network lag is reasonably low (which it will be unless you have a very bad or super-high-latency network).
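The part that makes "show your own input immediately" work is reconciling concurrent edits when the remote ones arrive. In its very simplest form (operational transformation for concurrent single-character inserts), that's just shifting positions; this toy sketch ignores ties and deletes, which is where the real complexity lives:

```rust
// Toy OT: if the other side's insert landed at or before ours, our
// pending insert's position shifts right by one. Applying both edits
// in either order then yields the same document on both sides.
fn transformed(pos: usize, other_pos: usize) -> usize {
    if other_pos <= pos { pos + 1 } else { pos }
}

fn apply_insert(doc: &str, pos: usize, c: char) -> String {
    let mut s: Vec<char> = doc.chars().collect();
    s.insert(pos, c);
    s.into_iter().collect()
}
```

So each peer applies its own edit instantly, and transforms the other's edit on arrival -- both converge without either side ever waiting on the network to echo its own keystrokes.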
The meta on this is interesting to me. Coding is speaking. Writing code is communicating with the next developer about what you are trying to get the machine to do. In the case of collaborative editing, it is immediately speaking to one or more developers about what you are trying to get the machine to do.
Tools that facilitate this communication are welcome.
Love me some Atom, but I had to switch to VSCode for various reasons. I will definitely test Zed! Cool to see them combine lots of interesting innovations to make a new thing
I'm ready to pay for my tools or get $dayjob to pay for them (I already pay for some things and have them pay for Jetbrains Toolbox bundle) and there are many of us.
If a product is significantly better and not significantly more expensive I'm in.
In fact I actively want someone to pay someone to take the newest Firefox and start publishing builds that fix the things Mozilla broke.
I can't wait for a Microsoft team to take the best ideas of Zed (Atom) and its GPUI framework (Electron), and beat them on their own field by adding a twist (LSP) and having better performance (VSCode).
Note: this is a joke, I wish all the best to Zed and its team !
Why do they want to use relatively expensive immediate-mode rendering instead of simple retained-mode rendering? It's not like a code editor requires a lot of interaction.
What's the meaning of "immediate mode" in their description? The only context (no pun intended) in which I know this phrase is the old, deprecated way of writing OpenGL vertices.
Likely they mean an immediate mode UI [1], as opposed to a traditional retained mode UI. Immediate mode UIs have gained a lot of popularity in the games/graphics world over the past few years.
I did initially wonder what could justify such ugly font rendering and a complete lack of screenshots, but this is a serious team with serious ideas so it'll be intriguing to see what they come up with.
For better or worse, Microsoft is all in on the JS ecosystem. I could sooner see TypeScript being rewritten in Rust: at least from a performance perspective, there’s a lot more room to improve there. But I think it’s safe to say that’s not gonna happen.
VSCode is much larger, and much more complex. I just can’t imagine it happening ever.
No I can't see it happening either. But why spend a lot of time reinventing a wheel when everything about VSCode is actually pretty good, except its performance. But I'm a Vim user anyways so what do I care...
> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.