The background for this is that most existing UI toolkits are a poor fit for Rust.
Rust doesn't like having shared mutable state, but event-based UIs have a global event loop and can mutate anything at any time.
Rust works best with strictly tree-shaped data structures, but event handlers turn trees of widgets into arbitrary webs with possibility of circular references.
Rust prefers everything thread-safe, but many UI toolkits can't even be touched from a "non-main" thread.
Rust's object-oriented programming features are pretty shallow, and Rust doesn't have inheritance. That makes it awkward to model most toolkits that have deep hierarchies with a base View type and a dozen Button subclasses.
So instead of retrofitting mutable single-threaded OOP to a functional multi-threaded language, there's a quest to find another approach for UIs. This change of approach has worked for games. Rust wasn't nice for "class Player extends Entity" design, but turned out to be a great fit for the ECS pattern.
The thing is, there do exist UI frameworks that prefer composition over inheritance and strictly tree-shaped components where data only flows one way: they're all the rage on the web.
And that's great, because they give plenty of useful insight into things that work and don't work when designing UI frameworks this way.
To be fair, it's not exactly like nobody realized this. More than one Rust desktop UI framework is explicitly React inspired, and it's not like FRP-based UI was non-existent prior to being popularized in web frameworks. Still, all the same... I suspect the best answers for how to do good UI in Rust are not far away from this paradigm.
So the question is, why are we wrangling our UI approach to match the strict nature of software languages?
Isn't that kind of upside down?
Wouldn't the obvious lack of fit hint at perhaps some kind of fundamental impedance mismatch?
While most other techs are not a perfect fit, Rust is a 'worse fit' for the kinds of things we want to do in UI. The 'design objectives' of UI for the most part just do not apply at all.
Even the 'stateless / unidirectional' tree patterns seem a bit inverted: I think the slight advantage of thinking about hierarchy in a different way is negated by the hurdles imposed when it does happen.
Dart is a good language for UI, I suggest better than Rust in almost every way. The 'advantages' of Rust may present themselves at lower layers.
And both the unidirectional and even 'stateless' (i.e. Flutter) are mostly unnecessary. We'd be better served by some fairly basic conventions.
I will admit I 'learned' something by using the fairly strict hierarchy in Flutter - and am better for it - but I still want my 'statefulness' tree back.
I'm pretty sure the mismatch is just that the major ui frameworks were designed for oop languages, not that UI is inherently oo. As the gp said, react type frameworks play fine with languages that have no inheritance. It's just a matter of a few more coming along with the same paradigm(s).
> I’m pretty sure the mismatch is just that the major ui frameworks were designed for oop languages, not that UI is inherently oo
I’m pretty sure you are right. It seems to me that both FRP and ECS models of UI are at least as good as OO models, and a lot more adaptable to Rust than conventional OO models. But GUIs and OOP grew up together and have been deeply wedded, so non-OO UI just isn’t how people are used to thinking, for the most part.
I suspect the mismatch is actually due to the borrow checker's inability to do various common abstractions such as delegates and observers.
These patterns are quite useful, especially in stateful situations such as GUI. They help decouple observer from observable, and can give an architecture more flexibility in a lot of ways. Alas, the borrow checker isn't very amenable to that sort of style.
The borrow checker's inability to do those abstractions is evidence of the greater ontological mismatch ... which is my point.
UIs are 'inherently' trees, data that needs to be mutated, and at least some degree of self- and circular referencing. All of that is a bit outside Rust's scope.
More pragmatically, UI code tends to be 'wide and flat' - i.e. a ton of 'little things' and screens, mostly independent from one another, with visual elements.
You really want to be able to code fast, compile fast, 'try things' for validation. Performance is usually not a primary concern. That's contrary to Rust ethos where they trade all of that away for performance. So, it's a bit upside down in my view.
Rust UIs I can see for automotive, critical systems etc. - but even then - I suggest the UI should be disassociated from the 'critical' part.
So that's a good point, but I'm not quite sure OO is the issue.
Also - while you hint it's entirely unnecessary, I do actually believe OO is a natural match for UI because of the overlap between components.
I think it's the fact in UI you have a big chunk of state, with references all over, and sometimes doing sort of 'circular references' ... Rust code starts to have a lot of the various Vec Rc Cell Box all over the place, and then honestly 'what's the point of rust'?
And doing 'observers and events' feels unnatural, as well as the rest of the threading/stateful issues related to GUI.
UI is also about a lot of small details, and so the typing->compile->quick check is something that happens possibly more in UI - you often have to 'see' the results; you can't just run the test.
I like Rust, but it is a bit of 'Chinese Finger Trap' for smart people, and applying it to UI I think is basically upside down. I mean - fun to reason about and experiment with - but basically just wrong.
I wonder if someone will come up with a language that is the UI version of Rust, with sufficient checking etc.. That would be cool, though I can't even fathom what that might be.
> Rust code starts to have a lot of the various Vec Rc Cell Box all over the place
Well, C and C++ also have all of these, with the former having them in a convention-only manner, where you have to learn what the author intended for every project. These are generic paradigms that are useful and often needed for low-level programs, and Rust makes their usage safe.
What I do find problematic regarding rust is that people want to apply it to everything, when it is a low level language, no matter how convenient it can look on the surface. Managed languages are more than fine for the vast majority of problems, they should not be replaced.
One of the reasons why is that the notion of a GUI is ill-specified. Most UI frameworks, including the excellent ones like Qt, have corner cases where they fail. It's just that we've gotten to the point where coders kind of have it in their bag of tricks to avoid these situations by defensive programming.
Until we have a proper model of graphical UI, it'll be hard to make any such language.
Think about the last time you had a weird UI bug (screen not redrawing, unexpected behavior, not able to see something that should be in scroll view, not able to scroll to something that is out of bounds, etc). Now think of the last time this happened in a terminal or REPL.
We have no model of user interface that adequately covers the corner cases. We do have a model for CLIs. That's why guis tend not to work.
Yes, that's a good point, but I'm not sure if the bugs in Qt etc. are inherent, i.e. we probably can fix 'all of them'. You're right to point out that 'a' model should be a reference, though.
Honestly though, I fear that in the attempt to 'find' a model, this FRP stuff gets forced upon us for academic reasons of architectural purity. Despite Qt's bugs, it mostly works just fine.
We do not have such a flawless model for CLIs. Command-line apps have their corner cases as well, and they fail too.
Many CLI programs were designed on the assumption that the display size never changes, like 80x25 characters.
That's not true anymore: users resize their windows, and the text in the terminal may or may not stay good after that.
Terminals have a GUI state: cursor position, and current text attributes. That state may cause issues, especially when a program quits suddenly, like a crash.
> Terminals have a GUI state: cursor position, and current text attributes. That state may cause issues, especially when a program quits suddenly, like a crash.
I have never in my life witnessed a terminal crash so hard that I couldn't fix it and get back all my other state. In 100% of cases, just typing Ctrl+C, Ctrl+Z, etc, followed by 'r', 'e', 's', 'e', 't', enter seems to have always done the trick.
Isn't it done already? Kotlin plus a simpler OOP toolkit like javafx avoids all those problems. You have a GC to handle circular references, you have a decently strong type system, coroutines etc.
It's really not inheritance that's the issue. The issue is that in Rust it's difficult to have data flow between objects that goes in multiple opposing directions, i.e., a graph instead of a directed tree.
Inheritance is a way of doing that, but not the only one. Rust is capable of every OOP design pattern except ones that require this kind of dataflow inversion which become cumbersome.
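To make that concrete, here is a minimal hypothetical sketch (not from any particular framework) of what a back edge in the widget tree costs you: the child notifying its parent already forces Rc/Weak/RefCell plumbing.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Hypothetical widgets: the child must notify the parent, so the tree grows a
// back edge, expressed with Weak to avoid a reference cycle leaking.
struct Parent {
    children: Vec<Rc<RefCell<Child>>>,
    clicks: u32,
}

struct Child {
    parent: Weak<RefCell<Parent>>,
}

impl Child {
    fn on_click(&self) {
        // Every traversal of the back edge needs an upgrade() plus a borrow_mut().
        if let Some(parent) = self.parent.upgrade() {
            parent.borrow_mut().clicks += 1;
        }
    }
}

fn main() {
    let parent = Rc::new(RefCell::new(Parent { children: vec![], clicks: 0 }));
    let child = Rc::new(RefCell::new(Child { parent: Rc::downgrade(&parent) }));
    parent.borrow_mut().children.push(child.clone());

    child.borrow().on_click();
    assert_eq!(parent.borrow().clicks, 1);
}
```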
> In Cocoa, scroll position is part of the view's state, a mere property of the view. This is simple because the UI itself is stateful.
> In React, scroll position is typically not part of the state from which we project the view. Instead this state is attached to the projection itself (e.g. a HTML node) and we are dependent on the "Memoization Map" to preserve it. So this memoization is now required for the correct functioning of the app. The "pure function" abstraction is leaking.
I struggle to understand what you are saying here.
What do you mean when you say "this memoization is now required for the correct functioning of the app"? I agree that there can be some issues with scroll restoration in routing, but it doesn't seem like a big issue to me and certainly doesn't seem to reveal some fundamentally flawed architecture.
Haven't written React, but it relies on memoization to preserve hidden state (like text carets and item scrolling) not tracked by your application. If you change the tree structure of your GUI (add/remove/move elements), and don't give scrollable UI items a key if they move, React will delete the scrollable DOM element and create a new one scrolled to the top, losing the user's scroll progress. The same goes for text fields with horizontal (or vertical) scrolling and active cursor position, perhaps text field contents (unsure), etc.
Again, I am not sure if I am just misunderstanding something but that doesn't sound right to me.
>it relies on memoization to preserve hidden state
It relies on the DOM to keep that state. It's not hidden, you are just not forced to deal with it or think about it.
>If you change the tree structure [...] and don't give scrollable UI items a key if they move, React will delete the scrollable DOM element [...]
Yes, if you have an array of elements, you need to give them a unique identifier. This is a fundamental property of any diff-algorithm which the virtual DOM relies on. I don't think that is unreasonable, and IMO it is not something that adds significant cognitive load.
React definitely has quirks and especially optimizing for performance can be quite complex. But out of all the GUI frameworks I have tried (including Win32, WinForms and GTK as well as other web ones), React is the most intuitive to me.
> This is a fundamental property of any diff-algorithm which the virtual DOM relies on. I don't think that is unreasonable, and IMO it is not something that adds significant cognitive load.
I think it's a leaky abstraction nonetheless, regardless if it causes issues in practice. But if it's not a significant issue to users, that's fine; personally I'd prefer to work in UI frameworks that avoid redundant diffing work by design (outside of cases like lists where it's difficult to avoid).
> I think it's a leaky abstraction nonetheless, regardless if it causes issues in practice.
The abstraction is intentionally leaky to meet a very valuable design goal: React plays well with other libraries that are manipulating the DOM. This is a hard requirement for many businesses because libraries written without React in mind would often cost years to replicate.
So far as I can tell there's no reason someone couldn't apply the general approach to UIs React takes while also making it a less leaky abstraction. In a native scenario this makes a lot more sense because there are fewer components you might want to reuse than on the web.
If leaky abstractions are unavoidable, we're left with debates about which leaky abstractions are justifiable given the constraints and goals of the design.
I agree that this is an annoying issue. I don't think it's inherent in the FRP UI design, but I think it might very well be somewhat impractical to avoid, especially on the web. The fact is, the DOM already has sort-of 'hidden state' that can't be 'managed.' If you designed a UI framework from the ground up, without worrying about the web, you could avoid some of these pitfalls, for example by making the underlying UI nodes (equivalent to the DOM) completely stateless.
The scroll position, in this case, would just be props.
However, at some point a UI framework usually encapsulates some kind of state somewhere. I'm not sure if a design exists that 100% doesn't encapsulate any internal state. (Maybe immediate mode GUIs?) Though, removing hidden state in the underlying UI components would at least solve the problem of needing to care about underlying node identity for them. But, it wouldn't solve it for components that compose them. I think this is unavoidable in a system that allows state encapsulation.
I think you could make an FRP UI framework that in fact, does not have state encapsulation, but it would be a lot more cumbersome to use, because you'd have to "bubble" the underlying state of any component anywhere all the way up the tree to the root.
In theory, though, with React it is idiomatic to make the state flow back through a state store like Redux, then those state changes are reflected in the top-down re-rendering.
That's not a problem for Rust, as evidenced by the fact that it is actually the way several frameworks work in Rust today.
Swift is different from Rust, but it's more towards Rust's end of the spectrum. Sure, it has classes, but it's almost as if they're mostly there to aid in compatibility with Objective-C APIs like UIKit. SwiftUI builds UIs with value types and protocols like View.
I wonder if something like SwiftUI would work well for Rust.
Swift<>Rust similarity is only superficial in syntax. Swift has full OOP with inheritance, allows shared mutable state, and doesn't have compile-time thread safety. It has been built for Cocoa interoperability from the start.
The clunky `Rc<RefCell<Object>>` that Rust tries to get rid of from UI toolkits in Swift is just `Object`. Rust's undesirable case is the Swift's built-in default (that's not a criticism of Swift, ARC works great there, but these two languages have chosen very different trade-offs).
There's one "UI toolkit" that builds in top of immutable data structures, trees, and unidirectional data flow. It has also taken web UI development by storm. It's called React.
I wonder if React-like approaches are going to work for Rust UI toolkits. They could even wrap some native controls, much like React does with DOM controls, or React Native with Android controls.
React is nice for high-level work. However, if you're writing e.g. a rich text editor from scratch it is probably not a good fit from an efficiency viewpoint.
I recently tried https://github.com/yewstack/yews that looks very much like "React in Rust". It is web oriented through WASM compilation and integration with web technologies, Tauri can be used to bridge the gap and use it for desktop apps but it is probably an excessively complex and impure assembly for that goal.
I'm glad Rust* is forcing us out of a rut in UI toolkit design. All of these spell hard-to-debug problems.
* Yes, I know elm-derive UIs are also a thing and have laid a lot of ground work in this area.
btw I've had great experience 10 or so years ago doing composition-based UI design in GTK with Python. When Maemo switched from GTK to Qt, the requirements on inheritance were one of my points of frustration and I got away with creating a couple adapter classes to avoid most of it.
I wouldn't give Rust or Elm the credit for that, though. Although it was not first, React is really what put the declarative method for UI programming on the map. There's a solid argument that by usage numbers React is the most used UI framework today, and its children like Vue are high up there too.
Swift also has a similar problem and unlike Rust has a powerful backer, so SwiftUI is in many ways already a vision of what a Rust native UI framework would look like.
Swift is in a niche (I hope). Only useful in Apple's walled garden.
I recently spent two years with Swift: it is partly modern, but it is slow, obtuse, opaque, and frustratingly archaic in places (its threading model is just awful).
Rust is hard, but it is beautiful, and it is much more modern than Swift. Rust has an actual community (Swift has acres of astroturf). Rust does not wrap everything in a reference counted object.
If you are programming for iOS then Swift might be a good choice (but since it is not portable, you will regret it if you have success and want to port it - that is the voice of experience). For any other purpose avoid Swift. It has no reason to exist, we should let it go down with Apple wherever Apple is taking itself.
Rust is capable of eliminating unnecessary count increments and decrements for Rc fairly often. The compiler doesn't usually need specific knowledge -- if there is a +1 and then a -1 LLVM is smart enough to cancel that out on its own. I'd actually be curious to see examples where it doesn't.
I mean, how often are you cloning an Rc? It’s not like c++ where passing a shared_ptr to a function is going to copy it. Most of the time you’re going to move it.
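A tiny illustration of that point (generic Rust, nothing framework-specific): passing an Rc by value moves it with no refcount traffic, and only an explicit clone bumps the count.

```rust
use std::rc::Rc;

// Takes ownership: the Rc is moved in, no increment/decrement happens here.
fn consume(data: Rc<Vec<u8>>) {
    println!("len = {}", data.len());
}

fn main() {
    let data = Rc::new(vec![1u8, 2, 3]);

    // Explicit clone: the strong count goes from 1 to 2.
    let shared = Rc::clone(&data);

    // Move: `data` is handed over; the count stays at 2.
    consume(data);

    println!("still alive: {}", shared.len());
}
```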
> Rust works best with strictly tree-shaped data structures, but event handlers turn trees of widgets into arbitrary webs with possibility of circular references.
This is also the problem with closures. And why you can't do Haskell/Ocaml style functional programming in Rust.
Rust has the reputation of being fast, but if you force everything into a tree, you are making big compromises from the start.
In a lot of fields, it's more important to have a codebase that's easier to use and more flexible than a codebase that uses less memory. This is why people opt for GC and why it's such a big benefit.
It's way more than just a memory use problem. Garbage collection trashes the CPU cache, and unless you aggressively use value types you have extra layers of indirection everywhere that trash it even more. GC languages have a deserved reputation for worse performance.
I'm not really talking about memory requirements. And I don't see how a GC would make any difference in terms of ease of use or flexibility of a codebase.
It is a mindset about memory, but you have to have a mindset about memory anyway so it isn't exactly unique.
In a language like Rust, one has to carefully orchestrate the dance between Ship, &mut Ship, and &Ship... and Rc<Ship> and RefCell<Ship> and Cell<Ship> and Rc<RefCell<Ship>> and Box<Ship>. In the kind of programming most people do, that is harder than GC which doesn't have any of those distinctions. It's all just Ship.
If that isn't hard to you, it's probably because you've gotten so used to that mental overhead that you don't notice the drag any more, or because it's worth it for your case so you don't feel it. More power to you, but the rest of programmers don't want to have to deal with that, and don't need to. They'd rather get the feature done and go home to their families.
The point is that the mental overhead with memory must always be there. Why not let the language guide you?
Sure, they'll get the feature "done". And then they'll suffer for it later, adding up to much more work and a much less rewarding code base than if they'd done it properly in the first place.
It is important to have a code base that an inexperienced programmer can work on.
Keeping track of memory is one of the hardest parts of programming in languages like C, a bit easier in C++ (thanks RAII) but still memory leaks are rampant.
Rust improves in some aspects, but I do not think it is a language for inexperienced programmers.
So where garbage collection pauses are tolerable, and real time performance not an issue, save the money and use garbage collection.
I too hate garbage collectors, I want to be in control. But the business case is clear: garbage collected languages are a better choice in a lot, if not most, cases.
For most people working a career as a programmer, how much of their time in that career is going to be spent as an inexperienced beginner programmer?
I agree that Rust is a professional tool, designed for professional use rather than being optimized for inexperienced beginners the way Python is.
I just don't really see "inexperienced beginners can easily contribute productively to this code base" as something that would have been valuable for any of the professional work I've done over the past couple of decades of my career.
If you're an amateur, or if you're programming recreationally, or if you're writing some low-reliability low-impact throwaway code, sure, it's great to use happy-path-oriented languages. When you want to write something that people actually rely on to run a business, you should use professional-grade tools instead.
First of all, memory architecture is not thrown out the window with a GC, there are plenty of considerations that can be done in managed languages as well. Also, many managed languages have value types, allowing for basically every memory pattern you would use in a low-level language.
Second of all, a GC is the superior way of handling many, random-allocations with no patterns. An allocation in a modern GC will literally be an integer increment (not even atomic!), and every later step will happen on another thread in parallel, not slowing down the thread doing the work. No malloc implementation can beat it (unless it doesn’t care about freeing). For the rare case where an arena allocator is a better approach, the aforementioned managed languages with value types are there.
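Roughly what that fast path looks like, as a toy bump-allocator sketch (an illustration of the general idea, not any real collector's code):

```rust
// Toy nursery: allocation is a bounds check plus a cursor increment; freeing
// individual objects never happens, the whole region is reclaimed at once
// (a real generational GC would evacuate the survivors first).
struct Nursery {
    buf: Vec<u8>,
    cursor: usize,
}

impl Nursery {
    fn new(size: usize) -> Self {
        Nursery { buf: vec![0; size], cursor: 0 }
    }

    fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
        if self.cursor + size > self.buf.len() {
            return None; // out of space: a real GC would run a minor collection here
        }
        let start = self.cursor;
        self.cursor += size;
        Some(&mut self.buf[start..start + size])
    }
}

fn main() {
    let mut nursery = Nursery::new(1024);
    let first = nursery.alloc(64).is_some();
    let second = nursery.alloc(2048).is_some();
    println!("{first} {second}");
}
```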
Of course not, you can have an excellent memory architectures with a GC.
My point is that if you do that, well, then you've done the hard parts of living without a GC - so might as well ditch it altogether.
But the absolute vast majority of course doesn't have excellent memory architectures, so they'll suffer greatly from not being "burdened" with having to think about memory.
It is unfortunate that the GC debate is all about performance. The programmer ergonomics alone make the GC a worse choice. But then of course we still have GC tuning.
Most programs are not video decoders running a tiny amount of code over gigabytes of data; they contain quite a bit of code, and many parts of it run irregularly. By not "paying attention" to memory allocations in this vast number of cases, you get similar, or sometimes even better, performance, and safer and more correct software, faster. If it turns out to be a critical part, it is very easy to pay a bit more attention to the allocation story there.
So in like 90+% of cases, a GC is a huge boost to productivity, and this is proved by their extensive usage in the industry.
And in what world is a GC safer and more correct? And no, please don't argue that it is faster in any meaningful way.
> So in like 90+% of cases, a GC is a huge boost to productivity, and this is proved by their extensive usage in the industry.
It is not a boost in productivity. And the reason for why it has extensive use in the industry is because the sad state of programming languages have been that languages with GC are safe and high-level whereas languages without a GC are old and unsafe. That really has nothing to do with the GC though. Which is why I say that the GC has been the biggest mistake in software engineering.
Rust and Swift are beginning to change that in a very small way.
Rust actually solves the problem, although by taking a huge toll in the form of the borrow checker. It is a very good idea, but just see this thread, it is not applicable everywhere - which is okay. Rust is a low-level language made for low-level programs where absolute control over the execution is needed.
Swift chose a different approach, but their tradeoff was lower memory overhead vs performance. It is likely a worthwhile goal for mobile devices, but that is a niche. And by the way, for all practical purposes RC is a garbage collection algorithm, it just tracks dead links instead of live ones.
So there is one solution for correctness and safety without GC, and it comes with plenty of warts. How exactly is GC not a boost in productivity?
The borrow checker does a whole lot more than replacing the GC though so it is very weird to point at it and say that the lack of a GC leads to that.
Modern C++ works very well too, but it is of course a huge language with a lot of legacy that makes it unsuitable or undesirable for a lot of things. But again, completely orthogonal to the GC.
You claim that every other sentence but present nothing to stand on. The interview linked above shows some perspective if you care. You are not relieved from thinking about memory with a GC.
GC is a sensible response to C-style memory management, but not to any form of manual memory management.
The problem is that the options for non-GC languages are so poor, so the most reasonable language choice gives you a GC whether you want it or not. Hence my main point.
So it's not like you pick Go because it has a GC. But you might pick Go despite it having a GC.
> The borrow checker does a whole lot more than replacing the GC though so it is very weird to point at it and say that the lack of a GC leads to that.
Well, yes and no. Sure, it helps with data races (but not with race conditions in general), but foremost it is a tool that allows for correct compile-time memory management. Compile-time memory management is only possible in a subset of programs, so rust as well has to use (A)RC at times. This is okay when used sparingly, but atomic increases are very expensive on today’s hardware.
I am familiar with RAII, but that is the exact same thing what Rust enforces at compile time, with the exact same shortcomings, so I don’t see how is it an argument against GC.
Reference counting can cause long pauses as well - as I said, it is the same problem, just looked at from the other direction. If an object with many references to plenty of other objects dies, it can take quite a long time to deallocate; there is no free lunch. And then we haven't even talked about cycles, which need a similar background process to tracing GCs, without which it will leak.
While I am all for research into better ways to do RC, please look at the discussion - it is not at all clear that RC would be better, and even theoretically a tracing GC will win.
Yes and yes. Why on earth would "doesn't solve X" be an argument against the fact that it does Z + Y? Whereas a GC only does Z.
It was an alternative that does less and doesn't put as many restrictions on the programmer compared to the borrow checker, since you complained about Rust and equated that with not having a GC.
And it is deterministic.
>While I am all for research into better ways to do RC, please look at the discussion - it is not at all clear that RC would be better, and even theoretically a tracing GC will win.
Win what and in what sense?
How every GC language has problems with GC tuning is in my opinion a clear indicator that the GC lost. With no real benefits short term and huge downsides down the line - even if you never hit any limits.
The borrow checker doesn’t give a complete answer to memory management, as dynamic allocation patterns by definition can’t be done at compile time unless you can solve the halting problem.
> How every GC language has problems with GC tuning is in my opinion a clear indicator that the GC lost.
Citation required. It Just Works in like 99% of cases, and it’s not like there is any solution that covers every edge case. Just look at the thread I linked, there is an example of C++’s RC lingering for a long time after the effective work is done freeing many things.
Did I say it did? The operations are deterministic but dynamic memory is not. But that alone is quite important. Lucky for the non-GC crowd though, as the languages often give you a stronger control over what is allocated on the stack and not.
It is an axiom at this point. GC battles are not exactly unheard of on HN. Don't worry, it gets fixed in the upcoming release - has been said for decade after decade.
No, there is no solution that covers every case. But you pretty much always have to think about memory, so the appeal of using a GC when the only argument in its favor is that you don't have to think about memory is quite unclear.
Depends on how much attention one was paying during their data structures and algorithms lectures, and if the language also has support for value types, manual memory management and trailing lambdas or not.
Thank you for the context; this also explains why most UIs get broken / stuck at times, like spinners that never stop or buttons that stay disabled even though they should be enabled. Those kinds of issues happen every day even in Apple, Google and Meta products (a lot of economic power), so there must be a deeper structural cause, and I think it boils down to the UI frameworks used being inherently fragile, with built-in edge-case issues because of this.
I wouldn’t say it is inherent, I think it has more to do with state management - some state is inherent to the component itself, but the separation is problematic. All too often business logic gets stored in the component and vice versa.
I believe the solution for this is to have a shared context, batch changes and then run through all of them at once.
So when you e.g. mutate a variable, it doesn't actually change until the next frame, which you get by calling some sort of poll function. The poll function has mutable access to your entire widget tree, applies the updates, and then you can have more UI with shared access.
Rust is fast enough that I think performance won't be an issue.
yes there’s absolutely no need for UI to be multi-threaded. You can do stuff in the background but IMO multi-threaded UI is completely useless.
You have an event loop where you poll for input => propagate changes => commit changes => rerender UI. Normally you might have poll => propagate => rerender, but when you delay the changes you can represent everything through a shared reference, and when you commit them you have a single source which takes the one mutable reference Rust allows.
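A minimal sketch of that idea (the names and structure are made up for illustration, not taken from any existing framework): event handlers only need shared access to a command queue, and the commit step is the one place that holds the mutable reference to the widget tree.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical deferred-mutation loop: handlers record intent through a shared
// handle; commit() applies everything with the single &mut to the tree.
struct WidgetTree {
    counter: u32,
}

enum Command {
    Increment,
}

type Queue = Rc<RefCell<Vec<Command>>>;

// An event handler only needs shared access to queue a change.
fn on_click(queue: &Queue) {
    queue.borrow_mut().push(Command::Increment);
}

// The per-frame poll/commit step: the only place the tree is mutated.
fn commit(tree: &mut WidgetTree, queue: &Queue) {
    for cmd in queue.borrow_mut().drain(..) {
        match cmd {
            Command::Increment => tree.counter += 1,
        }
    }
}

fn main() {
    let mut tree = WidgetTree { counter: 0 };
    let queue: Queue = Rc::new(RefCell::new(Vec::new()));

    // poll input => handlers queue changes => commit => rerender
    on_click(&queue);
    on_click(&queue);
    commit(&mut tree, &queue);
    println!("counter = {}", tree.counter);
}
```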
UIs can spawn jobs in the background, but that's an explicit thing the programmer has to do, and evidently they forget all the time. It took Apple over a decade to mostly fix "beachballing" problems plaguing Finder.
Another problem that may be Cocoa-specific is that it can hook UI-changing observers to objects, and then these objects become unsafe to use in multi-threaded programs (including those background jobs you should be using!)
Yes, but the 'background threads' are usually not working on the UI.
So it's not rocket science to just coordinate access to the tree.
It'd be nice to have an architectural way of doing that, but frankly, just living with the idiom ("Don't Do This") works well, and some frameworks put in runtime checks, e.g. Qt will crash, I think, in some cases if you try to mess with the tree on the wrong thread.
Not really UI, but wasn’t one recentish game really performant due to it using Vulkan, but in a multi-threaded way? Not sure whether it would benefit traditional UIs though.
Intensive graphics rendering is something which does deserve the GPU and multiple cores. But most "UI" as we see it shouldn't be that intensive. UI drawing is usually on the GPU but stuff like handling inputs, updating the UI tree, should be fast enough on one thread.
Rust has no problem expressing things that aren't thread safe. If the UI framework is single threaded all it takes is making its objects neither Send nor Sync, which will likely be taken care of automatically by them having fields that are not.
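For example (a toy sketch, not any particular toolkit's type): a widget handle that contains an Rc is automatically neither Send nor Sync, so the compiler refuses to let it cross threads.

```rust
use std::rc::Rc;

// Holding an Rc makes the whole type !Send and !Sync automatically;
// no unsafe code or marker traits are needed to opt out of thread safety.
struct Widget {
    shared_style: Rc<String>,
}

fn main() {
    let w = Widget { shared_style: Rc::new("dark".to_string()) };

    // This line would fail to compile: `Rc<String>` cannot be sent between threads.
    // std::thread::spawn(move || println!("{}", w.shared_style));

    println!("{}", w.shared_style); // fine on the thread that owns it
}
```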
The ‘Scenic’ GUI library written in Elixir handles all these problems very elegantly, and Elixir is completely functional (no traditional OO, classes or objects)* and immutable.
*of course Elixir/erlang is actually OO done right (actor model) but that’s another story
I think the web really killed the idea of "native" widgets. I think the rise of web apps just got everyone used to the idea that controls are just going to look different everywhere. Even on macOS, practically every app I use has a very distinct look and set of widgets (vscode, photoshop, blender, spotify -- two of those are electron, but even the non-electron stuff doesn't look very much like a "mac" app anymore). And on Windows, Microsoft really killed it themselves by constantly updating their own apps with non-standard widgets that everyone else wanted to clone (I remember for the longest time it seemed like the only new things of note in every new version of MS Office were how the toolbars looked). I suppose it's not a terrible thing, but I do kinda miss the days of Windows 2000 and classic Mac OS where the platform had a (somewhat) uniform look and feel.
I think it's more a desire for branded UI rather than Web apps. Adobe wants their UIs to be instantly identifiable as Adobe, Slack wants to be instantly identifiable as Slack, MS Office as you said, etc. Personally, I doubt this benefits users (do you really need to market to people who already bought your product?) but I'm sure the logic is compelling to decision makers and it's cheaper to "design once, run everywhere". It also makes sense to developers that their app should look the same across OSes even though that "consistency" doesn't affect the 99% of users who only use one OS.
What's funny is that it pisses off a large number of those 1% of users who swap OSs frequently as well. So it isn't even for the benefit of that 1% either.
When operating a Mac I expect Mac-specific behaviors. When operating a Windows PC I expect Windows-specific behaviors. I LOATHE programs that decide to do away with OS-isms that I am accustomed to and require me to learn how to "do things their way" because they wanted cross-OS consistency. Sometimes this means conforming everything to a single platform which is partially evil but just as often it means conforming to _none of them_ which is pure evil.
But I understand this - as a web developer I've experienced pressure from Project Managers and the "decision makers" to make things consistent across browsers/devices even if doing so goes against how the users of said browsers/devices would expect things to operate - breaking user expectations and creating a worse UX for many users. I fight against it when I can but I'm not always successful.
Could you give an example which inconsistencies are the most offending to you?
I agree that some platform specific behavior is important. But only behavior that relates to OS, like window management, system dialogs, accessibility etc.
IMHO we are focusing too much on what does and doesn't look like platform UI. A big part of native widgets is also about branded UI. Arguing whether Apple or Adobe is the more important brand for the UI is kind of pointless.
Consistent UI across multiple platforms affects more than just people who are using multiple OSes. For example, a user watching a tutorial expects to be able to replicate the instructor's actions. Imagine the joy of discovering the in-app menus on their machine are different because the instructor was using the app on a different OS. Also, the greater cost of designing and developing widgets for different platforms means there is less budget left to design custom task-specific UIs with better UX.
For UI "inside" the app, we should be focusing on what is a best possible UX of human-computer interaction for a task at hand. Just blindly using native widgets for everything can actually lead to bad user experience. For example, I hate it when an app uses Win32 Color Dialog for color selection. It's a widget with a bad UX for 99% of use cases.
To me, as a user, UI with a good human-computer interaction experience is much more important than the use of branded platform-native widgets.
Games are the peak of this, since consoles lack anything resembling a filesystem, natural fits for that (like inventory management) tend to differ for every single game... and then they get ported back out to PCs of various flavors with zero adaptation for new peripherals. (Or old ones, as most of those consoles COULD in theory support a keyboard and mouse or touchpad input of some sort too.)
Consistency across OSes is for the benefit of devs, not users. The more you can reduce platform differences, the easier it is to maintain multiplat code, you don't need a specific platform to repro issues, etc. It's the same logic as the trend toward static linking.
But this kind of branding wasn't as prevalent before web apps invaded the desktop. Now the users are pretty much conditioned to expect every app to do its own thing.
> But this kind of branding wasn't as prevalent before web apps invaded the desktop.
This is ahistorical. App branding now doesn’t hold a candle to the late-90’s mania for every app having its own skin engine. App-vs-platform UI deviation is a lot tamer nowadays.
I remember the late 90s - early 00s very well. I remember that such "branded" apps existed, true. I also remember being annoyed enough at this that I always used equally functional but properly styled alternatives - which is to say, there were such alternatives.
> App branding now doesn’t hold a candle to the late-90’s mania for every app having its own skin engine.
I dunno, I would say the 90s (well, for Windows and MacOS, DOS and the other no-standard-GUI OS’s that were still very much alive in at least the early part of that period are a different story) were pretty much the low point in terms of degree of UI branding for “professional” apps.
I'm the main developer of the AccessKit [1] project mentioned in this post. AMA. To preemptively answer one expected question, I know the project has been inactive for a while; I'm back to work on it in earnest starting this month.
This is a really interesting project. Accessibility support is near and dear to my heart.
The readme mentions a macOS adapter prototype, but I don't see that directory. Does it exist in a different branch? I'd love to check it out – I'm pretty experienced with the Mac's AX implementation, but have basically no experience with AX on Windows...
Oh wow, the readme is even more out of date than I realized. I yanked the Mac adapter prototype from the main branch last year. You can find it if you dig through history. I'll bring it back and continue working on it later this year; it's already on my schedule of work to do this year.
> On macOS ... going forward, it’s likely that new capabilities and evolutions of the design language will be provided for the latter [SwiftUI] but not the former [AppKit].
I think this is vastly over-stating the technical churn of UI development on macOS.
While we're definitely seeing new UI toolkits introduced that are Swift-only (like the new Charts.framework[0]), nearly every Mac app Apple ships is written using AppKit. For example, I just verified (`otool -l PATH_TO_APP_BINARY`) that Mail, App Store, Notes, Music, Xcode[1], and Photos all link against AppKit, and none link against SwiftUI (on macOS 12.4 21F79, which I'm running).
There is, in my opinion, approximately zero chance that Apple either:
(A) rewrites all of the UI in all of their apps, replacing AppKit with SwiftUI, in even the medium-term, or
(B) starts treating AppKit apps as second-class citizens by introducing a new design language only available from SwiftUI.
Yes, we're going to keep seeing cool new widgets and features which are only available from Swift. No, the platform's design language is not going change in a way that makes AppKit apps obsolete.
[1]I bet Xcode links SwiftUI somewhere for IDE integration, but I'm specifically referring to the UI implementation, parts of which have been in development since, like, NeXTSTEP and are certainly built using Objective-C & AppKit.
On the flip-side, it's worth keeping in mind where the puck is heading. Every "new" macOS app from Apple (eg, Shortcuts, System Settings, and even recent refreshes of old apps) has been written in either SwiftUI or Catalyst. A bunch of these still link to AppKit here and there, but it seems pretty clear that no new development is breaking ground with AppKit.
Similar to the Carbon/Cocoa split, I'm sure we'll see AppKit supported well into the future, especially since we just passed the latest major architecture transition. But that doesn't mean AppKit isn't a dead end.
That's totally fair, but I think the point that "every 'new' macOS app from Apple ... has been written in either SwiftUI or Catalyst" isn't really surprising. New features are probably going to rely more on newer technologies – especially in places like Shortcuts & System Settings, where the legacy approaches had non-trivial security implications. And, at least so far, we've only seen Catalyst apps from Apple when either:
(A) an app was being ported from iOS because it didn't already have a native Mac counterpart (e.g., TV, News, Podcasts), or
(B) the Mac version was significantly behind its iOS counterpart from a feature perspective (e.g., Messages).
I had written a diatribe earlier about how I don't see this as equivalent to the Cocoa/Carbon situation, but deleted it because I'm being long-winded enough already. It boiled down to the fact that Carbon existed specifically to help port apps from a legacy OS to what Apple was loudly declaring to be the future: OS X. Carbon was a way for apps to get off the sinking ship that was the "classic" MacOS. I mean, they had a funeral for it! https://i.imgur.com/KjFh63u.jpg
20 years from now, will Apple's macOS apps contain more Swift+SwiftUI than Objective-C+AppKit? Probably! But I'm not worried about AppKit being a "dead end" until they rewrite Mail, Keynote, and Xcode – because they have significantly more invested in those apps than I have in my AppKit apps.
(For the record, I love SwiftUI, and use it whenever I can! I just have no expectations that AppKit is going away any time soon.)
I would remind you that Microsoft disowned MFC and aggressively pushed all developers off of win32api once .net was released. Then they went about rewriting visual studio in .net and possibly the office suite as well. I hope Apple doesn’t attempt this because, in my opinion, I don’t think Microsoft’s products benefited from that churn.
Apple themselves aren't particularly quick at adopting the guidance that they give to developers. iTunes was Carbon well after Apple was telling developers to stop using it.
Apple clearly stated at WWDC 2022, that the best days of Objective-C and AppKit/UIKit are behind them, they even had a slide for it in case there were any doubts.
Nope! Both are fully native macOS apps, at least on the version of Monterey that I'm running.
You can examine which frameworks a binary links by running `otool -l /System/Applications/Photos.app/Contents/MacOS/Photos`. Traditional macOS Cocoa apps will link AppKit, while Catalyst apps will link UIKit.
For example, if you run that command with a Catalyst app (e.g., Messages or Maps), you'll see that those apps link against `/System/iOSSupport/System/Library/Frameworks/UIKit.framework/Versions/A/UIKit`, and AppKit is nowhere to be found.
> The background for this is that most existing UI toolkits are a poor fit for Rust.
Having been in the GUI business for 30+ years, I have to admit that the above is true. And not just for Rust; C/C++ are there too.
Real (a.k.a. practical) GUIs are multi-paradigm entities. The same GUI implementation may have React-like reactive widgets embedded in a purely declarative DOM/widget tree, with elements of immediate-mode graphics on top of or inside that.
It is just that GUI reflects the complexity of real life - you cannot say that the sickle and hammer are the best tools for everything...
Back to Rust... As a language behind a UI, Rust is the worst language imaginable. That's primarily due to its strictness.
The object ownership graph in a GUI is usually quite complex, yet it is dynamic and frequently contains loops. This situation is best handled by GCable languages. On-click-here-highlight-the-thing-there-and-remove-that-one.
Also, each part of a UI declaration needs its own DSL: for UI structure definition, the style system, and the logic behind the UI.
Before arriving with Sciter [1] I've tried [2] many things for UI: C++, D, Java, etc.
Conclusion: HTML/CSS/JS is the best (most flexible and multi-paradigm) solution that we've got so far. Not perfect of course, but good enough. And considering the existence of Sciter, it can be fast and lightweight.
Practical example: RustDesk (https://github.com/rustdesk/ - remote desktop access) is using Sciter for UI layer and Rust for app logic layer. So it is using proper tool for each task.
Rust can be good for implementing a UI system, but as a language-behind-UI it is bad - its advantages (e.g. strictness) directly translate to disadvantages in that area.
In the end, the main idea behind Rust's initial design was to be the language with which a browser is implemented.
JavaScript (in its ES2020 specification) is really good enough as a UI automation language. If not JS, then TS.
Just in case, in Sciter I've added native JSX to JS. JSX is an ergonomic way of defining tree-like literals that are used for UI DOM population and patching.
I've been using egui->rend3->winit->wgpu->vulkan recently, with the goal of a program that will run on Windows, Mac, and Linux. It all mostly works. There's some dirty laundry revolving around full screen modes and window depth order, but it's not too bad.
egui, an immediate mode GUI on top of winit, is interesting. It works OK, but only does part of the job. It displays the GUI widgets, but doing something with them is your problem. Usually, you need each widget to have some persistent state. Managing that state is the user's problem. So is generating, queuing, and distributing events to and from the GUI elements. There's also something strange which causes some scrolled windows to vibrate between two states.
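To make the "persistent state is your problem" point concrete, here is a rough sketch of the pattern using egui's basic widget calls (the surrounding eframe/event-loop plumbing is elided, and signatures may differ slightly between versions):

```rust
// In immediate mode the widgets are rebuilt from scratch every frame, so any
// state that must persist (text contents, counters, selections) lives in a
// struct owned by the application, not in the widgets.
struct AppState {
    name: String,
    clicks: u32,
}

// Called once per frame with mutable access to the application's own state.
fn show(ui: &mut egui::Ui, state: &mut AppState) {
    ui.text_edit_singleline(&mut state.name);
    if ui.button("Greet").clicked() {
        state.clicks += 1;
    }
    ui.label(format!("Hello {} ({} clicks)", state.name, state.clicks));
}
```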
egui's default widgets are not very good looking. Light grey text on a dark grey background, super-thin scroll bars, that kind of thing. The aesthetics need work.
Overall, though, not bad. This stuff just needs to be used more. It has not had enough attention and polishing.
It's impressive that most of this works cross-platform, even cross-compiled. You don't even need a Windows machine to develop for a Windows target.
I just happened to build a little project using Slint (https://slint-ui.com/) last week and found it fairly pleasant, even though it's got some rough edges still. I liked it because I was able to cross-compile to Windows from my Mac and from my Linux based CI with extreme ease ("cargo build --release --target x86_64-pc-windows-gnu"). It supposedly has some sort of native widgets available if you have Qt installed, but I haven't pursued that yet.
It has a little meta language for describing the UI that gets compiled when you compile the code and that was nice because it catches type and syntax errors at compile time.
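For flavor, a tiny sketch of what that looks like through the `slint::slint!` macro (assuming a recent Slint release; the markup syntax differed in older versions). The point is that the UI description is checked when cargo builds the crate:

```rust
// The UI is described in Slint's markup and compiled together with the Rust code,
// so a typo in the markup is a compile-time error rather than a runtime one.
slint::slint! {
    export component HelloWindow inherits Window {
        Text { text: "Hello from Slint"; }
    }
}

fn main() {
    HelloWindow::new().unwrap().run().unwrap();
}
```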
I almost went with Tauri, but the cross-compile story didn't seem quite as good compared to Slint. Tauri does look nice though—web UIs are more flexible than Slint at this point. But for my simple little tool, Slint really hit the spot.
I like how the ecosystem around Kotlin has approached the problem.
1. The language itself has a way of introducing compiler plugins that lets you express your own tree structures in a succinct syntax (comparable to macros in Rust, I guess)
2. Then they figured out most UIs are trees and came up with a Flutter-like UI tree declaration API with react-like re-renders. Thus Android Compose was born! This went mainstream because most android devs fell in love with this new way of writing GUI
3. Like flutter, they are slowly exposing Skia (underlying rendering engine) APIs as kotlin wrappers - I guess the project's name is Skiko
4. Now with all these pieces they are building a full-fledged UI toolkit that renders over every platform (including web) - Jetpack Compose
> Instead of trying to decide whether a GUI toolkit is native or not, I recommend asking a set of more precise questions:
>
> * Does text look like the platform native text?
>
> * To what extent does the app support preferences set at the system level?
>
> * What subset of expected text control functionality is provided?
>
> * Complex text rendering including BiDi
>
> * Support for keyboard layouts including “dead key” for alphabetic scripts
>
> * Keyboard shortcuts according to platform human interface guidelines
>
> * Input Method Editor
>
> * Color emoji rendering and use of platform emoji picker
>
> * Copy-paste (clipboard)
>
> * Drag and drop
>
> * Spelling and grammar correction
>
> * Accessibility (including reading the text aloud)
>
> If a toolkit does well on all these axes, then I don’t think it much matters it’s built with “native” technology; that’s basically internal details. That said, it’s also very hard to hit all these points if there is a huge stack of non-native abstractions in the way.
This is it! Browsers handle all of this. It's a ton of work. Many many gui kits start with just ASCII and really don't realize how deep the rabbit hole is for making all of this work.
Raphlinus, great post as usual; thanks for sharing your knowledge, it's always an interesting read.
I'm loosely following the progress of Slint UI (formerly SixtyFPS); is there anything interesting in their approach, or does it strictly match one of your examples?
I don't follow slint as closely as maybe I should. I'm definitely happy there's a real product out there, and would be happy to work with them on common infrastructure, but we haven't had much interaction (yet).
The elephant in the room for big CAD/CAM apps is dockable widgets, where (for me personally) the "standard" is how Visual Studio does it.
Even Qt, with its built-in widgets, is not enough (e.g. no multiple widgets/windows in a non-main window), but there are some extensions/libraries/hacks to get around that.
Recently - a few years ago - imgui got proper support for these too. And Flutter has one or more design docs around it - e.g. multi-window support, which might be the stepping stone to these.
Desktop apps are always going to be important in this space. Our game editor (Radiant) relies on this functionality when you use two, three or even more monitors.
Surprisingly enough, Qt's built-in docking system has not one but two actively-maintained open-source third-party alternatives, KDDockWidgets and Qt-Advanced-Docking-System. (QtitanDocking is a commercial third-party docking system.)
Regarding Electron, I don't understand why anyone would want to go this route. I work in finance, and desktop native apps (WinForms, WPF) have far more mature libraries, better performance, and faster time to market compared to web GUIs. With Electron, we first write web apps, then wrap them inside Electron, and then reinvent the wheel - this is so stupid.
WPF is now open source (MIT licensed [1]), and its XAML control templates provide _as data_ a full declarative description of how a native Windows control is supposed to look like (in multiple Windows themes like Aero for Win7, Aero2 for Win10, Luna + Royale for WinXP, and Classic for Win95 look and feel [2]).
This includes everything like the exact colors and gradient stops and animation timing and vector shapes and accessibility behavior etc. of buttons and scrollbars and everything. Example: [3]
I wonder what one could learn / achieve trying to "port WPF to rust" / implement a XAML control template renderer in Rust. If you can "simply" parse and interpret those XAML files do you instantly get a native-like GUI that supports the exact look and feel of these different Windows themes? (on any OS!)
Somehow I think it is not realized how amazing that is!
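As a rough sketch of the very first step of such an experiment (using the roxmltree crate as an arbitrary choice, with a made-up template fragment), walking a control template is plain XML traversal; the real work would be interpreting the elements and attributes into a render tree:

```rust
// Assumes a dependency like: roxmltree = "0.19"
fn main() {
    // A made-up fragment shaped like a WPF ControlTemplate.
    let xaml = r#"
        <ControlTemplate TargetType="Button">
            <Border Background="LightGray" BorderThickness="1">
                <ContentPresenter HorizontalAlignment="Center"/>
            </Border>
        </ControlTemplate>"#;

    let doc = roxmltree::Document::parse(xaml).expect("well-formed markup");
    for node in doc.descendants().filter(|n| n.is_element()) {
        // Element names map to visual-tree node types; attributes carry brushes,
        // thicknesses, alignments, etc. that a renderer would interpret.
        let attrs: Vec<_> = node.attributes().map(|a| (a.name(), a.value())).collect();
        println!("{} {:?}", node.tag_name().name(), attrs);
    }
}
```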
I’ve had similar thoughts and starting writing my own rust XAML framework with exchangeable backends (backend implementations incomplete) but didn’t find much interest from the community despite how awesome XAML is at separating the UI from the toolkit.
You only support windows, electron is multiplatform. About time to market it depends on your team. I don’t think there is a big difference between C#/WPF and TypeScript/React. But the second option works on more platforms and you will find competent developers a lot more easily.
This seems to implicitly mean low-level GUIs, or GUI frameworks? The only mention of Tauri is 'tao the fork of winit used by'.
The article opens:
> A few times a week, someone asks on the #gui-and-ui channel on the Rust Discord, “what is the best UI toolkit for my application?” Unfortunately there is still no clear answer to this question.
I would say if by 'UI toolkit for my application' you just want a way to make a GUI app, not a game or some kind of highly native or specific interaction needs thing, just use Tauri. No idea why it's not a 'top contender', it has an order of magnitude more GH stars than Druid, for whatever that counts; I can only assume they mean lower-level toolkits. (No affiliation.)
I just think someone new to or unexposed to Rust is going to see this and think it's way harder than it needs to be, or than it is, just to do something basic.
It seems to me that, maybe because of the situation described in this article, Tauri already has become, or at least is quickly becoming, the clear answer to this question.
This article seems to be implicitly excluding Tauri from consideration, because the UI portion in Tauri is not "native" or even Rust-based. With Tauri, you write the whole app in Rust except the UI.
(Technically, you could use some Rust UI frontend tech, but I think 99% of devs using Tauri are using some web UI technology, such as one of the several excellent built-in vite-powered starting points (Svelte, Vue, Solid, React, Angular...) or something similar.)
This seems to me to be a pretty good solution for the next several years, at least, because implementing a good cross-platform native UI toolkit is very very hard — so hard that, arguably, nobody has ever done it.
That's fair. The main reason Tauri isn't higher on my radar is that if you're going to make an app based on the web technology stack, why not just use Electron? The tooling is mature, there's lots of knowledge, and you can use Rust modules (through wasm) easily enough. I suppose it depends on the actual use case, I'm sure there is a niche for Tauri, but it doesn't intersect use cases I'm interested in particularly.
AIUI, the major difference between the two is that Tauri uses the system’s WebView while Electron bundles a specific version of Chromium. That means Tauri will have a number of performance advantages at the cost of not being able to target a specific browser.
Still, the “why not use Electron?” answer is that Electron is much more hostile to the user…much larger download size, slower startup and increased memory usage. With Rust valuing zero-cost abstractions, Tauri comes a lot closer to that ethos than Electron.
If someone into or getting into rust asked me 'how can I make a cross-platform app like with Electron', I would definitely say 'Tauri'; not 'actually you can use Electron with rust modules (through wasm) easily enough'.
Unless perhaps they're also big into the JS ecosystem anyway. That 'mature tooling' you mention is npm/yarn/etc. stuff; Tauri's is cargo.
Isn't all the discussion about the top level UX a bit of a red herring?
Iced, Slint, Sycamore, Dioxus, Yew... they all have quite different but very workable API surfaces. Yes, three of those target the browser, but there's nothing stopping them from building on a lower-level Rust widget system instead. Even gtk-rs found a solid way to map a class structure to Rust extension traits.
Isn't the real problem in the weak fundamentals? Solid window and input handling, text rendering, layout and 2D drawing libraries, ...
I see all the UI attempts struggling with those, and either implementing their own solutions or battling the existing ones (like winit).
I feel like if all those pieces were in place decent solutions could emerge.
Fwiw, I've had a great time with rust qmetaobject for some (basic) GUIs. Qt is more polished than any non-concerted (or even concerted) effort is going to be in at least half a decade.
Also, do you think multi-channel signed distance fields are a good approach for GUI text rendering?
I understand, for games, it can support extreme zooming with small textures. But for GUI (for example a text editor), we only occasionally resize fonts.
I looked at some of the open source projects (mostly terminal emulators), they simply rasterize fonts onto a texture map using CPU. I wonder if signed distance field is really necessary.
I'm not a fan of distance fields for GUI text rendering. Their main advantage in games is super-easy integration into the rendering pipeline (they're basically just a texture and a simple shader), but the quality is not quite as good as standard font rendering. There are other issues, including fairly large texture RAM requirements for CJK fonts, and no easy way to do variable fonts. My main work these days is piet-gpu, which is intended to do 2D graphics (including font rendering) "the right way." In the meantime, using existing platform glyph rendering capabilities makes sense and will certainly give the best visual match to platform text.
The best GPU accelerated text rendering I know of is https://sluglibrary.com/. Not based on distance fields. Unfortunately not open source, though it is a one-man project. Maybe someday one of the big tech companies will pay him a whole ton of money to open source it.
The problem with slug is not that it isn't open source, it's that it's patent encumbered[1]. If it weren't for the patents, it'd be possible to implement an open source version.
For font rendering, there are a lot of existing techniques out there, but whichever is best depends on your requirements. There are no silver bullets, and slug certainly isn't one.
I will say though that most applications probably don't need hardware accelerated font rendering. A software renderer like Freetype2 is going to offer much higher quality results than anything like slug ever could, and with proper caching you can achieve good realtime performance. For a real world example of a text-heavy application that does this, see Lite-XL[2]
For most text rendering in the 2D UI use case, I'm not sure there is that much to solve anymore. Aggressive caching of rendered glyphs solves almost all the problems in practice. From what I can tell, Slug is targeted at games and 3D, where there are still interesting problems, but those aren't the 2D UI case.
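To make the "aggressive caching" idea concrete, here is a minimal sketch of a CPU glyph cache; the types (GlyphKey, GlyphBitmap, GlyphCache) are made up for illustration, and no real font library is involved.

```rust
use std::collections::HashMap;

// Key a rasterized glyph by its id and size (fixed-point so it stays hashable).
#[derive(Hash, PartialEq, Eq, Clone, Copy)]
struct GlyphKey {
    glyph_id: u32,
    size_px_x64: u32,
}

// 8-bit coverage mask, as a CPU rasterizer such as FreeType would produce.
struct GlyphBitmap {
    width: u32,
    height: u32,
    coverage: Vec<u8>,
}

struct GlyphCache {
    entries: HashMap<GlyphKey, GlyphBitmap>,
}

impl GlyphCache {
    // Rasterize once per (glyph, size); every later frame is a hash lookup.
    fn get_or_rasterize(
        &mut self,
        key: GlyphKey,
        rasterize: impl FnOnce() -> GlyphBitmap,
    ) -> &GlyphBitmap {
        self.entries.entry(key).or_insert_with(rasterize)
    }
}

fn main() {
    let mut cache = GlyphCache { entries: HashMap::new() };
    let key = GlyphKey { glyph_id: 65, size_px_x64: 14 * 64 };
    let bitmap = cache.get_or_rasterize(key, || GlyphBitmap {
        width: 8,
        height: 12,
        coverage: vec![0; 8 * 12],
    });
    println!("cached a {}x{} glyph ({} bytes)", bitmap.width, bitmap.height, bitmap.coverage.len());
}
```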
Even in 2-d, it is nice to have smooth zoom in/out. The only way to do this that I know of that doesn't compromise on quality is to re-render every frame, and the only way I know to do that fast is to use the gpu. I believe pathfinder can do it (correct me if I'm wrong). I know I'm working on something that can do it too.
Moving a bit farther from the strictly 'text' realm there are authoring tools for vector graphics, where you would like to be able to view the changes you make in real time. CPU can do it (since it's probably the only complex thing onscreen, and changes are likely localised anyway if you really need the extra cycles), but GPU can do it better.
Honestly, there are so few different glyphs on a typical page, and CPUs are so fast, that it usually isn't that hard to just rerender every glyph on CPU and aggressively reuse glyph images in order to hit 60 FPS during pinch zoom, at least for Latin text. CJK may be a different story, but I kind of doubt it.
In 2022, smooth font rendering during pinch zoom is yet another case in which we in the software field dropped the ball, but it didn't matter because hardware picked up the slack.
I wouldn't say it 'doesn't matter'. We get by, sure; but that doesn't mean the situation is ideal. On my laptop, if I try pinch-zooming, safari seems to do bilinear scaling, and firefox CPU usage will dramatically spike. The former is noticeably ugly; the latter is bad for battery life, and may stutter if work is being done in the background (laptops and phones don't yet have hundreds of cores).
I would bet that if you profile Firefox WebRender, very little of the time is actually spent rasterizing glyphs. It takes like 10 microseconds to rasterize a small glyph; it's really nothing. Source: this is what I spent a ton of time profiling in the past.
And Safari should just rerasterize glyphs while zooming. In 2022 there's no technical reason why it can't.
> In 2022, smooth font rendering during pinch zoom is yet another case in which we in the software field dropped the ball, but it didn't matter because hardware picked up the slack.
But just letting hardware pick up the slack hurts people stuck with older hardware, right?
Yeah, existing techniques work fine if you want to make something similar to existing systems. But it's a shame to write a whole new GUI library and still have the same old limitations. Zooming is underused in existing systems, probably related to the fact that it's hard to make it work well with commonly used techniques.
you mean what are some of the difficulties of implementing detachable tabs using existing libraries?
For example, they don't give you the correct mouse coordinates once your mouse cursor is outside a window. I heard this is impossible to do if the underlying system is Wayland.
On Windows, the graphics story is more or less OK. The OS includes Direct2D and DirectWrite user-facing APIs.
It’s now possible to implement an equivalent on all modern platforms which have a 3D GPU. An open-source example in C# is https://github.com/Const-me/Vrmac#vector-graphics-engine. That particular one requires GLES 3.1, but I’m sure (I did it before) it’s also possible to implement comparable stuff on top of GLES2.
Waiting for target devices to have TFlops of FP32 performance, and good support for compute shaders, is less than ideal. Many currently sold phones, tablets, laptops, and even some desktop PCs have limited GPGPU capabilities and/or performance. These currently sold devices are going to stay in use for at least a couple of years.
Rendering things on CPU is less than ideal, because many devices have high-rez displays yet relatively slow CPU and especially RAM.
Am I missing something? What’s the reason there are no cross-platform GPU-first 2D graphics libraries, despite GLES2 (or better equivalents) being universally available now, and having been for years?
Getting sufficient antialiasing quality for 2D graphics is difficult on GPUs. https://github.com/memononen/nanovg accomplishes this with GL2/GLES2 level hardware for most of the stuff one would want to render as part of a GUI. My project https://github.com/styluslabs/nanovgXC supports rendering arbitrary paths with exact coverage antialiasing, but requires GLES3.1 or GL4 level hardware for reasonable performance.
> Waiting for target devices to have TFlops of FP32 performance, and good support for compute shaders, is less than ideal. Many currently sold phones, tablets, laptops, and even some desktop PCs have limited GPGPU capabilities and/or performance. These currently sold devices are going to stay in use for at least a couple of years.
I'm reminded of how Windows Presentation Foundation relied on a level of GPU-based 3D acceleration that wasn't yet universally available on non-gamer PCs when Vista came out. This affected my brother when he was in seminary and tried to run the WPF-based Logos 4 Bible software on a budget laptop bought around 2007 (edit: actually 2008 IIRC). By 2011 it got bad enough that I just bought him a new laptop. And he wasn't the only one affected by this; there were others talking about it on the forum for that application [1]. This anecdote has stuck with me as a cautionary tale about us developers failing to put our users first in our technology choices (though I admit it hasn't actually stopped me from using Electron when the pressure to ship is on).
Huh? That library exists, it is called Skia, it is the premier 2D GPU rendering library in the world (it targets other backends too) and it powers the Android UI, Chrome, Firefox and a ton of other things. See also: https://skia.org/
Skia is not GPU first. It was initially designed for CPU-only rendering, 3D GPU support was an afterthought. For that reason, the quality of that support ain’t great. AFAIK it only uses 3D GPU to compose layers.
No, I'm pretty sure it rasterizes using OpenGL ES, too.
I don't know the code well enough to find good proof, but there's a lot of GLES and I think some DX calls in there. More than if it was just composing.
> What subset of expected text control functionality is provided?
Missing from this otherwise great list is basic system-standard text navigation and editing. I get so annoyed when basic stuff like ctrl+end or ctrl+v doesn't work like it should.
I think explicit might be better, because it goes beyond that. For example, pressing ctrl+v while having text highlighted should replace the text on Windows, but doesn't always with non-native editors.
edit: btw, great article. I've worked on a cross-platform application (Windows, Linux, OSX) which went from using wxWidgets to Qt. Quite painful either way, and while Qt was a fair bit better on OSX at the time we still had to use tons of ifdefs and per-platform configuration.
And while I've been a win32 GUI programmer for decades, I totally get why people reach for Electron or embedded web servers. It's a really hard problem space with lots of trade-offs to be made.
Small comment: It's possible to stitch together UI/video/3D without dealing with the compositor, if you use child windows instead (or in wayland terms, a subsurface). On win32 at least, it's a much simpler approach.
I could probably have written this in a clearer way. Child windows are like a proto-compositor in a way, just with more limitations. You can put video/3D/etc content in a child window, but there are serious tradeoffs. When it's in a scrolling container, either your scrolling implementation is built from child windows, or you're going to deal with less than perfect synchronization as you overlay a changing clip/translate for the embedded content with draws of the rest of your UI (this used to be a common problem in browsers). Modern compositor APIs have explicit transactions[1, 2] causing the various view changes to be applied at the same time. It's not that the problem can't be solved, it's that it's trickier than people imagine.
And of course a Wayland subsurface is using the compositor.
I am participating now in such a project, a 3D CAD-like application: it renders the 3D scene in Vulkan, with Sciter rendering the auxiliary UI on the same Vulkan surface. Sciter manages dockable widgets, property editors, the logical representation of the scene tree, menus, etc. on top of the 3D scene.
The article mentioned scrolling. Do you get smooth scrolling with that approach, when the child window is conceptually embedded into a scrollable view? I guess you should, because the traditional Windows controls are technically all windows (HWND), but I wonder.
I haven't looked closely at the latest rust gui projects, but from what I remember most of them seem to be focused on code driven ui layouts, which I think is going to be a non-starter for anyone doing serious application work. View hierarchies and layout rules are too big and too complex to be maintainable via hand coding. Even a moderately sized app can have hundreds of custom views or components that need constant refactoring. You really need something like xml or json or whatever to build out that tree structure and easily visualize it.
I think the industry at large is moving in the opposite direction, towards declarative layouts written in code. As an example, for decades, Apple's UI design tooling (Interface Builder) was serialized to XML, but SwiftUI uses a DSL so your interfaces are written in code. I haven't been a web developer professionally for a while, but I've dabbled in React, where you build UI components in JSX, which is an extension of JavaScript.
I don't see why Rust couldn't be successful with a similar approach.
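For what a code-driven declarative layout could look like in Rust, here is a purely hypothetical sketch; none of these types come from an existing crate. The point is only that the view tree can be an ordinary value built in code, in the spirit of SwiftUI or JSX.

```rust
// The whole view hierarchy is an ordinary Rust value built in code,
// which can then be inspected, diffed, or handed to a renderer.
#[derive(Debug)]
enum Widget {
    Label(String),
    Button { text: String },
    Column { spacing: f32, children: Vec<Widget> },
}

fn column(spacing: f32, children: Vec<Widget>) -> Widget {
    Widget::Column { spacing, children }
}

fn login_form() -> Widget {
    column(
        8.0,
        vec![
            Widget::Label("Username".into()),
            Widget::Label("Password".into()),
            Widget::Button { text: "Log in".into() },
        ],
    )
}

fn main() {
    println!("{:#?}", login_form());
}
```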
> No matter what people try to invent, it needs to do all Electron does but ways better.
Tauri strikes me as a good step up from Electron, and likely a better fit for Rust users (as you can code everything except for view in Rust), but I have just started playing with it.
I probably should have said "pure Rust" as you are mostly writing HTML/CSS, but yes, we could swap JS/TS there for Rust at least. It is just that for people like me who aren't gifted in HTML/CSS we need a UI library like MUI to help us out. Yew/Seed/etc. aren't mature enough to have a big ecosystem of mature UI libs (yet).
Bit of a hot take, but I basically expect that most new Rust GUI projects will be written in Tauri before long.
It works on multiple platforms, is flexible, makes it easier to find off-the-shelf pieces thanks to the web technology involved, and is lighter than Electron. It's a no-brainer for me (and others, I think) if I'm starting a new GUI desktop project.
Hopefully, not. From my personal experience, Electron and alike applications are slower, larger, and in general have less advanced UI than many Qt or GTK-based (or Windows UI) applications. Nothing can beat good old native GUI frameworks.
Note that I did not make performance claims relative to native!
The “use Qt/GTK/etc” suggestion is quite a common refrain on HN, and while I agree with it in principle, all these projects just have not achieved the DevEx that anyone is looking for.
Electron/Tauri and even flutter are easier, and that’s why I think they will win.
AFAIK the real problem is accessibility features which are lacking. The wasteful resource use will be essentially fixed as computers get faster/hold more data, and more devs put even minimal effort into resource usage.
Tauri uses WebView2 on windows, I believe, which should have the same text rendering as the Windows 11 start menu and Edge itself. You may be thinking of older IE-based webviews, which were used even well after "old edge" webviews were introduced because "old edge" webviews weren't supported on windows 7 or 8.
Is there something blocking Tauri from switching to WebView2 on Windows? It's based on Chromium, so I assume most of the legacy web view issues would be irrelevant.
IIRC, WebView2 is supported going back to Windows 7...
The comparison of React Native to toolkits like Java’s AWT is incorrect, given that React Native orchestrates the platform’s native UI toolkit, so the platform’s widgets will work similarly to other applications.
What’s actually wrong with Electron, or what makes Electron slow? I feel like every ‘UI frameworks suck’ post glosses over this, with no real indication of why.
Most complaints about Electron that I see are about memory usage. If you’re running on an old laptop with little RAM, running 3+ Electron apps can start to slow your entire system down.
There’s also a breed of users who are extremely sensitive to the “native feel” of a desktop app. For them, it’s a jarring experience to downgrade to something that is less snappy, but they expect a certain standard of snappiness.
Because every Electron app is basically its own Web browser with all of the app's code being JavaScript, rather than the app just being native code. Try running Chrome, Edge, and Safari all at the same time and look what that does to your computer's resource usage. Electron apps basically do the exact same thing.
I know this is going to be a controversial opinion considering how much everyone seems to love Rust, but does anyone else find Rust incredibly painful to work with, even for simple tasks? Like I'm no stranger to unmanaged languages, and to some extent I cut my teeth on C, but doing anything with Rust always feels like the most aggravating exercise in needless verbosity and the "Lombok problem" doesn't help either. (Funnily enough, both these reasons are why I don't like Java either). I don't want to sound too negative, Rust is a very promising language, but even seven(!) years later it still feels like a pre-alpha language.
I learned rust a few years ago, and almost all the code I’ve written in the last 18 months has been rust. Rust makes you deal with all the pain of learning it up front. Does that pain ever go away?
In my experience, it mostly does. I’m now significantly more productive in rust than C, and I feel much more pleased with the end result.
I still find it takes a lot more effort to write rust than javascript though. Both more thinking per line, and it often takes more lines of code. The resulting software is faster and more correct, but it’s not a uniform win. I still reach for javascript for “code as content” - like UI code.
Rust’s async story is awful. Pin is confusing. Async doesn’t play nice with other rust features (like traits). There are no async streams and it’s extremely difficult to code your own. Solutions to these problems (like GAT and TAIT) have been proposed, coded and discussed for 6 years or something but they still haven’t shipped. I’m quite frustrated with how slowly the language is moving to fix obvious problems. And it seems to be getting slower as more people try to “help” (by chiming in on GitHub).
But I really like rust-sans-async as a language for infrastructure code. The stuff I’ve built with it is bonkers fast, safe and correct. The crates ecosystem is fantastic. (Much higher quality than npm). And the community is smart and lovely. It’s not the most productive language but it fits the “better C” niche really well.
When I hear you say "Lombok problem", it makes me think you're writing getters and setters and generally treating the language as if it's Java. You might get along better if you try to unlearn some of those habits. There's nothing wrong with public fields or free functions; in fact I'd say most of the time the straightforward approach is preferred.
It's less about getters and setters specifically and more about how much Java's ecosystem relies on code generation through libraries like Lombok instead of just trying to fix the language. Likewise, Rust's answer to everything seems to be "just make a (proc) macro" instead of trying to extend the language. Proc macros do have their place, serde being a great example, but then there are crates like thiserror and bitflags which exist because Rust provides nothing in the language grammar to actually talk about errors and error handling patterns, or bitmasks (which is an incredibly common thing for a systems programming language). Sure you could write out the From<Error> impls instead of using thiserror, but most people aren't going to because Rust makes something that should be mundane incredibly annoying to write by hand.
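To illustrate the boilerplate being described, here is a hedged sketch of the hand-written From impls that thiserror's derive replaces. ConfigError and its variants are invented for the example; the commented-out derive shows the thiserror equivalent as I understand its documented syntax.

```rust
// The hand-written version: an error enum plus one From impl per source error,
// so that `?` can convert automatically. ConfigError is invented for this example.
#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    Parse(std::num::ParseIntError),
}

impl From<std::io::Error> for ConfigError {
    fn from(e: std::io::Error) -> Self {
        ConfigError::Io(e)
    }
}

impl From<std::num::ParseIntError> for ConfigError {
    fn from(e: std::num::ParseIntError) -> Self {
        ConfigError::Parse(e)
    }
}

fn read_port(path: &str) -> Result<u16, ConfigError> {
    // `?` works because of the From impls above.
    let text = std::fs::read_to_string(path)?;
    Ok(text.trim().parse()?)
}

fn main() {
    println!("{:?}", read_port("port.txt"));
}

// With the thiserror crate, the enum and both From impls collapse to roughly:
//
// #[derive(Debug, thiserror::Error)]
// enum ConfigError {
//     #[error("io error: {0}")]
//     Io(#[from] std::io::Error),
//     #[error("parse error: {0}")]
//     Parse(#[from] std::num::ParseIntError),
// }
```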
Side note, Rust best practice seems to favor getters and setters like Java, and several popular libraries I've seen don't expose struct fields directly, instead opting for set_x() and x() methods. Give it a few years and I'm sure Rust will have its own Lombok crate generating getters, setters, and constructors too.
I don't find that. I was quite familiar with C and C++ before learning Rust and I find the places where Rust requires explicitness are the places where C lets you glide by and inject hidden bugs into your code. When my Rust code finally compiles it has an extremely high percentage chance to _just work_, where when C compiles you still have another whole pass of "find all the segfaults lurking around".
I don’t love rust. I think rust has some fair critiques to be made of it.
It’s not, and never has, or (probably) will be the answer to all problems.
Don’t use it just because it’s popular; why are you trying to use it, and what are you using instead?
I’ll eat my hat if you can convince me that rust is more of a pain in the ass than c++. I think cpp is a stupid broken ecosystem, and the fact that rust went all in with a single unified package manager makes the comparison a non-event.
So… compare apples to apples right?
“pre-alpha” language? I don’t even know what you mean by this; but, it’s ok not to love it.
There are things I dislike about it too.
It is verbose.
I still use it though; it’s better than the alternatives for what I’m doing.
It was painful for me at first. Many struggles with the borrow checker. I dropped it for a year or two and came back, and, weirdly, my return met smooth sailing -- the borrow checker and I were a well-oiled cybernetic machine. It was like the lessons of my first time burrowed into my subconscious or something.
At this point, I kind of feel like, if you're writing safe C or C++, being mindful of where the data goes and why, you're pretty much writing Rust that compiles.
> At this point, I kind of feel like, if you're writing safe C or C++, being mindful of where the data goes and why, you're pretty much writing Rust that compiles.
Exactly. If you're using modern C++ and you actually know what's undefined behavior and what's not, you're doing Rust. The only difference is C++ compilers don't error out on the undefined behavior. Actually, it wouldn't be that hard to write a C++ compiler that errored out in this way. Unfortunately, it'd break all C++ programs today.
To be honest, usually the exact opposite. There are some things where I start writing in Rust and it turns out to be a bit of a pain, but this is unusual for the types of stuff I write. Typically this just means I need a crate to do the heavy lifting as I'm trying to do too much. I would say most things are solidly better and easier to write, and given the safety guarantees, often work even without tests (exception being complicated algorithms that I usually get wrong the first time). In summary, yes, I'm a bit of a fanboy, but there is a reason I am - Rust is pragmatic and helps me get things done in a way I have not found in other languages.
It is good for some use cases, but the hard fact of the matter is, it is not useful as a way for creating cross-platform scalable GUI apps.
I have heard all the arguments for using Rust in GUI apps, but realistically they seem too weak to convince a serious team of developers to choose it over the alternatives.
I really really hate Java, in parts due to the verbosity, and find Rust very pleasant to write. I'm not sure I would call Rust verbose at all, although you can look at code that gets verbose due to the expressiveness of the language and the desire for people to over-engineer things (versus a language like Golang that remains very readable at all times). But when I write my own code, I tend to avoid the complexity and write pretty clear (I would like to think) Rust code.
There's definitely a learning curve (I think everyone agrees with that one), which might be what you're feeling now, but that quickly goes away once you pick up some speed. I would say keep on learning Rust and the language will grow on you.
It depends very much on what type of software you are writing. Whether you are writing exploratory software that will get tossed a few days after, or a piece of software that will be used and modified continuously over several years. Whether it is a GUI like the one mentioned in this article, with requirements or architecture trends changing once in a while, or fundamental building blocks with very slow-changing requirements but high stakes regarding correctness and performance, like kernels or cryptography/security. There is no single tool that can handle all the problems, but Rust can be especially productive in that latter case, considering the time spent over the entire software life-cycle on maintaining and debugging. After some time, it is even possible to be more productive just in writing code itself than in Python (for me this is true for command-line client tools)! At the end of the day, if you are a C++ programmer who gets constantly bitten by bugs, either written by yourself or by someone else on your team (bugs like using `foo.to_string().c_str()` after the line that defines it, or bugs caused by trying to erase an element of an iterator in a `for` loop), then writing code in Rust will soon be less painful than writing in C++. Especially if you are into idiomatic ways to write good software in C++!
Rust is a clever language, which wreaks havoc on people's programming egos by making them very inclined to try and write clever code.
But if you avoid trying to be really annoying with the type system like the majority of the ecosystem is, and slap Arc/RefCell/heap allocations everywhere, the language turns back into a high-level language again.
Unfortunately, stepping outside the realm of `core` and `alloc` and `std` means having to repeatedly step on overly clever landmines everywhere you go.
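As a minimal sketch of the "slap Rc/RefCell everywhere" style described above (Arc plus a Mutex would be the threaded equivalent), with made-up names:

```rust
use std::cell::RefCell;
use std::rc::Rc;

#[derive(Default)]
struct Counter {
    clicks: u32,
}

fn main() {
    // Shared, mutable state behind reference counting plus runtime borrow checks.
    let state = Rc::new(RefCell::new(Counter::default()));

    // Clone the handle freely; each "event handler" closure owns its own Rc.
    let for_button = Rc::clone(&state);
    let on_click = move || {
        for_button.borrow_mut().clicks += 1;
    };

    on_click();
    on_click();
    println!("clicks = {}", state.borrow().clicks);
}
```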
That’s really only a problem if you spend a significant amount of time trying to “build data structures from introductory algos” without reaching for unsafe, though.
If you find it easy to build a linked list or graph in C, you can do it just as easily in Rust — use unsafe, and you have your easy linked list, with exactly as much safety as it had in C. Sure, it’s more challenging to build a fully memory and thread safe linked list or graph, but it’s actually hard as hell to do that in C too. Other languages make it easy to build one with these guarantees only by requiring significant runtime support, which is out of scope for Rust.
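For concreteness, a minimal sketch of that "easy, C-style" list: raw pointers and unsafe, with no more safety than the C version would have (it even leaks nodes that are never popped, since there is no Drop impl). The names are made up for illustration.

```rust
struct Node {
    value: i32,
    next: *mut Node,
}

struct List {
    head: *mut Node,
}

impl List {
    fn new() -> Self {
        List { head: std::ptr::null_mut() }
    }

    fn push_front(&mut self, value: i32) {
        // Box::into_raw hands ownership to a raw pointer, much like malloc in C.
        let node = Box::into_raw(Box::new(Node { value, next: self.head }));
        self.head = node;
    }

    fn pop_front(&mut self) -> Option<i32> {
        if self.head.is_null() {
            return None;
        }
        // SAFETY: head is non-null and was created by Box::into_raw above.
        unsafe {
            let node = Box::from_raw(self.head);
            self.head = node.next;
            Some(node.value)
        }
    }
}

fn main() {
    let mut list = List::new();
    list.push_front(1);
    list.push_front(2);
    assert_eq!(list.pop_front(), Some(2));
    assert_eq!(list.pop_front(), Some(1));
    assert_eq!(list.pop_front(), None);
}
```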
In the end, it’s pretty unrealistic to expect that any language would allow you to write a guaranteed memory and threadsafe graph structure with zero runtime overhead, without a lot of knowledge, time, and attention on your part — there are no silver bullets.
And if you’re using Rust for anything real you’re generally not doing sophomore computer science homework like this anyway.
The problem is that unsafe data structures are often less safe (harder to avoid UB) than in C, because in the presence of pointer aliasing and cycles (found in unsafe data structures including BTreeMap's node.rs https://doc.rust-lang.org/src/alloc/collections/btree/node.r...), Stacked Borrows places strict conditions on constructing &mut T (they invalidate some but not all aliasing *const T). And the user of an owning or intrusive linked list generally expects to receive &mut T, which is not always safe to construct because of Stacked Borrows. In fact, Gankra, a major contributor to unsafe Rust libraries, standards, and documentation, doesn't solve this problem through axiomatic reasoning, but instead an "oversimplified" "heuristic" (IMO hopes and prayers): https://rust-unofficial.github.io/too-many-lists/fifth-stack... (written 2022-01).
In practice, I find that unsound libraries frequently get written and used unknowingly in the wild. I've commented on this earlier at https://news.ycombinator.com/item?id=31897503.
In short, I believe that Stacked Borrows places unreasonable and unattainable requirements on authors of unsafe structures and algorithms, which serve as the foundation for practically all safe code (outside of the vanishingly rare case of code operating on tree-shaped fixed-size variables allocated solely on the stack, and never creating aliased mutable pointers).
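As a toy illustration of the kind of aliasing at issue (my own example, not taken from the linked code): the borrow checker accepts this, and the equivalent C would be well-defined, but Miri reports the final write as undefined behavior under Stacked Borrows.

```rust
fn main() {
    let mut value = 42u32;

    // A raw alias to `value`, as an unsafe data structure might keep around.
    let raw: *mut u32 = &mut value;

    // Creating and using a fresh &mut pops the raw pointer's tag off the
    // borrow stack under Stacked Borrows.
    let reference = &mut value;
    *reference += 1;

    // This compiles, and the same pattern in C is fine, but Miri flags it
    // as undefined behavior under Stacked Borrows.
    unsafe { *raw += 1 };

    println!("{value}");
}
```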
And the “easy” C sophomore computer science graph is inevitably chock full of Undefined Behavior too; the implementers just never noticed or gave a shit because they cobbled something together that looked superficially correct, the compiler didn’t complain, so they decided they were all good. Sophomores, as it turns out, are hardly world-beating experts in C standard esoterica.
The rest of what you wrote is completely irrelevant to the original point: this seems hard in safe Rust only because you’re comparing it to doing something totally different and simpler in C.
I agree that sophomores are better off sticking with safe Rust than writing linked lists, but they'll be using libraries written in unsafe Rust, and a "safe" Rust program built on unsound foundations is undefined behavior anyway (through no fault of your own).
You said Rust linked lists have "exactly as much safety as it had in C". Are you seriously arguing that C linked lists are just as wrong as Rust ones, by saying a CS student is as likely to write an incorrect C linked list as a hardened professional is to write an incorrect Rust linked list (eg. Tokio and https://gist.github.com/Darksonn/1567538f56af1a8038ecc3c664a...)? Stacked Borrows ensures that translating sound C code into idiomatic Rust APIs produces needlessly unsound Rust code, and the Rust language (or a specification if it existed) means C translated directly into raw-pointer Rust is unidiomatic and you're fighting the language every step of the way (no autoderef, no -> operator). At this point, an experienced programmer is more likely to write and use a correct C linked list than write and use a Rust one.
> Are you seriously arguing that C linked lists are just as wrong as Rust ones, by saying a CS student is as likely to write an incorrect C linked list as a hardened professional is to write an incorrect Rust linked list
What?
No, I’m saying that writing a linked list in C is “easy” if and only if your implementation doesn’t even bother to try to cope with the 9000 footguns C offers you — that your easy C graph data structure is basically always a disaster of Undefined Behavior and thread unsafety, because if you try to make it otherwise, it will cease being at all easy. It will become, in fact, really fucking hard.
So when I say “you can have your easy C-style linked list in Rust, just use unsafe”, I agree entirely: the Rust implementation will also likely be a gigantic clusterfuck of Undefined Behavior. That’s the entire point: it’s only ever easy in either language if you’re willing to blow both of your feet and your dick clean off. It’s easy only if you’re so inexperienced and naive that you don’t even perceive the dangers of the highly efficient dick-and-leg removal device you’ve built in C, and whine on forums about how mean-old-Rust is hard in comparison because it keeps refusing to compile your dick_exterminator.rs file.
That’s the Apples to Apples comparison. Which unsafe thing is harder to get right, I don’t honestly give a flying fuck about. Now, it’s highly de-fucking-batable that it’s easier for “an expert C programmer” to avoid undefined behavior entirely in an arbitrary mutable graph implementation in the presence of multithreading unless we’re talking about an entirely mythical level of expert here, but that’s utterly offtopic to the discussion at hand. You were just looking to grind a fucking axe about Stacked Borrows and decided to rant at me about it, but it really has fuck all to do with anything I was saying, man.
You can have an easy graph structure in Rust in the only way you can easily have one in C: by not giving a single shit about correctness.
A C data structure is never a disaster of "thread unsafety" if, like most data structures, you don't use it across threads. Any Rust data structure which is Send and Sync, translated literally into C, is just as safe in C to read (or write in the rare cases &Structure is mutable) across threads, and create in one thread and destroy in another. It's only thread-unsafe if used in a thread-unsafe manner (and all Rust adds for non-multithreaded data structures is checks to prevent users from mutating across threads, though its pattern of "one handle per thread" is helpful for structures designed to be mutated across threads).
And C is not a dick-and-leg removal device, it's a direct representation of runtime semantics (aside from signed integer overflow which is avoidable, type-based alias analysis which is rare, etc.), and any sound Rust code which doesn't transmute types can be compiled into equally UB-free C code, and even Rust which commits UB by violating SB (many unsafe libraries) can be transpiled into UB-free C code as long as you don't use `restrict` when inappropriate. Rust is merely a possible way to organize a program to avoid UB, to be followed when helpful (RAII, catching use-after-free in application logic, avoiding reference counting errors, multithreading) and replaced when it impedes writing low-level code. It's not a religion where apostasy is punished by castration.
I'm criticizing Stacked Borrows because I've seen more than enough evidence that it's an unreasonably stringent memory model for writing unsafe code. Please stop putting words in my mouth and misrepresenting my positions as profanity-laced straw men, like "not giving a single shit about correctness".
> A C data structure is never a disaster of "thread unsafety" if, like most data structures, you don't use it across threads
lmao. profound stuff man.
> It's not a religion where apostasy is punished by castration.
What on earth are you even talking about. It was just a metaphor for runtime crashes you absolute dingus.
> I'm criticizing Stacked Borrows because I've seen more than enough evidence that it's an unreasonably stringent memory model for writing unsafe code
I didn’t ask! I don’t care! At no point in my life have I cared about anything less than I care about what “nyanpasu64” thinks about Stacked Borrows. Take the hint you tedious dork.
Yes I do find it painful. Because rust’s memory safety restricts the number of valid programs, it is (to me) more difficult to write code. That’s not a bad thing though! It’s just not a trade off I’m willing to make most of the time.
If my project requires the memory safety that rust offers, I’ll choose rust. But most of the time I’ll pick something else to lower my mental load when coding.
I remember reading (on HN I think) that coding in Rust is like playing some kind of intricate puzzle game. Tricking the compiler into accepting your code. You feel smart when it works and challenged when it complains. I personally like that aspect, at least for now.
On the other end of the spectrum, I would say, is Go, with its lack of "advanced" or functional language features and its sans-syntactic-sugar, straightforward approach. I always feel like I have oncoming RSI when I write Go; it's just so verbose and un-dense and frankly boring. No room for (too much) cleverness.
At the same time I would guess real development teams using Go are more productive (in problem spaces where Go can be used instead of Rust, of course). Especially if you factor in mentoring of junior colleagues new to the language, etc.
> coding in Rust is like playing some kind of intricate puzzle game. Tricking the compiler into accepting your code
That is mostly only true for very early beginners.
There are certain patterns you have to learn and understand, especially for developers not accustomed to thinking about lifetimes (which applies to most developers that only have experience with GC languages).
And sometimes you do have to battle the compiler, even with experience.
But most of that goes away pretty quickly once the language clicks for you.
After that Rust can still feel restrictive, but that's because there are very few languages that enforce as much correctness at compile time.
That very well can mean that Rust just isn't a good fit for certain domains - which is perfectly fine!
Completely agree with Golang, it is such a nice language as it remains pretty basic/simple and thus extremely readable and easy to use. Sure it's not expressive enough for some applications, but for a lot of applications it really shines. I'm mostly doing Rust nowadays, but if I want to use a beautiful and enjoyable garbage collected language there's Golang.
Why would people be annoyed? Plenty of people have done the same thing in this thread. Except they stayed on the GUI topic. So maybe that’s why you included the sheepish intro sentence.
Sure, I kind of hint at it in another comment in this thread but a few more pain points for me:
* Rust has no way to talk about heap allocations succinctly other than Box; an actual type I had was Option<Box<[Box<[&'a str]>]>> (see the sketch after this list). It's more than just Box and Option being poor abstractions for the heap and nullability respectively: Rust is a systems programming language, yet it provides nothing to actually help with even mundane problems that arise in systems programming
* there has been almost no iteration in the design space of lifetimes despite being a cornerstone feature of the language
* prolific do_x and do_x_mut methods; there has to be a better way than countless *_mut methods for a language where mutability is so important
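A small illustration of the type from the first bullet and the usual workaround of naming the layers with type aliases; Row and Table are names I made up here.

```rust
// Name the layers so the nested heap type becomes readable at use sites.
type Row<'a> = Box<[&'a str]>;
type Table<'a> = Option<Box<[Row<'a>]>>;

fn build<'a>(words: &[&'a str]) -> Table<'a> {
    if words.is_empty() {
        return None;
    }
    let row: Row<'a> = words.to_vec().into_boxed_slice();
    Some(vec![row].into_boxed_slice())
}

fn main() {
    let table = build(&["heap", "allocated", "strings"]);
    println!("{:?}", table);
}
```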
My general impression of Rust is that it ships an MVP version of a feature and never really tries to iterate on it, or iteration happens incredibly slowly (const generics being the only notable exception I can think of, where almost every release seems to have something to say about const generics). And I get it, things like GATs are important for the language long term, but the "ivory tower" approach has left the rest of the language feeling neglected IMO.
I have no business related to Rust, but after reading a few articles from this author, I tend to think that his approach to GUI toolkit design is deeply flawed.
He is undoubtedly highly knowledgeable about the subject, but this knowledge may be a curse in his case.
A mix of second-system syndrome and Architecture Astronautics: trying to satisfy too many constraints can be a deadlock.
I think the sensible approach is starting with a Minimum Viable Product, cutting some corners and consciously making tradeoffs. And if success comes, then try to organically grow from there.
“If you want to build a ship, don’t drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.” — Antoine de Saint-Exupéry
I will one hundred percent agree with you that my approach is not a good one if you want a pretty good GUI in a reasonable amount of time. For that, an incremental approach, especially adapting some existing successful design, would be better. But that is not in fact my goal, it is to yearn for the sea.
I am, quite deliberately, spending a lot of risk points. In addition to using a language which may (see elsethread) not be a good fit for expressing UI at all, I'm building a GPU 2D renderer from scratch, also using compute shader techniques which have not been proven, and I am designing a reactive architecture that is not just a simple adaptation of React. Any of these could fail, as have some of my previous attempts. But I think they will be interesting failures, in that we'll learn something, and if all the pieces do come together, it will be a UI toolkit capable of performance completely untouchable by the existing state of the art. In turn, I'm interested in how that could open up new creative possibilities constrained by current implementations.
So I'm pretty comfortable with my approach. And hey, next time you're in the Bay Area (or perhaps when I'm in your neck of the woods), lemme buy you beer and we can talk about why it gives you satisfaction to dump on other people's work.
To be frank, I think that the kind of work you are doing is necessary.
In my opinion GUI is not yet something I would consider to be a "solved problem".
Both from the API perspective and the rendering side, and compute shaders are indeed extremely promising and could be something close to an end-game in this space.
This is why I read your articles.
And I have absolutely zero interest in dumping on your work, but I have the feeling that a large part of this work is going to waste because of what I consider to be flaws in the way you are approaching the problems, or maybe in the way you advertise your approach; I can sense how discouraging it can be for other devs.
What, specifically, do you think the flaws are? I think if you're going to make such criticism, it helps to give actionable specifics (i.e. constructive criticism). How is the approach discouraging to other devs?
I think that people who manage to achieve "impossible" tasks tend to be overly optimistic (and naïve) at first, consciously or not, this is a good trick to fuel their own motivation and that of others.
On the contrary, knowing/talking too much about the difficulties can quickly kill the fun and motivation.
He should talk about how great the end-goal will be and why it is important.
Would you rather end-users continue to get stuck with applications that ignore important things like accessibility, that can block some people from getting or keeping a job, so as not to kill some developers' motivation? I'm glad Raph is talking about the difficulties so they (hopefully) won't be ignored or clumsily bolted on afterward this time.
Edit to add:
"Those who cannot learn from history are doomed to repeat it." --George Santayana
I think the time for recklessly moving fast and breaking things in software, without taking into account known complexity and avoiding the mistakes of the past, is over. Our impact on the world, and the resulting responsibility, is simply too great for that.
As another anecdatum, I thought the last part was beautifully done and completely deserved.
The dismissive complaint was not without merit, but wow did it assume that the commenter's use cases are all that matter and all that need be considered. The world of UIs is much, much, much bigger than that, and awash in unsolved or badly-solved problems that matter a lot to many people.
I agree with the complaint, fwiw, when applied to an important subset of the design space. I even think it's useful to try to understand where the limits of that subset are, and why.
But saying that exploration is pointless because we're all happy living on this here big island and there's nowhere else that could possibly be better so why bother looking, it's all good except we still don't know why people keep dropping dead, but that's an acceptable drawback to what is otherwise a paradise on Earth—hang on a sec while I scrape off these leeches, they're so silly sometimes—and the people who think otherwise are just malcontents who ought to be out catching fish for the rest of us to enjoy.
>So I'm pretty comfortable with my approach. And hey, next time you're in the Bay Area (or perhaps when I'm in your neck of the woods), lemme buy you beer and we can talk about why it gives you satisfaction to dump on other people's work.
I'll have a lot more to say about this as I gather quantitative performance data. But it's a good question. I expect the big wins are: fast 2D (vector with blends and so on) rendering with compute shaders, multithreaded creation of expensive resources like image decompression and text layout, pushing incremental reactivity all the way from app logic to GPU (as opposed to needlessly redoing work), and of course just using a fast, non-GC language.
From a computation perspective, UI is fundamentally an incremental computation engine. Most elements are not changing from frame to frame, so you can either recompute and re-render, or be smarter about only propagating deltas. I'd like to propagate those deltas all the way to the GPU, so you reuse lots of things from the previous frame if they haven't been invalidated. I'll be writing about this in considerably more detail; stay tuned.
If I recall correctly, damage rects are a raster-stage optimization that rerenders components only if they have changed ('dirtied') since the previous render. You could extend this concept of 'damage rect' to the next layer down, like React has pursued with the vdom/dom split. If you continue extending the analogy through every layer (raster, layout, scene graph, component hierarchy, data model, ...) all the way down to and including the application interaction model ("user has pressed the E key", "received IO result", ...) and then statically project every interaction model event back up through all the layers to derive a rerender box, then I think you could say damage rects are kinda like deltas. (I think this explanation is mostly reasonable.)
Yeah, I get it. Although in React's case the whole vDOM diffing thing is self-inflicted - it is only necessary because they insist on the UI being a "pure" function of state. Turns out that the only way to do this with an imperative API is some kind of tree diffing, so that you don't recreate expensive stuff from zero each frame.
'infogulch has it right. A damage region is a way of saying "this region of pixels hasn't changed," but you can also say that for a scene graph, attributes of widgets, layout, the view tree, and other things. Ideally you press a key and the CPU does only a tiny amount of work figuring out what changed, followed by the GPU doing a tiny amount of work rerendering the changed pixels.
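As a minimal sketch of the damage-region idea with made-up types: widgets report dirty rectangles, and only the union of those rectangles needs repainting on the next frame.

```rust
#[derive(Clone, Copy, Debug)]
struct Rect { x: f32, y: f32, w: f32, h: f32 }

impl Rect {
    fn union(self, other: Rect) -> Rect {
        let x = self.x.min(other.x);
        let y = self.y.min(other.y);
        let right = (self.x + self.w).max(other.x + other.w);
        let bottom = (self.y + self.h).max(other.y + other.h);
        Rect { x, y, w: right - x, h: bottom - y }
    }
}

struct DamageTracker {
    damage: Option<Rect>,
}

impl DamageTracker {
    // Widgets call this when something they draw has changed.
    fn invalidate(&mut self, rect: Rect) {
        self.damage = Some(match self.damage {
            Some(existing) => existing.union(rect),
            None => rect,
        });
    }

    // The renderer takes the accumulated region once per frame, if any.
    fn take(&mut self) -> Option<Rect> {
        self.damage.take()
    }
}

fn main() {
    let mut tracker = DamageTracker { damage: None };
    // A keypress dirties only the caret and the edited glyph run, not the window.
    tracker.invalidate(Rect { x: 120.0, y: 40.0, w: 2.0, h: 16.0 });
    tracker.invalidate(Rect { x: 122.0, y: 40.0, w: 9.0, h: 16.0 });
    println!("repaint {:?}", tracker.take());
}
```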
> it will be a UI toolkit capable of performance completely untouchable by the existing state of the art
Could you give an example of an application where the bottleneck is UI code? In my experience the bottleneck is always either disk or network. Not trying to bash you, genuinely curious.
There's plenty of apps that we developers use daily that fall into that. Apps like Teams, Slack and Jira can feel incredibly slow to a lot of people, even with everything already in memory and not waiting for anything from the network or disk. Typing messages, changing tabs, going to read a notification... Facebook sometimes takes a second to show characters you typed in a reply, and the characters often show up out of order (also zero network activity during it). A lot of complex WYSIWYG editors are also incredibly heavy, while simpler ones aren't.
Sure, that might be technically a bandwidth problem: in this case probably RAM. It is solvable with faster computers/faster RAM. But it's slower or at least the same speed as it was in the past with slower machines. Since hardware got better, it has got to be something different in the software side.
And it's not even about difficult things like Unicode Glyphs and Emojis, which are common canned responses when anyone says that software "is slower than N years ago". Those things are handled by the OS, not by Jira. And there are super fast apps that make use of them.
I don't have access to such corporate software at the moment, but can you verify in devtools where the delay is actually coming from in the apps you mention? Leaping to "the stack is bad it needs to be rewritten in rust" is extreme, especially when apps like VS Code are generally regarded as non-slothy [1]. Or at least not more slothy than the domain requires [2].
I'd wager the developers are poorly incentivized - if something satisfies the ticket and looks fast enough on their tiny test bed with state of the art hardware and great network (perhaps even localhost), it ships. On the other hand I don't consider something good enough to ship until it is fast enough on testbeds orders of magnitude larger than what I expect a "normal" workflow would include. The process of going from "works" to "fast enough for people using 100x larger inputs than I expect" almost never involves "rewrite in rust"[3], but instead "cache cleverly", "debounce discretely", and if all else fails "sit down and ponder on novel algorithms and data structures". These are all operations that are just as easy, if not easier, in high level languages as compared to rust.
[1]: Full disclosure I was paid to write vscode for a period. When VS Code is slow, and it 100% is at times, the root cause is almost always an extension blocking progress for some dumb reason. This absolutely blows, but isn't a problem rust would solve - indeed extensions can already invoke rust.
[2]: inb4 "but sublime!": Running experiments, I've found Sublime to in fact be slower than a fresh VS Code install at working with very large files. Of course when you have extensions trying to do dumb stuff with the big files, VS Code can get worthlessly slow. Again not a problem rust would solve. Try it: make a 5M line file, open it in Sublime, then in VS Code. On my machine VS Code opens it well before Sublime can.
[3]: Yes there are some times when rewriting in rust is appropriate, for instance VS Code's search is ripgrep - but rust isn't handling the UI at all, it's running in a separate thread doing what it does best (multithreaded systems programming), while the main renderer thread is doing what it does best (rendering). This is the way forward for the truly "inner loop" code, IMO.
> I don't have access to such corporate software at the moment, but can you verify in devtools where the delay is actually coming from in the apps you mention?
I already did. Like I said, that happens when there's zero network activity (the apps are a bit sluggish even with wifi deactivated) but also zero disk activity as well (according to Apple's tools), as caches are warm.
Sure, network in the apps I mentioned is also incredibly slow (mostly because of lots of requests and redundant data), but the real slow part is the interface itself. Dragging anything, typing text, clicking buttons, popups. Everything takes more time than a native app from 15-20 years ago.
Devtools show it's death by a thousand cuts. Thousands of sub-millisecond javascript functions, thousands of unnecessary re-renders. That happens in almost every operation. Even popup menus take a long time to show up, despite not really doing anything beforehand such as loading data. This is in both Teams and Jira, btw; Slack is not as bad. Interestingly, Teams works faster when opened inside Safari rather than in Electron, but not by much.
> Leaping to "the stack is bad it needs to be rewritten in rust" is extreme
Thankfully I said nothing of the sort... You mention Rust a couple times more, so I guess this is something of a pet peeve to you, which I'll ignore since it has nothing to with my message. I was only answering to your query, I'm not interested in making arguments because I have no dog in this race. Like I said, other apps (even those written in the web platform) are faster than the examples I gave.
> I'd wager the developers are poorly incentivized - if something satisfies the ticket and looks fast enough on their tiny test bed with state of the art hardware and great network (perhaps even localhost), it ships.
In this case it's also not about being fast on localhost or having 100x more data, although this is definitely a safe bet on most products. It's slow even on the base case, even without doing anything remote, or having almost no data.
If you want me to wager on why this happens: developers work in a way they can retain their sanity. If there's weekly changes of scope, they'll program defensively in a way that allow quick changes. So there's no room for macro performance optimisations. The architecture is optimized for change, not performance. I know there are weekly stupid changes in Jira/Teams because I see bugs and little test features coming and going every single fucking week, frequently disrupting my workflow. VSCode on the other hand is a developer tool, and performance and familiarity seems to take precedence over quick stupid features. Why? VSCode is a dev tool, so developers indeed know better. In normal products developers are unable to fight back the asshole product manager or product owner changing their mind every other week.
This post is about moving the GUI layer to Rust, away from the standard HTML/JS/CSS used in the apps you mention. That’s why I brought it up.
My claim is that moving to Rust is useless without solving the unnecessary rerenders, and once those are solved moving to Rust would be pointless. So that only real problem is fixing bad coding practices, which has nothing to do with the underlying language. In fact using a lower level language is likely to make that much more difficult.
Got it. I’m just making it clear that I’m only replying to your question (“Could you give an example of an application where the bottleneck is UI code?”) rather than arguing for language X or Y.
I actually agree with you.
I guess I’m kinda tired of people trying to lure me into arguments I don’t want to have.
The fact that our modern computers still often don't feel amazingly fast is a perennial topic of griping on forums like this one. I'm sure the UI stack has something to do with that.
This approach doesn't work for GUI frameworks IMO. When building an application, you're looking for a GUI library that meets a set of requirements. If it doesn't meet those requirements you're not going to use it: as an application developer you're not going to want to rewrite the fundamentals of the GUI library, so there's no opportunity for that library to improve.
Maybe it would work if there was no other choice, but in the end you can always use bindings to GTK or some other mature non-Rust UI framework, or use web-view based option.
In order for a pure Rust solution to succeed in the general case, it needs to meet all of these constraints from the get-go, but of course that's an insurmountable task.
For that reason, I think the best approach is not to try to "solve" GUI yet, but instead focus on the prerequisites:
- Window creation.
- Text rendering.
- Compositing.
- Accessibility.
- etc.
i.e. all of the things mentioned in the article can be separated out and tackled one at a time. For that reason I think the author has taken exactly the right approach.
A more scrappy approach is what I'm trying to do with my library, rui [1]. Just get something out there. Also, it's really in service of my (already released) app, as opposed to trying to be a successful open source project. If others happen to like it, great!
That said, I think Druid is a good Minimum Viable Product. I just needed GPU acceleration for my app, and wanted something closer to SwiftUI, which I'm used to.
i am just interested enough in new gui toolkits that i have read through a bunch of blogposts, articles, and github repo docs for a ton of emerging ones. my overall impression is that if a lot of these "architecture astronaut" concerns are not at least planned for up front, they will never be implemented, so i welcome the OP's thoroughness in documenting the various issues to be taken into account.
I don't think that GUI toolkits that are in use today were born in their final form, or that everything was planned from the start.
I think that unfortunate early design decisions can lead to dead-ends, or extremely painful evolution, of course, and knowledge can help, but paralysis is in my opinion an even bigger issue.
Making the perfect GUI Toolkit starts by making one that is Good Enough for some use cases.