Hacker News
Rust 2021: GUI (raphlinus.github.io)
372 points by clarkmoody on Sept 29, 2020 | 244 comments



Why is every project on the HN front page these days Rust this, Rust that? I understand the enthusiasm, but sometimes it seems a bit much.

Oh, this is my post. Never mind, carry on.

More seriously, this was my contribution to the Rust 2021 blogging effort. I didn't really intend for it to be posted to HN; it was meant more as an internal discussion within the Rust community. Even so, I'm happy to discuss and answer questions.

Read some more thoughts about Rust 2021 here: https://readrust.net/rust-2021


This is coming from an outside perspective, and I'm quite prepared to be badly wrong.

I think the Rust language needs to learn a few tricks from Objective C and Swift to really make this possible. In particular the work being done on extensible protocols.

What you can do is build a 'Rust-flavored' GUI library. But this isn't what users want: they want a native app, while developers want to write once, run anywhere.

This requires a kind of double decoupling, where UI intent is expressed as an abstraction, and is realized as an OS-native control — but one which itself will dynamically update, insofar as possible, as the OS evolves.

This calls for a kind of API wizardry which Objective C always got right, and C++ never really did. I see that dichotomy extending into Swift and Rust, but it isn't inevitable; Rust hasn't painted itself into that particular corner.

Anyway, it's encouraging to see this work happening, I wish you good fortune in 2021 and beyond.


Check out Raph's retrospective on his xi editor project [1], particularly the section "There is no such thing as native GUI", for his take on why using platform-provided ("native") controls isn't always practical, at least for the applications he's developing. Anyway, cross-platform wrappers over native controls have been done in other languages, e.g. wxWidgets in C++ and SWT in Java, and that approach has enough limitations that AFAIK most developers of complex applications avoid it now.

[1]: https://raphlinus.github.io/xi/2020/06/27/xi-retrospective.h...


I will miss dearly the good old days of Cocoa GUIs that adhered to Apple's Human Interface Guidelines (and when HIG itself was about usability, not looks). They all supported all the little things of Cocoa (like click-hold-drag on popup menus activating items on mouseup, or proxy icons that gave quick access to files of documents in any window, or search pasteboard).

Nowadays native apps on Mac are dead. Even Apple is giving up. There are still native-code-running .app bundles, but they're Qt, Electron, or Catalyst that are like knock-off brands of Mac apps. Even if they bother to have vaguely Mac-like skin, they lack all of the nice little details that made Cocoa apps great.


> Nowadays native apps on Mac are dead.

I know you’re saying that in a subjective way, and the percentage of non-Cocoa apps has definitely gone up, but I’m not sure they’re dead. In fact, I feel that they are thriving once again in 2020 — a lot of indie devs who care about making Mac-assed Mac apps[0] are around and are releasing great apps, replacing ones that were built with Electron.

Some examples that come to mind: NetNewsWire 5[1] (an RSS feed reader), Shrugs[2] (a Cocoa Slack client), Mimestream[3] (a Gmail email client), Chime[4] (a Go IDE that’s entirely Cocoa! I seriously considered learning Go because of this app), Proxyman[5], and I could go on and on...

My feeling is that Electron gained traction around 2014–16, but people started to realize they really do like Cocoa better.

[0]: https://daringfireball.net/linked/2020/03/20/mac-assed-mac-a...

[1]: https://ranchero.com/netnewswire/

[2]: https://shrugs.app/

[3]: https://mimestream.com/

[4]: https://www.chimehq.com/

[5]: https://proxyman.io/


Agreed. I'm one of the rare folks around here who, despite decades of deep Unix experience, prefer decent GUIs to the command line, and Cocoa really is the best you can get in general use today. The decreasing number of native apps and the increasing number of apps that don't have all the nice little affordances makes me sad.


Same here; that is why I am so critical of those who use 25-inch screens to organize xterms and nothing else. I was doing that in 1994 with twm on IBM X terminals; something better is to be expected in 2020.


> that is why I am so critical of those who use 25-inch screens to organize xterms and nothing else. I was doing that in 1994 with twm on IBM X terminals; something better is to be expected in 2020.

Seems weird to criticize people for working in a way they like to work. The "something better" might just be improved command line tools. I think there's room for both, and different people prefer different types of interfaces.


I haven't used a Mac in ages, so I'm unfamiliar with any unique features of Cocoa over what one gets with e.g. GTK+; what would you say the features that make it better are?

I'm usually a bit skeptical of current-generation GUIs (but then again, I'm ignorant of the Mac world) largely because it's usually so much more painful to extend and compose them. To pick a perhaps slightly unfair example (since it's basically about text processing), I use Weechat for chat rather than Pidgin, since the UI inconvenience of it being a CLI is outweighed by how easy it is to process my chat history, programmatically interact with notifications, and how much more functional Mosh/SSH are than X11 forwarding.


Speaking as a user, I don't particularly care if the widgets are native or reimplemented - what I care is whether they look and feel native. E.g. on Windows, a Qt or even a Tcl/Tk app can easily look "native enough" that nobody will care.


On macOS the native UI language is very consistent and it’s very easy to decide whether an app looks and feels native. And AFAIK Qt is the only production-ready cross-platform toolkit that even comes close to native on macOS.

On Windows there are at least three or four kinds of “native”; I don’t even know what’s “native enough” there.


I can only think of two kinds of "native" on Windows when it comes to look & feel - Win32 apps, and WinRT apps. Ideally, a modern GUI framework would have backends for both.

(Stuff like WPF still has a Win32 L&F, even if it draws the widgets itself, same as Qt.)


In my opinion there are more than two kinds of native if you pay attention to the details.

Themes: Unstyled Win32, styled Win32, WinForms (which is part Win32 and part custom controls), WPF (which sorta imitates Win32 and doesn't quite look like it), and UWP.

Fonts: Microsoft Sans Serif, Tahoma, Segoe UI, and non-English fonts.


What's the development experience like for Rust GUIs? In my experience developing Electron apps there are a couple of major pluses that I'm not sure Rust can compete with, based only on my limited knowledge of the ecosystem:

- Excellent embedded developer tools. Hundreds of thousands of engineer-hours have gone into the Chromium DevTools, providing everything from styling overrides to layout metrics to profiling tools to a REPL to DOM tree inspection and modification to an integrated debugger, and all of that is at your disposal with a simple Cmd+Opt+I. To my knowledge Rust doesn't have anything like this.

- Super quick incremental updates. Making a change and seeing the impact is sub-second, even on large projects (VS Code). Based on my understanding of Rust, recompilation can take a very, very long time. When making GUIs, a lot of the time is spent making finicky little changes and reloading; if that inner loop is slow you're going to have a bad time.


I've talked up the strengths; you've brought up the big weaknesses. I think there is some interesting work on both of these fronts, but the developer experience situation is very primitive so far by comparison.

Compile times have been getting better. For reference, on my machine an incremental build of Runebender is 2.3s. Part of this reflects some choices we've made to use platform capabilities rather than build the entire stack ourselves.


DevTools is the killer feature.

I've developed a Gtk GUI in Rust recently. It wasn't bad, but I probably won't use it ever again. When I'm debugging a UI, I want to right-click it, select Inspect, and be able to tweak everything live.

DevTools is such a force multiplier, that I'm confident I can quickly and reliably develop a whole GUI from scratch, with custom layout animations and bells and whistles using JS+CSS. I have zero confidence in doing anything in Gtk or Cocoa beyond dumping static pre-built controls in a window. The cycle of editing blindly, recompiling, and re-running is the old paradigm that I don't want to return to.


Seems to me like the future could well be TS/Electron with bindings to Rust for system-level high perf activities. Let the UI scripting language and mature rendering platform do the UI scripting and rendering, and let the high performance systems programming language do the performance-critical system interfacing.

For example, in vscode we do almost everything in TS, but shell out to ripgrep for workspace text search.


This is what some people have started doing, and it works well as long as your bottleneck lives outside of the UI itself. But for some highly complex apps, web layout itself can become the bottleneck, and that's really hard to work around. VSCode is an absolute miracle in this regard, but it uses lots of dirty tricks under the hood to accomplish that.


Great, here's some off-topic quick feedback for someone from VS Code. When it comes to search, I always end up starting Notepad++ and pressing Ctrl+Shift+F to get what I was searching for, while VS Code just keeps showing its growing blue progress line without any results to show.

Which is quite disappointing, given ripgrep's reputation in search performance.


Have you perhaps disabled ignore files accidentally? It’s the toggle to the right of the exclude field.


Seems to be enabled, oh well there are enough Github issues related to search performance anyway. Thanks for the help.


Up to you if you want to go through the rigamarole, though we do appreciate every issue^, even if we do end up closing it as a duplicate :)

^well, every issue that isn't completely void of information ;)


Actually there is a built-in GTK Inspector [0], and I have used it successfully with gtk-rs before.

    GTK_DEBUG=interactive your-app
[0] https://wiki.gnome.org/Projects/GTK/Inspector


GTK has all the bits needed to support this tweaking workflow, no? It's "just" missing a UI (outside the GUI builder tools).


Kind of; you need introspection capabilities. Check, for example, the live visual tree in Visual Studio for WPF and WinUI/UWP, for .NET and C++ applications.


Flutter is an example of how you can have efficient AOT compilation while not sacrificing the ability to hot reload during development.

The tooling is not there yet. But there are drawbacks to using the web, and Flutter is already promising. Any progress in this field should be welcomed.


Otoh, despite the name, electron apps are very heavy in terms of both memory and disk size, have a long load time, and can be very slow if not designed well.


Same applies to other native GUI tools, like WPF/WinUI(UWP), Qt, Cocoa,...

The Rust semantics also seem to make it quite complex to just drag components around on a GUI designer, because the lifetimes become dynamic and just dropping a component in the middle of a form can have multiple lifetime meanings, depending on the surrounding context.


Qt at least has Gammaray for introspecting running apps.


GammaRay is like DevTools but extremely unstable; it requires fiddling with ptrace permissions on Linux, and I don't know how to make it work on Windows.


> I wouldn’t consider a toolkit “ready” for production use until it supported accessibility, and as far as I know there is nothing in the Rust space even starting to work on this.

Have you given much thought to the design of Druid's accessibility layer yet? I might be able to help with that and with a Windows UI Automation implementation in my spare time (a stable com-rs would help with that). I guess this isn't much of a priority though for your hero app (Runebender).


Help on this is more than welcome. It's also not as far off the roadmap as you might think. I know of at least three very talented font designers with disabilities, at least one who uses a wheelchair. It would please me enormously to make a tool suitable for them, and I would push other priorities back for it, just because I consider it so important.

In any case, I haven't done detailed design work on accessibility, but enough poking around to get some sense of what might be involved.

You are right that stable com-rs will help a lot with this kind of integration; there are soundness and other issues with the custom com implementations we've had to do. Fortunately there are good people working on it.


Kudos for adopting the "dynamic range" concept.

I agree that accessibility support is a good proxy for maturity. Even if somebody tries to game perception by doing accessibility "early", there are benefits.

On that topic, what are GTK's and Qt's accessibility support like? I remember that once when there was a real prospect of Free Software being made preferred for public acquisitions, somewhere, a big disabled-support group argued that the Free Software stacks lacked accessibility support, and would shut them out. Would such an objection be plausible today?


Qt supports the native platform accessibility on Windows and Mac, and "AT-SPI" on X11 [0].

Windows and Mac have been supported since at least Qt 4 in 2005. I think there was also some support in Qt 3. The archived Qt 4 documentation says X11 support is "preliminary".

0: https://doc.qt.io/qt-5/accessible.html


I think it's ideal content for HN. Programming languages have always been a big deal here, since the day HN was created.


I certainly enjoy content like this a lot more than the "X startup exited for $Y million" content.


Or the countless single person startups that shut down before you even knew of their existence.


I came to HN for the lisp fanboy culture when it washed out of reddit, and stayed for the great moderation.


Thanks for putting accessibility on the first row of priorities.


Why not just take the React-native model and port it to Rust (whatever that translates to)? From my POV for 85%+ of things it’s good enough and would be fairly simple to implement.


You mean that they should wrap native controls? It's not React Native's model; it has been around forever.

WxWidgets (https://en.wikipedia.org/wiki/WxWidgets) 1992

SWT (https://en.wikipedia.org/wiki/Standard_Widget_Toolkit) 2003

Or if you mean React itself, I found this blog post kind of funny: https://www.bitquabit.com/post/the-more-things-change/ :-)


No, I mean something that uses:

1) data builds the UI

2) widgets, don’t really care if they are native but some building blocks (button, lists, activity indicator etc.) The list from React native will do for now.

3) layout using flexbox and the exact same simplified css with the cascade.

That’s what I’m saying to take anyway.
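For what it's worth, point 1 ("data builds the UI") can be sketched in plain Rust without any framework. This is purely illustrative; `View` and `login_form` are invented names, not any real toolkit's API:

```rust
// Illustrative only: a tiny "data builds the UI" tree.
#[derive(Debug, Clone, PartialEq)]
enum View {
    Button { label: String },
    Text(String),
    Column(Vec<View>),
}

fn login_form(username: &str) -> View {
    View::Column(vec![
        View::Text(format!("Hello, {username}")),
        View::Button { label: "Log in".into() },
    ])
}

fn main() {
    let ui = login_form("alice");
    // The UI is plain data: it can be diffed, serialized, or unit-tested.
    if let View::Column(children) = &ui {
        assert_eq!(children.len(), 2);
    }
}
```

Because the tree is just a value, it can be diffed against the previous frame or tested without a windowing system.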


Those items remind me very much of how GUIs are specified in Tk:

https://en.wikipedia.org/wiki/Tk_(software)

Perhaps a model similar to Tk could be used?


Which is more broadly known about and used, Tk or React Native? It really doesn’t matter anyway, I’m just suggesting these things that work fairly well to avoid more standards proliferation. It won’t work.


Should read “The list of widgets from React Native” and “withOUT the cascade”. Guess I was tired last night...


Raph's previous post (linked in the first paragraph) goes into more detail than you could imagine on the options and differences and tradeoffs involved…


Thanks! I’ll check it out


While I'm all for React-Native, the opinions on how to do cross-platform GUI differ.

Some say, use native UI elements, like React-Native.

Some say, render your own stuff in a (GL) canvas, like Flutter/Revery.


Some say, render your own stuff in a (GL) canvas, like Macromedia Flash


Two things, IMHO (of course). A GUI toolkit needs to be usable from other languages; I hope that's part of the plan. And also: don't use platform-specific features or be a wrapper over their widgets. Be a GUI toolkit that works everywhere.

Good luck!


It has come out as programmers' favorite language for the past few years, so it shouldn't be a surprise. You should post articles about your favorite language and let them float to the top.


What is your opinion on the relationship of the GUI toolkit with the operating system?


That's a pretty broad question, but overall I favor deeper integration with operating system and platform capabilities. Some people see GUI as something to be built over an abstraction layer consisting of delivery of mouse and keyboard events, plus a fast pipeline to render pixels. I think that misses some opportunities.

This is one reason I'm really enjoying Rust. When it comes time to do some platform integration, you just haul out the unsafe keyword and write it, nothing in your way. By contrast, when I was working on Android and had to do a lot of JNI, it was painful.


Can't answer for Raph, but deeper integration with the OS interferes with portability. You need enough integration for things to work at all, but after that, integration is in competition with portable features for developers' attention.

Attention is always the scarcest resource, all up and down the stack.


I don’t know why, but this reminded me of HN about a decade ago when people would post Erlang articles en masse as a means to make sure non-technical people were bored and wouldn’t hang around. I guess it didn’t work in the long run!


Oh, is that why?


Why isn't Xi a "hero" app for this anymore?


1. As your question touches on, it was. The codebase has grown organically from its start as xi-win.

2. The xi-editor project had other problems which I've written about extensively in my retrospective.

3. It's hard to compete against MS VS Code. When I started xi, the competitive landscape had a huge gap between performant and feature-rich but bloated editors.

4. We got funding for the font editor project.


> 3. It’s hard to compete against MS VS Code.

I agree, but as an emacs user that has tried it multiple times, I really don't "get it".

The main thing that excited me about Xi wasn't Xi itself, but rather that its technology stack could be used to "build your own editor" on a solid foundation. That includes a more modern emacs, vi, VS Code, or whatever you like.


A solid, production-grade GUI library would complete Rust for me.

I reach for C++ and Qt when I want a cross-platform GUI and a low-level, compiled language. If Rust offered something on the same level as, or at least close to, Qt, I’d go all-in on Rust.


Same here on all points. I am familiar w/ existing options to create Qt bindings in Rust, yet like other non-Qt-backed bindings (i.e. not PySide), they fall into disrepair or suffer on often-needed features such as inheritance and slots.

I am kinda hoping autocxx[0] reaches the level of being able to handle wxWidgets headers (I have no hope for Qt though there are multiple Rust projects that try) and that will probably be enough for me on the desktop. Alternatively, it'd be nice to see a project that manually built Qt widget bindings surfacing an idiomatic Rust interface (kinda like what NodeGui does), but it's a lot of manual work that will have limited coverage in early stages instead of just continually improving/tweaking a generator.

0 - https://github.com/google/autocxx


Doesn't wxWidgets have C-based bindings anyway, that Rust could use? How do they handle non-C++-based languages with existing support?


I am only familiar w/ wxc which wxHaskell used[0], but I do not know if they are still maintained. Granted after your sibling commenter mentioned concerns w/ wxWidgets, I may have research to do on concerns with using it in 2020 (e.g. reading the references of https://en.wikipedia.org/wiki/WxWidgets#Criticism).

0 - https://github.com/wxHaskell/wxHaskell/tree/master/wxc


I hope that wxWidgets for Rust never sees the light of the day.


Why not cooperate with the Qt people? I've experimented with some minor Hello World stuff in Qt several times over the years, but it's the C++ that tends to put me off. Some official bindings or deeper integration might draw people to both rust and Qt.


Do you just want the GUI portions of Qt or all the extra stuff that's also included?

I always found that Qt programs end up as their own little dialect of C++ that doesn't quite play by the same rules.


I think this is by design. For example, in Qt you use QStrings instead of standard library strings which then allows you to use the Qt system for internationalizing text. C++ is a general purpose language and Qt is a library/framework for building highly portable applications.


Qt bindings for Rust released under a MIT or LGPL type license would be great.


Why do you use buggy C++ and proprietary Qt when there is a superior alternative called Free Pascal + Lazarus? It produces cross-platform GUIs (Windows, Linux, Mac) using native GUI controls (Qt does not). Pascal is also readable, and Pascal programs are not full of exploits like C++ software.


I actually loved Delphi back in the day for Windows development. It was quick and easy. My biggest issue with FPC/Lazarus is that the technology isn’t popular compared to C++. In my experience, the library ecosystem, etc. just isn’t as large and it’s harder to find support for.


One other aspect of Rust that makes it suited to GUI work is the fact that the management of mutable state is one of the core problems of writing a GUI app, and that Rust allows you to talk about mutation in a way that no other language (that I know of) does. You can a) have mutable structures, and b) declare that a function will treat one of those - passed as an argument - as deeply immutable, both within the same language. You can have exactly the amount of mutation that you want, which should be extremely enticing to any GUI developer.
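A tiny sketch of that point (the `Counter` type here is invented for illustration): the same struct is mutable where you hold `&mut`, and deeply immutable behind a plain `&`:

```rust
#[derive(Debug)]
struct Counter {
    clicks: u32,
}

// Takes `&Counter`: the compiler guarantees this function cannot
// mutate the widget state, not even through nested fields.
fn render(c: &Counter) -> String {
    format!("clicked {} times", c.clicks)
}

// Takes `&mut Counter`: mutation is explicit and visible in the signature.
fn on_click(c: &mut Counter) {
    c.clicks += 1;
}

fn main() {
    let mut c = Counter { clicks: 0 };
    on_click(&mut c);
    assert_eq!(render(&c), "clicked 1 times");
}
```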


> deeply immutable

Not really true, because the "interior mutability" pattern allows for mutating structures that are passed via a shared reference. Truly "immutable" data is in fact quite hard to characterize in a language that's as 'low-level' as Rust.


Technically yes, there are trapdoors like RefCell. But these are intended to be used sparingly because they move all relevant borrow-checks to runtime. Under normal circumstances immutability is statically guaranteed by the ownership system, which is much more than can be said for other languages where mutability is an option at all.
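Concretely, with `RefCell` the aliasing rules still hold; they are just enforced at runtime instead of at compile time. A small illustration:

```rust
use std::cell::RefCell;

// Interior mutability: a shared reference can still mutate the value,
// but the borrow rules are checked dynamically.
fn bump(counter: &RefCell<u32>) {
    *counter.borrow_mut() += 1; // would panic if already borrowed
}

fn main() {
    let counter = RefCell::new(0u32);
    bump(&counter);
    bump(&counter);
    assert_eq!(*counter.borrow(), 2);

    // try_borrow_mut surfaces the runtime check without panicking:
    let guard = counter.borrow();
    assert!(counter.try_borrow_mut().is_err()); // shared borrow is live
    drop(guard);
    assert!(counter.try_borrow_mut().is_ok());
}
```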


I suspect lifetimes get you the vast majority of the benefit of immutable data for UI purposes, tbh. It lets you ensure that references aren't retained or accessed at the wrong time, unless they provide an explicit way to bypass that (i.e. the defaults for data types is safe).


I wonder how event handling will look in whatever Rust GUI library wins out.

What API alternatives are there to callbacks for widgets that may signal multiple events? One option would be to pass an implementation of some “Events” trait which would have methods to be called for each event kind. This is basically callbacks (I don’t think Rust offers a way to make inline trait implementations, and I don’t know if there’s a way to implicitly have default implementations that do nothing). This feels similar to Java, where you might subclass Button (I think; it’s been a while since I looked at any Java at all, let alone GUI code).

One way to deal with events is a more imgui-like api where you write e.g.

  if make_button("click me") {
    /* just been clicked */
  }
But I don’t see how this would work for lots of events. One option would be to have the return value be Option<ButtonEvent>, where ButtonEvent is an enum of all possible events a button may signal; but there isn’t subtyping for variants, so you either need lots of similar types for different controls (and lots of pointless conversions), or the types say that a control may signal events which it in fact never does. Also, multiple events may hit a widget in one frame (e.g. mouseEnter, mouseMove; focus, mouseDown; mouseUp, click), so the return value should really be a sequence, and you lose the ability to use “if let” when making the controls. A second issue is that this seems likely to add a frame of latency in many cases (e.g. if you have a wizard with a “next” button, you get the mouse event for the button being clicked by rerendering the frame as-is, and only on the next render do you get to draw the next page).
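One possible shape of the events-as-values idea discussed above, with all names (`Ui`, `ButtonEvent`, `pending`) invented here and not taken from any real toolkit. Returning a `Vec<ButtonEvent>` sidesteps the "multiple events per frame" problem, at the cost of losing `if let`:

```rust
#[derive(Debug, PartialEq)]
enum ButtonEvent {
    Hovered,
    Pressed,
    Clicked,
}

struct Ui {
    // Queued events for this frame, keyed by widget label (toy model).
    pending: Vec<(String, ButtonEvent)>,
}

impl Ui {
    // Declares the widget and returns every event that hit it this frame.
    fn button(&mut self, label: &str) -> Vec<ButtonEvent> {
        let (mine, rest): (Vec<_>, Vec<_>) =
            self.pending.drain(..).partition(|(l, _)| l == label);
        self.pending = rest;
        mine.into_iter().map(|(_, e)| e).collect()
    }
}

fn main() {
    let mut ui = Ui {
        pending: vec![
            ("next".into(), ButtonEvent::Hovered),
            ("next".into(), ButtonEvent::Clicked),
        ],
    };
    // Several events can hit the same widget in a single frame:
    let events = ui.button("next");
    assert_eq!(events, vec![ButtonEvent::Hovered, ButtonEvent::Clicked]);
    if events.contains(&ButtonEvent::Clicked) {
        println!("advance wizard page");
    }
}
```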

Perhaps I should just look at the code to see what Druid and crochet do.


We're still figuring this out, a discussion on the #crochet stream in our Zulip today is on this very topic. At a very high level though, many events (mouse movement, keyboard, etc) are handled by widgets, not by app logic, and those are dispatched to the widget by the toolkit calling into the widget Trait's `event` method. The idea that a button is clicked is different, we tend to call that an "action" rather than an event. In current Druid, it's a callback which is given access to mutable app state (often through a lens). In Crochet, it's placed in an "action queue" and the app logic is rerun when the queue is nonempty, then everything is rendered after the app logic has had a chance to make its mutations to the view tree.

We don't have the one frame latency, as it's not imgui under the hood. (There's a bug in the Crochet prototype on this, but it's known and will be fixed).

Hope this helps clear things up a bit.
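For readers following along, here is a rough, generic sketch of an action queue as described; this is not Druid's or Crochet's actual code, and all names (`Action`, `AppState`, `app_logic`) are invented:

```rust
use std::collections::VecDeque;

// Widgets translate raw input into high-level actions...
#[derive(Debug)]
enum Action {
    Increment,
    Reset,
}

struct AppState {
    count: i32,
}

// ...and app logic is rerun while the queue is nonempty, mutating
// state before the next render.
fn app_logic(state: &mut AppState, queue: &mut VecDeque<Action>) {
    while let Some(action) = queue.pop_front() {
        match action {
            Action::Increment => state.count += 1,
            Action::Reset => state.count = 0,
        }
    }
}

fn main() {
    let mut state = AppState { count: 0 };
    let mut queue = VecDeque::new();
    // Pretend a button widget queued these on click:
    queue.push_back(Action::Increment);
    queue.push_back(Action::Increment);
    app_logic(&mut state, &mut queue);
    assert_eq!(state.count, 2);
    // Rendering happens only after the logic has drained the queue.
    println!("render: count = {}", state.count);
}
```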


What about drag gestures? I.e., if you want to enable a particular widget such as an image display or text box to be draggable, how should the event be passed in and handled?

These gestures seem not to be internal to the widgets; they also mutate the widget's state in one way or another and need some abstraction to handle the common behavior.


That is indeed a complicated case we haven't fully figured out yet. We've started thinking about it, but don't yet have an implementation.


Something tells me that you will end up with HTML/CSS/DOM and the bubbling/sinking event propagation schema (used in browsers) as the most flexible one; see: https://en.wikipedia.org/wiki/Event_bubbling

I have made quite a lot of different attempts at the GUI Holy Grail in the last 20 years ( https://sciter.com/10-years-road-to-sciter/ ). I am pretty confident that the DOM and DOM events are the most flexible (and so most robust) foundation so far.
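The bubbling phase itself is a small algorithm. A toy Rust sketch (all types invented here) of offering a click to the hit-tested target first and then to each ancestor:

```rust
struct Widget {
    name: &'static str,
    handles_click: bool,
}

// `path` is the chain from the root widget down to the hit-tested target.
// Bubble phase: walk from the target back up toward the root and stop at
// the first widget that handles the event.
fn dispatch_click(path: &[Widget]) -> Option<&'static str> {
    path.iter().rev().find(|w| w.handles_click).map(|w| w.name)
}

fn main() {
    let path = [
        Widget { name: "window", handles_click: true },
        Widget { name: "panel", handles_click: false },
        Widget { name: "label", handles_click: false },
    ];
    // The label doesn't handle clicks, so the event bubbles to the window.
    assert_eq!(dispatch_click(&path), Some("window"));
}
```

A "sinking" (capture) phase would walk the same path in the opposite direction before bubbling.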


The problem is that to make it flexible in a way that allows for GUI construction blocks, with Rust one would end up with Rc<RefCell<>> everywhere, like it happens in Gtk-rs.


Is that all that bad of a thing? You could combine them into a single type, which then becomes a type marker for "is / related to a UI element" and perhaps more palatable to type.
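Something like this, presumably (the `Shared` alias and `Model` type are invented here): a single alias plus a constructor keeps the `Rc<RefCell<T>>` plumbing mostly out of sight:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// One alias marks "shared, mutable UI state" throughout the codebase.
type Shared<T> = Rc<RefCell<T>>;

fn shared<T>(value: T) -> Shared<T> {
    Rc::new(RefCell::new(value))
}

struct Model {
    title: String,
}

fn main() {
    let model = shared(Model { title: "untitled".into() });
    // Two "widgets" holding the same model, as a GUI tree often needs:
    let menu_view = Rc::clone(&model);
    let title_bar = Rc::clone(&model);
    menu_view.borrow_mut().title = "report.pdf".into();
    assert_eq!(title_bar.borrow().title, "report.pdf");
    assert_eq!(Rc::strong_count(&model), 3);
}
```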


Versus what, languages with proper GC support?

Plenty: a decrease in productivity, basically back to Objective-C before GC/ARC or to doing COM in C; lack of support in GUI designers, which need to always map to the same types; and, building on top of that, a big attrition problem for component libraries.


Not sure if I'm missing something, but how does Rc<RefCell<...>> equate to "Objective-C before GC/ARC"? It seems roughly equivalent, in the same way that smart pointers are roughly equivalent: the lifetime is inferred by use, and retain cycles are a problem. Arc could be a little bit more efficient (which does matter in some UI work), but I don't see how it'd change the use of a framework, and iOS and macOS devs have used Objective-C with ARC for years, successfully and often happily.


A good GUI layer should be language-agnostic. Reasons:

1. GUI is non-trivial. It is not just about the problems of CPU/GPU rasterization (30...40% of all GUI problems). There are others that we usually forget about: accessibility, customization/styling, DPI and screen-size awareness, etc.

2. It makes no sense to invest in development of only-for-Rust GUI libraries. Just use from Rust something ready that has a stable plain-C ABI, so that people from C/C++ and Delphi can use it too. That would be more robust in the long run.

3. Rust is not that good as a language-behind-UI. The ownership graph of GUI objects can be quite complex, contain loops, and may not be known at compile time. Something GCable plays significantly better in this respect. That's why JS is so popular in UI. Sciter's script is even better than JS, but that's another story.

So "GUI 2021" as a subject of blogging makes sense, but "Rust GUI 2021" is a bit hopeless I think.


> 2. It makes no sense to invest in development of only-for-Rust GUI libraries. Just use from Rust something ready that has a stable plain-C ABI, so that people from C/C++ and Delphi can use it too. That would be more robust in the long run.

Disagree. Anything that uses a C ABI has to keep track of ownership externally, and that tends to be error-prone; you end up with excessive copying, leaks, or both. And this is a particularly thorny problem for GUIs because, as you say, ownership of GUI objects can be complicated. So it makes a lot of sense to have a Rust-specific GUI library that leverages Rust's ownership-tracking capabilities to make this easier.
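As a small illustration of that argument (the `TextBox` type here is invented): a Rust-native API can state in its signatures whether it borrows or takes ownership, which a C ABI can only document in prose:

```rust
struct TextBox {
    contents: String,
}

impl TextBox {
    // The toolkit *borrows* the string: no copy of the caller's data is
    // implied beyond this method, and the borrow checker rejects any
    // caller that mutates or frees it while the borrow is live.
    fn set_text(&mut self, text: &str) {
        self.contents.clear();
        self.contents.push_str(text);
    }

    // Or the caller can *move* the string in, transferring ownership
    // explicitly in the signature itself.
    fn take_text(&mut self, text: String) {
        self.contents = text;
    }
}

fn main() {
    let mut tb = TextBox { contents: String::new() };
    tb.set_text("hello");
    let owned = String::from("world");
    tb.take_text(owned); // `owned` is moved; using it again won't compile
    assert_eq!(tb.contents, "world");
}
```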


UI and BL (business logic) layers can interact by messages.

Think about Protocol Buffers (https://developers.google.com/protocol-buffers ) or JSON/BSON as the core of a communication protocol:

   // App side:
   Window* pw = new Window();

   pw->willReceiveMessage("account.modified", cb1);
   pw->willReceiveMessage("account.closed", cb2);
   pw->willReceiveMessage("account.wantsNew", cb3);
   ....
   pw->loadUI("accounts-view.htm");
   ....
   pw->fireMessage("account.show", accountData); // accountData: json

Messaging, as a concept, is simple and data ownership is crystal clear.
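The same pattern translates directly to Rust's std channels, where sending a message moves ownership of the data. A toy sketch with invented message names following the pseudocode above:

```rust
use std::sync::mpsc;

#[derive(Debug, PartialEq)]
enum UiMessage {
    AccountModified { id: u32 },
    AccountClosed { id: u32 },
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // The UI layer fires messages; ownership of each message moves
    // through the channel, so "who owns the data" is never ambiguous.
    tx.send(UiMessage::AccountModified { id: 7 }).unwrap();
    tx.send(UiMessage::AccountClosed { id: 7 }).unwrap();
    drop(tx); // closing the sender ends the stream

    let received: Vec<UiMessage> = rx.iter().collect();
    assert_eq!(received.len(), 2);
    assert_eq!(received[0], UiMessage::AccountModified { id: 7 });
}
```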


Sure, but at the cost of not being able to share logic between the two layers and giving up most of the advantages of native UI. For example, you can't have an encapsulated component that combines user input with validation that comes from the business logic - you'd have to have an input box that sends a message to the business layer when the user inputs a value and then the business layer does the validation and sends a message back.


> you can't have an encapsulated component that combines user input with validation that comes from the business logic.

Why not? On UI layer I can always do:

    var response = bl.sendMessage("account.validate", accountData);
    if( response && response.valid) ...
    else if( response && !response.valid )
Again, we have been doing messaging for years now; AJAX/REST are messaging protocols...

I am not saying that this is the only way...

In Sciter you can use custom native UI components extending the existing DOM; they can handle events, do custom painting, etc. And you can develop those components in Rust, C, Go, or whatever you like.


> Why not? On UI layer I can always do:

    var response = bl.sendMessage("account.validate", accountData);
    if( response && response.valid) ...
    else if( response && !response.valid )
Like I said, you're no longer encapsulating the validation in the component, where it logically belongs from a business point of view. You're being forced to distort your architecture for these operational concerns. And it gets worse as your structures and UI get more complicated - e.g. if I want to validate several fields within a single database transaction, that's the sort of thing that Rust's ownership system is ideal for - each datastructure can know how to validate itself given a borrowed connection handle.

> Again, we are doing the messaging for years now, AJAX/REST are messaging protocols ...

Yes, and the overhead of having to serialise everything is the biggest problem with web UI. If you're going to separate the UI from the business logic like that then why even bother making a native UI at all?


I'm actually playing with a message-passing-based UI system right now, mostly as a prototype; long story below. I'm making it considerably more Erlangy than yours -- essentially, you have message-passing with a sequential/direct-style interface on top of it, so one can write code like:

  (send-to foo '(:bar 1 "foo"))
  (setf val (recv))
  (format t "~A~%" val)
  (send-to baz `(:thingy ,val))
The basic idea is, you have some set of primitive widgets (e.g. :text, :image, :text-input), which are "just data," like in HTML. Components, rather than being objects or functions, are instead actors. Other, non-component, actors would exist as well, including a compositor actor associated with each window, which you would be sending messages that look like:

  `(:update (:layout-box (:direction :vertical)
              (:layout-box (:direction :horizontal)
                (:text-input (:layout (= (width :self) (/ (width :parent) 2))
                              :contents "Hello, world!"))
                (:text (:layout (= (width :self) (/ (width :parent) 2)))
                  "Hello, world!"))
              (:include ,other-actor)))
Breaking this down, it creates a layout box (invisible box for layout purposes), containing the equivalent of the HTML:

  <div>
    <input style="width: 50%;" value="Hello, world!">
    <span style="width: 50%;">Hello, world!</span>
  </div>
Below that, it places... whatever OTHER-ACTOR's current render tree is. Other than having its name, the actor sending this tree doesn't need to call into OTHER-ACTOR at all.

The compositor is in charge of gluing together the tree fragments, laying them out (with Cassowary), "legalizing" them into primitives the current platform supports, and displaying them to the screen. It also sends back input events to the actors involved, probably with some CALL-NEXT-METHOD-like primitive to opt into bubbling.

This lets the compositor operate concurrently with state updates to components, and components to operate concurrently with each other. This is good in the general case of "new processors getting more cores, not higher clocks," and especially since I'm going to be using this mainly on ARM devices, where this is especially true, and being able to split computations to work with big.LITTLE is a huge win for battery life.

I think this also should help quite a bit with ensuring a user interface remains interactive while performing lots of work, if I were to implement preemption (or context switch checks often enough to act like preemption is occurring, like Go has historically had). The only bottlenecks where the whole interface can get stuck are in the compositor, and the only expensive thing there is Cassowary. And if that's fast enough for Apple, it's fast enough for me.

Once this is in a state where I can take pretty screenshots (okay, considering my aesthetic taste, not too pretty...), I'll probably submit a blogpost version of this here with code samples. If you wanna discuss this beyond the length HN makes convenient, feel free to drop me a line on https://lists.sr.ht/~remexre/public-inbox (~remexre/public-inbox@lists.sr.ht).

Long story:

I recently got a PinePhone, and I'm experiencing keyboard latency that makes it incredibly annoying to actually type on it. I only use a small handful of apps on my phone, most of which have CLI or library equivalents, so I figured it wouldn't be that much of a loss to reimplement crappy versions, if I at least had a nice keyboard. (Plus, my old phone isn't actually broken yet, so instability here isn't as big of a problem as it might be otherwise.)

So, I reflashed it to a non-graphical build of Arch Linux ARM, and set up SBCL with Swank on it, with the intent of making a new interface to replace Phosh and the apps that run under it. This has the pretty nice effect that I can live-edit code that's running on the phone itself from Vim on any of my other machines (or even several at once, if I want). Plus, Lisp itself has great performance relative to other languages that provide this level of dynamism. Lastly, since it's set up as a systemd service, even when I mangle EGL state enough to get a segfault, it restarts and generally Just Works Right.

I've been meaning to try out this concept, but always meant to implement a custom language that looks more like Erlang semantically to do it with. Using Lisp lets me buck Greenspun's tenth rule, though, and forces me to avoid spending a year implementing my own efficient code generation. Plus, Lisp already runs great on my phone.

I am probably performing the do-notation transform with a macro, though, to allow for the green thread implementation I need for large numbers of actors. Alternately, I could use cl-cont, but I'm not sure that I want continuations exposed to the programmer.


#1 is a non-sequitur. Yes, GUI is non-trivial. But why would that mean that a good GUI layer should be language-agnostic? It just means that if it’s not, you will need to do more work to make it good.

#2 is nonsense. You assert it makes no sense to invest in only-for-Rust GUI libraries, with no support whatsoever; I flatly disagree. Part of my reasoning is that, if nothing else, an exclusive-to-Rust GUI will be ploughing new ground because of the differences that are forced on the GUI library by the differences in the language, and it will come up with new ideas that will help others in other languages, progressing the state of the art. So yeah, I think it makes a lot of sense, even if it is only available for Rust (though the potential for smooth cross-language operation definitely excites me—I said to Raph a few days ago that I was interested to note that it was actually this that was exciting me the most about the Crochet architecture, that you can have the GUI library in one language and use it entirely from a separate language). But then also as imm says, if you use bindings to something else, you’re throwing away a significant advantage of using Rust. (Similarly, you’ll want to be careful if you’re exposing Rust code to other languages to make sure that you’re not harming either side because of the difference in philosophies.) And if I want to work in Rust, why should I care about C/C++/Delphi? Why should I make compromises on the sort of GUI library I can have just because other languages exist? (Sure, there are definite advantages to pooling resources across language ecosystems, but there are costs as well, that’s my point, and why I think it is unreasonable to say “it makes no sense” as you did.)

#3 is brimming full of assumptions that there is good cause to suspect are false. You were present in the Towards Principled Reactive UI thread a few days ago (https://news.ycombinator.com/item?id=24599560), which is basically all about developing models that don’t depend on complex ownership graphs. Sure, GC languages work better for the traditional breed of observer-based GUIs, but that’s not the only feasible approach—it’s merely the simplest to implement at the library level, which is why it got so popular. So Rust is probably not a good language for supporting that particular traditional style of UI, but there’s reason to suspect that it may actually be very good for other types of UIs.


A major motivation of the Crochet prototype is that the Rust implementation of the widget tree can be driven by scripts.


Try considering it the other way around.

Native application is:

1. HTML - declaration of UI structure (think about accessibility here too).

2. CSS - declaration of how that UI structure shall be presented to the user.

3. script - declaration (too, sic!) of how UI structure (a.k.a. DOM) shall be updated in response to user events and application events/state.

4. Native code of application - generates events for the UI, provides data to be presented, consumes UI events and data updates from the UI.

I mean that native languages in Sciter/Rust, Sciter/C++, Sciter/Go applications are talking more with script rather than to particular UI objects. And that script plays a role of declarative/configurational layer translating linear application logic into asynchronous UI concepts. UI and business logic talk with each other in terms of messages - loosely coupled layers, separation of concerns and memory organization, all that.


2. Like what? Name a good cross platform GUI C lib.

3. Remains to be seen.


Being the creator of Sciter (https://sciter.com), I would insist it is good in the following respects:

0. Really cross platform: Win, Mac, Linux, Mobiles, IoT.

1. Uses a stable (10+ years) and compact API - really just 30 functions

2. Uses well known and time proven constructs: HTML/CSS

3. Battle tested, in production since 2006, Sciter UI works on 460 mln PCs and Macs.

Ah, and Sciter/Rust is already here: https://github.com/sciter-sdk/rust-sciter


Any chance that the Servo renderer could be leveraged for cross-platform GUIs as an alternative to Electron?

I don't know if there would be any meaningful benefits over Electron, but I can see why it might be attractive to build everything in Rust. Seems like you could extend Servo with features required by your app, too.


I've certainly been following Servo, and there are bits here and there where we've tried to share infrastructure, but ultimately we're making some pretty different decisions, mainly not to base things on Web technology so much, but align more with capabilities provided by the platform.


Safari has proved to me that the web has the potential to be much, much faster and more memory/battery efficient than most people who just use Electron/Chrome believe.

Projects like ultralight[0] also show there's even much higher room for improvement. That, and libraries like React starting to leverage WASM, and SharedArrayBuffers sharing memory and potentially moving all application logic off-thread (worker-dom is a cool project, I was able to get a medium-size React app fully running with it).

Finally, you could fork Safari and remove quite a few features. A lot of legacy things, simplifying the DOM quite a lot, etc.

If someone did that, I think we'd have a really ideal setup for cross platform apps. A much faster, lighter, smaller bundled app that supported 99% of modern web apps out of the box.

[0] https://ultralig.ht


Existing browser engines (Chromium, WebKit, Gecko) definitely have a lot of cruft in them for compatibility purposes. An engine designed to be much lighter weight and modern would be a great fit for embedding into applications.

As opposed to writing a new engine from scratch (Servo), I wonder how feasible it would be to fork one of these existing engines and modularize its contents? The ability to start from (close to) zero and then pick and choose which features you want in your engine would be pretty attractive to lots of developers. The application that never expects to play any sort of media or access any peripherals could exclude the respective components from the build entirely.


> Safari has proved to me the web has potential to be much, much faster and more memory/battery efficient than most believe who just use Electron/Chrome.

I've been using Safari more recently and it really is a lot better than Chrome, and especially Firefox. I don't know how Apple does it, but they somehow made a Porsche in an SUV world. Thanks for the ultralig.ht pointer, looks quite interesting!


> Finally, you could fork Safari and remove quite a few features.

I'm really curious how you would fork a proprietary project.

> If someone did that, I think we'd have a really ideal setup for cross platform apps.

But Safari isn't cross-platform.


WebKit isn’t proprietary. Ultralight, the thing I linked to, is a fork of it.

You’d just need to build a UI.


webkit != safari. You might "just" need to build a UI, but if you want it to be cross-platform, that brings you back to the question of what GUI toolkit to use, or you have to handle the mess of different APIs for different platforms yourself.


You’re really confused. We’re not talking about a browser, just an app platform. You can build WebKit on every major platform. If you’re talking about including a UI kit, there are plenty of those already written in HTML/CSS/JS. Once WebKit is running there are no “different APIs”... that’s the whole point.


As I understand it webkit is just the rendering engine. You would still need to build the chrome around it, create and manage windows, etc.


Having just started looking more into winit, I'm wondering how different druid-shell is. There are a lot of similar ideas, and I know from working on a couple of other projects that platform abstractions are often a nightmare to build and test.

How does druid test across platforms like the web, Wayland, x11, macOS, Windows, etc? I feel like testing is one of the biggest missing links in the Rust GUI ecosystem right now.


This is a very complicated issue. You can read our reasoning at the time in https://github.com/linebender/druid/issues/16

Since then, I've both regretted the fact that we're duplicating work, and been happy that we've been able to move more nimbly than we would if we were dragging the rest of the winit userbase along. A recent example of that is keyboard handling, and I'm expecting a similar situation soon with IME, as we'll want to handle that very differently than winit's current implementation.

Take a look at our CI for the testing story. We have some tests, but they're mostly for platform-independent logic; we don't run a lot of platform specific code in CI. That said, Rust's type system is strong enough to catch a lot of potential breakage at compile time. Other than that, people tend to notice broken things pretty quickly. So overall testing is one of the many things we want to improve, but works reasonably well.


If you don't mind, I'd love to hear more about what you are doing differently with keyboard handling (and IME support).

I'm asking partly because I know this stuff is rife with complexity and edge cases, and also because my head's in that space a bit right now, since I've been slowly fixing a bug in Alacritty that's been driving me crazy with modifiers at startup on X11.


On keyboard handling, https://github.com/linebender/druid/issues/1040 is the main tracking issue. That references a number of PR's, of which 1049 is the core of the platform keyboard handling. I'd be very pleased for other people doing this (including winit) to use this as a model, as a fair amount of study went into getting it right, especially on Windows.

Regarding IME, we haven't done it yet (we're trying to be fairly conservative in our roadmapping), but we plan to wire up actual composition regions and so on, and hope to plumb IME to it using a cross-platform abstraction. My understanding of the winit code is that it varies a lot from platform to platform, but seems to mostly be synthesizing keyboard events (ReceivedCharacter) when an IME request such as "insertText" happens.


A wrapper around wxWidgets would be nice in my opinion. Every language community has a tendency to want a "pure" implementation of everything in their preferred language, and ultimately this just splits efforts and leaves everybody with half-finished projects.


Exactly! So much effort has been spent on UI kits already, and we still have very few options - most, as you say, half-finished!

If only people had as much energy to help finish/improve existing, usable UI kits instead of re-inventing their own just for the glory of it.

I have no doubt this effort will end up in the same exact way as the xi-editor (https://raphlinus.github.io/xi/2020/06/27/xi-retrospective.h...) - a Rust text editor - did: huge amounts of work spent reinventing the wheel, one or two minor original contributions, but ultimately, ending far short of its goals and not delivering anything.


I haven't tried it out seriously yet, but vgtk (https://github.com/bodil/vgtk) seems to be an interesting approach to GUIs in Rust.


I've tried it out a little bit, and the biggest problem is that it's implemented with deeply recursive macros. The concept is great, it seems to work, but there's no way you will ever have decent compile times.


For those looking for interesting rust GUI projects, I think the one that has stuck out the most in my memory is Conrod[0][1].

If I were to try and write a cross-platform single-binary 2D GUI application with rust these days, it's the first thing I'd pick. Unfortunately, I haven't done that, so whether it's the right choice for you is still up for debate.

There's also stuff like gtk-rs[2] for those who want to have a bit more of a trusted gtk-flavored ecosystem.

[0]: https://docs.rs/conrod_core/0.70.0/conrod_core/guide/chapter...

[1]: https://github.com/PistonDevelopers/conrod

[2]: https://gtk-rs.org/


I hope the debugging story for Rust gets better: https://nbaksalyar.github.io/2020/05/19/rust-debug.html


The debugging story is pretty great already. Honestly that expression stuff he is talking about is pretty niche and very rarely works except for toy examples even in C++.

Debugging Rust code in VSCode with Rust-analyzer and LLDB works pretty flawlessly for me, and it understands and displays all the types/values in the GUI so you don't really need expression evaluation.


Apparently I use plenty of toy examples on .NET, JShell, VSC++ immediate debugger and WinDBG consoles.


Naive question perhaps – what is missing in Gtk-rs?


Packaging gtk for systems other than Linux can be a bit of a nightmare, and the experience on platforms other than Linux and desktop BSD leaves a lot to be desired.


RAD tooling, not having to type Rc<RefCell<>> and clone everywhere (their samples even have a macro for it), or adding yet another library layer with relm.
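For readers unfamiliar with the pattern being complained about: gtk-rs signal handlers are `'static` closures, so shared state usually ends up wrapped like this (plain-Rust sketch, no GTK; the counter is a stand-in for widget state):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared widget state; a 'static closure can't borrow a local,
    // so each handler needs its own Rc clone of the state.
    let count = Rc::new(RefCell::new(0));

    let count_for_click = Rc::clone(&count);
    let on_click = move || {
        *count_for_click.borrow_mut() += 1;
    };

    on_click();
    on_click();
    assert_eq!(*count.borrow(), 2);
}
```

With several handlers sharing several pieces of state, the per-closure cloning becomes the boilerplate the gtk-rs samples wrap in a macro.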


As a casual passer-by, I'd also like to know the trade-offs


Sciter is what I'm using, and I love it. It's not a pure Rust solution though.

https://sciter.com/


It’s worth mentioning that there’s currently a Kickstarter campaign to open source Sciter: https://www.kickstarter.com/projects/c-smile/open-source-sci... It’s by the author, who seems to have some worthwhile goals for using the money besides simply opening up the code.


It looks great. I personally have nothing against HTML UIs other than electron's excessive resource usage. My primary complaint is that chrome's engine was never meant to be used once per application. Sciter fixes that.


You should check out femtovg, a canvas 2d API based on nanovg https://github.com/femtovg/femtovg

In some sense, the rendering API is the hard part of trying to build your own GUI toolkit.


This is more a topic for Rust 2020-2030 than Rust 2021.


> On Linux, Druid requires gtk+3

That kind of defeats the purpose?

> Alternatively, there is an X11 backend available, although it is currently missing quite a few features.

It would be good to also support Wayland compositors without GTK.


If you are targeting Linux with a professional GUI application your only options in 2020 are GTK or Qt (or toolkits that are built on top of those like wxWidgets). The other options lack manpower or features.

In 2020 you need HiDPI support, Wayland, and accessibility. You probably also want a toolkit that has a strong developer community. GTK and Qt have these. FLTK is under active development but it is missing a lot of modern features. Maybe it will get there by 2022.

The other option, rolling your own, is a ton of work and quite a bit of "reinventing the wheel". It's certainly possible but it needs a passionate and dedicated team.


For a subset of applications DearImgui is suitable.

There's a gallery of screenshots here. https://github.com/ocornut/imgui/issues/123

Unlike the others mentioned it is an immediate mode GUI, which is an interesting approach.


Rust has decent GTK bindings, and that's basically the only decent option at the moment.


But better to use relm as well; Gtk-rs alone is a bit of a pain if one doesn't want to use Rc<RefCell<>> everywhere (see the Gtk-rs samples and their clone macro).


> If you are targeting Linux with a professional GUI application your only options in 2020 are

There are other options. Maybe unpopular options, but options nonetheless: Electron, and immediate mode GUIs.

They certainly have downsides. But there are "professional" GUI applications in widespread use that are written with both.


Immediate mode makes accessibility difficult, in my experience. That's certainly something worth considering.


Applications made in Electron are not professional. I am shocked anybody uses such applications. Electron is basically a web browser displaying a web page.


Are you suggesting that vscode is "not professional"?


I thought the whole idea is to create an alternative using Rust. Otherwise you can just use existing GTK or Qt bindings.


What would you propose doing differently than what we're doing? This is a serious question.


Sorry, couldn't answer before, HN was doing its weird "you post too often" dance.

I guess I misunderstood the intent behind the project. I thought it's like a full alternative to GTK / Qt but in Rust which would be really nice to have, but I get that it would be a much bigger project than something built on top of them.


Have you looked into iced? It targets Vulkan or WebGL.


Ah, yes. There is cross-platform infrastructure we could use, which is what Iced does. We have chosen to do things a bit differently. In general, we use platform capabilities where they're available, for a much lighter weight build and less impedance mismatch with native look and feel. It's a tradeoff, and one of the downsides is that the Linux port needs a bit of extra attention.

Ultimately, I believe our approach will yield higher quality results, but there's a lot to learn from Iced as well.


I remember you wrote before that "there is no such thing as native GUI." Is that more of a high level situation, with text rendering and other lower level things still best handled by "going native"? What other things count as low level like this?


Yes. What I meant by that was more a reference that platforms increasingly support diverse ecosystems of UI toolkits, especially at the high level. Even on mac there's a choice between SwiftUI and AppKit (technically Catalyst too, but that doesn't seem to be a hit), while on Windows there is even greater diversity. So basically it's a way of saying "just use the native toolkit" doesn't actually solve as many problems as one might think.

At the lower level, for some things you really have to integrate deeply with the platform, and for others (text layout is one), there are advantages, including faster builds and smaller binaries. (Ultimately I'd like to have a highly GPU accelerated 2D graphics library that does everything, including text, but that's some ways off and doesn't block current work)


He mentions it about halfway down the article along with other systems he's examining.


> It would be good to also support Wayland compositors without GTK.

So, which text and vector renderer are you proposing? That's not snark, that's a real question.

A GUI that can actually scale requires both a vector rendering engine and a text rendering engine.

On Linux, that's Cairo and Pango--which are Gtk.

I believe that your only other open source vector engine is Skia, from Google.

Am I missing any other options?


>So, which text and vector renderer are you proposing? That's not snark, that's a real question

I was going to write that there are several, but you already mention most of them. You say though:

"On Linux, that's Cairo and Pango--which are Gtk."

Yeah, they're not really GTK.

Or in any case that's not what people mean when they say they wished a Rust GUI that doesn't use GTK. They mean one that doesn't use GTK+ the lib and widgets, not whether it can use Cairo and Pango...


Qt. It supports Wayland. It can do high DPI scaling.


Can you pull the Qt vector rendering engine out and use it independently to render to a non-Qt surface? I've never heard of anybody doing that.


Skia is a decent option, or if it has to be written in Rust, then something Rust based.



One can use Cairo and Pango without GTK?


To a certain extent. You can probably punt the UI widget stuff. However, I suspect certain things like glib are required dependencies.


Cairo does not depend on glib, but pango does.


There's also Graphite (still alive?) that doesn't...


>That kind of defeats the purpose?

Exposing a nice, standardized Rust interface would be the purpose. Then you can switch the backend later (for a custom Rust one or something else).


Many distributions (Ubuntu, etc.) treat GTK as the platform's native GUI toolkit, at least in terms of customization and accessibility features. And, from what I've read on @raphlinus' blog, one of the goals is to integrate well with the native platform. So I think, rather than defeating the purpose, it's entirely in line with the purpose.


Not with the purpose I expected first - a GUI toolkit in Rust all the way through, which would imply no GTK or Qt.


You have to start somewhere. Making use of an existing system is a good way to bootstrap a project like this.


If that's the eventual goal, then sure, it's a pragmatic approach.


> That kind of defeats the purpose?

Which purpose?


Speaking as an outsider, I have been watching Rust very closely for two reasons:

* I hope it will become a viable alternative to C/C++ for graphics programming. Unfortunately, last I checked as of ~6 months ago, Rust graphics libraries are still in their infancy.

* I hope it will eventually replace Electron as the go-to for multiplatform app development. IMO this cannot realistically be done by relying on native APIs / platform-specific tools like GTK, since we need something closer to functional reactive programming a la React/Svelte/SolidJS, with more emphasis on performance. If I wanted to build a GTK app, I would build a GTK app. See also [1].

I believe Rust has the potential to accomplish both goals, partially due to the design of the language, but mostly because I have faith in the very smart people who work on Rust to make good decisions in the long-term.

[1] https://raphlinus.github.io/rust/druid/2020/09/25/principled...


>I hope it will become a viable alternative to C/C++ for graphics programming

This already exists, it is called Free Pascal + Lazarus.


Care to elaborate? AFAIK C/C++ emerged as the alternative to Pascal way back when. No sense moving backwards--it's clear Rust is on track to displace several common uses for C/C++.


UNIX, like the browser for JavaScript, helped the alternative win against the competition a bit.


What kind of graphics libraries were you looking for?


Just basic, stable support for OpenGL / Vulkan. My reading from ~January of this year was that there were several competing methods for graphics support in Rust. I tried a few and the boilerplate was quite a bit more than expected, and I was unsuccessful getting the available samples to run on my Linux machine.

Admittedly, I know nothing about what's actually going on behind the scenes when it comes to graphics + Rust. Just my first impressions as an outsider. I found the existing libraries quite difficult to get started with.

(I'm someone who doesn't know much Rust and is waiting for some of this functionality to mature -- someone already invested in Rust is certainly in a better position to take advantage of the existing graphics support)


Ok, in this case, I think the situation is fairly good. The de-facto standard for raw Vulkan in Rust is Ash [1], and raw GL can be worked with via gl-rs [2]. It's when you go to higher levels that things start diverging more.

  [1] https://github.com/MaikKlein/ash/
  [2] https://github.com/brendanzab/gl-rs


Thanks! I think I overlooked those before--last time I got tangled up in gfx-hal. I'll give these a try!


There is nothing wrong with gfx-hal. I care deeply about it, and the wgpu implementation is built on top of it. It's just for users who need portability and performance, while Ash gives you close-to-raw access to Vulkan.


Of having a pure Rust-based GUI without dragging in non-Rust dependencies.


That's nice to have, but maybe not a central goal. It's worthwhile to have a library that's designed with Rust in mind and can take advantage of its features and idioms, even if it uses non-Rust libraries somewhere under the hood.

From the article:

> One strength is Rust’s wide “dynamic range” – the ability to describe application logic in high level terms while still being attentive to low level details.


> That kind of defeats the purpose?

Disagree. To me, GTK 3/4 are the main UI toolkits. If your software manages to look like the rest of my GUIs (read: themes, fonts) then I will be happy.

However, if I install your software and it looks alien - nah. It had better look amazing.


Do you mean main in Gnome? I prefer KDE and Qt to anything GTK based if it's not about Rust. And if it's about Rust, I thought it could be good to have some alternative that's neither GTK nor Qt and is just using Rust all way through.


Define "all the way through"? I mean, on Windows, for example, you're always going to have some C or C++ code from Win32 DLLs on the stack in any GUI app, because you'll need a top-level window even if you're rendering everything else yourself, and it will have a Win32 message loop etc. But if that's acceptable, what's wrong with using the system libraries for text rendering, or even complete widgets?


Since it's a UI library, I'd say deep enough to take care of the actual UI. Whether it depends on the system libc and the like matters less.

I.e. imagine being able to use this library on something like RedoxOS, not just on Linux. That would fit the idea.


But what constitutes "actual UI"? You have to link to way more than system libc to make a GUI app on Windows.

And it's kinda orthogonal to using it on OSes that don't have their own UI layer, because that can be implemented as another backend. For example, wxWidgets wraps native widgets on Win32/macOS/X11 (with Gtk considered "native" for the latter) - but it also has wxUniversal, that renders by itself, and is the backend normally used on platforms like DirectFB.


More effort needs to be put into porting existing GUI APIs to Rust. Port UIKit, port Flutter. These are APIs people already know. Electron is popular largely for this reason.

Or do we just enjoy reinventing the wheel?


The models those GUI APIs use have a pretty big impedance mismatch with the way Rust works.


How to design a UI is a broad question that is independent of programming language. It does not need to be reinvented.


Forgive my ignorance of Rust, but GUI toolkits are probably the strongest candidate for OOP-style development, and everything I've read suggests Rust doesn't lend itself well to OOP.

StyledObject -> View -> Text

StyledObject -> View -> ViewContainer -> Button -> ToggleButton -> MyCustomToggleButton

StyledObject -> View -> ViewContainer -> VerticalLayout -> ListView
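For comparison, the usual Rust answer is to flatten such inheritance chains into a shared trait plus composition; a hedged sketch with all names invented:

```rust
// Behaviour lives in a trait; data lives in plain structs.
trait Widget {
    fn draw(&self) -> String;
}

struct Text {
    content: String,
}

impl Widget for Text {
    fn draw(&self) -> String {
        self.content.clone()
    }
}

struct Button {
    label: String,
}

impl Widget for Button {
    fn draw(&self) -> String {
        format!("[{}]", self.label)
    }
}

// A container is just another widget that owns trait objects,
// rather than a base class its children inherit from.
struct VerticalLayout {
    children: Vec<Box<dyn Widget>>,
}

impl Widget for VerticalLayout {
    fn draw(&self) -> String {
        self.children
            .iter()
            .map(|w| w.draw())
            .collect::<Vec<_>>()
            .join("\n")
    }
}

fn main() {
    let ui = VerticalLayout {
        children: vec![
            Box::new(Text { content: "Hello".into() }),
            Box::new(Button { label: "OK".into() }),
        ],
    };
    assert_eq!(ui.draw(), "Hello\n[OK]");
}
```

A `MyCustomToggleButton` would wrap a `Button` as a field and delegate, rather than extend it.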


>Forgive my ignorance of Rust, but GUI toolkits are probably the strongest candidate for OOP style development

That's so 1991. We're on to GUI's as pure functions of state these days, get on with the program :-)


There's really no such thing. You might be using pure functions from your perspective, but someone is handling all that backing layer data and mutable information.


There's no such thing as structured programming, at the assembly level it always just compiles down to conditional jumps. There's no such thing as a digital circuit, because real-world voltages are always analogue. Etc.


Which is all dealing with retained, mutable data. Period. I'm not making the comment for the sake of pointing out _levels_ of abstraction.

I made the comment because such an abstraction doesn't at all map to reality.


> Which is all dealing with retained, mutable data. Period.

Only if you choose to model your system that way. You can push the same paradigm all the way through: immutable events, and pure functions that transform an event and a previous (immutable) state into a new state.
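A minimal sketch of that paradigm in Rust (all names here are illustrative, not from any particular framework): immutable events go into a pure function that maps the previous state to a new one.

```rust
// Immutable events plus a pure update function: the next state is
// derived from the previous state and an event, with no mutation.
#[derive(Clone, Debug, PartialEq)]
struct Counter {
    value: i64,
}

#[derive(Clone, Copy)]
enum Event {
    Increment,
    Decrement,
}

// Pure function: (previous state, event) -> next state.
fn update(state: &Counter, event: Event) -> Counter {
    match event {
        Event::Increment => Counter { value: state.value + 1 },
        Event::Decrement => Counter { value: state.value - 1 },
    }
}

fn main() {
    let s0 = Counter { value: 0 };
    let s1 = update(&s0, Event::Increment);
    let s2 = update(&s1, Event::Increment);
    assert_eq!(s0.value, 0); // previous states are untouched
    assert_eq!(s2.value, 2);
    println!("{:?}", s2);
}
```

Whether the runtime underneath mutates a framebuffer is a separate question; at this layer the model is all immutable values.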


No, you really can’t, and that’s what all the functional/immutable evangelists don’t get: you can’t.

You’re going to recreate framebuffers and font atlases and invalidate your entire UI every single time? Good joke.


So much for "not pointing out levels of abstraction". Maybe somewhere in the guts of the low-level runtime a mutable framebuffer is being used for now, sure, but I don't care about that any more than I care about whether my processor is jumping vs doing a conditional move.


Yeah, but some abstractions are leakier than others. I have never hit a real-world voltage bug or conditional jump bug, as far as I remember.


Have you ever hit a bug in the implementation of immutable data? I haven't. (And I have seen a miscompiled loop, though I agree they're very rare).


Well, now that I think about it, I have seen a compiler bug in an IBM Java compiler where an if condition would never be hit, despite the code being fine and the Sun Java compiler working as expected.


I'm not entirely convinced. Libraries like React have succeeded in web development because HTML completely lacks an extensible component model. Native applications don't have to worry about shoehorning a workable framework into stateless document renderers.


No, this is a larger trend. Both major mobile platforms are moving in this direction as well (SwiftUI, Jetpack Compose).


Tell us more ?


Check out these articles surveying the landscape of reactive UI, from the same author as the linked article:

https://raphlinus.github.io/ui/druid/2019/11/22/reactive-ui....

https://raphlinus.github.io/rust/druid/2020/09/25/principled...


Elm and React for one... and all the other similarly inspired UI libs (even SwiftUI is kind of going there)... Plus the increased interest in FP UIs for games, entity systems, etc...

https://www.freecodecamp.org/news/the-revolution-of-pure-vie...


Cycle.js, Callbags, etc.


Like every "flat" hierarchy, this is inflexible and impossible to get right (think "ComboBox" or "MenuButton", for example).

A better approach is mixins: separate ones for layout and behavior. This is much more flexible, and avoids code duplication and "crazy" inheritance hierarchies at the same time.

Rust traits are ideal for mixins, so a GUI toolkit in Rust should be just fine.
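One way the mixin idea could look with traits (names here are hypothetical; default methods stand in for shared mixin implementations):

```rust
// Separate traits for layout and behavior. A widget opts into each
// capability independently, with a default method acting as a mixin.
trait Layout {
    fn size(&self) -> (u32, u32);
    // Default "mixin" implementation shared by every implementor.
    fn area(&self) -> u32 {
        let (w, h) = self.size();
        w * h
    }
}

trait Clickable {
    fn on_click(&mut self);
}

struct Button {
    width: u32,
    height: u32,
    clicks: u32,
}

impl Layout for Button {
    fn size(&self) -> (u32, u32) {
        (self.width, self.height)
    }
}

impl Clickable for Button {
    fn on_click(&mut self) {
        self.clicks += 1;
    }
}

fn main() {
    let mut b = Button { width: 80, height: 24, clicks: 0 };
    b.on_click();
    assert_eq!(b.area(), 1920); // layout mixin, no inheritance needed
    assert_eq!(b.clicks, 1);
}
```

A ComboBox-like widget would implement both traits plus whatever extras it needs, with no single inheritance chain to contort.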


> Like every "flat" hierarchy, this is inflexible and impossible to get right (think "ComboBox" or "MenuButton", for example).

why does it work fine in Qt then ?


It doesn't work super well. It functions, but there's all kinds of funky interface design which basically exists because of the nature of inheritance.


React is a very popular GUI toolkit and it doesn't use OOP functionality at all.


https://reactjs.org/ says right on the front page: "Build encapsulated components that manage their own state, then compose them to make complex UIs."

Components managing their own state is a textbook definition of OOP.


Encapsulation is not equivalent to OOP. If it is, then why do we have two separate terms for the same thing?


React isn't the one doing the actual drawing.


That's one way to do it, but you could have something very similar that uses composition instead of inheritance (and a nice trait hierarchy to top it off). MyCustomToggleButton would contain an instance of ToggleButton and override some behavior while implementing ToggleButtonInterface or whatever. I don't know if that's the way to go but that'd work.
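A sketch of how that composition could look in Rust, using the (purely illustrative) names from the comment above:

```rust
// MyCustomToggleButton *contains* a ToggleButton and forwards to it,
// overriding only the behavior it wants to change.
trait Toggle {
    fn toggle(&mut self);
    fn is_on(&self) -> bool;
}

struct ToggleButton {
    on: bool,
}

impl Toggle for ToggleButton {
    fn toggle(&mut self) {
        self.on = !self.on;
    }
    fn is_on(&self) -> bool {
        self.on
    }
}

struct MyCustomToggleButton {
    inner: ToggleButton, // composed, not inherited
    toggle_count: u32,
}

impl Toggle for MyCustomToggleButton {
    // "Override": add behavior, then delegate to the inner widget.
    fn toggle(&mut self) {
        self.toggle_count += 1;
        self.inner.toggle();
    }
    fn is_on(&self) -> bool {
        self.inner.is_on()
    }
}

fn main() {
    let mut b = MyCustomToggleButton {
        inner: ToggleButton { on: false },
        toggle_count: 0,
    };
    b.toggle();
    assert!(b.is_on());
    assert_eq!(b.toggle_count, 1);
}
```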


But rust does have Traits which work perfectly fine for that same logic. We can just say this trait implements the Button and ToggleButton Traits.


> But rust does have Traits which work perfectly fine for that same logic. We can just say this trait implements the Button and ToggleButton Traits.

i've never seen that not being called oop


It pretty much is. But it's not exactly class inheritance, so most people don't consider Rust to be OOP. Structs ("classes" in Rust) can implement Traits and Traits can "inherit" each other, but structs can't inherit implementations from other structs.
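A tiny sketch of that distinction (names illustrative):

```rust
// Traits can "inherit" via supertraits; structs cannot inherit
// implementations from other structs.
trait Widget {
    fn name(&self) -> &str;
}

// Button is a subtrait: every Button must also be a Widget.
trait Button: Widget {
    fn press(&self);
}

struct OkButton;

impl Widget for OkButton {
    fn name(&self) -> &str {
        "ok"
    }
}

impl Button for OkButton {
    fn press(&self) {
        println!("{} pressed", self.name());
    }
}

fn main() {
    // Dynamic dispatch through the subtrait; supertrait methods
    // are callable on the trait object too.
    let b: &dyn Button = &OkButton;
    assert_eq!(b.name(), "ok");
    b.press();
}
```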


> Structs ("classes" in Rust) can implement Traits and Traits can "inherit" each other, but structs can't inherit implementations from other structs.

Right, this is textbook interfaces used for composition.

Let me paste you a few paragraphs from the GoF book, released in 1994:

> Inheritance versus Composition

> The two most common techniques for reusing functionality in object-oriented systems are class inheritance and object composition. As we've explained, class inheritance lets you define the implementation of one class in terms of another's. Reuse by subclassing is often referred to as white-box reuse. The term "white-box" refers to visibility: With inheritance, the internals of parent classes are often visible to subclasses.

> Object composition is an alternative to class inheritance. Here, new functionality is obtained by assembling or composing objects to get more complex functionality. Object composition requires that the objects being composed have well-defined interfaces. This style of reuse is called black-box reuse, because no internal details of objects are visible. Objects appear only as "black boxes."

...

> Object composition has another effect on system design. Favoring object composition over class inheritance helps you keep each class encapsulated and focused on one task. Your classes and class hierarchies will remain small and will be less likely to grow into unmanageable monsters. On the other hand, a design based on object composition will have more objects (if fewer classes), and the system's behavior will depend on their interrelationships instead of being defined in one class.

Like, it's the most well-known OOP design book and it states exactly that - more than a QUARTER CENTURY ago. Who in hell argues that this isn't oop ?!


Even the bible has more than ONE single interpretation, and as much as you would like, GoF isn't the bible of OOP, just a famous book among plenty of others.

Maybe widen the bibliography of OOP programming languages beyond Smalltalk and C++ patterns?


They're not wrong, though - Rust really doesn't need implementation inheritance to be considered OO. It just needs some notion of object identity, and virtual dispatch - and it has both.


Indeed, that is why reducing OOP to implementation inheritance is a very constrained view of the paradigm.

Just like placing ECS against OOP, when they are discussed across OOP SIGPLAN papers and are the genesis of language features like Objective-C, CLOS/Flavor protocols and Smalltalk categories.

I guess, as always, the blame lies in how we teach these subjects and many don't do it properly.


They can, however, derive impls.

Which I consider a better approach than inheritance, ymmv.
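For example, derived impls cover a lot of what implementation inheritance tends to be used for elsewhere:

```rust
// #[derive] asks the compiler to generate trait impls for a struct.
#[derive(Debug, Clone, PartialEq, Default)]
struct Style {
    padding: u32,
    visible: bool,
}

fn main() {
    let a = Style::default(); // derived Default: padding 0, visible false
    let b = a.clone();        // derived Clone
    assert_eq!(a, b);         // derived PartialEq
    println!("{:?}", a);      // derived Debug
}
```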


Traits are just like Objective-C protocols, pretty much OOP to anyone who bothers to read about the various ways OOP concepts can be implemented.

There are plenty of ACM and IEEE papers if you want to see it being called OOP.


So typeclasses in Haskell are OOP as well? Sounds like a pretty meaningless definition.


typeclasses are definitely oo. just because it's compile-time polymorphism doesn't make it "not oop".


OOP is strictly run-time polymorphism. It may be that in some instances of OOP, what calls what is deducible at compile time. That subset alone is not OOP.

OOP means that an object which does not exist now can be written in the future, and loaded into code which has already been compiled and is already running now. The method calls in this existing code will correctly resolve to the method implementations in that object.


Then “OOP” is an entirely meaningless definition. Where exactly are the objects in Haskell?


GUI toolkits have been written in the OOP style because that was the paradigm available. If SML had become popular, I'm sure we'd be writing GUIs in that.


I got the impression from what I saw (coming into the scene a little too late to say from direct experience) that languages like C++ and Java, intentionally or not, did a great job of capturing how procedural programmers implemented the thick, rich, stateful GUIs that were common at the time. GUI widgets were literally the textbook example that many books used to explain classes and inheritance in a way that procedural programmers could relate to. So, I would say that the OOP style of GUI programming was a continuation of the procedural style that brought the old way of thinking to new "heights" through the new OOP language constructs.


Yes, GUIs have both greatly benefited from the availability of OOP and been a great force in the popularization of that paradigm. I think nobody will disagree that GUIs are hard to create in purely structured code, and that OOP fares much better.

But we are not in the '80s anymore, and we have more options now. There is very little you can do with OOP that you can't do in Rust, and Rust adds a whole lot on top. Just because things evolved this way does not mean they are optimal.


Rust is great, but it can't do some very basic things that OOP offers. You can't make a simple observer in Rust without Rc<RefCell<T>>, unsafe, global-ish mutable state, or being forced into some framework.


You can always copy the observer into the observed structure (and avoid a lot of complications - observers are best created with FP, not OOP). You can also carry the GUI and whatever else you want as a parameter when executing the observer.

But, of course you won't be able to mutate arbitrary global state.
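A sketch of the first option (hypothetical names, not from any library): the subject owns its observers as boxed closures, so no Rc<RefCell<T>> or unsafe is needed.

```rust
// Observer stored inside the observed value: callbacks are plain boxed
// closures, invoked with the new state. Because the subject owns them,
// no shared mutable state is required.
struct Subject {
    value: i32,
    observers: Vec<Box<dyn FnMut(i32)>>,
}

impl Subject {
    fn subscribe(&mut self, f: impl FnMut(i32) + 'static) {
        self.observers.push(Box::new(f));
    }

    fn set(&mut self, value: i32) {
        self.value = value;
        // Notify every observer with the new value.
        for obs in &mut self.observers {
            obs(value);
        }
    }
}

fn main() {
    let mut s = Subject { value: 0, observers: Vec::new() };
    s.subscribe(|v| println!("observed {}", v));
    s.set(42);
    assert_eq!(s.value, 42);
}
```

The trade-off, as noted, is that the observers can't freely mutate arbitrary external state; they work on what they're handed.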


React has been prototyped in SML


And note that it's not oop.


ML is from almost exactly the same time period as Smalltalk and its work creating GUIs.


The point was that OOP was popular, ML wasn't.


ML was pretty popular in the day.

But my point is that it's pretty weird to bring up SML in particular around GUI programming. The only GUI framework examples I know of are mGTK and SML/tk, which are far from idiomatic and pretty much require you to write imperative OOP code in SML. Meanwhile, SML has existed for decades. It's one thing to say "people aren't using this great model we've developed the basics of, please help make it better", but if you wanted to write an idiomatic GUI app in SML today, your first step would be to write the GUI framework itself, because no one else has. There are no real options even if you fully buy in to what they're saying.


I'm well aware, what's your point.


The paradigms have been available for pretty much exactly the same amount of time, down to the year.


I'm aware. What's your point?


My compromise has been to use [flutter-rs](https://github.com/flutter-rs/flutter-rs). Flutter is a powerful and performant GUI/Drawing framework, even if you need to hold your nose when slinging Dart.


Lacking inheritance is a major stumbling block that Rust will have to overcome in order to build an effective GUI. GUIs from the earliest, such as the Object Pascal-based Macintosh, to the most modern rely on inheritance to function, so I wonder how the Rust developers will overcome this limitation.


I think the explosion of GUI frameworks for the web that don't rely on inheritance, or rely on it in only the most shallow of ways, demonstrates that inheritance is not essential for creating rich GUIs.


I've written a good number of GUI applications and have found that requiring inheritance is more a limitation of the toolkit than inherent to the problem space. With PyGTK, I rarely, if ever, inherit. With PyQt, I've had to build some abstractions to allow composition.


Rather than reinvent a GUI library for each language, what about creating a stateful, open GUI markup language? Then you'd use XML to communicate with the GUI browser (or browser plug-in).

That way any language can do real GUIs, without screwy, bloated, buggy JavaScript libraries.


Excited to see what comes here. If we can build some sort of React-like thing with a Redux like model, I think I would be very pleased. I find web development quite pleasing these days.


@raphlinus I'm curious if VR plays into your thinking for 3D?


Not a lot, though I've had a number of really good conversations with the Servo team, who had been more focused on VR lately.

Basically, I find VR mildly interesting, and will put significant effort into it if and when I'm funded for that :)


Sure, VR has hardly any serious applications. Some very special games and "virtual camp fires" are all I can think of.

But AR has significant serious applications!

I could imagine there are quite a few overlaps in parts of the core technology, though, for example around VR/AR GUIs.


You guys should check out rg3d https://github.com/mrDIMAS/rg3d it's a relatively advanced 3d engine with a ui toolkit https://github.com/mrDIMAS/rg3d-ui and a scene editor https://github.com/mrDIMAS/rusty-editor

Join the discord channel https://discord.gg/xENF5Uh


A true winner among desktop GUI frameworks should also target the web via WebAssembly. Making everything rendered in <canvas> accessible is not an unsolvable problem.

I also think targeting only desktop avoids a lot of the distraction that comes with targeting iOS and Android.


If you're talking about rendering to a <canvas> instead of at least using the HTML DOM then you're not talking about a GUI toolkit really. You effectively just have a framebuffer and some primitives. That's not really what's being discussed here.

If I just need a canvas to draw into I use the SDL, not GTK or Qt.


Doing this makes all of your applications similar to Electron apps. This is not a good thing. Nobody wants a desk calculator that requires 1GB of memory and takes 10 seconds to start. The amount of backend you need to implement a simple <canvas> is excessive for a great many apps.


I don't remember the last time I installed a GUI desktop application.

I think it might have been Zoom a couple years ago. If I install a desktop application that isn't some command line utility, it's a big deal, like a new girlfriend; it feels like a commitment. It had better be a real appliance that actually requires access to hardware and needs to stay open most of the day.

I struggle to think of the audience for these kinds of concerns about "desktop GUI". If the customer is enterprise, government, etc., then they will want WinForms/C# or Java/Swing and you simply don't get to pick Rust, Go, etc.

If you just need a user interface, GUI nowadays means web or mobile (ideally SwiftUI).

Let's face it, in 10 years, there aren't going to be many people using desktops. The people that will be using desktops (like me) probably will be fine with and prefer curses interfaces that run in a terminal emulator. There you go, that's your cross-platform GUI.


Enterprise clients do not care what GUI library an app uses.

Desktops/laptops are alive and well and there is nothing on the horizon that replaces them.



