I've been looking for a way to run GUI applications remotely for a while, specifically on a wlroots compositor. Projects like this (maybe one day) and https://github.com/udevbe/greenfield are interesting since they essentially make access to those applications universal from any browser.
Dear ImGui is small and fast enough to run in browsers (it adds up to a few hundred KB of WASM bytecode). If that is still too much "bloat", there are smaller, though less powerful, alternatives like microui: https://github.com/rxi/microui
I don't know about GTK, Qt, and the like, but there's a whole budding ecosystem purporting to solve this problem WASM-natively.
Dioxus [1] and about a dozen other libraries are attempting to be write once, deploy to web, mobile, and desktop with native performance. They adopt a React-like component UI.
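To give a rough idea of what that React-like component style looks like, here's a minimal Dioxus counter sketch (roughly the shape of the 0.5/0.6 API; the launch function and macros have shifted between releases, so treat it as illustrative rather than definitive):

```rust
use dioxus::prelude::*;

fn main() {
    // Renders to desktop, web, or mobile depending on the enabled feature flags.
    dioxus::launch(App);
}

#[component]
fn App() -> Element {
    // Signals play roughly the role of React's useState.
    let mut count = use_signal(|| 0);

    rsx! {
        h1 { "Count: {count}" }
        button { onclick: move |_| count += 1, "Increment" }
    }
}
```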
The Rust ecosystem is full of stuff. There are lots of different approaches too - immediate mode drawing, canvas drawing [2], etc.
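For a concrete taste of the immediate-mode approach, here's a minimal sketch using egui/eframe, one of the popular immediate-mode options in Rust (the `run_native` constructor signature has changed across eframe versions, so the exact boilerplate may differ):

```rust
use eframe::egui;

#[derive(Default)]
struct Demo {
    count: u32,
}

impl eframe::App for Demo {
    // Called every frame: the whole UI is re-declared here, and state
    // lives in your own struct rather than in a retained widget tree.
    fn update(&mut self, ctx: &egui::Context, _frame: &mut eframe::Frame) {
        egui::CentralPanel::default().show(ctx, |ui| {
            ui.heading("Immediate mode demo");
            if ui.button("Increment").clicked() {
                self.count += 1;
            }
            ui.label(format!("count = {}", self.count));
        });
    }
}

fn main() -> eframe::Result<()> {
    // Recent eframe versions expect the app constructor to return a Result;
    // older ones take the boxed app directly.
    eframe::run_native(
        "immediate mode demo",
        eframe::NativeOptions::default(),
        Box::new(|_cc| Ok(Box::new(Demo::default()))),
    )
}
```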
Choosing a winner is the hard part. There are too many projects and no clear community-elected leader.
GTK+ has Broadway to render GTK applications to the web browser. Qt has some ActiveX support, but you don't want that. Java/Kotlin has a bunch of frameworks, like TeaVM to run JAR files in the browser, or Vaadin to write web GUIs the way you would desktop applications.
I think most languages that have been around for more than five years have some kind of GUI framework that'll run in the browser.
Very worst-case scenario, you could compile a Windows executable and run it in BottledWine :P
I used it to write a simple chord finder for learning to play guitar pretty quickly [2], and I would use it again to solve a problem like this, though I'm told immediate mode UI frameworks aren't good for much beyond quick prototypes, or in my case, scratching a quick itch.
In my experience, immediate mode UI frameworks are almost never internationalization friendly (right-to-left text like Arabic, problematic IME support), and they are almost never accessibility friendly either. The same goes for system context menus that offer things like translation, system-level spelling correction with the user's custom dictionary, password memorization and recall, word lookup, etc.
It's not impossible. In fact, at an API level, React is an immediate mode GUI written on top of a retained mode GUI, but it's the retained mode layer that provides all of the features above.
Which immediate mode GUIs have you used that cover these features?
If the issue is that implementations are lacking features, that's a different story. The world is littered with half-baked frameworks. Hell, I've used plenty of retained mode frameworks that lack all of those things too.
My concern was more that there is nothing inherent to immediate mode that prevents any of those things. Someone just has to build them.
RTL tends to be more of a layout problem and has little to do with the UI paradigm itself. Most of my experience with immediate mode UI frameworks is in proprietary applications. I will say that we had no issues with localization, and we integrated well with screen readers, though none of this was done in a platform-independent manner, and that's usually the root of all of these issues.
For example, when iOS shipped, it did not have spell checking, word lookup, translation, password insertion, etc. built in as context menus on text. When the OS added those features, every app using the native widgets got them for free with no work on the developers' part.
Apps that rolled their own, though, had to re-release. If the developers don't have time, or the app is abandoned, their users suffer.
This is arguably one of the reasons Flash was killed off. It was designed to render to a rectangle of pixels, and it assumed a mouse and keyboard. Then smartphones appeared, and no one was going to go back and fix millions of Flash pages.
Apps doing their own thing today will have the same problem with the next UI change (Vision Pro, for example).
Sure, I'm not arguing against using native widgets, and I'm not suggesting that immediate mode necessitates eschewing all of those things or hand-rolling them. There is a difference between native frameworks and cross-platform frameworks.
Did Qt magically support all of those features on day 1? Xamarin doesn't completely support all of the iOS input properties, for example.
This was exactly my experience building an app at my last company that used an immediate mode UI. You get so much baseline accessibility 'for free' if you use the DOM.
Flutter is looking to do so; it was just waiting for WASM-GC support, which has now shipped in Chrome. Rust is another good option, as the other commenter said. I personally use Flutter for the UI and Rust for the business logic, using libraries like flutter_rust_bridge and rinf as the FFI bridge between the two.
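As a sketch of what the Rust side of that split can look like (module path and function names here are purely illustrative): with flutter_rust_bridge v2 you write ordinary Rust functions, run the code generator, and the Flutter side calls them through generated Dart bindings.

```rust
// Example api module in a flutter_rust_bridge project (path is illustrative,
// e.g. rust/src/api/simple.rs). The code generator scans ordinary `pub fn`s
// like these and emits the Dart bindings, so Flutter stays pure UI while the
// business logic lives in Rust.

/// A trivial piece of "business logic" callable from Dart.
pub fn greet(name: String) -> String {
    format!("Hello, {name}! (computed in Rust)")
}

/// Heavier work also just looks like normal Rust; the generated Dart
/// wrapper exposes it asynchronously so the UI isn't blocked.
pub fn count_words(text: String) -> usize {
    text.split_whitespace().count()
}
```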
I'm curious how Flutter apps will do on visionOS. I suspect apps using native widgets will be great because the OS can adapt the UX for them. Flutter apps, on the other hand, may all have to be re-shipped since they're doing it by hand.
Not exactly what you are looking for, probably, but Leptos [1], Dioxus [1.1], Yew [2], and almost all "frontend web frameworks in Rust" run primarily/only on WASM. There's a larger list here [3].
I’ve used ImGui with success. I happened to also be using WebGPU, so I used the glfw+webgpu backends. I had to fix a few issues in both halves of the backend, and there remain some small interaction bugs (mostly because browsers capture certain key chords and mouse actions, and you have to work around a number of them).
ImGui makes some specific design trade-offs and isn’t amazingly documented. I don’t think it’s a good choice for larger-scale apps, but it’s good for tools, UIs in games, etc.
Accessibility is a nightmare with pretty much anything that skips browser layout, of course.