Moxie: Incremental Declarative UI in Rust (anp.lol)
400 points by anp on Dec 2, 2019 | 190 comments

Any discussion about incremental UI should include Yaron Minsky's talk about Jane Street's OCaml framework, incr_dom from 2016: https://www.youtube.com/watch?v=R3xX37RGJKE

Incremental is essentially an implementation of Adapton from 2014: https://www.cs.umd.edu/~hammer/adapton/

There is a big upset coming in the UX world as we converge toward a generalized implementation of the "diff & patch" pattern which underlies Git, React, compiler optimization, scene rendering, and query optimization.

Two sleeper startups in this space that are going to make a lot of money are Frank McSherry's Materialize Inc. and Nathan Marz's Red Planet Labs.

incr_dom source: https://github.com/janestreet/incr_dom

Differential Dataflow: https://github.com/TimelyDataflow/differential-dataflow

Declarative Dataflow Client: https://github.com/sixthnormal/clj-3df

Crux adapter for Declarative Dataflow: https://github.com/sixthnormal/clj-3df

> There is a big upset coming in the UX world as we converge toward a generalized implementation of the "diff & patch" pattern which underlies Git, React, compiler optimization, scene rendering, and query optimization.

Don't forget Windows 1.0 in that list:


Pretty much all windowing toolkits (X11 included) worked that way prior to the mid-90s because while there was at least enough RAM for a framebuffer (unlike say the Atari 2600), it would have been wasteful to have a backing-store for every window.

X11 later added the ability to have a backing store for each window, with the compositor rendering them to the display; Wayland is compositor-only.

On another topic, the constraint based GUI systems going back at least to the 80s are similar to react et al., though usually the widget graph was fixed, and only the properties of the widgets were reactive.

Windows was still doing it that way into 00s in XP and 2003, and you could observe it when apps would get a hang in their windows loop, and couldn't process WM_PAINT anymore - you'd get ghosting artifacts moving other windows over the one hanging, because it wouldn't repaint the invalidated areas. It wasn't until Vista that such windows would be rendered using the ghosted version of the last known good state (cached by the compositor).

To be fair, this is also the rendering model of Swing and the Amiga.

Interesting insight! Do you by chance know the current model used by Windows right now, with WPF or UWP or whatever they call it today?

Win32 is still alive and kicking, so any desktop Windows app has at least one "traditional" window (HWND) - that being the top-level one - and it still runs a WndProc, that periodically receives WM_PAINT, telling it which chunks to redraw. There's a compositor sitting above all that, so the complexities of the model are largely redundant, because it no longer needs to handle partial refreshes - it doesn't render directly to the screen.

Most modern GUI frameworks don't use the tree-of-HWNDs anymore, though. Which is to say, the entire visual element tree is handled internally by the framework's own compositor, and the top-level WM_PAINT just renders the resulting bitmap. WPF and Qt do it that way. That said, there's still no shortage of apps that are implemented in terms of native Win32 widgets - pretty much all the non-UWP apps that come with Windows are like that. So when you are looking at, say, Notepad or Explorer, they still fundamentally work the way the article linked to above describes.

Modern UI frameworks mostly try to offload rendering to the GPU, where just about every pixel is rendered every frame into texture buffers.

But of course that can be combined with diff&patch as well...

Not every pixel! Mozilla posted recently about how they have gone to great lengths to not redraw every pixel every frame because it saves battery. Apparently Chrome and Safari already do that.

> But of course that can be combined with diff&patch as well...

Which they are, through extensions such as https://www.khronos.org/registry/EGL/extensions/KHR/EGL_KHR_...

Yes I am especially curious to know how they manage state in desktop GUIs - if it's events and callback based or some other kind of functional architecture

It's usually event callbacks for actions, although these are often wrapped in a first-class "action" or "command" abstraction, to allow routing different events to the same handler - e.g. both the menu item and the toolbar button.

For views, you either get some form of MVP, with explicitly implemented model interfaces that provide the glue between the views and the object tree they are representing, or data binding that effectively creates that same glue for you. Here's an example from UWP:


So no, it's not really functional. Quite the opposite - the state is global and mutable, and UX actions that purport to change things really change them. That also makes it all very intuitive, though.

In what concerns WPF, UWP (you can do this with forms as well although support is more primitive), via data bindings.

The concept is somewhat complex to master - I compare it to getting monads to click - but when you understand it, you are able to envision how to build the full UI architecture as if you had a Lego box at your disposal.

For everything, views, stylesheets, event handlers, data models.

> Two sleeper startups in this space that are going to make a lot of money ...

How are they going to make money if open source enthusiasts copy the ideas and spread them for free? See e.g. React -> Vue.

Selling developer tools these days is just a losing game.

> Selling developer tools these days is just a losing game.

Selling a company that holds some patents or serious implementations of useful software is not. We live in an era where Big Tech will acquire companies for tens of millions, then dismantle their product, just to hire a proven engineering team (or keep them out of the hands of competitors for a brief period). So companies with actual, useful products do fetch quite a bit.

But that's a very indirect and uncertain way of making money. I think that people who do honest work deserve a better way of getting paid.

Also oftentimes not all that lucrative for the founders. The reason Big Tech will acqui-hire for a million or two per employee is that it costs that much to get a good employee. When companies have managed to assemble such a team and build a product to showcase it, it's usually because they took a lot of capital before they had a product, which means that VCs own the lion's share of the company.

Which is why I totally dislike the FOSS culture of wanting to be paid for one's work while being unwilling to pay for tooling.

Git PRs don't pay supermarket bills.

I've thought this since Borland fell apart. The ship sailed a long time ago. Unless you're a unicorn like Slack with cross organization appeal you have no chance!

JetBrains, OutSystems, Embarcadero (nee Borland),....

It is a matter of getting a target audience willing to pay for their tools.

JetBrains in particular has done a great job, and the Community Edition of Idea provides a ton of value for free.

> Selling developer tools these days is just a losing game.

I agree this is true for _trivial_ dev tools, but for dev tools that are more complex or provide an entire ecosystem I think this is not strictly true. Where I work (Sourcegraph) we are able to sell a good cohesive bundle of dev tools (code search, jump-to-definition/find-references, etc.) and through that have been able to easily fund REALLY cool tooling like https://comby.dev

I imagine the same is true for e.g. GitLab

Could be true, but personally I steer clear of these so-called "ecosystems" because of the lock-in effect, and especially if they are monetized using a SaaS scheme. For me, such tools are (in a way) broken by design, and therefore I wouldn't want to sell such tools myself.

What's the relationship between React and Vue? I've tried searching for it, but all I could get was code samples comparing the two.

They're both libraries for building user interfaces. They're related because they both solve the same problems. They also share a lot of core ideas (one way data flow, reusable components, declarative UIs, etc). I'm pretty sure React came first and heavily influenced Vue. They aren't commonly used together because most projects only need a single UI library.

Thanks, but the parent seemed to imply one was a commercial offering and copied by the other as an open source project. That's the part I was looking to get clarification on.

No, they've both always been 100% open source (ignoring the complaints about React's previous BSD+PATENTS license). React is built by Facebook, while Vue is primarily built by Evan You (+ a team of core contributors).

The "copying" comments are largely due to things like Vue's new proposed hooks API being very much inspired by React coming up with hooks in the first place.

To developers on the street, kind of.

Hence why everyone that wants to make a living selling developer tools turns to enterprise customers and B2B, or finds a way to package them as SaaS.

You are right about developer tools, but these companies are not selling developer tools. Plus, the technical implementations are non-trivial.

It seems like Materialize has some interesting approaches in mind:


> but these companies are not selling developer tools

Welll ... from their website:

> Red Planet Labs is pioneering a radically new kind of software tool.

Also, the Materialize "reactive SQL" solution sounds a lot like Firebase.

Firebase is a hosted database. Where does a dev tool begin and end? Is Slack a dev tool? Is HN? Where do you draw the line?

There's also Aardworx[0], who are doing something similar with F#. Their extracted incremental computation library is here[1], and the platform overall is here[2]. There's also a bunch of cool talks on YouTube[3].

[0] https://aardworx.com/index.en.html

[1] https://github.com/fsprojects/FSharp.Data.Adaptive/

[2] https://github.com/aardvark-platform/aardvark.base

[3] https://www.youtube.com/watch?v=mZ3o6TqNR6U

And here's some other interesting related content:

1. Deriving a Domain Specific Language for purely functional 3D Graphics - https://rawgit.com/wiki/aardvark-platform/aardvark.docs/docs...

2. Fable.Elmish.Adaptive (another instance of applying incremental computation to the Elm model) - https://github.com/krauthaufen/Fable.Elmish.Adaptive

Heartily agreed re: Materialize, and I've also been nerd-sniped by Adapton, although less literally than incr_dom.

Whatever you do, do not read the META II paper, especially not Figure 5, which is the META II compiler written in itself. You may lose two months of your life: http://www.ibm-1401.info/Meta-II-schorre.pdf :)

Those are some big words when it's literally a more obtuse, less production-ready variant of what WPF has been doing forever (and WPF wasn't the first in desktop UX to do it!).

I mean, here is Qt shipping Qml interfaces in a ton of cars, and you're talking up rust2html.

Also maybe the Purescript experiments in incremental updates?

Original idea: https://github.com/paf31/purescript-incremental-functions

More recent Proof of Concept: https://github.com/jlavelle/purescript-snap

(I just posted the latter to HN here: https://news.ycombinator.com/item?id=21690382)

I might be remembering wrong, but I think Jane Street's incremental is based on self adjusting computation by acar vs adapton by hammer?

Yeah, it’s not inspired by Adapton. Plus, they work differently: their data flows in opposite directions.

Good to know! I was incorrect to take GPs claim at face value, perhaps.

(BTW I’m not sure if you remember our conversation at React EU in 2017, really enjoyed chatting!)

> There is a big upset coming in the UX world as we converge toward a generalized implementation of the "diff & patch" pattern which underlies Git, React, compiler optimization, scene rendering, and query optimization.

I thought git (as opposed to, say, RCS) did _not_ do the diff thing, and each object is just the full thing, relying on gzip to do the compression?

Merging requires diffing, and hashing enables efficient diffing. It's a reconciliation process just like checking balances in accounting, and you could think of version control as accounting for code.

No, git actually stores diffs when it's reasonable to do so. You can check the files for the objects in the repository yourself. Git can compress these files into packs, but that's a separate step on top.

> No, git actually stores diffs when its reasonable to do so.

No it doesn't. It does delta compression – which ultimately accomplishes the same goal – but this is not what people mean when they say that a version control system stores diffs (like SVN does).



The predecessor of all these diff and patch systems/UIs is the ancient character terminal based curses library. [1]

[1] https://en.m.wikipedia.org/wiki/Curses_(programming_library)

A lot of innovative frameworks take the approach of forcing everything through a single paradigm, so that you can avoid solving a whole class of problems.

It seems like a good trade off—yes, everything must be done just so, and perhaps some parts of your app are a little awkward, or a little boilerplate-y, but there’s a whole giant class of problems you can just totally ignore.

Until... you can’t.

There’s another approach to library authorship, where the goal is not to delete a class of problems, but to make a class of problems easily controllable. Not to make a large domain disappear, but to make it programmable.

I have found, with that as your goal, you will be pushed away from declarative models. Because declarative models are about making a surface which is not programmable but configurable.

You can’t program arbitrary relationships across a React component boundary. You must flatten your intent into the very tight declarative interface which exists there.

Procedural models allow you to use the full set of capabilities that a function call and type system gives you. Ideally you don’t use much of that capability at any one time. But the beauty is you can use exactly the right control structure for the specific concern you are trying to... not abstract away, but make controllable.

The set of “concerns” is the infinite set. And no possible declarative model can capture it. Functions can.

You just need to know where your escape hatch is.

   SQL -> SQL procedures

   Angular templates -> DOM manipulation

   Puppet -> custom scripts
And you may even find it depends on the context.

For example, HTML/SVG/CSS DOM scripting can be considered "procedural", but in another sense the DOM is a rather declarative version of rendering, whose procedural escape hatch is Canvas/WebGL.

You may even find cases where they are combined tightly:

A makefile has declarative dependencies and procedural recipes.


But once you are out the escape hatch you will need to model both the new space you are working in, AND model the world you escaped from in order to interoperate, which is a special hell.

This hell has a name you will recognise: the leaky abstraction.

The idea is always to put a layer on top that simplifies things, so it's easier to use and faster to learn. But the end result is usually you have to learn both the abstraction and thing it was trying to hide.

> And no possible declarative model can capture it. Functions can.

If functions are your declarative model, what would you say?

Well, in most apps you will layer both kinds of models. You’ll have SQL which is declarative, but you’ll call those queries as procedures. The boundaries around a React component are declarative, but the render function is procedural.

So when I talk about declarative models I’m talking about the Swiss cheese of declarative models with procedures mixed in.

In moxie, everything is procedural, as it were. The declarative syntax is a way to invoke "imperative but idempotent" functions, and the core runtime provides tools for wrangling mutability and repetition in those functions.

React is just a very sophisticated template engine. It's something the server side rendering model used for decades.

I don't need the freedom to invent my own character encoding, or the freedom to create SQL injection vulnerabilities, buffer overflows or use after free bugs. That's just something that drags me down when I'm writing a nice web app to have fun with friends or a good old enterprisey web app at work. I need the freedom to create working applications quickly.

There are procedural APIs to do everything you mentioned.

I agree. In theory, declarative languages let programs reason about the content more easily. But I can't think of when that has actually been a real benefit.

Flutter gets it right. The code looks almost like it is declarative. But really it's just Dart code (an underappreciated language IMO) and you never get stuck by things that are impossible to express.

> The code looks almost like it is declarative. But really it's just Dart code

I need to figure out how to express this better in the future, but if you swap "Dart" for "Rust," this sentence applies to moxie as well. It's what I mean when I say that the callgraph of functions is used for structure.

Yeah, the way I would say this is that "being declarative" is the goal, and the broad strokes of the way you express app logic (one way data binding and so on) are pretty similar, but the implementation details of how you get there vary widely. Expressing the app logic as just running some code, with the side effect of generating your UI, is an appealing approach, because it means that all of the power of your host language is under your fingertips if you need it.

> the goal is to write the code "generically with respect to time," describing the state of the UI right now for any value of now. I think this is clearer in the above code samples, where the code always executes a complete declaration of the desired UI rather than explicitly mutating prior state.

I almost wish the post led with this, anp. It’s such a great and concise description of the aim for the project, and really helps clarify all underlying decisions.

Thanks for writing this post. It’s great!

Great idea on the intro. Moved some things around, should be online soon.

Thanks for the kind words <3

For more in the Rust UI space, see also Iced, which has an Elm-inspired reactive model: https://github.com/hecrj/iced

Also Relm which is Rust+Elm-inspired+Gtk


Also yew. It's more web focussed, but it works great running in a WebView for a local application with RPC to a background server running locally.


I like the Elm model for UI. The stateless/functional view approach is very similar to IMGUI [^1]. It makes things much less error-prone, with much less state management / hierarchy management to do than in the React model.

[^1]: https://github.com/ocornut/imgui

Also Yew[1], which is a bit different, but has a relatively fleshed-out component model.

[1] https://github.com/yewstack/yew

> moxie assumes that the program will enter from the "top" of the tree each time. [...] My suspicion is that Rust may be fast enough along with the right memoization strategies to never worry about the time spent getting from the root of the tree to nodes with changes.

I think there is reason to worry: consider a node with a lot of children, for example a list of many todos. Say you want to change the state of one single todo, for example marking it as done. With a top-down approach such as diffing you have to go through each child to see if it has changed. This is O(n). If you instead have components that know their position in the tree and can react to external changes, like react's Components or using hooks and useState, this can be updated in O(1).

Author here. I'm trying out an optional email Q&A format on the post, but will also be keeping an eye out here.

Not a question so much as encouragement. I think a big unspoken benefit behind react and its ilk is the strong focus on concise readability. React hooks can be confusing to understand, but when used properly they are incredibly concise and readable. I think moxie is closest to doing this properly in rust, and as such I am watching it very closely. Awesome stuff.

Website on mobile needs a bit of fixing. Also, what do you think of Svelte?

I think there are some very strong parallels between the approach Svelte has taken and what I'm trying out in moxie-dom:


screenshot? how does reader mode look on your phone?

Code blocks are too wide. Make them 100% wide and add overflow-x: scroll

I can't find reader mode on mobile chrome.

Thanks for the tip! Should show up shortly.

How well does your editor deal with the quasi-XML?

Fairly well, although I need to manually format it. VSCode's syntax highlight for rust doesn't do anything fancy for tags and attributes, but literals and block expressions are highlighted correctly.

Not sure if it's because of rust (not super familiar with the language) but the syntax seems really awful

Yeah, I'm not the biggest fan of the syntax still either. That said, the phrasing of your comment is unkind and its contents don't provide any useful feedback.


Please don't misinterpret it in this way. This is a candid reaction. As you indicate, it confirms your previous observations. This is not an opinion of you or any other Rust developer. It is an opinion of the syntax, period.

It's unkind to call someone else's work "awful," even if it's meant in a very minor way (even if I agree with it!). It's a low-value comment when it doesn't provide any concrete feedback on top of that. Our industry has abysmal standards for discourse, and I'm not sure why you think that's important to apologize for and normalize.

Some pushback: The syntax used in your example is WAY cleaner to my eyes than competing UI solutions in Rust today.


As a Rust lover, it can be quite the hill to climb when you first start looking at it. Closures really weirded me out when I first started.

Though, you may also be referring to the "made up language" powered by the Macros. I agree that can be a bit much, but it's in my view no more distracting than React's JSX. Some people hate JSX, and I imagine they would hate this as well.

I'm not a Rust user either. It's a lot of "symbol noise" to me, but I assume this is the kind of thing you get used to reading after a while. However it looks very annoying to type all those symbols, even if you have some kind of customized ergo keyboard.

> some kind of customized ergo keyboard

I use an ErgoDox, and while my layout isn't customized for writing mox invocations, it hadn't occurred to me that this might be a blind spot in my thinking about ergonomics. Good point!

I didn't really see anything that was weird whatsoever from a normal day to day programmer (Rust or not) perspective so I wouldn't take his feedback to heart or worry about your ergodox.

It seemed like a high ratio of "non-alphanumeric symbols" compared to a lot of other code that I read and write.

I suspect you are keying on some of the Rust language bits in the middle of his syntax.

_ is "don't complain because I don't use this variable"

|| is the syntax around Rust closures

foo! is a Rust macro

Isn't the desirable thing here that we can program in real Rust, using some simple macro expansion, rather than creating a free-standing language for GUI building?

Learning Rust via this GUI library might not be the best choice(?), and I think it's better he focus on people who know Rust already.

The code in the OP appears to largely be a custom-built DSL designed to emulate XML. One can tell it's a custom DSL by dint of looking at the `mox!()` invocation; any identifier followed by a bang in Rust is a macro (Scheme-style syntax-aware macros, not C-style textual macros), which have loose parsing rules to allow for such DSLs to be defined.

Rust does have plenty of symbols (nearly all of which are lifted directly from C or C++, with the same or analogous meanings), but such extensive DSLs are rarely encountered in the course of everyday use of the language.

Syntax only matters to people who don't know the syntax. I used to think it was awful before learning Rust, but very quickly the syntax just fades into the background because syntax is patterned and repetitive by definition. There's an initial barrier to entry then an abrupt payoff cliff.

The important thing is actual expressiveness. Rust gives you a heavy duty toolbox to play with. You could have a beautiful-looking syntax with no braces, no parens, and tons of whitespace (think CoffeeScript or Haskell) where you're very restricted on a small set of square pegs, or you could use something like C++, (Perl), Rust, or Lisp that lets you make any Rube Goldberg machinery you want no matter how hideous it looks. It's a matter of personal preference. You can be very effective either way, but I think one is unnecessarily harder on yourself.

There are two camps:

1. Syntax is important. I want everything to line up beautifully on the screen. I have a riced out Arch desktop and a super clean enviable desk with modern furniture.

2. Syntax doesn't matter. I format code sloppily and let clang-format fix it eventually. I love macros and codegen because they make my life easier. I haven't changed my wallpaper in 5 years.

I know you're being somewhat facetious but... not really. To me, syntax is important, and I want everything to line up beautifully on the screen. But I use a simple, no-frills Debian Xfce desktop, and my "desk" is my lap and my couch.

It is awful. Javascript with JSX is awful, too.

Anyone else play "Surviving Mars"?


I also thought of it because of the "moxie" oxygen generator building in the game

No, but I thought about it. Is the game any good?

The game is good once you get the hang of it, but there are bursts of micromanaging followed by long droughts of waiting for shit to happen. At max speed, the game will do 1 day/3 minutes. Most of the Steam achievements take about 60-100 days to do, which means 3-5 hours per game with lots of time spent not interacting much. Of course, to get to a "full fledged" colony it can take as many as 10 hours of gameplay, which again is mostly waiting for your commands to happen.

Also, because it's a Paradox game, the interface is ass.

If you like the genre of games, this is one to lose a couple of dozen hours to. If you just think "oh, Mars is cool," it's probably good to skip it, or download a mod that lets you fast-forward time.

> Also, because its a Paradox game, the interface is ass.

Huh, it's a Paradox game? That alone makes me interested; I'm a big fan of Stellaris.

> bursts of micromanaging followed by long droughts of waiting for shit to happen

Yeah, that sounds like Stellaris.

Thanks for the review; I'll check the game out.

> Huh, it's a Paradox game?

It's published by Paradox not developed by them.

I attempted to play it on Xbox (it was a free game at one point), but controllers are not really ideal for these types of strategy/city-building games. I may attempt to play it on PC at some point, as the idea of the game sounds interesting...

It's pretty casual but I love it.

One of the issues with React-style "declarative" UI frameworks is that they have a hard time modeling transitions and animations. I'm not a Rustacean but that would be one of the things I'd look for here.

Agreed! I have some thoughts about how this can be handled but it’ll be a little bit before we can try them out.

If you take the animation out of the DOM-level code and put it in the CSS, doesn't that fix the issue? React then does not need to know about or control the animation and the browser just does what it does natively as things get added or removed from the DOM.

CSS animations are not enough for anything but simple transitions, since they are time based. Their behavior is also hard to control from JS. Check out this react animation library, which is based on springs for natural motion: react-spring.io (intro youtu.be/1tavDv5hXpo).

That leaves you with very simple and bland animations, and very little flexibility.

Hm, have you seen some of the incredible and fine-tuned animations possible with all the CSS features for that? I think it's pretty amazing and relatively easy.

This also requires that we have someone else who does animations for us, rather than doing all the UI ourselves.

> Each time the button is clicked, the closure in boot runs in entirety but if you open your browser's devtools on the iframe, you should see that only the necessary DOM nodes are seeing updates:

Isn't that still radically inefficient?

Not really. The assumption is that modifying the dom directly is the expensive part.

Doesn’t seem like a good bet for the long term. There’s no practical reason why modifying the DOM couldn’t be practically free. Especially if we can hint to the layout engine that we’re working in a well behaved subset. (Or the layout engine can detect such)

Hi, I work on a browser, layout and styling are expensive.

Browsers already do tons of work to avoid recomputing too much of this stuff whenever the DOM changes, and it's still inadvisable to poke the DOM too much. I don't see this changing anytime soon. There are various new features that allow for some level of hinting, but it's not going to obviate this. A browser's incremental layout/styling needs to be prepared for any kind of potential change, whereas if you have a reactive UI framework, you know what kinds of changes can happen and can optimize diffing based on that.

There's a reason why a lot of JS UI frameworks use a virtual DOM. It sounds expensive to maintain, but directly operating on the DOM is more expensive.

That’s why I mentioned subsets! (like ASM.js)

Also, syncing can theoretically get to the minimum possible DOM calls, but with good tools I believe I can get close to n*log(n) of that with a procedural model. Which makes your point somewhat moot.

At some point the JS->C++ FFI was just slow in most browsers, but I guess this has seen improvements lately?

I don't think that's too slow. It can be, but it's not the main bottleneck in my experience.

That said there are various tricks that browsers use to avoid introducing these boundaries.

Right. All this treating the DOM with kid gloves seems to be due to its pre-existing weaknesses. Why not just fix those?

Browser vendors have spent the past several decades trying to fix those weaknesses and haven't; that seems like a very strong indication that it is not exactly an easy thing to do.

This might be just because most web apps never had a bottleneck on JS->DOM changes.

This is blatantly false, as the DOM has always been a bottleneck for web apps.

Because you can't "just" fix them. It's a complex problem with no easy solutions. Don't you think it'd already be "fixed" if it was easy?

> There’s no practical reason why modifying the DOM couldn’t be practically free.

Oh yeah, modifying the DOM is practically free. Which is why React and the like do that in their mirror DOM.

Actually applying the resulting changes, on the other hand...

Can this be easily built as a static binary? I mean, really static, as in pop it onto your mother's windows laptop and let her double-click it and run the program?

Don't see why not, although to do so you'd need to use moxie-native which is still very new.

This looks great, though I was kinda disappointed they didn't have a DOM demo for the calculator; it woulda really sold it even more, honestly. It looks great otherwise. I wish Go had more efforts like these; Fyne is as nice a UI as I could find for Go thus far. I think every modern language should really produce a UI library as part of the std lib, even if it's rather basic.

Good idea to make a DOM example for the calculator! I'm very behind on examples.

Someone recently got very close to having the moxie-native calculator working in a browser's wasm/webgl, which will be mindblowing if they get it to work.

I think even for documentation it would sell it amazingly well if you can see sample GUI code and run it in the browser and try out the result, even if it's not interactive editing wise.

So... semi-serious question, what does html/css give us that imgui doesn't? e.g. taking this idea in another direction - why not just replace the dom hierarchy with a single webgl node and compile imgui to wasm in order to drive it that way?

Browsers have a lot of features we'd need to reinvent. You lose all tooling, all programmability, hyperlink functionality, accessibility, etc. The list goes on and on.

Text rendering is extremely difficult. Even today, Chrome/Firefox are not fully hardware accelerated on this front.


> Text rendering is extremely difficult. Even today, Chrome/Firefox are not fully hardware accelerated on this front.

On the other hand native toolkits have had hardware-accelerated text rendering for years if not decades. Browsers are not actually good at being a UI toolkit. They are good at handling a variety of inputs of a variety of questionable quality with a variety of usage patterns. They are good at being very defensive in the face of arbitrary inputs. They are a great catch-all, but as a result they are never actually fast at anything in particular.

Good point and thanks for the link! However, doesn't it imply that text rendering is a bottleneck everywhere - games, digital signage, etc.?

No doubt there are good reasons for the slow browser performance (like text as you mentioned, layout, event management, etc.) ... but it's still kinda crazy that with the power of computers today, tearing down and building up a tree that results in calculations for no more than a few thousand sprites or so is a performance killer.

Would be nice if I could just re-render the entire DOM every tick. I've been playing with doing exactly that via lit-html and it seems to be working fine... but still, the idea is "the dom is sensitive - don't change what you don't need to" :\

> Good point and thanks for the link! However, doesn't it imply that text rendering is a bottleneck everywhere - games, digital signage, etc.?

Yes, and this is why non-Latin text in many games is at best a texture, and at worst, horribly, incorrectly rendered.

The state of the art for Arabic in Unity is I believe a plugin that basically does manual shaping by replacing code points. There may be a Harfbuzz plugin, idk.

By the way - kind of a tangent, but since we're mentioning Unity: I remember using TLFText with Flash around ~8 years ago. It seemed really good, iirc.

Though that was with the old vector/cpu Flash, not the Starling/gpu stuff.

Interesting. So does all text in the browser go through the same problem space? Canvas, SVG, and HTML?

(also a question to the siblings here! I'm not totally clear on HN etiquette when one wants a "reply all"...)

SVG and HTML use the browser's text rendering stack, which is pretty good. (Native UI elements also benefit from native rendering stacks).

Canvas has the same problem, though there are tricks like compiling Harfbuzz to wasm to get around that. There are proposals for a Web Shaping API to expose the underlying shaping engine used by the browser.


Servo / WebRender originally had this idea - they started with that philosophy but ended up adding a lot of caching of draw results, swinging the pendulum back somewhat.

(I don't know if there's a writeup about this anywhere other than scattered information in the mozillagfx newsletter and issue trackers?)

This is complicated, but the short answer is that it motivates actual incremental approaches, as opposed to rebuilding the world from scratch all the time. Imgui works best when the rendering is fast, which can be the case when the GPU is doing almost all of the work.

Oh, I see - although this is a relatively old issue on the imgui repo, seems like it's the bottom line (and expresses the same ideas mentioned here): https://github.com/ocornut/imgui/issues/1228#issuecomment-31...

Accessibility is a big issue; I'm unsure if imgui has support for that or if it will work with the browser's built-in support.

Does imgui re-render everything every frame or does it do a "diff and patch" technique as well?

With imgui you must fully describe your interface each frame. Because it's fully GPU accelerated, it does not cost much and runs very fluently at 60 FPS.
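A sketch of what "fully describe your interface each frame" looks like in practice. The `Ui` type below is a hypothetical stand-in, not the actual imgui (or imgui-rs) API, but it captures the immediate-mode shape: widgets are plain function calls that both record draw commands and report input from the current frame:

```rust
// Hypothetical immediate-mode UI context; real imgui bindings differ.
struct Ui {
    clicked: bool,          // would come from this frame's input events
    draw_list: Vec<String>, // accumulated draw commands, sent to the GPU
}

impl Ui {
    fn label(&mut self, text: &str) {
        self.draw_list.push(format!("label: {}", text));
    }
    // Records the draw command AND reports whether it was pressed this frame.
    fn button(&mut self, text: &str) -> bool {
        self.draw_list.push(format!("button: {}", text));
        self.clicked
    }
}

// The entire interface is re-described every frame, from scratch.
fn frame(ui: &mut Ui, count: &mut i32) {
    ui.label(&format!("counter: {}", count));
    if ui.button("increment") {
        *count += 1;
    }
}

fn main() {
    let mut count = 0;
    // A render loop would call frame() at 60 FPS and submit draw_list.
    let mut ui = Ui { clicked: true, draw_list: Vec::new() };
    frame(&mut ui, &mut count);
    println!("count = {}, {} draw commands", count, ui.draw_list.len());
}
```

Note that the counter lives in the application, not in a retained widget tree; that is the core difference from retained-mode toolkits.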

> it does not cost much and runs very fluently at 60 FPS

But it drains your battery like crazy. Immediate mode GUIs are good for games, which already render the whole scene @60FPS and are expected to be costly, but re-rendering your whole window every frame even when idling is just a waste of power.

As mentioned, the cost is negligible when you already have an animated 3D scene, as is the case for games.

For a non-game (desktopey) app, well, there's no reason to render at 60 FPS; you should only render on user interaction and otherwise go idle most of the time.

The rendering is on the GPU but the description of the UI is created and executed on the CPU, and then the results are sent to the GPU, is that right? In other words, it works a little like a video game.

Yes, you describe your UI from CPU code and it gets accumulated into a draw list. Different backends (OpenGL, DirectX, Metal, ...) send it to the GPU for display. It's close to what a game is doing.

brisk-reconciler[0] is in the same vein but implemented in OCaml/ReasonML. It's currently used by brisk[1] and revery[2].

[0] https://github.com/briskml/brisk-reconciler

[1] https://github.com/briskml/brisk

[2] https://github.com/revery-ui/revery

I'm not clear on something: is this translated into javascript or does this provide an HTML render with native Rust hooks?

moxie-dom is compiled to WebAssembly, and mutates the DOM using APIs from javascript. The Rust isn’t translated into JS, but it does run alongside it. moxie-native doesn’t have anything to do with JS or HTML, aside from reusing some concepts from CSS.
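To make that boundary concrete, here is a hedged sketch of the shape involved: application code stays plain Rust and drives DOM mutations through a thin interface, and only those calls cross into JS. In a real browser build the web-sys crate provides the actual bindings; the mock below just records what would be forwarded across:

```rust
// The FFI surface: in a browser build, each call crosses into JS.
trait Dom {
    fn create_element(&mut self, tag: &str) -> usize;
    fn set_text(&mut self, node: usize, text: &str);
}

// Mock backend recording the boundary calls a real binding would forward.
#[derive(Default)]
struct MockDom {
    calls: Vec<String>,
}

impl Dom for MockDom {
    fn create_element(&mut self, tag: &str) -> usize {
        self.calls.push(format!("createElement({})", tag));
        self.calls.len() - 1 // hand back a node handle
    }
    fn set_text(&mut self, node: usize, text: &str) {
        self.calls.push(format!("setText({}, {})", node, text));
    }
}

// Application logic is ordinary Rust; only the trait calls are "foreign".
fn render_counter(dom: &mut impl Dom, count: u32) {
    let p = dom.create_element("p");
    dom.set_text(p, &format!("count: {}", count));
}

fn main() {
    let mut dom = MockDom::default();
    render_counter(&mut dom, 3);
    println!("{:?}", dom.calls);
}
```

This also makes the earlier FFI discussion concrete: the fewer and more batched these boundary calls are, the less the JS<->wasm crossing cost matters.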

What's the advantage over just writing vanilla html5?

Regarding your mental model: This looks quite similar to IEC 61499

Ah, that's a good reference for a more modern "control loop" idea. I agree that there are many similarities with that specification and I need to read in more depth. Thanks!

Honestly, there's limited advantage over vanilla HTML5 if HTML is actually what you were going to write. The website at moxie.rs is plain HTML/CSS/etc. The reason to reach for tools like this on the web is if you want interactive state, complex data transformations, etc.

I think it will be hard for moxie-dom to compete directly on ergonomics with purely web-focused tools (especially JS frameworks), because the underlying tools will always need to maintain some distance from platform semantics.

I think if you want vanilla html5, you probably have a simple static site, or something with minimal (or server-side-only) dynamic components.

If you want to build a highly-dynamic SPA, vanilla html5 will become an accidental exercise in reinventing a web framework, but poorly.

Is it safe to call this "React.js for Rust?", or is there a nuance I'm missing?

(Don't intend that to be dismissive; I'm a huge react fan)

From far down in the post:

> I have described moxie a few times as "what if a React in Rust but built out of context and hooks?"

If you have a chance to start fresh on building a UI DSL, why would you choose XML?

Familiarity. In short order it’ll be easy enough to write moxie functions without the xml macro, at which point it becomes a matter of preference.
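To illustrate the "matter of preference" point: an XML-style macro can be pure sugar over plain function calls that build the same tree. This is a toy builder and a toy macro, not moxie's actual API:

```rust
// A minimal element builder; the macro below is just surface syntax for it.
#[derive(Debug, PartialEq)]
struct Element {
    tag: &'static str,
    text: String,
}

fn element(tag: &'static str) -> Element {
    Element { tag, text: String::new() }
}

impl Element {
    fn text(mut self, t: &str) -> Self {
        self.text = t.into();
        self
    }
}

// XML-ish surface syntax expanding to the same plain function calls.
macro_rules! ui {
    (<$tag:ident> $text:literal </$end:ident>) => {
        element(stringify!($tag)).text($text)
    };
}

fn main() {
    // Both forms build the identical tree; the macro buys only familiarity.
    let with_macro = ui!(<p> "hello" </p>);
    let with_fns = element("p").text("hello");
    assert_eq!(with_macro, with_fns);
}
```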

the syntax, it is HORRIBLE

they'll go nowhere with that

I'm not happy with the state of the syntax, but I'm pretty sure it'll take me farther than this attitude.

It's nice to see that the traditional desktop GUI programmers finally realize that the way the Web people do GUIs is right (React/Vue/...) and the way they did it until now (QT/GTK/WPF/...) is wrong.

Many think that the only reason people use Electron is that it's cross-platform, when that is actually a minor benefit. The big one is just how much better and more efficient React/Vue/... are at creating GUIs. I say that as someone who has programmed GUIs in pretty much everything, starting with MFC.

> traditional desktop GUI programmers finally realize

FWIW, I'm not a desktop GUI programmer nor am I an...uh...professional GUI programmer of any kind. Not sure exactly who is doing the realizing here, but if they are, then great!?

> the way the Web people do GUIs is right

There are definitely some attractive things about web frameworks' approaches, but it's important to remember that the history of "declarative UI" traces a path back through C++ imgui devtools before React happened.

I see the transition here differently -- the web is so opinionated about the DOM and its rendering engine, the only thing you can iterate on to make yourself more productive is the frontmost application semantics. Full-pipeline UI toolkits have to manage changes across all the various phases of their implementations.

This (I think) creates an environment where the web is a great incubator for application model experimentation.

However, many projects that start on the web have a difficult time faithfully mapping to the semantics of other GUI systems (one of the driving forces behind Electron adoption IMO). This gap is one that moxie hopes to bridge -- ideally we should be able to learn from the highly productive web ecosystem while transferring those learnings further into the UI stack without the overhead and performance cliffs of typical web tools.

> It's nice to see that the traditional desktop GUI programmers finally realize that the way the Web people do GUIs is right (React/Vue/...) and the way they did it until now (QT/GTK/WPF/...) is wrong.

Ironically, it's quite the opposite.

It's quite funny to see React people treating their thing as a "GUI revolution" when React's approach has existed for 10-20 years in the traditional GUI world.

- Component-oriented approach: Qt, GTK, WPF, and OS X have done it for more than 20 years.

- Reactive design: Qt and others have done it for 15 years.

- Stateless immediate-mode rendering: imgui and Nuklear have done it for 10 years.

- Pub/sub event dispatch: call it signals/slots and you had it in Qt/GTK 15 years ago.

- DOM-specified GUI: call it XUL, XAML, or Qt XML and you had it 10 years ago.

One more proof that the tech world is often a continuous reinvention with a lot of hype.

>It's nice to see that the traditional desktop GUI programmers finally realize that the way the Web people do GUIs is right (React/Vue/...) and the way they did it until now (QT/GTK/WPF/...) is wrong.

What? How is WPF any different from a web-based UI framework? It does differential rendering, it uses hierarchical components in XML, and it can update the UI from state changes in the backing C# without explicit update commands, etc.

I agree. Doing MVVM in WPF/XAML felt like an incredibly natural transition from web frameworks for me. In fact, from a high-level standpoint it feels uncannily similar to working in Angular and TypeScript.

When I used it, WPF only had two-way binding. The recommended way to use React/Vue is with unidirectional one-way binding (Redux/Vuex). It's a huge difference, and it simplifies things a lot.
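A minimal sketch of the unidirectional style being described, written in Rust and using a generic reducer pattern rather than Redux/Vuex themselves: state changes only by dispatching actions through a pure reducer, and the view is derived from state instead of writing back into it:

```rust
// State is plain data, owned in one place.
#[derive(Clone, Debug, PartialEq)]
struct State {
    count: i32,
}

// All possible changes are named up front as actions.
enum Action {
    Increment,
    Reset,
}

// Pure reducer: old state + action -> new state, no side effects.
fn reduce(state: &State, action: &Action) -> State {
    match action {
        Action::Increment => State { count: state.count + 1 },
        Action::Reset => State { count: 0 },
    }
}

// The view is a pure function of state; it never mutates it (unlike
// two-way binding, where the view writes back into the model).
fn view(state: &State) -> String {
    format!("count: {}", state.count)
}

fn main() {
    let mut state = State { count: 0 };
    for action in [Action::Increment, Action::Increment, Action::Reset, Action::Increment] {
        state = reduce(&state, &action);
    }
    println!("{}", view(&state));
}
```

Because every change flows through one place, the data flow is trivially traceable, which is most of what "simplifies things a lot" means in practice.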

Maybe WPF can do this today, I don't know, haven't touched it in 10 years.

Another aspect, while WPF might check the feature list of React/Vue, using it in practice is kind of clunky.

.NET Framework 3.0 was initially released November 21st, 2006. React.js was initially released May 29th, 2013.

There were 2381 days between differentiating between various binding types in WPF (which include one way, both OneWay and OneWayToSource) and React's release. There were 2378 days between React's release and today.

So it took React longer to copy one way bindings from WPF than it took this project to "copy" React.

Too bad you didn't make this comment on Thursday.

If moxie brings anything new to the table, it’s only because I copied liberally and shamelessly from React and other successful and interesting projects, learning and improving on them in the process. If it doesn’t bring anything new, then it’s because I copied liberally and shamelessly from successful and interesting projects :P.

I’m flattered to be compared to long standing paradigms I’ve had in mind (and others I am learning of now) while working on this.

I'm really only commenting on the idea that WPF doesn't have one-way binding. People saying dumb stuff on the internet makes me mad. Personal failing. Like, that was added in 2006. The feature was co-released with Windows Vista and IE7. Google had just bought YouTube for $1.65bn and people were incredulous about such a stupid decision on Google's part. One-way binding in WPF was released to the public three months after jQuery. It's a year older than Silverlight, which is significant because Silverlight took everything that WPF was and said, "what if we could make web pages this way?" Arguably, without Silverlight the React folks would never have asked, "what if Silverlight wasn't completely shitty?"

I'm not really qualified to compare wpf to react or moxie.

Second, liberally copying the best parts of other things and leaving the crappy parts behind is bringing something new to the table. Don't sell yourself short.

Can you point me to a good introduction (document, blog post, video) to the way WPF does this? I'm actually quite interested in the older systems, especially because a lot of them come from a place where massive complexity is not seen as a good thing.

Long time ago, because .NET 4 already had multiple bindings.

In fact WPF original team was composed by ex-IE developers, if I am not mistaken.

I completely disagree. I've been working with React Native and the whole thing feels like an inefficient abstraction designed to improve the lives of people who have to work in browsers every day.

Web pages are stateless document trees, and libraries like react were designed to overcome those inherent problems when trying to build interactive features.

Desktop/mobile application frameworks are designed to be stateful component trees and have design patterns to help juggle interaction, data, and views more efficiently.

> It's nice to see that the traditional desktop GUI programmers finally realize that the way the Web people do GUIs is right (React/Vue/...) and the way they did it until now (QT/GTK/WPF/...) is wrong.

C'mon, Qt has been doing declarative UI for the last ten years, since before React and Vue even existed: https://patrickelectric.work/qmlonline/

It may be declarative, but is it reactive?

Edit: it may not sound that way but I swear I was honestly just asking a question. :)

Yes? In an even purer form than most JS frameworks: just using a variable creates a reactive binding.


    property int count: 0
    Text {
        text: "counter " + count
    }
    Button {
        onClicked: count++
    }

will result in the text changing every time the button is clicked. For a complete code example you can paste into the previous link:

    import QtQuick 2.7
    import QtQuick.Controls 2.3

    Rectangle {
        id: root
        anchors.fill: parent
        property int count: 0
        Column {
            Text {
                text: "counter " + count
            }
            Button {
                onClicked: count++
            }
        }
    }


> It's nice to see that the traditional desktop GUI programmers finally realize that the way the Web people do GUIs is right and the way they did it until now (QT/GTK/WPF/...) is wrong.

Can you provide any detail to corroborate your assertion? Because the Qt way of developing GUIs is based on a combination of a DOM, events and state machines to update the state of each subtree in the DOM.

This is actually one of the reasons I'm so excited about the space. If it is possible to combine modern, Web-inspired ways of building UI with Rust's performance and other qualities, I think the result could be compelling. But we don't know yet, and I think what's needed now is an exploration of the various ways to adapt these reactive approaches into idiomatic Rust. I've been tracking Moxie and think it brings a lot to the conversation - I think Adam and I have been learning a lot from each other.

I don’t think this statement is correct. For example, Qt has been doing declarative UI with QML for years.

I generally agree with the OP except that lumping QML into the old-school GUI techniques isn't a proper fit. Traditional Qt Widgets, yes, but QML is indeed not far from the React-style of GUI programming.

> Traditional Qt Widgets, yes

No you're still quite wrong. QWidgets-based UIs still consist of a DOM driven by a state machine that handles events. UI files have a 1:1 correspondence with components and state machines still control changes to the widget/DOM tree.

But are widgets reactive in the way QML is? That’s the issue.

I'm no UI programmer, but this is such a strong claim that I would ask you to elaborate further.

I tell you what was efficient for creating GUIs:

The old Visual Basic (and I've also heard Delphi.)

Doesn't mean I want them anywhere near my projects. The web is somewhat better thanks to declarative UI (for those that use it) and separation of concerns (again for those who take advantage of it).

You can do declarative UI with Forms as well; it just takes a bit more code than WPF to set up the data-binding contexts and layouts.

Sure, if you don't care about transitions and animations at all.

If you're wanting to do a really responsive and nice-feeling app, the react model hinders you far more than it helps.

What you're describing is pretty much how a LOT of game UIs have been developed for YEARS before React/Vue... in fact, I was doing something similar with E4X in the browser half a decade before React (though without Chrome or IE support, it was kind of a dead end).

> The big one is just how much better and more efficient React/Vue/... are at creating GUIs

Efficiency is not the first thing that pops to my mind when I look at the Electron-based apps in my task manager.

They were probably referring to speed/ease of development, not application performance.

> application performance

I feel like this has suffered because the end users aren't the ones paying the bills in web space.

As someone who started doing UI in the late 90s I totally agree.

AFAIK Flutter and SwiftUI are bringing those web ideas into mobile/desktop dev.

AFAIK there is still nothing similar in the C++ world.

Flutter is available for C++ desktop programming. You just cannot do all the UI logic in C++; you have to use Dart.

> Flutter is available for C++ desktop programming

I know Flutter for desktop is in progress. Any link with more info?
