Incremental is essentially an implementation of Adapton from 2014: https://www.cs.umd.edu/~hammer/adapton/
There is a big upset coming in the UX world as we converge toward a generalized implementation of the "diff & patch" pattern which underlies Git, React, compiler optimization, scene rendering, and query optimization.
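The "diff & patch" pattern in question can be sketched generically: compute the difference between an old and a new state, then apply only that difference to a target. This is an illustrative Rust sketch with made-up names (`Op`, `diff`, `patch`), not any of the mentioned systems' actual machinery:

```rust
// Minimal generic "diff & patch": the diff is a list of small operations,
// and applying them reproduces the new state from the old one.

#[derive(Debug, PartialEq)]
enum Op {
    Set(usize, i32),  // overwrite the value at an index
    Push(i32),        // append a new value
    Truncate(usize),  // drop trailing values
}

fn diff(old: &[i32], new: &[i32]) -> Vec<Op> {
    let shared = old.len().min(new.len());
    let mut ops = Vec::new();
    for i in 0..shared {
        if old[i] != new[i] {
            ops.push(Op::Set(i, new[i]));
        }
    }
    for &v in &new[shared..] {
        ops.push(Op::Push(v));
    }
    if new.len() < old.len() {
        ops.push(Op::Truncate(new.len()));
    }
    ops
}

fn patch(target: &mut Vec<i32>, ops: &[Op]) {
    for op in ops {
        match op {
            Op::Set(i, v) => target[*i] = *v,
            Op::Push(v) => target.push(*v),
            Op::Truncate(n) => target.truncate(*n),
        }
    }
}

fn main() {
    let old = vec![1, 2, 3];
    let new = vec![1, 5, 3, 4];
    let ops = diff(&old, &new);
    let mut target = old.clone();
    patch(&mut target, &ops);
    assert_eq!(target, new); // patch(old, diff(old, new)) == new
}
```

Git, React, and query optimizers each specialize this shape (diffs over trees, snapshots, or plans), but the round-trip invariant is the same.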
Two sleeper startups in this space that are going to make a lot of money are Frank McSherry's Materialize Inc. and Nathan Marz's Red Planet Labs.
incr_dom source: https://github.com/janestreet/incr_dom
Differential Dataflow: https://github.com/TimelyDataflow/differential-dataflow
Declarative Dataflow Client: https://github.com/sixthnormal/clj-3df
Crux adapter for Declarative Dataflow: https://github.com/sixthnormal/clj-3df
Don't forget Windows 1.0 in that list:
X11 added the ability to have a backing store for each window, with the compositor rendering them to the display; Wayland is compositor-only.
On another topic, the constraint based GUI systems going back at least to the 80s are similar to react et al., though usually the widget graph was fixed, and only the properties of the widgets were reactive.
Most modern GUI frameworks don't use the tree-of-HWNDs anymore, though. Which is to say, the entire visual element tree is handled internally by the framework's own compositor, and the top-level WM_PAINT just renders the resulting bitmap. WPF and Qt do it that way. That said, there's still no shortage of apps that are implemented in terms of native Win32 widgets - pretty much all the non-UWP apps that come with Windows are like that. So when you are looking at, say, Notepad or Explorer, they still fundamentally work the way the article linked to above describes.
But of course that can be combined with diff&patch as well...
Which they are, through extensions such as https://www.khronos.org/registry/EGL/extensions/KHR/EGL_KHR_...
For views, you either get some form of MVP, with explicitly implemented model interfaces that provide the glue between the views and the object tree they are representing, or data binding that effectively creates that same glue for you. Here's an example from UWP:
So no, it's not really functional. Quite the opposite - the state is global and mutable, and UX actions that purport to change things really do change them. That also makes it all very intuitive, though.
The concept is somewhat complex to master (I compare it to getting monads to click), but once you understand it, you can envision how to build the full UI architecture as if you had a Lego box at your disposal.
For everything, views, stylesheets, event handlers, data models.
How are they going to make money if open source enthusiasts copy the ideas and spread them for free? See e.g. React -> Vue.
Selling developer tools these days is just a losing game.
Selling a company that holds some patents or serious implementations of useful software is not. We live in an era where Big Tech will acquire companies for tens of millions, then dismantle their product, just to hire a proven engineering team (or keep them out of the hands of competitors for a brief period). So companies with actual, useful products do fetch quite a bit.
Git PRs don't pay supermarket bills.
It is a matter of finding a target audience willing to pay for their tools.
I agree this is true for _trivial_ dev tools, but for dev tools that are more complex or provide an entire ecosystem I think this is not strictly true. Where I work (Sourcegraph) we are able to sell a good cohesive bundle of dev tools (code search, jump-to-definition/find-references, etc.) and through that have been able to easily fund REALLY cool tooling like https://comby.dev
I imagine the same is true for e.g. GitLab
The "copying" comments are largely due to things like Vue's new proposed hooks API being very much inspired by React coming up with hooks in the first place.
Hence why everyone who wants to make a living selling developer tools turns to enterprise customers, B2B, or finds a way to package them as SaaS.
Welll ... from their website:
> Red Planet Labs is pioneering a radically new kind of software tool.
Also, the Materialize "reactive SQL" solution sounds a lot like Firebase.
1. Deriving a Domain Specific Language for purely functional 3D Graphics - https://rawgit.com/wiki/aardvark-platform/aardvark.docs/docs...
2. Fable.Elmish.Adaptive (another instance of applying incremental computation to the Elm model) - https://github.com/krauthaufen/Fable.Elmish.Adaptive
I mean, here is Qt shipping Qml interfaces in a ton of cars, and you're talking up rust2html.
Original idea: https://github.com/paf31/purescript-incremental-functions
More recent Proof of Concept: https://github.com/jlavelle/purescript-snap
(I just posted the latter to HN here: https://news.ycombinator.com/item?id=21690382)
(BTW I’m not sure if you remember our conversation at React EU in 2017, really enjoyed chatting!)
I thought git (as opposed to, say, RCS) did _not_ do the diff thing, and each object is just the full thing, relying on gzip to do the compression?
No it doesn't. It does delta compression – which ultimately accomplishes the same goal – but this is not what people mean when they say that a version control system stores diffs (like SVN does).
It seems like a good trade off—yes, everything must be done just so, and perhaps some parts of your app are a little awkward, or a little boilerplate-y, but there’s a whole giant class of problems you can just totally ignore.
Until... you can’t.
There’s another approach to library authorship, where the goal is not to delete a class of problems, but to make a class of problems easily controllable. Not to make a large domain disappear, but to make it programmable.
I have found that, with that as your goal, you will be pushed away from declarative models, because declarative models are about making a surface that is not programmable but configurable.
You can’t program arbitrary relationships across a React component boundary. You must flatten your intent into the very tight declarative interface which exists there.
Procedural models allow you to use the full set of capabilities that a function call and type system gives you. Ideally you don’t use much of that capability at any one time. But the beauty is you can use exactly the right control structure for the specific concern you are trying to... not abstract away, but make controllable.
The set of “concerns” is the infinite set. And no possible declarative model can capture it. Functions can.
SQL -> SQL procedures
Angular templates -> DOM manipulation
Puppet -> custom scripts
For example, HTML/SVG/CSS DOM scripting can be considered "procedural", but in another sense the DOM is a rather declarative version of rendering, whose procedural escape hatch is Canvas/WebGL.
You may even find cases where they are combined tightly:
A makefile has declarative dependencies and procedural recipes.
But once you are out the escape hatch you will need to model both the new space you are working in, AND model the world you escaped from in order to interoperate, which is a special hell.
The idea is always to put a layer on top that simplifies things, so it's easier to use and faster to learn. But the end result is usually that you have to learn both the abstraction and the thing it was trying to hide.
If functions are your declarative model, what would you say?
So when I talk about declarative models I’m talking about the Swiss cheese of declarative models with procedures mixed in.
I don't need the freedom to invent my own character encoding, or the freedom to create SQL injection vulnerabilities, buffer overflows or use after free bugs. That's just something that drags me down when I'm writing a nice web app to have fun with friends or a good old enterprisey web app at work. I need the freedom to create working applications quickly.
Flutter gets it right. The code looks almost like it is declarative. But really it's just Dart code (an underappreciated language IMO) and you never get stuck by things that are impossible to express.
I need to figure out how to express this better in the future, but if you swap "Dart" for "Rust," this sentence applies to moxie as well. It's what I mean when I say that the callgraph of functions is used for structure.
I almost wish the post led with this, anp. It’s such a great and concise description of the aim for the project, and really helps clarify all underlying decisions.
Thanks for writing this post. It’s great!
Thanks for the kind words <3
I think there is reason to worry: consider a node with a lot of children, for example a list of many todos. Say you want to change the state of one single todo, for example marking it as done. With a top-down approach such as diffing, you have to go through each child to see if it has changed, which is O(n). If you instead have components that know their position in the tree and can react to external changes, like React's components or hooks with useState, the update can be O(1).
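The O(n) vs O(1) contrast above can be sketched in a few lines. This is an illustrative Rust sketch (`Todo`, `diff_all`, `mark_done` are made-up names, not any framework's real API):

```rust
// Top-down diffing visits every child; a targeted update goes straight
// to the changed node.

#[derive(Clone)]
struct Todo {
    done: bool,
}

// Top-down: compare every todo against the previous snapshot.
fn diff_all(prev: &[Todo], next: &[Todo]) -> Vec<usize> {
    prev.iter()
        .zip(next)
        .enumerate()
        .filter(|(_, (a, b))| a.done != b.done)
        .map(|(i, _)| i)
        .collect() // O(n), even when only one todo changed
}

// Targeted: the component holding `index` is updated directly.
fn mark_done(todos: &mut [Todo], index: usize) -> usize {
    todos[index].done = true; // O(1): siblings are never visited
    index
}

fn main() {
    let prev = vec![Todo { done: false }; 3];
    let mut next = prev.clone();
    mark_done(&mut next, 1);
    assert_eq!(diff_all(&prev, &next), vec![1]); // diff still scanned all 3
}
```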
I can't find reader mode on mobile chrome.
Please don't misinterpret in this way. This is a candid reaction. As you indicate, it confirms your previous observations. This is not an opinion of you or any other Rust developer. It is an opinion of the syntax, period.
Though, you may also be referring to the "made up language" powered by the Macros. I agree that can be a bit much, but it's in my view no more distracting than React's JSX. Some people hate JSX, and I imagine they would hate this as well.
I use an ErgoDox, and while my layout isn't customized for writing mox invocations, it hadn't occurred to me that this might be a blind spot in my thinking about ergonomics. Good point!
_ is "don't complain because I don't use this variable"
|| is the syntax around Rust closures
foo! is a Rust macro
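All three pieces of syntax in one tiny, self-contained snippet (the `shout!` macro is made up for illustration):

```rust
// `shout!` is a macro: invoked with `!`, expanded at compile time.
macro_rules! shout {
    ($s:expr) => {
        $s.to_uppercase()
    };
}

fn main() {
    // `_` tells the compiler "don't complain, I'm discarding this value".
    let _ = 42;

    // `||` introduces a closure; this one takes no arguments.
    let greet = || "hello".to_string();

    assert_eq!(shout!(greet()), "HELLO");
}
```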
Isn't this desirable because we can program in real Rust, using some simple macro expansion, rather than creating a free-standing language for GUI building?
Learning Rust via this GUI library might not be the best choice(?), and I think it's better that he focus on people who already know Rust.
Rust does have plenty of symbols (nearly all of which are lifted directly from C or C++, with the same or analogous meanings), but such extensive DSLs are rarely encountered in the course of everyday use of the language.
The important thing is actual expressiveness. Rust gives you a heavy duty toolbox to play with. You could have a beautiful-looking syntax with no braces, no parens, and tons of whitespace (think CoffeeScript or Haskell) where you're very restricted to a small set of square pegs, or you could use something like C++, (Perl), Rust, or Lisp that lets you make any Rube Goldberg machinery you want no matter how hideous it looks. It's a matter of personal preference. You can be very effective either way, but I think one of them makes things unnecessarily harder on yourself.
There are two camps:
1. Syntax is important. I want everything to line up beautifully on the screen. I have a riced out Arch desktop and a super clean enviable desk with modern furniture.
2. Syntax doesn't matter. I format code sloppily and let clang-format fix it eventually. I love macros and codegen because they make my life easier. I haven't changed my wallpaper in 5 years.
Also, because it's a Paradox game, the interface is ass.
If you like this genre of games, this is one to lose a couple dozen hours to. If you just think "oh, Mars is cool", it's probably good to skip, or download a mod that lets you fast-forward time.
Huh, it's a Paradox game? That alone makes me interested; I'm a big fan of Stellaris.
> bursts of micromanaging followed by long droughts of waiting for shit to happen
Yeah, that sounds like Stellaris.
Thanks for the review; I'll check the game out.
It's published by Paradox, not developed by them.
Isn't that still radically inefficient?
Browsers already do tons of work to avoid recomputing too much of this stuff whenever the DOM changes, and it's still inadvisable to poke the DOM too much. I don't see this changing anytime soon. There are various new features that allow for some level of hinting, but they won't obviate this. A browser's incremental layout/styling has to be prepared for any kind of potential change, whereas if you have a reactive UI framework you know what kinds of changes can happen, and can optimize diffing based on that.
There's a reason why a lot of JS UI frameworks use a virtual DOM. It sounds expensive to maintain, but directly operating on the DOM is more expensive.
Also, syncing can theoretically get to the minimum possible DOM calls, but with good tools I believe I can get close to n*log(n) of that with a procedural model. Which makes your point somewhat moot.
That said there are various tricks that browsers use to avoid introducing these boundaries.
Oh yeah, modifying the DOM is practically free. Which is why React and the like do that in their mirror DOM.
Actually applying the resulting changes, on the other hand...
Someone recently got very close to having the moxie-native calculator working in a browser's wasm/webgl, which will be mindblowing if they get it to work.
On the other hand native toolkits have had hardware-accelerated text rendering for years if not decades. Browsers are not actually good at being a UI toolkit. They are good at handling a variety of inputs of a variety of questionable quality with a variety of usage patterns. They are good at being very defensive in the face of arbitrary inputs. They are a great catch-all, but as a result they are never actually fast at anything in particular.
No doubt there are good reasons for the slow browser performance (like text as you mentioned, layout, event management, etc.)... but it's still kinda crazy that, with the power of computers today, tearing down and building up a tree that results in calculations for no more than a few thousand sprites or so is a performance killer.
Would be nice if I could just re-render the entire DOM every tick. I've been playing with doing exactly that via lit-html and it seems to be working fine... but still, the idea is "the dom is sensitive - don't change what you don't need to" :\
Yes, and this is why non-Latin text in many games is at best a texture, and at worst, horribly, incorrectly rendered.
The state of the art for Arabic in Unity is I believe a plugin that basically does manual shaping by replacing code points. There may be a Harfbuzz plugin, idk.
Though that was with the old vector/cpu Flash, not the Starling/gpu stuff.
(also a question to the siblings here! I'm not totally clear on HN etiquette when one wants a "reply all"...)
Canvas has the same problem, though there are tricks like compiling Harfbuzz to wasm to get around that. There are proposals for a Web Shaping API to expose the underlying shaping engine used by the browser.
(I don't know if there is a writeup somewhere about this other than scattered information in the mozillagfx newsletter and issue trackers?)
But it drains your battery like crazy. Immediate mode GUIs are good for games, which already render the whole scene @60FPS and are expected to be costly, but re-rendering your whole window every frame even when idling is just a waste of power.
For a non-game (desktopey) app, well, there's no reason to render at 60 FPS; you should only render on user interaction and otherwise go idle most of the time.
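The "render on interaction, idle otherwise" loop can be sketched with a blocking channel receive standing in for the windowing system's event queue (`Event` and the wiring here are illustrative, not any toolkit's real API):

```rust
// Event-driven loop: block until input arrives instead of redrawing
// every frame.
use std::sync::mpsc;
use std::thread;

enum Event {
    Click,
    Quit,
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Stand-in for the windowing system delivering user input.
    thread::spawn(move || {
        tx.send(Event::Click).unwrap();
        tx.send(Event::Quit).unwrap();
    });

    let mut renders = 0;
    // `recv` blocks while idle, so no CPU is burned between events,
    // unlike a game loop that redraws unconditionally at 60 FPS.
    while let Ok(event) = rx.recv() {
        match event {
            Event::Click => renders += 1, // redraw only in response to input
            Event::Quit => break,
        }
    }
    assert_eq!(renders, 1);
}
```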
Regarding your mental model: This looks quite similar to IEC 61499
Honestly, there's limited advantage over vanilla HTML5 if HTML is actually what you were going to write. The website at moxie.rs is plain HTML/CSS/etc. The reason to reach for tools like this on the web is if you want interactive state, complex data transformations, etc.
I think it will be hard for moxie-dom to compete directly on ergonomics with purely web-focused tools (especially JS frameworks), because the underlying tools will always need to maintain some distance from platform semantics.
If you want to build a highly-dynamic SPA, vanilla html5 will become an accidental exercise in reinventing a web framework, but poorly.
(Don't intend that to be dismissive; I'm a huge react fan)
> I have described moxie a few times as "what if a React in Rust but built out of context and hooks?"
they'll go nowhere with that
Many think that the only reason people use Electron is that it's cross-platform, when that is actually a minor benefit. The big one is just how much better and more efficient React/Vue/... are at creating GUIs. I say that as someone who has programmed GUIs in pretty much everything, starting with MFC.
FWIW, I'm not a desktop GUI programmer nor am I an...uh...professional GUI programmer of any kind. Not sure exactly who is doing the realizing here, but if they are, then great!?
> the way the Web people do GUIs is right
There are definitely some attractive things about web frameworks' approaches, but it's important to remember that the history of "declarative UI" traces a path back through C++ imgui devtools before React happened.
I see the transition here differently -- the web is so opinionated about the DOM and its rendering engine, the only thing you can iterate on to make yourself more productive is the frontmost application semantics. Full-pipeline UI toolkits have to manage changes across all the various phases of their implementations.
This (I think) creates an environment where the web is a great incubator for application model experimentation.
However, many projects that start on the web have a difficult time faithfully mapping to the semantics of other GUI systems (one of the driving forces behind Electron adoption IMO). This gap is one that moxie hopes to bridge -- ideally we should be able to learn from the highly productive web ecosystem while transferring those learnings further into the UI stack without the overhead and performance cliffs of typical web tools.
Ironically, it's quite the opposite.
It's quite funny to see React people treating their thing as a "GUI revolution" when React's approach has existed for 10-20 years in the traditional GUI world.
- Component oriented approach: Done by Qt, GTK, WPF, OSX for more than 20 years.
- Reactive design: Qt and other have done that for 15 years.
- Stateless Immediate rendering: IMGUI and nuklear have done it for 10 years.
- Pub/Sub event dispatch: Call it signal/slot and you had it in Qt/GTK for 15 years.
- DOM specified GUI, call it XUL, XAML or Qt XML and you had it for 10 years.
One more proof that the tech world is often a continuous reinvention with a lot of hype.
What? How is WPF any different from a web-based UI framework? It does differential rendering, it uses hierarchical components in XML. WPF can update the UI on state changes in the backing C# without explicit update commands, etc.
Maybe WPF can do this today, I don't know, haven't touched it in 10 years.
Another aspect, while WPF might check the feature list of React/Vue, using it in practice is kind of clunky.
There were 2381 days between differentiating between various binding types in WPF (which include one way, both OneWay and OneWayToSource) and React's release. There were 2378 days between React's release and today.
So it took React longer to copy one way bindings from WPF than it took this project to "copy" React.
Too bad you didn't make this comment on Thursday.
I’m flattered to be compared to long standing paradigms I’ve had in mind (and others I am learning of now) while working on this.
I'm not really qualified to compare wpf to react or moxie.
Second, liberally copying the best parts of other things and leaving the crappy parts behind is bringing something new to the table. Don't sell yourself short.
Web pages are stateless document trees, and libraries like react were designed to overcome those inherent problems when trying to build interactive features.
Desktop/mobile application frameworks are designed to be stateful component trees and have design patterns to help juggle interaction, data and views more efficiently.
c'mon, Qt has been doing declarative UI for the last ten years, before React and Vue even existed : https://patrickelectric.work/qmlonline/
Edit: it may not sound that way but I swear I was honestly just asking a question. :)
    import QtQuick 2.7
    import QtQuick.Controls 2.3

    property int count: 0
    text: "counter " + count
Can you provide any detail to corroborate your assertion? Because the Qt way of developing GUIs is based on a combination of a DOM, events and state machines to update the state of each subtree in the DOM.
No you're still quite wrong. QWidgets-based UIs still consist of a DOM driven by a state machine that handles events. UI files have a 1:1 correspondence with components and state machines still control changes to the widget/DOM tree.
The old Visual Basic (and I've also heard Delphi.)
Doesn't mean I want them anywhere near my projects. The web is somewhat better thanks to declarative UI (for those that use it) and separation of concerns (again for those who take advantage of it).
If you're wanting to do a really responsive and nice-feeling app, the react model hinders you far more than it helps.
Efficiency is not the first thing that pops to my mind when I look at the Electron-based apps in my task manager.
I feel like this has suffered because the end users aren't the ones paying the bills in web space.
AFAIK Flutter and SwiftUI are bringing those web ideas into mobile/desktop dev.
AFAIK there is still nothing similar in the C++ world.
I know Flutter for desktop is in process. Any link with more info?