
Fast, Bump-Allocated Virtual Doms with Rust and Wasm - BoumTAC
https://hacks.mozilla.org/2019/03/fast-bump-allocated-virtual-doms-with-rust-and-wasm/
======
ilovecaching
How exciting. One of the great things about React is that it's just as much a
pattern as a library, and with Rust macros and v-dom it should be rather easy
to build something similar to JSX in Rust (with the proper Rust-isms of
course). Can't wait for Rust to rule the web.
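
As a rough sketch of the idea, a toy `html!` macro can build a virtual-node tree from JSX-ish syntax. Everything here (`Node`, `html!`) is hypothetical and far simpler than what a real crate would do:

```rust
// Toy JSX-like interface via declarative macros. `Node` and `html!`
// are illustrative only; real crates are far more thorough.

#[derive(Debug, PartialEq)]
enum Node {
    Element { tag: String, children: Vec<Node> },
    Text(String),
}

macro_rules! html {
    // Text leaf: html!("hello")
    ($text:literal) => {
        Node::Text($text.to_string())
    };
    // Element with children: html!(div [ ...child expressions... ])
    ($tag:ident [ $($child:expr),* $(,)? ]) => {
        Node::Element {
            tag: stringify!($tag).to_string(),
            children: vec![$($child),*],
        }
    };
}

fn main() {
    let tree = html!(div [ html!("hello"), html!(span [ html!("world") ]) ]);
    if let Node::Element { tag, children } = &tree {
        println!("<{}> with {} children", tag, children.len());
    }
}
```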

~~~
steveklabnik
I'm watching [https://github.com/bodil/typed-html](https://github.com/bodil/typed-html)
personally; not just JSX, but statically typed!

------
gambler
_> Virtual DOM libraries provide a declarative interface to the Web’s
imperative DOM._

I'm not sure what this means. DOM is an object model for HTML. It is mutable,
but HTML itself is definitely declarative.

Which brings up an interesting question. Why are DOM-diffs something that is
done by userland libraries when it can and probably should be done by the
browser itself?

~~~
sraquo
Because virtual DOM is not the only way to achieve efficient DOM updates, and
even within the concept of virtual DOM there could be many different ways to
achieve efficient diffing (e.g. React vs Snabbdom). There is nothing special
about a particular virtual DOM spec to deserve a place in web standards.

~~~
tracker1
The things a VDOM gets you are a simpler interface than the browser DOM and an
often faster comparison than the browser itself provides. There have been
significant real browser DOM improvements since React came out, but I'm pretty
sure an optimized VDOM in WASM could be faster because of the overhead of
interacting with the real/full browser DOM. There are also side effects wrt
the full/real DOM in practice.

I agree though, nothing that requires a place in web standards at all.

~~~
sraquo
Well... faster than what exactly? You _have_ to do the virtual DOM pattern
(generating new DOM state and then diffing it with the old state) with
_virtual_ elements. You can't compare a proper virtual DOM to using _real_ DOM
elements instead of virtual ones in a virtual DOM pattern; it wouldn't make
any sense.
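
For illustration, the generate-then-diff step can be sketched with made-up `VNode`/`Patch` types; this shows the shape of the pattern, not any particular library's implementation:

```rust
// Sketch of the virtual DOM pattern: walk the old and new virtual
// trees together and record patches to later apply to the real DOM.
// `VNode` and `Patch` are made-up types for illustration.

#[derive(Debug, PartialEq)]
struct VNode {
    tag: String,
    text: String,
    children: Vec<VNode>,
}

#[derive(Debug, PartialEq)]
enum Patch {
    ReplaceNode { path: Vec<usize> },
    SetText { path: Vec<usize>, text: String },
}

fn diff(old: &VNode, new: &VNode, path: Vec<usize>, out: &mut Vec<Patch>) {
    if old.tag != new.tag {
        // Different element type: replace the whole subtree.
        out.push(Patch::ReplaceNode { path });
        return;
    }
    if old.text != new.text {
        out.push(Patch::SetText { path: path.clone(), text: new.text.clone() });
    }
    // Naive child walk; real libraries also handle insertions,
    // removals, and keyed reordering here.
    for (i, (o, n)) in old.children.iter().zip(new.children.iter()).enumerate() {
        let mut child_path = path.clone();
        child_path.push(i);
        diff(o, n, child_path, out);
    }
}

fn main() {
    let old = VNode { tag: "p".into(), text: "hi".into(), children: vec![] };
    let new = VNode { tag: "p".into(), text: "hello".into(), children: vec![] };
    let mut patches = Vec::new();
    diff(&old, &new, Vec::new(), &mut patches);
    println!("{:?}", patches); // one SetText patch at the root
}
```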

But there are other non-virtual-DOM ways to manage DOM state efficiently and
in a maintainable manner. For example, my own library uses Observables to
drive precise DOM updates and works with trees holding _real_ (not virtual)
DOM elements, so it doesn't need to do any diffing at all:
[https://github.com/raquo/Laminar](https://github.com/raquo/Laminar)
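
Laminar itself is Scala, but the shape of the approach can be sketched in Rust too. All names below are made up, and a string stands in for a real DOM node:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Observable-style precise updates: a state cell pushes each new value
// straight to its subscribers, which write to the (stand-in) DOM node
// directly. No trees are compared, so there is no diffing step at all.

struct Var<T> {
    value: RefCell<T>,
    subscribers: RefCell<Vec<Box<dyn Fn(&T)>>>,
}

impl<T> Var<T> {
    fn new(value: T) -> Rc<Self> {
        Rc::new(Var {
            value: RefCell::new(value),
            subscribers: RefCell::new(Vec::new()),
        })
    }
    fn subscribe(&self, f: impl Fn(&T) + 'static) {
        f(&self.value.borrow()); // fire once with the current value
        self.subscribers.borrow_mut().push(Box::new(f));
    }
    fn set(&self, value: T) {
        *self.value.borrow_mut() = value;
        for f in self.subscribers.borrow().iter() {
            f(&self.value.borrow()); // push the update directly
        }
    }
}

fn main() {
    // Stand-in for a real DOM text node.
    let dom_text = Rc::new(RefCell::new(String::new()));
    let count = Var::new(0);
    let node = dom_text.clone();
    count.subscribe(move |n| *node.borrow_mut() = format!("count: {}", n));
    count.set(1);
    count.set(2);
    assert_eq!(*dom_text.borrow(), "count: 2");
}
```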

I don't think it's a given which of these techniques would be faster, it
depends heavily on the particular use case and the implementation of diffing
(for virtual DOM) and Observables (for my pattern). If both are well optimized
I'd expect virtual DOM to lose in a lot of cases.

~~~
tracker1
Like I said, _could_ , it probably depends on actual use... DOM navigation for
reads or updates can be optimized, but depending on how it is done, it may not
work as well. React itself is moving towards diffing against the browser's
real DOM, iirc. Browsers have gotten a lot better than in the past. That said,
comparing each node for updates against large trees may be more costly than
updating and diffing against a partial abstraction.

~~~
sraquo
What I'm saying is outside of the virtual DOM paradigm you might not need to
diff any elements at all, real or virtual, and so you wouldn't care about the
performance of DOM reads, as you're not doing them.

Then it becomes a matter of DOM write performance, but that is the same for
everyone assuming the native DOM API commands issued by the libraries are the
same, which is a more or less reasonable assumption for well optimized
libraries even if they use different paradigms to calculate what those
commands should be.

------
hiccuphippo
Yes, I was thinking web developers, rather than using Rust and Wasm directly,
would first get the benefits when the libraries they use start moving the
heavy-duty parts to Wasm. Can't wait to see if someone uses this for building
a React-like framework.

~~~
steveklabnik
The Rust Wasm working group agrees, and that's why they've been pursuing a
strategy of building libraries, rather than full front-end frameworks. There
are some people who are doing that, and now that the "build a library in wasm"
story is going pretty well, there are some plans to move into that space too
([https://rustwasm.github.io/2019/03/12/lets-build-gloo-together.html](https://rustwasm.github.io/2019/03/12/lets-build-gloo-together.html)).
But almost all of the previous work has been for libraries.

~~~
dman
I am hopeful for the other half to land as well - here's hoping that
something like Piet and Druid results in a completely Rust-based application
delivery platform that ships in the browser.

~~~
vanderZwan
Could you elaborate on what Piet and Druid are, perhaps with a few links? I
assume you don't mean the esolang[0].

[0] [https://esolangs.org/wiki/Piet](https://esolangs.org/wiki/Piet)

~~~
dman
Sure, sorry about the initial omissions.

[https://github.com/xi-editor/druid](https://github.com/xi-editor/druid)

[https://github.com/linebender/piet](https://github.com/linebender/piet)

~~~
vanderZwan
Thanks! Looks very interesting, going to watch the video about Druid linked in
the README.md later[0].

And no worries, it's very easy to forget not everyone is introduced!

Anyway, to actually engage with your point: I can see the appeal of an all-
Rust framework: good typing system + high performance (both in speed and low
memory use, ideally) sounds fantastic!

So out of curiosity: how do you think hot swapping would be handled? I'm
asking as someone who has not dived into Rust at all and has only observed it
with interest from a distance.

For starters, I understand Rust has fairly slow compilation times, no? Or is
that only true for optimized code and do we have fast debug options?

Similarly, hot swapping requires maintaining state. With JS that's not too
difficult because you don't worry about memory layout: as long as the high-
level structure and names are the same the code works fine (it just kills the
JIT optimizations).

With WASM, that goes out the window: add or remove a field in your struct and
all the offsets change. Rust may guarantee no memory leaks, but that is not
the same as guaranteeing that a snapshot of the memory state of one Rust
program works in a different one.

I guess some kind of "export to/import from JavaScript based representation"
glue code that runs every hot reload could work, but that sounds like it could
really freeze the browser when hot-reloading the big frameworks.
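
One hedged sketch of that glue idea, with a hand-rolled text snapshot standing in for real serde/JSON glue (`AppState`, `export_state`, and `import_state` are all hypothetical names): the snapshot survives a struct whose memory layout changed, as long as the field names still line up.

```rust
// Hypothetical hot-reload glue: snapshot app state into a
// layout-independent text form, then rebuild it on the other side.
// Hand-rolled here; in practice serde + JSON through JS glue would
// likely play this role.

#[derive(Debug, PartialEq)]
struct AppState {
    count: u32,
    name: String,
}

fn export_state(s: &AppState) -> String {
    format!("count={}\nname={}", s.count, s.name)
}

fn import_state(snapshot: &str) -> AppState {
    // Start from defaults so fields the old module didn't know about
    // still get a value, and unknown old fields are simply dropped.
    let mut state = AppState { count: 0, name: String::new() };
    for line in snapshot.lines() {
        match line.split_once('=') {
            Some(("count", v)) => state.count = v.parse().unwrap_or(0),
            Some(("name", v)) => state.name = v.to_string(),
            _ => {}
        }
    }
    state
}

fn main() {
    let before = AppState { count: 7, name: "demo".to_string() };
    let after = import_state(&export_state(&before));
    assert_eq!(before, after); // state survives the module swap
}
```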

[0]
[https://www.youtube.com/watch?v=4YTfxresvS8](https://www.youtube.com/watch?v=4YTfxresvS8)
would be funny if I forgot to include links myself at this point :p

~~~
dman
Had to look up hot swapping; to be honest, I don't think the experience will
be as good as it would be for dynamic languages like Dart / Javascript.

------
writepub
Firstly, congrats on shipping a virtual DOM lib in WASM. Hopefully, frameworks
intent on using a V-DOM will greatly benefit from this.

Having said that, is a V-DOM required in 2019 if DOM updates are optimally
batched, as in FastDom (
[https://github.com/wilsonpage/fastdom](https://github.com/wilsonpage/fastdom)
)? Decades of optimizing browser internals would surely account for not
thrashing the DOM, if it is updated optimally. So, is it required?
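
For reference, FastDom's core idea (transliterated to Rust here with made-up types; FastDom itself is JavaScript) is to queue measure and mutate jobs separately and flush all reads before all writes, so no write invalidates layout between two reads:

```rust
// FastDom-style read/write batching, sketched with made-up types.
// Reads (measures) all run before writes (mutates) in a flush,
// avoiding the read-write-read interleaving that forces reflows.

struct Scheduler {
    reads: Vec<Box<dyn FnOnce()>>,
    writes: Vec<Box<dyn FnOnce()>>,
}

impl Scheduler {
    fn new() -> Self {
        Scheduler { reads: Vec::new(), writes: Vec::new() }
    }
    fn measure(&mut self, f: impl FnOnce() + 'static) {
        self.reads.push(Box::new(f));
    }
    fn mutate(&mut self, f: impl FnOnce() + 'static) {
        self.writes.push(Box::new(f));
    }
    // In the browser this would run once per animation frame.
    fn flush(&mut self) {
        for f in self.reads.drain(..) { f(); }  // all layout reads first
        for f in self.writes.drain(..) { f(); } // then all layout writes
    }
}

fn main() {
    use std::cell::RefCell;
    use std::rc::Rc;
    let log = Rc::new(RefCell::new(Vec::new()));
    let mut sched = Scheduler::new();
    let l = log.clone();
    sched.mutate(move || l.borrow_mut().push("write"));
    let l = log.clone();
    sched.measure(move || l.borrow_mut().push("read"));
    sched.flush();
    assert_eq!(*log.borrow(), ["read", "write"]); // reads ran first
}
```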

~~~
acdha
It's never been required (and is usually substantially slower), but a virtual
DOM may be worth the overhead because it avoids the need to organize those
updates yourself. Most cases aren't performance sensitive to the point where
that's the deciding factor.

~~~
megaman821
It appears lit-html is using a method that keeps updates declarative.
[https://lit-html.polymer-project.org/](https://lit-html.polymer-project.org/)

------
Bahamut
The benchmark has some old versions of Angular (2) and the legacy AngularJS
(1.x) - how do the benchmarks look with a more recent version (v7)?

~~~
Caspy7
The post notes that they had issues with recent versions.

------
cmroanirgo
It seems memory fragmentation can occur, rather easily, if you hold onto a few
of them.

> _The disadvantage of bump allocation is that there is no general way to
> deallocate individual objects and reclaim their memory regions while other
> objects are still in use._

~~~
Rusky
The use case in the article does not hold onto a few of them, though; it holds
on to exactly two at any time.

------
dmitriid
Why virtual DOM is not a part of browser APIs is anyone’s guess at this point.

~~~
MrEldritch
It is! They call it the DOM, for short.

~~~
lootsauce
Uh no, actually a virtual DOM would be a great addition to the browser. What's
there now is an imperative API. The big win of a virtual DOM is that the
specification of the UI doubles as the specification of all possible updates
to the UI, applied efficiently. It would be nice for the browser to include an
API method that takes a VDOM data structure, diffs it, and applies the changes
in native land instead of implementing this in user land.

------
anonytrary
The benchmark leaves me wondering if this was worth it.

------
btown
Could this be used in something like Cloudflare Workers to enable server-side
rendering?

------
moron4hire
Bump allocation sounds a lot like heap allocation. Is a "bump" a small heap?

~~~
steveklabnik
Bump allocation is a strategy for building an allocator; where that
allocator's memory lives is irrelevant to the algorithm itself. Most of the
time that is the heap, but you could write an allocator that takes a bunch of
stack memory and hands it out too.
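
A minimal sketch of the strategy, using indices into a byte buffer instead of raw pointers and ignoring alignment (which a real allocator must handle):

```rust
use std::cell::Cell;

// Minimal bump allocator sketch: hand out regions of one buffer by
// advancing an offset. Individual frees are impossible; `reset`
// reclaims everything at once, which is what makes clearing a whole
// tree of short-lived objects so cheap.

struct Bump {
    buf: Vec<u8>,
    offset: Cell<usize>,
}

impl Bump {
    fn with_capacity(cap: usize) -> Self {
        Bump { buf: vec![0; cap], offset: Cell::new(0) }
    }
    // Returns the start of `size` fresh bytes, or None when full.
    fn alloc(&self, size: usize) -> Option<usize> {
        let start = self.offset.get();
        if start + size > self.buf.len() {
            return None;
        }
        self.offset.set(start + size); // "bump" the pointer
        Some(start)
    }
    // Deallocate everything in O(1) by rewinding the offset.
    fn reset(&mut self) {
        self.offset.set(0);
    }
}

fn main() {
    let mut bump = Bump::with_capacity(16);
    assert_eq!(bump.alloc(8), Some(0));
    assert_eq!(bump.alloc(8), Some(8));
    assert_eq!(bump.alloc(1), None); // arena exhausted
    bump.reset();
    assert_eq!(bump.alloc(4), Some(0)); // all memory reclaimed at once
}
```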

