
Asm-Dom – WebAssembly Virtual DOM - mbasso
https://github.com/mbasso/asm-dom
======
masklinn
I was going to say that the API is almost identical to Snabbdom's and ask
whether that was voluntary or an independent reinvention, but given how
similar the "inline example" is[0][1] (down to the comments) I assume either
snabbdom was used as a starting point or the inspiration is very much wilful?

[0] [https://github.com/mbasso/asm-dom#inline-example](https://github.com/mbasso/asm-dom#inline-example)

[1] [https://github.com/snabbdom/snabbdom#inline-example](https://github.com/snabbdom/snabbdom#inline-example)

~~~
teddyh
They both have the MIT license, which may be a so-called “permissive” licence,
but it _does_ require attribution. Did they base asm-dom on Snabbdom? That’s
perfectly fine. Did they remove the copyright notice? That’s illegal.

~~~
mbasso
Just committed a fix for that in license.md

~~~
snakeanus
Shouldn't the license be in every file?

~~~
masklinn
The only reason to put the license in every file is if the "project" is really
a loose collection of independent files, or if you stay up at night fearing
that a madman on the loose will randomly download and reuse single files from
your projects.

MIT specifically calls out that the notice must be included in copies of
"substantial portions" of the software which makes no sense if the entirety of
the notice is already part of every file.

> The above copyright notice and this permission notice shall be included in
> all copies or substantial portions of the Software.

------
whatnotests
I expect the performance here to be blazingly fast...is that an accurate
assumption?

~~~
cmonguys
"All interactions with the DOM are written in Javascript."

You can only be as fast as the slowest link.

Does it help for virtual dom reconciliation to be in WASM, as opposed to JS
that is optimized and JIT'ed by modern JS engines? I doubt it, as the use case
is very different from VR/AR/AI, where WASM would presumably have some
advantage.

Any experts to educate us?

~~~
gizmo
Web browsers are pretty fast already, so if you only do those DOM updates that
actually affect what you see on screen you can have your app update at 60
frames per second. That's where a virtual dom helps out.

Virtual dom systems effectively work in two stages:

1) figuring out how the DOM should be updated

2) applying these changes with as few (or cheapest) DOM operations as possible

I don't expect WebAssembly will do much for step (2), as the bottleneck there
is in repeatedly crossing the boundary between the javascript world and the
DOM, or in the DOM operations themselves.

Step (1) is essentially just crunching (tree based) data structures. Web
Assembly should be a lot faster here, because (depending on implementation)
you get denser data structures, better cache locality, no garbage collection,
and so on. You'll also get much more consistent performance.

For complex web applications that keep a lot of view state around (1) might be
the bottleneck, but in most real world applications I expect that (2) is the
bottleneck, because the browser has to do so much work in terms of layout, CSS
application, and so on every time the DOM gets updated.
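The two stages can be sketched in plain JavaScript (a toy diff for illustration, not asm-dom's or snabbdom's actual algorithm; the op names and path encoding are made up):

```javascript
// Stage 1 compares two vnode trees and emits a patch list without
// touching the DOM; stage 2 would then walk that list and apply each
// mutation with as few DOM calls as possible.

function h(tag, children = [], text = null) {
  return { tag, children, text };
}

// Stage 1: pure data crunching over the trees.
function diff(oldNode, newNode, path = '0', patches = []) {
  if (oldNode.tag !== newNode.tag) {
    patches.push({ op: 'replace', path });
    return patches;
  }
  if (oldNode.text !== newNode.text) {
    patches.push({ op: 'setText', path, text: newNode.text });
  }
  const n = Math.min(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < n; i++) {
    diff(oldNode.children[i], newNode.children[i], path + '.' + i, patches);
  }
  for (let i = n; i < newNode.children.length; i++) {
    patches.push({ op: 'append', path, index: i });
  }
  for (let i = n; i < oldNode.children.length; i++) {
    patches.push({ op: 'remove', path, index: i });
  }
  return patches;
}

const prev = h('div', [h('span', [], 'hello')]);
const next = h('div', [h('span', [], 'world'), h('b', [], '!')]);
diff(prev, next);
// → [{ op: 'setText', path: '0.0', text: 'world' },
//    { op: 'append', path: '0', index: 1 }]
```

Stage 1 is exactly the part that could move into WebAssembly; stage 2 still has to cross into the DOM regardless.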

If you really want to get a big performance improvement you have to do much
more in WebAssembly. Use only absolute positioning for DOM nodes and create
your own layout engine. Don't have any CSS except for the rules needed to
display what is currently visible. Do all typesetting in WebAssembly too. This
means recreating much of a web browser inside a web browser, and you can then
optimize the WebAssembly code for the specific needs of your web application
for big performance gains.

It may sound hugely impractical, but if WebAssembly is only 10% slower than C
it's viable. It means you won't have to wait for standards organizations to
give you new FlexBox options, or a way to dynamically adjust font size to fit
a fixed rectangle; you can just solve all these problems locally. If this
works it could lead to a complete "user space" renaissance.

~~~
oever
> Web Assembly should be a lot faster here, because (depending on
> implementation) you get denser data structures, better cache locality, no
> garbage collection, and so on.

You can get these advantages by using plain javascript with typed arrays.

[https://github.com/vandenoever/baredom](https://github.com/vandenoever/baredom)
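For illustration, here is a sketch of that idea (an assumed record layout, not baredom's actual format): vnodes live as fixed-size records in one typed array, so the "tree" is just integer indices — dense, cache friendly, and invisible to the garbage collector.

```javascript
// Each vnode is a fixed-size record of 32-bit integers in one flat
// heap; references between nodes are plain indices, not object pointers.

const REC = 4;                      // fields per vnode record
const TAG = 0, FIRST_CHILD = 1, NEXT_SIBLING = 2, TEXT = 3;
const heap = new Int32Array(1024 * REC);
let top = 0;

function alloc(tag) {
  const id = top++;
  heap[id * REC + TAG] = tag;
  heap[id * REC + FIRST_CHILD] = -1; // -1 means "no node"
  heap[id * REC + NEXT_SIBLING] = -1;
  return id;                         // a plain integer, not an object
}

function appendChild(parent, child) {
  // Walk the sibling chain to the last free slot, then link the child in.
  let slot = parent * REC + FIRST_CHILD;
  while (heap[slot] !== -1) slot = heap[slot] * REC + NEXT_SIBLING;
  heap[slot] = child;
}

const DIV = 1, SPAN = 2;             // made-up tag codes
const root = alloc(DIV);
appendChild(root, alloc(SPAN));
appendChild(root, alloc(SPAN));
```

Diffing two such trees is then integer arithmetic over two flat arrays, which is where most of the claimed WASM advantage would come from anyway.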

~~~
DonHopkins
I would guess a lot of the overhead would be thunking back and forth between
webasm and js functions, so if you implemented the update in pure js where it
can call directly into the DOM apis, and also read pointers, numbers and
strings directly out of the webasm data array, then you could avoid any
thunking between the two different worlds in the inner loop.

By thunking I mean doing this from C++ code:

        EM_ASM_({
            window['asmDomHelpers']['domApi']['appendChild']($0, $1);
        }, vnode->elm, createElm(vnode->children[i]));

~~~
oever
So you propose to interface between C++ and JavaScript via a shared buffer and
to eschew any explicit FFI calls.

If the data structure is lockless, that is doable.
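A sketch of what the JS side of that could look like (the opcode encoding and `domApi` names here are hypothetical, not asm-dom's): the C++ side writes opcodes and integer arguments into linear memory, and a single JS loop drains the queue, so the inner loop never crosses the JS/wasm boundary per DOM operation.

```javascript
// Hypothetical op encoding shared between the C++ and JS sides.
const OP_END = 0, OP_APPEND = 1, OP_SET_TEXT = 2;

// In a real setup this view would wrap the wasm module's memory,
// e.g. new Int32Array(instance.exports.memory.buffer).
function drain(queue, domApi) {
  let i = 0;
  for (;;) {
    const op = queue[i++];
    if (op === OP_END) break;
    if (op === OP_APPEND) domApi.appendChild(queue[i++], queue[i++]);
    else if (op === OP_SET_TEXT) domApi.setTextContent(queue[i++], queue[i++]);
  }
}

// Fake domApi that records the calls which would hit the real DOM.
const calls = [];
drain(Int32Array.from([OP_APPEND, 10, 11, OP_SET_TEXT, 11, 99, OP_END]), {
  appendChild: (parent, child) => calls.push(['appendChild', parent, child]),
  setTextContent: (node, text) => calls.push(['setTextContent', node, text]),
});
```

One FFI-free drain per frame replaces one EM_ASM_ thunk per DOM mutation; strings would need an agreed-upon encoding (e.g. pointer + length into the same buffer).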

------
DonHopkins
Very cool, thanks for publishing this! I'm learning some interesting things by
reading the code.

I'm curious about the three different coding styles used to define functions
in domApi.js, if they have different behaviors, are meant to imply different
meanings, or if they are just different styles for the same thing:

[https://github.com/mbasso/asm-dom/blob/master/src/js/domApi.js#L54](https://github.com/mbasso/asm-dom/blob/master/src/js/domApi.js#L54)

      'setAttribute'(nodePtr, attr, value) {

      'parentNode': nodePtr => (

      'setTextContent': (nodePtr, text) => {
Why the single quotes around function names? Why are some normal functions and
others fat arrow functions? Why omit the parens around the param of a one
parameter fat arrow function -- does that define a get accessor?

~~~
crooked-v
> Why the single quotes around function names?

Syntactically the same as not having them, aside from the fiddly details of
string vs property name access to an object.

      thisObject = {
        usualPropertyName: value1,
        'property name with spaces': value2
      }

      thisObject.usualPropertyName
      thisObject['usualPropertyName'] // this works too
      thisObject['property name with spaces'] // you can't use the dot access style with this property name

> Why are some normal functions and others fat arrow functions?

Fat arrow functions bind "this" to whatever the surrounding context is, and if
the body is a single expression, they implicitly return its value.

That is to say, this:

       whatever => value

is equivalent to this:

       whatever => (
         value
       )

which is equivalent to this:

      whatever => {
        return value
      }

> Why omit the parens around the param of a one parameter fat arrow function
> -- does that define a get accessor?

If there's only a single argument, you don't need the parentheses. This is
meant mostly for the convenience of things like this:

      someArray.map(item => item.propertyName)

~~~
masklinn
> Fat arrow functions bind "this" to whatever the surrounding context is

They don't actually bind `this` at all: there is a slot on the function object
([[ThisMode]] in the spec) which describes how to resolve `this` references in
the function's body. For arrow functions the mode is "lexical", and they
simply grab the `this` reference from their lexical environment[0] as if it
were any other non-local variable.

In fact, lexical resolution is the default, so what happens during [[Call]][1]
is that arrow functions simply skip all the hard work in
OrdinaryCallBindThis[2] by early-returning, while the other two modes (strict
and global) have to resolve their thisValue and then bind it in their local
environment.

[0] [https://www.ecma-international.org/ecma-262/6.0/#sec-lexical-environments](https://www.ecma-international.org/ecma-262/6.0/#sec-lexical-environments)

[1] [https://www.ecma-international.org/ecma-262/6.0/#sec-ecmascript-function-objects-call-thisargument-argumentslist](https://www.ecma-international.org/ecma-262/6.0/#sec-ecmascript-function-objects-call-thisargument-argumentslist)

[2] [https://www.ecma-international.org/ecma-262/6.0/#sec-ordinarycallbindthis](https://www.ecma-international.org/ecma-262/6.0/#sec-ordinarycallbindthis)

------
Chyzwar
I do not get why they use Babel when both asm.js and WASM are only supported
in first-class browsers.

Without a native interface to the DOM this project will be limited, but it is
interesting anyway.

~~~
untog
Babel does not require ASM or WASM. It is (usually) simply an ES6 to ES5
transpiler.

~~~
Chyzwar
My point is that you do not need Babel if you plan to target modern browsers
anyway.

~~~
Touche
Many ES6 features are slower than their ES5 counterparts currently. let, const
are slower than var, for example.

~~~
tomatsu
> _let, const are slower than var, for example._

They aren't slower anymore in V8. Most ES6 features are now about as fast as
their ES5 counterparts.

------
amelius
> As we said before the h returns a memory address. This means that this
> memory have to be deleted manually, as we have to do in C++ for example. By
> default asm-dom automatically delete the old vnode from memory when patch is
> called.

So may there be structural sharing across vdom trees? Or within a tree?

~~~
mbasso
Yes! h always returns a memory address to a VNode. In C++ this address is
reinterpreted as a VNode*, which contains other VNode* as children. If you
decide to manually manage the memory, you can implement some interesting
mechanisms to diff vnodes and create vdom trees. To do this, I have to develop
some APIs that allow you, for example, to replace a child with another, and so
on. This is certainly on the roadmap!

------
amelius
I don't like the way they write styles:

        style: 'font-weight: normal; font-style: italic'

instead of

        style: { fontWeight: "normal", fontStyle: "italic" }

I mean, it requires stringification and string-concatenation overhead.
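To make the overhead concrete, here is a hypothetical helper showing the work the caller has to do by hand to produce the string form on every render:

```javascript
// Converts a camelCase style object into the inline-style string that
// a string-based API expects: one regex pass plus concatenation per
// property, on every render that rebuilds the vnode.
function styleToString(style) {
  return Object.entries(style)
    .map(([k, v]) => k.replace(/[A-Z]/g, c => '-' + c.toLowerCase()) + ': ' + v)
    .join('; ');
}

styleToString({ fontWeight: 'normal', fontStyle: 'italic' });
// → 'font-weight: normal; font-style: italic'
```

An object-style API could instead diff the two objects key by key and call `element.style.setProperty` only for the keys that changed.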

~~~
rictic
If you're doing inline styles, the former is how the style is expressed to the
style engine isn't it? i.e.

<div style="font-weight: normal; font-style: italic">

~~~
ojr
in react, styles can be just another variable in the component model/props

<div style={{ fontWeight: 'normal', fontStyle: 'italic' }}/>

------
fiatjaf
Shouldn't it be called "wasm-dom" instead?

------
ilaksh
Eventually more parts of the DOM etc. will be directly manipulated from WASM
somehow.. right?

~~~
mbasso
Direct DOM interactions are a future WASM feature; I'll update asm-dom to pure
WASM without JS when they're supported!

------
billconan
possible to use c++ to develop the frontend?

any example?

------
Ahmed90
any benchmarks ?

