
How I built a fast JavaScript framework - krapans
https://medium.com/@marcisbee/how-i-built-super-fast-js-framework-faster-than-react-ea99f0d03150
======
evmar
I am often frustrated in this ecosystem by how people build things without any
clear conceptual model, and then they document them by saying "it is so fast,
it just works like magic". It is impossible to compare two approaches because
there are no facts available to compare them on.

Like with this one: from a peek at the source code, it appears to parse
JavaScript at runtime using regexes ([https://github.com/radi-js/radi/blob/master/src/index.js#L4](https://github.com/radi-js/radi/blob/master/src/index.js#L4)),
which is great for demoware but not at all something I'd want to use in
production, and that is not mentioned anywhere on the site or in the blog post.

I don't mean to just dump on this project in particular, which could just be a
fun toy side project for the author, because this problem is endemic. (But
then why go to all the effort to make all the marketing materials for it
without actually writing any docs?)

~~~
Froyoh
Looks like the regex is only used to strip comments?

~~~
pygy_
[https://github.com/radi-js/radi/blob/eace84e9b74c944b233f631...](https://github.com/radi-js/radi/blob/eace84e9b74c944b233f631de314a6869ed21f0f/src/index.js#L585)

It parses the source code of the view function at run time, matching
parentheses, but ignoring whether those are present in strings or regexes.
Likewise, the comment stripper will turn

    
    
        "/*whoops*/" 
    

into the empty string...

In other words, the parser is buggy :-/
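A hypothetical sketch of such a naive comment stripper (illustrative, not Radi's exact code) shows the failure mode:

```javascript
// A regex comment stripper can't tell a real comment from
// comment-like text inside a string literal.
const stripComments = (src) => src.replace(/\/\*[\s\S]*?\*\//g, "");

// Real comment: stripped as intended.
console.log(stripComments("var a = 1; /* note */ var b = 2;"));
// -> 'var a = 1;  var b = 2;'

// String literal: the regex eats its contents too.
console.log(stripComments('"/*whoops*/"')); // -> '""'
```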

~~~
lhorie
Having taken a closer look, it looks like the problems go even deeper than
that: it rewrites `l()` calls in functions through a regex/replace on
`fn.toString()`. This is similar to the mechanism that broke shorthand
Angular.js DI in production. So code can break if it goes through some bundler
that renames the `l` variable, or if a minifier does it, etc.

I'd imagine closures also break.
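A rough illustration of why (an assumed shape, not Radi's actual internals): anything that regexes `fn.toString()` for a specific identifier stops matching once a minifier renames that identifier.

```javascript
// A framework that rewrites `l(...)` calls by scanning fn.toString()
// only "sees" calls spelled exactly `l(`:
function view(l) {
  return l("div", "hello");
}
console.log(/\bl\(/.test(view.toString())); // true before minification

// After minification the parameter may be renamed, and the scan misses it:
function minifiedView(a) {
  return a("div", "hello");
}
console.log(/\bl\(/.test(minifiedView.toString())); // false: the rewrite silently fails
```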

~~~
pygy_
Closures will break, yes. It looks like it is intended to react to mutations
to the properties of `this`...

I hadn't thought of minification; yes, it's pretty bad :-/...

It looks like the author is smart but has little JS experience (`var last =
temp.splice(-1)[0]` where `temp.pop()` would do tells me that this may be one
of his first JS projects).

------
Whitespace
They compared Radi to React Stack, not React Fiber. That's being sneaky.

If you run the React Fiber demo, you'll see it's just as fast:
[https://claudiopro.github.io/react-fiber-vs-stack-demo/fiber...](https://claudiopro.github.io/react-fiber-vs-stack-demo/fiber.html)

------
dgllghr
I'm not primarily a Javascript developer, but I seem to remember that
attaching event listeners to object fields and modifying the DOM in event
handlers is how Javascript development was done 5-10 years ago. Am I missing
something? Have we come full circle?

~~~
ng12
Yup. I find that most cases of "JavaScript fatigue" are really cases of "I
want tools custom tailored to my use case, but don't want to do the legwork to
figure out what they are".

The only thing wrong with jQuery is that it's perfectly fine until it isn't.
Conversely, if you'll never need React's power, you don't need its complexity
either.

~~~
murukesh_s
The complexity of React (or of other SPA frameworks like Vue) is much higher
than it should be, probably because of the tooling (Babel, Webpack, NPM,
etc.) and the additional build process required even for a simple hello-world
app. That, coupled with the rapid revision of the underlying ECMAScript
standard and the race by developers and framework authors to adopt it, causes
further chaos.

Another issue was the rapid deprecation of age-old jQuery plugins and the lack
of equivalent plugins in the React landscape, causing confusion about whether
you should build things yourself or not. I had hoped WebComponents would
somehow get standardized as the underlying architecture for SPAs, just so the
plugin/widget ecosystem wouldn't get fragmented further, but it looks like
that won't be happening anytime soon.

~~~
ng12
> The complexity of React (or of other SPA frameworks like Vue) is much higher
> than it should be, probably because of the tooling (Babel, Webpack, NPM,
> etc.) and the additional build process required even for a simple hello-world app.

I wouldn't frame it as excess complexity, I would frame it as power. My
problem is not writing simple hello world apps, it's writing and maintaining
very large production applications with 5 other developers. Two years ago I
took a week to really learn the tooling stack and it's paid dividends ever
since.

~~~
olavgg
We are 10 developers just using vanilla Javascript here. No mess, and a joy to
work with. We use good old design patterns like observer, proxy, composite and
so on. We also use inheritance and template literals a lot.
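A minimal sketch of the observer pattern in vanilla JS (illustrative only, not the actual code from this codebase):

```javascript
// A tiny observable value: subscribers are notified on every set().
class Observable {
  constructor(value) {
    this.value = value;
    this.listeners = [];
  }
  subscribe(fn) {
    this.listeners.push(fn);
  }
  set(value) {
    this.value = value;
    this.listeners.forEach(fn => fn(value));
  }
}

const count = new Observable(0);
let rendered = null;
count.subscribe(v => { rendered = `count: ${v}`; }); // e.g. update a DOM node here
count.set(42);
console.log(rendered); // "count: 42"
```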

The key is good documentation, having an onboarding plan for new developers,
keeping things stupid simple, and not chasing the bleeding edge. We want our
code base to just work, so we can focus more on new features and less on
maintenance.

I'm planning on writing a deeper blog post this year about what we feel is a
good, structured, simple way to do frontend development. I believe there are
several frustrated frontend developers out there who just want to make their
life easier, but currently all the arguments they find just favor 'React'....

Our whole Javascript codebase is 50k+ lines. But the clients only fetch what
is needed.

~~~
ng12
> joy to work with

Color me skeptical. React is quite literally the only framework I've ever
found to be enjoyable (and maintainable).

------
fwilliams
I'm not a frontend developer and don't really know JavaScript frameworks.
That said, it seems like something as well established as React would choose
to have a virtual DOM for a good reason.

Surely there must be some tradeoffs that the author makes for his design to
get better performance. Could somebody explain the cost/benefit of doing
things one way versus the other?

~~~
brlewis
Virtual DOM was invented not to make things faster than jQuery, etc., but to
keep a better, less error-prone style of web app development from being
intolerably slow.

I think if we could see a larger Radi example, say, the standard TODO app, the
costs of its approach would become obvious.

~~~
TomMarius
That's not true. Virtual DOM was created to avoid recreating huge DOM elements
each time a re-render is needed, which was the way apps used to be done in the
days of jQuery.

~~~
lhorie
Eh, not exactly right either. In the jQuery days, the problem was that you
needed different procedural code paths to go from one DOM state to another
depending on what action you took. For example, adding a row to a list would
be `$('.list').html(html)`, removing would be `$('.list :last-child').remove()`,
and updating one item might be `$('.list #' + id).text(value)`.

Templating engines such as Handlebars provide a declarative abstraction that
represents how the DOM is supposed to look given some rules, rather than
procedurally specifying how to get from A to B. This is similar to how you
write HTML declaratively as opposed to how you have to write canvas code
procedurally.
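The declarative idea can be sketched without any library: the view is just a function from state to markup, so every state transition goes through the same code path (a toy sketch, string output only):

```javascript
// One function describes the whole list; "add", "remove", and "update"
// are all just "call it again with the new state".
function renderList(items) {
  return "<ul>" + items.map(item => `<li>${item}</li>`).join("") + "</ul>";
}

console.log(renderList(["a", "b"]));      // <ul><li>a</li><li>b</li></ul>
console.log(renderList(["a", "b", "c"])); // adding a row: same code path
```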

Virtual DOM is just an implementation detail of some templating libraries that
have the same goal of providing that declarative abstraction. The problem was
that popular KVO systems (e.g. Knockout and Ember) were notoriously slow and
people were looking for alternative ways to speed up declarative rendering
libraries. The community has since learned a lot about performance and is
realizing that KVO systems _can_ be performant after all.

The main drawback of KVO systems is that their API bleeds into the data layer
because they require you as a developer to manually register data as
observable entities. In contrast, virtual DOM works with plain Javascript
objects which are much more familiar and easier to manipulate and interop
with. A third approach, called dirty checking and popularized by Angular,
also had the desirable property of working w/ plain js objects, but it was
also very slow. Some systems (most notably Vue 1) tried to bridge the gap
between KVO and POJOs by providing a POJO-looking data layer with
monkeypatched array methods etc. that provide observability/reactivity under
the hood. Without JS proxies, though, this still left some weird edge cases,
such as the ones handled by `Vue.set`/`Vue.delete`. Vue eventually moved to
vdom because the way it mounted subcomponents didn't scale for recursive
mounts (think comment trees such as the ones on HN or reddit). This is
because it had to mount each level to find out what subcomponents needed to
be mounted, and as we all know, multiple repaints are a big no-no for perf.

With that said, virtual DOM is not perfect either. It's inherently memory
intensive and can block the UI in cases with large DOMs and few changes.
Various frameworks have various techniques to cope (such as
shouldComponentUpdate and friends) but they typically require developer
intervention, whereas a KVO system would not suffer from this class of
problems to begin with. Another problematic aspect of virtual DOM systems is
that they are "naked" js and thus, difficult to optimize as libraries from an
algorithmic standpoint. This is why Svelte does better in benchmarks than vdom
systems.

TL;DR: if you want to evaluate the performance characteristics of a KVO system
such as Radi or Surplus, the main things you should ask about are API lock-in
in the data layer and performance of recursive mounts. If you want to evaluate
perf of virtual DOM systems, you should ask about diff latency for needle-in-
a-haystack scenarios and ease of use for hooks to selectively diff a tree.

~~~
localvoid
> It's inherently memory intensive

Even if we pretend that other architectures don't have any memory overhead,
vdom memory overhead in a large UI app will be less than a single small
decoded image. But in reality, KVO can be way more memory intensive,
especially when we consider cases that involve filtering items in large
collections.

------
naasking
Surplus [1] is the fastest framework in the big JS benchmark challenge, and it
too uses the native DOM directly rather than a virtual DOM. Looking forward to
seeing Radi.js added to that benchmark suite to really put it to the test.

[1]
[https://github.com/adamhaile/surplus](https://github.com/adamhaile/surplus)

~~~
amelius
The problem with not using a virtual DOM is that you can't merge in changes to
stateful DOM elements. For example, an INPUT field has state (its value, plus
its scroll position). If you regenerate the INPUT field from scratch, you risk
losing the state. In contrast, if you use a virtual DOM, the vdom mechanism
can merge in any changes for you (the INPUT field will stay the same object!)

Of course, you can recreate DOM elements with their complete state, but if
you'd try to do that with e.g. a VIDEO element, then you risk flicker and
reloading of the actual video.

~~~
lhorie
This isn't quite correct. Modern rendering libraries aren't going to recreate
an input from scratch unless they absolutely have to, vdom or not.

What's tricky about reconciling app state changes and dom state is dealing
with things like cursor position if e.g. the input value change comes from a
socket and the user had the input focused and the cursor in the middle of the
text. But this has nothing to do with vdom.

~~~
amelius
To prove you wrong, consider how DOM elements are created (remember, in
Surplus there are no virtual elements). Ignoring all dependency tracking, the
code to show some text _has_ to (eventually) be of the form

    
    
        x = document.createTextNode(v);
    

Now, consider what happens when the dependency `v` changes. The only option is
to run the code above again, in its entirety. And thus, the DOM node
corresponding to the text is _new_.

Of course, a smart library will ensure that the number of nodes that have to
be visited is kept to a minimum. But that doesn't mean DOM nodes don't get
rebuilt from scratch.

~~~
lhorie
> The only option is to run the code above again, in its entirety

Not at all. The reactive pattern typically boils down to something like this:

    
    
        el = document.createTextNode(initialValue);
        parent.appendChild(el);
        reactiveValue.onchange(v => {
          el.nodeValue = v;
        })
    

Ironically, in this specific example, vdom libs typically do this when
possible:

    
    
        parent.textContent = v
    

which does in fact replace the underlying text node, but they do so for the
sake of performance

~~~
amelius
Yes, you can code like that, but ultimately that block will be contained
inside a closure that gets triggered by some change, unless you are very
careful.

~~~
lhorie
The element creation part will be triggered if a creation is required. It
won't be triggered by a prop update, for instance.

In the following example, I change the button id on click. It's still the same
button element after the change:

[https://jsfiddle.net/dwn36aw4/8/](https://jsfiddle.net/dwn36aw4/8/)

~~~
amelius
But it's very difficult to prevent unnecessary dependencies from triggering in
all cases. Consider a map (dict) with some values. Some outer code depends on
the map. This code builds some DOM elements and sets values based on the
values in the map. Now, if the map changes, all the elements will be
recreated, even if only some irrelevant part of the map changed. Of course
you can prevent this from happening by doing smart diffing on the map, but
then you've effectively moved the problem from dom-diffing to map-diffing.

~~~
lhorie
I wrote about that (and the drawbacks and alternative takes) here:
[https://news.ycombinator.com/item?id=16540223](https://news.ycombinator.com/item?id=16540223)
but yes, in a nutshell, "move the problem from dom-diffing to map-diffing" is
basically what a reactive system is supposed to be taking care of.

You're absolutely right that reactive systems are not trivial to implement
(let alone implement well), but inefficient DOM recreation under these systems
is a symptom of poor granularity in the reactive layer (which may be a problem
with the reactivity library just as well as it could be poor usage of its
API), rather than a limitation of the rendering library itself.
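One way to picture that granularity (a hypothetical sketch, not any particular library's API): subscriptions registered per key mean an irrelevant change never reaches the DOM code that depends on other keys.

```javascript
// A fine-grained store: subscribe per key, so changing one key
// doesn't re-run work that depends on other keys.
function createStore(initial) {
  const data = { ...initial };
  const subs = {}; // key -> array of callbacks
  return {
    get: (k) => data[k],
    subscribe(k, fn) {
      (subs[k] = subs[k] || []).push(fn);
    },
    set(k, v) {
      data[k] = v;
      (subs[k] || []).forEach(fn => fn(v)); // notify only this key's subscribers
    },
  };
}

const store = createStore({ a: 1, b: 2 });
let aRuns = 0, bRuns = 0;
store.subscribe("a", () => aRuns++); // e.g. update the DOM node for "a"
store.subscribe("b", () => bRuns++);
store.set("b", 3); // only the "b" subscriber runs
console.log(aRuns, bRuns); // 0 1
```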

~~~
amelius
Ok, that makes sense.

I'm also worried about debugging accidental dependency triggers. Imagine that
suddenly the scroll-state of some div is reset; how do you determine which
dependency caused the DOM element to be recreated? With a vdom you avoid the
whole problem.

~~~
lhorie
> how do you determine which dependency caused the DOM element to be recreated

Normally, I would toss in a throw and look at the stack trace. But yes, the
stack trace gets noisy as hell.

------
joebo
This looks similar in many ways to Mithril[1] -- somewhat similar API: mount,
view, r("input") vs m("input"). The counter example[2] looks similar[3].
I'd be interested in seeing comparisons between the two.

[1] [https://mithril.js.org/](https://mithril.js.org/)

[2]
[https://github.com/MithrilJS/mithril.js/blob/5956314e3655a3c...](https://github.com/MithrilJS/mithril.js/blob/5956314e3655a3c85f8425bffc2bd76c1c541a3b/docs/mount.md)

[3] [https://github.com/radi-js/radi/blob/master/examples/counter...](https://github.com/radi-js/radi/blob/master/examples/counter.html)

~~~
lhorie
Hi, Mithril author here.

In terms of API, frankly all the frameworks are more or less similar. Mithril
has been around for a while, so obviously all the common observations apply
(i.e. it's been more battle tested, it has a more mature community, more
libraries, etc)

Taking a quick glance at the Radi.js code, I noticed a few things that could
still use some improvements (e.g. the comment regexp someone else had pointed
out, the handling of attributes not accounting for things like SVG, etc)

I didn't see support for lifecycle methods in the Radi.js code, and it's not
clear to me from the docs whether keyed lists and fragments are supported.
Keyed lists are extremely important in cases like `someArray.unshift()` and
any list containing stateful dom (e.g. inputs or link tabindex). Lack of
fragments would not be a deal breaker, but IME they're super nice for building
lightweight abstractions, especially in Mithril's case, where they can have
lifecycle hooks.

------
iamleppert
I recently did a project with morphdom ([https://github.com/patrick-steele-idem/morphdom](https://github.com/patrick-steele-idem/morphdom))
using only ES6 template strings and a render loop that checked for changes to
a global state object and patched a re-computed DOM string using morphdom.
Couldn't have been an easier or more pleasurable experience.

------
jbob2000
The reason diffing and virtual DOMs are used is to prevent unnecessarily
repainting the browser window 1000 times a second. If I have 5 updates to
make, I want to do them in one paint, instead of 5. So the process is change
-> change -> change -> paint.

With this framework's model, the process would be change -> paint -> change ->
paint, and so on. Sure, you get nice FPS (because it's painting a LOT), but
that will destroy battery life on mobile phones, and I don't need 60 FPS in
my shopping web app.
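The batching idea can be sketched in a few lines (a hypothetical shape; real frameworks typically schedule the flush with requestAnimationFrame rather than calling it by hand):

```javascript
// Queue up state changes and apply them all in a single "paint".
const pending = [];
let paintCount = 0;

function change(fn) {
  pending.push(fn); // record the change, don't paint yet
}

function paint() {
  pending.forEach(fn => fn()); // apply every queued change at once
  pending.length = 0;
  paintCount++;
}

let x = 0;
change(() => { x += 1; });
change(() => { x += 2; });
change(() => { x += 3; });
paint();
console.log(x, paintCount); // 6 1 -- three changes, one paint
```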

