
Virtual DOM is pure overhead (2018) - nailer
https://svelte.dev/blog/virtual-dom-is-pure-overhead
======
oraphalous
I think this article - and many of the comments in this thread - are
forgetting the context of how DOM manipulation was typically done when the
virtual DOM approach was introduced.

Here's the gist of how folks would often update an element. You'd subscribe to
events on the root element of your component. And if your component was of any
complexity at all, the first thing you'd probably do is ask jQuery to go find
any child elements that needed updating - inspecting the DOM in various ways
to determine the component's current state.

If your component needed to affect components higher up, or siblings of the
current instance, then your application was often doing a search of the DOM to
find the nodes. And yes, if you architect things well you could avoid a lot of
this - but let's face it, front-end developers weren't typically renowned for
their application architecture skills.

In short - the DOM was often used to store state. And this just isn't a very
efficient approach.
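For readers who missed that era, here's a minimal sketch of the contrast (the element is a stub object standing in for a real jQuery-wrapped node; all names are illustrative):

```javascript
// The commenter's point, in miniature. `fakeCheckbox` is a stub object
// standing in for a real DOM node (a jQuery lookup in the old style).
const fakeCheckbox = { checked: true, className: "item selected" };

// jQuery-era style: the component's state lives only in the DOM,
// so every decision starts by inspecting nodes.
function isSelectedViaDom(el) {
  return el.className.split(" ").includes("selected");
}

// Model-first style: state lives in a plain object, and the DOM is
// just a projection of it.
const model = { selected: true };
function isSelectedViaModel(m) {
  return m.selected;
}

console.log(isSelectedViaDom(fakeCheckbox)); // true
console.log(isSelectedViaModel(model));      // true
```

The second style answers questions about state without ever touching (or re-parsing) the DOM, which is the efficiency gap being described.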

This is what I understood the claim that VDOMs are faster than the real DOM
meant - and the article is pretty much eliding this detail.

As far as I'm aware, React and its VDOM approach deserve the credit for
changing the culture of how we thought about state management on the frontend.
That newer frameworks have been able to build upon this core insight - in ways
that are even more efficient than the VDOM approach - is great, but they
should pay homage to the original insight and change in perspective that React
made possible.

I feel this article and many of the comments here so far fail to do that -
and worse, seem to be trying to present React's claim that the VDOM is faster
than the DOM as some kind of toddler mistake.

~~~
jasonkester
_the DOM was often used to store state._

Every once in a while I'm reminded that I'm mostly disconnected from the way
"most" people build things. Thanks for this insight. It finally explains why I
hear people talking down about "jQuery developers", if that was something that
people actually did.

But wow. I've been building JavaScript-heavy web stuff since the mid-'90s and
it had never occurred to me to do that. You had your object model, and each
thing had a reference back to its DOM node and some methods to update itself
if necessary. All jQuery did was make it less typing to initially grab the DOM
node (or create it), and give you some shorthand for setting classes on them.
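A rough sketch of the pattern described here, with a stub object in place of a real DOM node (the class and field names are invented for illustration):

```javascript
// Sketch of "each thing has a reference back to its DOM node". The `el`
// field is a stub; in a browser it would be document.createElement("li").
class TodoItem {
  constructor(label) {
    this.label = label;
    this.done = false;
    this.el = { textContent: "", className: "" }; // stub DOM node
    this.render();
  }
  toggle() {
    this.done = !this.done;
    this.render(); // the object updates its own node; no DOM queries
  }
  render() {
    this.el.textContent = this.label;
    this.el.className = this.done ? "done" : "";
  }
}

const item = new TodoItem("Buy milk");
item.toggle();
console.log(item.el.className); // "done"
```

State flows one way, from the object to its node, so nothing ever has to be read back out of the DOM.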

It also explains why people liked React, which has always seemed completely
overcomplicated to me, but which probably simplified things a lot if you
didn't ever have a proper place to keep your data model.

I can't imagine I was the only one who had things figured out back then,
though. The idea you're talking about sounds pretty terrible.

~~~
sam0x17
My god, it finally all makes sense!

And this is why I've been developing all my modern web applications as
essentially an S3 bucket of flat HTML with vanilla javascript and jquery
sprinkled in sitting behind cloudfront, connected to a fast API built of cloud
functions / lambdas written in crystal/rust/etc. I use a custom routing system
(I have S3 set up to respond with a 200 at the index in the event of a 404, so
I have unlimited control over pathing from within my js logic) and I never let
node touch anything at all. And I'm super happy about it. Never has it been
easier to get things done. I don't have to fight with any system because there
is no system to get in my way.

This gives me:

1. 2-4 second deploys

2. full control over the assets pipeline (I call html-minifier, etc., manually
in my build script)

3. literally serverless -- S3 or Lambda would have to go down for there to be
an issue (ignoring the db)

4. caching at the edge for everything, because of CloudFront

5. zero headaches, because I don't have to do battle with node or React or
anyone's stupid packages

6. (surprisingly) compatibility with Googlebot! It turns out that Googlebot
will index JS-created content if it is immediate (loaded from a JS file that
is generated by a lambda and included in the document head tag as an external
script, for example)

7. full control over routing, so I don't have to follow some opinionated
system's rules and can actually implement the requirements the project manager
asks me to implement without making technical excuses.
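The routing trick above (S3 answering unknown paths with the index page and a 200) can be sketched roughly like this - the route table and handler names are hypothetical, not the actual project code:

```javascript
// Hypothetical client-side router for the S3 trick described above: S3
// serves index.html with a 200 for unknown paths, so this JS decides
// what to render from the path alone. Routes and handlers are invented.
const routes = {
  "/": () => "home",
  "/about": () => "about",
};

function route(pathname) {
  // In a browser this would be called with window.location.pathname.
  const handler = routes[pathname] || (() => "not-found");
  return handler();
}

console.log(route("/about"));   // "about"
console.log(route("/missing")); // "not-found"
```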

This does not give me:

1. A magical database that has perfect automatic horizontal scaling. Right
now there is no magic bullet for that. Some come close but eschew the
transactional part of ACID, making themselves basically useless for many
applications.

And the parent post exactly matches my usage of jQuery :D

~~~
avip
Pricing aside (it's almost unreasonably expensive if your app requires
frequent db writes), Firestore is indeed "A magical database that has perfect
automatic horizontal scaling". But as you have your happy setup on AWS it
probably makes little sense to switch.

~~~
sam0x17
Yeah, there are a few in that category - also Google Cloud Spanner and the
stuff by Citus Data. All of them work, but are prohibitively expensive to get
started with. I've harangued them a number of times about how people aren't
going to want to use something they can't scale up from $1/month to
$10,000/month without migrating any data (that's the whole point of an
auto-scaling horizontal service, imo), but so far no changes from them. Why
design potentially infinite auto-scaling, and then lock it up behind a
$90/month minimum fee? They could have been making money all along on those
$20/month or $40/month or $5/monthers, who vastly outnumber those who _need_
autoscaling but want the peace of mind that auto-scaling provides.

~~~
chillacy
Hmm I wonder if spanner has more minimum hardware costs or something. Like if
they have to provision you at least one standalone atomic clock to get
started.

~~~
sam0x17
But like that's sort of my point. They could have data default to going into a
tiny VPS slice that would be free-tier territory, and then automatically move
it to whatever infrastructure spanner requires when the time comes. That could
all be seamless. Why keep the seams in?

------
Lowkeyloki
First, I think anyone using React solely because of the virtual DOM
implementation is largely missing the point. IMHO, the real win of React is
the functional and composable way components can be designed and implemented.

Second, no disrespect to Svelte, but I think there's a huge trade-off between
the React approach and the Svelte approach that developers should be aware of.
React is a pretty unopinionated library, all things considered. The only
compilation step necessary is JSX to Javascript. JSX maps pretty directly to
React's API. This means compilation is pretty simple - so much so that you
could easily do it by hand if you really wanted to. Svelte, on the other
hand, is pretty compilation-heavy. There's a lot of what I'd consider to be
non-trivial transformation going on between the code you pass to the Svelte
compiler and what comes out of it and runs in the browser. Personally, I'm
less comfortable with that compared to React's runtime library approach. But
if you are comfortable with that trade-off, that's perfectly fine. It is worth
being aware of it, though.
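To illustrate the "compile JSX by hand" point: JSX desugars to plain function calls, so a stub that mimics the shape of React.createElement shows the transform without pulling in React itself:

```javascript
// JSX like <h1 className="big">Hello</h1> compiles to a plain function
// call, which is why hand-compiling it is feasible. This stub mimics the
// shape of React.createElement without pulling in React itself.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// <h1 className="big">Hello</h1>  becomes:
const vnode = createElement("h1", { className: "big" }, "Hello");
console.log(vnode.type);     // "h1"
console.log(vnode.children); // ["Hello"]
```

That near 1-to-1 mapping is what makes the JSX step so much shallower than Svelte's whole-component transformation.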

~~~
jordache
>functional and composable way components can be designed and implemented.

Ughh.. that's the point of all modern FE frameworks...

You are putting that description on a pedestal as if that is a unique property
of React.

~~~
somerando7
Currently trying to find a new framework to do a front-end with, because the
company I'm currently interning at doesn't allow React :^)

Looking at angular code, it's pretty ugly. What would be the next best thing
to look at? Vue?

~~~
brobdingnagians
I love Vue.js. I've never really caught onto the JSX stuff. If you have ".Vue"
files then you get nice separation of the template html, methods, and the
scoped styling. The Javascript syntax is pretty straightforward, and the
templates just add nice directives like v-if, v-for, etc. I think it looks
pretty clean and is fairly easy for JS developers to pick up. Integration into
a project is pretty straightforward as well. We have a webpack installation
that pulls in the Vue files and bundles everything and it is quite clean.

~~~
JMTQp8lwXL
I've never understood how people view the separation of template, styles, and
business logic into separate files as simpler. Now, to work on a single
component, I need to open three files in my editor, instead of one.

~~~
brylie
The "separate files" argument is a red herring. It is really about separate
"mindsets" or "modes of thinking".

In effect, JavaScript logic tends to be procedural/imperative, while templates
allow declarative semantics, and styles are nearly a 2.5D constraint language.
"Separation of concerns" here means only having to think in a particular mode,
rather than blending all of those modes of thought into a single eyespan.

Notably, Vue allows for single-file components, while preserving the familiar
and intentionally designed separation of declarative (HTML), imperative
(JavaScript), and aesthetic (CSS) code.

~~~
JMTQp8lwXL
I don't see how separate files forces you to think differently. It might aid
in that effort, but it likely doesn't force it.

------
Lx1oG-AWb6h_ZG0
Dan Abramov has a great thread about this here:
[https://mobile.twitter.com/dan_abramov/status/11209717954258...](https://mobile.twitter.com/dan_abramov/status/1120971795425832961).
In particular, I find this argument really persuasive:

> Time slicing keeps React responsive while it runs your code. Your code isn’t
> just DOM updates or “diffing”. It’s any JS logic you do in your components!
> Sometimes you gotta calculate things. No framework can magically speed up
> arbitrary code.

In my experience, as your app grows, the amount of time you spend on dom
reconciliation becomes negligible compared to your own business logic. In this
case, having a framework like React (especially with concurrent mode) will
really help improve perceived user experience over a naive compiled
implementation.

~~~
tylerhou
> In my experience, as your app grows, the amount of time you spend on dom
> reconciliation becomes negligible compared to your own business logic. In
> this case, having a framework like React (especially with concurrent mode)
> will really help improve perceived user experience over a naive compiled
> implementation.

In my experience, the exact opposite occurs. If there is ever any heavy
computation I need to do, I usually try to spawn a web worker or offload it to
the server. In contrast, as your app tree grows reconciliation costs grow
(super?)linearly, and more importantly there is (currently) no way to offload
reconciliation.
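A minimal sketch of that split: keep the heavy logic in a pure function so it can be shipped to a Worker. The function here is a placeholder for real business logic, and the Worker wiring is shown only in a comment since it needs a browser:

```javascript
// Keep heavy logic in a pure function so it can move off the main thread.
// The computation is a stand-in for real business logic.
function heavyComputation(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum += i;
  return sum;
}

// In a browser you would run it in a Worker, roughly:
//   const w = new Worker("worker.js");
//   w.postMessage(1e9);
//   w.onmessage = (e) => render(e.data);
// Here we call it directly so the sketch runs anywhere.
console.log(heavyComputation(10)); // 45
```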

~~~
dmix
Same. The only time I've run into performance issues with Vue - after
building many very complex, deeply nested components before this - was one
component that ground to a halt on re-rendering, because I was simply
rendering too many elements into the DOM, with all their watchers filling up
memory.

After hours of combing through frames of the memory profiler and seeing only
highly concurrent framework calls the only solution was to paginate the
particular content. 99% of the users never had this issue but it was 1-2
customers who had thousands of components to render instead of the usual
hundreds.

I'm really curious now if Svelte would have helped with that because it was a
huge dev timesink and one where I was never satisfied with the solution. As it
really should be able to render that amount of data. It obviously wasn't a
problem in the jQuery/Rails version I was replacing and improving upon
(although page load times were higher).

The new React concurrency model wouldn't have helped from what I've read. I
just needed something lighter weight from the rendering model itself. Vue 3.0
is apparently going to come with plenty of performance improvements so I'm
looking forward to that as well.

~~~
tylerhou
> After hours of combing through frames of the memory profiler and seeing only
> highly concurrent framework calls the only solution was to paginate the
> particular content. 99% of the users never had this issue but it was 1-2
> customers who had thousands of components to render instead of the usual
> hundreds.

Did you try something like [https://github.com/Akryum/vue-virtual-
scroller](https://github.com/Akryum/vue-virtual-scroller)? The trick is if you
know the height/width of the elements, you can only render the elements
directly in the viewport (+ some padding) and replace the missing elements
with fixed-size blank divs, whose width and height you can find with some
math. That way, you don't have to rely on the browser to layout your elements,
nor do you have to reconcile hidden elements. (Essentially, element occlusion
culling for the virtual DOM.)

Looks like vue-virtual-scroller only works with fixed-height elements (because
n * m is easier to compute than n_1 + ... + n_m), but as long as you don't
rely on the browser for layout the same trick works with preknown variable
element sizes.
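The fixed-height windowing math described here can be sketched as follows (parameter names and numbers are illustrative):

```javascript
// Given a scroll offset, compute which rows to render and the heights of
// the blank spacer divs above and below the visible window.
function windowRange(scrollTop, viewportH, rowH, total, pad = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowH) - pad);
  const last = Math.min(total, Math.ceil((scrollTop + viewportH) / rowH) + pad);
  return {
    first,                               // render rows [first, last)
    last,
    topSpacer: first * rowH,             // blank div above the window
    bottomSpacer: (total - last) * rowH, // blank div below the window
  };
}

// 10,000 rows of 50px in a 600px viewport, scrolled to 1000px:
const r = windowRange(1000, 600, 50, 10000);
console.log(r.first, r.last); // 18 34
```

Only 16 rows (plus padding) ever exist in the DOM, regardless of how many thousands are in the list.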

~~~
dmix
That wouldn't have worked for the problem unfortunately. It was a pretty
compact UI with a lot going on so scrolls wouldn't have masked enough
components. Thanks for the link though.

------
spankalee
This is absolutely true.

Virtual DOM diffs do a huge amount of unneeded work because in the vast
majority of cases a renderer does not need to morph between two arbitrary DOM
trees, it needs to update a DOM tree according to a predefined structure, and
the developer has already described this structure in their template code!

A large portion of JSX expressions are static, and renderers should never
waste the time to diff them. The dynamic portions are clearly denoted by
expression delimiters, and any change detection should be limited to those
dynamic locations.

This realization is one of the reasons for the design of lit-html. lit-html
has an almost 1-to-1 correspondence with JSX, but by utilizing the
static/dynamic split it doesn't have to do VDOM diffs. You still have UI =
f(data), UI as value, and the full power of JavaScript, but no diff overhead
and standard syntax that clearly separates static and dynamic parts.

The syntax is very close:

JSX:

    
    
       render(props) {
         return <>
           <h1>Hello {props.name}</h1>
           {props.items
              ? <ol>{props.items.map((item) => 
                  <li>{item.label}</li>)}
                </ol>
              : <p>No Items</p>}
         </>;
       }
    

lit-html:

    
    
       render(props) {
         return html`
           <h1>Hello ${props.name}</h1>
           ${props.items
              ? html`<ol>${props.items.map((item) => 
                  html`<li>${item.label}</li>`)}
                </ol>`
              : html`<p>No Items</p>`}
         `;
       }
    

I really think the future is not VDOM, but more efficient systems, and
hopefully new proposals like Template Instantiation can advance and let the
browser handle most of the DOM updates natively.

edit: closed JSX fragment as pointed out

~~~
ec109685
One thing nice about React is that it can take care of quoting for you,
depending on the method call the JSX template translates into (attribute,
value, element name). String templates don't have that nice property.

~~~
spankalee
Can you explain that more?

I'm not sure what you're saying is true with lit-html. You don't have to quote
attributes with single expressions.

    
    
        html`<div class=${myClass}></div>`
    

is perfectly fine.

~~~
ec109685
Ah, thanks for pointing that out.

------
vfc1
Angular also works in a somewhat similar way: there is likewise no virtual DOM.

Instead, the modern compiler is used at build time to generate what looks like
a change detection function and a DOM update function per component.

These functions will detect changes and update the DOM in an optimal way
without any DOM diffing.

However, because Javascript objects by default are mutable, after each browser
event Angular in its default change detection mode has to check all the
template expressions in all the components for changes, because the browser
event might have potentially triggered changes in any part of the component
tree.

If we want to introduce some restrictions and make the data immutable, then we
can check only the components that received new data by using OnPush change
detection, and even bypass whole branches of the component tree.
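The default dirty-checking idea can be sketched generically - this is not Angular's actual implementation, just the principle: after each event, compare every watched expression's value to its last-seen value and update only on change:

```javascript
// Dirty checking in miniature: a watcher remembers the last value of an
// expression and fires an update callback only when it changes.
// (Illustrative only; not Angular's real internals.)
function makeWatcher(getValue, onChange) {
  let last;
  return function check(ctx) {
    const next = getValue(ctx);
    if (next !== last) {
      last = next;
      onChange(next); // run the DOM update for this binding
    }
  };
}

const updates = [];
const watchName = makeWatcher((ctx) => ctx.name, (v) => updates.push(v));

const ctx = { name: "Ada" };
watchName(ctx); // changed: update runs
watchName(ctx); // unchanged: skipped
ctx.name = "Grace";
watchName(ctx); // changed: update runs
console.log(updates); // ["Ada", "Grace"]
```

With mutable data every watcher must be re-checked after every event; the immutability restriction mentioned above is what lets whole subtrees be skipped on a single reference comparison.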

This is the current state of things; in the near future, Angular is having
its internals rebuilt in a project called Ivy.

One of the main goals of Icy is to implement a principle called component
locality.

Ivy aims at getting to a point where if we change only one component, we only
have to recompile that component and not the whole application.

I think the article puts the focus on the wrong thing. The current change
detection and DOM update mechanisms made available by modern frameworks
virtual DOM or not are more than fast enough for users to notice, including on
mobile and once the application is started.

What we need is ways to ship less code to the browser, because that extra
payload makes a huge difference in application startup time.

~~~
2T1Qka0rEiPr
Thanks for the write up - it's a very succinct explanation of how Angular
works in comparison.

I found the original article to be a really good read, and the Svelte approach
in general seems rather neat. I do however find that in this current front-end
framework sphere, there seems to be a huge amount of religiosity and one-
upping going on.

I hear routinely (on-line and off) developers vocalising some
anti-[jQuery,angular,etc.] mantra, which to be honest saddens me. Yes the
jQuery approach was flawed in so many ways _in comparison to the modern
frameworks_. Yes _Angular 1.x_ was flawed in many ways compared to what we
have on offer today. But those tools were still great _improvements_ on what
we had before (for anyone who knew the DOM-API standardisation nightmares pre-
jQuery, or state management / testability woes pre angular/react).

Svelte may take us down the next path, and if it allows us to produce better,
smaller, more testable code then it has my full backing. But I think as a
community we need to strive to be less polarising - from my perspective it's
likely to be mostly reductive, and lead to even more _JavaScript fatigue_.

~~~
ralphstodomingo
I would guess this is partly because the modern front-end framework
leaderboard is a zero-sum game: you can't sanely use two or more different
frameworks for most of your day-to-day work. Maybe you have one for
work and one for hobby development, but that's about it. I'd be torn to
remember quirks of both React and Vue, for example.

And thus you see it in discussions: people feel the need to pull one framework
down to put their preferred one on top. We know what happens to a library
without a critical mass of adopters: they lose contributors, which in turn
reduces the rate of growth, and in turn, the quality of the library over time.

Which is kinda sad. A lot of work goes into these frameworks that I really
respect. There's no logical rule that says new ideas completely supersede
their predecessors. I wonder what needs to be done to get us over that JS
fatigue.

------
QuadrupleA
So glad to see this article, I've long wondered how this "virtual DOM is
faster" myth got accepted as gospel when clearly it's pure overhead, compared
to a well written app that updates the DOM directly only when needed (which I
find is easy to accomplish in most apps).

Can't speak to the svelte approach due to inexperience with it, but good to
see this myth challenged - react.js is fine but I worry there's been a cargo
cult mentality around it, that it's The One True Modern Way To Do Web Apps,
when really it's a tradeoff that involves some extra layers and performance
baggage, and like any tool you need to weigh the pros and cons.

~~~
lacampbell
_compared to a well written app that updates the DOM directly only when needed
(which I find is easy to accomplish in most apps)_

Do you do full blown SPAs with this technique? I mean I'm sure it's possible,
but I wonder how difficult it is.

I wouldn't use (p)react for a website that just needed a bit of AJAX, but I
find it a bit hard to imagine doing an actual _app_ with vanilla JS.

~~~
austincheney
> but I wonder how difficult it is.

It's not, if you have a basic understanding of how the code actually works.

> but I find it a bit hard to imagine doing an actual app with vanilla JS.

Try it. It will blow your mind how simple it is and how little overhead it
requires.

~~~
citeguised
I went from developing simple server-rendered websites enhanced with a bit of
jQuery straight to React-style JS.

I'm interested in studying the source of more or less complex JS apps that
were developed without SPA frameworks.

One I'd like to see would be Construct3, but alas it's not open-source.

I'd very much appreciate links and hints!

~~~
austincheney
You can look at my SPA that maintains state perfectly well and persistently
without any framework. It isn't hard, but you would have to be willing to
write original code.

[https://prettydiff.com/](https://prettydiff.com/)

All of the UI is defined here:
[https://github.com/prettydiff/prettydiff/blob/master/api/pre...](https://github.com/prettydiff/prettydiff/blob/master/api/prettydiff-
webtool.ts)

------
namelosw
I thought this was well known years ago. A better description of the VDOM
would be 'it's not fast, and it's not slow either'.

But frankly, what I see in virtual DOM is not about speed. It's a declarative
interface, an abstraction. It's more like a blueprint that's easier to
interpret across different environments like React Native, WebGL. Even if you
don't need any of these cross-platform benefits it's still good for testing --
without real DOM.

As for performance, it could be an aspect of advertising but I doubt it really
matters anymore.

I saw many applications where AngularJS was too slow, and I even worked on one
for quite a while - it was just a fairly typical 'enterprise application'. But
I have yet to see a real-world front-end project where React is too slow.

Users won't even care about if it is 10ms or 30ms.

~~~
Bahamut
I work on one with React that is too slow in the browser with a team that only
has senior devs, and users even filed bugs about the performance - we do heavy
computations, and React's model of blocking rendering on having everything
updated can freeze our UI for up to 10s while data comes in from various API
requests. I believe our app would be performing much better for the end user
if we were using Angular 2+ interestingly enough due to its built in
incremental updating - there would be other tradeoffs though.

Part of the problem is not having good enough APIs currently (we have to make
too many API requests and data payloads are too fat, sometimes up to 2 MB per
request), but imperfect APIs tend to be the case in a lot of apps early in
their lifecycle. I've actually been a bit disappointed in React's performance
from a UX perspective.

~~~
charrondev
This seems almost entirely unrelated to your framework, though.

You are blocking rendering on IO. This can be greatly improved by offering:

- Good loading indicators that stay consistent for related IO.

- Caching to speed up or remove the need for requests.

- Fetching related data in parallel.

- Prefetching data.
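The "fetch related data in parallel" point, sketched with fake fetches standing in for real network requests:

```javascript
// Start all requests at once instead of awaiting them one by one.
// fakeFetch stands in for a real network call.
const fakeFetch = (name) => Promise.resolve({ name });

async function loadSequential() {
  const user = await fakeFetch("user"); // each await blocks the next
  const posts = await fakeFetch("posts");
  return [user, posts];
}

async function loadParallel() {
  // Both requests are in flight at the same time; total latency is the
  // slowest request, not the sum of all of them.
  return Promise.all([fakeFetch("user"), fakeFetch("posts")]);
}

loadParallel().then(([user, posts]) => console.log(user.name, posts.name)); // "user posts"
```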

------
guscost
From the conclusion:

> Virtual DOM is valuable because it allows you to build apps without thinking
> about state transitions, with performance that is generally good enough.

In other words, Virtual DOM is _somewhat-valuable_ overhead. This is a cool
alternative, seemingly sort of a compile-time version of Knockout. It's
probably worth a try for writing an efficient client app, but I have a hunch
that I'd miss the "HTML-in-JS(X)" pattern if I went back to using "JS(?)-in-
HTML" instead. A VDOM runtime allows you to write plain JS that "just works",
at least until certain parts need to run faster. This means junior programmers
can pick it up and become productive quickly, _and_ avoid driving their
projects off a metaphorical cliff.

Of course this is bought with bandwidth and CPU overhead, lots of it in some
cases. The call you should make when considering a VDOM is whether the safety
and familiarity benefits are worth the overhead. If your team is experienced
enough to take on a new DSL for rendering markup (which every template-binding
tool really is) _and_ meticulous enough to assign instead of mutate and avoid
two-way binding pitfalls, go for it. If not, be careful.

This is not meant as a challenge. Personally I wouldn't want to work on a big
application that is wholesale optimized in this way, unless there was no
alternative. I wouldn't write my own game engine (if it was for a job) either.

~~~
floatboth
You can easily do HTML-in-JS "react-like" rendering _without diffing_!

lit does just that [https://lit-element.polymer-project.org](https://lit-
element.polymer-project.org)

------
Sawamara
I was running some quite complex UI systems in several of my projects with
vanillaJS, cached DOM elements, all that jazz. Nothing like VDOM. Then
eventually it started to bog me down. Like rendering an inventory in an RPG
system where you can buy stuff from the vendors: I started to get dissatisfied
with change operations that lasted upwards of 2-3ms on a bad day. So I started
caching even more DOM elements, and building local data structures to track
what was rendered last, and to what, to avoid rerendering everything and
optimize down to change-based renders only.

A few weeks of this, and it dawned on me that had I needed to generalize my
solution, I would have arrived at the exact same model that hyperscript uses
for its diffing, and something similar to what's underneath React's diffing
method (or Preact, since I prefer that, but they share the API).

So yeah, virtual DOM is just a more clever and straightforward way to map your
state to the DOM, identifying exactly where the changes happened and only
updating those nodes, instead of doing queries against the DOM API (costly,
and can cause rerenders, like when checking for bounding boxes, etc).

It IS more useful because you no longer need to maintain a hyper-specific
update function per project, with manually created and maintained diffing
code.
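The "remember what was rendered last" optimization converged on here is essentially a hand-rolled one-node diff; a minimal sketch (stub element, invented names):

```javascript
// A hand-rolled one-node diff: remember what was written last and skip
// the DOM entirely when nothing changed. `el` is a stub node.
function makeCell(el) {
  let lastText;
  return function render(text) {
    if (text === lastText) return false; // unchanged: don't touch the DOM
    lastText = text;
    el.textContent = text;
    return true;
  };
}

const el = { textContent: "" };
const renderGold = makeCell(el);
console.log(renderGold("10 gold")); // true  (DOM written)
console.log(renderGold("10 gold")); // false (skipped)
console.log(renderGold("9 gold"));  // true
```

Generalize this over a whole tree of nodes and you have, roughly, a virtual DOM differ.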

~~~
Mikushi
Not to be grating but your problem sounds more like using the wrong tool for
the job. DOM is not made to render video game UI, it is a bad tool to do so as
you discovered yourself.

~~~
Sawamara
Tell that to Guild Wars 2. ([https://mithril.js.org/framework-
comparison.html](https://mithril.js.org/framework-comparison.html))

But putting even those games aside which use webview for UI, there are still
video games made inside the browser, and for those, having sub-1ms rerender
times with React is perfectly reasonable. We could get into immediate mode vs
retained mode debates right about here, but that is a slightly different topic
:D

------
thoman23
2 things I'm not seeing in the article or in the comments so far:

1) The virtual DOM is an abstraction that allows rendering to multiple view
implementations. The virtual DOM can be rendered to the browser, native phone
UI, or to the desktop.

2) The virtual DOM can, and should, be built with immutable objects which
enables very quick reference checks during the change detection cycle.
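A sketch of point #2 - with immutable nodes, "did this subtree change?" becomes a single reference comparison instead of a deep walk:

```javascript
// With immutable nodes, unchanged children are reused between versions,
// so whole branches can be skipped on one reference comparison.
const itemA = Object.freeze({ label: "a" });
const tree1 = Object.freeze({ title: "v1", items: [itemA] });

// An "update" builds a new root but reuses the untouched children array.
const tree2 = Object.freeze({ title: "v2", items: tree1.items });

console.log(tree1 === tree2);             // false: root changed, re-check it
console.log(tree1.items === tree2.items); // true:  skip the whole subtree
```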

~~~
kgwxd
On point #2, ClojureScript not only provides immutability out of the box, but
also has libraries for replacing JSX with the same stuff everything else is
built with. It's an insanely beautiful way to work with React.

~~~
city41
Reagent is probably the best UI dev experience I’ve found yet.

A quick blog post explaining why for anyone curious about reagent:
[https://www.mattgreer.org/articles/reagent-
rocks/](https://www.mattgreer.org/articles/reagent-rocks/)

------
ng12
I keep hearing this and find it really hard to care about. Runtime performance
is not a bottleneck for me. Once in a blue moon I'll have to optimize a React
component with shouldComponentUpdate but otherwise I have no performance
concerns even on old browsers.

There are other characteristics that are very, very important like build size.
VDOM is not worth thinking about.

Honestly I don't understand Svelte. It sounds like it's very good at the
things it does but the things it does are not the things I need.

~~~
ralphstodomingo
I feel as if Svelte came at the wrong time. These days, when most people know
either React or Vue or some other thing, and computing devices are performing
better over time, there are diminishing returns on performance optimization.
Sure, you do a bit of it, and then you're often better off doing something
else, like enhancing developer experience, for example.

I really like the idea, and will play around with it, but fat chance it's
getting into production with me. I am much more productive with React now, and
I worry more about business requirements than raw performance (that I almost
never worry about these days).

~~~
atan
> Sure, you do a bit of it, and then you're often better doing something else,
> like enhancing developer experience for example.

Aside from its performance optimizations, arguably one of the biggest selling
points of Svelte _is_ its developer experience (which is enabled by it being a
compiler). See [https://svelte.dev/blog/svelte-3-rethinking-
reactivity](https://svelte.dev/blog/svelte-3-rethinking-reactivity). And
because only the features used in your components get included in the final
bundle, the framework is free to add nice extras (like its
transitions/animations system) without negatively impacting apps that don't
need those extras.

------
nojvek
The ideas of svelte are great. It reminds me of snabbdom thunks and inferno
blueprints. If you know the view code ahead of time, you can do plenty of
perf optimizations, since you know exactly what changes and what to react to.

But sometimes I dynamically generate vdom nodes. Like markdown to vdom. There
vdom shines. It’s a simple elegant idea.

I think svelte is exaggerating a bit.

React and vdom family of libraries are great. Svelte is great too. Not
mutually exclusive.

Someone should write a Babel JSX transpiler that does Svelte-like
compile-time optimizations for React, while still allowing dynamic runtime
diffing if needed.

No reason why we can’t have best of both worlds.

~~~
fnordsensei
I do this with React all the time as well, though via ClojureScript and Re-
frame[1], in which nodes are represented as plain Clojure data structures.

E.g., send an article from the server, formatted in EDN/Hiccup[2][3]. Insert
it into a component in the frontend, and it's converted to VDOM nodes. No
further logic or conversion required.

[1]: [https://github.com/Day8/re-frame](https://github.com/Day8/re-frame)

[2]: [https://github.com/edn-format/edn](https://github.com/edn-format/edn)

[3]:
[https://github.com/weavejester/hiccup/wiki/Syntax](https://github.com/weavejester/hiccup/wiki/Syntax)

~~~
palerdot
I just noticed that the re-frame README is very amusing and wickedly hilarious
in parts. It is very refreshing to read compared to other
bland/formal/technical READMEs. Maybe in the future I'll try re-frame itself
and see if using it is as joyful as its README.

~~~
fnordsensei
Once you get into it, it's very good. It wrinkled my brain at first though. It
does include a bit more ceremony than plain Reagent, which means the app
should be sufficiently advanced for the benefits to outweigh the overhead.

My rule of thumb is to start with Reagent, and as soon as I notice the desire
to wrap and abstract away plain atom swaps, I switch over to re-frame.

Re-frame is largely an abstraction over atom swaps, and guaranteed to be
better than the half-baked, 10% coverage of the Re-frame functionality that I
would end up with if I did it myself.

Migrating from Reagent to Re-frame doesn't have to be done in one huge
refactoring. It can be done by introducing re-frame into your app function by
function.

------
osrec
We implemented a library at my company that does not use a virtual DOM, but
instead captures reactive "change functions".

The framework captures dependencies between the reactive "change functions"
and underlying variables, and executes the functions whenever a variable's
value changes. You can also have dependencies between variables (like computed
vars in Vue), and the lib works out the correct order for calculation and
execution. All the change functions get queued up and applied in order before
the next repaint.

Everything is component based, and there is even a nice kind of inheritance
(with lazy, async component loading).

It works rather well. I'd be happy to share it with anyone that's interested!
Not open sourcing it yet, as I envisage it would be a full time job to support
it!

Edit: the upside of the change functions is that YOU decide how the DOM is
updated. It's really quite cool to be able to implement a function like the
following and have it run to update this.$dateOfBirth whenever
this.data.dateOfBirth changes:

    
    
      function()
      { 
        this.$dateOfBirth.val(this.data.dateOfBirth);
      }
    

That is of course a simple example, but when you need even more control,
reactive change functions have proven to be super useful (for us anyway!).
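The automatic dependency capture described above can be sketched in plain
JavaScript with a Proxy (the library itself is private, so every name here is
hypothetical): a change function is run once while its property reads are
recorded, then re-run whenever one of those properties is written.

```javascript
// Minimal sketch of automatic dependency tracking, not the actual
// library: a Proxy records which keys a "change function" reads,
// then re-runs it when one of those keys is written.
function reactive(data) {
  const deps = new Map(); // key -> Set of change functions
  let activeFn = null;    // function currently being tracked

  const state = new Proxy(data, {
    get(target, key) {
      if (activeFn) {
        if (!deps.has(key)) deps.set(key, new Set());
        deps.get(key).add(activeFn); // record the dependency
      }
      return target[key];
    },
    set(target, key, value) {
      target[key] = value;
      // Re-run every change function that read this key.
      (deps.get(key) || []).forEach((fn) => fn());
      return true;
    },
  });

  // Run fn once with tracking enabled so its reads are captured.
  const watch = (fn) => {
    activeFn = fn;
    fn();
    activeFn = null;
  };

  return { state, watch };
}
```

A change function like the `$dateOfBirth` example then just reads
`state.dateOfBirth` inside `watch(...)`, and re-runs on every write to it.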

~~~
markharper
That sounds similar to the incremental lambda calculus described in this
paper: [https://arxiv.org/abs/1312.0658](https://arxiv.org/abs/1312.0658)
There's an implementation for DOM updates in Purescript
([https://blog.functorial.com/posts/2018-04-08-Incrementally-I...](https://blog.functorial.com/posts/2018-04-08-Incrementally-Improving-The-DOM.html)),
but I haven't come across a similar approach in Javascript yet.

~~~
osrec
Yes, it's a similar idea. The only difference I can see is that in our
library, the dependencies are automatically tracked, and there is some
additional clever scheduling around when a function should be run.

------
nemothekid
Virtual DOM is pure overhead*

* Compared to doing static analysis and optimizing your UI updates at build time.

While I certainly agree that Svelte's approach may be the future, I think
React and others are very much a needed stepping stone (especially when you
consider all the work done transpiling JS code).

The Virtual DOM was the most performant solution that applied generally to a
large number of cases. The reason almost everyone did `x.innerHTML = html` is
that it was the most general and widely available solution.

~~~
floatboth
No, you don't need to do anything at build time! (You don't even have to have
a build time.)

You can just… instantiate a template, remembering where the "holes" are, to
get precise update functions for every data field that gets inserted into the
template. This is what lit-html does, and it's such an obvious approach I'm
really surprised that VDOM took off before it.

[https://lit-element.polymer-project.org](https://lit-element.polymer-project.org)
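The "remember the holes" idea is easy to illustrate. A minimal, DOM-free
sketch (plain objects stand in for text nodes, and `compile` is a made-up
name, not lit-html's real API): instantiate once, keep a direct reference to
each hole, and write straight into those references on update.

```javascript
// Sketch of template instantiation with remembered "holes".
// Static strings are fixed at compile time; each dynamic value gets
// a node we hold a reference to, so updates touch only the holes.
function compile(strings) {
  return function instantiate(...values) {
    // One object per hole, standing in for a real DOM text node.
    const holes = values.map((v) => ({ text: String(v) }));
    return {
      // Precise update: write directly into the remembered nodes,
      // with no diffing of the static parts.
      update(...next) {
        next.forEach((v, i) => { holes[i].text = String(v); });
      },
      // Interleave static strings and hole contents for display.
      toString() {
        return strings.reduce(
          (out, s, i) => out + s + (holes[i] ? holes[i].text : ''),
          ''
        );
      },
    };
  };
}
```

Usage: `compile(['Hello, ', '!'])` yields an instantiator; calling
`inst.update('HN')` rewrites only the hole, never the static text.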

------
xtagon
Svelte's philosophy on turning the virtual DOM concept inside out sounds like
it has merit, and is very promising. But it's going to take a lot more than
that, in my opinion, before a large number of people consider switching from
React, Ember, etc.

I don't see that as a drawback, I see it as an open opportunity for Svelte to
keep building out on improvements other than the DOM updates, and catching up
with everything else the SPA alternatives provide that have nothing to do with
the virtual DOM.

For example, Ember is just a joy to work with, and makes it easy to rapidly
prototype reactive frontends in a way that reminds me of Ruby on Rails's
initial appeal to developer happiness, and the tooling is very mature. If you
could unlock all those benefits while keeping the blazing fast DOM updates, oh
boy!

~~~
hatch_q
Tooling is the key word here. Svelte simply doesn't have the tooling needed
for any big project:

- testing/testability (unit tests are easy, but what about functional, e2e?)
- strong-typing support (Flow, TypeScript)
- good IDE support?
- i18n? ICU support, etc.? They need to redo what ember-intl or react-intl do.

Without these things it's simply not viable to start bigger projects with a
new framework.

------
beders
It is simply mind-boggling how much effort the JS community has put into
working around the performance properties of a document layout engine to make
it 'interactive' and 'responsive'.

With only 3 major rendering engines left standing, where is the concerted push
to turn these document renderers into general-purpose, fast, desktop-quality
rendering engines?

Back to Svelte vs. React vs. Reagent vs. Vue.JS vs. Angular vs. (insert
framework-of-the-month-here)

One common theme seems to be: run code to manipulate a tree-like data
structure (the DOM) efficiently. This obviously needs to become: submit data
to the rendering engine to manipulate the tree-like structure. (In a way,
.innerHTML is doing that for a sub-tree, but it is not suitable for general-
purpose tree manipulation.)

~~~
writepub
> where is the concerted push to turn these document renders into general
> purpose, fast, desktop-quality rendering engines

The DOM is fast enough for desktop apps @60 or even 90 frames per second,
especially if you follow best practices (no framework required).

~~~
beders
If it is fast enough, why is everyone optimizing?

~~~
robocat
For an individual developer it is not obvious how you should structure your
usage of the DOM API to avoid the performance potholes and other problems.
Even a team of experts in the DOM will sometimes take shortcuts or one
expert's technique doesn't play nicely with another's.

Some parts of the DOM are extremely slow if the API is used naturally, and
obvious usage of the API has other side-effect penalties (e.g. stored state,
difficult component destruction, or reference loops causing memory blowouts).

So the vast majority of developers use the native DOM in such a way that the
page is slow and buggy.

React et al provide a clean API that avoids the worst performance problems,
while providing a framework that steers a team towards good practices, so the
average developer can be productive.

The framework has a bunch of extra overhead, but the overhead is far less than
the average overhead of not using the framework.

------
ummonk
>The danger of defaulting to doing unnecessary work, even if that work is
trivial, is that your app will eventually succumb to 'death by a thousand
cuts' with no clear bottleneck to aim at once it's time to optimise.

This is so true in practice.

------
pier25
After years of doing mostly React and Vue SPAs I've never experienced a
performance problem.

The metrics of Svelte that interest me more are lines of code and bundle size,
both in which it excels.

Here is an article that compares the exact same project called RealWorld
written in a number of front end libraries/frameworks.

[https://medium.freecodecamp.org/a-realworld-comparison-of-fr...](https://medium.freecodecamp.org/a-realworld-comparison-of-front-end-frameworks-with-benchmarks-2019-update-4be0d3c78075)

Here is the main repo for the RealWorld project for front and backend:

[https://github.com/gothinkster/realworld](https://github.com/gothinkster/realworld)

~~~
pan_4321
Same experience here but with React and Angular (2+). No real world
differences that can be observed by an average user in the average app.

The choice has to be made by ecosystem, programming style, etc.

React as a library doesn't offer enough for me personally. Angular is too
heavy on concepts. Vue seems to hit the sweet spot?

------
computerex
What a strange post. Yes, virtual DOM is overhead, much like JIT compilation
is an "overhead". But this overhead ultimately translates to better
performance because many virtual DOM transformations can be buffered into 1
transformation of the real DOM.
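The buffering claim can be sketched in a few lines (names are illustrative;
a real implementation would schedule `flush()` with `requestAnimationFrame`
rather than calling it explicitly): many cheap logical updates coalesce into
one expensive write to the real DOM.

```javascript
// Sketch of update buffering: update() is a cheap JS write that may
// be called many times per frame; flush() performs the one real DOM
// write (represented here by the applyToDom callback).
function createBatcher(applyToDom) {
  let pending;
  let dirty = false;
  return {
    update(state) {   // called for every logical state change
      pending = state;
      dirty = true;
    },
    flush() {         // called once per frame
      if (dirty) {
        applyToDom(pending); // a single commit for N requests
        dirty = false;
      }
    },
  };
}
```

Three calls to `update` followed by one `flush` produce exactly one call to
`applyToDom`, carrying only the latest state.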

~~~
azangru
> ultimately translates to better performance

Better compared to what?

For a library like React, which re-renders the component tree every time a
component's props or state change, a virtual DOM with diffing and patching is
indeed a better approach compared to naive re-rendering of the whole DOM.

But as Rich Harris said during his talk about Svelte v.3.0, whenever he hears
claims about better performance of frameworks based on virtual DOM,
illustrated with benchmarks, he runs the same benchmarks with Svelte (not
based on virtual DOM), and inevitably gets better results.

~~~
nemothekid
> _But as Rich Harris said during his talk about Svelte v.3.0, whenever he
> hears claims about better performance of frameworks based on virtual DOM,
> illustrated with benchmarks, he runs the same benchmarks with Svelte (not
> based on virtual DOM), and inevitably gets better results._

That isn't a fair comparison. Consider this analogy - React is the JVM and
Svelte is Rust. JS before either is C. Now it can be shown that in most cases
C is faster than anything on the JVM, but in reality it wasn't, and that C
code was riddled with bugs. The JIT'd JVM comes along and guarantees safer
and more performant code. Then someone comes along, says the JIT is overhead,
rewrites everything in Rust, and shows how fast the program is.

What's being ignored is the man-years, technology and insights that made Rust
possible. The fact of the matter is, (1) code written the way Svelte generates
it was incredibly rare and difficult to achieve, and (2) Svelte is pretty
radical in how the framework is implemented (at a high level, Svelte is
essentially doing static analysis on your code to figure out where the DOM is
being updated, serving a like-minded purpose to Rust's borrow checker).

~~~
overgard
> Now it can be shown that in most cases that C is faster than anything on
> JVM, but in reality it wasn't, and that C code was riddled with bugs.

Um, what? Your metaphor breaks down because it doesn’t really connect with
reality. The operating system and the browser you typed that in are written in
C/C++ because the performance hit of doing that in a language like java would
be absurd. Practically every performance sensitive application is still
written in C/C++ (games, productivity tools, desktop apps, etc). Rust is,
outside of places like HN, just an interesting novelty to like 99.999% of the
software industry, and your average java app is glue code between a gui and a
database.

~~~
nemothekid
I had a feeling it was a poor analogy after I submitted it. I'll leave it up,
but the analogy doesn't properly convey the point I was trying to get across.
This isn't a discussion about Java's relative performance with C++.

It's more that the performance gains of properly optimizing your DOM
modifications aren't generally achievable without static analysis like Svelte
employs.

~~~
cpfohl
I _think_ it's worth noting that Svelte batches updates too...

------
fpoling
Direct manipulations of the DOM are expensive. It is vastly cheaper to create
or update a JS object than to create or manipulate a DOM node. So the claim
that virtual DOM is always an overhead is not true. The diff algorithm can
give a set of DOM operations that is less expensive than a typical sequence
of manual mutations. So virtual DOM can be faster if the savings from fewer
DOM operations are bigger than the extra JS work.

Surely carefully crafted direct DOM mutations will be the fastest approach,
but they typically lead to hard-to-maintain code.
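The "fewer operations from a diff" point can be shown with a toy keyed diff
(plain `{ key, text }` objects stand in for vnodes; this is an illustration,
not React's actual reconciler): unchanged nodes produce no operations at all.

```javascript
// Toy keyed diff: compare two "virtual" lists and emit a minimal set
// of operations, instead of rebuilding every node.
function diff(oldList, newList) {
  const ops = [];
  const oldByKey = new Map(oldList.map((v) => [v.key, v]));
  const newKeys = new Set(newList.map((v) => v.key));

  for (const v of newList) {
    const prev = oldByKey.get(v.key);
    if (!prev) {
      ops.push({ type: 'create', key: v.key, text: v.text });
    } else if (prev.text !== v.text) {
      ops.push({ type: 'update', key: v.key, text: v.text });
    }
    // Identical nodes fall through: no operation emitted.
  }
  for (const v of oldList) {
    if (!newKeys.has(v.key)) ops.push({ type: 'remove', key: v.key });
  }
  return ops;
}
```

Diffing `[a:1, b:2, c:3]` against `[a:1, b:5, d:4]` yields just three
operations (update b, create d, remove c); the untouched `a` costs nothing.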

~~~
atan
I'm not sure you understood the article. The Svelte compiler does in fact
generate code that performs "carefully crafted direct DOM mutations," though
it is not hard to maintain, because the compiler handles it. Given code that
already knows exactly which DOM updates to make, virtual DOM would indeed be
pure overhead.

~~~
fpoling
Typical React code does not know which updates to make. It always builds the
virtual DOM from scratch as if it were the first time. It is the diff
algorithm that then figures out the set of changes.

If code knows which updates to make, it essentially embeds a particular form
of the diff algorithm. That inevitably leads to more code to write, as besides
the initial construction of the DOM one has to track changes. And such manual
tracking is not necessarily optimal, as the diff algorithm has a global
picture and can target the globally optimal set of mutations, while manual
in-component tracking optimizes for a particular component.

~~~
atan
> If code knows which updates to make, it essentially embeds a particular form
> of the diff algorithm.

Having an alternative means of identifying where to make targeted updates on
data changes (i.e., via static analysis during a compilation step) is not the
same thing as embedding "a particular form of the diff algorithm" (which would
be a runtime operation). Svelte does not produce anything comparable to a diff
algorithm.

> That inevitably leads to more code to write as besides the initial
> construction of DOM one has to track changes.

With Svelte, there is actually _less_ code to _write_ , as the compiler
handles generation of the change tracking code. And even the generated code is
minimal and generally leads to much smaller bundles, as no runtime gets
shipped with the code. This leads to faster startup times, which is often the
real performance bottleneck for SPAs.

> And such manual tracking is not necessarily optimal, as the diff algorithm
> has a global picture and can target the globally optimal set of mutations,
> while manual in-component tracking optimizes for a particular component.

Can you offer an example where a "globally optimal set of mutations" would be
different from the set of mutations Svelte would make on a given change?

------
lenkite
Personally, I find modern template based approaches like lit-html,
hyperhtml/lighterhtml better and faster. And also being far, far smaller.
Throw in a CSS framework like bulma or tailwind-css and you are good to go at
a smaller footprint and better performance.

~~~
floatboth
And lit works without any build tools!

~~~
nsonha
the main reason templates are bad is not that they need a build step; it's
more about being non-standard (a problem lit-html doesn't have) and not
statically analyzable (think TypeScript's JSX), which lit-html doesn't solve.

~~~
spankalee
lit-html does solve the static analyzability problem, see
[https://marketplace.visualstudio.com/items?itemName=runem.li...](https://marketplace.visualstudio.com/items?itemName=runem.lit-plugin)

~~~
nsonha
that's interesting to know.

Correct me if I'm mistaken since the motivation of the project isn't that
clear on the home page: the main thing that makes lit-html special is the fact
that it's just standard javascript.

This won't appeal to TypeScript developers because they have JSX already
built into the compiler. They can use lit-html and have their templates
type-checked too, but it requires extra setup. What's the benefit here? That
they can use class= and for=?

~~~
spankalee
lit-html isn't _just_ about using standard JavaScript syntax. It's also about
being more efficient than VDOM, very small, and having great HTML integration
(you can set any property, attribute, or listen to any event, unlike with
React).

Yes, you need a plugin for type checking with TypeScript. That doesn't seem to
be a huge problem for users so far.

~~~
nsonha
I appreciate that it's more efficient than VDOM, if that's true and continues
to be true in larger-scale apps. I don't see "can set any property, attribute,
or listen to any event" as a benefit though. To me it's an unpleasant way of
writing code, having to keep in your brain a map of all these event and prop
bindings.

If you need events, Rx is much better to work with. Probably the one place
React is not as good as HTML elements is the lack of attributes; this causes
a lot of defaultValue and shouldComponentUpdate kinds of complications.
However, if you really think about it, all props should just be immutable
(attributes), and if you want some to be updated or to emit events, just make
them Observable and Observer accordingly.

------
rayiner
Only ever having used MFC and Swing, this seems odd to me. A diff of the
entire DOM on every state change? You never see anything like that in native
toolkits. ELI5: What problem is that solving?

~~~
Tehdasi
The problem it's solving is the DOM being unsuited to writing the kind of
applications that MFC and Swing are used for (which, for the web, means SPAs).

~~~
andrekandre
isn’t this pointing to more of a fundamental problem with how
browsers/html/document model are designed/implemented than anything else?

html was originally a document format, not an app framework, and i think all
of these frameworks (large and small) are just workarounds when what we really
need is a fundamental re-imagining of what the browser should/could be.

sometimes i wonder if alan kay was right when he said the browser should have
been basically just byte-code interpreter[1]...

[1] [http://www.drdobbs.com/architecture-and-design/interview-wit...](http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442?pgno=2)

------
underwater
> The original promise of React was that you could re-render your entire app
> on every single state change without worrying about performance. In
> practice, I don't think that's turned out to be accurate. If it was, there'd
> be no need for optimisations like shouldComponentUpdate (which is a way of
> telling React when it can safely skip a component).

It's shouldComponentUpdate(), not shouldDOMUpdate(). Even if DOM operations
are direct, or the virtual DOM is infinitely fast, there are plenty of
situations where you want to avoid running _application_ code on every update.

Some frameworks use data binding to track whether a component update is
necessary. This is what Svelte does, but because there are no explicit checks,
it has some weird conventions around annotating certain bound values:

    
    
        <script>
            export let num;
            $: squared = num * num;
        </script> 
    

React just happens to implement this behaviour differently: it assumes a
component needs updating unless the shouldComponentUpdate() hook says
otherwise. The advantage (ironically) is that React is "just JavaScript",
whereas Svelte needs a compiler that can instrument the code.

This design decision shouldn't be confusing to the author; I assume he would
have made this design decision consciously?

~~~
JMTQp8lwXL
Well-written React code shouldn't need to leverage the shouldComponentUpdate
API. The application structure would be off, if that's the case.

~~~
underwater
A React component can do arbitrary, computationally expensive, logic inside
components. React gives you the ability to memoize that work. Without
shouldComponentUpdate developers would need to hand roll a caching layer or
stick computed values in a store.

~~~
prashnts
With the current release, you can use a hook, such as `useMemo`, which would
memoize the value for the entire lifecycle of your component. I’ve been
writing in react for around 3 years now, and never had to use
shouldComponentUpdate, but instead compute whatever needed either on mount or
when props changed. Curious what prompted the case you mentioned?
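For reference, the dependency-array behaviour of `useMemo` amounts to
roughly this (a plain-JS sketch of the semantics, not React's actual
implementation, which ties the cache to the component's hook state):

```javascript
// Sketch of useMemo-style caching: recompute only when the
// dependency array changes, compared element by element.
function createMemo() {
  let lastDeps = null;
  let lastValue;
  return function memo(compute, deps) {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => d !== lastDeps[i]);
    if (changed) {
      lastValue = compute(); // expensive work happens only here
      lastDeps = deps;
    }
    return lastValue;        // otherwise return the cached value
  };
}
```

Calling `memo(() => n * 2, [n])` repeatedly with the same `n` runs the
compute function once; changing `n` triggers exactly one recomputation.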

------
smith-kyle
> Here, we're generating a new array of virtual <li> elements — each with
> their own inline event handler — on every state change, regardless of
> whether props.items has changed. Unless you're unhealthily obsessed with
> performance, you're not going to optimise that.

React makes it pretty trivial to prevent rerenders when props have not changed
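The usual tools for this are `React.memo`, `PureComponent`, or
`shouldComponentUpdate`, all of which boil down to a shallow props
comparison. Sketched here without React (the `memoized` wrapper is a
hypothetical name for illustration):

```javascript
// Shallow equality: same keys, same values by reference/identity.
function shallowEqual(a, b) {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  return ka.length === kb.length && ka.every((k) => a[k] === b[k]);
}

// Skip the render entirely when props are shallow-equal to last time,
// which is the essence of what React.memo does.
function memoized(render) {
  let lastProps = null;
  let lastOutput;
  return function (props) {
    if (lastProps && shallowEqual(lastProps, props)) return lastOutput;
    lastProps = props;
    lastOutput = render(props);
    return lastOutput;
  };
}
```

Rendering twice with the same props invokes the underlying render function
only once; a changed prop value triggers a fresh render.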

------
DigitalSea
This is why I use Aurelia. It's a Javascript framework many here have probably
never heard of or used, it debuted in 2015 and I have been working with it for
four years now. Sadly Aurelia debuted at the height of the React hype and soon
after, Vue hype.

Rob Eisenberg (the man in charge of the Aurelia project) had the right idea
straight out of the gate. A reactive binding and observation system that
worked like a virtual DOM (isolated specific non-destructive DOM operations)
without the need for an actual virtual DOM. Which allows you to use any third-
party library without worrying about compatibility or timing issues with the
UI.

This is one area where React falters, at least when I used it: third-party
libraries clashed with the virtual DOM. When you start introducing
abstractions to solve imaginary problems caused by improperly written code
(the myth of the DOM being slow), you introduce issues you have to battle
later on as your application scales.

~~~
Tobani
The default behavior of JavaScript interacting with the DOM is incredibly
slow once the page gets complicated enough. I've certainly seen it first-hand.
This may not be a problem you have, and indeed maybe not everybody needs
React. But the problems things like React/Vue/whatever solve (correctly or
not) aren't imaginary.

------
reilly3000
Has anybody here migrated a UI from React to Svelte? How did it go?

~~~
ru999gol
of course not, this is just opinionated rubbish, in the real world people use
react dom/native

~~~
PudgePacket
> Be kind. Don't be snarky. Comments should get more thoughtful and
> substantive, not less, as a topic gets more divisive.

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

[https://mobile.twitter.com/sveltejs/status/10885005396404183...](https://mobile.twitter.com/sveltejs/status/1088500539640418304)

------
revskill
To me, React is more about developer experience. The ease of reasoning about
the app is far more important than anything else.

The problem with Svelte is that its words sound louder than its actions.

You can learn from the React documentation. Instead of throwing a bunch of
concepts in the dev's face, it brings out the WHY of ReactJS with real code.

Teaching users right from the documentation is the best way to introduce a
library/framework, letting users really experience the tech.

~~~
rahimnathwani
If you like the React documentation with real code examples, you'll love the
Svelte tutorial with both code examples and a live playground. The UI is
beautiful, too:

[https://svelte.dev/tutorial/basics](https://svelte.dev/tutorial/basics)

The examples are also very clear:

[https://svelte.dev/examples#hello-world](https://svelte.dev/examples#hello-world)

------
saggas
I have never used modern JS frameworks like Angular, React and Vue and I have
always assumed (hoped) that they contained optimisations that you would be
unlikely to use in your vanilla JS code even though you could. Something like
FastDOM which batches read/write operations to avoid unnecessary reflows. Do
they contain anything like that?

~~~
zeugmasyllepsis
To varying degrees depending on the library, I believe the answer is "yes,
they sometimes do". Angular and Ember at least have systems for batching user
interactions, which translate into model updates, and thus potentially DOM
updates. I believe the respective systems for handling this are called Zone
and Backburner for Angular and Ember, but I've been out of touch with those
projects for a couple of years.

There's definitely a trade-off, however. They make a huge difference for noisy
events (like mouse move, scrolling, dragging, etc), but tend to make debugging
much harder in my experience. When things go wrong, the stack traces nest
deeply into the event handling systems and code paths no longer resemble the
relatively straightforward world of traditional event calls, where a callback
handler is invoked directly in response to a single event.
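The read/write batching idea (FastDOM-style, as mentioned upthread) amounts
to two queues flushed in order, so measurements never interleave with
mutations. A sketch with an explicit `flush()` (FastDOM itself schedules the
flush per animation frame):

```javascript
// Sketch of read/write batching: queue all reads (measure) and all
// writes (mutate), then flush reads before writes. In a browser this
// avoids forced reflows, because the layout computed for the first
// read stays valid for every subsequent read.
function createScheduler() {
  const reads = [];
  const writes = [];
  return {
    measure(fn) { reads.push(fn); },
    mutate(fn) { writes.push(fn); },
    flush() {
      // splice(0) empties each queue while returning its contents.
      reads.splice(0).forEach((fn) => fn());
      writes.splice(0).forEach((fn) => fn());
    },
  };
}
```

Even if callers interleave `measure` and `mutate` calls, the flush reorders
them into a read phase followed by a write phase.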

------
franciscop
Seems like just an ad for Svelte, and fairly FUD-ish at that. React will warn
you by default when doing that exact example, asking you to provide a key for
the `<li>` to avoid re-rendering it unnecessarily. Instead, this states
"Unless you're unhealthily obsessed with performance, you're not going to
optimise that."

------
baybal2
This is what I use when I need to do a very minimalistic UI and feel that
bringing in Vue/React is overkill:
[https://github.com/adamhaile/S](https://github.com/adamhaile/S) - reactivity
without the DOM superglued to it.

------
amelius
I think most people in this thread focus too much on the vdom, and miss the
important part of the post:

> Unlike traditional UI frameworks, Svelte is a compiler that knows at build
> time how things could change in your app, rather than waiting to do the work
> at run time.

------
snewcomer24
Here are some measurements of vdom vs plain ol' DOM manipulation:
[https://github.com/patrick-steele-idem/morphdom#benchmarks](https://github.com/patrick-steele-idem/morphdom#benchmarks)

------
gr__or
I wrote about this a while ago, and now I wonder whether I read this post at
some point in the past. Anyway, here it is:

[https://dflate.io/svelte-react-reconciliation](https://dflate.io/svelte-react-reconciliation)

------
kevingadd
Virtual DOM is a shaky long term bet since it's essentially betting that the
cost of DOM operations will always be high enough to justify all the work
you're doing (both at runtime and at trying-to-figure-out-how-to-write-this-
code time). When it's easy to do your virtualization, you can just go 'well
this is an optimization i can remove at any point', but if it's suddenly so
complex it introduces bugs, you're in trouble.

Naturally, the cost of DOM operations didn't go unnoticed and while people
have been going in on virtual dom solutions like React, the devs of Firefox,
Chrome and Safari have all been aggressively optimizing the dom - making the
native code bits faster and moving more of the DOM into javascript so all your
JS can get inlined and optimized. It gets harder and harder for libraries to
compete with regular DOM as a result.

~~~
mantap
Surely JS engines are also optimising React-style code. e.g. the article says
that react style code does a lot of unnecessary object creation (e.g. map) but
if that is now a common pattern then JS engines can do a lot to optimise that
away.

~~~
kevingadd
You can optimize the unnecessary operations but it's a lot more work to fully
determine that they're unnecessary and erase them.

------
_pdp_
Writing some basic apps with vanilla JS is fine. But I think we use browsers
for much more than that these days. For that we do need this level of
overhead, because the DOM is just too simple for the task.

------
gigatexal
Seems plausible to me, but the examples don't work on mobile (iPhone, iOS
12.3 with Safari), nor is the layout mobile friendly, at least when rendered
in portrait mode on my phone.

------
keepsmiling
hello, very great article. i've been working as a developer for over 20 years
now and have to say that the more experience you have, the fewer trains you
jump on. that's not to say that i don't look at frameworks anymore, but to
put it briefly: it's just often too much of a good thing. every framework has
good sides, but you should keep it simple. ciao

------
patsplat
Double buffering is a standard rendering technique. It might be pure overhead
but it won't look that way.

~~~
peterbraden
It's not double buffering - double buffering is just calculation into memory
and then a memory update.

VDOM is essentially calculating the buffer once in a scripting language, and
then again in a fast way if necessary.

Despite that, it may still be a good idea.

------
paulddraper
I quote an Angular contributor

> If you call something "view" and diff it, then it's Virtual DOM.

> If you call something "view model" and diff it, then it's Dirty checking.

(Angular uses the latter.)

[https://github.com/angular/angular/issues/22587#issuecomment...](https://github.com/angular/angular/issues/22587#issuecomment-370420631)

------
wnevets
I remember when react was new and the reason why everyone should use it was
because of its virtual dom and lack of typescript.

Now everyone is saying the virtual dom isn't the point and advocating
typescript.

~~~
ec109685
Seriously, who actually said lack of typescript was a plus for React ever?
That seems like a non-sequitur.

~~~
wnevets
A lot of people back when every blog was comparing Angular (read not
angularjs) and Reactjs.

~~~
preommr
People were upset about being FORCED to use TypeScript. It was hard for some
people to accept the two being bundled together. Not to mention this was back
when TypeScript had even less support, and there was a chance it was going to
end up like CoffeeScript. Hell, TypeScript adoption is still pretty low
according to metrics like the TIOBE index (although that doesn't mean much).

------
EGreg
I have always said:

Angular is diffing the model

React is diffing the view

That’s all. Better to just skip the diffing usually, and grab references to
elements and update them when certain events happen. It’s really ok!
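That approach can be as small as an event emitter plus a held reference (a
plain object stands in for the DOM element here, so the sketch runs outside
a browser; every name is illustrative):

```javascript
// Sketch of the event-based alternative to diffing: grab a reference
// to the element once, then update it directly from event handlers.
function createEmitter() {
  const handlers = {};
  return {
    on(event, fn) {
      (handlers[event] = handlers[event] || []).push(fn);
    },
    emit(event, data) {
      (handlers[event] || []).forEach((fn) => fn(data));
    },
  };
}

const bus = createEmitter();
const nameEl = { textContent: '' }; // stands in for a real DOM reference

// The reference is captured once; no searching the DOM, no diffing.
bus.on('name-changed', (name) => {
  nameEl.textContent = name;
});

bus.emit('name-changed', 'Ada'); // nameEl.textContent is now 'Ada'
```

Nothing is wasted: only the element bound to the event is touched, exactly
when the event fires.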

~~~
quickthrower2
The problem is spaghetti! You have something in the DOM and you might not
know what can update it, or has updated it, or why...

~~~
EGreg
When everything is event based, you know. You can trivially even record the
entire call stack on an object attached to the DOM element, when your
framework is in debug mode. You can see the whole history of call stacks if
you so choose.

And there is no waste, either.

------
kyberias
Please God let me never get into a situation where I need use these
frameworks.

