
Be proactive, not reactive – Faster DOM updates via change propagation - evlapix
http://blog.bitovi.com/change-propagation/
======
lhorie
Interesting article.

Some thoughts off the top of my head: Proxies are presented as a possible
candidate for improving performance, but in conversations w/ some vdom library
authors, I learned that Proxy performance is far too poor to make them a viable
option for high-performance vdom engines (in addition to their abysmal cross-
browser support today).
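For context, the Proxy approach under discussion looks roughly like this: property writes go through a trap that notifies subscribers, and that trap runs on every single access, which is where the per-access overhead comes from. A minimal sketch (names and shape are illustrative, not any particular library's API):

```javascript
// Minimal sketch of Proxy-based change tracking: every property
// write goes through the `set` trap, which notifies subscribers.
// The trap runs on *every* write, which is where the per-access
// overhead comes from.
function observable(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      const old = obj[key];
      obj[key] = value;
      if (old !== value) onChange(key, value);
      return true;
    }
  });
}

const changes = [];
const state = observable({ count: 0 }, (key, value) => {
  changes.push([key, value]);
});

state.count = 1; // trap fires, change recorded
state.count = 1; // no-op write, nothing recorded
state.count = 2; // trap fires again
```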

Another issue is that observable overhead _must_ be offset by savings in the
number of DOM operations for change propagation to be worth it. For
example, a `reverse` operation would not benefit much, if at all, since it
requires touching almost all DOM nodes in a list, and would incur worst-case
tracking overhead on top of it.
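To make the `reverse` point concrete: with per-item change tracking, reversing a list fires a notification for nearly every index, so the tracking machinery does O(n) bookkeeping on top of the O(n) DOM work a plain diff would do anyway. A rough sketch, assuming a naive index-based notifier:

```javascript
// Rough sketch: naive index-based change tracking over a list.
// Reversing fires a notification for every index whose value
// changed -- almost all of them -- so there are no DOM savings
// to offset the tracking overhead.
function diffByIndex(before, after, notify) {
  for (let i = 0; i < after.length; i++) {
    if (before[i] !== after[i]) notify(i, after[i]);
  }
}

const before = [1, 2, 3, 4, 5];
const after = before.slice().reverse();

let notifications = 0;
diffByIndex(before, after, () => { notifications++; });
// Every index except the middle one changed: 4 of 5 notifications.
```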

While naive vdom can lose in needle-in-haystack scenarios, vdom libraries
often provide other mechanisms (thunks, shouldComponentUpdate, per-component
redraws, etc) to cope w/ those scenarios.
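For readers unfamiliar with those escape hatches, the common shape is a predicate that short-circuits the diff for a subtree when its inputs haven't changed. An illustrative sketch (not any specific library's API):

```javascript
// Illustrative sketch of a shouldComponentUpdate-style escape hatch:
// the renderer consults a predicate before diffing a component's
// subtree, and skips the whole subtree when it returns false.
let renders = 0;
function component(props) {
  renders++;
  return `<li>${props.label}</li>`;
}

function renderIfChanged(prevProps, nextProps, comp) {
  // predicate: shallow inequality on props means "should update"
  const shouldUpdate = prevProps === null ||
    Object.keys(nextProps).some(k => prevProps[k] !== nextProps[k]);
  return shouldUpdate ? comp(nextProps) : null; // null = reuse old subtree
}

renderIfChanged(null, { label: 'a' }, component);           // first render
renderIfChanged({ label: 'a' }, { label: 'a' }, component); // skipped
renderIfChanged({ label: 'a' }, { label: 'b' }, component); // re-rendered
```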

In addition, the field of vdom performance currently has very strong traction.
Authors of vdom libraries often share knowledge and implementation ideas, and
there are now libraries that can outperform naive vanilla js in some cases by
employing techniques like DOM recycling, cloning and static tree diff
shortcircuiting, as well as libraries w/ a strong focus on granular localized
updates.
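As one concrete example of those techniques, static tree diff shortcircuiting typically works by tagging vnodes that can never change when the tree is built, so the diff can skip them wholesale. A simplified sketch (the `static` flag and node shape are illustrative):

```javascript
// Simplified sketch of static subtree shortcircuiting: vnodes that
// contain no dynamic expressions are tagged `static` when built, and
// the diff skips them wholesale instead of walking their children.
let nodesCompared = 0;
function diff(oldNode, newNode) {
  if (oldNode.static && newNode.static) return; // skip entire subtree
  nodesCompared++;
  const len = Math.min(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) diff(oldNode.children[i], newNode.children[i]);
}

const staticHeader = { static: true, children: [{ static: true, children: [] }] };
const treeA = { static: false, children: [staticHeader, { static: false, children: [] }] };
const treeB = { static: false, children: [staticHeader, { static: false, children: [] }] };

diff(treeA, treeB);
// Only the root and the dynamic child are compared; the static
// header subtree is never walked.
```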

~~~
justinbmeyer
> thunks, shouldComponentUpdate, per-component redraws, etc

I'm not sure how those would deal with the scenarios focused on in the
article. The only one I'm familiar with is per-component redraws, which
wouldn't apply.

~~~
lhorie
The general gist is that one would use the subtree diff skipping APIs to
prevent expensive diffs, and per-component redraws to selectively update
things in the problematic subtree. Admittedly, it's not perfect and requires a
not-so-trivial amount of app-space code, but at least it's not a black box
renderer like, say, Angular 1.

Ultimately, there are a vast number of different scenarios, some of which are
likely to remain open problems for the foreseeable future (e.g. a 100,000 item
reverse), and some of which can be worked around via application-space "escape
hatches" such as the use of granular update APIs and techniques like occlusion
culling.
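"Occlusion culling" in this context usually means only materializing DOM for the slice of a long list that intersects the viewport (list windowing). A back-of-envelope sketch, with illustrative numbers:

```javascript
// Back-of-envelope sketch of occlusion culling (list windowing):
// only rows that intersect the viewport are rendered, so even a
// 100,000-item reverse only touches a few dozen DOM nodes.
function visibleSlice(items, scrollTop, rowHeight, viewportHeight) {
  const start = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight) + 1; // +1 for a partial row
  return items.slice(start, start + count);
}

const items = Array.from({ length: 100000 }, (_, i) => i);
const rows = visibleSlice(items, 3000, 20, 600); // 20px rows, 600px viewport
// rows holds 31 items regardless of the total list size
```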

As I said, it's very interesting to see work that tackles the problem from an
algorithmic complexity angle, and it's always healthy to explore different
performance characteristics, but I think it's important to also keep in
perspective the performance profile of solutions currently on the market,
because looking at their theoretical algorithmic complexity alone doesn't tell
the whole story.

------
gmmeyer
Rendering the initial list takes much, much longer with the change propagation
implementation than with the virtual dom one. Without benchmarking it, I'd say
the difference is about 2 seconds or more. Even if the virtual dom is slightly
slower at reacting to change, the initial render time of the change
propagation implementation is more than enough to make me not want to choose
it.

