People loved React because it solved a problem. It just happened to solve it using a virtual DOM.

But these next-generation frameworks are showing that there's a better way to solve the same problem.


VirtualDOM allowed you to update the DOM with better performance than before. Is there now a better technique? Why is it better than VirtualDOM?


This is a good talk on the matter from the creator of Svelte himself, if you're interested: https://www.youtube.com/watch?v=AdNJ3fydeao

Basically, it boils down to moving from tracking the DOM state in a VirtualDOM to tracking what DOM updates can happen at the compile stage and then just doing those exact updates to the DOM.
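
To make that concrete, here's a minimal sketch (names are mine, not any framework's actual API). The virtual DOM approach re-describes the whole view and diffs it; the compiled approach already knows which node depends on which value:

    // Virtual DOM: re-render a description of the whole view on every
    // state change, then diff old vs. new to find what actually changed.
    type VNode = { tag: string; children: (VNode | string)[] };

    function view(name: string): VNode {
      return { tag: "h1", children: ["Hello ", name] };
    }
    // the framework does: patch(diff(view(prevName), view(nextName)))

    // Compiled approach: the compiler saw in the template that only this
    // text node depends on `name`, so it emits the exact update instead.
    function updateName(nameNode: Text, name: string): void {
      nameNode.data = name; // no diffing, just the known mutation
    }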


I haven't seen proof that the output of these libraries is restricted in any meaningful way. Is there a case where some small input of size n yields n! code? Is there even any proof that all states can actually be tracked? In the case of a vDom, your code size is fixed and execution has a provable upper bound.

Even if that could be proven, code bloat is still a problem. With a vDom library, the render engine size is fixed. Moreover, those functions are guaranteed to run enough that they will be optimized by the JIT, while a different render function for every component could mean your renders are optimized for this view but back in interpreter land when rendering the next one.


Thanks!


My annoyance with that talk is that he never compared Svelte with multi-threaded React. It was the natural next thing to look at, but it was like “okay my point stands, let’s move on” haha.


There's no such thing as 'multi-threaded React'. JavaScript runs in a single thread unless you use web workers, which React doesn't (this was a popular idea a few years ago, but people have since come to accept that it adds overhead and complexity out of all proportion to the problem it's designed to solve).

You're probably thinking of Concurrent Mode, which the talk does indeed address. Concurrent Mode is, among other things, a clever way of solving one of the problems introduced by the virtual DOM paradigm: you have to rerun a lot of user code on every state change, which will often block the main thread if you don't break the work up into chunks.
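
The chunking idea itself is simple. Here's a hedged sketch of time slicing (not React's actual scheduler): do a small unit of work, check the clock, and yield back to the browser before blowing the frame budget:

    // Each unit of work returns the next unit, or null when finished.
    type Unit = () => Unit | null;

    function workLoop(next: Unit | null, budgetMs = 5): void {
      const deadline = performance.now() + budgetMs;
      while (next && performance.now() < deadline) {
        next = next(); // do one small piece of the render work
      }
      if (next) {
        // out of budget: yield to the browser, resume in the next slice
        setTimeout(() => workLoop(next, budgetMs), 0);
      }
    }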

More on virtual DOM here: https://svelte.dev/blog/virtual-dom-is-pure-overhead


Well, maybe we should compare apples to apples. With React you go down a rabbit hole of workarounds around its initial concepts (like hooks) until you wonder why you chose it in the first place.


why do you consider Hooks a "workaround around [React's] initial concepts"?


Truthfully, I think Hooks are genius. But they are definitely shoehorning something into a place that didn't expect it. You are calling these render functions with the purpose of creating new transformations every cycle, and the injection mechanism has to include the initialization the first time. So you basically have these slotted things that allocate memory every time just to use what's cached most of the time.
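
A stripped-down sketch of that slot mechanism (illustrative, not React's real internals): state lives in an array indexed by call order, initialized on the first render, read from the cache afterward, with fresh setter closures allocated every cycle:

    let slots: unknown[] = [];
    let cursor = 0;

    function useState<T>(initial: T): [T, (v: T) => void] {
      const i = cursor++;
      if (slots.length <= i) slots[i] = initial; // init on first render only
      // a fresh setter closure is allocated on every render cycle
      const set = (v: T) => { slots[i] = v; rerender(); };
      return [slots[i] as T, set];
    }

    function rerender(): void {
      cursor = 0;  // hooks must be called in the same order every time
      Counter();   // so each call lands back in the same slot
    }

    function Counter(): void {
      const [count, setCount] = useState(0);
      console.log("rendered with count", count);
    }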

Now don't get me wrong, I've benchmarked Hooks like crazy and I don't think they are much of a consideration for performance at all. It's just that the mental model, always updating on the outside with bubbles of things that don't update creating these closures, is a little unnatural. By comparison, the mechanism they ape (fine-grained reactivity) works exactly the opposite way. You just need to be aware that the stuff outside doesn't update. Wrap what needs to be updated. Done. There are no out-of-date closures. No inconsistent mental model. It's as straightforward as registering event handlers.
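
For contrast, a minimal signal/effect sketch of fine-grained reactivity (again illustrative, not Solid's actual implementation). The surrounding code runs once; only the wrapped computations re-run when a signal they read changes:

    type Fx = () => void;
    let currentFx: Fx | null = null;

    function createSignal<T>(value: T): [() => T, (v: T) => void] {
      const subs = new Set<Fx>();
      const read = () => {
        if (currentFx) subs.add(currentFx); // track whoever reads us
        return value;
      };
      const write = (v: T) => {
        value = v;
        subs.forEach((fx) => fx()); // re-run only dependent effects
      };
      return [read, write];
    }

    function createEffect(fx: Fx): void {
      currentFx = fx;
      fx(); // first run registers its dependencies
      currentFx = null;
    }

    // this outer code runs exactly once; only the effect re-runs
    const [count, setCount] = createSignal(0);
    createEffect(() => console.log("count is", count()));
    setCount(1); // triggers the effect, nothing else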

Still I have to admit the solution is genius. They have managed to get 90% of the benefits with simply repositioning things. At one point this was one of my biggest arguments against React. People didn't appreciate it before hooks. But can you imagine knowing you could write applications this way years ago and trying to convince someone using React classes there was a better way? I just sort of gave up and did my own thing.


thanks for the explanation! though i have to say i only have a vague idea of "fine-grained reactivity", any pointers for reading about this or systems that work this way?

in particular, i don't quite get this bit:

> By comparison the mechanism they ape (fine grained reactivity) works exactly the opposite way. You just need to be aware the stuff outside doesn't update.

inside/outside of what?

PS. my yet undeveloped pet theory is that hooks are (somehow) something like half a monad/algebraic-effect-thingy... though they're probably too tied up with the render cycle to analyze them this way


I have the article for you: https://indepth.dev/finding-fine-grained-reactive-programmin... It's a bit dense at times but I try my best to cover the whole spectrum and how it relates to familiar libraries.


Burning cycles in another thread is just as wasteful, and problematic when you run on a battery, or when you run other stuff in the background.


Sort of. I don't think you can write off the virtual DOM. But this is basically my area of research. I'm saying that a specific approach to reactive programming is more performant. I have an article for that too: https://medium.com/better-programming/the-fastest-way-to-ren...


Thanks. The whirlwind of changing benefits of various frameworks can be rather confusing. I do wonder how much this matters for most apps and what cost of complexity it brings. But advancements in the field are an obvious good thing.


From my perspective, the things are a bit like this:

1. Make all updates manually with jQuery. This is fast, but hard to keep track of.

2. React: create a virtual DOM, compare it with the previous virtual DOM, and figure out what needs to change in the real one.

3. Solid, Svelte: don’t create a virtual DOM, but have the compiler work out all the possible changes ahead of time so you can make them directly in the DOM, like with jQuery.
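
Roughly, a compiler in camp 3 might turn a counter template into something like this (illustrative pseudo-output, not what Svelte or Solid actually emit):

    function mountCounter(target: HTMLElement): void {
      // created once from the template
      const button = document.createElement("button");
      const text = document.createTextNode("0");
      button.appendChild(text);
      target.appendChild(button);

      let count = 0;
      button.addEventListener("click", () => {
        count += 1;
        text.data = String(count); // the one update the compiler derived
      });
    }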


If you’re “compiling all possible changes” that’s a diff, right? If there’s no VD, what’s being diffed?


the question is not what, but when


Exactly. Think of it as the classic compiled vs. interpreted languages debate. One is clearly faster than the other, but the slower one may have different advantages.


Ahead-of-time vs Just-in-time is a more accurate description.

In this case, there's no evidence that precompiling is faster in theory -- let alone in practice. In the JS framework benchmark suite, SolidJS and InfernoJS performance is almost identical (with SolidJS having a much larger margin of error in most tests).

This is the BEST possible case for precompiling too. In the real world, JITs take a long time to warm up (a couple hundred executions before all the optimizations kick in). With the vDom, you warm up ONE set of code that then runs forever. With the precompile, it has to warm up for EVERY new component and potentially slightly different codepaths within the same component.

The JS framework benchmark reuses the same components for everything which is a huge advantage to precompiled frameworks while not having much impact on vDom ones (as the actual components in both cases usually won't optimize very much due to being polymorphic).
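
To illustrate the shape of that argument (hypothetical names): a vDom engine funnels every component through one shared routine, while a precompiling framework emits a fresh function per component, each of which starts cold in the interpreter:

    type VNode = { tag: string; text: string };

    // vDom: one patch function shared by every component in the app, so
    // the JIT warms up this single hot path once and reuses it forever.
    function patch(el: HTMLElement, prev: VNode, next: VNode): void {
      if (prev.text !== next.text) el.textContent = next.text;
    }

    // Precompiled: a distinct update function per component, each needing
    // its own few hundred executions before the optimizations kick in.
    function updateHeader(el: HTMLElement, title: string): void {
      el.textContent = title;
    }
    function updateRow(el: HTMLElement, label: string): void {
      el.textContent = label;
    }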


> In this case, there's no evidence that precompiling is faster in theory -- let alone in practice.

It’s absolutely not, because it requires far greater computational effort. The benefit has nothing to do with performance but instead with simplified state management.

I know people desire certain frameworks due to how they perform state management. I have never really understood that motivation myself, though, because managing state is incredibly simple. Here is a basic outline of how simple it is (a sketch in code follows the list):

1) realize there are exactly two facets to every component: data, interface.

2) all components should be stored in common locations. A single object stores component data and a common DOM node for storing component interfaces.

3) pick a component facet to update, either data or interface, and never update the other. The other should be automatically updated by your application logic (reflection).

4) store your data on each change. That could be dumping it into local storage. In my current app I send the data to a local Node instance to write into a file so that state can be shared across browsers in real time.

5) be able to restore state. On a fresh page, or even page refresh, simply grab the stored data and rebuild the component interfaces from it.
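
Here's the minimal sketch promised above (all names are mine): one object holds the data, one node holds the interfaces, updates flow from data to DOM, and state round-trips through localStorage:

    interface Note { id: string; text: string }

    const data: Record<string, Note> = {};        // 2) one object for data
    const root = document.getElementById("app")!; // 2) one node for interfaces

    function render(): void {                     // 3) interface reflects data
      root.innerHTML = "";
      for (const note of Object.values(data)) {
        const div = document.createElement("div");
        div.textContent = note.text;
        root.appendChild(div);
      }
    }

    function setNote(note: Note): void {          // 3) update data, never the DOM
      data[note.id] = note;
      render();                                   //    reflection to interface
      localStorage.setItem("state", JSON.stringify(data)); // 4) store on change
    }

    function restore(): void {                    // 5) rebuild from stored data
      Object.assign(data, JSON.parse(localStorage.getItem("state") ?? "{}"));
      render();
    }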

My current application is a peer-to-peer, Windows-like GUI that works in the browser and exposes the file system (local device and remote devices) in Windows Explorer-like interfaces. Managing state is the least challenging part of this. The slowest-executing parts are long polling operations against large file system trees (it’s about as slow in the native OS interface).


> In the JS framework benchmark suite, SolidJS and InfernoJS performance is almost identical (with SolidJS having a much larger margin of error in most tests).

I mean, I agree with most of your post, but I'm not sure I would necessarily make that highlighted claim from the benchmark results. The +- seems to be pretty run-dependent for most libraries on there. And while I agree that the difference in performance is negligible, there is one. Solid is clearly faster in most tests, even if by a small amount. Anyone interested can look at: https://krausest.github.io/js-framework-benchmark/current.ht... Then isolate Solid and Inferno and compare one against the other. It will color-highlight the degree of certainty that the difference between the libraries is significant.


VirtualDOM allows you to roughly code your UI as a function of state. It's never been a magic bullet for performance, except perhaps compared to Angular.js 1.x's "change detection" methods.
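
"UI as a function of state" in its simplest form (illustrative):

    type State = { user: string; unread: number };

    // the whole view is derived from state; the framework's only job is to
    // reconcile the output of this function with the real DOM efficiently
    const view = (s: State): string =>
      `<h1>Hi ${s.user}</h1><p>${s.unread} unread messages</p>`;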


Perhaps it’s faster than other framework implementations, but there is no way it’s faster than the standard DOM methods.

https://stackoverflow.com/questions/21109361/why-is-reacts-c...
