I like the article, but it gets some things subtly wrong.
> To grossly oversimplify things: React assumes that your entire virtual DOM tree needs to be rebuilt from scratch, and the only way to prevent these updates is to implement useMemo
Not quite: on a state update, it rebuilds the component that was updated and all of its children, not the entire virtual DOM. Old versions of Angular did rebuild everything, but it was wasteful.
useMemo doesn't prevent that, but React.memo can (useMemo has a different role; it lets you choose when to recompute or recreate a normal JavaScript object. But on its own it won't stop rerendering of child components!) [0]
This invalidates some of their assumptions. The reason why React isn't "push-only" isn't because it does that, it's because it sometimes buffers updates instead of always pushing them immediately. In fact, other frameworks like ~~Svelte also aren't "push-only" and hence not strictly reactive~~! [edit: this is no longer true after Svelte v5, see discussion below] (Funnily enough, OP uses an article as a source that explains this correctly [1], but it seems they took the wrong lesson from it).
The reason why signals are so cool is because the framework knows for any given state change which exact attributes in the DOM need to be re-rendered, even more specifically than "the element and all its children". But this neither implies reactivity nor the other way around. The two concepts are orthogonal.
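To make the fine-grained idea concrete, a minimal signal/effect sketch might look like this (the names and API are invented for illustration, not taken from any particular framework):

    let currentEffect = null;

    function signal(initial) {
      let value = initial;
      const subscribers = new Set();
      return {
        get() {
          if (currentEffect) subscribers.add(currentEffect); // track the reader
          return value;
        },
        set(next) {
          value = next;
          subscribers.forEach((fn) => fn()); // re-run only the readers
        },
      };
    }

    function effect(fn) {
      currentEffect = fn;
      fn(); // the first run registers which signals were read
      currentEffect = null;
    }

    // Only this one text node is touched when count changes; no tree diffing.
    const count = signal(0);
    const label = document.createElement('span');
    effect(() => { label.textContent = `Count: ${count.get()}`; });
    count.set(1);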
Anyways, kudos to the author for diving into this so deeply!
[0] useMemo is useful in combination with React.memo sometimes, as the latter compares objects shallowly/by reference instead of their contents, so useMemo can be used to only recreate shallow references if its contents changed. You could probably also reimplement React.memo with useMemo, but you probably shouldn't.
[1] https://dev.to/this-is-learning/how-react-isn-t-reactive-and...
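To make the combination in [0] concrete, here is a rough sketch (the Chart/Dashboard components and their props are invented for illustration):

    import { memo, useMemo, useState } from 'react';

    // Re-renders only when its props change (shallow/reference comparison).
    const Chart = memo(function Chart({ options }) {
      return <div>{options.title}</div>; // imagine something expensive here
    });

    function Dashboard({ title }) {
      const [count, setCount] = useState(0);
      // Without useMemo, `options` would be a fresh object on every render,
      // so memo()'s shallow comparison would never bail out.
      const options = useMemo(() => ({ title }), [title]);
      return (
        <>
          <button onClick={() => setCount((c) => c + 1)}>Clicked {count}</button>
          <Chart options={options} />
        </>
      );
    }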
I did indeed mix up `useMemo` and `React.memo` – fixed it in the post.
You're right, I am skipping a lot of details (hence "to grossly oversimplify"). I know that React doesn't invalidate the whole tree, but it does in the worst case. Maybe I should add a note about that.
Svelte not being truly reactive makes perfect sense, but my understanding is that in Svelte v5, "runes mode" makes it truly reactive. This is what I mean by "moving in that direction."
> AFAIK there's no magic to React.memo. It's basically a shorthand for useMemo that takes the props as the dependency.
Pedantic note: this isn't quite true. memo() also allows a second `arePropsEqual` argument that useMemo doesn't have. Also, memo() compares individual prop values, while useMemo() can only look at the whole props object (which would be "fresh" on every render -- it's a different object, even if it has the same values). So it's not like you can easily reimplement memo() via useMemo(). But of course, conceptually they are pretty close :)
> “Also, memo() compares individual prop values, while useMemo() can only look at the whole props object”
Passing “Object.values(childProps)” as the dependency array for useMemo should do the same thing.
But yeah, there are good reasons to use React.memo for convenience with props. It's not fundamentally different though, and you can definitely use useMemo() for caching components when more convenient.
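A rough sketch of that, assuming `childProps` always has the same set of keys (React requires the dependency array to keep a constant length), with `ExpensiveChild` as a stand-in component:

    import { useMemo } from 'react';

    function ExpensiveChild(props) {
      return <pre>{JSON.stringify(props)}</pre>; // stand-in for something costly
    }

    function Parent({ childProps }) {
      // Recreate the element only when one of the prop values changes.
      const child = useMemo(
        () => <ExpensiveChild {...childProps} />,
        Object.values(childProps) // same keys in the same order on every render
      );
      return <section>{child}</section>;
    }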
> You can definitely use useMemo with JSX elements to prevent child components from being re-rendered too often
Only if those child components are memoized. By default, whenever the state of a component changes, React will rerender the entire subtree. The only time it doesn't is when a child component is memoized (React.memo) AND the props haven't changed. Utilizing useMemo and useCallback is how we prevent non-primitive props from being recreated unnecessarily
React actually has a little-known "same element reference" optimization. If your component returns the exact same JSX element reference in the same spot in consecutive renders, React will bail out of rendering that subtree, regardless of whether or not the child component is wrapped in `React.memo()`. This allows the parent component to control the behavior. So yes, `useMemo` would be how you do that:
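Something like this minimal sketch (`ExpensiveChild` is a made-up component, and this isn't the docs' exact example):

    import { useMemo, useState } from 'react';

    function ExpensiveChild({ label }) {
      return <p>{label}</p>; // imagine this being costly to render
    }

    function Parent({ label }) {
      const [count, setCount] = useState(0);
      // The element is recreated only when `label` changes. On count-only
      // updates React receives the exact same element reference and skips
      // re-rendering ExpensiveChild, with no React.memo() involved.
      const child = useMemo(() => <ExpensiveChild label={label} />, [label]);
      return (
        <div>
          <button onClick={() => setCount((c) => c + 1)}>Clicked {count}</button>
          {child}
        </div>
      );
    }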
I see what you mean. I'm a little shocked the docs have an example of it being used in this way. Using `useMemo` like this is generally considered a bad practice and a "hack". The new version of the React docs does not have an example of useMemo being (mis)used in this way.
I really love Svelte. The compiler is great and very extensible. For example, you can easily add functions to the processing pipeline to process Svelte templates (or the script elements or style sections) in your own special way. It's a fantastic way to build JavaScript frameworks. Svelte people always note Svelte isn't a framework, so this isn't a framework on top of another framework!
I used this to build Svekyll, a Jekyll clone (the original static blog tool).
Not to toot my own horn, but I'm really proud of it. Svekyll scores all 100s with Lighthouse but still has all the cool things you get from a Svelte app. It's a true single-page app, all JS is inlined, and it can be put on any web server for hosting. Plus a bunch of other cool things that are only possible with a native JS blog.
I also like svelte but as a compiler-y person who happens to be doing some JS I can't work out why I'm annotating so much stuff by hand (even with svelte 5). I can get why you might want this for react, but isn't svelte a compiler? Can't we do dataflow analysis?
I'm 50% convinced there's a Chesterton's fence I'm missing, but where?
You can only do so many compiler-y things in a dynamically typed language like JavaScript. And even fewer of them within the context of a single file, unless you implement a bundler (the web jargon equivalent of a linker). Otherwise, by the time the run-of-the-mill bundler is done connecting parts of your component tree into a single file, so much of the high-level information about the component is lost that you can't really do much analysis on it.
At least those were the reasons I saw when I was thinking about the same thing. If anyone has ideas around it, please get in touch.
It sounds like you would appreciate Vue3 and its <script setup> mode; the compiler does a lot of ref() and data-flow analysis to make the code for components very readable.
I love svelte too but I've found it hard to integrate it in different places where I may run js (e.g. obsidian plugins, browser plugins).
It was hard to configure the necessary tooling, but maybe that's not Svelte's fault. I think Svelte would really benefit from better documentation on how to do this (and I don't mean SvelteKit documentation).
It was also at least non-trivial to use with TypeScript, and even more non-trivial to have some third-party dependencies/components that don't use TypeScript. Again, maybe not Svelte's fault and hard in every framework, but those were my show-stopping issues the last time I tried to build something real in Svelte.
Interesting. I think you should ask some questions in the Svelte Discord channel. My opinion is that Svelte is the easiest to integrate because it doesn't require the runtime that React does, for example. I've got Svelte running inside lots of non-standard places, like browser extensions (https://addons.mozilla.org/en-US/firefox/addon/please-at-me/). It can take a few minutes to figure out how to get it built correctly for the context, but it has never blocked me.
I don't disagree about your TypeScript point. The Svelte community seems to have aligned around TypeScript being a challenge, and I do like their assertion that JSDoc + ESLint is a better approach.
Until Svelte has a built-in client-side router and treats SPAs as first-class citizens (the way Vue.js does so far; React has also shifted to a mixed SSR/SPA situation just like Svelte, and both are influenced by Vercel, which is really sad), instead of just focusing on SvelteKit's SSR-first approach, I have zero interest in it. Yes, I know I can customize SvelteKit to do SPA, but it's very ugly, and I don't need all that SSR mental load on top of an already complex frontend world.
Both Svelte and React are shifting to SSR-first, which is what Vercel can make money with. I read somewhere that Vercel now has many React core members, and it has hired Svelte's creator as well.
Svekyll was originally built on SvelteKit, and I couldn't figure out how to get SSR working or find the right adapter for Node.js and static hosting. That's why I abandoned it, and I'm much happier with a Svelte-only CLI, just like Jekyll.
Or, rather than being some great conspiracy, SSR (or as we used to call it, just rendering) makes a great deal of sense.
What goes around comes around. It has always felt like the front-end frameworks, from Backbone to React, failed to learn the lessons of history, preoccupied with being "new" in an area that's rapidly gaining capabilities via the browser.
Re-inventing the wheel isn't difficult. Improving it is.
Someone some day should write an article about frameworks with examples that show their advantages.
I just try different things in vanilla JS depending on the load. In terms of speed, nothing can beat just serving a page rendered on the server. If you really need dynamic updates, replacing DOM nodes is good up to some very limited number; virtual DOM and cloned nodes increase this number by a tiny, irrelevant amount. If you need to update more than 100 nodes, from what I've tested, nothing beats replacing the parent node's content with an HTML string. Sometimes inline onclick="" handlers are great compared to creating 1000 listeners one by one. Sometimes you put the listener on the parent and figure out what was clicked when it happens. I've even had cases where iframes are wonderful. At times I also put some or many hidden nodes in the HTML document and display them when needed.
Writing this, I'm curious what the performance is for <output>....
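For example, the delegation-plus-bulk-replace approach might look roughly like this, assuming an existing `<ul id="list">` element and a made-up data shape:

    const list = document.getElementById('list');

    // One click listener on the parent instead of one per row.
    list.addEventListener('click', (event) => {
      const row = event.target.closest('li[data-id]');
      if (row) console.log('clicked row', row.dataset.id);
    });

    // Replace the whole subtree with an HTML string instead of patching nodes.
    function render(items) {
      list.innerHTML = items
        .map((item) => `<li data-id="${item.id}">${item.label}</li>`) // escape real data
        .join('');
    }

    render([{ id: 1, label: 'first' }, { id: 2, label: 'second' }]);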
True. The drive to SSR is caused by the rise of battery-constrained mobile devices with unreliable wireless connections, along with the rise of cloud computing.
I recently used a Windows 10 machine after years and I suggest you try it. It could not be more evident that Microsoft is fully betting on Web and Cloud. Windows is increasingly a thin client. Even the mail app isn’t an actual mail client now.
Recently, I wrote a web renderer/framework using solid-js reactivity to understand how a reactive renderer works. The docs website has been written with it, mainly to test the library. https://pota.quack.uy/ . Source code: https://github.com/potaorg/pota
You need a class called ScreenManager, one called Screen, and one called Component. Make Screen and Component able to load an htm file, then hook its tags into the DOM and do whatever you want. Update a counter? Write that in your component. Some static networking class in the background either long-polls or gets pushed new data, and dispatches update events that any component can listen to. Each component can make its own calls and update its own data instantly when a user interacts with it.
No abstraction, no nonstandard HTML tags, no <template>, no Proxy, no master class trying to figure out what part of the DOM should or shouldn't be redrawn based on inbound data. Every component should be autonomous, every screen should be able to destroy or resurrect its own components. If you need a central data cache, put that on the ping and let every component deal with it on the event firing.
[edit] I've built and maintained two frameworks, one for websites and one for single page apps, rewritten and improved over 20 years, originally in PHP, now in Nodejs. The main guiding principle for me has always been decoupling design from code.
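A bare-bones sketch of that shape, with the details the comment leaves open (the `/poll` endpoint, the event name, and the method bodies) filled in as guesses:

    class Component {
      constructor(root) { this.root = root; }
      async load(url) {
        // Fetch an .htm fragment and hook its tags into the DOM.
        this.root.innerHTML = await (await fetch(url)).text();
        this.root.addEventListener('click', (e) => this.onClick(e));
        document.addEventListener('data', (e) => this.onData(e.detail));
      }
      onClick() {}
      onData() {} // each component decides on its own how to use pushed data
      destroy() { this.root.innerHTML = ''; }
    }

    class Screen {
      constructor(root) { this.root = root; this.components = []; }
      add(component) { this.components.push(component); return component; }
      destroy() { this.components.forEach((c) => c.destroy()); }
    }

    class ScreenManager {
      show(screen) {
        if (this.current) this.current.destroy(); // screens tear down their components
        this.current = screen;
      }
    }

    // The static networking part: long-poll and dispatch update events.
    setInterval(async () => {
      const data = await (await fetch('/poll')).json();
      document.dispatchEvent(new CustomEvent('data', { detail: data }));
    }, 5000);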
Sounds like you implemented MVC. Model (=Component) handles its own state. Screen (=View) subscribes to state changes in the model. ScreenManager (=Controller) glues it all together.
It probably works just fine, but it gets cumbersome if you want to know exactly where a piece of state is managed, or when the order of event processing is important for some reason.
If anyone is interested in this topic, I would recommend starting from the fundamentals; that provides some answers as to why some "not so modern" frameworks aren't jumping on the "signals" hype train.
Vue.js is and isn't. It did ""fine grained reactivity getter setter proxy something"" before it was cool [0].
At this point I can't stop myself from pointing out that the underlying reactivity/diffing system is rarely what makes an application slow. I've heard the creator of XState and Stately [1] say that React's vdom is not fast enough for updating edges in their state chart in real time without lots of optimisations, and I believe him. It's just that most people don't encounter such issues and instead spend their time adding a dozen tracking scripts that run before the actual application does.
I've stopped paying close attention to the web framework scene in the past couple of years, as most of the interesting ideas on this topic usually come from different communities. But as I understand it, the majority of popular web frameworks (React, Vue3, Angular) are still using tree diffing or hybrid "signals" + tree-diffing strategies.
In my opinion, one of the most interesting ideas to explore in this problem space is a hybrid solution: differential dataflow[1][2](model) + self-adjusting computations(view-model + view).
Or maybe we could just ditch React and everything like it? Not everyone uses it now, and many large websites are built in simpler ways, without 'modern JavaScript'.
Is all this pain and complexity really worth the gain? In practice it often leads to a slower experience for users due to the massive globs of JS.
In a nutshell: we had a 7 year old server side rendered page on our site that shows a chat between two users that was more like email. So you have to refresh to see new messages. We added 2 lines of code to the HTML(!!) and now the page is fully multiplayer so messages come in directly when the other user sends one and you see other things update as they are changed in the database, etc.
2 lines of fucking HTML.
Sure turbo is built in JavaScript, but I have to spend zero time writing any and I get all the benefits in a 7 year old server side rendered page.
Just long enough and without a hooks style revolution so I can cruise for a bit. Build some stuff without keeping up. Like a carpenter. Tell the LLM people to slow it down too.
Coincidence or not, since the tech winter started there have been fewer JS modules and frameworks released. As it should be. Hope React manages to somehow die in the process.
I recently did this because none of them are exactly what I wanted. I really like the idea of reactive proxies and pushing changes. Things get trickier when you try to address mutable arrays and other scenarios.
There's no diff. Arrays get proxies just like everything else.
For mutations that change the length, extra elements are built or removed at the end. Assignment straight to an index (that doesn't change length) is able to use normal object property semantics.
arr.push(e) and arr[i] = e are both O(1). But not all array operations achieve this.
I'm not too familiar with the state of the art, but I assumed this had been done before.
I have some vague ideas about implementing efficient .splice() calls. Right now knowledge about splicing semantics is lost in between the proxy layer and `ForEach` DOM node array. But I have a bunch of other plans to do first.
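For reference, the array case can be sketched roughly like this; the `onChange` callback is a stand-in for whatever subscription mechanism the framework actually uses:

    function reactiveArray(target, onChange) {
      return new Proxy(target, {
        set(arr, key, value) {
          arr[key] = value; // covers arr[i] = e as well as arr.length = n
          // push(e) lands here twice: once for the new index, once for length,
          // so appending stays O(1).
          onChange(key, value);
          return true;
        },
        deleteProperty(arr, key) {
          delete arr[key];
          onChange(key, undefined);
          return true;
        },
      });
    }

    const items = reactiveArray([], (key, value) => console.log('changed', key, value));
    items.push('a'); // logs: changed 0 a, then changed length 1
    items[0] = 'b';  // logs: changed 0 b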
I haven't really kept up with frontend developments. My experience was with Knockout which I enjoyed and I did some hacky React more recently for devops-pipeline.com
I am curious to find the sweet spot of maintainability + performance.
Is the problem usually latency? If you load too many items into a grid view you get performance problems.
Maybe this problem is solved already - Facebook solved it -
I would like to be able to create a "weak iterator" that handles forward and back navigation of collections with extremely fast rendering when moved backwards or forwards.
Computers are fast. I like the ideas of immediate mode, but they burn CPU.
When they say Svelte “compiles” your code into Javascript, doesn’t it still mean it has to ship some common code alongside your transformed code? Isn’t that still the core of a Framework?
Question to the folks with a lot of frontend framework experience:
Is there a framework/library that supports the usage of an effect-system when it comes to rendering actions?
For instance, in React, a component (or rather its render function) has to return the element(s) directly. Is there a framework where the render function accepts something effect-like or promise-like instead, even if that means that the rendering might potentially be delayed?
React now supports this using Suspense boundaries [0]. Some frameworks (eg. NextJS) already ship variants of it but here is some code you could use in React's development version:
    import { Suspense, use } from 'react';

    function MyComponent() {
      const promise = ...; // a promise created outside render (or cached)
      const result = use(promise); // use is like await
      // do something with result
    }

    function Wrapper() {
      return (
        <Suspense fallback={<Loading />}>
          <MyComponent />
        </Suspense>
      );
    }
You don't need to know this to use it, but the implementation is both interesting and horrifying: The `use` hook checks if the Promise has resolved, and if not, it throws an error that is caught by the Suspense boundary. And because there is no property like `hasResolved` on JS promises, `use` adds one itself. (At least this was how an early draft proposed it, a lot of changes have been done since, and my knowledge might be out of date.)
Thank you, this is exactly what I meant! Awesome that this is already being worked on in react.
Yeah, the implementation under the hood seems a bit crazy. Reminds me of how I found out that Angular used to call toString on functions to get the parameter names for dependency injection.
But honestly, if I can use it without problems as a user, that's what I care about most, even if the backend developer in me is horrified by it, haha. But I guess that is mostly JavaScript's fault after all.
The way Suspense works is actually even a bit more interesting/horrifying: it doesn’t throw an error, it throws a Promise. Which again you don’t need to know to use it, and so it’s a valid implementation detail, but it’s a really odd one.
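A toy illustration of the mechanism, nothing like React's real code, but it shows why throwing a Promise (or anything thenable) works:

    // A stripped-down `use`: remember the promise's state on the promise itself
    // and throw the promise while it is still pending.
    function use(promise) {
      if (promise.status === 'fulfilled') return promise.value;
      if (promise.status === 'rejected') throw promise.reason;
      if (promise.status !== 'pending') {
        promise.status = 'pending';
        promise.then(
          (value) => { promise.status = 'fulfilled'; promise.value = value; },
          (reason) => { promise.status = 'rejected'; promise.reason = reason; }
        );
      }
      throw promise; // a Suspense-like boundary catches this, not an Error
    }

    // A toy boundary: show the fallback, retry the render once the promise settles.
    function boundary(render, fallback) {
      try {
        return render();
      } catch (thrown) {
        if (typeof thrown?.then !== 'function') throw thrown; // a real error
        thrown.then(() => console.log('settled, re-render:', render()));
        return fallback;
      }
    }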
It's not meant for actual use, but it was an interesting experience that enables a lot of new patterns by using generators.
I don't claim that it is better than other frameworks though; there are a lot of cases where this pattern is significantly more cumbersome than just using React.
Very cool! I especially liked your example of elements that have "finished". Funnily enough, this was one of my major problems when I built [visakami](www.visakami.com), which essentially is just a survey on steroids.
I think you should join the react developer's team. ;-)
One of the obvious differences is that I'm still using TSX, but it is very different from React, it just looks a lot like React at first glance.
Also, because I was doing it across at least a couple of projects, I started it from the beginning as its own small framework and have been trying to document it: https://github.com/WorldMaker/butterfloat/tree/main
It's still very much in early "prerelease" stages, but feedback is welcome.
Yeah, these types of articles always have a hypester tone with little substance, unfortunately. RiotJS is great, but ember.js's API/tools are even greater; I highly suggest learning ember.js if you value good APIs, testing & a good community ;)
I can't tell if this comment is tongue in cheek or not.
The walkthrough on how to build a JS framework from scratch is "hypester" because in the short section that mentions existing frameworks, your pet frameworks weren't included?
> Yeah, these types of articles always have a hypester tone with little substance, unfortunately.
Oh come on! The article mainly mentions the specific frameworks it does to qualify and contextualize the set of features it discusses, which it goes on to implement. The article is excellent, and this kind of reflexive dismissal is so tiresome.
The article doesn’t state what problem those ‘modern’ web frameworks are trying to solve. It’s long been known that you can make a faster framework at the cost of a less ergonomic API and a more complicated mental model, but in most cases it’s not worth it. And when it is worth it, React has had the tools to ‘eject’ a subtree from the very start.
This is a good article, but I've noticed that the adjective "modern" is used disproportionately more in the world of JavaScript, compared to other tech stacks. Is it more performant, more maintainable, faster to develop, compatible with more devices/platforms? If not, what is the advantage of being modern? That said, I'm a fan of JavaScript and have been writing it since 2001, and many of the backends I write are Node.
It's a fair criticism. React has been around for ten years now, which by JavaScript standards makes it ancient. IMHO, the JavaScript community has been chasing its tail a bit for most of that time, in the sense that there have been very few real innovations. People keep reinventing the same wheels, but mostly the same things that were a problem ten years ago are still problems: managing state in a sane way, preventing performance issues related to state changes, and keeping code bases maintainable. I'm not sure "modern" is a word I'd slap on the notion of not quite having figured those things out yet.
I have high hopes for some disruptions arriving on the web via WASM. There are some interesting things happening in that space that are increasingly less about JavaScript, DOM trees, CSS, and all the limitations that come with them, and more about leveling the playing field with mobile, where UIs are more competitive and non-JavaScript frameworks seem to be preferred over the poor man's choice (aka web-based).
I'm saying that as somebody actually pushing web based on mobile (we are about to release a PWA). Just acknowledging the reality that web-based is still considered a huge compromise on ux, performance, and capabilities on mobile. Good enough is the best you can say about it. Some of those mobile frameworks (flutter, compose web, and others) are now coming to the web via wasm. IMHO, there are a lot more interesting things that could be done in that space in the next years.
Author here! I struggled with the word "modern" – I could have said "current gen" or "post-React" or even "Solid-inspired" frankly, but I thought "modern" was succinct with the right amount of punchiness.
Obviously a lot of these techniques are pretty novel, and maybe they won't stand the test of time. Or maybe a new browser standard will make them obsolete eventually. But for now these seem to be the current wave anyway.
Perhaps you could have said that it is built using more modern JS APIs - template literals instead of string manipulation, Proxy(), queueMicrotask etc.
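For example, the Proxy-plus-queueMicrotask combination could be sketched roughly like this, with the state shape and render function as placeholders:

    function observable(state, render) {
      let scheduled = false;
      return new Proxy(state, {
        set(target, key, value) {
          target[key] = value;
          if (!scheduled) {
            scheduled = true;
            // Coalesce all writes in the current tick into a single render.
            queueMicrotask(() => { scheduled = false; render(target); });
          }
          return true;
        },
      });
    }

    const state = observable({ count: 0, name: '' }, (s) => console.log('render', s));
    state.count = 1;
    state.name = 'hi'; // two writes, but render runs only once, after this tick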
My observation is that there is a lot more cargo-culting / trendchasing in the JS world, largely due to its low barrier to entry. The overuse of "modern" is a dogma more than anything.
(I mainly work on native desktop apps, but can use JS if needed.)