I have very similar feelings to the author. I think it was back in 2019ish that I gave up on React & co for good. Since then I have built a number of web apps on a relatively spartan stack of Actix, Tera and HTMX, and they have all stood the test of maintainability over time and have a core group of diehard users.
Last week I shipped another new web app[1]: from idea to closed alpha testers in <24h, and to a public beta in <7d, with no issues or fanfare.
Perhaps part of the increased velocity is my having gone deep and learned the ins and outs of this stack over multiple years and projects, but it is important to note that I probably would never have gained this level of mastery over these tools if I were constantly being hit with what the author calls "dependency management fatigue".
I recently shipped a project [0] with a very similar tech stack: Axum + HTMX + Askama. It’s still in beta, but it’s incredible how far you can get with just HTML, CSS and less than 20 lines of JS. Also, you can’t beat the beauty of putting a single binary in a systemd service and having your entire frontend + backend ready to go.
The only downside that I see is that the binary gets big pretty fast, especially with Askama, which is basically a templating engine like Tera but compiles the templates into the binary, so you don’t have to copy templates around on your server.
I have not worked with SSR a lot before but it seems it’s harder to cache pages too.
I'd argue a fat binary isn't a major issue unless you're on a lambda or edge computing platform, especially if the application is fairly stable / does not get many re-releases.
I'd say caching can go into a different layer entirely.
Yeah, at the moment I'm at a 25MB binary and I see no performance issues whatsoever. I'm curious about caching, do you have any tips or resources on that?
I (a novice) have been toying with web apps for a little bit now with Axum, Tera/Askama, and HTMX. I'm doing something very similar. I find that it works well for my simple apps but then complexity gets tough when I start to have a lot of template logic. This is when I realize that frontend web dev is actually full of interesting complexity...
Obviously managing complexity in templates is something I'm sure people solved years and years ago, but of course now, with 99% of frontend resources (blog posts, video content, courses, etc.) being SPA-focused, resources on doing it "the old school way" are scant.
Do you know of any resources for designing web-apps of intermediate complexity with SSR templates? Or even just good resources from the HTMX + SSR camp, aside from the HTMX fellas?
My longest running web app[1] is also my most complex. Although I do use HTMX, I use it very sparingly.[2]
90%+ of what you need for a web app that you're building as a one-person show can be handled by storing state in the URL - people these days can be quick to forget that this is how large and complex web apps like GitHub have run for many, many years.
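As a minimal sketch of the idea (TypeScript, names hypothetical, not from any particular app): a list view whose filter state lives entirely in the query string instead of in a state library.

    // Hypothetical illustration: the URL is the state store.
    function readFilters(): { status: string; page: number } {
      const params = new URLSearchParams(window.location.search);
      return {
        status: params.get("status") ?? "open",
        page: Number(params.get("page") ?? "1"),
      };
    }

    function applyFilters(next: { status: string; page: number }): void {
      const params = new URLSearchParams({
        status: next.status,
        page: String(next.page),
      });
      // Plain links or GET forms work just as well for server-rendered pages;
      // replaceState just avoids a full reload when you want to stay client-side.
      window.history.replaceState(null, "", `?${params.toString()}`);
    }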
With regards to complexity in templates, I find that you can cut down on a lot of that complexity if you enforce type constraints in your templating engine's context object.
Getting in a habit of doing this early gets you thinking about ways to shift things that require conditional logic to the pre-templating stage, and more often than not it leads you to breaking down bigger templates into smaller partials and using them to compose different template variations for different views. I like this and it makes a lot of sense to me, but I'm not sure how it would feel to people who have only ever written JS SPA web apps.
i think your approach is orthogonal to SPA. first of all, it is my understanding that react in particular favors small templates or partials. and second, with the traditional server side frameworks i have always used full page templates, and i tend to write SPA the same way. i would only break up a template if it contains self-contained sections that could potentially be used in different places. my point is that this is not because all i ever did was SPA, but because i did things like that already for years before that.
Recently I've also been using Axum and HTMX but instead of writing templates I've been using Maud to write the dynamic bits inside the handlers themselves. Then I just serve a static .html file containing everything that can be static with HTMX requests to fetch the rest.
The downside is Maud isn't proper HTML but the benefit is I can use normal Rust to format my variables, etc into whatever string format I need rather than deal with a constrained templating language. It feels like writing an API that happens to serve HTML instead of JSON.
I debate using Askama to avoid the extra requests but there's something nice about just serving static HTML files.
How have you managed to find work consistently? It seems like everyone is looking for 3-5 years of React specifically as a minimum on most job postings near me. I would love to get away from React; it is really not my favorite to work with.
I was laid off just before Thanksgiving 2023[1] from a Principal Engineer role on a Platform Engineering team. I had been employed there since 2018.
I was very lucky to get referrals from a few folks that I social dance with in Seattle after that layoff, and I ended up with a job offer in a product area I knew very little about (networking, routing, programmable packet processing middleware with eBPF etc.)
Despite my lack of domain knowledge, I was told I received the offer largely because of my demonstrated proficiency in Rust (I did all the interview whiteboarding sessions in Rust). And although this isn't something that is measured "officially" in the interview process, there are many hours of me live programming online[2], so people could feel confident that they weren't hiring a dud who has hyper-specialized in passing interviews that aren't representative of real-world workloads.
I generally advise people in your situation to start looking further down the world's dependency tree, where things churn less frequently, and where the skills you acquire will last longer. This can be easier said than done, but since my very first job was as a React developer, I can at least share my path down the dependency tree:
Frontend (React etc.) -> Backend (web APIs) -> Infrastructure / Platform / DevOps (started with a cloud automation focus, moved gradually towards bare metal) -> Networking (I'm in ur VPCs, directing ur packets)
All of this being said, the job market right now is very tough. I doubt I could walk out of this job and into another within 3 months like I did this time last year.
>I generally advise people in your situation to start looking further down the world's dependency tree, where things churn less frequently, and where the skills you acquire will last longer.
I actually don't agree: being "further down" the dependency tree, you're less likely to be exposed to new concepts and stay agile as a developer. I moved from a backend .NET background into frontend and have found the faster pace more refreshing with the evolving web and mobile platforms. Staying with .NET would have had me writing the same EF code over and over, almost in a time capsule, at least judging by conversations with old colleagues.
I feel like I'm more valuable now, because I've been through a few tech stacks and understand the benefits and drawbacks of each.
Hi, author here.
At least from my personal experience, there are still many jobs in which the main focus isn't frontend development, let alone React specifically.
I do have to work with React from time to time. But it isn't my main focus. I usually work implementing backend systems (with Go, SQL [Postgres], Redis, etc.) and infrastructure as code with Terraform.
I'd highly recommend reading some articles by Luca Palmieri[1], or even buying his book[2]. Although I didn't learn this stack by working through his book, whenever I had questions through the years, my searches usually led me to his articles which are often excerpts from the larger book.
The high level of API stability and lack of churn in the Actix ecosystem makes the book a particularly good investment for someone looking to settle on this stack in my opinion. In keeping with the topic of this submission, I doubt I'd be comfortable spending money on a similar book about building web apps with React.
> Roughly every three months, the book is updated to keep up with the latest developments in the Rust ecosystem. In particular, we make sure to update all the crates we use in the book to their latest released version. If you bought a copy of the ebook, you can get the latest book revision at any time by redownloading the content from here.
I bought the printed version as I find it easier to work through.
I was expecting it to be relatively out of date, but I was OK with that since the libraries in this ecosystem are relatively stable.
To my surprise and delight, in the first few pages I came across the sentence "As of October 2024"
So I think the printed version is either print to order or printed in very small batches, considering the book itself is several years old at this point.
I'm halfway through it now, and I can highly recommend it. And if you are like me and prefer a printed version, don't be afraid to spring for it.
Tanner builds libraries with a huge amount of functionality. But IMO he is not great at API design. His packages often have leaky abstractions which need to be patched over time.
React Table and React Query are powerful but end up simultaneously doing too much and not doing enough, because their boundaries are in the wrong place.
What’s wonderful about React is that it’s _not_ a framework. It does one thing well, and then stops at a well thought out, well documented, well tested boundary.
I try to only adopt libraries that also meet that standard. It means you have a lot fewer libraries you can lean on, but it means the API surface you build on will be more stable for longer.
It maps state to a set of reasonably efficient DOM updates that you generally don't have to manage or think about.
Go play around with Angular 1, or BackboneJS, or try building a working SPA with jQuery, and you'll get a sense of the breakthrough that react represented in 2013.
I worked with AngularJS back in 2014-15 and it was hell. We used to regularly have accidental performance dips because of the way it reacted to changes in values. IIRC it used to do two scans of all variables in all the controllers on many browser events: one to check whether something had changed and update everything else, and one to check whether the previous pass had changed anything that would require further updates. I don't remember the specifics now because it's been so long, but it got really costly really fast for complex applications (we were building real-time WebRTC telephony interfaces). React was so much better because it came with restrictions on what changes were being checked, and Angular itself ended up being a total rewrite in TypeScript with heavy performance improvements over AngularJS. The virtual DOM stuff truly was a revelation over everything everyone else was doing at the time.
> Go play around with Angular 1, or BackboneJS, or try building a working SPA with jQuery
I have used these in production (and mootools, prototype, and many more) and when these came out they were novel / a breakthrough as well at the time.
My point is that React is no longer a simple transform from state -> UI. Since fibers and concurrent rendering and suspense and server components and hooks and actions, it is a much wider framework than you are remembering from 2012.
I’d say all of those bullets fall under the umbrella of “rendering” in an asynchronous execution environment like browsers, where code often depends on a consistent and predictable view of the DOM.
I’m willing to hear arguments about the merits of how React approaches these issues, but I would want any frontend UI library for generating and updating DOM trees to address them in some way.
To anyone reading, mobx is a more generic tool for any type of state management.
And starfx seems like a "data fetching + holding state and providing hooks for that data" kind of library. Very unique, and it looks like someone cared to make something nice for that kind of problem.
State management is typically handled by other libraries. React can do it, but it's not great for a full app. Maybe "data flow" too, not really sure what makes that different from "state management".
Lodash does a hundred different things, and then people start using it in place of super simple code or native capabilities, ballooning the third-party API you need to learn.
Also react-router, which I found to be way too overcomplicated and would duplicate functionality I was handling in other ways. IIRC last major release was also not backwards compatible, and made some odd decisions in the design. Funny enough I went with wouter (which the post also complains about) instead: Simple, focused, does one thing well, doesn't try to be multiple things. I don't recall anything about its major updates though.
Sorry yeah, I got the names mixed up. I recognized the documentation, and I do mean tanstack router with my description above. I don't remember why I didn't use react-router, but skimming over its documentation I think it may have been the focus on the odd non-React-y way of doing things ( https://reactrouter.com/start/framework/routing ).
I always feel like these HTMX examples just shift complexity to other parts of the stack. JSX is a really elegant way to avoid templating, but we're back to templating with HTMX, and you can easily reintroduce a complex set of libraries just trying to handle that with any scale.
Routing, state management, auth, components, theming, API access, and more are all still problems that people add libraries for and those problems don't go away just because you've abandoned the ecosystem with the most libraries.
I feel like a lot of the arguments for HTMX are simply delight in having "permission" to build MPAs again. Backend stacks are historically good at routing/state management/auth and more, everything old is new again. But also, yes, not everything needs to be a SPA and maybe we've just reached the point to start asking again "why is this a SPA?"
> JSX is a really elegant way to avoid templating
JSX is still a templating language. It's just an "inverted one" where the templates are embedded in scripts rather than the other way around. That said, I do think it is a very elegant templating system, especially because it can be type checked with Typescript. TSX is a massive improvement on most template compilers in part because it has such a massive types ecosystem today.
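For a concrete (hypothetical) example of what that type checking buys you over a string-based template engine:

    // A TSX "template" whose inputs are checked at compile time.
    type User = { name: string; avatarUrl: string };

    function UserCard({ user }: { user: User }) {
      // Misspelling user.avatarUrl, or passing something that isn't a User,
      // fails the build instead of failing at render time.
      return (
        <div className="card">
          <img src={user.avatarUrl} alt={user.name} />
          <span>{user.name}</span>
        </div>
      );
    }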
(My own efforts in "post-React"/"post-Angular" have been TSX-based. I've got a Knockout-inspired view engine with a single runtime dependency on RxJS. It has a developer experience similar to React, but isn't a virtual DOM, and has some interesting tricks up its sleeve. I'm really happy with TSX as the template language for it.)
Agreed. I also think React did the world a favor by making functional programming a little more mainstream, which is part of what makes JSX/TSX feel so magical when it works.
No, it isn't a templating language, it's still just JavaScript - calls to createElement or _jsx or other function with some syntactic sugar to make it look like HTML.
No. Do you seriously not understand the difference between JSX, a simple syntax extension that compiles to JavaScript function calls, and a templating language, for example Vue's templating syntax [1] or EJS [2]?
Rather than acting like OP is stupid for not seeing the difference, why don't you enlighten us?
From where I'm sitting it looks like maybe you're describing a difference in complexity levels? Vue compiles to a representation that is less obviously related to the template? Or what?
I, like OP, am genuinely unsure what distinction you're drawing. It seems like you might just feel that templates are icky and jsx isn't icky.
It's trivial. JSX is just syntactic sugar for JavaScript, and templating languages aren't that. They also aren't just syntactic sugar. It's literally apples and oranges. I ask again, what is difficult to understand about this? Or do you think it's not true?
You may have stricter internal boundaries of what constitutes a "language" than many of us do. Languages are a spectrum and I find there are few hard boundaries in practice. Depending on how you define "syntactic sugar" (and I believe it is a much harder thing to define than you may expect) every language is just "syntactic sugar" for some form of machine code.
JSX is a language that takes XML-influenced templates embedded in JS files and compiles that to JS files. EJS is a template language that can embed JS snippets and compiles that to JS files (or interprets it at runtime, though the distinction between compilers and interpreters I think is largely irrelevant here). They both have the same general target compiled language, and they both have similar transformations from an original document to a new process. The biggest difference I mentioned is an "inversion" of what people think of as a template language (the template being the "focus" and scripting it being secondary/embedded), but I don't think that disqualifies it as a template language.
Subjective feelings of "syntax sugar" or not, JSX is a language intended to write templates in. That's a "template language" by tautology, if not also by definition.
No, JSX is syntactic sugar and nothing more, it requires no context, you can "transpile" it from memory. In my code I sometimes explicitly use React.createElement calls, when using JSX is not that useful, because it's always just writing JavaScript, not writing templates; it's just code. Template languages also add different semantics, e.g. those EJS <% or <%_ tags - you can't express them in some equivalent JavaScript code (don't confuse it with compiling the template to some code, that's something completely different).
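To make the point concrete, this is roughly all the classic transform does (modern transforms emit `_jsx` calls instead, but the shape is the same):

    import * as React from "react";

    // JSX source:
    const link = <a href="/docs" className="link">Docs</a>;

    // ...is exactly equivalent to this plain JavaScript, no template runtime involved:
    const sameLink = React.createElement(
      "a",
      { href: "/docs", className: "link" },
      "Docs"
    );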
It's mainly an interactivity/simplicity tradeoff, sometimes the right trade other times not. A lot of people are using JSX on the server side w/htmx because it's a good and familiar templating option on the server side.
> JSX is a really elegant way to avoid templating, but we're back to templating with HTMX
Not necessarily. There are libraries in all mainstream languages that let you embed HTML generation directly in your backend server itself, without using a templating engine. Some examples:
> Routing, state management, auth, components, theming, API access, and more are all still problems that people add libraries for and those problems don't go away
Actually they kinda do go away. Have you ever tried Ruby on Rails? It does all this out of the box.
> Routing, state management, auth, components, theming, API access, and more are all still problems that people add libraries for
A lot of this goes away if you choose a server-side framework that handles its own routing, auth, api, templates etc. And the state management also goes away if you don't need complex stateful widgets on the frontend.
> And the state management also goes away if you don't need complex stateful widgets on the frontend.
Browsers come with a very limited selection of widgets, almost everything I make requires at least one custom widget, usually significantly more than one. How can you possibly know when you start a project that you won't need to make any complex stateful widgets?
If they really are that complex, then use react for them. One of the biggest issues I have always had with react proponents is the "well if I need react for X then I might as well build the whole thing in react."
Islands Architecture is really not complicated. The bulk of your app can be very simple hypermedia exchanges and components and when you need a really fancy widget, load it and mount it where it needs to be.
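A rough sketch of what that looks like in practice (React 18 APIs; the widget and element id are made up):

    import { createRoot } from "react-dom/client";
    import { FancyChart } from "./FancyChart"; // the one genuinely complex widget

    // The rest of the page stays plain server-rendered HTML / hypermedia.
    // Only this placeholder gets a React island mounted into it.
    const mountPoint = document.getElementById("chart-island");
    if (mountPoint) {
      createRoot(mountPoint).render(<FancyChart refreshUrl="/api/metrics" />);
    }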
I don't even know what problem this author has with React and how switching to HTMX is going to help them (after reading the article twice).
> Some of the worst offenders in this respect were wouter (a React router package) and TanStackQuery (which I was using to fetch, cache and manage state from the backend).
For the past 4 years, most objections to HTMX have come in the form of your comment. A look from afar, trying to speculate on what development must be like.
After 4 years of mature devs taking it for a spin and reporting back with a thumbs up, maybe it's best to actually try to use it in a project of your own and see how close your predictions match reality.
Perhaps you even have experience with it of course. In which case, it'd be interesting/useful to voice your objections more specifically.
It’s a little weird to say you’re ditching React, when your issues are not with React but other dependencies you’ve adopted. The choice to write a backend in Go that handles routing was always there.
That's fair, but React forces you hard into the npm mentality of choosing random libraries that are often abandoned, don't have licenses, and have a myriad of other issues other people have mentioned.
I'm dealing with a massive migration at my job where the schism of libraries going from node-sass to dart-sass turned something as simple as updating Bootstrap versions into a year-long effort of moving a dozen core libraries to their latest versions simultaneously. And the result being what? It's not like our product will be significantly faster or gain some new features; all we get is our app not breaking because the deps shit the bed.
I think the lesson we can learn from the frontend community is that not having a robust standard library is a failure of JavaScript, and this problem will persist until we stop caring about backwards compatibility.
Will we as software engineers continue having discussions about supporting browsers from 1990 in the year 3025? I hope not, because bad decisions were made then and they have been compounded since.
I hope "react forces you into the npm mentality hard on choosing random libraries..." isn't true. I'm just getting into a side-project using Vue and Vuetify components but not planning on using much else than commercial third-parties (which admittedly have free plans that I'm currently using).
I have grown scared to look at OpenAPI and related technologies, for fear that I will find the same problems there.
I don't think "not caring about backwards compatibility" is the issue, though? Quite the contrary, constantly forcing migrations because things have changed hurts far more.
Fair that it isn't just a JavaScript thing, but have you tried looking at the various options out there that are supposed to help implement it? Either as a client or as an implementation.
Many are abandoned. None seem to support AWS as a target.
Yeah. That's how huge sprawling multi-language projects typically work. People who are interested in specific stacks maintain the code generation for those stacks. Stacks which are inherently less popular receive less attention in the OpenAPI ecosystem as well. It's very similar with Protobuf, Thrift, etc. There's no magic wand to automatically maintain the codegen for all stacks.
Also not sure how OpenAPI is supposed to support AWS as a target. Do you mean that AWS doesn't support OpenAPI specs for their JSON HTTP services? Pretty sure that's because they use Smithy: https://smithy.io/2.0/index.html
I mean, this doesn't change the complaint? I wasn't even looking in JavaScript at the time. The support in Python was... less than pleasant. Couldn't even do documentation generation well, as most of those plugins for sphinx still relied on the older swagger schema. Close enough to be a pain in the neck.
For supporting AWS, API Gateway has some efforts to support OpenAPI. It is a lot like the documentation problem. Close enough to be a pain in the neck.
So, the question is ultimately: what did it help me do? I had a pleasant feeling of following standards at the outset. That is about it. Nothing was disastrous, that I recall. Just a lot of paper cuts from things not working as the documentation had me hoping. I grew to call it aspirational documentation.
You also don't have to choose any of those random NPM libraries, React can stand on its own. Adopting a new technology without any of those libraries just means having to roll your own, but that was always an option.
Do you write software professionally? Not saying this to chide but these types of comments always come from either devs that just work by themselves or green ears.
I have never worked on a project where the only dependency was react (let's ignore build or testing tools for the sake of argument). What I do mostly see are projects that captured the react zeitgeist of the time in regards to which "popular" libraries were recommended and people just copied willy nilly.
Maybe this is more of an indictment of software development in general, where professionals are not allowed to design and engineer robust solutions because the alternative is getting fired from your job when John Dev completes more tickets in a sprint by downloading a bunch of bloated npm libs that will break in two years, after which he'll job hop to the next place to continue the cycle.
I'm not saying I haven't seen it, but I would bet my life savings the overall percentage is quite small. Why is bringing up the minority use case helpful in these discussions? Some teams just use JavaScript and opt for zero dependencies too, and those teams are also an extreme minority.
You can't have an earnest discussion about react if you're going to argue that no one pulls in a myriad of other dependencies. Even the react docs recommend you use frameworks when starting out:
Your usage, while quite admirable (I earnestly mean that too, I wish I was on a team that was disciplined enough to only use react and nothing else), isn't the common experience.
> Why is bringing up the minority use case helpful in these discussions?
It was presented as a challenge to the status quo. As you point out, a majority of developers don't ever think twice about including everything and the kitchen sink. The idea that you don't have to do that may not be novel information to you, but is to a large number of developers. If they don't hear it here, where are they going to hear it?
That was kind of my take on this article as well. We're comparing libraries with lots of breaking-API churn vs. libraries that don't do as much, in another language.
When it comes to React itself, the only breaking changes I ever experienced were 17->18, and that was such a simple fix it's not worth talking about.
The problem isn't specifically with React, but the culture around it based on the fact that it's "just a library". Net effect is that you have many more dependencies than in frameworks which provide some features out of the box and thus much greater risk of having to deal with the problem discussed here.
Agreed. I’ve been using Preact on an entirely client-side SPA with a simple RPC-like backend all in typescript for 5 years or so. I can’t think of a single major dependency that has changed significantly in that time. But, I did make an effort to minimize my dependencies.
I had the impression that hash routing in React was deprecated and a React-based back end is almost a mandate now... Wondering what options people are actually taking?
Not at all mandated. Buying into React server-side rendering will get you into a morass of complexity. Traditional SPAs with history-based routing still work just fine.
It seems to me that a lot of people are forgetting that when updating to the next major version of a package, breaking changes are expected - that's the whole point of the major version number in SemVer [1]. What they actually want is seamless updates (or never-changing APIs, but that's not possible in most situations, and also not what you want as a package developer - you want to be able to correct your API design mistakes). That requires a lot of work from the package developers.
Look for example at how the people at Remix do it: breaking changes are hidden behind future flags [2], so a user can turn them on one by one and adapt their code gradually without surprises. Another solution is creating codemods for upgrades. But how many open-source package developers are willing to do this extra work?
Same story with peer dependencies - they're completely fine, if package developers know how to use them.
As always, don't be mad at React, don't curse Npm, it's not their fault. There is no great package without great effort.
> It seems to me that a lot of people are forgetting that when updating to next major version of a package, breaking changes are expected
No, I don't think that's the problem here. The author completely understands and accepts that a new major version will break their code. They're asking whether there's actually a benefit to these breaking changes.
> seamless updates […] That requires a lot of work from the package developers.
Well, the lower something is in the stack, the more likely developers seem to be to put in that work. The Linux kernel syscall API is sacred, so are most Win32 base interfaces. libc/VCRT almost as much. Python versions a little bit less. GUI toolkits and SSL libraries break a bit more frequently but tend to just be parallel install. But the more you move up the stack, the more frequent breakage you get.
Same in the browser. Basic DOM is backwards compatible to the stone age, but the more things you pile on the more frequent API/update breakages become.
It's really a kind of obvious logic, since the lower something is in the stack, the more things above it indirectly snake down dependencies, and the more pressure to not break things there is.
I just finished migrating a fairly large client project SPA with a Django backend to just straight Django + HTMX with some Alpine sprinkled in for reactivity. Went from 100s of JavaScript dependencies to 5.
It feels like a massive weight off my shoulders. The SPA felt like a ticking time bomb.
No, I don’t. No new deps server side. Htmx and Alpine are self contained. They have no transitive dependencies, they don’t even require a build step. Worst case I can just vendor them.
Sure, given enough time everything will rot. But there's a huge spectrum here. HTMX, for example: for it to suddenly stop working, browsers would have to collectively drop support for their most basic features.
Now with the SPA it's very possible to have a Vue2 -> Vue3 situation, or just someone pulling a left-pad. Not to mention the build system requiring specific versions of nodejs, etc. And this is just to keep things running, not to speak of adding new stuff.
And also, just because something isn't self-contained, also doesn't mean it isn't a ticking time bomb. Great, now everything is a ticking time bomb. Enjoy!
> No, my webapp wasn’t getting any additional benefits. I was already happy with the functionality of these packages.
Then why did you upgrade in the first place? Clearly there were no security issues (and those hardly play a role in the frontend world, especially in libraries like TanStack and wouter). It seems people just want to upgrade to the latest version, just to upgrade to the latest version, without any benefit.
I do .NET development, and I skip major versions all the time, instead of upgrading every year.
Frontend security issues do exist and can be big problems. Especially in SPA designs where the frontend runs a massive in-memory database of an entire application state. (As many of them do, especially if you are using things like frontend routing and ad hoc backend DB querying as both example libraries are about.)
.NET has a security support policy that LTS versions (currently even version numbers) are supported for a couple of years and non-LTS versions (currently odd version numbers) for a year after they've been released. A lot of frontend packages don't have the maintenance budget to offer support plans on anything but the most recent major version (in part because many of them are open source and low contributor count; their own problems for the ecosystem).
Don't discount security/support maintenance concerns in the frontend. Also, yes, it is a problem that many frontend packages in the ecosystem don't have maintenance policies as strong as the best backends.
The packages referenced in this article do not have known security vulnerabilities in the latest minor/patch versions of their previous majors. So the author is still complaining about an unnecessary update.
> Frontend security issues do exist and can be big problems.
Citation needed.
> Especially in SPA designs where the frontend runs a massive in-memory database of an entire application state
How is that more relevant for security issues? Common frontend security issues are traditionally XSS, CSRF, token/session theft etc. A lot of those attack vectors have been severely weakened with modern browser security settings (Content-Security-Policy, HttpOnly cookies, SameSite=strict etc.). Eager to hear how a view router (wouter) or a server-state-management system (react-query) in the FE are likely to have security holes in that field.
Two big easy and obvious ones missing from your list:
- Query injection (always a threat to any query library)
- RegEx DDoS (always a threat to routers, because most routers are essentially a pile of RegExes under the hood; and because route authors don't like writing RegExes directly, routers often include DSL compilers to RegExes, which can lead to their own exploits; a sketch below)
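A minimal illustration of that second one (a deliberately pathological pattern, not taken from any particular router):

    // Nested quantifiers over the same character class = catastrophic backtracking.
    const badRoutePattern = /^\/files\/([a-z]+)+$/;

    // A path crafted to almost-but-not-quite match forces the engine to try an
    // exponential number of ways to split the run of "a"s, blocking the thread.
    const evilPath = "/files/" + "a".repeat(40) + "!";
    badRoutePattern.test(evilPath); // effectively hangs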
JS is a Turing complete language and though it is often run in a sometimes strict sandbox mathematicians have now proven there's a "0-Day" sandbox break in the Universal Turing Machine and that it is likely a corollary/relative of the Halting Problem. Sure, modern browsers absolutely have a ton of security settings and improve every day on that, but browsers aren't perfect (mathematically can't be perfect, according to our best understanding). JS is still a complete programming language with everything that implies about exploits and bugs and timing attacks and disclosure leaks. "severely weakened" is not "the threat doesn't exist" and certainly not "the threat isn't worth worrying about, it is fine to leave bugs unpatched".
It certainly means you can take a measured approach to how you prioritize unpatched bugs, but a general malaise sense of lack of priority in frontend issues is what contributes to why frontend libraries don't have the same backwards compatibility rules or long-term security maintenance habits as many backend systems. As an industry we are really bad about looking down on frontend as a second-class environment when it is one of the largest percentages of all program code running on the average user's machines this decade.
I could keep listing CVEs for days, and then still have weeks of lecture topics about how npm has one of the biggest ecosystems for supply chain attacks right now (and those are an active and ongoing threat). I understand the lack of perceived priority and I appreciate that not everyone has the same level of "full stack paranoia" that I do.
You don't seem to have any understanding of what SPAs are. There is no server-side code in your SPA; all code runs in the user's browser. If you do any sort of DoS, you will bring down the user's browser, not the server hosting the SPA. How is that a feasible attack vector?
> Query Injection
Again, not SPA relevant. What is injected? How do you even inject that code into the browser of a user, if you do not have access to the server-side end of the application? XSS, as I said, is mostly mitigated in SPAs that use CSP.
> You don't seem to have any understanding of what SPAs are. There is no server-side code in your SPA; all code runs in the user's browser. If you do any sort of DoS, you will bring down the user's browser, not the server hosting the SPA. How is that a feasible attack vector?
You don't think it is a problem if users' browsers lock up or crash? You don't think it reflects poorly on a business if their app/SPA is the one that caused it? (Even though it wasn't their bug?)
The first D in DDoS is distributed and sure the most common meaning of that is taking down a remote server with lots of broken clients. This form in SPAs is also distributed in the additional way that it can vector to take down many clients.
> Again, not SPA relevant. What is injected? How do you even inject that code into the browser of a user, if you do not have access to the server-side end of the application? XSS, as I said, is mostly mitigated in SPAs that use CSP.
Query Injection attacks like SQL Injection attacks are about putting attacker-specified changes into whatever Query Language is being used. Many SPAs move their generation of queries and the Query Languages they use into the SPA. That's the entire point of libraries like react-query, moving query generation into DSLs close to the UI and directly inside the SPA. No CSP is going to ever stop a SPA from sending bad queries to open query services. Sure XSS is mostly mitigated, but XSS is not the only way to inject a query. The SPA is in the hands of the users. If your attackers are users (or pretending to be) they don't need XSS, they just need access to the application and its query DSLs. Bugs in those query DSLs are security bugs.
Classic Client/Server architecture 101: the client is in the hands of the enemy.
> how npm has one of the biggest ecosystems for supply chain attacks right now
And all of them are only relevant for server-side Node.js applications, not SPAs as the original poster was referring to (React CSR). Your points are all moot and irrelevant to the actual topic discussed.
Several of them have been massive disclosure leaks in SPAs. All of the ones targeted at Crypto Wallets, for instance, were supply chain attacks in Electron code and/or SPA code.
On such popular packages, one should reasonably expect security holes to be found and made public with relatively little delay. At that point, upgrading becomes relevant.
Part of the problem with the JS ecosystem specifically is that, because it is culturally acceptable to have dozens of dependencies even for fairly small libraries, eventually you run into a situation where updating that one small library that you need to update requires you to deal with major upgrades for a lot of other stuff that e.g. break your build because they dropped support for your version of Node.js, or migrated entirely to ESM, or ...
React is not responsible for the stack of poorly maintained third party packages one chooses to use alongside it. You don't actually need a router, or redux, or other "state management" nonsense. Your application code can/should handle that stuff, and it won't break, and it won't change unless you change it.
While I also think react-query should have kept v5 compatible with the v3 API (even if the new API is better, keep the old one optional, at least for a while), the migration is fairly easy and quick to do, and more importantly, it's not mandatory at all. I still have apps running v3 and v4 without any issues. Also, v3 to v5 is a bump of two major versions; it obviously implies breaking changes [1] (can't speak for Go, but this happens in most parts of the industry, see for example Python v2 -> v3, SQLAlchemy v1 -> v2 or psycopg v2 -> v3), and it's not like they release a new major version every week.
As someone who does a good amount of frontend dev, I feel like the "dependency management fatigue" sentiment, which is very common on this forum, is way too inflated. Just keep your dependencies to a reasonable number, pick solid dependencies that don't break their APIs every year, and don't upgrade just for the sake of it. Like you surely do with your backend environment too.
The thing with the react community is that this constantly happens with random things. You cannot at all compare python v2 -> v3. The react community has 10 python v2->v3 situations in any given year. The only reason why it works, is that Facebook and by extension the react community think it's okay to hire massive Frontend development teams.
I've often felt this is the other way round: Facebook hired a massive frontend team, and therefore they have to continually create work for themselves.
The point where they re-centered React around a bad half-implementation of Objects, in a language that already has an OO system, was when I decided they were out of real problems to solve (or didn't know how to approach those real problems—do Forms still suck in vanilla React, without an add-on library or two?) and on to pure employment-justifying and CV-building.
And, more power to them, but at that point I'm not going to willingly rely on the project for anything.
> do Forms still suck in vanilla React, without an add-on library
Did they ever? Wouldn't we just do <form onSubmit={handleSubmit}> and then in the handler we just grab the FormData, send it to the server, get back the response, then update whatever? Doesn't seem like it should be that hard?
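Something along these lines (the /api/signup endpoint is made up) has always been enough for the plain cases:

    import * as React from "react";

    // Uncontrolled form: the browser owns the input state,
    // we just read it out with FormData on submit.
    async function handleSubmit(e: React.FormEvent<HTMLFormElement>) {
      e.preventDefault();
      const data = new FormData(e.currentTarget);
      const res = await fetch("/api/signup", { method: "POST", body: data });
      if (!res.ok) {
        console.error("signup failed", res.status); // surface however the app prefers
      }
    }

    function SignupForm() {
      return (
        <form onSubmit={handleSubmit}>
          <input name="email" type="email" required />
          <button type="submit">Sign up</button>
        </form>
      );
    }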
React v1 solved a real problem, but perhaps didn't really need 17 subsequent major revisions with another on its way? But that's the different cultures around this discussion. The Javascript community generally seems to subscribe to progress at all cost, which the earlier commenter attributes to exceptionally large teams like that at Facebook trying to find work to do, whereas the Go community generally subscribes to the idea that it is okay to be "done", clamping down on the extent of any future iterations.
> The react community has 10 python v2->v3 situations in any given year.
That is not my experience. Maybe you could share with us some examples?
My most important dependencies, after React itself and TypeScript, are react-router (which released v6 in 2021) and react-query (which released v5 in 2023). I don't remember other major breaking changes in recent years, at least with the dependencies I'm using.
Yes, we do maintenance contracts and anything react is really costly to keep up to date.
You have enormous churn just to keep existing functionality.
Super hard to sell that you need something like 30 days a year just to keep your app alive and workable.
The real dependency management hell was ~10 years ago when npm, webpack, etc. became popular (TBH not just dependencies, JS in general). I feel nowadays things are better.
There's a curious expression in Brazil which translates to "taking the goat out of the living room". A parable tells the story of people in a living room complaining about everything and crying out loud about how miserable they were. Someone then brings a goat into the room and chaos ensues. After enough time, the goat is carried out of the room. People are now exactly as they were before, but they no longer complain. They just say that life is now much better without a goat in the living room.
Things may be better today, but that doesn't mean they're necessarily good. Even after climbing a couple circles away from the very bottom of dependency management hell, we're still condemned to endless suffering.
Thank you. My life is better for having read through this long discussion to stumble upon this profound and hilarious fable. I will now remove the goat from the room and go do something productive.
> Is it necessary to literally break the API of a fundamental component in a React webapp 5 times ?!?!
i can't speak to tanstack specifics but just fyi to the general HN audience: it is very normal to bump a major version just because a major dependency bumped a major version (eg Typescript or React), and often it's more a sign of deprecating legacy apis than of breaking anything core.
Deprecating "legacy APIs", from the perspective of the API users, _is_ breaking core functionality. Functionality that existed previously is now gone, that is a breaking change.
Additionally, bumping a major version because a dependency changed isn't a common practice to my knowledge. In fact I'd say it's incorrect. You bump a major version if _your_ API has breaking changes. If one of your dependencies changed but you've adapted in a way that is transparent to your users, that is a patch not a major.
When we shipped React-Redux v8, we rewrote our internals to drop our own subscription management logic, and switched to React's new `useSyncExternalStore` hook instead. However, `uSES` was only added in React 18, and we still wanted to support earlier versions of React that had hooks (16.8+, 17). So, we defaulted to using React's "shim" package that provided a backwards-compatible implementation of `uSES`, at the cost of a bit of extra bundle size.
Once React 18 reached sufficient adoption, we wanted to drop the use of the `uSES` shim package and use the built-in version, but that required React 18 as a minimum dependency. So, we did that in a major, React-Redux v9.
Code _using_ React-Redux never changed in the slightest - it's still the same `useSelector` calls either way. But given that anyone attempting to mix React-Redux v9 and React <=17 would have it break, that was clearly a major version bump for us.
This type of dependency is different to what I wrote about. I was talking about dependencies of your library that are fully internal and hidden from its users, whereas this example is much wider and closer to a dependency on the environment in which your library is being used.
> Deprecating "legacy APIs", from the perspective of the API users, _is_ breaking core functionality. Functionality that existed previously is now gone, that is a breaking change.
Not really. PHP is not HTML-aware the way this thing seems to be, which is a fairly significant difference in practice. Notice how there's no special syntax to toggle between "this is code" and "this is output".
Similarly, my favorite web stack these days is good ol' PHP for the API and HTMX for the front-end. It keeps everything simple and all the complexity in one place. Love it.
Haxe compiled to JS and PHP is my go-to for traditional web apps. I get the type safety and access to a vast ecosystem. Bonus: I can share code on both the client and server.
I like to keep things as vanilla as I can because historically using languages that transpile into other languages has been quite horrible in my experience (for debugging reasons among many other things).
We ditched react and nextjs (broth of satan himself that it is) mostly (clients we are trying to lose) for clog/cl and go/live/htmx (we rewrote two of our largest products). Days are relaxed now, sleep is better, people are no longer stressed, updates no longer lead to depression and therapy.
While I believe you, what do you mean with no more stress? We're using Node.JS + React for a simple integration. I don't have experience in React, but it doesn't seem too bad so far. I do prefer Go or Elixir but as a backend engineer I don't mind it
We minded react somewhat less than the ever ongoing misery churn of nextjs. But it's specifically about updates/upgrades of packages that just break things for no reason (visible and invisible; like I mentioned in another post; often the author cannot explain in coherent language why they broke everything).
In general, and this applies to many language communities, you are encouraged to include all kind of garbage dependencies into your project.
Adding dependencies should be something you consider carefully. Every line of code has a maintenance cost - a dependency has it times 1000. Effectively you are adding technical debt in many cases.
For instance I just developed a new react app with just react and react-router. My colleagues suggested react-query but why add this when you can do all you need with a few lines of code and fetch?
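Roughly the kind of thing I mean, as a sketch (no caching or retries, which we didn't need):

    import { useEffect, useState } from "react";

    // Bare-bones stand-in for the simple react-query cases:
    // fetch JSON, track loading and error, cancel on unmount.
    function useFetch<T>(url: string) {
      const [data, setData] = useState<T | null>(null);
      const [error, setError] = useState<Error | null>(null);
      const [loading, setLoading] = useState(true);

      useEffect(() => {
        const controller = new AbortController();
        setLoading(true);
        fetch(url, { signal: controller.signal })
          .then((res) => {
            if (!res.ok) throw new Error(`HTTP ${res.status}`);
            return res.json() as Promise<T>;
          })
          .then(setData)
          .catch((err) => {
            if (err.name !== "AbortError") setError(err);
          })
          .finally(() => setLoading(false));
        return () => controller.abort();
      }, [url]);

      return { data, error, loading };
    }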
I love the go+alpine+htmx+templ GAHT stack as long as I have defined and limited interaction needs. Things get tricky when you need more complex interaction, because there isn't much example/LLM code out there for complex situations. Then you end up having your view code split between the server and some random JS on the front end, and it kinda kills the simple GAHT fun.
I used htmx for my side project https://rapidforge.io/ along with Golang. I don't think you even need templ, but it's personal preference at this stage. I was surprised how fast I was able to finish the front end work. The only part where I used React was the drag and drop editor.
The author's frustration comes out of his personal, unnecessary dependency choices, but of course his final choice was to blame the technology itself. I have one suggestion for those who are struggling with wouter, or react-video-player, or react-stank-tank-fetch or any other dependency, a 100% safe way to start a successful project:
that's it; you don't need them; they can be replaced with a few hooks in 100 lines of code to fit your needs; just write javascript code, react hooks and components; don't install weird dependencies, they won't make it faster or more convenient in the long run; if there is any other dependency everybody wants and needs, you will see it was last updated 9 years ago on GH and still works great to this day
Use raw esbuild or swc; or be hassle-free with Vite... or something else less cursed. I am grateful for Babel, it opened up js development to new syntax, but it's a beast from past times. (The same applies to webpack)
Transitive dependencies of those are exactly the thing Dependabot will nag you about day and night.
Why should I care about 500KiB of development dependencies? They won't end up inside the build anyway. I don't see any value in Vite or other build tools, since I know how to write the webpack config I need in 3 minutes, and it has been the same process for almost 10 years now: just npx webpack init, adjust the config slightly and never touch it again. There is no option which is too complex or hard to grasp, just the typical output/input/modules/plugins, and you never need to update it without a good reason. Dependabot nagging is never a good reason to start manically updating your build dependencies.
In an enterprise environment you need to manically update to meet the security compliance SLAs, because those dependencies are a source of non-stop CVEs. It's mostly bunk CVEs, but that's out of your control.
No, you only need to update when there is an actual CVE which is a real concern, which is fairly rare for development dependencies. For instance, webpack has had only two in its 12-year history, with one being severe. Babel has had practically zero (except one indirect critical traverse-package CVE last year). Vite, which you're proposing, has had 7 total and 3 severe in 4 years. Think this through: non-stop CVEs, really?
Hmm, I wish the author had gone a little deeper into why Go+Htmx+Templ is better, rather than just ranting about why react is bad. Which isn't even a react-specific problem but more of an npm ecosystem one.
I also ditched react on my side projects but for a whole different set of reasons.
I think that is a very valid criticism. I wanted to keep the article concise, but I should have elaborated more on why Go+HTMX+Templ solves the dependency management fatigue.
As I said in the article, it is mainly anecdotal evidence, i.e. the experience from having to maintain projects with either React or Go+HTMX.
For example, in the Go+HTMX project I handle state management and routing solely with the Go stdlib (which is very very stable IMHO), I don't have to ever worry that a dependency update will force me to perform painful refactoring work.
Maybe in a future article I can expand on these points, thank you for the feedback :)
I wish backward compatibility were a thing in js library development, but clearly there's nothing fun in keeping things working, so developers break APIs all the time for no reason.
I don't know how many times I've dealt with breaking changes for trivial things like making an API prettier: renaming a few functions, a few parameters here and there, because it suits the author's aesthetic sensibilities.
They're of course perfectly free to do this and being open source they don't owe anything to anybody, but I still wish that there was some degree of responsibility towards the end user. Or else why even release the code publicly? End users don't care *at all* how pretty the API is, we just want things to work.
It's great when you can just pick a simpler tool, and it turns out to be an adequate tool. When you don't need the benefits which React provides for building very complex interfaces, there's no reason to put up with its complexity.
There are kinds of applications though where React is indispensable, and HTMX would become unmanageable spaghetti. Stuff like Facebook (the original authors of React), GMail, Jira, etc. Such complex applications (not "websites") are relatively rare. If yours is not of this class, do explore simpler solutions unless you enjoy React and would write it for fun (and even then).
i think you could do at least the basics of facebook, gmail and jira pretty well w/htmx: they really aren't all that interactive and are mostly text and images
examples of things you couldn't do well w/htmx are google sheets and google maps, see:
One of the major issues with the npm (etc) ecosystem is that people break things even across minor versions for no viable reason. Aka a function call has been changed and the author cannot explain to me why it is no longer backward compatible, other than 'it works for me'. It's their good right, but I don't get it. Also, some packages are just 'done' besides the security fixes, but that doesn't give you kudos or VC money, as people 'think the project is dead' if there are no updates for a week, so stuff gets changed and useless commits are made.
I hope the author will write a new post when he has used the new solution on a large project (not a personal one), preferably with several devs working on the same code.
In general, minimizing complexity usually pays off in the long term.
My only issue with Go is getting the old bootstrap C version compiled for a new port... so it can build a modern release. In that area, the whole paradigm of needing only Go hits hard as a dependency reality check in some use-cases.
JavaScript frameworks just follow a well known trajectory... =3
That is the catch, you need Go for the next release of Go... So no recent Go, means no new Go... and no deprecated Go in C means no Go to or fro... makes sense now... No? lol =3
If you are porting Go, then you may find it has a legacy dependency on the deprecated version (rarely used except for the porting use-case), which is a problem on some platforms.
To list Node issues would be pointless, as most already re-discover them within minutes. =3
So they got tired of the rate of change from 2 dependencies: Wouter and TanStackQuery, so they changed their app architecture and now they have new dependencies: HTMX and Templ.
This is kind of fascinating: they posted a migration guide that explains what code has to be updated, at the end of which they mentioned the dropped IE support. So your brain must've just ignored the code diffs above when responding, because it likes htmx so much.
2 Examples:
1. "Convert any hx-on attributes to their hx-on: equivalent: [..] hx-on="htmx:beforeRequest: alert('Making a request!') [..] becomes: [..] hx-on:htmx:before-request="alert('Making a request!')" Note that you must use the kebab-case of the event name due to the fact that attributes are case-insensitive in HTML.
2. The htmx.makeFragment() method now always returns a DocumentFragment rather than either an Element or DocumentFragment
Part of the reason is that a lot of the changes were things that were already pushed as the "correct" way of doing things in v1, and they even avoid marking v2 as "latest", specifically to prevent accidental upgrades and provide a time frame to upgrade.
Let's go through migration guide:
- NO CHANGE NEEDED JS modules changes: a new feature in the form of all three major module formats being provided out of the box
- MINOR Extensions are now packaged separately, and one extension explicitly has to be upgraded. Minor amount of work (been there, done that)
- MINOR Again migration of SSE and WebSocket extensions, something already pushed as the default and preferred way in later v1 versions. If you followed best practices from the docs in the last year, you have nothing to upgrade.
- MEDIUM hx-on="htmx:..." -> hx-on:htmx:... - again, old recommended becomes new mandatory. If you followed best practices you have nothing to upgrade, otherwise change is minor unless you have lots of custom events involved where you also did not follow recommended naming.
- MINOR - default settings of configuration options changed. Minor change of adding a few configuration lines.
- MAJOR ... but only for users of internals, not public API - htmx.makeFragment() changed to simpler and more performant code underneath (and less buggy) - and it was because HTMX dropped IE that it could finally do so, because IE's documentFragment handling was simply plainly bad. It's also internal, not public API, so unless you went deep or wrote your own extensions you're not going to see it at all
- MAJOR - for extension authors that used a certain specific private internal API, that API got replaced with a new public one.
- Internet Explorer support getting dropped. Some of us still have to deal with it, for them the team behind htmx apparently promises to maintain v1 for a while longer. Also trigger for the makeFragment thing.
All in all, if you have followed the v1 documentation's recommended patterns in the last year or two, you're barely going to notice anything except maybe the configuration changes, and well, better to be explicit about those.
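To make the configuration point concrete, here is roughly what "adding a few configuration lines" looks like. The two option names shown are real htmx config flags, but which defaults actually changed for your setup is something to verify against the v2 migration guide rather than this sketch.

    // Being explicit about configuration after upgrading to htmx 2.x.
    // Option names are real htmx config flags; the values are illustrative,
    // not a statement of what the new defaults are.
    htmx.config.selfRequestsOnly = true;        // only issue AJAX requests to the same origin
    htmx.config.defaultSwapStyle = "innerHTML"; // spell out the swap style instead of relying on defaults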
Neither of those examples apply to or impact the bulk of HTMX projects. I've worked on a bunch of these and have never seen `makeFragment` used in the wild. The `hx-on` attribute is similarly rare — I've only used it twice myself and it took longer to open my IDE than to update those to the new syntax.
> So your brain must've just ignored the code diffs above when responding, because it likes htmx so much.
It's less about whether someone likes HTMX or not and more has to do with none of those points in the upgrade guide being relevant to, or impacting, the author or most other people.
I think I read it as them saying "there were no breaking changes in the major release except for dropping IE support" instead of "even though it was a major release, there were no major breaking changes except for dropping IE support", which is a big difference. And then my stupid ass went off on it. This is my bad; I'm sorry, disregard my comment.
Nah, happens - if you don't actually use HTMX and read the change list, it sounds scarier.
The reason some of us like the way it updated so much is that, except possibly for authors of swap plugins, the new behaviours were telegraphed long in advance, so a lot of places had already made all the changes and v2 just removed some stuff that was already deprecated.
Funnily enough, this confusion shows how bad our expectations became, in some ways...
Rails has Hotwire/Turbo, Phoenix has LiveView, others use various methods like Htmx to get away from the SPA nonsense a majority of us are living with.
I've been enjoying Web Components with lit-html (and not lit) so far.
The developer experience is definitely worse compared to Vue, but I feel there's less ideology and careerism.
Plus, Claude generates some pretty nice code out of the box!
Isn't this also a Single Page Application (SPA) problem? To me, it looks like we are building monolithic Javascript applications that are running in a browser execution engine.
HTMX seems to be much more componentized.
Go dependencies are not much different. You mostly import directly from github.com, and it's a pain to review all the changelogs. Compare that with Java/.NET, which traditionally have fewer, better-tested dependency updates.
It gets clunky really fast if you’re building anything complex. Also, Templ gives compile time errors in situations where the built in templates would give you runtime errors.
In fact, it reminds me of Joseph Tainter's theories in Collapse of Complex Societies. Additional civilizational complexity adds value until it starts producing negative marginal returns and then the complexity collapses and reverts to simpler forms.
Yet so often I have these thoughts that (feel free to replace react with anything):
- Yeah why was that in react anyway?
- Is the fact that the project grew into a mess a React thing, or just the typical case of a project being developed over time and turning into a mess because humans?
- Are these problems "react" problems or ... choices?
- Is the new system better fundamentally, or better because everything got re-written after the fact / all the lessons learned were applied to it from the start?
Sometimes there are answers in the articles, sometimes not.
Not a fan of TanStack. It's fine if you want to ditch React, but I have to question your judgement in using packages like Wouter and TanStack in the first place. There is a way to develop with React that requires just a handful of other dependencies, and this is not the way.
I evaluated Tanstack for a project and it's got a lot of annoying issues with doing way too much and way too little simultaneously. I feel Tanner Linsley moves between these numerous projects a bit too much leaving the API surface of them all half baked. Specifically in my case it was the table that simply did not fit some use cases we wanted. We ended up going with react-aria instead. Very flexible, very focused on accessibility and very simple and extensible APIs and components.
That's Go's out-of-the-box experience. Maybe you're looking for something to scaffold projects? There are plenty around, I've been using https://gowebly.org
Sorry, but dependencies are choices. Rewriting on a new stack with fewer dependencies is just choosing fewer dependencies. People seem to believe that using React requires using all the other shiny libraries. Right-sizing your implementation to your needs is important. But it's not React's fault; it's the false impression that you can accrete things without cost.
The problem is not so much React as it is the JS ecosystem, but React is just very visible when you have these issues because there are so so so many packages being imported.
And the root of the problem is peer dependencies and the JS community's lack of backwards compatibility and maintenance.
Take any decently-sized JS application, whether React or whatever else. Put it in Github. Turn on dependabot. Watch your pull requests go up by 5-10 PRs per week, just to bump minor versions, and then watch how 1 of those PRs, every single time, fails because of a peer dependency on a lower version.
This has been a problem forever in the community, and there's no good solution. There's also just no feasible way to make a solution due to the nature of the language and the platform itself. You just have to absorb that problem when you decide to use eg Node for your backend code or React/etc for your frontend code.
Reminds me of the Rich Hickey talk Speculation[1]. There is a special place in hell reserved for programmers that break back compat (for non-security impacting reasons) with widely used libraries, including google's guava developers. Linus Torvalds seems to be the only engineer with his head on straight on this topic and he has to constantly dive in and berate people that are trying to violate it in his project.
The culture around JS seems to believe that, between semver and package version pinning, there's some kind of mandate to do things like massive incompatible API refactoring, so long as you update the major version number.
Can you provide an example where Google Guava broke backwards compatibility? I have used it for more than 10 years without any issues during upgrades. To be fair, it is a huge library, and I have probably barely used 20% of it.
Oh, this is common. Fun when you have two different dependencies which both use Guava, but different versions, and you can't upgrade them to a common shared one. The solution there usually means having to shade Guava for one of them, which sucks, but it at least gets things working.
It's not much of a problem when you use it yourself, but it is when your dependencies do.
The JS ecosystem definitely has a big problem from the culture developed in the IE6 era, when people wrote so many packages working around the limited language and runtime, but React does have part of the blame here. The way it's designed forces everything into its proprietary model instead of web standards, so you end up with tons of components duplicating other projects but in React, or providing shims for those projects. Facebook's big devrel push prioritized getting started quickly on a proof of concept rather than maintaining a larger app, so you had things like Create React App adding nearly 40k dependencies before you had written a single line of code. And the culture of favoring JavaScript over built-in browser functionality (which made some sense in the 2000s when you had users stuck with IE6) means you're doing a lot of work in runtime JavaScript rather than in the browsers' heavily-optimized C++, and it's often hard to change that because it's not a direct dependency but a nested chain.
This is also why it’s slow and memory hungry: it’s not just the inherent inefficiency of the virtual DOM but also that having such a deep tree makes it hard to simplify - and since interoperability makes it cheaper to switch away, framework developers have conflicting incentives about making it easier.
Same (I started writing JavaScript when it was called LiveScript in the Netscape betas), and I remember how the vDOM hype was conspicuously short on rigorous benchmarks – people would compare it to heavyweight frameworks which did things like touch elements repeatedly or do innerHtml cycles and say it was fast.
More specifically, they would use the default font, which IE in particular had set to Times New Roman, so that is what most people saw. To add insult to injury, there was no way to configure it for a very long time.
To this day I wonder if this particularly strange choice of a serif font that is very clearly intended primarily for printed documents rather than on-screen legibility is why this entire notion of using user-selected fonts for web pages has largely withered. What if they went with, say, Verdana instead?
Same, except my web backgrounds were in sepia. I had one of those old sepia monochrome monitors, so no grey for me. Or colors for that matter.
I even made my first website on that monitor (complete with animated gifs and <blink>, of course) - and seeing it finally on a color monitor was... interesting.
I was around for DHTML days, and as I recall, it was just a generic term for the ability to manipulate the actual (not virtual) DOM programmatically from JS.
Can’t help being sarcastic: I have seen a couple of “I ditched React” post-mortems that apparently start with “I decided to stop adding poorly vetted dependencies with poor package maintenance practices”, just worded differently.
It is unsurprising to me if the router library is the first accused. When I was starting with a new project where I am using React, I went through a bunch of router libraries. There are tons, it seems like a low-hanging fruit with many implementations and many people trying to make a living off theirs (can’t blame them for it, unless they make changes for the sake of making changes and to incentivise people to pay for support). Ultimately, I found something off in every one, so I… just decided to not use any!
That is the thing, React is a small rendering library[0] and you are free to build whatever you want around it with as many or as few dependencies as you want. If the ecosystem is popular enough, there will be dependency tree monsters (simply because the ecosystem is extensive and using many dependencies allows package authors to make something impressive with less effort); switching to a less popular ecosystem as a way of dealing with that seems like a solution but a bit of a heavy-handed one.
[0] Though under Vercel it does seem to suffer from a bit of feature creep, RSC and all that, it is still pretty lean and as pointed out has two packages total in its dependency tree (some might say it’s two too many, but it is a far cry from dependency hell).
Sorry, can't agree. React is a state management library that also implements efficient rendering on top of the DOM diff it computes as it propagates the state changes.
This allows React apps to remain so simple (one mostly linear function per component) and so composable without turning into an unmanageable dish of callback / future spaghetti.
There are a number of other VDOM libraries, but what sets React apart is that data / state flows strictly in one direction. This lets you reap many of the benefits of functional programming along the way, like everything the developer sees being immutable; that is not a coincidence.
Regarding the size, preact [1] is mostly API-compatible, but also absurdly small (3-4 kB minified), actually smaller than HTMX (10 kB). But with preact you likely also want preact-iso, so the size grows a little bit.
> implements efficient rendering on top of the DOM diff
Here your definition of React diverges from reality, I believe.
React does implement state management; it sort of has to, to be of any use. It is a flavour of immediate-mode GUI, and some way of managing what changes and what does not change between renders is necessary.
However, React famously does not know about DOM or even Web. (Unless you meant DOM in some more general sense?) People use React to make command-line interfaces, output to embedded LCD screens, etc.
Yes, coupling it to Web and DOM and abandoning the separation of concerns that makes core React so general-purpose could probably make React a bit smaller, but I think projects like Preact are welcome to do it instead.
You are right, and my calling it imGUI is a stretch since actually it does rely on state to know when to render.
If you think about it, the end goal is rendering, and React facilitates that, but the actual rendering (as in, changing page DOM, terminal buffer, etc.) happens outside of React. So my definition may need to be adjusted.
I still think it’s a stretch to call React a framework, though!
Which always seemed a bit ironic, considering those libraries are presumably intended to make state management work better if you really have to do a lot of it (i.e., have a large product).
At MPOW we have a bunch of state-managing components for complex cases, like multi-page forms or API-backed grids with tons of filters, etc. They take some effort to get the hang of them, but after that they save large amounts of boilerplate.
React comes with useState, useMemo, and useCallback, which is actually enough, but it may be too low-level when you think e.g. in terms of a huge interactive form. It's easy to write your own useWhatever based on these which would factor out your common boilerplate.
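As an illustration of the "write your own useWhatever" point, here is a minimal sketch of a custom hook built only on React's built-ins; the useField name and shape are made up for this example, not taken from the comment above.

    import { useCallback, useState } from "react";

    // Factors out the "value + onChange handler + dirty flag" boilerplate
    // that otherwise gets repeated for every field of a large form.
    function useField(initialValue) {
      const [value, setValue] = useState(initialValue);
      const [dirty, setDirty] = useState(false);

      // Stable handler reference, so memo()ed inputs receiving it as a prop
      // don't re-render just because the parent did.
      const onChange = useCallback((event) => {
        setValue(event.target.value);
        setDirty(true);
      }, []);

      return { value, dirty, onChange };
    }

    // Usage inside a component:
    //   const email = useField("");
    //   <input value={email.value} onChange={email.onChange} />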
I suspect HTMX also does not come with every possible battery included, judging by the proliferation of libraries for HTMX-based projects. Modularity is a strength.
It is enough in the sense that NAND gates are enough to build computers. Yes, you can write a complex application using only those, and yes, it's easy to write hooks to keep the boilerplate low (although Context still feels like too much boilerplate), but as complexity grows it's quite natural to want to share state between distant parts of the application, and then you're left choosing between lifting state up and prop drilling (not great) or Context and massive, frequent re-renders. Hence the need for third-party state management solutions.
Are you sure contexts causing rerenders is not solvable by moving useContext() calls into hooks that each return only the part of the context that is required and ensure reference equality of returned value (meaning any component will not have a reason to rerender unless there is an actual change in that part of the context)?
I thought about it more and my reasoning is as follows (I may be wrong):
— useContext() returns an object. This object is new any time anything in the context changes. If the object has a nested sub-object, it may be new as well even if it did not change, though I suppose it may depend on how context provider works.
— All components where you use that context therefore will render any time it changes. (Takeaway 1: apply loose coupling & high cohesion principle to contexts, such that if you use the context and it changes in any way there is a high chance the change is relevant to wherever you use the context.)
— The render at that stage may be fine[0], especially if contexts are nicely organized, but care is needed because a downstream child that receives a nested sub-object from the context may render as well even if the sub-object is unchanged but referentially new (unless the child is wrapped in memo() and the memo handles reference equality, which may well be what you meant). (Takeaway 2: always remember that JavaScript is full of pointers and referential equality is important in React.)
— However, if part of the context is useMemo()ed for reference equality before being passed to a child, then the child will not have a reason to render on other, unrelated context changes (see the sketch after this comment).
[0] It may make sense not to use context in large numbers of downstream leaf components (e.g., not use it in an item rendered in a map, but use it in parent list and pass relevant props to list items).
This may be frustrating to deal with in a large project, but it may be that the effort put into organizing contexts strategically and using them with care would lead to a more solid, refactorable and reusable architecture compared to state sprinkled around the place as essentially an equivalent of global variables. It depends.
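A minimal sketch of the pattern described above; the SettingsContext / useTheme / Swatch names are made up for illustration, and the exact memoization strategy will depend on how the provider builds its value object.

    import { createContext, memo, useContext, useMemo } from "react";

    // A context whose value object is recreated whenever anything in it changes.
    const SettingsContext = createContext({ theme: { color: "blue" }, locale: "en" });

    // Hook exposing only the slice a component cares about. The returned
    // reference stays stable while that slice is unchanged, so memo()ed
    // children receiving it as a prop ignore unrelated context changes.
    function useTheme() {
      const { theme } = useContext(SettingsContext);
      return useMemo(() => theme, [theme.color]);
    }

    // Leaf component wrapped in memo(): it only re-renders when its props change.
    const Swatch = memo(function Swatch({ theme }) {
      return <div style={{ background: theme.color }} />;
    });

    function Toolbar() {
      const theme = useTheme();        // Toolbar itself still re-renders on any
      return <Swatch theme={theme} />; // context change, but Swatch does not
    }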
Not sure, assuming a change in context through useContext() directly counts as state change and memo does not prevent re-renders on state changes…
Generally, re-renders should not be a problem (assuming nothing changed for this component, it is a no-op as far as its DOM is concerned), but that is a separate issue, I suppose. I did have to worry about re-renders on a few occasions (and it never feels great when you put effort into memoing each prop for reference equality, but something still causes a rerender from within).
This is my main complaint about React - the "just don't worry about rerenders!" model works well until it really does not, but then you're left with very little help from the tooling to understand and fix it: "why did this render happen" is still a surprisingly difficult question to answer, and if you really want to take control of this you have to very carefully micro manage useCallbacks, useMemos, memo(), probably lie about your useEffect dependencies, check every single hook, and hope that your dependencies do the same. In the words of Ben Lesh[1]: React is not a pit of success.
That said, I fear your solution would not work - your usePartOfTheContext() would re-render every time the useContext() inside did, not helping with avoiding re-renders. But if you only passed the part of context to descendants that use memo(), it _should_ work. Having children of context providers always use memo() is probably a good rule of thumb.
This uncertainty is why I find it much more productive to just slap shared state inside Jotai, so I can be reasonably certain that rerenders will have the smallest granularity without any more work.
I am very hopeful about the compiler, which should help a lot with this, freeing a lot of mental bandwidth, but also useEffectEvent() which will finally make useEffect sane.
> your usePartOfTheContext() would re-render every time the useContext() inside did, not helping with avoiding re-renders
If a hook returns the same value with a stable reference across renders, and it is passed as a prop to some downstream components, it does not matter whether the hook itself uses context or not: for downstream components, prop did not change and no render can be triggered.
Thanks, I was not correct. Still, in my experience, failing to ensure referential equality of props is usually the root cause of many issues.
If you make sure prop references are stable as early as possible, then if you run into poor performance you can always just wrap components in memo() (or some would just memo() all the things by default), but you may not even need it because renders also get cheaper when dependency diffing is effective and every hook does less work.
If prop references are unstable, things get messy in many ways.
In reality though, making the case for using only the React library in a minimalist setup is just as hard with a team full of people who came up in the past 10 years of front-end development as convincing them to use HTMX or web components. Nowadays, when people use React they use the whole kit and caboodle, and when they say React they mean all of it.
Personally, I avoid React because I don't want a compile step. I do everything I can to avoid one. And if I do need to use a framework like React, I prefer to isolate it to exactly where I need it instead of using it to build a whole site.
For the first part: yes, but that is why I think it is important to stress dependency vetting.
For the second part: a couple of times, when I had to add a bit of purely client-side reactivity to something pre-existing but did not want to introduce any build step, I simply aliased createElement() to el(). That said, personally I prefer TypeScript for a project of any size, so a build step is implied and I can simply not think about converting JSX. Webpack triggers bad memories, but esbuild is reasonable.
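For reference, here is roughly what that alias amounts to; this assumes React and ReactDOM are already available as globals (e.g. loaded from plain script tags), which is the usual no-build setup.

    // No JSX, no build step: el() is just React.createElement.
    const el = React.createElement;

    function Counter() {
      const [count, setCount] = React.useState(0);
      return el(
        "button",
        { onClick: () => setCount(count + 1) },
        "Clicked ", count, " times"
      );
    }

    ReactDOM.createRoot(document.getElementById("root")).render(el(Counter));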
I like it when it simply does not compile, which also helps when there are other team members (or future me) who cannot always be trusted not to ignore typing errors. Also, I may be wrong, but I feel like JSDoc types are a bit limiting (and more verbose and extra effort) compared to inline TypeScript. Coming from Python, I really enjoy the typing power of TS and do not want to compromise on that…
I prefer to just run tsc to check for type errors on GitHub commits instead of needing them for every change.
And yeah inline types are more verbose but I prefer to use .d.ts files for definitions and then declare with a comment (vim lets me move to definitions with ctrl-] which is nice).
I also come from a Go background so I actively don't like using the more esoteric and complex types that typescript provides.
Culturally, who writes React apps with only those dependencies? I’ve done it for quick benchmarks and the like, but actual sites almost always have a huge amount of code to load. It’s like saying that you can have a Java app using only builtin libraries: true in theory but rarely in practice.
I found this really noticeable while traveling over the summer with limited bandwidth: the sites which took 5 minutes to fail to load completely all used React or Angular along with many, many other things posturing at being an SPA but the fast sites were the classic server-side rendered PHP with a couple orders of magnitude less JavaScript. It really made me wonder about how we’ve gotten to the point where the “modern” web is basically unusable without a late-model iPhone and fast Wi-Fi or LTE even when you’re talking about a form with a dozen controls.
Most of the problem there is people implementing their own timeouts in JavaScript instead of relying on the browser. The browser knows the difference between something taking 5 minutes while making no progress and something taking 5 minutes while making slow progress. Your application does not.
In this case, it’s simply putting a mountain of code into the critical path. If you have to load 30MB before the page works, it’s just not going to be a good experience. You can try to handle and retry errors but it’s better not to get into that situation in the first place.
That's what I mean. I've seen async loaders that wait 5s, don't see the file, then request it again. Before you know it, you're downloading 50 copies of the same file or making 100 API requests to the same endpoint.
There is a good solution. It's actually a great solution: Write everything in plain JavaScript. You get a great language to develop in. All your problems will go away. No dependency hell. Excellent load times and performance. Superb compatibility.
You can use fully featured frontend frameworks without a build step. While that may sound disingenuous, because you are effectively using a prebuilt version of the framework (so there is a build step somewhere), you are not suffering the problems that come with building it yourself. Instead, your code is used and stored in a way that will work directly in the browser for as long as browsers support JavaScript.
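A minimal sketch of what that looks like in practice; the CDN URLs are only an example, and a vendored local copy of the framework works the same way.

    // app.mjs, loaded with <script type="module" src="app.mjs">.
    // No bundler, no transpiler: the browser imports the prebuilt framework directly.
    import { h, render } from "https://esm.sh/preact";
    import htm from "https://esm.sh/htm";

    const html = htm.bind(h); // htm gives JSX-like templates without a compile step

    function App({ name }) {
      return html`<h1>Hello, ${name}</h1>`;
    }

    render(html`<${App} name="no build step" />`, document.getElementById("app"));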
In one of my previous jobs, the main product was 100% pure JavaScript (using AngularJS), with a few (vendored) third-party scripts, and it was very nice to work on.
No package.json, no dependency issues, and above all else, the workload we had was always related to business, and almost never to an external technical constraint such as a deprecated dependency.
In general I would agree, but this project managed to avoid the main gotchas of AngularJS (the performance issues and the possibly messy data flow), so it was holding up surprisingly well, even past AngularJS's end of maintenance.
The lack of pre-processing steps, combined with good CSS and a well-formed DOM, made it one of the rare projects in my work history that didn't create any rewrite-envy.
AngularJS or not, the main point is that avoiding piling layers of tooling that might force you to an upgrade for purely technical reasons was a nice experience.
Maintaining a large typescript codebase with many developers is already a nightmare, I can't imagine what it would be like if intellisense was clueless about the parameters a given function takes.
If you are one programmer it may work, but many of us work in teams. JS is horrible for that as you need a lot of discipline (which often does not carry over well from team to team) in order to write "good JS".
We use Elm now. Elm translates well to JS (quick to compile and Elm is designed to map well to JS). We use Elm libs, but not nearly as much as in the unholy React+jQuery (yes, that's a bad idea) code it replaces.
All is compiled into one bundle. For the browsers the result is much less to download. For us devs it is a very different development flow: once the compile errors (shown in the IDE) are gone, it just works.
Compared to the loads of runtime bugs in JS, we are confident this is a huge step forward and a good foundation to build on top of.
For Node and other technologies like React, I would prefer fewer, but fatter, libraries that could be optimized at compile time. All those micro-packages coming from nowhere and getting updated every day are a big pain.
Or better yet, no dependencies! The built-ins for both JavaScript and the browser APIs are getting better and better every day, but people still reach for things like Lodash and date libraries when the equivalent functions are built into the language and runtime itself.
Some things are easy now, but some things still require multiple lines of code: https://youmightnotneed.com/lodash. If you only need one or two, sure, just write your own, but if you're maintaining a big project... why waste time debugging these common utility functions? You'd basically just be reinventing lodash, but with fewer community eyes and tests on it. Whoever inherits that is gonna need to debug all your util functions when something inevitably goes wrong. Like for _.pickBy(), none of these are very readable (https://stackoverflow.com/questions/54743996/converting-loda...), another implements it wrong and leaves out the predicate (https://github.com/you-dont-need/You-Dont-Need-Lodash-Unders...), etc. Why do this to your project and fellow devs when it can easily be a single tree-shaken function imported from a popular, well-maintained lib?
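For a concrete sense of the trade-off, here is _.pickBy next to a typical hand-rolled equivalent; the vanilla version below is just one sketch, and unlike lodash it does not guard against null objects or provide a default predicate.

    import pickBy from "lodash/pickBy";

    const scores = { alice: 42, bob: null, carol: 17 };

    // Lodash: the predicate receives (value, key).
    pickBy(scores, (v) => typeof v === "number");
    // => { alice: 42, carol: 17 }

    // Typical hand-rolled replacement; works, but every team ends up
    // re-deriving (and re-debugging) some variant of it.
    const pickByVanilla = (obj, predicate) =>
      Object.fromEntries(Object.entries(obj).filter(([key, value]) => predicate(value, key)));

    pickByVanilla(scores, (v) => typeof v === "number");
    // => { alice: 42, carol: 17 }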
If anything, ECMA should just absorb more lodash functions into the standard lib, like they've gradually done with some of the array functions. But common things like that shouldn't be up to each individual programmer & team to reinvent all the time. It just needlessly expands the maintenance surface and causes subtle bugs across teams & projects.
JS Date is in an even worse place. If you ever need to work across time zones on both the server and the client/browser, native JS Date is totally unusable because it "loses" the original time zone string and just coerces everything into (basically) UTC milliseconds. The Temporal API is supposed to fix that, but I've been waiting for it for nearly a decade: https://tc39.es/proposal-temporal/docs/. That proposal links to https://maggiepint.com/2017/04/09/fixing-javascript-date-get..., which explains some of the weaknesses of the current JS date system.
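A small illustration of the "loses the original time zone" point; the Temporal lines are commented out and should be treated as a sketch, since Temporal is still a proposal and its API details may shift.

    // Native Date: the offset in the string is used for parsing, then thrown away.
    const d = new Date("2024-01-15T09:00:00-05:00");
    d.toISOString();         // "2024-01-15T14:00:00.000Z" (everything is coerced to UTC)
    d.getTimezoneOffset();   // offset of the machine running the code, not -05:00

    // Temporal (proposal): the zone is part of the value and round-trips.
    // const z = Temporal.ZonedDateTime.from("2024-01-15T09:00:00-05:00[America/New_York]");
    // z.timeZoneId; // "America/New_York"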
I agree with you about timezones and I think a library like date-fns (and date-fns-tz) strike a nice balance by not using a bespoke intermediary object type.
As you say, some of them are built in and those should just be used instead in most cases. The problem is that when you leave the choice of library to use, then the choice isn't always obvious, especially for niche use cases. A library that's well-maintained today may not be so tomorrow when the maintainer falls ill, gets burnt out on work + open source work or simply gets bored of the project.
Deno has the right approach in this regard where they are creating standard libraries to go with their runtime which are expected to be maintained in the long term, but even then I'd still prefer built-in APIs in most cases.
Angular was kinda like that, batteries-included, but it lost to React. It wasn't until Next that we got a similar batteries-included big framework for the React world.
But React was not batteries included, which is where the dependency hell came from; Angular comes with routing, API management, the works, but with React people quickly needed additional dependencies like state management (redux & co), routing, API calling, etc.
While Angular feels big and heavy because it's a batteries included framework, React feels simple and quick on the surface, but as the article and the discussion show, it comes with its own price.
If I'm building a personal project, I don't have the same time to curate a full ecosystem stack, and nobody in the React ecosystem is maintaining such a curated stack for applications that are put to the side for weeks or months at a time.
As for me, I just restarted a personal project on rails because of its batteries included mentality - it means I can limit the number of dependencies, and they have gotten very good at migration paths and deprecations.
Sure, but the framework cares about that for me. I don’t use rails personally but that’s the whole point — someone upstream of me is paying attention and making everything work together.
In contrast, I have work apps made in React that need regular piecemeal updating — routers, form libraries, query managers, CSS — because we’ve chosen to cobble that together ourselves. That’s fine, that’s the path we chose knowingly when we picked the tech we picked, but the point isn’t that frameworks don’t have dependencies — it’s that they take on more of the burden of managing them for you.
Well, Next is kinda like that then. It takes care of the sub-dependencies for you and when you upgrade, you just upgrade to the next major Next version (which isn't necessarily easy, but more so than upgrading 100 individual packages). They provide codemods for some stuff too.
I suspect that most Rails or Next projects add additional dependencies beyond just the framework. Generally the framework isn't the issue, in my experience.
Sure, but it's not an either/or situation. Every big project adds dependencies, but using Next means you have some basic, common functionality included out of the box by default/by convention (like TypeScript, linting, testing, routing, caching, SSR, static builds, serverless definitions, etc.) all done in a predefined way. Maybe your project has 200 deps, but Next would replace like 50 of the big ones that you'd otherwise have to separately install and maintain. Just having a basic page/app router and minimal state system (via contexts and RSC and props and such) reduces a lot of the headaches of the bad old React Router days.
It replaces "React soup of the day" with a more standard "recipe" shared by most Next projects – like "Grandma Vercel's secret React minestrone", I guess. But yes, projects would typically still add their own "spices" on top of those basics.
Should a library become compromised with a vulnerability, fine (if said vulnerability is relevant to your usage). If you need a feature only available in a newer version, fine (I’m counting better performance as a feature).
What I’m seeing far too much of is upgrading for the sake of it. It feels like such a waste of dev time. Pinning dependencies should be absolutely fine.
1. Process. As a guiding principle, it is easier to make frequent small steps than one big step. There are many reasons for this, and the benefits of frequent small chunks of work apply beyond updates.
2. Security. Frequent updates can improve security posture, for different reasons: you apply undisclosed security fixes without knowing it (not everything is a CVE), prevent unnoticed vulnerabilities (this can be fixed by automated monitoring) and when there is a time-critical upgrade, the work is faster and less risky (see previous reason).
Pinning and updating reactively would be fine, and sometimes is; however, there will be security issues and you will have to update. Given that the task is hard to avoid, for any product that is actively maintained and developed I think the better choice is to do regular updates regardless of security issues. Maybe with good monitoring, and for products that are really not developed any further, just reacting to security issues is the better choice; it's often a pain too, though.
In my experience (~10 years of front-end stuff), the rewrite will happen either way. Code rot, sweeping redesigns, obsolescence, or over-eager consultants will trigger a full front-end rewrite / re-architecture every ~5-10 years.
The issue is that feature or vulnerability might not be patched on older versions. If you are using a 2 year old version and a non-backported vuln or needed feature comes along that means you have to absorb 2 years of breaking changes to move to that version.
Frequent updates allow you to address the breaks gradually rather than all at once.
JS is just awful, though, because of the sprawling dep tree. I get why devs would prefer pinning as any one of the 1000 deps that get brought in could need an update and code changes on any given day. A sticky static version requires less daily maintenance.
It's vastly, vastly easier to upgrade small version bumps constantly via automated tools like Renovate than it is to upgrade several major versions every few years. It's shite being stuck with dependencies the dev team has put in the "too hard basket" because the delta is too scary or difficult and too much code has ossified around the now-ancient version. Don't willingly do that to yourself if you can avoid it.
I get that, and it’s a good point. But at some point that easy patch/minor version bump becomes a major version with a breaking change, and that does take time to upgrade regardless, scary delta and such. My point is that, without an actual feature need or an actual vulnerability (none of these guaranteed to spring up in the future), any time spent upgrading is potentially wasted. I know some projects are unlikely to last beyond a few years; in those cases I think the risk is calculated enough to not matter too much.
It's down to engineering culture at that point. We have a weekly process where we merge those PRs, including any that are failing. It doesn't suck up much time at all, but our stuff is always well maintained with few surprises lurking. The side effect of this type of culture is high-quality test suites and pipelines that you have very high confidence in and that are executed frequently and quickly. It's overall been a far better experience than just letting stuff rot.
Any security work always involves a calculated risk. The risk here is that you will be forced to do a painful and error-prone upgrade at the time of the vulnerability, under pressure. You haven't done that too often, thus the process is unlikely to go smoothly. So there may be bitrot, lots of debt, and time pressure to put out a patch: a perfect storm for a lot of things to go wrong even if you don't get exploited. It also throws a wrench into your current schedule. This should be part of the risk calculation.
As a web developer I see so many CVEs in mature stacks, and every so often they really do apply to our work. It is hard to avoid updating, unless you kind of pretend those vulnerabilities don't exist or don't apply (honestly, the vast majority of devs and small orgs do just that). Even monitoring and deciding which vulnerabilities apply is a recurrent 'waste' of time; sometimes you might as well just do regular updates instead.
One issue I often see is that if you do your job well, any time sunk into security can by definition be seen as wasted. Until that rare moment comes when it is not so, and then it suddenly transforms from wasted time into a business critical or even business ending death crunch.
> My point is that, without an actual feature need or an actual vulnerability (none of these guaranteed to spring up in the future), any time spent upgrading is potentially wasted. I know some projects are unlikely to last beyond a few years; in those cases I think the risk is calculated enough to not matter too much.
You could make the same argument for any kind of code quality efforts. Frankly I think this site probably leans too far into a high-quality mindset, but apart from anything else good programmers won't want to work on a codebase that isn't seen as valuable and treated as such.
I have been building a platform https://github.com/claceio/clace for teams to develop Hypermedia based internal tools. One of the main criteria for the technology stack and the feature set has been making sure apps can be maintained easily, after six months and after six years.
Settled on using Go HTML templates, Starlark and HTMX. Go has a great track record of not breaking backward compatibility. Go templates are widely used by ops teams, any breaking changes there will cause ops teams to revolt. Starlark is somewhat widely used by build systems (like Bazel), any breaking changes there will cause build engineers to rise up in arms. The HTMX 1.9 to 2.0 upgrade was also painless, no changes required in my test apps. Only change required was to update the way the websocket extension is resolved.
React itself doesn't have many dependencies at all, but nobody uses React in isolation because it's only a rendering library / data flow; React was never pushed as a full framework.
I wouldn't dismiss the responsibility of culture in the JS dependency problem. I agree with a few points in the article but much of these dependencies aren't needed, they are preferred and importing is normalized.
Left-pad wasn't a problem because of browser constraints, it was a problem because of culture and to some extent discipline.
This doesn't excuse some authors who enjoy constantly rewriting their libraries just for the hell of it, consistently introducing breaking API changes.
react-router might be one of the best examples (or the worst, depends on how you look at it), and it's unfortunately very popular, even though sane and stable alternatives exist (like wouter).
There's a few solutions that the community won't do for you though. Reduce your dependencies; the author proposes htmx, which is an extension of "just use the platform". For a lot of websites / applications, you don't need these tools; you don't even need a build step, because plain JS is fast and compact, browser support for most features is fine, and intermediates like CDNs or whatnot can handle the "last-mile" optimization of assets if needs be, with HTTP 2/3 being the other newer factor that makes asset optimization / a build step less necessary.
Reduce your update frequency; a lot of the updates to these libraries are trivial, which is good in itself (fast updates and releases are good, many open source contributors are good) but leads to a high update frequency. Still, it's fine to run a month behind; actually critical issues are few and far between. If these projects have their semantic versioning correct, you should be able to see whether updating them once a month requires a lot of work.
The fear, which is justified, is that waiting too long with updates means these compatibility problems add up. Especially when the ecosystem was still figuring itself out and did major backwards-incompatible rewrites (remember Angular 2?) this was a major issue, but it seems to have eased off a bit. Last big one I've run into was when eslint decided to change its config format, and given ESLint's old config could get pretty convoluted already (especially in a monorepo with partially shared configuration and many plugins), changing that was effectively rebuilding the configuration from scratch.
Anyway. I frequently look to the Go ecosystem and attitude for things like this. And it's had an impact on the JS ecosystem too: it was only after Go came out and said "use gofmt, fuck your opinion on formatting and fuck spending time on trivial shit like that" that the JS and other ecosystems followed suit with e.g. Prettier and Biome. I unfondly remember peppering code reviews with dozens of "this single quote should be a double quote" and "there should be a newline there". Such a waste. Anyway, the Go ecosystem mindset is a healthy one. Go the language gets a lot of justified criticism and it's not for everyone / everything, but Go the mindset does a lot of things right or better, for less developer frustration, better future-proofing, and more maintainable software.
Well, designing a module to be a peer dependency and then not strongly favoring backwards compatibility is a choice. When you make that choice you're probably screwing your users, in the long run.
As a user of modules, if you can detect such module, you can choose not to use it, and save yourself all that future trouble.
Now. Let's see. How many times has react's major version number changed?...
Yes, it's not only react, but boy are they an enthusiastic leader of this approach.
We also have a generation of designers who "think in React" and don't approach the web like the web, but like a lesser form of mobile.
A designer who has a solid understanding of hypermedia and puts its principles first would be worth their weight in gold to a team who wanted to move away from the React ecosystem.
Yeah React itself is actually a very small part of the ecosystem that has evolved alongside React. And it's a mature ecosystem, therefore there's a lot of libraries to use that come at the cost of keeping them patched.
It's easy to think that a new tech stack is somehow more complete because there are fewer add-ons and no vulnerabilities have been discovered yet.
[1]: https://blucerne.app