It's been so frustrating to watch this play out over the past decade.
I keep seeing projects that could have been written as a traditional multi-page application pick an SPA architecture instead, with the result that they take 2-5 times longer to build and produce an end result that's far slower to load and much more prone to bugs.
Inevitably none of these projects end up taking advantage of the supposed benefits of SPAs: there are no snazzy animations between states, and the "interactivity" mainly consists of form submissions that don't trigger a full page reload - which could have been done for a fraction of the cost (in development time and performance) using a 2009-era jQuery plugin!
And most of them don't spend the time to implement HTML5 history properly, so they break the URLs - which means you can't bookmark or deep link into them and they break the back/forward buttons.
I started out thinking "surely there are benefits to this approach that I've not understood yet - there's no way the entire industry would swing in this direction if it didn't have good reasons to do so".
I've run out of patience now. Not only do we not seem to be learning from our mistakes, but we've now trained up an entire new generation of web developers who don't even know HOW to build interactive web products without going the SPA route!
My recommendation remains the same: default to not writing an SPA, unless your project has specific, well understood requirements (e.g. you're building Figma) that make the SPA route a better fit.
> And most of them don't spend the time to implement HTML5 history properly, so they break the URLs - which means you can't bookmark or deep link into them and they break the back/forward buttons.
The majority of routers for React, and other SPA frameworks, do this out of the box. This has been a solved problem for half a decade at least. A website has to go out of its way to mess this up.
That aside,
SPAs are great for a number of reasons:
1. You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.
2. You don't need a beefier backend. A SPA backed by REST APIs is super easy to scale. nginx can serve up static resources (the JS bundle of the site) LOLWTF fast, and stateless REST APIs are easy peasy to scale up to whatever load you want. Now you just have to worry about the backend DB, which you have to worry about with non-SPAs anyway.
3. Fewer languages to deal with. If you are making a modern site you likely have JS on the front end, so with a SPA you have JS + HTML. With another backend framework you now have JS + HTML + (Ruby|Python|PHP|C#|...), and that backend code now needs to generate HTML + JS. That is just all around more work.
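For what it's worth, the static-serving setup point 2 describes usually boils down to a few lines of web-server config. A minimal nginx sketch (paths, ports, and upstream names here are illustrative, not from any specific deployment):

```nginx
# Serve the prebuilt SPA bundle as static files; proxy API calls to a
# stateless REST tier that scales horizontally by adding instances.
upstream api_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;           # add more instances to scale out
}

server {
    listen 80;
    root /var/www/app/dist;          # the compiled JS/CSS bundle

    location /api/ {
        proxy_pass http://api_backend;
    }

    location / {
        try_files $uri /index.html;  # SPA fallback for client-side routes
    }
}
```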
I agree some sites shouldn't be a SPA: a site that is mostly text content consumption, please no. A blog doesn't need to be a SPA. Many forums, such as HN, don't need to be a SPA.
But if a site is behaving like an actual application, just delivered through a web browser, then SPAs make a ton of sense.
Your arguments seem to be assuming a particularly bad implementation of a traditional backend.
1. A good server-generated-HTML backend will have no more state than a good server-generated-JSON backend. The client state is all stored in the client either way, whether in JS variables, HTML tags, or the URL.
2. A good server-generated-HTML backend doesn't do significantly more work just because its output is in HTML instead of JSON. A bit of extra text generated isn't going to increase your CPU load in any meaningful way.
3. There are only fewer languages to deal with if you aren't in charge of writing backend code. If you're in charge of the backend code, you still have to pick a backend language for your JSON API.
I think you're assuming that the choice is "SPA" or "messy stateful monstrosity". It's perfectly possible to build a RESTful HTML-based API that is as clean and stateless as any JSON API. PHP's been starting each request with a clean slate for decades.
> Your arguments seem to be assuming a particularly bad implementation of a traditional backend.
Whenever someone says "this particular architecture is bad!" they're talking about a bad implementation of it.
The point is whether or not you're more likely to succeed at making a good app with an SPA or a multi-page site. For pretty much all brochure-style websites and many SaaS webapps you're more likely to achieve success (for every common understanding of success) by using a multi-page architecture, because they're usually simpler to implement, they work the way browsers expect things to work, and you don't need to implement some hard things yourself. You can make a brilliant SPA website for any purpose, but often people try and fail. Saying "you shouldn't have used an SPA" is shorthand for "you didn't understand or implement an SPA well enough, and now your web thing is failing to serve users as well as it should, and using a multi-page architecture would have avoided the problems your website has now."
You always have to fight with incompetency in any large codebase.
Incompetency exists the most in whatever the first thing that coders learn is.
I'm old enough to remember when that was C++; then it was Java, PHP, Ruby, jQuery; now it's React.
It's always a trade-off. You can build things in the "cheapest" language (whatever the first one currently is), but then you'll inevitably get the cheapest code.
That's really what this conversation is about in the long arc of coding
Skills and people are a pyramid. The more competency you demand the harder the people are to find.
We have this tendency to taint the tool by the users.
Incidentally after a language or tool loses "first learned" status it generally slowly regains its prestige.
We don't assume a C++ shop is a bunch of morons any more, or that using PHP means you write nothing but garbage. One day Vue/React/whatever will lose its first-language status as well, and I'll be here reading about something that might not have been invented yet being a trashy, bad, no-good idea.
Ultimately the technical merits are mostly cover for a conjecture of economic efficiency. There's a reason why people aren't defending things like applications built with Go/wasm bridges - those people are expensive.
The key here is that if we consider equivalently good and robust implementations, equally capable teams, the same UX, etc., of an SPA and of a traditional full-stack MVC application with a modern AJAX tool such as Livewire or Hotwire, the latter takes a fraction of the time and cost to build, and the result is far less complex and easier to maintain.
I've worked in both kinds of environments, and unless the reason is an offline-first app, dogma, or Google Maps... SPAs make absolutely no sense from the engineering point of view.
Multi-page forms without some front-end stuff end up very clunky: either "rerender previous form pages over and over again, except hidden", or "have some token to track partial form data", or "build up a DB to store a partially complete form".
With some frontend work you can have a multi-page form just work, with the data stored in the client up until final submission, and only sending in partial checks ahead of time. This is qualitatively easier to handle, in my opinion.
It also seems extremely uncontroversial that sending data for a single item is going to require less text generation than sending over that data + the entire page.
These are all gradients, but people make absolute claims that don't hold up in these arguments.
> It also seems extremely uncontroversial that sending data for a single item is going to require less text generation than sending over that data + the entire page.
And yet... so many SPAs feel so much slower than MPAs. They suck down MBs of JavaScript, constantly poll for more JSON and consume crazy amounts of CPU any time they need to update the page.
If you're on an expensive laptop you may not notice, but most of the world sees the internet through a cheap, CPU and bandwidth constrained Android phone. And SPAs on those are just miserable.
I also use a lot of "classic" websites where they fall over because of bad server-side state.
An example, a train reservation site, where I choose dates + a destination. The next page, it shows me some results. I decide to change the date. I hit the back button, and it falls over, cuz the state management on the server is messed up.
This happens a lot for me (this is mainly on Japanese websites), and it's extremely frustrating.
I don't like a lot of SPAs, I also don't like a lot of "classic" apps, but I do feel like SPA-y stuff at least demands less of the developers so the failure cases are a bit less frustrating for me. In theory.
And regarding the connections: the terrible websites with many megs of JS were likely terrible websites with many megs of HTML and huge uncompressed images before that... I don't want to minimize it (thank god for React, but old Angular bundles were the worst); I just think comparing like-for-like is important.
EDIT: thinking about it more though, it's definitely _easier_ to send giant bundles on certain websites.
Given how many times this discussion happens on HN, I feel like instead of the hypotheticals, people should make a list of actual websites in both domains so that comparisons and proper critiques could be made...
> I also use a lot of "classic" websites where they fall over because of bad server-side state.
> An example, a train reservation site, where I choose dates + a destination. The next page, it shows me some results. I decide to change the date. I hit the back button, and it falls over, cuz the state management on the server is messed up.
Any ideas on how to not fall into that pit when making a website/app/whatever?
I see your point, but managing state is not free on the client side either. Frontend frameworks usually come with some built in state management, but once it starts to be more complicated we often need to find a 3rd party library to manage it.
I agree that there are many cases where managing the state in the frontend is the preferred solution. Multi-page forms add complexity for both frontend and backend. Sometimes frontend is less complex, and other times the backend is simpler.
I'll add some comments on your statements regarding the backend. I'm not saying it is a better solution than managing state on the frontend in all cases. My point is that although it adds complexity on the backend, that does not necessarily mean that managing state on the frontend is simpler. It depends on the use case, but I think a lot of developers "default" to handling state on the frontend in ways that add much more complexity than a simple backend solution would.
> "rerender previous form pages over and over again, except hidden"
In that case, you would only render hidden <input> fields with the values, not the complete form. The code receiving "step x" of the flow would simply read the parameters from the request and include the values in the HTML.
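As a sketch of that approach (escapeHtml and the field names are illustrative, not from any particular framework), the step handler just re-emits everything collected so far:

```javascript
// Re-emit previously collected values as hidden inputs so the final POST
// contains every step's data. Escaping matters: the values are user input.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function hiddenInputs(collected) {
  return Object.entries(collected)
    .map(([name, value]) =>
      `<input type="hidden" name="${escapeHtml(name)}" value="${escapeHtml(value)}">`)
    .join('\n');
}

// Step 3's <form> would embed hiddenInputs({ email: '...', plan: '...' })
// alongside its own visible fields, so the final handler sees all of them.
```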
> "have some token to track partial form data"
Using <input> fields for this is much simpler. The final page would just read all variables as if they were posted from the same form, without having to generate/parse any tokens.
> "build up a DB to store a partially complete form"
Most frameworks have a built in "session" that abstracts this away. It may be stored in a database, file, memory etc. If you require distributed sessions, the framework often handles this transparently by just configuring the session manager to use something like Redis to store the data.
> "rerender previous form pages over and over again, except hidden"
There are modern ways to do this; see Unpoly, HTMX, Livewire, Hotwire, etc. You're comparing with an outdated view of what an MVC application looks like. It's like complaining about SPAs because of Backbone.js.
> "have some token to track partial form data"
This is called a "session"; the token you refer to can be a cookie, which comes by default, for free, with any MVC framework. Doing this any other way leads to either losing authentication on page reload or security vulnerabilities (storing a token in localStorage, etc.).
> "build up a DB to store a partially complete form"
Again, you can store partial data in a session, for free. As it comes by default with any MVC framework.
One of the key points here is that with any of the popular MVC frameworks you don't need to rebuild the wheel and the car from scratch, as with SPA frameworks; most of these things come for free, especially anything related to forms. This is something we're not used to having in the SPA world, and everyone has a different way to deal with it.
> Multi-page forms without some front end stuff
Nobody says there shouldn't be any frontend stuff; you still need it, of course. If fields are static between steps you can just render every step and toggle between different sets of fields using something like Alpine, no need to reload from the server. If fields are dynamic and need some kind of database lookup between steps, Unpoly or Livewire/Hotwire make this trivial.
Please, let's stop comparing top modern SPAs (Next.js/React) to Struts MVC from 20 years ago; it hasn't been like that for many years now.
I've built a multi-page form in an SSR app with a tiny dash of JavaScript. The form's children are divs. The Next button hides the current div and shows the next one. The final Submit button is just a regular submit.
If you get into more than 3 pages, this isn't a great approach for various reasons, but you don't need to reach for a framework the instant you have a multi-page form.
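The step logic in that "tiny dash of JavaScript" is small enough to show. A sketch, with the pure next/previous calculation separated from the browser-only DOM wiring; the element names are illustrative:

```javascript
// Which step to show next/previously, clamped to the form's bounds.
function nextStep(current, total) {
  return Math.min(current + 1, total - 1);
}

function prevStep(current) {
  return Math.max(current - 1, 0);
}

// Browser wiring (illustrative): each page of the form is a <div class="step">,
// all inside one real <form>, so the final submit posts everything at once.
//
// const steps = document.querySelectorAll('.step');
// let current = 0;
// document.querySelector('#next').addEventListener('click', () => {
//   steps[current].hidden = true;
//   current = nextStep(current, steps.length);
//   steps[current].hidden = false;
// });
```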
> Doesn't really sound like an application, but a website.
"text" here is as in "text/html", not as in "English-language copy".
> In a web app it can happen that you use it for an hour without the backend doing a single thing.
I would submit that this is an extremely rare case. The most involved web apps I interact with (say, Figma) are constantly syncing their state with the server. The simplest (say, TurboTax) save state as I move on to the next screen.
If you do have a case where you can pull that off, then by all means use an SPA. But it's weird to say that something isn't a web app unless it can go long periods of time without server interaction.
> Doesn't really sound like an application, but a website.
That's probably the key thing - most companies building SPA's don't really need an application but a website. There are many interesting products that need to be applications because of the functionality they need, but for every one such product there's at least a dozen that does not.
There is no black or white here. 98% of sites are somewhere in between those two things, and you could consider them one way or the other. Is Reddit a website or an application? Is a backoffice dashboard a website or an application?
The problem with SPAs is not the technology or the architecture itself. The problem is everyone thinks, by your own definitions, they are building an "app" by default.
I've already worked for several teams which struggle to get almost anything done, where everything takes ages to ship because of the fanaticism of using React for everything. God, some didn't even know you could submit a form without building a JSON API endpoint.
> 1. You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.
If this were true, you wouldn't need a REST API. I don't understand what you're trying to say here. When you make a REST call to get data, you instantly have two different sets of state: the client and the server. It's no different from SSR, it's just transmitted in a different data format (json vs html).
> 2. You don't need a beefier backend. A SPA backed by REST APIs is super easy to scale. nginx can serve up static resources (the JS bundle of the site) LOLWTF fast, and stateless REST APIs are easy peasy to scale up to whatever load you want. Now you just have to worry about the backend DB, which you have to worry about with non-SPAs anyway.
You do the exact same thing with SSR. Stateless shared nothing app tier instances. Been doing it for 15 years now.
> 3. Fewer languages to deal with. If you are making a modern site you likely have JS on the front end, so with a SPA you have JS + HTML. With another backend framework you now have JS + HTML + (Ruby|Python|PHP|C#|...), and that backend code now needs to generate HTML + JS. That is just all around more work.
You can use JS on both the frontend and backend. Or ClojureScript. Or TypeScript. I'm sure there's others. But yes, for many languages this is a potential negative of SSR.
> If this were true, you wouldn't need a REST API. I don't understand what you're trying to say here. When you make a REST call to get data, you instantly have two different sets of state: the client and the server. It's no different from SSR, it's just transmitted in a different data format (json vs html).
SSR means you don't have a clear representation of the client-side state (as distinct from the presentation) - by definition you render on the server and only serve the view layer to the client, whereas your data model only lives on the server. There will naturally be state in the client (e.g. form inputs), but you don't have a good representation of that in your model.
> You do the exact same thing with SSR. Stateless shared nothing app tier instances. Been doing it for 15 years now.
OK so where does the UI state live - not the long-term persistent entities, but things like unvalidated form input, which tab is enabled, which step of an in-progress wizard the user is on? Either you manage that on the client (at which point you're halfway to an SPA, and getting the worst of both worlds), or you manage it in the application layer on the server (in which case you have all the scaling issues), or you make every UI change go all the way into the data layer which has even bigger performance issues.
> If you need to persist past a reload then a few lines can save to localstorage.
Sure, and pretty soon you've got a dozen random little copies of bits and pieces of your state, all out of sync with each other.
> Anything more requires server-side calls anyway.
The issue isn't whether you need server-side calls (ultimately every webapp needs server-side calls, otherwise why would it be a webapp at all?), the issue is whether your framework can manage client-side state between those server-side calls. In theory you could create a server-side-rendering framework that was good at this. In practice, none of the big names has succeeded, and certainly not without significant costs. (I'd argue that Wicket does this well to a certain extent, but it comes at the cost of both relatively heavy server-side session state and significantly more network roundtrips than SPA style).
> This magical state that can only be managed on the client-side with a heavy SPA is a myth for 99.9% of sites.
On the contrary, 99.9% of sites have or could benefit from having some amount of client-side state. Any time you have a stateful UI, there's a usability benefit from persisting that. Any time you have so much as a form field - like the text box I'm typing in right now - there's a usability benefit from having that as managed state (I've lost comments because I closed the wrong tab or accidentally pasted over with something else), and in cases like this there would actually be a privacy concern with doing that on the server side.
In theory you don't need an SPA framework to do that. But in practice SPA frameworks are the only ones that do it well.
There's a usability benefit in reloading to reset state, and it's the common expectation when browsing. Regardless, if you do decide to add it, then it's a few lines of code to persist all forms on the page.
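Those "few lines" might look like this sketch; the JSON round-trip is pure, and the browser-only localStorage/FormData wiring is shown in comments (the storage key 'form-draft' is illustrative):

```javascript
// Serialize/restore form field values; this pair is what actually goes
// into and comes out of localStorage.
function saveDraft(fields) {
  return JSON.stringify(fields);
}

function loadDraft(stored) {
  return stored ? JSON.parse(stored) : {};
}

// Browser wiring (illustrative):
// const form = document.querySelector('form');
// form.addEventListener('input', () => {
//   localStorage.setItem('form-draft',
//     saveDraft(Object.fromEntries(new FormData(form))));
// });
// On load, put the values back:
// for (const [name, value] of Object.entries(loadDraft(localStorage.getItem('form-draft')))) {
//   if (form.elements[name]) form.elements[name].value = value;
// }
```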
SPAs don't automatically provide any state management, and often the complexity requires even more work to manage forms. This is the complaint here, taking a simple requirement and forcing a webapp into it. It's completely unnecessary.
Many SPA frameworks do provide state management. If you start from the idea that you want structured client-side state management for your webpages, you'll probably land on an "SPA framework". And if you're using such a framework, while I'll always advocate things like proper URLs and history (which the framework should handle for you), forcing a page reload when it's not needed seems pretty wasteful.
Regarding a REST call (#1) being out of sync... usually the "state" is in the database... if you're using SSR, it's still a separate context of state from what may be in the database a fraction of a second later... and if you wish to keep that in sync, you're still going to need JS, or some other goofy hacks to do so.
> You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.
This is a bit weird to me, in that I'd say that cuts in the opposite direction. State can exist in at least three locations for most applications: db, app server, and client. Keeping state consistent across all of them can be difficult in the best of times, but thin clients by their very nature carry less state, lessening the burden. Sometimes client state is necessary, for richer user interactions, but for all but the most cosmetic of purposes you're going to have to replicate that state on the backend anyway, to enforce business and security requirements.
> but for all but the most cosmetic of purposes you're going to have to replicate that state on the backend anyway, to enforce business and security requirements.
This really just comes down to what you're writing, how app-like your web app is. It's too easy to have one's own experience focused in a certain area and estimate the remaining majority as relatively similar. (For most of what I personally work on, the DB portion is mostly a simple straightforward serialization of what the user has built through the application; whereas client-side state has so many aspects to it I couldn't give a brief characterization—the whole app is basically client-side.)
From what I can tell most of the disagreement about SPAs results from devs who are building things that aren't app-like railing against their futility vs devs who are, who become perplexed by the vitriol when they have immediate experience with their architectural benefits.
> From what I can tell most of the disagreement about SPAs results from devs who are building things that aren't app-like railing against their futility vs devs who are, who become perplexed by the vitriol when they have immediate experience with their architectural benefits.
The SPA critics in the article and this thread have repeatedly said that their issue is not with building things that need the benefits an SPA architecture brings. The criticism is that the majority of SPAs are harmed by that architecture, because it is the industry default and is being used when it isn't appropriate.
I get that that's the biggest problem, but there are plenty of people in the thread talking about how they're a bad idea in general (including the comment I was replying to)—which incidentally lines up with the (apparently) clickbait title "SPAs were a mistake".
For the record, I was making a general point about one of the tradeoffs with an SPA vs MPA, not making the claim that MPAs are universally better than SPAs for all use cases. I think most reasonable people can agree that there are places where SPAs are called for and places where they're not. It's the ambiguous cases that draw the conflict, and psychologically the anti-SPA people focus on the really shitty ones and the pro-SPA people focus on the use cases that would be impossible in an MPA.
Also, for what it's worth, I've worked full-time on one of the largest SPA projects in the world (~500 engineers contributing frontend code to it on an average week), so this is not coming from a place of total ignorance.
I excerpted and replied to something specific from your comment which read to me as essentially "even the cases that seem to need it probably don't" which matches both the tenor of (many of) the replies and the title of the article. But if I misread you my apologies.
That’s the thing: they are a bad idea in general because, in general, people ARE building things that don’t benefit from an SPA architecture. You’re the one extrapolating that they are saying ALL SPAs are bad.
> You’re the one extrapolating that they are saying ALL SPAs are bad
This was not my extrapolation. But if you skim over a comment you're likely to perceive it as falling into one broad camp or another whether that's the case or not.
This is missing the real reason that people write SPAs, which is that React solved web components, which are hugely beneficial for almost 100% of web sites, and thus became the standard for building web sites. With React it's easier to make an "SPA" than to make a "traditional" site, and users don't know or care either way.
Users care, they just don’t know why many modern websites are bad websites. Every website is now an app, whether that actually makes it a better UX or not.
Web components actually aren't that good. Most seasoned and experienced developers I know hugely prefer either ASP.NET webforms or one of the many Java MVC implementations. We all know and use React daily, but I've literally seen the same application built faster with better maintainability and scalability once it was moved away from a SPA.
Wicket has offered a beautiful component approach for over a decade now. Having seen it I can't stand page-oriented MVC frameworks (indeed it's good enough that it convinced me that OO actually has some merit in some cases).
I used Wicket quite extensively about 10 years ago, so my comments may no longer be true. I began using Wicket because it was so much better than Struts and JSF. However, developing new custom widgets in Wicket was much more convoluted than implementing the same widget in Backbone.js. And it was hard to inject new functionality into an existing page. I eventually refactored all my UI code into jQuery + Backbone.js (this was before React), and that code is in production and still working. The new developers maintaining it don't see any reason to refactor it into React or Vue.
> However, the development of new custom widgets in Wicket was so much more convoluted than implementing the same widget in Backbone.js.
Hmm, I found developing custom widgets was a joy, though the key was to keep them very small and compositional - e.g. if you want a user details widget with an address entry, it's probably best to make that address widget its own smaller widget that the user details widget just uses - and aligned with your model hierarchy. Often you end up with a parallel hierarchy where e.g. you have a user model that contains an address and phone number, so you've got a user display widget whose model is that user object and in that there's an address display widget whose model is the address field of that user object (and similarly for your edit widgets).
I tried to look up examples of what doing the same thing in Backbone.js looks like, but the search results don't seem to be about making custom widgets as I understand it. To be fair I'm struggling to find examples in Wicket as well. But I'd be interested to hear what it does better.
> And it was hard to inject new functionality into an existing page.
Hmm, what kind of functionality do you mean? I will say that again making pages very compositional was key - my team settled on a pattern where most of our pages were just a single top-level panel (so you could always reuse or embed a whole page if you wanted to) and then most panels were made of a handful of smaller panels, similar to the clean code style where you try to make each function only call three or four other functions. And then it was easy to change whatever we needed because the code structure corresponded directly to the logical business/model structure and the inheritance structure corresponded to the visual structure (e.g. we had an abstract class for what an "editing panel" looks like and all our editing panels inherited from that. So if you want to add a new field to the user model, you add it in the user model and the user display panel and user edit panel are right next to that. And if you want to change the visual design of all our editing panels, you change that in the parent component and it will apply to all of them).
I agree with all your points, but I think it's worth pointing out that those benefits you mentioned are largely for the developers. As a consumer I love a well-written SPA when the problem set calls for it, but most of the SPAs I have to use are garbage. I don't fault the tech for that, although I suspect that a lot of those SPAs were created by "me too" people that just wanted to build a SPA. When React was in the pre 1.0 days, I did that, and several people on my team as well (so I'm not casting any stones here, just trying to state facts).
Last time I bootstrapped a React SPA, I don't think CRA (create-react-app) included a router out of the box.
As an example, look at reddit. I'm still using old.reddit.com because I can't stand their fancy SPA UI. It is so bad to the point, as a user, I enjoy a lot more HN's interface than reddit's one.
2. Hard to believe that is true in the general case.
The typical scenario for a SPA is to use some sort of REST API; these APIs are usually designed for general usage, not specific usage, i.e. designed to be reused between components and views, thus they basically return everything on a specific model regardless of whether the data is needed or not.
Therefore the controller queries the database with the equivalent of SELECT * on a table (or perhaps multiple tables with joins) and then exposes every field.
And in many cases one request is not enough, because of the common generic design of REST APIs; thus a few more requests are fired, resulting in multiple SELECT * queries against the database, and eventually the equivalent of a SQL JOIN is performed in JavaScript.
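That "JOIN in JavaScript" pattern, as a sketch over the JSON two generic endpoints might return (the field names here are illustrative):

```javascript
// Client-side equivalent of a one-line SQL JOIN: two over-fetched
// collections get stitched together in the browser.
function joinPostsWithAuthors(posts, users) {
  const byId = new Map(users.map(u => [u.id, u])); // build the join index by hand
  return posts.map(p => ({
    title: p.title,
    author: byId.get(p.authorId)?.name ?? 'unknown',
  }));
}

// Server-side, the same view is a single tailored query:
//   SELECT p.title, u.name AS author
//   FROM posts p JOIN users u ON u.id = p.author_id;
```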
Already the SPA solution has an increased cost from asking for data that is not currently needed, not only in the traffic between the database and the backend but also in the traffic between the backend and the frontend.
And because we want to be good REST citizens, we sprinkle the JSON payload with timestamps, resource URLs, pagination information and whatnot, in the majority of cases never to be used.
Compare that to SSR, where you can fetch exactly what you need from the database with a custom SQL query (I hope you do; otherwise the SQL leprechaun will pay you a visit).
Just imagine how much data there is on the web that is requested and then just discarded, never even looked at.
It is possible to design custom REST endpoints for each component, but then what is the point of a SPA? If you are already writing a custom endpoint, just return HTML instead of JSON, then swap out the old and swap in the new for your component (a one-liner); the end result is the same.
GraphQL is such a quality of life upgrade coming from this environment, especially at the scale where your frontend teams are potentially larger and shipping more than the teams closer to the SQL can provide.
GraphQL is a consequence of the SPA design, a bad design leads to a worse fix.
The drawback is that the frontend now has its own schema, often it starts as a naive direct mapping of the real schema.
Thus any changes in the real schema also need to change the frontend schema and every use of it, or the mapping to the frontend schema needs to change.
Eventually these two schemas will diverge, because it is not feasible for every schema change to result in a frontend change. Especially if the idea is to have two different teams working from either side: the backend team can't wait for the frontend team, so the schema mapping will change.
And the thing is, the frontend shouldn't be aware of how the backend schema is constructed. If the User model is separated into three different tables for some technical reason, that should not change how the frontend operates. The frontend's understanding of what a User is shouldn't be the same as the backend's.
Therefore, ideally, the frontend schema and the backend schema will always differ. They don't view the world the same way.
However what you now have is a slow mapper between the frontend schema and backend schema.
The point of relational databases is that you can view your data from different perspectives by running different SQL queries. That is already built in. But now we have invented yet another layer on top of SQL, usually in combination with that existing monstrosity called an ORM.
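The mapper described above might look something like this minimal sketch, where the backend splits a User across three hypothetical tables and the frontend schema wants one flat object:

```python
# Backend rows, split across tables for some technical reason.
accounts = {1: {"id": 1, "login": "ada"}}
profiles = {1: {"account_id": 1, "display_name": "Ada Lovelace"}}
settings = {1: {"account_id": 1, "locale": "en"}}

def to_frontend_user(account_id):
    """Map the backend's three-table view of a user onto the flat
    shape the frontend schema expects."""
    return {
        "id": accounts[account_id]["id"],
        "name": profiles[account_id]["display_name"],
        "locale": settings[account_id]["locale"],
    }

print(to_frontend_user(1))
```

Every backend refactor that touches these tables now has to thread through this layer, which is exactly where the divergence (and the slowness) creeps in.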
What a tortured usage of GraphQL. Schema files are automatically generated by the backend, and components pull the data they need, and no more. If you find yourself changing schemas constantly, then you're not defining them in a scalable manner. You've basically misused the tech, and blamed it on the tech instead of your misuse.
That is even more horrible than I thought. Automatically generating schemas 1:1 and then exposing them. Let me guess: tons of information leakage, DDoS attacks, and queries not hitting indexes. This is absolutely the worst idea I have come across in web development. Horror.
GraphQL is just another artificial solution to a problem created by SPAs themselves. Same as SSR, hydration, server components, client-side routers, dynamic bundle loading, dynamic translation loading, etc, etc, etc. A whole industry of workarounds for a broken idea. Now, 10 years after SPAs became popular, we are starting to approach a point where we almost have what we already had.
What I really don't get is why we don't just expose SQL directly at this point. Is it just security? Database servers have fairly extensive authentication and authorization models.
Authorization & access restrictions. Yes, you can go quite far with table/row/column permissions, but a lot of business logic cannot be modeled using just those (e.g. "a user cannot place orders if total outstanding invoice payments surpass value $X").
The combination of DB permissions, DB constraints, and simple (SQL, not procedural language) triggers gets you a lot, including the ability to enforce rules like the one you mention.
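For instance, the invoice rule quoted above can be expressed as a plain SQL trigger. A sqlite3 sketch, with an invented schema and $X hard-coded as 1000:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoices (user_id INTEGER, outstanding REAL);
CREATE TABLE orders (user_id INTEGER, amount REAL);

-- Reject orders from users whose outstanding invoices exceed 1000.
CREATE TRIGGER order_credit_check BEFORE INSERT ON orders
WHEN (SELECT COALESCE(SUM(outstanding), 0)
      FROM invoices WHERE user_id = NEW.user_id) > 1000
BEGIN
    SELECT RAISE(ABORT, 'outstanding invoices exceed limit');
END;
""")
conn.execute("INSERT INTO invoices VALUES (1, 1500)")

err = None
try:
    conn.execute("INSERT INTO orders VALUES (1, 50)")  # over the limit
except sqlite3.IntegrityError as e:
    err = e
    print("rejected:", e)

conn.execute("INSERT INTO orders VALUES (2, 50)")  # under the limit: allowed
```

The rule is enforced no matter which application (or ops user) talks to the database, which is the property being argued over in this subthread.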
Yes, you can enforce a lot through SQL triggers/stored procedures etc. But you often end up abusing your DB/SQL as a business logic layer, where your business logic is encoded in a huge set of row/column permissions and custom SQL triggers. This tightly couples your database into your whole business application stack.
Especially in Oracle PL/SQL, I've seen this abused to an extent where no one understood the whole business logic anymore (as logic was spread out across the frontend, middle-layer services, and DB mumbo-jumbo), and the database became a fragile core piece (with significant vendor lock-in) that hindered all sorts of future development.
Seriously, your business logic should be modelled in code, ideally in some sort of service layer (which does not necessarily mean microservices!).
> But you often end up abusing your DB/SQL as a business logic layer, where your business logic is encoded in a huge set of row/column permission and custom SQL triggers.
That's not “abuse”. Admittedly, it's no longer the essential best practice for most systems that it used to be viewed as, because it's now more common to have a single application which fully owns the database and to disallow access by any other means (at least in idealized theory, though some ops staff still end up with direct access to the prod DB). So in theory it tends not to be necessary in order to avoid either circumvention of rules or duplication of logic (which is inevitably inconsistent, as well as expensive to maintain).
Then why aren't we working on improving databases to allow for such complex rules, instead of wrapping them in another layer (often multiple layers) that does all this stuff?
Because a database should not hold your business logic. It should hold your data, and that it can do well. See also my other post on the parent for more reasoning.
I beg to differ: in many interesting cases we put at least some parts of the business logic into the database. The table design is a direct consequence of the business logic, and the same is true for constraints and triggers.
Why shouldn't it? As noted above, database servers already have most of this (security) logic in them - likely tested much better than whatever you can write on top of the database yourself. And given how many apps are basically just CRUD, why reinvent the wheel every time?
Mostly because the tooling is so bad. Or do you have unit tests, lint, and easy version control for your stored procedures? Are they written in a way that matches the rest of your programming at all? Can you import a randomly picked utility library?
The next obvious question is, why is tooling so bad then? And would it have been so bad if we invested as much into RDBMS as we did into Node.js web-frameworks-of-the-day.
Writing the next modern elegant artisanal javascript webshit is a lot easier than making a sandboxed programming environment that integrates tightly with a production quality database engine and has a good story for testing, deployment, debugging, etc.
Even if you solve the security issue, a query with a complex join can easily bring down the server.
This could be solved by only exposing stored procedures, but that just moves the code to the database server instead of the REST service with the same problems as before.
You can still get performance issues with a view if you "select *" on a large amount of data, or join with other views. By exposing SQL to a web page, you also open yourself up to DDoS attacks more easily, since clients can write arbitrarily complex SQL queries.
You can get the same problems with GraphQL or stored procedures too, of course, if the queries are not optimized correctly.
You wouldn't believe how often companies use Excel with external data sources like this. Excel is basically the common UI for a lot of people.
And most of the projects I worked on in my professional career as a webdev were replacing such workflows with a proper web application, because Excel does not scale and eventually people fuck up their data.
> The majority of routers for React, and other SPA frameworks, do this out of the box. This has been a solved problem for half a decade at least. A website has to go out of its way to mess this up.
They might not mess up history when using a standard routing library, but I've seen plenty of devs forget to add unique titles to different pages which is frustrating for a user with multiple tabs going.
On SO the accepted answer for react-router looks like "create a custom Page component with title as a prop"[0]. At work I just ask folks to use react-helmet.
> 1. You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.
Why would I want to keep any state on the client? What in the history of the web (the whole idea being that it's someone else's computer) would make it a good idea to take away the one major selling point of the web? That no matter what, someone else has the state I need, and I never have to worry about losing it if something happens to my connection. It either went through or it didn't.
> The majority of routers for React, and other SPA frameworks, do this out of the box. This has been a solved problem for half a decade at least. A website has to go out of its way to mess this up.
99% of SPAs break history related features in some way.
I can say I've seen a lot of state management issues with SPAs... more with Angular than React, and almost none when using React+Redux well.
I think a part of this is that a lot of developers simply don't desire, want to, get to or otherwise take the time to understand the framework they are using... It has been true forever... I can't tell you how many times I've seen stuff copy/pasted from StackOverflow, by devs that don't understand what they're doing, or they add jQuery to a React application, and have goofy interactions.
The lack of understanding will always be a thing, you have to learn, most learn by doing, and when starting out, you don't know that what you are doing isn't good, but it kind-of works.
Last time I used redux, about 4 years ago, every tutorial on it demonstrated a completely different way of using it.
I spent a week piping a couple dozen form inputs through redux.
Throw typescript in there and life got more complex.
Maybe it sucks less now. But I've seen plenty of websites where every key press causes crap tons of state to get copied around because "lol const only". I've seen sites where typing takes a second per character due to misuse of redux, and the problem with redux is that it is easier to misuse than to use properly.
The biggest issue I've seen with things like that, is certain actions with form validation can have unexpected surprises on keypress... so depending on how you're doing form validation, that is usually what will throw off the timing and things drop to a crawl.
Often, if you have a form action button, separate from your validation, best to update state as part of on-change or isolate form state until the action button itself is pressed to push to the redux state.
But I do understand the sentiment... I've run apps, and even forms with some relatively complex and large state via redux without much issue. The biggest hurdle is often getting everyone working on something to understand how redux works, and how the difference comparison works for state changes. Also, dealing with when/where an action should be created/dispatched, how to use the thunks for async handlers, etc.
I just don't know how it could be made any easier to not mess up. You really truly do need to go out of your way to mess it up, or be entirely unfamiliar with the JavaScript routing framework or library you're using.
They do, but it's usually the anti-SPA people who mess it up, by refusing to work with the grain of their tools. It's not quite the same thing as "strategic incompetence" but it feels related.
Client-side routing for page-oriented stuff is certainly not a solved problem: the basics, sure, but not actually doing it properly. There are some parts of the experience that it’s not possible to do perfectly because the web doesn’t expose the necessary primitives, and exceptionally few things go beyond the basics of just clobbering and resetting scroll position on back/forward. To do it properly, you need to restore all transient UI state (form field contents/state, scroll positions, focus, selection, media playback position; zoom level, probably not implementable; and there may be more, though I don’t include things like <details open> as transient state since that’s put into the DOM) on back/forwards, and I don’t know if I’ve seen anything actually do that. Then there’s the matter of helping accessibility tech to realise a page change has occurred, and I’m not sure of the state of the art on that, but last time I looked (some years ago) I think it was bogged down in unreliable heuristic land rather than actually being solved.
1) JS history handling is fragile. A single error can break navigation completely. There's no built-in loading indicator so sites are left with no feedback or have bloated progress bars. And nothing automatically solves for deep links if the app doesn't use routes for different views or relies on other events instead of hyperlinks.
2) Servers are very fast and assembling HTML is trivial. Browsers are optimized for downloading, parsing and rendering HTML as it streams in. Using JS to write HTML after making multiple network calls is objectively slower than a single network request that assembles everything on the server close to the datastore with minimal latency.
3) Every other language is faster and more capable on the server than JS, and all major web frameworks have modern component-based UI templating. Interactions with roundtrips are just fine, and some light JS can handle most other scenarios.
> "an actual application"
That's the only reason to use a SPA, not what you mentioned.
I don’t work on front-end and am trying to learn from this thread, so I may misunderstand, but that doesn’t look like an advantage to me. Doesn’t “beefier backend” imply “higher costs”?
We recently went through a rewrite of our frontend for https://www.crunchybridge.com from an SPA to a more "basic" request/response app and couldn't be happier. Previously it was an SPA with React, and we rebuilt it from scratch as a request/response app using Node. In places we still leverage React components for reusable frontend bits, but no more SPA and state management.
As you've mentioned in some of your other threads on this, the state management and sync between the API team and the front end team just caused velocity to slow down. It took longer to ship even the most basic things. While we want a clean and polished experience, the SPA approach didn't really accomplish any of that for us.
The rewrite was under 8 weeks of an app that had been built up over a couple years and we quickly recouped all that time in our new found velocity.
My knowledge is limited on the front end. May I know which Node framework you use? Is it NextJS? If not, what do you think about using NextJS? I really consider it a better approach and want to use it in new projects.
We built a product in ~5 months with real-time collaboration, extensive interactivity, Oauth, Stripe and Gmail integrations with a standard Ruby on Rails stack.
It's rock-solid, performant, dead-simple and extremely productive to work with.
Why're we throwing away years of learning to build unstable, complex and inaccessible applications?
As an ex-member of a team who used react, redux, typescript, observables, epics, thunks, custom home-grown validation libraries, websockets and elixir deployed in two different microservices to build a... signup wizard... I can confirm this.
I proposed building it in Rails (which we already had, but was the "old monolith we're migrating away from") and I almost got crucified.
That's a story I'm familiar with, but I am not actually aware of any (major, commonly used) tools that were created out of boredom. I only see instances of people using existing tools when they are not necessary out of boredom.
The more complex products are the only ones that typically have any documentation or up to date learning resources.
You want to learn how to build a thing and this is the only thing that really exists, is up to date, and works.
It may not be the right tool, but for someone new it's impossible to tell what the right tool is, and people online are stereotypically obtuse about anything tool-related.
lmao yeah pretty much. I'm moving from a low code shop to Node/Vue because I can't keep people. They all want to pad their resume, so I'm going to build at 2x the cost just so I can keep the projects going.
Same experience here, in our case with Laravel. The project started as a Next.js SPA and after we needed to add authentication, translations and background jobs things became so crazy and so "custom" that we ditched it and in almost 2 weeks had everything built in a much more robust way with Laravel and Livewire + Alpine.
Because the majority of developers will gravitate towards tools that will give them the best employment opportunity, not necessarily the best tools for the job.
One could argue Rails is just doing a decent job of hiding a monstrous amount of unnecessary complexity from you for basic CRUD stuff. It's good at this… until it isn't. And the whole ORM abstraction (not just in Rails) is questionable.
The way most of us would handle authorization in something like rails is a leaky abstraction, especially when we’re usually backing onto postgresql which has very mature roles and permissions.
I always thought of the benefits of SPAs more as a separation-of-concerns thing. You can pretty effectively build a functional front-end web application and mock a set of back-end REST APIs while another team builds out the back-end. There are absolutely tradeoffs, and being a good software engineer is about understanding where and when those tradeoffs apply.
That's definitely true at the organizational level, and it's an argument with some merits.
In practice though, I've seen this backfire. You end up with the frontend team blocked because the API they need isn't available yet, and then the backend team gets blocked because they shipped the API but they can't use it to deliver value because the frontend team don't have the capacity to build the interface for it!
My preference is to work on mixed-skill teams that can ship a feature independently of any other team. I really like the way Basecamp describe this in their handbook: https://github.com/basecamp/handbook/blob/master/how-we-work... - "In self-sufficient, independent teams".
that sounds like a mismatch between the architecture and how work is getting planned no? if the backend is in the critical path to delivering the user value of a feature then the backend and frontend engineers need to be developing (and testing) the feature together
They ALWAYS need to be building and developing the feature together or this happens. Decent API design without deep understanding of Client implementation or performance needs is nearly impossible.
They generally should all be in the same team, but that often doesn’t scale.
Not having them in the same team pretty much never works well though either.
It's not about being unique, or what you can/can't do. You certainly can mock a front end with a ssr app, but it gets messy when you are building a rich client app and need to start sharing state back and forth.
Why not eliminate that organizational bottleneck by using a full-stack framework that lets one person do it all? DHH recently described Rails as a one-person framework [1]. I think Phoenix fits that category as well.
When SPA started picking up steam, I thought it was an amazing development! We had gone from mainframes, to personal computers, and were back to mainframes and using our powerful machines as glorified dumb terminals. This way, we could have UI code running locally, and servers only handling state. Plus, less data to transfer!
Then the frameworks ballooned in size. What previously was seen as wasteful (rendering and sending HTML) started to seem pretty frugal in comparison to the multi megabyte pages. Not to mention that one could always send just page fragments.
Other than specialized apps, I think most single page applications are a mistake. Sure, some may benefit from a nice UI - say, I'm writing a 3D modeler. But most apps there are could just re-render pages. 'Refresh' is not much of a problem in an age where simple REST API calls are returning megabytes of JSON data...
I try to tell myself "don't get caught up in using a fancy frontend framework on this one" as I'm starting a new project, but I keep running into situations where my functionality would just work so much better with one.
As an example, I was writing a tool the other day to automate some things that have to do with quotes for my 9-to-5. Being able to add inline functionality in Django to select a customer within the quote page, or add / edit a new customer without having to leave that quote felt very 'hackish,' using the same jquery callback method used in Django Admin. My point is, this feels like very basic functionality, but turned into a whole other ordeal using traditional methods.
> Being able to add inline functionality in Django to select a customer within the quote page, or add / edit a new customer without having to leave that quote felt very 'hackish,' using the same jquery callback method used in Django Admin.
Agreed. For form based apps I don't like to fall back to SPAs (bloat, the desire of every dev to reinvent forms in their framework, client and server side validation duplication), and yet working with relational data they are easier.
It's one of those places where a half-way step would be so useful.
That is something I hadn't really taken the time to compartmentalize and articulate, but a js framework that focused on forms only would be wonderful. I'm sure that someone has taken a stab at it. Something like crispy-forms that added the ability to add components for variable data such as inlines...
I'm guessing that Vue.js may be a good drop-in for this, but it has been a while since I have used Vue.
I initially thought part of the appeal was offloading the workload to the front end, where your processing power scales infinitely with each user's device. Maybe the benefit turned out to be negligible, I'm not really sure. Can server costs be reduced by offloading the work to the front end?
They absolutely can, if your workload is ideal for this situation, but unfortunately, the most "expensive" (in terms of time, money, computing power, you name it) part of giving a user information is typically the filtering and collation of that information from a much larger pool of information — almost always a pool of information that is far too big and too private to just send to the client to sort through locally.
Even in the most simple scenarios, you quickly find your limits. If you get data back, but it's paginated (and it almost always has to be, for basic reliability reasons as much as anything else), you can't be guaranteed to have the complete set of data in a given circumstance, so you can't perform operations like filtering, pivoting, or sorting that data locally. You have to ask the server to do this for you and wait for the response, just like we've had to in the past.
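Concretely: with server-side pagination the client only ever holds one page, so a global sort or filter has to be a fresh round trip with a different query. A sqlite3 sketch (table and page size invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item-{i:03d}") for i in range(100)])

PAGE_SIZE = 10

def fetch_page(page, order_by="id"):
    """Server-side: sort over the full table, then return one page."""
    assert order_by in ("id", "name")  # whitelist; never splice raw user input
    return conn.execute(
        f"SELECT id, name FROM items ORDER BY {order_by} LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE)).fetchall()

page0 = fetch_page(0)
# Sorting page0 locally only reorders these 10 rows; a correct global
# sort means asking the server again with a different ORDER BY.
print(len(page0))
```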
Dynamic loading of content is a feature of SPAs, but it's not a defining feature, nor unique. In fact, one defining feature of SPAs is the offline capabilities (service workers, caching, etc.), which sits at a bit of a tangent to database considerations like this.
If you can actually offload substantial CPU cycles to the client, yes, you'll save server costs. But the SPA hype has led to a lot of SPAs that work like this:
> User clicks a tab. A request to server fetches the JSON data for the tab. Client renders it to HTML. User fills in some fields and clicks submit. A request to server sends the JSON form data and gets a JSON response code. Client shows a confirmation screen. ...
In this case, you're not saving much by templating JSON on the server instead of just templating HTML.
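To make that concrete: if the server is assembling the response either way, emitting JSON for the client to render versus emitting HTML directly is a small difference in server code. A deliberately minimal sketch with made-up data:

```python
import json

row = {"customer": "Ada", "total": 42.0}

def render_json(data):
    # SPA flavour: ship JSON, let the client template it into HTML.
    return json.dumps(data)

def render_html(data):
    # MPA flavour: template the same data into HTML on the server.
    return f"<tr><td>{data['customer']}</td><td>{data['total']}</td></tr>"

print(render_json(row))
print(render_html(row))
```

Either way the server does the data fetching and the serialization; the SPA variant just adds a client-side rendering step on top.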
This was one of the appeals initially, and would certainly still be true if you were doing something very processor intensive that could securely be done on the client.
Languages/runtimes have gotten faster and more optimized, while hardware has continued to move forward. It's also far easier now to add more backend instances using orchestrators like k8s, so it's less of a big deal to have to add replicas.
> Can server costs be reduced by offloading the work to the front end?
I would say yes. One significant benefit of SPAs is that you can produce fairly complex applications without any server logic, only static hosting. The workload is essentially offloaded to the build process and the front end. You still need to carefully consider the effect on e.g. low-powered and JS-disabled devices... but these are straightforward considerations.
>Not only do we not seem to be learning from our mistakes
That is a lot of good faith. What if they were not mistakes, but a deliberate attempt to push JavaScript as the one and only de facto approach to web development, and Resume Driven Development?
I recently asked this [1],
I don't want to name names, but does any tech company actually apologise after their high evangelism to the world and industry, and walk back 70% of their decisions five years later?
And for some strange reason this mostly happens to Web Development in general.
I know people for whom the traditional way of building a web app is completely foreign. I am curious how you would describe the concept and tools to someone who has never encountered them before outside an SPA architecture.
>....taking advantage of the supposed benefits of SPAs: there are no snazzy animations between states....
If that's the main benefit, let's hear it for MPAs. I want a website that's fast, responsive, clicky, sharp and to the point - not some soft-focus pastel cartoon movie. As the author says, that's fine for audio/video sites (and reasonable for other entertainment-focussed sites) - for information sites it just gets in the way (animated elements - especially persistent ones - are a terrible idea when trying to concentrate on textual content).
For some reason designers like it whether it makes sense or not, so that's what you get. The current design trend is just a shit show: nuking usability for nearly no benefit, IMO.
It’s all about state management IMO. There are legitimate reasons to keep UI specific complex temporary state on the client that would be more complex (and slower) if the server needed to hold it. So an SPA or at least partial SPA in some situations does makes sense.
But it does tend to become a hammer for every screw over time…
> It's been so frustrating watch this play out over the past decade.
> I keep seeing projects that could have been written as a traditional multi-page application pick an SPA architecture instead, with the result that they take 2-5 times longer to build and produce an end-result that's far slower to load and much more prone to bugs.
It's been frustrating seeing the web platform not play out, seeing so little growing into SPAs, so little maturing.
URL-based routing is heavily under-represented, tacked on only by the one or two blokes who happened to have some memory of web architecture. It clarifies the architecture both internally & externally.
As bad a problem: single-page apps being stuck, forever, at single-bundle apps is phenomenally sad. Splitting bundles into chunks as a manual development task is so hard, so bad. The goal of having web-based modules almost made sense, almost happened, but we radically underinvested in transport technology, with cache-digest going undelivered. I continue to think JS modules, with import maps (the key tech to making modules modular), are worth it and would help make our architecture so much better. There is a mild extra time to first load, but it's small and worth it, & cached afterwards.
Again we're damned though.
Years too late to try & see how excellent it would be to have something like React cached & AOT-compiled from a CDN. Because now privacy-concern freak-outs mean this huge advantage of only needing to pull & potentially compile a JS module once is gone: site-partitioning rules. We could have had better architecture, been using the language rather than absurd bundlers, and enjoyed high cache hit rates for common libraries. SPAs just didn't care, never tried at all; we all (almost all) did a horrible job & took way, way too long (over a decade) to make modules modular & usable. There was so much hope & promise & such absurd non-delivery, on the module front, on app architecture.
HTTP/3 and Early Hints still hold some promising hope for accelerating our transports, making "just modules" a possibility & fast, without careful hand optimization. We could still do more to optimize bundles, with automated tools that analyze up-front versus on-demand dependencies and build HTTP bundles of these. But I hope it eventually won't be necessary to build web package bundles (nor, far worse, webpack bundles).
SPAs still have great potential. More so, now that we finally have some support tech for modules forming.
I mean, of course the co-creator of Django would say this.
I wouldn't recommend the newer generation of developers build traditional web apps, let alone use jQuery.
I don't understand why people think we're still in the age of form submissions and blog posts. There has to be a good majority of us here that has worked on something complex that required an SPA, no?
Not only would it be detrimental to a young developer's career, in regards to hiring, to suggest avoiding SPAs, but limiting that developer to creating blog-post-styled content is severely restraining.
Let them develop their blogs in SPAs, at least when they are needed to go into something a bit more complex, they at least have the foundational knowledge required to move towards that.
What you're suggesting is to learn two things, (one that is inevitably being phased out), and spend the mental effort to discern when to use either one, when the more beneficial alternative is to learn SPAs and just go with it.
No 18-25 year old is trying to make a weblog where walls of text are the main content. YouTube shorts, Instagram reels, TikToks and all this bite-sized content have done a great job of destroying that level of attention span.
They're going to be building something else, something quick and visual, something pleasing to the eyes, and more often than not, it's going to require an SPA.
> There has to be a good majority of us here that has worked on something complex that required an SPA, no?
You can still create something complex without using any of the common SPA techniques; instead you can use things like Hotwire, Livewire, or htmx.
I'd say it is very practical for hiring, as you will need just 1 or 2 developers who know JavaScript and (Rails|Laravel|etc.) for every 4 or 5 you'd need otherwise (some who know JS, some who know backend, and some to coordinate/manage them).
If you consider that native desktop and mobile applications are siloed applications that coordinate with an API to achieve tasks - this is basically how SPAs or serverless MPAs work.
Part of the reason this is effective is because of the low cost nature of deploying applications like this.
For example; I can write a calorie counter that stores records in the client via indexeddb.
Given all the work is processed on the client, using an HTTP server would be an unnecessary maintenance burden, as it would simply serve static files.
Rather than host the web application via a self managed http-server, I can just put my html files on S3 making hosting it free and unmanaged.
Should I decide I need to add user accounts and cloud storage - well I can then create a backend that exposes API endpoints to facilitate the tasks.
Those endpoints are then compatible with native applications, should I decide to write native mobile and desktop variations of my web application.
Furthermore, with Web Assembly expanding to offer the ability to write web applications using languages like C++, Rust, C#, Golang and the browser expanding access to OS subsystems like filesystem access - what we are seeing is that the browser is becoming a sandboxed UI toolkit, much like GTK or QT (except without native styling).
If there was anything that would empower Linux Desktops to be compatible with productivity software - it's progressive web applications.
Consider that Photoshop and Office are accessible on Linux via web today.
That's a lot of interfaces to create and maintain. The principal value of backend/MPA frameworks like Rails & Django is that they give you nearly all these interfaces for free in a neat package, which ends up being "good enough" for many use cases below Google scale.
A calorie counter using IndexedDB is a great example of something where an SPA is appropriate - like I said, "default to not writing an SPA, unless your project has specific, well understood requirements (e.g. you're building Figma) that make the SPA route a better fit".
I mainly work in the world of database-backed websites and applications, where going client-only without a backend isn't an option.
You're describing an actual client-side (mobile or desktop) application made with web technology, not a web application. That's a fair use of SPA tech.
As soon as you need authentication, need to show data across users, let visitors see shared data, validate inputs on the server, or send notifications when other users act, you're back in SPA hell.
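The validation point is worth unpacking: client-side checks give instant feedback, but the server must repeat them because the client can't be trusted. One common way to avoid drift is to write the rules once as a plain function and run the same code in the browser and on the server. A sketch, with hypothetical field names:

```javascript
// Shared validation rules for a calorie entry. This same function can be
// bundled into the browser code (for instant feedback) and imported by the
// server (the check that actually counts for security).
function validateEntry(entry) {
  const errors = [];
  if (typeof entry.food !== 'string' || entry.food.trim() === '') {
    errors.push('food is required');
  }
  if (!Number.isFinite(entry.kcal) || entry.kcal < 0) {
    errors.push('kcal must be a non-negative number');
  }
  return errors; // empty array means the entry is valid
}
```

Sharing the function removes the duplication, but not the requirement: the server-side call still has to happen, whatever the client claims to have checked.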
I think what it comes down to is: if you have to ask "should I build an SPA?", the answer is no. The web is good at doing pages. If your app has pages, use the web's default page mechanism.
And I don't say this as a hater of single-page apps. I love webapps, and I think that building a webapp should be the default for most cases. There are a lot of apps that don't naturally break into a "page" metaphor, and all the technologies that are part of the single-page app concept are great for those cases.
Figma is a perfect example, because there's no obvious division between what would be one page versus another. It's not a paginated website that has been built as an SPA; it's literally just one page with a whole bunch of interactivity.
The mistake is forgetting first principles. You don't do something good by focusing on what not to do (like "don't be evil"). Focus on what to do: KISS, YAGNI, etc.; even DRY is a far lower priority. Software, and frontend/web especially, is rampant with problems that come from operating as an echo chamber. Just consider what Ryan Dahl did with Deno, and what he had to say criticizing Node, his first project. Yet folks are still wildly supportive of the older technical decisions and go to great lengths to preserve those same mistakes.
Idk, I don't think the problem is the SPA itself; it's bad design patterns that make it terrible, as you say.
I think really clean, performant SPAs can definitely be written, and the overall experience of using an SPA can be much better than a multi-page site if the task at hand requires it.
There should be two parts of the web now really:
* Traditional multi-page websites
* SPAs that could have been a native app on the device, but are much more accessible in web form and without requiring an install
Speaking for myself, I find it much quicker and easier to build an SPA than a server-rendered app. You seem to take the stance that server-rendered is the default, normal way to architect, and that an SPA requires justification for its aberrant departure from the norm.
SPAs have lots of advantages: fewer languages to learn, easier to deploy, etc.
> SPAs have lots of advantages: fewer languages to learn, easier to deploy, etc.
The two examples you give are only true if you don't have a backend at all. As soon as you have a backend, you're back to having to pick a backend language and deploy a backend server.
If your app doesn't need a backend, then I'd agree that an SPA is the way to go.
That just sounds like poor-performing developers to me. You don't need snazzy animations between states. Just not reloading the full page is a benefit.
That's what I'm saying: I maintain a complex web app that is built with PHP-templated HTML with jQuery sprinkled in as needed to make things more interactive. It's not perfect, but it's a far cry from the nightmare that people always seem to imagine when they think of "the jQuery days".
Would I use jQuery if I redid it today? No. But this app does prove to me that progressively-enhanced HTML is a valid path to an app today; it doesn't need to be an SPA.
Don't default to building an SPA.