It's kinda funny to me that many of the "pros" of this approach are the exact reasons so many abandoned MPAs in the first place.
For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.
Also, there was a push to move the shitty code from the server to the client to free up server resources and prevent your servers from ruining the experience for everyone.
We moved away from MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.
But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.
I remember that all the web shops in my town that did Ruby on Rails sites efficiently felt they had to switch to Angular about the same time and they never regained their footing in the Angular age although it seems they can finally get things sorta kinda done with React.
Client-side validation is used as an excuse for React but we were doing client-side validation in 1999 with plain ordinary Javascript. If the real problem was “not write the validation code twice”, surely the answer would have been some kind of DSL that code-generated or interpreted the validation rules for the back end and front end, not the fantastically complex Rube Goldberg machine of modern Javascript, where you wait wait wait wait and wait some more for the build machine, then users wait wait wait wait wait for React and 60,000 files worth of library code to load, and then wait wait wait wait even more for completely inscrutable reasons later on. (e.g. amazing how long you have to wait for Windows to delete the files in your node_modules directory)
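Something dead simple along these lines would have covered most of it (a sketch only; the rule names and shape here are made up):

    // One declarative rule set, interpreted on both sides.
    const rules = {
      email:    { required: true, maxLength: 254, pattern: /^[^@\s]+@[^@\s]+$/ },
      nickname: { required: true, maxLength: 40 },
    };

    function validate(field, value) {
      const r = rules[field];
      if (!r) return null;
      if (r.required && !value) return "required";
      if (r.maxLength && value.length > r.maxLength) return "too long";
      if (r.pattern && !r.pattern.test(value)) return "bad format";
      return null; // passes
    }

    // Browser: call validate() on blur/submit for instant feedback.
    // Server: run the exact same validate() over the posted body before persisting.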
Even worse: Client-side validation and server-side validation (and database integrity validation) are all their own domains! I call all of these "domain logic" or domain validation just to be sure.
Yes, they overlap. Sure, you'll need some repetition and maybe, indeed, some DSL or tooling to share some of the overlapping ones across the boundaries.
But no! They are not the same. A "this email is already in use" check is server-side (it depends on the case). A "this doesn't look like an email address, did you mean gmail.com instead of gamil.com" is client-side, and a "unique-key-constraint: contactemail already used" is even further down.
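When you do need that lowest layer, it's mostly just translating the database's complaint upward. A rough sketch, assuming Postgres (unique_violation is SQLSTATE 23505) and an Express-style handler; db and res are stand-ins for whatever you actually use:

    try {
      await db.query("INSERT INTO contacts (email) VALUES ($1)", [email]);
    } catch (err) {
      // Postgres reports a violated unique constraint with code 23505
      if (err.code === "23505") {
        return res.status(409).json({ error: "this email is already in use" });
      }
      throw err; // anything else is a real bug
    }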
My point is that the more you sit down (with customers! domain experts!) and talk or think all this through, the less it's a technical problem that has to be solved with DSLs, SPAs, MPAs or "same language for backend and UI". And the more you (I) realize it really often hardly matters.
You quite probably don't even need that email-uniqueness validation at all. In any layer. If you just care to speak to the business.
unfortunately this also needs to be done server side, unless you're trusting the client to send you information that is what you're expecting?
client side validation makes for a good user experience, but it does not replace the requirement to validate things server side, and many times you will end up doing the same validations for different reasons.
If it's merely a hint for the user (did you make a typo?) there's no need to ensure "this is a valid email address". In fact: foo@gamil.com is a perfectly valid email address, but quite likely (though not certain!) not what the user meant.
I've seen hundreds of email-address-format validations in my career, server-side. The most horrible regexps, the most naïve assumptions[1]. But to what end?
What -this is a question that a domain expert or business should answer- does it matter if an email is valid? trump@whitehouse.gov is probably valid. As is i@i.pm[2]. What your business expert quite likely will answer is something along the lines of "we need to be sure we can send stuff so that the recipient can read it", which is a good business constraint, but one that cannot be solved by validating the format of an email. One possible correct "validation" is to send some token to the email, and when that token is then entered, you -the business- can be sure that at least at this point in time, the user can read mail at that address.
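In code, that whole "validation" boils down to something like this (a sketch; saveToken, lookupToken, sendMail and markConfirmed are placeholders for your own storage and mailer):

    import crypto from "node:crypto";

    async function startEmailConfirmation(address) {
      const token = crypto.randomBytes(16).toString("hex");
      await saveToken(address, token, { expiresInMinutes: 30 });
      await sendMail(address, `Confirm your address: https://example.com/confirm?t=${token}`);
    }

    async function confirmEmail(token) {
      const address = await lookupToken(token); // null if unknown or expired
      if (!address) return false;
      await markConfirmed(address);
      return true; // at this moment we know somebody can read mail at that address
    }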
[1] A recent gig was a SaaS where a naïve implementor, years ago, decided that email addresses always had a TLD of two or three letters: .com or .us and such. Many of their customers now have .shop or somesuch.
[2] Many devs don't realize that `foo` is a valid email address. That's foo without any @ or whatever. It's a local one, so rather niche and hardly used in practice; but if you decide "I'll just validate using the RFC", you'll be letting through such addresses too! Another reason not to validate the format of email: it's arbitrary and you'll end up with lots of emails that are formatted correctly, but cannot be used anyway.
It doesn't matter if an email has a valid format: that says almost nothing about its validity. The only way you can be sure an address can receive mail (today) is by sending mail to it. All the rest is theatre.
And all this only matters if the business cares about deliverability in the first place.
No, I understood your point and I agree sending the email and getting some type of read receipt is necessary.
You seem to think that because of this client validation should be skipped. On that point I disagree. If you can tell that it's not a valid email address (bigtunacan@goog is obviously invalid since it's missing a TLD) then no email should be sent. Good UX is to let the customer/user know there is a mistake in the email address.
I think the main concern for frontend validation was before HTML5 came along with validation attributes. You can easily produce HTML input validation attributes from a Yup schema for example by using its serialization feature (https://github.com/jquense/yup#schemadescribeoptions-resolve...).
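Roughly like this (sketch only; the exact shape of the describe() output varies a bit between Yup versions):

    import * as yup from "yup";

    const schema = yup.object({
      email: yup.string().email().required().max(254),
    });

    // Map a field's Yup description onto HTML input attributes.
    function htmlAttrsFor(fieldName) {
      const desc = schema.describe().fields[fieldName];
      const attrs = {};
      for (const test of desc.tests ?? []) {
        if (test.name === "required") attrs.required = true;
        if (test.name === "email") attrs.type = "email";
        if (test.name === "max") attrs.maxlength = test.params?.max;
      }
      return attrs; // e.g. { required: true, type: "email", maxlength: 254 }
    }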
Here is an example from some silly code I wrote a while back testing htmx with deno
https://github.com/dsego/ssr-playground/
I once needed to order something in the company's ordering system, but for some reason my manager wasn't set as an approver, by virtue of some glitch, since it had worked a few weeks before, and if you wanted to change approvers you'd need the current approver to approve. But that wasn't set. A classical chicken and egg situation.
The button for changing approvers was greyed out, so out of boredom I changed it to active in the client-side code. Lo and behold after clicking the "active" button I got a box for selecting the approver.
I could select any user in the company. Even the CEO or myself.
I did the right thing and mentioned this to our IT Security department. Since obviously this could be used to order really expensive stuff in the name of the CEO or whoever.
They came back and told me that the vendor (I'm not sure I want to mention them here because they're happy to sue) has known about this for 3 years and won't fix it.
Even worse^2, client-side validation may differ from server-side validation and from database-side validation. I cannot imagine client-side checking of the validity of a phone number using a freshly downloaded database of current carriers and assignment rules in different countries; I prefer to maintain it server-side, even though it could be possible (thanks to the guys from Google and their Libphonenumber). But again, I don't trust the client, so it needs to be re-validated later. Then it will be converted to some native data structure in order to make things faster and unified, and later it will go to a database with its own coercion and validation routines just before the application does a query. That validation trusts the format, so it will just make sure the result of conversion is correct. But then the query itself carries a validation aspect: when the number must be unique in some context, it will return an error, which will bubble up to the user.
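The server-side part of that re-validation is only a few lines anyway. A sketch using the libphonenumber-js port (API names as I remember them from its docs, so treat this as an assumption):

    import { parsePhoneNumberFromString } from "libphonenumber-js";

    function normalizePhone(raw, defaultCountry = "US") {
      const parsed = parsePhoneNumberFromString(raw, defaultCountry);
      if (!parsed || !parsed.isValid()) {
        throw new Error("invalid phone number"); // bubbles up to the user
      }
      return parsed.number; // canonical E.164 form, e.g. "+14155552671"
    }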
A "this doesn't look like an email-address, did you mean...
Stop right there.
I'm tired of receiving mail from people that gave my email address as if it was their own.
Never ever accept an email address unless you can instantly confirm it's valid by sending an email and waiting for an answer. If the user can't access their email on the spot, just leave it blank and use other data as the key.
I think this is the point of the client side check though - if the user makes a typo (e.g. gamil.com) then the client side validation can prompt them to check, before the server sends the validation email and annoys the owner of the typoed address.
My point is that it doesn't matter if some arbitrary string looks like an email address, you need to check.
If it isn't valid the server won't annoy anyone. The problem is that the address is valid. And not theirs, it's mine.
The moment the users need to be careful, they will. Make the problem theirs, not mine.
"Sorry sir, the address you provided returns error" or "haven't you received the confirmation email YET? really? there are other customers in the line" and see how soon they remember the right address, perfectly spelled.
Even big ass companies like Paypal, that have no problem freezing your monies, allow their customers to provide unchecked email addresses and send income reports there. (here)
You can (and should) definitely do both. But needing to validate that a user has access to the entered email address doesn't mean you should do away with client-side validation entirely.
I meant that it very much depends on the business-case (and hence laws and regulations) what exactly you'll have to verify, and therefore where you verify and validate it.
Do you need an address to contact people on? You must make sure that the user can read the emails sent to it by you. Do you merely use it as a login handle? Then it probably only has to be guaranteed unique. Do you need to just store it in some address book? Then just checking roughly the format is probably enough. "It depends".
> Do you need an address to contact people on? You must make sure that the user can read the emails sent to it by you. Do you merely use it as a login handle?
Pretty humongous dick move to use someone else's email address as one's own login for some website, wouldn't you agree? What if it's a popular website, and the owner of the address would like to use it for their id; why should anyone else be able to deprive them of that?
And thus it's also a dick move from the site operator to allow those dicks to do that. So no, it doesn't depend: Just don't accept untested email addresses for anything.
Not all web-applications with a login are open for registration. Not all are public. Not all are "landgrab". Not all have thousands of users or hundreds of registrations a week. Not all are web applications and not all require email validation.
Some do. But, like your niche example proves: the business-case and constraints matter. There's no one size fits all.
It really wasn't about client side validation or UX at all. You can have great UX with an MPA or SPA. Although I do think it's slightly easier in an SPA if you have a complex client like a customizable dashboard.
Ultimately it's about splitting your app into a server and client with a clear API boundary. Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities. This may be worse for small teams but is significantly better for large teams (like Facebook and Google who started these trends).
One example is your iOS app can hit the same API as your web app, since your server is no longer tightly coupled to html views. You can version your backend and upgrade your clients on their own timelines.
I’ve worked in two kinds of organizations. In one of them when there is a ‘small’ ticket from the viewpoint of management, one programmer is responsible for implementation but might get some help from a specialist (DBA, CSS god, …)
In the other a small ticket gets partitioned to two, three or more sub teams and productivity is usually reduced by a factor more than the concurrency you might get because of overhead with meetings, documentation, tickets that take 3 sprints to finish because subtasks that were one day late caused the team to lose a whole sprint, etc.
People will defend #2 by saying that's how Google does it or that's how Facebook does it, but those monopolists get monopoly rents that subsidize wasteful practices, and if Wall Street ever asks for "M0R M0NEY!" they can just jack up the ad load. People think they want to work there but you'll just get a masterclass in "How to kill your startup."
I’ve worked at the same company for a long time. For about 15 years, my team was embedded in a business team and we managed things however we wanted. We could move very quickly. Then, about 5 years ago, we were moved into the tech organization. We were forced to adopt agile, sprints, scrum masters, jira, stand ups, etc. It probably takes 10 times longer to get the same amount of work done, with no improvement in quality. The amount of meetings is truly astonishing. I’m convinced the tech org mainly exists to hold meetings and any development work that occurs is purely accidental.
What you may not see is quality-of-life improvements for executive management, planning, and scheduling. Communication and alignment can be both more important and more profitable than just velocity alone.
I work at a company that makes a very clear distinction between API and View layer. Our API spans 200+ endpoints. We have 6 backend and 6 frontend developers.
As far as iterations go it's very rapid. Our work teams are split into 1 backend and 1 frontend developer. They agree on an API spec for the project. This is the contract between them, and the frontend starts working immediately against a mock or very minimal version of the API. Iterate from there.
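The mock doesn't need to be anything fancy either. Something like this (Express-style; the endpoint and fields are made up for illustration) is enough for the frontend to start building against:

    import express from "express";

    const mock = express();

    // The agreed contract: GET /api/orders/:id -> { id, status, items[] }
    mock.get("/api/orders/:id", (req, res) => {
      res.json({
        id: req.params.id,
        status: "shipped",
        items: [{ sku: "ABC-1", qty: 2 }],
      });
    });

    mock.listen(4000); // frontend points its API base URL here until the real backend lands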
This is a pretty popular approach, and I use it sometimes, but "agree on an API spec for the project" does gloss over how challenging and time consuming this can be. How many people here have ever gotten their API wrong? (raises hand). There's still a lot of ongoing coordination and communication.
Oh certainly. It’s pretty rare to get things exactly right on the first try. For this reason we hide new endpoints from our public open api spec and documentation until we are satisfied that some dust has settled on them a little bit.
Still you only have to get it mostly right. Enough to get started. This only starts to become a huge problem when the endpoints is a dependency of another team. When you’re in constant communication between the developer building the API and the developer building the client it’s easy to adjust as you go.
I find a key part of a workflow like this though especially if you have multiple teams is to have a lead/architect/staff developer or whatever you may call it be the product owner of the API.
You need someone to ensure consistency and norms, and when you have an API covering functionality as broad and deep as the one I work on, it's important to keep in mind each user story of the API:
- The in house client using the API. This generally means some mechanism to join or expand related records efficiently and easily, and APIs providing a clear abstraction over multiple different database tables when necessary.
- The external client, used by a third party or the customer directly for automation or custom workflows. The biggest thing I've found helps these use cases is to be able to query records by a related field. For example if you have some endpoint that allows querying by a userID, being able to also query by a name or foreignID passed over SSO can help immensely.
Yep. I was in a type 1 startup. Stuff got done fast.
Company forced us to type 2 using Angular. Projects that used to take a couple of days for one person became multi-month efforts for a dozen developers across three teams.
They need scaled agile, where every 5 or 6 sprints you group them into a program increment, with extra overhead and even more ridiculous symbolic rituals. Your team is held to an arbitrary commitment months out, then executives shift the ground under your feet and make everything irrelevant. Dev teams love it!
Generally you don’t want to reuse the same API for different types of clients, you want backends for frontends (BFF) that are specialized for each use and can be moved forward in their own pace. The needs and the requirements differs a lot between a browser, app and server-to-server call.
And just because you serve HTML doesn't necessarily mean that your backend code is tightly coupled with the view code; HTML is just one adapter of many.
A boundary doesn't get better just because you slip an HTTP barrier in between; this is the same type of misconception that has driven the microservice hysteria.
> you want backends for frontends (BFF) that are specialized for each use
third time I've heard this thing and the reasoning still escapes me.
First there's ownership. Backend team owns API. Frontend teams own clients (web/android/ios/cli) etc. Do you now have a BFF for each client type? Who owns it then? Don't you now need more fullstacks?
there's confusion.
Now you have 2 sets of contracts (API-BFF, BFF-clientIOS, BFF-clientAndroid, ...). You now have more human synchronization overhead. Changes take longer to percolate throughout. More scope for inconsistencies.
And there's performance.
Adding more hops isn't making it faster, simpler or cheaper.
Isn't it better to have the API team own the single source of ownership?
> Do you now have a BFF for each client type? Who owns it then? Don't you now need more fullstacks?
everyone has an opinion, but ime ideally you'd have 1 bff for all clients from the start
> there's confusion. Now you have 2 sets of contracts (API-BFF, BFF-clientIOS, BFF-clientAndroid, ...). You now have more human synchronization overhead. Changes take longer to percolate throughout. More scope for inconsistencies.
yep, i have literally experienced the chaos this can cause, including the endless busywork to unify them later (usually it's unify behind the web/html bff, which breaks all kinds of frontend assumptions)
> Isn't it better to have the API team own the single source of ownership?
it depends on what 'api team' means... but ideally the bff has its ownership separate from 'backend'; whether that is in the 'api team' or outside i think is less important ime
but... ideally this separation of ownership (backend backend, front end for backend) allows each to focus on the domain better without mixing up, say, localization in the lower level api's etc
iow having a bff is sort of like having the view model as a server... that way multiple clients can be dead simple and just map the bff response to a ui and be done with it
Companies do this, but it is really hard to support. I prefer teams that own an entire vertical slice. Then they know their API and more importantly, The WHY? their API does what/how it does. A BE team can never know the entire context without exposure to the end use IME, and there is far less ownership. YMMV and it will ultimately come down to how your company is organized.
Yes. I’m generally against specialization and splitting teams. This of course depends on what type of organization you have and how complex the frontend is. iOS and Android is usually complex as it is so they are typically specialized but I would still keep them in the team.
Specialized teams not only creates synchronization issues between teams but also creates different team cultures.
What this does is that it induces a constant time delay for everything the organization does. Because teams no longer can solve an entire feature the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.
Solutions also have a tendency to become suboptimal because no technician has a general overview of the problem from start to finish. And it is also quite common that the same problem is solved multiple times, once for each team.
By making BFFs specialized, instead of the teams, you don't need to spend time to create and design a generalized API. How many hours have been wasted on API design? It adds nothing to customer satisfaction.
This also means that you separate public and private APIs. External consumers should not use the API as your own web client.
Specialized BFFs are not only about a good fit for the client consuming them, but also about giving different views of the same underlying data.
E.g. assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API, but for the web client that serves the final version of the article it doesn't matter at all; it shouldn't even be aware that the concept of revisions exists.
Creating a new BFF is as easy as copy & pasting an existing one. Then you add and remove what you need.
The differences between BFFs are usually in how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.
What is then different views of the same data? An SQL query (or VIEW). Too many APIs just map a database table to an endpoint 1:1; those APIs are badly designed because the consequence is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, which is very inefficient.
By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture; it is usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relational model, you can't realistically express that with 1:1 only.
By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.
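To make the revision example above concrete, here is a sketch of two BFF endpoints over the same tables (Express-style routing, db.query standing in for whatever SQL client you use; table and column names are invented):

    // Public web BFF: only the latest published revision.
    app.get("/web/articles/:id", async (req, res) => {
      const rows = await db.query(
        `SELECT a.id, r.title, r.body
           FROM articles a
           JOIN revisions r ON r.article_id = a.id AND r.published = true
          WHERE a.id = $1
          ORDER BY r.created_at DESC
          LIMIT 1`, [req.params.id]);
      res.json(rows[0]);
    });

    // Admin BFF: same data, different view, namely the full revision history.
    app.get("/admin/articles/:id/revisions", async (req, res) => {
      const rows = await db.query(
        `SELECT r.id, r.title, r.created_at, r.published
           FROM revisions r
          WHERE r.article_id = $1
          ORDER BY r.created_at DESC`, [req.params.id]);
      res.json(rows);
    });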
Ownership of a BFF should ideally be by the ones consuming it.
iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.
BFF is nothing more than an adapter in hexagonal architecture.
> Yes. I’m generally against specialization and splitting teams. This of course depends on what type of organization you have and how complex the frontend is. iOS and Android is usually complex as it is so they are typically specialized but I would still keep them in the team.
Right why have someone _good_ at a particular domain who can lead design on a team when you can have a bunch of folks who are just ok at it, and then lack leadership?
> Specialized teams not only creates synchronization issues between teams but also creates different team cultures.
Difference in culture can be cultivated as a benefit. It can allow folks to move between teams in an org and feel different, and it can allow for different experimentation to find success.
> What this does is that it induces a constant time delay for everything the organization does. Because teams no longer can solve an entire feature the organization instead spends more time on moving cards around in the planning tool of choice. The tiniest thing can require massive bureaucratic overhead.
I've seen this hold true when I was by myself doing everything from project management, development, testing, and deployment. Orgs can have multiple stakeholders who might throw a flag at any moment or force inefficient processes.
> Solutions also have a tendency to become suboptimal because no technician has a general overview of the problem from start to finish. And it is also quite common that the same problem is solved multiple times, once for each team.
Generalists can also produce suboptimal solutions because they lack deeper knowledge and XP in a particular domain, like DBs, so they tend to reach for an ORM because that's a tool for generalists.
> By making BFFs specialized, instead of the teams, you don't need to spend time to create and design a generalized API. How many hours have been wasted on API design? It adds nothing to customer satisfaction.
Idk what you're trying to claim, but API design should reflect a customer's workflow. If it doesn't, you are doing it wrong. This requires both gathering of info, and design planning.
> This also means that you separate public and private APIs. External consumers should not use the API as your own web client.
Internal and external APIs are OK, this is just a feature of _composability_ in your API stack.
> Specialized BFFs are not only about a good fit for the client consuming them, but also about giving different views of the same underlying data.
If the workflow is the same, you're basically duplicating more effort than if you just had a thin client for each platform.
> E.g. assume we have an article with multiple revisions (edits). Handling revisions is important for the Admin API, but for the web client that serves the final version of the article it doesn't matter at all; it shouldn't even be aware that the concept of revisions exists.
Based on what? Many comment systems or articles use an edit notification or similar for correcting info. This is a case by case basis on the product.
> Creating a new BFF is as easy as copy & pasting an existing one. Then you add and remove what you need.
That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client.
> The differences between BFFs are usually in how you view your schema (GET). Writing to your model (POST) is likely shared because of constraints.
That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.
> What is then different views of the same data? An SQL query (or VIEW). Too many APIs just map a database table to an endpoint 1:1; those APIs are badly designed because the consequence is that the client needs to do an asynchronous HTTP JOIN to get the data it needs, which is very inefficient.
Yes, those APIs are not being designed correctly, but I think you said folks are wasting too much time on design, so I'm not sure what you're arguing for here other than to not just try and force your clients to do excessive business logic.
> By writing SQL to fit your BFFs you will then realize that the ORM is the main problem of your architecture; it is usually the ORM that creates the idea that you only have one view of the same data, one table to one entity. But SQL is a relational model, you can't realistically express that with 1:1 only.
Yet ORMs are tools of generalists. I agree they are generally something that can get in the way of a complex data model, but they are fine for like a user management system, or anything else that is easily normalized.
> By removing the ORM you will also solve the majority of your performance issues, two birds one stone scenario.
That depends a lot on how the orm is being used.
> Ownership of a BFF should ideally be by the ones consuming it.
Why? We literally write clients for APIs we don't own all the time, whenever we call out to an external/3p service. Treat your client teams like a client! Make API contracts, version things correctly, communicate.
> iOS and Android can usually use the same BFF, they don’t differ that much to warrant a new BFF. If there are any differences between the two, give them different endpoints within the same BFF for that specific use case. When designing APIs one should be pragmatic, not religious.
The workflows should be the same. The main difference between any clients is the inputs available to the user to interact with.
> BFF is nothing more than an adapter in hexagonal architecture.
You are comparing apples with oranges. I'm talking about organization, you about individual developers.
I can have a fullstack that is better than a specialist. Specialist only means that they have specialized in one part of the architecture; that doesn't necessarily mean that they solve problems particularly well, that depends on the skill of the developer.
And the point is that even if they do have more skill within that domain, the overall solution can still suffer. Many SPAs suffer from this: each part can be well engineered but the user experience is still crap.
If your developers are lacking in skill, then you should definitely not split them up into multiple teams. But again, I'm talking about organization in general: splitting teams has a devastating effect on organizational output. Differences in culture will make it harder to move between teams, thus the organization will have a much more difficult time planning resources effectively.
BFF is all about reflecting the need of the client, but the argument was that a generalized API is better because of re-usability. The reason why you split into multiple BFFs is that the workflow isn't the same; it differs a lot between a web client and a typical app. If the workflow is the same you don't split, which is why I wrote BFF per client type, a type that has a specific workflow (need & requirement).
> This is a case by case basis on the product.
Of course, it was an example.
> That sounds terrible, and very OO. I'd rather generate another client for my openapi documented API, in whatever language is most appropriate for that client
I'm talking about the server here, not the client.
> That's a stretch, if I need a form, I likely need the same data if I'm on iOS, Android, native, or web. Again it's about execution of a workflow.
But the authentication and redirects will probably be different, so you can reuse a service (class) for updating the model, but have different controllers (endpoints).
> Yes, those APIs are not being designed correctly
Every generalized API will have that problem in various degrees, thus BFF.
> Yet ORMs are tools of generalists.
Oh, you think a fullstack is generalist and thus doesn't know SQL. Why do you believe that?
> That depends a lot on how the orm is being used.
Most ORMs, especially if they are of the active record type, just miss the mark entirely when it comes to relationship-based data. Just the idea that one class maps to a table is wrong on so many levels (data mappers are better at this).
ORM entities will eventually infect every part of your system, so there will be view code that has entities with a save method on them, and thus the model will be changed from almost everywhere, impossible to track and refactor.
Performance is generally bad, so most ORMs have an opaque caching layer that will come back and bite you.
And typically you need to adapt your database schema to what the ORM manages to handle.
> We literally write clients for APIs we don't own all the time,
The topic here is APIs you control yourself within the team/organization. External APIs, either ones that you consume or ones you need to expose, are a different topic; they need to be designed (more). The point is that internal APIs can be treated differently than external ones, no need to follow the holy grail of REST for your internal APIs. Waste of time.
But even external APIs that you need to expose can be subdivided into different BFFs, no need to squeeze them into one. This has the benefit that you can spend less time on overall design of the API, because each API is smaller (fewer endpoints).
> That's what a client is...
I'm specifically talking about server architecture here; the client uses the adapter.
Agreed! There are many things in the IT industry that are prone to this kind of almost magical thinking, and "boundaries" / "tight coupling" is one of them. I realized that when I tried to actually compare some stuff I had been doing at work through the years, being fascinated with uncoupling things. Well, if you start measuring it, even at the top level (time, people, money spent), then it is so clear that there are obvious tight couplings at the architecture level (like data on the wire containing some structure or transferring the state of the application), and it is very tempting to remove them. But then we may actually find ourselves having a subtle tight coupling, totally not obvious, but resulting in a need for two teams or even two tech stacks and a budget more than twice the size because of communication / coordination costs.
This development style might be a better DX for the teams. But Facebook on the web is an absolute dumpster fire if you use it in a professional capacity.
You can't trust it to actually save changes you've made, it might just fail without an error message or sometimes it soft-locks until you reload the page. Even on a reliable connection. Error handling in SPAs is just lacking in general, and a big part of that is that they can't automatically fall back to simple browser error pages.
Google seems to be one of the few that do pretty good on that front, but they also seem to be more deliberate for which products they build SPAs.
> Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities
How desirable this is depends on the UI complexity.
Complex UIs as the ones built by google and facebook will most likely benefit from that.
Small shops building CRUD applications probably won't. On the contrary: the user requirements often cross-cut client and server-side code, and separating these into two teams adds communication overhead, at best.
Moreover, experience shows that such separation/specialization leads to bloated UIs in what would otherwise be simple applications -- too many solutions looking for problems in the client-side space.
There is no reason other than poorly thought out convenience to make the web browser/web server interface the location of the frontend/backend interface. You can have a backend service that the web server and mobile apps all get their data from.
When a company gets to the stage where they actually need a mobile app, it is pretty easy to add API endpoints in many/most/all? major web frameworks. Starting out with the FE/BE split slows you down immensely.
IMHO it is completely doable to do a state transfer with HTML to a mobile device instead of writing a separate application using a separate technology. Then we can deal with coupling server-side, e.g. the "view team" can use some templating system and the "core team" can play with logic using the JSP Model 2 architecture or something similar.
There is a third option, which is that FE-facing teams maintain a small server side application that talks to other services. That way the API boundary is clearly defined by one team.
It sounds a lot more annoying to have to manage one client and many servers instead.
Or even skip the DSL and use JS for both client and server, just independently. Validation functions can/should be simple, pure JS that can be imported from both.
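Something as small as this is usually all the sharing you need (sketch):

    // validation.js: pure functions, no DOM, no framework, no I/O,
    // so both the browser bundle and the Node server can import it.
    export function looksLikeEmail(value) {
      // hint-level check only; real confirmation still needs a mailed token
      return typeof value === "string" && /^[^@\s]+@[^@\s]+$/.test(value);
    }

    // client.js
    //   import { looksLikeEmail } from "./validation.js";
    //   if (!looksLikeEmail(input.value)) showHint("did you mistype your address?");

    // server.js
    //   import { looksLikeEmail } from "./validation.js";
    //   if (!looksLikeEmail(req.body.email)) return res.status(400).end();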
Validation logic is surprisingly simple but almost always lives in different domains. Unique columns are a great example, the validation has to happen at the database layer itself and whatever language is used to call it will just be surfacing the error.
Language and runtime decisions really need more context to be useful. JS everywhere can work well early on when making a small number of devs as productive as possible is a goal. When a project scales parts of the stack usually have different priorities take over that make JS a worse fit.
> felt they had to switch to Angular about the same time and they never regained their footing in the Angular age
And in this case what actually happened is exactly what we had expected would happen: tons of badly-written Angular apps that need to be maintained for the foreseeable future, because at this point nobody wants to rewrite them, so they become Frankensteins nobody wants to deal with.
> then wait wait wait wait even more for completely inscrutable reasons later on. (e.g. amazing how long you have to wait for Windows to delete the files in your node_modules directory)
As far as I know, Windows Explorer has been extremely slow for this kind of operation for ages.
It's not even explainable by requiring a file list before starting the operation; I have no idea what it is about Windows Explorer, it's just broken for such use cases.
Just recently, I had to look up how to write a robocopy script because simply copying a 60GB folder with many files from a local network drive was unbelievably slow (not to mention resuming failed operations). The purpose was exactly what I wrote: copy a folder in Windows explorer.
What does this have to do with React or JavaScript?
Can you argue a bit more genuinely and not pick on such a minor point as validation? I think the parent mentioned other points. How about the logical shift to let the client do client things, and the server do server things? A server concatting HTML strings for billions of users over and over again seems pretty stupid.
Why not stress the argument further and say the server "concats" HTTP strings or SQL strings? It's because of the nonsense from the web platform that inefficient text-based transports such as JSON became prevalent in the back-end, btw.
But you can download another node package from npm to delete those other npm packages: npkill. For whatever reason this is as they say in the javascript world "blazingly fast"
It was/is quite popular with .NET developers due to TypeScript being very similar to C# and implementing similar patterns like dependency injection (I know dependency injection/IOC isn't .NET specific).
This was my experience as well. Angular became really popular in enterprise teams that were already full of devs with a lot of C# and MVC experience.
> We moved away from MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.
Plus we now get the benefit of people trying to "replace" built in browser functionality with custom code, either
The SPA broke it... Back button broken and a buggy custom implementation is there instead? Check.
or
They're changing things because they're already so far from default browser behavior, why not? ... Scrolling broken or janky because the developer decided it would be cool to replace it? Check.
There is a time and place for SPA (mail is a great example). But using them in places where the page reload would load in completely new content for most of the page anyways? That's paying a large cost for no practical benefit; and your users are paying some of that cost.
> There is a time and place for SPA (mail is a great example). But using them in places where the page reload would load in completely new content for most of the page anyways? That's paying a large cost for no practical benefit; and your users are paying some of that cost.
Yep. It's bonkers to me that a page consisting mostly of text (say, a Twitter feed or a news article) takes even so much as a second (let alone multiple!) to load on any PC/tablet/smartphone manufactured within the last decade. That latency is squarely the fault of heavyweight SPA-enabling frameworks and their encouragement of replacing the browser's features with custom JS-driven versions.
On the other hand, having to navigate a needlessly-elongated history due to every little action producing a page load (and a new entry in my browser's history, meaning one more thing to click "Back" to skip over) is no less frustrating. Neither is wanting to reload a page only for the browser to throw up scary warnings about resending information simply because that page happened to result from some POST'd form submission.
Everything I've seen of HTMX makes it seem to be a nice middle-ground between full-MPA v. full-SPA: each "screen" is its own page (like an MPA), but said page is rich enough to avoid full-blown reloads (with all the history-mangling that entails) for every little action within that page (like an SPA). That it's able to gracefully downgrade back to an ordinary MPA should the backend support it and the client require it is icing on the cake.
I'm pretty averse to frontend development, especially when it involves anything beyond HTML and CSS, but HTMX makes it very tempting to shift that stance from absolute to conditional.
I remember writing high complexity rich internet applications (knowledge graph editors, tools to align sales territories for companies with 1000+ salespeople, etc.) circa 2005. It was challenging to do because I had to figure out how to update the whole UI when data came in from asynchronous requests, I had to write frameworks a bit like MobX or redux to handle the situation.
Even before that I was making Java applets to do things you couldn't do with HTML, like draw a finite element model and send it to a FORTRAN back end to see what it does under stress, or replace Apple's Quicktime VR plugin, or simulate the Ising model with Monte Carlo methods.
What happened around 2015 is that people gave up writing HTML forms and felt that they had to use React to make very simple things like newsletter signups so now you see many SPAs that don't need to be SPAs.
Today we have things like Figma that approach the complex UI you'd expect from a high-end desktop app, but in many ways our horizons have shrunk thanks to "phoneishness" and the idea that everything should be done with a very "simple" (in terms of what the user sees) mobile app that is actually very hard to develop -- who cares about how fast your build cycle is if the app store can hang up your releases as long they like?
> The SPA broke it... Back button broken and a buggy custom implementation is there instead? Check.
MPAs break back buttons all the damn time, I'd say more often than SPAs do.
Remember the bad old days when websites would have giant text "DO NOT USE YOUR BROWSER BACK BUTTON"? That is because the server had lots of session state on it, and hitting the browser back button would make the browser and server be out of sync.
Or the old online purchase flows where going back to change the order details would completely break the world and you'd have to re-enter all your shipping info. SPAs solve that problem very well.
Let's think about it a different way.
If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app? That'd be insane.
Yeah, state mutation triggered by GET requests is going to make for a bad time, SPA or MPA. Fortunately enough of the web application world picked up enough of the concepts behind REST (which is at the heart of all web interaction, not just APIs) by the mid/late 00s that this already-rare problem became vanishingly rare well before SPAs became cancerous.
> going back to change the order details would completely break the world and you'd have to re-enter all your shipping info. SPAs solve that problem very well.
The problem is entirely orthogonal to SPA vs MPA.
> If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app?
It's not only EVER done, it's regularly done. Perhaps you should interrogate some of the reasons why.
But more to the point, if it's bad, SPAs seem to frequently manage to bring the worst of both worlds, a giant payload of application shell and THEN also screen-specific application/UI/data payload, all for reasons like developer's unfortunately common inability to understand that both JSON and HTML are perfectly serviceable data exchange formats (let alone that the latter sometimes has advantages).
> It's not only EVER done, it's regularly done. Perhaps you should interrogate some of the reasons why.
Content in the app is reloaded, sure, but the actual layout and business logic? Code that generally changes almost never, regenerated on every page load?
I know of technologies that are basically web wrappers that allow for doing that to bypass app store review processes, but I'd be pissed if an alarm clock app decided to reload its layout from a server every time I loaded it up!
The SPA model of "here is an HTML skeleton, fill in the content spaces with stuff fetched from an API" makes a ton more sense.
The application model, that has been in use for even longer, of "here is an application, fetch whatever data you need from whatever sources you need" is, well, a fair bit simpler.
Everyone is stuck with this web mindset for dealing with applications and I get the feeling that a lot of developers now days have never written an actual phone or desktop application.
> But more to the point, if it's bad, SPAs seem to frequently manage to bring the worst of both worlds, a giant payload of application shell and THEN also screen-specific info, all for reasons like developer's unfortunately common inability to understand that both JSON and HTML are perfectly serviceable data exchange formats (let alone that the latter sometimes has advantages).
I've seen plenty of MPAs that consist of multiple giant mini-apps duct taped together.
It wasn't GET mutation... it was POSTs with multi-page forms that was the problem. It was such a pain to subdivide a form and create server and session state and intuit the return state. And what happens if you needed a modal with dynamic data? Did you pop open a new window and create a javascript call for the result? There was no great progressive answer to them.
Oh, and then request scope wasn't good enough because you needed to do a post-redirect-get? I will say that I do not think MPAs for web applications were the good old days.
Yeah. As someone that’s quite bearish on JS altogether, and as someone that’s worked on a few old-school multi-step forms recently, we can’t pretend that this was and still is anything other than a code and UX disaster. And…I’m not an idiot, I understand different HTTP request types and how browsers handle going back through history. I know that there’s not something obvious I’m missing. I’ve put the work in. The reality is that non-JS web technologies aren’t very good at some things that are quite common and that many people expect in anything more than a brochure site.
I’m just so miffed that it can end up necessitating roping in so much BS. Mind you, not necessarily in this example. Things like HTMX excite me. And, on the other side, things like Next.js and Remix that IMO are a breath of fresh air, even if they might not ultimately be heading in the right direction (I genuinely have no idea).
It is totally possible to make MPAs where reloads are never a problem.
As for phone apps these are undeniably a step backwards from desktop apps, web apps and every other kind of app. On the web you can deploy 100 times a day, waiting for the app store to approve changes to your app is like building a nuclear reactor in comparison.
All the time you get harassed in a physical store or a web site to "download our mobile app" and you know there ought to be a steaming pile of poop emoji because the mobile app almost always sucks.
One of the great answers to the app store problem is to move functionality away from the front end into the back, for instance people were looking to do this with chat bots and super apps back in 2017 and now that chatbots are the rage again people will see them as a way to get mobile apps back on internet time.
> If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app? That'd be insane.
Good luck forcing users to download 50MB before they can use your web app.
The web and mobile/desktop apps are two totally different paradigms with different constraints.
I mean, there's nothing about an SPA that forces you to break the back button, to the contrary, it's possible to have a very good navigation experience and working bookmarks. But it takes some thinking to get it right.
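The "thinking" is mostly just wiring up two browser primitives correctly; render() here is a placeholder for whatever draws your view:

    function navigate(url, state) {
      history.pushState(state, "", url); // adds a real history entry
      render(state);
    }

    window.addEventListener("popstate", (event) => {
      render(event.state); // back/forward now restore the right view
    });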
I don’t think “forces” is the right way to think about it. By default a SPA breaks navigation history etc (it’s right in the name). It’s not onerous to reimplement it correctly but reimplement you must.
Right, so you agree: you have to reimplement it. You can just use a framework to do so.
It might be news to folks to learn that every single SPA framework has solved the problem entirely, because it's really not an uncommon experience to have your browser history broken by a SPA. I believe that most frameworks implement the API correctly. I also believe a good number of developers use the framework incorrectly.
Or simply unaware about the whole "back button" debacle. Which is yet another stone to throw at the SPA camp: if using technology A requires a programmer to learn about more stuff (and do more work) than technology B for achieving pretty much the same end results, then technology A is inferior to B (for achieving those particular end results, of course).
No, it’s like saying you’ve been provided with a calculator but may, if you wish, create your own calculator with some parts provided. No guarantee it adds numbers together correctly.
I don't understand why this point is so complicated. Yes, bad SPA developers mess it up all the time. Bad MPA developers do not mess it up because it doesn't require reimplementation by said bad developer, it works out of the box.
We’re way too far into a thread for me to have to restate the original point I made in the first post. If what you’re saying is true we’d never see bad implementations of history in SPAs yet we do all the time.
But look, whatever. It’s Friday afternoon, I’m out of here. Have a good weekend.
Mail is not a good example. Why would you like to read a collection of documents through A Single Page interface? Gmail was a fantastic improvement over Hotmail and Yahoo, and it provided UX innovations we still haven't caught up with, yes, but MPAs are naturally more suited for reading and composing them. Overriding perfectly clear HTML structure with javascript should be reserved for web experiences that are not documents: that is, videogames, editors, etc (Google *Maps* is a good example). The quality of the product usually depends on how it was implemented more than the underlying technology, but as I see it is: if it's a Document, if the solution has a clear paper-like analogue, HTML is usually the best way to code it, structure it, express it. Let a web page be a web page and let the user browse through it with a web browser. If it's not, well, alright, let's import OpenGL.
Mail is good for a SPA because the main central view which shows the different items (emails) to view or take an action on is based on a resource intensive back-end request, so keeping that state present and not having to refresh it on many of the different navigation actions yields a tremendous benefit.
You could do some client side caching with local page data, but just keeping it present and requesting updates to it only is vastly superior.
That's honestly one place SPAs shine: where there's a relatively expensive request that provides data and then a lot of actions that function on some or all of that data transiently.
You're thinking just of the amount of data sent, not the amount of work that's done on the back end. Just because it's only showing you the most recent 40 messages or something doesn't mean it isn't doing a significant amount of work on the back end to determine what those messages are. Not having to scan through all your email and sort by date nearly as often is a significant win.
No, I'm talking about back end processing cost. If the main page of the app has a significant server cost in the determining what data is being sent, being able to just redisplay the data you have when you browse back to the main page instead of request it again, which could incur that large processing fee, is a large gain.
As a simplistic example, imagine an app which on login has to do an expensive query that takes five seconds to return because of how intensive it is on the back end. If you can just redisplay the data that's already in memory on the client, optionally updating it with a much less expensive query for what's changed recently, then you're saving about five seconds of processing time (and client wait time) by doing so.
You could use localStorage to do something similar without it being a SPA, but that's essentially opting into a feature that serves a similar need.
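The whole idea fits in a few lines (sketch; the endpoints and the applyChanges merge helper are made up):

    let inbox = null;          // kept in memory for the lifetime of the SPA
    let lastSyncedAt = null;

    async function getInbox() {
      if (!inbox) {
        // the one expensive query
        inbox = await fetch("/api/inbox").then((r) => r.json());
      } else {
        // cheap incremental query for anything that changed since last time
        const delta = await fetch(`/api/inbox/changes?since=${lastSyncedAt}`)
          .then((r) => r.json());
        inbox = applyChanges(inbox, delta);
      }
      lastSyncedAt = Date.now();
      return inbox;
    }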
Client side caching is a strong point of SPAs, so it makes sense that a use case that can leverage that heavily will have benefits.
I find it hard to really agree that the backend of Gmail would be more involved with a thinner frontend. The "low bandwidth html" version sorta gives the lie, there...
I'm not sure what you're getting at. I'm not talking about bandwidth usage at all. I'm talking about CPU, memory and IO (as in disk, not client server transfer) usage.
I'd wager all of those things are still lower on their "low bandwidth" option.
Now, I will grant that it does less. Probably lacking a lot of the "presence detection" that is done in the thick client. Certainly lacking a lot of the newer ad stuff they are pushing at.
But the rest could be offset by a very basic N-tier application where the "envelope" of the client HTML is rather cheaply added to any outgoing message. And the vast majority that goes into "determining what data is being sent, being able to just redisplay the data you have when you browse back to the main page instead of request it again, [etc.]" will probably be more identical than not between the options.
Now, I grant that some of the newer history API makes some tricks a bit easier for back button changes to work. Ironically to the point, is that gmail is broken for back button usage. So... whoops.
> I'd wager all of those things are still lower on their "low bandwidth" option.
I would argue that Google has thrown a bunch of engineering talent at it to optimize the problem as much as it can be for a web interface, and that Gmail is a bad example of a SPA mail client, as it's more a combined mail client and IMAP server (really a custom designed mail store) all rolled into one. Whether Gmail itself really uses more or not is somewhat irrelevant to whether a mail client in general leans into the benefits a SPA provides. This is what I was talking about here.[1]
That said, whether it uses less resources is a tricky question. Sometimes there's algorithmic wins that overall reduce the total work done, and I don't doubt Gmail leverages some of those, but it's also just a huge amount of caching, whether in the browser or in a layer underneath. The benefit of a SPA is that you can customize the caching to a degree for the application in the client without having to have an entire architectural level underneath designed to support the application. For anything at scale, having that layer underneath is obviously better (it's custom fit for the needs of the application and isn't susceptible to client limitations), but it's also very engineering intensive.
My guess is that Gmail puts a very large amount of cache behind most requests, and is just very, very good about cache invalidation. Or they've got the data split across many locations so they can mapreduce it quickly and efficiently (but tracking where those places are will necessitate some additional resource usage).
In the end, you need caching somewhere. You can do it on the server side so that you have full control over it but you have to pay for the resources, or you can do it on the client side with some limits on control and availability, but you don't use your own resources. SPAs make client side caching more reliable and easier to deal with in some cases, because the working state of the client isn't reset (or mostly reset) on every request.
What exactly is the resource-intensive request here? Loading an E-mail, or list of E-mails? I don't see why that should be any more resource-intensive than any other CRUD app.
A list of emails. That's essentially a database query that is taking X items and sorting by the date field, most commonly, except that the average user can have thousands, or even tens or hundreds of thousands, of items that are unique to them in that dataset that need to be sorted and returned.
Sure, gmail optimizes for this heavily so it's fast, but it's still one of the most intensive things you can do for an app like that, so reducing the number of times you need to do it is a huge win for any webmail. If you've ever used a webmail client that's essentially just an IMAP client for a regular IMAP account, you'll note that if you open a large inbox or folder it's WAY slower than trying to view an individual message, most times, for what are obvious reasons if you just think of a mailbox as a database of email and the operations that need to happen on that database (which it is).
If clicking on an individual message is a new page, that's fine, but if going back to the main view is another full IMAP inbox query, that's incredibly resource intensive compared to having it cached in the client already (even if the second request is cached in the server, it's still far more wasteful than not having to request it again).
There's been a fair amount of discussion on this thread, which left me wanting to clarify my comments...
It is entirely possible to have a MPA application that makes calls to the back end to retrieve more data. Especially for things like a static page (cached) with some dynamic content on it. My problem is when people convert an entire site to a Single Page (SPA). When I click to go from the "home page" to a "subsection page", it makes sense to load the entire page. When I click to "see more results" for the list of items on a page, it seems reasonable to load them onto the page.
Side note: If I scroll down the page a few times and suddenly there's 8 items in the back queue, you're doing it wrong. That drives me bonkers.
my favorite example is dev.to. A (web-)developer-centric site, open-source nowadays. In a similar discussion years ago it was praised as a well-done SPA. Every time the topic comes back up I spend 5 minutes clicking around, and every time I find some breakage: a page critically broken during a transition, or not being the page the URL bar says it is, ... because having a blogging site just be pages navigated by the browser was too easy.
I fail to see how HTMX could be the "future". It could have been something useful in the 2000s, back when browsers had trouble processing the many MBs of JS of a SPA. Nowadays SPAs run just fine, the average network bandwidth of a user is full-HD video tier, and even mobile microprocessors can crunch JS decently fast. There is no use case for HTMX. Fragmented state floating around in requests is also a big, big problem.
The return of the "backend frontender" is also not happening. The bar is now much higher in terms of UX and design, and for that you really need frontend specialists. Gone are the days when the backend guys could craft a few HTML templates and call it a day, knowing the design wouldn't change much, and then go back to DB work.
> Nowadays SPA's run just fine, the average network bandwidth of a user is full-HD video tier, and even mobile microprocessors can crunch JS decently fast.
ie. "I don't live in a rural area, but that's fine, nobody who matters lives there."
Really sounds to me like you’re speaking from your own professional context and failing to consider the huge spectrum of circumstances in which web code is written.
It's amusing that for a long time the response was "oh man that sounds terrible".
Now it is "oh hey that's server side rendered ... is it a new framework?".
The cycle continues. I end up writing all sorts of things, and there are times when I'm working on one and think "this would be better as Y" and then on Y "oh man this should be Z". There are days where I just opt for using old ColdFusion... it is faster for some things.
Really though there's so many advantages to different approaches, the important thing is to do the thing thoughtfully.
I also switch back and forth between two large projects written in different decades and it definitely gives an interesting perspective on this. Basically every time I'm in php I go "oh yeah I see why we do react now" and every time I'm in react I go "oh right I see why php still exists."
I switch between a fair variety of frontend and backend work myself and have never had that reaction.
It's always: I could do exactly this in 2005 using jQuery + JSP, it would not need any of these 1500 dependencies, and the user would see absolutely no difference (except downloading 10 times more JS today at 5G speeds).
The scalability issues non-Facebook-scale webapps are trying to solve do not exist. These apps will be dead before they reach 10% of that scale, and yet the project folks just don't get it.
Anecdotally, GitHub projects I bookmarked 3-4 years ago won't even compile today. A large chunk of projects from 2010 still work, including mine, which I wrote a decade ago as a newbie JS junkie.
>How much of that is just a garden variety "grass is always greener on the other side" effect?
In my example not so much. I'm working in a number of frameworks, use them regularly, sometimes ColdFusion is just faster / better suited, sometimes some other system.
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.
Node does not absolve you from this. Any important verification still needs to be done on the server side, since any JS on the client side cannot be trusted to not be manipulated. JS on the client side was of course possible before NodeJS. NodeJS did not add anything there regarding where one must verify inputs. Relying on things being checked in the frontend/client-side is just writing insecure websites/apps.
> We moved away for MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.
I would claim they became even more so than the thing they replaced. Basically most of any progress in bandwidth or resources is eaten by more bloat.
>Node does not absolve you from this. Any important verification still needs to be done on the server side, since any JS on the client side cannot be trusted to not be manipulated. JS on the client side was of course possible before NodeJS. NodeJS did not add anything there regarding where one must verify inputs. Relying on things being checked in the frontend/client-side is just writing insecure websites/apps.
Yeah, that was my point. With Node you can write JS to validate on both the client and server. In the article, they suggest you can just do a server request whenever you need to validate user input.
>Basically most of any progress in bandwidth or resources is eaten by more bloat.
In my experience, the bloat comes from Analytics and binary data (image/video) not functional code for the SPA. Unfortunately, the business keeps claiming it's "important" to them to have analytics... I don't see it but they pay my salary.
> In my experience, the bloat comes from Analytics and binary data (image/video) not functional code for the SPA. Unfortunately, the business keeps claiming it's "important" to them to have analytics... I don't see it but they pay my salary.
Similar to my experience. So glad I can uBlock Origin away a lot of unnecessary traffic. At some point it's no longer good taste, when the 5th CDN is requested and the 10th tracker script from random untrusted 3rd parties is loaded... all while neglecting good design of the website, making it implode when you block their unwanted stuff. It's not rare to save more than half the traffic when blocking this stuff.
The first SPA I wrote, I wrote in React for my use and for the use of friends. I spent about 3 days getting it working and then 3 months getting it to usable performance on my phone. There were no analytics, no binary data (100% text), just a bunch of editable values and such. I ended up having to split it up into a bunch of tabs just to reduce the size of the vdom.
I've been told that before. Maybe it was something else; I don't know.
All I know is that I was unable to figure out what it was, and I bounced it off a few people online, and the performance scaled inversely with the number of DOM nodes.
I would think you have something else wrong with the design. I've worked on some pretty complex and large react apps that worked flawlessly on some low-end mobile browsers. Maybe you're accidentally duplicating a LOT of dom nodes?
I feel like you misunderstood the OP, they are claiming that Node allows you to reuse the same code to do validation on both the client and the server. By definition that means they are also doing server-side validation, and they are not relying on it being checked on the frontend.
As I see it though, node.js on the backend is not mainstream, most sites are still using JVM or other back ends. Using the same code for the front end and the back end is a dream that has been pursued in various forms but it isn’t mainstream.
Man... if you don't think node.js on the backend is mainstream at this point I don't know what to tell you. It's not even the hyped-up new thing anymore.
Being not hyped up doesn’t mean it’s mainstream. Most backends are in Java, Go, or PHP. Python and Ruby take up most of those that aren’t. It’s rare to find node on the backend in comparison.
I wonder if part of the confusion here is that “backend” is pretty overloaded. There are backends like API servers, and web server backends (which at Google they call “frontends”!)
I’d guess that Go is relatively more popular than Node for API servers, and Node is more popular for web servers.
And as you note, both are probably less popular than languages like Java and PHP.
I don't understand why that should be the case. There are a lot of checks that end up needing to be repeated twice with no change in logic (e.g., username length needs to be validated on both ends).
There are two things that engineers tend to neglect about validation experiences:
1) When you run the validation has a huge impact on UX. A field should not be marked as invalid until a blur event, and after that it should be revalidated on every keystroke (rough sketch after these two points). It drives people crazy when we show them a red input with an error message simply because they haven't finished typing their email address yet, or when we continue to show the error after the problem has been fixed because they haven't resubmitted the form yet.
2) Client side validation rules do occasionally diverge from server side validation rules. Requiring a phone number can be A/B tested, for example.
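A rough sketch of the timing described in point 1; the field, the error element, and the lazy email regex are all just for illustration:

    // Only mark the field invalid after it first loses focus,
    // then re-check on every keystroke so the error clears as soon as it's fixed.
    const email = document.querySelector("#email");
    const error = document.querySelector("#email-error");
    let touched = false;

    function validateEmail() {
      const ok = /\S+@\S+\.\S+/.test(email.value); // deliberately lazy check
      error.textContent = ok ? "" : "That doesn't look like an email address";
      email.classList.toggle("invalid", !ok);
    }

    email.addEventListener("blur", () => {
      touched = true;
      validateEmail();
    });

    email.addEventListener("input", () => {
      if (touched) validateEmail(); // don't nag while they're still typing
    });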
Even if you’re not A/B testing you’re going to have some validations that only happen server-side because they require access to resources the client doesn’t have, but I don’t see either of these points as arguments against sharing the validators that can be.
I agree. These points are arguments against the philosophy of HTMX which asserts that you can get everything you need without client-side logic.
To be fair, I'm also not a fan of bloated libraries like React and Angular. I think we had it right 15-20 years ago: use the server for everything you can, and use the smallest amount of JS necessary to service your client-side needs.
I think a charitable reader could infer that this is often made a requirement out of UX concerns and therefore it “needs” to be done. Do you have a substantive objection to what I said?
There are limitations to that, as you well know, since you hedged with “much of.” And this is, again, a nitpick around the edges and not really a comment that addresses my main point.
Input validation checks are such a small part of the codebase; it feels weird that it would dictate the choice of a server-side programming language. Server-side python is very capable of checking the length of a string, for example.
One challenge is that you've got to keep the server-side and client-side validations in sync, so if you'd like to increase the max length of an input, all the checks need to be updated. Ideally, you'd have a single source of truth that both front-end and back-ends are built from. That's easier if they use the same language, but it's not a requirement. You'll also probably want to deploy new back-end code and front-end code at the same time, so just using JS for both sides doesn't magically fix the synchronization concerns.
One idea is to write a spec for your input; then all your input validation can compare the actual input against the spec. Stuff like JSON Schema can help here, or you can write your own. Or even older: XML schemas. Both front-end and back-end would use the same spec, so the languages you pick would no longer matter. The typical things you'd want to check (length, allowed characters, regex, options, etc.) should work well as a spec.
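As a hand-rolled sketch of that idea (not JSON Schema itself; the field names and limits are made up), the same spec object can drive the check on both ends:

    // One spec, written once; both the browser bundle and the Node server import it.
    export const signupSpec = {
      username: { required: true, maxLength: 32,  pattern: /^[a-z0-9_]+$/i },
      email:    { required: true, maxLength: 254, pattern: /^\S+@\S+\.\S+$/ },
    };

    export function validateAgainstSpec(spec, input) {
      const errors = {};
      for (const [field, rules] of Object.entries(spec)) {
        const value = String(input[field] ?? "").trim();
        if (rules.required && value === "") errors[field] = "required";
        else if (rules.maxLength && value.length > rules.maxLength) errors[field] = "too long";
        else if (rules.pattern && !rules.pattern.test(value)) errors[field] = "invalid format";
      }
      return errors; // an empty object means the input passed
    }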
It's also not the only place this type of duplication is seen: you'll often have input validation checks run both in the server-side code and as database constraint checks. Django solves that issue with models, for example. This can be quite efficient: if I have a <select> in my HTML and I want to add an option, I can add the option to my Django model and the server-side rendered HTML will now have the new option (via Django's select widget). No model synchronization needed.
As others mention, you may want to write additional validations for the client-side or for the server-side, as the sorts of things you should validate at either end can be different. Those can be written in whichever language you've chosen as you're only going to write those in one place.
I don’t disagree that if this is your sole reason for picking a language it is not a great one. But it is a benefit nevertheless. And obviously we can express more complex rules in a full-on programming language.
A possible solution could be what ASP.NET does, where you can just set the validation rules in the backend and you get the client-side ones too; the magic is done by jQuery unobtrusive validation. Of course something a bit more up to date than jQuery would be ideal, but you get the gist.
Right, you shouldn’t, but that means writing them twice. One of the selling points of backend JavaScript is the same validation code can run on both ends (obviously any validator that needs to check, e.g., uniqueness in a database won’t work).
Frontend and backend validation are usually not the same though. You won't be writing the same thing twice, you'll be writing different validations for each.
I’ve several times been in the position of writing a new UI for an existing API. You find yourself wanting to validate stuff before the user hits “submit”, because hiding errors until after submitting is terrible UX; and to do that, you find yourself digging into the server code to figure out the validation logic, and duplicating it.
And then years or months later the code gets out of sync, and the client is enforcing all sorts of constraints that aren’t needed on the server any more! Not good.
It's not as easy as that. Showing validation while people are editing can be even worse, especially for less-technically able users or people using assistive technology.
Having an announcement tell you your password isn't sufficiently complex when you're typing in the second letter might not be bad for us, but how does that work for a screen reader?
Not really. GOV.UK Design System team have done lots of research into this and their guidance says:
> Generally speaking, avoid validating the information in a field before the user has finished entering it. This sort of validation can cause problems - especially for users who type more slowly
HTML gives very limited tools for tracking what a (potentially JS-less) user is doing. There are various tricks, like "link shorteners" and "magic pixels" that allow some tracking.
But if you want advanced tracking, like tracking what a user is focusing on at a particular instant, you need to wrap the whole document in a lot of JS.
SPA frameworks came out of AdTech companies like Meta, and I assure you it wasn't because they had limited engineering resources.
I can imagine that Facebook and Google liked the way Angular and React allowed for more advanced tracking. But it seems like you're giving too much weight to that as a primary cause.
From my memory of working through this time, it was driven more by UX designers wanting to have ever more "AJAXy" interfaces. I did a lot of freelancing for design agencies 2006 - 2016, and they all wanted these "reactive" interfaces, but building these with jQuery or vanilla JS was a nightmare. So frameworks like JavaScript MVC, Backbone.js, SproutCore, Ember.js were all popping up offering better ways of achieving it. React, Vue and Angular all evolved out of that ecosystem.
That’s a good and logical story, but it doesn’t match the reality in my experience.
Companies use SPA frameworks for the same reason they use native apps, to make a “richer”, more responsive, more full-featured UI.
Analytics is typically done in a separate layer by a separate team, usually via Google Tag Manager. There might be a GA plugin for your UI framework, but it can work equally well with plain HTML. GA does use a bunch of client-side JS, yes, but it’s not really a framework you use on the client side, it’s just a switch you flip to turn on the data hose.
In my experience, trying to add analytics cleanly to clientside UI code is a complete pain. Trying to keep the analytics schema in sync as the UI evolves is really hard, and UI developers generally find analytics work tedious and/or objectionable and hate doing it.
Google Tag Manager is the big story in adtech, and I think it comes from and inhabits a completely different world from Angular, React etc.
You can do all that with vanilla HTML: cursor tracking, scroll tracking. htmx makes it trivial.
React isn't a SPA framework. It's a component framework. It has no router or even state management. ExtJS is an MVC framework in JavaScript and can be used to create a full SPA app without additional libraries. It also came out in 2007. There is also Ember, which also predates React and is another MVC framework, by the people who did Rails.
This is not correct. SPAs and web components were pioneered by Google with the introduction of Angular. Later, Vue was invented by a previous Google employee who had worked on Angular. Finally, Facebook came up with React (it's a "reaction" to Angular) because they could not be seen using a Google product.
If anything, SPAs make metrics harder because they hide the real behavior of the page in local JS/TS code and don't surface as much or any information in the URL. Also, fewer server interactions means fewer opportunities to capture user behavior or affect it at the server level.
I work with SPAs with API calls every day. It definitely reduces the server interactions over computing everything on that side, and it gives fewer points of contact with the server about the user's behavior. For example, many clicks and other actions will not result in any server contact at all.
I'm aware that they call it "reactive" but I'll stick with my rationale. There is no way they would use a Google product like that.
I... don't believe you? Looking at the network requests of any SPA I've ever seen, there are just tons of requests for even simple page loads. One for main content, one for profiles, one for comments, etc.
In theory stuff like GraphQL helps, but in the reality I'm living in, SPAs hit multiple endpoints to render even simple pages.
An enterprise React app I am currently working with takes about 50 requests to fully render the app post-login. Switching to another view (no reload) takes another few dozen. That's a lot of "server interactions", pretty standard for SPAs, but YMMV.
your timeline is a bit off. facebook had react in production (mid-late 11) less than a year after angularjs went public, open-sourced it 18-24 months later (early 13), then evan started working on vue a few months after that (mid 13) and released early the following year
But if you don't blend the two, then you have a DRY violation. Someone should only have to say a field (column) is required in one and only one place, for example. The framework should take care of the details of making sure both the client and the server check.
I myself would like to see a data-dictionary-driven app framework. Code annotations on "class models" are hard to read and too static.
that seems like an easy way for validation logic between the two to fall out of sync. Limits want to be enforced on the back end, definitely, but if the frontend also does the same validation the user experience is better, so you want to do some there as well (eg blank username does not need to do the slow round trip to the server). Through the magic of using JavaScript on both ends, the exact same bit of code can, with a bit of work, be used on both the front and the back end, so you can get the best of both worlds.
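A minimal sketch of that sharing, assuming a bundler on the browser side; the module, route, and helper names are illustrative:

    // validators.js -- written once, shipped to the browser and imported by the server.
    export function usernameError(username) {
      if (username.trim() === "") return "Username is required";
      if (username.length > 32) return "Username is too long";
      return null; // null means valid
    }

    // In the browser: instant feedback, no round trip for the obvious cases.
    //   import { usernameError } from "./validators.js";
    //   input.addEventListener("blur", () => showError(usernameError(input.value)));

    // On the server: the exact same function, re-run where it can actually be trusted.
    //   import { usernameError } from "./validators.js";
    //   const err = usernameError(req.body.username);
    //   if (err) return res.status(400).json({ error: err });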
I actually tend to think of it as a way to add graceful feature degradation and handle microservice issues. It always seemed better, and more graceful, to have the client manage that.
The number of SPAs that implement their own timeouts when I'm stuck on 2G networks is non-zero and incredibly annoying. The network socket has a timeout function; just because you 'time out' doesn't mean the network timed out. That data is still being transferred, and retrying just makes it worse.
I don't understand how a SPA is different from a vanilla web app in terms of user analytics. A beacon is a beacon, whether it's an img tag with a 1x1 transparent gif or an ajax call.
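Either way it's just an HTTP request the server can log; a two-line sketch (endpoint names made up):

    // MPA-era beacon: a 1x1 image request carrying the event in the query string.
    new Image().src = "/pixel.gif?event=pageview&path=" + encodeURIComponent(location.pathname);

    // SPA-era equivalent: same data, different transport.
    navigator.sendBeacon("/analytics", JSON.stringify({ event: "pageview", path: location.pathname }));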
Also, validation is usually built on both client and server for the same things. Take password complexity validation: it's on both the client (for UX) and the server, otherwise it's a terrible experience.
I have never heard this before. Can you elaborate on the differences? What do you validate on the client side that you don't on the server and vice versa?
Some validations require capabilities that you don't want/need the client to have.
There are also validations that can improve UX but aren't meaningful on the server. Like a "password strength meter", or "caps lock is on".
Religiously deploying the same validations to client and server can be done, but it misses the point that the former is untrusted and just for UX. And will involve a lot of extra engineering and unnecessary coupling.
I'm not sure adding a meter value output to the server side check to use it in both places is really more engineering work. Writing separate checks on the client and server side seems much more likely to create headache and extra work.
That said, I could definitely see additional checks being done server side. One example would be actually checking the address database to see if service is available in the entered address. On the other hand, there really isn't any waste here either. I.e. just because you write the validation in server side JS doesn't mean you MUST therefore deploy and use it in the client side JS as well, it just means you never need to worry about writing the same check twice.
I understand the argument I just disagree that having a separate "bool isPasswordValid()" and "float isPasswordValid()" (really probably something that returns what's not valid with the score) function is in any way simpler than a single function used on both sides. Sure, the server may not care about the strength level but if you need to write that calculation for the client side anyways then how are you saving any engineering work by writing a 2nd validation function on the server side instead of just ignoring the extra info in the one that already exists?
In this situation code for a good strength meter is going to be an order of magnitude or two more complicated than the boolean validity check. Porting 50x as much code to the server is significantly worse than having two versions or having one shared function and one non-shared function.
You shouldn't have to port anything. If you mean in the opposite case of two separate languages between client and server side then yeah, of course - by definition you're rewriting everything and there is no way to reuse code. I'm not clear how you're reaching anywhere near 50x complexity though. You're writing something like this on the client side (please excuse the lazy checks):
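(The limits and names below are illustrative, just to give the shape of it.)

    // Lazy illustrative checks; limits and names are placeholders.
    const CHECKS = {
      notBlank:   "notBlank",
      minLength:  "minLength",
      maxLength:  "maxLength",
      validChars: "validChars",
      hasLetter:  "hasLetter",
      hasNumber:  "hasNumber",
    };

    function checkPassword(pw) {
      return {
        [CHECKS.notBlank]:   pw.length > 0,
        [CHECKS.minLength]:  pw.length >= 8,
        [CHECKS.maxLength]:  pw.length <= 128,
        [CHECKS.validChars]: /^[\x20-\x7E]+$/.test(pw),
        [CHECKS.hasLetter]:  /[a-zA-Z]/.test(pw),
        [CHECKS.hasNumber]:  /[0-9]/.test(pw),
      };
    }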
Then instead of writing another one on the server that only checks the password isn't blank, is under the maximum length, and has valid characters, you're just reusing the full six-check code. That's only twice as many checks, not even twice as many lines, and it's already written. You really should check all six again on the server anyways, but that's beside the point. Better still, if you do the reuse as a build step via a shared function library file or similar, you don't need to copy/paste and it stays in sync automatically.
Of particular note there is no UI code here because the meter's UI code is not related to the check function beyond it reads the return value.
If that's all you want then sure, but that's not what I would call a good password quality meter. It makes no attempt to look for patterns or words or super-common passwords.
As noted, excuse the basic check functions and substitute whatever criteria you actually want; the amount of work on the server side is still less than writing that and then a different check on the server. If your password check logic is 50x the size of that, though, you might be overdoing it, but that's just an opinion. Again, I'd argue you should really be validating all of it server side anyways: fewer chances to mess something up and accept a weak password.
Your check functions are fine, for both client and server.
I'm not saying not to reuse things, because I specifically think it should be two separate functions on the client, one of which is copied to the server. But if you insist on having only one client function, I think the server function should be cut down.
And the premise is doing client-only advice on strength so I'm not going to challenge that premise.
As far as 50x, your code doesn't need those consts saying the exact same thing as the results object, so that simplifies to 8 lines, and I think 400 lines for a good password estimator isn't unreasonable. zxcvbn's scoring function is around that size.
I see; I have never implemented those types of validations. We do religiously deploy the same validation on client and server explicitly to avoid a mismatch of client/server validation. Having the client submit "valid" input only to have the server reject it is something we have run into. Having only client-side validation is something I have never run into.
Also, in my opinion you shouldn't do things like what you suggest. A password strength metre is only going to give attackers hints at the passwords you have in your system. And I have not seen a caps-lock-on warning in forever. The only password validation we do is the length, which is pretty easy to validate on client and server.
> A password strength metre is only going to give attackers hints at the passwords you have in your system
No, it's not. A password strength meter just shows you the randomness of an input password, it doesn't have anything to do with passwords already in the system.
I'd agree with both takes, in that it depends on the meter. Ones which truly approximate password entropy work like you say; however, for some reason, the most common use of such meters is to show how many dartboard requirements you've met while ignoring the actual complexity. When this common approach is used you combine "password must be 8 characters or more" with things like "password must have a number, symbol of ${group}, and capital letter", and the average password complexity is actually made worse for a given length due to pigeonholing.
In the full picture though, in terms of UI/UX, the meter seems like only a downside. In the dartboard use case it's great because it displays what's still needed in the terms users actually work and think in, signalling e.g. "you still need a number, otherwise you're all set". People don't really think in bits of entropy though, so all that a meter (or a normal failed-validation hint) can really signal is "more complexity and/or length needed".
There may be good cases for using a meter while simultaneously implementing good password requirement policy I'm not thinking of though.
This works like I described; it doesn't show 'dartboard requirements', only entropy. I think you've misunderstood what a password strength checker is. It's definitionally not a checklist like 'You need an uppercase letter, a lowercase letter, a number, a special character'. It's a tool which measures the strength, i.e. the randomness or entropy, of the password.
Everything has to be validated on the server side simply for security reasons.
Even if you do all validation on the client side, which prevents the users submitting a form with invalid data, an attacker can work around that. e.g. submitting the form with valid data, but intercepting the request and modifying the values there. Or simply just using curl with malicious/invalid data.
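For instance, nothing obliges anyone to go through your form at all; any HTTP client can post straight to the endpoint (URL and payload made up):

    // Bypasses every client-side check: the server sees this exactly like a form submit.
    fetch("https://example.com/api/signup", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email: "not-an-email", age: -1 }),
    });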
You still need the client-side validation for UX. The regular user needs to know if they messed up something in the form. Also, it's a much better UX if it's done on the client side, without the need to send an async request for validations.
They're limited in some ways but they're just about powerful enough to do almost everything you'd need or want to do client-side without making a network request. In my opinion it doesn't make sense to try to fit in tons of complex validation logic in the frontend.
Some kinds of validation really do need the round trip. If somebody is choosing a user name on a sign-up form, you do need to do a database lookup.
If your back end is fast and your HTML is lean, backend requests to validate can complete in less time than any of the 300 javascript, CSS, tracker, font, and other requests that a fashionable modern webapp does for no good reason...
It's true though that many back ends are run on the cheap with slow programming languages and single-thread runtimes like node.js that compete with slow javascript build systems to make people think slow is the new normal.
Yeah, obviously if it requires I/O you can’t write it client side, but the argument here seems to be in favor of doing validation only server-side even when it could also be done client-side.
Presumably there are some features of the app that won’t work that way. Surely you’re not saying you prefer a Web app just for the sake of making calls.
No, I'm just taking your argument to its logical conclusion. If you manufacture enough criteria, you can steer any discussion so that your choice is the only possible choice left. That's not how things work in reality, there are many competing factors that go into technological choices.
Minimizing round trips is an optimization with improved UX and operational cost and no real downside except marginal implementation difficulty (which itself seems like an argument for being able to share validation between the backend and frontend).
because many people wouldn't use the product or you would have to maintain multiple codebases for the various operating systems and devices (including mobile).
Can we even weigh that statement? The average SPA is significantly worse than the average MPA. There is so much browser functionality that needs to be replicated in a SPA that few teams have the resources or talent to do a decent job.
Recently I was using Circle (like a paid social media platform for communities) and pressing back not only loses the scroll position, it loses everything. It basically reloads the whole home page.
The nice thing about htmx is it gives a middle ground between the two. Build with the simplicity of an MPA while getting a lot of the nice user experience of an SPA. Sure, you don't get all the power of having a full data model on the client side, but you really don't need that for most use cases.
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.
What? No.
The whole point of Node was a) being able to leverage javascript's concurrency model to write async code in a trivial way, and b) the promise that developers would not be forced to onboard to entirely different tech stacks on frontend, backend, and even tooling.
There was no promise to write code once, anywhere. The promise was to write JavaScript anywhere.
That's the reasoned take, and yet I have strong and distinct memories of Node being sold on the basis of shared code as early as 2011. Much of the interest (and investment) in Meteor was fueled by its promise of "isomorphic JavaScript."
>For instance, a major selling point of Node was running JS on both the client and server so you can write the code once
I mean, I'm using Laravel Livewire quite heavily for forms, modals and search. So effectively I've eliminated the need for writing much front-end code. Everything that matters is handled on the server. This means the little Javascript I'm writing is relegated to frilly carousels and other trivial guff.
You're on the money with this assessment. It's all bandwagon hopping without any consideration for reality.
Also, all these things the author complains about are realities of native apps, which still exist in massive numbers especially on mobile! I appreciate that some folks only need to care about the web, but declaring an architectural pattern as superior - in what appears to be a total vacuum - is how we all collectively arrive at shitty architecture choices time and time again.
Unfortunately, you have to understand all the patterns and choose when each one is optimal. It's all trade-offs - HTMX is compelling, but basing your entire architectural mindset around a library/pattern tailored to one very specific type of client is frankly stupid.
> to one very specific type of client is frankly stupid
However, I see this specific type of client, one that needs just basic web functionality (e.g. CRUD operations and building something basic), as more prevalent than those that need instant in-app reactivity, animations and so on (React and the SPA ecosystem).
Nowadays it's exactly the opposite: every web developer assumes a SPA as the default option, even for these simple CRUD cases.
Technically, the technology supports doing any of them right. In practice, doing good MPAs requires offloading as much as you can onto the mature and well-developed platforms that handle them, while doing good SPAs requires overriding the behavior of your immature and not thoroughly designed platforms on nearly every point and handling it right.
Technically, it's just a difference in platform maturity. Technically those things tend to correct themselves given some time.
In practice, almost no SPA has worked even minimally well in more than a decade.
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.
While I am a fan of MPAs and htmx, and personally find the dev experience simpler, I cannot argue with this.
The high-order bit is always the dev's skill at managing complexity. We want so badly for this to be a technology problem, but it's fundamentally not. Which isn't to say that specific tech can't matter at all -- only that its effect is secondary to the human using the tech.
100%. Saying that [technology x] will remove complexity is like saying that you've designed a house that can't get messy. All houses can be messy, all houses can be clean. It depends on the inhabitants.
Yes, but some technologies make it easier (or harder) to keep everything clean.
Like, in my opinion you can write clean code in C, but since you don't even have a string type it shepherds you into doing nasty stuff with char*... etc.
I remember the hype about javascript on the server (node) being front-end devs didn't have to know/learn a different language to write backend code. Not so much writing code once but not having to write Javascript for client-side and then switch to something else to write the server-side.
[edit: both comprising shared code between client and server, as well as a reduced barrier to server-side contribution, and then some, including but not limited to the value of the concurrency model and the expansive (albeit noisy) library availability, ...]
> Also, there was a push to move the shitty code from the server to the client to free up server resources and prevent your servers from ruining the experience for everyone.
People forget how bad MPAs were, and how expensive/complicated they were to run.
Front end frameworks like svelte let you write nearly pure HTML and JS, and then the backend just supplies data.
Having the backend write HTML seems bonkers to me: instead of writing HTML on the client and debugging it, you get to write code that writes code that you then get to debug. Lovely!
Even in more complex frameworks, like React, you have tools like JSX that map pretty directly to HTML, and in my experience a lot of the hard-to-debug problems come up when the framework tries to get smart and doesn't just stupidly pop out HTML.
We decided for fun to do a small project in htmx (we had to pick something, one person opted strongly). Yeah, I was cringing and still am. I fully support frontend/backend split status quo.
For stuff that is uncomplicated I much prefer Svelte, as it still keeps the wall between frontend/backend but lets you do a lot of "yolo frontend" that is short-lived and gets fixed. I run a small startup on the side: Svelte FE + Clojure BE. It works great, as I have a different tolerance for crap in the frontend (if I can fix something with style="", I do and I don't care). I often hotfix a lot of stuff in the front where I can, just deploy, and return later to find a better solution that involves some changes in the backend.
I can't imagine that for moving a button I would have to do the deployment dance for the whole app, which in my case has 3 components (where one is distributed and requires strict backwards compat).
That demonstration, as per OP, is dumb or targeted at React-ists. You can, with HTMX, do the classic AJAX submit with offline validation.
In recent years, for every layer of web development, what I saw was that a big smelly pile of problems with bad websites and webapps, be it MPA or SPA, was not a matter of bad developers on the product, but more a problem of bad, sometimes plain evil, developers of the systems sold to developers to build their product upon. Boilerplate for apps, themes, and ready-made app templates are largely garbage, bloat, and prone to supply chain attacks of any sort.
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.
(I'm not actually arguing with you, just thinking out loud)
This is often repeated, but I don't think it's even close to being the primary reason.
The primary reason you build JS web clients is for the same reason you build any client: the client owns the whole client app state and experience.
It's only a fluke of the web that "MPA" even means anything. While it obviously has its benefits, we take for granted how weird it is for a server to send UI over the wire. I don't see why it would be the default to build things that way except for habit. It makes more sense to look at MPA as a certain flavor of optimization and trade-offs imo which is why defaulting to MPA vs SPA never made sense now that SPA client tooling has come such a long way.
For example, a SPA gives you the ability to write your JS web client the same way you build any other client, instead of this weird thing where a server sends an initial UI state over the wire, you add JS to "hydrate" it, and then you have to keep the server and client UIs synchronized.
Htmx has similar downsides to MPAs, since you need to be sure that every server endpoint sends an HTML fragment that syncs up with the rest of the client UI's assumptions. Something as simple as changing a div's class name might incur HTML changes across many HTML-sending API endpoints.
Anyways, client development is hard. Turns out nothing was a panacea and it's all just trade-offs.
> all the devs writing shitty MPAs are now writing shitty SPAs
This pretty much sums it up. There is no right technology for the wrong developer.
It's not about what can get the job done, it's about the ergonomics. Which approach encourages good habits? Which approach causes the least amount of pain? Which approach makes sense for your application? It requires a brain, and all the stuff that makes up a good developer. You'll never get good output from a brainless developer.
>For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.
You did write it once before too.
With NodeJS you have Javascript on both sides, that's the selling point. You still have server and client code and you can write a MPA with NodeJS
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.
These are two different things and I don't see how they're related. You don't need code sharing to do client side navigation. And you should always be validating on the backend anyway. Nothing is stopping an MPA from validating on the client, whether you can do code sharing or not.
> prevent your servers from ruining the experience for everyone.
This never panned out because people are too afraid to store meaningful state on the client. And you really can't, because of (reasonable) user expectations. Unlike a Word document, people expect to be able to open word.com and have all their stuff, and have n simultaneous clients open that don't step on one another.
So to actually do anything you need a network request but now it's disposable-stateful where the client kinda holds state but you can't really trust it and have to constantly refresh.
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again.
I think the root cause of this is lack of will/desire to spend time on the finer details, either on the part of management who wants it out the door the second it's technically functional or on the part of devs who completely lose interest the second that there's no "fun" work left.
A pro can be a con, and vice versa. The reason why you move to a SPA might be the reason why you move away from it. The reason why you use sqlite early on might be the reason you move away from it later.
A black & white view of development and technology is easy but not quite correct. Technology decisions aren't "one size fits all".
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.
This is only sort of true. The problem can be mitigated to a large extent by frameworks; as the framework introduces more and more 'magic' the work that the developer has to do decreases, which in turn reduces the surface area of things that they can get wrong. A perfect framework would give the developer all the resources they need to build an app but wouldn't expose anything that they can screw up. I don't think that can exist, but it is definitely possible to reduce places where devs can go astray to a minimum.
And, obviously, that can be done on both the server and the client.
I strongly suspect that as serverside frameworks (including things that sit in the middle like Next) improve we will see people return to focusing on the wire transfer time as an area to optimize for, which will lead apps back to being more frontend than backend again. Web dev will probably oscillate back and forth forever. It's quite interesting how things change like that.
Unfortunately, developers often write code in a framework they don't know well so they end up fighting the framework instead of using the niceties it provides. The end result being that the surface area of things that can go wrong actually increases.
True. But I also find that a lot of frameworks are narrowly optimized for solving specific problems, at the expense of generality, and those problems often aren’t the ones I have.
Supposedly declarative approaches especially are my pet peeve. “Tell it what you want done, not how you want it done” is nice sounding but generally disappointing when I soon need it to do something not envisioned by its creator yet solved in a line or two of general purpose/imperative code.
Most companies unfortunately don't let developers adequately explore solutions or problem spaces before committing to them either. The ones that dominate do, but that's also because they often have the resources to build it from the ground up anyway.
The average mid-sized business seems to have internalized that code is always a liability, but they respond by cutting short discovery and get their just deserts.
That oscillation probably wouldn't happen if it were possible to be more humble about the scope of the solution and connection to commercial incentives. It's gotten to the point where a rite of passage for becoming a senior developer is waking up to the commercialization and misdirection.
You can see the cracks in Next.js. Vercel, Netlify et al. are interested in capitalizing on the murkiness (the middle, as you put it) in this space. They promise static performance but then push you into server(less) compute so they can bill for it. This has a real toll on the average developer. In order for a feature to be a progressive enhancement, it must be optional. This is orthogonal to what is required for a PaaS to build a moat.
All many people need is a pure, incrementally deployed SSG with a robust CMS. That could exist as a separate commodity, and at some points in the history of this JAMStack/Headless/Decoupled saga it has come close (excluding very expensive solutions). It's most likely that we need web standards for this, even if it means ultimately being driven by commercial interests.
> a major selling point of Node was running JS on both the client and server so you can write the code once
But we don’t have JS devs.
We have a team of Python/PHP/Elixir/Ruby/whatever devs and are incredibly productive with our productivity stacks of Django/Laravel/Phoenix/Rails/whatever.
> have to do a network request for each and every validation of user input.
HTML5 solved that to a first approximation client-side. Often later you'll need to reconcile with the database and security, so that will necessarily happen there. I don't see that being a big trade-off today.
Well by definition the "average" team is not capable of writing a "great" app. So it doesn't matter so much what the technology stack is -- most of what is produced is pretty shitty regardless.
This is the real problem, and why I'd argue we've made little real progress in tooling despite huge investment in it.
The web still requires too much code and concepts to be an enjoyable dev experience, much less one that you can hold in your head. Web frameworks don't really fix this, they just pile leaky abstractions on that require users to know the abstractions as well as the things they're supposed to abstract.
It seems like it is difficult to truly move webdev forward because you have to sell to people who have already bought into the inessential complexity of the web fully. The second you try to take part of that away from them, they get incensed and it triggers loss aversion.