The allure of XMLHttpRequest was that, over connections much slower than today's and on much less powerful desktop computers, a user didn't have to wait for the whole page to redownload and re-render after every single interaction (one can argue that focusing on better HTTP caching on the server and client might have been smarter). This was also much of the draw of frames (which were also attractive for some front-end design use cases later re-solved with CSS).
As apps got more complex, clients got more compute, bandwidth grew, and web audiences grew, offloading much of the page rendering to the client helped both contain server-side costs and maintain or improve responsiveness to user interactions.
Now the trade-offs have shifted again: desktop client performance improvement is slowing (this isn't just the slowing of computer speeds; computers are also replaced less frequently), average bandwidth continues to grow, and app complexity and sophistication continue to grow, while server compute costs fall faster than audience sizes grow. In that environment, shifting HTML rendering back to the server and sending more verbose pre-rendered HTML fragments over the wire can make sense as a way of giving users a better experience.
As someone who implemented an SPA framework before "SPA" was a word, much less React or Angular, I have to say that for my company it was all about state management.
Distinguishing between web apps (true applications in the browser) and web pages (NYT, SEO, generally static content), state management for the former was hellish at the time (~2009).
However, with the advent of V8, it became apparent to me as an ASP.NET developer that a bad language executing at JIT speeds in the browser was "good enough" to avoid sending state back and forth through a very complex mesh of cookies, querystring parameters, server-side sessions, and form submissions.
If state could be kept in one place, that more than justified shifting all the logic to the client for complex apps.
Or back buttons and CSRF tokens and flash scope...
Or, let's talk about a common use case. Someone starts filling in a form, and then they need to look at another page to get more information. (This other page may take too long to load and isn't worth putting in the workflow, or duplicating the information in the workflow was cut from scope.) So they go out to the other page, then back, and are flustered because they were partway through the work.
So, if you want this to work, you're going to need state management in the client anyway. (Usually using sessionStorage these days, I'd presume?) So then we've already done part of the work for state management, and you're playing the "which is right, the server or the client?" game.
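A minimal sketch of that client-side piece (the form id and storage key are made up), persisting a draft to sessionStorage so a detour to another page doesn't lose the user's work:

```typescript
// Sketch: keep an in-progress form draft in sessionStorage so a detour
// to another page doesn't lose the user's work. The form id and the
// storage key are hypothetical.
const DRAFT_KEY = "signup-draft";
const form = document.querySelector<HTMLFormElement>("#signup-form")!;

// Restore a saved draft, if any, when the page loads.
const draft = sessionStorage.getItem(DRAFT_KEY);
if (draft) {
  for (const [name, value] of Object.entries(JSON.parse(draft) as Record<string, string>)) {
    const field = form.elements.namedItem(name);
    if (field instanceof HTMLInputElement || field instanceof HTMLTextAreaElement) {
      field.value = value;
    }
  }
}

// Save on every edit; clear once the form is actually submitted.
form.addEventListener("input", () => {
  const entries = [...new FormData(form)].filter(([, v]) => typeof v === "string");
  sessionStorage.setItem(DRAFT_KEY, JSON.stringify(Object.fromEntries(entries)));
});
form.addEventListener("submit", () => sessionStorage.removeItem(DRAFT_KEY));
```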
You accumulate enough edge cases and UX tweaks, and you're halfway down the SPA requirements anyway.
Now, hopefully Hotwire will solve a large number of these problems. I'm going to play with it, but the SPA approaches have solved so many of the edge cases via code and patterns.
Part of the problem has also been ameliorated by larger screens and browser tabs.
Reminds me of saving Rich Text content on the server side. It was a nightmare.
Also reminds me of the Microsoft RTF format. It's basically a memory dump of the GUI editor.
Binding state to a tree was never a good idea to start with.
I think what drives this crazy train of overengineered solutions, SPAs and K8s for hosting a single static page, is the deep separation of engineers from the actual business problems and the people they are trying to help. When all you have are tickets in Jira or Trello, and you don't know why you should do them or whether they actually benefit anyone, it's natural to invent non-existent tech problems which are suddenly interesting to solve. That is natural for curious engineers and builders. Then mix in the 1% of big apps and companies which actually do have these tech problems and have to solve them, and everybody just wants to be like them and starts cargo-culting.
I recently wrote a SPA (in React) that, in my opinion, would have been better suited as a server-side rendered site with a little vanilla js sprinkled on top. In terms of both performance and development effort.
The reason? The other part of the product is an app, which is written in React Native, so this kept a similar tech stack. The server component is node, for the same reason. And the app is React Native in order to be cross-platform. We have ended up sharing very little code between the two, but using the same tech everywhere has been nice, in a small org where everyone does everything.
Making teams responsible
Third, we give full responsibility to a small integrated team of designers and programmers. They define their own tasks, make adjustments to the scope, and work together to build vertical slices of the product one at a time. This is completely different from other methodologies, where managers chop up the work and programmers act like ticket-takers.
Together, these concepts form a virtuous circle. When teams are more autonomous, senior people can spend less time managing them. With less time spent on management, senior people can shape up better projects. When projects are better shaped, teams have clearer boundaries and so can work more autonomously.
I wonder if other industries suffer from the same problem: bored engineers.
From the turbo handbook: "An application visit always issues a network request. When the response arrives, Turbo Drive renders its HTML and completes the visit." The phrase "When the response arrives" raises the question of what happens if it doesn't arrive, or if it takes a minute to arrive, or if it arrives with a faulty status code.
Not saying this is better from an error handling perspective, but at least the whole idea of Hotwire and its peers (Turbolinks, etc) is that there is no state and it should thus be safer and quicker to reload the page should things go wrong.
> there is no state and it should thus be safer and quicker to reload the page should things go wrong.
That's not exactly true, since there are non-idempotent HTTP methods. While the browser will prompt you to confirm resending a non-idempotent HTTP request when refreshing a normal form POST, I don't think that turbo/turbolinks/similar will prompt or resend.
On refresh, should Turbo retry a POST? The "right way" is to keep the state of the last POST and prompt the user for confirmation, but it seems like it is undocumented what Turbo actually does. I'm guessing it either does not retry, or it retries and hopes the effect is idempotent.
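For what it's worth, the classic server-side mitigation is Post/Redirect/Get, so a refresh replays a harmless GET instead of the POST. A sketch in Express (routes and helpers are hypothetical):

```typescript
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Hypothetical persistence and rendering helpers.
declare function createOrder(body: unknown): string;
declare function renderOrderPage(id: string): string;

// Post/Redirect/Get: the POST performs the non-idempotent work, then
// answers with a 303 so a browser refresh re-issues a safe GET.
app.post("/orders", (req, res) => {
  const id = createOrder(req.body);
  res.redirect(303, `/orders/${id}`);
});

app.get("/orders/:id", (req, res) => {
  res.send(renderOrderPage(req.params.id));
});
```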
No one (SPAs, traditional webpages and "spiced" webpages like this included) is doing everything right, but my objection to this framework is that it seems to try to say things are simple or easy when they clearly aren't.
> it seems to try to say things are simple or easy
That's an unfair mis-characterisation. The developers are not pitching a universal panacea that solves all your problems and handles every edge case. They are offering an architecture that simplifies many common scenarios, and one that is thoroughly developer-friendly when it comes to supplying observability and integration hooks for edge cases.
For this latter purpose it merely remains to bother with reading the (clean & elegant) source code to enlighten oneself.
> it seems like it is undocumented
On the contrary, the behavior w.r.t full-page replacement on non-idempotent verbs is extensively discussed in the Turbolinks repo.
The "Turbo Drive" component appears to me as essentially unchanged behaviour in Turbo 7.0.0beta1 from Turbolinks versions 5.x. Turbolinks was introduced in 2013, has many years of pedigree and online discussion, and is well understood by a large developer community. Turbolinks was always maintained, even being ported to TypeScript (from the now venerable CoffeeScript) ca. two years ago with no change in behaviour. Turbo Drive is, practically, just a slightly refactored rebrand of the TypeScript port.
The stuff everyone is so excited about are Turbo Frames and Turbo Streams. These are new, and may be used without adopting Turbo Drive: as with practically everything from Basecamp, the toolkit is omakase with substitutions. They are, nevertheless, complementary, so you get all three steak knives in one box.
Of course now I just go on Hacker News and Twitter instead.
If I know the network is always there, why bother.
The entire design philosophy here is to mimic apparent browser behaviour, or to delegate to it. Hence, to GP's question; you should expect the appearance of browser-like behaviour in any circumstance, modulo anything Turbo is specifically trying to do different. Deviation from baseline browser semantics was certainly a basis for filing bugs in its predecessor (Turbolinks).
As for what Turbo actually does, I checked the source. Good news, even for a first beta, they're not the cowboy nitwits alleged; it gracefully handles & distinguishes between broken visits and error-coded but otherwise normal content responses, and the state machine has a full set of hooks, incl. back to other JS/workers, busy-state element selectors, and the handy CSS progress bar carries over from Turbolinks.
In general, the right approach in HTML-oriented, declarative libraries appears to be triggering error events and allowing the client to handle them, since it is too hard to generalize what clients would want.
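As a sketch of that pattern (the event name below is an assumption; check the handbook for what Turbo actually emits), the library dispatches an event and the application decides what a failure means for its UI:

```typescript
// Sketch of the "error events" pattern: the library dispatches an event
// on failure and the application decides how to react. The event name
// here is an assumption; consult the docs for what is actually emitted.
document.addEventListener("turbo:fetch-request-error", (event) => {
  console.warn("Request failed", (event as CustomEvent).detail);
  showToast("Connection problem. Please try again."); // hypothetical helper
});

function showToast(message: string): void {
  const el = document.createElement("div");
  el.className = "toast";
  el.textContent = message;
  document.body.append(el);
  setTimeout(() => el.remove(), 5000);
}
```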
1. What if something goes wrong?
2. How do I test for handling success/error?
They never address this stuff.
However, another big issue is the dominance of mobile. More and more, you've got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS, and Android), and you want to power them all with the same backend. RESTful APIs serving up JSON work for all 3, as does GraphQL (not a fan, but many are). This, however, is totally web-specific: you'll end up building REST APIs and mobile apps anyway, so the productivity gains end up way smaller, possibly even net negative. Mobile is a big part of why SPAs have dominated: you use the same backend and overall approach/architecture for web and mobile.
I’d strongly consider this for a web-only product, but that’s becoming more and more rare.
They have accompanying https://github.com/hotwired/turbo-ios and https://github.com/hotwired/turbo-android projects to bridge the gap.
This, while very interesting and might have a preferable set of constraints for some projects, is simply not a good fit for many others, as you mentioned in your comment. This looks amazing, and I would definitely try it for a project in which it would fit, but I don't really see a reason to disparage the work others have been doing over the past decade. We need those other tools too!
(sorry for the rant)
However, I think that for mobile they're still offering server-side rendering via Turbo-iOS and Turbo-Android, so you can build quickly and then replace that later if you need to.
This is one of the primary promises of MVC in the first place: views can be rendered independently of controllers and models. For a given controller method call, a view can be specified as a parameter.
In this case, swap "view" for JSON sent back over the wire...
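A sketch of that idea using Express's content negotiation (the resource and render helpers are hypothetical): one controller action, and the Accept header picks whether the "view" is HTML or JSON:

```typescript
import express from "express";

const app = express();

// One controller action, two representations: res.format dispatches on
// the request's Accept header. Model and render helpers are hypothetical.
declare function findArticle(id: string): { id: string; title: string };
declare function renderArticleHtml(article: { id: string; title: string }): string;

app.get("/articles/:id", (req, res) => {
  const article = findArticle(req.params.id);
  res.format({
    html: () => res.send(renderArticleHtml(article)), // server-rendered view
    json: () => res.json(article),                    // same resource as data
  });
});
```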
> RESTful APIs serving up JSON work for all 3, as does GraphQL [...]. This, however, is totally web-specific: you'll end up building REST APIs and mobile apps anyway, so the productivity gains end up way smaller, possibly even net negative.
I bet someone will produce a native client library that receives rendered SPA HTML fragments and pretends it's a JSON response. They might even name it something ironic like "Horror" or "Cringe".
That said, an ideal API for desktop web apps looks rather different than one for mobile web or native clients. Basically, for mobile you want to minimize the number of requests because of latency (so larger infodumps rather than many small updates) and minimize the size of responses due to bandwidth limitations and cost (so concise formats like Protocol Buffers rather than JSON).
It is definitely possible to accommodate both sets of requirements at the same API endpoint, but pretending that having a common endpoint implies anything else about the tech stack is rather disingenuous. If you want server-side rendering and an API that delivers HTML fragments instead of PB or JSON, that can be done too.
And if you can't think of anything worse, you're not trying very hard.
Really, an incredible bang for the buck.
HTML is a machine-readable format, like XML and JSON. Have your back end represent a given resource as microformatted semantic markup, send it gzipped over the wire, and you've got the data exchange you need, even if your mobile app isn't already a dressed-up webview.
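A sketch of consuming such a response from a non-webview client (the h-card/p-name/u-email class names are microformats2 conventions; everything else here is an assumption):

```typescript
// Sketch: treat semantic HTML as the data format. The class names are
// microformats2 conventions; the URL and result shape are assumptions.
async function fetchContacts(url: string): Promise<Array<{ name: string; email: string }>> {
  const html = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(html, "text/html");
  return Array.from(doc.querySelectorAll(".h-card")).map((card) => ({
    name: card.querySelector(".p-name")?.textContent?.trim() ?? "",
    email: card.querySelector(".u-email")?.getAttribute("href")?.replace("mailto:", "") ?? "",
  }));
}
```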
Generally the projects I've felt best about have two features:
1) The API knows how to represent resources across multiple media types, usually including at least markup and JSON.
2) UI is well-annotated enough that developers and machines find it easy to orient themselves and find data.
But you're quite right that this isn't common. I have my own guesses as to why. My observation has been that the workflow and stakeholder decision-making process on the UI side places semantic annotation pretty low on the priority list; in most places you're lucky if you can get a style guide and visual UI system adopted. And there has to be cooperation and buy-in at that level for there to be much incentive to engineer and use a model/API-level way of systematically representing entities as HTML, which often won't happen.
And TBH it is extra effort.
If I recall correctly, this made use of that new technology of the time called "XMLHttpRequest" (/s) which pretty much jump-started web 2.0.
[fn]: Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.
We can blame this on the MBA types. I've literally never heard a software engineer say "hey, let's make this pop-up after they've already been looking at the page for a minute!" or anything like it.
But there are surprisingly few layers on layers. Part of what has been amazing about the web is that the target remains the same. There is the DOM. Everyone is trying different ways to build & update the DOM.
Agreed that there are better alternatives than a lot of what is out there. We seem to be in a mass consolidation, focusing around a couple of very popular systems. I am glad to see folks like Github presenting some of the better alternatives, such as their Catalyst tools, which both speed things up (developer-wise & page-wise, via "Actions") & give some patterns for building WebComponents.
The web has been an unimaginably stable platform for building things, and has retained its spirit while allowing hundreds of different architectures for how things get built. Yes, we can make a mess of our architectures. Yes, humanity can over-consume resources. But we can also, often, do it right, and we can learn & evolve, as we have done, over the past 30 years we've had with the web.
While willfully ignoring all the people doing better.
Maybe we are, as you fear, stuck forever in thick JS-to-JS transpilers & massive bundles & heavy frameworks. Maybe. I don't think so.
React is well under 20k.
FWIW when optimizing my SPA, my largest "oops" in regards to size were an unoptimized header image, and improperly specified web fonts.
You're literally a stereotypical Hacker News commenter.
I also find the modern frontend a bit too complicated but this is just an unreasonable statement.
Of all the problems I have with React, and I do have a few, JSX is not one of them.
If you are going to be using a language to generate HTML, you are either going with a component approach that wraps HTML in some object library that then spits out HTML, or you are stuck with a templating language of some sort. (Or string concatenation, but I refuse to consider that a valid choice for non-trivial use cases.)
JSX is a minimal templating language on top of HTML. Do I think effects are weird, and am I very annoyed at how they are declaration-order dependent? Yup. But the lifecycle stuff is not that weird, or at least the latest revision of it isn't (earlier editions... eh...). The idea of triggering an action when a page is done loading has been around for a very long time, and that maps rather well to React's lifecycle events.
> React alone provides little to nothing
Throw in a routing library, and you are pretty much done.
Now another issue I do have is that people think React optimizes things that it in fact does not, so components end up being re-rendered again and again. Throw Redux in there and it is easy to have 100ms latency per key press. Super easy to do, and avoiding that pitfall involves understanding quite a few topics, which is unfortunate. The default path shouldn't lead to bad performance.
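A sketch of the pitfall and the standard fix (component names are made up): without memoization, every keystroke in the parent re-renders the expensive child even though its props never changed.

```tsx
import { memo, useState } from "react";

// Without memo(), this child re-renders on every parent render, even
// when `items` hasn't changed. Component names are hypothetical.
const ExpensiveList = memo(function ExpensiveList({ items }: { items: string[] }) {
  // Imagine heavy formatting or thousands of rows here.
  return <ul>{items.map((item) => <li key={item}>{item}</li>)}</ul>;
});

export function SearchPage({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");
  // Each keystroke re-renders SearchPage; memo() keeps the list from
  // re-rendering too (assuming the parent passes a stable `items`).
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ExpensiveList items={items} />
    </>
  );
}
```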
> The concept of components and the idiotic life cycles
Page loads, network request is made. Before React, people had listeners on DOM and Window events instead; no different.
Components are nice if kept short and sweet. "This bit of HTML shows an image and its description" is useful.
> Do I need to explain how much stuff can be packed in 400kb?
No, I've worked on embedded systems, I realize how much of a gigantic waste everything web is. But making tight and small React apps is perfectly possible.
And yes, if you pull in a giant UI component library things will balloon in size. It is a common beginner mistake, I made it myself when I first started out. Then I realized it is easier for me to just write whatever small set of components I need myself, and I dropped 60% of my bundle app size.
In comparison, doing shit on the backend involves:
And then someone goes "hey you know what's a great idea? Let's put state on the back end again! And we'll wrap it up behind a bunch of abstractions so engineers can pretend it actually isn't on the back end!"
History repeats itself and all that.
SPAs, once loaded, can be very fast and scaling the backend for an SPA is a much easier engineering task (not trivial, but easier than per user state).
Is all of web dev a dumpster fire? Of course it is. A 16-year-old with VB6 back in 1999 was 10x more productive than the world's most amazing web front-end developer nowadays. Give said 16-year-old a copy of Access and they could replace 90% of modern-day internally developed CRUD apps at a fraction of the cost. (Except mobile support and all that...)
But React isn't the source of the problem, or even a particularly bad bit of code.
> Throw in a routing library, and you are pretty much done.
Ok routing library, now make an http request please without involving more dependencies....
> Throw Redux in
See, exactly what I said: we are getting to the endless pages of dependencies.
> 100ms latency per key press
100ms latency??!?!?!? In my world 100ms is centuries.
I don't have a problem with that. At the end of the day you know exactly what you want to achieve and what the output should be, whereas with React it's a guessing game each time. We are at a point where web "developers" wouldn't be able to tell you what HTML is. With server-side rendering, from a maintenance perspective you have the luxury of using grep, and you don't have to rely on after-market add-ons, plugins, and IDEs in order to find and change the class of a span.
The term SPA first came to my attention when I was in university over 10 years ago. My immediate thought was "this is retarded". Over a decade later, my opinion hasn't changed.
> jsx is a retarded idea because it adds an abstraction over something brutally simple(html).
what programming languages do it better?
if you have other places in mind that have done a good job of being ripe for building html directly, without intermediation, as you seem to be a proponent of, let me/us know. jsx seems closer to what you purport to ask for than almost any other language that has come before! your words are a vexing contradiction.
> Ok routing library, now make an http request please without involving more dependencies....
please stop being TERRIFIED of code. many routing libraries with dependencies are tiny. stop panicking that there is code. react router v6 for example is 2.9kB. why so afraid bro?
this is actually why the web is good. because there are many many many problems, but they are decoupled, and a 2kB library builds a wonderful magical consistent & complete happy environment that proposes a good way of tackling the issues. you have to bring some architecture in, but anyone can invent that architecture, the web platform is unopinionated ("principle of least power" x10,000,000), and the solutions tend towards tiny.
redux is 2kB with dependencies as well.
Yup, that's crappy. The Work At A Startup page used to have this issue (may still, haven't looked lately), which shows how easy it is to make happen accidentally.
As I said, it is a weakness of the system.
> jsx is a retarded idea because it adds an abstraction over something brutally simple(html)
Have you seen how minimal of an abstraction jsx is? It is a simple rewrite to a JS function that spits out HTML, but JSX is super nice to write and more grep-able than the majority of other templating systems.
I have a predisposition to not liking templating systems, but JSX is the best part of React.
Notably, it doesn't invent its own control-flow language, unlike most competitors in this space.
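Concretely, with the classic JSX transform (newer toolchains use an automatic runtime, but the idea is the same):

```tsx
import * as React from "react";

// This JSX...
const card = <li className="card">Hello</li>;

// ...is sugar for this plain function call under the classic transform:
const sameCard = React.createElement("li", { className: "card" }, "Hello");
```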
> My immediate thought was "this is retarded".
Well, the most famous SPA is Gmail, and it's rather popular, you may have heard of it. It is bloated now, but when it first debuted it was really good. Webmail sucked, then suddenly it didn't.
Google maps. Outlook web client. Pandora. Online chat rooms, In browser video chat, (now with cool positional sound!)
SPA just means you are fetching the minimum needed data from the server to fulfill the user's request, instead of refetching the entire DOM.
They are inherently an optimization.
Non-SPAs can be slow bloated messes as well, e.g. the Expedia site.
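To make the optimization concrete, a toy sketch (the endpoint and element id are invented): fetch the few bytes that changed and patch one node, rather than re-requesting and re-rendering the whole document.

```typescript
// Sketch: refresh a single widget with a small JSON request instead of
// reloading the whole page. The endpoint and element id are invented.
async function refreshUnreadCount(): Promise<void> {
  const res = await fetch("/api/unread-count", { headers: { Accept: "application/json" } });
  const { count } = (await res.json()) as { count: number };
  document.querySelector("#unread-badge")!.textContent = String(count);
}
```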
It's generally frowned upon to use retarded in this manner. Not only is it insulting to people, it brings down the overall tone of your argument.
So call it either one and people will know what you're talking about.
And for actual ducts you'll want to use foil tape, because temperature changes wreck the adhesion of duct tape, and then the moisture leaks into the walls/ceiling, which is $$$$ bad.
This strongly depends on the type of duct. Flex ducts that are a plastic skin over a wire coil don't work so well with aluminum tape.
Duck tape is the stuff developed for the US army to waterproof things closed. Post-war, they made it silver instead of green and marketed it for use with ducts (being waterproof made it SEEM like a good candidate for the job in heating systems), but it's pretty terrible for this purpose, since temperature changes degrade the adhesive rapidly.
The tape you actually want to use for ducts is foil-backed tape.
In short, it was and still is a great marketing gimmick, but Duck Tape was only ever "ok" at keeping things waterproof, and it only looks like gaffer's tape or the tape you actually want to use on ducts.
I use a lot of Duck Tape.
From the little language study I've done, English is one of the most flexible. You can discard entire parts of speech and it still works.
Saying "'ey, you woke up yet?" is ok in many contexts.
The real world disagrees with you; go check out any major website and observe as your laptop's fans spin up.
However, I think the main problem here isn't the symptom (websites are bloated) but the root cause. I'm not sure if it's resume-driven development by front-end developers, or whether they've genuinely lost the skill of pure CSS & HTML, but everyone seems to push for React or some kind of SPA framework even when the entire website only needs a handful of pages with no dynamic content.
Try every old media site and most e-commerce.
In the Microsoft-verse, this might also draw some comparisons to the more modern server-side Blazor.
I used it 13 years ago. It was fancy.
I don't know if it's so much 'everything old is new again' as it is a problem of market penetration.
Almost 20 years ago, one of my professors told us before graduation that hot tech is mostly about the idea pendulum swinging back and forth. I immediately chalked it up to the snobbery of wise white men aged 65+.
However, this is exactly that. We started with static pages, then came Ajax and ASP.NET and the open-source variants, then we went full SPA, and now we are moving back to server side because things are too complicated.
Obviously the tech is different, better, and more efficient, but the overall idea seems to be the same.
People have been pointing out that it's a shit show with no end in sight for the entire duration of the phase: the performance impact and cost to end users, how diabolical it is for those on higher-latency or poorer network connectivity (i.e. most of the world), and so on.
Same thing as always happens with these pendulum swings: newer engineers come in convinced everyone before them was an idiot, build their new thing, and hype it up so that other newer engineers are sold on it, while the "old guard" effectively says "please listen to me, there are good reasons why we don't do it this way" and gets ignored. Worse, they'll be told they're wrong, only to be proven right all along.
I'm not denying there are obstructionist greybeard types that just refuse to acknowledge merits in new approaches, but any and all critique is written off as being cut from the same cloth.
It's perfectly possible to iterate on new ideas and approaches while not throwing away what we've spent decades learning ('Those who do not learn history are doomed to repeat it'), but tech just seems especially determined not to grow up.
"HTML over the wire" isn't really a return to the good ol' days. It's still the client maintaining state and using tons of js to move data back and forth without page reloads. It just changes the nature of 1/2 the data and moves the burden of templating back to the server.
Then, remember the awful awful #! URLs? Atrocious, and seemed like obviously a terrible idea from the start, yet they spread, and have mostly died, thankfully. But even with the lessons from these bad tech designs, new frameworks come out that repeat mistakes, yet get incredible hype.
Here's the Netscape Enterprise Server manual from 1998. Sorry I couldn't find an earlier version.
I'd like to see these people make applications in pure XML. No programming.
> there's nothing inherently bad about it
This argument is so bogus if you look at alternative language dependency sizes.
My reasoning is that the numpy project is meant for scientific and prototyping purposes, but many times people use it as a shortcut and include the whole thing in their project.
That being said, the quality in these packages does vary depending on who developed them. But I think this is a problem that exists with all languages where publishing packages is relatively straightforward.
The bad experiences stick out to people, whereas all the well behaved JS-heavy apps out there likely don't even register as such to most people.
Even with SPAs, it's very possible (and really not that hard) to make them behave well. Logically, even a large SPA should use less overall data than a comparable server-rendered app over time. A JS bundle is a bigger initial hit but will be cached, leaving only the data and any chunks you don't have cached yet to come over the wire as you navigate around. A server-rendered app needs to transmit the entire HTML document on every single page load.
Of course, when you see things like the newer React-based reddit, which chugs on any hardware I have to throw at it, I can sort of see where people's complaints come from.
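That bundle-caching story works best when hashed bundle filenames are marked immutable while the HTML shell stays fresh; a sketch with Express's static middleware (paths are assumptions):

```typescript
import express from "express";

const app = express();

// Hashed bundles (e.g. app.3f9c2a.js) never change content, so they can
// be cached aggressively; the HTML shell itself always revalidates.
// Paths here are assumptions.
app.use(
  "/assets",
  express.static("dist/assets", { immutable: true, maxAge: "1y" }),
);

app.get("*", (_req, res) => {
  res.set("Cache-Control", "no-cache");
  res.sendFile("dist/index.html", { root: process.cwd() });
});
```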
We started out on mainframes.
Then things moved to the desktop, with some centralized functionality on servers (shared drives, batch jobs).
The processing moved to centralized web servers via the web, SAAS, and the cloud.
Then more moved into the client through React & similar.
And now things are moving back to the server.
These changes are not just arbitrary whims of fashion, though. They're driven by generational improvements in technology and tooling.
Like thin client (VT100), to thick (client/server desktop app), to thin (browser), etc.
Similarly, console apps (respond to a single request in a loop), to event-driven GUI apps, to HTTP apps that just respond to a simple request, back to event-driven JS apps.
It depends on how you define the boundaries, but history rhymes.
Notably, the VM operating system could run an instance of itself in one of its own virtual machines.
I'd say they're driven by corporate greed. Cloud computing is basically renting time, and so the more you use them, the more $$$ they make.
I sense that you're right about swings requiring change to older techniques. But I think there's also a component of being fed up with the direction things are currently facing.
At the beginning of college everyone was SUPER into NoSQL. All my friends were using it, SQL was slow, etc.
Nearing the end of college and the beginning of my job I began seeing articles saying why NoSQL wasn't the best, why SQL is good for some things over NoSQL, etc.
Technology is cyclical. 10 years from now I expect to read about something "new" only to realize that it was something old.
I sit in design meetings all the time where people with <5 years experience go out of their way to avoid using a relational database for relational data because "SQL is slow". They will fight tooth and nail, shoehorning features into the application that are trivial to do with a single SQL command.
I helped out on one project led by a few younger devs who chose FireStore over CloudSQL for "performance reasons" (for an in-house tool). They had to do a pretty major rewrite after only a few weeks, once they got around to deletion, because one of their design requirements was being able to delete thousands of records; a trivial operation in SQL, but with FireStore, deleting records requires:
> To delete an entire collection or subcollection in Cloud Firestore, retrieve all the documents within the collection or subcollection and delete them. If you have larger collections, you may want to delete the documents in smaller batches to avoid out-of-memory errors. Repeat the process until you've deleted the entire collection or subcollection.
> Deleting a collection requires coordinating an unbounded number of individual delete requests.
Turns out, once they started needing to regularly delete thousands-millions of records, the process could run all night. Luckily, moving over to CloudSQL didn't take very long...
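For contrast, a sketch of the two approaches (collection, table, and field names invented): one statement in SQL versus the paged read-then-delete loop the Firestore docs describe:

```typescript
import { firestore } from "firebase-admin";

// SQL equivalent, one statement:
//   DELETE FROM records WHERE created_at < '2020-01-01';

// Firestore: page through the query and delete in batches (batched
// writes are capped, hence the limit), repeating until the query
// comes back empty. Collection and field names are invented.
async function deleteOldRecords(db: firestore.Firestore): Promise<void> {
  const query = db
    .collection("records")
    .where("createdAt", "<", new Date("2020-01-01"))
    .limit(500);

  while (true) {
    const snapshot = await query.get();
    if (snapshot.empty) break;
    const batch = db.batch();
    snapshot.docs.forEach((doc) => batch.delete(doc.ref));
    await batch.commit();
  }
}
```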
I mean, this is just dumb. I have less than 5 years experience and I understand that SQL isn't "slow", there are just different tradeoffs between SQL and NoSQL databases and you have to pick the right tool for the job.
JSON is just a media type that a resource can be rendered as. HTML is another media type for the same resource. Which is better? Neither, necessarily, it depends on the client application. But if you are primarily using JSON to drive updates to custom client code to push to HTML, well, that should give you something to think about.
Which part of that is supposed to be a reasonable thing to say?
None of it - that's the point. They are self-deprecatingly pointing out how naïve and judgemental their younger-self was.
Youth are always writing off the oldies - I did it, and now that I am old, I see it happening to me - and that is ok - we need that passion to shake things up, even if they end up eerily similar to the way things were done before...
However, if I can't understand their perspective, I have a very hard time judging their reasonableness (because I'm basing my judgement solely on my own experiences and memories from similar circumstances).
This lack of understanding translates to seeing a lack of credibility in them. "Maybe if they were more like me, they'd make more sense, be more reasonable". This type of thinking is common in most types of prejudice.
It's why young people write off older people: "They're too old to remember what it's like being my age, or to understand how things are now".
Why the opposite occurs: "They're still too young to understand how life works yet".
Why people of very different cultures tend to be prejudiced: "Their kind are ignorant of how the world works", and the opposite: "They've never been through what I've been through, they don't understand me or mine".
All of these statements evaluate down to: "If they were more like me, they would be reasonable". Which is of course true; if "they" were more like "you", their systems of reasoning and values would be more similar to yours, and vice versa.
Perhaps this can be the opportunity for you to look through your past and consider and reevaluate other ideas you discarded because of your own bigotry.
Rails never did, and even actual SPA frameworks (e.g., React) have had SSR support/versions for quite a while. Basecamp introducing yet another iteration of front-end-JS dependent mostly-SSR for Rails isn't a pendulum swinging anywhere.
I don't see much merit in moving back to server-side rendering aside from obfuscation & helping SEO ratings (web crawlers have a hard time with SPAs).
This doesn't save battery life on a device. If someone downloads a few megs of JS, their browser has to parse and execute that JS locally, and that processing uses power. If the same person had half as much JS to parse and execute, it would use less power.
A CDN does not save from this happening.
When power use happens on a server it's more on the server but less on devices with batteries. Batteries aren't used up as quickly (both between recharges and in their overall life).
A server side setup can cache and even use a CDN to only need to render parts that change.
My point is that it's not all cut and dried, especially once you consider batteries.
Oh, and older systems (like 5 year old ones)... surfing the web on an older system can be a pain now because of JS proliferation.
This matters because the poor, the elderly (on fixed incomes), and those who aren't in first-world countries don't have easy access to money to keep getting newer computers.
Then there is the environmental impact of tossing all those old computers.
So, there is both a people and environment impact.
The two things that use the most battery in a phone are the radio and the screen.
If you can do most of the work client side, the phone can turn off the radio and save battery. The amount of battery savings of course depends greatly on what the application is actually doing.
Yes, I'm aware that "it will be cached" lost most of its glory when bundling became mainstream, but I still hear it as an argument when pulling things from common CDNs.
The server's hardest work is usually in the database: scanning through thousands of rows to find the few that you need, joining them with rows from other tables, perhaps some calculations to aggregate some values (sum, average, count, etc.). The database is often the bottleneck. That isn't to say I advocate NoSQL or some exotic architecture. For many apps, the solution is spending more time on your database (indexes, trying different ways to join things, making sure you're filtering things thoroughly with where-clauses, mundane stuff like that). A lot of seasoned programmers are still noobs with SQL.
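The mundane workflow is often just measuring and then indexing; a sketch with node-postgres (table and column names invented):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from PG* env vars

// Table and column names are invented.
async function diagnose(): Promise<void> {
  // See whether the hot query is doing a sequential scan...
  const plan = await pool.query(
    "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = $1",
    [42],
  );
  plan.rows.forEach((row) => console.log(row["QUERY PLAN"]));

  // ...and if it is, an index usually turns it into a cheap lookup.
  await pool.query(
    "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)",
  );
}
```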
Anyway, if rendering is lightweight, then why does it bog down web browsers when you move it there? I don't think it does. If all you did was ship the JSON and render it with something like Handlebars, I think the browser would be fine, and it would be hard to tell the difference between it and server-side rendering.
I guess what it is, is that the developers keep wanting to bring in more bells and whistles (which is understandable) especially when they find some spiffy library that makes it easier (which is also understandable). But after you have included a few libraries, things start to get heavy. Things also start to interact in complex ways, causing flakiness. If done well, client-side code can be snappy. But a highly interactive application gets complicated quickly, faster than I think most programmers anticipate. Through careful thought and lots of revision, the chaos can be tamed. But often programmers don't spend the time needed, either because they find it tedious or because their bosses don't allot the time, instead always prodding them on to the next feature.
Nice! Casual ageism and racism mixed into one post.
On one hand, it’s true. It’s part of white privilege which is tangible.
On the other hand, however rarely people in a privileged class are realistically impacted by discrimination, the rate is still > 0.0%. Since it usually costs nothing more to include everyone, it seems useful.
But I think the biggest reason it’s important to care about discrimination wherever it shows up and not let people off the hook is that it’s unifying.
There’s a story out of Buddhism that suggests it’s important to think equally kindly about rich people, kind of similar in that they’re a privileged class.
I know it’s a hard sell. I don’t do it justice here. However a powerful argument can be made that not disparaging privileged classes, actually helps us all in the long run/big picture.
If I get down voted I understand, that’s ok. If it makes a difference I don’t mean to minimize the 10,000 year history of pain suffered by any humans due to discrimination.
The powerful argument is that you should treat everyone well, period, and not do some kind of calculation to decide how cruel you're allowed to be to them.
Edit: I've had just about enough of people using "white privilege" to justify violence and blatant discrimination because "they haven't been exposed to enough". It's just another way to justify racism. Plenty of white people live in poverty. It's not ok in either direction.
First, acknowledging the thinking behind a common opinion is not the same as sympathizing with it. It’s only stating a concept I disagree with.
Secondly, it'd be nice to take credit for this big fucking idea, but unfortunately I'm thousands of years too late. I explicitly mentioned the source.
Finally, I don't see how it's not a novel idea. If you started asking people to think kindly about rich Wall Street bankers or cable company executives, would everyone be instantly on board?
I know those are extreme examples but that was the point of the story. What’s indeed not novel is to say, think well of all people.
The hard part is when you try to actually apply it equally, including to less popular but highly privileged classes of people.
I don’t claim that I can do it all the time, I’m sure I don’t in fact. However for any ideal shouldn't it be ok to try and work towards it over time?
And there are definitely applications I would prefer to write as an SPA over the Hotwire approach. But given that the vast majority of websites are just a series of simple forms, I prefer this approach over the costs you incur from building an entire complex SPA.
Less boilerplate. More reuse. Consolidation of app state to the server.
Feel free to post your code from a decade ago so that we can do a bake-off and compare implementations side-by-side.
It's lightweight on the frontend and a pleasure to develop for, since forms are entirely coded on the backend with components. As rare exceptions, more complex pages have some lodash.js sprinkled in.
They have around 400 CRUD pages with complex business validations, and up to 1.5k concurrent users. At that point MySQL starts to sweat a bit and p95 increases above 200ms. All running on 2 machines: an nginx+php-fpm box and a MariaDB box.
The instant no-compilation feedback loop of edit-save-refresh is orgasmic. That system opened my eyes to a lot of preconceptions and buzzwords I once held sacred.
Also a lot like the React Server Components that we saw on HN yesterday.
It seems like this is the next wave of web apps. Hopefully once the hype settles, we'll be able to decide which approach is best for which project.
It's much different based on a preliminary reading of Hotwire's docs.
Live View uses websockets for everything. If you want to update a tiny text label in some HTML, it uses websockets to push the diff of the content that changed. You could also use LV in a way that replaces Hotwire Turbo Drive, which is aimed at page transitions, such as going from a blog index page to a contact form. That way you get the benefits of not having to re-parse the <head> along with all of your CSS / JS, but LV will send those massive diffs over websockets.
Hotwire Turbo Drive replaces Turbolinks 5, and it uses HTTP to transfer the content. It also has new functionality (Hotwire Turbo Frames) to do partial page updates instead of swapping the whole body like Turbolinks 5 used to do. Websockets are only used when you want to broadcast changes to everyone connected, and that's where Hotwire Turbo Streams comes in.
IMO that is a much better approach than Live View, because websockets only get used for broadcast-like actions, instead of rendering your entire page of content over a websocket when LV handles page transitions. IMO the trade-off of throwing away everything we know and can leverage from HTTP to "websocket all the things" isn't one worth making. Websockets should be used where they're needed, which is exactly what Hotwire does.
I could be wrong of course but after reading the docs I'm about 95% sure that is an accurate assessment. If I'm wrong please correct me!
Hotwire will benefit from caching better than LiveView, because frames are distinct URLs. But I haven't personally needed that.
How does that work for page transitions? The docs don't mention anything about this or how to configure it.
With Turbolinks or Hotwire Turbo Drive, the user clicks the link to initiate a page transition and then the body of the page is swapped with the new content being served over HTTP. With Turbo Frames the same thing happens except it's only a designated area of the page. In both cases there's no need to poll because the user controls the manual trigger of that event.
How would LV do the same thing over HTTP? Everything about LV in the docs mentions it's pretty much all-in with websockets.
Then there's progressive enhancement too as another difference. Turbo is set up out of the box to use controllers which means you really only need to add a tiny amount of code (2-3 lines) to handle the enhanced experience alongside your non-enhanced experience. For example you could have a destroy action remove an item from the dom when enhanced using Turbo Stream or just redirect back to the index page (or whatever) for the non-enhanced version.
There's an example in the Turbo docs for that at https://turbo.hotwire.dev/handbook/streams if you search for "def destroy".
But with LV wouldn't you need to create both a LV and a regular controller? That's a huge amount of code duplication.
You just do LiveView instead of a regular controller. No duplication.
When you request a page, it is rendered on the server and all of the HTML is returned over HTTP as usual.
After the client has received the HTML, live updates can go over a websocket. For instance, you start typing in a search field, and the input is sent to the server over the websocket. The server might have a template for that page that adds search suggestions in a list under the search field. The server basically figures out how the page should now be rendered, by "re-rendering" the server-side template with the changed data, and then it sends back a diff to the client over the websocket. The diff adds/changes the search suggestions on the page. The diff is very small and it's all very fast.
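This is not Phoenix's actual wire protocol, but a toy sketch of the shape of that exchange from the client's side (the URL and message format are invented):

```typescript
// Toy sketch of the LiveView-style loop: send input upstream, apply a
// small server-computed patch downstream. The URL and message format
// are invented; Phoenix has its own optimized diff protocol.
const socket = new WebSocket("wss://example.com/live");
const search = document.querySelector<HTMLInputElement>("#search")!;

search.addEventListener("input", () => {
  socket.send(JSON.stringify({ event: "search", value: search.value }));
});

socket.addEventListener("message", (msg) => {
  // Each patch targets one element and replaces only its contents.
  const { target, html } = JSON.parse(msg.data) as { target: string; html: string };
  document.querySelector(target)!.innerHTML = html;
});
```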
Yes, but this is only the happy case, when the client is fully enhanced, no?
What happens if you hook up a phx click event to increment a like counter.
But with Hotwire Turbo, if you have a degraded client with no JS, clicking the + would result in a full page reload and the increment would still happen. That's progressive enhancement. It works because it's a regular link and if no JS intercepts it, it continues through as a normal HTTP request.
I don't know if there is already a way to have `phx-click` with fallback to HTTP in a less "manual" way. It should be possible to make.
In your Counter example, it's true that for the 'degraded' version to work, the link would have to be a proper link and not a phx-click. But in the (IMO very unlikely) case where this fallback is necessary, solving it with a proper link/route does not require duplication, just a different approach.
What you would do is create a LiveView that handles both the initial page and the 'increment' route. If LV is 'on', it intercepts the click and patches the page. If LV is 'off', your browser would request the 'increment' route, and the same LV would handle all this server-side and still display the new, incremented counter.
The LV is both the server-side controller /and/ the client-side logic. That's part of what makes it so appealing, but, admittedly, also something that can take a while to wrap your head around.
I've more than once reflexively gone for phx-click solutions where the LV would receive the event and 'do' something, only to later realize that it would be much better to use a proper url/routing solution (where LV is still the 'controller'). In hindsight it's often a case of treating LiveView too much like just 'React on the server', basically.
Page changes are still initiated by the client in LiveView (although they can also be server-initiated).
LiveView is just channels under the hood. Once you consider that, long polling may seem more obvious
It's not obvious to me for user-invoked page transitions, because when I think of long polling, I think of an automated, time-based mechanism that's responsible for making the request, not the user. But a page transition is invoked by the user after an undetermined amount of time (it might happen 2 seconds after they load the page, or 10 minutes).
Right, but isn't long polling keeping the connection open for every client, with the server doing interval-based loops in the background until it gets a request from the client or times out?
It wouldn't be doing a separate HTTP request every N seconds with setInterval like the "other" type of polling, but it's still doing quite a bit of work on the server.
In either case, LV's long polling is much different from keeping no connection open and no state on the server, and only intercepting link clicks as they happen, which is what Turbo Drive and Turbo Frames do.
I don't think that's necessarily a big deal with Elixir (I wouldn't not pick it due to this alone, etc.), but this thread is more about the differences between Hotwire and LV, and no state being saved on the server for each connection is a pretty big difference for the Drive and Frames aspects of Turbo.
Rarely admitted, but absolutely true.
His work inspired me to build my app (TravelMap) with SSR views: https://clem.travelmap.net