From the turbo handbook: "An application visit always issues a network request. When the response arrives, Turbo Drive renders its HTML and completes the visit." Using the phrase "When the response arrives" begs the question of what happens if it doesn't arrive, or if it takes a minute for it to arrive, or if it arrives but with a faulty status code.
Not saying this is better from an error handling perspective, but at least the whole idea of Hotwire and its peers (Turbolinks, etc) is that there is no state and it should thus be safer and quicker to reload the page should things go wrong.
> there is no state and it should thus be safer and quicker to reload the page should things go wrong.
That's not exactly true, since there are non-idempotent HTTP methods. The browser will prompt you before resending a non-idempotent request when you refresh after a normal form POST, but I don't think turbo/turbolinks/similar will prompt or resend.
On refresh, should turbo retry a POST? The "right way" is to keep the state of the last POST and prompt the user for confirmation, but what it actually does seems to be undocumented. I'm guessing it either doesn't retry, or it retries and hopes the effect is idempotent.
No one (SPAs, traditional webpages and "spiced" webpages like this included) is doing everything right, but my objection to this framework is that it seems to try to say things are simple or easy when they clearly aren't.
> it seems to try to say things are simple or easy
That's an unfair mis-characterisation. The developers are not pitching a universal panacea that solves all your problems and handles every edge case. They are offering an architecture that simplifies many common scenarios, and one that is thoroughly developer-friendly when it comes to supplying observability and integration hooks for edge cases.
For this latter purpose, it merely remains to read the (clean & elegant) source code to enlighten oneself.
> it seems like it is undocumented
On the contrary, the behavior w.r.t. full-page replacement on non-idempotent verbs is extensively discussed in the Turbolinks repo.
The "Turbo Drive" component appears to me as essentially unchanged behaviour in Turbo 7.0.0beta1 from Turbolinks versions 5.x. Turbolinks was introduced in 2013, has many years of pedigree and online discussion, and is well understood by a large developer community. Turbolinks was always maintained, even being ported to TypeScript (from the now venerable CoffeeScript) ca. two years ago with no change in behaviour. Turbo Drive is, practically, just a slightly refactored rebrand of the TypeScript port.
The stuff everyone is so excited about are Turbo Frames and Turbo Streams. These are new, and may be used without adopting Turbo Drive: as with practically everything from Basecamp, the toolkit is omakase with substitutions. They are, nevertheless, complementary, so you get all three steak knives in one box.
Of course now I just go on Hacker News and Twitter instead.
If I know the network is always there, why bother.
The entire design philosophy here is to mimic apparent browser behaviour, or to delegate to it. Hence, to GP's question: you should expect the appearance of browser-like behaviour in any circumstance, modulo anything Turbo is specifically trying to do differently. Deviation from baseline browser semantics was certainly a basis for filing bugs in its predecessor (Turbolinks).
As for what Turbo actually does, I checked the source. Good news: even for a first beta, they're not the cowboy nitwits alleged. It gracefully handles and distinguishes between broken visits and error-coded but otherwise normal content responses, and the state machine has a full set of hooks, including hooks back to other JS/workers, busy-state element selectors, and the handy CSS progress bar carried over from Turbolinks.
In general, the right approach in HTML-oriented, declarative libraries appears to be triggering error events and allowing the client to handle them, since it is too hard to generalize what clients would want.
1. What if something goes wrong?
2. How do I test for handling success/error?
They never address this stuff.
However, another big issue is the dominance of mobile. More and more, you've got 2-3 frontends (web and cross-platform mobile, or explicitly web, iOS, and Android), and you want to power them all with the same backend. RESTful APIs serving up JSON work for all 3, as does GraphQL (not a fan, but many are). This, however, is totally web-specific: you'll end up building REST APIs and mobile apps anyway, so the productivity gains end up way smaller, possibly even net negative. Mobile is a big part of why SPAs have dominated: you use the same backend and overall approach/architecture for web and mobile.
I’d strongly consider this for a web-only product, but that’s becoming more and more rare.
They have accompanying https://github.com/hotwired/turbo-ios and https://github.com/hotwired/turbo-android projects to bridge the gap.
This, while very interesting and might have a preferable set of constraints for some projects, is simply not a good fit for many others, as you mentioned in your comment. This looks amazing, and I would definitely try it for a project in which it would fit, but I don't really see a reason to disparage the work others have been doing over the past decade. We need those other tools too!
(sorry for the rant)
However, I think that for mobile they're still offering server-side rendering via turbo-ios and turbo-android, so you can build quickly and then replace that later if you need to.
This is one of the primary promises of MVC in the first place: views can be rendered independently of controllers and models. For a given controller method call, a view can be specified as a parameter.
In this case, swap "view" for JSON sent back over the wire...
> RESTful APIs serving up JSON works for all 3, as does GraphQL [...]. This however is totally web-specific - you’ll end up building REST APIs and mobile apps anyways, so the productivity gains end up way smaller, possibly even net negative.
I bet someone will produce a native client library that receives rendered SPA HTML fragments and pretends it's a JSON response. They might even name it something ironic like "Horror" or "Cringe".
That said, an ideal API for desktop web apps looks rather different than one for mobile web or native clients. Basically, for mobile you want to minimize the number of requests because of latency (so larger infodumps rather than many small updates) and minimize the size of responses due to bandwidth limitations and cost (so concise formats like Protocol Buffers rather than JSON).
It is definitely possible to accommodate both sets of requirements at the same API endpoint, but pretending that having a common endpoint implies anything else about the tech stack is rather disingenuous. If you want server-side rendering and an API that delivers HTML fragments instead of PB or JSON, that can be done too.
And if you can't think of anything worse, you're not trying very hard.
Really, an incredible bang for the buck.
HTML is a machine-readable format, like XML and JSON. Have your back end represent a given resource as microformatted, semantic markup, send it gzipped over the wire, and you've got the data exchange you need, even if your mobile app isn't already a dressed-up webview.
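A hedged sketch of that idea: the server renders a resource once as microformats2-style markup (the `h-card` class names below follow that convention), and the single representation serves both as UI and as a parseable data format. The `renderContact` function is hypothetical, and real code should HTML-escape the values.

```javascript
// One resource, one representation that is both human-readable and
// machine-readable. renderContact is illustrative, not a real API.
function renderContact(contact) {
  return [
    '<div class="h-card">',
    `  <span class="p-name">${contact.name}</span>`,
    `  <a class="u-url" href="${contact.url}">${contact.url}</a>`,
    `  <span class="p-org">${contact.org}</span>`,
    "</div>",
  ].join("\n");
}

const html = renderContact({
  name: "Ada Lovelace",
  url: "https://example.com/ada",
  org: "Analytical Engines Ltd",
});
// A browser renders this as UI; a client can parse the same payload
// back into data by querying the class names (e.g. ".h-card .p-name").
```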
Generally the projects I've felt best about have two features:
1) The API knows how to represent resources across multiple media types, usually including at least markup and JSON.
2) UI is well-annotated enough that developers and machines find it easy to orient themselves and find data.
But you're quite right that this isn't common. I have my own guesses as to why. My observation's been that the workflow and stakeholder decision-making process on the UI side places semantic annotation pretty low on the priority list; most places, you're lucky if you can get a style guide and visual UI system adopted. And there has to be cooperation and buy-in at that level for there to be much incentive to engineer and use a model/API-level way of systematically representing entities as HTML, which often won't happen.
And TBH it is extra effort.
If I recall correctly, this made use of that new technology of the time called "XMLHttpRequest" (/s) which pretty much jump-started web 2.0.
[fn]: Arguably, the web is worse with chat bots, sticky headers, and modals constantly vying for your attention.
We can blame this on the MBA types. I've literally never heard a software engineer say "hey, let's make this pop-up after they've already been looking at the page for a minute!" or anything like it.
But there are surprisingly few layers on layers. Part of what has been amazing about the web is that the target remains the same. There is the DOM. Everyone is trying different ways to build & update the DOM.
Agreed that there are better alternatives than a lot of what is out there. We seem to be in a mass consolidation, focusing around a couple of very popular systems. I am glad to see folks like GitHub presenting some of the better alternatives, such as their Catalyst tools, which speed things up (both developer-wise and page-wise, via "Actions") and give some patterns for building WebComponents.
The web has been an unimaginably stable platform for building things, and has retained its spirit while allowing hundreds of different architectures for how things get built. Yes, we can make a mess of our architectures. Yes, humanity can over-consume resources. But we can also, often, do it right, and we can learn & evolve, as we have done, over the past 30 years we've had with the web.
While willfully ignoring all the people doing better.
Maybe we are- as you fear- stuck, forever, in thick JS-to-JS transpilers & massive bundles & heavy frameworks. Maybe. I don't think so.
React is well under 20k.
FWIW when optimizing my SPA, my largest "oops" in regards to size were an unoptimized header image, and improperly specified web fonts.
You're literally a stereotypical Hacker News commenter.
I also find the modern frontend a bit too complicated but this is just an unreasonable statement.
Of all the problems I have with React, and I do have a few, JSX is not one of them.
If you are going to be using a language to generate HTML, you are either going with a component approach that wraps HTML in some object library that then spits out HTML, or you are stuck with a templating language of some sort. (Or string concatenation, but I refuse to consider that a valid choice for non-trivial use cases.)
JSX is a minimal templating language on top of HTML. Do I think effects are weird, and am I very annoyed at how they are declaration-order dependent? Yup. But the lifecycle stuff is not that weird, or at least the latest revision of it isn't (earlier editions... eh...). The idea of triggering an action when a page is done loading has been around for a very long time, and that maps rather well to React's lifecycle events.
> React alone provides little to nothing
Throw in a routing library, and you are pretty much done.
Now another issue I do have is that people think React optimizes things that it in fact does not, so components end up being re-rendered again and again. Throw Redux in there and it is easy to have 100ms latency per key press. Super easy to do, and avoiding that pitfall involves understanding quite a few topics, which is unfortunate. The default path shouldn't lead to bad performance.
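To illustrate the underlying issue in plain JavaScript (this is not React's actual machinery): if expensive work re-runs on every keystroke even when its inputs haven't changed, latency adds up fast; caching the last result, which is roughly what `React.memo`/`useMemo` do for renders, avoids it. `memoizeLast` and `filterItems` are hypothetical helpers.

```javascript
// Cache the result for an unchanged input, the essence of what
// React.memo/useMemo do for component renders. Illustrative only.
function memoizeLast(fn) {
  let lastArg;
  let lastResult;
  let called = false;
  return (arg) => {
    if (!called || arg !== lastArg) {
      lastArg = arg;
      lastResult = fn(arg);
      called = true;
    }
    return lastResult;
  };
}

let computations = 0;
const filterItems = memoizeLast((query) => {
  computations++; // counts how often the "expensive" work actually runs
  return ["alpha", "beta", "gamma"].filter((item) => item.includes(query));
});

filterItems("al");
filterItems("al"); // same query: served from cache, no recomputation
```

The React-specific pitfall is that none of this caching happens by default; without it, a parent re-render cascades into every child, and a keystroke can trigger far more work than the change warrants.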
> The concept of components and the idiotic life cycles
Page loads, network request is made. Before React, people had listeners on DOM and window events instead; no different.
Components are nice if kept short and sweet. "This bit of HTML shows an image and its description" is useful.
> Do I need to explain how much stuff can be packed in 400kb?
No, I've worked on embedded systems, I realize how much of a gigantic waste everything web is. But making tight and small React apps is perfectly possible.
And yes, if you pull in a giant UI component library things will balloon in size. It is a common beginner mistake, I made it myself when I first started out. Then I realized it is easier for me to just write whatever small set of components I need myself, and I dropped 60% of my bundle app size.
In comparison, doing shit on the backend involves:
And then someone goes "hey you know what's a great idea? Let's put state on the back end again! And we'll wrap it up behind a bunch of abstractions so engineers can pretend it actually isn't on the back end!"
History repeats itself and all that.
SPAs, once loaded, can be very fast and scaling the backend for an SPA is a much easier engineering task (not trivial, but easier than per user state).
Is all of web dev a dumpster fire? Of course it is. A 16-year-old with VB6 back in 1999 was 10x more productive than the world's most amazing web front-end developer nowadays. Give said 16-year-old a copy of Access and they could replace 90% of modern-day internally developed CRUD apps at a fraction of the cost. (Except mobile support and all that...)
But React isn't the source of the problem, or even a particularly bad bit of code.
> Throw in a routing library, and you are pretty much done.
Ok routing library, now make an http request please without involving more dependencies....
> Throw Redux in
See, exactly what I said: we are getting to the endless pages of dependencies.
> 100ms latency per key press
100ms latency??!?!?!? In my world 100ms are centuries.
I don't have a problem with that. At the end of the day you know exactly what you want to achieve and what the output should be, whereas with React it's a guessing game each time. We are at a point where web "developers" wouldn't be able to tell you what HTML is. With server-side rendering, from a maintenance perspective you have the luxury of using grep, instead of relying on aftermarket add-ons, plugins, and IDEs to find and change the class of a span.
The term SPA first came to my attention when I was in university over 10 years ago. My immediate thought was "this is retarded". Over a decade later, my opinion hasn't changed.
> jsx is a retarded idea because it adds an abstraction over something brutally simple(html).
what programming languages do it better?
if you know of other places that have done a good job of supporting building html directly, without intermediation, as you seem to be a proponent of, let me/us know. jsx seems intimately closer to what you purport to ask for than almost any other language that has come before! your words are a vexing contradiction.
> Ok routing library, now make an http request please without involving more dependencies....
please stop being TERRIFIED of code. many routing libraries with dependencies are tiny. stop panicking that there is code. react router v6 for example is 2.9kB. why so afraid bro?
this is actually why the web is good. because there are many many many problems, but they are decoupled, and a 2kB library builds a wonderful magical consistent & complete happy environment that proposes a good way of tackling the issues. you have to bring some architecture in, but anyone can invent that architecture; the web platform is unopinionated ("principle of least power" x10,000,000), and the solutions tend toward tiny.
redux is 2kB with dependencies as well.
Yup, that's crappy. The Work At A Startup page used to have this issue (may still, haven't looked lately), which shows how easy it is to make happen accidentally.
As I said, it is a weakness of the system.
> jsx is a retarded idea because it adds an abstraction over something brutally simple(html)
Have you seen how minimal of an abstraction jsx is? It is a simple rewrite to a JS function that spits out HTML, but JSX is super nice to write and more grep-able than the majority of other templating systems.
I have a predisposition to not liking templating systems, but JSX is the best part of React.
Notably, it doesn't invent its own control flow language, unlike most competitors in this space.
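To show how thin the abstraction is, here is a toy `createElement` (not React's real implementation) and the rough desugaring of a small JSX fragment into plain nested function calls, which is essentially what a JSX compiler emits.

```javascript
// Toy createElement: JSX is just sugar over nested function calls.
// This is an illustration, not React's actual implementation.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// <ul class="nav"><li>Home</li><li>About</li></ul>
// desugars to roughly:
const tree = createElement(
  "ul",
  { class: "nav" },
  createElement("li", null, "Home"),
  createElement("li", null, "About")
);
```

Because JSX expressions are ordinary JavaScript values, control flow is just the host language: `items.map(...)` for loops, `cond ? a : b` for branches, no separate template dialect to learn.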
> My immediate thought was "this is retarded".
Well, the most famous SPA is Gmail, and it's rather popular; you may have heard of it. It is bloated now, but when it first debuted it was really good. Webmail sucked, then suddenly it didn't.
Google maps. Outlook web client. Pandora. Online chat rooms, In browser video chat, (now with cool positional sound!)
SPA just means you are fetching the minimum needed data from the server to fulfill the user's request, instead of refetching the entire DOM.
They are inherently an optimization.
Non-SPAs can be slow bloated messes as well, e.g. the Expedia site.
It's generally frowned upon to use retarded in this manner. Not only is it insulting to people, it brings down the overall tone of your argument.
So call it either one and people will know what you're talking about.
And for actual ducts you'll want to use foil-tape because temperature changes wreck the adhesion of duct-tape, then the moisture leaks into the walls/ceiling which is $$$$ bad.
This strongly depends on the type of duct. Flex ducts that are a plastic skin over a wire coil don't work so well with aluminum tape.
Duck tape is the stuff developed for the US army to waterproof things closed. Post-war, they made it silver instead of green and marketed it for use with ducts (since being waterproof made it also SEEM like a good candidate for the job in heating systems), but it's pretty terrible for this purpose, since temperature changes degrade the adhesive rapidly.
The tape you actually want to use for ducts is foil-backed tape.
In short, it was and still is a great marketing gimmick, but Duck Tape was only ever "ok" at keeping things waterproof, and it only looks like gaffer's tape or the tape you want to use on ducts.
I use a lot of Duck Tape.
From the little language study I've done, English is one of the most flexible. You can discard entire parts of speech and it still works.
Saying "'ey, you woke up yet?" is ok in many contexts.
The real world disagrees with you; go check out any major website and observe as your laptop's fans spin up.
However I think the main problem here isn't the symptom (websites are bloated) but the root cause of the problem. I'm not sure if it's resume-driven-development by front-end developers or that they genuinely lost the skill of pure CSS & HTML but everyone seems to be pushing for React or some kind of SPA framework even when the entire website only needs a handful of pages with no dynamic content.
Try every old media site and most e-commerce.
In the Microsoft-verse, this might also draw some comparisons to the more modern server-side Blazor.
I used it 13 years ago. It was fancy.
I don't know if it's so much 'everything old is new again' as it is a problem of market penetration.
The allure of xmlhttprequest was that over connections much slower than today, and with much less powerful desktop computers, a user didn't have to wait for the whole page to redownload and re-render (one can argue that focusing on better HTTP caching on the server and client might have been smarter) after every single user interaction. This was also much of the draw of using frames (which were also attractive for some front-end design use-cases later re-solved with CSS).
As apps got more complex, clients got more compute, bandwidth grew, and as web audiences grew, offloading much of the page rendering to the client helped to both contain server-side costs and increase or maintain responsiveness to user interactions.
Now, desktop client performance improvement is slowing (this isn't just slower chips; computers are also replaced less frequently), average bandwidth continues to grow, app complexity and sophistication continue to grow, and server compute cost falls faster than audience size grows. So shifting HTML rendering back to the server and sending more verbose pre-rendered HTML fragments over the wire can make sense as a way of giving users a better experience.
As someone who implemented a SPA framework before "SPA" was a word, much less React or Angular existed, I have to say that for my company it was all about state management.
Distinguishing between web apps (true applications in the browser), and web pages (NYT, SEO, generally static content), state management was very hellish at the time (~2009).
However, with the advent of V8, it became apparent to me, as an ASP.NET developer, that a bad language executing at JIT speeds in the browser was "good enough" to avoid sending state back and forth through a very complex mesh of cookies, querystring parameters, server-side sessions, and form submissions.
If state could be kept in one place, that more than justified shifting all the logic to the client for complex apps.
Or back buttons and CSRF tokens and flash scope...
Or, let's talk about a common use case. Someone starts filling in a form, and then they need to look at another page to get more information. (This other page may take too long to load and isn't worth putting in the workflow, or it was cut from scope to place the information twice.) So, they go out to another page, then back, and are flustered because they were part way through the work.
So, if you want this to work, you're going to need state management in the client anyway. (Usually using SessionStorage these days, I'd presume?) So, then, we've already done part of the work for state management. You are then playing the "which is right, the server or the client" game.
You accumulate enough edge cases and UX tweaks, and you're half way down the SPA requirements anyway.
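A sketch of that client-side draft persistence: `saveDraft`/`loadDraft` are hypothetical helpers, and the in-memory object below merely stands in for `window.sessionStorage`, which exposes the same `getItem`/`setItem` shape in the browser.

```javascript
// Persist a half-filled form so it survives navigating away and back.
// `storage` is anything with getItem/setItem; in the browser you would
// pass window.sessionStorage. Helper names are illustrative.
function saveDraft(storage, formId, fields) {
  storage.setItem(`draft:${formId}`, JSON.stringify(fields));
}

function loadDraft(storage, formId) {
  const raw = storage.getItem(`draft:${formId}`);
  return raw ? JSON.parse(raw) : null;
}

// In-memory stand-in for sessionStorage, for illustration:
const memoryStorage = {
  data: new Map(),
  setItem(key, value) { this.data.set(key, value); },
  getItem(key) { return this.data.has(key) ? this.data.get(key) : null; },
};

saveDraft(memoryStorage, "signup", { email: "ada@example.com" });
const draft = loadDraft(memoryStorage, "signup");
```

And this is exactly where the "which is right, the server or the client" game begins: once a draft lives client-side, you need a rule for reconciling it with whatever the server renders.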
Now, hopefully Hotwire will solve a large number of these problems. I'm going to play with it, but the SPA approaches have solved so many of the edge cases via code and patterns.
Part of the problem has also been ameliorated by larger screens and browser tabs.
Reminds me of saving Rich Text content on the server side. It was a nightmare.
Also reminds me of the Microsoft RTF format. It's basically a memory dump of the GUI editor.
State bound to a tree was never a good idea to start with.
I think what drives this crazy train of overengineered solutions, SPAs and K8s for hosting a single static page, is the deep separation of engineers from the actual business problems and the people they are trying to help. When all you have are tickets in Jira or Trello, and you don't know why you should do them or whether they actually benefit someone, it's natural to invent non-existent tech problems which are suddenly interesting to solve. That is natural for curious engineers and builders. Then mix in the 1% of big apps and companies which actually do have these tech problems and have to solve them, and everybody wants to be like them and starts cargo-culting.
I recently wrote a SPA (in React) that, in my opinion, would have been better suited as a server-side rendered site with a little vanilla js sprinkled on top. In terms of both performance and development effort.
The reason? The other part of the product is an app, which is written in React Native, so this kept a similar tech stack. The server component is node, for the same reason. And the app is React Native in order to be cross-platform. We have ended up sharing very little code between the two, but using the same tech everywhere has been nice, in a small org where everyone does everything.
Making teams responsible
Third, we give full responsibility to a small integrated team of designers and programmers. They define their own tasks, make adjustments to the scope, and work together to build vertical slices of the product one at a time. This is completely different from other methodologies, where managers chop up the work and programmers act like ticket-takers.
Together, these concepts form a virtuous circle. When teams are more autonomous, senior people can spend less time managing them. With less time spent on management, senior people can shape up better projects. When projects are better shaped, teams have clearer boundaries and so can work more autonomously.
I wonder if other industries suffer from the same problem, bored engineers.