Old school web tech is the best. I still reach for multipart/form-data every day. Many of my web applications do not even have JavaScript.
I hope at some point the original pattern is re-discovered and made popular again because it would make things so much snappier:
1. Initial GET request from user's browser against index and maybe favicon.
2. Server provides static/dynamic HTML document w/ optional JS, all based upon any session state. In rare cases, JS is required for functionality (camera, microphone, etc.), but usually is just to enhance the UX around the document.
3. User clicks something. This POSTs the form to the server. The server takes the form elements, handles the request, and then as part of the same context returns the updated state as a new HTML document in the POST response body.
4. That's it. The web browser, if it is standards-compliant, will then render the resulting response as the current document and the process repeats.
All of this can happen in a single round trip. Latency is NOT a viable argument against using form submissions. I don't think suffering window navigation events is a valid one either. At some point, that SPA will need to talk to the mothership. The longer it's been disconnected, the more likely it's gonna have a bad time.
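For anyone who hasn't seen it in the wild, here's a minimal sketch of that loop using nothing but Python's standard library (the todo-list app is invented for illustration):

    from html import escape
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    todos = []  # stand-in for real per-session state

    def render() -> bytes:
        items = "".join(f"<li>{escape(t)}</li>" for t in todos)
        return (f"<!DOCTYPE html><html><body><ul>{items}</ul>"
                '<form method="POST" action="/">'
                '<input name="todo"><button>Add</button>'
                "</form></body></html>").encode()

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self._respond()

        def do_POST(self):
            # Read the form body, update state, and return the new document
            # directly in the POST response -- one round trip, no client JS.
            length = int(self.headers.get("Content-Length", 0))
            form = parse_qs(self.rfile.read(length).decode())
            todos.extend(form.get("todo", []))
            self._respond()

        def _respond(self):
            body = render()
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), Handler).serve_forever()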
The web only has to be hard if you want to make it hard. Arguments against this approach always sound resume-driven more than customer-driven. I bet you would find some incredibly shocking statistics regarding the % of developers who are currently even aware of this path.
I'm increasingly getting to this point as well. I think modern web tech is incredible in many ways, and if properly used it could be pretty great, but overall and for the most part we are pretty terrible at it. I've started to dread using SPAs because I know it's going to be sluggish and load megabytes of junk, and will still require frequent page reloads when clicking stuff, even though ostensibly it should not.
There's an exceptional beauty in a simple and lightweight website that is just a load and done. I've started doing more with Phoenix and Alpine for the little bits of client-side functionality required, and have been very pleased with it. The most interesting part though, is that users have also been very pleased. They can't explain why in technical terms, but they know it feels better.
Yes, SPAs are absolutely horrid. I tried to use HEY Calendar and the iOS app is just so absurdly slow I went back to the native iOS calendar app.
Maybe it speeds up their dev time but a native app should not feel like a (poorly engineered) web app. Every click loads for a second or two and sometimes it hangs for even longer. I grew to dread adding and managing events.
It’s a shame because even though the app made some questionable design choices, it was in general a breath of fresh air in terms of UI.
I’d like to hope that they’ll fix it, but I know they won’t.
When did you try HEY Calendar? I've never used anything from HEY, but I saw a lot of griping about its performance, and eventually saw that they apparently addressed the primary issues.
I was recently tasked with building a PWA to escape the play stores. A vitally important feature was notifications for the messaging system.
Hours of research trying to figure out how to deliver a notification through the PWA, even if the application was closed. Eventually I thought it wasn't going to be possible. But I remembered the HEY products were PWAs, and they had an email product. Surely that has the notification feature I'm trying to replicate? It's a paid service.
... no, I couldn't get their notifications to work, even though I enabled all of the proper settings. Crazy.
However, when I was about to give up and abandon the entire project, I had one of those moments, and got it to work.
I recently had a discussion with some "web developers" that just... couldn't understand how a web application could interact with the server without JavaScript and a REST API. They had never heard of traditional web forms! They didn't even know that was an option.
I wonder how these "web developers" would react if they discovered secret sauces like SSE, the "disabled" attribute of <input> elements, and SSI (server-side includes).
I'd love to be in the same room when they edit their package.json and spin up Docker just to create a basic CRUD form.
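To be fair, SSE needs no secret sauce on the server either - it's just a long-lived `text/event-stream` response. A rough stdlib Python sketch (the ticker endpoint is invented):

    import time
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class SSEHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # The wire format is just "data: ..." lines separated by blank
            # lines; the browser consumes it with `new EventSource("/")`.
            self.send_response(200)
            self.send_header("Content-Type", "text/event-stream")
            self.send_header("Cache-Control", "no-cache")
            self.end_headers()
            for i in range(10):  # finite for the demo; real streams run until disconnect
                self.wfile.write(f"data: tick {i}\n\n".encode())
                self.wfile.flush()
                time.sleep(1)

    ThreadingHTTPServer(("", 8002), SSEHandler).serve_forever()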
This is good up to a point. If you have only a medium amount of CRUD-like data in a basic structure in a single-user context, this is fine. But when you get into complex relationships, collaborative editing, and unique visual representations of data, this goes right out the window.
The easiest example is probably also the first app to really sell dynamic web apps to the populace: Google Maps. There were mapping applications before Google Maps, and they operated much as you describe--through HTTP round trips--and they sucked. As a user, it was such a pain in the ass to wait for those things to rerender after every click. As a developer, it was such a pain in the ass to field those complaints from those users and know--barring some new invention in browser tech that nobody saw coming (XMLHttpRequest in Internet Explorer)--there really wasn't anything you could do about it.
It's true that a lot of sites go overboard with their interactions. But that's not the same thing as needing to ditch asynchronous requests completely.
I like form submissions too, but it's not for everywhere.
For example, file uploads using multipart form submissions are a poor user experience - you cannot put a percentage-complete bar anywhere because there is no way to determine how much data has been transmitted.
When a file upload is taking five minutes, the user is liable to think the upload failed, and then retry or give up. Very poor UX.
Other than that one exception, I cannot really think of any other place in which form submissions are the inferior choice.
Modern JS frameworks work in this same way. You just create a form and provide the action, which corresponds to one you define in your route file. It even still works without client-side JS!
You can also have both. Our apps are all SPAs, bundled into a single JS, HTML, and CSS file. But we also have an SSR app runner that contains hundreds of thousands of cached static HTML pages. The initial page load is from the SSR; further interaction is purely client-side. This way you have the best app-like experience and maximum SEO. The setup is maybe 400 lines of code.
> Bundled into a single JS, HTML, and CSS file. But we also have an SSR app runner ...
This is exactly the kind of complexity and bloat the GP wishes we could get rid of so that we could return to simpler web applications with forms that do a simple POST and get the updated web page in response.
Basically, server-side you have a simple routing app that is able to run the web applications through a headless Chrome instance. The output is cached in static HTML files. Once the initial page is served to the end user's browser, all interactions from then on are dynamic. This setup works for all kinds of stacks: Angular, Vue, React, or just custom.
The logic is pretty simple. No bloated layers of complexity.
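Roughly, the prerender step can look like this - a sketch shelling out to Chrome's real `--headless --dump-dom` flags, with the route list and paths made up:

    import pathlib
    import subprocess

    BASE = "https://example.com"           # hypothetical app origin
    ROUTES = ["/", "/about", "/products"]  # in practice, enumerated from the app
    CACHE = pathlib.Path("cache")
    CACHE.mkdir(exist_ok=True)

    for route in ROUTES:
        # --dump-dom prints the DOM serialized *after* the page's JS has run,
        # so the cached file is the fully rendered HTML.
        html = subprocess.run(
            ["chromium", "--headless", "--disable-gpu", "--dump-dom", BASE + route],
            capture_output=True, text=True, check=True,
        ).stdout
        name = route.strip("/").replace("/", "_") or "index"
        (CACHE / f"{name}.html").write_text(html)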
To clarify: the headless Chrome is used because it's ultimately a JS-rendered app, so you just pre-render all the routes on the server to get the final HTML, cache that, then serve the HTML on any first request. Then when the app is loaded in the browser, it just does everything client-side?
Yes, rip out all the advanced coding skills and simply do it the way it was originally intended.
I have a single PHP file with a textarea. It creates pages or updates them. The textarea only displays the article; everything above and below is removed and added again. When publishing, it adds the article to the index.html, archive-2025.html, and tagname.html pages. I haven't bothered automating removal. If it ever happens I'll do it by hand.
The search engine is a bit odd: it is a 26x26 list of raw files named aa.html, ab.html, etc., and each bit in those refers to an article. If you type foobar, it loads fo, oo, ob, ba, and ar.html, then performs an AND on the files. Whatever HTML files remain are loaded and searched for foobar.
I should really store everything in a DB instead. (Just for searching.)
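For anyone curious, the lookup half of that scheme is only a few lines. A rough Python re-creation (approximating my bit-per-article files as plain lists of article filenames):

    import pathlib

    def bigrams(query: str) -> list[str]:
        # "foobar" -> ["fo", "oo", "ob", "ba", "ar"]
        return [query[i:i + 2] for i in range(len(query) - 1)]

    def search(query: str, index_dir: str = "search") -> set[str]:
        candidates = None
        for bg in bigrams(query):
            # Each bigram file lists the articles containing that letter pair.
            entries = set(pathlib.Path(index_dir, f"{bg}.html").read_text().split())
            candidates = entries if candidates is None else candidates & entries
        # AND all bigram files together, then scan the survivors for the
        # actual query string to weed out false positives.
        return {a for a in (candidates or set())
                if query in pathlib.Path(a).read_text()}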
Edit: near zero maintenance over two decades. One instance where the host upgraded php and the editor stopped working but the website worked just fine so it doesn't really count.
I did not find this very convincing. I suspect it has to do with the author being very comfortable with site-generation tools, and not super comfortable with JavaScript. They don't mention any frameworks (React, Vue etc) so I suspect the JavaScript they write is doing things "the hard way" from scratch.
> My first impulse was to have a list of posts you can filter with JavaScript.
> But the more I built it, the more complicated it got. Each “list” of posts needed a slightly different set of data. And each one had a different sort order. What I thought was going to be “stick a bunch of <li>s in the DOM, and show hide some based on the current filter” turned into lots of data-x attributes, per-list sorting logic, etc. I realized quickly this wasn’t a trivial, progressively-enhanced feature. I didn’t want to write a bunch of client-side JavaScript for what would take me seconds to write on “the server” (my static site generator).
This is pretty trivial to implement in any framework.
Generally speaking, I find most of the "You don't need Javascript and frontend frameworks, just write static HTML!" posts reflect the skillsets of their authors. Me, I never really did any site-generation or server-side rendering stuff, so writing everything in VueJS + JavaScript is the easy way.
It’s rough for the old timers because as consumers of the web, we’re forced to use SPAs that have objectively worse UX. And sites are built that way now “because it’s how everyone does it.”
The sad thing is, there is no real obstacle to building a fast SPA. It's certainly not harder than to do it all "the old way". But you have to pay attention to performance.
If you just implement as many features as you can in as little time as possible, then the outcome will be something like Confluence, I guess, where you wait several seconds to load a documentation page you wanted to just read, not even edit.
React doesn't necessarily mean SPAs. It has great support for server-side rendering and that was a big part of why React became more popular than other client-side JS frameworks of the time.
In the current landscape, doing things "the hard way" in plain JS is often simpler than with React. You will give up reactivity, testability, declarative rendering, and the ability to hire any passerby, in exchange for not having to deal with a Rube Goldberg contraption of hooks, dependency tracking, query caches, and state management, with a complex build system. Terrible idea for a team. For a bunch of tabs on your blog? Probably worth the trade.
That said, the problems he describes would occur with React/Vue as well since they are a result of mixing a statically generated site with client-side rendering logic. It would not, in fact, be trivial without changing the server.
> Me, I never really did any site-generation or server-side rendering stuff
> I find [...] reflect the skillsets of their authors
I disagree, but I am 100% with you on the imperative for structure and order in the codebase.
A typical angle - and it used to be a very good one - is that vanilla JS doesn't offer a good way to organize common code or impose some sort of structure. I find this to be at the heart of the "won't work with a team" arguments. This is no longer the case and hasn't been for some time now (~10 years).
>In the current landscape doing things "the hard way" in plain JS is often simpler than with React.
I'm not sure what "simpler" means to you.
It is extremely simple to create and deploy a framework-based app: `npm create vue`, give it a name, answer a few questions or accept the defaults.
I find programming in such an environment very "simple", because the framework is hiding a lot of complexity.
> You will give up reactivity, testability, declarative rendering, and the ability to hire any passersby, in exchange for not having to deal with a rube goldberg contraption of hooks, dependency tracking, query caches and state management, with a complex build system.
All I can say is: if you have a codebase that has those issues, then the alternative implementation without a framework would be much more difficult to manage.
It's true that Webpack can be very complex to configure, but out of the box configuration gets you pretty far.
It's a great idea, but I don't see it working long-term. At some point I suspect big tech will aggregate and centralize and kill off these little sites. They may even start implementing technology that locks them down so that you can only visit them with approved devices. Hopefully Netscape refuses to go along with it, though I do worry about their business model's long term viability.
> "I build separate, small HTML pages for each “interaction” I want, then I let CSS transitions take over and I get something that feels better than its JS counterpart for way less work."
I'm not understanding how you are achieving the CSS transitions between what look like new page loads under the hood... Can you elaborate on how that works?
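My best guess is the newer cross-document View Transitions API, where plain CSS opts whole-page navigations into an animated transition, something like:

    /* Opt same-origin, multi-page navigations into a view transition. */
    @view-transition {
      navigation: auto;
    }

    /* Optionally tune the default crossfade. */
    ::view-transition-old(root),
    ::view-transition-new(root) {
      animation-duration: 200ms;
    }

...but I'd love to hear how it's actually done.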
That's a pretty major deal breaker for OP to leave out of his post touting it as something to build everything in your site on (especially for a tech blog)! Does it at least have a polyfill story? I see no mention of how to make it work on, uh, the other 15% of browsers worldwide, CanIUse is telling me.
The one element so far I dislike out of this is having search redirect to a new page.
This is something SPAs do as well - take, say, Google Web Search (last I used it directly, which was months if not years ago): when I begin typing a search, the currently-displayed content blanks out.
Why is this annoying?
Because often what I'm typing is being prompted by that content, and if I can't continue to view it as I'm typing my search I lose that context and my train of thought.
So: embed your search directly in every page on your website, rather than having it be a separate page. It's fine for the results page to redirect elsewhere, and if the search mechanics need that page to function, so be it (that's what form submissions are all about).
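All that needs is a plain form in the header of every page, something like (action and field names illustrative):

    <!-- Submitting navigates to the results page, which is fine;
         the point is that the box itself lives on every page. -->
    <form action="/search" method="get">
      <input type="search" name="q" placeholder="Search this site">
      <button type="submit">Go</button>
    </form>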
But other than that, I like the implementation, after realising that it was the blog page and site themselves which were the demonstration. (I'm ... slow like that sometimes.)
Ouch, the back button becomes "Let's move backwards through all your mouse clicks".
Open menu (marvel at the animation), close menu -> hitting back means "reopen menu". Clicking through the tabs, and hitting back means navigating through them again, backwards.
I guess hijacking the link with window.location.replace can mitigate this, but the "Javascript is evil!"-Amish web-dwellers will scream at me.
I guess this is a matter of preference. I see when I click a link on stack.lol that the page header remains visible, but the rest of the page disappears (blanks to white) for a second before the next page appears.
I prefer a seamless experience of seeing the next page immediately without a "white flash" first. For instance, if I am looking at a GitHub project page and I click the "Issues" button, in a fraction of a second the page changes to the Issues page without any intermediate disappearance or partial page load.
In most cases, I find transition effects like fading, sliding, etc. to be an annoyance that adds unnecessary delay and visual distraction.
I believe this is related to the implementation of the light/dark theme. It uses the preferred theme but defaults to light; if your preferred theme is dark, there will be a moment where it defaults to light before being set to dark. There are workarounds.
The page layout is broken in Firefox 136.0.1 but okay in Chrome 134.0.6998.89 =(. I don't have time to debug what CSS feature might be causing this problem.
This site doesn't work well on desktop Safari either. Clicking things doesn't seem to do anything but a few seconds later I'll see some movement. I suspect something is wildly broken or I just don't understand the navigation.
The web is so built for this. When you go to sites like this, it just feels right. We could have had this as our future instead of the craziness of JavaScript and Next.js and React. Now some new devs don't even know HTML exists; they think React is the web.
Imagine what the future could have been if the HTML spec had been enhanced to support the features of htmx rather than pushing for JavaScript enhancements.
We might have had a future without JavaScript ever running on a server.
I love this approach. I just wish there was better tooling for HTML polyglot programming.
PHP got it right by embedding itself within "HTML" documents. React also got it right by inventing JSX. There's Go templ too. They all said "HTML deserves first class support".
Why can't this DX also be available for other general purpose PLs? I want my Python language server to detect that this or that string is HTML, and provide features accordingly. I don't want to keep bouncing between .py and .html files. Is this achievable?
PEP 750 (still a proposal) adds "t-strings" to Python. They're a simple extension of f-strings that give developers access to the static and interpolated parts of the string before they get combined into a final string. They're Python's answer to JavaScript template literals.
This allows writing code like:
    user_name = current_user.user_name  # something user-supplied and potentially unsafe
    foo: HTMLElement = html(t"<div>{user_name}</div>")  # wouldn't be safe with f-strings

where `html` is a normal function that takes a `Template` instance and can return whatever it likes -- say, an `HTMLElement` rather than a `str`.
If PEP 750 does get accepted, eventually one hopes the community will introduce formatting, type checking, and LSP smarts for common kinds of t-strings, like HTML and maybe SQL, etc.
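As a toy illustration (not the PEP's settled API - the proposal is still in flux), such an `html` function might walk the template, passing literal parts through and escaping only the interpolated values:

    from html import escape

    def html(template) -> str:
        # Sketch only: assume iterating a Template yields the literal string
        # parts and interpolation objects carrying a `.value` attribute.
        out = []
        for part in template:
            if isinstance(part, str):
                out.append(part)                     # trusted literal markup
            else:
                out.append(escape(str(part.value)))  # untrusted user value
        return "".join(out)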
I think PHP was brilliant, and to this day it is still an underrated model, but as websites grow in complexity, particularly around data and access controls, the model needs to be inverted. When logic gets sprinkled throughout templates, it can turn into an unreadable nightmare. Personally, I think ERB in Ruby with Sinatra or Rails really nailed the approach.
I'm doing something sort of similar now -- my research advisor was remarking on the messy state of all the scripts I use for numerical experiments (there are dozens of them, poorly commented, constantly changing and not well versioned, assumptions/purpose not documented, etc.).
Naturally I used that as an excuse to implement a framework for tracking the output of numerical experiments along with any generated assets and output and configuration details in a SQLite database. Then wrote a generator that pulls everything out of the database and builds a static HTML site.
Anyway I implemented a system of tags for grouping similar experiments. It's less polished and sophisticated than this, but it was quick to implement and feels suitably lightweight.
I'm realizing now that the SQLite database was a misstep, mainly when it comes to adding notes and metadata. So I'm planning to just use the filesystem instead, so that one's raw experiment runs can be versioned as plaintext files (at the moment, just the static site is versioned).
But for now I need to get around to actually doing the experiments :(
On a similar note, does anyone know if it's possible, without JavaScript, for a link or form or something similar to be submitted without the whole page refreshing, while still being able to update some text?
For example, the upvote, favorite, and reply actions on hacker news website. They all require a whole page refresh. Is it possible to achieve it without page refresh and without JavaScript?
You can use a hidden `<input type="checkbox">` and then use the `:checked` CSS selector to show or hide the hamburger menu; you can see it working on my (very barebones :D) website [1]. Note that this implementation is not keyboard-navigable because the input is not visible; I should fix it someday.
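The whole trick, roughly:

    <style>
      #menu-toggle { display: none; }  /* hide the checkbox itself */
      #menu        { display: none; }  /* menu closed by default */
      #menu-toggle:checked ~ #menu { display: block; }  /* open when checked */
    </style>

    <input type="checkbox" id="menu-toggle">
    <label for="menu-toggle">&#9776; Menu</label>
    <nav id="menu">
      <a href="/">Home</a>
      <a href="/about">About</a>
    </nav>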
Reminds me of a Django project rewrite I got to work on very early in my career. I was able to convince the lead to stave off JavaScript because I could barely read Python at the time. I did however fail to convince my colleagues to write unit tests.
What were the arguments in favour of SPAs in the first place?
I'm finding it interesting that most of the top HN submissions (by popularity) matching "single page app" are critical, not positive. First 15 follow, all but 5 are negative:
- The myth that you can’t build interactive web apps except as single page app (https://htmx.org/essays/you-cant/) 262 points | 4 months ago | 143 comments
That’s also because they were not called SPAs for the first 15 years of the web.
The very first technology for building "SPAs" was Java applets. Corel released a web-based office suite as early as 1997, and it was practically an applet for each app.
This was replaced by Flash because Macromedia did an incredible job of getting Flash Player preinstalled on every desktop computer. If you wanted to deliver a rich and seamless app-like web experience in 1999-2004, you used Flash. (It was mostly very primitive as a dev environment, but did a great job at vector motion graphics and video.)
The original progenitors of present-day SPAs are Microsoft Outlook for Web and Google Maps. Outlook invented the ubiquitous XMLHttpRequest object, and Google Maps became so popular that it coined "AJAX."
Following these examples, people started building DOM-based JavaScript apps that loaded data asynchronously and built up views entirely in code. And that’s when they became modern SPAs.
(There was also Silverlight and some other technologies that vied for the rich web app crown, but JS/DOM carried the day especially when iPhone happened and wouldn’t support anything else.)
Appreciate the history lesson. And yes, I was there for all of that (though not engaged in Web-app development).
One of the first AJAX apps I recall was the original Gmail. That was a revolution in what the browser could do, followed fairly shortly by others (notably Google Earth which launched about the same time).
SPAs simply to deliver relatively static content, however, seem ... a poor fit. I'll note that HN itself is LLML.
Which leaves my original question: what specifically was the selling point of SPAs, or Flash, or early AJAX? Because despite having lived through that era, the emergence was both sufficiently diverse (a number of different approaches) and gradual that I don't have a sharp recollection of what it was.