This is great. I had an idea to use named iframes and targeted forms for simple, server-rendered pages with built-in style-scoped widgets, without leaning on complex client-side JS. But I never simplified it well, nor expressed a polished and elegant realization of the idea the way this htmz appears to.
A reminder to never give up good ideas, focus on excellence, and focus on refinement to a completion of an idea, and communicate well!
Also the comments here:
- This is a great hack and shows how close the browser is to offering SPA natively.
- This is a glorious demonstration of someone really understanding the platform.
- Simple and powerful, as the vanilla web should be. Thank you for this (small) gem :)
- This is a neat hack, I like it :). Thanks for sharing.
are exactly what I hoped to hear reflected about my creation, and are totally on point for what this type of thing should be. Close to the web-grain, using the given material of the web in the best way possible. Fuck yeah! :)
Thank you for being a genius! :)
And for inspiring about what's possible! :)
P.S - also your communication and marketing skills are top notch! I think the way you have communicated this elegant technical thing, from the choice of name, API, examples, copy -- is just awesome. I learn from it! :)
I am a little bit confused because your comments seem to imply initially that htmz is written by someone other than you, and then later that you wrote htmz.
Who are you and what is your relationship with htmz and its creators? Please be honest and refrain from violating federal law and FTC guidelines in your response.
You seem to have misinterpreted "[those comments] are exactly what I hoped to hear reflected about my creation". They weren't saying "I enjoyed hearing those comments about htmz, which is a thing I created" - they were saying "those comments are what I _had_ hoped to hear about my (unreleased and unnamed) creation, which indicates that htmz is a good implementation of a similar idea that I had"
I don't think this person is implying that at all (nor do I think that anyone needs to be trotting out federal law and FTC guidelines here :).
"I had an idea to use named iframes [...] But, I never simplified it well nor expressed a polished and elegant realization of that idea, as this htmz looks to me to be. [...] the comments here [...] are exactly what I hoped to hear reflected about my creation, and are totally on point for what this type of thing should be."
Seems pretty clear to me. This person had a similar idea but didn't complete it, and finds the flavor of appreciative "nice hack" energy (as opposed to "this is enterprise" or "this is a revolution" energy, I guess) appropriate for the project and the type of feedback they had wanted to hear had they completed their project.
> Who are you and what is your relationship with htmz and its creators? Please be honest and refrain from violating federal law and FTC guidelines in your response.
My name is John Galt and I wrote this library. When you learn who I really am, then you'll understand.
That's a great hack and it shows how close the browser is to offering SPA natively.
Just a few attributes and we could avoid the iframe.
It's probably more useful for proving a point than as an actual day-to-day tool. And the point seems to be: htmx is too much trouble for what it offers. We just need HTML-native ajax.
I'm the creator of htmx and think this is a great library/snippet. Much closer to what htmx-like functionality in HTML would/should look like in that it is following existing norms (iframes, the target attribute) much more closely than htmx.
From a practical perspective, a lot of the bulk of htmx is bound up in things like history support, collecting inputs, a lot of callbacks/events to allow people to plug into things, etc. I expect a lot of htmx-like libraries will come out now that it has some traction: it's not a super complicated idea, and many of them will pick smaller, more targeted subsets of functionality to implement. That's a good thing: the ideas of hypermedia are more important than my particular implementation.
I am now officially emotionally invested in htmx so I have to justify investing my time/energy in htmx instead of something else, so all alternatives to htmx are stupid and their existence make me very angry.
Sorry if I came across as dismissive, htmx is a much needed counterpoint to React's dominance and a great contribution to the industry.
And I hope its core idea (Ajax from HTML with results loading inside a target element) will be adopted as a Web standard.
not at all, i understand there are going to be different takes on what the right balance of functionality & configuration are and htmx has been hamstrung to an extent by being IE compatible, which, coupled w/ my commitment to backwards compatibility, means some obvious things are off the table (e.g. using fetch())
i don't begrudge other people having differing opinions on this stuff and agree w/the spirit of your original comment: HTML should have a native mechanism for doing what htmx does. hopefully htmx (and other takes on the idea) contribute to that future.
Hi Carson! I've been using htmx sprinkled with hyperscript and I find using your tools very enjoyable (though I found myself fighting a bit with hyperscript at the beginning but past that and once you get the mindset and the hs way, things are easier).
Thanks for these tools! I wanted to also take the opportunity to ask you something you either mentioned, commented or heard in a podcast.
You said that htmx might not be the tool to solve all problems (or certain kinds of apps). I'm asking because I think it's also great to hear when not to use something to solve a problem. So, in your opinion, what kind of webapps do you consider maybe not a good fit for htmx? And in that case, what alternate tools/approaches do you suggest? Once again, thanks a lot for htmx & hyperscript!
Things that involve a lot of events that need to be handled quickly are not a good fit for hypermedia. Sometimes you can have a rich island inside hypermedia though (e.g. a rich text editor w/ autocomplete, etc) so long as it triggers events and offers form participation. Two common examples I give are google maps and google sheets. Those would be difficult to do well in htmx. (But the settings pages for them might be a good use case for it)
I saw an htmx demo where the first request loaded the whole page to the browser and also synchronously loaded the code on the server into the browser's service worker.
The next request hits the service worker and then gets any json from the server ( if not cached ) and returns the html fragment.
It uses htmx.
I thought it was a nice have-your-cake-and-eat-it-too approach.
Regarding the size, I would guess that if htmz were extended to have the same features as htmx, it would end up similar in size? Would it make sense to modularize htmx so you only pay for what you really use, allowing features to be added without necessarily increasing the download size?
I think you could do a smaller htmx by dropping a lot of the event and config stuff & adopting modern tools that produce smaller javascript. Clever use of JavaScript features could probably knock it down as well, but i'm anti-clever in most cases. As it stands, htmx is ~17ms to download over slow 4G and should be cached after the first download, so, while I wish I could make it smaller, every time I've tried, it ends up not moving the needle too much.
We are going to get a chance to remove all the IE-related syntax in 2.x which should help a bit.
Kind of makes me wish there was a htmx-lite of sorts. Like an 80/20 solution at 20% of the size, which could be extended to the full blown package with extensions.
Have you thought about moving more features to extensions, as was done with the web sockets feature?
I've looked at the codebase and there isn't much I feel would do well as an extension. As I said upthread, most of the code is AJAX, input gathering, history support and then book-keeping/event stuff around that.
There will be a bunch of htmx-like libraries coming out though and I expect many of them to take different design perspectives. Two that I'm aware of are https://data-star.dev/ and https://ajaxial.unmodernweb.com/, both were created by people on the htmx discord. I know of at least two other minimalist rewrites htmx discord users have done, but they haven't published their stuff yet.
If you'd like to create your own, I do an overview of how htmx works here:
Yes this was a response to htmx. It was a half-parody half-I wanna make it work project.
Like https://github.com/vilgacx/aki
I would fear for anyone wanting to use this in production, BUT I would love for someone to get inspired and use the concepts rather than the actual code. Hmm, maybe I should write a disclaimer...
It's funny, I stumbled on a similar use for iframes a few years ago that I did put into production. I needed to have a rather large SPA for employees hosted locally - that is, on a server on a local network in retail stores, not accessible from the web or unless you're on the same network as the server. The employee had to be able to load it on a tablet (but wouldn't necessarily know the server's local, dynamically allocated IP address without checking). And it had to be deployable without any complicated local DNS setups, router changes, etc.
I solved it by writing a discovery service... a wrapper that's accessible on a public (non-SSL) web page that basically opens a pile of hidden iframes and uses them to war dial local IP addresses until it finds the app and replaces the page's content with the winning frame's. Amazingly this janky thing has held up ;)
Love it! I think this idea has some legs in that a programmer can build their own f****k, bundling only the pieces that they actually use. I don't see why it shouldn't be used in a production environment... other than someone on the internet disapproving... a fear many of us suffer from. It's a simple idea & can be easily managed in a codebase.
In the spirit of breaking apart HTMX piece by piece, I created hyop (Hypermedia/Hydration Operation). It weighs in at 61 B but uses the "hyop" attribute. I know, I know, I'm bad... but at least I save bytes.
I released it a few days ago & I'd appreciate feedback. A single__hyop is a strategy to run a hyop mapped to a key, as opposed to a multi__hyop, which runs multiple hyops on keys delimited by whitespace. Other strategies are possible, like implementing an HTMX-like API with tree-shaking... but I haven't yet run into a use case for such sophisticated behavior. TBH, I only use single__hyop but wanted to leave the API space open for more sophistication. I mainly use this pattern instead of the overly bloated hydration features that come with the large UI frameworks.
It is like onload for all elements, it works with tree-shaking, & the hyops can keep local scope (they do not need to be assigned to globalThis/window). I didn't use the onload attribute because hyop has different semantics than onload: eval would need to be used & the functions would need to be in global scope to preserve onload semantics.
There is also a DEBUG preprocessor which ensures that all of the hyops are loaded in the browser build & that there are no unused hyops.
It's only 61 bytes, so it's basically snippet-ware at this point. There may be more built on this foundation as I mentioned earlier. I wanted to express this pattern b/c it's been useful for me.
It looks like it's used to register hooks to invoke custom behaviour on elements defined by the hyop attribute. The document load handler defines a central dispatch table for all of these hooks.
For one thing, it seems like it wouldn't gracefully degrade in functionality if JavaScript was disabled or unavailable. With htmx you can use its HX-Request header to check whether a request is from htmx or not, and serve a partial HTML or a full page accordingly. So no matter the circumstances, you can maintain a good user experience.
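For example, a minimal sketch of that kind of dual response, using Node's built-in http module (the route and markup are made up for illustration):

    const http = require('http');

    http.createServer((req, res) => {
      // htmx sends "HX-Request: true" on requests it initiates
      const isHtmx = req.headers['hx-request'] === 'true';
      const fragment = '<ul id="items"><li>one</li><li>two</li></ul>';
      res.setHeader('Content-Type', 'text/html');
      // partial for htmx, full document for plain navigation (or no JS)
      res.end(isHtmx
        ? fragment
        : '<!doctype html><title>Items</title><body>' + fragment + '</body>');
    }).listen(3000);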
It's just a weird feeling for someone to use a hack / experiment as a foundation or something. I know 'the software is provided "as is", without warranty'.
The difference with htmx is that they are more polished.
i don't want people to pick htmx if it's going to be a bad choice for their particular application, would be happy to work w/ you to get something put together
It sounds like 'probably, yes' to adding a disclaimer, if only because I rather took it at face value seeing it posted here on QBN, and bookmarked it for investigation later...
HTML native ajax is the right approach, and what the htmx devs fully support if I understand correctly, but I don't think this demonstrates htmx is too much trouble for what it offers. It offers considerably more than what's possible here, eg. DOM morphing, animations, graceful degradation, rich event model, server side events, etc.
You know... we were doing something very similar 20 years ago for a stats dashboard web app - reloading only the DIVs that needed new, server-generated content. We didn't even bother to recreate the DOM; we just set innerHTML directly to the loaded content.
Perl on the back-end and some very tiny JS on the front-end. Do I need to tell you that this worked like an absolute charm and was blazing fast? The only considerable downside was that indeed lots of traffic was going back and forth. But 20 years later latency is much lower, traffic much cheaper, CPUs too, and I am very happy to see more and more people realize this bare-bones approach was actually a good thing to consider (the author lists the downsides).
To me such an approach is much more web-native compared to the abomination UI frameworks that try to reinvent presentation, marginalizing the browser to be nothing more than a drawing surface. But guess what: their primary goal is to save on network latency, which is going down, down, down with every year anyway.
The htmz/htmx approach is indeed much simpler and easier to live with in a large project; it is really sad that we have put so much logic in the front-end in recent years...
I think there's a pretty strong argument at this point for this kind of replacing DOM with a response behavior being part of the platform.
I think the first step would be an element that lets you load external content into the page declaratively. There's a spec issue open for this: https://github.com/whatwg/html/issues/2791
When I read your comment I thought "hang on, I remember this discussion from years ago". Looks like the issue dates back to 2017 indeed. I remember this idea being shelved because of concerns around the handling of things like scripts and stylesheets, especially in the context of the idea of HTML imports (which were the next big thing when Google's Web Components advocates harassed other JS frameworks for not being Polymer - until it turned out that HTML imports had to be shelved and Polymer ended up being shelved in favor of the next big thing Google pushed out).
iframe is that kind of element, if they hadn't added the brain dead srcdoc attribute to it to show inline content.
I think some of the proposals I see in that thread are kind of interesting, but to do it properly requires a new MIME type for html fragments, eg. text/html-fragment. This is arguably what htmx should also be using for fragments.
I do kind of like the idea of adding this as an attribute to some existing elements, and they show the inner content by default until the target is loaded. Basically like adding an iframe-like rendering mode to existing elements. Then you could implement htmx's hx-boost behaviour just by setting this mode on the body element and using the htmz trick of links naming the body as a target.
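For reference, the htmz version of that trick today looks roughly like this (a sketch only; I'm swapping a top-level <main> wrapper rather than <body> itself, since replacing <body> with arbitrary children gets messy, and I'm assuming the htmz iframe snippet is already on the page and named "htmz"):

    <main id="main">
      <!-- every link fetches the next page into the hidden iframe named
           "htmz" and names #main as the element to replace -->
      <a href="/about#main" target="htmz">About</a>
      <a href="/contact#main" target="htmz">Contact</a>
    </main>
    <!-- htmz snippet (the hidden iframe named "htmz") goes here -->

A native attribute could make exactly this pattern work without the iframe intermediary.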
In 2001 or so, I was building an HTML based email client. We used a hidden iframe to execute JS to load data from the server and manipulate the DOM with the result. It was not quite as neat and elegant as this—the browser support was not there. However, the basic mechanism was the same.
It warms my heart to see the basic mechanism in such a compact package, without libraries upon libraries and machinations to make things work. This is probably perfect for 90% or so use cases where react or similar FE frameworks are used at the moment.
We later used to joke that we used Ajax before Ajax was a thing.
I think I posted it before, but I had SPA using Spring Webflow running in dom nodes (no iframes required) with animation event handling, csrf, all back around 2006 or so. The calling html was also pure save for a jQuery include at the top and my app include. The backend used JSP. No back button issues, no client or server state (I used a db instead of some options webflow gives you), it was a dream. Completely lost on the company I worked for at the time. I was up and running with a new user flow, in half a day or so. Static blocks suddenly would "do stuff" from the perspective of the business team in about half a day.
This is the problem with technology. When it works well and really solves the problem, it is invisible. No one gets promoted for invisible.
Omg! Same story for me! I was working in a billing company and we used an iframe that we'd reload with JS that would run to change the DOM. Around the same time as well!
Understanding or not, certain decisions, like overriding the semantic meaning of a hash in a URL, don't seem like working with the platform. A better version would be adding the target to a `data-` attribute.
But the hash doesn't have a semantic meaning when it is in an href or action attribute, because it is stripped out before the request is sent to the server. The hash only has a meaning when it's in the URL bar. That's kinda the point of this hack.
For further size reduction, you don't need the `this.` in the inline event listener, so it can be `contentWindow.location.hash` and `contentDocument.body.childNodes` instead of `this.contentWindow.location.hash` and `this.contentDocument.body.childNodes`.
This will shave another 10 bytes off the snippet :D
Although location.hash defaults to the empty string '' when there is no hash, which gives a SyntaxError, so we still need the fallback selector to select none.
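Roughly, with `this.` dropped, the whole snippet would then read something like this (paraphrasing the published snippet from memory, so treat the exact wording as approximate; the `|| null` fallback is what keeps querySelector from throwing on an empty hash):

    <iframe hidden name=htmz onload="setTimeout(() =>
      document.querySelector(contentWindow.location.hash || null)
        ?.replaceWith(...contentDocument.body.childNodes))"></iframe>

The unqualified names still resolve because inline event handlers put the element itself on the scope chain.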
Given that this uses `target`, doesn't it mean that unlike htmx you can't easily make this gracefully degrade when JS isn't enabled?
And, yes, I know, saying "when JS isn't enabled" in 2024 is a bit like saying "when the user is on Mars and has a 10 minute RTT" but forgive me for being an idealist.
Yeah it breaks without JS. You could add the iframe behind JS, so the target would default to a new tab. But the server would still be designed to return HTML fragments. I never found a way for a server to check if the originating request is for an iframe or a new tab. It's not quite a graceful degradation.
> I never found a way for a server to check if the originating request is for an iframe or a new tab.
There is no such technique. One way to distinguish is to pick a URL convention and modify the iframe's URL (before the hash). For example, add ?iframe=true to the URL and have the server check for that. Perhaps more usefully, you could include information about the parent URL, e.g. url += `?parent=${document.referrer}`. Or something.
You could intercept the clicks with JS and add a special header, like htmx does, to return fragments, otherwise fall back to full documents.
Edit: rather than a header, dynamically adding a query parameter to the URL of the element that was clicked would probably fit better with htmz's approach.
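Something along these lines (the `fragment` param name and the target="htmz" convention are just placeholders here):

    // rewrite the link's URL at click time, before the browser follows it;
    // the URL object preserves the #target hash that htmz relies on
    document.addEventListener('click', (e) => {
      const link = e.target.closest('a[target="htmz"]');
      if (!link) return;
      const url = new URL(link.href);
      url.searchParams.set('fragment', 'true');
      link.href = url.href;
    });

The server can then key off the query param the same way htmx servers key off HX-Request.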
It is maintained, but the UI for dealing with JS is horribly time-consuming and overly complex compared to uMatrix. I never really understood it and I keep using uMatrix on my laptop. I switched to NoScript on my phone. Maybe I can install uMatrix now, if Mozilla really has unblocked many extensions. If uMatrix stops working, I'll switch to NoScript everywhere for JS and uBO for all the other issues.
I've never used uMatrix; what can it do that uBlock can't? I have set all sites to no JS, and if something does not work I click the JS toggle in the menu, reload the page and that's it.
And yes, uMatrix should work on Firefox on Android (at least it is installable).
uMatrix has a spreadsheet like matrix of features and sites. It's immediately clear what happens when I click a cell, a column or a row. With uBO I don't even understand where to click to toggle JS for one of the sites a page got scripts from. I learned it years ago but it was too cumbersome. I kept using uMatrix and I forgot it. I attempted to use it again now and I couldn't find anything. I didn't google for it.
I installed uMatrix on my phone and disabled NoScript. Of course I keep using uBO for filtering out ads and hiding annoying parts of sites with the element picker.
Not even just without JS. If you middle-click to open a link in a new tab, you just get the content that was expected to be swapped in. I think that abusing links is a far bigger sin than adding a custom attribute.
This is why htmx sends requests by default with an HX-Request header so that the server can distinguish between them and serve different content if need be.
Could you describe your ideals for why websites should gracefully degrade without JS enabled? It’s not an unpopular view on HN, but from my perspective as a web developer, JS is a part of browser application just like HTML, and there’s no reason for the website to work if you’ve disabled a part of the browser.
I suspect “doesn’t have JavaScript” is being used as a proxy for a lot of other ideals that I do understand, like “should work on as many devices as possible” but that’s a rough correlation and doesn’t make the use of JS inherently bad.
So I've been in numerous situations where having JavaScript enabled was either undesirable or impossible, granted, it's my own fault for using strange platforms like a Nokia N900 or whatever, with strange web browsers. But it's still nice when interactive websites continue to work even in contexts where JavaScript isn't a thing. I always thought of JavaScript as something you use to "upgrade" the web experience, not to build it. Although obviously there are some things you simply can't do without JavaScript and for which there exists literally no alternative. There's also situations where JavaScript is a liability. See, for example, the Tor browser.
My ideal, in particular, is that all functionality which can work without JavaScript should work without JavaScript. So, for example, I am not expecting someone to implement drag-and-drop calendars without JS, but there's no reason why editing a calendar item should fundamentally require JS.
That being said, I know this is an idealist position, most companies which work on developing web-applications simply don't care about these niche use-cases where JS isn't an option and as such won't design their web-applications to accommodate those use-cases to save on development costs and time. But, while I am not really a web-developer, whenever I do deal with the web, I usually take a plain-HTML/CSS first approach and add JavaScript later.
I browse the web without JS. It is a fast easy way to load websites. And some sites with heavy app interaction features need JS. And that is fine. It is the other sites that use JS to figure out if their users have read more than 2 articles that are the problem.
Degrade gracefully is a required development skill. Sites need to allow for their pages to work in limited fashion without JS. JS should only be a layer added for interactivity, animation, and app construction. Otherwise, workarounds are great.
Is there a way to make this work gracefully? YES!! Instead of using hash names, use '?id=example', and let a script in the frame figure out the real destination of the output. Otherwise, the page will load the full site. Also use a script to add the "target" attribute to links.
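One possible reading of that, as a sketch (the data-swap attribute, the iframe wiring, and the ?id= convention are all made up for illustration):

    // run after the DOM is ready; the iframe only exists when JS runs, so
    // without JS every link is an ordinary full-page navigation
    const frame = document.createElement('iframe');
    frame.hidden = true;
    frame.name = 'htmz';
    frame.addEventListener('load', () => {
      const id = new URLSearchParams(frame.contentWindow.location.search).get('id');
      const dest = id && document.getElementById(id);
      if (dest) dest.replaceWith(...frame.contentDocument.body.childNodes);
    });
    document.body.append(frame);

    // only opted-in links get retargeted at the hidden iframe
    document.querySelectorAll('a[data-swap]').forEach(a => { a.target = 'htmz'; });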
Because there's a case for a very useful Web without running a Turing-complete language on the visitor's end.
If you just want to consume text, images, audios and videos, follow links and fill in forms (and that's quite a lot and pretty awesome already), you shouldn't need JavaScript.
Chances are I’m on your website for information, mostly text content. Which really doesn’t require JavaScript.
So then, most JavaScript on the web is enabling revenue generation rather than improving the user experience. So yeah, disabling JS is a proxy for, “don’t waste my time.”
But I agree that it’s not inherently bad, but just mostly bad (for the user.)
A reason people might want to have JavaScript disabled, is because of the immense tracking possibilities that JavaScript has, which can't easily be safe-guarded against.
The people who do disable JavaScript completely are admittedly few and far between, but are, I would assume, more common among the Hacker News crowd.
If you would've told anyone in the year 2000 that it'd become standard practice to blindly execute programs sent to you by anonymous people you don't know you'd get a lecture on why that's stupid.
But in 2024 it's standard accepted practice. And that standard has made it so browser developers have to literally prevent the user themselves from having control over their browser because it's too dangerous to do otherwise.
The problem with the entire commercial web application ethos, despite it being a perfect fit for for-profit situations, is that it forces the rest of the web stack to gimp itself and centralize itself, CA TLS only, etc, just to keep the auto-code executing people secure. The one horribly insecure user behavior (auto executing random code) takes over from all other use cases and pushes them out.
So, we end up with very impressive browsers that are basically OSes but no longer function as browsers. And that's a problem. Design your own sites so that they progressively enhance when JS is available. When work requires you to make bad JS sites, do so, but only in exchange for money.
Of course! Actually, for any website that runs JS, in my opinion, we should just automatically forward the JS execution into a virtual machine, it is horrible to allow any random website to just run code directly on our machine.
What if we built this virtual sandbox directly into the browser, that way no code could run on our machine, but all websites still work fine? That's revolutionary!
Websites are at least supposedly sandboxed so they are not as much of a risk as running native binaries. But this is getting worse and worse though as browsers expose more and more of their host operating system's functionality (and more and more speculation CPU bugs make sandboxing as a concept appear infeasible). The benefits of using a website instead of a native application are quickly disappearing, while the drawbacks have only been somewhat mitigated.
If you could actually control the virtualization in the browser, that would be good. Unfortunately you are just left trusting that it is even virtualized (properly).
Supporting as many devices as possible, breaking RESTful APIs, etc.
A JS engine pre-supposes many, many things (too many) about a client, stuff like implicit assumptions that "this device has enough power and bandwidth to process my <insert length> javascript, perform appropriate security checks on it, handle this fast enough to service other requests in a timely manner, and also not take over the user's entire computer".
Accessibility means you should presume the least number of things possible. There's no sound, no visuals, no powerful cpu (maybe there isn't even a cpu), the device is the only working device from 20 years ago performing some critical function (e.g. government poverty assistance), there's only a couple minutes of power left, the internet is not available currently or quickly, etc.
You should never assume you have JS, period, and features should gracefully degrade in the absence of a JS engine.
It's a lot harder to create bitcoin miners or other malicious things without JS, so I keep scripts disabled by default and only enable them where it makes sense.
Reimplementing browser functionality in JS also often breaks expectations, as things don't work quite the same in all browsers. It also means browser extensions are less likely to be able to deal with the content.
Essentially it's like wanting your documents in PDF format even though executables are also part of the PC platform and you could just ship a custom executable that renders your document instead. To me JS is just as absurd.
It breaks without JS but many JS blocker extensions can be configured to always allow JS from the host serving the page. For example NoScript on my phone has the "Temporarily set top-level sites to TRUSTED" option.
With only 181 bytes it could even be included in the page. It's much less than the sum of the meta tags on many sites.
I had a dream yesterday, that scientists managed to create a new kind of EMP bomb. This bomb was unusual in that, by varying the level of EM discharge in the payload (dreamy-sciency explanation), it could target all hardware created after a certain point in time.
I had access to their facility (dreams being as dreams often are), and managed to set it for 1992, and right when I was about to press the button, I woke up.
GitHub itself used pjax heavily and I liked those interactions far more than the newer React ones, the HTML was much more semantic for one, with middle click always being respected.
Reusing the <slot> element like this is a bad idea - it has very specific behavior in browsers. In a shadow root it'll be replaced by the children of the host element, no matter what the library does.
HTML already has an inert <output> element for things like this.
Slight correction - <output> is not inert for users who use screen readers, though in this case it shouldn't cause issues. If you need an actual inert element, use a div.
Clicking a "tab" to change the example code to Greeting, or anything else adds a history event but doesn't update the url.
I probably would have done the exact opposite in both aspects. Use replace to prevent extra navigation entries, but still update the url for bookmarking etc.
For something that claims to "just be html", it seems to be breaking some fundamental rules of the web/UX. Whether it's a simple mistake or not and easy to fix, it does not inspire confidence in the framework.
...You realize this "framework" is 181 characters long, right?
<a> elements add to the history; that's what they're supposed to do. If you don't want them to do that, don't use <a> elements for UI. Use a button and change the iframe src with JavaScript. That's not really the library's problem, and it's not really hypertext's problem what the browser is putting into history.
> For something that claims to "just be html", it seems to be breaking some fundamental rules of the web/UX.
Yes, obviously. HTML+CSS are meant to put boxes and words and images on a screen. If you want to show a spreadsheet with cell formulas, use javascript. Fundamental rules of web/UX are not the same as fundamental rules of hypermedia.
> Whether it's a simple mistake or not and easy to fix, it does not inspire confidence in the framework.
There's a section on the page titled "Is this a joke?" (not to mention "Is htmz a library or a framework?"; it's a snippet). Confidence is not the point. The point is to demonstrate how little it takes to turn HTML into a stateful UI. As the author says:
> If you need something more interactive than the request-response model, you may try the htmz companion scripting language: javazcript. Sorry, I meant JavaScript, a scripting language designed to make HTML interactive.
> htmz does not preclude you writing JS or using UI libraries to enhance interaction. You could, say, enhance a single form control with vanillaJS, but the form values could still be submitted as a regular HTTP form with htmz.
This seems snappy from the US but I doubt someone in say NZ will have a good experience. Going back and forth between the client and the server on every interaction can result in terrible UX.
Users will be happy waiting 1-2 seconds after submitting a form but waiting that much to switch a tab is not gonna fly. Plus there's internet weather etc which might result in unpredictable latencies over long distances.
Yes, you can move the compute layer of your app close to the user in multiple ways. Moving the data to the edge is much harder.
Texting from NZ: I don't know how snappy your experience is but all I can say is that we're used to slower websites here in New Zealand. Switching between the tabs and having a <1 sec delay is totally acceptable imho. It doesn't feel "instant" but instant enough!
Yeah. I like the simplicity of htmx etc but network round trips are simply not suitable for a lot of interactions (at least if we’re talking about web apps). For a barebones approach I’d prefer tabs etc to be low-JS on top of a functional plain html page.
Personally I've landed on the balance of leaving rendering wherever the state lives.
In the tabs example, my question is always how dynamic and complex the tabs are. For fairly simple tabs it's pretty clean to server-render the HTML for all tabs and use a basic client-side script to handle navigation (ideally it's synced in the URL as well).
For more complex tabs, or tabs heavily dependent on dynamic state, I find that it's much easier to maintain long term if the rendering happens server-side. I personally have enjoyed HTMX for this, to swap out an island of HTML, but a full page reload has worked for decades now.
In my experience the worst solution is shipping the complex state and the complex rendering logic to the browser. From performance to bug reproducibility, it has always caused me more pain than the DX and original dev time are worth.
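For the simple-tabs case, a minimal sketch of what I mean (hypothetical markup; all panels are server-rendered and the hash keeps the active tab in the URL):

    <nav>
      <a href="#general">General</a>
      <a href="#billing">Billing</a>
    </nav>
    <section id="general">...general settings...</section>
    <section id="billing" hidden>...billing settings...</section>
    <script>
      function showTab() {
        const active = location.hash.slice(1) || 'general';
        document.querySelectorAll('section[id]').forEach(s => { s.hidden = s.id !== active; });
      }
      addEventListener('hashchange', showTab);
      showTab();
    </script>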
I don't think every solution in the webdev space is supposed to be a one size fits all panacea. Especially this, which seems more like an incentive to think about structuring webapps.
Sometimes it's interesting to see how far you can go without going the "let's download the web in a JS blob" way.
One of these days, I'm going to see whether I can still build something reasonably modern with Seaside, the continuation-based framework from the olden days, where basically the whole UI state is stored in the server-side session and everything is a request. Worked quite alright, way back when not everything had a local CDN and the internet tubes were a bit narrower.
If you need to fetch data from the server side, you need to fetch data from the server side. It doesn't matter much if it's a 10k json that you have to load and parse, or a 12k html snippet that you can directly display, the delay would be the same.
If you are concerned about the delay when switching a tab, nothing prevents you from loading both tabs during the first page load and then just displaying the content without a new network request. Whether you should do this or not is completely unrelated to whether you should use htmx, htmz or react.
The big killer here isn't just round trip time, it's the RTT compounded with the 14KB limit from TCP before it stops blindly sending data and compounded with extra resources that have to be fetched in a chain. If you keep the size of the response under 14KB and don't have render-blocking links to CSS and JS a page will be pretty fast; the moment it's any bigger the time before page load will double. I'm happy waiting about 300ms for a page served from the east coast of the US to load, but 600ms becomes noticeable and anything more - JS which loads more JS which loads JSON and blocks rendering until it's done, for example - starts to be excruciating.
One cool hack you can do is set the background of all widgets to a non-transparent color, and then set the background image of the iframe to a loading gif. That way, between loads, you will not have any widget content in the iframe and will instead reveal the loader. :)
Totally disagree as a regular user of 2g internet on a phone. If you are downloading a 1mb bundle to show me your home page, I'm going to give up within 30s or go elsewhere. I'm more than happy to put up with laggy feedback.
That being said, usually people writing downloads using fetch will implement a promise-based timeout on top of the network timeout, which causes all kinds of issues on 2g internet. One of my favorite side-effects are implementations that "retry after 2s" that don't actually cancel the in-flight request, causing my entire bandwidth to be hogged up with dozens of these requests "retrying" when the initial request eventually returned successfully. Javascript developers are unintentionally evil to non-5g internet.
It happens more than anyone wants to admit, but even at 2g speeds, 256kb takes almost 20 seconds to download (I get about 120-180 kbps). When sites provide the HTML and CSS, I can usually stop the page from loading and at least read the content within a few minutes. When there's a js bundle, there is usually a fairly large initial bundle, then that loads a couple of other bundles ... and I'm still looking at a white screen, and not even a loading animation (from the browser or the developer) because as far as the browser is concerned, it has loaded the HTML and most developers don't even realize they can edit the base-line HTML sent to browsers because they never even see it.
FWIW, I happen to be in Australia and it’s quite snappy. No noticeable delay at all. If I were on a satellite connection or something it would be different, but trans-pacific cable seems to be doing just fine. Really it’s the super-long initial latency connections where this remains an issue.
It's quite good. I noticed that my back button had to be repeatedly pressed to go back to write this comment, after interacting with the examples a few times. I'm sure that's simple enough to fix.
I happened to spend a little more time this weekend on htmx, which htmz was inspired by.
htmx/htmz does well for simple use cases; htmx does well for SSR-heavy cases (e.g. django).
In the end I returned to vue.js. With a few lines of code (put in a library) I can mimic htmx easily, plus get all the extra stuff vue.js brings. And no, I do not need to use a build tool for that: vue.js works fine via CDN inside an HTML file for simple use cases, the way htmx does, but vue.js can do more, e.g. draw a live chart from data returned via ajax calls, where vue.js is JSON by default while htmx is HTML by default.
same here, spent lots of time poking around, even tried svelte and react.js (heavily), and now I'm firmly back to vue.js.
vue.js does not mix SSR with SPA into one, making it much simpler compared to what React.js is doing today, and it provides way more than alpine.js and htmx etc.; it's the best one in practice for me now.
React has Meta behind it, Vue.js is a non profit project by individuals, very different, good news is that vue.js is good enough for serious projects nowadays and I especially like the fact that it does not mix SSR into SPA to make things complicated.
This seems likely to have issues with most "Content-Security-Policy" rules because of the inline script in "onload" and the iframe. That makes it a non-starter in real-world production environments.
If you're talking about the tabs demo, I think it's reasonable for some use cases to respect the back button as "previous tab", as long as it's easy to opt out.
If only HTML provided a way to navigate <a> without adding a history entry, like <a nohistory href=...>.
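There isn't such an attribute, so one rough workaround in the htmz setting (assuming links carry target="htmz") is to intercept the click and drive the iframe with location.replace(), which navigates it without pushing a history entry:

    document.addEventListener('click', (e) => {
      const link = e.target.closest('a[target="htmz"]');
      if (!link) return;
      e.preventDefault();
      // replace() swaps the iframe's document without a new history entry;
      // the iframe's onload still fires, so the htmz swap still happens
      window.frames['htmz'].location.replace(link.href);
    });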
If this was a real product i would market it as "Native time travel debugging! Go through your application state as you would go through your browser history!"
The real answer is that web developers should stop using anchor tags to target non-stateful links and should properly handle navigation to pages that are short lived.
But that battle is a bit like asking people to use their turn signals when merging.
I built a chat app with both group chat and private chat (with access control) using only plain HTML; only ~250 lines of HTML markup for the entire thing, no front end or back end code was used: https://github.com/Saasufy/chat-app/blob/main/index.html
It would be great to add support for other front end tech like this. I kind of like HTMX (especially as it's declarative). This HTMZ looks interesting. I'd like something with a big community and more components to handle more advanced cases (e.g. higher level components like calendars).
This is what I now wish existed: a flowchart/wizard that lets you choose a development framework based on some questions and answers, so that a minimum framework (HTMZ) is used if it can satisfy the requirements, or HTMX if one of your answers indicates it's needed, or Vue, etc. - getting "heavier" platforms as needed.
Of course we don't always know ahead of time the answers to the question. But being given the questions and the flowchart would be beneficial for the up front analysis.
This looks neat! I've never really been in to web development, but I'm curious... is it possible to create a standalone .html file for a browser-delivered app? Like, not just PWA or SPA, but... a single HTML App?
If I had modest amount of data in JSON baked into html, what's the barrier to something interesting, say... implementing a minimal spreadsheet or maybe just a sortable/filterable table?
> This looks neat! I've never really been in to web development, but I'm curious... is it possible to create a standalone .html file for a browser-delivered app? Like, not just PWA or SPA, but... a single HTML App?
Yes, you can include your JavaScript and CSS directly inside the <script> [0] and <style> [1] tags, so you don't need to include any other file. Images like PNG, JPEG, ... can be either embedded with a base64 data URL [2] or an SVG with the SVG tag [3].
> what's the barrier to something interesting, say... implementing a minimal spreadsheet or maybe just a sortable/filterable table?
Well, you could go the easy route and use an already existing JavaScript library for this. Libraries are normally included via a URL, but you can just copy the contents of this library into the script tag as mentioned above.
Otherwise, I think it's manageable to develop it yourself (sortable/filterable tables) without much knowledge, but frontend development can become a PITA very fast.
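As a rough idea of how little it takes, here's a single-file sketch of a sortable table (the data and ids are made up; click a header to sort by that column):

    <!doctype html>
    <table id="t">
      <thead><tr><th data-col="0">Name</th><th data-col="1">Score</th></tr></thead>
      <tbody>
        <tr><td>Ada</td><td>92</td></tr>
        <tr><td>Linus</td><td>85</td></tr>
        <tr><td>Grace</td><td>97</td></tr>
      </tbody>
    </table>
    <script>
      // re-append the rows of the clicked column in sorted order
      document.querySelectorAll('#t th').forEach(th =>
        th.addEventListener('click', () => {
          const col = +th.dataset.col;
          const body = document.querySelector('#t tbody');
          [...body.rows]
            .sort((a, b) => a.cells[col].textContent
              .localeCompare(b.cells[col].textContent, undefined, { numeric: true }))
            .forEach(row => body.append(row));
        }));
    </script>

Filtering is a similar handful of lines toggling `hidden` on rows.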
This is fantastic. Right now in my PWA I'm using a little library I created called `html-form` (HTMF) and I need to implement it as a SPA to avoid service worker fetching on every page change. So, I used a similar attribute to `hx-select`. But it adds quite a bit of code to the code base. Using your pattern I can remove all that added code and I can stop using hash navigation. Nice and simple.
Thank you for putting your code out in the public so I can learn from it!
Playing with the code it doesn't fit my use case. It seems like it would be really cool if you have a very narrow use case and that use case doesn't grow. Having your browser parse any scripts you have twice doesn't make much sense. Also, working with eventing would be an interesting problem to solve. I guess you would need to add an element for that, maybe something like `<template type=event>{"my-event": "my event data"}</template>`.
So, it seems like this pattern could probably take you quite a ways. But the main issue I have is that the browser would need to parse all that JS twice. If you don't need any `script` tags in what you pass back, though, it would be pretty nice.
Also, it wouldn't work for graceful degradation, as the server wouldn't know if you need the whole page or a fragment. HTMX, by contrast, adds a header that lets the server know JavaScript is being used by the front end.
Overall, I think it is really cool and it would be neat to see how far you could take this.
Careful with that snippet, targetOrigin of * is dangerous. I could embed your iframed content on my own site and then you'd happily send me the entire HTML inside the iframe.
One downside of this approach is that it fills my browser history with a bunch of extra entries, which ain't ideal (especially for my Back button). I'd guess that's probably fixable as an htmz "extension"?
Very neat use of existing HTML. One thing this highlighted for me is that the MDN docs on the target attribute are incomplete, because I was recently reading them and while they mention that you can target iframes, they didn't actually describe how to do so.
I was reading the docs because I had some thoughts on my own htmx-like take. You've used some of the same attributes I was going to use but slightly differently than how I was going to approach it. Some good food for thought!
I wonder how feasible it is to create an htmz/htmx-like lightweight library with support for React/Vue/Svelte using web components. I agree that for 90% of use-cases you don't need React, but for the last 10%, most of us are stuck with these bloated frameworks. Astro has a similar idea but it works as a full framework instead of being a library. Considering the limitations with Astro, I guess the biggest bottleneck is state management.
Where did those basic (<100 loc) jQuery / XHR wrappers go that simply hijacked all links & forms and turned them into AJAX requests? That seemed like the sweet spot of having apps that worked well for search engines to index (SSR) and yet still offered better page transitions than regular full-page-load requests when you had JavaScript enabled.
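Something like this, in plain JS with fetch rather than jQuery/XHR (the #content container id is an assumption, and error handling is omitted):

    async function swap(url, options) {
      const res = await fetch(url, options);
      document.querySelector('#content').innerHTML = await res.text();
      history.pushState(null, '', url); // keep the address bar in sync
    }

    // hijack same-origin links
    document.addEventListener('click', (e) => {
      const a = e.target.closest('a[href]');
      if (!a || a.origin !== location.origin) return;
      e.preventDefault();
      swap(a.href);
    });

    // hijack forms: GET becomes a query string, anything else keeps its body
    document.addEventListener('submit', (e) => {
      e.preventDefault();
      const f = e.target;
      if (f.method === 'get') {
        swap(f.action + '?' + new URLSearchParams(new FormData(f)));
      } else {
        swap(f.action, { method: f.method, body: new FormData(f) });
      }
    });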
I was curious to see the code so I found the GitHub repo. Posting it here in case anyone is interested, since the author doesn't link to it on the site and the npm page doesn't link to it either.
Except using a URL fragment refers to a “resource that is subordinate to another, primary resource” not a destination. They point out the URL abuse, so why do it?
Slot is also a part of the custom elements standard, but they say no custom elements.
Why use only web standards and then use them incorrectly?
I'll take a deeper look in the code later, but it seems useful.
I've been using the window location hash to do some simple navigation on my SPA, but I use JS. (Just hide all sections and show the one that matches the hash, e.g. #main.)
ha, I'm working on a dev-side version similar to this (mostly just for me, but hopefully publishable). I opted for an entirely pre-deployment build tool, where you just put
<custom-tag custom-param="value"></custom-tag>
in your HTML, run the build and it outputs the filled in file somewhere else. I know its functionality is very similar to many web frameworks (e.g. React, handlebars) but it does one thing.
Oops, it's just a mistake on word choice in the nutshell summary. This works on static files like on a local filesystem (in that case the "server" is the local filesystem that serves me files that happen to be html)
Edit: on second thought, direct filesystem access has different origins which would mess with iframes. I'm not on the computer now to test. But at the very least you need a basic web server that serves files.
When I want to replace some element using JS as the user clicks a link, it is progressive enhancement.
Usually links enable history navigation. If you do stuff like this, you need to code it in a way that uses the history API to fix it for users with JS enabled (majority of users).
If you don't want history navigation and URLs, why do you use links?
This breaks history navigation and bfcache for no good reason, or am I missing something? bfcache already provides SPA-like speed, or even better.
No need to avoid regular links if you e.g. link from a list to a detail page.
Also:
> No preventDefaults. No hidden layers. Real DOM, real interactions. No VDOM, no click listeners. No AJAX, no fetch. No reinventing browsers.
So many words saying nothing, just to cater to some sentiment. fetch is part of browsers by the way.
If I need to replace an element as the user clicks a link, I can code it myself (without using this abstraction layer, however thin it is). I also don't need an iframe for doing this. And preventDefault is aptly named and a good reminder for what progressive enhancement should do. If it's not meant to be a link, don't use a link.
And if you want to react to clicks, you know, use click listeners (event handlers). Where's the problem?
It is understandable to developers and uses native browser functionality as intended. As opposed to this hack, which I'd find pretty glaring, bug-prone and hard to understand if I had to debug an issue on some page that uses this snippet.
To me this seems like useless indirection and technical debt.
If you really need low-code AJAX "links" (who says you need that, if you don't want an SPA?), code yourself some understandable JS that is properly tailored to your site and respects history navigation, or use a library that is a good fit for your concrete use case.
I have changed it to explicitly state within the first paragraphs that "htmz is an experiment" now. I started this as a joke but it turned into a fun working solution - I myself am not sure if this is just a joke or a real thing. Maybe I'll use it in some smaller projects, or maybe not!
All power to you! No need to decide if it is a joke or not! ;)
In fact I clicked on it just because the name was so hilarious and already indicated pretty well what to expect (I didn't expect your particular solution though, kudos).
You can easily add a no-JS fallback by setting the base target in JS instead of HTML. Then append e.g. `?fullpage` to HTML <a> elements and remove this query param with JS. With no JS, links will open in the same window with the `?fullpage` query param, which returns a full page from the backend instead of a fragment.
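Sketched out, that might look like this (assuming the hidden htmz iframe is already on the page and named "htmz"; the `fullpage` param name is just the convention described above):

    // with JS: route links/forms at the hidden iframe and drop ?fullpage
    // without JS: links keep ?fullpage and load full pages in the window
    const base = document.createElement('base');
    base.target = 'htmz';
    document.head.append(base);

    document.querySelectorAll('a[href*="fullpage"]').forEach((a) => {
      const url = new URL(a.href);
      url.searchParams.delete('fullpage');
      a.href = url.href;
    });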
A commenter above mentioned that when you target an `iframe` it sends back the header `Sec-Fetch-Dest: iframe`, so if it doesn't have that then you know it requires the full page and not the snippet. So, yes, easy no JS fall back by dynamically adding the `base` tag. Nothing else needed.