
In the near future I'll write a blog about this, but the short answer is that even though more developers use REST incorrectly than not, it's still the term that best communicates our intent to the audience we are trying to reach.

Eventually, I would like that audience to be "everyone," but for the time being, the simplest and clearest way to build on the intellectual heritage that we're referencing is to use the term the same way they did. I benefited from Carson's refusal to let REST mean the opposite of REST, just as he benefited from Martin Fowler's usage of the term, who benefited from Leonard Richardson's, who benefited from Roy Fielding's.


> About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems.

This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the rollback to be temporary. [0]

> The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.

This is also incorrect. The organic evolution we actually have is that servers widely support the standardized method semantics in spite of the incomplete browser support. [1] When provided with the opportunity to take advantage of additional methods in the client (via libraries), developers use them, because they are useful. [2][3]

> Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing.

What you're describing isn't the de facto standard; it is the actual standard. GET is for reading and POST is for writing. The actual standard also includes additional methods, namely PUT, PATCH, and DELETE, which describe useful subsets of writing, and our proposal adds them to the hypertext.
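To make that concrete, here is a rough sketch of the kind of markup this enables (illustrative only; the exact syntax is defined in the proposal itself):

    <!-- Illustrative sketch only; the exact syntax is specified in the proposal. -->
    <!-- Today, forms can only express GET and POST: -->
    <form method="post" action="/articles">...</form>

    <!-- The idea is to let hypertext express the other standardized write methods too: -->
    <form method="delete" action="/articles/42">
      <button>Delete article</button>
    </form>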

> Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.

You're not making an actual argument here, just asserting that it takes time—I agree—and that it has no value—I disagree, and wrote a really long document about why.

[0] https://alexanderpetros.com/triptych/form-http-methods#ref-6

[1] https://alexanderpetros.com/triptych/form-http-methods#rest-...

[2] https://alexanderpetros.com/triptych/form-http-methods#usage...

[3] https://alexanderpetros.com/triptych/form-http-methods#appli...


> This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the roll back to be temporary. [0]

I see no such thing in the link you have there. #ref-6 starts with:

> [6] On 01/12/2011, at 9:57 PM, Julian Reschke wrote: "One thing I forgot earlier, and which was the reason

But the link you have there [1] does not contain any such comment. Wrong link?

[1] https://lists.w3.org/Archives/Public/public-html-comments/20...

(will reply to other points as time allows, but I wanted to point out this first)


You're right about the quote, thanks for pointing that out. And somehow I can't find the original one anymore, which is frustrating. I replaced it with a different quote from the same guy saying the same thing elsewhere in the discussion.

In many cases, browsers will also automatically perform a "smooth" transition between pages if your caching settings are done well, as described above. It's called paint holding. [0]

One of the driving ideas behind Triptych is that, while HTML is insufficient in a couple key ways, it's a way better foundation for your website than JavaScript, and it gets better without any effort from you all the time. In the long run, that really matters. [1]

[0] https://developer.chrome.com/blog/paint-holding

[1] https://unplannedobsolescence.com/blog/hard-page-load/


Co-author here! I'll let the proposal mostly speak for itself but one recurring question it doesn't answer is: "how likely is any of this to happen?"

My answer is: I'm pretty optimistic! The people on WHATWG have been responsive and offered great feedback. These things take a long time but we're making steady progress so far, and the webpage linked here will have all the status updates. So, stay tuned.


Thank you for the work. It is tedious and takes a long time. I know we are getting some traction at WHATWG.

But do we know if Google or Apple have shown any interest? In the end you could still land this at WHATWG and have Chrome / Safari not support it.


Is it possible to see their feedback? Is it published somewhere public?

How much would HTMX internals change if these proposals were accepted? Is this a big simplification or a small amount of what HTMX covers?

Similarly, any interesting ways you could see other libraries adopting these new options?


i don't think it would change htmx at all, we'd probably keep the attribute namespaces separate just to avoid accidentally stomping on behavior

i do think it would reduce and/or eliminate the need for htmx in many cases, which is a good thing: the big idea w/htmx is to push the idea of hypermedia and hypermedia controls further, and if those ideas make it into the web platform so much the better


This covers a lot of the common stuff.

This is native HTMX, or at least a good chunk of the basics.


It sounds like the client you're describing is less capable than the client of 2005, and I'd be curious to hear why you think that's a good thing.

The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API. Hypermedia is completely self-describing when you have human users involved but not when you don't have human users involved. If you insist on HATEOAS as the substrate of the API then you need to give us a schema language that we can use to automate things. Then you can have your hypermedia as part of the UI and the API.

The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication. You can cure that code duplication by just using the API from JavaScript on the user-agent to render the UI from data, and now you're essentially using something like a schema but with hand-compiled codecs to render the UI from data.

Even if you go with hypermedia, using that as your API is terribly inefficient in terms of bandwidth for bulk data, so devs invariably don't use HTML or XML or any hypermedia for bulk data. If you have a schema then you could "compress" (dehydrate) that data using something not too unlike FastInfoSet by essentially throwing away most of the hypermedia, and you can re-hydrate the hypermedia where you need it.

So I think GP is not too far off. If we defined schemas for "pages" and used codecs generated or interpreted from those schemas then we could get something close to ideal:

- compression (though the data might still be highly compressible with zlib/zstd/brotli/whatever, naturally)

- hypermedia

- structured data with programmatic access methods (think XPath, JSONPath, etc.)

The cost of this is: a) having to define a schema for every page, b) the user-agent having to GET the schema in order to "hydrate" or interpret the data. (a) is not a new cost, though a schema language understood by the user-agent is required, so we'd have to define such a language and start using it -- (a) is a migration cost. (b) is just part of implementing in the user-agent.

This is not really all that crazy. After all XML namespaces and Schemas are already only referenced in each document, not in-lined.

The insistence on purity (HTML, XHTML, XML) is not winning. Falling back on dehydration/hydration might be your best bet if you insist.

Me, I'm pragmatic. I don't mind the hydration codec being written in JS and SPAs. I mean, I agree that it would be better if we didn't need that -- after all I use NoScript still, every day. But in the absence of a suitable schema language I don't really see how to avoid JS and SPAs. Users want speed and responsiveness, and devs want structured data instead of hypermedia -- they want structure, which hypermedia doesn't really give them.

But I'd be ecstatic if we had such a schema language and lost all that JS. Then we could still have JS-less pages that are effectively SPAs if the pages wanted to incorporate re-hydrated content sent in response to a button that did a GET, say.



SPAs are for humans, but they let you have structured data.

That's the problem here. People need APIs, which means not-for-humans, and so to find an efficient way to get "pages" for humans and APIs for not-humans they invented SPAs that transfer data in not-for-humans encodings and generate or render it from/to UIs for humans. And then the intransigent HATEOAS boosters come and tell you "that's not RESTful!!" "you're misusing the term!!", etc.

Look at your response to my thoughtful comment: it's just a dismissive one-liner that helps no one and which implicitly says "thou shalt have an end-point that deals in HTML and another that deals in JSON, and thou shalt have to duplicate effort". It comes across as flippant -- as literally flipping the finger[0].

No wonder the devs ignore all this HATEOAS and REST purity.

[0] There's no etymological link between "flippant" and "flipping the finger", but the meanings are similar enough.


Yeah, that was too short a response, sorry I was bouncing around a lot in the thread.

The essay I linked to somewhat agrees w/your general point, which is that hypermedia is (mostly) wasted on automated consumers of REST (in the original sense) APIs.

I don't think it's a bad thing to split your hypermedia API and your JSON API:

https://htmx.org/essays/splitting-your-apis/

(NB, some people recommend even splitting your JSON-for-app & JSON-for-integration APIs: https://max.engineer/server-informed-ui)

I also don't think it's hard to avoid duplicating your effort, assuming you have a decent model layer:

https://htmx.org/essays/mvc/

As far as efficiency goes, HTML is typically within spitting distance of JSON particularly if you have compression enabled:

https://github.com/1cg/html-json-size-comparison

And it also may be more efficient to generate because it isn't using reflection:

https://github.com/1cg/html-json-speed-comparison

(Those costs will typically be dwarfed by data store access anyway)

So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.

Hope that's more useful.


Thanks, I appreciate the detailed response.

> So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.

I've yet to see what I proposed, so I've no idea how it would work out. Given the current state of the world I think devs will continue to write JS-dependent SPAs that use JSON APIs. Grandstanding about the meaning of REST is not going to change that.


I've built apps w/ hypermedia APIs & JSON APIs for automation, which is great because the JSON API can stay stable and not get dragged around by changes in your application.

As far as the future, we'll see. htmx (and other hypermedia-oriented libraries, like unpoly, hotwire, data-star, etc) is getting some traction, but I think you are probably correct that fixed-format JSON APIs talking to react front-ends is going to be the most common approach for the foreseeable future.


If you want JS-lesness and HATEOASnes then maybe if we had an automatic way to go from structured data to HTML... :)

most structured-data to UI systems I have seen produce pretty bad, generic user interfaces

the innovation of hypermedia was mixing presentation information w/control information (hypermedia controls) to produce a user interface (distributed control information, in the case of the web)

i think that's an interesting and crucial aspect of the REST network architecture


What I have in mind is something like this:

1) you write your web page in HTML

2) where you fetch data from a server and would normally use JS to render it you'd instead have an HTML attribute naming the "schema" to use to hydrate the data into HTML which would happen automatically, with the hydrated HTML incorporated into the page at some named location.

The schema would be something like XSLT/XPath, but perhaps simpler, and it would support addressing JSON/CBOR data.
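For concreteness, a hypothetical sketch of what (2) could look like (the attribute and file names here are invented purely for illustration):

    <!-- Hypothetical markup; "data-src" and "data-schema" are invented names. -->
    <section data-src="/api/orders.json" data-schema="/schemas/order-list.xform">
      <!-- The user-agent would fetch the JSON, apply the named schema/stylesheet,
           and insert the resulting hypermedia (links, forms, buttons) here. -->
    </section>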


this sounds like client side templating to me (some annotated HTML that is "hydrated" from a server) but attached directly to a JSON api rather than having a reactive model

if you have a schema then you are breaking the uniform interface of REST: the big idea with REST is that the client (that is, the browser) doesn't know or care what a given end point returns structurally: it just knows that it's hypermedia and it can render the content and all the hypermedia controls in that content to the user

the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas

REST (original sense) does couple your responses to your UI, however, in that your responses are your UI, see https://htmx.org/essays/two-approaches-to-decoupling/

I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.


> if you have a schema then you are breaking the uniform interface of REST: the big idea with REST is that the client (that is, the browser) doesn't know or care what a given end point returns structurally: it just knows that it's hypermedia and it can render the content and all the hypermedia controls in that content to the user

This doesn't follow. Why is rendering one thing that consists of one document versus another thing that consists of two documents so different that one is RESTful and the other is not?

> this sounds like client side templating to me (some annotated HTML that is "hydrated" from a server) but attached directly to a JSON api rather than having a reactive model

I wouldn't call it templating. It resembles more a stylesheet -- that's why I referenced XSLT/XPath. Browsers already know how to apply XSLT even -- is that unRESTful?

> the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas

Nonsense. The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server", it's not coupling anything. It's a compression technique of sorts, and mainly one that allows one to reuse API end-points in the UI.

EDIT: Sending the data and the instructions for how to present it separately is no more non-RESTful than using CSS and XML namespaces and Schema and XSLT are.

I think you're twisting REST into pretzels.

> REST (original sense) does couple your responses to your UI, however, in that your responses are your UI, see https://htmx.org/essays/two-approaches-to-decoupling/

How is one response RESTful and two responses not RESTful when the user-agent performs the two requests from a loaded page?

> I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.

You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.


> This doesn't follow. Why is rendering one thing that consists of one document versus another thing that consists of two documents so different that one is RESTful and the other is not?

Two documents (requests) vs one request has nothing to do with anything: typical HTML documents make multiple requests to fully resolve w/images etc. What does bear on if a system is RESTful is if an API end point requires an API-specific schema to interact with.

> Browsers already know how to apply XSLT even -- is that unRESTful?

XSLT has nothing to do with REST. Neither does CSS. REST is a network architecture style.

> The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server"...

I guess I'd need to see where the hypermedia controls are located: if they are in the "data" request or in the "html" request. CSS doesn't carry any hypermedia control information, both display and control (hypermedia control) data is in the HTML itself, which is what makes HTML a hypermedia. I'd also need to see the relationship between the two end points, that is, how information in one is consumed/referenced from the other. (Your mention of the term 'schema' is why I'm skeptical, but a concrete example would help me understand.)

If the hypermedia controls are in the data then I'd call that potentially a RESTful system in the original sense of that term, i'd need to see how clients work as well in consuming it. (See https://htmx.org/essays/hypermedia-clients/)

> You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.

When discussing REST i think it's reasonable to link to the paper that defined the term. Like Fielding, I regard the uniform interface as the most distinguishing technical characteristic of REST. In as much as a proposed system satisfies that (and the other REST constraints) I'm happy to call it RESTful.

In any event, I think some concrete examples (maybe a gist?) would help me understand what you are proposing.


> Two documents (requests) vs one request has nothing to do with anything: typical HTML documents make multiple requests to fully resolve w/images etc. What does bear on if a system is RESTful is if an API end point requires an API-specific schema to interact with.

It's an API-specific schema, yes, but the browser doesn't have to know it because the API-to-HTML conversion is encoded in the second document (which rarely changes). I.e., notionally the browser only deals in the hydrated HTML and not in the API-specific schema. How does that make this not RESTful?


Well, again I'm not 100% saying it isn't RESTful, I would need to see an example of the whole system to determine if the uniform interface (and the other constraints of REST) are satisfied. That's why i asked for an example showing where the hypermedia controls are located, etc. so we can make an objective analysis of the situation. REST is a network architecture and thus we need to look at the entire system to determine if it is being satisfied (see https://hypermedia.systems/components-of-a-hypermedia-system...)

> I think you can profitably split your application API (hypermedia) from your automation API (JSON)

Why split them? Just support multiple representations: HTML and JSON (and perhaps other, saner representations than JSON …) and just let content negotiation sort it all out.



The "structured data with programmatic access methods" sounds a lot like microformats2 (https://microformats.org/wiki/microformats2), which is being used quite successfully in the IndieWeb community to drive machine interactions with human websites.

Thanks for the link!

> The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API

We already have the schema language; it’s HTML. Stylesheets and favicons are two examples of specific links that are automatically interpreted by user-agents. Applications are free to use their own link rels. If your point is that changing the name of those rels could break automation that used them, in a way that wouldn’t break humans…then the same is true of JSON APIs as well.
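For example (the application-specific rel name here is invented for illustration):

    <!-- Standard rels that user-agents already act on automatically: -->
    <link rel="stylesheet" href="/styles/site.css">
    <link rel="icon" href="/favicon.ico">

    <!-- An application-defined rel that an automated client could follow by convention: -->
    <a rel="order-history" href="/orders?page=2">Order history</a>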

Like, the flaws you point out are legit—but they are properties of how devs are ab/using HTML, not the technology itself.


HTML is most decidedly not a schema language.

> The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication.

What code duplication? If both these APIs use the same data fetching layer, there's no code duplication; if they don't, then it's because the JSON API and the Hypermedia UI have different requirements, and can be more efficiently implemented if they don't reuse each other's querying logic (usually the case).

What you want is some universal way to write them both, and my general stance is that usually they have different requirements, and you'll end up writing so much on top of that universal layer that you might as well have just skipped it in the first place.


I've worked with a HATEOAS database application written in Ruby using Rails. It still managed to have HTML- and JSON-specific code. Its web UI was OK, but not as responsive as a SPA.

Hey there, good question! Probably worth reading both sections 6 and 7 for context, but I answer this question specifically in section 7.2: https://alexanderpetros.com/triptych/form-http-methods#ad-ho...

Hey Simon, maintainer + article co-author here.

tl;dr htmx is better than a lot of alternatives, but Simon is right, it can improve this story and it has a responsibility to do so.

As a framing device: any interactivity that is not described with HTML is interactivity that someone is responsible for making accessible (which you correctly pointed out). Unlike JS frameworks, htmx encourages the developer to solve problems with HTML first, so the inaccessible surface area that the developer has to worry about is going to be much smaller on htmx-based applications than it is with JS-based applications. That's a pretty strict win from where a lot of the industry is right now.

That having been said, I agree with you (and disagree with a lot of our defenders in the comments) that thinking about this is in-scope for htmx. In fact, I'm a firm believer that libraries are exactly where responsibility for implementing ARIA attributes lives. The average web developer should not have to think about this at all—they should be able to use declarative abstractions and trust that they are annotated correctly for screen readers. This is the only system that scales, and htmx should adhere to it as much as possible.

Ideally, htmx would just add the polite notification attribute to the elements it processes, and give an easy way to customize that. I think it's basically possible to do this in a backwards-compatible way (first as an extension) that aligns with the maintenance strategy outlined in this essay. And I do think we can improve the documentation as well.
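As a rough sketch of what that could look like (not an existing htmx feature; the live region is standard ARIA, and the automatic annotation is the hypothetical part):

    <!-- A live region that screen readers announce politely, without stealing focus: -->
    <div id="status" aria-live="polite"></div>

    <!-- hx-get and hx-target are existing htmx attributes; the idea is that htmx
         (or an extension) could add the aria-live annotation to targets like this
         automatically, so swapped-in content gets announced. -->
    <button hx-get="/cart/count" hx-target="#status">Update cart</button>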


I would argue that frontend libraries like htmx and React are not the place to build in ARIA attributes patterns/behaviours. In fact, none of the Big Ones do anything specifically w.r.t. announcing changes. See https://news.ycombinator.com/item?id=42625070

What would be the appropriate place, imho, would be component frameworks that are built on top of htmx. FastHTML is an example; they already have htmx integration and they talk about server-side components. If we look at a popular one in React, this is what they do: https://blog.logrocket.com/aria-live-regions-for-javascript-...

Thinking about it in terms of dream-html[1], it might look like:

    let announce msg =
      p [
        class_ "announcement";
        Hx.swap_oob "innerHTML:.announcement";
        Aria.live `polite;
      ] [txt "%s" msg]

    (* POST /purchase *)
    let purchase request =
      let thank_you_fragment = ... in
      Dream_html.respond (
        (* Join fragments together *)
        null [
          thank_you_fragment;
          announce "Your purchase has been successful!";
        ]
      )

[1] https://github.com/yawaramin/dream-html

I'm thrilled to hear you're thinking about this. I really like the philosophy of the project, very happy to hear that reducing accessibility friction is a value you care about too.

Hey Dan, htmx maintainer here.

I would love to know in what other two ways relative links are broken, and which event stopped working, so we can get those fixed. With respect to the fix you PRed (thank you, by the way), we did get that merged very quickly, and would love to do the same for whatever is broken here, even (especially) if you are no longer interested in doing the fix yourself. [0]

As for the DX of making changes to htmx: absolutely true! The single file is a DX we've chosen, and it has both costs and benefits, which we've written about in "Why htmx doesn't have a build step". [1]

[0] https://github.com/bigskysoftware/htmx/pull/1960

[1] https://htmx.org/essays/no-build-step/


Hi Alex, thanks for the reply and for your work on HTMX. I love the idea and think there's a strong need for something like HTMX, and if HTMX can live up to those promises then great. For me it doesn't, currently, but it sounds like it does for others.

The relative link bugs are [0] and [1], and I fixed [2]. My fix was merged quickly, but the other two bugs, which appear to be of similar significance, have been open for well over a year each. My issue, however, is less about these specific issues and more about the general lack of support for hypermedia, which speaks to a general state of HTMX not living up to its promises.

As for the DX, I think not having a build step, and code structure, are somewhat orthogonal. I agree with much of that essay, but most of it is about issues with complex build systems and the use of Typescript. It's not about having 190 top-level functions, which, with all due respect, I think indicates a lack of architecture. Again, the issue is less about the specifics, and more about the fact that HTMX is not living up to its promise of simplicity because of how impenetrable the codebase is in this regard.

As mentioned, so far I have found Stimulus and Turbo to be better implementations of what HTMX appears to promise to be. More activity in this space from anyone and everyone is great here though!

[0]: https://github.com/bigskysoftware/htmx/issues/1476

[1]: https://github.com/bigskysoftware/htmx/issues/1736

[2]: https://github.com/bigskysoftware/htmx/pull/1960


Why is the number of top level functions a problem? As long as they are named and organized appropriately I don't see the issue

I love writing prototypes as a single file as much as the next guy, but there's almost 5k lines of code in this one file: https://github.com/bigskysoftware/htmx/blob/master/src/htmx....

They're sectioned off, so "event/log support" and "AJAX" and such are grouped by the type of method, but they're not prefixed so there's no way for your editor to help you explore grouped functionality.

Given that the code isn't particularly organized or structured in any way that I could quickly glean (aside from the aforementioned grouping of related functionality, which is only so helpful), I think I'd be put off from wanting to contribute to this codebase.


De gustibus, but after taking a look personally I think it's pretty good. It's typed to the extent allowed by js, sections are clearly delineated, and if you collapse all the functions you can get a quick overview of every section. I personally would use a different naming convention, but still their functions look reasonably named.

See: php.

190 functions at a single level strongly implies that they're not named and organized appropriately.

Naming doesn't have anything to do with the number of functions. They section things off in the file, so if you prefer things split into multiple files I can understand, but it's a personal preference in something this size

If you're not using more common organization tools like modules, naming tends to be where organization is implemented: common prefixes, etc. Even if they're carefully done, you still end up with a long list of names, with a lot of implicit rules to keep in mind to decode the name (this is why Hungarian notation seemed like such a good idea in the beginning, and a bad idea once it was actually extensively used). Names shouldn't need decoding.

I haven't looked deeply at HTMX, so I won't claim they're falling prey to this exact problem. But it's definitely a code smell that's concerning.


i don't mind a lot of functions in a single file:

https://htmx.org/essays/codin-dirty/#i-prefer-to-minimize-cl...


The more you need to hold in your head at once to understand code, the harder it is to understand, and the harder it is to contribute or onboard to a codebase.

A lot of functions doesn't necessitate a lot of things to hold in your head, but in my experience, HTMX hasn't got enough other structure to prevent this. I was not confident about the changes I made, and the reviewer was not confident about the changes. In a well architected codebase the goal is for these things to be obvious.

As for minimising classes? Sure. I can get behind that. But I think it's orthogonal to having a lot of top level functions with no clear naming or sorting.

If the goal is to have a single-file codebase, I'd suggest considering the following (you may already have done so, but I haven't noticed consideration in the few documents I've read):

- Structuring the file into clearer regions – there is already some of this, but comment blocks are easy to miss in a 5.2k-line file, and utilities are everywhere.

- Adding named closures for grouping related functionality – "classes lite", at least gives some function namespacing and code folding.

- Ordering the file to help direct readers to the right bit, literate-programming style, so that there's a sort of narrative, which would help understand the architecture.

- Function name prefixes to indicate importance – is something the entrypoint to core functionality? is it a util that should be considered a sort of private function?

- Pure functions – so much of the code is state management performed in the DOM, which makes it hard to test, hard to know if it's working, hard to know what interactions will be introduced, etc. State management is always hard, but centralising state management more would be good.

- (That said... arguably library internals are the place to make the low-abstraction high-performance trade-off with a bunch of mutable state. However, this makes it hard to have other Javascript that co-exists with HTMX, because it's too easy to stomp over each other's changes. A better integration path, like Stimulus, might alleviate this and retain HTMX's control over the DOM).

- I understand the preference for longer functions, but `handleAjaxResponse` is a lot. More abstraction would really help make this more understandable.

I get that personal preferences are key to why HTMX is the way it is, but I think it's important for the general health of open source projects that others are able to contribute, safely and effectively, and I'm not sure the current choices are most conducive to that. Hopefully some sort of middle ground can be found where HTMX doesn't lose its "personality"(?) but where some of these things can be improved.


Obviously it's an idiosyncratic way to organize code, and I get that someone coming in and making a one-time contribution might be a bit overwhelmed by how different it is than other codebases, but I've been happy with how easily i've been able to come into it after taking time off and figure things out, particularly when debugging.

We have other people contributing on an ongoing basis and i've had people (mostly non-JS people) comment that they find the codebase easy to navigate, since you don't have to rely on IDE navigation.

There isn't a lot of state involved in htmx: it's mostly passed around as parameters or stored on nodes. History, where you ran into a problem, is a tricky part of the library that's hard to test, and that's where most of the problems have cropped up. I could probably factor that better to make it more testable, at the cost of more abstractions and indirection.

In general, I'm not inclined to change the organization much (or the codebase, per the article above) so that we can keep things stable (including the behavior of events, which apparently changed on you at one point.) I'll sand it, but I'm not going to do a big refactor. We've had people come in and propose big changes and reorganizations, and we've said no.

https://data-star.dev is a result of someone proposing a big htmx rewrite and then taking it and doing their own thing, which I think is a good thing. htmx will stay boring, just generalizing hypermedia controls.

I did a walk through the codebase here:

https://www.youtube.com/watch?v=javGxN-h9VQ

Given the lack of API changes going forward, I hope that artifacts like that, coupled with overall stability of the implementation, will mitigate risks for adopters.


I think an opinionated stance is in general a good thing when it comes to open source projects. I just worry that contrarianism in frontend development is being conflated with contrarianism in general programming. The former being the intention of the project that I support (yay hypermedia!), and the latter most likely not being the goal.

Maybe the sanding will help, and if HTMX is not going to change much then maybe not much is needed, but I think there's still a way to go to stability and feature completeness.

> https://www.youtube.com/watch?v=javGxN-h9VQ

Thanks for the link, I'll definitely give this a watch!


i'm pretty contrarian in programming as well, so it tracks:

https://htmx.org/essays/codin-dirty/

https://grugbrain.dev/

htmx is feature complete (see the article)

we can argue about stability, but we have lots of happy users


> The relative link bugs are [0] and [1],

Thanks. Genuinely asking something here: are relative links actually common in frontend? I've only ever used absolute URLs for static assets. Actually I even made tooling that ensures that static asset paths will always be correct: https://yawaramin.github.io/dream-html/dream-html/Dream_html...

Of course, this kind of tooling exists for many frameworks. But I've never seen frontend frameworks suggest using relative links for static assets, they always seem to put them in a separate subdirectory tree.


Relative links are common in the Microsoft ecosystem; the IIS webserver, in its default configuration, serves websites at a subpath named something like /Our.App/ instead of at /. Frontends often use the <base> tag so that they can refer to assets by relative path, allowing different environments to have different base paths.
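Roughly, using the subpath from that example:

    <!-- Site served at /Our.App/ rather than at the root: -->
    <base href="/Our.App/">

    <!-- Relative references now resolve under the base path: -->
    <link rel="stylesheet" href="assets/site.css">  <!-- /Our.App/assets/site.css -->
    <img src="img/logo.png">                        <!-- /Our.App/img/logo.png -->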

Random developer here: in my code, yes they are common. I tend to write modules so that I can route to them from wherever, reuse them, and move them around as I please. Relative links help greatly with such things.

Besides that, they are a very basic part of the spec, and I consider anything that breaks them to be truly fundamentally broken.


Understood, but how do you deal with static asset caching and cache-busting with version markers or similar? Does your framework/build system/whatever automatically add version markers to static assets that are bundled directly with your module?

I tend to keep statics consolidated and slap middleware on those routes to handle expiration headers and whatnot. Versioning is handled where links are generated; an href="{{ linkfor(asset) }}" style resulting in generated urls complete with content hash tacked on. I almost never construct asset links without some form of link generator function because said functions give me a centralized place to handle things just like this.

(edit: I should clarify that I'm mostly working in Go these days, and all my assets are embedded in the binary as part of compilation. That does make some of this much easier to deal with. Javascript-style build mechanisms tend far too much toward "complex" for my tastes.)


OK, wait, you said earlier that you bundle static content like images in the same module as the feature itself, but now you're saying you keep them consolidated in a separate place? You also said earlier that you commonly used relative links but now you're saying you generate them with a link generator function? I'm confused about what you're actually doing...if you look at the link I posted earlier, you can see how I'm doing it. So...are you using relative links like <img src=dog.png> or not?

I was unclear; I bundle them per-module; each module has an asset route with attached middleware. So for a users module, there will be users/assets or some such route, to which various static-related middleware gets attached.

The link gens inherently accept relative paths (foo) and output relative links with appropriate adjustments (foo.png?v=abcdef), though they can generate fully-qualified links if circumstances warrant (eg I often have a global shared css file for theming, though that's all I can think of offhand other than "/" for back-home links). The module can then be mounted under /bar/blort, or wherever else, and the links remain relative to wherever they are mounted.
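In template-and-output terms, that looks roughly like this (the linkfor helper and hash value are just the placeholders used above):

    <!-- Template, using the linkfor-style helper described above: -->
    <link rel="stylesheet" href="{{ linkfor('users.css') }}">
    <img src="{{ linkfor('avatar.png') }}">

    <!-- Rendered output: relative links with a cache-busting version marker appended: -->
    <link rel="stylesheet" href="users.css?v=abcdef">
    <img src="avatar.png?v=abcdef">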

On occasion I do hardcode links for whatever reason; my rule of thumb is relative links for same level or deeper, or root-anchored if you're referencing a global resource outside the current path, but there aren't many of those in my apps. A rare exception might be something like an icon library, but I tend toward font-based icons.

To put the rule more succinctly, I never use "../foo". Going up and then over in the hierarchy strikes me as a recipe for bugs, but that's just instinct.

I also never use fully-qualified (including hostname) urls to reference my own assets. That's just madness, though I vaguely recall that WordPress loves to do crap like that; there's a reason I don't use it anymore. :)

So the main difference if I understood your link correctly, would be that I would have maybe one or two items in /static/assets by root-anchored urls, and the rest accessed as /path/to/module/assets/whatever (edit: or just assets/whatever from within the module).

Because I'm doing this in go, the module's assets live where the code does and get embedded in a reasonably standard way. I actually tend to use the git commit as my cache buster to avoid having to hash the content, but full hashing can be done as well very easily just by changing the generator functions.

I can then just import the module into an app and call its associated Route() function to register everything, and it just works. Admittedly it's very rare that I reuse modules between apps, but it's easily possible.

For background to my thought process, I originally started using the link gen paradigm long ago to make it easy to switch CDN's on and off or move between them, back before whole-site proxying was considered best-practice. The rest is just a desire for modularity/encapsulation of code. I like to be able to find stuff without having to spelunk, and keeping assets together with code in the source tree helps. :)


Thanks for the explanation!

Sorry, forgot about this bit:

> and which event stopped working

It was `htmx:load`. On 1.x this fired on first page load, which I used to do a bunch of setup of the other little bits of JS I needed, and which would cause that JS to re-setup whenever the page changed.

On 2.x this never fired as far as I could tell. Maybe I got something else wrong, but only downgrading to 1.x seemed to fix it immediately, I didn't investigate further.

I did wonder if there were breaking changes in 2.x, but as far as I could tell from the release notes and documentation there were not.


Maybe I'm missing something here, but JS modules do not require a build step.

* Note to non-JS hackers: JS module symbol scope is per-source file.


JS modules can't be imported with a plain script tag.

Script type=module doesn't work?

Not if you want to support the 0.2% market share that IE 11 has.

That's true, but also a trade-off that is a perfectly valid engineering choice for many or even most teams.

HTMX 2 stopped IE support, so that shouldn't be an issue.

we don't :)

we want to support the traditional script tag w/ a src attribute and nothing else

Why?

idk just like it better that way

Chiming in to say modules are awesome and you shouldn't be so scared.

I'm not scared, I just want people to be able to include htmx on their page the same way, for example, good ol' jQuery is included.

We generate esm and cjs modules for people who want to use htmx in that manner though:

https://github.com/bigskysoftware/htmx/tree/master/dist


you can define import maps in a separate <script> tag and reuse the module name elsewhere
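something like this, for example (the names and paths here are illustrative, not a documented htmx setup):

    <!-- The import map gives a bare module name a URL: -->
    <script type="importmap">
      { "imports": { "my-lib": "/js/my-lib.esm.js" } }
    </script>

    <!-- Any module script can then import it by that name: -->
    <script type="module">
      import { init } from "my-lib"; // resolved via the import map above
      init();
    </script>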

sure, but I wanted the same experience of, say, jquery, where you just drop in a script tag w/ a src attribute and it just works

an aesthetic decision, I suppose


Depending on the import tree depth, it significantly increases latency.

(Which is why bundlers still exist.)


Anything that can be loaded as a stylesheet or a script tag works great with htmx. These are not htmx-branded libraries... because they're just web libraries.


Right, but then you are gluing things together and mixing and matching CSS so they all play together. AirDatepicker here, tomselect there, datatables here, popper there. All of a sudden you've lost a lot of time trying to make things look like one cohesive unit, when you should be working on your app.

Meanwhile, I have v0 generate the whole skeleton for me, with composable reusable components, and just got the API done.


Interestingly I did something similar with Claude, asking it to generate tailwind/HTML components instead. And copy/pasting them as Thymeleaf fragments for a java/springboot + htmx project.


Yeah maybe, if you are using all these ready-made components. But with htmx...maybe you won't need to. Maybe you'll just end up using basic HTML widgets and the result will be simple and fast.


I'm sorry to be a little rude, but the AI-generated cover (the left crab is missing an eye, for starters) is a negative quality indicator. If you weren't willing to take the time to put together a thoughtful cover, why should readers expect the book's content to be assembled any more carefully?


I love how the top comment is a guy literally judging a book by its cover.


A book's cover is a strong signal of its content and quality. You absolutely should judge books by their covers.

This isn't an accident. Publishers put a lot of effort into ensuring covers send accurate signals about the contents of the book. A book with a garbage cover didn't come through the modern publishing process, and it didn't get made by someone who cares enough about their book to put actual effort into the cover. The percentage of books that will be worthwhile with a bad cover is vanishingly small. To a first approximation, it's 0%.

It's very reasonable to judge a book by its cover these days. It's an intentional signal; covers aren't chosen at random. If you intend your book to be taken seriously, you need to keep up with modern standards.


No serious readers judge a book by the cover.


That's absolutely false, but thanks for contributing.


It is pretty amusing, but I actually have to agree. Unless your book is about generative AI, using it for the cover makes me question the effort put into the rest of the book - which is really unfortunate if the author did in fact put a lot of effort into their book!

Yes, don’t judge a book by its cover - but lots of people do anyway.


Don't judge a book by the cover is an advice for us, the readers. It is NOT an advice for the writers or the marketing department.

A tasteful public domain image of a crab or something, hundreds of years old but detailed enough, kind of like one of those O'Reilly books, and then make the whole cover white with some text that stands out... It has all been done before, and IANAL, but I doubt O'Reilly can copyright or trademark that design feeling.

Edit: unless the authors are trying to crowdsource the proofreading and peer review by releasing an early access copy? This cover draws attention from the average reader, who is drawn to comment about the book because there is something to criticize? Maybe this is some kind of advanced 3D chess?


It's questionable advice for readers too. For example, if the cover of a book indicates it is a cheap romance novel, that is a pretty good indication that I won't have any interest in reading it.


The saying itself is from a time before many books had such descriptive covers.


I don’t think that making your book’s cover look indistinguishable from a significant number of other books in the same category is good marketing advice.


Who said "indistinguishable" ?

Choosing to make your book "fit" is often a good call. You're actually going to attract people who know roughly what they want. My mother reads a lot of trash romance, it's not very important who wrote it, but a handsome guy on the cover signals that this is roughly the right book, the attire and surroundings are a vague gesture at the sub-genre, guy in a white coat? Medical. Business suit? Maybe 1980s.

There's no rule against refusing to obey convention, "Twenty Jazz Funk Greats" is, in fact, not Jazz-funk, and indeed didn't have twenty tracks. But nobody working on that album thought "This is a commercial route to success". It's art. Really difficult to listen to art. If you are not making art but have some other purpose you probably do want to signal your intent to people merely browsing.


You could even take the "public domain image" idea and still make something that looks nothing like an O'Reilly book.

For example, here are some images from Library of Congress with crabs in them that I think could make for interesting book covers:

- https://www.loc.gov/pictures/resource/fsa.8a23704/ "Residents of Raceland, Louisiana, eating crabs at crab boil" (1938 Sept.)

- https://www.loc.gov/pictures/resource/jpd.00748/ "Benkeigani to tsubaki" (between 1825 and 1830)

- https://www.loc.gov/pictures/resource/ppmsca.69838/ "Frozen crabs at a restaurant" (January 1954)

- https://www.loc.gov/pictures/resource/highsm.14204/ "The Barking Crab restaurant on the Fan Pier across the Northern Bridge from downtown Boston, Massachusetts" (between 1980 and 2006)

- https://www.loc.gov/pictures/resource/ds.00242/ "Cancer" (latin name for crab, you'll probably want to edit the image to avoid any unfortunate connotations) (1825)

But before picking any of these or other images from the Library of Congress, check the "about this item" link and see in particular the "Rights Advisory", including any links to further details about rights and usage in that field, and keep also in mind things like what is pointed out in one of those links; "Privacy and publicity rights may also apply". IANAL, TINLA.


It also might be a sign that the content of the book is itself AI generated.


Oh please. Don't pretend like this has anything to do with "effort". If the book cover was some generic stock image you wouldn't even blink.


If it was a particularly bad stock image, I would. The only reason I even noticed here is because the image is bizarre and uncanny if you look at it for even a moment.

If the AI produced something indistinguishable from a good stock photo, then no, I would probably not have noticed. But it didn’t, and they still put it on top of their hard work. If the AI image sucks, don’t use it!


I disagree, this is actually a great use case for generative AI. There are many people who are great at writing, coding, hacking, or other talents, but are terrible artists. I know that I fall into this category, and I’m very excited about the prospect of using AI for any graphical assets I need for my projects.

Especially with an ebook, if the content is good, I frankly don’t even care if it even has a cover at all.


I understand, but you should understand that OP's response will be common to your use of AI graphics: it'll be an indicator of little thought or quality control put into it.


As an independent developer or author, every bit of time or money put into superficial things like a book cover is time or money that isn’t used for the content itself.


A comment like this makes a lot of subtle assumptions.

That there's a fixed budget of time/money - exactly X hours/dollars will be allocated to the book, one way or another.

That the improvement to the quality of the content as time/money is invested is something approaching linear - that if exactly X hours/dollars are going to be allocated to the book, the content will be appreciably better if, say, 0.9X is allocated to the content vs, say, 0.8X.

That time and money is fungible: that all time/money that could be invested in the content could be invested in superficial things, and the other way around.


"That there's a fixed budget of time/money"

As an independent developer:

Yes, time is fixed. There are only 24 hours in a day.


A pleasant color and a pleasing font go a long way. You can spend very little time and no money.


You have direct evidence above that the book cover is not a superficial thing. If you don't take this into account, then you're not a serious author.


which begs the question: why spend any time at all including an ai image as part of the cover if it’s a negative signal for some people?


Stock art exists. Still may require some taste in editor, but it's hard to do worse than bad AI art.


Yeah, i agree that, in principle, generative AI should be a good fit for these kinds of things. But, the results are... idk, uncanny. At least to me.

For example, if you were to ask a human artist to draw this image with two crabs, and this artist didn't know how many legs crabs have, what would they do? I think they'd probably google how many legs crabs have, and how they look in general, and at the very least draw two crabs with the same amount of legs.

But these generative models don't know how many legs crabs have, and they are not clever enough to ask, or even count it seems. So you end up with results like these. The main crab has 8 legs (6 + 2 claws), while the little one seemingly has 10 (or at least it has 5 on one side), and it's also missing an eye for some reason.

A human artist can also make mistakes, for sure. But i don't think these kind of mistakes would be expected from an artist.

And i agree with the other commenters that said this reflects poorly on the author of the book. Didn't they care to check the cover for obvious mistakes? Attention to detail is a very important trait on a technical writer.


No image is better than an AI image. It’s demeaning to your work.


Judging books by their covers is underrated.

In sociology most of the results that actually replicate are the ones that are true but people wish they weren't.


Agreed. Whatever you might think of GenAI, using it poorly speaks to a fundamental lack of attention to detail.

Even worse, it makes me question the content itself. If they used Stable Diffusion for the cover image, who's to say they didn't also use an LLM to generate some of the Rust examples in the book itself?

Note: I'm not suggesting that the inherent usage of an LLM to generate Rust code is necessarily bad. What I am suggesting is that they might approach the review of said generated code with the same laissez-faire attitude as they did with the cover image.


Are we sure that the crab missing the left-eye wasn't on purpose? Some meme about bugs or something.

It seems to also be missing left legs, as if it were damaged.

The book seems focused on errors and error handling, maybe a 'bug' on the cover was on purpose.

Not so sure this is just AI artifact.


It is 100% a badly generated AI image. The random partial keyboards are a dead giveaway along with the smeared looking green text on the monitor.


who cares?


apparently a lot of commenters care.

a huge amount of the comments on the book were critical just because of the cover.

