JSON REST APIs are just hypermedia REST APIs with the data compressed. The compression format is JSON, and the dictionary is the set of hypermedia relations previously baked into the client.
It’s 2025; clients don’t need to be generic and able to surf and discover the internet like it’s 2005.
The database is consumed via APIs distributed in two parts: first the client (a lib), second the data: JSON.
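To make the "compression" claim concrete, here is a hypothetical sketch (all names and endpoints invented) of the same resource in its self-describing hypermedia form and in its "dehydrated" JSON form, where the rel dictionary lives in the client:

```javascript
// What a self-describing hypermedia response carries on the wire:
const hypermediaResponse = {
  id: 42,
  status: "shipped",
  links: [
    { rel: "self",   href: "/orders/42" },
    { rel: "cancel", href: "/orders/42/cancel" },
  ],
};

// What the "compressed" JSON API sends instead:
const jsonResponse = { id: 42, status: "shipped" };

// ...because the client already knows the relations (the dictionary):
const clientDictionary = {
  self:   (order) => `/orders/${order.id}`,
  cancel: (order) => `/orders/${order.id}/cancel`,
};

// Rehydrating the JSON against the dictionary recovers the hypermedia form:
function rehydrate(data, dictionary) {
  return {
    ...data,
    links: Object.entries(dictionary).map(([rel, href]) => ({
      rel,
      href: href(data),
    })),
  };
}
```

The trade-off is exactly the one discussed below: the wire format is smaller, but the dictionary is now coupled into every client.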
Maybe just give up the ghost and use a new unambiguous term instead of REST. Devs aren't going to let go of their JSON APIs, as browsers often are not the only, or even main, consumers of said APIs.
Creating frameworks and standards to support "true" RESTful APIs is a noble goal, but for most people it's a matter of semantics as they aren't going to change how they are doing things.
A list of words whose meaning has changed, sometimes even to the opposite of their original meaning:
It seems these two discussions should not be conflated: 1. What RESTful originally meant, and 2. The value of RESTful APIs and when they should and shouldn't be used.
I think "both sides" right now are a bit wrong about the other. Content Negotiation is also an ancient part of REST. User Agent A prefers HTML and User Agent B prefers XML and User Agent C prefers JSON, and all of those are still valid representations of a resource; a good REST API can deliver some or all three, as it has the capability to do so. It's very much in the spirit of REST to provide HTML for your Browsers and JSON for your CLI apps and machine-to-machine calls.
This shouldn't be a "war" between "HTML is the original REST" and "JSON is what everyone means today by REST", this should be a celebration together that if these proposals pass we can do both better together. Let User Agents negotiate their content better again. It's good for JSON APIs if the Browser User Agents "catch up" to more features of REST. The JSON APIs can sometimes better specialize in the things their User Agents need/prefer if they aren't also doing all the heavy lifting for Browsers, too. It's good for the HTML APIs if they can do more of what they were originally intended to do and rely on JS less. Servers get a little more complicated again, if they are doing more Content Negotiation, but they were always that complicated before, too.
REST says "resources" it doesn't say what language those resources are described in and never has. REST means both HTML APIs and JSON APIs. (Also, XML APIs and ASN.1 APIs and Protobuf APIs and… There is no "one true" REST.)
In the near future I'll write a blog about this, but the short answer is that even though more developers use REST incorrectly than not, it's still the term that best communicates our intent to the audience we are trying to reach.
Eventually, I would like that audience to be "everyone," but for the time being, the simplest and clearest way to build on the intellectual heritage that we're referencing is to use the term the same way they did. I benefited from Carson's refusal to let REST mean the opposite of REST, just as he benefited from Martin Fowler's usage of the term, who benefited from Leonard Richardson's, who benefited from Roy Fielding's.
> It's weird that you would argue "people won't change" at the same time as you point out how word meanings change.
"People won't change" does not imply "people don't change"; "I observe change" does not imply "I cause change."
Dante's Paradiso XVII.37–42 (Hollander translation): "Contingent things [...] are all depicted in the Eternal Sight, / yet are by that no more enjoined / than is a ship, moved downstream on a river's flow, / by the eyes that mirror it."
> Also, specific people might not change, but they do retire/die, and new generations might have different opinions.
People change organically, but people can't easily be changed intentionally.
I'm not suggesting that going back to the original meaning is a bad thing, in fact more power to those who are attempting this. I'm just suggesting that instead of moving the mountain, they could just go around it.
RESTful is an oddly-specific term, so I don't see the point of changing the meaning.
Feel free to change the meaning of 'agile' to mean 'whatever' (which is how it's interpreted by 99.99% of the population), but leave things like RESTful alone.
The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API. Hypermedia is completely self-describing when you have human users involved but not when you don't have human users involved. If you insist on HATEOAS as the substrate of the API then you need to give us a schema language that we can use to automate things. Then you can have your hypermedia as part of the UI and the API.
The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication. You can cure that code duplication by just using the API from JavaScript on the user-agent to render the UI from data, and now you're essentially using something like a schema but with hand-compiled codecs to render the UI from data.
Even if you go with hypermedia, using that as your API is terribly inefficient in terms of bandwidth for bulk data, so devs invariably don't use HTML or XML or any hypermedia for bulk data. If you have a schema then you could "compress" (dehydrate) that data using something not too unlike FastInfoSet by essentially throwing away most of the hypermedia, and you can re-hydrate the hypermedia where you need it.
So I think GP is not too far off. If we defined schemas for "pages" and used codecs generated or interpreted from those schemas then we could get something close to ideal:
- compression (though the data might still be highly compressible with zlib/zstd/brotli/whatever, naturally)
- hypermedia
- structured data with programmatic access methods (think XPath, JSONPath, etc.)
The cost of this is: a) having to define a schema for every page, b) the user-agent having to GET the schema in order to "hydrate" or interpret the data. (a) is not a new cost, though a schema language understood by the user-agent is required, so we'd have to define such a language and start using it -- (a) is a migration cost. (b) is just part of implementing in the user-agent.
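A minimal sketch of the dehydrate/rehydrate idea, assuming an invented schema format: the server publishes link templates as a separate, cacheable resource (cost (b) is the one extra GET), and the user-agent expands them against any number of dehydrated payloads:

```javascript
// A "schema" document the server publishes alongside the data.
// The {field} template syntax here is invented for illustration.
const pageSchema = {
  links: [
    { rel: "self",   hrefTemplate: "/articles/{id}" },
    { rel: "author", hrefTemplate: "/users/{authorId}" },
  ],
};

// A dehydrated payload: no hypermedia controls on the wire.
const payload = { id: 7, authorId: 3, title: "On REST" };

// Hydration: expand {field} placeholders against the payload,
// restoring the hypermedia links the schema committed to.
function hydrate(schema, data) {
  const expand = (tpl) =>
    tpl.replace(/\{(\w+)\}/g, (_, field) => String(data[field]));
  return {
    ...data,
    links: schema.links.map((l) => ({ rel: l.rel, href: expand(l.hrefTemplate) })),
  };
}
```

The schema rarely changes, so the user-agent fetches it once and amortizes cost (b) across every payload it hydrates.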
This is not really all that crazy. After all XML namespaces and Schemas are already only referenced in each document, not in-lined.
The insistence on purity (HTML, XHTML, XML) is not winning. Falling back on dehydration/hydration might be your best bet if you insist.
Me, I'm pragmatic. I don't mind the hydration codec being written in JS and SPAs. I mean, I agree that it would be better if we didn't need that -- after all I use NoScript still, every day. But in the absence of a suitable schema language I don't really see how to avoid JS and SPAs. Users want speed and responsiveness, and devs want structured data instead of hypermedia -- they want structure, which hypermedia doesn't really give them.
But I'd be ecstatic if we had such a schema language and lost all that JS. Then we could still have JS-less pages that are effectively SPAs if the pages wanted to incorporate re-hydrated content sent in response to a button that did a GET, say.
SPAs are for humans, but they let you have structured data.
That's the problem here. People need APIs, which means not-for-humans, and so to find an efficient way to get "pages" for humans and APIs for not-humans they invented SPAs that transfer data in not-for-humans encodings and generate or render it from/to UIs for humans. And then the intransigent HATEOAS boosters come and tell you "that's not RESTful!!" "you're misusing the term!!", etc.
Look at your response to my thoughtful comment: it's just a dismissive one-liner that helps no one and which implicitly says "thou shalt have an end-point that deals in HTML and another that deals in JSON, and thou shalt have to duplicate effort". It comes across as flippant -- as literally flipping the finger[0].
No wonder the devs ignore all this HATEOAS and REST purity.
[0] There's no etymological link between "flippant" and "flipping the finger", but the meanings are similar enough.
Yeah, that was too short a response, sorry I was bouncing around a lot in the thread.
The essay I linked to somewhat agrees w/your general point, which is that hypermedia is (mostly) wasted on automated consumers of REST (in the original sense) APIs.
I don't think it's a bad thing to split your hypermedia API and your JSON API:
(Those costs will typically be dwarfed by data store access anyway)
So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.
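One way to picture "a proper model layer": both the hypermedia endpoint and the JSON endpoint call the same model code and differ only in the final representation step. A toy sketch (all names illustrative):

```javascript
// Shared model layer: one place for querying, validation, and
// business rules. Both representations call this.
function getOrder(id) {
  // (stand-in for a real data store lookup)
  return { id, status: "shipped" };
}

// Automation API: stable JSON representation.
function orderAsJson(id) {
  return JSON.stringify(getOrder(id));
}

// Application API: hypermedia representation for the browser.
function orderAsHtml(id) {
  const o = getOrder(id);
  return `<div id="order-${o.id}">Status: ${o.status}</div>`;
}
```

The duplication is confined to the two thin rendering functions; the querying logic is written once.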
> So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.
I've yet to see what I proposed, so I've no idea how it would work out. Given the current state of the world I think devs will continue to write JS-dependent SPAs that use JSON APIs. Grandstanding about the meaning of REST is not going to change that.
I've built apps w/ hypermedia APIs & JSON APIs for automation, which is great because the JSON API can stay stable and not get dragged around by changes in your application.
As far as the future, we'll see. htmx (and other hypermedia-oriented libraries, like unpoly, hotwire, data-star, etc) is getting some traction, but I think you are probably correct that fixed-format JSON APIs talking to react front-ends is going to be the most common approach for the foreseeable future.
most structured-data-to-UI systems I have seen produce pretty bad, generic user interfaces
the innovation of hypermedia was mixing presentation information w/control information (hypermedia controls) to produce a user interface (distributed control information, in the case of the web)
i think that's an interesting and crucial aspect of the REST network architecture
2) where you fetch data from a server and would normally use JS to render it you'd instead have an HTML attribute naming the "schema" to use to hydrate the data into HTML which would happen automatically, with the hydrated HTML incorporated into the page at some named location.
The schema would be something like XSLT/XPath, but perhaps simpler, and it would support addressing JSON/CBOR data.
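A toy sketch of what the user-agent side might do, with invented attribute names and a deliberately simplified JSONPath-ish reference syntax standing in for real XSLT/XPath:

```javascript
// Hypothetical markup the user-agent would act on automatically:
//   <div data-src="/api/users/1" data-schema="/schemas/user"></div>
// It GETs the JSON from data-src, GETs the schema from data-schema,
// and splices the hydrated HTML into the page at that element.

// A toy "schema": an HTML fragment with {$.field} references into
// the JSON payload (invented syntax, not a real standard).
const userSchema = "<p><a href='{$.profileUrl}'>{$.name}</a></p>";

// Hydration step: replace each {$.field} reference with the
// corresponding value from the payload.
function applySchema(schema, data) {
  return schema.replace(/\{\$\.(\w+)\}/g, (_, f) => String(data[f]));
}
```
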
this sounds like client side templating to me (some annotated HTML that is "hydrated" from a server) but attached directly to a JSON api rather than having a reactive model
if you have a schema then you are breaking the uniform interface of REST: the big idea with REST is that the client (that is, the browser) doesn't know or care what a given end point returns structurally: it just knows that it's hypermedia and it can render the content and all the hypermedia controls in that content to the user
the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas
I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.
> if you have a schema then you are breaking the uniform interface of REST: the big idea with REST is that the client (that is, the browser) doesn't know or care what a given end point returns structurally: it just knows that it's hypermedia and it can render the content and all the hypermedia controls in that content to the user
This doesn't follow. Why is rendering one thing that consists of one document versus another thing that consists of two documents so different that one is RESTful and the other is not?
> this sounds like client side templating to me (some annotated HTML that is "hydrated" from a server) but attached directly to a JSON api rather than having a reactive model
I wouldn't call it templating. It resembles more a stylesheet -- that's why I referenced XSLT/XPath. Browsers already know how to apply XSLT even -- is that unRESTful?
> the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas
Nonsense. The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server", it's not coupling anything. It's a compression technique of sorts, and mainly one that allows one to reuse API end-points in the UI.
EDIT: Sending the data and the instructions for how to present it separately is no more non-RESTful than using CSS and XML namespaces and Schema and XSLT are.
How is one response RESTful and two responses not RESTful when the user-agent performs the two requests from a loaded page?
> I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.
You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.
> This doesn't follow. Why is rendering one thing that consists of one document versus another thing that consists of two documents so different that one is RESTful and the other is not?
Two documents (requests) vs one request has nothing to do with anything: typical HTML documents make multiple requests to fully resolve w/images etc. What does bear on whether a system is RESTful is whether an API end point requires an API-specific schema to interact with.
> Browsers already know how to apply XSLT even -- is that unRESTful?
XSLT has nothing to do with REST. Neither does CSS. REST is a network architecture style.
> The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server"...
I guess I'd need to see where the hypermedia controls are located: if they are in the "data" request or in the "html" request. CSS doesn't carry any hypermedia control information, both display and control (hypermedia control) data is in the HTML itself, which is what makes HTML a hypermedia. I'd also need to see the relationship between the two end points, that is, how information in one is consumed/referenced from the other. (Your mention of the term 'schema' is why I'm skeptical, but a concrete example would help me understand.)
If the hypermedia controls are in the data then I'd call that potentially a RESTful system in the original sense of that term, i'd need to see how clients work as well in consuming it. (See https://htmx.org/essays/hypermedia-clients/)
> You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.
When discussing REST I think it's reasonable to link to the dissertation that defined the term. Like Fielding, I regard the uniform interface as the most distinguishing technical characteristic of REST. In as much as a proposed system satisfies that (and the other REST constraints) I'm happy to call it RESTful.
In any event, I think some concrete examples (maybe a gist?) would help me understand what you are proposing.
> Two documents (requests) vs one request has nothing to do with anything: typical HTML documents make multiple requests to fully resolve w/images etc. What does bear on whether a system is RESTful is whether an API end point requires an API-specific schema to interact with.
It's an API-specific schema, yes, but the browser doesn't have to know it because the API-to-HTML conversion is encoded in the second document (which rarely changes). I.e., notionally the browser only deals in the hydrated HTML and not in the API-specific schema. How does that make this not RESTful?
Well, again I'm not 100% saying it isn't RESTful, I would need to see an example of the whole system to determine if the uniform interface (and the other constraints of REST) are satisfied. That's why i asked for an example showing where the hypermedia controls are located, etc. so we can make an objective analysis of the situation. REST is a network architecture and thus we need to look at the entire system to determine if it is being satisfied (see https://hypermedia.systems/components-of-a-hypermedia-system...)
> I think you can profitably split your application API (hypermedia) from your automation API (JSON)
Why split them? Just support multiple representations: HTML and JSON (and perhaps other, saner representations than JSON …) and just let content negotiation sort it all out.
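For illustration, a bare-bones content negotiation routine -- invented for this comment; real servers also handle wildcard subtypes (`text/*`) and specificity rules, which this sketch skips:

```javascript
// Pick the best representation the server can offer, given an
// Accept header. Honors q-values and exact media-type matches only.
function negotiate(acceptHeader, available) {
  const prefs = acceptHeader
    .split(",")
    .map((part) => {
      const [type, ...params] = part.trim().split(";");
      const q = params
        .map((p) => p.trim())
        .filter((p) => p.startsWith("q="))
        .map((p) => parseFloat(p.slice(2)))[0];
      return { type: type.trim(), q: q === undefined ? 1 : q };
    })
    .sort((a, b) => b.q - a.q); // highest preference first
  for (const { type } of prefs) {
    if (type === "*/*") return available[0];
    if (available.includes(type)) return type;
  }
  return null; // 406 Not Acceptable territory
}
```

A browser sending `text/html,application/json;q=0.9` gets HTML; a CLI sending `application/json` gets JSON; same resource, same end-point.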
The "structured data with programmatic access methods" sounds a lot like microformats2 (https://microformats.org/wiki/microformats2), which is being used quite successfully in the IndieWeb community to drive machine interactions with human websites.
> The problem with RESTful requiring hypermedia is that if you want to automate use of the API then you need to... define something like a schema -- a commitment to having specific links so that you don't have to scrape or require a human to automate use of such an API
We already have the schema language; it’s HTML. Stylesheets and favicons are two examples of specific links that are automatically interpreted by user-agents. Applications are free to use their own link rels. If your point is that changing the name of those rels could break automation that used them, in a way that wouldn’t break humans…then the same is true of JSON APIs as well.
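To illustrate "HTML is the schema language": rel/href pairs are already machine-readable commitments that automation can key off. A naive sketch (a real user-agent would use a proper HTML parser, not a regex, and this version assumes `rel` precedes `href` in each tag):

```javascript
// Pull rel -> href commitments out of <link> and <a> tags.
// Standard rels (stylesheet, icon, next, ...) are interpreted by
// browsers today; applications are free to add their own.
function extractRels(html) {
  const rels = {};
  const re = /<(?:link|a)\b[^>]*\brel="([^"]+)"[^>]*\bhref="([^"]+)"/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    rels[m[1]] = m[2];
  }
  return rels;
}
```
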
Like, the flaws you point out are legit—but they are properties of how devs are ab/using HTML, not the technology itself.
> The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication.
What code duplication? If both these APIs use the same data fetching layer, there's no code duplication; if they don't, then it's because the JSON API and the Hypermedia UI have different requirements, and can be more efficiently implemented if they don't reuse each other's querying logic (usually the case).
What you want is some universal way to write them both, and my general stance is that usually they have different requirements, and you'll end up writing so much on top of that universal layer that you might as well have just skipped it in the first place.
I've worked with a HATEOAS database application written in Ruby using Rails. It still managed to have HTML- and JSON-specific code. Its web UI was OK, but not as responsive as a SPA.
> JSON REST APIs are just hypermedia REST APIs with the data compressed. The compression format is JSON, and the dictionary is the set of hypermedia relations previously baked into the client.
Yes.
> It’s 2025; clients don’t need to be generic and able to surf and discover the internet like it’s 2005.
No. Where the client is a user-agent -- a browser sort of application -- it has to be generic.
> The database is consumed via APIs distributed in two parts: first the client (a lib), second the data: JSON.
Yes-ish. If instead of a hand-coded "re-hydrator" library you had a schema whose metaschema is supported by the user-agent, then everything would be better because
a) you'd have less code,
b) need a lot less dev labor (because of (a), so I repeat myself),
c) you get to have structured data APIs that also satisfy the HATEOAS concept.