
I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle. When I see "REST API" I can safely assume the following:

- The API returns JSON

- CRUD actions are mapped to POST/GET/PUT/DELETE

- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec

- There's a decent chance listing endpoints were changed to POST to support complex filters

Like Agile, CI, or DevOps, you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood.



Fielding won the war precisely because he was intellectually incoherent and mostly wrong. It's the "worse is better" of the 21st century.

RPC systems were notoriously unergonomic and at best marginally successful. See Sun RPC, RMI, DCOM, CORBA, XML-RPC, SOAP, Protocol Buffers, etc.

People say it is not RPC, but all the time we write some function in JavaScript like

   const getItem = async (itemId) => { ... }
which does a

   GET /item/{item_id}
and on the backend we have a function that looks like

   Item getItem(String itemId) { ... }
with some annotation that explains how to map the URL to an item call. So it is RPC, but instead of a highly complex system that is intellectually coherent but awkward and makes developers puke, we have a system that's more manual than it could be but has a lot of slack and leaves developers feeling like they're in control. 80% of what's wrong with it is that people won't just use ISO 8601 dates.


When I realized that I was calling openapi-generator to create client-side call stubs on a non-small service-oriented project, I started missing J2EE EJB. And it takes a lot to miss EJB.

I'd like to ask seasoned devs and engineers here: is it the normal industry-wide blind spot where people still crave, and are happy creating, 12 different descriptions of the same thing across remote, client, unit tests, e2e tests, ORM, API schemas, all the while feeling much more productive than <insert monolith here>?


I've seen some systems with a lot of pieces where teams have attempted to avoid repetition and arranged to use a single source of schema truth to generate various other parts automatically, and it was generally more brittle and harder to maintain due to different parts of the pipeline owned by different teams, and operated on different schedules. Furthermore it became hard to onboard to these environments and figure out how to make changes and deploy them safely. Sometimes the repetition is really the lesser evil.


I see, it's also reminiscent of the saying that "microservices" are an organisational solution. It's just that I also see a lot of churn and friction due to incoherent versions and specs not being managed in sync now (some solutions exist or are coming, though).


> it was generally more brittle and harder to maintain

It depends on the system in question, sometimes it's really worth it. Such setups are brittle by design, otherwise you get teams that ship fast but produce bugs that surface randomly in the runtime.


Absolutely, it can work well when there is a team devoted to the schema registry and helping with adoption. But it needs to be worth it to be able to amortize the resources, so probably best for bigger organizations.


> I've seen some systems with a lot of pieces where teams have attempted to avoid repetition and arranged to use a single source of schema truth to generate various other parts automatically, and it was generally more brittle and harder to maintain due to different parts of the pipeline owned by different teams, and operated on different schedules.

I'm not sure what would lead to this setup. For years there have been frameworks that support generating their own OpenAPI spec, and even API gateways that not only take that OpenAPI spec as input for their routing configuration but also support exporting their own.


I keep pining for a stripped-down gRPC. I like the *.proto file format, and at least in principle I like the idea of using code-generation that follows a well-defined spec to build the client library. And I like making the API responsible for defining its own error codes instead of trying to reuse and overload the transport protocol's error codes and semantics. And I like eliminating the guesswork and analysis paralysis around whether parameters belong in the URL, in query parameters, or in some sort of blob payload. And I like having a well-defined spec for querying an API for its endpoints and message formats. And I like the well-defined forward and backward compatibility rules. And I like the explicit support for reusing common, standardized message formats across different specs.

But I don't like the micromanagement of field encoding formats, and I don't like the HTTP/2 streaming stuff that makes it impossible to directly consume gRPC APIs from JavaScript running in the browser, and I don't like the code generators that produce unidiomatic client libraries that follow Google's awkward and idiosyncratic coding standards. It's not that I don't see their value, per se. It's more that these kinds of features create major barriers to entry for both users and implementers. And they are there to solve problems that, as the continuing predominance of ad-hoc JSON slinging demonstrates, the vast majority of people just don't have.


I can write frickin' bash scripts that handle JSON APIs with curl, jq, here quotes and all that.

A lot of people just do whatever comes to mind first and don't think about it so they don't get stuck with analysis paralysis.

   curl -fail
Handling failure might be the real hardest programming problem, ahead of naming and caches and such. It boggles my mind, the hate people have for Exceptions, which at least make you "try" quite literally if you don't want the system to barrel past failures; some seem nostalgic for errno, and others will fight mightily with Either<A,B> or Optional<X> or other monads and wind up just barreling past failures in the end anyway. A 500 is a 500.


I worked at a place that had a really great coding standard for working with exceptions:

1. Catch exceptions from third-party code and from talking to the outside world right away.

2. Never catch exceptions that we throw ourselves.

3. Only (and always) throw exceptions when you're in a state where you can't guarantee graceful recovery. Exceptions are for those exceptional circumstances where the best thing to do is fail fast and fail hard.
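A minimal TypeScript sketch of how that standard can play out in practice (the endpoint and error type here are hypothetical):

    // Rule 1: catch exceptions from third-party code and the outside world right away,
    // translating them into our own domain errors at the boundary.
    async function fetchAccountRaw(id: string): Promise<unknown> {
      try {
        const res = await fetch(`https://api.example.com/accounts/${id}`); // hypothetical endpoint
        return await res.json();
      } catch (err) {
        throw new AccountUnavailableError(id, err);
      }
    }

    // Rule 3: throw only when graceful recovery isn't possible; fail fast and hard.
    class AccountUnavailableError extends Error {
      constructor(readonly accountId: string, readonly reason: unknown) {
        super(`account ${accountId} is unavailable`);
      }
    }

    // Rule 2: exceptions we throw ourselves are never caught inside the app;
    // they propagate to the top-level handler, which logs and aborts the request.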


Brb, I'm off to invent another language independent IDL for API definitions that is only implemented by 2 of the 5 languages you need to work with.

I'm joking, but I did actually implement essentially that internally. We start with TypeScript files as its type system is good at describing JSON. We go from there to JSON Schema for validation, and from there to the other languages we need.


> Brb, I'm off to invent another language independent IDL for API definitions that is only implemented by 2 of the 5 languages you need to work with.

Watch out, OpenAPI is now 3 versions deep and supports both JSON and YAML.


If younger me had been told, "one day kid, you will miss working with XML", I'd have laughed.

YAML made me miss JSON. JSON made me miss XML.


The pattern I observe is that in old industries, people who paid the cost try to come up with a big heavy solution (xml, xsd, xpath), but newcomers will not understand the need and bail onto simpler ideas (json), until they hit the wall and start to invent their own (jsonschema, jquery).

same goes for java vs php/python


Definitely. And often, it's the right call, or the thing wouldn't generate any business value (such as money) at all in a reasonable time.

But boy, what messy spaghetti we get for it, sometimes.

(Invent their own, badly, at first. Sigh.)


I dunno, I remember the holy trinity of $, % and @ sigils in Perl as my first exposure to JSON-like objects which are the real world's answer to S-Expressions because they address the nameless tuple problem

https://www.codeproject.com/Articles/1186940/Lisps-Mysteriou...


Anything I could read to imitate that workflow?


I haven't written anything up - maybe one day - but our stack is `ts-morph` to get some basic metadata out of our "service definition" typescript files, `ts-json-schema-generator` to go from there to JSON Schema, `quicktype-core` to go to other languages.

Schema validation and type generation vary by language. When we need to validate schemas in JS/TS land, we're using `ajv`. Our generation step exports the JSON Schema to a valid JS file, and we load that up with AJV and grab schemas for specific types using `getSchema`.
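For a rough idea of the AJV end of that pipeline, something like this (the schema key and type name are made up for the example):

    import Ajv from "ajv";
    // The JS module exported by our generation step, containing the JSON Schema.
    import { serviceSchema } from "./generated/schemas"; // hypothetical path

    const ajv = new Ajv();
    ajv.addSchema(serviceSchema, "service");

    // Grab the validator for one specific type and use it at the API boundary.
    const validateUser = ajv.getSchema("service#/definitions/User");
    const payload: unknown = { name: "Ada", email: "ada@example.com" };
    if (validateUser && !validateUser(payload)) {
      console.error(validateUser.errors); // schema violations, if any
    }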

I evaluated (shallowly) for our use case (TS/JS services, PHP monolith, several deployment platforms):

- typespec.io (didn't like having a new IDL, mixes transport concerns with service definition)

- trpc (focused on TS-only codebases, not multi language)

- OpenAPI (too verbose to write by hand, too focused on HTTP)

- protobuf/thrift/etc (too heavy, we just want JSON)

I feel like I came across some others, but I didn't see anyone just using TypeScript as the IDL. I think it's quite good for that purpose, but of course it is a bit too powerful. I have yet to put in guardrails that will error out when you get a bit too type happy, or use generics, etc.


Can't thank you enough. I'm gonna try these and see.


One day I hope to publish this as a tool, or at least parts of it. But you know, that won't put food on the table.


It's not that we like it, it's just that most other solutions are so complex and difficult to maintain that repetition is really not that bad a thing.

I was, however, impressed with FastAPI, a Python framework which brought together API implementation, data types and generating swagger specs in a very nice package. I still had to take care of integration tests by myself, but with pytest that's easy.

So there are some solutions that help avoid schema duplication.


FastAPI + SQLModel does remove many layers, that is true, but you still have other services requiring lots of boilerplate.


My experience is that all of these layers have identical data models when a project begins, and it seems like you have a lot of boilerplate to repeat every time to describe "the same thing" in each layer.

But then, as the project evolves, you actually discover that these models have specific differences in different layers, even though they are mostly the same, and it becomes much harder to maintain them as {common model} + {differences}, than it is to just admit that they are just different related models.

For some examples of very common differences:

- different base types required for different languages (particularly SQL vs MDW vs JavaScript)

- different framework or language-specific annotations needed at different layers (public/UNIQUE/needs to start with a capital letter/@Property)

- extra attached data required at various layers (computed properties, display styles)

- object-relational mismatches

The reality is that your MDW data model is different from your database schema and different from your UI data model (and there may be multiple layers in any of these). Any attempt to force them to conform, or to keep them automatically in sync, will fail unless you add to it all of the logic of those differences.


Anybody ever worked with model-driven methodologies? The central model is then derived into the other definitions.


Having 12 different independent copies means nobody on your 30 people multi-region team is blocked.


I remember getting my hands on a CORBA specification back as a wide-eyed teen thinking there is this magical world of programming purity somewhere: all 1200 pages of it, IIRC (not sure what version).

And then you don't really need most of it, and one thing you need is so utterly complicated, that it is stupid (no RoI) to even bother being compliant.

And truly, less is more.


I'm not super familiar with SOAP and CORBA, but how is SOAP any more coherent than a "RESTful" API? It's basically just a bag of messages. I guess it involves a schema, but that's not more coherent imo, since you just end up with specifics for every endpoint anyways.

CORBA is less "incoherent", but I'm not sure that's actually helpful, since it's still a huge mess. You can most likely become a lot more proficient with RESTful APIs and be more productive with them, much faster than you could with CORBA. Even if CORBA is extremely well specified, and "RESTful" is based more on vibes than anything specific.

Though to be clear I'm talking about the current definition of REST APIs, not the original, which I think wasn't super useful.


SOAP, CORBA and such have a theory for everything (say, authentication). It's hard to learn that theory, you have to learn a lot of it to be able to accomplish anything at all, you have to deal with build and tooling issues, but if you look closely there will be all sorts of WTFs. Developers of standards like that are always implementing things like distributed garbage collection and distributed transactions, which are invariably problematic.

Circa 2006 I was working on a site that needed to calculate sales tax and we were looking for an API that could help with that. One vendor used SOAP, which would have worked if we were running ASP.NET, but we were running PHP. In two days I figured out enough to reverse engineer the authentication system (the docs weren't quite enough to make something that worked) but then I had more problems to debug. A competing vendor used a much simpler system and we had it working in 45 minutes -- auth is always a chokepoint, because if you can't get it working 100% you get 0% of the functionality.

HTTP never had an official authentication story that made sense. According to the docs there are Basic, Digest, etc. Have you ever seen a site that uses them? The world quietly adopted cookie-based auth that was an ad-hoc version of JSON Web Tokens; once we got an intellectually coherent spec, snake oil vendors could spam HN with posts about how bad JWT is because... it had a name and numerous specifics to complain about.

Look at various modern HTTP APIs and you see auth is all across the board. There was the time I did a "shootout" of roughly 10 visual recognition APIs; I got all of them working in 20-30 minutes, except for Google, where I had to install a lot of software on my machine, trashed my Python setup, and struggled mightily because... they had a complex theory of authentication which was a barrier to doing anything at all.

Worse is better.


Agree with most of what you said, except about HTTP Basic auth. That is used everywhere - take a look at any random API and there is roughly 90% chance that this is the authentication mechanism used. For backends which serve a single frontend maybe not so much, but still in places.


> That is used everywhere - take a look at any random API and there is roughly 90% chance that this is the authentication mechanism used.

I have no idea where you got that idea from. I've yet to work on a project where services don't employ a mix of bearer token authentication schemes and API keys.


I've found recently that CORS doesn't work with it, which kills it for a lot of usecases.


> Have you ever seen a site that uses them?

I lost the thread...are we talking websites or APIs?

Both use HTTP, but those are pretty different interfaces.


What RPC mechanisms, in your opinion, are the most ergonomic and why?

(I have been offering REST'ish and gRPC in software I write for many years now, with the REST'ish API generated from the gRPC APIs. I'm leaning towards dropping REST and only offering gRPC, mostly because the generated clients are so ugly.)


Just use gRPC or ConnectRPC (which is basically gRPC but over regular HTTP). It's simple and rigid.

REST is just too "floppy", there are too many ways to do things. You can transfer data as a part of the path, as query parameters, as POST fields (in multiple encodings!), as multipart forms, as streaming data, etc.


Just not in C++ code. gRPC has a bajillion dependencies, and upgrades are a major pain. If you have a dedicated build team and they are willing to support this - sure, go ahead and use it.

But if you have multiple targets, or unusual compilers, or don't enjoy working with build systems, stay away from complex stuff. Sure, REST may need some manual scaffolding, but no matter what your target is, there is a very good chance it has JSON and HTTP libs.


People get stuff done despite all that.


I'd agree with your great-grandparent post... people get stuff done because of that.

There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards that sloppyREST has casually dispatched (pun fully intended) in the real world. After some 30+ years of highly prescriptive RPC mechanisms, at some point it becomes time to stop waiting for those things to unseat "sloppy" mechanisms and it's time to simply take it as a brute fact and start examining why that's the case.

Fortunately, in 2025, if you have a use case for such a system, and there are many many such valid use cases, you have a number of solid options to choose from. Fortunately sloppyREST hasn't truly killed them. But the fact that it empirically dominates it in the wild even so is now a fact older than many people reading this, and bears examination in that light rather than casual dismissals. It's easy to list the negatives, but there must be some positives that make it so popular with so many.


> There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards

Care to list them? REST mania started around the early 2000s, and at that time there was only CORBA available as a cross-language portable RPC. Microsoft had DCOM.

And that was it. There was almost nothing else.

It was so bad that ZeroC priced their ICE suite based on a PERCENTAGE OF GROSS SALES: https://web.archive.org/web/20040603094344/http://www.zeroc.... Their ICE suite was basically an RPC with a human-designed IDL and non-crazy bindings for C/C++/Java.

Then the situation got WORSE when SOAP came.

At this point, anything, literally anything, that didn't involve XML was greeted with enthusiasm.


I don't just mean the ones that existed at the time of the start of REST. I mean all the ones that have come up since then as well and failed to displace it.

Arguably the closest thing to a prescriptive winner is laying OpenAPI on top of REST APIs.

Also, REST defined as "A vaguely HTTP-ish API that carries JSON" would have to be put later than that. Bear in mind that even after JSON was officially "defined" it's not like it instantly spread everywhere. I am among the many people who reconstructed something like it because we didn't know about it yet, even though it was nominally years old by that point. It took years to propagate out. I'd put "REST as we are talking about it" at the late 2000s at the earliest for when it was really popular, and only into the 2010s for when you started expecting people to mean that when they said "Web API".


> I mean all the ones that have come up since then as well and failed to displace it.

They won inside large companies: Coral in Amazon, Protobufs/gRPC in Google, Thrift in Facebook, etc. And they are slowly spreading outside of them.

OpenAPI is indeed an attempt to bring some order into the HTTP RPC world, and it's pretty successful. I'm pretty sure all the APIs that I used lately were based on OpenAPI descriptions.

So the trend is clear: move away from loosely-defined HTTP APIs into strict RPC frameworks with code generation because this is a superior approach. But once you start doing it, HTTP becomes a hindrance, so alternatives like gRPC are gaining popularity.

> Also, REST defined as "A vaguely HTTP-ish API that carries JSON" would have to be put later than that.

Ruby on Rails came out in 2005, and Apple shipped it in 2006. REST-ful APIs were one of its major selling points ( https://web.archive.org/web/20061020014807/http://manuals.ru... ).

AWS S3 API, designed around the same time, also was fully REST-ful. This was explicitly one of its design goals, and it was not really appreciated by most people.


Yes, I agree with all of that stuff about using more structure in larger companies.

My meta point is that it is easy for programmers to come to the conclusion that all that should exist is the stuff that large companies use, as I see so many people believe, but if you try to model the world on that assumption you end up very frustrated and confused by how the real world actually works. You can't push a highly prescriptive, very detailed, high up-front-work methodology out on everyone. Not because it's a bad idea per se, or because it "wouldn't work", but because you literally can't. You can't force people to be "better" programmers than they are by pushing standards on them.

My gut leans in the direction of better specifications and more formal technologies, but I can't push that on my users. It really needs to be a pull operation.


> You can't force people to be "better" programmers than they are by pushing standards on them.

Oh, for sure. A company can just mandate something internally, whether it's a good idea or not. But superior approaches tend to slowly win out on merit even in the wider world. Often by standardizing existing practices, like OpenAPI did.

And I believe that strict prescriptive APIs with code generation are superior. This is also mirrored in the dynamic vs. static typing debate in languages. I remember how dynamic languages were advertised in the early 2000s as more "productive" than highly prescriptive C++/Java.

But it turned out to be a mistake, so now even dynamic languages are gaining types.


> Care to list them?

From the top of my head, OData.

https://www.odata.org/


This is a recent project. REST happened basically in the environment where your choices were CORBA, DCOM, SOAP and other such monstrosities.

Of course, REST won handily. We're not in this environment anymore, thankfully, and REST now is getting some well-deserved scrutiny.


> This is a recent project.

OData officially started out in 2007. Roy Fielding's thesis was published in 2000.


So it was a contemporary of Protobufs, Cap’n Proto, and other frameworks. Facebook had Thrift, Amazon had Coral, and so on.

They appeared almost simultaneously, for the very same reason: REST by itself is too vague and unreliable.


I think all of them came a bit later. And I do remember Thrift. With regret.


People got things done with flint axes too. It isn't really a useful argument.


I mean... I used to get stuff done with CORBA and DCOM.

It's the question of long-term consequences for supportability and product evolution. Will the next person supporting the API know all the hidden gotchas?


The critical problem with gRPC is that it uses protocol buffers.

Which are...terrible.

Example: structured schema, but no way to require fields.


With Protobuf this is a conscious decision to avoid back-compat issues. I'm not sure if I like it.


That's exactly how these systems fail in the marketplace. You make one decision that's good for, say, 50% of cases but disqualifying for 50% of cases and you lose 50% of the market.

Make 5 decisions like that and you lost 31/32 of the market.


Infra teams like it, app devs don't like it.


What indicates that to you?


I’m a dev and I like it.


Well the competition is REST which doesn’t have a schema or required fields, so not much of a problem.


> Well the competition is REST which doesn’t have a schema or required fields, so not much of a problem.

A vague architecture style is not competition to a concrete framework. At best, you're claiming that the competition to gRPC is rolling your own ad-hoc RPC implementation.

What I expect to happen now is an epiphany. Why do most developers look at tools like gRPC and still decide it's a far better option to roll their own HTTP-based RPC interface? I mean, it's a rational choice for most. Think about that for a moment.


Some people complain about that, but I have yet to see anyone demonstrate that this is an actual problem. Show me the scenario where this is a show stopper.

You have all the permutations that sail under the name "REST" to some degree, where there seem to be no rules and everyone does something different. And then you have an RPC mechanism that is about two orders of magnitude tighter, and people complain about not having required fields? How? Why? What are they on about?

I mean, if you write validation code for every type, by hand, you will probably still have to do less overall work than for REST'ish monstrosities. But since you have a lot more regularity, you can actually generate this code. Or use reflection.

How much time do people really spend on their interface types? After the initial design I rarely touch them. They're like less than a percent of the overall work.


> REST is just too "floppy", there are too many ways to do things.

I think there is some degree of confusion in your reply. You're trying to compare a framework with an architecture style. It's like comparing, say, OData with rpc-over-HTTP.


In practical reality the distinction is mostly, if not completely, without a meaningful difference. The words "practical" and "meaningful" being key. The distinction only has relevance if one engages in pedantry. Or possibly some form of academic self-pleasuring.

I'm aware this is an unappealingly rustic reality, but it is nonetheless the reality experienced by most.

Besides in the practical world we are able to observe, REST isn't even an architectural style: it is several architectural styles multiplied by every possible permutation of how you address a dozen or more different concerns. Necessitating disambiguation whenever you talk about it. First to state the obvious, that it isn't really what Fielding described, then on to communicating what vector describes your particular permutation of choices.

It's okay. We don't need to pretend any of us care about REST beyond as an interesting academic exercise.


You can mess up grpc just as much. Errors are a good place to start.


Could you elaborate?


Wait until you hear about errors in REST...


What about errors in REST? It's HTTP status codes, and implementations are free to pick whatever approach they want for response documents. Some frameworks default to using Problem Details responses, but no one forces that.


You can't rely on them because they can come from middleboxes (load balancers, proxies, captive portals in hotels, etc.).

So you can't rely on having structured errors for common codes such as 401/403/404, it's very typical to get unstructured text in payloads for such errors. Not a few REST bindings just fail with unhelpful serialization exceptions in such cases.


I don't see the point of ConnectRPC.


Amen. Particularly ISO8601.


I always thought that a standard like ISO 8601 which always stores the date and time in UTC but appends the local time zone would be beneficial.


Sometimes you need your timestamps to be in a named timezone. If I have a meeting at 9am local time next month, I probably want it to still be at 9am even if the government suddenly decided to cancel daylight time.


There are a few things you want from dates. 1932-04-12 sorts lexically, 04/12/1932 doesn't. So long as you don't use a timezone, or always use the same timezone (especially Z), you get this nice property with ISO 8601, which makes it better than the alternatives. Once you include timezones you get into all sorts of problems, such as dates (as opposed to date-times) only having a partial ordering, as the day here in New York starts an hour earlier than in Chicago. In an extreme case, the Pearl Harbor attack was launched the day after it was executed.

At some point you need real time-aware libraries, and whatever language you use, they've been through several iterations of them (JavaScript Date, moment, dayjs, ...) because they got it wrong the first time and probably the second time too.

With ISO 8601 it is easy to get the yyyy, yyyy-mm, hh and other parts you might want with primitive tools (awk). Getting the day of the week or the time involved is not hard, which gets you to the chronological Rosetta stone

https://en.wikipedia.org/wiki/Julian_day

which is a multiplier and an offset away from Unix time, except for all those leap seconds and whatnot. With Unix timestamps comparison is easy and differences are easy, and even knowing it is Thorsday is easy; they don't sort as strings, but GNU sort has a -n option, the only trouble is it is a bunch of twisty little numbers that look alike.
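A quick illustration of that lexical-sort property, assuming all timestamps share the same zone (Z here):

    // ISO 8601 timestamps in one fixed zone sort chronologically as plain strings,
    // no date parsing required.
    const stamps = [
      "2025-07-10T09:48:27Z",
      "1932-04-12T00:00:00Z",
      "1999-12-31T23:59:59Z",
    ];
    console.log([...stamps].sort());
    // => [ "1932-04-12T00:00:00Z", "1999-12-31T23:59:59Z", "2025-07-10T09:48:27Z" ]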


unless the customer you're meeting is in another timezone where the government didn't cancel daylight time


Exchange/GMail/etc. already has this problem/feature. Their response is simple: Follow the organiser's timezone. If it's 9am on the organiser's calendar, it will stay at 9am on the organiser's calendar. Everyone else's appointment will float to match the organiser.


I don't think I ever needed something like that... Since most cases don't need local time zone, why not keep two separate fields?


It's a delimited string. There are many fields within that string already.

    "2025-07-10T09:48:27+01:00" 
That contains, by my quick glance, at least 8 fields of information. I would argue the one field it does not carry but probably should is the _name_ of the timezone it is for.


ISO8601 is really broad with loads of edge cases and differing versions. RFC 3339 is closer, but still with a few quirks. Not sure why we can't have one of these that actually has just one way of representing each instant.

Related: https://ijmacd.github.io/rfc3339-iso8601/


I love how that's live!

Since ISO 8601 costs 133 CHF I suspect hardly anybody has actually read it, I think if you wanted something that supports all the weird stuff you might find somebody wrote it in 390 assembly.


That would be solved if JSON had a native date type in ISO format.


JSON doesn’t really have data types beyond very simple ones


> JSON doesn’t really have data types beyond very simple ones

What do you think primitive types are supposed to be?


I guess my point was that something like an ISO 8601 date would be beyond the scope of a built-in data type given JSON's philosophy of a minimal spec. It's up to the end user to define types like that.


The below type definition (TS) fits the ECMA schema for JSON:

    type JSON = 
      string |
      number |
      boolean |
      null |
      JSON[] |
      {[name: string]: JSON}


You didn't answer my question.


I'm not the person you asked.


> Fielding won the war

It’s a bit odd to say fielding “won the war” when for years he had a blog pointing out all the APIs doing RPC over HTTP and calling it REST.

He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.

If that’s what you call victory, I guess Marx can rest easy.


> He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.

I'm not sure whether the "name was stolen" or whether the zealot concept just never got any traction in production environments due to all the problems it creates.


> I'm not sure whether the "name was stolen" or whether the zealot concept just never got any traction in production environments due to all the problems it creates.

That is a false dichotomy. Fielding gave a name to a specific concept / architectural style, the concept got ignored (rightly or wrongly, doesn’t matter) while the name he coined got recycled for something essentially entirely unrelated.


I mean, HTTP is an RPC protocol. It has methods and arguments and return types.

What I object to about eg xml-rpc is that it layers a second RPC protocol over HTTP so now I have two of them...


> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.

Why do people feel compelled to even consider it to be a battle?

As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.

Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value in citing theses as if they were Monty Python skits?

[1] https://en.wikipedia.org/wiki/Richardson_Maturity_Model


For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether the APIs are "RESTful", with no practical advantages, and then having to steer the conversation--and their motivation--towards more useful things without alienating them. It's tiresome.


It was buried towards the bottom of the article, but the reason, to me:

Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

Of course, OpenAPI (and perhaps to some extent now AI) also means that clients don't need to be written, they are just generated.

However it is important perhaps to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general yet simple and comprehensive system in comparison.

Of course, you don't need any of this. So people built the APIs they did need, which were not RESTful but had an acronym that their bosses thought sounded better than SOAP, and the rest is history.


> Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.

That was the theory, but it was never true in practice.

The oft-made comparisons to the browser really missed the mark. The browser was driven by advanced AI wetware.

Given the advancements in LLMs, it's not even clear that RESTish interfaces would be easier for them to consume (say vs. gRPC, etc.)


Then let developer-Darwin win and fire those people. Let the natural selection of the hiring process win against pedantic assholes. The days are too short to argue over issues that are not really issues.


Can we just call them HTTP APIs?


Defining media types seems right to me, but what ends up happening is that you use swagger instead to define APIs and out the window goes HATEOAS, and part of the reason for this is just that defining media types is not something people do (though they should).

Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
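A rough sketch of that convention with an Express-style handler (the "things" resource and its lookup are hypothetical):

    import express from "express";

    const app = express();
    app.use(express.json());

    // Hypothetical in-memory lookup standing in for a real data layer.
    const things: Record<string, { id: string; ownerId: string; name: string }> = {};
    const findThing = async (id: string) => things[id];

    // Obvious CRUD mapping: GET reads a resource addressed by its URI.
    app.get("/things/:id", async (req, res) => {
      const thing = await findThing(req.params.id);
      if (!thing) {
        // Standard status code, with more error detail embedded in the JSON.
        return res.status(404).json({ error: "not_found", detail: `no thing ${req.params.id}` });
      }
      // URI local-parts embedded in the JSON so clients can follow them.
      res.json({ ...thing, self: `/things/${thing.id}`, owner: `/users/${thing.ownerId}` });
    });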


> (...) and part of the reason for this is just that defining media types is not something people do (...)

People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API where a new endpoint is added to serve the new resource.

In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients if they asked for them by passing the media type in the Accept header. That is all fine and dandy, except this forces endpoints to support an ever more complex content negotiation scheme that no backend framework comes close to supporting, and this brings absolutely no improvement in the way clients are developed.
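For concreteness, this is roughly what that negotiation looks like from the client side (the vendor media type is invented for the example):

    // Ask the same endpoint for a specific representation via the Accept header.
    const res = await fetch("/accounts/42", {
      headers: { Accept: "application/vnd.example.account.v2+json" },
    });
    const account = await res.json();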

So why bother?


>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.

Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to the client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to the server, where it belongs. I have built plenty of them; it does require a certain mindset, but it does make many things easier. What problems are you talking about?


complexity


Backend-only complexity and verbosity would be a more correct description.


We should probably stop calling the thing that we call REST, REST and be done with it - it's only tangentially related to what Fielding tried to define.


> We should probably stop calling the thing that we call REST (...)

That solves no problem at all. We have the Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.

None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.

Ultimately it's all about nitpicking.


> Why do people feel compelled to even consider it to be a battle?

Because words have specific meanings. There’s a specific expectation when using them. It’s like if someone said “I can’t install this app on my iPhone” but then they have an android phone. They are similar in that they’re both smartphones and overall behave and look similar, but they’re still different.

If you are told an api is restful there’s an expectation of how it will behave.


Words derive their meaning from the context in which they are (not) used, which is not fixed and often changes over time.

Few people actually use the word RESTful anymore, they talk about REST APIs, and what they mean is almost certainly very far from what Roy had in mind decades ago.

People generally do not refer to all smartphones as iPhones, but if they did, that would literally change the meaning of the word. Examples: Zipper, cellophane, escalator… all specific brands that became ordinary words.


> If you are told an api is restful there’s an expectation of how it will behave.

And today, for most people in most situations, that expectation doesn’t include anything to do with HATEOAS.


> but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.

Only because we never had the tools and resources that, say, GraphQL has.

And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well


> Only because we never had the tools and resources that, say, GraphQL has.

GraphQL promised to solve real-world problems.

What real world problems does HATEOAS addresses? None.


GraphQL was "promising" something because it was a thing by a single company.

HATEOAS didn't need to "promise" anything since it was just describing already existing protocols and capabilities that you can see in the links I posted.

And that's how you got POST-only GraphQL which for years has been busily reinventing half of HTTP


I’m with you. HATEOAS is great when you have two independent (or more) enterprise teams with PMs fighting for budget.

When it’s just yours and your two pizza team, contract-first-design is totally fine. Just make sure you can version your endpoints or feature-flag new API’s so it doesn’t break your older clients.


> Should REST without HATEOAS classify as REST? Why not?

Because what got backnamed HATEOAS is the very core of what Fielding called REST: https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...

Everything else is window dressing.


> Why do people feel compelled to even consider it to be a battle?

Because September isn't just for users.


HATEOAS adds lots of practical value if you care about discoverability and longevity.


Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.

HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"


You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows about the semantics of the operations and their logical names, so when the UI gets the object from the server it can simply check if certain operations are available, instead of encoding the permission checking on the client side. This is the discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.


> You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows about the semantics of the operations and their logical names, so when the UI gets the object from the server it can simply check if certain operations are available, instead of encoding the permission checking on the client side.

Have you ever heard of HTTP's OPTIONS verb?

https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...

Follow-up trick question: how come you never heard of it and still managed quite well to live without it?


Maybe you should reconsider the way you ask questions on this forum. Your tone is not appropriate and the question itself just demonstrates that you don't understand this topic.

Yes, I'm aware of this header and know the web standards well enough.

In a hypermedia API you communicate to the client the list of all operations in the context of the resource (note: not ON the resource), which includes not only basic CRUD but also operations on adjacent resources (e.g. on a user account you may have an operation for sending a message to this user). Yes, in theory one could use OPTIONS with a non-standard response body to communicate such operations that cannot be expressed in plain HTTP verbs in the Allow header.

However such a solution is not practical, because it requires an extra round trip for every resource. There's a better alternative, which is to provide the list of operations with the resource using one of the common standards - HAL, JSON-LD, Siren etc. The example in my other comment in this thread is based on HAL. If you wonder what that is, look no further than Spring - it has supported HAL APIs out of the box for quite a long time. And of course there's an RFC draft and a Wikipedia article (https://en.wikipedia.org/wiki/Hypertext_Application_Language).


This is actually what we do at [DAYJOB] and it's been working well for over 12 years. Like any other kind of interface indirection it adds the overhead of indirection for the benefit of being able to change the producer's side of the implementation without having to change all of the consumers at the same time.


That's actually an interesting take, thank you.


How does the UI check if certain operations are available?


It's literally in the server response:

    {
      ... resource model ...
      "_links": {
        "delete": { "href": "." }
      }
    }

In this example you receive the list of permitted operations embedded in the resource model. href="." means you can perform this operation on the resource's self link.
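On the client that check stays small; a sketch following the HAL-style shape above:

    // The server only includes operations this user may perform, so the UI just
    // looks for the link instead of re-implementing the permission rules.
    type Links = { [rel: string]: { href: string } };
    interface Resource { _links?: Links }

    function canDelete(resource: Resource): boolean {
      return Boolean(resource._links?.["delete"]);
    }
    // e.g. render the delete button only when the link is present.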


The promise of REST and HATEOAS was best realized not by building RESTful apps like, say, "my airline reservation app", but by building a programming system, spiritually like HTTP + HTML, in which you'd be able to declaratively specify applications, of which "my airline reservation app" could be one and "my sports gambling service" could be another. So some smart person would invent a new application protocol with rich semantics as you did above, a new type of user agent installed on desktops would understand how to present them to the user, and the app on the server would just assemble the resources in this rich format, directing users to their choices through the states of the program.

So that never got done (because it's complex) and people started building apps like "my airline reservation app", but then realized that to build that domain app you don't need all the abstraction of a full REST system.


Oh, interesting. So rather than the UI computing what operations should be allowed currently by, say, knowing the user's current role and having rules baked into it about the relationship between role and UI widgets, the UI can compute what should be on or off simply from explicit statements of capability from the server.

I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... But a full analysis of bandwidth to the client would have to factor in that you have to ship over a whole framework to implement those rules and keep those rules synchronized between the client and server implementations.


I'd suggest that bandwidth optimization should happen when it becomes critical, and that the presence of hypermedia controls be toggled via a feature flag or header. This way the frontend becomes simpler, so FE dev speed and quality improve, but the backend becomes more complex. The main problem here is that most backend frameworks support RMM level 2, and hypermedia controls require a different architecture to make server code less verbose. Unfortunately REST wasn't understood well, so full support for it was never a focus of the open source community.



Or probably just an Allow header on a response to another query (e.g. when fetching an object, server could respond with an Allow: GET, PUT, DELETE if the user has read-write access and Allow: GET if it’s read-only).
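A rough client-side sketch of that idea (the endpoint is hypothetical):

    // Fetch the object and derive the permitted operations from the Allow header.
    const res = await fetch("/objects/42");
    const allowed = (res.headers.get("Allow") ?? "")
      .split(",")
      .map((verb) => verb.trim().toUpperCase());
    const readOnly = !allowed.includes("PUT") && !allowed.includes("DELETE");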


That’s a neat idea actually, I think I’ll need to read up on the semantics of Allow again…. There is no reason you couldn’t just include it with arbitrary responses, no?


I don’t see why not!


I always thought soooo many REST implementations and explainers were missing a trick by ignoring the OPTIONS verb, it seems completely natural to me, but people love to stuff things inside of JSON.


It’s something else. List of available actions may include other resources, so you cannot express it with pure HTTP, you need a data model for that (HAL is one of possible solutions, but there are others)


With HATEOAS you're supposed to return the list of available actions with the representation of your state.

Neo4j's old REST API was really good about that. See e.g. get node: https://neo4j.com/docs/rest-docs/current/#rest-api-get-node


That API doesn't look like a REST level 3 API. For example, there's an endpoint to create a node. It is not referenced by the root or anywhere else. The GetNode endpoint does include some traversal links in the response, but those links are part of the domain model, not part of the protocol. HAL does offer a protocol by which you enhance your domain model with links with semantics and additional resources.


I'm not saying it's perfect, but it's really good, and you could create a client for it in an evening.



It is interesting to me that GraphQL would be in "the swamp of POX," mostly because personal experience was that shifting from hand-built REST to GraphQL solved a lot of problems we had. Mostly around discovery and composition; the ability to sometimes ask for a little data and sometimes a lot at the same endpoint is huge, and the fact that all of that happens under the same syntax as opposed to smearing out such controls over headers, method, URI, and query params decreased cognitive load.

Perhaps the real issue was that XML is awful and a much thinner resource representation simplifies most of the problems for developers and users.



> If it's for robots, then _maybe_ there's some value...

Nah, machine readable docs beat HATEOAS in basically any application.

The person that created HATEOAS was really not designing an API protocol. It's a general use content delivery platform and not very useful for software development.


The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples:

https://news.ycombinator.com/item?id=44509745


For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.


And that's fine, but then you're doing RPC instead of REST and we should all be clear and honest about that.


I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:

* If the API is available over HTTP then the only verb used is POST.

* The API is exposed on a single URL and the `method` is encoded in the body of the request.


It is true, if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it is a standard / familiar OpenAPI style json-over-http style API but will reserve a 5% probability that it is actually HATEOAS and have to deal with that. I'd say, if you are doing Roy Fielding certified REST / HATEOAS it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.


What would it take for you to update your assumptions?


People in the real world referring to "REST" APIs, the kind that use HTTP verbs and have routes like /resource/id, as RPC APIs. As it stands, in the world outside of this thread nobody does that.

At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.


LLMs also appear to have an easier time consuming it (not surprisingly.)


To me, the most important nuance really is that just like "hypermedia links" (encoded as different link types, either with the Link HTTP header or within the returned results) are "generic" (think of that "activate" link), so is REST as done today: if you messed up and the proper action should not be "activate" but "enable", you are in no better position than having to change from /api/v1/account/ID/activate to /api/v2/account/ID/enable.

You still have to "hard code" somewhere what action anything needs to do over an API (and there is more missing metadata, like icons, translations for action description...).

Mostly to say that any thought of this approach being more general is only marginal, and really an illusion!


While I ask people whether they actually mean REST according to the paper or not, I am one of the people who refuse to just move on. The reason being that the mainstream use of the term doesn’t actually mean anything, it is not useful, and therefore not pragmatic at all. I basically say “so you actually just mean some web API, ok” and move on with that. The important difference being that I need to figure out the peculiarities of each such web API.


>> The important difference being that I need to figure out the peculiarities of each such web API

So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.


op's article could've been plucked from 2012 - this is one of my favorite rest rants from 2012: https://mikehadlow.blogspot.com/2012/08/rest-epic-semantic-f...

..that was written before swagger/openAPI was a thing. now there's a real spec with real adoption and real tools and folks can let the whole rest-epic-semantic-fail be an early chapter of web devs doing what they do (like pointing at remotely relevant academic paper to justify what they're doing at work)


So you enjoy being pedantic for the sake of being pedantic? I see no useful benefit either from a professional or social setting to act like this.

I don't find this method of discovery very productive, and often, regardless of meeting some standard in the API, the real peculiarities are in the logic of the endpoints and not the surface.


I can see a value in pedantry in a professional setting from a signaling point of view. It's a cheap way to tell people "Hey! I'm not like those other girls, I care about quality," without necessarily actually needing to do the hard work of building that quality in somewhere where the discerning public can actually see your work.

(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)

You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.


>> "Hey! I'm not like those other girls, I care about quality,"

OMG. Pure gold!


What some people call pedantic, others may call precision. I normally just call the not-quite-REST API styles as simply "HTTP APIs" or even "RPC-style" APIs if they use POST to retrieve data or name their routes in terms of actions (like some AWS APIs).


Like all things in life, it's about balance. If you say things like the person I replied to says he does, you are ultimately creating friction for absolutely no gain. Hence why I said being pedantic for the sake of being pedantic, or in other words, being difficult for no good reason. There is a time and place for everything, but over a decade plus of working and building many different APIs I see no benefit.

I cannot even recall a time where it caused me enough issues to even think about it later on; the real peculiarities are in the business logic. I have had moments where I thought something was strange in an Elasticsearch API, but again it was of no consequence.


REST is pretty much impossible to adhere to for any sufficiently complex API and we should just toss it in the garbage


100%. The needs of the client rule, and REST rarely meets the challenge. When I read the title, I was like "pfff", REST is crap to start with, why do I care?


REST means, generally, HTTP requests with json as a result.


It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id/child/:child_id`.

It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.


> It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id`.

No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.


>> /things/:id/child/:child_id

It seems that nesting isn't super common in my experience. Maybe two levels if completely composite but they tend to be fairly flat.


Generally only /companies/:companyId/buildings

And then you get a list of all buildings for this company.

Every building has a url like: /buildings/:buildingId

So you constantly get back to the root.

Only exception is generally a tenant id which goes upfront for all requests for security/scoping purposes.


This seems like a really good model. It keeps things flat and easy to extend.


I see both.

E.g. GitHub /repos/:owner/:repo/pulls/comments/:comment_id

But flat is better than nested, esp if globally unique IDs are used already (and they often are).


Yes, but with /comments/:comment_uuid that has a parent /pulls/:pull_uuid, it is harder to map the hierarchy it belongs to.


Not really, if a URL link is added to the post in the comment response.

Also it is possible to embed a sub resource (or part of it).

Think a blog post.

/blogPost/:blogPostId

You can embed a blog object with the url and title so you can show the blogpost on a page with the name of the blog in one go.

If you need more details on the blog you can request /blogs/:blogId
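
Roughly, the embedded representation could look like this (field names made up for illustration):

    GET /blogPost/:blogPostId

    {
      "id": "123",
      "title": "My first post",
      "blog": {
        "id": "42",
        "title": "Example Blog",
        "href": "/blogs/42"
      }
    }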


> instead of GET/POST for everything

Sometimes that's a pragmatic choice too. I've worked with HTTP clients that only supported GET and POST. It's been a while but not that long ago.


Not even just clients, but servers too would block anything not GET/POST/HEAD. And I believe PHP still to this day only has $_GET and $_POST as out-of-the-box superglobals to conveniently get data params. I recall some "REST" APIs would let you use POST for PUT/DELETE requests if you added a special var or header specifying the intended method.


I also view it as inevitable.

I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.

This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.

Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.


xml-rpc (before it transmogrified into SOAP) was pretty simple and flexible. Still exists, and there is a JSON variant now too. It's effectively what a lot of web APIs are: a way to invoke a method or function remotely.


HTTP/JSON API works too, but you can assume it's what they mean by REST.

It makes me wish we stuck with XML-based stuff: it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).


> HTTP/JSON API works too, but you can assume it's what they mean by REST.

This is the kind of slippery slope where pedantic nitpickers thrive. They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore, because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.

In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".


> They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore, because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.

wat?

Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending other representations than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST


> Nowhere is JSON in the name of REpresentational State Transfer.

If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.


Read messages before replying? It's the internet! Ain't no one got time for that

:)


Don't stress it. It happens to the best of us.


This. Or maybe we should call it "Rest API" in lowercase, meaning not the state transfer, but the state of mind, where developer reached satisfaction with API design and is no longer bothered with hypermedia controls, schemas etc.


Assuming the / was meant to describe it as both an HTTP API and a JSON API (rather than HTTP API / JSON API) it should be JSON/HTTP, as it is JSON over HTTP, like TCP/IP or GNU/Linux :)


I recall having to maintain an integration to some obscure SOAP API that ate and spit out XML with strict schemas and while I can't remember much about it, I think the integration broke quite easily if the other end changed their API somehow.


> it had proper standards

Lol. Have you read them?

SOAP in particular can really not be described as "proper".

It had the advantage that the API docs were always generated, and thus correct, but the most common outcome was one software stack not being able to use a service built with another stack.


> - The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec

I really wish people just used the 200 status code and put encoded errors in the payloads themselves, instead of trying to fuse the transport layer's concerns (which HTTP serves as, in this case) with the application's concerns. Seriously, HTTP does not mandate that e.g. "HTTP/1.1 503 Ooops\r\n\r\n" should be stuffed into the TCP's RST packet, or into whatever TLS uses to signal severe errors, for bloody obvious reasons: it doesn't belong there.

Like, when you get a 403/404 error, it's very bloody difficult to tell apart the "the reverse proxy before the server is misconfigured and somebody forgot to expose the endpoint" and "the server executed your request to look up an item perfectly fine: the DB is functional, and the item you asked for is not in there" scenarios. And yeah, of course I could (and should) look at and try to parse the response's body but why? This "let's split off the 'error code' part of the message from the message and stuff it somewhere into the metadata, that'll be fine, those never get messed up or used for anything else, so no chance of confusion" approach just complicates things for everyone for no benefit whatsoever.


The point of status codes is to have a standard that any client can understand. If you have a load balancer, the load balancer can mark backends as unhealthy based on the status code. Similarly, if you have some job scheduler or workflow engine that's calling your API, it can execute an appropriate retry strategy based on the status code. The client in most cases does not care about why something failed, only whether it has failed. Being able to tell apart whether the failure was due to the reverse proxy or the database or whatever is the server's concern, and the server can always do that with its own custom error codes.


> The client in most cases does not care about why something failed, only whether it has failed.

"...and therefore using different status codes in the responses is mostly pointless. Therefore, use 200 and put "s":"error" in the response".

> Being able to tell apart if the failure was due to reverse proxy or database or whatever is the server's concern.

One of the very common failures is for the request to simply never reach "the server". In my experience, one of the very first steps in improving error-handling quality (on the client's side) is to start distinguishing between the low-level errors of "the user has literally no Internet connection" and "the user has connected somewhere, but that thing didn't really speak the server protocol", and the high-level errors of "the client has talked with the application server (using the custom application protocol and everything), and there was an error on the application server's side". Using HTTP status codes for both low- and high-level errors makes such distinctions harder to figure out.


I did say most cases, not all cases. There are some concerns that are considered cross-cutting enough to have made it into the standard. For instance, many clients will handle a 401 by redirecting to an auth flow, handle a 429 rate limit by backing off before retrying, handle a 426 by upgrading the protocol, etc. Not all statuses may be relevant for a given system; you can club several scenarios under a 400 or a 500 and that's perfectly fine for many use cases. But when you have cross-cutting concerns, it's beneficial to follow fine-grained status codes. It gives you a lot of flexibility in how you can connect different parts of your architecture and reduces integration headaches.

I think a more precise term for what you're describing is transport errors vs business errors. You're right that you don't want to model all your business errors as HTTP status codes. Your business scenarios are most certainly numerous and need to be much more fine grained than what the standard offers. But the important thing is all errors business or transport eventually need to map to a HTTP status code because that's the protocol you're ultimately speaking.
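
A minimal sketch of the kind of generic handling this enables, assuming a fetch-based client and the delta-seconds form of the standard Retry-After header (names illustrative):

    // Retry on 429 using Retry-After; everything else is passed through
    // for the caller to interpret.
    async function fetchWithRetry(url, options = {}, attempts = 3) {
      for (let i = 0; i < attempts; i++) {
        const res = await fetch(url, options);
        if (res.status !== 429) return res;
        // Fall back to 1 second if the header is missing or not a number.
        const wait = Number(res.headers.get('Retry-After')) || 1;
        await new Promise(resolve => setTimeout(resolve, wait * 1000));
      }
      throw new Error('rate limited after retries');
    }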


> transport errors vs business errors

Yes, pretty much.

> But the important thing is all errors business or transport eventually need to map to a HTTP status code because that's the protocol you're ultimately speaking.

"But the important thing is, all errors, business or transport, eventually need to map to the set of TCP flags (SYN, ACK, FIN, RST, ...) because that's the protocol you're ultimately speaking". Yeah, they do map, technically speaking: to just an ACK. Because it's a payload, transported agnostically to its higher-level meaning. It's a yet another application of the end-to-end principle.


What is an unhealthy request? Is searching for a user which was _not found_ by the server unhealthy? Was the request successful? That's where different opinions exist.


Sure, there's some nuance to it that depends on your application, but it's the server's responsibility to do so, not the client's. The status code exists for this reason and the standard also classifies status codes under client error and server error so that clients can determine whether a server is unhealthy simply by looking at the status code.


Eh, if you're doing RPC where the whole request/response are already in another layer on top of HTTP, then sure, 200 everything.

But to me, "REST" means "use the HTTP verbs to talk about resources". The whole point is that for resource-oriented APIs, you don't need another layer. In which case serving 404s for things that don't exist, or 409s when you try to put things into a weird state makes perfect sense.


> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec

I had to chuckle here. So true!


This is very true. Over my 15 years of engineering, I have never suffered _that_ much with integrating with an API (assuming it exists). So the lack of "HATEOAS" hasn't even been noticeable for me. As long as they get most of the status codes right (specifically 200, 401, 403, 429) I usually have no issues integrating and don't even notice that they don't have some "discoverable api". As long as I can get the data I need or can make the update I need, I am fine.

I think good rest api design is more a service for the engineer than the client.


> As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429)

A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "HTTP 200 - 500". They just wrote 500 in the message body; the status remained 200.

Some developers just do not understand http.


I just consumed an API where errors were marked with a "success": false field.

The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.

Anyway, if you're looking for a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't one.


This is the real world. You just deal with it (at least I do) because fighting it is more work and at the end of the day the boss wants the project done.


I've seen this a few times in the past but for a different reason. What would happen in these cases was that internally there'd be some cascade of calls to microservices that all get collected. In the most egregious examples it's just some proxy call wrapping the "real" response.

So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.


Sometimes I wish HN supported emojis so I could reply with the throw-up one.


I've had frontend devs ask for this, because it was "easier" to handle everything in the same then callback. They wanted me to put ANY error stuff as a payload in the response.


{ "statusCode": 200, "error" : "internal server error" }

Nice.


> So the lack of "HATEOAS" hasn't even been noticeable for me.

I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.

In the end it's a solution searching for problems, and no one has those problems.


It isn't clear that HATEOAS would be better. For instance:

>>Clients shouldn’t assume or hardcode paths like /users/123/posts

Is it really net better to return something like the following just so you can change the url structure?

"_links": { "posts": { "href": "/users/123/posts" }, }

I mean, so what? We've created some indirection so that the url can change (e.g. "/u/123/posts").


Yes, so the link doesn't have to be relative to the current host. If you move user posts to another server, the href changes, nothing else does.

If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.

The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes.

It's brittle and will break some time in the future.


>> If you move user posts to another server, the href changes, nothing else does

It isn't clear what insurance you are really buying here. You can't possibly mean another physical server. Obviously that happens all the time with any site but no one is changing links to point to the actual hardware - just use a normal load balancer. Is it domain name change insurance? That doesn't add up either.

>> If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.

Normally you would just fix the problem instead of doing weird application level encryption stuff.

>> The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes

If those "frontend" developers are paying customers as in the case of AWS, OpenAI, Anthropic then you probably want to make your API as simple as possible for them to understand.


I use the term "HTTP API"; more general. Context, in light of your definition: in many cases labeled "REST", there will only be POST, or POST and GET, and HTTP 200 status with an error in JSON is used instead of HTTP status codes. Your definition makes sense as a weaker form of the original, but it is still too strict compared to how the term is used. "REST" = "HTTP with JSON bodies" is the most practical definition I have.


>HTTP 200 status with an error in JSON is used instead of HTTP status codes

This is a bad approach. It prevents your frontend proxies from handling certain errors better. Such as: caching, rate limiting, or throttling abuse.


On the other hand, a functional app returning HTTP errors clouds your observability and can hide real errors. It's not always ideal for the client either. 404 specifically is bad: do I have a wrong id, a wrong address, is it actually a 401/403, or is it just returned by something along the way? The code alone tells you nothing; might as well return 200 for a valid request that was correctly processed.

(devil's advocate, I use http codes :))


> HTTP 200 status with an error in JSON is used instead of HTTP status codes

I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.

Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.


> I can safely assume [...] CRUD actions are mapped to POST/GET/PUT/DELETE

Not totally sure about that - I think you need to check what they decided about PUT vs PATCH.


Isn't that fairly straightforward? PUT for full updates and PATCH for partial ones. Does anybody do anything different?
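
i.e., roughly (resource and fields illustrative):

    PUT /users/123            replaces the whole resource
    { "name": "Ann", "email": "ann@example.com" }

    PATCH /users/123          modifies only the fields supplied
    { "email": "ann@new.example.com" }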


PUT for partial updates, yes, constantly. What I worked with last week: https://docs.gitlab.com/api/projects/#edit-a-project


That's straightforwardly 'correct' and Fielding's thesis, yes. And yes, people do things differently!


Lots of people make PUTs that work like PATCHes and it drives me crazy. Same with people who use POST to retrieve information.


Well you can't reliably use GET with bodies. There is the proposed SEARCH but using custom methods also might not work everywhere.


No, QUERY. https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-saf...

SEARCH is from RFC 5323 (WebDAV).


The SEARCH verb draft was superseded by the QUERY verb draft last I checked. QUERY is somewhat more adopted, though it's still very new.
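
Per the draft, the idea is a safe, idempotent method that carries the query in the request body, roughly:

    QUERY /items HTTP/1.1
    Content-Type: application/json

    { "status": "active", "createdAfter": "2024-01-01" }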


  POST /gql
  "Get thing"
  ...
  200
  "Permission denied to get thing"
Hate it.


These verbs dont even make sense most of the time.


You sweet summer child.


It's always better to use GET/POST exclusively. The verb mapping was theoretical, from someone who didn't have to implement it. I long ago caved to the reality of the web's limited support for most of the other verbs.


Agreed... in most large (non-trivial) systems, REST ends up looking like, and devolving into, RPC more and more, and you end up just using GET and POST for most things, with a REST-ish RPC system in practice.

REST purists will not be happy, but that's reality.


What is the limited support for CONNECT/HEAD/OPTIONS/PUT/DELETE ?


It was limited up until the last 10 years, and if someone hasn't updated their knowledge then it's still limited, I suppose.


XMLHttpRequest? fetch?

We're talking JSON APIs -- HTML forms are incompatible with that no matter the verb.


Fetch came in around 2015, and XMLHttpRequest wasn't consistent in the way different verbs were handled, like redirects, as this blog post[0] from 2006 points out:

> Basic redirect support is pretty universal, but things quickly fall apart on most browsers when you do tricky things like use non-GET/POST methods on redirecting resources.

There were other things too, I'm not sure CORS supported anything but GET and POST early on either. Wanting consistency and then sticking to it isn't an inherently bad thing, there's a lot to know, and people don't update knowledge about everything (I'm speaking generally as well as including myself here).

[0] https://www.mnot.net/blog/2006/01/23/test_xmlhttprequest


> I've long ago caved to the reality of the web's limited support for most of the other verbs.

Sounds like this reality is not the recent one.


Hell yeah. IMO we should collectively get over ourselves and just agree that what you describe is the true, proper, present-day meaning of "REST API".


> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec

401 Unauthorized. When the user is unauthenticated.

403 Forbidden. When the user is unauthorized.


Yeah

I can assure you very few people care

And why would they? They're getting value out of this and it fits their head and model view

Sweating over this takes you nowhere


I really hate my conclusions here, but from a limited freedom point of view, if all of that is going to happen...

> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec

So we'd better start with a standard scaffolding for the replies so we can encode the errors and forget about status codes, and the only thing generating an error status is an unhandled exception, mapped to 500. That's the one design that survives people disagreeing.

> There's a decent chance listing endpoints were changed to POST to support complex filters

So we'd better just standardize that lists support both GET and POST from the beginning. While you are there, also accept queries in both the URL and body parameters.
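
A minimal sketch of that kind of scaffold, assuming a JSON envelope and a list endpoint that accepts both verbs (field names illustrative):

    GET  /items?filter=...          simple filters in the query string
    POST /items                     same semantics, filter document in the body

    200 OK
    { "ok": true, "data": [ ... ] }

    200 OK
    { "ok": false, "error": { "code": "BAD_FILTER", "message": "unknown field 'foo'" } }

    500 Internal Server Error       (only for unhandled exceptions)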


The world would be lovely if we could have standard error and listing responses, and a common query syntax.

I haven't done REST apis in a while, but I came across this recently for standardizing the error response: https://www.rfc-editor.org/rfc/rfc9457.html
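
For reference, a problem-details response under that RFC looks roughly like this:

    HTTP/1.1 403 Forbidden
    Content-Type: application/problem+json

    {
      "type": "https://example.com/probs/out-of-credit",
      "title": "You do not have enough credit.",
      "status": 403,
      "detail": "Your current balance is 30, but that costs 50.",
      "instance": "/account/12345/msgs/abc"
    }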


I really like the idea of a type URL.


> - CRUD actions are mapped to POST/GET/PUT/DELETE

Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.


Do you care? From my point of view, post, put, delete, update, and patch all do the same. I would argue that if there is a difference, making the distinction in the URL instead of the request method makes it easier to search code and logs. And what's the correct verb anyway?

So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.

So I say: GET or POST.


> From my point of view, post, put, delete, update, and patch all do the same.

That's how we got POST-only GraphQL.

In HTTP (and hence REST) these verbs have well-defined behaviour, including the very important things like idempotence and caching: https://github.com/for-GET/know-your-http-well/blob/master/m...
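
Roughly, my summary of what the HTTP spec says:

    GET, HEAD        safe, idempotent, cacheable
    PUT, DELETE      idempotent, not cacheable
    POST, PATCH      not idempotent; responses cacheable only with explicit freshness info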


Yeah but GET doesn’t allow requests to have bodies (yeah, I know, technically you can but it’s not very useful), and this is a legitimate issue preventing its use in complex APIs.


I've had situations when I wanted a GET with a body :) But not that many


There's no point in idempotency for operations that change the state. DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id. Should you do something like delete by email or product, you have to use another operation, which then obviously will be POST anyway. And there's no way to "cache" a delete operation.

It's just absurd to mention idempotency when the state gets altered.


> There's no point in idempotency for operations that change the state.

Of course there is

> DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id

Which is most operations

> Should you do something like delete by email or product, you have to use another operation,

Erm.... No, you don't?

> which then obviously will be POST anyway. And there's no way to "cache" a delete operation.

Why would you want to cache a delete operation?


The defined behaviors are not so well defined for more complex APIs.

You may have an API for example that updates one object and inserts another one, or even deletes an old resource and inserts a new one

The verbs are only very clear for very simple CRUD operations. There is a lot of nuance otherwise that you need documentation for and having to deal with these verbs both as the developer or user of an API is a nuisance with no real benefit


> The defined behaviors are not so well defined for more complex APIs.

They are. Your APIs can always be defined as a combination of "safe, idempotent, cacheable"


I've had situations when I wanted a GET with a body :)


I agree. From what I have seen in corporate settings, using anything more than GET/POST takes the time to deploy the API to a different level. Using UPDATE, PATCH etc. typically involves firewall changes that may take weeks or months to get approved and deployed followed a never ending audit/re-justification process.


> Do you care?

I don't. I could deliver a diatribe on how even the common arguments for differentiating GET & POST don't hold water. HEAD is the only verb with any mild use in the base spec.

On the other hand:

> correct status codes and at least a few are used contrary to the HTTP spec

This is a bigger problem than verb choice & something I very much care about.


There's one, though. The client can tell the server it has a cached version, but that only works (automatically) for GET in browsers. That could have been solved without resorting to those verbs, of course, but it's legacy.

HEAD allows the server to send metadata without the (potentially very large) body. That could have been solved without a verb (as if HEAD is a verb in this case!), of course, but it has its uses.
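
The conditional-GET mechanics, for reference:

    GET /items/42 HTTP/1.1
    If-None-Match: "abc123"

    HTTP/1.1 304 Not Modified
    ETag: "abc123"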


I actually had to change an API recently TO this. The request payload was getting too big, so we needed to send it via POST as a body.


> even sometimes read operations behind a POST

Even worse than that, when an API like the Pinboard API (v1) uses GET for write operations!


I work with an API that uses GET for delete :)


Sounds about right. I've been calling this REST-ish for years and generally everyone I say that to gets what I mean without much (any) explanation.


> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec

I've done this enough times that now I don't really bother engaging. I don't believe anyone gets it 100% correct ever. As long as there is nothing egregiously incorrect, I'll accept whatever.


> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.

True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.


As long as it's not SOAP, it's great.


If I never have to use SOAP again in my life, I will die a happy man.


> Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood

This is an insightful observation. It happens with pretty much everything

As it has been happening recently with the term vibecoding. It started with some definition, and now it’s morphed into more or less just meaning ai-assisted coding. Some people don’t like it[1]

1: https://simonwillison.net/2025/Mar/19/vibe-coding/


100% agreed, “language evolves”

This article also tries to make the distinction of not focusing on the verbs themselves. That the RESTful dissertation doesn’t focus on them.

The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation, and libraries across platforms may implement PUT, PATCH, and DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be a strict REST adherent, since you'll hit a wall.


Haha, our API still returns XML. At least, most of the endpoints do. Not the ones written by that guy who thinks predictability in an API is lower priority than modern code, those ones return JSON.


I present to you this monstrosity: https://stackoverflow.com/q/39110233

Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!

Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.

I think people tend to forget these things are tools, not shackles


Exactly. What you describe is how I see REST being used today and I wish people accepted the semantic shift and stopped with their well-ackshually. It serves nothing.


I have seen monstrosities claiming to be rest that use HTTP but actually have a separate set of action verbs, nestled inside of HTTP's.

In a server holding a "deck of cards," there might be a "HTTP GET <blah-de-blah>/shuffle.html" call with the side-effect of performing a server-side randomization operation.

I just made that up because I don't want to impugn anyone. But I've seen API sets full of nonsense just like that.


Importantly for the discussion, this also doesn't mean the push for REST api's was a failure. Sure, we didn't end up with what was precisely envisioned from that paper, but we still got a whole lot better than CORBA and SOAP.

The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.


We still have gRPC though...


the last point got me.

How can you idiomatically do a read-only request with complex filters? For me both PUT and POST are "writable" operations, while GET is assumed to be read-only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).

So ... how does one do it?


One uses POST and recognizes that REST doesn't have to be so prescriptive.

The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).

When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.

Plus, with the vagaries of CSRF protections, per-user rate-limiting, access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.


POST the filter, get a response back with the query to follow up with for the individual resources.

    POST /complex
    
    value1=something
    value2=else
which then responds with

    201 Created
    Location: https://example.com/complex/53301a34-92d3-447d-ac98-964e9a8b3989
And then you can make GET request calls against that resource.

It adds in some data expiration problems to be solved, but it's reasonably RESTful.
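
A follow-up fetch against the created resource might then look like (illustrative):

    GET /complex/53301a34-92d3-447d-ac98-964e9a8b3989?page=2

    200 OK
    { "results": [ ... ], "next": "?page=3" }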


This has RESTful aesthetics, but it is a bit impractical if a read-only query changes state on the server, as in creating the uuid-referenced resource.


There's no requirement in HTTP (or REST) to either create a resource or return a Location header.

For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).


Isn't this twice as slow? If your server was far away it would double load times?


The response to POST can return everything you need. The Location header that you receive with it will contain permanent link for making the same search request again via GET.

Pros: no practical limit on query size. Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.


There was a proposal[1] a while back to define a new SEARCH verb that was basically just a GET with a body for this exact purpose.

[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...


Similarly, a more recent proposal for a new QUERY verb: https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...


If you really want this idiomatically correct, put the data in JSON or another suitable format, zip it, and encode it in Base64 to pass via GET as a single parameter. To hit the browser limits you will need such a big query that you may hit UX constraints earlier in many cases (2048 bytes is 50+ UUIDs or 100+ polygon points etc).

Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
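
A rough sketch of the encoding side in JavaScript, using Node's built-in zlib and Buffer (the parameter name is made up):

    // Pack a JSON filter into one URL-safe GET parameter, and unpack it again.
    const zlib = require('zlib');

    function encodeFilter(filter) {
      const packed = zlib.deflateSync(Buffer.from(JSON.stringify(filter)));
      return packed.toString('base64url'); // URL-safe Base64
    }

    function decodeFilter(param) {
      return JSON.parse(zlib.inflateSync(Buffer.from(param, 'base64url')).toString());
    }

    // e.g. GET /items?q=<encodeFilter({ ids: [...], area: [...] })>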


Cons: not postman or cURL friendly.


"Filters" suggests that you are trying to query. So, QUERY, perhaps? https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...

Or stop worrying and just use POST. The computer isn't going to care.


HTML FORMs are limited to www-form-encoded or multipart. The length of the queries on a GET with a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.

Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.

In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.


Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.

I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.


RESTful has gone far beyond the HTTP world. It's the new RPC with a JSON payload for whatever. I use it on embedded systems that have no database at all; POST/GET/PUT/DELETE etc. are perfectly simple to map onto WRITE/READ/MODIFY/REMOVE commands. As long as the API is documented, I don't really care about its HTTP origins.


- the inclusion of HATEOAS links which are NEVER used


  > The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
Haha yes! Is it even a dev team if they haven't had an overly heated argument about which 4xx code to return for an error state?


I describe mine as a JSON-Based Representational State SOAP API to other internal teams. When their eyes cross I get to work sifting through the contents of their pockets for linting errors and JIRA tickets.


>- There's a decent chance listing endpoints were changed to POST to support complex filters

Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.


I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.

When I think about some of the RESTy things we do like return part of the response as different HTTP codes, they don't really add that much value vs. keeping things on the same layer. So maybe the biggest value add so far is JSON, which thanks to its limited nature prevents complication, and OpenAPI ecosystem which grew kinda organically to provide pretty nice codegen and clients.

More complexity lessons here: look at oneOf support in OpenAPI implementations, and you will find half of them flat out don't have it, and the other half are buggy even in YOTL 2025.


> I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.

While I generally agree that REST isn't really useful outside of academic thought experiments: I've been in this about as long as you have, and it really isn't hard. Try reading Fielding's paper once; the ideas are sound and easy to understand, it's just built around a different vision of the internet than the one we ended up creating.


You can also read Fielding's old blog posts. He used to write about it a lot before he stopped blogging.


this is most probably a 90% hit


[flagged]


I disagree. It's a perfectly fine approach to many kinds of APIs, and people aren't "mediocre" just for using widely accepted words to describe this approach to designing HTTP APIs.


> and people aren't "mediocre" just for using widely accepted words

If you work off "widely accepted words" when there is disagreeing primary literature, you are probably mediocre.


So your view is that the person who coins a term forever has full rights to dictate the meaning of that term, regardless of what meaning turns out to be useful in practice and gets broadly accepted by the community? And you think that anyone who disagrees with such an ultra-prescriptivist view of linguistics is somehow a "mediocre programmer"? Do I have that right?


I have no dog in this fight, but 90% of technical people around me keep calling authentication authorization no matter how many times I explain the difference to those who even care to listen. It's misused in almost every application developed in this country.

Sometimes it really is bad and "everybody" can be very wrong, yes. None of us are native English speakers (most don't speak English at all), so these foreign sounding words all look the same, it's a forgivable "offence".


No. For all people who use "REST": if reading Fielding is the exception that gets you on HN, then not reading Fielding is what the average person does. Mediocre.

Using Fielding's term to refer to something else is an extra source of confusion which kinda makes the term useless. Nobody knows what the speaker exactly refers to.


The point is lost on you though. There are REST APIs (almost none), and there are "REST APIs" - a battle cry of mediocre developers. Now go tell them their restful has nothing to do with rest. And I am now just repeating stuff said in article and in comments here.


Why should I (or you, for that matter) go and tell them their restful has nothing to do with rest? Why does it matter? They're making perfectly fine HTTP APIs, and they use the industry standard term to describe what kind of HTTP API it is.

It's convenient to have a word for "HTTP API where entities are represented by JSON objects with unique paths, errors are communicated via HTTP status codes and CRUD actions use the appropriate HTTP methods". The term we have for that kind of API is "rest". And that's fine.


1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.

2. So just "HTTP API". And that would suffice. Adding "restful" is trying to be extra-smart or fit in if everyone's around an extra-smart.


> 1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.

This doesn't seem like a useful line of conversation, so I will ignore it.

> 2. So just "HTTP API".

No! There are many kinds of HTTP APIs. I've both made and used "HTTP APIs" where HTTP is used as a transport and API semantics are wholly defined by the message types. I've seen APIs where every request is an HTTP POST with a protobuf-encoded request message and every response is a 200 OK with a protobuf-encoded response message (which might then indicate an error). I've seen GraphQL APIs. I've seen RPC-style APIs where every "RPC call" is a POST request to an endpoint whose name looks like a function name. I've seen APIs where request and response data is encoded using multipart/form-data.

Hell, even gRPC APIs are "HTTP APIs": gRPC uses HTTP/2 as a transport.

Telling me that something is an "HTTP API" tells me pretty much nothing about how it works or how I'm expected to use it, other than that HTTP is in some way involved. On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it, and the documentation can assume a lot of pre-existing context because it can assume that I've used similar APIs before.


> On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it (...)

Precisely this. The value of words is that they help communicate concepts. REST API or even RESTful API conveys a precise idea. To help keep pedantry in check, Richardson's maturity model provides value.

Everyone manages to work with this. Not those who feel the need to attack people with blanket accusations of mediocrity, though. They hold onto meaningless details.


You're being needlessly pedantic, and it seems the only purpose to this pedantry is finding a pretext to accuse everyone of being mediocre.


I think the pushback is because you labelled people who create "REST APIs" as "mediocre" without any explanation. That may be a good starting point.


It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.

Most of us are not writing proper RESTful APIs because we're dealing with legacy software, weird requirements, and the egos of other developers. We're not able to build whatever we want.

And I agree with the feature article.


> It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.

I'd go as far as to claim it is by far the dumbest kind, because it has no value, serves no purpose, and solves no problem. It's just trivia used to attack people.


I met a DevOps guy who didn't know what "dotfiles" are.

However, I'd argue the people who use the term the same way as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.

I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".


This is more like people arguing over "proper" English, the point of language is to communicate ideas. I work for a German company and my German is not great but if I can make myself understood, that's all that's needed. Likewise, the point of an API is to allow programs, systems, and people to interoperate. If it accomplishes that goal, it's fine and not worth fighting over.

If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper, good enough to get the job done is going to have to be good enough.


I agree, though it would be really, really nice if an HTTP method like GET did not modify things. :)


> This is more like people arguing over "proper" English, the point of language is to communicate ideas.

ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!

</sarcasm>

You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.

Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).

It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.

> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?

Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.


>misusing it just decreases clarity and hinders communication

There is no such thing as "misusing language". Language changes. It always does.

Maybe you grew up in an area of the world where it's really consistent everywhere, but in my experience I'm going to have a harder time understanding people even two to three villages away.

Because language always changes.

Words mean a particular thing at a point in time and space. At another one, they might mean something completely different. And that's fine.

You can like it or dislike it, that's up to you. However, I'd say every little bit of negative thoughts in that area only serve to make yourself miserable, since humanity and language at large just aren't consistent.

And that's ok. Be it REST, literally or even a normal word such as 'nice', which used to mean something like 'foolish'.

Again, language is inconsistent by default and meanings never stay the same for long - the more a terminus technicus gets adapted by the wider population, the more its meaning gets widened and/or changed.

One solution for this is to just say "REST in its original meaning" when referring to what is now the exception instead of the norm.


> I work for a German company and my German is not great but if I can make myself understood, that's all that's needed.

Really? What if somebody else wants to get some information to you? How do you know what to work on?


Pretty much everyone speaks English too, it's the official language of the company. Though we all try to be respectful; if I can't understand them then they tell me again in English. I try to respond as much as possible in German and switch to English if needed - there's also heavy use of deepl on my side which seems to be a lot more idiomatic than Google, MS, or Apple translate.


What an incredibly bad take.



