Rules for REST API URI Design (restcase.com)
85 points by RestCase 10 months ago | 92 comments

I don't disagree with most of the rules, but it's more than a little ironic that the supporting quote at the top of the page, which the article seemingly hangs off of, directly contradicts most of the article:

> The only thing you can use an identifier for is to refer to an object. When you are not dereferencing, you should not look at the contents of the URI string to gain other information.

- Tim Berners-Lee

By this quote, caring about the hierarchical nature of URIs, pluralization, and enhancements to readability is somewhere between irrelevant and actively harmful (since they promote the idea that the URI contains information beyond identification).

By the logic that a URI is supposed to be used for identification and identification alone, these two URLs are identical in terms of value:



Most compilers interpret variable names as opaque identifiers without semantic meaning. Does that mean I shouldn't have rules for naming variables?

The way I interpreted the quote is: don't parse the URI in your program to try to gain information about a resource other than its ID. It doesn't mean you can't put information in there to make it easier for human developers to understand.

In other words, pretty names are for humans, not machines.

And in a 'true' REST API the pretty name for humans should be in the link text, not the href

Yes, the article is talking about REST as in "JSON RPC over HTTP", as opposed to REST as the person who came up with it (Roy Fielding) intended.

If you are worrying about URL readability you aren't doing REST.

Spot on. RESTful URLs are not to be treated like SEO URLs. But unfortunately most people don't see the difference. One example in the article is especially harmful (/students/<id>/courses/physics for querying).

The single best slideshare I've found on REST is Teach a Dog to REST. Old but gold. https://www.slideshare.net/landlessness/teach-a-dog-to-rest

(And a shameful plug - https://medium.com/@rdsubhas/pitiful-restful-urls-5d576ffccb...)

  > RESTful URLs are not to be treated like SEO URLs
But it does not hurt to have them human-readable.

It does hurt. Examples in the article linked above.

Most web apps (outside of websites themselves, which are a kind of very simple application) aren't HATEOAS, so Roy's thesis has less purchase on their design.

In most contexts what you actually need is REST as in "JSON RPC over HTTP" instead of REST as Roy Fielding intended.

So the recommendations in the article are actually valid and useful, but we'd need to come up with another name to properly distinguish which "REST" we're talking about, at least unless/until one of these usages fades into obscurity.

> we'd need to come up with another name to properly distinguish which "REST" we're talking about

"Underspecified, broken half-implementation of RPC" is quite precise, but most programmers don't like being reminded that they've implemented a brain-dead idea.

I never understood why people don't use one of the already-defined RPC protocols, instead rolling their own versions (each one different) that don't even signal errors properly.

When doing something like `students/12345/courses`, there will certainly be a scenario where you want a list of courses on their own as well.

If planning ahead, does that warrant designing your route such as:

`/courses`, where you get a list of courses, and `/courses?studentId=12345`, where you get courses scoped to a student...

Or is it better to just create a separate route, so that you have /courses AND /students/12345/courses?

I'm concerned that the latter results in duplication of code and possibly more confusion (i.e. it's not clear whether the API requires a POST to /courses or to /students/12345/courses to create a course for a student), while the former may cause (lack of) caching problems.
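For what it's worth, the duplication worry can be contained by having both route shapes delegate to a single query function. A minimal sketch of the idea, framework-free, where `list_courses` and the data shape are illustrative names, not from any real API:

```python
# Both route shapes delegate to one function, so there is no duplicated
# query logic. The route comments map URL shapes to the same call.

def list_courses(courses, student_id=None):
    """Return all courses, optionally scoped to one student."""
    if student_id is None:
        return courses
    return [c for c in courses if student_id in c["students"]]

COURSES = [
    {"name": "physics", "students": {12345}},
    {"name": "algebra", "students": {99999}},
]

# /courses                   -> list_courses(COURSES)
# /courses?studentId=12345   -> list_courses(COURSES, student_id=12345)
# /students/12345/courses    -> list_courses(COURSES, student_id=12345)
print(list_courses(COURSES, student_id=12345))
```

Whether the nested route exists then becomes purely a question of URL design, not of implementation cost.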


I follow JSON API v1.1, mostly anyway, see [1].

I not only create both paths, I also specifically create `relationships` links to control the relationship itself. That way, I have a RESTful way of, say, making a signed article anonymous: DELETE articles/2/relationships/author. Note that this doesn't delete the author, just the relationship itself.

Your worry about the duplication of code is well founded, but slightly off. For what are called "related" links (articles/4/author) we only ever issue GETs. There are many reasons for this and you've started to catch onto a few of them.

This means a _lot_ of routes (over 300 right now), but with smart abstractions it's not so bad. I've been thinking of doing a weird fork of Rails based on how productive I've become. Basically I want to follow Rails but I want to override a lot of their decisions that don't fit JSON API's needs. Maybe one day.

[1] With a minor addition that I find helpful: I add an `also` link in the relationship body. GET articles/5/author would have `also` set to `users/45`. In this manner I can easily get a reference to the related resource, and I make it explicit that the resource isn't dependent on its relationship with the parent resource continuing to exist.

I believe '/courses' and 'student/1234/courses' mean two different things. The former implies a list of courses and the latter implies an association between a student and a course.

POSTing to '/courses' creates a new course. However, POSTing to 'student/1234/courses' creates a relationship/association between a student and a course.

So, it's two different things IMO and not a duplication of logic.

This isn't true, or at least not necessarily true. If you're referring to the JSON API spec, you're conflating related and relationship links. The former is strictly for GETs (GET students/5/courses) and the latter is for controlling the relationship itself (PATCH students/5/relationships/courses).

See here for more information:


I think it depends on what makes sense for your problem domain. Think about what kind of HTTP methods and the associated data with each and decide accordingly.

Nesting of resources in a REST API can become problematic for these reasons, so another option would be to get rid of the nesting and use query parameters to filter on whatever criteria you want. This is the motivation for PostgREST [1], where the author views the shortcomings of nested API resources as similar to those of hierarchical databases. GraphQL solves this similarly and also has a richer query language, but it's missing some of the benefits of REST, like caching.

[1] https://postgrest.com/en/v4.1/intro.html

If it's an API, then you could bypass the aforementioned constraints of splitting your request up by URI "directories" and instead have your query's parameters passed via form data or serialised as JSON* in the HTTP request body.

URIs are easier to deal with but not _that_ much easier if you're programmatically sending the requests as one might expect to do with an API

* Other data formats also exist

Adding stuff to a request body in GET is considered an antipattern

If your APIs are login sensitive then you'd want your APIs to be POST requests anyway.


If your REST APIs do updates or retrieve sensitive information, then odds are you'll have some kind of authentication method, possibly with session cookies or whitelisted IPs. Allowing sensitive APIs to be called via GET means that you open yourself up to a couple of easy vectors of attack, e.g. a malicious webpage with an <img src="http://example.com"> tag using the API endpoint as the src URI. The target person opening the page will send the API request as themselves, using their IP and cookies. Granted, the API won't return an image, so the "image" will fail to render, but that doesn't matter, as the API will still execute successfully. However, if your APIs only accept POST requests, then you mitigate this particular attack.

It's pretty common advice to recommend any APIs that require user authentication to be sent via POST. In fact it's one of the first things pen testers will check for and you'd also fail PCI DSS vulnerability scans for exposing APIs via GET as well.
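The attack above works because an `<img>` tag can only ever issue a GET, and that GET carries the victim's cookies. A tiny sketch of the mitigation, with made-up action names standing in for real endpoints:

```python
# Refuse to run state-changing handlers on GET, so a forged <img src=...>
# request (always a GET) cannot trigger them. Action names are illustrative.

STATE_CHANGING = {"delete_account", "transfer_funds"}

def dispatch(method, action):
    if action in STATE_CHANGING and method != "POST":
        return 405, "method not allowed"   # the <img>-tag attack dies here
    return 200, f"ran {action}"

print(dispatch("GET", "delete_account"))   # forged request is refused
print(dispatch("POST", "delete_account"))  # legitimate form/XHR succeeds
```

In practice you'd pair this with CSRF tokens, since POST alone doesn't stop forged form submissions, but it closes the trivial GET-based vectors.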

Disclaimer: I've worked on multiple projects that have been pen tested, been audited by the UK Gambling Commission and/or had to adhere to PCI Data Security Standards.

> I've worked on multiple projects that have been pen tested, been audited by the UK Gambling Commission and/or had to adhere to PCI Data Security Standards.

Why do you want to 'disclaim' that? Assuming it's true, you might have meant 'disclosure', but I think what you really mean is much closer to 'source' - i.e. 'why I know this'.

I just had this exact 'problem' with an API I'm building. I just ended up creating both routes, which accessed the same internal function to return the same results; no duplication of code necessary. Personally, I think this is a good resolution, as it answers both use cases depending on the accessor's context.

To add, my endpoints were:




Both return the users attached to a particular account.

just drop this REST hype altogether and use adequate protocols for communication

As in design a custom TCP/IP protocol for every application?

I think there are a few GraphQL enthusiasts entering this thread right now, which has its own merits, but can live perfectly next to REST.

In my company we use graphql for getting, and REST for all other operations.

Might be an antipattern, but has served us well.

We're also using a special flavor of graphql that returns the data as flat dictionaries instead of deeply nested dictionaries. Whether you need that or not of course depends on the client design.

As in using GraphQL for this purpose, I presume. Or even offering an SQL-API. Seems more ideal than overly complicated URL schemes.

Or, you know, just use GraphQL as your protocol and be done with this. Everything that's problematic in REST is clearly specified here and stays very easy to use. You can focus on real problems from now on.

I think the problem goes deeper than REST & GraphQL; it boils down all the way to SQL, and not a lot of people are seeing it because it is a "boiling frog" problem. Let me explain.

Only a decade ago, all of the work was being done on the server and everything was great (... in a way :), because there you had SQL and you could ask quite complicated questions with it. Then along came XMLHttpRequest and the iPhone, and a lot of the "business logic" started slowly moving to the frontend. It was a natural thing to do when you needed just a little info from the server: basically, make "GET /items/1" mean "SELECT * FROM items WHERE id=1", because if you strip all the auth stuff away, that is the essence of that URL. So REST was born.

It worked for a while, while the frontend code wasn't doing a lot. But now we are in a state where everything is being done by the frontend. All those complicated questions we had to ask of our data did not go away, but one thing did: SQL. All that remained was REST, which maps to a very limited subset of SQL's power; basically, all it can do (and still be RESTful) is generate queries like "SELECT * FROM items WHERE cond1, cond2 ...". Devs were used to working mostly with queries like this (ignoring joins) because the db was close and there was no (big) penalty for firing 100 of them, but when things moved to the frontend, people started remembering that they need to take network latency into account, and suddenly "getting everything in one go" (which is basically a join) became important again. This is what sparked GraphQL's popularity: getting everything in one step, i.e. the ability to express a join. Everything else (standards/tooling/types) is of course important, but it could have been invented for REST also.

Having said all that, GraphQL is still far from the expressivity of SQL, and people will still wonder how they can express in GraphQL questions that are easily answered by SQL. Everybody is still thinking in terms of "defining an API", but recently a new way of thinking about this problem has started to emerge: defining a way to translate an HTTP request to a SQL query, i.e. trying to expose the power of SQL in a safe way to the frontend. There is no predefined list of endpoints/types; every request gets transformed to a SQL query and executed. This is what PostgREST [1] is doing. I know it sounds scary and dangerous, but it works :) Try the Starter Kit [2] to get a taste, and if GraphQL is your thing, you could explore the same idea using subZero [3].

[1] https://github.com/begriffs/postgrest

[2] https://github.com/subzerocloud/subzero-starter-kit

[3] https://subzero.cloud

Edit: typos
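The request-to-SQL translation PostgREST does can be illustrated with a toy version that only understands `col=eq.value` filters. This is a sketch of the idea only: real PostgREST has a much richer operator grammar, and a real implementation must also whitelist table/column identifiers against the schema, which this toy skips.

```python
# Toy PostgREST-style translator: turn a query string into a parameterised
# SQL query instead of predefining endpoints. Values go through bind
# parameters; identifiers would need schema whitelisting in real code.
import sqlite3
from urllib.parse import parse_qsl

def to_sql(table, query_string):
    filters = parse_qsl(query_string)
    where = " AND ".join(f"{col} = ?" for col, _ in filters)
    params = [v.removeprefix("eq.") for _, v in filters]
    sql = f"SELECT * FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'widget'), (2, 'gadget')")

sql, params = to_sql("items", "id=eq.1")   # i.e. GET /items?id=eq.1
print(conn.execute(sql, params).fetchall())
```

The point is that "GET /items?id=eq.1" and "SELECT * FROM items WHERE id=1" really are the same statement in two notations, which is the whole thesis of the comment above.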

We've recently started our transition from full rest to graphql. It's actually magical how much easier it is to do everything.

We've used join-monster to handle the sql generation so it's been a breeze to get everything up and running.

It's surprising how fast it is too. We used to use Sequelize as an ORM, but we were getting 400-500ms response times on simple requests. Now, for the most part, we get sub-100ms responses on complex queries.

Overall this seems like a clickbaity list of practices that have been well established for a while now. REST is easy when you're just doing CRUD. It's once you have actions besides "update" that things start to get a bit more interesting.

I disagree, it's fairly simple, people only find it hard because they're thinking in RPC terms - in commands instead of resources.

An action is simply a new resource you create. You don't send_emails(), you create a new email sending resource, which has its own URL you can check in later (giving you built-in resilience to network cuts and other problems).

I'm yet to find an action you can't easily model with resources and their representations.

Could you maybe elaborate a bit more on your send email example? We've been struggling a bit with API design and exactly this kind of thinking of everything as a resource. I'm still having a hard time imagining exactly what "a new email sending resource" would/should look like. Having something like /api/sendmail/confirmation would trigger my confirmation-mail-sending method internally, which is clearly RPC thinking. How would that look with REST? How would the REST version deal with /api/sendmail/{confirmation,thanks,resetpw} etc.?

Any RPC-style endpoint can be represented in REST by exposing the resulting event as a first-class object you can read and write.

Turn your verb into a noun: instead of "move $10 from Alice's to Bob's account", your clients will ask to "record a transfer of $10 from Alice to Bob". A transfer is a type of record the client creates with a POST, not a function call that results in money being moved. Creating it records the client's intended result; how and when to implement the actual transfer of money is not the client's concern. If Alice wants to check if Bob received the money, for example, she can GET /transfers/<ID returned by the POST response> and look at the fields.

For emails sent to users, I'm not sure what you'd call it. The simple verb-the-noun rule would make it a "send", but sometimes you need something better. Maybe a "thread" or "communication."

This would be a natural API design for CQRS/ES, but you could also use it (with adjustments, maybe) to present a RESTful interface to something that, behind the scenes, just moves the money right away and never thinks about transactions as an entity.

The key to modeling this way is in the name: REpresentational State Transfer, meaning clients send a snapshot of the current or desired state of a resource instead of calling functions that change it. And sometimes, to make it make sense, you have to invent a new kind of resource.
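Sticking with the transfer example above, a minimal in-memory sketch of "verb into noun" might look like this (the store and field names are invented for illustration; a real system would persist and settle asynchronously):

```python
# A transfer is a record the client POSTs, then polls with GET.
# The client records intent; the backend settles the money later.
import uuid

TRANSFERS = {}  # in-memory stand-in for a real store

def post_transfer(src, dst, amount):
    """POST /transfers -- record the intent, return the new resource's URL."""
    tid = str(uuid.uuid4())
    TRANSFERS[tid] = {"from": src, "to": dst, "amount": amount,
                      "status": "pending"}
    return f"/transfers/{tid}"

def get_transfer(url):
    """GET /transfers/<id> -- check on the recorded transfer."""
    return TRANSFERS[url.rsplit("/", 1)[1]]

url = post_transfer("alice", "bob", 10)
print(get_transfer(url)["status"])  # stays "pending" until the backend settles it
```

Alice's client keeps the returned URL, so even after a network cut or reboot it can come back and ask whether Bob got the money.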

What exactly are you trying to model? What type of emails are those?

Generally, trying to treat the server as a dumb API doesn't work well. If your client is the one who knows that something was confirmed, it should tell the server that, and let it worry about sending whatever emails it wants.

So you'd just PUT your resource with state=confirmed, and let the server take care of any side effects that might trigger.

On the other hand, if it's something like a mass email created by the user, then the client should POST to create a new "mass mailing resource", then it'd PUT all the changes made by the client, and finally PUT its state to "ready to send" so that the server can do so.

I don't know if it's helpful, but think of the server like a coworker with whom you can only interact by opening tickets on Jira or equivalent.

You choose the type, describe what you need, and immediately get back a ticket ID (that's the URL). Then you can check back to see the state of the ticket.

/api/sendmail, data passed in POST as JSON. POSTing doesn't have to be uniquely identified.

Yes, but that's RPC thinking. What if the queue is backlogged? What happens if the email doesn't get delivered? If your client asks for the email to be sent, it should deal with it during its lifecycle.

In my opinion, one shouldn't have a /sendmail API at all, the server should deal with that, but if one really needs the client to do that, then /sendmail should at least return a URL that represent the email being sent, so that the client can keep track of it even after the connection is lost or the device is rebooted.
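A sketch of that suggestion: the send request is accepted immediately, and the response points at a resource the client can poll later, surviving lost connections and reboots. All names here are illustrative:

```python
# POST /sendmail answers 202 Accepted plus a Location URL that represents
# the email being sent; the client can GET it later to track progress.
import itertools

OUTBOX = {}
_ids = itertools.count(1)

def post_sendmail(to, body):
    mid = next(_ids)
    OUTBOX[mid] = {"to": to, "body": body, "state": "queued"}
    return 202, {"Location": f"/outbox/{mid}"}

def get_outbox(mid):
    return 200, OUTBOX[mid]

status, headers = post_sendmail("bob@example.com", "hi")
print(status, headers["Location"])
print(get_outbox(1)[1]["state"])
```

The 202 + Location pattern is exactly the "fire, then check back" shape discussed below; the client never has to hold a connection open while the mail queue drains.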

We've already established REST has crippling limitations. Introducing RPC in it makes up for those.

You can also make it a long lived connection, that waits for a proper confirmation, which means you might be waiting ten seconds. Or you can make it a fire-and-forget operation, and have a resource to check if it was properly sent. Which is back to polling. Or you can use server-sent events to have the server notify you back when it's done.

But yes, unless you explicitly need to be able to send emails from clientside, I wouldn't expose a sendmail resource, and would leave that to the server. (I say as I recently implemented a resource that allows me to post an event from client side to allow the server to send it back. :| )

Ultimately, do what works. The pure REST cargo cult is dangerous. As long as what you do is clean, maintainable and ideally idempotent, you're good.

> We've already established REST has crippling limitations.

No, we haven't. I explicitly disagreed with that assertion. REST is not adequate for everything, but it doesn't have "crippling limitations". It's pretty good for its intended use.

> You can also make it a long lived connection, that waits for a proper confirmation, which means you might be waiting ten seconds.

You can't rely on the connection lasting that long. REST's design was a major success on the Internet in part because it naturally dealt with the connectivity limitations.

> Or you can make it a fire-and-forget operation, and have a resource to check if it was properly sent.

Yes. That's what REST means. That's my point!

> Or you can use server-sent events to have the server notify you back when it's done.

You still have to create a unique ID for the operation, so that the client can tell what is done. Having a URL as that ID is barely any effort.

> Ultimately, do what works. The pure REST cargo cult is dangerous. As long as what you do is clean, maintainable and ideally idempotent, you're good.

There's nothing clean and maintainable about having ad-hoc mechanisms that break the overall functioning of the API. If you're using a paradigm, be that REST, RPC, or anything else, you should have a major reason to break it.

... Then make an endpoint where you can get a task's status through its ID (your POST to sendmail would then return a task ID), and poll repeatedly until it's marked as done, wasting bandwidth. Your architecture, your choices ¯\_(ツ)_/¯

Or you can let the server notify you when it's done, and carry on with your work. As a bonus, most clients able to receive SSE already include automatic reconnection to the feed if it ever gets cut.

They're not incompatible, though. I'd make everything a REST endpoint (since you need to create a resource with an ID anyway for SSE; otherwise, how will the client know which email was sent?), and then add SSE as an optional performance-improvement layer on top of those endpoints that may benefit from it.

This is how WebSub (formerly PubSubHubbub) works on top of RSS/Atom, for example.

Creating a new resource for everything sounds very tedious though. A simple example might be upvoting. I think taking a step away from REST and making a /upvote action makes for an easier to consume API.

To be a bit more abstract, consider complex state transitions on a resource. In some cases it can make a lot more sense for a client to say "transition to this state" without explicit knowledge of how to do so. To do this restfully you could maybe update a virtual "state" field to the desired state, but to me that can feel very unnatural.

> Creating a new resource for everything sounds very tedious though. A simple example might be upvoting. I think taking a step away from REST and making a /upvote action makes for an easier to consume API.

How are you imagining the REST equivalent? Because that's also what I'd do: create a /post/<id>/upvote URL that the client would just have to POST to. That's perfectly RESTful: you're creating a new upvote; you don't have to care about the resulting resource if you don't need to manipulate it further.
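As a sketch of that reading, the upvote POST just creates a resource server-side; nothing about it is un-RESTful. Names below are illustrative, with no framework assumed:

```python
# POST /posts/<id>/upvote creates an upvote resource; the client never
# needs to inspect it afterwards. Re-upvoting by the same user is a no-op.

UPVOTES = {}   # post_id -> set of voters

def post_upvote(post_id, user):
    UPVOTES.setdefault(post_id, set()).add(user)
    return 201   # Created: a new upvote resource now exists

post_upvote(7, "alice")
post_upvote(7, "alice")       # same voter twice still leaves one upvote
print(len(UPVOTES[7]))
```

Keying upvotes by voter also gives you DELETE /posts/7/upvote (remove the caller's vote) for free, which the bare /upvote action verb doesn't.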

> To do this restfully you could maybe update a virtual "state" field to the desired state, but to me that can feel very unnatural.


when things start to get interesting, it's better to switch to JSON-RPC

Nice rules, except for #7. Class names are singular, as are most SQL tables in modern SQL. I think this pattern should be followed in URL design too, since it's more common to refer to a single person at /person/327 than to the list of all people at /person.

However, everyone designing an API should be aware that the REST principles really don’t work very well without HATEOAS, and HATEOAS does not require any “designed” URLs, just that URLs be persistent. Any client to a real REST (HATEOAS) API requires exactly one URL, the root. All other URLs should be discovered by the clients by links in the resources given by the API, starting with the resource present at the root URL.

I actually think the singular/plural question isn't too important - the key thing is that it's consistent across the whole API.

I think that discoverability is overrated, because you can discover URLs (as in, what options are available) but you can't discover what exactly those URLs will do (especially for non-GET options) or find URLs that will do exactly what you need, so any discovered URLs aren't usable anyway unless you already knew beforehand what exactly you are trying to discover.

The client (if it's not a human) can't magically discover the semantics, and a HATEOAS API can't properly describe the semantics - it will give you a relationship type string that might be descriptive of what the URL will do, and that's it.

In any case, you need to define a mapping between "I want to do X" and an item on the server side; and when writing a client there's not much practical difference (only a conceptual one) between linking "do X" to an URL string versus linking "do X" to a HATEOAS relationship string. You gain some stability if the service renames some methods, but unless you're really sure that it was just a cosmetic change and none of the semantics has changed, you need to re-verify everything anyway if it happens.

I can't help feeling a great deal of the effort that goes into API definition and maintenance is pointless and a waste of time.

I now have to deal a lot with an API which was created by people sharing your view. It is a nightmare.

I love the REST principles. The RESTful ideas framework (I call it like this because I lack a better name for them) helped me organize my applications in a much more consistent manner.

I wonder if a URI like http://api.college.com/students/3248234/courses/physics/stud... would actually make sense to get a list of all the students who take the same physics course that student 3248234 takes. If yes, is there a web framework where such generic routes can be defined?

Ruby on Rails easily supports arbitrarily nested resourceful routes, but they strongly (and wisely, in my experience) advise against precisely this type of deep nesting.



If you're using the URL as a hierarchical representation, it wouldn't make sense. What happens if you ask for a circular reference like /students/3248234/courses/physics/students/3248234? Instead, you can query the student to get the course's URI, then access /courses/<course_id>/students.

Two points that I don't understand.

1. Using a dash instead of an underscore as a space replacement. Underscore is much more natural as a space replacement, and dash is actually a punctuation character. The article gives the following reason: "Text viewer applications (browsers, editors, etc.) often underline URIs to provide a visual cue that they are clickable. Depending on the application's font, the underscore (_) character can either get partially obscured or completely hidden by this underlining." It's not convincing at all; I've never encountered this glitch.

2. The keep-it-simple rule applies here. Although your inner-grammatician will tell you it's wrong to describe a single instance of a resource using a plural, the pragmatic answer is to keep the URI format consistent and always use a plural.

But why plural and not singular? English is a weird language with numerous exceptions in its plural forms. Isn't it simpler to use the singular form? I always use the singular form everywhere; it works fine for me.

More about point 1 here: https://stackoverflow.com/a/6153129

The argument with the most weight right now is probably that everyone uses dashes and there are no significant advantages to using underscores. So, to help make the web a little more consistent, just use dashes (unless you have some unusually good reason not to).

Regarding your point in 2: do you store your docs in a directory called Documents or Document?

I think this point is great.

When you name the resource you are naming a directory/table. You pick a file/row in it by either using an ID or query parameters. That's how you reduce the many to the one.

Just like how you say "one of the students" (singular lookup, /students/42) or "students who study CS" (plural query, /students?studies=cs). Not "one of the student" (singular, looks ok: /student/42) or "student studying CS" (plural, but reads as singular: /student?studies=cs).

I would disagree on the artificial file endings. It's a very transparent way to allow the client to request a resource with a specific content type, visible right there in the URI. After all, the .json and .xml versions are distinct representations of the same resource and can reasonably have their own URIs.

A URI should represent a resource, regardless of its serialisation format. Serialisation really is part of the HTTP headers, and the support in there is great. Using this allows the web browser and the web server to completely negotiate an acceptable format themselves.

Adding the serialisation format to the URI also creates a conflict: what would happen if you request a .json, but the web browser doesn't accept this format?

The last case sounds like user error to me. You could say the same for using Accept headers - what happens if the user tells their line-drawing client to request a spreadsheet?

In the case of accept headers, you would have actual negotiation; the client sends all of the formats it accepts, and the server chooses the most suitable one. If none are available, it would return a 406 Not Acceptable.

The problem with putting another, incompatible serialisation format on top of that is that it creates conflicts, and inherently requires one to reinvent the wheel, or have a less flexible solution.

I simply fail to see what problem it solves.
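The negotiation described above (q-weighted preferences with a 406 fallback) can be sketched as a minimal parser. This is a simplified model for illustration: it handles only the `q=` parameter and exact media-type matches, no wildcards; `negotiate` and its arguments are made-up names:

```python
# Parse the client's Accept header, pick the best format the server
# offers, or answer 406 Not Acceptable if none of them match.

def negotiate(accept_header, offered):
    prefs = []
    for part in accept_header.split(","):
        mt, _, params = part.strip().partition(";")
        q = 1.0
        if params.strip().startswith("q="):
            q = float(params.strip()[2:])
        prefs.append((q, mt.strip()))
    # try media types in descending order of the client's preference
    for _, mt in sorted(prefs, reverse=True):
        if mt in offered:
            return 200, mt
    return 406, None

# "if you have json, give me that, otherwise I'll take XML"
print(negotiate("application/json;q=0.9, application/xml;q=0.1",
                {"application/xml"}))
```

A single round trip resolves the whole exchange, which is the advantage being claimed over a list-of-URLs handshake.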

In practical applications, I'd want it to be the opposite way around - the server will be coded to support multiple formats since it serves many clients and many types of clients, but any particular client explicitly targeting that server will likely implement just a single format for exchanging data with your app.

So you'd want the server to send all the formats it can provide (i.e. as a list of different resource URLs) and the client to choose whatever it prefers; entirely the other way around from what you describe.

Furthermore, the server being able to dynamically respond "I don't do your preferred format" is not a desirable outcome; from the client's perspective you'd want to know whether the format is available right away, and for a whole class of URLs, not for an individual URL at the very last moment. I.e., I'd want it to be equivalent to a "compile-time check", where a URL scheme promises that the server will always be able to return type X for such URLs, instead of a "runtime check" where the availability of a particular format is known only after you try it.

> So you'd want the server to send all the formats it can provide (i.e. as a list of different resource URLs) and the client chooses whatever it prefers, entirely the other way around as you describe.

What is the advantage of that, compared to having a normal Accept header? Please note that in the negotiation phase, a server uses the format that the client actually prefers; the client is able to tell the server these things (e.g. "if you have JSON, give me that, otherwise I'll take XML").

Your solution requires an additional round-trip for this same negotiation.

It doesn't have to be done in a bad/incompatible/new way. Here's what I did for an API I worked on recently that supports JSON and XML:

The general route for a resource is an endpoint that can serve both application/json and application/xml, the ".json" route identifies an endpoint that will serve only application/json and ".xml" identifies an endpoint that will serve application/xml.

None of that conflicts with normal content negotiation, and normal conflict resolution is used. So if the client were to send a request to a .json endpoint with Accept: application/xml it would get a 406 response.

The problem it solves is to make it straightforward to send ad hoc API requests from a browser but still control the format. The browsers I work with don't generally make it easy to specify an Accept header. This is useful for demos, issue investigations, and ad hoc testing.
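The scheme described above can be sketched as follows. This is a simplified model (exact-match Accept handling only, no q-values or wildcards beyond `*/*`), and the route/format names are illustrative:

```python
# A ".json"/".xml" suffix pins the endpoint to one format; the bare route
# falls back to Accept-based negotiation. A pinned endpoint that conflicts
# with the Accept header answers 406, consistent with normal negotiation.

def serve(path, accept="*/*"):
    pinned = None
    if path.endswith(".json"):
        pinned = "application/json"
    elif path.endswith(".xml"):
        pinned = "application/xml"
    if pinned:
        if accept in ("*/*", pinned):
            return 200, pinned
        return 406, None                 # e.g. .json endpoint, Accept: xml
    # bare route: honour whichever format the client asked for
    if accept == "*/*":
        return 200, "application/json"   # server's default representation
    return 200, accept

print(serve("/students/42.json"))                     # pinned to JSON
print(serve("/students/42.json", "application/xml"))  # conflict -> 406
```

Typing the `.json` URL into a browser (which sends a permissive Accept) then just works, without breaking clients that negotiate properly.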

> The problem it solves is to make it straightforward to send ad hoc API requests from a browser but still control the format. The browsers I work with don't generally make it easy to specify an Accept header. This is useful for demos, issue investigations, and ad hoc testing.

First of all, these things are supported by HTTP headers; you can send "Accept: application/json;q=0.9, application/xml;q=0.1" and the server will always send JSON if it supports it.

The problem you mention is fair, but REST APIs are designed for consumption by computers, not humans. So while it is a fair point, it is explicitly a non-goal for REST APIs.

On your second paragraph... I don't think you can generally state that REST APIs don't need to be developable, supportable, teachable/learnable. Ad hoc requests support these things in useful ways that scripted requests do not. That makes it a fair goal for REST APIs.

On your first paragraph, sorry I don't understand your point. I showed how this can be done in a way that is fully consistent with standard content negotiation, like your example.

That's a fair response, but you're presupposing a client that behaves as it's supposed to.

To my mind, the problem it solves is familiarity - average users are accustomed to "file.html", not "file(oh hey, send the text/html version please)".

All of this is heavily influenced by who your users actually are, of course.

  > the client sends all of the formats it accepts
And currently many clients only support JSON. As do many backend systems.

  > Serialisation really is part of the HTTP headers, and the
  > support in there is great.
True. But being able to get XML or JSON just by changing what you type into browser URL field is a nice and helpful feature.

  > what would happen if you request a .json, but the web
  > browser doesn't accept this format ?
And what happens when you do the same via HTTP headers?

The thing is that resource should have ideally one URI and selecting representation should be done through content negotiation (e.g. Accept header). Of course having an option to use file extensions makes debugging easier in a browser.

I don't mean to pick on you at all, and I hope this doesn't come off that way... but this is a very interesting statement.

You start off with a statement of ideals which you then directly contradict with a real, practical concern.

I think that has to mean the ideal isn't a good one and in fact -- here's the interesting part, IMO -- one or more of the considerations on which the ideal was based are also invalid.

Personally, I think the "lesson" here is that we should not aspire to have a one-to-one relationship between URI and resource. The attraction is that it's simple. But I think it's clearly too simple; that is, it is too inflexible to be useful: resources cannot be transferred between hosts or shared by multiple hosts; resources can only be organized in a single fixed way (for all time!) within a service (itself with a fixed organization within its host).

It seems better not to try to impose these restrictions on resource identity and instead separate how a resource is identified from how it is retrieved. E.g. use a UUID to unambiguously identify a resource across all space and time, but retrieve the resource using a set of URLs which can change over time. I mean, the idea that a resource can move (its URL has changed) has been built into HTTP for decades. Why attempt to create a service that assumes this does not happen?
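As a sketch of that separation (the directory class and its method names are made up for illustration; the keys would be UUIDs in practice):

```python
# Hypothetical sketch: a resource has one stable identifier, while the
# URLs it can currently be fetched from are a mutable, many-to-one mapping.
class ResourceDirectory:
    def __init__(self):
        self.locations = {}  # identifier -> set of current URLs

    def register(self, resource_id, url):
        self.locations.setdefault(resource_id, set()).add(url)

    def move(self, resource_id, old_url, new_url):
        # The resource "moves": its URLs change, its identity does not.
        urls = self.locations[resource_id]
        urls.discard(old_url)
        urls.add(new_url)

    def resolve(self, resource_id):
        # Any current URL is equally valid for retrieval.
        return sorted(self.locations.get(resource_id, ()))
```

Clients hold on to the identifier; only the resolution step needs to know where the resource happens to live today.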

I agree. Also, it can be more convenient when doing a GET request. Imagine an API generating images like /cat/300x300.jpg: the ability to switch to .png is really handy. Of course, you could do /cat/300x300/png and not use extensions.

I think it's up to particular use cases.

Yeah, there aren't many situations where that really washes - it's a reasonable rule to have in mind when designing your API, but it seems to clash with the principle of designing "for your clients, not your data". Your clients are often using a browser and don't get to pick the content-type they request.

I guess you could contrive to have /resources/xml/apples and /resources/json/apples serve the same content, but otherwise it's query strings or interposing a 'choose a representation for your content' page.

Naturally, none of this is a problem if your clients are not humans, I suppose.

I think these are rather strongly written, but certainly a reasonable list. Servers should be carefully implemented to be forgiving of bad input, but always ensure perfect output.

Orthogonal: I wish there was a good RPC protocol for the web. REST really sucks when you're not doing CRUD (which, frankly, is way more often than expected).

I also prefer RPC-style interfaces, and tend to use it internally - i.e., not exposed as public API, since REST is usually the expected standard. In one application, I was able to use the same group of commands for both AJAX and WebSockets, which just mapped to a folder of functions. It was a pleasure to forget the boundary between client and server, and treat the server as just an asynchronous function call away. I suppose there's nothing stopping me from using a similar structure for external parties to consume the data, it's just that there's no established standard/protocol for how to expose and document such APIs..?
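The "folder of functions" style described above might look roughly like this (all names here are invented for illustration; the point is one command table shared by both transports):

```python
# Hypothetical sketch: a registry of named RPC commands that both an
# AJAX endpoint and a WebSocket message loop can dispatch into.
COMMANDS = {}

def command(fn):
    """Register a function as an RPC command under its own name."""
    COMMANDS[fn.__name__] = fn
    return fn

@command
def add_course(student_id, course):
    # Example command; a real one would touch the data layer.
    return {"student": student_id, "enrolled": course}

def dispatch(name, params):
    """Single entry point, whichever transport the request arrived on."""
    if name not in COMMANDS:
        return {"error": f"unknown command {name!r}"}
    return COMMANDS[name](**params)
```

The transport layer only ever calls `dispatch`, which is what makes the client/server boundary feel like an async function call.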

There's nothing problematic about non-CRUD in REST, it's just a matter of not thinking in RPC terms.

On a related topic, I hear the best practice is to make use of HTTP methods like PUT and DELETE. Am I in a tiny minority that wishes we could use verbs in URIs, i.e. http://api.blah.com/student/32/delete, instead of relying on the DELETE HTTP method to communicate that point?

Yes, because request methods provide a uniform way of communicating meaningful information to HTTP infrastructure like caches. If a request uses the DELETE method then a cache in the middle can know something about the meaning of that request (e.g. it's idempotent) and change its behaviour accordingly. If that information lives only in the URI then there's no way for a cache to understand its significance.
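A toy model of that point (a deliberately simplified cache, not any real intermediary):

```python
# Sketch: an intermediary can act on the request method alone, with no
# understanding of the URI. GET responses are cached; any non-GET
# request forwarded for a URI also invalidates its cached entry.
class MethodAwareCache:
    def __init__(self, origin):
        self.origin = origin   # callable: (method, uri) -> body
        self.store = {}

    def request(self, method, uri):
        if method == "GET":
            if uri not in self.store:
                self.store[uri] = self.origin(method, uri)
            return self.store[uri]
        # Unsafe method: forward it and drop the cached copy.
        self.store.pop(uri, None)
        return self.origin(method, uri)
```

Note that if the delete lived in the URI as /student/32/delete, this cache would happily keep serving a stale /student/32, because nothing ties the two URIs together.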

Wouldn't that place "delete" at the same level as, e.g., "courses" i.e. http://api.blah.com/student/32/courses?

He's advocating arbitrary semantic URI schemes (that was just a simple example) rather than HTTP verbs. Ultimately, you have to document them in your APIs anyway, and the caching reasoning (along with all the others I've heard) just hasn't been useful in practice.

URI normalization takes care of trailing slashes (RFC3986).
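For a concrete example of RFC 3986 machinery, Python's urllib applies the spec's reference-resolution algorithm, including remove_dot_segments (section 5.2.4):

```python
from urllib.parse import urljoin

# urljoin performs RFC 3986 reference resolution; dot segments in the
# relative reference are removed during path merging.
base = "http://api.example.com/students/32/"
resolved = urljoin(base, "../courses/./physics")
print(resolved)  # http://api.example.com/students/courses/physics
```

(The example hostname is made up; any absolute base URL behaves the same way.)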

If you are parsing URLs yourself try to stick to the WHATWG URL standard (https://url.spec.whatwg.org/).

I hadn't seen this before - it's horrifying.

Right up the top it declares that its intent is to _obsolete_ the existing RFCs - and then just breezily drops this in: "As the editors learn more about the subject matter the goals might increase in scope somewhat."

Honestly, I'm gobsmacked. I really hope nobody's taking this document seriously.

Welcome to the WHATWG standards. The only ones who have any say in WHATWG are the large browsers, and most of the power is with Google.

Their approach is also not to standardize new things, but only to describe what the largest browsers (often just Chrome) already do.

All you need to know about WHATWG is that the monstrosity they're billing as "HTML" has neither version numbers nor a formal grammar.

It may be bad but unfortunately it's the closest to a serious/workable standard nowadays regarding URLs.

This post makes me very happy to be using GraphQL. (not trying to start another flamewar)

Rather than following arbitrary "best practices" you dig up from random articles and might or might not know or follow, GraphQL forces you to write your API a certain way.

That is not to say GraphQL is a silver bullet; it has its own problems. But at least I can concentrate on my application and how it needs to work rather than reading more articles about how exactly to name my URLs.

GraphQL is just another set of "arbitrary" best practices. If you're looking for an ambiguity-free, prescriptive approach to implementing a REST API, there's plenty out there (MS, Google, and tons of other companies have very prescriptive approaches to building REST APIs), and some frameworks like Rails push you strongly in one direction (for example, most of the rules in this article aren't decisions you need to make in Rails).

Sure, but at least these arbitrary best practices are enforced at a framework level. I can't not adhere to the GraphQL way of doing things!

RFC 3986.

Are these rules perfect?
