> The only thing you can use an identifier for is to refer to an object. When you are not dereferencing, you should not look at the contents of the URI string to gain other information.
- Tim Berners-Lee
By this quote, caring about the hierarchical nature of URIs, pluralization, and enhancements to readability is somewhere between irrelevant and actively harmful (since they promote the idea that the URI contains information beyond identification).
By the logic that a URI is supposed to be used for identification and identification alone, these two URLs are identical in terms of value:
The way I interpreted the quote is: Don't parse the URI in your program to try to gain information about a resource other than its ID. It doesn't mean you can't put information in there to make it easier for human developers to understand.
In other words, pretty names are for humans, not machines.
If you are worrying about URL readability you aren't doing REST.
The single best slideshare I've found on REST is Teach a Dog to REST. Old but gold. https://www.slideshare.net/landlessness/teach-a-dog-to-rest
(And a shameless plug - https://medium.com/@rdsubhas/pitiful-restful-urls-5d576ffccb...)
> RESTful URLs are not to be treated like SEO URLs
So the recommendations in the article are actually valid and useful, but we'd need to come up with another name to properly distinguish which "REST" we're talking about, at least unless/until one of these usages fades into obscurity.
"Underspecified, broken half-implementation of RPC" is quite precise, but most
of the programmers don't like being reminded that they implement a brain-dead
I never understood why people don't use one of the already-defined RPC protocols, instead rolling their own versions (each different) that don't even signal errors properly.
If planning ahead, does that warrant designing your route such as:
`/courses` where you get a list of courses and
`/courses?studentId=12345` where you get courses scoped to a student...
Or is it better to just create a separate route, so that you have `/courses` AND `/student/12345/courses`?
I'm concerned that the latter results in duplication of code and possibly more confusion (i.e. it's not clear whether the API requires POST to `/courses` or `/student/12345/courses` to create a course for a student), while the former may cause (lack of) caching problems.
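For what it's worth, here's a minimal sketch (assuming Flask; `find_courses` and the handler names are made up) showing that both styles can delegate to one lookup, so the duplication is mostly a second route declaration:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def find_courses(student_id=None):
    # Placeholder for the real lookup; would filter by student when given.
    return [{"id": 1, "title": "Physics"}, {"id": 2, "title": "Math"}]

@app.route("/courses")
def list_courses():
    # Handles both /courses and /courses?studentId=12345.
    return jsonify(find_courses(request.args.get("studentId")))

@app.route("/students/<int:student_id>/courses")
def student_courses(student_id):
    # The nested route reuses the same lookup.
    return jsonify(find_courses(student_id))
```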
I not only create both paths, I also specifically create `relationships` links to control the relationship itself. That way, I have a RESTful way of, say, making a signed article anonymous: DELETE articles/2/relationships/author. Note that this doesn't delete the author, just the relationship itself.
Your worry about the duplication of code is well founded, but slightly off. For what are called "related" links (articles/4/author) we only ever issue GETs. There are many reasons for this and you've started to catch onto a few of them.
This means a _lot_ of routes (over 300 right now), but with smart abstractions it's not so bad. I've been thinking of doing a weird fork of Rails based on how productive I've become. Basically I want to follow Rails but I want to override a lot of their decisions that don't fit JSON API's needs. Maybe one day.
With a minor addition that I find helpful: I add an `also` link in the relationship body. GET articles/5/author would have `also` set to `users/45`. In this manner I can easily get a reference to the related resource, and I make it explicit that the resource isn't dependent on its relationship with the parent resource continuing to exist.
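A hypothetical sketch of that scheme (Flask; the `also` field follows the convention described above, everything else is made up):

```python
from flask import Flask, jsonify

app = Flask(__name__)
articles = {2: {"title": "Hello", "author_id": 45}}

@app.route("/articles/<int:article_id>/author")
def related_author(article_id):
    # "Related" link: read-only, and advertises the canonical resource.
    author_id = articles[article_id]["author_id"]
    return jsonify({"id": author_id, "also": f"/users/{author_id}"})

@app.route("/articles/<int:article_id>/relationships/author", methods=["DELETE"])
def unlink_author(article_id):
    # Deletes only the relationship, never the author itself.
    articles[article_id]["author_id"] = None
    return "", 204
```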
POSTing to '/courses' creates a new course. However, POSTing to '/student/1234/courses' creates a relationship/association between a student and a course.
So, it's two different things IMO and not a duplication of logic.
See here for more information:
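In client terms, the difference looks like this (a rough sketch with Python's requests; the host and payloads are made up):

```python
import requests

BASE = "https://api.example.com"  # hypothetical host

# Creates a brand-new course resource:
r = requests.post(f"{BASE}/courses", json={"title": "Physics"})

# Creates only the student<->course association:
requests.post(f"{BASE}/student/1234/courses", json={"courseId": r.json()["id"]})
```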
Nesting of resources in a REST API can become problematic for these reasons, so another option is to get rid of the nesting and use query parameters to filter on whatever criteria you want. This is the motivation for PostgREST, where the author views the shortcomings of nested API resources as similar to those of hierarchical databases. GraphQL solves this similarly, and it has a richer query language, but it's missing some of the benefits of REST, like caching.
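In practice that looks like one flat resource plus filters (a sketch with Python's requests; the host and parameter names are made up):

```python
import requests

BASE = "https://api.example.com"  # hypothetical host

# Instead of /students/12345/courses, filter the flat collection:
requests.get(f"{BASE}/courses", params={"studentId": 12345})

# Filters compose without inventing new nested paths:
requests.get(f"{BASE}/courses", params={"teacherId": 7, "semester": "fall"})
```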
URIs are easier to deal with, but not _that_ much easier if you're programmatically sending the requests, as one might expect to do with an API.
* Other data formats also exist
It's pretty common advice that any API requiring user authentication should receive the credentials via POST. In fact, it's one of the first things pen testers will check for, and you'd fail PCI DSS vulnerability scans for exposing such APIs via GET.
Disclaimer: I've worked on multiple projects that have been pen tested, been audited by the UK Gambling Commission, and/or had to adhere to PCI Data Security Standards.
Why do you want to 'disclaim' that? Assuming it's true, you might have meant 'disclosure', but I think what you really mean is much closer to 'source' - i.e. 'why I know this'.
Both return the users attached to a particular account.
Might be an antipattern, but has served us well.
We're also using a special flavor of graphql that returns the data as flat dictionaries instead of deeply nested dictionaries. Whether you need that or not of course depends on the client design.
Having said all that, GraphQL is still far from the expressivity of SQL, and people will still wonder how they can express in GraphQL questions that are easily answered by SQL.
Everybody is still thinking in terms of "defining an API", but recently a new way of thinking about this problem has started to emerge: defining a way to translate an HTTP request to a SQL query, i.e. trying to expose the power of SQL in a safe way to the frontend. There is no predefined list of endpoints/types; every request gets transformed to a SQL query and executed. This is what PostgREST is doing. I know it sounds scary and dangerous, but it works :) Try the Starter Kit to get a taste, and if GraphQL is your thing, you could explore the same idea using subZero.
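To give a taste of the idea, PostgREST maps filter operators in the query string onto SQL (the table and columns below are invented; the operator syntax is PostgREST's):

```python
import requests

BASE = "http://localhost:3000"  # a local PostgREST instance

# Roughly: SELECT name, grade FROM students WHERE age >= 18 ORDER BY name
requests.get(f"{BASE}/students",
             params={"select": "name,grade", "age": "gte.18", "order": "name"})
```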
We've used join-monster to handle the SQL generation, so it's been a breeze to get everything up and running.
It's surprising how fast it is, too. We used to use Sequelize as an ORM, but we were getting 400-500ms response times on simple requests. Now, for the most part, we get sub-100ms responses on complex queries.
An action is simply a new resource you create. You don't send_emails(), you create a new email-sending resource, which has its own URL you can check on later (giving you built-in resilience to network cuts and other problems).
I've yet to find an action you can't easily model with resources and their representations.
Turn your verb into a noun: instead of "move $10 from Alice's to Bob's account", your clients will ask to "record a transfer of $10 from Alice to Bob". A transfer is a type of record the client creates with a POST, not a function call that results in money being moved. Creating it records the client's intended result; how and when to implement the actual transfer of money is not the client's concern. If Alice wants to check if Bob received the money, for example, she can GET /transfers/<ID returned by the POST response> and look at the fields.
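Concretely, the client interaction might look like this (a sketch with Python's requests; the host, paths and fields are made up):

```python
import requests

BASE = "https://api.example.com"  # hypothetical host

# "Record a transfer" rather than "move money":
r = requests.post(f"{BASE}/transfers",
                  json={"from": "alice", "to": "bob", "amount": 10})
transfer_url = r.headers["Location"]  # e.g. /transfers/789

# Later, Alice can check whether the transfer went through:
status = requests.get(BASE + transfer_url).json()["status"]
```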
For emails sent to users, I'm not sure what you'd call it. The simple verb-the-noun rule would make it a "send", but sometimes you need something better. Maybe a "thread" or "communication."
This would be a natural API design for CQRS/ES, but you could also use it (with adjustments, maybe) to present a RESTful interface to something that, behind the scenes, just moves the money right away and never thinks about transactions as an entity.
The key to modeling this way is in the name: REpresentational State Transfer, meaning clients send a snapshot of the current or desired state of a resource instead of calling functions that change it. And sometimes, to make it make sense, you have to invent a new kind of resource.
Generally, trying to treat the server as a dumb API doesn't work well. If your client is the one who knows that something was confirmed, it should tell the server that, and let it worry about sending whatever emails it wants.
So you'd just PUT your resource with state=confirmed, and let the server take care of any side effects that might trigger.
On the other hand, if it's something like a mass email created by the user, then the client should POST to create a new "mass mailing resource", then it'd PUT all the changes made by the client, and finally PUT its state to "ready to send" so that the server can do so.
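Something like this, roughly (Python requests; the host, paths and the state field are illustrative):

```python
import requests

BASE = "https://api.example.com"  # hypothetical host

# Create the draft, edit it, then hand it off via a state change:
r = requests.post(f"{BASE}/mailings", json={"subject": "Hello"})
mailing_url = r.headers["Location"]

requests.put(BASE + mailing_url,
             json={"subject": "Hello", "body": "...", "state": "draft"})
requests.put(BASE + mailing_url,
             json={"subject": "Hello", "body": "...", "state": "ready_to_send"})
```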
You choose the type, describe what you need, and immediately get back a ticket ID (that's the URL). Then you can check back to see the state of the ticket.
In my opinion, one shouldn't have a /sendmail API at all (the server should deal with that), but if one really needs the client to do it, then /sendmail should at least return a URL that represents the email being sent, so that the client can keep track of it even after the connection is lost or the device is rebooted.
You can also make it a long lived connection, that waits for a proper confirmation, which means you might be waiting ten seconds. Or you can make it a fire-and-forget operation, and have a resource to check if it was properly sent. Which is back to polling. Or you can use server-sent events to have the server notify you back when it's done.
But yes, unless you explicitly need to be able to send emails from clientside, I wouldn't expose a sendmail resource, and would leave that to the server. (I say as I recently implemented a resource that allows me to post an event from client side to allow the server to send it back. :| )
Ultimately, do what works. The pure REST cargo cult is dangerous. As long as what you do is clean, maintainable and ideally idempotent, you're good.
No, we haven't. I explicitly disagreed with that assertion. REST is not adequate for everything, but it doesn't have "crippling limitations". It's pretty good for its intended use.
> You can also make it a long lived connection, that waits for a proper confirmation, which means you might be waiting ten seconds.
You can't rely on the connection lasting that long. REST's design was a major success on the Internet in part because it naturally dealt with connectivity limitations.
> Or you can make it a fire-and-forget operation, and have a resource to check if it was properly sent.
Yes. That's what REST means. That's my point!
> Or you can use server-sent events to have the server notify you back when it's done.
You still have to create a unique ID for the operation, so that the client can tell what is done. Having a URL as that ID is barely any effort.
> Ultimately, do what works. The pure REST cargo cult is dangerous. As long as what you do is clean, maintainable and ideally idempotent, you're good.
There's nothing clean and maintainable about having ad-hoc mechanisms that break the overall functioning of the API. If you're using a paradigm, be that REST, RPC, or anything else, you should have a major reason to break it.
Or you can let the server notify you when it's done, and carry on with your work. As a bonus, most clients able to receive SSE already include automatic reconnection to the feed if it ever gets cut.
This is how WebSub (formerly PubSubHubbub) works on top of RSS/Atom, for example.
To be a bit more abstract, consider complex state transitions on a resource. In some cases it can make a lot more sense for a client to say "transition to this state" without explicit knowledge of how to do so. To do this RESTfully you could maybe update a virtual "state" field to the desired state, but to me that can feel very unnatural.
How are you imagining the REST equivalent? Because that's also what I'd do: create a /post/<id>/upvote URL that the client would just have to POST to. That's perfectly RESTful, you're creating a new upvote, you don't have to care about the resulting resource if you don't need to manipulate it further.
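In client terms (a sketch with Python's requests; the host is made up, the path is the one from above):

```python
import requests

BASE = "https://api.example.com"  # hypothetical host

# "Upvote" modeled as creating a resource, not calling a function:
r = requests.post(f"{BASE}/post/42/upvote")
print(r.status_code)  # a server designed this way would return 201 Created
```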
> To do this RESTfully you could maybe update a virtual "state" field to the desired state, but to me that can feel very unnatural.
However, everyone designing an API should be aware that the REST principles really don’t work very well without HATEOAS, and HATEOAS does not require any “designed” URLs, just that URLs be persistent. Any client to a real REST (HATEOAS) API requires exactly one URL, the root. All other URLs should be discovered by the clients by links in the resources given by the API, starting with the resource present at the root URL.
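A sketch of what such a client looks like (Python requests; the JSON link structure here is an assumption, real APIs vary):

```python
import requests

# The only URL the client hardcodes is the root:
root = requests.get("https://api.example.com/").json()

# Every other URL is discovered from links in the responses:
students_url = root["links"]["students"]
students = requests.get(students_url).json()
courses = requests.get(students["items"][0]["links"]["courses"]).json()
```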
The client (if it's not a human) can't magically discover the semantics, and a HATEOAS API can't properly describe the semantics - it will give you a relationship type string that might be descriptive of what the URL will do, and that's it.
In any case, you need to define a mapping between "I want to do X" and an item on the server side; and when writing a client there's not much practical difference (only a conceptual one) between linking "do X" to a URL string versus linking "do X" to a HATEOAS relationship string. You gain some stability if the service renames some methods, but unless you're really sure that it was just a cosmetic change and none of the semantics has changed, you need to re-verify everything anyway if it happens.
I wonder if a URI like http://api.college.com/students/3248234/courses/physics/stud... would actually make sense to get a list of all the students who take the same physics course that student 3248234 takes. If yes, is there a web framework where such generic routes can be defined?
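Most general-purpose routers can express this; e.g. a Flask sketch (the lookup is a stub, and the route mirrors the URI above):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/students/<int:student_id>/courses/<course>/students")
def classmates(student_id, course):
    # "Other students in the same <course> course that this student takes."
    return jsonify({"studentId": student_id, "course": course,
                    "students": []})  # stubbed result
```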
1. Using dashes instead of underscores as a space replacement. Underscore is much more natural as a space replacement, and dash is actually a punctuation character. The article gives the following reason: "Text viewer applications (browsers, editors, etc.) often underline URIs to provide a visual cue that they are clickable. Depending on the application's font, the underscore (_) character can either get partially obscured or completely hidden by this underlining." It's not convincing at all; I've never encountered this glitch.
2. The keep-it-simple rule applies here. Although your inner grammarian will tell you it's wrong to describe a single instance of a resource using a plural, the pragmatic answer is to keep the URI format consistent and always use a plural.
But why plural and not singular? English is a weird language with numerous exceptions in its plural forms. Isn't it simpler to use the singular form? I always use the singular form everywhere; works fine for me.
The argument with the most weight right now is probably that everyone uses dashes and there are no significant advantages to using underscores. So, to help make the web a little more consistent, just use dashes (unless you have some unusually good reason not to).
When you name the resource you are naming a directory/table. You pick a file/row in it by either using an ID or query parameters. That's how you reduce the many to the one.
Just like how you say "one of the students" (singular lookup, /students/42) or "students who study CS" (plural query, /students?studies=cs). Not "one of the student" (singular, looks ok: /student/42) or "student studying CS" (plural, but reads as singular: /student?studies=cs).
Adding the serialisation format to the URI also creates a conflict: what would happen if you request a .json, but the web browser doesn't accept this format?
The problem with putting another, incompatible serialisation format on top of that is that it creates conflicts, and inherently requires one to reinvent the wheel, or have a less flexible solution.
I simply fail to see what problem it solves.
So you'd want the server to send all the formats it can provide (i.e. as a list of different resource URLs) and the client chooses whatever it prefers, entirely the other way around as you describe.
Furthermore, the server being able to dynamically respond "I don't do your preferred format" is not a desirable outcome; from the client's perspective you'd want to know right away if the format is available, and for a whole class of URLs, not an individual URL at the very last moment. I.e., I'd want it to be equivalent to a "compile-time check", where a URL scheme promises that the server will always be able to return type X for such URLs, instead of a "runtime check" where the availability of a particular format is known only after you try it.
What is the advantage of that, compared to using the normal Accept header? Please note that in the negotiation phase, a server uses the format that the client actually prefers; the client is able to tell the server these things (e.g. "if you have JSON, give me that, otherwise I'll take XML").
Your solution requires an additional round-trip for this same negotiation.
The general route for a resource is an endpoint that can serve both application/json and application/xml, the ".json" route identifies an endpoint that will serve only application/json and ".xml" identifies an endpoint that will serve application/xml.
None of that conflicts with normal content negotiation, and normal conflict resolution is used. So if the client were to send a request to a .json endpoint with Accept: application/xml it would get a 406 response.
The problem it solves is to make it straightforward to send ad hoc API requests from a browser while still controlling the format. The browsers I work with don't generally make it easy to specify an Accept header. This is useful for demos, issue investigations, and ad hoc testing.
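A sketch of how a suffix endpoint can stay consistent with negotiation (Flask; the resource is made up):

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.route("/apples.json")
def apples_json():
    # This endpoint serves only application/json; honor Accept anyway.
    if not request.accept_mimetypes.accept_json:
        abort(406)  # client refuses the one format this endpoint offers
    return jsonify([{"kind": "fuji"}])
```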
First of all, these things are supported by HTTP headers; you can send "Accept: application/json;q=0.9, application/xml;q=0.1" and the server will always send JSON if it supports it.
The problem you mention is fair, but REST APIs are designed for consumption by computers, not humans. So while it is a fair point, it is explicitly a non-goal for REST APIs.
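For completeness, sending that header from code is trivial (Python requests; the URL is made up):

```python
import requests

r = requests.get("https://api.example.com/apples",  # hypothetical URL
                 headers={"Accept": "application/json;q=0.9, application/xml;q=0.1"})
print(r.headers.get("Content-Type"))  # JSON if the server supports it
```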
On your first paragraph, sorry I don't understand your point. I showed how this can be done in a way that is fully consistent with standard content negotiation, like your example.
To my mind, the problem it solves is familiarity - average users are accustomed to "file.html", not "file(oh hey, send the text/html version please)".
All of this is heavily influenced by who your users actually are, of course.
> the client sends all of the formats it accepts
> Serialisation really is part of the HTTP headers, and the support in there is great.
> what would happen if you request a .json, but the web browser doesn't accept this format?
You start off with a statement of ideals which you then directly contradict with a real, practical concern.
I think that has to mean the ideal isn't a good one and in fact -- here's the interesting part, IMO -- one or more of the considerations on which the ideal was based are also invalid.
Personally, I think the "lesson" here is that we should not aspire to have a one-to-one relationship between URI and resource. The attraction is that it's simple. But I think it's clearly too simple; that is, it is too inflexible to be useful: resources cannot be transferred between hosts or shared by multiple hosts; resources can only be organized in a single fixed way (for all time!) within a service (itself with a fixed organization within its host).
It seems better not to impose these restrictions on resource identity and instead separate how a resource is identified from how it is retrieved. E.g. use a UUID to unambiguously identify a resource across all space and time, but retrieve the resource using a set of URLs which can change over time. I mean, the idea that a resource can move (its URL has changed) has been built into HTTP for decades. Why attempt to create a service that assumes this does not happen?
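A toy illustration of that separation (all fields invented): the UUID is the permanent identity, and the retrieval URLs are free to change:

```python
resource = {
    # Permanent identity, never changes even if the resource moves:
    "id": "6f1c2e4a-9b7d-4c3e-8a21-0d5f9e7b1a2c",
    # Current retrieval locations; this list can change over time:
    "urls": [
        "https://eu.example.com/docs/readme",
        "https://us.example.com/d/readme",
    ],
}
```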
I think it's up to particular use cases.
I guess you could contrive to have /resources/xml/apples and /resources/json/apples serve the same content, but otherwise it's query strings or interposing a 'choose a representation for your content' page.
Naturally, none of this is a problem if your clients are not humans, I suppose.
If you are parsing URLs yourself try to stick to the WHATWG URL standard (https://url.spec.whatwg.org/).
Right up the top it declares that its intent is to _obsolete_ the existing RFCs - and then just breezily drops this in: "As the editors learn more about the subject matter the goals might increase in scope somewhat."
Honestly, I'm gobsmacked. I really hope nobody's taking this document seriously.
Their approach is also not to standardize new things, but only ever to describe what the largest browsers (often simply Chrome) do.
Rather than following arbitrary "best practices" you dig up from random articles and might or might not know or follow, GraphQL forces you to write your API a certain way.
That is not to say GraphQL is a silver bullet; it has its own problems. But at least I can concentrate on my application and how it needs to work, rather than reading more articles about how exactly to name my URLs.