You know how HTTP GET requests are meant to be idempotent? (twitter.com/rombulow)
751 points by mpweiher on May 1, 2018 | 304 comments



Hello! Long-time lurker, and guilty dev behind the garage door. You can see the (broken) code I wrote here:

https://github.com/wpearse/wemos-d1-garage-door-wifi

I'll get around to fixing it later this week.

Also, an apology: I should have used "side-effect-free" instead of "idempotent" in my tweets.


> I should have used "side-effect-free" instead of "idempotent" in my tweets

The HTTP term is "safe method". Although you weren't even wrong because section 4.2.2 of RFC7231 (i.e. HTTP) defines all safe methods, including GET, as idempotent.

I think they use this language because nothing is truly side-effect free. In fact GETs can have side-effects, the most obvious of which is writing the fact of it to a logfile, and that's the most harmless side-effect of all, right until you run out of disk space.

Being a language arse I think the high precision descriptor is actually nullipotent. https://en.wiktionary.org/wiki/nullipotent but I'd never say it out loud.


'Side-effect free' means that doing it once, twice or n >= 3 times (with same parameters) yields the same result, i.e. what it returns doesn't depend on any remote state that is altered by the call itself.

However, an idempotent HTTP call is certainly not a pure function which some people seem to be mixing up. Pure functions don't work with I/O.

REST is a bit more specific and explicitly requires GET to be nullipotent, which really means "effect free": it just reads and doesn't alter the state on the remote system at all.

Side-effects like log files, rate-limiting, etc. will always exist, but they do belong to a different 'layer', so to speak. That is, these should be unobservable side-effects (also think about minuscule effects on the power grid, the fact that a request might write something to an ARP-cache, etc. - they all happen at different layers, so the quantum world state keeps changing, but that's not what this is about). Whether an X-Request-Count header violates the requirements or not depends on interpretation. From the garage door perspective, I wouldn't care...


> Pure functions don't work with I/O.

There is a comment above that argues this point in better detail.

https://news.ycombinator.com/item?id=16966046

Basically it depends on the many nuances you have on "pure", "functions" and "idempotent"


When discussing HTTP behaviours I think it’s easiest to stick to the definitions of these terms given in the HTTP standard. Anything else is fruitless bikeshedding.


IMO "side-effect free" is always a statement constrained by the operating level of abstraction. Logging is not an effect at the level of the application, but rather some subset of its context (system, db).

I love that term and am going to use it as much as possible


Fixed now: https://github.com/wpearse/wemos-d1-garage-door-wifi/commit/...

I think.

edit: updated link because left creds in the commit :-/


I hope that's not your real password!


EDIT: it's actually his address, I thought it was just a coincidence but you can see the house on Street View... I removed the actual name as doxxing isn't great, sorry.


Thanks. Not the address with the WiFi garage thankfully.

I was actually thinking "better not commit that" ... before, y'know, I committed it :/


I got just the thing for you:

https://gist.github.com/hraban/10c7f72ba6ec55247f2d

Whenever you write some code that you need to remember to remove before committing, surround it with a comment containing "NOCOMMIT". With this script as a pre-commit hook, git will echo an error message and fail.

E.g.:

  print("debug: ", myval)
becomes:

  print("debug: ", myval) # NOCOMMIT
I end up relying on this every day I program. Can't go back.
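The core of such a hook can be sketched in a few lines. This is a hypothetical reimplementation of the idea, not the gist's actual code; the full script would live at .git/hooks/pre-commit and exit non-zero when the marker shows up in staged changes:

```python
# Sketch of a NOCOMMIT pre-commit check (hypothetical, not the gist's code).
def diff_has_marker(diff_text: str, marker: str = "NOCOMMIT") -> bool:
    """True if any *added* line in a unified diff contains the marker."""
    return any(
        line.startswith("+") and not line.startswith("+++") and marker in line
        for line in diff_text.splitlines()
    )

# In the real hook you'd feed it the staged diff, e.g.:
#   diff = subprocess.run(["git", "diff", "--cached"],
#                         capture_output=True, text=True).stdout
#   if diff_has_marker(diff):
#       sys.exit("refusing to commit: NOCOMMIT marker present")
```

Only added lines count, so deleting a line that happens to contain NOCOMMIT doesn't block the commit.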


Thanks! I'm not sure how easy it would be to put the git hook on all my machines though? I have a collection of laptops (and one desktop) that I work on and I often don't use the same machine for a few weeks :-/

I ended up using a "env.h" file... is there a C-equivalent of the PHP (?) .env file?


https://direnv.net would be my recommendation


It's possible to remove that from the history of your repo, although it breaks any forks.

https://rtyley.github.io/bfg-repo-cleaner/


Heh. I think I did worse ... made a local copy of the repo, nuked it on GitHub, then re-created the three commits by hand ... minus the credentials.

That looks like a much more useful tool, though.


git add -p

It will let you approve each hunk in a file to commit or not.

git commit -e -v

Will force you to edit the commit message and in the editor show you the diff of the commit against HEAD.


Thank you! I use 'git add -p' all the time, but didn't know the trick with commit. I am a sucker for nice commits so I will check every commit's diff multiple times. When I don't, I usually end up including pieces of code that aren't ready yet, or that are meant for debugging, ...


Well looks like a nice neighborhood at least.


Well according to Google it’s an address in Auckland NZ so I hope it’s not his address either.


hey, I'm in Auckland, maybe I can go around and hack his garage :)


Bring beer.


I kinda hope you come home to find multiple beers with notes saying "pull request pending"!


Me too!


Aw man I missed the address otherwise I would


Please, for the love of all that is holy, don't post long-form content on Twitter.


> Please, for the love of all that is holy, don't post on Twitter.

FTFY


I'm sorry. This is the first time I've used a thread. I won't do it again!


Actually, I think I wrote this story just so I could try out the "new" (months-old?) threading feature. I don't use Twitter much.


Anything side-effect-free is idempotent too, I suppose


No. A function which multiplies a number by two is side effect free, but is not idempotent.


I dispute your example. If you call f(2) and it always returns 4, it's idempotent and side-effect-free. If you call f() and it returns 4, then 8, etc, it is neither.


From wikipedia:

"A unary operation f, that is, a map from some set S into itself, is called idempotent if, for all x in S, f(f(x)) = f(x)."


Instead of checking wikipedia for a general definition of idempotency, check the RFC for the definition that applies to HTTP

9.1.2 Idempotent Methods

   Methods can also have the property of "idempotence" in that (aside
   from error or expiration issues) the side-effects of N > 0 identical
   requests is the same as for a single request.


Yes, in mathematics, not programming. And a function that doubles a number isn't idempotent even by that definition.


Of course a doubling function is not idempotent!

I think the confusion arises because side-effectful functions can be considered as having type

    f :: (RealWorld, OtherArgs) -> (RealWorld, OtherOutputs)
and so for a garage door toggle you have something like

    t :: RealWorld -> RealWorld
where the new state is the old one with the door opened/closed as appropriate.

Now the idempotence condition becomes:

    t(t(world)) == t(world)
but clearly

       t(t(doorOpenWorld))
    =  t(doorClosedWorld)
    =  doorOpenWorld
    != t(doorOpenWorld)
    =  doorClosedWorld
so this is where the notion comes from. If you abuse notation and just say a function of no arguments can be idempotent then you'll get confusion like this.


But `GET(GET(x))` doesn't make sense, in general (and if it did, then you would not expect it to be idempotent), so clearly idempotency in this context is meant to mean side-effect free. They should probably just say side-effect free, though, to avoid the confusion.


It's possible to be idempotent without being side-effect free. If you PUT some record, for example, then that operation _will_ have side effects (modifying the record). If you then PUT that same data again, the result will be the same (it's idempotent).


that’s not a side effect. in common usage, a side effect is an extra action, not the desired action itself. if you PUT some record, and some other record or state changes, that’s a side effect.


I don't disagree with common usage, but common usage is not what's being discussed here.

It is a side effect in programming usage (not just HTTP).

Something is side-effect-free if and only if the only result of it running is that you get an answer. If you ignore the answer, then you cannot tell you ran the function/method/call/whatever. PUT is not side-effect-free.

That said, side-effect-free-ness is an incomplete paraphrasing of the HTTP spec (RFC7231); you'll notice that the only mentions of the phrase "side effect" are giving examples of legal side effects.


I suppose logging is a side effect and that makes practically everything technically non-idempotent (though on purpose).


I think because in math you don't ever have side effects, you usually use composition, whereas in programming you usually use a sequence. So changing that function into how people would implement it means rearranging the internal stuff, and then it probably wouldn't be idempotent by either definition.


But not the reverse. If you are side effect free you must be idempotent.


"Idempotent" is one of those overloaded terms...


I think that is considered idempotent in the REST-sense of the word: you can multiply a number by two as many times as you like, the result will be the same (and no state is mutated).

I looked it up, apparently there's a formal definition "denoting an element of a set which is unchanged in value when multiplied or otherwise operated on by itself", which does not seem to describe the REST usage very well, though.


The typical operation applied to sets of functions is composition, so idempotency of a function f is the condition that f(f(x)) = f(x) for all x in the domain of f. I don't think that applies meaningfully to GET.


It does apply to HTTP idempotency. `x` is the state of the server. `f` is the change to the state of the server that ensues when one makes such-and-such an HTTP call. So taking PUT as an example, `x` is the state before the PUT, `f(x)` is the state after one PUT, and `f(f(x))` is the state after that single PUT is sent twice. Of course in a RFC7231-compliant server, `f(x) = f(f(x))`. Taking GET (or any other nullipotent method) as an example, we also see that in a RFC7231-compliant server, `x = f(x) = f(f(x))`.
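The framing above can be made concrete with a toy model where the server state is `x` and each method is a function on it (hypothetical resource names, just a sketch, not real HTTP):

```python
# Toy model: HTTP methods as functions on server state, testing f(f(x)) == f(x).

def put_door_open(state: dict) -> dict:
    """PUT-style: idempotent but not side-effect free."""
    return {**state, "door": "open"}

def get_door(state: dict) -> dict:
    """GET-style: nullipotent/safe -- reading leaves the state unchanged."""
    return state

def toggle_door(state: dict) -> dict:
    """GET /door/toggle from the story: not idempotent."""
    return {**state, "door": "closed" if state["door"] == "open" else "open"}

s = {"door": "closed"}
assert put_door_open(put_door_open(s)) == put_door_open(s)  # f(f(x)) == f(x)
assert get_door(get_door(s)) == get_door(s) == s            # x == f(x) too
assert toggle_door(toggle_door(s)) != toggle_door(s)        # toggle fails it
```

PUT passes the idempotence check, GET passes the stronger nullipotence check, and the toggle endpoint fails both.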


Ah yeah that makes sense.


Technically speaking, idempotency as defined by RFC7231 only requires f(x) = f(f(x)).


Yes, and nullipotency requires x = f(x) = f(f(x)), and four of the methods defined in RFC7231 are expected to be nullipotent, otherwise known as "safe". The point of mentioning that is to highlight the relationship between idempotence and nullipotence.


Except if the domain is zero.


Or even a slightly more exotic case, such as Z mod 2 (a single binary digit)


No side effects means no garage door opening or closing. I guess you wanted it to open or close in some cases at least?


Isn't the point of this thread that the OT says that he shouldn't have used GET for this?


When I recently added 'click to unsubscribe' functionality to my emails, the URL got written out into some logs. Those logs got written to a Slack channel and Slack loves to click any link it sees. Oh, and it doesn't respect robots.txt.

But all I saw was every member of my list clicking 'unsubscribe'. It took a good hour to figure out exactly what was going on.

Idempotence is not the problem here, by the way. That just means calling the method twice has the same effect. But GET should have no side-effect, in an ideal world. Of course, in the case of unsubscribe links, it needs to have a side-effect to comply with the law.


Actual public Certificate Authorities have done this too.

You request a cert, it's authorized, everything seems fine. Except, huh, the guy who was supposed to authorize it is off sick today, how did that work? The email to the authorizer should just be sat in his INBOX until he gets back.

Oh - the company's "Malware protection" system automatically dereferenced the "Do you want to issue this certificate?" link from the email and there was no second step. So for affected companies basically anybody could request any certificate in their domains and it would get issued.

As far as I remember nobody has proof any bad guys ever used this, but grey hats posted some fun they had with it. Likewise for a CA that decided to OCR the images from an unco-operative DNS hierarchy that wouldn't provide machine readable data to them. Grey hats obtained domain names that confused the OCR into allowing them to get certs for other people's names. Did any black hats do it? We have no proof.


Any chance you could provide articles on these grey hat activities? Sounds interesting


For the OCR one the combination of "WHOIS" and "OCR" found me this thread from 2016 with an incident report from Comodo's Robin Alden:

https://www.mail-archive.com/dev-security-policy@lists.mozil...

Alas I wasn't able to bring to mind a combination of keywords that would find the other incident in public archives, and I know it's from my background reading so it will be before my personal archives of these discussions begin. Sorry.


Aren't you allowed to have your "click to unsubscribe" button lead to a page with a button that does a POST that actually unsubscribes? I feel like I've seen that approach in use.


What they do is load a page with a form redirect to do the POST, I believe, so link loaders won't follow it and you'll be safe.
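That pattern reduces to: the emailed link is a safe GET that only renders a form, and the state change happens only on POST. A minimal framework-free sketch (hypothetical handler and names):

```python
# Sketch of the confirm-form pattern: GET is safe, POST does the unsubscribe.
unsubscribed = set()  # stand-in for a real datastore

def handle(method: str, token: str) -> str:
    if method == "GET":
        # Safe: render the confirmation form, change nothing.
        return (f'<form method="POST" action="/unsubscribe/{token}">'
                '<button>Confirm unsubscribe</button></form>')
    if method == "POST":
        unsubscribed.add(token)  # the only place state changes
        return "You have been unsubscribed."
    return "405 Method Not Allowed"
```

A link prefetcher only ever issues the GET, so it sees the form and nobody gets unsubscribed.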


How about just having, near the unsubscribe link in the email, a link that says "click here to ignore up to one unsubscribe link press within two minutes of clicking this link", so that the automated process that clicked the unsubscribe link by mistake will also click that link.

The page in question could also have a "actually, I do want to unsubscribe, I just clicked the wrong link by accident" button as well, in case a human reader is confused.

---

Or, one link that says "unsubscribe immediately" and another that says "unsubscribe only after confirmation", and the first one unsubscribes immediately, unless they also click the second link immediately before or after, while the second link only unsubscribes them if they click a confirm button on the page?

---

Or, maybe the unsubscribe link could have a confirmation button, but would also have some javascript to confirm the unsubscribe after a few moments of the page being fully loaded (the button being used if they have javascript disabled, for example)


Don't make your users jump through hoops to unsubscribe. That seems like a typical dark pattern to me.


Having a single confirmation step to prevent "oops" clicks doesn't seem like a dark pattern to me.


Confirmation steps should only be used if the action can't be easily undone.


It can't be easily undone if it was "clicked" by an automated process rather than a human being.


And when a user receives an email saying they've been unsubscribed because an automated system prefetched the link, they won't be concerned or worried or confused at all. (Especially when this happens _every time_.)


And then the automated system clicks the undo link... Problem solved!


Which fires off a new email thanking them for re-subscribing.

Soon your inbox is full.


You are right, it doesn't seem like one, but to pissed off users it doesn't matter. If you, for any reason, piss people off to the point where they complain to your registrar or your hosting provider it is bad news. Doesn't matter why. So, the answer is to make it as easy as possible to get removed, which is why people use GET.

I have seen this with spam, and I have seen it with DMCA requests. My hosting provider will issue me a warning for any DMCA request, saying they will shut me down within 48 hours if I don't comply. Even when it is clearly not a valid request. Even when the content has already been removed. They don't even check, they just say "do it or else". And I pay them thousands per month. Godaddy is the same way, I have had people complain to them, and then they threaten to shut down the domain.

When you have been threatened repeatedly to be shut down for operating normally you don't take any chances. It isn't worth it.


Have a single "I clicked by mistake" button that resubscribes instead.


Yes, the automated system prefetching links will always click said button.


Not if the button issues a POST request.


The button I was responding about was an "undo unsubscribe". A bot won't click that button, but may follow a link.


Right, I assumed the button would be linked from a subsequent email confirming the unsubscription. You're right, a button in the unsubscription page doesn't help.


The majority of people who clicked on the link did it on purpose, so a better pattern would be to make it unsubscribe immediately with a "didn't mean to unsubscribe? click here to undo" link afterwards.


You're making the mistake that this very article is highlighting: it's not just "people" who click links. An overzealous mail client or browser preloading links would force unsubscribe you without your knowledge or ability to undo.

A single step, a button push, to confirm an unsubscription is fine.


> A single step, a button push, to confirm an unsubscription is fine.

No, it really isn't.

Lots of mailing lists operate exactly as the person you replied to mentioned where after unsubscribing you are given a chance to undo that action. That's a far more respectful way to operate.


The user isn't going to see the "Oops! I need to undo" button when their email client helps itself behind the scenes.


Yet somehow lots of places use the click-one-link-to-unsubscribe method and it seems to work. What are they doing differently?

The follow up confirmation screen feels slimy to me like some kind of cable company retention tactic. That's why I called it a dark pattern.


I don't think it's slimy. They could provide a simple, complete unsubscribe button in addition to a list of subtopics to check/uncheck. Maybe I only want to unsubscribe from their blog, or I still want to receive feature update news but not their sales catalog.


As a person who ends up clicking a lot of unsubscribe links: I definitely see it as slimy.

I would rather companies just didn't subscribe me, so having to click an unsubscribe link is already a problem. The more confirmation they have, the more I take it as biased and manipulative.


If I click a link that claims to unsubscribe me, then that's exactly what it should do. I have no problem with a page to manage account preferences, but don't label the UI element that leads there as unsubscribe.


Do you also not like links called "Contact"?


None of your responses acknowledge or challenge the very real problem that automated systems and expected behavior of GET requests impact your desired behavior of click-to-unsubscribe.

In the spectrum of "buttholedesign", using proper web standards to make sure an action is being taken deliberately is far lower than "intentionally low-contrast skip buttons" and "call to cancel subscription".


Are we all going to just ignore the fact that lots of mailing list operators seem to be able to present a link to unsubscribe without triggering the bot problem?


I think having a URL that unsubscribes people just via a GET request is far more likely to cause problems, 'respectful', 'legal', or otherwise.


What definition of fine are you using?


I don't want to be presented with a confirmation box every time I change something. Nobody reads them - they just click the button that will make it go away.


> I don't want to be presented with a confirmation box every time I change something.

That's much broader scope than what's being discussed here.

> Nobody reads them - they just click the button that will make it go away.

That statement is false. Many people read and care about confirmations.

What you're talking about is very specific personal preferences, not what is generally considered "fine".


It's not all that specific. What I'm describing isn't that far off from what Microsoft found when Windows Vista was presenting a lot of UAC elevation prompts.


The thread is talking about situations where an automated system would 'click' the link though. The automated system is probably not going to go "oh oops, resubbed"

You could probably automate the POST action though. Equivalent of $('#unsub-button').click() on the unsubscribe page load


Github also does it for their logout button for good reason. Is that a dark evil pattern to keep you logged in to their ecosystem? Or just something someone would say who doesn't understand it?


I trust Github. If it were up to me, I'd remove the logout confirmation from Github but having it there doesn't particularly bother me.

When Facebook does it, I do have a problem with it because I don't think their motives are as pure.


> Idempotence is not the problem here, by the way. That just means calling the method twice has the same effect. But GET should have no side-effect, in an ideal world. Of course, in the case of unsubscribe links, it needs to have a side-effect to comply with the law.

Thank you! I felt like I was taking crazy pills with my understanding of idempotence.

Calling GET /door/open is idempotent too, but it's still gross in my opinion. The author makes it sound like that would be fine.


Kinda seems like he really just wanted to use the word.


I think he only had GET /door/toggle, which cannot be idempotent.


Yea, but his point was that GET requests should be idempotent... when that's not the important part of GET. That's why I used the example of GET /door/open, because that is idempotent, but it's still gross - it's causing a state change from a GET request while still being idempotent.

GET being idempotent is not the issue the author was dealing with. State change from a GET is the real issue.


He mentions later in the thread that he doesn't have a sensor to indicate door position, which is going to kill this project and absolutely prevents an idempotent approach. There are a bunch of ways to approach this, most of which are patented by Chamberlain, which is the specific reason you don't find many garage automation solutions sold as a bundle in the US.

Typical bolt-on solutions would be a range finder pointing down from the ceiling near the door which can tell if the door is obstructing it, a tilt sensor mounted to the top door panel, or limit switches on the carriage way/tracks. With a door position sensor in place a simple momentary-contact dry relay can be used to trigger door motion.

Once that is in place, you can add another WeMos D1 mini to your car to open and close the door with no interaction: https://github.com/aderusha/MQTTCarPresence


I wasn't looking forward to running cable for sensors (I need two reed switches per door, as the doors can stop half-open). The D1s are at the back of the garage, so maybe 8m of cable per switch and two doors = 30-ish metres of cable? (90 feet?)

Range finder seems like the best idea. I was thinking of mounting it on the door motor and pointing horizontally down the track at a reflector attached to the chain/handle.

I love the idea of putting a D1 in the car. Thanks for the link!


You can get commercial Z-Wave tilt sensors for around 20 USD a piece that will last a year or two on a single battery. With those you can mount them directly to the door (because wireless), but obviously you need a Z-Wave radio on your automation platform for that to work.

In either event, are you sure you need two sensors per door? The garage door can stop at any point, but (to my thinking) it's still only open or closed. If it's open by only a bit, open half way, mostly open, etc... it's all open in my system, meaning someone can get in or out if they want. So limit switches (or whatever) at the closed position can tell you if it's all the way closed or "not closed", which is suitable for automation purposes.


My opener runs a tiny little garage door in parallel with the big door in front of my garage, and monitors the position of the little garage door with simple switches. This is all in the plastic housing that holds the motor. If yours is the same, you might be able to steal the same signal.

Sadly, the little garage door doesn't look like a dollhouse garage door; that'd be too cool. It looks like a wire on a track.


Why not simply have your webserver remember what state the door _should_ be in? Maybe don't rely on that from a security standpoint (eg. GET /garage/status => "closed, everything is secure"), but I suspect it would work for most situations.


Facebook Messenger has the same problem, for a while my friends and I couldn't figure out why our referral links for a service weren't working, turns out Messenger was "using them up" for us.


Consider link shortener services...

It's perfectly fine to have certain side-effects, provided the GET is idempotent, but not others/most. Specifically, it's fine to idempotently have side-effects where it doesn't matter who is causing the effect, the side-effects are desirable, and the side-effect load won't be overwhelming.

In the case of a link shortener one can pretend that the side effect did not happen the first time the service sees some particular link, that the shortening has already occurred (at the beginning of time!). There is definitely a side-effect, since it involves updating a persistent hash table, and it is idempotent (though one could construct a shortener where it's possible to get more than one shortened form for a given URI when racing to shorten it, but this is not a problem for this particular sort of service).
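One way to make that "pretend it already happened" idea concrete is a shortener whose code is derived from the URL itself, so repeating the request writes nothing new (hypothetical scheme, just a sketch):

```python
# Idempotent side effect: shortening the same URL twice yields the same
# code and only the first request touches storage.
import hashlib

table = {}  # short code -> long URL; stand-in for persistent storage

def shorten(url: str) -> str:
    code = hashlib.sha256(url.encode()).hexdigest()[:8]
    table.setdefault(code, url)  # first request writes; repeats are no-ops
    return code
```

A real service might instead allocate random codes and accept the occasional duplicate under a race, as the comment notes; that's still harmless for this kind of service.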


“Toggle” is by definition not idempotent, because you get a different result each and every time. “Open” and “close” are idempotent, but not safe. The result of a GET request should always be idempotent and safe.


GET requests should have no side effects. In other words NOOP is idempotent


I'd just like to clarify that what you have described here is "safe" rather than "idempotent" [0]. Easy mistake to make, made the same mistake myself, idempotently (that is to say you only make it once ;)

GET, and NOOP, are both "safe" and naturally "idempotent", since the former encompasses the latter.

[0] https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html


GET must be safe and idempotent. In the context of HTTP, idempotency basically means that you get the same result for each request with the same url and the same set of parameters.

Toggle should be implemented as POST, since it’s neither safe, nor idempotent.


> get the same result for each request with the same url and the same set of parameters.

I think the resource can change externally...so it means that your GET request doesn't change it, but it doesn't have to be the same result every time.


No not the same result. How would the front page of any news site work then?


yes


IIRC, the RFC uses “safe” instead of “side effect free”, which is very similar, but not exactly the same.


GET should be safe, but that's not the same as idempotent. DELETE requests are also idempotent, as the final state will always be the same.


Reading from a database isn’t “side effect free”, neither is reading from a mutable variable.

Given that HTTP requests are supposed to trigger database reads that may return a different result after some time, then “side effect free” would indeed be incorrect.

The litmus test is ... does it always return the same output given the same input? If that can change, then it’s not describing a pure function, but a side effectful one.

And the output of a GET request can and does change.

Speaking of which people here aren’t talking of idempotency in a mathematical sense:

f(f(x)) == f(x)

If you really think about it, GET requests aren’t even idempotent ;-)


Having no side-effects means it doesn't cause side-effects, not that it can't be "victim" to them. It's not the same as purity.


> But GET should have no side-effect

I disagree. There are some ways in which GET can be non-idempotent, such as pageview counters and endpoints with a vast amount of constantly-changing content, for instance. One may argue that the first example may be possible with a GET followed by a POST, but any subsequent GET response (assuming it contains the counter) would still be different from the previous one.


It would be sufficient to GET a resource that uses a script to POST the side-effect. Slack's user-agent probably isn't sophisticated enough to mess this up (although heaven help us when they implement their preview with something like headless chrome).


What would the problem be in just showing a button that says "Confirm unsubscribe" that sends a POST request? A lot of sites do something like that for their newsletter unsubscription.


I like the one-click URLs though. It irritates me to no end to click unsub, then wait for a page to load then repeat my intention.


I'm actually much more concerned when I don't have to click the link. It lets me know they haven't thought the problem through, and what other errors and problems are there in their systems?


That would comply with GDPR, and is a valid solution.


It is at least better than asking for the email, or even requiring a login, to unsubscribe ("update communication preference")


I believe asking to login violates the CAN-SPAM Act. That being said, there might be two links, one for "update communication preference" on top and "unsubscribe" in small text at the bottom. I always ignore the first and seek the latter.


I remember having a similar issue in 2000, before any (meaningful) client-side javascript. Solution: each link that did something also had a request_id parameter, which was a timestamp in milliseconds. Two requests with the same request_id meant the user had clicked something twice, so any action would NOT be performed multiple times if the same request_id came in more than once.

This let users double click on links and have the action performed only once. In 2000, hyperlinks were still confusing to some users who were used to "double-click = open", especially for file icons.

EDIT: added text in italics because initial wording was confusing.
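The scheme above boils down to server-side deduplication keyed on the request_id. A minimal sketch (hypothetical names; `seen_ids` stands in for whatever persistent store the server used):

```python
# Request-id dedup: the first request with a given id performs the action,
# repeats are silently ignored.
seen_ids = set()  # stand-in for persistent storage shared across requests

def perform_once(request_id: str, action) -> bool:
    """Run `action` only the first time this request_id is seen."""
    if request_id in seen_ids:
        return False  # double click: ignore the repeat
    seen_ids.add(request_id)
    action()
    return True
```

With a timestamp-in-milliseconds id baked into each rendered link, a double click sends the same id twice and the second request is a no-op.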


A not-small percentage of users double click links (and buttons, and anything else that needs clicking).


The wording is a little ambiguous, but I think the scheme was that the first request is honored and any subsequent requests are ignored, not that additional requests cancel or undo the action altogether.


Thanks! Yes I didn't word that properly. :(


At 13:45 in this interview with Sergey Brin and Larry Page, you can listen to them discuss the meaning of idempotent on the air with Terry Gross. Pretty amusing thing to hear on NPR. I wonder if the term has ever been mentioned on public radio before or since.

http://www.npr.org/2003/10/14/167643282/google-founders-larr...


> Idempotence is not the problem here, by the way. That just means calling the method twice has the same effect. But GET should have no side-effect, in an ideal world. Of course, in the case of unsubscribe links, it needs to have a side-effect to comply with the law.

Could you have the page redirect to itself with POST? Like a javascript redirect or meta tag. Browsers would do this, and there wouldn't be any difference for users, but bots, Slack and Safari probably wouldn't.


The problem is not that your endpoint had a side effect. Rather, it didn't have any unintended side effect (it is supposed to unsub, and it does exactly that).

The problem is with lack of authentication. Slack's ability to unsubscribe people on their behalf, without their explicit permission seems to be the real issue here.

Even a "I'm not a bot" check would provide some protection.
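What an authenticated per-user link might look like, as a sketch (hypothetical secret and parameter names): sign the subscriber id with an HMAC so only links the server generated are honored. Note this stops forged or guessed links, though not a prefetcher following a genuine emailed link.

```python
# Signed unsubscribe token: only links we generated verify.
import hashlib
import hmac

SECRET = b"server-side-secret"  # assumption: stored server-side only

def make_token(user_id: str) -> str:
    return hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def verify(user_id: str, token: str) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(make_token(user_id), token)
```

The unsubscribe URL would carry both the user id and the token, and the handler rejects any pair that doesn't verify.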


If the unsubscribe link were unique for each subscriber would the law still be satisfied?


I don't understand how that prevents an automated system from unsubscribing them?


I thought that initially, but then I considered that for users to be bulk unsubscribed the link would surely have to be the same for every user, at which point each user gets the same unsubscribe link, and then when they click on it it unsubscribes everyone.

This doesn't seem very likely, so I guess that a whole load of unique unsubscribe links got dumped into slack which started following them.


It should be a DELETE imho or a PATCH.


same thing with games running on facebook's platform


Many years ago, I was asked to look at why all the content had vanished from a site (not built by me). After digging in a bit, I found that:

1) the original developer's idea of handling an unauthorized /admin request was just to set a redirect header and continue processing the current request.

2) the /admin page had a grid of all the content on the site, with handy 'Delete' links that ran over GET without confirmation.

You can probably guess where this is going – some search bot hit the overview page, ignored the redirect header, saw the content, and dutifully crawled every single link on it…


There were at least two browser extensions which also discovered that poor design was widespread and to disable prefetching for similar reasons:

http://fasterfox.mozdev.org/index.html

https://signalvnoise.com/archives2/google_web_accelerator_he...

I think the state of the web has improved slightly over the last decade but this is a great example of why browser vendors are so conservative. You can do this now but only opt-in.


Was it blekko? We had a website owner email us about that issue when blekko's ScoutJet crawler was new... although I don't recall the bit about ignored redirect headers.


I'm pretty sure everyone with a crawler has hit this sort of problem before. The first startup I was at did with someone's wiki that had "delete" links everywhere with no auth.


Now that I've hit it once, I watch out for websites with this problem. I was surprised to notice that a Fortune50 tech company's internal employee-personal-webpages-maker-thingie had that issue. And then a week later they asked me if I could crawl their internal web. Uh, no, who knows what other internal systems had that problem?


Idempotency might be necessary for GET calls, but it's not sufficient. Imagine he had two separate GET calls (opened/closed): the author would still have the same problem. Browsers assume GET to be safe (non-mutating), and safety implies idempotency.


Exactly. A 'toggle' should really be implemented as a PATCH request, or maybe a PUT if there's no data other than the door state.


At the risk of being overly pedantic/piling on -- this is what bad REST-ful API design looks like in practice.

When you talk to your teammates about the semantics of these verbs and someone just says "oh a GET is fine" and the team agrees but you don't and you can't say it so you don't become "that guy" it's time to find a new engineering org to be a part of.

On the topic of PATCH, check out JSON merge patches (application/merge-patch+json):

https://tools.ietf.org/html/rfc7386
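The merge-patch algorithm from RFC 7386 is short enough to sketch directly: a null member deletes the corresponding key, an object member is merged recursively, and anything else replaces the target value.

```python
def merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch to `target` and return the result."""
    if isinstance(patch, dict):
        if not isinstance(target, dict):
            target = {}                  # non-object targets are replaced
        for key, value in patch.items():
            if value is None:
                target.pop(key, None)    # null deletes the member
            else:
                target[key] = merge_patch(target.get(key), value)
        return target
    return patch                         # non-object patch replaces the target

doc = {"title": "Hello", "author": {"name": "A", "email": "a@x.io"}}
patched = merge_patch(doc, {"title": "Hi", "author": {"email": None}})
```

Here `patched` is `{"title": "Hi", "author": {"name": "A"}}`: the title is replaced and the email is removed, while the untouched name survives.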


I hope this doesn't come across as mean-spirited, but I'm really struck by the middle part of your comment, even though I think it was meant as a throwaway remark.

It sounds like you're advocating leaving an organisation instead of speaking up when a mistake is being made? In the scenario you described, the "a GET is fine" person is unfamiliar with the protocol they're writing for (HTTP), and so is every person who agreed. Leaving instead of speaking up seems pretty drastic.


Totally valid question -- it was indeed a throwaway remark but is pretty bitter in tone now that I think about it.

To delve into what I was thinking a little bit: I think it came off so bitter because I've been in too many orgs where group-think squashed dissenting, possibly-correct opinions, where half the room is wondering "this seems too complex, why are we doing this" but everyone goes with it. Reading through the twitter comments, which had a bunch of people trying to gloss over the misuse, might have triggered me.

The more reasonable response is definitely to articulate and explain why a GET is NOT fine in that case so everyone learns, but once this starts happening a lot I mark it as a red flag in my head -- it's either a culture clash or I'm too close to being the most experienced in the room (in that specific area), and that means there are fewer people to learn from and staying too long might lead to stagnation. The "leave the org" bit is hyperbole, but I worry about this kind of thing if I experience it.

In the end though, it was meant to be a humorous post so the remark is overblown.


My interpretation is that GP is advocating leaving an org if you feel that speaking up is discouraged


Just be the guy who says "oh a POST is fine" first


Yeah, full REST is nice, but you can do worse than just using GET and POST as long as GET doesn't change state.


Layers of REST that people get to:

1. GET is good for everything

2. Perhaps we should use POST too

3. Let's use all the verbs

4. Someone mentions HATEOAS

5. Some old guy says that you've got to use XML because that defines links and JSON doesn't

6. Someone else counters with JSON-LD and sends a link to the W3 spec

That's as far as I got. Mostly this conversation happens in my head.


So I'm actually really interested in this conversation -- I think it's a good one that everyone has that needs to just get resolved. I think deciding where and how much to follow REST/HATEOAS is exactly what engineering teams should decide. There are escape hatches (POST can do just about anything) and lots of ways to do things, but REST-ful (not necessarily pure REST) and HATEOAS-y (usually comes up in terms of pagination/relations first) APIs are not bad at all.

I think JSON+JSON-LD still offers some benefits over XML, at the very least in the security sense -- while it can be misused, there are far fewer dynamic bits built in to the transfer language itself.

Also I'd say that JSON "scales" well in terms of complexity still, small things are cognitively light, and big things are linearly more cognitively heavy.

Is JSON+JSON-LD+X the new XML?

Is Swagger the new WSDL/SOAP?

I dunno, but it doesn't feel like it's quite that bad yet.


SOAP-XML request handling in C on a 100mhz embedded microLinux MMUless processor with plenty of processor and shitty kernel arch bugs ruined my life for a year once.

I mean it's the obvious choice for a simple control UI implementation on a slow embedded system...

You may resume your javascript framework discussion now.


JSON-LD is a significant cognitive leap to reason about [1], and its descriptive power goes beyond what you could do with XML. A JSON-LD document is effectively an RDF document coupled with a JSON document, in a seemingly human-friendly form, although lots of people will recognize the letters but have no clue what they're reading.

For that matter, link relations too are a cognitive leap, and no one writes 'smart REST clients' anyway: these days, to consume a "REST API" you use the first-party official library, or something you found on github, or something you wrote in 3 hours where all the URLs are formed by string concatenation. Within one vendor's one particular REST API, the benefits of HATEOAS are typically minimal, which is why it's so frequently omitted and no one complains except REST pedants.

It also doesn't help that the field still hasn't settled down in ~10 years: JSON Schema, widely used for JSON schemas due to the lack of an official mechanism, recently decided they wanna get into hypermedia too [2], and there's also the enticingly named JSON API, which offers a similar data model too. And let's not forget about stuff ~2011-2013 like HAL [3], which didn't really become big, but never fully went away.

Swagger is absolutely much the new WSDL/SOAP ecosystem, complete with first-party code generators. In the WSDL days, all of your code generators were third-party, and due to enough knobs in WSDL and enough accumulated design baggage, it wasn't always interoperable [4]. But when it was, it was pretty magical for ~2003.

Swagger had clout and name recognition to kill the other schemes that are largely the same, but then they renamed it OpenAPI to play up the consensus against competing codegen/specs like RAML. Now that Mulesoft has agreed to bring RAML under the OpenAPI umbrella, there's less integrated competition, but then OpenAPI's uptake will be limited by those put off by serious tooling who opt for simpler description languages instead. It's a mess.

[1] https://www.w3.org/2013/dwbp/wiki/RDF_AND_JSON-LD_UseCases [2] http://json-schema.org/specification.html [3] http://stateless.co/hal_specification.html [4] https://blogs.msdn.microsoft.com/dotnetinterop/2005/01/17/ne...


I'm not sure it's possible for JSON-LD to go beyond the descriptive power of XML, because XML is so general, flexible and powerful (and you could certainly represent an RDF document in XML) -- the big upgrade there IMO is that it stays in relatively human-readable form (though depending on the human, so does XML).

Ditto on the huge disappointment with the cobbled together SDKs when HATEOAS/JSON-LD-fluent applications represent a more robust future that could have been the current timeline.

Also, it's a shame that hyper schema couldn't reconcile with JSON-LD. JSON-LD is the one I'm leaning towards using at the moment, because of its early consideration of things like multiple languages.

It's reassuring (?) to see the mention of the RAML/Swagger situation being a mess from someone else. I actually liked RAML more than the Swagger specification, but Mule (https://en.wikipedia.org/wiki/Mule_(software)) left a bad taste in my mouth once upon a time, when a team I was on was deciding how best to create an ESB. I choose to go with "OpenAPI" (AKA Swagger 3.0) for my projects going forward not for that single personal reason but rather due to the sheer amount of people that have gotten behind Swagger -- they seem to have won the mindshare battle, if not the war.

I have picked a lot of things I thought were cleaner/technologically superior/whatever that lost the mindshare war in my lifetime, I'm trying to cut down.


HAL is personally my current favourite; it seems to add the minimal overhead on top of JSON whilst adding in links.

> JSON-LD is a significant cognitive leap to reason about

I fully agree. JSON-LD requires first some kind of idea about RDFa which I didn't even know was a thing until reading up on JSON-LD.


I've had to develop APIs for clients that could only do GET or POST requests. Sometimes you have to sacrifice correctness for what's actually possible.


Hadn't seen that RFC before. We were doing almost exactly that on a large API project a year or so before that RFC was published. Backend guys hated it, front end guys loved it.


Absolutely agree. A PUT method carrying an open/closed flag would seem like a natural choice. Calling it any number of consecutive times with the same payload would be idempotent. There would probably be a GET method to go along with it. And of course, it would model the desired state, not the actual position of the garage door since garage doors don't instantaneously flip (would be cool though).
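A minimal sketch of that PUT-the-desired-state idea (the class and relay call are hypothetical stand-ins, not anyone's actual firmware): the handler stores the desired door state, and repeating the same PUT is a no-op, which is exactly what idempotence buys you here.

```python
class DoorController:
    """Hypothetical handler for PUT /door with an open/closed payload."""

    def __init__(self):
        self.desired_state = "closed"
        self.relay_pulses = 0            # how many times the motor was toggled

    def put_state(self, state):
        if state not in ("open", "closed"):
            return 400                   # reject anything but the two states
        if state != self.desired_state:
            self.relay_pulses += 1       # only pulse the relay on a change
            self.desired_state = state
        return 200                       # same payload N times: same result

door = DoorController()
# Safari prefetching this PUT three times would no longer cycle the door:
codes = [door.put_state("open"), door.put_state("open"), door.put_state("open")]
```

All three calls return 200, but the relay fires exactly once, matching the "N identical requests have the effect of one" definition.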


A toggle would actually be a good use of POST, though PUTting the desired state would be better (PATCH works instead of PUT if you are changing some part of the state and not the whole state, but is unnecessary if the door state consists entirely of either “open” or ”closed”.)


One more thing: PATCH needs to do atomic updates on a partial resource per http://restcookbook.com/ (which I think is a great TLDR resource on the topic).


HTTP GET is nice because you can "debug" via browser. But I don't think it's a good protocol choice for opening/closing doors nor any other service not related to document requests.


It's trivial to debug a POST in a browser if you can open the console. https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequ...


And if you're in a browser with a console worth opening you can use fetch instead which is way nicer: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API

Also Postman is awesome for debugging REST APIs.


I really don’t like that, I prefer debugging with curl which easily supports the other verbs.


This stuff is also why you should be afraid of any libraries/ frameworks/ tooling that says it's going to automatically offer TLS 1.3's "Zero round trip" (0RTT) feature for code as opposed to trivial stuff like resource downloads.

Normally, TLS ensures you can't replay somebody else's conversations. So even if I know Barry, who is authorised to toggle the door, just sent a "toggle the door" command, if I try playing it back that won't work, the setup will be different each connection and I can't respond.

But for 0RTT there is no setup - there can't be, no time to do it, and so if I replay Barry's "toggle the door" it would work.

The specification is very clear that the right thing here will be to never allow 0RTT for such features. But the moment that's hidden behind some library API you can bet _somebody_ is going to screw up badly. Alas our industry doesn't exactly have a "safety first" mentality.


I’m more surprised that the Safari new tab window makes GET requests to every “favorite” URL, which I gather is what was happening.


It's updating the thumbnail screenshots. This only happens if you have the Safari "blank page" be your favorites instead of either your "home page" or a truly blank page.


> if you have the Safari "blank page" be your favorites instead of either your "home page" or a truly blank page.

Yay for modern user friendly applications making simple words like blank completely meaningless and ambiguous.


The commenter may have called it a "blank page", but Safari doesn't; it just labels the setting as "New tabs open with...", with "Empty page" being one of the choices.


That's on the poster -- Safari calls it "Top Sites", with explicit options for "Empty Page", "Home Page", etc.


Literally the reason HTTP verbs are a thing is so that User Agents like Safari can do exactly this. If this weren't a by-design property of the HTTP protocol, we wouldn't even have methods. Read the spec!


If I recall, Safari's default is to show your "favourites" screen in a new tab, which routinely refetches to update icons/previews.

I'm skeptical that it's "every time" but I do remember it doing it way more than I thought was needed.


Now I wonder if Safari (& other browsers) has distinct headers for their favorites lookups, to tell these lookups apart from real users and discard these accesses from site analytics..


They do, "X-Purpose: preview"


Hey! I've been thinking about this all day. I thought it was a GET to fetch the title of the page, but it might only be an OPTIONS or HEAD request? I'm not sure. Either way, my code activates the garage door on that endpoint no matter the HTTP verb.


The intersection of full-stack web devs from the commercial line-of-business world; and hardware/embedded hackers brings a lot of room for accidents IMO. I'm not saying any one of these groups are bad or inept. I'm in the former and completely accept that I'm new to embedded programming. It seems kool and I wanna learn about it. But I can also see the flip side where a hardware hacker sees query strings for toggling an output as a perfectly reasonable interface. Do we expect the embedded guys to grok HTTP/REST? The web-dev would be like "no, no, that has to be POST or PUT". But these things are going to happen. We don't yet have a large pool of experts across both fields.

It's no surprise the level of compromise and breach when you intersect what were pretty distinct skillsets and dump them in the mixing bowl together. That's what this IoT thing is like - it's a bunch of household and industrial chemicals all poured into the one container. It's not going to be very safe.


> Do we expect the embedded guys to grok HTTP/REST?

REST is irrelevant to this; HTTP alone covers the reason why this is bad. So ultimately, the question is: "Do we expect somebody designing an HTTP API to understand HTTP?" I think that's a reasonable expectation. If your embedded guys don't understand HTTP, then get somebody who does understand HTTP to design the API. They don't need embedded experience to do so, they aren't implementing it, just designing it. This isn't a difficult cross-functional intersection, you just don't assign tasks to people who aren't qualified to carry them out.


This is pretty much the classic newbie web developer mistake, heard many stories about people making it when they first start. I've also seen people fuck up in the opposite way, using POST when they should use GET and having unexpected behavior. Though not usually as "funny" as the classic "using GET instead of POST" errors are.

This concept of HTTP request methods really should be explained to new developers in a more accessible way, with examples of mistakes. It might not be intuitive at first or they might not think it's important as it is.

"Idempotence" isn't really the problem here, nor "should" GET requests be idempotent, think kittenwar.com or stumbleupon, the problem here is GET is reserved for retrieving (getting!) data, it shouldn't modify data. (Other than access information.)


GET requests are specified[0] to be idempotent:

  > Methods can also have the property of "idempotence" in that (aside from
  > error or expiration issues) the side-effects of N > 0 identical
  > requests is the same as for a single request. The methods GET, HEAD,
  > PUT and DELETE share this property.
[0] https://tools.ietf.org/html/rfc2616#section-9.1.2

edit: formatting


Looks like the RFC talks about idempotence from a "side effect" perspective where I was talking about it from an "output" perspective (the generated HTML).

I agree with the RFC and I mistook what the person meant


What are some of these unexpected behaviors associated with using POST when you're meant to use GET?


I saw “GET request” “idempotent” and “WiFi control garage doors” and immediately inferred the punchline.

At least it was his own devices and not Googlebot or something, I guess.


Perhaps we need a specific nomenclature for this sort of case: oddempotent!

After all, you get the same state if you do it 3,5,7,9,etc times as if you do it once, right? ;)




The first time I clicked the link, Twitter said I was rate limited. I thought that was the joke.


Well, a while ago I saw this code (on my own project!): window.open("?controller=users&action=changePassword&name=" + user_name + "&password=" + password)

I was horrified, glad it isn't live yet, and I fixed it immediately. But I'm still wondering whether I was sleep-deprived or drunk when I wrote this. It's over SSL, so it should not be that big a deal, but still, GET shouldn't be used for such things.


Well you don’t seem to validate the existing password prior to authorizing the change.

Good CSRF protection on GET requests is also near impossible to implement, as GET is intended to be a “safe” request, i.e. one that does not modify state, though this isn't something that is actually practiced.


Actually, I do. This is not a form for a user to change his own password, but rather an administrator's form to change another user's password. And for such actions the administrator's identity and privileges are checked. But I understand your reasoning and thank you for pointing it out.

And yeah, I try to use GET only for safe requests, but I should be more careful.


Another big deal is that it'll get stored in server logs too.


It's a big deal since it will be visible in access logs in plaintext, so if the logs are compromised your users would be too.


This comment thread has really put me in a good mood. These stories have so much pedagogical value:

• The grammars we engineer have important semantic value

• understanding and adhering to them is important, and hard

• relying on others to adhere to them is dangerous, and hard to avoid

• "experts" make mistakes in both areas constantly

I genuinely love seeing this kind of lively discussion, because these seemingly "trivial details" matter, a lot. The Three Mile Island accident was more or less caused by "message sent" being conflated with "state changed" at the UI level, directly leading to a nuclear meltdown. They basically had a system with the equivalent design of GET /open and /close that assumed success for both https://en.wikipedia.org/wiki/Three_Mile_Island_accident#Con...


Two thoughts:

1) Twitter is a terrible medium for anything, let alone posts longer than a sentence.

2) The level of over-engineering tech people readily engage in without a second thought is truly mind boggling.


1) Market disagrees with you strongly, hence hundreds of millions of people using it regularly.

2) He's just having fun, working on a side project that's useful to him, and learning. Nothing wrong with any of that.

Why so negative?


1) Popularity is rarely an indicator of quality.

2) That doesn't mean it isn't over-engineering.


If actually being used by people isn't part of your criteria for a good medium, then I'm not sure it's very useful.


Adding Wi-Fi control to your garage door using a WeMos is not over-engineering. It's just a fun little weekend hack.


Over-engineering is what I did...

- Raspberry pi

- Open/close sensors on the garage door as well as the side-entry door

- Camera pointed at the side entry door taking photos while it is left open

- Push alerts to my phone if either door is opened between specific hours of the night (Break-ins to detached garages were huge in my neighborhood)

- Voice controls from my fucking phone to open the door

I sold the house otherwise it'd probably be an even larger monstrosity today


That sounds really cool. One of the advantages of owning your house.


Over-engineering can only be assessed based on the goals of the project, which he hasn't detailed.


70 lines of code is over-engineering? That's nothing.


> "I threw the code together in minutes and was too lazy to spend another couple minutes figuring out POST."

So it's not the vendor's problem then. They provide you with two ways to make a request. You have a choice to do it right, you didn't.


The device should not support GET at all for this. It opens up a number of attacks and there’s no good reason to support it.


Who says it was the vendor's problem?


What's your point? Nobody said it was the vendor's problem except you.


GET requests aren't supposed to be idempotent. They're not supposed to change state in a first place.


> GET requests aren't supposed to be idempotent.

RFC 7231 disagrees.

> They're not supposed to change state in a first place.

Well, yeah, GET is supposed to be safe, but all safe methods are also idempotent.


Just because a server announces HTTP/1.1 doesn't mean it conforms to that specific RFC.


That non-RFC-compliant implementations of HTTP exist irrelevant to what properties HTTP methods are supposed to have, which is the issue under discussion.


It is compliant with an RFC, just not that specific RFC. That's part of the issue; it's simply taken for granted because of ideology.


My desk height is set with a GET.

I was going to fix it, but considering how hilarious this is I might not.


Your desk runs a web server? I need to step up my game...


Yeah, it's a sit/stand Linak desk.

I hooked up an ESP32, 2 channel relay (up/down control), and distance sensor (to detect height). Pushes height to graphite and position is settable remotely. :)


It's been an hour, is your desk oscillating yet?


No, but I had to implement a lock feature because my colleagues apparently cannot be trusted...


I get that it's cool, but I'm missing the why? Would you ever need to change your desk height when you're not already at your desk? Wouldn't manually changing the height be easier than hitting an HTTP endpoint on your PC to adjust it? Maybe I'm missing something, and like I said, I'll give you that it's cool, and that alone is sometimes reason enough.


> I get that it's cool, but I'm missing the why?

I'm a hacker, I did it because it's cool and I wanted to learn.


Primary effect - An effect on the input arguments, whose effect is captured in the output.

Side effect - An effect on state that was not passed in as input arguments, whose effect may or may not be captured in the output.

Side effect free - Also known as pure; the function has only a primary effect. It affects the input only in a way that the output captures.

Idempotent - Applying a function to its own output results in the same effects as applying it once. Applies to both primary and side effects.

Where things get weird, is that there's also the following:

- An effect on implicit input state, which did not come from input arguments, whose effect is captured in the output. This would be like an HTTP GET, or any query on a DB where the DB is an implicit input.

- An effect that is captured in implicit output state, either by being captured in an input (like a modification to a pointed-to object), or captured in output not returned by the function (like printing to the screen). This would be like an HTTP POST.

And now if you look at all these, there's an easy permutations of them. So you can build a table like so:

  Input | Output | Idempotent
  Arguments | Return Value | Yes
  Arguments | Return Value | No
  Arguments | Outside State | Yes
  Arguments | Outside State | No
  Arguments | Arguments | Yes
  Arguments | Arguments | No
  Outside State | Return Value | Yes
  Outside State | Return Value | No
  Outside State | Arguments | Yes
  Outside State | Arguments | No
  Outside State | Outside State | Yes
  Outside State | Outside State | No
All these combinations are possible. That's why it can be really tricky.
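A tiny illustration of the idempotent/non-idempotent distinction above, using a mutable dict as the "outside state" (both functions are made-up examples):

```python
state = {"door": "closed"}

def set_open(s):
    # Idempotent on outside state: N calls have the effect of one call.
    s["door"] = "open"

def toggle(s):
    # Not idempotent: each call flips the state, so N matters.
    s["door"] = "closed" if s["door"] == "open" else "open"

set_open(state); set_open(state)
after_set = state["door"]            # still "open", no matter how many calls

toggle(state); toggle(state)
after_two_toggles = state["door"]    # "open" again, but only because N was even
toggle(state)
after_three_toggles = state["door"]  # now "closed"
```

This is exactly the garage-door bug in miniature: a toggle endpoint leaves the final state dependent on how many requests happened to arrive.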


Also, having your garage door opened with unauthorized requests seems like looking for trouble


Yeah, did wonder about that. My thinking at the time was that the device was on the local WiFi network, not exposed to the internet, and there would be easier ways of getting into the garage if you really wanted to.


war-driving -- now with parking included!


See "Important Programming Concepts (Even on Embedded Systems) Part I: Idempotence"

https://www.embeddedrelated.com/showarticle/629.php


I recently soldered a wire to my garage door opener on the wall and ran it to a relay and then to the pins on a raspberry pi. Knowing the state of the door is key because the opener is just a toggle. I also have my alarm system hooked up to the pi, so it checks the state before and after any request. Repeatedly asking it to open will open it, or return success if it's already open. Same with close.

It took a bit of testing before I trusted it would all work the way I thought it would, but now I use it and don't even think about it; it just works and is handy to have.
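The check-before-toggle logic described above can be sketched like this; the sensor and relay callables are stand-ins for whatever the alarm system and GPIO actually expose:

```python
def open_door(read_sensor, pulse_relay):
    """Idempotent open over a toggle-only opener: check the sensor first,
    and only fire the relay if the door isn't already open."""
    if read_sensor() == "open":
        return "already open"            # repeated requests become no-ops
    pulse_relay()                        # fires the toggle exactly once
    return "opening"

# Simulated sensor readings: closed before the pulse, open afterwards.
states = iter(["closed", "open"])
pulses = []
relay = lambda: pulses.append(1)

r1 = open_door(lambda: next(states), relay)   # first request toggles
r2 = open_door(lambda: next(states), relay)   # second request is a no-op
```

This is what turns a raw toggle into an idempotent "open" operation: the relay fires once no matter how many open requests arrive.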


I know this is quite unrelated, and based on hazy memory of things from almost 10 years ago, but DML statements in databases, especially 'insert into' statements, are not idempotent as I remember; i.e. if you try to select a few rows from a table table1 and insert them into table2 with the same schema, and there are any identical rows already in table2, then the whole insert will fail. My thinking at the time was that if these insert operations were idempotent, there would be no need to explicitly check for duplicates before the insert.


At least in MySQL you can use “ON DUPLICATE KEY” to either ignore such things or optionally execute an update statement to change something about the matching row.

There is also REPLACE INTO

Assuming you have a relevant unique key setup of course.
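For illustration, here's the SQLite analogue of MySQL's "INSERT ... ON DUPLICATE KEY UPDATE" (SQLite spells it ON CONFLICT ... DO UPDATE, available since SQLite 3.24); the upsert makes the insert idempotent for a given key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")

# Run the exact same insert three times; a plain INSERT would fail on
# the second attempt, but the ON CONFLICT clause turns it into an upsert.
for _ in range(3):
    conn.execute(
        "INSERT INTO t (id, name) VALUES (1, 'alice') "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name"
    )

rows = conn.execute("SELECT id, name FROM t").fetchall()
```

After three identical statements the table still holds exactly one row, which is the idempotence the parent comment was missing from a bare INSERT.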


> DML statements in databases especially 'insert into' statements are not idempotent as I remember;

GET is actually supposed to be safe which is stronger than idempotent; the SQL command that most naturally corresponds to GET—SELECT—is normally safe, but DML inherently is not.

But, sure, that INSERT isn't safe increases the amount of code needed to implement idempotent PUTs.


See also: email-based click-to-confirm produces many false positives.


Is idempotency part of HTTP or just part of loose REST conventions?



Part of HTTP: https://github.com/for-GET/know-your-http-well/blob/master/m...

For a fascinating overview how all of HTTP fits together I also recommend the HTTP decision diagram: https://github.com/for-GET/http-decision-diagram/blob/master...


> loose REST conventions

REST is HTTP. "loose REST conventions" is when someone chose to ignore big chunks of the HTTP spec.

In other words, you can build whatever you want (like SOAP) on top of HTTP and ignore the spec that describes content negotiation, HTTP methods, Caching policies, etc. It's still technically HTTP. But if you were to read the HTTP spec and follow it to a tee, you'd build a REST application.


> REST is HTTP

This is untrue. I hate to quote from Wikipedia, but it sums it up quite nicely: "REST is not a standard in itself, but RESTful implementations make use of standards, such as HTTP, URI, JSON, and XML"

More colloquially: REST is what happens when people mix up transport layers in their head.


HTTP is an implementation of REST.


This is more the point I was trying to make.


It's also untrue.


> REST is HTTP.

That's actually backwards; HTTP is the motivating example of REST (that is, REST was developed from observed properties which HTTP/1.0 loosely exhibited and was consciously applied in design of HTTP/1.1.)


> you'd build a REST application

Nah, nerds would come out of the woodwork to inform you that what you've built is not a real REST.


Yeah - there's always a HATEOAS comment somewhere and I've never really managed to figure out what that means beyond using URIs rather than database IDs + some documented endpoint path to point to other resources.



HATEOAS is pretty simple - it is pretty much what you describe. Like a webpage which embed links to other webpages which you can then follow.

The problem is people thinking HATEOAS is a requirement for web services.


> there's always a HATEOAS comment somewhere and I've never really managed to figure out what that means beyond using URI's rather than a database IDs

You know how browsers use URLs to find content and Content-Type to decide what to do with it, and links (URLs) in content (such as anchor tags in HTML) to find related content: that's HATEOAS in the original REST API.


It's pretty simple – the information a client needs to transition from one state to another needs to be encoded in the documents. So if you were to design a garage door opener, a non-REST specification might say something like:

    To get the door status, GET https://example.com/status  
    
    To open the door, POST to https://example.com/open  
    
    To close the door, POST to https://example.com/close
A REST approach might use a document something like this:

    {
        "doorState": "closed",
        "actions": [
            {
                "label": "Open the door",
                "href": "/open",
                "method": "POST"
            }
        ]
    }
Then, upon a client performing an "Open the door" action, the server would then respond with a document like this:

    {
        "doorState": "open",
        "actions": [
            {
                "label": "Close the door",
                "href": "/close",
                "method": "POST"
            }
        ]
    }
…and vice-versa. So at any given point, the web service is describing to the client what the state is and how to transition to other states. There's quite a few different benefits to doing it this way – for instance, if you wanted to add a "Turn the garage light on" feature, you could just add the action into the API and clients would be able to use it without any changes whatsoever. Or if you wanted to disable an action, you'd simply remove it from the actions array and the client wouldn't present it to the user as an option. Want to A/B test different label text? You don't need any special A/B testing functionality in the client, just vary the responses you send to users and observe which actions they take. Want to translate into different languages? Just take a look at the Accept-Language header coming from the client and respond with the right label text.

In practice, you'd want to use a vocabulary that's already out there, like Hydra, instead of coming up with your own format. It can take a little time to get up to speed with them because they are necessarily quite flexible. Still, you can get most of the way there by simply thinking about it in terms of "the server tells the client what to do and how to do it".

For something as simple as a garage door that you are certain will never need to change? The benefits REST brings probably aren't going to be worth much to you. But as the complexity of an API grows, the point at which it's easier to use REST than not arrives very quickly.

If you want to read more about this, the best book I've found on the subject is RESTful Web APIs: http://restfulwebapis.com/


The problem I have with that explanation is that it seems to assume the existence of a universal client. How many clients are examining responses and using that to build up a UI for actions with no anticipation of what those actions might be?


The universal client is a human who is trying to figure out how to implement a client against the API.


That doesn't make sense to me either. Why would you bloat all your responses with that information rather than documenting it out of channel?


I don't see why you jumped to that conclusion. What about that JSON requires a universal client? It seems trivial to implement a personal "Garage Automation Client" based on that JSON that doesn't have to deal with anything else.


I might have this 100% backwards, but I've always thought the application state part of HATEOAS is entirely in the client. Does the server know anything about application state other than maybe implicitly through resource state?


The server tells the client about the state through the hypertext documents it sends it. The client then transitions to a new state by performing an action described in that document. Hence: Hypertext/Hypermedia as the engine of application state (HATEOAS).

I'm still not grasping why you think the API above needs some kind of universal client though. You can implement a very simple client for that extremely quickly. All you need to do is loop over the actions and render a button for each one using the provided label. Then when the button is clicked/tapped, you perform the action described and the response will be the next state.
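That loop-over-actions client can be sketched in a few lines of Python. The render and transport layers here are stand-ins (a list of tuples instead of real buttons, a lambda instead of a real HTTP call), so this is only an illustration of the shape, not a full client:

```python
def render_buttons(document, perform_action):
    """Return (label, callback) pairs: one button per advertised action."""
    buttons = []
    for action in document.get("actions", []):
        # Each callback performs the described action; whatever document
        # the server sends back is the next application state.
        buttons.append((action["label"],
                        lambda a=action: perform_action(a["method"], a["href"])))
    return buttons

# Example with a fake transport standing in for a real HTTP request:
doc = {"doorState": "closed",
       "actions": [{"label": "Open the door", "href": "/open", "method": "POST"}]}
buttons = render_buttons(doc, lambda method, href: {"doorState": "open", "actions": []})
label, on_click = buttons[0]
print(label)       # "Open the door"
next_state = on_click()
```

Note the client hardcodes nothing about doors: it only knows how to loop, render, and follow.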


I'm not saying it needs a universal client, just that it's the only real reason I can think of for including all of that.

For example, say we are writing a client to do something with Fitbit data. Step one wouldn't be to send a GET to the root to find out what actions are possible, would it? We might find out they have APIs for tracking vehicle mileage, but I don't want that in my client - I just want to graph the hours I've been sleeping. So I'm going to Fitbit's API ahead of time knowing what I want to do.

If you were going to craft a client UI based entirely off the server responses, you are going to recreate the Gopher experience, and nobody wants that.


> just that it's the only real reason I can think of for including all of that

Having built a fair number of HATEOAS-driven APIs, one benefit of the above is that it allows you to change "/open" to "https://otherservice.com/open" if needed and applications will start hitting the correct endpoint. Sure, you can solve that with redirects or a reverse proxy config entry, but in my experience it is quite convenient to specify that in the response.

One good example is something you've probably already done: Download URLs. Instead of having clients build "https://cdn.photoservice.com/photos/12341231/download" whenever they want to download a raw file, they instead hit "https://api.photoservice.com/photos/12341231" which has a "download_url" in the response. They then fetch the file from that url. In my experience that download_url is region specific, or it has query params that AB test different resolutions, or it's AB testing different CDNs, etc.
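The download-URL pattern above is easy to sketch. The response shape and URLs below are illustrative (photoservice is the comment's hypothetical service, and the query param is my own invention); the key point is that the client follows whatever link the API hands it rather than constructing the CDN URL itself:

```python
def get_download_url(photo_document):
    """Extract the server-provided download link from an API response."""
    # The client never builds this URL from parts; the server is free to
    # change region, CDN, or resolution params without breaking clients.
    return photo_document["download_url"]

# Simulated response from GET https://api.photoservice.com/photos/12341231
# (no real network call here):
response = {
    "id": 12341231,
    "download_url": "https://cdn.photoservice.com/photos/12341231/download?region=eu",
}
url = get_download_url(response)
# A real client would now fetch `url` with an ordinary HTTP GET.
```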

I hope that benefit is clear enough. Now think about how those same benefits can be applied to a wider range of API endpoints. That might seem like overkill to do everywhere, and probably is, but it certainly has benefits in the right situations.


> I'm not saying it needs a universal client, just that it's the only real reason I can think of for including all of that.

I already gave several examples of reasons why you would do it this way in my earlier comment.

Why have we leapt from the garage remote example to a FitBit that tracks vehicle mileage? I don't see why you keep leaping to "it must handle everything everywhere". Stop thinking about handling everything under the sun – we're talking about a garage remote.

> If you were going to craft a client UI based entirely off the server responses, you are going to recreate the Gopher experience, and nobody wants that.

REST is a distillation of the principles that make the web work. Can you see the parallels with how HTML forms work? Browsers don't need to know that contact forms need to be POSTed to /contact, the server sends the client a document describing the action to take.


Okay - so to put my question in terms of the garage remote example, say the API offered is_open, open_door, and close_door functions.

A "dumb" client that is configured by the server is going to include all three. Image a developer that only wants the UI to be a toggle that mimics an actual hardware button? That would require some a priori knowledge of what the API supports. The list of actions is perhaps useful for the developer before they write their code. But then version 2 of the API may remove the close function and expand open to accept a value between 0 and 1. The client is going to need to be updated and the action list will again be useful for an afternoon.

> Can you see the parallels with how HTML forms work?

Okay - now that's a fantastic question that has me reconsidering everything I said. I think what's special about HTML forms is that the browser is the universal client. I have to think about that more though.

I appreciate you trying to help me see the light here. This is something that has bothered me since I first started reading about REST.


I've been diligent about HATEOAS links for years, but recently stopped bothering. They've never actually been used in practice. While the concept of a universal HATEOAS client has been around forever, there's no real world use case for them. The whole idea is a solution looking for a problem.


I still am having a hard time with the concept.

Going back to the garage door example. Say I'm building a wifi opener. It's going to have one button that acts like a toggle, just like regular garage door openers. Press the button and the door opens. Press it again and the door closes.

In the firmware, it will contact the opener server and it can get a list of actions and endpoints. But if version 2 of the server changes the action name from open to door_open, then my button is broken even though I've diligently followed the HATEOAS model.
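One common mitigation for that renaming problem (and what hypermedia vocabularies like Hydra formalize) is to have the client key off a stable, machine-readable link relation rather than the action name or URL. A hedged sketch: the "rel" values below are my own invention, not part of any API in this thread.

```python
def find_action(document, rel):
    """Find the advertised action with the given link relation, if any."""
    for action in document.get("actions", []):
        if action.get("rel") == rel:
            return action
    return None

# Version 2 renamed the href from /open to /door_open, but the rel is
# a stable part of the contract, so the firmware's button keeps working:
doc_v2 = {"doorState": "closed",
          "actions": [{"rel": "door-open", "label": "Open",
                       "href": "/door_open", "method": "POST"}]}
action = find_action(doc_v2, "door-open")
print(action["href"])  # the client follows whatever href the server sends
```

Of course, this just moves the contract from URLs to rel names: those still have to stay stable, which is the real point of contention in this thread.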


Yeah, IMO the whole thing was a pie in the sky dream 20 years ago that we'd have magical APIs that automagically all work together in harmony by following HATEOAS links to do what they want. So it would be like the internet of APIs and a "universal client" would be the browser.

But what was misunderstood in that vision is that context can't readily be conveyed through some simplistic HATEOAS verbs. On the real web, with browsers and HTML, you have help links, positioning, colors, animations, highlighting, decades of UX research to help people understand what the various buttons do. HATEOAS was supposed to provide this in an automated way as a "browser for APIs", but ultimately the concept is far too simplistic to guide anything beyond an obvious CRUD model. And for an obvious CRUD model, well, it's unnecessary.

So anyway, that's my position after a decade of experience with it.

