I'll get around to fixing it later this week.
Also, an apology: I should have used "side-effect-free" instead of "idempotent" in my tweets.
The HTTP term is "safe method". Although you weren't even wrong because section 4.2.2 of RFC7231 (i.e. HTTP) defines all safe methods, including GET, as idempotent.
I think they use this language because nothing is truly side-effect free. In fact GETs can have side-effects, the most obvious of which is writing the fact of it to a logfile, and that's the most harmless side-effect of all, right until you run out of disk space.
Being a language arse, I think the high-precision descriptor is actually nullipotent (https://en.wiktionary.org/wiki/nullipotent), but I'd never say it out loud.
However, an idempotent HTTP call is certainly not a pure function which some people seem to be mixing up. Pure functions don't work with I/O.
REST is a bit more specific and explicitly requires GET to be nullipotent, which really means "effect free": it just reads and doesn't alter the state on the remote system at all.
Side-effects like log files, rate-limiting, etc. will always exist, but they do belong to a different 'layer', so to speak. That is, these should be unobservable side-effects (also think about minuscule effects on the power grid, the fact that a request might write something to an ARP-cache, etc. - they all happen at different layers, so the quantum world state keeps changing, but that's not what this is about). Whether an X-Request-Count header violates the requirements or not depends on interpretation. From the garage door perspective, I wouldn't care...
There is a comment above that argues this point in better detail.
Basically it depends on the many nuances in how you define "pure", "function", and "idempotent".
I love that term and am going to use it as much as possible
edit: updated link because left creds in the commit :-/
I was actually thinking "better not commit that" ... before, y'know, I committed it :/
Every time you write some code that you need to remember to remove before committing, surround it with a comment containing "NOCOMMIT". With a script that checks for that marker installed as a pre-commit hook, git will echo an error message and fail.
print("debug: ", myval)
print("debug: ", myval) # NOCOMMIT
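A minimal sketch of the check such a hook could run (the hook itself would feed it the output of `git diff --cached`; restricting the scan to added lines is my own assumption, not from the original comment):

```python
# Sketch: the core check a NOCOMMIT pre-commit hook could run.
# A real hook would pipe in `git diff --cached` and exit non-zero
# with an error message when this returns True.

def diff_has_nocommit(diff_text, marker="NOCOMMIT"):
    # Only look at added lines (those starting with "+") so that
    # unchanged context lines in the diff don't trigger false positives.
    return any(
        line.startswith("+") and marker in line
        for line in diff_text.splitlines()
    )

assert diff_has_nocommit('+print("debug: ", myval)  # NOCOMMIT')
assert not diff_has_nocommit('+print("debug: ", myval)')
```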
I ended up using a "env.h" file... is there a C-equivalent of the PHP (?) .env file?
That looks like a much more useful tool, though.
It will let you approve each hunk in a file to commit or not.
git commit -e -v
Will force you to edit the commit message and in the editor show you the diff of the commit against HEAD.
"A unary operation f, that is, a map from some set S into itself, is called idempotent if, for all x in S, f(f(x)) = f(x)."
9.1.2 Idempotent Methods
Methods can also have the property of "idempotence" in that (aside
from error or expiration issues) the side-effects of N > 0 identical
requests is the same as for a single request.
I think the confusion arises because side-effectful functions can be considered as having type
f :: (RealWorld, OtherArgs) -> (RealWorld, OtherOutputs)
t :: RealWorld -> RealWorld
Now the idempotence condition becomes:
t(t(world)) == t(world)
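In that framing, the difference between a PUT-style call and a toggle can be checked directly; a sketch in Python, with a dict standing in for RealWorld (the door example is illustrative):

```python
# Model requests as functions on a "world" state and test idempotence
# by applying them twice.

def put_open(world):
    # PUT-like: sets the state absolutely, so repeating it changes nothing
    return {**world, "door": "open"}

def toggle(world):
    # toggle: flips the state, so applying it twice differs from once
    return {**world, "door": "closed" if world["door"] == "open" else "open"}

w = {"door": "closed"}
assert put_open(put_open(w)) == put_open(w)   # t(t(world)) == t(world)
assert toggle(toggle(w)) != toggle(w)
```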
It is a side effect in programming usage (not just HTTP).
Something is side-effect-free if and only if the only result of it running is that you get an answer. If you ignore the answer, then you cannot tell you ran the function/method/call/whatever. PUT is not side-effect-free.
That said, side-effect-free-ness is an incomplete paraphrasing of the HTTP spec (RFC7231); you'll notice that the only mentions of the phrase "side effect" are giving examples of legal side effects.
I looked it up, apparently there's a formal definition "denoting an element of a set which is unchanged in value when multiplied or otherwise operated on by itself", which does not seem to describe the REST usage very well, though.
But all I saw was every member of my list clicking 'unsubscribe'. It took a good hour to figure out exactly what was going on.
Idempotence is not the problem here, by the way. That just means calling the method twice has the same effect. But GET should have no side-effect, in an ideal world. Of course, in the case of unsubscribe links, it needs to have a side-effect to comply with the law.
You request a cert and it's authorized; everything seems fine. Except, huh, the guy who was supposed to authorize it is off sick today, so how did that work? The email to the authorizer should just be sat in his INBOX until he gets back.
Oh - the company's "Malware protection" system automatically dereferenced the "Do you want to issue this certificate?" link from the email and there was no second step. So for affected companies basically anybody could request any certificate in their domains and it would get issued.
As far as I remember nobody has proof any bad guys ever used this, but grey hats posted some fun they had with it. Likewise for a CA that decided to OCR the images from an unco-operative DNS hierarchy that wouldn't provide machine readable data to them. Grey hats obtained domain names that confused the OCR into allowing them to get certs for other people's names. Did any black hats do it? We have no proof.
Alas I wasn't able to bring to mind a combination of keywords that would find the other incident in public archives, and I know it's from my background reading so it will be before my personal archives of these discussions begin. Sorry.
The page in question could also have a "actually, I do want to unsubscribe, I just clicked the wrong link by accident" button as well, in case a human reader is confused.
Or, one link that says "unsubscribe immediately" and another that says "unsubscribe only after confirmation", and the first one unsubscribes immediately, unless they also click the second link immediately before or after, while the second link only unsubscribes them if they click a confirm button on the page?
Soon your inbox is full.
I have seen this with spam, and I have seen it with DMCA requests. My hosting provider will issue me a warning for any DMCA request, saying they will shut me down within 48 hours if I don't comply. Even when it is clearly not a valid request. Even when the content has already been removed. They don't even check; they just say "do it or else". And I pay them thousands per month. GoDaddy is the same way: I have had people complain to them, and then they threaten to shut down the domain.
When you have been threatened repeatedly to be shut down for operating normally you don't take any chances. It isn't worth it.
A single step, a button push, to confirm an unsubscription is fine.
No, it really isn't.
Lots of mailing lists operate exactly as the person you replied to mentioned where after unsubscribing you are given a chance to undo that action. That's a far more respectful way to operate.
The follow up confirmation screen feels slimy to me like some kind of cable company retention tactic. That's why I called it a dark pattern.
I would rather companies just didn't subscribe me, so having to click an unsubscribe link is already a problem. The more confirmation they have, the more I take it as biased and manipulative.
In the spectrum of "buttholedesign", using proper web standards to make sure an action is being taken deliberately is far lower than "intentionally low-contrast skip buttons" and "call to cancel subscription".
That's much broader scope than what's being discussed here.
> Nobody reads them - they just click the button that will make it go away.
That statement is false. Many people read and care about confirmations.
What you're talking about is very specific personal preferences, not what is generally considered "fine".
You could probably automate the POST action though. Equivalent of $('#unsub-button').click() on the unsubscribe page load
When Facebook does it, I do have a problem with it because I don't think their motives are as pure.
Thank you! I felt like I was taking crazy pills with my understanding of idempotence.
Calling GET /door/open is idempotent too, but it's still gross in my opinion. The author makes it sound like that would be fine.
GET being idempotent is not the issue the author was dealing with. State change from a GET is the real issue.
Typical bolt-on solutions would be a range finder pointing down from the ceiling near the door which can tell if the door is obstructing it, a tilt sensor mounted to the top door panel, or limit switches on the carriage way/tracks. With a door position sensor in place a simple momentary-contact dry relay can be used to trigger door motion.
Once that is in place, you can add another WeMos D1 mini to your car to open and close the door with no interaction: https://github.com/aderusha/MQTTCarPresence
Range finder seems like the best idea. I was thinking of mounting it on the door motor and pointing horizontally down the track at a reflector attached to the chain/handle.
I love the idea of putting a D1 in the car. Thanks for the link!
In either event, are you sure you need two sensors per door? The garage door can stop at any point, but (to my thinking) it's still only open or closed. If it's open by only a bit, open half way, mostly open, etc... it's all open in my system, meaning someone can get in or out if they want. So limit switches (or whatever) at the closed position can tell you if it's all the way closed or "not closed", which is suitable for automation purposes.
Sadly, the little garage door doesn't look like a dollhouse garage door; that'd be too cool. It looks like a wire on a track.
It's perfectly fine to have certain side-effects, provided the GET is idempotent, but not others/most. Specifically, it's fine to idempotently have side-effects where it doesn't matter who is causing the effect, the side-effects are desirable, and the side-effect load won't be overwhelming.
In the case of a link shortener one can pretend that the side effect did not happen the first time the service sees some particular link, that the shortening has already occurred (at the beginning of time!). There is definitely a side-effect, since it involves updating a persistent hash table, and it is idempotent (though one could construct a shortener where it's possible to get more than one shortened form for a given URI when racing to shorten it, but this is not a problem for this particular sort of service).
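That pretend-it-already-happened behaviour can be sketched as follows (the hash-based scheme and names are illustrative, not how any particular shortener works):

```python
# An idempotent shortener: the side effect (the table write) happens at
# most once per URL, so repeating the request changes nothing further.
import hashlib

table = {}

def shorten(url):
    code = hashlib.sha256(url.encode()).hexdigest()[:8]
    table.setdefault(code, url)   # write only if the entry is absent
    return code

first = shorten("https://example.com/a")
second = shorten("https://example.com/a")
assert first == second
assert len(table) == 1
```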
GET and NOOP are both "safe" and naturally "idempotent", since the former property encompasses the latter.
Toggle should be implemented as POST, since it's neither safe nor idempotent.
I think the resource can change externally...so it means that your GET request doesn't change it, but it doesn't have to be the same result every time.
Given that HTTP requests are supposed to trigger database reads that may return a different result after some time, then “side effect free” would indeed be incorrect.
The litmus test is: does it always return the same output given the same input? If that can change, then it's not describing a pure function, but a side-effectful one.
And the output of a GET request can and does change.
Speaking of which people here aren’t talking of idempotency in a mathematical sense:
f(f(x)) == f(x)
If you really think about it, GET requests aren’t even idempotent ;-)
I disagree. There are some ways in which GET can be non-idempotent, such as pageview counters and endpoints with a vast amount of constantly-changing content, for instance. One may argue that the first example may be possible with a GET followed by a POST, but any subsequent GET response (assuming it contains the counter) would still be different to its prior.
This let users double click on links and have the action performed only once. In 2000, hyperlinks were still confusing to some users who were used to "double-click = open", especially for file icons.
EDIT: added text in italics because initial wording was confusing.
The problem is with lack of authentication. Slack's ability to unsubscribe people on their behalf, without their explicit permission seems to be the real issue here.
Even a "I'm not a bot" check would provide some protection.
This doesn't seem very likely, so I guess that a whole load of unique unsubscribe links got dumped into Slack, which started following them.
1) the original developer's idea of handling an unauthorized /admin request was just to set a redirect header and continue processing the current request.
2) the /admin page had a grid of all the content on the site, with handy 'Delete' links that ran over GET without confirmation.
You can probably guess where this is going – some search bot hit the overview page, ignored the redirect header, saw the content, and dutifully crawled every single link on it…
I think the state of the web has improved slightly over the last decade but this is a great example of why browser vendors are so conservative. You can do this now but only opt-in.
When you talk to your teammates about the semantics of these verbs and someone just says "oh a GET is fine" and the team agrees but you don't and you can't say it so you don't become "that guy" it's time to find a new engineering org to be a part of.
On the topic of PATCH, check out JSON merge patches (application/merge-patch+json):
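The semantics of application/merge-patch+json (RFC 7386) are small enough to sketch in full: null removes a key, nested objects merge recursively, and anything else replaces the target value.

```python
# Sketch of RFC 7386 JSON Merge Patch semantics.
def merge_patch(target, patch):
    if not isinstance(patch, dict):
        return patch            # non-object patch replaces the target outright
    if not isinstance(target, dict):
        target = {}             # patching a non-object starts from empty
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)                       # null deletes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

doc = {"title": "Goodbye!", "author": {"givenName": "John", "familyName": "Doe"}}
patch = {"title": "Hello!", "author": {"familyName": None}}
assert merge_patch(doc, patch) == {"title": "Hello!", "author": {"givenName": "John"}}
```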
It sounds like you're advocating leaving an organisation instead of speaking up when a mistake is being made? In the scenario you described, the "a GET is fine" person is unfamiliar with the protocol they're writing for (HTTP), and so is every person who agreed. Leaving instead of speaking up seems pretty drastic.
To delve into what I was thinking a little: I think it came off so bitter because I've been in too many orgs where group-think squashed dissenting, possibly-correct opinions, where half the room is wondering "this seems too complex, why are we doing this" but everyone goes along with it. Reading through the Twitter comments, where a bunch of people were trying to gloss over the misuse, might have triggered me.
The more reasonable response is definitely to articulate and explain why a GET is NOT fine in that case so everyone learns, but once this starts happening a lot I mark it as a red flag in my head: it's either a culture clash or I'm too close to being the most experienced in the room (in that specific area), and that means there are fewer people to learn from, and staying too long might lead to stagnating. The "leave the org" bit is hyperbole, but I worry about this kind of thing when I experience it.
In the end though, it was meant to be a humorous post so the remark is overblown.
1. GET is good for everything
2. Perhaps we should use POST too
3. Let's use all the verbs
4. Someone mentions HATEOAS
5. Some old guy says that you've got to use XML because that defines links and JSON doesn't
6. Someone else counters with JSON-LD and sends a link to the W3 spec
That's as far as I got. Mostly this conversation happens in my head.
I think JSON+JSON-LD still offers some benefits over XML at the very least in the security sense -- while it can be misused, there are much less dynamic bits built in to the transfer language itself.
Also I'd say that JSON "scales" well in terms of complexity still, small things are cognitively light, and big things are linearly more cognitively heavy.
Is JSON+JSON-LD+X the new XML?
Is Swagger the new WSDL/SOAP?
I dunno, but it doesn't feel like it's quite that bad yet.
I mean it's the obvious choice for a simple control UI implementation on a slow embedded system...
For that matter, link relations too are a cognitive leap, and no one writes 'smart REST clients' anyway: these days, to consume a "REST API" you use the first-party official library, or something you found on github, or something you wrote in 3 hours where all the URLs are formed by string concatenation. Within one vendor's one particular REST API, the benefits of HATEOAS are typically minimal, which is why it's so frequently omitted and no one complains except REST pedants.
It also doesn't help that the field still hasn't settled down in ~10 years: JSON Schema, widely used for JSON schemas due to the lack of an official mechanism, recently decided they wanna get into hypermedia too, and there's also the enticingly named JSON API, which offers a similar data model too. And let's not forget about stuff from ~2011-2013 like HAL, which didn't really become big but never fully went away.
Swagger is very much the new WSDL/SOAP ecosystem, complete with first-party code generators. In the WSDL days, all of your code generators were third-party, and with enough knobs in WSDL and enough accumulated design baggage, it wasn't always interoperable. But when it was, it was pretty magical for ~2003.
Swagger had clout and name recognition to kill the other schemes that are largely the same, but then they renamed it OpenAPI to play up the consensus against competing codegen/specs like RAML. Now that Mulesoft has agreed to bring RAML under the OpenAPI umbrella, there's less integrated competition, but then OpenAPI's uptake will be limited by those put off by serious tooling who opt for simpler description languages instead. It's a mess.
https://www.w3.org/2013/dwbp/wiki/RDF_AND_JSON-LD_UseCases
http://json-schema.org/specification.html
http://stateless.co/hal_specification.html
https://blogs.msdn.microsoft.com/dotnetinterop/2005/01/17/ne...
Ditto on the huge disappointment with the cobbled together SDKs when HATEOAS/JSON-LD-fluent applications represent a more robust future that could have been the current timeline.
Also, it's a shame that hyper-schema couldn't reconcile with JSON-LD. JSON-LD is the one I'm leaning towards using at the moment, because of its early consideration of things like multiple languages.
It's reassuring (?) to see the mention of the RAML/Swagger situation being a mess from someone else. I actually liked RAML more than the Swagger specification, but Mule (https://en.wikipedia.org/wiki/Mule_(software)) left a bad taste in my mouth once upon a time, when a team I was on was deciding how best to create an ESB. I choose to go with "OpenAPI" (AKA Swagger 3.0) for my projects going forward not for that single personal reason but rather due to the sheer amount of people that have gotten behind Swagger -- they seem to have won the mindshare battle, if not the war.
I have picked a lot of things I thought were cleaner/technologically superior/whatever that lost the mindshare war in my lifetime, I'm trying to cut down.
> JSON-LD is a significant cognitive leap to reason about
I fully agree. JSON-LD requires first some kind of idea about RDFa which I didn't even know was a thing until reading up on JSON-LD.
Also Postman is awesome for debugging REST APIs.
Normally, TLS ensures you can't replay somebody else's conversations. So even if I know Barry, who is authorised to toggle the door, just sent a "toggle the door" command, if I try playing it back that won't work, the setup will be different each connection and I can't respond.
But for 0RTT there is no setup - there can't be, no time to do it, and so if I replay Barry's "toggle the door" it would work.
The specification is very clear that the right thing here will be to never allow 0RTT for such features. But the moment that's hidden behind some library API you can bet _somebody_ is going to screw up badly. Alas our industry doesn't exactly have a "safety first" mentality.
Yay for modern user friendly applications making simple words like blank completely meaningless and ambiguous.
I'm skeptical that it's "every time" but I do remember it doing it way more than I thought was needed.
It's no surprise the level of compromise and breach when you intersect what were pretty distinct skillsets and dump them in the mixing bowl together. That's what this IoT thing is like - it's a bunch of household and industrial chemicals all poured into the one container. It's not going to be very safe.
REST is irrelevant to this; HTTP alone covers the reason why this is bad. So ultimately, the question is: "Do we expect somebody designing an HTTP API to understand HTTP?" I think that's a reasonable expectation. If your embedded guys don't understand HTTP, then get somebody who does understand HTTP to design the API. They don't need embedded experience to do so, they aren't implementing it, just designing it. This isn't a difficult cross-functional intersection, you just don't assign tasks to people who aren't qualified to carry them out.
This concept of HTTP request methods really should be explained to new developers in a more accessible way, with examples of mistakes. It might not be intuitive at first, or they might not realize how important it is.
"Idempotence" isn't really the problem here, nor "should" GET requests be idempotent (think kittenwar.com or StumbleUpon); the problem is that GET is reserved for retrieving (getting!) data, so it shouldn't modify data. (Other than access information.)
> Methods can also have the property of "idempotence" in that (aside from
> error or expiration issues) the side-effects of N > 0 identical
> requests is the same as for a single request. The methods GET, HEAD,
> PUT and DELETE share this property.
I agree with the RFC; I mistook what the person meant.
At least it was his own devices and not Googlebot or something, I guess.
After all, you get the same state if you do it 3,5,7,9,etc times as if you do it once, right? ;)
I was horrified, glad it isn't live yet, and I fixed it immediately. But I'm still wondering whether I was so sleep-deprived or drunk when I wrote this. It's over SSL, so it should not be that big deal, but still, GET shouldn't be used for such things.
Good CSRF protection on GET requests is also near impossible to implement. GET is intended to be a "safe" request, one that does not modify state, but this isn't something that is actually practiced.
And yeah, I try to use GET only for safe requests, but I should be more careful.
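For completeness, the usual defence for state-changing (non-GET) requests is the synchronizer-token pattern; a minimal sketch, with the session stubbed as a dict:

```python
# Synchronizer-token CSRF check: the submitted form must echo back a
# secret token that was stored server-side in the user's session.
import hmac
import secrets

session = {"csrf_token": secrets.token_hex(16)}

def post_is_allowed(form):
    submitted = form.get("csrf_token", "")
    # constant-time comparison avoids leaking the token via timing
    return hmac.compare_digest(submitted, session["csrf_token"])

assert post_is_allowed({"csrf_token": session["csrf_token"]})
assert not post_is_allowed({"csrf_token": "forged"})
```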
I genuinely love seeing this kind of lively discussion, because these seemingly "trivial details" matter, a lot. The Three Mile Island accident was more or less caused by "message sent" being conflated with "state changed" at the UI level, directly leading to a nuclear meltdown. They basically had a system with the equivalent design of GET /open and /close that assumed success for both.
1) Twitter is a terrible medium for anything, let alone posts longer than a sentence.
2) The level of over-engineering tech people readily engage in without a second thought is truly mind boggling.
2) He's just having fun, working on a side project that's useful to him, and learning. Nothing wrong with any of that.
Why so negative?
2) That doesn't mean it isn't over-engineering.
- Raspberry pi
- Open/close sensors on the garage door as well as the side-entry door
- Camera pointed at the side entry door taking photos while it is left open
- Push alerts to my phone if either door is opened between specific hours of the night (Break-ins to detached garages were huge in my neighborhood)
- Voice controls from my fucking phone to open the door
I sold the house otherwise it'd probably be an even larger monstrosity today
So it's not the vendor's problem then. They provide you with two ways to make a request. You had a choice to do it right, and you didn't.
RFC 7231 disagrees.
> They're not supposed to change state in a first place.
Well, yeah, GET is supposed to be safe, but all safe methods are also idempotent.
I was going to fix it, but considering how hilarious this is I might not.
I hooked up an ESP32, 2 channel relay (up/down control), and distance sensor (to detect height). Pushes height to graphite and position is settable remotely. :)
I'm a hacker, I did it because it's cool and I wanted to learn.
Side effect - An effect on state that was not passed in as input arguments, whose effect may or may not be captured in the output.
Side-effect free - Also known as pure; the function only has a primary effect. It only affects the input in a way that the output captures.
Idempotent - Applying a function to its own result produces the same effects as applying it once. Applies to both primary and side effects.
Where things get weird, is that there's also the following:
- An effect on implicit input state, which did not come from input arguments, whose effect is captured in the output. This would be like an HTTP GET, or any query on a DB where the DB is an implicit input.
- An effect whose effect is captured in implicit output state, either by being captured in an input (like a modification to a pointed-to object) or captured in output not returned by the function (like printing to screen). This would be like an HTTP POST.
And now if you look at all these, there's an easy permutations of them. So you can build a table like so:
Input | Output | Idempotent
Arguments | Return Value | Yes
Arguments | Return Value | No
Arguments | Outside State | Yes
Arguments | Outside State | No
Arguments | Arguments | Yes
Arguments | Arguments | No
Outside State | Return Value | Yes
Outside State | Return Value | No
Outside State | Arguments | Yes
Outside State | Arguments | No
Outside State | Outside State | Yes
Outside State | Outside State | No
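Two of those rows side by side, as a minimal sketch (the door names are illustrative):

```python
# "Outside State -> Return Value" vs "Arguments -> Outside State":
db = {"door": "closed"}   # implicit state, not passed as an argument

def get_door():
    # reads implicit input, captured in the return value: like an HTTP GET
    return db["door"]

def put_door(state):
    # takes an argument, effect lands in implicit outside state: like a PUT
    db["door"] = state

put_door("open")
assert get_door() == "open"
assert db == {"door": "open"}
```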
It took a bit of testing before I trusted it would all work the way I thought it would, but now I use it and don't even think about it; it just works and is handy to have.
There is also REPLACE INTO
Assuming you have a relevant unique key setup of course.
GET is actually supposed to be safe which is stronger than idempotent; the SQL command that most naturally corresponds to GET—SELECT—is normally safe, but DML inherently is not.
But, sure, that INSERT isn't safe increases the amount of code needed to implement idempotent PUTs.
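A sketch of that idempotent-PUT shape using SQLite (which also accepts REPLACE INTO; INSERT OR REPLACE is the canonical SQLite spelling), assuming, as noted above, a unique key on the id:

```python
# Idempotent PUT-style write: repeating it leaves the table unchanged.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doors (id TEXT PRIMARY KEY, state TEXT)")

def put_door(door_id, state):
    conn.execute("INSERT OR REPLACE INTO doors VALUES (?, ?)", (door_id, state))

put_door("garage", "open")
put_door("garage", "open")   # second call has no further effect
assert conn.execute("SELECT * FROM doors").fetchall() == [("garage", "open")]
```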
For a fascinating overview how all of HTTP fits together I also recommend the HTTP decision diagram: https://github.com/for-GET/http-decision-diagram/blob/master...
REST is HTTP. "loose REST conventions" is when someone chose to ignore big chunks of the HTTP spec.
In other words, you can build whatever you want (like SOAP) on top of HTTP and ignore the spec that describes content negotiation, HTTP methods, Caching policies, etc. It's still technically HTTP. But if you were to read the HTTP spec and follow it to a tee, you'd build a REST application.
This is untrue. I hate to quote from Wikipedia, but it sums it up quite nicely: "REST is not a standard in itself, but RESTful implementations make use of standards, such as HTTP, URI, JSON, and XML"
More colloquially: REST is what happens when people mix up transport layers in their head.
That's actually backwards; HTTP is the motivating example of REST (that is, REST was developed from observed properties which HTTP/1.0 loosely exhibited and was consciously applied in design of HTTP/1.1.)
Nah, nerds would come out of the woodwork to inform you that what you've built is not a real REST.
The problem is people thinking HATEOAS is a requirement for web services.
You know how browsers use URLs to find content, Content-Type to decide what to do with it, and links (URLs in content, such as anchor tags in HTML) to find related content? That's HATEOAS in the original REST API.
To get the door status, GET https://example.com/status
To open the door, POST to https://example.com/open
To close the door, POST to https://example.com/close
"label": "Open the door",
"label": "Close the door",
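Those "label" lines appear to come from a self-describing response; a hedged reconstruction of the shape being described (every field name other than "label" is my own illustration):

```python
# Reconstructed self-describing status response. A real server would
# typically include only the actions valid in the current door state.
status_response = {
    "state": "closed",
    "actions": [
        {"label": "Open the door", "method": "POST", "href": "/open"},
        {"label": "Close the door", "method": "POST", "href": "/close"},
    ],
}

labels = [action["label"] for action in status_response["actions"]]
assert labels == ["Open the door", "Close the door"]
```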
In practice, you'd want to use a vocabulary that's already out there like Hydra instead of coming up with your own format. But it can sometimes take a little time to get up to speed with them because they are necessarily quite flexible. But you can get most of the way there by simply thinking about it in terms of "the server tells the client what to do and how to do it".
For something as simple as a garage door that you are certain will never need to change? The benefits REST brings probably aren't going to be worth much to you. But as the complexity of an API grows, the point at which it's easier to use REST than not arrives very quickly.
If you want to read more about this, the best book I've found on the subject is RESTful Web APIs: http://restfulwebapis.com/
I'm still not grasping why you think the API above needs some kind of universal client though. You can implement a very simple client for that extremely quickly. All you need to do is loop over the actions and render a button for each one using the provided label. Then when the button is clicked/tapped, you perform the action described and the response will be the next state.
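That loop-over-actions client really is tiny; a sketch, with illustrative response field names:

```python
# A "dumb" client: it knows nothing about doors, only how to turn a
# server-provided action list into labelled buttons.

def render_buttons(state):
    return [
        f'[{action["label"]}] -> {action["method"]} {action["href"]}'
        for action in state.get("actions", [])
    ]

state = {
    "state": "closed",
    "actions": [{"label": "Open the door", "method": "POST", "href": "/open"}],
}
assert render_buttons(state) == ["[Open the door] -> POST /open"]
```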
For example, say we are writing a client to do something with Fitbit data. Step one wouldn't be to send a GET to the root to find out what actions are possible, would it? We might find out they have APIs for tracking vehicle mileage but I don't want that in my client - I just want to graph the hours I've been sleeping. So I'm going to Fitbit ahead of time knowing what I want to do.
If you were going to craft a client UI based entirely off the server responses, you are going to recreate the Gopher experience, and nobody wants that.
Having done a fair bit of HATEOAS-driven APIs, one benefit of the above is that it allows you to change "/open" to "https://otherservice.com/open" if needed and applications will start hitting the correct endpoint. Sure, you can solve that with redirects or a reverse proxy config entry, but in my limited experience it is quite convenient to specify that in the response.
One good example is something you've probably already done: Download URLs. Instead of having clients build "https://cdn.photoservice.com/photos/12341231/download" whenever they want to download a raw file, they instead hit "https://api.photoservice.com/photos/12341231" which has a "download_url" in the response. They then fetch the file from that url. In my experience that download_url is region specific, or it has query params that AB test different resolutions, or it's AB testing different CDNs, etc.
I hope that benefit is clear enough. Now think about how those same benefits can be applied to a wider range of API endpoints. That might seem like overkill to do everywhere, and probably is, but it certainly has benefits in the right situations.
I already gave several examples of reasons why you would do it this way in my earlier comment.
Why have we leapt from the garage remote example to a FitBit that tracks vehicle mileage? I don't see why you keep leaping to "it must handle everything everywhere". Stop thinking about handling everything under the sun – we're talking about a garage remote.
> If you were going to craft a client UI based entirely off the server responses, you are going to recreate the Gopher experience, and nobody wants that.
REST is a distillation of the principles that make the web work. Can you see the parallels with how HTML forms work? Browsers don't need to know that contact forms need to be POSTed to /contact, the server sends the client a document describing the action to take.
A "dumb" client that is configured by the server is going to include all three. Imagine a developer that only wants the UI to be a toggle that mimics an actual hardware button; that would require some a priori knowledge of what the API supports. The list of actions is perhaps useful for the developer before they write their code. But then version 2 of the API may remove the close function and expand open to accept a value between 0 and 1. The client is going to need to be updated, and the action list will again be useful for an afternoon.
> Can you see the parallels with how HTML forms work?
Okay - now that's a fantastic question that has me reconsidering everything I said. I think what's special about HTML forms is that the browser is the universal client. I have to think about that more though.
I appreciate you trying to help me see the light here. This is something that has bothered me since I first started reading about REST.
Going back to the garage door example. Say I'm building a wifi opener. It's going to have one button that acts like a toggle, just like regular garage door openers. Press the button and the door opens. Press it again and the door closes.
In the firmware, it will contact the opener server and it can get a list of actions and endpoints. But if version 2 of the server changes the action name from open to door_open, then my button is broken even though I've diligently followed the HATEOAS model.
But what was misunderstood in that vision is that context can't readily be conveyed through some simplistic HATEOAS verbs. In the real internet of internets, with browsers and HTMLs, you have help links, positioning, colors, animations, highlighting, decades of UX research to help people understand what the various buttons do. HATEOAS was supposed to provide this in an automated way as a "browser for APIs", but ultimately the concept is far too simplistic to guide anything beyond an obvious CRUD model. And for an obvious CRUD model, well, it's unnecessary.
So anyway, that's my position after a decade of experience with it.