

Ask HN: How do I convince my coworkers to use HTTP well? Or am I being a pedant? - supersecret

Throwaway account here.

I work on a team that is developing a mobile app with an HTTP-based backend. Except for me, everyone is attempting to reinvent the wheel. They are:

1. HTTP 200 everything, unless it's a 500 internal error, or the route requested doesn't have a handler (404).

2. POSTing everything.

In cases where, say, a route exists but the resource requested does not, they return a 200 with a JSON payload that identifies some internal error code (the list of which is not currently well documented).

In cases where a client requests a resource, the request is sent as a POST with either a querystring or JSON body. Same for updating, same for deleting.

My question:

How would you convince them to use HTTP as God (Mr. Fielding) intended? Things to note:

1. It's a small, new team. I believe they can be swayed, but I want to make it an easy argument.

2. The API is (currently) completely private. Only internal developers use it, so until we have external folks accessing it, it's kind of a matter of taste. I reason that we shouldn't assume it will never have external consumers. Still, I'm not sure: am I being too pedantic?

If my argument is worth pursuing, can you help me build it to be short and strong? Anything other than the whole W3C spec and a dissertation that might sound convincing to them? Or should I just keep quiet and work on?
======
sajal83
If your app does become super popular, it would be very hard to implement
http-level caching when using only POST. By using GET you can do caching at
client (or CDN/reverse proxy) layer.
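
To make that concrete, here's a toy sketch (Python; the cache class and routes are hypothetical, not any real CDN): a cache layer keyed on URL only stores responses to safe methods like GET, so POST traffic always falls through to the origin.

```python
# Sketch: why caches can only help GET traffic. A cache (browser, CDN,
# or reverse proxy) keys responses on the URL and only stores safe
# methods such as GET; POSTs always fall through to the origin server.

CACHEABLE_METHODS = {"GET"}

class TinyCache:
    def __init__(self, origin):
        self.origin = origin      # function (method, url) -> body
        self.store = {}           # url -> cached body
        self.origin_hits = 0      # how often the backend did real work

    def request(self, method, url):
        if method in CACHEABLE_METHODS and url in self.store:
            return self.store[url]        # served from cache
        self.origin_hits += 1             # backend has to answer
        body = self.origin(method, url)
        if method in CACHEABLE_METHODS:
            self.store[url] = body
        return body

cache = TinyCache(lambda method, url: f"{method} {url} -> data")
for _ in range(3):
    cache.request("GET", "/users/42")     # hits origin once, then cached
for _ in range(3):
    cache.request("POST", "/getUser")     # hits origin every single time
print(cache.origin_hits)  # 4
```

Same six requests, but the GET version only cost the backend one real lookup.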

~~~
supersecret
Thank you, yes. These are exactly the sort of short and sweet bullet points
that I'm building up.

------
pestaa
Either they don't know how HTTP can be utilized more effectively or they don't
care.

The former is relatively easy to fix. Still searching for an answer to the
latter...

~~~
yarper
Same here for #2

------
sp332
You're going to have a harder time making this argument if the API is really
completely internal. You could point out that it will be difficult to
interoperate with other programs or libraries, though. I think your best bet is to
show them that the spec already exists, there is already code that implements
those specs, and they will have a lot less work to do (now and in the future)
if they stick to the standard.

------
theandrewbailey
> Only internal developers use it, so until we have external folks accessing
> it, it's kind of a matter of taste.

No, it's not a matter of taste. What will you do when there are external
users? Rewrite your whole internal ecosystem to use the external one? Maintain
two APIs? Throw a complicated, not semantic API out there and sacrifice goats
hoping that someone will use it?

------
mikeomoto
Programming is APIs all the way down. Point to an API in the programming
language in your shop, and be like, "how productive is it really to start
overloading methods in this API to have effects other than those intended?"
Because what they're doing is effectively the same.

The verb is the method, the post content and query string are the arguments.

Not only would it be incredibly confusing for newcomers to the system (and
there invariably will be some): why add to the cognitive load when you can
simplify things instead?

------
gumballhead
Even if it is internal, you're just making things harder on yourself if you go
against convention for no reason.

Most client networking libraries have abstractions built upon those
conventions. Like jQuery's $.ajax: it returns a promise that will invoke the
failure callback for any response outside the 200 range.
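
That convention can be sketched in a few lines (Python, with a hypothetical callback-style wrapper standing in for jQuery's promise): generic clients branch on the status code alone, so a 200-for-everything scheme silences the error path and forces every caller to re-parse the body.

```python
# Sketch of the convention generic HTTP clients follow: success vs.
# failure is decided by the status code, not the body. (Hypothetical
# dispatcher, not jQuery itself.)

def dispatch(status, body, on_success, on_failure):
    if 200 <= status < 300:
        return on_success(body)
    return on_failure(status, body)

# With proper status codes, a missing resource takes the failure path:
result = dispatch(404, "not found",
                  on_success=lambda b: ("ok", b),
                  on_failure=lambda s, b: ("error", s))
print(result)  # ('error', 404)

# With 200-for-everything, the library happily reports success, and the
# caller must know the private error-code table to notice anything:
result = dispatch(200, '{"internal_error": 1234}',
                  on_success=lambda b: ("ok", b),
                  on_failure=lambda s, b: ("error", s))
print(result)  # ('ok', '{"internal_error": 1234}')
```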

------
T-R
Talk about the practical benefits you can get, and decide based off of those:

\- GET requests are supposed to be safe (no side effects) and idempotent, so
they can be transparently cached by caching proxies. Even if you won't be
taking advantage of caching proxies, it makes it easier if you can use the
same data and mechanisms (i.e., HTTP headers) for server-side caching (and
makes it easier to pull it out onto a separate server later). This is
important if it gets a bunch of traffic or requests are resource intensive.

\- Thinking about endpoints as "resources" instead of as function calls
discourages you having parameters with a large or infinite set of possible
inhabitants, which would dilute your cache utilization.
"mysite.com/multiplicationTable.html" is nicely cacheable; you can cache
"mysite.com/mult?a=5&b=10" if you want to, but how likely is someone to
request that same URL again before it falls out of cache? Any information that
changes the response is technically a parameter, too (since it invalidates
your cache) - that especially means things like session state and side
effects. That doesn't necessarily mean "never have side-effecty/RPC
endpoints", but you should minimize them if you want to be scalable, since
every request to one pretty much _has_ to put load on your server.

\- PUT/DELETE requests are supposed to be idempotent so they can be re-issued
in case of failure. I also find it encourages design with immutable/replace
semantics, rather than partial update semantics, which puts you in a better
position when you want to move toward distributed/eventual consistency systems
(or even just for debugging), since the data's all there in the request, and
likely easier to make it commutative with other requests. Some of these things
might matter to you more than others, depending on what you're building.
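
The replace-vs-partial-update distinction is easy to demo (Python; the store and route names are made up): a PUT carrying the full new state can be retried after a timeout with no harm, while an RPC-style increment cannot.

```python
# Sketch: idempotent PUT (replace semantics) vs. a non-idempotent
# RPC-style partial update. Hypothetical in-memory store.

store = {}

def put_user(user_id, full_state):
    store[user_id] = dict(full_state)   # replace: safe to retry

def post_increment_logins(user_id):
    store[user_id]["logins"] += 1       # partial update: NOT safe to retry

put_user(42, {"name": "ada", "logins": 0})
put_user(42, {"name": "ada", "logins": 0})   # client retried after a timeout
print(store[42]["logins"])  # 0 -- the retry changed nothing

post_increment_logins(42)
post_increment_logins(42)                    # same retry, now double-counted
print(store[42]["logins"])  # 2
```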

\- Using headers properly gets you nice features like content negotiation, and
provides what's needed for caching to work (e.g., 'vary' header). Using the
built-in stuff means you can probably use someone else's library instead of
hacking out something buggy yourself.
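
A stripped-down sketch of what content negotiation looks like (Python; the negotiation logic here is deliberately naive and the formats are illustrative): the server picks a representation from the client's Accept header and echoes "Vary: Accept" so caches key on that header too.

```python
# Sketch of Accept-header content negotiation. Real parsers also handle
# q-values and wildcards; this toy version only does exact type matches.

import json

def negotiate(accept_header, data):
    offered = [t.split(";")[0].strip() for t in accept_header.split(",")]
    if "application/json" in offered:
        return "application/json", json.dumps(data), {"Vary": "Accept"}
    if "text/plain" in offered:
        body = ", ".join(f"{k}={v}" for k, v in data.items())
        return "text/plain", body, {"Vary": "Accept"}
    return None, None, {}   # would become a 406 Not Acceptable

ctype, body, headers = negotiate("text/plain, */*;q=0.1", {"id": 7})
print(ctype, body)  # text/plain id=7
```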

\- HATEOAS gets you fully dynamic API discoverability. Describing your links
well enough to accomplish this is pretty involved, and I'm personally not
really sure it's worth the benefits, at least for anything that isn't an API
having arbitrary clients built for it, or otherwise being directly navigated
by some unknown end user. Maybe if there were more standardization around
hypertext formats.

\- Proper error codes can be nice for debugging, and theoretically can let
certain situations be handled automatically (e.g., a 401 could direct a user
to authenticate; a 302 generally redirects transparently). They also
communicate intent to, e.g., search engines ("pass on my SEO link juice to this new
URL"). They're a bit of a harder sell for internal APIs, since they're not
really as descriptive as they should be. Probably the best practical benefit
would be the ability to use client- and server-side libraries with them, or to
be able to categorize your errors easily to simplify your handling, without
having to dig into the response body. Also nice that you can look them up on
Wikipedia - you don't have to maintain documentation for your own error
format.
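
That "categorize without digging into the body" benefit fits in one function (Python; the category names are my own): a single generic handler can branch on the status class, whereas a private error-code-in-200 scheme makes every caller learn the undocumented table.

```python
# Sketch: one generic handler branching on the status class, with no
# need to parse the response body to know roughly what went wrong.

def classify(status):
    if 200 <= status < 300:
        return "success"
    if status in (301, 302, 303, 307, 308):
        return "redirect"
    if status == 401:
        return "reauthenticate"   # e.g. send the user back to login
    if 400 <= status < 500:
        return "client_error"     # fix the request; don't retry blindly
    if 500 <= status < 600:
        return "server_error"     # retrying idempotent requests is sane
    return "unknown"

for s in (200, 302, 401, 404, 503):
    print(s, classify(s))
```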

TL;DR: sticking to HTTP as intended helps to save you from rolling your own
cache invalidation and coming up with your own new names for things, which are
well known to be the two hard things in Computer Science.

