I can't stop thinking about how REST is an abuse of HTTP's original design, and how the whole business of using HTTP verbs to add meaning to requests is not a very good abstraction for RPC calls.

The web evolved from a tool for accessing documents in directories into this whole apps-in-the-cloud thing, yet we kept using the same tree abstraction to sync state to the server, which doesn't make much sense in a lot of places.

Maybe we need a better abstraction to begin with: something like discoverable native RPCs over a protocol designed for the job, like Thrift or gRPC.

HTTP as a pure transport protocol keeps coming back as the default, because it works. Its superpower is that it pushes stateless (and secure) design from end to end. You'd be fighting a losing battle to get around that, so if you play along, you end up with better power efficiency, resiliency, and scalability.

REST is just very simple to understand and easy to prototype with. There are better abstractions on top of HTTP, like GraphQL and gRPC (as you mentioned), but you can layer those on after you have a working solution and are looking for more performance.
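
To illustrate the prototyping point, here's roughly how little it takes to stand up a REST endpoint with Go's standard library (the resource path is made up for illustration):

  package main

  import (
    "encoding/json"
    "net/http"
  )

  func main() {
    http.HandleFunc("/widgets/42", func(w http.ResponseWriter, r *http.Request) {
      switch r.Method {
      case http.MethodGet: // safe, cacheable read
        json.NewEncoder(w).Encode(map[string]string{"id": "42"})
      case http.MethodPut: // idempotent replace
        w.WriteHeader(http.StatusNoContent)
      default:
        w.WriteHeader(http.StatusMethodNotAllowed)
      }
    })
    http.ListenAndServe(":8080", nil)
  }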

HTTP/3 is on the way this decade, and I'm excited about what it promises. Given how long HTTP/2 took to standardize, I'm not optimistic it will arrive soon, but it does mean we have a path forward.


HTTP/2 has already been mostly replaced by HTTP/3, IIRC. We now mostly have a split between HTTP/1.1 and HTTP/3, if I'm remembering an article I read correctly.

> The web evolved from a tool for accessing documents in directories into this whole apps-in-the-cloud thing, yet we kept using the same tree abstraction to sync state to the server, which doesn't make much sense in a lot of places.

The first part is correct, but I have to hard-disagree on the rest. HTTP makes a lot of sense for RPC-ish things because a) it can do those things better than RPC, and b) it can do things that RPC typically can't (content type negotiation and conversion, caching, online / indefinite-length content transmission, etc.).
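
A rough sketch of two of those, content negotiation and indefinite-length transmission, using Go's stdlib (paths and payloads are illustrative):

  package main

  import (
    "fmt"
    "net/http"
  )

  func main() {
    // Content negotiation: one resource, two representations, chosen
    // by the Accept header instead of separate endpoints. (A real
    // implementation would parse q-values rather than string-match.)
    http.HandleFunc("/report", func(w http.ResponseWriter, r *http.Request) {
      if r.Header.Get("Accept") == "text/csv" {
        w.Header().Set("Content-Type", "text/csv")
        fmt.Fprint(w, "id,name\n1,foo\n")
        return
      }
      w.Header().Set("Content-Type", "application/json")
      fmt.Fprint(w, `[{"id":1,"name":"foo"}]`)
    })

    // Indefinite-length transmission: with no Content-Length set,
    // net/http switches to chunked encoding, so a client can read
    // each flushed piece as it arrives.
    http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
      f := w.(http.Flusher) // the stdlib server's writer supports this
      for i := 0; i < 3; i++ {
        fmt.Fprintf(w, "event %d\n", i)
        f.Flush()
      }
    })

    http.ListenAndServe(":8080", nil)
  }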

HTTP is basically a filesystem access protocol with extra semantics made possible by a) headers and b) MIME types. If you think of some "files" as active/dynamic resources rather than static ones, then presto, you have a pretty good understanding of HTTP. ("Dynamic" means code processes a request and produces response content, possibly altering state along the way, which a plain filesystem can't do. An RDBMS is an example of "dynamic", while a filesystem is an example of "static".)
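
A toy illustration of that static/dynamic distinction (Go stdlib; the paths are made up):

  package main

  import (
    "fmt"
    "net/http"
    "sync/atomic"
  )

  var hits atomic.Int64

  func main() {
    // "Static" resource: literally a filesystem behind a URI prefix.
    http.Handle("/docs/", http.StripPrefix("/docs/",
      http.FileServer(http.Dir("./docs"))))

    // "Dynamic" resource: same URI shape, but code computes the
    // response and mutates state, which a plain filesystem can't do.
    http.HandleFunc("/counter", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprintf(w, "%d\n", hits.Add(1))
    })

    http.ListenAndServe(":8080", nil)
  }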

REST is quite fine. It's very nice actually, and much nicer than RPC. And everything that's nice about REST and not nice about RPC is to do with those extensible headers and MIME types, and especially semantics and cache controls.
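
For instance, the cache-control piece takes only a few lines (Go stdlib; the path and ETag value are made up): the server labels the representation with a validator and a freshness policy, and any client or intermediary cache can then revalidate cheaply.

  package main

  import "net/http"

  func main() {
    http.HandleFunc("/profile", func(w http.ResponseWriter, r *http.Request) {
      const etag = `"v7"` // illustrative version tag for this representation
      w.Header().Set("ETag", etag)
      w.Header().Set("Cache-Control", "max-age=60")
      // Conditional request: the client presents the validator it has.
      if r.Header.Get("If-None-Match") == etag {
        w.WriteHeader(http.StatusNotModified) // body-less 304
        return
      }
      w.Write([]byte(`{"name":"example"}`))
    })
    http.ListenAndServe(":8080", nil)
  }

A repeat request carrying If-None-Match: "v7" then gets a body-less 304, and every cache between the two endpoints understands that exchange for free.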

But devs always want to generate APIs from IDLs, so RPC is always tempting.

As for RPC, there's nothing terribly wrong with it as long as one can generate async-capable APIs from IDLs. The thing everyone hates about RPC is that it typically yields synchronous interfaces for distributed computations, which is a mistake. But RPC protocols do not imply synchronous interfaces -- that's just a design mistake in the codegen tools.
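
A sketch of the kind of stub a codegen tool could emit instead, returning a future-like channel rather than blocking (all names here are hypothetical, not any real tool's output):

  package main

  import (
    "context"
    "fmt"
    "time"
  )

  // GetUserReply and GetUserAsync stand in for generated code.
  type GetUserReply struct {
    Name string
    Err  error
  }

  func GetUserAsync(ctx context.Context, id int) <-chan GetUserReply {
    out := make(chan GetUserReply, 1)
    go func() {
      time.Sleep(10 * time.Millisecond) // stand-in for the actual wire call
      out <- GetUserReply{Name: fmt.Sprintf("user-%d", id)}
    }()
    return out
  }

  func main() {
    pending := GetUserAsync(context.Background(), 42) // returns at once
    // ... caller is free to do other work here ...
    fmt.Println((<-pending).Name)
  }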

Ultimately, the non-RESTful things about RPC that suck are:

  - no URIs (see the response below to your point about discovery)

  - nothing is exposed about RPC idempotence, which makes "routing" fraught

  - lack of 3xx redirects, which makes "routing" hard

  - lack of cache controls

  - lack of online streaming ("chunked" encoding w/ indefinite content-length)

Conversely, the things that make HTTP/REST good are (a couple of which are exercised in the sketch after this list):

  - URIs!!

  - idempotence is explicitly part of the interface because it is part of the protocol

  - generic status codes, including redirects

  - content type negotiation

  - conditional requests

  - byte range requests

  - request/response body streaming
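
To make a couple of those concrete, here's a client doing a byte-range request (Go stdlib; the URL is a placeholder). Redirect handling comes for free, too: the default http.Client follows 3xx responses automatically.

  package main

  import (
    "fmt"
    "io"
    "net/http"
  )

  func main() {
    req, err := http.NewRequest(http.MethodGet, "https://example.com/big.bin", nil)
    if err != nil {
      panic(err)
    }
    req.Header.Set("Range", "bytes=0-1023") // ask for the first KiB only
    resp, err := http.DefaultClient.Do(req) // follows any 3xx en route
    if err != nil {
      panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, len(body)) // "206 Partial Content" if supported
  }
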
> Maybe we need a better abstraction to begin with: something like discoverable native RPCs over a protocol designed for the job, like Thrift or gRPC.

That's been tried. ONC RPC and DCE RPC, for example, had service discovery systems. It's not enough, and it can't be enough. You really need URIs, and you really need URIs to be embeddable in content like HTML, XML, JSON, etc. -- by convention if need be (in JSON, for example, it can only be by convention / schema). You also need to be able to send extra metadata in request/response headers, including URIs.
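
For example, embedding URIs in JSON by convention might look like this (field names are illustrative, loosely HAL-style):

  package main

  import (
    "encoding/json"
    "os"
  )

  // JSON has no native link type, so URIs ride along under an agreed
  // convention; "_links" is one common choice, not a JSON feature.
  type Order struct {
    ID    string            `json:"id"`
    Links map[string]string `json:"_links"`
  }

  func main() {
    o := Order{
      ID: "123",
      Links: map[string]string{
        "self":     "https://api.example.com/orders/123",
        "customer": "https://api.example.com/customers/42",
      },
    }
    json.NewEncoder(os.Stdout).Encode(o)
  }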

(Re: URIs and URIs in headers: HATEOAS really depends on very smart user-agents, which basically haven't materialized, because it turns out HTML+JS is enough to make good UIs. So URIs in headers are not that useful for UIs, but they are useful for APIs.)
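
For the APIs case, a sketch of URIs in headers via the standard Link header (RFC 8288); the path and URL are illustrative:

  package main

  import "net/http"

  func main() {
    http.HandleFunc("/items", func(w http.ResponseWriter, r *http.Request) {
      // The Link header carries related URIs out of band of the body,
      // e.g. for pagination; no smart user-agent required.
      w.Header().Set("Link", `<https://api.example.com/items?page=2>; rel="next"`)
      w.Write([]byte("[]"))
    })
    http.ListenAndServe(":8080", nil)
  }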

It took me a long time to understand all of this, that REST is right and typical RPCs are lame. Many of the points are very subtle, and you might have to build a RESTful application that uses many of these features of HTTP/REST in order to come around -- that's a lot to ask for!

The industry seems to be constantly vacillating between REST and RPC. SOAP came and went; no one misses it. gRPC is the RPC of the day, but I think the only really nice things about it are the binary encoding and the schema, and I don't think it will survive in the long run.

