
I finally feel safe to suggest that I think the cargo-culting of gRPC onto projects these days is also wrong. One of the best (and, to be fair, worst) parts about HTTP is its flexibility, and it's like people just completely skipped over `Content-Type` and other simple options.

Throwing out standards-compliant HTTP (whether 1, 2, or 3) with the bathwater that is JSON decoding was a mistake. JSON + jsonschema + swagger/hyperschema should be good enough for most projects, and for those where it isn't good enough, swap out the content type (but keep the right annotations) and call it a day! Use Avro, use capnproto, use whatever without tying yourself into the gRPC ecosystem.

Maybe gRPC's biggest contribution is the more polished and coherent tooling -- in combining three solutions (schema enforcement, binary representation and convenient client/server code generation), they've created something that's just easier to use. I personally would have preferred this effort to go towards tools that work on standards-compliant HTTP1/2/3.
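To be concrete about the `Content-Type` point: a handler can dispatch on that header and keep plain standards-compliant HTTP underneath. A rough Go sketch, where decodeBinary is a made-up stand-in for whatever codec you pick (Avro, Cap'n Proto, ...):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    // decodeBinary is a hypothetical stand-in for an Avro or
    // Cap'n Proto codec -- not a real library call.
    func decodeBinary(r io.Reader, dst any) error {
        return fmt.Errorf("binary codec not wired up yet")
    }

    // decodeBody dispatches on the Content-Type header, so the same
    // standards-compliant HTTP endpoint can speak several wire formats.
    func decodeBody(r *http.Request, dst any) error {
        ct := r.Header.Get("Content-Type")
        switch {
        case strings.HasPrefix(ct, "application/json"):
            return json.NewDecoder(r.Body).Decode(dst)
        case strings.HasPrefix(ct, "application/octet-stream"):
            return decodeBinary(r.Body, dst)
        default:
            return fmt.Errorf("unsupported Content-Type %q", ct)
        }
    }

    func main() {
        http.HandleFunc("/ingest", func(w http.ResponseWriter, r *http.Request) {
            var payload map[string]any
            if err := decodeBody(r, &payload); err != nil {
                http.Error(w, err.Error(), http.StatusUnsupportedMediaType)
                return
            }
            fmt.Fprintf(w, "got %d fields\n", len(payload))
        })
        http.ListenAndServe(":8080", nil)
    }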




I'm not necessarily saying gRPC is the solution to everything, but I don't see why HTTP is so great. It's a protocol primarily for transferring text over networks. Most backend systems operate in binary, so serializing binary data into a text format seems like unnecessary overhead.


One pro of HTTP is that the methods are bare-bones and the error codes standardized, and there are plenty of battle-tested front ends for your tx/rx endpoints that might touch the service. It basically works everywhere.

The con is that you can do all of that with the protocol of your choice directly, and you don't need to bolt HTTP onto whatever you're building.


gRPC works over HTTP (HTTP/2, specifically).

That said, HTTP request and response bodies are perfectly fine being binary. It's only the headers that are text-based, and only in HTTP/1; HTTP/2 turns the headers into binary as well.
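For instance, shipping opaque bytes in a request body is completely ordinary HTTP (rough Go sketch; the URL is made up):

    package main

    import (
        "bytes"
        "net/http"
    )

    func main() {
        payload := []byte{0xde, 0xad, 0xbe, 0xef} // arbitrary binary data
        // Only the request line and headers are text (in HTTP/1.x);
        // the body goes over the wire as-is.
        resp, err := http.Post("http://example.com/ingest",
            "application/octet-stream", bytes.NewReader(payload))
        if err != nil {
            panic(err)
        }
        resp.Body.Close()
    }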


It is great because it has quality implementations in every language. Much like protobuf.


HTTP also has a vast range of proxies, transport encodings, cryptographic layers, solutions for client/server clock skew, tracing and a whole bunch of other things like rerouting and aliasing baked in.


The processor usage of serialization is almost never the bottleneck; usually it's bandwidth. Even then, unless you're sending floats or large integers over the wire, the difference usually isn't worth the engineering investment over gzipped JSON until you're "web-scale".
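For reference, gzipping a JSON body is only a few lines (Go sketch; the endpoint and payload are made up):

    package main

    import (
        "bytes"
        "compress/gzip"
        "encoding/json"
        "net/http"
    )

    func main() {
        var buf bytes.Buffer
        zw := gzip.NewWriter(&buf)
        // Encode straight into the gzip writer: compressed JSON on the wire.
        if err := json.NewEncoder(zw).Encode(map[string]any{"user": "alice", "n": 42}); err != nil {
            panic(err)
        }
        zw.Close()

        req, err := http.NewRequest("POST", "http://example.com/api", &buf)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/json")
        req.Header.Set("Content-Encoding", "gzip")
        http.DefaultClient.Do(req)
    }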


>It's a protocol primarily for transferring text over networks.

Who told you that? You can specify an arbitrary content type. Not just text.


Whilst I tend to agree, the fact that a gRPC service is very unlikely to be designed to be ‘RESTful’ to the point of obtuseness is a huge plus. It might not be the best tool for the job, but it's a lot better than the other most cargo-culted option.


gRPC is literally just calling conventions on top of HTTP/2/3.

For people who prefer JSON to protobuf, gRPC is serialization-agnostic. For folks who prefer REST verbs to gRPC methods, proto3 has native support for encoding REST mappings, and tools like Envoy and gRPC-Web can do the REST <-> gRPC proxy translation automatically.
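That mapping is declared right in the .proto file via the google.api.http annotation. A sketch, with the service and field names made up:

    syntax = "proto3";

    import "google/api/annotations.proto";

    message GetUserRequest { string user_id = 1; }
    message User { string user_id = 1; string name = 2; }

    service UserService {
      rpc GetUser(GetUserRequest) returns (User) {
        // Envoy's gRPC-JSON transcoder or grpc-gateway turns this
        // into a plain REST GET automatically.
        option (google.api.http) = {
          get: "/v1/users/{user_id}"
        };
      }
    }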


The biggest problem with HTTP is the way developers tie themselves into knots with their HTTP clients. I've seen a lot of bad decisions, including nonsensical timeout and retry logic, nonstandard use of headers, bodies on GET requests, query strings over a megabyte in size, and performance bottlenecks caused by manual management of HTTP connections and threads.

The biggest advantage of an RPC framework is that it takes most of that out of the hands of the developers. Developers can just focus on business logic and leave the connection and request management to the standard library.
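For example, in gRPC-Go a deadline is one line on the context and connection management lives in the library, not the application (sketch; the generated stub call is commented out because the service is hypothetical):

    package main

    import (
        "context"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // The library owns connection pooling, reconnects, and HTTP/2 framing.
        conn, err := grpc.Dial("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // The deadline rides along with the RPC; no hand-rolled timeout logic.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // client := pb.NewUserServiceClient(conn)                       // hypothetical generated stub
        // user, err := client.GetUser(ctx, &pb.GetUserRequest{Id: "42"})
        _ = ctx
    }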



