gRPC is so much cleaner and easier to work with for APIs.

- Error codes are well defined (vs "should I return 200 OK and an error message as JSON, or return a 40x HTTP code?")

- No semantic ambiguity ("should I use POST for a query whose parameters don't fit in the URL query string, or is POST only for modifications?")

- API upgrade compatibility out of the box (protobuf fields are identified by numbers, not names; see the sketch below)

Not to mention cross-platform support and code autogeneration.
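
A minimal sketch of what the field-number point means in practice (the message and field names here are made up):

    syntax = "proto3";

    message CreateUserRequest {
      // On the wire, fields are identified by these numbers, not by their
      // names, so renaming "username" to "login" later does not break
      // existing clients; only reusing or changing a number does.
      string username = 1;
      int32 age = 2;
    }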

I use it in multiple Flutter+Go apps, with gRPC-Web transparently baked in, and it just works.

Once I had to implement chunked file upload in the app and, being used to multipart upload madness, I was scared to even start. But without all that legacy HTTP crap, implementing the upload took like 10 minutes. It was so easy and clean that I almost didn't believe such a dreadful thing as "file upload" could be this simple, after years of fighting HTTP legacy.
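
For reference, the whole chunked upload roughly boils down to declaring a client-streaming RPC like this (service and message names are made up); the generated stubs handle the rest:

    syntax = "proto3";

    service FileService {
      // Client-streaming RPC: the client sends chunks until it is done,
      // then the server replies once with the result.
      rpc Upload (stream FileChunk) returns (UploadResult);
    }

    message FileChunk {
      bytes data = 1;
    }

    message UploadResult {
      string id = 1;
      int64 size = 2;
    }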

Compared to the "traditional" REST/JSON workflow, the downside for me is, of course, the fact that now you can't help but care about the API. With web frameworks, serialization/deserialization into app objects happens automagically, so you throw JSON objects left and right, which is nice until you realize how much CPU/memory is being wasted for no reason.

Also, check out drop-in replacements for cases where you don't need the full functionality of gRPC:

- Twirp (Twitch's lightweight take on gRPC, with optional JSON encoding, HTTP/1.1 support, and no streaming) - https://github.com/twitchtv/twirp

- Connect - "Better gRPC" https://connect.build/docs/introduction/




> Not to mention cross-platform support

I'm truly curious to know how REST or JSON or HTTP is not cross-platform.


Good point. I referred to the code, not to the protocols.

With REST/JSON you mostly write the code twice – server implementation and client implementation. If you add a new platform (say, iOS), you write new code again. So for each new platform you maintain separate code => not cross-platform.


What code are you talking about? Having to write the URL in the HTTP client call?


No, one layer up. You need to handle error codes, request and response parameters, naming for the API, etc.

Imagine having an API with 100+ endpoints. How do you add another one in gRPC? You edit the .proto file and run the code generator, which creates stub code for the clients and the server. You just need to connect that code to the rest of your app (say, talk to the database here, or display the response on the screen here). You don't write any actual code for the network part.
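
To make that concrete, roughly what "adding an endpoint" looks like (service and method names are made up; the request/response messages would be defined in the same file, and you rerun protoc or buf afterwards):

    service Orders {
      rpc GetOrder (GetOrderRequest) returns (Order);
      rpc ListOrders (ListOrdersRequest) returns (ListOrdersResponse);
      // The new endpoint: add one line, regenerate the stubs, and
      // implement only the server-side method body.
      rpc CancelOrder (CancelOrderRequest) returns (Order);
    }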

With the traditional HTTP/REST/JSON approach you would have to write the handlers yourself, for each codebase separately. And there is no way to make sure they're in sync – only at the organizational level (static checks, four-eyes policies and such). I know it doesn't sound like a big deal because most people are used to it, but gRPC gives a different experience. Hence my initial comment that it's so much easier to work with.


I don't have much experience. I don't understand the complaints around semantic ambiguity. Can't you just declare that you aren't trying to build the semantically perfect system, pick one of the options, and have things be perfectly all right?


For me it's not about building a perfect system. It's this gut feeling that you're using the wrong tools. Like you're trying to build a house, but all you have is a set of old car tires and a bunch of duct tape. It just feels ill-suited for the task.

That's not just REST/JSON, of course. That's the general feeling I get from the web ecosystem. Recently I had to work with a simple HTTP form with one field and a single checkbox that is shown conditionally. Hours of debugging revealed that you can't just POST an unchecked checkbox [1]. There is no difference between "unchecked checkbox" and "no checkbox". Instead you have to resort to hacks with a hidden input field. It all just feels hackish, and you constantly ask yourself – am I doing something wrong, or is this whole stack just a set of hacks on top of hacks?

Same feeling with REST/JSON. Once your API grows past simple CRUD, you start caring about optional values and error codes. Ambiguity seems fine until the project grows and more people join and introduce inconsistency: one call returns 200 OK for an error, another returns 5xx/4xx (the cost of choice). Now you have to enforce rules with static checkers and other tools. You bring in more tools just to keep the API sane, and it all, again, feels hackish and ill-suited. I don't have this feeling with gRPC - it feels like it was designed precisely for APIs.

[1] https://stackoverflow.com/questions/1809494/post-unchecked-h...


But you have to carry around the schema on top of the data. And they become out of sync among different groups with different versions.

What is the ubiquitous utility for interacting with gRPC? We have curl for REST. What is the OpenAPI of gRPC?


> And they become out of sync among different groups with different versions.

This is technically true, but part of the "grpc philosophy", if you will, is to not make breaking changes, and many of the design decisions of protobufs nudge you toward this. If you follow this philosophy, change management of your API will be easier.

For example, all scalar values have a zero value which is not serialized over the wire. This means it is not possible to tell if a value was set to the zero value, or omitted entirely. On the surface this might seem weird or annoying, but it helps keep API changes backwards _and forwards_ compatible (sometimes this is called "wire compatible", meaning that the serialized message can work with old and new clients).

Of course you can still make wire-incompatible changes, or you can make wire-compatible changes that still break your application somehow, but getting in the habit of making only wire-compatible changes is the first step toward long-term non-breaking APIs.
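
To illustrate with a made-up message: adding a field under a fresh number is the canonical wire-compatible change, and the zero-value rule is what makes it work in both directions.

    message UserProfile {
      string name = 1;
      // Added in a later version. Old clients simply skip field 2 on the
      // wire; new clients reading an old message see "" (the zero value),
      // so both old and new sides keep working.
      string email = 2;
    }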

GraphQL, by contrast, lets you be more expressive in your schema, like declaring (non)nullability, but this quickly leads to some pretty awkward situations... have you ever tried adding a required (non-nullable) field to an input?


> What is the OpenAPI of gRPC?

The proto file. Grab that, use protoc to generate bindings for your language, and off you go...


> What is the ubiquitous utility for interacting with gRPC? We have curl for REST. What is the OpenAPI of gRPC?

grpcurl[1] combined with gRPC server reflection[2]. The schema is compiled into the server as an encoded proto and exposed via server reflection; grpcurl reads it to send correctly encoded requests.
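
For example, against a server with reflection enabled (the address and service name here are made up):

    grpcurl -plaintext localhost:8080 list
    grpcurl -plaintext localhost:8080 describe my.pkg.OrderService
    grpcurl -plaintext -d '{"id": "42"}' localhost:8080 my.pkg.OrderService/GetOrder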

[1] https://github.com/fullstorydev/grpcurl [2] https://github.com/grpc/grpc/blob/master/doc/server-reflecti...


> What is the ubiquitous utility for interacting with gRPC? We have curl for REST.

Kreya, for example, haha (check the original link of this post).

There are many, actually, including curl-like tools. But I almost never use them. Perhaps it's because my typical workflow involves working with both the server and the app in a monorepo, so when I change the proto file, I regenerate both the client and the server.

Just once I had to debug the actual content being sent via gRPC (actually I was interested in the message sizes), and Wireshark did the job perfectly.


> should I return 200 OK and an error message as JSON, or return a 40x HTTP code?

Anyone who returns a 200 OK on error is, of course, in a state of sin.

You’re welcome to return a 4xx + JSON, if that is congruent with the client’s Accept header.

Yes, I’m aware that GraphQL uses 200 OK for errors. That is only one of the reasons that GraphQL is unfit for purpose. Frankly, it is embarrassing that it has become popular. It’s like a clown car at a funeral.



