
Protobuffers are no end of pain for me at work.

Yes, the guarantee of no breaking changes in an API is wonderful. Everything else sucks.

Not being able to use JSON to test your API? Sucks.

Having to convert the changed .proto files to a new library for your language (a gem for Ruby, in my case), and then having your system use that instead of the latest release (because you haven't tested it yet)? Sucks.

A terrible Ruby library written by people who were obviously not Ruby devs (it generates FooServiceService classes, for example)? Sucks.

Hoop-jumping to force a breaking change in something you thought was good, but isn't and hasn't been released yet? Sucks.

Horribly degraded iteration loops on new functionality? Sucks.

Not being able to host it anywhere that doesn't support HTTP/2 (like Heroku)? Sucks.

Unless you're working on something at Google scale, where the performance difference and bandwidth savings are actually noticeable, RUN FOR THE HILLS. Protobuffers will bring you nothing but pain.




Couldn't agree more, especially about the un-idiomatic libraries in many languages and gRPC's insistence on end-to-end HTTP/2 and trailer support. All of this is, in essence, what we're trying to solve with the Connect family of libraries.

Eventually, we'd like you to have `connect-rails` (or maybe `connect-rack` - I'm an awful Ruby programmer). It would automatically support the gRPC-Web and Connect protocols, and hopefully standard gRPC too (no idea if Rails/Rack/etc. support trailers). It should feel like writing any other HTTP code, and it should fit right into any existing web application. We'd probably end up writing a new protobuf runtime for Ruby, so the generated classes feel idiomatic. Now your Ruby code is accessible to curl, bash scripts, and plain fetch from the browser, but it's also accessible to a whole ecosystem of clients in other languages - without anybody writing the boring plumbing code over and over.
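
To make that concrete: a unary Connect call is just a plain HTTP POST, so any HTTP client works. A sketch - the host, service path, and payload here are made up:

    curl \
      --header "Content-Type: application/json" \
      --data '{"name": "world"}' \
      https://api.example.com/greet.v1.GreetService/Greet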

Protobuf and gRPC aren't perfect, by any stretch, but they're effective. At least IMO, they're also past critical mass in medium-to-large systems: Cap'n Proto _is_ nicer and better-designed, but it doesn't have the adoption and language support that Protobuf has (imperfect as the support may be). Once your backend is gRPC and protobuf, it's really convenient to have that extend out to the front-end and mobile - at least IMO, it's just not worth the effort to hand-convert to RESTful JSON or babysit a grpc-gateway that needs to be redeployed every time you add a field to a message.


Take a look at Twirp (https://github.com/twitchtv/twirp), open-sourced by Twitch. It's a lot lighter-weight than gRPC. It does use Protobufs, but it addresses some of the concerns you mentioned: you can test with JSON payloads, it works over both HTTP/1.1 and HTTP/2, it has good client libraries, and it doesn't require a proxy.

They address your concerns in more detail in the Twirp release announcement (2018) - https://blog.twitch.tv/en/2018/01/16/twirp-a-sweet-new-rpc-f...
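
For a sense of how simple the wire format is: a Twirp endpoint is just an HTTP POST you can hit with curl. The service path and payload below are made up:

    curl \
      --header "Content-Type: application/json" \
      --data '{"user_id": "123"}' \
      http://localhost:8080/twirp/example.v1.UserService/GetUser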


Twirp is excellent. Their protocol is _very_ similar to the Connect protocol for unary RPCs.

However, the semantics that Twirp exposes are different from gRPC. That means that it's impossible (or at least _very_ awkward) to have the same server-side code transparently support Twirp and gRPC clients. The Twirp ecosystem is smaller, and has more variable quality, than the gRPC ecosystem; when you choose Twirp for your server, you're betting that you'll never want a client in a language where Twirp is painfully bad.

Our hope is that the Connect protocol appeals to the same folks who like Twirp, and that the server-side Connect implementations make it easier to interop with the gRPC ecosystem.


We use it at work; it does indeed address all of the concerns.

The problem is gRPC, not protobuf.


And if you’re interested in consuming Twirp from a browser, I wrote TwirpScript: https://github.com/tatethurston/twirpscript


Twirp does not support streaming or WebSockets, though. This is something connect-web appears to handle well.


Also, even if you do want to have some kind of binary-ized transport with a strict type-checked schema and compilers for multiple languages -- ultimately a noble goal, I think -- there are much better options available, design- and technology-wise. Cap'n Proto is wonderful, with a very strong foundational design and good implementation; its major drawback is a relatively sparse set of supported languages and a lack of tooling. But then again, the quality of third-party Protocol Buffers libraries has always been a complete gamble (test it until you break it, then fork it and patch it to make do), and as you note the tools aren't all there for gRPC either.

So in general I completely agree. If you are exposing a Web API to consumers -- just keep doing RPC, with HTTP APIs, with normal HTTP verbs and JSON bodies or whatever. Even GraphQL, despite having a lot of pitfalls, and being "hip and fancy", still just works normally with cURL/JSON at the end of the day because it still works on these principles. And that matters a lot for consumers of Web APIs where some of your users might literally be bash scripts.


You missed my biggest gripe: the Golang-influenced decision that "zero strictly equals nil." This decision is non-idiomatic in, honestly, most languages.


It makes a lot of sense for a wire format to save space by picking a default value and omitting a field from the wire when it's set to that default; and by standardizing on a specific default, the protobuffer protocol saves developers from having to wonder what the default is for every field.

I agree that it can be a sharp edge if you don't see it coming. It can be convenient to deserialize a protobuffer into a richer in-memory representation before working with it; proto doesn't do this for you because it would be unnecessary overhead when your code can just work with the protobuffers as-is, and protobuf errs on the side of speed.
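
A quick Go sketch of what that omission looks like on the wire. The `Counter` message and its generated package are hypothetical:

    package main

    import (
        "fmt"

        "google.golang.org/protobuf/proto"
        // hypothetical generated package for:
        //   message Counter { int64 count = 1; }
        pb "example.com/gen/counterpb"
    )

    func main() {
        // a proto3 scalar at its zero value is simply not written to the
        // wire, so the encoded message is empty
        b, err := proto.Marshal(&pb.Counter{Count: 0})
        if err != nil {
            panic(err)
        }
        fmt.Println(len(b)) // 0 bytes

        // a non-zero value costs one tag byte plus the varint
        b, _ = proto.Marshal(&pb.Counter{Count: 42})
        fmt.Println(len(b)) // 2 bytes
    }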


That is no longer the case - they brought back `optional` in proto3.
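
For reference, a minimal sketch of the proto3 `optional` syntax (the message and field are made up):

    syntax = "proto3";

    message User {
      // without `optional`, "" and "never set" are indistinguishable;
      // with it, the generated code tracks field presence explicitly
      optional string nickname = 1;
    }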


I’m not familiar with protobufs, but zero and nil are different types in Go. You can’t even compare a numeric type with a reference type (it’s a compiler error).


GP meant that you can't differentiate between the zero-value and "not present".

In proto2, strings were represented as *string in Go. In proto3, they are simply string, and "" is the default.
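
A toy Go sketch of why that matters, using hand-written structs shaped like the code protoc generates (the real generated types are more involved):

    package main

    import "fmt"

    // proto2 style: optional fields become pointers, so presence survives
    type UserV2 struct {
        Name *string
    }

    // proto3 style (without `optional`): plain values, zero means "default"
    type UserV3 struct {
        Name string
    }

    func main() {
        var u2 UserV2
        fmt.Println(u2.Name == nil) // true: "never set" is distinct from ""

        var u3 UserV3
        fmt.Println(u3.Name == "") // true: set to "" or never set? can't tell
    }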


It’s especially awful because protobufs will incorrectly parse non-matching messages - filling in blank and mismatched fields instead of just failing out.

Had that happen more than once, and it made for an extremely confusing situation.
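
A Go sketch of how that bites. Assume two hypothetical generated messages that happen to share a field number and wire type:

    package pbdemo

    import (
        "fmt"

        "google.golang.org/protobuf/proto"
        pb "example.com/gen/examplepb" // hypothetical generated package
    )

    func demo() {
        // message Order { string customer_id = 1; }
        raw, _ := proto.Marshal(&pb.Order{CustomerId: "cust-42"})

        // message User { string email = 1; } - same tag, same wire type
        var u pb.User
        err := proto.Unmarshal(raw, &u)
        fmt.Println(err)     // <nil>: no parse failure
        fmt.Println(u.Email) // "cust-42": silently misread as an email
    }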


What. Protocol Buffers are from 2001, Go is from 2009.


Agreed, also with Node.js - and we didn't even see meaningful improvements in data size for our use case without double encoding. The size of fairly repetitive data with JSON+GZIP was on the same order of magnitude as the raw data with protobuffers. To be fair, the protobuffers included some unwanted fields, but decoding them was already expensive enough - were we really going to decode AND re-encode them? No way: decode, slice in JS, and send the JSON; with GZIP it was the same size.


Honestly, that's one of the nicest things about JSON. While it has a lot of on-the-wire redundancy, it's a very compressible shape of redundancy.


Agree. I was hard into proto for a while, and there are parts I still really like, but there are too many pitfalls, and ultimately the SDKs produced are usually just bad and you need to wrap them again anyway.

Load balancing is also a huge pain.

I would like a language in between OpenAPI, proto, and GraphQL.

The GraphQL schema is the nicest, but I don’t always want to deal with everything else. Ideally it would produce beautiful SDKs in every language.


It’s also increasingly common for languages and platforms to come with decent JSON parsing built in, which is a huge point in JSON’s favor in my eyes - fewer dependencies to wrestle with is always welcome.


The only really nice thing I can say about protobuffers is that they've caused less trouble than gRPC has caused.


Seconded, for PHP. It's such a nightmare to work with.


Completely agree - it’s a tonne of extra ceremony for essentially no payoff. JSON-based RESTful services are just simpler and more universal.

If you want code gen from API specs, you can create/edit OAS specs with Stoplight Studio, and then generate types/clients/server stubs for basically any language with OpenAPI Generator, just as you would with gRPC. And obviously you can generate docs, etc. from them too. But unlike with gRPC, you can still use widely available REST middleware, browser dev tools request inspection works better, you can curl your services, anyone can call them without special tooling, etc.
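
For example, generating a client with the OpenAPI Generator CLI - the spec file name and target language here are arbitrary:

    # assumes openapi-generator-cli is installed and api.yaml is your OAS spec
    openapi-generator-cli generate -i api.yaml -g ruby -o ./client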


I know it's unlikely, but has anyone ever tried using protobuf + HTTP/1/2/3 with `Content-Type`? It's an experiment I've been wanting to run forever but haven't found time to.

I wonder if all the speed gains people are seeing would hold up just fine with a reasonable `Content-Type` and maybe an extra header for the intended schema.
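
The server side of that experiment is a pretty small sketch in Go. The generated request type, route, and media type names here are my assumptions:

    package main

    import (
        "io"
        "net/http"

        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/proto"
        pb "example.com/gen/greetpb" // hypothetical generated package
    )

    // one route, plain HTTP/1.1; Content-Type picks the codec
    func greet(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        var req pb.GreetRequest
        switch r.Header.Get("Content-Type") {
        case "application/proto", "application/protobuf":
            err = proto.Unmarshal(body, &req)
        case "application/json":
            err = protojson.Unmarshal(body, &req)
        default:
            http.Error(w, "unsupported media type", http.StatusUnsupportedMediaType)
            return
        }
        if err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        // ... handle req, then marshal the response with the same codec ...
    }

    func main() {
        http.HandleFunc("/greet.v1.GreetService/Greet", greet)
        http.ListenAndServe(":8080", nil)
    }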


This is - more or less - what the Twirp and Connect protocols are.


Ahhhhh OK great, I definitely will just read up on Twirp rather than ever trying to do this myself.


Of course, also check out Connect :-) https://connect.build


> Simple, reliable, interoperable.

I like that.


Yah but I need this promotion so it’s rolling out


grpcurl is handy for testing your API with JSON.
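
For example - the address and service name are made up, and the server needs reflection enabled (or access to the .proto files):

    # list services via server reflection
    grpcurl -plaintext localhost:50051 list

    # call a method with a JSON request body
    grpcurl -plaintext -d '{"name": "world"}' \
      localhost:50051 greet.v1.GreetService/Greet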



