I love protobufs as a type-safe way of defining messages and providing auto-generated clients across languages.
I can't stand gRPC. It's such a Google-developed product and protocol that trying to use it in a simpler system (i.e. everyone else's) is frustrating at best and infuriating at worst. Everything is custom and different from what you expect to be dealing with, when at its core it is still just HTTP.
Something like Twirp (https://github.com/twitchtv/twirp) is so much better. Use existing transports and protocols, then everything else Just Works.
Yea, and the code gen (for Go at least) very clearly assumes you're using a monorepo and how dare you think of doing anything else you monster.
E.g. there's a type registry, which means you can't ever have the same proto type compiled by two different configs (it'll panic at import time; there's a sketch of this at the end of this comment). In a monorepo that's (potentially) fine, but for the rest of the world it means libraries can't embed the generated code that they rely on (if the spec is shared), which means they can't customize it (no perf/size/etc tradeoff possible), can't depend on different versions of codegen or .proto files (despite code clearly needing specific versions, and despite breaking changes to the generated code being somewhat common), can't have convenience plugins for things that would benefit from it, etc.
And all of this to support... an almost-completely-unused text protocol. And `google.protobuf.Any` auto-return-value-typing, but tbh I think that's simply a bad feature, and it would be better modeled as a per-Any-deserialize-call registry, where you can do whatever the heck you like (or not use it, and just `.UnmarshalTo(&out)` with the correct type).
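To make the registry problem concrete, here's a rough, minimal sketch (assuming google.golang.org/protobuf; the hand-built descriptors are stand-ins for what two sets of vendored generated code would each try to register):

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/reflect/protodesc"
	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/types/descriptorpb"
)

func main() {
	// Two hand-built descriptors for the "same" proto file, standing in for
	// two modules that each embed their own generated code for it.
	newFD := func() *descriptorpb.FileDescriptorProto {
		return &descriptorpb.FileDescriptorProto{
			Name:    proto.String("api/users/service.proto"),
			Package: proto.String("api.users"),
			Syntax:  proto.String("proto3"),
		}
	}
	a, _ := protodesc.NewFile(newFD(), protoregistry.GlobalFiles)
	b, _ := protodesc.NewFile(newFD(), protoregistry.GlobalFiles)

	// A local registry reports the collision as an error...
	reg := new(protoregistry.Files)
	fmt.Println(reg.RegisterFile(a)) // <nil>
	fmt.Println(reg.RegisterFile(b)) // conflict: that file path is already registered

	// ...but generated code registers into protoregistry.GlobalFiles from
	// init(), so the same collision surfaces as a panic at import time
	// (GOLANG_PROTOBUF_REGISTRATION_CONFLICT=warn downgrades it to a log line).
}
```

The conflict is keyed on the .proto file path and the fully-qualified proto names, not on which Go package the generated code lives in, so two copies anywhere in the same binary collide.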
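And on the Any point: the explicit-type escape hatch already exists as anypb's `UnmarshalTo`. A tiny sketch, using a well-known wrapper type so it runs without any codegen:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	// Pack a message into an Any, then unpack it by supplying the concrete
	// type yourself; no global type registry lookup is involved.
	packed, _ := anypb.New(wrapperspb.String("hello"))

	out := &wrapperspb.StringValue{}
	if err := packed.UnmarshalTo(out); err != nil { // errors if the type URL doesn't match
		panic(err)
	}
	fmt.Println(out.GetValue()) // "hello"

	// (UnmarshalNew, by contrast, is the variant that consults the global
	// type registry to pick the concrete type for you.)
}
```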
---
What really gets my goat here is that none of this makes sense at all for a protocol. The whole point of a language- and implementation-agnostic binary protocol is to not be dependent on specific codegen / languages / etc, but per the above the whole Go protobuf ecosystem is rigidly locked in at all times, and nearly every change is required to be a breaking change... And if you make that breaking change in a new Go module version, like you should, you immediately break anyone who uses two of them at once, so it must also always be a semver-violating breaking change.
I used protobuf extensively with Kafka and I remember having to be quite particular about how the proto files/packages were arranged so as to avoid naming and versioning conflicts.
We never generated Go code from it, but it took a bit of fine-tuning to get generated code that felt at least somewhat ergonomic for Ruby and TypeScript. It usually involved using some language-specific alternative to protoc for that language, because the code generated by protoc itself was practically unreadable. IIRC in the case of TypeScript I had to write a script that messed around with the directory structure so you could use sensible import paths and aliases, because TS itself wasn't discovering them automatically without it.
That's stuff you can work with and solve technically. Initial faff, but it's one and done. The worst problem I had with it was proto3 making every field optional by default, while the company I worked at had basically developed a custom schema registry setup that declared every field required by default, with a custom type to mark a field as optional. That turned literally every modification to a protobuf definition into a breaking change and, what's worse, it wasn't applied end to end and the failures were silent, so you'd end up with missing data everywhere without knowing it for weeks.
>the failures were silent, so you'd end up with missing data everywhere without knowing it for weeks.
This is the main reason I think protobuf's "zero values are simply not communicated" is fundamentally wrong. Missing data is one of the easiest flaws to miss, and it tends to cause problems far away from the source of the flaw, in both time and space, which makes it extremely hard to notice and fix.
I get the arguments in its favor. I get the arguments in favor of "everything is optional by default". But presence is utterly critical in detecting flaws like this, and it can't always be addressed in a backwards-compatible way by application code. In proto's case it's simply not possible, because that presence data does not exist on the wire (sketched below), and adding it would change the encoded bytes. Even binary-compatible workarounds like "add a field with a presence fieldset" aren't usable, because that unrecognized field will be silently ignored by older consumers, so you're right back to where you started.
It needs to exist from day 1 or you're shooting your users in the feet.
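To make the "zero values are simply not communicated" part concrete, a minimal sketch of the wire behavior (using a well-known wrapper type so it runs without generating any code; a plain proto3 scalar field behaves the same way):

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
	// Int32Value has a single proto3 field (int32 value = 1) with no explicit presence.
	zero, _ := proto.Marshal(wrapperspb.Int32(0))
	one, _ := proto.Marshal(wrapperspb.Int32(1))

	fmt.Printf("value=0 encodes to %d bytes\n", len(zero)) // 0 bytes: the field is simply absent
	fmt.Printf("value=1 encodes to %d bytes\n", len(one))  // 2 bytes: tag + varint

	// A consumer can't tell "explicitly set to 0" apart from "never set",
	// which is exactly the silent-missing-data failure described above.
}
```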
>E.g. there's a type registry, which means you can't ever have the same proto type compiled by two different configs (it'll panic at import time). In a monorepo that's (potentially) fine, but for the rest of the world it means libraries can't embed the generated code that they rely on (if the spec is shared), which means they can't customize it (no perf/size/etc tradeoff possible), can't depend on different versions of codegen or .proto files (despite code clearly needing specific versions, and despite breaking changes to the generated code being somewhat common), can't have convenience plugins for things that would benefit from it, etc.
I actually forgot about this when writing the article. This is a major pain in the ass in both Go and Python, and it basically forces you to ensure that no two services have the same file called "api/users/service.proto". There have been multiple instances where we literally had to rename a proto file to something like reponame_service.proto to avoid this limitation.