The criticisms the author levies against Protobuf are unfair. Inside Google, all source code lives in a monorepo, and depending on another Protobuf file is just a matter of sharing it as a Bazel library; it is trivial. There is no need for a package-management system, because the problem it would solve doesn't exist there.
Yeah, but given the complexities of managing external-facing libraries, they do an extremely good job. As a former Googler I can jump into Bazel or gRPC projects, or start my own, relatively easily.
I tried writing a few guides on a personal blog explaining how to use these tools, but to be honest, without seeing how they get used within Google it's relatively difficult to understand some of the design decisions, which can seem clunky at first.
That is true. I use Bazel daily with all deps vendored and protos defined as targets, so there is really no need for the tools the author mentions: with Bazel the underlying problems simply don't exist in the first place.
It's worth pointing out that it takes a bit of time and pain to grok the underlying rationale for some of the less obvious design decisions.
For example, Protos aren't versioned because you already version them with git.
Releases are usually pinned to some kind of hash, so you already have reliable dependencies with checksums.
No point in versioning protos again. It's a monorepo, so why bother with distribution?
Composition? Use a proto_library target...
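Concretely, "composition" in Bazel is just a `deps` edge between `proto_library` targets. A minimal BUILD-file sketch (target and file names here are hypothetical, not from the thread):

```python
# BUILD file sketch. Assumes the standard rules_proto ruleset.
load("@rules_proto//proto:defs.bzl", "proto_library")

proto_library(
    name = "common_proto",
    srcs = ["common.proto"],
)

# Composing protos is a deps edge; no registry, no version pin.
# The monorepo commit already pins everything transitively.
proto_library(
    name = "user_proto",
    srcs = ["user.proto"],
    deps = [":common_proto"],
)
```

Any `user.proto` that does `import "common.proto";` just works once the dependency is declared, which is why a separate distribution mechanism feels redundant inside the monorepo.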
Without Bazel, though, you're basically lost, and then these tools kind of make sense as a way out of the pain caused by the lack of tooling support you will face.
That said, a lot of orgs have less than ideal IT for historical or whatever reasons, so these problems are 100% real and these solutions exist for a reason.
> For example, Protos aren't versioned because you already version them with git. Releases are usually pinned to some kind of hash, so you already have reliable dependencies with checksums.
Sorry, but this is such a stupid statement. Your external service or client doesn't have access to your internal git hash.
> That said, a lot of orgs have less than ideal IT for historical or whatever reasons
No. A lot of orgs shouldn't need to set up Bazel just to make simple things like generating code from protobufs work.
> You external service or client doesn't have access to your internal git hash
Including the build commit is very straightforward - but more to the point, Google projects frequently treat the monorepo like a service to do things like fetch the latest config. Even deployments tend to be automated through the monorepo - lots of tools make automated change requests.
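For the "include the build commit" part, Bazel's stamping mechanism is one way to do it. A hedged sketch using rules_go (the `gitCommit` variable and target names are hypothetical; the `--stamp` / `--workspace_status_command` flags and `x_defs` substitution are standard Bazel and rules_go features):

```python
# BUILD sketch: bake the monorepo commit into a binary at build time.
load("@io_bazel_rules_go//go:def.bzl", "go_binary")

go_binary(
    name = "server",
    srcs = ["main.go"],
    # Build with:
    #   bazel build --stamp --workspace_status_command=tools/status.sh //:server
    # where tools/status.sh emits a line like:
    #   STABLE_GIT_COMMIT <output of `git rev-parse HEAD`>
    # Bazel then substitutes {STABLE_GIT_COMMIT} into the linker-set
    # variable, so the binary can report exactly which commit built it.
    x_defs = {"main.gitCommit": "{STABLE_GIT_COMMIT}"},
)
```

With something like this, every deployed artifact carries the monorepo commit it was built from, which is what makes "the git hash is the version" workable in practice.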
It's a stupid statement internally as well. Unless you can freeze time itself, deploying new versions of stuff isn't instantaneous. And what happens if you need to do a rollback of a component?
> And what happens if you need to do a rollback of a component?
You revert the offending commit. That triggers an automatic deploy (or, even more likely, your buggy change gets caught in the canary stage and automatically reverted for you).
The Google philosophy is called "live at head" and it has a bunch of advantages, provided your engineers are disciplined enough to follow through.
Until you run into things like "your partners deploy once every two months" or "the team's deploy is delayed by X due to unforeseen changes downstream" or ...
Protobuf is built specifically for Google and Google's way of doing things. Not everyone is built like Google.
Well, the core problem is that you shouldn't be deploying as infrequently as every two months. You should spend engineering energy on fixing that rather than on working around it.
I worked at Google; I'm describing to you exactly how the infrastructure worked.
Yes, Google's internal infrastructure is that good. It's easy to get caught up in the "haha cancel products" meme and forget that this company does some serious engineering.
What Google does with its internal infrastructure is of limited applicability to the majority of people, for whom interface stability is the prime directive.
They are not. A good interface rarely needs to change; the point of live-at-head is that it's impossible to commit a breaking change as long as there's a test depending on the thing you broke.
If Google wants to push gRPC as a general, open, standardized solution -- which they most certainly want, and have done -- then they need to cater to how everyone does things, not to how Google does things.
Nonsense. The criticisms are perfectly fair and realistic.
Bazel is virtually unusable outside of Google. If protobuf is only usable inside Google infra then the response would be “don’t use it if you aren’t Google”. And yet I somehow doubt that’s what you’d argue!