Reading between the lines, it sounds like the main problem is Google's tight control over the project. Apple contributes to the Swift implementation and MSFT drives the native .NET implementation, but there's little non-Google input in decision-making for Go, Java, C++ core, or any of the implementations that wrap core.
More subjectively, I'm impressed by the CNCF's willingness to stick to their stated graduation criteria. gRPC is widely used (even among other CNCF projects), and comes from the company that organized the CNCF - there must have been a lot of pressure to rubber-stamp the application.
100% this. Nearly all of gRPC's code contributions come from Google. If Google removed its funding, the project would be at risk. It's closer to a proprietary offering with source code available than to a mature FOSS ecosystem. Think Red Hat Enterprise Linux for gRPC, whereas Istio/k8s is closer to Debian.
You acknowledge contributions from several large companies; it obviously won't go away if Google pulls support. Perhaps the fact that random folks across the internet don't feel compelled to submit patches is simply due to the project's low-level nature: it does the job quite well, and its scope is limited by its nature, which limits the need for customization by each individual user. I really wonder what the standard is. If I recall correctly, a certain CNCF-graduated proxy was at one point such crap that it did a linear search over routes. That naturally necessitates contributions if you actually use such software in production at scale.
Why is there a need to read between the lines? The post seems quite clear about what is needed, and it sounds like the ball is just in gRPC's court. If anything, it seems promising that there was movement after 3.5 years.
It’s really hard to have any influence on it unless you’re inside the Google fence. Even for the primary maintainers of those two external parties that you mentioned.
IMHO that's accurate for gRPC. The project works great if you're all golang on the backend. As soon as you use other languages it gets complicated and the story falls apart--you almost certainly have to pull in tooling to manage protobuf generation, and to proxy your gRPC backend code to web frontend code (easy if your backend is golang, but many more options and questions if not). The fact that gRPC (and protobufs in general) needs so much extra tooling is a bit of a code smell of immaturity and incubation, IMHO.
Yes, you need additional tooling, but often, as in C++, that's just the nature of the build environment for that language. There are many organizations using gRPC in mission-critical environments along with C++.
It's maintained by James Newton-King, so it's in the hands of .NET royalty :)
Unlike the other gRPC implementations, it's also taken seriously by MSFT's entire developer division. MSFT contributed a bunch of performance improvements to the protobuf runtime and generated code for .NET, the Kestrel webserver team runs gRPC benchmarks, and support for gRPC-Web is baked right into the core frameworks. From a user perspective, it's clear that someone cares about how all the pieces fit together.
Even though I only briefly used it, I wish they had the same love for the COM tooling: after 30 years, Visual Studio still can't offer an IDL editing experience similar to editing proto files.
+1 for gRPC for .NET. It's quite nice _except_ on macOS. I forget the exact issue, but there's some extra friction to hosting HTTPS locally on macOS that makes the dev flow a lot more cumbersome.
The lack of server ALPN support on macOS is probably the extra friction you're referring to. This made accepting HTTP/2 connections with TLS impossible. Fortunately, support will be added in .NET 8 with https://github.com/dotnet/runtime/pull/79434.
Bazel, rules_proto, and its gRPC rules for various languages are the tooling you're looking for, in my opinion! It's so nice to be able to share your proto files and generate stubs/clients across language boundaries, and to see exactly which pieces of your application break when a shape or signature changes in a proto file. Without a monorepo, it would be hard to see the impact of proto changes across disparate code repositories.
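For context, a minimal sketch of what that looks like (target names are illustrative, and the exact load statements depend on which rule sets you pull in; this shows one `proto_library` shared by a C++ stub rule, with other languages hanging their own rules off the same target):

```starlark
# BUILD.bazel -- one proto_library, shared by per-language stub rules.
load("@rules_proto//proto:defs.bzl", "proto_library")
load("@com_github_grpc_grpc//bazel:cc_grpc_library.bzl", "cc_grpc_library")

proto_library(
    name = "echo_proto",
    srcs = ["echo.proto"],
)

# C++ message classes generated from the shared proto target.
cc_proto_library(
    name = "echo_cc_proto",
    deps = [":echo_proto"],
)

# C++ gRPC service stubs; Go, Java, Python, etc. would add their
# own *_proto_library / *_grpc_library rules against :echo_proto.
cc_grpc_library(
    name = "echo_cc_grpc",
    srcs = [":echo_proto"],
    deps = [":echo_cc_proto"],
    grpc_only = True,
)
```

Because every language's stubs depend on the one `:echo_proto` target, `bazel build //...` after a proto change surfaces exactly which consumers break.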
Perhaps the tooling feels somewhat overkill for toy apps that could be done with Ruby on Rails or the like. But many large orgs with billions of dollars of annual revenue use gRPC across many languages.
Surely Google has been using gRPC across many languages for a decade or more at this point, and surely this is a solved problem at Google. How is this not solved outside of Google?
Yeah I was aware of devstats and yes the UI is awkward to use. I'm planning on open sourcing DevBoard, since GitSense is really the differentiating factor, so CNCF is free to use it, if it wants to. I personally think Grafana is great for analyzing time series data but I don't believe it's a very good dashboard system if you need to tell a story, which is what I believe software development insights needs.
If you go to https://devboard.gitsense.com/dapr/dapr?board=gitsense_examp... you can see how my DevBoard widget system is different from Grafana's. Note, the repo that I talk about on the Intro page hasn't been pushed to GitHub yet, but will be soon (hopefully by the end of this week). I'm planning on creating widgets where you can just feed in some numbers and it will generate a graph, but since my widgets can be programmed, you can do much more with the data to tell a story and help surface insights.
Some of Google Cloud's critical APIs only seem to use gRPC over HTTPS. It relies on such an esoteric part of the TLS specification that many (most?) proxies can't carry the traffic. You end up hunting for 2 days to find out why your connections don't work only to realize they probably never will. So I would say it's good that gRPC isn't being pushed hard yet.
On a personal level, it's one of those projects that someone obsessed with "perfect engineering" develops, regardless of the human cost. Crappier solutions (ex. JSON-over-HTTP) are better in almost all cases.
I've never encountered a proxy that can't do the portions of TLS 1.3 that gRPC requires - NGINX, Envoy, linkerd, all the managed cloud offerings I know of, and any random Go binary can handle it. What esoteric portions of the spec are you referring to?
gRPC _does_ require support for HTTP trailers, which aren't used much elsewhere. If you want to use streaming RPCs, you also need a proxy that doesn't buffer (or allows you to disable buffering).
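To illustrate, fronting a gRPC backend with nginx looks roughly like this (addresses and timeout values are illustrative): `grpc_pass` requires HTTP/2 on the listener, and long-lived streaming RPCs need the read/send timeouts raised well above the 60s default so idle streams aren't cut off.

```nginx
server {
    listen 443 ssl http2;            # gRPC needs HTTP/2 end to end
    ssl_certificate     /etc/nginx/tls/cert.pem;
    ssl_certificate_key /etc/nginx/tls/key.pem;

    location / {
        grpc_pass grpc://127.0.0.1:50051;
        # Long-lived streaming RPCs: keep idle streams alive.
        grpc_read_timeout 1h;
        grpc_send_timeout 1h;
    }
}
```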
You couldn't get a better example of good pragmatic engineering than gRPC compared to something like CORBA or DCOM. I can't talk about "all cases" but in the cases I've come across it's a much better solution than JSON over http.
I prefer another CNCF incubating project, NATS. (nats.io)
It decouples the service addresses via a pubsub architecture.
So if I want service A to send a request to service B, it is done by subscribing to a shared topic; there is no service discovery.
It kind of replaces gRPC and Istio.
I like the “static typing” and code generation you get from grpc so a hybrid of the 2 would be my preference.
I actually solved the code generation part for NATS by using AsyncAPI (like OpenAPI but for message-based systems). It would be better if it were baked in.
Yeah completely different tech but it can solve the same problem—connection and communication between services. Implementation details.
If you don't think about the tech between services, at the end of the day my service is using some protocol to send and receive data, gRPC or otherwise.
NATS has a clean request/reply paradigm built in that makes this simpler.
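To make the decoupling concrete, here's a toy in-process sketch of the request/reply-over-a-subject idea (pure Python standard library, not the real nats-py client; the real thing adds network transport, wildcards, and queue groups). The requester never knows the responder's address: it publishes to a shared subject and listens on a unique inbox for the reply.

```python
import queue
import threading
import uuid

class Bus:
    """Minimal in-process pub/sub bus mimicking NATS request/reply."""
    def __init__(self):
        self._subs = {}  # subject -> list of callbacks
        self._lock = threading.Lock()

    def subscribe(self, subject, callback):
        with self._lock:
            self._subs.setdefault(subject, []).append(callback)

    def publish(self, subject, data, reply=None):
        with self._lock:
            callbacks = list(self._subs.get(subject, []))
        for cb in callbacks:
            cb(data, reply)

    def request(self, subject, data, timeout=1.0):
        # Service A never needs B's address: publish on a subject,
        # wait on a unique inbox subject for the reply.
        inbox = f"_INBOX.{uuid.uuid4().hex}"
        replies = queue.Queue()
        self.subscribe(inbox, lambda d, _r: replies.put(d))
        self.publish(subject, data, reply=inbox)
        return replies.get(timeout=timeout)

bus = Bus()

# "Service B" subscribes to a shared subject and answers requests.
def handle_greet(data, reply):
    if reply:
        bus.publish(reply, f"hello, {data}")

bus.subscribe("svc.greet", handle_greet)

# "Service A" sends a request without knowing where B lives.
print(bus.request("svc.greet", "world"))  # -> hello, world
```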
The proto file format is OK, because it has nullability defined in the type.
However everything else is bad.
I had to use grpc on a project and it was a pain and created problems.
Want to use postman to test locally?
Forget it; you have to use some gRPC client, and none of them are ideal.
Want to write automation tests? Good luck finding a tool that supports grpc.
Want to add distributed tracing to calls? You have to use some unofficial code, and better learn the grpc implementation details if you want to be sure that the code is good.
Use json over http, or json over http2 if possible. You will have a much better and less frustrating experience.
gRPC is good for servers moving petabytes of data and for low-latency needs (compressed JSON over HTTP/2 would give roughly the same latency, maybe a few percent slower). I'd guess 99% of its users would do much better with a JSON-over-HTTP interface.
Nowadays it is very easy to make an HTTP call. In Java it can be done with an annotation on an interface method, e.g.: https://docs.spring.io/spring-cloud-openfeign/docs/current/r...
It is also recommended to store the data types used in the interface externally, similar to how proto files work, so that you don't have code duplication between the client and the server.
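A sketch of that shape in plain Python (standard library only; the endpoint and field names are made up): the request/response types live in one shared module, playing the role a .proto file plays for gRPC, and the transport is just JSON over HTTP.

```python
import json
import threading
from dataclasses import asdict, dataclass
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Shared "schema" module: both client and server import these types.
@dataclass
class EchoRequest:
    message: str

@dataclass
class EchoResponse:
    message: str
    length: int

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        req = EchoRequest(**json.loads(body))
        resp = EchoResponse(message=req.message, length=len(req.message))
        payload = json.dumps(asdict(resp)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def echo(message: str) -> EchoResponse:
    data = json.dumps(asdict(EchoRequest(message))).encode()
    req = Request(f"http://127.0.0.1:{server.server_port}/echo", data=data,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as r:
        return EchoResponse(**json.loads(r.read()))

print(echo("hi").length)  # -> 2
```

Any HTTP client in any language can hit this endpoint, which is the point: the only thing that has to be shared is the type definitions.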