Next up, browser support?
Please! There is a working TypeScript client implementation of gRPC-Web, which relies on a custom proxy for converting gRPC to gRPC-Web. Would be nice to bring that proxy functionality into Nginx.
(Disclosure: My team and I wrote it.)
For example, how does a TCP proxy perform round-robin load balancing on a per-RPC basis, when all the RPCs are multiplexed over a single long-lived connection? With a gRPC-aware proxy, that capability becomes possible.
E.g. it's already built on top of HTTP(/2), and uses a normal path for distinguishing methods, which would actually be a good prerequisite for making it work everywhere. But OTOH it relies on barely supported HTTP features like trailers, which require very special HTTP libraries and are not universally available. If the status codes had been implemented as just another chunk of the HTTP body, and a few other small changes had been made, we could have had gRPC from browsers a long time ago. I guess that's what gRPC-Web now tries to fix, but I haven't dug into it in detail.
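To make the trailers point concrete, here's a rough sketch of a unary call (not an exact trace; it uses the canonical Greeter example): the request is an ordinary HTTP/2 POST, but the status arrives in trailers after the response body, which many HTTP stacks can't surface:

    :method: POST
    :path: /helloworld.Greeter/SayHello
    content-type: application/grpc

    <length-prefixed protobuf request message>

    :status: 200
    content-type: application/grpc

    <length-prefixed protobuf response message>

    grpc-status: 0       <- delivered as HTTP trailers, after the body
    grpc-message: OK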
Initial commit: https://hg.nginx.org/nginx/rev/2713b2dbf5bb
Additional features: https://hg.nginx.org/nginx/rev/c693daca57f7 and https://hg.nginx.org/nginx/rev/c2a0a838c40f
Can any of you tell if it includes unit tests? I didn't see any.
Also, anywhere that you might use RPC you could use gRPC. It has a compact wire format and is pretty user-friendly as far as designing your RPC req/rep types.
FB's Thrift also solves the same problem, and is an alternative to gRPC.
Personally I find the autogenerated client code to be the biggest upside. Anyone who wants to use your API, in any language supported by the RPC, can start doing it with very little work. Gone are the days of maintaining officially-supported client libraries.
Still, it's a big win to automatically make your API available in all gRPC-supported languages, since most companies can't justify the business cost of a hand-crafted library in even one of those languages, let alone all.
We get tired of things because they accumulate cruft, or are deemed "ugly" by younger developers. So we replace them with newer alternatives that are lighter and easier to reason about for newbies entering the profession. But then we eventually find that we needed more features after all, so we gradually re-implement them until the cycle repeats. The industry wheel just keeps on spinning...
CORBA was designed around the idea of distributed objects. The core idea was that you have a reference to an object but you don't know (or care) if the object lives in your address space or on a remote computer somewhere. When you make a "remote procedure call", CORBA tries to make it behave as if it were just a regular function call. The call would block the thread until it completed, and any communication errors would be marshaled into some kind of language exception.
It turns out that RPCs are different from regular function calls in a lot of ways. Trying to make them the same just makes things overall more complicated and less flexible. Also, making "remote objects" stateful creates a lot of problems for little benefit.
So XML/SOAP did away with these ideas. Instead of being designed around remote object references, it was designed around request/reply to a network endpoint. No statefulness was designed into the protocol, though of course it could be layered on top by enclosing your own data identifiers.
But SOAP was based on XML, which was never really designed to be an object serialization format. Compared to alternatives like Protocol Buffers, XML is big, slow, and not a clean mapping to the kinds of data structures you use in programming languages. Protocol Buffers are a much better match for this problem. (More at: https://developers.google.com/protocol-buffers/docs/overview...)
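To make the size difference concrete (a hypothetical single-field record; the bytes shown are what proto3 actually emits for one string field):

    <!-- XML: tag soup around one field -->
    <person><name>Jane</name></person>

    // The equivalent protobuf definition:
    //   message Person { string name = 1; }
    // encodes on the wire as just 6 bytes:
    //   0A 04 4A 61 6E 65   (field-1 tag, length 4, "Jane")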
My point is that these new technologies aren't just repeats, there are real improvements that justify inventing something new.
An RPC system with a schema and code generation is a must for internal services.
Also, the biggest difference is that SOAP had like a trillion implementations which all worked kinda differently: code generation, etc.
gRPC mostly doesn't have this problem, because there is basically only one core implementation, managed by Google (now the CNCF).
Also, in SOAP you basically built your server first, because writing a WSDL from scratch is... awkward. The IDL of gRPC is extremely simple, so you can actually start without any implementation at all. And as a bonus it works way better when you need to add/change fields.
If the protocols and standards were designed lock-step with concrete implementation, I'd agree with you.
But too much of SOAP, CORBA, yada-yada was designed _before_ any implementation occurred. So they are nasty and cruft-filled long before even version 1.0.
Protocol Buffers ain't perfect, but they've been widely deployed and heavily battle-tested, so their cruft-to-usefulness ratio remains tolerably low.
It's hard to overstate how crappy working with SOAP really was. I think as the industry matures we really will see serialization formats and protocols stabilize, I think we've already seen a bit of it with JSON.
In all seriousness, gRPC and protobuf aren't bad. Not sure if I'm sold on the HTTP/2 transport - but at least it has somewhat reasonable support for crypto.
I was bummed waiting for the actual RPC part to become usable - and now I think I'd rather build on Cap'n Proto. But really, if we can just get some standardization that's better than JSON/SOAP, I'm willing to have another look.
If I never have to base64-encode an image or other binary blob to fit it into an API request again, it'll be too soon. Or chase down another invalid-deserialization error.
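With a binary-aware format like protobuf, that pain just goes away; bytes is a first-class field type (hypothetical message):

    message UploadImageRequest {
      bytes image_data = 1;    // raw bytes on the wire; no base64 detour, no size bloat
      string content_type = 2;
    }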
- It was more common than not that these earlier RPC libs had no async support. And because language support for futures wasn't that hot back then, a ton of apps would come to a network tx and essentially hang until something happened. gRPC doesn't have this problem.
- No major RPC impl (e.g. CORBA, SOAP, RMI, etc.) had versioning / backwards-compatibility support until gRPC.
REST & the web won because the most up-to-date client is distributed to the user at each usage, which solved the versioning issue a different way. IMO, if gRPC works in the browser (see my later comment), then it's essentially better at everything.
- The gRPC protocol is built on top of HTTP/2 - a protocol designed to overcome many of the shortcomings of HTTP/1.x, primarily around performance.
- protobuf is a much simpler serialization format that achieves far better performance than XML.
- gRPC allows message streaming in both directions, which is vital for performance in some use cases (see the sketch after this list).
- There's a growing ecosystem to make it easier to work with large distributed systems. I could list a bunch, but I'll just point to one: gRPC has many options for the load balancing of requests: https://grpc.io/blog/loadbalancing.
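As an illustration of the streaming point above, marking a request or response as a stream is a one-keyword change in the definition file (hypothetical service and message names):

    service Chat {
      // Client and server can each send any number of messages, interleaved.
      rpc Talk (stream ChatMessage) returns (stream ChatMessage);

      // Server-streaming only: one request, many responses.
      rpc Watch (WatchRequest) returns (stream Event);
    }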
gRPC is clearly the better solution if you intend to build a large, multi-service, distributed system or want to build cloud APIs for mobile devices. And it's not just because developers hate XML.
The library ecosystem is more mixed, though. Python's support is great, Ruby's is serviceable; because it's schema-less, Java is very verbose and not very fun; and PHP's stdlib support is just horrendous (though Ripcord is good).
Can you elaborate on that?
AFAIK gRPC comes with bidirectional streams out of the box. I played around with gRPC on Java and it seemed to work.
You define a set of method calls, using a custom language. Each method call has a single message as its request and another message as its response. (You can actually get fancier than this, but you usually don't.)
In gRPC, the messages are usually protocol buffers (though other formats, like JSON, are supported). The method calls are organized into groups called services.
You stick these definitions in a file, then run a tool that takes them and generates code in your desired language (Java, Python, etc. -- gRPC supports many languages). This code lets you build objects that get turned into protocol buffer wire format and sent from client to server and back.
So for example, if you define a method Foo that takes a FooRequest and returns a FooResponse, you put this in a definition file and run a tool that generates some code. For the sake of this example, we'll say you're using Java for everything, so you tell the tool to generate Java code. This generated Java code would include code to create a FooRequest object and set values in it (strings, ints, etc.). It would also include a Java method you can call that takes your FooRequest, sends it to the server, and gives you back a FooResponse after the server responds. On the server side, you also get generated Java code that helps you respond to this request. Your Java code on the server side receives a FooRequest, uses generated Java code to read the fields out of it (those same strings, ints, etc.), and then builds a response the same way the client built the request.
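Concretely, the definition file for that example might look something like this (a sketch; the actual fields are made up):

    syntax = "proto3";

    // Method calls are grouped into services; this Bar service
    // is what maps to the /Bar/Foo URL mentioned below.
    service Bar {
      rpc Foo (FooRequest) returns (FooResponse);
    }

    message FooRequest {
      string name = 1;    // the numbers identify fields on the wire
      int32 count = 2;
    }

    message FooResponse {
      string greeting = 1;
    }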
On the client, there is obviously some work involved in opening connections to the server, converting the FooRequest into wire-format data (and vice versa for FooResponse), but that is done for you, and you just need to tell it the server's address. On the server, there is work involved in listening for connections from clients, figuring out which RPC method is being called and routing it to the right Java method, converting the wire-format data into objects (and vice versa), but all that is done for you, and you just need to tell it what port to listen on.
gRPC itself uses HTTP/2 and makes POST calls when your client calls a method. The methods and services you define are mapped to URLs. So if you define a Bar service with a Foo method inside, it will be turned into /Bar/Foo when the HTTP call is made.
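To give a rough feel for the client side in Java (a sketch following grpc-java's conventions; BarGrpc, FooRequest, and FooResponse would all be generated from the definition file above):

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class FooClient {
        public static void main(String[] args) {
            // Connection setup: you only supply the server's address.
            ManagedChannel channel = ManagedChannelBuilder
                    .forAddress("localhost", 50051)
                    .usePlaintext()
                    .build();

            // The stub and the message builders are all generated code.
            BarGrpc.BarBlockingStub stub = BarGrpc.newBlockingStub(channel);
            FooRequest request = FooRequest.newBuilder()
                    .setName("example")
                    .setCount(1)
                    .build();

            // Under the hood this is an HTTP/2 POST to /Bar/Foo; the request
            // is serialized to wire format and the response parsed back for you.
            FooResponse response = stub.foo(request);
            System.out.println(response.getGreeting());

            channel.shutdown();
        }
    }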
(I'd edit the comment but it's too late.)
Slightly longer: one writes a protobuf definition that gets compiled into language-specific server and client code. All you have to do is implement the server functions, or call the generated client functions to make RPC calls.
It's designed to be very simple to configure. It doesn't support streams yet, but it should soon.
We use gRPC at my company. We're happy, but some things were not easy or straightforward to implement. With this update, nginx makes load balancing and authentication easier to implement.
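For example, proxying and load balancing now look something like this (a minimal sketch using the new grpc_pass directive; the upstream name and paths are made up):

    upstream grpc_backends {
        # nginx terminates HTTP/2, so it can balance per-request,
        # not just per-connection.
        server 10.0.0.1:50051;
        server 10.0.0.2:50051;
    }

    server {
        listen 443 ssl http2;    # gRPC requires HTTP/2
        ssl_certificate     /etc/nginx/tls/cert.pem;
        ssl_certificate_key /etc/nginx/tls/key.pem;

        location / {
            grpc_pass grpc://grpc_backends;
        }
    }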
But they’re both essentially solving the dev problem of having ‘typed contracts’, right?
Very interesting evolution! SOAP to REST, with GraphQL more popular on the ‘frontend’ side and gRPC on the ‘backend’ side.
There is no place for gRPC in NGINX.
This is Google trying to use the thin end of the wedge to get its own proprietary protocols into web standards yet again.
My idea of a good time is not a future where the internet is built on Google technologies dressed up as "open technologies"... that, uh-huh, just happen to be exactly the same as the infrastructure protocols that span the internal Googleverse.
Besides that, protobuf and its ilk aren't even good or modern.
People who say "yay, look at it growing" are very naive, IMHO.
Meh. There's room in NGINX for Adobe HDS, FLV streaming, JWT, memcached, Flash MP4 pseudo-streaming and XSLT.
Hell, SPDY draft 3.1 is still supported…
The same can be said for all the other things you hastily rushed to link to; those are not infrastructure protocols either.
It's now being used by tons of different companies, including Google competitors like Microsoft Azure.
Disclosure: I'm the executive director of CNCF.
The difference is between open (which gRPC is) and proprietary. The origin doesn't really matter. Lots of great open tech has come from Google, Microsoft, Apple, Amazon, Facebook, Netflix, GitHub, etc. Almost all the big projects started at a big company that needed to get something done and had the resources to create something new.
I'd rather the industry pick something and actually standardize instead of reinventing the same thing repeatedly just for some philosophical reasons.
The part I'm not keen on is that nobody asks for a discussion on things anymore; they just check stuff in and it becomes a de facto standard.
Which can be okay... but I'm not a fan of foie gras for a reason
Google doesn't own Nginx, so I find it hard to believe that there were no discussions and they just went ahead and "checked stuff in".
Google's influence in many parts is a problem, but people using an internal protocol they've made up is basically irrelevant IMHO, unless you have a really good argument why it is a problem.
What would you consider "good" or "modern"? JSON?
(Disclosure: I work at Google on the protobuf team)
Such things exist, I'm not just rattling off buzzwords.
The main thing is that protocols are serious business, and people, especially any of the big five, shouldn't get to own them just because it suits a business interest.
I would also argue that not being based on a schema is even more brittle, as you end up hand-implementing validation logic and client marshalling that a schema would let you just generate.
You are, though...