Announcing gRPC Support in Nginx (nginx.com)
384 points by tex0 8 months ago | 80 comments



Finally! Up until now, when people asked how they were supposed to proxy gRPC traffic, we could only recommend Envoy. Pretty much no one wants to hear that they have to change their stack to use new technology. Since a large part of the world is already on nginx, this was a real barrier to adoption.

Next up, browser support?


> Next up, browser support?

Please! There is a working TypeScript client implementation [0] of gRPC-Web [1], which relies on a custom proxy for converting gRPC to gRPC-Web [2]. Would be nice to bring that proxy functionality into Nginx.

[0] https://github.com/improbable-eng/grpc-web/tree/master/ts

[1] https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-WEB.md

[2] https://github.com/improbable-eng/grpc-web/tree/master/go/gr...


Caddy Web Server (https://caddyserver.com) has support for gRPC-Web through its grpc plugin: https://caddyserver.com/docs/http.grpc


Armeria [0] supports pretty much every possible combination of gRPC variants, including gRPC-Web - HTTP/1 and 2, TLS and cleartext, Protobuf and JSON, framed and unframed.

(Disclosure: My team and I wrote it.)

[0] https://line.github.io/armeria


I remember seeing that Nginx has TCP proxying as well. Couldn’t that be an option?


It is, but proxying at the higher-level protocol lets you proxy more intelligently.

For example, how does a TCP proxy perform round-robin load balancing on a per-RPC basis? It can't, because it never sees individual RPCs; with a gRPC-aware proxy that capability becomes possible.
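
To make that concrete, here's a rough grpc-java sketch (the target address is hypothetical): the "round_robin" policy picks a backend per call rather than per connection, which is exactly the RPC-level awareness a plain TCP proxy lacks.

  import io.grpc.ManagedChannel;
  import io.grpc.ManagedChannelBuilder;

  public class PerRpcBalancing {
      public static void main(String[] args) {
          // A TCP proxy balances per connection: every RPC on a channel lands on
          // whichever backend the connection was pinned to. An RPC-aware layer
          // (a gRPC proxy, or the client-side policy below) picks per call.
          ManagedChannel channel = ManagedChannelBuilder
                  .forTarget("dns:///my-service.example.com:50051") // hypothetical service
                  .defaultLoadBalancingPolicy("round_robin")        // spread individual RPCs
                  .usePlaintext()
                  .build();
          // Stubs built on this channel rotate RPCs across the resolved backends.
          channel.shutdown();
      }
  }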


My hunch is that the impetus was largely because of this kind of conversation.


If gRPC had been designed slightly differently, it could have had good proxy support AND browser support right from the start.

E.g. it's already based on top of HTTP(/2), and uses a normal path for distinguishing methods, which would actually be a good prerequisite to make it work everywhere. But then OTOH it uses barely supported HTTP features like trailers, which require very special HTTP libraries and are not universally supported. If the status codes had been implemented as just another chunk of the HTTP body, and if some other small changes had been made, we could have had gRPC from browsers a long time ago. I guess that's what gRPC-Web now tries to fix, but I haven't dug into it in detail.


For the record, the reason gRPC uses trailers is because it uses HTTP/2, not the other way around. It was expected that, since the whole transport was completely new, adopters of HTTP/2 would add trailer support. As it turns out, they mostly didn't. In particular, Firefox and Chrome did not expose trailers, despite trailers being part of the new Fetch API.


I'm using grpc-web in a service that's going live soon. It works great.



Wow, I was surprised at the sheer size of the diff. Huge!

Can any of you tell if it includes unit tests? I didn't see any.


Their tests live in a separate repo


That must make working out whether a code change is well tested a whole bunch of fun.



OK so what are good use cases for gRPC? What problem does it solve, and in what contexts should I be reaching for gRPC?


If you want to have a set of globally defined types and/or language-independent types to share between your various programs or services, gRPC and Protobufs are a good option.

Also, anywhere that you might use RPC you could use gRPC. It has a compact wire format and is pretty user-friendly as far as designing your RPC req/rep types.


I used gRPC for numerous hobby projects during my undergrad to glue together binaries running in different languages (e.g. a simulation server running in C++ and a scripting client in Python). By passing around a shared data structure (Protobufs), one does not need to waste time writing serialization/de-serialization adapters. It is also useful for gluing together microservices.

FB's Thrift also solves the same problem, and is an alternative to gRPC.


Going into the microservice aspect above, it provides a nice abstraction of remote function calls, so that you can write microservice code that looks like it's executing a local function but is really expecting a remote server to implement the method. In general, that's just RPC, though. Google's implementation has proven very intuitive to learn, and has a nice-sized community online for help debugging, etc.


Thrift mostly solves the problem of crashing a lot and being an undependable mess.


Everything where you would use REST (HTTP/JSON) but where the client is not a browser.


Latency-sensitive / chatty microservices can benefit greatly. Some of this is by nature of http2 but it’s extended by protocol buffer packaging of messages and other client smarts. Inter-service comms is where this popped onto my radar recently.


You get type safety in your API, you get autogenerated client code, and you get http2 out of the box.

Personally I find the autogenerated client code to be the biggest upside. Anyone who wants to use your API, in any language supported by the RPC, can start doing it with very little work. Gone are the days of maintaining officially-supported client libraries.
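
As a sketch of how little work that is: here's roughly what a consumer writes in Java, using the stock "Greeter" service from the gRPC hello-world example (GreeterGrpc, HelloRequest, and HelloReply are the classes protoc would generate; the address is made up).

  import io.grpc.ManagedChannel;
  import io.grpc.ManagedChannelBuilder;

  public class ApiConsumer {
      public static void main(String[] args) {
          // Everything below the channel setup is generated from the .proto file;
          // there is no hand-maintained client library.
          ManagedChannel channel =
                  ManagedChannelBuilder.forAddress("api.example.com", 50051).usePlaintext().build();
          GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
          HelloReply reply = stub.sayHello(HelloRequest.newBuilder().setName("world").build());
          System.out.println(reply.getMessage());
          channel.shutdown();
      }
  }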


I'd more say that it gets rid of the need for officially-supported client libraries, but such libraries can still be a nice convenience for idiomatically mapping your service's higher-level concepts to what makes sense for a given language.

Still, it's a big win to automatically make your API available in all gRPC-supported languages, since most companies can't justify the business cost of a hand-crafted library in even one of those languages, let alone all.


Low-latency and/or low-bandwidth data transfer in M2M communication, especially when the clients and servers are written in different languages.


I love seeing gRPC grow. An RPC system with a schema and code generation is a must for internal services. gRPC has worked really well for me.


No disrespect intended, but I find this comment pretty funny. SOAP/XML has been exactly this for 20 years. It definitely has some major warts, but gRPC isn’t doing anything new.


And CORBA/IDL was doing exactly the same 20 years prior to that.

We get tired of things because they accumulate cruft, or because they are deemed "ugly" by younger developers. So we replace them with newer alternatives that are lighter and easier for newbies entering the profession to reason about. But then we eventually find that we needed more features after all, so we gradually re-implement them until the cycle repeats. The industry wheel just keeps on spinning...


I think this is a bit of an oversimplification. The modern approach to RPC is very different from CORBA or even SOAP/XML.

CORBA was designed around the idea of distributed objects. The core idea was that you have a reference to an object but you don't know (or care) if the object lives in your address space or on a remote computer somewhere. When you make a "remote procedure call", CORBA tries to make it behave as if it were just a regular function call. The call would block the thread until it completed, and any communication errors would be marshaled into some kind of language exception.

It turns out that RPCs are different from regular function calls in a lot of ways. Trying to make them the same just makes things overall more complicated and less flexible. Also, making "remote objects" stateful creates a lot of problems for little benefit.

So XML/SOAP did away with these ideas. Instead of being designed around remote object references, it was designed around request/reply to a network endpoint. No statefulness was designed into the protocol, though of course it could be layered on top by enclosing your own data identifiers.

But SOAP was based around XML, which was never really designed to be an object serialization format. Compared to alternatives like Protocol Buffers, XML is big, slow, and not a clean mapping to the kinds of data structures you use in programming languages. Protocol Buffers are a much better match to this problem. (More at: https://developers.google.com/protocol-buffers/docs/overview...)

My point is that these new technologies aren't just repeats, there are real improvements that justify inventing something new.


I totally agree that these new technologies are an improvement, and I have no desire to go back to SOAP/XML. I was just commenting on the statement:

  An rpc system with a schema and code generation is a must for internal services.
Which seemed to suggest that gRPC is somehow novel in this respect.


Well, I would guess the difference between SOAP and gRPC is that SOAP was developed as a standard, while gRPC became one (or is becoming one).

Also, the biggest difference is that SOAP had like a trillion implementations which all worked kinda differently: code generation, etc. gRPC somewhat avoids this problem because there is basically only one client implementation, managed by Google (now the CNCF).

Also, with SOAP you basically built your server first, because writing a WSDL from scratch is, like... awkward. The IDL of gRPC is extremely simple, so you can actually start without any implementation at all. And as a bonus, it works way better if you need to add/change fields.


> But then we eventually find that we needed more features after all, so we gradually re-implement them again until the cycle repeats.

If the protocols and standards were designed lock-step with concrete implementation, I'd agree with you.

But too much of SOAP, CORBA, yada-yada was designed _before_ any implementation occurred. So they are nasty and cruft-filled long before even version 1.0.

Protocol Buffers ain't perfect, but they've been widely deployed and heavily battle-tested, so their cruft-to-useful ratio remains tolerably low.


SOAP was overdesigned and yet somehow still underspecified at the same time. You could write two different implementations that both followed the spec religiously and yet could not interop at all.

It's hard to overstate how crappy working with SOAP really was. I think as the industry matures we really will see serialization formats and protocols stabilize, I think we've already seen a bit of it with JSON.


I actually think that the design of gRPC must have taken a great deal of effort. The project offers a scalable solution with interfaces simple enough that smaller teams have been able to adopt it quickly. I admire that very much!


No love for Open Network Computing (ONC) Remote Procedure Call (RPC)/XDR? (The RPC in rpcd for NFS.)

In all seriousness, gRPC and protobuf aren't bad. Not sure if I'm sold on the HTTP/2 transport, but at least it has somewhat reasonable support for crypto.

I was bummed waiting for the actual RPC part to become usable, and now I think I'd rather build on Cap'n Proto. But really, if we can just get some standardization that's better than JSON/SOAP, I'm willing to have another look.

If I never have to base64-encode an image or other binary to fit it into an API request again, it'll be too soon. Or see another invalid deserialization error.


Earlier RPC impls had a bunch of issues:

- More often than not, these earlier RPC libs had no async support. And because language support for futures wasn't that hot back then, a ton of apps would hit a network tx and essentially hang until something happened. gRPC doesn't have this problem.

- No major RPC impl (e.g. CORBA, SOAP, RMI, etc.) had versioning / backwards-compatibility support until gRPC.

REST and the web won because the most up-to-date client is distributed to the user at each usage, which solved the versioning issue a different way. Imo, if gRPC works in the browser (see my later comment), then it's essentially better at everything.


gRPC has significant technical advantages over SOAP/XML.

- The gRPC protocol is built on top of HTTP/2, a protocol designed to overcome many of the shortcomings of HTTP, primarily around performance.

- protobuf is a much simpler serialization format that achieves far better performance than XML

- gRPC allows message streaming in both directions. For some use cases this is vital for achieving performance.

- there's a growing ecosystem to make it easier to work with large distributed systems. I could list a bunch, but I will just point to one: gRPC has many options for the load balancing of requests: https://grpc.io/blog/loadbalancing.

gRPC is clearly the better solution if you intend to build a large, multi-service, distributed system or want to build cloud APIs for mobile devices. And it's not just because developers hate XML.


I don't think the gRPC team pretends it's "new", and some presentations even mention (with humor) stuff like SOAP and CORBA (particularly the ones by Ray Tsang).


XML-RPC also, which I rather enjoyed using ~15 years ago.


The designers did well in picking datatypes, though they erred in not including a null value. It's a real pain in the ass going to JSON and losing binaries and datetimes.

The library ecosystem is more mixed, though. Python's support is great, Ruby's is serviceable, Java's is very verbose and not much fun (because the protocol is schema-less), and PHP's stdlib support is just horrendous (though Ripcord is good).


Seems like a bad comparison. On the one hand you are apparently willing to overlook a 100x performance penalty associated with XML compared to protobuf. And then you apparently assign no value to the safety of a strongly-typed representation in C++ vs Doc*-you-figure-out-what-this-means.


Seems they punted on graceful handling of bidirectional streams.


Last I checked, gRPC could only technically support bidirectional streams. None of the libraries I looked at actually implemented it.


> None of the libraries I looked at actually implemented it.

Can you elaborate on that?

AFAIK gRPC comes with bidirectional streams out of the box. I played around with gRPC on Java and it seemed to work.


Go user checking in here as well -- bidirectional streams work just fine


We're using them between Python <-> Go with no problems. Can't remember exactly what version it was at when I first tried it but it's worked fine for two years or more.


All core grpc libraries support bidirectional streaming. It's the "routeguide" example.
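
For reference, a bidirectional call in grpc-java looks roughly like this, shaped after that routeguide example (RouteGuideGrpc and RouteNote are the generated classes this sketch assumes):

  import io.grpc.stub.StreamObserver;

  public class BidiSketch {
      static void chat(RouteGuideGrpc.RouteGuideStub asyncStub) {
          // Pass an observer for incoming messages; get back an observer for outgoing ones.
          StreamObserver<RouteNote> requests = asyncStub.routeChat(new StreamObserver<RouteNote>() {
              @Override public void onNext(RouteNote note) {
                  System.out.println("got: " + note.getMessage()); // server can send at any time
              }
              @Override public void onError(Throwable t) { t.printStackTrace(); }
              @Override public void onCompleted() { System.out.println("server closed its side"); }
          });
          // ...and the client can keep sending while responses stream in.
          requests.onNext(RouteNote.newBuilder().setMessage("hello").build());
          requests.onCompleted();
      }
  }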


Does this mean anything for HTTP/2, specifically for support for HTTP/2 upstreams? I would imagine that was a necessity for supporting gRPC, so is there any way that will come to generic HTTP/2 as well?


Anyone have a good ELI5 of gRPC? It says it's a fast RPC implementation, but all the explanations of RPC seem very in-the-weeds.


Not sure if you're looking for an explanation of RPC in general or just specifically how gRPC does it, but I guess I'll kind of cover both.

You define a set of method calls, using a custom definition language. Each method call has a single message as its request and another message as its response. (You can actually get fancier than this, but you usually don't.)

In gRPC, the messages are usually protocol buffers (though other formats are supported like JSON). The method calls are organized into groups called services.

You stick these definitions in a file, then run a tool that takes these definitions and generates code in your desired language (Java, Python, etc. -- gRPC supports many languages). This code allows you to build objects that will get turned into protocol buffers wire format and sent across from client to server and back.

So for example, if you define a method Foo that takes a FooRequest and returns a FooResponse, you would put this in a definition file and run a tool that generates some code. For the sake of this example, we'll say you're using Java for everything, so you tell the tool to generate Java code. This generated Java code would include code to create a FooRequest object and set values in it (strings, ints, etc.). It would also include a Java method you can call that takes your FooRequest and sends it to the server and that gives you back a FooResponse after the server responds. On the server side, you also get Java code that is generated to help you respond to this request. Your Java code on the server side will receive a FooRequest, and it can use generated Java code to read the fields out of it (those same strings, ints, etc.), and then it can build a response in the same way that the client built the request.

On the client, there is obviously some work involved in opening connections to the server, converting the FooRequest into wire-format data (and vice versa for FooResponse), but that is done for you, and you just need to tell it the server's address. On the server, there is work involved in listening for connections from clients, figuring out which RPC method is being called and routing it to the right Java method, converting the wire-format data into objects (and vice versa), but all that is done for you, and you just need to tell it what port to listen on.

gRPC itself uses HTTP/2 and makes POST calls when your client calls a method. The methods and services you define are mapped to URLs. So if you define a Bar service with a Foo method inside, it will be turned into /Bar/Foo when the HTTP call is made.
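
Putting the whole example together, a minimal client sketch might look like this (BarGrpc, FooRequest, and FooResponse are the names the generated Java would plausibly have for the Bar/Foo definitions above; the port is made up):

  // The definition file would contain, roughly:
  //   service Bar {
  //     rpc Foo (FooRequest) returns (FooResponse);
  //   }
  import io.grpc.ManagedChannel;
  import io.grpc.ManagedChannelBuilder;

  public class FooClient {
      public static void main(String[] args) {
          // "You just need to tell it the server's address."
          ManagedChannel channel =
                  ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();
          BarGrpc.BarBlockingStub stub = BarGrpc.newBlockingStub(channel);
          // Under the hood, this is an HTTP/2 POST to /Bar/Foo.
          FooResponse response = stub.foo(FooRequest.newBuilder().build());
          System.out.println(response);
          channel.shutdown();
      }
  }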




Short: a protobuf-based (so relatively language-agnostic) RPC mechanism that communicates over HTTP/2.

Slightly longer: one writes a protobuf definition that gets compiled into language-specific server and client code. All you have to do is implement the server functions, or call the generated client functions to make RPC calls.
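
In Java, for instance, "implement the server functions" amounts to overriding one generated method (BarGrpc.BarImplBase and the message types are assumed generated names, matching the Foo example elsewhere in the thread):

  import io.grpc.Server;
  import io.grpc.ServerBuilder;
  import io.grpc.stub.StreamObserver;

  public class FooServer extends BarGrpc.BarImplBase {
      @Override
      public void foo(FooRequest request, StreamObserver<FooResponse> responseObserver) {
          responseObserver.onNext(FooResponse.newBuilder().build()); // send the reply
          responseObserver.onCompleted();                            // finish the RPC
      }

      public static void main(String[] args) throws Exception {
          // "You just need to tell it what port to listen on."
          Server server = ServerBuilder.forPort(50051).addService(new FooServer()).build().start();
          server.awaitTermination();
      }
  }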


This is great; I also recently published an alternative designed for browser clients: https://github.com/ericbets/danby

It's designed to be very simple to configure. It doesn't support streams yet, but it should soon.


This is great! TL;DR: instead of building a JSON or GraphQL API, now you can easily expose your gRPC service to the outside world!

We use gRPC at my company. We're happy, but some things were not easy or straightforward to implement. With this update, nginx makes load balancing and authentication easier to implement.


I was actually just about to say that. There seems to be a fair bit of overlap between GraphQL and gRPC from the codegen and schema POV. GraphQL is intended more as a data query language, and gRPC more for generic function calls.

But they're both essentially solving the dev problem of having 'typed contracts', right?

Very interesting evolution! SOAP to REST to GraphQL, more popular on the 'frontend' side, and gRPC on the 'backend' side.


From my understanding, this only works over h2 or h2c.


HAProxy has had HTTP/2 support beginning with 1.6 (May 2015); however, I have not yet looked into using it. I do wonder if it possesses the same features in terms of inspecting the method names on a per-request basis.


Congrats to the Nginx team and contributors! Some of us have been waiting anxiously for this release; this will be great!


This is not cool

There is no place for gRPC in NGINX.

This is Google trying to use the thin end of the wedge to get their own proprietary protocols into web standards yet again.

My idea of a good time is not a future where the internet is built using Google technologies dressed up as "open technologies"... that, uh-huh, just happen to be exactly the same infrastructure protocols that span the internal Googleverse.

Besides that, protobuf and its ilk aren't even good or modern.

People who say "yay, look at it growing" are very naive, imho.



That's all very well, but Adobe don't have aspirations of running the entire internet.

The same can be said for all the other things you hastily rushed to link to; those are not infrastructure protocols.


Note that gRPC was created by Google but was contributed to CNCF last year. https://www.cncf.io/blog/2017/03/01/cloud-native-computing-f...

It's now being used by tons of different companies, including Google competitors like Microsoft Azure.

Disclosure: I'm the executive director of CNCF.


What are better options?

The difference is between open (which gRPC is) and proprietary. The origin doesn't really matter. Lots of great open tech has come from Google, Microsoft, Apple, Amazon, Facebook, Netflix, GitHub, etc. Almost all the big projects started at a big company that needed to get something done and had the resources to create something new.

I'd rather the industry pick something and actually standardize instead of reinventing the same thing repeatedly just for some philosophical reasons.


I agree with the industry picking something part.

The part I'm not keen on is that nobody asks for a discussion on things anymore; they just check stuff in and it becomes a de facto standard.

Which can be okay... but I'm not a fan of foie gras for a reason


Ok, but what do you find are better options then? If you have examples then we can discuss why they haven't taken off as popular standards.


> nobody asks for a discussion on things anymore, they just check stuff in and it becomes a defacto standard

Google doesn't own Nginx, so I find it hard to believe that there were no discussions and they just went ahead and "checked stuff in".


There might be some problems with it, and it is not perfect, but it is already open and not bound to Google. We used it in internal projects, and multiple very large projects also use it freely. I really like the strong typing, the bi-directional streams, and the code generation for various languages. Our main product was in Go, but a part had to be in Java, and the integration was very easy because we used gRPC. Not that this can't be achieved with other tools and frameworks, but gRPC is already popular and performs well enough for most people.


"Not cool" is telling the long-term maintainers of software what "has no place in their software" and not to fulfill user/customer requests because that'd be helping a company you don't like.

Google's influence in many parts is a problem, but people using an internal protocol they've made up is basically irrelevant IMHO, unless you have a really good argument why it is a problem.


> Besides that, protobuf and its ilk aren't even good or modern.

What would you consider "good" or "modern"? JSON?


Just out of curiosity, what's a modern alternative to protobufs?


I would also like to know why you don't consider it 'modern' and how you would define modern-ness in this context.


I answered above.



I don't think they are successors, just different. Protobuf is a sparse wire format, whereas FlatBuffers and Cap'n Proto use a fixed-layout wire format. There are plusses and minuses to both. I wrote a little bit about this here: https://news.ycombinator.com/item?id=6329041#6330426

(Disclosure: I work at Google on the protobuf team)


gRPC uses Protobuf version 3. Both it and Cap'n Proto are successors to Protobuf version 2. FlatBuffers is not a successor but is targeted at a different use case.


One that is type-extensible, not brittle or based on schemas with fragile language-integration tools, streamable down to the atom level, naturally sortable, homoiconic in nature, and efficient to produce and consume.

Such things exist, I'm not just rattling off buzzwords.

The main thing is that protocols are serious business, and people, especially any of the big five, shouldn't get to own them just because it suits a business interest.


It's easy to argue you are just rattling off buzzwords if you don't provide examples.

I would also argue that not being based on a schema is even more brittle, as you end up implementing validation logic and client marshalling that a schema would let you just generate.


Name some. Seriously, I'm interested in protocol design, so I'll read them. But also, yeah, just declaring that things exist is kind of unhelpful.


> Such things exist, I'm not just rattling off buzzwords.

you are though ...


Is Google behind this integration effort?



