gRPC: Internet-scale RPC framework is now 1.0 (googleblog.com)
266 points by vtalwar on Aug 23, 2016 | 108 comments

Something I've wondered for a while: why would I want to design with gRPC rather than well-defined HTTP/JSON endpoints? Is it just a perf thing?

(Tedious disclaimer: my opinion only, not speaking for anybody else. I'm an SRE at Google.)

Performance. gRPC is basically the most recent version of Stubby, and at the kind of scale we use Stubby, it achieves shockingly good RPC performance: call latency is orders of magnitude better than any form of HTTP RPC. This transforms the way you build applications, because you stop caring about the cost of RPCs, and start wanting to split your application into pieces separated by RPC boundaries so that you can run lots of copies of each piece.

I cannot sufficiently explain how critical this is to the way we build applications that scale.

I'm a former Google engineer working at another company now, and we use HTTP/JSON RPC here. RPC is the single highest consumer of CPU in our clusters, and our scale isn't all that large. I'm moving over to gRPC ASAP, for performance reasons.

Performance and versioning are two large benefits.

The performance benefit comes from the fact that the schema is defined on each side (generally server-to-server), so you only send the information bytes. With a good RPC system you can also access specific fields of your structure without unpacking (or with very fast unpacking, depending on what RPC system you're using).
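
As an illustrative sketch (hypothetical field names, protobuf-style varints, not the real protobuf library): because both sides share the schema, only tag numbers and values cross the wire, while a schemaless JSON encoding has to repeat every field name:

```python
import json

def encode_varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf-style base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_field(tag: int, value: int) -> bytes:
    """Tag + value for a varint field: key = (tag << 3) | wire_type 0."""
    return encode_varint(tag << 3) + encode_varint(value)

# The schema (tag numbers) is shared out of band, so only tags and
# values cross the wire -- field names never do.
binary = encode_field(1, 12345) + encode_field(2, 1)
textual = json.dumps({"user_id": 12345, "active": True}).encode()

print(len(binary), len(textual))  # prints: 5 34
```

Five bytes versus thirty-four for the same two fields, before any compression; the gap grows with nesting and repeated messages.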

gRPC uses Google's protocol buffers. https://developers.google.com/protocol-buffers

Comparable systems use other IDLs, e.g. Facebook uses Apache Thrift.

Versioning is easier because your fields are defined explicitly, so the server can ignore clients sending an old field with no ill effect (again, you can access individual fields without unpacking), whereas with JSON you need to deserialize first. Also, if your schema changes when using JSON without protobufs, either the client or the server may make the wrong assumptions about the input data. There is no such ambiguity with protobufs: future changes to proto messages add fields, and old fields can just be marked deprecated.
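
For example (a hypothetical message, not one from the announcement): retiring a field in proto3 just means reserving its tag and name so they can never be reused, while old clients keep working:

```proto
// v1 had: string email = 2;
message User {
  reserved 2;              // retired tag can never be reused
  reserved "email";
  string name = 1;
  string contact_uri = 3;  // new field; old clients simply ignore it
}
```

An old binary that still sends field 2 is tolerated, and one that doesn't know field 3 skips it as an unknown field.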

RPC is preferable to JSON for server-to-server communication, but client-server still often uses JSON simply because it's easier for your client app to interpret. Systems like gRPC allow servers to emit JSON as well: https://developers.google.com/protocol-buffers/docs/proto3#j...

I've heard of performance-oriented web apps using protobufs on both client and server.

I'd really love to see something like gRPC implemented over CBOR.

CBOR -- Concise Binary Object Representation -- http://cbor.io/ -- is all the performance of a binary protocol, with semantics basically identical to JSON.

I appreciate some of the things protobuf does to help you version, but I do not appreciate the protobuf compiler as a dependency and a hurdle for contributors, or for wire debugging. CBOR has libraries in every major (and most minor) language and works without a fuss, with tiny overhead both at runtime and in dependency size. It's pretty pleasant to work with.
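
To give a feel for how simple the format is, here's a toy encoder for a small subset of CBOR (unsigned ints, strings, lists, maps, booleans) -- a sketch for illustration, not a real library:

```python
def cbor_encode(obj) -> bytes:
    """Encode a small JSON-like subset (bool, int >= 0, str, list, dict)."""
    def head(major: int, n: int) -> bytes:
        # CBOR packs the length/value into the type byte when it fits.
        if n < 24:
            return bytes([(major << 5) | n])
        for ai, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
            if n < 1 << (8 * size):
                return bytes([(major << 5) | ai]) + n.to_bytes(size, "big")
        raise ValueError("too large")

    if isinstance(obj, bool):                       # check bool before int
        return bytes([0xF5 if obj else 0xF4])       # major type 7: simple
    if isinstance(obj, int) and obj >= 0:
        return head(0, obj)                         # major type 0: uint
    if isinstance(obj, str):
        data = obj.encode("utf-8")
        return head(3, len(data)) + data            # major type 3: text
    if isinstance(obj, list):
        return head(4, len(obj)) + b"".join(map(cbor_encode, obj))
    if isinstance(obj, dict):
        return head(5, len(obj)) + b"".join(
            cbor_encode(k) + cbor_encode(v) for k, v in obj.items())
    raise TypeError(type(obj))

print(cbor_encode({"a": 1}).hex())  # prints: a1616101
```

Four bytes for `{"a": 1}`, with no schema or code generation step needed on either side.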

There are two reasons why I think simpler binary packing libraries like CBOR, MsgPack, or BSON can't really match what Protobuf gives you:

- Assistance with schema evolution and versioning is one of the best parts about using Protobuf in an API. It is really like the best parts of XML and XML Schema (validation, documentation, interoperability) without any of the bloat.

- Code generation can be a pain to set up initially, but it is very friendly when actually using real objects in code. There is no need to think about the on-the-wire representation... everything 'just works'. There is no need to ensure you don't accidentally serialize fields in the wrong order, or worry about encodings, etc.

Also, there is a binary decoder included with protoc that can print a debug decoding of any protobuf binary message, including integer tags for different fields. Wouldn't you have pretty much the same problems with dissection and debugging on-the-wire in CBOR?

It is really quite pleasant to use Protobuf for an API; I can see why Google is opinionated in including it as the only option with gRPC.

I don't find the utility to outweigh the PITA. I've been on both sides of the fence, and maintained large projects with heavy protobuf use.

I don't find the schema validation powerful enough. You still have to write correct code to migrate semantics. Avoiding incorrect reuse of field names is... nice, but also the most trivial of the problems.

(I do like schemas in theory. It's possible I haven't worked with good enough tooling around protobufs to really experience joy resulting from those schemas. The protos themselves certainly aren't guaranteed to be self-documenting, in some of my experiences.)

I don't find code generation results to be smooth in many languages. At $company, we switched through no less than three different final-stage code generators in search of a smooth experience. Not all of this was misplaced perfectionism: in some cases, it was driven by the sheer absurd method count verbosity in one code generator basically making it impossible to ship an application. (You've perhaps heard of the method count limits in early android? Bam.)

I don't think the debug decoding tools for protobuf are directly comparable to CBOR's interoperability with JSON. CBOR and JSON are literally interchangeable. That means I can flip a bit in my program config and start using one instead of the other. Config files? Flip the bit. Network traffic not meant for humans? Flip the bit. Need to debug on the network? Flip the bit. Want to pipe it into other tools like `jq`? No problem. There's a whole ecosystem around JSON, and I like that ecosystem. Working with CBOR, I can have the best of both worlds.

Sometimes opinionated is good and powerful and really makes things streamlined in a way that contributes to the overall utility of the system. I don't think this is one of those cases. Almost every major feature of gRPC I'm interested in -- multiplexing, deadlines and cancellations, standard definitions for bidi streaming, etc. -- has nothing to do with protobufs.

natch, I used "I" heavily in this comment, because I realize these are heavily subjective statements, and not everyone shares these opinions :)

gRPC can also use FlatBuffers, by the way.

I think of gRPC as HTTP/JSON with most of the gotchas fixed for me. HTTP/2 makes things like concurrent requests easy. Protobuf fixes schema problems (and I think it is technically replaceable if you love your JSON). With the gRPC/protobuf definitions you can easily get clients for many languages.

I'm sure it's possible to create the same thing with HTTP/JSON (and Swagger?), but I find it to be more work.

On top of other answers, I see other advantages:

* streaming in any direction (client->server, server->client, or both), all handled for you. Not something easily done with a simple HTTP/JSON endpoint.

* Arbitrary cancellation from the client or the server

Significantly less overhead and the benefits of Protocol Buffers on top of that.

If performance isn't a big deal, JSON-RPC over HTTP or something like it is fine. If it's critical, something like gRPC makes more sense.

Apart from the performance gain, the generated stubs (and the generated protobuf classes) are a nice improvement in developer experience.

If you do JSON RPC over HTTP/2, the performance will actually not be that different. The serialization layer is different, and for sure protobuf will be faster to (de)serialize than JSON, but for most applications serialization is not the bottleneck (you can, e.g., lose a lot more performance in the HTTP implementation).

Let's turn the question around: when should you prefer HTTP/JSON?

At the moment: whenever you want to use the endpoint directly from a browser. gRPC uses some HTTP features that are not [yet] available from within browser JS APIs.

A gRPC->JSON proxy service is simple. I'd use gRPC inside my datacenter, and convert to JSON on the way in and out of the browser.

There's already a project that auto-generates a normal REST proxy from a gRPC definition: https://github.com/grpc-ecosystem/grpc-gateway

Perf is one reason indeed; however the integration with protobuf and the tools to generate stubs for various languages from that interface are also very helpful. gRPC is what SOAP should have been, IMO.

Primarily performance. HTTP head-of-line (HOL) blocking kills latencies and wastes a lot of resources, especially with microservices spanning requests recursively across multiple services.

We have had to force keep-alive, and even forcefully turn it off in some cases :(.


Because gRPC is typed, it basically allows you to generate wrappers for objects and service clients in most major languages that play nicely together.

Single point of integration for all of your microservices necessities (load balancing, circuit breaking, load shedding, distributed tracing, metrics, logging). It's really interesting that no one talks about this but if you get relatively large, the lack of this integration point is the source of a lot of pain.

> Is it just a perf thing?

Mostly. I can't speak to protobuf's serialization format personally, but similar binary serializations like Thrift's TCompactProtocol or TDenseProtocol outperform JSON, and you get schemas for "free*" (as in, in your producer's and consumer's glue code by virtue of codegen, and not as an afterthought).

The IDL and codegen are a big part of gRPC and Thrift. The rest is opinionatedness and less cognitive effort -- the format is non-human-readable anyway, so there's less propensity for bikeshedding about cosmetic stuff in JSON.

Having an enforced schema is a major advantage for reducing bugs and thus speeding up development.

I'll try to jump on the bandwagon, and announce that the latest GoReplay version now supports Thrift and Protocol Buffers.

So if you are looking to do load testing (or integration testing) of gRPC-based apps, check https://goreplay.org

Unfortunate that you didn't mention it is a "pro" feature, which there seems to be no easy way to obtain. You need a call to action and an automated process.

Anyone here who tried out gRPC or is using it in production, and can share some experiences?

We're using grpc-java in production for some of our backend systems, slowly replacing our old netty/jackson-based system using JSON over HTTP/1.1.

The performance is good, and it's nice to have proto files with messages and services, which act both as documentation and as a way to generate client and server code. Protobuf is much faster, produces less garbage and is easier to work with than JSON/jackson. The generated stubs are very good, and it's easy to switch between blocking and asynchronous requests, which still only require a single TCP/IP connection.

We've had two performance problems with it:

1. Connections can die in a somewhat unexpected way. This turned out to be caused by HTTP/2, which only allows about a billion streams over a single connection. Maybe not a common issue, but it hurt us because we had a few processes reaching this limit at the same time, breaking our redundancy. It's easy to work around, and I believe the grpc-java team has plans for a fix that would make this invisible to a single channel.
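
A back-of-the-envelope sketch (request rates are my own assumed examples) of how fast a busy process burns through the 2^30 client-initiated stream IDs on one connection:

```python
# A client gets 2**31 / 2 = 2**30 odd stream IDs per HTTP/2 connection,
# and IDs are never reclaimed, so each RPC consumes one forever.
client_streams = 2 ** 30

for rps in (1_000, 10_000, 100_000):
    days = client_streams / rps / 86_400  # 86,400 seconds per day
    print(f"{rps:>7} RPCs/s -> connection exhausted in {days:.2f} days")
```

At 100k RPCs/s on a single channel the pool is gone in roughly three hours, which is how several processes can plausibly hit the wall at the same time.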

2. Mixing small/low-latency requests with large/slow requests caused very unstable latency for the low-latency requests. Our current work-around is to start two gRPC servers (still within the same Java process and sharing the same resources). The difference is huge, with p99 going from 22ms to 2.4ms just by using two different ports. Our old code with JSON over HTTP/1.1, implemented using jackson and netty, didn't suffer this instability in latency, so I suspect gRPC is doing too much work inside a netty worker or something. I haven't yet tested with grpc-java 1.0, which I see has gotten a few optimizations.

Still, these have been minor issues, and we're happy so far. The grpc-java team is doing a good job taking care of things, both with code and communication.

> This turned out to be caused by HTTP/2.0 which only allows 1 billion streams over a single connection.

Hilarious. People called this issue out as an obvious flaw when HTTP/2.0 was first proposed, got ignored, and here the issue is.

For those unfamiliar:

HTTP/2.0 uses an unsigned 31-bit integer to identify individual streams over a connection.

Server-initiated streams must use even identifiers.

Client-initiated streams must use odd identifiers.

Identifiers are not reclaimed once a stream is closed. Once you've initiated (2^31)/2 streams, you've exhausted the identifier pool and there's nothing you can do other than close the connection.

For comparison, SSH channels use a 32-bit arbitrary channel identifier, specified by the initiating party, creating an identifier tuple of (peer, channel). Channel identifiers can be re-used after an existing channel with that identifier is closed.

As a result, SSH doesn't have this problem, or the need to divide the identifier space into even/odd (server/client) channel space.
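
A minimal sketch of the HTTP/2 allocation scheme described above (the class name is my own, not from any real library):

```python
class StreamIdAllocator:
    """Hands out HTTP/2 stream IDs: odd for clients, even for servers.

    IDs only ever grow (the protocol never reclaims them), so the
    31-bit space eventually runs out and the connection must be
    replaced.
    """
    MAX_ID = 2 ** 31 - 1

    def __init__(self, is_client: bool):
        self.next_id = 1 if is_client else 2

    def allocate(self) -> int:
        if self.next_id > self.MAX_ID:
            raise RuntimeError("stream IDs exhausted; open a new connection")
        stream_id, self.next_id = self.next_id, self.next_id + 2
        return stream_id

client = StreamIdAllocator(is_client=True)
print(client.allocate(), client.allocate())  # prints: 1 3
```

The even/odd split means neither side needs to coordinate with the other when opening a stream; the price is that each side only gets half of an already-finite space.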

It was not ignored; it was very much done on purpose, because a certain popular programming language does not have unsigned 32-bit variables...

Well, that's half of the downside presented; the other half is that the space is split between server and client. I assume this was done because it simplifies tracking the next stream identifier: you can just keep a counter and increment it, rather than keeping a table of used streams to check a new random identifier against?

Correct -- something that was already used in SPDY and proved to be very handy and convenient, so it was kept in HTTP/2.

BTW, recent data shows that Firefox does (at the median) about 8 requests per HTTP/2 connection, up from slightly more than 1 on HTTP/1.1.

So, if we ever close a connection from having reached a billion streams we are in a very very good position.

Are you talking about Java?

Doubling the number of them doesn't make it more right.

So, that's the other side of the argument? I assume there was at least a reason they specced it this way originally, even if under comparison those reasons wouldn't have held up. Was there any justification, or was it literally ignored?

It was considered a non-issue, since it's easy to work around. See https://github.com/http2/http2-spec/issues/61

Since HTTP2 is a client->server protocol, the server can close whenever and the client can just open another one.

Maybe I'm stupid, but how hard is it to reimplement this as a 64-bit unsigned integer?

It would likely require a new version on the protocol level.

Support for JavaScript? (53-bit mantissa limit)

Is the lack of reuse something to do with its evolution from QUIC as a loosely ordered UDP-based protocol?

We use it heavily on multiple projects across languages, and for the most part it works very well. We've had some pain about sharing proto definitions across languages and keeping them in sync. It's probably a much smaller problem when you've got a company-wide monorepo like Google, but you'll definitely have to be vigilant about your build processes to make sure you have the latest definitions shared.

Some of the language bindings (Ruby) started off feeling experimental quality when we began the project, but overall it's been a huge win for us versus HTTP+JSON. I'm sure a non-zero portion of the benefit has been using protobufs at all, but gRPC gives us a great way to generate clients for every language we use without worrying.

Could you expound upon the problem of keeping your protocol definitions in sync? In my experience this is the strength of protocol buffers: if you follow a few rules, your systems can successfully be decoupled. Some of the rules are never re-using a tag number and never changing a type in an incompatible way (e.g. string->bytes might be ok, but int32->bytes is not).

Yeah! I think a few responses have covered this below, but I'll give you our spin (and why it's painful, compared to what people have offered up).

Most of the projects that we're integrating gRPC into are existing codebases that have their own build tools that are (mostly) in isolation. JSON schemas have been agreed on beforehand, and there are separate client implementations in different languages that basically exists as independent units.

By adding protobufs to this process, the "JSON schemas that have been agreed on" become protobuf definitions - which is _fantastic_ for development teams, because they have a single spec to work from, and there is no ambiguity (or, significantly less).

The challenge comes when we are trying to generate gRPC clients in Go, Ruby and Python for the same protobuf file - in order to do it in an automated fashion without a 'monorepo', we need to create a build system that pulls this protobuf from a central place and generates the client, which doesn't exist right now.

It's not a huge challenge to ensure services can communicate at all - as you said, the protobuf designers have thought of this and built in an extensive amount of decoupling. When we're working on adding new features, however, we need a place to keep the "gold master" of the protobufs, and grab it for all of our projects to build+deploy at once, which is where the above becomes challenging.

Not an unsolvable problem, and different languages have different tooling for this. We've settled on placing the proto definitions on the "server" side (most of our interactions are fairly well modeled by client/server), and then updating the clients as necessary, since we can deploy server changes without needing to update the clients immediately.

There's no need to pull the proto file on every build. Proto also has a set of rules for how to maintain wire-compatibility across versions[1]. Following those rules and distributing the definition only when you need new fields should be sufficient.

That said: if you've got a set of shared proto definitions, you should probably either go to a monorepo, or share the shared bits with a git submodule. That doesn't prevent you from needing to follow those conventions, but it does make it far easier to debug when things changed.

[1] https://developers.google.com/protocol-buffers/docs/proto3#u...

He's got the same protos checked into multiple projects as different files. They need to be kept in sync.

That problem can be avoided without monorepos. You primarily need a way to declare a dependency from one package on another at build time, such that the appropriate release gets pulled in. For example, maybe you've depended on version 2 of the interface definition; and in that case the build system fetches the artifacts for interface 2 at build time when building the client. Maven for Java works this way.

Ideally this system would also allow the package owner to release updates within an existing version if they wish. For example, backward-compatible changes to the service interface can be released while keeping the major version 2. In this way, clients automatically consume safe updates, while incompatible or risky changes can be given a major version bump (e.g. to 3). Consumers who want to pin the interface to a specific version like 2.5.1 could do so, in some build systems, though dependencies this specific are rarely useful or a good idea. In my experience it's best for the contract between producer and consumer to be explicitly versioned at the "major version" level, and only implicitly versioned (meaning updates are automatic) at the minor version level.

That seems like a "doctor it hurts when ... " scenario, and I don't see why it's specific to protobuf. Any IDL managed that way would have the same problem.

I'm not using it outside of Google but I will start for some personal projects with this announcement. I can say that it is one of the best parts of our tech stack and one of the great things about building systems here.

I've used it. It's easy to use and very capable. My favourite feature is that it supports streaming objects, in both directions. In other words you can do an RPC call where the input and/or output is an asynchronous stream of objects.

Every RPC system needs this, or you end up with hacks like HTTP long polling.

My least favourite feature is that it is tied to HTTP2. I'm not sure what you're supposed to do if you are running on a microcontroller.

Agree on this point. The most innovative feature is streaming, which enables some very powerful scenarios.

The tying to HTTP/2, and especially the way it is done, is also not my cup of tea. E.g., if it hadn't chosen to use HTTP trailers (which are mostly unsupported), it could be implemented with a lot more HTTP libraries. It's also sad that it doesn't run in current browsers, because of the lack of trailer support there, as well as of streaming responses. With a little more thought put in (maybe choosing multiple content types/body encodings), this could have been supported - at least for normal request/response communication without streaming.

Regarding microcontrollers: it should be possible to implement HTTP/2 and gRPC on microcontrollers too, but IMHO it will neither be easy nor necessarily a good choice. Implementing HTTP/2 with multiplexing will need quite a lot of RAM on a constrained device, especially with the default values for flow-control windows and header compression. You can lower these through SETTINGS frames, but that might kill interoperability with HTTP/2 libraries that don't expect remotes to lower the settings, or to reset connections while settings have not been fully negotiated.

Caveat: I work on the gRPC team. Read the linked blog post to get a sense of the experience of some of the companies using it: https://cloudplatform.googleblog.com/2016/08/gRPC-a-true-Int...

So gRPC evolved out of Stubby. An excellent show of force would be to announce that Stubby has been internally replaced by gRPC, so that the "gRPC is internet scale" assertion can be more than just a gimmick. Knowing nothing of the first and very little of the second, I imagine it would be a significant undertaking, so I have to ask: do you plan to internally run the stuff you open-sourced? What is missing?

Been a couple years since I worked at Google, but when I was there, Stubby was pretty intimately connected with Google's networking fabric, datacenter hardware, and internal security & auditing needs. None of this is at all useful to external customers - you're not running on Google's proprietary hardware, you don't interface with their monitoring & auditing systems, etc.

As an ex-Googler, using gRPC feels just like using Stubby: the interfaces are the same, the serialization code is the same, the only thing different is the networking code and transparent hooks into other systems.

gRPC faces a longer road to feature parity with Stubby. For external adopters this is not an issue, so it makes sense that it would be available to the public in advance of its adoption inside Google.

not to mention the internal infra grows all these knobbly bits as one-off feature requests for large/influential teams, that aren't necessarily useful outside the goog

It has nonetheless started to replace some uses already.

That's the original link! You tricked me :)

App Engine apps' clients use gRPC to connect to Datastore, as far as I know.

docker recently adopted it for its new docker swarm feature.

We used it before, for containerd to docker communication.

I'm glad they are releasing version 1.0 but I feel that the maintainers of the Go gRPC team have a lot of work to rebuild trust.

I've seen backwards-incompatible changes made by core Go team members and core gRPC maintainers - changes where the API is statically consistent but actually behaves in completely different, broken ways. One of these was big enough that I said screw it and am moving that application away from gRPC.

I've seen multiple issues where the library you generate against ends up being incompatible with the library you link against at build time. They finally added a version check as part of the build/run step to prevent this from causing silent runtime errors.

Maybe in a year gRPC will actually be stable, maybe it has been over the last three months. I don't really know but I gave up, am moving my applications off of it and actively pushing for coworkers to do the same.

"Rebuild trust"? This is the first stable release. Every single release before then was called a "Development Release" (and Protobuf 3.0 only came out of beta less than a month ago), so of course they've been breaking things before now.

You should absolutely hold them to this standard, but only now that they have released 1.0.

Was looking at RPC frameworks for a project; in the end I went with Apache Thrift, since at that time there was no Python 3 support for gRPC [1]. IIRC Thrift also lacks support for my project's Python version (3.5), but at least I can use thriftpy [2].

[1] https://github.com/grpc/grpc/issues/282
[2] https://github.com/eleme/thriftpy

We put in a lot of work to make gRPC work well with Python 3.5, so give it a try if possible. Disclaimer: I work on gRPC.

thank you, I'll look into it as soon as I can :)

It now supports Python 3. http://www.grpc.io/about/#osp

Ugh, why does the Python driver use CamelCase method names?

    def GetFeature(self, request, context):

That's not the driver, that's just the example.

However, according to the style guide, camelcase is preferred, and the compiler is supposed to generate language-native names with the correct case [1].

One thing the terrible years of SOAP and WSDL should have taught people is that generated stubs are awful to use if they go against the grain of the host language.

[1] https://developers.google.com/protocol-buffers/docs/style

Not great, but I think it is to match the function name from the protocol definition. Inside that function you can see a "normal"-looking Python function, get_feature.

This looks very good. At the moment, for the JVM, there are not a lot of features beyond simply building services using netty and the protobuf decoder/encoder. Similarly, the HTTP/2 goodness is also already available in netty.

I would like to use gRPC, unfortunately the Python driver is incompatible with Gevent[1]. And that's more or less a show stopper for us.

[1] https://github.com/grpc/grpc/issues/4629

Would it be worth it to use protocol buffers just for the serialization, to replace a current traditional format? So not for small HTTP communication, but for larger persistent storage. Does it help for versioning purposes, and for preparing for the future when your product might go from traditional to web? Performance aside, it seems a clear win there. You have some overhead compared to in-code solutions (separate .proto files), but it may pay off in versioning and the future of your product, given all the movement to the web.

What's missing to get client/server stub in pure C?

gRPC's core libraries for C++ are written in pure C. We need to extend these to use a pure C implementation of Proto and create a C-based generated API. We have experimented with this, and some users have also tried it. We are looking to add this in the future, and also welcome contributions.

Any plans for rust lang bindings?

Would love community contributions :-). I see some efforts in the community, but nothing very concrete yet. You can suggest a project in the gRPC Ecosystem: https://github.com/grpc-ecosystem

One rust crate that seems fairly well along is https://github.com/stepancheg/grpc-rust. It claims it can communicate with the go client.

Any people using gRPC and Protocol Buffers from Python with good experiences?

The Python library, for one, seems very unpythonic to me, starting with all those CamelCase method names. It does not seem possible to use asyncio or any other non-blocking solution on the server.

Finally, gRPC obviously only handles the transport: Are there any other useful related Python packages out there for validation etc.?

Any plans for rust lang bindings? (would prefer to avoid FFI to c++)

Check out https://github.com/stepancheg/grpc-rust. Still in development.

> Or go straight to Quick Start in the language of your choice:

No plain-old-C?

That's a shame.

Is gRPC a full fledged server for API calls?

e.g.: Will it have things like monitoring (we've handled x calls to this API in the last hour; the average API call took y milliseconds)? Clustering (a client connects to a list of gRPC servers; if one server goes down, the client will automatically connect to the next on the list)? And load balancing?

If not, are there existing third party tools to implement these, or is the expectation that the community will create these?

As for monitoring, gRPC is hooked into Census, which provides some rudimentary statistics and is also intended to eventually bring Dapper-style tracing out of Google and into the public gRPC user base. It's a bit rudimentary at the moment, but see https://github.com/grpc/grpc/tree/master/src/core/ext/census

No, these are separate concerns. I use Kubernetes for the latter two.

Is "internet scale" the new "web scale"?

"internet scale" is something of a term of art at Google. It's generally used to mean "scales to build systems that do things like 'search the entire contents of the Internet'"

10s of billions of requests per second sounds like it needs a new term compared to 10s of billions of requests per day.

So how does service discovery work? Where do I read more?

That's really an orthogonal concern. You could use Consul, DNS, &c. to do that.

It isn't really that orthogonal. Just like authentication, things like service discovery and smart clients (with load balancing, retries, etc.) should likely be pluggable, with some reasonable defaults; otherwise you will be building your own custom stuff on top of it and lose a lot of the value of having a standard. See Finagle for a more complete RPC framework.

I can speak most directly to Go, where I've been working. You could potentially use one of the maps-to-DNS service discovery systems, although that's limited.

At Square, we created a custom balancer (https://godoc.org/google.golang.org/grpc#Balancer) which not only handles updates from the service discovery system in order to manage the pool of connections, but also handles which connection to use per-call (so we can do targeting of specific capabilities, datacenters, etc.)

How does this compare to something like zeroMQ?

I think of it as a mix between ZMQ and Thrift.

One big advantage of gRPC over ZeroMQ is that HTTP/2 can traverse firewalls and proxies more easily.

Can anyone explain like I'm 5 what an RPC framework is?

You want to call a method in another process, potentially on another machine. In order to do that you both need to agree on what the networking protocol looks like. gRPC uses HTTP/2 for the control and data channel and uses Protocol Buffers to describe the method call, its parameters and ultimately its return values. Since this is standardized across languages it doesn't matter if the caller is Python and the callee is Java, you can still make the method call.
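
To make this concrete, here's a toy sketch (all names are hypothetical, and JSON over an in-process "wire" stands in for a real network) of the two halves of an RPC framework: a client stub that makes the call look local, and a server that decodes and dispatches it:

```python
import json

# Server side: a registry mapping method names to callables.
SERVICES = {
    "Calculator.Add": lambda a, b: a + b,
}

def handle_request(wire_bytes: bytes) -> bytes:
    """What an RPC server does: decode, dispatch, encode the result."""
    request = json.loads(wire_bytes)
    result = SERVICES[request["method"]](*request["params"])
    return json.dumps({"result": result}).encode()

# Client side: a generated stub makes the remote call look local.
def add(a: int, b: int) -> int:
    wire = json.dumps({"method": "Calculator.Add", "params": [a, b]}).encode()
    reply = handle_request(wire)  # in reality: sent over the network
    return json.loads(reply)["result"]

print(add(2, 5))  # prints: 7
```

A framework like gRPC generates the stub and the dispatch table for you from the .proto definition, uses protobuf instead of JSON on the wire, and layers the transport over HTTP/2.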

Whenever you hit a JSON API, that's effectively an RPC call.

for example, if you visit: https://api.github.com/repos/grpc/grpc/issues

You'll get some JSON back. What happens is the browser resolves "api.github.com" (via DNS) to an IP address, and then opens a socket connection to that address. Then, in accordance with the HTTP protocol, it executes a "GET" operation (a notion particular to HTTP), and in response the GitHub server will talk to a database or cache and respond with data appropriate to the request.

Because this sequence of events happens across network boundaries (i.e., your browser is talking to something outside your computer), it's often referred to as a "remote" procedure call.


Also, the GitHub API is more RESTful than RPC-like.

The last time I tried gRPC with Python in Windows it didn't compile out of the box.

With the 1.0.0 release, there are Windows binaries for all supported Python versions (2.7, 3.4, 3.5). Make sure you upgrade to the latest version of Pip before trying to pip install grpcio. (The binaries use some ABI tags that are only recognized by newer versions of pip)

Thanks! it works now.

It should work now. We have worked to get Python 3 working across platforms.

> gRPC can help make connecting, operating and debugging distributed systems as easy as making local function calls

Oh lol, again

Can someone explain how gRPC is significantly better than WebSockets? I have used it for a few weeks and was very underwhelmed by it.

What is your use case, and what issues did you find?

Incomplete or completely lacking documentation for Objective-C and Java, weird bugs, random disconnects. Overall it feels like the Ruby on Rails of networking - an opinionated package/framework that tries to do too much. Also, the whole concept of "as easy as a local function call" is a flawed, leaky abstraction.

I'm responsible for the Objective-C part, so please let me know of anything we can improve there. We have a couple of tutorials at http://www.grpc.io/docs/tutorials/ and a quick-start guide at http://www.grpc.io/docs/quickstart/objective-c.html . For bugs and connectivity problems, filing a GitHub issue would be super appreciated.

If you look at the example code, you'll see that RPCs aren't modeled exactly as local function calls. You're right that that wouldn't work very well. The libraries for all or most languages let you make RPCs asynchronously, without blocking the thread. And all of them provide ways to write and read RPC metadata (headers and trailers).

