
gRPC-Go Engineering Practices - mehrdada
https://grpc.io/2018/01/22/grpc-go-engineering-practices.html
======
rb808
It's a good time to ask: is gRPC any good? I'd love to standardize on a stable
middleware layer that handles multiple versions of clients and servers well.
REST with JSON really seems to work great for most things already.

What is the advantage of gRPC - just more efficient?

~~~
willvarfar
Afraid I'm going to be contrary and old-fashioned and say I prefer JSON.

It's never been problematic adding or extending JSON endpoints, and it's never
been a problem using basic gzip compression on the fly either.

And JSON endpoints are a damn sight easier to debug and Wireshark and all the
rest.

I've spent a lot of time writing fast JSON serialization for various languages,
including Java; it's staggering how inefficient most libraries are. But
that's not really JSON's fault.

~~~
kyrra
There are definitely cases for both. JSON is easier for a human to consume (so
better for debugging).

gRPC was born out of Google's Stubby RPC system[0], which is used heavily for
communicating between different jobs. If you are going to stand up a lot of
different services that need to talk to each other, it provides a lot of
advantages that you don't get with JSON. For large companies that use multiple
programming languages, this is really nice, as proto3 and gRPC have code
generators for a slew of different languages.

There are a lot of other niceties in gRPC that you don't get with JSON, like
streaming data.
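
As a rough sketch of what consuming a server-side stream looks like in Go (the
pb package, Monitor service, and Watch method are made up for illustration;
the Dial/Recv pattern is the actual gRPC-Go API):

    package main

    import (
        "context"
        "io"
        "log"

        "google.golang.org/grpc"

        pb "example.com/yourapp/proto" // hypothetical generated package
    )

    func main() {
        conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        client := pb.NewMonitorClient(conn) // hypothetical generated client
        stream, err := client.Watch(context.Background(), &pb.WatchRequest{})
        if err != nil {
            log.Fatalf("watch: %v", err)
        }
        // Receive messages until the server closes the stream.
        for {
            update, err := stream.Recv()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatalf("recv: %v", err)
            }
            log.Printf("update: %v", update)
        }
    }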

[0] [https://grpc.io/blog/principles](https://grpc.io/blog/principles)

~~~
malkia
Also, protobufs may be stored directly in some of the databases Google uses
internally, and then you have lots of tools around them (diffing with
map-reduces run over them, etc.).

------
bruth
Slightly off-topic, but related: I have read that a common practice for
managing proto files (or any schema definitions, really) is to put them in a
separate repo/package to share. It seems pretty straightforward in my head and
provides several advantages. However, are there any trade-offs when doing this
in practice?

~~~
doh
It has challenges similar to a monorepo's.

The first challenge is that you have to keep some kind of reference to which
protos you are using within your project. What Google (and some others) do is
a) put all the proto files in a separate repo [0] and then b) generate code
for each language separately (Python [1], ...). This way you can use whichever
proto file you need within your project; however, you have to load more
libraries than you actually need. To be honest, it only makes a slight
difference when deploying, so it's not too bad.

The second challenge is that you have to regenerate the result files every
time you make a change. If you have a lot of proto files, it may take some
time to generate them, and there are very few tools available to help you.
Google open-sourced Artman [2], although it's more focused on APIs than on
managing shared protos.
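
For a Go project, one small way to tame the regeneration step (just a sketch;
the ../protos checkout path and package name are made up) is to wire it into
go:generate, so a single `go generate ./...` rebuilds the bindings from the
shared repo:

    // gen.go: regenerate Go bindings from a shared proto checkout.
    // Assumes the shared repo is checked out at ../protos and that
    // protoc and protoc-gen-go are installed.
    //
    //go:generate protoc -I ../protos --go_out=plugins=grpc:. ../protos/monitor/v1/monitor.proto
    package monitor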

The massive advantage is that proto files are self-explanatory, and if you put
enough information in them they can function as direct documentation of your
API's interface: you don't need to fish the requirements out of the project or
the documentation, you can just read the proto file itself. This does depend
on the developers keeping things as consistent as possible, which is not
always the case [3].

[0]
[https://github.com/googleapis/googleapis](https://github.com/googleapis/googleapis)

[1] [https://pypi.python.org/pypi/googleapis-common-protos](https://pypi.python.org/pypi/googleapis-common-protos)

[2]
[https://github.com/googleapis/artman](https://github.com/googleapis/artman)

[3]
[https://news.ycombinator.com/item?id=16166153](https://news.ycombinator.com/item?id=16166153)

~~~
bruth
Thanks for the response! These are very good references. I like the breakdown
of message and/or service definitions into separate files, so it is easy to
browse through the repository. I am assuming the version bump (v1 vs. v2)
occurs when you introduce breaking changes to the API? I am aware that
protobuf does a good job of allowing forwards and backwards compatibility at
the message level.

~~~
doh
Correct on the versioning. We internally treat protos as immutable
descriptions of our API interfaces. That means whenever we need to change
anything (outside of bug fixes), be it adding a new field, changing the field
order, renaming fields, changing types, ... we start a new version.

We also make heavy use of shared non-default types (timestamps, errors, ...),
so it's important to make sure that we don't break anything for others.

~~~
bruth
Interesting, so even adding new fields... I suppose there is quite a bit of
planning and iterating on the interface before making it public, to reduce
churn? I understand doing that for backwards-incompatible changes.

~~~
doh
We add new fields (or redesign existing protos) very sporadically. We also
don't have a fully automated process for dealing with a proto change across
all projects, so by cutting new versions we can signal to developers that they
should eventually adopt it. We are actively using 5 languages that have to
deal with the changes, so we found it easier overall to do it this way.

------
jeffrand
Great to hear. I'm pretty bullish on the framework and have been using it
happily in Go for a while.

------
willvarfar
There has been a flurry of gRPC posts on HN recently - it must be the new
XML/SOAP/REST/NoSQL fad!

NoSQL is an interesting parallel - Google published MapReduce and Bigtable,
Amazon published some influential papers, and suddenly everyone was using
NoSQL in order to be "web scale". Then it turned out that Google themselves
were doing SQL at web scale, with Spanner and all that.

Is there a risk that gRPC is the same? In chat yesterday, ex-Googlers said
that Google was increasingly moving over to FlatBuffers...

Personally I have an aversion to tools with generators. Harks back to the
damage CORBA did me, I guess... I also have a preference for plaintext, e.g.
JSON - so much easier to debug.

Oh well. Guess we're in the fashion business ... ;)

------
j_s
Is this gRPC the same thing as the golang net/rpc referred to here:
[https://news.ycombinator.com/item?id=16170116](https://news.ycombinator.com/item?id=16170116)?
I don't think so, but I've never used either one.

>seniorsassycat: _I don't understand why AWS released Go support instead of
binary support and I don't understand why they chose to rely on go's net/rpc
[...] which encodes objects using Go's special [gobs] binary format_

~~~
cube2222
net/rpc is an RPC implementation in the Go standard library, which uses gob
for serialization.

gRPC is a protocol and set of libraries for cross-language RPC based on
protobufs. It also does a lot of codegen for you, like generating clients.
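
A minimal net/rpc server looks like this (essentially the standard library's
own pattern; gob encoding is the default):

    package main

    import (
        "log"
        "net"
        "net/rpc"
    )

    type Args struct{ A, B int }

    type Arith int

    // net/rpc requires exactly this shape: an exported method with two
    // arguments (the second a pointer for the reply), returning error.
    func (t *Arith) Multiply(args *Args, reply *int) error {
        *reply = args.A * args.B
        return nil
    }

    func main() {
        if err := rpc.Register(new(Arith)); err != nil {
            log.Fatal(err)
        }
        l, err := net.Listen("tcp", ":1234")
        if err != nil {
            log.Fatal(err)
        }
        rpc.Accept(l) // serves gob-encoded requests from clients
    }

A client then does rpc.Dial("tcp", ...) followed by
client.Call("Arith.Multiply", ...). With gRPC you'd instead define the service
in a .proto file and have the client and server stubs generated for you.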

------
virmundi
Has anyone found a good resource on using gRPC directly from a JS client? I've
looked at using gRPC. My current challenge is that I want to support a
website/web gateway on one side and a mobile gateway on the other. If I use
Swift or Java on the mobile side, it's easy. If I use the Ionic Framework, I'm
in the same spot as with the web gateway: probably better off with HTTP + RPC.

~~~
kyrra
Does this work? [https://github.com/grpc-ecosystem/grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway)

Seems to generate a REST proxy server side.
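
If it helps, the wiring on the Go side is pretty small (sketch only;
RegisterMonitorHandlerFromEndpoint stands in for whatever the generator emits
for your actual service):

    package main

    import (
        "context"
        "log"
        "net/http"

        "github.com/grpc-ecosystem/grpc-gateway/runtime"
        "google.golang.org/grpc"

        pb "example.com/yourapp/proto" // hypothetical generated package
    )

    func main() {
        mux := runtime.NewServeMux()
        opts := []grpc.DialOption{grpc.WithInsecure()}
        // Translate incoming JSON/REST calls into gRPC calls against
        // the backend gRPC server at localhost:50051.
        err := pb.RegisterMonitorHandlerFromEndpoint(
            context.Background(), mux, "localhost:50051", opts)
        if err != nil {
            log.Fatal(err)
        }
        log.Fatal(http.ListenAndServe(":8080", mux))
    }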

~~~
virmundi
I haven't tried it, but that goes against the grain of gRPC. I understand it
as an integration point between legacy REST and gRPC; from what I've read, I
wouldn't use it for greenfield development.

The only issue I have with full HTTPS + JSON is CRIME (the TLS compression
attack). I'm not sure how that works out here.

------
qsymmachus
Does anyone have experience with both gRPC and Thrift? I'd be curious to know
how they compare.

~~~
kajecounterhack
Thrift used to support more languages (this has changed). gRPC was more
performant for a while (take with a grain of salt; this is word of mouth) --
unclear if that's changed or if the difference was ever significant except at
very high scale.

I think they're pretty similar and you can't lose either way. Facebook's
support of Thrift and Google's of gRPC make both decent options.

One thing I will say about gRPC is that it plays nice with Google's build
system (Bazel), and some Google APIs now have first-class gRPC support. If you
choose Thrift in your stack, you'll have to call those APIs using JSON, or
support gRPC anyway if you want to use them... so gRPC might be an attractive
choice. Furthermore, gRPC's Go interop is excellent if you happen to be a fan
of golang.

~~~
euyyn
When I looked into Thrift's generated code for Objective-C, back when we
started working on gRPC (so 2014), RPCs were all blocking, which made for a
non-idiomatic API. I haven't checked whether they've added an asynchronous
version since.

------
xstartup
Is it possible to add a middleware which does gRPC <-> JSON?

~~~
mehrdada
Check out grpc-gateway [1].

[1]: [https://github.com/grpc-ecosystem/grpc-gateway/](https://github.com/grpc-ecosystem/grpc-gateway/)

