
GRPC.io is interestingly different from CORBA - tonyg
http://eighty-twenty.org/2015/08/28/grpc-dot-io.html
======
dadrian
I've been using gRPC for the last several months. It's a reasonably nice API
and integrates nicely with protobuf, which I think is its main draw. We
were already passing data around as protobufs, so we just took the dive.

Unfortunately, gRPC is still very much at the release level of "it works at
Google and also on some developer's laptop, once", so while we haven't really
encountered any bugs, dealing with gRPC has been a bit of a devops headache.
That being said, we don't have a full-time devops person or anything, and I
bet someone who knew their way around automated configuration and deployment
better than myself could handle gRPC fine.

~~~
ropiku
In what languages have you been using it ?

~~~
whopa
Don't know about the OP, but I've tried it with Python and Go. The Go stuff
works pretty well, the Python stuff is buggy and incomplete (and the code
looks overengineered)

------
rektide
Cap'n Proto (and, by extension, Sandstorm) very much intends to allow
references (capabilities) to be passed around, which is in direct conflict
with the "#1 and in bold" item on this list: gRPC being "first-order", which I
take to mean that systems have to marshal all their data in order to
communicate.

This is discussed, in relation to Sandstorm, in a Sandstorm blog post from
last December. Other notables: the post mentions CORBA explicitly, and it
discusses persisting capabilities (the anti-"First Class" objects extreme, if
I'm reading the OP right).
[https://blog.sandstorm.io/news/2014-12-15-capnproto-0.5.html](https://blog.sandstorm.io/news/2014-12-15-capnproto-0.5.html)

~~~
atombender
I assume that the author is indeed referring to the fact that all APIs are
about simple, marshallable values. You can't pass references to a remote
server, and the remote server can't return a reference to a remote object.

I don't know Cap'n Proto's RPC at all (I looked at it briefly for some
projects, but rejected it since we needed Ruby support), but I've worked with
CORBA, Microsoft's COM (specifically, DCOM) and Java's RMI. These all let you
pass around references to remote objects.

The danger in these systems is that because every API is locality-transparent,
you don't know whether any given call will go to a remote object or a local
one. You can be super careful about what you pass around, and still end up
doing remote calls by accident.

It also leads to "normalized" API designs that too easily incur 1+N roundtrips
even for basic information retrieval. For example, with an endpoint that
returns Book[], where each book has an Author, accessing book.author ends up
as a network roundtrip to fetch the author.
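A minimal Python sketch of the difference (the endpoints and field names are
made up for illustration); a fake API object counts roundtrips so the 1+N cost
is visible:

```python
class FakeApi:
    """Stand-in server; counts roundtrips so the 1+N cost is visible."""
    def __init__(self):
        self.calls = 0

    def get(self, path):
        self.calls += 1
        if path == "/books":
            # Normalized: each book holds only a reference to its author.
            return [{"title": "The Dispossessed", "author_ref": "/authors/1"}]
        if path == "/books?embed=author":
            # Data-only alternative: author data embedded in one response.
            return [{"title": "The Dispossessed",
                     "author": {"name": "Le Guin"}}]
        if path == "/authors/1":
            return {"name": "Le Guin"}
        raise KeyError(path)

def titles_and_authors_normalized(api):
    books = api.get("/books")                                # 1 roundtrip
    return [(b["title"], api.get(b["author_ref"])["name"])   # +N roundtrips
            for b in books]

def titles_and_authors_embedded(api):
    books = api.get("/books?embed=author")                   # 1 roundtrip total
    return [(b["title"], b["author"]["name"]) for b in books]
```

With one book, the normalized client makes 2 calls (1 + N); the embedded
variant makes 1 call no matter how many books come back.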

I very much prefer the dumb-but-explicit approach, where APIs exchange nothing
but plain data.

Another big challenge that DCOM (and, I think, CORBA and RMI) struggled with
was object release. DCOM, for example, had explicit reference counting. This
meant that a client could hold onto an instance for a long time, and that
while doing so the server also had to keep the instance around. If a client
died, keepalive logic in the DCOM framework would eventually release the
object.

Perhaps the biggest challenge in any DCOM server was thread safety. Since any
client could come in on a separate thread, your entire API model had to be
thread-safe, or rely on thread-safe wrappers that acted as proxies for your
real internal data objects.

I should say that locality transparency is very nice to work with — until you
start having issues.

------
lobster_johnson
I've been testing gRPC a bit. It's promising, but the tools are clearly alpha
quality, so if you're going to build apps on top of it, you're in for some
early-adopter hurt.

For example, you need the latest Protobuf 3.0 beta release, which no OS distro
currently ships, and you have to build gRPC from source. The language-specific
packages (Ruby, Node, etc.) have releases that lag behind gRPC itself, so
you'll probably have to build those from source, too (linking directly to HEAD
from a Gemfile or package.json doesn't work, last I checked). Performance also
seems decidedly lackluster, though it's been a few months since I did some
casual benchmarking.

As for the API and feel of the library, it's similar to Thrift, and much like
CORBA without the OO and attempts at location transparency. With the current
generators, you'll get some low-level and not very friendly wrappers generated
from the Protobuf declarations that don't attempt to hide the fact that you're
writing RPC requests and responses as Protobuf structs.

One disappointing aspect of gRPC at this point is the lack of discoverability.
Clients have to connect to a specific host and port, and you have to build
your own glue based on ZooKeeper/etcd/Consul/DNS/whatever. Since no fault
tolerance is built in, things like retries and load balancing are left as an
exercise for the reader.

~~~
soldergenie
That is good to know. Do you have any recommendations for a more mature
version of grpc (cross-platform communication with a high speed encoding)?
Thrift?

~~~
lobster_johnson
I've never used Thrift. One option I have evaluated is NATS [1], which is a
very fast, non-persistent, distributed MQ written in Go, with client libs in
all sorts of languages. It supports RPC-style request/response.

You get load balancing and discoverability for free, since NATS will
distribute messages evenly across consumers: just fire up new consumers, and
they will receive messages on the topics they subscribe to. All a client needs
is the host/port of the NATS server, which is the same for all parties. Couple
that with some hand-rolled serialization (Protobuf, Msgpack or even JSON) and
you have a fairly resilient, fault-tolerant RPC.

You could trivially do streaming RPC if you handled the request/response
matching yourself: if you look at the clients, NATS's RPC is all handled on
the client side, using ordinary messages with a unique topic name generated
for each request/response pair. Extending it to support multiple replies per
request would be simple.
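The client-side pattern is simple enough to sketch in a few lines of Python.
This is not the NATS client library; it's a toy in-memory pub/sub standing in
for the server, just to show how a unique reply topic per request turns plain
messaging into request/response:

```python
import uuid

class Bus:
    """Toy in-memory pub/sub, standing in for a NATS server."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, handler):
        self.subs.setdefault(topic, []).append(handler)

    def publish(self, topic, msg, reply=None):
        # Deliver to every subscriber, passing along the reply topic (if any).
        for handler in self.subs.get(topic, []):
            handler(msg, reply)

def request(bus, topic, msg):
    """Client-side RPC: a unique inbox topic pairs each reply to its request."""
    inbox = "_INBOX." + uuid.uuid4().hex
    result = []
    bus.subscribe(inbox, lambda m, _reply: result.append(m))
    bus.publish(topic, msg, reply=inbox)
    return result[0]

# Server side: an ordinary subscriber that publishes to the reply topic.
bus = Bus()
bus.subscribe("math.double", lambda m, reply: bus.publish(reply, m * 2))
assert request(bus, "math.double", 21) == 42
```

Multiple replies per request would just mean keeping the inbox subscription
alive and collecting more than one message.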

There are other, more feature-rich message queues, of course, such as
RabbitMQ. NATS' advantage is that it's extremely simple.

Another option is ZeroMQ, but it's a bit lower-level and doesn't solve the
discoverability part. You'll end up writing much more glue for each client and
server.

[1] [http://nats.io](http://nats.io)

------
helper
We've been building a new system that uses gRPC. We really like the gRPC model
and will eventually switch over most of our internal rpc systems to use it.

The main benefit we get from gRPC vs something like thrift is a nice way to do
bidirectional streaming.
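For anyone who hasn't seen it: a bidirectional streaming RPC is declared by
putting `stream` on both the request and the response in the .proto file
(service and message names here are hypothetical):

```proto
syntax = "proto3";

// Hypothetical service: both sides can send messages independently
// over a single long-lived call.
service Chat {
  rpc Converse(stream Msg) returns (stream Msg);
}

message Msg {
  string text = 1;
}
```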

------
newobj
"Much of the complexity I saw with CORBA was to do with trying to pass object
(i.e. service endpoint) references back and forth in a transparent way. Drop
that misfeature, and everything from the IDL to the protocol to the frameworks
to the error handling to the implementations of services themselves will be
much simpler."

Drop that misfeature and you have no reason to invoke its name in the same
breath as CORBA, other than to confound people under the age of 35.

------
PaulHoule
I find it hard to take seriously w/o exceptions.

~~~
helper
Does HTTP have exceptions? No, it has error codes and error messages. gRPC is
a protocol. Implementors can choose to use exceptions or not, but that choice
is tangential to the actual protocol.

~~~
wora
That is right. gRPC and Google APIs share one simple error model, as defined
by
[https://github.com/google/googleapis/blob/master/google/rpc/...](https://github.com/google/googleapis/blob/master/google/rpc/status.proto).
The goal is to make it easier for developers to handle errors.
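Paraphrasing that file (see the link for the authoritative definition), the
model boils down to three fields:

```proto
// Paraphrase of google.rpc.Status from the linked status.proto.
message Status {
  int32 code = 1;                            // a google.rpc.Code value
  string message = 2;                        // developer-facing description
  repeated google.protobuf.Any details = 3;  // optional structured detail
}
```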

PS: I was the co-author of it.

