GRPC.io is interestingly different from CORBA (eighty-twenty.org)
39 points by tonyg on Aug 28, 2015 | 32 comments



I've been using gRPC for the last several months. It has a reasonably nice API and integrates cleanly with protobuf, which I think is its main draw. We were already passing data around as protobufs, so we just took the dive.

Unfortunately, gRPC is still very much at the release level of "it works at Google and also on some developer's laptop, once", so while we haven't really encountered any bugs, dealing with gRPC has been a bit of a devops headache. That being said, we don't have a full-time devops person or anything, and I bet someone who knows their way around automated configuration and deployment better than I do could handle gRPC fine.


In what languages have you been using it?


Don't know about the OP, but I've tried it with Python and Go. The Go stuff works pretty well; the Python stuff is buggy and incomplete (and the code looks overengineered).


Python and C++. C++ server-side, and Python client-side. Python has been a larger struggle than C++.


Cap'n Proto (and, by extension, Sandstorm) very much intends to allow references (capabilities) to be passed around, which is in direct conflict with the "#1 and in bold" item on this list: gRPC being "first-order", by which I think the author means that systems have to marshal all the data they communicate.

This is discussed, in relation to Sandstorm, in a Sandstorm blog post from last December. Other notables: the post mentions CORBA explicitly, and it discusses persisting capabilities (the far "first-class objects" extreme, if I'm reading the OP right). https://blog.sandstorm.io/news/2014-12-15-capnproto-0.5.html


I assume that the author is indeed referring to the fact that all APIs are about simple, marshallable values. You can't pass references to a remote server, and the remote server can't return a reference to a remote object.

I don't know Cap'n'Proto's RPC at all (I looked at it briefly for some projects, but rejected it since we needed Ruby support), but I've worked with CORBA, Microsoft's COM (specifically, DCOM) and Java's RMI. These all let you pass around references to remote objects.

The danger in these systems is that every API is locality-transparent: you don't know whether any given call will go to a remote object or a local one. You can be super careful about what you pass around, and still end up doing remote calls by accident.

It also leads to "normalized" API designs that too easily incur 1+N roundtrips even for basic information retrieval. For example, with an endpoint that returns Book[], where each book has an Author, accessing book.author ends up as a network roundtrip to fetch the author.
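
To make the 1+N shape concrete, here's a toy Python simulation (every name in it is hypothetical) where a counter stands in for the network:

    roundtrips = 0

    class AuthorProxy:
        """Stands in for a CORBA/RMI-style remote reference."""
        def __init__(self, name):
            self._name = name

        @property
        def name(self):
            global roundtrips
            roundtrips += 1          # each attribute access is a remote call
            return self._name

    def list_books():
        global roundtrips
        roundtrips += 1              # one call to fetch all the books
        return [("Book %d" % i, AuthorProxy("Author %d" % i)) for i in range(10)]

    for title, author in list_books():
        print(title, author.name)    # silently adds a roundtrip per book

    print("roundtrips:", roundtrips) # 11, not 1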

I very much prefer the dumb-but-explicit approach, where APIs exchange data and only data.

Another big challenge that DCOM struggled with (and, I think, CORBA and RMI too) was object release. DCOM, for example, had explicit reference counting. This meant that a client could hold onto an instance for a long time, and that while it did so the server also had to keep the instance around. If a client died, keepalive logic in the DCOM framework would eventually release the object.

Perhaps the biggest challenge in any DCOM server was thread safety. Since any client could come in on a separate thread, your entire API model had to be thread-safe, or rely on thread-safe wrappers that acted as proxies for your real internal data objects.

I should say that locality transparency is very nice to work with — until you start having issues.


I've been testing gRPC a bit. It's promising, but the tools are clearly alpha quality, so if you're going to build apps on top of it, you're in for some early-adopter hurt.

For example, you need the latest Protobuf 3.0 beta release, which no OS distro currently ships, and you have to build gRPC from source. The language-specific packages (Ruby, Node, etc.) have releases that lag behind gRPC itself, so you'll probably have to build those from source, too (linking directly to HEAD from a Gemfile or package.json doesn't work, last I checked). Performance also seems decidedly lackluster, though it's been a few months since I did some casual benchmarking.

As for the API and feel of the library, it's similar to Thrift, and much like CORBA without the OO and attempts at location transparency. With the current generators, you'll get some low-level and not very friendly wrappers generated from the Protobuf declarations that don't attempt to hide the fact that you're writing RPC requests and responses as Protobuf structs.

One disappointing aspect of gRPC at this point is the lack of discoverability. Clients have to connect to a specific host and port, and you have to build your own glue based on ZooKeeper/etcd/Consul/DNS/whatever. Since no fault tolerance is built in, things like retries and load balancing are left as an exercise for the reader.
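
To illustrate, a sketch of that glue in Python against today's grpcio API; the registry lookup and the retry policy are hypothetical stand-ins for whatever you build on ZooKeeper/etcd/Consul/DNS:

    import time
    import grpc

    def lookup_address(service_name):
        # Hypothetical: ask your service registry for a "host:port" string.
        return "127.0.0.1:50051"

    def call_with_retry(make_call, attempts=3, backoff=0.5):
        """Retry a unary call on UNAVAILABLE, with exponential backoff."""
        for i in range(attempts):
            try:
                return make_call()
            except grpc.RpcError as err:
                if err.code() != grpc.StatusCode.UNAVAILABLE or i == attempts - 1:
                    raise
                time.sleep(backoff * 2 ** i)

    channel = grpc.insecure_channel(lookup_address("my-service"))
    # stub = my_service_pb2_grpc.MyServiceStub(channel)    # generated code
    # reply = call_with_retry(lambda: stub.DoThing(request, timeout=1.0))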


I think your frustration with the release process reflects Google's weakness in this area. Inside Google, everything is built and statically linked from the moving head of a single gigantic source code repository. Library releases, versions, and dependencies are not something the average Googler ever thinks about.


Retries and load balancing are on their way (expect them in a release cycle or two), and we're starting to figure out the discoverability part, especially on the client side.


Regarding the language-specific packages, we also have releases of the repository as a whole (https://github.com/grpc/grpc/releases), and the published language-specific packages correspond to those releases.


That is good to know. Do you have any recommendations for a more mature alternative to gRPC (cross-platform communication with a high-speed encoding)? Thrift?


I've never used Thrift. One option I have evaluated is NATS [1], a non-persistent, distributed MQ written in Go, with client libs in all sorts of languages. It's extremely fast and supports RPC-style request/response.

You get load balancing and discoverability for free, since NATS will distribute messages equally among consumers: just fire up new consumers and they will receive messages on the topics they subscribe to, and all a client needs is the host/port of the NATS server, which is the same for all parties. Couple that with some hand-coded serialization (Protobuf, Msgpack or even JSON) and you have a fairly resilient, fault-tolerant RPC.
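
A sketch of that with today's nats-py client (subjects and payloads hypothetical); the even distribution described above is what NATS queue groups provide, so the workers below share one:

    import asyncio
    import nats

    async def main():
        nc = await nats.connect("nats://127.0.0.1:4222")

        async def handle(msg):
            await msg.respond(b"pong from a worker")

        # Two workers in the same queue group; NATS picks one per request.
        await nc.subscribe("rpc.ping", queue="workers", cb=handle)
        await nc.subscribe("rpc.ping", queue="workers", cb=handle)

        reply = await nc.request("rpc.ping", b"ping", timeout=1.0)
        print(reply.data)
        await nc.drain()

    asyncio.run(main())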

You could trivially do streaming RPC if you handled the request/response matching yourself. If you look at the clients, NATS's RPC is all handled on the client side, using ordinary messages with a unique topic name generated for each request/response. Extending it to support multiple replies per request would be simple.
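
Sketched with the same nats-py client (all subject names hypothetical): the requester subscribes to a fresh inbox subject, and the responder can publish to it as many times as it likes:

    import asyncio
    import nats

    async def main():
        nc = await nats.connect("nats://127.0.0.1:4222")

        async def stream_handler(msg):
            for i in range(3):                 # several replies per request
                await nc.publish(msg.reply, b"chunk %d" % i)

        await nc.subscribe("rpc.stream", cb=stream_handler)

        inbox = nc.new_inbox()                 # unique per-request subject
        sub = await nc.subscribe(inbox)
        await nc.publish("rpc.stream", b"start", reply=inbox)

        for _ in range(3):
            print((await sub.next_msg(timeout=1.0)).data)
        await nc.drain()

    asyncio.run(main())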

There are other, more feature-rich message queues, of course, such as RabbitMQ. NATS' advantage is that it's extremely simple.

Another option is ZeroMQ, but it's a bit lower-level and doesn't solve the discoverability part. You'll end up writing much more glue for each client and server.

[1] http://nats.io


We've been building a new system that uses gRPC. We really like the gRPC model and will eventually switch most of our internal RPC systems over to it.

The main benefit we get from gRPC vs something like thrift is a nice way to do bidirectional streaming.
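
A sketch of what that looks like on the server side with grpcio's generated code (the service and message names are hypothetical):

    # A bidirectional-streaming handler; it would subclass the generated
    # chat_pb2_grpc.ChatServicer and be registered on a grpc.server().
    class ChatServicer:
        def Chat(self, request_iterator, context):
            # Requests and responses are independent streams: read incoming
            # messages as they arrive, yield replies whenever you like.
            for request in request_iterator:
                yield request                  # echo each message back

    # Client side: pass an iterator of requests, then iterate the responses:
    # for reply in stub.Chat(iter(outgoing_messages)):
    #     handle(reply)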


"Much of the complexity I saw with CORBA was to do with trying to pass object (i.e. service endpoint) references back and forth in a transparent way. Drop that misfeature, and everything from the IDL to the protocol to the frameworks to the error handling to the implementations of services themselves will be much simpler."

Drop that misfeature and you have no reason to invoke its name in the same breath as CORBA, other than to confound people under the age of 35.


I find it hard to take seriously without exceptions.


Does HTTP have exceptions? No, it has error codes and error messages. gRPC is a protocol. Implementors can choose to use exceptions or not, but that choice is tangential to the actual protocol.


That is right. gRPC and Google APIs share one simple error model, as defined by https://github.com/google/googleapis/blob/master/google/rpc/.... The goal is to make it easier for developers to handle errors.

PS: I was the co-author of it.


Seems sensible enough to leave out exceptions. Otherwise it would just encourage bad programmers to write bad programs, and the rest of us would be left wrapping up all those sodding exceptions that should have been contingency return values, just to maintain the interface/contract.

Over an RPC boundary, it seems impossible that any scenario could be so serious and so unexpected that no contingency could be formed for it, such that the exception would have to be propagated across to the other side. It opens up a whole can of worms.


I concur. Many people have an insufficient understanding of contracts, especially concerning the difference between refusing a service and failing because of an error.

An inherent requirement for any contract language is a signal to refuse a service if the pre-conditions are not met (for example, refusing to withdraw money because of insufficient funds in an account).

The most pragmatic solution to this inherent feature of contracts is typed (checked) exceptions; anything else is a less robust, incomplete solution.


Nothing in the first part of your post supported the conclusion. gRPC has an error code it can return, FAILED_PRECONDITION. Why are exceptions more robust or more complete than that?
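
For instance, with grpcio the refusal crosses the wire as a status code; the Python binding happens to surface it locally as an exception, but that choice stays local (the stub and message names below are hypothetical):

    import grpc

    def withdraw(stub, request):
        try:
            return stub.Withdraw(request)
        except grpc.RpcError as err:
            if err.code() == grpc.StatusCode.FAILED_PRECONDITION:
                # A refusal (e.g. insufficient funds): handle it, don't crash.
                print("refused:", err.details())
                return None
            raise                              # a genuine, unexpected failure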


They aren't. He just wants to be able to leak stuff like "IOException: disk space full whilst writing to 'C:\some\path\that\probably\ought\to\not\be\revealed\outside\this\gRPC\server'." to his client app when all the client app really cares about is whether it is a transient or permanent failure.


Here is my thought.

First of all, for debugging it is a huge help to get data back from the other end that is as rich as possible, and to log it.

As for the proper handling of exceptions, I think there are a number of semantic attributes that an exception can have: for instance, transient/permanent, and the scope of the exception (is the whole system hosed? just this record? etc.)

In a greenfield application you can build an exception hierarchy that records this, but in the real world it is necessary to build a system that does some inference, or even guessing, to decide what to do about a failure.
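
For instance, a hypothetical greenfield hierarchy might hang those attributes directly on the exception types:

    class RpcFailure(Exception):
        transient = False          # safe to retry?
        scope = "call"             # "call", "record", "system", ...

    class BackendUnavailable(RpcFailure):
        transient = True           # back off and retry

    class RecordCorrupt(RpcFailure):
        scope = "record"           # skip this record, carry on with the rest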

I just remember the bad old days when C programs would return an int that was -1 if the function failed, or maybe set errno; either way, you could double the size of some functions by adding in the error checking, and probably still get it wrong.

Exceptions, on the other hand, are good for the normal kind of call stack and also work in "callback hell" situations, since you can stuff them in a field or collection, pass them to a function or method, etc.


So return an error message with structured data about the error, and then log it in your protocol layer.

Most of the time, when you're handling RPCs, you're processing the responses asynchronously anyway, which makes the concept of an exception (which unwinds the stack) superfluous. You should absolutely be handling error conditions. That doesn't mean you need to use the language mechanisms of exceptions to do so.

CORBA got into trouble because it tried to make remote objects look too much like local objects. Don't think that way: think in terms of messages passed between communicating sequential processes, with an error indicating that the process will have to do something else.


Absolutely not. The people who do that don't understand the difference between refusal (precondition not met) and failure (unexpected system error). In your particular case above, I would never let an IOException leak "up" into a client-facing contract.

How could I? I would have designed that interface long before implementing it. No, in that case, this is an unexpected system error - for example, an unchecked RuntimeException in Java.

Just because people don't know how to write software contracts properly, doesn't mean that languages and protocols should remove the notion of a checked exception.

When used properly, they are an essential part of an understandable contract. And that's before we even look at it from a type-system point of view, where it's absolutely silly to use the same response message to represent either success or failure, greatly increasing the cognitive overhead for the developer calling your service.


You need to learn more languages. "Checked exception" is highly Java-specific and it is biasing your understanding.

A checked exception is okay, but it is still an exception, with all the stack-frame-capturing overhead that brings.

In better languages, like the ML variants (say, F#), we have discriminated unions. These let us return structured, type-safe result codes and information, without all the overhead and OO-ness of nasty exceptions. We can still throw exceptions, but these are strictly, and I mean strictly, for the truly exceptional and unexpected scenarios.
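
The same shape, sketched in Python 3.10+ for those who don't read F#, with dataclasses and structural pattern matching standing in for a discriminated union (the domain types are made up):

    from dataclasses import dataclass

    @dataclass
    class Ok:
        new_balance: int

    @dataclass
    class InsufficientFunds:
        shortfall: int

    def withdraw(balance, amount):
        if amount > balance:
            return InsufficientFunds(amount - balance)   # a refusal, not an error
        return Ok(balance - amount)

    match withdraw(50, 80):
        case Ok(new_balance):
            print("balance is now", new_balance)
        case InsufficientFunds(shortfall):
            print("refused: short by", shortfall)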


You can add your own exceptions if you want. Just have all your messages like:

    message FooResult {
      optional Exception exception = 1;
      optional Foo foo = 2;
    }


Sure, but that is incredibly weak from a typing point of view, and not compiler-checked in most languages.

You have no clean way of describing multiple pre-conditions (refusal reasons), or of evolving those over time in existing systems, without assuming that developers will somehow correctly read your API documentation and manually write the code everywhere to handle all possible exceptions, with no help from your compiler or your language's type system.

This is how to indicate exceptions in the 1970s. The world has moved on since then :-)


Language level exceptions are how to indicate exceptions in the '90s. The world has moved on since then.

Modern languages have compiler-checked sum types, and protobuf can support these (oneof). You get the good parts of exceptions (being able to cleanly separate error handling from happy-path code, and doing it concisely and readably via do-notation or an equivalent) without the bad parts (invisible non-local control flow, breaking referential transparency, being hard to abstract over).

(You still have the problem of returning a new type of error condition that your client code wasn't expecting, but it's the same problem as returning a new type of successful value that your client code wasn't expecting, and you can solve it in the exact same way).
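
Concretely, assuming a hypothetical schema with `oneof result { Foo foo = 1; Error error = 2; }`, the Python side can branch on which member is set:

    def handle(result):
        # WhichOneof reports which member of the oneof is populated, if any.
        which = result.WhichOneof("result")
        if which == "foo":
            return on_success(result.foo)      # hypothetical handlers
        if which == "error":
            return on_error(result.error)
        raise ValueError("server set neither branch")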


I'm not trying to be facetious, but do you literally just mean Haskell? Because none of the languages widely used in corporate / industry do.

I'd settle for 1990s tech over 1970s tech any day when it comes to contracts (APIs).


Scala - which I've been using in corporate/industry for 5 years now - works as I described. Of course that's only my experience.

(There are certainly plenty of places stuck on older tech - but I'd think those places won't be interested in adopting something as new as gRPC)


Why? It might make integrating with existing code a bit harder, but is not exactly an uncommon design choice.


I'd say it's actually easier to integrate exception-less code into an exception-enabled codebase than to go the opposite direction.



