
Twirp: A new RPC framework for Go - spenczar5
https://blog.twitch.tv/twirp-a-sweet-new-rpc-framework-for-go-5f2febbf35f
======
spenczar5
Hey everyone! I'm the OP and primary author of Twirp. I'm happy to answer any
questions and hear feedback.

You can also reach me directly, if you like, email is in my profile.

~~~
tschellenbach
Thanks for posting! How did you work around issues with GRPC on AWS/ELB?

~~~
justinko
We had to stick to layer 4/TCP via an ELB (as opposed to an ALB).

~~~
Xorlev
gRPC connections are persistent, so a single client will always talk to a
single backend in that configuration. It's fine if you have lots of clients
but can easily lead to load imbalances.

That's why projects such as Envoy exist. I'd link it, but I'm on mobile.

Keep an eye on it.

~~~
puzzle
You can use round robin load balancing in gRPC without Envoy.

~~~
tacticus
And it's not difficult to swap it out for a consistent hash balancer or other
solution.

------
joshuak
I think the work the gRPC contributors are doing is great, including all the
features. But I can't emphasize enough how important projects like this are,
projects that take great ideas to a new level by prioritizing simplicity. It's
like one project brainstorms great ideas by not being too resistant to new
ones (the "yes, and..." rule), while the other refines the ideas with a focus
on simplicity to extract the greatest value for the cost.

Great work.

------
warent
Really excited about this. I didn't like how opaque and heavy gRPC is. Also I
really wanted support for JSON. Mostly for these reasons, RPC hasn't been
implemented in my architecture yet (just using standard REST).

Twirp is everything I wanted in an RPC framework and I'm looking forward to
implementing it ASAP. Thanks Twitch team :)

~~~
jadeklerk
Can I ask why you want JSON with gRPC? The benefits of protobuf are
tremendous, with little to no downside.

~~~
lobster_johnson
On the other hand, plain URLs with JSON are _much_ easier to work with without
writing any code. You can do everything you want with curl from the shell, and
often an API allows doing almost anything from a browser (Elasticsearch comes
to mind). The simplicity of it all comes in handy when you want to do
something trivial — load a small piece of data into the server, do some
diagnostics, run some ad-hoc queries, etc. — without really wanting to write a
_program_.

Debugging with lower-level tools like strace and tcpdump is also something
that's trivial with JSON-over-HTTP/1.1, but _virtually_ impossible with gRPC.
(I mean, you could grab the payload and run it through gRPC deserialization,
but _would_ you?)

I'm a big fan of gRPC, but it is pretty top-heavy — lots of code to accomplish
relatively little. If you have a language that can build client bindings
dynamically from .proto files without recompilation, that would ease things a
lot, but if you're using Go, for example, the bar is pretty high going from
zero to productive.

~~~
bborud
I think the only RPC mechanism I've been happy with, that required little work
and didn't constantly get in the way was Stubby - the precursor to gRPC used
inside Google.

For a few years inside Google I experienced zero discussions about almost
every aspect of RPC. It took a trivial amount of time to implement
interfaces, clients, and servers in multiple languages, and it was trivial to
understand the interface of, and implement a client for, other people's code.

I didn't necessarily like everything in Stubby, but I absolutely loved not
needing to have pointless discussions about RPC mechanisms or protocol design.

Since I left Google, anything even remotely resembling RPC (including REST)
has been an utter waste of time mostly spent bickering over this crap solution
or that crap solution – mostly with people who don't care about the same
things you care about.

REST is a crap solution in my eyes because it invites absolutely endless
discussions on an endless list of subtopics. From the dogmatic/fundamentalist
HATEOAS end of the spectrum to the RPC-using-HTTP-and-JSON-and-let's-call-it-
REST camp. Not to mention that in addition you need to have an IDL and
toolchain discussion. (Of course, none of the toolchains or ways to describe
interfaces are very good. In fact, they all suck in part because the attention
is being spread across so many efforts that don't get the job done).

I have yet to see an IDL that works better than a commented .proto file from a
"get stuff done" point of view.

I completely understand where you are coming from when it comes to having a
human-readable wire format. For 20 years I was a strong believer in the same,
and for some systems I still believe in human readable formats.

But RPC and RPC-like mechanisms are no longer among them. RPC is for computers
to talk to each other and not for humans trying to manually repair stuff.

(I'm a pragmatist, so I'm allowed to both change my mind and have seemingly
inconsistent opinions :-))

For RPC you should encourage the creation of tools. If you need to look at the
traffic manually: fine, make a Wireshark plugin or a proxy that can view calls
in real time. That's annoying, but cheaper than going off and inventing yet
another mechanism. And once it is done, it is done and there's one more thing
that is sane.

We should really encourage people to build tools so we can automate things and
have more precise and predictable control over what we are doing without
having to reimplement parsing (which is what happens when people think they
understand your protocol - which they often don't).

Also, make sure it works for a large enough set of implementation languages
and understand how to work in mechanical sympathy with build systems. I don't
care if Cap'n Proto is marginally better than Protobuf if it lacks decent
support for languages I have to care about.

I have no idea how much time we wasted on trying to get Thrift to work in a
Java project that needed to build on Windows, Linux and OSX back in the day,
but I was ready to strangle the makers of Thrift for not paying attention to
this.

At this point I'm beyond caring about the design of RPC systems. I just want
something that works for software development and doesn't have to be a
discussion. Hence, I get annoyed every time I see a new RPC mechanism instead
of attempts to make some of the existing mechanisms work by making just one
aspect of them a bit more sane and exhibiting a bit more empathy with
programmers rather than the egos of the protagonists of various libraries,
frameworks, and formats.

~~~
lobster_johnson
How does Stubby compare to gRPC?

I imagine part of the lack of friction around Stubby was that Google was the
only consumer, and could maintain client and server bindings/tools for the
strict subset of the languages that Google standardized on.

~~~
bborud
It was pretty similar, but gRPC is a bit simpler, since Stubby had a lot of
other machinery dealing with authorization and the like.

I wouldn't say the lack of friction was mostly due to Google being the only
consumer. It was mostly because there was a clear path from A to B when you
wanted to give something an RPC interface and that this path was made
efficient.

Or at least more efficient than trying to use REST-like stuff in a large
organization with lots of different teams using different technologies.

It also helped that it wasn't a democracy. You had to use it. If you didn't
like that you were free to leave. As a result, people focused more effort on
making the tools better and making friction points go away.

In practical terms: we can spend weeks on getting a REST-like interface to
work with other projects because everyone has an opinion on every bit of the
design, and everyone uses different, quirky libraries and tools. For
Stubby in Google back then, it was mostly about defining the data structures,
the RPC calls, discuss semantics and then the mechanics were taken care of.
This is far, far, far from the actual case for many other technologies.

(And while I appreciate HATEOAS as a design philosophy, and I've tried to make
use of it several times, it just is not worth the effort. It just takes too
much time to do right and to get everyone on the same page. Most proponents
are more keen on telling everyone how they are using it wrong than on writing
good tools that actually help people use it right. There's very little empathy
with the developer.)

------
mikeschinkel
It is amazing to me that almost nobody here actually questioned the wisdom of
throwing out the time-tested robustness of REST in exchange for the very thing
REST was created to eliminate: the fragility of RPC. And all because using RPC
is easier in the moment (vs. over time).

This reminds me of the old saw: "Those who ignore history are doomed to repeat
it."

If you are unaware of the benefits, here are just a few links that can explain
it:

- [https://www.quora.com/What-are-the-advantages-of-REST-over-a...](https://www.quora.com/What-are-the-advantages-of-REST-over-a-more-RPC-style)

- [https://www.quora.com/What-is-the-difference-between-REST-an...](https://www.quora.com/What-is-the-difference-between-REST-and-RPC)

- [http://duncan-cragg.org/blog/post/getting-data-rest-dialogue...](http://duncan-cragg.org/blog/post/getting-data-rest-dialogues/)

- [https://apihandyman.io/do-you-really-know-why-you-prefer-res...](https://apihandyman.io/do-you-really-know-why-you-prefer-rest-over-rpc/)

- [https://www.quora.com/What-are-the-pros-and-cons-of-REST-ver...](https://www.quora.com/What-are-the-pros-and-cons-of-REST-versus-RPC)

~~~
Walkman
I did. It's pretty obvious they don't understand REST at all so they
reinvented the wheel.

------
kodablah
What could make this really take off is an in-browser JS client. The
simplicity it has added seems to really help there. The gRPC team has had one
in hiding for a long time, only giving access to people who explicitly ask:
[https://github.com/grpc/grpc/issues/8682](https://github.com/grpc/grpc/issues/8682)
(good thing GitHub has a feature that snips hundreds of comments or that link
would take a while to load)

~~~
spenczar5
Totally agree, and it's something I'd love to see. Consider this a call for
contributors - I think a simple generated JavaScript client would be an
excellent way to help with the project.

------
g123g
I am interested in finding out how Twirp helps with versioning. Is it possible
to have services evolve independently of each other?

------
ben_jones
This is a pretty awesome project. The one thing that's missing is autogenerated
JavaScript/TypeScript stubs like grpc-web provides. Will definitely experiment
with this when building small Go applications.

~~~
cyrusaf
Hopefully these will be implemented soon by the community. Open source rules
:)

------
daveroberts
I'm trying to understand the problem this solves. Let's say you have an HTTP
API which allows users to update their email address:

    
    
        POST /api/user/:username/update_email
    

But you change the application to require API clients to send the user_id
instead of the username.

    
    
        POST /api/user/:user_id/update_email
    

Wouldn't you still need to mandate that all clients are updated regardless of
whether you use this tool as an abstraction layer to your API?

~~~
warent
Recommend reading on what an RPC is: [https://www.geeksforgeeks.org/operating-system-remote-proced...](https://www.geeksforgeeks.org/operating-system-remote-procedure-call-rpc/)

And protobufs: [https://developers.google.com/protocol-buffers/docs/proto3](https://developers.google.com/protocol-buffers/docs/proto3)

For example, one benefit is that you're defining your API using
language-neutral protobufs, which then generate code consistently (including
types!) into many languages. Your entire communication protocol can be easily
and succinctly described in a single small, human-readable file.
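
As a concrete sketch, the update-email endpoint from the parent comment might be described like this in a .proto file (the service and message names here are hypothetical):

```protobuf
syntax = "proto3";

package example.user;

// One small, human-readable file defines the whole contract; protoc
// (with the Twirp or gRPC plugin) generates typed clients and servers
// for each target language from it.
service UserService {
  rpc UpdateEmail(UpdateEmailRequest) returns (UpdateEmailResponse);
}

message UpdateEmailRequest {
  string user_id = 1;   // switching from username to user_id is a
  string new_email = 2; // change made in exactly one place
}

message UpdateEmailResponse {
  bool ok = 1;
}
```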

------
tdrd
Lots of comments here about lack of JSON support in gRPC - while that's true,
it's relatively easy to bolt on using grpc-gateway
([https://github.com/grpc-ecosystem/grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway)).

Here's how we did it in CockroachDB:
[https://github.com/cockroachdb/cockroach/blob/24ed8df04719a1...](https://github.com/cockroachdb/cockroach/blob/24ed8df04719a15e74c29d5747224ef6987e7756/pkg/server/server.go#L994-L1008)

The supporting code (protoutil) is
[https://godoc.org/github.com/cockroachdb/cockroach/pkg/util/...](https://godoc.org/github.com/cockroachdb/cockroach/pkg/util/protoutil#JSONPb)
and
[https://godoc.org/github.com/cockroachdb/cockroach/pkg/util/...](https://godoc.org/github.com/cockroachdb/cockroach/pkg/util/protoutil#ProtoPb).

------
ghayes
I wrote a similar library to this called Hyperbuffs[0] in Elixir. The goal of
the project is to document and build your endpoints using protobufs, but to
allow the caller to choose either protobuf or JSON encoding for content and
accept types.

[0] [https://github.com/hayesgm/hyperbuffs](https://github.com/hayesgm/hyperbuffs)

------
chuhnk
Author of go-micro here. Good to finally start seeing some choices focused on
RPC. I started go-micro in 2014, before gRPC came on the scene. Even still I
think the tooling doesn't emphasize ease of development. That was my goal with
go-micro.

[https://github.com/micro/go-micro](https://github.com/micro/go-micro)

------
sliken
Looks nice. Can anyone comment on how auth works with Twirp? I was trying to
get gRPC working to authenticate with unsigned SSL certs (much like using SSH)
and was rather disappointed by how awkward it was. Basically, two completely
different methods requiring hiding a session ID in two unrelated places, just
to allow an SSL cert to control authentication.

Anyone done similar with Twirp?

~~~
spenczar5
Yep, you can do this pretty easily because Twirp's generated objects plug in
nicely to the normal `net/http` tools. The server is a `http.Handler`, and the
client constructor takes a `http.Client`. So if you're familiar with how to
use SSL certs for authentication with a vanilla Go HTTP client and server,
Twirp would work in exactly the same way.

When you create a Twirp server, you get a `net/http.Handler`. You can mount it
on a `http.Server` with its `TLSConfig` field set to the right policy.

The client constructor similarly takes a `*net/http.Client`. You could provide
a Client that uses a `http.Transport` with its `TLSClientConfig` field set to
something using the right value (like in
[https://gist.github.com/michaljemala/d6f4e01c4834bf47a9c4](https://gist.github.com/michaljemala/d6f4e01c4834bf47a9c4),
say).

------
rguzman
this looks really sweet. i've never understood why gRPC limits itself to
protobufs only when the protobufs have a canonical json representation. i'm
glad that twirp is fixing that piece.

~~~
gipp
On GCP, Cloud Endpoints proxies will transparently translate back and forth
between protobufs and the canonical JSON, allowing either representation to be
used. So if you're on GCP and don't care about vendor lock-in, that's a
solution.

~~~
buckhx
There is also the grpc-gateway project for JSON transcoding
[https://github.com/grpc-ecosystem/grpc-gateway](https://github.com/grpc-
ecosystem/grpc-gateway)

~~~
puzzle
To be fair, if you chose gRPC for performance, but then end up using JSON for
most of your traffic, perhaps you picked the wrong tool.

~~~
gipp
I don't think people generally want JSON accepted in their production
workloads. It's more for development, testing, that kind of thing. Being able
to just curl your service makes a huge difference.

~~~
puzzle
It always starts that way, then people demand JSON everywhere ("why not?"),
then they complain when things get too slow or when the OOMs begin to appear.
:-)

------
lobster_johnson
This looks promising! We use the go-grpc SDK in conjunction with gogoprotobuf,
and it's been a rocky road.

While the article identifies some operational issues (e.g. the reliance on
HTTP/2), there are several considerable deficiencies with gRPC today, at least
when using it with Go:

1\. The JSON mapping (jsonpb.go) is clumsy at best, and by this I mean that it
produces JSON that often doesn't look anything like how you'd hand-design your
structures. "oneof" structs, for example, generate an additional level of
indirection that you typically wouldn't have. Proto3's decision to forego
Proto2's optional values (in Proto3 everything is optional) causes Go's zero
value semantics to leak into gRPC [1]. (We had to fork jsonpb.go to fix some
of these issues, but as far as I can tell, upstream is still very awkward.)

2\. The Go code generator usually produces highly unidiomatic Go. "oneof" is
yet again an offender here. The gogoprotobuf [2] project tries to fix some of
go-grpc's deficiencies, but it's still not sufficient. Ideally you should be
able to use the Proto structs directly, but in our biggest gRPC project we
basically gave up here, and decided to limit Proto usage to the server layer,
with a translation layer in between that translates all the Proto structs
to/from native structs. That keeps things clean, but it's pretty exhausting
work, with lots of type switches (which are hampered by Go's lack of switch
exhaustiveness checking; we use BurntSushi's go-sumtype [3] a lot, but I don't
think it can work for Proto structs, as it requires that a struct also
implement an interface).
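
To make the oneof complaint concrete, here is an approximation of what protoc-gen-go emits for a hypothetical `oneof payload { Click click = 1; View view = 2; }` (real generated names differ slightly), plus the kind of type switch it forces:

```go
package main

import "fmt"

// Approximations of generated message types.
type Click struct{ X, Y int }
type View struct{ Page string }

// The oneof becomes an unexported interface plus one wrapper struct per
// case - the extra level of indirection described above.
type isEvent_Payload interface{ isEvent_Payload() }

type Event_Click struct{ Click *Click }
type Event_View struct{ View *View }

func (*Event_Click) isEvent_Payload() {}
func (*Event_View) isEvent_Payload()  {}

// Event is the containing message; callers must go through Payload.
type Event struct {
	Payload isEvent_Payload
}

// describe shows the type switch such code forces everywhere; the Go
// compiler will not warn if a newly added oneof case goes unhandled.
func describe(e *Event) string {
	switch p := e.Payload.(type) {
	case *Event_Click:
		return fmt.Sprintf("click at %d,%d", p.Click.X, p.Click.Y)
	case *Event_View:
		return fmt.Sprintf("view of %s", p.View.Page)
	default:
		return "unknown payload"
	}
}

func main() {
	fmt.Println(describe(&Event{Payload: &Event_Click{Click: &Click{X: 3, Y: 4}}}))
}
```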

3\. Proto3 has very limited support for expressing "free-form" data. By this I
mean if you need to express a Protobuf field that contains a structured set of
data such as {"foo": {"bar": 42}}. For this, you have the extension
google.protobuf.Value [4], which supports some basic primitives, but not all
(no timestamps, for example) and cannot be used to serialize actual gRPC
messages; you can't serialize {"foo": MyProtoMessage{...}}. Free-form
structured data is important for systems that accept foreign data where the
schema isn't known; for example, a system that indexes analytics data.

From what I can tell, though, Twirp doesn't "disrupt" gRPC as much as I'd
like, since it appears to rely on the existing JSON mapping.

[1] [https://github.com/gogo/protobuf/issues/218](https://github.com/gogo/protobuf/issues/218)

[2] [https://github.com/gogo/protobuf](https://github.com/gogo/protobuf)

[3] [https://github.com/BurntSushi/go-sumtype](https://github.com/BurntSushi/go-sumtype)

[4] [https://developers.google.com/protocol-buffers/docs/referenc...](https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#google.protobuf.Value)

~~~
spenczar5
Yeah, I agree with pretty much everything you've written here.

> 1\. The JSON mapping (jsonpb.go) is clumsy at best

The best thing for optional fields in jsonpb is to use the protobuf wrapper
types [1]. They have special support in jsonpb to serialize and deserialize as
you would expect, without the indirection. But the Go structs you get on the
other end are a little weird, so it's a tradeoff.
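
For example (the message and field names here are made up), a wrapped field keeps "unset" distinct from the zero value, and jsonpb maps it to a plain string or null rather than a nested object:

```protobuf
syntax = "proto3";

import "google/protobuf/wrappers.proto";

message UserUpdate {
  string user_id = 1;
  // In JSON this serializes as "email": "a@b.example" or "email": null;
  // in Go it becomes a *wrappers.StringValue, so nil means "not set".
  google.protobuf.StringValue email = 2;
}
```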

> 2\. The Go code generator usually produces highly unidiomatic Go.

Yeah, using the generated structs as the main domain types in your code can be
up-and-down. I agree that gogoprotobuf can help, but it's rough. We definitely
use Getter methods on generated structs quite a bit for stuff like oneofs.

> 3\. Proto3 has very limited support for expressing "free-form" data.

There's always `repeated bytes` :) It sounds like a joke, but we've used it in
some spots where the input is totally schema-less.

The Any type is also designed for this sort of thing. Still clumsy, though.

[1] [https://github.com/google/protobuf/blob/master/src/google/pr...](https://github.com/google/protobuf/blob/master/src/google/protobuf/wrappers.proto)

~~~
mappu
_> > 2\. The Go code generator usually produces highly unidiomatic Go._

 _> using the generated structs as the main domain types in your code can be
up-and-down_

At $DAYJOB we solve this by doing code generation outward from our domain
types. The RPC layer is idiomatic Go because that's what we began with.

Some go/token parsing and regexes take our structs and produce a server-side
router implementation for net/http (endpoints from magic comments), client-side
libraries for Go / C++ (Qt) / PHP / JS, and documentation in Markdown.

Our system is in a pretty reusable state, but nobody has the free cycles to
open-source it. If Twirp had been available 24 months ago, our project might
have turned out differently.

------
anonacct37
Any performance numbers?

~~~
spenczar5
It's really hard to write benchmarks of an RPC system that mean much, but the
overhead is really just in serialization. We have services that handle tens of
thousands of requests per second on Twirp in one process.

Serialization/deserialization of a typical protobuf struct takes a microsecond
or two, but it generates some garbage, so GC ends up slowing you down if you
try to go really crazy and push past 100k req/s in one process with non-
trivial message structures.

~~~
anonacct37
Thanks, that's the exact info I was looking for.

I've unfortunately been bitten before by choosing JSON as a serialization
format, specifically in Go, due to JSON performance dominating processing.

No criticism though, JSON is the right choice for many types of APIs.

~~~
spenczar5
You can and should use Twirp's protobuf serialization instead for almost all
applications. The JSON serialization is really intended for developers and
low-throughput cross-language clients.

Protobuf serialization isn't free, but it's definitely cheaper than JSON
serialization.

------
tothemario
awesome! a lot easier to integrate than gRPC. It will be way more useful after
other languages are supported

------
ww520
Is Thrift supported in Go?

~~~
jadeklerk
Not fully

[https://thrift.apache.org/docs/Languages](https://thrift.apache.org/docs/Languages)

------
jlebrech
beautiful, i'm sick of juggling get/post/put/patch/delete and figuring out
which one the api developer chose. just do X using Yparams.

------
lprd
Can anyone explain to me what an RPC is?

~~~
th1nkdifferent
Remote Procedure Call.
[https://en.wikipedia.org/wiki/Remote_procedure_call](https://en.wikipedia.org/wiki/Remote_procedure_call)

------
SEJeff
Looks like the website got a HN hug of death and isn't really loading for me
whatsoever.

~~~
dayjah
Unlikely? It's hosted on Medium - I'm pretty sure they can handle the load.
However, here is the cached version:
[https://webcache.googleusercontent.com/search?q=cache:6vYOM9...](https://webcache.googleusercontent.com/search?q=cache:6vYOM9EjSsAJ:https://blog.twitch.tv/twirp-a-sweet-new-rpc-framework-for-go-5f2febbf35f+&cd=1&hl=en&ct=clnk&gl=us)

------
oh-kumudo
It is nice, but I would say it has limited value outside of Twitch.

~~~
d0100
I definitely would prefer to use this over bizarre-REST-like APIs (which is
what REST usually devolves into) in my next project, if I can't use GraphQL.

~~~
rmrfrmrf
I was surprised to find that GraphQL was probably the easiest sell to my team
ever.

------
MentallyRetired
I've been saying for years and years that JSON-RPC is the way to go. Glad to
see at least someone agrees.
[http://www.jsonrpc.org/](http://www.jsonrpc.org/)

------
rmrfrmrf
There seems to be somewhat of a pattern of Go being linked to outages
(CloudFlare and now Twitch). Any regrets investing in Go?

~~~
klodolph
Are you talking about problems with gRPC mentioned in the article? gRPC is not
in any way specific to Go or even related to Go, and I can confirm that there
have been some problems with the C++ version of gRPC.

The CloudFlare outage was related to leap second handling... while the
particulars of the Go library contributed, this is also far, far from the only
time that a leap second caused havoc online. Hell, in 2008, Zunes were
crippled by a leap day bug.

RPC and time handling are notoriously tricky problems to get right.

------
lykr0n
Why use HTTP for transport instead of MessagePack or ZMQ? It seems a bit
overkill if you are whipping binary data back and forth between services.
Protobuf + ZMQ seems a lot more efficient to me.

~~~
mentat
A lot of this boils down to wanting to be able to use standard load balancers.

~~~
lykr0n
ZMQ uses ZMTP, which runs over TCP, and HAProxy, for example, supports plain
TCP just as well as HTTP.

