
Why create gRPC if you can run REST over HTTP/2, and there is websockets? - techsin101
If you say communication, then isn't websocket already there for that? gRPC honestly looks like it could have been just websocket with a library enforcing data contracts.

I'm confused about the actual benefit here. It seems to be pushed with the promise of benefits that already exist.
======
Matthias247
Apart from the fact that you can do bidirectional data transfers with both,
they are quite different:

\- grpc and HTTP/2 have a fixed paradigm (RPC plus streaming), whereas
websockets are lower level and just describe how packets are transferred in
each direction. For RPC you have to build something on top (e.g. following the
WAMP conventions).

\- grpc on top of HTTP/2 features per-stream flow control. Doing something
like transferring a 4GB file on top of websockets without loading everything
into memory is not as easy, since flow control there is somewhere between
basic and missing.

\- They have different ordering guarantees. grpc doesn't guarantee ordering
between different requests; websockets guarantee it between messages.

\- grpc comes with an interface definition language (protobuf) and code
generators, which makes it much easier to build interoperable and backwards
compatible services.

\- grpc is proxyable per request, since each stream contains all relevant
information (endpoint, parameters, auth, etc.). Overall, grpc and HTTP/2 are
more stateless than a websocket connection, which could carry any kind of
data.

\- websockets work out of the box in browsers. The original grpc specification
does not, because it relies on features that are not exposed in browser APIs.
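To illustrate the IDL point concretely, a hypothetical protobuf definition of a streaming file-transfer service might look like this (all names here are made up, not from any real service):

```protobuf
syntax = "proto3";

package files;

// A single chunk of file data; the client sends many of these in sequence.
message FileChunk {
  bytes data = 1;
}

message UploadStatus {
  bool ok = 1;
}

service FileService {
  // Client-streaming RPC: the file is sent as a sequence of chunks,
  // with HTTP/2 per-stream flow control applying backpressure.
  rpc Upload(stream FileChunk) returns (UploadStatus);
}
```

From a definition like this, the grpc tooling generates client stubs and server skeletons in each supported language, which is where the interoperability and backwards-compatibility benefits come from.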

~~~
notheguyouthink
Since you brought it up, what are your _(or anyone in-the-know 's)_ opinions
on transferring large _(~GB)_ files via gRPC?

I recently implemented my first gRPC service and while it went nicely, I ended
up writing a somewhat gross HTTP service alongside the gRPC service just to
push and pull bytes. After some short research _(not testing myself!)_ , it
sounds like gRPC has significant overhead for large files, because each file
has to be chunked up, and the byte chunks end up not being efficient on CPU
time/etc.
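To make concrete what I mean by chunking, the client side looks roughly like this (plain Python sketch, no gRPC dependency; the message wrapping is left as a comment since the actual types depend on your .proto):

```python
import io

CHUNK_SIZE = 64 * 1024  # 64 KiB, comfortably under gRPC's default 4 MB message limit

def file_chunks(fileobj, chunk_size=CHUNK_SIZE):
    """Yield successive byte chunks from a file-like object.

    In a real client-streaming upload, each chunk would be wrapped in a
    protobuf message (e.g. FileChunk(data=chunk)) and fed to the stub.
    """
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            return
        yield chunk

# Example: a 150 KiB in-memory "file" becomes chunks of 64, 64, and 22 KiB.
data = io.BytesIO(b"x" * (150 * 1024))
sizes = [len(c) for c in file_chunks(data)]
print(sizes)  # [65536, 65536, 22528]
```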

Since my download service required no security/etc. that gRPC might excel at,
I just used gRPC for setting up the download/upload, querying data, etc.

It would have been much simpler to fully use gRPC for the upload and download,
but it sounded costly. With no specific question, what are your thoughts on
gRPC streaming "large" data?

example perf tests: [https://ops.tips/blog/sending-files-via-grpc/](https://ops.tips/blog/sending-files-via-grpc/)

~~~
Matthias247
In an ideal implementation it should be pretty much as fast as plain HTTP/2,
since the overhead is only a bit of protobuf framing between chunks, which,
depending on the chunk size, is negligible.
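As a back-of-the-envelope check (assuming roughly a 5-byte gRPC length prefix plus a few bytes of protobuf tag/length framing per message; the exact numbers vary slightly):

```python
# Assumed per-message framing: 5-byte gRPC length prefix + ~5 bytes of
# protobuf tag/length for the bytes field. Rough numbers for illustration.
FRAMING_BYTES = 10

for chunk_size in (4 * 1024, 64 * 1024, 1024 * 1024):
    overhead = FRAMING_BYTES / chunk_size
    print(f"{chunk_size // 1024:5d} KiB chunks: {overhead:.4%} framing overhead")
```

Even at small 4 KiB chunks the framing is a fraction of a percent, so the wire overhead alone can't explain a large slowdown.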

However, what I think comes into play in the benchmark is the behavior of the
real library, and most likely the garbage collector. There might be paths in
the code where each chunk is freshly allocated and later dropped. This gets
more costly as chunks get bigger, which might explain the degradation in the
measurements.

In this example the client's chunk buffer is allocated once and everything
else is a cheap slice, which should be good. However, for the server I'm not
sure; I guess it might allocate the buffer for each chunk just to drop it
later on. Then you could investigate whether you can give protobuf a
prepopulated data structure with a reserved maximum size (e.g. from a pool) to
use for deserialization, which might speed things up.
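The allocation concern can be illustrated outside of gRPC: reuse one preallocated buffer instead of allocating a fresh bytes object per chunk (plain Python sketch; whether your protobuf runtime supports this kind of reuse is implementation-specific):

```python
import io

CHUNK_SIZE = 64 * 1024

def copy_with_reused_buffer(src, dst, chunk_size=CHUNK_SIZE):
    """Copy src to dst using a single preallocated buffer.

    readinto() fills the existing buffer instead of allocating a new
    bytes object per chunk -- the same idea as handing protobuf a
    pooled, pre-sized structure for deserialization.
    """
    buf = bytearray(chunk_size)
    view = memoryview(buf)
    total = 0
    while True:
        n = src.readinto(buf)
        if not n:
            return total
        dst.write(view[:n])
        total += n

src = io.BytesIO(b"a" * (150 * 1024))
dst = io.BytesIO()
copied = copy_with_reused_buffer(src, dst)
print(copied)  # 153600
```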

Another thing which obviously comes into play is the performance of the HTTP/2
implementations, which may vary a lot. I think even for Go, the HTTP/2
implementation in the net package and the one in grpc were different (not sure
if they have unified them now; there was an attempt under way). So the
question would need to be investigated separately for each of those.

~~~
notheguyouthink
Appreciate your insight!

------
dozzie
Are you really asking why not replace a well-defined RPC protocol that has
properly specified data serialization and error signaling with a randomly
slapped together half of an RPC protocol?

~~~
madmax96
REST is not intended to be an RPC protocol. Comparing it to one isn't
appropriate.

The point of REST is to totally and generally describe the semantics of a
networked application's state transitions, and it is protocol-agnostic. gRPC
exists to invoke remote functions.

Why use REST? That's well-documented in Fielding's thesis [1]. Use the
appropriate architecture for the appropriate task.

[1][https://www.ics.uci.edu/~fielding/pubs/dissertation/fielding...](https://www.ics.uci.edu/~fielding/pubs/dissertation/fielding_dissertation.pdf)

~~~
dozzie
> REST is not intended to be a RPC protocol. Comparing it to one isn't
> appropriate.

Ah yes, an obligatory reminder about this cute little original idea that never
got implemented for computers to consume. Though whenever somebody talks about
REST _API_ , they mean "almost RPC with underspecified semantics", not a
hypertext driven way of fetching the data.

~~~
madmax96
The web, as it is consumed by humans, generally adheres to the RESTful
constraints quite well. RESTful architecture (e.g. the architecture that lets
you use one client to consume literally billions of applications and provides
the mechanisms for you to move between them totally transparently) works very
well for building a specific kind of application. Now, most of us are not
building the web, and so we have other constraints that are often more
important. You are mistaken in your assertion that this was never implemented,
considering that you posted this comment from a REST client.

A general comparison of REST and RPC is as unproductive and harmful as blindly
making "REST APIs" everywhere. The way to escape this game of buzzword bingo
is to introduce nuance to our conversations about architectures, not by
changing the buzzword.

~~~
dozzie
>>> REST is not intended to be a RPC protocol. Comparing it to one isn't
appropriate.

>> Ah yes, an obligatory reminder about this cute little original idea that
never got implemented for computers to consume.

> The web, as it is consumed by humans, generally adheres to the RESTful
> constraints quite well.

Quite an apt observation: _as consumed by humans_. But we're comparing
something that's commonly called "REST" to _an RPC protocol_ , and RPC
protocols are quite clearly intended for computers, not humans. This thing
that is commonly called "REST", even if you argue it's named incorrectly, is
very far from the famous PhD thesis.

> You are mistaken in your assertion that this was never implemented,
> considering the fact you posted this comment from a REST client.

OK. If you say that I'm wrong about implementations, show me a production
system _where the computer is the primary consumer_ (i.e. not merely a
terminal for displaying something to a human operator, and not a second-class
citizen like web crawling bots) and the system is RESTful in the original
meaning. I can't think of even a single one. Note, however, that the WWW
cannot be considered such an implementation, because its primary consumer is a
human, not a computer. And I remind you once again: we're talking under this
post about unsupervised computer-to-computer communication, where a human
operator is a rare guest.

> A general comparison of REST and RPC is as unproductive and harmful as
> blindly making "REST APIs" everywhere.

Much less productive is trying to pull an unrelated idea (the original meaning
of "REST") into a discussion about machine-to-machine communication,
especially since the original term was created post factum to describe the
WWW's architecture (already existing back then!) and wasn't used for pretty
much anything else, so introducing the term has neither advanced nor produced
anything.

~~~
madmax96
We agree that ad-hoc RPC mechanisms are inappropriate.

REST is not an RPC mechanism - people wrongly “using it” as one does not
change what REST is, simply because there is a wealth of academic and
industrial knowledge that uses REST in a specific way. This way is formally
defined, and retroactively changing the meaning offers no immediate advantage.

Again: when a statement “REST is worse than RPC” is evaluated, it implies to
people (who are already obviously confused) that REST is worse than RPC. When
people look at what REST is (i.e. Fielding’s thesis) they are further
confused, because what Fielding describes obviously isn’t designed to compete
with RPC. RPC existed when Fielding was writing his thesis and developing
HTTP. He wasn’t solving the same problem.

The best thing to do is to instruct what REST actually is so that the
confusion is dispelled. Perpetuating ignorance only leads to more problems.
Pointing out that REST is not an ideal mechanism for machine-to-machine
communication is therefore extremely relevant.

Fielding was heavily involved in the design of HTTP 1.1. The justification of
those design decisions is REST, which became his thesis. Implying that
Fielding was disconnected from the design of the web is factually incorrect.

Folks on Hacker News (where the web is obviously one of the most common
application delivery mechanisms) might want to know how well-behaved web
applications are constructed. That’s definitely relevant to this thread and
community.

------
skybrian
gRPC isn't really designed with web programming in mind at all; it's primarily
used for server-side languages. (For example, it supports 64 bit ints.)

------
RantyDave
It's almost exactly like the difference between static and dynamic languages
(with gRPC being the static option). It's an order of magnitude faster and
(like you said) to a certain extent enforces compliance with an API. Brittle,
though.

~~~
techsin101
I get that it's faster over HTTP, but I haven't seen much when it comes to
websockets.

------
segmondy
You can build gRPC and get REST for free.

[https://github.com/grpc-ecosystem/grpc-gateway](https://github.com/grpc-ecosystem/grpc-gateway)
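With grpc-gateway, the REST mapping is declared with `google.api.http` annotations in the .proto itself; a sketch (service and message names here are hypothetical):

```protobuf
syntax = "proto3";

package example;

import "google/api/annotations.proto";

message GetUserRequest {
  string id = 1;
}

message User {
  string id = 1;
  string name = 2;
}

service UserService {
  // grpc-gateway generates a reverse proxy that maps this REST route
  // onto the gRPC method, binding {id} to the request field.
  rpc GetUser(GetUserRequest) returns (User) {
    option (google.api.http) = {
      get: "/v1/users/{id}"
    };
  }
}
```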

~~~
notheguyouthink
Hell, I don't know how production-friendly it is, but it sounds like clay[1]
will even give you HTTP/JSON endpoints without setting up a second server.

[1]: [https://github.com/utrack/clay](https://github.com/utrack/clay)

 _edit_ : If you're using Go, of course.

