

The Sorry State of Convenient IPC (2014) - rubikscube
http://dev.yorhel.nl/doc/easyipc

======
tlack
I believe Q/kdb+ has the most natural IPC implementation I've seen, as long as
you can keep all your gear inside Q.

To connect to another host you just do:

        remote:hopen `:server:port;

The port number is specified with -p on the command line. To send messages:

        answer:remote "sum x"

or, if strings are repulsive to you, you can pass in an expression like:
answer:remote (`sum;x). `sum here is a symbol, which is like a reference to a
variable (or function). You can also pass a literal function for the remote
host to execute, since functions are just data.

You can also make asynchronous calls using the negative of the handle (which,
wildly enough, is just a file descriptor int):

        neg[remote] (`incViews;`welcomepage;1)

Completely natural and simple. The only issue I'm aware of is a 2 GB limit
per message; built-in functions can split your data into blocks to transmit
larger amounts.

The in-memory representation of data and the network representation are
essentially the same. So there's no costly per-byte packing or unpacking going
on, simply a memcpy(). In terms of performance and power use (environmental
impact) this is a huge win.

It's trivial to expose data to the outside world by creating a web server.
Simply define a function called .z.ph (or .z.pp for POST). The one below
evaluates the requested URL as a Q expression and returns the result as JSON.

        .z.ph:{.j.j value x 0}

Being able to build services as simply as this allows you to begin to think
about the microservice as the unit of program composition, rather than
classes, functions, or modules. Microservices have their own issues, but
that's a topic for a later rant.

------
tinco
Looks to me like DBus is actually pretty good and just needs some good
implementations, some bug reports and some active contributors. I'm not super
familiar with non-server applications on Linux, but isn't DBus also in
widespread use?

Most Linux server apps I know communicate through the filesystem by leaving a
.pid, .lock and/or a .sock somewhere and then setting up a custom channel
using a handle to that. It feels kind of hacky, but at least you get to
control the crappiness.

~~~
rwmj
kdbus actually supports a whole load of fairly sensible communications models,
with a reasonably simple API, and ditches the horrible bits of dbus. It's a
shame that it's (unnecessarily IMO) bound up in controversy.

Edit: Poettering's talk about kdbus from devconf 2014:
[https://www.youtube.com/watch?v=HPbQzm_iz_k](https://www.youtube.com/watch?v=HPbQzm_iz_k)

~~~
justincormack
It doesn't seem very general purpose unless there are also non-kernel
implementations, and you can't sanely remote it, as it relies on the kernel
for authentication.

~~~
rwmj
Its purpose is single-host communication. If you assume a single host, then
you don't have to deal with distributed-computing problems at all.

Not sure why the lack of non-kernel implementations is a problem. Is the fact
that there is no non-kernel implementation of Unix sockets / TCP/IP /
filesystems / name-most-other-Linux-kernel-features a problem for you?

BTW I don't especially care about kdbus, but I do hate that there are no good,
featureful communication primitives in Linux.

~~~
justincormack
The article was looking for multihost too, which is why I mentioned it.

There are some non-kernel sockets implementations, but that matters less
since every kernel implements sockets; kdbus, by contrast, is not implemented
anywhere else yet. It is also not possible to implement correctly in
userspace, since the security guarantees rely on the kernel, which is
inconvenient. It is unlikely that e.g. FreeBSD will ever implement it.

------
JoachimSchipper
Encoding complex objects is just hard (in the sense that it's going to get
ugly, complicated, or usually both); however, just getting framing right
(i.e. ensuring that client and server agree on where messages/objects start
and end) at least solves many security problems. A simple binary (tag-)length-
value works, as do schemes based on
[http://cr.yp.to/proto/netstrings.txt](http://cr.yp.to/proto/netstrings.txt).
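
A netstring is simply the payload's length in decimal ASCII, a colon, the
payload, and a trailing comma. A minimal Python sketch of the framing (my own
illustration, not from the thread; function names are made up), here carrying
a JSON payload as mst suggests below:

    import json

    def netstring_encode(payload: bytes) -> bytes:
        # <decimal length>:<payload>,
        return str(len(payload)).encode("ascii") + b":" + payload + b","

    def netstring_decode(buf: bytes):
        """Return (payload, rest) for the first complete netstring in buf."""
        length, sep, rest = buf.partition(b":")
        n = int(length)  # raises ValueError on a non-numeric length
        if not sep or len(rest) < n + 1 or rest[n:n + 1] != b",":
            raise ValueError("incomplete or malformed netstring")
        return rest[:n], rest[n + 1:]

    # "Netstrings of JSON": frame a JSON document for the wire.
    frame = netstring_encode(json.dumps({"op": "sum", "args": [1, 2]}).encode())
    payload, rest = netstring_decode(frame)

Because the length comes first, the receiver always knows exactly where a
message ends, which is the framing guarantee being discussed here.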

~~~
mst
Netstrings of JSON are my go-to for a "trivial protocol that people can
easily implement".

~~~
JoachimSchipper
That's fairly easy when most of your data isn't binary, and when you're using
only languages with good UTF-8 support, yes.

~~~
perlgeek
And no cyclic references.

~~~
mst
The Object::Remote wire protocol handles that fine.

------
math
gRPC is probably worth mentioning in this thread (new since the article),
though I haven't used it yet and have no idea if it can satisfy his
asynchronous requirement.

He considers using message queues and dismisses the idea because "you're still
on your own in implementing an RPC mechanism on top of that. And for the
purpose of building a simple RPC mechanism, I'm convinced that plain old UNIX
sockets or TCP will do just fine."
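
For a sense of scale, here's roughly what an RPC mechanism over "plain old
TCP" amounts to: a toy synchronous call built on a 4-byte length prefix plus
JSON. This is a minimal sketch of the general idea, not anyone's actual
protocol; all names and framing choices are mine:

    import json, socket, struct, threading

    def send_msg(sock, obj):
        # 4-byte big-endian length prefix, then a JSON payload
        data = json.dumps(obj).encode()
        sock.sendall(struct.pack(">I", len(data)) + data)

    def _recv_exactly(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def recv_msg(sock):
        (length,) = struct.unpack(">I", _recv_exactly(sock, 4))
        return json.loads(_recv_exactly(sock, length))

    def serve_once(server_sock, handlers):
        # Accept one connection, answer one call, and return.
        conn, _ = server_sock.accept()
        with conn:
            call = recv_msg(conn)  # {"method": ..., "args": [...]}
            result = handlers[call["method"]](*call["args"])
            send_msg(conn, {"result": result})

    # Demo over a loopback connection.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    t = threading.Thread(target=serve_once,
                         args=(srv, {"sum": lambda *xs: sum(xs)}))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    send_msg(cli, {"method": "sum", "args": [1, 2, 3]})
    reply = recv_msg(cli)["result"]
    cli.close(); t.join(); srv.close()

The loop in _recv_exactly is the part that deals with the genuinely fiddly
bit — recv returning partial data — which supports the point that the RPC
layer itself is the easy part.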

In my experience, getting an arbitrarily sized message from A to B over a
potentially unreliable network is the difficult bit; implementing a basic RPC
mechanism on top is pretty easy. I did a project that required a lot of IPC
and ended up using nanomsg (also not mentioned in the article; it's very
similar to ZeroMQ) for fast, reliable message delivery, and wrote my own
basic RPC layer on top of it (NanomsgRPC, currently C# only). This worked
pretty well for me.

A note on nanomsg, though: I wouldn't consider it stable enough for
production use; that said, it didn't give me any problems for what I used it
for.

------
eschaton
This is pretty much why Jordan Hubbard & co. are bringing Mach IPC and other
NeXT/Apple technologies to FreeBSD:
[http://youtu.be/49sPYHh473U](http://youtu.be/49sPYHh473U)

It turns out that some of these problems already do have pretty decent
solutions.

