
ZeroMQ 4.2.0 - arunc
https://github.com/zeromq/libzmq/releases/tag/v4.2.0
======
omginternets
I'm confused. I thought 0MQ had essentially been superseded by nanomsg [0].
What's the deal?

On a related note, using nnpy [1] and mangos [2] has been a real pleasure.

And lastly, a bit of info on how 0MQ and nanomsg differ [3].

[0] [http://nanomsg.org/](http://nanomsg.org/)
[1] [https://github.com/nanomsg/nnpy](https://github.com/nanomsg/nnpy)
[2] [https://github.com/go-mangos/mangos](https://github.com/go-mangos/mangos)
[3] [http://nanomsg.org/documentation-zeromq.html](http://nanomsg.org/documentation-zeromq.html)

~~~
jpm_sd
A postmortem on nanomsg begs to differ:
[http://sealedabstract.com/rants/nanomsg-postmortem-and-other-stories/](http://sealedabstract.com/rants/nanomsg-postmortem-and-other-stories/)

~~~
widdma
After that, Garrett D'Amore eventually returned on the condition that he be
BDFL. See
[https://github.com/nanomsg/nanomsg/issues/619](https://github.com/nanomsg/nanomsg/issues/619)

Since then the project looks to be fairly healthy.

~~~
omginternets
Oh good! This is the guy behind mangos, too.

------
MichaelAza
I've been seeing the occasional post from his blog about the choice to die but
didn't connect them to ZMQ until I saw this.

ZMQ, as a library, is a work of art, and Code Connected is by far one of the
best programming books I have had the pleasure of reading. That, coupled with
the deep and interesting posts on his blog, shows we have lost a truly great
mind.

------
k__
"Tell them I was a writer.

A maker of software.

A humanist. A father.

And many things.

But above all, a writer.

Thank You. :)"

\- Pieter Hintjens [http://hintjens.com/](http://hintjens.com/)

~~~
kzisme
If you're just learning about ZeroMQ (or just reading this post), I suggest
taking some time to read or listen to some of Pieter Hintjens' blog posts,
books, and talks.

It's a wonderful experience and he seemed like a great/genuine dude.

~~~
k__
I think I will use PC3[0] in my next project. Seems like an easy and sane way
to structure repos.

[0] [http://hintjens.com/blog:23](http://hintjens.com/blog:23)

~~~
tel
How would you compare PC3 and C4? It seems like C4
([https://rfc.zeromq.org/spec:42/C4](https://rfc.zeromq.org/spec:42/C4)) is
the latest evolution of this design, embracing optimistic merging even more
than PC3 and doing away with reviewers.

~~~
k__
Hm, going by what he wrote, it seems to be the other way around:

"The Pedantic Code Construction Contract (PC3) is an evolution of the GitHub
Fork + Pull Model, and the ZeroMQ C4 process..."

~~~
tel
That's a good point. I was basing it on the dates I could find and what I've
been reading in Social Architecture. Perhaps the latest C4 is later and PC3
is smack in the middle.

------
RelaxBox
I haven't used zeromq in forever, but did they ever fix the problem with
request/reply sockets where the server socket could get into an indeterminate
state after a client socket drops at just the wrong time?

~~~
alfalfasprout
Nope. The reality is that ZeroMQ is useful for a variety of tasks, but it no
longer really excels at the tasks its specific socket types were designed
for. He does offer a heartbeating pattern to get around this issue for
REQ/REP sockets, though.
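
The simplest client-side recovery in the zguide (the "Lazy Pirate" pattern, a
cousin of the heartbeating approach mentioned above) can be sketched with
pyzmq; the endpoint, port, and timeouts here are illustrative, not from any
particular codebase:

```python
import zmq

# Hedged sketch of client-side recovery for a REQ socket (the zguide's
# "Lazy Pirate" pattern): if no reply arrives in time, assume the peer
# died mid-exchange, discard the stuck socket, and retry with a new one.
def reliable_request(endpoint, payload, retries=3, timeout_ms=100):
    ctx = zmq.Context.instance()
    for _ in range(retries):
        sock = ctx.socket(zmq.REQ)
        sock.connect(endpoint)
        sock.send(payload)
        if sock.poll(timeout_ms, zmq.POLLIN):
            reply = sock.recv()
            sock.close()
            return reply
        # The REQ socket is now stuck waiting for a reply that will
        # never come; close it (discarding queued data) and start over.
        sock.setsockopt(zmq.LINGER, 0)
        sock.close()
    return None

# With nothing listening on this (illustrative) port, every attempt
# times out and the client gives up cleanly instead of hanging forever.
print(reliable_request("tcp://127.0.0.1:5599", b"ping"))
```

The key point is that a REQ socket which has sent but not received is in an
unusable state, so recovery means throwing the socket away, not retrying on it.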

For pub/sub, Aeron is now _much_ better (way more throughput, and it doesn't
crash at multi-gigabit rates like OpenPGM does). For REQ/REP, HTTP/2 and
other QUIC-based approaches are reigning supreme (if you need high
performance across a WAN, you can repurpose something like FIXT 1.1 from the
FIX protocol).

~~~
justinsaccount
> For REQ/REP HTTP/2 and other QUIC-based approaches are reigning supreme

Oh? I recently implemented something with REQ/REP using pyzmq and then ported
it to gRPC. gRPC was an order of magnitude slower. Then I updated the ZeroMQ
code to do pipelining via ROUTER/DEALER, and that was even faster: by sending
pipelined batches of 100 items it can do 160k lookups/second. gRPC with
batching maxed out around 20k, I think.

Could have been protobuf that was the cause of the performance hit though.

~~~
Matthias247
gRPC is not, and almost certainly never will be, the fastest protocol for
small request/reply messages. The reason is the stream multiplexing layer it
requires: you almost certainly need to copy data from the connection's
receive buffer into a stream's receive buffer and then into the application,
and the opposite on the sending side.

If you don't have the stream multiplexing and just write complete request or
response packets to a connection (similar to Thrift) you save quite a lot of
overhead.

However, this multiplexing feature is also gRPC's biggest upside and
achievement, since it enables you to stream big requests or responses, not
only small packets. It enables multiple big streams (file uploads, etc.) in
parallel over a single connection without one blocking another. And of course
it enables flow-controlled bidirectional streaming IPC, which cannot be found
in many other systems.

~~~
justinsaccount
Well the underlying thing I am doing is small request/reply messages - I'm
doing metadata lookup for ip addresses. The way I sped things up with zeromq
was first by batching requests. Essentially, if I have 10k lookups to do,
instead of sending 1 at a time, I group them into blocks of 100 and send

    ' '.join(block)

Then I do all the lookups on the server and send a block of responses back.
This turns what would be 10k queries into only 100 rpc calls.
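
That chunking step can be sketched in a couple of lines of Python (a hedged
illustration; `make_batches` and its parameters are made up for this example,
not the poster's actual code):

```python
# Hedged sketch of the batching described above: group lookups into
# space-joined payloads so 10k queries become 100 RPC calls.
def make_batches(addresses, block_size=100):
    """Group addresses into space-joined payloads of block_size each."""
    return [
        " ".join(addresses[i:i + block_size])
        for i in range(0, len(addresses), block_size)
    ]

addresses = ["10.0.%d.%d" % (i // 256, i % 256) for i in range(10000)]
batches = make_batches(addresses)
print(len(batches))  # 10k individual lookups become 100 RPC payloads
```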

That got me to about 60k lookups a second locally, but over a wan link that
dropped down to 10k. I fixed that by implementing pipelining, using a method
similar to the one described under
[http://zguide.zeromq.org/page%3Aall#Transferring-Files](http://zguide.zeromq.org/page%3Aall#Transferring-Files),
where I keep the socket buffers busy by having 10 chunks in flight at all
times.

That got things to 160k/s locally and 100k+/sec even over a slow link.
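
A minimal sketch of that pipelining idea with pyzmq ROUTER/DEALER, keeping 10
batches in flight at once (the endpoint, batch contents, and all names are
illustrative assumptions, not the poster's code):

```python
import threading
import zmq

# Hedged sketch: a DEALER client keeps PIPELINE batch requests in flight
# against a ROUTER server, instead of waiting for each reply in turn.
ctx = zmq.Context.instance()
ENDPOINT = "inproc://lookups"   # illustrative endpoint
NUM_BATCHES = 50
PIPELINE = 10

router = ctx.socket(zmq.ROUTER)
router.bind(ENDPOINT)           # bind before the client connects

def server():
    for _ in range(NUM_BATCHES):
        ident, payload = router.recv_multipart()
        # "Look up" every address in the batch; reply with one joined block
        reply = " ".join("meta:" + a for a in payload.decode().split())
        router.send_multipart([ident, reply.encode()])

srv = threading.Thread(target=server)
srv.start()

client = ctx.socket(zmq.DEALER)
client.connect(ENDPOINT)
batches = ["10.0.0.%d 10.0.1.%d" % (i, i) for i in range(NUM_BATCHES)]

sent = in_flight = 0
replies = []
while len(replies) < NUM_BATCHES:
    # Top up the pipeline so the socket buffers stay busy
    while sent < NUM_BATCHES and in_flight < PIPELINE:
        client.send(batches[sent].encode())
        sent += 1
        in_flight += 1
    replies.append(client.recv().decode())
    in_flight -= 1

srv.join()
client.close()
router.close()
print(replies[0])  # replies come back in order over the single connection
```

Unlike REQ, a DEALER socket has no strict send/recv lockstep, which is what
makes overlapping multiple outstanding requests possible.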

I'll have to mess with grpc a bit more. Looking at my grpc branch it looks
like I tried using the request_iterator method first, then I tried a regular
function that used batching, but I didn't try using request_iterator with
batching. I think the biggest difference would be if request_iterator uses a
pipeline, or if it still only does one req/reply behind the scenes.

I'm sure one thing that doesn't help is that

      message LookupRequest {
        string address = 1;
      }
      message LookupRequestBatch {
        repeated LookupRequest requests = 1;
      }

ends up as a lot more overhead than doing ' '.join(batch)

------
pbhowmic
Who took over zmq after Hintjens' untimely death?

