
Nanomsg postmortem and other stories - profcalculus
http://sealedabstract.com/rants/nanomsg-postmortem-and-other-stories/
======
n00b101
I personally traveled to Slovakia to meet with Martin, Mato and Pieter, a few
years ago. I commissioned them to fix some annoyances I had with ZeroMQ.

I just want to say that they are all great people, they're very smart and have
spent a lot of time deeply thinking about messaging.

I was really hoping to use nanomsg at some point, so I'm a bit disappointed to
see all this. Really hoping that someone (Drew?) releases a new BSD licensed
library.

------
IshKebab
Yeah I dunno, the "solid technical reasons" are pretty unconvincing. The
reasons are:

1. Exceptions are awkward because they separate the error handling from the
context that is usually required to correctly handle an error (i.e. where it
occurred).

2. Exceptions are the only reasonable way for a constructor to fail. You
can't return error codes from a constructor.

I actually agree with the point about exceptions, and I'm glad Rust didn't use
them. That said, the solution is trivial (and the author admits it) - don't
use exceptions.

The second point is also actually a fairly good one, and again Rust has fixed
this: it uses static `new()` initialisers that create the object (or not, if
they fail). Fortunately we can copy this approach in C++. Here's an example
for a network connection object:

    
    
        #include <string>
        #include <boost/optional.hpp>
        
        // Hypothetical low-level call (stands in for whatever raw C
        // connect function the original snippet assumed):
        int rawConnect(const std::string& host, int* handle);
        
        class Connection
        {
        public:
            static boost::optional<Connection> connect(const std::string& host)
            {
                // Try to connect via the low-level API.
                int connectionHandle = -1;
                int rc = rawConnect(host, &connectionHandle);
        
                // If it fails, return none instead of throwing.
                if (rc < 0)
                    return boost::none;
        
                // Otherwise return the new object by value (not `new`,
                // which would return a pointer, not a Connection).
                return Connection(connectionHandle);
            }
        
            void send();
            void recv();
            void etc();
        
        private:
            explicit Connection(int connectionHandle)
                : mHandle(connectionHandle)
            {
            }
        
            int mHandle;
        };
    

Unfortunately as far as I can see there is no Rust-like `result<>` type in
Boost or the standard library yet, but it would be fairly easy to write your
own.

It seems pretty crazy to give up C++ for these fixable flaws. Especially given
how many unfixable flaws C has.

~~~
hendzen
Your connect method can still throw std::bad_alloc.

~~~
IshKebab
You could easily catch that within the method though. But in general I think
throwing an exception on memory allocation failure is much better than the
alternative, which is - in practice - assuming it succeeds and segfaulting.

------
zellyn
Very interesting read: worth the time.

See also: [http://hintjens.com/blog:112](http://hintjens.com/blog:112)

------
onli
Projects sometimes have to drive unbelievers out in order to work. Well,
partly: it does not matter if someone simply does not believe in one specific
aspect of the project and ignores it. That is fine. But when the group starts
to fight over what to do instead and no agreement can be reached, then a split
is necessary to let the people who remain keep working together.

Of course that can't work when the project has no working governance, when the
project does not even have a clear goal. When a project is only defending
against unbelievers without having a consensus of its own. And I think that is
mainly what he is describing here, though I'm not sure that is fully intended.

------
_pmf_
So, what is the spiritual successor of 0mq?

I think a lot of use cases that would have been covered by 0mq are now handled
by higher-level abstractions like consensus protocols or heavier-weight
message queues (which absolutely makes sense), but what would be a modern
(i.e. maintained) alternative for the simple "pub-sub via TCP" use case of
0mq?

~~~
PieterH
ZeroMQ is a community of projects that has grown and evolved significantly in
recent years. It was never threatened by Nanomsg, which we saw as an
interesting experiment, and potentially an engine for a pure C stack. We have
pure Java, C#, C, and C++ stacks, and a new pure Go stack.

Above that, you will see many language bindings, especially PyZMQ and CZMQ,
each with a large community on top. CZMQ provides things like actors and
CurveZMQ authentication, and has wrappings in many languages. I'm writing a
Node.js one at the moment, and did a Java one a few weeks ago.

Above that, we have Zyre, a clustering library that is somewhat like Nano's
bus pattern, yet rather stronger. It was designed for flaky WiFi networks, and
does not lose messages as long as a node reconnects within a reasonable time.

And above that, we have a message broker, Malamute, which does pub-sub and
workload distribution.

All the C libraries have rich packaging (builds for every conceivable
platform, and bindings in a growing number of languages), provided by
zproject. So you can start a Malamute server (a CZMQ actor) from Python, or
Node.js, trivially.

We have run Malamute on a $25 OpenWRT router. This stack is small, efficient,
and very alive.

------
seiji
nanomsg was an interesting successor to the bloated/C++/LGPL zmq, but it
succumbed to old ways that don't work anymore.

It's 2016 and you can't run an (intended) global-scale open source project by
just being "code nerd in chief." If you can't run an open feedback-driven
community _and_ if you can't be chief architect (also means communicating
plans constantly, not just 'do whatever you want') _and_ if you don't possess
technical and professional excellence (also means being responsible with
security issues and timely resolving of user issues), then you don't actually
have an "open source project," you have a private uploaded code repository
other people can see publicly.

Gone are the days of 1995 when you could live on an island and upload your new
code once per year and everybody would leave you alone and praise you in
computing magazines for being a genius. Now you'll have 1,000 issues on GitHub
and requests for public appearances and proposals for changes and important
security flaws to fix and requests for extensive communication about current
designs and upcoming features.

The whole "i'm going to do this all on my own, everybody else go away it's my
code" doesn't work anymore. Those projects are now destined to fail on their
own without open and scalable community leadership. The good projects "get it"
and the old guard are toppling as we speak. In another two years, nobody will
trust open source projects without a stated scalable community model and
codified successorship plan.

~~~
sovande
99.99% of open source projects do not and cannot meet these expectations, and
it is out of touch with reality to assume they can. Most successful open
source projects (apart from the well-known 0.1%) are run by 1-2 maintainers,
and contributions from the so-called community amount to a few percent of the
total work at best; look at the contributor graphs of popular GitHub projects.

Unless projects are able to bootstrap themselves into a business (WordPress)
or are eventually supported by a commercial entity (LLVM), they will be
fireflies that shine for a short while, until the creator/maintainer does a
reality check and realises how much time and work he has put in with nothing
much in return except a mile-long issue list on GitHub, lots of unfriendly
noise and arguments from the community, and insane expectations from users
like yourself.

~~~
seiji
_a mile long issue list on github,_

If you have enough users to have "too many issues to handle," then you're big
enough that you need an official governance and contribution structure. You're
also big enough that you'll have a pool of high-quality contributors to
promote to project-level-oversight status.

Nobody is saying "do all this work for free forever," but people _are_ saying:
"You have too much work you're avoiding. Why refuse to let other people help?"
Once you reach a million users this isn't your private software anymore, it
belongs to the community and you need to step up and allow faster development
than one person can handle alone.

~~~
sovande
> If you have enough users to have "too many issues to handle," then you're
> big enough that you need an official governance and contribution structure

I'm sure e.g. Ruby on Rails and Bootstrap would like some pointers on how to
set up "an official governance and contribution structure" to handle their
long issue and PR lists.

> Why refuse to let other people help?

Very few people, if any, want to use their free time to work through an issue
list like that. Someone might do it once and write a blog post about it, but
unless someone does it all the time it does not help much.

Open source might be perceived as a free commodity, but, like a public toilet,
no one likes to clean it up, especially for free. Likewise, help in the form
of PRs doesn't help much either unless it is aligned with the overall vision
of the project. More often than not, it isn't.

------
CrLf
What I don't really understand is what's fundamentally wrong with ZeroMQ to
warrant such forks. The project has a large user base and is very established.
What's unfixable about it?

It was pretty obvious to me that nanomsg had very little chance to thrive. To
an outsider, what's so radically different about _any_ ZeroMQ competitor?

~~~
teraflop
The nanomsg documentation (linked from the article) lists a number of
fundamental differences between it and ZeroMQ. Some of them are implementation
details, but there are quite a few fundamental design/API changes that would
be impossible to make in an existing library.

[http://nanomsg.org/documentation-zeromq.html](http://nanomsg.org/documentation-zeromq.html)

------
baq
once again i see the "no matter what they tell you, it's a people problem"
issue. very nice analysis of how to reach this conclusion from a very
technical position.

------
skybrian
It seems like for side projects that don't have an active community, the only
viable approach is to set expectations right up front that it's abandonware.
That is, write it, release it, and walk away. No future implied, unless
someone volunteers to turn it into a real project.

Example: this is what most researchers do - they don't maintain the code after
the paper is published.

------
shoover
This piece is excellent. The backstory on the zeromq and nanomsg projects is
fascinating and generously detailed. The trick of reaching out to gather
rejected contributors is both humane and practically clever. The rule about
monitoring contributor entrance and exit rates, I get.

But I'm struggling with the implementation of the rule.

 _In almost every case, it is better to merge even a bad patch than to turn
away a contributor for the projects I already struggle to maintain. So I try
to get the patches improved, but I merge them even when I can’t. Even bad
patches are better than none._

With all due respect to the OP and everyone who has poured blood, sweat, and
tears into these projects, and to maintainers of others, this point is hard to
swallow. How bad of a patch is OK to let through? How complex? How much of a
deviation from the original design? Is the contributor going to be around to
fix regressions impacting other users? If the patch is complex and takes the
software in a different direction, is it a good direction for existing users,
and will people get on board to do the work to update the rest of the code? I
think these questions are vitally important for the maintainability,
correctness, and performance of software in general. I would guess hashing
them out is a significant portion of the actual work in open source projects,
and is why people fight on mailing lists and say no to patches in the first
place. We fight because we care?

Working out the answers to these questions is so, so much work, and people
just want their patches taken and maintainers just want to get on with their
lives, so taking anything that comes along as a means of keeping the project
moving may be the only hope in some cases. But I find it hard to believe it
doesn't backfire more often, on grounds of user support and principles of
software maintainability.

To name a couple projects that have had some success with different
approaches, I've noticed the Clojure project seems to maintain a vibrant
contributor rate while also exercising strict design control from the top.
Project leadership takes heat for rejecting, in particular, feature requests,
but from what I've seen they will work with you if you will take the time to
work out your design in a way that fits the language and the maintainers'
vision.

Secondly, the Waf maintainer strictly will not merge patches that change APIs,
break anything for users, or do much of anything wonky internally, but from
what I've seen he will work with anyone who introduces new functionality that
doesn't break old code and is in line with his view of the project. He worked
with me to whip a shoddy patch into shape. I was grateful he bothered at all,
and it would have been disastrous if he'd just merged it. Somehow he walks the
balance of rejecting bad patches and protecting users while also merging quite
a few hit-and-run contributions. The maturity of the project and decoupled
architecture finely honed for the problem space may help support this style of
maintenance.

So at least in those two cases the answer lies in a very skilled balance of
engineering and social concerns, but not necessarily any patch is better than
none.

~~~
shoover
ZeroMQ's Collective Code Construction Contract [1] goes into a lot of my
question areas in concrete detail. It puts fair burden on both contributors
and maintainers and clearly delineates the roles. Relevant to my post is the
focus on identified and agreed problems and the requirements that contributors
use the issue tracker and work to consensus on the validity of their
observations and solution.

[1] [http://rfc.zeromq.org/spec:22](http://rfc.zeromq.org/spec:22)

------
zokier
So where does Crossroads fit in all this?

~~~
lobster_johnson
Crossroads was Sustrik's first attempt, after ZeroMQ, which he then abandoned
(or renamed?) to start Nanomsg.

------
vegabook
I wanted to use nanomsg 6 months ago. Googling at the time suggested it was
simply a better zmq. I had to _really_ read between the lines to figure out
that it was losing momentum. It was not easy to sense.

I'm glad this forthright piece puts nanomsg incontrovertibly out to pasture.
It's thanks to clear pieces like this that people are able to navigate the
open source world and choose technologies which, even if they may not be
theoretically the best, have the much more important characteristic of being
long-lived and alive.

