Sockets are just as portable, more so on UNIX descendants where one can rely on relatively consistent socket APIs. Beyond that, almost every single language and runtime (Python, Ruby, Java, OCaml ...) provides a portable socket API.
> message framing
Length-prefixed message framing winds up being 10-100 lines of code in almost any language/environment.
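For concreteness, here's a minimal blocking sketch of that framing in Python (names like `send_msg`/`recv_msg` are illustrative, not any library's API):

```python
import socket
import struct

def send_msg(sock, payload: bytes) -> None:
    # 4-byte big-endian length prefix, then the payload.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    # recv() may return fewer bytes than requested, so loop until
    # we have exactly n bytes.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)
```

The `recv_exact` loop is the part people forget: even the 4-byte prefix can arrive split across reads.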
> super fast asynchronous I/O
Sockets have this.
Sockets have buffers. The OS can use those buffers to implement flow control. This isn't the same as queueing, but the truth is that you rarely want blind background queueing of an indefinite number of messages that may or may not be delivered.
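That OS-level backpressure is easy to observe directly: with a non-blocking socket pair, once the receiver stops reading, the kernel send buffer fills and `send()` fails instead of queueing unboundedly (a sketch; buffer sizes are OS-dependent):

```python
import socket

a, b = socket.socketpair()
a.setblocking(False)

# b never reads, so the kernel buffers fill up and a non-blocking
# send() eventually raises EWOULDBLOCK -- a blocking socket would
# simply stall here instead. Either way, no unbounded queueing.
sent = 0
hit_backpressure = False
try:
    while True:
        sent += a.send(b"x" * 65536)
except BlockingIOError:
    hit_backpressure = True
```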
> support for every bloody language anyone cares about
Just like sockets.
> huge community
I don't think you can get 'huger' than the community around sockets.
> price tag of zero
Seeing as socket libraries ship with everything, does that mean they have a time/resource cost of less than zero?
> mind-blowing performance
> protection from memory overflows
This has essentially nothing to do with a networking library. Plenty of environments have safe/efficient zero-copy chained byte buffer implementations/libraries.
> loads of internal consistency checks
Library correctness isn't a unique feature.
> patterns like pub/sub and request/reply, batching
Ah-ha! Here finally we get to the meat of it!
If you need QUEUES, including pub-sub, fanout, or any other QUEUE-based messaging structure, then 0MQ is better than sockets!
> and seamless support for inter-thread transport as well as TCP and multicast
Inter-thread transport of already-serialized messages at the transport protocol layer doesn't make a ton of sense from an efficiency perspective.
> ZEROMQ IS JUST SOCKETS
No, 0MQ is a lightweight network message queue protocol. It's not competing with sockets.
You've got to be kidding me. The BSD socket API is only "portable" for basic things. Do any kind of advanced thing and you will notice the limitations of the "portability".
Want to write an evented server that handles a large number of sockets? Choose your favorite platform-specific API: epoll, kqueue, whatever Solaris is using, etc.
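Which is why portable evented code ends up going through an abstraction library anyway. Python's stdlib `selectors` module is one example: it does the per-platform work behind a single API.

```python
import selectors
import socket

# DefaultSelector resolves to epoll on Linux, kqueue on BSD/macOS,
# and select on Windows -- the per-platform #ifdef work lives in
# the library, not in the socket API itself.
sel = selectors.DefaultSelector()

a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")

received = None
for key, _ in sel.select(timeout=1):
    received = key.fileobj.recv(1024)
```

The point stands either way: the "portability" comes from a layer above sockets, whether that layer is `selectors`, libevent, or 0MQ.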
Error handling? Each platform behaves in a subtly different manner. See http://stackoverflow.com/questions/2974021/what-does-econnre... for an example.
Windows support? I hope you don't mind the #ifdefs and typedefs in code. The WinSock API is still OKish... it doesn't differ from the basic BSD socket API too much. But good luck trying to handle more than 1024 sockets in a non-blocking/async manner. I hope select() on Windows serves you well.
> Length-prefixed message framing winds up being 10-100 lines of code in almost any language/environment.
Only if you're writing blocking code. If your code is evented, good luck writing 2-3 times more code. Oh, and don't you dare get that code wrong and introduce bugs. And of course you have to write this code every single time. And you didn't forget to unit test all that, did you?
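For comparison, here's roughly what just the decoding half looks like once it's evented: an incremental parser that has to tolerate frames split at any byte boundary, including inside the length prefix itself (a sketch, not 0MQ's implementation):

```python
import struct

class FrameDecoder:
    """Incrementally decode 4-byte length-prefixed frames from an
    arbitrary stream of chunks, as an evented read callback must."""

    def __init__(self):
        self._buf = b""

    def feed(self, chunk: bytes):
        # Called with whatever bytes the event loop hands us;
        # returns zero or more complete messages.
        self._buf += chunk
        msgs = []
        while True:
            if len(self._buf) < 4:
                break  # the length prefix itself may arrive split
            (length,) = struct.unpack("!I", self._buf[:4])
            if len(self._buf) < 4 + length:
                break  # body not complete yet
            msgs.append(self._buf[4:4 + length])
            self._buf = self._buf[4 + length:]
        return msgs
```

And this still handles none of the write-side buffering, partial sends, or error paths.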
That the contour of the API differs slightly means nothing. An example of true incompatibility would be, say, supporting UNIX-style mounts on Windows. If you wanted to support that cross-platform, either you or a library would have to directly implement the semantics of UNIX mounts, as opposed to just making a shim over what the OS already provides.
What? It means everything if you have to learn socket intricacies at different levels of abstraction on a per-platform basis. That is not what most people mean when they say an API or library is portable.
What about what the parent said: use a multi-platform technology like Java, Python, Ruby, etc.?
The only plausible reason you could disregard the magic of using the same semantics for secure internet messaging and inter-thread messaging is that you don't write applications.
We're doing a simulation using it and we've ended up with a bunch of pub/subs instead of a shared bus.
It's not terrible, but is a little weird.
Various queue control methods are available in TCP/IP also.
UDP is the only way to get decent performance if your application is designed around it. Why bother resending data if it is too old now to be useful, or if the data arrived via another route? TCP often has more variable latency than UDP, which is the main performance killer for certain types of apps. zeromq isn't multipath aware either.
Multicast is a real pain to deal with in your network infrastructure unless you only want it on one subnet, which restricts you just as much as traditional broadcast.
Multipath is something you either leave to the routers, handle with a protocol such as SCTP (or hope multipath TCP comes to your OS in the near future), or manage in the application.
It's unclear what you mean by "server to many"; in this context it sounds like what you'd use a zmq socket to fan out messages for.
The queue mechanisms in TCP/IP are for the transport layer, not for implementing application policy.
I came to think of 0MQ as a multi-point data link abstraction (i.e. layer 2). The API abstracts over sockets, IPC message queues, and in-process message queues as the virtual layer-1 transports.
I say it is a layer-2 abstraction because 0MQ doesn't provide any mechanism for addressing or transparently routing over an internet of connected 0MQ networks. You can do source-based routing by explicitly naming the intermediate hops, but this is more like intelligent layer-2 bridging than traditional layer-3 routing. There is no concept of a layer-3 address or naming scheme of any kind, nor are there any important layer-4 features (re-transmissions, flow control, out-of-order resequencing).
The message structure and fan-in/fan-out features are very useful but they are operating at a layer-2 level, not a 3, 4, 5, etc. layer, from my perspective.
It has been about a year since I spent time with 0MQ so perhaps it has evolved beyond what I experienced.
Over the years, I've worked on several different systems that wrote those "10-100" lines of message framing from scratch, and had to fix subtle, hard-to-track-down bugs in them. It's a conceptually simple thing that's very easy to get subtly wrong in edge cases. Edge cases that are difficult or impossible to reproduce in development systems, but that do happen in production once you're running high volumes. An example is a socket pausing in the middle of sending the multi-byte length prefix, and needing to fiddle with certain socket parameters that control heartbeat and other minutiae.
It's certainly simple to write something that works well, but just as simple to write something that works well in testing yet fails in subtle ways under certain kinds of circumstances.
Your application protocol needs to handle timeouts (some sort of retry, preferably with some notion of idempotency).
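A toy sketch of the idempotency half: the client attaches a request id, and the receiver caches replies, so a retry after a lost reply replays the cached result instead of re-running the operation (all names here are illustrative, not any library's API):

```python
import uuid

class IdempotentServer:
    """Toy receiver that deduplicates retried requests by id, so a
    client can safely resend after a timeout without the operation
    executing twice."""

    def __init__(self):
        self._seen = {}
        self.executions = 0

    def handle(self, request_id, payload):
        if request_id in self._seen:
            return self._seen[request_id]  # replay cached reply
        self.executions += 1
        reply = payload.upper()  # stand-in for the real operation
        self._seen[request_id] = reply
        return reply

server = IdempotentServer()
rid = str(uuid.uuid4())
first = server.handle(rid, "hello")   # original request
second = server.handle(rid, "hello")  # retry after a lost reply
```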
The problem with:
> > Length-prefixed message framing winds up being 10-100 lines of code in almost any language/environment.
is that it's not really a response to the ZMQ feature. With ZMQ you don't have to reimplement it for every application and platform.
> Sockets are just as portable [...] almost every single language and runtime (Python, Ruby, Java, OCaml ...) provides a portable socket API.
It's not portability if you have to learn APIs at varying levels of abstraction for each language you need to port to.
That is work to do in every single language you wish to work in. You are acknowledging the value that ZeroMQ provides in not requiring you to carry out this work.
>> support for every bloody language anyone cares about
> Just like sockets.
Except with the aforementioned portability (APIs at a consistent level of abstraction), which is not something raw sockets provide.
>> loads of internal consistency checks
> Library correctness isn't a unique feature.
Not sure what that means. ZeroMQ's claimed value-add here is in providing correctness in various languages, that you would not get otherwise when using raw sockets.
>> huge community
> I don't think you can get 'huger' than the community around sockets.
People writing socket code in C don't go to SocketConf and meet people writing socket code in Python. They don't idle in #socket on IRC. They don't swap blog posts about the cool 'socket patterns' they wrote today. It's not a community.
>> and seamless support for inter-thread transport as well as TCP and multicast
> Inter-thread transport of already-serialized messages at the transport protocol layer doesn't make a ton of sense from an efficiency perspective.
It's a feature nonetheless. That you think it doesn't make sense 'from an efficiency perspective' does not invalidate that feature.
>> protection from memory overflows
> This has essentially nothing to do with a networking library. Plenty of environments have safe/efficient zero-copy chained byte buffer implementations/libraries.
But you don't get that with raw sockets in every environment, which is the point.
("A Web Server in 30 Lines of C")
before I clicked through. Salient quote:
"ØMQ Is Just Like BSD Sockets, But Better
The other essential ingredients of a creation myth are lies and deception. ØMQ is nothing at all like BSD sockets despite very insistent attempts from its early designers to make that. Yes, the API is vaguely socket-like. APIs are not the same as semantics. ØMQ patterns are weird and wonderful and delicate but they are not, and I'll repeat this, even marginally close to the BSD "stream of bytes from sender to recipient" pattern."
A serious explanation of ZeroMQ takes 500 pages and several weeks to read.
I hope this is not true. It doesn't speak well to ZeroMQ at all.
I have built applications that use both AMQP messaging with RabbitMQ and ZeroMQ for the parts where an MQ broker was not necessary or where it would impact performance.
The arguments swirling around ZeroMQ and sockets are the same ones that swirl around threading and higher abstractions. We now know that it is hard to write correct programs using threading unless you refrain from using locks and either have all state immutable or you use lock-free access techniques. There are many libraries (even for Android and iOS) that encapsulate threading with a task-oriented layer that communicates between tasks/threads using queues.
Sometimes you have to make a decision NOT to do something that you CAN do, because of the greater good of the work.
The 0MQ implementation is built on sockets. I could build the same features on top of sockets myself, but why would I want to when 0MQ already works well?
Side note: I'm interested in looking deeper into nanomsg http://nanomsg.org/index.html -- which is a rewrite of 0MQ by the original author.
All I want to be able to express when I send() is:
1. Whether it's a Broadcast, Unicast or Anycast message
2. Whether I'm sending globally, or targeting a subset of peers (subscribers to some filter).
Target       Method      ~0MQ socket type
Global       broadcast   -> ZMQ_PUSH
Global       unicast     -> ZMQ_PAIR
Global       anycast     -> ZMQ_DEALER
Subscribers  broadcast   -> ZMQ_PUB/SUB
Subscribers  unicast     -> ZMQ_REQ/REP
Subscribers  anycast     -> ZMQ_ROUTER
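That mapping could be written down directly as a lookup table in application code (socket types shown as plain strings here for illustration, not the pyzmq constants):

```python
# (target, method) -> approximate 0MQ socket type, per the table above
SOCKET_FOR = {
    ("global", "broadcast"): "ZMQ_PUSH",
    ("global", "unicast"): "ZMQ_PAIR",
    ("global", "anycast"): "ZMQ_DEALER",
    ("subscribers", "broadcast"): "ZMQ_PUB/SUB",
    ("subscribers", "unicast"): "ZMQ_REQ/REP",
    ("subscribers", "anycast"): "ZMQ_ROUTER",
}

def socket_type(target: str, method: str) -> str:
    """Pick the approximate socket type for a send() intent."""
    return SOCKET_FOR[(target, method)]
```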
When you send() you also need to express how exceptions are handled -- what happens when there are no peers, and what happens when their buffers overflow.
Martin Sustrik did a fine job when he designed those patterns because they are (so far) watertight containers for rather tricky semantics.
When I started out with 0MQ I felt sucked into a world of combinatorial explosion where I always have to think about what type of socket is at the other end of a connect()... it just doesn't feel right to me.
All the arguments in the article are about the fact that it does messaging really well with lots of features, but it doesn't have the queue part at all.
Sure, you can build that yourself on top of 0MQ, but I'd say that building the 'guaranteed delivery' part properly is the hard part of an MQ system.
Why are they using the Norwegian/Nordic letter Ø (pronounced almost like "uh" in English) in their name, a letter most people in the world can't type, if they want to get traction?
"The Ø in ØMQ is all about tradeoffs. On the one hand this strange name lowers ØMQ's visibility on Google and Twitter. On the other hand it annoys the heck out of some Danish folk who write us things like "ØMG røtfl", and "Ø is not a funny looking zero!" and "Rødgrød med Fløde!", which is apparently an insult that means "may your neighbours be the direct descendants of Grendel!" Seems like a fair trade.
Originally the zero in ØMQ was meant as "zero broker" and (as close to) "zero latency" (as possible). Since then, it has come to encompass different goals: zero administration, zero cost, zero waste. More generally, "zero" refers to the culture of minimalism that permeates the project. We add power by removing complexity rather than by exposing new functionality."
Believe it or not, even in the Nordic part of the world we do have empty sets, and we are able to distinguish them from Øs just fine :)
Anyway. If that's the underlying logic... How does communicating "empty set MQ" to your visitors work when you want to market the product with the name "zero MQ"? What do I search for when I remember "empty set MQ"?
To me this just doesn't add up. It looks like an attempt at doing "something cool" gone a bit off target.
Edit: The replies to my original comment here also seem to back up the point that this is not very clear or universal communication.
I believe all the socket types except REQ and REP do have incoming/outgoing queues. There's just no requirement for a broker to serve as an independent queue.
The name reminds me of Jero, the American-born enka singer (https://www.youtube.com/watch?v=ba9rKhVAz80)
I'm not entirely sure I personally could come up with a good use for zmq, though, unless I were going through tons of data from sources like social media or scientific experiments.
The latest release (version 4) has added a few new functions, so all of the bindings are undergoing some minor revision now to support the new calls.
Luckily it's a free world so you are more than welcome to take libzmq, strip out all the asserts, and launch your new improved fork! Why even debate this? Please remember us when you get rich and famous.