ZeroMQ gets rid of almost all this cost, so it lets applications approach full network speed - as zinxq said, 10Gb on a 10Gb network.
There is a very big difference between a test program at each end of a network and an application framework like ZeroMQ, which includes routing and queuing of messages based on the AMQP semantics of exchanges and queues. The real beauty of ZeroMQ is the way it lets developers build fast multithreaded applications without ever worrying about how to scale - everything works as asynchronous messages, between processes on one machine or processes distributed across wide networks.
It is possible to get about 2M kernel traversals per second, in one direction, per core. So with a perfectly scalable messaging product that does not bypass the kernel and TCP/IP stack, you can hope for up to 1M messages per second per core. Depending on the size of the message, you can thus saturate any network.
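A quick back-of-envelope check of that claim (the figures are the ones asserted above, not measurements; the two-traversals-per-message assumption and the decimal 10^9 bits are mine):

```python
# ~2M kernel traversals/sec/core is claimed above; assume each message
# costs two traversals (one syscall sending, one receiving).
traversals_per_sec = 2_000_000
msgs_per_sec_per_core = traversals_per_sec // 2   # 1,000,000

# Capacity of a 10Gb link, taking "10Gb" as a decimal 10 * 10^9 bits:
link_bytes_per_sec = 10 * 10**9 // 8              # 1,250,000,000 bytes/sec

# Message size at which one core's 1M msgs/sec fills the link:
saturating_msg_size = link_bytes_per_sec // msgs_per_sec_per_core
print(saturating_msg_size)  # 1250 bytes
```

So on those assumptions, a single core pushing ~1250-byte messages would already fill a 10Gb pipe, which is what "saturate any network" is getting at.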
Apart from ZeroMQ, no existing messaging product actually manages this. The very best closed-source products in this space manage 1M messages per second, which is already a big number.
To get to the stated goals, ZeroMQ will need to bypass TCP, bypass parts of the kernel, and probably move parts of its routing into silicon.
I can write a small program to stream 512-byte messages from one machine to another on a 10G network and get 2.5 million messages across too. They're also assuming the network connection is already set up and that the messages are fully formatted and ready to send. They also don't have any message acknowledgment or anything.
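A minimal sketch of the kind of raw-throughput test described above: stream fixed 512-byte messages over a plain TCP socket and count how many arrive in a fixed window. It deliberately has no framing, no acks, and the connection is set up in advance - exactly the simplifications the comment points out. (The loopback host, the 1-second window, and the buffer sizes are arbitrary choices for illustration.)

```python
import socket
import threading
import time

MSG_SIZE = 512
HOST = "127.0.0.1"  # loopback, so this measures the stack, not a real NIC

def run_throughput_test(duration=1.0):
    # Receiver side: accept one connection, count whole 512-byte messages.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((HOST, 0))  # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    received = [0]

    def receiver():
        conn, _ = server.accept()
        deadline = time.monotonic() + duration
        buf = bytearray()
        while time.monotonic() < deadline:
            data = conn.recv(65536)
            if not data:
                break
            buf += data
            whole = len(buf) // MSG_SIZE
            received[0] += whole
            del buf[: whole * MSG_SIZE]  # keep any partial tail
        conn.close()

    t = threading.Thread(target=receiver)
    t.start()

    # Sender side: one pre-built message, blasted until the deadline.
    sender = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sender.connect((HOST, port))
    msg = b"x" * MSG_SIZE
    deadline = time.monotonic() + duration
    try:
        while time.monotonic() < deadline:
            sender.sendall(msg)
    except ConnectionError:
        pass  # receiver closed first; we're done anyway
    sender.close()
    t.join()
    server.close()
    return received[0]

if __name__ == "__main__":
    print(f"{run_throughput_test():,} messages received in 1 second")
```

Even on loopback this kind of loop pushes a large message count, which is the point: the number measures the wire and the stack, not any messaging product on top of them.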
Please check my math:
(I was lax about 1000 vs. 1024, but the results are similar)
k = 1024
meg = 1024 x 1024 = 1048576
Gigabit network = 1048576 x 1000 = 1048576000
10 Gigabit network = 1048576000 x 10 = 10485760000
Convert from bits to bytes:
10485760000 / 8 = 1310720000 bytes per second
i.e., 1310720000 is the number of bytes that a 10G network can push per second. Note, that's raw - TCP overhead and such will eat some measurable portion.
How many 512-byte messages fit in that?
1310720000 / 512 = 2,560,000 (just like they said)
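The arithmetic above, re-run as code (keeping the same lax 1000-vs-1024 mixing the comment admits to):

```python
k = 1024
meg = k * k                       # 1048576
gigabit = meg * 1000              # 1048576000 bits/sec
ten_gigabit = gigabit * 10        # 10485760000 bits/sec
bytes_per_sec = ten_gigabit // 8  # 1310720000 bytes/sec, raw
msgs_per_sec = bytes_per_sec // 512
print(msgs_per_sec)  # 2560000 -- matches the claimed 2.5M
```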
I'm wondering if they actually tested this or just did the math. Because again, this has nothing to do with their product - it's just the raw throughput of a 10G network. And again, these numbers don't factor in the 40-byte TCP/IP headers per packet.
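A rough pass at what those headers cost, assuming one 512-byte message per packet and 40 bytes of TCP+IP headers (Ethernet framing would add more still):

```python
payload, headers = 512, 40
wire_bytes = payload + headers    # 552 bytes on the wire per message
bytes_per_sec = 1310720000        # the raw figure computed above
msgs_with_overhead = bytes_per_sec // wire_bytes
print(msgs_with_overhead)  # 2374492 -- ~7% below the headline 2560000
```

So header overhead alone shaves the theoretical ceiling by about 7%, before any real-world effects.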
So... if we made a web server out of their messaging infrastructure (let's say just a static web server that only serves 512-byte pages), we could get 2.5 million pages served per second?
Hey, come on, do you really believe you can go up and down the networking stack 2.5 million times in a second?