> I have done multiple tests and then taken an average; I don't know how reliable these results are (to get more plausible statistics we should perhaps have allocated much more memory).
I just wanted to note down the numbers I got from timing; as I said, I mainly wanted to remark how interesting it is that the two mechanisms are used complementarily, and I had fun implementing them.
This article is very difficult to follow. It seems to be timing the difference between mach_msgs and System V semaphores. It does not seem to be comparing shared memory vs message passing.
"Benchmarking OS X semaphores vs message passing".
Shared memory + spinlocks should be much faster than shared memory + semaphores, if implemented properly (e.g. spin with a timeout; on timeout queue another message, then spin again and try to copy all the elements, and so on).
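For what it's worth, here is a minimal sketch of that spin-then-fall-back idea. The names and the SPIN_LIMIT bound are made up, and the lock word is assumed to live in a MAP_SHARED mapping (initialized to the clear state) visible to both processes:

    /* Sketch only: spin briefly on a lock word in a shared segment,
     * fall back to a slower signalled path if the spin times out. */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define SPIN_LIMIT 10000          /* arbitrary bound before giving up */

    typedef struct {
        atomic_flag lock;             /* lock word shared between processes */
        /* ... ring buffer of elements would live here too ... */
    } shared_region_t;

    /* Returns true if the lock was acquired by spinning, false on timeout;
     * on false the caller would queue a message / block instead. */
    static bool try_spin_lock(shared_region_t *r)
    {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (!atomic_flag_test_and_set_explicit(&r->lock,
                                                   memory_order_acquire))
                return true;          /* got the lock without a syscall */
        }
        return false;
    }

    static void spin_unlock(shared_region_t *r)
    {
        atomic_flag_clear_explicit(&r->lock, memory_order_release);
    }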
Speaking as a benchmarking guy, this is pretty basic. You might take a look at lmbench; it's open source, and you could use the timing infrastructure it provides to get much better results. Reading that code would be educational (and it's very small).
That is rude. You could say "I don't consider this benchmarking; benchmarking would be XXX", etc. Just saying "you failed" says more about you than about this article (which does need work).
Sorry, what kind of work would it need? Anyway, I can understand this may not seem like real benchmarking; while I was writing it, the benchmarking eventually became almost an excuse. The original purpose was to verify my professor's thesis and see whether it was really right or not. My intention was never really to talk about benchmarking or shared memory as such (though I tried to cover both).
First of all, your article was fine: you had a question, you thought about it a little and got some data, and wrote it up. More people should do this.
As an article it's not particularly clear in method or presentation: what it's measuring, what the process was, or how variables were isolated. But it's not attempting to be a journal article, so that's not what I meant by "needs work" (though it is hard to pull the message out of the wording).
But I don't think it supports your thesis. Again, that's part of what the web is about (not every posting has to be a well-polished pearl), so I'm glad you posted this. If you care, though, I'd improve your process. This post might not be worth rewriting, though; that's up to you.
Yes, I got what you meant, and I know that some parts, especially those covering the measuring, should perhaps be investigated more deeply. I'll outline the objectives better next time.
Indeed, I'm not merely talking about shared memory; you can find plenty about that anywhere. If you had really taken the time to read the article, you would have figured that out.
It looks like what you are actually timing is the time it takes to start a process; the time for either communication method should be on the order of microseconds or less.
For better results, start the two processes first, then measure how long it takes to send a message from process A to process B and back to A.
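A rough sketch of that kind of ping-pong measurement, using plain pipes and gettimeofday for simplicity (the iteration count is arbitrary, error handling is omitted, and a real comparison would swap the read/write pair for mach_msg or semaphore operations):

    /* Start both processes first, then time a byte going A -> B -> A. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define ITERS 100000

    int main(void)
    {
        int a2b[2], b2a[2];
        char byte = 'x';

        if (pipe(a2b) < 0 || pipe(b2a) < 0) { perror("pipe"); exit(1); }

        if (fork() == 0) {                  /* process B: echo every byte */
            close(a2b[1]);
            close(b2a[0]);
            while (read(a2b[0], &byte, 1) == 1)
                write(b2a[1], &byte, 1);
            _exit(0);
        }

        struct timeval t0, t1;              /* process A: ping-pong loop */
        gettimeofday(&t0, NULL);
        for (int i = 0; i < ITERS; i++) {
            write(a2b[1], &byte, 1);
            read(b2a[0], &byte, 1);
        }
        gettimeofday(&t1, NULL);

        double usec = (t1.tv_sec - t0.tv_sec) * 1e6
                    + (t1.tv_usec - t0.tv_usec);
        printf("avg round trip: %.2f us\n", usec / ITERS);
        return 0;
    }

Dividing the total by the iteration count keeps process startup and any one-time setup out of the per-message figure, which is the point being made above.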
Why does a graph define a "benchmark"? A table of data is equivalent to a graph. He only has four data points anyway, and the deltas are so small that a graph would be deceptive.
Yep. Nothing to see here.