No, they're not. There are Apple-published UI guidelines, there are decades of User Interface research, and there are time-motion studies to back what I'm saying.
FWIW, I've been paid thousands of dollars as an outside consultant just to look at an app and provide my initial feedback. If it were just whimsical, subjective musings, I can assure you that would not be the case.
I believe you are testing requests / sec and not replies / sec. While requests / sec matters a bit, you'll most probably be bottlenecking on your context switches. What matters is replies / sec: it's a more accurate measurement of your server's performance rather than of OS bottlenecks. I'd use httperf for this.
Request rate is not influenced much by the polling mechanism (epoll/kqueue/poll/select) when all you're doing is listening on one fd, processing new connections, and then closing them immediately. These multiplexers matter when you are working with lots of file descriptors.
It matters because select will have to iterate over all the file descriptors you passed to it, while kqueue, for example, has knotes registered to it: when it wakes up it won't need to iterate over everything, just the knotes it got. Not to mention the data copying between userland and the kernel in the case of select/poll.
Back to the request-rate limitations: the limit comes mainly from the number of system calls you can execute per second (accept() in this case), which is heavily influenced by context switches.
Looking at the application CPU gives you a very good idea of where your bottleneck is. If your CPU is at 100%, it's clear your application is hitting its limit; but if the application CPU is at 30% and you cannot process more requests / sec, then you've hit a system limit.
Right, there is indeed a mode switch to read/write from/to the TCP buffer or accept a connection. Luckily, mode switches are not context switches (which are typically massively more expensive), as only one thread is ever involved.
Hellepoll uses 'epoll', which is the Linux equivalent of kqueue. Kqueue is said to be marginally faster still, and I look forward to Hellepoll using kqueue when running on FreeBSD. Epoll/kqueue absolutely affect accept rate, incidentally, so it's an exercise for the reader to work out why ;)
Hellepoll does use the fanciest features of epoll, like 'edge triggering', which is perhaps one of the things making it nudge ahead of Java's NIO-based webservers (as NIO is lowest-common-denominator and lacks ET).
Finally, Hellepoll is really writing meaningful bytes, and it even flushes them on keep-alive connections (obviously: think about how it would ever work otherwise).
So I think, on balance, Hellepoll is the real thing and not just measuring how quickly a big backlog on a listening socket can fill up ;)
On Linux, I've found the new 'perf timechart' a lot of fun.
Network I/O, disk I/O, scheduling, locks, etc. all trigger context switches, not only a mode switch, because only the kernel is allowed to manipulate the data structures related to mbufs, the VFS, and whatnot.
When I said epoll/kqueue doesn't affect the accept rate, I was replying to a specific request to write a program that just does accept/reply/shutdown. In that case you are passing one fd to the poller, so it won't matter much which poller you are using.
My original comment is that you are measuring the wrong thing; it's nothing against hellepoll. Requests / sec can be much higher than replies / sec because requests get buffered while waiting for you to accept them, and a request is counted once accepted. What matters, though, especially to the HTTP client, is how long it takes to serve the connection from start to finish: hence the reply rate.
FYI, you can use libev for poller portability.
You can experiment with this command and look at the reply rate.
I didn't say there is no distinction. I said that system calls which require disk I/O or network I/O, or which trigger locking/sleeping, require a context switch. Calls like read/write/accept trigger context switches (which start with a mode switch). The switch is required for the kernel to execute the system call and operate on its own data structures; only the kernel can alter mbufs in this case.
Go read your OS book.
I am not trolling. You just lack experience and this sounds new to you.
I totally agree that SYSCALL/SYSENTER/SYSRET are very cheap to execute. But these instructions only take care of the ring switch and are not executed alone.
When you make a system call, a trap is issued that causes the hardware switch to kernel mode.
The hardware pushes the PC and status word onto the per-process kernel stack, and the kernel code takes care of saving the registers, ESP, etc. This is called a task context switch. Context switching between processes is much more expensive, but a task context switch is still considered a context switch.
When you make a system call, it's still much more expensive than most of the work you are doing in a program like hellepoll, and hence it's your bottleneck. This is why you don't see your process's CPU at 100%.
On a related note, whenever you have a program doing a lot of network I/O, you are essentially causing a lot of process context switches: each time data arrives on the wire, the kernel has to handle the hardware interrupt, which causes a context switch.
So we only disagree on edge-case terminology. I think you're trying to worm out of your misclassification, but no worries.
Yes, to get these numbers I have had to minimise syscalls. That's the advantage of hellepoll. I wrote the HTTP server just to release it; before, it was an RTMP server, but that was commercial and couldn't be released. It had user-side write buffers too, which helped even more.
And I understand what ab and httperf test, and yes, I am counting served pages.
Finally, I spend a lot of time staring at Linux perf reports and timecharts.
The quality of service and network also varies widely. Hetzner are awesome for the cost, don't get me wrong, but at times their network is terrible and their service seems to vary hugely.
You don't get these problems with Softlayer: you pay significantly more and get significantly better, almost-guaranteed service.
If you're looking for tier-1 network traffic, 100tb.com is with Softlayer, and since they have so many servers, they get massive discounts and can give you 100TB of Softlayer traffic relatively cheaply.