if (s->debug & 0x01) sleep(1);
/*#define PKT_DEBUG 1 */
If so, it might be a defense against timing attacks.
This crummy Sleep() implementation has some nice effects on programmers. Those who like to solve problems with lots of copy/paste code are forced to think about using proper synchronization primitives when running high-resolution loops that wait for events, or their code just won't run very fast.
Sleep() on Windows takes milliseconds.
sleep() on *nix takes seconds.
VOID WINAPI Sleep(
  __in DWORD dwMilliseconds
);

unsigned int sleep(unsigned int seconds);
Timers are hard to get right. Tread warily, programmers! This is one of those areas where it is good to understand some things about the computer hardware behind the software.
EDIT: I should add that the high frequency timer is not a panacea either. It will work for you most of the time, but there are two circumstances that will occasionally trip you:
(1) At least in Windows XP and 2000, there is a KB article (I do not remember the number now) explaining that for a certain small set of manufacturers, if there is a sudden spike in load on the PCI bus, the high-frequency timer will be jumped ahead to account for the time lost during that laggy spike. This correction is not accurate. This means that if your initial timestamp is X, and you are waiting for it to reach X+Y, wall-clock time may still be between X and X+Y, but Windows itself has altered the timestamp to X+Y+Z, and your software thinks the time has elapsed. I personally experienced this bug.
(2) You actually have more than one high-frequency timer -- one for each CPU on your system. Once you start running on a system with multiple CPUs, how do you guarantee that the API is giving you a timestamp from the same timer each time? I remember there may have been a way to choose if you dropped to assembly to make the query, but the API at the time did not support a choice. The timer starts counting at power-up, so if one CPU powers up after the other, you will have a timestamp skew between them. Some high-frequency timing algorithms attempt to correct for this skew; I do not remember all the details now.
"The issue has two components: rate of tick and whether all cores (processors) have identical values in their time-keeping registers. There is no promise that the timestamp counters of multiple CPUs on a single motherboard will be synchronized. In such cases, programmers can only get reliable results by locking their code to a single CPU."
The entry also mentions that hibernation can affect the counters. I wonder if power savings implementations that speed up or slow the CPU could also have an effect.
On WinAPI, Sleep is denominated in milliseconds.
On BSD, sleep(3) is a library wrapper around nanosleep(2).
Linux's man pages make no mention of the magic number "1" as a "sleep 1 timeslice" shortcut; also, older Linux man pages warn that sleep(3) can be implemented in terms of alarm(2), which is used all over POSIX as an I/O timeout and would blow up the world if it alarmed in milliseconds.
If you want to sleep "as short as you can", sleep for 0 seconds, or make any other system call that yields your process back to the scheduler.
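Concretely, that could look like either of these (my sketch; sched_yield is the explicit POSIX spelling of "give up my timeslice"):

```c
#include <sched.h>
#include <unistd.h>

/* Two ways to hand control back to the scheduler without committing
 * to a full one-second sleep. Neither guarantees how long you'll be
 * off the CPU -- that is the scheduler's call. */
static void give_up_timeslice(void)
{
    sleep(0);       /* "sleep as short as you can"; may return immediately */
    sched_yield();  /* explicitly move this thread behind its peers */
}
```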
But that's usleep not sleep, which is the inaccuracy I was admitting to in the first place.
Sorry to pile on you, though.
When you do sleep, depending on the hardware, the OS, the configuration, the kernel flags, etc. the minimum you actually get is around 38.
But that varies.
People, it's right there in the man pages.
Are you maybe thinking about WinAPI's Sleep? That's ms-denominated. It would make sense that attempting to sleep for 1 millisecond wouldn't work, and would build in the time for the scheduler and the timeslices for every other process. We're talking about OpenSSL and POSIX sleep(3).
Please discard my above comment, everyone.
if they wanted noisy sleep, it should be something like
You have two hashes and want to see if they're equal. The naive approach is to iterate over each byte in both hashes and compare them, then break when you find a byte that doesn't match. That approach, however, could be vulnerable to a timing attack because you could potentially measure how many times it iterates. An implementation that's resistant to timing attacks could XOR each byte of each hash and accumulate across them; if that accumulator is zero at the end of the loop, it's equal. That approach is constant time, rather than being dependent on the data you're dealing with.
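The XOR-and-accumulate approach described above can be sketched in a few lines (my illustration of the technique, not OpenSSL's actual comparison routine):

```c
#include <stddef.h>

/* Constant-time comparison: XOR each byte pair and OR the result into
 * an accumulator. The loop always runs over every byte, so the running
 * time does not depend on where the first mismatch occurs.
 * Returns 1 if the two buffers are equal, 0 otherwise. */
static int hash_equal_ct(const unsigned char *a, const unsigned char *b,
                         size_t len)
{
    unsigned char acc = 0;
    for (size_t i = 0; i < len; i++)
        acc |= a[i] ^ b[i];
    return acc == 0;
}
```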
Incidentally, the primary problem here is not the mere presence of a debug flag that governs a sleep, it's the fact that PySSL_SSLdo_handshake sets that debug flag. Right?
In other words, it's not a bug in OpenSSL itself, but rather the Python wrapper for OpenSSL. That's how I understand it.
Or was this not noticed because all the major frameworks like cherrypy and twisted are still using the pyopenssl wrapper?
Is there any evidence that this bugfix actually changes the performance?
> it is actually more reliable to sleep than to block. by definition blocking is unreliable because you don't know exactly when it will unblock.
A block ends when the NIC can handle more data. You can't just wait a second and assume the NIC can handle the data; that's where the "unreliable" part comes in. You assume it can handle the data, but you are not checking. And the way to check is by polling, blocking, or receiving a signal. Waiting is not a way to check.
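To make that concrete (my sketch, not something from the thread): instead of sleeping and hoping, you block in poll(2) on POLLOUT, which returns exactly when the kernel can take more data for that descriptor:

```c
#include <poll.h>
#include <unistd.h>

/* Wait until `fd` can accept more data, or until timeout_ms elapses.
 * Returns 1 if writable, 0 on timeout, -1 on error. Unlike a sleep,
 * this wakes up precisely when the condition you care about is true. */
static int wait_writable(int fd, int timeout_ms)
{
    struct pollfd p = { .fd = fd, .events = POLLOUT };
    int r = poll(&p, 1, timeout_ms);
    if (r < 0)
        return -1;
    if (r == 0)
        return 0;
    return (p.revents & POLLOUT) ? 1 : -1;
}
```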
> You do know when a sleep will end though.
It makes no difference that you know when the sleep will end. It's irrelevant -- all you care about is whether the NIC can accept more data or not.
> I also want a variable delay between writes.
If you want variable delays then do that, but that has nothing whatsoever to do with making sure the nic doesn't lose data.