
A portable high-resolution timestamp in C++ - shin_lao
https://blogea.bureau14.fr/index.php/2014/06/a-portable-high-resolution-timestamp-in-c/
======
ChuckMcM
If you want to see what a rabbit hole looks like, parse this sentence from the
article:

 _" The beauty of this is that with a precise enough timer, you also solve the
multithreading issue because nothing ever happens at exactly the same time."_

In the first part the author describes a technique which narrows the window
for failure, then adds a fallacy which sounds good but isn't true.

This way of thinking (narrowing windows to the point where they are
probabilistically rare "enough") has been the source of many bugs.

Urs Hölzle (VP at Google) once said something I really liked: "At a
large enough scale, statistically impossible things happen every day." It is
painful to accept, but I've seen it in action.

~~~
codexon
I had the same feeling when people suggested using completely random UUIDs for
things like cookies and transactions.

~~~
X-Istence
This just comes down to probability. It is so improbable that the heat death
of the universe would probably happen first, yet it is something that can
worry...

------
jsnell
I don't understand the use of clock_getres() here. A second of POSIX monotonic
clock time means a second of real time; the clock resolution has no bearing on
that. It seems to me that on any system where the claimed CLOCK_MONOTONIC
resolution isn't 1 ns, this code will advance the clock at the wrong pace.

> This is not a hard task. Nothing we've done above requires more than reading
> the documentation carefully. Attention to details like this is what makes
> the difference between working and rock-solid software. #frencharrogance

Um, right...
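A minimal POSIX sketch of the point being made above: CLOCK_MONOTONIC already reports elapsed real time, and clock_getres() only describes the granularity of those readings, so converting a reading to nanoseconds needs no resolution-based scaling. The helper name `monotonic_ns` is mine:

```cpp
#include <cstdint>
#include <time.h>

// clock_gettime(CLOCK_MONOTONIC, ...) returns real elapsed time as
// seconds + nanoseconds. Flattening it to a nanosecond count is pure
// arithmetic; the clock's resolution never enters the conversion.
std::uint64_t monotonic_ns() {
    timespec ts{};
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return static_cast<std::uint64_t>(ts.tv_sec) * 1000000000ULL
         + static_cast<std::uint64_t>(ts.tv_nsec);
}
```

Two successive calls always satisfy `second >= first`, which is the monotonicity guarantee; scaling by the clock_getres() value, as the article appears to do, would make the result tick at the wrong rate whenever the reported resolution isn't 1 ns.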

------
gilgoomesh
This really looks like a problem with the Windows implementation of
std::chrono::steady_clock more than anything else. This clock has nanosecond
resolution on most *nix platforms (incl. Mac OS X) and offers the strictly
increasing guarantee required (without the possibility of discontinuities).

It seems like (in the long term) it would be better to push some
std::chrono::steady_clock Windows patches to libstdc++/libc++/MSVC and use
this instead of re-inventing the wheel.
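For reference, the standard-library route being suggested is a one-liner once the platform's steady_clock is trustworthy. The wrapper name `steady_ns` is mine; this is a sketch, not the article's implementation:

```cpp
#include <chrono>
#include <cstdint>

// Portable monotonic timestamp in nanoseconds via std::chrono::steady_clock.
// The standard guarantees steady_clock never goes backwards; its actual
// resolution is implementation-defined, which is exactly why fixing the MSVC
// implementation beats rolling your own clock on top of OS primitives.
std::uint64_t steady_ns() {
    auto since_epoch = std::chrono::steady_clock::now().time_since_epoch();
    return static_cast<std::uint64_t>(
        std::chrono::duration_cast<std::chrono::nanoseconds>(since_epoch)
            .count());
}
```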

~~~
stinos
Spot on:
[http://connect.microsoft.com/VisualStudio/feedback/details/7...](http://connect.microsoft.com/VisualStudio/feedback/details/753115/)
and
[https://connect.microsoft.com/VisualStudio/feedback/details/...](https://connect.microsoft.com/VisualStudio/feedback/details/719443/),
and the last response:

 _Hi,

Thanks for reporting this bug. We've fixed it, and the fix will be available
in the next major version of VC (i.e. after 2013).

steady_clock and high_resolution_clock are now synonyms powered by
QueryPerformanceCounter() converted to nanoseconds. QPC meets the Standard's
requirements for steadiness/monotonicity.

Additionally, the CRT's clock() has been reimplemented with QPC. While this
improves precision and conformance to the C Standard (as QPC is monotonic), we
are aware that this is not completely conformant. Our CRT maintainer has
chosen to avoid having clock() return CPU time advancing faster than 1 second
per physical second, which could silently break programs depending on the
previous behavior._

So it is expected that VS2014 should have the fixes.

On a sidenote, I've seen most of the code posted by the OP, if not all,
before while searching for the same functionality. E.g. the piece after
"you will write your own function:" is followed by something that to me seems
like a straight-up copy from some FOSS project; even the comments seem to
match. OK, I cannot be 100% certain of this, but if it's the case it would be
nice to mention the source. I can't find it atm, but I am sure a C++11-clock-
compatible implementation using QPC has been posted on StackOverflow.

------
przemoc
The function you are looking for on Windows is possibly not
QueryPerformanceCounter(). It's unreliable when you consider various hardware,
especially in multithreaded applications on multi-core/multi-CPU systems, and
even more so if you run Windows under a VM. QPC can use RDTSC(P), but that's
only one of its options, and even when RDTSC(P) is used, that doesn't actually
guarantee anything reassuring.

Go with timeGetTime(), remembering to call timeBeginPeriod(1) early
(usually at the beginning of the application) to set the minimum resolution
for periodic timers to 1 ms (well, that will happen only if the HW provides
that much resolution), and to call timeEndPeriod(1) after you've stopped
working with time (usually at the end of the application). Milliseconds don't
give you high resolution, but at least working at this resolution is reliable.
Having µs or ns garbage is hardly any better...

~~~
daemin
In recent years QueryPerformanceCounter has actually been quite reliable,
since it's guaranteed not to change frequency during runtime.

timeGetTime() and company, though, operate at the highest frequency that any
application has specified, and as such can be quite a drain on portable power
systems (laptops etc.). So when one application calls timeBeginPeriod(1), it
means the laptop needs to wake up more frequently and hence is less power
efficient.

~~~
przemoc
What does "in recent years" mean? Did QPC become better (and
Sleep()/WaitForSingleObject() saner, as we're touching time-related stuff) in
XP before it EOLed? Assuming Vista+, does it really work reliably even when
called from a thread bound to only one core, on various (not necessarily
latest) hardware? Have you played with it under VMs?

QPC was buggy and is still buggy for many users out there. After 15 years
Microsoft is likely getting closer to sane behavior, and maybe in the latest
Windows 8 (8.1) and Server 2012 (2012 R2) it really works reliably under
different setups and loads (I haven't tested it yet). OTOH "getting better"
solely because the setups where it was working incorrectly die out is not
really getting any better.

Viewpoint is important here. If you build your own infrastructure, you choose
your HW/SW stack and can thoroughly test QPC behavior; if it checks out, you
go with the same setup for your servers or whatnot [1]. But if you write
software for all Windows users out there, then you don't have that much
luxury. QPC is simply not good enough. You can always try building some logic
that falls back from QPC to lower-resolution functions whenever odd behavior
is detected, but unless you have HW/SW instances you can test it on, there is
a (high?) possibility you won't do it right, so maybe you shouldn't do it in
the first place?

1 ms resolution at best should be enough in most cases. Increasing the timer
interrupt frequency from the default 64 Hz (15.6 ms resolution) to 1000 Hz
(1 ms resolution) indeed affects system behavior and can harm battery life,
but it may be a necessary evil. If we're talking about high-resolution time on
Windows, then 1 ms resolution is simply that kind of thing (as ridiculous as
it may sound) once we take reliability into account. E.g. games, multimedia
applications and so on are already using timeBeginPeriod(1). The good news is
that, reportedly, since Windows 8 the harm to battery life is much smaller.

[1] But if you really have control over HW/SW stack for your servers, then
you'll quite likely happily avoid using Windows...

------
leif
FTA: "Assuming servers are kept synchronized enough (the enough depending on
your application), you may just solve the problem by acquiring time precisely
enough."

How on earth are "sub-microsecond timers" supposed to be synchronized
_anywhere_?

~~~
profquail
Precision Time Protocol:
[http://en.m.wikipedia.org/wiki/Precision_Time_Protocol](http://en.m.wikipedia.org/wiki/Precision_Time_Protocol)

~~~
leif
OK but the point in the article is to reduce reliance on shared state. This
doesn't work in a database.

------
batbomb
I'm pretty sure rdtsc is/was broken on Intel Core 2 Duos, especially those
with power-saving features (like laptops).

~~~
oso2k
Not broken, per se. RDTSC counts clock cycles. It's not an accurate
indication of time when the clock rate is variable (and, in fact, pausable).

~~~
batbomb
I believe it is actually not monotonic, which was a requirement for this
library. That's why you end up using the high-resolution timer, which has a
10 MHz clock rate, instead.

I ran into this problem trying to get better than 1 µs resolution for a DAQ
system which was hooked up to custom GPS hardware with better than 10 ns
resolution.

------
eliteraspberrie
A minor correction: POSIX only mandates CLOCK_REALTIME; CLOCK_MONOTONIC is
optional (POSIX Advanced Realtime Extensions). Linux provides both, as well as
CLOCK_MONOTONIC_RAW, and some BSDs, like FreeBSD, provide
CLOCK_MONOTONIC_PRECISE.

For OS X, use mach_absolute_time, described here:
[https://developer.apple.com/library/mac/qa/qa1398/_index.htm...](https://developer.apple.com/library/mac/qa/qa1398/_index.html)
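One common way to handle that clock-id zoo (a sketch; the constant name `kBestClock` and helper `best_clock_ns` are mine) is to pick the best available id at compile time, since each id is exposed as a preprocessor macro:

```cpp
#include <time.h>

// Compile-time selection of the "best" available POSIX clock id:
// CLOCK_MONOTONIC_RAW (Linux) is immune to NTP rate adjustment,
// CLOCK_MONOTONIC is the common optional extension, and CLOCK_REALTIME
// is the only id POSIX actually mandates.
#if defined(CLOCK_MONOTONIC_RAW)
static const clockid_t kBestClock = CLOCK_MONOTONIC_RAW;
#elif defined(CLOCK_MONOTONIC)
static const clockid_t kBestClock = CLOCK_MONOTONIC;
#else
static const clockid_t kBestClock = CLOCK_REALTIME;
#endif

long long best_clock_ns() {
    timespec ts{};
    clock_gettime(kBestClock, &ts);
    return static_cast<long long>(ts.tv_sec) * 1000000000LL + ts.tv_nsec;
}
```

On OS X none of these macros helps for sub-microsecond work, which is why the mach_absolute_time path in the Apple Q&A linked above exists.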

------
shmerl
What is the reason for using std::uint64_t? Shouldn't uint64_t work? C99 types
should already be included in C++.

~~~
nly
uint64_t is a typedef defined in stdint.h; std::uint64_t is defined in the std
namespace in <cstdint>.
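In other words, they are two spellings of the same type (a small sketch; note that the standard only guarantees `<cstdint>` puts the names in namespace std, though in practice most implementations also inject them into the global namespace):

```cpp
#include <cstdint>   // guarantees std::uint64_t
#include <stdint.h>  // guarantees ::uint64_t
#include <type_traits>

// Both names resolve to the same underlying type, so they are freely
// interchangeable; the only real question is which spelling a codebase
// standardizes on.
static_assert(std::is_same<std::uint64_t, uint64_t>::value,
              "one type, two spellings");

std::uint64_t a = 42;  // namespaced spelling, from <cstdint>
uint64_t b = a;        // global spelling, from <stdint.h>
```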

~~~
leetNightshade
Yes, though which do you propose should be used? It's nice that the type is
wrapped in the std namespace, but for something like a basic standardized
type, seems like a waste. I think I'd rather include stdint.h.

~~~
jevinskie
If it bothers you so much, just use 'using namespace std;' or 'using
std::uint64_t;'.

[http://en.cppreference.com/w/cpp/language/namespace](http://en.cppreference.com/w/cpp/language/namespace)

Or just '#include <stdint.h>'. C++ is TIMTOWTDI too!

~~~
leetNightshade
Yes, thank you (TIMTOWTDI)! I was just curious to see what others used, and
was sad to see this comment: "You're writing C++ code, no need to regress back
to C." Really? Sure, it's from C, but it's a part of C++.

Going the route `using std::uint64_t`, you'd have to do that for every basic
type. Which is fine. I'd just use that in a general header that's included
everywhere in a project (stdafx, or what have you). Though at that point it
doesn't really matter which you pick.

------
bogolisk
How about being _portable_ to non-Intel platforms?

Really, nothing new to see...

~~~
apaprocki
Here's our (Bloomberg) version of this. The bsls::TimeUtil component is used
to implement bsls::Stopwatch. It supports high-res timestamps on
OSX/Windows as well as Solaris, AIX, and HP-UX (SPARC, POWER, IA64). It's a
good base to start from, and we'll continually tweak it for more performance
as platforms change (I think there are a few patches in the pipeline).

[https://github.com/bloomberg/bde/blob/master/groups/bsl/bsls...](https://github.com/bloomberg/bde/blob/master/groups/bsl/bsls/bsls_timeutil.cpp#L540)

Component docs:
[http://bloomberg.github.io/bde/group__bsls__timeutil.html](http://bloomberg.github.io/bde/group__bsls__timeutil.html)

------
strictfp
Eh, what about not relying on timestamps at all in distributed systems? Use
Lamport or vector clocks if you want a notion of time in a distributed system.
This article makes me more reluctant to consider quasardb.
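For readers unfamiliar with the alternative being suggested, a Lamport logical clock fits in a dozen lines. This is a minimal sketch (the class and method names are mine); events are ordered by a counter that tracks causality instead of synchronized wall clocks:

```cpp
#include <algorithm>
#include <cstdint>

// Minimal Lamport logical clock. Each process keeps a counter; ticking on
// local events and jumping past received timestamps guarantees that if
// event A causally precedes event B, then timestamp(A) < timestamp(B).
class LamportClock {
    std::uint64_t time_ = 0;
public:
    // Local event or message send: advance the counter and stamp the event.
    std::uint64_t tick() { return ++time_; }
    // Message receive: jump past the sender's timestamp, then advance.
    std::uint64_t receive(std::uint64_t sender_time) {
        time_ = std::max(time_, sender_time);
        return ++time_;
    }
    std::uint64_t now() const { return time_; }
};
```

For example, a node at local time 3 that receives a message stamped 5 moves to 6, so the receive event is ordered after the send regardless of how well the two machines' physical clocks agree.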

