
gettimeofday() should never be used to measure time - aethertap
http://blog.habets.pp.se/2010/09/gettimeofday-should-never-be-used-to-measure-time
======
MichaelGG
Except when doing a pcap, I actually want the actual time it is. Maybe in some
cases you'd explicitly want an offset, but not in general. It's really a plea
to get your clocks synced up, so you aren't forced to choose between reporting
an incorrect time or an incorrect duration. If I'm running a pcap and the
system time changes over a day by several seconds, I'd prefer each packet to
report the closest concept of the right time instead of being way off as time
goes by.

Not to mention: if the monotonic clock can keep such accurate timing, then
everyone would just use that and NTP would not be so necessary.

Really: under what conditions do you have a usefully functioning system when
the clock is so far off that you need to do multi-minute jumps? Even Hyper-V,
with the utterly atrocious w32time, manages to keep it within a minute or two
(and a Linux guest can easily have ~ms accuracy).

The leap second point is valid, but that's an argument against leap seconds
which serve no use in today's society other than to introduce unnecessary
problems. Even Google just gives up and purposely introduces inaccuracies in
their clocks for a day so that when the leap second comes around they're
synced again. A leap hour would be a far better solution, as it's something
many people are (unfortunately) used to from DST, and it wouldn't bother us
for a dozen centuries.

~~~
mct
_Under what conditions do you have a usefully functioning system when the
clock is so off you need to do multi minute jumps?_

One example is embedded systems. Many don't have an RTC, or boot after the RTC
has lost power. If a network connection finally comes up, NTP will instantly
fast-forward the clock by _years_.

~~~
toast0
For Debian-based embedded systems, the fake-hwclock package is helpful here
(it's a script that periodically saves the current time and restores it on
boot). You'll still have big jumps after a power loss, but probably not of
years. It's also helpful in case you ever change the motherboard on a regular
system with an RTC.

~~~
cperciva
Many embedded systems don't have any writable durable storage.

------
teddyh
I wish that HN would use the Public Suffix List
([https://publicsuffix.org/](https://publicsuffix.org/)) in its algorithm to
display domain names of submissions. That way, we wouldn’t get things like
this, where the domain given (pp.se) does not say anything about what the
actual site is.

~~~
cpach
I think the HN admins/devs will have a better chance to see your suggestion if
you send it to hn@ycombinator.com

Edit: Good suggestion BTW :)

------
JoeAltmaier
OS time handling is feeble across the industry. It's inherited from
20-year-old ideas about APIs. Not just the antique time structures that are
useful for rendering but not much else; also the abominable Sleep() and such.

Imagine you want to do something every second. You Sleep(1000) or some such.
But it takes time to do the thing, so it's actually a bit longer between
loops. Maybe it doesn't matter; maybe it does. But you're stuck doing stuff
like that.

Why not Wait(timetowaitfor). Not a duration; the actual time you want to be
woken up. Now it still takes time to wake up and run. And it takes time to
make the call. But now, your stuff actually runs say 60 times per minute (e.g.
if you wait for successive seconds), hour after hour and day after day.

Also, what's with the limited resolution on the time? It's due to the common
implementation of timers as a counter of ticks, where a tick is whatever
regular interval some hardware timer is set to interrupt at. Why not instead
interrogate a free-running counter? And if I want to wait 1 second plus 150
nanoseconds, then I Wait for that time to arrive, and the library (or OS) sets
a real timer interrupt to go off when that time has arrived. Sure, there's
latency in calling me back; that's inevitable. What's not inevitable is some
limited multi-millisecond tick resolution.

Anyway, whenever I'm in charge of designing an OS or application environment,
I provide real timers like this. It's about time the big OS providers catch up
to the 21st century.

~~~
adestefan
> Why not Wait(timetowaitfor). Not a duration; the actual time you want to be
> woken up. Now it still takes time to wake up and run. And it takes time to
> make the call. But now, your stuff actually runs say 60 times per minute
> (e.g. if you wait for successive seconds), hour after hour and day after
> day.

There's a lot of "what ifs" that need to be answered for something that seems
so simple:

* What clock are you using? Machine ticks? Wall clock? Is it correct? Is it stable enough?

* What if the clock misses the time that I asked for? Do you run it anyway? Do you skip that invocation?

* What if the clock moves backwards? Will you trigger twice? Will you even notice?

* What if I have a leap second so there are 61 seconds in the hour? What if a second is removed so there are 59 seconds in the hour?

The reason people don't touch this stuff, or get it wrong, is that it's really
hard. There are a lot of corner cases when it comes to time handling.

~~~
JoeAltmaier
Really hard things are what OS code is FOR. Get it right once; then apps call
it and it works.

------
vojfox
It's a similar situation on iOS, where new developers sometimes use (in
Objective-C) `[[NSDate date] timeIntervalSince1970]`, which is natural, but
wrong. NSDate draws from the network-synchronized clock and will occasionally
hiccup when re-syncing against the network, among other reasons.

If you're looking to measure relative timing (for example, for games or
animation), you should instead use `double currentTime =
CACurrentMediaTime();`. That's the correct way.

------
conradk
Ironic that the "What to use instead" part doesn't even feel the need to check
the return values of functions.

Am I missing something?

~~~
thomashabets2
Example code often skips error handling, because that's not the point.

Of course clock_gettime() should have its return value checked, just like you
always check the return values of gettimeofday() and time().

~~~
conradk
In an article that tries to tell us we're doing it wrong, doing it right would
be nice.

------
sargun
Let's talk about the sad state of clocks today. There exist a few ways to
query NTP time on Linux: (1) directly through NTP, (2) the adjtimex syscall,
(3) the ntp_gettime call. I found it hard to find many codebases using proper
NTP. In fact, codebases that need reliable time, like Cassandra and OpenLDAP,
don't use NTP time APIs to check whether the system clock is in sync, or to
get accurate time. Even if we were to make PTP accessible to the world, it
would be some time before its usage actually became ubiquitous. The
understanding of timekeeping and clock behavior in our community is a sore
point.

~~~
Matumio
I think NTP usually synchronizes the system time, so programs don't have to
use any NTP-specific API to get NTP time. PTP won't help here: all it does is
increase the accuracy from milliseconds to microseconds, which is not much use
if the Linux scheduler tick is 1 ms. PTP time is mainly useful together with
hardware event timestamping, where stuff like interrupt latency can be
excluded. If you want, for example, to send a frame every two seconds as soon
as the device boots, you should still use CLOCK_MONOTONIC; otherwise you
produce a glitch when PTP ramps up.

------
dbrower
I think the article and much of the discussion miss a larger point: time is
hard, and very, very hard when there are multiple systems with different
clocks. The APIs are the way they are because there just aren't solutions,
especially since all systems ultimately have unreliable connections to good
time sources.

The miserable APIs are New Jersey/Worse-is-better answers to intractable
problems.

~~~
stestagg
That's just not the point of this article.

Basically, my takeaway from this is: if you /can/ avoid the complexity by
dealing in purely relative timings (if something takes 2 seconds, then it
takes 2 seconds, even if one of them is a leap second), then you /should/.

And the best way to do this is using the techniques mentioned in the article.

------
dsjoerg
The semantics you'd like the OS + standard library to provide would be some
kind of gettime() call that returns a time thingie, and a secondsbetween(a, b)
call that reliably tells you the time between the two time thingies.

The fact that it doesn't already work this way is a design fail.

All the nonsense about NTP and clock slew and monotonicity is an
implementation detail that should be hidden below this layer.

------
shanwang
The last time I tested them, on Red Hat 6, clock_gettime(CLOCK_REALTIME) and
gettimeofday were slightly faster than clock_gettime(CLOCK_MONOTONIC), and
gettimeofday was much faster than any clock_gettime on older platforms.

------
pstrateman
This isn't even right....

The correct call is clock_gettime(CLOCK_MONOTONIC_RAW, ...)

~~~
obstinate

    while (ts_remaining.tv_nsec > 1000000000) {
        ts_remaining.tv_sec++;
        ts_remaining.tv_nsec -= 1000000000;
    }

should be

    int whole_secs = ts_remaining.tv_nsec / 1000000000;
    ts_remaining.tv_sec += whole_secs;
    ts_remaining.tv_nsec %= 1000000000;

Right? I mean maybe he microbenchmarked it and looping is faster because no
div or mod, but intuitively this seems like it would be better. If the loops
are a result of benchmarking, it should probably be called out in comments.

~~~
CamperBob2
I sometimes deliberately write really slow, stupid C code when dealing with
time or other things that are both extremely important and (if I'm honest with
myself) unlikely to be tested exhaustively.

Apple has really awesome developers who don't need to do stuff like this, and
that's probably why iPhone alarms fail to go off every other leap year and
reliably sound at 2 AM on January 32nd of years ending in '3'. Time-related
code is like rolling your own encryption, in a sense. It's a trap for amateurs
and pros alike.

------
shared4you
OP or mods: please add [2010] tag to the title.

~~~
theoh
If adding the year to every post that's not "news" is to become a convention,
it should be added to the guidelines. I doubt this is going to happen because,
really, what difference does it make for an article like this?

~~~
jonsen
getyearofpost() should never be used to measure timeliness

~~~
thomashabets2
Ha!

Indeed. I'll go back and update posts if they are no longer true.

/Author

