UTC, TAI, and Unix time (1997) (cr.yp.to)
27 points by marcopolis on Feb 8, 2015 | hide | past | favorite | 5 comments



The article title should include (1997) as the year it was written.

A much better link, updated far more recently with the latest developments, is:

http://www.cl.cam.ac.uk/~mgk25/time/leap/

also

http://www.ucolick.org/~sla/leapsecs/

"In 1970 the CCIR (predecessor of the ITU-R) decided to disconnect clocks from the rotation of the earth, but they kept the calendar connected to the rotation of the earth. That decision was implemented starting in 1972, and since then the leap seconds have maintained the connection.

In 2015 the ITU-R will decide whether the calendar will also become disconnected from the rotation of the earth. If the ITU-R decides to abandon leap seconds in UTC then the calendar day will become regulated purely by cesium atoms, not by sunrise, noon, sunset, nor midnight. The ITU-R will choose between these two options."

More background:

http://www.ucolick.org/~sla/leapsecs/amsci.html

The possible solution:

For most uses, knowing about leap seconds is unnecessary. Ignoring them is convenient and sufficient for common computers: as the programmer you can then assume that every day has 86400 seconds, and the variations due to leap seconds are probably smaller than those caused by your computer not having a built-in atomic clock (the time provided by your hardware is, unsurprisingly, much less stable than that of the atomic clocks, and you probably don't care).
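That 86400-seconds-per-day assumption is exactly what makes Unix timestamp arithmetic trivial. A minimal sketch (the function name is mine, not from any standard; this is the arithmetic POSIX "Seconds Since the Epoch" implies, with leap seconds ignored):

```python
# Convenient assumption: every day has exactly 86400 seconds, so
# splitting a Unix timestamp into days-since-epoch and seconds-within-
# day is plain integer arithmetic. Leap seconds are simply ignored.
SECONDS_PER_DAY = 86400

def split_timestamp(unix_ts):
    """Return (days since 1970-01-01, seconds into that day)."""
    return divmod(unix_ts, SECONDS_PER_DAY)

# 1 000 000 000 s after the epoch is 2001-09-09 01:46:40 UTC:
days, secs = split_timestamp(1_000_000_000)
```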

A recent popular article by Kuhn (2015) explains that the problem most programmers and computer users have is easily solvable without changing UTC:

http://theconversation.com/an-extra-second-on-the-clock-why-...

"Unfortunately, the way NTP implemented leap seconds in Unix and Linux operating systems (which run most internet servers) made things worse: by leaping back in time to the beginning of the final second and repeating it. Any software reading off a clock twice within a second might find the deeply confusing situation of the second time-stamp predating the first. A combination of this and a particular bug in Linux caused computers to behave erratically and led to failures in some datacentres the last time a leap second was introduced in 2012, notably in one large airline booking system. Instead, alternative implementations now just slow down the computer’s clock briefly in the run up to a leap second to account for the difference."
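The "second time-stamp predating the first" failure mode is easy to see in code. A small Python illustration (my example, not from the article): Python's `time.monotonic()` is guaranteed by the OS never to step backwards, unlike a wall clock that replays a leap second.

```python
import time

# A wall clock (time.time()) can step backwards when the OS replays a
# leap second, so code assuming the second of two readings is never
# earlier than the first can misbehave. A monotonic clock avoids the
# problem by construction.
t1 = time.monotonic()
t2 = time.monotonic()
elapsed = t2 - t1
# elapsed is guaranteed non-negative for monotonic readings; the same
# subtraction on time.time() readings is not, across a stepped leap second.
```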

And his proposal (dating from 2005, updated in 2011 after Google used a similar principle, and still valid):

https://www.cl.cam.ac.uk/~mgk25/time/utc-sls/

"UTC-SLS is a proposed standard for handling UTC leap seconds in computer protocols, operating-system APIs, and standard libraries. It aims to free the vast majority of software developers from even having to know about leap seconds and to minimize the risk of leap-second triggered system malfunction."

"Overall, the Google experience suggests that there is a justifiable need for a smoothed version of UTC for use in computer APIs, if only for due diligence reasons. (...) UTC-SLS has many additional advantages and remains a desirable and more robust candidate for a standardized, long-term solution for the same problem."
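The core of UTC-SLS is simple enough to sketch: during the last 1000 seconds of a UTC day that ends with an inserted leap second, the clock ticks at rate 1000/1001, so the extra second is absorbed smoothly and the clock reads exactly midnight at the end of the real 86401-second day. A rough Python illustration (not the normative definition; see Kuhn's page for that):

```python
I = 1000   # smear interval: the last 1000 s of the UTC day (Kuhn's choice)
L = 1      # one inserted leap second
DAY = 86400

def utc_sls_reading(real_seconds_into_day):
    """Map elapsed real seconds on a day with an inserted leap second
    (0 .. 86401) onto a smooth UTC-SLS reading (0 .. 86400)."""
    start = DAY - I  # smear begins at 23:43:20
    t = real_seconds_into_day
    if t <= start:
        return float(t)
    # inside the smear the clock ticks at rate I/(I+L) = 1000/1001
    return start + (t - start) * I / (I + L)
```

Note how the 86401 real seconds of the leap day map onto exactly 86400 UTC-SLS seconds, so software downstream never sees a 61-second minute or a repeated second.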

I consider UTC-SLS the best approach for most common use cases.

Edit: UTC-SLS can now also be discussed here: https://news.ycombinator.com/item?id=9018504


Steve Allen is one of the more vocal proponents of leap seconds and knows much more than I do about the issue. However, the general thesis that "Discontinuing leap seconds would require many observatories and other organizations to procure new hardware and rewrite software that deals with time and the earth's rotation" has always left something to be desired. I have never understood why the needs of a small minority (astronomers) should be the deciding factor for society as a whole. The argument for leap seconds would be a lot stronger if it did not seem like a tyranny of the minority.


Leap seconds in UTC wouldn't cause any problems if operating systems didn't do "unexpected" things with them, as described by Kuhn. Specifically, the software "clocks" for "humans," those bound to calendar time, should simply have 86400 seconds in a day. (2) When the atomic clocks signal them the "leap" second, they should just "smooth" it. We could have that with some updates to our favourite operating systems; it's a purely software matter.

The solution (UTC-SLS) is simple and good enough for most uses, including Google's need to synchronise millions of its computers.

UTC is still just a to-the-second approximation of UT1, (1) which is by definition bound to the Earth's rotation ( http://en.wikipedia.org/wiki/Universal_Time ). Steve Allen is just trying to make people understand that UTC is by definition "the human calendar time" (year, month, day, hour, minute, second) sent over the radio clocks.

For time less dependent on the Earth, TAI also exists ( http://en.wikipedia.org/wiki/International_Atomic_Time ). So we already have a time reference that "just counts the atomic seconds."
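Concretely, converting between the two scales just means applying a published table of offsets; TAI - UTC had been 35 s since mid-2012 (i.e. at the time of this thread). A sketch with a hand-picked excerpt of that table (the function name is mine; a real implementation would load the full, regularly updated table):

```python
# Excerpt of the leap-second table: (Unix timestamp at which the new
# offset takes effect, TAI - UTC in seconds), newest first.
LEAP_TABLE = [
    (1341100800, 35),  # 2012-07-01
    (1230768000, 34),  # 2009-01-01
    (1136073600, 33),  # 2006-01-01
]

def tai_minus_utc(unix_ts):
    """Return TAI - UTC (seconds) for a given Unix timestamp."""
    for start, offset in LEAP_TABLE:
        if unix_ts >= start:
            return offset
    raise ValueError("timestamp predates this table excerpt")
```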

Had there been less confusion among programmers about leap-second handling on common systems, we'd all be using the UTC-SLS solution already, and we wouldn't have to care about leap seconds unless we really needed TAI.

---

1) Watch out for http://www.itu.int/en/ITU-R/conferences/wrc/2015/Pages/defau... (2 to 27 November 2015) if that changes.

2) POSIX already specifies that every day has exactly 86400 seconds in "Seconds Since the Epoch", and existing code relies on that: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_...
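That POSIX definition is not just prose: the standard gives an exact closed-form expression for "Seconds Since the Epoch" in terms of the broken-down `struct tm` fields, with no leap-second term anywhere. Transcribed into Python:

```python
def posix_epoch_seconds(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
    """POSIX's formula for "Seconds Since the Epoch".

    tm_year counts years since 1900 and tm_yday counts days since
    January 1 (0-based), exactly as in C's struct tm. Every day
    contributes exactly 86400 seconds: leap seconds don't appear."""
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000
            + ((tm_year - 69) // 4) * 86400
            - ((tm_year - 1) // 100) * 86400
            + ((tm_year + 299) // 400) * 86400)
```

The last three terms handle the Gregorian leap-year rules (every 4 years, except centuries, except every 400 years); nothing in the formula knows about leap seconds.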


Why deal with the complexity of leap seconds? Given TAI, why should society as a whole bear that complexity just because it makes life a little easier for astronomers?


It's not about making things easy for the astronomers but for the humans. The astronomers already have to use all of TAI, UT0, UT1, and UTC, and far more complex calculations. We other humans use days. We have daylight saving time, and nobody minds an hour's difference twice a year, because those who do care use the less-shifting timestamps, often called UTC, even though they are only "synchronized with UTC" and not exactly UTC: on POSIX-inspired systems they have exactly 86400 seconds in a day, always. Only some programmers then "discover" the definition of the leap second and remain confused; because their computers use the name "UTC", they wrongly think the exact leap seconds matter to them even when they only need calendar time.

If you use the POSIX time routines (and you almost certainly do, unless you've tweaked something wrongly), you already don't have to deal with the complexities of leap seconds (though you should care about DST!). Every day in what POSIX calls "Seconds Since the Epoch" (sometimes referred to as UTC) has in fact the same number of seconds (if you know C, it's what you get in time_t for all timestamps). Only the OSes need to be fixed to smooth leap seconds instead of introducing them all at once, and then even some obscure sync bugs can never happen again. Google proved that it's a good approach.



