Ask HN: How is it that programmers can simply ignore leap seconds?
27 points by sporkle-feet 5 days ago | 38 comments
I am a professional software developer and have occasionally needed to work with dates and times, including applications that require a certain level of accuracy (e.g. processing pricing feeds from financial institutions).

Since 1970, there have been 27 leap seconds applied. But these don't show up anywhere in any computer system I have ever used (OS, languages, applications, third party APIs). If I create a date object of 1970-01-01T00:00:00 and repeatedly add 86400 seconds, should I not end up with a date/time that is no longer midnight?

I assume that we are collectively just ignoring leap seconds and hoping for the best. Is this OK?






Computers usually report time in UTC but don't actually have leap seconds. Most computers are pretty bad at keeping time, and regularly sync to NTP servers for the current time. NTP servers generally have better timekeeping hardware (or sync with better sources). When a leap second occurs, many NTP services smear that second out over many hours. Different NTP services use different smear schemes. During that smear, seconds are a bit longer than usual (or a bit shorter, for a negative leap second). For general computing this doesn't matter: the smear is on a similar order of magnitude to normal clock drift. This isolates the leap second to only the computers that need to track it. UTC/UT1 is designed to be used by humans and follows the Earth's rotation relative to the sun (though UT1 is nowadays measured against distant quasars).

The big reason to isolate this is that leap seconds aren't fully predictable. We can forecast them to some degree, but ultimately they depend on measurements of the Earth's rotation. Unlike leap days, you cannot safely write code that accounts for future leap seconds: leap days follow a well-defined formula, leap seconds do not (see the sketch below).
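
As a rough Python sketch of that difference: the leap-year rule is pure arithmetic, while the leap-second list below is deliberately partial and in real code has to be updated (e.g. from a published file like leap-seconds.list) whenever the IERS announces a new one.

    def is_leap_year(year: int) -> bool:
        # Gregorian rule: every 4th year, except centuries not divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # No formula can generate these; each one is announced by the IERS
    # (the agreement only guarantees at least 8 weeks of notice).
    # Deliberately partial list -- just the three most recent:
    LEAP_SECOND_DATES = ["2012-06-30", "2015-06-30", "2016-12-31"]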

For time-sensitive applications like navigational computers, TAI is used instead. TAI is currently 37 seconds ahead of UTC and ticks in SI seconds: the second was originally derived from the solar day, but it is now defined by the transition frequency of the caesium atom.

See http://mperdikeas.github.io/utc-vs-ut1-time.html for a good, more detailed summary.

For an overview on how Google handles the smear see https://developers.google.com/time/smear
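
To make the smear concrete, here is a minimal sketch (Python, with made-up function names) of a 24-hour linear smear like the noon-to-noon one described in the Google link above:

    SMEAR_LEN = 86400.0  # smear window: noon UTC before the leap to noon UTC after

    def smear_correction(t: float, smear_start: float, leap: int = 1) -> float:
        """Seconds the smeared clock has fallen behind a uniformly ticking
        clock (aligned with pre-leap UTC) at Unix time t."""
        if t <= smear_start:
            return 0.0
        if t >= smear_start + SMEAR_LEN:
            return float(leap)
        # The whole leap second is spread linearly across the window, so each
        # smeared second is roughly 11.6 microseconds longer than an SI second.
        return leap * (t - smear_start) / SMEAR_LEN

    def smeared_time(t: float, smear_start: float) -> float:
        return t - smear_correction(t, smear_start)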


>When a leap second occurs, NTP servers will smear that second over 12 hours or so

Since when? I mean that literally: I only heard of this concept recently and didn't think it was a general standard yet.


"My system clock is wrong, what action shall I take to correct it?" Common answers to that question are "step" or "smear". The notion that the clock could be wrong is not addressed in the design of most systems, and "smear" is widely accepted as the least harmful fix.

Yes, I'm wondering when/how "smearing" became widely accepted, since the Wikipedia page on leap seconds only says that there is a proprietary method that Google uses, and another that Amazon uses.

"UTC-SLS was proposed as a version of UTC with linear leap smearing, but it never became standard"


Lack of agreement; "smearing" is completely normal for answering "What time is it?", and completely wrong for applications requiring precise time.

Why can't we just switch to TAI for computers and anything electronic everywhere and just use UTC for user visible timestamps?

How can air-gapped computers keep time well?

Use a GPS receiver, which only receives a time reference and location from the GPS satellites and doesn't transmit anything, so you're still air-gapped. See https://gpsd.gitlab.io/gpsd/gpsd-time-service-howto.html for reference.

It's not really an air-gapped system if it accepts unauthenticated external radio signals and changes state in response, which is what would be happening in that case.

You don't need much imagination to think of various attacks that a spoofed GPS signal could open up, if you were able to cause a system to have an incorrect time.


Is it really still airgapped if it accepts input from an outside system?

No data leaves the computer over GPS and the computer controls how it processes the data. It's no worse than a keyboard.

> Since 1970, there have been 27 leap seconds applied. But these don't show up anywhere in any computer system I have ever used (OS, languages, applications, third party APIs). If I create a date object of 1970-01-01T00:00:00 and repeatedly add 86400 seconds, should I not end up with a date/time that is no longer midnight?

The timestamp on your computer is in a timescale (called Unix time) which is defined as not including leap seconds, so no. The advantage of this system is that there is a simple algorithm for converting the integers to points on a calendar and back again, much as you described (i.e. just keep adding 86400 to go forward a day, 365 or 366 of those to go forward a year, etc.). The downside is that a Unix timestamp can be ambiguous: around a positive leap second the same integer covers two real seconds (as with those 27 leap seconds), and around a future negative leap second one integer would correspond to no real second at all.
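
The OP's experiment is easy to reproduce, e.g. in Python, whose timestamps use this same leap-second-free timescale:

    from datetime import datetime, timezone

    # Unix time pretends every day has exactly 86400 seconds, so repeatedly
    # adding 86400 to the epoch always lands back on midnight UTC.
    t = 0  # 1970-01-01T00:00:00Z
    for _ in range(20000):
        t += 86400
    print(datetime.fromtimestamp(t, tz=timezone.utc))  # still exactly midnight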

If you want to use UTC as your timescale, then you would have a different representation (something like TAI) and you would need to know about leap-seconds in order to do the integer-to-calendar-or-back-again type of calculations.

As with all things in engineering, it is a trade-off and what is the best choice depends very much on what you are trying to do.


POSIX requires that we ignore leap seconds. POSIX had no alternative to this choice because the information needed to do time right was not, and still is not, readily available via an authoritative and robust mechanism. No international recommendation has ever required the creation or funding of a mechanism better than "This one agency in Paris will use the post office to send out letters to your national government time service agency at least 8 weeks in advance of a leap."

I was under the impression that it's fundamentally impossible to determine leap seconds in the future because the rate at which the Earth is slowing down its spin varies unpredictably.

At the inception of leap seconds it was unclear how well they could be predicted, but the agreement required 8 weeks of notice as a minimum. Since the inception of leap seconds the rotation of the earth has accelerated, not slowed.

>Since the inception of leap seconds the rotation of the earth has accelerated, not slowed

Ok, I'm not sure if you are correcting what I wrote or not. There is a sense in which what you write is correct, I think. But the long term trend is slowing, and since the inception of leap seconds, there have been ups and downs whether or not one averages over a year.

Illustration:

https://en.wikipedia.org/wiki/Leap_second#/media/File:Deviat...

I think this is showing that the yearly cycle is larger than the change in speed over a century, and so are the decade-scale cycles in the 365-day average.


Look at the past 250 years where there is not really any clear indication of deceleration https://www.ucolick.org/~sla/leapsecs/dutc.html

It says "Over the passage of centuries the rotation of the earth is being decelerated by tidal friction from the moon and sun"

Are you correcting me, or not, or what is it you are trying to communicate?


It is possible 8 weeks in advance to predict the difference between UTC and UT1 and keep it less than 0.9 seconds. Nothing more is required by international agreement.

Most of us don't need to hope for the best because leap seconds won't trigger bugs in our applications.

In my work, I deal with dates and times, but a leap second would have no consequence. A negative leap second would be interesting, because database records could appear to have been created out of order (a possible problem for many apps), but my apps wouldn't care.

There are programmers who have to worry about leap seconds; I'm happy I'm not one of them.


A failure rate of ~once every two years is so tiny compared to the rate of failure introduced by other things (from human error, on up); and for many time-related things being off by a second is irrelevant (or again tiny compared to all the other sources of noise in measuring time). So it seems reasonable to me to ignore leap seconds in the vast majority of projects.

That said, modern cloud environments do hide this problem for you with leap smearing [1], which seems like the ideal fix. It'd be nice to see the world move to smeared time by default so that one day = 86400 seconds stays consistent (as does 1 second = 10^9 nanoseconds), even though the length of the smallest subdivisions of time as perceived by your computer then varies intentionally as well as randomly.

[1] https://developers.google.com/time/smear


I've never worked on systems that required absolute timing accuracy better than a few minutes. The only date/times were timestamps of human activity, on computers whose clocks were set by humans, not NTP.

> including applications that require a certain level of accuracy (e.g. processing pricing feeds from financial institutions)

Should leap seconds matter then? As long as you are synchronizing with the same source of truth as the financial institution's back-end, both clocks should tell the same time.


What if we're trying to correlate prices from two different financial institutions? Maybe one uses smearing and one doesn't, who knows?

In that case UTC is the wrong answer to the question "What timescale will produce robust results for this application?" As soon as leap seconds were recommended by the CCIR in 1970, radionavigation system administrators announced they would switch to purely atomic TAI-10s, and astronomical almanacs continued to use just plain UT. Proceedings of international standards bodies show them discussing how the new CCIR timescale with leap seconds (not yet officially named UTC) need not apply to anything other than radio broadcast time signals.

Leap smear was invented to make this easier on programmers and others: https://developers.google.com/time/smear

I've always wondered that, too. There are fields where exact second level precision is required, but virtually everyone is OK with systems being up to a few seconds off here and there.

It could be the case that as long as dates are internally consistent (a timestamp for an object created later than another reflects that fact), most systems won't break.

Leap seconds aren't a precision issue unless you need to calculate time deltas between events.

Calculating time deltas between events is fundamental for GPS navigation, tracking an incoming missile, coordinating the action of robotic systems, synchronization of telecommunications, ...
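
A concrete illustration in Python, using the leap second that was inserted at the end of 2016 (Python's datetime, like Unix time, simply doesn't see it):

    from datetime import datetime, timezone

    # Two instants straddling the leap second inserted at 2016-12-31 23:59:60 UTC.
    t1 = datetime(2016, 12, 31, 23, 59, 0, tzinfo=timezone.utc)
    t2 = datetime(2017, 1, 1, 0, 1, 0, tzinfo=timezone.utc)

    print((t2 - t1).total_seconds())  # 120.0 -- but 121 SI seconds really elapsed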

One second here or there, I am sure we would all give a great damn if it meant gaining or losing a buck because we didn't know the true time.

A real shitstorm is going to happen in 2038 when 32-bit Unix time overflows. I base this hypothesis on how slow-moving the industry has been recently.
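
For anyone who wants to see the deadline, the overflow of a signed 32-bit time_t is easy to compute (a Python sketch):

    from datetime import datetime, timezone

    # A signed 32-bit time_t tops out at 2**31 - 1 seconds after the epoch.
    print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00; one tick later the counter wraps negative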

What critical systems are actively in use now that still use a signed 32bit `time_t` or equivalent?

> If I create a date object of 1970-01-01T00:00:00 and repeatedly add 86400 seconds, should I not end up with a date/time that is no longer midnight?

Of course and that’s why you shouldn’t do that. I don’t know about other languages, but if you want to advance a certain number of days, you don’t just add seconds. Any code review would pick that apart.

You add (or subtract) date components, meaning you specify a day, week or whatever unit.
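
The same point in Python terms (the example below uses a DST change rather than a leap second, since ordinary clocks never see a leap second, but it's the same class of bug): adding 86400 seconds and adding "one day" are different operations.

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo

    ny = ZoneInfo("America/New_York")
    dt = datetime(2024, 3, 9, 12, 0, tzinfo=ny)  # noon, the day before DST starts

    # Wall-clock ("calendar") arithmetic: same local time, one calendar day later.
    print(dt + timedelta(days=1))  # 2024-03-10 12:00:00-04:00

    # Adding 86400 real seconds via the timestamp: the local clock reads 13:00,
    # because an hour of wall-clock time was skipped when DST began.
    print(datetime.fromtimestamp(dt.timestamp() + 86400, tz=ny))
    # -> 2024-03-10 13:00:00-04:00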

Do I misunderstand the question?


If we were taking leap seconds into account, we should end up with a date/time that is not midnight.

Because it does always remain midnight, it shows we are all ignoring leap seconds.


"ignoring leap seconds" is equivalent to "using a time scale that does not conform to UTC". Time scales other than UTC have 86400 seconds in a day. UTC is the exception.

> I don’t know about other languages...

Other programming languages apart from what? You never specified which language you're talking about.


Oof I somehow butchered that. Swift.


