
Should the leap second be eliminated? - ColinWright
http://www.bbc.co.uk/news/science-environment-16597191
======
Gring
Why don't we let GPS satellites and NTP broadcast both "regular time" and
International Atomic Time (TAI), and let each user use the one that fits best?

So: satellite navigation, financial services and flight control systems use
TAI, everybody else uses regular time?

To make the difference obvious and prevent misunderstandings, TAI could even
have a different form, like a pure integer of "seconds after date X". So
nobody except programmers for financial services and other specialized systems
would ever get to see TAI, and everybody's happy?
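
As a sketch of how that could look (Java for illustration; the offset table
below is a placeholder -- real values would come from the published
leap-second announcements):

    import java.util.NavigableMap;
    import java.util.TreeMap;

    // Sketch: systems exchange a plain integer count of TAI seconds since
    // some epoch "X"; UTC is derived only at the edges, from a table of
    // published TAI-UTC offsets. Table contents below are placeholders.
    public class TaiClock {
        private static final NavigableMap<Long, Integer> TAI_MINUS_UTC = new TreeMap<>();
        static {
            TAI_MINUS_UTC.put(0L, 34);          // offset in force at epoch X (placeholder)
            TAI_MINUS_UTC.put(15_724_800L, 35); // a later leap second (placeholder)
        }

        // Internal code passes taiSeconds around untouched; only display
        // code applies the table lookup.
        public static long utcSecondsFor(long taiSeconds) {
            return taiSeconds - TAI_MINUS_UTC.floorEntry(taiSeconds).getValue();
        }
    }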

~~~
mseebach
I think you're on the right track, but I'd take it a step further: Whenever
you're displaying _what the time is_ to a human being, convert to UTC. In all
other cases - especially storage and transmission etc. - use TAI.

We have very rich libraries for dealing with time in software, so it's not
like it would be a huge drag.

There are still some unaddressed edge cases, of course. Say a leap second is
inserted at the end of 30 June. Now consider the interval 14:00 30 June -
14:00 1 July. In "human time" it's 24 hours, but in computer time it's
24h00m01s - and, conversely and perhaps more dangerously, should "14:00 30
June" + 24 hours come out as 13:59:59 or 14:00:00?
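
Concretely, the two readings of "+ 24 hours" across that boundary:

    14:00 30 June + "1 day" (calendar)  = 14:00:00 1 July  (the leap second is absorbed)
    14:00 30 June + 86400 SI seconds    = 13:59:59 1 July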

~~~
masklinn
> We have very rich libraries for dealing with time in software

99.9% of which suck goats though.

And then, there's the problem of correctly syncing the time display with the
internal representation: let's say the library stores time as a monotonically
increasing number (which it should, ideally; failing to is the first mistake
of many libraries). This monotonically increasing number is then mapped onto
"human" dates and times via more or less complex schemes.

The problem is that those mappings change over time: leap seconds, for
instance, are announced only about six months in advance, and DST/GST changes
may come with far less notice (to say nothing of timezone changes). So those
time libraries need to be updated all the bloody time, or need a mechanism
through which they download mapping updates.

> conversely and perhaps more dangerously, should "14:00 30 June" + 24 hours
> come out as 13:59:59 or 14:00:00?

Well this is a human time mapping; in human time mappings the leap second is
part of one of those 24 hours, so it'd be 14:00:00 (same as adding "1 day" or
"1440 minutes"), whereas adding 86400 seconds would yield 13:59:59.

The internal storage should not have a concept of hours (or minutes, or maybe
even seconds); it would have its "time unit" and then _contextual_ mappings
from the time unit to "human units" through calendars and calendar operations.
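
As an illustration with a JSR-310-style API (java.time has no leap seconds,
so a DST transition stands in for one here; the calendar-versus-duration
distinction is the same):

    import java.time.Duration;
    import java.time.Period;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class CalendarVsDuration {
        public static void main(String[] args) {
            // 14:00 the day before US clocks spring forward (11 March 2012)
            ZonedDateTime t = ZonedDateTime.of(2012, 3, 10, 14, 0, 0, 0,
                    ZoneId.of("America/New_York"));

            // Calendar arithmetic: "1 day" later, the missing hour is absorbed
            System.out.println(t.plus(Period.ofDays(1)));     // 2012-03-11T14:00-04:00
            // Duration arithmetic: exactly 86400 seconds later
            System.out.println(t.plus(Duration.ofHours(24))); // 2012-03-11T15:00-04:00
        }
    }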

~~~
mseebach
> 99.9% of which suck goats though.

The past decade has seen immense improvements, though. JodaTime is pretty neat
and JSR-310 seems to be an improvement on that (haven't used it though).

> And then, there's the problem of correctly syncing the time display with
> the internal representation [..] leap seconds [..] are announced only about
> six months in advance, and DST/GST changes may come with far less notice
> (to say nothing of timezone changes).

Since the time zone issue is the larger one anyway, it also provides a solid
(copyright trolling aside) framework for distributing this information: the
tz database, distributed and maintained as part of the OS.

> Well this is a human time mapping [..] contextual mappings

I agree, but it's the context that's hard. You need some way for your
abstraction to capture the fact that a "minute" can sometimes consist of 61
seconds. Consider an application that monitors the number of foos per minute,
where the developer perfectly reasonably uses the "minute" abstraction. This
might then exhibit faulty behaviour when a leap second is inserted - suddenly
there are 1/60th more foos reported in that minute.

Depending on the application, that might be statistically insignificant and
safely ignored, or it might be a red flag that makes someone's pager go off
(or worse).
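
As a sketch of that naive design (Java purely for illustration; bucketing by
wall-clock minute is exactly the suspect assumption):

    import java.time.Instant;
    import java.time.temporal.ChronoUnit;
    import java.util.HashMap;
    import java.util.Map;

    // The naive design: bucket foos by wall-clock minute. A 61-second
    // "minute" silently collects ~1/60 more foos than its neighbours.
    public class FoosPerMinute {
        private final Map<Instant, Long> buckets = new HashMap<>();

        public void recordFoo() {
            Instant minute = Instant.now().truncatedTo(ChronoUnit.MINUTES);
            buckets.merge(minute, 1L, Long::sum);
        }
    }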

~~~
masklinn
> The past decade has seen immense improvements, though. JodaTime is pretty
> neat and JSR-310 seems to be an improvement on that (haven't used it
> though).

Joda is pretty much the 0.1% in question. On its own.

> I agree, but it's the context that's hard. You need some way for your
> abstraction to capture the fact that a "minute" can sometimes consist of 61
> seconds.

Absolutely.

> Consider an application that monitors the number of foos per minute, where
> the developer perfectly reasonably uses the "minute" abstraction. This might
> then exhibit faulty behaviour when a leap second is inserted - suddenly
> there are 1/60th more foos reported in that minute.

Ideally, this application should monitor the number of foos per [[time unit]],
and only convert from [[time unit]] to minutes when it displays information.

Of course you'd still have contextual issues:

> Depending on the application, that might be statistically insignificant and
> safely ignored, or it might be a red flag that makes someone's pager go off
> (or worse).

Yeah. Unless those triggers use the rate per [[time unit]] instead of the
rate per "human" unit (which would blow up at the end of summer time as well,
since it "replays" a full human hour).

------
chalst
For a backgrounder, read this CACM article:

<http://cacm.acm.org/magazines/2011/5/107699-the-one-second-war/fulltext>

I've wondered: would the technical problems described be ameliorated if a
longer timeframe were given? At the moment, we get about six months' notice
of leap seconds; what if we lengthened this to, say, two years?

That said, I'm not really sympathetic to the proposal. We already have the
highly predictable International Atomic Time (TAI); if we want this
predictability, why not just use TAI instead of trying to change UTC?

------
qntm
This article doesn't give clear enough reasons for abolishing leap seconds.
One passage says:

> But those seeking to abolish the leap second say these one-second jumps are
> becoming increasingly problematic for navigation and telecommunication
> systems that require a continuous time reference. These include satellite
> navigation, financial services, the internet, flight control and power
> systems, among others.

But elsewhere,

> Ron Beard, chairman of the ITU's working party on the leap second, said:
> "This is not a technical issue, it is more a diplomatic one."

~~~
drostie
Well, let me phrase it a little more analytically, then. The biggest reason
to switch UTC to atomic time is hacker-cultural. Basically, many countries
have said, in law, "we use UTC." This causes most specifications -- in
particular NTP, Javascript, and POSIX timestamps -- to also say "we use UTC
-- but we ignore leap seconds." Unfortunately, these specifications are
thereby rendered useless and incompatible in a bunch of different ways. We'd
like NTP servers to just broadcast atomic time, TAI. Similarly, Javascript's
Date.getTime() should give TAI, and POSIX should give TAI. But TAI isn't what
countries have legally specified -- UTC (and in some cases GMT) is.

Governments use UTC because we like the idea that the sun rises at around 6
AM. (Or, on my base-10 clock, at around 0 ki.) But when the sun rises is a
question of setting a time zone!

So the best way to include leap seconds, for now, is via time zone
information: allow the time zones to drift with leap seconds. Time zones are
already pretty crazy and hard to anticipate, and may change by a much larger
offset at even shorter notice (e.g. "this summer, we are not observing
Daylight Saving Time"). If your software can't anticipate upcoming political
"leap hours", it already can't handle the less political question of "when is
the next leap second?"

During my free time I've been playing with designing a base-10 clock.
(Obligatory plug: <http://drostie.org/time/>.) So I've been thinking about
this topic off and on for a couple of years now, and in that time I've
convinced myself that the right way to handle this is to define a _named time
zone_ which implements leap seconds -- this requires _derivative time zones_.
For example, my clock above automatically assigns New York the base-10 time
zone "+300". With named time zones, New York could shift to leap seconds by
passing a law saying "the base-10 time zone in our jurisdiction is instead
leap+300." Or perhaps the US as a whole would design its own time zone to
standardize a base-10 daylight saving time, call it "us", and then New York
might be in the time zone "us+300".
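
Mechanically, a derivative zone like "leap+300" could just be an ordinary
fixed offset plus the accumulated leap-second count -- a sketch, with all
names and numbers hypothetical:

    import java.util.NavigableMap;

    // A "leap" zone is an ordinary fixed offset plus the accumulated
    // leap-second count at the instant being displayed, so leap seconds
    // become a per-zone, purely cosmetic concern.
    public class LeapZone {
        private final long baseOffsetSeconds;            // e.g. the "+300" part
        private final NavigableMap<Long, Integer> leaps; // instant -> accumulated leap seconds

        public LeapZone(long baseOffsetSeconds, NavigableMap<Long, Integer> leaps) {
            this.baseOffsetSeconds = baseOffsetSeconds;
            this.leaps = leaps; // assumed to have an entry at or before the epoch
        }

        // Total display offset from the uniform timestamp at a given instant.
        public long offsetAt(long timestampSeconds) {
            return baseOffsetSeconds - leaps.floorEntry(timestampSeconds).getValue();
        }
    }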

The goal is to really make specification designers think, "oh, leap seconds
are a cosmetic choice; I'd better just use the base-10 'rational timestamp'
format rather than rolling my own implementation of POSIX timestamps." Right
now they think "oh, GMT is old and UTC is new and everyone knows what UTC is,
so we don't have to worry about it, right?" -- and they don't get it right.

So is this a technical issue or a diplomatic one? Well, it's technical because
it affects technical specifications, but it's nontechnical because there is no
good _technical_ argument for using UTC versus TAI -- that's a time zone
question, a political or diplomatic question about when you want the Sun to
rise.

The only technical situation I know of is the claim that some old telescopes
might be using time to find stars -- and might misalign if the time drifts
out of sync with Earth's rotation. I don't consider this a serious technical
difficulty, because those telescopes are manned by hackers smart enough to
get a feed of Terrestrial Time and pipe it into whatever clock input was
receiving UTC. "Hey look, it uses this NTP server; let's just redirect that
DNS query to a smallish Linux box which serves TT over NTP." I think we're
smart enough to ignore this technical case.

------
rnadna
There is no way to know whether a leap second will occur beyond the half-year
notice period, which can make it tricky to plan synchronized events further
out. The solution, of course, is to use non-leaped time for such planning. As
several have pointed out, libraries handle both kinds of time. As someone who
has to deal with both in data processing, I can report that it's definitely
not a big deal.

------
Nick_C
The answer (for software engineering) is to stop thinking of time as a
continuous flow with periodic markers such as seconds, minutes, etc. Instead,
think of it as chunks.

Mostly one chunk flows seamlessly into the next. Occasionally a chunk is
"missing", such as the skipped hour at the start of Daylight Saving. We are
already set up to handle that. It would be very simple to add another rule to
the DST database to globally add a second now and then.
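
In fact, the tz distribution already ships a leapseconds file of rules in
roughly this shape (entry format from memory, so treat the details as
approximate):

    # Leap  YEAR  MONTH  DAY  HH:MM:SS  CORR  R/S
    Leap    2012  Jun    30   23:59:60  +     S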

I really don't see what the problem is.

------
Lewton
>Over decades, the difference between Earth-based time and atomic clock time
would amount to a few minutes, but over 500 years, they would be out by an
hour. Over millennia, the discrepancy would grow even more.

Is this really true? Based on <http://en.wikipedia.org/wiki/Leap_second>, it
seems there have only been 25 leap seconds in 40 years. That's around a
minute every century.
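
Spelling out the arithmetic:

    25 seconds / 40 years = 0.625 seconds/year ≈ 62.5 seconds/century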

~~~
chalst
Leap seconds are expected to become more frequent, and the last decade has
had unusually few.

~~~
Lewton
Does that mean we expect there to be more than a couple of leap seconds every
year? (Based on the "more than an hour over 500 years" statement)

~~~
chalst
Err, yes, that does sound an order of magnitude out, now I come to think of
it.

The original proposal allows updates four times a year, though only the June
and December updates have been used.

~~~
maggit
Four times a year, with up to two seconds added each time, puts an upper
bound on the adjustment accumulated over 500 years:

500 years * 4 updates/year * 2 seconds/update = 4000 seconds = 66 2/3 minutes

So an hour in 500 years would require adjusting at almost the maximum
possible rate, which isn't realistic.

------
jeffl8n
I know not all of the world observes daylight saving time, but why not make
any leap seconds/minutes/hours coincide with the daylight saving changeover?
It would at least make things simpler for those who do observe DST.

------
lucian1900
Leap minutes seem like the simplest option to me, if keeping synchronisation
with sunrise/sunset is really that important.

------
gojomo
I like the idea of a leap hour once every 500 years or so.

But I suspect that in practice it would never actually be applied. When it
came due, everyone alive would be more accustomed to the 'hour off' times
than to the time of 500 years earlier, so the natural tendency to eschew
change in such matters would make following through on the promised leap hour
unlikely.

~~~
Splenivore
We’re willing to shift our clocks by an hour twice a year to “save daylight,”
whatever that means. Correcting for 500 years of clock drift is a far more
tangible reason. It’s rather pessimistic to assume that the people of 500
years from now would be unwilling to implement this.

~~~
extension
Correcting for clock drift is not tangible in a practical sense though. We
(allegedly) have good reasons for changing our schedule twice a year. Changing
it because that's how people lived 500 years ago is not any kind of reason,
and I think our ancestors will just laugh at the idea when it comes time for
the leap hour.

So if these leap seconds are causing problems, I don't see any reason to keep
them. By the time the consequences of getting rid of them are noticeable, they
will no longer be negative consequences.

------
its_so_on
Having read the article, I for one am glad they're not leaping to any
conclusions without a second thought.

