A positive leap second will be introduced at the end of June 2015 (obspm.fr)
203 points by TheSwordsman on Jan 5, 2015 | 109 comments



Why don't we just switch from UTC to TAI and put the leap seconds in our localtime offsets? It makes all the problems go away, or else reduces them to problems we've already solved for handling daylight saving time.

If you think about it, leap seconds are no different than daylight saving time. The only differences are that daylight saving time usually has 1-hour granularity and is occasionally redefined by your government, whereas leap seconds have 1-second granularity and are occasionally redefined by Mother Nature.

We already know how to deal with repeating seconds and minutes (alternatively, with minutes or hours that are too long). Those happen during daylight saving time changes, but the ambiguity disappears when switching to UTC. We already record timestamps and communicate using UTC, applying a localtime offset for display or user input. That's a solved problem.

Further, we know how to change the localtime offset. Most timezones do that twice a year. We also know how to handle timezone definition changes because ignorant legislatures sometimes modify when the daylight saving time switches occur.

So, why don't we switch from UTC to TAI as an underlying time standard? When UTC leap seconds occur, we can treat them somewhat like a government redefining its timezones. US Eastern Time then changes from "TAI-05:00:35/TAI-04:00:35" to "TAI-05:00:36/TAI-04:00:36".

If two UTC clocks are synchronized, and then one accounts for a leap second while the other misses it, the clocks will no longer be synchronized. If they were TAI clocks, they would still be synchronized and both would still record and communicate correct timestamps. It's just that one would display localtime incorrectly. But you would immediately know that was the case when you see that the offset is a second behind. Missing a leap second would be like having your computer set to the wrong timezone, and would be just as easy to fix.
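
Roughly what that would look like in code (a minimal sketch in Python, with a made-up offset table and helper name, not any real tzdata API): the leap-second history lives in the same table as the timezone rules, and only matters when rendering localtime.

    from datetime import datetime, timedelta

    # Hypothetical table of (TAI instant when a new offset takes effect, TAI-UTC in seconds).
    # A leap second just appends a row, exactly like a timezone rule change.
    TAI_MINUS_UTC = [
        (datetime(2012, 7, 1, 0, 0, 35), 35),
        (datetime(2015, 7, 1, 0, 0, 36), 36),
    ]

    def tai_to_local(tai, zone_offset):
        """Render a TAI timestamp as local wall-clock time."""
        utc_offset = 34                       # value before the first row above
        for effective, seconds in TAI_MINUS_UTC:
            if tai >= effective:
                utc_offset = seconds
        return tai - timedelta(seconds=utc_offset) + zone_offset

    # Stored and communicated timestamps stay in TAI; only the rendering changes.
    print(tai_to_local(datetime(2015, 7, 1, 0, 0, 36), timedelta(hours=-5)))

A machine with a stale table renders localtime a second off, but the TAI values it records and exchanges are unaffected.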


Time zones and leap seconds are fundamentally different kinds of adjustments. Time zones are intrinsically a synthetic concept: they're (basically) just different views on the same canonical (UTC) time, based on arbitrary, regional, human-level notions. A leap second is an adjustment to the canonical time itself. In a pedantic sense, that's a synthetic concept too, but it's intended to model a real physical process: the earth's rotation. In the case of a leap second, that process has actually changed, and the time in all time zones is affected.

Another problem is that we don't know what leap seconds will be added until six months ahead of time. With the current system, if you calculate something for a time that's a year away, it will still be correct if a leap second is added. If you attempt to deal with leap seconds via time zones, then a timestamp calculated before the addition of a leap second will become incorrect when the leap second is added. It's easy to imagine this causing issues just as serious as the ones we have today. Both approaches are error prone in various cases, but at least today it's possible to write correct code that won't be broken by the addition of a leap second.
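
To make the objection concrete, here's a tiny sketch (Python; utc_to_tai is a hypothetical helper) of how a future timestamp converted before the June 2015 announcement ends up pointing one second early once the leap second becomes part of the offset:

    from datetime import datetime, timedelta

    def utc_to_tai(utc, tai_minus_utc):
        """Convert a UTC wall-clock time to TAI using the offset known at conversion time."""
        return utc + timedelta(seconds=tai_minus_utc)

    event_utc = datetime(2015, 12, 25, 12, 0, 0)

    before = utc_to_tai(event_utc, 35)   # converted in 2014, before the announcement
    after  = utc_to_tai(event_utc, 36)   # converted after the June 2015 leap second

    print(after - before)                # 0:00:01 -- the stored TAI value is now wrong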


I disagree. Leap seconds are no more synthetic than timezones. As I'll explain, they are used for very similar purposes. Recognizing that means we have only one problem, not two. We can solve that one problem with existing code and processes with very few, if any, changes. With no extra work, it also elegantly sidesteps the problem of being unable to reliably predict when leap seconds will occur.

Leap seconds exist because we want the sun to be in the same position at the same time of day regardless of what year it is, despite the fact that the earth's rotation is slowing down. Timezones exist because we want the sun to be in the same position at the same time of day, regardless of where we are on the globe.

We can't reliably predict leap seconds in advance because the slowing of the earth's rotation is variable. We can't reliably predict timezone offsets in advance because legislatures are fickle things. We can no more command politicians to stop meddling than we can command the earth to stop slowing down.

Time is a natural phenomenon. It proceeds smoothly at a constant rate. (Ok, relativity. Still...) But, we want our clocks to measure more than just the passage of time. We want them to also indicate the position of the sun relative to where we're standing and what day it is. That causes discrepancies which give rise to time zones. Thus, we need to distinguish between a globaltime that is the same for everybody on earth and various localtime adjustments for convenience.

We generally use UTC as a globaltime standard. The problem with that is that UTC isn't smooth or constant because of leap seconds. Our localtime adjustments are difficult enough. We also have to deal with adjustments to our localtime adjustments. It's a hard problem, but a mostly solved one. Unfortunately, leap seconds mean we have the exact same difficulties in our underlying globaltime standard. We're solving the same hard problems twice, in different ways. I'm suggesting that instead we use a nicer globaltime standard and put all our adjustments and our adjustment adjustments into the existing localtime offset calculations.

We want timezones, so we need the complexities of implementing localtime. We might as well reuse that solution to deal with leap seconds too, since they're basically the same kind of thing.

You're right that with TAI there would be difficulties with calculating times in the future, but we already have those difficulties in a slightly different form. The nature of the difficulties would change, but the change would generally simplify things.

Currently, if you specify a UTC timestamp for a future event, the _duration_ between now and then will vary depending on how many leap seconds there are. However, the globaltime and localtime _timestamps_ would remain constant, regardless of leap seconds. If instead we switched to TAI for our globaltime standard, both the duration and globaltime timestamps would remain constant, and only the localtime timestamp would change.

The unchanging timestamps might make it seem like UTC is better, but that's wrong. The timestamps only _appear_ constant. When those timestamps actually take place depends on leap seconds which can't be known very far in advance.

For example, suppose Alice and Bob are trying to coordinate an activity specified by a UTC timestamp. Neither can know in advance how long to wait, so they can't just set timers. If Alice accounts for all leap seconds but Bob misses one, his clock will be wrong and he'll start early.

If instead they used a TAI timestamp, then they wouldn't have any problems. They could just set a timer. Or, they could base their activities on their TAI globaltime clocks. Or, they could also use their localtime clocks. Bob missed the leap second, so his localtime is a second fast, but he would also think the event starts a second later. The errors would cancel and he wouldn't make a mistake. When Alice applied the leap second to her offset, she would also need to re-compute the now-changed localtime timestamp of the event, but computers are really good at such things. She could also write down the localtime timestamp of the event with an offset. If she applied the leap second to her localtime clock, but not her old pre-computed timestamp for the event, the clock and the timestamp would have different offsets. She would have to convert between the offsets in the same way she would have to convert between timestamps in different timezones.
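
The timer case, as a sketch (Python, illustrative timestamps): with TAI the wait is a plain subtraction and no leap-second table is involved, while the same subtraction on UTC labels silently drops the 2015-06-30 leap second.

    from datetime import datetime

    now_tai   = datetime(2015, 1, 5, 17, 0, 35)
    event_tai = datetime(2015, 7, 10, 12, 0, 36)

    # Exact number of SI seconds to wait, even though a leap second falls in between.
    print((event_tai - now_tai).total_seconds())

    # Doing the same subtraction on UTC labels would be one second short, unless both
    # parties consult an up-to-date leap-second table.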

Leap seconds are the same kind of problem as timezones. We already know how to deal with timezones. We should simplify things and use that one solution for both problems.


Because humans. Remember how confusing and difficult to find those bugs are where you forget to convert between timezones that are, say, less than 3 hours apart? This must be especially familiar to African and European coders, where we get caught by this every time we forget to convert to/from UTC somewhere. It's easy not to notice it, while coding with some test data, when your software displays a time only one or two hours off. I bet many UK coders only notice their bug when summer begins. I'm not sure whether you valley people recognize this as well, but I'm sure you can imagine.

Now, instead, imagine things being only a few seconds off, instead of entire hours. How big is the chance that the bug would be caught before things hit production? What if, in some cases, seconds actually matter?

I think it's just asking for trouble.

To me, this is a bit like proposing binary protocols and not text protocols because it's obviously technically superior. It's a good idea until you factor in that software is made by humans.


Almost joking, but in my experience Europeans (also not coders) are stronger in language-localization and currencies, while Americans (also not coders) are strong in timezones ;-)


> Now, instead, imagine things being only a few seconds off, instead of entire hours.

No, and yes, and anyway it doesn't matter.

No, that wouldn't be the case. If you mess up the local->global conversion, you'd still be hours off, not just seconds. Realistically, we would probably need to define TAI2 = TAI + 35 seconds and then switch to that. Then the only immediate change would be that we'd stop adding the leap second flag to NTP updates. If you were doing UTC conversions correctly before, you'd automatically be doing TAI2 conversions correctly after, but if you mess up that conversion, you'll still be hours off.

But yes, such a change would introduce a new problem: Software or systems that are not updated to handle leap seconds in offsets might display localtime a second or two off.

Still, it wouldn't matter much. Non-TAI2 systems might display localtime a second or two off, but their clocks would still be correctly synchronized. Having an incorrect offset would be like inadvertently defining your own custom timezone. Nobody else would care about your timezone (whether it's a standard one or not), because you'd be communicating with them using global time based on your correctly synchronized clock.
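
A concrete version of that last point (illustrative numbers, Python): both machines agree on the global second count; only the table used for display differs.

    # Hypothetical raw global-time second count, identical on both machines.
    global_seconds = 1435708836

    updated_offset  = 36   # this box learned about the June 2015 leap second
    outdated_offset = 35   # this box missed the update

    print(global_seconds - updated_offset)    # what the up-to-date box labels the moment
    print(global_seconds - outdated_offset)   # one second off on screen, but any timestamp
                                              # it records or exchanges is still global_seconds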


I can confirm about the UK. I've written timezone-sensitive code once during winter and it turned out later that it was full of subtle bugs with serious consequences.


I don't follow. Won't you have all the issues you mentioned by storing timestamps in UTC?

Instead of converting from UTC to Local time, you'll be converting from TAI to Local time.


I'm a big fan of fail-fast, but no-one tests for leap seconds (and how can you?). Every time a leap second happens, there are massive failures, both in big corporate systems (oracle!) and in open-source ones (linux!).

If seconds matter you should be testing seconds. I submit that being off by 3 seconds all the time is an easier mistake to catch in testing than "explodes every time there is a leap second".


I don't think this approach will really work, and it could cause a significant number of issues at the implementation level. While it may work in theory, changing all the software would be a huge pain and it would be more difficult to mentally model.

1. I probably make the mental conversion between UTC, and EST, EDT, PST, etc. several thousand times a year, which is very straightforward to do with an hour offset. However, doing these mental conversions with 4 hours and 3 seconds (think after a few changes) or 3 hours, 59 minutes, and 57 seconds will make mental alignment of data in different offsets massively more difficult. While I'm a huge proponent of representing all our systems in UTC time, it's just not a reality that exists today. Also, I see many peers fail this conversion often enough that adding complexity here would be costly.

2. Some of the protocols used for transmitting time offset from UTC are only capable of a resolution of 15 minutes. If I remember correctly, all current 3GPP standards do this; because the wireless protocols for cellular are highly optimized for small message sizes, they do not currently encode a higher resolution than 15 minutes. This means that today, a cellular network cannot send your phone a time offset from UTC more precise than 15 minutes, and to do so would require a change in standards. This also means that likely no phone on the market today would be capable of reflecting these time differences.

While I don't have a perfect alternative, I did like the idea of abolishing the concept of a leap second itself. Meaning that, from a computational perspective, it is more important that time count forward at a steady rate, and that adjustments to re-synchronize with the slight changes in the earth's rotation should not be made at all. I can't think of any downsides with this approach off hand or from previous readings, but I'm sure it creates its own problems in certain problem spaces I'm not familiar with.


I think this is the approach recommended by djb in http://cr.yp.to/proto/utctai.html


Steve Allen of Lick Observatory has over the years maintained a site discussing the problems of leap seconds (including the internal inconsistencies of POSIX time-related specifications): http://www.ucolick.org/~sla/leapsecs/. There's a lot of history, and a lot of confusion and conflict among the various standards bodies. It does sound like getting rid of leap seconds one way or another might be the best approach.


I haven't done this for any type of crucial system, but I use GPS time if I want a really accurate time reference (I dabble with ham radio and satellites). GPS time is a few seconds off from UTC due to GPS time not recognizing leap seconds. It's fairly trivial to add the seconds to get UTC very accurately, or even record time in GPS time, and do the calculations to UTC or local time after-the-fact (if needed).

I like this idea.


> GPS time is a few seconds off from UTC due to GPS time not recognizing leap seconds.

GPS time has a fixed 19s offset to TAI, so in essence you're already using TAI.
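
For reference, the relationships as of early 2015 (GPS-TAI is fixed by definition; the UTC offsets grow with each leap second):

    TAI_MINUS_GPS = 19          # fixed since GPS time began; GPS never leaps
    TAI_MINUS_UTC = 35          # becomes 36 after the June 2015 leap second

    GPS_MINUS_UTC = TAI_MINUS_UTC - TAI_MINUS_GPS
    print(GPS_MINUS_UTC)        # 16 now, 17 after the leap second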


Even better, GPS itself is sending out the exact UTC offset (and start of validity/moment of insertion of leap seconds)...

http://www.losangeles.af.mil/shared/media/document/AFD-10081... §20.3.3.5.2.4


I like the idea of presenting leap seconds as a change in timezone or time offset. But instead of redefining all the UTC-offset timezones, e.g. redefining EST/EDT from UTC-5/UTC-4 to TAI-05:00:35/TAI-04:00:35, what if we redefined only UTC in terms of an offset from TAI? I think this is actually equivalent to what we have today, but it makes the relationship to TAI more obvious, and it should be easier than leap seconds for developers to understand, because they can reuse their cognitive models for time offsets.

So leap seconds simply get re-branded. Instead of saying 'a positive leap second will be introduced at the end of June 2015', we'll say 'on 1 July 2015 00:00:00 UTC, UTC time will move from TAI-00:00:35 to TAI-00:00:36.'

ADD: I suppose this is more than a "re-branding", as it results in the abolishment of second 60. Under the current system, with leap seconds:

    2015-06-30 23:59:59 UTC = 2015-07-01 00:00:34 TAI
    2015-06-30 23:59:60 UTC = 2015-07-01 00:00:35 TAI
    2015-07-01 00:00:00 UTC = 2015-07-01 00:00:36 TAI
Under a system where UTC is a TAI offset, second 59 gets repeated:

    2015-06-30 23:59:59 UTC = 2015-07-01 00:00:34 TAI
    2015-06-30 23:59:59 UTC = 2015-07-01 00:00:35 TAI
    2015-07-01 00:00:00 UTC = 2015-07-01 00:00:36 TAI
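
Under that scheme the repeated 23:59:59 is ambiguous in exactly the way the repeated hour at a DST fall-back is, and it disambiguates the same way: keep the offset next to the label (illustrative pairs in Python):

    # (UTC wall-clock label, UTC-minus-TAI offset in seconds): the pair is unambiguous.
    first_5959  = ("2015-06-30 23:59:59", -35)   # = 2015-07-01 00:00:34 TAI
    second_5959 = ("2015-06-30 23:59:59", -36)   # = 2015-07-01 00:00:35 TAI
    # Same label, different offsets, just like 01:30 EDT vs 01:30 EST in November.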


The more reasonable thing to do would be to just forget about leap seconds altogether, not to put them elsewhere. Who cares if midday happens at 12:00:00 or at 12:00:30? Those who do need to add their exact longitude anyway, and they don't like time jumping around either. Human culture can adapt to a slow drift (over centuries) of the meaning of 08:00.


The repercussions of such a change are addressed at http://www.ucolick.org/~sla/leapsecs/

Many things depend on using mean solar time, astronomy being one of them. The places that actually need a strict accounting of elapsed time can already use TAI or GPS time.


Why does astronomy "depend" on anything like that? Why should they care how the rest of the world standardizes on time? Seems like they can adapt, rather than forcing everyone else to.

I'd bet the number of non-astronomers inconvenienced by leap seconds outnumbers astronomers 1000 to one.


Astronomy usually needs sidereal time.

Mean solar time is a largely mathematical construct that has no direct relation to anything immediately observable. (Notice the word 'mean' ... to _observe_ mean solar time you'd need to watch the sun for a whole year from the equator.)

To the extent that astronomy applications want solar time at all, they probably want it with more precision than you get from UTC.


We could also accumulate leap seconds and release them as leap minutes once a century or so.


I like the current system better. They happen often enough that software is (or should be) written with them in mind. Otherwise, we'd have y2k-style scrambling every time they occurred (since it would be infrequent enough that the pain of the experience would be dulled with time).


An actual proposal was to wait until an hour has accumulated and then change time zones.


"Due to inflationary pressures the standard work day has been extended another hour". I didn't downvote but suggesting a fudge factor is never going to fly around here!


Fudge factor? What I meant to suggest is to just define that UTC is now equal to TAI, and forget about the offset (set leap seconds to zero, or drive slowly towards zero if required).


Because of High Frequency Trading


Why does HFT care where we are or how we're rotated in relation to the sun? So long as we all agree on what time it is, whether 12pm denotes high noon or not is arbitrary and irrelevant.


The only thing I can think of: a good HFT algo might be able to exploit leap seconds to somehow gain an advantage over those that are ignorant of them (or vice versa).


Valid point imho. Sad to see your comment turned to grey within as little as 8 minutes after you posted it.


How does the system-wide workload of changing from UTC to TAI compare to adding the leap second? No idea about this stuff myself, just curious.


I think TAI is continuous, unlike UTC, so you can continue auto-incrementing timers without leap seconds and breakages, and just apply the leap-second to rendered time, not accounted time.


Workload is pretty low in either case...

Here's the function in the Linux kernel that counts the seconds. It's actually not specified whether this function is called exactly every second, or more often, or less often. It just overflows accumulated nanoseconds to seconds:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

The only leap-second specific thing in it is the function second_overflow() called in the line linked above; its implementation is linked below. second_overflow() checks whether a flag updated by NTP, meaning "there will be a (positive or negative) leap second at the end of today" (bits in time_state), is active, and if that's the case the last second will be repeated, or skipped.

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....

So, computationally, keeping the general complexity of timekeeping in mind, leap second processing is completely insignificant, as evidenced by the comparatively tiny second_overflow() function.
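
Stripped of the kernel details, what second_overflow() decides amounts to roughly this (a simplified sketch in Python, not the actual kernel code):

    def end_of_day_adjustment(leap_state):
        """Seconds to step the clock by when the last second of the day overflows."""
        if leap_state == "insert":     # NTP flagged a positive leap second for today
            return -1                  # step back: 23:59:59 effectively repeats
        if leap_state == "delete":     # NTP flagged a negative leap second
            return +1                  # jump forward: 23:59:59 is skipped
        return 0                       # normal day

    print(end_of_day_adjustment("insert"))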


So that you can calculate the stamp for given date/times in advance.

Edit: This depends on two (currently true) assumptions. (1) We are going to have leap seconds. (2) We will not know far in advance when they will be.


There are also whole leap days once every four years


Not quite that simple:

    if (year is not divisible by 4) then (it is a common year)
    else
    if (year is not divisible by 100) then (it is a leap year)
    else
    if (year is not divisible by 400) then (it is a common year)
    else (it is a leap year)
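
Or, equivalently, as a one-liner (Python):

    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    print([y for y in (1900, 2000, 2012, 2015) if is_leap_year(y)])   # [2000, 2012]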


Last time we had a leap second in 2012, it crashed a whole lot of web servers. There was a bug in Linux kernel threading that MySQL and Java both triggered. Please forgive the self-link, but lots of details from the time on my blog: http://www.somebits.com/weblog/tech/bad/leap-second-2012.htm...

Leap seconds are a fact of timekeeping. Google's approach of slewing the clock seems like a robust way to implement it if you don't trust the operating system to do it right. http://googleblog.blogspot.com/2011/09/time-technology-and-l...
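
The smear described in that post boils down to something like this (a rough sketch of the idea in Python; the window length and linear shape are illustrative, not Google's actual parameters):

    def smear_correction(seconds_until_leap, window=86400.0):
        """Fraction of the leap second already applied, spread linearly over the window."""
        if seconds_until_leap >= window:
            return 0.0
        if seconds_until_leap <= 0:
            return 1.0
        return 1.0 - seconds_until_leap / window

    # Halfway through the window the served time is half a second ahead of un-smeared UTC.
    print(smear_correction(43200))   # 0.5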


>Last time we had a leap second in 2012, it crashed a whole lot of web servers.

That exposed a bunch of underlying bugs, didn't it? In the long run it may leave us better off than we'd be without it, especially with a year's head start this time.


That's the optimistic view. But leap seconds also caused problems in 2008 and 2005. It seems more likely we'll just find some exciting new bugs. Hopefully not one that b0rks the whole kernel though.


Somewhere, there's a vindicated coder shouting, "See! I told you that extra code and storage to allow for a schedule of arbitrary future insertions of leap seconds would pay off!"


> Daniel Gambis

> Head

Daniel is clearly missing out on an opportunity to have the coolest title ever.

'Head of Earth Rotation', or 'Earth Rotation, Director of' would sure look nice on a business card.


The Earth Orientation Center of which he is head belongs to the IERS which, until 2003, was the International Earth Rotation Service (it is now the International Earth Rotation and Reference Systems Service).

So until 2003 there was, in fact, someone whose job title was "Head of International Earth Rotation Service".


Being able to address, in all seriousness, the "authorities responsible for the measurement and distribution of time" seems like a good second place to me though.


Daniel Gambis, Time Lord would go well on a business card.


That is a cool title. But I'm not sure it's actually cooler than 'Planetary Protection Officer':

http://planetaryprotection.nasa.gov/contacts

Particularly given that her job genuinely is to protect the Earth from alien life. Well, and also to protect the rest of the universe from Earth life - which, predictably, is the more difficult bit.


This leap second business seems like a pretty dreadful idea. Considering timezones are way off-base all around the world as it is[1], it seems a lot simpler to just let the clocks drift for a while. One less weird thing in our operating systems, much less opportunity for bugs in all kinds of code that deal with times.

The set of people who really care about solar time "down to the second" is smaller than the set of people who'll be put out by 23:59:59 not being immediately followed by 00:00:00, and most of those people probably want sub-second accuracy anyway. Astronomers complaining about the clock time being wrong is exactly the same as farmers complaining about daylight savings.

1: http://poisson.phc.unipi.it/~maggiolo/index.php/2014/01/how-...


> Astronomers complaining about the clock time being wrong is exactly the same as farmers complaining about daylight savings.

Well, and it's exactly the same (only the sign is changed) as software engineers complaining about leap seconds.

Do a better job and the systems will not suffer from leap seconds. Of course, astronomers and farmers will be more inclined to agree with this statement; software engineers, not so much. :)

We all complain when we are inconvenienced. When other people are inconvenienced, bah, it's a trifle.


The point is that the leap seconds don't actually help astronomers because they need sub-second accuracy anyway, so they're going to need a more exact time and they won't really care whether it says "12:47:45.435" or "12:47:44.435". So it's a lot of work for dubious benefit.

Also, we "suffer" from leap seconds not because we did a bad job designing the system, but because the Earth's rotation is slowing due to tidal forces from the moon, so an earth day today is already a tiny fraction of a second longer than it was when we defined a day to be 24 hours. So we have a couple of choices: change the length of an hour (and minute, and second...) to match the Earth's changing rotational speed; let solar time and clock time gradually drift apart; or add leap seconds to keep them roughly in sync. The first option is clearly insane, since having our measure of time vary over time would be a huge inconvenience. Out of the 2nd and 3rd, options, we have chosen the 3rd. A lot of people are arguing that the 2nd is really a better option. In the (very very) long term, it almost certainly is. Once the Earth's day becomes 25 hours, we'd be adding a leap hour every day!

This is my understanding of the situation, anyway. Anyone with more experience here, feel free to correct me.


Here I would suggest that we use thrusters to fix (yes, fix) Earth's rotational speed. Heck, let's slow it down while we're at it, I could use more hours in a day!



Too bad leap seconds will not be abolished before this happens; abolishing leap seconds is up for consideration in November 2015 at the World Radio Conference:

https://en.wikipedia.org/wiki/Leap_second#Proposal_to_abolis...

I wish they had a call for public comments.


"In May 2014, David Willetts, the ex-UK Minister State for Universities and Science,[38] described as a non-scientist with a degree from the London School of Economics and Political Science,[39] expressed opposition to the abolition of leap seconds. He indicated that as a layman, he wanted to keep "the link between time and people's everyday experience of day and night." He also wanted to keep Greenwich Mean Time in Britain, and "warned" that it would drift towards the United States.[40] An article in the Times science section suggested that the abolition of leap seconds would mean the demise of Britain's role in timekeeping.[41]"

Just gah.

UK: please realize you are a medium-sized EU country, not a global super power. :)

Sincerely, the rest of Europe


Gah. The ignorance, it burns.

It will take thousands of years for the drift to even amount to one hour.

If people care about it then, ... they can switch timezones (e.g. like DST causes twice a year in most places) to compensate for it.


Yes, the UK has questionable politicians. A quick survey suggests this isn't limited to either the UK, English speaking countries, medium sized countries (by landmass or otherwise), or the EU.

Anyway, the UK is realistically still a global great power (the US being the only acknowledged super power):

  6th by GDP (nominal)
  8-10th by GDP (PPP)
  Permanent member of UN Security Council
  Declared nuclear weapon state
  6th by military expenditure
  3rd biggest European country by populace (excluding Russia)
  4th biggest importer and exporter in the world 
  7th biggest R&D spending
The only European countries to consistently be ahead of the UK are the French, and most of the time the Germans.


    Most pervasive cyber-surveillance state
    Most self-righteous has-been in denial.
    Most politically jaded western democracy
    Most disturbingly enthralled celebrity culture.
    World's smallest media attention-span.
    World's best dogging hotspots.


> Most self-righteous has-been in denial.

You sure that's not the French?


I agree. This bullshit temporal nationalism was also used as an argument for stopping Britain from changing to the European time zone (it makes way more sense). Apparently we can't do that because GMT is British. Ugh.


> it makes way more sense

I never understood this argument. The whole point of having a 24-hour clock is so that 12:00 is midday and 00:00 is midnight.

If you break that link then there seems little point in even having time zones, but it is useful to understand that "09:00" is in the morning, in whatever arbitrary location that time is observed.

The instant failure of "Swatch internet time" showed that the current system is still working.


You don't want to know what time the sun's highest in the sky in various places around the world, though - you want to know what time people are likely to get to work, when people will be heading off for lunch, and when they'll be back. And that isn't as simple as knowing when it's 9am or 12 noon everywhere, because cultural assumptions about working hours and lunch hours are as variable as the alignment of timezones to actual astronomical noon. What you actually need is to keep an eye on people's Skype presence indicators.


12:00 is almost never midday. Even at the equator, "midday" drifts by several minutes as the year goes by. Sunrise and sunset times vary a lot by latitude, so 5:00 in one location might mean sunrise, while in another location the sun won't come up for a couple of hours. Ditto for sunset. Even DST can't compensate if the sun is rising at 6:00 and setting at 22:00.


Midday doesn't have to be that precise, and you centre your day around it rather than move the day around sunrise and sunset. It may be that Swatch Internet Time was rather _before_ its time rather than something that will never catch on, but the current system works really well for pretty much everyone.


I think the real reason is that it gives us an advantage in attracting American businesses. If you're an American company and need to open a European office, you're going to open it in the place where people can talk to your New York office for more of the working day, so you go to London rather than to Paris or Frankfurt.


No, we can't do it because it would involve me getting out of bed an hour earlier.

Tell you what: i'll allow a switch to European time if we move the start of the working day to 11:00. Deal?


UK-born/resident person here, couldn't agree more. Sorry for all the British people who still think we have an empire and all that nonsense. It's what's behind all this anti-EU crap that's going around here too. Hate it all.


Do astronomers actually complain about leap seconds? I was under the impression they had their own timescales that were independent of civil timekeeping. The real objection to leap seconds comes from legacy systems like US Air Traffic Control, who complained in the 70s when leap seconds were first instated. And rather than, you know, upgrade their systems in the intervening 40 years, clearly the best approach is to keep complaining about them until they go away.


i for one vote for a base 10 system that takes all that into account.

imagine splitting time not exactly at the seasons' beginnings/ends (when was the last time you synched your clocks with a solstice?). no leap seconds. no 28-31 day months. no 365±1 day years. even daylight saving time benefits can be worked into the system if we don't sync 100% with the solar day but skew it just enough that the drift aligns things with daylight saving benefits and you can still tell the time by simply looking at the sun.

it's not hard. just nobody has the balls to even suggest this seriously out of math circles.


At least most people who suggest this use one of the existing could-actually-work-even-though-it-will-never-be-implemented approaches (usually 364+1 "year day" or +2 for a leap).

There are a fractional number of days in a year. You can't have a year with an integral number of days (without varying the number of days in a year, i.e. leap years).

The proposal to abandon leap seconds is appealing but as short-sighted as two-digit years. You'll have to dynamically adjust times eventually because the rate of the earth's rotation changes. We can either do it right, incrementally, and programmers can manage to implement a standard that's been around for over forty years, or we can ignore it and pass the problem along. I suppose I can guess what will happen there.


I don't think anyone should bring up DST when talking about time representations. DST is a stupid hack for artificially controlling economic forces, and has no meritorious bearing on timekeeping itself.

Let's face it: decimal time was really only intended for people doing lots of math work involving time. If the argument for base 10 is that calculations and representations are "easier" than base 12 (mainly because all modern human civilizations use base 10), why not make the argument that base 2 should be used, since computers virtually think for us at this point?

The real reason our representation of time is still so fucked up is that it has to be used by all humans, and all humans don't want to deal with lots of digits, fractions, floating point, etc. Sexagesimal is just simpler, even if we do have to occasionally jump through some hoops to keep this piece of crap timekeeping system in order. One second every couple of years? Converting to decimal just to deal with little problems like that seems like a completely unreasonable suggestion to most people.


Ever since I was a kid, the concept of a seven day week threw me. Way back then I came up with a ten day week, three weeks a month, 13 months a year, and five non-month "holidays" calendar in my head. The five non-month holidays were (if I recall correctly) summer and winter solstice, spring and fall equinox, and New Year's Day.

I never could get around the 1/4 day problem, and still had to account for leap years with a sixth non-month day every four years. It was a fun thought experiment as a teenager interested in math and science though.


FYI, you have reinvented the French Republican Calendar[1], only you missed out on decimal time (1 day is 10 hours, 1 hour is 100 minutes, 1 minute is 100 seconds; 1 second is 0.864 conventional seconds)

[1] http://en.wikipedia.org/wiki/French_Republican_Calendar


Why are you all looking for base-10 hours, when it's base 10 itself that's wrong??? Numbers should be switched to base 12. I mean we often need to divide 1 kg by halves, quarters and thirds, and it should come out even.


We should do base 16, though, so it lines up with binary.


Not divisible by three, which comes up a lot. 12 is superior.

60 is probably best, tho.


Oh, I'm not surprised it's not an original idea, and I never claimed it was. Just the musings of a child fascinated by the seasons. :-)

edit: And I just realized I had a typo up there; I wrote "13 months" when I meant "12 months". D'oh.


You sir, are a beacon. A bacon of hope, light, justice and marshmallows!

Your idea has definitely never been tried or abolished. http://en.m.wikipedia.org/wiki/Decimal_time


At least this one is in June. Whenever they do one in December, you have to suffer through a terrible explanation of what a leap second is from a TV newscaster, explaining why the ball isn't going to drop for an extra second.


I'm curious to see how this affects Spanner (Google's new database that seems to rely on extremely precisely synchronized time to coordinate transactions).

Anyone on the Spanner team able to comment?


Google wrote a blog post about this a few years ago:

- http://googleblog.blogspot.com/2011/09/time-technology-and-l...


There was a post in the past about Google running their own custom timeservers for their servers, slowly adjusting time over multiple days and months, so they are not affected by this.


In their databases that depend on time, they repeat that they use an external source of time.

It seems like some of their designs are constrained around always non-negative deltas between events, making things append-only so their trees or hash tables are only balanced once when written to disk.


I haven't read about Spanner, but wouldn't it be possible for their clock to measure time based on an epoch, instead of using the system based on physical factors (the rotation of the earth around its axis and around the sun) that we use?


It is probably easier in the long run if the servers have the same time as the machines that people use, which are regularly updated by their OS's choice of NTP server, since people run reports and react accordingly.


It wouldn't affect Spanner or any other sanely implemented system - usually nowadays time is saved as "units from epoch", and leap seconds, days, etc. don't matter for internal calculations. It matters only for final conversion to printable format, debug logs, etc.


Most UNIX systems store time as "seconds from the epoch, according to UTC". The advantage is that you can always assume that 1 day = 86400 seconds. The disadvantage is that the time is affected by leap seconds. The same-numbered POSIX second can be repeated twice in a row.

Spanner instead deals with it by changing the length of the second.

If you're assuming time intervals are accurate to the second, or that times are monotonic, have you actually reconfigured your system to use TAI?
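
For what the repeat looks like in practice (Python; values around the 2012-06-30 leap second):

    import calendar

    # POSIX time has no slot for 23:59:60 UTC, so the count reuses a value instead.
    print(calendar.timegm((2012, 6, 30, 23, 59, 59)))   # 1341100799
    print(calendar.timegm((2012, 7, 1, 0, 0, 0)))       # 1341100800
    # The leap second is typically served as a repeat of 1341100799 (or stepped/smeared),
    # so a naive difference across that midnight is one SI second short.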


One cannot even assume that '1 day = 86400 seconds' because of DST.


I've never seen a UNIX system that changes the system clock due to DST. You change the local time zone.


This will probably resonate with people here[1]. A brilliant video by Tom Scott for Computerphile, where he starts to explain what coding timezones correctly is like and by the end of it is in full polite-mini-rant-mode :D

[1] https://www.youtube.com/watch?v=-5wpm-gesOY


My personal opinion is that leap seconds have no place in civil timekeeping. I love them as geeky trivia but in practical terms they are a disaster. They do solve a real problem of keeping solar time roughly in sync with atomic time due to the slowing rotation of the Earth, but in today's computer-dominated and interconnected world inflicting a one-second discontinuity every few years on millions of systems individually rather than handling it in a central place is a bad idea. It's like a mini-Y2K several times a decade.

The core issue is we have two different needs: a hyper-accurate count of elapsed SI seconds, and a day-to-day date/time system that roughly tracks the position of the sun in the sky. UTC tries to combine these into the same thing, when they would best be left separate.

For the former, we already have a pure atomic timescale -- TAI -- to handle this. I would go a step further and not even present TAI as a date and time, but rather a raw count of seconds like unix time. This is to enforce that TAI is not civil time, but rather a standard to benchmark and calibrate against.

For the latter, I propose a new time standard that is a transformation function applied to TAI. The IERS, instead of decreeing leap seconds like they do now, would instead declare an offset and skew rate that smears the leap second out over a longer period. Something like: at date X, civil time will be Y seconds ahead of TAI, and will tick Z% faster/slower until further notice, where Z is expressed in a simple unit like parts-per-billion or milliseconds per day. Instead of leap seconds every 18 months or so, with this scheme the IERS could probably get away with making adjustments once every five years, and still stay within the 0.9s of true solar time as mandated by UTC.

Any modern microprocessor could handle this transformation trivially. Most would not even need to as the internal clock resonators in 99.9% of the world's computers and clocks are less accurate than the skew rate Z. They need to periodically resync with a master clock anyway and the drift would be indistinguishable from noise.

To be clear, no one would be changing the length of the second. An SI second is and always will be an SI second, and high-precision computing and science will be done with reference to SI seconds. But the elapsed time between consecutive seconds of civil time will be not quite an SI second, and will differ by an amount so small that for the purposes of day-to-day timekeeping is in effect indistinguishable.
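
In code the proposal is just an affine function of TAI (a sketch with made-up numbers for X, Y and Z):

    # Hypothetical bulletin: at TAI second count X, civil time is offset by Y seconds
    # from TAI and then runs skewed by Z parts per billion until the next bulletin.
    X = 1120176000           # TAI seconds at which the bulletin takes effect (made up)
    Y = -35.0                # civil minus TAI at that instant, in seconds
    Z = -20.0                # skew in ppb; -20 ppb is roughly -1.7 ms per day

    def civil_from_tai(tai_seconds):
        """Civil time as a raw second count, derived from TAI."""
        return tai_seconds + Y + (tai_seconds - X) * (Z * 1e-9)

    # Over five years (~1.6e8 s) a -20 ppb skew absorbs about 3.2 seconds of drift smoothly.
    five_years = X + 5 * 365 * 86400
    print(civil_from_tai(five_years) - (five_years + Y))   # about -3.15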


Adding to my original comment:

Getting rid of leap seconds and being done with it -- effectively setting civil time to atomic time -- may seem like an attractive alternative that achieves the same effect without the complications of transformation functions and skew rates, but it ignores that since the Earth's rotation is continuing to slow down, the offset between atomic and solar time is going to increase quadratically. While the first leap hour is 900 years out, the next one after that is only another 400, then 300, then 250... Someone is going to have to solve this problem for good eventually, and that will either involve applying the correction continuously as in my scheme, thus eliminating the quadratic compounding of the error, or eventually re-defining the second at some distant point in the future to match the Earth's rotation (and thus starting the situation over from scratch).


This really messes with some of the stuff I do at work with test instrumentation, since we use GPS time in the field but our computers use UTC. We had a problem late in 2014 when one of our instrumentation techs was using an old instruction manual and he was converting his data using a 14 second offset between GPS time and UTC, instead of the correct 16 seconds. Now we'll have to issue all new instruction manuals to the technicians-- or maybe we can just fall back to a 2011 version that had a 15 second offset.


Write in the instruction manual "The number of seconds offset is shown on the web page: <insert address of your updated page>"


> or maybe we can just fall back to a 2011 version that had a 15 second offset.

Won't the new offset be 17s? The offset would only diminish if a negative leap second was enacted which has yet to happen.


I must have misread the article. I thought a positive offset was different from all the other ones. In that case, it's still a new manual version, but much less unusual.


If folks are (reasonably) feeling nervous about this, I've got a test program that lets you insert leap seconds to test your application behavior. Would love to hear about anything folks see that might point to any remaining kernel issues.

https://github.com/johnstultz-work/timetests/blob/master/lea...


Is there an NTP command to see if NTP was informed about the change?


It's not quite that simple, but I think this is a good explanation: http://support.ntp.org/bin/view/Support/ConfiguringNTP#Secti.... (TL;DR: don't worry about it).


Everybody's talking about the time situation here, but what about the fact that the world's clocks will gain a second and the site announcing the fact isn't even signed and secure. It seems like the message should definitely have a cryptographic signature and that the site should be reachable via https.


http://www.leapsecond.com/java/gpsclock.htm

A cool realtime clock that shows the offset the GPS satellites work from, which do not include leap seconds.


It could be worse. They could have leap tenths of seconds ten times as often.


That would arguably be better: the smaller distortion would be less likely to break things, and the frequency would make it much more likely that things were tested against it.

(Of course best would be to have none at all, and accept that several thousand years from now people-- if people still exist then-- may choose to slip the timezones by one hour to correct the drift.)


Yay, more sleep!

Wait... (very briefly thinking about broken assumptions, date calculations, month-end assumptions, 0:59 versus 0:60 vs. 0:00, ...) dammit, I just lost more than I gained! Thanks, France!


Ugh not again http://phys.org/tags/leap+second/

Quantas crashed last time. I mean their computers, not the planes.


There's no 'u' in Qantas - it was originally an initialism.


Thanks for the update. Writing a Go program now which is pretty sensitive to these, and it's pretty annoying to have to consider them in your tests.


Incidentally, time.nist.gov is down. It hosts a list of leap seconds at ftp://time.nist.gov/pub/leap-seconds.list.


Remember when we'd have one of these almost every year? Those were the days.

Kids today... Get off my lawn.



Hurray, time to see NTP screw up hosts all over the internet yet again.

(Oh wait, the last leapsecond event didn't even involve an actual leapsecond)


Does this require a code change?


They did say that 2015 would be the year we got time travel. One second is a little less than anticipated though.



