If you think about it, leap seconds are no different than daylight saving time. The only differences are that daylight saving time usually has 1-hour granularity and is occasionally redefined by your government, whereas leap seconds have 1-second granularity and are occasionally redefined by Mother Nature.
We already know how to deal with repeating seconds and minutes (alternatively, with minutes or hours that are too long). Those happen during daylight saving time changes, but the ambiguity disappears when switching to UTC. We already record timestamps and communicate with UTC, applying a localtime offset for display or user input. That's a solved problem.
Further, we know how to change the localtime offset. Most timezones do that twice a year. We also know how to handle timezone definition changes because ignorant legislatures sometimes modify when the daylight saving time switches occur.
So, why don't we switch from UTC to TAI as an underlying time standard? When UTC leap seconds occur, we can treat them somewhat like a government redefining its timezones. US Eastern Time then changes from "TAI-05:00:35/TAI-04:00:35" to "TAI-05:00:36/TAI-04:00:36".
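As a rough sketch of what that could look like, a leap-aware zone offset would just be a signed second count that happens not to be a round number of minutes (the constant and function names below are made up for illustration):

    #include <stdio.h>

    /* Hypothetical US Eastern Standard Time as "TAI-05:00:36": the
     * offset absorbs both the 5-hour zone and the 36 leap seconds. */
    #define EST_TAI_OFFSET_S (-(5 * 3600 + 36))

    /* Both values are plain second counts, unix-style. */
    long tai_to_local(long tai_s) { return tai_s + EST_TAI_OFFSET_S; }

    int main(void)
    {
        printf("%ld\n", tai_to_local(1435708836L)); /* arbitrary instant */
        return 0;
    }

A leap second would then ship as an ordinary zone-database update that bumps the constant, not as a special case inside the clock itself.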
If two UTC clocks are synchronized, and then one accounts for a leap second while the other misses it, the clocks will no longer be synchronized. If they were TAI clocks, they would still be synchronized and both would still record and communicate correct timestamps. It's just that one would display localtime incorrectly. But, you would immediately know that was the case when you see that the offset is a second behind. Missing a leap second would be like having your computer set to the wrong timezone, and would be just as easy to fix.
Another problem is that we don't know what leap seconds will be added until six months ahead of time. With the current system, if you calculate something for a time that's a year away, it will still be correct if a leap second is added. If you attempt to deal with leap seconds via time zones, then a timestamp calculated before the addition of a leap second will become incorrect when the leap second is added. It's easy to imagine this causing issues just as serious as the ones we have today. Both approaches are error prone in various cases, but at least today it's possible to write correct code that won't be broken by the addition of a leap second.
Leap seconds exist because we want the sun to be in the same position at the same time of day regardless of what year it is, despite the fact that the earth's rotation is slowing down. Timezones exist because we want the sun to be in the same position at the same time of day, regardless of where we are on the globe.
We can't reliably predict leap seconds in advance because the slowing of the earth's rotation is variable. We can't reliably predict timezone offsets in advance because legislatures are fickle things. We can no more command politicians to stop meddling than we can command the earth to stop slowing down.
Time is a natural phenomenon. It proceeds smoothly at a constant rate. (Ok, relativity. Still...) But, we want our clocks to measure more than just the passage of time. We want them to also indicate the position of the sun relative to where we're standing and what day it is. That causes discrepancies which give rise to time zones. Thus, we need to distinguish between a globaltime that is the same for everybody on earth and various localtime adjustments for convenience.
We generally use UTC as a globaltime standard. The problem with that is that UTC isn't smooth or constant because of leap seconds. Our localtime adjustments are difficult enough. We also have to deal with adjustments to our localtime adjustments. It's a hard problem, but a mostly solved one. Unfortunately, leap seconds mean we have the exact same difficulties in our underlying globaltime standard. We're solving the same hard problems twice, in different ways. I'm suggesting that instead we use a nicer globaltime standard and put all our adjustments and our adjustment adjustments into the existing localtime offset calculations.
We want timezones, so we need the complexities of implementing localtime. We might as well reuse that solution to deal with leap seconds too, since they're basically the same kind of thing.
You're right that with TAI there would be difficulties with calculating times in the future, but we already have those difficulties in a slightly different form. The nature of the difficulties would change, but the change would generally simplify things.
Currently, if you specify a UTC timestamp for a future event, the _duration_ between now and then will vary depending on how many leap seconds there are. However, the globaltime and localtime _timestamps_ would remain constant, regardless of leap seconds. If instead we switched to TAI for our globaltime standard, both the duration and globaltime timestamps would remain constant, and only the localtime timestamp would change.
The unchanging timestamps might make it seem like UTC is better, but that's wrong. The timestamps only _appear_ constant. When those timestamps actually take place depends on leap seconds which can't be known very far in advance.
For example, suppose Alice and Bob are trying to coordinate an activity specified by a UTC timestamp. Neither can know in advance how long to wait, so they can't just set timers. If Alice accounts for all leap seconds but Bob misses one, his clock will be wrong and he'll start early.
If instead they used a TAI timestamp, then they wouldn't have any problems. They could just set a timer. Or, they could base their activities on their TAI globaltime clocks. Or, they could also use their localtime clocks. Bob missed the leap second, so his localtime is a second fast, but he would also think the event starts a second later. The errors would cancel and he wouldn't make a mistake. When Alice applied the leap second to her offset, she would also need to re-compute the now-changed localtime timestamp of the event, but computers are really good at such things. She could also write down the localtime timestamp of the event with an offset. If she applied the leap second to her localtime clock, but not her old pre-computed timestamp for the event, the clock and the timestamp would have different offsets. She would have to convert between the offsets in the same way she would have to convert between timestamps in different timezones.
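Here's a toy demonstration of that cancellation, with made-up numbers; the point is only the algebra (localtime = TAI + offset on both sides, so a consistently wrong offset drops out):

    #include <assert.h>

    int main(void)
    {
        long event_tai = 1000000;  /* agreed TAI timestamp of the event */
        long alice_off = -36;      /* Alice's correct TAI->local offset */
        long bob_off   = -35;      /* Bob missed a leap second */

        /* Each converts the event to their own localtime... */
        long alice_local = event_tai + alice_off;
        long bob_local   = event_tai + bob_off;

        /* ...and acts when their local clock (tai_now + offset) reads
         * that value, i.e. when tai_now + off == event_tai + off. */
        assert(alice_local - alice_off == event_tai);
        assert(bob_local - bob_off == event_tai);
        return 0;
    }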
Leap seconds are the same kind of problem as timezones. We already know how to deal with timezones. We should simplify things and use that one solution for both problems.
Now, instead, imagine things being only a few seconds off, instead of entire hours. How big is the chance that the bug would be caught before things hit production? What if, in some cases, seconds actually matter?
I think it's just asking for trouble.
To me, this is a bit like proposing binary protocols and not text protocols because it's obviously technically superior. It's a good idea until you factor in that software is made by humans.
No, and yes, and anyway it doesn't matter.
No, that wouldn't be the case. If you mess up the local->global conversion, you'd still be hours off, not just seconds. Realistically, we would probably need to define TAI2 = TAI - 35 seconds (so that TAI2 coincides with UTC at the moment of the switch) and then switch to that. Then the only immediate change would be that we'd stop adding the leap second flag to NTP updates. If you were doing UTC conversions correctly before, you'd automatically be doing TAI2 conversions correctly after, but if you mess up that conversion, you'll still be hours off.
But yes, such a change would introduce a new problem: Software or systems that are not updated to handle leap seconds in offsets might display localtime a second or two off.
Still, it wouldn't matter much. Non-TAI2 systems might display localtime a second or two off, but their clocks would still be correctly synchronized. Having an incorrect offset would be like inadvertently defining your own custom timezone. Nobody else would care about your timezone (whether it's a standard one or not), because you'd be communicating with them using global time based on your correctly synchronized clock.
Instead of converting from UTC to Local time, you'll be converting from TAI to Local time.
If seconds matter you should be testing seconds. I submit that being off by 3 seconds all the time is an easier mistake to catch in testing than "explodes every time there is a leap second".
1. I probably make the mental conversion between UTC, and EST, EDT, PST, etc several thousand times a year, which is very straightforward to do with an hour offset. However, doing these mental conversions with 4 hours and 3 seconds (think after a few changes) or 3 hours, 59 minutes, and 57 seconds will make mental alignment of data in different offsets massively more difficult. While I'm a huge proponent of representing all our systems in UTC time, it's just not a reality that exists today. Also, I see many peers fail this conversion often enough that adding complexity here would be costly.
2. Some of the protocols used for transmitting time offsets from UTC are only capable of a resolution of 15 minutes. If I remember correctly, all current 3GPP standards do this: because the wireless protocols for cellular are highly optimized for small message sizes, they do not currently encode a higher resolution than 15 minutes. This means that today, a cellular network cannot send your phone a time offset from UTC more precise than 15 minutes, and to do so would require a change in standards. It also means that likely no phone on the market today would be capable of reflecting these time differences.
While I don't have a perfect alternative, I did like the idea of abolishing the concept of a leap second itself. Meaning that it is more important, from a computation perspective, that time count forward at a steady rate, and that adjustments to re-synchronize with the slight changes in the earth's rotation should not be made at all. I can't think of any downsides to this approach offhand or from previous readings, but I'm sure it creates its own problems in certain problem spaces I'm not familiar with.
I like this idea.
GPS time has a fixed 19s offset to TAI, so in essence you're already using TAI.
So leap seconds simply get re-branded. Instead of saying 'a positive leap second will be introduced at the end of June 2015', we'll say 'on 1 July 2015 00:00:00 UTC, UTC time will move from TAI-00:00:35 to TAI-00:00:36.'
ADD: I suppose this is more than a "re-branding", as it results in the abolishment of second 60. Under the current system, with leap seconds:
2015-06-30 23:59:59 UTC = 2015-07-01 00:00:34 TAI
2015-06-30 23:59:60 UTC = 2015-07-01 00:00:35 TAI
2015-07-01 00:00:00 UTC = 2015-07-01 00:00:36 TAI
And under the proposed system, with an offset change instead of a leap second:
2015-06-30 23:59:59 UTC = 2015-07-01 00:00:34 TAI
2015-06-30 23:59:59 UTC = 2015-07-01 00:00:35 TAI (the offset changes here, so 23:59:59 repeats)
2015-07-01 00:00:00 UTC = 2015-07-01 00:00:36 TAI
Many things depend on using mean solar time, astronomy being one of them. The places that actually need a strict accounting of elapsed time can already use TAI or GPS time.
I'd bet the number of non-astronomers inconvenienced by leap seconds outnumbers astronomers 1000 to one.
Mean solar time is a largely mathematical construct that has no direct relation to anything immediately observable. (Notice the word 'mean' ... to _observe_ mean solar time you'd need to watch the sun for a whole year from the equator.)
To the extent that an astronomy application wants solar time at all, it probably wants it with more precision than you get from UTC.
Here's the function in the Linux kernel that counts the seconds. It's actually not specified whether this function is called exactly every second, or more often, or less often; it just overflows accumulated nanoseconds into seconds:
The only leap-second-specific thing in it is the function second_overflow(), called in the line linked above (its implementation is linked below). second_overflow() checks a flag updated by NTP (bits in time_state) meaning "there will be a positive or negative leap second at the end of today", and if it is set, the last second of the day is repeated or skipped.
So, computationally, keeping the general complexity of timekeeping in mind, leap second processing is completely insignificant, as is evident from the comparatively tiny second_overflow() function.
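For readers who don't want to chase the links, here's a greatly simplified sketch of the idea; this is not the kernel's actual code, and the state and function names here are invented:

    #include <stdint.h>

    enum leap_state { LEAP_NONE, LEAP_INSERT, LEAP_DELETE };
    static enum leap_state time_state = LEAP_NONE;  /* set via NTP */

    /* Called when accumulated nanoseconds overflow into a new second;
     * returns the (possibly adjusted) second count. */
    int64_t second_overflow_sketch(int64_t secs)
    {
        if (time_state == LEAP_INSERT && secs % 86400 == 0) {
            time_state = LEAP_NONE;
            return secs - 1;    /* replay 23:59:59, displayed as :60 */
        }
        if (time_state == LEAP_DELETE && secs % 86400 == 86399) {
            time_state = LEAP_NONE;
            return secs + 1;    /* skip 23:59:59 entirely */
        }
        return secs;
    }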
Edit: This depends on two (currently true) assumptions. (1) We are going to have leap seconds. (2) We will not know far in advance when they will be.
    /* Gregorian leap-year rules, as runnable C: */
    int is_leap_year(int year)
    {
        if (year % 4 != 0)   return 0;  /* common year */
        if (year % 100 != 0) return 1;  /* leap year */
        if (year % 400 != 0) return 0;  /* common year */
        return 1;                       /* leap year */
    }
Leap seconds are a fact of timekeeping. Google's approach of slewing the clock seems like a robust way to implement it if you don't trust the operating system to do it right. http://googleblog.blogspot.com/2011/09/time-technology-and-l...
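Their servers gradually "lied" about the time in the window before the leap. A linear version of that smear might look like the sketch below (Google's actual 2011 implementation used a different curve on their NTP servers; the one-day window here is an assumption):

    #include <stdint.h>

    #define SMEAR_WINDOW_S 86400  /* assumed: spread 1 s over one day */

    /* Inputs are TAI-style nanosecond counts; leap_ns is the instant the
     * positive leap second takes effect. Before the window the clock is
     * untouched; afterwards it runs exactly one second behind. */
    int64_t smeared_ns(int64_t now_ns, int64_t leap_ns)
    {
        int64_t start = leap_ns - (int64_t)SMEAR_WINDOW_S * 1000000000;
        if (now_ns <= start)
            return now_ns;
        if (now_ns >= leap_ns)
            return now_ns - 1000000000;
        /* inside the window: pro-rate the missing second linearly */
        return now_ns - (now_ns - start) / SMEAR_WINDOW_S;
    }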
That exposed a bunch of underlying bugs, didn't it? It may be a good thing in the long run compared with doing nothing, especially with a year's head start.
Daniel is clearly missing out on an opportunity to have the coolest title ever.
'Head of Earth Rotation', or 'Earth Rotation, Director of' would sure look nice on a business card.
So until 2003 there was, in fact, someone whose job title was "Head of International Earth Rotation Service".
Particularly given that her job genuinely is to protect the Earth from alien life. Well, and also to protect the rest of the universe from Earth life - which, predictably, is the more difficult bit.
The set of people who really care about solar time "down to the second" is smaller than the set of people who'll be put out by 23:59:59 not being immediately followed by 00:00:00, and most of those people probably want sub-second accuracy anyway. Astronomers complaining about the clock time being wrong is exactly the same as farmers complaining about daylight savings.
Well, and it's exactly the same (only with the sign changed) as software engineers complaining about leap seconds.
Do a better job and the systems will not suffer from leap seconds. Of course, astronomers and farmers will be more inclined to agree with this statement; software engineers, not so much. :)
We all complain when we are inconvenienced. When other people are inconvenienced, bah, it's a trifle.
Also, we "suffer" from leap seconds not because we did a bad job designing the system, but because the Earth's rotation is slowing due to tidal forces from the moon, so an earth day today is already a tiny fraction of a second longer than it was when we defined a day to be 24 hours. So we have a couple of choices: change the length of an hour (and minute, and second...) to match the Earth's changing rotational speed; let solar time and clock time gradually drift apart; or add leap seconds to keep them roughly in sync. The first option is clearly insane, since having our measure of time vary over time would be a huge inconvenience. Out of the 2nd and 3rd options, we have chosen the 3rd. A lot of people are arguing that the 2nd is really a better option. In the (very very) long term, it almost certainly is. Once the Earth's day becomes 25 hours, we'd be adding a leap hour every day!
This is my understanding of the situation, anyway. Anyone with more experience here, feel free to correct me.
I wish they had a call for public comments.
UK: please realize you are a medium-sized EU country, not a global super power. :)
Sincerely, the rest of Europe
It will take thousands of years for the drift to even amount to one hour.
If people care about it then, ... they can switch timezones (like the shifts DST causes twice a year in most places) to compensate for it.
Anyway, the UK is realistically still a global great power (the US being the only acknowledged super power):
6th by GDP (nominal)
8-10th by GDP (PPP)
Permanent member of UN Security Council
Declared nuclear weapon state
6th by military expenditure
3rd biggest European country by population (excluding Russia)
4th biggest importer and exporter in the world
7th biggest R&D spending
Most pervasive cyber-surveillance state
Most self-righteous has-been in denial.
Most politically jaded western democracy
Most disturbingly enthralled celebrity culture.
World's smallest media attention span.
World's best dogging hotspots.
You sure that's not the French?
I never understood this argument. The whole point of having a 24-hour clock is so that 12:00 is midday and 00:00 is midnight.
If you break that link then there seems little point in even having time zones, but it is useful to understand that "09:00" is in the morning, in whatever arbitrary location that time is observed.
The instant failure of "Swatch internet time" showed that the current system is still working.
Tell you what: I'll allow a switch to European time if we move the start of the working day to 11:00. Deal?
Imagine splitting time not exactly at season beginnings and ends (when was the last time you synched your clocks with a solstice?): no leap seconds, no 28-31 day months, no 365±1 day years. Even daylight saving time's benefits can be worked into the system if we don't sync 100% with the solar day but skew it just enough that the drift aligns things with daylight saving's benefits, while you can still tell the time by simply looking at the sun.
It's not hard. Just nobody has the balls to even suggest this seriously outside of math circles.
There are a fractional number of days in a year. You can't have a year with an integral number of days (without varying the number of days in a year, i.e. leap years).
The proposal to abandon leap seconds is appealing but as short-sighted as two-digit years. You'll have to dynamically adjust times eventually because the rate of the earth's rotation changes. We can either do it right, incrementally, and programmers can manage to implement a standard that's been around for over forty years, or we can ignore it and pass the problem along. I suppose I can guess what will happen there.
Let's face it: decimal time was really only intended for people doing lots of math work involving time. If the argument for base 10 is that calculations and representations are "easier" than base 12 (mainly because all modern human civilizations use base 10), why not make the argument that base 2 should be used, since computers virtually think for us at this point?
The real reason our representation of time is still so fucked up is that it has to be used by all humans, and all humans don't want to deal with lots of digits, fractions, floating point, etc. Sexagesimal is just simpler, even if we do have to occasionally jump through some hoops to keep this piece of crap timekeeping system in order. One second every couple of years? Converting to decimal just to deal with little problems like that seems like a completely unreasonable suggestion to most people.
I never could get around the 1/4 day problem, and still had to account for leap years with a sixth non-month day every four years. It was a fun thought experiment as a teenager interested in math and science though.
60 is probably best, tho.
edit: And I just realized I had a typo up there; I wrote "13 months" when I meant "12 months". D'oh.
Your idea has definitely never been tried or abolished. http://en.m.wikipedia.org/wiki/Decimal_time
Anyone on the Spanner team able to comment?
It seems like some of their designs are constrained around always-non-negative deltas between events, making things append-only so their trees or hash tables are only balanced once, when written to disk.
Spanner instead deals with it by changing the length of the second.
If you're assuming time intervals are accurate to the second, or that times are monotonic, have you actually reconfigured your system to use TAI?
The core issue is we have two different needs: a hyper-accurate count of elapsed SI seconds, and a day-to-day date/time system that roughly tracks the position of the sun in the sky. UTC tries to combine these into the same thing, when they would best be left separate.
For the former, we already have a pure atomic timescale -- TAI -- to handle this. I would go a step further and not even present TAI as a date and time, but rather a raw count of seconds like unix time. This is to enforce that TAI is not civil time, but rather a standard to benchmark and calibrate against.
For the latter, I propose a new time standard that is a transformation function applied to TAI. The IERS, instead of decreeing leap seconds like they do now, would instead declare an offset and skew rate that smears the leap second out over a longer period. Something like: at date X, civil time will be Y seconds ahead of TAI, and will tick Z% faster/slower until further notice, where Z is expressed in a simple unit like parts-per-billion or milliseconds per day. Instead of leap seconds every 18 months or so, with this scheme the IERS could probably get away with making adjustments once every five years, and still stay within the 0.9s of true solar time as mandated by UTC.
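A sketch of what evaluating such a declaration could look like, with made-up values for X, Y, and Z (real declarations would come from the IERS):

    #include <stdint.h>

    static const int64_t X_ns  = 0;             /* TAI epoch of the decree */
    static const int64_t Y_ns  = -35000000000;  /* civil 35 s behind TAI */
    static const int64_t Z_ppb = -12;           /* skew: ~1 ms/day slower */

    /* Z parts-per-billion is Z nanoseconds per elapsed second. */
    int64_t civil_from_tai_ns(int64_t tai_ns)
    {
        int64_t elapsed_s = (tai_ns - X_ns) / 1000000000;
        return tai_ns + Y_ns + elapsed_s * Z_ppb;
    }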
Any modern microprocessor could handle this transformation trivially. Most would not even need to, as the internal clock resonators in 99.9% of the world's computers and clocks are less accurate than the skew rate Z; they need to periodically resync with a master clock anyway, and the drift would be indistinguishable from noise.
To be clear, no one would be changing the length of the second. An SI second is and always will be an SI second, and high-precision computing and science will be done with reference to SI seconds. But the elapsed time between consecutive seconds of civil time will be not quite an SI second, differing by an amount so small that, for the purposes of day-to-day timekeeping, it is in effect indistinguishable.
Getting rid of leap seconds and being done with it -- effectively setting civil time to atomic time -- may seem like an attractive alternative that achieves the same effect without the complications of transformation functions and skew rates, but it ignores that since the Earth's rotation is continuing to slow down, the offset between atomic and solar time is going to increase quadratically. While the first leap hour is 900 years out, the next one after that is only another 400, then 300, then 250... Someone is going to have to solve this problem for good eventually, and that will either involve applying the correction continuously as in my scheme, thus eliminating the quadratic compounding of the error, or eventually re-defining the second at some distant point in the future to match the Earth's rotation (and thus starting the situation over from scratch).
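To see the quadratic growth, here's a back-of-the-envelope model; the starting excess and lengthening rate are assumed round numbers, and the real rotation is nowhere near this smooth:

    #include <stdio.h>

    int main(void)
    {
        double drift_s  = 0.0;    /* accumulated atomic-minus-solar error */
        double excess_s = 0.001;  /* assume today's day is 86400 s + 1 ms */
        const double lengthening = 0.002 / 36525.0; /* +2 ms per century */

        for (int day = 0; day < 36525 * 15; day++) {  /* 15 centuries */
            drift_s += excess_s;
            excess_s += lengthening;
            if (day % 36525 == 36524)
                printf("century %2d: %6.0f s\n", day / 36525 + 1, drift_s);
        }
        /* one hour (3600 s) accumulates after roughly 9.5 centuries,
         * in the same ballpark as the ~900-year figure above */
        return 0;
    }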
Won't the new offset be 17s? The offset would only diminish if a negative leap second was enacted which has yet to happen.
A cool realtime clock that shows the offset the GPS satellites work from; GPS time does not include leap seconds.
(Of course best would be to have none at all, and accept that several thousand years from now people-- if people still exist then-- may choose to slip the timezones by one hour to correct the drift.)
Wait... (very briefly thinking about broken assumptions, date calculations, month-end assumptions, 0:59 versus 0:60 vs. 0:00, ...) dammit, I just lost more than I gained! Thanks, France!
Qantas crashed last time. I mean their computers, not the planes.
Kids today... Get off my lawn.
(Oh wait, the last leap-second event didn't even involve an actual leap second)