It's absurd that we keep subjecting ourselves to these disruptions, and to the considerable amount of work that goes into handling leap seconds in the systems that aren't disrupted by them.
Leap seconds serve no useful purpose. Applications that care about solar time usually care about local solar time, while UT1 is a 'mean solar time' that doesn't have much physical meaning (it's not a quantity that can be observed anywhere, but a model parameter).
It would take on the order of 4000 years for time to slip even one hour. If we found that we cared about this thousands of years from now, we could simply shift timezones over by one hour after 2000 years; existing systems already handle devices in a mix of timezones.
[And a fun aside: it appears likely that in less than 4000 years we would need more than two leap seconds per year, sooner if warming melts the icecaps. So even the things that correctly handle leap seconds now will eventually fail. Dealing with the changing rotation speed of the earth can't be avoided forever, but we can avoid suffering over and over again now.]
There are so many hard problems that can't easily be solved that we should be spending our efforts on. Leap seconds are a folly purely of man's making, one we can choose to stop at any time. Discontinuing leap seconds is completely backwards compatible with virtually every existing system. The very few specialized systems (astronomy) that actually want mean solar time should already be using UT1 directly, to avoid the up-to-0.9-second error between UTC and UT1. For everything else, all that is required is that we choose to stop issuing them (a decision of the ITU), or that we stop listening to them (a decision of various technology industries to move from UTC to TAI+offset).
The recent leap smear moves are an example of the latter course, but a half-hearted one that adds a lot of complexity and additional failure modes.
(In fact, for the astronomy applications that leap seconds theoretically help, they _still_ add complication, because it is harder to apply corrections from UTC to an astronomical time base when UTC has discontinuities in it.)
3GPP2 C.S0002-A section 1.3 "CDMA System Time":
> All base station digital transmissions are referenced to a common CDMA system-wide time scale that uses the Global Positioning System (GPS) time scale, which is traceable to, and synchronous with, Universal Coordinated Time (UTC). GPS and UTC differ by an integer number of seconds, specifically the number of leap second corrections added to UTC since January 6, 1980. The start of CDMA System Time is January 6, 1980 00:00:00 UTC, which coincides with the start of GPS time.
> System Time keeps track of leap second corrections to UTC but does not use these corrections for physical adjustments to the System Time clocks.
I'm pretty sure the only use of leap seconds in CDMA is converting system time to customary local time, along with the daylight-time indicator and time-zone offset that are also contained in the sync channel message.
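For illustration, here is a rough Python sketch of what that conversion looks like. The field names LP_SEC (leap second count), LTM_OFF (local time offset, in 30-minute units) and DAYLT (daylight-time indicator) are from the sync channel message, but treat the arithmetic as my reading rather than the spec's:

    # Hypothetical sketch: CDMA system time (GPS-like, never leap-adjusted)
    # to customary local time, using the sync channel message fields.
    def system_time_to_local(system_time_s: int, lp_sec: int,
                             ltm_off: int, daylt: bool) -> int:
        utc = system_time_s - lp_sec       # remove accumulated leap seconds
        local = utc + ltm_off * 1800       # LTM_OFF counts 30-minute units
        if daylt:
            local += 3600                  # daylight-time indicator adds 1h
        return local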
Edit: C.S0005-E section 188.8.131.52 says the mobile station shall store most of the fields of the sync channel message; it may store leap second count, local time offset, and daylight time indicator. This suggests that these fields aren't really that important for talking CDMA.
No link with the leap second until proven.
Also, without knowing more about what exactly went wrong with your phone, it's possible that other infrastructure within the network was unstable: signalling equipment, etc. I can't remember exactly, but I think some of the CDMA equipment at my previous company had a leap second problem previously. And that equipment is no longer really being maintained or patched.
Anyway, of three different module vendors, two got leap handling wrong. Then our own code had its own leap bugs, on top of the OS (Solaris) timekeeping bugs such as clock jumps and timezone update issues. Good times.
Doubtful. :) I've observed a similar outage at the last leap second (and in that case it dropped me off a call -- which is why I even checked this time).
It's just so nice to get that extra bit of sunlight in the evening.
You'd have to switch the "seconds since epoch" count to TAI, and that would cause new formatting bugs because all kinds of software assumes that the minute changes on a multiple of 60 seconds since the epoch.
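A sketch of the kind of code that would break (plain Python; the 27-second figure is the leap count accumulated since 1972, used here purely for illustration):

    def clock_face(ts: int):
        # assumes minutes change exactly on multiples of 60 since the epoch
        return (ts // 3600) % 24, (ts // 60) % 60, ts % 60

    print(clock_face(1483228799))  # (23, 59, 59) with the POSIX count
    # if the count had included the ~27 leap seconds inserted since 1972,
    # this naive code would render a clock face ~27 s ahead of UTC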
I think if there were such a thing as a different kind of epoch time that really is "seconds since epoch," it would help a lot and work like you suggest.
It sounds like, in your scenario, you would prefer Unix time to include leap seconds, so that no rollback or time-smearing behavior would need to occur. I believe the reason it does not comes down to simplicity: current systems rely on a day being 86,400 seconds, making each year (regardless of leap days) a multiple of 86,400 seconds. Leap seconds break this simple assumption. While it would be simple for a new time formatting system to take leap seconds into account, it is not so simple to retrofit all of the existing systems to a new standard, and to convince so many different groups of developers to change that much code while also agreeing with one another about the changes.
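A minimal sketch of the simplicity being defended here (plain Python, nothing system-specific): with every day exactly 86,400 seconds, breaking a timestamp into a clock reading is pure integer arithmetic, with no table of past leap seconds needed:

    def split_posix(ts: int):
        days, rem = divmod(ts, 86400)  # holds only if days never gain a second
        hours, rem = divmod(rem, 3600)
        minutes, seconds = divmod(rem, 60)
        return days, hours, minutes, seconds

    print(split_posix(1483228799))  # (17166, 23, 59, 59) -> 2016-12-31T23:59:59Z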
Besides: even expensive commercial timekeeping devices frequently mishandle leap seconds. History suggests that we are underestimating how difficult they are to get right in complex systems.
I propose a new "non-time" time system. It has exactly two real values, each ranging from 0 to tau, and an integer: the first real number is radians of earth rotation, the second is radians of rotation around the Sun, and the integer reflects the number of complete cycles. So lunch time in Greenwich is 'pi'.
It has the benefit that its "source" is actually the planet, so we can use a telescope at Greenwich to pick a certain alignment of stars as the (0, 0) "zero" point, and then each time it realigns to that exact point, you can increment the "year" count.
I believe we can build a robust system to support this out of stone. We'll need to create a circle of stones, but using a small hole drilled through a stone and a marker on the ground we can always identify (0.0, 0.0), (0.0, pi/2), (0.0, pi), and (0.0, 3*pi/2).
If you're going for such a drastic change to get rid of the occasional minor issue with leap seconds, then a star clock is a bad idea - the stars move relative to us and to each other. The constellations we look upon are arranged differently from the ones Julius Caesar & Co. looked upon. You're basically swapping one source of error for another.
Similarly - there's the oddity of choosing a planet-based time system for the synchronisation of clocks moving interstellar distances. How do they accurately measure time when they're no longer on the planet? And, as others have mentioned, the reason we have leap seconds in the first place is that the length of a day (and of a year) changes.
It's also worth noting that when Stonehenge was used to mark the time, webpages came in the form of bardic tales. If your bard was asleep, you got a 500 error... and they were asleep a lot. Stonehenge time was terrible for information delivery :)
Time is relative; you'll have the feeling of angst until you accept the relativity. No pun intended.
Mostly this is tectonic readjustment on a minor level, or an asteroid strike on a major level. Of course, you could say that an asteroid strike would be a larger problem than re-syncing the stone clock.
It's illustrative of how hard time is that you tried to create a new system from scratch, with the express purpose of being future-proof for space travel, and it's already broken because the fixed point you chose is not, in fact, fixed.
The solution here is that any software that relies on accurate timing and/or breaks when you change the time should be using epoch seconds, not any sort of human-oriented time format.
UTC is different from epoch time. Epoch time by definition does not count leap seconds. https://en.m.wikipedia.org/wiki/Unix_time
It sounds like what it means is that Unix time counts the number of real, actual, by-the-clock seconds that have passed since the epoch. That would be logical. But what it actually means is that it counts the number of real, actual, by-the-clock seconds, minus the number of those that have been designated "leap seconds".
That is to say, whenever a "leap second" occurs, the nice monotonic progress of Unix time is mutilated by suddenly adding or subtracting 1 from the total count so far. That's what "does not count leap seconds" means, and sometimes even what "ignores leap seconds" means (which is of course even worse terminology).
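You can see the collision directly from the POSIX formula. A small Python sketch (the day number is just the count of 86,400-second days since 1970-01-01):

    def posix_from_utc(day: int, h: int, m: int, s: int) -> int:
        return day * 86400 + h * 3600 + m * 60 + s

    d = 17166  # 2016-12-31
    print(posix_from_utc(d, 23, 59, 59))   # 1483228799
    print(posix_from_utc(d, 23, 59, 60))   # 1483228800 -- collides with...
    print(posix_from_utc(d + 1, 0, 0, 0))  # 1483228800 -- ...the next midnight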
> The time() function will resolve to the system time of that server. If the server is running an NTP daemon then it will be leap second aware and adjust accordingly. PHP has no knowledge of this, but the system does.
> It took me a long time to understand this, but what happens is that the timestamp increases during the leap second, and when the leap second is over, the timestamp (but not the UTC!) jumps back one second.
- UTC timestamps can unambiguously refer to times years in the future; TAI timestamps cannot, because it is unknown how many seconds will be in each year.
- Converting UTC timestamps to human-readable UTC times is simple modular arithmetic. A beginner in any programming language can do it. Converting TAI to UTC requires a lookup table, and it must be updated after the software is released.
What would be simpler would be ending the use of leap seconds for a millennium or so.
TAI timestamps don't refer unambiguously to UTC timestamps or calendar dates in the future, because the latter two depend on the variable rotation of the earth and (for zoned times) geopolitical whimsy.
I don't see why this matters, though - most "timestamps" are for events in the past. The proper representation for events in the future will depend on your application (e.g. are you writing a calendar for humans or a spacecraft guidance system? does the event happen at a fixed point in time, or at a fixed point in the human work day?).
UTC doesn't know how long a second is, and TAI doesn't know how long a day or a year is. But most people need to specify "10 years from now" more often than they need to specify "300 megaseconds from now".
Banning leap seconds would be fine. UTC leap seconds are messy but at least we get by. But updating all time conversion software (instead of just updating authoritative clocks) every time there's a leap second is ludicrous.
It's true that applications should use the proper representation for what they're intended to do. And they do. Most applications use UTC because they describe human-centric events. Astronomers use TAI.
In my opinion, the real problem is that TAI is not an option in most current systems. There is no way to get the time in TAI, no way to convert between TAI and UTC, etc.
So even in applications where it makes sense to use TAI (think logging and billing) we don't do that because the necessary infrastructure is not available.
I think it is time that the technical community gets together and makes TAI a first-class citizen.
TAI doesn't help with timestamps in the future, but usually those applications don't need second level granularity anyhow. The applications that break during a leap second are the ones that need to track the current time or passage of time with sub-second accuracy. And those can be served perfectly with TAI.
> But updating all time conversion software (instead of just updating authoritative clocks) every time there's a leap second is ludicrous.
All time conversion software is already updated every time a government changes a time zone - by downloading the most recent tzdata. All software that needs second-level granularity is constantly updated, by synchronizing with NTP. There's nothing at all ludicrous about distributing leap second tables instead of mutilating the NTP time signal.
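For what it's worth, such tables already exist and are tiny. A Python sketch of consuming one, assuming the widely mirrored leap-seconds.list format (first column is the effective instant in NTP-era seconds, second is TAI-UTC from then on); the two sample rows are the table's real first entries:

    def parse_leap_table(text: str):
        table = []
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()  # drop comments
            if line:
                ntp_s, tai_utc = line.split()[:2]
                table.append((int(ntp_s), int(tai_utc)))
        return table

    def tai_minus_utc(table, ntp_s: int) -> int:
        offset = 0
        for effective, value in table:  # table is in ascending order
            if effective <= ntp_s:
                offset = value
        return offset

    sample = "2272060800 10 # 1 Jan 1972\n2303683200 11 # 1 Jul 1972\n"
    print(tai_minus_utc(parse_leap_table(sample), 2303683200))  # 11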
Isn't it the other way around, since TAI has no leap seconds?
"What would be simpler would be ending the use of leap seconds for a millennium or so."
That effectively means using TAI consistently, which is what software not aware of leap seconds would be doing anyway (despite the fact that it's actually working with UTC).
And yes, I'm advocating for changing that definition, and making UTC a constant offset from TAI that works the same as TAI for the foreseeable future.
But for timestamps that are used to compute a time difference, TAI should be readily available. Most timestamps are for figuring out time differences here on Earth, not positions relative to astrological signs; they are for knowing what came before what, etc. They are not for generating human-readable dates. It's silly that such a major use case is hardly implemented on major systems, and that instead an unreliable Unix time is used, which can "at any moment" contain the same second twice.
Since the time we are talking about is going to be used by computers it might as well be base 2.
64 seconds per minute, 64 minutes per hour, 36 hours per day. You could then choose 8 days a week, 32 days per month, and 11 months (44 weeks) per year, followed by 13.24... days of festivals to the pagan gods.
Just like before, the problem is that there are exogenous values: a non-constant length of year at an Earth location, a non-constant length of day at an Earth location, and a more constant period defined by a lower-level process of nature, like the caesium atom vibrations behind the second that scientists use.
I'm sure other fields do as well.
So go figure: which part of this system should be broken because people keep ignoring that leap seconds happen?
I really thought anyone discussing systems programming would be aware of the need for a monotonically increasing clock source.
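For anyone who hasn't run into it, a minimal Python example of the distinction; time.monotonic() is guaranteed never to step backwards, unlike the wall clock:

    import time

    start = time.monotonic()  # immune to NTP steps and leap adjustments
    time.sleep(0.1)           # stand-in for real work
    elapsed = time.monotonic() - start
    print(f"elapsed: {elapsed:.3f}s")  # sane even if the wall clock jumps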
This time I didn't get paged for anything on leap second day :)
The right way for computers to represent time is with a number that represents the number of constant-rate ticks that have elapsed past some agreed-upon epoch. If you know what the epoch is and how long each tick is (lots of people use 1 / 9.192 GHz), it is easy to know how many ticks are between any two time values, and you can convert a time value with one epoch to one with a different epoch and tick rate -- you can do everything people expect to do with time. There are no numbers that represent an invalid time value, and for each moment there is a unique time value that represents it. There's a one-to-one mapping with no nasty edge cases.
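As a sketch of how mechanical that conversion is (exact arithmetic via fractions so the ratio loses nothing; the function and names are illustrative, not any particular API):

    from fractions import Fraction

    def convert_ticks(ticks: int, rate_from: int, rate_to: int,
                      epoch_shift_s: int) -> Fraction:
        # seconds past the target epoch, then re-expressed in target ticks
        seconds = Fraction(ticks, rate_from) + epoch_shift_s
        return seconds * rate_to

    # e.g. nanoseconds since 1970 -> 9_192_631_770 Hz ticks since the same epoch
    ns = 1_500_000_000 * 10**9
    print(convert_ticks(ns, 10**9, 9_192_631_770, 0))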
Leap seconds are a step function that is added to a constant-rate timescale (whose name is "TAI") in order to generate a discontinuous timescale (whose name is "UTC") that never is too different from solar time. There is nothing fundamentally abhorrent about leap seconds -- there are just good and bad ways to represent, disseminate, and compute with timescales that involve leap seconds.
The right way to handle leap seconds can be seen with many GNSSes and PTP (very high precision hardware-assisted time synchronization over Ethernet). GPS, BeiDou, Galileo, and PTP all involve dissemination and computation on time values -- and with dire consequences for failure/downtime/inaccuracy.
The designers of those systems all somehow converged on the choice to separate out the nice, predictable, constant-rate and discontinuity-free part of UTC from the nasty step function (the leap second offset). Times in all those systems are represented as the tuple (TAI time at t, leap offset at t). This means that the entire system can calculate and work with (discontinuity-free and constant-rate) TAI times but also truck around the leap offsets, so when time values need to be presented to a user (or to anything that requires a UTC time), the leap offset can be added then. Crucially, all the maths done on time values are done on TAI values, so calculating a time difference or a frequency is easy and the result is always correct, regardless of the leap second state of affairs. Representing UTC time as a tuple makes the semantics of that data type easy to reason about -- the "time" bit is in the first element and is completely harmless; the edge cases all live in the second half of the tuple.
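A minimal Python sketch of that data type (the names are mine, not from any of those specs): arithmetic happens on the TAI element, and the offset only matters at the display boundary:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class UtcTuple:
        tai_s: int        # continuous, discontinuity-free second count
        leap_offset: int  # TAI - UTC in effect at this instant

        def utc_s(self) -> int:
            return self.tai_s - self.leap_offset  # step applied only here

        def __sub__(self, other: "UtcTuple") -> int:
            # intervals use only the TAI part, so they are correct
            # across leap events
            return self.tai_s - other.tai_s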
NTP and Unix (and everything descending from and affected by those) have made the mistake of representing and transmitting time as a single integer, TAI(t) + leap offset(t). This is not a data representation with sensible semantics, and it is very hard to reason about. First of all, the leap second offset is nondeterministic and unknown -- there is no way to get it from NTP and there is no good way to know the time of the next leap event. Second of all, there are repeated time values for different moments in time (and when a negative leap second happens, there will be time values that represent no moment in time). Predictably, introducing nondeterministic discontinuities doesn't work so well in the real world. There are a bunch of bugs in NTP software and OS kernels and applications that show themselves every time there is a leap second. It's not even just NTP clients that struggle -- 40% of public Stratum-1 NTP servers had erroneous behavior related to the 2015 leap second! Given that level of repeated and widespread failure, the right solution is not to blame programmers -- it should be to blame the standard. The UTC standard and how NTP disseminates UTC are fundamentally not fit for computer timekeeping.
GNSS receivers and PTP hardware get used in mission-critical applications (synchronizing power grids and multi-axis industrial processes, timestamping data from test flights and particle accelerators) all the time -- and even worse, there's no way to conveniently schedule downtime/maintenance windows during leap second events! "Leap smear" isn't an acceptable solution for those applications, either -- you can't lie about how long a second is to the Large Hadron Collider. GNSS and PTP systems handle leap second timescales without a hitch by representing UTC time with the right data type -- a tuple that properly separates two values that have the same unit (seconds) but have vastly different semantics. The NTP and unix timestamp approach of directly baking the discontinuities into the time values reliably causes problems and outages. The leap second debacle is not about solar time vs atomic time; it's about the need for data types that accurately represent the semantics of what they describe.
Mutilating all timestamps and network time representations by adding a variable unknown step function (the leap second "correction") in order to preserve the illusion that days are always 86400 "seconds" long doesn't help solve this problem at all.
Doesn't the (TAI, leap second count) tuple solution work for this? Maybe I misunderstand the purpose, but you could use the leap second count to figure out how many seconds the TAI is off by.
But that doesn't matter, because date intervals shouldn't be represented with seconds anyway. Months and years have different lengths.
begin = (today, 12:00)  (e.g. 2017-01-01T12:00:00)
repeat = RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=MO
An event in the future isn't necessarily a known number of seconds away, which I think is the point you were trying to make. But the parent comment wasn't suggesting all instances of time should be stored as (tai, leap seconds). Calculating a UTC value from (tai, leap seconds) is trivial, but if the thing you care about is the UTC value then that's what you store.
This allows the record to stay consistent, even if there are changes to the local time rules - e.g. leap seconds, daylight savings, timezone offset.
Imagine a tech camp had been planned in Cairo, Egypt, to start at 9am on July 10, 2016: that would have been scheduled for 06:00 UTC. When Egypt cancelled daylight savings with three days' notice, that record should then have become 07:00 UTC.
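A sketch with Python's zoneinfo (stdlib since 3.9) of why storing the local wall time plus the zone is the robust choice here; assuming the host's tzdata is current, the UTC instant is derived at read time:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # store the human intent: 9am local time in Cairo
    start = datetime(2016, 7, 10, 9, 0, tzinfo=ZoneInfo("Africa/Cairo"))
    # the UTC instant is computed on demand, so a late rule change to
    # Egypt's DST moves this derived value, not the camp's local start time
    print(start.astimezone(ZoneInfo("UTC")))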
As an aside, how often do the tz databases for each language get released? Are they usually responsive to notices 3 days out?
Edit: I went looking into the pytz release for the Cairo example from parent.
Olson Timezone Database:
  Release 2016f - 2016-07-05 16:26:51 +0200

pytz:
  Release 2016.6 - 2016-07-13
So even if the tz database is up to date, there's no guarantee that various library usages of the tz database will be correct for these kinds of changes. Interesting.
I've read that the explanation for this temporary suspension of daylight savings is Ramadan, and Ramadan is dependent on the observed sighting of the new moon, so you can't necessarily predict the date in advance.
I ended up coming across that after looking for an explanation for something bizarre I experienced on a trip to Morocco in March 2016… with my iPhone set to use "Marrakesh, Morocco", the time on the phone displayed correctly, but the time on my synced Apple Watch was an hour out. I think I ended up manually setting it to Paris time to get the correct time, but never did get an explanation for the difference.
So even across two devices from the same manufacturer, theoretically sharing the same date-time information, they can be inconsistent.
Conclusion: time is hard!
I keep telling people to use TAI. I once contemplated writing kernel code to rebase the internal clock stuff to TAI, but at the end of the day it was not worth doing, because I would have needed to build a completely new stack of things above the kernel to use it in order to avoid problems.
Have a look at an excellent video that explains why time algorithms are hard to sort out: https://m.youtube.com/watch?v=-5wpm-gesOY
Happy New Year from Austin!
I knew that something was going to break somehow, because for some reason people continue to falsely believe that 1 minute always has 60 seconds.
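One concrete place that belief bites, as a Python sketch: the C-style parser tolerates a leap second (tm_sec allows 0-61), but datetime cannot represent 23:59:60 at all:

    import time
    from datetime import datetime

    time.strptime("2016-12-31 23:59:60", "%Y-%m-%d %H:%M:%S")  # accepted
    try:
        datetime.strptime("2016-12-31 23:59:60", "%Y-%m-%d %H:%M:%S")
    except ValueError as e:
        print(e)  # "second must be in 0..59" -- no leap seconds in datetime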