Timekeepers may subtract a second in 2029 as planet spins slightly faster (pbs.org)
64 points by rntn 10 months ago | 68 comments



Before people freak out, most time systems that already support leap seconds should also be able to support negative leap seconds. You can just smear the second by speeding up clocks temporarily, and that will counteract the drift between UTC and TAI.

https://developers.google.com/time/smear#:~:text=During%20th....


Google likes smearing (they famously published a white paper on it), but not everyone does it since it causes error in your measurements of duration. The most common alternative is having a minute that runs to either :58 or :60 while keeping the size of a second the same.


I'm not sure where to find the white paper, but this blog post [2011] probably covers most of the details as well. Its key takeaways:

- their (and probably others') distributed systems expect time to move forwards (synchronization, "happens after", etc)

- repetition of one second is difficult to accommodate, e.g. for disk writes or storage of e-mail messages

- the initial leap smear was a hack/patch in their NTP servers by not setting LI (leap indicator), but modulating time within a window w before midnight (a small sketch of this function follows below the list):

  lie(t) = (1.0 - cos(pi * t / w)) / 2.0
- they tested both positive and negative leap smear on a set of 10k servers

- leap smears eliminate the need for programmers to handle leap seconds

[2011]: https://googleblog.blogspot.com/2011/09/time-technology-and-...
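For illustration, a minimal Python sketch of that modulation function (the window length and the way the offset would be fed back into an NTP server are assumptions here, not Google's actual patch):

  import math

  def lie(t, w):
      # Fraction of the leap second applied t seconds into a smear
      # window of w seconds: 0.0 at the start, 1.0 at the end.
      return (1.0 - math.cos(math.pi * t / w)) / 2.0

  # Halfway through a 20-hour window, half the second has been applied:
  print(lie(36000, 72000))  # -> 0.5

The cosine shape ramps the rate change up and down smoothly instead of jumping at the window edges.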

(edit: list markup)


> - their (and probably others') distributed systems expect time to move forwards (synchronization, "happens after", etc)

Time moves forward with both positive and negative leap seconds. With the positive ones, which we've seen before, we go from 23:59:59 to 23:59:60 to 00:00:00. Always forward.

With negative ones, which have not been seen in the wild, it goes from 23:59:58 to 00:00:00, skipping :59. Also always forward.


Well, the problem is that most systems are on Unix time, and Unix time doesn't allow for a :60. Traditionally, :59 is repeated, so the integer second count is still monotonic, but the fractional part repeats.

This confuses many applications, and occasionally confuses the Linux kernel, too [1]. Other kernels may have done better.

I think a negative leap second is less likely to cause the same sorts of problems. On the other hand, it's never happened before, and leap seconds are generally not well tested.

[1] https://www.networkworld.com/article/711440/software-linux-i...
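To make the repeat concrete, here is roughly how the real leap second at the end of 2016 looked in POSIX time with the repeat-the-second behaviour (a simplified sketch; the exact behaviour varies by kernel and NTP configuration):

  UTC                          POSIX time
  2016-12-31 23:59:59.5   ->   1483228799.5
  2016-12-31 23:59:60.5   ->   1483228799.5   (same value again)
  2017-01-01 00:00:00.5   ->   1483228800.5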


Including leap seconds in unix time was a mistake - it should follow TAI instead of UTC and leave leap seconds entirely as a matter of the formatting/timezone code.


From the blog post:

> Very large-scale distributed systems, like ours, demand that time be well-synchronized and expect that time always moves forwards. Computers traditionally accommodate leap seconds by setting their clock backwards by one second at the very end of the day.

Not quite "always forward".


Or, hear me out, we ditch UTC and only use TAI.

Vote for me and I will also:

- Get rid of DST

- Put the whole USA on EST

- Take over the rest of the Americas and do the same for them (New York is basically centrally located as far as meridians go)


> Put the whole USA on EST

Smearing a single timezone over a four-hour-wide country works for China because the mean Chinese longitudinal center of population is in line with Hong Kong/Shenzhen/Wuhan; it's safe to say that almost all Chinese people live in the east. Comparatively, the US mean center of population is in Missouri. People in the far west of China already make use of unofficial local timezones precisely because the official timezone isn't designed to serve them; now imagine that behavior but applied to Los Angeles, the second-largest city in the country.

At best, you could split the US into two timezones that were two hours apart, with the cutover somewhere around the Colorado/Kansas border. But for optimal results you'd want to pull an India and make each timezone a half-hour offset from UTC, and at that point it's just too much bother.


Yes, I agree: ideally we should move to using TAI for timestamps, and then leap seconds can be added as part of the timezone calculation when formatting the date for the user.


In theory, yes. But in practice... I wouldn't be surprised to see many systems fail. It's probably not a code path that's been tested much, if at all.

Kinda like those systems that went down on Feb 29, 2024. I mean, how does an app in the 21st century not handle leap day? Yet it happened.


> I mean, how does an app in the 21st century not handle leap day?

We'll always have programmers with less than 4 years of experience who haven't even considered leap days when programming "This should happen every last day of the month" or whatever.

One would think libraries would catch this one way or another, but some people are hell-bent on doing things their own way and then... Well.


> We'll always have programmers that have less than 4 years experience programming and hadn't even considered leap days

Or, you know, they considered it because they left a comment in their code like // TODO: Handle leap year LOL

Just as bad. Don’t roll your own time handling code.


I don’t buy that. While positive leap seconds can be mitigated at the OS level, negative leap seconds need to be supported at the application level.

A positive leap second gives you a discontinuous function from UTC to TAI. But a negative leap second means a function can’t exist at all, because the same timestamp in UTC now corresponds to two moments in time. If someone gives you a timestamp in UTC you can’t know which second it refers to - you would have to switch to giving timestamps in TAI.


While this doesn't change the essence of your claim, the problem is not with UTC timestamps but with Unix timestamps. UTC timestamps (HH:MM:SS) are actually unique because the extra second inserted is assigned the numeral 60. Unix timestamps do, however, repeat when a leap second is inserted.


And isn't this problem then the opposite of what is described in the prior post?

A timestamp "repeats" to add a second (positive leap second). The subsequent timestamp sequence is delayed relative to its prior offset to TAI. The Google "smear" method works here to slow down the clock rather than repeat values.

To drop a second (negative leap second), we have a one-second gap in the timestamp sequence. The subsequent timestamp sequence is advanced relative to its prior offset to TAI. This just requires a monotonic jump, as if the computer froze and did not perform any work for one second before resuming with the right clock values.
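Putting hypothetical numbers on that gap (no negative leap second has ever been scheduled, so this is purely illustrative; T is the Unix timestamp of the following midnight):

  UTC              Unix time
  23:59:57.5   ->  T - 2.5
  23:59:58.5   ->  T - 1.5
  00:00:00.5   ->  T + 0.5   (values in [T-1, T) simply never occur)

So instead of a repeated value you get a one-second hole, which monotonicity assumptions tolerate much better.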


Oh yes, absolutely, GP got it backward and I went along with it...


> You can just smear a second by speeding up a clocks temporarily. And it will counteract the drift between UTC and TAI.

No, you cannot "just" do this in many instances. Some folks (especially in regulated industries) need to have a close link to UTC, and so purposefully smearing things would be non-compliant.


Can you provide an example?

From my experience, stuff that needs serious synchronization does not care much about absolute time, and optimizes for very low relative offsets. Other stuff is OK with half-second or so in offset, and can tolerate much more.


MiFID II requires [1] at most 100 µs divergence from UTC for timestamps of reportable events.

[1] https://ec.europa.eu/finance/securities/docs/isd/mifid/rts/1...


Somewhat off-topic question: has anyone ever been seriously punished for violating MiFID requirements? Has anyone ever asked a market participant for five-year-old trading data and verified that it was synchronized with UTC?

I've yet to see anyone who seriously tried to be compliant with both the spirit and the letter of MiFID. It's both hard and expensive, to the point where the expected fine might be lower than the implementation cost. Everyone does just enough to plausibly deny that they're non-compliant.


At $PREVIOUS_JOB non-compliance was really not an option. Just in my team, for two years, we had at least two people at any time working on MIFID-II compliance upgrades.

We had multiple levels of recording; the last resort was storing raw pcaps of the trading traffic.

Very expensive.


Which raises the question: was the Google smear inside that limit?

Answer:

  During the smear, clocks run slightly slower than usual. Each second of time in the smeared timescale is about 11.6 μs longer than an SI second as realized in Terrestrial Time.


By design, the Google smear peaks at about 500 ms off. When :60 starts, the Google smear is at 59.500, and when :00 starts, it's at :00.500.

IIRC, there were some different strategies for how to modulate the length of the seconds. One method was to make all seconds in the 24 hours around the change uniformly longer; another was to increase the second length slowly and then decrease it slowly after the leap second. Either way, you'd need to do some math to determine when you were out of compliance, and whether that overlapped with times when you were operating and compliance was required.
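Rough numbers for the uniform 24-hour variant (the same one the quote above describes; the cosine-shaped smear peaks similarly but distributes the rate change differently):

  extra length per smeared second ≈ 1 s / 86400 ≈ 11.6 µs
  peak offset from UTC            ≈ 0.5 s, at the midpoint of the smear

Either way the peak offset is orders of magnitude beyond a 100 µs reporting budget, so the question is really whether the smear window overlaps hours when you must be compliant.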

IMHO, better to not smear, and just not trade for a couple seconds around the leap second.


You're basically trading your risk for everyone else's. Not that trade effects and compliance aren't a problem, but the point is you're discounting the impact of the -1 second effect on everything else.

I don't usually "stand with google" on things, but I think this time I stand with google: better to smear than to invoke negative time and repeated timestamps. If need be, declare a trading holiday. Or, ensure the smear is within the compliance limits.


> Or, ensure the smear is within the compliance limits.

I don't see how you can stay within 100 µs of UTC and also smear.

If UTC requires leap seconds in both directions (as it does) and your time keeping must be very close to UTC, you must keep leap seconds (this isn't too hard, historically FreeBSD has done just fine, although Linux has crashed a few times, and MySQL didn't like it at least once even if your OS was fine; other applications also had issues). And you've got to figure out how to log :60, rather than :59 twice; this is probably harder.

Personally, I'd vote for all seconds being the same length, all days having the same number of seconds, and all days having the same number of hours. Maybe occasionally redefine time zones once the seconds per day gets really off. But you know, the powers that be insist that UTC stay close to UT1, and DST is a thing too (different orgs, but still messed up).


FYI: the 27th General Conference on Weights and Measures decided to abandon the leap second by or before 2035, letting DUT1 grow beyond its current ±0.9 s bound.


For the interested:

https://en.wikipedia.org/wiki/DUT1

> DUT1 = UT1 − UTC
>
> - Universal Time (UT1), which is defined by Earth's rotation
>
> - Coordinated Universal Time (UTC), which is defined by a network of precision atomic clocks
>
> UTC is maintained via leap seconds, such that DUT1 remains within the range −0.9 s < DUT1 < +0.9 s.
>
> The reason for this correction is partly that the rate of rotation of the Earth is not constant, due to tidal braking and the redistribution of mass within the Earth.



We need to all put our hands out the window to slow this thing down.


We should take more breaks from sitting at our desks.


Also, run in the direction the Earth is rotating to push it back with our feet.


More windmills


Story time.

Was working at $company when our mission critical software went absolutely haywire one day.

The way I proved it was the leap second was that the QA and Prod environments crashed hard at the exact same time.

If you’re running old as the hills enterprise software, maybe plan to have some extra help on call that day.


The article in Nature that is referenced, but not named or linked, in the article could be either:

"Melting ice solves leap-second problem — for now"

https://www.nature.com/articles/d41586-024-00850-x

"A global timekeeping problem postponed by global warming"

https://www.nature.com/articles/s41586-024-07170-0


The FreeBSD folks test their code for these things and it works:

* https://lists.freebsd.org/pipermail/freebsd-stable/2020-Nove...

Of course third-party userland code understanding what happens is another thing.


Always good in these kinds of time-related situations to dig up https://news.ycombinator.com/item?id=4128208 which links to "Falsehoods programmers believe about Time": https://infiniteundo.com/post/25326999628/falsehoods-program...


Store everything as "number of (seconds|milliseconds|whatever) since $DATE" and the vast majority of those false assumptions can be avoided. Oh, and duration calculations become trivial. Unix had it right 50 years ago. Just make sure you use signed ints, because the past exists!


> Store everything as "number of (seconds|milliseconds|whatever) since $DATE"

This itself is a falsehood, at least as such a strong statement. You'll avoid some of the falsehoods in the text, but as long as humans and politics exist you also need to consider timezones and leap seconds and make a conscious decision about when to apply them and when to avoid them.


"$DATE" was meant to imply "$DAY $TIME $ZONE."

The leap seconds and other sludge can be compensated for when converting on-the-fly to a human readable format. That's one of the strengths of this system - it pushes the issues of politics up to the client, where those issues belong.


> "$DATE" was meant to imply "$DAY $TIME $ZONE."

I'm curious: what does the "number of milliseconds since" do, then? If it's about intervals, and it has some human connection, you're still in political land (and not only on the client) if you need something like: 3 am, one month from today.


"number of milliseconds" gives you a representation that's highly efficient, and highly resilient, compared to a calendar-based time representation.

An offset from a known epoch has no dependency on any calendar-based time representation. Regardless of whether you call it 1 January 1970 00:00:00 or January 1 1970 12:00:00 AM, we can agree that that's a point in time that exists, and we can count the number of (seconds|milliseconds|whatever) since that instant and get the same result.

More concisely, an offset from a known epoch has no dependency on how many days are in a month, or seconds are in a minute, or how many nanoseconds are in a second, etc.

An offset from a known epoch is only trivially dependent on the length of any particular time unit. Need to convert a classic UNIX time to milliseconds? Multiply it by 1000. Did the governing bodies of the world decide to change seconds to be 5% longer? Divide by 1.05 before showing a value to the user.

Offsets from a known epoch makes it quite easy to perform important tasks like clock drift compensation, even wirelessly on teeny microcontrollers with a couple microamps of power. Ask me how I know.

When you deliver a client a time represented as an offset from a well-known epoch, you allow the client (and, thus, the user) to decide how the time should be represented. This improves readability and accessibility with less complexity than would be necessary to convert a calendar-based time format. Reduced complexity leads to reduced computational overhead and fewer opportunities for bugs to pop up.

Determining the epoch itself is non-trivial unless you use one that's already in widespread use. Thankfully, there are a few of those to choose from, like the UNIX epoch or Windows NT epoch. Some industries even have their own de facto standards [1]. But once you've picked an epoch, you have a system that's virtually maintenance-free until the integer saturates. Make the integer wide enough, and you can hope that computers will natively support larger integers by that point, making upgrades trivial. Or, make it so big that all of humanity will have been vaporised and the issue will be moot. In any case, you'll have a time representation that's more efficient and more maintainable than anything calendar-based.

[1]: https://en.wikipedia.org/wiki/Epoch_(computing)#Notable_epoc...
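A small Python sketch of the "convert only at the edge" idea (the stored value and the zone name are arbitrary examples):

  from datetime import datetime, timezone
  from zoneinfo import ZoneInfo

  millis_since_epoch = 1_700_000_000_000   # what you store and do arithmetic on

  # Politics (time zones, DST rules) enters only when rendering for a user:
  instant = datetime.fromtimestamp(millis_since_epoch / 1000, tz=timezone.utc)
  print(instant.astimezone(ZoneInfo("America/New_York")).isoformat())

All storage, comparison, and drift math happens on the integer; only the last two lines care what the local calendar looks like.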


Agreed, this is true for anything that needs to be independent from calendars. You find those problems typically at lower levels, in systems engineering (and I'm not surprised you mention microcontrollers; I know many of those problems from there as well). The higher you go, the more likely you are to encounter a problem that depends on time-zoned timestamps and calendar calculations, where it's not sufficient to just convert, say, Unix timestamps to a human-readable format for presentation. Instead, time zones are inherent to the problem, and calculating on Unix timestamps opens a world of pain for you and your users (a simple example with microcontrollers: opening the doors at 9 AM). There is a wide application range for epoch-based timestamps, but the tricky thing is to know when a calendar-based timestamp would be more applicable.


Given that Unix time specifically does not handle leap seconds well, I'm not sure I would say they "had it right" all along.


I don't really understand why leap seconds aren't "needed" a lot more often than they are. According to Wikipedia, a modern day is 1.7 milliseconds longer than a day a century ago. Assuming the second was standardized 200 years ago, that's over 3 milliseconds of excess per day; shouldn't we need a leap second every year or so?


the discrepancy between solar & atomic time is noisy & irregular; the past several decades have introduced comparatively less deviation & have required fewer leap seconds as a result

see: https://en.wikipedia.org/wiki/File%3ADeviation_of_day_length...


Your instinct is indeed correct. Leap seconds haven't been frequent so far partly because they only started 50 years ago, and you should see many more leap seconds on average over the next 50 years. It also means that leap seconds are not sustainable in the long term, for reasons other than computer glitches anyway.
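Back-of-the-envelope version of that long-run expectation, using the roughly 1-2 ms/day excess over 86400 SI seconds that held for much of the 20th century (recently the excess has been much smaller, which is why the last couple of decades were quiet):

  accumulated drift per year ≈ 1.5 ms/day * 365 days ≈ 0.5 s

i.e. roughly one leap second every one to two years, which is about the historical rate: 27 leap seconds between 1972 and the end of 2016.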


Seems like a contradiction...

Polar ice melt caused by climate change is slowing the Earth's rotation - https://news.ycombinator.com/item?id=39858909


Irony here...

As we move faster, the year is also getting faster. Relativity wise, shouldn't it be getting slower?

Yeah, I know - it isn't quite that simple. Funny thought though.


The effect of special relativity would be negligible


That’s why I put my last sentence in my comment.


Why is it problematic if time is off by a minute relative to Earth's position in its orbit? Why should we correct it with leap seconds?


Some countries have (or had) laws or articles in their constitution that mandate that time is kept according to earth's orbit and rotation. We want everyone to use the same time, so UTC is the compromise to use atomic time but to regularly adjust it by whole seconds to keep it within one second of astronomical time.

I'd prefer to use TAI (International Atomic Time), but if that meant that some places use TAI and some use UT2, requiring us to keep track of time offsets of a couple thousand milliseconds that change daily, that would be much worse. UTC is a fine compromise.


Can you tell me some examples of such laws, which countries do you have in mind? Is 37 seconds really not considered "synced" to the earth's orbit and rotation?


It generally isn't really, which is why the powers that be have decided to abolish the leap seconds (at the latest in 2035). There's a notice period to adapt the few systems that do care.


Good I want it to spin faster! Wheee!!


Interesting that ChatGPT knows of minutes with 61 seconds but it refuses to believe that a minute can have only 59 seconds.


It's never happened before. All leap seconds before now have been positive.


But we've known about the potential for negative leap seconds for at least decades now (UTC described them in the 70s).


ChatGPT doesn't RTFM.


huh?

> Q: can a minute ever have 59 seconds
>
> ChatGPT: Yes, a minute can have 59 seconds in the context of leap seconds. Normally, a minute is defined as having 60 seconds. However, leap seconds are a way to adjust the Coordinated Universal Time (UTC) to match Earth's irregular and slowly changing rotation. When a leap second is subtracted, it technically creates a minute with 59 seconds. This subtraction has been less common than the addition of a leap second, which results in a 61-second minute, but it's theoretically possible if necessary to keep UTC aligned with solar time.

The decision to insert or delete a leap second is made by the International Earth Rotation and Reference Systems Service (IERS) based on precise measurements of the Earth's rotation. As of my last update in April 2023, all adjustments have been additions, making some minutes 61 seconds long, but the system allows for the possibility of a 59-second minute if the Earth were to rotate faster than the time standards.



> Starting in 1972, international timekeepers decided to add a “leap second” in June or December for astronomical time to catch up to the atomic time, called Coordinated Universal Time or UTC.

I still fail to grasp why anyone should give two hoots about this. Seconds shmeconds. If it ain't broke, don't fix it.

Is it because astronomers don't want to fix their own substandard software, so instead they have pushed the cost out onto society at large ?

-Dept. of Pet Peeves


Astronomers aren't using software to determine the length of the day, but telescopes. (Well, they are using software too, but that isn't the problem.) The artificial second we use for our clocks, because it's easier for some of our other sciences and technologies, does not set the standard for what a day is. So something has to give, and the astronomers were there first, and for good reason: their definition conforms to what humans actually think of as a day. It is the computer makers that have made substandard software and pushed its cost onto society.


The Egyptian calendar didn't have leap days, so the 0.25-day-per-year difference had their calendar rotate completely through the seasons multiple times. A leap second is small now, just like a leap of six hours was. But it adds up.


Astronomers already have to do a huge number of tiny corrections of one kind or another, both manually and in software - they would be fine with making changes. However in this case they wouldn't even need to make any change - the astronomic time is what it is. This negative leap second is to fix UTC to match astronomic time because the rotation of the earth has changed a tiny smidge.


The universe has no concern for our desire for consistent calendars. You know that a day isn't exactly 24 hours, right?


I appreciate these thoughtful answers but... color me still unconvinced. Ten, twenty, even thirty seconds don't amount to a hill of beans. But, now we have the software "fixes" in place, so it's a done deal.





