
Falsehoods programmers believe about Unix time - pplonski86
https://alexwlchan.net/2019/05/falsehoods-programmers-believe-about-unix-time/
======
speleo_engr
I read articles like this and come to the conclusion that UTC is flawed, not
Unix time. Leap seconds seem mostly useless. People seem to think they are
important for astronomy, but for every astronomical calculation I have ever
done, the first step is converting from UTC to TAI. Move any jumps in time to
once a century (or millennium). Such jumps have occurred in the past (Julian
to Gregorian) and are easy to handle programmatically. People think time
increases monotonically and it's confusing to push it any other way.

The idea that I don't know how many UTC seconds will pass between now and May
15, 2022 0:00:00 is absurd. The fact that a clock sometimes reads 23:59:60 is
also absurd, as is the "possibility" of 23:59:59 being a forbidden time on a
certain date if we ever add a leap second of the opposite sign.

~~~
endymi0n
Well, the problem with this is that every simplistic view of time is...
well... too simple. It's not that many people haven't tried; I especially like
this article: [https://qntm.org/abolish](https://qntm.org/abolish)

That being said, why your approach doesn't work is that there are several hard
astronomical or cultural definitions that you'd throw off by playing with
time.

A day is defined as one rotation of the earth about its axis.

A week is a hard cultural and religious boundary for just about everything in
life.

A month is roughly one revolution of the moon around the earth (although that
definition is arguably the weakest and the most ready to go).

A year is defined as one revolution of the earth around the sun.

Changing any of these will make the summer drift into the winter, or the night
into the day, or whatever.

Time just isn't simple, and although most of these intervals _almost_ fit
within each other, reality is that they don't and we'll always have artifacts.

If you enjoy philosophy on these kinds of imperfections as much as I do, I can
heavily recommend this article about musical tuning:
[https://blogs.scientificamerican.com/roots-of-unity/the-
sadd...](https://blogs.scientificamerican.com/roots-of-unity/the-saddest-
thing-i-know-about-the-integers/)

~~~
bradknowles
The third principle of Continuous Delivery is "If something is difficult or
painful, do it more often".

So, if leap seconds are actually painful for you, then maybe we need to
contemplate making this kind of adjustment on a finer timescale, like
milliseconds.

OTOH, if you think leap seconds are painful now, just you wait until you
postpone this pain and do it even less frequently.

~~~
twic
That principle is more of a koan than actual concrete advice.

For example, deleting your entire codebase and firing all your developers is
painful, but even the most continuous deliverer wouldn't advise you to do that
more often.

~~~
_jal
Do you also pee in front "wet floor" signs?

Getting past pedantry, the advice is obviously about foreseeable, repeating
parts of normal business, and it applies to more than devops.

A long time ago (tail end of the "desktop publishing revolution"), I was a
production assistant, and then manager at a magazine. We published six times a
year. Towards the end of my first year there, I realized we had the same
problems, right down to our Advertising Director's emotional meltdown, every.
single. issue.

After getting to know folks working at other magazines and people at our
press, I noticed that the monthlies seemed to run smoother with less drama,
and the weeklies were even better at it. Eventually I realized it was because
they had to be. If there was some minimum amount of human drama that had to
happen, it was forced into exhibitions that didn't disrupt the (tight)
schedules. If last-minute changes from flaky advertisers came in, they didn't
cause a fire drill; they just didn't run, because that issue was already on
the press and we were already talking about the issue after next. And so on.

The general principle is actually very straightforward, and applicable all
over the place, including your personal life. If you have high-friction
processes, devoting time and attention to them is the way you make them lower-
friction processes. And while it may be possible to do that without doing
things over and over until you get there, it probably is not possible for
_you_ to get there without repetition, else you'd not have the problem in the
first place.

------
AgentME
Here's a falsehood I've seen a bunch of times: the idea that Unix timestamps
need to be converted to your local timezone. Unix timestamps are the number of
seconds since a specific date in a specific timezone (UTC)! If a user gives
you a Unix timestamp and you know they're in the PDT timezone, you should
_not_ add or subtract 7 hours of seconds to "convert" the timestamp to UTC! It
already is. Similarly, if your client receives a Unix timestamp from your
server, you shouldn't modify the Unix timestamp to "convert" it to your user's
local timezone. Unix timestamps are always UTC. Your platform's date handling
APIs should already offer a way to display a Unix timestamp in the user's
timezone or an arbitrary one, and maybe even have a class that represents a
Unix timestamp paired up with a specific timezone. At no point should you be
doing arithmetic on the Unix timestamp portion to convert it between
timezones.
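To make the point concrete, here is a minimal Python sketch (the timestamp value is arbitrary, chosen for illustration): the same Unix timestamp is rendered in two timezones without ever being modified.

```python
# A minimal sketch: one Unix timestamp, two displays. The timestamp
# value is arbitrary, chosen for illustration. Note that no arithmetic
# is ever done on the timestamp itself.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

ts = 1557903600  # a fixed instant; identical for every observer

utc_view = datetime.fromtimestamp(ts, tz=timezone.utc)
pdt_view = datetime.fromtimestamp(ts, tz=ZoneInfo("America/Los_Angeles"))

print(utc_view)  # the instant rendered as UTC wall-clock time
print(pdt_view)  # the same instant rendered as Pacific wall-clock time

# Both views denote the same instant, so converting back yields the
# original timestamp -- no "+/- 7 hours" adjustment anywhere.
assert utc_view == pdt_view
assert utc_view.timestamp() == ts
```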

~~~
rocqua
Some probing questions:

Alice and Bob both live in England and have planned a conference call at 15:00
on 4 Jan. Now Alice happens to travel, and she is in America on 4 Jan. What
should her calendar do? Moreover, Alice also has a recurring event "Workout"
every Friday at 9:00; what should that shift to? It then turns out Bob is
also in America; what time should the conference call be at now? Finally, for
some reason England or America decides that the DST changeover will now
happen on 3 Jan.

There is no universal semantics of time that will deal with every case.
Certainly 'store UTC and convert to the user's time-zone' is not universal,
nor is 'store every timestamp with a time-zone'. The way people conceive of
'do this thing at this time' is very hard to capture. Moreover, I'd wager no
user would actually fill out time with sufficient detail to deal with this:
"What do you mean, UTC, time-zone, or local-time? I just wanna work out at
9:00 every day, and meet with Alice at 15:00 in a few days. I thought
computers were meant to make things easy."

~~~
AgentME
In your case, you're not doing the specific thing I prohibited. I only meant
you should never do arithmetic on a timestamp to try to "convert" it to an
equivalent representation of the same instant (as Java 8 defines it) in
another timezone. However you're specifically doing arithmetic on the
timestamp to calculate a new, different instant, which is fine. (Your case
isn't that different from the user pressing a "shift this event time by N
hours" button.)

However, I saw a good tip once that you should only store timestamps of past
events and events that happen at a fixed instant regardless of calendars and
wall clocks as Unix timestamps. Timestamps for things like future calendar
appointments (that may be affected by future changes in regions' timezone
definitions) should be stored as a date and wall clock time and regional
timezone, and only converted to a Unix timestamp when it happens. This makes
it possible to see the timezone the user intended and to change it, and it
works well even if timezone definitions change before the event happens.
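A sketch of that tip, assuming Python's zoneinfo (the event and field names are made up for illustration): store the user's intent, and resolve it against the current timezone rules only when a timestamp is actually needed.

```python
# A hedged sketch: persist the user's intent (date, wall-clock time,
# region), and derive a Unix timestamp at the last moment. If the
# region's timezone rules change before the event, the derived
# timestamp changes with them. Field names here are illustrative.
from datetime import datetime
from zoneinfo import ZoneInfo

# What the user actually said: "3pm on Jan 4th, London time".
stored = {"date": (2022, 1, 4), "wall_time": (15, 0), "zone": "Europe/London"}

def to_timestamp(event):
    """Resolve the stored intent against *current* timezone rules."""
    y, m, d = event["date"]
    hh, mm = event["wall_time"]
    local = datetime(y, m, d, hh, mm, tzinfo=ZoneInfo(event["zone"]))
    return local.timestamp()

print(to_timestamp(stored))
```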

~~~
mehrdadn
> you should only store timestamps of past events and events that happen at a
> fixed instant regardless of calendars and wall clocks as Unix timestamps

I think this works more often than not, but it's hardly foolproof or without
repercussions. Say, you can imagine Google Calendar having a list of holidays
for the US. Say it's New Year's day. You're saying you'd replicate that into 6
epoch timestamps (one per time zone in the US) per year in the past, instead
of just storing it as "January 1, 00:00:00, recurring every year"?

~~~
pjc50
This is a trick question - for whole-day events, the best way to handle them
is to record the calendar day you want them to happen on, _not_ the timestamps
of the start and end of the day in some particular timezone. See what iCal
does with DATE versus DATE-TIME (which must be UTC or include TZ):
[https://tools.ietf.org/html/rfc5545#section-3.3.4](https://tools.ietf.org/html/rfc5545#section-3.3.4)
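For illustration, a minimal (hypothetical) pair of VEVENTs showing the distinction the RFC draws -- the first uses a bare DATE for a whole-day event, the second a timezone-qualified DATE-TIME:

```
BEGIN:VEVENT
SUMMARY:New Year's Day
DTSTART;VALUE=DATE:20190101
END:VEVENT

BEGIN:VEVENT
SUMMARY:Conference call
DTSTART;TZID=Europe/London:20190104T150000
END:VEVENT
```

The whole-day event carries no timestamp at all, so it means "this calendar day" wherever the user happens to be.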

~~~
mehrdadn
"Whole-day event" is a red herring. You could've just as easily wanted an
event for the first hour of New Year's day rather than for the whole day.

------
koala_man
> But it’s unsatisfying to say “this is false” without explaining why

Got my upvote. I can't stand the "falsehoods programmers believe" articles
that make a point out of not backing up any of their claims.

~~~
TeMPOraL
Is this really a problem? The "falsehood programmers believe" articles I
remember reading all list things that were either obvious, or obvious in
retrospect.

~~~
koala_man
Take the original one about names, which claims that sometimes children don't
get names.

Should I be flagging mandatory name fields as an I18N concern? How many people
are affected? In which regions does this warrant UI hints or changes? Will
they have names by the time COPPA stops applying?

I've casually Googled this and found nothing, so whatever point the author
hoped to make was lost.

------
bcaa7f3a8bbc
The original purpose of Unix time was decoupling local time representation
from the internal timekeeping of the system. Unix time could be the "One True
Time" of the system; it always increases monotonically. When you need local
time, all the tricky and nasty details, including DST, mandated calendar
changes, etc., are processed by the tzinfo system library/database. If the
calendar has changed, _at least in principle_ one does not and should not have
to update the system clock or most unrelated applications; simply update
tzinfo and you're done.

Unfortunately, Unix time did not consider the effects of leap seconds, which
broke the very foundation of Unix time and nullified all the benefits it had.
A UTC change (a leap second) will force you to update the system clock.

There is a way out: we should stop keeping time in UTC; instead, we do the
timekeeping in TAI, and we provide a system-wide facility, a "utcinfo"
database, to handle TAI-UTC conversion. Just like tzinfo, but much simpler: it
only needs to store all the leap-second events. All problems solved! I'm aware
that the leap second still causes some issues: the kernel still has to be
notified of an upcoming leap second for its UTC APIs, but it's still better.
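A hedged sketch of what such a "utcinfo" lookup could look like (the database itself is hypothetical; the two table entries are a tiny excerpt, and a real database would carry every leap second since 1972 and need updating as new ones are announced):

```python
# Keep the system clock counting TAI seconds, and derive a UTC-based
# count from a table of leap-second events. "utcinfo" is the
# hypothetical facility proposed above, not an existing library.

# (TAI - UTC) offset in seconds, effective from the given TAI-based
# count onward. Excerpt only.
LEAP_TABLE = [
    (1435708836, 36),  # offset became 36 s at 2015-07-01 00:00:00 UTC
    (1483228837, 37),  # offset became 37 s at 2017-01-01 00:00:00 UTC
]

def tai_to_utc(tai_seconds):
    """Convert a TAI-based count to a UTC-based (Unix-style) count."""
    offset = 35  # offset in force before the excerpt begins
    for effective_from, new_offset in LEAP_TABLE:
        if tai_seconds >= effective_from:
            offset = new_offset
    return tai_seconds - offset
```

Note how two distinct TAI instants around a leap second map to the same UTC-based count: the repeat lives entirely in the conversion, while the underlying TAI count stays monotonic.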

The question is, why don't systems and libraries treat TAI as a first-class
citizen of timekeeping? Why aren't we doing it right now? Is it because it's
incompatible with Unix time, or is it something else?

~~~
wodenokoto
Obviously, given the article and the discussion, I am wrong, but what you
propose sounds to me exactly like the definition of Unix time:

The number of seconds since January 1, 1970 UTC.

Even if UTC counts the same second twice, this does not change how many
seconds have elapsed since the Unix epoch.

If you click through the article's reference to Wikipedia, and again from
there, you will end up at "The Open Group Base Specifications Issue 7, 2018
edition"[1]

Which says:

> The relationship between the actual time of day and the current value for
> seconds since the Epoch is unspecified.

Which again sounds like there is absolutely no reason to double-count or skip
seconds in Unix time.

I am not quite sure of the implications of the formula written there, but I
fear that it says that the number of seconds since the epoch is defined, not
as the number of seconds since the epoch, but by the UTC definition of the
current time.

[1]
[http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_...](http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_16)
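For what it's worth, the formula in that section (XBD 4.16) does define "seconds since the Epoch" purely in terms of the UTC broken-down time, which is why elapsed leap seconds never appear in it. A direct Python transcription (tm_year is years since 1900, tm_yday is 0-based, and all divisions are integer divisions):

```python
# The POSIX "Seconds Since the Epoch" formula, transcribed directly.
# It is computed from the UTC calendar fields alone, so leap seconds
# that actually elapsed are invisible to it.

def posix_seconds(tm_year, tm_yday, tm_hour, tm_min, tm_sec):
    """tm_year is years since 1900; tm_yday is day of year, 0-based."""
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000
            + ((tm_year - 69) // 4) * 86400
            - ((tm_year - 1) // 100) * 86400
            + ((tm_year + 299) // 400) * 86400)

# 1970-01-02 00:00:00 UTC comes out as exactly one 86400-second "day":
print(posix_seconds(70, 1, 0, 0, 0))  # 86400
```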

~~~
bcaa7f3a8bbc
Thanks for the interesting link, good to know! So we are, in fact, pretty
close to a monotonically-increasing, TAI-like Unix clock.

The original article said,

> _Each Unix day has the same number of seconds, so it can’t just add an extra
> second – instead, it repeats the Unix timestamps for the last second of the
> day._

Your POSIX link says,

> _As represented in seconds since the Epoch, each and every day shall be
> accounted for by exactly 86400 seconds._

I'm not an expert on timekeeping, but it appears the only problem here is
that Unix time is bounded by an 86,400-second day, which I guess was meant to
make a Unix day predictable, so we still have to double-count or skip seconds.
It seems the only thing we need to make Unix time monotonic is simply removing
the 86,400-second Unix day from the specification.

On the other hand, it means a Unix day would be unpredictable: it would be
impossible to calculate a Unix time without a database, and difficult to
calculate a future Unix time from calendar time. So TAI doesn't automatically
solve every problem; everything comes with a tradeoff.

But I think an unpredictable Unix day should be fine for the purposes of an
internal system clock, so perhaps it's still not a bad idea.

------
brmgb
I deeply disagree with point 3.

Unix Time actually never goes backward: it just stagnates during a leap
second. The article uses fictional fractional seconds to argue the contrary,
but I don't think that makes much sense. Unix Time is represented as an
integer and has no concept of such a fractional unit.

That's an important distinction, because it means that if you use Unix Time
as a timestamp you can actually be sure that an event with a smaller stamp
happened before. You can't say anything about the ordering of events having
the same timestamp, but that remains true with or without leap seconds.
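When ordering events on a single host is all that matters, there is a clock that sidesteps the whole debate. A small sketch, using Python's standard library:

```python
# time.monotonic() (CLOCK_MONOTONIC on most Unixes) is guaranteed
# never to go backwards, unlike the wall clock -- whatever one decides
# Unix time "is" during a leap second.
import time

t1 = time.monotonic()
t2 = time.monotonic()

# Holds even across NTP steps or leap seconds; time.time() (the wall
# clock) carries no such guarantee.
assert t2 >= t1
```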

~~~
toast0
POSIX defines gettimeofday [1], which fills a timeval with integer (time_t)
seconds and integer (suseconds_t) microseconds.

Is your concern over whether Unix Time is a time_t or a timeval?

A time_t shouldn't go backwards (in normal operation), but a timeval does.

[1]
[http://pubs.opengroup.org/onlinepubs/9699919799/functions/ge...](http://pubs.opengroup.org/onlinepubs/9699919799/functions/gettimeofday.html)

~~~
brmgb
> Is your concern over whether Unix Time is a time_t or a timeval?

Yes, you could put it that way. I view Unix Time as referring strictly to the
_time_t_ part ( _seconds_ since the Epoch), but I might be the one in the
wrong. I didn't remember that the _timeval_ part existed in the standard.

------
nulagrithom
I don't understand this one:

> If I wait exactly one second, Unix time advances by exactly one second

How does UTC jumping around affect this? If a leap second is removed it
doesn't mean you've waited 0 seconds.

I feel like this is wrong too:

> If there’s a leap second in a day, Unix time either repeats or omits a
> second as appropriate to make them match.

It's not Unix time doing that. It's UTC.

~~~
seaish
Unix time is based on a hard calculation of UTC seconds, minutes, hours, days,
and years. So UTC jumping causes a discontinuity in Unix time.

I'm pretty sure the second graph is mislabeled (the UTC second after 23:59:60
should be 00:00:00), but Unix time takes 23:59:60 to mean the same as
00:00:00. So 23:59:60.5 is also the same as 00:00:00.5, and so on. If you
parsed the Unix time into a readable timestamp, it would tell you it's the
first second of the next day for two seconds.
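That folding is visible with any day-based arithmetic conversion; for example, Python's calendar.timegm, which simply counts 86,400-second days, maps the leap second and the following midnight to the same Unix timestamp:

```python
# The leap second 2016-12-31 23:59:60 UTC lands on the same Unix
# timestamp as 2017-01-01 00:00:00 UTC under plain day-counting
# arithmetic.
import calendar

leap = calendar.timegm((2016, 12, 31, 23, 59, 60))
next_day = calendar.timegm((2017, 1, 1, 0, 0, 0))

print(leap, next_day)
assert leap == next_day == 1483228800
```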

------
RcouF1uZ4gsC
One thing to note is that both Google and Amazon smear out the leap seconds
[https://developers.google.com/time/smear](https://developers.google.com/time/smear)
so that this is no longer a falsehood on AWS or GCP. I suspect, over time,
more organizations will adopt this approach.
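A sketch of the smear that page describes, under the assumption of Google's standard scheme (a linear, 24-hour smear centred on the leap second, noon UTC to noon UTC; the function name is made up):

```python
# During the smear window the clock runs about 11.6 ppm slow, so
# smeared time never repeats or jumps; by the end of the window it
# has absorbed the whole extra second.

def smear_offset(t, leap_instant):
    """Seconds by which smeared time lags true UTC at time t, for a
    positive leap second at `leap_instant` (both Unix-style counts)."""
    start = leap_instant - 43200   # noon before the leap second
    end = leap_instant + 43200     # noon after
    if t <= start:
        return 0.0
    if t >= end:
        return 1.0
    return (t - start) / 86400.0
```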

~~~
dooglius
Leap smearing makes #3 no longer false, but #1 and #2 remain false

~~~
Groxx
Also worth noting that even though it "solves" _leap seconds_, it does
nothing to solve _normal clock corrections_. Clocks skew normally, and they
need to be corrected eventually; nothing that I'm aware of skews across more
than a couple of seconds / within NTP limits.

E.g., disconnect from a time server for a few days/weeks/months/etc. When you
reconnect, it could be off by seconds/minutes/hours. Odds are pretty good that
your system will just jump to the correct time immediately, rather than
smearing the correction out over some multiple of the difference.

The same is true any time you lose your internal clock, e.g. if your machine
is shut down and its battery dies. If it were to skew across "all time since
Jan 01, 1970", it'd probably catch up well after the expected lifetime of the
computer.

------
acqq
The article is wrong or misleading.

POSIX (and ISO) time_t is not supposed to "see" the additional leap second at
all. POSIX time_t is _defined_ to effectively always have exactly 86400
seconds per day, and no fractional parts. The seconds as defined by POSIX then
can't last exactly as long as the atomic seconds, even on the days where a
leap second occurs.

The Wikipedia article confirms:

"Every day is treated as if it contains exactly 86400 seconds"

But the fact that the seconds don't last the same and therefore aren't "the
same" as the "atomic clock" seconds should not matter to normal users.

The graphs in the article with the fractions of the second going backwards are
just poor implementations in some specific operating systems, libraries or
programs. It's not something that POSIX standard prescribes that is supposed
to happen.

The confusion of common programmers, like the writer of the article or those
who implemented the "backwards" behavior, comes from not understanding what
they work with. Most of the users of most of the computers don't have an
atomic clock, so they also _can't count_ atomic clock seconds. What "normal"
computers have are clocks which are much less precise. It's exactly for that
kind of use that time_t is designed by POSIX to have exactly 86400 seconds per
day -- the absolute error is at most one "atomic" second per roughly half a
year, but the error of all of the clocks directly available to normal users
in their normal computers is bigger.

So "normal" programs which do common human-related scheduling should not even
try to care about the leap second. Use something like a "smeared" time as a
reference:

[https://developers.google.com/time/smear](https://developers.google.com/time/smear)

The SI seconds in that article are the "real atomic clock seconds" -- but
caring about them isn't even needed for normal human-related computing tasks.
If you have a real atomic clock, by all means synchronize it with other atomic
clocks. If you have a normal computer, use the smeared time. There will be no
"jumps" at all then.

Leave the leap second to the astronomers and others who are doing the "hard"
time tasks, they have to care, and they have their own software for that.

~~~
FabHK
Agree. In other words, computers should use UT1 (where one day is one
rotation of the earth, and 86400 seconds; consequently a second is not an SI
second).

~~~
acqq
Correct: the time_t second is already, in practice, not an "SI second", the
latter being defined by the "counts" of "atomic" clocks:

"the duration of 9,192,631,770 periods of the radiation corresponding to the
transition between the two hyperfine levels of the ground state of the
caesium-133 atom" (at a temperature of 0 K)"
[https://en.wikipedia.org/wiki/Second](https://en.wikipedia.org/wiki/Second)

That SI second is what I refer to when I mention an "atomic clock second."

The time_t second is effectively simply one 86400th of a day.

------
legohead
I became aware of leap seconds while working for an online auction website.
How do you deal with all the auctions that may be ending right at or just
before the leap second?

Our solution was simple: temporarily pause all the auctions :)

We already had site-wide "auction pause" code as a result of people DDOS'ing
the site.

~~~
netmonk
Well, well, I was working at a major bank, in an HFT department. We had teams
all over the world. Our Cisco switches were not able to deal correctly with
the leap second, so basically we had to switch them off just before the leap
second and switch them back on right after.

Everything went as planned on our side of the earth. But our local teams in
Japan, managing several colocations in Asia (JP/HK/Singapore/India...),
thought this leap second was at midnight LOCAL TIME.

I just remember the mess they had to deal with the day after, because midnight
UTC is not midnight in Japan, and HFT is oversensitive to time coordination:
shifting forward or backward 1 second can close your connection to the
market.

------
jhayward
One thing that this article evokes is the 'I'm wrong and my belief is deeply
held' aspect which technology people occasionally fall into. Looking at the
comments, with incorrect beliefs expressed as dicta ranging from just plain
wrong, to incompletely specified, to ordinarily confused, this thread is
giving my tech PTSD a workout.

There are topics in which the ordinary or common sense understanding of a
thing actually interferes in understanding how that topic actually acts in
reality when looked at closely or under complex conditions.

The concept of time is one of those things. The best thing a naive developer
can do when reasoning about time is to first accept that almost everything
they assume is wrong, and that they don't even know what their assumptions
are.

The concept of 'location' is another of these topics.

I would like to close this comment with a helpful link to a concise
introduction for people to start with in clearing out the 'common sense'
assumptions but I haven't ever found one, and haven't invested enough time to
write one. Sorry. Links to same will be gratefully received.

------
xeeeeeeeeeeenu
Windows (as of recently) is probably the only operating system with actual
leap second support: [https://techcommunity.microsoft.com/t5/Networking-
Blog/Top-1...](https://techcommunity.microsoft.com/t5/Networking-
Blog/Top-10-Networking-Features-in-Windows-
Server-2019-10-Accurate/ba-p/339739)

~~~
lelf
[https://www.freebsd.org/cgi/man.cgi?query=posix2time](https://www.freebsd.org/cgi/man.cgi?query=posix2time)

------
turtlegrids
Love it. You will continue to capture my heart and my upvotes whenever you
post anything about the many nuances of storing and representing time. Or
something about Unicode / character sets.

------
bloak
Having spent some time studying time keeping in computers, I've come to the
conclusion that nothing needs to be changed. In particular, UTC is exactly
what it should be and should be left as it is. However, there are some things
that need to be added:

* Every standard library needs properly implemented and properly documented functions for converting between UTC and TAI.

* NTP should (at least optionally) tell the user TAI and UTC (like GPS already does).

* When mounting a legacy file system there should be an option to specify whether timestamps should be interpreted as TAI or UTC.

* New filesystems should have a field that specifies TAI or UTC. It would probably be a single bit for the whole filesystem rather than per timestamp.

* The CLOCK_UTC proposal should be implemented, with tv_nsec in the range 1000000000 to 1999999999 during a leap second.

------
PureParadigm
Leap seconds definitely add way more complexity/uncertainty when dealing with
timestamps in the future. I once made a program that would output the amount
of time remaining until some time in the future (with the future time
represented as a Unix timestamp). I realized that it is simply not possible
to report the number of seconds until an event more than six months in the
future, because we simply don't know whether there will be a leap second
between now and then. Perhaps the best approach for users is to smear any
leap seconds that are announced so there is never a hard jump, but that's
still not ideal, because if you really want to count down to the future time,
you simply can't.

------
lcuff
These are nasty little corner-cases. I do wonder if the first two are worth
worrying about: For (1) I can't see a use-case where it would be important.
For (2) Timing of this granularity is likely going to be done through
nanosleep() and the POSIX.1 specification says that discontinuous changes in
CLOCK_REALTIME should not affect nanosleep(). For (3) smearing, as Google and
Amazon do, will handle it, as pointed out by others:
[https://developers.google.com/time/smear](https://developers.google.com/time/smear)

------
teknopaul
A reality that most of these types of blog posts don't mention is that you
may well not have any applicable data, e.g. users registered before 2000, or
any dates at all that care about second precision.

Humans are also "wrong" but happy with that, celebrating birthdays independent
of timezones. Some celebrate birthdays independent of the actual date of
birth, like the Queen, or Jesus, or anyone born on the 29th of February.

Far more likely your clock goes backwards because you fuck up your NTP config
than for any other reason.

------
alanfranz
So, if I adopt smearing for my NTP, everything works fine.

~~~
bradknowles
Except then, a second isn't actually a second.

~~~
lodi
But that's always true anyway because of clock drift, skew, etc. Otherwise we
wouldn't need NTP in the first place.

------
hedora
I’m not convinced this article is correct.

Posix defers to ISO C where they differ: [http://www.open-
std.org/jtc1/sc22/wg14/www/docs/n1570.pdf](http://www.open-
std.org/jtc1/sc22/wg14/www/docs/n1570.pdf)

See page 391. The encoding of Unix time is explicitly unspecified there.

Posix goes on to say:
[http://pubs.opengroup.org/onlinepubs/9699919799/](http://pubs.opengroup.org/onlinepubs/9699919799/)

“ The time() function shall return the value of time [CX] [Option Start] in
seconds since the Epoch. [Option End]”

So, Unix time is optionally seconds since the epoch, with no further guidance
about leap seconds.

Also, the spec makes it clear that time_t needs to be converted into the
appropriate time zone, which suggests it does not reflect leap seconds.

I’d be convinced by source code or documentation for both BSD and Linux
showing they’re intentionally not posix compliant on this front, and apply
leap seconds to Unix ticks and not their time zone conversions.

------
falcolas
> If I wait exactly one second, Unix time advances by exactly one second

There's a more insidious problem here: the assumption that a computer's
internal representation of a second actually lines up with an actual second.
Quartz clocks are, at best, approximations. Temperature-adjusted
approximations, at that.

Without NTP and its ilk, computers would be a complete disaster when it comes
to keeping regular time.

------
yanowitz
chrony[1] neatly solves these and a host of related issues by guaranteeing
time increases monotonically on a given host, speeding up or slowing down the
clock appropriately. It's a nicer alternative to ntpd. Also, AWS recommends it
if you are using their Time Sync service (which is GPS-locked atomic clocks in
every region with leap second smearing).

[1]

~~~
cat199
> It's a nicer alternative to ntpd.

define nicer?

~~~
astrange
ntpd is a pretty weird daemon in the same way that bind is. The config format
is insecure and impossible to understand; if you try to change it you'll
probably break it.

------
revicon

      Unix time assumes that each day is exactly 86,400 seconds long (60 × 60 × 24 = 86,400), leap seconds be damned
    

I don't think I understand this claim. Unix time has no concept of a "day".
Leap seconds increment UTC time, but they don't add anything to the number of
seconds that have elapsed since the Unix epoch.

~~~
nemetroid
That's one of the falsehoods. Unix time is not equal to the number of elapsed
seconds since the epoch, it is equal to N x 86400 + s, where N is the number
of days that have elapsed since the epoch, and s is the number of seconds that
have elapsed since midnight.
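Concretely, using Python's calendar.timegm (which performs exactly this day-based arithmetic):

```python
# 2016-12-31 contained a leap second (86,401 SI seconds elapsed), yet
# the Unix timestamps bounding it differ by exactly 86,400: Unix time
# counted the day, not the seconds.
import calendar

day_start = calendar.timegm((2016, 12, 31, 0, 0, 0))
day_end = calendar.timegm((2017, 1, 1, 0, 0, 0))

assert day_end - day_start == 86400  # despite 86,401 elapsed SI seconds
```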

------
dkbrk
Perhaps I don't fully understand the reasoning that went into these things
when they were decided, but I think the following would be more sane:

- Hardware clocks track Terrestrial Time, and TT is used for timestamps and
all timekeeping that doesn't care about where exactly on the planet you are

- Leap seconds are treated as part of the timezone data. UTC is treated as
just another timezone, with the appropriate leap-second offset given by the
timezone data for that date and time

- NTP keeps hardware clocks synchronized to TT and also carries updates to
timezone data (including leap seconds)

This doesn't solve the problem of hardware clocks jumping backwards or
forwards in time - hardware clocks can drift or be misset etc. and be updated
- but I can't help thinking that much of the pain around time and timezones
is caused by basing our timekeeping on UTC rather than TT.

------
dmh2000
Just an anecdote about UTC vs GPS time: GPS time doesn't have leap seconds.

So my team was testing a system with some devices, one of which was a GPS,
and the main system had UTC from NTP. We had a big display that showed all
our data, including both times, so we could monitor what was going on. The
two displayed times were 13 seconds apart (the number of leap seconds at the
time). Our program manager was a smart guy but gaffe-prone. So in a
demonstration of our system he blurted out to the whole room of observers,
'hey, something is wrong, those two times are different'. We cringed and
explained, but it sounded like we were covering up an error. He would go on
to repeat the gaffe to a different group.

------
abtinf
Perhaps a stupid question: Why isn't there a time standard that is monotonic
and defined simply in terms of seconds, without attempting to match the
movement of the earth (no leap seconds, no negative seconds, no daylight
savings, no complicated calendar politics)?

If such a standard existed, wouldn't it be the best one to use for
programming, with "simple" conversions to/from all the other standards?

Basically, I want a monotonic clock that starts at an arbitrary point (I would
suggest Isaac Newton's birthday), is able to go all the way back to the big
bang, and forward until the heat death of the universe, with millisecond or
better precision.

~~~
brlewis
Not a stupid question...it's the central question here.

Unix time is set up to allow programmers to assume every day has the same
number of seconds. Is this the best approach, or would it have been better to
try to educate everyone not to make that assumption and to use a standard
library for all UTC calendaring?

------
max76
I think we should have a scientific definition of time -- something that is
highly static and precise (such as the time it takes for an atom to vibrate a
number of times, or the time it takes light in a vacuum to travel a certain
distance) -- and a cultural definition of time that is looser than UTC (there
is always the same number of seconds in a day, but maybe some days have
shorter seconds than other days).

Cultural time is fine and dandy for human level stuff. Keep that simple.
Scientific time for business, engineering and scientific stuff.

~~~
brianpan
Time might not be too hard to define (relativistic effects aside), but the
problem is days and years. The reason for inserting and removing seconds is
that the rotation of the earth is NOT static and precise. It changes, so
seconds are added and removed to keep the day from drifting.

[https://en.wikipedia.org/wiki/Day_length_fluctuations](https://en.wikipedia.org/wiki/Day_length_fluctuations)

------
dancek
I've been thinking about datetime APIs and how most of them become extremely
tedious when you have to account for daylight saving, countries changing time
zones et cetera. I was actually planning to write my own implementation for
$language using unix timestamps as internal representation and requiring a
timezone whenever parsing or printing. But then I considered leap seconds.

I don't know if there are libraries that can handle leap seconds, or if
everyone is just counting on NTP sync to fix things whenever a leap second
occurs.
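For what it's worth, most standard libraries can't even represent a leap second; Python's `datetime`, for example, caps the seconds field at 59 (a quick illustration):

```python
from datetime import datetime, timezone

# 2016-12-31T23:59:60Z was a real UTC second (the most recent leap
# second), but datetime refuses to construct it.
try:
    datetime(2016, 12, 31, 23, 59, 60, tzinfo=timezone.utc)
except ValueError as err:
    print(err)  # second must be in 0..59
```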

~~~
masklinn
> I was actually planning to write my own implementation for $language using
> unix timestamps as internal representation and requiring a timezone whenever
> parsing or printing

That does not and cannot work when trying to represent future local events,
which are the vast majority of them, since an event normally happens relative
to a specific geographic location.

Astronomical events are more or less the only ones that routinely get planned
in TAI / TT, so astronomical software is about the only kind for which this
model could work. And even then you wouldn't be using unix timestamps (because
they're UTC-based).

~~~
dancek
Yes. Almost everything is subtly broken, and I realized my attempt would be
broken too. Didn't feel like a fun hobby project once I realized that.

------
alpb
There are much longer lists about this, like:

[https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b923ca](https://gist.github.com/timvisee/fcda9bbdff88d45cc9061606b4b923ca)

Here are 20 more links: [https://github.com/kdeldycke/awesome-falsehood#dates-and-time](https://github.com/kdeldycke/awesome-falsehood#dates-and-time)

------
rusbus
Is it also possible for Unix time to go backwards with NTP? I assume that's
much more common in practice than leap seconds?

~~~
aflag
ntpd will only move the clock backwards if the gap is too big and you run a
command. Otherwise, it will only slow down the clock and possibly alert you
that it's having trouble catching up.
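For scale: NTP's conventional maximum slew rate is 500 ppm, and ntpd's default step threshold is 128 ms. A back-of-the-envelope sketch (the function is illustrative, the numbers are the NTP defaults, none of it is from this thread):

```python
# Slewing removes an offset by running the clock slightly fast or slow
# (at most ~500 ppm by convention) instead of jumping it, so reported
# time stays monotonic.
def seconds_to_slew(offset_seconds: float, max_ppm: float = 500.0) -> float:
    """Physical seconds needed to absorb an offset at the maximum slew rate."""
    return abs(offset_seconds) * 1_000_000 / max_ppm

print(seconds_to_slew(0.5))  # 1000.0 -- nearly 17 minutes to slew away half a second
```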

------
rlucas
My Unix time question is, when I do “sudo date <DDMMhhmmyyyy>” and then get a
sudo password prompt, is the time supposed to take effect before or after the
sudo is authenticated?

Empirically it seems to be after, but that seems wrong; shouldn’t the result of
the command be the same regardless of how long it takes to type the sudoer’s
pw?

~~~
bheiskell
After, because sudo won’t run any command until it authorizes that you’re
allowed to run the command.

------
dev_dull
> _Unix time is the number of seconds since 1 January 1970 00:00:00 UTC_
> _If I wait exactly one second, Unix time advances by exactly one second_
> _Unix time can never go backwards_

I think it’s okay to say that these things are generally true with the
exception of leap seconds. Leap seconds don’t make these statements _untrue_.

------
time_travlr
Why does Unix time first travel forward one second, then go backwards one
second, versus "pausing" (a flat line on the graph) for one second? Both have
their pros and cons; if the graph accurately depicts how Unix time is implemented,
why was this decision made versus the other?

------
rkapsoro
I found this out some years ago and for some reason I felt a profound sense of
betrayal. :D

------
netmonk
For those interested in this kind of topic, I cannot recommend enough
subscribing to the time-nuts mailing list: [http://leapsecond.com/time-nuts.htm](http://leapsecond.com/time-nuts.htm)

------
anilakar
Another falsehood, although it goes against the POSIX standard: the epoch is
midnight in UTC. There is at least one obscure Unix-like OS (can't remember
the name) where the epoch is 1970-01-01 in the _local_ _timezone_.

------
xucheng
To see the current value for different time standards (e.g. UTC, GPS, TAI):
[http://leapsecond.com/java/gpsclock.htm](http://leapsecond.com/java/gpsclock.htm)

------
kstenerud
This is why I use smalltime when storing time values.

[https://github.com/kstenerud/smalltime](https://github.com/kstenerud/smalltime)

------
JdeBP
A falsehood that Alex Chan believes about UNIX:

No-one uses the Olson "right" TZ data files.

What is stated in M. Chan's article is only true when using the "posix" TZ
data files. But that's not the only option.

* [https://unix.stackexchange.com/a/327403/5132](https://unix.stackexchange.com/a/327403/5132)

* [https://unix.stackexchange.com/a/334029/5132](https://unix.stackexchange.com/a/334029/5132)

* [https://unix.stackexchange.com/a/294715/5132](https://unix.stackexchange.com/a/294715/5132)

------
tinix
time is hard:
[https://gordol.github.io/date_time_manipulation.html](https://gordol.github.io/date_time_manipulation.html)

------
BentFranklin
A whole second of leap? So grainy. Planck scale or go home.

------
jlv2
The author uses graphs with quarter-second increments because they make it
look weirder. It's not that weird for time to stand still for 1 second.

~~~
airstrike
Who cares if we're off by 1 second so long as we're all off by the same
amount? Maybe we should all wait until this lag adds up to 1 minute before we
adjust things instead of perpetually reaching for abstract perfection

~~~
isostatic
Or conversely, why don't we add leap deciseconds, or centi- or milliseconds?

~~~
bloak
It seems to me that the one-second adjustment is exactly the right compromise.
It's small enough to be ignored for most practical purposes (many clocks are
out by a second anyway), and it means the offset between TAI and UTC is a
round number (37 s rather than 36.852 s). Leap seconds are frequent enough for
people to get a bit of practice at handling them, yet rare enough that you can
avoid them if you're not confident about handling them: don't schedule any
rocket launches for 00:00 on Jan 1 or Jul 1.

------
ramshorns
Falsehood #2 is false even for inserted leap seconds. If you wait one second
from 23:59:60.00 to 00:00:00.00, Unix time has advanced by zero seconds.
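This follows straight from the POSIX formula, which maps a time with a seconds field of 60 to the same timestamp as the following midnight (sketched here with Python's `calendar.timegm`):

```python
import calendar

# POSIX timestamp arithmetic just adds the seconds field, so the leap
# second 23:59:60 collides with the next day's 00:00:00.
leap = calendar.timegm((1985, 6, 30, 23, 59, 60))
midnight = calendar.timegm((1985, 7, 1, 0, 0, 0))
print(leap, midnight, leap == midnight)  # 489024000 489024000 True
```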

------
unhammer
[http://yourcalendricalfallacyis.com/](http://yourcalendricalfallacyis.com/)

------
osrec
So when we convert Unix time to Y-m-d format, are those conversion algorithms
aware of the leap seconds?

Is it hard coded in there somewhere?
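Generally they are not aware of leap seconds: the usual conversion is pure arithmetic over fixed 86400-second days, with no leap-second table anywhere (the Olson "right" tzdata files are the exception). A quick check with Python's `time.gmtime`:

```python
import time

# 489024000 / 86400 = 5660 whole days after the epoch; the conversion
# is pure division and modulo, with no leap-second table consulted.
t = time.gmtime(489024000)
print(t.tm_year, t.tm_mon, t.tm_mday, t.tm_hour, t.tm_min, t.tm_sec)  # 1985 7 1 0 0 0
```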

------
derefr
Question: would anything go wrong on a Unix system if you used TAI as your
timezone, rather than UTC? TLS, maybe?

~~~
toast0
TLS doesn't care about a 30-second difference in clocks unless you're running
your certs really close to notBefore/notAfter. There are enough broken
systems out there that it makes sense not to use certs with less than
24 hours of margin on either side.

------
joeleisner
Never knew about this... interesting article!

------
gauravphoenix
a fun command to run: _cal 9 1752_

------
averros
Basically, everyone who has a clue about time standards uses TAI for internal
representation in non-relativistic applications.

In other news: most software is brain-dead and most software engineers lack
basic education in pretty much everything other than composing tons of
terminally boring code out of a few LEGO shapes provided by programming
languages.

~~~
averros
...oh, and there's also the issue of time domains and time sources having
different spectral shapes of their noise.

------
tus87
This guy is confusing Unix time with local time. This statement:

> Unix time is the number of seconds since 1 January 1970 00:00:00 UTC

Is true regardless of the calendar or leap seconds. Think of seconds in terms
of some physical phenomenon, like how many times a certain atom trapped in a
crystal lattice vibrates, and you see that doesn't depend on the calendar.
Converting Unix time to local time obviously has to take that into account,
but we still need an absolute measure of our progress along the timeline, which
is what Unix time provides.

This is why we use Unix time, it's the same everywhere and nothing short of
relativity can affect it.

~~~
toast0
> > Unix time is the number of seconds since 1 January 1970 00:00:00 UTC

> Think of seconds in terms of some physical phenomena, like how many times a
> certain atom trapped in a crystal lattice vibrates and you see that doesn't
> depend on the calendar

Unix time is NOT the number of physical seconds since 1970 UTC. It's the
number of Unix Seconds since 1970. Every day has 86400 Unix Seconds. Some UTC
days have more than 86400 physical seconds. Unix time cannot represent the
seconds beyond 23:59:59 on a UTC day, but otherwise attempts to match UTC.

~~~
tus87
Unix seconds ARE physical seconds.

> Every day has 86400 Unix Seconds.

Except a day with a leap second in it.

> Unix time cannot represent the seconds beyond 23:59:59 on a UTC day, but
> otherwise attempts to match UTC.

Err...it literally represents ~49 years beyond 23:59:59 on Day1 of UTC.

~~~
toast0
What's the Unix Time for the UTC second 1985-06-30 23:59:60? My understanding
is this physical second is not representable in unix time.

How about the UTC seconds before and after that one, 1985-06-30 23:59:59 and
1985-07-01 00:00:00? My understanding is that the first is 489023999, and the
second is 489024000.

There is one unix second, but two physical seconds between the start of the
two times.

~~~
tus87
> What's the Unix Time for the UTC second 1985-06-30 23:59:60? My
> understanding is this physical second

That's not a physical second, it's a calendar second.

~~~
toast0
Every UTC calendar second represents a physical second.

On most days, every unix second corresponds with exactly one UTC second and
they all correspond with exactly one physical second, and each one could be
measured as the number of vibrations of some particular atom.

On a day with a positive UTC leap second, it's different. At 12:00:00 UTC,
it's 12:00:00 unix time; the next day at 12:00:00 UTC, it's also 12:00:00 unix
time. 86401 physical seconds have passed, and UTC has counted 86401 calendar
seconds, one of which was the leap second, but unix has only counted 86400
seconds.

If you're running leap smearing, all of the unix seconds in that day are a
little bit longer than the physical seconds (the exact details depending on
your smear technique). If you're using classical techniques, 23:59:59 will be
two physical seconds long, and the fractional second will reset to zero as the
second physical second starts and count up again.

In contrast to UTC, and Unix Time, TAI always has exactly 86400 physical
seconds per day, but after a UTC leap second, both UTC and Unix Time will be
offset from TAI by an additional second.
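A linear smear of the kind Google has documented (spread over a 24-hour window around the leap second) can be sketched like this; the function and window bookkeeping are illustrative, not from the comment:

```python
# Linear leap smear: spread one extra physical second evenly over the
# 86400 unix seconds of the smear window, so each unix second runs
# 1/86400 longer than a physical second.
def smeared_offset(unix_seconds_into_window: float) -> float:
    """Extra physical time (seconds) accumulated this far into the window."""
    return unix_seconds_into_window / 86400.0

print(smeared_offset(0))      # 0.0 -- window opens, clocks agree
print(smeared_offset(43200))  # 0.5 -- halfway through, half the leap absorbed
print(smeared_offset(86400))  # 1.0 -- window closes, full leap second absorbed
```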

------
diminoten
> So far, there’s never been a leap second removed in practice (and the
> Earth’s slowing rotation means it’s unlikely)

Oh, okay. Thanks!

~~~
chithanh
Earthquakes and other geological events can speed up the rotation, but so
far never by enough to require removing a leap second.

------
dintech
tl;dr: you should know about leap seconds.

------
martovmarkov
Can this cause another round of "millennium bug" chaos?

------
tqwhite
Unix time _is_ the number of seconds since 1970. Seconds are a physical thing.
Unix time references the number of "cycles of the radiation produced by the
transition between two levels of the cesium 133 atom". UTC measures time. This
article is silly.

~~~
masklinn
> Unix time is the number of seconds since 1970.

It's not, and it has never been. The original unix time was "the time since
00:00:00, 1 January 1971, measured in sixtieths of a second", this got
modified multiple times until it settled upon the number of _UTC seconds_
since 1970-01-01T00:00:00Z, meaning it's non-monotonic, non-continuous, and
not based on "physical" seconds.

