
Let’s Stop the Unix Time Insanity - luu
http://creativepark.net/1408
======
nl
I have no idea what the author is proposing, but I'm just posting to point out
that it isn't the unix timestamp that is broken: it's dates in general.

There is simply no algorithm that accounts for their behaviour all the time.
The best you can do is have one that works most of the time, and then have a
table of exceptions (which has to be locale based).

For example, what date is 15330 days after February 30(!) 1712 in Sweden?[1]

 _In November 1699, Sweden decided that, rather than adopting the Gregorian
calendar outright, it would gradually approach it over a 40-year period. The
plan was to skip all leap days in the period 1700 to 1740._

Great.. except:

 _In accordance with the plan, February 29 was omitted in 1700, but due to the
Great Northern War no further reductions were made in the following years._

Oops. Oh well:

 _In January 1711, King Charles XII declared that Sweden would abandon the
calendar, which was not in use by any other nation and had not achieved its
objective, in favour of a return to the older Julian calendar. An extra day
was added to February in the leap year of 1712, thus giving it a unique 30-day
length_

But the Julian calendar sucks[2], so:

 _In 1753, one year later than England and its colonies, Sweden introduced the
Gregorian calendar, whereby the leap of 11 days was accomplished in one step,
with February 17 being followed by March 1_

Once you start doing date work you'll find these crazy cases all the time. Try
and tell me today's date on Mount Athos...[3]

[1]
[https://en.wikipedia.org/wiki/Swedish_calendar#Solar_calendar](https://en.wikipedia.org/wiki/Swedish_calendar#Solar_calendar)

[2]
[https://en.wikipedia.org/wiki/Julian_calendar](https://en.wikipedia.org/wiki/Julian_calendar)

[3]
[https://en.wikipedia.org/wiki/Mount_Athos#Date_and_time_reckoning](https://en.wikipedia.org/wiki/Mount_Athos#Date_and_time_reckoning)

~~~
buro9
His main beef seems to be that time isn't as simple as i++ and he really wishes
it were. But that has a lot to do with the fact that time is based on the spin
of the Earth and its orbit around the sun.

I think he's seriously suggesting that we find a unit of time that can be
guaranteed to just increment and not be subject to leap stuff or calendar
adjustments, and that can go forwards and backwards in time as simple
increments and decrements.

A de-coupling between the time counter and our perception of time as a human
concept.

Maybe use the concept of a second, but not in relation to our calendar. So
instead of being "seconds since 1970" it's just "seconds".

I guess the idea being that you'd have a guaranteed increment only counter for
seconds that have passed, but without coupling that to an actual calendar.

I'd probably ask what the problem was that prompted this thought; maybe there's
an answer to that instead.

~~~
Udo
_> I'd probably ask what the problem was that created this thought, maybe
there's an answer to that instead._

OP here. That article is old and I really wish it weren't on the front page. If
I remember correctly, at the time this was written, there were some pretty big
outages and failures that originated from apps assuming time was pretty much a
linear monotonic value. So the motivation to write this was not only for
myself, but to attempt to find a better solution for timestamps as needed by
many apps.

There is no technical reason for the timestamp to jump around, and to convert
to and from "human" time we already need libraries (especially if we're
looking at points in the past or the future), so the argument that system
timestamps _MUST_ map to actual time of day is somewhat moot. As it is, the
Unix timestamp attempts to be somewhat in the middle, in the end satisfying
neither computer nor human requirements. So the idea was to separate those two
timing formats completely.

~~~
pdonis
_a linear monotonous value_

I think you mean "monotonic". Though lots of non-geeks would probably say that
"monotonous" applies too. :-)

~~~
Udo
You're right. Corrected.

------
haberman
The author's proposal, for those who (understandably) did not glean it from
the article, is that the UNIX timestamp should become strictly monotonic, so
that a day containing a leap second lasts 86,401 UNIX seconds. Right now
UNIX days are always exactly 86,400 seconds long, and time goes backwards by a
second when a leap second is added.
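
To make that layout concrete, here is a rough sketch (not from the article; the
date-to-days routine is a standard civil-calendar conversion) of how a POSIX
timestamp encodes a UTC date, and why an inserted leap second gets no value of
its own:

    /* Sketch: POSIX time_t treats every UTC day as exactly 86,400 seconds:
     *   timestamp = days_since_1970 * 86400 + seconds_into_day
     * so an inserted leap second (23:59:60) has no slot of its own. */
    #include <stdio.h>

    /* Days from 1970-01-01 to y-m-d in the proleptic Gregorian calendar. */
    static long long days_from_civil(long long y, int m, int d) {
        y -= m <= 2;
        long long era = (y >= 0 ? y : y - 399) / 400;
        int yoe = (int)(y - era * 400);                            /* [0, 399] */
        int doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;  /* [0, 365] */
        long long doe = yoe * 365LL + yoe / 4 - yoe / 100 + doy;   /* [0, 146096] */
        return era * 146097 + doe - 719468;
    }

    int main(void) {
        long long t = days_from_civil(2017, 1, 1) * 86400;  /* 2017-01-01 00:00:00 UTC */
        printf("%lld\n", t);                                /* 1483228800 */
        /* 2016-12-31 23:59:59 is 1483228799 and 2017-01-01 00:00:00 is
         * 1483228800, so the leap second 23:59:60 in between forces a POSIX
         * clock to repeat (or step back over) one of those values. */
        return 0;
    }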

> I know there is a lot of smugness going on in developer circles where people
> get high on posting comments such as “of course it is like this. it’s the
> way we’ve done it forever, it’s the only way.”

What the author perceives as "smugness" is a reaction to criticism of the
status quo without understanding and addressing the benefits of the status
quo.

I'm not preemptively dismissing the author's proposal, but you would have to
overcome some fairly serious hurdles. Most notably, you would not be able to
reliably construct dates in the future, because it is not generally known when
leap seconds will be added. So it is unknowable what the "monotonic timestamp"
for Jan 1, 2023 00:00:00 is, for example.

I think a better answer to the problem is the leap second smear that Google has
implemented:
[http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html](http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html)
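
As an illustration only, here is a minimal sketch of such a smear under
assumptions of mine: a linear ramp over a one-day window. The linked post
describes the idea (their 2011 implementation used a cosine-shaped "lie"),
so the shape and window length here are not Google's actual parameters.

    /* Sketch: absorb one inserted leap second by running the clock slightly
     * slow over a smear window, so time never steps backwards.
     * The WINDOW length and the linear ramp are illustrative assumptions. */
    #include <stdio.h>

    #define WINDOW 86400.0   /* nominal seconds in the smear window (one day) */

    /* real_elapsed: SI seconds actually elapsed since the window began.
     * The window really lasts WINDOW + 1 seconds because of the leap second;
     * the smeared clock runs at a rate of WINDOW/(WINDOW+1), so it advances
     * exactly WINDOW seconds across the window with no discontinuity. */
    static double smeared_elapsed(double real_elapsed) {
        if (real_elapsed >= WINDOW + 1.0)
            return real_elapsed - 1.0;              /* past the window: back in lockstep */
        return real_elapsed * WINDOW / (WINDOW + 1.0);
    }

    int main(void) {
        printf("%.6f\n", smeared_elapsed(43200.5));      /* mid-window: ~0.5 s behind */
        printf("%.6f\n", smeared_elapsed(WINDOW + 1.0)); /* window end: exactly 86400 */
        return 0;
    }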

~~~
Udo
OP here. I really wish this hadn't been submitted, but it's here so I might as
well...

 _> Most notably, you would not be able to reliably construct dates in the
future_

You already need libraries for these functions. The only difference is that
right now, those libraries have to calculate against a moving target instead
of one that behaves predictably. I wrote this post a long time ago to suggest
a strict(er) divide between computer timestamps and human timestamps, because
Unix timestamps are in effect neither.

There are countless apps that implicitly assume timestamps are linearly
counted upwards. Most people would say those are bugs, but they'd only be
right on superficial grounds. What most applications rightly want is a
monotonic, predictable way to count the passing of time and to exchange that
data with other apps.

My idea was to introduce such a linear counter for apps that want it, not (as
some people here obtusely suggested) to replace the Unix timestamp with it
regardless of the ecosystem breakage.

~~~
haberman
> You already need libraries for these functions.

I'm not sure if you caught my point. Library or no, it is _impossible_ to
construct future dates because it is _unknown_ when leap seconds will be
inserted until six months or so before it happens.

Leap seconds are based on minute changes in the rotation of the earth, which
are unpredictable.

I added this to my comment later, but I think a better way to get a strictly
monotonic timestamp is to smear the leap second:
[http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html](http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html)

> There are countless apps that implicitly assume timestamps are linearly
> counted upwards.

There are also countless apps that implicitly assume that minutes are 60
seconds, hours are 3,600 seconds, and days are 86,400 seconds. Your scheme
would break these apps.

~~~
Udo
I think we're misunderstanding each other.

> Library or no, it is impossible to construct future dates because it is
> unknown when leap seconds will be inserted until six months or so before it
> happens.

This problem exists whether the Unix timestamp actually jumps or not. At some
point, a reference table will have to be updated with the most current leap
second data - otherwise you couldn't accurately count seconds between two
points in time anyway. Whether that much precision is actually needed is
another thing entirely. Again, the idea is just to make a cleaner separation
between the two worlds.

> Your scheme would break these apps.

Again, and as I've already said multiple times now, the idea is not to abolish
the Unix timestamp and ignore whatever breaks. Instead I suggested introducing
a linear counter for apps that want it. I might be wrong of course, but I
believe using this monotonic counter instead would greatly reduce complexity
and the potential for bugs in many applications.

~~~
haberman
> This problem exists whether the Unix timestamp actually jumps or not.

No, it doesn't. I can tell you authoritatively that the UNIX timestamp for
2023-01-01 00:00 UTC is 1672531200. That value is not dependent on future
changes to the rotation of the Earth.
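
As a quick sketch of that check, assuming the nonstandard but widely available
timegm() extension (glibc and the BSDs provide it):

    /* Sketch: convert a broken-down UTC date to a Unix timestamp.
     * timegm() is not in POSIX but exists on glibc and the BSDs. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct tm tm = {0};
        tm.tm_year = 2023 - 1900;   /* years since 1900 */
        tm.tm_mon  = 0;             /* January */
        tm.tm_mday = 1;
        printf("%lld\n", (long long)timegm(&tm));  /* 1672531200 */
        return 0;
    }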

Now it _is_ true that I cannot tell you precisely how many physical seconds
there are between now and then, because I do not know how many leap seconds
will be added in the meantime. But in practice I think this is less critical.
Applications that care about this probably use a more specialized time scale
anyway. But it would be strange to put an appointment in your calendar for
midnight and find that later on it had flipped to being at 11:59pm the
previous day.

> Again, and I already said this multiple times now, the idea is not to
> abolish the Unix timestamp and ignore whatever breaks. Instead I suggested
> to introduce a linear counter for apps that want it.

I understand what you are saying. I'm just saying that your proposal trades
one surprise for another. An app author might think "oh yes, a monotonic time
sounds nice" and start using it, only to find their app broken later because
they were surprised that a minute could be 61 seconds long.

~~~
__david__
> I can tell you authoritatively that the UNIX timestamp for 2023-01-01 00:00
> UTC is 1672531200.

That assumes the calendar definition does not change. What if a 10 month
calendar with a new year that starts in spring somehow becomes widely popular
and adopted as the standard sometime in the next 10 years?

Then suddenly 2023-01-01 is _not_ 1672531200 and is actually 1680307200.

~~~
haberman
UTC is defined in terms of days. UTC calendar dates are presumed to use the
Gregorian calendar. 2023-01-01 in the Gregorian calendar maps to a specific
day (Julian day 2459946).

If you wanted to use a different calendar, you would need to map it to Julian
days also to convert between the two.
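
A rough sketch of that mapping (the helper name is mine; the formula is the
Fliegel & Van Flandern integer conversion, and JDN 2440588 is 1970-01-01, so
subtracting it recovers days since the Unix epoch):

    /* Sketch: Gregorian calendar date -> Julian day number (Fliegel &
     * Van Flandern, 1968). Any other calendar could be bridged to Unix time
     * the same way, via a shared day count. */
    #include <stdio.h>

    static long gregorian_to_jdn(int y, int m, int d) {
        long a = (m - 14) / 12;                       /* -1 for Jan/Feb, else 0 */
        return (1461L * (y + 4800 + a)) / 4
             + (367L * (m - 2 - 12 * a)) / 12
             - (3L * ((y + 4900 + a) / 100)) / 4
             + d - 32075;
    }

    int main(void) {
        long jdn = gregorian_to_jdn(2023, 1, 1);      /* 2459946 */
        long long unix_midnight = (long long)(jdn - 2440588) * 86400;
        printf("JDN %ld -> Unix %lld\n", jdn, unix_midnight);  /* 1672531200 */
        return 0;
    }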

It is highly unlikely that the world would suddenly switch to a new calendar,
and if it did happen it is not at all clear what the correct behavior would be
for future dates that had been specified before the switch.

------
azov
_> Unix timestamps are thoroughly and unnecessarily broken. They should be a
continuum._

If you assume that system time is a continuum I have some bad news for you:
system time is actually a user preference. Users can change it at any moment
to whatever they feel like, no time machine required. Given that, leap seconds
are small potatoes. If you propose we change this and take away the user's
ability to set system time... well, there might be some merit in that from a
purely theoretical point of view, but in the real world, where batteries die,
clocks get out of sync, and tons of software already relies on existing
behavior, I don't think your proposal is likely to get very far :-)

Now, if you want a continuum, it's already there: look up CLOCK_MONOTONIC.

------
jasonlotito
> "a date like this:" > R(I, T, Z) = D > Those four final words...

So yeah. Maybe I'm missing something obvious, or maybe the author is referring
to another random set of 4 words?

~~~
pedantpatrol
Congratulations! You have won the pedantic comment of the day award!

~~~
AndrewBissell
I didn't really get what the author was referring to with "those final four
words" either; this hardly seems like pedantry to me.

------
nemetroid
Meh. Sounds to me like the author recently got burned by not knowing how Unix
timestamps work.

~~~
kbar13
I wouldn't assume anything, but it would be great if the blog post provided
any kind of possible alternative. Right now it really just sounds like a
misguided rant.

~~~
Udo
The idea is, instead of making the timestamp jump around (leading to both
overlap and gaps), to make it a continuous linear counter. What's misguided
about it?

I realize we've been stuck with the Unix timestamp for so long it's become
somewhat impossible for many to acknowledge the points where it breaks.

------
btilly
If you think that the problem is something called "Unix Time", then you do not
actually understand the problem.

The time used for "Unix Time" is actually UTC, a standard that predates Unix
and is specified in current international standards for everything from
aviation to HTTP requests.

If Unix tried to use something else, you'd generate massively more confusion
for every Unix developer, who would then have to figure out the conversion to
what is actually required for interacting with the rest of the world.

~~~
haberman
> The time used for "Unix Time" is actually UTC

Unix Time is not UTC; it is a linear representation that is surjective onto
UTC. It is easy to map between the two but they are not the same. They are not
even isomorphic or bijective because leap seconds have no unique
representation in Unix Time.

The Unix Time representation gives up full equivalence with UTC in order to
provide some useful guarantees (days are always exactly 86,400 seconds long,
midnight always satisfies x % 86,400 == 0, etc), but also creates other
surprises (time can go backwards when a leap second occurs).

------
brianpgordon
What exactly is the call to action here? For kernel developers to change how
time works, and probably break thousands of drivers and applications, based on
a blog post?

~~~
Udo
No, willful misunderstanding aside, the idea is to provide a single continuous
counter to apps that want it. If you look closely at most implementations,
that's how many apps actually expect it to work.

~~~
jonhohle
There already is a continuous monotonic timer available in most operating
systems; it just happens to be relative to some internal counter local to that
system (and, depending on the virtualization layer, it doesn't always provide
monotonic guarantees). This value can generally be translated into a fixed,
serialized timestamp for persistence (and vice versa).
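
A rough sketch of that translation, assuming the POSIX clock_gettime()
interface with CLOCK_REALTIME and CLOCK_MONOTONIC (error handling omitted):

    /* Sketch: pair the monotonic counter with the wall clock once, so later
     * monotonic readings can be serialized as approximate absolute timestamps. */
    #include <stdio.h>
    #include <time.h>

    static double read_clock(clockid_t id) {
        struct timespec ts;
        clock_gettime(id, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        /* Capture the offset between the two clocks once at startup. */
        double offset = read_clock(CLOCK_REALTIME) - read_clock(CLOCK_MONOTONIC);

        /* Somewhere later, an event is stamped with the monotonic clock... */
        double event_mono = read_clock(CLOCK_MONOTONIC);

        /* ...and can be persisted as an approximate wall-clock timestamp. */
        printf("event at approx unix time %.3f\n", event_mono + offset);
        return 0;
    }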

------
dap
At least on a single system, it's not that hard. You have to get over the fact
that you can't compute accurate intervals by recording two human-datetime
timestamps. Even aside from the myriad edge cases around date arithmetic, an
administrator (or NTP) can always reset the clock between the endpoints of
your interval, and your computation may be very wrong.

If you want a human timestamp, use gettimeofday[0] and related functions. If
you want something to compute actual elapsed seconds with, use high-resolution
timestamps ([1] and [2]). You just can't have both, and that's because of the
way human dates work, not the way we represent them.
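
A small sketch of that split, assuming the POSIX gettimeofday() and
clock_gettime() calls linked below (error handling omitted):

    /* Sketch: wall-clock time for labelling events, CLOCK_MONOTONIC for
     * measuring elapsed time, so clock resets and NTP steps can't corrupt
     * the interval. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        struct timeval wall;
        gettimeofday(&wall, NULL);                    /* human-facing timestamp */
        printf("event at unix time %lld\n", (long long)wall.tv_sec);

        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);           /* interval start */
        sleep(1);                                     /* ... do some work ... */
        clock_gettime(CLOCK_MONOTONIC, &b);           /* interval end */

        double elapsed = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
        printf("elapsed: %.3f s\n", elapsed);
        return 0;
    }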

[0] [http://pubs.opengroup.org/onlinepubs/009695399/functions/gettimeofday.html](http://pubs.opengroup.org/onlinepubs/009695399/functions/gettimeofday.html)

[1] [http://www.lehman.cuny.edu/cgi-bin/man-cgi?gethrtime+3](http://www.lehman.cuny.edu/cgi-bin/man-cgi?gethrtime+3)

[2] [http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_gettime.html](http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_gettime.html)

------
AndrewBissell
One of the happiest moments for me when starting at my current job was the day
I saw that all our servers were set to UTC as the default time zone. Leap
seconds may cause some pain every 3 years or so, but a huge number of problems
in system time accounting can be avoided by just keeping everything in UTC,
and translating into a given timezone representation only at the highest
possible level, just before displaying information to end users.

------
seagreen
There's some great stuff about Unix time on this page:

The Future of Leap Seconds
[http://www.ucolick.org/~sla/leapsecs/onlinebib.html](http://www.ucolick.org/~sla/leapsecs/onlinebib.html)

See especially the 'Distinguishing the two different meanings for time'
section. It makes a good distinction between _time-of-day_ and _time
interval_. Unix time measures the first.

------
peterwwillis
Time is relative, right? If anything we need a protocol to describe how the
observer views time, not a rigid definition of time as viewed by other people.

Toward that end, unix time is totally usable, as long as you make the
assumption that all unix time was created in a specific space and time. Then
you just have to calculate what your observation of it is. That way, two
bodies sitting next to each other may calculate the time that applies to them,
and communicate by simply returning the time to the standard before comparing
with other bodies.

(Apparently this already exists; the _Einstein synchronization procedure_
defines a method to establish universal (in the astronomic sense) temporal
coordinates, which are in effect both location and time.)

~~~
haberman
I think you are missing the point that UNIX time moves backwards by a second
whenever a leap second is added. It is discontinuous with respect to the
actual passage of time.

~~~
peterwwillis
UNIX time is simply an indicator of where you are in relation to the event of
the epoch. Leap seconds are just our immature way of dealing with time
dilation. You can always add or remove time as an observer to conform to
whatever your view of time is in relation to that older fixed point.

~~~
haberman
> UNIX time is simply an indicator of where you are in relation to the event
> of the epoch.

That is true except in the case of leap seconds, in which case UNIX time is
not monotonic or continuous.

> Leap seconds are just our immature way of dealing with time dilation.

Leap seconds have nothing to do with time dilation. TAI, on which UTC is
based, is defined as the passage of proper time on Earth's geoid. This gives a
stable base that is not influenced by the frame of the observer. UTC does not
change based on where you observe it from. It is the same on Earth, in space,
etc. regardless of your frame. None of this has anything to do with leap
seconds.

Leap seconds compensate for the fact that a mean solar day is slightly longer
than 86,400 seconds, and can vary slightly due to irregularities in Earth's
rotation. Leap seconds are added to keep UTC noon within a second of mean
solar noon at the prime meridian.
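
As a back-of-the-envelope sketch (the ~1.5 ms/day excess is an assumed ballpark
figure of mine; the real value wanders with the Earth's rotation and can even
go negative):

    /* Sketch: if the mean solar day runs ~1.5 ms longer than 86,400 SI
     * seconds, the accumulated error reaches a full second after roughly
     * 670 days, which is the order of magnitude at which leap seconds
     * have historically been inserted. */
    #include <stdio.h>

    int main(void) {
        double excess_per_day = 0.0015;                                /* seconds (assumed) */
        printf("~%.0f days per leap second\n", 1.0 / excess_per_day);  /* ~667 */
        return 0;
    }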

> You can always add or remove time as an observer to conform to whatever your
> view of time is in relation to that older fixed point.

UTC is defined at the geoid, and everybody uses UTC seconds even though they
are slightly shorter than seconds as observed by most people on Earth.

------
acqq
It seems that the author thinks that Unix typically shows leap seconds to the
application programmer? In fact, there are no leap seconds there unless you
intentionally turn that feature on. The seconds you get with the time() call
will increment one by one. You will not have to care that a leap second
occurred. If you want to see leap seconds, then it is assumed that you know
what you are doing. If you don't know the effects of that, please, please
don't turn it on and don't write about how it's broken.

If you want to learn more about leap seconds read:
[http://www.cl.cam.ac.uk/~mgk25/time/leap/](http://www.cl.cam.ac.uk/~mgk25/time/leap/)

------
BerislavLopac
The obligatory link:
[http://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time](http://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time)

------
JulianMorrison
Use "bignum Planck units since the big bang". That is the ur-clock.

------
axus
"GPS Time" is UTC without the leap seconds. But it only goes back to 1980.

