Some Notes About Time (unix4lyfe.org)
210 points by arb99 1528 days ago | 113 comments



Sigh.

50-year-old numerical / geophysical / real-time data acquisition/processing/interpretation programmer here.

Unix Time isn't much chop for "real time" continuous data from the "real world" - it's those pesky leap seconds. If you bother to read the first paragraph of the Wikipedia article on Unix Time you'll see:

> Unix time, or POSIX time, is a system for describing instances in time, defined as the number of seconds that have elapsed since midnight Coordinated Universal Time (UTC), 1 January 1970,[note 1] not counting leap seconds.[note 2] It is used widely in Unix-like and many other operating systems and file formats. It is neither a linear representation of time nor a true representation of UTC.

It follows on with a definition of Unix Time and points out various examples when it is ambiguous. These are real issues and can occur when missiles fly, when planes navigate, and when stocks are traded.

Time is tricky.


Bummer. I hope that the author reads your comment, because it's otherwise a very useful page. I'd love to have a reference like the OP's about time and everything a programmer needs to know about it without problems like the one you're describing. This one's almost there.

Thanks for clarifying!


Yeah, even writing ditzy consumer apps on Android we're usually stuck with the number of milliseconds the system has been up if we want a reliable, steadily increasing count that doesn't jump around. Use something else and your animation delays will occasionally be screwed up, etc. Of course, that number is useless outside that system and outside that boot of the system. It would have been much handier to have something better.
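The same distinction exists outside Android. Python, for instance, exposes it as time.monotonic() versus time.time(); a minimal sketch (not Android code) of why the monotonic counter is the one you want for delays:

```python
import time

# Wall-clock time (time.time) can jump around: NTP steps, manual changes.
# The monotonic clock only ever moves forward, so it is safe for delays.
start = time.monotonic()
time.sleep(0.05)  # stand-in for one animation frame's worth of work
elapsed = time.monotonic() - start

assert elapsed >= 0.05  # a real, non-negative duration
# But the raw value is meaningless outside this boot of this system:
# it is NOT an epoch timestamp and cannot be shared between machines.
```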


Time is tricky, but there is a simple fact: there exists such a thing as the number of actual seconds elapsed since 'time x' (typically the Unix epoch, but anything else will do too). No leap-second issues. No 25-hour-day issues. No 23-hour-day issues. No 59-second-minute issues. No 61-second-minute issues.

And all it takes to store a time like that is a 64-bit integer, which is very convenient. A lot of software does precisely that. Most timestamps are just that: they don't care about "real world" details like leap seconds, {23,24,25}-hour days, etc.
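A sketch of that convention (my illustration, not from the comment): epoch milliseconds in a plain 64-bit integer sort, compare, and subtract trivially:

```python
import time

def now_ms() -> int:
    # Milliseconds since the Unix epoch, as a plain integer.
    return int(time.time() * 1000)

t1 = now_ms()
t2 = t1 + 90_000  # a timestamp 90 seconds later

# Intervals are plain subtraction: no calendars, no timezones involved.
assert t2 - t1 == 90_000
# 64 bits comfortably holds hundreds of millions of years of milliseconds.
assert t2 < 2**63
```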

Because, in many cases, you really don't give a flying shit about the "real-world" time.

What is your point? That a server that needs to open trading precisely at 9am has to take leap seconds into account? Sure. The article ain't disputing that.

But a lot of applications basically only need timestamps, and everything becomes much simpler when one starts to reason in milliseconds since the epoch instead of "real world" time.

Btw... Many of your "real issues" are due to programmers not realizing that they could have built the exact same system, but far more resilient, had they known to use timestamps instead of "real world" time.

There was an amazing paper by Google on the subject, by the way, where they explained how they reconciled their hundreds of thousands of servers' time by allowing it to be "not correct" over x minutes / hours to dodge many issues.

And they clearly did emphasize that: a) these problems shouldn't have occurred in the first place and were due to poorly designed programs (poorly designed in that they relied on real-world time instead of simply internally using milliseconds since the epoch) and b) most programs on earth do not give a flying ^^^^ if a time displayed to the user is one third of a second off...


> Time is tricky, but there is a simple fact: there exists such a thing as the number of actual seconds elapsed since 'time x' (typically the Unix epoch, but anything else will do too).

Yeah. Hmm. Clearly an opinion expressed without reading, comprehension, or experience.

Why would I say that? Well, consider your statement about "actual seconds elapsed since epoch". Leaving aside the fact that Unix Time enumerates no such thing (and it was Unix Time I was discussing), there are such things as time frames and relativistic effects which come into play when doing fine scientific measurement, astronomical work, and ground satellite communications.

There are levels to dealing with time, and the linked article makes a decent first-order stab at climbing the ladder. Even ground-based programmers need to be wary of the leap-second issue and the plethora of time standards with minute but significant (in some context or another) differences.

When you've cooled down a little consider the title "What every programmer should know ...". Had it been "What most programmers can skate by on" I probably wouldn't have bothered to comment.

I agree there are many good papers on the subject, going back decades. I'm betting the Google paper you speak of references a classic one on timestamps and relativity, as sometimes local event order trumps universal order (and in fact that paper pivots on the observation that universal order doesn't really exist, just effective order).


I can't interpret this as anything other than nitpicky. Are you suggesting that every programmer should know about relativistic effects of time dilation in controlled scientific experiments?

I don't doubt that you have valuable experience and information to lend to the discussion, but your frame seems aggressively negative towards the OP with no discernible reason.


I posted shortly before midnight my time so excuse the delay in replying.

The OP article suggests that Unix Time is the answer for every programmer. My take-home message to every programmer that uses Unix Time is to be aware that it's non-linear and has hiccups that will bite them every few years if not taken into account. This is not so much aggressively negative as it is a simple statement of fact garnered from years of experience.

My message to every programmer that works in a distributed system is that they should read "Time, Clocks, and the Ordering of Events in a Distributed System" (Lamport 1978) which uses observations and arguments from relativity to comment on the manner in which events propagate outwards from sources.
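The core mechanism of that paper, the logical clock, fits in a few lines; this is a generic sketch of the idea, not code from the paper:

```python
class LamportClock:
    """Logical clock: orders distributed events without any physical time."""

    def __init__(self):
        self.time = 0

    def tick(self) -> int:
        # Local event or message send: just advance the counter.
        self.time += 1
        return self.time

    def receive(self, msg_time: int) -> int:
        # On receipt, jump past the sender's clock if it is ahead of ours.
        self.time = max(self.time, msg_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.tick()   # "a" sends a message stamped with t == 1
b.receive(t)   # "b" is now guaranteed to order the receive after the send
assert b.time > t
```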

As for limiting awareness to "controlled scientific experiments": no, I'm not advocating that at all, as time slip (something that has many causes besides dilation) pops up all over the place these days. For example, many things rely on GPS time, which is something else that is non-linear and periodically updated. I'd suggest that anyone writing software that relies on second / sub-second granularity should be aware of where their fiducial time marks come from and what hiccups there are in that system.


> Are you suggesting that every programmer should know about relativistic effects of time dilation in controlled scientific experiments?

No, he's declaring that not every programmer can afford to ignore these.


Hi,

I find your opinion and experience on this matter very interesting. Could you possibly describe your experience or projects you've worked on that could shed some insight on the complicated issues time causes?


> And all it takes to store a time like that is a 64-bit integer and it is very convenient. And a lot of software do precisely that. Most timestamps are just that: they don't care about "real world" details like leap-seconds, {23,24,25}hours per day, etc.

Your claim that using Unix Time saves you from worrying about leap seconds is incorrect. Unix Time goes backwards when a leap second occurs, which can screw up a lot of software. Check out Google's solution to the problem, which is to "smear" the leap second over a period of time before it actually occurs: http://googleblog.blogspot.in/2011/09/time-technology-and-le...

Practically no software uses true seconds since the epoch; if it did then simple operations like turning an epoch time into a calendar date would require consulting a table of leap seconds, and would give up the invariant that every day is exactly 86,400 seconds. Whether this was the right decision or not is debatable, but it is a mistake to think that using Unix Time saves you from all weirdness surrounding civil time.
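The smear can be sketched as a gradual adjustment over a window before the leap second (the 20-hour window and linear ramp here are illustrative parameters, not Google's exact ones):

```python
def smeared_fraction(t: float, leap_at: float, window: float = 72_000.0) -> float:
    """Fraction of the leap second already absorbed at Unix time t.

    0.0 before the smear window opens, 1.0 once the leap second has
    passed, ramping linearly in between, so no clock ever steps back.
    """
    if t <= leap_at - window:
        return 0.0
    if t >= leap_at:
        return 1.0
    return (t - (leap_at - window)) / window

LEAP = 1341100800.0  # 2012-07-01T00:00:00Z, a real leap-second boundary
assert smeared_fraction(LEAP - 100_000, LEAP) == 0.0
assert smeared_fraction(LEAP - 36_000, LEAP) == 0.5  # halfway through window
assert smeared_fraction(LEAP + 1, LEAP) == 1.0
```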


Things do indeed become much simpler when you reason in milliseconds since the epoch. So why is it so rare to do so? UNIX time is not milliseconds since the epoch. It's milliseconds since the epoch, plus the leap seconds that have accumulated over that period. The only widely used timebase that's pure milliseconds since the epoch that I'm aware of is GPS time, and basically everything adds in the leap seconds before actually using that number.


I think you mean that unix time is seconds since the epoch minus leap seconds.


All depends on what sign you assign to the leap seconds!


Ah, but negative numbers are not included in the set of counting numbers, and your initial comment referred to "the leap seconds" with no reference to "the leap second offset" or other verbiage that might imply anything other than a simple count of leap seconds.

Alas, I'm afraid you've no other option but to admit a minor error, as traumatic as that may be.


I'll never admit to an error, although perhaps a negative value for correctness.


> Time is tricky, but there is a simple fact: there exists such a thing as the number of actual seconds elapsed since 'time x'

Yes and no: it might exist theoretically, but we have no way to get at it. The closest we have is TAI, which is only an approximation of the time elapsed at mean sea level on the earthly geoid, because clocks fall victim to gravitational time dilation and compression.

To accurately measure time you'd need a clock sitting perfectly still in space, and all other clocks in the universe would slowly drift behind it.


Not even that; relativity doesn't allow a single special frame of reference. There's no such thing as universal "time elapsed since x". Putting the clock in space would be as arbitrary a choice as putting it on my roof (albeit definitely more practical).


I think whoever solves this problem (at least for humanity) would get a Nobel.

Once humans wander through space (at some point; very probable), there will be no common time reference, only time intervals (like, a day on a spaceship lasts 24h, etc.). So in that case, you would measure 86,400 seconds and call it a new day. No more leap seconds, etc.

Now the UNIX approach makes sense: count seconds since a certain event in time and measure from there on, internally. Want to display it? Then use a special computation to render it in the right format (read the timezone, add relativistic skew, etc.).


You're forgetting to factor in accuracy/resolution. TAI is completely perfect down to the picosecond level and further. That means you do have the number of actual seconds, milliseconds, microseconds, nanoseconds...

You only encounter issues with TAI once you get down to femtosecond or smaller levels.


> TAI is completely perfect down to the picosecond level and further.

I may have worded it badly, so let's try again: TAI is "completely perfect" for the approximation it is: time elapsed at the geoid, which is a theoretical construct. That's an approximation both for "experienced time" and for anything which could be called "absolute time".


Oh that's easy to solve. Just mandate everyone live at sea level.


And stop the Earth from spinning, so we can get rid of that pesky oblateness and gravitational delta between the equator and poles.


No need to get all riled up about it. It's just a heads-up: don't follow the advice in this article if you're working on something that needs to deal with time at the seconds level.

Obviously you don't need to go through all that trouble if you simply want to display the date you published a post on your blog.


> Because, in many cases, you really don't give a flying shit about the "real-world" time.

But there are loads of cases where your computer/programme needs to talk in "real-world" time. So you can't avoid the problem.


Time is even more complicated than that because of vagueness, either intentional or unintentional.

Humans represent time vaguely rather than precisely. Computers tend not to.

A while back, the British Library catalogue was put online as linked data. The library cataloguers had added details of author birth and death years but not always birth dates. But the data model normalised "1941" to "1941-01-01". (They fixed it after I moaned on Twitter.)

You should be able to represent in your data model, date-time library etc. vague dates and relative dates. When I say someone was born in 1941, I don't mean they are born at 1941-01-01T00:00:00Z. When I say that something happened on Thursday, I don't mean it happened at 2013-01-17T17:04:00Z. It may have happened at any time on Thursday.

If you store everything as Unix epoch seconds, good luck representing vagueness.

ISO 8601 gets around the problem rather neatly by being big-endian and allowing omission at any point. Wanna say 1941? "1941". Want to say January 1941? "1941-01". Want to say January 14, 1941? "1941-01-14". Want to get very detailed? You can add points of a second: "1941-01-14T12:14:03.0482Z". You can do timezones.
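One way to make that omission machine-usable (my sketch; ISO 8601 itself only defines the notation) is to expand a partial date into the interval of days it denotes:

```python
from datetime import date, timedelta

def partial_iso_to_interval(s: str) -> tuple[date, date]:
    """Expand a partial ISO 8601 date into [first_day, last_day]."""
    parts = [int(p) for p in s.split("-")]
    if len(parts) == 1:                       # "1941" -> the whole year
        y = parts[0]
        return date(y, 1, 1), date(y, 12, 31)
    if len(parts) == 2:                       # "1941-01" -> the whole month
        y, m = parts
        first_of_next = date(y + 1, 1, 1) if m == 12 else date(y, m + 1, 1)
        return date(y, m, 1), first_of_next - timedelta(days=1)
    y, m, d = parts                           # "1941-01-14" -> a single day
    return date(y, m, d), date(y, m, d)

assert partial_iso_to_interval("1941") == (date(1941, 1, 1), date(1941, 12, 31))
assert partial_iso_to_interval("1941-01") == (date(1941, 1, 1), date(1941, 1, 31))
```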

Eventually, we'll have an even better way of representing date-times that are more like bitmasks. We can then represent in a sane way the concept of opening hours. If you say the shop is open from 9am to 4:30pm Monday to Saturday, how do you do that?

If I am digitising an old text of unknown origin, and on a page it says "January 12", and on the page after that it says "January 15", can I represent those with a reference to an implicit but unknown year? I don't know the year, but I do know that the January 12 instance comes before the January 15 instance on the next page. And at some future point, we might deduce from circumstantial evidence what year it was and the sequential ordering will still make sense. Relative dates would be useful here.

ISO 8601 also allows you to represent ordinal dates (the day number within the calendar year).

Can your date-time representation deal with moveable feasts like Easter?

Have a look at the WHAT WG Wiki page on Time to see more obscure use cases: http://wiki.whatwg.org/wiki/Time

Your date-time representation, storage format and calculation libraries are probably inadequate.


> If I am digitising an old text of unknown origin, and on a page it says "January 12", and on the page after that it says "January 15", can I represent those with a reference to an implicit but unknown year? I don't know the year, but I do know that the January 12 instance comes before the January 15 instance on the next page. And at some future point, we might deduce from circumstantial evidence what year it was and the sequential ordering will still make sense. Relative dates would be useful here.

Of course, if the year is unknown, and could be before ~1918, and your dates are, say, the dates of two letters, you can't even be certain about the ordering of "January 12" and "January 15".


You may be able to infer it from the sequential ordering of events described in the letters, but, yes, point taken.


> Eventually, we'll have an even better way of representing date-times that are more like bitmasks. We can then represent in a sane way the concept of opening hours. If you say the shop is open from 9am to 4:30pm Monday to Saturday, how do you do that?

For the record, the Open Street Map project has come up with a human readable way to encode opening hours. http://wiki.openstreetmap.org/wiki/Key:opening_hours


Yup, and they've properly specced it out and everything. I frequently advocate people nick this and reuse or build on it.


Now take it a step further and think of one of the problems I ran into when working on MeNomNom: the hours of operation for bars/night clubs. They might list their hours as Mon-Thur 6pm-1am, Fri-Sat 6pm-3am. That's also how people expect to view the hours, but the fact is that it is incorrect. The actual hours (that I had to store in the DB) were more like Monday 6pm-11:59:59pm, Tuesday 0:00:00am to 3am, Tuesday 6pm-11:59:59pm. Human representations of time are so wonky! People seem to have an intrinsic sense that another day doesn't start until they sleep.
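That normalization can be sketched like this (a hypothetical helper, not MeNomNom's actual code), treating closing hours past midnight as spilling into the next calendar day:

```python
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def normalize_hours(day: str, open_h: int, close_h: int):
    """Split hours like Monday 18-25 ("6pm-1am") into per-day intervals.

    Hours past 24 mean "the small hours of the next day", e.g. 25 == 1am.
    """
    if close_h <= 24:
        return [(day, open_h, close_h)]
    next_day = DAYS[(DAYS.index(day) + 1) % 7]
    # The stretch past midnight belongs to the NEXT calendar day in the DB.
    return [(day, open_h, 24), (next_day, 0, close_h - 24)]

# "Monday 6pm-1am" as humans say it...
assert normalize_hours("Mon", 18, 25) == [("Mon", 18, 24), ("Tue", 0, 1)]
# ...versus a plain daytime range, which passes through untouched.
assert normalize_hours("Fri", 9, 17) == [("Fri", 9, 17)]
```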


Would that be 12 January Old Style or New Style? (And are you sure the year number changed on 1 January in the region where that text was written or published?) http://en.wikipedia.org/wiki/Old_Style_and_New_Style_dates


> Humans represent time vaguely rather than precisely. Computers tend not to.

Hi. A colleague of mine is working on a large medieval dataset that contains many dates and recently explained this concept of vagueness to me. What's the standard way of dealing with this in MySQL, for instance? How do comparisons and orderings work for a collection of vague dates?

You're saying ISO 8601 can do this, and MySQL can talk ISO 8601?


I don't know much about MySQL, sorry. Mostly I use Postgres. I'd suggest the only really good way of solving it is something like tstzrange or daterange in Postgres, or defining your own datatype, which most databases don't support.


The author only advises using UNIX time to precisely measure time intervals and store precise timestamps; your points are of course valid, but they are not a reason to degrade precise data into a human-readable pandemonium.


Ah, my mistake. I'll rephrase my original comment.


> Computers tend not to.

Computers can be made to represent vagueness rather precisely.

> You should be able to represent in your data model, date-time library etc. vague dates and relative dates.

If it is needed. Often it is not. Or rather, for most applications there is an implicit hard-coded resolution, or error interval.

The library case is special, and then why not just have an additional byte specifying the confidence interval? You only need a small enumerated type of 20 distinct values to encode confidence intervals from a nanosecond up to a millennium. Heck, you can do it with bits if you want to.
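A sketch of that one-extra-byte idea (the enum values and widths are illustrative, not a standard):

```python
from enum import IntEnum

class Resolution(IntEnum):
    # One byte is plenty: each value names the width of the vagueness.
    SECOND = 0
    DAY = 1
    MONTH = 2
    YEAR = 3

WIDTH_SECONDS = {
    Resolution.SECOND: 1,
    Resolution.DAY: 86_400,
    Resolution.MONTH: 31 * 86_400,   # conservative upper bound
    Resolution.YEAR: 366 * 86_400,   # likewise
}

def vague(ts: int, res: Resolution) -> tuple[int, int]:
    """Expand (timestamp, resolution) into a [start, end) second range."""
    return ts, ts + WIDTH_SECONDS[res]

# "Born in 1941" becomes a year-wide interval, not a fake midnight instant.
start, end = vague(-915_148_800, Resolution.YEAR)  # 1941-01-01T00:00:00Z
assert end - start == 366 * 86_400
```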


I'm not saying you can't do it. I'm saying that a lot of the software one writes against (date-time libraries, databases, data formats etc.) don't make it easy. And programmers don't think about it very hard.


Also worth noting: there are many time zone variations beyond DST, and some are defined in terms of partial hours (30 or 15 minute variations).

True story: when I was at ITA Software, Orbitz ran hundreds of instances of QPX, our low fare search software, on their own servers. We had an ops fire drill one weekend because customers were complaining that the site was showing incorrect prices. The root cause? A single machine in their server farm had the time wrong, so advance purchase was computed incorrectly for any query sent to that particular machine. That was fun to debug.

Even the meaning of, say, a minimum stay requirement is hard to precisely define. If you fly across the international date line and back on a Friday night, did you have a Saturday stay or not? There are so many flights going so many places, questions like this actually come up in practice.


For other - often difficult, and sometimes surprising - problems with time, see Peter-Paul Koch's excellent essay "Making <time> safe for historians": http://www.quirksmode.org/blog/archives/2009/04/making_time_...


GMT is not an old name for UTC. From http://www.bis.gov.uk/files/file32707.pdf :

Coordinated Universal Time (UTC) is based on International Atomic Time (TAI), but offset by an integer number of seconds so that it remains in approximate agreement with more traditional time scales based on the Earth’s rotation (i.e. Universal Time or, equivalently, Greenwich Mean Time). There is no natural gearing between TAI and GMT and so additional ‘leap seconds’ are applied when required to keep UTC in agreement with GMT to the nearest 0.9 seconds (see notes on time scales in Annex A for background information). UTC is the global standard for civil-time keeping today. It provides the most stable time base available because it is based on TAI, but also acts as a good approximation to its antecedent, GMT, for everyday purposes.


> When storing time, store Unix time. It's a single number.

Don't do this in your database. Use your datetime types. Please. You might save some work with your timezone handling, but you're not going to be able to use intervals etc. in a smart way.

This would be even dumber with PostgreSQL, which has much more robust date/time functionality (being able to use the - operator with datetimes and intervals, for example) and completely sane timezone support.


Why can't you use intervals in a smart way? Single number is the easiest and fastest method to use for intervals.

The only true problem with storing unix time is leap seconds and relativistic effects, which can both be safely ignored for most time keeping usages.


PostgreSQL intervals are smarter than just number of seconds between two datetimes. They can represent "1 month" or "1 year" which can't be represented as a number of seconds. Also, intervals can be used between dates or times, not just datetimes.


Exactly right; date arithmetic is horrible if you only have seconds to work with.

Even just saying "24 hours from now" -- as your user would see it -- you'll sometimes get the wrong answer if you just add on 24 hours worth of seconds... suppose they have a DST shift tonight?

What if you need to know how many days remain from today to March 1st of this year? ...well, are we in a leap year?

Postgres datetime doesn't solve all of the problems that crop up, but it's certainly better than using unix time.


24 hours from now is no problem if you're using unix time. Just add 24 hours worth of seconds -- when you convert back to the user's time, your conversion logic will take care of timezones. This is also incredibly important if serving an app over the internet as your user may have just flown from USA to Japan. Storing unix time solves this in the easiest way.

Same for working out how many days remain - you convert March 1st to a unix date, get an interval in seconds, and then convert that into a human readable format on display.

This isn't some revelation - unix time has been used successfully for decades now.

EDIT: Also, Postgresql uses a number very similar to unix time for the actual storage of the date; it just handles the pretty display for you. So you're arguing for using the same thing whether you say to use a Postgres datetime or a unix date number.


> [Postgres] just handles the pretty display for you

It's rather more than that, though I don't have the time to kill digging into it now.

And personally I mostly seem to end up doing more complicated date processing in code, not queries; but there everything needs to be a date immediately (not unix time or similar) for most purposes -- then I can use complicated libraries written by others to let me do "simple" things with dates, like rolling months.

Ah, also:

> you convert March 1st to a unix date

Then that's where the complicated logic goes. Basically, you need that somewhere, and it's non-trivial (leap year calc is the least of it).

My point isn't that unix dates aren't useful for storage, but that they aren't useful by themselves for calculation.


Sometimes I think Mysql has corrupted everyone's brain.

Postgresql and every other database that I'm aware of stores dates internally as a 4 or 8 bit number anyway, so you aren't saving anything by using a (hopefully) bigint instead of a datetime column.


"Sometimes I think Mysql has corrupted everyone's brain."

Unsurprising, given how cavalier MySQL is with all the other data it "stores".


I hope you meant 4 or 8 bytes not bits....


Yes, bytes. I suppose most everyone wants to use dates more than 256 seconds past Jan 1, 1970. cue embarrassment


Who was the genius who decided that unix time should handle leap seconds? It seems like such an obviously bad idea. The biggest advantage of unix time has always been that it is monotonically increasing and that it is precisely defined as seconds since the epoch, period. And then all of that is now broken because of this decision.

Also, does the problem that the leap second solves actually warrant all of the problems that it causes?


Since Unix time does NOT count the leap seconds, it allows the computation of the human-readable time from the Unix time without knowing when the leap seconds occurred. If knowledge of when the leap seconds occurred were required, then those events would need to be available somehow and used, introducing (i) more expensive computation and (ii) difficulty in future-proofing an app, since a reliable and updated data source supplying the times of leap seconds would be required.
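That is what keeps the conversion cheap; a sketch of the arithmetic, with no leap-second table anywhere in sight:

```python
def epoch_to_parts(ts: int):
    """Split a Unix timestamp into (days since 1970-01-01, hour, min, sec).

    This only works because Unix time pretends every day has exactly
    86,400 seconds; counting real leap seconds would need a lookup table.
    """
    days, rem = divmod(ts, 86_400)
    hour, rem = divmod(rem, 3_600)
    minute, sec = divmod(rem, 60)
    return days, hour, minute, sec

assert epoch_to_parts(915_148_800) == (10_592, 0, 0, 0)  # 1999-01-01T00:00:00Z
assert epoch_to_parts(86_461) == (1, 0, 1, 1)
```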


The obvious solution would be to get rid of leap seconds entirely, and let UTC and solar time (when sun is highest at Greenwich) drift apart.

Sun rising 30 seconds later every century is not really a problem in my opinion.


> Also, does the problem that the leap second solves actually warrant all of the problems that it causes?

Definitely not. The only benefit of leap seconds is keeping UTC in sync with solar time: the sun is highest in the sky at 12:00 noon at Greenwich.

But in the modern world, synchronizing solar time and clocks to <1 minute is of no value to anyone but nostalgic astronomers.


Some delegates to the ITU-R process have argued in the above fashion. Other delegates have come representing countries which want UTC to remain as a valid count of days in the calendar (for 86400 SI seconds is not the same as one rotation of the earth). For over 10 years these two viewpoints have been at stalemate with no progress toward any compromise that might alleviate the problems that POSIX systems face at each leap second.


I would argue for both fixed UTC based on atomic clocks, and 86400 seconds per day - at the expense of rotation of Earth not being in sync with UTC or calendar boundaries.

It will take 3000 years before leap seconds add up to an hour, but most countries adjust the clocks that much every year for DST. It will take 40000 years before the usual daytime hours turn into night, and by then I hope a single planet's rotation is a historical oddity.


He did not mention the issue that some processors might have different time counters for each core, sometimes resulting in very silly situations if you are reading time at millisecond or finer scales.

For example: I had a game whose physics behaved "wobbly" on multi-core systems. The issue is that as the OS juggles your process around, the delta time since the last physics step becomes inaccurate, sometimes even with backwards jumps in time (and corresponding backward movement in the game).

Later I saw someone showing the effects of this on the file system, with a multithreaded file copy resulting in very strange timestamps.

For my game, the solution was to force time-dependent threads to request affinity with one core.
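Another common defensive pattern (an alternative to the affinity fix described above, sketched here as an assumption about what a game loop might do) is to clamp each frame's delta so a bad clock reading cannot propagate into the physics:

```python
def safe_delta(prev: float, now: float, max_step: float = 0.1) -> float:
    """Clamp a frame delta into [0, max_step] seconds.

    A negative delta (the clock read went backwards, e.g. across cores)
    becomes 0 so the simulation never steps backwards; a huge delta
    (the process was descheduled for a while) is capped so the physics
    integration stays stable.
    """
    return min(max(now - prev, 0.0), max_step)

assert safe_delta(10.0, 9.999) == 0.0                 # backwards jump ignored
assert abs(safe_delta(10.0, 10.016) - 0.016) < 1e-9   # normal ~60fps frame
assert safe_delta(10.0, 12.0) == 0.1                  # long stall capped
```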


As the article hints, even though timezones are usually presentation-only, there are some cases where business logic really does have to deal with timezones.

If your software has business logic that cares about, say, what day a specific person perceived an event as happening, it needs to think about the timezone they have set.

So, in such cases, you'd better keep a history of all the timezones they've ever chosen, and the time (UTC) they changed them.

(Hopefully this lets someone else avoid my past mistakes.)


I would go so far as to say that time zones are rarely presentation-level issues. Want to have a meeting every Tuesday at 2:00 New York time? Better make sure you know the New York part or daylight savings will break it. Want to wake up every day at 9:00? If you don't track the zone that preference is set in, you won't get that right when you travel. Time zone isn't a presentation-only thing because people's behavior depends on their time zone.


Agreed. Often you need to care about timezones. Someone wants an alarm to go off at 8am every day. Well you need to start thinking about timezones, since that "8am" needs to be local time.
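A sketch of that alarm logic (an illustrative helper, using Python's zoneinfo): the alarm is stored as a local hour plus a zone name, and the UTC firing instant is recomputed each time, so it survives DST transitions:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def next_alarm_utc(after_utc: datetime, hour: int, tz: str) -> datetime:
    """UTC instant of the next local `hour`:00 in timezone `tz`."""
    local = after_utc.astimezone(ZoneInfo(tz))
    alarm = local.replace(hour=hour, minute=0, second=0, microsecond=0)
    if alarm <= local:
        alarm += timedelta(days=1)  # wall-clock arithmetic: stays at `hour`
    return alarm.astimezone(timezone.utc)

# Across the US spring-forward weekend, "8am New York" moves in UTC terms:
before = next_alarm_utc(datetime(2021, 3, 12, tzinfo=timezone.utc), 8, "America/New_York")
after = next_alarm_utc(datetime(2021, 3, 15, tzinfo=timezone.utc), 8, "America/New_York")
assert before.hour == 13  # 8am EST == 13:00 UTC
assert after.hour == 12   # 8am EDT == 12:00 UTC
```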


This topic comes up again and again. But I mostly read only about the problems, not about the solutions.

This post advises using Unix timestamps. That's already what I'm doing mostly everywhere. And it seems to me like most people do, and like this is somewhat accepted as the best available option, but still not perfect.

Then I am wondering again: isn't there really a perfect solution? Especially since the problems with Unix timestamps are the non-monotonicity and the fact that e.g. two equal timestamps can actually represent two different times (a second apart). This could be bad in cases where seconds matter (e.g. some log files which need to be very precise about time).

Maybe TAI is, and we all should store TAI timestamps instead of Unix timestamps? But is there an easy way to get TAI timestamps? Also, I haven't really seen other projects doing this; why? Maybe it is just too little gain over Unix timestamps and too few tools available to work with them... Btw, I just checked, there is http://cr.yp.to/libtai.html and http://pypi.python.org/pypi/tai64n, maybe I should just start using those.

This of course still doesn't solve anything about synchronization or inaccurate system clocks but it would be better than Unix times.

Edit: Getting the TAI timestamp is probably not easy or maybe even not possible... I just saw this: https://github.com/stoni/libtai/blob/master/tai_now.c ...


Getting a TAI timestamp is impossible because the BIPM (the authority that defines TAI) does not want TAI used for such purposes. Without approval from the authority nobody will undertake to construct the technology to provide TAI to an operational system. Without a source for TAI, any system which claims to be using TAI is making the same mistake that POSIX made with UTC. To wit: Creating a new thing which has the name of an existing thing but does not have the properties of the existing thing. This is a recipe for confusion.


Well, there is libtai from djb. Isn't that credible?


Why doesn't BIPM want TAI to be used for timestamps?


TAI is calculated in retrospect, as I understand it, based on the weighted contributions of the atomic timescales maintained by all of the participating countries. As a result, if you store a TAI timestamp in hopes of using it as an absolute time reference point, you will have to go back and update it when BIPM releases the next 'Circular T' bulletin (http://www.bipm.org/jsp/en/TimeFtp.jsp?TypePub=scale).


Read between the lines of http://www.bipm.org/cc/CCTF/Allowed/18/CCTF_09-27_note_on_UT... about what systems of time distribution already exist and which of those are approved for use by national and international agencies.


> This of course still doesn't solve anything about synchronization or inaccurate system clocks but it would be better than Unix times.

One of the solutions used in various systems from cash machines to databases is to use GPS time. All GPS satellites carry atomic clocks, and when computing a position fix the receiver also determines local time, to an accuracy of up to 10 nanoseconds.

GPS time does not have leap seconds, and is always at a 19 second offset from TAI. It's available without a network, and without synchronization between computers.
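Those relationships reduce to a couple of constants; note the UTC offset below is era-specific (16 leap seconds was correct in 2013, around when this thread was written), so treat it as an assumption:

```python
GPS_TAI_OFFSET = 19        # TAI is always exactly 19 s ahead of GPS time
GPS_UTC_OFFSET_2013 = 16   # leap seconds accumulated since the 1980 GPS epoch

def gps_to_tai(gps_s: float) -> float:
    # Fixed by definition; never changes.
    return gps_s + GPS_TAI_OFFSET

def gps_to_utc_2013(gps_s: float) -> float:
    # Unlike the TAI offset, this one grows with every new leap second.
    return gps_s - GPS_UTC_OFFSET_2013

# Consistency check: TAI - UTC was 35 s in 2013 (19 fixed + 16 accumulated).
assert gps_to_tai(0) - gps_to_utc_2013(0) == 35
```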


I'm a noob. How can two equal timestamps represent different times? Or do you just mean as presented in different timezones?


See: http://en.wikipedia.org/wiki/Unix_time#Non-synchronous_Netwo...

The Unix timestamp 915148800 (from the table) represents two times (separated by one second from each other).


Because of the leap seconds -- the unix time would show the same time for those two seconds (before/after the leap).


On servers you control, do what Google does: skew time on leap-second days, making each second slightly longer, so that the day is still the normal number of seconds. Filter inputs coming from outside so that your internal systems never see a 60th second in a minute.


Erik Naggum's "Long Painful History of Time" (http://www.scribd.com/doc/93991574/Erik-Naggum—ALongPainfulH...) is well worth reading, too.


Working through it, thanks.

The link in the paper's headers is still live: http://naggum.no/lugm-time.html

The home page refers to his ill health: http://naggum.no/

And he in fact died in 2009 at 44: https://en.wikipedia.org/wiki/Erik_Naggum

Interesting guy, involved in specifying the Internet from link level to mid-email level, worked on Emacs, and among the first Usenet users to be known for flaming. I vaguely remember the name in that context.

On balance, a contributor.


Third point is worded really unclearly. GMT still = UTC (modulo seconds point that has been made elsewhere). BST = GMT + 1; British time is either GMT or BST depending on the time of year.

Advice to display an offset when displaying a time is wrong or at least incomplete; you should display a symbolic timezone (e.g. "London" or "Eastern US"), as that's what's going to be meaningful to someone reading it.


> When storing time, store Unix time. It's a single number.

This won't work for any system that allows users to schedule events in the future. Let's say you agree to a meeting in Moscow at 3 pm on 1 Dec 2013. The actual (astronomical) time of the meeting might change significantly if they change the daylight saving rules. So you need to store something like "2013-12-01 15:00 Moscow"; there's no way around it.
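A sketch of that idea with Python's zoneinfo (variable names are mine). Russia's offset rules did in fact change in October 2014, which is exactly the kind of revision that invalidates a pre-computed Unix timestamp:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package

# Store the wall-clock time plus an IANA zone name, not a Unix timestamp:
meeting_local = "2013-12-01 15:00"
meeting_zone = "Europe/Moscow"

# Resolve to an absolute instant only when needed, using current tz rules:
dt = datetime.strptime(meeting_local, "%Y-%m-%d %H:%M").replace(
    tzinfo=ZoneInfo(meeting_zone))
print(dt.isoformat())  # the offset comes from the tz database at resolution time
```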


I disagree that Timezone is just a presentation issue.

When information is transmitted between timezones, you want to be able to determine the local time of the sender or of the information relays. This is why you'll find time zone information in the dates and timestamps in mail headers.

This is also why ISO 8601 bothered to standardize specification of the time zone.
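For example, with Python's standard library (the timestamp value is made up): an ISO 8601 string carries the sender's UTC offset, so the reader recovers both the absolute instant and the sender's wall clock.

```python
from datetime import datetime, timezone

stamp = "2013-01-15T09:30:00+02:00"
dt = datetime.fromisoformat(stamp)
# The offset preserves the sender's local view without losing the instant:
print(dt.astimezone(timezone.utc).isoformat())  # 2013-01-15T07:30:00+00:00
```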


"the Prime Meridian was (arbitrarily) chosen to pass through the Royal Observatory in Greenwich"

At the International Meridian Conference in 1884, 41 delegates from 25 nations met in Washington, D.C., USA and selected the meridian passing through Greenwich as the official prime meridian because of its popularity - not an arbitrary decision ;)

However, the French abstained from the vote and French maps continued to use the Paris meridian for several decades.


How is a popularity contest not a shining ideal of completely arbitrary decision-making?


In this case popularity was the best criterion. An imaginary line is just that: an imaginary line. None is intrinsically better than another, so the one already popular was the best choice.


Well, the choice of GMT certainly made picking the International Date Line easier. Certainly useful that 180 degrees around the world is mostly water.


A good article, but the first point is (essentially) incorrect:

UTC: The time at zero degrees longitude (the Prime Meridian) is called Universal Coordinated Time (UTC).

If you stand at the Prime Meridian in London during the summer and ask someone what time it is, the correct answer will be one hour away from UTC.


At a more coarse-grained level, "Calendrical Calculations" by Dershowitz and Reingold is very interesting. It describes many different world and historical calendar systems, and includes lisp code to convert between them.

Among many other problems, converting between calendar systems requires that you define when a day starts; there are many ways to define that among the systems.

I think I have the latest, the third edition, https://en.wikipedia.org/wiki/Special:BookSources/9780521885...


For those less practically interested in the history of the computation of time, Arno Borst's "The Ordering Of Time: From The Ancient Computus To The Modern Computer" is recommended reading.


I hate that Mixpanel and Google Analytics APIs return data in timezones, and the former doesn't tell you what the timezone is while the latter in v2 of their API can give you the wrong timezone.



The relativistic warping of space-time forbids standardization of time measurements


Unless you also standardize the representation of inertial reference frames.


Funny, I read the title and thought TVM, but then realized it was the age-old quest for timekeeping and dreaming of ways to track it with reasonable sanity.. :-)

Bonus points for anyone who knows what I mean by TVM and can describe why every programmer should know about it..


You explaining it would be more useful than the bonus points.


I can only think of Time Value of Money...


Unfortunately Google can only think of that too.


I was attempting to generate discussion. Your snark is not welcome.


No snark intended whatsoever. I was trying to figure out what the guy was talking about, and when I searched for TVM the results were overwhelmed with Time Value of Money. I was observing that if he meant anything other than that, it would be all but impossible to figure out for anyone not familiar with ... whatever he's talking about.


In that case, I owe you an apology :(

Love to know what this TVM is.


> A time format without an offset is useless.

Not necessarily. The time might be "floating time", i.e., it depends on some local context.

For example, the show starts at 10 pm 19 Jan 2012. You don't need the time zone because people going to the show know what 10 pm means locally.


Until your business grows and you're storing dates for shows in multiple timezones, or if you need to book shows and store dates in the future when DST may or may not be in the same state as when you store the date.


But this will still be useless anywhere else than "locally". The timezone just became implicit and lost.


Of course, if you add a timezone, you are likely in trouble when daylight savings time comes/goes. That is, shows don't change times just because you effectively changed timezones.


Depends on how you're storing the timezone. If you store it as "UTC+2 hrs", then yes you'll have a problem. If you store it as the tzdata name "Europe/Sofia", then the tz database will take care of DST and rule changes for you.


I, of course, can not disagree. Just wanted to add to the voices that point out much in dealing with time in computers depends on your goal.


With all the discussion about how Unix represents time, it's surprising that the article made no mention of the Year 2038 Problem:

https://en.wikipedia.org/wiki/Year_2038_problem
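A quick illustration in Python (the wrap arithmetic merely simulates a signed 32-bit time_t; what actually happens on a real system at that instant is platform-dependent):

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # the largest value a signed 32-bit time_t can hold
last = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later, a 32-bit counter wraps to -2**31, which a naive
# conversion would read as a date back in December 1901:
wrapped = (INT32_MAX + 1) - 2**32
print(wrapped)  # -2147483648
```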


A very good open source package for handling date-time is ICU (http://site.icu-project.org/) which handles a lot of globalization/internationalization issues.


Choosing a timestamp format is a difficult decision for REST APIs. I polled my networks a while back and got pretty evenly divided responses [1]. In the end I opted for milliseconds over ISO 8601, and I've been happy with that choice for ease of processing and debugging, especially as I'm now using the same values in the URL for caching purposes.

1. Google Plus conversation remains https://plus.google.com/106413090159067280619/posts/Wtkhk9jU... Good luck finding it on Twitter
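For what it's worth, a millisecond epoch value round-trips to ISO 8601 easily when you do need it human-readable (the timestamp value here is made up):

```python
from datetime import datetime, timezone

ms = 1385895600000                       # compact, URL-friendly API value
dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)
print(dt.isoformat())                    # readable form for debugging
print(int(dt.timestamp() * 1000))        # back to the integer, losslessly
```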


Your API should return Unix time. The end.


Because when someone wants millisecond resolution the best answer is an integer number of seconds, right?

mmahemoff, are you using actual milliseconds, which means you have to keep track of leap seconds?


In my case, that level of precision isn't the consideration. I'm just expressing Rails' created_at and updated_at fields as integers instead of ISOs.


I guess a related question is how you synchronize a user's perception of time with the perception of a program that user is interacting with. To effect this synchronization, you need to know the latencies in both directions: between the user initiating an action and the program registering it, and between the program initiating an action and the user sensing it.


Maybe the most insidious "bug" I ever had to solve: a performance test environment showing very strange behavior. Eventually I found that time jumped 30 seconds forward every minute, and soon after 30 seconds backward. It had some kind of redundant time-server configuration that was messing the time up. It took longer to find than seems reasonable, but it's the kind of thing you don't really expect.


He neglected to explain TAI and its relation to the other timescales.


Also, there have been many definitions of UTC, GMT, etc. over time. So to be truly correct you'd always have to mention which definition of UTC you are referring to when you use the term. And the ITU-R is debating redefining UTC once again to remove leap seconds (http://www.ucolick.org/~sla/leapsecs/). If you are really interested in this time stuff, here is a great resource: http://www.ucolick.org/~sla/.


>mention which definition of UTC you are referring to

Doesn't the time value tell you? Or have there been temporally overlapping definitions of UTC that disagreed?


There have been many definitions of UTC (http://www.ucolick.org/~sla/leapsecs/timescales.html#UTC) which do not agree. When an API does not match what the providers supply, which definition qualifies as "official"?


What's the deal with the "?v=1" ?



