What every programmer should know about time (unix4lyfe.org)
444 points by enneff 2060 days ago | 126 comments



> Timezones are a presentation-layer problem!

I want to correct this common misconception that UTC is enough. Calendar time with all human traditions involved is more complex than a simple timestamp.

The advice above is incorrect for calendar and scheduling apps, or anything that has a concept of a repeating event.

An example: we have a weekly meeting occurring at 9am Monday in San Francisco. You are in London and want to attend the meeting over Skype. When is it in London time?

It depends.

On 7 Mar 2011 it's at 5pm

On 14 Mar 2011 it's at 4pm

On 28 Mar 2011 it's at 5pm

To make these calculations, you need to know timezone & daylight saving time (DST) rules of both your current location and the home location of the event.

The "DST zone" of a repeating event's home location has to be stored along with the time, so it's not just a presentation-layer issue.
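That calculation can be sketched against the tz database, which stores exactly these per-location DST rules; for example with Python's zoneinfo module (3.9+, used here for illustration):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # IANA tz database, Python 3.9+

home = ZoneInfo("America/Los_Angeles")   # home zone of the recurring event
viewer = ZoneInfo("Europe/London")

# The same 9am wall-clock time in SF lands at different London times
# around the (mismatched) US and UK DST transitions in March 2011.
for day in (7, 14, 28):
    meeting = datetime(2011, 3, day, 9, 0, tzinfo=home)
    print(meeting.astimezone(viewer).strftime("%d %b: %H:%M"))
```

Note that the stored fact is the zone name plus the local rule ("9am, America/Los_Angeles, Mondays"), not a UTC instant.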


The article should add a caveat: use UNIX time when you are recording the current time to store for later use. That can always be formatted in the user's current timezone to display back.

When you're inventing a rule system based on local times, as you are above, of course you need to track the rules in the local time zone. That's because the rule is: "do this thing at 9am in my particular local time zone", not "do this thing every 86400 seconds". Keep in mind that this is hard, though, because in many local time zones one hour is missing on one day ("spring forward"), and one hour occurs twice on another ("fall back"). If you have some event that should be triggered at 1:30am, which 1:30am do you mean? The first 1:30am or the second 1:30am? What about on the "spring forward" day, when 1:30am doesn't occur at all?
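For a concrete sketch of that ambiguity, Python's zoneinfo (3.9+) exposes PEP 495's `fold` attribute to distinguish the two 1:30am's on a fall-back day (the date and zone here are chosen purely for illustration):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
# US "fall back" on 6 Nov 2011: clocks went 2:00 -> 1:00, so 1:30am happened twice.
first = datetime(2011, 11, 6, 1, 30, tzinfo=tz, fold=0)   # the EDT 1:30am
second = datetime(2011, 11, 6, 1, 30, tzinfo=tz, fold=1)  # the EST 1:30am, an hour later

print(first.utcoffset(), second.utcoffset())   # -1 day, 20:00:00 and -1 day, 19:00:00
print(second.timestamp() - first.timestamp())  # 3600.0
```

Without `fold` (or an equivalent disambiguator in your library), "1:30am" simply doesn't identify a unique instant on that day.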


As an aside, have you thought about the effects of DST on train schedules? In Germany, for example, they literally stop the trains for an hour when the clocks go backwards and let them all run a nominal hour late when they are going forward.


Very interesting. I didn't realize that an hour made much of a difference to Deutsche Bahn's schedules, however :)


Sadly, not. And I've done some scheduling work for them.


And that doesn't even get into the problem of the DST dates changing at the whim of legislation, as they did in the US about 5 years ago. If you are attempting to track timezone instead of UTC, you now have to check which date ranges had which DST rules in effect. Very messy stuff. Better to just keep it in UTC, or Unix time plus the offset it was recorded in if you need that, as you mentioned.


Talking of the DST point, how do you manage this in your apps? Say we're in the UK and you schedule something to run at 9am GMT. In the summer the local timezone changes to BST, but 9am GMT is still 9am UTC, so the job now runs at 10am local time. I've never seen a scheduling app where you say 9am UK time and have it automatically switch from GMT to BST. The closest I've seen is "use server time", where you have to set up the server to automatically apply the DST rules - but then you have issues when working with out-of-sync DST rules, such as those of the US.
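For what it's worth, the "say 9am UK and have GMT/BST switch automatically" behavior is what tz-database-backed libraries give you; a minimal sketch with Python's zoneinfo (3.9+), dates chosen for illustration:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

uk = ZoneInfo("Europe/London")  # one identifier covers both GMT and BST

winter_run = datetime(2011, 1, 10, 9, 0, tzinfo=uk)  # GMT in effect
summer_run = datetime(2011, 7, 11, 9, 0, tzinfo=uk)  # BST in effect

print(winter_run.utcoffset())  # 0:00:00  -> 9am local is 09:00 UTC
print(summer_run.utcoffset())  # 1:00:00  -> 9am local is 08:00 UTC
```

The scheduler stores "9am Europe/London" and converts to UTC at dispatch time, so the DST rules apply themselves.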


Yes to everything stated above. We have a system that schedules events in the future, and the times are all wrong for a couple hours per year because the system applies the DST change at 12:00am UTC instead of 2:00am local time! Getting time right in all cases is much more challenging than most people realize.


UTC is enough for - as the article suggests - storing a date.

Storing a recurring class of dates is - of course - more complicated; but then has anyone ever suggested otherwise?


Maybe nobody explicitly suggested otherwise, but I just wanted to highlight that time is a human concept and in informal contexts can mean a lot of different kinds of things (e.g. the time of a recurring event). When you start formally modeling those, a single timestamp isn't enough.


Actually, UTC isn't enough for storing a date, at least not a date in the future. If I schedule a single event for 3pm on October 1 (currently in the future), I expect it to stay at 3pm. Even if my city hosts the Olympics: http://support.microsoft.com/kb/257178

Note also that not all governments are in the habit of giving sufficiently advance notice of daylight saving changes.


And not just for repeating events. I worked on an app that dealt with scheduling travel, and you needed to record the zone of both locales so that if a trip crossed time zones the trip length (which was derived) would remain accurate.

You also had to take into account what the DST rules would be for both locales at the scheduled time. Hint: use Oracle; SQL Server doesn't have a way to stay updated with the constantly changing DST offsets worldwide.


There is no need to save the home location with the time.

If you save the UTC date of the event, the localized date for some timezone can be extrapolated from it.


UTC is enough if we have a single instance of an event. If we have a concept of recurring event ("this event will occur on every Monday 9am"), then it's not enough.

I just wanted to highlight that "time" is a human concept that in informal settings means a lot of things, but when you start to model it formally, it can be more complex than a single timestamp.


Yes, very correct! I hope the 'let's just use a Unix timestamp because it's easy' mentality will go away soon!


Unix timestamps are indeed easy and good enough for most cases (isn't the mantra "keep it simple"?), except for calendars/recurring events, as some people have pointed out.


What would you replace it with for storing a single date?


Depends on the context it's used in. Usually YYYY-MM-DDThh:mm:ss from ISO 8601 with an explicit UTC offset (the + notation); sometimes that plus the local timezone in a separate column/field; sometimes a serialized version of a library's date representation.
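A minimal sketch of that first option with Python's datetime; the round-trip through the explicit-offset string is the point:

```python
from datetime import datetime, timezone

ts = datetime(2011, 7, 25, 12, 34, 56, tzinfo=timezone.utc)
s = ts.isoformat()        # '2011-07-25T12:34:56+00:00' - offset is explicit
back = datetime.fromisoformat(s)

print(s)
print(back == ts)  # the stored string reconstructs the same instant
```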


UTC is fine for single events. For recurring events, UTC + a timezone offset alone is useless; you need to know the DST rules too to make correct calculations across DST changes.

vCal has a spec for DST rules, but you don't want to store them for every event. You store a location or "DST zone" for your event and keep the DST rules in a separate database (they need to be updated, as DST rules can change).

When I worked on calendar applications, there was no commonly agreed way to transfer DST zones between systems, but single DST rules could be transferred as part of a vCal entry.

Microsoft apparently implemented their own integer code for every "DST zone" and used it to transfer events correctly between Microsoft systems (e.g. sending meeting invitations by email from Outlook to Outlook). Things might have changed since I worked on this area, I haven't checked the current status.


"Timezones are a presentation-layer problem! Most of your code shouldn't be dealing with timezones or local time, it should be passing Unix time around."

I can attest to this. At a previous job our entire API used UTC. It was clean and worked at every layer of the app, from django to our client-side javascript. When we needed to display a human readable version, we did the translation at render time. All interactions with time as data were done with the UTC timestamp, which saved much headache.

A couple months before I left, one of the engineers proposed switching everything over to a textual representation according to ISO 8601. I forget the nature of the argument, but it was inane (to me). This actually led to an extensive back-and-forth email exchange between various members of engineering, me as one of the frontend engineers, and even the engineering manager, who seemed to favor the idea.

I argued, "why change the entire stack which works just fine, etc etc". Fortunately, in this instance a heavy workload and group apathy about taking on unnecessary additional work allowed this entire concept to wither and disappear after a couple days.


Though, in calendar apps, I find this behavior annoying. My phone automatically adjusts my calendar when I change timezones, but I put all events into my calendar as local time for wherever I will be when that event is going to occur. I don't want to have to think about how to enter things into my calendar when I'm planning a trip back to where I grew up...


Well, this is also a presentation layer problem. You should be able to tell it to use your home zone instead of the current local zone, for example.


I don't want to tell it that though. I just want to enter things at a time, and have them stay at that time.


You have to tell it a time zone or else the calendar app will not know what time you mean (and really, time without a time zone is meaningless). What you are doing is using your current time zone as the default so when you enter appointments from another time zone you're actually entering them incorrectly.

Any calendar app should allow the entry of appointments with time zone.


> You have to tell it a time zone or else the calendar app will not know what time you mean

It seems logical to me that whatever time I write in my calendar app is the time that I expect something to happen. I.e. it is the local time at wherever I happen to be. If I put in a meeting at 4pm on 8/7/2011, then I expect an alarm to sound whenever the local time is 4pm on 8/7/2011.

That's how my paper diary works (or used to work when I had one) - if I am planning for a future event where I will be in a different time zone, I simply write down the local time of the event.


That doesn't sound at all logical to me. The phone conference is scheduled for a particular time, e.g. 10am in Montreal. The other meeting participants do not care that it is now 10am in Mombasa, where your phone happens to be at the moment - they will not be at the meeting for another 7 hours anyway.

Your other way also allows for the same event to happen twice, at different times, which is entirely unexpected.


Yes, very true. Doesn't work very well for events where people are attending from multiple time zones. I was thinking more of events that I attend in person.

But I'll use the paper diary example again - if I was in Mombasa and was due to have a phone conference at 10am Montreal time, I would write "6pm - Phone conference" in my diary (assuming that is indeed the correct local time).

And in your example, yes, it would be easier for me if I could tell my calendar "10am, Montreal time, please adjust that to Mombasa time". So I guess my ideal solution would be a calendar app that works the way I described unless I specifically override it.

Anyway, I think this proves the point that time is hard.


Alternatively, we could all just use UTC+0 and never worry about time zones again! :)


Most useful calendar apps these days allow you to share events, or invite others to an event. What happens when you invite someone who lives in another timezone? There's no good way to handle this that doesn't involve taking timezones into consideration.


What is supposed to happen when you take a flight at 4:10pm on 8/7/2011 and land before 4:10pm on 8/7/2011 (local time)? Do you get the alarm twice?


I think that would be the expectation of most non-engineers, assuming they set the new time zone before arriving and if they even considered edge cases like that. It's also how most alarms work, whether mechanical or digital.


I think that's pretty hard to do in these post-Concorde days?

Though if you lived close to a time zone border it could be a problem. There are towns straddling the Queensland-New South Wales border in Australia. There is a one hour time difference between the states for half of the year (NSW does daylight saving, QLD does not). I've always wondered how local businesses deal with that.


It is incredibly easy to arrive hours before you leave. Just fly East across the dateline (like Aus to North America).

And I've even lost a birthday flying back in the other direction. Great scheduling, that.


> It is incredibly easy to arrive hours before you leave. Just fly East across the dateline (like Aus to North America).

Or use a fast plane going west (Concorde used to take ~3h for London to NYC, and NYC is on UTC-5, so passengers on Concorde would arrive "2h before they left").


Time without a time zone is not meaningless. I don't need my calendar to be aware of when in the day the other side of the phone call is happening. I just want to enter "Dinner at 8pm" and have dinner stay at 8pm. This is one situation where I do not want my computer to get unnecessarily smarter than me. Just DWIM.


It still depends on where you have the dinner. If you have a dinner at 8pm in UK, you could leave France after 8pm and arrive for your dinner at 8pm again. What would you expect to happen - get the alarm twice?

What if you actually wanted to call someone before their dinner and that's why you put the event in? If that person was in France in this scenario, you'd want to call them before 7pm UK time instead. Calendars can't guess the context - location of the event depends completely on what you put in.


Any calendar app either has to base its time zone on location or ask you to manually choose a zone. How would you propose changing this?


i think what is being suggested is something like storing it as a string and raising the alarm when the string matches the current local time. so it's "time in whatever time zone i am local to when it matches the time".

(which has problems with uniqueness, but does seem like an intuitive "dwim" high level interface).


In the realm of unrealistic-now-but-maybe-cool-later, if it could access your flight reservations, it could infer where you'll be at the time and base the time zone on that.


The calendar has to know where each event occurs in order to present the time in the event's timezone, which introduces some interesting issues: 1) requiring a location fatigues the UX, 2) inferring location is not always possible and is frequently wrong, and 3) tracking location requires more data, like complete travel plans. And on top of this just-another-presentation-layer problem, online events have a different timezone for each participant.


Yup. In our architecture (TVs and other devices) time is kept as UTC through the entire system. We actually have two APIs: one that is UTC time and is used in all middleware, and a "GetCurrentTime" API that is used only in the presentation layer to know how time should be displayed on screen. GetTime returns UTC time; getCurrentTime returns a time struct of hours/minutes/seconds. This works well and it allows developers to easily identify API misuse during code reviews.


Storing all your time as UTC can create problems depending on what you're doing with the time. If your application is a calendaring application and people can book things well into the future, you can have problems with daylight saving time and timezones this way.

For my most recent app, timestamps are UTC and everything else is stored local time.


You'll need to store the time zone along with the local time, otherwise you won't be able to handle the situation where you have multiple users in different time zones. About the only reason this is preferable to storing UTC along with the time zone is that there are sometimes political decisions made to change daylight saving time (i.e. the mere fact of DST/TZs etc. isn't enough).


Yes, I store the timezone (associated with the user) along with the local time. All date operations in the application are done related to their timezone.

Storing it this way is preferable because it makes working with times across DST boundaries a non-existent problem. DST is a pain in the ass and you want to use whatever language/database facilities exist to make this as smooth as possible -- storing local time (and setting the timezone appropriately) makes it not your problem. Also, users frequently set their own timezone incorrectly, and when they fix their timezone, they don't want all their appointments to be at the wrong time.


A calendar application is a bit different, since the whole subject of the application is time, time zones, etc. Still I'd store all in UTC along with the user's time zone (you'd have to store the time zone anyway with localized time too).


Storing local time is preferable because it makes working with times across DST boundaries a non-existent problem and because users frequently set their own timezone incorrectly.

I've done previous projects where I've stored UTC dates or unix timestamps for this situation and it was a hassle to deal with. You really need to consider the nature of the data -- for straightforward timestamps (like the date of a post) UTC makes the most sense. For user-entered dates, I think local time is much more appropriate and a lot simpler.


Don't you get exactly the same problem when you store timezones? CET always stays == UTC + 1 hour. So there's no difference whether you store 8 CET or 7 UTC, because some country may decide it's not in CET at that time anymore... Only storing the actual location would save you if you're thinking about dates in the far future.

Then again, you'd have to somehow verify what's the time in that location at that point.


I don't store timezone offsets, I store the timezone strings and they are pretty granular. Users can choose the most appropriate timezone from 448 possibilities.


What about when you're storing things at date, hour, or even minute granularity, not seconds? Is this still as important?


We solved this by storing metadata around the timestamps. For us, it was an ID that referenced a global table of polities that use daylight savings and we could derive the current GMT offset from that. We could theoretically update the polities table over time as these changed, though while I was there we never did.

As for granularity, you can derive day/minute/hour etc based on the timestamp. For us, we were able to do those calculations in the application layer. For other types of projects, you can store that data in the db if you need to do more efficient queries for example.


As someone working on time sensitive code on embedded systems (DVRs that get UTC from the broadcast), I can certainly agree with the issues laid out in the post.

As an example: we have some certifications our product must pass, and the certification body plays a 4 minute looping broadcast stream with the test condition in it. It turns out I handled the time jump that occurred when the stream looped around poorly, and this caused about a week's worth of headaches and delays in getting our certification. None of my code expects time to be ever-increasing now.


just to add to the examples here: last week i was writing code to handle calibration data from seismic detectors. these are connected to GPS receivers and so have pretty accurate times. yet when i triggered a calibration it would start 1 minute in the past. somehow the receiver was moving back in time before starting the calibration....

...or ntpd on the test computer had failed and the machine was a minute fast :o) so the code to search for data now looks backwards in time as well as forwards.

incidentally, does anyone know of a really good API for time (including calendars etc)? python's (which is largely a thin layer over C) is a horrible mess, for example.


> incidentally, does anyone know of a really good API for time (including calendars etc)? python's (which is largely a thin layer over C) is a horrible mess, for example.

The only date & time API I've ever seen praised is Joda (JDK/Java library). Joda's author went on to redesign it from scratch (though with inspiration from his work on Joda) for JSR-310, Java's new Date and Time API.


I haven't played with it much, but the dateutil[1] package seems to do some nice stuff.

[1] http://labix.org/python-dateutil


Watch out for GPS time: it ignores leap seconds and is currently exactly fifteen seconds ahead of UTC.


thanks. currently everything is as insensitive to time as possible (during calibration the data are tagged, so i look for the block of tagged data near the correct time).


boost::date_time. I've cursed at it in the beginning, but it does things the Right Way, and I've been saved a number of times by its fighting back against shortcuts (like using Unix time - the issues identified in the article are real and a good summary, but the solution only works for the case where you need to work within a limited time span, from 1970 until a few decades into this century.)


C++0x standard library is pretty nice too. std::chrono and std::thread work nicely together.

Right now I'm working with C, pthreads and struct timespec's and it makes me wish for a good time handling library.


I like to rip on MySQL as much as the next guy, but the article is incorrect about MySQL DATETIMEs:

DATETIME: Eight bytes:

* A four-byte integer packed as YYYY×10000 + MM×100 + DD

* A four-byte integer packed as HH×10000 + MM×100 + SS

Storing UNIX time as an integer would be silly, considering:

TIMESTAMP: A four-byte integer representing seconds UTC since the epoch ('1970-01-01 00:00:00' UTC)


MySQL's datetime fields are not timezone aware. If one client has set one timezone and inserts a value into a datetime field, and another client has a different timezone, the value will not be converted.


Backwards jumps in time burned me once. The user was running my software on machines that had a bug specific to certain Opteron multiprocessor chipsets where a process migrating from one processor to another would sometimes see a backwards jump in time, even when the system's time was marching forward predictably on each processor. It just goes to show that you're always doing distributed computing, even if you don't know it.


I'd like to add one to the list - store your Unix time as a 64-bit value, to save your client/employer some headaches in 2038. I doubt I'm the only HN user who spent a lot of time in '98 and '99 fixing Y2K problems.


Personally I'm looking forward to cashing in on some lucrative contracts as a graybeard C programmer in 20 years time, the same way all those COBOL survivors were able to in the late 90s ;)



D'oh! I misfired typing. Thanks for the correction.


> I'd like to add one to the list - store your Unix time as a 64-bit value

How about actually doing the right thing and using time_t?


I'm not sure that's the right thing, due to how time_t is (not) defined. First, it can be an integer or a floating-point number (although the latter is unlikely). Second, the size of time_t is not defined, so it could be 32-bit or 64-bit or something else. And then there may also be endianness issues when storing time from multiple different systems. So I'd say store it either as a 64-bit value (ignoring the possible floats, converting 32- to 64-bit and handling endianness), or use a textual representation of time_t.
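A sketch of that normalization, with Python's struct standing in for whatever serialization layer you use; the 64-bit width, signedness, and little-endian order are the choices you'd have to pin down:

```python
import struct

def pack_ts(ts: int) -> bytes:
    """Store a Unix timestamp as 8 bytes: little-endian, signed 64-bit."""
    return struct.pack("<q", ts)

def unpack_ts(raw: bytes) -> int:
    (ts,) = struct.unpack("<q", raw)
    return ts

assert unpack_ts(pack_ts(2**33)) == 2**33    # values past 2038 survive
assert unpack_ts(pack_ts(-86400)) == -86400  # pre-1970 dates work too
```

Fixing the on-disk format this way decouples storage from whatever width or type the platform's time_t happens to be.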


Erik Naggum's paper The Long, Painful History of Time is a must-read: http://naggum.no/lugm-time.html


> UTC (which is an arbitrary human invention)

Hmm, i wouldn't call it totally arbitrary. UTC = TAI + LS, such that |UTC - UT1| < 1 second, where:

* LS are leap seconds,

* TAI is "physicist" time, based on the ticking of atomic clocks at mean sea level. The length of a second is constant.

* UT1 is "astronomer" time, the rotation angle of the Earth with respect to the quasar reference frame. The length of a second is not constant.

The Earth's rotation is slowing down, so UT1 is gradually drifting away from TAI. UTC is a pretty natural scheme to reconcile these two systems.


UTC used to be called Greenwich Mean Time (GMT)

Sort of. This could be misleading because GMT and UTC are still two different things with different definitions. Wikipedia is a good source of info on this, but for starters:

UTC is closely related to Universal Time and Greenwich Mean Time (GMT) and within informal or casual contexts where sub-second precision is not required, it can be used interchangeably.

So not strictly, but practically.. ;-)


Don't blindly follow the advice at the end of this article! The issues he identifies are real, but the 'solution' only works in a small subset of use cases. When one needs a longer time span than [1970-2038], a Unix timestamp is horrible - how are you going to represent a date of birth in it for people born before 1970 (yes, they do still exist!)? There is no guarantee that negative timestamps will work!

Also it doesn't take different calendars into account, still doesn't work with leap seconds, doesn't deal well with time spans (t1 - t2 specified in seconds can be a lot of things in reality), ...

Use a proper date time library to deal with dates and store them in your database in a string format, including the time zone. It depends on your application which time zone (UTC or local), but in general UTC is best, and the local time zone could be a second column if you need the info (or it could be a property of the user, but e.g. many calendaring applications then screw it up in the UI layer...)

I'd like to read a book on the UI issues associated with dates and times, anyone know of something like that?


Unix time: Measured as the number of seconds since epoch (the beginning of 1970 in UTC). Unix time is not affected by time zones or daylight saving.

I don't think this is strictly correct. This implies that someone could start an atomic stopwatch at midnight on Jan 1, 1970, and it would match Unix time. It won't.

Because Unix time is non-linear and will either slew or repeat seconds when UTC has leap seconds, the hypothetical stopwatch would be ahead of Unix time by 34 seconds.

... at least this is how I understand it. Every time I try and wrap by head around the differences between UTC/TAI/UT1, my head really starts to hurt.


You're right. It should say: Unix time is the number of seconds since epoch, not counting leap seconds.
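That "not counting" shows up directly in the arithmetic; a sketch using Python's calendar.timegm, which implements exactly this leap-second-blind count:

```python
import calendar

# Unix timestamps for midnight UTC on the days surrounding the
# 31 Dec 1972 leap second (23:59:60 existed that night in UTC).
d1 = calendar.timegm((1972, 12, 31, 0, 0, 0))
d2 = calendar.timegm((1973, 1, 1, 0, 0, 0))

print(d2 - d1)  # 86400 -- the leap second is invisible in Unix time
```

The real UTC day had 86401 seconds, but the timestamps differ by exactly 86400: Unix time pretends every day is the same length.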


Any idea why it was done this way? It seems like counting leap seconds belongs in the same layer as sorting out timezones, i.e. not here.


Possibly it's a historical accident: UNIX was released before the first leap second, and before anyone could have known they were needed (back when seconds were defined in terms of Earth's movement). Around its release time, the definition of the second was switched (making leap seconds potentially needed). Maybe by the time anyone realized, switching from UTC to TAI would have been too painful?

Not to mention that you can't know the UTC-TAI offset more than a few months into the future. We cannot predict which years will have leap seconds inserted.

Unix timestamps do not handle leap seconds well at all. Obvious things like t₂-t₁ fail to provide the number of seconds between t₂ and t₁.


Because people working with timestamps like to write code that says things like:

    // schedule another run for tomorrow
    schedule_event(now() + 86400)
...which doesn't actually work when leap-seconds are involved (things start to drift by a second). If you specify that leap seconds get replayed, it works.
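Leap seconds are hard to demonstrate with stock libraries, but the same "now() + 86400" trap is easy to show at a DST boundary; a sketch with Python's zoneinfo (3.9+), with the date and zone chosen purely for illustration:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
run = datetime(2011, 3, 12, 9, 0, tzinfo=tz)  # the day before US spring forward

# "now() + 86400": adding absolute seconds via UTC drifts the wall clock
absolute = (run.astimezone(timezone.utc) + timedelta(seconds=86400)).astimezone(tz)
# calendar arithmetic: same-zone timedelta addition keeps the wall-clock time
wall = run + timedelta(days=1)

print(absolute.strftime("%H:%M"))  # 10:00 -- drifted an hour
print(wall.strftime("%H:%M"))      # 09:00 -- "same time tomorrow"
```

Which behavior is correct depends on whether the rule means "every 86400 seconds" or "9am every day": the two diverge whenever a day isn't 86400 seconds long.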


An article previously linked on Hacker News goes into leap seconds in much greater detail and is a good read:

http://cacm.acm.org/magazines/2011/5/107699-the-one-second-w...


Strange then that we'd scold a developer for assuming the number of hours in a day, or the timezone, or whatever, but not for using Unix time as an absolute time, when in fact it is not an absolute time at all and even caters to the same error in thought.


As per gakman's comment in the Google+ crosspost[1], be wary of the Unix millenium bug[2] if you use integers for timestamp storage.

[1] https://plus.google.com/106356964679457436995/posts/Hzq2P7V6... [2] http://en.wikipedia.org/wiki/Year_2038_problem


"But there's no way my code will be running unchanged in 2038!"


I wonder about storing the timestamp in a varchar field of sufficient size to avoid all of these headaches? Although I guess 64 bit will suffice for a very long time.


As a point of reference, the universe is estimated at less than 2^59 seconds old.


man 3p time suggests using time_t to "ease the eventual fix".


"The system clock can, and will, jump backwards and forwards in time due to things outside of your control. Your program should be designed to survive this."

This is one of my favorite go-to test cases. I've found some really fantastically interesting, catastrophic network halting badness with this really simple test.


This literally just happened to a VPS of mine. Time was jumping forwards and back; every second the time could jump an hour ahead and back. Everything screwed up, from log rotation to sessions.

My first thought was that this was some kind of prank :-) but seems it was a hardware issue on the parent machine combined with ntpd trying to compensate.


You should read this article (also applies to other VMs) http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachin...

There are a lot of issues that can happen because of this...


This one was a blast back in the day:

http://kb.vmware.com/selfservice/microsites/search.do?langua...

No need to run the test case, you'd run into it soon enough on a 2.6 kernel on VMWare. ;)


Ooh, such as?


The most fun one I ever found was a situation where a daemon analyzing & forwarding traffic through a bridged interface would lock up & stop passing traffic when you popped backwards and forwards through time on the box.


Another thing to add: When asking people to put in the timezones, don't ask them for a UTC/GMT offset, and the dates that DST starts/ends. Instead ask them for the tzdata format (e.g. "Europe/London"). Then you can localize that wherever you want.
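A sketch of why a tzdata identifier beats a raw offset, using Python's zoneinfo (3.9+); the dates are arbitrary:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo, available_timezones

assert "Europe/London" in available_timezones()

zone = ZoneInfo("Europe/London")        # tzdata identifier: DST rules included
offset = timezone(timedelta(hours=0))   # a bare "UTC+0 offset": no rules

jan = datetime(2011, 1, 15, 12, 0, tzinfo=timezone.utc)
jul = datetime(2011, 7, 15, 12, 0, tzinfo=timezone.utc)

print(jan.astimezone(zone).hour, jul.astimezone(zone).hour)      # 12 13 - BST applied
print(jan.astimezone(offset).hour, jul.astimezone(offset).hour)  # 12 12 - stuck on GMT
```

The identifier also keeps working when a government changes its DST rules: you just update the tz database, not your stored data.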


The problem with time-based bugs is that they are often subtle. I've had time-based bugs that only appear between 11pm and midnight; or only during daylight savings time; or only where the client is in a different timezone to the server.

Also, it is very common for business applications to deal with 'dates in the calendar', for example: a) John's birthday is 26 August 1966 b) The loan was borrowed on 16 January 2006 and repaid on 9 September 2009.

I suspect most programmers will disagree with me, but in my experience it is NOT good practice to use a timestamp class to represent such things. It's better to use a class specifically designed to represent a date in the (Gregorian) calendar. In fact, I created an open-source Java class for this purpose: http://calendardate.sourceforge.net/
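The pitfall can be sketched in Python, whose stdlib date type is exactly such a calendar-date class (a stand-in here for the linked Java class); a timestamp, by contrast, silently drags a time zone along:

```python
from datetime import date, datetime, timedelta, timezone

birthday = date(1966, 8, 26)  # the birthday from the example: no time, no zone

# Shoehorning it into a timestamp invites off-by-one-day bugs:
ts = datetime(1966, 8, 26, tzinfo=timezone.utc).timestamp()
rendered = datetime.fromtimestamp(ts, timezone(timedelta(hours=-5)))
print(rendered.day)  # 25 -- the "birthday" moved a day when shown in UTC-5
print(birthday.isoformat())  # 1966-08-26 -- the date class can't drift
```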


Also beware storing milliseconds in 32-bit quantities (as if you'd ever! but it happens).

GetTickCount is the poster child for this class of bugs: http://en.wikipedia.org/wiki/GetTickCount

In fact some versions of Windows CE intentionally set this value to (0xffffffff - 10 minutes) before bootup so that bugs were more likely to come out in testing, rather than showing up 42 days after bootup.

Also, don't store time intervals as floating point, especially if you're working on a missile system: http://apps.ycombinator.com/item?id=1667060


You can store milliseconds in 32-bit quantities all you want, if you remember the One True Axiom of Time: never compare tick counts, always subtract and compare to the difference you were looking for. If you do it that way, you can't screw it up, at least in C.


Really? What if it wraps around? ts1=MAX_INT-40, ts2=20, ts2-ts1 < 0. Or does that actually work out correctly with signed integers in C?


Technically, overflow of signed integers invokes undefined behavior (or possibly implementation-defined; in my copy of a draft standard overflow is given as an example of undefined behavior, but 6.3.1.3 says implementation-defined). In practice, it's the same bit values as unsigned integers, which does the right thing in this case.


It works regardless of signed/unsigned type, because signed ints still wrap at UINT_MAX (and not at INT_MAX).
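The subtract-then-compare axiom is easy to check outside C too; a sketch in Python, where ints don't wrap, so a mask stands in for 32-bit unsigned arithmetic (values echo the ts1/ts2 example above):

```python
MASK = 0xFFFFFFFF  # simulate a 32-bit unsigned tick counter

def ticks_elapsed(t1: int, t2: int) -> int:
    """Elapsed ticks from t1 to t2, correct across counter wraparound."""
    return (t2 - t1) & MASK

t1 = MASK - 40   # just before the counter wraps
t2 = 20          # just after it wrapped
print(ticks_elapsed(t1, t2))  # 61 -- even though t2 < t1 numerically
```

Comparing `ticks_elapsed(t1, t2)` against a threshold works through the wrap; comparing the raw counter values does not.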


One notable addition: relative time is the same for everyone.

What I mean by this is that instead of messing with timezones (by trying to guess the user's timezone or, even worse, asking for it), in most cases it is sufficient to tell the user something happened x hours ago, or y days ago.
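For what it's worth, a minimal sketch of such a relative-time label (the thresholds and wording are arbitrary choices of mine):

```python
from datetime import datetime, timezone

def relative_age(then, now=None):
    """Coarse 'x minutes/hours/days ago' label for an aware UTC timestamp."""
    now = now or datetime.now(timezone.utc)
    seconds = (now - then).total_seconds()
    if seconds < 3600:
        return f"{int(seconds // 60)} minutes ago"
    if seconds < 86400:
        return f"{int(seconds // 3600)} hours ago"
    return f"{int(seconds // 86400)} days ago"

then = datetime(2011, 3, 7, 9, 0, tzinfo=timezone.utc)
now = datetime(2011, 3, 7, 21, 30, tzinfo=timezone.utc)
print(relative_age(then, now))  # 12 hours ago
```

No timezone data is needed because both instants are in UTC; only the difference is shown.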

If you're programming in PHP, I can recommend this book: http://www.amazon.com/architects-Guide-Date-Time-Programming...


Ugh, I feel that this is one of the single poorest time-related practices—from a UX perspective—short of not displaying the time at all. Please don’t follow this advice. It makes it really easy for the developer because there’s no need to deal with time zones and things like DST. However, showing a relative time is in many situations completely opaque.

If something happened ‘14 hours ago’, did it happen at lunch time or 3 PM? A lot of times people want to know the time of day something happened, not just how long ago it occurred. Here you can do some arithmetic in your head to figure it out, but that’s a major annoyance. And once the relative time flips over to ‘X days ago’ the time information is completely lost. Similarly, if it says ‘About 3 months ago’ it’s impossible to know whether that means March 15 or April 1 or April 15 or anywhere in between, never mind the time of day.

At the very least the full date and time should be displayed in a tooltip so that it’s available if needed. Ideally, relative times shouldn’t be used alone except in situations where the relative time is absolutely, unarguably the only information that could ever need to be known.


I agree! And one of the more useful things you can do with a web app is render time using JavaScript (with an appropriate fallback) and then it will show the timestamp in the user's timezone automatically.


I'm really quite fond of Reddit's convention of specifying that the site will go down for maintenance "when this post is X hours old" made as a self-post (where all posts have a relative timestamp like you're talking about as standard metadata.) It's the only time I don't have to do any mental calculations to figure out when the thing's going to happen.


Other little-known facts about time:

The first day of the week can be Sunday or Monday depending on where you are.

The way weeks are numbered isn't the same everywhere.

Some advice:

Default to using ISO 8601 whenever possible http://en.wikipedia.org/wiki/ISO_8601

Don't ask for more precision than you actually need (don't ask for date & time when you only need year & week).

Don't store dates and times with more precision than was actually entered (e.g. don't ask for year & week and then store the (calculated) first day of the week).

On my first large project I made the mistake of asking for the year & week while storing the calculated first day of the week, and on top of that I used the wrong first day of the week and the wrong week numbering…
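Python's standard library shows how much the conventions disagree: ISO 8601 weeks start on Monday and week 1 is the week containing the year's first Thursday, while `strftime`'s `%U` counts Sunday-started weeks from the start of the year:

```python
from datetime import date

# 1 January 2011 was a Saturday. Which week is it in?
d = date(2011, 1, 1)

# ISO 8601: it still belongs to week 52 of the *previous* ISO year.
iso_year, iso_week, iso_weekday = d.isocalendar()
print(iso_year, iso_week, iso_weekday)  # 2010 52 6

# US-style Sunday-started numbering: days before the year's first
# Sunday fall in week 0.
print(d.strftime("%U"))  # 00
```

Two defensible conventions, two different answers for the same day, which is exactly why storing a derived "first day of the week" is dangerous.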


In the future, the world will use UTC, and sunrise and sunset will simply happen at different clock times depending on where you are.


No they won't. The idea that 12am is night and 12pm is day is pretty well entrenched in society.


While I understand your point, I find that a surprising number of people have issues with 12am vs. 12pm, often confusing the two. These same people tend to think midnight occurs at the end of the day.


Everybody who isn't American or British has issues with am/pm. It is so counter intuitive it is not even funny.


jorangreef is probably right, at least assuming that we get to colonise other worlds. I mean, what does it mean to talk about 'day' when you're on the ISS? How are Martians going to deal with the fact that they have a sidereal rotation period of 24h 37min? Or the Moon, with its roughly 700-hour SRP.


Right on. But maybe a better timescale would be TAI: UTC without the leap-second adjustments.

Edit: We basically just need a logical clock, though we may face problems reconciling this logical clock in a distributed space/time system.


Did you mean 0:00 is night and 12:00 is day?


This seems like it would cause more problems than it solves.

"It's 10:00 here in London. I need to call someone in New York, what time is it there?" "10:00, same as it is everywhere in the world." "So... is now a good time to call them?" "I have no idea."


I'd imagine it would be much easier to remember "business hours in NYC are between 1200 and 2030 UTC," than remembering the time difference and applying a translation every time.


And live shows would only need to advertise events in UTC. DST is almost a symptom of day/night/time discrepancy already but it's a top-down rather than bottom-up approach.


For a much more detailed perspective about time, and still rather accessible, check Poul-Henning Kamp's work:

http://phk.freebsd.dk/pubs/timecounter.pdf http://people.freebsd.org/~phk/


"Other timezones can be written as an offset from UTC. Australian Eastern Standard Time is UTC+1000. e.g. 10:00 UTC is 20:00 EST on the same day."

Every programmer should know about DST. Offsets are not always enough.

When it's winter north of the equator, some countries are on summer time (DST) south of the equator.
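This is easy to see with Python's `zoneinfo` (3.9+; the zone key `Australia/Sydney` is my example): the very same zone has a different UTC offset depending on the date, so a stored fixed offset would be wrong half the year.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

sydney = ZoneInfo("Australia/Sydney")

# Same zone, different UTC offsets across the year:
winter = datetime(2011, 7, 1, 12, 0, tzinfo=sydney)  # southern winter, AEST
summer = datetime(2011, 1, 1, 12, 0, tzinfo=sydney)  # southern summer, AEDT

print(winter.utcoffset())  # 10:00:00
print(summer.utcoffset())  # 11:00:00
```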


It's about time someone posted about this. Great summary - when I first started working on an international system I was hunting for this kind of article. Now if someone could only do one on languages, regional settings, character sets and encodings...


MySQL (at least 4.x and 5.x) stores DATETIME columns as a "YYYY-MM-DD HH:MM:SS" string

Wow that's terrible!


And false. See lysol's reply or the 5.0 reference manual's section on storage requirements:

http://dev.mysql.com/doc/refman/5.0/en/storage-requirements....

"YYYY-MM-DD HH:MM:SS" is the format of the DATETIME datatype. That's all.


For anyone else who has just learned about UT1 time and wants to know what its current value is, here's the link:

http://tf.nist.gov/pubs/bulletin/leapsecond.htm

Today, UT1 - UTC = 82ms.


> When storing time, store Unix time. It's a single number.

This is BS. ISO-8601 (MySQL time) is way better and is not prone to the 2038 bug. Unix time has 'scalability' issues.
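For reference, the 2038 limit applies only to a signed 32-bit time_t; with 64-bit Unix time the rollover is not a practical concern. A quick Python check of where the 32-bit rollover lands:

```python
import datetime

# A signed 32-bit time_t overflows at 2**31 - 1 seconds after the epoch:
t_max = 2**31 - 1
rollover = datetime.datetime.fromtimestamp(t_max, tz=datetime.timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00
```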


The most important thing a programmer needs to know about time: "Don't use t+1 or t+N in your timeseries backtesting code ;)"


Julian Date FTW! Seriously, JD is much smaller than Unix time, so it's less wasteful of numeric range for most practical purposes. And its use would eliminate a barrier between the computing world and that of science. http://en.wikipedia.org/wiki/Julian_day
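A sketch of the conversion (2440587.5 is the Julian Date of the Unix epoch; note that this ignores leap seconds, and that the JD day starts at noon UTC):

```python
# Julian Date counts days (with a fraction) since noon UTC on
# 1 January 4713 BC (proleptic Julian calendar). Converting to and
# from Unix time is just a linear shift and scale.
UNIX_EPOCH_JD = 2440587.5  # JD of 1970-01-01T00:00:00Z

def unix_to_jd(unix_seconds):
    return unix_seconds / 86400.0 + UNIX_EPOCH_JD

def jd_to_unix(jd):
    return (jd - UNIX_EPOCH_JD) * 86400.0

print(unix_to_jd(0))          # 2440587.5
print(jd_to_unix(2451545.0))  # 946728000.0 (noon UTC, 1 Jan 2000: the J2000 epoch)
```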


Do NIST (and other sync tools) take the round-trip delay of a request (RTD/RTT) into account?


NTPd does. However, in order to eliminate it, it assumes the RTT is symmetric: half spent on the journey to the remote, half on the journey back. Asymmetric links do lead to systematic error in NTP.

(It assumes symmetry because there isn't any way to measure the asymmetry without first having time sync, an obvious catch-22. If you need better than 10ms accuracy, you'll need a GPS, etc.)
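The standard four-timestamp calculation behind this, sketched in Python (variable names are mine; this mirrors the NTP on-wire formulas, and the offset is only exact when the path really is symmetric):

```python
# t0 = client send, t1 = server receive, t2 = server send,
# t3 = client receive -- t0/t3 on the client's clock, t1/t2 on the
# server's clock, all in seconds.
def ntp_offset_delay(t0, t1, t2, t3):
    delay = (t3 - t0) - (t2 - t1)         # round trip minus server processing
    offset = ((t1 - t0) + (t2 - t3)) / 2  # assumes a symmetric path
    return offset, delay

# Example: client clock is 5 s slow, 40 ms each way, 10 ms server processing.
offset, delay = ntp_offset_delay(100.000, 105.040, 105.050, 100.090)
print(round(offset, 3), round(delay, 3))  # 5.0 0.08
```

If the two legs take unequal time, the error in `offset` is half the asymmetry, which is exactly the systematic error the parent comment describes.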


How can I make Ruby on Rails use unix timestamps instead of MySQL DATETIMEs?


Use the TIMESTAMP datatype for those columns. Also see the comment by lysol (currently two above you) that indicates that the article is wrong about MySQL.


I thought this was some productivity spiel.


Fails to mention the distinction between Zulu and Solar time. All the times mentioned are based on cesium atomic clocks. Satellite and military applications are often instead based on when the Sun is exactly opposite the Greenwich Meridian, re-synced daily.



