

Ask HN: DBMS 'Timestamp' or Unix Epoch as an Integer? - Navarr

In most of my own personal projects I use the Unix timestamp integer as my means of marking event activity and times. However, in one of my recent projects that I've been contracted for, I'm using MySQL's TIMESTAMP.

This got me wondering: which does HN use, and why? Are there any discernible trade-offs to using either one?
======
gregjor
When you have multiple applications or instances writing to the database you
will want all date/time references in the database to come from a single
source -- the database server's clock. Otherwise you will be plagued by time
zone and clock drift problems.

Another reason to use a TIMESTAMP or DATETIME type is that you can use
date/time operators and functions on it more easily. For example,
timestamp_column + INTERVAL 30 DAY (in MySQL) is clearer than having to
convert an INT.
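That readability point can be sketched in runnable form. SQLite is used here only because it is easy to run; the `events` table and its columns are hypothetical, and the MySQL spelling of the same filter would be `created_at > NOW() - INTERVAL 30 DAY`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
# one recent row, one old row
conn.execute("INSERT INTO events (created_at) VALUES (datetime('now'))")
conn.execute("INSERT INTO events (created_at) VALUES (datetime('now', '-60 days'))")

# Native date arithmetic: "rows from the last 30 days" reads as written,
# with no seconds math like created_ts > strftime('%s','now') - 30*24*60*60
recent = conn.execute(
    "SELECT COUNT(*) FROM events WHERE created_at > datetime('now', '-30 days')"
).fetchone()[0]
print(recent)  # 1
```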

Every relational database has more than one date/time type in case you only
need dates without times, for example.

------
Rust
I always use the integer representation. I believe (no testing) that it is
faster due to the lack of a conversion step, math is always in seconds, and as
long as the number you store is based on UTC (or GMT), there are no timezone
issues.

Regarding @gregjor's answer (having the DB create the date for you): if you
need to know the timestamp of a newly inserted or updated record, you will
need to make two SQL calls, since the record itself is not returned by UPDATE
or INSERT.

~~~
gregjor
INSERT and UPDATE don't return any row values, so whatever date/time column
type you use, you will have to do a SELECT to get it if you need it. I haven't
run into this problem in practice; usually what I need is the last auto-
incremented key value of a newly-inserted row.
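A sketch of that pattern, using Python's sqlite3 purely for runnability (the `lastrowid` attribute plays the role of MySQL's LAST_INSERT_ID(); the `events` table is hypothetical). The auto-increment key comes back without an extra query, while the server-generated timestamp does need a follow-up SELECT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    created_at TEXT DEFAULT (datetime('now'))  -- clock comes from the DB, not the client
)""")

cur = conn.execute("INSERT INTO events DEFAULT VALUES")
new_id = cur.lastrowid  # last auto-incremented key, no second query needed

# Only if you also want the server-generated timestamp is a SELECT required:
created = conn.execute(
    "SELECT created_at FROM events WHERE id = ?", (new_id,)
).fetchone()[0]
print(new_id, created)
```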

If you are getting the date/time value outside of the database -- from a web
application, for example -- and using that to insert into the database you
will run into inconsistencies eventually, because clocks on different machines
will not
be synchronized or will be set to different timezones. In my experience it's
more important that all date/time values in a database come from a single
source (the database server itself) rather than which column type you use.
It's easier to control the clock and timezone in one place than require all
database clients to be set to UTC and have their clocks synchronized.

Setting all clocks to UTC is not enough to get around daylight saving time
issues, either. You actually have to know the timezone to do accurate date
arithmetic. You can find lots of articles about this -- it's a pretty well
worked-through topic. Think for a minute about why every operating system,
RDBMS, and programming language date/time library doesn't just use integers
and assume UTC.
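The DST point can be demonstrated concretely. In the sketch below (dates chosen around the US spring-forward on 2021-03-14), adding 24 absolute hours to a timestamp does not land on the same wall-clock time the next day; you need the timezone to know what "tomorrow at 9:00" means:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/New_York")
# 9:00 AM local on the day before US clocks spring forward
start = datetime(2021, 3, 13, 9, 0, tzinfo=tz)

# Adding 24 hours of absolute (UTC/epoch) time lands at 10:00 local,
# because an hour of wall-clock time vanished overnight.
utc_plus_24h = (
    start.astimezone(ZoneInfo("UTC")) + timedelta(hours=24)
).astimezone(tz)
print(utc_plus_24h.strftime("%H:%M"))  # 10:00

# "Same wall-clock time tomorrow" needs timezone-aware arithmetic instead:
wall_clock_next_day = datetime(2021, 3, 14, 9, 0, tzinfo=tz)
print(wall_clock_next_day.strftime("%H:%M"))  # 09:00
```

With a bare UTC integer, only the first (absolute) kind of arithmetic is available, which is exactly the problem being described.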

Date/time types are native types in all serious relational database engines,
so there's no conversion going on that you need to worry about. All RDBMSs
have mature date/time manipulation operators and functions. If you use large
integers you are going to be doing more conversions, especially once you have
to deal with timezones. Think it through.
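One way to see the extra conversion work: an epoch integer pulled from an INT column carries no timezone, so every display or local-time comparison needs an explicit two-step conversion in application code (a minimal sketch; the stored value here is arbitrary):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1700000000  # epoch seconds as they would come out of an INT column

# Step 1: attach the UTC meaning the integer implicitly had
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
# Step 2: convert to whatever local zone the caller needs
local = utc.astimezone(ZoneInfo("America/Chicago"))
print(utc.isoformat())    # 2023-11-14T22:13:20+00:00
print(local.isoformat())  # 2023-11-14T16:13:20-06:00
```

With a native TIMESTAMP/DATETIME column, this conversion typically happens once, inside the database, under a single configured timezone.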

