
PostgreSQL Data Types: Date, Timestamp, and Time Zones - craigkerstiens
https://tapoueh.org/blog/2018/04/postgresql-data-types-date-timestamp-and-time-zones/
======
gjvc
Is there a reason that the PostgreSQL TIMESTAMP type only goes to microsecond
(not nanosecond) precision?

~~~
grzm
You might be able to delve into the history of its development for a specific
answer, but I suspect it boils down to a decision that needed to be made,
trading off its size in bytes (8), its range, and its resolution. The SQL92
spec mentions fractional seconds but doesn't specify a precision AFAICS. At
the time it was implemented (and even now) microsecond precision is adequate
for many needs.
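
For instance (the literal below is arbitrary), anything past 6 fractional
digits is simply rounded away:

    -- timestamp(p) only accepts p up to 6; finer input gets rounded to
    -- microseconds on the way in
    SELECT '2018-04-20 12:34:56.123456789'::timestamp;
    -- => 2018-04-20 12:34:56.123457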

~~~
anarazel
Yea. If we wanted similar size we'd lose a lot of range. And increasing size
beyond what can commonly be held in general purpose registers would have
noticeable performance impacts as well. Types bigger than that have to be
passed by reference, require allocations, etc.
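
Back-of-the-envelope, with the same signed 64-bit integer (ignoring the exact
epoch and any reserved values):

    -- microsecond ticks give on the order of 292,000 years either side of the
    -- epoch; nanosecond ticks would shrink that to roughly 292 years
    SELECT (2::numeric ^ 63) / (1e6 * 86400 * 365.25) AS years_at_usec,
           (2::numeric ^ 63) / (1e9 * 86400 * 365.25) AS years_at_nsec;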

I think it'd not be insane to add another type (timestamphp?) that'd have both
a higher precision and a higher range. This seems like a case where giving
users the choice is better.

~~~
grzm
One of Postgres' strengths is how easy it is to add custom types for specific
use cases. I suspect that if you've got a special need for higher precision,
_how_ high will also be a more narrow use case (how wide of a type, what
tradeoff to make between range and resolution). Seems to me a great custom
type use case.

And of course you could go a long way just using a timestamptz column with an
additional integer (or even numeric) column with {nano|pico|femto...}seconds.
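
A minimal sketch of that second route (table and column names made up here):

    -- the timestamptz carries everything down to the microsecond; the extra
    -- column keeps the 0..999 nanoseconds past that, recombined when needed
    CREATE TABLE event (
        occurred_at timestamptz NOT NULL,
        occurred_ns integer     NOT NULL DEFAULT 0
                                CHECK (occurred_ns BETWEEN 0 AND 999)
    );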

~~~
anarazel
> One of Postgres' strengths is how easy it is to add custom types for
> specific use cases. I suspect that if you've got a special need for higher
> precision, how high will also be a more narrow use case (how wide of a type,
> what tradeoff to make between range and resolution). Seems to me a great
> custom type use case.

There are already a few cases in postgres core that'd benefit, however. It's
somewhat annoying to lose clock precision when more is available. With
hundreds of thousands of transactions/sec, more accurate timestamps can be
quite useful. IOW, having now() and clock_timestamp() either return something
higher-precision that can be downcast implicitly, or having higher-precision
variants of them, would be quite useful. The higher-precision type should be
used in a number of places inside postgres core.
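
For reference, both of those currently top out at 6 fractional digits:

    -- now() is frozen at transaction start; clock_timestamp() keeps advancing
    -- within the transaction, but still only with microsecond resolution
    SELECT now(), clock_timestamp();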

I think once you bump up the storage size (128 bit?) it's easy enough to cover
both range and precision for nearly every use case.
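
Rough numbers again: even counting femtoseconds, a signed 128-bit integer
spans something like 10^15 years:

    SELECT (2::numeric ^ 127) / (1e15 * 86400 * 365.25) AS years_at_femtosecond;
    -- on the order of 5 * 10^15 years either side of the epoch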

~~~
grzm
What are the tradeoffs on modern hardware of using 128-bit storage? I've been
interested in wider types in general and have abused UUIDs for this purpose.

~~~
anarazel
It doesn't really matter that much whether it's 12, 16, or 24 bytes. The
difference is that postgres, at least the way it currently does things, can't
pass anything bigger than 8 bytes (on a 64-bit arch) in general purpose
registers. So for function calls (which underlie all operators), tuple
deforming, etc., we'll instead pass those types around by reference. That
always incurs another pointer indirection, and in many cases means we'll have
to copy more, because the datum is now dependent on the underlying tuple it
has been extracted from.

Whether it matters is hard to say in a general manner. I'd try to avoid using
a pass-by-reference type for columns I'm aggregating billions of rows of, when
reasonably possible, but I'd not go to huge lengths to do so.

Tuple deforming performance doesn't change much if it's a _fixed width_
pass-by-reference type, rather than a variable-width one.
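
If you want to see which bucket a given type falls into, pg_type spells it
out (values below are from a typical 64-bit build):

    -- typlen is the size in bytes (-1 = variable length); typbyval says
    -- whether it's passed in a register or by reference
    SELECT typname, typlen, typbyval
    FROM pg_type
    WHERE typname IN ('timestamptz', 'uuid', 'numeric');
    -- timestamptz: 8, by value; uuid: 16, by reference; numeric: varlena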

