
UNIX timestamps are normally stored as 32-bit or 64-bit signed integers, not as floating-point. If you want better than 1-second precision, then the type "struct timespec" (specified by POSIX) gives you nanosecond precision. Fixed-point types can also be used in languages that support them.
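
For reference, here is a minimal sketch of reading one on a POSIX system -- just clock_gettime() filling a struct timespec, nothing non-standard assumed:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;

        /* CLOCK_REALTIME is wall-clock time since the Unix epoch */
        if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
            perror("clock_gettime");
            return 1;
        }

        /* tv_sec holds whole seconds (a time_t), tv_nsec 0..999999999 */
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }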



Yeah, timespec (or time_interval) is the proper way to go, but it is quite a pain to work with -- you need a helper library even if you just want to subtract two timestamps.
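
Even the subtraction means hand-rolling a borrow; roughly what you end up writing (timespec_diff is just an illustrative name here, not a POSIX function):

    #include <stdio.h>
    #include <time.h>

    /* a - b, assuming a >= b; illustrative only, not part of POSIX */
    struct timespec timespec_diff(struct timespec a, struct timespec b) {
        struct timespec d;
        d.tv_sec  = a.tv_sec  - b.tv_sec;
        d.tv_nsec = a.tv_nsec - b.tv_nsec;
        if (d.tv_nsec < 0) {             /* borrow one second */
            d.tv_sec  -= 1;
            d.tv_nsec += 1000000000L;
        }
        return d;
    }

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* ... do some work ... */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        struct timespec d = timespec_diff(t1, t0);
        printf("%lld.%09ld s elapsed\n", (long long)d.tv_sec, d.tv_nsec);
        return 0;
    }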

On the other hand, floating-point time is pretty common in scripting languages -- for example, Python has time.time() and Ruby has Time.now.to_f. It is not perfect, but it is great for smaller scripts: foolproof (except for the precision loss), it round-trips through any serialization format, and it is easy to understand. And no timezone problems at all!
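
To put a number on the precision loss: a double has 52 fraction bits, so at today's ~1.7e9 seconds since the epoch it still resolves a few hundred nanoseconds. A C sketch of the same idea (Python's float is the same IEEE double underneath; nextafter() just measures the gap to the next representable value, so you may need -lm to link):

    #include <math.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);

        /* floating-point Unix time, like Python's time.time() */
        double t = (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;

        /* spacing between adjacent doubles at this magnitude, i.e. the
           precision actually available -- a few hundred ns these days */
        printf("t = %.9f, resolution ~%g s\n", t, nextafter(t, INFINITY) - t);
        return 0;
    }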


I thought it was unsigned, in 64 bits, holding the number of nanoseconds since the Unix zero time (which might be 1970, but I forget).


UNIX time is seconds since the epoch (hence the year-2038 problem: 2^31 - 1 seconds after 1970-01-01 is the limit of a signed 32-bit time_t).
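
Concretely, that limit is 2038-01-19 03:14:07 UTC; a quick sketch to check it by formatting the largest 32-bit value:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t t = INT32_MAX;   /* 2147483647 s after 1970-01-01 00:00:00 UTC */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("%s\n", buf);    /* 2038-01-19 03:14:07 UTC */
        return 0;
    }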

gettimeofday() and clock_gettime() provide higher-resolution timestamps (µs and ns respectively), returning structs (struct timeval and struct timespec) instead of a plain number.
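
For comparison with the timespec examples above, a sketch of the older, microsecond-resolution interface:

    #include <stdio.h>
    #include <sys/time.h>   /* gettimeofday, struct timeval */

    int main(void) {
        struct timeval tv;      /* tv_sec + tv_usec (microseconds) */
        gettimeofday(&tv, NULL);
        printf("%lld.%06ld\n", (long long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
    }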

Some APIs return floating-point UNIX time in order to provide sub-second precision (the fractional part of the number is the fraction of a second). Python's time.time() does that, for instance.


It’s a signed type, so dates before 1970 can be represented.


time_t is usually signed.
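
A quick illustration of the pre-1970 side -- a negative time_t lands before the epoch (glibc's gmtime() accepts negative values; some platforms don't, so treat this as a sketch):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t t = -86400;      /* one day before the epoch */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t));
        printf("%s\n", buf);    /* 1969-12-31 00:00:00 UTC */
        return 0;
    }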



