
What they missed with nanosecond timestamps is that it's a feature which is essentially free - what you're actually looking at is 64-bit timestamps.

With the 'unix epoch' way of doing things (counting seconds since 1970), a (signed) 32-bit int will run out of values in 2038. If they'd counted milliseconds since 1970, they would have exhausted int32_t in less than 36 minutes, and nanoseconds would have exhausted same in less than 2 seconds.

So the move to 64-bit timestamps solves the '2038 problem' (for the filesystem, at least) - but at some point someone has decided this filesystem does not need to support the year 292,277,026,596AD, and has chosen to use some of the space for granularity instead. (Given that 292 billion years is roughly 20 times the estimated age of the universe, they were probably correct in their assumption that we'll have a new filesystem before then.)

If the number of people who need to differentiate between two files created in the same second is greater than the number of people who expect their Watch to still work after our sun is a cold, shrivelled mass, then they've made the more logical use of the value space.
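
The 2038 and 292-billion-year figures above can be sanity-checked with a few lines of C - just a sketch, using an average Gregorian year of 365.2425 days, so the numbers are approximate:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Average Gregorian year, in seconds - an approximation. */
        const double SECS_PER_YEAR = 365.2425 * 24 * 60 * 60;

        /* Signed 32-bit seconds since 1970 run out in 2038. */
        double years32 = (double)INT32_MAX / SECS_PER_YEAR;
        printf("int32_t seconds: ~%.1f years -> overflows in %d\n",
               years32, 1970 + (int)years32);         /* ~68.1 years -> 2038 */

        /* Signed 64-bit seconds since 1970 reach roughly 292 billion years. */
        printf("int64_t seconds: ~%.3g years\n",
               (double)INT64_MAX / SECS_PER_YEAR);    /* ~2.92e+11 years */

        return 0;
    }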




> someone has decided this filesystem does not need to support the year 292,277,026,596AD, and has chosen to use some of the space for granularity instead.

I don't know where you came up with that number. Is it from Apple documentation?

A true nanosecond timestamp wouldn't get you anywhere near the number you suggest. Here's a back-of-the-envelope calculation:

A 32-bit timestamp with one-second resolution is good for, more or less, 68 years. Or 136 years if unsigned. That's the traditional "unix epoch" setup.

Now add 32 more bits to the LSBs of the timestamp. There are 1 billion nanoseconds in a second. A 32-bit integer can hold over 4 billion different values. So the 32 LSBs, if they represent nanoseconds, will overflow in just over 4 seconds.

What that means is the 32 MSBs now have a range of just over 4x the traditional Unix timestamp. If it was 68 or 136 years before, it becomes 291 or 582 years.

That's what I would have done if I were Apple. I would have set the LSB of a 64-bit unsigned counter to represent 1 nanosecond. I would keep year 0 as 1970. So, problem solved until (roughly) 1970 + 582 = 2552.
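
Putting rough numbers on that back-of-the-envelope calculation (a quick C sketch; it uses an average Gregorian year, so the tiny differences from 291/582 above are just rounding):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const double SECS_PER_YEAR = 365.2425 * 24 * 60 * 60;  /* average Gregorian year */
        const double NSECS_PER_SEC = 1e9;

        /* Traditional unix epoch: 32-bit seconds since 1970. */
        printf("32-bit seconds, signed:       ~%.0f years\n",
               (double)INT32_MAX / SECS_PER_YEAR);              /* ~68  */
        printf("32-bit seconds, unsigned:     ~%.0f years\n",
               (double)UINT32_MAX / SECS_PER_YEAR);             /* ~136 */

        /* 64-bit counter whose LSB represents one nanosecond. */
        printf("64-bit nanoseconds, signed:   ~%.0f years\n",
               (double)INT64_MAX / NSECS_PER_SEC / SECS_PER_YEAR);   /* ~292 */
        printf("64-bit nanoseconds, unsigned: ~%.0f years\n",
               (double)UINT64_MAX / NSECS_PER_SEC / SECS_PER_YEAR);  /* ~585 */

        return 0;
    }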


292 billion AD is where 2^63 seconds from 1970 would get us - i.e., what we'd get if we just converted the existing epoch to 64-bit.

This is why I call nanoseconds a 'free' feature - it's clearly time to solve the '2038 problem' now, and simply moving to a 64-bit timestamp would do that. But rather than solving the problem for the next 292 billion years, we can make better use of the timestamp today. So, as you say, nanosecond granularity for the next 500-odd years is more useful than one-second granularity for the next 292 billion.
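
For what it's worth, one nice property of such a layout is how cheaply it maps back onto second-plus-nanosecond APIs. A small sketch, assuming the timestamp really is a plain signed 64-bit count of nanoseconds since 1970 (and ignoring pre-1970 values, where the sign of the C remainder needs extra care):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Split a 64-bit nanoseconds-since-1970 value into a struct timespec. */
    static struct timespec from_ns_epoch(int64_t ns_since_1970) {
        struct timespec ts;
        ts.tv_sec  = (time_t)(ns_since_1970 / 1000000000LL);  /* whole seconds */
        ts.tv_nsec = (long)(ns_since_1970 % 1000000000LL);    /* leftover nanoseconds */
        return ts;
    }

    /* And the reverse: pack a struct timespec into the 64-bit form. */
    static int64_t to_ns_epoch(struct timespec ts) {
        return (int64_t)ts.tv_sec * 1000000000LL + (int64_t)ts.tv_nsec;
    }

    int main(void) {
        struct timespec ts = from_ns_epoch(1700000000123456789LL);
        printf("%lld.%09ld -> %" PRId64 "\n",
               (long long)ts.tv_sec, ts.tv_nsec, to_ns_epoch(ts));
        return 0;
    }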


> If they'd counted milliseconds since 1970, they would have exhausted int32_t in less than 36 minutes, and nanoseconds would have exhausted same in less than 2 seconds.

Milliseconds would have lasted 24 days. Microseconds would have been 36 mins.
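
A quick sanity check of those corrected figures, for a signed int32_t:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        double max = (double)INT32_MAX;   /* 2^31 - 1 */

        printf("int32_t milliseconds: ~%.1f days\n",    max / 1e3 / 86400);  /* ~24.9 days    */
        printf("int32_t microseconds: ~%.1f minutes\n", max / 1e6 / 60);     /* ~35.8 minutes */
        printf("int32_t nanoseconds:  ~%.1f seconds\n", max / 1e9);          /* ~2.1 seconds  */

        return 0;
    }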


You're right, of course. I wish I could say this was the first time I've confused milli~ with millionth.


On the bright side, your way can make you a millionaire!



