Why not use a finer granularity? Because space in the on-disk inode structure is precious. We need 30 bits to encode nanoseconds. That leaves an extra two bits that can be added on top of the 32-bit "time in seconds since the Unix epoch". For full backwards compatibility, where a "negative" tv_sec corresponds to times before 1970, that gets you to the 25th century. If we really cared, we could add an extra 500 years by stealing a bit from somewhere in the inode (an unused flag bit, perhaps --- but since there are 4 timestamps in an inode, you would need to steal 4 bits for each doubling of the time range). However, there is no guarantee that ext4 or xfs will be used 400-500 years from now; and if they are, it seems likely that there will be plenty of time to do another format bump. XFS has had 4 incompatible format bumps in the last 27 years. ext2/ext3/ext4 has been around for 28 years, and depending on how you count, there have been 2-4 major version bumps (we use finer-grained feature bits, so it's a bit hard to count). In the next 500 years, we'll probably have a few more. :-)
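For the curious, here is a rough sketch in C of the 2+30 bit split described above, with made-up names (not the kernel's) and the pre-1970 sign handling glossed over:

#include <stdint.h>
#include <stdio.h>

/* Simplified sketch: the extra 32-bit on-disk field carries 2
 * epoch-extension bits in its low bits and 30 bits of nanoseconds
 * above them. */
static void decode_extra_time(int32_t disk_sec, uint32_t extra,
                              int64_t *sec, uint32_t *nsec)
{
    *sec  = (int64_t)disk_sec + ((int64_t)(extra & 0x3) << 32);  /* epoch bits */
    *nsec = extra >> 2;                                          /* nanoseconds */
}

int main(void)
{
    int64_t sec; uint32_t nsec;
    decode_extra_time(0, (123456789u << 2) | 1, &sec, &nsec);
    printf("%lld.%09u\n", (long long)sec, nsec);  /* 4294967296.123456789 */
    return 0;
}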
> The reason why ext4 and xfs both use nanosecond resolution is because in the kernel the high precision time keeping structure is the timespec structure
...so resolution here is defined by what's provided, not by what's (decided to be) useful. Whether ns resolution is useful is the important question.
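For reference, the structure in question is POSIX's struct timespec: a seconds field plus a nanoseconds field, so nanoseconds are simply what the kernel interface hands you, useful or not:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;                      /* tv_sec + tv_nsec */
    clock_gettime(CLOCK_REALTIME, &ts);
    printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}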
> Why not use a finer granularity? Because space in the on-disk inode structure is precious
That's not a good reason AFAICS. What would it gain your users if you did? 1 ns = ~4 machine cycles. Timestamping to that res, well, what's the value to any application? I'm missing something.
Bear in mind that 1 ns is about the time it takes light to travel 30 cm in a vacuum.
Also be careful about assuming machines have big everything. Caches aren't huge, and storing an extra byte in a billion places can add up at scale. Nothing comes free: don't scrimp where you don't have to, but don't assume anything's free either.
HTH. IMO only.
...This resulted in the service generating an invalid x509 certificate [but fortunately the validation library said that the cert was invalid, just didn't tell me why] -- making me lose a day to debugging this mistake.
Somebody should standardize an x.509.good_parts subset that leaves out all the crap nobody needs and doesn't work anyway in deployed systems.
Usually, any product sold comes with a 2- or 5-year warranty if you're lucky.
Some RAM manufacturers were so proud of their manufacturing quality that they instead offered 50-year warranties.
Now, the reasonable thing to do as a seller would have been to just limit that to something like 5 years, and then give the customer their money back if they had any issues after that.
However, someone actually programmed the system to use 50 years as the warranty duration, putting expiry dates way out in the 2040s.
I didn't look further into the technical issues behind that (probably not many, since the required precision wasn't seconds, just days), but it remains a great example of a case where not building a feature would have cost less than building it.
openssl x509 -req -days 7000 -in site.csr -signkey site.key -out site.crt
In another 78 years they can make it a little smarter.
Keeping in mind that XFS is a filesystem format, it's not hard at all to imagine a filesystem created in CentOS 8 and/or RHEL 8 still being in use when 2038 arrives, even if the operating system was already upgraded to the next major version.
It's not - it'll be here before you know it.
I don't see any way in which that could easily fit with POSIX time types, unless you keep the existing 32-bit time_t and tv_nsec fields and tack on two 16-bit fields for extending precision in either direction. But ISO C allows time_t to be a floating point number, which opens a few doors.
What about UTF-8 style variable length times? Would that be too messy?
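For concreteness, one way to read "UTF-8 style" is a LEB128-style varint; here is a hypothetical sketch (not any existing on-disk format):

#include <stdint.h>
#include <stddef.h>

/* Hypothetical varint timestamp encoding: 7 payload bits per byte,
 * high bit set on every byte except the last. Small values stay small;
 * larger ranges or finer resolutions just grow a byte at a time. */
size_t encode_time_varint(uint64_t t, uint8_t out[10])
{
    size_t n = 0;
    do {
        uint8_t byte = t & 0x7f;
        t >>= 7;
        out[n++] = t ? (uint8_t)(byte | 0x80) : byte;
    } while (t);
    return n;
}

The messy part would presumably be on the filesystem side, since fixed-size inode fields don't mix well with variable-length values.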
Edit: looks like most 64-bit OSes are already or are switching to 64-bit time_t. So that solves half the problem, but no picoseconds just yet. I guess that's what int64 or float64/80 is for.
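A quick way to check what your own toolchain gives you (on most current 64-bit Unix-like systems this prints 8):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* 8 bytes means a 64-bit time_t, i.e. not limited to 2038. */
    printf("sizeof(time_t) = %zu\n", sizeof(time_t));
    return 0;
}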
15Gyr c * (c ln(2))^2 / G / (1MW/c2)
0: without violating the known laws of physics
Seems to be described in this patch comment:
> This is what happens when your data structures declare year zero like Pol Pot.
Picking an arbitrary zero point is the only way to make timestamps work. If we started counting with the big bang then all our clocks would be plus or minus millions of years.
You can always use floating point representations. It will give you great range and great precision, though not both at the same time (you can't refer to a specific picosecond on a specific day 100,000 years from now).
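To illustrate that tradeoff, assuming an IEEE-754 double holding seconds since the epoch: at today's values, the 53-bit mantissa can no longer even resolve single nanoseconds.

#include <stdio.h>

int main(void)
{
    double now = 1.7e9;           /* roughly "now" in seconds since 1970 */

    /* The spacing between adjacent doubles near 1.7e9 is ~2e-7 s,
     * so adding a single nanosecond is rounded away entirely. */
    printf("%d\n", now + 1e-9 == now);   /* prints 1 */
    return 0;
}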
We can't even get close to a Planck-time tick, so that's not really necessary.
IIRC, Windows can't get timestamps at a resolution better than ~10 ms; should we therefore not allow getting the time at millisecond resolution? Just because something's not necessary doesn't mean it isn't useful. In this case, though, I can't really see it being useful either... would love to hear a non-niche use-case for it.
There are exactly zero XFS filesystem files created before 1970, so there is no need to represent those times in a filesystem.
I seriously wonder what makes this basic fact so hard for so many people to process. If you have a clue, please do explain.
But you can set the timestamp of a file to a time before the epoch.
$ touch --date='Jun 1 1952' foo
$ ls -l foo
-rw------- 1 me me 0 Jun 1 1952 foo
It doesn't matter whether you think that such timestamps are "lie[s]": the contract is the contract. You have no idea how someone might be using the system, and you don't get to break uses that are legal within the contract but contrary to your sense of good taste.
Yes it does. The rule isn't absolute. Impact matters, not your ability to find a single bit pattern that differs.
Alternatively, if you find an early tape in a yard sale, you’ll want to keep the creation dates of the files when you read in the data.
Those are ‘somewhat’ of an edge case, though.
And yes, it seems at least some file systems from before 1970 had time stamps. https://en.wikipedia.org/wiki/Comparison_of_file_systems#Met... says DECTape had, and that’s from 1963.
After a couple of times you just migrate to something sane, like ext4, and everything works flawlessly.
I have seen this argument used the other way. XFS never runs out of inodes, for example.