
Probably it's because this Python implementation uses st.st_mtime (https://github.com/apenwarr/redo/blob/master/state.py#L314), which is a double and therefore doesn't carry enough digits of precision to represent the timestamp at full granularity.
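
For comparison, Python 3's os.stat() also exposes st_mtime_ns as a plain integer alongside the float st_mtime, which sidesteps the double entirely. A minimal sketch, using a throwaway temp file:

    import os, tempfile

    # Create a throwaway file so the example is self-contained.
    fd, path = tempfile.mkstemp()
    os.close(fd)

    st = os.stat(path)
    print(st.st_mtime)     # float seconds: the low nanosecond digits can't all fit
    print(st.st_mtime_ns)  # integer nanoseconds, exactly as the kernel reported them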



Doubles have ~15 decimal digits of precision.

    $ python
    Python 2.7.13 (default, Sep 26 2018, 18:42:22) 
    [GCC 6.3.0 20170516] on linux2
    >>> 4e9 + 0.000001
    4000000000.000001
In any case, I did my testing for the article using the C program linked from it. The first timestamp in its output gives you a lower bound on your system's timestamp granularity: https://apenwarr.ca/log/mmap_test.c
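
If you'd rather not compile the C program, a rough sketch of the same probe from Python (not equivalent to the article's method, just an approximation) is to rewrite a file in a loop and look at the smallest nonzero step between successive st_mtime_ns values:

    import os, tempfile

    fd, path = tempfile.mkstemp()
    os.close(fd)

    stamps = []
    for _ in range(100000):
        with open(path, "w") as f:
            f.write("x")
        stamps.append(os.stat(path).st_mtime_ns)

    # The smallest nonzero deltas approximate the filesystem's mtime granularity.
    deltas = sorted({b - a for a, b in zip(stamps, stamps[1:]) if b != a})
    print(deltas[:5])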


Well, this limits you to ~1 µs granularity, but you are right that it's not the limiting factor.

Apparently (according to https://stackoverflow.com/questions/14392975/timestamp-accur...), the ext4 driver just uses the cached kernel clock value, without the counter correction that would give you an ns-precise value.

One could perhaps have an LD_PRELOADed fsync (or whatever) that updates the mtime with clock_gettime() to store it in its full nanosecond-precision glory, but it's probably not worth the performance penalty. That wouldn't address the mmap issue, of course...
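
The userspace version of that idea (no LD_PRELOAD, just a sketch with a placeholder path) would be to stamp the file yourself with a full-precision clock reading via os.utime(..., ns=...) right after writing it:

    import os, time

    path = "some_output_file"  # placeholder path
    with open(path, "w") as f:
        f.write("data")

    # Overwrite the coarse kernel-stamped mtime with a nanosecond-precision reading.
    now_ns = time.clock_gettime_ns(time.CLOCK_REALTIME)
    st = os.stat(path)
    os.utime(path, ns=(st.st_atime_ns, now_ns))

    print(os.stat(path).st_mtime_ns)  # should now carry the full ns value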


15 decimal digits isn't enough to encode a file's st_mtime seconds and nanoseconds value: current epoch seconds already take 10 digits, which leaves only 5-6 significant digits for the fractional part, so the low nanosecond digits are lost.

I use Perl and found this to be a problem. Like Python, it uses a double for st_mtime, and the nanoseconds value is truncated, so it fails equality tests with nanoseconds recorded by other programs (e.g. in a cache).

It even fails equality tests against itself, when timestamp values are serialised to JSON or strings with (say) 6 or 9 digits of precision and back again. Timestamps serialised that way don't round trip reliably due to double truncation.
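
A quick illustration of the round-trip failure in Python (the timestamp is just an example value):

    # Example: ~1.6e9 epoch seconds plus a nanoseconds part.
    mtime_ns = 1600000000123456789

    as_float = mtime_ns / 1e9          # what a double-backed st_mtime holds
    recovered = round(as_float * 1e9)  # back to integer nanoseconds

    print(as_float)                # 1600000000.1234567 (the low digits are already gone)
    print(recovered == mtime_ns)   # False: the value didn't survive the double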



