Ask HN: File operations on thousands of files on Win vs Mac/Linux file systems
3 points by peterjmag 1860 days ago
This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me...

I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them: copying the project folder elsewhere, deleting a bunch of temporary files, and so on. Of all the machines I've worked on over the years, I've noticed that NTFS consistently handles these tasks more slowly than HFS+ on a Mac or ext3/ext4 on a Linux box. As far as I can tell, though, the raw throughput isn't actually slower on Windows (at least not significantly); the per-file delay is just a tiny bit longer, and that little delay really adds up over thousands of files.
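To make this concrete, here's the kind of quick test I have in mind. It's only a sketch (Python 3, standard library only; the file count and sizes are made up), copying N small files one at a time and then a single large file of the same total size, so per-file overhead and raw throughput show up separately:

    # Sketch: separate per-file overhead from raw throughput.
    import os
    import shutil
    import tempfile
    import time

    N = 5000          # number of small files (arbitrary)
    SMALL = 4 * 1024  # 4 KiB each

    src = tempfile.mkdtemp(prefix="bench_src_")
    dst = tempfile.mkdtemp(prefix="bench_dst_")

    # Create N small files plus one large file of equal total size.
    payload = os.urandom(SMALL)
    for i in range(N):
        with open(os.path.join(src, f"f{i:05d}"), "wb") as f:
            f.write(payload)
    with open(os.path.join(src, "big.bin"), "wb") as f:
        f.write(os.urandom(SMALL * N))

    # Time copying the small files one by one (per-file overhead dominates) ...
    t0 = time.perf_counter()
    for i in range(N):
        name = f"f{i:05d}"
        shutil.copyfile(os.path.join(src, name), os.path.join(dst, name))
    t1 = time.perf_counter()
    # ... then copying one file of the same total size (pure throughput).
    shutil.copyfile(os.path.join(src, "big.bin"), os.path.join(dst, "big.bin"))
    t2 = time.perf_counter()

    print(f"{N} small files: {t1 - t0:.2f}s ({(t1 - t0) / N * 1000:.3f} ms/file)")
    print(f"one large file: {t2 - t1:.2f}s")

    shutil.rmtree(src)
    shutil.rmtree(dst)

If my hunch is right, the small-file loop should be disproportionately slower on NTFS, while the single large copy should take roughly the same time on all three file systems.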

(Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.)

Granted, my evidence is merely anecdotal; I don't currently have any real performance numbers. It's something I'd love to test properly (perhaps with a Mac dual-booting into Windows), but my geekiness insists that someone out there has already done so.
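If anyone with a dual-boot machine beats me to it, a delete-path version of the same idea (again just a sketch; the file count is arbitrary) should run unchanged on Windows, macOS, and Linux:

    # Sketch: time unlinking N small files one by one.
    import os
    import tempfile
    import time

    N = 5000  # arbitrary

    # Create N small temp files to delete.
    d = tempfile.mkdtemp(prefix="del_bench_")
    paths = []
    for i in range(N):
        p = os.path.join(d, f"t{i:05d}.tmp")
        with open(p, "wb") as f:
            f.write(b"x" * 1024)
        paths.append(p)

    # Time the per-file delete path, where metadata overhead should show up.
    t0 = time.perf_counter()
    for p in paths:
        os.remove(p)
    elapsed = time.perf_counter() - t0

    print(f"deleted {N} files in {elapsed:.2f}s ({elapsed / N * 1000:.3f} ms/file)")
    os.rmdir(d)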

Can anyone explain this to me, or perhaps point me in the right direction to research it further myself?



