
Doing 'ls' in a big directory with a cold cache is a pathological case for ZFS. Finding and opening a single file should be a lot faster than 'ls', and frequently used metadata will be cached by ZFS's ARC over time.

I guess instant, unlimited snapshots don't come free. But you also have the option of storing metadata cache on separate storage (such as SSDs), a feature which many other filesystems don't offer.
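
If you want to try that, it's roughly a two-step setup. A minimal sketch, assuming a pool named "tank" and an SSD at ada2 (substitute your own names):

  # add the SSD as an L2ARC cache device
  zpool add tank cache ada2
  # tell ZFS to keep only metadata on that cache device
  zfs set secondarycache=metadata tank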



OK, I just created a directory with 6,000 8 KB files, did 'ls' from another computer over NFS with a cold cache, and it completed in:

  real    0m 1.96s
  user    0m 1.12s
  sys     0m 0.00s

Sounds like Steve was having some other problem unrelated to ZFS.

EDIT: I also just found a directory I had with 60,000 files created randomly over time (i.e. fragmented), and ls took 3.5 seconds locally (didn't try it over NFS). This is looking more and more like a troll post :-)
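
For anyone who wants to reproduce something like my test, it was along these lines (paths are made up, adjust for your setup):

  # on the file server, in a ZFS dataset
  mkdir /tank/lstest && cd /tank/lstest
  for i in $(seq 1 6000); do
    dd if=/dev/zero of=file$i bs=8k count=1 2>/dev/null
  done

  # on the NFS client, after mounting the export (client cache still cold)
  time ls /mnt/lstest > /dev/null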


When you created the files, they would still be cached by ZFS, so it's going to skip reading them from the disks.

It took me 9.9 seconds to get a directory listing of 65,336 files over NFS, after creating them over NFS from another system.

That's still nowhere near as bad as the author states, but I bet those files were still in the cache on the file server.


I managed to get to about 30 seconds with 65,000 files on a local ext3 file system. The file names were all the same length, with a ~100-character identical prefix. I re-mounted the file system before the ls to eliminate caching.
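
Re-mounting is one way to get a cold cache between runs; on Linux you can also drop the caches directly. A rough sketch (device and mount point are just examples):

  # either re-mount the file system...
  umount /mnt/test && mount /dev/sdb1 /mnt/test
  # ...or drop the page/dentry/inode caches globally (Linux, as root)
  sync && echo 3 > /proc/sys/vm/drop_caches
  time ls /mnt/test > /dev/null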


Perhaps he did "ls -l", which requires a stat() for each file. But even in that worst-case scenario, his numbers work out to something like 10 ms per round trip, which is way more than it should be: something sounds broken with his setup.
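
You can see the difference yourself with strace on Linux (or truss on Solaris); the directory path here is just an example:

  # plain ls: essentially just getdents() calls to read the directory
  strace -c ls /mnt/bigdir > /dev/null
  # ls -l: adds a stat-family call per entry, each a potential NFS round trip
  strace -c ls -l /mnt/bigdir > /dev/null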


Your newly-created files are still in cache.



