
Speaking as someone who has maintained some rather large NFS-utilizing systems/datacenters over the last decade, and found it perfectly suitable to my needs, I'd be curious why you'd say that?



NFS isn't a block device, so you don't get cgroup I/O controls (read/write bps limiting + IOPS limiting); that's one annoyance of NFS in large deployments.
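For context, this is roughly what the cgroup v2 io controller buys you for a local block device -- a minimal sketch only, assuming cgroup v2 with the io controller enabled in the parent's subtree_control; the cgroup path, device major:minor, and PID are placeholders. There's no equivalent knob to point at an NFS mount, because there's no backing block device to address:

    # Sketch: throttling a process's I/O on a *local block device* via cgroup v2.
    # Assumes the io controller is delegated; run as root. Names below are placeholders.
    import os

    CGROUP = "/sys/fs/cgroup/throttled"   # hypothetical cgroup
    DEVICE = "259:0"                      # major:minor of a local block device
    PID = 12345                           # hypothetical target process

    os.makedirs(CGROUP, exist_ok=True)

    # Cap the device at ~50 MB/s and 1000 IOPS in each direction.
    with open(os.path.join(CGROUP, "io.max"), "w") as f:
        f.write(f"{DEVICE} rbps=52428800 wbps=52428800 riops=1000 wiops=1000\n")

    # Move the process into the cgroup so the limits apply to it.
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(PID))

    # NFS traffic never hits a local block device, so there is no
    # major:minor to write into io.max -- hence the complaint above.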


I'd also add extreme fault intolerance. Even when you're mounting soft, or hard+intr, NFS failures have a way of requiring reboots to fully fix.


The intr mount option has been a no-op for 8 years.

/bin/umount does an fstat before the umount syscall, so if an NFS mount is broken, it will probably hang forever instead of unmounting it! You can invoke the syscall directly to get the behavior you need.
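If you want to script that, here's a minimal sketch that calls umount2(2) through ctypes, skipping the stat that /bin/umount would hang on. Linux-only, needs root; the mount point path is a placeholder:

    # Sketch: calling umount2(2) directly so nothing stat()s the dead mount first.
    # Requires CAP_SYS_ADMIN; the mount point below is hypothetical.
    import ctypes, ctypes.util, os

    MNT_FORCE = 1    # abort outstanding requests (useful for dead NFS servers)
    MNT_DETACH = 2   # lazy unmount: detach now, clean up when references drop

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    def raw_umount(path, flags=0):
        # flags=0 is a plain umount; pass MNT_FORCE or MNT_DETACH for a broken mount
        if libc.umount2(path.encode(), flags) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err), path)

    raw_umount("/mnt/stale-nfs", MNT_FORCE)   # hypothetical dead NFS mount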


That hasn't been dependable in my experience. The best workaround I've found is either a lazy unmount + remount (if you can tolerate the hung processes taking the full NFS retry timeout to unblock) or downing the network interface to accelerate the process.


This is fair to a degree, but vendors have VIPs that make failures fairly transparent. But if you need low latency on a large data set, is there something better? Where low latency means <100 ms reads.


So use Jails, ZFS and rctl. Problem solved.
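For the per-jail I/O throttling half of that (the cgroups complaint upthread), rctl does expose readbps/writebps/readiops/writeiops resources. A rough sketch, assuming FreeBSD with kern.racct.enable=1; the jail name and limits are placeholders:

    # Sketch: adding rctl throttle rules for a jail's disk I/O on FreeBSD.
    # Assumes kern.racct.enable=1 and root; jail name and limits are hypothetical.
    import subprocess

    RULES = [
        "jail:storage:readbps:throttle=50m",     # cap reads at ~50 MB/s
        "jail:storage:writebps:throttle=50m",    # cap writes at ~50 MB/s
        "jail:storage:readiops:throttle=1000",   # cap read IOPS
        "jail:storage:writeiops:throttle=1000",  # cap write IOPS
    ]

    for rule in RULES:
        subprocess.run(["rctl", "-a", rule], check=True)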


How does that fix a client accessing network storage?


You can use Samba with it. Unless you absolutely require NFS, Samba is far more robust and less prone to needing a hard reboot.


I'd agree; it's hard to beat if you need high-bandwidth, low-latency access to large data sets.


Throwing away 30+ years of protocol development for the latest and greatest containerization paradigm is what the cool kids south of Market Street talk about at happy hour.



