Speaking as someone who has maintained some rather large NFS-based systems and datacenters over the last decade, and found it perfectly suitable to my needs, I'd be curious as to why you'd say that?
The intr mount option has been a no-op for 8 years.
/bin/umount does an fstat on the mount point before the umount syscall, so if an NFS mount is broken, the command will probably hang forever instead of unmounting it! You can invoke the syscall directly to get the behavior you need.
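For illustration, a minimal sketch of invoking the syscall directly from Python via ctypes; this skips /bin/umount's stat of the mount point, so a dead NFS server can't hang you before the unmount even starts. The mount path is a placeholder, and this is Linux-specific (umount2 in glibc):

```python
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def raw_umount(target: str, flags: int = 0) -> None:
    # umount2(2): int umount2(const char *target, int flags);
    # Calls the syscall directly, with no preliminary stat of the target.
    if libc.umount2(target.encode(), flags) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err), target)

# Hypothetical mount point; requires CAP_SYS_ADMIN to succeed:
# raw_umount("/mnt/dead-nfs")
```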
That hasn't been dependable in my experience. The best workaround I've found is either a lazy unmount plus remount (if you can tolerate the hung processes taking the full NFS retry timeout to unblock), or downing the network interface to accelerate the process.
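The lazy-unmount half of that workaround can be driven through the same syscall: MNT_DETACH (the flag behind `umount -l`) detaches the mount from the namespace immediately, while processes with open files keep blocking until the NFS retry timeout expires or the server comes back. A sketch, assuming Linux/glibc; the mount path is a placeholder:

```python
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)

MNT_DETACH = 2  # lazy unmount, equivalent to `umount -l`

def lazy_umount(target: str) -> None:
    # Detach the mount point from the namespace now; in-flight I/O
    # against the dead server unblocks later, on its own timeout.
    if libc.umount2(target.encode(), MNT_DETACH) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err), target)

# lazy_umount("/mnt/dead-nfs")  # hypothetical path, needs CAP_SYS_ADMIN
```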
This is fair to a degree, but vendors have VIPs that make failures fairly transparent. And if you need low latency on a large data set, is there something better? Where low latency means <100ms reads.
Throwing away 30+ years of protocol development for the latest and greatest containerization paradigm is what the cool kids south of Market Street talk about at happy hour.