For ML training, where the same files are opened repeatedly, the NFS 'close-to-open' coherency protocol (where every open() and close() is a round trip to the server) is not a good fit. Once we support NFS delegations this will get a lot faster, but a local copy of the training data set will very likely remain faster still (we have ideas on this, though!). Given that ML training uses very expensive instance types, a local file system is the way to go for now.
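To make the trade-off concrete, here is a minimal sketch (the paths and helper names are hypothetical, chosen for illustration) contrasting the training-loop pattern that suffers under close-to-open coherency, where every per-epoch open()/close() pair revalidates with the NFS server, with staging the data onto local storage once up front:

```python
import os
import shutil
import tempfile

# Hypothetical paths for illustration: "/mnt/efs" stands in for an
# NFS-backed (EFS) mount; the temp dir stands in for local instance storage.
EFS_PATH = "/mnt/efs/train/shard-000.bin"
LOCAL_CACHE_DIR = tempfile.gettempdir()

def read_per_epoch_naive(path, epochs):
    """Reopen the shard every epoch. Under NFS close-to-open coherency,
    each open()/close() pair is a revalidation round trip to the server."""
    for _ in range(epochs):
        with open(path, "rb") as f:
            f.read()

def read_per_epoch_cached(path, epochs):
    """Copy the shard to local storage once, then train against the local
    copy: the per-epoch opens never leave the local file system."""
    local = os.path.join(LOCAL_CACHE_DIR, os.path.basename(path))
    if not os.path.exists(local):
        shutil.copy(path, local)
    for _ in range(epochs):
        with open(local, "rb") as f:
            f.read()
```

This is only a sketch of the access pattern, not a benchmark; in practice the staging step is often a bulk copy (or a data-loader cache) run before training starts.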
ML inference, on the other hand, is a use case we've specifically targeted with this update, and it should work very well. Many of our customers do inference using AWS Lambda, directly connected to their EFS file system.