S3 is not designed for intensive metadata operations like listing, renaming, etc. For these operations, you will need a somewhat POSIX-compliant system. For example, if you want to train on the ImageNet dataset, the "canonical" way [1] is to extract the images and organize them into folders, class by class. The whole dataset is discovered by directory listing. This is where JuiceFS shines.
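For illustration, here is a minimal sketch of what that directory-listing-based discovery looks like in PyTorch (the /data/imagenet/train path and transform choices are just placeholders, not anything from the linked example):

    import torchvision.datasets as datasets
    import torchvision.transforms as transforms

    # ImageFolder walks the directory tree and treats each
    # subdirectory name as a class label -- this is the
    # metadata-heavy listing the underlying filesystem has to serve.
    train_dataset = datasets.ImageFolder(
        "/data/imagenet/train",
        transform=transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.ToTensor(),
        ]),
    )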
Of course, if the dataset is really massive, you will most likely end up with an in-house solution.
[1]: https://github.com/pytorch/examples/blob/main/imagenet/extra...