And now the last question: how far can you take that logic? What happens at 1,000,000 (one million file names generated), when there are no characters left to remove from the name portion to the left of the '~'?
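For reference, here's a rough sketch of how the numeric-tail scheme runs out of room. The `short_name` helper and its trimming rule are a simplified stand-in for the real generator, not the exact algorithm Windows uses:

```python
def short_name(basis: str, n: int) -> str:
    """Simplified 8.3 numeric-tail generation: keep as much of the basis
    name as still fits next to the ~N tail in the 8-character field."""
    tail = "~" + str(n)            # "~1", "~999999", "~1000000", ...
    room = 8 - len(tail)           # characters of the basis that still fit
    if room < 1:
        raise ValueError(f"no room left for the name next to {tail}")
    return (basis[:room] + tail).upper()

for n in (1, 10, 999999, 1000000):
    try:
        print(n, short_name("longfilename", n))
    except ValueError as err:
        print(n, err)
```

With this simplified model, ~999999 still leaves one character of the name ("L~999999"), but ~1000000 fills the whole 8-character field on its own, which is exactly the point where the question arises.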
A directory is stored as a linked list of clusters, just like a regular file, and the 32-bit filesize field is irrelevant for a directory, so in theory it could be as big as the whole volume: just keep adding clusters to the chain.
Filesystem drivers may give up long before then, and access to such a huge directory would be very slow, but there's nothing in the filesystem structures themselves that would prevent it. I've written FAT code for an embedded device that, I can confidently say, would have no problem with huge directories, since it simply keeps following the cluster chain to the end.
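To illustrate, here's a minimal sketch of that chain-following loop (FAT16 only, with a hypothetical helper name; a real driver also has to handle FAT12/FAT32, bad-cluster marks, and corrupt chains):

```python
FAT16_EOC = 0xFFF8   # entries >= 0xFFF8 mark end-of-chain in FAT16

def directory_clusters(fat, first_cluster):
    """Yield every cluster belonging to a directory. Its size is simply
    however long the chain is; the 32-bit size field plays no part."""
    cluster = first_cluster
    while cluster < FAT16_EOC:
        yield cluster
        cluster = fat[cluster]   # follow the link to the next cluster

# Toy FAT: cluster 2 -> 3 -> 4 -> end of chain
fat = [0, 0, 3, 4, 0xFFFF]
print(list(directory_clusters(fat, 2)))   # [2, 3, 4]
```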
Actually, it doesn't look like it. It might need to be done with a program racing through create/delete cycles (deleting the old files to keep it going to a million). Might be tricky.
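If I'm reading the idea right, it would look something like this sketch. It assumes a FAT-formatted volume with a hypothetical test directory at E:\collide and an OS that generates the ~N tails on file creation; whether deleted tails get reused instead of the count climbing is exactly the tricky part:

```python
import os

TEST_DIR = r"E:\collide"     # hypothetical directory on a FAT-formatted volume
KEEP = 100                   # how many recent files to leave on disk

os.makedirs(TEST_DIR, exist_ok=True)
for i in range(1_000_000):
    # Every long name shares the same 8.3 basis ("LONGFI"), so each create
    # forces the OS to pick a new ~N tail.
    open(os.path.join(TEST_DIR, f"longfilename_{i}.txt"), "w").close()
    if i >= KEEP:
        # Delete an older file so the directory itself stays small.
        os.remove(os.path.join(TEST_DIR, f"longfilename_{i - KEEP}.txt"))
```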
Segmentation fault?