It depends on how many files you have, but also on their size. My understanding is that IPFS splits files into 256 KiB chunks, each with its own content ID (CID), and when you expose your project it tries to advertise every CID of every file to peers.
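As a rough back-of-envelope sketch (not the actual IPFS chunker; the 256 KiB default is the only figure taken from above, and it undercounts by ignoring directory and file-root nodes), you can estimate how many CIDs a tree of files would generate:

```python
import os

CHUNK_SIZE = 256 * 1024  # IPFS's default chunk size, per the above

def estimate_cids(root: str) -> tuple[int, int]:
    """Walk a directory tree and estimate how many chunk CIDs
    IPFS would need to advertise for it."""
    files = 0
    chunks = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            size = os.path.getsize(os.path.join(dirpath, name))
            files += 1
            # Every file yields at least one chunk, even if empty.
            chunks += max(1, -(-size // CHUNK_SIZE))  # ceiling division
    return files, chunks

if __name__ == "__main__":
    files, chunks = estimate_cids(".")
    print(f"{files} files -> roughly {chunks} chunk CIDs to advertise")
```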
200,000 files could take a while to advertise, but from memory it should work and hang for less than 15 minutes, though that depends on your hardware, file sizes, the quality of your connection to peers, the alignment of the planets, etc.
Add an order of magnitude on top of that and it starts to become tricky: still manageable if you shard across several nodes and hunt for workarounds to the perf issues, but grow a bit past that point and the node can't keep up with publishing every small chunk of every file, one by one, fast enough. The arithmetic is sketched below.
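The reason it falls behind is that DHT provider records have to be periodically re-announced, so the node must push every CID within a fixed window. Here is a minimal sketch of that arithmetic; the 100 ms per provide and the 22-hour reprovide window are assumed round numbers for illustration, not measured IPFS figures:

```python
# Illustrative arithmetic only: per-provide latency and the reprovide
# window are assumed values, not measured IPFS figures.
SECONDS_PER_PROVIDE = 0.1        # assumed average time to announce one CID
REPROVIDE_WINDOW_S = 22 * 3600   # assumed window in which records must be re-announced

def provide_backlog(total_cids: int) -> None:
    """Check whether a node announcing CIDs sequentially can
    re-advertise all of them before the reprovide window elapses."""
    needed = total_cids * SECONDS_PER_PROVIDE
    verdict = "keeps up" if needed <= REPROVIDE_WINDOW_S else "falls behind"
    print(f"{total_cids:>12,} CIDs: {needed / 3600:6.1f} h of publishing "
          f"per {REPROVIDE_WINDOW_S // 3600} h window -> {verdict}")

for n in (200_000, 2_000_000, 20_000_000):
    provide_backlog(n)
```

Under these assumptions, 200k CIDs fit comfortably in the window while 2M do not, which is why sharding across several nodes (dividing the CID count per node) buys you roughly one extra order of magnitude.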
But it's also very possible perf has improved since the last time I tried it, so definitely take this with a grain of salt; you might want to install it, run the publish command, and see what happens.
I'm not sold on IPFS, but the idea of using a file system as a top-level global index is attractive to me. I find the two best references for human information are global location and time, and I think an operating system structured around those constants could be a winner.
In the meantime, I'll look at Willow and IROH.