Going to analog and back, e.g. by photographing the screen or printing and re-scanning a document, is a good way to ensure you've removed all document metadata, but it brings its own challenges (see the article, where it mentions source camera identification), not to mention that the camera/scanner itself may add back its own metadata.
Watermarking may or may not survive that kind of process; depending on the scheme, it may be designed to remain detectable even in low-quality copies.
The more coarsely you filter the data (reducing resolution is essentially a low-pass filter on the image data), the more you reduce the bandwidth available to a watermarking signal, but with spread-spectrum and forward error correction techniques the ratio of watermark to data can be pushed arbitrarily low. No amount of obfuscation will defeat watermarking if the algorithm/key is unknown to the leaker and a huge amount of data needs to be released.
That is, maybe you can use video and audio filtering/manipulation to push the watermark bandwidth down to one bit of watermark per 1 GB of data, but with 100 participants and 7 GB of data, 7 bits is enough to identify the leaker (2^7 = 128 distinct IDs, more than enough for 100 people).
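To make that arithmetic explicit, here's a back-of-the-envelope sketch using the hypothetical numbers above (the participant count, data size, and surviving bits-per-GB are illustrative assumptions, not figures from any real scheme):

```python
# Back-of-the-envelope check of the identification argument above.
# All numbers are the hypothetical ones from the text, not from a real scheme.
import math

def bits_needed(num_recipients: int) -> int:
    """Bits required to give every recipient a unique ID."""
    return math.ceil(math.log2(num_recipients))

def surviving_capacity(data_gb: float, bits_per_gb: float) -> float:
    """Watermark bits that survive the leaker's filtering across the whole release."""
    return data_gb * bits_per_gb

recipients = 100      # people who each got an individually marked copy
data_gb = 7.0         # size of the leaked material
bits_per_gb = 1.0     # pessimistic post-filtering watermark bandwidth

needed = bits_needed(recipients)                       # ceil(log2(100)) = 7
available = surviving_capacity(data_gb, bits_per_gb)   # 7.0

print(f"bits needed for {recipients} recipients: {needed}")
print(f"bits surviving in {data_gb} GB at {bits_per_gb} bit/GB: {available}")
print("leaker identifiable" if available >= needed else "not enough capacity")
```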
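And to illustrate the spread-spectrum point: a single payload bit spread across a million samples with a keyed ±1 sequence can survive a heavy moving-average low-pass filter plus added noise, because correlating against the key still pulls the bit out of the noise. This is a deliberately simplified toy sketch with made-up parameters, not any real product's watermarking algorithm:

```python
# Toy spread-spectrum watermark: why crude low-pass filtering alone
# doesn't erase a keyed watermark. Parameters are illustrative assumptions.
import numpy as np

rng_key = np.random.default_rng(42)     # the secret key: seeds the chip sequence
rng_other = np.random.default_rng(7)    # host signal and attacker noise

N = 1_000_000                           # host samples (e.g. audio or pixel values)
ALPHA = 0.1                             # watermark amplitude, well below host variance

host = rng_other.normal(0.0, 1.0, N)       # stand-in for the cover signal
chips = rng_key.choice([-1.0, 1.0], N)     # keyed pseudorandom +/-1 spreading sequence
bit = 1                                    # one payload bit, encoded in the sign

marked = host + ALPHA * bit * chips        # embed: spread the bit across all samples

def low_pass_attack(signal, window=8, noise=0.1, rng=rng_other):
    """Crude removal attempt: moving-average low-pass filter plus added noise."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="same")
    return smoothed + rng.normal(0.0, noise, signal.size)

def detect(signal, chips):
    """Blind detection: correlate the signal against the keyed chip sequence."""
    return float(np.dot(signal, chips))

attacked = low_pass_attack(marked)
baseline = low_pass_attack(host)           # same attack on an unmarked copy

print("correlation, marked copy:  ", detect(attacked, chips))   # large positive
print("correlation, unmarked copy:", detect(baseline, chips))   # comparatively tiny
print("recovered bit:", 1 if detect(attacked, chips) > 0 else -1)
```

The filtering shrinks the per-sample watermark energy, but summing the correlation over enough samples still separates the marked copy from the unmarked one; that's the sense in which more data released means more watermark bandwidth for the adversary.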