I was thinking about CFA mosaicing and JPEG compression; I think these may introduce some axis-aligned artifacts. But maybe they took that into account (by using raw format?), or the effect is not relevant in this case.
Even in raw format, all digital cameras apply some amount of sharpening, even when the setting is "off" in the camera menu. Also, all raw conversion software (Lightroom, Capture One, etc.) applies sharpening by default.
I could imagine that a sharpening algorithm could transform a random distribution into something with structure. That the authors appear to not reference camera or image sharpening anywhere in the paper is somewhat worrisome.
That rule doesn't hold for some languages. For example, a Python lexer needs to remember a stack of indentation levels to know whether a given line's indentation should be tokenized as an INDENT, a DEDENT, or no token at all.
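A minimal sketch of what that stateful rule looks like in practice (illustrative only, not CPython's actual tokenizer; token names and the tuple format are my own):

```python
def indent_tokens(lines):
    """Yield (kind, column) pseudo-tokens for a sequence of source lines.

    The stack of indentation levels is the state the lexer must carry
    between lines: a deeper level pushes and emits INDENT, a shallower
    level pops (possibly several times) and emits DEDENTs, and an equal
    level emits nothing at all.
    """
    stack = [0]  # column 0 is always the outermost level
    for line in lines:
        if not line.strip():  # blank lines don't affect indentation
            continue
        width = len(line) - len(line.lstrip(" "))
        if width > stack[-1]:
            stack.append(width)
            yield ("INDENT", width)
        else:
            while width < stack[-1]:
                stack.pop()
                yield ("DEDENT", width)
            if width != stack[-1]:
                raise IndentationError("unindent does not match any outer level")
    while len(stack) > 1:  # close any still-open blocks at end of input
        stack.pop()
        yield ("DEDENT", 0)
```

Note that a single line can emit several DEDENTs at once -- dropping from a doubly nested block straight back to column 0 closes two blocks -- which is exactly why one saved column number isn't enough and a whole stack is needed.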
Most demo competitions do not assume you'll require internet connectivity at all, so that won't be required ;)
I see your irony, and I do agree that being judged by today's standards is to be expected.
That doesn't detract from the OP's point, though: what we use today (apart from WebGL and other hardware-accelerated code) is several orders of magnitude behind what already existed 10 to 15 years ago.
I do hope, though, that we'll see more widespread, standard techniques for clawing back some of that speed, rather than counting solely on Moore's law :)
See, for example, Hushmail, which has no option but to cooperate with correctly formed legal documents. In practice that means they either capture the passphrase (which they can do within a short time) from users of the non-Java-applet version of their software, or they serve a modified applet.
Being in a different jurisdiction provides a small amount of protection.
I wonder if parent was merely advocating obfuscating sensitive data so that engineers don't accidentally see things like "Downsizing-2012.xls". As long as the obfuscation is reversible, the data is still there for those who need it.
Of course, encryption per se is overkill for that. Something like ROT13 would do the trick.
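As a sketch of the idea, using Python's built-in `rot_13` codec (the filename is just the example from the comment above):

```python
import codecs

def obfuscate(name: str) -> str:
    # ROT13 is self-inverse: applying it twice returns the original,
    # so anyone who legitimately needs the real name can recover it
    # trivially -- the point is only to stop accidental shoulder-surfing.
    return codecs.encode(name, "rot13")

print(obfuscate("Downsizing-2012.xls"))             # Qbjafvmvat-2012.kyf
print(obfuscate(obfuscate("Downsizing-2012.xls")))  # round-trips to the original
```

Digits and punctuation pass through unchanged, which here is arguably a feature: the file is still recognizably a 2012 spreadsheet, just not an alarming one at a glance.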
The OP says that double-spacing is an obsolete holdover from the typewriter era, when the extra space made monospaced type easier to read. I'll go one further and say that spacing sentences using space characters -- any number of them -- is obsolete.
In the present era, the act of typing is separate from the act of typesetting. The comment I'm typing now may be typeset in Arial in Chrome, typeset in Ubuntu Mono in Emacs, or read aloud by a software program to a blind person. No prescribed number of space characters is going to be appropriate for all cases.
The reading software, not I, should be responsible for locating my sentence breaks and setting appropriate spacing there. Perhaps in the future we'll assist the software by marking up sentence breaks using a special character sequence. Ironically, a double-space would serve that function pretty well.