Here's the sticking point for me: it's perhaps overly pedantic, but I want to view the world honestly, and there are some great points of absurdity here. (I always like the absurd, and the ways our world is otherworldly.)
The problem is that much of our approach to information is creative, and we need to start thinking in those terms.
If you have a JPEG of a murder on your unencrypted hard drive, that's not actually a photograph; it's a set of magnetic pointings which can, with certain hardware, be used to produce a photograph. If you think about it, that also applies to writing on paper, or colored splotches encoding an image into a physical photograph. Those require a creative attempt to produce meaning. The meaning can be off if the creative attempt is not followed through correctly.

The easy way to see this is to imagine someone systematically using a common word in an uncommon way -- Feynman, for example, was once, on the Challenger commission, chasing down memos which sounded like NASA had been actively irresponsible, but instead it turned out to be a figure of speech they'd adopted for a certain phase of their construction. Or imagine that our demented individual really does have a very detailed, lifelike photograph which appears to document his murder of another, but in fact the "murdered" girl is a still-alive actress who was paid to appear in these photographs; the "blood" and such is very convincing but is ultimately a prop.
So the meaning can be off, if the creative act goes awry. I'm using this to underscore that you have to think, at some level, about that recreation of semantics from the physical fact.
Let me be clear: I don't think this is a barrier to investigation usually. I think it's clear that we expect a sort of 'normal hardware' that allows us to recreate semantics. The photographs in this safe, when viewed by a normal person in normal lighting, would show an image of the defendant committing a murder -- and if they want to say that this was all theatrically staged, they may produce the actress or others involved in the production. By that account, photographs inside of a safe are also governed by this principle: even if their physical location happens to be remote and inaccessible, reproducing the image from the photograph is as simple as just looking at it. The photograph really contains the image, up to a 'trivial' semantics.
Now bring this back to your other example of an encrypted disk storing child pornography. That is a nontrivial semantic inflation: you are literally asking the defendant to create child pornography for the purposes of the case. In some sense perhaps you're just saying "create whatever this drive's contents are," with the understanding that the police are going to look through it for child pornography -- in that phrasing, it's clearer that this pornography might not actually exist, and so on -- so there is perhaps a way to comply without generating child pornography at the judge's request.
But still, that's a little mad and absurd in the wonderful way that our world can be otherworldly. It opens up all sorts of questions which I have no clue how to answer. Decryption, like most computation, is a creative act. To demand decryption is to demand creation.
I quoted the above in particular because I really don't care about the "enormous problem for our justice system." Like, the fact that we don't have embedded realtime GPS trackers installed in our spines is an "enormous problem for our justice system" because it makes it so tremendously hard to figure out whether our alibis are true or false. Screw that sort of thinking. Whatever caused the investigators to think this individual was manufacturing or downloading kiddie porn should have been enough to convict. This shouldn't be a gray-matter area. "We just cracked down on this peer-to-peer kiddie porn program, we saw that you were using it to share many images, here are the filenames that the defendant's computer was sharing at the time we busted into his house with a warrant." (Are the police allowed to download such things? Probably. "Here are just a couple of the images we downloaded from him," too, then.) So, if they don't have a case and are fishing through the hard drive to try to make one, that's more or less explicitly what the Fifth Amendment is supposed to guard against: "we don't know your exact sins but we know you're a sinner so damn it, confess!"
But still, the sticking point is the glorious absurdity: "Mr. Doe, we have reason to believe that if you say the magic word, your computer will manufacture child pornography. We demand that you say the magic word, so that we know whether this is true." How will we decide that issue in the face of its pure and present absurdity?
Most encryption software, including TrueCrypt, will complain if you provide the wrong key. I object to this behaviour strenuously. What if it stopped doing that? What if it just gave you whatever output would arise from feeding key x into the algorithm? It would be upon the court to show that the resulting incoherent mass of bytes does not contain "satisfactory" output, which requires them to show what the satisfactory output ought to be, which means they must have some idea of what they're looking for to begin with and the ability to show that it exists on the encrypted medium to begin with. This would be problematic in most cases.
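To make the point concrete, here's a minimal sketch of that alternative behaviour, using a toy hash-based stream cipher (illustrative only, not a vetted construction, and nothing like TrueCrypt's actual format): decryption with the wrong key raises no error and makes no complaint -- it just yields a different mass of bytes.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: hash key || nonce || counter. Illustrative only,
    # NOT a vetted cipher construction.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

nonce = b"fixed-nonce"
ct = encrypt(b"right key", nonce, b"where-I-buried-him.txt")

# The right key round-trips the plaintext; a wrong key produces no error
# at all -- just an incoherent mass of bytes.
assert decrypt(b"right key", nonce, ct) == b"where-I-buried-him.txt"
assert decrypt(b"wrong key", nonce, ct) != b"where-I-buried-him.txt"
```

Without any authentication tag, nothing in the output distinguishes "wrong key" from "right key applied to meaningless data" -- which is exactly the evidentiary burden described above.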
There is another cool utility: encfs. It has a magic option, "--anykey". Basically, it skips verification of the key hash and always tries to decrypt with whatever key you provided. The thing is, it will show you only the correctly decrypted files. So by using different passwords you essentially create layers of encrypted files, where each layer is decrypted by a different password.
Truth is, if something did not decrypt, law enforcement will see it -- but I do not see how they could prove you provided the wrong password intentionally, rather than having changed the password to a new one at some point and forgotten the old. That is essentially what happens when you use a different password: you receive no error, just an empty container where you can start adding personal files.
Even if TrueCrypt didn't protect their encryption with a message-authentication code, the police would still notice that you had given them a decrypted file without a filesystem on it -- much less a filesystem containing /media/truecrypt1/where-I-buried-him.txt . If they have already convinced a judge to force you to decrypt the file, they could just tell the judge "this person is being uncooperative!" and your hijinks will get you nowhere.
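For contrast, here is a toy sketch of why MAC-protected formats do complain about a wrong key: the tag is recomputed under the offered key and compared, so a mismatch is detected explicitly instead of producing silent garbage. (Authentication only, for brevity; a real system would encrypt the payload as well, and the `seal`/`open_sealed` names are invented for this example.)

```python
import hashlib
import hmac

def seal(key: bytes, data: bytes) -> bytes:
    # Prepend an HMAC-SHA256 tag computed under the key.
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return tag + data

def open_sealed(key: bytes, blob: bytes) -> bytes:
    tag, data = blob[:32], blob[32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    # constant-time compare; a wrong key fails loudly here
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong key (or tampered data)")
    return data

blob = seal(b"correct horse", b"contents of the container")
assert open_sealed(b"correct horse", blob) == b"contents of the container"
```

With this check in place, "I gave you the key and it produced garbage" is no longer a plausible story -- the format itself testifies that the key was wrong.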
Now suppose that they do not have this, but convince the judge that, since you have TrueCrypt and this is the only random-looking file on your computer, this is probably your TrueCrypt archive. They convince the judge to threaten you with contempt if you don't decrypt it, through whatever means they have available to them. Well, TrueCrypt containers are always meant to be directories -- i.e. they always hold file systems -- and so you'd best decrypt this container into a file system! But that severely restricts your defense.
TrueCrypt will let you do something different: provide a 'wrong key' which nonetheless decrypts the device to a valid file system. This is their 'hidden volume' system.
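A toy model of the idea -- assuming nothing about TrueCrypt's actual on-disk format; the slot offsets, magic tag, and hash-based cipher here are all invented for illustration: one container, two keys, each key revealing a different "volume," and a wrong key revealing nothing at all.

```python
import hashlib
import os

MAGIC = b"VOL!"
SLOTS = (0, 2048)   # outer header at the front; hidden header buried in "free space"
BLOCK = 64

def _keystream(key: bytes, length: int) -> bytes:
    out, i = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        i += 1
    return out[:length]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def make_container(outer_key, outer_data, hidden_key, hidden_data, size=4096):
    blob = bytearray(os.urandom(size))  # everything unclaimed stays random noise
    for offset, key, data in ((SLOTS[0], outer_key, outer_data),
                              (SLOTS[1], hidden_key, hidden_data)):
        block = (MAGIC + data).ljust(BLOCK, b"\x00")
        blob[offset:offset + BLOCK] = _xor(block, key)
    return bytes(blob)

def open_container(blob: bytes, key: bytes):
    # Try each slot; whichever decrypts to the magic tag is "the" volume.
    for offset in SLOTS:
        plain = _xor(blob[offset:offset + BLOCK], key)
        if plain.startswith(MAGIC):
            return plain[len(MAGIC):].rstrip(b"\x00")
    return None  # wrong key: nothing decrypts, and nothing marks the slots

container = make_container(b"duress password", b"vacation photos",
                           b"real password", b"the real secrets")
assert open_container(container, b"duress password") == b"vacation photos"
assert open_container(container, b"real password") == b"the real secrets"
assert open_container(container, b"guess") is None
```

The property being modeled: the duress key yields a perfectly valid, innocuous view, and from the bytes alone there is no way to tell that a second slot exists.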
I'm kind of mixed in my reaction to TrueCrypt's hidden partitions, for other reasons. But they address the problem that you've identified, and I haven't figured out a better solution.
TrueCrypt is not meant to hold file systems any more than a hard drive is. There is nothing stopping you from not creating a file system on your TrueCrypt volume and just storing garbage in it -- or using other encryption software on top of it.
TrueCrypt's hidden-volume feature is, in my opinion, close to meaningless in most cases due to the way it is likely to be used. If you present a decryption key that gives access to a filesystem that does not match what was expected, then you are in trouble.
Especially the hidden-OS feature: "So you have been using this laptop on multiple occasions in the last week (of which we have proof), but according to the filesystem you presented to us, this system hasn't been used for over a month."
The same goes for a hidden volume. Unless you actively use it as often as you use your device (which is really cumbersome to do right), you might just be better off without it, since exposing it will tell them far more than you want to tell them (for starters, it will tell them that you are actively lying and have taken precautions in order to try to get away with lying).
The problem is, due to what I guess is something of a flaw in the central idea, you ultimately have to provide the password for your inner volume when you do all of these things which don't involve it. So now your private data is split up over two drives, which is at least somewhat questionable, and also the "mundane" drive requires the "important" password.
This may be acceptable if you're collecting a small cache of text documents which you believe could harm a corporation -- then you say "no, I don't have those articles; see, this really is just my porn stash, please don't hurt me." But a criminal or a government -- no, they're willing to be patient, and they're perhaps willing to peek at your password input prompts with webcams or audio recordings. They would know that there's an extra password being entered every time you decrypt that file.
This is not a pedantic side concern; it is, in fact, the key component of the government's ability to compel evidence production. If they cannot show that they know what's on your hard drive, that you control it, and that what's on your hard drive is incriminating, they cannot compel you to decrypt it.
So yeah, if you gave them a bad key and your decryption algorithm returned garbage, they'd certainly lock you up for contempt (given the aforementioned conditions were true).
Now, such isomorphisms turning one piece of information into another could in theory be found for any two pieces of information, but here we are talking about a very limited family of isomorphisms on the space of all finite binary sequences, so there is little to no creativity involved in selecting and using such an encryption function.
But then you seem to be fine ignoring all that and calling the decrypted contents "child pornography". Why aren't the encrypted contents also child pornography? Why is decrypting them the point of creation, rather than, say, opening them in an image viewer?
Now, I'm also trying to form a line of demarcation for why we feel we can abstract those away, and I think that at least an acceptable first approximation, a first abstraction layer, is something like "a normal person with normal tools can look at X and, through this, view a pornographic image."
If it's encrypted then the point is that this becomes one of Joel on Software's "leaky abstractions." The problem is that no, we can no longer ignore the massive number of creations, because you need to say a Magic Phrase to interpret this thing as an image. If you pronounce a different phrase, it just looks like random data. What we're telling the defendant is something like, "say the phrase that makes this look incriminating" -- or perhaps just "say the phrase that makes this not look random."
I guess to answer your last question: Neither the encrypted nor the decrypted contents are, in the absolute strictest sense, images. They have to be rendered onto a screen and then viewed by a conscious person of sound mind to be images. (Maybe a better word is "viewings.")
So decrypting them is a point of creation, as is opening them in an image viewer, as is looking at that image viewer. The absurd thing to me is, if you really focus on the technical details, you'd have to conclude that they don't become "child pornography" until we view them and say "that looks like it was intended to arouse someone, and it looks like it contains an underage person."
So part of why I'm proposing the above "normal people with normal tools" idea is to give some ground to say that the decrypted stuff "can be thought of as child pornography" -- because a normal person will come to that judgment when using the data in a normal way. So in that sense, the decrypted contents "are" child pornography.
You may wish to ignore me on that; I may be becoming too philosophical and solving problems that don't need solving. Perhaps the big problem that's sitting at the back of my mind is this: for any large random-looking block of bits you give me, there is in principle a stream of bits which can be XORed with it to convert it into a JPEG file. In practice there are some limits based on block sizes and ciphers, but in principle there exists some mathematical transform which converts any normal hard drive into this sort of thing.
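That XOR argument can be stated in a few lines: for any blob of bytes and any equal-length target, the pad that maps one onto the other is just their bitwise difference. (The target here is the first few bytes of a standard JPEG/JFIF header, purely as illustration.)

```python
import os

# First 12 bytes of a typical JPEG/JFIF file: SOI marker, APP0 marker,
# segment length, "JFIF\0", version.
target = b"\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01"

blob = os.urandom(len(target))  # any random-looking "ciphertext" at all

# The "key" that converts this blob into the incriminating target is
# simply their XOR difference -- it always exists, for any blob.
pad = bytes(a ^ b for a, b in zip(blob, target))

assert bytes(a ^ b for a, b in zip(blob, pad)) == target
```

Which is exactly the worry: the existence of a transform from "random bytes" to "incriminating file" is guaranteed by arithmetic, so mere existence can't be the criterion that matters.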
So I'm interested in the philosophical problem of excluding all of the transforms which we don't want to admit.