
How do hidden volumes, such as the ones TrueCrypt provides, play into this? If the defendant doesn't give them up voluntarily, the prosecution will have a hard time proving they exist, no?

Also, if they do find evidence of their existence, does the defendant then have to give them up?




It is not that difficult to prove that a hidden volume exists. The TrueCrypt implementation of hidden volumes means that the "hidden" partition is all allocated at the end of the visible partition. If you have a 20G TC volume with a 4G hidden volume, the file system in the non-hidden volume will never allocate a block beyond 16G. This shows up as a very anomalous file system layout at the block level. Simple visualization of the block allocations will show a clear delineation where the hidden partition starts. The TC implementation of hidden volumes definitely does not provide robust plausible deniability.
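To make that concrete, here's a rough sketch of the check (my own toy illustration, not an actual forensics tool; how you read the outer file system's allocation bitmap is out of scope):

  # Toy check for the telltale "cliff" in block allocations.
  # `allocated` is one boolean per block, taken from the outer
  # file system's allocation bitmap.
  def allocation_cliff(allocated):
      total = len(allocated)
      highest = max((i for i, used in enumerate(allocated) if used), default=0)
      fullness = sum(allocated) / total
      return highest / total, fullness

  # e.g. (0.79, 0.60): the volume is 60% full, yet nothing is ever
  # written past the 79% mark -- the signature of a hidden tail.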

The police forensics investigators know to look for this already. It is in their recommended best practices for how to handle TrueCrypt volumes.

The safest way to use a TrueCrypt hidden volume is:

  * Create the largest regular volume that you can. 
  * Create the smallest hidden volume that you can.
  * Never mount the hidden volume as "protected"

The idea is that your sparsely populated cover volume won't create enough block allocations to have an obvious "end", and additionally, that those blocks will have a low likelihood of being allocated inside your hidden volume and overwriting your secret data.


I'm pretty sure this isn't true. You cannot prove the hidden volume exists. In fact, hidden volumes are completely useless without plausible deniability. The file system of the outer volume will happily overwrite your hidden volume if you tell it to. The point is that you know it's there, so you intentionally don't write more than 16G on your 20G volume. But the "unused" space looks just like random data, so you can't prove there is anything meaningful there.


Assuming you don't have it protected, you need to be more cautious than that, as the file system is not simply a long stream of bytes.


Very true, my numbers were just an illustrative example given the context set up by the comment I replied to.


Which you apparently didn't read thoroughly. I clearly state that the secure way to use TrueCrypt is to never mount the hidden volume in protected mode. That will enable the scenario you describe. I even state that the reason you want to use it the way I suggest is to minimize the amount of hidden data that is overwritten.


I think the point of TrueCrypt is simply "plausible deniability". They can suspect with a very high degree of confidence that there's a hidden volume, but how do they prove it?


They can prove it sufficiently to force you to hand over your password. File systems have a particular behaviour: they allocate blocks in certain ways. Most notably, they use all of the space available to them equally (or pseudorandomly, anyway). If a file system never allocates any data in the last N bytes, where N is a very large number, that is an indication that the file system is treating the volume as Size-N. Since this behaviour is the signature of a hidden volume in a TrueCrypt container, that is "proof". It will be sufficient proof for a court of law.
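Back-of-the-envelope (my own illustrative numbers): if blocks really were placed pseudorandomly across the whole volume, the probability that every one of them lands in the first 80% collapses geometrically with the block count:

  # Probability that n uniformly placed blocks all land below `fraction`
  # of the volume -- vanishingly small for any real file system.
  def prob_all_below(fraction, n_blocks):
      return fraction ** n_blocks

  print(prob_all_below(0.8, 1000))  # ~1.2e-97, for a mere 1000 blocks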

Essentially you are arguing that the file system implementation exhibited implausible behavior (it allocated only from the first N% of bytes), and that TrueCrypt exhibited implausible behavior ("ok, normally that would mean a hidden volume, but not in this case!").

All of which is to say that TrueCrypt's implementation of hidden volumes (as typically used by end users) is not actually plausibly deniable.


Just because a filesystem isn't using all the space available in a partition does not mean that the rest of the space is being used by something else. Imagine I run `newfs -s 2097152 sd1p` where sd1p is actually 4194304 sectors. Now imagine sd1 is a softraid volume with a CRYPTO discipline. There's no way you can prove that the extra 2097152 sectors aren't being used, but there's also no way you can prove that they are.


That's certainly true. But it is also (from an investigation point of view) more suspicious than a filesystem covering the whole hard disk but only filled to 20%. That will always be a problem as long as the "visible" filesystem just maps blocks 1..N directly onto encrypted blocks 1+k..N+k (with k being a constant offset), as is currently the case e.g. in Linux LUKS (I assume the CRYPTO discipline in BSD is similar).

The proper solution would most likely be to integrate a kind of block mapping into the encryption software, which allocates randomly distributed blocks from the encrypted hard disk whenever a filesystem begins to write to the blocks of a volume. This randomization algorithm would then be aware of all currently active "hidden partitions", but due to the randomness, no pattern would emerge from which to draw conclusions about the existence of other partitions.
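A toy sketch of that idea (every name below is hypothetical, and real software would also need to persist and encrypt the mapping table itself):

  import random

  # Toy randomized block mapper: a logical block gets a physical home
  # only on first write, drawn at random from the blocks not claimed by
  # any mounted (possibly hidden) volume, so no telltale offset emerges.
  class RandomizedMapper:
      def __init__(self, total_blocks):
          self.free = set(range(total_blocks))
          self.tables = {}  # volume id -> {logical block: physical block}

      def map_write(self, volume, logical):
          table = self.tables.setdefault(volume, {})
          if logical not in table:
              physical = random.choice(tuple(self.free))
              self.free.remove(physical)
              table[logical] = physical
          return table[logical]

Note that the mapper only avoids clobbering a hidden volume's blocks while that volume is mounted and registered in its tables, which is exactly the awareness of "currently active" hidden partitions described above.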


"More suspicious" is meaningless. If you can't prove - with incontrovertible evidence and beyond any reasonable doubt - that there's something there, then there's plausible deniability.


Plausible deniability won't protect you from the "rubber hose" of a contempt charge.


It would be really nice if TrueCrypt implemented a feature whereby you could use a special password to render the secret partition unusable, perhaps by rendering your existing password/key worthless.


Something like this wouldn't really protect you from law enforcement. They perform forensic disk duplication before mucking around with a drive. If you provide a fake password to TrueCrypt and it starts overwriting things, it would be pretty obvious to anyone investigating the drive what's going on.


I'm not sure how such a feature would work or how useful it would be, for that matter. Maybe the TrueCrypt binary would attempt to decrypt the first X bytes of the partition under the "coercion" password and then check if it matches some known signature. If so, flip a bit in each encrypted block to scramble it.
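Sketching that flow (to be clear, this is not a TrueCrypt feature; every name below is made up for illustration):

  import hashlib

  # Hypothetical "coercion password" check. `real_tag` and `duress_tag`
  # are stored verifiers; `wipe_keyslots` is whatever scrambles the real
  # key material beyond recovery.
  def unlock(password, salt, real_tag, duress_tag, wipe_keyslots):
      tag = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
      if tag == duress_tag:
          wipe_keyslots()   # render the genuine password/key worthless
          return None       # then fail exactly as a wrong password would
      return tag if tag == real_tag else None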

Problem: forensics people can use a write-blocking adapter on the original disk and simply make copies to try out the decryption. So, the feature sounds both irritating to implement and (worse) likely to give a false sense of security to a novice.


http://www.truecrypt.org/docs/?s=hidden-volume is probably more what you want.


> they use all of the space available to them equally (or, pseudorandomly anyway)

This is simply false for many (if not most) filesystems, which preferentially write to blocks near the beginning of the disk. For spinning disks, random distribution of blocks would kill performance.


No, it is not false. The important thing with a spinning disk is locality of reference. You want the blocks which store the file content to be as close together as possible, to minimize the head seek times. This means you want as long a chain of contiguous blocks as possible. This does not mean that you want all those blocks to be at the beginning of the disk. In fact, the exact opposite. You want to start that chain at a random location so you are more likely to have a large number of contiguous unallocated blocks. See the implementation of HFS+ Extents, or Ext4, or UFS for examples of how this works.
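A toy version of that strategy (my own simplification, not any particular filesystem's real allocator):

  import random

  # Toy extent allocator: start the search at a random block, then grab
  # the first contiguous free run found. Allocations end up spread across
  # the whole volume instead of piling up at the front.
  def allocate_extent(bitmap, want):
      n = len(bitmap)
      start = random.randrange(n)
      for offset in range(n):          # scan from the random start, wrapping
          i = (start + offset) % n
          if not bitmap[i]:
              run = 0
              while run < want and i + run < n and not bitmap[i + run]:
                  run += 1
              for j in range(i, i + run):
                  bitmap[j] = True     # mark the extent as allocated
              return i, run            # extent start and length
      return None                      # volume is full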


A) You have forgotten basic physics. The beginning of the disk is faster. Locality is desirable but is not and has never been the only thing that matters.

B) You have just named three uncommon filesystems that few people will ever use in the first place, much less with TrueCrypt.


> A) You have forgotten basic physics. The beginning of the disk is faster. Locality is desirable but is not and has never been the only thing that matters.

If you haven't actually looked at the block allocation patterns of common filesystems, then you can't say conclusively that fuzzbang is incorrect. Arguments from first principles (e.g. "basic physics") cannot override empirical evidence.

Further, locality of reference will have a much, much bigger influence on spinning disk I/O throughput than location at the front of the disk. The difference between the outer rim and inner rim might be 120MB/s to 70MB/s, so reading a contiguous 20MB file will take 1.7x as long if it's stored at the inner rim (286ms vs 167ms). However, if that 20MB file is stored in 100 200KB fragments, and seeking to each fragment takes 4ms, your reading time will be dominated by the seek time due to fragmentation (686ms vs 567ms, or a mere 1.2x difference).
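For anyone who wants to check the arithmetic:

  # Seek time vs. transfer time for the 20MB example above.
  size_mb, seek_ms, fragments = 20, 4, 100

  for rate in (120, 70):                      # outer vs. inner rim, in MB/s
      transfer = size_mb / rate * 1000        # ms of pure sequential reading
      total = transfer + fragments * seek_ms  # plus one 4ms seek per fragment
      print(f"{rate} MB/s: contiguous {transfer:.0f}ms, fragmented {total:.0f}ms")

  # 120 MB/s: contiguous 167ms, fragmented 567ms
  #  70 MB/s: contiguous 286ms, fragmented 686ms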

Based on my experience I'm inclined to accept fuzzbang's description of block allocation strategies. It used to be common wisdom that you could defragment a volume by copying all the data off, formatting it, then copying the data back on. I did this once with an NTFS volume (using ntfs-3g), then checked the resulting data in the Windows disk defragmenter. The data was primarily located around the center of the volume, with numerous gaps. Filesystems leave gaps to allow room for files to expand.

> B) You have just named three uncommon filesystems that few people will ever use in the first place, much less with TrueCrypt.

"Commonness" for the purposes of forensics is a much lower bar than for market analysis. I'd also wager that, servers included, there are at least as many ext2/ext3/ext4 volumes on the planet as NTFS volumes.


> If you haven't actually looked at the block allocation patterns of common filesystems

I have.

> you can't say conclusively that fuzzbang is incorrect

And I can.

I'm aware of the degree of difference in speed. It is sufficient that it is standard practice for filesystems to be restricted to the first 1/4-1/2 of a spinning disk in performance-sensitive applications. Or at least it was; in the last few years we've become more likely to just use SSDs or keep everything in RAM.

> if that 200MB file is stored in 100 2MB fragments

Thank you for assuming I don't even have the knowledge of a typical computer user, it greatly increases the likelihood I'll not waste further time with you. Raises it, in fact, to 100%.


> If you haven't actually looked at the block allocation patterns of common filesystems

> I have.

And? What distribution of block allocation did you observe on said filesystems? Does it contradict the original supposition that filesystems spread out allocations to prevent fragmentation, thus possibly overwriting hidden data at the end of a partition?

> It is sufficient that it is standard practice for filesystems to be restricted to the first 1/4-1/2 of a spinning disk in performance-sensitive applications.

This has as much to do with seek times as sequential reading speed. A drive with 8ms average seek times might average 2ms if you only make the heads travel 25% of the width of the platter.

The fact that you have to restrict the filesystem to the beginning of the disk suggests that filesystems don't do this automatically.

> Thank you for assuming I don't even have the knowledge of a typical computer user, it greatly increases the likelihood I'll not waste further time with you. Raises it, in fact, to 100%.

I'm not sure how you got that impression. I was just providing numbers to complete the example. There's no need to become defensive, and if you find that you might be wrong, saying "That's a fair point, I'll have to do some more research" goes a lot further than continuing to beat a dead horse.

Maybe it's the fact that this thread is on an article related to law, politics, and morality that is causing needless argumentation.


HFS+ is not exactly uncommon


As a full-time Mac user for 8 years, and a Linux user for much of the decade before that, I am acutely aware of how common many filesystems, including HFS+, actually are.

If OS X ever breaks 10% (and still uses HFS+ at that time), I'll reconsider my judgement of its commonality.


I'm writing this from a computer with two partitions: one HFS+ running OSX and one Ext4 running Linux. These are both the default options when installing OSX (Mountain Lion) and Ubuntu 12.04, respectively:

https://www.dropbox.com/s/x3cluigwaog8sad/gparted.png


Of course you are. And if you took a poll of HN users, you might even find OS X + Linux near a majority. That has nothing at all to do with what filesystems the vast majority of people are using, much less what they'd be using with TrueCrypt.


It's over 7%; I still wouldn't call that uncommon or say that few people use it.

http://en.m.wikipedia.org/wiki/Usage_share_of_operating_syst...


That's how it does hidden volumes, huh? I always thought it was something sneakier, like flagging a handful of files that exist in the regular volume and building its hidden volume out of them.


Well... a normal PC can have up to 10-12 TB of storage these days. It is more than normal for 1 or 2 TB to be left empty. You just create the hidden volume there.


Excellent. You read my directions on how to use TrueCrypt correctly! :)


Hidden volumes aren't all that great for hiding porn. Generally, porn takes up a large amount of space. Hidden volumes are great for hiding a small amount of data (passwords, launch codes, bank statements, etc.) among a large amount; not so great for hiding a large amount of data.


If they don't have to decrypt known encrypted contents, why would they have to give up unknown encrypted contents?


This would not be the best argument. It's possible to prove you're lying. If, on the other hand, you merely state that you forgot the password, it's impossible to prove you're lying.



