Also, volumes created by the quite popular TrueCrypt and its successor VeraCrypt have random headers: https://www.raedts.biz/forensics/detecting-truecrypt-veracry...
Looks like a nice enough project, but be careful with claims like "first".
Interesting, I hadn't seen this project yet, or I would have been more careful, thanks.
A few thoughts:
- The lack of a link to a white paper makes DeLUKS difficult to analyze, but it looks like they're hashing the password using PBKDF2-SHA1, which is quite outdated. (PUREE uses argon2id.)
- The installation/usage seems slightly complex. I'd suggest people compare to the PUREE quick start instructions.
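To illustrate the KDF point above, here is a minimal stdlib-only Python sketch. Argon2id itself needs a third-party library (e.g. argon2-cffi), so PBKDF2 stands in for both sides of the comparison; the iteration counts, password, and salt are purely illustrative.

```python
import hashlib

password = b"correct horse battery staple"
salt = b"\x00" * 16  # illustrative fixed salt; real tools store or derive a per-disk salt

# The legacy-style derivation: PBKDF2 with SHA-1 and few iterations
# (fast to brute-force on GPUs)
weak_key = hashlib.pbkdf2_hmac("sha1", password, salt, 1000, dklen=32)

# Same stdlib API with a modern hash and a far higher work factor; still not
# memory-hard like argon2id, which additionally drives up GPU/ASIC cracking cost
better_key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)

print(len(weak_key), len(better_key))  # both yield 256-bit keys
```

Both calls produce a usable 256-bit key; the difference is entirely in how expensive each guess is for an attacker.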
As per your link: "When you create an encrypted volume using TrueCrypt or VeraCrypt it is stored as a file (container) on your hard drive."
And as per VeraCrypt's wikipedia page: "VeraCrypt supports plausible deniability by allowing a single "hidden volume" to be created within another volume"
I didn't read too much deeper, but the phrases "stored in a file" and "hidden volume within another volume" are red flags.
The reason that's not the focus of TrueCrypt/VeraCrypt is that they also go significantly further. One problem with plausible deniability in this concept is that it's difficult and limiting to have to hide traces of access to an encrypted disk from a host OS that is stored on unencrypted media (and, obviously, since that's the point of the whole project, even if it were encrypted, you could be compelled to decrypt it).
TrueCrypt/VeraCrypt FDE supports hiding an encrypted, bootable partition within the free space of another encrypted, bootable partition, in such a way that it is not generally possible to tell that there is a hidden partition, but you can boot into either depending on what password you enter. (Of course, the bootloader that allows you to enter the password is not encrypted, so for bootable partitions there is evidence that some encrypted partition exists, but this is unavoidable.)
I will add that the PUREE specification does support subvolumes: different passwords may unlock different regions of the disk. Yet, while the spec is quite simple, this feature isn't implemented yet, nor is the ability for a PUREE disk to be bootable.
My hope is that, despite PUREE currently lacking these features in version 1.0.0, anyone who has been discouraged by the learning curve and installation overhead of the alternatives will find use in PUREE, given its simplicity, clearly strong security properties, and user-friendly interface.
However you should be aware that this property is not widely considered a "good thing" because if you are in a situation where an attacker is willing to physically harm you, then they may not be convinced that you've given them all of the keys (even if you have). The same problem exists for cryptosystems that self-destruct -- how do you convince the attacker that you weren't the cause of the data being destroyed?
You would think it would be the obvious action, but history has taught us people can go a long way in trying to convince others they are right, when they are not.
Why? Files are a lot easier than full-partition encryption. And how else would you hide a volume with anything like plausible deniability?
Also, as jchw mentioned above, the headers are indistinguishable from random data already, and truecrypt supports full volume encryption already. This nesting feature is on top of the "volume looks like random data" feature.
What's the problem?
> To be fair, some tools do support completely-random-looking disk layouts, but in most cases, they either:

> 1. Are key-based (e.g., require a 128-bit or 256-bit key) rather than password-based, in which case the key must be stored elsewhere. (Where do you store the key?)

> 2. Ask the user to store a (non-random-looking) disk-encryption header elsewhere (i.e., “detached header mode”). (Where do you store the header?)
Is there a reason why this is notably different? Why can't the password be hashed to get the fixed length key?
If the header isn't needed on a daily basis, storing it as a QR-Code on paper would be a possibility.
So there are exceptions, and you said “will generally not work”. Damn, you nailed it.
PUREE lets you perform password-based full-disk encryption securely, and in a way that ensures the disk is 100% indistinguishable from random. And as far as I know, this is the first tool to do so.
It also comes with a very simple-to-use command-line tool for encrypting disks. (Currently only Linux is supported.)
After putting much love into this project, I've finally released version 1.0.0. All PUREE designs and software are hereby released as public domain.
All feedback greatly appreciated.
Also see the Wikipedia chart for "Hidden Containers"
How does this work in block devices like this? What errors will occur in Linux when a block is bad? Is that a file system issue on top of it, so e.g. if I format it with Ext4 I can use fsck to "fix" errors?
Admittedly backup headers are not yet implemented. (I had to release an MVP at some point.)
> how [is] data integrity handled?
(Beware though that this is still MVP, and the code doesn't actually write a backup header yet. If enough people are enthusiastic about PUREE, I promise to do so.)
Every part of the header is encrypted and authenticated. If any part of the header suffers a bit-flip, the message-authentication check will fail and the header won't decrypt; PUREE can't tell whether that happened or you simply provided the wrong password. If corruption does occur, you'd need to hint to PUREE to read a backup header; by default, one will be at the final 1MiB of the device. (But again I stress, this is not implemented yet.)
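For the curious, here is a rough Python sketch of locating that backup region. The only layout fact taken from the comment is "final 1MiB of the device"; the header size and function name are hypothetical placeholders, not from the PUREE spec.

```python
import os

MIB = 1024 * 1024
HEADER_SIZE = 4096  # hypothetical size; the real value would come from the spec

def read_backup_header(path):
    """Read a candidate backup header from the final 1MiB of the image."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)   # works for block devices too, unlike getsize()
        size = f.tell()
        f.seek(size - MIB)       # backup region starts 1MiB before the end
        return f.read(HEADER_SIZE)

# demo against a scratch file standing in for a disk
with open("demo.img", "wb") as f:
    f.write(os.urandom(2 * MIB))
print(len(read_backup_header("demo.img")))
```

Seeking to the end and back avoids `os.path.getsize`, which reports 0 for block devices on Linux.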
> How large are the blocks, does this affect other blocks, and how can intact blocks be recovered?
PUREE just handles the header. After that, it hands control to the next stage, which, according to the spec, can really be anything. The implementation, as it stands, has only four possible encryption modes, and each just hands things off to the Linux dm-crypt subsystem. You'd have to read the dm-crypt documentation for those details.
Anything on the volume is almost by definition important.
But a single bit flip and it can be complete garbage.
It would be lovely if someone far more clever than me layered fountain codes in at a tasteful spot.
And many “military/diplomatic” ciphersystems used in the second half of the 20th century have various interesting properties with respect to error correction or non-propagation. My suspicion is that these properties are built by doing MAC-then-FEC-then-Encrypt, and in some cases even by relying on the limited error propagation of CBC mode. If you think about it, for these applications reliability is more interesting than strict authentication: the ciphertext of a diplomatic “cable” or similar is sent through somewhat noisy channels and was often manually handled by fallible human radio operators, while an active MitM was not exactly a feasible attack.
Obligatory XKCD: https://xkcd.com/538/ Obviously it ignores habeas corpus but of course, so have most governments, even the more 'evolved' ones.
For most situations, denial of encryption is not necessary. I can see the point for data drives. But in a laptop/PC scenario, there must be a way to actually boot the thing. If there's a way to boot it, it can be detected by trying to boot it ;) There must be something machine-readable that asks for the password. You can store that part on a separate USB but even the possession of that is an indication that you use encryption.
And to mention the obvious: Who carries around drives with random data on them? Storage devices that are unformatted are normally filled with zeroes. Sure, it could have been wiped with random data, but it is still a very suspicious situation.
It's a nice project for this niche usecase but for encrypting my boot drives I'm perfectly happy with LUKS as it is. In most cases denying the existence of encryption is not needed.
A scheme that could boot when there is a hole drilled into the platter would be even better :-)
The first two bits (or any two bits you want really, it’s just convention to use the first two) on the volume must always be 0 and 1. First write both bits to 0 hard multiple times, then write the second bit to 1 and lightly write it to zero. These two bits will serve as a control for the forensic tool to determine the difference between a hard zero and a soft zero, and thus figure out the rest of your volume.
If an unknowing person simply looks at the drive by regular software means, the computer will simply show a drive of all zeros.
Please provide some evidence.
> You think just zero writing it out once is enough?
I definitely think a single zero write on a disk drive is enough (or at least that it's as good as 3 writes or 100 writes). There's no evidence anyone has ever recovered a file that was overwritten by zeros a single time on any hard disk drive from the last 20 years.
Hmm, it looks like the site is down now. Well, here's a copy of the content:
I might have to run my own Great Zero Challenge v2.
> Why do you think DoD standards call for no less than 3 writes when erasing a drive?
I think you're referring to an old DoD standard. Prior to 2007 the DoD recommended that. Now they don't recommend that, and only recommend degaussing or destruction of the drive.
> Clearly, it is trivial to detect that a disk is encrypted with LUKS. In fact, this isn't just a problem with LUKS; but also with most password-based disk encryption solutions. This isn't surprising: it turns out to be a difficult problem to solve.
Is it though? The reason people add metadata headers is that they provide features, such as allowing different parameters without the user having to know them, allowing autodetection that encryption is in use when you plug in a portable drive, etc. PUREE trades off these features for deniable encryption. For most users, though, deniable encryption is pretty useless, and those other features are very useful. Hence most disk encryption solutions go with non-deniability.
Is it really that common of a practice to leave your entire disk full of random data? I would expect that it's more common for disks to be full of zeros, or non-encrypted leftovers, than random data.
And from what I remember reading long ago, several writes of random data are more secure than several (or one) writes of zeros.
AFAIK for a recent (say, 2010 or later) drive there's little to indicate random/"smart" patterns are better than just a single pass of all zeroes. Even if you consider an adversary with the capability to analyze the drive with magnetic force microscopy.
"Data Reconstruction from a Hard Disk Drive using Magnetic Force Microscopy", 2013 Author(s): Kanekal, Vasu
A quick skim seems to indicate the NSA recommends degaussing and destruction:
And Q&A threads indicate that a single-pass SATA secure erase is generally considered sufficient - it typically does a single pass, but should also write over all bad blocks etc.
It's even worse with SSD drives, where the "hidden reserve" is many times larger.
Wiping is of course better than doing nothing, but neither lives up to the requirements of GDPR.
Where wiping disks is sufficient, for instance "internal reuse", it is always some number of specified writes, followed by writing all zeros, so both the donor and receiver can verify the disk as empty.
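That "verify the disk as empty" step is simple to automate; a minimal sketch (the function name and scratch-file demo are mine, not any vendor's tooling):

```python
def is_all_zeros(path, chunk=1 << 20):
    """Stream the image/device and confirm every byte reads back as zero."""
    with open(path, "rb") as f:
        while block := f.read(chunk):
            if block.count(0) != len(block):
                return False
    return True

# scratch file standing in for the wiped drive
with open("wiped.img", "wb") as f:
    f.write(b"\x00" * (4 << 20))
print(is_all_zeros("wiped.img"))
```

Reading in fixed-size chunks keeps memory flat, so the same check works on a multi-terabyte device.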
Note that wiping a SSD with all-zeros is likely to be optimized out by its embedded controller.
If the adversary does not believe you, their best bet is to keep torturing you until they get their way.
What you want is "Proof of inability to decrypt" so that the adversary has nothing to gain by waterboarding you.
See also: https://papers.freebsd.org/2004/phk-gbde/
For example, suppose you work in the IT department of a law firm, something fishy is going on and you want to be a whistle blower, and you need a way to exfiltrate some evidence. You are getting rid of some obsolete laptops, and standard procedure is to wipe the drives before selling them.
Management has its suspicions about you because you've been asking too many questions. So they get another IT person to check the drives you were supposed to wipe.
If the drives have the data on them with a LUKS header, you've been caught (or at least they think you were too lazy to wipe the drives). If it's indistinguishable from random data, then you've given them no reason to suspect you.
IT-Dude-Syndrome is when IT-people come up with scenarios which hinge on "IT-Dude" being smarter than everybody else in the whole world.
First, if you're suspected of anything in a law firm, you're out until the issue is resolved, you don't get to "destroy old laptops" or anything of that sort.
In a high-security operation you don't even get to touch a laptop, and you will be handed a loaner-phone, while your own phone is turned off, signed, sealed and stored in a safe until the matter is resolved.
Second, if "standard procedure" in that law firm is to "wipe the drives before selling them", that law firm is not following best, or even minimum, security practices for the business they are in, and that is what the whistle-blower should report to the relevant authority.
Third and most significant: is IT-dude smarter than the obviously junior IT-dude called in to "check the drives" IT-dude "was supposed to wipe"?
You know what, he's not.
In such a scenario you can be damn sure that whoever is called in to check is several levels smarter than IT-dude.
And you know what super-IT-Dude will do when she sees the disks full of high-entropy data?
She will point to the final step in the official procedure for wiping disks, which is to write all zeros to the entire media to prove that there is nothing hidden.
She will write in the report that IT-dude did not follow a trivial procedure to the letter, and point out that the most likely, in fact the only credible, explanation for skipping a trivial step in the procedure was that IT-dude was trying to exfiltrate data.
It depends on the IT person really, your company, and your opsec. Does your shell history contain luks commands? Are your logs clean from any trace of mounting? Are dirty blocks on the disk free of that data too? Are your machines centrally managed with logs shipped out? Do your local logs look like they've been only continuously appended to, or are there likely uneven rewrites? Does your system contain encryption software you're not using in normal operation?
If you really want to hide that information, the encryption itself looks like the smallest issue.
I find it naive suggesting that "Proof of inability to decrypt" can lead to a positive outcome in such scenario.
Being able to prove that you cannot possibly decrypt the partition, moves you out of the XKCD 538 scenario.
That is a "much less negative" outcome.
It does not. I think your mistake is modelling an adversary willing and able to torture as an entity that behaves the way you would, which I assume means according to logic, knowledge, willingness/ability to communicate, and respect for human life.
How about the adversary just not buying your proof and torturing you anyway (to death), just in case you are trying to deceive them?
How about the adversary not even giving you the opportunity to explain or show your proof? (imagine getting yelled "open it" because they don't speak much english other than that and get beaten for whatever you do that doesn't look like "opening it")
I'm writing this just in case someone reading will actually at some point need to prepare for such threat. "Proof of inability to decrypt" (as also "Denial of encryption") does not give you a way out of the "XKCD 538 scenario".
If you can't avoid the scenario entirely, there are better bets (e.g., disguise still-encrypted data as plausible, non-sensitive other data).
You mean like any human rights organizations who send people to authoritarian states ?
You mean like any Foreign Ministry sending a courier out in the world ?
You could learn so much about operational security, if you wanted to, just by reading open sources.
Can I recommend you start with a wonderful old article called "A first tour like no other" ?
If your adversary model is a wild-eyed, gun-slinging mid-west racist and you are black, then encryption is not going to be a factor in your death, and speculating about what you can or cannot convince them of is beside the point.
If a sane adversary has captured you and your devices, say border police in some police-state like Belarus, Brazil or USA, it would be silly to assume that you know more about disk-encryption than they do.
Most importantly, you would be very silly to assume that they will not simply lock you up, until you provide access to the device, aka "XKCD538-lite".
There are people who have languished in hell-hole jails for years already, not because they are unwilling to provide access, but because they cannot provide access, yet are unable to make a convincing showing of that.
Competent organizations make sure their travelers can make that showing convincingly, and one of the steps they take, is to make 100% sure there is no unexplained high-entropy data on their devices.
Imagine ending up in a foreign jail for years, just because you once deleted a huge gzip'ed file and the first sectors subsequently got overwritten?
Yeah, that happened: "Now decrypt this other secret partition!"
To clarify, being held captive "by a sane adversary" until you supposedly decrypt allegedly encrypted data is different from falling into the hands of someone willing and able to torture you. Unless you consider torture a "sane" practice. If you cannot understand the difference, maybe you should follow your own recommendation and learn a little more.
E.g. if a company has a policy of wiping external drives and SD cards before each trip and the policy is religiously [or automatically] followed it becomes clear that harassing employees on a trip for having wiped drives is pointless.
Also a lot of organizations that are ready to torture people do that regardless of the quality of information that they can gain.
Is such repetition of the header (on a supposedly random-wiped disk) not suggestive of the existence of a PUREE partition? This could be figured out by scanning the entire disk.
How could we make the protocol handshake invisible? AEAD?
You can do this with plain dm-crypt, which writes no header at all (cipher and key size must be given on the command line; device and mapping name below are placeholders):

  cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 /dev/sdX mydisk