The title tells me I don't want XTS.
Then the post explains that I don't want FDE, for sundry reasons. Then it tells me, sure, go ahead and use FDE, but be aware of the limitations.
At that point it looks like I want XTS, after all.
Then it shows how XTS works and where cryptographers are seeing problems.
And then... well, there is no "then". The recommendations don't deal with FDE at all.
I think the post would be much stronger if it either presented some alternative mode or (I'd be more interested in that) showed other ways to achieve what people usually try to achieve with FDE. And usability is a big one there. BitLocker is huge. FileVault is huge. People can actually use it.
Has anyone ever seriously used XTS mode for anything but FDE? I've only encountered it in Truecrypt and the part of Boneh's Crypto I where he's talking about FDE. But that doesn't mean much, I admit.
1. Try not to rely on FDE at all.
2. For God's sake don't use XTS for anything other than FDE.
There's an encrypted filesystem guy who suggested they were moving to XTS. That's what prompted the post.
Usability is great. I think you should turn FDE on. But turning on FDE buys you a lot less than you think it does. There's a pretty good chance that FDE isn't going to do anything for you if your computer is seized by the FBI, because the key will probably be resident in memory when they do that.
You don't want XTS, in general, because you don't want to turn a simpler problem (safely encrypting your secrets) into a hard problem (simulating hardware encryption).
(I was one of the authors of the cold boot paper at USENIX Security, but I haven't followed the subsequent history of attacks and defenses very consistently.)
It does not protect against an attacker able to manipulate memory contents through, say, DMA or other malicious devices.
I remember the DMA situation from a few years ago being quite bad, and it looks like you or one of your colleagues has posted some good resources on that at
I realize that with PrivateCore you're taking quite a comprehensive approach to this problem and not bothering with more piecemeal counterforensic approaches, but I'm still curious about what counterforensic techniques exist for people who aren't going as far as you are. For example, can we make stock Linux deny memory access on external buses with a software policy, or is this simply not something that can be accomplished from software?
I imagine you have good arguments for why the piecemeal approach is likely to fail in the face of a skilled forensic attacker. But many of the attacks that you described to me when we last talked about this were more along the lines of hardware-assisted Evil Maid attacks in an unmonitored and unsupervised colo, rather than forensic attacks against an unmodified encrypted laptop. I'm curious about how different the threat scenarios are between these two cases now.
Do you happen to have a link where I can read more about those?
Still, I want XTS. :-)
The title is also somewhat link-baity, since I clicked expecting bombshell revelations about XTS. I suggest the mods change it to "You don't want XTS (except for full disk encryption)."
I didn't write the article for the front page of HN. I wrote it so that the next time someone says "we're going to switch from CBC to something more advanced like XTS", I can point them at the article instead of writing a long comment.
But it's also not saying that you shouldn't use XTS for full disk encryption. In fact it seems to say it's probably OK for full disk encryption: "It’s certainly better than ECB, CBC, and CTR for FDE. For the crappy job we ask it to do, XTS is probably up to the task."
> I didn't write the article for the front page of HN. I wrote it so that the next time someone says "we're going to switch from CBC to something more advanced like XTS", I can point them at the article instead of writing a long comment.
Understood. Unfortunately it's on the front page of HN now and I think it (the HN article) needs a better title considering the audience.
It's apparently pretty common for people to create virtual Truecrypt volumes and stick them on Dropbox, as a sort of poor-hacker's encrypted Dropbox.
Someone should bone up on the "Evil Maid" attack and do a proof-of-concept of it on a Dropbox-backed volume. Call it "Evil Maid In The Cloud", or "The Mallory Poppins Attack".
My intuition is that the Evil Maid attack is much more powerful vs. Dropbox-backed volumes.
The only reason for block-level encryption in software is because it was too much work to retrofit UFS, ext23456fs, FAT, NTFS, etc. with encryption. The block device provided a convenient bottleneck, and no one cared too much about the downsides you bring up because it seemed better than nothing.
ZFS generally gets this right, using AES-CCM (an AEAD mode as you recommend). It also has proper integration with the deduplication layer so encryption doesn't waste space. These reasons and more show why security needs to be implemented at the filesystem layer.
Another good example people here may be familiar with is Tarsnap. Again, it handles encryption, integrity protection, and deduplication without storing keys on the server side or unnecessarily restricting itself to a block device metaphor.
You're right that TC+DB are not a safe combo. Dropbox needs to be the encrypted Dropbox.
I'm not as confident in the design and implementation of encryption in e.g. ZFS as I am in full disk encryption.
I'm not sure Tarsnap is a great example of doing encryption at a high level either. As I understand it the encryption layer only deals with opaque chunks of data; it doesn't know anything about high-level concepts like files and folders that the Tarsnap application operates on.
> it doesn't know anything about high-level concepts like files and folders that the Tarsnap application operates on.
Tarsnap doesn't really operate on high-level concepts like files and folders. It flattens everything into a tar archive, then encrypts and signs that. That does however ensure that data and metadata are kept together, in a way that encrypting individual disk sectors does not.
The article says to "Encrypt things at the highest layer you can", but you could imagine a backup utility doing encryption at a higher layer than Tarsnap does, with all the complications that would bring. And would it be even better if, instead of filesystem-level encryption, every application encrypted its own files? I don't think so.
So maybe it should just say "Encrypt things at the highest layer that makes sense", which is kind of vacuous.
IMO it would make just as much sense to say "Encrypt things at the level where it's easiest to do.", where block level encryption is a strong contender. It doesn't give you every property you might want, but it's easy to get right.
I understand your point. "Encrypt at the highest layer you can" is an ideal, and achieving the ideal is not always worth the expense. Some lowest-common-denominator crypto in the OS is valuable. But disk block crypto is a pretty crappy lowest common denominator, as this article is at pains to say. What part of it do you disagree with?
I don't have an issue with adding application specific encryption, as long as you also keep doing it at a suitable "lowest-common-denominator" layer.
A more achievable ideal would be for every layer to do strong integrity checking.
I agree that it would be good if the lowest common denominator provided integrity checking, but as we can see with XTS, sector-level encryption makes that hard. Hence the article. :)
Evil Maid attacks that I've heard described before typically involved modifying (unencrypted and unauthenticated) bootloaders or kernels, rather than FDE ciphertext, although I realize that's not part of the definition of the attack.
Assume the attacker can modify plaintext on disk indirectly. This might be viable if e.g. the user's browser cache lives in the TC volume. (Not sure what typical TC-in-Dropbox usage patterns are, so this may or may not be realistic.)
Further assume that the attacker can observe ciphertext changes in Dropbox.
XTS is vulnerable to the same byte-at-a-time plaintext recovery attacks as ECB. This is usually irrelevant, because adaptive chosen plaintext attacks are outside the threat model considered for FDE. But in our scenario, the attacker has at least some degree of plaintext control.
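A toy sketch of that byte-at-a-time recovery. A keyed hash stands in for the real block cipher (the attack only needs ECB's determinism, so the simplification doesn't change the structure), and all names here are made up for illustration:

```python
import hmac, hashlib, os

KEY = os.urandom(16)
SECRET = b"TOP-SECRET-DATA!"  # what the attacker wants to recover
BS = 16  # block size

def ecb_like_encrypt(pt: bytes) -> bytes:
    # Toy deterministic per-block "encryption": a keyed hash of each
    # 16-byte block. Not a real cipher, but it shares ECB's defining
    # property: identical plaintext blocks -> identical ciphertext blocks.
    pt = pt + b"\x00" * (-len(pt) % BS)
    return b"".join(
        hmac.new(KEY, pt[i:i + BS], hashlib.sha256).digest()[:BS]
        for i in range(0, len(pt), BS)
    )

def oracle(prefix: bytes) -> bytes:
    # Attacker controls a prefix written just before the secret,
    # e.g. via a browser cache file inside the encrypted volume.
    return ecb_like_encrypt(prefix + SECRET)

# Byte-at-a-time recovery: shift the secret so its next unknown byte
# falls at the end of an attacker-aligned block, then brute-force it.
recovered = b""
while len(recovered) < len(SECRET):
    pad = b"A" * (BS - 1 - (len(recovered) % BS))
    block_idx = len(pad + recovered) // BS
    target = oracle(pad)[block_idx * BS:(block_idx + 1) * BS]
    for b in range(256):
        guess = pad + recovered + bytes([b])
        if oracle(guess)[block_idx * BS:(block_idx + 1) * BS] == target:
            recovered += bytes([b])
            break

print(recovered)  # b'TOP-SECRET-DATA!'
```

Whether the attacker really gets this kind of aligned, repeatable write access through Dropbox is exactly the open question above.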
The success of this attack will depend a lot on how fine-grained the attacker's control over his insertion point is, i.e. can he reliably write to the same location over and over again? Any number of components might thwart that control, so I'm not sure how easy it will be to make this work.
Being able to "glitch" someone's truecrypt volumes without even compromising the crypto could lead to fun. Also, OS/file system cruft on the level above, essentially the equivalent of an .htaccess file, could be fun.
However, I've been reminded that a realistic attacker can do much more to an FDE volume than we would first imagine. So, I'm planning to try to play around with this to improve my intuition of how bad the attacks actually are. Thanks for the challenge.
... huh. Actually, suppose the attacker could somehow cause a very large number of file downloads to happen (with known sequence, timing, and contents). The Birthday Paradox gives the attacker a high probability of being able to see a pair of files from two separate sets of files (of different kinds and different origins) land on a particular sector at different times. When that happens, the attacker can then substitute one for the other.
If we think of a computer for some reason downloading and storing a lot of "trusted" files and a lot of "untrusted" files, and that the attacker can see empirically where each file was stored, then when a trusted file gets stored at an offset where an untrusted file was previously stored, the attacker will be able to substitute the contents of the latter for the contents of the former. For this attack to be harmful, the untrusted file that was stored at that location just needs to contain something that it would be harmful for the attacker to be able to replace the corresponding trusted file with.
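Under a crude uniform-placement model (an assumption; real allocators are anything but uniform), the probability of such an overlap is easy to estimate, and it gets large fast:

```python
def collision_probability(n_untrusted: int, n_trusted: int, n_sectors: int) -> float:
    # Probability that at least one trusted file lands on a sector
    # previously occupied by an untrusted file, assuming placements
    # are uniform and independent (a rough model, not real allocator
    # behavior).
    p_miss_all = (1 - n_untrusted / n_sectors) ** n_trusted
    return 1 - p_miss_all

# Hypothetical numbers: a 1 GiB volume has ~2M 512-byte sectors.
sectors = 1 * 2**30 // 512
print(collision_probability(10_000, 10_000, sectors))  # 1.0 -- effectively certain
```

With ten thousand files on each side, overlap is all but guaranteed under this model; the real question is whether the attacker can observe and exploit the specific overlap that occurs.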
People do that? encfs/ecryptfs sounds like a much better fit for that use case. Encryption is per file and you can even disable the filename encryption if you want to still be able to navigate your files using their interface. Are there no good similar solutions for Windows/OSX?
In particular: you do not need XTS to get "fast random read and write access".
Taylor's a smart guy. I'm pretty sure he's wrong about this (although maybe he knows something applicable about EncFS that I don't know). If I thought that there was no way any smart person could make the mistake of applying XTS somewhere it didn't belong, there'd be no point to writing the article. :)
It seems like if an attacker wants to compromise (not merely corrupt) a truecrypt volume, they'd need to exploit the target machine itself, not just the data being read by it... unless Truecrypt has an implementation flaw like a buffer overflow, and you're suggesting modifying the volume to take advantage of it?
Storing a Truecrypt volume in Dropbox would completely invalidate your plausible deniability about whether there's a hidden volume on the drive / whether you have access to it, since an attacker can watch changes in the ciphertext and infer whether you're using a hidden volume. But I thought it was impossible for attackers to generate new ciphertext that decrypts to valid plaintext unless they had the keys, at which point they've already won. So it seems like there's no way to modify the truecrypt volume except to corrupt it. Hrm, I'm out of ideas.
Sorry, I just like to learn as much as possible, and what you suggested sounded very interesting. Don't worry about explaining it unless you want to... I imagine my questions are kind of annoying. I'll research it more.
This seems definitely achievable under some applications of FDE and some assumptions about the state of attacker's knowledge, but I don't know if those assumptions are valid in the particular case of TrueCrypt on Dropbox.
It is? That's very interesting! May I ask, what are some of those applications of FDE / assumptions of the attacker's knowledge? This whole thing is quite fascinating.
Then you could flip bits inside of the binaries to convert them into backdoored ones. I have an example in a current presentation that I've been giving where a fencepost error in the OpenSSH server in 2002 (that resulted in a potential privilege escalation attack) was fixed by changing > to >=, which the compiler compiled as JL instead of JLE, which turned a single byte 0x7E into 0x7C. (That is literally the entire effect of the patch!)
If you can find where on the disk that byte is stored and manage to flip that single bit back, you can re-introduce a particular security hole that the user believes was fixed back in 2002. There are probably thousands of such bit-flips that would produce exploitable bugs, typically by modifying or inverting the sense of conditional branches, but maybe in other ways too. (In cracking DRM in Apple II games, people used to replace tests or conditional branches with NOP, and that would still work to affect how security policies are enforced.)
Not all encryption modes will let you do this attack, but I think that relates to tptacek's point originally that encryption software implementers need to be very careful about their selection of cipher modes and understand what kinds of attacks they defend against.
Elsewhere in this thread sdevlin was thinking about how in a non-tweak mode you can copy ciphertext from one block to another, which means if you could cause a particular file to exist on the drive, and then you can modify the ciphertext, you can cause one file to be overwritten by another (if you know the offsets where both are stored, which you might be able to learn if you know exactly when both were created and could monitor the ciphertext sufficiently often). So for example, if you could send somebody an e-mail attachment that they would save to disk, you could later overwrite some other file (like one of their OS binaries?) with the contents of that attachment by copying the ciphertext block that represents the attachment over the ciphertext block that represents the other binary.
This is one of the main reasons that tweak modes exist, specifically to prevent an attacker from recognizing or copying ciphertext blocks between different locations within the overall ciphertext.
(Edit: sdevlin, not rdl.)
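The copy-paste point is easy to see with a toy model. Keyed hashes stand in for the real block cipher, and all names are illustrative; the only property being demonstrated is whether ciphertext depends on location:

```python
import hmac, hashlib, os

KEY = os.urandom(16)
BS = 16

def enc_untweaked(block: bytes) -> bytes:
    # Toy position-independent "cipher" (ECB-style): the same block
    # encrypts identically everywhere on the disk.
    return hmac.new(KEY, block, hashlib.sha256).digest()[:BS]

def enc_tweaked(block: bytes, sector: int, index: int) -> bytes:
    # Toy tweaked variant: the sector number and block index are mixed
    # in, so the same plaintext encrypts differently at different
    # locations -- the property XEX/XTS-style tweak modes provide.
    tweak = sector.to_bytes(8, "big") + index.to_bytes(8, "big")
    return hmac.new(KEY, tweak + block, hashlib.sha256).digest()[:BS]

attachment_block = b"evil-payload-16B"

# Untweaked: the ciphertext of the attachment at sector 100 is also a
# valid ciphertext for the attachment at sector 999, so copying it over
# the binary's ciphertext substitutes the attachment's plaintext.
assert enc_untweaked(attachment_block) == enc_untweaked(attachment_block)

# Tweaked: the same plaintext yields unrelated ciphertext at a
# different (sector, index), so cut-and-paste produces garbage.
a = enc_tweaked(attachment_block, sector=100, index=0)
b = enc_tweaked(attachment_block, sector=999, index=0)
print(a != b)  # True
```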
And I think several statements are, well, let's say disputable. OK, XTS has the unnecessary complication of two keys, introduced in the standardization process. But the security-in-storage group (whatever it's called) has apparently seen the light and is working on essentially allowing just one again.
Furthermore, XTS is denounced for having "unclear security goals", and the post leaves the impression that wide-block schemes are so much better. Yet their goals are essentially exactly the same as those of XTS, just on a sector as opposed to a (cipher) block level. And do correct me if I'm wrong, but I believe they are pretty clear: essentially ECB encryption of every cipher block on the disk, with a different key. In more concrete terms, I believe those translate to security under deterministic chosen-plaintext attacks, and non-malleability.
Finally Evil Maid attacks (at least the ones presented so far) have absolutely nothing to do with any cipher mode you're using, and everything to do with the trusted platform problem. Maybe in a Dropbox setting the attacker could benefit from the cut-n-paste abilities, but I'm not sure how realistic/severe those attacks would be (presumably you wouldn't have your system partition on Dropbox). I'm having a hard time imagining them, but then again there exist much smarter people than myself in this world.
Not to be only critical of the article, I do believe that the basic messages are sound: be aware of the limitations of FDE and don't use XTS in the contexts outside of FDE (unless you really know what you're doing, I guess). But the rest of it I could do without.
You scare-quote "unclear security goals", but that's not my objection, that's Phil Rogaway's objection. And his argument (and Ferguson's argument, and Liskov's argument) isn't hard to understand: by adopting disk sectors as the setting for your encryption, you trade transparency (the ability to use encryption on any filesystem) for a whole mess of constraints, which the article you're commenting on lays out in detail. The worst of these constraints is that you lose any real authentication, but the fact that XTS is basically XEX-ECB is another problem.
You've also taken the "two keys" issue out of context. The issue isn't that it unnecessarily uses two keys. The issue is that it's hard to derive clean security proofs for XTS, because of all its complications. The way Rogaway puts it: there are three constructions involved in XTS --- the XTS wide-to-narrow block adapter, the "XEX2" two-key XEX construction for full narrow blocks, and the "XEX3" two-key construction for partial narrow blocks.
I believe you're also wrong about the Evil Maid scenario. Perhaps you're getting hung up on a detail that I wasn't implicating (for instance, booting from the drive). The issue is that in a normal FDE setting, attackers don't get repeated use-tamper-use cycles. But in a cloud-backed FDE setting, they easily do get that, so more sophisticated versions of the same attack scenario are possible. Think "Evil Maid" here in the same sense as Kenny Paterson used the "BEAST" attack scenario to build a plausible attack on RC4.
The problem with XTS is that there are better modes you can use, even for sector-level encryption, if you shake off the constraint that you're working with physical disk geometry. Which, in the real world, a lot of FDE users can in fact do, because they're not working with real disks but rather virtual disks stored on cloud filesystems.
If you don't need to comply with physical disk geometry, it is probably feasible to get block-level encryption with strong authentication guarantees, and with a native wide-block PRP that will get rid of the ECB data leak in XTS.
Finally, don't use XTS for anything but FDE, even if you know what you're doing. That's a bit of a tautological statement for me to make, because if you know XTS, you also know that it's only meant for FDE --- but I've been getting comments from people who have seen XTS used in application-layer crypto. Bad. Ick.
What paper by Rogaway are you referencing there? I'm assuming it's not the original XEX paper, and I'm genuinely interested in reading it, this discussion aside. Anyway, I don't understand the "wide-to-narrow block adapter" part. Assuming that the standard group removes the two key "improvement", the only bit that I can see that remains to be proved, is the ciphertext stealing. I admittedly don't know whether proofs for the whole construction exist. A quick search revealed Liskov's review of the XTS draft which gives a sketch, but not the whole proof.
I'm confident about my Evil Maid statement. The essence of (what's commonly referred to as) Evil Maid really is the fact that you cannot trust your own computer. The multiple visits make the attack significantly easier, but are not necessary - you just need a more advanced malware.
As for repeated tampering, I'm still having a hard time imagining a realistic attack. I just don't see how the attacker can get any predictable amount of control of the FDE's inputs, but like I said, people can come up with smart things. The "system drive" in my post was referring to things like weakening the system security by overwriting configuration files with garbage and the like, which are theoretically possible although I don't really see them being practical.
Yes of course you can get better security guarantees if you change the constraints. You're probably correct that it should be feasible to offer a 512 byte sector view to the OS on the one side, and use whatever size on the physical side, if your physical side isn't really physical.
P.S. I didn't really think much of the scare-quotes retort, seemed like a bit of a cheap jab. If you however have a better idea (or pointers, for that matter) about how to refer to a specific part of a blog post I disagree with, I'd be interested to hear it.
Efficiency in the sense of "comparing block cipher modes" is not the reason XTS loses integrity and resistance to chosen-ciphertext attacks. Format constraints are the major reason. To wit: there's no good place to stick an auth tag, and for that matter, no good sense of what you'd be authenticating, because you're dealing with fragments of files, not actual files.
Integrity does factor into the security model of XTS. Rogaway points this out when he explains why NIST apparently rejected CBC (the chaining gives attackers a forward bit-flipping capability) and CTR (which was rejected because it's trivially malleable). NIST claimed that some notion of resisting ciphertext tampering was part of the goal, but, as he said, and Ferguson said, and I said repeatedly in this article, nobody really knows what that tamper-resistance is supposed to mean. Attackers can tamper with ciphertext in XTS, and they can probably do it in ways that will create backdoors in binaries!
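CTR's trivial malleability is easy to demonstrate with a toy XOR keystream (not real AES-CTR; the opcode bytes follow the sshd example given earlier in the thread, and the sector number is made up):

```python
import hashlib, os

JLE, JL = 0x7E, 0x7C
assert JLE ^ JL == 0x02  # the 2002 patch is a single bit of the opcode

KEY = os.urandom(16)

def xor_stream(data: bytes, sector: int) -> bytes:
    # Toy CTR-style keystream cipher: trivially malleable, like the
    # CTR mode NIST rejected for FDE. Keystream is keyed per sector.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(KEY + sector.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# A patched sshd fragment: the fixed conditional jump (JL, not JLE).
patched = bytes([0x39, 0xC8, JL, 0x05])     # cmp eax,ecx; jl +5
ct = xor_stream(patched, sector=1234)

# Attacker knows the byte offset of the opcode and flips one bit of
# ciphertext -- without any key -- to reintroduce the 2002 bug.
tampered = bytearray(ct)
tampered[2] ^= 0x02
backdoored = xor_stream(bytes(tampered), sector=1234)
print(hex(backdoored[2]))  # 0x7e -- JLE again, hole reopened
```

XTS doesn't permit this surgical single-bit flip (flipping ciphertext randomizes the whole 16-byte block), which is why the attacks against it run through block substitution instead.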
But lack of integrity checking isn't the only problem with XTS. Another problem is that XTS is for the most part the ECB mode application of XEX. You can't even get a strong definition of confidentiality from this construction, because an attacker observing block offsets into a given sector can collect useful information as those blocks are changed, changed again, changed back, and changed again. This is the attack Ferguson points out in his objection to NIST standardizing XTS, and it's also related to Rogaway's objection that NIST standardized a wide-block-narrow-block construction rather than something that behaved more like a tweakable native wide block construction. The native wide block construction would effectively randomize the whole sector, not 16-byte chunks of sectors. That would also lessen the harm from XTS's malleability, but here we're more concerned with the granularity of the data we leak by encrypting deterministically.
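A toy model of that granularity difference. Keyed hashes stand in for the real narrow-block and wide-block primitives; this only illustrates what an observer of the ciphertext sees, not real XTS:

```python
import hmac, hashlib, os

KEY = os.urandom(16)
BS, SECTOR = 16, 512

def narrow(sector_pt: bytes, sector_no: int) -> bytes:
    # XTS-style: each 16-byte block encrypted independently under a
    # (sector, index) tweak -- a keyed hash stands in for the cipher.
    out = b""
    for i in range(0, SECTOR, BS):
        tweak = sector_no.to_bytes(8, "big") + (i // BS).to_bytes(8, "big")
        out += hmac.new(KEY, tweak + sector_pt[i:i+BS],
                        hashlib.sha256).digest()[:BS]
    return out

def wide(sector_pt: bytes, sector_no: int) -> bytes:
    # Wide-block stand-in: the whole 512-byte sector feeds one keyed
    # hash, so any plaintext change randomizes the entire sector.
    seed = hmac.new(KEY, sector_no.to_bytes(8, "big") + sector_pt,
                    hashlib.sha512).digest()
    out = b""
    while len(out) < SECTOR:
        seed = hashlib.sha512(KEY + seed).digest()
        out += seed
    return out[:SECTOR]

pt1 = os.urandom(SECTOR)
pt2 = bytearray(pt1); pt2[100] ^= 1; pt2 = bytes(pt2)  # one byte changed

def changed_blocks(c1, c2):
    return [i // BS for i in range(0, SECTOR, BS) if c1[i:i+BS] != c2[i:i+BS]]

print(changed_blocks(narrow(pt1, 7), narrow(pt2, 7)))       # [6] -- leaks which block changed
print(len(changed_blocks(wide(pt1, 7), wide(pt2, 7))))      # 32 -- whole sector randomized
```

An attacker watching narrow-block ciphertext learns exactly which 16-byte chunk changed, and when it changes back; against the wide-block construction every write looks like a fresh random sector.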
If you want to call the attack I'm talking about something other than "Evil Maid", that's fine by me. I'd also accept "Mary Poppins is a governess and not a maid" as a valid objection. The point is that virtual disks stored on Dropbox are exposed to far more interesting attacks than physical disk media is, because with physical media, attackers don't get an unbounded number of use-tamper-use cycles.
I believe I do understand the constraints under which XTS operates, and I agree with almost everything you said in the last comment. The only thing I don't agree with is that nobody knows what tamper resistance is supposed to mean - it's non-malleability on the level of a cipher block. Yes, this is clearly worse than sector-level non-malleability, but it is also clearly /much/, /much/ better than CBC-style bit-level malleability. But I'll have a look at Rogaway's paper, maybe there's something I'm missing.
I'm still having a hard time conjuring anything resembling a practical attack coming from this. To plant backdoors, you'd have to know the exact location of a binary (feasible and done before, if with pretty strong assumptions), then get the user to change those exact same sectors to something that would be useful to you, and then copy-paste the ciphertext. I just don't see that happening in a real world scenario. Otherwise, taking advantage of the weaker confidentiality notion (deterministic CPA)... still seems quite far fetched, but I accept that I could be proven wrong.
At any rate, I still maintain that this has nothing to do with Evil Maid attacks. Use a CCA2 secure cipher and encrypt the whole disk at once if you want, the Maid still wins.
For that matter, you can trigger vulnerabilities in the kernel or even application code, by surreptitiously randomizing offsets into metadata that they depend on.
What's clear to a cryptographer is that no scheme that provides serious non-malleability should have this property, but XTS does. What's maddening, in a theoretical sense, is that you can't really put this into formal terms, because --- as I've been saying --- nobody has provided a clear definition of what security guarantees XTS is supposed to provide.
No amount of use-tamper-use breaks a cipher that resists CCA2 attacks, because every attempt to tamper produces a hard failure. That's what CCA2-resistance means: attackers don't get to adaptively choose ciphertext (or really, choose any ciphertext at all). In practice, cryptosystems resist these attacks by authenticating ciphertext.
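A minimal encrypt-then-MAC sketch of that hard-failure behavior, using only stdlib primitives (a toy keystream, not a vetted AEAD; function names are made up):

```python
import hmac, hashlib, os

ENC_KEY, MAC_KEY = os.urandom(16), os.urandom(16)

def keystream(n: int, nonce: bytes) -> bytes:
    # Toy keystream -- a stand-in for a real stream/CTR cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(ENC_KEY + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(pt: bytes) -> bytes:
    # Encrypt-then-MAC: the tag covers nonce + ciphertext, so any
    # tampering is detected before decryption. This is how real
    # systems get the hard failures that CCA2 security formalizes.
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(pt, keystream(len(pt), nonce)))
    tag = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(box: bytes) -> bytes:
    nonce, ct, tag = box[:16], box[16:-32], box[-32:]
    expect = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("tampered ciphertext")  # hard failure, no plaintext
    return bytes(a ^ b for a, b in zip(ct, keystream(len(ct), nonce)))

box = seal(b"secret sector contents")
assert open_(box) == b"secret sector contents"

tampered = bytearray(box); tampered[20] ^= 1  # flip one ciphertext bit
try:
    open_(bytes(tampered))
except ValueError:
    print("tamper detected")  # every use-tamper-use attempt fails hard
```

The sector-format problem discussed below is precisely that FDE has nowhere to put that 32-byte tag.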
For what it's worth, the article we're commenting on is clear that CBC is inferior to XTS for disk encryption. I take no responsibility for readers who read random selections of the article and come to their own conclusions, although if you look elsewhere on this thread that has clearly happened at least once.
It also doesn't really seem to rely on multiple access to the ciphertext (you're not gonna brute force 16 bytes). And as I mentioned, you are not that likely to have your system partition on Dropbox, and I'd wager the contents and offsets of your user partition are not that easy to guess.
For a definition of non-malleability applicable to XTS, see "Security Notions for Disk Encryption".
Lastly, you're missing my point about CCA2 encryption - it does not solve the trusted platform problem, and an encryption system based on it is thus as vulnerable to Evil Maid attacks as your poor old XTS. Unless you want to exclude installation of malware from the definition of an Evil Maid attack, but then your usage of the term conflicts with just about everybody else's.
Regarding "multiple access to the ciphertext" --- you will indeed brute force 16 bytes, but you won't be trying to find a 128 bit needle in the haystack; you'll be looking for 128 bit blocks that happen to have a mix of innocuous bytes --- of which there will be many possibilities --- coupled with the opcode you want, and you'll have the bytes trailing the previous block and the bytes leading the subsequent block to play with as well to narrow your search. The attack will look a little bit like the byte-at-a-time attack on ECB.
I think that attack is significantly more plausible than you think it is. Apparently, so did Ferguson, since he brought it up ("Code modification attacks") in his NIST objection.
I'm still skeptical about code modification. The access the attacker gets is not really an oracle as in your byte-at-a-time scenario (I take it you're referring to the one from your crypto challenges), they can only observe the decryption output in a very indirect fashion. The number of potentially useful "needle" configurations is also hard to tell, you'd need to get alignment right and produce valid opcodes throughout the block etc. And also how many tries would the attacker get in reality? The executable would need to be reloaded for each one of his tries.
But more fundamentally, I find the whole scenario unlikely. To get there in the first place would require the user to sync their system partition with Dropbox. I think it's fair to say that will almost never happen. And if we're talking about a physical repeated-tamper attacker, well the Evil Maid (the original (TM)) is a much easier option for him.
Again: we are talking about properties that cryptographically sound systems simply don't have. And if you're using Dropbox as your backing store, and not a physical drive, there is no reason to accept those properties!
The reason to accept sub-optimal properties in this case is convenience. Truecrypt is popular and has a nice interface. Using it with Dropbox gives much better security than not using it with Dropbox. It's not perfect, but I still don't think we have any practical attacks against such a scenario.
I agree that having stronger crypto there is however possible and desirable.
With SSDs, seek time shouldn't be an issue. You can argue that if an attacker can modify your disk, they can probably run code to steal your password, but this does not apply to Dropbox-stored disk images, and AFAIK you can try to secure a PC against modified code seeing the keys using TXT.
* If you're working with 512 byte sectors, the cost of individual MAC tags gets high.
* No matter what your sector size, reserving space for the MAC tag gets you odd-sized sectors.
* Most importantly, what does it mean to have a sector with a bad MAC? How does that get reported back to the filesystem? Filesystems aren't designed to cooperate with the disk to track cryptographic attacks.
By the time you solve that last problem, you've already gotten yourself pretty far down the track of just providing crypto in the filesystem layer. Which, because the filesystem layer is format-aware, you might as well just keep going and make a proper cryptographic filesystem.
Reserving the space is the real killer - either you end up with non-512-byte sectors (and what is more problematic, sectors that don't divide the system memory page size) or you stash the MACs elsewhere in which case every write to a sector also requires a read-modify-write of the sector that holds the MAC.
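The arithmetic behind that, assuming 16-byte tags on 512-byte sectors (the numbers are the conventional ones, not from any particular product):

```python
SECTOR = 512
TAG = 16

# Space overhead of one tag per sector.
overhead = TAG / SECTOR
print(f"space overhead: {overhead:.1%}")  # 3.1% of the disk

# If tags are packed into their own sectors, one tag sector covers 32
# data sectors -- so every data-sector write also needs a
# read-modify-write of the shared tag sector.
tags_per_sector = SECTOR // TAG
print(f"one tag sector per {tags_per_sector} data sectors")

# If the tag is stored inline instead, the payload shrinks to 496
# bytes, and 4 KiB memory pages no longer map to a whole number of
# payload units.
print(4096 % (SECTOR - TAG))  # 128 -- pages misalign
```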
It's clearly not impossible to get the level of cooperation between the fileystem and the encrypted device that you need, but that's effort, and it's effort perhaps better spent on simply encrypting at the filesystem layer.
I think you're right though that the alignment issue is dispositive.
No serious filesystem will return a zeroed out chunk. Apart from that, it obviously depends on where the read error hit - it might cause just one block in the file to return EIO from read(); it might cause a large part of a file to return EIO from read(); it might make the entire file inaccessible (EIO from open()); it might make entire parts of the directory tree inaccessible (EIO from open() or readdir()) or it might make the entire filesystem fail to mount.
Files that are mmap()ed rather than read() will SIGBUS the process on an IO error - this includes executables and libraries.
I think I see what you're driving at, though - application code isn't designed with the idea that EIO might be maliciously-induced, so you could do things like induce a database to rollback to an earlier point in time by corrupting its journal, or make a badly-designed firewall fail-open when its configuration files fail to load?
My problem with just encrypting at the filesystem level is that it doesn't address files like databases that are large and frequently partially updated, i.e. they act like filesystems in their own right. edit: oh, and as mentioned elsewhere, handing untrusted data to filesystem drivers is a pretty big, avoidable risk.
The LRW paper has some starting points for tweakable ciphers with random access properties. Note that if you're doing filesystem encryption, you have a place you can put a nonce to do randomization. For that matter, you could divide files up into 4k pages and use OCB for the pages.
I don't think I understand your point about handing filesystem drivers untrusted data.
Also, if XTS is used in a length-preserving fashion, it can encrypt standard block devices without significant changes.
However, length-preserving means that it's either not going to be authenticated or you need to keep authentication material elsewhere. In the latter case, I'd rather use GCM.
Even better would be a "native" wide-block tweakable cipher, one that created a strong PRP out of the entire sector, rather than chunking the sector into narrow blocks and then ECB'ing those narrow blocks. However you did that, it would also probably involve invocations of the AES block transform, and thus benefit from AES-NI.
But you're right: XTS has a convenience advantage for encryption in the standard hardware block device setting. What's sad about that is that simulated hardware block encryption is actually not a particularly strong protection for users.
But between your OS and the hard drive in your laptop, use whatever full disk encryption software is best supported and most secure. He's not criticizing FDE in general; he's saying "don't constrain yourself to the block device model if you're designing database encryption etc etc."
Would you happen to know if there is anything equivalent planned/done for btrfs (or ext4)?
First intuition is to just LUKS both disks. OK, that might work, and then I see the horrible performance degradation from FDE'ing an SSD.
Second intuition is SED, and I got a Crucial m550 to hopefully satisfy that, but then I find out there is no documentation on whether the ATA password is stored in the firmware. If it is, I'm just wasting my time, and I kind of just have to hope Crucial does the right thing and doesn't store the AES key anywhere. I also have to hope the marketing "hardware encryption" is true like on my 840 Pro, where I don't see any performance loss.
And even userspace-level encryption of config files that use plaintext passwords is terrible (and let's be honest, way too many different programs hide credentials in plaintext somewhere for me to find all of them easily with a full desktop - off the top of my head, NetworkManager, KDE-PIM, Telepathy, Firefox, and Steam all have their own independent unrelated credential stores).
In general I would just want to encrypt all of base ~, /var, and /etc, since that is where personal data can end up (and maybe /opt, because random stuff ends up there) - but then I'm still losing most of the reason of having an SSD, especially one with a hardware AES accelerator that would go unused.
And don't get me started on the mechanical drive, which I'm going to have to part bin when I get the thing and see if it has working hardware encryption. At least on that it isn't too bad to use LUKS, because then the overhead isn't as bad - but having overhead at all kind of sucks.
It's simple. I like simple maths and code; it's less to screw up and less for implementations to screw up. For example, I don't trust EC or GCM, even if some people think they're the new hotness, because complexity creates more opportunities for obfuscation and puts the code further out of reach of the already few eyeballs actually (or not) looking at it.
Maybe 'cperciva can explain why
cipherblockdata = blockcipher(key, nonce . block #) ^ plainblockdata
plainblockdata = blockcipher(key, nonce . block #) ^ cipherblockdata
Edit: fixed my maths:
Think about a file that you preallocate with NULLs. If you get an image of the disk before you write to the file and then an image once you write to the file, you can simply XOR the before and after to get the ciphertext.
using block 100
cipherblock_before = cryptofunc(100) ^ 0x00 = cryptofunc(100)
cipherblock_after = cryptofunc(100) ^ data
cipherblock_after ^ cipherblock_before = data
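The attack above can be checked in a few lines. SHA-256 over (key, nonce, block number) stands in for the hypothetical cryptofunc, since the weakness is independent of which PRF or block cipher is used:

```python
import hashlib

def cryptofunc(key, nonce, block_no):
    # Stand-in for blockcipher(key, nonce . block #); any deterministic PRF shows the flaw.
    return hashlib.sha256(key + nonce + block_no.to_bytes(8, "big")).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"secret key", b"fixed nonce"
data = b"attack at dawn! attack at dawn! "   # 32 bytes, matches digest size

ks = cryptofunc(key, nonce, 100)
cipherblock_before = xor(ks, b"\x00" * 32)   # image taken while file is preallocated NULLs
cipherblock_after = xor(ks, data)            # image taken after the real write

recovered = xor(cipherblock_after, cipherblock_before)
assert recovered == data   # plaintext falls out, no key required
```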
Further, every solution is going to have other machinery solving specific concerns.
You don't call XTS something else because you've used scrypt or PBKDF2 as the PBKDF.
Work is work.
Propose a scheme in which you use AES-CTR to encrypt a 100 megabyte disk of 512-byte sectors, and in which the scheme "rekeys" every "few sectors". Be specific.
"It's unreasonable to debate with an unreasonable person."
Suppose OTP constructions are defined as
e(i) == E(...) ^ m(i)
m(i) == D(...) ^ e(i)
where E(...) = D(...)
and where ... doesn't contain any of the following
e(j) for any j
m(k) for any k
j and k in same domain as i
Then, take a look at CTR...
CTR is E(i) = blockcipher(key, nonce . i)
and D(i) = E(i)
e(i) == blockcipher(key, nonce . i) ^ m(i)
m(i) == blockcipher(key, nonce . i) ^ e(i)
(i == counter, since it's the same in this example where counter and blocks start at the same number)
Therefore CTR is an OTP.
Indeed, even if blockcipher=AES256, the attacker can still break CTR by merely guessing key in 2²⁵⁶ operations. (Likely only one such value of key will yield meaningful plaintext throughout the entire multi-block message.) That is contrary to the information-theoretic security property of OTP, where the attacker can't tell whether they've correctly guessed the key.
More to tptacek's point, if you're using the block offset as i, then if you write the same block 30 times, you used the same value blockcipher(key, nonce . i) each time. That isn't a one-time use of that part of the pad, it's a 30-time use of that part of the pad. It's extremely possible that an attacker who has observed all 30 ciphertexts can actually decrypt many of them in combination. In Boneh's Coursera class, we did it successfully with like 4 or 5 ciphertexts, and I've seen a paper that describes doing it automatically for the majority of the text with only two ciphertexts, assuming the plaintext is English written in ASCII.
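The many-time-pad failure is easy to demonstrate: with the same (key, nonce, sector) keystream, the XOR of any two ciphertexts equals the XOR of the two plaintexts, and the key never enters into it. A hedged sketch, again with SHA-256 as a stand-in for the block cipher:

```python
import hashlib

def pad(key, nonce, sector):
    # Deterministic per-sector keystream: reused on every rewrite of the sector.
    return hashlib.sha256(key + nonce + sector.to_bytes(8, "big")).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce, sector = b"key", b"nonce", 7
m1 = b"pay alice 100 dollars on friday!"   # first write of the sector
m2 = b"pay bob a 999999 dollar bonus!!!"   # later rewrite, same pad

c1 = xor(pad(key, nonce, sector), m1)
c2 = xor(pad(key, nonce, sector), m2)

# An attacker with both disk images learns m1 XOR m2 -- no key required.
assert xor(c1, c2) == xor(m1, m2)
```

From m1 XOR m2, standard crib-dragging against likely plaintext (English, ASCII, file headers) recovers large parts of both messages; with 30 rewrites it only gets easier.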
You have a system of keys derived from a master key. Too many bytes encrypted with one key? Use a new key for subsequent writes.
(And for god's sake use a PBKDF to derive a master key from a password, don't memcpy() it directly.)
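A minimal example of doing that with Python's stdlib PBKDF2; the salt and iteration count are illustrative, not a recommendation, and hashlib.scrypt is also available if you prefer a memory-hard KDF:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)      # stored alongside the ciphertext; not secret, just unique
iterations = 200_000       # illustrative; tune to your hardware and current guidance

master_key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

# Same password + same salt -> same key; the salt defeats precomputed tables.
again = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
assert master_key == again
assert len(master_key) == 32
```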
XTS is only useful for FDE, everything else should look for simpler constructions.
Maybe you need to read:
Would really appreciate it if you knew what you were talking about and provided evidence before saying "it's wrong" or "it's bad advice."
Later: FWIW, it looks like the parent comment was edited after I wrote this.
Again, you're making accusations, shifting the conversation without providing evidence. Talking with you is pointless.
Second, KDF1 would be even slower.
Third, you still haven't explained how KDF1 or HMAC takes you from the master key, derived from the user's password, to several million per-sector keys. What's the relationship between the keys?
Fourth, deriving keys from other keys is potentially dangerous; it's something you avoid doing if you can.
Fifth, as the article mentions, one of the reasons nothing in the universe does this is that running the key schedule millions of times is itself pointlessly time-consuming.
Sixth, you're running CTR mode deterministically. As the article points out, you can't do that: every time you alter a sector, you'll be encrypting data under the same keystream. Higher-level code gets to use CTR because it can arrange to randomize it, but in sector-level crypto you don't get to store a nonce.
What's frustrating to me here is that you basically just made a bunch of stuff up, and then feigned offense that I wouldn't have taken this nonsensical scheme seriously. But that's what it is: nonsensical. Nobody generates individual keys per sector. The article covers this: you'd like to do that, but it's too difficult.
Hence tweakable ciphers.
Lodge objections with Liskov, Rivest, Wagner, and Rogaway, not me.
I'm not sure that everybody has developed the intuition that this is horrifically dangerous. Maybe point them to something like
Hair-splitting, really. Actual OTP is an imaginary construction that requires an endless supply of truly random bits that have to be securely stored or somehow recreated during decryption. CTR shifts the hard part to the block cipher, and just XORs the result with the pt or ct block.
It's a way to take any block cipher and turn it into a stream cipher with the power of XOR.
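That conversion is only a few lines. Here SHA-256 stands in for the block cipher (the stdlib has no AES), but the structure is exactly CTR's - encrypt an incrementing counter under the key and XOR:

```python
import hashlib

def ctr_stream(key, nonce, length):
    # Concatenate E(key, nonce . counter) blocks until we have enough keystream.
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def ctr_xor(key, nonce, data):
    # Encryption and decryption are the same operation: XOR with the keystream.
    return bytes(a ^ b for a, b in zip(data, ctr_stream(key, nonce, len(data))))

msg = b"any length of plaintext works here"
ct = ctr_xor(b"key", b"unique nonce", msg)
pt = ctr_xor(b"key", b"unique nonce", ct)   # D == E
assert pt == msg
```

The whole construction stands or falls on never reusing a (key, nonce) pair - which is precisely what a deterministic sector-level scheme can't guarantee.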
(I'm only going to ask this nicely once: cease and desist stalking and harassment.)
Secondly, CTR has serious issues too. It is trivial to bit-fiddle. The naive implementation you're suggesting leaks the keystream in one CCA query.
Just because CTR in and of itself is easy to get right doesn't mean that any system composed using CTR is easy to get right.
That's beyond the scope of which mode, but it's important. However the less code one has, the fewer places there are for things to hide.