
If anyone's interested in a fun project:

It's apparently pretty common for people to create virtual TrueCrypt volumes and stick them on Dropbox, as a sort of poor hacker's encrypted Dropbox.

Someone should bone up on the "Evil Maid" attack and do a proof-of-concept of it on a Dropbox-backed volume. Call it "Evil Maid In The Cloud", or "The Mallory Poppins Attack".

My intuition is that the Evil Maid attack is much more powerful against Dropbox-backed volumes.




The block layer is a poor place to implement security, whether access control or encryption. Its ignorance of the upper layers, and the resource constraints of the lower layers (stealing blocks for metadata), unnecessarily hamper both security and performance.

The only reason for block-level encryption in software is that it was too much work to retrofit UFS, ext23456fs, FAT, NTFS, etc. with encryption. The block device provided a convenient bottleneck, and no one cared too much about the downsides you bring up because it seemed better than nothing.

ZFS generally gets this right: it uses AES-CCM (an AEAD mode, as you recommend) and integrates properly with the deduplication layer, so encryption doesn't waste space. These reasons and more show why security needs to be implemented at the filesystem layer.
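
For a feel of what that buys you over sector-level XTS, here's a minimal AEAD sketch in Python using AES-CCM via the cryptography package. The key, nonce, and metadata values are illustrative (this is not ZFS's on-disk format); the point is that tampering with the ciphertext or its associated metadata makes decryption fail outright:

    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    key = AESCCM.generate_key(bit_length=256)
    aesccm = AESCCM(key)
    nonce = os.urandom(13)  # CCM nonces are 7-13 bytes

    # Bind the block's contents to its (hypothetical) metadata.
    ct = aesccm.encrypt(nonce, b"block contents", b"block metadata")
    assert aesccm.decrypt(nonce, ct, b"block metadata") == b"block contents"

    try:
        aesccm.decrypt(nonce, ct, b"tampered metadata")
    except InvalidTag:
        print("tampering detected")  # an AEAD mode catches this; XTS would not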

Another good example people here may be familiar with is Tarsnap. Again, it handles encryption, integrity protection, and deduplication without storing keys on the server side or unnecessarily restricting itself to a block device metaphor.

You're right that TC+DB are not a safe combo. Dropbox needs to be the encrypted Dropbox.


Keep in mind that the simplicity of block-layer encryption also makes it easier to implement and audit.

I'm not as confident in the design and implementation of encryption in e.g. ZFS as I am in full disk encryption.

I'm not sure Tarsnap is a great example of doing encryption at a high level either. As I understand it the encryption layer only deals with opaque chunks of data; it doesn't know anything about high-level concepts like files and folders that the Tarsnap application operates on.


"As I understand it the encryption layer only deals with opaque chunks of data"

Correct.

"it doesn't know anything about high-level concepts like files and folders that the Tarsnap application operates on"

Tarsnap doesn't really operate on high-level concepts like files and folders. It flattens everything into a tar archive, then encrypts and signs that. That does however ensure that data and metadata are kept together, in a way that encrypting individual disk sectors does not.


From an end user's perspective Tarsnap certainly deals with files and folders. It outsources most of that work to libarchive, which is kind of my point.

The article says to "Encrypt things at the highest layer you can," but you could imagine a backup utility doing encryption at a higher layer than Tarsnap does, with all the complications that would bring. And would it be even better if, instead of filesystem-level encryption, every application encrypted its own files? I don't think so.

So maybe it should just say "Encrypt things at the highest layer that makes sense", which is kind of vacuous.

IMO it would make just as much sense to say "Encrypt things at the level where it's easiest to do," where block-level encryption is a strong contender. It doesn't give you every property you might want, but it's easy to get right.


I think it would be better if applications could provide their own encryption. Applications know better how to cryptographically protect content. For instance: we use GPGMail for our email at Matasano. The operating system could not do a better job of encrypting email messages than GPGMail does.

I understand your point. "Encrypt at the highest layer you can" is an ideal, and achieving the ideal is not always worth the expense. Some lowest-common-denominator crypto in the OS is valuable. But disk block crypto is a pretty crappy lowest common denominator, as this article is at pains to say. What part of it do you disagree with?


If every application provided its own encryption, most of them would get it wrong, and it still wouldn't protect important metadata. I don't think that's much of an ideal.

I don't have an issue with adding application specific encryption, as long as you also keep doing it at a suitable "lowest-common-denominator" layer.

A more achievable ideal would be for every layer to do strong integrity checking.


I agree that one of the costs of doing everything at the highest possible layer is the potential for mistakes. I don't think every application should invent its own cryptosystem; I think most applications would be well served by NaCl, and I think things like GPGMail show another good path forward.
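
To make that concrete, here's roughly what using NaCl looks like through the PyNaCl bindings (a minimal sketch; the message is illustrative). The application gets randomized, authenticated encryption without having to pick ciphers, modes, or nonce schemes by hand:

    from nacl.secret import SecretBox
    from nacl.utils import random

    key = random(SecretBox.KEY_SIZE)  # 32 random bytes
    box = SecretBox(key)

    # encrypt() generates a random nonce and bundles it with the
    # authenticated ciphertext; decrypt() verifies before returning.
    ct = box.encrypt(b"attack at dawn")
    assert box.decrypt(ct) == b"attack at dawn"

    # Flipping any ciphertext bit makes decrypt() raise CryptoError.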

I agree that it would be good if the lowest common denominator provided integrity checking, but as we can see with XTS, sector-level encryption makes that hard. Hence the article. :)


It's too bad that ZFS encryption is only in Oracle ZFS, not OpenZFS.


What kind of access are you assuming the attacker has here? Modifying the ciphertext stored with Dropbox?

Evil Maid attacks that I've heard described before typically involved modifying (unencrypted and unauthenticated) bootloaders or kernels, rather than FDE ciphertext, although I realize that's not part of the definition of the attack.


Right, wouldn't it be extremely difficult to modify the ciphertext to create a meaningful change in the plaintext, such that checksums do not fail?


Just rolling back changes to previous versions might be enough to cause problems.


Here's one possible approach.

Assume the attacker can modify plaintext on disk indirectly. This might be viable if e.g. the user's browser cache lives in the TC volume. (Not sure what typical TC-in-Dropbox usage patterns are, so this may or may not be realistic.)

Further assume that the attacker can observe ciphertext changes in Dropbox.

XTS is vulnerable to the same byte-at-a-time plaintext recovery attacks as ECB. This is usually irrelevant, because adaptive chosen plaintext attacks are outside the threat model considered for FDE. But in our scenario, the attacker has at least some degree of plaintext control.

The success of this attack will depend a lot on how fine-grained the attacker's control over his insertion point is, i.e. can he reliably write to the same location over and over again? Any number of components might thwart that control, so I'm not sure how easy it will be to make this work.
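
To make the ECB comparison concrete: XTS is deterministic per (sector, block offset), so an attacker who can place a guessed byte at a fixed location can confirm the guess by comparing ciphertext blocks. A sketch in Python with the cryptography package (key and sector number are illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)                 # AES-256-XTS takes a double-length key
    tweak = (42).to_bytes(16, "little")  # tweak derived from the sector number

    def encrypt_sector(pt: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(pt) + enc.finalize()

    # Same 16-byte block, same sector, same offset -> same ciphertext,
    # no matter what follows. That's the ECB-style leak.
    ct1 = encrypt_sector(b"A" * 15 + b"s" + b"\x00" * 496)
    ct2 = encrypt_sector(b"A" * 15 + b"s" + b"\xff" * 496)
    assert ct1[:16] == ct2[:16]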


I guess the question is whether you could do bad things to TrueCrypt itself this way, or whether it would be easier to rely on user intervention. There's also what could be done by Dropbox itself (potentially via court order) versus what could be done by a MITM; I'm not sure what protections Dropbox has against the latter, e.g. whether it relies on the host's CA certs for authentication or does in-app cert pinning (which would be the obvious, good choice).

Being able to "glitch" someone's TrueCrypt volumes without even compromising the crypto could lead to fun. So could OS/filesystem cruft at the level above, essentially the equivalent of an .htaccess file.


After reading the original article and this whole thread, including some quite useful discussions about the security goals of disk encryption, I'm convinced that this is extremely hard (with XTS) under realistic attack scenarios.

However, I've been reminded that a realistic attacker can do much more to an FDE volume than we would first imagine. So, I'm planning to try to play around with this to improve my intuition of how bad the attacks actually are. Thanks for the challenge.

... huh. Actually, suppose the attacker could somehow cause a very large number of file downloads to happen, with known sequence, timing, and contents. The birthday paradox gives the attacker a high probability of seeing a pair of files from two separate sets (of different kinds and different origins) land on a particular sector at different times. When that happens, the attacker can substitute one for the other.

Suppose a computer is for some reason downloading and storing a lot of "trusted" files and a lot of "untrusted" files, and the attacker can see empirically where each file is stored. When a trusted file gets stored at an offset where an untrusted file was previously stored, the attacker can substitute the contents of the latter for the contents of the former. For the attack to be harmful, the untrusted file just needs to contain something that would do damage if it replaced the corresponding trusted file.
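
A back-of-the-envelope sketch of the birthday argument, under the simplifying (and admittedly unrealistic) assumption that each file lands on one uniformly random sector:

    # Probability that at least one of t trusted files lands on a sector
    # previously occupied by one of u untrusted files, out of s sectors.
    def collision_prob(t: int, u: int, s: int) -> float:
        return 1.0 - (1.0 - u / s) ** t

    # 10,000 untrusted and 10,000 trusted files over a million sectors:
    print(collision_prob(t=10_000, u=10_000, s=1_000_000))  # ~1.0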


> It's apparently pretty common for people to create virtual TrueCrypt volumes and stick them on Dropbox, as a sort of poor hacker's encrypted Dropbox.

People do that? encfs/ecryptfs sounds like a much better fit for that use case: encryption is per file, and you can even disable the filename encryption if you want to still be able to navigate your files through Dropbox's interface. Are there no good similar solutions for Windows/OS X?


encfs uses the same questionable encryption techniques that motivated the creation of XTS in the first place. It's structured almost like full-disk encryption at the file level. I think the same's true of ecryptfs.


EncFS is a FUSE filesystem. There is no reason for it to use sector-level encryption. It's format-aware. It can do fully randomized encryption and provide strong authentication. However, someone involved in EncFS did propose switching to XTS. Which is actually what prompted me to write this article.


Oh. Oh dear. The reason that EncFS were considering switching to XTS is that their last security audit suggested it [1], probably with good reason. Remember that many applications also expect fast sector-level random read and write access to files; add in the potential for power-failure-related data corruption and missing writes, and you can't really do much better than XTS. (At least not whilst relying on a normal filesystem as your backend; ZFS can probably do more.)

[1] https://defuse.ca/audits/encfs.htm


No, this isn't true at all. If you don't have the physical disk geometry problem, you shouldn't be using XTS. In fact, the whole article is about exactly this point.

In particular: you do not need XTS to get "fast random read and write access".

Taylor's a smart guy. I'm pretty sure he's wrong about this (although maybe he knows something applicable about EncFS that I don't know). If I thought that there was no way any smart person could make the mistake of applying XTS somewhere it didn't belong, there'd be no point to writing the article. :)


Would you mind expounding? It sounds very interesting, but I'm having trouble figuring out how an evil maid attack might be employed against Dropbox-based Truecrypt volumes, since people can't boot from those.


You need to generalize the attack past boot, to the core concept of "victim interacts with disk, attacker tampers with disk, victim interacts with disk".


I figured that's what you meant, but I'm having trouble imagining how one would modify a TrueCrypt volume in a way that would give an adversary any advantage: http://www.truecrypt.org/docs/encryption-scheme

It seems like if an attacker wants to compromise (not merely corrupt) a TrueCrypt volume, they'd need to exploit the target machine itself, not just the data being read by it... unless TrueCrypt has an implementation flaw like a buffer overflow, and you're suggesting modifying the volume to take advantage of it?

Storing a TrueCrypt volume in Dropbox would completely invalidate your plausible deniability about whether there's a hidden volume on the drive and whether you have access to it, since an attacker can watch changes in the ciphertext and infer whether you're using a hidden volume. But I thought it was impossible for attackers to generate new ciphertext that decrypts to valid plaintext unless they had the keys, at which point they've already won. So it seems like there's no way to modify the TrueCrypt volume except to corrupt it. Hrm, I'm out of ideas.

Sorry, I just like to learn as much as possible, and what you suggested sounded very interesting. Don't worry about explaining it unless you want to... I imagine my questions are kind of annoying. I'll research it more.


I'm wondering if tptacek is proposing that an attacker who can modify the TrueCrypt image on Dropbox can get either private key disclosure or remote code execution. (Actually, either of these here is likely to lead to the other.)

This seems definitely achievable under some applications of FDE and some assumptions about the state of attacker's knowledge, but I don't know if those assumptions are valid in the particular case of TrueCrypt on Dropbox.


"[private key disclosure or remote code execution] seems definitely achievable under some applications of FDE and some assumptions about the state of attacker's knowledge"

It is? That's very interesting! May I ask, what are some of those applications of FDE / assumptions of the attacker's knowledge? This whole thing is quite fascinating.


The example that I'm thinking of is if the FDE doesn't provide authentication, the attacker can flip bits of plaintext by flipping bits of ciphertext, the disk is used to store software, and the attacker knows which block a particular binary will be stored in. (The last assumption actually seems plausible if an OS installer is sufficiently deterministic. Maybe the OS installers from hosting providers like Linode, AWS, and DigitalOcean are that deterministic, even if some desktop OS installers aren't?)

Then you could flip bits inside the binaries to convert them into backdoored ones. I have an example in a presentation I've been giving where a fencepost error in the OpenSSH server in 2002 (one that allowed a potential privilege escalation) was fixed by changing > to >=, which the compiler compiled as JL instead of JLE, turning a single byte 0x7E into 0x7C. (That is literally the entire effect of the patch!)

If you can find where on the disk that byte is stored and manage to flip that single bit back, you can re-introduce a security hole that the user believes was fixed back in 2002. There are probably thousands of such bit-flips that would produce exploitable bugs, typically by modifying or inverting the sense of conditional branches, but maybe in other ways too. (When cracking the DRM on Apple II games, people used to replace tests or conditional branches with NOPs, and the same trick would work here to change how security policies are enforced.)
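
Here's a toy sketch of that malleability assumption in Python, using AES-CTR as the stand-in unauthenticated mode. (XTS itself would scramble the whole 16-byte block rather than permit a surgical flip.) The byte values mirror the JL/JLE example; the surrounding bytes are illustrative:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)

    def ctr(data: bytes) -> bytes:  # CTR encryption and decryption are identical
        c = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return c.update(data) + c.finalize()

    patched = b"\x90\x90\x7c\x90"   # 0x7C = JL, the post-patch comparison
    ct = bytearray(ctr(patched))
    ct[2] ^= 0x7c ^ 0x7e            # flip a single bit of ciphertext

    # The binary now decrypts to the pre-2002, vulnerable version.
    assert ctr(bytes(ct)) == b"\x90\x90\x7e\x90"  # 0x7E = JLE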

Not all encryption modes will let you do this attack, but I think that relates to tptacek's point originally that encryption software implementers need to be very careful about their selection of cipher modes and understand what kinds of attacks they defend against.

Elsewhere in this thread, sdevlin was thinking about how, in a non-tweaked mode, you can copy ciphertext from one block to another. That means that if you could cause a particular file to exist on the drive, and you can modify the ciphertext, you can cause one file to be overwritten by another (if you know the offsets where both are stored, which you might be able to learn if you know exactly when each was created and could monitor the ciphertext often enough). So for example, if you could send somebody an e-mail attachment that they would save to disk, you could later overwrite some other file (like one of their OS binaries?) with the contents of that attachment, by copying the ciphertext block that represents the attachment over the ciphertext block that represents the other file.

This is one of the main reasons that tweak modes exist, specifically to prevent an attacker from recognizing or copying ciphertext blocks between different locations within the overall ciphertext.
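
A sketch of that copy-and-paste attack in its simplest setting, plain ECB (a sector mode without per-location tweaks behaves the same way across locations; the two 16-byte "files" here are illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    pt = b"trusted binary__" + b"evil attachment_"  # two 16-byte blocks

    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct = bytearray(enc.update(pt) + enc.finalize())

    ct[0:16] = ct[16:32]  # copy the attachment's ciphertext over the binary's

    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    assert dec.update(bytes(ct)) + dec.finalize() == b"evil attachment_" * 2
    # With a per-location tweak, the transplanted block decrypts to garbage.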

(Edit: sdevlin, not rdl.)



