
You Don't Want XTS - pytrin
http://sockpuppet.org/blog/2014/04/30/you-dont-want-xts/
======
Tomte
I don't get the title.

The title tells me I don't want XTS.

Then the post explains that I don't want FDE, for sundry reasons. Then it
tells me, sure, go ahead and use FDE, but be aware of the limitations.

At that point it looks like I want XTS, after all.

Then it shows how XTS works and where cryptographers are seeing problems.

And then... well, there is no "then". The recommendations don't deal with FDE
at all.

I think the post would be much stronger if it either had some alternative
mode to present or (I'd be more interested in that) it showed other ways to
achieve what people usually try to achieve with FDE. And usability is a big
one there. Bitlocker is huge. FileVault is huge. People can actually use it.

An aside: Has anyone ever seriously used XTS mode for anything but FDE? I've
only encountered it in Truecrypt and the part of Boneh's Crypto I where he's
talking about FDE. But that doesn't mean much, I admit.

~~~
tptacek
The post ends with the takeaways I'm hoping for:

1. Try not to rely on FDE at all.

2. For God's sake don't use XTS for anything other than FDE.

There's an encrypted filesystem guy that suggested they were moving to XTS.
That's what prompted the post.

Usability is great. I think you should turn FDE on. But turning on FDE buys
you a lot less than you think it does. There's a pretty good chance that FDE
isn't going to do anything for you if your computer is seized by the FBI,
because the key will probably be resident in memory when they do that.

You don't want XTS, in general, because you don't want to turn a simpler
problem (safely encrypting your secrets) into a hard problem (simulating
hardware encryption).

~~~
schoen
Do you know if there are any effective forensic attacks in use against the
TRESOR design?

(I was one of the authors of the cold boot paper at USENIX Security, but I
haven't followed the subsequent history of attacks and defenses very
consistently.)

~~~
sweis
TRESOR is only effective against passive forensics and only protects key
material.

It does not protect against an attacker able to manipulate memory contents
through, say, DMA or other malicious devices.

~~~
schoen
Hi Steve,

I remember the DMA situation from a few years ago being quite bad, and it
looks like you or one of your colleagues has posted some good resources on
that at

http://privatecore.com/resources-overview/physical-memory-attacks/

I realize that with PrivateCore you're taking quite a comprehensive approach
to this problem and not bothering with more piecemeal counterforensic
approaches, but I'm still curious about what counterforensic techniques exist
for people who aren't going as far as you are. For example, can we make stock
Linux deny memory access on external buses with a software policy, or is this
simply not something that can be accomplished from software?

I imagine you have good arguments for why the piecemeal approach is likely to
fail in the face of a skilled forensic attacker. But many of the attacks that
you described to me when we last talked about this were more along the lines
of hardware-assisted Evil Maid attacks in an unmonitored and unsupervised
colo, rather than forensic attacks against an unmodified encrypted laptop. I'm
curious about how different the threat scenarios are between these two cases
now.

~~~
sweis
Enabling IOMMU is one relatively easy software mitigation to DMA attacks.
There are some gaps, but it stops the trivial cases where a device can access
all of memory.

~~~
JoachimSchipper
> [IOMMU:] There are some gaps

Do you happen to have a link where I can read more about those?

------
tptacek
If anyone's interested in a fun project:

It's apparently pretty common for people to create virtual Truecrypt volumes
and stick them on Dropbox, as a sort of poor-hacker's encrypted Dropbox.

Someone should bone up on the "Evil Maid" attack and do a proof-of-concept of
it on a Dropbox-backed volume. Call it "Evil Maid In The Cloud", or "The
Mallory Poppins Attack".

My intuition is that the Evil Maid attack is much more powerful vs. Dropbox-
backed volumes.

~~~
NateLawson
The block layer is a poor location to support security, whether access control
or encryption. The lack of knowledge of upper layers & the resource
constraints of the lower layers (block stealing for metadata) unnecessarily
hamper its security and performance.

The only reason for block-level encryption in software is because it was too
much work to retrofit UFS, ext23456fs, FAT, NTFS, etc. with encryption. The
block device provided a convenient bottleneck, and no one cared too much about
the downsides you bring up because it seemed better than nothing.

ZFS generally gets this right, using AES-CCM (an AEAD mode as you recommend).
It also has proper integration with the deduplication layer so encryption
doesn't waste space. These reasons and more show why security needs to be
implemented at the filesystem layer.

Another good example people here may be familiar with is Tarsnap. Again, it
handles encryption, integrity protection, and deduplication without storing
keys on the server side or unnecessarily restricting itself to a block device
metaphor.

You're right that TC+DB are not a safe combo. Dropbox needs to be the
encrypted Dropbox.

~~~
tveita
Keep in mind that the simpler implementation of block layer encryption also
makes it easier to implement and audit.

I'm not as confident in the design and implementation of encryption in e.g.
ZFS as I am in full disk encryption.

I'm not sure Tarsnap is a great example of doing encryption at a high level
either. As I understand it the encryption layer only deals with opaque chunks
of data; it doesn't know anything about high-level concepts like files and
folders that the Tarsnap application operates on.

~~~
cperciva
_As I understand it the encryption layer only deals with opaque chunks of
data_

Correct.

 _it doesn't know anything about high-level concepts like files and folders
that the Tarsnap application operates on._

Tarsnap doesn't really operate on high-level concepts like files and folders.
It flattens everything into a tar archive, then encrypts and signs that. That
does however ensure that data and metadata are kept together, in a way that
encrypting individual disk sectors does not.
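
A toy sketch of that flatten-then-seal shape (my invention, not Tarsnap's
actual format or key handling; a hash-counter keystream stands in for a real
cipher):

```python
import hashlib, hmac, io, os, tarfile, tempfile

def archive_then_seal(paths, enc_key, mac_key):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for p in paths:
            tar.add(p)  # tar records contents AND metadata (names, modes, mtimes)
    blob = buf.getvalue()
    nonce = os.urandom(16)
    ks, ctr = b"", 0
    while len(ks) < len(blob):  # hash-counter keystream as a cipher stand-in
        ks += hashlib.sha256(enc_key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    ct = bytes(a ^ b for a, b in zip(blob, ks))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # MAC the whole stream
    return nonce + ct + tag

# demo with one throwaway file
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "notes.txt")
with open(path, "w") as f:
    f.write("hello")
sealed = archive_then_seal([path], b"e" * 32, b"m" * 32)
nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
ok = hmac.compare_digest(tag, hmac.new(b"m" * 32, nonce + ct, hashlib.sha256).digest())
print(ok)  # True
```

The point being: the MAC covers one stream that already contains both file
contents and metadata, which per-sector encryption can't give you.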

~~~
tveita
From an end user's perspective Tarsnap certainly deals with files and folders.
It outsources most of that work to libarchive, which is kind of my point.

The article says to "Encrypt things at the highest layer you can.", but you
could imagine a backup utility doing encryption at a higher layer than Tarsnap
does, with all the complications that would bring. And would it be even better
if instead of file system level encryption, every application encrypted its
own files? I don't think so.

So maybe it should just say "Encrypt things at the highest layer that makes
sense", which is kind of vacuous.

IMO it would make just as much sense to say "Encrypt things at the level where
it's easiest to do.", where block level encryption is a strong contender. It
doesn't give you every property you might want, but it's easy to get right.

~~~
tptacek
I think it would be better if applications could provide their own encryption.
Applications know better how to cryptographically protect content. For
instance: we use GPGMail for our email at Matasano. The operating system could
not do a better job of encrypting email messages than GPGMail does.

I understand your point. "Encrypt at the highest layer you can" is an ideal,
and achieving the ideal is not always worth the expense. Some lowest-common-
denominator crypto in the OS is valuable. But disk block crypto is a pretty
crappy lowest common denominator, as this article is at pains to say. What
part of it do you disagree with?

~~~
tveita
If every application provided their own encryption, most of them would get it
wrong, and it still wouldn't protect important metadata. I don't think that's
much of an ideal.

I don't have an issue with adding application specific encryption, as long as
you also keep doing it at a suitable "lowest-common-denominator" layer.

A more achievable ideal would be for every layer to do strong integrity
checking.

~~~
tptacek
I agree that one of the costs for doing everything at the highest possible
layer is the potential for mistakes. I don't think every application should
_invent_ their own cryptosystem; I think most applications would be well-
served with Nacl, and I think things like GPGMail show another good path
forward.

I agree that it would be good if the lowest common denominator provided
integrity checking, but as we can see with XTS, sector-level encryption makes
that hard. Hence the article. :)

------
oggy
I don't want XTS? Yes I do, for the purpose it was designed - block device
level encryption. The title is misleading, and potentially dangerously so - I
could see somebody switching to some CBC-like mode in TrueCrypt "coz tptacek
said so" (even if that's not what he said).

And I think several statements are, well, let's say disputable. OK, XTS has
the unnecessary complication of two keys, introduced in the standardization
process. But the security in storage group (whatever it's called) has
apparently seen the light is working on essentially allowing just one again.

Furthermore, XTS is denounced for having "unclear security goals", and the
post leaves the impression that wide-block schemes are so much better. Yet
their goals are essentially exactly the same as those of XTS, just on a sector
as opposed to a (cipher) block level. And do correct me if I'm wrong, but I
believe they are pretty clear: essentially ECB encryption of every cipher
block on the disk, with a different key. In more concrete terms, I believe
those translate to security under deterministic chosen-plaintext attacks, and
non-malleability.

Finally Evil Maid attacks (at least the ones presented so far) have absolutely
nothing to do with any cipher mode you're using, and everything to do with the
trusted platform problem. Maybe in a Dropbox setting the attacker could
benefit from the cut-n-paste abilities, but I'm not sure how realistic/severe
those attacks would be (presumably you wouldn't have your system partition on
Dropbox). I'm having a hard time imagining them, but then again there exist
much smarter people than myself in this world.

Not to be only critical of the article, I do believe that the basic messages
are sound: be aware of the limitations of FDE and don't use XTS in the
contexts outside of FDE (unless you really know what you're doing, I guess).
But the rest of it I could do without.

~~~
tptacek
Couple responses:

You scare-quote "unclear security goals", but that's not my objection, that's
Phil Rogaway's objection. And his argument (and Ferguson's argument, and
Liskov's argument) isn't hard to understand: by adopting disk sectors as the
setting for your encryption, you trade transparency (the ability to use
encryption on any filesystem) for a whole mess of constraints, which the
article you're commenting on lays out in detail. The worst of these
constraints is that you lose any real authentication, but the fact that XTS is
basically XEX-ECB is another problem.

You've also taken the "two keys" issue out of context. The issue isn't that it
unnecessarily uses two keys. The issue is that it's hard to derive clean
security proofs for XTS, because of all its complications. The way Rogaway
puts it: there are three constructions involved in XTS --- the XTS wide-to-
narrow block adapter, the "XEX2" two-key XEX construction for full narrow
blocks, and the "XEX3" two-key construction for partial narrow blocks.

I believe you're also wrong about the Evil Maid scenario. Perhaps you're
getting hung up on a detail that I wasn't implicating (for instance, booting
from the drive). The issue is that in a normal FDE setting, attackers don't
get _repeated_ use-tamper-use cycles. But in a cloud-backed FDE setting, they
easily do get that, so more sophisticated versions of the same attack scenario
are possible. Think "Evil Maid" here in the same sense as Kenny Paterson used
the "BEAST" attack scenario to build a plausible attack on RC4.

The problem with XTS is that there are better modes you can use, _even for
sector-level encryption_ , if you shake off the constraint that you're working
with physical disk geometry. Which, in the real world, a lot of FDE users can
in fact do, because they're not working with real disks but rather virtual
disks stored on cloud filesystems.

If you don't need to comply with physical disk geometry, it is probably
feasible to get block-level encryption _with strong authentication guarantees_,
and with a native wide-block PRP that will get rid of the ECB data leak in
XTS.

Finally, don't use XTS for anything but FDE, even if you know what you're
doing. That's a bit of a tautological statement for me to make, because if you
know XTS, you also know that it's only meant for FDE --- but I've been getting
comments from people who have seen XTS used in application-layer crypto. Bad.
Ick.

~~~
oggy
As for security goals: are they not the ones I stated in my post? No,
ciphertext integrity is not among them, and yes, there are consequences to
this. Yes, it's a trade-off for efficiency. But I really don't see how all of
that relates to the issue of missing security goals.

What paper by Rogaway are you referencing there? I'm assuming it's not the
original XEX paper, and I'm genuinely interested in reading it, this
discussion aside. Anyway, I don't understand the "wide-to-narrow block
adapter" part. Assuming that the standards group removes the two-key
"improvement", the only bit that I can see that remains to be proved is the
ciphertext stealing. I admittedly don't know whether proofs for the whole
construction exist. A quick search revealed Liskov's review of the XTS draft
which gives a sketch, but not the whole proof.

I'm confident about my Evil Maid statement. The essence of (what's commonly
referred to as) Evil Maid really is the fact that you cannot trust your own
computer. The multiple visits make the attack significantly easier, but are
not necessary - you just need a more advanced malware.

As for repeated tampering, I'm still having a hard time imagining a realistic
attack. I just don't see how the attacker can get any predictable amount of
control of the FDE's inputs, but like I said, people can come up with smart
things. The "system drive" in my post was referring to things like weakening
the system security by overwriting configuration files with garbage and the
like, which are theoretically possible although I don't really see them being
practical.

Yes of course you can get better security guarantees if you change the
constraints. You're probably correct that it should be feasible to offer a 512
byte sector view to the OS on the one side, and use whatever size on the
physical side, if your physical side isn't really physical.

P.S. I didn't really think much of the scare-quotes retort; it seemed like a
bit of a cheap jab. If you however have a better idea (or pointers for that
matter) about how to refer to a specific part of a blog post I disagree with,
I'd be interested to hear.

~~~
tptacek
It's not the XEX paper; it's "Evaluation of Some Block Cipher Modes". Highly
recommended.

Efficiency in the sense of "comparing block cipher modes" is not the reason
XTS loses integrity and resistance to chosen-ciphertext attacks. Format
constraints are the major reason. To wit: there's no good place to stick an
auth tag, and for that matter, no good sense of what you'd be authenticating,
because you're dealing with fragments of files, not actual files.

Integrity _does_ factor into the security model of XTS. Rogaway points this
out when he explains why NIST apparently rejected CBC (the chaining gives
attackers a forward bit-flipping capability) and CTR (which was rejected
because it's trivially malleable). NIST claimed that some notion of resisting
ciphertext tampering was part of the goal, but, as he said, and Ferguson said,
and I said repeatedly in this article, nobody really knows what that tamper-
resistance is supposed to mean. Attackers can tamper with ciphertext in XTS,
and they can probably do it in ways that will create backdoors in binaries!

But lack of integrity checking isn't the only problem with XTS. Another
problem is that XTS is for the most part the ECB mode application of XEX. You
can't even get a strong definition of confidentiality from this construction,
because an attacker observing block offsets into a given sector can collect
useful information as those blocks are changed, changed again, changed back,
and changed again. This is the attack Ferguson points out in his objection to
NIST standardizing XTS, and it's also related to Rogaway's objection that NIST
standardized a wide-block-narrow-block construction rather than something that
behaved more like a tweakable native wide block construction. The native wide
block construction would effectively randomize _the whole sector_, not
16-byte chunks of sectors. That would also lessen the harm from XTS's
malleability, but here we're more concerned with the granularity of the data
we leak by encrypting deterministically.
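
To see the granularity problem concretely, here's a toy model (mine, not real
XTS --- HMAC-SHA256 stands in for the tweaked AES transform, and all the names
are made up) of what any deterministic, 16-bytes-at-a-time scheme leaks to
someone holding two snapshots of a sector:

```python
import hashlib
import hmac

def fake_tweaked_block(key: bytes, sector: int, index: int, block: bytes) -> bytes:
    # Deterministic in (key, sector, block index, plaintext), like XTS --
    # but HMAC is only a stand-in, NOT the real AES-XEX transform.
    tweak = sector.to_bytes(8, "little") + index.to_bytes(8, "little")
    return hmac.new(key, tweak + block, hashlib.sha256).digest()[:16]

def encrypt_sector(key: bytes, sector: int, data: bytes) -> list:
    blocks = [data[i:i + 16] for i in range(0, len(data), 16)]
    return [fake_tweaked_block(key, sector, i, b) for i, b in enumerate(blocks)]

key = b"k" * 32
snap1 = encrypt_sector(key, 7, b"A" * 16 + b"unchanged tail..")
snap2 = encrypt_sector(key, 7, b"B" * 16 + b"unchanged tail..")

# Two snapshots of the same sector reveal exactly which 16-byte chunks changed:
changed = [i for i, (a, b) in enumerate(zip(snap1, snap2)) if a != b]
print(changed)  # [0]
```

A native wide-block construction would instead randomize the whole sector, so
the observer only learns "this sector changed", not which 16-byte chunk of it.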

If you want to call the attack I'm talking about something other than "Evil
Maid", that's fine by me. I'd also accept "Mary Poppins is a governess and not
a maid" as a valid objection. The point is that virtual disks stored on
Dropbox are exposed to far more interesting attacks than physical disk media
is, because with physical media, attackers don't get an unbounded number of
use-tamper-use cycles.

~~~
oggy
Thanks for the paper reference.

I believe I do understand the constraints under which XTS operates, and I
agree with almost everything you said in the last comment. The only thing I
don't agree with is that nobody knows what tamper resistance is supposed to
mean - it's non-malleability on the level of a cipher block. Yes this is
clearly worse than a sector-level non-malleability, but is also clearly
/much/, /much/ better than CBC-style bit-level malleability. But I'll have a
look at Rogaway's paper, maybe there's something I'm missing.

I'm still having a hard time conjuring anything resembling a practical attack
coming from this. To plant backdoors, you'd have to know the exact location of
a binary (feasible and done before, if with pretty strong assumptions), then
get the user to change those exact same sectors to something that would be
useful to you, and then copy-paste the ciphertext. I just don't see that
happening in a real world scenario. Otherwise, taking advantage of the weaker
confidentiality notion (deterministic CPA)... still seems quite far fetched,
but I accept that I could be proven wrong.

At any rate, I still maintain that this has nothing to do with Evil Maid
attacks. Use a CCA2 secure cipher and encrypt the whole disk at once if you
want, the Maid still wins.

~~~
tptacek
You only need to know where on the disk the binary is. Once you know that, you
attack the binary directly, by randomizing a 16-byte chunk of it. The
effectiveness of that attack depends on a lot of factors, but remember that
X64 binaries aren't the only thing that XTS needs to protect.

For that matter, you can trigger vulnerabilities in the kernel or even
application code, by surreptitiously randomizing offsets into metadata that
they depend on.

What's clear to a cryptographer is that no scheme that provides serious non-
malleability should have this property, but XTS does. What's maddening, in a
theoretical sense, is that you can't really put this into formal terms,
because --- as I've been saying --- nobody has provided a clear definition of
what security guarantees XTS is supposed to provide.

No amount of use-tamper-use breaks a cipher that resists CCA2 attacks, because
_every_ attempt to tamper produces a hard failure. That's what CCA2-resistance
means: attackers don't get to adaptively choose ciphertext (or really, choose
any ciphertext at all). In practice, cryptosystems resist these attacks by
authenticating ciphertext.
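
A toy encrypt-then-MAC sketch (hash-based keystream standing in for a real
cipher; the names are mine) of what "fail completely on tampering" looks like:

```python
import hashlib, hmac, os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Hash-counter keystream; a stand-in for a real stream/CTR cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(enc_key: bytes, mac_key: bytes, pt: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(pt, keystream(enc_key, nonce, len(pt))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag  # encrypt-then-MAC

def open_(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("tampered")  # hard failure; no partial plaintext escapes
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk = os.urandom(32), os.urandom(32)
blob = bytearray(seal(ek, mk, b"attack at dawn"))
blob[16] ^= 0x01  # flip a single ciphertext bit
try:
    open_(ek, mk, bytes(blob))
    result = "accepted"
except ValueError:
    result = "rejected"
print(result)  # rejected
```

Every tampered ciphertext dies at the MAC check, so the attacker never gets an
adaptive use-tamper-use loop to learn from.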

For what it's worth, the article we're commenting on is clear that CBC is
inferior to XTS for disk encryption. I take no responsibility for readers who
read random selections of the article and come to their own conclusions,
although if you look elsewhere on this thread that has clearly happened at
least once.

~~~
oggy
I concede the point about the possibility of triggering crashes in say the
kernel code. This however a) does not depend on repeated access to the
encrypted drive, b) is only slightly less likely to occur with a wide-block
mode, c) might be hard to meaningfully exploit, but I'll defer to your
expertise on that one.

It also doesn't really seem to rely on multiple access to the ciphertext
(you're not gonna brute force 16 bytes). And as I mentioned, you are not that
likely to have your system partition on Dropbox, and I'd wager the contents
and offsets of your user partition are not that easy to guess.

For a definition of non-malleability applicable to XTS, see "Security Notions
for Disk Encryption".

Lastly, you're missing my point about CCA2 encryption - it does not solve the
trusted platform problem, and an encryption system based on it is thus as
vulnerable to Evil Maid attacks as your poor old XTS. Unless you want to
exclude installation of malware from the definition of an Evil Maid attack,
but then your usage of the term conflicts with just about everybody else's.

~~~
tptacek
We're in a semantic rut here. Like I said way upthread: I'm not trying to
litigate the term "Evil Maid". I'm pointing out that Truecrypt+Dropbox is
vulnerable to use-tamper-use attacks in ways that physical disk encryption
isn't. Authenticated encryption rules these attacks out, because there is no
way to fail anything but completely when modifying ciphertext on the stored
image. Call the attack whatever you'd like; I've clearly mentally generalized
the Evil Maid idea in a way that you haven't. Maybe I'm giving Joanna
Rutkowska too much credit (but I doubt it :)

Regarding "multiple access to the ciphertext" --- you will indeed brute force
16 bytes, but you won't be trying to find a 128 bit needle in the haystack;
you'll be looking for 128 bit blocks that happen to have a mix of innocuous
bytes --- of which there will be many possibilities --- coupled with the
opcode you want, and you'll have the bytes trailing the previous block and the
bytes leading the subsequent block to play with as well to narrow your search.
The attack will look a little bit like the byte-at-a-time attack on ECB.

I think that attack is significantly more plausible than you think it is.
Apparently, so did Ferguson, since he brought it up ("Code modification
attacks") in his NIST objection.

~~~
oggy
My issue with semantics is not that you're generalizing the notion, but rather
specializing it. The "tamper" part of use-tamper-use in Evil Maid refers to
the entire system, whereas you insist on tampering with only the ciphertext.

I'm still skeptical about code modification. The access the attacker gets is
not really an oracle as in your byte-at-a-time scenario (I take it you're
referring to the one from your crypto challenges), they can only observe the
decryption output in a very indirect fashion. The number of potentially useful
"needle" configurations is also hard to tell, you'd need to get alignment
right and produce valid opcodes throughout the block etc. And also how many
tries would the attacker get in reality? The executable would need to be
reloaded for each one of his tries.

But more fundamentally, I find the whole scenario unlikely. To get there in
the first place would require the user to sync their system partition with
Dropbox. I think it's fair to say that will almost never happen. And if we're
talking about a physical repeated-tamper attacker, well the Evil Maid (the
original (TM)) is a much easier option for him.

~~~
tptacek
If you store an image with executable code in it on Dropbox via Truecrypt, how
many times will you potentially run it out of that image?

Again: we are talking about properties that cryptographically sound systems
simply don't have. And if you're using Dropbox as your backing store, and not
a physical drive, there is no reason to accept those properties!

~~~
oggy
If I store a program on Dropbox, I might run it many times. Thousands?
Millions to be generous? But if the attacker has to do a brute force search
which relies on me executing each different tampered version of the program,
that's probably too small of a number.

The reason to accept sub-optimal properties in this case is convenience.
Truecrypt is popular and has a nice interface. Using it with Dropbox gives
much better security than not using it with Dropbox. It's not perfect, but I
still don't think we have any practical attacks against such a scenario.

I agree that having stronger crypto there is however possible and desirable.

------
comex
So why can't we authenticate all the sectors? Is the performance really that
bad, even with AES-NI and such? Chrome OS and Android devices that use dm-
verity already do this (albeit for read only), so that's hard to believe...

With SSDs, seek time shouldn't be an issue. You can argue that if an attacker
can modify your disk, they can probably run code to steal your password, but
this does not apply to Dropbox-stored disk images, and AFAIK you can try to
secure a PC against modified code seeing the keys using TXT.

~~~
tptacek
This is a good question. Here are some reasons:

* If you're working with 512 byte sectors, the cost of individual MAC tags gets high.

* No matter what your sector size, reserving space for the MAC tag gets you odd-sized sectors.

* Most importantly, what does it mean to have a sector with a bad MAC? How does that get reported back to the filesystem? Filesystems aren't designed to cooperate with the disk to track cryptographic attacks.
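
For a sense of scale on that first bullet, the space overhead alone (with an
illustrative 16-byte tag; real tag sizes vary) works out to:

```python
# Back-of-the-envelope space cost of a per-sector MAC tag (illustrative sizes).
TAG = 16  # bytes, e.g. a 128-bit tag

overhead = {sector: TAG / sector * 100 for sector in (512, 4096)}
for sector, pct in sorted(overhead.items()):
    print(f"{sector}-byte sectors: {pct:.2f}% of raw capacity spent on tags")
```

And that's before the second bullet bites: 512 + 16 = 528-byte sectors don't
divide anything cleanly.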

By the time you solve that last problem, you've already gotten yourself pretty
far down the track of just providing crypto in the filesystem layer. And since
the filesystem layer is format-aware, you might as well keep going and make a
proper cryptographic filesystem.

~~~
caf
Is there anything wrong with reporting a sector with a bad MAC as an
unrecoverable read error, just as the hardware would do if the CRC failed?

Reserving the space is the real killer - either you end up with non-512-byte
sectors (and what is more problematic, sectors that don't divide the system
memory page size) or you stash the MACs elsewhere in which case every write to
a sector also requires a read-modify-write of the sector that holds the MAC.

~~~
tptacek
Does an "unrecoverable read error" mean that the file is read with a big
zeroed out chunk (and an error somewhere)? Does it mean that the whole file
becomes unreadable? Can the user recover any part of the file? How do they
know what to do? Do all filesystems handle those read errors the same way?

It's clearly not impossible to get the level of cooperation between the
filesystem and the encrypted device that you need, but that's effort, and it's
effort perhaps better spent on simply encrypting at the filesystem layer.

I think you're right though that the alignment issue is dispositive.

~~~
caf
Speaking in terms of UNIX-style operating systems...

No serious filesystem will return a zeroed out chunk. Apart from that, it
obviously depends on where the read error hit - it might cause just one block
in the file to return EIO from read(); it might cause a large part of a file
to return EIO from read(); it might make the entire file inaccessible (EIO
from open()); it might make entire parts of the directory tree inaccessible
(EIO from open() or readdir()) or it might make the entire filesystem fail to
mount.

Files that are mmap()ed rather than read() will SIGBUS the process on an IO
error - this includes executables and libraries.

I think I see what you're driving at, though - application code isn't designed
with the idea that EIO might be maliciously-induced, so you could do things
like induce a database to rollback to an earlier point in time by corrupting
its journal, or make a badly-designed firewall fail-open when its
configuration files fail to load?

~~~
tptacek
Yep. If your cryptosystem is designed properly, _none of that_ should be
possible.

~~~
caf
I'm not sure it's solved by encrypting at the filesystem layer either, though
- the problem is that the smarts need to move all the way up to the
applications themselves. That's likely impossible, so your next best option is
to just hard-fail the entire system on a MAC failure, which is just as easy
for dm-crypt to do as ZFS, for example.

------
sweis
I primarily like XTS because it's very fast and can be efficiently pipelined
on x86 platforms with AESNI. I can get 215 Gbps throughput on 8 cores of a
modern Intel CPU.

Also, if XTS is used in a length-preserving fashion, it can encrypt standard
block devices without significant changes.

However, length-preserving means that it's either not going to be
authenticated or you need to keep authentication material elsewhere. In the
latter case, I'd rather use GCM.

~~~
tptacek
The AES-NI properties that XTS exploit can also be exploited by better wide-
block-narrow-block constructions; Rogaway for instance suggests that XTS could
have cleaned up the CTS construction by using a better understood Feistel
construction.

Even better would be a "native" wide-block tweakable cipher, one that created
a strong PRP out of the entire sector, rather than chunking the sector into
narrow blocks and then ECB'ing those narrow blocks. However you did that, it
would also probably involve invocations of the AES block transform, and thus
benefit from AES-NI.

But you're right: XTS has a convenience advantage for encryption in the
standard hardware block device setting. What's sad about that is that
simulated hardware block encryption is actually not a particularly strong
protection for users.

~~~
pbsd
Have you checked Rogaway's AEZ? Its core could probably be repurposed to disk
encryption, being an AES-based tweakable wide block cipher.

~~~
tptacek
I remember reading this, but will admit to not having an immediate intuition
for what it would look like as a wide-block disk encryption scheme.

------
zokier
The author's recommendations seem a bit "handwavy" to me. I understand that
this article probably is not intended for end-users, but it still would have
been nice to have more concrete advice. E.g. dm-crypt is usually used with XTS
mode; what would the author (or the good crowd of HN) say is a "better"
solution (not necessarily at the block layer) for, e.g., protecting a laptop?

~~~
NateLawson
As I mentioned above, ZFS is a good example of filesystem encryption without
the block device constraints.

But between your OS and the hard drive in your laptop, use whatever full disk
encryption software is best supported & most secure. He's not criticizing
FDE in general, he's saying "don't constrain yourself to the block device
model if you're designing database encryption etc etc."

~~~
zokier
> ZFS is a good example of filesystem encryption without the block device
> constraints

Would you happen to know if there is anything equivalent planned/done for
btrfs (or ext4)?

------
zanny
Man I've been having a run around recently with a notebook I just ordered.

First intuition is to just LUKS both disks. OK, that might work, and then I
see the horrible performance degradation from FDE'ing an SSD.

Second intuition is SED, and I got a Crucial m550 to hopefully satisfy that,
but then I find out there is no documentation on if the ATA password is stored
in the firmware. If it is, I'm just wasting my time, and I kind of just have
to hope Crucial does the right thing and doesn't store the AES key anywhere. I
also have to hope the marketing "hardware encryption" is true like on my 840
Pro, where I don't see any performance loss.

And even userspace level encryption of config files that use plaintext
passwords is terrible (and lets be honest, way too many different programs
hide credentials in plaintext somewhere for me to find all of them easily with
a full desktop - off the top of my head, networkmanager, KDE-PIM, Telepathy,
Firefox, and Steam all have their own independent unrelated credential
stores).

In general I would just want to encrypt all of base ~, /var, and /etc, since
that is where personal data can end up (and maybe /opt, because random stuff
ends up there) - but then I'm still losing most of the reason of having an
SSD, especially one with a hardware AES accelerator that would go unused.

And don't get me started on the mechanical drive, which I'm going to have to
part bin when I get the thing and see if it has working hardware encryption.
At least on that it isn't too bad to use LUKS, because then the overhead isn't
as bad - but having overhead at all kind of sucks.

------
tbirdz
As someone who uses dropbox to store encrypted VirtualBox vdi virtual hard
disk files using XTS encryption, can someone tell me what I should be using
instead of XTS?

------
midas007
TL;DR For random-seek block encryption, don't use XTS, _use CTR._

It's simple. I like simple maths and code, it's less to screw up _and less for
implementations to screw up_. For example, I don't trust EC or GCM, even if
some people thinks they're the new hotness, because complexity creates more
opportunities for obfuscation and puts the code further out of reach of the
already few eyeballs actually (or not) looking at it.

Maybe 'cperciva can explain why.

~~~
tptacek
What? No. Don't do that.

~~~
midas007
? What's wrong with CTR? CTR is basically an OTP. Being an OTP, encryption and
decryption are basically the same construction (thank you, XOR).


    cipherblockdata = blockcipher(key, nonce . block #) ^ plainblockdata
    plainblockdata  = blockcipher(key, nonce . block #) ^ cipherblockdata

If a MAC is needed, that can happen after encrypting and before decrypting.
(Needed if bytes traverse a network, but maybe not for local disk or file
encryption.)

Edit: fixed my maths.

~~~
lvh
Uh, yeah, except not a cryptographic hash function, first of all :-)

Secondly, CTR has serious issues too. It is trivial to bit-fiddle. The naive
implementation you're suggesting leaks the keystream in one CCA query.

Just because CTR in and of itself is easy to get right doesn't mean that any
system composed using CTR is easy to get right.
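
To make the bit-fiddling concrete, a toy sketch (SHA-256 keystream standing in
for real AES-CTR; the scenario and names are invented) of rewriting a CTR
ciphertext without the key:

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Hash-counter keystream standing in for AES-CTR.
    out, block = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + block.to_bytes(8, "big")).digest()
        block += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, nonce = b"k" * 32, b"n" * 16
pt = b"PAY $0100 TO BOB"
ct = xor(pt, ctr_keystream(key, nonce, len(pt)))

# Bit-fiddling: knowing the format, flip ciphertext bits to rewrite the amount.
forged = bytearray(ct)
for i, (old, new) in enumerate(zip(b"$0100", b"$9900"), start=4):
    forged[i] ^= old ^ new
forged_pt = xor(bytes(forged), ctr_keystream(key, nonce, len(pt)))
print(forged_pt)  # b'PAY $9900 TO BOB'

# And a single known plaintext hands over the whole keystream for that nonce:
assert xor(ct, pt) == ctr_keystream(key, nonce, len(pt))
```

No key recovery needed; the attacker just XORs known deltas into the
ciphertext, which is exactly why "CTR is simple" doesn't make the system safe.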

~~~
midas007
Fixed.

That's beyond the scope of which mode, but it's important. However, the less
code one has, the fewer places there are for things to hide.

~~~
tptacek
No, malleability is _not_ beyond the scope of which "mode" you encrypt
something with. That's like saying that security is beyond the scope of which
"mode" you encrypt with. People used to believe you could divorce
confidentiality from integrity, back in the 1990s, but that turned out not to
be true, due to adaptive chosen-ciphertext attacks.

