
TrueCrypt Security Assessment [pdf] - silenteh
https://opencryptoaudit.org/reports/iSec_Final_Open_Crypto_Audit_Project_TrueCrypt_Security_Assessment.pdf
======
sigil
_The iteration count used by TrueCrypt [in its PBKDF2 key derivation] is
either 1000 or 2000, depending on the hash function and use case. In both
cases, this iteration count is too small to prevent password guessing attacks
for even moderately complex passwords._

Until TrueCrypt gets patched to use scrypt for key derivation, roughly how
long should a volume password be to put it out of reach?

Edit: There's a table in the scrypt paper from 2002 [1] that estimates the
cost of various brute force attacks. Back then, a PBKDF2 iteration count of
86,000 and a password of length 40 would cost $200K to crack. TrueCrypt's
choice of 1,000-2,000 iterations looks staggeringly low in comparison. And that's
not even accounting for hardware advances in the last 12 years.

[1] page 14,
[http://www.tarsnap.com/scrypt/scrypt.pdf](http://www.tarsnap.com/scrypt/scrypt.pdf)
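To get a feel for what those iteration counts mean in practice, here is a minimal timing sketch using Python's standard-library `hashlib.pbkdf2_hmac`. PBKDF2-HMAC-SHA512 stands in for TrueCrypt's actual hash choices, and the password and salt are illustrative:

```python
import hashlib
import time

def derive(password: bytes, salt: bytes, iterations: int) -> bytes:
    # PBKDF2-HMAC-SHA512; TrueCrypt's real options are SHA-512,
    # RIPEMD-160, and Whirlpool -- SHA-512 stands in here.
    return hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=64)

# Time one guess at TrueCrypt's iteration count vs. the 86,000
# from the scrypt paper's cost table.
for iters in (1000, 86000):
    start = time.perf_counter()
    derive(b"hypothetical volume password", b"\x00" * 64, iters)
    ms = (time.perf_counter() - start) * 1000
    print(f"{iters:>6} iterations: {ms:.1f} ms per guess")
```

Each guess costs the attacker one full derivation, so raising the count from 1,000 to 86,000 multiplies per-guess cost by 86x; a longer password multiplies the number of guesses instead.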

~~~
lucb1e
> a password of length 40 would cost $200K to crack

I'd love to see someone try. Here's an MD5; those are super vulnerable, right?
It's of a 9-character password. Piece of cake, if I'm to believe the news.

    f1f107c27cae21b5b5b01002e9c9ead8

~~~
drblast
Some numbers for you.

Using non-optimized, managed code on my laptop, I just went through 250
million iterations of plain text in ten minutes. You didn't specify what
counts as a character, but let's assume it's a printable ASCII character. 95^9
= 630249409724609375 possibilities.

Divide by 250 million, and we get about 2520997639 times ten minutes for the
time it would take me to enumerate all possible plain text passwords. That's a
long time, about 47000 years. Not going to happen today on my laptop. But if I
were a government, maybe I'd get a few thousand laptops together and set about
calculating this over the next five years or so. In 1997.

But if I were to throw some purpose built hardware at this problem or use the
video card to speed this up and not just laptops with general purpose CPU's,
it's very likely I could decrease this time by thousands of times.

At the end I'd have MD5s for all passwords, and now the problem becomes a
rainbow table lookup. And I only have to do this once; I forever gain the
capability to break any 9-character (and below) password with just a lookup.

And that's assuming that MD5 is a perfectly secure hash with no shortcuts
allowing you to narrow down the input set.
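The rate-and-extrapolate arithmetic above can be reproduced with a short, illustrative Python sketch (the benchmark size is arbitrary, and a GPU or purpose-built hardware would be orders of magnitude faster than this single-threaded loop):

```python
import hashlib
import time

# The 95 printable ASCII characters (0x20 through 0x7e).
ALPHABET = [chr(c) for c in range(0x20, 0x7F)]
KEYSPACE = len(ALPHABET) ** 9          # 95^9 nine-character candidates

def md5_rate(sample: int = 200_000) -> float:
    """Crude single-threaded benchmark: MD5 hashes per second."""
    start = time.perf_counter()
    for i in range(sample):
        hashlib.md5(b"candidate%09d" % i).digest()
    return sample / (time.perf_counter() - start)

rate = md5_rate()
years = KEYSPACE / rate / (365.25 * 86400)
print(f"~{rate:,.0f} MD5/s -> ~{years:,.0f} years to enumerate the keyspace")
```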

~~~
xhrpost
Did some more calculations just for the fun of it. It looks like you'd need
nearly 16 exabytes of storage to hold the MD5 table for every 9-character
password [1], not accounting for any database overhead. In high density, you
can fit a petabyte in what nowadays, half a rack? So around 8,500 racks.
Certainly in the realm of possibility for a government, but it would be a lot
for storing just one type of hash list.

[1]
[https://www.wolframalpha.com/input/?i=%28630249409724609375+...](https://www.wolframalpha.com/input/?i=%28630249409724609375+*+25+bytes%29)
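The arithmetic behind that estimate, assuming 25 bytes per entry (a raw 16-byte MD5 digest plus the 9-byte password, with no indexing overhead):

```python
ENTRIES = 95 ** 9            # every 9-character printable-ASCII password
BYTES_PER_ENTRY = 16 + 9     # raw MD5 digest + the password itself
total_bytes = ENTRIES * BYTES_PER_ENTRY

print(f"{total_bytes / 10**18:.2f} EB")  # ~15.76 EB
print(f"~{total_bytes / 10**15 / 2:,.0f} racks at 1 PB per half rack")
```

The exact rack count depends on the density you assume; the exabyte figure is the sturdy part.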

~~~
mattdw
I feel like there should be a more efficient way to store these things. You
have a complete enumeration of possible inputs (passwords), but unfortunately
we need to go in the opposite direction, so we couldn't just index into an
n*9-byte array. Depending on how well distributed the MD5 hash space is, it's
possible a prefix tree might save you a bit of storage (you'd only have to
save a couple of bytes per entry to cancel out the overhead), but I couldn't
tell you the numbers there. Otherwise I think you're right, and we'd just have
to store a map of md5 -> password.

(Edit: Also, being limited to hex digits for hashes and printable ASCII for
input gives you pretty good compression potential.)
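As a sketch of that compression potential: 9 characters drawn from a 95-symbol alphabet carry only 9·log2(95) ≈ 59.1 bits, so each password packs into 8 bytes instead of 9 via base-95 integer encoding. The helper names here are made up, and fixed 9-character passwords are assumed:

```python
def pack_password(pw: str) -> bytes:
    """Encode a fixed-length 9-char printable-ASCII password as 8 bytes."""
    assert len(pw) == 9
    n = 0
    for ch in pw:
        digit = ord(ch) - 0x20
        assert 0 <= digit < 95, "printable ASCII only"
        n = n * 95 + digit
    return n.to_bytes(8, "big")      # 95^9 < 2^60, so 8 bytes suffice

def unpack_password(blob: bytes) -> str:
    """Invert pack_password: 8 bytes back to the 9-character password."""
    n = int.from_bytes(blob, "big")
    chars = []
    for _ in range(9):
        n, digit = divmod(n, 95)
        chars.append(chr(digit + 0x20))
    return "".join(reversed(chars))
```

Combined with truncating the stored digest to the fewest bytes that still disambiguate entries, this shaves the naive 25 bytes per entry down noticeably, though nowhere near enough to make 95^9 entries practical.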

~~~
tomrittervg
They are actually much more complicated than just a big table of 'password |
hash'. Check out
[https://www.freerainbowtables.com/en/articles/](https://www.freerainbowtables.com/en/articles/)
for in-depth explanations.

------
616c
I wish someone would also audit tcplay [1], the BSD-licensed system with full
TrueCrypt compatibility. I would much rather use that, if only for the
license concerns repeated many times in the comments here.

[1] [https://github.com/bwalex/tc-play](https://github.com/bwalex/tc-play)

~~~
watwut
License review is listed as #1 among goals on
[http://istruecryptauditedyet.com/](http://istruecryptauditedyet.com/) .

------
floatboth
"The assessment explicitly excluded the following areas... Cryptographic
Analysis, including, RNG analysis, Algorithm implementation, Security tokens,
Keyfile derivation"

------
higherpurpose
A feature I'd like to see on both our laptops and mobile devices to protect
their privacy from random and abusive border searches: being able to hide the
fact that you have an encrypted account.

Use case: Say you pass the UK border, and they can willy nilly decide to check
your laptop or mobile phone. But you have an account that is password
protected and encrypted, and they see that, and ask you for your password. You
say no - and you get arrested for it. If they couldn't see you have that
password protected account, and all they could see is a "normal" (clean of
sensitive stuff) account, then they'd just check that and move on. Or you
could even password protect that, too, and give them the password to it, and
they'd be none the wiser.

All you'd need is to be able to hide that account from the main screen when
you turn on the laptop or mobile device, and you should only be able to re-
enable it from a menu prior to booting into the OS. It shouldn't be easily
accessible either, otherwise it defeats the point.

I don't see Microsoft doing something like this - ever. Apple might if enough
people asked for it, but I'm inclined to say they wouldn't for now. Google
probably won't do it either, for Android or Chrome OS, since they probably see
no benefit to it. But it would be nice if that feature at least came to Linux
and some custom Android ROMs like Cyanogen.

~~~
coob
It's called deniable encryption[1]. TrueCrypt supports it[2]. Bruce Schneier
is not a fan of their implementation, however[3].

[1]
[http://en.wikipedia.org/wiki/Deniable_encryption](http://en.wikipedia.org/wiki/Deniable_encryption)

[2] [http://www.truecrypt.org/docs/plausible-
deniability](http://www.truecrypt.org/docs/plausible-deniability)

[3]
[https://www.schneier.com/blog/archives/2008/07/truecrypts_de...](https://www.schneier.com/blog/archives/2008/07/truecrypts_deni.html)

~~~
TheLoneWolfling
Huh. I hadn't thought about multiple-image attacks.

Could that be mitigated by having TC occasionally randomly write random data
to unused blocks?

~~~
mseebach
The table of unused blocks would violate your deniability. Indeed, any such
mutating of ostensibly free space by TrueCrypt would give it up.

You have to consider the adversary. If you are a non-targeted individual who
wants to keep some things private from a spurious search by law enforcement or
border agents, then simply being able to boot up without sensitive stuff and
without an obvious encrypted partition lying around will do the trick.

If, on the other hand, you're targeted by someone with a strong reason to
believe that you are hiding stuff on your computer, maybe even someone who
will break into your apartment every day to image your computer, you're gonna
have a bad time (and a keylogger on your computer, but that's a different
kettle of fish). Basically, evidence of the existence of your hidden partition
can leak out into the real world, not because of bad cryptography, but because
you're human.

------
pogue
Did anybody give this a thorough read and can give a cliff notes on the
results? How did Truecrypt do - good/bad/indifferent?

~~~
TheLoneWolfling
All things considered, pretty good.

No massive exploits; the worst problem was the too-small iteration count in
the key derivation.

There were a bunch of other minor problems, but most of them were information
disclosures only triggerable by malicious software running in the encrypted
environment (e.g. finding out whether a file you don't own exists), or things
that could only be triggered by someone with raw access to the hard drive (at
which point they could just overwrite your bootloader).

Although this explicitly doesn't cover a large chunk of TC.

------
doe88
> Issue 4: Windows kernel driver uses memset() to clear sensitive data

> Calls to memset() run the risk of being optimized out by the compiler.

I would be curious to know which compilers or which options actually still do
these ill-fated _optimizations_?

~~~
30thElement
They should only do it in situations where it doesn't change observable
program behavior. I use memset frequently in C just to be safe, but if the
buffer is written again before it's ever read, the compiler can optimize the
earlier memset away. I'm guessing their recommendation here concerns code like

    char *plain_text = malloc(size);
    /* ... do stuff with plain_text ... */
    memset(plain_text, 0, size);  /* dead store: nothing reads plain_text afterwards */
    free(plain_text);

For most programs that last memset is unnecessary (the standard arguably even
permits removing it, since it has no observable effect), and it makes sense
for the compiler to optimize it away. But for crypto purposes you have to
worry about someone being able to read plain_text later, so the memset is
important.

~~~
Marat_Dukhan
I checked 3 compilers: icc 14, gcc 4.8, and clang 3.3. Clang is the only one
which optimized the memset away.

~~~
lawnchair_larry
Did you enable optimization? I know first hand that GCC will optimize
something similar out in many cases.

[http://gcc.gnu.org/bugzilla/show_bug.cgi?id=8537](http://gcc.gnu.org/bugzilla/show_bug.cgi?id=8537)

~~~
Marat_Dukhan
Yes, I compiled with -O3.

------
dmix
Does anyone know if Tomb [1] or ecryptfs [2] is a safe choice for encrypting a
local directory?

Everyone always talks about Truecrypt which is clearly most popular but both
of the above are fully open-source. The latter is even part of the kernel. I'm
curious what the appeal of Truecrypt is besides maybe portability (which is a
pretty strong selling point).

[1] [http://www.dyne.org/software/tomb/](http://www.dyne.org/software/tomb/)

[2] [http://ecryptfs.org/](http://ecryptfs.org/)

~~~
dublinben
The appeal of Truecrypt is that it works on Windows. As you've suggested,
there are better encryption programs and schemes for users of GNU/Linux
operating systems.

The questionable license, and obscure nature of the project should be
significant reasons not to use Truecrypt. That doesn't even begin to consider
the actual security of the program.

~~~
yabatopia
Too bad Windows 8 isn't supported, so Truecrypt loses much of its appeal to
Windows users.

~~~
lawnchair_larry
I replied to this comment using a TrueCrypted Windows 8 machine.

------
angry_octet
Actual PDF
[https://opencryptoaudit.org/reports/iSec_Final_Open_Crypto_A...](https://opencryptoaudit.org/reports/iSec_Final_Open_Crypto_Audit_Project_TrueCrypt_Security_Assessment.pdf)

Die Slideshare, die Scribd.

------
Evolved
What about removing an encrypted 32GB microSD card containing all your
sensitive information from the phone prior to a flight/border checkpoint, thus
rendering whatever is on the phone itself unimportant? Maybe even substitute
another password-protected microSD card with interesting but non-sensitive
info (a risqué photo or two of the wife and some saucy conversations)?

Anyone know if something as small as a single microSD card positioned in the
densest part of the suitcase will show up on an X-ray?

~~~
tsaoutourpants
In America, you are better off putting it in your sock under your foot. No TSA
search includes the bottom of the foot, and you'd have to really arouse CBP
suspicion to get them checking out your feet.

But, to answer your question, no, airport security screeners are not x-raying
for objects of that size. If visible at all, it would be ignored.

------
yvishyar
I have been waiting for the security audit report since the first time it was
mentioned. Now that it is out, I feel a little disappointed that no real
intentionally planted flaws turned up.

~~~
aroch
Why would you be disappointed that there's nothing wrong with it? Why would
you be hoping that the program millions use to protect their sensitive
information was broken?

~~~
daeken
I'm not disappointed by a fairly uneventful report, but quite honestly, I'm
always a little bit worried when nothing horrible is discovered in the course
of testing.

It's not that I want there to be bugs, but that in a large enough codebase,
there's always a game-over bug -- major information leakage, arbitrary code
exec, whatever. As a security consultant, I'm always more confident in a test
when I find a horrendous bug than when I don't; I know that bug will be fixed,
and it makes me feel like the test is more complete, even if I know full well
that I did the test to the absolute best of my abilities regardless.

I've heard similar sentiments from most testers I know.

~~~
lawnchair_larry
Heh, a clean bill of health always has that elephant in the room attached.

------
dfc
I recall seeing something about reviewing the TC license. Does anyone know if
this is something that will be part of Phase II or am I misremembering the
details?

------
rsync
I would really, really like to see an effort like this for OpenSSL ... or
better yet, for one of the OpenSSL alternatives that are not a spaghetti mess.

~~~
lucb1e
Hindsight bias? We should audit libnss and all the other cryptography
libraries too then. And don't forget how much of the world relies on closed-
source solutions like Microsoft's Bitlocker. Better shun those because they
had no public audits and for all we know they're even more spaghetti code.

~~~
rsync
No, it's not hindsight bias. I have posted (and so have others) on many public
forums for _years_ about the need for an audit of OpenSSL and OpenSSH and
there have been many discussions about the sad state of the codebase in
OpenSSL.

I can think of a particular discussion on the cryptography mailing list at
randombit from ... two years ago ?

------
higherpurpose
Would Threefish be a better cipher than AES for TrueCrypt/disk encryption,
considering it can have 1024-bit blocks?

~~~
oggy
For security, somewhat. The current de-facto standard mode of operation for
disk encryption utilities is XTS, which effectively encrypts each block on the
disk with a different key, where the blocks are of the same size as the cipher
block.

Whether this is of any significance depends on your adversary model. If the
adversary controls your storage medium (imagine putting an encrypted container
on Dropbox or Google Drive), they can mix and match (e.g. copy-paste)
different versions of blocks from your history. Imagine your disk being in a
version control system; the adversary could pick the value of block 1 from
version 50, the value of block 2 from version 42, the value of block 3 from
version 100, and so on. They could also potentially discover usage patterns
(seeing, e.g., that the value of block 3 remained constant between versions 20
and 200, while block 5 changed frequently). Additionally, they could corrupt
any of the blocks, turning the corresponding plaintext into random bits.

Having a smaller block size means that they can perform any of these with
finer granularity. Increasing the block size thus increases your security;
ideally, your entire disk would be just one block (the only thing the
adversary could do in that case is to completely restore an old version of the
disk); but this is hugely impractical, since the performance would be abysmal
(you'd have to re-encrypt the whole disk to change just one byte). So you have
a spectrum of performance/security tradeoffs. Where on this spectrum the
1024-bit blocks lie, I'm not sure, but I suspect that they are better than
128-bit ones.

Note that we do have schemes that can do wide-block, sector-level encryption
(e.g. the EME mode), but they're not used since they're about 2x slower than
narrow-block schemes like XTS.

Edit: in conclusion, for pretty much every scenario, other security concerns
are much more significant than the block size :)
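The mix-and-match/rollback attack described above can be illustrated with a toy per-sector stream cipher (keystream from SHA-256). This is NOT XTS and not a real cipher; it is merely a stand-in that shares the relevant property: decryption never fails, so there is no integrity check to trip:

```python
import hashlib

def sector_keystream(key: bytes, index: int, length: int) -> bytes:
    """Toy keystream derived from SHA-256 -- NOT a real cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + index.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_sector(key: bytes, index: int, plaintext: bytes) -> bytes:
    return xor(plaintext, sector_keystream(key, index, len(plaintext)))

decrypt_sector = encrypt_sector  # XOR stream: same operation both ways

key = b"k" * 32
old = encrypt_sector(key, 3, b"ALLOW ssh from anywhere!romantic")  # "version 50"
new = encrypt_sector(key, 3, b"DENY  ssh from anywhere!romantic")  # "version 100"

# An adversary with both snapshots pastes the old ciphertext over the new:
restored = decrypt_sector(key, 3, old)
print(restored)  # the old plaintext comes back; nothing flags the rollback
```

As a bonus, this toy also leaks the usage patterns mentioned above: XORing two snapshots of the same sector reveals exactly where the plaintext changed. Real XTS, being a tweaked block cipher rather than a stream cipher, does not leak XOR differences like that, but it is equally silent about rollbacks.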

~~~
usefulcat
> Additionally, they could corrupt any of the blocks, by turning the
> corresponding plaintext into random bits.

Given that the premise is that the adversary controls the storage medium, this
point doesn't seem terribly interesting. I feel like I'm missing something.

~~~
oggy
The theoretical attack there is that one could selectively garble the
(decrypted) parts of your storage, and thus destroy the contents of e.g. your
configuration files. E.g. they could corrupt your firewall configuration file
and thus leave your computer open to outside network connections. I don't
think this has been done in practice, and it's probably not very feasible.

Such an attack could not be detected, because the encryption modes commonly
used do not provide integrity protection. This is due to the desire to have
equal sizes for both the encrypted and plaintext sectors (as far as I
understand, that's an efficiency/ease of implementation concern).
Incidentally, you can easily see that such a scheme can never provide
integrity: the encryption operation has to be a permutation, so every possible
ciphertext decrypts to some plaintext and none can ever be rejected as
invalid.

------
samplonius
I read this whole thing, but they left out a description of the NSA planted
back doors. As everyone knows, the NSA has compromised all crypto systems.
Since the audit did not reveal the NSA back door or doors, what else is
missing?

More likely, iSEC Partners is the NSA's new RSA 2.0

------
cordite
Build tools from 1993? What kind of nonsense is this?

> Page 8

~~~
hobbes78
To offer full disk encryption, boot code for the MBR must be produced, and it
has to be 16-bit. Current compilers only produce 32- and 64-bit binaries, so
you have to use an old compiler...

~~~
cordite
Thanks for detailing that (I didn't know)

------
whoismua
tl;dr

"1.3 Findings Summary

During this engagement, the iSEC team identified eleven (11) issues in the
assessed areas. Most issues were of severity Medium (four (4) found) or Low
(four (4) found), with an additional three (3) issues having severity
Informational (pertaining to Defense in Depth).

Overall, the source code for both the bootloader and the Windows kernel driver
did not meet expected standards for secure code. This includes issues such as
lack of comments, use of insecure or deprecated functions, inconsistent
variable types, and so forth. A more in-depth discussion on the quality issues
identified can be found in Appendix B....

The team also found a potential weakness in the Volume Header integrity
checks. Currently, integrity is provided using a string (“TRUE”) and two (2)
CRC32s. The current version of TrueCrypt utilizes XTS as the block cipher
mode of operation, which lacks protection against modification; however, it is
insufficiently malleable to be reliably attacked. The integrity protection can
be bypassed, but XTS prevents a reliable attack, so it does not currently
appear to be an issue. Nonetheless, it is not clear why a cryptographic hash
or HMAC was not used instead.

Finally, iSEC found no evidence of backdoors or otherwise intentionally
malicious code in the assessed areas. The vulnerabilities described later in
this document all appear to be unintentional, introduced as the result of
bugs rather than malice."

