TrueCrypt Master Key Extraction And Volume Identification (volatility-labs.blogspot.com)
143 points by lelf 1260 days ago | 73 comments

A naive person googling about TrueCrypt and stumbling on this article (well written with an authoritative tone) will think that TrueCrypt is completely cracked, and not bother to use it.

This research is interesting and useful, but do we really want to scare off people from using TrueCrypt?

Couldn't you add a paragraph at the top or to the side that says, for example:

"If TrueCrypt is used in the intended way, i.e., you finish your work with TrueCrypt, dismount the TrueCrypt volume, and then shut down your computer, then the data protected by TrueCrypt is secure if your computer is lost, stolen, or copied at that point."

I understand your intended audience (I'm one of them). But TrueCrypt is the best protection we've got (in terms of price (free), quality, license terms, multi-platform support, algorithm choice, etc.). I presume the author likes it and uses it himself too.

We want more people to use it, right? So let's at least make it clear that it isn't cracked or broken when used in the intended manner.

We want more people to use it intelligently. If they're not capable of understanding the implications of this research, are they really going to be able to use this software effectively? How much hand holding do we need to do, here?

> How much hand holding do we need to do, here?

A lot more than is currently the standard.

Security is a horribly complex subject, and the people that need it most often don't have the time or knowledge to judge tools and sort out implementation details.

> How much hand holding do we need to do, here?

I don't know who "we" would be in this case, but in general, I'd like to see enough hand holding that everybody can use encryption safely and confidently without having to be cpercival or tptacek.

Yes and no. On one hand, we want to make clear that it's not vulnerable; on the other, it is important to discuss this thoroughly and ensure that people understand the ramifications of this discovery. Short of doing nothing at all, nothing is less secure than using security software "badly" or "incorrectly": that false visage of security can be deceptive and lead to other bad practices.

I just noticed that the author commented that he attempted to add some clarity to this effect.

Acknowledged - I added a few words about this to the end of the post.

Thanks for the edit, I really liked the link to the cold boot abstract. Somehow I keep forgetting physical access in a world of radio-controlled computer implants and thunderbolt/firewire DMA.

The examples really made the article shine. I knew you could dump memory, but seeing identification tools in action is ... too easy! :)

The addition you're recommending is somewhat orthogonal to the article. The update at the end of the article clarifies that all one must do, strictly speaking, to protect against the weakness they describe is to power the computer down. If an adversary is knocking down your door, it seems more important to get the keys out of RAM than to first ensure filesystem and OS integrity by saving/dismounting and cleanly shutting down.

Can't TrueCrypt securely erase the sensitive data it keeps in RAM, say by overwriting the variables with random noise, when a volume is dismounted?

If so, I'd say anyone who participates in TrueCrypt's development is also part of the intended audience.
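The idea being asked about is just an explicit in-place overwrite of the key buffer before it's released. A minimal Python sketch, purely for illustration (the `wipe` helper and the stand-in key are hypothetical; a real implementation would do this in C with something like `explicit_bzero` so the compiler can't optimize the store away):

```python
import ctypes

def wipe(buf: bytearray) -> None:
    """Overwrite a mutable buffer in place so the key bytes no longer
    linger at that address after the volume is dismounted."""
    view = (ctypes.c_char * len(buf)).from_buffer(buf)
    ctypes.memset(ctypes.addressof(view), 0, len(buf))

key = bytearray(b"\x13\x37" * 16)   # stand-in for a 256-bit master key
wipe(key)
print(all(b == 0 for b in key))     # True
```

The caveat is that this only clears the one buffer: copies the OS made elsewhere (swap, hibernation file, other allocations) can survive, which is exactly why the article's point about keys in RAM while a volume is mounted still stands.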

Your average Joe won't be reading this article... As soon as he sees the technical nature of the discussion, he'll move on.

In this blog post, forensic experts realize TrueCrypt uses headers.

But there's still a valuable lesson: a half-encrypted system is an unencrypted system, and it will leak information. There's a paper on this from 2008, I think, before TrueCrypt implemented full operating system encryption:


The new era of encryption will be marked not by making existing encryption solutions more secure, but by concealing the very existence of encrypted data.

"Here's my data but you cannot read it" does not go over so well with courts, high-stakes competitors, and deep-pocketed enemies.

Once it is known "what to crack", the "how to" solution will be found. "Rubber-hose cryptography" is one of these :)

If you don't possess anything to crack (or so "they" think), you're safe :)

Interestingly, this is something Julian Assange worked on several years before starting Wikileaks:

    Starting around 1997, he co-invented the Rubberhose deniable encryption
    system, a cryptographic concept made into a software package for the Linux
    operating system designed to provide plausible deniability against
    rubber-hose cryptanalysis;[68] he originally intended the system to be used
    "as a tool for human rights workers who needed to protect sensitive data in
    the field."

The flip side of this is that it becomes impossible for someone to prove that they haven't got an encrypted volume stored somewhere. It will be interesting to see in which way the courts go with this.

That's always been impossible.

Not always. (As long as you're talking about something they are in possession of. You're right if you mean you can't prove that you don't have some flash drive hidden somewhere, but then again, it need not be encrypted if they don't have access to it.)

To start simply, it's certainly possible to prove that a disk installed with a base install of an OS (and then never touched) contains no encrypted volumes if all of the remaining blocks are empty. Every file can be compared to a known good copy and if there's no other data anywhere on the disk then there can't be anything hidden on it.

From there it's a case of proving that each bit of extra data on this disk relates to a file available under the OS or a fragment of such a file (now deleted). This gets progressively harder the more the disk has been used.

It's impossible to deny this when you have a GB+ sized area of unpartitioned space (or an unmounted partition) containing seemingly random noise, or a similar GB+ sized file that you cannot demonstrate to be something benign by unpacking/decrypting/etc. "It's a file of random noise I generated" is going to set off the alarm bells.

If I remember correctly, when you buy a blank hard drive and then format it, the formatting tool does not zero out the entire partition. The empty space left after a base install of an OS is not literally all zeroes or ones; it's random noise.

Am I wrong or outdated on this?
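Whether a stretch of disk is zero-filled, file remnants, or random-looking is easy to check mechanically, which is exactly why a large blob of high-entropy "free space" stands out. A rough sketch (the helper name is mine):

```python
import math
import os
from collections import Counter

def entropy_bits_per_byte(block: bytes) -> float:
    """Shannon entropy of a block: ~0 for zero-filled space,
    ~8 for random (or encrypted) data."""
    n = len(block)
    counts = Counter(block)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

zeroed = bytes(4096)             # what a freshly zeroed region looks like
random_like = os.urandom(4096)   # what ciphertext / urandom looks like

print(entropy_bits_per_byte(zeroed))       # 0.0
print(entropy_bits_per_byte(random_like))  # close to 8.0
```

Remnants of deleted files typically score somewhere in between, since real file formats have structure; encrypted or compressed data is what pushes a region toward 8 bits/byte.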

I think that knowability is one place where it pays to be careful and pedantic.

So in that framework "stored somewhere" is a long ways from "You can characterize the contents of a known drive".

Implementations of crypto, such as Truecrypt, rely on algorithms/ciphers such as AES which (in some modes) basically appears random... But the appearance of randomness is not enough if someone is convinced there is meaningful data there. Of course, a break such as "we can tell if there's a hidden truecrypt volume" is bad, and if I recall correctly there are ways of doing this now.

You'd need to basically never transmit the data, transmission automatically implies there is something there. If you didn't transmit, just used the data locally... And it appeared random, you'd have a pretty solid case for "they can't know". But if you slip up just once it's all over. They know.

Or, transmit random data on a schedule indistinguishable from the actual data, I suppose.

Number stations are always so creepy. Now I'm scared to get out of my room.

By the way, this is exactly why the NSA will not catch many terrorists. You can always hide data and you can always hide communication (which is just data written and read by two different endpoints).

Spying on terrorists is just a red herring to keep the public calm about it. I don't think the NSA really cares about it; there are far more valuable benefits of spying.

That would be steganography then, which is distinct.

I am not following you. Concealing? Do you mean physically hiding the media on which it is stored, like a flash chip in a tie clasp?

If you mean concealing as in hidden partitions, data streams, or digital steganography: these are all easily detectable upon close inspection. If there are extra bits where none are expected, this becomes a giveaway. Perhaps enough misdirection and a custom strategy of hiding could further obfuscate the location and content of the data, but as for hiding its existence, this is not easily accomplished (if even possible).

TrueCrypt offers a volume-within-a-volume option. The free space of a volume normally contains random data, and the hidden volume is presumably headerless. The idea is then to put something that you might want to hide in the outer volume (freaky porn, for example), then put your actual secrets (evidence of your criminal enterprise, for example) in the hidden volume. If forced to disclose the password, then only the outer volume is apparent and you can disclose that under duress.

The bit that's never made sense to me about that is that the attacker is equally aware of this feature of TrueCrypt. Once you've given the password for the outer volume, wouldn't the attacker just keep on with the rubber hose until you've also given the password for the inner volume? Obviously, she can't prove that there is an inner volume and she might just be wasting her time. On the other hand, you can't prove that there is NOT an inner volume, so it's in the attacker's best interest to just keep up the torture until either the attacker has enough keys that the encrypted data size equals that of the entire TrueCrypt volume or you've been killed by the rubber hose.

If we believe that torture would cause me to disclose the password of a file without a hidden volume, wouldn't it be just as effective at getting me to give the password to my hidden volume?

This isn't a panacea. Check out what the dm-crypt folks have to say about it (question 5.2):


tl;dr: even if your adversary can't "prove" you have another encrypted volume, when they can see all the random data on your disk or inside the "outer" volume, you can't prove you don't have an "inner" one. In a situation shitty enough that you're compelled to divulge incriminating secrets, you're boned whether you have another secret volume or not. Elsewhere in the FAQ they propose zeroing out unused space on your disk whenever travelling to totalitarian states that could demand decryption keys.

> they propose zeroing out unused space on your disk whenever travelling to totalitarian states that could demand decryption keys.

Like the UK: http://arstechnica.com/tech-policy/2007/10/uk-can-now-demand...

Wait a minute.

If I zero out a drive using the traditional method, the data is still recoverable.

If I overwrite a drive with random data, I can become a suspect and/or get a free rubber hose for life, with someone trying to extract keys for something that doesn't even exist?

The authoritarians would like to be able to torture and imprison you based on nothing more than their whim, so that's why they set it up this way.

The authoritarians' apologists will probably note that to address your concerns, you should use random data first, to whatever extent you think will thwart recovery efforts, and then zero out your empty space.
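The two-pass suggestion is straightforward to implement. A hedged sketch in Python, shown on a single file for safety (wiping actual free space means growing a filler file until the disk is full; the `wipe_file` name is mine, and note that SSD wear leveling can leave old blocks untouched regardless of what you write):

```python
import os

def wipe_file(path: str, size: int, chunk: int = 1 << 20) -> None:
    """Overwrite `size` bytes at `path`: first with random data (to
    frustrate recovery of old remnants), then with zeros (so nothing
    is left that looks like a headerless encrypted volume)."""
    for randomize in (True, False):
        with open(path, "wb") as f:
            remaining = size
            while remaining:
                n = min(remaining, chunk)
                f.write(os.urandom(n) if randomize else bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force each pass to disk before the next
```

The order matters: zeroing first and then writing random data would leave exactly the suspicious-looking noise the parent is worried about.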

I think on most operating systems the free space usually contains remnants of deleted files, not random data, unless you use some sort of secure deletion mechanism (srm, etc). If the free space of your volume contains truly random looking data then there would be reason to suspect a hidden volume.

Presumably that's not the case with TrueCrypt volumes-within-volumes, but if you're using TrueCrypt there's already reason to suspect you might have hidden volumes.

It's impossible to prove the existence or nonexistence of hidden volumes, but that could be a good or bad thing depending on your adversary's willingness to jail/torture/kill you on suspicion alone.

I'm not aware of hidden volume software that integrates with "unsuspicious" file systems. Does something like that exist?

Interesting writeup and cool that Mr. Ligh has provided these plugins / tools. On the whole though, doesn't appear to contain much novel information.

Of course, partition-only encryption has weaknesses in that the OS may store data in another partition (e.g. you've encrypted the "D:" drive but Windows just dumps a cached file to "C:", let alone the whole pagefile challenge). So you need to trust your OS not to write the master key to disk, which is widely acknowledged. I personally run with no page file, so memory ought not to be written to disk by the OS itself (barring a malicious adversary), although this solution isn't the best for someone on 1GB RAM.

Full-disk encryption would block this attack, i.e. encrypted swap on Linux (crypttab makes this quite easy) or system-drive encryption with TrueCrypt. Even if the key is dumped to disk, you can't get it, again barring online access to the system. With online access this is all null and void regardless, as they could just issue commands to dump memory to disk no matter what you've done!
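For reference, the encrypted-swap setup is a couple of lines on most distros. A hedged sketch (the device path is a placeholder for your actual swap partition; check crypttab(5) on your distro for the exact option spelling): dm-crypt maps the partition with a fresh throwaway key from /dev/urandom on every boot, so anything paged out becomes unreadable the moment the machine powers off.

```
# /etc/crypttab - swap gets a new random key at every boot
cryptswap  /dev/sda2  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

# /etc/fstab - mount the mapped device, not the raw partition
/dev/mapper/cryptswap  none  swap  sw  0  0
```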

One important thing this analysis points out: using a TrueCrypt volume (e.g. a USB stick) on a non-TrueCrypt system is dangerous.


This would emphasize the need to always be cautious in your use of cryptosystems, since you cannot simply claim "oh my data is Truecrypt'd". That will not save you from everything by itself. But if you look into the documentation, Truecrypt itself warns you about using it, and the threat model is very careful in defining what steps you need to take to adequately protect your data with Truecrypt.

It's one of those things where for most people, just a file-volume (the simplest kind where it's just a file that can be mounted as a block device), will do fine. The write-to-disk wouldn't happen very often, and to lose your data to a thief would require both the unlikely "OS dumped the memory to disk" (meaning the OS doesn't respect the flags TC puts on that memory), AND on top of that "a thief stole your laptop/desktop/external". If your adversary is organized crime, a law enforcement agency, or some other state-like actor with heavy-duty resources and specifically wants y-o-u... Then you'll need to be very careful and use a full disk encryption solution, or rather just not use a computer.

Know your tools. Know your adversary. Sleep a little easier knowing both. Or turn paranoid.

    An advantage you gain right off the bat is that patterns
    in AES keys can be distinguished from other seemingly
    random blocks of data. This is how tools like aeskeyfind
    and bulk_extractor locate the keys in memory dumps, packet
    captures, etc. In most cases, extracting the keys from RAM
    is as easy as this:

    $ ./aeskeyfind Win8SP0x86.raw

Shouldn't it be possible to store an AES key in a way that's indistinguishable from random data?

At PrivateCore, we keep key material (and the entire Linux stack) pinned in the CPU cache, then encrypt main memory. This would thwart physical memory extraction attacks like cold booting, FireWire, Thunderbolt, NV-DIMMs, bus analyzers, malicious RAM, etc.

Note, that doesn't help if someone compromises the software stack and extracts memory contents logically. A compromised kernel running in cache can just decrypt memory contents.

I was not aware pinning memory in the CPU cache was even possible. Is this done via some Linux interface? Or directly by using some hardware feature of the CPU?

In any case, it sounds like a very interesting way of maintaining greater protection for secrets.

PrivExec does something similar with ephemeral keys http://www.onarlioglu.com/privexec/

I hope linking to reddit doesn't get me banned, but this question was asked there and explained nicely:


So it's not actually the key that's easily located, but the key schedule, from which you can get the key out easily?

Yes, for projects like aeskeyfind.

For Volatility, we use the TC data structures in memory to lead us to the key (the same ones TC uses to perform reads/writes).
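To make the "patterns in AES keys" remark concrete: an AES-128 key lives in RAM alongside its 176-byte expanded key schedule, and the whole schedule is determined by its first 16 bytes. So a scanner can slide over a dump, re-expand each 16-byte window, and flag any offset where the recomputation matches what follows. This is a simplified exact-match sketch of the aeskeyfind idea (the real tool, from the cold boot paper, also tolerates bit errors; all function names here are mine):

```python
import os

def _rotl8(x, n):
    return ((x << n) | (x >> (8 - n))) & 0xFF

def _build_sbox():
    """Generate the AES S-box (GF(2^8) inverse plus affine transform)."""
    sbox = [0] * 256
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3
        q = (q ^ (q << 1)) & 0xFF                              # q /= 3
        q = (q ^ (q << 2)) & 0xFF
        q = (q ^ (q << 4)) & 0xFF
        if q & 0x80:
            q ^= 0x09
        sbox[p] = (q ^ _rotl8(q, 1) ^ _rotl8(q, 2) ^
                   _rotl8(q, 3) ^ _rotl8(q, 4) ^ 0x63)
        if p == 1:
            break
    sbox[0] = 0x63
    return sbox

SBOX = _build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key_128(key: bytes) -> bytes:
    """FIPS-197 key expansion: 16-byte key -> 176-byte schedule."""
    w = [list(key[i:i + 4]) for i in range(0, 16, 4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]  # RotWord + SubWord
            t[0] ^= RCON[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return bytes(b for word in w for b in word)

def scan_for_aes128_keys(dump: bytes):
    """Flag every offset whose next 176 bytes form a valid key schedule."""
    return [(off, dump[off:off + 16])
            for off in range(len(dump) - 175)
            if expand_key_128(dump[off:off + 16]) == dump[off:off + 176]]

# Demo: bury a schedule in random noise, then recover the key.
key = os.urandom(16)
dump = os.urandom(500) + expand_key_128(key) + os.urandom(500)
print(scan_for_aes128_keys(dump) == [(500, key)])  # True
```

This is also why storing a key "indistinguishably from random data" is hard in practice: the key bytes themselves are random-looking, but the redundancy of the schedule sitting next to them is not.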

I thought that this issue was already well known? Paging of virtual memory causes keys to be written to disk...

But this article. Wow: "This is a risk that suspects have to live with, and one that law enforcement and government investigators can capitalize on"

I love it. Only criminals use TC so let's call all TC users 'suspects'.

What about people who have portable devices and want to store sensitive financial or medical information? What about people who want to backup this information into a cloud?

It's a forensic science post about how law enforcement can compromise TC. I'd be very worried if the people they were targeting weren't suspects.

How does secure virtual memory work with this? If data in RAM is encrypted when written to disk, wouldn't that stop this? Do I know what I'm talking about?

What does this mean for me? I have all my internal + external hard drives encrypted as well as my system drive.

This article is just an analysis of one of the inherent and well-documented weaknesses in truecrypt: the fact that the encryption key must stay in RAM the entire time you are using an encrypted volume. So, as has always been the case, treat the contents of your RAM as precious when a truecrypt volume is mounted.

How would you treat your RAM contents as precious? Just making sure you're on a pristine machine, and nothing else is running? Can other unrelated processes access the key from RAM?

Well, in theory, let's say you've got a laptop encrypted with Truecrypt. You put it in sleep mode instead of switching it completely off or hibernating, because you are just nipping out for a coffee. An attacker could then steal it, lower its temperature (let's say they put it in a freezer for a while), and then extract - literally take out - the RAM from that machine and plug it into a specially prepared station which would then be used to extract the contents of that memory. In low temperatures, RAM data retention is measured in minutes, so all the data you had in your system would be preserved, including the encryption key.

Unlikely? Quite, unless someone like NSA or FBI want your data. Possible? Yes, with the right resources.

Cold boot attacks don't work on DDR3

Why? Do you have any reference?

Note the comment at the end of the paper. The authors had not been able to do it successfully with their relatively simple methodology. Sure it is harder than DDR2 but this doesn't mean it is impossible. As pointed out by the authors, the failure can simply be due to the memory controller implementation (or DDR3 protocol itself) on their test setup. If this is the case, then all it takes is a custom memory controller that is optimized for this type of extraction.

> How would you treat your RAM contents as precious?

For one, don't let anyone get physical access to the computer while it is running and the volume is mounted (even if the screen is locked). This may even apply for several minutes after the machine is turned off: https://freedom-to-tinker.com/blog/felten/new-research-resul...

> Can other unrelated processes access the key from RAM?

Processes running as the root user can.

>Processes running as the root user can.

Unless you're using a trusted computing environment, right? In which case, if you trust the processor and startup environment, the kernel can be assured to run safely and prevent such attacks. Correct?

Avoid using Thunderbolt/IEEE 1394/DisplayPort or any interface that has DMA to connect devices to your computer.

Thank you.

It means if you're worried about the contents of your encrypted drives being uncovered, you need to make sure no malicious processes gain access to a dump of your system's memory while it's booted / running / encrypted drives are mounted.

Forgive a fool, but are the Volatility plugins used (truecryptmaster and truecryptsummary) only provided to students of the official Volatility training, or are they released somewhere? Can't find them on their Google Code page.

Right now we are only using them in the class, but in the next Volatility release (2.4) they will be in SVN like all our other ones.

Would it help to store a long string of data in memory, say 10MB, then place the key somewhere in the middle of it? The placement could be based on the password. Just an idea...

It would not help. Volatility is not doing any searching; instead it is reading TC's data structures in memory to find the key, basically replicating TC's algorithms offline.

It seems like that would be extremely easy to brute force.
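Right: even if a tool didn't walk TC's structures, the hiding scheme adds almost nothing. A back-of-the-envelope check in Python (the numbers are illustrative):

```python
import math

# Hiding a 16-byte key at a secret offset inside a 10MB decoy buffer
# only multiplies the attacker's work by the number of possible offsets.
blob_size = 10 * 1024 * 1024
offsets = blob_size - 16 + 1   # every byte offset is a candidate
print(offsets)                 # 10485745

# Against the 2**128 keyspace of the key itself, the decoy adds roughly
# 23 bits of work - and only until the attacker learns the scheme.
print(round(math.log2(offsets)))  # 23
```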

I wonder, wouldn't it be useful to hide the keys to an encrypted container in some other type of memory, say video card memory or processor caches?

Are there any 'better' (knowing it's all relative) alternatives for what TrueCrypt provides?

Nothing remotely as peer-reviewed and time-tested and open-source.


This suggests that e.g. dm-crypt is definitely more "open-source", and may in fact be better audited. I guess TC is still more "time-tested".

So not safe?

Nothing's 100% safe - if data exists and can be read by its owner, de facto it can be read by someone else. The only "safe" data is that which never leaves your brain.

Until brain scanners become a thing. Then we can all get potentially charged for thought crimes.

A lot less secure if not using full disk encryption.
