This research is interesting and useful, but do we really want to scare off people from using TrueCrypt?
Couldn't you add a paragraph at the top or to the side that says, for example:
"If TrueCrypt is used in the intended way, i.e., you finish your work with TrueCrypt, dismount the TrueCrypt volume, and then shut down your computer, then the data protected by TrueCrypt is secure if your computer is lost, stolen, or copied at that point."
I understand your intended audience (I'm one of them). But TrueCrypt is the best protection we've got (in terms of price (free), quality, license terms, multi-platform support, algorithm choice, etc.). I presume the author likes it and uses it himself too.
We want more people to use it, right? So let's at least make it clear that it isn't cracked or broken when used in the intended manner.
A lot more than is currently the standard.
Security is a horribly complex subject, and the people that need it most often don't have the time or knowledge to judge tools and sort out implementation details.
I don't know who "we" would be in this case, but in general, I'd like to see enough hand holding that everybody can use encryption safely and confidently without having to be cpercival or tptacek.
I just noticed that the author commented that he attempted to add some clarity to this effect.
The examples really made the article shine. I knew you could dump memory, but seeing identification tools in action is ... too easy! :)
If so, I'd say anyone who participates in Truecrypt's development is also intended audience.
But there's still a valuable lesson: a half-encrypted system is an unencrypted system, and it will leak information. There's a paper on this from 2008 I think, before TrueCrypt implemented full operating system encryption:
"Here's my data but you cannot read it" - does not runs so well with courts, high stakes competitors and deep pocketed enemies.
Once it is known "what to crack" the "how to" solution will be found.
"Rubber hose cryptography" is one of these :)
If you're not possessing anything to crack (or so "they" think), you're safe :)
Starting around 1997, he co-invented the Rubberhose deniable encryption system, a cryptographic concept made into a software package for the Linux operating system designed to provide plausible deniability against rubber-hose cryptanalysis; he originally intended the system to be used "as a tool for human rights workers who needed to protect sensitive data in ...
To start simply, it's certainly possible to prove that a disk installed with a base install of an OS (and then never touched) contains no encrypted volumes if all of the remaining blocks are empty. Every file can be compared to a known good copy and if there's no other data anywhere on the disk then there can't be anything hidden on it.
From there it's a case of proving that each bit of extra data on this disk relates to a file available under the OS or a fragment of such a file (now deleted). This gets progressively harder the more the disk has been used.
It's impossible to deny this when you have a GB+ sized area of unpartitioned space (or an unmounted partition) containing seemingly random noise, or a similar GB+ sized file that you cannot demonstrate to be something benign by unpacking/decrypting/etc. "It's a file of random noise I generated" is going to set off the alarm bells.
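(To illustrate what I mean by "seemingly random noise": that part at least is trivial to automate. A rough, unoptimized sketch of the kind of entropy scan an examiner could run over a disk image - the path, block size, and threshold are just placeholders, and note that compressed data also scores high, so this only flags candidates rather than proving anything.)

    import math

    BLOCK = 1024 * 1024   # scan in 1 MiB chunks

    def shannon_entropy(buf):
        # bits of entropy per byte: 0.0 for an all-zero block, ~8.0 for random data
        counts = [0] * 256
        for b in buf:
            counts[b] += 1
        n = len(buf)
        return -sum(c / n * math.log2(c / n) for c in counts if c)

    with open("disk.img", "rb") as f:
        offset = 0
        while chunk := f.read(BLOCK):
            if shannon_entropy(chunk) > 7.9:
                print(f"offset {offset:#x}: looks like random (or encrypted) data")
            offset += len(chunk)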
Am I wrong or outdated on this?
So in that framework "stored somewhere" is a long ways from "You can characterize the contents of a known drive".
You'd basically need to never transmit the data; transmission automatically implies there is something there. If you never transmitted it, just used the data locally, and it appeared random, you'd have a pretty solid case for "they can't know". But if you slip up just once it's all over. They know.
If you mean concealing as in hidden partitions, data streams, or digital steganography - these are all easily detectable upon close inspection. If there are extra bits where none are expected, that becomes a giveaway. Perhaps enough misdirection and a custom strategy of hiding could further obfuscate the location and content of the data, but as for hiding its existence - this is not easily accomplished (if even possible).
If we believe that torture would cause me to disclose the password of a file without a hidden volume, wouldn't it be just as effective at getting me to give the password to my hidden volume?
tl;dr: even if your adversary can't "prove" you have another encrypted volume, when they can see all the random data on your disk or inside the "outer" volume, you can't prove you don't have an "inner" one. In a situation shitty enough that you're compelled to divulge incriminating secrets, you're boned whether you have another secret volume or not. Elsewhere in the FAQ they propose zeroing out unused space on your disk whenever travelling to totalitarian states that could demand decryption keys.
Like the UK: http://arstechnica.com/tech-policy/2007/10/uk-can-now-demand...
If I zero out a drive using the traditional method - the data is still recoverable.
If I zero it out using random data - I can become a suspect and/or get the free rubber hose for life, while they try to extract keys for something that doesn't even exist?
The authoritarians' apologists will probably note that to address your concerns, you should use random data first, to whatever extent you think will thwart recovery efforts, and then zero out your empty space.
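A rough sketch of that random-then-zeros pass over free space (my own illustration, not from the linked FAQ): fill the filesystem with a throwaway file of random bytes, delete it, then repeat with zeros. It glosses over journals, reserved blocks, file slack, and SSD wear-levelling, so treat it as a concept demo rather than a sanitization tool.

    import os

    CHUNK = 1024 * 1024   # write in 1 MiB pieces

    def fill_free_space(path, make_chunk):
        # Write chunks until the disk is full, then delete the filler file so
        # the space is free again - but overwritten.
        try:
            with open(path, "wb", buffering=0) as out:
                while True:
                    out.write(make_chunk(CHUNK))
        except OSError:
            pass                           # expected: no space left on device
        finally:
            if os.path.exists(path):
                os.remove(path)

    fill_free_space("filler.rnd", os.urandom)            # pass 1: random data
    fill_free_space("filler.zero", lambda n: bytes(n))    # pass 2: zeros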
Presumably that's not the case with TrueCrypt volume-within-a-volume setups, but if you're using TrueCrypt there's already reason to suspect you might have hidden volumes.
It's impossible to prove the existence or nonexistence of hidden volumes, but that could be a good or bad thing depending on your adversary's willingness to jail/torture/kill you on suspicion alone.
I'm not aware of hidden volume software that integrates with "unsuspicious" file systems. Does something like that exist?
Of course, partition-only encryption has weaknesses in that the OS may store data in another partition (e.g. you've encrypted the "D:" drive but Windows just dumps a cached file to "C:", to say nothing of the whole pagefile challenge). So you need to trust your OS not to write the master key to disk, which is a widely acknowledged problem. I personally run with no page file, so memory ought not to be written to disk by the OS itself (barring a malicious adversary), although this solution isn't the best for someone with 1GB of RAM.
Full-disk encryption would block this attack, e.g. encrypted swap on Linux (crypttab makes this quite easy) or system-drive encryption in Truecrypt. Even if memory is dumped to disk, an attacker can't read it - again, barring online access to the system. With online access this is all null and void regardless, as they could just issue commands to dump memory to disk no matter what you've done!
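For reference (not from the article), the usual Debian/Ubuntu-style recipe for swap that gets re-keyed with a throwaway random key on every boot looks roughly like this; the device path is a placeholder, and in practice you'd point it at a stable /dev/disk/by-id or by-partuuid name so a renumbered disk doesn't get wiped by accident:

    # /etc/crypttab - swap keyed from /dev/urandom at each boot
    cryptswap  /dev/sda3  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

    # /etc/fstab - use the mapped device as swap
    /dev/mapper/cryptswap  none  swap  sw  0  0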
This would emphasize the need to always be cautious in your use of cryptosystems, since you cannot simply claim "oh my data is Truecrypt'd". That will not save you from everything by itself. But if you look into the documentation, Truecrypt itself warns you about using it, and the threat model is very careful in defining what steps you need to take to adequately protect your data with Truecrypt.
It's one of those things where for most people, just a file-volume (the simplest kind where it's just a file that can be mounted as a block device), will do fine. The write-to-disk wouldn't happen very often, and to lose your data to a thief would require both the unlikely "OS dumped the memory to disk" (meaning the OS doesn't respect the flags TC puts on that memory), AND on top of that "a thief stole your laptop/desktop/external". If your adversary is organized crime, a law enforcement agency, or some other state-like actor with heavy-duty resources and specifically wants y-o-u... Then you'll need to be very careful and use a full disk encryption solution, or rather just not use a computer.
Know your tools. Know your adversary. Sleep a little easier knowing both. Or turn paranoid.
An advantage you gain right off the bat is that patterns in AES keys can be distinguished from other seemingly random blocks of data. This is how tools like aeskeyfind and bulk_extractor locate the keys in memory dumps, packet captures, etc. In most cases, extracting the keys from RAM is as easy as this:
$ ./aeskeyfind Win8SP0x86.raw
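Not from the article, but to make "patterns in AES keys" concrete: the expanded round keys that sit next to a key in RAM have to satisfy the AES key schedule, and that relationship is what aeskeyfind checks for. A minimal, unoptimized Python sketch of the same check (AES-128 only, no bit-error tolerance, slow on real dumps):

    # Toy version of the aeskeyfind idea: expand every 16-byte window of a
    # memory dump as an AES-128 key and report windows whose following 160
    # bytes match the key schedule.
    import sys

    def gf_mul(a, b):
        # multiply in GF(2^8) with the AES polynomial 0x11B
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    def make_sbox():
        # AES S-box: multiplicative inverse followed by the affine transform
        sbox = []
        for x in range(256):
            inv = next((y for y in range(1, 256) if gf_mul(x, y) == 1), 0)
            s = 0x63
            for r in range(5):          # xor of inv rotated left by 0..4 bits
                s ^= ((inv << r) | (inv >> (8 - r))) & 0xFF
            sbox.append(s)
        return sbox

    SBOX = make_sbox()
    RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

    def expand_aes128(key):
        # return the full 176-byte AES-128 key schedule for a 16-byte key
        w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
        for i in range(4, 44):
            t = list(w[i - 1])
            if i % 4 == 0:
                t = [SBOX[b] for b in t[1:] + t[:1]]   # RotWord + SubWord
                t[0] ^= RCON[i // 4 - 1]               # Rcon
            w.append([w[i - 4][j] ^ t[j] for j in range(4)])
        return bytes(b for word in w for b in word)

    def find_keys(data):
        for off in range(len(data) - 176 + 1):
            cand = data[off:off + 16]
            if expand_aes128(cand)[16:] == data[off + 16:off + 176]:
                yield off, cand

    if __name__ == "__main__":
        dump = open(sys.argv[1], "rb").read()      # e.g. a raw memory image
        for off, key in find_keys(dump):
            print(f"possible AES-128 key at offset {off:#x}: {key.hex()}")

(The real tool also handles AES-256 and tolerates bit errors from memory decay, which matters for cold-boot images.)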
Note: that doesn't help if someone compromises the software stack and extracts memory contents logically. A compromised kernel running in cache can just decrypt memory contents.
In any case, it sounds like a very interesting way of maintaining greater protection for secrets.
For Volatility we use the TC data structures in memory to lead us to the key (the same ones TC uses to perform reads/writes)
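For anyone who wants to try that: in Volatility 2.x this lives in the TrueCrypt plugins (truecryptsummary, truecryptmaster, truecryptpassphrase). Exact plugin names and options depend on your build, so check the plugin list, but the invocation is along the lines of:

    $ python vol.py -f Win8SP0x86.raw --profile=Win8SP0x86 truecryptsummary
    $ python vol.py -f Win8SP0x86.raw --profile=Win8SP0x86 truecryptmaster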
But this article. Wow:
"This is a risk that suspects have to live with, and one that law enforcement and government investigators can capitalize on"
I love it. Only criminals use TC so let's call all TC users 'suspects'.
What about people who have portable devices and want to store sensitive financial or medical information? What about people who want to backup this information into a cloud?
Unlikely? Quite, unless someone like the NSA or FBI wants your data. Possible? Yes, with the right resources.
For one, don't let anyone get physical access to the computer while it is running and the volume is mounted (even if the screen is locked). This may even apply for several minutes after the machine is turned off: https://freedom-to-tinker.com/blog/felten/new-research-resul...
> Can other unrelated processes access the key from RAM?
Processes running as the root user can.
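To make that concrete: a minimal sketch (not from the article) of what a root-privileged process can do on Linux - walk another process's readable mappings from /proc/<pid>/maps and pull the raw bytes out of /proc/<pid>/mem, which could then be scanned for key schedules. Depending on the kernel's Yama ptrace_scope setting, an explicit ptrace attach may also be needed.

    # Sketch: as root (CAP_SYS_PTRACE), read another process's memory.
    import sys

    pid = sys.argv[1]
    with open(f"/proc/{pid}/maps") as maps, open(f"/proc/{pid}/mem", "rb") as mem:
        for line in maps:
            fields = line.split()
            addr_range, perms = fields[0], fields[1]
            if not perms.startswith("r"):
                continue                 # skip unreadable mappings
            start, end = (int(x, 16) for x in addr_range.split("-"))
            try:
                mem.seek(start)
                chunk = mem.read(end - start)
            except (OSError, ValueError, OverflowError):
                continue                 # e.g. the [vsyscall] region
            # ... scan `chunk` for key material here ...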
Unless you're using a trusted computing environment, right? In which case, if you trust the processor and startup environment, the kernel can be assured to run safely and prevent such attacks. Correct?
This suggests that e.g. dm-crypt is definitely more "open-source", and may in fact be better audited. I guess TC is still more "time-tested".