Full-disk encryption for all computer drives, coming soon (computerworld.com)
27 points by nreece on Jan 29, 2009 | 37 comments



My problem with encryption technology like this is that it falls into that gray area between theoretically useful and actually useful.

Equivalent software encryption has existed for a while (it's built into Vista, for example). But when you read about government or financial-sector laptops getting stolen with people's private information on them, they're never encrypted. Why?

Because any info that's valuable is too valuable to risk losing just because someone lost the password. I mean, he says it himself...

"it can't be brought back up and read without first giving a cryptographically-strong password. If you don't have that, it's a brick. You can't even sell it on eBay."

So when you deploy this kind of thing, you have to ask yourself whether there's any circumstance under which that password could get lost. I'm not against the technology; I just don't know a lot of people who will be brave enough to use it.


I don't want to comment much on a story from a trade rag like Computerworld, especially when it ignores the fact that we already have better options than this press release product. But, I'll take a moment to correct you:

In many F-500 companies, disk encryption is already a corporate standard on all laptops; the organizations that are worried about losing passwords keep escrow keys. The problem people are trying to solve is, if you leave your car unlocked and lose your laptop, you don't have to cut a press release about how many SSNs you just lost.


I don't deny that there are ways in which encryption can be very useful, and that a very organized company can guard against ever losing its data because of a password mishap. I'm just saying I've never met one of these companies, and I've done my fair share of consulting work for companies large and small.

People are used to security that can be circumvented if need be (through an administrator's password or, at worst, a disk recovery service). When you're talking about hardware-based encryption, that option is no longer on the table, and I think that rightfully makes a lot of people uncomfortable. In that way it's a huge risk.

Which is why things like BitLocker (again, built into Vista) don't get used.


Big organizations don't trust individuals enough to make them the single point of failure for encryption. Most often, these organizations run key servers that hold the actual key and provide it based on some form of auth (usually Active Directory). If the user forgets their AD password, it's an easy reset; and if they leave the company or something, their data is easily recoverable.

This sort of "encryption in the sky" approach is available to consumers too. There are products by most of the big encryption vendors (like http://voltage.com/vsn/) that make it easy for a user to encrypt their data without putting themselves at risk for data loss.


> if you leave your car unlocked and lose your laptop,

You're assuming that they didn't lose the key as well.

You remember the key; it's the sequence of digits and letters on the post-it just beside the track pad.


That kind of encryption is useless, because I can't audit it. How do I know my data really IS encrypted and the key isn't just stored on the drive itself?


Yup. And who tells me that the manufacturer doesn't keep a master key to himself?

If I go so far as to encrypt my drive, I'd rather use an open source product that I can download and compile myself. Not that I'd ever actually audit the source code myself, but at least I can be reasonably sure that multiple independent third parties have done it for me.


Proprietary hardware encryption schemes are assessed by third parties all the time. If you just aren't comfortable with anything but open source, that's fine. But you already rely on plenty of other security systems you can't audit.


I don't. And I really do believe that I should be able to audit any security solution I use.

I don't understand what you mean by "Proprietary hardware encryption schemes are assessed by third parties all the time". Sounds like corporate speak to me. First, any "proprietary encryption scheme" is rubbish, and second, I should be the one to assess it.

My point with the drive encryption solution was that you are told that your data is encrypted, but you have no way of checking it yourself. What if encryption is disabled in your drive? Can you tell?

The final output of any encryption solution needs to be independently accessible, so that you can verify if it resembles white noise (which it should, ideally).
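
For software encryption, where the raw block device is accessible, that check is straightforward; here's a rough sketch (the device path and offsets are placeholders, and high entropy only suggests, rather than proves, sound encryption). My complaint is exactly that a self-encrypting drive never exposes this layer to you.

    # Rough check that what's on disk "looks like" ciphertext: read a chunk of
    # raw sectors and estimate the byte entropy. Well-encrypted data should sit
    # near 8 bits/byte; plaintext, filesystem metadata, or zeroed sectors score
    # much lower. (High entropy is only a hint; compressed data looks random
    # too.) The device path and offset are placeholders; run as root.
    import math
    from collections import Counter


    def shannon_entropy(data: bytes) -> float:
        counts = Counter(data)
        total = len(data)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())


    with open("/dev/sdb", "rb") as dev:
        dev.seek(100 * 1024 * 1024)          # skip past headers and partition tables
        chunk = dev.read(16 * 1024 * 1024)

    print(f"{shannon_entropy(chunk):.3f} bits/byte (encrypted data should be ~8.0)")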


So go ahead and audit them. The vendors will pay you to do it. Go thumb through a couple years of Black Hat talks for examples of people finding vulnerabilities in firmware, microcode, and closed-source cryptosystems in their spare time.

By "proprietary encryption scheme", I'm just referring to crytosystems for which you don't have the source code. You will have trouble finding any mainstream full-disk encryption vendor that isn't using something like AES, LRW-AES, or XTS-AES.

Nobody you care about is ever going to ship a product based on Super-Mega-40960-Bit-Matrix encryption. It's not 1994 anymore. There is absolutely no business case for using nonstandard encryption: if you do, you can't sell it to the government, you can't sell it to any Fortune 500 company, and you get made fun of in magazine reviews.

I don't understand your "what if encryption is disabled in your drive" comment. What makes you think that's hard to check? Also, what makes you think that "secretly not encrypting an encrypted drive" would be a sane business decision for any vendor? They'd be open to spectacular liability. The first credit card processor that lost a secretly unencrypted disk drive would end up owning Seagate.


So, how do you check if your data is encrypted? Open the hard drive and look at the platters?

I'm sorry, I am not convinced. I would much rather run software that I can a) audit and b) check the output of.


(1) I think all your questions are answered at the Opal spec site: http://tinyurl.com/aulrpj

(2) If you think that hardware-level analysis is out of scope for assessing a corporate full-disk encryption system, you're out of step with the security industry; hobbyists do hardware analysis in their spare time. There are obviously likely to be much simpler ways to verify encrypted storage than "looking at the platters".

(3) Like I said originally, if you're religious about open source, more power to you. It's obvious that there's little I can say to make you happy with the TCG. But, and I mean no offense, from what little I know of you it seems like they know much more about this topic than you do.


As to (2), my point was that I have no way of doing it.

As to (3), I am not "religious about open source". My point was that at the very least I would like to be able to verify what the output is, which I can't.

Your arguments about companies' reputations being on the line are something I don't buy. This is Hacker News; I buy technical arguments. And if you believe "reputable companies" don't do strange things with your keys or your data, may I kindly remind you of http://en.wikipedia.org/wiki/NSAKEY

Regarding (1), thanks for the reference, I will certainly learn more about the technology involved.


I don't think it's really reasonable to rule out hardware security simply because jwr from Hacker News isn't capable of assessing it, but I don't blame you for not using it.


It isn't just me, Bruce Schneier doesn't trust the vendors either, and for good reasons:

http://www.schneier.com/blog/archives/2009/02/the_doghouse_r...

(yes, I know this is about a hardware enclosure, not a drive; that's actually lucky, because someone could _check_ whether the bytes are actually encrypted)

Now let's hear you mock Bruce :-)


If you really want to know, you could always unplug the drive, plug it into another computer, and try your best to recover data. But other people are already trying that, people like tptacek, and you could just rely on their assessments.


Will we still own our data?


I was kind of worried when I saw "Trusted Computing", but having read the article I don't think there's any DRM-related use for it. It's really just HD encryption.


"Trusted Computing" is a bit of a bugbear; the technology behind it is totally inevitable and fundamentally innocuous. All it's saying is, the OS should be able to extract the promise of a secure channel to the chipset and the hardware. Without that promise, you lose your machine just once to a piece of malware and you can never trust it again.


I wouldn't call the idea that physical access is no longer the highest level of access to hardware state particularly innocuous. From the article: "You can't even sell it [a stolen drive] on eBay". Theft deterrence is nice, but the piles of bricked drives left over from Xbox upgrades are disgusting.

The same eviction of malware can be achieved through a documented hardware interface to the lowest levels of microcode (say, JTAG). The current problem is not due to openness, but closed CPU microcode.


> The same eviction of malware can be achieved through a documented hardware interface to the lowest levels of microcode (say, JTAG). The current problem is not due to openness, but closed CPU microcode.

Would you please elaborate?


kragen's comment hits the nail right on the head. Any software update system suffers from the ability to install a "rootkit" that can emulate the update mechanism itself to ensure its own survival. I was mistaken in thinking that CPU microcode was stored in non-volatile memory on the CPU itself, but the same idea goes for any area in which such rootkits can be installed.

The simple solution is to allow easy updating of that base firmware through a dedicated hardware interface. A socketed DIP isn't hip anymore (eww through-hole), but a USB device header on the motherboard would certainly work.

I'd feel better about trusted computing technology if one of their design goals wasn't preventing physical "tampering" but instead they provided unfettered access through a debug port. However, it seems like they're aiming for the exact opposite in order to facilitate naive software, theft deterrence, and business model preservation.


Can you explain more about how the TPM facilitates naive software and business model preservation? That's not my experience with it, but maybe you've had a different experience.


Certain restrictions are impossible to encode in protocols where your computer acts as your agent. For example, it's impossible to restrict the duplication or longevity of a document. Remote attestation restricts what code can be used to run a protocol, and thus enables implementation of such naive rules.

I don't currently use remote attestation for anything, but it's a matter of time until some bozo decides that remote attestation is the way to solve online banking security ("if only we could be sure they weren't running malware!"), and brings the technology stack mainstream.

When that happens, the non-techies (seeing no distinction) will enforce their business rules on end users' computers (they occasionally try to do this now, but run up against reality). Client-side-only verification of form fields may be laughable now, but it won't be so funny when there's no incentive to fix it because common people cannot exploit it.


Maybe it would be better if the machine's owner could extract that secure channel, rather than the OS. For example, if you could reflash the BIOS with a JTAG header on the motherboard, then you could be sure that no compromised BIOS code survived (unless your JTAG programmer was broken). Or you could have a boot monitor in non-Flash ROM that could checksum the Flash BIOS, and then you could compare the checksum to the version before the malware ran, and pop the Flash chip out of its socket to replace it if necessary.
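
Something like this is all the checksum step needs; a toy sketch (file names and the known-good value are placeholders), assuming you can dump the flash over JTAG or with an external programmer:

    # Toy sketch of "checksum the flash BIOS and compare it to the version from
    # before the malware ran". The firmware image is assumed to have been read
    # out over JTAG or an external flash programmer; names are placeholders.
    import hashlib

    KNOWN_GOOD_SHA256 = "put-the-hash-recorded-while-the-machine-was-clean-here"


    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(65536), b""):
                h.update(block)
        return h.hexdigest()


    if sha256_of("bios_dump.bin") != KNOWN_GOOD_SHA256:
        print("firmware differs from the known-good image: reflash before trusting it")
    else:
        print("firmware matches the known-good image")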

Does that answer the threat model you're thinking of?

A lot of the stuff under the "Trusted Computing" rubric goes a lot further than what you're describing. Some of it is designed to allow third parties (other than the OS and the machine's owner) to establish a secure channel to the chipset and the hardware over the network. That is almost certainly a bad idea.


The "secure channel to the chipset over the network" is just a public key pair in a chip. You can use it however you want, or not use it at all. There's no system built in to the TPM that allows your chipset to go talk to Microsoft without your permission.

Obviously, if you give Microsoft permission to own your machine, it can go talk to Microsoft.com anytime it wants. But it already could without the TPM.
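
As a rough illustration of what "a public key pair in a chip" buys you, here is the shape of an attestation quote. This is not the TPM command set; generic RSA signing from Python's cryptography package stands in for the chip.

    # Rough shape of a remote-attestation "quote": the chip holds a private key
    # that never leaves it, signs its current PCR state plus a verifier-supplied
    # nonce, and the verifier checks the signature with the chip's public key.
    # Generic RSA here stands in for the chip; this is not the real TPM API.
    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    chip_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    chip_public = chip_private.public_key()      # the only part that leaves the chip

    nonce = os.urandom(16)                       # supplied by the remote verifier
    pcr_state = b"\x12" * 20                     # stand-in for the PCR digest

    # "Inside the chip": sign what the platform claims its PCR state is.
    quote = chip_private.sign(pcr_state + nonce, padding.PKCS1v15(), hashes.SHA256())

    # On the verifier: raises InvalidSignature if the quote was forged or replayed.
    chip_public.verify(quote, pcr_state + nonce, padding.PKCS1v15(), hashes.SHA256())
    print("quote verifies: this PCR state was reported by the holder of the chip key")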


A public-key pair in a chip is way overkill for the threat model I was talking about.

Is it the same threat model you were talking about? You didn't answer that question.

And while it's perhaps necessary and desirable that our machines contain tamper-resistant chips with public-key pairs in them, it's neither necessary nor desirable that the private key be secret from the machine's legitimate owner, which is part of the TCG scheme.


The key isn't kept secret from the machine's legitimate owner, Kragen. For instance, TPM keys need to be migratable.

Do you spend a lot of time working with TPM/TCG technology? I don't want to sound patronizing; maybe it's me that's missing something.


> The key isn't kept secret from the machine's legitimate owner

Please explain how remote attestation can work if the machine's owner has access to the signing key.

(edit: having just read up on the newer Direct "Anonymous" Attestation scheme, I see that the signing key is no longer a permanent part of the TPM chip, but is generated with an issuer. Still, this generated key is kept secret in the "trusted" module, and my question remains)

I know the article is talking about secure storage and I'm picking on remote attestation, but they're both part of a technology suite which treats the end-user as an attacker.

A simple rule of thumb - if the capabilities of the hardware cannot be emulated by a VM, it's not in the owner's best interest.


We're both right, and you're more right than I am, because I was ignoring the EK, which is "burned in".

But regardless, the EK and attestation schemes are just capabilities of the TPM chip. You can use them or not use them. The problem isn't the TPM --- which we need. The problem is what Microsoft wants to do with the TPM.


The problem is that having those capabilities in your hardware changes informed users' negotiating position with Microsoft from "I don't have any way to prove to you that I'm not running under SoftICE" to "I don't want to prove to you that I'm not running under SoftICE".

To answer your earlier question, I don't do any work with TCG standards, in part because I don't want to make the situation any worse and in part because I find modern computer security in general extremely depressing.


If you believe it's possible to have bulletproof software, then the TPM is a bad idea; we should just make our software bulletproof. This is what Daniel J. Bernstein believes.

If, like most of us, you don't believe it's possible to have bulletproof software, then at some point you need something like the TPM to bind a known-good running kernel to your hardware securely. Without it, any bug in your kernel leaves you with no way to trust your system from that point on.
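
The binding works through measurement registers; here's a simplified sketch of the TPM 1.2-style extend operation (toy measurements, not a real boot chain):

    # Simplified sketch of how a TPM "binds" a boot chain: each stage hashes the
    # next stage and extends a Platform Configuration Register (PCR) before
    # handing over control. A PCR can only be extended, never set, so the final
    # value commits to the whole chain; the TPM will only unseal a disk key (or
    # sign an attestation quote) if the PCRs match the expected values.
    import hashlib


    def extend(pcr: bytes, measurement: bytes) -> bytes:
        """PCR_new = SHA-1(PCR_old || SHA-1(measurement)), TPM 1.2 style."""
        return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()


    pcr = b"\x00" * 20                           # PCRs reset to zero at power-on
    for stage in (b"firmware image", b"bootloader", b"kernel"):
        pcr = extend(pcr, stage)

    print("final PCR:", pcr.hex())
    # Any change to any stage (say, a bootkit) yields a different final PCR,
    # so the sealed disk key simply won't be released to the tampered system.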

I think the TPM solves way more problems than it creates; for instance, TPM-related techniques will allow us to get rid of clunky hardware crypto tokens, and instead bake them into our machines securely; it will also potentially allow us to have public kiosk computers that are safe to use.

My only objection to TPM discussions is that EFF-types engage in a lot of hyperbole about them. It's true that the TPM makes it easier for Microsoft to enforce DRM and copy protection. But that's just a property of having better system security. Most of what the TPM does, you want.


Only a fool would give up control over his computer because he's afraid of wiping it. You can bet your last dollar that promise will be enforced against you as well.


Who said anything about "giving up control over your computer"? Do you actually know anything about Trusted Computing, or is this just propaganda? The TPM specs are public, the programming interface is public, and if you don't like how your software uses it, write different TPM software.

Your problem is with Microsoft, not with hardware security. If you don't like Microsoft, don't run Windows.


How will this affect data recovery?


If you know the password, the tools for going through the drive will not change. If you have to remove the PCB and replace it with one from another drive, or if you have lost the password, then you are stuffed. Unless, of course, the drive maker has built in a backdoor; from past experience of security on PCs, the unchangeable security password will probably be 'seagate'.


THIS IS NOT MY COMMENT.

I don't know what happened, but there must be a glitch somewhere in the software. I doubt someone has bothered to crack my account; most likely there is a bug that attributes other people's posts to my user.

This is the second post that is not written by me. Does HN have a place to post bugs?




