I was kind of worried when I saw "Trusted Computing", but having read the article I don't think there's any DRM-related use for it. It's really just HD encryption.
"Trusted Computing" is a bit of a bugbear; the technology behind it is totally inevitable and fundamentally innocuous. All it's saying is, the OS should be able to extract the promise of a secure channel to the chipset and the hardware. Without that promise, you lose your machine just once to a piece of malware and you can never trust it again.
I wouldn't call the idea that physical access no longer grants the highest level of control over hardware state particularly innocuous. From the article: "You can't even sell it [a stolen drive] on eBay". Theft deterrence is nice, but the piles of bricked drives left over from Xbox upgrades are disgusting.
The same eviction of malware can be achieved through a documented hardware interface to the lowest levels of microcode (say, JTAG). The current problem is not due to openness, but closed CPU microcode.
> The same eviction of malware can be achieved through a documented hardware interface to the lowest levels of microcode (say, JTAG). The current problem is not due to openness, but closed CPU microcode.
kragen's comment hits the nail right on the head. Any software update system suffers from the ability to install a "rootkit" that can emulate the update mechanism itself to ensure its own survival. I was mistaken in thinking that CPU microcode was stored in non-volatile memory on the CPU itself, but the same idea goes for any area in which such rootkits can be installed.
The simple solution is to allow easy updating of that base firmware through a dedicated hardware interface. A socketed DIP isn't hip anymore (eww through-hole), but a USB device header on the motherboard would certainly work.
I'd feel better about trusted computing technology if one of their design goals wasn't preventing physical "tampering" but instead they provided unfettered access through a debug port. However, it seems like they're aiming for the exact opposite in order to facilitate naive software, theft deterrence, and business model preservation.
Can you explain more about how the TPM facilitates naive software and business model preservation? That's not my experience with it, but maybe you've had a different experience.
Certain restrictions are impossible to encode in protocols where one's computer acts as one's own agent. For example, it's impossible to restrict the duplication or longevity of a document. Remote attestation restricts what code can be used to run a protocol, and thus enables the implementation of such naive rules.
I don't currently use remote attestation for anything, but it's a matter of time until some bozo decides that remote attestation is the way to solve online banking security ("if only we could be sure they weren't running malware!") and brings the technology stack mainstream.
When that happens, the non-techies (seeing no distinction) will enforce their business rules on end users' computers (they occasionally try to do this now, but run up against reality). Client-side-only verification of form fields may be laughable now, but it won't be so funny once there's no incentive to fix it, because ordinary people will no longer be able to exploit it.
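To make that concrete, here's a rough sketch of why client-side-only checks are laughable today (the endpoint and field names are made up):

    # The page's JavaScript might cap "amount" at some limit, but a direct
    # POST never runs that JavaScript, so only a server-side check matters.
    # Endpoint and field names are hypothetical.
    import requests

    resp = requests.post(
        "https://bank.example.com/transfer",          # hypothetical URL
        data={"to_account": "12345678", "amount": "999999"},
    )
    print(resp.status_code)

With remote attestation mandating an "approved" client stack, that trivial bypass disappears -- and so does the pressure on anyone to keep validating things server-side.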
Maybe it would be better if the machine's owner could extract that secure channel, rather than the OS. For example, if you could reflash the BIOS with a JTAG header on the motherboard, then you could be sure that no compromised BIOS code survived (unless your JTAG programmer was broken). Or you could have a boot monitor in non-Flash ROM that could checksum the Flash BIOS, and then you could compare the checksum to the version before the malware ran, and pop the Flash chip out of its socket to replace it if necessary.
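A minimal sketch of that comparison, assuming you've saved a known-good image and can read the current one back out of the flash part (file names are hypothetical; a real boot monitor would read the chip directly rather than a dump, but the check is the same):

    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    known_good = sha256_of("bios-backup.bin")    # saved before the malware ran
    current = sha256_of("bios-current.bin")      # read back from the flash chip

    if current != known_good:
        print("BIOS image changed; pop the flash chip out and reflash it")
    else:
        print("BIOS image matches the known-good copy")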
Does that answer the threat model you're thinking of?
A lot of the stuff under the "Trusted Computing" rubric goes a lot further than what you're describing. Some of it is designed to allow third parties (other than the OS and the machine's owner) to establish a secure channel to the chipset and the hardware over the network. That is almost certainly a bad idea.
The "secure channel to the chipset over the network" is just a public key pair in a chip. You can use it however you want, or not use it at all. There's no system built in to the TPM that allows your chipset to go talk to Microsoft without your permission.
Obviously, if you give Microsoft permission to own your machine, it can go talk to Microsoft.com anytime it wants. But it already could without the TPM.
A public-key pair in a chip is way overkill for the threat model I was talking about.
Is it the same threat model you were talking about? You didn't answer that question.
And while it's perhaps necessary and desirable that our machines contain tamper-resistant chips with public-key pairs in them, it's neither necessary nor desirable that the private key be kept secret from the machine's legitimate owner, which is what the TCG scheme does.
> The key isn't kept secret from the machine's legitimate owner
Please explain how remote attestation can work if the machine's owner has access to the signing key.
(edit: having just read up on the newer Direct "Anonymous" Attestation scheme, I see that the signing key is no longer a permanent part of the TPM chip, but is generated with an issuer. Still, this generated key is kept secret in the "trusted" module, and my question remains)
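To spell out the question, here's a toy model of what a "quote" is (real TPMs use RSA/ECC keys and a TPM-specific quote structure; this Ed25519 sketch is just the shape of the argument):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    attestation_key = Ed25519PrivateKey.generate()   # supposed to live inside the TPM

    def quote(pcr_values, nonce):
        # The chip signs a digest of its measurement registers plus the
        # verifier's nonce; the verifier compares against known-good PCRs.
        digest = hashlib.sha256(b"".join(pcr_values) + nonce).digest()
        return attestation_key.sign(digest)

If I can call attestation_key.sign() myself, I can quote any PCR values I like while running whatever I want, and the verifier learns nothing. So either the key is kept from me, or attestation proves nothing.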
I know the article is talking about secure storage and I'm picking on remote attestation, but they're both part of a technology suite which treats the end-user as an attacker.
A simple rule of thumb: if a capability of the hardware cannot be emulated by a VM, it's not in the owner's best interest.
We're both right, and you're more right than I am, because I was ignoring the EK, which is "burned in".
But regardless, the EK and attestation schemes are just capabilities of the TPM chip. You can use them or not use them. The problem isn't the TPM --- which we need. The problem is what Microsoft wants to do with the TPM.
The problem is that having those capabilities in your hardware changes informed users' negotiating position with Microsoft from "I don't have any way to prove to you that I'm not running under SoftICE" to "I don't want to prove to you that I'm not running under SoftICE".
To answer your earlier question, I don't do any work with TCG standards, in part because I don't want to make the situation any worse and in part because I find modern computer security in general extremely depressing.
If you believe it's possible to have bulletproof software, then the TPM is a bad idea; we should just make our software bulletproof. This is what Daniel J. Bernstein believes.
If, like most of us, you don't believe it's possible to have bulletproof software, then at some point you need something like the TPM to bind a known-good running kernel to your hardware securely. Without it, any bug in your kernel leaves you with no way to trust your system from that point on.
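The core mechanism is just measured boot: each stage hashes the next into a PCR before handing control to it, roughly like this (a sketch of the extend operation, not real TPM code; SHA-256 here for illustration, and on a real chip the PCR can only be extended, never set):

    import hashlib

    pcr = b"\x00" * 32   # power-on reset value

    def extend(pcr, measurement):
        # The new value depends on the old value and the new measurement,
        # so no later stage can erase the record of what booted before it.
        return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

    for stage in [b"firmware image", b"bootloader", b"kernel"]:
        pcr = extend(pcr, stage)

    print(pcr.hex())   # sealed keys and quotes are bound to this final value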
I think the TPM solves way more problems than it creates; for instance, TPM-related techniques will allow us to get rid of clunky hardware crypto tokens, and instead bake them into our machines securely; it will also potentially allow us to have public kiosk computers that are safe to use.
My only objection to TPM discussions is that EFF-types engage in a lot of hyperbole about them. It's true that the TPM makes it easier for Microsoft to enforce DRM and copy protection. But that's just a property of having better system security. Most of what the TPM does, you want.
Only a fool would give up control over his computer because he's afraid of having to wipe it. You can bet your last dollar that promise will be enforced against you as well.
Who said anything about "giving up control over your computer"? Do you actually know anything about Trusted Computing, or is this just propaganda? The TPM specs are public, the programming interface is public, and if you don't like how your software uses it, write different TPM software.
Your problem is with Microsoft, not with hardware security. If you don't like Microsoft, don't run Windows.