The "secure channel to the chipset over the network" is just a public key pair in a chip. You can use it however you want, or not use it at all. There's no system built into the TPM that allows your chipset to go talk to Microsoft without your permission.
Obviously, if you give Microsoft permission to own your machine, it can go talk to Microsoft.com anytime it wants. But it already could without the TPM.
A public-key pair in a chip is way overkill for the threat model I was talking about.
Is it the same threat model you were talking about? You didn't answer that question.
And while it's perhaps necessary and desirable that our machines contain tamper-resistant chips with public-key pairs in them, it's neither necessary nor desirable that the private key be kept secret from the machine's legitimate owner, as the TCG scheme requires.
> The key isn't kept secret from the machine's legitimate owner
Please explain how remote attestation can work if the machine's owner has access to the signing key.
(edit: having just read up on the newer Direct "Anonymous" Attestation scheme, I see that the signing key is no longer a permanent part of the TPM chip, but is generated in a join protocol with an issuer. Still, this generated key is kept secret inside the "trusted" module, so my question stands)
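To make the objection concrete, here's a toy sketch of why attestation only means anything if the owner can't touch the signing key. (Python; an HMAC stands in for the TPM's asymmetric signature, and all the key and digest values are made up for illustration — this is not the actual TPM quote protocol.)

```python
import hashlib
import hmac

# Hypothetical stand-in for the private attestation key sealed inside the TPM.
ATTESTATION_KEY = b"secret-held-inside-the-TPM"

def quote(pcr_digest: bytes, nonce: bytes, key: bytes) -> bytes:
    """Sign (PCR digest, verifier nonce) -- roughly what a TPM quote does."""
    return hmac.new(key, pcr_digest + nonce, hashlib.sha256).digest()

def verify(pcr_digest: bytes, nonce: bytes, sig: bytes, key: bytes) -> bool:
    """Remote verifier checks the signature against the expected digest."""
    return hmac.compare_digest(sig, quote(pcr_digest, nonce, key))

known_good = hashlib.sha256(b"known-good kernel measurements").digest()
nonce = b"verifier-chosen-nonce"

# Honest machine: the chip signs its real PCR state, verifier accepts.
sig = quote(known_good, nonce, ATTESTATION_KEY)
assert verify(known_good, nonce, sig, ATTESTATION_KEY)

# But if the owner can read ATTESTATION_KEY, they can sign the "good"
# digest themselves while actually running anything at all -- the quote
# no longer proves what software is running:
forged = quote(known_good, nonce, ATTESTATION_KEY)  # produced outside the TPM
assert verify(known_good, nonce, forged, ATTESTATION_KEY)  # verifier is fooled
```

So the scheme is only useful to the remote party precisely to the extent that the key is unreadable by the machine's owner — which is the point I'm making.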
I know the article is talking about secure storage and I'm picking on remote attestation, but they're both part of a technology suite which treats the end-user as an attacker.
A simple rule of thumb: if the capabilities of the hardware cannot be emulated by a VM, they're not in the owner's best interest.
We're both right, and you're more right than I am, because I was ignoring the EK (the Endorsement Key), which is "burned in".
But regardless, the EK and attestation schemes are just capabilities of the TPM chip. You can use them or not use them. The problem isn't the TPM --- which we need. The problem is what Microsoft wants to do with the TPM.
The problem is that having those capabilities in your hardware changes informed users' negotiating position with Microsoft from "I don't have any way to prove to you that I'm not running under SoftICE" to "I don't want to prove to you that I'm not running under SoftICE".
To answer your earlier question, I don't do any work with TCG standards, in part because I don't want to make the situation any worse and in part because I find modern computer security in general extremely depressing.
If you believe it's possible to have bulletproof software, then the TPM is a bad idea; we should just make our software bulletproof. This is what Daniel J. Bernstein believes.
If, like most of us, you don't believe it's possible to have bulletproof software, then at some point you need something like the TPM to bind a known-good running kernel to your hardware securely. Without it, any bug in your kernel leaves you with no way to trust your system from that point on.
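The "bind a known-good running kernel to your hardware" part works through measured boot: each boot stage hashes the next one into a Platform Configuration Register before handing off, and the extend operation is one-way, so a later compromise can't rewrite the record. A minimal sketch (SHA-256 as the hash; the stage names are invented):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM PCR extend: new PCR = H(old PCR || H(measured code)).
    # There is no "set" operation, only extend, so the value encodes
    # the entire boot history in order.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at reset
for stage in (b"firmware", b"bootloader", b"kernel"):  # hypothetical boot chain
    pcr = extend(pcr, stage)

# Change any stage anywhere in the chain and the final value diverges;
# a compromised kernel can't un-extend its way back to the good value.
tampered = b"\x00" * 32
for stage in (b"firmware", b"bootloader", b"evil kernel"):
    tampered = extend(tampered, stage)

assert tampered != pcr
```

That final PCR value is what the TPM can then seal secrets to (or quote to a verifier): the secrets only unseal when the boot measurements match the known-good chain.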
I think the TPM solves way more problems than it creates; for instance, TPM-related techniques will allow us to get rid of clunky hardware crypto tokens and instead bake that functionality into our machines securely; it will also potentially allow us to have public kiosk computers that are safe to use.
My only objection to TPM discussions is that EFF-types engage in a lot of hyperbole about them. It's true that the TPM makes it easier for Microsoft to enforce DRM and copy protection. But that's just a property of having better system security. Most of what the TPM does, you want.