> If an attacker has physical access, the discrete TPM is an attack surface anyway, and a known attack at that.
If you're wondering what they mean by this, [1] has been around since 2018. It's not unusual for a motherboard to put the TPM on a removable module, so you don't even have to desolder the chip to MITM the communications.
The most recent Intel and AMD CPUs have "firmware TPMs" that run in the CPU's so-called "trusted execution environment" so there's no I2C to interpose. Of course, that doesn't mean you're protected against attackers who have physical access to the machine; they can simply install a keylogger.
Funnily enough, in TPM 2.0 there's a way around MITM attacks like that - you can establish an encrypted connection between the TPM and CPU, which, outside of first-time configuration (which should happen in a controlled environment anyway), should provide a reasonable roadblock to a successful MITM attack.
But CPU-side software needs to use it, and without default well-known keys...
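For the curious, the session-key derivation underneath those encrypted sessions is a counter-mode KDF (the TPM 2.0 spec calls it KDFa, based on SP800-108). A rough Python sketch of the structure - the byte layout is from my reading of the spec and is not validation-tested, and the inputs (`nonceTPM`, `nonceCaller`, the salt/bind secret) are stand-in values:

```python
import hashlib
import hmac
import struct

def kdfa(key: bytes, label: bytes, context_u: bytes, context_v: bytes,
         bits: int) -> bytes:
    """SP800-108 counter-mode KDF, the shape TPM 2.0 calls KDFa.
    Each block is HMAC(key, counter || label || 0x00 || contextU || contextV || bits)."""
    out = b""
    counter = 0
    while len(out) * 8 < bits:
        counter += 1
        data = (struct.pack(">I", counter) + label + b"\x00"
                + context_u + context_v + struct.pack(">I", bits))
        out += hmac.new(key, data, hashlib.sha256).digest()
    return out[:bits // 8]

# The session key for parameter encryption mixes the bind/salt secret with
# the nonces exchanged at session start (the spec uses the label "ATH").
session_key = kdfa(b"salt-or-bind-secret", b"ATH",
                   b"nonceTPM", b"nonceCaller", 256)
```

The point being: without a pre-shared or burned-in secret feeding that `key` input, an interposer can simply run the same derivation.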
Doesn't work either: To establish the secure connection, you need some way to verify the other end (through public keys, certificates). That verification happens before any measurements can be done securely, so it can be bypassed.
Why doesn't a key exchange in a secure environment before any attacker has physical access give the same security benefits of "an interactively provided secret like a PIN"?
> Because where do you store the CPU side private key after the exchange for future sessions?
eFuses, maybe? Or a bit of battery-backed SRAM. Lots of devices have a small amount of hardened storage for e.g. encryption keys. FPGAs supporting bitstream encryption and Atmel's ATSHA device line are examples.
> CryptoAuthentication devices have full metal shields over all of the internal circuitry, so that if an attacker cuts or short circuits any trace in the shield, the product stops functioning.
> eFuses, maybe? Or a bit of battery-backed SRAM. Lots of devices have a small amount of hardened storage for e.g. encryption keys. FPGAs supporting bitstream encryption and Atmel's ATSHA device line are examples.
To clarify, I was referring to the status quo of current discrete TPM implementations; from a bigger-picture perspective, there is certainly room for improvement.
Also, I am not sure the current TPM standard is compatible with that idea at all. Operating systems set up their own TPM sessions, so there would need to be secret storage available only to a specific operating system (similar to what the TPM provides), and we are back to the chicken-and-egg scenario.
fTPM is a firmware-based TPM implemented, usually, by a coprocessor (or TrustZone-style enclave) inside the CPU, yes. It's unrelated to which TPM standard it implements.
You can also have external TPM 2.0 compliant devices (commonly referred to as dTPM, naming probably borrowed from iGPU/dGPU), and in fact many options offered for making desktops fully compliant with Windows 11 (which requires TPM 2.0) involve a dedicated TPM 2.0 chip.
Ultimately, the TPM standard does not care where the chip is; it just provides mechanisms for its use, which do include an encrypted, tamper-protected interface... if one wants to use it.
You're correct, but also I'm reasonably certain that, as much bullshit as the list of Win 11 supported CPUs is, all the CPUs on it have fTPM 2.0 available.
The PIN is the important part there; encrypted sessions (and/or EK cert verification) without a PIN are not much more than obfuscation, and are defeated by both the interposer attack and the tweezer attack. (Or the TPM hack to rule them all: desoldering the chip and connecting it to a microcontroller you control.)
I suppose a PIN is a slight improvement over a regular password, but a big appeal of TPM FDE, in my opinion, is unattended unlock.
I think discrete TPMs don't really have a future in systems that need robust system state attestation (both local and remote) against attackers with physical access. TPMs should be integrated into the CPU/SoC to defend against such attacks.
> discrete TPMs don't really have a future in systems that need robust system state attestation (both local and remote) against attackers with physical access. TPMs should be integrated into the CPU/SoC
What are your thoughts on Microsoft Pluton and Google OpenTitan as TPM alternatives/emulators?
Should system attestation roots of trust be based on open-source firmware?
Recent AI/Copilot PCs based on Qualcomm SDXE/Oryon/Nuvia, AMD Zen5 and Intel Lunar Lake include Microsoft Pluton.
> What are your thoughts on Microsoft Pluton and Google OpenTitan as TPM alternatives/emulators?
I am not familiar enough with the technical details of Pluton or OpenTitan to make a meaningful statement on their security.
> Should system attestation roots of trust be based on open-source firmware?
Yes, and not only roots of trust; I am a strong believer in open source firmware in general. I have been developing coreboot as a hobby for a long time. I wish there were more industry support for such things, especially at the lowest levels of modern systems.
> encrypted sessions (and/or EK cert verification) without a PIN are not much more than obfuscation
This is completely incorrect; encrypted sessions defeat TPM interposers when there is a factory burned-in, processor-side secret to use. lol at calling it just "obfuscation" because you could spend $5m to decap the chip, fetch the key, and then put the processor back into working order for the attack.
that just requires a vertically integrated device instead of a consumer part-swappable PC.
What you are saying is sound, and I agree it could be done.
But there are multiple caveats:
- How do you hide the secret so that only "legitimate" operating systems can use it for establishing their sessions and not "Mate's bootleg totally not malware live USB"?
- And unfortunately current CPUs don't implement this.
- Additionally, don't be so smug as to think you need to decap a CPU to extract on-die secrets. Fault injection attacks are very effective and hard to defend against.
I agree the security of this can be somewhat improved, but if you are building a custom CPU anyhow, you might as well move the TPM on-die and avoid this problem entirely.
Before the popularity of ARM SoCs that contain everything on-die, there were far fewer choices for vertically integrated devices. It's a different segment.
If you look at Apple's vertically integrated devices, they chose a cryptography coprocessor that originally was not on-die. With a key accessible only to both pieces of silicon's trusted execution environments, rather than to the operating system directly, encrypted comms are established in a fashion similar to the TPM 2.0 proposal.
>robust system state attestation (both local and remote) against attackers with physical access
Phrases like this give me the shivers, as it translates into "mandatory surveillance by some authority telling me what I can and can't do with my computer".
TPM is an evil concept. Physical access should be final.
That "attestation" in the full disk encryption case means your disk encryption key being available only to the operating system you chose to install, and denying a laptop thief the ability to change that.
Or remote attestation can be used to restrict access to a corporate network to corporate-controlled devices only. No one surveils you, or has access to your device, in this scenario either; the TPM there is used to produce a certificate of the device state that can effectively act as an access credential to a resource.
This is about recognising the fact that the person in physical possession of a device isn't necessarily the legitimate owner.
I get the reaction, but what about the trust factor of a box you own and have running on the other side of the world? TPM isn’t an evil concept, it’s fairly useful for some scenarios. Coercion to use TPMs, that sounds evil.
>So I get my hands on your laptop for a few minutes, there should be nothing you can do to impede me from doing whatever I want to it?
Correct. This is true of all my other possessions as well.
Ultimately, the physical hardware of the computer cannot tell the difference between a legitimate user and an illegitimate one. The distinction is social, not mathematical - the kind of thing one might litigate in court, rather than by multiplying some large primes together. Technologically enforcing the concept of ownership over an object implies the construction of a parallel, extra-legal system of rights management, with some final higher authority that is neither you nor in all likelihood your government. Here's how that plays out: yes, you paid for the computer, yes, you "legally" own it, but you did something to it that Microsoft doesn't approve of and so we're afraid it doesn't work anymore. Might makes right. Too bad!
The problem is that the BCM and the BIOS/UEFI and every component talking to the TPM all need to store one (or more) public keys for it (and the corresponding templates and/or save files) in order to set up encrypted sessions to the TPM.
I'd buy you a replacement laptop of the same model and install a rendering of your boot process and password prompt on it, then do a switcheroo and wait in my bunker until the fake sends me the password you entered.
The screen/keyboard is not authenticated to the user, and the TPM is not capable of fixing that.
It doesn't require some state actor to do that. Just money.
The "replaced laptop" scenario is a full MITM on the hardware. TOTP generally does not protect against MITM. The required TOTP code is, in this scenario, generated by the device in the attacker's hands, so the fake could also display it.
It's never unsealed. `tpm2-totp` does an encrypted session to the TPM and runs `TPM2_HMAC` on the TPM shielded key, you can also include PCRs to add further authentication to this entire exchange.
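For reference, the TOTP math itself is just an HMAC over a time counter (RFC 6238); the value of `tpm2-totp` is only that the HMAC step runs inside the TPM, so the secret never leaves it. A plain-software sketch of the same computation (secret in the clear here, unlike the TPM-shielded case):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): truncate HMAC-SHA1 of the counter to N digits."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP over the number of elapsed time steps."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238 test vector: secret "12345678901234567890", T=59s, 8 digits
code = totp(b"12345678901234567890", 59, digits=8)  # "94287082"
```

With `tpm2-totp`, the `hmac.new(...)` step is replaced by `TPM2_HMAC` on a shielded key, optionally gated by PCR policy.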
What do you mean by "relay"?
(All of this is trivially solved with glitter nail polish anyway.)
The same way the fake laptop can relay your password to me, I could also relay the generated TOTP code from the stolen laptop to the fake in front of you. As I tried to convey, the fake laptop is basically a full MITM on your screen/keyboard.
Making a machine's visuals non-reproducible helps with that, but only if the attacker cannot easily swap the exterior parts (chassis, keyboard) between the two machines.
> The same way the fake laptop can relay your password to me, I could also relay the generated TOTP code from the stolen laptop to the fake in front of you. Also any authentication used to generate that TOTP in the first place. As I tried to convey, the fake laptop is basically a full MITM on your screen/keyboard.
This is a hollywood level threat scenario.
It involves the attacker having intimate familiarity with the operating system, and having to break in twice to even pull this attack off.
If you do put in that much effort, then I deserve to be hacked and can take up sheep farming in the countryside.
The OS does not matter? Grab the video output via HDMI/DisplayPort and inject the keypresses via USB. That's likely going to work - it's basically what modern KVM switches do. And set up the fake laptop as a VNC client, the same tech companies use to remotely manage servers.
Of course it does. You are replaying the logos and screens.
> Grab the video output via HDMI/DisplayPort and inject the keypresses via USB. That's likely going to work - it's basically what modern KVM switches do. And set up the fake laptop as a VNC client, the same tech companies use to remotely manage servers.
You believe you can boot up an entire VNC client to display something that would take most machines under a second to display?
Which the real machine happily gives me via HDMI/DisplayPort.
> You believe you can boot up an entire VNC client to display something that would take most machines under a second to display?
Do I need to? The user pressing the power button does not mean the machine will boot fresh. It could also be an unsuspend/wakeup, or some regular ACPI event if the machine is only appearing to be off.
> Do I need to? The user pressing the power button does not mean the machine will boot fresh. It could also be an unsuspend/wakeup, or some regular ACPI event if the machine is only appearing to be off.
This is a completely imaginary scenario. I'd be amazed to see it pulled off.
EDIT: I hear Amazon is still getting pitches for Hacker 2. You might have a shot.
OP is trying to say that this TPM TOTP approach doesn’t help verify a machine is legitimate if there is a possibility that the machine you’re using has been swapped with a malicious one.
This doesn't really mesh well with what the TPM-TOTP idea is trying to solve: trust in the machine you’re using.
Hyperbolic or fairly extreme-sounding scenarios are common when discussing this kind of thing, partly because it makes discussion about a fairly boring topic a little bit more interesting. Don’t get distracted by that.
That being said, using a TPM-based TOTP is pretty extreme sounding in and of itself.
> Hyperbolic or fairly extreme-sounding scenarios are common when discussing this kind of thing, partly because it makes discussion about a fairly boring topic a little bit more interesting. Don’t get distracted by that.
It's not. They are very much intended to derail serious discussions around threat models.
> That being said, using a TPM-based TOTP is pretty extreme sounding in and of itself.
I'd like to add that the VNC relay machine only has to fool the end user once. So the attacker wins as long as the victim thinks "the BIOS is a bit janky this morning, and this is more kernel-panicky than usual" and types their PIN/password anyway.
Of course, it's much easier to just pop the original laptop open and interpose on the keyboard. Even easier: use acoustics to snoop the keystrokes. The snooper could even be 5G/WiFi/GPS-connected, assuming it's easy to steal some power from the mainboard. I guess fingerprint + camera ID make that attack harder. Still, the hypothetical device could stream HDMI at a few FPS if it were easy to splice into the display panel cable. (I haven't cracked a laptop open recently, but those used to be socketed + unencrypted.)
Miniaturization is weird. The latter attack is probably easier to pull off these days than the former. If you wanted to swap my laptop, you'd need to replicate the dents and stickers. Good luck doing that!
There are like three operating systems in common use. An attacker being familiar with the one you use certainly isn't a "Hollywood level threat scenario".
Buying the same model of laptop and swapping it with your target's is an elementary targeted attack.
A solution would be to have two passwords, and display a secret security image between them.
The user is required not to enter the second password if the wrong security image is displayed.
You can still attack it with a fancy radio transmitter, which transmits the security image from the stolen laptop to the second laptop when it's displayed after you've entered the first password.
Attestation closes this vulnerability, for example through tools like Ultrablue [1], which provides a self-hosted method of verifying that the TCB has not been modified via an external tool (in this case, your phone running Ultrablue).
The TCB has not been modified - that's the point of this attack. It's just physically elsewhere. A 24 dBi high-gain antenna to close that gap costs 70 EUR, and you would be attesting the device in the attacker's hands, not the one in front of you.
I think some of those hardware attestation thingies use clocks and tight latency jitter bounds to make replay attacks harder. If it takes more than "2 x time light takes to move 10 ft + deterministic delay from the other side", or less than the deterministic delay, then they refuse to unlock.
Some cars even get this right these days. Most don't.
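The check those schemes do is simple in principle: refuse any response whose round-trip time falls outside a tight window around the known processing delay, because a relay adds propagation time it cannot hide. A toy sketch (the numbers are made up for illustration; real distance bounding enforces this at the radio/hardware layer, not in software like this):

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458
MAX_DISTANCE_M = 3.0          # roughly the "10 ft" bound
PROCESSING_DELAY_S = 0.001    # assumed-known deterministic responder delay

def within_bounds(rtt_s: float) -> bool:
    """Accept only round-trip times consistent with a nearby, honest responder.

    The RTT must cover the allowed distance twice plus the fixed processing
    delay; anything slower implies extra distance (a relay), and anything
    faster than the deterministic delay implies a precomputed/forged reply.
    """
    max_rtt = 2 * MAX_DISTANCE_M / SPEED_OF_LIGHT_M_PER_S + PROCESSING_DELAY_S
    return PROCESSING_DELAY_S <= rtt_s <= max_rtt
```

The window here is about 20 nanoseconds wide, which is why this can't be done with OS-level timestamps; the attacker's 70 EUR antenna adds microseconds, though, so even coarse hardware timing defeats it.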
You have to consider what kind of risk you are protecting yourself against.
It's highly unlikely that you would be the target of such a highly sophisticated attack, but a hacker could get into a place where you left your computer unattended (such as your home or a hotel) for about 15 minutes and install such a device inside your computer.
If you think you could be the target of such an attack, you could enable a chassis-intrusion alert in your UEFI settings (I know my ThinkPad has that option), or, better, always keep your laptop with you.
I'm mostly asking because the original poster was painting a process that can be sniffed off the bus (that is: buy a stolen laptop off eBay, try to boot it, sniff the key off the bus) as equivalent to a process that requires active targeting and multiple break-ins to work.
It seems like these security discussions always devolve into rather funny moving of goalposts without actually considering how much work each exploit requires.
The goalposts haven't moved in my mind, but I suppose I didn't make them clear in my first post.
Basically the TPM provides a set of features that are really useful for corporate Windows deployments. No more forgotten passwords, because the self-unlocking disk encryption sends the user straight to the Windows login screen, and helpdesk can reset forgotten Windows passwords remotely.
And for casual home Windows users, it lets them log in with a 4-digit PIN or with biometrics, so it's got usability benefits for them too. If every OS now needs Microsoft's signature of approval, or a really fiddly setup process? Well, they were running Windows anyway, so no problem.
These usability/support benefits rely on self-unlocking disk encryption, which is vulnerable to sniffing if someone gets a stolen laptop on ebay.
For the kind of technically sophisticated, security enthusiast users who comment on blog posts about TPMs? We're more than happy to key in a strong unique password at every boot, and if we forget the password and lose access to everything on that disk that's just the system working as it's supposed to.
For us, the benefits of TPMs and measured boot for personal use are a lot more obscure. You'll sometimes hear people claim it protects against 'evil maid attacks' where an attacker repeatedly gets physical access to your laptop. The truth is it provides no such protection.
> For us, the benefits of TPMs and measured boot for personal use are a lot more obscure. You'll sometimes hear people claim it protects against 'evil maid attacks' where an attacker repeatedly gets physical access to your laptop. The truth is it provides no such protection.
TPMs give you fine and adequate protections in many scenarios, even physical ones.
They also provide you with better protection for private key material.
> TPMs give you fine and adequate protections in many scenarios [...] my `ssh-tpm-agent` project
I agree that's adequate, in the sense that keeping an SSH key as a password-protected file on disk is adequate, and having it be a password-protected secret in the TPM is no less secure than that.
But the whole point of binding a key to hardware is to be secure even if a remote attacker has gotten root on your machine. An attacker with root can simply replace the software that reads your PIN with a modified version that also saves it somewhere. Then they can use the key whenever your computer is online, even if they can't copy the key off. And although that's a bit limiting, once they've SSHed to a host as me once they can add their own key to authorized_keys in many cases.
That's why Yubikeys and U2F keys and suchlike have a physical button.
TPMs would be a lot more useful if the spec had mandated a physical button for user presence.
> But the whole point of binding a key to hardware is to be secure even if a remote attacker has gotten root on your machine. An attacker with root can simply replace the software that reads your PIN with a modified version that also saves it somewhere. Then they can use the key whenever your computer is online, even if they can't copy the key off.
It protects against extraction, not usage on the machine itself. Of course they can use the secret on the compromised machine.
> And although that's a bit limiting, once they've SSHed to a host as me once they can add their own key to authorized_keys in many cases.
Assuming they can edit the file.
> That's why Yubikeys and U2F keys and suchlike have a physical button.
The TPM spec has a policy setup to account for a fingerprint reader that can be used to authenticate. I haven't been able to figure out the how/what/why of the implementation here, but this is very much a thing.
> It protects against extraction, not usage on the machine itself. Of course they can use the secret on the compromised machine.
Yes, this is why I was careful to say that the benefits are obscure, rather than saying they're entirely nonexistent.
I'll admit that's a benefit, but it seems a very small one considering the far-reaching changes it's needed: kernel lockdown mode, the Microsoft-signed shim, distro-signed initrds, the difficulties it creates with DKMS, and so on.
Whereas people who need to bind their SSH key to hardware can get a higher degree of security with a far smaller attack surface by simply spending an hour's wages on a Yubikey.
> I'll admit that's a benefit, but it seems a very small one considering the far-reaching changes it's needed: kernel lockdown mode, the Microsoft-signed shim, distro-signed initrds, the difficulties it creates with DKMS, and so on
None of this is needed to take advantage of TPMs.
> Whereas people who need to bind their SSH key to hardware can get a higher degree of security with a far smaller attack surface by simply spending an hour's wages on a Yubikey.
Yubikeys are expensive devices, and TPMs are ubiquitous. Better tooling solves this problem.
> None of this is needed to take advantage of TPMs.
You're not binding the secret to PCR values? I thought TPM fans loved those things?
I don't blame you - they look like a design-by-committee house of cards to me, with far too many parties involved and far too much attack surface. Just like the rest of the TPM spec.
> You're not binding the secret to PCR values? I thought TPM fans loved those things?
Binding things to PCR values doesn't imply you need Secure Boot, signed initrd, lockdown mode, shim and signed kernel modules. All of these things are individual security measures that can be combined depending on your threat model.
> I don't blame you - they look like a design-by-committee house of cards to me, with far too many parties involved and far too much attack surface. Just like the rest of the TPM spec.
TPM 2.0 doesn't really make PCR policies easier to use, so I've had trouble getting them properly integrated into the tools I write, as you need to deal with a key to sign updated policies. `systemd-pcrlock` might solve parts of this, but it's all a bit... ugly to deal with, really.
The entire TPM spec is not great. But I find TPMs too useful to ignore.
> Basically the TPM provides a set of features that are really useful for corporate Windows deployments. No more forgotten passwords, because the self-unlocking disk encryption sends the user straight to the Windows login screen, and helpdesk can reset forgotten Windows passwords remotely.
Unclear why this requires a TPM. Boot the system from a static unencrypted partition containing no sensitive data, display the login screen, and when the user authenticates, the system uses their credentials to get the FDE decryption key from the directory server. Bonus: now the FDE keys are stored in the directory server, so if the system board fails in the laptop, you can remove the drive and recover the data.
An attacker with physical access could modify the unencrypted partition to compromise the user's password the next time the user logs in, but they could do the same thing with a hardware keylogger.
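A toy sketch of that enrollment/fetch flow, to make it concrete (the class and method names and the PBKDF2 parameters are illustrative; a real deployment would speak an actual directory protocol over an authenticated channel):

```python
import hashlib
import hmac
import os

class DirectoryServer:
    """Toy directory server: maps a user to a salted password hash and an
    FDE key, releasing the key only on correct credentials. Illustrative
    only - not a real directory protocol."""

    def __init__(self):
        self._users = {}

    def enroll(self, user: str, password: str, fde_key: bytes) -> None:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[user] = (salt, digest, fde_key)

    def fetch_fde_key(self, user: str, password: str) -> bytes:
        salt, digest, fde_key = self._users[user]
        attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        if not hmac.compare_digest(attempt, digest):
            raise PermissionError("bad credentials")
        return fde_key
```

The revocation story follows directly: delete the user's entry server-side and a stolen (powered-off) laptop has nothing left to unlock with.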
> And for casual home Windows users, it lets them log in with a 4-digit PIN or with biometrics, so it's got usability benefits for them too.
This could be implemented the same way using Microsoft's servers, given that they seem to insist you create a Microsoft account these days anyway.
It's not clear that unsophisticated users actually benefit from default-FDE though. They're more likely to lose their data to it than have it protect them from theft, and losing your family photos is generally more of a harm than some third party getting access to your family photos.
If the machine is already on but asleep, the keys are in memory, they only have to be downloaded from the server on first login. If the machine has been off and you have no network connection then you need the long password to unlock it instead of the short one, but for most users that is already irrelevant because everything else requires a network connection too.
Ah ok, so I'll need to memorize the super long password whenever I'm out and about and want to just check something real quick. I guess I'll just put that on the sticky note on the bottom of the computer.
You want to check something real quick on what... the internet? Then you have internet access. You also have access to the local data on the machine as long as it was asleep rather than off, which will be the case the vast majority of the time.
Keeping the key stored on the machine, TPM or no, is also less secure than keeping it somewhere else. If someone steals your laptop, you deny all access to the key on the server and they can't get it even if they could guess the PIN (or the user wrote it on the bottom of the computer), and there is no way to use an offline method to extract the key from the TPM because it isn't there.
So the sole legitimate use case for a TPM is when you're somewhere with neither cellular service nor Wi-Fi (rare), and your portable device is off rather than asleep (rare), and you can't remember a long passphrase, which doesn't have to be unmemorable; it's just less convenient to type.
This seems like it isn't worth the cost in authoritarianism?
For that matter you could still implement even that with just a secure enclave that will only release the key given the correct PIN (and then rate limits attempts etc.), but then does actually release the key in that case and doesn't do any kind of remote attestation or signing.
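To illustrate just the release-the-key-with-rate-limiting behaviour being described (purely a software mock with made-up names - the whole point of a real enclave is that this attempt counter is enforced in hardware, where root can't patch it out):

```python
import hmac

class ToyEnclave:
    """Software mock of a PIN-gated key-release device. Illustrative only:
    a real secure enclave enforces the attempt counter in tamper-resistant
    hardware rather than in patchable software like this."""

    def __init__(self, pin: str, key: bytes, max_attempts: int = 8):
        self._pin = pin
        self._key = key
        self._max_attempts = max_attempts
        self._attempts_left = max_attempts

    def release_key(self, pin: str) -> bytes:
        if self._attempts_left <= 0:
            raise RuntimeError("locked out: too many wrong PINs")
        if not hmac.compare_digest(pin.encode(), self._pin.encode()):
            self._attempts_left -= 1
            raise PermissionError("wrong PIN")
        self._attempts_left = self._max_attempts  # reset on success
        return self._key
```

Note what's deliberately absent: no attestation, no signing, no policy about which OS is asking. It just releases the key to whoever knows the PIN, within the attempt budget.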
> a secure enclave that will only release the key given the correct PIN
So...a TPM?
> This seems like it isn't worth the cost in authoritarianism?
You know what's really authoritarian? Having your computer practically only decryptable by some remote directory server, potentially not even under your control.
I doubt your physical keyboard's connection to the motherboard is encrypted (I'd guess USB, I2C or maybe even PS/2 internally). I would also not be surprised if you can get small in-line sniffers that an attacker, with physical access for half an hour, could hide in your laptop.
All bets are off if your attacker is determined and has physical access.
There have been papers, from what feels like a decade ago, about extracting key presses from a phone's acceleration sensors, or from the sound of key clicks by statistical inference. You probably don't even need to touch the laptop to do that.
And by recent: the TPM was last external in Intel's 8th gen, so this attack works on CPUs released as late as October 2017. That's almost 7 years ago. Most organizations have a 3-5 year replacement schedule.
TPM seems beyond useless to me. I wanted to protect a certificate and private key for a Java application, so that you can't just copy the PKCS#12 file and use it elsewhere, but there is no decent API in Java to use a TPM 2.0 chip. So the road ends there... The only protection now is a hardcoded passphrase in the application, but you don't have to be a genius to figure that out...
Discrete (i.e., chip) TPMs (dTPMs) are slow. They are way too slow to use as HSMs.
Firmware TPMs (fTPMs) are faster, but I doubt they're really fast enough to use as an HSM.
There are TPM APIs for Java, so you can do this, but it's not surprising that the Java keystore providers lack builtin support because of the performance issues.
Ideally fTPMs should come with EKcerts and platform certificates and they would be very fast and as secure as (more so than) dTPMs. Then using fTPMs as HSMs might take off.
It's meant for Secure Boot, but I suppose the rest of the platform, usually built by different people than the ones who designed the TPM, also needs to implement it correctly. And as this article shows, that is not an easy feat. (This attack seems silly, but it's really clever tbh - a good, inspired idea likely based on lots of domain expertise.)

If you can protect the boot chain with Secure Boot, then what you can do for your private key - what AV vendors do, for example - is have an (EFI?) driver that contains the certificate, i.e. a boot driver protected by Secure Boot. For Windows this might require Microsoft's cooperation to assign you a driver level so other stuff can't disable it (otherwise it's still tricky, and likely possible to get around your protections) - see Protected Process Light / ELAM drivers. Optionally you could also have the certificate provided somehow by an EFI application that's signed/secured by Secure Boot (it could drop it on disk somewhere; the EFI partition is easily accessible...).
If the chain is protected by the TPM, this method, if implemented correctly through the whole chain, should protect your cert and private key.
That being said, _should_ is the keyword. I don't think any platform has really managed to escape all attacks, though a lot in this area do require hardware access (like the tweezer attack previously demonstrated by the author :)).
Heh I always thought that TPM was there to secure anything. If it's only meant for secure boot then I understand the poor tooling and absence of APIs to use the thing properly inside applications.
It is absolutely used and designed for secure boot. There are now simpler mechanisms to accomplish the same thing, but if you want remote attestation, you need a TPM.
Hi Fox, I'm not aware of any other usages on the platform I'm familiar with, sorry - maybe a gap in my understanding.
AFAIK, if you want access to a chip like the TPM, the OS needs to cooperate, since I/O or MMIO access (or however it's reached) likely requires privileged instructions.
I'd find it somewhat logical that an OS or loader component starts verification, and then components upward in the chain are, well, chained together via verification, removing the need to access the TPM after the initial modules are verified.
- Do you have any examples of how else the TPM is used? I'm very keen to learn more about its use-cases.
> Hi Fox, I'm not aware of any other usages on the platform I'm familiar with, sorry - maybe a gap in my understanding.
The TPM as a device is completely democratized and accessible to the normal user.
You can use it for platform attestation (which, as this post points out, might be broken in some cases), but it also works as a "discount" smartcard where you can seal data and shield keys.
Two examples here which I have written:
A file encryption utility for `age` that shields the keys in the TPM.
Conceptually both of these tools could also use PCR policy sealing as a form of platform attestation, but I have not implemented that yet, as it's a bit hard to do in a user-friendly way, UX-wise.
As noted by others, you can also do disk encryption. `systemd-cryptsetup` does this on Linux.
> AFAIK, if you want access to a chip like the TPM, the OS needs to cooperate, since I/O or MMIO access (or however it's reached) likely requires privileged instructions.
Not really? `/dev/tpmrm0` is the TPM resource manager device on Linux, accessible by being a member of the `tss` group.
> I'd find it somewhat logical that an OS or loader component starts verification, and then components upward in the chain are, well, chained together via verification, removing the need to access the TPM after the initial modules are verified.
This is only one of several use-cases of a TPM :)
This is also orthogonal to whether or not Secure Boot is part of the chain, depending on the operating system.
Thanks, that's interesting. As the commenter noted, I'd say this is part of the boot process, but you are right, it's not technically Secure Boot related. Booting securely != Secure Boot, that's correct :)
It's used to establish a root of trust. If your operating system is modified, it fails to validate and the TPM doesn't release the secrets. If your BIOS is modified, it fails to validate and the TPM doesn't release the secrets. If your CPU is modified, it can tell the TPM what it wants to hear and get the secrets even if the BIOS or OS is modified.
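The "fails to validate" mechanism is a hash chain: each boot stage is folded into a PCR before it runs, so changing any stage changes the final value. A minimal sketch of that extension rule (the real TPM does this per-PCR in hardware; `measure_chain` is a hypothetical name):

```python
import hashlib

def measure_chain(components):
    """Fold each boot component into a running SHA-256 chain,
    mimicking PCR extension: pcr = H(pcr || H(component))."""
    pcr = b"\x00" * 32
    for blob in components:
        pcr = hashlib.sha256(pcr + hashlib.sha256(blob).digest()).digest()
    return pcr

good = measure_chain([b"bios", b"bootloader", b"kernel"])
evil = measure_chain([b"bios", b"bootloader", b"patched kernel"])
assert good != evil  # any modified stage changes the final PCR value
```

Because the chain only works if every stage honestly measures the next, a modified CPU (the root of the chain) can report whatever values it likes, which is the weakness the comment points out.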
For some people, this is a useful increase in security. Those people set up their own TPM according to their own rules. For the rest of us, who had one forced on us by Microsoft, it's just more anti-right-to-repair.
Since each TPM has a unique certificate, you could use it to trace a specific machine to a specific user. Game developers could use that to ban (toxic) players from an online service, for example.
I believe you can't "reset" a TPM, but you can buy a new one (if your CPU does not support fTPM), or buy a new PC, I guess. Both solutions are costly, though, depending on how often the cheating is detected.
If you are sharing a computer with a hacker, then your account would be banned anyway. All the TPM does is ensure they can't just make another account and use it from that computer.
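The service-side bookkeeping for such a hardware ban is simple. A hedged sketch, with all names hypothetical: the service fingerprints the TPM's endorsement certificate and keeps a denylist, so a fresh account from the same machine is still refused.

```python
import hashlib

banned_fingerprints = set()

def fingerprint(ek_cert_der: bytes) -> str:
    # Identify the machine by a hash of its TPM endorsement certificate.
    return hashlib.sha256(ek_cert_der).hexdigest()

def ban_machine(ek_cert_der: bytes):
    banned_fingerprints.add(fingerprint(ek_cert_der))

def may_register(ek_cert_der: bytes) -> bool:
    # A new account from a banned machine is still rejected.
    return fingerprint(ek_cert_der) not in banned_fingerprints

# Placeholder bytes standing in for a real DER-encoded EK certificate.
cheater_cert = b"...DER bytes of the cheater's EK certificate..."
ban_machine(cheater_cert)
assert not may_register(cheater_cert)       # same TPM, new account: blocked
assert may_register(b"some other machine")  # different TPM: allowed
```

In practice the service would also have to verify the certificate chain and prove the client actually holds the corresponding key, which is exactly what TPM attestation protocols provide.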
Not the only uses, but certainly all of the widespread implementations of trusted hardware have been unable to resist the temptation to privilege the manufacturer while securing against the user themselves. Every little hole is another setback that keeps the designers and implementers working on this layer instead of starting to tighten the noose on the next one.
TPMs are a cryptographic coprocessor with added platform state attestation functionality. That can for example be used locally for secure secret storage that is only available in certain platform states, or remotely to certify the state of a device trying to access a corporate network.
Of course TPMs can be (ab)used for DRM, but the same applies to many ideas in cryptography in general. We still don't say AES or RSA are tools designed to restrict your rights.
In reality, TPMs are almost always used to (attempt to) protect the user's data rather than to restrict the user.
I would argue that the discrete-chip variation of them isn't very good at this (and even less good at DRM), but a lousy implementation doesn't mean the concept is bad. (As Foxboron mentioned earlier in this thread, discrete TPMs can still act as reasonably good "discount" smartcards, but they are bad at platform state attestation.)
In fact I would have much preferred if the industry embraced the measured boot idea more instead of mainly pushing stricter verified boot schemes.
> Of course TPMs can be (ab)used for DRM, but the same applies to many ideas in cryptography in general. We still don't say AES or RSA are tools designed to restrict your rights.
AES and RSA are just algorithms, not implementations. I'd compare TPMs to HDCP, AACS, or CSS (the DVD one) instead.
A bit tangential, but it’s a bit shocking how consistently bad firmware for x86 motherboards and laptops is, as is most visible in the UEFI configuration screen. It makes me wonder if a new entrant in the motherboard/laptop space couldn’t make a name for themselves by simply caring about the quality of their firmware and trying to make it good.
I have to believe there are hardware engineers out there who know that locking people out of their devices is essentially bad, and so they leave those tweezer-based attacks in on purpose.
Although, designing against physical attacks is very difficult, so I guess there’s no need to imagine a good-hearted conspiracy of conscientious hardware folks.
The fundamental operation in hardware engineering is the digital signal: pulling a pin to one or zero, which is all the tweezer attack does. It's comparable to writing a byte of memory. Imagine how hard software security would be if your adversaries could write arbitrary data to your process; there's no ASLR, or even an MMU, to randomize trace layouts on physical circuit boards.
Well yes, but there is a difference between a signal being accessible on a PCB trace I can see with my eyes, vs it being accessible only on the inside of a 7nm silicon die.
There is a reason why a lot of systems integrate the security processor on the same piece of silicon whose state the security processor is meant to protect.
The reason discrete TPMs exist is supposed compliance with crypto standards and physical protection against key extraction, but they sort of miss the forest for the trees. What matters to users is the protection of their data, not the TPM's secrets, and discrete TPMs aren't very good at the former.
There’s some value to being able to lock a device against somebody who physically has control of it. Like it is nice that stolen iPhones have reduced value.
But there’s a pretty big social harm to locking people out of their devices, like the generation of tech-illiterate kids growing up that haven’t been allowed to break their computers well enough to learn anything about them.
Ok, I understand how a TPM gets attached to a muxable GPIO block.
But did no one stop and question whether the TPM should have been on a dedicated block that couldn't be reprogrammed, rather than assuming there wouldn't be bugs or the like in the GPIO pin muxing? Never mind all the additional complexity of relying on page-permission access and so on to shared-purpose MMIO regions.
The CPUs and OSs (other than Windows 11) support operation without any TPM.
So either the pin is configurable, or you've wasted a pin that could otherwise be used for decorating the motherboard with RGB LEDs.
Also, the pin layout has to be standardized by the socket specification (e.g. "LGA 2011"), which may have to retain compatibility for a decade or more. This strongly favors defining reconfigurable over fixed-function pins.
[1] https://github.com/nccgroup/TPMGenie