>Our attacks have shown that an fTPM cannot sufficiently protect its internal state against firmware or physical attacks. In such a scenario, a passphrase-only key protector of reasonable length provides better security than a TPM-only protector with a numeric PIN (5.3.1). This is in stark contrast to Microsoft’s claim that “BitLocker provides the most protection when used with a Trusted Platform Module” [29] (see also in 2.3). In fact, of all available protectors (seen in Figure 1), TPM-only is arguably the weakest protection strategy.
This might not be surprising to some, especially since Windows hides the passphrase functionality in the GUI behind group policy settings ("Require additional authentication at startup" and "Enhanced PIN"), which isn't the most intuitive; a normal user might not even realize a passphrase is possible unless they notice the "normal" PIN is numeric-only. In any case, for the average person whose device might get stolen, this is likely not a threat, but I think a passphrase should always be preferable, and BitLocker doesn't support any better option.
It's because TPMs are small and have little storage. The outrageous "it's a secret cabal" voices are a prime example of what people cook up when faced with something they can't explain due to ignorance but feel the need to have an answer for. It's as outrageous as a Republican saying "Q did it."
Wondering if something was requested by law enforcement isn't implying a cabal, chill.
Also a couple kilobytes of flash costs basically nothing. And you could hash keys over a certain length, which is much better than having such a short limit on a human-typed string.
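To make that point concrete, here's a rough sketch (the KDF choice, salt, and iteration count are just illustrative, not what any particular product does) of how an arbitrary-length passphrase can be hashed down to a fixed-size value, so storage never limits what a human can type:

```python
import hashlib

def derive_key(passphrase: str, salt: bytes, length: int = 32) -> bytes:
    """Reduce an arbitrary-length passphrase to a fixed-size key.

    PBKDF2-HMAC-SHA256 is just one example; any modern KDF (scrypt,
    Argon2) looks the same from the storage side: the device only
    ever has to hold a fixed 32-byte value.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               100_000, dklen=length)

short = derive_key("hunter2", b"per-device-salt")
long_ = derive_key("a much longer, human-memorable passphrase with spaces",
                   b"per-device-salt")
assert len(short) == len(long_) == 32  # storage cost is constant
```

The stored blob is the same size whether the user typed 7 characters or 70, which is exactly why a short hard limit on a human-typed string is an odd design choice.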
A couple of kilobytes of flash also doesn't come with the protections the tpm offers (or at least is supposed to offer, considering the article in the OP)
If you'd like to provide schemata, open standards and source code for them, then don't keep the class waiting.
Don't/can't? Then you're a fool trusting someone else to do something you yourself cannot inspect. Then again, most people seem to be oddly fine with that. I am not of that number.
>to allow the British secret service to eavesdrop more easily. The British proposed a key length of 48 bits, while the West Germans wanted stronger encryption to protect against East German spying, so the compromise became a key length of 54 bits
>Documents leaked by Edward Snowden in 2013 state that the NSA "can process encrypted A5/1"
Why? The FBI pitched a fit over access to a shooter’s phone in the press a few years ago, then stopped.
Now you have multiple products on the market that can crack passcodes by exploiting flaws that let you brute-force PINs, which are by default 6-digit numbers (despite most guidance demanding 8).
I have no idea if anyone has covered it. It's industry knowledge. Source: me
I figured it would be generally known at this point, especially with the whole perceptual-hash debacle (intended to satisfy LEAs despite the plan to finally enable image encryption). I'm not sure what the internal politics looked like after the perceptual-hash snitch got axed; my friends who would know had quit Apple by then.
Any key strength limitation is mandated by... certain forces. This is not a secret (anymore). "If anything in consumer tech can be weakened, make sure it is".
Amateur-me thinks that it would not be too hard to prevent such an attack: Have a voltage fault detection circuit at all (TPM-relevant) supply pins that hard-locks the chip until it gets power cycled, and have those circuits be powered by on-chip capacitors that survive just a little longer than their time-to-trigger. Would that be feasible?
This mitigation helps until the attackers stop confining themselves to the supply pins. You can do voltage fault injection through any exposed metal (or metal that can be made exposed), and the attacker can be entirely happy with attacks where they don't actually pull the voltage plane down but instead just make some specific circuit a bit iffy by injecting an opposite voltage at some specific point, with low enough current that it's undetectable when looking at the chip as a whole.
Approaches such as yours do make the attack require a lot more skill to accomplish.
If you spike the voltage on another pin you'd most likely trigger the ESD protection, which is just a diode connected to one of the power rails, and end up spiking the chip's power supply anyway.
Which would be detected if an internal watchdog monitors the voltages. But if the spikes are short enough they might be pretty hard (or just expensive) to detect.
Note that the SPI bus sniffing attacks on dTPMs are simpler to mount (being passive) but also require direct access to the bus (motherboard), and unlike this attack can be prevented in software by knowing the public key of a primary key object on the TPM (e.g., the endorsement key) and using it to authenticate the TPM to the host.
That said, even a host using a dTPM with proper authentication of the dTPM has a problem if the SP is vulnerable to voltage fault injection attacks, because even though SPI bus sniffing wouldn't yield the unlocked FDE keys, the attacker presumably could get full control of the CPU and recover the unlocked FDE keys there.
In other words, the problem here is the voltage fault injection vulnerabilities in general. The fTPM part of it is not as big a deal as the total compromise of the whole system due to those voltage fault injection vulnerabilities.
I haven't looked closely at the TPM protocols lately, but I think this doesn't help against active bus attacks against a dTPM. The host can (I think) reboot the dTPM and send an arbitrary sequence of PCR changes to the dTPM, and the dTPM will believe it. And then the host can ask the dTPM to unseal something, and it will.
This won't help break a PIN that is protected by the dTPM, but it will fully break any protection relying on the host to verify a PIN. (Which is the default BitLocker behavior.)
TPM 2.0 Enhanced Authorization policies can be very complex. You can make it so that in order to unlock key objects in the TPM you need more than matching PCRs: a PIN too, and the software on the host shouldn't provide the PIN if the PCRs don't match what it expects. Of course, TPM-using software is still too simple in this regard, so you're correct for now, but it doesn't have to be that way.
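For the curious: such a policy is just a hash chain that the TPM recomputes at unseal time. A simplified sketch (the command codes are from the spec; the PCR argument marshalling is simplified, a real TPML_PCR_SELECTION structure precedes the PCR digest) of how PolicyPCR and PolicyAuthValue combine into one policy digest:

```python
import hashlib

# TPM 2.0 command codes (TPM 2.0 Library spec, Part 2)
TPM_CC_PolicyPCR = 0x0000017F
TPM_CC_PolicyAuthValue = 0x0000016B

def extend_policy(old: bytes, cc: int, args: bytes = b"") -> bytes:
    # Each policy command extends the session digest:
    #   newDigest = H(oldDigest || commandCode || command-specific args)
    return hashlib.sha256(old + cc.to_bytes(4, "big") + args).digest()

# A fresh (trial) session starts from the all-zeros digest.
d = bytes(32)
# Hypothetical stand-in for the marshalled PCR selection + expected values.
pcr_digest = hashlib.sha256(b"example PCR 0,7 values").digest()
d = extend_policy(d, TPM_CC_PolicyPCR, pcr_digest)  # "PCRs must match..."
d = extend_policy(d, TPM_CC_PolicyAuthValue)        # "...AND a PIN is required"
# 'd' would become the sealed object's authPolicy; unsealing requires
# replaying both conditions for real inside a policy session.
```

Because PolicyAuthValue is baked into the digest, the TPM itself refuses to unseal without the PIN, regardless of what the host software does.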
2. The bug is not in Devuan, it's in something called refractainstaller, which is used for Devuan live-ISOs. If you just install Devuan that doesn't happen.
3. With a refractainstaller live-ISO, and if you chose to not define a root user, then this bug manifests.
The bug seems to have lingered for so long, despite being rather obvious (i.e. you can just become root), because nobody tried to secure a live-ISO-based system, which is something you typically use as a "rescue disk" or to diagnose hardware.
So - is this a screw-up? Yes. Does it reflect significantly on Devuan as a project? Not really.
To draw a parallel - the fact that systemd has had bugs which other init systems didn't, or had gotten over decades before, does not mean it's an undesirable project. My (and many people's) problems with systemd regard its fundamental design philosophy, as well as its governance/behavior as a software project.
>2. The bug is not in Devuan, it's in something called refractainstaller, which is used for Devuan live-ISOs. If you just install Devuan that doesn't happen.
From the link:
>>When you download and install the desktop-live Devuan image, you will be prompted to create a user account at the end of the process.
... i.e. it seems to be talking about the normal process for installing a Linux distro - you make a live CD, boot it, and run the installer.
Nothing. Systemd is a suite of software that handles a lot of the low-level operations on Linux (in particular the service manager and some network configuration, along with some other stuff). Historically, those operations were handled by different services (like SysVinit).
A lot of people are mad about it for a lot of reasons, but if you're not a system administrator, it's probably better to stick with systemd, since it's what most of the Linux community has standardized on, and thus it's much easier to find information online about using those systems.
Amen to this. I understand some of the complaints about it getting into name resolution and other things, but having had to decipher vendors' init scripts, it's a huge improvement for managing services.
When you add in things like Podman and its use of systemd, it's overall not a bad thing at all.
> What's the impact of having systemd (or not) for the everyday layman like me that just uses Visual Studio Code to build flutter apps ?
Compared to sysvinit, after Debian upgraded to systemd the system boots slightly faster and shuts down slightly slower (or at the same speed; depends really on what you run).
By "unencumbered by systemd" they mostly mean "works worse". We removed tens of thousands of lines of fixes for "easy", "simple" sysv scripts when we upgraded our servers to a version that runs systemd.
Most general-purpose distributions use systemd - almost every general-purpose distro you've ever heard of. They all adopted it by choice, because they thought it was a better option than the alternatives for one reason or another.
It's an ideological flamewar that in most cases is without impact.
The binary format of the log file still seems to me to be a bad idea - it introduces corruptibility and complexity into a vital service that should be simple and incorruptible.
If a failing system starts to do weird things, plain text append only logging is preferable.
It’s a problem because people do not wish to learn new paradigms. Put me in that camp - it took months to debug a 90-second hang and work around it. Of course, the fast boot owes to its parallelization.
TPM = Trusted Platform Module. Trusted is an adjective modifying platform. The business case is that software should not run on untrusted platforms because hardware can always attack software. So, the promise is that TPM will allow software to guarantee* to users that the hardware is not malicious.
From my perspective TPM is mostly about compliance with security directives. Actual security engineers realize you can trust the security the TPM provides about as far as you can throw it. You have no idea who designed the thing, who manufactured it, or who swapped it out while it was being shipped to you.
You have no idea who designed the thing, who manufactured it, or who swapped it out while it was in shipping to you
Wait, what? In most cases you 100% know who designed and manufactured it. Regarding "swapping out" a TPM, how do you do that for fTPMs or TPMs that are on the same die as the CPU? Come up with a perfect replica AMD CPU with a bugged TPM? Desolder the original CPU and put the replica in?
I think if you went back in time, you'd find people saying that openssl was developed by a consortium of open source contributors.
It's painfully obvious nowadays that openssl was written by the NSA (or equivalent state level entity) via intermediaries deliberately adding subtle but significant vulnerabilities.
Let me propose this metaphor: you buy a front door to your house. For some reason I (the door vendor) include a complete lockset. I tell you that the lockset is totally secure. I offer you volumes of academic research and attestations that this is true. But there is no practical means by which you can establish the security of that lockset.
Do you believe that no one else can open your front door?
> It's painfully obvious nowadays that openssl was written by the NSA
The problem is that your lack of context and perspective on this fairly simple, easily-falsified theory calls all of your opinions into question.
A more nuanced conspiracy theorist would say "if you look at PRs to openssl that contributed later-discovered security issues, 70% were from first-time contributors who never went on to submit any other PRs". And I'd be like "wow, that's suggestive of a coordinated action", and we could dig into it.
But "the NSA wrote openssl" is as factually, demonstrably wrong as saying "the NSA builds every door lock that's for sale at Home Depot". It's too big of a conspiracy, too inefficient for the supposed state goals, and too easy to falsify by just looking at a couple of examples.
The only one that would fit my mainboard was one I bought on Amazon, and I only got it to upgrade Windows. The label on it says "made in China". As a consumer I also have no idea how these suspicious chips that I'm required to plug into my mainboard are supposedly trusted, by whom, and for what.
"Plug this chip that says made in China into your mainboard, for security reasons, to continue." is not really inspiring trust or confidence in any way.
> "Plug this chip that says made in China into your mainboard, for security reasons, to continue." is not really inspiring trust or confidence in any way.
I'm sure the 30 other chips made in China on the same board are entirely fine.
> You have no idea who designed the thing, who manufactured it, or who swapped it out while it was in shipping to you.
All wrong, as others have pointed out. As for the last of the above, if the OEM includes (as they should) platform certificates for the TPM, then the TPM cannot have been swapped out while in transit w/o the OEM helping the attacker. For example, Dell includes platform certificates binding the TPM.
A trusted system is one whose failure would break a security policy. It's not about it being secure; it's about other things breaking when it is compromised.
> Honestly TPM is probably creating more bad than good at this point.
The problem isn't the TPM. The problem is the CPU (SP) being vulnerable to voltage fault injection attacks. You could be using no TPM and still have all your secrets leak if the host is fully compromised.
Microsoft is edging closer and closer to dropping support for Windows 10 (even a computer I built in 2017 that's still running perfectly fine can't upgrade). But for many users, changing to an OS other than Windows is tantamount to not functioning, so planned obsolescence continues apace.
They chose an arbitrary cut-off date for hardware support for their new OS. They decided not to support old stuff anymore and they had to pick a date/technology platform. It was always going to be arbitrary. In my opinion they should've picked a clearer distinction (e.g. require a certain level of AVX support so all binaries can be built with AVX optimizations enabled) but I can see why they chose to do this. After all, they're going to have to support the OS for ten years; that four-year-old CPU is fourteen years old by the time Windows 11 goes out of support.
That link does not say it will work fine. It says it is not recommended or supported and if it blows up it is your problem. Additionally it says you might not get any updates. Certainly, it sounds like it might just be some ass covering on their part but they were also testing a nag watermark for unsupported installs like this so maybe not. Either way it doesn't sound like a really solid path forward.
It's certainly not a path forward that Microsoft will recommend. It'll work fine, though.
If not, there are other operating systems that do work. Microsoft isn't the exclusive owner of the PC space, that's one of the major points of all of the antitrust fines and lawsuits.
He said it'll work fine, not that Microsoft is promoting it as such.
There are no fundamental changes to what's actually required; the limitations are arbitrary and set by Microsoft (and they provide a means to get around them).
It can work, yes, but AMD chips didn't get GMET until Zen 2, so if you leave virtualization-based protection on you might see a performance hit.
From Microsoft's website:
>Memory integrity works better with Intel Kabylake and higher processors with Mode-Based Execution Control, and AMD Zen 2 and higher processors with Guest Mode Execute Trap capabilities. Older processors rely on an emulation of these features, called Restricted User Mode, and will have a bigger impact on performance.
Which is amazing, since Windows 11 is full of mentions of green energy, lowering energy consumption, and asking you to lower screen brightness to reduce carbon emissions. Total greenwashing when you consider the gigantic amounts of e-waste that arbitrary cut-off date will lead to. It's just funny tbh; I get that it's most likely very different teams working on those things, but it's tone-deaf at best.
But hey at least the OS has a revolutionary new green technology called... Battery saving mode!
The worst part is that it is clearly cargo culting. What consumer suddenly buys Kool-Aid they never bought before because it says "paper straw now!" on the packaging?
True. It is not as though I've ever cared about what the vendor supports at home before.
Most of my machines were on a Linux distro before that decision, and Debian 12 is providing a good-enough experience for me on the desktop, even gaming.
I'll probably just stay there indefinitely. Is that an upgrade? Subjective. I'm happier here.
IIRC it has more to do with that series of chips not having GMET (AMD Guest-Mode Execute Trap for NPT), which is used in Windows 10/11's virtualization-based protection. Microsoft requires this option for all new PCs from their partners, but you can install Windows 11 and run it fine without this CPU feature (there is a performance hit if you leave virtualization-based protection on, though, since it has to be done in software).
Don't reward them for that. The only solution is to move to another OS, that's the only thing they will ultimately understand - no matter how inconvenient it might be.
That's not an option if you use your machine for work and your dev tools only work on Windows. PS5/Xbox toolchains only work on Windows; as a gamedev I don't really have a choice.
I think they hate Unix, not Windows, because it dragged all the legacy nonsense Unix had for hardware reasons straight into modern times, so everyone can laugh at it.
Sounds like the last time you used Linux was 20 years ago. Linux has been as easy as Windows to install and use for at least 10 years now. My mother's laptop runs Linux Mint and she doesn't know it's not Windows, even though the colors are all wrong. Why? It's all about the DE: if it looks like Windows, walks like Windows, and quacks like Windows, it's Windows. If she can click on the menu and find the internet then it's a win. Installation was as easy as Windows and everything worked out of the box.
Hell, I installed the Chicago95 XFCE theme on my main system to see how well it emulated the look and feel of Windows 95 and wound up liking it. Why? Because even though it looks dated, the icons were immediately familiar and I felt navigation instantly become easier.
Do not underestimate the power of familiarity. Many of us grew up on DOS/Windows playing games and typing school work up in Word, so moving away from those familiar waters is HARD. It's like being an immigrant moving to a new country - you have to put in extra effort to adjust to the culture and language. Some can, some can't. YMMV.
I've always felt that $BIG_BRAND_DISTRO+KDE got pretty close. It's still Linux, so not quite the same, but as far as look and feel, it's pretty close I think.
Use any Ubuntu-based distribution with MATE or Cinnamon. Like shit dawg, I got my grandma using Linux Mint with MATE.
Changed the background to match the one on her old laptop and added some desktop icons, and boom, 99% of her experience was the same. Had to help her a little with LibreOffice - that's a little different - but otherwise it's functionally similar to Windows 7 / Win10.
I'm using Linux as a daily driver and at this point there isn't anything I could do on Windows that I can't do here. The holdout for a while was games, but Proton with Steam works well and I can play big titles like Cyberpunk 2077.
The problem with the cloud is how often I've run into blanket Linux/BSD support bans, especially when it comes to professional certifications done online. I had one website refuse to work on my FreeBSD or Debian installs. It would just get to a certain point and not let me proceed, with multiple buttons refusing to work properly.
Got on the phone with support and they were dumbfounded. Got the idea to just spoof my user agent as a windows box on edge and it worked perfectly afterwards.
Even if not done on purpose, there is a lot of crufty shit online that breaks in unexpected ways when I'm on a Linux/BSD box, especially when interfacing with government websites and webapps. Our state fire code website looks straight out of 2002 and has multiple warnings about making sure to use IE6... in 2023.
Maybe it's just my use case (fire industry / local government), but it helps to have a Mac or Windows machine lying around as backup.
Windows 10 was released in 2015, so they will have offered 10 years of support, which was the standard policy in place ever since Windows Vista (Windows Vista, 7, 8/8.1, 10).
Apple does not support any macOS version for that long, and is unlikely to support a current version of macOS on any given device for that long.
You say “support” like the moment they stop it, the OS ceases to work. OS X Leopard is usable for most tasks. Lots of people are still running Windows 7 and Windows XP without any need for “support”. Windows 10 will be even better without the constant “support” reboots. Can't say this for Windows 11; it is so tightly stuffed with spyware and online integrations that it might just not boot if MS pulls some server switch.
Windows 7 is only starting to be obsoleted now: while Microsoft is still updating it with critical security fixes, Qt and Chromium dropping it is pretty much the end for non-legacy usage. (Goodbye Windows, I guess...)
Apple does not provide security updates for Catalina, which was released 3.5 years ago. People would be crazy to run unpatched OSes for any use case involving the internet or wifi.
That does not make these PCs obsolescent in any way. People still use Windows 7, and even Windows XP. Especially in the many contexts where "security" is not important at all.
I've had laptops where secure boot was permanently on unless you used legacy bios, and I've fixed several, both PCs and laptops, where there was no option to disable secure boot or use legacy bios. This was some years ago, and the situation has improved a lot since then.
ARM devices that are Microsoft-certified will specifically not give you that option.
You are probably largely using x86_64, which comes with the option to do so, but there has been a lot of push toward things like ARM for energy-efficiency reasons.
An AMD Lenovo laptop supposedly gets 16 hours of battery life, and again, this is subject to what you're running, but it runs x86 stuff natively instead of having to translate it. Isn't this 'good enough' for most people? I know we have people wanting week-long laptop batteries, but over 12 solid, real-world hours should be good enough for the majority of users, I'd think.
Uhh what? How does that work? Where do they request it? The TPM manufacturer? Microsoft?
TPMs aren't very secure, and as a discrete component their connection to the CPU can be intercepted (unlike fTPMs or Apple's integrated solutions). There's a big difference between having a deliberate backdoor and just a vulnerable design that can be exploited.
I haven't seen them accused of being backdoored. Intel's ME (and AMD's equivalent) perhaps but that's not the TPM.
TPMs can also be used to hide DRM keys from the user, and I'm also opposed to that, but generally that stuff is hidden in other hardware, like Google's Widevine stuff in mobile CPUs.
Yes, if you want to recover keys from a dTPM you have two options:
- decap it, scan it with a scanning electron microscope, reverse engineer it (or have already done so), and read the seeds and all NVRAM on the chip
- force the manufacturer to record the seeds even though they have processes to never do so, then force the manufacturer to reveal the seeds a dTPM shipped with given an EKpub for it
A few nation states could probably pull off the latter, but very few. And I suspect they haven't bothered and won't until TPM usage finally gets in the way. This is pure speculation; they may well have forced all the manufacturers already, for all any one of us knows.
More nation states could pull off the former. But again, they might not bother until TPM usage finally gets in the way.
As long as BMCs and BIOSes continue to use non-encrypted sessions to talk to dTPMs there is no need to do any of this when the attacker has physical access to the motherboard.
They have some weaknesses. A dTPM uses an unencrypted protocol to communicate with the CPU (simple I2C or SPI) and it's pretty easy to sniff if you manage to get legitimate access. But you do need a legit user to log in to the machine once. This is a bit of an Achilles heel.
In this sense an integrated solution is better because there is no simple bus to sniff, but it does have to be properly implemented, of course, which seems not to be the case here.
By the way a dTPM should have a real entropy RNG so technically it shouldn't have any (usable) seed. It's basically a smartcard soldered onto the mainboard. Of course smartcards can also have key generation flaws like the Infineon flaw a while back. https://www.schneier.com/blog/archives/2017/10/security_flaw...
> A dTPM uses an unencrypted protocol to communicate with the CPU
While that is strictly speaking true, the TPM command set allows you to set up an encrypted session to the TPM using an ECDH or RSA key for key exchange that authenticates the TPM.
The problem is that the BMCs and BIOSes out there don't record a public key for a primary key on the TPM and then don't bother using encrypted sessions (not even opportunistically getting that public key from the TPM, which would defeat passive attacks).
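To show what those encrypted sessions rest on: once the host and TPM share a secret (e.g. a salt encrypted to the TPM's EK-derived key), both sides run it through the spec's KDFa, an SP800-108 counter-mode HMAC KDF, to get the session and parameter-encryption keys. A simplified sketch (the "ATH" label and the nonce values here are illustrative; the real inputs are precisely marshalled per the spec):

```python
import hashlib, hmac

def kdfa(key: bytes, label: bytes, context_u: bytes,
         context_v: bytes, bits: int) -> bytes:
    """Counter-mode HMAC KDF in the style of the TPM's KDFa
    (SP800-108): K(i) = HMAC(key, counter || label || 0x00 ||
    contextU || contextV || bits), concatenated until enough
    output bits are produced."""
    out, counter = b"", 0
    fixed = label + b"\x00" + context_u + context_v + bits.to_bytes(4, "big")
    while len(out) * 8 < bits:
        counter += 1
        out += hmac.new(key, counter.to_bytes(4, "big") + fixed,
                        hashlib.sha256).digest()
    return out[: (bits + 7) // 8]

# Hypothetical shared secret from the ECDH/RSA salt exchange:
session_key = kdfa(b"shared-salt-secret", b"ATH",
                   b"nonce-caller", b"nonce-tpm", 256)
```

Because the salt can only be decrypted by the genuine TPM holding the EK private key, a bus sniffer never learns `session_key`, and the TPM is authenticated to the host at the same time. The pieces all exist in TPM 2.0; the hosts just don't use them.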
Thanks, I didn't know that, I thought indeed that it was simply not possible with TPM 2.0.
I do think it's time for a TPM 3.0, though. What Apple does with their T2 security chip, and later with the M1/M2, is have the secure element handle not only the key material but the actual encryption as well. They have hardware acceleration that can handle encryption at full disk speed. This is still a much better option than a TPM, especially with symmetric encryption, where the key would otherwise inevitably end up in the main CPU. In Apple's scenario this no longer happens.
- encrypt all command and response parameters instead of up to just one
- add a version of TPM2_Quote() that encrypts and signs, so one can have ciphertext that one can demonstrate was made by a TPM encrypting to a restricted, shielded key
- add a small secure enclave facility
- add more EC algorithms, EdDSA, etc.
- add more cipher modes for AES
- increase RAM and NVRAM requirements
All of this can be done incrementally in 2.x, so calling it 3.0 would be just marketing (perhaps pretty good marketing).
> By the way a dTPM should have a real entropy RNG so technically it shouldn't have any (usable) seed. It's basically a smartcard soldered onto the mainboard. Of course smartcards can also have key generation flaws like the Infineon flaw a while back. https://www.schneier.com/blog/archives/2017/10/security_flaw...
The seeds are an essential part of the TPM story for the generation (derivation) of primary keys, and for being able to "take ownership" of a TPM by changing those seeds.
The seeds are not an essential part of the TPM story for its RNG. A TPM absolutely can and should have a solid HW RNG. Though, were I designing a TPM, I'd combine the output of a HW RNG w/ a PRNG seeded internally.
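To illustrate the primary-key point, a toy sketch (not the spec's exact KDF or template marshalling, just the shape of the idea) of why stored seeds matter: primary keys aren't stored at all, they're re-derived on demand from the hierarchy seed plus the caller's key template.

```python
import hashlib, hmac

def derive_primary(hierarchy_seed: bytes, template: bytes) -> bytes:
    """Same seed + same template -> same primary key, every time,
    with nothing persisted. 'Taking ownership' of a TPM is then
    just replacing the seed, which invalidates all keys derived
    from the old one. (Illustrative only; the real derivation is
    the spec's KDFa over a marshalled TPMT_PUBLIC template.)"""
    return hmac.new(hierarchy_seed,
                    hashlib.sha256(template).digest(),
                    hashlib.sha256).digest()

a = derive_primary(b"owner-hierarchy-seed", b"RSA2048 storage template")
b = derive_primary(b"owner-hierarchy-seed", b"RSA2048 storage template")
assert a == b  # reproducible without storing the key anywhere
```

This is also why a manufacturer who recorded the seed could reconstruct every primary key the device will ever derive, which is the concern raised above.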
But the seed itself should still be fully random, though? And generated on-device during initialisation. Derived keys are a thing, of course, and I understand the benefit thereof.
But a manufacturer-installed seed that they have control over sounds like a very bad idea.
> TPMs aren't very secure and as a discrete component their connection to the CPU can be intercepted (unlike fTPM or apple's integrated solutions)..
The problem here is that while it is possible for a BMC / BIOS to know a dTPM's EKpub and use it to establish encrypted (and authenticated) sessions to the dTPM, most BMCs/BIOSes don't. This is a limitation on the host side, not the TPM side. I get that in total the vulnerability exists, but it doesn't have to, and TPM has a perfectly good solution for it. Take it up with the OEMs!
Why not simply abandon TPM and focus on making simple, trustable, massively parallel general-purpose hardware without backdoors for spy agencies and corporations?
Mine, and I want a TPM; it's a device essential for modern laptop security.
_Even if you would be able to control every bit of firmware on your computer and there was no DRM or similar you still would want a TPM!_
though potentially with a different implementation, and without some of the features built on top of it
Something like a TKey integrated into your CPU, with some additions for securing the boot chain (including the EFI itself), would probably be a convenient, simple way to get the necessary security without many of the problems... or did I just reinvent the TPM?
TKey is based on ideas like TPM and DICE. Think of TKey as a TPM-in-time, and a discrete TPM on an x86 mainboard as a TPM-in-space. Both go through the load-hash-measure-trust/execute steps, but TKey only needs one hardware domain to accomplish this whereas a discrete TPM needs two.
Measurements and their results need to be computed in a context that can't be subverted by the object that is being measured. In the discrete-TPM case the host loads and hashes the object and then sends it to the TPM before letting the object influence the host's control flow.
The only difference in the TKey case is that instead of sending the hash to a TPM the TKey derives key material based on Hash(sk, Hash(object)) and then removes sk from RAM. This is roughly equivalent to TPM sealing.
The TKey equivalent of TPM quoting/attestation would involve a signing step using sk, and leaving the resulting certificate in RAM before letting the object influence control flow. A downside to this approach is that each measurement creates another level in a PKI-like structure. If you do the same thing with a discrete TPM you can do multiple measurements and still only have one signature attesting all of them.
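The measure-then-derive step described above can be sketched in a few lines (names are illustrative; this is the general DICE-style pattern, not TKey's actual firmware):

```python
import hashlib

def measure_and_derive(device_secret: bytes, firmware: bytes) -> bytes:
    """DICE/TKey-style derivation: the next stage's key material is
    bound to the measurement of the code about to run. Tampered
    firmware doesn't trigger a policy violation; it simply receives
    a different (useless) key."""
    measurement = hashlib.sha256(firmware).digest()
    cdi = hashlib.sha256(device_secret + measurement).digest()
    # device_secret would now be wiped from RAM before control
    # transfers to the measured firmware.
    return cdi

k1 = measure_and_derive(b"unique-device-secret", b"firmware v1")
k2 = measure_and_derive(b"unique-device-secret", b"firmware v1 (tampered)")
assert k1 != k2  # any change to the measured code changes the derived key
```

This is the "one hardware domain" version: sealing falls out of the derivation itself, rather than from a separate chip checking PCR values.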
TPM is more or less an API specification. The specification is fine but people are worried about implementation backdoors and pre-provisioned keys. It should be possible to have an open source public trustable implementation that anyone can synthesise onto an FPGA or a real chip design. This ought to avoid fears about backdoors, while keeping a mature security model and good software support. I suspect there isn't sufficient demand or skill for such a project.
> I suspect there isn't sufficient demand or skill for such a project.
IMHO it's more like: there is little to no profit in this, nor much motivation for AMD/Intel to provide it. But it does involve additional work, especially if the fTPM implementation they currently use (partially) uses code they got from other companies which they can't open source.
Though you wouldn't need an FPGA; for a TPM to really be secure you want it integrated into the CPU. And most of the time this means it's not a "special" physical chip but just a "standard co-processor" running some software. E.g. in the case of ARM Android smartphones it's likely a more or less normal Cortex-M0 processor (and it likely runs more than "just" a TPM, e.g. some DRM pipeline protection code).
So theoretically you would just need to publish the "bare metal" code, and anyone could analyze it and then build and run it, e.g. using qemu (if qemu can handle co-processors, idk). And by also allowing extraction of the built code from the CPU, combined with reproducible builds, you could verify it runs what it says it runs (kinda; I mean, who says there isn't a hardware backdoor rewriting the code, and using an FPGA doesn't help there because the FPGA hardware could also rewrite the FPGA bitstream... at least theoretically).
As I'm sure you're aware FPGAs are nice because they can complicate things for an attacker in the physical supply chain. Andrew "bunnie" Huang did an excellent talk on the subject:
TPM 2.0 is a fantastic spec. There's little that's wrong with it. Even the bus sniffing vulnerabilities w/ dTPMs aren't TPM's fault but the BMC's and BIOS', as there is absolutely a way to encrypt and authenticate secrets to/from the TPM.
Reinventing this wheel will probably lose a lot of good things. People who reinvent wheels often fail to understand what came before.
The passive attacks on dTPMs in question require physical access, and opening the computer (and can be defeated with software). That is a pretty unlikely event for most users' computers. Active attacks (which can also be defeated with software) are more interesting because those are more likely, since the user doesn't have to cooperate with the attacker in those attacks.
Is a non removable TPM actually the right level of security? It feels like the same level of security I would get by having one of those realtor lock boxes permanently affixed to my front door and always keeping my house key in it.
Maybe the key itself is "more secure" because it only lives in the lockbox and is only taken out to unlock the front door and then put back right away. Is this the right system for actually securing access to my home, though?
Any security model that can not differentiate the device owner from a threat actor misses the point of who security is meant to protect.
TPM and secure boot combined can create computers that run per-CPU-key encrypted system binaries that cannot be modified by the device owner, meaning the next time Microsoft does something fucky there will be no path out and no programs to disable it; it's just how you have to live now.
If you could replace all the vendor keys (including the firmware signing keys) with your own then TPM could make sense. But today's TPMs don't support that.
TPMs have four key "hierarchies" each of which has a seed. Of those four, one (the "null" hierarchy) gets a random seed each time the TPM is reset, while the other three (the platform, endorsement, and owner hierarchies) have their seeds stored in EEPROM/NVRAM, and there are functions in the spec for replacing those with new, randomly generated seeds.
All primary keys in a TPM are derived from seeds, and the derived keys are not stored anywhere -- they are always derived as needed from the hierarchy's seed and a given template. Therefore changing a hierarchy's seed loses all access to primary keys previously used in that hierarchy.
All other keys are saved off-chip encrypted to a primary key. Thus rotating a hierarchy's seed loses all access to all keys previously used in that hierarchy (not just the primary keys).
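The seed-to-key derivation can be sketched with a generic HMAC construction (the real spec uses KDFa from TPM 2.0 Part 1; the names here are illustrative):

```python
import hmac, hashlib

def derive_primary(seed: bytes, template: bytes) -> bytes:
    """Toy stand-in for TPM primary-key derivation: the same seed and
    template always reproduce the same key, so nothing is stored; a new
    seed orphans every key derived under the old one."""
    return hmac.new(seed, template, hashlib.sha256).digest()

old_seed, new_seed = b"factory-seed", b"owner-rotated-seed"
template = b"RSA2048:restricted:decrypt"

k1 = derive_primary(old_seed, template)
k2 = derive_primary(old_seed, template)
k3 = derive_primary(new_seed, template)

assert k1 == k2  # reproducible on demand, never persisted
assert k1 != k3  # rotating the seed loses every prior primary key
```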
The only thing that is remotely problematic here is that you have to trust the TPM's RNG. If you're paranoid you might believe that the RNG is itself a PRNG with a hidden seed and that the manufacturer knows it. But if you roll the endorsement hierarchy seed and delete the endorsement key certificate from the TPM, then how will the manufacturer identify the TPM in order to look up its putative hidden RNG seed? Even if they could find it, how would they know which RNG output was used as the hierarchy's new seed?
So, yes, you can "replace all the vendor keys". It really is trivial. However, before you do it you may want to use the existing keys and certificates to bootstrap (enroll) the host into your organization's network and then change the seeds and certify various public keys as derived after the seeds are changed so that you can continue using the TPM for attestation.
You cut off the "including the firmware signing keys" part, that's critical. Otherwise the vendor could be coerced or subverted to sign malicious firmware which then subverts your system at runtime.
Only when you can bring your own keys for the entire boot and trust chain can you untether yourself from the vendor once you have purchased the hardware.
Yes, you can get compromised through firmware updates. But then, not using a TPM also leaves you vulnerable to firmware and software updates. If NSA has compromised all TPM vendors, then you can expect that they've compromised much more still, and so you've basically lost the fight against them. Key management is always a weak link in the chain.
I.e., I'm objecting to this focus on TPM in TFA and this discussion because a voltage fault injection vulnerability in the SP is fatal to security regardless of TPM usage/non-usage. I'm also objecting to the idea that TPM adds vulnerabilities when a non-TPM-using system already is full of ways for NSA and/or other such agencies to backdoor it.
> Yes, you can get compromised through firmware updates.
It's not just about updates through regular channels but about evil maid attacks with signed malicious firmware. This vector would be avoidable if you could sever the trust relationship.
Another wrinkle is that some of the blobs are encrypted (e.g. ME), so they can't even be audited.
Currently too much of the trust chain relies on untrustworthy components. So you can't trust the system. But the DRM vendor can trust it well enough for their purposes. Which makes those components a net negative.
> I.e., I'm objecting to this focus on TPM in TFA and this discussion because a voltage fault injection vulnerability in the SP is fatal to security regardless of TPM usage/non-usage.
> Currently too much of the trust chain relies on untrustworthy components.
This will always be true unless you build all the components yourself. And you don't have time to build all the components yourself. Therefore this will always be true.
With root of trust measurement you get to see that you're running code you've arbitrarily decided to trust. Everything else you could do would be mitigations (e.g., looking for access patterns that imply compromise) or attempts to suss out vulnerable and/or backdoored components (e.g., reverse engineering and analysis). Not that one shouldn't do those other things, but root of trust measurement is still both essential and insufficient.
Remember: perfection is the enemy because it's unattainable. We can tilt at windmills, but that won't get us anywhere.
> It's not just about updates through regular channels but about evil maid attacks with signed malicious firmware.
Yes, evil maid attacks are the primary way for targeted attacks using malicious firmware.
So this is something that maybe the TCG should tackle. It should be possible (maybe it is?) to require that the host meet some policy before the firmware update can run -- this would prevent unauthenticated evil maid attacks.
Mine, and I like it to stay that way. A TPM can be a valuable tool for protecting my data, making it difficult if not impossible for anyone to decrypt my drives. TPM+PIN is hard to beat.
The first link said nothing about TPMs. The second link is nonsense. The third link says the NSA "teams" with the TCG, which could be concerning indeed, but there's no details there. The fourth link is light on details and full of FUD. The fifth link says roughly the same as the third, and is equally light on details. The sixth link is like the fourth but it does have some actually useful information that says you're wrong:
> It is also important to note that any user concerns about TPM 2.0 are addressable. The first concern, generally expressed as "lack of user control," is not correct as OEMs have the ability to turn off the TPM in x86 machines; thus, purchasers can purchase machines with TPMs disabled (of course, they will also be unable to utilize the security features enabled by the technology). The second concern, generally expressed as "lack of user control over choice of operating system," is also incorrect. In fact, Windows has been designed so that users can clear/reset the TPM for ownership by another OS if they wish. Many TPM functions can also be used by multiple OSes (including Linux) concurrently.
This refers to the fact that you can:
- disable the TPM if you don't want to use it
- change the platform/endorsement/owner hierarchies' seeds, delete all the platform and endorsement certificates, and thus render any agreements between the NSA and the TPM manufacturers useless to backdooring the host (unless the agreement involves voluntary vulnerabilities in the TPM's firmware)
The last article you link to is much more interesting because it actually involves thinking about how the NSA (or other such agency) could have a backdoor inserted into the TPM:
> However, such “trust” can be easily misused to break security. In the talk, I used TPM as an example. Suppose TPM is used to implement secure data encryption/decryption. A standard-compliant implementation will be trivially subject to the following attack, which I call the “trap-door attack”. The TPM first compresses the data before encryption, so that it can use the saved space to insert a trap-door block in the ciphertext. The trap-door block contains the decryption key wrapped by the attacker’s key. The existence of such a trap-door is totally undetectable so long as the encryption algorithms are semantically secure (and they should be).
Er, well, this fails because there is no way to compress cryptographic material, and we're talking about random or pseudo-random keys being compressed (which, you can't) then encrypted. So this particular idea fails immediately.
That doesn't mean that there aren't other backdoors. For example, the ciphertext could be larger than necessary rather than use a compression of the plaintext. But this too fails because the sizes of the ciphertexts are easy to determine from the plaintext sizes.
The best way to add a backdoor is to have secret commands that use public keys for authentication (and even encryption) so that you have to know the backdoor in order to be able to use it. I cannot prove that there is no such backdoor, but if you have the means to decap and reverse engineer a dTPM then you can do this.
>Er, well, this fails because there is no way to compress cryptographic material, and we're talking about random or pseudo-random keys being compressed (which, you can't) then encrypted. So this particular idea fails immediately.
Excuse me, but wtf. That's BS. Cryptographic material is nothing but data. A Huffman encoder will work on a number that happens to be a public key just as happily as it will on anything else.
Cryptographic material doesn't have a magic "immune to compression" characteristic.
An RSA public key is a prime number. How are you going to compress a prime number?
Encrypted material should not be compressable because it needs to appear random no matter what the unencrypted contents are, otherwise you have information about those contents. (You can trade off between the security and compressability, but shouldn't.)
An RSA public key is a small exponent for encryption and a large composite pq where p and q are large primes. There's more to it, naturally, but I'll stop there. You can have many many RSA keys of any given size, say 2048 bits for now. So something somewhat less than 2^2048 possible keys for 2048 bit RSA keys, let's say 2^2000 keys, which... is a lot! Since one should generate keys randomly we can expect that no one key will be much more frequently used than any other, so Huffman encoding is right out, but even if you try Huffman encoding anyways you'll find that most keys will end up with Huffman codes that are longer than the keys themselves meaning that the keys are not compressible!
Thinking about AES-256 will be easier. Say we have 2^40 computers able to randomly generate 2^64 AES-256 keys every day. Well, it will still take a long time to generate all possible AES-256 keys. So say in the best case scenario we get close to 2^120 keys or so. How would we even measure their frequencies so we could assign Huffman codes to them? Well, we can't, and even if we could, most keys would have a frequency between 1 and 3. And still the next key to be generated could be outside that set, so we really have to assign a code to each key, and there are... 2^256 possible keys... which means that even if we could assign a code to each key we couldn't write that down, because there aren't enough atoms on Earth to do it with. But let's say we just define a collation of AES-256 keys and assign them Huffman codes in order... We'll soon see that most keys get codes longer than 256 bits. Which means that the average input to this compressor would yield an expansion, not a compression.
> An RSA public key is a prime number. How are you going to compress a prime number?
In principle, given an n-bit prime p, you can store an expected log₂ n − 0.5 additional bits by using the gap between p and the next prime. Though in practice, I'd be surprised if the 19 or so extra bits could be dangerous.
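Both figures follow from the prime number theorem; a quick back-of-the-envelope version (my derivation, not from the parent comment):

```latex
% Near an n-bit prime p, the average gap to the next prime is
g \approx \ln p \approx n \ln 2,
% so the gap carries roughly
\log_2(n \ln 2) = \log_2 n + \log_2 \ln 2 \approx \log_2 n - 0.53 \text{ bits.}
% A 2048-bit RSA modulus is built from two 1024-bit primes, hence
2 \times (\log_2 1024 - 0.5) = 2 \times 9.5 = 19 \text{ extra bits,}
% which is where the "19 or so" figure comes from.
```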
Try it for yourself. Encrypt /dev/zero in some key using AES, then compress it with whatever compressor you like. Try it many times. Let us know how it goes!
And, yes, you can compress specific pseudo-random looking symbols, but only a few. We're talking about arbitrary pseudo-random data, and that will not compress.
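The experiment the parent suggests needs no disk at all. Here a SHA-256 counter-mode keystream stands in for AES-CTR (the stdlib has no AES); encrypting zeros with a keystream cipher yields the keystream itself, and zlib cannot shrink it:

```python
import hashlib, zlib, os

key = os.urandom(32)

# "Encrypting /dev/zero" with a keystream cipher: 0 XOR k = k, so the
# ciphertext IS the keystream. SHA-256 in counter mode approximates
# AES-CTR here purely because the stdlib lacks AES.
ciphertext = b"".join(
    hashlib.sha256(key + i.to_bytes(8, "big")).digest()
    for i in range(32768)  # 1 MiB of "encrypted zeros"
)

compressed = zlib.compress(ciphertext, 9)
print(len(ciphertext), len(compressed))
# DEFLATE finds no structure in pseudo-random bytes and falls back to
# stored blocks, so the output is slightly LARGER, never smaller.
assert len(compressed) >= len(ciphertext)
```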
Why is PIN insufficient? Assuming encryption algorithms are sane, a 20 character long PIN should be able to achieve adequate entropy to keep the data safe.
It says any TPM can be defeated in 2-3 hrs with physical access. Is the AMD one different? Can it be defeated over networks? And is this something I should be concerned about since I just bought a new AMD machine?
One of the authors here. This attack is relevant if your machine is physically exposed to attacks, e.g., in an office environment or while traveling, and if you don't use any additional pre-boot passphrase to protect the disk (but rely solely on AMD's fTPM).
When TPMs became popular, dedicated TPMs were mainly used: a separate chip on the mainboard connected via the SPI or LPC bus. These were prone to (relatively primitive) bus sniffing attacks, where you would hook up a logic analyzer to the bus, watch a regular boot procedure, grab the disk key, and then use software like Dislocker to extract all data from a USB live Linux or the like.
Nowadays, most modern CPUs (both Intel and AMD) ship firmware TPMs that are "included" on the CPU die, making them safe against bus sniffing attacks. However, they can still be prone to more sophisticated attacks like ours.
> These were prone to (relatively primitive) bus sniffing attacks, where you would hook up a logic analyzer to the bus, watch a regular boot procedure, grab the disk key, and then use software like Dislocker to extract all data from a USB live Linux or the like.
The TPM supports encrypted sessions, but they are opt-in. See Parameter Encryption in the TPM spec. The issue is that BitLocker doesn't use them, for whatever reason. If BitLocker turned on encrypted sessions, it would not be possible to sniff the key. It's crazy that Microsoft keeps things insecure.
This is important because one purpose of TPMs is to prevent the owner of the machine from doing certain things (as in Digital Rights Management). And the owner of the machine presumably has physical access.
they are for boot chain security, which is an essential feature for any laptop
TPM by itself never prevents anyone from doing anything
but it's used with features like secure boot; as long as they fully implement the spec, they don't prevent you from doing what you want with your laptop, as long as you don't install software which does so
secure enclaves and similar used for DRM aren't directly a TPM feature but more an extension using TPM and other CPU features
and yes, it can be used to securely store keys in your hardware. Any keys! A feature any user who understands security would appreciate.
For DRM to work, it has to be running in a trusted environment where the user can’t just load up a debugger as superuser and read the keys from memory.
The way you do that is by using secure boot to ensure that you are running a trusted kernel that enforces appropriate access controls… which requires TPM.
One of the main selling points of TPM is that you have chain of trust to ensure the boot process wasn’t tampered by a rogue boot loader that modified your code. And, yes, that has security benefits as well, but don’t for a second think that DRM wasn’t a major consideration.
Author here: TPMs are not a TEE (trusted execution environment), and the TEE included in AMD's CPUs function completely separately from the TPM. So you could disable the TPM and still have the TEE run DRM code.
The fact that both TEE and fTPM run on the PSP (or AMD-SP) might add a little confusion, but is nevertheless interesting.
> TPMs are not a TEE (trusted execution environment),
I was using the phrase “trusted environment” more generally than that.
I do not mean a separate environment from the main CPU. Rather, that applications (like software DRM, or even the graphics driver) running on the CPU can’t trust the OS to enforce access controls without a secure boot environment.
How do you know that windows won’t let the user spin up a debugger and dump all your memory (or load a modified driver that lets them dump the frame buffer after content has been decrypted) for later use?
You need to trust that you are running in an environment where users haven’t just loaded whatever kernel modules or graphics drivers they want.
TPM is generally how you get a secure boot chain, so it is a prerequisite. Hence, TPM facilitates DRM.
no, secure boot only enforces that you run a trusted kernel, not that the kernel enforces access controls; and in a full secure boot implementation the user can freely choose what _they_ trust
and attacks which mess with the boot chain have been a huge problem for enterprises for a long time; TPM likely would have ended up very similar to how it did even if there were no DRM. Also, the DRM lobby has long pushed for moving (parts of) the DRM into the firmware (i.e., into a context where TPM doesn't matter much), which is where vendor-locked secure enclaves and similar come in; they are related to TPM 2.0 but not the same. For example, on some ARM/Android chips part of the DRM system sits in a locked secure co-processor.
And just because something can be abused doesn't mean it isn't useful or that it's fundamentally bad. Though you seem to be making exactly that argument now, with a "but it was designed with bad things in mind" added, which is an IMHO pointless argument. What matters is what it _is now_, not why it ended up there.
And what it is now is an _essential_ security feature for laptops, which can also be abused if used in combination with some other features, and only if those other features are tweaked to harm the user (e.g. don't allow custom keys for secure boot).
> no secure boot only enforces you run a trusted kernel not that the kernel enforces access controls
Yes, I am aware of that. But having DRM that is not completely ineffective has a prerequisite that it runs on a kernel that does enforce those access controls. The only way that works is with a trusted boot chain.
You originally responded “no” to a post saying that one purpose of TPM is to facilitate DRM.
TPM can be used for DRM. If you roll all the hierarchy seeds though then the TPM can't be used for DRM and you can just be denied access to media. In order to use a TPM for DRM you need a fairly dystopian secure boot of a consumer OS that enforces all DRM -- this is very much a possibility, naturally, and TPM enables it but does not guarantee it (otherwise we'd already be there today, though we're heading in that direction now).
There aren't really any mainstream DRM systems that use a general computing platform TPM, precisely because they have a terrible track record of being breached.
The point isn’t to store keys in the TPM. The point is to ensure you’re running an unmolested version of Windows that will enforce whatever security controls the DRM maker wants to have.
Part of that is things like:
* Don’t load an unsigned (or wrongly-signed) GPU driver, because it might be modified to allow a user to read from framebuffer memory after content has been decrypted.
All this effort for nothing making life difficult for the end-user. Physical video splitters are a thing. They are asked to respect HDCP, but they don't have to. It's how streamers are able to play a game on their monitor while also streaming the video of them playing the game.
Oh, no argument from me there, I’m just pointing out that you kinda need TPM to make your DRM not trivially bypassable.
I imagine that the MPAA et. al. are planning to attack the splitter thingy one day, so they’ll want to make sure you can’t slurp the frame buffer when that avenue is gone.
"Motivated by Windows 11’s push to use the TPM for even more applications, we apply the vulnerability to Microsoft BitLocker and show the first fTPM-based attack against the popular Full Disk Encryption solution. BitLocker’s default TPM-only strategy manages – without any changes to the user experience – to swiftly step up a user’s security in the face of a lost or stolen device. However, as our work complements the established attacks against dTPMs with an even more potent attack against AMD fTPMs, a TPM-only configuration lulls a non-technical user with high protection needs into a false sense of security."
"Users who fear a physical attacker with reasonable resources should opt for a TPM and PIN configuration. When BitLocker identifies that the underlying TPM is an fTPM, users should be urged to turn their PIN into a passphrase."
It could also be a concern for Linux users if they have configured their system to use systemd-cryptenroll (even with --tpm2-with-pin=yes or --fido2-with-client-pin=yes). The user is not asked for a secure passphrase in addition to having a TPM present. The user is just asked for a short PIN that is provided to the TPM2 or FIDO2 device and the device is not meant to return the secret without a valid PIN being provided.
There is actually an interesting point regarding TPM+PIN and systemd-cryptenroll: The data sealed in the TPM can directly be used to decrypt the disk (it is base64 encoded and used as a passphrase for a luks key slot). The PIN is only used to authenticate the TPM's unsealing of the data.
In contrast, the unsealed BitLocker data still needs to be decrypted with the pin to get to the VMK.
When our attack is successfully executed on a target, this means that TPM+PIN is broken on systemd-cryptenroll, and as secure as PIN-only with the same PIN on BitLocker.
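The structural difference described above can be sketched in a toy model (the function names and the KDF choice are illustrative; neither implementation literally looks like this):

```python
import hashlib, hmac

sealed_blob = b"high-entropy-secret-sealed-in-tpm"

# systemd-cryptenroll style: the PIN only gates the unseal operation,
# and the unsealed blob IS the LUKS passphrase. Defeat the gate (as
# this attack does) and the PIN contributes nothing to key material.
def cryptenroll_key(blob: bytes) -> bytes:
    return hashlib.sha256(blob).digest()

# BitLocker TPM+PIN style: the unsealed data must still be combined
# with the PIN to reach the VMK, so a TPM break degrades to a
# brute-force of the PIN instead of an instant key recovery.
def bitlocker_key(blob: bytes, pin: str) -> bytes:
    return hmac.new(blob, pin.encode(), hashlib.sha256).digest()

k_any = cryptenroll_key(sealed_blob)       # no PIN needed post-unseal
k_good = bitlocker_key(sealed_blob, "1234")
k_bad = bitlocker_key(sealed_blob, "0000")
assert k_good != k_bad  # a wrong PIN still yields the wrong key
```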
ChromeOS also heavily relies on TPM for disk encryption, and unlike Windows doesn't even give you the option of adding a passphrase or pin on top of it.
And there's probably some large enterprises that use regular Linux desktops with LUKS/Btrfs/ZFS encryption in TPM only mode, to match their Windows setups. Systemd e.g. added systemd-cryptenroll with ergonomics comparable to Windows' Bitlocker enrollment.
yes and it does use the TPM and is affected by this attack ;=)
at least recent Ubuntu versions, when using the default full disk encryption setup, do set up decryption using the TPM. You still need an additional password, as without one you would e.g. lose access to your data if you change some hardware or your motherboard breaks; depending on how they set it up, you may also need the password after kernel upgrades etc.
but the vulnerability allows someone with hardware access to get at all your data by booting their own code while messing with the TPM in a way where it still measures as if it were booting your code
The passphrase is what makes it a poor user experience. Many people simply need an encrypted disk that you can't boot offline and not the boot-time PIN/passphrase (which Microsoft abandoned as the default in Windows 8, I believe, again due to UX).
Typical Linux installations will not rely on the TPM in any way. But if you use systemd-cryptenroll to provide BitLocker-like UX for FDE, then the concerns are mostly the same.
I am wondering the same, considering the proliferation of AMD in the last few years I would be devastated to have to go back to Intel. Just when the Framework came out with their AMD versions too.
Access/privileges to execute on the host is required. I, personally, would not feel concerned unless I was a TPM user and knew I was a specific target.
"security properties of the TPM - like Bitlocker's TPM- only protector - can be defeated by an attacker with 2-3 hours of physical access to the target device"
So I take this is more of a data exfiltration type of attack?
I've not yet read past the abstract, though I've read (and responded to) a lot of the commentary here.
QUESTION FOR THE AUTHORS: Is there any way in which a voltage fault injection vulnerability in the SP can affect only the fTPM and not actually be a full host compromise but for the fTPM compromise?
I believe the answer to that has to be no. If you can compromise the SP you can compromise the whole system. Therefore this isn't really about fTPM. But you'll notice that many commenters are running away with this and saying that TPMs make systems less secure, which is not really correct.
Yes, TPMs are not used correctly by most BMC/BIOS implementations, or even by OSes, and there are vulnerabilities that arise from that misuse. But TPM 2.0 does provide what is needed to solve those issues.
The wholesale attacks on TPM 2.0 itself here are not warranted, especially if the SP vulnerabilities are more general rather than being specifically limited to fTPMs.
Relying on a TPM to rate-limit an attacker's ability to brute-force a comically short PIN and get the encryption key (Microsoft's promoted way to log in to a Windows device) is less secure (once the TPM is exploited) than relying on the attacker having to guess a high-entropy password to get the encryption key.
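Rough numbers for that comparison (a sketch, assuming an attacker who has already bypassed the TPM and can guess offline at, say, 10^10 guesses/second):

```python
import math

rate = 1e10  # guesses/sec; assumed offline GPU attacker

pin_space = 10 ** 6                    # 6-digit numeric PIN
# 8 random words from a 7776-word Diceware-style list:
passphrase_bits = 8 * math.log2(7776)  # ~103 bits

print(f"6-digit PIN: {pin_space / rate:.6f} s to exhaust")
print(f"passphrase:  ~2^{passphrase_bits:.0f} guesses "
      f"({2**passphrase_bits / rate / 3.15e7:.1e} years)")
# The PIN is only as strong as the TPM's anti-hammering; the
# passphrase's entropy stands on its own once the TPM is bypassed.
```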
I sometimes wonder if it'll be a selling point of Chinese CPUs in the future, "our CPU might not be the fastest, but it's the only one running at any given time!".
People don't need "flagship" CPUs for every single purpose. There is no reason why one cannot have a slower more private system for specific purposes, say general purpose computing, and the faster one with the autonomous network-aware CPU and OS be used for games only or something.
Interestingly, it's the Chinese processors (e.g. Allwinner, T-Head, Rockchip) that happen to be free of this secure boot garbage by default. If you want to, you can blow an efuse (with power applied to the VPP pin) and enable secure boot. But otherwise it's off.
I am waiting for a high performance ARM or RISC-V chip that's on par with AMD and Intel performance. One without secure boot. The moment that comes out, my Ryzen system is going in the bin immediately.
On par? Depends which year you're targeting and perhaps what applications. If you mean on par with current year then it'll be a long time before that happens because AMD and Intel are able to run power-hungry while ARM and RISC-V try to use less power.
Password-less TPMs will always be fundamentally vulnerable. The only question is what is the price to break them. It can be very high (as seen in the Xbox).
Thanks for the pointer, we haven't checked out OPAL yet. It seems to be the most popular standard when it comes to "Self Encrypting Drives" (SED).
Looking into it briefly, I found a 2019 paper by Meijer et al. ([1]) finding several flaws in OPAL-compliant drives. They further find that BitLocker entirely depends on SSD-based encryption if the hardware advertises it. That finding is very similar in nature to ours, in that BitLocker's disk encryption is insecure/unreliable in particular hardware configurations.
Thanks for the hint. From the paper it seems it's highly implementation-dependent which drives can be compromised and there's no immediate way to tell. Still it seems OPAL 2.0 is good enough to deter data leaks in case of theft (excluding targeted attacks).
Stupid question: was there no responsible disclosure and is there no cve assigned?
Seems like a rather practical attack (or am I missing something?)
Do the researchers consider it a known fact that fTPM is broken by design, and thus think that nobody will get hurt? It would make sense to require an additional passphrase on fTPM devices for BitLocker. Was there a statement from Microsoft or AMD?
From the paper: "All security-relevant findings discussed in this paper were responsibly disclosed to AMD, Microsoft, and the systemd-cryptenroll maintainers. The systemd-cryptenroll maintainers quickly got back to us to discuss specific mitigation strategies."
TPM based disk encryption is mostly useful as theft protection, so that companies, education facilities or government agencies can provision 100,000 devices without having to match smartcards/fobs/disk passwords to devices and users. (Which often leads to terrible security practices, like backup passwords shared between devices etc.)
So cracking it in 3 hours by amateurs is pretty bad, because now even not too sophisticated thieves can start looking for crypto wallets or sensitive data to ransom.
This is why you need to evaluate your threat model. Is the NSA out to get you? If so, I hope you aren't relying on HN for your security strategy. Are you worried about theft? Add a passphrase to your encrypted volume and don't rely on TPM. Worried about your coworker or family snooping on your computer and seeing your emails to your mistress? TPM is fine.
Well, considering I am 008, my threat model is quite high indeed! But ya, I was being hyperbolic. And HN has provided plenty of worthwhile info on the topic of OpSec over the years, do not be quick to discount. I would be a fool to reveal too much about my threat model but the reality is there exist many people who have reason to fear the NSA. Am I one of them? Nice try, NSA!
The popularity of hyperbole in HN discussions makes it easier for people to discount HN, both in drowning out useful information and in weakening discourse. Please reconsider in the future.
Perhaps, a counterpoint, the popularity of patronizing comments on HN make it... well, easy to discount what is actually quite good advice. Reconsider?
Not an expert, but I think the biggest exposure most consumers might face is lost/stolen devices where it's conceivable an adversary could have physical access.
Mostly academia and nation states. Physical access will always be king and this provides one more avenue for adversaries to bypass encryption more easily.
No. Your wording suggests that once attackers gain physical access, all is lost. It is not true. With a passphrase based full disk encryption, if the passphrase is strong and the machine is powered off, physical access doesn't imply data access.
Attach a microphone to the device while they are not looking, decode the keys they are pressing from the sounds, figure out what keys are the password, done.
This is _trivial_ for any mildly sophisticated attacker.
It's worth mentioning that standalone TPM chips from Infineon and others are a lot more hardened than Intel's or AMD's fTPMs. Infineon's TPMs are tested against fault injection attacks, package removal, side channels, and so on.
Please note, though, that it's imperative to then go for a BitLocker TPM+PIN configuration at least. A standalone (discrete) TPM with a TPM-only protector can be attacked by bus sniffing, a hardware attack much simpler than ours. [1]
The beauty of a discrete TPM is its anti-hammering protection, making a numerical PIN a very effective security measure (akin to a SIM/SmartCard).
Gnarly, especially because these vulnerabilities seem unpatchable. Luckily, it seems TPM+PIN should remain safe if your PIN is difficult enough to brute-force, though.
As this includes an attack against the secure processor, does this also pose risks to any DRM keys?
That cannot be a serious proposal. These attacks can and do happen, and it's in our interest to design systems that make them as hard as possible.
I'll take "nightmarish complexity" that puts these attacks outside of the scope of a technically savvy teenager over having to carry my machine with me everywhere I go, any day of the week.
Did you try OpenBSD with bioctl?
You can tamper with the bootloader, but not the rest. And you can always set the bootloader in another media and always boot from that.
Tampering with the bootloader is game over. And what, are you keeping this other bootloader medium on your person and in your sight at all times? It's never ever unattended?
Using TPM with closed source firmware, especially written and designed by Microsoft, probably full of backdoors, when you don't even know what it's doing is a worse choice.
Yes, it's an idiot-proof method that's vetted by millions of reviewers because it's developed in the open. The ability to self-validate is also invaluable; it's hard to have major widespread bugs with that, especially if it's widely adopted by trillion-dollar companies. OpenSSL is a great example of this.