One concern I've been having regarding a read-only root file system – an idea that I really like! – was how cumbersome software updates (say, through apt) and quick config changes (in /etc) must be. AFAIU I'd have to manually sign a new rootfs image every single time which looks rather painful to me. I wish Linux distributions provided a clear separation between user-facing software & configs and system-internal stuff that one hardly ever has to touch: IMHO software & configs should by default get installed on a per-user basis and not require root. (And applications should also be sandboxed by default but I'm digressing…)
Something that I'm not happy about is that the snaps all live on the writable /var since they want to do automatic updates all the time. This is problematic for a locked-down configuration and might recommend against a snap based distribution.
Separating out the bootable bits from the rest of the packages might help, as would running more things in sandboxes. Another option that we're exploring is some LVM magic to create a snapshot, upgrade the snapshot, sign it, and then on the next reboot use it as the real root. This is also useful for fleet management -- the new root filesystem, kernel, initrd, etc can arrive "behind the scenes" and on the next reboot is the one that is used. Since the PCRs can be predicted as well, the PCR policy can be signed and sent along with the upgrade to make it seamless.
This sounds very nice and similar to Android's A/B partitions!
> Since the PCRs can be predicted as well
This may be a stupid question but… what are PCRs? Google yields "polymerase chain reaction" – a method used, among others, for detecting the coronavirus but I'm sure that's not it. :)
IMHO the TPM should be a required piece but not the only piece of the puzzle. If I lose my laptop, I don't want the goods to be protected exclusively by a key that's trivial to recover from it (stored in something that's not a secure element).
I've covered it in a talk I gave at 44con:
tl;dr: Use the TPM (and potentially other technologies like SGX) as part of your KDF to strengthen the PIN/passphrase that the user provides. This breaks the asymmetry of offline attacks (the attacker will always be bound by TPM/SGX speed). Do NOT give it the only key required to decrypt your data.
> The PCR values in the TPM are not "secret", so an adversary with physical access could directly wire to the TPM and provide it with the correct measurements to extend the PCRs to match the signed values. The user PIN is still necessary to unseal the secret and the TPM dictionary attack protections both rate-limit and retry-limit the attacker.
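The extend operation the quote refers to is just iterated hashing, which is why the PCR values are not secret: anyone who knows the (public) measurements can replay them and reach the same value. A minimal sketch in Python, assuming a SHA-256 PCR bank; the component names here are made up for illustration:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend: new value = H(old value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# A SHA-256 PCR starts at all zeros on reset.
pcr = bytes(32)

# The measurements (hashes of firmware, bootloader, kernel...) are public,
# so replaying the same sequence always reproduces the same PCR value.
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

print(pcr.hex())
```

This is why the PIN plus the TPM's dictionary-attack rate limiting carries the real protection, not secrecy of the PCR values themselves.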
Decapping chips to recover secrets is outside of the threat model, however.
Decapping a chip from a lost laptop is far from science fiction and can be performed at a fixed cost. The mitigation is super cheap... There's just no good reason to store the "final" key on the TPM.
Here I interleave rounds of argon2id (configured with parameters that fit my system: use up all the RAM and all the cores since there's nothing else to do in the initrd) with HMAC rounds from TPM and/or SGX (configured with the right policies so that they rate-limit and only unlock if the PCRs check out).
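A rough sketch of that interleaving, with two loudly-labeled substitutions: stdlib `scrypt` stands in for argon2id, and a hypothetical `tpm_hmac` simulates in software what would really be a rate-limited HMAC performed inside the TPM (or SGX enclave) with a non-exportable key:

```python
import hashlib
import hmac

def tpm_hmac(data: bytes) -> bytes:
    """Stand-in for an HMAC computed inside the TPM.

    In reality the key never leaves the chip and the operation is
    rate-limited and gated on PCR policy; this software version only
    illustrates where it sits in the derivation chain.
    """
    TPM_INTERNAL_KEY = b"\x00" * 32  # hypothetical; never extractable in reality
    return hmac.new(TPM_INTERNAL_KEY, data, hashlib.sha256).digest()

def derive_key(passphrase: bytes, salt: bytes, rounds: int = 3) -> bytes:
    """Interleave memory-hard KDF rounds with TPM-bound HMAC rounds."""
    key = passphrase
    for _ in range(rounds):
        # Memory-hard round (argon2id in the original scheme; scrypt here
        # because it ships with the standard library):
        key = hashlib.scrypt(key, salt=salt, n=2**14, r=8, p=1,
                             maxmem=2**26, dklen=32)
        # TPM-bound round: forces every guess through the rate-limited chip.
        key = tpm_hmac(key)
    return key
```

The point of the structure is that an offline attacker with a disk image cannot iterate guesses faster than the TPM/SGX round allows, while the memory-hard rounds keep the software side expensive too.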
If an attacker can get hold of the hardware long enough to get into the TPM, they could just copy the encrypted drive and replace the laptop entirely - the only thing left would be to ship the typed passphrase home on the next login, no?
I suppose ideally there'd be some kind of challenge-response to verify the TPM (very naive version - type in a wrong pin/pw first - if it's accepted you know the system is compromised..).
But, assuming the attacker can replace the whole system - I'm not sure I see how it could be trusted fully, assuming it's not under 24/7 watch (and even then, it could of course be compromised, but shifting the attack toward eg bribery, betrayal, neglect etc).
If that is the case, how does it strengthen a PIN? Any attacker wanting to find a decryption key could simply extract the key, and then brute-force the PIN outside of the TPM constraints, can't they?
It definitely does when no attempt is made at protecting against it. L2 means "tamper evidence"; you need L3 for things to start being designed to prevent it from being "basic".
SGX is L3, you'll be hard pressed to find a TPM that does better than L2.
Phrasing it another way: Even if you don't have the skills/equipment to do it. How much do you think it costs to get someone to do it for you? How reproducible is that process? Why are we assuming it's hard?
Yes, I'd like more security, but it's not bad.
For example for cryptographic primitives, if you didn't include the NSA in your threat model, you did something deeply wrong in your modelling.
They could know more than civilian cryptographers and have new direct attacks that we don't know about yet, e.g. algebraic attacks and specialized hardware to solve gigantic systems of equations. Or they could have a working quantum computer with many qubits. We don't know, do we?
Sometimes information leaks (the best-known example being the Snowden leaks) or hints come up.
One example: https://theintercept.com/2017/05/11/nyu-accidentally-exposed...
A (German) commentary on this article: http://blog.fefe.de/?ts=a73ff836
It's not just the NSA; it's literally everyone else as a class of threat they might need to consider. Also, I use opposition researchers as threats for politically exposed people, and they cross over into foreign-spy-level stuff.
The controls it prescribes are straightforward, and realistically, it's a risk you just understand, do your best to mitigate, and accept. If you are going to not do business because you are afraid of state-level consequences, you've got a legal/regulatory problem, not a technical one.
I didn't look at it in detail, but in one of the screenshots the system asks for a PIN to unlock the disk.
I agree that storing a full decryption key in the TPM may be risky. Even if the threat model should be kept in perspective (it may not be interesting for an attacker to go around doing this to Joe Random's laptop), it is something that users should be aware of.
IMHO it's clearly better than no TPM... As for whether it's better or worse than a discrete chip, it's a different trade-off.
On one side you have:
- higher speed
- higher protection against physical attacks (if only because the die is larger, the feature size is smaller, and the "bus" isn't as trivial to probe)
On the other:
- new side channels (think spectre, meltdown & friends) and they are probably easier to exploit thanks to the higher speed (more samples)
- more parties to trust (microcode, ME, ...)
- erasure is harder
For the specific purpose of hardening passphrases/keys ... use both. :p
Now I have to prepare for the five dollar wrench attack. [https://xkcd.com/538/]
Perhaps I can use Shamir's Secret Sharing to split a key among people I trust, including a lawyer I've retained, who must keep clients' secrets and is exempt from police raids by law.
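For the curious, Shamir's scheme is just polynomial interpolation over a finite field: the secret is the constant term of a random degree-(k-1) polynomial, and any k evaluations pin it down. A toy sketch (not production crypto; the prime, share indices, and everything else here are illustrative, and real secrets belong in an audited library):

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in GF(P)

def split(secret: int, n: int, k: int):
    """Split `secret` (an int < P) into n shares; any k of them recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(1337, n=5, k=3)
print(recover(shares[:3]) == 1337)  # any 3 of the 5 shares suffice
```

Fewer than k shares reveal nothing about the secret (information-theoretically), which is what makes the lawyer-plus-friends arrangement work.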
Please update your main page title, so a quick-made bookmark remains searchable when needed.
sbctl is essentially a secure boot key manager. It enrolls keys and ensures the relevant files are signed on your system. It works fine and I use it day-to-day these days, but it lacks several nice UX features.
The second thing I did was reimplement the UEFI API portion in native Go code from scratch. It is currently feature-comparable to sbsigntools, but in pure Go. The top-level API is not completely nailed down and it lacks some granularity, but I have written several test tools that replicate the sbsigntools binaries.
I think more development in this area can help make Secure Boot as accessible as full disk encryption is these days.
Fortunately, there's an easy alternative if you want to protect against evil maid attacks: use full disk encryption and keep the bootloader (and key) on a USB drive on your person.
Secure Boot is trustable, if you remove the vendor keys and reprogram the platform key with one under your own control. Likewise, the TPM is useful for protecting your secrets, not just enforcing DRM, if you take ownership of it and make use of the sealed key policies. See the safeboot.dev threat model for how these protections are applied and how they detect or prevent many sorts of attacks.
...you have verified the silicon of your TPM chip, motherboard, etc.
The purpose of Secure Boot is to validate that the bootloader is trusted so that you can have some assurance that you're not giving your disk encryption password to a fake bootloader which phishes you.
Secure Boot doesn't give any agency more control over your machine than if you were not running Secure Boot. Using Secure Boot is strictly more secure than not using it, even if you don't trust the parties who made the implementation.
The only argument against it is that it provides a false sense of security, which is only a problem if you decrease security in other areas as a result of using Secure Boot.
And it additionally means trusting them despite the possibility of an NSL, which very likely was served to them in the past already and means they probably have an automated pipeline for handing the keys over to federal institutions.
I'd never trust any OEM BIOS with anything. Just as I won't trust Intel ME.
Or are you literally replacing all OEM firmware, using purely open hardware. Using purely open source firmware. Verifying the firmware you have corresponds to the sources you have. Verifying that there is no additional secret firmware you don't know about, verifying that the hardware you have actually corresponds to the open hardware specs you have, etc. i.e. Doing an insane amount of steps that are so impractical, you might as well make your own computer starting from first principles.
What you can do, though, is try your best with what you can influence using your own skillset. I would never claim that any device is secure (heck, ever since BadUSB not even my power adapter is), but I'd feel better using coreboot that I configured, built, and flashed via my CH341a adapter instead of an OEM SeaBIOS, for example. I mean, software is my skillset. Software I can influence. Hardware: not so much.
I don't know whether there are government-level exploits available for coreboot or libreboot, but I think that's the level of security where we can just dump our hardware into the trash anyways.
Additionally I don't have the skillset of verifying that a RISC-V chipset is really open, verified or secure. Therefore I would have to trust somebody else to do it, which might become the centralized point where the red tape fails for all of us.
When it comes to open hardware, mntmn got pretty far already, even though I personally think that the touchpad is still unusable in terms of modern UX. But I really admire them for what they do and that they do not compromise on their core principles.
If you spend $1000 (or equivalent in time/whatever you care about) mitigating a risk that at worst would cause you $10 worth of damage, that is a poor use of resources.
Of course some people like locking things down as a hobby. Nothing wrong with that, but at that point you're doing it for fun, not to protect yourself.
the only acceptable phrasing of "I don't trust anything" is to finish that sentence with "... therefore I don't use computers". the very idea of using computers means that data is processed, so eliminating all attack surface is not in play.
You can "not trust" computers, and use them.
I wouldn't want to do that on an x86 platform though. The A64 SoC is simple enough that getting close to this ideal is possible.
> With a Librem Key linked to your encrypted drive, you can boot your system, insert your key, and enter your PIN when prompted. You can always fall back to your passphrase if your Librem Key isn’t at hand.
Emphasis mine. Since the bootloader is not protected, it's susceptible to evil maid attacks.
or here https://docs.puri.sm/Librem_Key/Getting_Started/User_Manual....
So the bios that boots your USB in a hypervisor can't read what you type?
That's what the TPM can sort of help with.
For offensive work, the motto (of one of the NRO's satellites) comes to mind: doing God's work with other people's money. Everything is tapped; we should assume at least a proportion of what we take for granted now is unsafe. It's unlikely they'll have broken any big-fry protocols or schemes, but planting a backdoor is trivial for them (if you can manipulate the entropy on 10% of computers, you should be able to crack them within 10 years; think of all the kids you could save!).
This was not an NRO mission patch, but one for the Air Force Rapid Capabilities Office.
And I'm very likely in the minority of HN on this one, but I think this is generally probably fine and warranted. That kind of hoarding is exactly what I would expect and want them to do, as opposed to the warrantless domestic dragnet surveillance I don't want them doing. If you're in a non-stop ever-changing arms race, you want every edge you can get, as long as there's a carefully considered cost-benefit analysis (which they likely at least attempt to perform).
Why is compartmentalization natural? The business world analog is "silos", and we're forever trying to break them down, or work around them or something. Are intelligence agency compartments just jargon-justification for bureaucratic fiefdoms? We know human organizations tend towards individual small warring tribes, are compartments just a justification of that?
Would an intelligence agency that scraps compartmentalization have an advantage? How would you see that advantage?
Because intelligence agencies are always also concerned with counterintelligence as a major function.
> The business world analog is "silos", and we're forever trying to break them down, or work around them or something.
Most businesses try to keep highly sensitive data that has adverse consequences for release siloed. Unlike intelligence agencies, for most businesses such information is exceptional, rather than the rule.
> Are intelligence agency compartments just jargon-justification for bureaucratic fiefdoms?
They aren't just that, which is why the practice is universal. There is, of course, the perennial risk that the legitimate need gets exploited for that, though.
> Would an intelligence agency that scraps compartmentalization have an advantage?
As long as they were never penetrated by a hostile agency, maybe (though it might also reduce focus, contribute to analysis paralysis, and have other deleterious effects without penetration.) But the impacts of any penetration would be magnified, and while major penetrations may be rare because of compartmentalization, penetrations of intelligence agencies aren't rare enough for magnifying their impact to be discounted.
>That kind of hoarding is exactly what I would expect and want them to do, as opposed to the warrantless domestic dragnet surveillance I don't want them doing
Why do you think they aren't collecting these exploits for more domestic surveillance?
They may very well be. But, first, because a 0-day in Microsoft Word or something isn't really helpful for spying on hundreds of millions of people; it's for rare, highly targeted spear phishing and other kinds of very precisely-aimed operations, and I think that's the type of stuff they generally discover and/or are given/sold
In theory some kind of major flaw in TLS or networking equipment could enable it, but the latter is risky to be doing all the time (dragnet implies constant surveillance), and the former is as well unless it can be done purely from passive observation of traffic, and I think such a critical vulnerability in modern TLS requiring no active interference (e.g. not Heartbleed) is fairly unlikely and rare - though of course definitely not impossible.
Also, I think after all the leaks and recent high-ranking court rulings, it's just not very tenable for them to keep that going as it existed before, even if only due to future leaks and backlash. Plus, PRISM and XKEYSCORE are cool and have rad cyberpunk codenames and stuff, but from what I can tell the actual valuable, actionable intelligence they got out of it wasn't worth even 1% of what they put into it, due to having so much raw data to deal with. Trying to filter the signal out of the noise is like finding a needle in a galaxy-sized haystack. Future ML and other software developments could maybe make finding the needle easier, but it'll always be a very technically challenging problem.
And now that there's a precedent of leaking, there's a higher risk that a future dragnet surveillance program might get exposed by people who otherwise wouldn't have exposed different programs. "Vacuum everything, ask questions later" / "collect them all and let God sort them out" just seems technically, politically, legally, and practically not worth continuing. I'd also like to think some percentage of employees have probably been swayed and now morally oppose it, even if they wouldn't say it openly.
And, finally, I actually don't personally care much about being caught in that dragnet myself, so the thought of it doesn't really bother me. I work in infosec and am very privacy-conscious, too, to the point of some friends thinking I'm paranoid - I've just been in enough positions to know that it's like being the Earth: you feel important, but relative to the universe you're so small you might as well not exist. My threat model and risk profile are just very different. However, it's of course unconstitutional and unethical, and the fact that many other people feel very violated by it is more than enough reason for me to oppose it, even if it's more on abstract, philosophical grounds.
you mean 'abstain'?
and no, it would be responsible to abstain, because offensive cyber relies on knowledge of vulnerabilities in software and hence creates an incentive not to fix them, which in turn weakens security for everyone.
X terrorist does it, so why can't the US, right? Is this line in the sand really drawn at cyber? And does cyber not kill people in meatspace? Last I looked, you drone-strike weddings based on metadata.
I would guess "offense" in the digital domain to be even less of a rivalrous good than in the analogue.
So I think it is fair to stay critical if the NSA supports unique identifiers for hardware.
*: Depending on your threat model and risks, some of which are discussed here https://safeboot.dev/threats/
We also know from smartphones that manufacturers can indeed be motivated to lock bootloaders. I think the main reason we don't have that on PC is that there are still multiple manufacturers and legacy considerations.
Aside from that it remains true:
I cannot read the minds of Microsoft, but I have my assumptions that I believe are quite safe.
https://trustedcomputinggroup.org/ have rebranded themselves because they got a bad name. Justified, in my opinion. People identified the motivation on day one.
But again, yes, it can have some security advantages against the numerous disadvantages. I think it is bad for open computing overall. There are certainly mechanisms to secure your OS that don't rely on TPM. It may benefit you, but I would actually like to see it removed from my machine with all the consequences (which would be not being able to play DRM protected media).
UEFI checks first-stage bootloaders and/or kernels for a signature backed by a key/cert in the firmware's signature database. AFAIK, out of the box, that means Microsoft, RedHat/IBM, Canonical or a handful of others have signed your bootloader/kernel.
Whatever you run next is supposed to check that any further code is signed - at least code allowed into kernel space.
If there exist any signed snippets with exploitable errors, the whole house of cards collapses (but you can limit exposure by only allowing code you yourself signed, such as a single build of the Linux kernel).
Linux can/will require signed drivers with secure boot enabled - so that can lead to some issues (that can be fixed, eg by adding a signing key and signing the drivers).
In theory, you'll never run untrusted code in the kernel - no blue pill hypervisor root kit etc.
The purpose of Secure Boot is to verify that the binaries (e.g. bootloader) that the firmware is executing from your EFI System Partition (Yes, UEFI systems are aware of both partitions and filesystems, unlike BIOS systems) are digitally signed with a key in its database. Likewise, those binaries are themselves supposed to verify that the things they're loading (e.g. kernels) are signed with a trusted key, which can either be a key built into the Secure Boot database, or a key built into the bootloader (where changing such a key would invalidate the signature on the bootloader itself).
If you're running Linux, you can even eschew a bootloader entirely, by building the kernel itself as an EFI binary and relying on the UEFI Boot Manager to load it directly. This is called EFI stub mode, and is still compatible with Secure Boot if you sign the kernel binary yourself, with a key that you provision into the database. This is how my NAS boots.
Note that nothing here implies any sort of encryption. Whether you use disk encryption or not is independent of whether you use Secure Boot or not -- Secure Boot does not require, or even provide, any disk encryption services. Something like Microsoft's BitLocker can use a TPM to store the decryption key, and Windows will not require that UEFI Secure Boot is enabled to do this. However, changing the system firmware settings after the fact (e.g. turning Secure Boot on or off) will make the TPM (correctly) refuse to divulge the disk encryption key you've sealed into it during BitLocker setup, rendering the machine unbootable again until you either (a) undo your configuration change or (b) enter your BitLocker recovery code and set up BitLocker all over again.
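The "sealed to firmware configuration" behavior can be illustrated with a toy model: derive a mask from the PCR state and use it to wrap the key, so any change in the measurements yields a different mask and garbage on unseal. This is only a caricature (a real TPM keeps the wrapping key inside the chip and enforces the policy itself; the XOR masking and the measurement strings here are made up):

```python
import hashlib

def seal(secret: bytes, pcrs: list) -> bytes:
    """Toy 'sealing': XOR the secret with a digest of the PCR state.

    Illustrates only the policy binding; a real TPM never exposes the
    wrapping key, so this offers none of the actual protection.
    """
    mask = hashlib.sha256(b"".join(pcrs)).digest()
    return bytes(a ^ b for a, b in zip(secret, mask))

unseal = seal  # XOR is its own inverse

pcrs_before = [hashlib.sha256(b"secure boot: on").digest()]
blob = seal(b"disk-encryption-key-0123456789ab", pcrs_before)

# Same firmware configuration -> same PCRs -> the correct key comes back.
assert unseal(blob, pcrs_before) == b"disk-encryption-key-0123456789ab"

# Toggling Secure Boot changes a measurement, so unsealing yields garbage,
# which is why BitLocker falls back to the recovery code.
pcrs_after = [hashlib.sha256(b"secure boot: off").digest()]
assert unseal(blob, pcrs_after) != b"disk-encryption-key-0123456789ab"
```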
Speaking only of the bootloader portion, which is the first 440-ish bytes. The disk identifier and partition table are in the rest of the first 512 bytes, but on a UEFI system booting in UEFI mode, there is (usually) only a single "partition" in the table here anyway: a protective entry (type 0xEE) covering the whole disk, which tells legacy tools the disk is GPT. The actual GPT with the real list of partitions follows after. Implementations differ on whether they actually require a protective MS-DOS partition table, so a GPT-only disk (no protective MBR) could be bootable on some systems anyway.
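That first-sector layout is easy to inspect programmatically. A small sketch that builds a minimal fake protective MBR and checks it (the field offsets are from the classic MBR layout; the constructed sector is synthetic, not read from a real disk):

```python
import struct

def parse_protective_mbr(sector0: bytes):
    """Return the non-empty partition type bytes from a 512-byte sector 0,
    or None if the MBR boot signature is missing."""
    assert len(sector0) == 512
    if sector0[510:512] != b"\x55\xaa":
        return None  # no valid MBR signature at all
    types = []
    for i in range(4):  # the 4-entry partition table starts at offset 446
        entry = sector0[446 + 16 * i: 446 + 16 * (i + 1)]
        if entry[4] != 0:  # byte 4 of each entry is the partition type
            types.append(entry[4])
    # A protective MBR has exactly one entry, type 0xEE, spanning the disk.
    return types

# Build a minimal fake protective MBR for illustration.
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"
entry = bytearray(16)
entry[4] = 0xEE                                 # partition type: GPT protective
struct.pack_into("<I", entry, 8, 1)             # first LBA = 1 (GPT header)
struct.pack_into("<I", entry, 12, 0xFFFFFFFF)   # size: "rest of the disk"
sector[446:462] = entry
print(parse_protective_mbr(bytes(sector)))  # prints [238], i.e. [0xEE]
```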
Secure Boot shouldn't affect VirtualBox in any way. Are you sure your VirtualBox issue didn't have to do with Hyper-V?
It hardens the boot chain, eliminating any vendor Secure Boot certificate and replacing it with your own. The entire boot chain gets signed, including the initramfs and GRUB configs.
Security benefits are questionable. There is no scenario where non-technical users boot into a malignant OS. They open mails with strange attachments. The security analysis leading to this initiative is some kind of fantasy novel.
According to the report there is, and I believe the report is correct, when the "malignant OS" is not actually a full OS, but rather a rootkit. This rootkit might have been the result of a non-technical user opening a mail with a strange attachment.
It's supposed to protect from a very theoretical risk.
I'm running Arch on a "professional" HP ProBook with Secure Boot activated. This laptop has no official Linux support whatsoever from HP. It's fairly recent, too – 2018 I think.
The BIOS allowed me to set up my own keys, which I use to sign the kernel. A fun fact is that, with the way it is currently set up, it refuses to boot Windows because of the... wrong signature!
Not all manufacturers allow this, and even then, it adds considerable user complexity to have to do this to install Linux, as opposed to a normal liveUSB GUI flow, not to mention that it screws up dual-boot, as you said. So, it is practically a requirement, even if it is technically possible to work around it with good technical skills on specialized hardware.
I think in this case there are two problems, more related to the hardware and SecureBoot implementation rather than secureboot itself.
1. Shoddy hardware that doesn't allow the user to control it.
2. A process that is relatively involved, although I'm not sure how you could go about providing an "easy way" for people without technical skills.
One way or another, the keys have to get into the UEFI. It's technically possible to configure them from inside Linux (while it's running) – this worked on an HP EliteDesk – but UEFI Secure Boot has to be disabled first. I suppose that's relatively simple to do, certainly simple enough for someone interested in trying Linux (as opposed to people who don't care what OS they use as long as they can accomplish what they want with their computers).
I guess a nice way would be for the UEFI to expose some sort of interface where the OS can change the keys and then, on reboot, ask the user if they really wanted the change. But this would still be a problem for the initial setup, because, if booting Linux (instead of the usual Windows), you are actually booting a different OS than what the UEFI expects.