Roots of trust are difficult (mjg59.dreamwidth.org)
84 points by todsacerdoti 10 months ago | 37 comments



Worth pointing out OpenTitan (which I work on) here: www.opentitan.org, www.github.com/lowrisc/opentitan

It's an open silicon root of trust. All the RTL (SystemVerilog code that describes the hardware), DV (design verification, used to demonstrate the functional correctness of the design so you can be confident building a chip from it), documentation and software is open source and available in the repo linked above.

Being able to trust your root of trust is obviously vital, and current implementations are proprietary, with heavy NDAs required to get any more detail. With OpenTitan you can go as deep as you want to understand what's actually going on.


This is still aimed at megacorps, not at your average hacker without a semiconductor fab handy.


Maybe not quite the same space, but bunnie's Precursor uses an FPGA instead: https://betrusted.io/


I mean of course it is? I’m not sure what you mean by “still”. I don’t see how it could ever be aimed at your average hacker.


Give it some time. I think it's possible that sometime down the line you'll get ASICs the same way you get PCBs ordered now.


> when I say "trustworthy", it is very easy to interpret this in a cynical manner and assume that "trust" means "trusted by someone I do not necessarily trust to act in my best interest". I want to be absolutely clear that when I say "trustworthy" I mean "trusted by the owner of the computer"

[...]

> Of course, simply measuring the fact that verified boot was enabled isn't enough - what if someone replaces the CPU with one that has verified boot enabled, but trusts keys under their control? We also need to measure the keys that were used

The motivation for this stuff has always seemed pretty weird to me.

Out of the box, secure boot means your PC only boots things signed by Microsoft. Microsoft has deigned to sign a 'shim' which lets you boot Linux - so long as your kernel is signed by a vendor with Microsoft's blessing, like Canonical. Otherwise, you've got to go into the BIOS and disable Secure Boot - or enrol your own MOK, a similarly manual process, with similar security impact.
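To make that flow concrete, here is a toy Python model of the decision shim effectively makes before handing off to a kernel: accept if the image verifies against the vendor certificate built into the signed shim, or against an enrolled MOK, and refuse otherwise. All names and the "signature" scheme here are invented for illustration - real shim parses Authenticode signatures.

    import hashlib

    # Toy model of shim's boot policy, not shim's actual code. A real
    # signature is an Authenticode structure; here it is just a
    # (signer_name, sha256_of_image) pair.
    BUILTIN_VENDOR_CERTS = {"distro-vendor-ca"}  # baked into the MS-signed shim
    ENROLLED_MOKS = {"owners-dkms-key"}          # keys the owner enrolled via mokutil

    def shim_should_boot(image: bytes, signature: tuple[str, str]) -> bool:
        signer, digest = signature
        if digest != hashlib.sha256(image).hexdigest():
            return False                         # signature doesn't match the image
        return signer in BUILTIN_VENDOR_CERTS or signer in ENROLLED_MOKS

    kernel = b"vmlinuz contents"
    assert shim_should_boot(kernel, ("distro-vendor-ca", hashlib.sha256(kernel).hexdigest()))
    assert not shim_should_boot(kernel, ("unknown-key", hashlib.sha256(kernel).hexdigest()))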

I get why this stuff is useful for big corporations making TiVo-ized products, wanting to lock out the device owners. For them, I get that this tech is great.

But if you're a human being - this seems like a whole lot of hassle to... protect against evildoers breaking into your hotel room and replacing your CPU?


> But if you're a human being - this seems like a whole lot of hassle to... protect against evildoers breaking into your hotel room and replacing your CPU?

Secure Boot - if done right - also prevents malware that gains root access through some privilege-escalation mechanism from persisting in the kernel address space.
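A rough sketch of why that holds: each stage verifies the next before executing it, so a root compromise that patches the on-disk kernel or bootloader fails verification at the next reboot. A toy Python model, with hashes standing in for the asymmetric signatures a real chain uses:

    import hashlib

    # Toy verify-before-execute boot chain; real chains check signatures,
    # not a table of bare hashes.
    TRUSTED = {
        "bootloader": hashlib.sha256(b"grub v2.12").hexdigest(),
        "kernel":     hashlib.sha256(b"vmlinuz 6.8").hexdigest(),
    }

    def boot(stages: dict[str, bytes]) -> None:
        for name, blob in stages.items():
            if hashlib.sha256(blob).hexdigest() != TRUSTED[name]:
                raise SystemExit(f"refusing to run tampered {name}")
            print(f"running verified {name}")

    boot({"bootloader": b"grub v2.12", "kernel": b"vmlinuz 6.8"})  # boots
    # A rootkit that patched the on-disk kernel would change its hash
    # and be rejected here on the next boot.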

Another, more ethically questionable motivation is RF law. Some jurisdictions require, as part of their certification criteria, that a transmission-capable device be protected against anyone overriding the legal limits on frequency bands or transmission power. There's no way out for device manufacturers other than a complete chain of trust, including a trusted way to establish location (to prevent the user from setting a location with higher transmission limits).


> RF laws ... There's no way out for device manufacturers other than a complete chain of trust ...

You make the baseband processor physically separate from the app processor and run it off of firmware that the application processor can't reflash. All phones do this. Then it doesn't matter what boots up on the app processor, and only the baseband processor has to be certified. Nobody is trying to obey RF laws using Secure Boot on the application processor. There are other reasons for Secure Boot on the AP but this isn't one of them.


There are more laws than RF laws, anyway. Why don't we trust application processors to obey RF laws, but we trust them to obey copyright laws and port scanning laws?


RF laws have teeth and enforcement and the other laws don't. You get in actual trouble if your device violates FCC broadcast regulations, and you have to get your device certified before you're allowed to sell it. There's no law that gets you, the device manufacturer, in trouble if someone hacks your device and gets it to port scan or violate copyright, and no mandatory certification that those things are impossible on your device. Hell, you won't even get in trouble for doing the copyright infringement yourself (e.g. GPL violations). Ditto for literally shipping app-side malware from the factory (e.g. sketchy Chinese phones).


Also, knowing whether copyright or the CFAA has been violated might require a lengthy court case between the accused and the purported legitimate owner.

RF violations are an FCC guy with an antenna telling you to cut it out. Spectrum use is explicitly licensed by the Feds ahead of time; copyright and computer access usually aren't.


As soon as a manufacturer allows untrusted code to say "you're in country X" to the modem/wifi firmware, it's hard to keep up with the requirement of not exceeding emission limits.
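A concrete illustration of that failure mode (the regulatory domains and dBm limits below are made up): if the firmware clamps transmit power based on a region code supplied by the OS, any root process can simply claim the most permissive region.

    # Illustrative only; the domain names and dBm limits are invented.
    MAX_TX_DBM = {"REGION_A": 20, "REGION_B": 30}

    def set_tx_power(requested_dbm: int, claimed_region: str) -> int:
        # If claimed_region comes from untrusted OS code rather than a
        # trusted location source, this clamp is meaningless: malware
        # just claims whichever region allows the most power.
        return min(requested_dbm, MAX_TX_DBM[claimed_region])

    print(set_tx_power(30, "REGION_A"))  # 20: clamped
    print(set_tx_power(30, "REGION_B"))  # 30: the caller "moved" somewhere permissive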


And as I understand it, that is the problem with MOK. With stock secure boot, the signing key is kept far away in an MS vault, making it basically impossible to put something in the secure boot chain without actively re-compromising the machine on every reboot. This can be replicated by the end owner if they replace the MS keys in the secure boot chain as well, but to make it as secure, they need the infrastructure to ensure that the boot/kernel/etc. is all signed on a secure machine somewhere, entirely firewalled off from the machines running the signed code.

So, AFAIK, shim/MOK breaks all this. As a root user it then becomes possible to enroll custom keys that allow a third party, or the machine owner (or an exploiter), to add code to the boot path, nullifying its greatest strength IMHO: protecting the boot chain from a root-level exploit.

At that point the only protection would be a TPM plus a third-party root of trust that notices the shim measurement of the next stage isn't one that has been reported "safe". That doesn't exist in Linux land, outside of possibly some large corps that have taken ownership of the entire thing for their own internal purposes.
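For reference, the measurement half of that is simple: each boot component gets hashed into a TPM PCR with the extend operation, so a verifier comparing a quoted PCR against a known-good value notices any change anywhere in the chain. A minimal sketch:

    import hashlib

    def pcr_extend(pcr: bytes, measured_data: bytes) -> bytes:
        # TPM 2.0 extend: new PCR = H(old PCR || H(measured data))
        return hashlib.sha256(pcr + hashlib.sha256(measured_data).digest()).digest()

    pcr = bytes(32)  # PCRs start zeroed at reset
    for stage in (b"firmware", b"shim", b"grub", b"kernel"):
        pcr = pcr_extend(pcr, stage)

    # Extend is order- and content-sensitive, so a verifier holding a
    # known-good value detects any substituted or reordered stage.
    print(pcr.hex())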


Enrolling an MOK requires physical access, so no, a root compromise isn't sufficient to change the boot security chain.


...unless you've already got a MOK enrolled.

For example, to install DKMS modules like nvidia drivers.

Or indeed to exercise your right to run any code you like on the system you own.


Yes, if you change the security state of your system, its security state is changed. This is, uh, unsurprising?


The main goal behind secure boot was a nod and a wink to the OEMs ("nice business you've got there") to introduce just enough friction into the process of installing Linux to inhibit it without being overt enough to attract unwanted attention.

It's a neat trick because they can put pressure on the OEMs to introduce that friction. They can subtly reward it, but fully disclaim responsibility when an OEM overdoes it (which at least one has done).

The actual security concerns it deals with (e.g. evil maid attacks) aren't entirely imagined but they're played up.


> The actual security concerns it deals with (e.g. evil maid attacks) aren't entirely imagined but they're played up.

Not really. God knows what US CBP does to your devices if you hand them over at border control - which is why I refuse (and every other IT professional should as well) to travel to the US until that practice is rolled back. Pry my devices from my cold dead hands.


As a US citizen, flying domestically or internationally, what I do is not take my electronic devices with me. If I need them where I'm going, I ship them ahead via a parcel carrier. On the plane trip itself, I use a burner phone that has no data on it aside from a record of what calls I placed or received during that trip.

It's rather sad that these precautions are needed, but here we are. I'm just bringing all this up to say that there are ways of travelling without having to put your devices at risk of compromise.


For you as a US citizen that works out - they can't deny you entry - but I'd rather not risk the parcel with my device getting lost in the post or stolen, or being refused entry and deported (and thus losing the many thousand dollars one needs for a vacation in the US) for being "suspicious".

And there's the entire import duty/customs stuff to work out as well...


> I'd rather not risk the parcel with my device getting lost in the post or stolen

I think the risk of that with mainstream parcel carriers is pretty low. But I understand the hesitation, for sure.

> getting refused entry and being deported (and thus, losing the many thousand dollars one needs for a vacation in the US) for being "suspicious".

I have no idea what does or does not mark you as suspicious enough to be denied entry, so I can't comment on that. But if it's a vacation trip, then do you need to take electronic devices at all? Aside from a cell phone, anyway, but if you're concerned about the device being compromised, surely that cell phone doesn't have to be the one you use when you're in your home country.

Anyway, I'm not arguing with you even a little, and I hope I'm not coming off as unsympathetic. At heart, I agree with you that everything you're saying is a real problem that needs real solutions. I'm just looking at how to reduce the problem in the meantime until/unless real solutions come about.


The use cases for this should largely not be considered individual users. Presumably a lot, if not most, personal computing devices - and certainly enterprise-grade networking equipment - are owned by companies that either 1) issue equipment directly to users who are not necessarily trusted, or 2) put it out somewhere where it might be accessible to untrusted individuals. Probably the most obvious example is something like WiFi access points for shops and hotels, which are often just sitting in a hallway, mounted to the ceiling where anybody can get to them.

I guess from the perspective of laptop and desktop PC makers who primarily ship devices with Windows preinstalled, they just don't want to bother with separate product lines for business versus personal use.

For what it's worth, I can't speak to what is common versus not without real data, but at least from my own perspective, every non-laptop PC I have in my house is a machine I built myself from parts, and no motherboard I have ever purchased came with secure boot enabled by default.

This is separate, of course, from Microsoft's decision to make Windows 11 only run on devices with a TPM.


I just turn all that stuff off in the BIOS and be done with it. So far, anyway, I haven't encountered a motherboard that doesn't let me disable it.


A lot of hassle?


This seems to have been written under the assumption that the "elaborate security implementations" discussed are designed to serve the interests of the owner (or, these days, should we call that person the "user" instead?) of the particular machine/hardware/node. I do not share that sentiment; I believe they are designed and implemented to serve the interests of the content industry and their ilk, and the purported benefits for Joe User are pretty much a gigantic fig leaf.


Yup, and in the embedded world they are used to encrypt the OS firmware, thus preventing reverse engineering of the device, protecting IP, and securing devices through obscurity.

I'm thinking of enterprise grade networking hardware for example. Their main OS ROM is encrypted and they will refuse to decrypt if the boot process is altered.

There are no real secrets or user data being protected by the encryption. Heck, actual passwords and user configurations can be stored in the clear.


It is not for protecting user secrets. It has nothing to do with users.

It is about protecting against "after hours" production runs. If you, as a vendor, order a manufacturing run of 1000 widgets, you want to be sure that only the units delivered to you are actually produced. You don't want extras, and then your own product competing against you on the market.

Hence, encryption. Only your devices can run your firmware. All those extras cannot.
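A sketch of the scheme in Python, using the pyca/cryptography package; the fuse provisioning is assumed, and real vendors' key handling varies:

    import os
    from cryptography.exceptions import InvalidTag
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Assumed: a secret burned into on-chip fuses only during the
    # authorized production run.
    fuse_secret = os.urandom(32)
    nonce = os.urandom(12)
    firmware = b"proprietary OS image"

    # The vendor ships firmware encrypted to that secret:
    blob = AESGCM(fuse_secret).encrypt(nonce, firmware, None)

    # A provisioned device decrypts and boots:
    assert AESGCM(fuse_secret).decrypt(nonce, blob, None) == firmware

    # An after-hours board without the secret cannot run the image:
    try:
        AESGCM(os.urandom(32)).decrypt(nonce, blob, None)
    except InvalidTag:
        print("boot refused: device lacks the provisioned key")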


Solution: Make your own fab?

Enjoy all the benefits of keeping stuff to yourself I guess.

Bonus: potentially lowers the barrier to entry to electronics fabrication as more engineer hours are spent getting the fabrication equipment/processes better documented/more accessible.

No one wants that though. That'd make too much sense.


Foolproof root of trust would make cloud providers a lot more trustworthy. For example, AMD's SME+SEV can in theory attest that a cryptographically verified (as opposed to merely trusted) boot process resulted in an encrypted-RAM VM running on someone else's computer - one that would be pretty hard for them to impersonate or read data from (though they can always inject errors). It's always up to the person evaluating the measured attestation to determine their trust in the software that was attested.
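As a toy model of what a tenant checks: real SEV-SNP reports are ECDSA-signed by a VCEK chained to AMD's root certificates and carry far more fields; Ed25519 is used here only to keep the sketch runnable.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    platform_key = Ed25519PrivateKey.generate()  # stands in for the PSP's signing key
    platform_pub = platform_key.public_key()     # tenant obtains this via AMD's cert chain

    # The "report" is just the launch measurement in this sketch:
    report = hashlib.sha384(b"OVMF + kernel + initrd").digest()
    signature = platform_key.sign(report)

    # Tenant side: the signature proves the measurement came from the
    # hardware; deciding to trust *that software* is the tenant's call.
    platform_pub.verify(signature, report)       # raises InvalidSignature if forged
    assert report == hashlib.sha384(b"OVMF + kernel + initrd").digest()
    print("VM attested: running the software we expected")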

Individuals don't have quite as much to gain. I don't bother to run SEV on my personal cloud VMs, mostly because there are likely enough software 0-days that it wouldn't offer much practical protection. If I were trying to do cryptocurrency in the cloud on top of seL4 or something equally verifiable, then maybe SEV/Secure Boot would be worth it?


Owners and users are different people in the case of enterprise-issued devices, which is a very common use case for things like this.

In that case the user isn't trusted - not because of maliciousness, but because security mistakes are quite easy to make. Having attestation prove the device hasn't been tampered with before letting it connect to the company network is a good idea.


It's both. Whoever gets a hold of a secure boot system first can ensure the machine serves them.

How else can the machine tell who is its rightful owner?


Oxide has described some stuff around their Root of Trust implementation: https://oxide.computer/blog/exploiting-undocumented-hardware...


I don't understand why Heads and Pureboot, which are FLOSS, are not mentioned here: https://docs.puri.sm/PureBoot/Heads.html#heads.


How do Heads and Pureboot solve the Root of Trust issue outlined in the blog post?


Heads relies on TPM and a hardware key to verify the integrity of the firmware.


Heads makes use of a root of trust; it doesn't provide one.


So much in computing is made difficult by the assumption that solutions can be evergreen.

Foundations are best stable, as in unchanging. But few foundations can live up to that. Indeed, given our knowledge of how dynamic the very earth is, it is funny that we maintain the idea of foundational stability on it.



