A cryptographically secure bootloader for RISC-V in Rust (codethink.co.uk)
189 points by fork-bomber 41 days ago | 75 comments



Measured boot > trust chain through signature verification:

With measured boot, components in the boot chain tell some trusted component (e.g. a TPM, possibly in FW) about all of their inputs, and only if the hashes at the end match is $something accessible (in most cases a secret key for data decryption).

1. More flexibility (with TPM e.g. you can "seal" a secret against different parts independently)

2. No need for PKI, which gets very complex once revocations are involved (have fun looking at the "Secure Boot" DBX lists and the shim SBAT mechanism)

3. More freedom: The system still boots if the measurements don't match, you just don't get access to secrets. You're free to seal your own secrets against your new measurements and whoever did the last sealing has no access anymore. (Unlike on PCs where the Microsoft trust is in most cases not removable).
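To make the hash-accumulation part concrete, here is a toy sketch (my own illustration, not from the article; it assumes the `sha2` crate, and `Pcr` is a made-up type) of the extend-only register a TPM PCR implements:

    use sha2::{Digest, Sha256};

    /// Toy model of a TPM PCR: it can only be extended, never set directly.
    struct Pcr([u8; 32]);

    impl Pcr {
        fn new() -> Self {
            Pcr([0u8; 32]) // PCRs start out all-zero after reset
        }

        /// PCR_new = SHA-256(PCR_old || SHA-256(component))
        fn extend(&mut self, component: &[u8]) {
            let measurement = Sha256::digest(component);
            let mut h = Sha256::new();
            h.update(self.0);
            h.update(measurement);
            self.0.copy_from_slice(h.finalize().as_slice());
        }
    }

    fn main() {
        let mut pcr = Pcr::new();
        // Each boot stage measures the next one before jumping into it.
        pcr.extend(b"firmware");
        pcr.extend(b"bootloader");
        pcr.extend(b"kernel + initramfs");
        // A secret sealed against this value is only released if the same
        // components are measured, in the same order, on a later boot.
        println!("final measurement: {:02x?}", pcr.0);
    }

Because the chain is extend-only and order-sensitive, a tampered or reordered component changes the final value and anything sealed against the old value simply stays locked.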


1. This is interesting. So in a measured boot scenario, you wouldn't be able to boot the main OS, but it would give you access to sort of a minimal initramfs environment for debugging? It's a good idea for personal computers, like a tamper-proofing approach.

I assume the TPM in this case would only have a partial decryption key? I think something similar could be accomplished with SSS, no?

2. As for this, I can say I've never used DBX with UEFI Secure Boot. Instead of revoking keys, I just remake the entire PKI from the top. The PKI is only there to support independent use by OS vendor/OEM, hence the separation of PK/KEK/db.

3. Counterpoint: over-reliance on TPMs and such. Whereas the ordinary trust chain only requires signature verification at the start of boot (presumably on-chip), measured boot requires more complex trusted computing hardware (presumably off-chip).

Personally, I find that systems that are overly reliant on complex trusted computing hardware tend to be lacking in other areas. For example, iPhones or Google Pixel devices encourage the user to use a low-entropy password like a 4-digit PIN. These systems often try to reconcile "analog" passkeys like biometrics (Face ID, fingerprints) by using trusted computing. Of course, if the trusted computing systems are breached (https://www.404media.co/leaked-docs-show-what-phones-cellebr...), then security is very weak.

I suppose the advantage of the measured-boot method is that it is optional. So you can still boot whatever OS you want, just without some TC features.


>1. This is interesting. So in a measured boot scenario, you wouldn't be able to boot the main OS, but it would give you access to sort of a minimal initramfs environment for debugging? It's a good idea for personal computers, like a tamper-proofing approach.

If you would like to play around with measured boot and similar functionality, have a look at Tillitis and TCG DICE. It's on a USB stick, it's open, and it has a good team behind it.

https://tillitis.se/


> 1. This is interesting. So in a measured boot scenario, you wouldn't be able to boot the main OS, but it would give you access to sort of a minimal initramfs environment for debugging? It's a good idea for personal computers, like a tamper-proofing approach.

Depends on how it's set up. Currently most setups that use measured boot (systemd-pcrlock, partially BitLocker) ask for a recovery key if unsealing fails due to measurement mismatches and offer other options.

> I assume the TPM in this case would only have a partial decryption key?

That's also possible, but so far I haven't seen that. The sealed secret is sent to the TPM, which then uses its hidden internal seed to derive the master key for volume decryption and sends it back. (In the case of BitLocker with TPM < 2 that could trivially be sniffed on the LPC bus...)
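A toy sketch of that seal/unseal data flow (my own illustration, not the actual TPM protocol; the hash-based key derivation and the XOR "cipher" are stand-ins for the TPM's real storage hierarchy), assuming the `sha2` crate:

    use sha2::{Digest, Sha256};

    /// Derive a wrapping key from the TPM's never-exported seed plus the
    /// PCR state the secret is sealed against. (Real TPMs use a proper
    /// KDF and policy sessions; this only shows the data flow.)
    fn wrapping_key(internal_seed: &[u8; 32], pcr: &[u8; 32]) -> [u8; 32] {
        let mut h = Sha256::new();
        h.update(internal_seed);
        h.update(pcr);
        let mut key = [0u8; 32];
        key.copy_from_slice(h.finalize().as_slice());
        key
    }

    /// XOR as a placeholder cipher, purely for the sketch.
    fn xor(data: &[u8; 32], key: &[u8; 32]) -> [u8; 32] {
        let mut out = [0u8; 32];
        for i in 0..32 {
            out[i] = data[i] ^ key[i];
        }
        out
    }

    fn main() {
        let seed = [0x42u8; 32];          // never leaves the TPM
        let sealed_at_pcr = [0xAAu8; 32]; // measurements at sealing time
        let volume_key = [0x07u8; 32];

        // "Sealing": the blob can be stored anywhere, it's useless on its own.
        let blob = xor(&volume_key, &wrapping_key(&seed, &sealed_at_pcr));

        // On a later boot the blob is sent back in; it only unwraps to the
        // right volume key if the current measurements match.
        let current_pcr = sealed_at_pcr;
        let recovered = xor(&blob, &wrapping_key(&seed, &current_pcr));
        assert_eq!(recovered, volume_key);
    }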

> I think something similar could be accomplished with SSS, no?

If you mean Shamir's secret sharing, possibly. Question is what to do with the shares.

2. Yeah, for your local machine this is a working approach, if you make sure that really only your own key works. Another advantage over PKI is that no trusted authority can retroactively sign a backdoored executable to gain access to devices, as the measurements are independent of any authority and ideally device-specific.

3. Signature verification isn't just needed at the start of boot; ideally it covers everything from the start of boot until user authentication, which is the part that can be tampered with. I'd argue that the software side of measured boot is simpler, while the hardware side may be more complex.

> For example, iphones or google-pixel devices encourage the user to use a low-entropy password like a 4-digit PIN.

Using TPM+PIN is actually not that bad: only if the measurements match is it possible to unlock with a PIN, and the TPM uses a counter in non-volatile memory to prevent brute-force attacks. It's not unfathomable that some manufacturer screws that up, but it's IMO stronger than relying on multiple parties (CPU, BIOS, OEMs, OS) developing an actually secure trust chain.
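As a rough sketch of the anti-hammering idea (my own toy model, not a real TPM interface; `AntiHammer` and its fields are made up):

    // Toy model of the TPM's dictionary-attack protection: a failure
    // counter (kept in tamper-resistant non-volatile memory on real
    // hardware) gates unseal attempts, so a short PIN can't be
    // brute-forced from software.
    struct AntiHammer {
        failures: u32,
        max_failures: u32,
    }

    impl AntiHammer {
        fn try_unseal(&mut self, pin: &str, correct_pin: &str) -> Result<&'static str, &'static str> {
            if self.failures >= self.max_failures {
                return Err("locked out: wait out the lockout period or use the recovery key");
            }
            // A real TPM would compare in constant time and persist the
            // counter so that cutting power doesn't reset it.
            if pin == correct_pin {
                self.failures = 0;
                Ok("volume master key released")
            } else {
                self.failures += 1;
                Err("wrong PIN")
            }
        }
    }

    fn main() {
        let mut tpm = AntiHammer { failures: 0, max_failures: 3 };
        assert!(tpm.try_unseal("0000", "4821").is_err());
        assert!(tpm.try_unseal("4821", "4821").is_ok());
    }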


That said, it does require more care when you do OS updates or UEFI updates to remember to update the TPM sealed secret with the new measurements. Windows and Linux both have the former automated so it should generally be fine.

UEFI updates can also be a problem if they wipe the TPM as part of the update and thus destroy the sealed secret entirely (as my PC mobo does).


> That said, it does require more care when you do OS updates or UEFI updates to remember to update the TPM sealed secret with the new measurements. Windows and Linux both have the former automated so it should generally be fine.

Yep, this can also be a pain with regard to firmware bugs (broken TCG event log, anyone?). In the worst case you need to enter the recovery key or, if you know in advance, temporarily exclude some component from measurement while supervising the next boot. If something goes wrong with the trust chain, like a key getting revoked but the bootloader not updating correctly, you end up with an unbootable device and can't easily go back.

> UEFI updates can also be a problem if they wipe the TPM as part of the update and thus destroy the sealed secret entirely (as my PC mobo does).

Ouch, that's bad design. The firmware is measured into the TPM on boot, so there's no reason to do that.


Yeah, every time I update the UEFI it pops up a warning that the TPM will be cleared and that I'd better have disabled Windows BitLocker before doing this. The warning also goes away within a fraction of a second because the PC reboots, which is not nearly enough time to read it; I only know what it says because I've updated the UEFI enough times to be able to piece it together. Weird.

It might just be a warning to cover their asses; ie it doesn't actually clear the TPM but they don't want to be responsible for your un-unlockable drive in case it does. I don't actually use the TPM for measured boot or anything else so I haven't checked.

In any case, UEFI updates are relatively common right now (once every couple of months or so) because it's a relatively new mobo (AM5), and because AMD is about to release new CPUs that require corresponding AGESA etc. updates. It'll probably become less frequent in a few years.


It appears Apple Silicon uses a combination of measured boot and trusted boot concepts: https://support.apple.com/guide/security/boot-process-secac7...


Without a trust chain, measured boot is meaningless security-wise and doesn't work.

But you don't need any fancy PKI or vendor certificates for a trust chain, local trust is good enough.

I.e. the problem of secure boot isn't that it has a secured boot chain but how it has one.

Either way, measured boot is still a nice thing to complement a secure boot chain.


tbh I feel bad for the kid, his thesis supervisor should have helped him more here to scope and direct the work in some sensible way. now it is a bit of a mess :(

like just doing a review and comparison of existing boot verification mechanisms would already have been a good scope for a thesis. Instead they are barely even mentioned as a side note, which puts this in an awkward position.

or if crypto was the focus, then putting more work into designing and implementing the crypto scheme would have been relevant. Now they got so tangled up in the nitty-gritty boot details that the crypto also ended up as a questionable side note.

or if rust was the focus, then just implementing a clean pure-rust bootloader could already have been enough for the thesis, without stumbling over the misguided crypto bits.

or many other ways this could have been more successful. overall it now feels like the author ended up biting off far more than they could chew. also they should have imho spent less time coding and more time editing the actual thesis. the text is all over the place.


You have rather high expectations towards the supervisor there ... maybe you were one of the "unlucky" ones who had a genius supervisor ...

There aren't that many people in the world who write bootloader code daily for a living, an even smaller subset of those would ever have done one entirely from scratch (vs. just board-porting U-Boot), and even of those, the likelihood of someone having done a secureboot capable one wouldn't be high. Assuming you have such a person as a thesis advisor is a huge stretch ...

That said though, a thesis is (or should be) a learning exercise. Even a complete fail (to achieve the "initial" objective) can make a great thesis if new knowledge - even negative proofs, "this isn't possible because ..." - comes out of it. It's just my opinion maybe, but a graduate should never be taught "fake it till you make it"; rather, describe and document failures thoroughly, explain the observations, highlight and justify the parts that work or are breakthroughs. That can make a much more readable and much more useful thesis than one that skims over one-time lucky, otherwise nonreproducible results and delivers uncommented sources that fail to compile for anyone but the original author. In that sense, I would argue that a good thesis advisor will focus on the scientific process, documentary evidence, clarity of experimental setups and tests/checks/benchmarks performed, not whether the outcome would be "great enough" to pitch as a founder to VCs.


I feel we agree far more than disagree. I'm not sure where you got the impression that I expected the supervisor to be some sort of genius or deep domain expert on bootloaders. I just expected them to be expert in writing theses, which includes those things you mentioned.

> In that sense, I would argue that a good thesis advisor will focus on the scientific process, documentary evidence, clarity of experimental setups and tests/checks/benchmarks performed, not whether the outcome would be "great enough" to pitch as a founder to VCs.

Exactly! And in this case I feel the exact opposite happened; the advisor let the student work on an overly fancy project with lots of buzzwords ticked and minimal rigor. From the result I think it's pretty obvious that far too big a part of the whole project was spent on coding, and too little on those things. That's why I blamed the advisor/supervisor in my parent comment; they should have intervened at some point in the process.

Of course I do realize that in practice these thesis supervisors might have a gazillion students under them and minimal time allocated per student, so most likely they just weren't very involved at any stage. And academic institutions are under pressure to get students to graduate, which creates its own set of problems.


I don't get the "1/10 size of U-Boot" argument. As it can only boot 3 RISC-V64 boards via TFTP, it also has less than 1/10 of the features and supported hardware of U-Boot. https://github.com/u-boot/u-boot


Supported hardware doesn't matter because they're comparing the compiled binary size, not source code size. The u-boot binary you'd compile would also only have the stuff relevant to that particular hardware compiled-in.

If you don't need the other features of u-boot that this doesn't have, it makes sense to count the lower binary size and runtime memory usage as an advantage.

That said, they compared it to "an example U-boot binary", which sounds like they probably didn't tweak the bajillion config options u-boot has to produce one with an equivalent feature set to theirs, which would've been a fairer comparison for sure.


The problem with Rust in the boot process is that it's going to become much harder to find vulnerabilities for roots/jailbreaks. Still, this is great work!


Yes, ironically increased transparency and more secure systems will lead to less freedom for the user, because trusted computing is most often securing the interests of manufacturers against users (what RMS referred to as "Treacherous Computing").

I think that we have been able to thwart "treachery" in the meantime by exploiting side-channels in trusted computing implementations. Ultimately it may be necessary to amend the constitution to prevent manufacturers from distributing locked-down hardware for the good of free society (competition, democracy, etc.) at large. Otherwise, computer giants will have ultimate control over the distribution of information (given that the economics of manufacturing are driven by economies of scale).


Could you elaborate on this? I'm not sure what you mean; are you saying that there will still be vulnerabilities that are of similar difficulty to exploit as would be found in a C bootloader, but will be harder to find by security researchers? Or are you just saying that there will be fewer vulnerabilities, but the ones that do exist will be more "obscure" than would be the case if it were written in C, because Rust eliminates some of the more "obvious" vectors?

Either way, do you consider this a bad thing?


It's neither of those. The trade-off is that features like this are often used against users, preventing them from actually controlling their own machines. Under those circumstances, bugs in the "security" of the machine are a mixed bag: malware can exploit them to break out, but users can exploit them to get full control over the machine that they own. This has happened with Android phones, for instance, allowing people to root phones and/or replace the ROM with a community version that gets security patches not available in the stock ROM, which is probably a net security improvement even with the bootloader left vulnerable.

So it's really hard to call it a good thing or a bad thing; it's a trade.


That is an impressive final year project, nice work!

Vector crypto is very cutting edge too. I guess there isn't any hardware that has it yet...



This is a very specific type of bootloader for devices that get their code over the internet:

> SentinelBoot's threat model focuses on thin client devices which do not store their own OS and over-the-air updates (e.g. how phones are updated): both of these cases involve executable code being sent over a network, usually the internet. We ignore the risk of direct hardware modification, as an attacker can just swap out the bootloader (making any potential defence implemented by SentinelBoot in vain).

The author readily acknowledges that it does not defend against hardware modification. The other comments here trying to vilify this project don't understand what it is supposed to do.


Congratulations on the development. Part of me is concerned that this will be used to push devices that cannot be unlocked and tinkered with by end users, reducing their technological freedom.


The purpose of this bootloader is to avoid executing malicious code sent over the internet, such as by a MITM attack.

The author explains that it does not attempt to defend against hardware attacks or attempts to replace the bootloader:

> SentinelBoot's threat model focuses on thin client devices which do not store their own OS and over-the-air updates (e.g. how phones are updated): both of these cases involve executable code being sent over a network, usually the internet. We ignore the risk of direct hardware modification, as an attacker can just swap out the bootloader (making any potential defence implemented by SentinelBoot in vain).


That shift has already started in various areas.

You see it with phones but also with cars where OEMs require Secure Boot enabled devices when possible. This ranges from the central unit to all small ECUs.

You can kind of see it being pushed as the default mode for desktops too, but at least there, for now, you usually have a way to disable it.

For embedded devices, though, they never really provide a way to disable it. I think you could find a compromise by providing a physical method to disable it: if you disable it, all the DRM keys and other secrets get cleared. This way you can still protect against malicious updates for most users, and tinkerers can do what they want once the company stops supporting the device.


This was going to happen regardless. I believe Nvidia's RISC-V coprocessor ships with hardware fuses that serve more-or-less the same purpose.

If anything, it just makes me glad that RISC-V also has specs for UEFI-like interfaces.


Many (most) devices with secure boot have hardware fuses, but the software that reads them is usually broken. Rust and an eye towards sound cryptographic primitives (especially against side channels) will definitely go a distance towards protecting against this, although logic bugs are also quite common.

This bootloader doesn’t actually seem to be a real secure / trusted boot implementation anyway, just a thing that verifies updates.


What is the story with Rust and cryptographic side channels? I imagine the layers of abstraction (e.g. an iterator may have arbitrary big-O complexity) would make it harder to see those?


The layers don't really matter. Relying on side effects or implementation details (especially in a language with an optimizing compiler) is not good protection against side-channel attacks.

Where Rust (and most languages with a robust type system) can help prevent side-channel issues is via typing: all cryptographic operations in a given library can be implemented with, for example, a TimingSafeU32 type that only implements constant-time operations (comparison, multiplication, addition, etc.). This massively reduces the opportunity for mistakes. Traits make implementing this type of system much easier and more elegant.

`rust-timing-shield` and `subtle` are both good examples here, as is really everything in https://github.com/dalek-cryptography .
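As a rough illustration of that idea (my own toy example, not the `subtle` API; a hardened implementation would also have to guard against the optimizer undoing this):

    /// Toy wrapper type whose equality check touches every byte, so timing
    /// doesn't reveal how many leading bytes matched. Crates like `subtle`
    /// provide a hardened version of this (returning a `Choice` instead of
    /// a branchy bool).
    pub struct TimingSafeBytes<const N: usize>([u8; N]);

    impl<const N: usize> TimingSafeBytes<N> {
        pub fn new(bytes: [u8; N]) -> Self {
            Self(bytes)
        }

        /// Fold all byte differences into one accumulator instead of
        /// returning early at the first mismatch.
        pub fn ct_eq(&self, other: &Self) -> bool {
            let mut diff: u8 = 0;
            for i in 0..N {
                diff |= self.0[i] ^ other.0[i];
            }
            diff == 0
        }
    }

    fn main() {
        let expected = TimingSafeBytes::new([0xAA; 32]);
        let provided = TimingSafeBytes::new([0xAB; 32]);
        assert!(!expected.ct_eq(&provided));
    }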


I'd be more worried about early exits from loops in an iterator, e.g. if you had a .any() derived function, or short-circuiting logic in boolean operators with a lazily evaluated second parameter. A type system won't protect you from that.

Or maybe an early return causing an expensive destructor to be called as a value goes out of scope with RAII.


I'm sure that's part of the story, but there's tons of boring old company/government equipment out there in the world that wants secure boot too.


That'll happen with permissive licenses


[flagged]


> honestly, I still feel safer using my iPhone over a PinePhone.

But is that a result of it refusing to let you run whatever program you want, or your faith in its ability to sandbox programs regardless?


If someone physically steals my phone, they can’t install a keylogger that loads before the login screen.

The sandboxing actually works pretty well and serves a useful purpose; the PinePhone doesn't have sandboxing at all (and even Flatpak sandboxing often has massive holes punched through it).

The iPhone has dozens, if not hundreds, of research organizations working on it to find bugs. Even Apple’s competitors like Google contribute. I have no assurance the PinePhone’s login screen doesn’t have an obvious buffer overflow.

If my iPhone does somehow get a virus, no virus has been found that can persist between reboots since iOS 9. On Linux, persistence is the default.

If an iPhone app gains my PIN code, it’s bad but mostly meaningless to system integrity, which is forbidden at a kernel level from being tampered with by anything. If a Linux app gets your root password, or a privilege escalation bug is found in the many userspace utilities, daemons, or processes, you’re SOL.

The iPhone has hardware level features called “mailboxes” in the Secure Enclave, which enforce the limits for how often a PIN can be guessed; making a software bypass impossible and even a 4-digit code relatively safe. PinePhone doesn’t even have secure boot, so loading a tool that runs unlimited guesses could not be easier.

Do I need to go on?


You’re comparing a trillion dollar (of value) product to a thousand dollar one, and then implying the second is failing due to a technical issue.

Pine drops hardware and lets other folks do the software work.


To be quite fair, I think the pinephone is technically inferior in some ways; having a TPM and secure boot would let you make the device safer than is currently possible. But that's orthogonal to it being a walled garden; if users could run arbitrary apps downloaded off the internet, iPhones would still be safer, and if the next pinephone includes a TPM and user-controlled secure boot, it would have most of the same security properties while still being completely under the user's control.


PinePhone SoC (Allwinner A64) does have the hardware needed for verifying boot via TBBR. It's just that no person or distro uses it for that because the key can only be written once. Also IIRC it's vulnerable to bus snooping.


> > PinePhone doesn’t even have secure boot

If GP is correct about that, software won’t fix that limitation in PinePhone hardware. I expect the lack of a secure enclave is a similar story.


To a customer, it's irrelevant.


> Do I need to go on?

It would help; it feels like you're trying to argue in favor of a walled garden but only using arguments that are unrelated. Actually, it kinda feels like you might only be arguing that you think iPhones specifically happen to be really secure in a way that has nothing to do with user freedom at all.

In more details:

> If someone physically steals my phone, they can’t install a keylogger that loads before the login screen.

Fully mitigated by secure boot with user-controlled keys; unrelated to letting the user run whatever they want. Also, perfectly possible to mitigate on a pinephone by just reimaging everything except your encrypted home partition.

> The sandboxing actually works pretty well and serves a useful feature, the PinePhone doesn’t have sandboxing at all (and even Flatpak sandboxing often has massive holes punched through).

Yes, we agree that the sandbox matters... in fact, much of my point is that once you have a good sandbox it doesn't matter what the user wants to run inside it. (Also, you can use bubblewrap/firejail to sandbox things properly on Linux, it's just less common.)

> The iPhone has dozens, if not hundreds, of research organizations working on it to find bugs. Even Apple’s competitors like Google contribute. I have no assurance the PinePhone’s login screen doesn’t have an obvious buffer overflow.

Well that one's totally unrelated to letting users run whatever they want. At this point it almost feels like you're trying to just argue that iPhones are pretty robust and have completely given up on whether user control is relevant at all.

> If my iPhone does somehow get a virus, no virus has been found that can persist between reboots since iOS 9. Linux, persistence is the default.

Default in common distros, sure, but 1. still unrelated to walled garden, and 2. there are immutable distros if that's what you want.

> If an iPhone app gains my PIN code, it’s bad but mostly meaningless to system integrity, which is forbidden at a kernel level from being tampered with by anything. If a Linux app gets your root password, or a privilege escalation bug is found in the many userspace utilities, daemons, or processes, you’re SOL.

Is iOS somehow immune to privilege escalation bugs? Anyways, if you really want to lock down root selinux does exist, but also... what exactly are we protecting here? At the point where a program has the ability to impersonate the user including their password, we're at https://xkcd.com/1200/ and you might manage to protect the OS but if you lose all userspace what's the point?

> The iPhone has hardware level features called “mailboxes” in the Secure Enclave, which enforce the limits for how often a PIN can be guessed; making a software bypass impossible and even a 4-digit code relatively safe. PinePhone doesn’t even have secure boot, so loading a tool that runs unlimited guesses could not be easier.

Again, unrelated to user freedom; the only reason not to add a TPM to the pinephone is cost.


That's what it's for, make no mistake.

Those who sacrifice liberty for safety will receive and deserve neither.



I am sorry if I have caused any misunderstanding; the point I was trying to make is that ownership, while a nebulous concept, is one that must be protected, and someone with physical access to a device ought to be able to do whatever they want with it. To imply that bootloader locking is not primarily used to restrict what the owner of a device is able to do with it is disingenuous at best. I fundamentally disagree with the concept of bootloader locks in general, on the grounds that the evil that corporations are able to enact with them has historically far outweighed the real-life security concerns that they actually protect users against. I understand this may be a controversial viewpoint, but personally I feel (opinion) that the erosion of ownership is one of the most important issues we face today.


[flagged]


It's zero overhead for zero extra functionality; here it seems they've added some:

> Additionally, SentinelBoot employs public-key cryptography to verify the integrity of a booted kernel (digital signature), by the use of the RISC-V Vector Cryptography extension, establishing secure boot functionality.


Fair enough


The overhead is for doing computation (for cryptographic hashing). No language has zero-cost computation. Rust advertises what's called zero-cost abstractions, which is a major improvement over many languages where abstractions can be expensive.


I thought the C version (U-Boot) did that as well. I guess it does not.


I don't think the Rust team would claim writing idiomatic safe Rust code is always zero-overhead. Indexing into an array adds bounds checks, which take non-zero time. So by default you trade off some performance for safety; but this can be bypassed if needed, which is why you have unsafe.

For completeness, it's 0.1x the size with 1.20x the runtime, so these are not just language differences


I think the phrase is "zero-cost abstractions", meaning "Supposing you want bounds checks, you may as well use ours and not roll your own"


It's true that you can escape them, though. There are some other small things (which AFAICT you can escape from as well).


> Indexing into an array adds bounds checks which takes non-zero time.

* When done in such a way that the compiler cannot know the access is in bounds. The same code in C would also need to perform bounds checks on access to be safe. If you add the same assertion you would in C, checking that the indices are always less than the length, then Rust will not perform the bounds check because it detects that it's unnecessary. Leveraging iterators is one idiomatic way of avoiding those bounds checks.
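A small sketch of both patterns (my own example, not from the article):

    // Hypothetical helpers showing when bounds checks appear and when they
    // can be elided.
    fn sum_indexed(a: &[u32], b: &[u32], n: usize) -> u32 {
        // Without this assert, each a[i]/b[i] below would carry its own
        // bounds check; with it, the compiler can usually drop them.
        assert!(n <= a.len() && n <= b.len());
        let mut total = 0;
        for i in 0..n {
            total += a[i] + b[i];
        }
        total
    }

    // The idiomatic version: the iterator encodes the bounds, so the loop
    // body needs no per-element runtime checks.
    fn sum_zipped(a: &[u32], b: &[u32]) -> u32 {
        a.iter().zip(b.iter()).map(|(x, y)| x + y).sum()
    }

    fn main() {
        let a = [1, 2, 3, 4];
        let b = [10, 20, 30, 40];
        assert_eq!(sum_indexed(&a, &b, 4), sum_zipped(&a, &b));
    }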


20% overhead from doing crypto


"cryptographically secure bootloader" is a meaningless phrase.

They mean a boot loader that validates cryptographic public key signatures of the loaded component. That would be a secure cryptographic bootloader. AFTER they have proven that it is, in fact, secure.

You can't just write some code and then say it must be secure because Rust was involved.


> You can't just write some code and then say it must be secure because Rust was involved.

The article doesn't claim that at all.

The cryptographically secure part comes from doing cryptographic verification of the code before running it.

The article talks about using Rust to improve memory safety.


> You can't just write some code and then say it must be secure because Rust was involved

Did they say that?


Yes. They call it "secure" and have zero arguments to back up that claim except Rust's memory safety guarantees.

Which, by the way, do not apply, since the SHA-256 code is marked unsafe.


Unsafe blocks do not imply equivalence with C. They imply that if there are memory safety issues, the issue originates in one of the unsafe blocks. Usually there are few enough lines of code in unsafe blocks, doing small enough tasks, that you can feasibly rule out issues by thinking hard enough.

Contrast that with C, where every line may be a source of safety issues. It's a meaningful difference.
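A tiny illustration of that pattern (my own example, unrelated to the article's code): the unsafe surface is one line whose precondition is established right above it, so an audit only has to look there.

    // Hypothetical helper: read the first little-endian u32 from a byte
    // slice without per-byte indexing.
    fn first_word(bytes: &[u8]) -> Option<u32> {
        if bytes.len() < 4 {
            return None;
        }
        // SAFETY: we just checked that at least 4 bytes are available, so
        // the unaligned pointer read below stays in bounds.
        let word = unsafe { (bytes.as_ptr() as *const u32).read_unaligned() };
        Some(u32::from_le(word))
    }

    fn main() {
        assert_eq!(first_word(&[1, 0, 0, 0, 9]), Some(1));
        assert_eq!(first_word(&[1, 0]), None);
    }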


The original commenter is correct, though.

Any "unsafe" block within a rust source code potentially corrupts the entire application as undefined behavior has no bound and will/may leak right out of the unsafe block.

You are arguing something else. Enumerating all unsafe code is a good feature, but when one of the fundamental building blocks of your security is marked unsafe, it raises questions about its overall benefit.


I'm not addressing that because I don't think it's an interesting discussion. There's excellent tooling for validating crypto code.

I'm emphasizing the difference between rust and C here because unlike Martin, I don't agree that it's meaningfully possible to eliminate UB in C by careful analysis. You're able to do this in Rust in large part because those issues are all concentrated in a few tightly scoped blocks with clear responsibilities.

Unsafe also gives you a massive hint where to look when you're debugging.


Well, not every construct in C can have a safety issue. Saying that every line in C may be the source of memory safety issues is as accurate as saying that every line of Rust may be a source of memory safety issues, because it could make use of unsafe.

There is another issue: Unsafe code in Rust could violate assumptions that could cause other code in Rust to be unsafe. So it needs more care to write than regular C.

But I agree that it is still a huge net benefit with respect to memory safety; let's just not exaggerate.


Those unsafe lines in C could be anywhere in your program. In Rust they cannot exist outside of unsafe blocks. This is not a trivial distinction! For all intents and purposes, each and every line of C must be treated as potentially unsafe.


The really big difference is the searchability and frequency of possibly unsafe operations. If you want to audit all possible unsafe lines of code in a Rust project, you can grep for "unsafe" and find all of them (and in most projects there will be very few if any). In C, on the other hand, you need to look at literally every indexing operation, every pointer dereference, every use of a variable (to make sure it isn't potentially used after free or before initialization), every cast, and probably some extras that I've forgotten. As such, rather than having a low double digit number of cases to look at, you have to look at the vast majority of lines of code.


While true, my point is that you can write C in a way that many functions are also obviously free of UB, and you only need to carefully vet the pointer arithmetic in some low-level functions.

So I agree with the point in principle, I just do not like the "spin" of "every line of C is a time bomb nobody can understand" while in Rust you just have to look at some lines of "unsafe" and all is good.


It's not my experience that C can be obviously free of UB, and I'm curious to know how you approach that. I'm not aware of any methods or tools that claim to achieve it, and there's a long history of "correct" programs written by experts being discovered to contain subtle UB with improvements in automated analysis. Here's one example, from Runtime Verification: https://runtimeverification.com/blog/mare-than-14-of-sv-comp...


For example, the following function has obviously no UB:

unsigned int mul(unsigned int x, unsigned int y) { return x * y; }

Or there are many high-level function structures such as the following, which also have no UB (with some assumptions about the called functions):

void bar() { struct foo *p = foo_alloc(); foo_do1(p); foo_do2(p); foo_delete(p); }

Such code can easily be screened, and this can also be done automatically. There is a lack of open-source tooling that can do this, but I have an experimental GCC branch which starts to do this and looks promising.


> Or there are many high level function structures as, which also has no UB (with some assumption on the called functions):

    void bar() { struct foo *p = foo_alloc(); foo_do1(p); foo_do2(p); foo_delete(p); }
Are we assuming foo_alloc always succeeds? malloc returns NULL to indicate failure to allocate, which this code wouldn't handle.

> Such code can be easily screened and also this can be done automatically.

That doesn't sound right at all. Robust static analysis of C code is extremely involved. It's an area of ongoing research.

Prior efforts along these lines have not been successful. Even adopting the MISRA C ruleset doesn't guarantee absence of undefined behaviour, for instance.


The first has no UB, but this trivial modification does:

unsigned short mul(unsigned short x, unsigned short y) { return x * y; }

I don't know about you, but I wouldn't think to treat these any differently unless I put on my language lawyer hat.


It is converted to int, so you have a signed multiplication. I don't think you need to be a language lawyer to know this, just very basic C.

But I also do not worry about signed overflow anyhow, because compilers can turn them into traps.


I don't think I need to explain why it's unintuitive that multiplying two unsigned numbers sometimes results in a signed multiplication, even though signed types appear nowhere in the code. I couldn't tell you how many times I've seen some DSP application taking uint16s and throwing them into a filter without realizing it could be UB.

Language standards shouldn't rely on compiler options to save developers here. There's a lot of compilers in the world that don't support the same range of options GCC and clang have, like CompCert. Those are often the ones building safety-critical applications these days, where trapping would be inappropriate.


I don't think this is unintuitive for somebody who knows the rule, but I agree that it is easy to make a mistake here. But this is not the point: the point is that it is still relatively easy to avoid and screen for these problems - not as easy as looking for "unsafe" blocks, but also not impossibly hard.

Whether trapping is appropriate or not depends on the context, but it is surprising to hear that as an argument, because Rust also has a fail-hard policy...


The key point is that no matter how you write your C code, anyone else who wants to verify a lack of memory safety problems needs to read every single line to determine which ones do the low-level unsafe bits.


I understand this, but the importance of this is highly exaggerated. How in the world does it make sense to only audit for memory safety? There are plenty of other safety and security issues. Only if you pretend that memory safety is all that matters can you claim, as a fundamental advantage, that you only need to look at "unsafe" blocks and nothing else. Now, you can say that with limited time we can at least more easily ensure memory safety by reviewing "unsafe" blocks carefully while neglecting other problems. And this is true, and I agree that this is an advantage, but the overall improvement for safety and security is incremental in reducing risk, not fundamental.


This isn't only about formal audits. Memory corruption and UB type bugs are also some of the hardest to debug since they may not reproduce in debug builds.


With sanitizers and Valgrind I do not quite see this; in my experience, subtle logic bugs in overly complicated code are much harder to debug.


> You can't just write some code and then say it must be secure because Rust was involved.

I have a feeling that the qualifier is there in the headline to distinguish it from the potential security improvements that come from replacing a C bootloader implementation with a feature-parity Rust one.


How does it know what keys to trust? TPM?



