Hacker News
Mundane: Rust cryptography library backed by BoringSSL (github.com)
120 points by briansmith 14 days ago | 30 comments



The repo includes a DESIGN.md file that describes the design motivations for the library, and it's definitely worth reading. It goes through a few really powerful techniques for writing misuse-resistant APIs, and I think some of its advice is applicable to good software engineering in general.

https://github.com/google/mundane/blob/master/DESIGN.md

In particular, their use of the type system to expose opaque types that only allow meaningful operations on them is something that I've seen used to great effect in other statically typed languages, like Haskell.
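A minimal sketch of that pattern (the type and validation rule here are hypothetical, not Mundane's actual API): the key's bytes are private to a module, so callers can only obtain a key through a checked constructor and can only use it via the operations the module chooses to expose.

```rust
pub mod keys {
    pub struct PrivateKey {
        bytes: [u8; 32], // private: callers can't forge or inspect a key
    }

    impl PrivateKey {
        /// The only way to construct a key; validation happens here.
        pub fn from_bytes(bytes: [u8; 32]) -> Option<PrivateKey> {
            if bytes.iter().all(|&b| b == 0) {
                None // reject an obviously invalid (all-zero) key
            } else {
                Some(PrivateKey { bytes })
            }
        }

        /// A meaningful operation; a toy stand-in for real signing.
        pub fn sign(&self, msg: &[u8]) -> Vec<u8> {
            msg.iter()
                .zip(self.bytes.iter().cycle())
                .map(|(m, k)| m ^ k)
                .collect()
        }
    }
}

fn main() {
    let key = keys::PrivateKey::from_bytes([7u8; 32]).unwrap();
    let sig = key.sign(b"hello");
    assert_eq!(sig.len(), 5);
    // key.bytes is inaccessible here: the field is private to the module,
    // so only the operations above are possible on a PrivateKey.
    println!("ok");
}
```

Because the struct's field is private, the invariant checked in `from_bytes` holds for every `PrivateKey` that can ever exist.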


The `miscreant` library also does this: https://github.com/miscreant/miscreant. It's really effective. For example, any method that requires a one-time pad takes ownership. This allows the borrow checker to verify that you don't accidentally use an OTP more than once.
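A sketch of the move-semantics idea (not miscreant's actual API): a single-use value is consumed by value, so the compiler itself rejects any attempt to use it twice.

```rust
// A nonce that must never be reused with the same key.
struct Nonce([u8; 12]);

fn encrypt(nonce: Nonce, plaintext: &[u8]) -> Vec<u8> {
    // `nonce` is moved into this function and dropped when it returns,
    // so the caller can never pass the same Nonce to encrypt() again.
    plaintext
        .iter()
        .zip(nonce.0.iter().cycle())
        .map(|(p, n)| p ^ n)
        .collect()
}

fn main() {
    let nonce = Nonce([1u8; 12]);
    let ct = encrypt(nonce, b"attack at dawn");
    assert_eq!(ct.len(), 14);
    // encrypt(nonce, b"again"); // compile error: use of moved value `nonce`
    println!("ok");
}
```

The commented-out second call is the point: reuse is a compile-time error, not a runtime check.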


One exception to the otherwise solid design principles:

"If an error requires particularly subtle error-handling, prefer panicking or aborting the process. When cryptographic operations fail in a way that would require reporting an error to the user (in other words, there's no valid non-error interpretation like in the case of signature verification), and handling that failure is particularly error-prone, it may be justified to make the function's API infallible, and instead panic or abort the process on error. BoringSSL famously does this when failing to read randomness (e.g., from /dev/urandom), as this has historically been a source of vulnerabilities."

It is almost never okay for a library to abort the process. The only exception I can think of is when discovering e.g. memory corruption or other UB, or when continuing would result in the above.


> It is almost never okay for a library to abort the process.

Using Joe Duffy's distinction [1], bugs aren't recoverable errors. Error handling admits the necessity of error recovery, but you generally don't have such a fine-grained policy for bugs. A dual-use of exceptions as a means to signal bug conditions is unfortunate, and I believe Mundane's policy is clearly following this distinction.

Also, panicking in Rust can be caught in the same thread [2], so it is actually much more flexible than a process abort: if you need a general, umbrella protection against bugs, here you have a way.

[1] http://joeduffyblog.com/2016/02/07/the-error-model/

[2] https://doc.rust-lang.org/stable/std/panic/fn.catch_unwind.h...
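For reference, the umbrella protection mentioned above looks roughly like this (a sketch using `std::panic::catch_unwind`; note it does not work when the binary is built with `panic = "abort"`):

```rust
use std::panic;

fn main() {
    // Silence the default panic message printer for this demo.
    panic::set_hook(Box::new(|_| {}));

    // catch_unwind confines the panic to this call, so a panicking
    // library routine need not take down the whole process.
    let result = panic::catch_unwind(|| {
        panic!("bug condition detected");
    });

    assert!(result.is_err());
    println!("still running after the panic was caught");
}
```

This is exactly the "fine-grained policy for bugs" escape hatch: the caller decides where the blast radius of a bug ends.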


That's a thought-provoking article, and I see no problem with the proposed error management technique as a whole, for that particular project.

This, however, is a crypto library designed to run on consumer OSs, as part of programs which will likely offer much more functionality than generating random numbers.

In general, a bug is a software fault, which is a passive flaw in the program, introduced by a programmer. Faults can manifest and cause the program to behave in an unintended way, which in turn can lead to failure - the program can no longer perform its function.

Any point on the fault -> error -> failure chain can be an intervention point. The relevant point for this discussion is the detection of errors and preventing them from becoming system failures, and if not possible, handling those gracefully.

Let's agree not to discuss recovery attempts and assume that the system is instead moved directly to a safe state when such a bug/error is detected.

A crash-only safe state is simple and quite easy to implement, but whether it's the best approach for a particular software depends on the dependability requirements of that software. Abandoning the current operation and returning to the top level execution context is an alternative that shouldn't be so easily dismissed.

In the BoringSSL case, there doesn't seem to be a reason to abort. The error condition is known, can be detected and failure can be returned just as easily. Panicking is also fine, if the parent program can react to it.


> This however is a crypto library designed to run on consumer OSs, as part of programs which will likely offer much more functionality that generating random numbers.

Are you referring to getrandom(2)? Unless you are using `/dev/random` (i.e. GRND_RANDOM) instead of `/dev/urandom` (which, by the way, you don't need to use [1]), the only case in which getrandom(2) blocks or fails is at the very beginning of machine startup, before enough entropy has been collected. It is not something you would expect to occur more or less randomly.

[1] https://www.2uo.de/myths-about-urandom

> In general, a bug is a software fault, which is a passive flaw in the program, introduced by a programmer. Faults can manifest and cause the program to behave in an uninteded way, which in turn can lead to failure - the program can no longer perform its function.

The OP does explicitly say that it may be justified to make the function's API infallible. They strive to simplify the error case to handle (e.g. verification failure and other recoverable errors are combined to ease the error handling), and they are expected to exercise this right only when there exists no good and reasonable error handling strategy.

By the way, it seems that Mundane actually does not panic but aborts the entire process [2], with the rationale that panic handling in Rust is not trivial. This decision can be problematic on its own, but I found that aborts are only used to guard against generally improbable error cases, e.g. linking against or calling into a different library that happens to provide the same set of symbols as BoringSSL. If you say that this should be caught gracefully, uh, I'd say that you should also guard against an invocation failure due to dynamic linkage failure for the sake of user experience...

[2] https://github.com/google/mundane/blob/8aaa1c8/src/boringssl...


You can factor whatever functionality of your system uses crypto out into a separate process, and then the crash is simply "my crypto process died", which you can handle and recover from. I think it's okay to force people to cleanly separate functionality from their main process when its failure doesn't have a meaningful recovery process and a half-assed recovery can lead to catastrophic security problems.


> It is almost never okay for a library to abort the process.

For a library with a design principle to be as hard to misuse as possible it seems like the right decision, and the "exception to the rule".

Infinitely better to crash than to potentially let the process/library run in some kind of degraded state.

If you are aware of this behaviour and are savvy/experienced enough, you'll either 1) catch the panic and perform an appropriate recovery, or 2) use a different library.


The issue is that the library can't know what state the program is running in, since it's a library... it has a job to perform some crypto functionality and it can only either return a result or an error for that particular operation.

When libraries start calling abort out of the blue it's like the janitor deciding to send everyone home for the day because their mop broke. It's not their call to make :)

Panicking or any other error handling mechanism which permits the main application to decide how to continue is perfectly fine.


> When libraries start calling abort out of the blue it's like the janitor deciding to send everyone home for the day because their mop broke.

A more apt analogy would be the janitor that pulls the fire alarm because they saw smoke coming out of the boiler room. So yes, it's their call to make, and it would be the right call.

Besides, as lifthrasiir points out, you can isolate this behavior in Rust, akin to triggering the fire alarm for a single building, but not for the entire complex.


> Infinitely better to crash than to potentially let the process/library run in some kind of degraded state.

Why? Just set the entire library to "failed" mode and have every function return an error or do nothing from that point forward. That is far more sensible than panicking and bringing down the entire application.

Imagine if people want to use this in a cash machine or something like that.
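For what it's worth, the "failed mode" idea could be sketched like this (a hypothetical design, not how Mundane or BoringSSL behave): a process-wide flag is set on the first fatal condition, and every subsequent call returns an error instead of touching potentially broken state.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Once a fatal condition is seen, the library is "poisoned" and every
// later call fails fast instead of aborting the process.
static FAILED: AtomicBool = AtomicBool::new(false);

#[derive(Debug, PartialEq)]
struct CryptoError;

fn random_bytes(buf: &mut [u8]) -> Result<(), CryptoError> {
    if FAILED.load(Ordering::SeqCst) {
        return Err(CryptoError);
    }
    // A real implementation would fill `buf` from the OS RNG here and,
    // on failure, poison the library:
    //     FAILED.store(true, Ordering::SeqCst);
    buf.fill(0x42); // placeholder for real randomness
    Ok(())
}

fn main() {
    let mut buf = [0u8; 4];
    assert!(random_bytes(&mut buf).is_ok());

    FAILED.store(true, Ordering::SeqCst); // simulate a detected fault
    assert!(random_bytes(&mut buf).is_err());
    println!("ok");
}
```

The trade-off the thread is debating is exactly this: the caller must now actually check those errors, whereas an abort needs no cooperation.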


I would much rather the cash machine crash than start communicating with the bank seeded with an endless stream of zero bytes of "randomness".

Besides, what exactly do you expect to do with the library in an "exceptional condition"? Do I now need to check the output of every single function for some non-local effects they have on each other?


How can the library know that the cash machine will communicate with the bank, or that there even is a cash machine?

The library should just tell the program that it failed to perform its task, not guess at what its parent program could, should, or would do.

Note that the discussion started from "abort". Maybe the authors meant something else by abort, but in a system programming context it means calling abort and terminating the program's execution immediately.

If they just meant it as a synonym for panic, we're just having a nice discussion here.


It is very interesting, though I do have a small gripe with how they characterise the "goto fail" bug. #[must_use] probably wouldn't have helped because the bug was that they effectively returned Ok() early. However, having an ergonomic error handling design in a systems language definitely helps. try! and ? are actually a massive improvement in the state of systems languages. Error handling is very important when designing a language.
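To illustrate why `?` helps with goto-fail-style bugs (a toy sketch, with made-up check functions): each fallible step either yields its value or returns the error immediately, so there is no manually threaded `err` variable that a stray duplicated line can leave stale.

```rust
#[derive(Debug, PartialEq)]
struct VerifyError;

// Stand-in for one verification step (hash check, signature check, ...).
fn check(ok: bool) -> Result<(), VerifyError> {
    if ok { Ok(()) } else { Err(VerifyError) }
}

fn verify(hash_ok: bool, sig_ok: bool) -> Result<(), VerifyError> {
    check(hash_ok)?; // returns early on Err; nothing to fall through past
    check(sig_ok)?;  // duplicating this line would still run the check,
                     // unlike a duplicated `goto fail;` in the C version
    Ok(())
}

fn main() {
    assert_eq!(verify(true, true), Ok(()));
    assert_eq!(verify(true, false), Err(VerifyError));
    println!("ok");
}
```

In the C bug, the duplicated `goto fail;` skipped the final check while `err` was still zero, so "fail" returned success; with `?` the error value and the control flow can't disagree.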


There is also ring [1], which is somewhat based on BoringSSL. Maybe their priorities/philosophies are different. It is nice to see more crypto libs being developed for Rust.

[1]https://github.com/briansmith/ring


I know it's too early to ask, but it would be wonderful to see one of them implemented totally in Rust. Any time I include one of the crypto wrappers, building for non-Linux targets (wasm/asmjs or win32) ends up being a massive pain.

The base Rust experience is so nice that it ends up spoiling you :).


We are continuing the project of replacing C code with Rust code in ring. The pace of that effort is mostly a function of how much work time I have available to spend on ring.

Day-to-day ring development happens on Windows and it was designed to "just work" on Windows (as well as Linux, macOS, Android, iOS, and other supported target operating systems). We did a huge amount of work to make the build system fully automatic to enable this, and we have a pretty extensive CI mechanism to make sure everything "just works" as much as we can.

As for WebAssembly, I'm looking forward to something like https://pdfs.semanticscholar.org/3887/6d86e5e7851181efc9ed3b....

However, you should probably use the WebCrypto API in a web app if at all possible, instead of a wasm crypto library. Unfortunately the WebCrypto API is 100% asynchronous whereas almost every other crypto API is 100% synchronous, so that's easier said than done. Really, browsers should add a synchronous WebCrypto API; then crypto libraries could transparently delegate to it when targeting wasm.


async/await makes it pretty easy to take an async library and use it in a synchronous context.


No it doesn't. await doesn't make asynchronous functions synchronous. It just changes how you write the code, not how the code itself works. Notably, await can only be used from inside an async function.


Once you go asynchronous (e.g. call an async function, wait for a promise) you can never go back to a synchronous context in that chain of code.

async just makes asynchronous code look like synchronous code, but it's still event-based and async.


> it would be wonderful to see one of them totally implemented in Rust

Yes, definitely. Crypto libs are so critical that building on something like BoringSSL is still a good win. There have been attempts to build pure Rust crypto libs; some of them are unmaintained and some are a bit too granular. I think a pure Rust lib similar to BoringSSL is going to be quite an effort.


The BoringSSL build system has the following dependencies:

- CMake 2.8.11 or later

- Perl 5.6.1 or later.

- Either Make or Ninja.

- A C compiler

- Go 1.11 or later

- To build the x86 and x86_64 assembly, your assembler must support AVX2 instructions and MOVBE.

It's time for a pure Rust replacement for OpenSSL. This is a shim to call a Google fork of OpenSSL.



"ring exposes a Rust API and is written in a hybrid of Rust, C, and assembly language."

Is rust getting more popular inside Google in production use?


Well this is indeed Mundane. This is just a Rust frontend to an OpenSSL fork. There's nothing of major interest here and not sure why it's getting attention. There are already Rust frontends to OpenSSL.

From the install notes, can someone tell me why BoringSSL needs Perl to build?


I guess for the same reason OpenSSL needs Perl, to generate some files.

IIRC most notable the ASM files are generated via Perl.

Update: Here is a link to some Perl BoringSSL stuff: https://boringssl.googlesource.com/boringssl/+/master/crypto...


Worse: why does it need Go?


Perl is used to generate the final assembly files so they can be adapted between different x86_64 variants/systems/assemblers.


