This is an exciting update! Leveraging AWS libcrypto is a smart move that will allow Rustls to access robust and validated cryptography while focusing development efforts elsewhere. Achieving FIPS 140-2 validation expands possibilities for regulated industries to securely adopt Rustls. Kudos to the team on enabling more widespread usage of this critical communication security technology!
The Rustls TLS implementation and certificate verification are all safe Rust.
The underlying cryptography is still a mix of C and asm; that's the best option we have right now, particularly if we want support for things that make it deployable, like FIPS. We are looking for ways to improve the safety of the underlying crypto in the future.
Has anyone in this space considered adding type annotations to assembly?
It’s totally possible and it’s a thing compilers for memory safe languages sometimes have to do internally.
It wouldn’t take a lot of language engineering to make it nice. You’d end up being able to take that asm code more or less as is and annotate it with just a type proof so that Rust/Go/Fil-C can call into that shit without worrying about it blowing up your rules.
The issue with writing crypto code in anything above assembly is that there's a risk that optimizing compilers could turn your finely crafted constant time, constant power code into something which is not that.
Generally speaking, an optimizing compiler is not going to introduce a branch (especially a data-dependent branch) where one doesn't exist in the code.
To my knowledge, the bigger reasons for writing assembly for low-level cryptography are (1) performance, and (2) avoiding UB. The latter, particularly around C's type promotion and signed integer shifting rules, is a significant source of bugs[1].
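To make that C hazard concrete (a sketch; load_be_u32 is an illustrative name, not any library's API): in C, a uint8_t operand is promoted to signed int, so shifting a byte's high bit left by 24 lands in the sign bit, which is undefined behavior. The same expression in Rust is fully defined because the widening is explicit:

    // In C, `b[0] << 24` promotes b[0] to signed int first; if b[0] >= 0x80
    // the shift sets the sign bit, which is UB. Rust's explicit widening
    // makes the same big-endian load well-defined.
    fn load_be_u32(b: [u8; 4]) -> u32 {
        (b[0] as u32) << 24 | (b[1] as u32) << 16 | (b[2] as u32) << 8 | (b[3] as u32)
    }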
Afaik there's no guarantee that the compiler won't perform such changes. Even if it happens to generate the expected instructions today it could change with the next compiler release.
I think on some x86 CPU tuning levels this can happen around 1-bit integers (aka bools) when the cost model says it's cheaper for whatever reason:
    let (c, carry) = a.carrying_add(b, false); // (sum, carry-out); nightly bigint_helper_methods
    d += carry as u64;

could be turned into

    let (c, carry) = a.carrying_add(b, false);
    if carry {
        d += 1;
    }
And I recall doing some bit-twiddling to get something like a cmov, but the compiler recognized the pattern and turned it back into a branch (this was for performance optimization, not crypto, but still...)
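For concreteness, the pattern in question looks something like this (a minimal sketch; ct_select is an illustrative name, not any particular library's API):

    // Branchless select: returns a when choice == 1, b when choice == 0.
    // Nothing stops an optimizer from recognizing this and emitting a
    // cmov, or a branch, as its cost model prefers.
    fn ct_select(choice: u64, a: u64, b: u64) -> u64 {
        let mask = choice.wrapping_neg(); // all ones if choice == 1, all zeros if 0
        (a & mask) | (b & !mask)
    }

I believe crates like subtle route the choice value through an optimization barrier (core::hint::black_box) to discourage exactly this, but that's best-effort mitigation, not a guarantee.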
I’ve had rustc turn my branchless code consisting purely of bitwise operations into branching code. There was probably nothing rustc-specific about it. Just LLVM estimating that branches would be faster in that case.
> Generally speaking, an optimizing compiler is not going to introduce a branch (especially a data-dependent branch) where one doesn't exist in the code.
You can't count on that, especially if you give the compiler a loop that has a versioning opportunity.
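Versioning here means the compiler emits several variants of the same loop behind runtime-checked guards, and those guards are branches it invented. A textbook constant-time comparison like the sketch below (ct_eq is an illustrative name) is only branch-free as long as the optimizer cooperates:

    // Accumulate differences, compare once at the end. The as-if rule still
    // permits a compiler that notices diff is only tested against zero to
    // rewrite this as an early-exit loop, reintroducing a secret-dependent
    // branch without changing the function's result.
    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        assert_eq!(a.len(), b.len());
        let mut diff = 0u8;
        for (&x, &y) in a.iter().zip(b.iter()) {
            diff |= x ^ y;
        }
        diff == 0
    }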
It is still a safe wrapper around unsafe code. I agree that I would prefer that situation (mainly because of not needing an additional compiler) but it’s structurally the same thing.
Not really, a number of our crypto implementations are pure Go. In fact, we always have a pure Go fallback that you can select with "-tags purego". As of Go 1.23 we will be systematically testing it, too, because it enables other compilers like TinyGo. They might be slower, but with the notable exception of AES (because implementing AES in constant time without AES-NI is hell) the pure Go implementations are just as secure.
Moreover, some of the assembly cores are a couple dozen lines for the hottest loops. I guess you could call the whole Go package a safe wrapper around that unsafe code, but I'm used to thinking of a wrapper as not the place where the substantial logic lives.
It's also meaningfully different from AWS-LC, discussed here, which has the entire cryptographic operation (like a signature or encryption API) implemented in C. (It's still great progress to move the TLS and X.509 implementations to a safe language, as that's where most memory safety bugs are!)
Sorry, just to be clear, I don’t know anything about Go’s crypto implementations, I was purely responding to the parent who claimed they were wrappers around asm.
I think we're making two different points. I am talking at a very high level: when people say "yeah it's safe but there's unsafe under there," that is always the case at some point in the stack. Even a pure Go or pure Rust program ends up needing to interact with the underlying system, whose hardware isn't safe. There is still some code that has to reach outside the language's ability to check conformance with its abstract machine in order to do things at that level.
I don't disagree that minimizing the amount of unsafety is a good general goal. Nor am I saying that because there's unsafe inside, the code is not safe overall. Quite the opposite! I'm saying that not only is it possible, it's an inherent part of how we build safe abstractions in the first place.
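A tiny sketch of that pattern (illustrative names, not from any particular library): the unsafe block lives behind a check that makes the function impossible to misuse from safe code.

    // A safe API over an unsafe primitive: the emptiness check is what
    // makes the raw pointer dereference sound for every caller.
    fn first_byte(v: &[u8]) -> Option<u8> {
        if v.is_empty() {
            None
        } else {
            Some(unsafe { *v.as_ptr() }) // in bounds: v is non-empty
        }
    }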
(Oh and to be honest, I wish Rust had gotten the same level of investment around cryptography that Go has had. Big fan. Sucks it never happened for us. I’m glad to see continued improvements in the space, but like, I am not trying to say Go is bad here in any way.)
There is also the compiler. A language may claim to be implemented without any unsafe code in the standard library, but all that has happened is that the unsafe code got hidden in the compiler: in how it generates code, in intrinsics, etc.
And they now have to deal with the same kind of timing-attack-related issues as everybody else https://github.com/golang/go/issues/49702 (and they'll likely lag behind)
Do you know of any cryptography implementation that sets the Data Independent Timing flag? We've been trying to figure out what others are doing about it, because as far as I can tell nobody is.
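(For anyone unfamiliar, here is roughly what setting it involves, as a hedged sketch with illustrative names rather than code from any real library: on AArch64 with FEAT_DIT, DIT is a PSTATE bit you enable before running secret-dependent code.)

    // Hypothetical sketch: enable Data Independent Timing on AArch64
    // (FEAT_DIT). `enable_dit` is an illustrative name; assembling the
    // instruction may also require building with the `dit` target feature.
    #[cfg(target_arch = "aarch64")]
    unsafe fn enable_dit() {
        core::arch::asm!("msr DIT, #1", options(nomem, nostack));
    }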
Anyway, not sure why relying on C/C++ would have helped us here.
The point is not about relying on C/C++, it's about using existing implementations instead of re-inventing the wheel all the time. This is a cultural thing when it comes to Go and it has bitten them multiple times, like when they tried not to use the system's libc on macOS, or when they had issues dealing with memory pages on Linux.
Good to know there's someone in charge specifically of the cryptographic stuff for Go at Google though.
Go has good reasons not to bring C/C++ into every build, starting from the ability to cleanly cross-compile.
I can't comment on the rest, but the security track record of the crypto libraries is stellar compared to pretty much any other library (and it already was before my tenure).
(BTW, I am not at Google anymore, although I still maintain specifically the crypto libraries.)
Fil-C is memory-safe down to the libpizlo POSIXish syscall layer, and then even those syscalls do memory safety checks (so you can't read(2) into an OOB area of a buffer, for example).
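(As a sketch of the shape of that check, with illustrative names rather than libpizlo's actual code, and assuming the libc crate for the raw syscall:)

    // Validate the destination range before forwarding to read(2), so the
    // kernel can never write outside the caller's buffer.
    fn checked_read(fd: i32, buf: &mut [u8], len: usize) -> isize {
        assert!(len <= buf.len(), "read(2) would write out of bounds");
        unsafe { libc::read(fd, buf.as_mut_ptr().cast(), len) }
    }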
So, some safe code is built on a crapton of unsafe code, while other safe code is built on a tightly controlled TCB. There's a big spectrum there.
You’re describing exactly what I am describing: you still call out into a syscall that is not safe. You prevent that by checking things in the wrapper. Very standard.
You’re disingenuously conflating calling into a pile of userland unsafe code that does crypto using arrays and ptr math, which also does unsafe syscalls, with making all that memory safe except the syscall.
They’re not the same thing.
If they were the same thing then there would be no point to memory safety at all.