
Auditing Rust Crypto: The First Hours - lwhsiao
https://research.kudelskisecurity.com/2019/02/07/auditing-rust-crypto-the-first-hours/
======
drexlspivey
The author, JP Aumasson (known for designing the BLAKE hash function, a SHA-3
finalist), recently audited Grin's underlying crypto library. Report here:
[https://grin-tech.org/audits/jpa-audit-report.html](https://grin-tech.org/audits/jpa-audit-report.html)

------
firethief
> Are sensitive values set to zero after being used? In Rust this may be done
> using the Drop trait rather than explicitly.

A zeroing Drop is theater unless the sensitive data is Pinned too. Rust likes
to copy things around.

~~~
eridius
I'm not particularly familiar with Pinned but the documentation on std::pin
suggests it prevents moving data, not copying data (and is primarily designed
around enabling self-referential structs).

~~~
legulere
Moves are exactly the problem, as the data is just copied, without drop
running on the source, and they happen implicitly.

~~~
eridius
Rust isn't going to just silently move the pointed-to data in something like
Box<[u8; 32]>. It looks like the point of Pinned is to protect against
operations like swap() that would move the pointed-to data, but the point here
is to maintain the data's invariants (e.g. a self-referential struct cannot be
moved or its self-references would be invalid) rather than to protect against
silent moves (a call to swap() is definitely not silent).

~~~
masklinn
> Rust isn't going to just silently move the pointed-to data in something like
> Box<[u8; 32]>.

Of course not, but for efficiency you might have an [u8;32][0] instead and
_that_ will get memcpy'd when it's moved around.

[0] 32 bytes on the stack isn't much, a single Vec or String is 24

~~~
eridius
A [u8;32] doesn't work with Pin. Pin only works when you wrap it around a
pointer type. From the documentation on Unpin:

> _Since Rust itself has no notion of immovable types, and will consider moves
> to always be safe, this trait cannot prevent types from moving by itself._

From the documentation on std::pin:

> _In order to prevent objects from moving, they must be pinned by wrapping a
> pointer to the data in the Pin type. A pointer wrapped in a Pin is otherwise
> equivalent to its normal version, e.g. Pin<Box<T>> and Box<T> work the same
> way except that the first is pinning the value of T in place._

> _First of all, these are pointer types because pinned data mustn't be
> passed around by value (that would change its location in memory). …_

Looking at Pin itself, it appears that it works by only implementing DerefMut
if its target is Unpin, thus preventing you from mutating (and therefore
swapping/replacing) the data unless the data supports Unpin. But beyond that,
it also only implements Deref and DerefMut if its wrapped type itself
implements Deref/DerefMut, which means using Pin around a non-pointer
basically prevents you from accessing the wrapped data.
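This reading can be checked with a small sketch: because `[u8; 32]` implements `Unpin`, a `Pin<Box<[u8; 32]>>` still hands out mutable access, so pinning places no real restriction on plain byte arrays (illustrative only).

```rust
use std::pin::Pin;

fn main() {
    // [u8; 32] implements Unpin, so pinning it is effectively a no-op:
    let mut pinned: Pin<Box<[u8; 32]>> = Box::pin([0u8; 32]);
    // DerefMut is available precisely because the target is Unpin,
    // so the "pinned" bytes can still be mutated, swapped, or replaced.
    pinned[0] = 1;
    assert_eq!(pinned[0], 1);
    // For a !Unpin target, Pin would refuse to give out &mut in safe code.
}
```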

------
vardump
It's a bit surprising that the article didn't mention ensuring that constant-
time algorithms are used. If latency, CPU emissions, or anything else
externally observable depends on secret data, that secret might leak through a
side channel.

One way to combat these side channels is to use constant time multiplication,
loop conditions (including functions that might return early, like memcmp),
etc.

Is it possible for Rust to have any new or surprising mechanisms that cause
loss of constant time execution?

(Of course, aspects other than CPU instruction latency itself can leak; for
example, cache effects might be measurable.)
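The memcmp-style issue mentioned above can be sketched by accumulating differences instead of returning at the first mismatch. This is a minimal illustration; production code would use something like the `subtle` crate, since the optimizer is free to undo this pattern.

```rust
// Minimal sketch of a constant-time byte comparison: the loop always visits
// every byte, so timing depends only on the lengths, not on where the first
// mismatch occurs. NOTE: without volatile/black-box tricks the compiler may
// still break this; it only illustrates the idea.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // lengths are usually public, so this early return is fine
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // accumulate differences instead of returning early
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret", b"secret"));
    assert!(!ct_eq(b"secret", b"secreT"));
}
```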

~~~
GordonS
I've only read about constant-time code in relation to comparisons, such as
checking that a calculated hash matches a stored hash - could you elaborate on
how it applies to symmetric encryption?

~~~
vardump
Some operations, such as integer multiplication and division, have data-
dependent latency, depending on the CPU model.

See for example this page:
[https://bearssl.org/ctmul.html](https://bearssl.org/ctmul.html)

> could you elaborate on how it applied to symmetric encryption?

Non-constant time multiplication is mostly (only?) an issue with asymmetric
crypto.

------
saagarjha
> Look for any recursive function calls and see if they could be abused to
> overflow the stack.

This isn't an exploitable vulnerability in Rust, though? I thought the program
would just abort if it overflowed the stack.

~~~
muricula
If you can make a truly enormous stack array you can sometimes skip past the
end of the stack onto the heap (or another section of memory). Then any writes
to the array which the program thinks are going to the stack are actually
going to the heap. This is sometimes known as stack clashing.

~~~
saagarjha
Is this something you can do in Rust?

~~~
eridius
I haven't tried, but maybe with something like

      let huge_ary: [u8; 2097152 /* 2MiB */] = unsafe { ::std::mem::uninitialized() };
      // hopefully huge_ary has bumped the stack pointer past any guard pages
      // and any further stack values will overwrite the heap

~~~
saagarjha
Well, you're doing unsafe things there, so all bets are off.

~~~
eridius
You might be able to hack it by saying something like

      let huge_ary: [u8; 2097152];
      if false_value_compiler_cant_prove_is_false {
        huge_ary = [0; 2097152];
      }
      // hopefully huge_ary has reserved space on the stack despite not being initialized
      // and so any further stack values would overwrite the heap

~~~
estebank
You'll get

        error[E0381]: borrow of possibly uninitialized variable: `huge_ary`
         --> src/main.rs:6:7
          |
        6 |   bar(&huge_ary);
          |       ^^^^^^^^^ use of possibly uninitialized `huge_ary`

[https://play.rust-
lang.org/?version=nightly&mode=debug&editi...](https://play.rust-
lang.org/?version=nightly&mode=debug&edition=2018&gist=49d82e946bd933ff8af80a4ed2345c1e)

~~~
eridius
You're not supposed to _use_ the array, it's just supposed to exist on the
stack. I'm not sure how to ensure the stack space isn't reclaimed; maybe we
can do something with declaring it up top, then putting the `if` that
initializes it later on, and hope the compiler still reserves the stack space
for huge_ary ahead of stack for other variables?

------
CaliforniaKarl
Looking at the article, I like the Drop trait as a way to ensure memory is
cleaned before being released.
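The zero-on-drop pattern mentioned above can be sketched like this (a hypothetical `Secret` wrapper; real code would use the `zeroize` crate, which defends more rigorously against the compiler eliding the writes):

```rust
// Sketch of a zero-on-drop wrapper. write_volatile makes the zeroing less
// likely to be optimized away as a dead store; the `zeroize` crate handles
// this more carefully.
fn wipe(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        unsafe { std::ptr::write_volatile(b, 0) };
    }
}

struct Secret([u8; 32]);

impl Drop for Secret {
    fn drop(&mut self) {
        wipe(&mut self.0); // runs automatically when the value goes out of scope
    }
}

fn main() {
    let mut buf = [0xAAu8; 4];
    wipe(&mut buf);
    assert_eq!(buf, [0, 0, 0, 0]);

    let s = Secret([0xAA; 32]);
    drop(s); // Drop::drop zeroes the buffer before the memory is released
}
```

Note this addresses cleanup on release only, not the paging question.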

What is the preferred way in Rust to say "If at all possible, please do not
allow this value/object/whatever to be paged out of RAM?"

~~~
orf
How would that work at the operating system level, which is where page-ins and
page-outs occur? You might be able to tell some OSes not to page out a
specific page, but with variable-length data that becomes impossible

~~~
charleslmunger
[http://man7.org/linux/man-pages/man2/mlock.2.html](http://man7.org/linux/man-
pages/man2/mlock.2.html) at least for linux.

~~~
monocasa
mlock, like VirtualLock, doesn't guarantee that a copy won't go out to disk,
only that a copy will always be in RAM.

~~~
charleslmunger
The notes section of the page I linked addresses this - memory will only be
paged out in a hibernation state where RAM is powered down. It won't be paged
out otherwise.

~~~
monocasa
That's just one of many examples.

For instance, if you're running in a VM, all bets are off, because the host
can and will page out guest physical memory.

------
SilasX
Sorry if too off-topic, but I figured this was a good place to share my
experience with trying to use the Rust `crypto` library, which I recounted in
this comment [1]. It's not a security concern, but it does affect usability.

I had the simple task[2] of encrypting bytes with AES using the library. But
(as detailed in the post), they didn't expose a simple, "give us plaintext
bytes, give us key bytes, give us options, we'll spit back the ciphertext
bytes". Instead, you have to delve into the irrelevant details of constructing
mutable buffers (which require special care in Rust), so it can read from and
write to them, and without even any good examples in the docs for how to
construct those buffers.

It seems designed from the perspective of "I have extremely limited memory"
-- which is fine, but why not have a simple wrapper on top of that for the
common, non-embedded use case, and some example code?

[1]
[https://news.ycombinator.com/item?id=18943940](https://news.ycombinator.com/item?id=18943940)

[2] for the Matasano/cryptopals challenge. Interestingly, I found that the
hardest parts weren't figuring out the vulnerabilities, but in implementing a
spec (every writeup was poor) or getting "off-the-shelf" software to behave as
expected so I can have a utility function for it and forget about it from then
on!

~~~
cetra3
Are you talking about the crate `crypto` or `rust-crypto`? Neither has been
updated in a couple of years.

The `ring` library is a bit more modern, but still requires a bit of a run
around. Here's an example of encrypting a cookie:
[https://github.com/alexcrichton/cookie-
rs/blob/master/src/se...](https://github.com/alexcrichton/cookie-
rs/blob/master/src/secure/private.rs#L144)

~~~
SilasX
Why does everyone leap to suggesting that 2016 was some kind of dark age of
Rust when no one could be expected to do anything right? You're not the first
[1].

I'm reminded of the Onion article about "Stating current year still the
leading argument for reform" [2] (e.g. "It's 2019, people!")

I guess the corollary is, "That was 3 years ago" still leading excuse for
shoddy work :-p

[1]
[https://news.ycombinator.com/item?id=18943056](https://news.ycombinator.com/item?id=18943056)

[2] [https://www.theonion.com/report-stating-current-year-
still-l...](https://www.theonion.com/report-stating-current-year-still-
leading-argument-for-1819576151)

~~~
fpgaminer
I don't believe that's what the comment you're replying to stated. The fact
that `rust-crypto`, for example, has not been updated since 2016 doesn't imply
that the code was necessarily bad in 2016, but that the project itself may not
be maintained anymore. Static, perfect code is rare.

~~~
SilasX
I know they didn't state it. I said they _suggested_ it. Which they, and the
linked comment, did.

When your immediate reply is to dismiss the dominant rust crypto library in
2016 as being understandably bad because the development ended in 2016, you're
_suggesting_ that the year (and before) was some kind of dark age for rust.

>Static, perfect code is rare.

My criticism isn't "hey, there's a lingering bug that no one fixed and which
broke my use case". My criticism is, "The design of the _exposed interface to
the core library functionality_ forces me to care about irrelevant
implementation details and enable attributes (mutability) that my use case
doesn't need."

When you say that a project developed in 2016 can be expected to fail on that
front, what you're saying is, 2016 was too early to expect developers of a
major project (in a hot language followed by a lot of smart people) to
understand the concept of separation of concerns and abstraction of irrelevant
details.

Do you see why I might be skeptical of the relevance of the year of
development?

There's a similar point to be made about (the lack of) clear examples, a
concept that didn't spring into existence in 2017.

~~~
kaoD
> When your immediate reply is to dismiss the dominant rust crypto library in
> 2016 as being understandably bad because the development ended in 2016,
> you're suggesting that the year (and before) was some kind of dark age for
> rust.

I was there and it actually was.

Rust 1.0 was just released and the ecosystem was mostly maturing at that
point.

You're talking about version 0.2.36 of a library that had been in development
for less than two years during a quite tempestuous time in Rust. I'm sure they
were more concerned with making the code work, making it secure, etc.

------
richardwhiuk
> Review the dependencies listed in Cargo.toml: Will the latest version be
> used? (preferable but not always the right choice) Are these established,
> trustworthy packages?

The author seems to have forgotten about Cargo.lock....

~~~
eridius
Cargo.lock is something the top-level program uses. If you're writing a
library that does crypto, your Cargo.lock will be ignored by any upstream
libraries/applications that depend on you (and because of this the
recommendation is to not even check your library's Cargo.lock into version
control).
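The distinction can be seen in a library's manifest: Cargo.toml declares version *requirements* (semver ranges) that downstream applications resolve for themselves, while only the top-level application's Cargo.lock pins exact versions (crate name below is just an example):

```toml
# Hypothetical library manifest. "0.16" is a semver requirement
# (>= 0.16.0, < 0.17.0): an application depending on this library resolves
# the exact version via its own Cargo.lock, not the library's.
[dependencies]
ring = "0.16"
```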

~~~
richardwhiuk
Ah right - I thought this was regarding security tool behaviour not an app -
I'm aware Cargo.lock doesn't provide locking at an app level.

I consider a library insisting on locking its second-level dependencies for
no good reason to be obnoxious behaviour. As producer of an app, I want the
ability to upgrade your dependencies to patch security problems.

In my experience, excessive locking can make security problems worse not
better.

------
Animats
If you have unsafe Rust code in your crypto code, you're doing it wrong.

~~~
oconnor663
I'm not aware of any SIMD frameworks that are advanced enough to spare me from
writing unsafe code yet, particularly if I want to ship multiple
implementations and detect support dynamically.

Also I'm not sure you can get any constant time guarantees today without
calling into C or asm.
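The pattern described here (shipping multiple implementations and detecting support dynamically) looks roughly like the following sketch; the AVX2 body is a stub standing in for real `core::arch` intrinsics, and `add_slices` is a hypothetical function name.

```rust
// Sketch of runtime CPU-feature dispatch: a safe wrapper detects AVX2 at
// runtime and only then calls the unsafe, #[target_feature]-gated function.
#[cfg(target_arch = "x86_64")]
fn add_slices(a: &[u32], b: &[u32]) -> Vec<u32> {
    if is_x86_feature_detected!("avx2") {
        // Sound only because we just verified AVX2 support at runtime.
        unsafe { add_slices_avx2(a, b) }
    } else {
        add_slices_scalar(a, b)
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn add_slices_avx2(a: &[u32], b: &[u32]) -> Vec<u32> {
    // A real implementation would use core::arch::x86_64 intrinsics here;
    // the scalar fallback body keeps the sketch short and correct.
    add_slices_scalar(a, b)
}

fn add_slices_scalar(a: &[u32], b: &[u32]) -> Vec<u32> {
    a.iter().zip(b).map(|(x, y)| x + y).collect()
}

fn main() {
    let a = [1u32, 2, 3];
    let b = [10u32, 20, 30];
    assert_eq!(add_slices_scalar(&a, &b), vec![11, 22, 33]);
    #[cfg(target_arch = "x86_64")]
    {
        assert_eq!(add_slices(&a, &b), vec![11, 22, 33]);
    }
}
```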

