
Rust’s standard library was vulnerable for years and nobody noticed - agumonkey
https://medium.com/@shnatsel/how-rusts-standard-library-was-vulnerable-for-years-and-nobody-noticed-aebf0503c3d6
======
cdoxsey
In Go you can write unsafe code in libraries. You can even write raw assembly
if you want.

We had an encoding library written in a very c-ish style for maximum
performance, and sure enough, it had an off-by-one error on the last byte that
could've been exploited.

It's funny that, even having severely limited the area where we could use
unsafe, we still somehow managed to write it improperly.

If anything it serves as a reminder of why memory safety is so critical and
Rust was right to focus on it from the beginning... But developers always
think they're the exception to the rule. I wonder how often 3rd party
libraries end up using unsafe unnecessarily.

~~~
jack1243star
Here's an example of a quite high profile Rust project discovered to have been
(ab)using unsafe:

[https://www.reddit.com/r/rust/comments/8s7gei/unsafe_rust_in...](https://www.reddit.com/r/rust/comments/8s7gei/unsafe_rust_in_actixweb_other_libraries/)

~~~
steveklabnik
... and, due to said awareness, the issues were all fixed. The presence of the
unsafe keyword made the audit much easier. Everything is working as intended!

~~~
HugoDaniel
[https://github.com/actix/actix-web/search?utf8=%E2%9C%93&q=u...](https://github.com/actix/actix-web/search?utf8=%E2%9C%93&q=unsafe&type=)

~~~
steveklabnik
Sure. I haven’t looked at these exact blocks, but the point is that they were
using tons of unsafe _that they didn’t need to_. Unsafe exists to be used
when it’s necessary.

11 now? Iirc it was on the order of 100 before.

~~~
HugoDaniel
11 files, showing only the top two references per file. In a proper grep it
shows about 35 unsafe references. Not bad indeed.

------
sometimesijust
Some readers seem to be getting thrown by the casual tone of this article. For
those struggling not to rage, my three takeaways were:

* Automated checking of Rust's std lib could improve Rust's security

* Don't use unsafe if you don't need it

* Releasing a fix for a security vulnerability should be accompanied by a CVE if you want people (such as anyone using Debian) to not still be vulnerable two years later

Note: despite the initial slant, the author is very pro-Rust.

~~~
vanderZwan
I wouldn't even say the initial slant is anti-Rust. To me it reads as _"Rust
wants what I want, which I appreciate, and that is exactly why I won't go easy
on it"_. Yes, the title is click-baity, but the author also immediately
acknowledges that "unsafe" is a necessary compromise:

> _But dealing with those things is necessary to run code on modern hardware,
> so something has to deal with it._

The main point is that "unsafe" is the weakest link in the language, since it
bypasses the safety guarantees of Rust. Therefore it needs solutions outside
of the language (in the form of how bugs are handled, and programmer culture)
to minimize the risks it brings.

------
mdip
The comments in this entry are harsher than I expected after having completed
the article. I agree that the headline was a little misleading (maybe that's
harsh, but perhaps a little click-baitey). He had one, sort-of peripheral
story that was really important to me -- paraphrased:

> Your bug tracker probably has some security vulnerabilities that were
> misidentified as routine bugs.

This is especially true if the language you've chosen to write your code in is
C/C++, or if the problem happens in an unsafe context in another language
(including unexpected ones like C#). This was something I sort-of did by
habit. In a past
job I had a side-responsibility of tracking CVEs for any software run within a
large global organization. We had a (pretty small, but surprising) incident
that we were unprepared for where a bug in a Microsoft product[0] was causing
a DoS on a number of sites on our intranet. It turned out to be a bug that was
resolved with a service pack, but was not assigned a CVE or an MSxx-xxx number
(I believe this was revised later).

From that point on, we paid special attention to a handful of apps with the
rule of "If it causes a crash, it's a DoS, which makes it a security issue"
followed immediately by "If we can't prove that said crash cannot lead to
exploitation". Which meant that almost every little problem was being treated
far more severely than it needed to be. After a while this was tempered; we
did a little less research and marked those that did not have an associated
CVE as simply "monitor"[1].

[0] Sorry, I searched old notes and couldn't find the one, but it was around
the early Vista timeframe affecting, I think, one of the parsers used by
Sharepoint ... I could have that very wrong, my eyes bled from reading so many
of those.

[1] Patches, sometimes, break things. Back then, in the MS world, OS patches
broke things with far greater frequency than they do today (and it was more
painful to recover from), so patching a "non-problem", breaking a bunch of
workstations, and taking with them those employees' ability to do their jobs
was a quick path to unemployment. Of course, failing to patch a known issue
will lead that way too. It's a wonder many of the secops folks I've worked
with are so grumpy.

------
leshow
This article casually throws around definitions of 'memory safety' that are
massively different.

Rust's 'memory safety' guarantee is different from Go's 'memory safety'. Rust
includes protection from things like modifying memory from 2 different spots
at the same time, among other things.

AFAIK that's perfectly fine to do in Go, you can pass around pointers between
threads as much as you like. This sort of behaviour wouldn't be allowed in
safe Rust.

edit: I just really don't understand the criticism against Rust here for
having 'unsafe', given that much of what is covered in those unsafe blocks is
allowed in many other languages, including a language mentioned in the article
as being 'safe'.
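
To make the contrast concrete, here is a minimal sketch (not taken from the
article) of what safe Rust pushes you toward when several threads need to
mutate the same value; handing plain mutable references to multiple threads
simply doesn't compile:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Minimal sketch: safe Rust won't let two threads hold unsynchronised
// mutable access to the same data, so sharing goes through something
// like Arc<Mutex<T>>. The equivalent raw pointer sharing is legal in Go.
fn main() {
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4000);
}
```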

~~~
zozbot123
> Rust's 'memory safety' guarantee is different from Go's 'memory safety'.

Nope. Rust protects against data races, which are _universally_ considered
memory unsafe, given the usual C/C++ concurrency model. There is no ambiguity
here. And yes, sequential Go _is_ memory safe, but concurrent Go is not. Deal
with it, gophers - and please do not impute purposeful ambiguity and
disingenuous characterization where there isn't any.

~~~
thanatos_dem
...Are you really trying to use “gophers” as a derogatory term?

~~~
kjeetgill
Many Go programmers actually refer to themselves as Gophers, after the mascot,
the same way university students might refer to themselves as Jaguars,
Titans, or Aggies.

It's hard to google a good example but here's one I found:

> A creative and passionate community of Go developers and users (“Gophers”)
> has grown up around the language, and today there are U.S. Go conferences,
> European Go conferences, […] even a group for LGBT Gophers (yes,
> “Gayphers”).

[https://blog.newrelic.com/technology/golang-experts-follow-o...](https://blog.newrelic.com/technology/golang-experts-follow-online/)

------
lelf
> _For example, Erlang — that funky language that people use to program
> systems with 99,9999999% uptime (no, that’s not an exaggeration) — has
> repeatedly shipped with a broken implementation of Map data structure in its
> standard library._

…during the period when ‘maps’ was an experimental/beta feature.

~~~
acdha
Unless you have to type “Please enable maps even though it will break all of
my code” every time you run it, that's a distinction most people will ignore.
If something is useful enough to ship, people will want to use it — helping
test it is even the point — and only the most prominent “This is terribly
unsafe” warnings will be heeded.

~~~
heavenlyblue
So what’s more important, to be secure or to ship, then?

I think people made their choice, and the developers of said maps are not
responsible for whatever comes out of the maps’ lack of security.

~~~
acdha
Those aren't the only options. A key part of the original comment was “in its
standard library” — in the modern era where package managers have existed for
decades, it's quite reasonable to develop things in separate projects and only
pull them into the standard library when they're more stable. That allows
people to use them without giving them the weight of being in the standard
library.

Similarly, you could require some sort of opt-in / runtime warning approach
where you have to explicitly opt in to having them and they're clearly marked
as unstable. If you had to compile with an `ENABLE_INSECURE_MAPS` flag or do
some sort of import from an experimental module, it'd make it a lot easier for
someone to recognize that they're doing something less stable than normal,
similar to Rust having you mark code as unsafe.
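
In Rust/Cargo terms, a minimal sketch of that kind of explicit opt-in could be
a feature gate (the "experimental" feature name here is invented for the
example, not an actual Erlang or Rust mechanism):

```rust
// Hypothetical library code: this module only exists if the consumer
// explicitly enabled the (made-up) "experimental" feature, e.g. in their
// Cargo.toml: mylib = { version = "0.1", features = ["experimental"] }
#[cfg(feature = "experimental")]
pub mod experimental_maps {
    use std::collections::HashMap;

    /// Stand-in for a not-yet-stable data structure.
    pub fn new_map() -> HashMap<String, String> {
        HashMap::new()
    }
}
```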

------
vanderZwan
> _There is a highly effective technique for discovering vulnerabilities that
> I haven’t applied to Rust yet. It beats everything else by a long shot, and
> can be used only by the bad guys who want to break stuff, not the good guys
> who fix it. It’s… searching the bug tracker._

> _The proper way to handle them is to file [memory safety bugs] into a
> database called Common Vulnerabilities and Exposures (CVE for short) so that
> people who care about security are alerted to it and ship fixes to users. In
> practice such bugs are silently fixed in the next release at best, or remain
> open for years at worst, until either someone discovers them independently
> or the bug is caught powering some kind of malware in the wild. This leaves
> a lot of security vulnerabilities in plain sight on the public bug tracker,
> neatly documented, just waiting for someone to come along and weaponize
> them._

Interesting that this is such a common communication failure. Wouldn't it be
relatively easy to make automatic tools to help with this sort of thing?
Searching for keywords in new issues for example?

~~~
vanderZwan
> _Searching for keywords in new issues for example?_

By which I mean using bots, obviously.

Would it be an exaggeration to flag all memory issues as security issues by
default?
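
A bot doing that wouldn't need to be sophisticated. A minimal sketch of the
keyword-flagging idea (the keyword list and issue titles are invented for the
example):

```rust
// Minimal sketch: scan incoming issue titles for tell-tale memory-safety
// keywords and flag them for human security triage.
fn looks_security_relevant(title: &str) -> bool {
    const KEYWORDS: &[&str] = &[
        "segfault", "use-after-free", "double free",
        "out-of-bounds", "buffer overflow", "data race",
    ];
    let t = title.to_lowercase();
    KEYWORDS.iter().any(|k| t.contains(k))
}

fn main() {
    let new_issues = [
        "Segfault when draining a VecDeque",
        "Typo in documentation",
    ];
    for title in new_issues {
        if looks_security_relevant(title) {
            println!("needs security triage: {title}");
        }
    }
}
```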

------
d33
I don't really like the message. For me, it's not "boo, a bug was found after
years", it's "yay, they managed to find it!". Bugs are inevitable, security
bugs as well.

What I don't understand, though, is how they ended up with that bug in the
deque implementation anyway. Does Rust use the "unsafe" keyword for its
major data structures? If so, why? I would assume that having Box would let
you implement
most of the ideas without resorting to working with pointers and manual memory
allocation...

~~~
0xCMP
Mentioned here

> You see, Rust provides safe abstractions that let you do useful stuff
> without having to deal with the complexities of memory layouts and other
> low-level arcana. But dealing with those things is necessary to run code on
> modern hardware, so something has to deal with it. In memory-safe languages
> like Python or Go this is usually handled by the language runtime — and Rust
> is no exception.

> In Rust, the nitty-gritty of hazardous memory accesses is handled by the
> standard library. It implements the basic building blocks such as vectors
> that expose a safe interface to the outside, but perform potentially unsafe
> operations internally.
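
For anyone wondering what that pattern looks like, here is a minimal sketch
(illustrative only, not the actual VecDeque code) of a safe public API
wrapping a potentially unsafe operation internally:

```rust
/// Illustrative sketch of the std-library pattern quoted above: the public
/// method is safe because it validates the invariant itself before doing an
/// unchecked access. A mistake in that check is exactly the kind of bug the
/// article describes.
pub struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    pub fn new(data: Vec<u8>) -> Self {
        Buffer { data }
    }

    /// Safe interface: callers can't trigger an out-of-bounds read.
    pub fn get(&self, index: usize) -> Option<u8> {
        if index < self.data.len() {
            // SAFETY: the bounds check above guarantees `index` is in range.
            Some(unsafe { *self.data.get_unchecked(index) })
        } else {
            None
        }
    }
}

fn main() {
    let buf = Buffer::new(vec![1, 2, 3]);
    assert_eq!(buf.get(2), Some(3));
    assert_eq!(buf.get(3), None);
}
```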

~~~
deepsun
Huh, Java didn't know it's necessary to be unsafe to implement a queue
(ConcurrentLinkedQueue).

~~~
dbaupp
Java data structures are built on top of a pile of unsafe code (the JVM and
especially the GC), in a similar manner to building things on top of the std
data structures in Rust.

For concurrency in particular the JVM exposes a more restricted (slower) model
than the hardware, to guarantee no unsafety no matter how wrong the code is.
Rust can't make that trade-off and still reach its performance goals.

~~~
deepsun
Mmm, no, ConcurrentLinkedQueue uses the same RAM primitives as Rust can use
(compare-and-set, etc.) that are needed to implement concurrent queues. And it
doesn't use the Unsafe class (the analogue of Rust's "unsafe" keyword); it's
easy to check.

Or maybe I just miss something?

~~~
dbaupp
Yes: a GC means the programmer can avoid thinking about object lifetimes and
just pretend every object lives forever. The garbage collector is deeply
unsafe (as in, bugs may cause memory corruption): if a GC fails to account for
even a single pointer somewhere, it might deallocate an object too early,
potentially leading to problems like use-after-free.

Additionally, the JVM ensures every read and write to memory has (minimal)
synchronisation so that there is no risk of a data race, meaning no undefined
behaviour, no matter how wrong the code is. And, even the reads and writes
using proper synchronisation (compare-and-swap, etc.) have only one choice:
sequential-consistency.

In Rust, a global built-in-to-the-language GC isn't appropriate, and managing
object lifetimes properly, with shared memory, is hard. It is thus something
that a library has to use `unsafe` for, to assert to the compiler that the
programmer has got things right because it is unable to check. Similarly,
using synchronised reads/writes everywhere isn't right for Rust, so it is up
to the concurrent data structure author to use the right synchronisation
(which can be weaker for performance: acquire/release, instead of only SeqCst)
just for the rest of the code to be safe: this is also something a compiler
can't check.
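
As a rough illustration of that last point, here is a minimal sketch of the
ordering choice a Rust data-structure author has to get right (the flag/data
pairing is invented for the example, not taken from any real queue):

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// The writer publishes data with Release; the reader observes it with
// Acquire. SeqCst would also be correct, just potentially slower; picking
// the weakest sufficient ordering is the author's responsibility, and the
// compiler can't verify that the choice is right.
fn main() {
    let data = Arc::new(AtomicUsize::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let writer = {
        let (data, ready) = (Arc::clone(&data), Arc::clone(&ready));
        thread::spawn(move || {
            data.store(42, Ordering::Relaxed);
            // Release makes the data store visible to any thread whose
            // Acquire load of `ready` sees `true`.
            ready.store(true, Ordering::Release);
        })
    };

    while !ready.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    assert_eq!(data.load(Ordering::Relaxed), 42);

    writer.join().unwrap();
}
```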

------
glennpratt
Can this get a 2018 label?

------
jononor
The forbid-unsafe crate feature seems nice. It would be even more useful,
though, if crates.io allowed filtering based on it, or if one could specify in
Cargo that dependencies must not contain unsafe code.
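
For reference, the feature being referred to is just a crate-level lint
attribute; a minimal sketch of how it's applied:

```rust
// Crate root (lib.rs or main.rs): turn any use of `unsafe` in this crate
// into a hard compile error. Note it only covers this crate, not its
// dependencies -- which is why the crates.io/Cargo filtering suggested
// above would add real value.
#![forbid(unsafe_code)]

pub fn add(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}
```

(Third-party tools such as cargo-geiger scan a dependency tree for unsafe
usage, which covers part of what's being asked for here.)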

------
IshKebab
(2018)

------
stfwn
TLDR: a presumably ~2-year-old (10 Jul '15) segfault bug in the stdlib was
discovered, fixed, and merged into master on the same day, 27 Sept '17. The
next release was 12 Oct '17, 15 days later. OP argues the bug should have been
filed in the CVE database [0] so that people who run old versions of Rust know
about it and can act accordingly.

[0]:
[https://en.m.wikipedia.org/wiki/Common_Vulnerabilities_and_E...](https://en.m.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures)

Personally I'd say running old versions of software already means you don't
have all bug fixes. I'm sympathetic to the Rust team's argument that if they
had to test every bug for security vulnerabilities in order to know whether it
needs a CVE, they wouldn't get much other work done. It's
better to offer smooth upgrade paths and advise users to run the latest
versions.

~~~
debiandev
The opposite is true: new vulnerabilities can be introduced in new feature
releases, while older releases can receive backported security fixes.

That means that the level of security of a well maintained stable release
train can only increase over time.

This is why Debian puts so much effort in freezing and baking the distribution
and backporting security fixes.

~~~
stfwn
Good points, thank you. This did change my perspective.

------
heavenlyblue
I find this kind of marketing post incredibly amusing.

>> Rust’s standard library was vulnerable for years and nobody noticed

That's literally as if a developer said: "The world was hungry for this
database for years. Now I've built it."

Come on - that IS your job, after all.

And we know well enough that open-source projects never get enough time or
resources.

