We had an encoding library written in a very C-ish style for maximum performance, and sure enough, it had an off-by-one error on the last byte that could've been exploited.
It's funny that even though we severely limited the area where we could use unsafe, we still somehow managed to write it improperly.
If anything it serves as a reminder of why memory safety is so critical and Rust was right to focus on it from the beginning... But developers always think they're the exception to the rule. I wonder how often 3rd party libraries end up using unsafe unnecessarily.
11 now? Iirc it was on the order of 100 before.
>> Folks who are considering using actix-web might want to follow this thread about its heavy use of unsafe Rust.
It's one thing to make heavy use of a small bit of unsafe Rust. It's another to use unsafe heavily - like someone who just refuses to use the language properly and therefore just keeps writing unsafe code. It's not clear from the wording which case this was but they do say it was fixed, rather than declaring it an unmitigated disaster ;-)
So why did they give up on memory safety and ditch their GC, then? Relying on alloca alone has limited use cases, especially with threads and their tight stack limits.
Internally, Rust stack-allocates most local objects, but their size is unchecked. See e.g. https://github.com/rust-lang/rust/issues/48055
Second, Rust had memory safety in its early 0.x releases with a proper GC, but it was removed.
Now you either have to manually check the sizes of local objects, or handle global refs manually, or use unsafe and unchecked memory allocation for custom aggregates. The stdlib is full of such code. Nobody can declare memory safety for Rust, only you can, but unfortunately you exhausted your credibility a long time ago. The same goes for your claim of concurrency safety ("then don't use mutexes"). Rust is a fine language; you don't need to lie about its properties. The docs need to remove the false claims, and you seriously need to stop overhyping it.
If you can find memory unsafety in safe Rust, it would be a huge deal. Please file bugs. That doesn’t seem to be what you’re saying, though.
But the general problem is that Rust cannot be made memory safe at all with its current architecture. Overlarge stack allocations are unsafe by default (either accept compile-time failures or accept run-time stack failures on new threads), and the strategy of global containers with references into them is unsafe. That would hold even if Rust removed its unsafe keyword and disallowed malloc via FFI.
Maybe you should fix your attitude towards your unsafeties.
* [ ] How can we mitigate the risk of unintended unsized or large allocas? Note that the problem already exists today with large structs/arrays. A MIR lint against large/variable stack sizes would probably help users avoid these stack overflows. Do we want it in Clippy? rustc?
* [ ] How do we handle truly-unsized DSTs when we get them? They can theoretically be passed to functions, but they can never be put in temporaries.
Neither of those are memory safety relevant. (A stack overflow is a reliable crash, not memory corruption, in Rust.)
* Automated checking of Rust's std lib could improve Rust's security
* Don't use unsafe if you don't need it
* Releasing a fix for a security vulnerability should be accompanied by a CVE if you want people (such as anyone using Debian) to not still be vulnerable two years later
Note: despite the initial slant, the author is very pro-Rust.
> But dealing with those things is necessary to run code on modern hardware, so something has to deal with it.
The main point is that "unsafe" is the weakest link in the language, since it bypasses the safety guarantees of Rust. Therefore it needs solutions outside of the language (in the form of how bugs are handled, and programmer culture) to minimize the risks it brings.
Rust's 'memory safety' guarantee is different from Go's 'memory safety'. Rust includes protection from things like two threads modifying the same memory at the same time (data races), among other things.
AFAIK that's perfectly fine to do in Go, you can pass around pointers between threads as much as you like. This sort of behaviour wouldn't be allowed in safe Rust.
edit: I just really don't understand the criticism against Rust here for having 'unsafe' given much of what is covered in those unsafe blocks is allowed in many other languages, including a language mentioned in the article as being 'safe'
I think you are confusing two different narratives in the article: software development in general, and Rust the language/ecosystem/community.
The article raises many points about how the former as a whole does not care enough about security. How we do not have the right incentives to fix memory safety bugs (the bug bounty thing) nor treat them as the security risks that they are (the pushback from developers when you explain that something is a security issue).
That is the software industry in general though, not Rust.
There is also constructive criticism about how Rust handles these things and how it could do better. But it is clear that the author loves Rust for trying to find ways to fix this and how it aims to be a more secure and safe language. Note how the article is full of praise for how the Rust community responded to his feedback, and for the ambitions of the language.
EDIT: as I wrote in another comment, the main point regarding Rust is that "unsafe" by definition throws safety guarantees out the window, so to minimize the risks it brings, solutions outside of the language are needed (aside from hypothetical language improvements that reduce the need for unsafe, of course). This is easily forgotten in the hype about Rust's safety guarantees. Saying "but all languages screw this up in some way" is not really an excuse.
I wish they had chosen a different keyword; it doesn't necessarily mean that. For instance, the type checker still runs on the contents of unsafe blocks, and so does the borrow checker. The main things you can do that you can't normally are dereference raw pointers and call unsafe functions. That is potentially unsafe, and you should be careful, but it absolutely does not "throw all guarantees out the window".
The type checker still runs by default, but unsafe allows you to opt out of it selectively (with std::mem::transmute). In the same way, borrows are still checked, but unsafe lets you opt out by using raw pointers (as you mentioned).
Nope. Rust protects against data races, which are universally considered memory unsafe given the usual C/C++ concurrency model. There is no ambiguity here. And yes, sequential Go _is_ memory safe, but concurrent Go is not. Deal with it, gophers - and please do not impute purposeful ambiguity and disingenuous characterization where there isn't any.
You're agreeing with me; I was saying that Go isn't memory safe by Rust's definition. As you mentioned, it doesn't protect against data races; therefore, when Go claims memory safety it is using some different definition of memory safe.
It's hard to google a good example but here's one I found:
> A creative and passionate community of Go developers and users (“Gophers”) has grown up around the language, and today there are U.S. Go conferences, European Go conferences, […] even a group for LGBT Gophers (yes, “Gayphers”).
> Your bug tracker probably has some security vulnerabilities that were misidentified as routine bugs.
This is especially true if the language you've chosen to write your code in is C/C++, or if the problem happens in an unsafe context in another language (including unexpected ones like C#). Treating bugs this way was something I did almost by habit. In a past job I had a side responsibility of tracking CVEs for any software run within a large global organization. We had a (pretty small, but surprising) incident we were unprepared for, where a bug in a Microsoft product was causing a DoS on a number of sites on our intranet. It turned out to be a bug that had been resolved in a service pack but was never assigned a CVE or an MSxx-xxx number (I believe this was revised later).
From that point on, we paid special attention to a handful of apps with the rule of "If it causes a crash, it's a DoS, which makes it a security issue" followed immediately by "If we can't prove that said crash cannot lead to exploitation". Which meant that almost every little problem was being treated far more severely than it needed to be. After a while this was tempered; we did a little less research and marked those that did not have a CVE associated to simply "monitor".
 Sorry, I searched old notes and couldn't find the one, but it was around the early Vista timeframe affecting, I think, one of the parsers used by Sharepoint ... I could have that very wrong, my eyes bled from reading so many of those.
Patches sometimes break things. Back then, in the MS world, OS patches broke things with far greater frequency than they do today (and it was more painful to recover from), so patching a "non-problem" and breaking a bunch of workstations, taking with it those employees' ability to do their jobs, was a quick path to unemployment. Of course, failing to patch a known issue will lead that way too. It's a wonder many of the secops folks I've worked with are so grumpy.
…during the period ‘maps’ was an experimental/beta feature.
I think people made their choice, and the developers of said maps are not responsible for whatever comes of the maps' lack of security.
Similarly, you could require some sort of opt-in / runtime-warning approach where you have to explicitly opt in to having them, and they're clearly marked as unstable. If you had to compile with an `ENABLE_INSECURE_MAPS` flag or do some sort of import from an experimental module, it'd make it a lot easier for someone to recognize that they're doing something less stable than normal, similar to Rust having you mark code as unsafe.
> The proper way to handle them is to file [memory safety bugs] into a database called Common Vulnerabilities and Exposures (CVE for short) so that people who care about security are alerted to it and ship fixes to users. In practice such bugs are silently fixed in the next release at best, or remain open for years at worst, until either someone discovers them independently or the bug is caught powering some kind of malware in the wild. This leaves a lot of security vulnerabilities in plain sight on the public bug tracker, neatly documented, just waiting for someone to come along and weaponize them.
Interesting that this is such a common communication failure. Wouldn't it be relatively easy to make automatic tools to help with this sort of thing? Searching for keywords in new issues for example?
By which I mean using bots, obviously.
Would it be an exaggeration to flag all memory issues as security issues by default?
What I don't understand, though, is how they ended up with that bug in the deque implementation anyway. Does Rust use the "unsafe" keyword for its major data structures? If so, why? I would assume that having Box would let you implement most of these ideas without resorting to working with pointers and manual memory allocation...
> You see, Rust provides safe abstractions that let you do useful stuff without having to deal with the complexities of memory layouts and other low-level arcana. But dealing with those things is necessary to run code on modern hardware, so something has to deal with it. In memory-safe languages like Python or Go this is usually handled by the language runtime — and Rust is no exception.
> In Rust, the nitty-gritty of hazardous memory accesses is handled by the standard library. It implements the basic building blocks such as vectors that expose a safe interface to the outside, but perform potentially unsafe operations internally.
For concurrency in particular the JVM exposes a more restricted (slower) model than the hardware, to guarantee no unsafety no matter how wrong the code is. Rust can't make that trade-off and still reach its performance goals.
To me it was important to learn that Rust doesn't produce perfect code out of the box, even with all the beautiful theory and design.
It's good to stay alert in a way
Personally I'd say running old versions of software already means you don't have all bug fixes. I'm sympathetic to the Rust team's argument that if they'd have to test every bug for security vulnerabilities in order to know if they need to submit it to the CVE they wouldn't get much other work done. It's better to offer smooth upgrade paths and advise users to run the latest versions.
That means that the level of security of a well maintained stable release train can only increase over time.
This is why Debian puts so much effort in freezing and baking the distribution and backporting security fixes.
…this means i386 users and sysadmins who apply only security updates were still affected at that time.
>> Rust’s standard library was vulnerable for years and nobody noticed
That's like a developer saying: "The world was hungry for this database for years. Now I've built it."
Come on - that IS your job, after all.
And we know well enough open-source projects never get enough time nor resources.