The Rustonomicon: The Dark Arts of Advanced and Unsafe Rust Programming (rust-lang.org)
191 points by jvns on July 10, 2016 | 59 comments



Hey there, I wrote pretty much all of this. Sadly I wasn't ever able to finish it, and won't ever be able to due to contractual obligations. It has since languished due to lack of maintainership, even though it's part of the official rust docs.

If anyone wants to take up the mantle and clean it up, it would be greatly appreciated!


Are you writing a book instead? If so, looking forward to it.

If not, I'm completely at a loss to understand why contractual obligations would prevent you from writing something.


Perhaps the contractual obligations tie up all of his or her time.


Given the entry paragraph, perhaps Lord Cthulhu doesn't want him revealing any more summoning secrets.


Could be copyright assignment for off-hours work too.


These answers basically cover it.


I, too, fear the wrath of Lord Cthulhu.


From your perspective, do you have any cleanup items that are top priority?


"However BTreeMap is implemented using a modest spoonful of Unsafe Rust (most collections are)."

The need for unsafe code in pure Rust reflects a lack of expressive power. How raw memory becomes a typed object is a touchy area for arrays. To do collections safely, you have to be able to talk about an array which is only partially valid. This is quite possible, but you need formal-verification-like constructs to do it. You need to be able to express that a slice of an array is valid, even though the whole array isn't.

If you can talk about a slice being valid, and an element being valid, you're almost there. It's a theorem that if an element is adjacent to the end of a slice, the element has some property, and all elements in the slice have that property, then the slice extended by that element has the property too. Using that, you can prove by induction that as you grow a collection, the elements in use remain valid. If the access primitives don't let you access outside the valid range, they're safe.
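
(To make the invariant concrete, here is a minimal Rust sketch, using a hypothetical TinyVec type, of the "partially valid array" such a prover would reason about: only the first len slots of the buffer are initialized, and every safe method depends on exactly that property.)

    use std::mem::MaybeUninit;

    // Hypothetical fixed-capacity vector. Invariant: buf[..len] is initialized,
    // buf[len..] is not; safe methods only ever expose buf[..len].
    // (Element drops are omitted for brevity.)
    struct TinyVec<T> {
        buf: [MaybeUninit<T>; 8],
        len: usize,
    }

    impl<T> TinyVec<T> {
        fn new() -> Self {
            // An array of MaybeUninit may itself be left uninitialized.
            TinyVec { buf: unsafe { MaybeUninit::uninit().assume_init() }, len: 0 }
        }

        fn push(&mut self, value: T) {
            assert!(self.len < 8, "capacity exceeded");
            self.buf[self.len].write(value); // extends the valid prefix by one...
            self.len += 1;                   // ...and records that fact
        }

        fn as_slice(&self) -> &[T] {
            // SAFETY: by the invariant, the first `len` elements are initialized.
            // This is exactly the "partially valid" property a prover would check.
            unsafe { std::slice::from_raw_parts(self.buf.as_ptr() as *const T, self.len) }
        }
    }

    fn main() {
        let mut v = TinyVec::new();
        v.push(1);
        v.push(2);
        assert_eq!(v.as_slice(), &[1, 2]);
    }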

With some extra predicates and a modest theorem prover, much unsafe code in the collection area could probably be proven safe.

Rather than describing unsafe code as a "dark art", it would be more useful to try to formalize safety in this way. The theory is well understood, and there are many proof of correctness systems around for other languages. It might require a lot of annotation in unsafe code, to help the prover along, but that's not a bad thing.


Agreed that there are more powerful systems to verify this code, but these quickly face diminishing returns relative to their costs.

On the other hand, collections are embarrassingly testable. It's never been clear to me that formal methods gain much over thorough fuzzing/unit testing in this context.

(Rust's testing falls a fair bit short of thorough, unfortunately, and there have been a handful of memory-safety errors in their impls, although I'm not aware of this ever leading to an exploit in an application; they were mostly caught quite early, with the release trains as a buffer.)


http://envisage-project.eu/proving-android-java-and-python-s...

https://research.googleblog.com/2006/06/extra-extra-read-all...

Yes, you can asymptotically approach correctness with testing, but it turns out that surprises can lurk for quite a long time if that's your approach. Of course, that's how most collections code works and what most of the world is built on, so yes, it is generally "good enough", but it is still worth pointing out that it is not necessarily correct, even after being out in the field for decades.


Proving something correct is not a panacea either, although it can result in code that is generally "good enough". https://research.googleblog.com/2006/06/extra-extra-read-all...

> I was shocked to learn that the binary search program that Bentley proved correct and subsequently tested in Chapter 5 of Programming Pearls contains a bug.


This is, to some extent, moving the goal posts. You're considering "total correctness" whereas the focus of this stuff is largely "only" memory-safety.


I would have been moving the goal posts if I didn't acknowledge that you generally get "good enough" code from what we're doing now. The point is that even being embarrassingly testable isn't a total panacea, though.


Yeah, was going to say something similar. You shouldn't be reimplementing linked lists unless you have a very good reason, and if so you should be taking a careful look at what you do.

Rust does a great job of error checking in the 99% case, and the 1% of unsafe should come from well-vetted libs. No different than trusting the kernel to do the right thing.


Derek Dreyer and collaborators are working on specifying what needs to be proven about unsafe code, see http://plv.mpi-sws.org/rustbelt/


That's good. They may have trouble recruiting. They're looking for people who could make upwards of $150K in Silicon Valley and want to pay them at postdoc rates. Glassdoor says €41,595. Their own material has even lower rates.[1]

[1] https://www.mpg.de/8746248/Foerderrichtlinien_Postdoc_en.pdf


That's just the status quo for postdocs and PhD students, AFAICT. And yet universities everywhere still find people to take them up on the deal.


Well, the academic and implementation freedom is huge compared to any SV startup. And you don't need to worry much about financials, nor marketing, nor a million other things that many hackers dislike.


I agree. I regularly post the amazing work such people produce. A good example is Cyclone, which contributed to Rust's success.


Are there lots of opportunities to work on static analysis and theorem proving outside academia, especially in Silicon Valley?


Static analysis: for sure. Companies with large codebases are interested in understanding how their code works, building refactoring tools, finding bugs, etc. They have dedicated teams doing this. For example https://github.com/facebook/pfff.

Formal proofs: there are specific applications (see Amazon's use of TLA+). I personally feel nearly every company has a small piece of code that you can justify spending a few weeks/months modeling to ensure its correctness. Also, engineers working on medical devices, avionics, robotics, finance, etc. probably get to prove the correctness of their code.


> Also, engineers working on medical devices, avionics, robotics, finance, etc. probably get to prove the correctness of their code.

Some might, but the relevant ISO standards are ok with testing if you reach sufficient coverage, so generally testing is all that's happening.


There is ongoing work in formal verification of both Rust and the unsafe code in the stdlib.

However, the average writer of unsafe code will not use formal verification. For them, the nomicon is a very useful resource. And describing it as a dark art is the best way to do it, because unsafe code can be hard and fraught with pitfalls.


I don't know much about what I'm about to say, but I've always thought it would be cool to embed some proof language like Coq inside the Rust compiler, so that devs can write proofs that their unsafe code obeys some properties that the Rust compiler can use.


Wouldn't it be better to skip writing the unsafe code, and have the proof generate proven-safe unsafe Rust code?


Aw, see, this is me showing what I don't know :) I was thinking it would be something similar to this: https://en.wikipedia.org/wiki/Proof-carrying_code except at compile time.


Ouch. If data structures require unsafe operations, doesn't this make Rust's safety a 90/10 solution, and leave a small but credible threat of what are effectively buffer overflow attacks?

Look at all of the programming languages that have had to reimplement their hash tables because someone figured out that you can DoS a web server by sending requests with just the right query parameters. If there's a bug in hash resizing and it's unsafe code, what could someone make of that?


Probably unlikely. There was that thread::scoped issue last year right before 1.0, where some incorrect assumptions led to an unsafe API that was thought safe. But it was unlikely to happen to app code and be exploitable.

I examined every MS security bulletin over a 2 year period. Nearly all the serious ones were regular C/C++ code memory corruption issues. From what I could tell, Rust would have prevented every one.

MS's .NET had a few serious bugs. They were basically all class loader logic bugs (XBAPs getting more permissions, ways to bypass the CAS system). It's similar to the Java plugin issues: Java applets themselves were never exploitable. But the plugin might accidentally let you do things you're not supposed to. The most user-impacting was stuff in ASP.NET, which is more like an app framework, not a runtime. (There were some "load up arbitrary code/read arbitrary file" bugs IIRC, caused by bad crypto and logic.)

I actually found probably the first memory safety CLR bug[1]. If you loaded a function pointer of an un-jitted function, you could end up with a safe/verified pointer (delegate) to unmanaged code. So there's a true issue, where the implementation is letting you escape and do unsafe things while saying it's verified. But, there's basically no way for this to turn into an app-level security issue. The only use was to escape the .NET code-level sandboxing (CAS).

1: Well, to my knowledge -- none were published before. But neither was this one, so who knows how many there actually were.


> If data structures require unsafe operations, doesn't this make Rust's safety a 90/10 solution, and leave a small but credible threat of what are effectively buffer overflow attacks?

Somewhat glibly, every solution is a 90/10 solution (with different exact numbers, of course): there have to be some assumptions/assertions buried in the system somewhere. For instance, correctness of the runtime/built-in types in "safe" languages like Python or Java or Haskell, behaviour of the operating system when doing syscalls (or behaviour of specific machine instructions), or even bug-free-ness of a theorem prover used. Obviously different classes have different rates of errors, but it is still very useful to reduce and focus the places where incorrect assumptions can lead to "critical failures" (memory safety ones), even if one doesn't/can't sink the effort/money into reducing it to (essentially) zero. All of these systems are a tradeoff in some sense, between guaranteed correctness and things like performance, "productivity", and cost of development.

Rust's power is its ability to significantly narrow the places where memory unsafety is a risk without imposing a cost ("zero cost abstractions"), while still giving all the conventional control and power needed for systems programming. There is of course the risk of those places having bugs, but they're explicitly marked, and exhaustively testing/auditing the 100 lines of code that build a safe abstraction is easier than doing the same for the whole 1000- or 10000-line application that uses it.
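
(As a small, hypothetical illustration of what "explicitly marked" means in practice: the unsafe block below is a couple of lines hidden behind a safe function, so only those lines need auditing, while every caller stays in safe Rust. This is a sketch, not code from any particular crate.)

    // Returns every other element of xs. The bounds reasoning lives entirely
    // inside this function, so callers never need `unsafe` themselves.
    fn every_other<T: Copy>(xs: &[T]) -> Vec<T> {
        let mut out = Vec::with_capacity(xs.len() / 2 + 1);
        let mut i = 0;
        while i < xs.len() {
            // SAFETY: the loop condition guarantees i < xs.len(),
            // so skipping the bounds check here is sound.
            out.push(unsafe { *xs.get_unchecked(i) });
            i += 2;
        }
        out
    }

    fn main() {
        assert_eq!(every_other(&[1, 2, 3, 4, 5]), vec![1, 3, 5]);
    }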

> that you can DoS a web server by sending requests with just the right query parameters

The HashDoS problem is not a memory safety one. In some sense, it isn't even a bug in the implementation but rather the design. The problem arises from using a poor/predictable hash function that allows an attacker to construct many values with the same hash, even if the hash table and the hash function itself are implemented 100% to spec. It is... difficult for a programming language to protect against spec bugs, especially because what is a bug or unhelpful in one situation might be desirable in another.

(Incidentally, Rust's HashMap actually defends against this by default, using SipHash with random keys, which is why it lags behind, say, C++ in some benchmarks that use the default data structures.)
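
(A rough illustration of that tradeoff, using only the standard library: the default HashMap is built with RandomState, which keys SipHash randomly per map, while DefaultHasher::default() uses fixed keys, giving the predictable hashing that HashDoS relies on. Benchmarks that beat Rust's default map typically swap in an even faster non-keyed hasher from a third-party crate.)

    use std::collections::hash_map::{DefaultHasher, RandomState};
    use std::collections::HashMap;
    use std::hash::BuildHasherDefault;

    fn main() {
        // Default: RandomState seeds SipHash with random keys per map, so an
        // attacker cannot precompute a set of colliding keys (HashDoS defense).
        let keyed: HashMap<String, u32, RandomState> = HashMap::new();

        // Fixed-key hashing: identical inputs hash identically on every run,
        // which is exactly the predictability a HashDoS attack exploits.
        let fixed: HashMap<String, u32, BuildHasherDefault<DefaultHasher>> =
            HashMap::default();

        let _ = (keyed, fixed);
    }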


For example:

I found a trivially exploitable buffer overrun in the Source game engine. The cause? Somebody used strcpy instead of strncpy... when adding an animated ellipsis at the end of a string.

You really have to try in Rust to get pwned adding an ellipsis to a string.

(And yes, tools should (and probably would) have caught that bug. But they either weren't used, or didn't catch it, so...)


Quis custodiet ipsos custodes? ("Who watches the watchmen?")


> If data structures require unsafe operations, doesn't this make Rust's safety a 90/10 solution, and leave a small but credible threat of what are effectively buffer overflow attacks?

No worse than JavaScript, Perl, PHP, Python, Ruby, Go, ..., all of which implement their hash tables in C.

How many production server side apps do you know of that have been compromised due to memory safety problems in the core hash table data structure?


Go's implementation is in Go, plus maybe some assembly.


That doesn't make it immune to memory unsafety, because the initial implementation had to be written in something other than Go, namely C. And the current compiler was compiled by an older compiler, which was itself compiled by an older one, ..., which was itself compiled by the initial Go compiler, written in C.


Are they seriously not using an interpreter to compile the current version with itself?


"Reflections on Trusting Trust", Ken Thompson.

https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...


I really hope nobody mistakes the sexy title of this book as a reason to learn unsafe Rust. If it can be done without this advanced feature, it probably should be.


> However if you intend to write unsafe code -- or just want to dig into the guts of the language -- this book contains invaluable information.

Agreed, however I think the latter part of the intro is worthwhile to anyone who wants a deeper understanding of Rust.

I've always found that an in-depth understanding of what you use pays off in unforeseen ways.


It's true. I have a 30k+ LoC project, and it has zero unsafe code blocks.

I do have another project that exports a C API, and that requires unsafe, though it's literally only for wrapping/unwrapping pointers.
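
(For the curious, the usual shape of that pattern is a Box::into_raw / Box::from_raw pair at the C boundary, roughly like the hypothetical sketch below; the names here are made up and not taken from the project.)

    // Hypothetical opaque handle exposed to C callers.
    pub struct Handle {
        // fields omitted
    }

    // Allocate a handle and hand C an owning raw pointer to it.
    #[no_mangle]
    pub extern "C" fn handle_new() -> *mut Handle {
        Box::into_raw(Box::new(Handle {}))
    }

    // Take ownership back from C and drop the handle.
    // Safety: ptr must come from handle_new and must not be used afterwards.
    #[no_mangle]
    pub unsafe extern "C" fn handle_free(ptr: *mut Handle) {
        if ptr.is_null() {
            return;
        }
        // SAFETY: the caller upholds the contract above.
        drop(unsafe { Box::from_raw(ptr) });
    }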


Are you able to say what your project is? I'm always curious to hear about bigger projects!


Sure: https://github.com/bluejekyll/trust-dns

It's a WIP, but mostly feature complete at this point. I'm working on DNSCrypt at the moment.


Nice project. Seems a bit light on documentation? Is the intention to be a BIND work-alike, or have you taken some inspiration from other servers, like, e.g., DJB's djbdns?

[ed: just found src/config/test - looks more like a "better BIND" than "djbdns in Rust"]


Not sure I totally understand what you mean by djbdns in Rust. I have taken some of djb's ideas, and plan on incorporating more, like randomized QIDs and ports to make cache poisoning more difficult (added recently). I've implemented this from RFCs only. It's not based on any other DNS server per se.
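
(Roughly what that mitigation looks like; this is a hedged sketch rather than trust-dns's actual code, and it assumes the rand crate: randomize both the 16-bit query ID and the UDP source port, so an off-path spoofer has to guess both values.)

    use rand::Rng; // assumption: the rand crate (0.8-style API)
    use std::net::UdpSocket;

    fn open_query_socket() -> std::io::Result<(UdpSocket, u16)> {
        let mut rng = rand::thread_rng();

        // Bind to a random high port instead of a predictable one, and pick a
        // random 16-bit query ID. An off-path attacker must now guess both
        // (roughly 2^32 combinations) to forge a response that gets accepted.
        let socket = loop {
            let port: u16 = rng.gen_range(10_000..=60_000);
            match UdpSocket::bind(("0.0.0.0", port)) {
                Ok(s) => break s,
                Err(_) => continue, // port already taken, try another
            }
        };
        let query_id: u16 = rng.gen();

        Ok((socket, query_id))
    }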

My inspiration started with yet another BIND exploit, and then grew from there. Unlike djbdns, I want this to require as little manual operation as possible. After DNSCrypt I'm going to look at implementing a version of Raft on top of DNS updates/notifies, etc., to make server resiliency even better. DNSCrypt is next, mostly so that I can use it for private comms between DNS nodes.

djb's is amazing, but what I'm looking for is to build something more flexible and more feature-rich.


Re: docs

The code is well documented ;)

Operational docs will come, but I haven't really had an opportunity to focus on those yet.


The book actually contains a bunch of information on implementation details and other things that aren't unsafe, but just don't have a good place in the regular book.

Additionally, if nobody writes unsafe Rust, who's going to maintain the abstractions that the rest of us rely on? :)


I wouldn't worry too much, the text of the book goes to great and colorful lengths to dissuade people from using these features frivolously. :)


Looking at Rust code with the eyes of a Wirth and Xerox PARC language fan, I still get the feeling that Rust requires too much use of unsafe code compared with Ada, Modula-3, and similar languages.

Especially for basic data structures like trees, graphs, and doubly linked lists.


Aside from what other people have pointed out about these being possible in safe Rust (just with a cost -- a cost you pay in the other languages anyway), this is not a useful yardstick for the amount of unsafe code you need to write.

These data structures are write-once. You write them once, verify them, and use them as much as you want. The doubly linked list is in the stdlib. A tree without parent pointers is possible with zero cost in safe Rust. A tree with parent pointers IIRC exists in a crate somewhere. So you will rarely be implementing these, just using them from safe code.

While we use data structures to teach the "basics" of languages like C and C++, that doesn't mean that they are basic, or that any language where they are hard is crap. Rust makes them hard so that the rest of your code is easy to write. And since these are write-once, they have a minimal to zero impact on the amount of unsafe code in your codebase.


I would be very interested in how these two languages enable safe data structure implementations. Do they have significant tradeoffs in terms of ergonomics, control, or perf?

Keep in mind basically all of Rust's libcollections uses unsafe code for perf and low-level representation reasons. They can all be made safe using the strategies every other safe language uses (and eating the associated downsides).

Also keep in mind Rust's data structures are basically language primitives in a lot of other languages!


Modula-3 has GC. You can only make use of untraced pointers in unsafe modules.

So unless proven otherwise by the profiler, no need to make use of unsafe code.

Ada requires explicit use of unsafe code for memory deallocation, but it is not required when using RAII pools.

So unless a custom allocator is required, the standard pools can be used.

In Rust even a basic doubly linked list requires unsafe.

Then there are those who make use of unsafe to try to express code that cannot be proven safe by the type system, rather than to deal with plain memory-corruption issues.

Thankfully the core team is against this trend.


> Modula-3 has GC. You can only make use of untraced pointers in unsafe modules.

> So unless proven otherwise by the profiler, no need to make use of unsafe code.

> Ada requires explicit use of unsafe code for memory deallocation, but it is not required when using RAII pools.

So, if you want to write hash tables like this without unsafe code, you're in luck: Rust lets you do it!

(As an aside, I keep seeing these fantastical claims about Ada that, when you dig into them, come with the exact same or more restrictive semantics than Rust. Ada is a good language, but it isn't magical, and the region/lifetime/uniqueness systems in Rust actually solve real problems that Ada doesn't have a solution to.)


"As an aside, I keep seeing these fantastical claims about Ada that, when you dig into them, come with the exact same or more restrictive semantics than Rust."

I've been looking forward to seeing such a point-by-point comparison of safety features between Rust, Ada, and SPARK. I gave you or someone else involved in Rust the Safe and Secure Ada 2012 book listing those features to help. Just link to it in a reply to me if and when you publish such analyses.

Btw, even if I agree on restrictive semantics, the thing you didn't mention is the results, where Ada and especially SPARK are immune by default to so many error classes. Rust definitely has it beat on dynamic safety and probably concurrency. However, unsafe programming in SPARK is still quite a bit safer than Rust. :)


> In Rust even a basic doubly linked list requires unsafe.

Well this just isn't true. You can write a doubly linked list using Rc and Weak pointers and no unsafe code. This is the same as saying you can write a doubly linked list using GC'd pointers.
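
(A minimal sketch of that approach: strong Rc pointers in one direction, Weak pointers back, so there's no reference cycle and no unsafe block anywhere. Names are illustrative.)

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        value: i32,
        next: Option<Rc<RefCell<Node>>>,   // strong: owns the rest of the list
        prev: Option<Weak<RefCell<Node>>>, // weak: avoids an Rc cycle
    }

    fn main() {
        let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
        let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

        // Link the two nodes in both directions -- no unsafe anywhere.
        first.borrow_mut().next = Some(Rc::clone(&second));
        second.borrow_mut().prev = Some(Rc::downgrade(&first));

        let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
        assert_eq!(back.borrow().value, 1);
    }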


I didn't see this comment when replying above, but you can write a doubly-linked list in safe Rust with either of these approaches (GC, or stuffing everything into a single giant reference). The only reason a "basic" doubly-linked list in Rust is unsafe is that those aren't really reasonable options, and Rust wants to do better. I don't think it would improve Rust to give it mandatory GC (Go and Java are both great languages if you want that), or to require that all allocations are unsafe unless you use memory pools.


How do Ada and Modula-3 let you implement doubly-linked lists without unsafe code? I'd guess they don't enforce the constraint Rust does that mutable references must be unique, but my experience with both C and C++ is that you want this rule anyway for your code to have any chance of being correct.

In particular, removing a node from a doubly-linked list has three steps: repoint prev->next at next, repoint next->prev at prev, deallocate. If you can write this in safe code, it's equally possible to write a procedure that fixes prev->next and deallocates, but leaves next->prev dangling. What do Ada and Modula-3 do to prevent this?

(If you're willing to sacrifice efficiency, you can always use reference-counting or actual GC. Or you can just stuff each node in a dynamically-allocated vector, and have the prev and next pointers be indexes into that vector, instead; you'll have semantically-dangling numbers, but you won't have memory unsafety. Both of these approaches can be done in safe Rust with the normal standard library.)
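
(A sketch of the index-into-a-vector variant mentioned in the parenthetical above: "pointers" are just usize indices into one Vec, so a stale index can give a wrong answer but never memory unsafety, since every access is bounds-checked.)

    // Doubly-linked list backed by a Vec; prev/next are indices, not pointers.
    struct Node {
        value: i32,
        prev: Option<usize>,
        next: Option<usize>,
    }

    struct List {
        nodes: Vec<Node>,
        head: Option<usize>,
    }

    impl List {
        fn push_front(&mut self, value: i32) {
            let idx = self.nodes.len();
            self.nodes.push(Node { value, prev: None, next: self.head });
            if let Some(old_head) = self.head {
                self.nodes[old_head].prev = Some(idx); // indexing is bounds-checked
            }
            self.head = Some(idx);
        }
    }

    fn main() {
        let mut list = List { nodes: Vec::new(), head: None };
        list.push_front(2);
        list.push_front(1);
        assert_eq!(list.nodes[list.head.unwrap()].value, 1);
    }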


If I'm not mistaken, Modula-3 is garbage-collected, so doubly-linked lists are implemented with GCed pointers like in Go.


I can't remember if it bounds-checks the pointers or just arrays. However, SPARK lets you prove conformance of code to specs and absence of specific error conditions. Here's a linked list example:

http://www.spark-2014.org/entries/detail/spark-2014-rational...

There's also formal verification work on linked lists. If any have simple VCs (verification conditions), they could be encoded in containers like the one above as well. Tools exist to do similar stuff for C, too.


I'm a Lovecraft fan, never really touched Rust, and a name like "Rustonomicon" could really get me into it, hehe.



