> our verification work resulted in uncovering and fixing a bug in Rust’s standard library, demonstrating that our model of Rust is realistic enough to be useful.
One thing about formal verification, and to a lesser extent automated testing, is that you often end up spending the majority of the time adapting the program or language to work with verification rather than actually solving a real-world problem or fixing a real bug.
His third comment fits this category well. The problem he describes is that the code is designed in a way that's ill-suited for formal verification. But he never mentions any other benefits or justification for changing how it works.
So I'm not sure it's necessarily a 'bug' in a traditional sense that supports the use of formal verification, except in some self-serving way.
That said, I'm very much in support of formal verification in these types of languages.
> But he never mentions any other benefits or justification for changing how it works.
I read "it lets me write a program that has a data race", accompanied by a program that has a data race, as saying that the bug is that a language feature meant to provide mutual exclusion does not provide mutual exclusion. In a language one of whose main selling points is "you always get mutual exclusion". How is this not a bug?
The benefit of fixing this bug is that the guarantees Rust makes to programmers are actually guaranteed.
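To make that concrete, here's a minimal sketch of the class of bug being discussed; `Guard` is a hypothetical stand-in for `MutexGuard`, and the actual fix is in the issue linked at the bottom of the thread:

```rust
use std::cell::Cell;

// Toy guard type standing in for MutexGuard (names are hypothetical).
// It hands out &T, so it may only be shared across threads (Sync)
// when T itself is Sync.
struct Guard<'a, T: 'a> {
    data: &'a T,
}

// The corrected bound: without `T: Sync` here, Guard<Cell<i32>> could be
// shared across threads, and Cell permits mutation through a shared
// reference, so two threads could write it at once.
unsafe impl<'a, T: Sync> Sync for Guard<'a, T> {}

fn main() {
    let cell = Cell::new(0);
    let guard = Guard { data: &cell };
    guard.data.set(1); // fine single-threaded; Guard<Cell<i32>> is !Sync
    println!("{}", guard.data.get());
}
```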
I was specifically referring to the 3rd comment on the Github issues page, not the first comment, which mentions a race condition that the eventual commit addressed.
> His third comment fits this category well.
And even then I was only talking about the language he used, i.e. the justifications and rationale he proposed, which centered entirely on the code being difficult to verify formally, rather than mentioning any real or hypothetical 'bug', security flaw, or architectural downside.
Fair enough about the particular Github comment, but even there I think you are wrong.
The architectural downside is being unable to understand a program you have written: "However, if the type does not have an explicit impl for Send/Sync, I don't know how to even figure this out -- I would have to chase all types (including safe ones, and across all abstractions) of all fields recursively and then check when they are Send/Sync... that's way too error-prone."
This doesn't just talk about his particular proof tool being unable to understand this. It's about any Rust developer being unable to easily understand the types used by their programs, using only standard library constructs. (If anything, this should be easier for tools than for humans.)
We regard "this program is hard to test" as a problem that's worth refactoring to fix; why shouldn't we regard "this program is hard to verify" the same way?
This is a good question. I'm inclined to say that the value proposition for testing is better than for verification. First of all, if your unit is hard to test, it is likely to cause other issues, but probably more importantly, testing finds a lot more bugs per unit effort than formal verification. That last point is based exclusively on my intuition and it could well be wrong--it would be good to hear from people with more experience with both testing and formal verification.
Also, now that I think about it, the value proposition isn't fixed for all applications. Testing is the better value proposition if you need to catch the easiest 95% of your bugs (arbitrary number for example purposes), but if you need to catch 99.999% of your bugs, the testing effort will likely be much greater than the formal verification effort. That said, open source projects like Rust get tons of testing for free by virtue of being very, very popular, so I'm not sure how to weigh one against the other.
But "this language is hard to verify" should not trump the usability of the language. The verification only needs to be done once, while the language will be used to write code again and again. If it doesn't negatively affect the language users then sure, change it to make verification easier.
You're not verifying a language, you're verifying a program (or some part of one). Sure- that part in this instance may be implementing a language standard library, but who cares? That program also has tests, docs, types and all the other things people use to convince themselves of correctness.
It really is like unit testing. There too you could make the argument that in some cases untestable code might be worth it, if the alternative is an API that's hard to use. And there too an API may well be used many orders of magnitude more frequently than it is changed. I think you have to decide that on a case-by-case basis (at least, I can't think of any convincing general argument there).
It depends on the cost of a defect. A language used for aircraft control software is going to have much stricter requirements for static analysis than a language used to create a website with cute pictures of kittens — even if it happens to be the very same language.
That's sort of been the trend in the industry for the past 50-some-odd years: towards design for verification. First re-entrant code, then removing globals entirely, then splitting your dependencies out for testing, and recently there's been a bigger push towards formal verification.
people often mention forcing refactoring/rewrites as a benefit of unit testing, that you end up with code that is actually better in testing-unrelated ways. this is true in my experience.
i wonder how true this is of formal verification? especially in a language like Rust that has a fairly fancy type system?
From Adrian Colyer's review, we find the title might be misleading depending on what the ownership model entails for a given developer's use of the language:
“We had to make some concessions in our modelling: we do not model…
1. more relaxed forms of atomic accesses, which Rust uses for efficiency in libraries like Arc
2. Rust’s trait objects, which can pose safety issues due to their interactions with lifetimes
3. stack unwinding when a panic occurs, which can cause similar issues to exception safety in C++
4. automatic destruction, “which has already caused problems for which the Rust community still does not have a modular solution.”
5. a few details unrelated to ownership, such as type-polymorphic functions and “unsized” types.”
This also shows why it's good to have a semantics covering the whole language from the beginning, as with ML or later SPARK Ada. It forces language developers to spot and address these corner cases early on. The good news is that, like with SPARK, one might just subset their use of Rust to exclusively what has been proven, with no use of anything else. Is anything on the list common enough in most Rust development that it can't be avoided and must be verified later? I've obviously seen many people mention traits. Arc and automatic destruction safety not being proven is a bit ironic, given they're presumably there to boost safety versus raw pointers and manual destruction.
EDIT: Thanks for quick, clarifying replies about these. It's looking like a subsetting approach can work.
You can avoid 1 if you do no multithreading, or if you accept a slower implementation of Arc.
Embedded code in Rust almost always avoids 1, 2, 3, and 4. Arc is used for freeing dynamically allocated memory when all references, including across threads, are dropped, but if you're running without dynamic memory allocation, you don't need that. Trait objects also require allocation. Embedded targets also generally don't implement stack unwinding, and stack unwinding can be disabled on other targets as well; it's really only most useful in larger applications in which you want some threads to be able to continue running even if another thread panics. Automatic destruction can be fairly easily avoided if you're running without an allocator, as it is mostly used for freeing such allocated memory, as well as a few things like file handles and the like.
The things covered in 5 might be harder to avoid, but also more likely to be amenable to being covered by further versions of the RustBelt work.
So yes, using a subset of Rust without these features is quite feasible, and often done when working in embedded (aka no_std) code.
I don't know SPARK Ada that well, but I believe it has similar constraints of no dynamic allocation. If you can do without dynamic allocation in Rust, many of the problematic features wouldn't be used.
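To make the subsetting concrete, here is a minimal sketch; the function is invented for illustration, and the `#![no_std]` attribute is what enforces the subset:

```rust
#![no_std]

// A no_std library crate links only against `core`: no heap, hence no
// Arc, no Box-backed trait objects, and no allocator-driven destructors,
// i.e. the subset described above.
pub fn checked_average(xs: &[u32]) -> Option<u32> {
    if xs.is_empty() {
        return None;
    }
    let mut sum: u64 = 0;
    for &x in xs {
        sum += u64::from(x);
    }
    Some((sum / xs.len() as u64) as u32)
}
```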
I should also add that while you can do a lot of this subsetting, I wouldn't be surprised if some of these issues were already covered by the time the rest of Rust has been formally modelled and verified enough to be a competitor to Spark.
That is, what RustBelt analyzes right now is a language which has an analogue to Rust's borrow checker, but is much much simpler overall; and they ported Rust's implementation of several types of shared mutable or owned references over to this language to verify them.
There is a lot of Rust outside of these few caveats that hasn't been covered; these are just the things which could be in scope of the RustBelt work that haven't been covered by it.
Here are a few things that there's ongoing work on that you would want to finish before saying you were confident in using a formally verified language:
1. The memory model. This gives a formal definition of what kinds of aliasing assumptions the compiler is able to make about raw pointers in `unsafe` code. This is being worked on here: https://github.com/nikomatsakis/rust-memory-model
2. The borrow checker. While the RustBelt work shows that a much simpler borrow checker can be sound, Rust's borrow checker has to deal with a much more complicated language, and there are a few known soundness bugs in it (discussed elsewhere in this thread). You'd want that to be formally modeled and verified.
3. The type system. Rust has a fairly full-featured type system, so you want to ensure that's sound.
4. The compiler itself. Even once you have sound code in a sound language, you need to make sure that the compiler follows the rules and compiles it correctly.
5. Any libraries or code that you are using which might be using `unsafe`. The RustBelt work so far has done so for a few core primitives, as well as a couple of third-party libraries, but of course you'd want to re-do this work against the full language (or a real-world subset, not LambdaRust that this paper used) and include any other libraries that you might need to use.
So, I'd say that there's a good chance that the RustBelt work could be extended to include one or more of these listed features before all of the rest of the above has been done. It will be a while before you could have a usable, verified subset of Rust, but once you do I think it could be a good alternative to something like SPARK, or development in C following extremely stringent rules and compiling with CompCert.
Sorry, you are right about that. You can have trait objects without an allocator.
I'm not sure how common a pattern that is. Many people like to avoid trait objects and just parameterize the function on a type bounded by the trait in that case. And impl Trait is replacing some of the other cases in which you had to return an allocated trait object.
Anyhow, you're right, I was wrong that people couldn't be using trait objects in an embedded context with no allocator, but I also think it's perfectly reasonable to write in a subset of Rust in which you don't use trait objects at all.
For clarification, "trait objects" are a subset of the trait system, and have much less use in rust programs than traits overall. A trait object is a dynamically dispatched pointer (often owned through Box or Arc), as opposed to the usually statically dispatched generic usage of traits that's more common in rust.
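A quick illustration of that distinction (the functions are invented for the example):

```rust
use std::fmt::Display;

// Trait object: one function body, dispatched through a vtable at
// runtime; the values live behind a pointer (Box, &, Arc, ...).
fn print_all(items: &[Box<dyn Display>]) {
    for item in items {
        println!("{}", item);
    }
}

// Generic bound: the common, statically dispatched use of the same
// trait; the compiler monomorphizes a copy per concrete type.
fn print_one<T: Display>(item: T) {
    println!("{}", item);
}

fn main() {
    let items: Vec<Box<dyn Display>> = vec![Box::new(1), Box::new("two")];
    print_all(&items);
    print_one(3.5);
}
```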
Destructors aren't guaranteed to run in Rust, they aren't necessary for memory safety. It's a fair concern to note that they aren't modeled, because they are pervasive and can impact memory safety, but they don't exist to support it.
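Concretely (a toy example): safe code can suppress a destructor, which is why Rust's safety guarantees can't depend on `Drop` ever running.

```rust
struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropped!");
    }
}

fn main() {
    let a = Noisy;
    drop(a); // runs the destructor, prints "dropped!"

    let b = Noisy;
    std::mem::forget(b); // a safe function; the destructor never runs
}
```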
Panics not being modeled is of course also a big gap, although you could always compile your code with panic=abort if you wanted to dodge that issue!
I think I can understand and appreciate many different aspects of Computer science, but programming language theory truly is a bit too arcane and formal for me. I still try to follow it somewhat though (like when the Morning Paper had its nice writeup about RustBelt) and I enjoy learning Rust (which exposes you to the theory underlying its design more than other languages). Maybe it's good that the results of formal methods now find more practical application, so it's easier to get people (who aren't hardcore logicians) interested in the whole topic.
I find Rust's RFC/design discussions fascinating. They're a mix of serious Programming Language Theory and practice. The same feature in Rust is seen by some as "algebraic data types" and "oh cool, I can add data to enums" by others.
Some feature proposals starting with "wouldn't it be nice to have…" end up designed with input from people who literally have a PhD in this feature (and are bikesheded until people without PhDs can use the feature).
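As a toy example of the "same feature, two views" point (mine, not from any RFC):

```rust
// One feature, two vocabularies: "an enum whose variants carry data"
// to some, "a sum type / algebraic data type" to others.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect { width: 2.0, height: 3.0 }));
}
```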
I felt a similar way for a long time. Software Foundations by Pierce (and a few other similar books) worked for me personally, by illuminating verification of PL through a programming-like environment. Scripting proof automation was enlightening is what I'm trying to say. If you haven't given it a shot and have the time, check it out.
What the paper demonstrates is that if you have a correct borrow checker, and you have the ability to create types using `unsafe` which can extend the semantics of the language to cover things that the borrow checker cannot cover, you can write proofs that demonstrate that the language is still sound given the addition of those types.
This helps demonstrate that one of the fundamental design goals of Rust (having a simple but limited system of borrowing built into the language, and allowing more complex forms of safely managing reference types to be built on top of it as library features using `unsafe`) is a fundamentally sound design. In other words, you don't have to treat the language and all libraries using `unsafe` as one large system whose soundness you need to prove; you can do modular proofs of the core language, and of each library that adds a new type of memory and reference management, independently.
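As an illustration of that modular design, here is a toy single-threaded cell in the spirit of std's `Cell` (a sketch, much less careful than the real thing): the `unsafe` is contained in one small module whose API can be verified independently.

```rust
use std::cell::UnsafeCell;

// A library type whose implementation uses `unsafe`, but whose public
// API keeps clients entirely in safe Rust. RustBelt-style proofs verify
// each such type on its own, not the whole ecosystem at once.
pub struct MyCell<T> {
    value: UnsafeCell<T>, // the primitive for interior mutability
}

impl<T: Copy> MyCell<T> {
    pub fn new(value: T) -> Self {
        MyCell { value: UnsafeCell::new(value) }
    }

    pub fn get(&self) -> T {
        // Sound because MyCell is !Sync (inherited from UnsafeCell), so
        // no other thread can write concurrently, and we never hand out
        // a reference into the interior.
        unsafe { *self.value.get() }
    }

    pub fn set(&self, value: T) {
        unsafe { *self.value.get() = value }
    }
}

fn main() {
    let c = MyCell::new(1);
    c.set(2);
    println!("{}", c.get()); // prints 2; the caller never writes `unsafe`
}
```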
Those bugs you reference are simply bugs in Rust's borrow checker. The language used for the RustBelt paper, LambdaRust, is a language that is much simpler than Rust itself, making the borrow checking much easier but the language not as convenient to use (it is not intended to be used at all, just to act as a model of Rust).
The things you reference are simply bugs, but because of the fairly expressive syntax that Rust offers, actually getting the borrow checker right can be difficult in some cases.
I don't think that there are any theoretical concerns about the ability to write a correct borrow checker, just practical issues with the current implementation not correctly handling certain cases.
> I don't think that there are any theoretical concerns about the ability to write a correct borrow checker, just practical issues with the current implementation not correctly handling certain cases.
Is it really that trivial? When looking through https://github.com/rust-lang/rust/blob/master/src/librustc_b..., it is not at all clear to me that these rules cover everything. There are some complex side conditions; isn't it easy to miss something there?
There is a large amount of room between "trivial" and "serious concerns that it will be impossible without fundamentally altering the language." All I was saying is that there aren't such serious concerns about the borrow checker; there are some known holes, but most people think it's just a matter of doing the work to fix those holes (which could be a lot of work, and involve significant refactoring), not that there needs to be an entire change in the fundamental way the borrow checker works, that borrow checking itself is theoretically unsound, or that some features of the language are fundamentally incompatible with sound borrow checking.
The RustBelt work addressed one of the big open questions: is it possible to treat `unsafe` code in a modular fashion, or does all analysis of unsafe code have to consider all possible interactions with all other modules which use `unsafe` code as well?
That was in many ways quite a big question about the design of Rust: can you prove or verify each module which uses `unsafe` independently?
The borrow checker, on the other hand, is an entirely local analysis. It can get quite complex, especially as you allow for more fine-grained borrow checking to make it convenient to use compound objects, and introduce non-lexical lifetimes which help reduce the number of restrictions on what you can do, but since it's entirely local, it's a much more tractable problem.
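For instance, non-lexical lifetimes accept patterns like this, which the older, purely lexical checker rejected:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];
    println!("{}", first); // under NLL, the borrow of `v` ends here...
    v.push(4);             // ...so this later mutation is accepted
    println!("{:?}", v);
}
```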
I think that it would be good to eventually formalize the full Rust borrow checker, and of course it's always good to fix these soundness bugs, but there are some more important open questions right now like fully specifying Rust's memory model (https://github.com/nikomatsakis/rust-memory-model) so it's possible to know what kinds of aliasing you can actually do in `unsafe` code, and which will be safe even under future versions of the compiler.
The RustBelt work so far is just a nice foundation getting Rust to be more formally analyzed and specified. There's a lot of work left to be done.
No-one is implying that writing a borrow checker is trivial. Proving it correct will take some work, though as your own link indicates the borrow checker was designed with formal reasoning in mind from the beginning.
Every post in this thread slightly criticizing Rust is greyed. This is an unfortunate part of being an overhyped language: no one is allowed to emit discordant opinions.
Alternative take: upvoted comments are about the article content, while greyed out ones are people criticizing the programming language in a way that is tangentially related at best.
adding security features to a language itself seems a bit silly. it just adds complexity and bloat to the code (which has been and still is mostly unverified..) which gives people a sense of security that's most likely false (as people imagine hyped things to be bigger than they actually are). I'd say a good coder could write more secure and sound code in C, though someone who isn't conscious of the nature of the language would write 'safer' code in rust or so.
In the end it's better to learn to do things without these training wheels >.> all this hand holding is just making people weak like anything else in life.
There's nothing false about the sense of security. In fact, after I started using Rust, my C/C++ has become more disciplined as well. The idea that memory safety is "training wheels" is silly.
By the way, this is a project funded by the European Union. It got 2 Million EUR by the European Research Council (https://cordis.europa.eu/project/rcn/200802_en.html). It is only one of the many projects which give back a lot to the open source community. Thank you, EU!
I'm not quite as enthusiastic about their choice of funding target. There are plenty of other worthy languages and tools out there that could use that money better in my mind. It's the C++/Java thing all over again, but stepped up several notches; to the point where it's creeping me out since I can't even mention C online without being attacked by the Rust Brigade; and they just don't give up, it's like one of those shitty zombie movies where you have squirting body fluids and jaws chopping at you from every angle. I don't think they need any more money; more like therapy, chill pills and rooms with soft walls/locked doors for a while.
This funding was for verification work, which was applied to Rust. Rust is a relatively new language, and hasn't been the focus of much research yet. C and C++, in contrast, have already been the subject of tons of research and tooling. For example, Wikipedia lists 28 static analysis tools for C, and none for Rust.
Is there a particular project for a particular language you think is promising and just needs some funding? Or does it just irk you when other people are happy?
EDIT: I'm also not sure why you assume that none of this EU funding is going to your language(s) of choice; have you checked?
To my knowledge, the ERC funds fundamental science. In my personal area (theoretical physics), people tend to move from Fortran to C++ nowadays. As a result, readability mostly decreases (without linear algebra libraries), or the compiler no longer helps with compile-time bounds checks (so people use run-time bounds checks with assertions instead). Not even to mention the years of wasted time people spend finding bugs in their parallel codes. And all codes with the "HPC" badge are parallel nowadays. I think supporting a modern ecosystem is a good decision if people move away from C++ towards Rust or Julia (another ERC... http://opendreamkit.org/).
(Note: Please don't feed the trolls. Nobody has time for language rants)
How is questioning the choice of project to fund feeding trolls? I was under the impression that diversity was mostly considered a good thing, that has to go for perspectives as well. I just don't get why Rust desperately needs a lot more money than any other tool that's equally useful, they were already pretty well funded by other organizations.
This is one research grant on Rust among tons of grants on other projects. And it funds research. When somebody has an equally valid research project on C, rest assured it'll be funded as well, as has already happened (EDIT) with research on abstract interpretation, separation logic and so on.
Right, question something and you just don't want others to be happy. Good luck with that. Every C compiler and major C library out there to begin with, given that it's the most important and fundamental language in the real world right now. C++ already pulled this trick; it led nowhere but in circles.
This is a great place to engage in intellectually stimulating conversations. I, among many others, would love to read your objective thoughts about the fundraising choice.
Instead you've decided to facetiously label rustaceans as psychotic and mentally ill because they like sharing an upcoming technology.
I don't know what reaction you wanted but it won't be a positive one here.
Every C compiler and major C library out there to begin with, given that it's the most important and fundamental language in the real world right now. Did you get it this time?
Have you been living under a rock lately, or are you just pulling the oldest trick in the psychopath book; condemning others for reacting to abuse? Sharing and attacking others for not agreeing is not the same thing, I hope we agree so far.
C is fundamental and important now, but are you really not sick of it? Don't you wish we had a language that made it easier and more intuitive to program? Because many of us do. I'm personally rather tired of the boilerplate and legacy issues that come up when writing C -- Seriously, some of them being as simple as writing identical function signatures in multiple files.
It works but we live in a time where we can move on. We don't have to be discontent with an old technology. We have the momentum and community to, who knows, maybe even totally replace C one day.
People are already making games and servers out of Rust. I think that's huge and definitely worthy of further funding and research of all sorts; People are using this technology whether you like it or not!
> Did you get it this time?
Anyway, I'm sorry to hear you felt abused for having a differing opinion on a forum somewhere. I understand how emotionally compromising it must be to have your opinions challenged! Hopefully people will treat you with great respect in the future and use C forever.
No, that's my point; there are thousands of very experienced and dedicated coders out there who are not sick of it; because we've been around long enough to not blame our tools, and we have enough experience to appreciate its unique strengths. I'm all for simpler and more intuitive, but Rust is the opposite of that; it's rigid and complex if anything. Why does everything have to be a competition? Why can't several technologies be allowed to coexist? This is not about having your opinions challenged, you need to work on your reading comprehension; this is about not attacking others for expressing different preferences from your own.
I would alternatively suggest you improve your conveyance of thoughts. It's clear now you have a rationalized and objective view of the situation, but it took us three or four replies to understand your underlying reasoning.
When you start a comment chain by needlessly bashing the users of technology X, and don't really support your opinion of them with anything factual, what are you hoping we read there? It feels like you want us to read between the lines and assume you have a better reason to bash them; That's mindreading, and unfortunately something none of us can do.
I think many Rust advocates, myself included, tend to foresee an eventual obsolescence of C because of their theoretically overlapping purpose. We see Rust providing certain development luxuries without the usual heavy performance decrease accompanying them. While we can't say for sure, we think a majority of people will prefer having those luxuries in the future.
The most compelling thought for me is: In a world where Rust had more complete libraries, and most developers knew Rust instead of C, is there an incentive to learn and use C for new projects?
It's a complete matter of opinion but I think it's the reason most rustaceans "hound" on C devs. I don't think there would be such an incentive. I find them equally complex in their own ways, but also find more benefits with Rust versus C.
This raises an interesting question of whether Rust would make sense as a code generation target for other languages or runtimes. Could you conceivably compile and run untrusted Rust (with no unsafe blocks and restricted imports) the same way web browsers run untrusted JavaScript? It could be the only GC-less IR that you can run with a reasonable defense against memory corruption.
I'm not sure I understand your idea; how can code generation benefit from memory-safe abstractions of target languages? The whole point of code generation, or rather compilation, is to fit one representation into something else. It doesn't matter what it is, but it has to satisfy its constraints no matter what, even if you have to use a giant global array to emulate pointers via indexes. In other words, the goal is to work around any constraints of the target language to fit yours, not to design your language to fit the constraints of the target.
Safety guarantees can only work if you carefully choose and rely on appropriate abstractions for your problem.
Raising logic bugs from "this is maybe occasionally an annoyance, or cool piece of trivia" to "this is a security issue" seems like a bad idea in something as complex as Rust's compiler.
Especially when there are many currently known bugs of this sort.
Verification of unsafe languages such as unsafe Rust (here), or C, PHP or assembly (in other research) tries to show if some particular program is correct or not — by exhibiting bugs, proving safety, or both. Tons of approaches exist.
But verification can be easier for higher-level languages.
You've lost most of the semantics by the time you're at LLVM bitcode, since the high-level abstractions have been lowered into a basic form.
Rust's type system provides memory and race safety, but those details are lost at a lower level. It's very hard to analyze data races at the assembly level!
> Rust is a new systems programming language that promises to overcome the seemingly fundamental tradeoff between high-level safety guarantees and low-level control over resource management. Unfortunately, none of Rust’s safety claims have been formally proven, and there is good reason to question whether they actually hold.
Aw, c'mon!?
Rust is Sudoku for super-smarties.
- - - -
I'm going to "double-down" on that: Rust-lang is a toy, a beautiful bauble for minds addicted to a certain kind of complexity. Rust isn't about producing production code, Rust is about producing Rust. Scratching that itch.
My point is that I feel like the time and energy spent on Rust won't ever be justified in terms of productive bug-free software, when compared against some other simpler language (Hopgood's Noether comes to mind) or approach.
I work for a company with a massive rust codebase. Rust is very much about building production code.
What is 'good production code' ?
* Few errors
* Readable, well documented
* Testable, has tests, has testing tools like quickcheck, fuzzing, etc
* Meets performance constraints
Rust hits those better than any language I've used. The downside is, oh gosh, you'll have to actually learn a programming language that isn't just another variation of the ones you learned in school.
I have recently performed a relatively simple development using programming languages with which I had low-to-no experience: Perl (low), Ruby (none), Rust (none) and Go (none). Note that I am quite adaptable on the programming language front and that this small experiment was precisely meant to showcase these adaptability skills. Rust was, by far, the most difficult-to-learn, difficult-to-research, counter-intuitive, unfriendly, constrained, unappealing, etc. of all of them. Warnings and errors appeared systematically and, despite their verbosity, were rarely helpful.
The most ironic part is that so many restrictions and problems are likely to provoke people to rely on whatever option happens to work, which might not be the best/safest one. Being so concerned about making sure that the generated code is extremely safe no matter what, by sacrificing flexibility and user friendliness, is far from ideal. Restrictions and prohibitions always have to be seen as an in-the-worst-case-scenario resource, not as a primary solution; much less when dealing with something as complex as programming, a very powerful tool supposed to be managed by knowledgeable individuals. The higher the freedom, the better the results delivered by a sensible/knowledgeable person. Unless Rust changes a lot, I don't see it going anywhere. It might get some support from theoretical/academic/inside-whatever-bubble circles, but I seriously doubt that developers with real-world experience can like or even accept most of what this language represents.
> The most ironic part is that so many restrictions and problems are likely to provoke people to rely on whatever option happens to work, which might not be the best/safest one.
Programs written in Rust are empirically safer than programs written in C or C++.
> The higher the freedom, the better the results delivered by a sensible/knowledgeable person.
If that were true, then C and C++ code would be safer and more secure than programs written in memory-safe languages. Obviously, that's not the case.
> seriously doubt that developers with real-world experience can like or even accept most of what this language represents.
Lots of developers with real-world experience, including me, use Rust all the time without any more problems than they would have in any other language.
> > The most ironic part is that so many restrictions and problems are likely to provoke people to rely on whatever option happens to work, which might not be the best/safest one
> Programs written in Rust are empirically safer than programs written in C or C++.
But Rust defines safety as being safe from the kinds of errors that the Rust compiler is capable of making one safe from (buffer overflows, race conditions, undefined behaviour, etc).
If one is distracted by trying to satisfy the borrow checker, they might overlook a logical error. People have limited concentration, patience and perseverance and usually have to work to deadlines.
Rust is great for writing high-performance code, or code that needs to be parallelised within a single process. However if performance is not a concern then a statically typed language with a GC is usually a better choice.
> If one is distracted by trying to satisfy the borrow checker
That distraction goes away with experience. Once you "get" how to structure ownership in the program in a borrow-checker friendly way, it stops being a problem.
Similarly, if you are used to dynamically typed languages, you may find static typing a distraction that tests your patience. But OTOH if you think in types and use design patterns for them, they become a tool to improve code reliability, not an obstacle.
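A small example of the kind of restructuring that becomes second nature (my own toy illustration):

```rust
fn main() {
    let mut scores = vec![10, 20, 30];

    // A shape the borrow checker rejects: keeping a reference into the
    // Vec alive across a push (pushing may reallocate and invalidate it):
    //
    //     let max: &i32 = scores.iter().max().unwrap();
    //     scores.push(1);
    //     println!("{}", max); // error: cannot borrow `scores` as mutable
    //
    // The friendly shape: copy the value out first, so no borrow of
    // `scores` is live at the point of mutation.
    let max = scores.iter().copied().max().unwrap();
    scores.push(max + 1);
    println!("{:?}", scores);
}
```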
In this case, safer == memory safety (which is not a Rust-specific thing), not general forms of correctness, and the comparison in the original comment is to C and C++, not statically typed languages with a GC.
Unless it has been edited, the original comment doesn't say memory safety, it says "so many restrictions and problems are likely to provoke people to rely on whatever option happens to work, which might not be the best/safest one". The reply to that comment then interprets safety as memory safety and compares to C and C++. This is kind of my point.
Rust defines safety as memory safety, but unfortunately memory safety is only the tip of the iceberg when it comes to safety. Logic errors are just as pernicious.
Programmers have a natural bias towards solving problems with technical solutions, but for me, Rust's memory safety is achieved at the cost of a high productivity penalty and a moderate readability penalty. Both of these reduce my ability to understand and manually verify the code.
Whereas languages that achieve memory safety via a GC, allow me to focus my mental energy on writing correct code, not on memory management.
> Whereas languages that achieve memory safety via a GC, allow me to focus my mental energy on writing correct code, not on memory management.
Ownership type systems are supposed to rule out synchronization errors (say, data races), which is something you may want regardless of how memory is managed. Say, the research on ownership types that came out of MIT in early '00s worked with a subset of Java (see OOPSLA'02 "Ownership Types for Safe Programming: Preventing Data Races and Deadlocks").
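A minimal example of that guarantee in today's Rust (just the analogous compile-time check, not the OOPSLA system):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The shared counter must be wrapped in a Sync type before the
    // compiler lets it cross thread boundaries; swapping Mutex<i32>
    // for Cell<i32> turns this into a compile error instead of a race.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // always prints 4
}
```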
> If one is distracted by trying to satisfy the borrow checker, they might overlook a logical error.
I see it the other way around. I let the compiler deal with the borrow and type checkers and to tell me what I missed, while I focus on the logic of the code.
Some high-performance code is CPU bound. To approach the advertised FLOPS figure on any modern CPU, you must manually write SIMD code (typically SSE, AVX, and/or NEON). SIMD is only in Rust nightly, and is very limited. Without SIMD, you'll only use a small fraction of the CPU's computational power.
The only programming languages great for high-performance code are C and Fortran. C++ is also OK because of its C compatibility.
You might be thinking of the `simd` crate, in which case, you're right. But we are getting closer and closer to exposing a large set of vendor intrinsics: https://github.com/rust-lang-nursery/stdsimd --- You can use it today on Rust nightly. This includes runtime CPU feature detection!
ripgrep is an example of a program that is written in Rust, is CPU bound and uses explicit SIMD in various places to speed up text search. (It hasn't moved to stdsimd yet and is still using the `simd` crate and the old platform intrinsics system exposed by rustc. But ripgrep will move to stdsimd soonish.)
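For a flavor of how the runtime detection combines with `target_feature`, here's a sketch against the interface as it later stabilized into `std::arch` (x86-64 only; a scalar body stands in for real intrinsics):

```rust
// Dispatch to an AVX2-compiled function only when the running CPU
// supports it, falling back to scalar code otherwise.
#[cfg(target_arch = "x86_64")]
fn sum(xs: &[f32]) -> f32 {
    if is_x86_feature_detected!("avx2") {
        // Safe to call only because we just verified AVX2 support.
        unsafe { sum_avx2(xs) }
    } else {
        xs.iter().sum()
    }
}

// The attribute lets the compiler use AVX2 instructions in this body;
// explicit intrinsics from std::arch::x86_64 could be used instead.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

#[cfg(target_arch = "x86_64")]
fn main() {
    println!("{}", sum(&[1.0, 2.0, 3.0, 4.0]));
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```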
Finally, someone in the Rust community realized we don't want higher-level abstractions, we just want whatever the hell our hardware can do. If that thing goes stable, I will consider Rust for a next project (BTW, doing CAD/CAM stuff for Windows).
Feature detection is no big deal, it’s just a couple of lines in C.
But I'm curious how well that thing is gonna work in practice. In C++ we have low-level control over memory layout, allowing us to place these SIMD vector values in STL containers with correct alignment. We have function pointers with the vectorcall calling convention that enable implementing CPU-dependent routines very efficiently. The compiler inlines a lot of that SIMD code. Maybe you know how much of that applies to Rust + stdsimd?
> Finally, someone in the Rust community realized we don't want higher-level abstractions, we just want whatever the hell our hardware can do. If that thing goes stable, I will consider Rust for a next project (BTW, doing CAD/CAM stuff for Windows).
That's great, but please consider that this is an unfair characterization. stdsimd has been in the works since 2016, and many people have been involved. It's not like nobody "realized" we needed to expose vendor intrinsics before then, but it just hadn't been worked on. It's a ton of work.
> Feature detection is no big deal, it’s just a couple of lines in C.
It's only a few lines in Rust as well. But this is spoken like someone who uses it instead of someone who was involved in making it available. :-) It's not just about querying the CPU, but about enabling functions to compile with specific target features, and also reasoning through Rust's safety story there, which is non-trivial.
> But I'm curious how well that thing is gonna work in practice. In C++ we have low-level control over memory layout, allowing us to place these SIMD vector values in STL containers with correct alignment. We have function pointers with the vectorcall calling convention that enable implementing CPU-dependent routines very efficiently. The compiler inlines a lot of that SIMD code. Maybe you know how much of that applies to Rust + stdsimd?
I don't see any reason why any of that would be a problem with Rust. Inlining is just as important to Rust as it is to C++, and so is control over memory layout. You should try it out and give feedback. :-) Now would be the time!
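For what it's worth, here's a sketch of the layout-control side in Rust (the `Vec4` type is invented for illustration):

```rust
// repr attributes give C-like control over layout and alignment, e.g.
// forcing 16-byte alignment suitable for aligned SSE loads; such types
// can be stored in a Vec just like in an STL container.
#[repr(C, align(16))]
#[derive(Clone, Copy, Debug)]
struct Vec4 {
    data: [f32; 4],
}

#[inline] // inlining hints work much as they do in C++
fn add(a: Vec4, b: Vec4) -> Vec4 {
    let mut out = [0.0f32; 4];
    for i in 0..4 {
        out[i] = a.data[i] + b.data[i];
    }
    Vec4 { data: out }
}

fn main() {
    assert_eq!(std::mem::align_of::<Vec4>(), 16);
    let v = vec![Vec4 { data: [1.0; 4] }; 8]; // heap storage keeps the alignment
    println!("{:?}", add(v[0], v[1]).data);
}
```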
One response I received to that argument was “not many projects need to multiply a lot of single-precision floating point matrices”.
Later additions (SSE2, SSE 4.1, SSSE3) add support for double-precision floating point and 8/16/32/64-bit integers. Some of them come in unsigned and saturating versions (the latter clip to min/max on overflow).
In C++, I use these for performance-critical functions doing integer computations, bit manipulation, image processing (saturated integer math is especially useful for the latter). While it adds some complexity to the code esp. when it benefits from SSE 4.1 or SSSE3 instructions (I must implement SSE2-only workarounds for older CPUs), the performance improvement IMO justifies that.
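For comparison, the same clipping semantics are available in scalar Rust as well:

```rust
fn main() {
    let a: u8 = 200;
    let b: u8 = 100;
    // Saturating math clamps at the type's bounds instead of wrapping,
    // which is what you usually want for pixel arithmetic.
    assert_eq!(a.saturating_add(b), 255);
    assert_eq!(a.wrapping_add(b), 44); // contrast: modular wraparound

    let c: i8 = -100;
    assert_eq!(c.saturating_sub(50), -128);
    println!("saturating math ok");
}
```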
> Rust is great for writing high-performance code, or code that needs to be parallelised within a single process. However if performance is not a concern then a statically typed language with a GC is usually a better choice.
Are there Rust programmers who disagree with that?
I use Rust for all sorts of things that don't need to be high performance, because I know Rust very well. But I don't think it's the ideal choice for most of those tasks--although I will say that I wish more languages had (for example) the same approach to namespacing that Rust takes, or as nice of a package manager, those are things that could coexist with a very different language. I imagine most Rust programmers would agree...
The youth of the language should be a reasonable predictor of that difference, alone.
It may indeed turn out that the way Rust's learning curve is front-loaded will prevent it from breaking into the mainstream, but that's a separate issue from developers deciding it's unsuitable for work after they learn it.
With the exception of swift - which has the distinct advantage of being one of two languages you can write programs in for the most popular (? probably true since Android's market share is scattered across many models) piece of consumer hardware in the world - I can't think of a language of similar age to or younger than rust with as many job ads.
You see plenty for C/C++ because they've existed forever, have a huge amount of code written in them, and a huge number of programmers who already know them. Just like with COBOL, you will almost certainly see plenty for C/C++ many years into the future. That doesn't really say much about Rust.
You have successfully uncovered that Rust is a fundamentally different programming language, instead of a variation of the same. I feel that, having learned Rust, any new language similar to Rust would have a much lower learning curve. It is quite similar to the jump required from an imperative to a functional language. We may be missing a good name for the paradigm that Rust falls into.
Achieving Rust's safety in C took separation logic, with the best tool probably being VCC from Microsoft. I encourage anyone who thinks the borrow checker is unnecessarily hard, or that alternative languages would be easy, to try doing an existing Rust library or app in VCC with separation logic. Also, if applicable, ensure it passes sound static analysis to be free of any defects that Rust's design would stop in that library but C doesn't.
Then you'll appreciate the work languages like Cyclone and then Rust save you. There are other models published and in development that might make the next one a bit easier. Yet separation logic is the proper comparison to the borrow checker, and most programmers didn't try to learn it.
Until we have as much code written in Rust as is currently implemented in C, experience hasn't shown anything. The same claims and promises have been made for C++ in relation to vanilla C for ages, and they're just not true. An experienced C coder (which arguably takes longer to become) will perform as well as a C++ coder of equal skill level. All they do is trade complexity and convenience for simplicity and flexibility; it's a matter of preference, not life and death.
This working group literally just proved that a significant subset of Rust is safer than any comparable subset of C++. This isn't a matter of opinion. What more proof do you actually want than a formal proof? I seriously wonder what people think the whole point of this paper was.
The main difference that I've seen between C and Rust compiler errors is that, with C, my reaction is "goddamnit, you know what I mean, why can't you just accept it?!", whereas with Rust my reaction is "oh yes, I see how that may cause problems".
I'm sorry you found Rust hard, but it really is a world apart from Perl. I started learning programming using assembly (Motorola 68k), and then went a different professional route (via Java and Javascript).
Rust to me is back to basics. There is a stack, and there is a heap. Many languages mask this from you, but notably C/C++ don't, and although it's a notch up in complexity from Perl, the control you get over these aspects lets you write code that is equivalent to C/C++ in performance. Something Perl/Go can do for you in some scenarios, but mostly not.
Granted, not everyone needs/wants to understand or work closer to the system.
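A tiny illustration of that explicitness:

```rust
fn main() {
    let on_stack = [0u8; 64];          // fixed-size array lives on the stack
    let on_heap = Box::new([1u8; 64]); // Box puts the same data on the heap
    let mut buf: Vec<u8> = Vec::with_capacity(64); // Vec's buffer is heap-allocated

    buf.push(on_stack[0] + on_heap[0]);
    println!("{} byte(s) buffered", buf.len());
}
```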
And not everyone who needs/wants that is going to prefer Rust's complexity and rigidness to the simplicity and flexibility of C, or the even more complex yet more flexible C++. It's preferences, priorities; not black or white; no one has any right to force anyone to do anything. As long as humans are in the loop we're going to have bugs, what kinds of bugs will depend on priorities.
For some languages, that distinction doesn't seem to apply. For example, it seems that the D language can be treated as both a scripting language and a system language, as the speed of compilation + execution can be comparable to that of scripting languages. For evidence of this, take a look at the first D versus Python comparison in this article, note that both scripts have comparable execution times (both finished running in under a second):
There are effectively two Ds: one that's garbage collected and one that's manually memory managed and unsafe. I don't think you can really consider them equivalent: there's always a large difference in development friction when you move from a managed environment to an unmanaged one.
"Rust has a second language hiding out inside of it, unsafe Rust, which does not enforce these memory safety guarantees. Unsafe Rust works just like regular Rust does, but it gives you extra superpowers not available in safe Rust code."
By the way, I don't say this as a criticism, I admire what the Rust team are doing, and will be glad to check out the language again in the future.
You can, but there is a big difference: betterC is a subset of D, whereas unsafe Rust is a superset of safe Rust. That is, when you use unsafe, you don't lose any language features. When you drop into betterC, you do lose a bunch of stuff from D.
You don't have to restrict yourself to betterC to use manual memory management. BetterC is mainly targeted at embedded programming and at porting components of existing C applications. Malloc vs. GC vs. stdx-allocator is a separate topic, though betterC doesn't link with the GC by default.
You lose built-in arrays and hashes and need to use a container library instead. The same goes for new-ing aggregates, which can be replaced with smart pointers.
Escaping closures (delegates) no longer works, but you can explicitly capture context in a struct.
Those choices for builtin language features date back to the early inspiration from Java, but are increasingly being replaced with library implementations.
We added escape checking to the type system last year, and are currently working on safe aliasing (preventing use-after-free).
So yes, at the moment, when opting for @nogc you're somewhat at the frontier, and it's mostly a choice for when you have time, hard requirements, or are very practised.
Avoiding most (but not all) GC allocations is already widely adopted in D's community though.
Only for some code is it not apples to apples. For most code, GC is not a big deal and the languages just offer different tradeoffs. Perl, for example, has XS for performance-critical code, which is essentially C or C++. But the ecosystem is big enough that you don't necessarily need to write any C code yourself.
And all of this is not even relevant to the usability issues he tries to discuss.
> our verification work resulted in uncovering and fixing a bug in Rust’s standard library, demonstrating that our model of Rust is realistic enough to be useful.
The bug was https://github.com/rust-lang/rust/issues/41622.