One thing I've realized with Rust is that its guarantees are a moving target. Today we can guarantee (barring compiler bugs) memory safety in safe Rust, but not unsafe Rust.
But that story is improving. For one thing, we have people working to build safe abstractions for more unsafe use cases, at zero cost. We also have people improving fuzzing, and in theory safe Rust code grows at a much faster rate than unsafe Rust code, so fuzzing the unsafe portion is far more tractable. We have people working on proving more about Rust code, even when unsafe is around.
I'm quite excited to see how far Rust is able to go; I don't believe the state today is the end at all.
While some patterns that call for unsafe today might be eliminated in whole or in part, code that violates the ownership and borrowing rules will have to remain unsafe. I think fuzzing is not guaranteed to reach all unsafe code paths, nor to provide full test coverage. Ideally, you want a) some sort of formal proof that the unsafe code cannot violate memory safety, b) 100% branch coverage for unsafe code, or c) both (because profilers and proofs can be wrong too :)
Does it really require formal semantics for unsafe Rust, though? I'm not familiar enough with Rust to give an example, but imagine there's the unsafe Rust level and beneath it the "machine code" level (not actually machine code, just the equivalent level of abstraction). You should be able to hand-write what the Rust code is doing, without requiring the compiler to construct the machine-level operations.
With a formal semantics, the proof checker just checks that the written proof (at the Rust level) matches the Rust code, but that requires, as you said, an understanding of how unsafe Rust interacts with the proof. With a proof written at the machine level, you don't need to understand the Rust semantics; you just need to translate the Rust to the machine level and then check the proof there.
Perhaps the machine level could be some layer in LLVM? I'm only a little familiar with compilers, and hardly at all with more complicated compiler theory, but this seems reasonable to me.
> With a proof written at the machine level, you don't need to understand the Rust semantics; you just need to translate the Rust to the machine level and then check the proof there.
This approach would only be able to verify a particular compilation result as safe. If you want to verify that it will always be safe, you need to be comparing against the behavior of future compilers, which requires some kind of contract about their behavior. “Formal semantics” is the technical term for that contract.
The problem is that the unsafe-safe boundary needs to be specified. Unsafe code often relies on invariants ensured by safe Rust's checks. There are also properties that unsafe code has to satisfy which are special to Rust, for example involving rules around special traits like `Drop`. So even if the unsafe code itself could be proven free of memory corruption, if it does not satisfy these invariants, it could lead to wrong behaviour in other parts of the program.
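As a tiny sketch of that kind of reliance (the type and method names here are made up), consider unchecked indexing whose soundness hinges on an invariant that only the safe methods maintain:

pub struct TinyVec {
    buf: [u32; 8],
    len: usize, // invariant: len <= 8, upheld by every safe method
}

impl TinyVec {
    pub fn new() -> Self {
        TinyVec { buf: [0; 8], len: 0 }
    }

    pub fn push(&mut self, x: u32) {
        assert!(self.len < 8, "TinyVec is full"); // maintains the invariant
        self.buf[self.len] = x;
        self.len += 1;
    }

    pub fn last(&self) -> Option<&u32> {
        if self.len == 0 {
            None
        } else {
            // SAFETY: in bounds because every safe method keeps len <= 8.
            // If any *safe* code could set len to 9, this would be UB,
            // even though the bug would live outside the unsafe block.
            Some(unsafe { self.buf.get_unchecked(self.len - 1) })
        }
    }
}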
Unsafe Rust can be (and has been) formally verified to satisfy its Rust type, meaning calling it from safe code can't violate memory safety. We don't need to trust manual inspection.
IMHO, this is going a bit too far. Some parts of unsafe Rust have had a model produced that can check some of the invariants required for safe Rust. For example, in my understanding, traits were not modeled at all. Still very promising work, but don't want to overstate it either!
I never said all unsafe Rust code was verified, but we certainly have verified the parts required for stuff like Arc and RwLock beyond reasonable doubt.
You mean, if they were used with dynamic trait objects or unsized types or something? I suppose theoretically there could be some issue, since they aren't modeled yet, but in that case it would almost certainly be a (much more worrisome) issue with the safe part of Rust itself, not the specific unsafe code involved in those types. I don't think any proposal for how to model trait objects, for example, would change the model of how semantic types are defined, it would involve adding new semantic types and new theorems about them.
Cool, then yes, we’re on the same page here, I am just extremely cautious when talking about this. It is true that that stuff would indicate a larger problem, but it’s not like we haven’t discovered unexpected larger problems in the past.
> Today we can guarantee (barring compiler bugs) memory safety in safe Rust, but not unsafe Rust.
Not quite. An unsafe function can export its lack of safety to other code. If an unsafe function can be called with parameters which make it violate memory safety, it opens a hole in memory safety.
I assume OP's point is that unsoundness or UB in an unsafe block is not contained in that block, but can taint safe code anywhere in the program. Which is true.
Yes, it's true, but that wasn't what Animats said. And even if that was Animats' point, it doesn't invalidate the GP. The key value proposition of Rust isn't that "you'll never have memory safety bugs," but rather, that if your unsafe code is correct and sound, then safe Rust will be free of memory safety bugs. The important bit is the implication that grants one the power to make that sort of reasoning. It's important precisely because it limits the places in your code that need to be audited.
This is indeed a very powerful property. However, I have a question: is it a regular practice to ensure that code marked unsafe is actually safe for whichever parameters it receives? Or could the safety of some unsafe code depend on the way its safe wrapper is called?
If you have an unsafe function, that is, a function that is unsafe to call, then it is common practice to document the precise preconditions that the caller must uphold in order to ensure safety. This may indeed include passing a correct parameter. For example, the preconditions of the slice method `get_unchecked` require the caller to verify that the index provided is in bounds; otherwise the behavior is UB.
If you have a safe function that uses unsafe internally, then all possible invocations of that function should be safe. If this isn't true, then we call those sorts of APIs unsound and they are strongly discouraged. David Tolnay wrote a great blog post about it: https://docs.rs/dtolnay/0.0.7/dtolnay/macro._03__soundness_b...
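To make both conventions concrete, here's a minimal sketch (the function names are made up; `get_unchecked` is the real slice method):

/// Returns the first and last elements of `v`.
///
/// # Safety
///
/// The caller must ensure that `v` is non-empty; otherwise the
/// unchecked accesses below are UB.
unsafe fn ends_unchecked(v: &[i32]) -> (i32, i32) {
    (*v.get_unchecked(0), *v.get_unchecked(v.len() - 1))
}

/// Safe wrapper: it establishes the precondition itself, so every
/// possible invocation is safe. If it skipped the emptiness check,
/// it would be unsound, even though it compiles fine.
fn ends(v: &[i32]) -> Option<(i32, i32)> {
    if v.is_empty() {
        None
    } else {
        // SAFETY: we just checked that `v` is non-empty.
        Some(unsafe { ends_unchecked(v) })
    }
}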
The rest of your program blindly believes that the interface of your "trusted set" is safe, because you took ownership of that responsibility when you marked it unsafe.
If you have a memory safety bug in that interface, then you can taint the rest of the program's memory safety as well, correct?
This may hang or even crash (malloc in the child) but there's no sense in which the unsafe block is "incorrect." Some unsafe code has unavoidable implications for the entire program.
Indeed, some uses of unsafe aren't meaningfully factorable. File backed memory maps or shared memory with other processes are other important real world examples that are difficult or impossible to meaningfully factor in the context of a single process. But I still think that saying "safety in Rust is factorable" is accurate beyond a mere first approximation. Figuring out how to encapsulate unsafety is, in my experience, the essential creative aspect to using unsafe at all. Because if safety wasn't something you could encapsulate, then Rust really wouldn't be what it is. (I have spent many long hours thinking about how and to what extent the safety of file backed memory maps could be encapsulated. Which is to say, factoring safety is really the ultimate goal, even if certain things remain beyond that goal.)
I’m unsure what you’re responding to. As I understood it, the comment took issue with saying that you can make guarantees about safe Rust but not unsafe Rust because unsafe Rust is not verified in that way and opens the door to violating guarantees in your “safe” code if your bugs leak out.
The function containing the unsafe block can be called from safe code. So unsafe functionality can be exported. Said unsafe functionality might not be safe for all possible inputs. That's a classic hole, as with APIs that can be exploited.
Then that's a "function containing an unsafe block," not an "unsafe function." In any case, this is still a misunderstanding of what staticassertion said. I elaborated more here: https://news.ycombinator.com/item?id=24028359
My bigger point here is that you're misunderstanding the advocated Rust value proposition. The value proposition isn't literally "Rust will forever and always eliminate all memory safety bugs in safe Rust." That's silly and no serious person with any credibility would double down on that claim.
Yes but it's easy to take this too far and conclude that e.g. Javascript is not memory safe because browsers are written in C++ and they have to interface with the kernel which is written in C. At some point you simply need to trust that the current implementation is correct and bug free. This is also a problem with formal verification. What verifies the verification?
Yes, of course you can. But that's not what Animats said. Animats specifically said "unsafe function," which means you need to use the unsafe annotation to call it. See my other reply for more elaboration: https://news.ycombinator.com/item?id=24028359
Yep. For example, it's easy to create an intentional memory leak in Rust using safe APIs, which is unsafe. However, it's also easy to find such a memory leak. Rust cannot stop you from doing weird things when you want them, but it can help you prevent, or quickly find, weird things when you don't want them in your code.
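For instance, here's a rough sketch using only safe, stable std APIs (`Box::leak` and `std::mem::forget`); no `unsafe` block appears anywhere:

fn main() {
    // Heap-allocate a value, then leak it: the destructor never
    // runs and the memory is never freed, all in safe code.
    let leaked: &'static mut String = Box::leak(Box::new(String::from("never freed")));
    println!("{}", leaked);

    // Likewise, mem::forget discards a value without running its
    // destructor, again without any unsafe.
    std::mem::forget(String::from("also never freed"));
}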
> However, it's also easy to find such memory leak.
How? By manually inspecting all code in the project + dependencies marked with "unsafe"? The approach doesn't scale past "hello world" level of complexity.
I don't code Rust; I use C# for the same purpose. I remember a couple of times I spent hours debugging weird crashes caused by stupid bugs in totally unrelated unsafe C# code.
It's much easier to find memory leaks or native memory corruption in C++ than in the unsafe subset of a memory-safe language. C++ has lots of runtime support for that (especially in debug builds), and many great tools, both in the compilers and external ones. Unsafe C# has none of them.
Simpler stuff like the debug C heap, and checked iterators in the C++ standard library, catches 95% of memory corruption issues in C++. These are enabled by default in debug builds; very often just pressing F5 in Visual Studio finds the buggy line of code in a matter of seconds.
Valgrind is Linux-only, I don't have it. One Windows equivalent is the memory profiler under Debug / Performance Profiler / Memory Usage in Visual Studio; Rust is not supported by Visual Studio. A cross-platform equivalent is Intel VTune Profiler, with no support for Rust either.
I mean, if you’re talking about checked iterators... that’s checked in Rust too.
I am 100% Windows, but don’t triage these issues often enough to suggest tools; the unsafe code I write tends to be small and pretty obvious. (More like “write to this register” than “here’s a complex data structure.”)
In safe Rust. Safe C# is the same; it doesn't even have raw pointers in the language unless you compile with /unsafe and write code in unsafe blocks.
> More like “write to this register” than “here’s a complex data structure.”
I sometimes write C++ DLLs precisely to implement these complex data structures. Modern CPUs have a progressively worse ratio of RAM latency to compute speed. This means you need full control over the memory layout of data on the performance-critical paths of code, or it will be slow.
They’re checked in unsafe too. You only skip those checks if you use a specific “do this index with no checks” function. Unsafe does not change the semantics of code, only gives you more options.
Yes, and many people write that kind of code in Rust too, and use tools to help them debug it. I’m just saying that it’s not an issue for the kinds of code I write, so I can’t personally recommend tooling. I know "use GDB" isn't a great response to a Windows user, even if it is what I end up personally doing.
(It's true that I can't get the performance tools working, though, but given I've used VS for all of 20 minutes... I'm also very interested to see if support happens natively, given how much interest there is in Rust inside of Microsoft right now.)
However you put it, tooling support is way, way better for C and C++ for obvious reasons.
Other critical bits for commercial development in many industries (certs, standards, third-party support, official bindings...) are also completely lacking.
That is not something against Rust, it is just what happens until things get popular enough.
> Bounds checks are more effective than the stack cookies a C compiler might insert because they still apply when indexing linear data structures, an operation that's easier to get right with Rust's iterator APIs.
Not only that, bounds checks always work, while stack cookies are possible to bypass either by luck or by information disclosure.
The key thing with bounds checks is to hoist them out of inner loops. If you don't have that optimization, people will turn them off because of the performance impact. Except in inner loops, the performance penalty isn't usually that bad.
> The key thing with bounds checks is to hoist them out of inner loops.
The compiler can't always hoist the check on its own, because program behavior might depend on the bounds check occurring in the loop. But you can write an assert!() outside the loop to hoist it explicitly, and verify that the bounds checks are optimized away - or use unsafe unchecked access when they aren't.
How do you write the assert? I've not heard of that before.
Oh, you mean `assert!(array.len() > 100); for i in 0..100 { array[i]; }`
I don't think that guarantees that bounds checks will be hoisted. It's just a strong hint. I mean, in this case it will almost certainly work, but in more complex cases it might not, and the compiler is still free to emit bounds checks without telling you.
It would be nice if there was an explicit way of forcing an error if the bounds check was not hoisted. Similar story for lots of other optimisations - autovectorisation, tail call optimization, etc.
Some game developer made a good point that sometimes fast is correct, i.e. it's actually a bug if autovectorisation or whatever doesn't happen, so you really need a way to guarantee it.
Iterators indeed don't force the removal of bounds checks; there simply are no bounds checks to remove in the first place. That's because iterators actually do use `unsafe` internally. They are a safe abstraction.
// This is unidiomatic. Don't do this.
for i in 0..100 {
    // Oops! Array indexing. Here's a nasty bounds check!
    array[i] * 2;
}

// Very unidiomatic. Never do this! Not even as an
// optimization. You should use safe abstractions
// instead.
for i in 0..100 {
    // Oh, NOOOO! This isn't supposed to be C!
    unsafe { array.get_unchecked(i) * 2 };
}

// This is still unidiomatic unless you want to use
// it to explicitly show your code has side effects.
// However, we're completely rid of the bounds check and
// yet it's perfectly safe.
// The only way this could be unsafe is if RustBelt's
// formal proofs, reviews, audits,
// and (probably) MLOC of thoroughly tested and
// possibly fuzzed code using this in practice
// have all somehow missed an unsoundness bug
// in one of the most used safe abstractions in the
// core library for years.
for a in &array {
    // Look, Ma! No indexing, therefore no bounds check!
    // Internally `get_unchecked` is (conceptually) used,
    // but it's perfectly safe!
    a * 2;
}

// This is idiomatic Rust. Again, there's no bounds check
// to start with, since the safe abstraction knows
// exactly how many elements are in the array and
// the borrow checker ensures nobody can
// possibly invalidate its indexing.
// Ditto what was said above: the safe abstraction
// is pretty much guaranteed to be sound.
let doubled: Vec<_> = array.iter().map(|a| a * 2).collect();
Reminds me of an interesting thing I noticed: a few times the compiler has produced a tighter inner loop simply because I added an `assert!` on the array length before the loop. I guess the compiler considers the specific error message that will be printed to be a side effect, so the assert moves that check to before the loop instead of at some point during the loop.
Right. It's mostly a thing in number-crunching code working on matrices and in some string operations. It's important to handle small inner loops well; other than that, it's seldom a big time issue.
Speculative execution mitigations throw in another performance wrench. Might as well drop the need for them altogether, which is what happens with a lot of the iterator constructs we have in modern languages.
It's hard to know if that optimization is active, and nearly impossible to ensure that it stays active. So for this to work, it would need to be enforced, e.g. the same way that the compiler enforces that a let binding is assigned before use.
Top of my wish list is better support for debug-checked indexing. It would be extremely useful if the unsafe `get_unchecked` had bounds checking in debug mode.
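Something like this sketch is what I mean (the helper name is made up; `debug_assert!` compiles to nothing in release builds):

/// Hypothetical debug-checked variant of `get_unchecked`:
/// bounds-checked in debug builds, unchecked in release builds.
///
/// # Safety
///
/// The caller must guarantee that `i < slice.len()`.
unsafe fn get_debug_checked<T>(slice: &[T], i: usize) -> &T {
    debug_assert!(i < slice.len(), "index {} out of bounds", i);
    slice.get_unchecked(i)
}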
Or prove the impossibility of bounds-check failing and then disable them. The perfect use-case for silver-level SPARK. It would make sense to require some proof effort when you want to disable runtime checks...
The idiomatic thing to do in rust is use iterators, and bounds checks are elided when you do. You can't always use iterators of course, at which point llvm is there to optimize what it can.
The diagram arrows for »Only valid references«, »No dangling pointers«, and »No data races« point only to the heap memory, but they should also point to the stack memory. One reason is that a function can lend stack pointers to the functions it calls, at which point those functions can't tell whether they are stack or heap pointers. Valid references are relevant in the same way. Other reasons are that the borrow checker prevents data races on stack data too, and that it disallows returning a stack pointer from a function, which would be a dangling pointer.
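As a minimal sketch of that last point, the borrow checker rejects returning a reference into the current stack frame at compile time:

fn dangling<'a>() -> &'a i32 {
    let x = 42; // `x` lives in this function's stack frame
    &x // error[E0515]: cannot return reference to local variable `x`
}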
The pervasive mention of "C/C++", as if C and C++ were the same language with the same failure modes, soured the whole thing for me, for reasons:
In modern C++, C bugbears are just not a problem that demands much attention; I had one (1) memory mistake in five years, caught in initial testing.
There are still plenty of bugs, of course, but they are overwhelmingly specification bugs: code is doing what was asked for, but what was asked for was wrong.
Babysitting the borrow checker steals attention from preventing those actually very common problems. In effect, the borrow checker consumes all the time that would otherwise have been spent tracking down and fixing the bugs it prevents, and all the time spent adapting to the awkward interfaces it forces on code that would otherwise have more or less worked.
Attention is, by far, the scarcest resource every programmer manages. Anything that burns attention without adequate return is actively harmful. Coding at a higher level, using powerful, safe libraries trusted not to cost too much, is how C++ programmers get the safety that Rust coders grind out on the borrow checker. C++ has many, many features, not in Rust, meant specifically to help capture powerful semantics in libraries that raise the level of coders' attention.
The large and still growing suite of such features, and the powerful libraries written using them, account for the almost exclusive use of C++ in all the highest-paid, most demanding applications in fintech, medtech, HPC, CAE, telecom, and aerospace. Rust will never be able to call those libraries.
Still, Rust is the only extant language plausibly gunning for C++'s role. It starts with the advantage of leaving behind many of C++'s worst backward-compatibility boat anchors, but its focus on low-level safety features detracts attention from the high-level coding support that makes them decreasingly relevant. Rust is adding features and users at a rapid rate, but C++ picks up, easily, many more new users in each week than the total headcount of Rust programmers working in that week, and will continue. To be remembered in ten years, Rust will need to become useful to many, many more programmers than it is now winning over (HN buzz notwithstanding).
Gunning for C++ is a losing strategy. Planning to coexist with C++ will have better results. Rust is an overwhelmingly better language, on every axis, than C, Go, Java, C#, ObjC, Visual Basic, Delphi, and COBOL, all being coded today mostly by people who will and often should never use C++. Every conversion from those is a big net gain for the world. Rust will pick up few C coders, despite that this would produce the greatest benefit to society, just because almost all who might jump already did. But the rest are wide open.
Promoting memory safety is not the way to win those, either because they have it (at enormous cost) or don't value it. To win those coders, Rust needs to take memory safety as given, and promote fun, performance, and a future.
Java won big in 1995 by offering Microsoft sharecroppers a way out. Rust could be the way out for a new generation, if only it can raise its sights from C's too familiar failings.
I'm getting an increasingly bad taste from Rust. In spite of the hype, 9 out of 10 Rust programs I download end up with a panic in my first 10 minutes of using them.
Javascript is a memory safe language, yet there is plenty of broken code written in it.
Memory safety is a critical step forward, but it's wasted if it's paired with an ecosystem that thinks memory safety means software doesn't need to be correct, or is so fixated on reimplementing things for "memory safety" that understanding the problem domain and actually making things that work falls by the wayside.
Interestingly, I have had the polar opposite experience. When using apps written in scripting languages like Python, I almost expect them to crash at some point because I've seen it happen so often. On the other hand, Rust applications, much like Go, Haskell or C++ applications, in my experience, generally do what they are told without any hiccups.
Care to name some of the "9 out of 10" programs that crashed for you?
> 9 out of 10 rust programs I download end up with a panic in my first 10 minutes of using them.
Do you have some examples?
Firefox, ripgrep and fd are the only programs I use that I know are written in Rust (I know only a small part of Firefox is in Rust), and they work fine for me.
Note that the use of the word "panic" in Rust can be a bit misleading. It is not equivalent to a segfault in C/C++ but rather to a deliberately uncaught exception, killing the application.
It's that kind of thinking that likely contributes to the low software quality.
The software failed and did so in a user uninformative way.
Except in things like network-facing software or whatnot (where a segfault might also be an RCE), a panic is not superior to a segfault. A panic can even still be a vulnerability: DoS attacks are attacks too.
That the failure isn't one that could have resulted in an RCE is only a small consolation in cases where the software doesn't have remote exposure.
Not to discredit you, but you postulate things like "9 out of 10 times" and generally "low software quality" but don't support your statements. It certainly is not my experience that this is (more, or at all) common in the Rust ecosystem.
If an application author uses things like unwrap or expect (two of the most common ways to abort the program with a panic), that is indeed lazy software engineering. But these are also among the first things called out when an inexperienced Rust user asks for feedback on their program in the forum, so they should be quite uncommon in any program which is more than a toy or prototype.
Again, not to say that you did not experience this, but just pointing out that it is definitely neither idiomatic nor usual practice.
Edit: There is indeed one point where the Rust community could do better to further reduce the number of panicking operations: Some time ago, it was not easy to use the error propagation operator "?" in documentation examples, so some documentation still shows examples which make use of expect or unwrap, possibly guiding inexperienced users towards a bad style. Those will hopefully vanish more or less completely in the near future.
For production code, I agree with you. When writing PoC software, using those can speed up development, rather than forcing you to figure out the best way to deal with those options or errors upfront. Refactoring in Rust is also reliable enough that you can go back later, fix all of that, and end up with production-quality code.
On the "?" in examples: that can also lead to annoyance for a new-to-Rust dev; getting error return types correct, or knowing to leverage crates like anyhow, can be complex at first. Unwrap and expect are great ways to get started. I also make heavy use of them in test code.
Oh yeah absolutely, for prototypes you can liberally use unwrap/expect to figure out the happy path first and remove those calls later.
I haven't checked out the Rust book in quite some time, but it would be cool to make newcomers comfortable with the Result<..., Box<dyn Error>> return type early on, so they know what they have to do to make code containing "?" compile.
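Something along these lines early in the book would help (a minimal sketch; the file name is made up):

use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    // "?" propagates the error instead of panicking via unwrap().
    let config = std::fs::read_to_string("config.toml")?;
    println!("{}", config);
    Ok(())
}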
I do have a list of 10. I don't really feel comfortable posting it because I don't want to shame people, and also because it will guarantee a bunch of no-true-Scotsman responses: "X, Y, and Z were student projects, and Q was just a demo, and R only panics because cargo built a debug build by default and there was an integer overflow, and M was only because I didn't provide all the required command-line arguments...".
All those things are true, but my experience, in practice, is that the overwhelming majority of Rust things I have attempted to use are broken out of the gate in a way that software written in C/C++ usually is not. I'm glad to hear that not everyone has had that experience (I did kind of wonder if my comment was going to trigger one of those things where everyone has had the same experience but no one talks about it until someone breaks the ice), but I have also heard other people express experiences similar to mine. The experience is jarring, particularly because of the hyped context that Rust is usually discussed in.
> still shows examples which make use of expect or unwrap,
right, or older code which is full of unwrap from top to bottom. :)
>in a way that software written in C/C++ is usually not
This is the part that seems odd, more than your experience with Rust. How could you possibly generalize based on an unspecified sampling method across "software written in C/C++"? It's like referring to the typical quality of scientific papers written in English.
There's also just not that much Rust software out there, and 80% of everything is crap. If you compare the best, say, C tool to do something against the best Rust tool, the odds that the Rust tool will be lower-quality are higher - the Rust tool is probably the only Rust tool written to do that, while the C tool is probably the best C tool that's survived over the years.
As someone who has worked with a lot of Rust libraries and applications, your estimate on quality is completely made-up hyperbole with the clear intention of maligning Rust.
I have experienced far more success in using Rust and its library ecosystem than I ever have in other languages.
Yes, there are bugs in some things, and yes, there are others that don't follow best practices, but throwing out a number like "80%" completely dismisses how solid and capable so many pieces of Rust software are.
As somebody who has written a lot of cruddy, unmaintained Rust library bindings because I needed them for a project, I'm speaking, in no small part, about my own work here. Sorry if that wasn't clear.
90% of everything is crap (C or Rust) because there are a whole lot more low-effort libraries and tools than high-quality, maintained, documented ones. There's less Rust code, so the low-effort tools are more visible.
Rust is my favorite programming language, and I use it for everything where a REPL isn't critical (there Python still wins) unless other factors prevent the use of Rust, like the need to work with others or run on weird embedded architectures. I've been writing Rust since before 1.0, was at the 1.0 launch party in SF, and even contributed the (hilariously small, but) std::iter::once API.
So- my intention was "clearly" to malign Rust? Well, actually, it was to defend it. Again, sorry if that wasn't clear, but ...assume good faith next time! Please!
A panic is superior to a segfault, because panics can (and should — if they don't, then yes, that's lazy) include a programmer-specified error message. A segfault just says "oops" (and hopefully a stack trace in both cases).
I mean, yes, if a program can recover, it should, but in the case of “error and quit”, panic is an improvement over segfault.
> The software failed and did so in a user uninformative way.
How is this different from any other crash in terms of user information? Whether it segfaults or panics is just as uninformative.
Ignoring vulnerabilities here: a SEGV catches memory bugs, but it can only catch those that result in accessing an invalid region of memory. There are plenty of memory bugs that do not result in a SEGV and instead just corrupt the program state, possibly causing all kinds of damage. E.g. if `rm` had a memory bug with the paths it read, it could be catastrophic. Safe Rust prevents all those memory bugs, instead causing a panic. I don't know about you, but I'd much rather have a program panic than continue running in an invalid/corrupted state.
> Safe Rust prevents all those memory bugs, instead causing a panic.
Rust is even stronger than that: in the vast majority of cases, panics are not related to memory bugs, as those are caught at compile time in Rust (panics are usually the quick and dirty way to deal with invalid file names and such things, as Rust refuses to accept those silently).
In C the program implicitly assumes there won't be an invalid file name or such thing. If the assumption turns out to be wrong, something unexpected (which may or may not be a memory bug) happens.
In Rust the program explicitly expects there to be an invalid file name or such thing and explicitly panics. This is both safer and more deterministic than C. Additionally, it's easy to grep for things like `panic!`, `unwrap`, `expect`, `unimplemented!`, `todo!`, etc. in your own code or your dependencies' code (assuming you have access to the source). Panicking code also tends to stand out like a sore thumb during code review.
Finally, Rust's approach slightly decreases the odds that someone will be sloppy in the first place. Since you have to explicitly acknowledge the issue and handle it (even if simply by saying `.unwrap()`), there's a chance you'll decide to just handle it properly even though you wouldn't have bothered in C. This is especially the case if you could've handled it properly by shortening `unwrap()` to `?`.
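A rough sketch of how small that gap is (the function names are made up):

use std::num::ParseIntError;

// Acknowledges the failure case, but panics on it.
fn parse_port_or_panic(s: &str) -> u16 {
    s.parse().unwrap()
}

// Proper handling is barely longer: swap `.unwrap()` for `?`
// and let the caller decide what to do with the error.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    Ok(s.parse()?)
}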
Rust software is of consistently higher quality than most C/C++ software that I use, both in terms of reliability and in terms of UX. I don’t know where you’re getting your claims from.
Are you sure this isn't just confirmation bias? When a program panics, it's easy to conclude that it must be a Rust program, but if it doesn't, aren't you simply forgetting to count all the working Rust programs?
> Javascript is a memory safe language, yet there is plenty of broken code written in it.
Rust has a lot of safety features Javascript doesn't, like the lack of null, data race protection... and types. But the biggest difference is that Rust gets you memory safety AND C/C++ performance.