Linux kernel drivers in Rust might become an option in the future (lwn.net)
372 points by jobstijl on Aug 29, 2019 | hide | past | favorite | 254 comments



We're working on it: https://github.com/fishinabarrel/linux-kernel-module-rust

Check out the demo in PR #122, which lets you create three boolean sysctls and a character device that prints the state of those sysctls in JSON (using serde).

We gave a talk about it last week at Linux Security Summit, I'll submit it once the recording is up :) Slides are at https://ldpreload.com/p/kernel-modules-in-rust-lssna2019.pdf .


For those who, like me, were wondering why a framework is needed to write out-of-tree Linux kernel modules in Rust:

It auto-generates Rust bindings and links Rust crates into the kernel build system.


Half that, and half to create safe interfaces that feel like native Rust ("ergonomic") and not like you're writing C in Rust.

For instance, the kernel wants you to define a character device by passing a struct with a bunch of C function pointers for how to read, write, seek, ioctl, etc. on the device. Rust has a trait / interface system that's a good match for this use case, so in src/chrdev.rs we define a Rust trait with all these methods, and we have a helper function to create the C struct with FFI-safe function pointers that call the various methods on the Rust trait.
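As a rough illustration of that pattern (the names below are invented for the sketch, not the actual src/chrdev.rs API), a trait-backed dispatch table might look like this, with the "kernel" side simulated by calling through the C function pointer in userspace:

```rust
use std::os::raw::{c_int, c_void};

// Hypothetical stand-in for the kernel's table of C function pointers
// (the real struct file_operations has many more fields).
#[repr(C)]
struct CFileOperations {
    read: unsafe extern "C" fn(ctx: *mut c_void, buf: *mut u8, len: usize) -> c_int,
}

// The Rust-side trait a module author implements instead of raw pointers.
trait FileOps {
    fn read(&self, buf: &mut [u8]) -> i32;
}

// FFI-safe shim: recovers the implementation from the context pointer
// and forwards to the safe Rust method.
unsafe extern "C" fn read_shim<T: FileOps>(ctx: *mut c_void, buf: *mut u8, len: usize) -> c_int {
    let ops = &*(ctx as *const T);
    let slice = std::slice::from_raw_parts_mut(buf, len);
    ops.read(slice)
}

// Helper that builds the C struct for any FileOps implementation.
fn build_fops<T: FileOps>() -> CFileOperations {
    CFileOperations { read: read_shim::<T> }
}

struct Hello;
impl FileOps for Hello {
    fn read(&self, buf: &mut [u8]) -> i32 {
        let msg = b"hi";
        buf[..msg.len()].copy_from_slice(msg);
        msg.len() as i32
    }
}

fn main() {
    let dev = Hello;
    let fops = build_fops::<Hello>();
    let mut buf = [0u8; 8];
    // Simulate the kernel calling back through the C function pointer.
    let n = unsafe { (fops.read)(&dev as *const Hello as *mut c_void, buf.as_mut_ptr(), buf.len()) };
    assert_eq!(&buf[..n as usize], b"hi");
    println!("read {} bytes", n);
}
```

The module author only ever writes the safe `impl FileOps` part; the unsafe glue lives once in the framework.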

The broad goal is that you don't need to use the "unsafe" keyword to write kernel modules that don't access memory directly themselves (filesystems, network protocols, etc.: device drivers might still need unsafe code where actually talking to the device, but it should be as little as possible). That means the interface used by kernel modules can't involve any unwrapped C functions or raw C pointers.


Looks interesting and definitely promising

Though it would probably make sense to make the API more "Rust-like" and avoid things like the repeated cstr!() invocations.

Some things are definitely trickier than others, for example how to deal with the different options of kmalloc if it's rust that's allocating memory


Feedback is definitely welcome! Re cstr see https://github.com/fishinabarrel/linux-kernel-module-rust/is... and https://github.com/fishinabarrel/linux-kernel-module-rust/pu... . What we have now is definitely better than b"foo\0" but not the best possible thing.

I think there's work upstream on custom / multiple allocators; we'll plug the various kmalloc(GFP_FOO) flags into that once it exists. Being able to use the standard library's collections, and third-party crates that build on them, is super beneficial.
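To sketch how that could eventually fit together (a hypothetical userspace mock: `kmalloc`/`kfree` and `GFP_KERNEL` are stubbed out with the system allocator so the example runs standalone; the real kernel bindings and the upstream allocator work will differ):

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// Mocked GFP flag; in the kernel these select allocation behavior
// (GFP_KERNEL may sleep, GFP_ATOMIC may not, ...).
const GFP_KERNEL: u32 = 0;

// Mock of the kernel's kmalloc/kfree, backed by the system allocator.
// kmalloc guarantees a minimum alignment, mocked here as 8 bytes.
unsafe fn kmalloc(size: usize, _flags: u32) -> *mut u8 {
    System.alloc(Layout::from_size_align(size, 8).unwrap())
}
unsafe fn kfree(ptr: *mut u8, size: usize) {
    System.dealloc(ptr, Layout::from_size_align(size, 8).unwrap())
}

// Route all Rust heap allocations through the kmalloc wrapper. With
// this installed, Vec, Box, String, etc. all "just work" on top of it.
struct KernelAllocator;

unsafe impl GlobalAlloc for KernelAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        kmalloc(layout.size(), GFP_KERNEL)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        kfree(ptr, layout.size())
    }
}

#[global_allocator]
static ALLOC: KernelAllocator = KernelAllocator;

fn main() {
    // This Vec's backing storage came from our kmalloc wrapper.
    let v: Vec<u32> = (0..4).collect();
    assert_eq!(v, [0, 1, 2, 3]);
    println!("{:?}", v);
}
```

The open question the parent raises is how per-call-site flags (rather than one global choice) would be expressed, which is what the custom-allocators work would enable.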


I attended LSS this year and enjoyed your talk. I also attended KP Singh's talk on eBPF LSMs, and I couldn't help but compare the two mechanisms for making code in the kernel more safe.

eBPF's approach is to generate code that's verified to be safe and that's limited in the operations it's allowed to do in the kernel, using trusted helper functions to access kernel data structures and functions. I couldn't help but draw an analogy to unsafe wrappers for Rust.

In its current form I might not want to write core logic of a device driver in eBPF, as it started out as something suited for mandatory access control policy, packet filtering, and auditing. That's changing though, and people seem to be demanding more and more functionality in eBPF. I'm admittedly bad at predicting the future, but one thing I can say with some confidence is that eBPF's capabilities are going to increase with time.

I would also be hesitant to attempt to write a device driver in Rust because of the degree to which I'd have to interact with other subsystems and structures of the kernel that are still written in unsafe C. I wouldn't have an intuition for the incremental benefit of using Rust for some of the core logic of the driver while still having to use unsafe wrappers to muck with all the other parts of the kernel where, if I get it wrong, my driver can still oops/hang/etc. Would the complexity of mixing two languages with unsafe wrappers plus increased code size "pay for" any incremental benefit, or would I have to expect future dividends from eventually having "enough" of the kernel written in Rust for it to be a worthwhile investment?

One advantage of eBPF, I suppose, is that you can write your code in old familiar C. With forthcoming support for bounded loops, I expect people are going to be proposing more and more use cases for it. I can imagine scenarios where one camp will say, "This is a job for Rust," and the other camp will retort, "Actually, this job can currently be done in eBPF." eBPF has the other distinct advantage of already being a supported feature of the upstream kernel.

This is all pretty new to me, and so all I have are my impressions and intuitions. I'd appreciate hearing more perspectives on this.


Thanks for the heads up about this year's Summit.


Johannes Lundberg wrote a thesis on doing this with FreeBSD and has a framework https://github.com/johalun/rustkpi

https://kth.diva-portal.org/smash/get/diva2:1238890/FULLTEXT...


Sounds like the start of something good. Though the comments feel a bit weird. Maybe it's just a strange day for some reason, but I feel like today, I've seen an awful lot of comments from people pushing rewriting things in Rust who know absolutely nothing about the code they're asking about or the problem domain it runs in.

I mean, I personally think Rust is cool and all that, but come on, who does that?


For the record, as one of the developers on the Rust project: we consider that kind of over-the-top evangelism counterproductive and unwelcome. Note that in the talk that inspired this news, I specifically said that I wasn't there to push Rust, just there to explain how I'm working to make Rust an option for more people and projects.

Even if you want people to use more Rust, that kind of aggressive evangelism doesn't serve you well, and people find it off-putting.


> I mean, I personally think Rust is cool and all that, but come on, who does that?

Some people have an LDS complex (they've found their messiah and want to spread the word). Apparently there's also people who've decided that over-the-top inane "evangelisation" was a good troll and way to turn people off. A significant number of PL threads on /r/programming have a comment attempting this sort of shit-stirring.


> Some people have an LDS complex (they've found their messiah and want to spread the word).

Or, some craftspeople/artisans prefer tools that are a joy for them to use instead of just a means to an end, and have the luxury to choose what they work with.


He's referring to people who open issues on random projects asking the project owners to RIIR. These people often don't create much on their own, so they're pushing a "I can't help this cool project because it's not using a language like Rust that would make me capable of contributing, if they rewrote it in Rust then they'd have the reward of my presence in their codebase" when in reality language choice is a small hill to climb

I recently had someone do this on a web project of mine that's written entirely in ES6 JavaScript.


LDS ?


The Church of Jesus Christ of Latter-day Saints.

AKA Mormon Church, who are pretty famous for their door-to-door proselytisation.


This particular linux thing wouldn't involve rewriting anything in Rust.


I don't know why you got downvoted. This is about new drivers. I suppose some existing drivers might later get rewritten, but that's later, when there's been more experience with it.

In any case, FF shows that rewriting a complex C/C++ codebase in Rust bit by bit is feasible, though again, that doesn't seem to be the intent in this case.


Correct me if I'm wrong, but these drivers wouldn't be merged into the kernel tree itself, right? They would be written as modules. I doubt any of the core developers would feel comfortable reviewing Rust code at this time.


About zero chance of rust drivers being merged into the kernel now or in the near future.


[flagged]


IIRC, Rust came about because Mozilla took the iterative-improvement approach with its DOM infrastructure code, and after multiple tries that all involved critical, security-impacting bugs being missed, no matter the level of diligence they required from contributors, they wanted to see if they could create a system that enforced that diligence up front. They chose to create a language whose semantics and defaults make it a lot harder to make the kinds of mistakes that kept dooming previous attempts at iterative improvement. To that end, Rust has been, in practice, for the Mozilla team, the most effective way to achieve their goals.

Whatever else there is to say about Rust, it did solve a real problem Mozilla had, that they had been otherwise unsuccessful at solving after multiple serious attempts over a period of years.


The Netscape rewrite was a complete tossing of the codebase, starting from scratch on virtually every component of the web browser and internal implementation details. It wasn't the start of the troubles, but it did make things worse.

Mozilla isn't doing that today. The current code is being "slowly but surely improved," and the use of Rust is largely happening only in new components (and not all new components are being written in Rust!). There's very little gratuitous conversion of C++ code to Rust.


> Mozilla isn't doing that today. The current code is being "slowly but surely improved," and the use of Rust is largely happening only in new components (and not all new components are being written in Rust!). There's very little gratuitous conversion of C++ code to Rust.

So one can build FF without Rust? Because if the answer is "Not any more" then it is a beginning of a rewrite, just done slower.


It is not the beginning of a rewrite just because Firefox requires Rust. All that it means is that there are now three languages you can choose to use to implement Firefox components, instead of just two.

You're conflating the ability to choose Rust in addition to C/C++ with the intent to replace C/C++ code with Rust.


Not at all.

There was a state X, where to build FF one needed to use a certain tool chain.

The new state is Y. To build FF one now has to use two tool chains: the previous one and the new one. Not having either of those tool chains prevents the build.

That's a definition of a beginning of a rewrite. And it is an incredibly dumb idea from a business perspective which will bite Mozilla in the ass in the future because it is increasing the build complexity today and tomorrow for some pie in the sky promise that may or may not pan out in future.

I'm not saying "Don't use Rust for a new project". I'm saying that the software industry is littered with the corpses of entities that decided to add a new way of doing something to an existing project where it did not pan out.


As a counterexample: Stylo alone is already a huge success story, not some "pie in the sky promise that may or may not pan out".



Stylo shipped in Firefox 57, under the name "Quantum CSS".


I stand corrected on it being complete pie in the sky.


While not all code should be rewritten in the "next best thing," there might be merit in rewriting some bug-prone code that expands the security vulnerability surface area. The borrow checker and other safety mechanisms can help new and potentially insecure code be just a little bit safer without significant external vetting.


I wouldn't assume the commenters are young, but if they are, attracting new blood to kernel development is a good thing, right? Especially if the new blood is prevented by the language from introducing memory errors :)


New blood and the kernel sound like a great combination for problems.

There are many more kinds of errors that people can make besides memory errors. Linux powers a good portion of the world; the vetting of kernel contributors should be done accordingly, and the processes put in place should be strong enough that even if 'young and new blood' makes contributions, these are reviewed with a keen eye to all the lessons learned over the years, something those new and young people will still need to learn.

I'm all for including newcomers into important open source projects but the kernel is the one place where I would expect some experience to be a requirement before contributing simply to avoid overloading the people further up the chain with a stream of obvious errors.


Correct me if I'm wrong, but I've been under the impression that kernel contributors are not vetted? Only patches are vetted.


You are mostly correct, there's no real barrier to just anyone submitting a patch. However, the kernel does have a rule against anonymous or pseudonymous contributions, mainly for licensing clarity reasons (not that you need to submit any form of proof of identity).


Even the patches are not vetted that much, given the track record of the Linux kernel from the security point of view.


That would open up the kernel to malicious contributions. It is always a good idea to know who your counterparty is.

There is good precedent to warrant such vigilance:

https://www.newscientist.com/article/dn24165-how-nsa-weakens...

Accidents can never be ruled out but a malicious operator will have far less chance of getting away with something when detected early.


When the first prototypes of Linux were released in 1991 Linus was 21-22 years old.


And it was not used anywhere. And the code was 1/1000 the size of what it is now. Today it runs business critical operations across the entire globe. It is not the same situation as in 1991.


Yes, and nobody cared about whether it was released in one piece or not. And there was plenty wrong with those initial releases, especially compared to what was already out there.

Linux is now mission critical, which means the rules of 1991 no longer apply.


You mean services built with Linux are mission critical?


> but come on, who does that?

Yesterday Go, today Rust.


I'm certainly guilty of thinking that. I don't think I've actually been pushing the "rewrite all the things in Go" thing though.

I'm also guilty of thinking language X is the next big thing, and that I should use it for everything. And that's happened more than a few times: Smalltalk, Eiffel, Python, Haskell, Lua, Go.

So having said all that, I really do think that Rust is the next awesome thing. Like the others on the above list, Rust has certainly stretched my brain with new and better programming concepts. And I think it offers some solid benefits that few other programming languages are providing right now. It's also interesting to start to see other languages like Swift incorporate concepts from Rust.


I remember, 25 years ago, we wanted to re-write everything in Oberon.


Then Java was announced to the world. :(


I thought Java was pretty cool at the time of its announcement.

I even started working on Java applets for an educational website. The idea was to teach algebra / calculus concepts, and use a Java applet to interactively plot graphs and such.

Even at the time, I did wonder about the standard types vs. Objects split, though I wasn't to become an OO purist (temporarily) until learning more Eiffel.


Java was undoubtedly cool, I also thought like that.

Now what disappointed me was that the languages that came before it (Oberon variants, Modula-3, Eiffel, Sather) all had a mix of AOT/JIT toolchains, with value types and low-level capabilities.

Java nuked all of this, and is now trying to catch up on those features that should have been there since the beginning (AOT kind of was, but only on commercial JDKs).


Java still is cool! :)

I have high hopes for GraalVM native compilation and Project Loom.


It is, but it could have been much better from the get go.


There's nothing wrong with Go, in its proper domain! Heck, maybe the typical program written in a GC'd "scripting"-like language should be rewritten in Go. Rust is nice, but sometimes you really can't do without a GC.


When I started reading HN, around 2013, all I can remember is the endless stream of Ruby/Rails posts. A good rule of thumb is probably to invest in languages that are still growing at HN+5 years. Of course, you can always look into a language a bit earlier; one of the points of HN is (early) discoverability. (If only I had mined some bitcoins...)


I like the mental model of "innovation points". You have a certain budget of innovation points that you can spend while building your project. If you spend too much (i.e. use new shiny things throughout the stack), you're going to be chasing after bugs everywhere and won't get very far. If you spend too little (i.e. use only old stuff), you'll have a hard time implementing real innovations. The idea is to spend innovation points strategically, i.e. use new shiny components (like a new programming language or a new DB) only where it actually brings you a tangible benefit, while using tried and trusted components for everything else.

(The concrete budget of innovation points varies per project. A government project will have way fewer innovation points than a weekend side-project.)


If I recall correctly, Torvalds already addressed the hype around this (totally not new) idea by pointing out that memory errors really make up only a tiny part of the intricacies of building a kernel. This idea predates Rust, and for better or worse will probably outlive it.


He also said that most of the issues with writing kernels have nothing to do with the choice of programming language, but instead with hardware compatibility. Neither Rust nor anything else can help with that.


Here's the source of the original quote: https://www.infoworld.com/article/3109150/linux-at-25-linus-... Admittedly, kernel and driver development are different things. The original quote was to do with kernel development.

"That's not a new phenomenon at all. We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.

I'm not convinced about Rust for an OS kernel (there's a lot more to system programming than the kernel, though), but at the same time there is no question that C has a lot of limitations.

...I don't think you actually solve any of the really hard kernel problems with your choice of programming language. The big problems tend to be about hardware support (all those drivers, all the odd details about different platforms, all the subtleties in memory management and resource accounting), and anybody who thinks that the choice of language simplifies those things a lot is likely to be very disappointed."

I disagree with him about Ada, for what it's worth, but the overall point is correct. The real problems of kernel development aren't things a borrow checker will help with. A big part of developing a kernel takes place before management of heap memory is even relevant. All that being said, for user-space applications Rust has a lot to offer.


I read that slightly differently. I read it as: "even after putting the shoes on, the marathon still needs to be run". So yes, lifting reasoning burden away from, e.g., function interfaces (owning vs. uniquely borrowing vs. shared borrowing), lifetime problems, and so on, precisely enables everyone to spend more effort on the remaining difficulties.


Yeah, I would not like this becoming required. Right now it's still fairly manageable to keep developing and building my own kernels. You need binutils/gcc for your host platforms, and that's about all. And I have 3 architectures I build for.

If I also need clang and rust for all the platforms, and learn rust, for some questionable benefits... That would make things hard.


> If I also need clang and rust for all the platforms, and learn rust, for some questionable benefits

I can see the arguments against needing 2 different C-compiler toolchains, not to mention how this may limit target platform-support to the minimum subset supported by both compilers...

But to argue that Rust only provides "questionable benefits" is really not reasonable.

Even the hipster Javascript crowd has discovered that providing more information to a/the compiler (Typescript) almost unconditionally provides higher code-quality and better results.

When will the C-crowd do the same? When will they shed their "I know better than any machine"-like elitism?


Read it as "Questionable benefits in the context of the kernel."


I think we should just evaluate Rust based on its merits. Ignore fanboys and ignore the opposite as well.

In other words, Rust is just another tool. Regard it as one, without subjective emotions either way. Be constructive: don't only look for faults, but also for ways to overcome them. But don't close your eyes to them either. Acknowledging weaknesses is the first step to improvement.

Some possible questions and measures to consider below. I'm sure there's a lot more to add on this list.

1) Stability. Other than for development, unstable kernels are a no-go. Can possible negative effects be mitigated?

2) Security is often what Rust is expected to bring to the table. So is Rust actually more secure in the environment the kernel requires? This could be tested by "clean-room" reimplementing something that is historically known to have many security issues.

3) Are there showstoppers for kernel builds? Interoperability, build performance, architectures unsupported by Rust, etc. If so, could these be mitigated? Conversely, is there something positive Rust could provide?

4) How does it affect runtime performance? Average case. Bloat issues? Any pathologic cases? Any benefits?

5) What other unexpected things does it bring to the table? Both benefits and disadvantages. For example, could Rust types also be used to catch errors other than memory-related ones, like invalid states?

6) <Your consideration here or above>


> 2) Security is often what Rust is expected to bring to the table. So is Rust actually more secure in the environment the kernel requires? This could be tested by "clean-room" reimplementing something that is historically known to have many security issues.

We actually have an objective experiment to answer this question: the rust-afl trophy case [1]. It is remarkable how different it is from the upstream AFL trophy case, which tests C and C++ code [2]. An enormous fraction of the AFL trophy case uncovered potentially-exploitable memory safety issues; by contrast, very few of the Rust issues were, with most being safe panics.

[1]: https://github.com/rust-fuzz/trophy-case

[2]: http://lcamtuf.coredump.cx/afl/
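A toy illustration of that failure-mode difference (not drawn from either trophy case): the same lying length prefix that is silent heap corruption in C becomes a deterministic, catchable panic in safe Rust.

```rust
fn parse_len_prefixed(data: &[u8]) -> Option<&[u8]> {
    // First byte claims the payload length -- attacker-controlled in a fuzzer.
    let len = *data.first()? as usize;
    // In C, memcpy(out, data + 1, len) with a lying length is a heap
    // overflow. In Rust, the bounds-checked slice either succeeds or
    // panics; it never reads out of bounds.
    Some(&data[1..1 + len])
}

fn main() {
    // Well-formed input: length byte matches the payload.
    assert_eq!(parse_len_prefixed(&[3, b'a', b'b', b'c']), Some(&b"abc"[..]));

    // Silence the default panic message for a tidy demo.
    std::panic::set_hook(Box::new(|_| {}));

    // Malformed input: length byte claims 200 bytes that aren't there.
    // The out-of-bounds slice panics safely instead of corrupting memory.
    let result = std::panic::catch_unwind(|| {
        let data = [200u8, 1, 2];
        parse_len_prefixed(&data).map(|s| s.to_vec())
    });
    assert!(result.is_err());
    println!("malformed input caused a safe panic, not memory corruption");
}
```

That is exactly the shift the trophy cases show: fuzzers still find bugs in Rust, but the findings are mostly denial-of-service panics rather than exploitable corruption.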


I don't think (2) needs any additional demonstration. Any experienced C programmer who just reads the Rust book should conclude that Rust >> C.

(3) can be a problem: Rust does not (yet) make it possible to define an ABI for Rust, which means that only by using external representations (C repr) can Rust code interoperate with non-Rust code. However, that's not a very big deal right now.
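To make (3) concrete, here is a small sketch of what interop through external representations looks like today (the `Stats` struct and `total` function are invented for illustration):

```rust
use std::os::raw::c_int;

// Rust's default struct layout is unspecified, so any struct shared
// with C must opt into the C layout explicitly.
#[repr(C)]
#[derive(Debug, PartialEq)]
struct Stats {
    reads: c_int,
    writes: c_int,
}

// Functions exposed to C likewise need the C calling convention.
extern "C" fn total(s: *const Stats) -> c_int {
    let s = unsafe { &*s };
    s.reads + s.writes
}

fn main() {
    let s = Stats { reads: 3, writes: 4 };
    // Call through a C-ABI function pointer, as C code would.
    let f: extern "C" fn(*const Stats) -> c_int = total;
    assert_eq!(f(&s), 7);
    // With #[repr(C)] the layout is exactly what a C compiler produces.
    assert_eq!(std::mem::size_of::<Stats>(), 2 * std::mem::size_of::<c_int>());
    println!("total = {}", f(&s));
}
```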

As to (1), ignoring toolchain stability (which affects C as well), the main concern would be (3) (see above).

Re (4), that's a legitimate question for sure. (My suspicion is that Rust will improve performance in general, mostly due to forcing better (public and internal) API designs on programmers. However, that's just hunch.)

Re (5), benefits. I think mostly it will bring this benefit: cleaner APIs. Obviously that won't apply to Linux's ABI to user-land, since that's not to be broken, but it could benefit the kernel in-tree. Re (5), disadvantages, I think mainly it's the learning curve.


> Any experienced C programmer who just reads the Rust book should conclude that Rust >> C.

Mighty nice of you to speak for all of us C programmers, so let me continue the story where you left off: the programmer then tries it, gets annoyed with complaints about ownership in cases where the programmer clearly did nothing wrong, and goes back to getting work done in C, putting the Rust book into his long-overflowing "cute toys to play with later" pile.


“I know this code I’ve just written is safe” is the opening verse for the thousands of security vulnerabilities we see every year.


I know I can implement an up-tree or a doubly-linked list safely just fine. I do not need to have a straitjacket put on me that prevents me from doing that.

If you wish to use it, do. If you want to force your employees to do it, feel free. Telling everyone else how to live their lives and do their jobs is a dick move.


> I know i can implement an up-tree or a doubly-linked-list safely just fine.

Great, just make a few unsafe definitions for your list implementation and the rest of your code can enjoy memory safety.


But the point is that there is so much provably safe code that the borrow checker doesn’t understand. Let’s not get confused: Rust borrow checker forces you to write in a very narrow space that it can understand, which is a very small subset of safe programs. Sometimes it’s worth it to be so constrained, others simply don’t.


I wonder if there could be a mechanism for reporting such cases.

"So-and-so issue detected. If you believe this is not a bug, send this snippet or AST as a bug report (if you want, we can do this automatically for you!)"

And also: "We've detected so-and-so issue. Please see our wiki <link to specific issue> for an explanation of what may have triggered this, and for ideas on how to rewrite such code."


"doubly linked list detected, this language does not support such constructs without 'unsafe' keyword being used to wrap it"


Which is why you implement a doubly linked list using data indices in a Vec<Node>. The resulting data structure is fully safe and resembles an ECS.
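A minimal sketch of that approach (names invented for the example): nodes live in a `Vec` arena and refer to each other by index, so the back-links need no `unsafe` and no raw pointers.

```rust
// A doubly linked list over a Vec arena: nodes refer to each other by
// index rather than by pointer, so no unsafe code is needed.
struct Node<T> {
    value: T,
    prev: Option<usize>,
    next: Option<usize>,
}

struct List<T> {
    nodes: Vec<Node<T>>,
    head: Option<usize>,
    tail: Option<usize>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None, tail: None }
    }

    // Append at the tail; returns the new node's index ("handle").
    fn push_back(&mut self, value: T) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        match self.tail {
            Some(t) => self.nodes[t].next = Some(idx),
            None => self.head = Some(idx),
        }
        self.tail = Some(idx);
        idx
    }

    // Walk forward through the links to collect values.
    fn iter_values(&self) -> Vec<&T> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(&self.nodes[i].value);
            cur = self.nodes[i].next;
        }
        out
    }
}

fn main() {
    let mut list = List::new();
    list.push_back(1);
    list.push_back(2);
    list.push_back(3);
    assert_eq!(list.iter_values(), [&1, &2, &3]);
    // The back-links work too: the tail's prev points at the middle node.
    let tail = list.tail.unwrap();
    assert_eq!(list.nodes[list.nodes[tail].prev.unwrap()].value, 2);
    println!("forward walk ok, back-links ok");
}
```

The trade-off is the extra indirection (and bookkeeping if you add removal), but the handles double as stable IDs, which is why the pattern shows up in ECS designs.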


And features an extra level of indirection for your CPU's dcache prefetch predictor to trip over



Isn't this exactly what they're doing?


I hope so, that's the impression I got. Other than that, I hope the discussion here has less of that "Playstation vs. Xbox" tone we're all familiar with.


> 6) <Your consideration here or above>

"Would C++ fare better or worse than Rust on these fronts?" is the question I would have.


I was a C++ developer a long time ago (I've actually used CFront as a professional -- that's how long ago ;-) ). C++ is a fine language. You can write good code in C++. I've worked on large projects that were extremely stable and easy to work with. It can be done.

The one thing I will say about C++ from long ago was that you needed to be skilled at C++. You needed to understand it and you needed to know how to write code that wasn't going to cause you problems in the future. You also needed to work on a team that allowed you to use your skill. The language is so different now that it's practically a different language, so I don't know how much of that still remains, but I guess quite a lot.

I've been doing some Rust recently. I like Rust. There are bits I think still need work, but generally it's a fine language. The biggest difference I see between it and C++ from long ago is that Rust helps you a lot. It's very convenient and friendly. It also scolds me when I do something stupid, which I appreciate.

However, I don't really feel like I need to know less than I did as a C++ programmer. With Rust, it helps you, but if you start causing problems for yourself you can be in for a world of hurt trying to understand what the compiler doesn't like. It will catch that thing that slips by you, which is awesome, but you still have to understand what's going on under the hood.

So in my book, it feels very much like C++ with a super friendly and super powerful linter. I mean it's really, really nice, but I don't think you could just hire random people off the street and give them a Rust compiler and say "This will protect you".

So to answer your question (finally), I think you won't really fare much better or worse, but the journey seems a little bit more relaxed with Rust.


The failure modes of "can't get code to compile" and "something weird and possibly exploitable could happen at runtime" may both be frustrating to developers, but they're worlds apart in practice. Looking at it from the point of view of a kernel maintainer evaluating submissions, the former is not your problem --- in fact it keeps the bad code from ever coming to your attention --- but the latter creates ongoing problems for you.


This is an interesting point I hadn't considered before: converting run-time failures to compile-time failures is already desirable on its own, but in the context of submitting a patch/pull request to an existing piece of software, it also results in shifting more of the maintenance burden from the person evaluating the submission to the person submitting, since presumably they will usually not submit their patch until it compiles cleanly.

I wonder if easing the maintenance burden of beleaguered open source maintainers is an even larger benefit in practice than catching errors at compile time.


This is true, but there's also a lot of trivial busy-work and confusion that the compiler and language foist on you that isn't security related. Like the fact that you always seem to have a &str when you actually need a String or vice-versa and your code is littered with .to_string() everywhere. Or figuring out what type your iteration is at a particular stage and why you can't call map() on it. Or figuring out the vicissitudes of what collect() will accept and convert for you and what it won't. Or clumsily dealing with a Vec of Results when what you really want is a Result with a Vec

Having a really sophisticated type system means...dealing with types a lot. I love Rust - especially ADTs and everything being an expression - but I wouldn't call it more productive than C++. And I think that was mikekchar's point?


Just wanted to make sure you knew: you can use `.collect()` to convert an iterator of Results into a Result containing a collection.

See the 6th example here: https://doc.rust-lang.org/std/iter/trait.Iterator.html#examp...
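For completeness, a toy example of that conversion:

```rust
fn main() {
    // A Vec of Results...
    let ok: Vec<Result<i32, String>> = vec![Ok(1), Ok(2), Ok(3)];
    let bad: Vec<Result<i32, String>> = vec![Ok(1), Err("boom".to_string()), Ok(3)];

    // ...collected into a Result of a Vec: Ok(all values) if every
    // element was Ok, otherwise the first Err short-circuits.
    let all: Result<Vec<i32>, String> = ok.into_iter().collect();
    let first_err: Result<Vec<i32>, String> = bad.into_iter().collect();

    assert_eq!(all, Ok(vec![1, 2, 3]));
    assert_eq!(first_err, Err("boom".to_string()));
    println!("{:?} {:?}", all, first_err);
}
```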


> So in my book, it feels very much like C++ with a super friendly and super powerful linter. I mean it's really, really nice, but I don't think you could just hire random people off the street and give them a Rust compiler and say "This will protect you".

I think that's probably an acceptable trade-off for certain types of development, and kernel engineering is probably exemplary of that type of development. I might be more hesitant to endorse it if it was the only way (or only way without major hurdles) since that would raise the bar of entry for people wanting to figure out how to make a Linux kernel module (the "make your own kernel module" howto's are empowering for Linux), but as an additional supported interface with the constraints mentioned in the article I think it makes sense.


C++ is a different language than it was even 10 years ago. The standard library is still lacking, but if you haven't worked with it in the past decade, I'd give it another look. C++11 makes it bearable, later revisions make it very nice.


As someone who has had to deal with C++ in the Windows and Linux kernels for the last couple of years, the thing that excited me and drew me to Rust is the use of Result rather than exceptions.

Exceptions are incompatible with environments like the Linux kernel. This then makes you incompatible with the stdlib and most third-party libraries. It feels like most exception-less C++ libraries also handle errors in their own way. This means you are in this weird language+library world, disconnected from existing C++ code. You also have to deal with training, not just for kernel + language but kernel + language + special language variant, with all of the gotchas of all three.
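For comparison, the Result-based idiom looks like this (a toy sketch; the `Device` type and its error are invented): fallible construction is an ordinary associated function returning `Result`, the same shape as `std::fs::File::open`, so no unwinding machinery is needed anywhere.

```rust
#[derive(Debug, PartialEq)]
struct Device {
    id: u32,
}

#[derive(Debug, PartialEq)]
enum DeviceError {
    InvalidId,
}

impl Device {
    // Rust has no constructors that can "half fail": fallible creation
    // is a plain function returning Result, and the caller must handle
    // the error before a Device value can exist at all.
    fn open(id: u32) -> Result<Device, DeviceError> {
        if id == 0 {
            return Err(DeviceError::InvalidId);
        }
        Ok(Device { id })
    }
}

fn main() {
    assert_eq!(Device::open(7), Ok(Device { id: 7 }));
    assert_eq!(Device::open(0), Err(DeviceError::InvalidId));

    // The ? operator propagates errors as ordinary return values,
    // with no exceptions or stack unwinding involved.
    fn use_device() -> Result<u32, DeviceError> {
        let dev = Device::open(42)?;
        Ok(dev.id)
    }
    assert_eq!(use_device(), Ok(42));
    println!("ok");
}
```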

Herb Sutter is working on improving this situation but I feel it is a ways out and last I looked at it, I was a little uncomfortable with it.

Even once that is handled, there is the issue of what subset of the stdlib is safe to use. Rust has done some work to help with this with no_std, and most third-party code that isn't compatible with no_std could probably be easily fixed to do so.

bcraig is working on "freestanding" proposals to fix this in C++.


There's nothing that requires exceptions in C++ though. It's not like Python where handling exceptions is a requirement for some facilities in the language or standard library. You can entirely avoid them if you want.


Exceptions are the only way to signal errors from a constructor in C++. If you don't want to use exceptions you end up having to use static factory methods or similar to do all of your constructing and that's a bit dull.


Nah, you can just return the status code with a parameter or state variable.


Except that you've still got to construct an object (because that's how constructors work), you're just signalling that it's an unusable one.

Let's say your object is going to contain another object that wraps a file, but constructing that file wrapper fails because the file doesn't exist. You are forced to construct your object, so you are forced to have something to put there. Maybe it's a dummy, maybe you can 'null it in some way', but it's immediately not as ergonomic. You also then have to either rely on the caller to not use the failed constructed object, or put a check in every method call checking for 'valid'.


Did I say it would be "ergonomic"? You're deliberately refusing to use the correct tool for the job; of course the next-best thing is going to be "not as ergonomic". Is that surprising? It was never "ergonomic" in C to begin with. You had to check status codes and manually allocate and deallocate and construct and destroy everything everywhere. Now we have a language here that automates a ton of this for you, and gives you better tools. It also lets you avoid the individual tools you don't like, while letting you use the others, and while letting you fall back to your old style entirely whenever you prefer, and you're still complaining about it? What are you even complaining about? If you think it's better to have everything to be difficult instead of some things, then just stick with C. There's no point having a pointless argument with me about it.


You don't have to use exceptions in C++. Google doesn't, for example. Errors are reported with StatusOr<> like so:

https://github.com/protocolbuffers/protobuf/blob/master/src/...


C++ exceptions would not help at all. There's a reason they are not used in-kernel in Windows.

C++ RAII would help a great deal, but Rust's main feature is that it's RAII on steroids.


C++ is more stable and probably supports more architectures. But Rust is strictly better for security and correctness concerns.

However, I imagine it would be a huge undertaking to restructure kernel code in a way that the Rust compiler finds palatable.


<Your consideration here or above>

I guess that as more languages are added to a project, it gets harder and harder to understand and develop. You need to spend time learning yet another tool to be comfortable with inspecting the code and understanding the inner workings of it.


In this vein, what about other options like BetterC (a very capable runtime-less D subset)? What about allowing Ada into the kernel? The fact they are focusing their evaluation on Rust only is already an indication that they are playing the fanboy game to some extent. There are other languages out there that improve over C and can be used for systems programming.


Linus' opinion on Ada (vs Rust):

> We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.

From: https://www.infoworld.com/article/3109150/linux-at-25-linus-...


Not sure whether I agree with him calling Modula-2 and Ada disasters. But his statement about programming languages not solving problems by themselves applies to a lot more than just operating systems.


Complete opinion here but this is my take on anything Linus.

Linus has learned that anything he says is potentially going to quoted and quoted over and over again. So it seems like he only talks in extremes now. I always tend to look at it like he likes rust more than the other two and nothing more.

If he said the other 2 are alright, people would run with that quote, and vice versa.


I do not see any reasons given for why he considers Ada a disaster. Cannot really do much with this opinion as it is.


Although we were talking about Rust here, I'd love to see those evaluated and compared between each other as well.

This is nothing away from BetterC and other languages. If anything, this could open up ways to using languages other than C in kernel development.

That said, I don't think there's room for more than maybe two languages for the critical kernel components even in the long run. Not because of technical limitations, but human ones.


Someone would need to make the case for them.

AFAIK neither Ada nor BetterC prevent memory safety bugs like Rust does... for example it appears neither of them prevent use-after-free of dynamic heap allocations, a pretty common sort of exploitable memory safety bug.


SPARK Ada has a concept of pointer ownership that would offer similar protections: https://blog.adacore.com/using-pointers-in-spark


That is true, but SPARK is more heavyweight and less mature than Rust.

If someone wants to make the case that SPARK would be a better approach than Rust to writing safe code in the Linux kernel, they should go ahead, but I haven't seen anyone make that case.


I don't think anyone is going to make that case, given how lukewarm the response has been in the past, and how lukewarm the response to Rust is now. SPARK is used extensively in areas much more mission critical than the Linux kernel. One recent area I found it used which is very interesting is in the CoreBoot BIOS firmware. I'd hesitate to call SPARK a heavier language, but I'm not sure that's what you're saying. It's definitely far, far more mature than Rust.


How is SPARK less mature than Rust given its use in the industry?


This shifting-the-referent bug annoys me, because it happens with C++ too.

"C++ is really mature and there are so many C++ projects and developers!" "Yeah but it has these problems..." "Just use C++17, it solves all that!" ... but C++17 is not the language that is really mature and that all those C++ projects and developers are using.


I'm talking about SPARK-with-pointers. As of June this year, that was still only available as a preview:

> If you now feel like using them, a preview is available in the community 2019 edition of GNAT+SPARK.


Fair enough, however SPARK-without-pointers is even more constrained in what concerns dynamic allocation, basically it is forbidden and everything must be known at compile time.


All that Rust does wrt that is to enforce reference-counting overhead. That has well-known issues and limitations and is not very attractive either.


The only types in the Rust standard library that do reference counting are `Rc` and `Arc` (which are both implemented in Rust).

Borrow checking is done statically.


But borrow checking doesn't cover heap allocations.


Yes it does. Borrowing guarantees that the lifetime of the borrower does not exceed the lifetime the borrowed value can provide. Heap allocations have a lifetime--the start of a Box<>'s lifetime is the call to malloc, and the end is the call to free. Borrow checking prevents you from using the value after the free.


Huh? It absolutely does, you can store a Box<T> somewhere and get multiple &'a Ts out of it, where 'a is limited to the lifetime of the box, but only one &'a mut T out of it at once.


No, this does not mean that the borrow checker understands heap allocations. Box is just a wrapper to shoehorn heap management into the stack-based analysis that the borrow checker is able to perform.

In other words, Box<> is just the equivalent of std::unique_ptr in C++.


Box is just a library type; the borrow checker is a compiler feature. Different things entirely.

I think what you're trying to get at is that the borrow checker is based on static lifetimes (that's not the same thing as a "stack-based analysis", by the way). That's true. But that's simply because most lifetimes in practice follow simple static patterns. This is the same observation that motivates RAII in C++. Because heap management tends to follow the same few patterns over and over, judicious use of Rust compiler features and standard libraries can eliminate a lot of problems.


I see what you mean, but I think there are two orthogonal issues here. One is the ownership/lifetime of the heap allocation. That is indeed coupled to a stack object, because non-stack bound lifetimes can (with the current state-of-the-art) not be analyzed at compile time. Calling this shoehorning is strange, that's the whole idea of RAII. The other issue is borrows of heap objects, which the borrow checker can of course handle since they are no different than borrows to stack objects.


Yes, it's very similar to unique_ptr.

The point here is that you can implement many kinds of complex and useful heap-allocated data structures in safe Rust code, without any reference counting, and the compiler will verify that you have no use-after-free bugs, or any other kind of memory-safety bug. The same is not true of (pre-SPARK) Ada, or C, or C++, or BetterC.


You can play around with C++ lifetime analysis in Visual C++ 2019 and clang tidy.

It is still a WIP, yet to be deployed at large, and naturally doesn't cover binary libraries, but it is already a very good improvement.


I'm excited about the C++ lifetimes work, but it's important to recognize that its goals are much more modest than the goals of memory-safe languages. They aim to catch many common memory-safety bugs, which is very valuable, but that's not nearly the same thing as guaranteeing the absence of memory-safety bugs. https://robert.ocallahan.org/2018/09/more-realistic-goals-fo...


True, but that way I can still do my .NET / C++/WinRT interop without having to reboot the world, keeping the nice Visual Studio mixed-mode debugging integration and the whole Windows infrastructure that is built around C++.

Maybe now with Microsoft having some care for Rust, the tooling situation will improve, but right now, even if handicapped, C++ lifetimes are easier to sell in some shops than rebooting the whole stack in Rust.

And in the end they all share the same goal, improved safety in our daily stacks, even the bits we only touch as user.


It's exciting to see C++ pick up some of those ideas! Anything that improves the current situation is appreciated.


That is what I see as most valuable contribution from Rust.

Even if Rust were to fail at large-scale mainstream adoption, the fact that the community has picked up Cyclone and ATS ideas to the point that Swift, OCaml, Haskell, Ada, C++, D, Chapel, ParaSail and eventually other communities started to adopt similar ideas, that alone is a major victory for everyone involved in Rust.


That's interesting, still on VS2017 for Windows stuff. I can't wait to try that on kernel drivers, etc.



A good example of the borrow-checker against data that is 100% not on the stack is crossbeam-epoch: https://docs.rs/crossbeam/0.5.0/crossbeam/epoch/struct.Guard...

Crossbeam uses "epoch-based memory reclamation," a strategy for maintaining an object shared across threads without either locks or a global garbage collector. The tl;dr of the strategy is that there's a concept of "epochs," and if you update an object, you have to keep the old copy of the object around until the epoch is over. How long an epoch is alive is determined by how slow your readers are.

So, to access some data, you create a Guard object and pass a reference to that Guard object into the functions that access Crossbeam-protected objects. You then get a reference whose lifetime is bounded by the lifetime of your Guard object. (Typically you're going to create the Guard object in your local stack frame, but nobody's stopping you from putting a Guard object on the heap, getting an arbitrarily-long reference to an object, and blocking reclamation for arbitrarily long, if you really want.) You can safely access the data through this reference until its lifetime is over, and other threads won't reclaim the data (i.e., deallocate it from the heap) until your Guard object is gone.

The implementation uses unsafe, but that's not surprising, the implementation of Box itself uses unsafe so it can call malloc/free or whatever your platform equivalent is. What's important is that the safe interface can translate the requirements on paper into requirements that the borrow checker can check, using this Guard object.

std::unique_ptr can't do that. (Also more generally, std::unique_ptr isn't memory-safe - see the example in https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/ .) There's no way in C++ to say "Here's a reference to some data in this unique_ptr, and by the way it's perfectly safe to have more than one reference to it, but you can't hold onto this reference forever because I'd like to deallocate it soon."

We're already finding that the Crossbeam-style guard pattern is helping us express kernel RCU in ways that are safer than what could be done in C. Namely, there are functions that expect to be called inside an RCU read-side critical section but have no way of enforcing that in C other than by adding runtime checks for the current state of RCU. In Rust we can pass the Guard object around and ensure that readers a) create a critical section and b) don't continue to dereference data once they've declared the end of their critical section.


Sure it does. The borrow checker is concerned about lifetimes, regardless of whether the object lives on the stack or the heap.


The borrow checker is what is being referred to and has no runtime overhead.

Now, an environment without a GC will sometimes use reference counting, but that's true regardless of whether it's C, Rust, or C++.


Reference counting has well known issues that are independent of the programming language:

- can't deal with circular references without user intervention

- storing the reference counter is hard: either it clobbers a whole cache line or it ruins memory alignment

- atomic reference counting has the potential to utterly ruin runtime performance unless it used extremely carefully


I'm pretty sure you are confusing Rust (borrowck, done at compile-time, no runtime overhead) and Swift (where every object is reference-counted, with some, sometimes big, overhead).


In Rust GUI code it is quite common to have Rc<RefCell<>> scattered across all the widget callbacks.


I assume you're talking about GTK or Qt? AFAIK they are the only viable options for Rust GUI at the moment, but their paradigm doesn't fit well with Rust ownership and you need to use ref-counting everywhere as you said. I don't think the problem is inherent to GUI in general though.


It is, because Rust's ergonomics for self-referential structs referenced from closures require this type of dance.

And when one cares about performance, creating/destroying hundreds of instances per frame as in reactive approaches, it isn't the best use of CPU cycles.


The borrow checker does not use reference counting.


> 1) Stability. Other than for development, unstable kernels are a no go. Can possible negative effects be mitigated?

If by "stability" you mean "crash-proof-ness," I think there's no particular inherent reason Rust code is going to be more crash-prone than C, especially since basically the whole purpose of the language is increased stability. One notable shortcoming is that Rust doesn't currently have a fallible allocations API nor widespread support in common libraries for using it, so under memory pressure, if kmalloc fails, your only choice is to panic (Rust panic, i.e., unwind, maybe BUG() and kill the current thread). See https://github.com/fishinabarrel/linux-kernel-module-rust/is... for some discussion.

If by "stability" you mean interface stability, the Rust project has made great progress in the last year or two at stabilizing everything needed to write code that doesn't use the full standard library / link to a libc that can open files etc. See e.g. https://github.com/fishinabarrel/linux-kernel-module-rust/is... .

> 2) Security is often what Rust is expected to bring on the table. So is Rust actually more secure in the environment kernel requires? This could be tested by "clean-room" reimplementing something that is historically known to have many security issues.

Agree that in practice we're going to need to test this. But all of Rust's safety features (type system that handles null pointers, borrow checker, bounds-checked arrays, safe iterators so you don't need to bounds-check in the first place, etc.) work fine in kernelspace.

> 3) Are there showstoppers for kernel builds? Interoperability, build performance, architectures unsupported by Rust, etc. If so, could these be mitigated?

Build performance is a bit slow, but it's not as bad as userspace Rust because you inherently can't use that many crates and you generally don't want to be linking third-party code anyway - everything should be in the kernel tree.

For architecture support see https://github.com/fishinabarrel/linux-kernel-module-rust/is... . Notably, all the architectures that have kernels by the major distros (I checked RHEL, Fedora, Debian, Ubuntu, SUSE, Android, Oracle, and Arch) should work.

One challenge for interoperability is that most kernels in the real world are built with GCC, and rustc itself emits code using LLVM. The most common way of binding C code is using rust-bindgen, which uses libclang to parse C headers; even if you're not using bindgen, I believe you're still using LLVM's idea of C layout with #[repr(C)] structs and extern "C" functions. It's possible that kernels are built with particular GCC -m options that change the ABI (e.g., regparm) or GCC plugins (e.g., randstruct); if those aren't supported in compatible ways by LLVM / clang, then it's hard to write modules that load into an existing kernel. But, of course, if the question is to build new kernels with components in Rust, one workable restriction is to say that the C parts need to be built with Clang. There is good support for building the Linux kernel with Clang, and there are production Android models with Clang-built kernels.

> 4) How does it affect runtime performance? Average case. Bloat issues? Any pathologic cases? Any benefits?

There's a team that did some investigation on a prototype, and found that runtime performance was comparable, but binary size wasn't too great: https://mssun.me/assets/ares19securing.pdf (The prototype driver uses lots of unsafe code, but it's a good proof of concept for what ought to be achievable.)

I think we can claw back binary size with some focused work.

> 5) What other unexpected it brings on the plate? Both benefits and disadvantages. For example, could Rust types also be used to catch errors other than memory related, like invalid states?

One thing I'm very curious about is whether you can use a battle-tested third-party ASN.1 implementation (for example) instead of writing your own ASN.1 implementation in the kernel. (In fact the kernel has multiple ASN.1 implementations!)

Another useful thing is to use Rust's linear(ish) type system to prevent TOCTTOU bugs when checking userspace pointers, by making it very explicit when you're dereferencing the same address more than once.

A little closer to memory safety: Rust's Send and Sync traits make it easy to ensure you're not unsafely using data across threads (i.e., you're forced to pay attention to shared data and are unlikely to get into a big-kernel-lock situation), and you can use the typesystem to get a better and safer interface to things like RCU pointers. RCU requires that you be in a read-side critical section to read pointers; there's no concept of locking a specific pointer, being in one lets you read any RCU-protected pointer (as long as it's of the same RCU flavor, and confusing RCU flavors did lead to a use-after-free vulnerability recently!). In Rust it's pretty easy to have a Guard object on the stack that uses RAII to enter/exit the critical section, and ensure that you pass a reference to your Guard to any dereference of an RCU pointer type.


Rust is still a research language. And using more than one language for a kernel/drivers, is that really a good idea? Anyone trying to fix/debug things in the kernel now has to learn Rust.


> Rust is still a research language.

It's not. Rust is being used for production systems by several companies, so it hardly counts as a "research language".

> Anyone trying to fix/debug things in the kernel now has to learn Rust.

Wrong again. If a driver for some SD card reader is written in Rust, then you won't come in contact with this if you're working on the IPv6 subsystem.


> It's not. Rust is being used for production systems by several companies, so it hardly counts as a "research language".

I don't know that Rust was ever a research language in the first place; maybe they mean "in development", as in pre-stability?


There's some history in this Twitter thread: https://twitter.com/graydon_pub/status/958192076209897472

The 2005-ish "one man side project" and "searching for its niche" phases do sound researchy.

Interesting quote: "Funny enough, the zero cost obsession really wasn't me. I was ok paying some runtime costs. We just got overrun by C++ folks :P"


I don't think it's fair to say Rust is still a research language any more than it's fair to say the ongoing evolution of C++ makes it a research language. Rust is being used to engineer production software (parts of Firefox, Tor and Microsoft Azure), and its development is aimed mostly at practical and ergonomic issues, rather than experimental or esoteric features.

I agree that there's a cost to introducing a new language to a codebase. On the other hand, there's a cost to being stuck with only C, which manifests as security vulnerabilities and high cognitive overhead for want of better abstractions. I believe the Linux maintainers are taking a sensible and conservative stance here in allowing use of Rust, but not allowing Linux to depend on it.


> I don't think it's fair to say Rust is still a research language any more than it's fair to say the ongoing evolution of C++ makes it a research language

I would accept that. The idea that C++ code from ten years ago can't always interoperate with C++ code written today is indeed a problem, but I think it's a very different problem that you're running into if you're trying to use Rust applications and libraries built ten years ago.

> there's a cost to being stuck with only C, which manifests as security vulnerabilities and high cognitive overhead for want of better abstractions

There's also a cost with abstractions. People who use Rust thinking it has "zero-cost abstractions" should probably get help crossing the street as well.

> I believe the Linux maintainers are taking a sensible and conservative stance here in allowing use of Rust, but not allowing Linux to depend on it.

100% agreed. Rust may indeed be the future, but the only way we'll know for sure is if we try it.


C and C++ were also being used in production with partial support for K&R, ISO C and C++ ARM.

Hardly any different.


Check out the HotOS'19 paper from our lab :) https://danyangzhuo.com/papers/HotOS19-Stone.pdf



Could we have universal standardized hardware interfaces for the basic functionality of hardware devices? For example, at least get the same programming interface for the basic functionality of WiFi chips. Then one could have more accelerated drivers which make use of all the specific hardware features of the chips.

Network chips have started to offer a common programming API, as far as I understand: the Switch Abstraction Interface (SAI). Could one do this for hardware classes other than networking?


Existing languages in the Linux kernel - https://www.openhub.net/p/linux/analyses/latest/languages_su...

Unsurprisingly it's mostly C (95.6%) with a bit of C++ (2%) and Assembly (1.6%) and tiny bits of make (0.2%), shell scripts (0.3%), Python (0.1%) and Perl (0.2%). I'd guess it'd be at least half a decade before Rust cracks 1% here.


> with a bit of C++ (2%)

That sounded strange to me, since the dislike of C++ within the Linux kernel is well-known, so I took a quick look. All uses I found were on user space tools, not in the kernel itself, and all of them (except for a "check if this compiles" test file) are in C++ because they have to call into libraries with a C++-only API (LLVM and Qt).


AFAIK the kernel itself is only coded in C and assembly. Shell, Python, Perl and make are for building and tools.

The Linux kernel does not support C++ (its exceptions, RTTI, etc.). The only C++ I found was related to the perf tool; 6 of the 7 .cpp files had "test" in their name.


I think setuid executables would be a good place for rust or go, but I have reservations about the kernel itself.


Genuine question: who restricts the sourcecode language in which a kernel driver is written?

I'm thinking machine opcodes in a .o file, to be linked wherever, are only a concern if they're doing function-call linkages (and maybe kernel-space vs. user-space transition bookkeeping) improperly?


C++ written with -pedantic -Wall, smart pointers, and clang static analysis tools, ASan, valgrind, etc. enabled is just as safe as Rust. Change my mind.


Valgrind is runtime... You're not going to catch a double free that you don't hit on the common path.

Not to mention race conditions in memory accesses.


Does C++ do any setup of signal handling, differently than Rust (which, I am guessing, doesn't)?


Are smart pointers thread safe?


Did you talk about that to Linus :):) ?


I have Rust on a long backlist of things to check out, but the idea of writing kernel drivers in something other than C is interesting to say the least.


Give it as an exercise to college grads


[flagged]


in 2025 npm will be a dependency for your kernel build


This is the running MO for comp sci: adding loads of complexity for a minor gain.

Compile-time borrow checking works for self-contained applications, where the compiler has a complete picture of what's going on. Move that model inside kernel space--where blobs are being mutated across separately compiled modules, across different chipsets (CPU/DMA/GPU), sometimes even in parallel--and you might as well wrap the whole thing in a big `unsafe` block.

Somebody's going to say "Oh, you're exaggerating. It's not that bad in Redox." Redox doesn't have to integrate with 28 years of kernel written in C.


Rust borrow-checking doesn't depend on a closed-world assumption. Federico converted librsvg to Rust and it worked fine.


"Foreign functions are assumed to be unsafe so calls to them need to be wrapped with unsafe {} as a promise to the compiler that everything contained within truly is safe. C libraries often expose interfaces that aren't thread-safe, and almost any function that takes a pointer argument isn't valid for all possible inputs since the pointer could be dangling, and raw pointers fall outside of Rust's safe memory model."

https://doc.rust-lang.org/nomicon/ffi.html


Not sure what you're getting at. Rust can't catch all bugs, so it's not worth having it to catch the ones it can?


It is right there in black-and-white: 28 years of C to be interfaced with -> more FFI -> more unsafe blocks -> less that Rust can verify.

I don't think there will be enough left outside of unsafe blocks to justify a second toolchain here. If the kernel were also in Rust, it would be a different story.


It's quite the opposite. Wrapping existing C code in Rust FFI can make it safer.

Rust side can add missing type information to C interfaces. Things that are "RTFM" for C, such as thread safety of the types involved and which function arguments are borrowed/owned, can be expressed on the Rust side even for C code, and automatically enforced when it's used via FFI. This adds a layer of safety to existing C code.


"Using C libraries in a portable way involves a bit of work: finding the library on the system or building it if it's not available, checking if it is compatible, finding C headers and converting them to Rust modules, and giving Cargo correct linking instructions. Often every step of this is tricky..."

Adding loads of complexity (i.e. more opportunities to screw something up) for a minor gain.


Note that this mess isn't created by Rust. The difficulty of building arbitrary C libraries on all platforms comes from C.


Look at what the people working on Rust-in-Linux are actually doing. They are writing safe Rust wrappers around internal Linux APIs that drivers use and the Linux interfaces that drivers implement. Then you can write a driver in Rust that needs little or no unsafe code of its own. Better still, then you can write ten drivers in safe Rust reusing the same glue.

And honestly, writing safe Rust wrappers around C APIs is not hard to do. The community has tons of experience with this. It does not wreck the coherence of the safe Rust code that uses those wrappers, like you seem to think.


Pardon the slightly fanboyish comment, but I fail to remember a single negative response regarding Rust. Considering its domain I'm more than impressed.


There are plenty of negative comments about Rust, especially about borrow checker complexity and compiler speed. Maybe you're not paying attention to them?


Also executable size because of monomorphization. I'm kinda worried that the top comment by goefft mentions an example module that uses Serde which is particularly vulnerable to this kind of bloat.


Sorry I am not familiar with monomorphization. What does it mean in the context of Rust?


If you define a generic function, the function is "duplicated" for each concrete type it's used with. If the function is large, or it's used with many different types, that leads to a lot of semi-duplicates and extra code in the binary. For instance there will be one version of Vec::push for each type used in a Vec<T>.

A common pattern to mitigate this (though not always applicable) is a generic trampoline to a monomorphic function for the cases where the genericity is mostly a matter of convenience e.g. let's say that your function works on strings, but for convenience you take a `T: AsRef<str>` (callers can pass anything from which a string reference can be created cheaply), the first thing you do is `let s = s.as_ref()` so the vast majority of the function is monomorphic (it takes only one set of concrete types as input). Rather than this:

    fn my_function<T>(s: T) where T: AsRef<str> {
        let s = s.as_ref();
        // do stuff with s: &str
    }
you can do this:

    fn my_function<T>(s: T) where T: AsRef<str> {
        _my_function(s.as_ref());
    }
    fn _my_function(s: &str) {
        // do stuff with s
    }
unless rustc decides to inline just _my_function into my_function (which would be quite odd), this leads to very little monomorphization and bloat.


This was quite educational, thanks!



No, but I kinda think that the benefits outweigh the costs.


I'm quite bullish on Rust. But note that it generally takes a while to find the "bad" parts of a technology, as they often are unintended consequences of "good" parts (which are clear and designed up front). So much that I'd consider it a sign of a mature technology when the bad parts are well-understood.


There are two types of languages: those people complain about and those nobody uses. Rust is closer to the latter at this point in time. Outside a few filter bubbles like HN, Rust is virtually unknown or little thought about.

I think the biggest issues are the reliance on cargo (the systems language that ignores your system libraries) and the lack of a stable ABI; this is why it will never replace C. Many people find the borrow checker aggravating. Even Rust fans seem to find the compile times far too long. As with all languages, some will find it too high level and others too low level, should it throw exceptions or should it use error codes, etc.

And let's not forget the rust fans brigading any disagreement.

edit - do you mean in general or for this specific issue on the LKML, my reply assumed in general.


> Outside a few filter bubbles like HN rust is virtually unknown or little thought about.

It's still pretty new, but it's gaining adoption quicker than what I would have expected. It's now used in Google, Facebook, Amazon, Dropbox and Microsoft. Most of the time it's for some niche or experimental project, but it's still an interesting trend.

> I think the biggest issues are reliance on cargo (the systems language that ignores your system libraries)

This is not accurate. There is no "reliance" on cargo, you can just use plain rustc and it would work. And you can of course link dynamically to a system's library: by default Rust's binaries are even linked with the system's libc actually, and AFAIK most external C dependencies are linked that way (bindings to openSSL for instance).

> and lack of a stable ABI

Many people would love to have a stable ABI, but it's way too early for that because it would freeze Rust development, while there are still a lot of things that need to be improved in the language.

I agree with you on compile times though and I hope things continue to improve in that regard.


> gaining adoption quicker than what I would have expected

Basically that's what I was trying to say but failed somehow.


Even though you probably just picked some language design aspects as random examples...

Rust doesn't have recoverable exceptions. It has fancy error codes. It is also quite low level in the same spirit as C++: only pay for what you use, and what you use should not be possible to write more efficiently by hand.


> Rust doesn't have recoverable exceptions.

Playing devil's advocate: when compiled with panic=unwind, it has. However, the fact that when compiled with panic=abort they turn into unrecoverable exceptions tends to make the "should it throw exceptions or should it use error codes" choice clear.


We're talking use cases.

In Rust, you'd use Result where in C++ you would use exceptions.

Panic is equivalent to assert and should not be used where you'd use exceptions in other languages.
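A minimal sketch of that split (the `parse_port` function is a hypothetical example, not from the thread): expected failures go through `Result`, while `assert!`/panic is reserved for invariant violations that indicate a bug.

```rust
use std::num::ParseIntError;

// Recoverable error: return a Result and let the caller decide,
// the way C++ code might use an exception.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // The caller handles the error explicitly instead of catching it.
    match parse_port("8080") {
        Ok(p) => println!("port {}", p),
        Err(e) => eprintln!("bad port: {}", e),
    }
    assert!(parse_port("not a port").is_err());

    // A panic is for bugs, like assert in C: reaching it means an
    // invariant was violated, not that an expected failure occurred.
    let v = vec![1, 2, 3];
    assert!(v.len() == 3);
}
```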


C++23 might get value type exceptions, and the design is even being discussed across ISO C working group, so that will eventually change.


> Outside a few filter bubbles like HN rust is virtually unknown or little thought about.

FWIW I had been hearing buzz about Rust from the community that I work with regularly (including the occasional "I'm working on rewriting $COMPONENT in Rust") long before I started frequenting HN.


I don't know much about rust but it seems like most of what I hear comes from fans so I end up doubting everything I hear.


Not a comment on Rust, but this sort of thinking appears strange.

You hear "X is good" from 7-8 people. You dismiss these people as fans lacking objectivity.

If you now hear "X has shortcomings" from 2-3 other people, do you now tell yourself "Yes I knew it all along. X isn't as good as the fans said it was" or do you think "Yeah, now that I have other opinions I can say that the majority of people think X is good and therefore it's probably good".

If it's the first, that's just confirmation bias. If it's the second, that's better but I don't get it - why does the mere presence of a contradicting opinion make the first opinion more valid?

In any case wouldn't you be better off trying X first hand and forming your own opinion?


Fan is short for fanatic. Ok it's not used much like that any more.

To expand what I mean, when I recall what I have seen people say on Rust, some things strike me: overwhelming positivity, lack of depth, and it-cures-all-ills.

In a word, hype.

I don't owe anyone my attention to do some determination about how valid this or any other hype is.

If I hear people talking about something and the loudest and most numerous voices are hype, I think it is fair and rational to have a solid doubtful bias much moreso than if conversation seemed more varied, substantial, and rational.

Things that attract more rational people than hype-r people correlate with quality.


The dichotomy between hyped and rational people is kind of artificial though. And in a crowd, the most loud echo will always be the hype.

There are bad projects with a lot of hype, bad projects with little hype, good project with little hype and good project with a big hype. Hype just isn't a good proxy for quality.

But hype is often a good proxy for success though: no matter how good your product/techno is, it's going to fail if there is no traction. Overall, if you think a product is good, it's good news if there is hype about it also. (And I think it's exactly what's happening with Rust: a good tool with a good amount of traction. But yeah, fanatics are always boring)


Similarly though, those that are hyped about something are more likely to speak louder than those who are just interested in using it. Looking from the outside you are likely to get a very biased sample.


If you haven't heard criticisms of Rust, you haven't heard very much of Rust. If you give it more than a superficial glance, you will find that there are issues with Rust - build times, build artifact sizes, ability to integrate with non-cargo build systems, cross-platform compatibility, difficulty in finding developers proficient in Rust, time to train engineers new to Rust, FFI with C++ not being good and on and on. Even the very article we're discussing is by a person who is unhappy with the FFI experience and wants to improve it to unlock contributions to Linux.

What makes people excited about Rust is not that it's perfect right now, nor do people claim that it's perfect. They're excited because they've seen a lot of progress in the last 3 years and see a clear path for it to become better in the next 3.

Almost every blog post I've read mentions shortcomings. Take one I read today from way back in 2017 (showing that people criticising Rust is not new) - https://onesignal.com/blog/rust-at-onesignal/. While they are very happy with their choice, they mention

1. Build times being very long. This is partly the fault of rustc generating verbose LLVM IR and partly because of code-gen macros like the ones used by serde (the crate that makes json parsing easy)

2. Libraries such as async clients for postgres and redis not being available. Http clients being immature and having to spend time improving them.

3. IDE experience being poor.

Of course, 2.5 years later many of these issues are no longer true.

* Build times have improved 8-30% depending on the project in 2019 (https://blog.mozilla.org/nnethercote/2019/07/25/the-rust-com...). Incremental builds are a thing now, as are check builds (they generate no artifact, just make sure your code is compilable).

* Many more libraries available. Certainly all the basics will be covered once async-await becomes stable and all network libraries add support for it.

* RLS is reasonably good, IntelliJ-Rust is better and rust-analyzer is being worked on. Current situation is much better than the days of using Racer and will likely improve within 2 years when rust-analyzer is mature.

So there you go - an example of a Rust "fan" pointing out its shortcomings 2.5 years ago, and the ways in which Rust has improved since then.

If your issue is that you see Rust being mentioned by fans a lot, it's likely because we keep reading stories about vulnerabilities that happen over and over, issues that wouldn't exist in a Rust code base. If these vulnerabilities were less common, maybe we wouldn't see Rust being mentioned so much.


Usually, when something is being overly hyped by too many people, it means that they are falling victim to good marketing and the actual improvements are mediocre at best. A real, honest improvement comes only when there are clearly communicated downsides and it is obvious that their impact is tolerable. Rust fanboyism is like git fanboyism in that both claim to have the one true solution. It didn't hold for git, so being cautious and jaded is the right attitude at this point.


It's just that Rust is getting a good reception in many places. Consider this: C++ was roasted by Linus (IIRC), but here Rust is considered an option for drivers.


C++ was also considered an option for the whole kernel; there were a few releases which could be compiled as C++. The C++ option was dropped because the generated code was slower.


Apparently it is/was fast enough for Be, Apple, Google, ARM, Microsoft, Nokia OSes, IBM.


> I fail to remember a single negative response regarding Rust

That's because the primary (and so far only) use for Rust is as an excuse to avoid learning C++.

Nobody actually writes real software in it.

As you doubtless already know, the programming language everybody likes is a programming language nobody uses.


I used C++ for 15 years before I wrote a line of Rust, and I absolutely love Rust. What does that say about your theory?


It quite handily demonstrates the truth of my theory, no?


Looking forward to enough Rust adoption that people stop seeing it as a magic bullet. Once it's in large scale use and there are still errors that lead to security problems maybe people will get over themselves.


Nothing's ever perfect but there is empirical evidence that Rust code is far less vulnerable than C or C++ code. For example this list of bugs found via fuzzing: https://github.com/rust-fuzz/trophy-case Almost none of those bugs were security-sensitive. Compare to similar lists for C and C++ projects, e.g. http://lcamtuf.coredump.cx/afl/.


I am not exactly sure of the point of this comparison. The OP got it right: there is not enough adoption. Of course you will find fewer programs with bugs, because there are fewer programs written in Rust! That, and in the case of rewrites, they are almost never complete functionality-wise.

I see that there is one use-after-free bug, and lots of out of range access bugs. How does that happen?


Fuzz bugs in C and C++ code very often produce exploitable memory corruption. Fuzz bugs in Rust code very rarely produce exploitable memory corruption. This is independent of the absolute volume of code that you test or the total number of bugs that you collect.

> I see that there is one use-after-free bug, and lots of out of range access bugs. How does that happen?

The use-after-free bug was in explicitly unsafe Rust code (via the "unsafe" keyword). I assume that code was intended to be a performance optimization, but I haven't looked at the details.

An "out of range" bug here is typically an array-index-out-of-bounds, which results in a panic (safe abort) at run-time. Panics aren't great but they're not exploitable.
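A quick sketch of that distinction: out-of-bounds access in safe Rust either returns `None` or panics deterministically, rather than silently reading adjacent memory as C would.

```rust
fn main() {
    let v = vec![1, 2, 3];

    // Non-panicking access returns Option instead of corrupting memory:
    assert_eq!(v.get(10), None);

    // Indexing out of bounds panics (a safe, defined abort); here we
    // catch the unwind just to demonstrate that it is a panic, not UB:
    let result = std::panic::catch_unwind(|| v[10]);
    assert!(result.is_err());
}
```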


> I see that there is one use-after-free bug, and lots of out of range access bugs. How does that happen?

The use after free [1] is because of unsafe code, specifically, a function that says "trust me, I know what the type of this is."

I only checked a few of the "out of bounds" bugs, and they all are panics [2], which are not memory unsafe. Of course, someone could cause a real one with the use of unsafe.

1: https://github.com/shepmaster/sxd-document/issues/47#issueco...

2: https://github.com/image-rs/image-tiff/issues/28


I know that I could probably find this out myself but is there a command-line switch that would make the use of unsafe {} a compile-time error? Of course the alternative is just a quick grep for it, but personally I would feel safer if the code did not compile at all in the presence of a compiler switch/flag and unsafe blocks.


So, there is, but it only works for your code. You can do it via an attribute in the source (#![forbid(unsafe_code)]) or via a command line flag (cargo rustc -- -D unsafe_code).

This will not check your dependencies. You can use other tools, like cargo-geiger https://github.com/anderejd/cargo-geiger to check your dependencies.

And of course, at the lowest levels, doing anything useful requires unsafe, since your operating system doesn't expose a Rust API to do tasks.
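A minimal sketch of the attribute form. With `forbid` (unlike `deny`), the lint level cannot be re-lowered by an inner `#[allow]`, so any `unsafe` block in the crate is a hard compile error:

```rust
// Crate-level attribute: must appear at the top of the crate root.
#![forbid(unsafe_code)]

fn double(x: i32) -> i32 {
    // unsafe { ... }  // uncommenting this would fail to compile
    x * 2
}

fn main() {
    assert_eq!(double(21), 42);
}
```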


Thank you! I will use that attribute wherever I can.


Someone else mentioned it in another thread, but the propagation of issues related to code in unsafe regions to other sections of code that are nominally safe is an ongoing research area. It sounds really cool, though. Would love to hear more about it.


I think that's a bit exaggerated. The limits are known: unsafe code can cause problems when combined with any other code in the same module. If you use unsafe, you need to check the whole module, not just the unsafe block. This has been known for a long time, but people tend to not go into detail when talking about this because it's laborious and not really the point.


I get it. Languages are where major security gains are going to be made. Everyone knows that. That being said, the proselytizing is obnoxious.


That's fair. I'm probably guilty of that sometimes.

Personally, I find it obnoxious when people start new projects in C code because they're comfortable with the language and unwilling to learn something better, and those projects contribute to the toxic cesspool of vulnerable software that is undermining civilization.


> I find it obnoxious when people start new projects in C

Indeed, how dare they spend their own free time doing things you find obnoxious? The scoundrels!

> and unwilling to learn something better

They also seem to not be willing to blindly accept your judgement instead of their own in terms of how best to implement things they wish to! How inconsiderate of them!


> Indeed, how dare they spend their own free time doing things you find obnoxious?

That's not the obnoxious part. The obnoxious part is where their vulnerable software gets deployed and subverted to harm to third parties.

> They also seem to not be willing to blindly accept your judgement

I certainly wouldn't ask anyone to blindly accept anyone else's judgement. The evidence is in.


Okay. Don't use their software. Unless somebody is making you? You are entirely free to write your own versions in whatever language you want.

Speaking the way you do, though, you just sound like an ingrate. Take a moment to thank all those people writing that software you use, instead of telling them that they are being obnoxious.


When someone writes a buggy C library and someone deploys that library in some set of devices, which are compromised and used as part of a botnet that DDoSes my site, I'm hurt through no choice that I made.

When I buy something from a shop and their database gets hacked via some bug and my data is extracted, I'm hurt even though I had minimal control over the situation.

Vulnerable software is a negative externality, like air pollution.


You are free to not buy any product or use any service that uses code not written in the language of your choice.


Great, does the programming language now come listed on the box, right beside the EULA?


1. You're directly replying to someone who pointed out that not choosing a product with vulnerable code doesn't protect you.

2. When it comes to your own choices, information about what language is used by every facet of a product/service isn't usually available.

3. Even when information is available, most people don't and won't choose stuff based on language anyway, which means if you do care, you can't expect a viable alternative to be produced by the market.

So the standard libertarian advice is inapplicable in a triply redundant manner.


Sure, but the toxic cesspool that is undermining civilization has less to do with software and more to do with the way that we have organized civilization.

I can understand what you mean, though. It's really lame to hear "that's how we've always done it". My personal opinion, however, is that languages like Rust are still taking the wrong approach, because we should be making the computers manage the memory for us. Much of the guarantees from Rust's type system could also be gained by more advanced garbage collection, or by designing hardware in conjunction with garbage collectors to get around some of the thornier issues in that field.

If you could take out the perceived drawbacks of garbage collection, then there isn't a reason to do manual memory management. That's my personal soapbox, though, and I could go on.

Cheers.


One of the key differences here is that there are resources other than memory. GCs can help you manage memory, but not other kinds of resources that also need to be managed.


I don't mean to sound dense, but couldn't all resource management be automated to a large degree? It's not my area, so I am really asking, not trying to be a smart a.


That's what languages like Rust are trying to accomplish, yes. But it attempts to do so by unifying the systems used to manage these resources; what you're advocating is using GC for memory, which means you need two systems now: the GC for memory, and something for everything else. Languages like Rust say "that's too complex, let's use one system for all things."


I get that, but why couldn't you use something like GC for resource management. Maybe I sound stupid here, but it seems like the kind of task that machines would be better at than humans.


You can! The performance is not as good. For many applications, you don't have to care about performance. Rust is specifically designed to get maximum performance, so it can't be the default mechanism.

(Swift basically does what you're talking about, for example)


Sweet. Thanks, pal.


I agree that not having to reason about lifetimes can lower the cost of software development. However, GC is a very well-developed field, both in research and in practice, and I don't expect any game-changing breakthroughs there in the foreseeable future. All GCs impose significant performance tradeoffs --- forcing you to take a hit on at least one of (worst-case) latency, throughput, and memory overhead. Apart from pure reference-counting, they require some sort of runtime support and make it difficult to write code that can be used as a library by non-cooperative applications (e.g. C programs). (Hardware support for GC can be interesting, but then you have the problem that there's no one-size-fits-all GC that works well for all languages and workloads.)

I agree that for some applications the overheads of GC are fine and wrangling lifetimes is tricky enough you should use a GC-based language, not Rust (and probably not Swift either, since it's easy to write Swift-style reference counting in Rust).

However --- Rust lifetimes are about more than just memory safety and performance. They also enable data race freedom and strong control over aliasing, which prevent other kinds of bugs and can enable higher degrees of optimization. They enable affine types, which let you write APIs with very nice static guarantees, e.g. that a Nonce value is only used once. I don't think we're fully exploiting Rust lifetimes yet.

It's also important to remember that for many applications Rust lifetimes are hardly any burden at all. For lots of code I write --- serial, synchronous code manipulating off-the-shelf data structures where the overall shape of the heap is tree-ish, or DAG-ish and reference counting is acceptable --- I hardly have to think about them at all.
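A sketch of the "used only once" idea mentioned above. `Nonce` and `encrypt` are hypothetical names for illustration (and the "cipher" is a toy XOR, not real cryptography): taking the nonce by value consumes it, so the compiler statically rejects any second use.

```rust
struct Nonce(u64);

// Taking `nonce` by value (affine/move semantics) means the caller
// gives it up: reusing the same nonce is a compile error, not a bug.
fn encrypt(nonce: Nonce, plaintext: &[u8]) -> Vec<u8> {
    plaintext.iter().map(|b| b ^ (nonce.0 as u8)).collect()
}

fn main() {
    let n = Nonce(42);
    let ct = encrypt(n, b"hi");
    assert_eq!(ct.len(), 2);
    // encrypt(n, b"again"); // error[E0382]: use of moved value: `n`
}
```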


We are well aware that Rust isn't a magic bullet.

Likewise we are also aware that 68% of Linux kernel exploits in 2018 were caused by memory corruption bugs easily preventable in any systems language with bounds checking enabled by default.


I normally like your comments.

And, sorry, we don't all know that it isn't a magic bullet. Thus the people who seem personally invested in this stuff like it's a conspiracy and the one true thing is being kept down by the man.

Again, I've seen your other comments. You know full well that it's a problem that other systems tackled with language and hardware support. Why is Rust the one true way when there are other approaches that have worked in the past?

Did you see that other comment about using C as "a toxic cesspool that's undermining civilization"? That's a tad much, right? It's a type of technical myopia. Software isn't undermining civilization. We are, by having perverse incentives in multiple areas of life that make it more profitable to prioritize one thing over another.

It's also the same kind of stupid that makes people think they are a freedom fighter because they use Debian.

It's also boring. So, yeah. Thanks.


I might be biased, as I develop kernel drivers and other low level stuff that must just work. In C and C++. Sometimes some assembler.

I've seen very little C/C++ code where I can't find tons of bugs. I've been in the business for over 25 years and gotten tired of this a long time ago. Enough is enough. Even the smartest people in business don't seem to be able to use C/C++ safely. (Yes, C++11 etc. did improve matters, but not enough).

Personally I'm happy there's finally at least one language that is effectively runtimeless, C/C++ like performance, significantly more secure than C/C++, C ABI compatible and has reasonable momentum. Rust might be it.

It doesn't need to be a silver bullet for everything, just give me more memory and concurrency safety.

If it's Rust, fine. If it's something else realistic, fine. It's not cure all, but borrow checker is clearly a major improvement for low level safety.


As you might have noticed I wrote "any systems language with bounds checking enabled by default.", so no Rust is not the only true way.


Wouldn't modern C++ be better suited than Rust because it's closer to C?

I know that old C++ got some bad reputation in the past but I think it's time to reconsider that with the changes that started with C++11. The only area where Rust really is better is the management of lifetimes and its associated higher memory safety.


I'll bite.

C++ has quite a few powerful features that Rust doesn't have, including better platform support, but Rust has a lot of things going for it that would make it well suited to kernel development:

* no exceptions, and language features and a type system that is set up for explicit error handling - the kernel would need to disable C++ exceptions anyway, making error handling just as cumbersome as in C.

* the ML inspired type system helps to write correct code (eg sum types)

* the language is not riddled with UB and legacy C support induced cruft. You can opt in to unsafety with `unsafe` blocks, but these can be tightly scoped to where it's necessary and can be especially scrutinized

Especially in the context of a kernel, the increased safety guarantees are worth a lot.
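As an illustration of the sum-type point above (the variant names here are hypothetical, not real kernel APIs): an enum models mutually exclusive error cases, and `match` forces every one to be handled at compile time.

```rust
// An ML-style sum type: a value is exactly one of these variants.
enum ReadError {
    NotReady,
    Timeout { ms: u64 },
    BadAddress(usize),
}

fn describe(e: &ReadError) -> String {
    // Omitting any variant here is a compile error, so new error
    // cases can't be silently ignored the way a C errno can.
    match e {
        ReadError::NotReady => "device not ready".to_string(),
        ReadError::Timeout { ms } => format!("timed out after {} ms", ms),
        ReadError::BadAddress(addr) => format!("bad address {:#x}", addr),
    }
}

fn main() {
    assert_eq!(describe(&ReadError::NotReady), "device not ready");
    assert_eq!(describe(&ReadError::Timeout { ms: 5 }), "timed out after 5 ms");
}
```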


On the flipside, the kernel developers are a group of people with a detailed understanding of what makes C unsafe, and how to watch out for it in code reviews. I'm not saying that every bug gets caught in code review, not by a long shot, but the kernel developers as a group don't have any experience with reviewing Rust code, let alone reviewing it for unsafe or undefined behavior.

We're still pretty early in Rust's lifecycle, and there is still a lot to be learned about what `unsafe` can really break, especially at-a-distance. The UCG WG is doing this work in [1], but I can understand if the kernel developers want to hold off on using Rust for more central parts of the kernel until this work is farther ahead.

[1] https://github.com/rust-lang/unsafe-code-guidelines


In my uninformed opinion, it's fairly simple to write Rust code that's easy to review. Write unsafe as little as possible and if you need to, put that unsafe block in as small a module as possible. That's mostly it.

I think it might be at least half a decade before Rust starts being used in more central parts of the kernel (if at all), because adding a second language significantly complicates the build process. Also, it is blocked on Rust supporting all the platforms that Linux supports just as well. It would be disastrous for Linux to drop support for its long tail of platforms because Linux could no longer be built for those platforms.
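A sketch of the "small unsafe module" pattern: the one unsafe block lives behind a safe API that checks its own precondition, so reviewers only need to audit that module.

```rust
// Hypothetical module confining the crate's only unsafe block.
mod raw {
    pub fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: we just checked that index 0 is in bounds.
            Some(unsafe { *bytes.get_unchecked(0) })
        }
    }
}

fn main() {
    assert_eq!(raw::first_byte(b"abc"), Some(b'a'));
    assert_eq!(raw::first_byte(b""), None);
}
```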


> Write unsafe as little as possible and if you need to, put that unsafe block in as small a module as possible. That's mostly it.

I don't have links at hand, but there were already instances where a bug in an `unsafe` block had effects in completely different (and seemingly random) places. Discovering all the ways in which `unsafe` blocks can cause unsafe or undefined behavior in unrelated places is still an active field of research.

> It would be disastrous for Linux to drop support for it's long tail of platforms because Linux could no longer be built for those platforms.

Which is why driver modules are a good place to start. Drivers are specific to certain pieces of hardware which are oftentimes only used with CPUs of a specific ISA.


> Discovering all the ways in which `unsafe` blocks can cause unsafe or undefined behavior in unrelated places is still an active field of research.

unsafe blocks can cause UB, period. UB means the program is broken, but the effects can manifest anywhere.

C or C++ don't make this any better, they just make the entire program into a source of UB.


Exactly. But many Rust proponents do not communicate that clearly. They often make it sound like unsafe blocks contain the undefined behavior and prevent it from spreading to the rest of the program, which they don't.


The important distinction is cause vs effect.

Only code in unsafe blocks can cause unsafety (ignoring already and yet to be discovered soundness bugs in the compiler [1]). But the effect can easily materialize in any location that uses the unsafe code, or types that go through it.

To me this is somewhat obvious, but it's true that this is easily overlooked and should be communicated.

To express this better in the type system, Rust would need an effect system.

[1] https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Ais...
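The classic illustration of cause vs effect is `Vec::set_len`: the unsafe block below is where a bug would be *caused*, but if its precondition were violated, the breakage would surface later in perfectly safe code.

```rust
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(16);

    // Initialize the first 8 bytes through the raw pointer.
    for i in 0..8 {
        unsafe { v.as_mut_ptr().add(i).write(i as u8) }
    }
    // SAFETY: the first 8 elements were just written above. If this
    // claim were false, the UB would be caused *here*...
    unsafe { v.set_len(8) }

    // ...but observed here, in safe code that merely indexes the Vec.
    assert_eq!(v[0], 0);
    assert_eq!(v.len(), 8);
}
```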


That's very true. The value of unsafe blocks is that they restrict the number of places you need to inspect / audit for UB.

They're just places where you're telling the compiler "I know what I'm doing"; once an unsafe block has created UB, things can break anywhere.


I don't think that's really true. Let's suppose that your unsafe code has a precondition that causes UB if not met. If this is a bug and you can remove the precondition, great, but is it always possible without a performance issue? If not, then you have to audit all the usages of the unsafe part.


> I don't think that's really true

It is though.

> let's suppose that your unsafe code has a presupposition that cause an UB if not met. If this is a bug

It is, or the code should not present as being safe.

> but is-it always possible without performance issue? If not, then you have to audit all the usage of the unsafe part.

If it's not possible to fix it (or if you don't want to fix it) then the wrapper for that unsafe code should also be unsafe, and it should document its assumptions such that callers can know what to look for.

The tautological contract is that safe rust is safe. If it's possible to trigger UB by passing the "wrong" value to a rust function then that function is not safe and must be marked as unsafe itself. An unsafe block means the compiler trusts that you know what you're doing, which is different from lying to the compiler, which is what you're apparently advocating / defending.
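A sketch of that contract in code (the function name is hypothetical): when a precondition is pushed onto the caller, the function itself must be `unsafe`, with the assumption documented so call sites know what to verify.

```rust
/// Returns the element at `index` without bounds checking.
///
/// # Safety
/// The caller must guarantee `index < slice.len()`; otherwise this
/// is undefined behavior. Because the obligation is on the caller,
/// marking this function safe would be lying to the compiler.
unsafe fn get_unchecked_copy(slice: &[u32], index: usize) -> u32 {
    // SAFETY: forwarded from this function's own contract.
    unsafe { *slice.get_unchecked(index) }
}

fn main() {
    let data = [10, 20, 30];
    // SAFETY: 1 < data.len().
    let x = unsafe { get_unchecked_copy(&data, 1) };
    assert_eq!(x, 20);
}
```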


> On the flipside, the kernel developers are a group of people with a detailed understanding of what makes C unsafe, and how to watch out for it in code reviews. I'm not saying that every bug gets caught in code review, not by a long shot, but the kernel developers as a group don't have any experience with reviewing Rust code, let alone reviewing it for unsafe or undefined behavior.

I believe the kernel core developers are good programmers and good at code reviews. That said, a huge proportion of Linux CVEs are memory-safety problems -- use-after-free, race conditions, out-of-bounds access, etc -- which do not exist in safe Rust.

> I can understand if the kernel developers want to hold off on using Rust for more central parts of the kernel until this work is farther ahead.

I can understand this too! It takes time for large communities to change, and the only real research we have on `unsafe` is the RustBelt paper, which demonstrates that the concepts of the borrow checker are sound provided that `unsafe` code respects its invariants. The way this framework has been pitched, though, is for building optional modules. If everyone takes this seriously, I think it'll result in wins all around -- Linux benefits from memory-safety improvements, Rust benefits from kernel developers' experience, and the world benefits from having more secure code running in ring-0. I'm looking forward to this.


If this is true, how come we have this many security bugs?


Security was never high in priority list of Linux kernel development actually.


Actually OS kernels and Linux kernel in particular have many not obvious requirements. Consider easy disassembly for example.

> The code generation part ends up being nice when something goes wrong. When somebody sends in an oops, I often end up having to look at the disassembly (and no, a fancy debugger wouldn't help - I'm talking about the disassembly of the "code" portion of the oops itself, and matching it up with the source tree - the oops doesn't come with the whole binary), and then having code generation match the source makes things a _lot_ easier.

https://yarchive.net/comp/linux/error_jumps.html


> Wouldn't modern C++ be better suited than Rust because it's closer to C?

I think the point is that it would be worse because it's closer to C.


Would kernel development be one of the use-cases where the security benefit of Rust's safety guarantees would outweigh the higher performance ceiling you get with C++? I don't know much about kernel development, so I would be curious what experts think about those tradeoffs.


There's really no evidence that C++ has a higher performance ceiling. From a broad view, both are compiled with no runtime. More specifically, Rust and C/C++ trade the lead constantly in the language benchmarks game. In some cases, Rust can be faster because it makes stack allocating easier. In others, C++ can be faster because of specialization (which Rust doesn't have yet).


Oh really? It's hard to believe that C++ could not achieve better performance with hand-tailored memory management than safe rust. I can understand that unsafe rust should perform about on par with C++.


Rust has some extra costs, but it also has advantages over C and C++.

For example, the Rust compiler can and does reorder fields in structs to improve packing. In C and C++ you have do it manually, and for C++ templated types you sometimes can't pick an order that's optimal for all type parameters. (Rust can pick different field orders for different monomorphized types.)
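The reordering is easy to observe with `size_of` (the exact default layout is unspecified, so the 16-byte figure below is what current rustc produces, not a guarantee):

```rust
use std::mem::size_of;

// Default repr: rustc is free to reorder fields (b, a, c) to pack
// both u8s after the u64; current rustc makes this 16 bytes.
#[allow(dead_code)]
struct Reordered {
    a: u8,
    b: u64,
    c: u8,
}

// repr(C) must keep declaration order: 1 + 7 pad + 8 + 1 + 7 pad = 24.
#[allow(dead_code)]
#[repr(C)]
struct DeclOrder {
    a: u8,
    b: u64,
    c: u8,
}

fn main() {
    assert_eq!(size_of::<DeclOrder>(), 24);
    assert!(size_of::<Reordered>() <= size_of::<DeclOrder>());
    println!("Reordered: {} bytes", size_of::<Reordered>());
}
```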

The Rust compiler does other nice representation optimizations, e.g. Option<bool> is represented as a single byte with three possible values. Not just a hack for Options, but general.
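This "niche" optimization is directly observable: `bool` has only two valid bit patterns, so `None` fits in a third pattern, and references can never be null, so `Option<&T>` costs nothing over `&T`.

```rust
use std::mem::size_of;

fn main() {
    // None is stored in a spare bit pattern of bool: one byte total.
    assert_eq!(size_of::<Option<bool>>(), 1);

    // References are non-null, so None can be represented as the null
    // pointer: Option<&T> is the same size as &T.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
}
```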

Maybe even more importantly, Rust is really strict about aliasing and this can improve optimization. E.g. a variable that's a mutable reference to T ("&mut T") cannot alias any other reference to T in scope. An immutable reference to T ("&T") can alias other immutable references to T, but the data is truly immutable (unlike a C or C++ const reference). This completely subsumes "type-based alias analysis" and also C/C++ "restrict", and is stronger than both. This information is potentially really useful for optimizers, but unfortunately, since LLVM is mainly for C/C++, the Rust compiler can't take full advantage of this aliasing information yet :-(.


> For example, the Rust compiler can and does reorder fields in structs to improve packing.

Yikes! That can be disabled, right?


Yeah, if you need "C" style struct representation (which guarantees order), you can enable that on a per-struct basis.


Use #[repr(C)] to force C-compatible layout.


Real-world C++ code often copies data far more than is really necessary, because that's the easy way to be sure you aren't mutating a string that someone else expects to remain immutable.

Rust's borrow-checker, obviously, prevents that problem. Certainly it's possible to write C++ code which is as efficient -- and in some cases more efficient -- but will it actually happen?


Rust's memory safety rarely comes with a significant performance penalty, especially compared with normal C++ practice.


At the beginning, C was a handy language with some traps and pitfalls. Thanks to standardization committees and insane optimizers, it has become a minefield of undefined behaviors and security issues. Linux is coded in C because of the "C is the desert island language". This should change and I dream to see something that will replace C at least in the kernel.

