A GPIO Driver in Rust (lwn.net)
226 points by brundolf 6 months ago | 109 comments



I'm already in bed so reading the code on my phone is really painful. That said, at first glance this looks really promising. This is great news on two fronts: on one hand this will drastically simplify supporting devices and speed up development a lot. It's also great news for Rust: the fact that it's been used in a place where it can show its potential and isn't another case of "let's use Rust for the sake of using Rust", which is a lot more common than I used to imagine.


> the fact that it's been used in a place where it can show its potential

And more importantly, a place where C++ has failed.


Only on Linux, and more specifically due to Linus's religious hate. C++ managed alright on Windows, macOS, BeOS, Symbian, GenodeOS, Mbed, Android Treble drivers, Arduino, AUTOSAR.


Well yes. But Linus is right on that one.

I wouldn't say Windows drivers were C++, more like "C with Classes" (and maybe a little bit C++)

Also, Windows drivers are usually much more convoluted than Linux drivers https://github.com/microsoft/Windows-driver-samples


Linus is only right for those that share the same opinion.

Anything that uses 1% of C++ specific features is C++.

C as used by Linux kernel is not ISO C, yet no one argues it isn't C.


> Anything that uses 1% of C++ specific features is C++.

Technically correct and this raises an important point. How long would the kernel take to compile in C++ versus in C?

Though in practice if you think the kernel should support all the craziness of "modern" C++ you're going to have a big problem in making it work in kernel space. (things like heavy templates, etc)

And just to disagree with Linus: I think the 'new' part is the least of our problems in making the kernel work with C++.


Apple, Google and Microsoft do just fine without modern C++ craziness in kernel and drivers.

As for compilation times, it is all a matter of how the build system is architected.

Most of my C++ build pipelines compile faster than Rust, in make clean all scenarios.

There is no black magic in how to achieve it, just knowing what one is doing: making use of binary libraries and build caches, and being aware of how much craziness is allowed, most of which wouldn't be allowed in the kernel anyway.


All my Rust projects (>1 million LOC at this point) compile 10x faster than our C++ code base ever did (300k LOC and shrinking; it used to be 900k LOC of C++, but over the last 4 years all new functionality has been Rust, and every substantial fix of existing functionality has resulted in rewriting C++ code in Rust).

How to do this is surprisingly easy: write small crates that do 1 thing.

A Rust crate is the analogue of a C++ TU; pretty much everybody who complains that Rust is slow is comparing Rust crates that are 10-100x larger than the .cpp files they use in their C++ projects.

For some reason, people, and particularly C++ devs, like to write huge Rust crates (I’ve seen crates with 200k LOC). They’d never write a 200k LOC .cpp file, so I really don’t get why they do this.

Either way, if your Rust compile times are slow, it’s probably your and only your fault.

How to get blazing fast Rust compile times isn’t a secret.

We have a lot of data of the time it took to compile C++ TUs vs the equivalent Rust crate, and Rust reliably compiles at least one order of magnitude faster, sometimes two.

All our crates are less than 10kLOC, median is 1812 LOC.

Rust crate binary compatibility is also pretty awesome, so that in general a new hire does not need to compile anything at all (an unmodified build just fetches binary objects from sccache). When they start modifying the project, only those binary objects that actually changed have to be recompiled, the rest is just dynamically linked. This gives you <5s edit-compile cycle if you are working on a leaf crate.


Now go and apply the same build optimizations to the C++ source base, including the use of incremental compilation, incremental linking, pre-compiled headers, extern templates in TUs, and making use of lib files.

If your C++ compile times are slow, it’s probably your and only your fault.


Our C++ code base has been using Clang modules and all those optimizations (explicit template instantiations, incremental linking, dynamic libraries, etc.) since 2011..

It also supports unity builds and the C++ Modules TS, so we can compare the performance of all three approaches.

Rust consistently beats all of them by at least an order of magnitude in apples to apples comparisons, with at least an order of magnitude less fiddling in terms of modifying the source code, fiddling with build system and compiler options, etc.

So in our experience, Rust gives you 10x faster compile times with 10x less effort than C++ if you use similarly sized compilation units.

Rust makes it easy to write compilation units that are 10x-100x larger than C++ ones and still compile reasonably fast for their size, but we don't allow this by convention (we have a clippy lint that rejects crates that are too large).


Clang, which happens to be known to not be as good as MSVC at incremental linking and pre-compiled headers.

So do you also have something comparable to IncrediBuild and ClearCase derived objects in place?


Pre-compiled headers are 1:1 identical perf-wise to Clang modules for our 1 million LOC C++ project, modulo system noise. There was no difference in compilation speed, neither for clean nor for incremental builds.

We supported them for a while (until 2015 or so; no perf difference over 3 years), but Clang modules gave the exact same perf and were easier to use, and unity builds were much faster for clean builds anyway, so there was no point in maintaining PCH in parallel with these two other options.

For people that are not aware, PCH, modules, etc. bought us 20-40% compile time reduction. I expect 30% or so to be typical for projects that are using TUs correctly.

A 30% compile time reduction or even a 2x compile time reduction is not “life changing”. It’s peanuts.

Particularly compared with the 10-100x compile time reductions that Rust gives you. From 100min to 1 min, that’s a huge difference. From 100 min to 50-70min? That’s an irrelevant difference.


∀ X, X is right for those who share the same opinion.

If you think a simple tautology disproves someone's claim, there's a good chance you're missing the point.


I don't see why you call it religious.

One can endlessly discuss the pros and cons of using C++ in the kernel, but in the end it's about either making a bet on C++ or not; one can't, and shouldn't have to, formally prove that the bet is right just to avoid being called religious. Linus listed cons that he finds important (which are real) and decided not to bet on C++.

Linus has a long history of successfully managing the kernel; I think this makes his bets worth something. So yes, passing Linus's filter is a great success for Rust, and an even greater one given that C++ didn't pass.

Also, the fact that C++ succeeded somewhere doesn't disprove this claim.


Do you have specific knowledge of how C++ tried and failed to apply to this usecase?


At the least, it's failed to pique Linus's interest. He's re-affirmed his disinterest in C++ in this mailing thread again[1]:

> You'd have to get rid of some of the complete garbage from C++ for it to be usable.

> One of the trivial ones is "new" - not only is it a horribly stupid namespace violation, but it depends on exception handling that isn't viable for the kernel, so it's a namespace violation that has no upsides, only downsides.

> Could we fix it with some kind of "-Dnew=New" trickery? Yes, but considering all the other issues, it's just not worth the pain. C++ is simply not a good language. It doesn't fix any of the fundamental issues in C (ie no actual safety), and instead it introduces a lot of new problems due to bad designs.

[1] https://lore.kernel.org/ksummit/CAHk-=wiwZWAo_Ki587FD2BrAQVK...


The exception handling thing is a showstopper. Stroustrup is resolute that C++ style exception handling is the right error model. So even if a big fraction of C++ programmers disagree, the ISO C++ language standard, and the guidance for new developers, is going to keep saying Exceptions are the way forward for the foreseeable future.

Linus didn't like the panic-on-fail memory allocation handling from userspace Rust for the same reason, and so the kernel Rust people had to be very clear early on that it was temporary and wouldn't survive from a prototype into the actual kernel.

There are some edge cases where this is surprising and I expect Linus will ask for further work. e.g.

  let x = "Clowns".to_owned();
  let mut y = x + " are not welcome here";
  y += " any more.";
The first variable ends up with a growable String containing the word "Clowns", so that's an allocation and it might fail. In userspace it panics; in kernel Rust, should this not compile? If it needs to "return an error", how?

The second line needs a concatenation operation. It won't end up with a new String; the old one gets moved instead (in Rust this means that if you try to refer to x after this it won't compile, since it was moved and the compiler knows that), but it also gets a concatenation appending this extra string to the end, probably allocating again. Thus it too might fail. In userspace that would panic, but in the kernel what happens this time? Does this Add operator have a Result type now?

The third line concatenates again, but this time the operation doesn't even have a return type, it's obliged to mutate y, so there is just no possible way it can result in an error. Either this mustn't happen in the Linux kernel, or it panics when it can't allocate.

Whereas these are all fine, they don't allocate and aren't writeable so they won't bother Linus:

  let x = "Clowns"; // A str, definitely UTF-8 encoded
  let y = b"\xf7\xf7\xf7"; // not Unicode, just some bytes
  let z = r#"No need to "escape" the double quotes now."#;


Can't we just write some --hella-strict flag and refuse to compile any of these corner cases?

> Does this Add operator have a Result type now?

Essentially yes. There could be other APIs which accomplish the same results but have different constraints, such as not being able to mutate strings without also creating a Result to be handled. Yeah you lose some ergonomics but even that feels more ergonomic than C.

I think Rust is in a unique position in that zero / extremely few panics are actually achievable, and easier to achieve with less manual discipline than in C.
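To make that concrete, here is a rough sketch of what a fallible, non-panicking append could look like, using the std/alloc try_reserve API purely for illustration (the kernel's actual interfaces differ):

  use std::collections::TryReserveError;

  // Reserve capacity up front, reporting OOM as an error; the append itself
  // then cannot allocate, so it cannot panic.
  fn append(dst: &mut String, extra: &str) -> Result<(), TryReserveError> {
      dst.try_reserve(extra.len())?;
      dst.push_str(extra);
      Ok(())
  }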


> Essentially yes.

Maybe you understood this as a purely rhetorical question, rather than that I'm lazy and hadn't taken the time to go read the source code yet when I wrote it. I apologise.

  impl Add<&str> for String {
    type Output = String;

    #[inline]
    fn add(mut self, other: &str) -> String {
        self.push_str(other);
        self
    }
  }
So, the implementation of Add for String, in the kernel when it exists, gives a String; it does not give a Result.

What is unclear to me still is the significance of no_global_oom_handling. If this is the intended final state of Rust for Linux, then in fact all these traits just go away, so, you can't concatenate Strings with the + operator, and in fact most ways to do stuff with Strings in Rust go away too†. Which is one way to approach it, it's not as though people were dying to do string manipulation inside the Linux kernel anyway.

† In Rust the String type is a growable heap allocated data structure you can change while str (mostly seen as the reference &str) is just a "slice", a length, plus a memory location, plus a promise that the series of bytes in the slice form valid UTF-8 encoded Unicode, but with no way to grow or change it.


Rust's String type is in the standard library [1], rather than in the language core where things like str live [2]. There's a lot of stuff in the standard library that assumes it can allocate, and will panic on failure. The solution for now is simply to use Rust's `no_std` support [3] and not import the standard library into kernel code, since it's unsuitable for writing kernel code. In the longer term, Rust is investigating adding fallible variants of the standard library's allocating APIs [4], so that kernel code could use the standard library with a compiler flag enabled that bans usage of any of the infallible ones. There was a lot of discussion about this a few months ago in https://news.ycombinator.com/item?id=26812047.

[1] https://doc.rust-lang.org/std/string/struct.String.html

[2] https://doc.rust-lang.org/core/str/index.html

[3] https://docs.rust-embedded.org/book/intro/no-std.html

[4] https://github.com/rust-lang/rfcs/blob/master/text/2116-allo...
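For anyone unfamiliar with [3], here is a minimal sketch of what no_std means in practice (nothing kernel-specific, just a library crate that only uses core):

  #![no_std]

  // No String, Vec or println! here unless `alloc` or a runtime provides
  // them; only core items (Option, Result, slices, integers, ...) exist.
  pub fn odd_parity(bits: &[u8]) -> bool {
      let ones: u32 = bits.iter().map(|b| b.count_ones()).sum();
      ones % 2 == 1
  }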


> So even if a big fraction of C++ programmers disagree,

The percentage of C++ programmers using exceptions seems to increase in the JetBrains survey - in the last one it's at 80%; it used to be close to 60%, IIRC. I wouldn't call 20% a big fraction.

There is no other way to prevent an invalid object representation from existing unless one is willing to wrap literally everything in optionals, basically reimplementing an exception unwind stack by hand.


> Does this Add operator have a Result type now?

I suspect the answer is: those traits are not implemented for String in no-panic mode, and instead you have to use fallible "try_extend"-style methods that return a Result?

I’m just guessing though.


This is usually the right way to go. If you cannot implement the exact trait then you make a trait of your own with similar semantics instead. This one is a language item though, which complicates things.


To clear something up here, while the Add trait is a language item, String is not. The String type is just a normal type implemented in liballoc[0]. As far as the language is concerned, there's nothing separating it from any other random type a user might write[1]. If you're in a no_std environment, String doesn't even exist unless you depend on liballoc.

The reason you can use String with the + and += operators is because it implements the Add and AddAssign traits[2], and it looks like the implementations are already feature gated for OOM handling so might not be implemented for String.

[0] https://github.com/rust-lang/rust/blob/master/library/alloc/...

[1] Note that the compiler knows about it to provide better diagnostics to the user, but this has no impact on the language itself.

[2] https://github.com/rust-lang/rust/blob/master/library/alloc/...
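To illustrate the point above that nothing about String is special to the language, any ordinary type can opt into + by implementing Add (toy example, not kernel code):

  use std::ops::Add;

  struct Meters(f64);

  impl Add for Meters {
      type Output = Meters;
      fn add(self, rhs: Meters) -> Meters {
          Meters(self.0 + rhs.0)
      }
  }

  fn main() {
      let total = Meters(1.5) + Meters(2.0); // desugars to Add::add(...)
      assert_eq!(total.0, 3.5);
  }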


Yes, that’s a good point. I suppose the compiler can parse «foo + "bar"» regardless of whether the Add language item is implemented for the types involved. It’ll only fail during type checking, which is a later step. Not so complex after all.

Oh, and good point about the existing feature gate; I debated going and looking to see if one existed yet or not, but I was hungry so I didn’t. Glad you were around to point it out.


Yeah, first we just nuke a bunch of impls, then we figure out whether we need new infra like a TryClone or whatever.


We've cfg'd out all global OOM handling from alloc, so I think this example must be out of date. In the latest version you would indeed have to write it a different way.


> If it needs to "return an error" how?

Because this is all just library features from a library that isn't being used. They could use core::alloc and have panic on allocation failure, or they could implement their own library and that library presumably wouldn't have the explicit panics that core::alloc has.

Kernel Rust isn't a variation of the language; it's just about not using the standard libraries.


Your examples will not compile without alloc. IMHO, the kernel needs another kind of String type anyway, e.g. https://github.com/paulocsanz/arraystring .
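The idea, roughly (a hand-rolled sketch of a fixed-capacity string, not the arraystring crate's actual API): appends are fallible and nothing ever touches the heap.

  struct StackString<const N: usize> {
      buf: [u8; N],
      len: usize,
  }

  impl<const N: usize> StackString<N> {
      const fn new() -> Self {
          Self { buf: [0; N], len: 0 }
      }

      // Report "no room" as an error instead of allocating or panicking.
      fn push_str(&mut self, s: &str) -> Result<(), ()> {
          let bytes = s.as_bytes();
          let end = self.len + bytes.len();
          if end > N {
              return Err(());
          }
          self.buf[self.len..end].copy_from_slice(bytes);
          self.len = end;
          Ok(())
      }

      fn as_str(&self) -> &str {
          // Only whole &str values are ever copied in, so this never fails.
          core::str::from_utf8(&self.buf[..self.len]).unwrap()
      }
  }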


Even though neither Rust nor C++ is a supported kernel language, Rust has passed a few iterations of trial and discussion; C++, on the other hand, was filtered out at the first one. Linus refused to even take C++ seriously (and also mentioned having tried it for the kernel in 1992) [1].

Some might argue this doesn't qualify as "C++ trying", but I think it does.

[1]: http://harmful.cat-v.org/software/c++/linus


I've been programming embedded C++ for quite a while and I really can't agree with Linus here.

Just the superior type checking, being able to use RAII, and simple templates to replace macros completely transform the way you work on a fundamental level, without doing anything fancy.

Although it's not as pleasant as it should be given its historical baggage (the bad rap is sadly well deserved), I still consider it to be miles ahead of C for embedded work to the point that while I do like Rust a lot, I still consider it to be overhyped in this area.


My main complaints about C++ are lack of a good way to catch all exceptions and mysterious copying/conversions. In Rust, it is very easy to catch all panics and conversion/cloning is explicit. Copy is only implemented for types where copying is actually cheap/free by convention.

Another good thing to look at is how disgusting std::move is and how straightforward the equivalent is in Rust. Additionally, there isn't anything to prevent you from using a variable you've already used std::move on (which invalidates its contents). Rust will not allow you to use a variable if ownership has moved.

Finally, I personally despise that you end up needing so many constructors in C++ just to do basic things like the assignment operator.
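The difference in one small example: the commented-out line is rejected at compile time, whereas the C++ equivalent with std::move compiles and silently reads a moved-from object.

  fn main() {
      let s = String::from("hello");
      let t = s;                 // ownership moves to `t`
      // println!("{}", s);      // error[E0382]: borrow of moved value: `s`
      println!("{}", t);
  }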


Agree on most points. I never truly liked how complex C++ was from day one, and it has definitely gotten more and more complex over time. I'm using cppreference all the time, which is something I feel I shouldn't need to. I also think the C++ I/O library is utter garbage.

However, I also think about how we got to the modern C++ standard, and how early we are with Rust.

I think of all the weird C++ features I had to use over time in embedded, and I'm glad I always had the option to have manual control over memory sizing, the ability to alias pointers and everything that you normally wouldn't want or consider unsafe.

I also think of Rust and its hell-bent approach to memory safety, and note that on embedded I basically never have dynamic memory allocation, and very rarely have ownership issues that Rust could solve nicely (without fighting the very limited lifetime constraints we can express). On small targets there is no memory allocation. There's often no copying at all, because it's too expensive: IDs and counters are often exchanged over shared buffers.

What I feel is a massive improvement in terms of safety in embedded is state machine compilers, more than memory safety. Think "ragel", or similar compilers. Using these in embedded, coupled with either C or C++, is a game-changer.

Memory safety and pointer lifetimes start to be an issue on much bigger targets. I don't have experience with kernel development, but if you can run Linux on a platform I would hardly call it "embedded". Or at the very least, we're talking about very high-end embedded platforms.


The way I see it, C heavily discourages powerful abstractions. Rust encourages them, but allows contracts between them to be tightly constrained and safe. C++ encourages them, but doesn't give you much help making them constrained or safe. When it comes to being careful and intentional, C++ is the worst of both worlds.

I think Linus was wise not to empower people to write shaky abstractions in one of the world's a) largest and b) most important low-level codebases.


A lot of the problems that tend to be ignored when talking about embedded C++ come from the fact that we have good C++ compilers for embedded targets now. This was certainly not the case just a few years ago, especially before ARM was the de-facto standard for embedded devices.

Most MCU compilers had little if any support for most C++ "fanciness", which meant that trying out C++ almost always ended in some big catastrophe, either through hidden bugs inserted by those compilers or portability disasters (80%+ of your supposedly portable code uses this feature that compiler X doesn't have).


True. I'm old enough that I had these issues with C++ in the early 2000s, even with _system_ compilers (AIX's C++ compiler was especially horrifying).

However, times have changed. The features and the general C++ coding style itself are quite different from what you'd write in '98.

If you're considering Rust, you should consider the modern available C++ with all the latest tooling to be fair. And it's _pretty_ good, all things considered.


I don't know about embedded, but I'm quite happy with Linus' decision and appreciate that I can compile the kernel on my laptop in a reasonable amount of time.


Just wait until Rust starts being used across the kernel, if compilation speed is an issue.

At least with C++ most corporations actually use binary libraries, we aren't doing Gentoo style across our codebases.


> Just wait until Rust starts being used across the kernel, if compilation speed is an issue.

I don't expect people to start (re)writing their drivers in Rust anytime soon. Maybe some PoC things and stray FAANG-devs aiming for promotions, but it is highly unlikely I will need to have those drivers enabled. Hope Rust compilation times improve by the time it becomes mainstream in systems.

> At least with C++ most corporations actually use binary libraries, we aren't doing Gentoo style across our codebases.

This is not relevant for the kernel development (Linus' case) and doesn't match my experience working with C++ code bases for ~10 years.


When the author has religious hate against a specific language, there is nothing to try.

There are other platforms that have embraced C++; both of them run in my pocket.


C++ has evolved a lot in the THIRTY years that have (almost) passed since 1992.


Yes, it grew 30+ new appendages. Jokes aside, it has improved quite a bit with lambdas, move semantics, type inference, standardized filesystem api, smart pointers, optional types, etc... Despite this, it still can't come close to offering the confidence that Rust gives you.


The most important trait of a language, especially when safety is a high concern, is what it doesn't let you do. And it's virtually impossible to improve on that axis down the line while maintaining backwards-compatibility, which means that ship has sailed for C++ no matter what new features get added.


True, yet despite all the security talk, Google, Apple and Microsoft would rather improve C and C++ than just throw away 50 years of libraries, OS code and GPGPU tooling.

So while I enjoy doing toy applications in Rust, C++ is the language I have available on the IDE installer to write native code on our projects.

Until Rust manages to get a similar checkbox on the same installer, it isn't going to be an option for certain kinds of corporations.


A few years ago I tried out VC++ version 4 (??) for a bit of a laugh, and it was no wonder why C++ got such a bad rap. The MFC classes seemed to have suffered from gold-plating the specifications, and there was all sorts of casting stuff to cajole C++ into interfacing with the C windowing system.

Since then, C++ has gotten massively better. We now have strings and vectors as part of the standard library, so there's no need to use someone's quirky implementation.

As I said in a previous post, C++ gets you most of the way to Python, so there's really no point in not using it.

Still, it took us until C++17 to get sum types (via variants), and it is amazing to think that it took so long to get such a basic idea included. The handling of variants could do with a bit of simplification, mind. And I do wish they could implement the equivalent of "typeclasses", enabling recursive data structures.

I'm a big C++ fan, in case that wasn't clear.


C++ is the most popular programming language. It overtook Python in the number of active developers this month: https://www.openhub.net/languages/compare?utf8=%E2%9C%93&mea...

Its share of developers has been growing since early 2013, when C++11 began to be widely available, and more recent revisions have apparently boosted its growth.


Rooting for others to fail instead of one's own success is a recipe for always being miserable. In this case it's also logically dubious, because we have no idea how C++ would work in a Linux kernel - it was never tried.


C++, Python, BASIC, or JS can work in a kernel.

For example, Click modular router for Linux kernel is written in C++: https://github.com/pbuonado/click/blob/master/linuxmodule/mo...


In the function 'pl061_irq_type' the original code has (line 224):

    writeb(gpiois, pl061->base + GPIOIS);
    writeb(gpioibe, pl061->base + GPIOIBE);
    writeb(gpioiev, pl061->base + GPIOIEV);
but the translated code has a differing order:

    pl061.base.writeb(gpioiev, GPIOIEV);
    pl061.base.writeb(gpiois, GPIOIS);
    pl061.base.writeb(gpioibe, GPIOIBE);
Isn't the order of operations important when talking to I/O?


In this case GPIOIS, GPIOIBE and GPIOIEV are basically three registers that between them determine how interrupts work for each pin. Do you want edges or level triggered? Both edges or just one? High levels or low? Order likely makes no practical difference whatsoever.

The data sheet actually suggests a particular course of action if you desire edge triggered interrupts but that recommendation doesn't match what either driver actually does here. Real world experience suggests that "Follow the exact steps from the data sheet" is a worthwhile diagnostic step when the driver doesn't work but probably not worth doing if your driver works. AIUI this driver works.

It sounds like the worst thing that can go wrong is you eat one spurious interrupt because something happened to the GPIO pin you cared about while you were meddling with the interrupt handling. Confusing for a tightly coded embedded system, but probably unnoticeable in the Linux ecosystem.


I've read through a chunk of the thread and no one has mentioned that on the list yet.

You could, I imagine, submit that as a Pull Request here:

https://github.com/wedsonaf/linux/tree/pl061

Or you can email a note about it to the author at the email address shown in the thread archives.

Either way, I bet they'd like to know about the order swap and either add a code comment or unswap it.


> Isn't the order of operations important when talking to I/O?

Sometimes, yes. It’s hard to tell if this is one of those times without a closer look at the documentation or at least some experimentation, but because “Rust is Memory Safe(tm)” I think it is safe assume this is fine simply because it compiled, and without knowing anything about you, would recommend you assume the same, because, well, Rust. Obviously.


This is writing to the register space of some peripheral, so no, the compiler knows nothing about that. It is also extremely common for such peripherals to have requirements on the order of writes and to otherwise silently discard them. They also have all kinds of side effects and don't behave like normal R/W memory at all (e.g. clearing an interrupt is usually done by writing a 1 to a register which will then read 0 afterwards).


The compiler doesn't know what's going on here, but fortunately the programmer does and they can use Rust's type safety to reduce the opportunity for foot guns.

Notice how the C code calls readb() with some address it got out of another structure plus a constant from this file, whereas the Rust has readb() as a function implemented on the structure, defining what it can access. As a result, C code can easily mistakenly readb() something it didn't intend to, while the same mistake in Rust fails to compile.

Specifically, when you call that Rust readb() function with a constant like GPIO_SIZE that's not actually a GPIO register, it compares the constant you asked to readb against the size of the GPIO's I/O memory (1 page = 4096 bytes) and checks that it is smaller - if not, the type match fails and your compiler tells you this won't work. No runtime crash; it doesn't compile.

If you don't have a constant, you can call a try_readb() function; this time you won't fail to compile if you screw up, but at runtime you get EINVAL instead of crashing.
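A rough sketch of that technique in plain Rust, using const generics plus an inline const assertion (recent compilers only); this is an illustration of the idea, not the kernel's actual IoMem implementation:

  struct IoMem<const SIZE: usize> {
      base: *mut u8,
  }

  impl<const SIZE: usize> IoMem<SIZE> {
      // Offset known at compile time: an out-of-range OFFSET fails the build
      // wherever this call is instantiated.
      fn readb<const OFFSET: usize>(&self) -> u8 {
          const { assert!(OFFSET < SIZE) };
          unsafe { self.base.add(OFFSET).read_volatile() }
      }

      // Offset only known at runtime: return an error instead of crashing.
      fn try_readb(&self, offset: usize) -> Result<u8, ()> {
          if offset >= SIZE {
              return Err(());
          }
          Ok(unsafe { self.base.add(offset).read_volatile() })
      }
  }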


> fortunately the programmer does and they can use Rust's type safety to reduce the opportunity for foot guns.

And what, they just didn’t?

This is an ordering bug. You could encode that into types perhaps, but I do not think that would be easy or as valuable as other ways to spend time.

> Specifically, when you call that Rust readb() function with a constant like GPIO_SIZE that's not actually a GPIO register, it compares the constant you asked to readb to the size of the GPIO's I/O memory (1 page = 4096 bytes) and check it is smaller - if not the type match fails and your compiler tells you this won't work. No runtime crash, it doesn't compile.

I think nearly every C compiler has had the ability to warn about something like this for thirty years. What's the point? The Rust programmer could do things differently, but so could the C programmer.


> This is an ordering bug.

It turns out that it isn't. I discussed in another part of the thread that these registers control interrupt handling between them. Maybe if you fiddle with them while something is bouncing the relevant IO pins up and down you get a spurious interrupt, but I'm actually not sure which ordering would be least likely to cause that. The correct thing is almost certainly to either just not blow up if you eat a spurious interrupt while fiddling with these registers, or to tell the device you don't want interrupts while fiddling with them, both of which are the same in either driver.


If you want that kind of guarantee in a Rust program, you can encode it into the types. It does make the implementation more complex, however, so it is not always worth doing.


Wait wait wait. Are you telling me that writing this code in rust created bugs that weren’t in the C implementation?

I thought Rust was supposed to eliminate Memory errors and “foot guns” and other programming mistakes idiots make.


> it is safe assume this is fine simply because it compiled, and without knowing anything about you

I think this is sarcasm, but for the record, no compiler can make guarantees about the behaviour of a different computer.


> I think this is sarcasm

Yes. Maybe satire as well. It’s hard to tell because most comments about rust’s Memory Safety are utter gibberish as well.

> no compiler can make guarantees about the behaviour of a different computer.

Honestly, I think it’s worse than that: the computer simply cannot tell you what your program is going to do.

This should be obvious, and yet many people are using Rust in prayer of exactly this.


I doubt order of operations is important here, but nice catch, I bet that was not on purpose.


I bet it would be possible to enforce this with some clever types


Allowing two languages in such a large, collaborative project as Linux may give rise to friction and issues in the future. There are not many people who are expert in both C and Rust and could review pull requests that lie at the border between the two in the kernel, or figure out when an apparent bug in Rust code is actually caused by a call out to a C API, or vice versa.

All this means that most kernel contributors will eventually have to become well-versed in both C and Rust. And in the meantime the high-level subsystem maintainers are going to have to learn enough Rust to review submissions.


It's a small price to pay for improved safety with respect to memory and concurrency, which is why Rust is being allowed into the kernel. I think we'll see a gradual shift where some corners of the kernel are Rust only and others stay C; with occasional efforts to replace one with the other where that makes sense and where there are enough people to do the work and get it reviewed. Ultimately that's what will drive this: people and their contributions.

If other large code bases where Rust was introduced, e.g. Fuchsia, Firefox, etc. are an indication, we should expect some positive outcomes over time.


I've never seen learning a new language as a huge barrier. I've learned new languages on the job several times, and I'm probably not as smart as a large portion of Linux's contributors (especially the ones who would be reviewing pull requests). I think they can work it out.


I wrote a GPIO driver in Rust for STM32. It's macro and DRY soup! Although this is because I wanted a single `Pin` type abstraction, while STM32 register blocks are defined at the port level (STM32 has ~7 GPIO ports depending on the variant). The result is a clean API, but a messy implementation.

Alternatively, I could have written a port-based approach that would have had a cleaner implementation, but a more complex API.


This is a perfect resource for exactly the use case Rust is (supposedly?) designed for. It’s a great, brief way to really get the gist of the language.


Sorry to say this, but the Rust version is much more difficult to read and comprehend than the C version. I cannot comment on the benefits from a memory-safety perspective, but from a maintenance point of view the C version would be much easier to keep going.

My wild guess is that (as others pointed out) a C++ version might simplify and make things easier than Rust does with this example.

This is said with all due respect to C, C++ and Rust, having been introduced to C back in 1993 and to C++ in 1995, then having taught some C++ again in 2003-2005, and then having written some basic programs in Rust.

Code like this makes one really wonder whether C++14/17 programs are easier to get right/read than Rust, considering the massive amounts of template enums that inevitably find their way into every larger Rust program.


> Rust version is much more difficult to read and comprehend than the C version

By this, do you simply mean "I know C better than i know Rust"?


Perhaps so, but then, this is also from the perspective of someone who easily reads/writes Perl, Python, ES6, complex PL/SQL and templated C++.

From my (personal) perspective something went a weird way with Rust's borrow system, making it over-complex... although I have high regard for everyone who actually masters and uses it.


I agree. I like the ideas brought in by the borrow checker but I find Rust to be an unergonomic language to read.

I would love to see something more like C, with a better type system, module import management, and a borrow checker.

Think C + TypeScript


What about Zig? I think it checks everything off but a borrow checker.


How specifically did you find the Rust version harder to grasp?


I'm not well versed in Rust, so that's obviously a factor.

However the Rust version seems to have more line noise, like all the ".ok_or(Error::ENXIO)?;" bits that are sprinkled all over.

Also compare the pl061_probe function and its Rust equivalent. The C version is fairly straightforward and clean in terms of syntax, while the Rust function looks very complex in comparison.

Also, if you compare pl061_irq_ack with the Rust equivalent, suddenly the Rust version has an if statement in there while the C does not. What happens when the if statement is not taken?


In part the problem here is that Rust is making error handling explicit, while the C driver has all that error handling covered in other files.

For example, in the C version:

  struct pl061 *pl061 = gpiochip_get_data(gc);
What happens if there's no chip? You should have a NULL check, but it's omitted because error handling is done outside of this file. The same is true for the additional if statement on the IRQ.

In the end this is a side effect of this being an example; it should be possible to refactor the code so that error handling is simply done through the ? operator. IMO that's a better way to do it than the C alternative, because right now there's no way to know whether those struct pointers get proper error handling without leaving this file and hunting down each struct and function.


"opt.ok_or(err)?" is a pretty neat way of returning early with the error "err" if opt is empty. This is pretty idiomatic Rust you should get familiar with before comparing the readability with a language you know better.

I see a few places where Rust is definitely more readable, for example

    for offset in 0..PL061_GPIO_NR {
       if inner.csave_regs.gpio_dir & bit(offset) != 0 {
vs

    for (offset = 0; offset < PL061_GPIO_NR; offset++) {
       if (pl061->csave_regs.gpio_dir & (BIT(offset)))
OTOH, the next line is a real mouthful in Rust compared to C:

    if let Ok(v) = <Self as gpio::Chip>::get(data, offset.into()) {
      inner.csave_regs.gpio_data |= (v as u8) << offset;
    }
vs

    pl061->csave_regs.gpio_data |=
        pl061_get_value(&pl061->gc, offset) << offset;

 * "<Self as gpio::Chip>::" is the price Rust pays for having multiple possible "get()" for the same data, but it could be hidden away in a wrapper if needed.
 * ".into()" might be due to a young not-yet-ergonomic API
 * The last difference comes from the API difference where Rust's get() actually tells you if it could get that integer instead of (presumably) returning the integer 0. The C API is arguably a footgun, justifying Rust's slightly wordier syntax.
For a more subjective example, I find "pl061.base.readb(GPIODIR)" more readable than "readb(pl061->base + GPIODIR)". Bonus: the offset argument can be enum-typed, avoiding the footgun of reading from an invalid offset.

Going back to the probe functions, I find them hard to compare as I do not know how the hardware works. If that "device::Data::new()" is equivalent to the many lines of init in the C version, it looks less footguny. In the same vein, "Ref::try_new_and_init()" and "data.registrations()" look like they are giving the developer more guarantees.

There's a trend there : some Rust APIs may be wordier but still easier to read/review, because they uphold more invariants, which reduces the reviewer's cognitive load.

Concerning the ack() functions I'm not sure. It seems that Rust is checking that it has proper access to the data resource while C doesn't. I would turn the question around: how are you sure that "gpiochip_get_data(irq_data_get_irq_chip_data(...))" never fails and can be safely dereferenced?


> If that "device::Data::new()" is equivalent to the many lines of init in the C version, it looks less footguny.

The Rust version is probably more robust; I would hope and assume so. However it looks considerably more complex; I think even if I knew Rust I'd have a harder time parsing it and figuring out what it's trying to do, compared to the C version.

So maybe we could say the Rust version is harder to parse but easier to reason about once parsed.

> How are you sure that "gpiochip_get_data(irq_data_get_irq_chip_data(...))" never fails and can be safely dereferenced?

Fair point, it should have a check there. Which makes this comparison a bit harder, given that the code is so different.


> So maybe we could say the Rust version is harder to parse but easier to reason about once parsed.

I feel this is a matter of familiarity. For me the Rust version is easier to parse and reason about. Entirely because of familiarity.


Rust syntax isn't particularly friendly.


Importing DerefMut and not Deref surprised me: I’ve never done it and never expect to, since you can’t implement DerefMut without implementing Deref. Turns out it was being imported to call it, which is something I’ve never done and never expect to: instead of writing `d.deref_mut()`, I’d write something like `&mut d` or `&mut **d`. (Autodereferencing makes the question of what asterisks are needed always a bit fun—but that largely applies to d.deref_mut() too: I think (*d).deref_mut() would produce the same result.)


This sort of thing is mostly a matter of preference; Rust does a good job being unopinionated about things like how you invoke a function. In this particular case I think the explicit call is more readable than the character-soup, personally


I think it might be trying to get at “dereferencing this does something special” (i.e. it’s not just a reference, but some type that manually implements Deref/DerefMut). But it doesn’t actually do that—if you have a &mut String, .deref_mut() will give a &mut str (equivalent to &mut *self), but if you have a &mut str, .deref_mut() will give a &mut str too, by doing reference reborrowing (so the final result is equivalent to &mut **self).

In the end, I’d prefer to deal with the sigils, because I think it’s a little easier to be confident what they’re going to do than calling the method that takes &mut self but effectively changes how it behaves depending on the type in question.
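For what it's worth, the equivalence on a &mut String looks like this (both spellings land on the same &mut str):

  use std::ops::DerefMut;

  fn main() {
      let mut s = String::from("abc");
      let d: &mut String = &mut s;
      let a: &mut str = d.deref_mut(); // method call
      a.make_ascii_uppercase();
      let b: &mut str = &mut **d;      // the same thing, spelled with sigils
      b.make_ascii_lowercase();
      assert_eq!(s, "abc");
  }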


Can Rust sanely encode constraints like "I don't want to be able to call functions that may call functions that sleep from atomic context" that would be checked at compile time?


I don't think so. Once a function is compiled, it basically becomes a black box with a type signature, so unless sleeping in a function affects its signature, that information is erased. If you pass in some kind of sleep token that has to be used in order to sleep, then yes, I think you could enforce it by only being able to get that token in a non-atomic context and making it leak-proof.

The Cortex-M crate does something similar, but for proving that you are in an atomic context. Another function that expects a CriticalSection type is then assured that it's running without interrupts enabled. The type itself is zero sized so it's completely optimized away.

https://github.com/rust-embedded/cortex-m/blob/master/src/in...
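A minimal sketch of that token idea (hypothetical types and functions, not a real kernel or cortex-m API):

  pub struct SleepToken(()); // private field: cannot be forged outside this module

  pub fn in_process_context<R>(f: impl FnOnce(&SleepToken) -> R) -> R {
      // A real kernel would guarantee (or check) that we are not in atomic
      // context before handing out the token.
      f(&SleepToken(()))
  }

  pub fn might_sleep(_proof: &SleepToken) {
      // Blocking work is only reachable by callers holding the token.
  }

  pub fn irq_handler() {
      // No SleepToken is available here, so a call to might_sleep(...) simply
      // doesn't type-check: the constraint is enforced at compile time.
  }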


I feel that this would be a tad bit easier to read with syntax highlighting.


If it had syntax highlighting someone would ask if their email client now has to implement syntax highlighting too, and comment on how they've always discussed unhighlighted C code just fine.


The original did have syntax highlighting (it's in the linked comment), but whoever posted it to LWN stripped that out.


In the Rust code, where is the Result type they're using defined? I assume that the parameterless `Result` return types are intended to be `Result<()>`, and I suppose it might be possible to achieve this with default generics, but I've never seen that used before, so it sticks out as a bit odd to me.


Looks like kernel::Result does just that. It is re-exported as part of the prelude.

https://rust-for-linux.github.io/docs/kernel/type.Result.htm...


Is that doc old? It doesn't include `kernel::device` and many other modules/types that are imported in the code.


Well, I guess it might be more reasonable to say that kernel::device is very new. The person who mailed in this work hasn't landed it in the Rust for Linux kernel tree from which the documentation is built. We're seeing the code that the new contributor wants checked for being roughly OK before they raise a PR, whereas those docs are built from the repo that the PR would be against.


Interesting! TIL


It looks like something like that is indeed possible. See a post on the Rust internals forum where that's set up with a type alias. It was called `Fallible` in that post, but it's common in the Rust ecosystem to override the default Result type with a custom error parameter while still calling it Result.

https://internals.rust-lang.org/t/make-stds-result-a-type-al...

So I'd guess in this driver that the Result type being used is a type alias defined in the prelude that's glob imported.
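i.e. something along these lines (names illustrative; the real definition is in the kernel crate linked above):

  pub struct Error(pub i32);

  // A default for the Ok type lets a bare `Result` mean Result<(), Error>.
  pub type Result<T = ()> = core::result::Result<T, Error>;

  fn probe() -> Result { Ok(()) }          // Result<(), Error>
  fn read_reg() -> Result<u8> { Ok(0) }    // Result<u8, Error>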


> it's common in the Rust ecosystem to override the default Result type with a custom error parameter but still calling it Result.

Yep, I do that all the time! I never considered putting a default value for the parameter though


I see they're importing `kernel::prelude::*`, so I would guess there.


And in the end, it's basically safe PEEK and POKE.


The unsafe readb and writeb are hidden inside another API, the IoMem module. The same API could be created for C, but isn't--either deliberately or just because nobody bothered.


Reading through it... how do I know whether data.lock() is a spinlock or a mutex, or whether it saves flags or not?


From the type. `data` is a `&Ref<DeviceData>` which means it is a smart pointer to something of type `DeviceData`. `DeviceData` is defined in this module as `type DeviceData = device::Data<PL061Registrations, PL061Resources, IrqDisableSpinLock<PL061Data>>`. I would have to peek at `device::Data` to be sure, but my supposition is that the lock method acts on the `IrqDisableSpinLock` type. I assume that the name is meaningful, making this one a spinlock that disables irqs. This lets each driver using `device::Data` supply the most appropriate type of lock.


The C version is so much cleaner.


This ... isn't really a great example--either of Rust or of embedded programming in Rust.

Someone should ask one of the Rust Embedded folks to review this and provide commentary.


All the Rust Embedded code I've seen is off the rails deep into "C++ template meta programming" and the absolute antithesis of what to do.

Even the code here, given the near impossibility of detecting errors with something like a GPIO chip, has too many Results. It's an incredible code smell for "terrible abstractions".


The Results here aren't from the GPIO operations, so that's not really relevant.


Exactly! I think the point of OP's example is how to write clean driver code. Much of the Rust embedded code available open source is a mess of abstractions with no specific use case in mind, i.e. no way to weigh whether the abstractions' utility is worth their complications.

This includes code by the Rust Embedded Working Group.


I would be interested in what you think of something like BBQueue:

https://github.com/jamesmunns/bbqueue


Thanks! So it's not just me. This feels like reading one of those Haskell blogs: certainly impressive, with the author explaining the dimensions along which this solution is superior, but it's just incomprehensible brainfuck and doesn't pass any standard of beauty.

I've written some Rust; I know it is workable, and "if it compiles it works" is mostly true. But it's so ugly.


This is very nearly a mechanical line-by-line conversion of the C to Rust. What on earth is so much uglier about it that it renders it "incomprehensible brainfuck"?

Is this satire and I'm missing the joke?


I assume that they are talking about the abstractions that people sometimes build when working with hardware at a low level. In Rust you can literally build the hardware restrictions into your types, so that for example all registers (or pins in this case) have both read and write methods except the ones that the hardware does not allow you to write to; those only have read methods.

Or think about access control: it’s not ok for two programs (or two kernel threads) to interleave accesses to the same GPIO pin, for example. You decide that you want to have some kind of handle or struct that lets you read and write to the pin, but that cannot be duplicated or dropped. This way you can create that object once during initialization, and then pass it to whoever needs it. If you want to call something that needs access to the pin, you then have to pass it the object and you can only regain access to the pin if it passes the object back as a return value. The compiler just won’t let you interleave accesses, because the code won’t even compile if you try to do something not allowed.

But all of those neat features add complexity, and often some people just prefer to write comments instead. Usually I recommend starting with very straight–forward code and only adding abstractions in later refactoring steps. Another option is to write the usage code first; this can help you design abstractions without going full architecture astronaut.
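As a toy sketch of "hardware rules in the types" (made-up addresses and names): the read-only register simply has no write method, and the pin handle is neither Copy nor Clone, so only one owner can touch it at a time.

  struct ReadOnly<const ADDR: usize>;
  struct ReadWrite<const ADDR: usize>;

  impl<const ADDR: usize> ReadOnly<ADDR> {
      fn read(&self) -> u32 {
          unsafe { (ADDR as *const u32).read_volatile() }
      }
  }

  impl<const ADDR: usize> ReadWrite<ADDR> {
      fn read(&self) -> u32 {
          unsafe { (ADDR as *const u32).read_volatile() }
      }
      fn write(&mut self, v: u32) {
          unsafe { (ADDR as *mut u32).write_volatile(v) }
      }
  }

  // Not Copy, not Clone: hand it out once during init, and ownership rules
  // prevent two places from interleaving access.
  struct Pin {
      status: ReadOnly<0x4000_0000>,
      data: ReadWrite<0x4000_0004>,
  }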


> This ... isn't really a great example--either of Rust or of embedded programming in Rust.

Why not? You could also feel free to respond to the mailing list with your critiques




