Committing to Rust in the Kernel (lwn.net)
208 points by todsacerdoti 7 months ago | 170 comments



"Torvalds said that it is not necessary to understand Rust to let it into a subsystem; after all, he said, nobody understands the memory-management subsystem, but everybody is able to work with it."

Chuckled a bit at this line, anyone have context on how true this is?


"Torvalds said that, for now, breaking the Rust code is permissible, but that will change at some point in the future. Kroah-Hartman said that the Rust developers can take responsibility for the maintenance of the abstractions they add."

This needs some very good expectation management.


For most drivers or subsystems, maybe you don't need to know how mm works.

Rust is different. The kernel Rust teams are trying to encode some safety invariants. If any of those mismatch with the C side, it breaks. Those invariants need some non-trivial knowledge of Rust to understand.


Is there an example of what you're describing?


There’s a recent drama where the Rust folks asked some people to clarify some of the semantics of some of the filesystem APIs, and this request wasn’t taken well. There have been a bunch of HN threads about it.


like which?


Not sure if this is what the OP referred to, but here's one: https://news.ycombinator.com/item?id=41450347

I didn't find threads regarding "clarify API semantics", but the kernel docs are indeed not in very good condition. Since C does not provide the same level of soundness that Rust does, there are many hidden traps.

An Asahi developer had a good discussion about this: https://threadreaderapp.com/thread/1829852697107055047.html


This overall situation is, yes. And the stuff from Lina is related, thanks for also pointing that out.


I apologize, I am on my phone, so rather than curating links, check out https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu..., most of which are about this situation.


About invariants generally: Rust wants to know when memory behind each pointer is immutable, or mutable by Rust only, or could be mutated by something else while Rust has a pointer to it. Rust also encodes which types can't be moved to another thread, and which pointers own their memory and need to be freed by Rust.

These are part of the type system, so they need to be defined precisely. The answer to these questions can't be just "it depends". If there are run-time or config-dependent conditions when the same data is owned or not or immutable or not, Rust has enums, unions and unsafe escape hatches to guard access to it.
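
To make those invariants concrete, here's a minimal sketch in plain Rust (not actual kernel bindings; the names are illustrative):

    fn read_shared(x: &i32) -> i32 { *x }         // immutable while the borrow lives
    fn write_exclusive(x: &mut i32) { *x += 1; }  // mutable by Rust only, no aliases
    fn touch_foreign(x: *mut i32) {
        unsafe { *x = 0; }  // may be mutated elsewhere; Rust can't check, hence unsafe
    }

    struct OwnedBuf(Box<[u8]>);  // owns its memory; Rust frees it on drop

    // Types that aren't Send (e.g. Rc<T>) can't be moved to another thread;
    // std::thread::spawn over them is a compile error, not a runtime bug.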


[dead]


By definition, the undefined memory behavior is only in the undefined parts of the spec, modulo bugs. The spec is written against an abstract virtual machine; C was one of the first languages to pioneer such a concept, which is why it was so successful at getting ported everywhere.


[dead]


> I was under the impression most of how c interacts with memory is part of the undefined part of the spec

For a long time, the memory model, formally speaking, was underspecified. Both C and C++ agreed on and added a memory model in C11 and C++11.

> fencing

You can add a fence via this API: https://en.cppreference.com/w/c/atomic/atomic_thread_fence

> This varies per arch

Right, so what assembly this API will emit depends on the underlying architecture details.
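
For comparison, Rust exposes the same primitive as core::sync::atomic::fence, and what it lowers to is likewise arch-dependent. A minimal sketch:

    use std::sync::atomic::{fence, Ordering};

    fn publish_then_fence() {
        // ... stores happen here ...
        fence(Ordering::SeqCst); // e.g. mfence on x86-64, dmb ish on AArch64
    }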

Notably, the Linux kernel does not use the standard C memory model, it uses its own.


To be clear, that’s the definition of the memory model under concurrent execution. I’m pretty sure the single threaded version was well defined


My understanding is that “memory model” is definitionally about what happens when you add parallelism and/or concurrency.

Regardless, this isn’t a slight against C. Most languages don’t have an explicit memory model at all.


I introduced the word memory model so that’s on me for that error - definitionally memory model does refer to concurrency.

However, Op used “memory behavior”. The C standard definitely has defined and undefined memory behaviors even pre-C11. For example, dereferencing null is an undefined single-threaded memory behavior that bit the kernel when compilers started optimizing from assumptions about never reaching UB code. But there are also lots of defined behaviors, like what happens when you dereference an aliased pointer.

And the Linux kernel definitely defined a memory model for itself before C did, and made all architectures conform to it.


Yeah that's fair. As I said, I'm not trying to imply something was bad here before C11. I thought "underspecified" instead of "unspecified" would communicate "some of it was defined but some of it wasn't but it's been fully so for a while now" but maybe that missed the mark.


It’s an opinion, but it sounds very good from the perspective of treating the relationships between system and subsystems as an interface to be managed.


It's very probably true in the totality of "as expressed in a real build for all configurations and architectures": there is too much variation of behavior to have the whole map in mind at once. You can potentially work through it, and I'm sure a few come close, but others will have things top of mind that even experts don't.


It's true in the sense that nobody understands it well enough to avoid writing memory safety bugs.


A lot of kernel resources are managed through infrastructure like devres:

https://docs.kernel.org/6.0/driver-api/driver-model/devres.h...

These days it's entirely possible to write a decent driver with only the foggiest idea of how memory management happens in the kernel.


Aside: LWN is basically what ChatGPT summarization advertised itself to be, except it's actually good and coherent and useful. I can trust Jonathan to summarize the conversation in a way that is mostly sane and reasonable, which is something I could never hand off to generative AI. It's an important area where humans seem to excel over computers.

Also, just to have some content that is actually on-topic: is anyone actually shipping upstream Linux Rust code yet? I understand that some stuff is slowly merging in but I'm not sure if it's actually being exercised yet.


> I can trust Jonathan

Given the clear bias in the writing and the coverage, how do you reconcile this? LWN isn't a news site staffed with reporters any more than Fox News is; it is much more closely defined these days by being a personal infoblog. Phoronix, Linuxiac, and DistroWatch are a few examples closer to actual news without trying to lead the reader. The Register, even with snark, is more representative of a news site these days.

I used to be a fan of Jonathan and LWN but in recent times, he seems less interested in being a reporter and more interested in being a Rust evangelist. That isn't a source I can trust.


I don't treat LWN as a news site. I read it as Jonathan's summary of what is going on. He does a good job at summarizing. If I asked a friend to give that summary too I would treat it like that, or maybe I would value it as if I had summarized it myself. There is always a need to understand what biases went into this kind of digest and that does not go away regardless of who is writing it. My point is that Jonathan might forget to include, say, some Rust person or a kernel developer but he isn't going to write something like "kernel developers debate whether security is important; Rust mentioned" like an LLM might. I get the information I want and nothing more.


Jonathan is eminently reasonable. He just wants everyone to get along. He’s no more an evangelist for Rust than Linus is, who, you'll recall, is trying to make it work also.


That is factually misleading. Look at LWN's coverage since the recent controversy over a Rust developer quitting the project:

1. Rust-for-Linux developer Wedson Almeida Filho drops out - https://lwn.net/Articles/987635/

2. Airlie: On Rust, Linux, developers, maintainers - https://lwn.net/Articles/987849/

LWN never once mentioned any blogs or opinions from kernel developers or otherwise unless they were in support of Rust. There has also never been an article highlighting the very real challenges not only with Rust but also the attempt to integrate it into the Linux kernel project. Scroll the home page for recent articles and it's pretty evident where the bias exists.

Meanwhile, The Register[3] cited at least Drew DeVault among others:

3. https://www.theregister.com/2024/09/02/rust_for_linux_mainta...

Yes, The Register wrote a more balanced article on the topic than LWN.

> He’s no more an evangelist for Rust than Linus is

Linus does not have a personal website where he is only publishing positive articles about Rust.


> LWN never once mentioned any blogs or opinions from kernel developers or otherwise unless they were in support of Rust.

Others please visit the links and see for yourself.

It's bare coverage. A few sentences when something notable happens. What do they say? Re: 1, it's a few nice sentences about Wedson after he leaves. Re: 2, it's one sentence and quote from a kernel dev.

Moreover, are we seriously trying to work the refs, as if LWN is NBC or Fox News, in this dispute? If you know of something interesting said by anti-Rust persons within the Linux kernel community, just share it with us. No reason it needs to be intermediated by LWN.

> Meanwhile, The Register[3] cited at least Drew DeVault among others:

AFAIK Drew Devault isn't a Linux kernel developer?

And re: 3, this blog post is like Drew's other blog posts. It is, excuse me, unthoughtful and incurious. Like all of Drew's writing, it is an Epistemic Closure Express to "Another Drew hobby horse". This time it was "Drew doesn't like Rust again." Is it any wonder someone else didn't want to include his bad writing for what?... Balance?

I think it's perfectly reasonable to have doubts about Rust in the Linux kernel. I'm sure there are many well-qualified devs with interesting, learned views on the subject. Drew is not such a person.

I have written reams about how Drew makes poor arguments. If you're curious, or don't think my case is well made here, you can read my other comments re: Drew:

[0]: https://news.ycombinator.com/item?id=41404644

[1]: https://news.ycombinator.com/item?id=41409049


> If you know of something interesting said by anti-Rust persons within the Linux kernel community, just share it with us.

Already done before in the second link here[1] and then the developer sharing this content was the victim of an attempted SWAT:

1. https://news.ycombinator.com/item?id=41410228


> Already done before in the second link here[1]

"Already done"? Are you suggesting we should have been searching through your past HN comments for anti-Rust content?

Your prior comment notes two issues re: Rust: 1. "A core problem with Rust is the lack of an adequate standard library." and 2. "The problem with Cargo is when you have an application with hundreds of dependencies."

I'd say -- I'm not sure either is a problem in the kernel/the true object of this discussion. Re: 1, as you may know, the Rust standard library isn't used in the kernel. The kernel has its own Rust support library, just as it has its own minimal C library, klibc. C also has a much smaller standard library than Rust; C has nothing like Rust's collections or C++'s containers. Re: 2, I'm pretty sure the Rust for Linux team doesn't use Cargo for kernel code. The team simply vendors its deps.

The video to which you linked is a reaction video of René Rebe to an article re: Rust. Let's be very clear -- Rene's issue had nothing to do with kernel space programming. Rene discussed Cargo in the context of the userspace tools for bcachefs, and why he doesn't particularly like what he sees as Rust's culture of micro dependencies. He also doesn't like Cargo as a build system, which auto-downloads deps from the cloud. Which is fair enough, but I think we can agree this is not really an issue re: Rust in the Linux kernel.

I would further argue that composition is a feature of Rust. To put it mildly, C is not known for being easily composed. In C, you often just write something yourself, because C doesn't compose well with your app.

> ... and then the developer sharing this content was the victim of an attempted SWAT

What exactly are you suggesting? That, because you shared this content 25 days ago, Rene was SWAT-ed?


>Meanwhile, The Register[3] cited at least Drew DeVault among others:

Drew DeVault is not anti-Rust-in-the-linux-kernel.


> Veteran developer Drew DeVault, founder and CEO of SourceHut and a critic of Rust in the Linux kernel

> I am known to be a bit of a polemic when it comes to Rust

> Rust will eventually fail to the “jack of all trades, master of none” problem that C++ has. Wise languages designers start small and stay small. Wise systems programmers extend this philosophy to designing entire systems, and Rust is probably not going to be invited. I understand that many people, particularly those already enamored with Rust, won’t agree with much of this article. But now you know why we are still writing C, and hopefully you’ll stop bloody bothering us about it.

Do you have anything to support your statement?


I believe the kernel graphics drivers for the Apple M1 are written in rust, and are upstreamed.


From what I understand, those drivers currently cannot be merged since the devs rewrote various kernel APIs to match their expectations.


This is true in a literal sense, but there’s some nuance. They were working on upstreaming them, but there was some strong disagreement about one of the earlier patches and so at least for now they’ve given up. And that patch was pretty small, it’s not like a massive change was required.


AFAIU the asahi rust gpu driver is shipping in asahi but not upstreamed yet

(torvalds/linux.git doesn’t have drivers/gpu/drm/asahi)


On a semi-tangent, does anyone happen to know how Microsoft's push to use Rust in the Windows kernel is coming along? They rewrote some components in Rust and rolled them out to production about a year ago, but it has seemed pretty quiet on that front since then, unless I missed something.


I noticed the other morning that they’ve either rewritten or are growing the win32k driver in Rust. This was either in Server ’25 or vNext, though I can’t remember.

Given win32k implements a good chunk of the kernel-mode graphics and windowing system it’s a pretty good place to start that effort.

edit: win32kbase_rs.sys was what it was called, and I’m pretty sure it was 2025 I pulled it from but it might be on earlier versions too


I googled for this file and I LOL'ed, as the first link was a crash report involving it:

https://www.elevenforum.com/t/latest-beta-causing-program-cr...


Out of curiosity, how did you notice this?


I was hunting for what Windows libraries used a particular new API and saw it in my scrolling!

You can see all the panic and error strings, and some internal package paths if you run strings over it. Win32k looks like it got split pretty hard into a couple of sub libraries in recent versions though (win32k, win32kbase, win32kbase_rs etc)


They haven’t spoken publicly about it. As a Windows user, I am very intrigued!


They have, provided you pay attention to Windows developer channels, see my reply, https://news.ycombinator.com/item?id=41645415


Thanks! I’m aware of most, but not all of that. But that stuff isn’t kernel stuff, though obviously much of it is serious systems work. That’s what I was referring to specifically.


Besides GDI regions and CoreWrite, there isn't much public info.

They dumped Rust bindings to the DDK, but it is more of an over-the-fence kind of thing.

WinDev is culturally against anything not C++, see .NET adoption for key Windows features versus what happens on Apple/Google OS land.

I am quite curious how WinDev will proceed with Rust and .NET versus C++, given the new security mandate tied to job evaluation.


While they are rather quiet on that front, Rust is now the official systems language for new projects in Azure infrastructure, with C#, Java, and Go as alternatives where a managed language is also possible.

"Decades of vulnerabilities have proven how difficult it is to prevent memory-corrupting bugs when using C/C++. While garbage-collected languages like C# or Java have proven more resilient to these issues, there are scenarios where they cannot be used. For such cases, we’re betting on Rust as the alternative to C/C++. Rust is a modern language designed to compete with the performance C/C++, but with memory safety and thread safety guarantees built into the language. While we are not able to rewrite everything in Rust overnight, we’ve already adopted Rust in some of the most critical components of Azure’s infrastructure. We expect our adoption of Rust to expand substantially over time."

From https://azure.microsoft.com/en-us/blog/microsoft-azure-secur...

Several key projects have been migrated to Rust, or started in Rust altogether.

=> Azure Boost

https://learn.microsoft.com/en-us/azure/azure-boost/overview

"Rust serves as the primary language for all new code written on the Boost system, to provide memory safety without impacting performance. Control and data plane operations are isolated with memory safety improvements that enhance Azure’s ability to keep tenants safe."

=> OpenHCL, Azure's para-virtualization

https://techcommunity.microsoft.com/t5/windows-os-platform-b...

"OpenHCL is a para-virtualization layer built from the ground-up in the Rust programming language. Rust is designed with strong memory safety principles, making it ideally suited for the virtualization layer."

=> Security processor Pluton firmware (used by XBox, Azure and CoPilot+ PC hardware)

https://learn.microsoft.com/en-us/windows/security/hardware-...

Post from David Weston, Microsoft's vice president of OS security regarding the Rust rewrite and TockOS adoption, https://x.com/dwizzzleMSFT/status/1803550239057650043

=> CoPilot+ UEFI firmware

https://techcommunity.microsoft.com/t5/surface-it-pro-blog/r...

"Surface and Project Mu are working together to drive adoption of Rust into the UEFI ecosystem. Project Mu has implemented the necessary changes to the UEFI build environment to allow seamless integration of Rust modules into UEFI codebases. Surface is leveraging that support to build Rust modules in Surface platform firmware. With Rust in Project Mu, Microsoft's ecosystem benefits from improved security transparency while reducing the attack surface of Microsoft devices due to Rust’s memory safety benefits. Also, by contributing firmware written in Rust to open-sourced Project Mu, Surface participates in an industry shift to collaboration with lower costs and a higher security bar. With this adoption, Surface is protecting and leading the Microsoft ecosystem more than ever."


>Changing C interfaces will often have implications for the Rust code and may break it; somebody will then have to fix the problems. Torvalds said that, for now, breaking the Rust code is permissible, but that will change at some point in the future.

I think this is the main technical change needed from the Linux kernel. It needs a layer of quasi-stable well documented subsystem APIs, which ideally would be "inherently safe" or at least have clear safe usage contracts. And it's fine for these interfaces to have relaxed stability guarantees in the early (pre-1.0, if you will) experimental stages. Changing them would involve more work and synchronization (C maintainers would not be able to quickly "refactor" these parts), but it's a familiar problem for many large projects.

It's the only reasonable point from the infamous tantrum by Ted Ts'o during the Rust for filesystems talk; everything else, to put it mildly, was really disappointing behavior from a Linux subsystem maintainer.


> It needs a layer of quasi-stable well documented subsystem APIs

I think the Rust developers weren't even asking for that. They just want the C developers to sign up to some semantics. But the C developers know the function interface has evolved to be merely functional: it works, but with few invariants, riddled with caveats, and with few cross-function guarantees. It can't be hoisted into meaningful semantics, much less a type system, particularly across 10 filesystem APIs.

Rust developers should focus on drivers in subsystems with stable API's, instead of trying to stabilize what decades of work has failed to.


Of course it has semantics. Whether someone knows all of them is a different matter. But whether you use Rust or C, you have to know the contract to be able to write correct code. The only reason the kernel devs got away with such sloppiness is because C is the ultimate "I do what you tell me, boss" language.


And yet, Rust is the exact opposite. Why try to make the two meet? The memory-safety argument for Rust in the kernel could be remediated by better memory safety tools for C, couldn't it? (Not that the Linux kernel project doesn't have any already.)

Why not devote the time to writing a better memory safety tool for the Linux kernel (or C in general) rather than keep trying to force two disparate cultures and ideologies to meet in some fantasy middle?

ThePrimeagen actually had an interesting opinion on this recently too, worth a watch: https://www.youtube.com/watch?v=62oTu9hjxL4


Rust has the very strong pro of already existing. If those tools existed, I’m sure the conversation would be quite different.

It’s also not just about memory safety. Greg in particular has recognized how Rust’s type system will be helpful for preventing other kinds of bugs, for example.


> Rust has the very strong pro of already existing

But not in the Linux kernel. Any new effort will be greenfield, why spend the last two years and many more rallying around an entirely different programming language instead of writing a novel tool?

> Greg in particular has recognized how Rust’s type system will be helpful for preventing other kinds of bugs

Do you have any examples here?


> But not in the Linux kernel.

Sure, it is still an experiment, but that's irrelevant: your theoretical "make C memory safe" tooling does not exist, in the kernel or anywhere else.

> instead of writing a novel tool?

Do you have a proposed design for the novel tool? Many have tried, nobody has succeeded.

> Do you have any examples here?

From the article:

> Kroah-Hartman said that it could eliminate entire classes of bugs in the kernel.

He's said similar things elsewhere: https://social.kernel.org/notice/AlxbVeMxyJNsLoNa6q


Why? Because of the cost of exploited vulnerabilities and critical systems failures that could have been avoided.


There are no decent memory safety tools for C. Could they be theoretically created? Perhaps, but I doubt it, considering the amount of money flowing through this industry. To solve it you really have to design a new language, which is exactly what happened: it's called Rust.

Kernel devs should just put on their big boy pants and go with the times. C is simply not the right tool for the job anymore for a lot/most kernel work.


For those curious, this is the link[0] to the filesystems talk with the relevant timestamp. A bit more was discussed in this[1] article as well about Wedson Almeida Filho leaving.

[0]: https://youtu.be/WiPp9YEBV0Q?t=1529

[1]: https://lwn.net/Articles/987635/


Why do we need quasi-stable anything within the kernel? Wouldn't a much better long-term solution be changing the rule to "break whatever APIs you want, but you have to fix all of the in-tree Rust uses too"?


The problem is that some high-profile people who want to preserve their right to "break whatever" do not want to shoulder the responsibility of fixing Rust code which depends on the broken stuff. Even worse, they do not even want to explain and document the semantics of existing APIs (sic)! See the video linked in the sibling comment.

Speaking more broadly, freely breaking stuff in large projects is dangerous, since fixing it may require more specialized expertise and knowledge, e.g. being well-versed in Rust safety rules, knowing some obscure information about hardware behavior, or being aware of some tricky invariant which must be preserved by the code. This is why changes at API boundaries often require synchronization between different teams.


At least Ted Ts'o has clarified his position greatly from what initially read as “I refuse to learn Rust”. Instead, he is asking for a guide to help him understand the Rust side:

> There is a need for documentation and tutorials on how to write filesystem code in idiomatic Rust. He said that he has a lot to learn; he is willing to do that, but needs help on what to learn.


This is heartening to read for sure.


One of the advantages of big monorepos (and Linux really is just one big "monorepo") is that you can change $whatever as long as you also fix everything that depends on it.

Sometimes this is just simple stuff, like adding a function argument, changing a type, or something like that.

"Break whatever" is a bit of a crude way of putting it, but yeah, it's definitely an advantage IMHO. Not just for the final patch, but also prototyping, experimenting, showing ideas, etc.


> since fixing stuff may require a more specialized expertise and knowledge

That's why sign-offs from reviewers are "required", no?

It seems the correct mentality is to encourage people to hack on the kernel, show that things are possible, that they are "working on my machine", and then ... yeah, they will need to go through the usually lengthy process of getting it merged.

But starting with "synchronization between different teams" usually gets shrugs and "let's get back to it soon"-s.


I thought ThePrimeTime, in his video https://youtu.be/62oTu9hjxL4?si=E98WZ0zJSNUC8TEH&t=287, hit the nail on the head re: Rust versus C.

Max level C programmers have designed their programming style around control, down to the absolute bit. C derives control from absolute control over behavior.

Max level Rust programmers have complete mastery of types. Rust derives control from types.

Seems somewhat philosophically incompatible.


It is a good video, but I'm not at all convinced that a philosophical "authoritarian" vs "anarchy" difference between Rust and C actually exists. C programmers work through all sorts of constraints and rules on how to correctly use a system, same as anyone else. Heck, there are hundreds of pages of kernel documentation explaining how developers are expected to use the various locking subsystems. Does that make C authoritarian? I don't think so... that's just a fact of programming. The details matter.

IMO the only real cultural difference in Rust is that you are expected to explain those constraints through the language of the type system, not just in English. That's a lot of work up front but it also gives you way more automation down the line (eg checking that you used a mutex correctly via rustc rather than through emails to Linus Torvalds). Some people definitely take it too far, and blow up their code with endless incomprehensible traits. But the islands aren't incompatible, it just takes work and skill to bridge them.
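
A toy example of that automation (ordinary std Rust, nothing kernel-specific): the Mutex owns its data, so skipping the lock is a compile error rather than a review comment:

    use std::sync::Mutex;

    fn main() {
        let count = Mutex::new(0u32);  // the u32 lives inside the lock
        *count.lock().unwrap() += 1;   // the only path to the data is lock()
        // There's no way to reach the u32 while forgetting the mutex,
        // so rustc catches what would otherwise need an email thread.
    }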


> Does that make C authoritarian? ...

Well, he called Rust authoritarian and C anarchy. As with all analogies, if I don't stretch it too far it does make a lot of sense to me.


Sure! I'm just disagreeing that "C is anarchy because anyone at any point can do anything". You can't do anything, Linus will eventually yell at you in an email. To me, the only philosophical difference is that Rust wants to automate Linus


That would also explain why Linus is on board with it. A thousand Rust compiler instances are way more efficient than one Linus instance yelling at people, plus eventually he won't be there anymore.


This is less true in "how the language works", and more true in "code I have read in these languages follows these patterns". The latter I agree with, the former less so. Rust is more type-heavy, sure, but there are plenty of type shenanigans in rich enough C ecosystems; the kernel has plenty of vtable structures which have varying levels of richness and type complexity in their own C expression. Ever worked with the addr types in the BSD sockets API? Super messy types that can be a real pain for FFI, for example. Heavier typing exists in C - just not "higher order" and so on. And yes, you _can_ do that in Rust, but do you want to debug it in kernel use cases? Maybe not. Rust kernel code may look different in the end from a lot of other Rust code, in a similar way to some C code in the kernel being quite different from other C code elsewhere.


Good C code will select a rigid set of patterns, carefully chosen to maximize safety, and stick to them. It makes a lot of the bug prone patterns stand out.

This is somewhat like what a higher-level compiler can produce, but more manual. It's relying on code smell instead of a type checker.


"C derives control from absolute control over behavior."

Many C devs like to think C is like assembly; then they discover it is a high-level systems programming language like every other.


IDK, I thought well-written c and c++ had the same notions of ownership and lifetimes, just not enforced by the compiler


No, C++ and especially C have much looser notions around ownership and lifetimes. They still require the basic idea of temporal safety at all times, but differ greatly in how that's achieved in practice. For example, you'd typically share an Arc<RwLock<T>> between threads in Rust, even if it doesn't strictly need them for whatever reason. I can't say I've ever seen someone manually implement Arc in C, and RwLock might be ensured half a dozen ways, spanning the entire gamut from no protection for things like file handles all the way up to full mutexes.
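
For readers who haven't seen it, the pattern being described looks roughly like this (a minimal sketch):

    use std::sync::{Arc, RwLock};
    use std::thread;

    fn main() {
        let data = Arc::new(RwLock::new(vec![1, 2, 3])); // refcount + lock, explicit
        let data2 = Arc::clone(&data);                   // bump the atomic refcount
        let reader = thread::spawn(move || data2.read().unwrap().len());
        println!("{}", reader.join().unwrap());
    }   // last Arc dropped here frees the Vec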


There are dozens of Arc implementations in the kernel. Atomic reference counting is an important way to manage lifetimes and is used extensively in things like making sure that shared resources (files, pages, …) are managed correctly.


The rust notion of lifetimes is one of the most c++ things I've ever seen. I've always assumed it grew out of the same culture that produced c++ smart pointers and RAII.

> I can't say I've ever seen someone manually implement Arc in C

In C++, shared_ptr is in the standard. But this was common even before the C++11 standard introduced it. I've rolled my own before, more than once. Microsoft ATL had CComPtr.

It wasn't surprising to me either that rust came from Mozilla, which made heavy use of COM, an object model that is very heavily based on reference counting.


I might not have made it clear enough, but I was speaking specifically about plain ol C rather than dialects like kernel C or other languages like C++.

Anyway, what do you prefer to lifetimes? They go back to at least the 60s with lisps and algol and possibly earlier. I wouldn't be surprised to find that idea predates electromechanical computers entirely.


In plain old C it's very common to roll your own atomic reference counting. I've done it many many times. I'm surprised you haven't seen it done. GCC and MSVC both provide intrinsics to wrap synchronization primitives such as compare and swap (lock cmpxchg instruction on x86). __sync_bool_compare_and_swap and InterlockedCompareExchange respectively. Often you might put the reference counting in a helper function for your specific data structure.

To your second question, writing C I like to let the scope dominate the lifetime by doing a free as soon as an object goes out of scope. To make something borrow, you assign it to something with a different scope, then assign your local copy to NULL, exploiting the fact that free(NULL) is a well-behaved no-op. In C++ you do the same, but you can use destructors to make it more automatic.


I think that's misunderstanding their point a little bit?

I agree with them that in any well-written program (in any language really, not just C and C++) the ownership and lifetime constraints are not necessarily enforced by the compiler - that is, they are not expressed through types or even necessarily through code at all - but they are definitely existent in the design and behaviour of the system. If the programmer cannot explain who holds what and for how long, and therefore when it's safe to read or write particular resources, then that isn't well-written code IMO.

> I can't say I've ever seen someone manually implement Arc in C

...an Arc is literally just a shared pointer. It's in the name, Atomically Reference Counted. Reference counted resources that use atomic operations to adjust the count are a dime a dozen in C projects in my experience.

Rust did not invent the concept of a multiple-reader/single-writer lock either, e.g. the Linux kernel has the `rw_semaphore` type. I don't understand treating types like this as something arcane simply because they're given an explicit tag in Rust.


I'm not calling arc arcane. Have you genuinely ever come across a "small" C project that uses arc? I can't say I have, only large industrial projects like the kernel and gtk. That's my point, it's almost never done by people writing "plain and simple" C, it's something that crops up much later when people start building a bunch of infrastructure onto the language to restrict things into safer patterns.


Is it not because “plain and simple C” often doesn’t use threads?


Golang has also had these types for quite a long time


C and C++ are very different. I think Casey Muratori hits the nail on the head here:

https://youtu.be/xt1KNDmOYqA

In short, RAII and smart pointers and borrow checker are all signs of "individual element thinking" and that way lies madness. You smear out lifetime and ownership so badly that it practically by definition becomes a problem.

Your goal is to think about this stuff in groups to make thinking about allocation way, way easier. Programming is all about abstracting to the next layer.

I'm not sure he has quite gotten to the heart of the issue, yet. However, a bunch of smart people (Casey Muratori, Jonathan Blow, Andrew Kelley, etc.) are all dancing around something that Rust and C++ don't seem to fit the bill on. Hopefully they can crystallize it out so that everybody can see it.


Oh interesting. I was expecting to see nonsense in that video (probably triggered by the first line -- C and C++ are basically the same thing, in that you can, and folks did forever, implement all the C++ fancy stuff in C -- just because something isn't in the core language, doesn't mean it wasn't done).

But actually it's suggesting how I've been wanting memory to be managed for a couple decades (and have achieved long ago in C projects). Basically some sort of "domain-orientated arena allocator". A simple example is in a network server -- you receive a request from a client, do a bunch of stuff to service that request, and now you're done. Please blow away the memory used to service the request, thanks, all at once. Of course it's not quite that simple because there will be some "chaff" objects thrown off in processing the request that we need to keep around for a while later (e.g. for logging to drain).
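
A crude sketch of that shape in Rust (hypothetical names, no arena crate): one owner per request, so teardown is a single drop, with the logging "chaff" moved out first:

    struct RequestScope {
        scratch: Vec<Vec<u8>>,   // everything allocated to service the request
        log_lines: Vec<String>,  // "chaff" that must outlive the request
    }

    fn handle_request(req: &[u8]) -> Vec<String> {
        let mut scope = RequestScope { scratch: Vec::new(), log_lines: Vec::new() };
        scope.scratch.push(req.to_vec()); // ... do a bunch of stuff ...
        scope.log_lines.push(format!("served {} bytes", req.len()));
        scope.log_lines // hand the chaff to the logger to drain later
    }                   // scope.scratch is blown away here, all at once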

I suppose the reference to Jonathan Blow should have been a clue that this would be something worthwhile.


Casey writes games. Of course to him everything ought to be an arena allocator. This is not actually the best way to write all software.


That's a bit glib. Games almost always have significant networking component nowadays.

And Andrew Kelley writes compilers. And arenas still appear to be superior.

Games and compilers seem to do better with custom allocators. Embedded almost always static allocates. GPU workloads generally don't have "heap-like" allocations either.

We have an increasing amount of evidence that "malloc-like" or "heap-like" allocations on an individual level seem to be a net negative.

My gut feel, as someone with a grey beard, is that we're looking at a breakpoint like we did with garbage collection.

Garbage collection absolutely suuuuucks until you get big enough memory that you can overallocate by about a factor of 2 at which point garbage collection flies.

I think we're at a similar point in "systems programming". It's now okay to overallocate, overcopy, and especially overcalculate things due to current CPU architectures. Chasing pointers is now mega-bad so vtables and the like are becoming a performance bottleneck.


"Embedded almost always static allocates" is outdated, like "integer is almost always faster than floating point".

Some embedded nowadays has hundreds of megs of RAM, with its application using mmap, out of an overcommitted virtual memory.


I would argue that if you have an MMU, you aren't really "embedded" anymore. An RPi isn't "embedded". However, you may still be doing systems programming.


Embedded means that something that is not itself a computer contains one, running some dedicated control application. If a Raspberry Pi is built into a toaster, where it controls the temperature and toasting duration according to the user-selected darkness level, it's embedded. If the toaster has a screen, and you can install and uninstall apps on the Pi, then it doesn't look so embedded any more.


> And Andrew Kelley writes compilers. And arenas still appear to be superior.

Is Andrew pro-arenas for compilers? I have seen him preach re: data oriented design, and Zig certainly does more re: allocators than Rust, but do you have more info re: this claim?


> C and C++ are basically the same thing

I disagree here. For example, try writing something that does reference counting in C vs C++. In C++, it's practically trivial. In C, it's a nightmare of bugs. RAII support being directly in the language is huge in this case.


> RAII and smart pointers and borrow checker are all signs of "individual element thinking"

I'd call it object-oriented thinking. To me Rust and modern C++ are attempting to "OO-ify" systems programming and I think you get push back from folks who view their resources more holistically.


I wouldn’t really describe a lot of Rust codebases I’ve seen as “OO-ify’d”. I’d say it’s got a more functional flavour if anything.

In my experience, “de-programming” OOP programmers is one of the first things teams I’ve been on have had to do, so that said OOP devs have a better time and write more idiomatic code.


Rust can do these things. It is maybe true that culturally many Rust programmers do not. There’s both good and bad reasons for this.


The point is that many of the problems Rust aims to solve become much less relevant. For example, if your program only does 10 mallocs and frees, you can probably track down the memory bugs.


I agree that these techniques help you write better code, but enforcing something is better than not. Obviously it’s a spectrum, so I wouldn’t say doing that is bad, but it does not really mean Rust is irrelevant.

And Rust brings more to the table than just the borrow checker.


Sure, it just invalidates the impending-doom, ban-C-programming narrative.


I’m not sure I would characterize it this way, but it doesn’t satisfy the criteria of “memory safety by default,” which is what more and more organizations are desiring.

Time will tell.


I took my time to watch part of this. I don't entirely agree, however. I'm not really a large-systems programmer (I'm a scientist, actually). I really do like the group-oriented thinking, but it does seem like there is space for "individual element thinking" at times. This sounds a lot like the philosophical notions of reductionist thinking vs. holistic thinking, which is something I think about a lot when it comes to science and physics in general. The way of thinking I've come to value most at this point in my life for understanding the world, which might sound a bit silly applied to this, is a synthesis (the so-called Hegelian dialectic), which in this case means that reductionist thinking and holistic thinking are not really opposites but are simply modes of mental modeling, and can be applied in different ways and at different times to your code. Sometimes it is valuable to have simple, self-contained types, which is reductionist or individual element thinking. However, nests of pointers are always much more complicated and unnecessary compared to programming at the system level for a group of related objects. Which you use depends on the context, and I don't really like identifying them as opposing forces so much as different modes that can be drawn from at different times.

That said, most of the time I find myself utilizing group-oriented thinking in my code, and I avoid atomistic, reductionist thinking whenever analyzing a problem at a first pass, but reductionist reasoning does help at times; it just depends on the problem and the context. It is also true, unfortunately, that in science teaching at least, we teach students to be reductionist first, and then that reductionist thinking clouds their understanding; it is something young scientists need to break out of at some point, and some never do. Maybe in that way it's similar to what this person refers to here (n vs n+1). I just didn't also get stuck thinking reductionism is always bad and avoid it as a rule; I draw from both sides, so to speak.


Can anyone explain to me why these two issues aren't considered deal breakers for introducing Rust into the kernel?

1. It doesn't map almost 1:1 to assembly the way C does, so it's not inherently clear if the code will necessarily do what it says it does. That seems questionable for something as important as a kernel and driver.

2. There is only one real Rust compiler, and it's self-hosting (it compiles itself), which reminds me of the Trusting Trust problem: https://dl.acm.org/doi/abs/10.1145/358198.358210


> 1. It doesn't map almost 1:1 to assembly the way C does, so it's not inherently clear if the code will necessarily do what it says it does.

As someone who works on a C compiler, I will tell you that Rust maps marginally better 1:1 to assembly than C does. No major C compiler goes 1:1 to assembly; it all gets flushed into a compiler IR that happily mangles the code in fun and interesting ways before getting compiled into the assembly you get at the end. Rust code does that too, but at least Rust doesn't pull anything silly on you like the automatic type promotion that C does.

If C maps 1:1 to assembly in your view, then (unsafe) Rust does; if Rust doesn't map 1:1 to assembly, nor does C. It's as simple as that.


I get that GCC and Clang do all sorts of optimizations, but doesn't unoptimized C map closely to 1:1?

I've heard it called a high-level assembly that maps closely to actual assembly many times at this point; it makes sense to me why people would say that.

> If C maps 1:1 to assembly in your view, then (unsafe) Rust does; if Rust doesn't map 1:1 to assembly, nor does C. It's as simple as that.

I thought the mapping issue was unrelated to the borrow checker, and that it's possible to write a borrow checker for a restricted subset of C. I thought the thing that was making it not map 1:1 was actually all of the extra features in Rust, like the ADTs and async and all of that. Is that not actually the case?


> but doesn't unoptimized C map closely to 1:1

What is a variable in C? A register? A memory location? The language doesn't have the basic concepts needed to map anything 1:1 to assembly, and the ones it has usually come with half a dozen standards' worth of required error handling, because having single-instruction features like sqrt return -1 on error wasn't enough.


> What is a variable in C? A register? A memory location?

Wouldn't it depend on the type? Something like:

    int *p; p = &x;
    MOV R1, R2      ; R1 holds the address of x; copy it into p (R2)

    int *p; int value = *p;
    MOV @R2, R0     ; dereference pointer p (in R2), load the value into R0 (int value)

    int x = 5;
    MOV #5, -(SP)   ; push the value 5 onto the stack (stack-allocated int)

    int x = 10; int y = x + 5;
    MOV #10, R0     ; load the immediate value 10 into register R0 (for x)
    ADD #5, R0      ; add 5 to the value in R0 (x + 5), store the result in R0

or

    MOV #10, -(SP)  ; push 10 onto the stack for x
    MOV (SP), R0    ; load x from the stack into R0
    ADD #5, R0      ; add 5 to x
Whether a variable gets stack-allocated or register-allocated, it's still a pretty close mapping afaict. From my understanding the original C mapped closely to PDP-7 and then PDP-11 assembly. The original implementation and how it maps to PDP-11 could be used as a reference implementation.


The C standard does not reference the stack anywhere.

Depending on optimization level, things can change. Without any optimizations, variables of “automatic storage duration” such as local variables, may get placed on the stack. But with optimizations turned on, they may end up in a register, or even not be stored anywhere, for example if they’re an integer literal that never gets modified after assignment.


> I get that GCC and Clang do all sorts of optimizations, but doesn't unoptimized C map closely to 1:1?

Nope. There's actually a number of "optimizations" that get applied to "unoptimized C" code. For example, gcc decides to apply even/odd mathematical function laws to the math library functions even with -O0, and both gcc and clang are very happy to throw away "unused" code at -O0, which prevented me from doing jump table shenanigans.

C fundamentally has no idea of the distinction between registers and memory, and this is probably the most important distinction in modern assembly languages. It's especially obvious when you get to exotic architectures that have thousands of registers and a relatively thin memory pipe. Making a C compiler get out the assembler that you expected is a lot trickier than you might expect, and when you need exactly some assembly, you'll find that most compiler engineers will tell you "the compiler won't guarantee that, please use assembly" while the people trying to do so often end up spiraling into a rant about how compiler writers are idiots who can't write working compilers because it won't give them the assembly they need.

> I thought the thing that was making it not map 1:1 was actually all of the extra features in Rust, like the ADTs and async and all of that. Is that not actually the case?

People use a variety of different definitions of "map 1:1" that makes it hard to really answer your question for certain. What you seem to be getting at is the notion that C's ABI is predictable. But there are plenty of C features whose mapping to assembly is as unpredictable as Rust's ADT or async features are: C's bitfields are the most notorious example, but I'd throw in variable arguments, atomics, and the new _BitInt into the mix. Which is to say, if you're an engineer for whom this stuff matters, you'll know how the compiler is going to handle these constructions for your targets of interest, but that's not the same as saying that those constructions will always work the same way on all targets.


Unoptimized C is not something anyone actually uses. And it maps less obviously to assembler than C with some optimizations, because C compilers in no-optimization mode generally do brain-dead things like allocating all variables on the stack.

ADTs and such don't actually make the mapping less obvious. Async kinda does, but again it's not hard to have at least some mental model of how an async function will turn into a state machine implementation. C, C++, and Rust are all about equal in terms of how well I can predict how a given function maps to assembly, which is that if I care, I need to check, but I'm rarely completely bamboozled by what I see.
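
E.g., a hand-waved sketch of that mental model (not real compiler output):

    async fn step(x: u32) -> u32 { x + 1 }

    // The compiler turns the async fn into a state machine shaped like:
    enum StepFuture {
        Start(u32), // not yet polled
        Done,       // value produced
    }
    // Each poll() advances the enum from one state toward Done.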


1. C doesn’t actually do that. Rust is the same as C in this regard.

2. The Linux kernel doesn’t use standard C, it uses many gcc specific extensions. By this point, clang also supports those extensions and can compile the kernel, but that took work, and upstream has never tried to be only standard C.


This complexity issue is very similar to the memory issues of older languages. Most Rust people say it is OK or that you can avoid it, etc., but they don't understand that people just tend to take the path of least resistance, and most times this means a lot of traits and generics. Would be super cool to have something like Zig but with a borrow checker.


Even though I don't use Linux in my projects, Rust on Linux is extremely important to convince others to use Rust. I hope it succeeds.


Even though I don't write any Rust, I still think Rust on Linux is important. I'd rather learn and deal with Rust than with C/C++, and I too hope it succeeds.


C/C++ is like water/alcohol; certainly they look very similar to an uninvolved observer, and one can mix them easily, but they differ drastically. One is an utterly simple life substrate, another is a toxic and hallucinogenic but potent rocket fuel.

For the record, Torvalds has always vehemently resisted any attempts to use C++ in the kernel. I completely support his position.


You are probably writing your comment via lots of software written in C++. It's a great language with lots of flaws introduced by legacy decisions that would be made differently today.


Rocket fuel, as I said. Dangerous, powerful, indispensable in certain kinds of projects. Up until the advent of Rust, there was no viable alternative for C++ in large, long-running, performance-critical codebases, like browsers or game engines.

(I say this as a fan of Haskell and Rust, and a daily user of Python, Typescript, and elisp. Last time I wrote production C++ was in 2021, like 50 lines.)


Not exactly rust-in-kernel, but I've been using Aya[0] to write ebpf programs in Rust and it is quite nice. The verifier is still a bit trigger-happy with some constructs, but it is manageable.

[0]: https://aya-rs.dev/


Totally unrelated comment to my previous one

I think it's folly to encode the semantics of APIs in the Rust type system and memory model, and that's the impedance mismatch that has people riled up. Unsafe code isn't incorrect code, and trying to add abstraction where there wasn't one before is encoding principles where they didn't previously exist, which should be an obvious problem.

I've written a lot of systems-type Rust using unsafe and I think the design pattern of -sys bindings and then a higher level safe wrapper is mostly incorrect because callers should always use the -sys bindings directly. It's more workable and doesn't suffer from changes that the detractors complain about.
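
For context, here's what the two layers look like in miniature; "foo" and its foo_new/foo_free symbols are made up for illustration:

    // The -sys layer: raw, unsafe, 1:1 declarations of the C API.
    mod foo_sys {
        use core::ffi::c_void;
        extern "C" {
            pub fn foo_new() -> *mut c_void;
            pub fn foo_free(handle: *mut c_void);
        }
    }

    // The higher-level safe wrapper being argued against: ownership
    // and cleanup get encoded in the type system instead of left to callers.
    pub struct Foo(*mut core::ffi::c_void);

    impl Foo {
        pub fn new() -> Foo {
            unsafe { Foo(foo_sys::foo_new()) }
        }
    }

    impl Drop for Foo {
        fn drop(&mut self) {
            unsafe { foo_sys::foo_free(self.0) }
        }
    }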


Totally agree with this. In performance-critical code this is always a problem. Even the zstd bindings in Rust are slow compared to directly using the -sys version. People don't want to acknowledge this, but it is the truth. Not sure why this comment is flagged.

I have been writing performance-critical code for production in Rust for several years now, and generics/traits usage and needless atomics/copies/abstraction are very much an issue. The biggest concrete ones are the memory allocation API and the work-stealing-everywhere kind of async APIs. It is understandable that these make sense for general development, but they are just a problem when doing performance-critical and low-level code.


My point is not about performance at all, you should raise that as an issue in the zstd wrapper crate.


What is the value in relying on distro authors to publish rust compiler versions, when the bespoke release channel for rustc is kept much more up to date?


How can you build a new kernel with distro tools otherwise?


With rustup? Why does the distro need to be the be-all and end-all of package management when it's more concerned with user space than real development?


Distros have historically been very concerned with "real development"!

Debian, for example, packages all of the tools and libraries needed to build any package in Debian, so users can easily modify and recompile any package. Because there are a lot of packages in Debian, it's become a great, stable, vetted source for general-purpose compilers and libraries.

Rust really is an outlier here -- its marketing has managed to walk a delicate tightrope to be considered "stable" and "mature" enough to use for important projects like Linux, while also still being new and fast-moving enough that it's unreasonable to expect those projects to use anything but the most recent bleeding-edge nightly build. And that will create problems, if only for distros trying to build the kernel binaries that they ship to their users.


That's still user-focused. I actively avoid debian as a development distro because things are so out of date and so customised. Arch is a much nicer development experience because they for the most part just take the up-to-date upstream projects and build them without fiddling with a bunch of stuff. (OTOH, if I'm standing up a box to run critical network services, debian is strongly preferable)


If I'm writing new software I'm necessarily developing something that's not yet a package in any distro, so I don't necessarily want to be using distro tools to build it.

I also strongly disagree with the characterization that it's "easy" to modify and recompile "any" package in a given distro - typically, someone would prefer to modify the upstream and build it (which may not be possible with the distro's supplied tools) and use the modified version. Distributions in my experience are quite bad about shipping software that's "easy" to be modified by users.

It's a gross mischaracterization of the ecosystem to suggest that many Rust projects require "bleeding-edge nightly" to build. Kernel modules have a moderate list of unstable features that are required but many (all?) have already been stabilized or on the path to stabilization so you don't need a "bleeding edge" nightly.

In my opinion the lagging nature of distros illustrates one of the fundamental problems of relying on them for developing software, but hey, that's an ideological point.


> Kernel modules have a moderate list of unstable features that are required but many (all?) have already been stabilized or on the path to stabilization so you don't need a "bleeding edge" nightly.

https://github.com/Rust-for-Linux/linux/issues/2 lists the "unstable features" required by the Rust for Linux codebase. It's a long list!

One of the features in the "Required" section was "added as unstable in 1.81", a version released three weeks ago. Presumably that means you need a nightly build that's newer than (or at least close to the release of) Rust 1.81, which seems pretty bleeding-edge to me.

I sure hope none of those "paths to stabilization" involve making any changes to the unstable features, because then release versions of the Linux kernel would be stuck pinning random old nightly builds of the Rust compiler. That seems even worse than depending on bleeding-edge ones.


I think the point is that maybe you shouldn't. (I'm not agreeing or disagreeing with that position.)

If you used a distro that provided multiple versions of something and kept them up to date or used nix or guix you wouldn't have this problem at all.


You definitely need to be able to build the kernel that shipped with the distro using tools from the distro. That's basic table stakes. Needing newer tools for building from the latest upstream repository is fine though.


Does Rust support the same range of architectures as C? Or is there some way to compile Rust to C to support weird obscure systems?

Forgive me if this is something commonly known by the people involved. It just strikes me as the most obvious objection to Rust in the kernel and I haven't seen any explanation of what's going to happen.


Currently Rust is only acceptable in drivers. These are inherently platform specific, so there’s no issue with platform support. If Rust doesn’t support the platform, the driver won’t be written in it.

Rust’s platform support is better than you might assume, but it is missing some platforms the kernel itself supports, so until that’s resolved, it can’t be in the core of things.


Rustc lowers to LLVM IR, so it can target anything LLVM can. For a long while, that meant there were obscure architectures that were excluded, but IIRC coverage has improved and the Linux kernel has decided to drop support for them.

The main push has actually been from the BSD family pushing clang (and thus LLVM) to support a broader swath of less common architectures.


Check out the Rust's documentation page on platform support[0]. You'll be able to find the full list of supported platforms, as well as the target tier policy, and specific target requirements and maintainers.

[0]: https://doc.rust-lang.org/rustc/platform-support.html


No, rustc does not yet. There's a lot of hope that the GCC Rust frontend will help bridge the gap, but there is also increasing support in LLVM for more architectures, and things like this should drive interest in accelerating those projects.


Rust doesn't support Alpha, PA-RISC, or SuperH, but AFAICT there are at least nominally functional back ends for the other platforms modern Linux runs on.


As a side note, does anyone know what the backup plan is if Linus is suddenly no longer able to lead in his current capacity? I think it's likely there is one, but am not sure what it is.


In a previous interview (several years ago) Linus said "there are at least a dozen people who can take over tomorrow if I get hit by a bus", or something to that effect. I don't think there's a specific concrete plan.

Realistically, what will probably happen is that all the core maintainers will end up in a big room and discuss what to do next. Maybe they will do something with white smoke from a chimney. It's also very possible the nature of the leadership will change, rather than a simple s/Linus/…/.


>there are at least a dozen people who can take over tomorrow

Well, this is its own sort of problem, though [0]. A good thing about a BDFL is unambiguously having someone in charge, accountable and responsible, with the power to "decide", as opposed to no one, or too many people fighting for it. When people ask this question, I think what they really want to know is: who does Linus want, or whom will he put, in charge?

[0]: https://en.wikipedia.org/wiki/Succession_crisis


ATM Greg KH would probably take over (since he's already done it once before). I doubt he's the last in the line of succession, though.


No, but as long as it doesn't land in Microsoft's hands I'm happy. edit: Not Oracle either


So Nvidia?


We might get a stable and performant upstream driver in that case


Regardless of what upstream decides, downstream in ChromeOS and Android land it is already being used.


I’ve read many times that a goal is to see if the use of Rust can work out in the kernel.

What is this experiment testing for? What exactly is the evaluation? What are some example findings, both pro and con?


You can find the home of the project here: https://rust-for-linux.com/

Linus has been talking publicly over the last few years about how it’s getting harder and harder to find maintainers. There are various reasons for this, but C is part of the problem. If there are newer technologies that would be appropriate for the kernel, they should be investigated.

At the same time, more and more downstream users of the kernel have been adopting Rust, and more kernel devs have been checking out Rust and have liked what they’ve seen. Many also don’t, to be clear, but the point is that a group of people were willing to put in the work to give Rust a real shot, and so did some initial work to sketch it out.

Linus agreed to give it a try, but there are some caveats: Rust code cannot hold up improvements to the C code. Changes to the C are allowed to break the Rust, in other words. Then, it was discussed that the right first place to try Rust would be in driver code. This has the advantage of being a real project, but it also sidesteps another issue: today, Rust doesn’t support every platform Linux does. Drivers are platform specific, so you simply write drivers only for platforms Rust supports.

So far, there have been some successes: some example drivers have been written. There have also been some struggles: the code relies on some experimental features of Rust and its standard library. But the Rust project has committed to trying to get those finished sometime soon, so that’s not a permanent issue. There has also been some… skepticism of both Rust and the experiment from some members of the kernel development community, and that’s caused some problems. Hopefully those can be worked out too.


Thanks for the thorough details. That really helped my mental model of the topic.

> ...some experimental features of Rust and its standard library that the code relies on.

In my work I've experienced this, which can cause a sort of inversion where usage of experimental (or undocumented) features becomes so relied upon that I lose most control over how the experiment might resolve (i.e. sorry, now it must find its way into main). Any concerns about the same kind of thing here?


You're welcome.

> Any concerns of the same kind of thing?

Here's the canonical list: https://github.com/Rust-for-Linux/linux/issues/2

There's a lot, and I don't know the status of many of them, personally. But I don't see anything there that I know is not gonna work out; for example, they aren't using specialization. Most of it feels like very nuts-and-bolts codegen options and similar things.

That said, back in August, the Rust Project announced their goals for the second half of this year: https://blog.rust-lang.org/2024/08/12/Project-goals.html

They say that they're committed to getting this stuff done, and in particular: https://rust-lang.github.io/rust-project-goals/2024h2/rfl_st...

> Closing these issues gets us within striking distance of being able to build the RFL codebase on stable Rust.

So, things sound good, in my mind.


> he also is unable to work with Rust until a suitable compiler is available in RHEL

What an arse. Best to ignore these people.


Why can't all symbols exported to modules maintain a C ABI, obviating (if I understand correctly) the genksyms problem?


(By my reading) the problem is that determining what the C ABI is relies on a C parser. If a module depends on an API, genksyms checks whether the ABI has changed and, if it has, won't load the module (or, more broadly, it could also use content addressing to look at the body of the function).

It seems like the solution is to use DWARF info to determine that, but it has to be backwards compatible, because previous implementations relied on the parsed C code rather than the DWARF symbol info.


Because then when you want to write a module in Rust, all your interfacing with the rest of the kernel will be done through C function calls, with C semantics, and C expectations of ownership. There's really little point to adding Rust abstractions to the kernel, then, at least when it comes to modules.
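
For illustration, here's a hedged sketch of what that C-ABI boundary looks like; the type and function names are hypothetical, not real kernel bindings:

    #[allow(non_camel_case_types)] // C-style naming, for illustration
    #[repr(C)] // lay the struct out exactly as C would
    pub struct my_device {
        pub id: u32,
        pub flags: u32,
    }

    #[no_mangle] // keep the symbol name the C side expects
    pub extern "C" fn my_device_reset(dev: *mut my_device) -> i32 {
        // Across a C ABI there is only a raw pointer: no lifetimes, no
        // ownership, no null checks. Rust's guarantees have to be
        // re-established by hand behind unsafe.
        if dev.is_null() {
            return -1; // C-style error code instead of a Result
        }
        let dev = unsafe { &mut *dev };
        dev.flags = 0;
        0
    }

Every such boundary discards exactly the information the Rust abstractions exist to carry, which is the point.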


But without a stable ABI, can you realistically expect modules to work across versions? It seems like a similar problem to shared libraries, which Rust punts on.


The kernel doesn't offer a stable ABI for modules regardless of Rust. So what you're describing is already the case.


Sure, it changes when something changes, but with Rust it can change (in theory) without any code changes at all, just from a toolchain change.


The point is that there’s no requirement at all, so the rate of change is irrelevant.


[flagged]


HN automatically changes some titles, to counteract SEO'd clickbait.

dunno about the link, but the part after "si=" isn't required, e.g.:

https://youtu.be/_wc7ujflrnI

works fine


You can also edit the post after submission to revert the automatic title changes


With the Rust team falling apart, who is actually going to maintain it long term? I want to use it, but I just see drama.


One person left. "Falling apart" is a bit hyperbolic, no?


Who says the Rust team is falling apart?


The project lead left after being admonished by Theodore Ts'o.


The RfL project lead is Miguel Ojeda. He hasn’t gone anywhere.


I think that was the lead for the group getting Rust into the kernel.

Rust itself is still strong outside of the kernel.


Not sure I'd agree they're falling apart, but just to be clear: I think you're referring to the Rust for Linux team, not the actual Rust language team, right?


The Rust team is not "falling apart". The article specifically says it's not. One person left. Many people leave projects for many different reasons.


Same thing that happened when the OpenBSD team fell apart.


Not sure why the downvoting. It’s a serious question. Who is ensuring that this project has a future and isn’t going to collapse tomorrow?


Asserting that a team is collapsing with no evidence will get downvotes.

This work is funded by Google and I believe others. So that’s a positive signal towards maintainability. Regardless of a specific person being involved or not, Google has significant Rust components in Android and a vested interest in all this.

But also on some level, it’s too early for these questions. Stuff is still an experiment. It could all get removed tomorrow. Once it’s closer to being permanent, then “how do we continue to maintain this” becomes a more important question.


Google funding is a positive signal?

https://killedbygoogle.com/


This link is about consumer products, not about technical projects.

If you believe Android will be killed soon, sure. I do not.


Android sits at the heart of their mass surveillance network. Rust experiments in the kernel? Seems like quite a stretch between the two to me.

I'd find more comfort in the "others" that you mentioned, although I don't know who they are.


Saying Android is bad is irrelevant. It’s an important project for Google.

> Seems like quite a stretch

It’s not a stretch, they’ve been talking publicly about their increased Rust usage in Android (among others) for years now. Here’s a 2023 post for example https://security.googleblog.com/2023/10/bare-metal-rust-in-a...


> I'd find more comfort in the "others" that you mentioned, although I don't know who they are.

Go, for example, or gRPC, or tons of other stuff.


It's a loaded question, like "have you stopped beating your wife yet?"


>> I just see drama.

> It’s a serious question.

Is it?


I rolled my eyes, but then I realized you self-replied and actually laughed out loud.

Maybe next time do a tiny bit of looking before asserting unsubstantiated statements and then doubling down by calling your own unsubstantiated question "serious". Not a serious way to conduct discourse, just tossing that out there.



