These will of course still be around for the foreseeable future (i.e. decades). At the same time, we are seeing Rust pop up in a lot of places that used to be the exclusive domain of C. People are already re-implementing libraries, popular command line tools, etc. A lot of these implementations have clear merit in the sense that they are faster, safer to use, easier to scale, maintain, etc.
IMHO it is just a matter of time before vendors start providing e.g. drivers written in Rust for their hardware. I could see Linux evolving to a point where integrating drivers like that is both possible and common. Once that happens, it will effectively be a hybrid kernel. Rust is not the only thing moving in that direction; wasm is too, and people are talking about running that in the kernel. And of course Rust runs on top of that as well. So one outcome would be a lot of Rust code running in a wasm sandbox on top of legacy kernels.
Please just... stop. You are using terms that you clearly don't understand the meaning of. What is a "monolith" from your point of view? Because I can tell you that Windows NT is most certainly not a monolithic kernel. Its architecture is very much that of an impure hybrid kernel. You're using the term "monolith" as some sort of buzzword in a realm where that exact term already has a highly specific meaning.
>At the same time, we are seeing Rust pop up in a lot of places that used to be the exclusive domain of C. People are already re-implementing libraries, popular command line tools,
Yes, but some (e.g. ripgrep) don't actually do the same thing as the programs they are supposed to "re-implement". Most Rust "re-implementations" don't actually re-implement programs; they are new programs doing a similar, but not quite the same, thing as the older C implementations. For example, ripgrep is neither GNU nor POSIX compliant, and therefore it is not a re-implementation.
>I could see linux evolve to a point where integrating drivers like that is both possible and common. Once that happens, it will be a hybrid kernel effectively.
Hybrid kernels have absolutely nothing to do with the language they are written in beyond the fact that they're written in it. Monolithic, hybrid, microkernels and exokernels are all terms to describe the architecture of a kernel.
>So one outcome would be a lot of Rust code running in a wasm sandbox on top of legacy kernels.
So then Rust and wasm aren't part of the kernel, is what you're saying.
> Yes, but some (e.g. ripgrep) don't actually do the same thing as the programs they are supposed to "re-implement". Most Rust "re-implementations" don't actually re-implement programs; they are new programs doing a similar, but not quite the same, thing as the older C implementations. For example, ripgrep is neither GNU nor POSIX compliant, and therefore it is not a re-implementation.
That ripgrep isn't GNU or POSIX compliant is indeed a distinction worth making, but certainly not in this context. You're adopting a very pedantic interpretation of "re-implementation" here that doesn't really matter to the general point. Specifically:
> At the same time, we are seeing Rust pop up in a lot of places that used to be the exclusive domain of C. People are already re-implementing libraries, popular command line tools, etc. A lot of these implementations have clear merit in the sense that they are faster, safer to use, easier to scale, maintain, etc.
None of this requires any of the "re-implementations" to be 100% strict bug-for-bug compatible tools.
This general confusion comes up so often that I have a FAQ item for it: https://github.com/BurntSushi/ripgrep/blob/master/FAQ.md#pos...
It is a worthwhile distinction, because the "re-implementation" talked about is often materially different, to the point that it no longer adheres to what little standardization there is.
So realistically we are talking about a new product, or a radical evolution of an existing one. It and everything above it also have to change.
Consider this. What if the statement had instead read:
> At the same time, we are seeing Rust pop up in a lot of places that used to be the exclusive domain of C. People are already re-implementing popular command line tools, albeit with different interfaces that lack POSIX compatibility. A lot of these implementations have clear merit in the sense that they are faster, safer to use, easier to scale, maintain, etc.
Other than the words being more precise for pedants, does the central point of the statement change? No, it doesn't.
Is there a reason that command line tools cannot be re-implemented in Rust and maintain GNU and POSIX compliance? If not then the fact that ripgrep is not compliant is irrelevant to the larger point that Rust command line tools could eventually replace existing C command line tools.
No, there isn't, but the "re-implementations" I know of don't even try to do that.
>If not then the fact that ripgrep is not compliant is irrelevant to the larger point that Rust command line tools could eventually replace existing C command line tools.
Of course it's possible. You can go ahead and re-implement them in BASIC even. The point is that it hasn't been done, and until it has been done then such a replacement will not happen.
At that point there are further considerations. The C linker is ubiquitous, and remains a strong, underlying presence even on Windows, the one platform where C stands the weakest. The reason the C linker is so widespread is that while C has no standard ABI, the platforms have very strongly defined ABIs (e.g. the System V ABI), and those ABIs are extremely stable and haven't had a single change in over a decade. Those ABIs are also very, very simple, and the result is that writing any type of software against a C library in any language is dead simple.
C++ at least has extern "C" going for it, which disables name mangling. Rust doesn't have that, nor does it have a stable ABI. C and C++ are the two most interoperable languages out there, with an extremely mature and reliable toolchain. It's simply not justifiable to replace C/C++ with Rust as a systems programming language until this serious problem is fixed.
Then furthermore, a vast amount of effort has been put into these tools over the years to ensure that they run on any *nix and any architecture, and even so, they can be ported with relative ease. For example, how do you suggest implementing glibc in Rust? You might argue "why would we need the C standard library when we're porting things to Rust", and the answer is that if you want to move towards Rust, having a C library (particularly the GNU one; a lot of software depends on GNU extensions) is of utmost importance until that goal has been achieved.
> C++ at least has extern "C" going for it, which disables name mangling. Rust doesn't have that, nor does it have a stable ABI.
Rust has #[no_mangle] and extern "C" and those two guarantee that ABI stability you're looking for.
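For readers unfamiliar with the mechanism, here is a minimal sketch of what the parent describes. The function name `rust_add` is just an illustration; `extern "C"` fixes the calling convention, and `#[no_mangle]` keeps the symbol name intact so C code can link against it.

```rust
// Exports an unmangled, C-callable symbol named `rust_add`.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust too, of course; from C you would declare
    // `extern int32_t rust_add(int32_t, int32_t);` and link normally.
    assert_eq!(rust_add(2, 3), 5);
}
```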
> Then furthermore, a vast amount of effort has been put into these tools over the last years to ensure that they run on any *nix and any architecture, and even so they can be ported with relative ease.
That's actually a breeze in Rust, including cross-compilation (which is relatively painful in C/C++).
> For example, how do you suggest implementing glibc in Rust? You might argue "why would we need the C standard library when we're porting things to Rust", and the answer is that if you want to make a move towards rust, having a C library (particularly the GNU one, a lot of software depends on GNU extensions) is of utmost importance until that goal has been achieved.
There's work towards that: https://gitlab.redox-os.org/redox-os/relibc
I am primarily a systems programmer and write software for the petroleum industry in my country. I work with C++ and in some (rare) cases C, and I've been doing that for the past 4 years. I have my own WIP hobby unix-like microkernel project written in C as well.
>Rust has #[no_mangle] and extern "C" and those two guarantee that ABI stability you're looking for.
I admittedly did not know this before it was pointed out to me.
>That's actually a breeze in Rust, including cross-compilation (which is relatively painful in C/C++).
It really isn't. Have you looked at rustc's list of compiler targets? Support is improving, but there are still a lot of key areas completely missing. To my knowledge it supports x86, ARM, MIPS and POWER; that's not a long list.
>There's work towards that: https://gitlab.redox-os.org/redox-os/relibc
That's good, and is genuinely what's necessary to put Rust in key components.
Anyway, most of the work of porting Rust to new platforms is not in rustc itself, but rather porting LLVM to target that platform. This is bad in a way and good in a way. It's true that LLVM doesn't have as extensive a list of supported targets as GCC. However, for people developing new architectures and OSes, LLVM is often the go-to choice as the first compiler to port. For example, WebAssembly, eBPF, and AMDGPU are all supported in upstream LLVM at present, but not in GCC. On the other hand, RISC-V got GCC support first – but lowRISC also actively developed an LLVM backend, which is now upstream. 
Of course, C benefits from having multiple implementations; even on platforms that only LLVM has a backend for, you can just use LLVM's C compiler (Clang), whereas there's (currently) no Rust compiler with a GCC backend. I'd love to see that change in the future. Still, I'd say Rust target support is already pretty strong when it comes to the (embedded) platforms most people are doing new development for, and rapidly getting stronger.
I’ll look into it tomorrow, thanks.
It does, it's called "#[no_mangle]", and together with an extern "C" fn declaration you get a non-mangled function directly callable from C.
Genuine question: why would WASM in the kernel make any sense at all?
Hardware-enforced isolation runs code faster (in effect it executes more instructions per second), but switching from user space into kernel space and back is costly. The cost is high enough that projects like user-space TCP/IP stacks keep popping up: they are all about avoiding switches to and from kernel mode.
A software sandbox in the kernel allows costless (or nearly costless) context switching, but runs code more slowly. It's a tradeoff, and not every piece of software would benefit from moving into kernel WASM, but some would.
I'm very skeptical of the idea of throwing out virtual memory and process isolation to depend instead only on software sandboxes.
What do you mean by this exactly?
Depends on who you ask. Hipp on Rust in 2016:
Rewriting SQLite in Rust, or some other trendy “safe” language, would not help. In fact it might hurt.
> drivers written in Rust
If you have to write inline assembly, and generally bypass Rust's security features and use "unsafe" code, why wouldn't you just use C?
All that said, it is possible that SQLite might one day be recoded in Rust. Recoding SQLite in Go is unlikely since Go hates assert(). But Rust is a possibility. Some preconditions that must occur before SQLite is recoded in Rust include:
A. Rust needs to mature a little more, stop changing so fast, and move further toward being old and boring.
B. Rust needs to demonstrate that it can be used to create general-purpose libraries that are callable from all other programming languages.
C. Rust needs to demonstrate that it can produce object code that works on obscure embedded devices, including devices that lack an operating system.
D. Rust needs to pick up the necessary tooling that enables one to do 100% branch coverage testing of the compiled binaries.
E. Rust needs a mechanism to recover gracefully from OOM errors.
F. Rust needs to demonstrate that it can do the kinds of work that C does in SQLite without a significant speed penalty.
If you are a "rustacean" and feel that Rust already meets the preconditions listed above, and that SQLite should be recoded in Rust, then you are welcomed and encouraged to contact the SQLite developers privately and argue your case.
I follow the Rust story from afar, so I don't know the status of every one of these points.
B and F (FFI and speed) are covered.
C and E (embedded story and OOM recovery) are being actively worked on.
A will improve with time (for example, I suppose that the Rust 2015 dialect will settle at some point, feature-wise, once Rust 2018 is out).
D (branch coverage of compiled binaries) is on the radar, but not addressed yet.
// safe, tested, C
// potentially dangerous C deemed thus for whatever arbitrary reason
// "safe" Rust
// "unsafe" Rust
So yes, I do believe, given the quality of the code bases I have seen in production.
And as discussed at the Linux Kernel Security Summit 2018, whose videos are freely available for you to watch as well, even with the rigorous process for accepting patches into the kernel, CVEs keep increasing.
68% of them were caused by memory corruption, something mostly unique to C and languages that are copy-paste compatible with it.
Then another good chunk was related to UB, something even Linus has ranted about a few times.
So yeah, Safe C is an oxymoron, unless we are talking about a new variant with safety turned on by default, where warnings are treated as errors and everyone uses static analysers that break their CI builds on error.
> Go etc. are garbage collected, making interacting with C either impossible or excruciatingly slow.
"Impossible" is, of course, a lie. "Excruciatingly slow" may be the case, though it depends on the kind of interaction you are looking at. Without any context, this statement is overly general.
Slide 14 can be taken out of context and misrepresented as showing that "Rust can be faster than C", although the programs being compared use different data structures, so it's inconclusive. (It's not clear to me whether the author wanted to make a broad claim like this. The title suggests so, but again, the difference in data structures suggests otherwise.)
> every operating system retains some assembly for reasons of performance
Is this really the case? I would be interested in examples. My impression was that assembly is needed for hardware interaction for which there are simply no C constructs available (setting timers, interrupts, system registers, and whatnot), and possibly highly space-constrained bootloader stuff, but performance? C compilers are not stupid.
> in-kernel C tends to be de facto safe
This is a bit of a stretch. I mean, yes, most lines of the Linux kernel have never been the cause of a CVE. But it's hard to tell which lines are the critical ones. Properly encapsulating safe/unsafe regions does seem like an improvement. (My understanding is that Rust's "unsafe" is not properly encapsulated: You might mess something up that will cause a segfault later on, in ostensibly "safe" code. Still, it's a step forward.)
> This is a bit of a stretch.
Not "a bit;" quite a lot of a stretch. What makes C problematic is that the language is specified in terms of an abstract machine that looks absolutely nothing like real hardware, and executing instructions that make no sense in the abstract machine but are very well-defined in the real hardware is liable to cause problems for the optimizer in the compiler. The assumption that it's safe is built on the principle that a) compiler vendors aren't going to break the kernel build and b) it's possible to jam enough flags onto the compiler to make the abstract machine adhere to the real semantics. Both of these are dangerous assumptions.
One of the more infamous compiler-writers-are-breaking-code-for-no-reason rants is the optimization that assumes that a dereferenced pointer cannot be null, so it deletes null checks. Of course, the actual code that caused that rant was this:
static unsigned int tun_chr_poll(struct file *file, poll_table *wait)
{
    struct tun_file *tfile = file->private_data;
    struct tun_struct *tun = __tun_get(tfile);
    struct sock *sk = tun->sk;  /* dereferences tun before the check below */
    unsigned int mask = 0;

    if (!tun)
        return POLLERR;  /* the null check the optimizer may then delete */
    /* ... */
Is there a language (other than assembly) that would have a closer model to the hardware?
The internal IRs of compilers (such as LLVM IR or GCC GIMPLE) are likely to be much more accurate to hardware (and the ones I called out indeed are). It's debatable whether you want to actually call these non-assembly languages, though.
More concretely, LLVM IR has integer types for any bit size, while actual hardware architectures have only a few. Many real CPUs have condition code registers set as side effects of operations, while LLVM requires explicit compares, etc.
I definitely would not call LLVM IR accurate to hardware, but it is more accurate than C is.
(I have worked on a system that worked like this. There was some kind of process information block at address 0, and NULL at runtime was all bits reset. The system compiler was not gcc.)
That is, how much overhead does the call from OCaml to C have? Not just how much faster the entire system got.
My hunch is that it is still quite favorable.
One other problem is that you cannot inline C into OCaml functions (which is generally true when mixing languages) so there's always some overhead even in the noalloc case, although it will be fairly small on modern CPUs.
IMO the big GC related issue is lack of determinism especially in the management of external resources. If you need that in a managed language you're forced into grafting reference counting on top which is particularly error-prone and unpleasant. Interacting with other languages can be slow, and bridging uncomfortable.
> Slide 14:
Rust can be faster than C in the general case because of the optimizations permitted by, among other things, strict aliasing rules.  The Rust language invariants are stronger and therefore the compiler can be more aggressive.
> Slide 16:
IMO you can do all that stuff in C with a combination of volatile pointer I/O and if necessary compiler intrinsics. I've done a lot of AVR programming and never dropped down to assembler because something wasn't available in C. I'd imagine the answer is really neither, just that some people are set in their ways and view ASM as 'faster' even though in reality LLVM would probably generate faster code by better taking into account instruction issue and data dependencies.
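For comparison, a sketch of the Rust analogue of that volatile pointer I/O: `std::ptr::read_volatile`/`write_volatile` play the role of C's `volatile` accesses. The function name is illustrative, and an ordinary variable stands in for a real memory-mapped register address so the example runs on a hosted target; on an actual MCU the address would come from the datasheet.

```rust
use std::ptr;

// Toggle one bit of a memory-mapped register via volatile accesses,
// which the optimizer will not elide, merge, or reorder.
pub fn toggle_bit(addr: *mut u32, bit: u32) -> u32 {
    unsafe {
        let v = ptr::read_volatile(addr);
        ptr::write_volatile(addr, v ^ (1 << bit));
        ptr::read_volatile(addr)
    }
}

fn main() {
    let mut reg: u32 = 0; // stand-in for an MMIO register
    assert_eq!(toggle_bit(&mut reg, 3), 0b1000);
    assert_eq!(toggle_bit(&mut reg, 3), 0);
}
```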
> Slide 19:
Unsafe Rust is unsafe because you can't express what you want in safe Rust. To your point, you can absolutely break things. The implicit contract is when writing unsafe sections that you are to maintain the invariants of the safe language at entry and exit of the block.
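A minimal sketch of that contract (the function name is illustrative): the invariant the unsafe code relies on is established in safe code before the unsafe block is entered, so safe callers can never trigger UB through this function.

```rust
// Returns the first element, or `default` for an empty slice.
pub fn first_or(xs: &[i32], default: i32) -> i32 {
    if xs.is_empty() {
        default
    } else {
        // SAFETY: xs is non-empty, so index 0 is in bounds.
        unsafe { *xs.get_unchecked(0) }
    }
}

fn main() {
    assert_eq!(first_or(&[7, 8], 0), 7);
    assert_eq!(first_or(&[], -1), -1);
}
```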
What makes safe Rust safer than C, though, is all the flagship features. You can detect data races across threads (and interrupts) statically. You can't write to and read from data at the same time. You have fully deterministic dynamic collection of resources. etc.
As for Rust being faster than C, well, people said the same about Ada, but it is rarely the case, so I wouldn't hold my breath for Rust either. After all, those languages use the same backends: GCC in the case of C and Ada, LLVM in the case of C and Rust. But it's enough to be as fast as optimized C and have all the safety features.
The killer flaw of C for performance is the pointer aliasing problem. C only has the very coarse-grained tools of strict aliasing (often disabled in large applications because it breaks code!) and restrict (which relies heavily on manual programmer annotation) to control these aliasing issues. Since Rust has the rule that you can only have one mutable reference to an object at once (and read-read aliasing issues don't matter), it can effectively automatically add in these annotations without relying on programmer annotation.
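A minimal sketch of that rule in action (the function name is illustrative): `dst` is an exclusive (`&mut`) borrow and `src` a shared one, so the compiler knows they cannot alias, which is the moral equivalent of C's `restrict`, but checked by the borrow checker rather than promised by the programmer.

```rust
// Element-wise `dst[i] += src[i]`; the compiler may vectorize freely
// because `dst` and `src` are guaranteed not to overlap.
pub fn add_assign(dst: &mut [i64], src: &[i64]) {
    for (d, s) in dst.iter_mut().zip(src) {
        *d += *s;
    }
}

fn main() {
    let mut a = [1, 2, 3];
    add_assign(&mut a, &[10, 20, 30]);
    assert_eq!(a, [11, 22, 33]);
    // `add_assign(&mut a, &a)` would be rejected at compile time:
    // `a` cannot be borrowed as shared while it is borrowed as mutable.
}
```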
This isn't a property of GCs per se; it's just that most GCs are optimized for throughput. Go's GC's pause times are on the order of 1ms, which might not be appropriate for every application, but it's probably fine for soft-realtime systems. There are a lot of other levers one could imagine as well, like semantics for demarcating critical sections.
I’m not criticizing pause times, but rather that you can’t rely on code inspection to identify the point of resource deallocation, which is pretty much the flagship feature of a GC :)
This is completely wrong. The flagship feature of a GC is memory management, __not__ filehandle management or management of other resources. While many GCs support finalizer hooks, most languages with GCs (Go, Python, etc) explicitly discourage using these to close file handles for specifically the reasons you cite. Automatic resource management is useful and desirable, but it's out of scope for GCs.
I suppose in reality it depends. Finalizer hooks are absolutely bad, I agree. Which means that resource management has to be grafted on top of a GC memory model. Since the compiler won't be helping you enforce the management of resources since, as you point out, it's out of scope, it introduces a new vector for error. Rust allows you to bind the two and handle them together, both in a checked/enforced way. I've yet to see a GC'd language that does the same, but I can't think of why one couldn't exist.
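One way to sketch that binding is Rust's `Drop` trait, which ties resource release to scope. The types here are illustrative: a log stands in for a real OS handle so the example is self-contained.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A resource whose cleanup is bound to its memory lifetime via Drop.
struct FileHandle {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for FileHandle {
    fn drop(&mut self) {
        // A real handle would close its file descriptor here.
        self.log.borrow_mut().push(self.name);
    }
}

pub fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _f = FileHandle { name: "a.txt", log: Rc::clone(&log) };
        assert!(log.borrow().is_empty()); // not closed yet
    } // _f leaves scope: Drop runs right here, deterministically
    let out = log.borrow().clone();
    out
}

fn main() {
    assert_eq!(drop_order(), vec!["a.txt"]);
}
```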
It's not a new vector of error; it's the same vector that C has to deal with.
> resource management has to be grafted on top of a GC memory model
Only if this means "don't bind resource lifetimes to memory lifetimes", which is the solution that every tracing GC language implementation (that I'm aware of) uses. There's nothing stopping you from refcounting file handles just like you would in C. The compiler could even automatically add in the refcounting instructions for you, or the language runtime could have a special GC just for file handles.
Besides, a kernel has to deal with plenty of forms of nondeterminism anyway.
Do you maybe have hard real-time systems in mind when you say "deterministic"? I agree that a GC would be a bad idea for those, but these require validation of the hardware in combination with the software anyway.
Sure. The same is true for Fortran, for example. It's just that the few benchmarks of Rust vs. C I have seen so far all seem rigged in one way or another.
Yes, you're right. Some of those can occasionally be accessed via memory mapped registers, but at least some inline assembly should be expected/required for most architectures. So it's not merely performance. Though it also makes sense for some critical tasks to be tuned via intrinsics or assembly.
L4 is too low-level - you have to run another OS on top of it, usually Linux. QNX offers a POSIX interface.
You say that like it's valuable. POSIX has a lot of problems; if you're going to go through the trouble of throwing away a mature kernel like Linux, with all the hardware support that brings, you may as well also ditch your POSIX baggage.
Here's the original QNX paper.
On April 9, 2010, the day the acquisition closed, Research In Motion took the source code offline, with no notice. All user open-source projects on QNX were abandoned shortly thereafter.
So if someone implemented a QNX-style microkernel in Rust, file systems and networking could be adapted from rump kernels. Drivers would require more porting. That would be a nice way to get a usable but small OS for embedded and IoT devices. Linux has more than you want in an embedded device.
I'm also excited to see how it turns out :)
I can't think offhand of any language that's really better here in a sense of being more efficient, except probably Classical Latin which is horribly complicated and of course a dead language.
> No, that's an example of someone not knowing the language very well.
I disagree. The sentence is clear and unambiguous as written. The subject of the first part of the sentence is the CTO himself. If the continuation of the sentence is intended to switch subjects, then it must be written so. For example, '... by the Joyent CTO, whose company ...', or '... by Joyent, who/which ...' thus 'who' refers to 'CTO'.
My entire adult life I've been writing C. I resisted Rust for a while, but took the plunge when a C++ project came along that was in dire need of being reworked. I liked it; the language, the compiler, and the tooling around it helped me tremendously.
I assume you're aware of all the things that Rust statically guarantees for you, and I'll also assume you know the difference between language complexity and language implementation complexity (eg, the compiler or some other tooling):
I've been skimming through the C++ section of cppreference.com, and oh boy, there's heaps and heaps of... stuff... just stuff. The interplay between all that stuff is complex and it's hard to keep everything in mind. Keeping things in mind is important for correctness, or at least for having a reasonable amount of confidence in what you're writing. Information locality is also important for correctness, and C++ lacks both of those. I'll even go further and say that C++ is a write-only language, like a garbage bin of features and exceptions to each of those features. Rust has many many many clear advantages (and some disadvantages) compared to postmodern C++.
A lot of the static guarantees that Rust provides can be modelled in C++ as well; see for example https://clang.llvm.org/extra/clang-tidy/checks/cppcoreguidel.... Eventually someone will write a Rust-style borrow checker as a clang analysis pass. Already now it is possible to express a lot of compile-time properties cleanly in C++, with features such as constexpr lambdas, static_assert, etc. Additional features such as Concepts, metaclasses, and modules will improve C++'s expressivity even further.
In short, yes, C++ has a lot of stuff, and everyone knows that it is hard to get a grip on all its features and misfeatures. But there has been a continuous effort to rectify and improve on its shortcomings. See for example https://clang.llvm.org/extra/clang-tidy/checks/list.html for a list of clang-based code transformations that are able to automatically improve your C++ code.
The way I see it, for legacy reasons (I can interface with the majority of commercial software developed in the last 30 years: CAD, EDA software, Houdini, Ableton, ...), tooling reasons (the Rust compiler is a slow hot mess compared to the state-of-the-art C++ compilers) and momentum (there is a large incentive to continuously improve C++, because it is the foundation of a large fraction of the commercial software out there), it is unlikely that Rust will outcompete C++ in a significant way in the long run.
To close off with an example: the Lean theorem prover, https://github.com/leanprover/lean, implemented in C++, handily beats the 20+-year-old Coq prover. One of its key features is multicore support. If you look into its implementation details, the equivalent Rust code would have to work around a lot of Rust "safety" features, essentially because in many cases it is hard to convince a hardcoded heuristic like a borrow checker that whatever you are doing is safe. The static guarantees that Rust gives wouldn't help you at all in correctly implementing some of the very involved algorithms a theorem prover kernel has to implement, while standing in the way of the implementation being straightforward. This leaves aside the obvious tooling advantages (you can use Visual Studio, good profilers and debuggers) and integration advantages (SMT solvers like Z3, the LLVM backend).
Which has its share of segmentation faults (around 50 closed issues) and an unknown number of corner cases with silent memory corruption/data races.
the same bit applies to C, and we all know how it's working out
There is also a bit that most of the bad security failures are not as sophisticated as the ones we mostly talk about. (Ironically, of course, the widest vulnerability lately was at the CPU level...)
I think parts of an OS can and should be written in something rust like, however the whole thing should probably not be written in rust.
My personal thought is that C and C++ 98 era are bad, but they were what we needed at the time. Part of this badness was that you could do anything but exact facilities for specialized tasks had to be rolled from scratch. Libraries are possible, but not ideal (no namespaces for C and crazy header + recompile issues for both).
In order to replace C and C++ we need more than just one language; we need an entire family of languages that all have the specific features needed to support their domain. Rust can probably be used in game programming, but Zig (or, in theory, eventually Jai) is being specifically made with that goal in mind, so they will probably be better. Rust itself will be much more useful for applications that can't stomach managed languages (i.e. web browsers, etc.) because that's what they are specifically building it to do. It's got high-level features in a form factor you would expect from C#/Java, but with a minimal runtime and no GC.
We can probably write OSes in Rust or Zig, but ideally someone who really likes thinking about OSes will create a language specifically geared towards writing OSes that will be able to replace C.
The top slogan for Rust is "fearless concurrency", which ties back into the memory model, mutability and sharing references. This helps you prevent (at compile-time) data races and other interesting bugs.
I didn't see any support for that in Zig at all.
If you are writing a kernel it's highly likely that you'll end up in a lot of scenarios where you have to use unsafe Rust, in which case Zig would be the better language for the task. Admittedly, it would be nice to have some of both, and they both have room to get better. But I see Rust as more of a "libraries and applications" centric setup, where it's the right thing for a web browser and the wrong thing for a device driver.
This would be, to put it mildly, quite difficult. Even then, unsafe code is sometimes not written correctly which brings the whole thing down.
I, personally, still think it's worth it. I think efforts like Redox OS can teach us a lot about what we're doing and offer a chance to collapse some of the layers of cruft existing OSes have accumulated.
It just limits the places shit can happen and which need to be closely reviewed which alone is a big help.
For example, if you have some data structure synchronized with a mutex, the mutex would be the owner of that data structure. Everything else would just get borrowed access to the data structure when it locks the mutex. Rust's borrow checker can make sure that you don't keep any references to that data structure after you have released the mutex so you can't access the data structure again without locking the mutex again. The mutex itself would need unsafe code, but everything using the mutex wouldn't.
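A minimal sketch of that pattern with the standard library's `Mutex` (the function name and counts are illustrative): the mutex owns the data, the only way to touch it is to lock and receive a guard, and the guard's lifetime bounds the access.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increment a shared counter from several threads; no reference to the
// inner data can outlive the MutexGuard, so unlocked access is a
// compile-time error rather than a data race.
pub fn parallel_count(n_threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let c = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                let mut guard = c.lock().unwrap();
                *guard += 1;
            } // guard dropped here: lock released
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
}
```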
Realtime just means that actions triggered by an event occur in an explicitly bounded time. It doesn't usually mean that they occur quickly. So a GC with bounded pauses could be used in a realtime system, assuming you could guarantee the explicit bounds.
I can't comment on whether or not this is true of Nim and pause time probably isn't sufficient (depends on how it's defined) but they're not mutually exclusive.
And most kernels don't need to be realtime unless they are realtime kernels.
Having said that, of course, the vast majority of GC systems are most definitely not realtime, or anywhere close. But that's just actually true, not theoretically true.
You can't guarantee explicit bounds on when a given resource will be collected with GC. Bounded times can provide upper bounds on GC pass runtime, but they do that by potentially deferring further collection. Those deferrals make it nondeterministic.
> And most kernels don't need to be realtime unless they are realtime kernels.
That's not true. Most (all?) kernels have internal realtime needs, e.g. responding to interrupts.
There is a way to write a kernel with a GC'd language, as Niklaus Wirth has demonstrated, by circumventing the GC where needed. But the critique I was bringing up was simply that there's no such thing as a realtime GC.
1. elimination of backwards compatibility. this is true for all new operating systems, independent of the language. clean slate and throw away all the baggage that was reasonable 20 years ago, doesn't hold true today anymore, but is still baked into the old architecture. this might be less relevant for all-purpose OSes, but might be a viable option for specialized systems (IoT, network appliances, ...). you remember what linus says when a kernel patch breaks buggy userland code? we now get a second chance to implement a new system that avoids whole classes of buggy code (while the underlying problem doesn't go away, it may get greatly reduced).
2. reduction of complexity. this _is_ a rust thing, and the story to back this up is [stylo](https://blog.rust-lang.org/2017/11/14/Fearless-Concurrency-I...), the parallel css engine used in firefox. i remember reading that google tried to implement concurrent styling in chrome but failed, because it got too complex in c++. human intellect is pretty much finite and doesn't scale well; we rely on better tools. so, rust might enable us to do things that were considered impossible before.
the emergence of such an OS would be gradual of course, niche at first, then slowly growing until one day it's the new de-facto standard.
let's say, game engines. you rarely write your own game engine, because that's too complex and ties up all the resources you need to actually implement your game. so you buy one, and it's in C++, of course. because they all are, and thus your developers are fluent in C++. there are no game engines in rust and few developers who know the language. but over the years, more and more game devs will try rust in their spare time and like it and write game engines in rust for their side or indie projects, where it's still feasible. a few of those will grow and get more features and tooling, and at _some point_, suddenly, rust will be a viable alternative. even though rust's strengths aren't really that important in game engines. performance happens on the graphics card anyway. security is not as big of a deal. parallelism is quite constrained anyway. etc, etc. but if all else is equal, rust might be more productive.
so, is it time to rewrite the OS in rust? sure, why not. it's just that we won't all switch to redox overnight.
Also maybe you could have a less complicated memory manager as you can deal with allocations statically.
> Kernel programmers would be using something else if that's what they wanted to use.
The vast majority of the languages having arisen in the last 40 or so years are completely unsuitable for kernel development, so not necessarily.
Less “down the drain,” possibly more close to “baked directly into the tools we use.”
No. Next question, please.