MotorOS: a Rust-first operating system for x64 VMs (github.com/moturus)
328 points by sbt567 9 months ago | 115 comments



The author wrote this on reddit:

> What does "Rust-first" mean here? It means not only that both the (micro) kernel and the drivers are implemented in Rust, but also that Rust is the first (and only, at the moment) language that userspace programs can be written in.

> Although technically one can reverse-engineer the Rust-based ABI and the provided Rust toolchain to write apps for Motor OS in e.g. C, that is some work. But standard Rust programs (using standard Rust library, without FFI) will just compile and run - see e.g. https://github.com/moturus/motor-os/tree/main/src/bin/httpd.

> This Rust-first approach is rather unique, as e.g. Redox uses relibc and C-based kernel ABI as the glue...

https://old.reddit.com/r/rust/comments/190znk5/mot%C5%ABrus_...
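The "standard Rust programs will just compile and run" claim is easiest to picture with a program that uses nothing but the standard library. A minimal loopback TCP echo like the sketch below (hypothetical, not from the Motor OS repo) is the kind of code that should port unchanged, since it touches only std::net and std::thread, with no FFI:

```rust
use std::io::{Read, Write};
use std::net::{Shutdown, TcpListener, TcpStream};
use std::thread;

// Echo one message over loopback using only the standard library.
fn echo_roundtrip(msg: &[u8]) -> std::io::Result<Vec<u8>> {
    // Port 0 asks the OS for any free port.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Serve exactly one connection in a background thread.
    let server = thread::spawn(move || -> std::io::Result<()> {
        let (mut conn, _) = listener.accept()?;
        let mut buf = [0u8; 1024];
        let n = conn.read(&mut buf)?;
        conn.write_all(&buf[..n])
    });

    let mut client = TcpStream::connect(addr)?;
    client.write_all(msg)?;
    client.shutdown(Shutdown::Write)?;
    let mut reply = vec![0u8; msg.len()];
    client.read_exact(&mut reply)?;
    server.join().unwrap()?;
    Ok(reply)
}

fn main() {
    let reply = echo_roundtrip(b"hello").unwrap();
    assert_eq!(reply, b"hello");
    println!("echo ok");
}
```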


Unstable, undocumented, compiler-(version-)dependent ABIs are the reason Haiku for x86-32 still has to use GCC 2.95.

I’m never not going to welcome a hobby OS, especially in an interesting language (still have fond memories of House). By all means, go for it. Just... it’s important to be aware of the prior art here.


Wasn't Haiku stuck on old GCC only because they wanted to support old BeOS binaries? Binary backwards/forwards compatibility is not a requirement for many use cases.


Haiku without the BeOS applications wouldn't have that much to show.

This is the way to go on platforms that favour shipping binaries instead of source code.

Outside Linux/BSD, stable ABI across OS versions is a concern, hence why Apple, Google and Microsoft platforms all offer ways to keep this going.


I thought the Rust ABI wasn't even stable?


> Rust is the first (and only, at the moment) language that userspace programs can be written in.

Shouldn't be much effort to make it compatible with Crablang, no?

/s


I'm the project author/dev. Thanks a lot for posting this, and for comments/discussions!

I see two main concerns raised here:

(a) long-term viability and support; (b) compilers, binary compatibility, etc.

While the first concern is definitely valid, and without a community this project will not succeed, I do believe that the potential benefits of Motor OS (or a similarly focused/structured project) will eventually result in a widely used new operating system. There are major problems with Linux inside VMs (and sometimes outside), and the Linux devs are not focused enough on this to clean things up anytime soon. I work on the Linux kernel at my day job, I know.

Re: compiler instability, binary compatibility, etc.: I'm sorry, I don't understand what the issue is here. The latest Linux kernel can be compiled with different GCC or LLVM toolchains on x86_64, and the result will happily run old binaries compiled years ago with who knows what. repr(C) structs in Rust are stable... So why so many concerns here?
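For what it's worth, the repr(C) point is directly checkable: a #[repr(C)] struct's layout follows the platform C ABI and does not change across compiler versions. A hypothetical syscall-argument struct as a sketch (this is not Motor OS's actual ABI; requires Rust 1.77+ for `offset_of!`):

```rust
use std::mem::{offset_of, size_of};

// Hypothetical syscall-argument struct for illustration only.
// #[repr(C)] pins the field order and padding to the platform C ABI,
// unlike the default #[repr(Rust)], which is free to reorder fields.
#[repr(C)]
struct SyscallArgs {
    opcode: u32,
    flags: u32,
    buf_addr: u64,
    buf_len: u64,
}

fn main() {
    // On x86_64 this layout is fully determined: two u32s, then two u64s,
    // with no padding: 24 bytes total.
    assert_eq!(size_of::<SyscallArgs>(), 24);
    assert_eq!(offset_of!(SyscallArgs, flags), 4);
    assert_eq!(offset_of!(SyscallArgs, buf_addr), 8);
    println!("layout ok");
}
```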

Again, thank you all for your comments and questions - I'm happy to answer more (at least until my day job kicks in).


Out of curiosity, why would a small kernel take a whole 200ms to start on a modern computer? Wouldn't it need to initialize some metadata for the memory pages, mount the filesystem, and try to launch an init process? I suppose there might be an ethernet driver and possibly something to pipe logs to ("stdout" for the VM) to initialize. Shouldn't that all take a few microseconds?

Or is all the slowness in the host preparing the resources? (as in QEMU and KVM?)


4 months ago we had FreeBSD booting in 25ms. That gives some insight into where the bottlenecks might be: https://news.ycombinator.com/item?id=37319180


In general, lots of hardware requires long waits to initialize. As in, program a value then you must wait 50 ms before querying the registers or the hardware will give you non-deterministic garbage with no error indication.

So, it depends on the hardware, though you are unlikely to need more than 1 second on anything other than truly degenerate hardware assuming you are initializing devices in parallel.
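That "program a value, then wait before querying" pattern usually shows up as a poll-until-ready loop with a deadline. A generic sketch of that shape (the `read_status` closure stands in for a real MMIO register read, which is my assumption here, not code from any actual driver):

```rust
use std::thread;
use std::time::{Duration, Instant};

// Poll a status source until it reports ready or the deadline expires.
// Real drivers do this after programming a device register, since reading
// back too early yields garbage with no error indication.
fn wait_ready(mut read_status: impl FnMut() -> bool, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if read_status() {
            return true;
        }
        // Back off briefly between polls instead of busy-spinning.
        thread::sleep(Duration::from_millis(1));
    }
    false
}

fn main() {
    // Simulated device that becomes ready on the third poll.
    let mut polls = 0;
    let ready = wait_ready(
        || {
            polls += 1;
            polls >= 3
        },
        Duration::from_millis(100),
    );
    assert!(ready);
    println!("device ready after {polls} polls");
}
```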


But it’s only for VMs. So what hardware are you initializing?


I pass through a GPU and USB hub to a VM running on a machine in the garage. An optical video cable and network compatible USB extender brings the interface to a different room making it my primary “desktop” computer (and an outdated laptop as a backup device). Doesn’t get more silent and cool than this. Another VM on the garage machine gets a bunch of hard drives passed through to it.

That said, hardware passthrough/VFIO is likely out of the current realistic scope for this project. VM boot times can be optimized if you never look for hardware to initialize in the first place. Though they are still likely initializing a network interface of some sort.

“MicroVM” seems to be a term used when as much as possible is stripped from a VM, such as with https://github.com/firecracker-microvm/firecracker


Many years ago I contributed to IncludeOS, and that thing could boot a VM in a millisecond under the right circumstances, allowing for things like per-request VM isolation. So I suspect it's a mix of hardware circumstances and the way the microkernel is implemented.


Because of the debug mode


Yes! And because of sandbagging :)


Timer calibration is a big time suck.


One thing I keep hoping to see in all of these kernels in Rust is an async first kernel. Is there something that makes this particularly difficult or do folks not see the value in it? I know from following along with Phil Oppermann’s OS in Rust series that is definitely possible, but these last few OS’ in Rust seem to not be attempting this, https://os.phil-opp.com/async-await/


It would help if async Rust was actually a fully done feature, instead of half way there.

As Niko Matsakis puts it, async/await is Rust in hard mode, you don't need that when having to also worry about everything writing an OS from scratch entails.


Yes, there might be some significant gaps that make kernel development much more difficult. Off the top of my head, the current state of async traits is still being fleshed out, but it’s coming. The async_trait macros rely on boxing, so I could see that as a big downside in the kernel space. And I know that we don’t yet have an async streaming interface stabilized in the std lib.

I’m not sure if these are blocking issues or not, thus my question.


Async trait macros are no longer needed for the majority of use cases as of 1.75. Additionally, I doubt a hobby OS will hit bottlenecks where boxing like this is a problem. Async streaming not being standardized as an interface would only matter if the kernel wanted to use libraries, and I would assume most kernels will be light on external dependencies, so it shouldn't matter.
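The 1.75 change means a trait can declare `async fn` natively, with no #[async_trait] macro and no per-call Box with static dispatch. A sketch with a made-up, driver-flavoured trait (the hand-rolled `block_on` is only there so the example needs no external runtime crate):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Native async fn in a trait (stable since Rust 1.75).
// `BlockDevice` is a hypothetical trait for illustration.
trait BlockDevice {
    async fn read_sector(&self, lba: u64) -> Vec<u8>;
}

struct RamDisk;

impl BlockDevice for RamDisk {
    async fn read_sector(&self, lba: u64) -> Vec<u8> {
        // Pretend each 512-byte sector is filled with the LBA's low byte.
        vec![lba as u8; 512]
    }
}

// Minimal executor with a no-op waker; enough to drive ready futures.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw_clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn raw_noop(_: *const ()) {}
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(raw_clone, raw_noop, raw_noop, raw_noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let sector = block_on(RamDisk.read_sector(7));
    assert_eq!(sector.len(), 512);
    assert_eq!(sector[0], 7);
    println!("read ok");
}
```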

If you are targeting a posix syscall interface, a lot of that doesn't really require async to pull off. You need to switch threads whenever an operation is blocked rather than run unrelated work on that thread. I suppose you can still model that with async, but it certainly doesn't feel as helpful.


I tried with async in the kernel first, but the cruft that was needed two years ago was not worth it in the kernel itself. The net I/O is actually async-first, see here: https://github.com/moturus/moto-runtime/blob/main/src/net.rs

File I/O will move to this model later (the current file I/O code is quite old and mostly a placeholder).


I think I missed this on my quick pass through the code; I see a lot of async usage here as well: https://github.com/moturus/moto-runtime/blob/main/src/net_as...

Very cool. Thanks for pointing this out.


There are so many posts about how async Rust isn't the right abstraction it would surprise me if someone doubled-down on it right now.


Maybe I'm misunderstanding your proposal, but it seems to undermine process isolation. If you trust processes to yield under a co-operative multitasking system, why not trust them to leave each other's memory alone?


I was thinking of the internal Kernel tasks that might block, where it might be possible to have things happen in parallel, like bringing various hardware components online. See the sibling discussion on 200ms startup times.


Out of curiosity, what would be the benefit?


In theory, it would simplify concurrency primitives in the kernel space, especially on the main thread, which is something I've noticed is not an area that a lot of kernels explore.


Sounds interesting, but it also reminds me of what Linus once said when asked about fearing competition. From my memory his answer was something like: I really like writing device drivers. Few people like that and until someone young and hungry comes along who likes that I'm not afraid of competition.


You don't need very many device drivers to run in a VM.

The real question is, how useful is a non-C-compatible, non-Linux-compatible, VM-only OS. Maybe a little bit, for microservices?


According to The American Heritage Dictionary an operating system is:

"Software designed to control the hardware of a specific data-processing system in order to allow users and application programs to make use of it."

The question is how much Operating System is a software that delegates all "control the hardware" parts to the layer below.

I know there are wider definitions of OS, but my point is this is not going to replace Linux.

Without question, projects like MotorOS are still useful. Besides practical applications, it's a nice idea. Just recently I wrote in another thread that I would love to have a glimpse into an alternative universe, where Pascal had won over C and everything was Pascal-based. The idea of having everything Rust-based is even more exciting.


Is this really non-C-compatible? Seems like it just needs a libc layer written in rust. Most C programs don't make syscalls directly anyways (on some OSes they aren't even able to). The rest of the question is definitely fair though.


For work, I run a rust service on GCP. It could run on this instead of docker on Linux.

I'm going to assume without looking at this project that observability and operability would be worse than it already is though, so I'm not in a hurry to move. Anyway, GCP means cost of compute is a rounding error compared to cost of bandwidth, so I have no reason to find the edge of efficiency.


It's relatively trivial to have a `FROM scratch` dockerised microservice in Rust. I'm not convinced having an entire OS brings that much to the table.


From the description, this aims to recreate that but forgo the entirety of the Linux+Docker stack needed to actually be able to do FROM scratch in the first place.


I don't think drivers are what earns Linux its popularity / indispensability. I worked in a few places where people, well, at least moderately, liked to write drivers. Also, consider that it would've definitely been possible to write an OS with a mechanism for module loading compatible with Linux drivers.

My take on this question would be the ease of use for application developers. This ease of use consists of plenty of somewhat ready-to-use libraries that cover plenty of use-cases, huge community that both produces documentation and will answer questions should you have any, multiple hardware vendor support, and of course, licensing.

It's possible to compete with Linux in very special, very narrow use-cases, but trying to win against Linux on every front would require an insane amount of effort by a very large group of people.


"Also, consider that it would've been definitely possible to write an OS with a mechanism for module loading compatible with Linux drivers."

Sure, you can do that and it will work exactly until the next change in the driver interface, which is not very long.

So you end up in maintenance hell and will still constantly be behind the curve, because Linux gets all the updates first. You will be less secure too, because you'll get the security patches later.

That being said, someone in a similar thread from a few days ago said one of the BSDs does that. Maybe it's not that bad.


So Linux will essentially just become a HAL / "bios"


No, Linux remains where it is.

AFAICT this could run on top of Xen, VMWare, or the hypervisor which runs Win 10/11.


I don't think so. There is no stable interface to the drivers which makes their reuse in other projects a huge effort. With Linux you either get the whole kernel or nothing and the whole thing is more than a HAL or BIOS.


It sounds like a cool project and I hope it continues development, but there is such a huge graveyard of such projects that have never gone anywhere that I struggle to get excited about them anymore. Replacing Linux is really hard, even for specific uses like cloud.


Replacing Linux is hard even for the BSDs, and those are very well established and considered basically next-in-line for similar tasks.


You think BSDs are next-in-line? Why is that? I have the distinct impression BSDs will remain niche. Something radical (like MotorOS?) seems more likely.


What do you think is running on your favorite game consoles (that aren't Xbox)? I'll give you a hint, it isn't Linux.

Nintendo is using a custom OS but with a huge chunk of user space borrowed from FreeBSD. Sony on the other hand just went and forked FreeBSD outright.

You might also want to look into what OS are being used for server environments. A lot more BSD there than you might have initially guessed.


And thanks to the BSD license, the project is getting zero back from Sony, while those PlayStation profits get a bit higher thanks to less R&D cost spent on OS code.

Same applies to clang/LLVM port to the Playstation, regarding everything that would expose console implementation details without an NDA.


It is probably for another product, but there is at least one Sony email address, and a number of email addresses from different corporate entities, on the FreeBSD contributors list. All 3 major BSD OSes also list donors (financial or hardware).

So saying corps that use BSD code never give anything back because of the license is not true. And an awful lot of companies don't do any more than that, or even hide their use of GPL-licensed code anyway.


When, say, IBM contributed a lot of stuff into the Linux kernel in early 2000s, that stuff became immediately available to anyone. Whatever cool stuff Nintendo or Sony may introduce in their versions of BSD kernels, we don't even know, let alone seeing them contribute it back.

GPL works similarly to a patent pool: every participant sees that openly contributing to the pool is more profitable than being a renegade, as long as everyone else plays by the rules, too. MIT/BSD, while as open as possible, can easily promote a trade-secret type of environment, where any enhancements are never heard of, except under an NDA, and perish if their creators go under or lose interest.


> your favorite game consoles

unless it's a steamDeck, ofc. No love for consoles, though.


If I'm in the mood for being a little glib... where do you think that Linux distribution that SteamOS is based on got its user-space (or more seriously, its drivers?)

Also: the SteamDeck is by any reasonable standard a console. It just happens to run a windowed environment out of the box. Don't be that guy. If you want to pump Valve, focus instead on their contributions to the Wine project.


> where do you think that Linux distribution that SteamOS is based on got its user-space (or more seriously, its drivers?)

Are you saying that Arch gets its userspace and drivers from FreeBSD?


I guess there is some BSD software that is common in Linux, like OpenSSH tools and dhcpcd. But that's not unique to Arch.


No idea where the personal involvement with Valve pumping has come from. The Atari "VCS All-In Bundle" is Debian-based, too. They are not PlayStation-popular, of course. Like I mentioned, "no love for consoles": I don't own one, and I'm not interested either.


> but with a huge chunk of user space borrowed from FreeBSD

Any sources on this, and on what parts were borrowed specifically?

I was under the impression that Nintendo did away with most of the Unix layers we know and love and went all-in on custom code and APIs, is that not the case?


The network stack is taken from BSD, but then again so was the network stack on Windows 2000/XP.


macOS and iOS are BSDs. Pretty good niche!


This is pretty much a myth. Both run a different kernel. macOS used to be known to have a network stack (and maybe some stuff like a virtual file system) from FreeBSD, but I am pretty sure most of the code has been replaced by now.

Having some BSD userland binaries doesn't make your OS a BSD. Otherwise Windows is just a fork of curl.


Last I checked, Darwin sources for TCP still look a lot like FreeBSD circa 2000 plus some Apple patches (MPTCP). No SYN cookies in 2024, because FreeBSD added those months after Apple forked the stack.


I argue that having a BSD license (in Darwin), BSD heritage (NeXTSTEP, FreeBSD, briefly NetBSD), and a mostly BSD userland 20+ years into the project makes this OS a BSD.


Darwin is not Mac OS X and vice-versa.


Maybe the BSDs are too close to Linux to displace it.


It's not like the myriads of these hobby OSes (written in Rust or otherwise) are any different. They're all Unix clones to various degrees.


I was skeptical at first, the healthy approach to any new tech. But thinking again, the efficiency and security gains from stripping away layers of cruft are rather compelling.


"Maestro: A Linux-compatible kernel in Rust" (2023) https://news.ycombinator.com/item?id=38852360#38857185 ; redox-os, cosmic-de , Motūrus OS; MotorOS


> Docker, Nix OS, "serverless", etc. all exist because of Linux's complexity

Docker and NixOS exist because of userspace problems with package management and serverless exists because businesses want to pay for compute on demand.


It isn't hard to start writing a new operating system. It is very hard to support that operating system for the next 5 decades.


Linus Torvalds' resolve is unbreakable.


Yes, it's most impressive. I couldn't do it. That kind of long term focus is an incredible asset.


> ...Docker, Nix OS, "serverless", etc. all exist because of Linux's complexity

Yeah, this seems like it's more directly competing with those than Linux. I'd want to see those addressed in the "Why?" -- that is, why MotorOs instead of Docker, etc.?


> why MotorOs instead of Docker, etc.?

It basically says it already:

> Motūrus OS is a microkernel-based operating system

By and large, microkernels and containers solve the same problem. Except I wouldn't call containers a "solution", more like a workaround: not in the sense that containers by themselves are a workaround, but the way they are used is a workaround for the same problem.

The way containers are used today, especially in the context of Kubernetes, is to finely slice the available physical resources. The orchestration allows managing which part of the application gets what slice of the resources, thus attempting to cut down on the waste that's typical in the world where resources are managed through VMs. There, a single VM will require too many resources because of the bloated OS it needs to run, and because it's hard to create VMs with very limited resources, since OSes usually come as a package deal.

So, containers "solve" the problem by giving up VM optimization: instead, it's usually beneficial to create very beefy VMs, on top of which a new virtualization layer is then created with containers. This minimizes the VM waste, but doesn't get rid of it entirely, and, of course, creates a lot of complications with all the indirection resulting from two-tiered virtualization.

Microkernels are the opposite of this "solution": an attempt to make OSes more modular, and as such less demanding of resources. Ideally, in the world of microkernels you don't need containers (at least not for the things they are usually used for today): your VMs can slice the resources at least as efficiently as containers do (or, hopefully, even better).

So... to predict the next possible question: why are containers so popular while microkernels aren't? Because the latter is harder on the applications (even when applications don't actually need something, they often use it because it's available in a full-blown OS; applications aren't usually written with resource scarcity in mind). Secondly, containers essentially exist on top of the somewhat uniform, somewhat stable interface of the Linux kernel; microkernel VMs would expose users to the zoo of ideas hardware vendors put into their products, making portability difficult. Finally, for all its flaws, Kubernetes is a big system with many (even if not so great) solutions for many problems. So, programmers who fear technology feel like it gives them a safety net and will allow them to program from a more comfortable position of modifying YAML files, copying the most upvoted answers from StackOverflow. There won't be such an easy cake in the microkernel world.


My first thought was "can this run my containers?"


> a simple multi-processor round robin (SMP)

> the kernel is very small and does not block, so does not need to be preemptible

I don't believe you and I don't even need to look at the code to know this is false.


Care to elaborate?


From the docs at the top of scheduler.rs

    // The scheduler.
    //
    // As the kernel supports wait/wake/swap, and no blocking in the kernel
    // other than wait, any kind of more sophisticated scheduling policy
    // can be implemented in the userspace (in theory; maybe tweaks are needed
    // to make things perform well in practive).
    //
    // So the default scheduler here is rather simple, but should work
    // for a lot of use cases.
    //
    // Priorities: see enum Priority
    // Cpu affinity: a single CPU hint
    // Sched groups: none: everything within a priority is round robin, so
    // a process with many threads will negatively affect a process with few threads.
    //
    // This should be OK for most situations: if there is a need for isolation,
    // just use another VM.
That's elaboration enough I think?


It could be cool if there was a WASM container built in Rust that could run in this OS. I don't really have a sense of how complicated a WASM container is, though it seems non-trivial to even decide what such a thing is, so it would be nice not to treat the WASM container itself as the OS (i.e., more room to safely experiment). WASM also seems to dodge the ABI issue by being explicitly about composition instead of shared binary data structures.


When you say wasm container, you mean something like wasmtime that provides a non-browser wasm runtime?

https://github.com/bytecodealliance/wasmtime


Yeah, just that sort of thing. I mean, heck, can you put #!wasmtime at the top of a wasm file? Would such a file literally run right now (assuming #! is supported, and wasmtime doesn't hit any MotorOS Rust limitations)?


Serendipitously I see this article about WASM and how it has all these nice properties as a runtime abstraction: https://wingolog.org/archives/2024/01/08/missing-the-point-o...


> Nix OS all exist because of Linux's complexity

That said, I would be thrilled to build this and other alt OSes and their userlands with Nix / Nixpkgs :).


> because of Linux's complexity

Linux is not complex but there are some design issues with C ABI and glibc stability and symlinks.


Indeed I would say Linux is as simple as possible given various goals ... however, 'simple as possible' can still be complex!


Linux is definitely complex. Linux's ABI stability (good!) also means we cannot really blame C / glibc so much, it has internalized that problem.

I agree the goal posts make much of this complexity inevitable, but the solution is to stop trying to intersect everyone's objectives into a much narrower and daunting goal.

The big galaxy-brained goal should be:

1. Make more of Linux Rust.

2. Leverage Rust's superior expressive power to switch from a "configuration" mindset (Linux's current gazillion options on single giant code base) to a "composition mindset": we want to use types to combine various subsystems in various flavors.

3. Something like this and Linux proper can share lots of library code / subsystems. We can blur the line between "separate kernels" and "separate 'distributions' mixing and matching stand-alone Linux components".

I think this is the only sustainable way to rein in the accidental complexity.


I don’t think swapping out the first-order language does anything other than box out potential support.


Kind of cool as a hobby OS.

But it would be nice to back up your claims regarding performance with actual numbers.


The landing page at https://github.com/moturus/motor-os explicitly says that both networking and file I/O are slow and have to be improved. The only claim is about fast bootup, and the number is there and can easily be verified.

Where do you see any unsupported claims re: performance?


I think the closest-to-production-ready "mostly Rust" OS is Fuchsia.


I've seen this stated in a couple places, but IIRC the kernel is C++ and the UI is Dart. I assume lots of things in between could be in rust, but do you know which? daemons, services, drivers, window server?


Isn't this similar to other microkernels, and isn't something like WASI/WASM kind of addressing this the correct way?


[flagged]


Everything is relative of course but it's a consensus view in OS literature that the Unix family of operating systems doesn't have strong security. There's a huge TCB in which bugs lead to vulnerabilities with high probability, as we see all the time.

This is why eg cloud providers don't rely on the OS to isolate customers from each other.

Since the renaissance of virtualization, many security-focused systems have built on that, like Qubes OS, seL4-based virtualized systems, etc.


Everything in security is layers of defense against a threat model.

To design a secure system one must first ask who is going to attack it and with what forces.

Making truly secure systems is an art and is rarified ground.


What is using seL4-based virtualized systems? I am very interested in trying that out if the prices aren't enterprise-only.


Idk if it's "the Unix family". Clouds are running with SmartOS for example with containers running on bare metal and I've not heard of security issues with this model.


SmartOS is a fork of OpenSolaris, right? Solaris used to have its share of public vulnerability discourse when it had more users, and it quieted down as the user base shrank and people stopped deploying it as a general-purpose server OS. In a slow-moving niche OS I wouldn't put much weight on a low volume of public security-problem discourse, especially in the face of apparent architectural problems.


Yes, it's a fork of OpenSolaris. Companies are running clouds with it, with containers of different clients running next to each other on bare metal (no VM to re-isolate). If it was so easy to exploit, it would have been done already, wouldn't it?


No, absence of evidence is not evidence of absence, that's a very central thing to understand when thinking about software vulnerabilities.


It's not wrong. Linux does not have a great security track record, it was built on completely different security principles than the ones we have today. You can find public criticism from Theo de Raadt and grsecurity if you want to find out more.

It's a kernel that has served us well, especially considering its ubiquity, but it will never be a "secure" kernel.


I think it’s a shock only when compared with the insecurity of Windows, forgetting that that was the reality on the ground in the 90s and that Windows also had a larger market share. Windows has since hardened its security model and could be a more secure out-of-the-box experience than Linux (e.g. shipping with TPM and FDE set up correctly).

But yeah, there’s plenty of things you’d do differently if you wanted to properly secure Linux. It just shows that economically that’s not the most important thing for companies using Linux.


I think you are confusing Linux the Kernel with Linux-based operating systems. The original quote from the README was referencing the kernel and made no comparison to Windows or the NT kernel.


The Linux kernel and distributions are what I’m talking about, so yes, compared with the NT kernel and Windows as a distribution.

The design choices you’d make to build a secure OS (kernel and user space) looks very different. The microkernel design is a more secure design but techniques to make it work fast took a very long time to develop (+ computer HW also got fast enough that the overhead is no longer as big of a deal + we have multithreading everywhere that microkernel can sometimes exploit more naturally).


There are heaps and heaps of security issues in Linux (the kernel).

Simply mounting a file system can result in your system being compromised if the file system was maliciously crafted. This isn't something that is actively protected against.

Another issue I can think of OTTOMH is unprivileged user namespaces causing lots of security issues to surface, because certain parts of the kernel's code were written with the assumption that only "root" would be able to invoke them, so if you already were "root", securing them didn't matter.

There are many, many more issues similar to these, too many for me to write about, and probably many that are way beyond my comprehension. I don't think it's wrong to say that Linux has not been very secure.


It is an accurate statement.

Linux is a monolithic UNIX-like OS, with all the implications of an accidental design that dates to the late 60s/early 70s.

For starters, it has a huge TCB (Linux has MLoCs). And it is very trusted (undeservedly so) code, as it runs with supervisor privileges.

One bug in these MLoCs is all it takes.

There are far better system architectures out there.


Yeah, I read that and thought "woah, shots fired". But I'm old enough to know the definition of "secure system" varies so widely from person to person, it's just pointless to bicker about it. Some people won't be happy until programs have to ask whether they can flip a bit in memory by presenting the user's blood for analysis.


Just because something is better than an alternative doesn’t make it “good,” it just makes it less bad.


AFAIK if you're an OS-head, this is meant in the sense that it was designed to be a *NIX, not fundamentally advance security or make better decisions. (but I'm curious to hear I'm wrong)


You probably need to interpret this as saying "the system is written in such a way that the application developer needs to take extra steps to secure their applications, compared to how it could've been written".

For example, privilege elevation in Linux is a very complex and complicated mechanism. The famed "sudo" command has a very long history of bugs. To the point that I remember RHEL peddling their own version ("ksudo" if memory serves... I cannot find any mentions of this, it was some 15 years ago). It's very hard to get things right. Especially, to prove that you've done things right, if you are after security.

The system interface is designed "for comfort" rather than "for security". Many things could've been a lot more secure, but very tedious for the application programmer (imagine SELinux, but in hyperactive mode... that's every sysadmin's nightmare).

In practice, with a lot of community effort, tools like "sudo" eventually reached a point where they are mostly reliable for what they are expected to do. So, to face practical threats today, the system may be OK, but this isn't a testament to its design; rather, it is a consequence of how much effort was spent plugging the holes.


[flagged]


It is fascinating how, despite the thousands and thousands of high-profile memory-related RCE vulnerabilities, this community doesn't seem to like pointing out the fact that C is an unsafe language.


Keep in mind the other important difference for this project which is a microkernel design with most operations running in isolated user space processes/libraries. That also helps limit the ability to exploit vulnerabilities in one place to take over other parts of the system.


Not just the other important, but the most important.

A safety-friendly language can help, but the system architecture is most important.


>pointing the fact that C is an unsafe language.

The implication there is that Rust is safe.

Reality is, it is not.

Rust provides some tools C does not, which can help in writing safe programs. That is about it.

In exchange, it is a relatively young language with all that implies. For instance, there are not yet any successful OSes written in Rust.

Kudos to the authors for actually writing code to change this, instead of evangelizing Rust on HN.


> For instance, there are not yet any successful OSs written in Rust.

The rate at which new successful OSes come about is extremely low. Despite that, at least one OS in Rust is being successfully deployed commercially today (Hubris, thanks to Oxide Computer), there are numerous reasonably advanced 'amateur' OS projects in Rust (Redox OS being a reasonably polished example), and Rust has begun finding its way into the two most popular existing OSes (both the Linux and Windows kernels; I believe the Windows kernel is farther along here).

By any reasonable metric, rust has been wildly successful in the OS writing space. No doubt, in large part thanks to evangelizing by people who said "this is a reasonable thing to do".


Rust is only “safe” because by default you’re not allowed to do certain things. If you applied those same limitations to C it would be just as safe, if not more so. And the things you’re not allowed to do in “safe” Rust are required in various applications that C excels at, meaning that for Rust to do the same it has to be “unsafe”.


Nobody has ever denied that unsafe Rust code exists, so I don't know why people keep acting like it's some gotcha that invalidates the value of Rust. The point is that in Rust, it is clearly delimited where those unsafe operations are occurring, so you can focus on making sure those specific parts of your code are solid. In C, you have to be vigilant about your entire code base, not just a handful of blocks. Moreover, most Rust programs don't need to use unsafe at all, so those authors can rest easier.
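The "clearly delimited" point is concrete in practice: idiomatic Rust puts the one unsafe operation behind a safe function whose preconditions are checked up front, so review effort concentrates on a few lines. A small hypothetical example (the unchecked read here only stands in for the kind of operation that needs `unsafe`):

```rust
// Safe wrapper around a single unsafe operation: the `unsafe` block is
// the only part a reviewer must scrutinize; callers get a safe API.
fn read_u32_le(buf: &[u8], offset: usize) -> Option<u32> {
    // Bounds are validated in safe code first...
    let end = offset.checked_add(4)?;
    let bytes = buf.get(offset..end)?;
    // ...which makes the raw pointer read below sound.
    // SAFETY: `bytes` is exactly 4 bytes long and valid for reads.
    let value = unsafe { (bytes.as_ptr() as *const u32).read_unaligned() };
    Some(u32::from_le(value))
}

fn main() {
    let buf = [0x78, 0x56, 0x34, 0x12, 0xFF];
    assert_eq!(read_u32_le(&buf, 0), Some(0x1234_5678));
    assert_eq!(read_u32_le(&buf, 3), None); // out of bounds: rejected safely
    println!("ok");
}
```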


I am usually the first to say that Rust evangelists are annoying, but… your argument isn’t valid. That any tool can be dangerous when used wrongly enough is not an argument in favor of or against rust. Additionally, that any given tool might be better for a certain job over any other isn’t an argument for or against other tools more generally.


> If you applied those same limitations to C it would be just as if not more safe.

If frogs had wings, they wouldn't bump their butts when they hop.

If you add all the security guarantees of Rust to C you would have something substantially similar to rust.

As to your last point, it's very easy in rust to isolate those operations so you can just go over those with a fine tooth comb and not have to worry about unsafe stuff appearing everywhere in your code.



OpenBSD is safer than Linux in spite of C, not because of it.


OpenBSD has existed for 30 years. In retrospect I wonder if the time would have been better spent developing a more secure language and porting BSD to that instead of meticulously scrubbing C code for each new class of bug.



