Hacker News | andreww591's comments

At least for certain types of OSes, it should be relatively easy to get most of Linux's hardware support by porting LKL (https://github.com/lkl/linux) and adding appropriate hooks to access hardware.

Of course, your custom kernel will still have to have some of its own code to support core platform/chipset devices, but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).

Also, it probably wouldn't work so well for typical monolithic kernels, but it should work decently on something that has user-mode driver support.
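
For illustration, here's a minimal Rust sketch of the kind of host hooks a kernel would need to supply to a ported driver library. All names here are made up for the example — this is not LKL's actual host-operations interface:

```rust
// Hypothetical host interface a custom kernel would implement so a ported
// driver library can touch memory and device registers. Addresses here are
// word indices into a fake MMIO region, purely for demonstration.
trait DriverHost {
    fn alloc(&mut self, words: usize) -> usize; // returns a region handle/offset
    fn write32(&mut self, addr: usize, val: u32); // MMIO-style register write
    fn read32(&self, addr: usize) -> u32;         // MMIO-style register read
}

// A toy in-memory host, good enough to unit-test the glue layer.
struct FakeHost { mem: Vec<u32> }

impl DriverHost for FakeHost {
    fn alloc(&mut self, words: usize) -> usize {
        let off = self.mem.len();
        self.mem.resize(off + words, 0);
        off
    }
    fn write32(&mut self, addr: usize, val: u32) { self.mem[addr] = val; }
    fn read32(&self, addr: usize) -> u32 { self.mem[addr] }
}

fn main() {
    let mut host = FakeHost { mem: Vec::new() };
    let reg = host.alloc(1);
    host.write32(reg, 0xdead_beef);
    assert_eq!(host.read32(reg), 0xdead_beef);
}
```

The point is that the driver library only ever sees the trait, so the same drivers can sit on top of a microkernel's user-mode driver framework or a test harness.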


>but LKL should pretty much cover just about all I/O devices (and you also get stuff like disk filesystems and a network stack along with the device drivers).

That calls into question why you ever bothered writing a new kernel in the first place if you were just going to piggyback Linux's device drivers onto some userspace wrapper thingy.

I'm not so indoctrinated that I can't conceive of Linux being suboptimal in a way so fundamental that it requires nothing less than a completely new OS from scratch. But you're never going to get there by recycling Linux's device drivers, because that forces you to design your new OS as a Linux clone, in which case you definitely did not need to write an entire new kernel from scratch.


You make a good argument, but let me take the other side:

What you describe is probably necessary for getting _fast_ Linux compatibility. However, if you are willing to take the overhead of a few layers of indirection, you can probably sandbox the Linux land somewhere, and not have it impact the rest of your design much.

Most hardware access doesn't have to be particularly efficient. And, yes, for the few pieces of hardware that you do want to support efficiently (eg your storage devices or networking, whatever you want to concentrate on in your design) these you can handle natively.

Btw, I would suggest that most people these days should write their toy operating systems to run as a VM on a hypervisor like Xen or similar. The surface to the outside world is smaller that way.


If you're going this route, I've found NetBSD a better option for this sort of thing.

Its rump kernel architecture makes reusing the drivers almost trivial compared to reusing Linux drivers with a new kernel.


  > you're never going to get there off of recycling linux's device drivers because that forces you to design your new OS as a linux clone in which cade you definitely did not need to write an entire new kernel from scratch.
That's an interesting point, and it makes me wonder whether some kind of open interface for drivers to write to (and for OSes to implement) would be worthwhile.

It would probably have to be very general in design, but something along the lines of DriverKit or IOKit might work?


That is kind of how all OSes with binary drivers work.

However, it runs into the problem the previous commenter described: device drivers are intertwined with OS semantics, even on microkernels, so eventually such an interface ends up being just something like POSIX.


Is this the old 'an OS is just a bag of buggy device drivers' argument?


I'm a little bit skeptical as to how successful a hard fork of Linux that only differs from the mainline kernel by having a bit more Rust code actually would be.

If you're going to rewrite significant parts of the kernel, you might as well do what I've been doing and try to write what amounts to a better Linux than Linux, one that tries to maintain compatibility but moves beyond the rather limiting conventional Unix architecture. The conventional Unix architecture was fine on something like a 70s/80s-era PDP-11 or VAX, but in the modern world its limitations have been apparent for quite some time.

What I've been working on is an OS very similar to QNX Neutrino in terms of general architecture, but with a somewhat different IPC protocol layering that reduces the number of user-visible primitives and allows for more consistent management of security. Most of the functionality of the system will be implemented in user-level server processes that export their services through special filesystems, with the only special/irregular parts of the system being the microkernel, the process manager (which also contains the core VFS and memory filesystems since these will be tightly linked to the process model), and the base syscall library (vaguely akin to the vDSO under Linux). Literally everything else will just be a regular process. It's not a "Rust OS" as such, as there will still be some C (for instance, the microkernel, which was forked from an older version of seL4), although it will have a fair bit of Rust code.

IMO the issues with Linux are mostly due to a combination of poor/ad-hoc extensibility and the development model that's way too decentralized in some places but excessively centralized in others. The architecture I'm going with will allow for more experimentation, since adding functionality to it will typically just be a matter of adding a regular user program (or a plugin for a regular user program), and much of the system will be based around standardized filesystem-based RPC protocols (generic tooling for implementing RPC interfaces will of course be provided). Therefore it would be easier to maintain experimental functionality in a separate repository and merge it into the base system later on.
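
As a toy illustration of the filesystem-based RPC idea (all names here are hypothetical, not the project's actual API), a server registers a handler under a path and a call is just a request sent to that path:

```rust
use std::collections::HashMap;

// Hypothetical sketch: servers export services by registering handlers under
// filesystem paths; clients "call" a path and get a reply. A real system
// would route across processes; this keeps it all in one address space.
type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

struct Router { routes: HashMap<String, Handler> }

impl Router {
    fn new() -> Self { Router { routes: HashMap::new() } }
    fn register(&mut self, path: &str, h: Handler) {
        self.routes.insert(path.to_string(), h);
    }
    fn call(&self, path: &str, req: &[u8]) -> Option<Vec<u8>> {
        self.routes.get(path).map(|h| h(req))
    }
}

fn main() {
    let mut r = Router::new();
    // An "echo server" mounted at a path, like a special file.
    r.register("/dev/echo", Box::new(|req| req.to_vec()));
    assert_eq!(r.call("/dev/echo", b"hi").unwrap(), b"hi");
}
```

Because the namespace is the only binding mechanism, experimental services can live in separate repositories and just register themselves at a path.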

Currently it's still quite preliminary, and it only runs some hardcoded tests built into the process server, although despite that, some developers from a major company have taken interest in it recently because of the possibility of using it as a replacement for QNX both in embedded systems and development workstations. I'm working on the VFS layer and built-in special filesystems at the moment, and hopefully should be able to get user processes running pretty soon.

https://gitlab.com/uxrt/uxrt-toplevel


Redox's IPC is a fair bit weaker of a design than that of QNX or my seL4-derived kernel. It works sort of like that of QNX in that it has direct copying of messages between address spaces and RPC-like semantics with direct context switch from sender to receiver and back, but it is more limited and implemented in a slower way. Also, there seems to be no consideration for real-time performance at all.

Outside the core IPC model, the namespace model is also more limited than that of QNX; QNX allows multiple servers of a given type mounted anywhere within a namespace, whereas Redox's scheme-based namespace model only allows a single server of a given type per namespace, mounted on a fixed top-level prefix.
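
A toy sketch of the QNX-style model described here, where any server can be mounted at any path and lookup picks the longest matching mount prefix (all names are illustrative):

```rust
use std::collections::BTreeMap;

// Toy namespace: mounts map arbitrary path prefixes to server names, and
// resolution picks the longest matching prefix, so multiple servers of the
// same type can coexist anywhere in the tree.
struct Namespace { mounts: BTreeMap<String, String> }

impl Namespace {
    fn new() -> Self { Namespace { mounts: BTreeMap::new() } }
    fn mount(&mut self, path: &str, server: &str) {
        self.mounts.insert(path.to_string(), server.to_string());
    }
    fn resolve(&self, path: &str) -> Option<&str> {
        self.mounts.iter()
            .filter(|(p, _)| {
                p.as_str() == "/" || path == p.as_str()
                    || path.starts_with(&format!("{}/", p))
            })
            .max_by_key(|(p, _)| p.len()) // longest prefix wins
            .map(|(_, s)| s.as_str())
    }
}

fn main() {
    let mut ns = Namespace::new();
    ns.mount("/", "rootfs");
    ns.mount("/net", "net_server");
    assert_eq!(ns.resolve("/net/tcp"), Some("net_server"));
    assert_eq!(ns.resolve("/etc/passwd"), Some("rootfs"));
}
```

Under a fixed-prefix scheme you couldn't express the second mount at all without a whole new namespace; here it's just another entry in the table.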


I think elaborate IPC designs are too much trouble, and I favor seL4's "context switch with benefits" approach of delegating larger messages to shared memory. It avoids performance and security hiccups with copying or kernel-mediated shared memory, like in EROS. To be fair, I'm less focused on efficiency and more on simplicity/robustness of the kernel, similar to seL4. Could you elaborate on your thoughts in the IPC space?
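
A minimal sketch of that hybrid scheme, with a fixed inline limit standing in for the message registers: small payloads travel inline, larger ones go through a shared region and only a descriptor is sent (all names made up):

```rust
// "Context switch with benefits": payloads up to an inline limit are carried
// in the message itself (standing in for message registers); anything larger
// goes into a shared buffer and only an (offset, len) descriptor is sent.
const INLINE_MAX: usize = 64;

enum Msg { Inline(Vec<u8>), Shared { off: usize, len: usize } }

fn send(shared: &mut Vec<u8>, payload: &[u8]) -> Msg {
    if payload.len() <= INLINE_MAX {
        Msg::Inline(payload.to_vec())
    } else {
        let off = shared.len();
        shared.extend_from_slice(payload); // one copy into the shared region
        Msg::Shared { off, len: payload.len() }
    }
}

fn recv(shared: &[u8], m: &Msg) -> Vec<u8> {
    match m {
        Msg::Inline(b) => b.clone(),
        Msg::Shared { off, len } => shared[*off..off + len].to_vec(),
    }
}

fn main() {
    let mut shared = Vec::new();
    let small = send(&mut shared, b"ping");
    assert_eq!(recv(&shared, &small), b"ping".to_vec());
    let big = send(&mut shared, &vec![7u8; 200]);
    assert!(matches!(big, Msg::Shared { .. }));
}
```

The kernel never mediates the bulk data, which is where the EROS-style copying and kernel-managed shared-memory complications come from.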


Basically I'm trying to write a better Linux than Linux and a better QNX than QNX, at least for certain use cases. At least initially, my primary focus is on workstation/enthusiast desktops and higher-end embedded systems (some types of servers, especially ones where high security is important, are also a focus, though maybe not quite as much for now). Pure consumer desktops aren't really much of a priority; the goal is to make something that's approachable but definitely not dumbed down. Almost every day I see people complaining about mainstream OSes, and I think there's an opportunity for an OS that's cleaner and more integrated than Linux and has various advanced features not found in Linux, but still tries to maintain a high degree of compatibility with it. I'm not trying to clone QNX or Linux 1:1, but I am trying to maintain similar enough semantics to allow easy implementation of compatibility layers for both systems.

At the moment I have a working microkernel and an incomplete process server with a partially-implemented VFS but no support for running regular user processes. Currently I am working on some of the first special filesystems built into the process server (specifically an early log/console device at the moment).

The kernel is a hard fork of seL4 (the biggest addition at the moment is a preliminary implementation of QNX-style arbitrary-length arbitrary-address message copying); I've forked it because seL4 is focused on lower-end static embedded systems, and full verification of just the kernel isn't really particularly useful for a dynamic OS when all of the servers are unverified. The process server is written in Rust, and it's likely most of the other low-level servers will be as well, although I'm not really writing a "Rust OS" as such. Many of the basic utilities will most likely be BSD-derived, and some of the higher-level desktop-related stuff may be written in various other languages (haven't completely decided there).


Interesting, but it sounds like more than a lifetime's work. As far as I understand, it is written in Rust; this raises the question of whether there are significant differences from Redox OS, which seems to share most of the goals. seL4, on the other hand, is written in C; are you going to migrate seL4 to Rust?


Like I said, it's not meant to be a "Rust OS" and will incorporate a lot of C code still (and various other languages as well). I doubt that the kernel will ever get completely rewritten in Rust, although eventually some Rust code might get added (once I break up the giant kernel lock I am thinking I'll use the same locking library I'm using in the process server, so that would add some Rust code to the kernel). I don't see a lot of point to rewriting the kernel since it's probably not going to see lots of new code being added all the time and is generally pretty limited as to what it includes (I'm not quite as reluctant to add stuff to the kernel as the seL4 developers are, but I still am going to try avoiding it unless there's no other good way to do something).

Redox has several design/implementation decisions that I consider questionable. I explained a couple of them in a previous comment (the limited filesystem namespace and IPC models, and the lack of real-time support). Beyond those, its locking is weak: it uses pure spinlocks instead of something like the adaptive queue-based locking that I'm using in the process server. Its heap allocator is rather limited, with no support for growing/shrinking the heap or custom object sizes for slabs (the heap allocator I'm using is a derivative of the same one Redox uses, but heavily enhanced with support for both). And some of its libraries are implemented in a rather boilerplatey way: for example, fork() is implemented as a library function that copies process context through a filesystem-based API, but rather than having object-oriented wrappers on top of the filesystem for the various state objects it manipulates, the fork() implementation directly invokes open(), read(), and write() a lot.
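
To make the locking point concrete, here's a toy spin-then-yield lock in Rust. This is not the actual library in question, and it omits the waiter queue a real MCS-style adaptive lock would add to order contenders fairly:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// Toy adaptive lock: spin briefly (cheap when the holder releases quickly),
// then yield to the scheduler instead of burning CPU, unlike a pure spinlock.
// A real queue-based lock would also enqueue waiters so handoff is FIFO.
struct AdaptiveLock { locked: AtomicBool }

impl AdaptiveLock {
    const SPIN_LIMIT: u32 = 100;
    fn new() -> Self { AdaptiveLock { locked: AtomicBool::new(false) } }
    fn lock(&self) {
        let mut spins = 0;
        while self.locked.swap(true, Ordering::Acquire) {
            spins += 1;
            if spins > Self::SPIN_LIMIT {
                thread::yield_now();        // stop wasting cycles under contention
            } else {
                std::hint::spin_loop();     // polite busy-wait for short holds
            }
        }
    }
    fn unlock(&self) { self.locked.store(false, Ordering::Release); }
}

fn main() {
    let lock = std::sync::Arc::new(AdaptiveLock::new());
    let l2 = std::sync::Arc::clone(&lock);
    let t = thread::spawn(move || { l2.lock(); l2.unlock(); });
    lock.lock();
    lock.unlock();
    t.join().unwrap();
}
```

A pure spinlock is the same code with the `yield_now` branch deleted, which is exactly what goes wrong when the lock holder gets preempted.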


That's interesting, thanks for the explanations. Which of the many OS textbooks do you think best covers the topics you're confronted with in this project (especially how to implement an efficient microkernel and integrated IPC)?


It's not completely "legacy-free", nor is it purely capability-oriented, but I'm writing a QNX-like OS (https://gitlab.com/uxrt/uxrt-toplevel) based on a kernel forked from seL4 with a preference for servers written in safer languages. Even though it is going to be fairly Unix-like it will diverge from conventional Unix in a number of ways and either discard or demote quite a few legacy Unix features. For example, there will be no support for reversion to the traditional Unix security model in the core OS, with the base VFS implementing a security model based on a mix of per-process ACLs and capability transfers, and on top of that there will be a role-based access control model (there will be a fakeroot-type compatibility layer that simulates traditional Unix security within an environment constrained by the native security model). I don't think there's a good way to make a purely capability-oriented general-purpose OS since as soon as you have a way to look up files by human-readable names instead of capabilities your OS is no longer purely capability-oriented.
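
A toy model of that layering (all names here are hypothetical): a per-process ACL gates the human-readable name lookup, and a successful open hands back an unforgeable capability that later operations are checked against:

```rust
use std::collections::{HashMap, HashSet};

// Opaque, unforgeable-by-convention handle returned from a successful open.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Cap(u64);

// Toy VFS: the ACL is consulted only at name-lookup time; after that,
// possession of the capability is the whole access check.
struct Vfs {
    acls: HashMap<(u32, String), bool>, // (pid, path) -> allowed
    caps: HashSet<Cap>,
    next: u64,
}

impl Vfs {
    fn new() -> Self { Vfs { acls: HashMap::new(), caps: HashSet::new(), next: 1 } }
    fn allow(&mut self, pid: u32, path: &str) {
        self.acls.insert((pid, path.to_string()), true);
    }
    fn open(&mut self, pid: u32, path: &str) -> Option<Cap> {
        if *self.acls.get(&(pid, path.to_string())).unwrap_or(&false) {
            let c = Cap(self.next);
            self.next += 1;
            self.caps.insert(c);
            Some(c)
        } else {
            None
        }
    }
    fn read_ok(&self, c: Cap) -> bool { self.caps.contains(&c) }
}

fn main() {
    let mut vfs = Vfs::new();
    vfs.allow(100, "/etc/motd");
    let cap = vfs.open(100, "/etc/motd").expect("ACL allows pid 100");
    assert!(vfs.read_ok(cap));
    assert!(vfs.open(200, "/etc/motd").is_none());
}
```

The name lookup is exactly the non-capability step the comment describes: once names exist, something ambient has to decide who may resolve them.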

IMO most of the issues with typical Unix-like OSes are more due to specific outdated architectural features rather than the Unix API or shell environment. A lot of what could be done with a completely new OS could instead be done by designing a Unix-like OS for extensibility by reducing the core API down to a small set of file calls that act as an IPC transport layer and name service and then building everything on top of that, building object-oriented wrapper libraries on top of the filesystem, and reimplementing the traditional Unix APIs outside the core filesystem ones on top of the newer APIs. Existing applications could be ported to such a system incrementally rather than having to do everything all at once or relegate them to some kind of limited "penalty box" compatibility layer (and any binary compatibility layer for Linux or other conventional Unices would integrate better into such a system than into something completely new).
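
As a sketch of the "object-oriented wrappers on top of the filesystem" idea — `ProcState` and the `/proc/...` paths are made up for the example — a typed handle hides the raw read/write calls that something like a fork() implementation would otherwise scatter everywhere:

```rust
use std::collections::HashMap;

// Stand-in for the small core file API that acts as the IPC transport.
struct FileApi { files: HashMap<String, String> } // path -> contents

impl FileApi {
    fn read(&self, path: &str) -> Option<String> { self.files.get(path).cloned() }
    fn write(&mut self, path: &str, data: &str) {
        self.files.insert(path.to_string(), data.to_string());
    }
}

// Typed wrapper over one process's state: callers say state.set_cwd(...)
// instead of formatting paths and calling write() by hand.
struct ProcState<'a> { api: &'a mut FileApi, pid: u32 }

impl<'a> ProcState<'a> {
    fn set_cwd(&mut self, cwd: &str) {
        self.api.write(&format!("/proc/{}/cwd", self.pid), cwd);
    }
    fn cwd(&self) -> Option<String> {
        self.api.read(&format!("/proc/{}/cwd", self.pid))
    }
}

fn main() {
    let mut api = FileApi { files: HashMap::new() };
    let mut st = ProcState { api: &mut api, pid: 42 };
    st.set_cwd("/home/user");
    assert_eq!(st.cwd(), Some("/home/user".to_string()));
}
```

Everything still bottoms out in the file calls, so compatibility layers see ordinary files, while native code gets a real API.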

