Redox – A Unix-Like Operating System Written in Rust (redox-os.org)
321 points by wtetzner 5 months ago | 168 comments

Interesting. Not clear if they got the big lesson of QNX vs. Mach - scheduler and interprocess message coordination are crucial to microkernel performance. The relevant pages of their "book" are blank. I'm trying to find the interprocess communication primitives and can't. Anyone know where that's documented?

I notice they put an ELF loader for executables in the kernel. It's better to have the loader outside the kernel. See this Linux vulnerability.[1] They may get there; they don't have shared objects yet. QNX starts up new processes with a canned startup which links to a shared object to do the loading of the executable. So loading runs with the privileges of the thing being loaded, which is safe.

They put very few drivers in the kernel - just a serial console. That's good. Like QNX, the boot image contains user-level processes you want run at startup. So you don't need drivers in the kernel to get going. This is good for embedded applications. QNX comes with a boot image builder into which you put what's needed at startup. There's a set of services for disk-based systems which get you a UNIX desktop like environment. But for embedded use, you can build a diskless QNX image with much less.

One advantage of a small unchanging kernel is that it can be put in a boot ROM for the life of the machine. This is a win in embedded devices. Any updating is done by loading new processes in userland, not with some super-powerful low level thing that nobody understands and probably relies on security through obscurity.

[1] https://nvd.nist.gov/vuln/detail/CVE-2018-6924

I am the creator of Redox, and I am glad to have this rundown. Yes, the design of the Redox kernel is such that the kernel can load any set of drivers from the initramfs, including being completely diskless.

In order to make this easy, ELF loading is part of the kernel. Otherwise, the ELF loader would have to be loaded somehow. How does QNX solve this issue? I could see there being a simpler executable format, or maybe the loader could do what the kernel does and modify its own paging tables.

With regards to the book, it was outlined by a contributor who is no longer part of the project, so it is likely to be restructured when there is time, and the sections may change.

There are drivers for what are considered to be critical features for any userspace inside the kernel. This mainly means the interrupt controller, but has included a serial console for kernel debugging. This will be an optional component in the future.

The QNX boot image can contain user processes, including shared objects. So the shared object needed to load executables is loaded at boot time, but as a user shared object. Processes start up other processes by a fork-like operation which starts some stub which connects to the executable loader shared object, which loads the executable. So the kernel is not involved. This is not a sufficient justification for a whole shared object system.

QNX, being a hard real time system, does not page at all. This has advantages. You never have to worry about something being paged out when you need it. Response is very consistent. That's worth considering. Really, you don't page in embedded, you don't page in mobile, and if you're paging on servers you're in big trouble. RAM is cheap. I'd suggest not putting in paging.

How does message passing work? Can't find the docs.

seL4 has a guaranteed WCET.

Did you make such a calculation for your microkernel?

No, but it would be interesting to look into.

> They may get there; they don't have shared objects yet.

I'm a bit of a novice when it comes to operating system implementation, so I'm not sure if this is a naive question, but how big of a priority is this for an OS written in Rust? By default, Rust statically links everything besides things written in C/C++ (e.g. libc), so would static linking be sufficient for an OS designed to implement everything in Rust?

Call me crazy, but it would be nice if binaries were bundled with all their dependencies but then the OS maps identical files to the same memory region to save space+perf.

You get the best of both worlds?

Honestly the best part of shared libraries in my opinion isn't that they save space and memory. It's that when there is a security bug in libc I don't have to update everything.

And they make it relatively painless to create plugins with better performance than using IPC.

On the other hand if they crash or contain memory corruption bugs, the host process suffers as well.

The corollary being that you don't need to worry about breaking things when upgrading libc

Well sure, but at least for networked services (which is unfortunately a substantial fraction of the binaries on my computer) I'd happily take the security over functionality tradeoff.

If things don't work, I notice and fix it. If things aren't secure, I don't notice, get hacked, and my files get ransomed back to me.

You could have both, in theory. Ie, use static linking unless a compatible version of the library is present on disk with a higher version number at runtime.
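That fallback rule can be made concrete with a small sketch. Everything here is hypothetical - the function name and the "same major version, strictly newer" policy are just one plausible way a loader could decide; Rust tuples compare lexicographically, which makes the version check a one-liner:

```rust
// Hypothetical loader policy: use the statically linked copy unless an
// on-disk shared library is semver-compatible (same major version) and
// strictly newer than the baked-in one.
fn prefer_shared(builtin: (u32, u32, u32), on_disk: Option<(u32, u32, u32)>) -> bool {
    match on_disk {
        // Tuples compare lexicographically: major, then minor, then patch.
        Some(d) => d.0 == builtin.0 && d > builtin,
        None => false,
    }
}
```

The same-major check is what keeps a breaking upgrade (the concern in the reply below) from being picked up silently.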

That won't prevent the higher version from breaking your application.

In theory it could: simply remove the newer version and the app uses its internal library again.

Or alternatively, like LD_PRELOAD, just have an env variable: LD_DONT_PRELOAD. Though that would make it more complicated.

This is exactly the approach of newer application formats in the Linux world such as flatpak and snaps.

The problem, though, is that all the libraries are duplicated on disk. Flatpak and snaps get around this by recognizing when there is a shared dependency and only downloading it once. But if you "bake in" all your dependencies into a single executable file, that would cause pretty significant inflation of program sizes.

This may be less of an issue now as disk space gets cheaper and cheaper, but it still feels like a step in the wrong direction.

The other issue is that when a bug or vulnerability is patched in the shared code, you no longer get to fix every dependent program by updating a single dynamic library.

If it’s easy to inspect the packages, then you just recursively fix everywhere.

It’s better than having static libraries that you cannot fix without recompiling from source.

It would be cool to have a filesystem that unique'd storage of identical objects under the hood. E.g. even though the user sees a particular filename, it's actual stored using a SHA to detect duplicates.

If you edit a file, it could do a copy-on-write, to ensure you're not inadvertently changing a different file.
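The duplicate-detection half of that idea fits in a few lines. This sketch groups files by a hash of their contents; a real filesystem would use a cryptographic hash like SHA-256, but std's non-cryptographic `DefaultHasher` is used here purely to keep the example dependency-free:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::fs;
use std::hash::{Hash, Hasher};

// Hash a file's contents. A real deduplicating store would use a
// cryptographic hash (e.g. SHA-256); DefaultHasher avoids external crates.
fn content_hash(path: &str) -> std::io::Result<u64> {
    let bytes = fs::read(path)?;
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    Ok(h.finish())
}

// Group paths by content hash; any group with more than one path is a
// candidate for sharing one stored object (with copy-on-write on edit).
fn find_duplicates(paths: &[&str]) -> std::io::Result<HashMap<u64, Vec<String>>> {
    let mut groups: HashMap<u64, Vec<String>> = HashMap::new();
    for p in paths {
        groups.entry(content_hash(p)?).or_default().push((*p).to_string());
    }
    Ok(groups)
}
```

The filesystem would then store one object per group and point every filename at it, copying on the first write.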

Deduplicating filesystems do exist; ZFS is one of the better examples. Of course, maintaining the indexes turns out to become a larger percentage of your cpu time, and those indexes have to live somewhere so they occupy a lot of memory. (You could reduce that impact by doing file-level rather than block-level deduplication, but then you also lose many of the benefits)

If you want something that's higher level, there are tools like git annex that do this.

ZFS (and others) feature block-based deduplication, not file-level deduplication, which is cheap and fast (though less “effective”).


There was Venti in Plan9 that had this style of approach.

[1] https://www.usenix.org/legacy/events/fast02/quinlan/quinlan_...

This is true of Win 3.1 and all versions subsequent (unless a JIT is involved).

The problem with this is DLL hell. You need a central repository of images for mapping, but that leads to conflicts. Possibly a good opportunity for CAS?

That’s exactly what a shared library is right? It’s not crazy, it’s existed for a very long time.

But then address space randomization won't work.

Note: libc is not a part of Redox's future. Instead, a rewrite in Rust (relibc) will be used: https://gitlab.redox-os.org/redox-os/relibc

So do you suggest that every executable on the system should be statically linked? (Early Unix was like this, too.)

Rust is apparently capable of producing shared libraries: https://doc.rust-lang.org/cargo/reference/manifest.html#buil...

I'm not suggesting anything, but more asking how feasible it is for a modern OS. Rust is definitely capable of producing dynamic libraries, but it just doesn't by default. Given that the Rust ecosystem tends to prefer static binaries, I was curious about whether something in the OS space would run into unique issues from this approach that wouldn't apply to the application development currently being done.
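For reference, the Cargo side of this is just a manifest setting: per the manifest documentation linked above, a crate can ask for a C-compatible shared object in addition to the usual Rust library.

```toml
# Cargo.toml: also emit a C-compatible shared object
# alongside the usual Rust rlib.
[lib]
crate-type = ["cdylib", "rlib"]
```

Whether an OS loader actually consumes such objects is a separate question from whether the toolchain can emit them.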

This might vary from OS to OS, but at least for many Unix-based systems, everything running in a kernel context shares the same address space, so shared libraries aren't that relevant. Shared libraries for user processes are still desirable for a variety of reasons (memory usage, not needing to reload shared libs into the cache after a context switch).

Back in the old days static linking was the only option available in most computer systems, even UNIX only got dynamic linking support during the mid-90's.

Has anyone found the spec for interprocess communication? It's a message-passing system; this must be written down somewhere.

Redox is a really cool project. I haven't tried running it in a while though. Last time I tried it I couldn't get it running on real hardware, maybe things have gotten better there.

Either way, if you find Redox interesting, I bet you'll find https://os.phil-opp.com/ interesting too. It's a much less full-fledged project, but the blog posts are a great way to learn about Rust and OSes at the same time.

> Unix-Like

It has been my impression that one of the advantages of Unix is not having to rewrite the userland each time you write a new kernel or port an existing one to a new platform.

Since one of the goals of the project is rewriting the entire userland in Rust, that's sort of a moot point.

On the other hand it would be a good point if the rewrite could run on unix

I am the creator of Redox. We have a somewhat POSIX compliant C library called relibc that supports existing Unix applications.

Exactly, there is no sense in trying to write UNIX in another language because it defeats the original goal of the system.

It's Unix-like, not a Unix clone. In particular, they settled on an "everything is a URL" model instead of "everything is a file", with different URIs being delegated to userspace services.

It still draws quite a bit from Unix though, and provides a shim translating `/dev/proc` to `proc:` or whatever.

"Everything is a URL" is a win for browsers because everything with a URL can be linked to and bookmarked.

But it seems like for type safety, you wouldn't want different OS services with different APIs to have the same type? If they aren't actually interchangeable, pretending they are with a common naming scheme is just a source of bugs.

I think, e.g., a different protocol could give the idea of a differently typed namespace. Imagine (not from real Redox):


Well, that's a major mistake -- now I need to know what service I'm talking to, which means that I can't transparently replace my /dev/net with a userspace proxy. Unix half screwed this up, and they took the mistake and perfected it.

Redox is a microkernel. All drivers are run in userspace. I'm not clear on what your criticism is here.

You can replace anything transparently, the Redox kernel supports scheme namespacing.

So, for example, can I replace my network traffic with a file for testing, by binding a scheme to a file on disk?

To do that, you need to use a driver that translates between a file: and ip: (or ethernet:, tcp:/udp: or network:, allowing binding at different levels of the stack), because the protocols allow different operations. Said driver of course runs in userspace (because microkernel) and already exists. (Because it's absolutely needed for development.)

> driver that translates

That's something new... So, for the N-th device type in addition to the device driver I might need 2*(N-1) translating drivers!

Most of those would be meaningless, or require enough decisionmaking that the program between them cannot reasonably be called a driver. What kind of driver translates between audio and window system objects?

In any case, you only need 2N for translating both ways between file: and whatever: to get the same experience Unix does, and wherever that makes sense it is usually provided.

The win is of course that each of those protocols can be strongly typed and provide exactly all the operations that make sense for that protocol. Basically, think of all the things IOCTL does and give them their own names.
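A tiny illustration of that point - the types and operations below are invented for the example, not actual Redox scheme APIs. Each protocol exposes only the operations that make sense for it, where `ioctl(fd, request, arg)` would accept any request number against any file descriptor:

```rust
// Illustrative only: a strongly typed per-scheme operation set,
// standing in for what ioctl does with untyped request numbers.
#[derive(Debug)]
enum TcpOp {
    SetNoDelay(bool),
    SetTtl(u8),
}

fn apply_tcp(op: TcpOp) -> Result<(), String> {
    match op {
        // In a real system these would message the tcp: scheme daemon.
        TcpOp::SetNoDelay(_) => Ok(()),
        TcpOp::SetTtl(0) => Err("TTL must be nonzero".into()),
        TcpOp::SetTtl(_) => Ok(()),
    }
}
```

A `SetVolume` operation simply cannot be expressed against a TCP handle here; with ioctl, passing an audio request number to a socket is only caught (if at all) at runtime.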

Ioctl was a horrible mistake, since it prevents configuring and introspecting devices with general interfaces and ad-hoc scripts.

This elevates the mistake, and polishes the worst of the early Unix conceptual fuckups.

Is there any advantage to be gained from that versus having file:// urls? With the latter, every file is also a url.

> Is there any advantage to be gained from that versus having file:// urls? With the latter, every file is also a url.

it can be used as a primitive typing layer

That sounds a lot more different from Unix than Linux, which is commonly described as Unix-like.

Assuming that "Unix-like" is a spectrum, why does Linux have to be the furthest point from Unix?

Well, Linux seems to be the Unix of today.

i guess, that's why it's called unix-LIKE. it's something, people are familiar with and a design that works. good enough, i would say

Really cool project! It reminds me of TockOS from SOSP'17 which is also written in Rust but designed for extremely resource-constrained devices.

Serious question: how is this similar to TockOS, other than being written in Rust? TockOS doesn't use an MMU since it's designed for single-address-space applications on microcontrollers.

I think Redox is cool but I wonder if it not being POSIX compliant is going to cause adoption issues.

I am the creator of Redox. We have a somewhat POSIX compliant C library called relibc that supports existing Unix applications.

Redox is cool but I'm pumped for the OSes that really take advantage of Rust to create new architectures.

The issue with such an OS is that there won't be any libraries and tools. So the OS vendor might need a custom shell, custom utilities, custom libraries for everything ... or they offer POSIX-like APIs. One area where one can see this is IncludeOS - they had their custom APIs for a long time to leverage their architecture, but are now focused on providing POSIX compatibility ... and that's only for single purpose unikernel systems. If the OS should also be universal this is even more of a problem. (Especially if you also want a graphical desktop ...)

With that attitude you'll never be able to ditch legacy cruft, and there's quite a lot of it in POSIX. I agree with the other poster that a compatibility layer in the form of virtualization and/or emulation should be sufficient, as long as your new OS brings something to the table that's desirable. Shunt the legacy crap off into its own contained environment and build a nice new clean one for new stuff to use.

I still think there are a plenty of reasons to support POSIX in many places. As someone who's been running fish for a while, I can appreciate the common ground it provides across the current UNIX ecosystem, but it's not like translating a bash script to fish is impossible.

There should always be a supported standard, but there should be nothing forcing you to it. This is the freedom we need to demand of our OSes.

Yes, in an imaginary world replacing POSIX would be great. But that's not an easy endeavor: 30 years of legacy are tied to it.

As long as you don't have POSIX, only special-interest users will use your system.

I believe the only way forward is to start with POSIX and then move on step by step, deprecating part by part.

In the same imaginary world, creating a new POSIX OS that isn't Linux is a giant waste of time just from gap in drivers alone. If you're going to ditch all that hardware support then you may as well ditch POSIX and its crap while you're at it.

Which is a reason why despite disliking what Google did to Sun and how hard they make to use the NDK, in the end I appreciate their Android design.

No POSIX for userspace as official API, rather ISO C and C++ APIs plus Android native APIs.

Unless you have some portable standard to target, you just tied the entire universe to exactly 1 OS implementation. I would like to not do that.

Bull. People developed software for multiple completely different computer architectures throughout the 80s and 90s. People do it today between different game console platforms in addition to operating systems. Even if you try and target something like SDL or a web browser your abstraction won't save you from a platform's quirks once you reach a certain level of complexity and then you'll have to work around it anyway.

Hell, even between supposedly POSIX systems there's a lot of #ifdef going on to make things work.

My issues with POSIX stem from the fact that writing completely correct code which handles signals, interruptable operating system calls, and threads is hard. There are plenty of little details that are easy to get wrong. And you won't know you've gotten something wrong until much later when some confluence of events occurs.

I don't know if deprecating parts of POSIX is going to work any better than deprecating parts of C++. If all the bad stuff is still there waiting to be misused...

Or successfully pull off the library-OS concept with POSIX as a first-class citizen. z/OS is most of the way there. NT has tried a couple of times, with the POSIX subsystem first, and now their Windows Subsystem for Linux work came out of MS Research's Drawbridge library-OS work.

And SQL Server on Linux makes use of library OS concept as well.

That's heavily overstated.

On the real mobile OS world that is already a fact for user space apps.

Virtualization/Emulation seems to be in the making. Let's see how it goes

As a user, I personally wouldn't mind a "fresh start" when it comes to userspace. Just look at Haiku -- yes, it's technically not a new design, but it sure ain't Unix.

> yes, it's technically not a new design, but it sure ain't Unix.

Except we are? We are pretty POSIX compliant all the way into the kernel, we have "/dev", filemodes, etc. We don't have X11 or other UNIX staples, sure, but we are pretty UNIXy.

I didn't know that. Thanks for clarifying.

By "userspace" I was more talking about "the programs and interfaces that a normal user interacts with". Haiku is pretty unique in that regard.

> By "userspace" I was more talking about "the programs and interfaces that a normal user interacts with". Haiku is pretty unique in that regard.

In terms of GUI apps... sorta? We use extended attributes and IPC messaging more than most Linux desktops do, that's true, and our UI/UX is often different.

But if you're talking CLI, then, also no. Bash is the default shell, coreutils are installed by default, sshd is activated by default, etc.

I don't recall BeOS being that POSIX friendly though.

Haiku's own extensions to the original design?

Yes, indeed; we implement most of the mainline POSIX specification where BeOS did not at all (it didn't even have pthreads.)

BeOS supported enough POSIX to use gcc.

ISO C and POSIX are orthogonal. GCC also supports non-POSIX platforms.

> The issue with such an OS is that there won't be any libraries and tools.

This might not be as big of a deal. Rust increases your productivity quite a bit and I'm really impressed with the pace of progress in the community. I can imagine that new, better & more integrated tools will be made.

Don't even need to implement POSIX if you just emulate another OS inside, this seems to be a pretty viable solution for some of the cases.

I'm more concerned with drivers/firmware, which could be handled the same way, but seems less appropriate.

Ditching POSIX is all a matter of money and willingness, as proven by iOS, Android and ChromeOS.

Not really relevant to rust, but if you squint, Kubernetes is basically an operating system.


My guess is someone will try to have an OS based on containers of WebAssembly apps. There are quite a few APIs that have been built over the years that are familiar to programmers. I do believe this will cut down on the pain of having to develop new system-level tools to manage such a beast.


"Nebulet is a microkernel that executes WebAssembly modules in ring 0 and a single address space to increase performance. This allows for low context-switch overhead, syscalls just being function calls, and exotic optimizations that simply would not be possible on conventional operating systems. The WebAssembly is verified, and due to a trick used to optimize out bounds-checking, unable to even represent the act of writing or reading outside its assigned linear memory."

WebAssembly has a few more advantages when deployed as part of the kernel; you can run things in ring 0. Even better, you can transparently remap things in memory. And even better than that, you can implement IPC as a simple function pointer. Safely.

Of course, you can't have everything as WebAssembly, some core drivers will need to run some critical machine code, but those could be tightly enough integrated that the overhead is almost zero (ie, by using WA imports you can turn this into a function call overhead)

* entirely async api

* more light weight memory manager (possible thanks to borrow checker)

* GPU first OS

* a better shell. I know I can run whatever shell I want on unix but a better shell being native would go far.

I agree with the first two points, not sure how Rust and GPUs are really related yet. I mean I know you can bind into GL/etc libs, but there's something more profound about Rust's type system, and the parallelism / memory model of a GPU (or CPU/heterogeneous computation in general). AFAIK, there's no way to write GPU shader code that shares the static analysis from the Rust CPU code. It would be very interesting to be able to talk about the move semantics across the full modern computing architecture.

If anyone knows work being done in this area I'd be curious to read more personally.

As for a better shell, I also completely agree, but I'm not sure it needs to break POSIX. Shameless little plug, I recently started a shell in Rust myself: https://github.com/nixpulvis/oursh

As a fish user myself, I would love to see a new shell that retains many of the UI features of fish (like the excellent autocompletion behavior while typing) but with an actual usable modern fast scripting language.

POSIX compatibility at the scripting layer is beneficial for being able to run existing shell scripts, but the sh scripting language sucks in many ways.

What I'd really like to have is a shell that supports both a POSIX compatibility mode for running existing scripts, alongside a more powerful and modern scripting language for use in writing scripts targeting the new shell directly. I'm not sure how to identify which mode an arbitrary script should run in though, or which mode should be used at the command line.

I've started allowing just that: `{#! }` blocks as a means to write other languages inside the shell syntax.

Take a look at: https://nixpulvis.com/oursh/oursh/program/index.html

Oh great! I hope you take good care in designing the "modern" language, because once people start writing scripts for your shell, it becomes very hard to fix mistakes (this has been a problem for fish's scripting). I wish I had the time to be involved in designing a new shell scripting language, as it's something I'd really like to see done right, I just have no time to spend on that.

Incidentally, the link to the `modern` module is broken, it's just program::modern (which is of course not a valid link). Given that I don't see a `modern` module in the TOC I'm assuming the module doesn't actually exist yet?

I'm still living in the 70s for now... Maybe one day I can be a modern man living in a modern world.

Until then I struggle with background jobs, because they are a fucking pain in the ass.

Oh man, a lot of the ideas I've thought about for improving handling of shell scripts have problems in the presence of background jobs, and in particular the ability to background a job that's currently foregrounded.

On a related note, here's something I've been thinking about:

I want to be able to insert shell script functions in the middle of a pipeline without blocking anything. Or more importantly, have two shell functions as two different components of the pipeline. I believe fish handles this by blocking everything on the first function, collecting its complete output, then continuing the pipeline, but that's awful. Solving this means allowing shell functions to run concurrently with each other. But given that global variables are a thing (and global function definitions), we need a way to keep the shell functions from interfering with each other. POSIX shells solve this by literally forking the shell, but that causes other problems, such as the really annoying Bash one where something like

  someCommand | while read line; do …; done
can't modify any variables outside the loop, because the while loop runs in a subshell.

So my thought was concurrently-executed shell functions can run on a copy of the global variables (and a copy of the global functions list). And when they finish, we can then merge any changes they make back. I'm not sure how to deal with conflicts yet, but we could come up with some reasonable definition (as this all happens in-process, we could literally attach a timestamp to each modification and say last-change-wins, though this does mean a background process could have some changes merged and some discarded, so I'm not sure if this is really the best approach; we could also use the timestamp the job was created, or finished, or we could also give priority to changes made by functions later in a pipeline so in e.g. `func1 | func2` any changes made by func2 win over changes made by func1).

When I first started typing this out I thought that this scheme didn't work if the user started a script in the foreground and then backgrounded it, but now that I've written it out, it actually could work. If every script job runs with its own copy of the global environment, and merges the environment back when done, then running in the foreground and running in the background operates exactly the same, and this also neatly solves the question of what happens if a background job finishes while a foreground job is running; previously I was thinking that we'd defer merging the background job's state mutations until the foreground job finishes, but if the foreground job uses the same setup of operating on a copy of the global state, then we can just merge whenever. The one specialization for foreground jobs we might want to make is explicitly defining that foreground jobs always win conflicts.
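The last-change-wins merge described above can be sketched in a few lines. Everything here is hypothetical: logical counters stand in for real timestamps, and the conflict policy is just the one proposed in the comment:

```rust
use std::collections::HashMap;

// Each job runs on a copy of the globals and records a logical timestamp
// per variable it modifies. name -> (value, stamp).
type Env = HashMap<String, (String, u64)>;

// Merge a finished job's environment back into the global one,
// keeping whichever change to each variable is newer.
fn merge(global: &mut Env, job: &Env) {
    for (k, (v, t)) in job {
        match global.get(k) {
            Some((_, gt)) if gt >= t => {} // existing change is newer: keep it
            _ => {
                global.insert(k.clone(), (v.clone(), *t));
            }
        }
    }
}
```

The "foreground always wins" specialization would just mean giving foreground jobs a timestamp guaranteed to sort after any background job's.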

This is along the lines of things I was thinking myself. I'm currently aiming to get the POSIX programs working 100%, which I don't believe could allow this. But, the framework for running managing foreground and background jobs should support both the POSIX and Modern syntax and something like this. This is EXACTLY the kind of thing I want to add to the new "modern" language!

Also, the ability to "rerun" previous commands from a buffer without actually re-executing anything would be a cool somewhat related feature.

If you want to chat about shells anytime shoot me an email or something: nathan@nixpulvis.com

Entirely async API and GPU-first OS were fulfilled by Windows Phone; sadly, it did not work out well.

A bit terse, since I'm on mobile:

> * entirely async api

Surely async only makes sense for IO?

> * GPU first OS

What does this actually mean? The GPU can't be used for everything

> * a better shell. I know I can run whatever shell I want on unix but a better shell being native would go far.

What do you mean by 'native'? Do you just mean 'ships with the OS'?

As in something different from Unix, I take it.

I feel like a new fresh look at what UNIX is today could be valuable. I wouldn't want to give up a lot of the philosophy around it. Redox does in fact do this in a number of places, for example I think the expansion on "everything is a file" [1] is a pretty awesome idea.

[1]: https://doc.redox-os.org/book/design/url_scheme_resource/eve...

I'd love to see a "there are no paths, file system is a db" like it was originally rumored Vista was going to be. I'd also love to see more opinionated OS integration: e.g. per-application volume sliders, better file system search indexing, a real-time built-in file and directory watching API that can be blocking or non-blocking, standardized storage of program files instead of installed app locations being ambiguous, standardized system and app settings via a neat API, an easy and sane git-based package manager, etc.

Windows 10 does per-application volume control.

Chrome OS has a fixed app install location, as does Windows 10 in 'S' mode (since you can only install store apps).

I remember being intrigued by the database-as-filesystem idea when it was first touted - has any OS actually implemented this? I'd be interested to see how it works in practice.



Everything old is new again... this was from the mid-60s and was relatively popular for its time.

Very interesting. The one thing that stood out to me was this, though:

> It is named after one of its developers, Dick Pick.

What an unfortunate name.

OS/400 did this, very cool tech that was never "cool"

Given the way Android and UWP apps work, I always have a warm fuzzy feeling of carrying a subset of OS/400 design ideas with me.

These are all very good ideas I’ve also thought about.

WinFS was ahead of its time. Also, re file watching: I think I want a reactive API for a lot of these operations.

That only makes things more confusing because Redox is explicitly not a traditional Unix. It's more like Plan9, except it goes even further than Plan9 in a few places, and things like the Ion shell in Redox don't attempt to be POSIX compatible.

What about comparing Redox with Inferno instead?

The actual end of the line for UNIX improvements done by the original designers.

I never got the mystique around the middle stop instead of the end of the line.

If an operating system was built to run lambdas instead of processes, what would that look like?

What is a lambda to you in this case? That lambda will need to be scheduled, it will need to maintain its scope... All the same general issues could exist. There have been plenty of machines that simply run functions; in fact, before there were things we call OSes today, the machines that ran the code would typically not have a notion of time sharing, and would map even more closely to a pure lambda evaluator.

Without specifics about what differences you mean, lambda = function = process = thread = fiber = service = worker = ...

> What is a lambda to you in this case?

Pure functions.

> That lambda will need to be scheduled, it will need to maintain its scope... All the same general issues could exist.

But no users would have to be able to start processes. Instead, lambdas could be associated with persistent storage of state, and processes would be started by the OS to apply the lambdas in a simulation loop, but users wouldn't directly start processes.

Perhaps thinking of those as processes isn't quite right either.

Have you looked at Urbit? It uses this idea - programs are state machines that you give to the runtime, which pumps them with events (IO, RPC calls, etc) that return [new-state events-out]. All programs are purely functional and deterministic, so you can replay events or serialize the app to transfer somewhere else.

All programs are purely functional and deterministic, so you can replay events or serialize the app to transfer somewhere else.

Interesting. Actually, I have one of my processes serializing its state and exporting it to a different process, while all of the clients continue playing the MMOsteroids game they're logged into.

The only truly pure functions exist in the imaginary world of mathematics, if you even believe that.

I think I kinda get your point, but I'd challenge you to think about this issue from the frame of mind of access control and permissions a bit. I think you'll find the need for some kind of process-like task. Maybe not...

The only truly pure functions exist in the imaginary world of mathematics, if you even believe that.

Note I already mentioned persistent stores of state.

I think you'll find the need for some kind of process-like task. Maybe not...

Maybe re-read. I've already said that there would be a process like task. Lambdas will need to be associated with state. Users won't have to start processes. Instead, processes will be more like processors.

This sounds like the Erlang VM.

I'll take that as positive feedback.

I'd look at Haskell-based stuff first. I was told, though, by Haskellers that things like House operating system are imperative in style even if done in Haskell. So, a quick search for functional programs for OS's gave these possibilities:


You were actually in that thread. Did the commenter's work not actually use FP or you not see it?

You were actually in that thread.

That's kinda trippy. It's like I was a different person. Also, I was working on the predecessor system to the one I'm currently working on. Back then, the thing was written in Clojure. I later ported it to Go. The design philosophy has changed a heck of a lot as well. Back then, I was going to have everything on the same very large and fast virtual instance, with modest goals for the largest population/scale. Now my system is scalable by adding more "workers," which I've spread out onto small AWS instances.

"A security kernel based on the lambda calculus" http://mumble.net/~jar/pubs/secureos/


If K-A-T doesn't spell 'cat', what does it spell?


It would look like a LISP machine from Symbolics...

Where can one find lists of hardware and software supported to date?

Hardware is pretty much just try it and see if it works. Drivers supported are pretty much just what VirtualBox/QEMU use. https://gitlab.redox-os.org/redox-os/drivers Things like e1000. Many Thinkpads work.

Software list is here https://static.redox-os.org/pkg/x86_64-unknown-redox/

Saw it before but haven't tried it out yet. At first glance I mistook it for Harvey https://harvey-os.org/


>microkernel based

Aren't these two contradictory? Unix-like would be a classic monolithic design.


It adopted the same CoC as the Rust community.

I hope not

Is there like... a joke to the screenshots page being literal pictures of screens? Because I'm dying on the inside.

Running your own OS on real hardware is an accomplishment.

In no way am I implying or saying it’s not an accomplishment. It’s just the fact that they’re literally screenshots I found hilarious/brilliant.

Redox is still very much a prototype and shows you can't simply slap "safe" Rust together and magically conjure up a kernel and OS: https://gitlab.redox-os.org/redox-os/redox/issues/1136

So, no one anywhere is claiming you can write a kernel in purely safe code. It fundamentally needs to access arbitrary memory locations which is unsafe.

That said, I don't think any of the issues in that thread are related to unsafety. It's perfectly safe to panic. None of those bugs are memory corruption, arbitrary code execution, or so on, which is what safety tries to protect against.

They aren't. Some verification experts and I are:



It's safe if you can prove it's safe. Even C with Frama-C. If the type system can't do it, use an external tool to do it. Tools exist, automated and manual, for doing such safety proofs on high-, low-, assembler-, and microcode-level software. There's a long tradition of verification in hardware, too. Rockwell-Collins even uses the tools common for hardware to verify software and hardware together.


The real problem, aside from just limited resources to build tooling, is a mindset of trying to rely on one tool for safety instead of mixing up different approaches to get their benefits.

But people are claiming you can't write a safe kernel in C, and point out safe languages which should be adopted in its place. This misses the point that, just like the theoretical safety of C is 100% safe, the theoretical safety of Rust as a low-level language is zero.

Those are all bugs produced essentially on request. That doesn't bode well for the security and robustness of the project. The end user doesn't care whether the class of bug is memory related or whatever else if the end consequences are the same. Despite having the benefit of safety and the decreased burden on the programmer this offers, bugs still abound in Redox, which points to it being written in Rust as incidental at best.

Don't gloss over the distinction between security and robustness. The consequences of panics and memory corruption are very much NOT the same. The former means a reboot, which is annoying; the latter means corrupted or exfiltrated data or a hostile takeover of the system.

It's hard to tell what argument you're making, but it sounds something like "All languages whose safety is less than 100% are equally safe". Obviously Rust is safer than C (both in theory and in practice, both for high- and low-level programming).

NOTE: Observers should resist the temptation to interpret this post as an endorsement for the "rewrite everything in Rust!" crowd.

Nobody has ever said that it is impossible to write buggy code in Rust. It merely makes security guarantees about certain classes of bugs, and the use of unsafe code blocks makes it easier to isolate cases where those classes of bugs can occur.

Just go watch the talks from Linux Security Summit 2018.


Linux kernel developers are the first to acknowledge that something has to be done to change the course of CVEs in the Linux kernel.

In 2017 alone, 68% of exploits were caused by out-of-bounds errors.
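A small illustration (my own, not from the talks) of why that class of bug behaves differently in safe Rust: an out-of-range index is caught at runtime and becomes a controlled panic or a `None`, never a silent read or write of adjacent memory as in C.

```rust
fn main() {
    let buf = vec![10u8, 20, 30];

    // Checked access returns an Option, so the error is handled explicitly.
    assert_eq!(buf.get(1), Some(&20));
    assert_eq!(buf.get(99), None); // out of bounds -> None, no UB

    // Plain indexing out of bounds panics deterministically instead of
    // corrupting memory; catch_unwind demonstrates that here.
    let result = std::panic::catch_unwind(|| buf[99]);
    assert!(result.is_err());
    println!("out-of-bounds access was stopped, not exploited");
}
```

A panic can still crash a kernel component, so it is a robustness bug, but it closes the door on the memory-corruption exploits that statistic is counting.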

I think you could probably go as far as to say that, with today's understanding of an OS, one can't simply slap together an OS in any medium. LOL

I wonder how much confusion could be avoided if the term was unchecked or unrestricted instead of unsafe? Though I suppose unsafe establishes that some caution is required.

I think this is awesome, but how many more Linux OSes do we need? The community needs to come together, hammer out the end user desktop issues, and get a unified Linux out to the unwashed masses. Windows 10 can't be the only future.

Can you name any end user desktop issues for Linux? I don't know of any except the lack of native Adobe Photoshop, a limited selection of games, and inefficient marketing. It's already easy and pleasant for any reasonably young non-gamer, non-geek with a Windows 7/Mac background to use Ubuntu or Manjaro: just install it, install and pin all the apps they need, and no questions emerge.

> Can you name any end user desktop issues for Linux?

I've installed Linux on like 5 of my desktops/laptops. The best way to describe my issues with it is "death by a thousand cuts". Namely that random stuff just doesn't work, either at all or the way I expect or want.

Installing Nvidia drivers is hit or miss. Wifi often doesn't work out of the box. The touchpad experience is far inferior. Also all of the desktop environments I've used have been really ugly (GNOME, KDE, Unity).

Even when installing Linux there are so many options for partitioning (what format you want, swap space size, etc) which are likely overwhelming for non technical users.

> Also all of the desktop environments I've used have been really ugly (GNOME, KDE, Unity).

This is very much in the eye of the beholder. Linux with KDE has been my daily driver since at least as far back as 2009 (with the KDE 3.5 series), possibly earlier.

In no way would I say it's any uglier than windows, especially now with all the effort to make key GTK applications fit with Qt ones via theming. Windows 10's hodgepodge of old and new styles for things like settings is more offensive to me than anything a Linux graphical desktop does.

Agree the driver situation lagging on Linux vs. Windows is sub-optimal, but when the driver support is there, I don't find Linux to work any more poorly than Windows.

The reality is that every major desktop system has issues, but at least with Linux, if you learn enough about the plumbing, you can go in and try to fix or work around issues that arise. Until we move into a new world of robust, correct-by-construction, non-worse-is-better software, I'll take the lumps I get with Linux over the others whenever I have the choice.

> The reality is that every major desktop system has issues, but at least with Linux, if you learn enough about the plumbing, you can go in and try to fix or work around issues that arise.

Have you ever tried that? You'll find that the plumbing consists of 20 different standards of pipe cobbled together over the past 30 years by dozens of different plumbers, each with their own conception of how plumbing should work but too lazy to tear out the whole thing and replace it so they just patch in their change with duct tape and rubber bands.

And worse, that's the culture the community seems to prefer. Case in point: the one guy who's shown a willingness to unify that plumbing, Lennart Poettering, is loathed for being successful at it.

> Have you ever tried that? You'll find that the plumbing consists of 20 different standards of pipe cobbled together over the past 30 years by dozens of different plumbers, each with their own conception of how plumbing should work but too lazy to tear out the whole thing and replace it so they just patch in their change with duct tape and rubber bands.

This is actually true :), but I defy you to find a desktop OS of which the same isn't true. Did MS ever fix the fact that they have 2 completely different control panels with partially-overlapping functionality? And I know Windows Explorer still can't open certain paths because DOS had a ... poor implementation of device files.

> Poettering is loathed for being successful at it.

Poettering unified the plumbing by taking a demolition crew to the house and replacing the plumbing and electrical systems while people were living in it, informed us that objects being automatically thrown in the trash if they were on the floor when you left the room was a feature[0], and demanded that all faucet manufacturers adopt a new pipe size that only his plumbing uses[1].

[0] https://github.com/systemd/systemd/commit/97e5530cf20

[1] https://news.ycombinator.com/item?id=11797075 https://github.com/tmux/tmux/issues/428

> Did MS ever fix the fact that they have 2 completely different control panels with partially-overlapping functionality?

If you're referring to the split that came with Windows 8, there's been (slow) progress in Windows 10. There's still a handful of settings left in the old Control Panel, but most of them have been moved to the new Settings app.

> Have you ever tried that?

I have. I have been able to find offending runaway Flash sessions and kill them without taking out all my Firefox windows. I have been able to force wifi associations when some software flaw is preventing an automatic join to those networks. In similar situations Windows will simply not enumerate the network and recourse is limited. The examples go on for situations where things don't work.

Look, I agree with you and the criticisms of the CADT development model. I'm not claiming the Linux experience is objectively great, end-of-story. I'm claiming that if the hardware is supported by a mature enough driver (which is true of a lot of hardware!), I don't find the Linux experience to be more frustrating than Windows/Mac, subject to the caveats I made about commercial software in https://news.ycombinator.com/item?id=18444723 . And it's great to not to have to go out of my way to keep the OS vendor from gathering data from my system without my express permission.

Another nice thing about Linux is that it makes a good host for VMs, so for those times when Windows is needed (assuming not for games), it can be kept in a VM with some measure of control.

We are a long way from desktop software utopia, but real breakthroughs probably depend more on rigorously-architected and implemented environments vs. working on the edges of decades-old architectures whose fundamental shortcomings are legion and are implemented in unsafe languages. Windows, MacOS, and Linux (or name your choice of free Unix-alike) all suck in this regard.

> And worse, that's the culture the community seems to prefer. Case in point: the one guy who's shown a willingness to unify that plumbing, Lennart Poettering, is loathed for being successful at it.

I don't know that that's a fair characterization. Some of the people raging will have their current way of doing business pried out of their cold dead hands. Others welcome better and more sound ways of doing things <raises hand>. But there are plenty of problems with the ad-hoc, NIH, and questionable software quality approach that the systemd implementers use. There was a front-page HN submission just a day or two ago on readiness protocols (written by J. de Boyne Pollard) covering systemd shortcomings, not to mention udev screwups, DHCP issues (does systemd really need its own DHCP client??), etc. All in all, MHO is that systemd is a significant step forward but suffers mightily from its ad-hoc development approach.

So there was!

* https://news.ycombinator.com/item?id=18416854

Just for clarification: the punctuation in that sentence should not be misread as my FGA on readiness protocols covering udev and DHCP, which it does not. (-:

> Installing Nvidia drivers is hit or miss.

That's why I specified it as for non-gamers. I have an old built-in Intel graphics controller (the same as in the old MacBook Air model that still has F-keys) and have no problems. Nevertheless, I have always highlighted graphics driver quality as a permanent problem: nobody ever writes really good ones, even Windows drivers are quirky, and Linux drivers are full of problems in every single version (yet it's rarely too hard to set up a configuration that works nicely and forget about it, if 3D graphics is not among your primary computer usage tasks).

> Wifi often doesn't work out of the box.

Only on MacBooks. In my experience it has worked out of the box on non-Apple PCs for about 7 years now.

> The touchpad experience is far inferior.

In my experience it's exactly the opposite. But I haven't used PCs with multitouch touchpads, so it's possible you're right.

> Also all of the desktop environments I've used have been really ugly (GNOME, KDE, Unity).

To me, and to those I've shown them to, Unity and today's KDE 5 (as shipped with Manjaro) look great (old KDE versions looked ugly, I agree). And the look can be customized to whatever you may desire.

> Even when installing Linux there are so many options for partitioning (what format you want, swap space size, etc) which are likely overwhelming for non technical users.

It's exactly the same as with Windows: either use the default partitioning or whatever partitioning you want. The only difference is that Linux installers usually let you define more complex partitioning without having to use third-party tools like PartitionMagic/Acronis. Anyway, it's always a great idea for a non-geek user to ask a geek friend to install an OS for them rather than do it themselves, regardless of what OS they would like to install.

Lack of a good financial alternative to Quicken keeps my dad off of Linux, but the chasm between Linux desktops and Mac is huge (I wish it weren't so, as I'm very fond of the idea of a high-quality Linux desktop experience). It's not just the availability of apps, but the general quality of the offerings and overall user experience. A lot of this comes down to the fact that GTK and Qt are absolutely terrible compared to native Mac toolkits.

Another huge problem for Linux desktops is the lack of support for high-quality hardware--for example, I haven't found any Linux laptops with trackpads that are in the same ballpark as Macs' (and installing Linux on Macs and configuring/calibrating it to behave sanely is a huge pain).

This is a very fair point about Linux being an uncompelling target for commercial software. Sadly, even many engineering tools such as CAD systems have stopped supporting Linux.

There is a lot of value to having some organization have both end-to-end responsibility and authority for the functioning of end-user software stacks such as desktop environments. Even the Red Hat model is not enough to keep all the myriad independently-developed and maintained pieces of FOSS synchronized and moving in the right direction collectively to make an appealing target for commercial desktop development. I don't know if there is a viable solution to this problem building on the FOSS ecosystem as it exists.

And of course, irrespective of what RMS would wish, it seems the only people willing to work on a lot of the hard and unsexy problems are in fact commercial developers that need to make money from the sale of the software, not just support.

The last time I used Ubuntu it didn't have support for horizontal scrolling, for instance, and all the answers online were outdated dconf-editor paths.

I've been on Debian for two years and I've never not had horizontal scrolling.

Yeah like you know UX.

Redox isn’t Linux. It has its own kernel that you can read about in the docs.

I get where the post is coming from though. Redox decides to write a brand new kernel with some interesting ideas, but then basically just slaps a UNIX-like userland on top of it.

What's wrong with changing a few things at a time? Why should this project attempt to solve every problem at once? I can't imagine that resulting in a usable system, at least not in the next decade.

Nothing wrong with it really, especially since Redox bills itself as a research OS (at least it did last I checked) and not a contender for replacing desktops.

But I still share the original poster's disappointment in yet another UNIX-like system, especially at a time when, in my opinion, Personal Computing and the Desktop in particular are being driven towards extinction.

If they just waited around for people to support Redox in every application natively, it would never happen. A POSIX-y shim layer is a very practical concession.

Path of least resistance -- but it seems like you can write a non-unix-like userland on top of it if you want to.

Unfortunately, improving the desktop picture and having a unified Linux for the masses has been an unfulfilled dream for at least 20 years. There have been real improvements, but short of a black swan event it still feels like we're far away from that goal.

It's not likely that the small groups of people working on new operating systems really affect the outcome of the open source desktop. One guess is that not enough UI/UX people work on open source desktops. The other guess is that hardware information is hard to come by, so driver support is still lacking.
