MINIX from Scratch (github.com/o-oconnell)
280 points by elvis70 on Feb 23, 2023 | hide | past | favorite | 94 comments

The one thing MINIX really needs is a maintainer.

The 3.3.0-rc8 or whatever version has been sitting there for several years, despite all blockers already being fixed. Just because there's nobody at the helm to push the release out.

There's also significant unmerged work that's just waiting for someone to merge it.

It's not just the lack of active maintainers. MINIX 3 is a technological dead-end on multiple fronts and the various grants and funding sources have dried up.

The microkernel is mostly unchanged from its previous incarnations. It can be described as a hollowed-out Unix-like kernel design from the 80's, to the point where a fair number of its syscalls are direct equivalents of Unix ones. Basic features taken for granted nowadays, like 64-bit support, SMP, or kernel threads, are missing. Its code is especially old and difficult to work with.

The driver layer works best with a computer from 20 years ago. There's no support for USB (on x86), NVMe, display controllers, UEFI and so on. I don't think that booting on modern hardware would even be possible at this point without some major overhaul.

The service layer is similarly outdated. The most modern filesystem supported is ext2 and it wasn't very stable from what I remember. The native MINIX 3 file-system implementation is solid, but its design is very similar to the System V file system from the early 80's. The most advanced isolation mechanism available is chroot and there's no support for running multiple, isolated userspace instances, which is a shame for a micro-kernel, service-based operating system.

Replacing the outdated MINIX userland with a source port of NetBSD's userland a decade ago was a colossal mistake in hindsight. The NetBSD source tree required a lot of modifications to make this work, and back-porting newer versions of NetBSD's source tree is extremely difficult. Instead, a NetBSD syscall translation layer to achieve binary compatibility would probably have been a far more maintainable solution, and pkgsrc support wouldn't have been a problem either.

Finally, I'm pretty sure no one used it as a daily driver back in the 2010's. While it was reasonably functional through SSH inside a VM, trying to use it on real hardware was an exercise in frustration because of all the sharp edges.

Don't get me wrong, MINIX 3 has extremely cool features, like the ability to transparently update system services on the fly or its extreme resiliency against device driver crashes. The presentation talks given by Andrew Tanenbaum [1] are in my opinion still extremely good to this day, and a fair number of the research papers on MINIX 3 are worth the read. I'm not trying to discourage anyone from trying it out or stepping up as a maintainer, but there's a reason why it became unmaintained.

[1] https://www.youtube.com/watch?v=MG29rUtvNXg (there are multiple versions of it done over the years)

Source: I'm a former contributor.

So, what's your suggestion for... umm... practical/real world OS dev class? xv6 code is really compact, but toyish?

xv6 is pretty much the best osdev beginner's resource out there. Once you've mastered it, "Operating Systems: Design and Implementation" is still a worthwhile read even though it's getting a bit old.

After that, it depends on what you find interesting. Personally, I think Unix is a fossilized relic whose 50-year-old design has been pushed far beyond its use-by date. I'm a fan of Fuchsia OS and especially its Zircon kernel; its design documentation is well worth a read just to shake off the idea that POSIX makes for a great syscall layer in the 21st century.

In the end, it doesn't matter too much; you'll always learn something no matter what operating system you look into. Try things out and discover what you like instead of relying on what others find interesting. I just wish that people would stop thinking that the traditional, monolithic Unix system design is some sort of holy scripture that shall not be questioned.

Hmm.. what about Haiku (https://www.haiku-os.org)?

It's inspired by BeOS. I'm not old enough to have used BeOS in its day, but at least from what I've read, it's not "yet another Unix".

I'm guessing the stickiness of Unix is that a lot of useful software, especially FOSS, just compiles and works on Unix-y operating systems, assuming the right syscalls are implemented. There is far less reinventing of the wheel by going that route.

Implementing POSIX as a compatibility layer on systems not designed to be Unix-like is possible. You can even skip a fair number of the most problematic parts and still run useful programs.

OpenVMS doesn't have fork(), just vfork() and it can run LLVM and modified versions of bash and GCC. Windows NT used to have a POSIX subsystem. Heck, IBM even managed to certify z/OS to be POSIX compliant.

Just because you happen to need some level of POSIX compatibility for interoperability doesn't mean that POSIX has to be the basis for your syscall layer.

That is why I'd rather look into other stacks, especially how mobile and cloud OSes are being designed.

UNIX had a couple of interesting design options and that is about it; time to move on.

Did you look into Genode?

The best I've found by far: Introduction to Operating System Abstractions using Plan 9 from Bell Labs


The key problem is:

> practical/real world

Definitions of that vary wildly.

Oberon is a personal favourite of mine: tiny, a few hundred KLOC for the entire OS, compiler, IDE, and UI, in a native-code-compiled, type-safe language.

Ah, another of Niklaus Wirth's works. Any guide on how to build and run it?

It depends on which platform you wish to work on.

There is quite comprehensive documentation on wikibooks:


You can run it in a browser:


There is an app in the Mac App Store:


You can run it on Linux:


It also runs atop Windows and macOS. Lots of choices, really. :-)

You can use it live in the browser from here: http://schierlm.github.io/OberonEmulator/

Book and code are available from Wirth's site: https://people.inf.ethz.ch/wirth/ProjectOberon/index.html

But the mirror is a little nicer looking (although older? It says, "The second (2013) edition of the book and source code are published on Prof. Wirth's website. We provide links to the original material here, and local zipped copies, with kind permission from the authors."): http://www.projectoberon.com/

Several emulators for the Oberon RISC CPU are available:

In C with SDL graphics: https://github.com/pdewacht/oberon-risc-emu

In JS and Java, from the live version above: http://schierlm.github.io/OberonEmulator/

In Go: https://github.com/fzipp/oberon

People have used Wirth's Verilog with FPGAs to make actual Oberon workstations.

(My own somewhat embarrassing Python emu: https://git.sr.ht/~sforman/PythonOberon )

Linux? (Obviously depends on what you want to teach but in terms of practicality it may be daunting to comprehend the whole kernel – of course it is – but equally the ground is very well trodden)

Err... well I get it. What I mean is something less complex than Linux, and still usable enough for real world tasks?

I can understand the xv6 codebase in a few days (obviously couldn't say the same thing about Linux), and it's very easy to rebuild, but for me it only stays inside QEMU.

Or perhaps take a more realistic approach: a production-grade kernel is complex, thus you are not expected to understand the whole thing in a short time?

I suppose that reveals something: cheap virtualization completely changes the OS development experience.

Even beyond teaching toys, there are huge projects now (SerenityOS for example) where "runs on bare metal" is barely a consideration, or not one at all. I wonder if this causes stagnation in the niche of device driver development: if all you have to worry about is a handful of "friendly" VM devices, who is learning to deal with the hassles of persnickety real hardware?

No, it doesn't, because the cutting edge of device driver development is on high-performance devices in the hypervisor or I/O server layer: passthrough and virtualized functions, etc. So it's all the same challenges as before, with the additional complexity that you also have to virtualize them, either by passing the function through to the guest or by introducing a virtual server layer above them that has to perform well.

I don't know if I've ever seen much innovative device driver development work going on in toy or learning operating systems before virtio-type interfaces. Before that, most of them targeted a couple of the simplest hardware devices available anyway.

There has long been (and still is) a bunch of different emulated hardware that something like QEMU offers (and it's not too hard to add your own emulated hardware) that you can trivially add to your VM if you want to develop device drivers. A number of real hardware companies nowadays actually contribute their hardware models to QEMU and even develop against them internally for pre-release development, CI, and regression testing.

So I would argue running in VMs with capable device model support like QEMU has actually made learning and developing hardware device drivers more accessible rather than less.

I always figured that a major source of complexity in drivers was papering over imperfections in the underlying hardware.

I imagine things like video and chipset drivers are full of "if we're on stepping A0 or A1 hardware, we have to do this magic wiggle dance to get feature XYZ to actually work" hacks, most of which are only documented within the manufacturer's own proprietary-driver team.

There's probably also a lot of compromise for external confounding variables-- "Our part works to spec 98% of the time, but if you install it in specific systems, they're at a specific corner of the compatibility envelope, and you have to do ABC to get it actually stable."

I'd suspect as a result, something like "Driver for VirtualBox SVGA" is going to be a lot simpler than a comparable level of functionality targeting real hardware. OTOH, we do have stuff like 86box which seems to be trying to be a closer simulation of real hardware.

I noticed the same. During lockdown I felt bored and thought about porting my fav software to MINIX, only to find out that MINIX is really dead nowadays.

I think TempleOS is a better unmaintained operating system.

The original is forever frozen in time as a memorial to Terry’s genius, but there is an (at least somewhat) actively maintained fork, which has added features of which Terry himself would not have approved - https://github.com/Zeal-Operating-System/ZealOS is the main one, which was last updated only a couple of months ago; by contrast, Minix’s last commit appears to have been in 2018 - https://github.com/Stichting-MINIX-Research-Foundation/minix

But maybe that’s the answer for MINIX too - maybe one of the people who have authored all those unreviewed PRs might start a community-based fork. If all the activity moves to the fork, there is a chance the originators might officially bless it.

I think some things were added to Jesus’s OS that He wouldn’t have approved of.

The one thing MINIX needs is to disappear.

I don't know a lot about MINIX, but why is that the case?

Nice. It is a pity that this is necessary, but a quick look at the Minix online presence makes it seem as though it has been largely abandoned. Interesting, since Minix is one of those projects that is really everywhere. Still an important component in lots of firmware.

Without MINIX and the GPL, the world might be vastly different:


We'd have BSD instead of Linux, and all the people who poured their effort into Linux would have poured it into BSD. It would be different, but maybe not vastly different.

Yeah and most notably Intel management engine.

If it were GPL they'd have to give back their modifications to the community.

If it were GPL they wouldn’t have used it in the first place for that. And instead they would have based it off of something else, or written it from scratch.

I think xv6 is a much more palatable intro to operating systems than minux. It's a tiny codebase, with an annotated guide, and a book explaining its core ideas and implementation in a very direct way.

I've been going through the xv6 book and source code over the past week, and it has been a delight. It's ready to compile, the source code is very easy to read and very small with the biggest files weighing in at 500 lines. The book is written really well. Just delighted to be learning this material in such a nice way.

I agree, and Minix has been (advertised as) production-quality for a while now.

Impossible, it's not even written in Rust! /s

I promise that this is a good-faith question: is MINIX still the best OS teaching tool? Personally, I enjoyed the lectures of https://rust-class.org/, though the Rust code itself may be a little outdated (it's from 2014).

Yes. The MINIX book is still the most accessible text, and MINIX is small enough to be comprehended by a student in a semester.

Any idea how it compares to xv6?

xv6 is a toy, compared.

Has it ever been a consensus best? Today xv6 is likely the most popular choice, and before xv6 there was Nachos and its more recent offshoots.

'Best' is subjective, of course. If you think a pretty well written, accessible textbook coupled with a working, non-trivial but understandable codebase to examine is a good way to learn things, then "Operating Systems: Design and Implementation" and Minix is a pretty good choice. YMMV.

The foundational aspects of actually-used operating systems have been basically the same for 25 years or so. Lots of improvements, sure, but the basic ideas remain the same. MINIX 3 is even kind of advanced compared to, say, Linux, since it uses a microkernel architecture. So it's as fine a place to start as any.

> basically the same for 25 years or so

Minix is a lot older than 25 years. Your point stands, but you need to increase the age. Minix and Tanenbaum's textbook were part of my operating systems class in undergrad CS. I think I took that class in 1989 or 1990.

Minix always used a microkernel even in early versions. It was the source of a big flamewar between Linus and Tanenbaum back in the day :)

Microkernels are not really modern as such. Just different.

I think that they answer that here^1. But the short answer is the author seems to believe so.

1. https://github.com/o-oconnell/minixfromscratch#why-minix

I don't get it. Tanenbaum would normally keep it in a buildable state -- he's basically the Linus of Minix, no?

As an aside I enjoy the irony that we are trying to build Minix on Linux systems many years after the debate...

> As an aside I enjoy the irony that we are trying to build Minix on Linux systems many years after the debate...

To add another layer - as I understand it, the processors we are mostly running those Linux systems on are secretly running a minix based system for their management functions [1].

[1] https://www.networkworld.com/article/3236064/minix-the-most-...

This is a good link, appropriate for the conversation, thank you for providing it!

Just to rant about this article though, I really dislike this statement by the author:

> Why on this green Earth is there a web server in a hidden part of my CPU? WHY?

> The only reason I can think of is if the makers of the CPU wanted a way to serve up content via the internet without you knowing about it.

That's absolutely not the use case here. Intel is not on some secret conspiracy to collude with governments to serve content from your computer.

The management system is legitimately useful functionality for the support teams of large corporations, public entities (schools, local government), etc., enabling management of their fleet of user devices. It's a very unforgiving and thankless task to support a worldwide workforce, for example, or a large base of non-computer-savvy users.

Now, does the Intel Management Engine pose a security risk? Absolutely! We tech folk should absolutely know about this capability and be able to decide whether this functionality should be enabled for our fleet. Having it default "on" without a way to sensibly turn it off broadly or update it when necessary is a big issue. But this is nothing more than Intel giving the majority of its user base functionality that is desirable, not some deep-laden conspiracy.

I don't like the Intel Management Engine running on my personal device. But I sure appreciate it for its intended use case and audience.

That rant is merely your opinion and somewhat optimistic. Reality is that you get an extra CPU running completely on its own, without available code or a clear path to disable such a "feature".

If it can serve data from your computer, it _will_ serve data from your computer.

Not everyone across the globe is comfortable with a lack of privacy on their own devices.

Agreed on your points. I particularly like, "If it can serve data, it will serve data." Totally get that point of view.

I fully wish that Intel would have gone with the "opt in" approach. Either opt-in with choices in architecture, or opt-in with the option disabled by default. It's the "always there, you don't see it, you can't disable it" thing that's the problem. Intel messed up here, for sure.

But the idea of the Intel Management Engine is sound and extremely useful. And it's the visceral (possibly unsubstantiated) attack against it in discourse that's the thing I'm addressing.

I mean, truthfully, maybe the only way to change things anymore is to be overly loud and exaggerate about issues, basically black & white arguments without any middle ground. Maybe social media has brought us to this point, where we can't see issues as both positive and negative. It kind of starts to sound like our politics, in this way. So maybe the only thing that could possibly change how the IME is configured or deployed is to be a huge stinker about it and make large noise. Sad, but that's probably the case.

> I fully wish that Intel would have gone with the "opt in" approach.

Even a sensible opt-out approach would be better than what we have today; we're simply stuck with it whether we like it or not.

I think that's what makes most people disregard the fleet management aspect and leap towards the conspiracy angle.

Interesting. But it seems that MINIX does not run on the (same) processor as we're running Linux on (which would have been pretty surprising); there is a separate low-power CPU on the motherboard that runs the management software under MINIX.

Reference: https://en.wikipedia.org/wiki/Intel_Management_Engine

Yes, it looks like I was confusing it with the AMD equivalent, which does have an ARM core literally in the CPU silicon to run the management software (but looks like it doesn't run minix).

Tanenbaum retired, and MINIX was more of an educational project. Not really for production. It was a great way for students to learn about Unix when Unix only ran on expensive computers. Including the design of it (be aware that with the first Unixes, sysadmins would receive the code and regularly made changes!)

Tanenbaum is 78 years old, and most likely has better things to do with his life than compiling Minix.

At 78 I hope I'm still able and enjoying compiling kernels for fun in fact. Sounds like a great way to pass a retirement. :)

Linux started as a terminal emulator for Minix.

oh man, this was part of my intro to operating systems programming (not that I have a clue what I'm doing) in school 13 years ago. a month or two of "os" work in a simulated kernel on the JVM, and then in the last 3 or 4 weeks the professor hit me with "here's minix, play with some of these changes for homework", and then the final exam was "make these changes to the filesystem module".

Worst I ever did in a CS class. I did not deserve whatever grade he gave me, but I was the only person that stuck it out and didn't drop the class.

Sometimes the disconnect between what is taught and what is expected by lecturers in assignments is very big, don't feel too bad!

It could be because they are coming at it from the "experts bubble" (I'm sure someone will know the proper name) where they can't conceive of the types of problems someone coming at it with no experience will encounter.

Going from a tiny "hello world" OS you've made by following instructions and notes to being dropped into a totally different codebase that uses probably entirely different patterns and conventions seems a little unfair. It's like taking someone who has played around with some graphics API for drawing pixels and expecting them to make changes to the DirectX shader compiler.

I had a similar experience - a totally new professor gave us about 5 lectures on parsers, compilers, ASTs, BNF etc, and then just expected us to design a whole language that would be compiled to C for the assignment, using classic tools like yacc etc. For reference, the average programming ability of the students in the class was "What's a switch statement?".

Absolutely everybody failed to achieve anything, and the only reason I even passed it was because I took an example language the professor provided, reviewed it, and stated how I would have liked the language to work instead.

It's all a bit absurd.

> It could be because they are coming at it from the "experts bubble"

I believe you mean the curse of knowledge [0].

[0]: https://en.wikipedia.org/wiki/Curse_of_knowledge

I've had a professor tell the class, in a relatively rare and small course:

> I've been doing this so long that I can't remember which things are easy and hard to pick up for the first time, so you all will have to tell me— if you don't ask questions after each reading, I won't know what to reinforce and you will be lost.

I think that's how it goes for classes that are not frequently taught or haven't been taught before, in almost any subject area.

Ironically, the same professor taught a compilers class and did a FANTASTIC job with it.

We did Pascal (probably a subset of the language, can't remember) and then C--. We did an interpreter + runtime for one, and compilation to some sort of bytecode for the other. Don't remember which was which.

The same professor?


I was curious about the statement in the README:

>"Until now, I have not been able to find a MINIX 3 project that allows you to compile the code that is referenced in the book Operating Systems: Design and Implementation (3e) (v3.1.0)."

For such a well-known and longstanding project I was surprised to read this. Are the sources available at https://git.minix3.org just too old or is this more to do with the book?

Well, Tanenbaum retired in 2014 [1], so the book probably won't get updated anymore.

[1] http://www.few.vu.nl/~ast/afscheid/

I tried with a group of friends about 2 years ago, and this rings true with our experience. I think we did eventually get a build working, but there were many build issues, and they were different on different Linux distros. Much documentation seemed out of date, especially with respect to the migration to the BSD toolchain (IIRC).

From an 8 year old HN comment:

> if someone thinks they're going to be able to pick up the latest edition of Operating Systems: Design and Implementation in search of documentation for either MINIX-the-project or MINIX-the-software, they will be sorely disappointed. The fact that the book is now about a decade out of date is one reason for the latter. There are a number of reasons for the former.


Interesting. I wonder if every university that uses the textbook just maintains its own repo, then?

My class used the textbook, but we never built Minix from scratch or even used it in real life. Still a lot to be learned that way.

I had a similar experience a few years ago and ended up playing with xv6. Not the same, of course.

If we put this on Docker, that would avoid a similar situation in the future, right?

It doesn't fix the issue of code comments and other documentation being out of date (read: wrong).

In case anyone else is interested, I ran across a similar project called Fiwix yesterday, and it compiles and boots:


> Fiwix is an operating system kernel written from scratch, based on the UNIX architecture and fully focused on being POSIX compatible. It is designed and developed mainly as a hobby OS and, since it serves also for educational purposes, the kernel code is kept as simple as possible for the benefit of students and OS enthusiasts. It is small in size (less than 50K lines of code), runs on the i386 hardware platform and is compatible with a good base of existing GNU applications.

Amazing to see that every file and directory name is a short single word, with very few exceptions.

I remember having to rewrite the file locking primitives to allow locking only some sections of each file, as a class assignment in college.

It was fun, and I learned a lot about OSs in that class.

Do we have a virtual memory API at the network level? The goal is that I can use Redis as my real memory.

Is the book worth reading?

Yes. It is a good introduction to some of the basics of OS design and implementation. And imho it is nicely written.


How can it be from scratch if you start with MINIX?

> MINIX from Scratch

Can you install gcc and X?

X: yes, minix has X11R7 binaries: https://wiki.minix3.org/doku.php?id=usersguide:introductiont...

I don't think I've ever run an X server on it, but I can verify you can at least get it to pop up xterms on a remote machine.

gcc: I don't think any recent gccs work, but the latest minix (3.4.0rc6) does have clang-3.6 as /usr/bin/cc.

How about creating an OS from scratch without bias from existing designs?

It’s difficult to eliminate bias without also eliminating the lessons learned by history.

Apparently not much was learned in 50 years from doing systems programming in languages like C in regards to safety.

I was a Minix fan before Linux, and Linus studied Minix to write Linux. I thought a microkernel was better than a monolithic kernel for security, reliability, portability, and simplicity reasons. But then again, Betamax was the better videotape format and VHS won out, so the best tech doesn't always win. I remember the heated exchanges between Linus and Andy back in the day. Funny someone commented above about Minix being antiquated from the 80s, when Andy had posted that Linux was technology from the 70s due to its choosing a monolithic kernel and being tied to x86 processors. I had MachTen, a BSD, Unix-like microkernel, running on my PowerMac in the early 90s.

Ironically, MINIX runs beneath Linux on Intel CPUs, and the Linux kernel juggling containers on top of a hypervisor, with DPKG, FUSE,... looks like a fat microkernel.

For cloud workloads, with Kubernetes on top of hypervisors, the architecture is more akin to microservices than anything else.

Likewise on Android, since Project Treble and starting with Android 8, all newer drivers are userspace processes with Android IPC.

The Intel ME chip runs Minix, so I guess Minix runs a lot more than it would seem![1]

[1] https://www.networkworld.com/article/3236064/minix-the-most-...

Probably because if we're teaching people 'Operating Systems', most people think 'operating systems and OS concepts we might actually use in our daily lives'. "Creating an OS from scratch without bias from existing designs" is for hobbies and PhDs, and there are a number of them out there.


... is about the closest I know.

What kind of bias?

Look at all the things that Plan9 and Fuchsia do differently.

There are design choices you probably wouldn’t make this way today for a new green-field OS, like the process forking mechanism, the POSIX threading and file IO API, the whole TTY architecture, and the file permissions model. These are all aspects that are different on Windows, for example. Similarly, starting with a stripped-down Windows, you wouldn’t end up with a Unix-like design.
