
HelenOS: portable microkernel-based multiserver operating system - ingve
http://www.helenos.org/
======
shp0ngle
Ah, HelenOS. The popular MSc thesis factory at my old school. Good times.

edit: just look at all the theses that were made out of it. :)

[http://webcache.googleusercontent.com/search?q=cache:geWe63M...](http://webcache.googleusercontent.com/search?q=cache:geWe63MvkSAJ:www.helenos.org/wiki/Documentation&num=1&strip=1&vwsrc=0)

~~~
msla
Interesting blog post related to this:

[https://utcc.utoronto.ca/~cks/space/blog/tech/AcademicMicrok...](https://utcc.utoronto.ca/~cks/space/blog/tech/AcademicMicrokernels)

> microkernels are small in both scope and (hopefully) code size. Small
> projects are achievable projects in academic research.

> a microkernel is naturally limited in functionality, so you have an
> excellent reason to not implement a lot of things. This drastically reduces
> the scope of your work and gives you a great excuse for not coding a lot of
> boring, routine infrastructure that's necessary for a usable system.

> (Note that in academic research you basically cannot afford to implement
> things that will not result in papers, because papers are your primary and
> in fact only important output.)

> microkernels are (strongly) modular, which makes it easier to farm out work
> among a bunch of students, postdocs, and research assistants. A strong
> division of labour is important not just for the obvious reason but because
> it increases the number of papers that all of the people working on the
> project can collectively generate. If everyone is involved in everything you
> probably get one monster paper with a monster author list, but if people are
> working mostly independently you can get a whole bunch of papers, each with
> a small number of authors.

> (And if someone wants to get a MSc or PhD thesis out of their work, they
> have to be working on their own.)

> microkernels have a minimal set of primitives, which makes it easy to write
> papers about them. You don't have to try to describe a large, complex OS and
> what makes it interesting; you can simply write a paper about the
> microkernel primitives you chose (and perhaps some clever way that you
> implemented them and made them acceptably fast).

> there are lots of choices for viable primitives and thus lots of different
> microkernel designs, which means that it's easy to do new work in the field;
> all you have to do is pick a set of primitives that no one has used yet.

> there is a strong liking for elegant minimalism in basically all parts of
> academic computer science. The minimalism of microkernels plays right into
> this overall attitude.

> the whole 'normal kernel on microkernel' idea of porting an existing OS
> kernel to live on top of your microkernel gives you at least the hope of
> creating a usable environment on your microkernel with a minimum amount of
> work (ie, without implementing all of a POSIX+ layer and TCP/IP networking
> and so on). Plus some grad student can probably get a paper out of it, which
> is a double win.

> the drawbacks of microkernels are mostly at the pragmatic levels of actual
> performance, which academics mostly don't care about and don't get called
> on. You can excuse relative performance figures against something like Linux
> or FreeBSD by saying that your microkernel has not had the kind of
> optimization that those OSes have had, or that the performance loss is worth
> it for some benefit that microkernels are supposed to give you.

> in academic computer science you do not have to actually prove any claims
> that your microkernel is more reliable, easier to write software for, or the
> like than traditional kernels. In fact it's not clear how you would prove
> such claims; productivity and reliability claims are notoriously hard to
> validate because there are so many variables involved.

~~~
shp0ngle
I mean, I didn't mean it in such a bad way. It might be useful to actually try
to implement something on an incomplete system; and many useful things started
as research projects....

But well, at MFF UK, everyone kind of knows HelenOS is mostly a thesis-making
toy OS. It has existed for 15 years now and it's still just that - a toy OS

(I don't know much about its internals, I was in a different section of the
faculty)

------
iomotoko
Regarding microkernels:

[0] Minix 3: still around, great resource for learning, especially since there
is a great book about it by Tanenbaum himself. Btw: it utilizes NetBSD's
pkgsrc infrastructure, so quite a lot of applications should run

[1] RedoxOS: effort to create a unix-like microkernel OS using the Rust
programming language, pretty interesting. Hope it will gain some traction, the
idea itself is neat

[2] QNX: RIM/Blackberry's proprietary RTOS used in embedded systems (for
example cars); BlackBerry OS is also based on QNX

[3] GNU/Hurd: not sure how much software runs on it, the "most usable" form is
probably trying to run the Debian GNU/Hurd distribution

[0] [http://www.minix3.org/](http://www.minix3.org/)
[https://www.amazon.com/Operating-Systems-Implementation-Pape...](https://www.amazon.com/Operating-Systems-Implementation-Paperback-January/dp/B010TT8WIS)

[1] [https://www.redox-os.org/](https://www.redox-os.org/)

[2] [https://en.wikipedia.org/wiki/QNX](https://en.wikipedia.org/wiki/QNX)

[3]
[https://en.wikipedia.org/wiki/Debian_GNU/Hurd](https://en.wikipedia.org/wiki/Debian_GNU/Hurd)

(kind of keep track of various operating systems as a hobby, always interested
what's out there for some reason)

~~~
snvzz
Look into [http://www.microkernel.info/](http://www.microkernel.info/)

Note that one of these doesn't belong (Hurd): unlike everything else, it is
first-generation and effectively dead.

~~~
bogomipz
Can you say why it is effectively dead? I see a recent-ish release:

[http://www.gnu.org/software/hurd/news/2016-12-18-releases.ht...](http://www.gnu.org/software/hurd/news/2016-12-18-releases.html)

~~~
snvzz
It has about 0 developers at any given time, with only occasional work being
done. There's been no real (fundamental) progress on it for over a decade. The
issues in the hurd critique paper haven't been addressed. The L4 port failed.

------
bantunes
Good to see multiserver microkernels back on HN. Have significant advances
been made to make message passing better in modern CPUs?

~~~
aidenn0
There are, and always will be, performance disadvantages to a microkernel.
However, for many many tasks the difference is negligible.

X11 has IPC, this has performance overhead, but nobody seriously suggests that
all X11 applications run in the same process.

I work regularly with a microkernel that allows threads (including device
driver threads) either in their own separate address space, or in the
supervisor address space. The difference in debugability between "there was
memory corruption in the kernel" and "there was memory corruption in this
specific device driver" is huge.

The throughput overhead of message passing versus not on a data-heavy driver
tends to be ~20%, though just like IPC in userspace multiprocessing systems
you can reduce that somewhat with shared memory, batching and other tricks if
needed.

But seriously when people worry about message-passing overhead, I feel like
it's 1990 again and we are debating whether the context switch overhead of
processes on UNIX is worth it. I'm not sure why the userspace battle was
decided so long ago while the kernel debate still goes on.

[edit]

Also sometimes the natural design for a microkernel yields better performance
than the natural design for a monolithic kernel. When you are pushing the
envelope for performance, sometimes some very counterintuitive things can give
you benefits. I have seen cases where a single copy was faster than zero-copy,
and cases where the safety of no-shared memory allowed easier parallelization
of the work.

~~~
gue5t
+1 on single-copy frequently being faster than zero-copy. See
[https://lwn.net/Articles/580194/](https://lwn.net/Articles/580194/) for a
real-world example from the Linux kernel.

On the topic at large, I think microkernel vs monolithic is not actually the
important distinction in kernel design; robustness could be improved vastly if
the business logic of a kernel were implemented in a memory-safe language
instead of C or C++. Register bashing and system setup won't ever be
effectively captured by a typesystem (knowing what the semantics of hardware
are is the hard problem, and GIGO applies to typesystems as much as anything),
but most vulnerabilities are in code that doesn't directly interact with
hardware (even if they're in a driver for some obscure hardware, it's
generally the syscall-facing side that gets exploited).

I'd like to see more OS development look at non-UNIX designs, rather than
trying a million different ways to implement a UNIX-like interface.

~~~
aidenn0
Also, as a note, high-reliability microkernels disallow direct access to DMA
registers (as, absent an I/O MMU, being able to write to a DMA register is the
same as being able to write to any physical memory), and instead provide an
API for programming them that can be verified on its own. This is often just
dozens of lines of code, so it can be made as correct as the hardware itself.

So the separation of register-bashing vs. the rest of the software already
exists, but it solves the memory safety problem by just moving the unsafe code
out of the kernel.

I agree that memory-safe languages need to come into low-level systems
programming, but garbage-collection is a non-starter for many many reasons,
and the various academic languages (e.g. cyclone) have not made any inroads to
industry that I'm aware of.

Rust is very interesting to me in this field because the simple fact that it
originated outside of academia may make it more palatable to many people; time
will tell.

[edit]

I still think a microkernel is a good idea even with a memory-safe language
just for defense in depth if nothing else (memory-safe runtimes tend to be
complicated, and complicated code tends to have bugs).

~~~
pjmlp
> I agree that memory-safe languages need to come into low-level systems
> programming, but garbage-collection is a non-starter for many many reasons,
> and the various academic languages (e.g. cyclone) have not made any inroads
> to industry that I'm aware of.

Yet it was proven to work with:

\- Mesa/Cedar at Xerox PARC

\- Oberon, Oberon-2, Active Oberon at ETHZ

\- Modula-2+, Modula-3 at DEC/Olivetti

\- Sing# and System C# at MSR

Apparently we can only get it properly done if a big company bullies the
developers to adopt such an approach.

Which is why I see as positive Apple, Google, Microsoft bullying devs to use
mostly Swift, Java/Kotlin/Dart and .NET on their platforms, with C and C++'s
role reduced to a few use cases.

It really annoys me when I need to use the NDK in Google's case, but somehow
it makes sense from a security point of view.

~~~
wolfgke
> Which is why I see as positive Apple, Google, Microsoft bullying devs to use
> mostly Swift, Java/Kotlin/Dart and .NET on their platforms, with C and C++'s
> role reduced to a few use cases.

Microsoft is actively encouraging developers to use C++ for development on
Windows. The phase when Microsoft was trying to push (or as you call it
"bully") developers into ".net for everything" is long gone.

~~~
pjmlp
Try to find C++ talks for Windows development on BUILD 2016 and 2017 archives.

Check how many C++ samples exist on the Windows 10 SDK samples.

See the blog about the new Windows UI Composition engine and which languages
are being used on the demos.

Then let us know where the active encouragement you speak of actually is.

C++ is being driven down the stack as the implementation language for the UWP
COM layer, kernel programming, graphics and audio.

Everything else is .NET Native.

~~~
wolfgke
> Try to find C++ talks for Windows development on BUILD 2016 and 2017
> archives.

See the talks at CppCon 2016:

>
> [https://channel9.msdn.com/Events/CPP/CppCon-2016](https://channel9.msdn.com/Events/CPP/CppCon-2016)

~~~
pjmlp
I watched those talks live.

Since October 2016 there has been radio silence on the actual state of C++/WinRT,
including the status of feature parity with C++/CX for XAML, Blend and
creation of UWP components.

VS 2017 already had two releases since those presentations.

The only Microsoft talks at CppCon 2017 are about VS Code support for C++ and
ANSI C++ compliance, nothing relevant to actual Windows 10 application
development.

Any long-time Windows developer recognises the signs that UWP is turning into
what Longhorn aspired to be; it just remains to be seen how willing Microsoft
is to keep pushing it.

[0] [https://cppcon2017.sched.com/](https://cppcon2017.sched.com/)

------
ww520
Is it possible to run it in VirtualBox to try it out?

User space device drivers should significantly shorten the development time.

~~~
chrisper
I just tried it out. It works in Virtualbox. Just use the amd64 ISO image.

------
exikyut
Slightly OT, general microkernel question: Is it possible to viably do
zero-copy message-passing?

As in, the kernel lets you know you have a new message, but the message buffer
is the same one the calling/sending process built.

The main thing I wonder about is security. Maybe the memory manager could
allow only the sending process to write to the memory region containing the
message! Or maybe that would be really hard to get right.

Once refcounting declares that only the sender has an active lock on the
message, it could be deleted.

~~~
rurban
Sure, see e.g. Pony-lang. Reference capabilities guarantee all three safeties,
even with shared memory. You just aren't allowed to do much with it, depending
on ownership.

------
vmateixeira
Load issues?

~~~
agumonkey
[http://archive.is/w6Li0](http://archive.is/w6Li0) to the rescue (1 week old
though)

~~~
vmateixeira
Thanks :)

------
throwaway7645
Are python, C, and C++ the only supported langs?

~~~
oldandtired
If C is supported then so are a heap of other languages because they are
implemented via a C compiler.

~~~
throwaway7645
Yes, but is it super straight forward to then implement say Python & Perl? Or
does someone have to do a bunch of work?

~~~
snvzz
>python

Is already there.

>perl

There'd be a porting effort if not using their POSIX compatibility framework;
otherwise, the effort would go into making it build.

------
the-dude
Well it is obvious this is no Beowulf cluster.

~~~
ty_a
Serious question, are Beowulf clusters still a thing?

~~~
zrail
Sure. They're not really called Beowulf clusters anymore, but even at the
beginning there wasn't any particular defining characteristic other than a
cluster of commodity servers running some kind of software that made them work
together. You could consider the clusters at LHC, the render farms at Weta and
Pixar, and (if you squint) the non-web clusters at Google and Facebook to all
be spiritual descendants of the original Beowulf cluster.

[https://en.wikipedia.org/wiki/Beowulf_cluster](https://en.wikipedia.org/wiki/Beowulf_cluster)

~~~
bogomipz
>"... the non-web clusters at Google and Facebook to all be spiritual
descendants of the original Beowulf cluster."

I was intrigued by this comment. Could you elaborate on this? Are there
specific concepts from Beowulf that FB's Tupperware and Google's Borg
borrowed?

~~~
zrail
The entirety of the Beowulf concept is a cluster of commodity machines
networked together that can work together to perform some kind of task. It's
pretty general. The original Beowulf-style clusters ran things like MOSIX to
present a single system image to applications, but as far as I can tell that
was never a defining characteristic.

~~~
bogomipz
Thanks, yes that's pretty broad, makes sense. Cheers.

