
Redox – A Unix-Like Operating System Written in Rust - tilt
http://www.redox-os.org/
======
Animats
Very nice. It was time to do this.

There are three near-term products to develop on this:

- A small router (home/small office, not data center)

- A DNS server

- A BGP server

Those are standalone boxes that do specific jobs; they're security-critical,
and the existing implementations have all had security problems. We need
tougher systems in those areas.

~~~
nickpsecurity
I was worried a whole UNIX might be too big a project, so I gave him old-school
recommendations earlier. Essentially, I told him to make a split system that
lets you run a tiny kernel on bare metal, run UNIXy apps with a different API,
and let the two communicate through some IPC mechanism. That way, as with
security and MILS kernels, people can write security-critical components in
isolated partitions with checks at the interfaces. It's already been done with
Ada and embedded Java. Rust is the best language to try it with now.

As far as your recommendations go, I like those. I'll add a web server, Redox-
or UNIX-compatible, that's efficient enough to be deployed in all these web-
enabled embedded devices. The dynamic part can just be Rust plugins or
something. However, just a robust Ethernet stack, networking stack, time API,
and simple filesystem could be used to implement all of yours, the web server,
and more. So, I encourage people building these OS projects to stay on the
80/20 rule, hitting the features almost all critical things use. Others can
jump in at the application layer from there.

~~~
burfog
In other words, like GNU HURD. That design is very hard to make both correct
and fast.

Correctness suffers because the UNIX API has all sorts of interactions between
different parts. This includes atomicity. It's a bear to get this right with
IPC.

Performance suffers because you are unable to effectively share data
structures. This too relates to the interactions between different parts of
the UNIX API.

Look, there is a reason GNU HURD is slow and suffers from incompatibility.
It's a cute thought experiment, dominating academia around 1990, but it's not
actually fast or maintainable. Experience has proven this.

~~~
nickpsecurity
I was talking about systems like KeyKOS and GEMSOS, fielded in production
before Hurd was a thing. Then there are systems like QNX, OKL4, BeOS, Minix 3,
and others that largely removed the performance issues, often with self-healing
and legacy app support. Actually, BeOS, and QNX in the Blackberry Playbook,
outperformed monolithic competitors. Together, these long ago proved our
approach builds reliable, fast-enough systems with more security.

GNU Hurd is some crap along the lines of Mach that tried to mix too many models
while not leveraging lessons learned by others, as far as I can tell. It's
something I hear about every year without any real field use or evaluation
results. It's not representative of anything in microkernels except a bad
approach.

GenodeOS is a better example, where they apply many lessons from the old school
with modern components and virtualization. It's already proven for embedded,
with desktops in the alpha stage.

~~~
burfog
No, those do not perform well. They are just less horrible than GNU HURD. When
you apply similar optimization and implement similar functionality, monolithic
kernels always win. It cannot be otherwise; think about it.

Self-healing is generally a security problem. It gives the attacker a second
chance. It's also generally a failure. You might think you can restart, but
there are huge problems: Instead of a crash, you may get a memory leak or
hang. Hardware may be in a strange state, needing a power cycle to restart.
Other things start failing once one driver is down. Most systems are unable to
keep DMA from scribbling all over everything in RAM, and probably all are
unable to keep it from scribbling all over a filesystem.

Tanenbaum is biased. It's time to move on.

~~~
nickpsecurity
"No, those do not perform well. They are just less horrible than GNU HURD.
When you apply similar optimization and implement similar functionality,
monolithic kernels always win. It cannot be otherwise; think about it."

I do. It doesn't have to be better. It simply has to perform well enough that
users accept it. Older systems did that even on slow hardware. BeOS was a great
example: on 90's hardware it ran several movies, graphic animations, a song,
and productivity apps all simultaneously with no slowdown. The Blackberry
Playbook outperformed the iPad in responsiveness in tests I saw, with one demo
running a 3D game and other intensive apps simultaneously. It's at the point
where Linus et al.'s argument that the performance hit is too great is
ridiculous. Only users maxing out performance, with little care for
reliability, will need a monolithic kernel on COTS hardware.

"Self-healing is generally a security problem. It gives the attacker a second
chance. It's also generally a failure."

What are you talking about? There's a bit of extra attack surface due to more
code and interactions. The first one, though, was implemented in the KeyKOS
kernel, whose total size was around 20 Kloc. MINIX 3's Reincarnation Server is
straightforward, too, given how the components it restarts are designed. It's
actually easier to get this right than the reliability and security of
monolithic systems, since it's simpler and smaller than they are. I mean,
starting with the UNIX-Haters Handbook and such, it took monolithic UNIX (and
Windows too) decades to get where they are in reliability and security. The
stuff I push got most of that done in the first few years with a handful of
people, plus acceptable performance. So are you arguing against microkernels
getting it done, or in favor of throwing 8+ digits' worth of labor at monoliths
to achieve similar results? Neither looks good in the face of the evidence.

"Tanenbaum is biased. It's time to move on"

Tanenbaum's is the most immature system on my list. I could drop everything
he's ever said and done and still have the others as exemplars for mainframe,
embedded, and desktop use that had acceptable or great performance with better
security and/or reliability. You must have a beef with Tanenbaum or something.
I respect his work and like the one presentation I watched, but I don't need it
to back my claims.

"Hardware may be in a strange state, needing a power cycle to restart. Other
things start failing once one driver is down. Most systems are unable to keep
DMA from scribbling all over everything in RAM, and probably all are unable to
keep it from scribbling all over a filesystem."

That's all interesting, except these kinds of systems, especially proprietary
ones, have been in the field for years in places where failure and
unpredictability had to be minimized. They worked as advertised.
Security-focused ones also passed pentests and analysis by people who knew what
they were doing. These are the areas where monoliths, especially UNIXen and
Windows, often failed or took crazy amounts of labor. Even the immature MINIX 3
is more reliable than you describe, staying up through all kinds of failures at
the component level. Your DMA example shows you're really grasping at straws to
fight microkernels, with an example that (a) represents a tiny set of failures
in complex HW/SW systems and (b) still applies to monoliths, with the exact
same solutions available for both styles.

Btw, the first IOMMU I found was in a system called SCOMP: a microkernel-like
system that was the first to be certified to high security, after (IIRC) 5
years of analysis and pentesting. Name one monolithic OS that pulled anything
like that off. Don't worry, I'll wait.

~~~
bogomipz
Could you elaborate on:

"Most systems are unable to keep DMA from scribbling all over everything in
RAM, and probably all are unable to keep it from scribbling all over a
filesystem."

In what context do you mean? When would this occur?

Thanks

~~~
nickpsecurity
That was burfog's comment I was quoting. It refers to the fact that direct
memory access by some devices can bypass any OS or software protections. It
breaks the whole security model, since memory can change arbitrarily. So, the
risk of attacks or leaks should be mitigated there.

A few methods follow:

1. Use non-DMA links.

2. Use trusted hardware/firmware that mediates things properly.

3. Use an IOMMU to enforce access controls on DMA.

4. Use a combination of full safety in the system and a careful API for access
to DMA features.

I used 1 and 2. A few use 4. Number 3 is the most common, with a basic version
going mainstream. It's not enough, though, as complex firmware and OSes still
provide attack opportunities.

Note: There are also interrupt floods and other esoteric issues to counter. So
it's a start rather than a full solution. There are EMSEC issues too, with
malicious peripherals.

------
qz_
I'm pretty impressed that 37 people are capable of writing an entire OS with a
GUI in under a year. Looks really cool.

~~~
teamhappy
Operating systems aren't that big, and if you know your stuff, I'm sure it's
not too hard to pull off. Here's an example I borrowed from StackExchange:

According to cloc run against 3.13, Linux is about 12 million lines of code. 7
million LOC in drivers/, 2 million LOC in arch/, and only 139 thousand LOC in
kernel/.
([http://unix.stackexchange.com/a/223753](http://unix.stackexchange.com/a/223753))

Edit: Would be nice to have the numbers for the latest Minix release for
comparison. Does anybody know how big their core team is? (The kernel is
something like 15k LOC IIRC.)

~~~
iagooar
Warning: I have no idea about systems programming.

I understand that the amount of driver code comes from the variety of devices.
But it still looks completely unbalanced compared to the kernel code itself. So
I have some questions here:

1) Shouldn't there be common interfaces/abstractions for most of the devices?

2) If they exist, could they be improved somehow?

3) A bit unrelated, but how fun/interesting is it to develop driver code?

~~~
bluecmd
There are. USB, SATA, SCSI, and such share a lot of code between the drivers,
but drivers are huge.

~~~
k__
I would guess that most of the size comes from history.

Like, once it seemed a good idea to have it like that, and later people found
out there were a bunch of things missing. Now many of them have to be
implemented on the driver side over and over again.

~~~
mrob
Have you read the USB specs recently? They're publicly available here:

[http://www.usb.org/developers/docs/](http://www.usb.org/developers/docs/)

Just making a standards compliant implementation takes a huge quantity of
code. And with a standard this complex people inevitably get it wrong, so
you'll need workarounds for all the broken devices too.

~~~
tracker1
Not just with hardware interfaces, but even in pure software... I wrote an LMS
whose core was used by a Fortune 100 company and a few airlines for nearly a
decade... I literally left out about a third of the SCORM spec, and implemented
one piece badly... it wasn't until about 8 years in that the missing piece was
even an issue, and the part I got wrong never became one.

In the end, you understand as much as you can, implement what you have to, and
do your best to get through it... and even then, someone will mess things up
on some end or another... to this day, I'm surprised that SCORM was
synchronous... hell, it feels like it's half the reason XHR has a sync option.

I feel the same way when looking at terminal emulators... sigh, so many things
to implement to get something useful, even if you're only looking to get a
small subset working.

------
kpozin
I've been looking through some of the kernel code [1] out of curiosity, and
I'm very surprised to see that most components have almost no internal
documentation — minimal file and class comments, and even fewer inline
comments. Is this typical of code for something as complex and central as an
OS kernel? Are the developers planning to go back later and add documentation,
or is the expectation that anyone who might need to work with this code will
find its structure and details intuitive?

[1] [https://github.com/redox-
os/redox/tree/master/kernel/](https://github.com/redox-
os/redox/tree/master/kernel/)

~~~
jaltekruse
I only took a brief look over the code to try to find what documentation is
actually there, but I managed to find this

[https://github.com/redox-
os/redox/blob/master/kernel/fs/url....](https://github.com/redox-
os/redox/blob/master/kernel/fs/url.rs#L107)

On your question of how typical a lack of system-internal documentation is in
systems projects: in my experience it's unfortunately more common than it
should be. There are a bunch of usage and design patterns that cannot be
expressed in code, regardless of the language you are using. I know very little
about Rust, but I work mostly in Java, which is safer than many other languages
(less flexible than interpreted or dynamic languages, less dangerous than
C/C++). Even Java, whose interfaces and strict typing can save you some of the
more mundane unit tests and input-validation code, has a bunch of design
patterns layered on top of the core language. Projects would serve themselves
well to document their code thoroughly, so that new contributors can look at
any part of it and reasonably modify it to fix a bug or add a feature.

~~~
ivanceras
In Java, the JVM is the OS. Every object in Java is allocated on the heap, and
the language doesn't give developers direct control over more optimized ways of
allocating objects. In Rust, you can specify whether an object lives on the
stack or on the heap, and the compiler is smart enough to know where an object
goes out of scope (and thus isn't needed anymore) and to insert code at that
location to drop the object and free its memory. Unlike Java, which waits for
the garbage collector to drop objects.

In a nutshell, Java's engineering effort is directed into the JVM (the JIT) to
provide high performance when applications run. Rust's engineering effort, on
the other hand, is directed at the compiler figuring out where and when an
object goes out of scope, no matter how it's attached to other objects or
passed around multiple functions. It will figure it out.

~~~
Rusky
That's not really how it works. Rust's rules for when an object is freed are
very simple, basically the same as C++'s: they don't depend on references or
anything. The complicated part is that the compiler makes sure no references
to an object exist after this predetermined point in the program.
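A minimal sketch of that "predetermined point" (the `Token` type and drop counter are invented for illustration; this is not Redox code):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many values have been dropped so far.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Token;

impl Drop for Token {
    // The compiler inserts a call to this at the statically known point
    // where the owning value goes out of scope.
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// Returns the drop count observed right after the inner scope ends.
fn demo() -> usize {
    let _boxed = Box::new(Token); // heap-allocated, but also freed deterministically
    {
        let _local = Token;       // plain stack value
    }                             // `_local` is dropped exactly here, no GC involved
    DROPS.load(Ordering::SeqCst)  // `_boxed` is dropped when demo() returns
}

fn main() {
    assert_eq!(demo(), 1);                       // inner value freed at scope end
    assert_eq!(DROPS.load(Ordering::SeqCst), 2); // boxed value freed at function end
    println!("drops happen at fixed points, not at GC time");
}
```

The point of the sketch: both the stack value and the boxed value are freed at points that are fixed at compile time, which is exactly the property the borrow checker then enforces references against.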

------
yincrash
I wonder why they decided to write their own coreutils instead of using
[https://github.com/uutils/coreutils](https://github.com/uutils/coreutils)

~~~
steveklabnik
That project is attempting to re-create GNU-style coreutils; the Redox
developers wanted a slimmer, BSD-style set, from what I understand.

------
progman
Great news! Now it's worth learning Rust just to keep up with this project.

Does Redox run only in QEMU, or also in VirtualBox? I tried it with all devices
(USB, network, etc.) disabled. Installation from the ISO to /dev/sda worked
fine, but after reboot I got a hangup: "IDE Primary Master: Unknown
Filesystem". Do I have to format the hard disk image with ZFS before
installation?

~~~
progman
Thanks for the recommendations!

First I have to install the current Rust version. Are there any MD5/SHA256/GPG
to verify the integrity? Rust's download page doesn't provide anything like
that.

[https://www.rust-lang.org/downloads.html](https://www.rust-
lang.org/downloads.html)

~~~
steveklabnik
[http://static.rust-lang.org/dist/](http://static.rust-lang.org/dist/) has
them. If you use multirust to install, and you have GPG installed, it will use
them to check upon installation.

~~~
progman
Ah, very good -- thanks!

------
amelius
I was reading this:

[http://www.redox-
os.org/book/book/design/urls_schemes_resour...](http://www.redox-
os.org/book/book/design/urls_schemes_resources.html)

and I really got interested in the "everything is a URL" idea. But then I
noticed that the most important parts of this text were missing :/

Perhaps somebody can clarify here.

~~~
Manishearth
There's a bit more info here: [https://github.com/redox-
os/redox/wiki/URL](https://github.com/redox-os/redox/wiki/URL)

~~~
amelius
Very interesting.

The only odd part is that the modules (drivers) themselves are not referenced
by URL, but only by a simple word (in the example "port_io").

Also, I wonder how we could combine drivers. For example, (theoretically)
instead of using "https" as driver, we could compose it as "HTTP over TLS over
TCP", and change any of those subcomponents as desired. With URLs this might
become clumsy.

~~~
XorNot
It depends how you want to express the hierarchy - URLs say "this is my
transport system, and this is my address".

But you could propose doing:
tcp+tls+[http://www.some.url/?options=here](http://www.some.url/?options=here)

if you were okay with that. The problem creeps in when that TLS fragment needs
options and a path of its own.
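Splitting such a composed scheme into a layer stack is only a few lines; the `tcp+tls+http` syntax here is purely hypothetical, not anything Redox implements:

```rust
/// Split a hypothetical composed scheme like "tcp+tls+http" into a protocol
/// stack, outermost transport first. Returns None if there is no "://".
fn protocol_stack(url: &str) -> Option<Vec<&str>> {
    let (scheme, _rest) = url.split_once("://")?;
    Some(scheme.split('+').collect())
}

fn main() {
    let stack = protocol_stack("tcp+tls+http://www.some.url/?options=here").unwrap();
    assert_eq!(stack, ["tcp", "tls", "http"]);
    // The clumsy part starts here: this syntax gives each layer a name,
    // but no place to hang per-layer options (say, TLS settings) or a path.
}
```

The parse is trivial; as the comment notes, the real design problem is where per-layer configuration would live.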

------
b3h3moth
"Using Rust for an Undergraduate OS Course"[0] is an interesting read about
Rust, OSes, and C.

[0] [http://rust-class.org/0/pages/using-rust-for-an-
undergraduat...](http://rust-class.org/0/pages/using-rust-for-an-
undergraduate-os-course.html)

------
fche
How many "unsafe" declarations should one expect in regions of the code that
will need to deal with untrusted data (like networks, syscalls)?

~~~
progman
I guess not very many, if Redox ends up supporting open hardware.

[http://www.xda-developers.com/risc-v-cores-and-why-they-
matt...](http://www.xda-developers.com/risc-v-cores-and-why-they-matter/)

[http://hackaday.com/2014/08/19/open-source-gpu-
released/](http://hackaday.com/2014/08/19/open-source-gpu-released/)

[http://www.openfpga.org](http://www.openfpga.org)

[http://open-ethernet.com](http://open-ethernet.com)

Some compelling projects (the GPU) failed for lack of interest. The concepts
behind those projects did not fail, though; most of the projects simply stayed
isolated. The best time for open hardware is yet to come. Redox could
accelerate that.

~~~
fche
Rust's "unsafe" is not about the proprietariness of the hardware. It's about
doing lower-level operations that can only be expressed when the Rust
compiler's safety checks are disabled.

~~~
progman
You are right. Bare-metal stuff is usually unsafe.

The point I'm trying to make is that open hardware could also be programmed in
Rust, reducing the number of "unsafe" blocks in user applications. Software
for proprietary hardware is usually written in unsafe C/C++.

~~~
greydius
I think what you're trying to say is that architecture-specific code will be
isolated in unsafe blocks. That's not necessarily true, as a lot of safe code
can definitely benefit from knowledge of the underlying metal. I'm thinking of
scheduling in particular.

~~~
progman
OK, it seems I still have a misconception about Rust's "unsafe".

~~~
burntsushi
"unsafe" is a superset of "safe" Rust. There are exactly three things you can
do in unsafe code that you can't do in safe code: access or modify a global
mutable variable, dereference a raw pointer, and call other unsafe functions.
That's it.

See: [http://doc.rust-lang.org/book/unsafe.html#unsafe-
superpowers](http://doc.rust-lang.org/book/unsafe.html#unsafe-superpowers)
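A minimal illustration of those three capabilities (the `COUNTER` static and `bump` function are made up for the example):

```rust
// A global mutable variable; touching it is one of the three unsafe operations.
static mut COUNTER: u32 = 0;

// An `unsafe fn`; calling it is another of the three.
unsafe fn bump() {
    COUNTER += 1;
}

fn main() {
    let x: u32 = 42;
    let p = &x as *const u32; // creating a raw pointer is perfectly safe...

    unsafe {
        assert_eq!(*p, 42); // ...dereferencing it is what needs `unsafe`
        COUNTER += 1;       // modify the global mutable variable
        bump();             // call another unsafe function
        let c = COUNTER;    // copy the value out to inspect it
        assert_eq!(c, 2);
    }

    println!("all three 'superpowers' exercised");
}
```

Note that constructing the raw pointer happens outside the unsafe block; only the operations the compiler can't check are fenced off.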

~~~
burfog
No inline assembly? No ability to manipulate the bits of a pointer for stuff
like alignment in a memory manager?

This is truly awful. It means you need to carry around a C compiler for the
low-level parts. You could also use assembly, but then you're forced to write
whole functions in assembly.

~~~
cgh
Inline assembly: [https://doc.rust-lang.org/book/inline-
assembly.html](https://doc.rust-lang.org/book/inline-assembly.html)

Arbitrary pointer arithmetic is also supported.
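For instance, the pointer-alignment chore from a memory manager looks like this in Rust (`align_up` is our own helper for the sketch, not a Redox or std API):

```rust
/// Round `addr` up to the next multiple of `align` (which must be a power of
/// two), the way a simple bump allocator aligns its next allocation.
fn align_up(addr: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (addr + align - 1) & !(align - 1)
}

fn main() {
    let mut buf = [0u8; 64];
    let base = buf.as_mut_ptr() as usize; // pointer-to-integer cast is safe

    // First 16-byte-aligned address inside the buffer.
    let aligned = align_up(base, 16);
    assert_eq!(aligned % 16, 0);
    assert!(aligned - base < 16); // still inside the buffer

    // Casting back and writing through the pointer is where `unsafe` comes in.
    unsafe {
        *(aligned as *mut u8) = 0xFF;
    }
    println!("wrote at offset {}", aligned - base);
}
```

The integer arithmetic itself is ordinary safe code; only the final write through the reconstituted pointer needs an `unsafe` block.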

~~~
burfog
Oh good.

There is something else missing, AFAIK. It's not quite as critical, but it sure
helps: bitfields.

C did this rather badly, reducing portability by not letting the programmer
fully specify the layout. Normally we mostly ignore the portability issue; for
x86, gcc and Visual Studio are compatible.

Imagine writing an x86 emulator that might run on hardware of either
endianness. In theory, bitfields are perfect for implementing the GDT, LDT,
and IDT. Bitfields are also great for pulling fields out of opcodes.
Unfortunately, bitfield layout in C is implementation-defined. The same trouble
hits when parsing a file, for example a Flash animation file.

One should be able to specify spans of bytes with chosen endianness and bit
order, then subdivide each span into fields. Normally each bit should belong
to exactly one field, with an error if violated, but it should be possible to
define overlapping fields if the programmer insists. Fields should then be
able to be joined into larger fields, even if they come from different byte
spans. This allows handling split fields such as the x86 descriptor's base and
limit or the PowerPC opcode SPR encoding.

Lack of bitfield support and lack of a "restrict" keyword are probably the two
biggest things holding me back from Rust right now.
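Absent language-level bitfields, the usual Rust workaround is explicit shifts and masks, which at least pin the layout down exactly. A sketch using the split base and limit fields of an x86 GDT descriptor (the `bits` helper is invented for the example):

```rust
/// Extract the bit span [lo, hi] (inclusive, span narrower than 64 bits)
/// from a raw 64-bit value.
fn bits(value: u64, lo: u32, hi: u32) -> u64 {
    (value >> lo) & ((1u64 << (hi - lo + 1)) - 1)
}

fn main() {
    // A flat 4 GiB code-segment descriptor; both base and limit are split
    // across non-contiguous bit ranges, which defeats C bitfields anyway.
    let descriptor: u64 = 0x00CF_9A00_0000_FFFF;

    // Base: bits 16..=39 hold base[0..24], bits 56..=63 hold base[24..32].
    let base = (bits(descriptor, 56, 63) << 24) | bits(descriptor, 16, 39);
    assert_eq!(base, 0);

    // Limit: bits 0..=15 hold limit[0..16], bits 48..=51 hold limit[16..20].
    let limit = (bits(descriptor, 48, 51) << 16) | bits(descriptor, 0, 15);
    assert_eq!(limit, 0xF_FFFF);

    println!("base = {:#x}, limit = {:#x}", base, limit);
}
```

Joining the two spans of each field by hand is exactly the "fields joined into larger fields" operation described above, just spelled out with shifts instead of declared declaratively.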

~~~
steveklabnik

> Lack of bitfield support and lack of a "restrict" keyword
[https://crates.io/crates/bitflags](https://crates.io/crates/bitflags)

Rust automatically adds the appropriate 'restrict' annotations to `&mut T`
pointers. Or, well, it generally does, but I think an LLVM bug made us take it
off temporarily. The point is, this isn't something that you annotate in Rust
like you do in C; you use the type system, and the compiler handles it where
appropriate. (It's more than just `&mut T`: UnsafeCell, for example, will
cause the annotation _not_ to happen.)

------
chillingeffect
Oh man, if we could get hard real-time scheduling in early, then large,
critical embedded systems like medical devices would benefit greatly!!!$$$

~~~
0xcde4c3db
While I'm not sure this fits with Rust's existing concepts of safety and
correctness, it would be interesting (and not just for RT purposes) to have a
language in which one could mark functions as <= some complexity (given the
usual simplifying assumptions) or provably terminating, and have the compiler
throw an error if it can't prove that those constraints are met. Does
something like this exist?

~~~
Rusky
The closest thing that comes to mind is total functional languages: the
compiler doesn't deal with complexity classes, but it does prove termination.
This is mostly used in dependently typed languages and theorem provers, so
they can prove the type checker will terminate.

------
cgcardona
Previous Discussion:
[https://news.ycombinator.com/item?id=10295187](https://news.ycombinator.com/item?id=10295187)

------
freekh
Looks real cool. Congrats! Wonder if some of the utils like the shell would be
a cool Rust project for me :)

------
pointfree
"MIT licensed"

...and consequently limited driver support. Copyleft _is_ permissive. There's
no need for non-copyleft licensing unless you want _restrictive proprietary
licensing_ somewhere or sometime.

~~~
ci5er
"Limited driver support", here, means ... you don't have access to, and the
right to fork, the source code?

As an old engineer, to me, limited driver support always meant: "not that many
drivers". You seem to mean: "I can't fork".

Or am I mis-reading you?

Many of those of us in the new-new world of next-gen system-integrators-
acting-like-new-software-product-developer types don't always have sole
control over the drivers in the/our stack necessary to deliver key features to
key clients.

The "hard" open-source position of the copyleft crowd incentivizes old-school
pragmatic management to take a "why bother" stance: instead of open-sourcing
80% and getting yelled at because it isn't 100%, just go with 0%.

Which is sad. And unnecessary. They won't deal with the very real business
risk that comes when you treat liberally with zealots.

Or am I (I ask again) mis-reading you?

~~~
lolidaisuki
80% isn't any better than 0%.

------
jpgvm
The Rust implementation of ZFS interests me greatly. Going to pull this down
and try to hack on it.

------
yuchi
I'm not into OSes, but I have to say it looks incredibly interesting and makes
me want to dig into the codebase. Awesome branding, too!

------
nickpsecurity
I like the home page: unusually good choice of attributes for a security or
reliability-focused OS. The path usually not taken. So, for those familiar,
what milestones has the project achieved since we last discussed it here?

Note: The best bang for the buck will be getting solid networking, a
filesystem, a time API, and a crypto lib in there. People can then crank out
purpose-built appliances or VMs for all Internet or Web servers with... not
ease, but more easily. The 80/20 rule is always best for OSS projects to get
adoption and contributions up. ;)

------
skoink
I've seen this project a few times - very impressive work! It'll be awesome to
see if they can get it self-hosting (right now, I think you need to compile it
with a different OS).

------
IshKebab
Why create a new Unix-like OS? Sure, have a Unix compatibility layer, but
there are so many ways to improve on Unix. Seems stupid to repeat the mistakes
of the past.

------
iso-8859-1
It is a shame that OS development is so dependent on toolchains that whatever
focus an OS may have, it excludes other areas of development. For example, I
very much like Genode, but redoing Genode in Rust would probably be harder
than making a Unix clone. Safe language or safe APIs: it seems like you have
to choose one.

------
nailer
Anyone have screenshots of the orbital GUI?

~~~
ethanbond
I'm also curious how to contribute to the GUI. I'll trawl the github in the
meantime but if anyone happens to know a good point to start I'd love to hear.

Edit: Found it, in case anyone else is interested: [https://github.com/redox-
os/redox/blob/master/CONTRIBUTING.m...](https://github.com/redox-
os/redox/blob/master/CONTRIBUTING.md#other)

------
clem16
That looks impressive. It looks very clean and well put together. I will have
to check this out!

------
jeffdavis
What is the main differentiator? Security/stability because it's written in
rust?

~~~
mastax
Maybe in 10 years. Right now it's more interesting/unique because it's written
in Rust.

------
chris_wot
I feel silly I have to ask this, but what is the best guide to learning Rust?

~~~
steveklabnik
I am biased, but the book is the most complete resource: [https://doc.rust-
lang.org/book/](https://doc.rust-lang.org/book/)

There's also [http://rustbyexample.com/](http://rustbyexample.com/)

~~~
chris_wot
Hey, thanks :-)

------
FlyingSnake
This is bloody brilliant! The Rust community is doing a great job of
delivering great projects.

Are there any concepts borrowed from past OS experiments like Plan 9, Midori,
BeOS, etc.?

------
tbolt
Awesome. Really great to see this. Nice work to everyone involved

------
YngwieMalware
Beautiful piece of work. Thank you to the whole development team

------
dschiptsov
UNIX98, POSIX compliance?)

~~~
unsignedqword
Apparently not [http://www.redox-
os.org/book/book/introduction/what_is_redox...](http://www.redox-
os.org/book/book/introduction/what_is_redox.html)

------
terda12
Not a fan of the name, sounds too much like Redux :/

~~~
pjc50
Presumably named after the redox reaction that causes iron to rust.

~~~
iagooar
The name is quite clever, actually.

~~~
Sharlin
Yes indeed.

* There's the Unixy "x" suffix

* Iron turning to rust is a redox reaction

* Redo means "do again (differently)"

* Redox is an almost-homophone with redux meaning "revived"

~~~
hamburglar
Plus the "iron" / "bare metal" imagery for running directly on hardware as
opposed to on top of some other layer. Redox puts rust on metal.

------
tomlong
So this is like Hurd, right?

