Doas – Dedicated OpenBSD application subexecutor (tedunangst.com)
94 points by ingve on July 20, 2015 | 34 comments



Compare the code in doas: http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/doas/ to the epic beast that was sudo: http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/sudo/

Amazingly - for such a tiny bit of code, it still has 100% of the functionality I've ever used with sudo: http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man5/...

And you just have to love a man page like this:

http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man1/...

I really, really like the direction OpenBSD is going. Bit by bit the base operating system is becoming a leaner animal, rid of a lot of cruft that has the potential to bite people in the future.

And, note - for anyone who is wondering how one can get by without sudo for backwards compatibility - sudo isn't going away, it's just being moved out of base into ports. If you want to install it in your system - it's just a pkg_add away.


to the epic beast that was sudo:

I don't want to defend sudo, but by definition any C utility that is multi-platform is going to be larger. Also, sudo has features that are required in some environments, like LDAP or PAM support.

I definitely agree that the direction they are going in is great: a lean base system that has as little attack surface as possible and covers 95% of the use cases. If you do need extra functionality, get it from ports.

One thing I wonder about though is: why does a system whose goal is security still use C for utilities/programs that are not really CPU-bound? Maybe it's time to slowly ditch C for a safer language where possible? (There are languages that would fit in quite well with OpenBSD, except that they require some portability work.)


> why does a system whose goal is security still use C for utilities/programs that are not really CPU-bound?

I think it has something to do with compiler ubiquity.

Let's use Python as a good example of "get shit done if you don't care too much about perf." The source tarball for Python is 18MB - even the Windows binaries are 25MB. TCC[1] is 100KB. We can use that as a bad estimate of how hard the platform/runtime is to port.

Let's say you wanted to get Doas working on some pet Unix OS that you are writing. If it was written in Python you'd have to first get the entirety (25MB) of Python working; if it was written in C you'd simply cross-compile as you would be already doing for everything else. I'd put good money on you wanting to get Doas/Sudo working before you get Python working or the bajillion different runtimes that all the utils are written in.

In that way C has this very peculiar portability: it's not write-once-run-everywhere but it can be write-once-compile-everywhere. It's a common denominator.

I could be completely wrong and it could just come down to: old habits die hard.

[1]: http://bellard.org/tcc/


Most systems developers are proficient with C. Code isn't going to be safer in language XYZ if the developer isn't proficient with it, no matter how awesome XYZ is. This also ties into the legacy: there's a lot of code in C out there that you can't simply replace. Also keep in mind that most of the "safe" languages depend on a runtime which isn't written in a safe language. So you're just pushing the problem somewhere else.

Even pretty bare metal languages like Rust aren't very well suited for OS development (yet) since they come with certain assumptions about the memory subsystem, which has to be implemented before they can be used. This is also true for C - there's some stuff in every C standard library that has to be written in asm, once for each supported platform.

So the bottom line is: there's always something underneath. You can get to a safer place by minimizing these pieces of "unsafe" code and writing code in something else above or, as OpenBSD does, you can abstract and isolate things away and use the tiniest possible solution for each problem, achieving a safer environment by proper separation of concerns.


Code isn't going to be safer in language XYZ if the developer isn't proficient with it, no matter how awesome XYZ is.

There are safe languages that are similar enough to C that it is not hard to pick them up for a C programmer. I agree if the replacement is ML or Haskell. But e.g. Rust, Cyclone, or Go (if a GC is acceptable) are not going to be hard for most C programmers.

Also keep in mind that most of the "safe" languages depend on a runtime which isn't written in a safe language.

Rust is almost runtime-less. The Go runtime has been mostly Go since 1.4.

This is also true for C - there's some stuff in every C standard library that has to be written in asm, once for each supported platform.

I didn't say it's not going to be work :).

So you're just pushing the problem somewhere else. [...] So the bottom line is: there's always something underneath.

That does not mean that, say, Haskell is not tremendously safer than C. First, because you only have to implement the icky parts once. Second, because a better type system and the abolition of undefined behaviour give you more safety for a very small cost (if any; the extra compile time for a better type system is probably negated by the cost of headers).


> similar enough to C that ... But e.g. Rust

Rust is a very large language with an equally large set of standard library functions. It also requires changing how you structure your programs to work within the constraints of the borrow checker.

Rust also relies on LLVM, which has a limited set of target architectures when compared to C.

It's a much better replacement for C++ than C.


"There are safe languages that are similar enough to C that it is not hard to pick them up for a C programmer."

It's not about similarity, it's about differences. System code is all about subtleties in the language - UBs in C, for example. Just because Rust syntax is similar doesn't mean it can be easily picked up by a C dev.

"Rust is almost runtime-less."

Almost is the key. Even the FAQ mentions that it's not suitable for low level code exactly because of its assumptions about how the memory allocator works. You'd have to implement it. And you still assume that the compiler output is flawless (which is very unlikely given Rust's age).

"I didn't say it's not going to be work :)"

But that's exactly what you're implying ;P "Oh come on, it can be done, why not?!" I'm giving you reasons why. These aren't superficial: I've worked on an OS and on OS drivers, and I'm currently working on hardware simulators. If your aim is safer systems, switching from C to Rust (which I happen to love) isn't the answer yet. And probably not for the next 10+ years.

"That does not mean that, say, Haskell is not tremendously much safer than C."

I'm not saying that there are no languages inherently safer than C. I'm saying that a) people can't switch just like that, b) familiar syntax isn't enough, c) legacy matters, d) there's always something that will still be "unsafe". As a pragmatist I'm claiming that you can spend time better than switching languages.

"First, because you only have to implement the icky parts once."

How is this not true in C?

"Second, because a better type system and abolishment of undefined behaviour gives you more safety for a very small cost (if at all, the extra compile time for a better type system is probably negated by the cost of headers)."

No, again, the cost associated with a new language is massive, especially if you care about quality.


zamalek got it correct above. Those languages only cover 2 or 3 of the architectures that the project works with. There is no Rust or Go compiler for vax, alpha, macppc, sparc64. There are C/C++ compilers though.


Yes, I know, that's what I mentioned in my original post: it requires portability work.

There is no Rust or Go compiler for vax, alpha, macppc, sparc64. There are C/C++ compilers though.

Well, I think compilers for some of these architectures (e.g. VAX and Alpha) still exist in gcc largely because OpenBSD and NetBSD have put so much effort into keeping them alive (yes, I know that OpenBSD is using an older version). So it's not as if you get C compilers for these platforms for free.


Actually, Go has support for ppc, alpha and sparc64 through the gccgo compiler frontend. Not sure about VAX, though.

(Also, not really advocating the use of Go to replace C in operating system development, just wanted to make a note)


> they come with certain assumptions about the memory subsystem.

I'm curious as to what you're referring to here, with Rust. Could you elaborate?


I confess! ;) No, seriously, I'm parroting what was said by the developers on the group some time ago. My understanding at the time was that the core allocation library is part of std, so if you get rid of std (which is what you'd do when writing an OS - things like printing on the screen have no place in a kernel) you'd have to recreate this. But I never bothered to dig deeper since OS development is not what I want to use Rust for.

My feeling though is that you need to be able to allocate memory to write code in Rust (duh!) and memory allocation is something that the OS provides. So if you write an OS in Rust... you get the idea. My understanding of Rust isn't deep enough to state whether you'd have to reimplement the internals of Arc, Box and whatnot or just provide something they piggyback on. But there's an explicit assumption in most of the code that you can allocate memory (in a thread safe fashion no less) which isn't there if you have no OS underneath.


It's all good, that's exactly why I was curious. Your understanding isn't exactly wrong, but basically, all that code relies on there being an 'alloc' and 'free' symbol, so you can swap those out with whatever you want; you wouldn't actually need to re-write all that code. Also worth noting, there's a clear line between libraries with no allocation (libcore) and the rest, so even if you did throw it all away, you'd still have a chunk of the library.

That interface is super unstable at the moment, and we're working on stabilizing it, but yeah, that's what you'd have to do: write your own allocator and hook it up.
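
For the curious, here's a rough sketch of what "write your own allocator and hook it up" means, using the hook as it looks in today's stable Rust (it was still unstable when this thread was written). The allocator here is a toy that just wraps the system one and counts bytes; the names are made up for illustration:

    use std::alloc::{GlobalAlloc, Layout, System};
    use std::sync::atomic::{AtomicUsize, Ordering};

    // Toy allocator: delegates to the system allocator but counts what was requested.
    struct CountingAllocator;

    static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

    unsafe impl GlobalAlloc for CountingAllocator {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
            System.alloc(layout)
        }
        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            System.dealloc(ptr, layout)
        }
    }

    // Every Box, Vec, Arc and String in the program now goes through this hook.
    // In a kernel you'd replace the calls into System with your own memory manager.
    #[global_allocator]
    static A: CountingAllocator = CountingAllocator;

    fn main() {
        let v = vec![0u8; 4096];
        println!("heap bytes requested so far: {}", ALLOCATED.load(Ordering::Relaxed));
        drop(v);
    }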


Cool, thanks for the details. I guess I should check who's asking before I respond the next time around. ;) It wasn't clear whether you were pulling my leg or interested in a deep dive I can't provide. But you can. :] So, a question then - wasn't it the case (at one point) that you had a dependency on a stack unwinding library or some external implementation of TLS? My feeling is that what I read about OS development at that point also had something to do with one of these two. Maybe. Unwinding is native to Rust right now and no longer a problem (on supported platforms), no?


:) It's all good, I was a little bit terse... I should consider that in the future.

We do use gcc's unwinding library, last I recall, but if you're writing an OS, you can implement the language items to do whatever you want, including just abort.

There's no inherent TLS implementation that I'm aware of.
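
For reference, a minimal sketch of what supplying those pieces yourself looks like in current Rust for a freestanding (no_std) binary - back then it was the panic_fmt/eh_personality lang items rather than the panic_handler attribute. This assumes panic = "abort" in the Cargo profile and a bare-metal target (or linking without the C runtime), so no unwinder is needed:

    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // With no libstd there's no unwinding machinery from the host; the program
    // decides what a panic means. Spinning (or aborting) is the simplest choice.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // No libstd also means no ordinary main; the entry point is provided directly.
    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {}
    }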


Rust is fine for OS development and plenty of hobbyists are using it for that.


Sure, and I'm glad they do, it's a great language. But you're not going to go beyond hobby projects for the next I-don't-know-how-many-years. And not because there isn't enough love for Rust (which was the case e.g. with D). :)


I don't see any major reason why Rust couldn't be used for a new "real" OS. The "stdlib doesn't report failure on allocation" stuff isn't really relevant; it's not like people use libc malloc in the Linux kernel. Lack of intrinsic exposure on stable doesn't matter, people will just use the nightlies. Intrusive lists are useful, but they don't require that much unsafe code. Likely/unlikely will be in the nightlies any day now. Performance is already on par with C++ if you care enough; removing drop zeroing will help, though. The only big thing I see missing is unsafe unions.

(If you're arguing that drivers will still need some inline assembly, well, yeah. But the assembly required for hardware interop is pretty straightforward, at least compared to code written in assembly for performance).


"I don't see any major reason why Rust couldnt be used for a new "real" OS."

I'm not saying it can't in principle, I'm saying it would take years to get to a usable point. But then again - I wouldn't mind being wrong here! :]

"I'd be interesting in knowing what their thinking is as well."

OpenBSD is proudly exploiting quirks of various platforms to stress test the code and find cases where differences break the code and expose bugs. A change from C won't happen just because Rust is here to stay, especially not until LLVM (and preferably musl) runs on most of the platforms OpenBSD runs on. The official stance on LLVM in general is that it'd have to get to parity with gcc WRT the platforms that OpenBSD finds interesting, but the reality is that a platform or two out of ~15 could be dropped if LLVM provided benefits over gcc. An LTS compiler is something OpenBSD (and not only them) would appreciate greatly. But it's not going to happen, I'm afraid.


It's fine indeed, but LLVM (and therefore Rust) doesn't support nearly all the targets OpenBSD supports.


> One thing I wonder about though is: why does a system whose goal is security still use C for utilities/programs that are not really CPU-bound? Maybe it's time to slowly ditch C for a safer language where possible? (There are languages that would fit in quite well with OpenBSD, except that they require some portability work.)

OpenBSD's security goal (and expertise) comes from its programmers typically being really skilled at C programming. Some OpenBSD utilities use other languages, though (like the package management tools, which are written in Perl last I checked).

The bigger concern, however, is portability. OpenBSD supports a lot of hardware platforms, rivalled only by NetBSD as far as I know. There are also licensing restrictions; the goal is for OpenBSD (or at least as much of it as possible) to be entirely permissively-licensed. Any language that doesn't satisfy those two requirements may very well be a nonstarter.


Re: Multiplatform Sudo - you bring up a really good point, which is that sudo, as a multi-platform utility, really didn't belong in base. I wonder how many other targets there are that will yield to the wonderful minimalist that is Ted Unangst (who also wrote the signify tools).

I'd be interested in knowing what their thinking is as well. If I had to totally guess - it would be that the OpenBSD team has 15+ years of world-class expertise in writing safe/secure C code, and that switching to a different language would (A) mean having to learn how to do this all over again in another language (albeit in a much, much easier manner) and (B) mean dealing with the abstractions that a language might bring, which didn't exist when you were writing in C.

Presumably the benefits of a memory safe language don't outweigh the negatives for the OpenBSD team. Yet. I guess time will tell.


This opens you up to disruption, if others with less expertise can write equally secure code using e.g. Rust (I have no clue about Rust).


At the moment, the best candidate for such a disruption may be MirageOS, written entirely in OCaml.

On the other hand, MirageOS has no hardware drivers and depends entirely on a Xen hypervisor. This may be a smart move to get it all going, but also means a fundamental dependency on yet another large piece of unsafe code.


Why rewrite all the operating system utils when you can just refine the existing ones and improve the OS's exploitation prevention mechanisms? If you rewrite, you'll add bugs, and you need a lot of competent developers to rewrite everything (which is the same reason OpenBSD doesn't provide official binary updates for base packages, IMHO). Plus, you can't benefit from upstream improvements once you've rewritten everything.



Exactly. That's why they've written doas and haven't rewritten the whole OS in Rust :)


FYI, OpenBSD doesn't do PAM.


This tool exec's, and its return status reflects whether the command could be launched. Sudo waits and returns the exit code of the process. I'm surprised this wasn't mentioned in the blog post; it's probably the most important difference.


If the exec is successful, the doas process is replaced by the exec'd program, and the exit code will then be whatever that program exits with.
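
A sketch of the two behaviours - not the actual doas or sudo code, just an illustration using Rust's standard library on Unix:

    use std::os::unix::process::CommandExt; // Unix-only: brings exec() into scope
    use std::process::Command;

    fn main() {
        // sudo-style: fork the child, wait for it, and report its exit status.
        let status = Command::new("id").status().expect("failed to run id");
        println!("child exited with: {}", status);

        // doas-style: replace the current process with the command. On success this
        // call never returns, so the exit code is simply whatever the exec'd program
        // exits with; exec() only returns (with an error) if the exec itself failed.
        let err = Command::new("id").exec();
        eprintln!("exec failed: {}", err);
    }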


Does anybody use sudo in a way that securely adds additional but limited privileges to certain user groups? I'm sure 99% of the time it's used only for giving admin privileges to the admin group. If your OS needs a tool that allows users to run processes as another user, then maybe your OS needs finer access controls instead.

And for the 99% use case I think a few lines of C code would suffice.


sudo (at least everywhere I've used it) is typically used to (A) allow me to run code as root without having to type my password, and (B) give select groups of people the rights to run a few specific commands. In terms of number of times executed, in a recent environment I was in, where approximately 100 or so people had sudo rights, 90+% of the time it was to allow people in various groups to run commands like "tcpdump" or "netstat", without giving them full root rights to the system.


The correct way to set up BackupPC to run full disk backups against a remote system is to create a backuppc user on that system and then add an entry to your sudoers file for the most restrictive possible backuppc command. Here's an example from one of my servers:

    backuppc ALL=(root) NOPASSWD: /usr/bin/rsync --server --sender ?*
This is primarily to help limit the amount of damage that backuppc can do if something very strange happens and it decides to try to eat the filesystem.


Yes.

One example I've used was setting up some non-technical windows users with a shortcut they could click which would:

- do a key-based, passwordless ssh login to the server
- rotate a proxy log specific to their test machine

The logrotate config for these logs included code to move a copy to a mapped network share.

So clicking the shortcut caused the log for their most recent bout of testing to magically appear in a folder on their desktop.

The sudoers file allowed only this command to be run by these users. They did not get shell access at any stage.



