OpenBSD system-call-origin verification (lwn.net)
112 points by swills 31 days ago | 92 comments



This is a pretty weak security mitigation since any ROP attack will typically achieve return-to-libc anyway, but yelling at people who make syscalls directly is good.


> but yelling at people who make syscalls directly is good.

Why though? Doesn't this mean that all non-C-based languages are going to be treated somewhat as second-class citizens, having to link the standard C library (e.g. as Go does on some platforms) in order to call into the kernel?

While languages such as Go and Rust are aiming to replace/displace C due to it being designed in an age where security was considered less of an issue, it seems counter-intuitive to me that we should insist that they link in the apparent attack surface of the standard C library. The syscall boundary seems an ideal place to make the delineation between the kernel and userland via an established API, and I would have expected that languages that want to displace C would be able to use that interface directly in order to bypass the standard C library. That would seem to allow userlands to be built that include no C code whatsoever. But I'm very obviously no expert.


"having to link the standard C library”

Ideally, IMO, there should be two libraries, a “Kernel interface library” and a “C library”, but for now, just think of it as two logical libraries in a single file.


Nothing about this prevents applications from being built that do not depend on libc. This is a flag set on ELF binaries that the program loader honors; tooling that builds things that really want to generate syscalls directly can just flag all sections as syscalls-allowed.
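For reference, the LWN article describes the registration side as a new msyscall(2) system call that ld.so issues for libc's text segment. A minimal sketch of what that loader-side step might look like (the prototype here is assumed from the article's description; this is illustrative, not OpenBSD's actual ld.so code):

    #include <stddef.h>

    /* Assumed prototype, per the article: mark [addr, addr+len)
     * as the only region allowed to make system calls. */
    int msyscall(void *addr, size_t len);

    static int register_syscall_region(void *libc_text, size_t len)
    {
        /* Only one region may ever be approved per process;
         * a second registration attempt fails. */
        return msyscall(libc_text, len);
    }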


Oh ok, I missed that. So how does this increase security then, if an attacker can just flag the binary safe? Seems like a bit of an honour system.


For the same reason that ASLR improves system security even if you can still in principle pass a special flag to build a non-ASLR binary.


An attacker would have to set such a flag before the binary was loaded, meaning they've already achieved code execution (and syscalls).

This feature is yet another layer of defense against remote exploitation of a currently running program (for example, a web browser). It complements things such as ASLR, stack canaries, the NX bit, ...


How does having a password increase security if an attacker can just run passwd as root and change it?

Hint: having some security and mitigations makes it less likely for an attacker to start with such capabilities.


Disregarding the condescending nature of your comment - I can't find a reference to the actual whitelisting methodology in this article. Several other comments in this thread claim that this is a flag set on ELF section headers, which can be done entirely before the binary is delivered to the system and executed. So in my opinion you need to try harder than this.

If someone can show this is done by configuring the local dynamic linker, so that the end user has full control of the mechanism, then I'm all ears.


If an attacker can ship a malicious binary and run it on the target system, then this defense mechanism is 100% pointless.

It's trying to mitigate exploits, but if your attacker is already running arbitrary code, they don't need those exploits. They're way past that phase.


It's totally acceptable to call into libc from something other than C, or in fact to write libc itself in something other than C.


For example, Microsoft has rewritten their C standard library in C++ with extern "C" entry points.


Yes. Microsoft have made a number of silly mistakes with respect to OS design, which has cost them the server space outside of authentication for your and my lifetime. Bloat is not the answer to either security or performance problems.


Quite the contrary, I admire Microsoft for being one of the first companies to start driving C away from our IT stacks.

Thankfully Apple, Google, IBM and Unisys seem to be in the same boat.


And llvm-libc may be one such implementation: https://llvm.org/docs/Proposals/LLVMLibC.html


That's a pipe dream, not an implementation. Let's wait until it actually happens before getting too excited; then we'll actually be able to evaluate the drawbacks.

> Current Status - llvm-libc development is still in the planning phase.


Can you help me understand where this second-class-citizen sentiment comes from? If all programs need to go through libc, doesn't that mean they are all equal? Whether to make system calls go through libc or not is just a matter of where you put the ABI boundary. Putting that boundary "above" the raw system call instruction (like most OSes do) doesn't hurt anyone. Linux does it differently mostly because it just shipped the org chart.


It's not a sentiment, it's a question. I don't have an agenda here, I'm just trying to understand.

> If all programs need to go through libc, doesn't that mean they are all equal?

It means that all programs need to link libc into their binary, whether statically or dynamically. Part of the raison d'être for Rust seems to be as a replacement for C and C++, so it would seem peculiar to me that the C library would become a forced dependency for compiled languages like those. But as the other poster pointed out, you can disable it anyway, so no matter.


Linking libc dynamically is essentially free. Every program on the system uses it, so almost all of its code and clean data pages are already in memory.

As for static libc: please don't do this. A static libc, from a compatibility POV, is just as bad as embedding random SYSENTER instructions in program text. It makes the system much more brittle than it would otherwise be. I understand the desire to package a whole program into a single blob that works on every system, but we should support this use case with strong compatibility guarantees for libc, not with making SYSENTER the permanent support boundary!

When I am god emperor of mankind, on my first day, I will outlaw both static linking of libc and non-PIE main executables.


> As for static libc: please don't do this.

Preaching to the converted here; I'm a big fan of dynamic linking. It seems that while Go binaries are generally statically linked (last time I checked, which was a while ago), libc is generally dynamically linked, for the reasons that you have stated and also because some features like NSS require dynamic linking to work correctly.

Disclaimer: I mostly program in C and C++, not Rust or Go (yet).


Say that to the plan9/9front users.


Ok. I will. So? Am I supposed to believe that a technique that works fine everywhere but Linux is somehow unworkable in general?


The OS stable API and the ISO C standard library aren't the same thing though; some UNIXes just end up mixing the two, while others don't even allow documented access to raw syscalls, e.g. Apple's.


You still need an information leak of the exact address of a libc function to achieve that, which will be different on every program invocation due to ASLR, and brute-forcing is useless in a 64-bit address space. Even an information leak of the location of some other, less useful libc function plus an offset isn't enough, because OpenBSD randomly relinks libc on every boot, so just having the libc of a release to harvest offset information isn't useful.


Windows yells plenty and look at how that turned out. Like pitching a tent in a hurricane.


I think it might protect against JIT shellcode.


CFI goes a long way towards defeating ROP


Are there any systems that implement widespread CFI for all binaries? The most I’ve seen is glibc’s endbr64 at the top of _start.


Fedora 31 compiles everything with -fcf-protection. Of course it requires hardware support before it actually does anything, and there is a lot of missing support in the non-C toolchain and in certain packages. You can use "annocheck" to check whether a particular binary is compiled with full control-flow protection, e.g.:

    $ annocheck -v /usr/bin/ls
    [...]
    Hardened: /usr/bin/ls: PASS: Compiled with -fcf-protection.
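To see what that protection looks like at the instruction level, compile any function with gcc's -fcf-protection on x86-64 and inspect the assembly; each function entry gains an endbr64 marking it a valid indirect-branch target (a minimal demo, assuming gcc on x86-64):

    /* endbr.c: compile with `gcc -O2 -fcf-protection -S endbr.c -o -`
     * and note the endbr64 instruction emitted at the top of add(). */
    int add(int a, int b)
    {
        return a + b;
    }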


Folks interested in this sort of news might also enjoy the OpenBSD Journal, which covered this recently, albeit more succinctly: https://undeadly.org/cgi?action=article;sid=20191202105849


Totally off-topic, but it's interesting how the URL uses ; as the GET parameter separator rather than &.


The internal syntax and semantics of the query part is not really defined by HTTP. (https://tools.ietf.org/html/rfc3986#section-3.4)

Also:

"Historic RFC 1866 (obsoleted by RFC 2854) encourages CGI authors to support ';' in addition to '&'"

https://en.wikipedia.org/wiki/URL#cite_note-23


I strongly support this policy not only for security, but also for general operating system robustness. Linux is pretty much the only system that places the ABI compatibility boundary and the machine privilege level boundary in the same place in the stack. I think that's the wrong place: it pushes a lot of complexity that could be in userspace into the kernel, because the kernel is the first place past the ABI break. Linux would be better if nobody except libc were allowed to make system calls and we just made libc (or a giant VDSO) the ABI support boundary.

We should at the very least have a VDSO for every system call. There should be some opportunity to run code in userspace before a privilege transition. Doing so would give us a lot more flexibility than we have now.

For example, consider socketcall(2): for a long time, all Linux socket system calls (like recvmsg) were multiplexed through a single system call. A few years ago, the kernel community realized that this multiplexing was a bad idea and made individual system calls for all the traditional socket operations. But since we have to support old programs, we have socketcall(2) and the new system calls in the kernel. Why should we?

If the ABI support level had been libc all along, we could have changed the system call strategy (from multiplexing to fine-grained calls) transparently without bloating the kernel with compatibility code.
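A hedged sketch of the idea (not glibc's actual code): a libc wrapper can hide whether recv(2) is issued through the old multiplexed socketcall(2) or a dedicated syscall, and if libc were the support boundary, that choice could change without any kernel compat code. SOCKOP_recv below is the socketcall sub-code (SYS_RECV in linux/net.h):

    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define SOCKOP_recv 10  /* socketcall sub-code (SYS_RECV in linux/net.h) */

    ssize_t my_recv(int fd, void *buf, size_t len, int flags)
    {
    #ifdef SYS_socketcall
        /* Old multiplexed path (e.g. 32-bit x86). */
        long args[4] = { fd, (long)buf, (long)len, flags };
        return (ssize_t)syscall(SYS_socketcall, SOCKOP_recv, args);
    #else
        /* Modern fine-grained syscall; recv is recvfrom with no address. */
        return (ssize_t)syscall(SYS_recvfrom, fd, buf, len, flags, NULL, NULL);
    #endif
    }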


> We should have a VDSO for every system call.

This doesn't really make sense; the vDSO only works for system calls that do nothing but read a value from the kernel and return it to userspace.


Not the case. Some vDSO entries just forward to regular system calls. Try clock_gettime with CLOCK_BOOTTIME sometime. (I need to send a patch to fix this. Maybe over Christmas.)

Requiring that every system call go through a VDSO would be great because it would give us a "hook" for changing system behavior before entering the kernel, which is a good thing, because in the age of speculative execution mitigations, entering the kernel is expensive, just like in the days of yore.
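This is easy to observe from userspace (Linux/glibc assumed). Run the program below under strace: calls answered by the vDSO never appear as kernel entries, while ones that fall back, as CLOCK_BOOTTIME did at the time of this comment, show up as real system calls:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);  /* usually vDSO fast path */
        clock_gettime(CLOCK_BOOTTIME, &ts);   /* fell back to a real syscall */

        printf("boottime: %lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }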


On my computer I ended up going through an rdtsc instead of a syscall…


Wouldn't this make containers way more difficult?


Why would it? You need to bootstrap allowed call paths, and whatever init does to do that, PID 1 in a namespace could do that too.


> Since OpenBSD is largely in control of its user-space applications, it can enforce restrictions that would be difficult to enact in a more loosely coupled system like Linux, however.

In which ways is OpenBSD more in control of userspace applications, and in which ways is Linux more loosely coupled?

> Switching Go to use the libc wrappers (as is already done on Solaris and macOS)

Did Solaris and macOS also do that for security reasons? (A linked article mentions ABI instability as the reason, but maybe there's more to it.)


> In which ways is OpenBSD more in control of userspace applications, and in which ways is Linux more loosely coupled?

One of the major differences between Unix and Linux is which parts of the system are in the source tree.

Linux is technically just the kernel, and a Linux distribution will put together the base libraries and userland programs and kernel along with various tools for installing and managing.

Kernel space programs are generally the kernel itself and device drivers.

Base userland programs are things like `cat` and `ls` and more that allow for manipulating and inspecting parts of the system. User-space programs are any userland program which is not in kernel-space, including third party packages and custom software written by the user.

Userland programs need to be linked against libraries, which is where libc comes in.

In most BSD Unix systems, the source tree will include the kernel and the base userland programs together.

That means that OpenBSD's versions of `libc` and `ls` are maintained in the same source tree as the kernel, allowing for much tighter coupling and integration for changes.

By contrast, Linux distributions are generally using some version of the GNU C library and GNU Coreutils, perhaps with downstream patches.

So a core change to the Linux kernel must make its way upstream to the user space application and then downstream from that, the distribution must integrate both.

This is much more loosely coupled and should hopefully illustrate the difficulty in coordinating such a change in Linux.


The coupling here is specifically the lockstep of the kernel syscall ABI and libc.


It's just ABI instability. In BSD, Solaris, and macOS the kernel and libc are distributed by essentially the same people as an atomic unit, and the syscall layer changes over time. On Linux, it's the syscall layer that's stable.


It's entirely doable on Linux too. Introduce new syscalls for the new interface, plus a lockdown facility that disables the old interface so sysadmins/distributions can switch to the new one. Of course this can and should be coupled with compile-time switching, so eventually the old interface becomes deprecated, and if no one steps up to maintain it, it will be removed.

It's simply a different approach to doing backward compatibility. (Instead of maintaining very different trees for different versions, the functionality is concurrently available in multiple versions, but with feature switches [compile- and run-time configuration].)


> and if no one steps up to maintain it, it will be removed.

"You don't break userspace". How many decades until people who no longer have source for the broken binaries that they run will stop complaining?

How long until sysadmins know to turn the knobs on?

How many knobs are sustainable? How much will they interact, and in how many ways will those interactions expose vulnerabilities?

The Linux way to introduce this is hell for everyone involved.


If no one uses it, removing it does not break userspace. Removal of features happens regularly.

How many? Dunno, doesn't matter. If there are users they are usually willing to step up to maintain it, and that makes having parallel APIs not a problem (the old one becomes a wrapper).

Furthermore UNIX/BSD faces the same problem with every other program that lives out of tree. (Which is the majority of them anyway.)

Knobology is always an endless debate, yet also regularly done without much fuss, usually simply by what the maintainer(s) decide to provide.


As done on the Treble-ified Linux in Android.

Standard Linux drivers are considered legacy, with all new drivers using their own processes talking via Android IPC.

Userspace is all about Android Java, ISO C, ISO C++, and NDK APIs; everything else is considered off limits, and there is also some gatekeeping via SELinux and seccomp on whatever else is allowed.


Right, but the abstraction layer for Treble is the HAL boundary and not the syscall boundary. The seccomp filter is unrelated and IIRC relatively permissive.


Syscalls are not part of the NDK stable API contract, so although seccomp isn't as extensive as it might be, there are zero guarantees that further syscalls won't be blocked.


Nothing you need to write a driver is part of the NDK stable API contract either. They're orthogonal concepts.


Sure, however we are speaking about general purpose access to syscalls from user space here.

Only Android OEMs get to publish drivers.


> Did Solaris and macOS also do that for security reasons? (A linked article mentions ABI instability as the reason, but maybe there's more to it.)

That's all there is to it (I did the Solaris port).


> In which ways is OpenBSD more in control of userspace applications

They port and package them themselves.


This effectively treats libc as a trusted interface. Checking should be at the protection boundary, the system call. This just makes attacks more complicated, not less successful.


Yes, it's more complicated, but that's how all security works and why security exploits are more expensive today than before. The reason this defense requires libc is that syscalls are not function calls (you push args onto the stack and then trigger a software interrupt).

This defense technique is both simple (in terms of the relative change from how things work today) and effective, in that buffer overflows can't ever call into the kernel directly but instead have to go through libc, which has had its location randomized at application launch.

To provide this protection at the syscall layer would require some kind of randomization of how a syscall is performed, which has no prior art, since a syscall is just setting up arguments and triggering a trap through a single instruction. The complexity of the work is unknown in terms of how to stop a buffer overflow from being able to do that (i.e. harden an arbitrary process so it cannot directly jump into a syscall). Additionally, because there's no prior art, a change of that scope wouldn't have the benefit of well-understood behavior. Not impossible or prohibitive, but it raises the bar for the benefit you'd have to provide (and you'd probably have to get a chip manufacturer and compiler vendors to go along with you). And even if you could, 3rd-party binaries wouldn't gain that protection magically.

Putting this in libc is fantastic. Arguably, Linux's decision to treat libc as an external, unrelated project and keep ABI compatibility at the syscall layer prevents a protection mechanism like this.
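Conceptually, the kernel-side check this buys is tiny: on every syscall entry, verify that the program counter of the trapping instruction lies inside the one registered region. A sketch of the idea in plain C (illustrative only, not OpenBSD's kernel code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct syscall_region {
        uintptr_t start;  /* base of the approved region (libc text) */
        size_t    len;    /* its length */
    };

    /* pc is the address of the syscall instruction that trapped.
     * Unsigned wraparound makes this a single range check. */
    static bool syscall_origin_ok(const struct syscall_region *r, uintptr_t pc)
    {
        return pc - r->start < r->len;
    }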


> Yes, it's more complicated, but that's how all security works and why security exploits are more expensive today than before.

That's how security theater for software works. Today's attackers are well-funded and technically competent. This is a defense against script kiddies.

The way to get rid of buffer overflows is to get rid of C in trusted code, not requiring users to go through libc. It isn't hard any more. We have Go, Rust, C#, Java, and even node.js and Python for server side. If you're running C in a user-facing server application, you're doing it wrong.


I hope you're not suggesting that modern mitigations such as W^X and ASLR are "security theater" :/


There are now many known attacks that can overcome address-space randomization. From Javascript in the browser, even. If someone can execute something locally and can peek at memory even in a limited way, they can find the code.[1]

The problem is buffer overflows. Fix that, and you don't need ASLR. ASLR is a form of security theater. It protects against dumb attackers.

[1] https://threatpost.com/bypassing-aslr-in-60-milliseconds/121...


ASLR was never meant to be the only line of defense against buffer overflow attacks; it’s a mitigation and not a solution. It makes it so a single exploit is not enough to achieve full control.


Unfortunately, the only UNIX-derived OSes that seem to care about that are Solaris on SPARC and iOS and Android on ARM, given how they take advantage of hardware memory tagging.

On x86 clones Intel just dropped the buggy MPX, with no replacement in sight.

And the way things are going, no form of Safe C variant appears likely to ever be adopted by UNIX clones.


So let's assume the argument you're making is valid. ASLR is security theater because it protects against dumb attackers and fixing buffer overflows is the real solution.

Let's ignore that only recently have languages become available that make it possible to do so for high-performance code. That's a straightforward problem to solve - we know how to rewrite code in other languages, we can build tools to help us do it effectively, etc.

Let's also ignore that the recency of the languages mean that few people know how to code that effectively. We can train armies of engineers.

Let's ignore that training engineers takes significant amounts of time & shifting massive industries is difficult. That will be solved over time.

Let's ignore that Rust is more difficult to write than C/C++ - after all, we're just converting existing code, so that could be easier, and the extra time spent writing is paid off by less time debugging at runtime.

Let's also ignore that A LOT of money has been poured into existing software and a non-trivial amount would have to be donated/volunteered to migrate to languages where such bugs are impossible. We have OSS volunteers and enthusiasts, and corporate interests are aligned to sponsor conversions.

Let's ignore that it takes time to integrate new languages into large corporations with existing build systems, infrastructure, tooling, etc. They'll solve that - again, interests are aligned here.

Let's ignore ALL of those issues. XSS, Rowhammer, Meltdown, Spectre, downgrade attacks, man-in-the-middle, spear phishing, bugs in cryptographic algorithm design, side-channel attacks, etc. are all still security attacks that have very real, and perhaps sometimes more serious, consequences than buffer overflows, and waving a magic wand and fixing all buffer overflows wouldn't really adjust the security landscape we deal with today (maybe exploits would get more expensive - maybe, but ASLR et al. do that too).

Following the same logic you laid out, this focus on buffer overflows at all is just security theater: ASLR is to buffer overflows what buffer overflows are to security bugs overall. Back in reality, ASLR, syscall-origin verification, etc. are all tools to mitigate the impact of all the things we ignored above, and we know they have an impact because exploits keep getting more expensive, the low-hanging-fruit bugs always get patched quickly, etc.

So sure, once the world has switched to Rust, we can turn off all these other protections that make buffer overflows more difficult. Oh, except we can't, because now we remember that Rust has "unsafe", and that's used A LOT for various reasons. So you need those protections anyway in case there's a bug in the unsafe code. And I 100% promise you there will be exploits there or in the Rust compiler. We can have confidence this will be the case because it's already happened: https://medium.com/@shnatsel/how-rusts-standard-library-was-..., https://blog.rust-lang.org/2019/11/01/nll-hard-errors.html. So the claim that we know how to fix buffer overflows is just inane even on its face. Hell, this isn't even a Rust-specific thing. I would be very skeptical if you could find a single semi-popular language without a buffer overflow bug of some kind somewhere, even in managed languages. The Java VM is full of them. Go has them. Etc.

TLDR: You may want to reexamine your arrogance. Security is an extremely hard topic that will never get solved. Are there shitty programmers? Sure. Are security problems exclusively written by them? No. Security issues are an emergent property of complex systems. Some are easy to solve once and for all once the root cause is known. Others are MUCH harder.


In the long run, the non-complicated solution is to get rid of anything that can compile to something with a buffer overflow.

Just half trolling.


There goes Rust I guess...


I doubt we will ever see a UNIX clone written in Rust.

An OS that happens to expose some support for POSIX, yeah (e.g. Redox), but anything else wouldn't be a UNIX clone per se.


And that's all good, we don't need more people cloning an OS that was good 50 years ago.

Modern software is sidestepping the UNIX way left and right. Rightly so, as anyone who's tried to write anything serious in shell would know. Or anyone who realizes that non-blocking I/O was an afterthought (look into the history of select(2), why libraries like libev exist, and what problems they run into on Unix derivatives...) and that there's still nothing like a standard, robust, well-designed async I/O API.


UNIX was good, for an OS given away with source code and an almost free license (versus what other mainframe OSes were asking for).

History would be quite different if AT&T had been allowed to sell it from day one.


If you can change libc then you have permissions equal to or greater than those of the calling process. So this strengthens the security model.


Does that mean that linking against libc is mandatory and that applications are not allowed to bring their own version of libc along for instance when linked statically?


Statically linked programs automatically lose this protection, apparently.


Partly. If they JIT, or have shellcode injected, that code is not mappable with syscall privilege.


This is a flag in the ELF binary section headers, if you don't want it, flag your entire binary as "syscalls allowed".


No. There's nothing special about libc.

(Oh, double checked, and uh, maybe. ld.so does treat libc specially if it finds it in the library list, but at that point, you are linking with libc. But afaik there's no requirement you use the system ld.so either.)


Then isn't Animats' point valid?


The point of doing this is not to protect against people bringing their own bad libc.


But surely applications can make syscalls directly without invoking libc at all, by setting up the registers and executing the processor instruction that traps into the kernel? In which case the parent comment's point still stands: you can bypass checking at the libc level in a way you can't bypass checking within the kernel.
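Right; a raw syscall is only a few instructions. For concreteness, a minimal x86-64 Linux sketch (OpenBSD's convention is similar, though the syscall numbers differ); under origin verification, this syscall instruction, sitting outside the registered libc region, is exactly what gets the process killed:

    int main(void)
    {
        static const char msg[] = "hello from a raw syscall\n";
        long ret;

        /* write(1, msg, len) issued directly: number in rax, args in
         * rdi/rsi/rdx, then the syscall instruction. The kernel
         * clobbers rcx and r11. */
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1L),               /* SYS_write on x86-64 Linux */
                            "D"(1L),               /* fd 1 = stdout */
                            "S"((const char *)msg),
                            "d"(sizeof msg - 1)
                          : "rcx", "r11", "memory");
        return 0;
    }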


No, because the calling address would be wrong.


Why? After all, if libc is optional then you could simply provide a statically linked binary and be done with it; there would be no way to change the addresses.


If you link against libc, that's the only place you can enter the kernel from. If you statically link, that is relaxed to "system calls can come from anywhere in your code". The former has stronger protections, obviously, but as far as I can tell you still have protection from "wild shellcode in an RWX region can make syscalls".


It should be noted that, by default on OpenBSD, an RWX page is impossible: mprotect() will fail, and the process will get killed for trying.


Surely there is an “out” for JITs that have not yet adopted W^X?


An opt-in, actually, to request looser checks.


Can't you make a page temporarily RW, JIT, then switch back to RX?


You can, but the application needs to support this.
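The pattern looks roughly like this (a minimal POSIX-style sketch with error handling trimmed): map writable, emit the code, then flip the page to read+execute before calling it, so no page is ever writable and executable at once:

    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef int (*fn_t)(void);

    int main(void)
    {
        /* x86-64 machine code for: mov eax, 42; ret */
        static const uint8_t code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;

        memcpy(p, code, sizeof code);                 /* RW phase: emit */

        if (mprotect(p, 4096, PROT_READ | PROT_EXEC)) /* flip to RX */
            return 1;

        return ((fn_t)p)() == 42 ? 0 : 1;             /* run the JITted code */
    }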


If you can provide a statically linked binary, it's already game over as far as these protections are concerned.

They are designed as a protection against stuff like ROP and other memory based, zero information attacks that hit already running processes.


> Checking should be at the protection boundary, the system call.

This doesn’t make sense, though. This would mean the OS would need to know the control flow of the program itself to figure out whether it “should” allow a system call to proceed.


So what's stopping the attacker from searching the libc export table for the right syscall wrapper?


ASLR, for one.


Can't they use dlopen to find libc?


dlopen is a dynamically linked function, so they would have to find that too. Its location will also be randomized.

Note we are talking about exploit code here, i.e. you have just exploited a buffer overflow, not ELF code loaded in a well behaved fashion.


There are easier ways to leak libc, such as reading from the stack or the GOT, but these would require more work than a simple ROP chain.


And then WebAssembly will ruin all these efforts.


WebAssembly should not be making direct system calls anyways.


You wish, but Chromium runs under pledge(2) on OpenBSD.



