Rethinking OpenBSD Security (tedunangst.com)
151 points by zdw on March 31, 2020 | 65 comments



Good writeup, fairly made points. It'd be easy to laugh at OpenBSD finding SMTPD vulnerabilities even now (I did); but as TFA points out it's complicated and often hard to see these things.

And that's in OpenBSD which has had the honest effort and passion of scary geeks with no sense of proportion for how many years?

Compare that to the security posture inherent in projects that install with

    sudo bash <(curl -sL someurl.wtf/thatredirectsanyway)

"Secure" computing today is just not going to happen. Not saying its a bad goal, or a waste of time; just that its a metaphysical pursuit of perfection doomed to beautiful failure.


It will happen, slowly, but it will. Liability lawsuits and media exposure just need to increase; then more companies will start paying attention to security.


Quite a lot of companies have massive security budgets. The problem is, nobody wants to do the work, so most CISOs just meet vendors and spend most of the budget there.


I think some people just need to step up instead of only evangelizing, and actually build something that is useful and "secure".

Oh wait... There are actually tons of such projects. Why did they not take off? (I'm trying hard not to sound cynical because I don't really know that there is absolutely no way that they could, but also I do feel cynical)


Someone missed the memo it seems.

We are doing exactly that. As far as I'm concerned, I switched to managed languages back in 2006 and have already replaced a couple of C and C++ based solutions with .NET and Java ones. In my line of work, C and C++ only have a place as a means of writing bindings to OS libraries written in those languages. Whatever can be replaced by safer languages gets replaced.

As for big companies: C is persona non grata on Windows; Azure IoT makes use of .NET and Rust; Microsoft is a big pusher of C++ static analysers, drives Checked C research, and built two safe research OSes whose learnings were added back to .NET (async/await, Span<T>, .NET Native); UWP sandboxing is merged with Win32, with the upcoming Windows 10X having everything sandboxed in little pico processes.

Google allows very little C and C++ code on its OSes: ChromeOS sandboxes everything, Android probably has the Linux kernel fork with the most security knobs turned on, the NDK has very little API surface, and Android 11 now requires hardware memory tagging for C and C++ code.

Even though I don't think it will go far, Google is working on eventually using Swift for further TensorFlow development. It remains to be seen how much C++ they are willing to replace with Swift.

For about a year now, Apple has been moving drivers out of the kernel; outside of DriverKit itself, everything else can make use of Swift as well. Very few modern macOS APIs are C or Objective-C based. Since the iPhone X, iOS makes use of hardware pointer authentication.

Apple is also hiring Rust developers to replace C-based infrastructure on their backend.

GenodeOS is written in C++ and, since about a year ago, has started to migrate security-critical parts to Ada/SPARK.

Speaking of which, NVidia, a C++ powerhouse, is now using Ada/SPARK for security-critical firmware.

We security-conscious people will get there slowly; it might not be in our lifetimes, but society will get there eventually.

Then again, progress only happens one person at a time.


> There are actually tons of such projects. Why did they not take off?

Secure computing requires a relatively low time preference to be economical. Or, more precisely, marginal demand for secure computing increases with a marginal decrease in time preference. Most companies have (artificially inflated) high time preferences, so it's not economical for them to adopt what we consider secure computing.

This is, I believe, to the net detriment of human society as a whole (at least, at my personal time preference).

I'm not going to get too much into why time preferences are so (artificially) high on here, but I encourage people to look into it themselves. A lot of it has to do with government economic intervention, especially around monetary policy.


> A lot of it has to do with government economic intervention, especially around monetary policy.

Don't single it out. Fiscal policy is as guilty as monetary policy. Even civil and criminal legislation play a part here.


Rates are low, which signals willingness to invest long term. Are you sure you got the monetary policy right?


Long delay, but I just saw this: the availability of loans well below “natural” market rates encourages consumption now, because you can just take out a honking big loan and you won’t have to deal with the consequences for a long time. I.e. it inflates effective time preference.


> Why did they not take off?

There is lots and lots of software that is both well designed and popular. SQLite is a great example. Postfix and OpenSSH too.


SQLite is well designed? Give me a break. It's an ugly and insecure hack. https://research.checkpoint.com/2019/select-code_execution-f...

The meta tables declaring all table structures are changeable via SELECT and CREATE VIEW statements, and the full-text search engine still follows arbitrary user pointers by default. Well designed is something else.

The problem is not the language, the problem is in the design.


They are written in C


I'd say the opposite is true: there isn't enough evangelising, and not enough people are discovering and accessing projects where security is prioritised.


A project that wants to take off needs to prioritize taking off. The incentives are set. If secure computing is more inconvenient, it will not be used.

In my experience secure computing is a lot more inconvenient. I don't mean Haskell is inconvenient (it is). I mean like, it's way more inconvenient to write in C# or Java than it is to write in plain C if you want to get real work done.


I don't believe that is true. If you look at the history of the world you'll find that security is rarely enough of a priority for anyone for it to receive real attention. I see no reason for computing to change that trend.


It is a priority when liability and lawsuits come into play.

Speaking of history,

> if a builder build a house for some one, and does not construct it properly, and the house which he built fall in and kill its owner, then that builder shall be put to death.

-- Code of Hammurabi, Babylon.

Naturally, we shouldn't go to such extremes.


The devil is all in how one defines "properly". I've seen a bit of what passes for bureaucratically defined security policy and I'm not impressed. It is largely an exercise in checking boxes for liability reasons and nothing more.


Is there a house or a castle that has not eventually fallen to dedicated siege by a determined adversary?


If anything, liability laws would only make it harder for secure computing to exist and be viable.


Sudo curl is terrible, but is it really that much worse than downloading a DVD image and installing it? The main difference is how casually people give away the keys to the kingdom.


It is worse. Notice that h2odragon didn't include "https" on purpose, because of how many of these install instructions do the same.


I've never seen a curl | bash install instruction without https.


You are lucky. The last time I saw it here on HN was less than a week ago. When I called the author out about this I was called rude and ignorant, and apparently doing so is "nasty", "especially since this is a well-known point of disagreement—not to mention well-trodden, therefore generic, therefore predictable, therefore tedious, and therefore mostly off-topic".

The software in question was https://github.com/txthinking/nami


The thread in question: https://news.ycombinator.com/item?id=22701891

Dang disappoints me again.


DVDs also don't implement https, and DVD images can be downloaded over HTTP.


I can download DVD images over insecure transport and then check their signature. Not so if I'm piping code straight into a shell from an HTTP server.


But at least with a DVD you have the actual physical copy of what was installed, and you can examine it forensically after the fact, and you may even be able to use it as evidence of wrongdoing. You can't do that with some random file that you downloaded at some point in time (this, incidentally, is why in-browser crypto is hopelessly and forever broken).


To be fair, there often is a key chain in place nowadays for booting a PC. It's a little bit better, even though I don't like how Microsoft basically controls the keys.


One issue with sudo curl is that it may leave your system in an unusable (or unknown) state even if the script you're fetching is not malicious, because your connection can be interrupted and bash will start executing even before the script is fully downloaded.


It is a lot less likely, but it can happen when you first download and then execute a script, too.

Disk read errors may prevent running the entire script or the script may run out of memory, disk space, or other resource limit such as the maximum number of open files or the maximum length of a file path.


It won't happen for a properly written script, i.e. one that defines everything inside a function and only calls it on the last line, so nothing runs until the whole script has arrived.


I suppose you don't do the `curl | bash` thing on your server fleet?

I see how desktops (laptops) are more or less screwed because of the need to run ad-hoc things.

But servers, which are much more controlled and limited environments, could be more secure. A lot of orgs follow rather sane software security policies; if the software itself were less vulnerable, it would help.

For a lot of end users that would make a significant difference, because they run most of their critical software in the browser anyway.


> laugh at OpenBSD finding SMTPD vulnerabilities even now

OpenSMTPD is a relative newcomer -

Sendmail: 1983

Exim: 1995

Postfix: 1998

OpenSMTPD: 2008 (2013 as active MTA in OpenBSD)


The counterpoint to that is that there is a lot of prior art, and prior mistakes to learn from, for something begun in 2008. So the assumption that it will take the same length of time, and the same learning and development process, to reach parity with other software has a flaw.


OpenSMTPD has only been going for 7 years and it's gone through a bunch of changes almost every OpenBSD release.


There's a factor with OpenBSD that isn't mentioned here at all: it has very few developers.

A single developer can hold much or all of the state and important security considerations in their head. A small team can communicate well and can hold to a single style, and members of a small team can ask each other things easily. Single developers or small teams can write secure code in unsafe languages.

The problem is that this doesn't scale. Add a large number of developers and it becomes increasingly difficult to police security in an unsafe language. Safe languages, unit tests, and fuzzing help a lot more here by making large classes of errors difficult or impossible to manifest or catching them automatically.
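
Fuzzing in particular scales with machine time rather than reviewer attention. As a sketch (parse_command() is a hypothetical stand-in for whatever input parser is under test), a libFuzzer harness is only a few lines, and building it with -fsanitize=fuzzer,address turns memory corruption into automatic crash reports:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical parser under test. */
    int parse_command(const uint8_t *buf, size_t len);

    /* libFuzzer entry point, called repeatedly with mutated inputs. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_command(data, size);
        return 0;
    }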

Linux is a bit special here: it has a lot of developers but a small number who are responsible for approving things, and it's so massively important and popular that it has a lot of eyeballs on it and gets a lot of automated testing. Security issues still creep in though.


You might claim that the security design of OpenBSD does not scale, and the observations of the parent article do cast some reasonable doubt on code security.

However, a number of projects from OpenBSD have been widely adopted and are successful in their own right, for example:

    C:\>ssh -V
    OpenSSH_for_Windows_7.6p1, LibreSSL 2.6.4
The two projects above, OpenSSH and LibreSSL, have been even more strongly adopted by Apple, as I understand it. OpenSSH is adopted by just about everybody.

Google Android's "bionic" C library is also based on OpenBSD, and I would imagine that the coding concerns for libc will trickle down.

OpenBSD is more than the sum of its parts, as so much of it is used in other environments.


Bionic took pieces from OpenBSD, as well as other places[1].

[1]: https://en.wikipedia.org/wiki/Bionic_(software)#Components


They crept in enough that Google created the Kernel Self Protection Project.

https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Pr...

One of their first activities was to remove all usage of C's VLAs from the kernel.


Honest question. Why not just completely outlaw code like:

  exec("/bin/sh", "-c", "...");
And its friends? It feels like it does way more harm than it does good.


Because arbitrary "Thou Shalts" do not prevent security holes. Make a restriction, and hackers will find the cracks around it. The only way to stop them is to fill the cracks, and the only way to find the cracks is to look for them. Now, certainly if there was an exec() replacement that was more secure, you should use that; but banning exec() entirely is doing more harm to functionality than good to security.

In particular, the exec() bug here is one of misunderstanding the security guarantees of interfaces; namely, that there are none. Validating inputs is the single most universally known security practice, and yet it wasn't done here. On top of that, not using typed objects further increases potential for attacks. This was just shit security design. exec() had about as much to do with the bug as your brakes have to do with you locking up your wheels when traveling too fast in rain/snow/ice (the point being that you should have been going slower, not that the brakes are defective).
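
For illustration, the usual safer pattern is to skip the shell entirely and pass untrusted data as its own argv element after a "--" sentinel, so it can never be re-parsed as shell syntax. A sketch, with a made-up delivery program:

    #include <unistd.h>

    /* userdata comes from the network; it is handed over as a single
       argv element after "--" and is never interpreted by a shell. */
    void deliver(const char *userdata) {
        execl("/usr/local/bin/deliver", "deliver", "--", userdata, (char *)NULL);
        _exit(127);   /* execl returns only on failure */
    }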


I haven't proposed banning exec entirely. I've proposed banning exec uses that just send data into a shell.


When a user runs 'sh -c "foo"' on the command line, how do you think that is executed?

Taking a step back, Unix's programming model is heavily influenced by the design requirements of the interactive Bourne shell. It's hard to outlaw shell-like things without breaking shells.


Sometimes you need to tell something to programmatically spawn a shell:

    watch bash -c "ps | grep argname"

and similar.


Because it wasn't what actually did the harm in this case. That was a lack of some kind of "--" delimiter, and not using the Bernstein checkpassword approach.

And because there are genuine uses for it. I use it in service ./run programs, for example, where simple tooling is not enough and I want shell script things like variable expansions and conditional operators.

The first reason is of particular note because it is two good practices to get used to: employing "--" as the norm; and not passing user credentials around in either environment variables or command-line arguments. (Bernstein checkpassword receives the user credentials as NUL-terminated strings over a pipe.)

* https://cr.yp.to/checkpwd/interface.html
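
For illustration, a minimal reader for that interface might look like this in C (the exit codes follow the spec; the actual verification is elided):

    #include <string.h>
    #include <unistd.h>

    /* checkpassword-style reader: credentials arrive as NUL-terminated
       strings on file descriptor 3, never via argv or the environment. */
    int main(void) {
        char buf[512];
        ssize_t n = read(3, buf, sizeof(buf) - 1);
        if (n <= 0) return 111;                 /* temporary failure */
        buf[n] = '\0';
        close(3);

        const char *user = buf;                 /* first string */
        size_t ulen = strlen(user);
        if (ulen + 1 >= (size_t)n) return 2;    /* malformed: no password */
        const char *pass = user + ulen + 1;     /* second string */

        /* ...verify user/pass here; exit 0 on success, 1 on rejection... */
        (void)pass;
        return 1;
    }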


> Privilege separation is a key component of OpenBSD security, and interprocess communication is at the heart of it. More focus on what can go wrong with corrupted processes would help. Secure browsers are evolving longer and longer kill chains. smtpd in particular is supposed to be secure against memory corruption in the network processes, but the ease with which it can control the parent is pretty alarming.

This invites the sincere, honest technical inquiry as to why OpenBSD doesn't have an SELinux-style Mandatory Access Control (MAC) facility, but is OK with the traditional Discretionary Access Control (DAC).


They have pledge(2) and unveil(2) instead.


Excellent. Thank you.


I feel like I should add that pledge(2) and unveil(2) take the "opposite" approach to SELinux. Instead of caging an application and hoping the cage is correctly set up, the responsibility for pledge(2) and unveil(2) is intended to lie with the application developer, who restricts system calls as much as possible and only keeps unveiled the paths that are actually required. As far as I know, the idea here is that the developer knows best.

People seem to have been experimenting with applying these restrictions from the outside, but it's generally hard to guess how a large program from ports will behave.


SELinux wasn't even invented for app sandboxing, even though e.g. Android uses it for that. There are LSMs designed for app sandboxing, like Canonical's AppArmor. SELinux (and MAC in general) comes from the NSA world of big government servers with complex policies about which actual human users can access which documents and so on.


Those BSD tools do not protect the system's users against their tools; they protect the tools' programmers against themselves (which, in turn, will make users trust those tools more).

They also do so at runtime, which means they can restrict themselves much more than a generic configuration can.

Once you’ve stated, for example, “this execution of this tool will only read standard input and write file F”, nothing you do, whether it is sloppy programming or the use of third party libraries with bugs, can change that.
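
A minimal sketch of that exact example on OpenBSD, with "F" standing in for the real output path:

    #include <err.h>
    #include <unistd.h>

    int main(void) {
        /* Only file F is visible, and only for writing/creating. */
        if (unveil("F", "wc") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)   /* lock the unveil list */
            err(1, "unveil");

        /* From here on only stdio plus path-writing calls are allowed;
           anything else (sockets, exec, ...) kills the process. */
        if (pledge("stdio wpath cpath", NULL) == -1)
            err(1, "pledge");

        /* ...read standard input, write F... */
        return 0;
    }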


It's worth pointing out that Chrome is pledged and unveiled. Even a compromised Chrome can only modify the downloads directory.


Ideally, that's how SELinux would work as well. The application developer can be the one writing the SELinux config... it's just that in practice, few of them do, so it's up to the community to do instead.

The rub being that SELinux is mandatory, so if you don't provide some kind of profile, it simply won't run.


> OpenBSD aims to be a secure operating system

Pretty idiotic aim if you're writing it in C.


AFAIK (correct me if I am wrong) OpenBSD does not have a single unit test.

No unit tests.

There is so much code that parses text input and yet I can’t find a single test that actually shows that the code works and that it properly deals with malicious input.

How do OpenBSD developers test? How do they make sure changes do not regress things?

Why is unit testing just not a thing? (Same for many other C based projects)


You're wrong. They are in their own directory in the source tree (https://cvsweb.openbsd.org/src/regress/).

ld.so for example is pretty well exercised (https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/regress/libexe...)


Can you point me to a unit test that validates for example parsing of incoming commands to opensmtpd or ftpd or ntpd?

Nothing. It is just not a common practice, it seems.

Now I don't think unit tests are a silver bullet. But having just code and zero tests is definitely a huge red flag for me.


Which one of these bugs would have been prevented by unit tests? Probably none, because you'd have to know about the bug in order to craft a test that exploits it.


Unit tests could prevent reintroducing old bugs.


You have regression tests for those.

http://cvsweb.openbsd.org/src/regress/


This subcategory of unit tests is known as regression tests, and those exist too.


Sometimes that is definitely true.

But very often, when you spend time writing unit tests, you discover limitations and bugs in your code as you come up with test scenarios.

The only thing that the FTP bug needed was a "test_redirect_responses()" test. The SMTPd bug was all about parsing user input. Ideal candidate for unit testing and probably even some fuzzing.
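
A sketch of what such a test could look like (parse_mail_from() and its return convention are hypothetical stand-ins for smtpd's actual parsing code):

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical parser: returns 0 on success, -1 on rejection. */
    int parse_mail_from(const char *line, char *addr, size_t addrlen);

    int main(void) {
        char addr[256];
        /* A CVE-2020-7247-style payload must be rejected, not passed on. */
        assert(parse_mail_from("MAIL FROM:<;sleep 66;>", addr, sizeof(addr)) == -1);
        assert(parse_mail_from("MAIL FROM:<ok@example.org>", addr, sizeof(addr)) == 0);
        return 0;
    }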


FTP doesn’t have redirect, so the original code wouldn’t have that test.

The writers who added handling of HTTP would only have added it if they had realized the HTTP library they used could do a redirect. Even if they did, they would have to have thought about redirecting to some specially crafted URL.


Excuses excuses excuses ...


Hi Ted,

Please change your page design so that the content is centered horizontally. Otherwise we have to swivel ourselves toward the left side of the screen to read the article.

Sincerely, My Eyes




