A survey of BSD kernel vulnerabilities [pdf] (defcon.org)
169 points by beliu on July 27, 2017 | 47 comments

This was a great preso.

I think we need to fund regular audits of FreeBSD that use this type of mindset.

You might find "How to find 56 potential vulnerabilities in FreeBSD code in one evening", "PVS-Studio delved into the FreeBSD kernel", and "Weaknesses in GCC, Clang and FreeBSD Code" interesting reading as well:

* https://www.viva64.com/en/b/0496/ (https://news.ycombinator.com/item?id=14057568)

* https://www.viva64.com/en/b/0377/ (https://news.ycombinator.com/item?id=11131532)

* https://www.viva64.com/en/b/0487/ (https://news.ycombinator.com/item?id=13894115)

Thanks for these!

I wonder if the foundation would consider putting money into such a task?

Impressive that FreeBSD only had 20% more bugs than OpenBSD despite having 200% more lines of code.

Considering that this was apparently one person doing investigative research, covering a very small fraction of each code base (FreeBSD has about 9 million LOC, OpenBSD about 2.9 million), I do not think it is reasonable to state that FreeBSD "only had 20% more bugs", but rather only that the investigator found 20% more bugs in it in his research. If I had to guess, he probably examined a similar number of LOC and spent a similar amount of time on each OS, but the slides do not say.

If he had spent three times more time on freebsd than on openbsd I would agree with you.

This raises some interesting questions about sample selection when statistically or probabilistically analyzing a piece of software.

In statistical analysis you need to have a clear definition for your "unit of observation". Is it individual lines? Is it individual methods? Is there some way to break down code into functional units? Is it entire code paths?

Without a well-defined unit of analysis and a consistent sampling scheme, extrapolation is difficult or impossible using conventional statistical tools without making some very big assumptions.

Ilja is one of the baddest dudes out there. Excellent work once again, chapeau!

OpenBSD stopped claiming "N years without a localhost hole in the default install!" in 2000. See https://web.archive.org/web/20000815063126/http://openbsd.or... and https://web.archive.org/web/20001110110500/http://www.openbs...

But page 6 of this PDF has OpenBSD developers going on about Linux and its 20 localhost kernel security holes in 2005. Seems a bit dishonest.

There are numbers greater than zero and less than 20.

And there's the Chinese proverb, roughly translated and paraphrased: "the soldier who ran away 50 paces mocks another who ran 100 paces for his cowardice" [0].

I get it, preventing local privilege escalation on *NIX is hard, and I appreciate OpenBSD's focus and stance on security vs. say Linus's more laid-back attitude. And having arguably the fewest of them is an achievement, even if it's not zero. Let's focus on making OpenBSD better and not on belittling others' shortcomings.

[0] https://zh.wikipedia.org/zh/%E4%BA%94%E5%8D%81%E6%AD%A5%E7%A...

Does OpenBSD have so few vulnerabilities because no one looks for them (compared to Linux/Windows) or because it's actually more secure?

I have an OS I coded myself that no one has ever found a vulnerability in.

I would imagine it's a bit of both, but one point of comparison in OpenBSD's favor would be LibreSSL vs. OpenSSL: the number and severity of issues in the OpenBSD-maintained project has been significantly lower.

I was at the DEF CON presentation and talked with Ilja after.

The answer is a bit of both.

No one can answer this question.

FYI, since you're here: visiting your site on FF in Windows 10 is throwing an insecure error, saying you're using an invalid certificate. 100% possible that's somehow an error on my end - but just in case it isn't, thought you'd want to know. Sample URL throwing it: https://www.tedunangst.com/flak/post/books-chapter-three

Side note: I'm hugely grateful for all your work on OpenBSD.

Same in FF on Mac OS X:

www.tedunangst.com uses an invalid security certificate. The certificate is not trusted because the issuer certificate is unknown. The server might not be sending the appropriate intermediate certificates. An additional root certificate may need to be imported.


And in Chrome:

Attackers might be trying to steal your information from www.tedunangst.com (for example, passwords, messages, or credit cards). NET::ERR_CERT_AUTHORITY_INVALID

Despite the browsers' dire warnings, you are still far more protected visiting a website with a self-signed certificate than visiting a plain http website.

M. Unangst xyrself said:

> Yesterday, reading this page in plaintext was perfectly fine, but today, add some AES to the mix, and it’s a terrible menace, unfit for even casual viewing.

-- https://www.tedunangst.com/flak/post/moving-to-https

Not sure how well this stacks up against automating LetsEncrypt these days.

March 24, 2017 -- "During the past year, Let's Encrypt has issued a total of 15,270 SSL certificates that contained the word 'PayPal' in the domain name or the certificate identity. Of these, approximately 14,766 (96.7%) were issued for domains that hosted phishing sites" [1]

LetsEncrypt isn't perfect either. You've got to be cognizant of what details you are sharing over the connection, regardless of who signs the certificate.


I agree!

> You've got to be cognizant of what details you are sharing over the connection, regardless of who signs the certificate.

So why not cut out the browser errors, for free? Somehow this vaguely feels like Don Quixote straining hard to hold onto the original definition of the term "hacker".

Props to this guy for sticking to his beliefs; I don't mean for this to be interpreted as saying anything should be changed.

There's no reason phishing sites shouldn't have encrypted connection. Case closed.

more protected from what?

- protected from third parties reading the data in transit

- protected from alteration of the data in transit without either end's knowledge

and, IF you have a means to authenticate the owner of the certificate outside of the regular certificate signing process

- protection from impersonation of the other end of the connection

my point is that TLS can still be useful without the participation of a certificate authority. but you have to be careful.
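As a sketch of what "authenticating the certificate outside the regular signing process" can look like in practice, here's a minimal certificate-pinning check in Python. The fingerprint would come from an out-of-band channel (published by the site owner, or recorded on a first visit you trust); the hostname in the comment is just an example:

```python
import hashlib
import hmac
import ssl

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def check_pin(der_bytes, pinned_hex):
    """Compare against a fingerprint obtained out of band,
    using a constant-time comparison."""
    return hmac.compare_digest(cert_fingerprint(der_bytes), pinned_hex.lower())

# To fetch a live certificate for pinning (no CA validation involved):
#   pem = ssl.get_server_certificate(("www.tedunangst.com", 443))
#   der = ssl.PEM_cert_to_DER_cert(pem)
#   ok = check_pin(der, pinned_hex_from_out_of_band_channel)

# Demo with placeholder bytes (a real DER cert comes from the handshake):
demo_der = b"not a real certificate"
demo_pin = cert_fingerprint(demo_der)
```

If the pin matches, you get the impersonation protection the comment above describes without a CA ever being involved; the hard part, of course, is distributing the pin securely and rotating it when the key changes.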

Right. I've visited Ted's site in the past, and had no certificate warnings. Now, suddenly, I am getting warnings. So isn't that the exact scenario in which I should be very suspicious?

It's just a blog; I'm not submitting anything, but still it's an indicator that something fishy might be going on.

Given that one is among the most widely used OSes in the world and the other is a fringe OS, which surely must affect the likelihood of detection, I'm not sure any number above zero makes this a favorable comparison for OpenBSD.

Ted: I wanted to let you know that the one-page archive listing on Flak is no longer working.

>Seems a bit dishonest.

Welcome to *BSD

Are there any CVEs for these? Have they been fixed?

I only subscribe to OpenBSD, but there have been 24 fixes in the last few weeks for issues reported by Ilja van Sprundel.


Seems interesting. Is there a recorded talk to go with this?

Real soon now (not sure on the timeframe)


  expired pointers, double frees, underflows, overflows, signedness, NULL derefs, division by zero, memory leaks
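To illustrate one of these classes, here's a sketch of a classic signedness bug, written in Python but emulating C's fixed-width integer semantics: an attacker-controlled length arrives as a signed 32-bit value, slips past an upper-bound check while negative, and is then reinterpreted as an unsigned size for the copy. All names and sizes here are hypothetical:

```python
BUF_SIZE = 256

def to_int32(n):
    """Reinterpret as a signed 32-bit int, like C's (int32_t)n."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

def to_uint32(n):
    """Reinterpret as an unsigned 32-bit int, like C's (uint32_t)n."""
    return n & 0xFFFFFFFF

def buggy_bounds_check(user_len):
    length = to_int32(user_len)   # signed, attacker-controlled
    if length > BUF_SIZE:         # negative values sail through this check
        return "rejected"
    copy_len = to_uint32(length)  # memcpy takes a size_t: huge value
    return f"would copy {copy_len} bytes into a {BUF_SIZE}-byte buffer"

print(buggy_bounds_check(100))         # legitimate request
print(buggy_bounds_check(0xFFFFFFFF))  # -1 as int32 -> 4294967295 as size_t
```

The fix is to reject negative lengths explicitly, or better, to treat the length as unsigned from the moment it crosses the trust boundary.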


Have the BSD devs seen this? Have the required bugs been filed?

What is the likelihood that we will see a major operating system written in a safe language such as Rust in the next 10 years?

The problem is more political than technical.

There have been operating systems written in mostly safe systems programming languages since the 60s, 10 years before C was born.

Burroughs created ESPOL and NEWP, IBM did their RISC research with PL/8, Xerox had Mesa/Cedar, the UK Royal Navy had Algol-68RS, and there were many, many others, with Midori's System C# being the latest example.

If there isn't political will, there isn't the motivation to overcome any technical hurdles that might appear.

Now, with even Microsoft adopting a Linux compatibility layer, we are reaching the age of UNIX monoculture, and with it a dependence on C for kernel development.

So the question is who will have the political will to break this reality, and drive the technical adoption regardless of the hurdles that might appear along the way.

That depends: do you mean that it has to be written in the next ten years and later become widely used, or that it has to become widely used in the next ten years? If the former, maybe, if Redox becomes mature enough to be widely used a decade or so down the line (with "widely used" meaning "used as much as at least one of the major BSDs", since those are what the PDF discusses). If the latter, almost definitely not (and this is coming from a huge Rust fanatic).

The best-case scenario in that time frame would be to implement portions or new features of an existing kernel in a safe language. Maybe it could start with officially supported driver modules in Rust?

Yeah I agree. It would be great if parts of the kernel could be written in a safer language. I think in 10 years that will be happening in at least some widely used OS, but most likely one that only supports a small range of architectures.

Various OpenBSD devs have had mixed (at best) feelings about Rust, recognizing it as a language that is by no means a silver bullet for safe code. Part of this was/is also flavored by the fact that LLVM (and therefore Rust) supports only a subset of the hardware platforms supported by OpenBSD.

However, now that OpenBSD is continuing to switch from GCC to LLVM/clang as the default C compiler for an increasing number of targets and as an increasing number of non-LLVM-supported targets are discontinued, I reckon the idea of some or all of OpenBSD being incrementally reimplemented in Rust is - bit by bit - becoming a little less far-fetched. It also helps that Rust is liberally-licensed.

That all said, I'm not really holding my breath for that happening anytime soon. Until then, Redox continues to look interesting.

I'm really excited for Redox.

That being said, I still think the best strategy will be to start porting different portions of the kernels to Rust, perhaps starting with drivers. There have been a few examples of Rust drivers for the various OSes out there.

First, the maintainers have to believe that this is better than continuing with C. That doesn't seem to be something everyone believes.

It's been done in the past. OSes come to dominate mostly for marketing rather than technical reasons, much like startups. Integration with legacy systems and rapid development increase the chance they'll get uptake. The existing stuff is mostly C, C++, etc., so the new stuff that dominates tends to reuse that. Redoing it all in a safe language will be a lot of work. That's why high-assurance security has focused on redoing the smallest, most-trusted parts of the stack, or on transformations of legacy code like SoftBound+CETS or the CHERI CPU that address root problems.

Although I doubt we'll see a big one, there are already smaller projects that do useful stuff with safer languages. ExpressOS comes to mind, given its kernel is C# and Dafny and it runs Android apps, which are Java. The L4 kernel and Linux remain in the system, with L4 replaceable by safe alternatives like Muen in the SPARK language. Replacing Linux will be a lot harder. ;) However, if you want mobile or other apps that aren't Android-compatible, you might be able to use work like Redox, JX, or Oberon to eliminate unsafe languages entirely.


Another alternative OS is A2 Bluebottle from Wirth's people at ETH, written in an Oberon dialect. The main Oberon OS is designed for simplicity and teaching; A2 is a batteries-included OS with a GUI designed to be useful. When I tried it, it ran snappily, with networking, editor, compiler, and so on. Very minimal given only a few students worked on it, but it is usable and extendable. Below was the first link I found in Google, so I haven't vetted it.


Redox OS in Rust can do a lot currently. They plan to expand it more. Already has a GUI, shell, networking, and the standard library of Rust.


Well, assuming by "major OS" you mean a full blown OS and not just a kernel, Android is a major operating system and large chunks of it are written in Java. So I think that is the closest humanity has got so far.

I was tempted to say ChromeOS but I don't think anything non-trivial in ChromeOS is actually written in JavaScript: that's C++ for the OS people and JS for everyone else. Android is more consistent in that core OS components are written in the same way as user apps are.

If I had to place bets, therefore, I'd say that Android is best placed to get closest to the ideal of a fully safe OS within 10 years. I have no idea if Google would do it, but they have the funds, the security commitment and there's a clear path opening up for them to make nearly all userland code in Android safe. But it doesn't look like "rewrite everything in Rust". It looks like this:

Continue to improve ART's JIT and AOT compilers, possibly teaming up with the Graal project. The Java guys are doing a lot of work right now on adding support for value types and subsequent large upgrades to support for vectorisation (with value types you can represent 256-bit-wide primitives and things like that). This unlocks the potential to port parts of the most security-vulnerable and performance-sensitive components, like the media server and media codecs, into Java or Kotlin without much performance slowdown (the bulk of the most important H.264 decoding is done in hardware anyway).

But there will still be lots of C/C++ code in Android, and rewriting it would be very expensive. So a feasible next step is to swallow the bitter political pill and team up with Oracle Labs to get access to SubstrateVM + Managed Sulong (or wait and see if they are open sourced). This rather magical combination of tools appears to offer the potential to run anything LLVM can compile in a way that eliminates all forms of memory unsafety, including buffer overflows, out-of-range reads/writes, double frees, etc., at a nearly trivial cost (around 10-20% at most):


If this technology lives up to its potential, and it seems to be in with a very good chance based on progress so far, I suspect we'll see a loss of interest in Rust over time ... at least for the "make existing software secure" use case. Managed Sulong + SubstrateVM can run C/C++ code that is strictly standards-compliant (it struggles with code that makes spec-violating assumptions, as it exploits undefined behavior very aggressively), but it's much easier to fix compliance issues in an existing codebase than to rewrite it from scratch.

The Android team have already switched to OpenJDK and abandoned their plans to rewrite the Java compiler, so with the resolution of the Oracle lawsuits they do seem to be technically moving closer to the Java platform rather than away from it. Collaboration between Google and Oracle on this doesn't seem as unthinkable as it once did.
