
Multiple Bugs in OpenBSD Kernel - hassel
http://marc.info/?l=oss-security&m=146853062403622&w=2
======
djcapelis
It sounds like one local privilege escalation (possibly?) and a series of
crashers?

Honestly walking away with those being the highest severity bugs is a credit
to the OpenBSD team and their focus on security. They're totally bugs and it
sounds like they're getting fixed immediately, but... many kernels fix these
types of things all the time and don't even consider them security bugs.

~~~
viraptor
Those bugs come from syscall fuzzing. Local privilege escalation is the worst
thing you can get from it, so the results are as bad as they can be (apart
from finding multiple privilege escalations that is).

~~~
djcapelis
I see your point, but it doesn't happen to be true: a bug in recv, send, or
other networking syscalls could certainly be much worse than a local priv
escalation.

And seriously, only one local priv escalation and some crashers is still a
pretty light haul for a good fuzzing session from a competent team. The sky is
not falling for OpenBSD today.

~~~
tptacek
I'm not sure I follow. Certainly, bugs in the TCP/IP subsystem can be worse
than syscall bugs. But what's the bug in recv(2) that is going to be worse
than that?

I don't think anyone thinks the sky is falling. To me what's interesting about
this is the approach Jesse and Tim took, and how quickly it generated very
straightforward panics.

~~~
djcapelis
If there were a systematic bug in receive or send where a certain string of
data corrupted a buffer, allowing for code execution, exploiting it remotely
would be simple if there's any code on the machine using that syscall.

I'm not saying this is a likely bug, or even something vaguely reasonable, but
it could totally be found with local syscall fuzzing, so local priv escalation
isn't the worst bug you can uncover.

I'm mostly being pedantic here.

------
SEJeff
And for those not aware, Project Triforce is NCC's effort to run the
wonderful fuzzer American Fuzzy Lop on everything:

[https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...](https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2016/june/project-triforce-run-afl-on-everything/)

------
epmatsw
I wish someone would release something like Fuzzing At Home. I've got computer
power to throw at it, but I don't really have the expertise to do the setup
work...

~~~
runeks
The last time I tried to compile an executable using all the afl fuzzer magic
compile-time stuff, I gave up, so I have the impression that CPU time isn't
the bottleneck here.

~~~
quentusrex
It took me a day to understand the tooling required to get AFL working. Now I
can spin up a new test case for a library within a couple of hours. Once you
have the test case, it is CPU bound. One series of tests, against libre (the
SIP library behind baresip), ran across 4 machines (24 cores each) for a day
before it was 100% sure it had tested every code path looking for a SIP
protocol decoding error through fuzzing.

~~~
voltagex_
How did you split up the work across the machines?

~~~
justinclift
AFL has documentation on how to set that up, e.g.:

    
    
      https://github.com/mcarpenter/afl/blob/master/docs/parallel_fuzzing.txt
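
In short, the instances share a sync directory: one main (-M) instance plus
any number of secondaries (-S), synced across machines by copying state
directories around. Paths, instance names, and the target binary below are
illustrative:

```shell
# One main instance and one secondary sharing the same sync dir;
# instance names (fuzzer01, fuzzer02) are arbitrary.
afl-fuzz -i testcases -o sync_dir -M fuzzer01 -- ./target @@
afl-fuzz -i testcases -o sync_dir -S fuzzer02 -- ./target @@

# Across machines: periodically copy each box's sync_dir entries to the
# others (the docs suggest rsync/ssh), so instances pick up each
# other's queue entries.
rsync -a sync_dir/ otherbox:sync_dir/
```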

------
lbradstreet
It's striking how many of these issues cause panics because of assertions that
were already in the code. Without good assertion use, I would assume that many
of these would have been much worse.

------
jwilk
More readable archived copy:
[http://permalink.gmane.org/gmane.comp.security.oss.general/1...](http://permalink.gmane.org/gmane.comp.security.oss.general/19946)

(I'm not a fan of gmane, but it did a better job with this particular mail
than the alternatives.)

~~~
jfb
The gmane interface is so bad it beggars belief.

~~~
lomnakkus
Heh -- it kind of makes sense since it's really meant to be used via NNTP.
(That's the whole raison d'être for GMANE.)

~~~
jfb
Yeah, I get that, but the HTML interface is so awful I think we'd all be
better off if we just used Gnus to read it.

------
djgs
Ted U. announced that usermount will be removed in OpenBSD 6.0

[https://marc.info/?l=openbsd-announce&m=146854517406640&w=2](https://marc.info/?l=openbsd-announce&m=146854517406640&w=2)

~~~
ams6110
So I wonder what the approach will be to allow users to mount USB flash
drives?

~~~
ben_bai
doas(1), sudo(8), hotplugd(8), ...

------
gbrown_
Firstly glad to see these reported and fixed.

Secondly, how many of these were remotely exploitable? Yes, OpenBSD is limited
in its exposure with the "base system", but it seems like few of these pose
as "holes" for the system. Arguably pledge(2) could factor into this, maybe?
I'll let someone better qualified comment.

Again, glad to see these fixed. But is the baseline free user access to the
whole system for NetSec/OpSec these days? I don't know, maybe it is.

I'm just reluctant to have to read through the HN "OMG OpenBSD had CVEs" and
"C is insecure" comments. Arguably the latter has some merit, but C isn't
going away anytime soon, for better or for worse.

~~~
caf
Direct remote privilege escalation exploits in kernels are relatively rare.

The usual attack path is exploiting an application bug for local user access,
followed by a local privilege escalation.

------
smhenderson
While it's a bit surprising to see so many at once in OBSD, kudos to the team
for the rapid response, and to those who found the bugs for their responsible
disclosure.

------
lifeisstillgood
Fuzzing is not something I have looked at seriously (to be honest it seems
like asking clients to take up running before walking) but the outcomes are
... Impressive.

~~~
Matt3o12_
How come? What better way to test your functions than giving them
random/unexpected input and seeing how that works out?

Would you not want to try out what happens if someone pulled a door you built
that is only meant for pushing? There will always be that one person who
ignores the sign and tries pulling, just like there is always that person who
uses your function in an unexpected way.

~~~
runeks
The issue I have with tests is that I feel like I'm just implementing my
algorithm a second time, in a different wording (language/domain), hoping that
this will catch some unknown bug(s).

If a test suite covers all aspects of an implementation, it's basically a
second implementation, and then the question becomes whether the time spent
writing this test code would actually have been better spent looking over the
actual code that defines your algorithm. If we can't detect errors in code by
reading it, I don't think tests will help us much.

Furthermore: if our algorithms require a second "test implementation",
shouldn't we also write a test for the test, to confirm that the written test
actually tests correctly? What good is a broken test? If you've built a
plywood door, and a machine for testing that door, wouldn't it be useful to
have a machine that tests that your testing machine actually tests the door
properly?

All jokes aside, I really enjoy automated, high-level tests, which mimic
actual user behaviour. I think this is different because we're no longer
implementing an algorithm a second time around (as a test), but rather
encoding user behaviour (of the algorithm) into an algorithm (new code).

~~~
sverige
> If a test suite covers all aspects of an implementation, it's basically a
> second implementation, and then the question becomes whether the time spent
> writing this test code would actually have been better spent looking over
> the actual code that defines your algorithm. If we can't detect errors in
> code by reading it, I don't think tests will help us much.

The OpenBSD code has been audited many times by very experienced and careful C
experts. Fuzzing apparently found the very few things they hadn't already
found by reading the code.

~~~
runeks
> The OpenBSD code has been audited many times by very experienced and careful
> C experts. Fuzzing apparently found the very few things they hadn't already
> found by reading the code.

That's definitely comforting, but I guess my point is that the fact that bugs
can even hide in plain sight in the first place is the root cause of the
problem. If we can look at a specification of what something is supposed to do
- the code - and not see that it does something entirely different, then the
problem is with the programming/specification language.

The very purpose of a programming language is to make the specification of
computer programs readable (rather than reading assembly). If a specification
language needs a computer program to go through the specification and see if
it specifies things that we didn't intend to, then that specification language
misses the very point of being a language in which to specify things in the
first place.

~~~
et1337
I don't think it's that simple. A lot of bugs are the result of disparate
pieces of code -- each of which is fairly easy to verify by reading it --
interacting in an unforeseen way.

~~~
runeks
I would argue that this effect is not a product of code but of global state.
If you have a pure function that transforms some input to an output, all you
can do wrong is use the wrong tool for the wrong job, which the type system
should, preferably, prevent you from doing. Using the wrong function for the
wrong job means we haven't restricted the function's input well enough, by
clearly defining the types of the arguments and not allowing for meaningless
invariants.

If a specification language requires the writer/reader to be aware of the
entire specification in order to verify whether an isolated piece of
specification is valid, that specification language is not of much value, I
would argue.

It occurs to me that the problem with traditional languages is that they do
not allow the direct reference to information (values); only to variables
which contain values (information). In Haskell, if you want a place to store
information you create a TVar (variable), while you are forced to do this,
always, in traditional languages, if you want to work with information/values
properly. You're forced to keep information in a register if you want to
manipulate it, and always refer to the register, when the information it
contains is what you're really concerned with. Having to both think about the
values/information _and_ where you stored this information is double work. Why
not reference the information directly?

~~~
et1337
I agree that many bugs occur due to state. I disagree that languages,
particularly imperative languages (which you seem to be attacking here), are
significantly responsible for bugs.

Syntax is not really an issue for any programmer with a reasonable amount of
experience. I very rarely have trouble accurately transcribing ideas to code.
More often, my ideas are faulty.

Physics engines in games are a perfect example. They're a perfect use case for
functional programming, there's not a lot of complex state ideally, and yet
even simple simulations are notoriously fraught with unexpected behavior. See:
thousands of YouTube videos of physics glitches in games.

~~~
runeks

> Syntax is not really an issue for any programmer with a reasonable amount
> of experience.

> They're a perfect use case for functional programming, there's not a lot of
> complex state ideally, and yet even simple simulations are notoriously
> fraught with unexpected behavior. See: thousands of YouTube videos of
> physics glitches in games.
I'm not sure I'm following. Are you arguing that because physics engines are a
perfect use case for functional programming, they are implemented as pure
functions? I don't know of any physics engines that are implemented this way;
hence the bugginess.

If an implementation of 3D scene rasterization contains glitches, the
language used is most definitely the issue, because producing glitch-free 3D
animations is a solved problem. No programmer writes a physics engine to
produce glitches, so if they magically appear, even though they aren't visible
in the spec, the spec language is faulty.

I guess I'm arguing that the problem is that most popular languages allow you
to write programs/specs that are invalid (will crash when executed). So we're
writing specifications in languages where we can't even say whether the
specification is valid or not (actually implementable). If humans produce
buggy programs, that is proof that we're using the wrong tool, I'm arguing,
because no one intends to produce bugs.

Haskell is a bit scary, at least it was for me in the beginning, because it
can seem so hard just to get your program to compile. But that's because the
compiler actually verifies that your program is valid - that no unhandled
exception will occur. If people are having trouble writing valid, purely
functional programs, perhaps this is, in part, because their idea is
unimplementable, and Haskell is telling them this by refusing to compile
their spec, while other languages inform them of it via an unhandled
exception that crops up a year later, after the code is in production.

~~~
et1337
> No programmer writes a physics engine to produce glitches, so if they
> magically appear, even though they aren't visible in the spec, the spec
> language is faulty.

I'm saying that popular physics engines generally have bulletproof, tried-and-
tested code, and are even implemented in a very functional way on a conceptual
level. Despite this, they exhibit glitches. The glitches don't appear
magically out of the spec language, they are an inevitable result of the
discrete nature of realtime physics simulations.

This is meant to be a counter-example to your assertion that bugs come from
miscommunication between computers and humans. My argument is that more often
than not, we communicate our ideas perfectly, but our ideas are flawed. In the
case of physics engines, they are flawed by design in order to compromise
accuracy for performance.

~~~
runeks

> In the case of physics engines, they are flawed by design in order to
> compromise accuracy for performance.

This is an odd definition of "flaw" to me. I would call it a design choice,
trading accuracy for speed, which we're always forced to do, since we don't
have infinite compute power. Would you call H.264 video and MP3 audio flawed
by design because they prioritize bandwidth reduction over lossless
reproduction?

> The glitches don't appear magically out of the spec language, they are an
> inevitable result of the discrete nature of realtime physics simulations.

You seem to be arguing both that glitches in physics engines are inevitable
and that they're a design choice (sacrificing precision for speed).

------
hiphopyo
Compared to other OSes and distros this is one in a million.

------
rhabarba
Multiple (!) security issues in OpenBSD (!).

OK.

We're doomed.

~~~
PeCaN
At least these (seem to) result in a kernel panic instead of privilege
escalation or reading unauthorized memory.

But in a world where even djbdns had a (minor) security issue, I don't think
it's reasonable to expect any non-trivial software—particularly software
written in C—to be fully secure.

~~~
ams6110
I found it interesting that most of the patches are one- or two-line changes.
Shows how one tiny oversight can open up a DoS or security vulnerability.

