AFAICS, this was exposed by the addition of sockfs_setattr() in v4.10. So it's incorrect to claim that kernels older than that are vulnerable, even though the code being fixed was older.
Also, note that there may not actually be a proof-of-concept exploit yet, beyond a reproducer causing a KASAN splat. When people request a CVE for a use-after-free bug they usually just assume that code execution may be possible. (Exploits can be very creative.)
It depends on manipulating or knowing what gets allocated in the same region. If you know what gets written to the freed space, or you can force the right allocations to happen, you get an almost-arbitrary write.
For a made-up example, imagine something allocates a structure with a function pointer struct->foo, then frees it but still holds a reference. Now you call a function which allocates a buffer in the same place and copies your arguments into it. When you make the original code call struct->foo again, you can call any function you want.
Controlling where the OS allocates an important structure seems very difficult and mistakes would likely result in corrupting unrelated data and a crash later on.
It’s not difficult. Most modern allocators are pretty similar, and quite predictable. They keep a cache (usually per-thread or per-CPU) of free allocations for each allocation size[1], in the form of a stack. malloc() pops the last allocation from the stack, and free() pushes onto it. If you malloc() when the stack is empty, there’s some more logic to obtain a new chunk of allocations, but that’s not relevant here. Once the target (kernel in this case) frees an object, the allocation will be at the top of the stack, and whatever object it allocates next with the same size, on the same thread/CPU, will likely take its place. In the case of a kernel, most allocations are in response to syscalls or other requests from userland, so you can usually control what gets allocated next.
There usually is some risk of the allocator not doing what you expect, e.g. because of intervening mallocs/frees from other threads, which indeed is likely to lead to a crash. However, most exploits don’t need to be 100% reliable anyway. And there are various ways to increase reliability.
Caveat: Not all allocators work this way, and even among the ones that do, each has its unique quirks. However, the basic "last in, first out" behavior is quite common across a multitude of implementations. IIRC, the default Linux allocator (it has three to choose from!) is one of them.
[1] Actually range of allocation sizes. Such allocators will usually have a fixed list of “size classes”, which are just specific allocation sizes they support; allocation requests that fall between size classes will just be rounded up to the next highest one. Large allocations (e.g. 4kb+) are handled with a different path.
Sure there are. The kernel makes this a bit easier, since you have heaps for objects of a given size (which means it's much easier to overlap the old structure when you know its size). Browsers (for a JS -> native escape) make it a bit easier by allowing you to allocate lots of objects and having pointers everywhere. In many cases you can also spray all over the heap and hope for the best, since allocations are almost always aligned.
What is the kernel-space equivalent of a root shell? presumably you wouldn't try to load your full payload into the kernel space, just try to give yourself root access somehow?
Not sure, but here's some data from Matt Miller at MSFT:
Fellow data nerds: here's a snapshot of the vulnerability root cause trends for Microsoft Remote Code Execution (RCE) CVEs, 2006 through 2017. A few callouts: heap corruption, type confusion, and uninit increased in 2017. Use after free steady y/y but proportionally declined.
https://twitter.com/epakskape/status/984481101937651713
Right...but that doesn't mean it's correct to refer to a privesc exploit as arbitrary code execution. For a privesc exploit to be valuable, you typically need to already have code execution.
EDIT: I concede in another comment that privesc without code execution can still be useful, but my original claim that privesc does not imply arbitrary code execution still stands.
But privilege escalation does not imply "being root", it only means "ability to perform operations as root" (or whatever role can be compromised). If your distribution accidentally packaged a suid-root /bin/cat, that would allow a normal user to perform (read) operations on all files on the system, so it would be a privilege escalation, but it would not necessarily lead to arbitrary code execution as root (unless that allowed you to read a file with the root password, say).
No, since privilege escalation can also mean that you are able to take specific actions that you wouldn't otherwise be able to take. It does not always mean code execution.
No, in some cases code execution is not the goal at all. A privesc can mean admin access to a system you didn't previously have access to; it can mean access to logs, or a direct tap into network data in some systems. You often don't need arbitrary code exec for that, since these are features built into the apps or the systems hosting them.
Not necessarily. You could, for instance, elevate the privileges of a process you can't completely control, which might allow you to read sensitive files or disrupt a system, but not perform arbitrary actions with those privileges.
Overwrite the return address or a function pointer instead. Then, when the address/pointer is used, you can execute whatever you want. Look up Return Oriented Programming.
When drafting a style guide, one of your goals is to make the code as uniform as possible. It removes ambiguity and makes it easier to enforce the rules of the style guide, hopefully with automated tooling.
Mandatory bracing is a step towards more uniformity, and it makes adding statements to a conditional block always safe, at the cost of just one extra line of code.
It also makes your commits just a little bit smaller; if you do add more lines to a conditional block that was previously braceless, now you just get the new lines in the diff, instead of new lines + opening brace + closing brace.
Cowboy coding of course scoffs at all this and people have different values when making tradeoffs between readability and concision, but there are good reasons for enforcing mandatory braces.
The uniformity argument becomes rather silly when applied to other constructs.
Would you ditch switch-cases in favor of if-else chains (leaving aside fallthrough/Duff's/etc for a moment) because the latter is uniform with existing constructs? Would you ditch "for (int i = 0; ..." in favor of "int i; for (i = 0; ..."?
> Would you ditch switch-cases in favor of if-else chains
No. You're talking about applying style rules for if-else conditionals to statements which aren't if-else, which isn't what I was talking about.
This is why the Linux kernel style guide, OpenBSD style, PSR, etc. all have separate guidelines for switch-case along with guidelines for mandatory bracing in if-else.
> Would you ditch "for (int i = 0; ..." in favor of "int i; for (i = 0; ..."?
There isn't a clear-cut rule about this convention in style guides. That's probably in part because the behavior in your two examples is different: in the first case (in C++), i only exists within the scope of the for loop, whereas in the latter case it will continue to exist outside the scope of the for loop.
Whichever one is appropriate would probably depend on context that's outside the scope of a style guide. They're called style guides, after all, not Programming Rules of Law. :-)
You can think of one high-profile incident, but this failure is easy to overlook, so why do you think there aren't more? (Also, I'm pretty sure I've seen others like this in the news; it's not a one-time catch.)
If I had a penny for each of these coding errors I have personally fixed, across various projects and languages, I would probably have...about a dollar. Which, IMNSHO, allows me to sufficiently extrapolate that this error is extremely widespread, and to speculate that it might be lurking in other critical locations.
Yes, all the more reason to disallow it in the standard so it is caught by the linter: green/mediocre programmers write a lot of code (which you most likely use).
You can also add "tired programmers" to your list of easily-confused programmers, and that pretty much covers everyone at some point.
Then color me green and mediocre, because I have been bitten by this before. Or maybe we should have the humility to realize none of us are above silly mistakes.
MISRA C 2012 allows scoped goto. I can't see anything disallowing pointers to functions or continue.
There are advisory rules against unscoped goto and multiple returns. These can be ignored with justification.
Rules are described as 'decidable' if they can be found reliably with static analysis. I'd say that's outside the scope of MISRA C, though, and more of a missing feature of ISO 26262.
A little further down in the document linked: "This does not apply if only one branch of a conditional statement is a single statement; in the latter case use braces in both branches:"
> but all right-thinking people know that (a) K&R are right and (b) K&R are right
> ...
> Rationale: K&R.
I REALLY do not like just calling something correct because it was in the K&R book.
> Also, note that this brace-placement also minimizes the number of empty (or almost empty) lines, without any loss of readability. Thus, as the supply of new-lines on your screen is not a renewable resource (think 25-line terminal screens here), you have more empty lines to put comments on.
Is that really an acceptable justification in the era of cheap 4K displays?
> Is that really an acceptable justification in the era of cheap 4K displays?
Keeping line count down helps readability. Some people use high resolutions with small fonts, but that makes my eyes hurt, so my terminal has 45 lines when maximized.
The only problem that would solve is braindead code like
    if (something)
        doThis();
        andThat();
which should never pass even basic code review. Most compilers (Clang, GCC, etc.) will even warn you about it.
What omitting curly braces after `if` statements does do is make code ~1% more readable, which can have massive cumulative positive results in security and stability with a multi-million line codebase.
Agreed it should, but that doesn't seem to have been the cause of the issue. Looks like there was originally only one statement after the if, but a new one was added, so the braces were added along with it.
A basic linter does not resolve the semantic question. The linter could be satisfied by changing the code syntax given the rule to always include curly brackets after an if to:
    if (something) {}; { do_something(); }
But the question is: is the empty bracket correct, or should the contents of the subsequent scope be cut (or copied) into the scope of the if, or should the if condition be removed altogether? Code styles do not change the intent, and always requiring {} after an if is unnecessary for single statements, given the intent is correct:
    if (side_effect()); else { do_something(); }
Alternative syntax styles for the same semantics do not clarify the intent of a program.
This. A hundred times this. Yes, the linter will only save you from basic errors...but indeed, most of the errors are of this type. With a linter, they can be caught even before making it into source control, much less into code review.
So, the brace style really doesn't matter when it comes to tools catching mistakes. If you turn on the warnings and linters, they'll catch all the common bracing errors. If you don't, most of the suggestions here don't really help much.
Of course it depends on your configuration, but my organization occasionally uses code blocks to denote things like critical sections in baremetal applications. So you'd have
    DINT;
    {
        // code that should not be interrupted
    }
    EINT;
I would expect a linter to call out a conditional with no code block, but our workflow is such that we've usually tested things (at least in a manual testing capacity) prior to running any static analysis, so I don't know for sure.
Yeah. We have spent some time configuring our linters so they match our organization's style - and syntax exists to exclude specific exceptional-but-correct usages from being flagged - but now, if the linter raises any flags, they're almost 100% signal, with only an occasional false alarm*. The rules don't need to be universally applicable :)
(The caveat being, of course, that the linter is only a helper tool - "linter doesn't complain" doesn't guarantee correctness, just that "linter complains" is a fairly certain sign of incorrectness)
FYI, there has never been a Linux kernel that lacked an exploit available to, at least, local users. There is every reason to believe that the current kernel contains at least one such flaw.
DOS. It has no privilege escalation exploits, because by design it has no concept of different privileges.
Personally, I think that as a regular user of a PC, it's the remote exploits you really need to worry about: the ones of the form "connect to the Internet and get pwned without doing anything else", fortunately quite rare. And in this era of user-hostile devices, local privilege escalation can even be friendly, in terms of rooting and jailbreaks.
> DOS. It has no privilege escalation exploits, because by design it has no concept of different privileges.
No, DOS has no protection against exploits, because by design it has no concept of different privileges. Anyone with access to a DOS system is automatically root and can do anything. "No exploits" is not a good description of that state of affairs.
It's a kernel-based crypto API: ciphers, hashes, and keyed hashes (HMAC). It uses sockets to move data back and forth. Because it's kernel-based, there is going to be the usual kernel-switch penalty. Sure, it's hardware-accelerated, but there are user-space alternatives with zero kernel overhead. However, it does let you stash your private keys in kernel space and wipe them completely from user space; you then simply reference the keys you need to use. So in this regard, it can provide a higher level of key protection.
It's used for crypto acceleration, AFAIK. E.g. I've used it to expose a BeagleBoard's AES acceleration to OpenVPN via OpenSSL, to make a small VPN client go faster.
I literally just compiled 4.20.11 for Slackware.
I swear these things are popping up more often. Or, more likely, since Spectre I've just been paying more attention.
Yup. Kind of has my mind racing. And practically speaking, at that point, it feels like I need to employ an aggressive approach similar to our application's: go faster, and accept pain on the outer layer.
Maybe the answer is to do a daily automated recompilation of the kernel, rebuild of the edge LB and FW images followed by automated, staged rollout and rollback. If we're vulnerable anyway, let's just pick up the fixes from yesterday at least and roll back if things go wrong. That sounds really scary, but doable.
Didn't know about KASAN ( https://github.com/google/kasan/wiki ). "KernelAddressSanitizer (KASAN) is a dynamic memory error detector. It provides a fast and comprehensive solution for finding use-after-free and out-of-bounds bugs in Linux kernel. KASAN is available in the upstream Linux kernel starting from 4.0. Can be enabled with CONFIG_KASAN=y."
Looking at the reproducer (in the kernel commit message for the fix), this is a local privilege escalation, unless some software allows a remote user to control the issuing of these calls (which seems unlikely).
I'm waiting for the day where there's one reliable place in the world where someone can look at info about a vulnerability and see very important and basic information like this. I think the interesting technical discussions for security experts get mixed in with the useful executive summary for the rest of us.
Remember when Intel ME bugs started coming out and we all had to go to Hacker News and trade rumors to figure out what was really going on?
The problem is one can only really talk about CVEs within the context of the product. Perhaps a global CVE wiki/discussion page that allows edits and discussion would be a useful resource.
Very good, gets right to the point. I'll try to remember this for next time, thank you.
Still could be improved a lot. I was thinking of writing up a "requirements doc" at some point for what exactly I think would be useful for such things. I'm not sure I'd be so happy with a Wiki, unless it was curated by trusted people. I wouldn't want (for example) malicious people editing a serious report to say it's no big deal just to buy themselves more time to exfiltrate more data.
Various websites, shady forums, Tor sites, and 'hacker spaces'. Sometimes that includes white/gray-hat haunts for exploit code. Pastebin-like sites, github.com, etc. 'Darkweb' and OSINT searches. If there is a known vuln, there is someone out there eager to be 'first' to write exploit code and show off.
It's as if these places are worth something to know about and to keep to oneself. That, or "searching for exploits" is just a phrase to sound important.
It's a use-after-free, which can often be turned into arbitrary code execution by massaging the allocator into letting you write over memory you shouldn't be able to (like a function pointer) and using that to gain control over execution.
I really shouldn't comment on Linux kernel development, given my lack of knowledge, but since this is a use-after-free vuln, doesn't that strengthen the case for moving to memory-safe languages?
It's very hard to do that, because UNIX and C go hand in hand.
Moving away from C, which is a good thing in my opinion, would ultimately mean moving away from UNIX, as those OSes have other cultures.
So what we really need, since UNIX-based OSes aren't going away that fast, is to tame C.
Solaris already did this with hardware memory tagging when running on SPARC.
On Solaris/SPARC, a use-after-free attempt would terminate the process, or in this case trigger a kernel panic.
Google has been pushing for the Kernel Self Protection Project[1].
One of such initiatives is collaboration with ARM for hardware memory tagging on ARM v8.5[2].
Intel had MPX, but apparently it didn't take off and is now scheduled for removal, leaving SPARC and future ARM releases as the only architectures where it is possible to have some hardware control over C memory accesses.
Including Android and ChromeOS, which, while being based on the Linux kernel, don't expose its interfaces directly to userspace. And with Treble, the Android Linux variant is a kind of pseudo-microkernel, with several services running in their own processes and using Android IPC for data exchange.