Hacker News
Linux kernel: multiple x86_64 vulnerabilities (seclists.org)
256 points by jgeralnik on Dec 17, 2014 | 64 comments



> This is likely to be easy to exploit for privilege escalation, except on systems with SMAP or UDEREF

Another reminder of why everyone should be using https://grsecurity.net which provides these mitigations to the Linux kernel via patches. GRSecurity has had KERNEXEC (the analogue of SMEP) for a long time, as well as UDEREF (the analogue of SMAP) https://grsecurity.net/~spender/uderef.txt

If you keep any sensitive data on a Linux server you should seriously consider grsec.

Just last week there was an ASLR bypass posted on oss-security which, of course, grsec already protected you against http://seclists.org/oss-sec/2014/q4/908

There is a lot of drama around the fact that the Linux core devs don't adopt these patches by default. Regardless, Linux is pretty insecure out of the box, and grsec makes privesc via various classes of exploits significantly harder.


> There is a lot of drama around the fact Linux core devs don't adopt these patches by default.

What is the main cause of resistance to implementing these fixes? It worries me that they haven't put forth the effort to do so yet.


Daniel Micay who works on security with Arch Linux explains why GRSecurity hasn't been upstreamed here: http://article.gmane.org/gmane.comp.security.oss.general/150...

And you can see Greg KH's (a Linux core dev) snarky reply here: http://article.gmane.org/gmane.comp.security.oss.general/150...

Basically, a few people have tried in the past, but the Linux core devs are against large patches. And if you look at old threads [1] where people have attempted to break it up into small patches, the core devs have been uninterested.

Just to be clear, we're talking about many 2003-era exploit mitigation techniques not being adopted into the kernel. And as a side effect, every year countless vulnerabilities come out for which proactive mitigations, with up-to-date PoCs, have existed for years.

Greg KH basically said in that thread that it would need to be broken up into tons of small patches. Each patch would then have to be submitted and go through the massive politics of getting it upstreamed. This would require a full-time paid team of people, and the Linux Foundation and similar organizations don't seem to think it's worth paying for a team of security experts to do this kind of kernel-hardening work themselves.

Additionally, the person (or team) behind PaX, whose code now makes up a significant percentage of GRSecurity, is anonymous, and a long time ago (before grsec, I believe) the Linux core devs refused to accept patches from anonymous developers.

Also, for a more meta discussion of how security is handled by the core devs, see Spender's summary in "KASLR: An Exercise in Cargo Cult Security": https://forums.grsecurity.net/viewtopic.php?f=7&t=3367

[1] Spender links to old threads here where people tried breaking it up and submitting small patches:

https://twitter.com/grsecurity/status/541797486479028225

https://twitter.com/grsecurity/status/541797673419145217

https://twitter.com/grsecurity/status/541797780482977792


"I'm glad to help out with this if you can point me at specific examples of things that should be changed."

Doesn't sound the least bit snarky. Submitting small patches isn't unreasonable, and GRSecurity-inclined people would do well to play nice with the kernel dev process.


Totally agree. His reply is an overly polite response to a rather snarky rant on why the Linux devs should just shut up and accept the author's massive merge. Giant projects like the Linux kernel just don't work that way.


No one in that thread is recommending the Linux devs take the monolithic GRSecurity patches flat out. If you read the originally linked thread, Daniel explains why it can't be accepted that way, nor does he propose that it should be.

Rather, attempts to submit it in smaller patches have been met with disinterest. On top of that, security in general appears to be sidelined by the core developers, which has created a large disincentive for developers interested in getting GRSecurity upstreamed to even try (again).


I still really can't figure out how you characterised that particular post as 'snarky'. You complain of 'massive politics', but you're contributing to it with heavy mischaracterisations like that, turning an apologetic, helpful, and informative email into 'a snarky reply'.


A single email will always be missing a lot of context. Just because someone's tone is nice and friendly doesn't mean there isn't a ton of subtext to what is being said. I'll give a few examples:

1. Saying that since no one has yet "paid for a team of people to do it" then it "must not be worth doing"

2. Sarcastically putting "info leak" in quotes (see the KASLR post in my original comment for context on info leaks)

3. Repeatedly saying "I can help out with that" or "just let me know" if you discover a problem, when there is a long history of people doing exactly that and Linux core devs, including Greg KH, largely ignoring them.

Etc, I could go on.

And this is all politics. I never said I was apolitical in the posts above. The whole reason people are saying it would take a team of people to submit patches is because politics.


Saying that since no one has yet "paid for a team of people to do it" then it "must not be worth doing"

Except that it's much less declarative than you're stating ('kind of implies' is pretty far from 'must'), and even has an emoticon added to indicate commiseration: "kind of implies that no one thinks it is worth doing :(". I agree that context can be missing, but at the same time, you shouldn't be significantly changing the visible context like that - you seem to be more about projecting your own issues rather than reading what's on the page when you do that.


Right, I should take exemplary lessons of politeness and politics from Greg and Linus.


Where did I imply that you should pattern yourself after someone else? I'm working from your own complaints and behaviour. You're projecting again.

A nicely ironic reply, though - if you do actually have problems with the way they behave, why invoke their behaviour to defend your own?


Well speaking of projections, I am not pointing to the lack of politeness, nor politics, as the problem in itself here.

I remarked on his snarkiness simply because it is indicative of the problem: there has been a long history of dismissiveness in any discussion of upstreaming PaX/grsec-style mitigations. Since it is not being taken seriously, we will continue to enjoy the side effects for the foreseeable future.


Well, I wouldn't take them off whoever posts under the PAXTeam account to LWN.


Maybe the grsec people should better communicate the advantages. I suggest taking each CVE and listing whether it would have been mitigated by running a grsec kernel, and comparing that to something else (SELinux or whatever).


If there is a kernel privilege escalation then SELinux can be disabled, as Spender loves to demonstrate: https://www.youtube.com/watch?v=WI0FXZUsLuI GRSec does include its own MAC system as an alternative to SELinux, but that is only a small part of it.

PaX/grsec is in a different class of mitigation. I don't really know any competitors besides other implementations of small subsets by different operating systems or hardware manufacturers.

To your other point, I don't think anyone who has been following Linux security for any amount of time thinks that Spender or PaX are in need of proving themselves.


> To your other point, I don't think anyone who has been following Linux security for any amount of time thinks that Spender or PaX are in need of proving themselves.

No major distro carries the patch, and the kernel devs don't want to merge it as it is.

A change in tactics is needed - make it easier for everyone to see how much better things with grsec are. The tweets are good, a summary of those tweets would be better.



I don't think these links speak well for the patches you are talking about. I see a number of instances where the patches are being rejected for legitimate quality issues. It is pointed out that for PowerPC one piece doesn't compile with -Werror, and a bunch of configuration ifdefs no longer build. For ptmx_fops it is pointed out that the old code is better encapsulated and more maintainable if the ops structure gets new members. I did not see this answered. A lot of the diffs insert the "const" keyword in kind of unusual and unconventional places without much explanation, and without looking too deeply I kind of doubt it's the only place it can go to achieve the desired effect. This seems to be confusing reviewers on the thread because they are unused to the pattern. (It's much less of a wtf to see the whole vtable declared const than the individual function pointers, for example.)

Then to get all smug about it and call politics on people for doing a code review, rather than fix the patches or communicate their importance better... They could be doing good work but I don't think they come off well in these threads.


Does systemd even work w/patched grsec kernel? Last I heard Spender was considering writing a security module to get deep hooks into the kernel to handle systemd, so I assumed systemd killed off any hope of more grsec/pax patches making it upstream.


I've tried running the grsec kernel on Arch a few times and never gotten it to boot; it usually just panics immediately. In the same way OpenSSL was a case of nobody wanting to invest in covering your own ass, there is no interest in having a legitimately secure kernel the way grsec proposes. Even on Arch only a few people are working on it, and while it is in the official repos, it really should be in the stock kernel if they wanted to send a message.


It works fine with PaX / grsecurity.


> Linux core devs are against large patches

Really I think it comes down to territoriality. If there was no 'grsecurity' or 'PaX' name or team or whatever, and it was just a random dev submitting a simple feature, they would accept it. But when this other entity comes trying to improve upon their flawed system, suddenly ego gets involved.

I don't mean to rag too hard on the core devs, but many of their decisions are based on gut instinct, which is very frequently misguided or wrong (heuristics are evolved for reacting quickly to emergencies, not making logical decisions). Grsec should have been introduced into the mainline years ago. The Linux kernel's security track record is embarrassing.

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=linux+kerne...


I think it's entirely reasonable for the Linux devs not to accept patches from anonymous developers.

I value my privacy more than most. But when you are contributing code to a public project used all over the world by millions of people, you should at least be willing to identify yourself, or have very good reasons for not doing so. And that's before we get to the nontrivial copyright issues.


Even if that is the case for whatever legal reason, the fact that they don't accept patches from PaX directly doesn't mean they can't provide their own implementations.

For example, the PaX team was the first to implement ASLR for Linux... and it was adopted into the Linux kernel many years later (as well as by Windows, OS X, and the BSDs). https://en.wikipedia.org/wiki/Address_space_layout_randomiza...


I don't think anyone should have to identify themselves in order to contribute to open source, but I will grant you the copyright issues.

The person or entity submitting the code should be irrelevant. Exacting code review and testing establish the code as trustworthy or compromised, not the identity of the author.


Linus hates security and security-related projects. He is against prioritizing security bugs over normal bugs. He is also against embargoes (coordinated releases).

One example of many, many instances:

http://yarchive.net/comp/linux/security_bugs.html


I simultaneously agree and disagree with him.

I think he's wrong because security is increasingly critical, but I agree with him because I can't stand (most of) the infosec profession.

The vast majority of infosec gives zero thought to any concern other than security, resulting in systems that are insanely complex with incredibly poor user experience. Not only does this break everything else, but it's bad for security in the long run. The typical security solution is something that makes it so hard for people to do ordinary work that they either disable it entirely (e.g. SELinux) or work around it (tunneling through aggressive firewalls, etc.) and defeat the whole purpose.

I worked in a research institution once where some departments had people whose informal job description (I was told) was to "help work around security so we can get our work done." If you worked with security, they'd put up so many road blocks and say "no" to so many things that months and months could go by without a single thing being accomplished. There was a whole culture of literally conspiring to ignore and sidestep the security department. People kept doing this even after we got hacked, arguing (again explicitly) that the cost of complying with infosec was greater than the cost of the hack.

Infosec is largely a ghetto populated by people who only think about one thing, and who think that one thing is the sexiest, coolest, most important thing in the entire world. There are other such ghettoes and they all suffer from similar sorts of problems.

I firmly believe that almost nothing can be done well without whole system thinking. You have to look at the entire context of a system, not just the one thing you're building or optimizing. Think about the system from technical details all the way up to how users will interact with it on a day to day basis. A good starting point is to look at whatever you're building and ask "how annoyed would I be if management foisted this on me?" If the answer is "very annoyed," your system has horrible UX.

This isn't the only case where I simultaneously agree and disagree with Linus. Another notable one is C vs C++. Short version: I think C++ is fine and makes it a lot easier to do many things, but anyone writing C++ should have YAGNI and KISS tattooed on their forehead. As with many other "powerful" multi-paradigm languages it takes discipline to avoid over-using the language's features, and most programmers unfortunately have the opposite tendency. They over-abstract and over-engineer as if they're trying to show how clever they are.


I'm from infosec, and I understand your point. However let me provide the infosec side of the story.

Any infosec veteran can tell you that defense is so much harder than offense. If you defend a system you need to be perfectly vigilant, know all the threats, have full knowledge about _everything_ that goes on inside your company/network, you can not let a _single_ system be compromised because once "they" have a foothold inside everything is possible.

Some examples I have seen over the years include dropped usb sticks in employee parking lots, internal taps on traffic disguised as (functional) laptop adapters, bugged phones, hacked printers punching holes in the firewall, mass spam emails that _someone_ is guaranteed to click. And that's just getting _inside_ your network.

Let me ask you: would you open an attachment sent to you by a coworker? Would you ask a "coworker" who's not wearing his/her company badge/ID to identify him/herself? Would anyone notice an abandoned, nondescript black box plugged into the wall somewhere?

If you answered any question with "no" I'd say your security is not up to snuff, but then that's my job.

This is why infosec is so pervasive in everything that even comes near to computers these days. No chance should be given to the attacker, no matter how trivial it may seem and no matter how _inconvenient_. Security should be inconvenient to the point of almost preventing you getting any work done, because anything less is sure to fail you when you need it most.

Just my two cents ;)


I both agree and disagree. :)

The problem is that nobody's looking for deeper solutions to security problems. For most vendors, even enterprise and well financed ones, security is a bolt-on afterthought (along with privacy, a form of security). For most infosec people, infosec is a game of patching dams full of holes with chewing gum. (Or whack a mole if you prefer that metaphor.)

These two problems are related in that there's a feedback loop at work -- vendors don't prioritize security because infosec is bad for UX, and infosec is bad for UX because vendors don't give them anything to work with other than blocking things and hole-patching. There's also the reverse feedback loop in that vendors don't build in good security because infosec isn't delivering well thought out and deep innovations that don't wreck UX.

The whole situation is utterly pathological IMHO.

I gave a mostly theoretical talk on this at a conference called border:none last October in Germany:

https://www.zerotier.com/misc/BorderNone2014-AdamIerymenko-D...

[warning: big honkin' PDF]

I don't consider the ideas there entirely baked and they deal almost entirely with the networking domain, but I think the same "race against ourselves" argument applies to other aspects of infosec.


I've been putting some thought into deeper solutions, and while I don't have one, I do think I have some inkling as to where the problems come from.

The real problems with security are obviously not inherently tied to computers. In very _very_ abstract terms, I think security is closely tied to decision-making systems: not in the sense of talking with others in boardrooms (although that constitutes a decision-making system), but the decisions you make unconsciously to achieve some goal. The type of decision making I'm getting at is well explained by Eliezer over at Less Wrong[0]. The act of 'breaching' security is, of course, altering someone else's decision (to suit your needs) with false/fake inputs that your target fails to verify correctly.

To be secure in the decisions you make, you must verify all the inputs into your personal decision algorithm. For humans this verification is right more often than it is for computers.

We have eyes: we can see who we're talking to. Computers only have a vague notion of 'trust': 'these numbers ensure that who you're talking to really is who he says he is'. A very fragile system that can be broken if enough resources are available to the adversary.

Now I'm not saying that the 'trust' systems of humans and computers differ fundamentally, as both rely on external inspection (e.g. I can't read your mind; I'm still only basing my trust in you on what my senses tell me), but I do say that the 'trust' of computers is held to a much lower standard than that of a human.

We can better verify if what others are saying is true, in any context. Human understanding, under most circumstances, is such that it allows us to test things that others are saying on a huge amount of experience and knowledge, either that of our own or that of other humans.

Computers, on the other hand, only understand the protocols we've written for them. And those are only as good as the risk predictions of the programmer(s) who wrote them. They are very limited and very fragile in the sense that they age and weaken as they get older. What if a breakthrough in prime factorization were to take place at this very moment? In one swoop we'd lose a major part of the security infrastructure of the Internet. There is no reason to believe that prime factorization can't someday be solved efficiently and quickly. A human faced with this situation understands that his/her security protocol no longer suffices, and would change said protocol.

Of course, in the hypothetical scenario of prime-factorization break-through we'd just patch the systems, right? Yes, but then we're back at playing whack-a-mole again.

Last but not least, there are no huge/vast differences in intelligence between two people talking, where one is trying to manipulate the other. Sure, Einstein vs. the village idiot could be considered a 'vast' difference in intelligence, but next to the differences in what constitutes intelligence for computers it is almost negligible. A smartphone with a puny dual-core processor must somehow be resistant against billion-dollar supercomputers; hell, your SIM card, whose clock speed can be measured in megahertz, must be resistant against these supercomputers. It's just not doable; such a difference in capabilities just does not exist between two people.

Of course, when you start mixing people and computers, your security is only as good as the weakest link (the computer). You would not fall for a 'phishing' talk if you were face-to-face with the hacker (who at this point is not inside your network yet). The scenario is akin to the hacker, who's outside your door, asking you to give him the key. You wouldn't fall for this. But in your email program things are different: you can't see a face, can't hear the inflections in the hacker's voice, and he isn't even asking for the key, he's only asking you to click a link...

The solution, in a very general way, would be to design systems such that they are more 'intelligent': some sort of AI security. Don't just make the protocols more complex by adding more rules/functionality; instead, program the computer in such a way that it can be 'inventive' in verifying its inputs. Obviously such a system does not exist (yet?), and it's only wishful thinking that it can exist anytime soon (no infrastructure or backwards compatibility; the software isn't there yet; the hardware might be, but I'm not sure). So for the time being we're stuck playing whack-a-mole.

[0] http://lesswrong.com/lw/v9/aiming_at_the_target/ - Aiming at the target.


I just want to point you out to a book. It is a good read, I promise.

http://www.amazon.com/Social-Engineering-The-Human-Hacking/d...

It looks like you genuinely believe that two people talking face to face would not be subjected to exploitation.

The book exactly talks about how exploitation in this context had been thriving even before Computer and Network Security became a thing.

In a computer setting, an adversary still needs to do factorization to crack keys, or to compromise the victim's computing machinery, both of which, mind you, require advanced knowledge of science and technology.

But for people, they come with beliefs, cultural and social biases, personal habits, and ignorance which are not too hard to discern, making human factor in systems a larger risk.


>We have eyes: we can see who we're talking to

And yet your security would still fail visual inspection if there was an evil twin brother.

>Of course when you start mixing people and computers, your security is only as good as the weakest link (the computer).

I disagree. It's almost always the human, except in cases where the human is very well trained in security.

Most of the problems you talk about have analogs in the real world, and existed long before computers were a thing. Set most people in front of a camera and tell them to look for shoplifters. Set a trained specialist (or thief) in front of the same camera, and they will far more often see the theft occurring.


The point is that there is another possible outcome beyond the "yes"/"no" questions you asked, which is "I can't do my job with these security requirements" and "I don't understand these complicated methods and/or requirements". Both of these lead to security being bypassed on purpose, which is often far worse than the risk of someone walking in with a trojan USB stick.

Education about the risks involved in picking up random hardware, email requests/attachments, etc. is important, but so is acclimating the average user to the very idea of security. Both sides need time and experience to find ways to make security and the work being done compatible with each other.

A good example of the problem is PGP/GPG: it's too complicated and has a terrible UI (for most people), so nobody even bothers. And even if we somehow forced everybody to use GPG, the mistakes made while using it would likely compromise most of the "protected" data.

On the other hand, even if we went into the situation expecting 100% failure due to someone messing up, there would still be benefits. We would end up with keys being exchanged (infrastructure), making the beginnings of a trust network. Even better, we would see a lot more people starting to learn about keys and how they should (or shouldn't) be trusted.

TL;DR - never let perfection be the enemy of good, even in must-be-perfect security; even if it will probably fail in the usual sense, it may be worth it as a necessary lesson for future security attempts.


I have little practical experience with IT security, but I like to look at these issues from the side of the customer. There's a quote I like to repeat:

"If you prioritize security over accessibility, you'll have a perfectly secure system that nobody ever uses, ending up with zero customers. But if you prioritize accessibility over security, you can still build something as large as PlayStation Network."


Funny, it's the Infosec guys (me) with the coolest tools and some of the most innovative code where I work, but IT won't let us use FOSS monitoring software because it hasn't gone through the proper approval board.

See how broad that brush was?


That's why I put (most) in parens. While I'm generally disappointed with the majority of infosec, there are really good infosec people out there. In my experience the good people in infosec are really, really brilliant, and are also just as frustrated with the rest of the field.


Infosec is largely a ghetto populated by people who only think about one thing, and who think that one thing is the sexiest, coolest, most important thing in the entire world. There are other such ghettoes and they all suffer from similar sorts of problems.

I normally avoid "me too" comments. But that's the most insightful comment I've seen this week for sure, and probably this month.


> Another notable one is C vs C++. Short version: I think C++ is fine and makes it a lot easier to do many things, but anyone writing C++ should have YAGNI and KISS tattooed on their forehead.

But that is the pain point. Most developers don't care to wear such tattoos.

Lint was created alongside C and left to a separate tool as per UNIX philosophy.

The result is that C and C++ developers never cared about it.

We had to wait for LLVM and daily security exploits for developers to start caring about it.

And now almost every C and C++ compilers offer static analysis.

This is why languages like Ada and SPARK are slowly being looked into by some European companies.

Intel is building pointer and buffer validation into their new processors as a means to help bring safety to existing applications:

https://software.intel.com/en-us/articles/introduction-to-in...

When security is not outsourced to a separate tool, it cannot be avoided.


I wholeheartedly agree with your commentary about such "compartmentalized" approach to systems tasks. The description of narrowmindedness is on target for many domains, certainly I've observed it in medical fields.

The problem arises from complexity. We are able to track only so much information at a given time so specialization develops. But as we find out more and more how processes intertwine it calls for having awareness of the generalist point of view as well.

It's a difficult feat to accomplish, to hold a meta and detailed view simultaneously. Probably it's a particular talent to be able to do so, but as complexity is not going to decrease, future leaders will need to be talented in this way if the problem is going to be solved.


To some extent I blame the Silicon Valley cult of recent grads, where "25 is the new 55" as someone once put it to me.

One of the things you get from doing something for decades is a very broad and deep understanding of the entire system. If you've been a developer for 35 years you've not only seen fads come and go, but you've also had a chance to use numerous systems from numerous perspectives and on both sides of the user/developer fence.

It only works if you've kept learning though. I've found that older devs occupy an extreme "U" distribution -- they're either utterly outdated or they're super-brilliant and can leverage tons of experience.

Another source of the problem is a lack of emphasis on -- or even contempt for -- UX among systems level (so-called "neckbeard") type developers. "Real men don't need ..."

Last but not least you've got the business models of many infosec companies which revolve around "streaming" constant "definition updates," patches, and selling loads of different one-off fixes for little problems.


I don't think it's contempt for the existence of UX, but for when UX gets in the way of efficiency. Something like being able to wrap up a common task in a script, rather than having to constantly poke at that wizard with its 1001 Next buttons.


The vast majority of infosec gives zero thought to any concern other than security, resulting in systems that are insanely complex with incredibly poor user experience.

At Ruxcon last year, one of the speakers mentioned that security 'recommendations' from infosec come so thick, fast, and complex, that not even security specialists (who are up-to-date and fully aware of all the issues) use them all. He then gave a few examples of security specialists ignoring their own recommendations because it was 'too hard to comply'.


I would suggest having YAGNI and KISS tattooed on your knuckles instead, where you'll actually see them. Unless you have a little mirror on your desk. In that case, go ahead with the forehead tattoo. (Make sure they do it backwards.)


That old saying comes to mind: "perfect is the enemy of good".


Linus explains his stance on Linux security, issues with security people, SELinux, CVE commit log, etc. at DebConf 14: https://www.youtube.com/watch?v=1Mg5_gxNXTo#t=3936


That was an interesting and enlightening read. Thank you.


I think that UDEREF works here, on brief inspection of the code.

However I haven't looked too hard, but grsec seems to have its own bugs in this area. I'll email them.

As usual, my nasty IRET test case is available [1], and, as of the embargo expiration (Monday), it contains tests for the whole pile of issues here, among others. Save your work before running it.

[1] https://gitorious.org/linux-test-utils/linux-clock-tests/sou...


FYI, that link is a 404.



Thanks, very useful; I was just writing something similar.

Interested in: syscall32_from_64.c

What does that do? I know from running binaries under strace in IA32 emulation mode on a 64-bit kernel that they make gettimeofday etc. syscalls rather than using the vDSO.


I think I wrote syscall32_from_64 to try to test something related to the 32-bit syscall (as opposed to sysenter or int80) handler in ia32entry.S. TBH, I don't remember exactly what it was for.

On my Intel-based box, it gets SIGILL, but if I run it under QEMU without KVM, it says "syscall return = 137".

The more interesting things in there include segregs (testing for an info leak that has no CVE yet), syscall_exit_regs_64 (run it under strace and watch it fail), and dump_all_pmcs.

I'm hoping that dump_all_pmcs will stop working in Linux 3.20.

Also, on new enough kernels with new enough glibc (I think), even 32-bit programs use vdso timing.


> Also, on new enough kernels with new enough glibc (I think), even 32-bit programs use vdso timing.

I saw a patch being discussed but wasn't sure if it made it in. I'd rather it didn't because some people are insisting they need it so they can run 32bit Java (for performance reasons because GC is marginally faster). The irony is entirely lost on them.

Really useful little set of experiments, good to have them in one place.

Not familiar with pmcs, what are they?


> Not familiar with pmcs, what are they?

They're performance monitoring counters: complicated programmable things on most x86 chips that can count cache misses, cycles, etc. They don't come with a sensible way to selectively grant user access.


Ahhh yes, of course. Thanks.


GRSecurity users still have to actually upgrade their kernel anyway:

> On those systems, assuming that the mitigation works correctly, the impact of this bug may be limited to massive memory corruption and an eventual crash or reboot


I haven't really explored this but I remember a few years ago I liked the idea of using grsecurity on my VPSs. Unfortunately I couldn't come up with a good way to do it.

I believe the idea I stumbled onto was to use kexec to load a hardened kernel after boot, which seems doable but I am not sure it would be a good route to go.

Have you/anyone ever tried anything like this?


Linode VPSs can boot custom kernels: https://www.linode.com/docs/tools-reference/custom-kernels-d...

I don't know if this is possible with other providers.



Would anyone happen to know if grsecurity is compatible with the linux-ck patchset?


Status for at least one of the CVEs in Debian is here: https://security-tracker.debian.org/tracker/CVE-2014-8133 (currently unfixed)


How can this/these be exploited?


I'm sure this question is important to ask in order to learn how to protect yourself, but it strikes me as a difficult one to ask: by asking it, you sound like someone who wants to exploit others' vulnerabilities rather than someone who is trying to protect themselves.


Is there any information whether the fix is in 3.18.1, which was released yesterday?


CVE-2014-9090 and CVE-2014-9322 were fixed in 3.18, just before it was released.



