Another reminder why everyone should be using https://grsecurity.net, which provides these mitigations to the Linux kernel via patches. grsecurity has long had KERNEXEC (comparable to Intel's later SMEP) as well as UDEREF (comparable to SMAP): https://grsecurity.net/~spender/uderef.txt
If you keep any sensitive data on a Linux server you should seriously consider grsec.
Just last week an ASLR bypass was posted on oss-security, which of course grsec already protected you against: http://seclists.org/oss-sec/2014/q4/908
There is a lot of drama around the fact that the Linux core devs don't adopt these patches by default. Regardless, Linux is pretty insecure by default, and grsec makes privesc via various classes of exploits significantly harder.
What is the main cause of resistance for implementing these fixes? It worries me that they haven't put forth the effort to do so yet.
And you can see Greg KH's (a Linux core dev) snarky reply here:
Basically, a few people have tried in the past, but the Linux core devs are against large patches. And if you look at the old threads where people attempted to break it up into small patches, the core devs have been disinterested.
Just to be clear, we're talking about many 2003-era exploit mitigation techniques not being adopted into the kernel. As a side effect, every year countless vulnerabilities come out for which proactive mitigations, with up-to-date PoCs, have existed for years.
Greg KH basically said in that thread that it would need to be broken up into tons of small patches. Each patch would then have to be submitted and go through the massive politics of getting it upstreamed. This would require a full-time paid team of people, since the Linux Foundation and similar organizations don't seem to think it's worth paying a team of security experts to do that kind of kernel-hardening work themselves.
Additionally, the person (or team) behind PaX, whose code now makes up a significant percentage of grsecurity, is anonymous, and a long time ago (before grsec, I believe) the Linux core devs refused to accept patches from anonymous developers.
Also, for a more meta discussion of how security is handled by the core devs, see Spender's summary in "KASLR: An Exercise in Cargo Cult Security".
 Spender links to old threads here where people tried breaking it up and submitting small patches:
Doesn't sound the least bit snarky. Submitting small patches isn't unreasonable, and GRSecurity-inclined people would do well to play nice with the kernel dev process.
Rather, attempts to submit it in smaller patches have been met with disinterest. Security in general also has the appearance of being sidelined by the core developers, which has created a large disincentive for developers interested in getting grsecurity upstreamed from even trying (again).
1. Saying that since no one has yet "paid for a team of people to do it" then it "must not be worth doing"
2. Sarcastically using info leak in quotes (see KASLR post in my original email for context on info leaks)
3. Repeatedly saying things like "I can help out with that" or "just let me know" if you discover a problem, when there is a long history of people doing exactly that and Linux core devs, including Greg KH, largely ignoring them.
Etc, I could go on.
And this is all politics. I never said I was apolitical in the posts above. The whole reason people say it would take a team of people to submit the patches is politics.
Except that it's much less declarative than you're stating ('kind of implies' is pretty far from 'must'), and even has an emoticon added to indicate commiseration: "kind of implies that no one thinks it is worth doing :(". I agree that context can be missing, but at the same time, you shouldn't be significantly changing the visible context like that - you seem to be more about projecting your own issues rather than reading what's on the page when you do that.
A nicely ironic reply, though - if you do actually have problems with the way they behave, why invoke their behaviour to defend your own?
I remarked on his snarkiness simply because it is indicative of the problem: there has been a long history of dismissiveness during any discussion of upstreaming PaX/grsec-style mitigations. Since it is not being taken seriously, we will continue to enjoy the side effects for the foreseeable future.
PaX/grsec is in a different class of mitigation. I don't really know of any competitors besides implementations of small subsets by other operating systems or hardware manufacturers.
To your other point, I don't think anyone who has been following Linux security for any amount of time thinks that Spender or PaX are in need of proving themselves.
No major distro carries the patch, and the kernel devs don't want to merge it as it is.
A change in tactics is needed: make it easier for everyone to see how much better things are with grsec. The tweets are good; a summary of those tweets would be better.
Then to get all smug about it and call "politics" on people for doing a code review, rather than fixing the patches or communicating their importance better... They may be doing good work, but I don't think they come off well in these threads.
Really I think it comes down to territoriality. If there was no 'grsecurity' or 'PaX' name or team or whatever, and it was just a random dev submitting a simple feature, they would accept it. But when this other entity comes trying to improve upon their flawed system, suddenly ego gets involved.
I don't mean to rag too hard on the core devs, but many of their decisions are based on gut instinct, which is very frequently misguided or wrong (heuristics are evolved for reacting quickly to emergencies, not making logical decisions). Grsec should have been introduced into the mainline years ago. The Linux kernel's security track record is embarrassing.
I value my privacy more than most. But when you are contributing code to a public project used all over the world by millions of people, you should at least be willing to identify yourself, or have very good reasons for not doing so. And that's before we get to the nontrivial copyright issues.
For example, the PaX team was the first to implement ASLR for Linux... and it was adopted into the Linux kernel many years later (as well as by Windows, OS X, and the BSDs). https://en.wikipedia.org/wiki/Address_space_layout_randomiza...
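For the curious, ASLR is easy to observe directly. This is a minimal sketch, assuming a Linux system with ASLR enabled (the kernel default for a long time now); the `/proc/self/maps` parsing is Linux-specific:

```python
# Sketch: observe ASLR by comparing the stack base address across two
# separate process runs. Assumes Linux with ASLR enabled (the default).
import subprocess
import sys

SNIPPET = (
    "with open('/proc/self/maps') as f:\n"
    "    line = next(l for l in f if '[stack]' in l)\n"
    "print(line.split('-')[0])"
)

def stack_base():
    # Spawn a fresh interpreter so each run gets its own randomized layout.
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

a, b = stack_base(), stack_base()
print(a, b)
# With ASLR on, the two stack base addresses will (almost always) differ.
```

With `randomize_va_space` set to 0 the two addresses would come out identical, which is exactly what an exploit author wants.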
The person or entity submitting the code should be irrelevant. Exacting code review and testing establish the code as trustworthy or compromised, not the identity of the author.
One example of many, many instances:
I think he's wrong because security is increasingly critical, but I agree with him because I can't stand (most of) the infosec profession.
The vast majority of infosec gives zero thought to any concern other than security, resulting in systems that are insanely complex with incredibly poor user experience. Not only does this break everything else, but it's bad for security in the long run. The typical security solution is something that makes it so hard for people to do ordinary work that they either disable it entirely (e.g. SELinux) or work around it (tunneling through aggressive firewalls, etc.) and defeat the whole purpose.
I worked in a research institution once where some departments had people whose informal job description (I was told) was to "help work around security so we can get our work done." If you worked with security, they'd put up so many road blocks and say "no" to so many things that months and months could go by without a single thing being accomplished. There was a whole culture of literally conspiring to ignore and sidestep the security department. People kept doing this even after we got hacked, arguing (again explicitly) that the cost of complying with infosec was greater than the cost of the hack.
Infosec is largely a ghetto populated by people who only think about one thing, and who think that one thing is the sexiest, coolest, most important thing in the entire world. There are other such ghettoes and they all suffer from similar sorts of problems.
I firmly believe that almost nothing can be done well without whole system thinking. You have to look at the entire context of a system, not just the one thing you're building or optimizing. Think about the system from technical details all the way up to how users will interact with it on a day to day basis. A good starting point is to look at whatever you're building and ask "how annoyed would I be if management foisted this on me?" If the answer is "very annoyed," your system has horrible UX.
This isn't the only case where I simultaneously agree and disagree with Linus. Another notable one is C vs C++. Short version: I think C++ is fine and makes it a lot easier to do many things, but anyone writing C++ should have YAGNI and KISS tattooed on their forehead. As with many other "powerful" multi-paradigm languages it takes discipline to avoid over-using the language's features, and most programmers unfortunately have the opposite tendency. They over-abstract and over-engineer as if they're trying to show how clever they are.
Any infosec veteran can tell you that defense is much harder than offense. If you defend a system you need to be perfectly vigilant, know all the threats, and have full knowledge about _everything_ that goes on inside your company/network. You cannot let a _single_ system be compromised, because once "they" have a foothold inside, everything is possible.
Some examples I have seen over the years include dropped usb sticks in employee parking lots, internal taps on traffic disguised as (functional) laptop adapters, bugged phones, hacked printers punching holes in the firewall, mass spam emails that _someone_ is guaranteed to click. And that's just getting _inside_ your network.
Let me ask you: would you open an attachment sent to you by a coworker? Would you ask a "coworker" who isn't wearing his/her company badge/ID to identify him/herself? Would anyone notice an abandoned, nondescript black box plugged into the wall somewhere?
If you answered any question with "no" I'd say your security is not up to snuff, but then that's my job.
This is why infosec is so pervasive in everything that even comes near to computers these days. No chance should be given to the attacker, no matter how trivial it may seem and no matter how _inconvenient_. Security should be inconvenient to the point of almost preventing you from getting any work done, because anything less is sure to fail you when you need it most.
Just my two cents ;)
The problem is that nobody's looking for deeper solutions to security problems. For most vendors, even enterprise and well financed ones, security is a bolt-on afterthought (along with privacy, a form of security). For most infosec people, infosec is a game of patching dams full of holes with chewing gum. (Or whack a mole if you prefer that metaphor.)
These two problems are related in that there's a feedback loop at work -- vendors don't prioritize security because infosec is bad for UX, and infosec is bad for UX because vendors don't give them anything to work with other than blocking things and hole-patching. There's also the reverse feedback loop in that vendors don't build in good security because infosec isn't delivering well thought out and deep innovations that don't wreck UX.
The whole situation is utterly pathological IMHO.
I gave a mostly theoretical talk on this at a conference called border:none last October in Germany:
[warning: big honkin' PDF]
I don't consider the ideas there entirely baked and they deal almost entirely with the networking domain, but I think the same "race against ourselves" argument applies to other aspects of infosec.
The real problems with security are obviously not inherently tied to computers. In very _very_ abstract terms, I think security is closely tied to decision-making systems: not in the sense of talking with others in boardrooms (although that constitutes a decision-making system), but the decisions you make unconsciously to achieve some goal. The type of decision-making I'm getting at is very well explained by Eliezer over at LessWrong. The act of "breaching" security is, of course, altering someone else's decision (to suit your needs) with false/fake inputs that your target fails to verify correctly.
To be secure in the decisions you make, you must verify all the inputs into your personal decision algorithm. Humans get this verification right more often than computers do.
We have eyes: we can see who we're talking to. Computers only have a vague notion of "trust": "these numbers ensure that whoever you're talking to really is who they say they are". A very fragile system that can be broken if enough resources are available to the adversary.
Now, I'm not saying that the "trust" systems of humans and computers differ fundamentally, as both rely on external inspection (e.g., I can't read your mind; I'm still basing my trust in you only on what my senses tell me), but I do say that the "trust" of computers has a much lower standard than that of a human.
We can better verify whether what others are saying is true, in any context. Human understanding, under most circumstances, allows us to test the things others say against a huge amount of experience and knowledge, either our own or that of other humans.
Computers, on the other hand, only understand the protocols we've written for them. And those are only as good as the risk predictions of the programmers who wrote them. They are very limited and very fragile, in the sense that they weaken as they age. What if a breakthrough in prime factorization were to take place at this very moment? In one swoop we'd lose a major part of the security infrastructure of the Internet. There is no reason to believe that prime factorization can't someday be solved efficiently and quickly. A human faced with this situation understands that his/her security protocol no longer suffices, and he/she would change said protocol.
Of course, in the hypothetical scenario of a prime-factorization breakthrough we'd just patch the systems, right? Yes, but then we're back to playing whack-a-mole again.
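To make that dependency concrete: RSA's private key falls out mechanically once the public modulus is factored. A toy sketch with deliberately tiny textbook primes (real keys use 2048-bit moduli precisely because factoring those is infeasible today):

```python
# Toy RSA with tiny primes: once n is factored, the private key falls out.
# Real moduli are 2048+ bits; this only illustrates the dependency.
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
msg = 42
cipher = pow(msg, e, n)

# An attacker who can factor n recovers p and q...
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

fp, fq = factor(n)
# ...and from them derives the private exponent d.
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)  # modular inverse (Python 3.8+)
plain = pow(cipher, d, n)
print(plain)  # recovers 42
```

A polynomial-time `factor()` is exactly the hypothetical breakthrough: everything after it is trivial arithmetic.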
Last but not least, there are no vast differences in intelligence between two people talking, where one is trying to manipulate the other. Sure, Einstein vs. the village idiot could be considered a "vast" difference in intelligence, but next to the differences in what constitutes intelligence for computers it is almost negligible. A smartphone with a puny dual-core processor must somehow be resistant to billion-dollar supercomputers; hell, your SIM card, whose clock speed can be measured in megahertz, must be resistant to those supercomputers. It's just not doable; such a difference in capabilities simply does not exist between two people.
Of course, when you start mixing people and computers, your security is only as good as the weakest link (the computer). You would not fall for a "phishing" talk if you were face-to-face with the hacker (who at this point is not inside your network yet). The scenario is akin to the hacker, standing outside your door, asking you to hand him the key. You wouldn't fall for that. But in your email program things are different: you can't see a face, can't hear the inflections in the hacker's voice, and he isn't even asking for the key, only asking you to click a link...
The solution, in a very general way, would be to design systems so that they are more "intelligent": some sort of AI security. Don't just make the protocols more complex by adding more rules/functionality; instead, program the computer in such a way that it can be "inventive" in verifying its inputs. Obviously such a system does not exist (yet?), and it's only wishful thinking that it could exist anytime soon (there's no infrastructure or backwards compatibility, the software isn't there yet, and the hardware might be, but I'm not sure). So for the time being we're stuck playing whack-a-mole.
 http://lesswrong.com/lw/v9/aiming_at_the_target/ - Aiming at the target.
It looks like you genuinely believe that two people talking face to face would not be subjected to exploitation.
The book talks exactly about how exploitation in this context had been thriving even before computer and network security became a thing.
In a computer setting, an adversary still needs to factor keys or prime the victim's computing machinery (both of which, mind you, require advanced knowledge of science and technology) to pull off the exploits.
But people come with beliefs, cultural and social biases, personal habits, and ignorance, none of which are too hard to discern, making the human factor in systems a larger risk.
And yet your security would still fail visual inspection if there was an evil twin brother.
>Of course when you start mixing people and computers, your security is only as good as the weakest link (the computer).
I disagree. It's almost always the human, except in cases where the human is very well trained in security.
Most of the problems you talk about have analogs in the real world and existed long before computers were a thing. Set most people in front of a camera and tell them to look for shoplifters. Set a trained specialist (or thief) in front of the same camera and they will far more often see the theft occurring.
Education about the risks involved in picking up random hardware, email requests/attachments, etc. is important, but so is acclimating the average user to the very idea of security. Both sides need time and experience to find ways to make security and the work being done compatible with each other.
A good example of the problem is PGP/GPG: it's too complicated and has a terrible UI (for most people), so nobody even bothers. Yes, even if we somehow forced everybody to use GPG, the mistakes made while using it would likely compromise most of the "protected" data.
On the other hand, even if we went into the situation expecting 100% failure due to someone messing up, there would still be benefits. We would end up with keys being exchanged (infrastructure), making the beginnings of a trust network. Even better, we would see a lot more people starting to learn about keys and how they should (or shouldn't) be trusted.
TL;DR - never let perfection be the enemy of good, even in must-be-perfect security; even if it will probably fail in the usual sense, it may be worth it as a necessary lesson for future security attempts.
"If you prioritize security over accessibility, you'll have a perfectly secure system that nobody ever uses, ending up with zero customers. But if you prioritize accessibility over security, you can still build something as large as PlayStation Network."
See how broad that brush was?
I normally avoid "me too" comments. But that's the most insightful comment I've seen this week for sure, and probably this month.
But that is the pain point. Most developers don't care to wear such tattoos.
Lint was created alongside C and left as a separate tool, as per the UNIX philosophy.
The result is that C and C++ developers never cared about it.
We had to wait for LLVM and daily security exploits for developers to start caring about it.
And now almost every C and C++ compilers offer static analysis.
This is why languages like Ada and SPARK are slowly being looked into by some European companies.
Intel is building pointer and buffer validation into their new processors as a means to help bring safety to existing applications:
When security is not outsourced to a separate tool, it cannot be avoided.
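The "separate tool" idea is easy to illustrate in miniature. This is a toy checker of my own (not any real linter): it inspects a program's syntax tree for a known-dangerous construct without ever compiling or running the code under review:

```python
# Minimal sketch of security analysis as a separate tool: statically flag
# calls to eval() in untrusted source, without executing it.
# A toy illustration, not a real static analyzer.
import ast

def find_eval_calls(source):
    """Return the line numbers of direct eval() calls in `source`."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

code = "x = input()\ny = eval(x)\n"
print(find_eval_calls(code))  # -> [2]
```

Because it is a separate pass, nothing forces a developer to run it, which is exactly the adoption problem described above.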
The problem arises from complexity. We are able to track only so much information at a given time so specialization develops. But as we find out more and more how processes intertwine it calls for having awareness of the generalist point of view as well.
It's a difficult feat to accomplish, to hold a meta and detailed view simultaneously. Probably it's a particular talent to be able to do so, but as complexity is not going to decrease, future leaders will need to be talented in this way if the problem is going to be solved.
One of the things you get from doing something for decades is a very broad and deep understanding of the entire system. If you've been a developer for 35 years you've not only seen fads come and go, but you've also had a chance to use numerous systems from numerous perspectives and on both sides of the user/developer fence.
It only works if you've kept learning though. I've found that older devs occupy an extreme "U" distribution -- they're either utterly outdated or they're super-brilliant and can leverage tons of experience.
Another source of the problem is a lack of emphasis on -- or even contempt for -- UX among systems level (so-called "neckbeard") type developers. "Real men don't need ..."
Last but not least you've got the business models of many infosec companies which revolve around "streaming" constant "definition updates," patches, and selling loads of different one-off fixes for little problems.
At Ruxcon last year, one of the speakers mentioned that security 'recommendations' from infosec come so thick, fast, and complex, that not even security specialists (who are up-to-date and fully aware of all the issues) use them all. He then gave a few examples of security specialists ignoring their own recommendations because it was 'too hard to comply'.
However I haven't looked too hard, but grsec seems to have its own bugs in this area. I'll email them.
As usual, my nasty IRET test case is available, and, as of the embargo expiration (Monday), it contains tests for the whole pile of issues here, among others. Save your work before running it.
Interested in: syscall32_from_64.c
What does that do? I know from stracing binaries running in IA32 emulation mode on a 64-bit kernel that they make gettimeofday etc. syscalls rather than using the vDSO.
On my Intel-based box, it gets SIGILL, but if I run it under QEMU without KVM, it says "syscall return = 137".
The more interesting things in there include segregs (testing for an info leak that has no CVE yet), syscall_exit_regs_64 (run it under strace and watch it fail), and dump_all_pmcs.
I'm hoping that dump_all_pmcs will stop working in Linux 3.20.
Also, on new enough kernels with new enough glibc (I think), even 32-bit programs use vdso timing.
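The vDSO-vs-syscall distinction is easy to poke at from userspace. A sketch, assuming x86-64 Linux (the syscall number 96 below is specific to that ABI):

```python
# Sketch: call gettimeofday both ways. time.time() normally stays in the
# vDSO on modern kernels, while libc's syscall() wrapper forces a real
# kernel entry, which is what strace can see. x86-64 Linux only.
import ctypes
import platform
import time

libc = ctypes.CDLL(None, use_errno=True)
libc.syscall.restype = ctypes.c_long
SYS_gettimeofday = 96  # syscall number on the x86-64 ABI only

class Timeval(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

tv = Timeval()
if platform.machine() == "x86_64":
    ret = libc.syscall(SYS_gettimeofday, ctypes.byref(tv), None)
    print("raw syscall:", tv.tv_sec, "| vdso path:", int(time.time()))
else:
    ret = 0  # skip elsewhere: the syscall number is arch-specific
```

Running this under `strace -e gettimeofday` shows the raw call but not `time.time()`, whereas a 32-bit process on an older setup would show both.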
I saw a patch being discussed but wasn't sure if it made it in. I'd rather it didn't because some people are insisting they need it so they can run 32bit Java (for performance reasons because GC is marginally faster). The irony is entirely lost on them.
Really useful little set of experiments, good to have them in one place.
Not familiar with PMCs; what are they?
They're performance monitoring counters: complicated programmable things on most x86 chips that can count cache misses, cycles, etc. They don't come with a sensible way to selectively grant user access.
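The sanctioned way to read such counters is `perf_event_open(2)`. A hedged sketch, assuming x86-64 Linux (syscall number 298); it uses a software clock event rather than a real hardware PMC so it works even without a PMU, and the struct layout covers only the first 64 bytes of `perf_event_attr`:

```python
# Sketch: count cpu-clock time for this process via perf_event_open(2),
# called directly with ctypes. x86-64 Linux only; a hardware PMC would
# use type = PERF_TYPE_HARDWARE instead of the software event below.
import ctypes
import os
import platform
import struct

class PerfEventAttr(ctypes.Structure):
    # First 64 bytes of struct perf_event_attr (PERF_ATTR_SIZE_VER0).
    _fields_ = [
        ("type", ctypes.c_uint32), ("size", ctypes.c_uint32),
        ("config", ctypes.c_uint64), ("sample_period", ctypes.c_uint64),
        ("sample_type", ctypes.c_uint64), ("read_format", ctypes.c_uint64),
        ("flags", ctypes.c_uint64), ("wakeup_events", ctypes.c_uint32),
        ("bp_type", ctypes.c_uint32), ("config1", ctypes.c_uint64),
    ]

libc = ctypes.CDLL(None, use_errno=True)
SYS_perf_event_open = 298          # x86-64 ABI
PERF_EVENT_IOC_ENABLE = 0x2400
PERF_EVENT_IOC_DISABLE = 0x2401

attr = PerfEventAttr()
attr.size = ctypes.sizeof(attr)    # 64 bytes
attr.type = 1                      # PERF_TYPE_SOFTWARE
attr.config = 0                    # PERF_COUNT_SW_CPU_CLOCK
attr.flags = 1                     # disabled = 1: start stopped

fd, count = -1, None
if platform.machine() == "x86_64":
    # Measure this process (pid=0) on any CPU (cpu=-1).
    fd = libc.syscall(SYS_perf_event_open, ctypes.byref(attr), 0, -1, -1, 0)
if fd < 0:
    # Typically blocked by perf_event_paranoid in locked-down setups.
    print("perf counter unavailable here")
else:
    libc.ioctl(fd, PERF_EVENT_IOC_ENABLE, 0)
    sum(range(100_000))            # some work to measure
    libc.ioctl(fd, PERF_EVENT_IOC_DISABLE, 0)
    count = struct.unpack("q", os.read(fd, 8))[0]
    os.close(fd)
    print("cpu-clock ns:", count)
```

Note that access is all-or-nothing per the `perf_event_paranoid` sysctl, which is exactly the "no sensible way to selectively grant user access" complaint.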
> On those systems, assuming that the mitigation works correctly, the impact of this bug may be limited to massive memory corruption and an eventual crash or reboot
I believe the idea I stumbled onto was to use kexec to load a hardened kernel after boot, which seems doable but I am not sure it would be a good route to go.
Have you/anyone ever tried anything like this?
I don't know if this is possible with other providers.