Grsecurity Developer Spender's Feelings on the State of Linux Security (grsecurity.net)
197 points by jsnathan on Nov 6, 2015 | 76 comments

It always makes me sad when I hear the BSDs are underfunded: OpenBSD was about to "turn off the lights", and FreeBSD was in serious trouble before they got the $1M donation from WhatsApp. The Heartbleed bug in OpenSSL? They also didn't have enough (full-time) developers to even review the code. Now grsecurity makes me feel bad about it too.

Everyone uses their software: firewalls, servers, email servers. OpenSSL is everywhere, and a corporate or bank cluster without BSD, or Linux with grsecurity, is unimaginable.

I recently started donating to the open-source projects I use every day, and I realised how little they ask for. For F-Droid, I easily doubled the BTC fund used to cover server maintenance; LibreOffice asks for a 3 EUR donation by default (also accepting BTC)! The OpenBSD Foundation asks for $10 per month.


Edit: I also found a nice way to donate to Tor. There is a site, https://oniontip.com/, where you can tip others for running Tor nodes. Among the top 200 nodes, one lists a WikiLeaks BTC address, and another pays out to my wallet, from which I send the tips back to the Tor Project. I had free resources, so I used them :)

Most of us can afford to pay it too. That's the real tragedy.

All of us should consider doing something similar: allocate a couple of dollars a month and give it to the people who make our lives and jobs easier or better.

> Most of us can afford to pay it too. That's the real tragedy.

I think the hardest part for me is: I use soooooo much open-source software, that I can't contribute to all of them. Don't get me wrong, I should contribute more than I do, and I'm not excusing myself, but it's a legitimate problem. I'm sure people smarter than me have debated models for this, but I still don't think we have a good answer.

Please, please donate to the library developers. Scan the dependencies for some of your favourite packages and see if there's anything common to a few that might not be obvious. SDL backs so many things, for example, but rarely gets called out.

Grsecurity's approach is superior to OpenBSD's, but both are acceptable.

FreeBSD is actually behind Linux - it lacks an effective access control framework and did not have ASLR until the latest release. At least they're working on it (TrustedBSD, Capsicum).

Some interesting insights on Grsecurity's approach by OpenBSD's Nick Holland in the comments section:


Link directly to the comment:


And here is the referenced email with a bit of context:


He's actually talking about the SELinux/"RBAC in general" approach. His only criticism of Grsecurity is that it's not in the mainline and therefore not as effective as it could be.

Substitute "Untrusted user" for "possibly buggy server code" and you will see why Grsecurity's approach can have value in single-user systems.

FreeBSD supports Mandatory Access Control, implemented as part of the TrustedBSD project. It was introduced in FreeBSD 5.0. Since FreeBSD 7.2, MAC support is enabled by default. The framework is extensible; various MAC modules implement policies such as Biba and Multi-Level Security.
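As a sketch of what opting in looks like (mac_biba is a real FreeBSD policy module, but the tunables vary by policy, so treat this as illustrative and consult the FreeBSD Handbook before enabling anything on a live system), a MAC policy module can be loaded at boot via /boot/loader.conf:

```conf
# /boot/loader.conf -- load the Biba integrity policy at boot
mac_biba_load="YES"

# Alternatively, load it at runtime with: kldload mac_biba
# Per-policy tunables then live under the security.mac sysctl tree,
# e.g.: sysctl security.mac.biba.enabled=1
```

A misconfigured integrity policy can lock you out of your own system, which is part of why (as the replies below note) almost nobody deploys one.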

And how much of the system is protected by TrustedBSD by default? None of it.

How many people ever bother to write and deploy a TrustedBSD policy? (To a first-order approximation) nobody.

Defaults matter; a feature-matrix checkbox is simply deceptive, because the fact that something isn't on (and configured) by default often means it's an insane amount of work to enable it, and/or things are unfixably broken when you do (from a user's point of view).

Unfortunately, both of these things are true of TrustedBSD.

The TrustedBSD features are used by appliance vendors who base their products on FreeBSD. Appliances have very narrow profiles of acceptable use, and thus it's actually sane to develop policies for them.

That's true. It goes back further than TrustedBSD: Secure Computing Corporation invented Type Enforcement, put it in a high-assurance system (LOCK), put it into a BSD OS for a firewall (the Sidewinder firewall), and helped create the Flask architecture for integrating type enforcement into vanilla OSes. Flask was ported to Linux in the SELinux project. That got enough acceptance that the TrustedBSD project was started to do the same for FreeBSD. So, full circle back to the OS family the tech was first fielded on.

LOCK System http://www.cyberdefenseagency.com/publications/LOCK-An_Histo...

Sidewinder firewall http://www.ittoday.info/AIMS/DSM/83-10-35.pdf

Flask project/architecture https://www.cs.utah.edu/flux/fluke/html/flask.html

Nonetheless, the old stuff (esp. LOCK & LOCK/ix) is still stronger in security architecture and design despite all these years. Good design is timeless, I guess. :)

Note: Cambridge's CHERI project and CheriBSD are the cutting edge for FreeBSD security, as they do capability security from the hardware up, with FreeBSD already ported. It also supports Capsicum, Flask, and separation kernels if one wanted. A true integration of each major branch of INFOSEC. :)


Sounds like a demand problem rather than a FreeBSD problem. I've heard the same about SELinux and friends: overly permissive by default due to user apathy. I'd say Linux is ahead in the usability of these controls, even supported by vendors like Tresys. It's also ahead in terms of risky code/tools a major distribution will support vs a major BSD. So, comparisons are a moving target.

Fortunately, the best security approaches (HW-centric) are portable to both, with FreeBSD getting most of the prototypes. You can already run capability-secure FreeBSD via Cambridge's CHERI project. Criswell's group is doing lots of work with FreeBSD and maybe Linux:



Examples for Linux include these:




That doesn't even include software-side techniques like microkernels, low-TCB software, safe low-level languages, and automatic compiler transformations for security, which neither is adopting. They're both low-to-medium assurance by my standards, due to a cultural refusal to apply what's proven to work. So, I already have predictions about tech transfer of the papers above into Linux/FreeBSD use at large. You can probably guess how optimistic I am. ;)

MinGW-w64 also lacks ASLR and DEP. As a result most FOSS Windows packages lack them as well:


FreeBSD did not have serious problems; they were doing reasonably well. Clearly they can do more now, but that claim is definitely not true.

Sorry if I provided incorrect information; I didn't confirm all of it before posting, just repeated what I read on other sites. I saw they made huge progress porting the C# compiler and VM to BSD, which bodes well for them :)

Aye, too many people have this defeatist attitude that, since perfect security will never be possible, the only valid solution is reactive security (bug-patch cycles). Patch dependence is considered too entrenched to make changes like replacing ambient authority with capabilities, using failure-oblivious computing [1] to redirect invalid reads and writes, using separation kernels, information flow control, proper MLS [2], program shepherding for origin and control-flow monitoring [3], and general fault tolerance/self-healing [4].

I used to look up to Linus Torvalds as many did, but am increasingly beginning to see him as a threat to the advancement of the industry with his faux pragmatism that has led him to speak out against everything from security to microkernels and kernel debuggers.

[1] https://www.doc.ic.ac.uk/~cristic/papers/fo-osdi-04.pdf

[2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=

[3] https://www.usenix.org/legacy/events/sec02/full_papers/kiria...

[4] https://www.cs.columbia.edu/~angelos/Papers/2007/mmm-acns-se...

I wouldn't be so harsh. Linus thinks and works in the here and now. He is neither interested in the theoretical nor bothered by what theoretical people have to say about him. He ships code that works and works well, and generally speaking it has a good security track record compared to many userspace systems (Adobe Flash, anyone?).

At the time he was against microkernels, it would be fair to say monolithic kernels definitely had (and continue to have) performance advantages over microkernel architectures. Have things changed? Somewhat. Some of how OS kernels are used has changed, and that has made microkernels more attractive again.

The rest of your argument feels like the jab at Linus is tacked on, though, because he doesn't seem to be against capability systems (in fact, the kernel has what, three capability systems?). Nor does he seem against forms of multi-level security or program shepherding. So maybe those weren't meant to be directed at him.

Either way I just wanted to say that people should give him some slack, his job isn't to please security zealots but to ship software all of us use and many of us depend on for our livelihood in a timely and reliable manner.

At the time he was against microkernels, QNX had already demonstrated they were faster than the monolithic state of the art. [1]

Linux does not have any form of capabilities (maybe Capsicum, but it's not finished). Capabilities are not POSIX capabilities, which redefined a decade-old term, but rather this [2].
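For readers unfamiliar with the distinction [2] draws, here is a toy Python sketch (invented names, not any real kernel API) of the object-capability idea: authority travels only as an unforgeable reference that is explicitly handed over, never reached for through a global namespace.

```python
import io

# Ambient authority: the callee reaches into a global namespace (the
# filesystem) and can open any path it likes -- its authority is unbounded.
def log_ambient(path, msg):
    with open(path, "a") as f:
        f.write(msg + "\n")

# Capability discipline: authority arrives only as an unforgeable
# reference (here, an open file-like object). The callee can use
# exactly what it was handed and cannot name anything else.
def log_with_capability(logfile, msg):
    logfile.write(msg + "\n")

sink = io.StringIO()              # the one object we choose to delegate
log_with_capability(sink, "hello")
print(sink.getvalue())            # the delegated sink now holds "hello\n"
```

POSIX "capabilities" (CAP_NET_ADMIN and so on) split up root's privileges; they do not implement this reference-as-authority model, which is the redefinition the comment above objects to.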

The rest is just trite dismissals of the same faux pragmatism that Linus embodies. He's "not interested in the theoretical", as if there were any other kind? Before the now-mundane ideas became the staples of pragmatists, they were theories hidden in the research literature.

Linus is a major figurehead and his promoting of self-destructive attitudes is undesirable.

In fact, you claim he lives in the here and now. He does not. The here and now has long surpassed him and he now lives in his own realm.

[1] https://cseweb.ucsd.edu/~voelker/cse221/papers/qnx-paper92.p...

[2] http://www.eros-os.org/essays/capintro.html


My vote goes to Spender.

The first systems to be designed for robustness (and to have it in practice) were the opposite of Linus's approach to doing things. There are some similarities on occasion, which mainly shows talent learned through trial and error. The proven principles for highly reliable and secure OSes aren't applied, though. This is intentional, despite massive evidence against his position.

Meanwhile, systems like THE (Dijkstra), Burroughs B5500, IBM System/38, GEMSOS (Schell), RC4000 (Hansen), MULTICS, KeyKOS (Bomberger et al), and VAX VMM Security Kernel (Karger/Lipner) showed how to design and implement systems with ultra-high reliability and/or security. These lessons were mostly ignored time and time again even when they could be applied. User-mode drivers and languages with pointer/buffer/stack protection would have by themselves prevented ridiculous amounts of problems. I heard that, after around 2-3 decades, the UNIX crowd is adding some user-mode drivers. See the problem?

Note: Far as lightweight systems & adaptivity, you should see the work of Brinch Hansen, Niklaus Wirth, and Andy Tanenbaum. They applied safe, modular techniques on systems with fewer resources than today's. UNIX and Linus resist for both personal preference and inertia, not valid technical objections.

No, nobody should give Linus any slack whatsoever. He's responsible for the current fiasco, where the stock Linux kernel is a joke security-wise: one can find reliable vulnerabilities in a few hours once the debugging infrastructure is there. Compare that with Windows, where it takes a few weeks to a few months, and then a lot more work to get the reliability.

Linus holds the vast majority of the blame here. Simply put, he is an idiot when it comes to security, or to looking ahead at how things will be in a couple of years. Linux is slowly entering every single aspect of our lives, and this trend will only accelerate via IoT. Imagine what this means for security with Linus in charge.

We now KNOW that there are intelligent adversaries out there that spend hundreds of millions to defeat security worldwide. It is simply UNACCEPTABLE and downright MALICIOUS STUPIDITY for Linus to hold the views he does in this day and age. He's been repeatedly warned and cautioned for more than a DECADE now and he's laughed at and ignored valid criticisms from people with much more foresight than him. If this trend continues, he should be NAILED to the cross.

A security bug is NOT just a bug.

No, I shouldn't have to DISCONNECT everything that runs Linux from the Internet if I want it to have a modicum of security.

I think that your tone will turn a lot of people off, which is why you've been downvoted (by others), but I think that what you said needs to be heard, especially re: IoT. It's a nightmare waiting to happen.

> Linux kernel is a joke security-wise, one can find reliable vulnerabilities in a few hours

Do you have some details on this? I did not realise the situation was so bad.

I'm not sure objectively what the current state is. However, it's a little known fact that Linux kernel specifically (due to fame) benefits from tons of academic work on bug hunting tools. Every time they run a new one, they find all kinds of problems that are preventable with a safer language or sound architecture. Many of them would've been contained in a microkernel architecture rather than have full access to memory.

So, one could say it's pretty bad, even if the "many eyes" and code audits are finding and fixing a ton; there are simply too many bugs for the process to be called good. A recent example I found was the Saturn project throwing an automated tool at the kernel and finding 100+ real bugs in one go.


I am unaware of the security warnings etc. against Linus and Linux in general as I am a bit out of touch with Linux, having moved to OSX a few years back.

Do you have recommended reading or links so that I can get up to speed?

Did you read the linked article? Grsecurity people, Gentoo security people, and other security insiders have been urging Linus and the other maintainers to change their views for decades. Nothing has happened except plenty of hand-waving and ridicule. Visit the grsecurity forums or the IRC channel for more details.

Here is a recent example where PaX team are called "leeches" by a maintainer: http://lists.infradead.org/pipermail/linux-arm-kernel/2015-A...

If this sort of attitude is not stupid, I don't know what is. These people can't see the forest for the trees.

Ah more reading required. Thanks!

> perfect security will never be possible, therefore the only valid solution is reactive security

Yes, I've heard this implied before. This is effectively doing the adversary's work for them, and for free! Perfect truth is never attainable, therefore let's not do science?

To put it in more positive terms, achieving perfection is not important. What is important is a continual, methodical process of improvement that more-than-offsets the natural tendency to deteriorate. In software engineering terms, it means not letting the project grow into a state where the exploit-discovery rate keeps climbing. Since exploits generally affect the entire kernel, it's negligent and reckless to be satisfied with merely keeping the bug-per-SLOC ratio constant.
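The closing point can be made concrete with some back-of-the-envelope arithmetic (all numbers below are made up for illustration): a constant bug-per-SLOC ratio applied to a growing code base still means a growing absolute count of latent bugs.

```python
# Illustrative numbers only: a constant defect density applied to a
# growing code base still yields a growing count of latent bugs.
bugs_per_ksloc = 0.5           # assumed-constant defect density
kernel_ksloc_then = 6_000      # rough order of magnitude, invented
kernel_ksloc_now = 19_000      # invented

latent_then = bugs_per_ksloc * kernel_ksloc_then
latent_now = bugs_per_ksloc * kernel_ksloc_now

print(latent_then)   # 3000.0
print(latent_now)    # 9500.0 -- same "quality", over 3x the latent bugs
```

Since any one of those latent bugs can compromise the whole kernel, holding density constant while tripling the code base triples the attack surface.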

The concepts you list are orthogonal to the main source of kernel vulnerabilities and to most of grsecurity's defenses: C-related exploitable memory-safety bugs. grsecurity's C exploitation-mitigation tech is just a band-aid for these.

Of course, a kind of Amdahl's law applies here: eliminate the memory-safety vulnerabilities, and capabilities and the like become important for eliminating the remaining bug classes...

Yeah, but you get really far by eliminating memory and control flow attacks. There's also relatively cheap ways to do that with hardware and software. There's even schemes like SAFEcode/SVA, Code Pointer Integrity, and Softbound + CETS that do it automatically with a performance hit many apps can take.

Yet, what's uptake of such methods in mainstream, kernel development? There's the problem.

Or you could just use a memory safe language instead of getting into the C mitigation arms race.

Time and time again, mitigation techs have been widely deployed and exploits have caught up; it's a perpetual cycle. Yes, exploitation gets a little harder each time around, but you don't get sound memory safety that way.

I agree: that's my main recommendation for new projects. However, avoiding C, or porting Linux/BSD to a safe language, has fallen on deaf ears for decades. So, for them, I recommend techniques that work with C and the UNIX architecture. Maybe there will be more progress that way.

Those are all user-space-level mitigations, so you don't need the kernel to implement them. You're barking at the wrong group.

Those might be, but microkernels, safe languages, interface correctness (pre/post-condition checks), and safe coordination schemes aren't: they're built from the kernel up and have been proven since the 70's-80's to prevent or contain many issues that affect C-based monoliths like Linux. That they actively argue against using them despite decades of evidence they work says plenty about them. That they also advocate and use methods that haven't worked for decades in terms of predictability, reliability, and security is the final nail in the coffin.

People should keep barking, given how much depends on the project now. Plus, support alternatives that take better approaches to architecture, like the old EROS, MINIX 3 (reliability), or Genode OS (security/reliability). Safe native approaches like a security-enhanced Oberon System or the JX Operating System would also kick butt. Each achieved certain robustness properties in mere years with small teams, due to good design.

UNIX and Linux took decades to get usable, still give hackers megabytes of kernel attack surface, and still crash my systems on occasion. Meme: "Failure to learn the lessons of the past and apply them."

Separation kernels aren't; MLS likely works best with kernel cooperation (given Linux's large surface), and the same goes for capabilities, at least.

My point wasn't the specific proactive mitigations, but rather Linus' attitudes creating negative perceptions.

There's no real leadership in Linux as far as security goes from within the kernel community itself.

I'm beginning to get the impression this (in general, not just for Linux) is because the talented security folks would rather just do the fun parts. It'd be really awesome if more security-conscious people were like the OpenBSD developers and worked on products, not just security.

I got into software through security. Getting a dump of my high school's faculty and staff password database was my first high and I chased it for years. My current job is in engineering where security is part of, but not all of, my focus. Since taking on this role, I've started feeling alienated participating in the "security community."

Work isn't always fun in the moment; work is sometimes just work. There seems to be a gap between how much work the "security community" wants to push onto the rest of the open-source developers' plates, and how much those developers are willing to take. Security already (rightly) gets to cut in line ahead of a lot of things, but it takes man-hours to make security happen.

Why can't it be the security guys? If spender doesn't want to send his kernel patches through the same review and legal processes the rest of us do, that's his problem. Why doesn't he stand up and become that security leadership in the kernel? Of course the submission process could be better, and of course he's not going to get everything he wants from the other maintainers right away... because it's work, and work isn't always fun.

Talented security folks, especially those that can engineer security rather than penetration testing, have so many opportunities that fighting an uphill battle on mailing lists just isn't very attractive.

That's part of my point. As long as the work is "someone else's problem," will it ever get done?

If other opportunities are there for someone with a security skill set, what makes a libfoo maintainer become skilled enough to make the most secure libfoo possible, but also stay on libfoo? Does saying "security is important" mean that security is important, or does it mean "have my skill set, and also do the busywork someone with my skill set is able and happy to ignore"?

Perhaps the way to push security into the industry is to use consumer's rights to their full capacity. In the EU if you buy something, you get 6 months of warranty and 24 months of implied warranty.

If you buy an Android phone, stop getting updates after 18 months, and there is a new security hole, you should return the phone to your dealer and demand your money back. After all, it's relatively easy to prove that the defect (the security hole) was already present when you bought the phone. The dealer must fix the defect; if he can't, he must take back the article. He will then complain to the manufacturer. The pressure from these complaints will hopefully lead to a change of behaviour by manufacturers (i.e. providing two years of security updates, for example, even if you buy a phone that's already been available for a year or two).

That's a very interesting point, and a good idea! It does put the onus on us as developers to ensure we do a good job and get it right from the beginning, which can only be a good thing.

Counting the two years a device must be free from defects from the date of purchase is good; counting from the date of announcement would be practically worthless. Unless companies start announcing products and then waiting a year to release, to shorten their support window?

This is the Washington Post interview he wrote this for: http://www.washingtonpost.com/sf/business/2015/11/05/net-of-...

Source: https://twitter.com/grsecurity/status/662393322699415554

> Very fair article on the topic of Linux security: [...] … Was a pleasure talking with @craigtimberg

The comments in the HN submission must be some new record in middlebrow dismissals.


I think the article did a terrible disservice to the critics: there's lots of hot air in the article but little devoted to the meat of the critics' complaints. Meanwhile, Linus' simple rebuttal is given full airtime and seems completely reasonable. It wasn't until I read this post that I felt the Linux maintainers may be doing some things wrong.

Astonishingly so, since it's actually a very good article. Only one or two (minor) flaws in a long article about a technical subject, written by a journalist, is a good tally.

> The industry is entirely broken in terms of what it values.

Couldn't agree more. I feel that we, as entire IT industry, have failed to provide robustness, security, and privacy after dozens of years of development of Internet technologies. Just take the recent vulnerabilities in Android and iPhones, used everyday by millions of people worldwide. How could that happen after so many billions of dollars invested in the development of the major technology used nowadays? We failed miserably and don't even understand the root problems.

Of course, functionality is a completely different matter: there we've seen tremendous improvements over the years, which is very positive, but that's another story.

I think Google has understood the systemic security problems in Android pretty well since the beginning, but adopted a typical data driven approach: gather data, and when/if phones start getting compromised start figuring out what countermeasures are cost effective.

I wouldn't agree with that statement. The Android app store is filled with apps that steal user data and outright malicious apps. It only gets cleaned up when somebody does some research, tracks things down, and it ends up in the press.

Or maybe that it's just cost effective to have others do the work for you?

I've been following grsec for a while now, and I really like the honesty around it. They admit what they are and aren't good at, and the product itself (grsec) has become my go-to hardening system for the kernel over SELinux (I know you can combine the two; I don't, though). Combined with other measures, I think I'm doing a pretty good job of balancing the usability/security scale.

If you haven't taken the time to learn grsec, you will thank yourself later if you do. Keep in mind, though, that there was some recent drama with some people/companies not properly attributing grsec, so you want to use current instead of stable, imho. Alpine Linux has grsec built in, Gentoo has some good guides, and so does Arch, but I tend to add it to Debian.

As far as the state of Linux kernel security goes, I blame one thing in particular: complexity and the sheer amount of code. The many-eyes theory has a fault in that it assumes a lot of people will look at the code, and that with enough people the (security) bugs will be found. Well, the problem is that the Linux kernel is now at 10 million+ LOC. So even with a shitton of people digging through the code, lots of stuff is going to get missed, and the real problem is that there are a lot fewer people looking at the code than we all want to think.

I think the primary way we will improve security in the future is through efforts to refactor and reduce the complexity of code in general, along with making it easier to read (or better commented).

This is one reason why I find MINIX 3 to be a very interesting project, at under 10k LOC.

Good points. As far as SELinux and grsec combined go, it might help to know what Type Enforcement is really supposed to do in practice. It's not just isolation, like rule-based control. The most powerful thing about it was "assured pipelines" that could deal with transitive issues or force things to happen in a set order.
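As a toy illustration of the "assured pipeline" idea (a Python sketch with invented names, no relation to real LOCK or SELinux mechanisms): the sink only accepts values minted by the sanctioned intermediate stage, so the pipeline order is enforced rather than hoped for.

```python
# Toy assured pipeline: raw data cannot reach the sink without first
# passing through the one sanctioned transform.

class Sanitized:
    """Only sanitize() is meant to mint instances; the sink demands one."""
    def __init__(self, text):
        self.text = text

def sanitize(raw: str) -> Sanitized:
    # The mandatory intermediate stage of the pipeline.
    return Sanitized(raw.replace("<", "&lt;").replace(">", "&gt;"))

def sink(data: Sanitized) -> str:
    # Refuses anything that didn't come through sanitize().
    if not isinstance(data, Sanitized):
        raise TypeError("sink only accepts sanitized data")
    return data.text

print(sink(sanitize("<b>hi</b>")))   # &lt;b&gt;hi&lt;/b&gt;
```

In a real TE system the enforcement lives in the kernel's policy (subjects of one type may only write to objects of the next type in the pipeline), not in application code, so it can't be bypassed by a buggy or malicious program.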

Relevant papers for it here:


The LOCK platform still kicks its successors' (esp. Linux + SELinux) asses in many ways despite the time passed. It just shows how little the mainstream learns from the past, or even the present, of secure work in academia. Hope you enjoy the LOCK and CHERI designs, if not FLASK, of which I'm not a fan either.

Get used to the Linux situation, because it's not going to change, I think. The "many eyes" theory is downright stupid because, guess what, there are few if any eyes.

The eyes that are many are on the attacker side, extremely skilled individuals who have cut their teeth on the kernel for 15+ years.

On the defender side, apart from Google Project Zero (who are not just focusing on the Linux kernel) and a few stray individuals, there is nobody looking for vulnerabilities in the kernel in order to make them public.

As far as complexity goes, Linus knows all of that which is why he's playing "catch the baby" or "throw the hot potato". I called him maliciously stupid in a previous comment and I think that's a fair characterization. He's not simply stupid, he knows the stakes and the sad state of affairs in the kernel (complexity, 0 security mindset, archaic architecture) and he sees the options available to him:

+ Make security a top priority (as Microsoft did 10 years ago), which will expose him as a fool for his past mindset, since it amounts to admitting he was dead wrong all these years. I don't think he has it in him to do this; he's too much of an egomaniac now.

It would also expose most of the kernel maintainers and developers as total incompetents when it comes to writing secure code, and slow the pace of development.

+ Let others solve the problem. This is where Grsecurity/PaX comes in. That would necessitate him releasing a lot of control over the kernel to third parties, since the best parts of Grsecurity are pretty intrusive and touch a lot of kernel components. I don't think he's willing to do that either.

+ Do nothing and deal with the problem by making idiotic statements of the sort "If you care about security, don't connect Linux to the Internet" or "insulate the kernel by adding layers of security such as sandboxes...". In short, he's saying it's not his problem, STFU and deal with it yourself. These comments are idiotic because any security person knows you can't build a fortress on shifting, rotting foundations. You can pile on as many sandboxes and intrusion detection systems as you want, but they can all be bypassed if the kernel is weak.

So, to summarize: he knows he has a clusterfuck on his hands due to decades of development with zero security mindset, and he's simply not willing to own up to it. He's throwing the hot potato to us and trying to shift awareness and focus away from the part he's directly responsible for.

Grsecurity languishes in (relative) obscurity because no distribution ships it. I know several people who know about it and would pick the option if it was distro-supported. If you don't get automatic updates it's a non-starter.

Popularity in distros would put a lot of pressure on the mainline kernel and might get things moving there.

Alpine Linux ships it, and while they're not a major distro, I get the impression they're not "insignificant".

Arch Linux has it in the repos but it's not installed by default of course.


The Gentoo Hardened Project makes using grsec/PaX relatively easy. https://wiki.gentoo.org/wiki/Project:Hardened

I used to be a security freak. I ran Gentoo Hardened, grsecurity PaX/RBAC, customized ACLs, etc. IMHO it's a high-quality piece of software, very polished and well-designed... but I'm an Ubuntu guy today. For my small business, that level of security was too time-consuming and held me back. It's kinda sad.

You know, that's the problem. There's basically no reason why this should be so hard. Many security features could just be enabled by default by major distributions, with hardly any downside. You don't even have to look at grsecurity: just building PIE binaries to enable proper ASLR would be a start.

Ubuntu already does quite a bit: https://wiki.ubuntu.com/Security/Features

I'm not well-versed enough to know whether "just using PIE binaries to enable proper ASLR" is included, but the chart does show green against various items mentioning ASLR. It looks like specific packages are built with PIE, too.
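For the curious, PIE status can be read straight out of a binary's ELF header; this is a sketch assuming a little-endian Linux/ELF system (the 16-bit e_type field at offset 16 is 2 = ET_EXEC for a fixed-position executable, 3 = ET_DYN for position-independent code, which is what ASLR needs to randomize the executable's base).

```python
import struct
import sys

def is_pie(path):
    """Return True if the ELF binary at `path` is position-independent."""
    with open(path, "rb") as f:
        header = f.read(18)
    assert header[:4] == b"\x7fELF", "not an ELF file"
    (e_type,) = struct.unpack_from("<H", header, 16)
    return e_type == 3   # ET_DYN: PIE (or a shared object);
                         # e_type == 2 (ET_EXEC) means non-PIE

# Example: check the running Python interpreter itself.
print(is_pie(sys.executable))
```

Shared libraries are also ET_DYN, so on real binaries you'd additionally distinguish them from PIE executables (e.g. by the presence of a PT_INTERP segment), which this sketch omits.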

Same here.

I used to make my own Linux distribution, from scratch, with Grsecurity, PaX/RBAC for everything.

Then it wasn't so usable: when I needed new packages, software, or upgrades, compiling was tiresome, and I didn't know how to make a package manager or how to automate everything.

I assumed somebody else would do it, a big multi-billion-dollar company perhaps; since I was just a 16-year-old doing that over a summer, they would do better, right?

Oh how sad. Nobody really cares about security.

Enterprises just use lawyers instead of security.

Arch Linux has a working out-of-the-box config too. The guy who contributed it to Arch Linux is Daniel Micay, who was also quoted in the WaPo article.

Anyone who is curious about grsec and comfortable installing a Gentoo overlay would have no problem installing grsec on their own, even if Gentoo is not their normal distro. You're not doing anyone any favors by making grsec look like a Sisyphean task without Gentoo.

I used to run mandrake many years ago, and there was a -grsec kernel available in their package repository, and it just worked.

See also this story: https://grsecurity.net/announce.php.

There is no monolithic upstream organization. The real problem is that it's really hard work to upstream code, particularly when it touches core parts of the kernel. Look how long it took to get other invasive work in, like tickless operation or PREEMPT_RT. But it got done; it just took time and patience.

And insulting the upstream people like this doesn't make your job any easier.

> And insulting the upstream people like this doesn't make your job any easier.

The upstream people are pretty wedded to the idea that throwing insults is a reasonable response to frustration with poor behaviour. They do not have any legs to stand on re: hurt feelings.

I would think that everyone here agrees that computer security is in a state of turmoil. Is it possible to design a computing system that fails safe in the event of a bug in a component, instead of opening the entire system up to exploits? Fail-safe as in: the process does nothing, or the surface area available to the malware is restricted.

There were systems that did that, all the way down to the hardware, in the 1960s, with many more since:


The market rejected them because they cost a bit more or didn't have the highest raw performance. Such short-sightedness means most don't exist any more in any turn-key form. Many of the modifications are straightforward enough that even academics are prototyping them and porting Linux/FreeBSD to them.


The mainstream just refuses to learn or adopt proven methods of the past. They use every justification in the world, even when the labor is free (FOSS) and someone is only asking for the bare minimum of proven techniques. The market rarely buys the stuff outside very limited sales of some robust appliances: see Aesec's GEMSOS, SAGE Guard on XTS-400, Nexor Mail Guard (on XTS-400), Green Hills' INTEGRITY-178B OS w/ virtualization, Mikro-SINA VPN on L4, Secure64's SourceT OS for DNS, Sentinel's HYDRA firewall (uses INTEGRITY), and so on. These are social, political, and economic problems rather than technical ones. I see no end to it outside continued sales and development of niche solutions.

Note: The things I referenced in last paragraph are either still on the market w/ descriptions available via Google or at least have papers in reach. I left out tons of good stuff that's no longer around or just a prototype. Happy Googling and learning. :)

> A fail-safe or fail-secure device is one that, in the event of a specific type of failure, responds in a way that will cause no harm

The key phrase is "specific type of failure". In theory, any particular piece of large software has tens of thousands of fail-safes in it already. For example, when you send an oversized buffer to an application with input checking, it does not explode in a ball of flame (unlike programs from the '90s); it warns you about the problem. But that is where the analogy between mechanical devices and software breaks down: software is far more internally connected than almost all other machines.
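A small generic sketch of that fail-safe idea in code (hypothetical names, not from any real system): a default-deny access check, where an error inside the policy logic itself results in refusal rather than permission.

```python
# Fail-closed: an error inside the policy check results in denial,
# never in silently granting access.

def allowed_by_policy(user, resource):
    # Stand-in for a real policy lookup; may raise on unexpected input.
    acl = {"alice": {"report.txt"}}
    return resource in acl[user]       # KeyError for unknown users

def check_access(user, resource):
    try:
        return allowed_by_policy(user, resource)
    except Exception:
        return False                   # fail closed: deny on any error

print(check_access("alice", "report.txt"))   # True
print(check_access("mallory", "anything"))   # False -- unknown user denies
```

The inverse pattern (returning True, or proceeding, when the check errors out) is exactly the "fail-open" behavior that turns a mere bug into a bypass.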

That looks like a political problem. Maybe the state should fund security for its citizens - maybe we need some new kind of institutions to do this.

Currently, states are looking for ways to legally hack into your phone, computer, and tablets, intercept all kinds of communication, and read offline data without a warrant. That's not how it works nowadays.

They do that because nobody demands that they don't.

Well there is no Linux security.

L4 provides that.

Upvoting you because the first statement is right: there is no Linux security. Why? Security at a minimum requires a formal policy of what it's meant to achieve, along with evidence that the design meets that policy. Linux never had either. Its track record for flaws points way in the opposite direction, too. So, it's insecure by default until proven otherwise, and that might not even be possible due to complexity.

Combinations of micro/separation kernels and paravirtualized Linux, from the L4 projects to commercial offerings like LynxSecure, at least have the beginnings of a security-via-isolation argument. A start...

Mother of all pissing matches...
