
     here other approaches to secure them actually work,
     like sandboxes and containers,
"It is a fallacy to think that for 30+ years people haven't been able to write a secure OS. But virtualization, and containers are" - Theo de Raant



GEMSOS, KeyKOS, ASOS, LOCK, VAX VMM, INTEGRITY-178B, seL4, NOVA... all in the 4Kloc-50+Kloc range for a reason. All simpler, smaller, and easier to model than a full OS. Two survived pentesting by NSA during validation; the others came out way better than most OSes in defect rate or correctness case.

Theo's quote is BS by the numbers and pentest results. Sandboxing done right is way more secure.


While true, in practical terms those OSes are not what people have lying around to do their image processing. I'd love a secure, formally verified, capability-based microkernel OS as much as the next person, but all I have is Linux and various BSDs. It's a bit academic at this point.


The commenter disputed that sandboxing was worthwhile by citing Theo de Raadt's claim that getting those (i.e. VMs) right is about as hard as getting whole OSes right: since one proved futile, the other would too. I countered that citation with numerous examples that disprove his point. Then, by implication, sandboxing might be effective if it's done like in my examples. And there are CompSci designs doing stuff like that.

Whether you want to do image processing on Linux, BSD, or one of the kernels I cited is a separate topic. However, it's worth noting that even stuff like I cited could be useful for that: it stashes the image processor into its own protection domain with read access to the memory containing the input and write access to the memory that will hold the output. The code itself isn't allowed to do or touch anything else. At that point, exploits must be chained in clever ways. Modifying this to preserve pointer integrity, with the lowest overhead range being around 1-10%, would choke those kinds of attackers further.
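To make that concrete, here's a minimal sketch of the protection-domain idea on stock Linux using strict-mode seccomp: once the child calls prctl(), the kernel only permits read(2)/write(2) on already-open descriptors plus exit, and kills it on anything else. Everything named here is hypothetical, decode_image() especially, which just stands in for whatever untrusted image-processing code you'd confine:

    /* Sketch: untrusted decoder confined to read input, write output. */
    #include <linux/seccomp.h>   /* SECCOMP_MODE_STRICT */
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/prctl.h>       /* PR_SET_SECCOMP */
    #include <sys/syscall.h>     /* SYS_exit */
    #include <sys/wait.h>
    #include <unistd.h>

    enum { IN_SZ = 1 << 20, OUT_SZ = 4 << 20 };

    /* Hypothetical stand-in for the untrusted image processor. */
    static void decode_image(const unsigned char *in, size_t in_len,
                             unsigned char *out, size_t out_len)
    {
        (void)in; (void)in_len;
        memset(out, 0, out_len);   /* real parsing would go here */
    }

    int main(void)
    {
        /* Trusted parent sets up shared input/output buffers. */
        unsigned char *in  = mmap(NULL, IN_SZ,  PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        unsigned char *out = mmap(NULL, OUT_SZ, PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (in == MAP_FAILED || out == MAP_FAILED)
            return 1;
        /* ... parent copies untrusted image bytes into in[] ... */

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: input goes read-only in this process only, then
               we enter the restricted domain before parsing anything. */
            mprotect(in, IN_SZ, PROT_READ);
            if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
                _exit(2);
            decode_image(in, IN_SZ, out, OUT_SZ);
            /* Raw exit(2): glibc's _exit() calls exit_group(2),
               which strict mode doesn't allow. */
            syscall(SYS_exit, 0);
        }

        int status;
        waitpid(pid, &status, 0);
        /* Only trust out[] if the child exited cleanly; a seccomp
           violation shows up as death by SIGKILL instead. */
        return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : 1;
    }

It's crude next to a real separation kernel (the child still sees its whole address space, and strict mode is all-or-nothing where a BPF filter would allow finer policy), but it has the right shape: read the input, write the output, and everything else is off the table, so an exploit has to chain through the parent.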

Using Rust is yet another tactic that can work well with or in lieu of a sandbox. Ideally with one, for issues the language itself can't cover. Three precedents for that are right on my list: GEMSOS's safety was improved using a Pascal subset with call-by-value semantics; ASOS was a combo of security kernel and Ada runtime to host embedded Ada applications (language safety + sandbox); separation kernels like INTEGRITY-178B usually support runtimes for Ada and a Java subset to get safety within components. Now there are OSes like Tock and Redox, plus low-level libraries, written in a new, safety-enhancing language. Such designs can benefit from the old strategies, a few of which we see in Redox. On the other end, Genode uses many old and recent tricks in its architecture but not a safe language. It can also run a Linux desktop with your image processing code. ;)

Note: "a bit academic" is also misleading. Several separation kernels are deployed commercially for things like secure web browsing. INTEGRITY and LynxSecure are among the oldest. The Nizza and Perseus architectures got turned into Sirrix's Turaya Desktop. These things aren't just academic.


It's academic in the colloquial sense that the tools you describe aren't available to use. I'd love to have all the futuristic, secure stuff you describe, but for a user's run-of-the-mill AWS deployment, that's not an option.

All the other stuff you say is on point, of course, and there's an argument or a hundred to be made that a lot of these problems are already solved, and that we (as users and developers) are missing out on a massive amount of amazing OS research that could have made everything better than our current insecure, 1970s-tech monolithic kernel OSes.


"It's academic in the colloquial sense that the tools you describe aren't available to use."

I just named multiple tools available to use in the commercial sector and FOSS. Their predecessors were used by the commercial and government sectors for a decade before that. That's 100% not academic, even if a few started there. If anything, more work just needs to be poured into the FOSS stuff so it gets more deployment and impact.

"All the other stuff you say is on point, of course, and there's an argument or a hundred to be made that a lot of these problems are already solved, and that we (as users and developers) are missing out on a massive amount of amazing OS research that could have made everything better than our current insecure, 1970s-tech monolithic kernel OSes."

We're definitely in agreement there. Fortunately, we're seeing some exploration into that including on the Rust front with Redox.


    Then, by implication, sandboxing might be effective if 
    it's done like in my examples. And there are CompSci 
    designs doing stuff like that.
I mean, it is great if you can wave your hands and invent a Hoare Logic proof that your code is secure. But your Hoare Logic models hardware. While you can prove your Hoare Logic is consistent, you can't prove your Hoare Logic maps to existing hardware. You can only assure the review board you made your best effort.

Which defeats the exercise completely.


You could argue against safety of Rust, OpenBSD, etc. with the same strawman. What it and those kernels did was create an abstract model realistic enough to be useful, embed a safety/security policy in that model, and produce code that corresponds to that model. Older systems used what are called Abstract State Machines, which are quite easy to model-check or even analyze by hand; they were typically converted to FSMs with the same advantage. It was usually done by hand and eye instead of formally. In GEMSOS, the Gypsy specs mapped to the code's functions/modules about 1-to-1 in most cases.
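To show how tractable that is at small scale, here's a toy sketch of the idea (the model is hypothetical, nothing like GEMSOS's actual specs): with a handful of states and events, "model-checking" degenerates to exhaustively walking every transition and asserting the security property, which is just as easy to eyeball by hand.

    /* Toy abstract state machine for a reference monitor, checked
       exhaustively: every (state, event) pair gets verified against
       the safety property. Entirely hypothetical model. */
    #include <assert.h>
    #include <stdio.h>

    typedef enum { BOOT, AUTHED, RUNNING, HALTED, NSTATES } State;
    typedef enum { EV_AUTH, EV_EXEC, EV_FAULT, NEVENTS } Event;

    /* Transition function; anything unlisted is a rejected
       (self-loop) transition, which is part of the policy too. */
    static State step(State s, Event e)
    {
        switch (s) {
        case BOOT:    return e == EV_AUTH  ? AUTHED  : s;
        case AUTHED:  return e == EV_EXEC  ? RUNNING : s;
        case RUNNING: return e == EV_FAULT ? HALTED  : s;
        default:      return s;   /* HALTED absorbs everything */
        }
    }

    /* Safety property: untrusted execution (RUNNING) is never
       reachable in one step from BOOT, i.e. only via AUTHED. */
    int main(void)
    {
        for (int s = 0; s < NSTATES; s++)
            for (int e = 0; e < NEVENTS; e++)
                assert(!(s == BOOT && step((State)s, (Event)e) == RUNNING));
        puts("all transitions checked");
        return 0;
    }

Scale the state space past a few dozen bits and that brute force dies, which is exactly why these kernels keep their models tiny.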

Recent efforts did it formally for things ranging from kernels to application-layer sandboxes. In the case of VCC and SPARK, they do annotations on the code itself. In things like seL4, they do proofs on the code connecting it to a model. So, one can do that. It's usually not necessary, since we're talking about improving the security of things over the status quo. You seem to be adding an additional requirement of totally unhackable, with mathematical proof down to the hardware. AAMP7G and SSP are production CPUs close to that, esp AAMP7G. VAMP with CodeSeal, and CHERI, might do it too. One academic one was proven down to the gates. They're all irrelevant, as we're talking about reducing the damage of an image library on regular CPUs and OSes.
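For a taste of what "annotations on the code itself" look like, here's a sketch in ACSL, Frama-C's annotation language for C; VCC and SPARK use different syntax, but the idea is the same: the contract sits next to the code and a prover discharges it. clamp_pixel() is a made-up example, not from any of those systems.

    /*@ assigns \nothing;
        ensures 0 <= \result <= 255;
        ensures v < 0   ==> \result == 0;
        ensures v > 255 ==> \result == 255;
        ensures 0 <= v <= 255 ==> \result == v;
    */
    int clamp_pixel(int v)
    {
        if (v < 0)   return 0;
        if (v > 255) return 255;
        return v;
    }

The prover has to show every path through the body satisfies every ensures clause, which stays tractable precisely because the functions are kept tiny.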

You trust that the Rust type system, compiler, and optimizations were implemented correctly enough that they'll knock out a bunch of problems. Certifying compilers like FLINT and CompCert took mountains of verification effort to pull off a fraction of that. Whereas I'm saying one can trust that 4-12Kloc, or a compiler transformation connected to an abstract model fitting on a page or two, might do its job with fewer problems than just trusting complex, native code in C or whatever. I also encourage mixing such TCBs with language-based security. However, such a tiny, clean, ASM-style program will have way fewer defects than a full OS or application.


    You could argue against safety of Rust [...] with the same strawman.
Not in the slightest.

Rust does a mapping of input symbols to output symbols. The input symbols have no 1:1 mapping with machine instructions on any one platform. All Rust does is attempt to ensure that the constraints of the input symbol language are preserved in the output symbol language.

Lastly, no compiler claims its model is 100% consistent and will always work correctly in all circumstances.

Hell, C doesn't even bother defining how many integer operations work (signed overflow, for one, is undefined).

    What it and those kernels did was create an abstract 
    model realistic enough to be useful

    realistic enough

    enough
That is my point. It is a close-enough model. The model is an approximation. Most models do not cover all hardware, or inter-generational bugs, for example the recent Skylake 0xF000000000000000F bug.

Covering those is impossible for them, too. But for an attacker it is trivial to exploit these bugs.

Saying you've proved a system means the system is rigorously true beyond the shadow of a doubt. Since you can't rigorously show that the underlying state machine or Hoare logic maps 1:1 to the underlying machine, you haven't proved anything.

Okay, so let us pretend I prove P=NP for a collection of symbols which I state map 1:1 to a set of all possible algorithms in both P and NP space. But I don't prove that second statement. I'll get laughed out of my advisor's office.

What proving software correct does is, from a rigorous perspective, effectively no different from my anecdote.

Done a damned good audit? Yes.

Proved? No.

You are auditing software, not proving it.

This isn't a straw man. Or a logical fallacy. It is a failure of logic. You prove A is true in system B. But never prove system B is correct... You've proved nothing.


"It is a fallacy to think that for 30+ years people haven't been able to write a secure OS. But virtualization, and containers are" - Theo de Raant"

"here are some examples of containers & virtualization that were either proved secure against a policy with realistic model, survived tons of pentesting, or both. Sandboxing done right is way more secure" - Nick P. paraphrased

"you can't prove your Hoarce Logic maps to existing hardware... Which defeats the exercise completely." (you)

"Most models do not cover all hardware, or inter-generational bugs for example the recent skylake 0xF000000000000000F bug."

"let us pretend I prove P=NP for a collection of symbols which I state map 1:1 to a set of all possible algorithms in both P and NP space."

It's like you're talking about a whole different conversation. Theo's claim is thoroughly refuted whether it was done by eye, hand proofs, model-checking, or full verification down to the gates. It was done by each of these, actually. The use of smaller systems with simpler APIs, plus assurance techniques that worked to reduce defects, got results no monolithic OS or UNIX got. Hence why people recommend investigating or using such approaches to sandbox applications, given those approaches might similarly result in fewer vulnerabilities or less severe damage.

Then you counter by talking about how they aren't proven down to the code, in all circumstances, on the hardware, P = NP, etc. What does all that have to do with the claim about the reduction in effort required or damage created using these approaches vs what building or relying on a whole OS would entail? You could throw all that same stuff at the alternative approaches. It would still be true. Whereas people acting on my recommendations instead of Theo's would provably reduce their attack surface and the number of hits they take, based on the results gotten in the past by the examples I gave. That's why your reply is a strawman to the discussion of whether sandboxing can reduce the impact of problems or not, with Theo's criticism being destroyed. Even by his own OS, which had far more kernel bugs needing mitigation than the stuff I cited, which ran such crap deprivileged or eliminated it at the design level where possible.


If you squint, containerisation looks rather like the minimalistic microkernels which have formed the few genuinely secure OSes of those 30+ years.


That's a hilariously appropriate typo in the spelling of his last name.



