
It is not a real solution. The people delivering memory-safe code today do not think their systems are secure against individual, lone attackers, let alone fully-funded state actors. The overwhelming majority of them, and of software developers and software security professionals generally, probably think it is literally impossible to design and develop usable systems that are secure against such threats, i.e. that can achieve the desired requirements.

"Let us do this thing that literally every practitioner thinks cannot achieve the requirements, and maybe we will accidentally meet the requirements in spite of that" is a bona-fide insane strategy. It only makes sense if those are not "requirements" at all, just nice-to-haves; which, to be fair, is the state of software security incentives today.

If you actually want to be secure against state actors, you need to start from things that work, or at least things that people believe could, in principle, work, and then work down. Historically, there were systems certified according to the TCSEC Orange Book that, ostensibly, the DoD at the time (the 80s to 90s) believed were secure against state actors. A slightly more modern example would be the Common Criteria SKPP, which required NSA evaluation that any certified system met such requirements.

But if you think they overestimated the security of such systems, so there are no actual examples of working solutions, then it still makes no sense to go with things that people know for certain do not work. You still need to at least start from things that people believe could be secure against state actors; otherwise you have already failed before you even started.




> If you actually want to be secure against state actors, you need to start from things that work, or at least things that people believe could, in principle, work, and then work down. Historically, there were systems certified according to the TCSEC Orange Book that, ostensibly, the DoD at the time (the 80s to 90s) believed were secure against state actors. A slightly more modern example would be the Common Criteria SKPP, which required NSA evaluation that any certified system met such requirements.

Right. I was around for that era and worked on some of those systems.

NSA's first approach to operating system certification was the same one they used for validating locks and filing cabinets. They had teams try to break in. If they succeeded, the vendor was told of the vulnerabilities and got a second try. If an NSA team could break in on the second try, the product was rejected.

Vendors screamed. There were a few early successes: a few very limited operating systems for specific military needs, something for Prime minicomputers. Nothing mainstream.

The Common Criteria approach allows third-party labs to do the testing, and vendors can try over and over until success is achieved. That is extremely expensive.

There are some current successes. [1][2] These are both real-time embedded operating systems.

[1] https://www.acsac.org/2009/program/case/ccsc-Kleidermacher.p...

[2] https://provenrun.com/provencore/


> That is extremely expensive.

And it proves nothing. And it's just as expensive for every update, so forget updates.


ProvenRun, used in military aircraft, has 100% formal proof coverage of its microkernel.

We know how to approach this. You use a brutally simple microkernel such as seL4 and do full proofs of correctness on it. There's a performance penalty for microkernels, maybe 20%, because there's more copying. There's a huge cost to making modifications, so modifications are rare.

The trouble with seL4 is that it's not much more than a hypervisor. People tend to run Linux on top of it, which loses most of the security benefits.


> The trouble with seL4 is that it's not much more than a hypervisor. People tend to run Linux on top of it, which loses most of the security benefits.

Well, yeah, that's a problem.

But the bigger problem is that this works for jets, as long as you don't need updates. It doesn't work for general-purpose computers, office productivity software, databases (is there an RDBMS with a correctness proof?), etc. It's not that one couldn't build such things; it's that the cost would be absolutely prohibitive.


It's not for everything. But the serious verification techniques should be mandatory in critical infrastructure. Routers, BGP nodes, and firewalls would be a good place to start. Embedded systems that control important things - train control, power distribution, pipelines, water and sewer. Get those nailed down hard.

Diagrams like this scare me.[1]

[1] https://new.abb.com/docs/librariesprovider78/eventos/jjtts-2...


> Routers, BGP, ...

Well, but those need new features from time to time, and certification would make that nigh impossible. I'd settle for memory-safe languages as the happy middle of the road.


seL4 is a bit more than a hypervisor, but it's definitely very low-level. In terms of a useful seL4-based system, you may want to look at https://trustworthy.systems/projects/LionsOS/ – not yet verified, but will be.


Ooooo, thanks for the rare gem of a comment giving me fun things to look up.


I think certification overestimates security, absolutely. Certification proves nothing.

You can use theorem provers to prove that an implementation matches its specification, but you can't prove that the specification itself (the business logic) is correct. There's a degree to which you just cannot prevent security vulnerabilities.
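
To make that concrete, here's a toy sketch (hypothetical Lean 4, with made-up names, not from any certified system): the spec below only demands that the output be sorted, so a "sort" that throws its input away verifies perfectly. The prover is satisfied; the business logic is still wrong.

    -- Hypothetical spec: it only requires that the output be sorted.
    def Sorted : List Nat → Prop
      | [] => True
      | [_] => True
      | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

    -- A "sort" that discards its input entirely.
    def badSort (_ : List Nat) : List Nat := []

    -- The proof goes through: [] is trivially sorted. The spec
    -- forgot to require the output be a permutation of the input.
    theorem badSort_sorted (xs : List Nat) : Sorted (badSort xs) := by
      simp [badSort, Sorted]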

But switching to memory-safe languages would reduce vulnerabilities by something like 90%. That's not nothing.
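
As a minimal illustration of the class of bug being eliminated (a hypothetical Rust sketch, nothing more): an out-of-range read is caught at the language level instead of silently becoming the kind of buffer over-read that C permits.

    // Minimal sketch: safe Rust cannot silently read past a buffer.
    fn main() {
        let buf = vec![1u8, 2, 3];

        // Checked access: returns None instead of reading past the end.
        match buf.get(10) {
            Some(b) => println!("byte: {b}"),
            None => println!("index out of range, no corruption"),
        }

        // Even `buf[10]` would panic deterministically rather than
        // leak whatever happens to sit past the buffer in memory.
    }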



