
Note that even if you prove the correctness of your code, there might still be bugs in the compiler, so you will have to prove the compiler. Then there could be bugs in the modules / DLLs / dependencies in the OS, so more proving needs to be done. Then there might be bugs in the CPU. So it's not so easy.

Then there are things which you simply can't prove, e.g. in crypto the code may be correct, yet it may still leak side-channel information (such as timing). Also, nobody has proven that primitives like SHA-256 are unbreakable.
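
As a toy illustration (my own sketch, not tied to any particular library): both functions below satisfy the same functional specification, so a correctness proof can't tell them apart, yet the first one leaks the length of the matching prefix through its running time.

    #include <stddef.h>
    #include <stdint.h>

    /* Functionally correct, but returns as soon as a byte differs, so
       the running time leaks how long the matching prefix is. */
    int leaky_equals(const uint8_t *a, const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (a[i] != b[i])
                return 0;
        }
        return 1;
    }

    /* Same specification, but always touches every byte, so the timing
       does not depend on where the first mismatch occurs. */
    int ct_equals(const uint8_t *a, const uint8_t *b, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }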






The goal isn’t to deliver a provably correct system - that’s effectively impossible.

But if core OS features or APIs (like the network stack) can be written in a theorem-proving language, huge benefits can be realized — the attack surface can be vastly reduced.


People will just go around.

I find it hard to resist commenting during those movies set in New York City where a character has a steel-jacketed door and a million locks, but I bet the wall is cheap gypsum board over studs on 18-inch centers. You could steal a lot of stuff by going around the door without even compromising the building structure. You just need the right sheetrock knife and a quiet hallway.

Someone told me about Microsoft having a data center in a leased building, and they had the forethought to fill the space above the false ceiling with motion sensors to prevent someone just getting a ladder and going over the top of the walls. People don't always think about these things.

If you have a model that works for certain patterns, people will begin to ask what situations it can't handle. The attacks will move to that space. They may even look at release notes and try things that were reported as fixed, similar to the way people now analyze operating-system updates to figure out what was patched.

Will it keep lazy criminals out? Yes. But they're not all lazy.

Where it's more likely to help is that you'll find crashing bugs you may have missed, and you save some face by not having particularly naive bugs in your code. That alone may be worth the cost of entry. But it's 'safer', not 'safe'.


> Where it's more likely to help is that you'll find crashing bugs you may have missed, and you save some face by not having particularly naive bugs in your code.

That doesn't sound right at all. Systems with atrocious security are more likely to be compromised than systems with good but imperfect security. Attackers do not have infinite resources.


Which also doesn't sound right.

I took the proposal on the table to mean that we systematically harden infrastructure code. If that's what we're doing then a rising tide will lift all boats. Except for the people who fall behind on upgrades. There is a period of chaos where your prediction will play out over and over, but eventually there's a new equilibrium where it's just different rocks to look under. Then people get bored again and they fall behind on upgrades, and now every time new functionality is added to the formal system, by definition people using the old version aren't getting that protection, so you'd look there first.

I think the critical question is how quickly a formal system reaches homeostasis. If the rate of new bugs (or maybe you call them features?) is negligible after a brief flurry of initial activity, then most of my argument collapses in a divide-by-zero error.

As to priorities, I don't think penetration and debugging are fundamentally that different. And by that I mean, there's some fuzzy math you do regarding cost (to you) of checking a thing times the value of having that information. Yes, that means a ton of cheap scenarios will be tested very early. But ultimately, how valuable is breaking into a boring system? It's definitely not zero, because you've created another resource for yourself.

Doesn't code that runs on a lot of systems get far more attention than code that runs on a few? Wasn't that part of why OS X got away with fewer viruses? Not because it was profoundly harder to hack, but simply because fewer people cared?


> There is a period of chaos where your prediction will play out over and over, but eventually there's a new equilibrium where it's just different rocks to look under.

When we take steps to eliminate classes of vulnerabilities, well, those vulnerabilities stop being a problem, and we're all more secure. For example, code written in Java or C# is immune to the buffer-overflow and use-after-free attacks that continue to plague C code.
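
To make that concrete, here is a minimal C sketch of those two bug classes (my own toy example, not from any particular codebase); a memory-safe runtime such as the JVM or CLR simply won't let either state arise, while a C compiler accepts both without complaint.

    #include <stdlib.h>
    #include <string.h>

    /* Stack buffer overflow: nothing stops `src` from being longer than
       16 bytes, so the excess silently overwrites adjacent stack memory.
       In Java or C# the equivalent copy would throw a bounds exception. */
    void copy_name(const char *src) {
        char buf[16];
        strcpy(buf, src);   /* no bounds check */
    }

    /* Use-after-free: `p` still points at freed memory, and the compiler
       happily lets us read through it. Garbage-collected languages make
       this state unreachable. */
    int stale_read(void) {
        int *p = malloc(sizeof *p);
        if (p == NULL)
            return -1;
        *p = 42;
        free(p);
        return *p;          /* undefined behaviour */
    }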

There was no counterweight built into this. Security was simply improved.

> Then people get bored again and they fall behind on upgrades, and now every time new functionality is added to the formal system, by definition people using the old version aren't getting that protection, so you'd look there first.

Failure to update packages causes security issues, yes. That doesn't mean that security improvements don't, well, improve security, overall. They absolutely do.

> If the rate of bugs (or maybe you call them features?) is negligible after a brief flurry of initial activity, then most of my argument collapses in a divide by zero error.

If they're using formal methods, they won't be adding bugs. (Discounting side-channel issues.)

I believe it's pretty unusual to see a company make non-verified changes to a codebase originally derived through formal methods. If they care about formal verification, they'll probably stick with it as they make changes.

> how valuable is breaking into a boring system? It's definitely not zero, because you've created another resource for yourself.

Sure - botnets.

> Doesn't code that runs on a lot of systems get far more attention than code that runs on a few?

Of course, but I'm not seeing what you're getting at here.


Well, more details about it are out. It looks like the library worked correctly, but in this case the devs forgot to check something (that the curve had the right params). No amount of writing it in a "theorem proving" language can save you if the specification for the problem itself isn't right.
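
The exact missing check depends on the incident, but as a rough illustration of this class of validation, here is a toy sketch; the prime, coefficients, and points are made-up values for illustration, not the real parameters.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy short-Weierstrass curve y^2 = x^3 + A*x + B over F_P. These
       are small made-up parameters; real code would use a big-integer
       library and the parameters from the relevant standard. */
    static const uint64_t P = 9739;
    static const uint64_t A = 497;
    static const uint64_t B = 1768;

    static uint64_t mulmod(uint64_t x, uint64_t y) {
        /* Fine without big integers only because P < 2^32, so the
           product of two reduced values fits in 64 bits. */
        return (x % P) * (y % P) % P;
    }

    /* One example of this kind of validation: refuse to work with any
       point that is not actually on the expected curve. */
    int point_on_curve(uint64_t x, uint64_t y) {
        uint64_t lhs = mulmod(y, y);
        uint64_t rhs = (mulmod(mulmod(x, x), x) + mulmod(A, x) + B) % P;
        return lhs == rhs;
    }

    int main(void) {
        printf("%d\n", point_on_curve(493, 5564)); /* 1: on the toy curve */
        printf("%d\n", point_on_curve(1, 1));      /* 0: rejected */
        return 0;
    }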

I suppose that with enough time and attention one could build a proof system that accounts for things like Rowhammer and timing attacks, but are we even anywhere close to that right now?

The only time anyone wrote a mechanical proof for my code, I was still finding security bugs a year later. One in particular stands out.

(Note to bosses: never, ever ask the security guy "what's the worst that could happen?" The worst that could happen is always along the lines of "we go out of business and you go to jail", maybe with "people could die" added on, depending on the domain. "How bad is this?" will get you more coherent answers.)


You could go further: there are also bugs in provers like Coq themselves, which is why their developers create proofs of correctness for the provers [1]. More precisely, the linked paper implements a type checker for the core language of Coq that is itself proven correct in Coq.

[1] https://www.irif.fr/~sozeau/research/publications/drafts/Coq...


> you will have to prove the compiler

Strictly speaking you don't; proving the compiler is only one of two approaches. Fortunately, the CompCert compiler already exists if you want to go the route you describe.

The alternative is to use a non-verified compiler, but verify that the generated code corresponds to the model. This is the route the seL4 project took: they used GCC, but verified the correctness of the code it generated.



