Hacker News new | past | comments | ask | show | jobs | submit login

Not if the implementation is formally verified, like miTLS [0] and EverCrypt [1]. Parts of the latter were integrated into Firefox, which even provided a performance boost (10x in one case) [2].

I think what is needed is something like EverCrypt but for TLS. Or in other words, something like miTLS but which extracts to C and/or Assembly, to avoid garbage collection and for easy interoperation with different programming languages (preferably including an OpenSSL-compatible API for backwards compatibility).

[0]: https://mitls.org/

[1]: https://hacl-star.github.io/HaclValeEverCrypt.html

[2]: https://blog.mozilla.org/security/2020/07/06/performance-imp...




Formally verified against what? And what assumptions were made?

miTLS is formally verified against the TLS spec for handshaking, assuming the lower-level crypto routines are good. It is not even free from timing attacks.

EverCrypt has stronger proofs, but it is only safe in the sense of not crashing, and correct in the sense of matching the spec on all valid inputs. It is not proven to be free from DoS or invalid-input attacks.

OpenSSL does more than TLS. Lots of the interesting things are in the certificate format parsing and management.


> Formally verified against what? And what assumptions were made?

Well, tell me again, what is OpenSSL formally verified against? What assumptions were made in OpenSSL?

Formal verification does not only eliminate very large classes of bugs when absolutely everything is formally verified, timing behavior included. It consistently produces more reliable software, often even when only certain basic properties are proved (such as memory safety, or the absence of integer overflows and division by zero).

Formal verification can be a continuum, like testing, but proven to work for all inputs (that possibly meet certain conditions, under certain assumptions). The assumptions and properties that are proven can always be strengthened later, as seen in multiple real-world projects (such as seL4 and others).

The result is code that is almost always a lot more bug-free than code that is not formally verified. And as I said, more properties can be proven over time, especially as new classes of attacks are discovered (e.g. timing attacks, speculation attacks in CPUs, etc).


It's also not like you'd just give up on fuzzers because you have formally verified code. All the same tools should still be available, plus one more.


To formally verify an implementation you need a... formal description of what is to be implemented and verified.

Constructing a formal specification from RFC8446 is possible.

Constructing a formal specification for PKIX... is not. PKIX is specified by a large number of RFCs and ITU-T/ISO specs, some of which are more formal than others. E.g., constructing a formal specification for ASN.1 should be possible (though a lot of work), while constructing a formal specification for certificate validation is really hard, especially if you must support complex PKIs like DoD's. Checking CRLs and OCSP, among other things, requires support for HTTP, and even LDAP, so now you have... a bunch more RFCs to construct formal descriptions of.

And there had better be no bugs in the formal descriptions you construct from all these specs! Recall, they're mostly not formal specs at all -- they're written in English, with smatterings of ASN.1 (which is formal, even though the specs for it, very good as they are, mostly aren't formal).

The CVE in question is in the PKIX part of OpenSSL, not the TLS implementation.

What you're asking for is not 5 man-years worth of work, but tens of man-decades. The number of people with deep knowledge of all this stuff is minute as it is -- maybe just a handful, tens at most. The number of people with deep knowledge of a lot of this stuff is larger, but still minute. So you're asking to spend decades' worth of a tiny band of people's time on this project, when there are other valuable things for them to do.

The number of people who can do dev work in this space is much larger, of course -- in the thousands. But very few of them have the right expertise to work on a formal, verified implementation of PKIX.

Plus, it's all a moving target.

Sure, we could... train a lot of people just for such a project, but it takes time to do that, and it still takes time from that tiny band of people who know this stuff really well.

I'm afraid you're asking for unobtanium.

EDIT: Plus, there's probably tens of millions of current dollars' (if not more) worth of development embodied in OpenSSL as it stands. It's probably at least that much to replace it with a verified implementation, and probably much more, because the value of programmers expert enough to do it is much more than the $200K/year suggested above (even if you train new ones, it would take years of training, and then they would be just as valuable). I think a proper, formally verified replacement of OpenSSL would probably run into the hundreds of millions, especially if it's one huge project, since those tend to fail.


Well, sure, if you start with those premises, then I'm not surprised that you reach the conclusion that the goal is unachievable.

First of all, if constructing a formal specification for PKIX is not possible, then that should be telling you that it either needs to be simplified, better specified or scrapped altogether for something better (the latter would require an extremely large transition period, I'm imagining, so the first two are much preferred in this situation).

Otherwise, how can you be sure that any implementation in fact implements it correctly?

> And there had better be no bugs in the formal descriptions you construct from all these specs!

Well, I don't think that is true. You should in fact allow bugs in the formal description, otherwise how will you ever get anything done in such a project?

You see, having a formal description with a bug is much better than having no formal description at all.

If you have no formal description, you can't really tell if your code has bugs. If you have a buggy formal description, then you are able to catch some bugs in the implementation and the implementation can also catch some bugs in the formal description.

Also, some parts of the formal description can catch bugs in other parts of the formal description.

So the end result can be strictly better than the status quo.

> Plus, it's all a moving target.

Sure, but hopefully it's a moving target moving in the direction of simplification rather than getting more complex, otherwise things will just get worse rather than better, regardless of whether we keep the status quo or not. I'm not a TLS expert by any means, but I think TLS 1.3 moved in that direction for some parts of the protocol, at least (if I'm not mistaken).

Also, I think you are not fully appreciating that formal verification can be done incrementally.

You can start by doing the minimum possible, i.e. simply verifying that your code is free of runtime errors, which would eliminate all memory safety-related bugs, including Heartbleed.

This would already be better than reimplementing in Rust, because the latter can protect from memory safety bugs but not other runtime bugs (such as division by zero, unexpected panics, etc).

BTW, this minimal verification effort would already eliminate the second bug in this security advisory.

You can then verify other simple properties, even function by function, no complicated models are even necessary at first.

For example, you could verify that your function that verifies a CA certificate, when passed the STRICT flag, is really more strict than when not passed the STRICT flag.

This would eliminate the first bug in this security advisory and all other similar bugs in the same function call-chain.

BTW, what I just said is really easy to specify, and I'm guessing it's also very easy to prove, since I'm guessing that the strict checks are just additional checks, while the normal checks are shared between the two verification modes.
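As a toy illustration of why that guess would make the proof easy (hypothetical names and cert representation, nothing like OpenSSL's actual API): if strict mode is implemented as the shared checks plus extra ones, the "strict is at least as strict" property holds by construction.

```python
# Hypothetical sketch, not OpenSSL's real API: if STRICT mode only
# *adds* checks on top of the shared ones, then anything rejected in
# normal mode is necessarily rejected in strict mode too.

STRICT = 1  # hypothetical flag value

def shared_checks(cert: dict) -> bool:
    # Placeholder for the checks common to both verification modes.
    return cert.get("signature_ok", False) and not cert.get("expired", True)

def extra_strict_checks(cert: dict) -> bool:
    # Placeholder for the strict-only checks.
    return cert.get("ca_basic_constraints", False)

def verify_certificate(cert: dict, flags: int = 0) -> bool:
    ok = shared_checks(cert)
    if flags & STRICT:
        # "and" can only narrow the set of accepted certificates,
        # so strict mode never accepts what normal mode rejects.
        ok = ok and extra_strict_checks(cert)
    return ok
```

With this structure, "invalid without STRICT implies invalid with STRICT" follows directly from the fact that `ok and ...` can never turn False into True, which is the kind of fact a prover discharges almost automatically.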

Many other such properties are also easy to prove. The more difficult properties/models, or even full functional verification, can be implemented by more expert developers or even mathematicians.

I think the problem is also that OpenSSL devs, like the vast majority of devs, probably have no desire/intention/motivation/ability to do formal verification, otherwise you could even do this in the OpenSSL code base itself (although that is not ideal because verifying a program written in C requires more manual work than verifying it in a simpler language whose code extracts to C).

I'm also guessing that your budget estimate is an exaggeration, since miTLS and EverCrypt, although admittedly projects whose scope has not yet reached your ambitious goals (i.e. full functional verification of all layers in the stack), were probably done with a much smaller budget.

And it's not like you can't build on top of that and incrementally verify more properties over time, e.g. more layers of the stack or whatever.

You don't need a huge mega-project, just a starting point, and sufficient motivation.


> Well, sure, if you start with those premises, then I'm not surprised that you reach the conclusion that the goal is unachievable.

I didn't say unachievable. I said costly.

> First of all, if constructing a formal specification for PKIX is not possible, then that should be telling you that it either needs to be simplified, better specified or scrapped altogether for something better (the latter would require an extremely large transition period, I'm imagining, so the first two are much preferred in this situation).

https://xkcd.com/927/

The specs for the new thing would basically have to be written in Coq or similar. Even if you try real hard to keep it small and not make the... many mistakes made in PKIX's history... it would still be huge. And it would be even less accessible than PKIX already is.

> For example, you could verify that your function that verifies a CA certificate, when passed the STRICT flag, is really more strict than when not passed the STRICT flag.

That's just an argument for better testing. That's not implementation verification.

> This would eliminate the first bug in this security advisory and all other similar bugs in the same function call-chain.

Only if you thought to write that test to begin with. Writing a test for everything is... not really possible. SQLite3, one of the most tested codebases in the world, has a private testsuite that gets 100% branch coverage, and even that is not the same as testing every possible combination of branches.

> BTW, what I just said is really easy to specify, and I'm guessing it's also very easy to prove, since I'm guessing that the strict checks are just additional checks, while the normal checks are shared between the two verification modes.

It's not. The reason the strictness flag was added was that OpenSSL was historically less strict than the spec demanded. It turns out that when you're dealing with more than 30 years of history, you get kinks in the works. It wouldn't be different for whatever thing replaces PKIX.

> I think the problem is also that OpenSSL devs, like the vast majority of devs, probably have no desire/intention/motivation/ability to do formal verification, otherwise you could even do this in the OpenSSL code base itself [...]

You must be very popular at parties.

> I'm also guessing that your budget estimate is an exaggeration since miTLS and EverCrypt, [...]

Looking at miTLS, it only claims to be an implementation of TLS, not PKIX. Not surprising. EverCrypt is a cryptography library, not a PKIX library.


> That's just an argument for better testing. That's not implementation verification.

No, I'm not talking about testing, I'm talking about really basic formal verification:

forall (x: Certificate), verify_certificate(x, flags = 0) == invalid ==> verify_certificate(x, flags = STRICT) == invalid

This is trivial to specify and almost as trivial to prove to be correct for all inputs. Testing can't do that, no matter how good your testing.
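For illustration, here's roughly how that property might look in a proof assistant, under the hypothetical assumption that strict verification is the normal checks conjoined with extra ones (a Lean 4 sketch, not derived from OpenSSL's actual code):

```lean
-- Toy model: certificates are abstract, and strict verification is the
-- normal checks AND some additional strict-only checks.
variable {Certificate : Type}
variable (normalOk strictExtraOk : Certificate → Bool)

def verifyNormal (c : Certificate) : Bool := normalOk c
def verifyStrict (c : Certificate) : Bool := normalOk c && strictExtraOk c

-- "Any certificate rejected without STRICT is also rejected with STRICT."
theorem strict_is_stricter (c : Certificate) :
    verifyNormal normalOk c = false →
    verifyStrict normalOk strictExtraOk c = false := by
  intro h
  simp [verifyNormal] at h
  simp [verifyStrict, h]
```

The proof is a couple of rewrites precisely because the model bakes in "strict = normal plus extras"; the real work would be showing the implementation refines such a model.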

> Only if you thought to write that test to begin with. Writing a test for everything is... not really possible.

I agree, but I'm not talking about testing. I'm talking about formal verification.

> It's not. The reason the strictness flag was added was that OpenSSL was historically less strict than the spec demanded. It turns out that when you're dealing with more than 30 years of history, you get kinks in the works. It wouldn't be different for whatever thing replaces PKIX.

That's totally fine and wouldn't affect the verification of that function at all. This is a very simple property to verify and it would avoid this bug and all similar bugs in that function (even if that function calls other functions, no matter how large or complex they are).

Other functions also have such easy to verify properties, this is not hard at all to come up with (although sure, some more difficult properties might be harder to verify).

> You must be very popular at parties.

I didn't want to be dismissive of OpenSSL devs or other devs in general, I just find it frustrating that there are so many myths surrounding this topic, and a lot less education, interest and investment than I think there should be, nowadays.


You are confusing verification as in "certificate" with verification as in "theorem proving", and you are still assuming a formal description of what to verify (which I'll remind you: doesn't exist). And then you go on to talk about myths and uneducated and uninterested devs. Your approach is like tilting at windmills, and will achieve exactly as much.


If I understood you correctly, then I am not confusing those two things.

Maybe you haven't noticed but I actually wrote a theorem about a hypothetical verify_certificate() function in my previous comment. Maybe you also haven't noticed, but I didn't need a formal description of how certificate validation needs to be done to write that theorem.

And I assure you, if the implementation of verify_certificate() is anything except absolute garbage, it would be very easy to prove the theorem to be correct. I actually have some experience doing this, you know? I'm not just parroting something I read, I've proved code to be 100% correct multiple times using formal verification (i.e. theorem proving) tools. That is, according to certain basic and reasonable assumptions, of course, like e.g. the hardware is not faulty or buggy while running the code, and that the compiler itself is not buggy, which would be a lot more rare than a bug in my code -- and even then, note that compilers and CPUs can be (and have been) formally verified.

Maybe you don't agree with my approach but I think it's the most realistic one for a project such as this and I am absolutely confident it would be practical, would completely eliminate most (i.e. more than 50%, at least) existing bugs with minimal verification effort (which almost anyone would be capable of doing with minimal training), and would steadily become more and more bug-free with additional, incremental verification (i.e. theorem proving) and refactoring effort.


https://project-everest.github.io/ :

> Focusing on the HTTPS ecosystem, including components such as the TLS protocol and its underlying cryptographic algorithms, Project Everest began in 2016 aiming to build and deploy formally verified implementations of several of these components in the F* proof assistant.

> […] Code from HACL*, ValeCrypt and EverCrypt is deployed in several production systems, including Mozilla Firefox, Azure Confidential Consortium Framework, the Wireguard VPN, the upcoming Zinc crypto library for the Linux kernel, the MirageOS unikernel, the ElectionGuard electronic voting SDK, and in the Tezos and Concordium blockchains.

s2n is Amazon's TLS library, parts of which have been formally verified. https://en.wikipedia.org/wiki/S2n

IDK about a formally proven PKIX. https://www.google.com/search?q=formally+verified+pkix lists a few things.

A formally verified stack for Certificate Transparency would be a good way to secure key distribution (and revocation); there we currently depend upon a TLS library (typically OpenSSL), or GPG + HKP (HTTP Keyserver Protocol).

Fuzzing on actual hardware - with stochastic things that persist bits between points in spacetime - is a different thing.


Funny, the first hit for that search you linked is... my comment above. The "few things" other than that are alternatives to PKIX, which is fine and good, but PKIX will be with us for a long time yet. As for Everest, it jibes with what I wrote above, that verified implementations of TLS are feasible (Everest also implements QUIC and similar), but -surprise!- not listed is PKIX.

I know, it sounds crazy, really crazy, but PKIX is much bigger than TLS. It's big. It's just big.

The crypto, you can verify. The session and presentation layers, you can verify. Heck, maybe you can verify your app. PKIX implementations of course can be verified in principle, but in fact it would require a serious amount of resources -- it would be really expensive. I hope someone does it, to be sure.

I suppose the first step would be to come up with a small profile of PKIX that's just enough for the WebPKI. Though don't be fooled, that's not really enough because people do use "mTLS" and they do use PKINIT, and they do use IPsec (mostly just for remote access) with user certificates, and DoD has special needs and they're not the only ones. But a small profile would be a start -- a formal specification for that is within the realm of the achievable in reasonably short order, though still, it's not small.


Both a gap and an opportunity; someone like an agency or a FAANG with a budget for something like this might do well to invest in the formal-methods talent pipeline, and to interface very technically with e.g. Everest about PKIX as a core component in need of formal methods.

"The SSL landscape: a thorough analysis of the X.509 PKI using active and passive measurements" (2011) ... "Analysis of the HTTPS certificate ecosystem" (2013) https://scholar.google.com/scholar?oi=bibs&hl=en&cites=16545...

TIL about "Frankencerts": Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations (2014) https://scholar.google.com/scholar?cites=3525044230307445257... :

> Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations.

> Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc.
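The technique is simple enough to sketch (toy validators and cert representation below are hypothetical, nothing resembling the real OpenSSL/GnuTLS APIs):

```python
# Frankencert-style differential testing, sketched with toy validators:
# build synthetic certs by randomly recombining fields of real certs,
# then flag any cert on which two implementations disagree.
import random

def make_frankencert(real_certs, rng):
    # Draw each field from a randomly chosen real certificate, yielding
    # unusual combinations of extensions and constraints.
    fields = {k for cert in real_certs for k in cert}
    return {f: rng.choice(real_certs).get(f) for f in fields}

def differential_test(validators, certs):
    # Disagreement between implementations is the bug oracle: no formal
    # spec or expected output is needed, just two or more validators.
    return [c for c in certs if len({v(c) for v in validators}) > 1]
```

The appeal is that the oracle is free: any discrepancy means at least one implementation is wrong, even though you never said which answer is correct.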

W3C ld-signatures / Linked Data Proofs, and MerkleProof2017: https://w3c-ccg.github.io/lds-merkleproof2017/

"Linked Data Cryptographic Suite Registry" https://w3c-ccg.github.io/ld-cryptosuite-registry/

ld-proofs: https://w3c-ccg.github.io/ld-proofs/

W3C DID: Decentralized Identifiers don't solve for all of PKIX (x.509)?

"W3C DID x.509" https://www.google.com/search?q=w3c+did+x509


Thanks for the link about frankencerts!



