
EverCrypt, a cryptographic library that is provably secure against known attacks - pseudolus
https://www.quantamagazine.org/how-the-evercrypt-library-creates-hacker-proof-cryptography-20190402/
======
protz
Several comments note that the headline is perhaps a little misleading; I
agree, and I don't think any of us (I'm one of the cited people in the
article) would've made such a strong claim.

Ignoring for a moment the fact that this is an alpha release (and that as
such, a handful of minor proofs, e.g. mundane facts related to endianness
conversions, have yet to be completed), there are several limitations of our
work.

\- Our tools themselves are not verified, so there could be a bug in F*, Z3,
or the KreMLin F*-to-C compiler; or a bug in the C compiler itself (unless
one uses a constant-time fork of CompCert). We consider most of these to be
theoretically unpleasant, but still an acceptable tradeoff.

\- Specifications are extremely hard to get right: one might introduce a typo
when reading an RFC, or, say, state an endianness conversion wrong. So, even
before implementing an optimized version of anything, we audit and test the
spec first. This is usually a good sanity check, but certainly not bulletproof
(note that this is a general problem of software verification, not specific to
us).

\- As noted near the end of the article, our models do not account for Spectre
and Meltdown; and there might be new classes of attacks that we can't guard
against, for the simple reason that they haven't been found yet.
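
The "audit and test the spec" point above can be sketched concretely. This is an illustrative Rust analogue (not EverCrypt's actual F* code; the function names here are made up): the specification is an obviously-correct arithmetic definition, and the optimized implementation is checked against it before anything else is trusted.

```rust
// A tiny analogue of "test the spec first": a simple, transparent
// specification of big-endian encoding, checked against an optimized
// implementation on a few edge cases.

/// Specification: arithmetic definition of big-endian byte order.
fn be_bytes_spec(x: u32) -> [u8; 4] {
    [
        (x >> 24) as u8,
        (x >> 16) as u8,
        (x >> 8) as u8,
        x as u8,
    ]
}

/// "Optimized" implementation under test: the standard-library intrinsic.
fn be_bytes_impl(x: u32) -> [u8; 4] {
    x.to_be_bytes()
}

fn main() {
    // Sanity-check the spec against the implementation on edge cases.
    for &x in &[0u32, 1, 0xDEADBEEF, u32::MAX] {
        assert_eq!(be_bytes_spec(x), be_bytes_impl(x));
    }
    println!("spec and implementation agree");
}
```

Testing like this doesn't prove the spec matches the RFC, but it catches the "stated an endianness conversion wrong" class of mistake cheaply.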

As to whether this is an attempt to get free security audits, I guess we
weren't intentionally thinking of it! But we certainly would love it if people
could test the code and give us feedback on what to improve to make this an
even more compelling choice for whoever needs a good cryptographic library.
I've written a bit about this here:
[https://jonathan.protzenko.fr/2019/04/02/evercrypt-alpha1.html](https://jonathan.protzenko.fr/2019/04/02/evercrypt-alpha1.html)

~~~
avmich
Thank you, this looks like a great advancement.

> there might be new classes of attacks that we can't guard against, for the
> simple reason that they haven't been found yet.

I also puzzled over this phrase in the article. Why does "this class of attack
hasn't been found" necessarily lead to "we can't guard against this class of
attack"?

I guess we need more definitions here to clarify what we're talking about...
but you're doing that already, so maybe this statement can be made more
precise elsewhere?

~~~
jon_richards
>Why does "this class of attack hasn't been found" necessarily lead to "we
can't guard against this class of attack"?

One thing I can think of is the practice of combining multiple random number
generators to produce a number that is random as long as any one generator is
random. This guards against attacks on a specific generator even if those
attacks are unknown. The _class of attack_ is known, however, and protected
against.
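
The combining trick can be sketched as XOR-ing independent sources. A minimal Rust illustration (the toy generators below are stand-ins for real entropy sources, not something to use in practice):

```rust
// XOR-combine independent generators: the result is unpredictable as long
// as at least one source is. (Toy generators for illustration only.)

/// A minimal xorshift generator (one "source").
struct XorShift(u64);
impl XorShift {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
}

/// A minimal linear congruential generator (a second, unrelated source).
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

/// If either source is uniform and independent of the other, the XOR of
/// the two outputs is uniform -- an attacker must break *both* sources.
fn combined(a: &mut XorShift, b: &mut Lcg) -> u64 {
    a.next() ^ b.next()
}

fn main() {
    let mut a = XorShift(0x1234_5678_9abc_def0);
    let mut b = Lcg(42);
    println!("{:016x}", combined(&mut a, &mut b));
}
```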

------
mahemm
Moving past the silly headline, there is actually a pretty substantial
achievement here. Using formal verification tools, the team proved the
following properties for the library:

* Memory safety (no buffer overruns etc)

* Type safety (all compiler-visible interfaces/abstractions used as per spec)

* Functional correctness (all the crypto implementations are faithful to their algorithms)

* Side-channel resistance (all crypto is constant time)

This has been confirmed by fallible tools and is checked against human-made
models which are also fallible, but this code is still likely to be about as
close to bug-free as currently possible.
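
To make the side-channel bullet concrete, here is the classic example of the property being proven, sketched in Rust (this is an illustration of the constant-time idea, not EverCrypt's verified code): a naive `==` on secrets can return early at the first mismatch, leaking how many leading bytes matched through timing.

```rust
// Constant-time equality: always touch every byte, accumulating any
// differences with XOR/OR instead of branching on secret data.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // lengths are usually public, so this branch is fine
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // no early exit, no secret-dependent branch
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret-tag", b"secret-tag"));
    assert!(!ct_eq(b"secret-tag", b"secret-taG"));
    println!("ok");
}
```

The hard part, and what the verification delivers, is proving that *every* operation in the library has this shape, not just the comparisons.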

I think Bhargavan's work in this area is the future of cryptographic coding
in the medium term, and likely of all security-sensitive code in the long term.

~~~
SilasX
So, stupid question...

>Functional correctness (all the crypto implementations are faithful to their
algorithms)

You have to translate the human-written spec into the computer code that is
used to validate the implementation's code, right? And this point just means
that _this_ aspect is proven, yes?

So doesn't that mean it's not proving an absence of errors in translating the
human-readable version of the spec into the validator code?

~~~
krastanov
Usually there is a little bit more to it: the human-written spec (the
mathematical description of the algorithm) is itself written in a format that
can be validated. After the spec has been validated (i.e. a mathematical
proof has been found that the "platonic" version of the algorithm is
"correct"), only then do you do the second round of validation, which
confirms that the algorithm implementation is indeed equivalent to the
platonic version.

It is validators all the way down. And at the very bottom there is a base
piece of theorem-proving software that is only a few hundred lines of code
that has been validated only by humans and self tests.

~~~
Radle
"And at the very bottom there is a base piece of theorem-proving software that
is only a few hundred lines of code that has been validated only by humans and
self tests."

Is it possible for that part of the system to prove its own correctness?

The obvious danger with such a proof is that the code may think it's correct
while being incorrect in thinking it's correct.

But if I've learned one thing, it's that mathematicians can sometimes be
really sly foxes.

~~~
taktoa
It could prove its own correctness, but there is still the possibility that it
has a bug that causes its proof of correctness to be meaningless :-)

------
woofcat
via: [https://github.com/project-everest/hacl-star/blob/fstar-master/README.EverCrypt.md](https://github.com/project-everest/hacl-star/blob/fstar-master/README.EverCrypt.md)

>EverCrypt is a work in progress -- if you're seriously contemplating using
this code in a real system, get in touch with us first!

~~~
hackinthebochs
I have only proved it correct, not tried it

~~~
wahern
For the lucky ten thousand: [https://www-cs-faculty.stanford.edu/~knuth/faq.html](https://www-cs-faculty.stanford.edu/~knuth/faq.html) (scroll to "What's the exact citation of
your oft-cited comment about bugs?")

Meta citation: [https://xkcd.com/1053/](https://xkcd.com/1053/)

------
pfortuny
Well, as long as Intel does not devise a chip which chooses what to keep and
what not to keep in the cache depending on the number of jumps executed and
the name of the calling function... (just by way of example).

Also "optimized advanced heuristic multithreading" (for instance)...

Anyway: an astounding feat, to be honest.

~~~
CodeWriter23
I was thinking the only true way to prevent side channel information
disclosure is to ensure x microseconds are spent encrypting/decrypting each
byte in a message and using an external timer source as a reference clock to
enforce the constant time interval.
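
That idea can be sketched as padding every operation out to a fixed deadline on a monotonic clock. A minimal Rust illustration (my own sketch of the commenter's suggestion, not anything from EverCrypt): it hides *how long* the work took, at the cost of capping throughput at the budget.

```rust
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Run `work`, then return only after at least `budget` has elapsed,
/// so the caller-visible duration is (roughly) independent of the input.
fn fixed_time<T>(budget: Duration, work: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let result = work();
    let elapsed = start.elapsed();
    if elapsed < budget {
        sleep(budget - elapsed);
    }
    result
}

fn main() {
    // Pad a trivial computation out to a 10 ms budget.
    let out = fixed_time(Duration::from_millis(10), || 2 + 2);
    println!("{}", out);
}
```

Note this only addresses the timing channel as seen by the caller; it does nothing about what the work leaves behind in the cache, or about power/EM emissions.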

~~~
pfortuny
Yes but you have to be sure also that nothing at all remains in the cache...

~~~
rrobukef
And EM characteristics are fixed.

------
4ensic
Reminds me of Steve Ballmer touting Windows XP as the "most secure operating
system ever" \- on the day it was released. Claims that code is unbreakable
or hacker-proof are big red flags.

~~~
rhokstar
Very, very true! Proclaiming "Can't Be Hacked" is an invitation to the world.

~~~
dmarlow
Correct. And claiming that it's merely mildly secure also invites hackers.
Damned if you do, damned if you don't.

~~~
close04
There must be something in between right? Like... "secure"? Claiming "hacker-
proof" isn't just an invitation, it's a challenge. Nothing triggers people
like telling them something is %-proof.

The mathematical proof might be 100% solid and hacker proof. The
implementation will probably never be.

~~~
fuklief
> The mathematical proof might be 100% solid and hacker proof. The
> implementation will probably never be.

I agree, I see a few ways that it could go wrong:

\- Code is proven correct against the specification, but the specification is
wrong/buggy

\- Source code is correct/secure, but the compiler is overzealous and botches
the security guarantees.

~~~
gargravarr
\- Someone just outright botches the implementation. Heartbleed sat in
OpenSSL for years before anyone noticed.

------
bluGill
Protzenko noted there will always be attacks that no one has thought of
before. EverCrypt can’t be proven secure against those, if only for the simple
reason that no one knows what they will be.

I'm glad those behind this know their limits.

------
gok
The code that their F*->C compiler (KreMLin) emits is…interesting…

[https://github.com/project-everest/hacl-star/blob/evercrypt-v0.1%2B/dist/compact/Hacl_AES.c](https://github.com/project-everest/hacl-star/blob/evercrypt-v0.1%2B/dist/compact/Hacl_AES.c)

~~~
basementcat
It is written in a very unambiguous way. Pretty much exactly what you might
expect.

~~~
gok
The body of `Crypto_Symmetric_AES_multiply` is unambiguous to you?

~~~
coldacid
It's certainly unambiguous! It's just not idiomatic at all.

------
cmurf
Part of effective trolling is to actually get some credible folks to try and
beat what you've created. I think the creators know saying things doesn't make
it true. What makes it true is when you stick your neck out a little, troll a
little, get people to take a few swings, and everyone sees they just keep on
missing the mark. Or perhaps they land a punch, EverCrypt gets a bloody nose,
and we see whether and how they dust themselves off and try again.

I translate "hacker proof" into two things: this is something we all need; and
we're ready, give us your best shot!

------
pseudolus
I'm guessing that ultimately it will be shown to be "hacker-resistant" and not
"hacker-proof". Big difference.

------
endymi0n
“I love cryptography, it tells me what part of the system not to bother
attacking” -- Dr Andrew Gross.

------
petra
For those who understand formal verification: how hard is it to specify the
requirement "no buffer overruns"? And is that specification generalizable?

~~~
Datenstrom
The Rust compiler does this. No out-of-bounds accesses and therefore no buffer
overflows. So yes it has been generalized.

~~~
fpgaminer
The Rust compiler does some limited "static analysis"* to determine if an
array access is guaranteed to always be within bounds. But it doesn't do this
universally. In other words, it's easy to write a program in safe Rust code
that will attempt out of bounds accesses.

The saving grace is that in those cases the compiler just inserts a bounds
check which panics at run-time.

That's a bit different from a formally verified OOB safe program which
wouldn't require any compiler-injected bounds checking. All accesses are
provably safe either because they are naturally so, or because the code
explicitly performed a bounds check.

* IIRC what Rust's compiler actually does is just inject bounds checks on all array accesses and then depends on the underlying optimizations to remove them when possible. That's certainly a form of static analysis, but perhaps some wouldn't consider it as such.
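
The distinction is easy to demonstrate: the following safe Rust compiles without complaint, and the protection kicks in only at run time.

```rust
// Safe Rust happily compiles an out-of-bounds access; the guarantee is a
// run-time bounds check (a panic), not a compile-time proof of in-boundedness.

fn main() {
    let v = vec![1, 2, 3];

    // `get` surfaces the bounds check explicitly and recoverably:
    assert_eq!(v.get(1), Some(&2));
    assert_eq!(v.get(10), None);

    // Plain indexing compiles fine but panics at run time when OOB:
    let oob = std::panic::catch_unwind(|| v[10]);
    assert!(oob.is_err());

    println!("ok");
}
```

A formally verified program, by contrast, would carry a proof that the index `10` case can never be reached, so no check (and no panic path) is needed.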

~~~
Datenstrom
Very true. Wouldn't it be possible to solve it to that standard using
dependent types? I think I read that those are on a distant wishful thinking
roadmap for Rust in some RFC. I really hope they get added some day, I find
myself wanting them all the time.

------
papaman
Nice article! As a web developer though, I did a quick "View Page Source" on
my browser and I saw a complete mess... I'm sorry Quanta Magazine, this is not
good for your business.

They really need to rethink their website: a very low PageSpeed Insights
score, a very slow site, and HTML/CSS that is just a mess.

I understand that Cloudflare or their minify/cache system is responsible for
a lot of this mess, but it will definitely affect their Google results. For
example:

In the page source, search for the phrase "Programmers are human, but
mathematics is immortal". Do you see that it appears twice? JS is doing some
crazy things, and this affects the actual HTML output.

The blog seems to be built with React. I don't understand why they need to
reinvent the wheel and build a blog from scratch. Just use WordPress.

Please developers. You can do better.

~~~
papaman
I suppose all those downvotes are from people who don't understand web
development or the semantic web... It's really sad.

