
Another Look at Provable Security - jcr
http://cacr.uwaterloo.ca/~ajmeneze/anotherlook/
======
tptacek
Some things that are important to remember:

(a) "Proof of security" is a term of art that doesn't mean what it sounds
like. In cryptography, you "prove" specific things about an algorithm, often
in the context of a "game" with one or more adversaries. You need to know more
than that an algorithm has a proof; you also need to know which proofs those
are, and how they apply to your application.

(b) Proofs generally apply to algorithms and low-level constructions. Once you
start composing proven primitives into whole cryptosystems, typically, all
bets are off. Proofs of entire protocols are an active research area.

(c) A proof will almost always apply to the abstract, mathematical notion of
an algorithm or construction, _not to the code that might implement it_.

I apologize in advance at how little this has to do with the actual resource
linked here; I'm just sort of predicting what the thread on HN, if any, would
look like.
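
To make (a) concrete, here is a toy sketch of the kind of "game" such proofs quantify over. Everything here is invented for illustration (no real library API): an IND-CPA-style game in which the challenger encrypts one of two adversary-chosen messages, and a "proof of security" would bound every efficient adversary's advantage at guessing which. A deterministic scheme visibly fails:

```python
import secrets

# Toy sketch of the IND-CPA "game" (all names invented here, not a real
# library API). The challenger picks a key; the adversary, given a
# chosen-plaintext oracle, submits two messages and must guess which one
# the challenge ciphertext encrypts.

def encrypt_deterministic(key, msg):
    # A bad "cipher": XOR with a fixed key, reused on every call.
    return bytes(m ^ k for m, k in zip(msg, key))

def ind_cpa_game(encrypt, adversary):
    key = secrets.token_bytes(16)
    oracle = lambda msg: encrypt(key, msg)      # chosen-plaintext access
    m0, m1 = adversary.choose(oracle)           # two equal-length messages
    b = secrets.randbits(1)
    challenge = encrypt(key, (m0, m1)[b])
    return adversary.guess(oracle, challenge) == b

class ReplayAdversary:
    # Breaks any deterministic scheme: re-query the oracle on m0 and
    # compare against the challenge ciphertext.
    def choose(self, oracle):
        self.m0, self.m1 = b"A" * 16, b"B" * 16
        return self.m0, self.m1

    def guess(self, oracle, challenge):
        return 0 if oracle(self.m0) == challenge else 1

wins = sum(ind_cpa_game(encrypt_deterministic, ReplayAdversary())
           for _ in range(200))
print(wins)  # 200: deterministic encryption loses this game every time
```

The point of the sketch is exactly (a): a proof only says something about a specific game like this one, against a specific class of adversaries, under specific assumptions.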

~~~
black_knight
Regarding (c), if your program is written in, say, Agda[1], the code can come with a proof that it has the mathematical properties you want. Or rather, your implementation might simply be the proof.

[1]
[https://en.wikipedia.org/wiki/Agda_(programming_language)](https://en.wikipedia.org/wiki/Agda_\(programming_language\))
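
As a toy illustration of that idea (sketched in Lean rather than Agda, and entirely hypothetical code): a correctness property ships as a machine-checked theorem alongside the function, rather than as a test.

```lean
-- Toy "code that carries its proof": a one-time-pad-style mask over Bool,
-- with unmasking correctness proved by exhaustive case analysis.
def mask (m k : Bool) : Bool := Bool.xor m k

theorem mask_correct (m k : Bool) : mask (mask m k) k = m := by
  cases m <;> cases k <;> rfl
```

Note that this proves a property of the abstract function only; tptacek's caveats (a) and (b) about which property is proved, and how properties compose, still apply.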

~~~
cvwright
The folks at Galois have a language called Cryptol that can be used for
similar things with hardware.

[https://galois.com/project/cryptol/](https://galois.com/project/cryptol/)

------
pron
This is what's known as "searching under the lamppost". Mathematicians will sometimes seek to prove only what they suspect will be rather easy to prove, so they frame the problem in such a way that the perceived difficulty is exactly the part that is easily provable, thereby "solving" the problem. They then shove everything else (the parts that are hard to prove) into their assumptions (which are sometimes very well hidden), and declare the problem mathematically solved!

Figuring out whether the research is applicable in practice then becomes a
game not of validating the proof, but of uncovering all the assumptions.

I see it in the PL community, too, where, rather than starting by analyzing where software bugs actually lie and how expensive each kind of bug turns out to be, and _then_ seeing how the most expensive bugs can be prevented, they start with a programming model that is very easy to prove things about, construct programs that are easily expressed in the model, and then claim they've found a provably bug-free way to program.

The question of how many real-world programs can be easily transformed into
the model, at what costs, and whether it is the expensive bugs that are
actually prevented by the proof or rather the trivial ones is left as an
exercise to the reader.

Of course, I don't blame them. If you don't start with what you strongly
suspect you'll be able to prove, you might not get any publishable results at
all.

~~~
nullc
Your description sounds like intent, but it's important to keep in mind there
is a selection force here.

Ten earnest mathematicians try. Nine fail. The tenth structures the problem in such a way that the solution is meaningless, but not obviously so -- his work gets published. But he doesn't know, and certainly never intended, that he produced a meaningless proof.

When useful proofs are harder to construct than trivial proofs are to detect as not useful, we should expect lots of the latter, even just by chance.
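
That selection force is easy to see in a back-of-the-envelope simulation (the probabilities below are invented purely for illustration, not measured from any literature):

```python
import random

# Toy selection-effect model: each attempt rarely yields a genuinely
# useful proof, more often yields a publishable-but-vacuous one (hidden
# assumptions), and review only sometimes catches the vacuousness.
random.seed(0)

P_USEFUL = 0.05    # assumed chance of a genuinely useful proof
P_VACUOUS = 0.30   # assumed chance of a publishable-but-vacuous proof
P_CAUGHT = 0.20    # assumed chance review detects the vacuousness

published_useful = published_vacuous = 0
for _ in range(100_000):
    r = random.random()
    if r < P_USEFUL:
        published_useful += 1
    elif r < P_USEFUL + P_VACUOUS and random.random() > P_CAUGHT:
        published_vacuous += 1

# No author intended a vacuous result, yet under these assumptions most
# of what gets published is vacuous.
print(published_useful, published_vacuous)
```

The exact numbers don't matter; the structural point is that whenever vacuous-but-publishable results are easier to produce than useful ones and hard to filter, the published record skews toward them with no bad intent anywhere.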

~~~
pron
Right. It's not a conspiracy -- it's just what happens :)

Also, I don't think that such work is meaningless in any way. Any discovery in
math is meaningful in itself. It just isn't as applicable in real life as
advertised (or the application is much more difficult than advertised).

------
nickpsecurity
It's better to read the papers where they attempt provable security, and then the papers published years later reviewing that work or those methods. Certain methods worked well, some failed over time, and so on. The lessons-learned and retrospective papers are where I find the most wisdom in this field. Naturally.

The papers on Orange Book-era B3 and A1 class systems, along with the Common Criteria EAL6/EAL7 works, taught me the most. They all required a clear security policy and a strong argument that the design/implementation embodies it. The systems are modular, layered, and internally simple; use a structured programming style and safer subsets of programming languages; are often state machines; account for failure modes in everything; and trace each potential execution flow of the system. They also add, where possible, static analysis and covert channel analysis. Some did formal proofs, but the above gets you 90-99% of the way.

So, the idea is that you design the system or algorithm in such a way that you can easily demonstrate it has certain properties. It was hard work. However, the LOCK project, even with proofs, only cost around 30-40% extra on top of a solid development process. Another lesson is that the specs, implementation, and proofs/arguments should always evolve side by side, with plenty of communication between team members. This spots problems early. Analysis of various phases showed the above methods caught all kinds of reliability and security flaws in actual systems, with many performing excellently during years of pentesting. Proofs themselves, on the other hand, should only be used if the property is easy to model mathematically. Otherwise, you expend a lot of effort with little gained. The final lesson came from the TCB concept: focus all your provable security on mechanisms that are flexible and simple, so the cost is spread over all the projects benefiting from it. Security and separation kernels did this to a degree, but I think the CompCert compiler is the best modern example of ROI on high-assurance development.
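
A minimal sketch (all names invented here) of the explicit state-machine style those systems used: every (state, event) pair is enumerated so each execution flow can be traced, anything unanticipated fails closed, and properties can be checked by walking the whole table.

```python
# Toy access-control state machine in the high-assurance style: the
# transition table is the complete specification of behavior.
TRANSITIONS = {
    ("LOCKED",   "present_credential"): "CHECKING",
    ("CHECKING", "credential_ok"):      "UNLOCKED",
    ("CHECKING", "credential_bad"):     "LOCKED",
    ("UNLOCKED", "timeout"):            "LOCKED",
    ("UNLOCKED", "lock"):               "LOCKED",
}

def step(state, event):
    # Fail closed: an unanticipated (state, event) pair drops to LOCKED
    # instead of raising or silently continuing.
    return TRANSITIONS.get((state, event), "LOCKED")

# Because every flow is enumerated, checking a security property over all
# of them is trivial: no single event takes LOCKED straight to UNLOCKED.
assert all(step("LOCKED", ev) != "UNLOCKED" for (_, ev) in TRANSITIONS)
```

This is the "easy to demonstrate" part of the approach: the design is shaped so the security argument is a finite check over the table rather than a proof about arbitrary code.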

~~~
angry_octet
Do you have any links/references to the papers you mention?

~~~
nickpsecurity
Most of the best papers are behind paywalls, unfortunately. I do have a small sampling on hand, from then and now, that illustrates some of the high-assurance methods.

[http://www.iwia.org/2005/Schell2005.PDF](http://www.iwia.org/2005/Schell2005.PDF)
Describes GEMSOS security kernel (A1 class) with nice graphics to illustrate
the approach.

[http://www.cse.psu.edu/~trj1/cse543-f06/papers/vax_vmm.pdf](http://www.cse.psu.edu/~trj1/cse543-f06/papers/vax_vmm.pdf)
VAX Security Kernel (A1 class). Pay special attention to the layering of the
kernel, coding strategy, and assurance section.

[http://cisr.nps.edu/downloads/sdm/SEKE08-Shaffer.pdf](http://cisr.nps.edu/downloads/sdm/SEKE08-Shaffer.pdf)
Shaffer's Alloy technique elaborates a bit on how state machines are useful in security arguments, even without full formal verification.

[http://cryptosmith.org/docs/Lock-eff-acm.pdf](http://cryptosmith.org/docs/Lock-eff-acm.pdf) Costs and effectiveness of assurance activities for the LOCK system (A1 class).

[http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf](http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf) Modern effort (seL4) to exceed EAL7 by bringing verification to the code itself. Note that some A1 projects had close to a 1-to-1 spec-to-implementation mapping, so full formal verification might be replaced by low-level, verified specs and matching code verified by inspection, etc.

[http://www.ccs.neu.edu/home/pete/acl206/slides/hardin.pdf](http://www.ccs.neu.edu/home/pete/acl206/slides/hardin.pdf)
Rockwell Collins's EAL7 framework for the AAMP7G processor and Cryptol integration.

[http://www.crash-safe.org/assets/verified-ifc-long-draft-201...](http://www.crash-safe.org/assets/verified-ifc-long-draft-2013-11-10.pdf) Previous work proved kernels supported MLS or MILS models. This is better, given it's proving that a hardware mechanism both enforces a security requirement and meets functional requirements. Significant, given that SAFE (see crash-safe.org) makes inherently secure apps easier to write.

[http://goto.ucsd.edu/quark/usenix12.pdf](http://goto.ucsd.edu/quark/usenix12.pdf)
Proving security properties of a web browser. Typically, you know the work is legit if it spends time talking about the flaws formal verification caught. I don't see much of that here, but they might have omitted it in order to publish what worked instead.

[http://www.cse.chalmers.se/~andrei/jsac.pdf](http://www.cse.chalmers.se/~andrei/jsac.pdf)
A significant amount of recent work builds security into the language itself.

[http://web.archive.org/web/20130718103347/http://cygnacom.co...](http://web.archive.org/web/20130718103347/http://cygnacom.com/labs/cc_assurance_index/CCinHTML/PART3/PART36.HTM)
Cygnacom's breakdown of Evaluation Assurance Levels is pretty nice. It shows the extra efforts put in and what you get.

[http://pastebin.com/y3PufJ0V](http://pastebin.com/y3PufJ0V) My framework complements that, and is a reminder that it must be applied to every layer down to the gates.

So, although my sharing is limited by paywalls, I hope you find these
interesting and can see the different perspectives of highly assured security.
There have been numerous successes in independent testing, pen-testing, and
field use. Also remember that this is _only_ provable security rather than
verification of properties in general. The field of verification and
correctness assurance has progressed quite a bit (see CompCert, SPARK,
Esterel, etc). Have fun.

~~~
angry_octet
Very interesting reading. The whole notion of the AAMP7G processor is
fascinating. Made solely for government in tiny quantities, in tightly
controlled facilities, probably rad hard, resistant to power analysis, etc.

In this ppt
[http://usacac.army.mil/cac2/cew/repository/presentations/3_H...](http://usacac.army.mil/cac2/cew/repository/presentations/3_Hatcliff-SAnToS-Ft-Leavenworth-2008-07-09.pdf)

there is mention of an interesting tool/language called Spark Ada:
[http://www.spark-2014.org/](http://www.spark-2014.org/)

It makes me think that maybe my next study topic should be Spark or Esterel
instead of Rust/Golang/some JS thing.

~~~
nickpsecurity
Fine to learn for personal use, but other stuff is better if you're job hunting. If you want to learn, then skip Esterel (big $$$) and learn Ada with AdaCore's free GNAT IDE. It helps to understand why it does certain things (e.g., the foundations). Get that here:

[http://www.adacore.com/knowledge/technical-papers/safe-secur...](http://www.adacore.com/knowledge/technical-papers/safe-secure/)

AdaCore's "Gems" and other resources will help you plenty. SPARK is a variant of Ada designed to be easy to analyze with mathematical techniques, with the goal of automatically proving the absence of certain bugs in over 95% of the code. The article below has links to many resources and demonstrations. Tokeneer further demos Praxis's Correct by Construction process, which is also cool.

[https://en.wikipedia.org/wiki/SPARK_%28programming_language%...](https://en.wikipedia.org/wiki/SPARK_%28programming_language%29)

Here's a nice example of applying it to a DNS server to prove the absence of single-packet vulnerabilities that exploit common coding issues:

[http://ironsides.martincarlisle.com/](http://ironsides.martincarlisle.com/)

I'll end with a few other goodies. Adam Chlipala does great work in teaching
and making formal methods lightweight. His book below describes such methods.
More links in comments.

[https://news.ycombinator.com/item?id=9862127](https://news.ycombinator.com/item?id=9862127)

Another high assurance processor below. Verified for correctness rather than
security although one helps the other a bit.

[http://scidok.sulb.uni-saarland.de/volltexte/2006/633/pdf/Di...](http://scidok.sulb.uni-saarland.de/volltexte/2006/633/pdf/Dissertation_9032_Dali_Iako_2006.pdf)

Sandia's Secure Processor (SSP), also called Score, uses a highly assured design and implementation, resulting in a Java processor immune to obvious code injection techniques and radiation-related issues. Their SEED architecture is really nice. The modern stuff is paywalled, but this old paper tells plenty. The thing it leaves out, IIRC, is that they use an ML-based hardware model to do equivalence checking between the executable specification, the hardware spec (in ML), and the actual hardware HDL. It's already in ASICs on rad-hard processes. Availability of the ASSET tools or SSP/Score is unknown.

ftp://ftp.cs.kent.ac.uk/people/staff/phw/.old-1999/tmp2/443-cpa2007-wickstrom.pdf

Happy reading and bug hunting. :)

------
zekevermillion
Very interesting. How does one get comfortable relying on cryptography that uses a method lacking provable security? By simulating a proof based on actual use?

~~~
EthanHeilman
Two questions that I find myself asking:

1\. Are proofs effective at judging the security of a protocol? For example, many proofs rely on the random oracle model, yet the random oracle model is problematic. How can we construct convincing proofs?

2\. Are there better measures of security? For example, crypto contests in which people try to break other people's algorithms. Very little systems security relies on proofs (systems security is also almost always the weakest link).
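
For readers unfamiliar with the model in question 1: a random oracle is an idealized hash, a truly random function that can be simulated by lazy sampling. A minimal sketch (hypothetical class, not a library API):

```python
import secrets

# The random oracle model idealizes a hash as a truly random function,
# sampled lazily: a fresh uniform output for each new query, and the
# same answer on repeated queries.
class RandomOracle:
    def __init__(self, out_len=32):
        self.table = {}        # memoized query -> output
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = secrets.token_bytes(self.out_len)
        return self.table[x]

H = RandomOracle()
assert H.query(b"hello") == H.query(b"hello")   # consistent on repeats
```

The known objection is that real hash functions like SHA-256 are fixed, public algorithms, not random functions, so a proof in this model need not transfer to any concrete instantiation.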

My personal belief is: 1 = yes, 2 = maybe, and it is worth developing alternative methods since they can be used in tandem with proofs.

~~~
tptacek
What's an example of a protocol that has been well-served by formal methods
_prior to 2015_?

Contests have fared well at selecting algorithms, but (I'm prepared to be smacked back for this) ad hoc methods have done OK with algorithms as well --- you might say MD5 is an exception, but MD5 didn't have much competition when it was "selected". You might also say RC4, but RC4's history is very complicated.

~~~
EthanHeilman
Not sure what you mean by "well-served"? In terms of design, I've seen attempts at proofs discover attacks that likely would not otherwise have been found during development. In that case, the proof served the designers in building a more secure protocol. Some of the early Kerberos work could have benefited from making the designers prove, or at the very least justify, their claims (say, the use of xor fan-folding as a one-way function).

Does a proof increase your confidence in an algorithm? As someone who worked on MD6's proof of differential resistance, it certainly increased my confidence in the algorithm's ability to resist certain types of attack.

By ad hoc, do you mean people selecting algorithms without a contest? If so, I would counter with the following incomplete list of broken ad hoc standards: DTD-1, FEAL, WEP, A5/1, A5/2, KASUMI, GOST 28147-89, Cryptomeria cipher, CSS, SHA-0.

The crypto that has held up has generally stayed in use, and most of what hasn't held up we no longer use, so we may be biased toward remembering the good ad hoc crypto and forgetting the bad.

~~~
tptacek
People have used formal methods to find flaws in protocols, but I'm unaware of
a protocol that is assured formally in any meaningful way. Certainly not TLS
or IPSEC. I guess I'm asking, is there a protocol you feel more comfortable
working with due to formal assurance?

------
doe88
There is a short discussion and links to their papers toward the bottom of the
second section of this Wikipedia entry
[https://en.wikipedia.org/wiki/Provable_security](https://en.wikipedia.org/wiki/Provable_security)

------
hamsterdame
I would like to note that the chapter "Another Look At Automated Theorem-Proving II" seems quite ill-informed, and very hostile, to me.

For instance, to my knowledge, the hashed ElGamal proof in EasyCrypt checks the whole proof, not just the hops (if I understood Remark 2 correctly).

------
sjudson
I tend to see Menezes and Koblitz as having three different arguments against
the common focus on provable security. In roughly increasing order of
importance.

(1) "Provable security" is a misleading term. Those well versed in the field understand that it is a computational proof about a mathematical object, not in any way a statement of the practical security of an algorithm when implemented in computer code. However, many people misunderstand it to mean "unbreakable" or "perfect". Menezes and Koblitz see it as too frequently acting as a marketing gimmick, something that allows vendors trying to sell security software to say "it's based on provably secure algorithms" without anyone understanding what that means. To them, the misleading quality of the term outweighs its value. I generally disagree with them on this point, not because the point isn't well taken, but because it is far too common for phrases from academic discourse to be misunderstood outside the specific field (e.g., teleportation in quantum mechanics) for such a semantic argument to carry any weight in the specific case of provable security.

(2) Provable security proofs have frequent gaps, or their authors do not recognize important assumptions they are operating under. Their "Another look..." papers focus on these cases, and the problem is somewhat widespread. Most of the concerns tend to be purely academic and don't open any practical attack vectors; however, their point tends to be that computational proofs depend on assumptions that allow a positive proof of a negative, and if the methods of the proofs sufficiently obfuscate those assumptions that even practitioners frequently miss them, it calls the validity of the arguments into question. The best example of this is probably their discussion, in the original 'Another look at "provable security"' article linked here, of the proof of Optimal Asymmetric Encryption Padding (OAEP, which is required since textbook RSA and other asymmetric encryption algorithms leak information about the plaintext; in RSA's case, the Jacobi symbol). It's hard to argue with their evidence, and their work has definitely led to a much more critical eye being turned on the proof justifications and assumption discussions given in newer articles.
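
The Jacobi-symbol leak mentioned above is easy to demonstrate: the Jacobi symbol is multiplicative, so for an odd public exponent e, J(m^e mod N | N) = J(m | N)^e = J(m | N), and a textbook-RSA ciphertext therefore reveals the Jacobi symbol of its plaintext. A small sketch with toy parameters (the helper and the primes are mine, purely illustrative):

```python
# Demonstration of the textbook-RSA Jacobi-symbol leak on toy parameters.

def jacobi(a, n):
    # Jacobi symbol (a|n) for odd n > 0, via the standard binary algorithm.
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # factor out 2s, applying (2|n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # quadratic reciprocity for Jacobi symbols
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

p, q = 1009, 2003        # toy primes; real RSA moduli are vastly larger
n, e = p * q, 65537      # e is odd, as in real RSA parameters
for m in (2, 3, 12345, 999_983):
    c = pow(m, e, n)                     # textbook RSA: no padding at all
    assert jacobi(c, n) == jacobi(m, n)  # ciphertext leaks J(m|n)
```

This is exactly the kind of structural leak that padding schemes such as OAEP are meant to destroy, which is why the soundness of OAEP's proof mattered so much.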

(3) "Provable security" misses the point. To Menezes and Koblitz, the goals of
cryptography mean it shouldn't be treated as a discipline of mathematics, and
while academic cryptographers chase the abstract goal of provable security and
debate the philosophical strengths of computational vs. information theoretic
arguments, "real-world" cryptographers, including those working for the
intelligence establishments, focus on useful goals like efficiency and ease of
implementation in software and hardware. To them, building cryptosystems using
primitives with strong heuristic arguments for their security which are easy
to implement is far more beneficial for security than the most perfectly
crafted mathematical algorithm will be. In addition, although the academic
field of cryptography can easily adapt to any new result which challenges
basic assumptions (say, a result attacking the hardness of the DLP or
factoring), all the systems relying on those assumptions in practice will be
broken or greatly damaged, so holding those assumptions as central to total
security is problematic. As someone particularly interested in how cryptographic systems delegate and understand trust, this argument carries a lot of weight with me, and it is part of the reason that, although I consider a solid understanding of the theory and practice of provable security necessary for a greater understanding of the field, my short- and longer-term interests in cryptography are focused on the secondary qualities of cryptosystems and their primitives.

