
Verified Correctness and Security of OpenSSL HMAC - dezgeg
http://katherineye.com/post/120638230126/verified-correctness-and-security-of-openssl-hmac
======
jeffreyrogers
I always find these sorts of things interesting, and I try to pay attention to research in this area whenever it comes up. For example, here is another recent paper, about verifying Curve25519:
[http://dl.acm.org/citation.cfm?id=2660370](http://dl.acm.org/citation.cfm?id=2660370).

However, a tremendous amount of effort seems to go into verifying this
software, and I wonder if the time investment makes sense. Humans are pretty
good at doing proofs, so it seems that with a comparable amount of time you
could just have a human verify that the C implementation matched the spec.
(Yes, I know that this result goes a bit beyond that by using CompCert, etc.)

Now obviously the state of the art is progressing and it's getting easier to
formally verify properties of programs, but I wonder whether it will ever be
feasible to make this part of everyday software development. Anyway, these
are just my half-baked ramblings, since this isn't a topic I've thought about a
whole lot, but I'm interested to hear other people's perspectives.

~~~
mpu
It's not really a 'bit beyond'. The proof is about the machine code that runs
on the CPU! That is a huge abstraction gap from C, and bridging it is the
result of years of formal proofs and PL research.

As for whether the effort is worth it: I believe that security-critical
programs MUST have formal proofs. Vulnerabilities in this kind of software are
extremely costly. The same goes for software whose failure can endanger
humans (cf. the Toyota debacle).

~~~
tptacek
A surprisingly large number of lines of code become security critical by dint
of inclusion in security critical systems designed by others, or in support
systems for those critical systems.

~~~
derefr
...which is a big reason why I find it insane that people aren't more
interested in microkernel (or unikernel+hypervisor, which ends up in the same
place) designs.

If you can have a tiny trust-kernel that has been proof-checked, keep
everything else outside of it, and the things outside of it can only
communicate with (or even observe) their peers via messages sent through it,
then you don't need to worry about including untrusted code in your app.

Instead, you just put any and all untrusted code into microservices
(microdaemons?) in their own security domains/sandboxes/VMs/whatever, speak to
them over the kernel's message bus (or, in the VM case, a virtual network),
and suddenly they can't hurt you anymore.
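A toy sketch of that pattern, with OS processes standing in for the security domains and JSON-over-pipes standing in for the kernel's message bus (all names here are made up for illustration):

```python
import json
import subprocess
import sys

# Hypothetical "microdaemon": untrusted code lives in its own OS process
# (standing in for a separate security domain/VM), and the only way it can
# interact with us is via a message channel -- here, JSON lines over pipes.
UNTRUSTED_SERVICE = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    # This code sees only the messages we choose to send it.
    print(json.dumps({"result": req["a"] + req["b"]}), flush=True)
"""

proc = subprocess.Popen(
    [sys.executable, "-c", UNTRUSTED_SERVICE],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def call(proc, a, b):
    # The "message bus": one request line out, one response line back.
    proc.stdin.write(json.dumps({"a": a, "b": b}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())["result"]

answer = call(proc, 2, 3)
proc.stdin.close()
proc.wait()
```

A real microkernel gives much stronger isolation than a Unix subprocess, of course; the point is just the shape of the interface.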

~~~
dmix
Do you have any examples of this type of approach being used in any projects?
I'd be curious to check out how it works code-wise.

~~~
dlitz
I'm not sure if this matches the description, but seL4 supposedly has a pretty
well-developed proof system: [https://sel4.systems/](https://sel4.systems/)

~~~
tptacek
Worth keeping in mind that L4 kernels do much, much less than conventional
operating systems. They're more like libraries for building useful OSes on top
of.

------
hypotext
Here's the full paper: [http://www.cs.princeton.edu/~appel/papers/verified-hmac.pdf](http://www.cs.princeton.edu/~appel/papers/verified-hmac.pdf)

~~~
nickpsecurity
Thanks. My favorite part is your thorough breakdown of the assurance case in
section one. For redoing security evaluations, I recommended [1] a while back
that vendors illustrate the assurance levels of each component in their
systems. Your breakdown looks closer to my recommendation than most things
I've seen. Such a breakdown is both honest and shows exactly where
improvements (or mitigations) need to happen.

Otherwise, verifications were as practical and good as I'd expect. My favorite
section to scope out, Related Work, gave me new insights as usual. You also
had a useful idea of future work [2]. All together, great work.

[1]
[https://www.schneier.com/blog/archives/2014/04/friday_squid_...](https://www.schneier.com/blog/archives/2014/04/friday_squid_bl_419.html#c5457853)

[2] "One important future step is to condense commonalities of these
libraries into an ontology for crypto-related reasoning principles, reusable
across multiple language levels and realised in multiple proof assistants."

------
0x0
Even Coq has had bugs that made it possible to prove that true is false:
[https://github.com/clarus/falso](https://github.com/clarus/falso)

~~~
nickpsecurity
True. That's why formal verification is only one of many techniques in high
assurance (A1/EAL7) evaluations. The methods are all a check against each
other with the minds of the designers and evaluators being the strongest
check. That might seem counter-intuitive given we're doing formal verification
due to people's inability to write code. Yet, people equipped properly can see
mistakes in good specs or designs much better than they can do tedious work
(eg low-level code) without error.

So, in high assurance, we use every tool at our disposal to counter problems
and then some for redundancy. Works out fine. That said, I have a nice paper
for you if you want to see how screwed up formal verification can get:

[http://www.cypherpunks.to/~peter/04_verif_techniques.pdf](http://www.cypherpunks.to/~peter/04_verif_techniques.pdf)

Things have gotten a bit better but plenty of truth left in that paper.

------
mpu
Writing the definition of SHA-256 in Coq must have been great fun, hehe.
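It's less painful than it sounds: the heart of SHA-256's compression function is a handful of pure bitwise functions over 32-bit words, which transcribe almost directly into Gallina. Sketched in Python (following the FIPS 180-4 definitions):

```python
# The core logical functions of SHA-256, per FIPS 180-4. Everything is a
# pure function over 32-bit words, which is why it maps so directly onto
# a functional spec language like Coq's Gallina.
MASK = 0xFFFFFFFF  # keep results in 32 bits

def rotr(x, n):
    # rotate a 32-bit word right by n bits
    return ((x >> n) | (x << (32 - n))) & MASK

def Ch(x, y, z):
    # "choose": bits of y where x is 1, bits of z where x is 0
    return ((x & y) ^ (~x & z)) & MASK

def Maj(x, y, z):
    # "majority": each output bit is the majority vote of the three inputs
    return (x & y) ^ (x & z) ^ (y & z)

def Sigma0(x): return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
def Sigma1(x): return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25)
def sigma0(x): return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)
def sigma1(x): return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)
```

The tedium is in the message schedule, the 64-round loop, and the padding rules, not in the functions themselves.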

~~~
YAYERKA
"Verification of a Cryptographic Primitive: SHA-256", Andrew Appel

[https://www.cs.princeton.edu/~appel/papers/verif-sha.pdf](https://www.cs.princeton.edu/~appel/papers/verif-sha.pdf)

~~~
nickpsecurity
Thanks for the link. I always enjoy reading Appel's work.

------
nickpsecurity
Three gripes I have with many formal methods projects are that they (a) choose
something nearly useless to prove, (b) reinvent the wheel unnecessarily, or
(c) make an assurance argument with huge gaps, or about a knockoff of the
actual problem. I like that this project does the opposite in each area: a
useful algorithm implementation whose proof builds on others' projects, with
end-to-end assurance. Wise. I look forward to reading the paper.

So, what to do next? I suggest working on stuff that isn't getting much
attention. For security, we lack realistic proofs of useful protocols and
crypto constructions, from requirements to design to implementation. Also, much
foundational software builds on libraries implementing ZIP, PNG, regexes, and
so on; more verification of those would have widespread effect. Program
transformations, optimizations in CompCert, integration of covert-channel
analysis into such tools, more static/dynamic checking, _assemblers/linkers_,
and so on could all use more verification work done with the nice qualities I
mentioned about this one. So, any readers thinking of a project might consider
the above. That said, it would be nice if this could benefit the average
person without formal methods abilities, right?

I thought hard on it. Our systems software is usually coded in low-level,
imperative languages for performance, while our high assurance work often uses
functional programming (or a functional style) for specs, tools, and so on.
The limitations and TCBs of functional languages in the systems space are huge
obstacles. Yet old Scheme/LISP work showed how to turn a limited functional
program into an imperative one step by step by fleshing out its state (among
other things). We've also seen metaprogramming and MDD techniques that let us
specify something at a high level while fast, low-level code is automatically
generated for the target.

I think the trick is to combine all of this: a subset of functional
programming that specifies the high-level operation of the program and the
low-level operation of the target language (esp. the fast parts); verification
of useful primitives (eg stacks, pointer arithmetic) behind high-level,
macro-style interfaces; a coding style plus methodology for going from
high-level functional to low-level imperative; and verified transformations
for optimization, macro expansion, and code generation. Each of these exists
in some form, most verified in some way. What's left is to verify all of them
and their integration. Such an integrated approach might dramatically simplify
verification of software by letting developers simply describe it in a
high-level, functional way, with a step-by-step process to deployment.
Verification, depending on talent, might range from manual inspection to
machine-checked proofs. Either way, doing it this way should be much easier
than turning developers into formal verification experts.
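A trivial, made-up illustration of the functional-to-imperative direction (in a real pipeline the connection between the two forms would be a verified transformation, not hand-written code):

```python
from functools import reduce

# The high-level spec is a fold; the deployed code is a loop obtained by
# "fleshing out" the fold's accumulator into a mutable variable. These are
# the two artifacts a verified transformation chain would connect.

def dot_spec(xs, ys):
    # high-level functional spec: a fold over the zipped sequences
    return reduce(lambda acc, pair: acc + pair[0] * pair[1], zip(xs, ys), 0)

def dot_impl(xs, ys):
    # low-level imperative form: the accumulator is now mutable state,
    # as the step-by-step derivation would produce
    acc = 0
    for i in range(len(xs)):
        acc += xs[i] * ys[i]
    return acc
```

Here one can check agreement by testing; the proposal above is to make that agreement a machine-checked theorem instead.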

What do you developers or formal methods people think of this?

