
Ideas for a New Elliptic Curve Library - tptacek
https://briansmith.org/GFp-0
======
tptacek
In which Brian Smith proposes that, instead of coding each of the specific
curves a TLS library has to support in independent but closely coupled C
libraries, we invent a DSL for expressing curve routines and a compiler to
compile it down directly to C or assembly.

~~~
nine_k
Quite nice, but how about the need for key functions to be constant-time to
thwart timing attacks? That's probably quite hard to express and requires
low-level per-platform tweaking.

~~~
briansmith
The idea is that the DSL compiler would be responsible for ensuring constant-
timeness. Note that C, C++, and Rust offer no guarantee of constant-timeness.
Even processors don't offer constant-timeness guarantees and some instructions
(often division) are not constant-time. See also
http://blog.erratasec.com/2015/03/x86-is-high-level-language.html.
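
To make the branching problem concrete, here is a rough, purely illustrative
C sketch (not from the paper) of the kind of branch-free selection that
constant-time code relies on. Nothing here is guaranteed: a C compiler is free
to turn it back into a branch, which is exactly why controlling code
generation in a DSL compiler (or in assembly) is attractive.

    #include <stdint.h>

    /* Illustrative only: select a or b without branching on `cond`.
     * `cond` must be exactly 0 or 1; it is stretched into an all-zeros
     * or all-ones mask. A C compiler may still rewrite this into a
     * branch, which is why language-level guarantees are not enough. */
    static uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b) {
        uint32_t mask = (uint32_t)0 - cond;   /* 0x00000000 or 0xFFFFFFFF */
        return (a & mask) | (b & ~mask);
    }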

I wrote the paper mostly in the context of implementing signature
verification, which doesn't require side-channel protection since there are no
secrets in verification. But I think the DSL approach is even more important
for ECDH and signing, where side-channel protection is important.

Finally, given that processors don't even promise us that any instruction is
constant-time, and given that other types of attacks are possible where
constant-timeness doesn't help so much (e.g. power analysis), I think it is
worth considering other approaches to side-channel protection (e.g. blinding).
For many applications, constant-timeness isn't sufficient and I'd love to
learn that it isn't necessary.
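
For reference, here is a minimal sketch of scalar blinding. It is pseudocode-
style C: `scalar`, `wide_scalar`, `point`, and all the helpers are placeholder
names, not any real library's API. The idea is to compute (k + r*n)*P for a
fresh random r, where n is the group order, instead of k*P directly; the
result is the same point, but the scalar bits actually processed differ on
every run.

    /* Sketch only; types and helpers are hypothetical placeholders.
     * The blinded scalar must NOT be reduced mod n before the point
     * multiplication, or the blinding has no effect. */
    void blinded_point_mul(point *out, const scalar *k, const point *p) {
        wide_scalar r, k_blind;
        random_scalar(&r);                            /* fresh secret randomness */
        wide_mul_add(&k_blind, &r, &GROUP_ORDER, k);  /* k_blind = r*n + k       */
        wide_point_mul(out, &k_blind, p);             /* (k + r*n)*P == k*P      */
    }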

------
nickpsecurity
Does he mean doing what Galois did [1][2] with tools like CRYPTOL [3][4]? Or
something more like this [5] with EasyCrypt and CompCert? Or something simpler
like Altran's SPARK crypto [6]? And maybe with protocol-level verification
like miTLS [7]?

I don't think there's a question about whether the goals can be met so much as
a lack of uptake of methods and tools that meet them. Uptake and improvement
on their capabilities, that is. So go for it, people. :)

[1] https://www.acsac.org/2012/workshops/law/pdf/Launchbury.pdf

[2] https://galois.com/blog/2012/03/verifying-ecc-implementations/

[3] http://www.cps-vo.org/file/19230/download/59388

[4] http://www.cryptol.net/

[5] https://www.easycrypt.info/trac/raw-attachment/wiki/BibTex/CCS.ABBD13.pdf

[6] http://www.adacore.com/press/spark-skein/

[7] https://www.mitls.org:2443/downloads/miTLS-report.pdf

~~~
briansmith
> Does he mean doing what Galois did [1][2] with tools like CRYPTOL [3][4]? Or
> something more like this [5] with EasyCrypt and CompCert? Or something
> simpler like Altran's SPARK crypto [6]? And maybe with protocol-level
> verification like miTLS [7]?

Yes.

> I don't think there's a question about whether the goals can be met so much
> as a lack of uptake of methods and tools that meet them.

I agree. And that's a big part of the reason I didn't spend a lot of time on
formal verification when I wrote what I wrote. The target audience of my
writing is people who think that the result must be as fast as the fastest
implementation, but who think that formal verification of correctness is only
nice to have and/or impractical to the point of not being worth trying. That
isn't my own personal prioritization, but I think it actually describes the
prioritization of almost everybody deploying open source crypto software
today.

Anyway, I hope to have more to say about formal verification of ECC
implementations later.

~~~
nickpsecurity
Well, this is the best response I've seen to a post like this. The audience
you refer to might be best served by a SPARK, Frama-C, etc. implementation,
decomposed as much as possible, with experts writing the DbC annotations or
VCCs.
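
To give a feel for what those annotations look like, here is a small made-up
example in the Frama-C/ACSL style on a trivial helper. The function and its
contract are hypothetical, just to show the flavor of the formal route.

    #include <stddef.h>

    /*@ requires \valid_read(a + (0 .. n-1));
      @ requires \valid_read(b + (0 .. n-1));
      @ assigns \nothing;
      @ ensures \result == 0 || \result == 1;
      @ ensures \result == 1 <==>
      @         (\forall integer i; 0 <= i < n ==> a[i] == b[i]);
      @*/
    int bytes_equal(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        size_t i;
        /*@ loop invariant 0 <= i <= n;
          @ loop invariant (diff == 0) <==>
          @                (\forall integer j; 0 <= j < i ==> a[j] == b[j]);
          @ loop assigns i, diff;
          @ loop variant n - i;
          @*/
        for (i = 0; i < n; i++)
            diff |= (unsigned char)(a[i] ^ b[i]);
        return diff == 0;
    }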

More likely, an informal method. In the distant past, my method was to use a
language with compile-time macros to decompose the overall algorithm into the
simplest possible functions. In dev mode, I could manually run checks to help
ensure my assumptions were correct. Each module was simple enough to test
extensively and spot coding defects in. If necessary, I could do it at the ASM
level while wrapping the low-level stuff in HLL calls with checks. In
production mode, it compiled to straightforward, high-performance, low-level
code.
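
A rough sketch of that macro idea in plain C (names invented for
illustration): in a dev build the checks run, while a production build with
-DNDEBUG compiles them away and leaves only the straightforward low-level
code.

    #include <assert.h>
    #include <stdint.h>

    /* Dev mode: CHECK() runs; production mode (-DNDEBUG): it vanishes. */
    #ifdef NDEBUG
    #  define CHECK(cond) ((void)0)
    #else
    #  define CHECK(cond) assert(cond)
    #endif

    /* One "simplest possible" building block: one-limb add with carry. */
    static uint32_t limb_adc(uint32_t a, uint32_t b, uint32_t carry_in,
                             uint32_t *carry_out) {
        CHECK(carry_in <= 1);                       /* assumption on inputs */
        uint64_t sum = (uint64_t)a + b + carry_in;
        *carry_out = (uint32_t)(sum >> 32);
        CHECK(*carry_out <= 1);                     /* sanity after the add */
        return (uint32_t)sum;
    }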

So, two routes with different levels of formality and optimization. The second
method is more likely to get adopted. I'm just not sure how many weird corner
cases can be prevented that way.

------
bradleyjg
I really enjoyed this part of the plan:

 _The profiling mode would generate a random seed, generate the code, compile
the code, run a program that measures the performance of the code, and
compares the execution time to the previously-known best execution time. Then
we would just let it run until we run out of development time, take the best
seed the program output, update the build system so that the new seed is used
for production builds, and say we’re done. We could call our ridiculous
compiler the “super dumb compiler” and hope that people confuse it for a
superoptimizing compiler. To scale this incredibly inefficient optimization
strategy, we could mint a new cryptocurrency, Superdumbcompilercoin, where we
trick people into running the super dumb compiler on their computers, cross-
certifying each others’ results, and sending us increasingly better
optimization seeds in return for plausible delusions of becoming rich._

