

Why Bitcoin Core 0.10's release notes say “…libsecp256k1 is better than…OpenSSL” - bdr
https://www.reddit.com/r/Bitcoin/comments/2rrxq7/on_why_010s_release_notes_say_we_have_reason_to/

======
mjs
Interesting position on testing: "I generally don't consider my own software
adequately enough tested until its tests have turned up a bug in a
compiler/toolchain. So far I've not encountered a compiler bug for
libsecp256k1 [...] so I may have to settle for discovering that a ubiquitous
system library couldn't square correctly."

(The bug was found via comparing the output of libsecp256k1 and OpenSSL on
"specially-constructed random inputs.")

~~~
illumen
Comparing multiple implementations is a great way to find bugs. Even running
the same code with different compilers or platforms helps find bugs.

It doesn't find cases where they are both wrong, but often finds cases where
one of them is wrong.

It is a very common testing strategy with maths libraries.
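
The strategy can be sketched in a few lines. This is a hypothetical harness, not how libsecp256k1's tests are actually structured: two stand-in implementations of modular squaring are run on random inputs and any disagreement is flagged (a real harness would call into e.g. libsecp256k1 and OpenSSL through bindings).

```python
import random

P = 2**255 - 19  # an arbitrary prime modulus, chosen only for illustration

def square_ref(x):
    # Reference implementation: straightforward modular squaring.
    return (x * x) % P

def square_alt(x):
    # Alternative implementation of the same operation.
    return pow(x, 2, P)

def differential_test(trials=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randrange(P)
        a, b = square_ref(x), square_alt(x)
        # A mismatch proves at least one implementation is wrong;
        # agreement does not prove both are right.
        assert a == b, f"mismatch at x={x}: {a} != {b}"
    return trials

print(differential_test())  # → 10000
```

As the comment notes, uniform random inputs rarely reach the interesting corner cases; real harnesses also generate specially-constructed inputs (e.g. values near the modulus or with unusual limb patterns).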

~~~
nickik
You might be interested in this [1]. It's a 31C3 talk about exactly this. Their
idea is to write a runnable spec (one that focuses only on correctness, rather
than speed and things like that) that can then be used to test against
real-world implementations.

Their idea is to do this for all the MAJOR interfaces that we all depend on:
TCP/IP, the x86 memory model, and stuff like that.

[1]
[http://events.ccc.de/congress/2014/Fahrplan/events/6574.html](http://events.ccc.de/congress/2014/Fahrplan/events/6574.html)
(the talk is on youtube or on ccc video streaming side)

~~~
pilsetnieks
> instead of speed

Not when developing crypto. You have to look at execution speed, too, to
prevent timing attacks.

~~~
raverbashing
Execution speed _variance_

It doesn't matter whether it takes 1s or 10s; what matters is that it takes the
same time every time (to verify a password, for example)
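
A minimal sketch of the point: a naive byte-by-byte comparison returns as soon as one byte differs, so its running time reveals how long a matching prefix the attacker has guessed; a constant-time comparison takes the same time regardless of where the inputs differ. Python's standard library provides `hmac.compare_digest` for exactly this.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Timing leak: exits at the first differing byte, so elapsed
    # time depends on the length of the matching prefix.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of the position
    # of the first differing byte.
    return hmac.compare_digest(a, b)

print(constant_time_equal(b"secret", b"secret"))  # → True
print(constant_time_equal(b"secret", b"sicret"))  # → False
```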

------
lultimouomo
The quoting is quite misleading; the article and its title say:

> we have reason to believe that libsecp256k1 is better tested and more
> thoroughly reviewed than the implementation in OpenSSL

which is something very different from generically being "better" (which has 0
information value, BTW).

~~~
nullc
Thanks, yes. I spoke intentionally.

~~~
bdr
Sorry. I knew I was mangling it, but it was the only way to get within HN's
title restrictions. Thought about leaving a pre-emptive comment to that
effect, and I probably should have.

------
avian
> The incorrectly squared numbers would be expected to be found randomly with
> probability around one in 2^128

1:2^128 is an incredibly low probability. The universe is only around 2^88
nanoseconds old. For all practical purposes, an event with a 1:2^128
probability is impossible.
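
A back-of-the-envelope check of those magnitudes (the machine count and per-machine rate below are made-up illustrative figures, not real workload estimates):

```python
import math

# Age of the universe in nanoseconds, taking ~13.8 billion years.
universe_ns = 13.8e9 * 365.25 * 24 * 3600 * 1e9
print(round(math.log2(universe_ns), 1))  # → 88.5

# Even a wildly generous global workload -- say 2^40 machines each
# performing 2^50 random squarings -- amounts to 2^90 trials, so the
# expected number of hits on a 2^-128 event is only 2^-38:
print(2**90 / 2**128)  # ≈ 3.6e-12
```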

They say that they found it with randomized testing (although one that
explores "a class of rare corner cases") and dismiss the claim that this is a
class of bugs that can only be found by analysis of the implementation.

I think a test that manages to find a bug like this cannot be called "random"
(as in, throwing random inputs at a black box). Obviously I don't know the
details, but I am sure their test incorporated a great deal of detailed
knowledge of the algorithms used in the computation.

~~~
stingraycharles
While I understand where your argument is coming from, it is important to ask
the question: how many times a day is this piece of code executed on all
computers worldwide?

Surely the chance of _one_ person running into this bug is extremely small,
but what about all the people on earth combined?

Nevertheless, I think we can classify this method of testing as some sort of
hybrid between brute-force testing and auditing; I don't think the authors are
dismissing this claim either, but are merely stating that there are ways to
construct very informed test cases without looking at the code.

~~~
mike_hearn
Reasoning about large numbers is hard. 2^128 is considered enough security in
any crypto scheme to be "impossible" to break. Consider that Bitcoin uses
ECDSA, which gives 128 bits of security. This is not brute-forceable, as there
isn't enough energy around to actually flip the bits that many times, let
alone do the calculations.

~~~
nullc
OB pedantry: an attacker who does one test with 256-bit ECC has a probability
of success of ~2^-256. The 128-bit security figure comes from a rho attack,
which starts off unlikely (2^-256) to succeed and becomes almost certain to
succeed after around 2^128 operations; i.e., it makes progress, unlike normal
"guess and check" brute force.

I just mention it because this fact means that the sufficiency of ECC security
by itself doesn't mean that 2^-128 is "sufficient" against random chance.

But, of course, 2^-128 is an unfathomably low probability, and it's generally
sufficient against 'chance'. Though chance is usually the wrong way to think
about attacks. For example: if I create software which takes a 256-bit input,
does "if (input == 8675309) exec_shell();", and is exposed to the Internet,
what is the probability of that input? ... probably 1. :)

------
java-man
no software is truly safe until the amount of testing code exceeds the amount
of main code...

~~~
comrade1
I don't know man... I can write some pretty lame unit tests.

~~~
tribaal
Not to mention, massive amounts of lame unit tests exercising a tiny fraction
of the code base :p

Of course, "java-man" is probably just trolling.

~~~
jerguismi
The nickname exposes the trolling.

~~~
tribaal
Indeed :)

