
I'm a firm believer in regression tests for complex systems. Sure, this specific case is extremely unlikely to come up again, but in general, it's always good to have that coverage, especially if a large refactor happens later on.



I'm definitely not arguing against regression testing, or really even against test suites for TLS (there should be more of them, and better ones). I'm suggesting that the discipline of aggressively testing code for coverage is unlikely to make as much of a dent as moving to a better language would, and since both are extremely costly changes to the way TLS stacks are developed, we might as well adopt the one that will make us safer.


I claim that writing an extensive test suite for TLS is not nearly as difficult as switching people to a new language. Rewriting in a stricter language, maybe. But Haskell has a big runtime and garbage collection and its own compiler and would be a big pain to integrate everywhere that uses C TLS libraries; smaller compile-to-C languages might be easier, but who wants to use an experimental language to develop crypto code? ...And you'd still want to test it, because although functional programming style makes many bug classes less likely, it's not a panacea. You're still basically hoping that the developer doesn't make a single thinko.

Compared to that:

- Better unit testing is much easier to integrate into existing projects. Yes, it can only prevent a bug if the developer generally thought of the class of error, but at least it sort of forces them to spend some time thinking about possible failure cases, and can detect cases where their mental model was wrong. It also helps detect regressions: "goto fail" wasn't a strange edge case the developer didn't think of, it was a copy-paste error that good unit tests could have caught.

- Functional testing can be independent of the implementation and written by someone unrelated. They can only do so much in general, but they might have caught both of these bugs.
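To make the "goto fail" point concrete, here's a hypothetical sketch (all names invented, not from any real TLS library) of the kind of regression test that turns a copy-paste slip into a red build: feed the verifier a deliberately bad signature and assert rejection.

```haskell
-- Hypothetical sketch; verifySigned stands in for a real handshake
-- verification step that must pass every check before reporting success.
data VerifyError = BadHash | BadSignature deriving (Eq, Show)

verifySigned :: Bool -> Bool -> Either VerifyError ()
verifySigned hashOk sigOk
  | not hashOk = Left BadHash
  | not sigOk  = Left BadSignature
  | otherwise  = Right ()

main :: IO ()
main = do
  -- The test that matters: a known-bad signature must be rejected.
  -- On a goto-fail-shaped build, where the signature check became
  -- unreachable, the equivalent of this assertion would have failed.
  print (verifySigned True False == Left BadSignature)
  print (verifySigned True True  == Right ())
```

The point is not that this toy test is hard to write; it's that it only exists if someone decided "bad signature is rejected" deserved an explicit assertion.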

Yes, audits are another option, but I'd say they should complement tests, not replace them.

ed: oh, and if you want to be really intellectually rigorous, you could try to formally verify your C code; the model could have bugs, but it could also be implementation-independent. But I hear that's rather difficult...


It's interesting to think about, but I'm not as bullish.

Stipulate that we're just talking about X.509 validation. (You can still have "goto fail" with working X.509, but whatever).

Assume we can permute every field of an ASN.1 X.509 certificate. That's easy.

Assume we're looking for bugs that only happen when specific fields take specific values. That's less easy; now we're in fuzzer territory.

Now assume we're looking for bugs that only happen when specific combinations of fields take specific combinations of values. Now you're in hit-tracer fuzzer coverage testing territory, at best. The current state of the art in fault injection can trigger these types of flaws (i.e., when Google builds a farm to shake out bugs in libpng or whatever).

Does standard unit testing? Not so much!

Would any level of additional testing help? Absolutely.
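To put rough numbers on the combinatorial step: even a toy certificate model (fields and value sets invented here purely for illustration) multiplies out fast.

```haskell
-- Toy field model; real X.509 has dozens of fields, many with
-- effectively unbounded value spaces.
versions, keyUsages, pathLens :: [Int]
versions  = [1, 2, 3]
keyUsages = [0 .. 7]     -- 8 bit patterns
pathLens  = [0, 1, 255]

sigAlgs :: [String]
sigAlgs = ["rsa", "ecdsa", "md5rsa"]

-- Every combination of just these four toy fields:
combinations :: Int
combinations = length
  [ (v, k, p, s) | v <- versions, k <- keyUsages, p <- pathLens, s <- sigAlgs ]

main :: IO ()
main = print combinations  -- 3 * 8 * 3 * 3 = 216
```

216 cases for four fields with a handful of interesting values each; scale that to a real certificate and hand-written unit tests can't keep up, which is why this is fuzzer territory.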

But when we talk about building test tooling to the standard of trace-enabled coverage fuzzers, and compare it to the cost of adapting the runtime of a more rigorous language --- sure, Haskell is hard to integrate now, but must it be? --- I'm not so sure the cost/benefit lines up for testing our way to security.

For whatever it's worth to you: I totally do not think code audits are the best way to exterminate these bugs.


I appreciate that testing a TLS stack is a major pain. But I'm a bit confused about "move to Haskell" in this context; sure, it would cut down on the buffer overflows, but TLS stacks usually fall to logic errors and timing attacks, not to buffer overflows. "goto fail" can occur in Haskell too: a chain of conditions can still incorrectly short-circuit.
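A minimal sketch (invented names) of how the same shape of bug survives translation to Haskell: drop or duplicate one condition in the chain and the code still compiles and type-checks.

```haskell
-- Intended validation: all three checks must hold.
certOk :: Bool -> Bool -> Bool -> Bool
certOk notExpired chainOk sigOk = notExpired && chainOk && sigOk

-- One editing slip later: the signature check is gone, a condition is
-- duplicated, and the type checker has no complaint. (GHC's
-- -Wunused-matches would flag the unused argument here, but a variant
-- that still mentions sigOk in some dead position would sail through.)
certOkBuggy :: Bool -> Bool -> Bool -> Bool
certOkBuggy notExpired chainOk _sigOk = notExpired && chainOk && chainOk

main :: IO ()
main = do
  print (certOk      True True False) -- forged cert rejected
  print (certOkBuggy True True False) -- forged cert accepted
```

Both versions are well-typed; only a test that actually presents a bad signature distinguishes them.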

Also note that Haskell doesn't exactly help in avoiding timing attacks. In a sane cryptosystem, you might be able to implement AES, ECDSA and some other primitives in a low-level language and use Haskell for the rest; but as you know, TLS involves steps like "now check the padding, in constant time" (https://www.imperialviolet.org/2013/02/04/luckythirteen.html). You could certainly implement those parts in C, too, and then carefully ensure that no input to e.g. your X509 parser can consume hundreds of MB of memory, and so forth, but you're going to lose some elegance in the process. (Those problems would admittedly be smaller in OCaml, Ada or some such.)
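For instance, the standard no-early-exit MAC comparison can be written in Haskell easily enough (sketch below, stdlib only), but laziness and GHC's optimizer give you no actual constant-time guarantee, which is exactly why such code tends to end up in carefully written C.

```haskell
import Data.Bits (xor, (.|.))
import Data.Word (Word8)

-- The usual "accumulate differences, never exit early" idiom. Caveat:
-- nothing stops GHC from compiling this into something with
-- data-dependent timing, so treat it as a shape, not a guarantee.
constEq :: [Word8] -> [Word8] -> Bool
constEq xs ys =
  length xs == length ys
    && foldl (.|.) 0 (zipWith xor xs ys) == 0

main :: IO ()
main = do
  print (constEq [1, 2, 3] [1, 2, 3]) -- equal inputs accepted
  print (constEq [1, 2, 3] [1, 2, 4]) -- differing inputs rejected
```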

I'd be more interested in something like Colin's spiped - competently-written C implementing a much simpler cryptosystem. If only because even a perfect implementation of TLS would still have lots of vulnerabilities. ;-)

(I think the case for writing applications in not-C is considerably stronger, if only because TLS stack maintainers tend to be better at secure coding than your average application programmer. Like you, I do like writing in C, though.)


"would be a big pain to integrate everywhere that uses C TLS libraries"

I don't altogether disagree with your points; however, I'd like to point out that Haskell has a great C FFI:

http://www.haskell.org/haskellwiki/GHC/Using_the_FFI
http://book.realworldhaskell.org/read/interfacing-with-c-the...
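For what that looks like in practice, here's a minimal binding (to libc's strlen, standing in for a real TLS library's entry points, which you'd import the same way given its headers and link flags):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.String (CString, withCString)
import Foreign.C.Types  (CSize)

-- Bind a C function by name; GHC links it like any other C symbol.
foreign import ccall unsafe "string.h strlen"
  c_strlen :: CString -> IO CSize

main :: IO ()
main = do
  -- withCString marshals a Haskell String to a NUL-terminated C string
  -- for the duration of the call.
  n <- withCString "hello" c_strlen
  print n  -- 5
```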



