
I don't want to hijack the thread subject, but here are my thoughts on the usefulness of fuzzing safe languages.

Even in the absence of memory corruption bugs, there are classes of bugs that can emerge in any general-purpose language: slowness/hangs, assertion failures, panics, and excessive resource consumption.

Beyond those, you can detect invariant violations, (de)serialization inconsistencies (e.g. deserialize(serialize(input)) != input; see [1]), and divergent behavior across multiple libraries whose semantics must be identical (cryptocurrency implementations are notable in this regard, as any deviation from the spec or the canonical implementation in the execution of scripts or smart contracts can lead to chain splits).
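The round-trip check is easy to express as a fuzzing oracle. A minimal sketch in Python, using the standard-library json module as a stand-in for whatever parser is under test (the harness shape and function name are mine, not from any particular fuzzer):

```python
import json

def fuzz_one(data: bytes) -> None:
    """Round-trip oracle: parsing, re-serializing, and parsing again
    must yield the same value as the first parse."""
    try:
        parsed = json.loads(data)
    except (ValueError, UnicodeDecodeError):
        return  # malformed input is rejected, not a bug
    reserialized = json.dumps(parsed)
    assert json.loads(reserialized) == parsed, "round-trip mismatch"

fuzz_one(b'{"a": [1, 2.5, "x"]}')  # passes silently
fuzz_one(b'not json at all')       # rejected, no crash
```

A real harness would hand `fuzz_one` to a coverage-guided driver (e.g. via Atheris or a libFuzzer binding) so the fuzzer, not a fixed corpus, supplies `data`.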

With some effort you can do differential 64-bit/32-bit fuzzing on the same machine. I've found interesting discrepancies in how JSON parsers interpret numeric values, which makes sense if you think about it: size_t and float have different sizes on each architecture, causing the 32-bit parser to truncate values. This might be applicable to every language that does not guarantee type sizes across architectures, like Go (not sure?), but I haven't tested that yet.
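One source of such divergence can be simulated without a 32-bit build: a parser that stores numbers in a 4-byte float loses precision that a double-based parser keeps. A sketch (the size_t truncation on real 32-bit builds is analogous but affects lengths and integer bounds instead of mantissa bits):

```python
import struct

def as_float32(x: float) -> float:
    # Round-trip through a 4-byte IEEE-754 float, as a parser
    # using `float` instead of `double` effectively would.
    return struct.unpack('<f', struct.pack('<f', x))[0]

# 16777217 = 2**24 + 1: exactly representable as a double,
# but a 24-bit float mantissa rounds it down, so the two
# "parsers" disagree on the same input.
assert float(16777217) == 16777217.0
assert as_float32(16777217.0) == 16777216.0
```

A differential harness would feed the same byte string to both builds and assert the parsed values compare equal.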

You can detect path escape/traversal (which is entirely language-agnostic but potentially severe) by asserting that every absolute path the app ever accesses is legal, or by fuzzing a path sanitizer specifically.
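The "legal path" assertion might look like this; a minimal sketch with hypothetical names, checking that a user-supplied path, once joined and normalized, is still contained in the intended root:

```python
import os.path

def path_stays_inside(root: str, user_path: str) -> bool:
    """Oracle sketch: the normalized absolute path must remain
    under `root`; anything else is a traversal escape."""
    joined = os.path.normpath(os.path.join(root, user_path))
    return joined == root or joined.startswith(root + os.sep)

assert path_stays_inside("/srv/app", "static/logo.png")
assert not path_stays_inside("/srv/app", "../../etc/passwd")
```

When fuzzing a sanitizer, the harness asserts `path_stays_inside(root, sanitize(data))` for every fuzzer-generated input; a single failing input is a traversal bug. (On systems with symlinks, `os.path.realpath` is the stricter choice, since `normpath` alone does not resolve links.)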

And so on.

Code coverage is the primary metric used in fuzzing, but other metrics can be useful as well. I've experimented extensively with metrics such as allocation, code intensity (the number of basic blocks executed), which helped me prove that V8's WASM JIT compiler can be subjected to inputs of average size that take >20 seconds to compile, and stack depth; see also [2].

Almost any quantity can be used as a fuzzing metric, for example the largest observed difference between two variables in your program.

Let's say you have a decompression algorithm that takes C as an input and outputs D. Calculate R = len(D) / len(C), so that R is the ratio of decompressed output size to compressed input size. Use R as a fuzzing metric and the fuzzer will tend to generate inputs with a high decompression ratio, possibly leading to the discovery of decompression bombs [3].
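The metric itself is a one-liner. A sketch using zlib as the decompressor (in a real setup R would be reported back to the fuzzer, e.g. through a custom counter, rather than just returned):

```python
import zlib

def decompression_ratio(compressed: bytes) -> float:
    """R = len(D) / len(C): the quantity a fuzzer would maximize
    to steer itself toward decompression bombs."""
    try:
        decompressed = zlib.decompress(compressed)
    except zlib.error:
        return 0.0  # not valid compressed data
    return len(decompressed) / len(compressed)

# A megabyte of zeros deflates to around a kilobyte, so R is huge.
bomb = zlib.compress(b"\x00" * 1_000_000)
assert decompression_ratio(bomb) > 100
```

The same shape works for any codec: swap in the decompressor under test and let the fuzzer climb the ratio.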

Relatedly, libFuzzer now also natively supports custom counters, I believe [4].

Based on Rody Kersten's work, I implemented libFuzzer-based fuzzing of Java applications supporting code coverage, intensity, and allocation metrics [5], and it should not be difficult to plug this into ClusterFuzz/OSS-Fuzz.

Feel free to get in touch if you have any questions or need help.

[1] https://github.com/nlohmann/json/blob/develop/test/src/fuzze...

[2] https://github.com/guidovranken/libfuzzer-gv

[3] https://en.wikipedia.org/wiki/Zip_bomb

[4] https://llvm.org/docs/doxygen/FuzzerExtraCounters_8cpp_sourc...

[5] https://github.com/guidovranken/libfuzzer-java




Great post Guido!

Guido's bignum fuzzer, which tests the correctness of math operations in crypto libraries, is one of the most interesting fuzzers we run on ClusterFuzz.



