

The Fuzzing Project - hannob
https://fuzzing-project.org/

======
MontagFTB
Cool site! I have applications that would benefit from a deeper fuzz testing
suite, so the site will come in handy for me.

Are you taking submissions for fuzzing tools? I recently released Binspector
as a standalone tool for binary file analysis. One of the features it has is a
strategic fuzzing system, which is used to construct attack files from a known
good file with analyzed weak points.
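
Binspector's actual mechanism lives in the repo below; as a rough illustration of the general idea only (hypothetical names, not Binspector's API), a targeted fuzzer that mutates just the byte offsets an analysis pass flagged as weak might look like:

```python
import random

def strategic_fuzz(good_bytes, weak_offsets, n_variants=10, seed=0):
    """Generate attack variants of a known-good file by mutating only
    the byte offsets flagged as weak points. (Illustrative sketch,
    not Binspector's implementation.)"""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        data = bytearray(good_bytes)
        for off in weak_offsets:
            if rng.random() < 0.5:          # mutate each weak point with p = 0.5
                data[off] = rng.randrange(256)
        variants.append(bytes(data))
    return variants

# Usage: mutate the two "length" bytes of a toy file header.
attack_files = strategic_fuzz(b"HDR\x00\x10payload", weak_offsets=[3, 4])
```

The point of the "strategic" part is that bytes outside the analyzed weak points are left untouched, so every variant still parses far enough to exercise the risky code paths.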

The source is on GitHub:
[https://github.com/binspector/binspector](https://github.com/binspector/binspector)

And a blog post that briefly covers the fuzzing portion of the tool is here:

[http://binspector.github.io/blog/2014/10/13/a-hairbrained-approach-to-security-testing/](http://binspector.github.io/blog/2014/10/13/a-hairbrained-approach-to-security-testing/)

If you have any questions/comments/concerns, just ask.

------
xixixao
A team at Imperial is working on KLEE
([http://klee.github.io/](http://klee.github.io/)), using symbolic execution
to find bugs, with some nice results. It would be great to compare the two
techniques on the same software and see the trade-off between speed and
completeness.

~~~
pascal_cuoq
“The fuzzing project” is not a fuzzing technique. It uses existing fuzzers out
of the box. See “What fuzzing tools are there?”, the FIRST question in the FAQ
at [https://fuzzing-project.org/faq.html](https://fuzzing-project.org/faq.html)

I am sure the project wouldn't have anything against using KLEE, as long as
KLEE is able to pull its own weight and to justify the time it takes to set it
up with unique bugs that simpler fuzzers do not find.

Perhaps you mean to compare KLEE and afl. That suggestion would be more
appropriate if the submitted article were
[http://lcamtuf.coredump.cx/afl/related_work.txt](http://lcamtuf.coredump.cx/afl/related_work.txt)
(but even so, that URL in itself already explains a lot about the design goals
of afl and what makes it different from the existing fuzzers, KLEE included).

------
damian2000
I found Udacity's course on Software Testing (by John Regehr) to have decent
coverage of fuzz testing. For those interested, here are the course notes on
that section ...

[https://www.udacity.com/wiki/cs258/unit-3](https://www.udacity.com/wiki/cs258/unit-3)

------
gimboland
A few years ago I was involved in an HCI research project applying this kind
of idea to automated UI analysis. We took strings of user input (in this
case, computed optimal key sequences performing a number-entry task),
randomly inserted errors of various kinds into the sequence (transposition,
deletion, repetition, etc.), then played back the key sequence and measured
the degree of error (which is easy to quantify in number entry). We applied
the technique to explore a space of possible designs and draw conclusions
about their relative resilience to error. I think it's a neat technique which
deserves to be more widely used.
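
The error-insertion step described above can be sketched in a few lines; the helper below is hypothetical (not the project's actual tooling), but shows the three error kinds named:

```python
def perturb(keys, error, pos):
    """Insert one user-error of the given kind into a key sequence.
    Hypothetical illustration of the technique, not the paper's code."""
    keys = list(keys)
    if error == "transposition" and pos + 1 < len(keys):
        keys[pos], keys[pos + 1] = keys[pos + 1], keys[pos]  # swap adjacent keys
    elif error == "deletion":
        del keys[pos]                                        # drop a keypress
    elif error == "repetition":
        keys.insert(pos, keys[pos])                          # double a keypress
    return keys

# e.g. an optimal key sequence for entering "12.5" on a keypad
seq = ["1", "2", ".", "5"]
print(perturb(seq, "transposition", 2))  # ['1', '2', '5', '.']
print(perturb(seq, "repetition", 0))     # ['1', '1', '2', '.', '5']
```

Playing each perturbed sequence back through the UI and comparing the entered number against the intended one then gives the degree-of-error measurement.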

Here's a short paper we wrote about it. Its message is somewhat muddied (IMO)
by a parallel story about "differential formal analysis" (which means several
researchers did much the same thing, and we found it interesting which parts
of our results overlapped and which differed), but it's short enough to be
readable, I think...
[http://ewic.bcs.org/upload/pdf/ewic_hci12_full_paper4.pdf](http://ewic.bcs.org/upload/pdf/ewic_hci12_full_paper4.pdf)

------
cordite
Reminds me of QuickCheck [0] in the Haskell world.

[0]:
[http://hackage.haskell.org/package/QuickCheck](http://hackage.haskell.org/package/QuickCheck)
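
For readers outside Haskell, the core QuickCheck idea can be sketched in a few lines of Python (a minimal driver under my own names, not QuickCheck's API): check a property against many randomly generated inputs and report a counterexample if one fails.

```python
import random

def quick_check(prop, gen, trials=100, seed=0):
    """Minimal QuickCheck-style driver: run `prop` on `trials` random
    inputs drawn from `gen`; return a failing input, or None if all pass."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x           # counterexample found
    return None                # property held on every trial

# Random generator for short integer lists.
gen_list = lambda rng: [rng.randint(-10, 10) for _ in range(rng.randint(0, 8))]

# True property: reversing a list twice yields the original list.
ok = quick_check(lambda xs: list(reversed(list(reversed(xs)))) == xs, gen_list)

# Deliberately false property: "every list is sorted" yields a counterexample.
bad = quick_check(lambda xs: xs == sorted(xs), gen_list)
```

The real QuickCheck also shrinks counterexamples to a minimal failing case, which this sketch omits.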

