Launch HN: Fuzzbuzz (YC W19) – Fuzzing as a Service
171 points by evmunro on Feb 27, 2019 | 81 comments
Hey HN,

We’re Everest, Andrei and Sabera, the founders behind Fuzzbuzz (https://fuzzbuzz.io) - a fuzzing-as-a-service platform that makes fuzzing your code as easy as writing a unit test and pushing to GitHub.

Fuzzing is a type of software testing that generates & runs millions of tests per day on your code, and is great at finding edge cases & vulnerabilities that developers miss. It’s been used to find tens of thousands of critical bugs in open-source software (https://bugs.chromium.org/p/oss-fuzz/issues/list), and is a great way to generate tests that cover a lot of code without requiring your developers to think of every possibility. It achieves such great results by applying genetic algorithms to generate new tests from some initial examples, and using code coverage to track and report interesting test cases. Combining these two techniques with a bit of randomness and running tests thousands of times every second has proven to be an incredibly effective automated bug-finding technique.
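To make that concrete, here's a toy sketch of the coverage-guided loop (purely illustrative, and nothing like our actual engine - real fuzzers like AFL use compile-time instrumentation and much smarter mutation strategies):

    import random

    def target(data):
        # Toy instrumented target: returns the set of branches it took,
        # with a crash hidden behind the prefix "FUZ".
        cov = {"start"}
        if data[:1] == b"F":
            cov.add("F")
            if data[1:2] == b"U":
                cov.add("FU")
                if data[2:3] == b"Z":
                    raise RuntimeError("bug!")
        return cov

    def mutate(data):
        # Genetic step: flip a random bit or insert a random byte.
        buf = bytearray(data)
        if buf and random.random() < 0.5:
            buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
        else:
            buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
        return bytes(buf)

    corpus, seen = [b"seed"], set()
    for _ in range(1000000):
        data = mutate(random.choice(corpus))
        try:
            cov = target(data)
        except RuntimeError:
            print("crash found:", data)
            break
        if not cov <= seen:  # new coverage: keep this input in the corpus
            seen |= cov
            corpus.append(data)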

I was first introduced to fuzzing a couple years ago while working on the Clusterfuzz team at Google, where I built Clusterfuzz Tools v1 (https://github.com/google/clusterfuzz-tools). I later built Maxfuzz (https://github.com/coinbase/maxfuzz), a set of tools that makes it easier to fuzz code in Docker containers, while on the Coinbase security team.

As we learned more about fuzzing, we found ourselves wondering why very few teams outside of massive companies like Microsoft and Google were actively fuzzing their code - especially given the results (teams at Google that use fuzzing report that it finds 80% of their bugs, with the other 20% uncovered by normal tests or in production).

It turns out that many teams don’t want to invest the time and money needed to set up automated fuzzing infrastructure, and using fuzzing tools in an ad-hoc way on your own computer isn’t nearly as effective as continuously fuzzing your code on multiple dedicated CPUs.

That’s where Fuzzbuzz comes in! We’ve built a platform that integrates with your existing GitHub workflow and provides an open API for integrations with CI tools like Jenkins and TravisCI, so the latest version of your code is always being fuzzed. We manage the infrastructure, so you can fuzz your code on any number of CPUs with a single click. When bugs are found, we’ll notify you through Slack and create Jira tickets or GitHub Issues for you. We also solve many of the issues that crop up when fuzzing, such as bug deduplication and elimination of false positives.

Fuzzbuzz currently supports C, C++, Go and Python, with more languages like Java and JavaScript on the way. Anyone can sign up for Fuzzbuzz and fuzz their code on 1 dedicated CPU, for free.

We’ve noticed that the HN community has been increasingly interested in fuzzing, and we’re really looking forward to hearing your feedback! The entire purpose of Fuzzbuzz is to make fuzzing as easy as possible, so all criticism is welcome.




Can you talk a bit about what you're fuzzing for in Python programs? I feel like I have a good understanding of what cluster fuzzing is accomplishing for C/C++ libraries, but less clarity about the goals for managed languages.


Sure! Some of the classes of bugs that remain low-hanging fruit for languages like Python include slowness, hangs, uncaught exceptions, race conditions, assert failures, excessive resource consumption, and other denial-of-service conditions.

Other use cases include using fuzzing to compare implementations of libraries that provide the same functionality, detecting invariant violations, and testing implementations that are meant to work together (e.g. serialize(deserialize(x)) == x).
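For example, a minimal sketch of that last pattern as a fuzz target, using the stdlib json module as the implementation under test:

    import json

    def fuzz(data):
        try:
            obj = json.loads(data)  # deserialize
        except (ValueError, RecursionError):
            return  # input rejected; that's fine, not a bug
        # Invariant: re-serializing and re-parsing must give the same value.
        # (A fuzzer finds a violation quickly: NaN parses, but NaN != NaN.)
        assert json.loads(json.dumps(obj)) == obj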

In general fuzzing C/C++ libraries for memory bugs is the most commonly described use-case, but I think there are tons of fuzzing use cases that haven't been thoroughly explored yet.


I recently wrote a tool called SharpFuzz that enables fuzzing .NET programs with AFL (https://github.com/metalnem/sharpfuzz#trophies). It has found over 70 issues so far in various libraries (including the .NET standard library). The most common ones are unexpected exceptions (for example, a method that should never throw throws IndexOutOfRangeException or NullReferenceException), but there are also many serious ones, such as temporary/permanent hangs, stack overflows, and process crashes.


As an example, I had a Python function that needed to strip the front of a string, and it went into an infinite loop on an empty string due to a boneheaded mistake on my part. I failed to have a unit test for that particular input and didn't catch it until months after it went into production, when empty strings started showing up occasionally in the wild and my program would get restarted by a watchdog which noticed it not making progress. A fuzzer would hopefully have caught that sooner.
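Something in the spirit of this (hypothetical, not my actual code):

    def strip_front(s):
        # Drop leading junk until the string starts with an alphanumeric.
        # BUG: on an empty string, s[:1] is "" and "".isalnum() is False,
        # but s[1:] is also "", so the loop never terminates.
        while not s[:1].isalnum():
            s = s[1:]
        return s

A fuzzer running this under a timeout would flag "" (and any string with no alphanumeric characters at all) as a hang almost immediately.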


P.S. Know someone who maintains an open-source project, written in C, C++, Go or Python, that should be fuzzed? Please send an email to oss@fuzzbuzz.io. We’d love to fuzz your code for free on our platform and make the world’s open-source software more secure.


Can't they do this w/oss-fuzz already?


They certainly could if their project is large enough! Every widely-used C/C++ project should use OSS-Fuzz, it's an awesome service.

We support a couple of languages that OSS-Fuzz doesn't (Go & Python as of now), which is why I thought this was worth mentioning :)


>We support a couple of languages that OSS-Fuzz doesn't (Go & Python as of now), which is why I thought this was worth mentioning :)

I thought the main benefit of fuzzing was finding memory safety bugs. If your program is crashing or otherwise erroring out given crazy input, that's something you want to fix because it is potentially exploitable. With Python/Go that's not really an attack vector. So what's the benefit of finding out that some crazy input crashes my Python program?


Memory safety issues have been the main focus of fuzzing, but it's really useful for other use cases as well, such as slowness/hangs, assert failures, panics, excessive resource consumption and DoS attacks. We've also done some work with Go to detect race conditions while fuzzing.

You can also do differential fuzzing to compare 2 different implementations that solve the same problem, or fuzz for invariant violations/assertion failures. I think the possibilities extend far beyond just memory safety, and I'm really looking forward to finding other areas in which fuzzing is applicable.
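A minimal sketch of a differential target (parse_int_v2 is a hypothetical stand-in for the second implementation):

    def parse_int_v2(s):
        # Hypothetical "fast" reimplementation under test.
        value = 0
        for ch in s:
            value = value * 10 + (ord(ch) - ord("0"))
        return value

    def fuzz(data):
        try:
            s = data.decode("ascii")
            expected = int(s)  # reference implementation
        except (UnicodeDecodeError, ValueError):
            return  # the reference rejects this input; nothing to compare
        # Both implementations must agree on every accepted input. A fuzzer
        # quickly finds e.g. "+5" or " 7", which int() accepts but the
        # naive loop mangles.
        assert parse_int_v2(s) == expected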


Have you guys run FuzzBuzz against FuzzBuzz? If so, how many bugs did you find?


Crashes are exploitable too -- you definitely don't want someone to be able to take your website offline just by sending it a malformed packet. You might not leak user data, but being DoS'd is no fun either.


Debian Developer here... I wonder how much of our core infrastructure this could cover, given the Python support.


If you're interested in giving it a go, I could set you up with an OSS plan & some free CPU power - let me know!

everest@fuzzbuzz.io


FWIW I separated my Oil shell parser into a standalone stdin/stdout filter, which is ready to be fuzzed:

https://github.com/oilshell/oil/blob/master/bin/osh_parse.py

https://github.com/oilshell/oil/issues/171

I'm already testing it by running it on more than a million lines of shell [1], which I imagine should provide a very good starting point for the AFL algorithm. I've only fuzzed one thing before but that's my understanding.

If anyone is itching to try out Python fuzzing, this might be a nice and realistic intro.

I made note of Fuzzbuzz on the bug.

[1] http://www.oilshell.org/blog/2017/11/10.html


Thanks for mentioning us on the issue! I'd love to help get that project up and fuzzing.


I know of a company called Security Innovation that tried this in 2002. It went very badly at times. They added training and pen testing, and today they bring in just $20 million per year.

They opened an office in Seattle to fuzz for Microsoft. As soon as they proved that they could succeed, Microsoft hired away all the people, leaving the company with a lease for an empty office.

Generally, companies don't trust outsiders and/or don't see a need. You're up against internal politics too. People within the company don't want to compete with you and don't want to be embarrassed by you.


Companies don't trust "outsiders" to do security testing? Veracode was doing 9 figures a few years ago and was recently spun out from Broadcom for almost $1Bn.

Also, 20MM/yr is not a revenue number to sneeze at. Enterprise security is a huge and mature product space and most aspirants in it do not hit that number. These companies aren't Uber, where every dollar coming in is going back out the door with an extra couple dimes to boot.


Eeeep.

There's a substantial counterargument to this I need to type up. I understand where you're coming from as someone who once ran a consultancy, Tom, but from the perspective of someone who hires security firms and consumes their services—and this is essentially a TL;DR for the opinion I need to flesh out here—we don't do it because we trust. We do it out of necessity.

TODO: Bryant to flesh this out in between laundry rounds tonight.


My name is Thomas.


Well, apologies are due as I can't seem to edit that out of my post now. Sorry for making the assumption, Thomas.

But considering the gray patina my earlier comment has developed with time, I'll withhold my point as there doesn't seem to be interest in hearing it.


I actually didn't understand your comment and it was downvoted before I responded (I didn't downvote it). I just wanted to make sure people knew how to spell my name.


First, that's a very successful company. Second, you're extrapolating what an entire market will do from one ancient example. Today, there are a number of companies selling static analyzers, test generators, model checkers, and I think I've even seen fuzzing. They tend to succeed if their tools get results for clients with little work on the clients' side. They love push-button tools, too, that smoothly integrate into their workflow.

So, this company definitely has a chance. Further corroboration is the uptake Hypothesis was getting in Python shops. There's definitely a demand. I just don't know the numbers for this sector.


Thanks for the heads up! I haven't heard of Security Innovation, so definitely going to look more into what happened there.

I think the key difference, though, is that we don't do any consulting/training/manual pentesting. We're more of a dev-tool company than a security company, in that we don't aim to replace security engineers but to make their lives easier.


The training and pentesting came later, saving the company.

The company was created starting from early fuzzing research at Florida Institute of Technology. The whole point of the company was to fuzz things for software companies. That mostly didn't work out.

That all might not be your fate, but consider it a warning. You could do a better job of making things accessible, or you could offer a more acceptable price point, or you could advertise better, or maybe 2019 is different enough from 2002 that such a business is more viable.


Yes, software security in 2019 is markedly different from software security 17 years ago. 2002 predates the "Summer of Worms" and the Microsoft SDLC (for what it's worth, from 2004-2006, many of the world's software security firms were basically parked almost full-time in Redmond). It would be weird today to see an established company with a "shipping" product or SaaS service that couldn't provide a pentest attestation; back in 2002, it would be weird to see one that could.

For some perspective: the first published "integer overflow" attacks were from 2002 (the attack pattern was known but not published as such before then).


This is a great name, because at least 90% of your customers will find it funny. Some friends in Boulder made a company called RaffleCopter, and its meme-inspired name (based on ROFLcopter) seemed to help it get much more attention than it drove away. https://www.rafflecopter.com/ Another good thing about both names is they work even if you don't get the reference. I think that could be a must.

I don't know much about fuzzing but I'm inspired to give your tool a try if/when I get a chance.


We sort of picked the name as a joke at first, but we noticed that it was very memorable (as meme-inspired names tend to be), so we decided to keep it. Don't regret it yet.

If you're interested in learning more, our docs [0] explain everything from the ground up.

[0]: https://docs.fuzzbuzz.io/


Did you consider "Fuzzbizz"? It's a truer spoonerism, plus you get the "business" pun. Although maybe that's outweighed by it no longer rhyming. :P


This looks great, congrats on the launch! All enterprise software companies should take notes, having a "Buy vs. Build" section on the website is incredibly useful and saves time on both sides.


Glad you found it useful - it's a topic that comes up a lot when we talk to customers and we figured we should just be upfront about it.

Wasn't our idea though - we stole it from Labelbox [0] :)

[0] - https://labelbox.com/buy-vs-build


Some feedback going through the docs. For each of the languages you demonstrate a "BrokenMethod" but I don't understand what's broken about it. e.g.:

    func BrokenMethod(Data string) bool {
        return len(Data) >= 3 &&
            Data[0] == 'F' &&
            Data[1] == 'U' &&
            Data[2] == 'Z' &&
            Data[3] == 'Z'
    }
What's broken about this? It returns true for strings that start with "FUZZ", false otherwise, does it not? Python example:

    def BrokenMethod(strInput):
        if len(strInput) >= 2:
            return strInput[0] == 'F' and strInput[1] == 'U'

Other than not being idiomatic I don't see what's wrong with this method.

Next, it's not clear to me how you indicate success/failure of a test. Is success just any program that exits 0, and failure any program that exits non-zero? That would be my guess, but the docs don't say.

Typo: https://github.com/fuzzbuzz/docs/pull/3

This page is missing a link to the find-your-first-bug-in-Python example:

https://github.com/fuzzbuzz/docs/blob/master/getting-started...

The docs site loads slowly for me on an older iPad, and there's even a slight delay on a recent MacBook. Looks like it's maybe a font-loading issue? (Oh, it's GitBook. How awful. I guess there's nothing you can do about that other than use a different doc provider.)


Thanks for the questions & feedback! Concise docs are really important so this is all super useful. To answer your questions one by one:

1) The BrokenMethods are simple examples of programs that crash on buffer overflows/index out of range errors. If you were to pass "FUZ" into the Go method, it would check Data[3], thus causing a panic since there are only 3 elements in the string.

ninja edit: that python method in your comment IS a valid method with no error - a bit of a brain fart on my end when writing out the docs. It's been changed :)

2) In general a failure is any non-zero exit. We do this to be flexible in the way you report bugs. For C/C++ and Python this is usually with assertions, and in Go you can achieve something similar with:

  if !x {
    panic("Error")
  }
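In Python, the equivalent is just an assert in your fuzz method. A quick sketch (parse_config is a stand-in for whatever you're actually testing):

    def parse_config(data):
        # Stand-in for the real code under test: parses "key=value" lines.
        return dict(line.split(b"=", 1) for line in data.splitlines())

    def fuzz(data):
        # A failed assert (or any uncaught exception - here, the ValueError
        # raised for a line without an "=") makes the process exit non-zero,
        # and the triggering input is reported as a bug.
        table = parse_config(data)
        assert isinstance(table, dict)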
We also have other checkers or "sanitizers" that run with your code to look for certain bugs. For C and C++ code we support tools like Address Sanitizer, which report memory bugs like Heap Buffer Overflows and UAFs, and for Golang you can choose to fuzz your code with a race condition checker. These are just some of the examples of more advanced fuzzing methods we support, and we'll be making nicer tutorials/screencasts to showcase those over the coming week.

3) Thanks for the fixes - much appreciated. And yeah, we know GitBook is pretty slow, and we're in the process of moving to another docs provider.

If you've got any more questions please let me know!


Great, thanks for the answers. Maybe on the broken examples just put a comment on the line that's broken, so the reader doesn't have to expend mental effort figuring out why it's broken.


I believe it's broken if len(Data) == 3 as you are trying to access Data[3], which is out of bounds if Data is length 3.


Ah you're right, it was the Python example I couldn't find anything wrong with and then I copy/pasted the Go example without looking at it as closely. Python example has been fixed though per sibling comment.


That's awesome! Would love to use it on popular ruby gems to make sure they are secure (including Rails). Also having a free plan is perfect for experimenting. I predict this company will be a huge success.


Really interesting to see the desire for ruby support in this thread! It's definitely on our roadmap.

Shoot me an email at everest@fuzzbuzz.io and I'll let you know when we launch ruby fuzzing.


This is awesome! I wish you all the best and hope that this takes off.

I am curious about how you use AFL under the hood - how do you scale? Do you use a shared kernel and run a worker process for each physical core, or do you do some virtualization, or perhaps run with a kernel patch such as https://github.com/sslab-gatech/perf-fuzz/ ? My experience is that you will hit a wall pretty quickly unless you start multiple kernels using virtualization, or unless you simply have a very slow binary so you don't get a high number of execs/s to start with.


We actually distribute the fuzzing workload across physical machines, for precisely that reason. Each instance of AFL gets its own kernel & physical core, and we use a staged synchronization algorithm to make sure all of the machines' corpuses stay up to date.

All of this was done to try and keep the scaling as linear as possible, so that when you double your CPU count you're doubling your execs/second as well.
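Very roughly, the sync side looks like this (simplified sketch with made-up hostnames; AFL instances already import anything new they find in sibling queue directories under a shared sync dir, so the distributed part is mostly shipping queue entries between machines):

    import subprocess
    import time

    NODES = ["fuzz-node-1", "fuzz-node-2"]  # hypothetical worker hosts
    SYNC_DIR = "/fuzz/sync"  # the afl-fuzz -o sync directory

    while True:
        for node in NODES:
            # Pull each remote instance's queue into the local sync dir;
            # local afl-fuzz instances periodically scan sibling dirs and
            # import any inputs they haven't seen.
            subprocess.run(["rsync", "-a", "--ignore-existing",
                            node + ":" + SYNC_DIR + "/", SYNC_DIR],
                           check=True)
        time.sleep(60)  # staged: sync in rounds rather than continuously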


Thanks for your quick reply!

This sounds like a good solution. I was trying to solve this problem on my own as well and ended up making a minimal kernel+afl image that I then boot multiple times using deduplication features in order to save RAM (I don't have 256 gig ram like you do in a proper server). Each instance ends up eating quite a lot of RAM even with a limited root filesystem so that's why I wanted to keep it low. I'm on a 2990wx rig which was kind of a disappointment from a fuzzing perspective because of the limited memory bandwidth, but that's for another discussion.

Do you think it would be worth trying out that snapshot approach that I linked in the parent comment, or might it not be worth it? I was thinking of rebasing their patch onto the latest master and getting it to work again - sadly the authors seem to have abandoned the project; at least there haven't been any public changes since a year back.


Have you thought of integrating some form of exploitability analysis[0][1] for the crashes etc. that fuzzing locates?

So let's say I upload some FOSS project and end up finding some crashes/potential vulnerabilities. Have you considered some sort of tie-in/integration with bug bounty programs, so that I could get a small pay-out without having to go through the trouble of figuring out how exploitable a given crash might be, and more importantly without having to deal with trying to get the attention of the project?

[0] https://www.microsoft.com/security/blog/2013/06/13/exploitab...

[1] https://github.com/jfoote/exploitable


Yep, we're definitely going to integrate more automated analysis. As of now we do some rudimentary analysis based off the type of the bug (Heap buffer overflow, UAF), read/write size, and similar metrics, but we'll be adding more advanced methods of categorization as the platform matures.

We've been thinking about the best way to use Fuzzbuzz to benefit the OSS/bug hunting community, and the integration idea is a great one. We're also providing free plans with extra CPU power for security researchers & bounty hunters.


Nice. I'm sure you've looked into various backends to use (in addition to AFL). Just wanted to give a shoutout to radamsa[0]. My [somewhat out-of-date] experience has been that it sometimes produced findings that AFL didn't (because of e.g. its different approach to the infinite input space).

[0] https://gitlab.com/akihe/radamsa

In regard to CPUs: my laptop reports 4 CPUs, my workstation 16. To my mind, the value for someone involved in fuzzing would be if you could take away the hassle of scaling fuzzing 'transparently' to 100 or 1000 CPUs. What I am suggesting here is that on your pricing page you might be off by a factor of 100 in regard to the number of CPUs that would actually make the offering compelling to someone considering outsourcing their fuzzing infrastructure.


Radamsa is awesome! Definitely agree, and one of the goals for Fuzzbuzz is to be able to hot-swap between fuzzing backends without any interface changes (or to use all backends at the same time, to account for differences in findings).

re: pricing, we do offer infinite scalability in terms of CPUs, but that might not be as clear as we'd like on our pricing page. Or maybe I'm misunderstanding you. Either way, if you have any more thoughts/suggestions on pricing I'd love to hear them.


I thought someone had finally done it. Fizzbuzz as a service.

This is even better.


Actually, asking someone to write Fizzbuzz as a service isn't a bad idea for an interview question. And now that I've written this comment, it will cease to be not a bad idea in about 30 minutes.


Well if you're going to do it as a service you might as well make it enterprise ready. Allow me to introduce you to Java EE Fizzbuzz

https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


It's 2019. Should be implemented as a Lambda.


This looks awesome. Wish it integrated with Ruby projects.


Ruby is one of the languages on the roadmap, but it's not super high-priority right now, mostly because we haven't seen a lot of interest. If you have a specific project/use case in mind I'd be interested to hear more about it.


Uploaded CSVs or PDFs would be interesting to me (https://docs.ruby-lang.org/en/2.5.0/CSV.html) (https://github.com/gettalong/hexapdf). There's a fair amount of parsing that's probably not been fuzzed significantly.


Seconded


This is such an awesome service.

Congrats Everest (and team) for making it easier to make software more secure!


This looks very nice. I will definitely try it out once support for Java becomes available.


Java is one of the next 2 languages we plan to support (hopefully in the coming month).

If you send me an email (andrei@fuzzbuzz.io), I can let you know when we launch Java support.


I couldn't find any good AFL-like fuzzers for Java, so I wrote one not too long ago: https://github.com/cretz/javan-warty-pig. Feel free to take it and hack/reuse it.


Thanks for the link! We've been looking at all the current AFL-like/AFL wrappers for Java as we decide how best to implement Java fuzzing in Fuzzbuzz, and yours looks pretty nice.

Definitely going to play around with this :)


As a somewhat newbie, how would I use this on, say, my JavaScript web app and React Native apps?

Thanks!


Since the type of fuzzing you can do right now on Fuzzbuzz is language-specific, you wouldn't be able to fuzz JavaScript code.

We are in the process of building a fuzzer for generic web-apps, so watch this space :)


Godspeed!

I recall there is (or was) a company based out of Santa Cruz, CA called Fuzz Stati0n with a pretty similar concept. Might be a good idea to ‘compare’ experiences.


I am the founder of Fuzz Stati0n - now a product security engineer at Looker.

Best wishes and happy to talk.


A number of questions:

Do you guys support fuzzing by protocols? Syscalls, REST, or SQL? It might be faster to extend protocol fuzzing than fuzzing by language (I'm not sure though). It'd be cool to have a fuzzer for Apache Calcite; it's a library to slap on a SQL interface to your database.

Any plans to extend fuzzing to property-based testing?

Do you guys fuzz your fuzzer (dogfood)? Probably useful, but also funny :)


We don't support protocol fuzzing yet, but it's definitely on our roadmap - we wanted to start in an area that we felt was lacking the most, and then move into other types of fuzzing. We do have some novel REST API fuzzing techniques in mind that we're really looking forward to implementing as well.

We're also thinking about extending to property-based testing. There are some really awesome hybrid testing tools out there, such as DeepState (https://github.com/trailofbits/deepstate) which combines Symbolic Execution and Fuzzing behind one clean interface, and we'd really like to push the boundaries of that type of testing.

And yep, we do! Everything is written in Go, which is part of the reason it's one of the first languages we support.


A small nit: the video is set with these dimensions: width: 940px; height: 529px;

On a scaled display it shows up fairly small, so it's hard to see what's going on without going full screen. As a quick fix, could you enable the full-screen button? And as a longer-term fix, consider recording another version where the windows are smaller or there's some sort of magnification.


My bad - I accidentally disabled the controls when I was setting it up. Should be fixed now.

Also, yeah, we're not video experts, so we had a feeling the video might not be the correct format. We just wanted to put something up that showcased the platform without forcing you to sign up, but we'll definitely make sure our next video is in a better format.


No problem, thanks for addressing it. Viewing it on YouTube worked for me. Congrats on your launch and best of luck!


I started learning about this type of testing by writing Hypothesis tests for Python code: https://hypothesis.readthedocs.io/en/latest/

One of the things that became a source of frustration is writing the specs that define the shape of the inputs.

Does FuzzBuzz make that any easier?


That's a problem that we've been thinking about a lot. The way our fuzzing works right now is that your method consumes an array of bytes, which you can then use to build up arbitrary structures. It's simple, but manages to be really generic and flexible at the same time. Of course, it means you do have to define what your inputs look like.
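A minimal sketch of what that looks like today (hypothetical names; process_record stands in for the code under test):

    def process_record(name, count, flag):
        pass  # stand-in for the real code under test

    def fuzz(data):
        # Carve typed values out of the raw byte stream by hand.
        if len(data) < 2:
            return
        count = data[0] % 16  # a small int
        flag = bool(data[1] & 1)  # a bool
        name = data[2:].decode("utf-8", "replace")  # the rest as a string
        process_record(name, count, flag)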

We do have plans to build some tools to make this easier. I'd like to see a scenario where defining inputs is as simple as specifying the data types that your code requires. (Or perhaps even automatic detection, for less complex cases)


Some people use Protobufs to solve this, but when I work with C++, I use a class that takes (data, size) as input to its constructor and implements an overridable Get<T> method.

So you can do: auto s = datasource.Get<std::string>();

For each type I need, I override Get<T> for that type. If the data source class is out of data, I throw a specific exception that I catch at the global level. This works like a breeze, and the advantage over Protobufs is that it's faster (no Protobuf parsing overhead or errors) and you never consume more than you need.
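For anyone doing this in Python, a rough analog of that pattern (a sketch, with assumed details):

    class OutOfData(Exception):
        pass  # caught at the top level of the fuzz target

    class DataSource:
        def __init__(self, data):
            self.data, self.pos = data, 0

        def _take(self, n):
            if self.pos + n > len(self.data):
                raise OutOfData()
            chunk = self.data[self.pos:self.pos + n]
            self.pos += n
            return chunk

        def get_int(self):
            return int.from_bytes(self._take(4), "little")

        def get_str(self):
            # Length-prefixed string: one length byte, then the payload.
            n = self._take(1)[0]
            return self._take(n).decode("utf-8", "replace")

Same property as the C++ version: you never consume more bytes than the types you ask for actually need.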


This looks really cool! Fuzzing is something I've been looking into more and more as I learn about it. I'll definitely keep this in mind for the future. Have you guys considered Rust support?


We have! afl.rs[1] is awesome, and seeing as it's found some interesting bugs, I think Rust would be a great addition to Fuzzbuzz. It's on our roadmap.

[1] https://github.com/rust-fuzz/afl.rs


Can you add support for social login like GitHub OAuth? Thanks!


We're already sort of integrated with GitHub, since you can set up your projects to automatically pull updates from repositories, so GitHub login is definitely on the roadmap!


Why do this in the cloud? Wouldn't it make more sense to make standalone software that does the fuzzing?


> It turns out that many teams don’t want to invest the time and money needed to set up automated fuzzing infrastructure, and using fuzzing tools in an ad-hoc way on your own computer isn’t nearly as effective as continuously fuzzing your code on multiple dedicated CPUs.


Sounds interesting! At least I learned something new: fuzzing.


Can this be done in Rust?


Rust has a bunch of fuzzing tools, see https://github.com/rust-fuzz


Obligatory plug for the OG of Fuzz Testing, Barton Miller: http://pages.cs.wisc.edu/~bart/fuzz/

#OnWisconsin


Fascinating!


wewlad!



