Launch HN: Federacy (YC S18) – bug bounties for startups
71 points by jsulinski on Aug 2, 2018 | 23 comments
Hey all, we're James and William, founders of Federacy (YC S18). We're building a bug bounty platform for startups. (https://www.federacy.com)

I was an early engineer at MoPub, responsible for security and infrastructure. By the time we were acquired by Twitter, we were 20+ engineers, but growing so fast that building software and systems securely was almost an impossible task. I found that there were never enough hands; I couldn’t peel engineers from revenue-driving features and it was really difficult to find contract or full-time security engineers.

William and I started Federacy to make it easier for startups to secure themselves. We think the key is to pair startups with extremely talented, outside security researchers to test their applications for vulnerabilities, review code, and help implement best practices—essentially serving as an outsourced CISO.

We saw that the best security minds we knew either weren't interested in a full-time role for a single company, weren’t able to work in the United States, or already had day jobs at the largest Internet companies. We thought that if we provided an efficient, no-bullshit way for them to do work that they enjoy, make a real difference in how startups secure themselves, and make money while honing their skills, we could unlock a huge amount of talent that wasn’t accessible previously.

We have a lot of respect for what HackerOne and BugCrowd have built, but they are focused on serving mostly enterprise companies with large engineering and security teams, who can afford their services. Their revenue comes largely from triaging the high volume of low-quality and automated/spam bug reports that come through their platforms. These services can be in the six figure range. It may be a good business, but that isn’t where our passion lies.

Startups can't afford these services, and the burden of triaging low-quality bug reports can completely overwhelm even the best dev teams, leaving them worse off than before.

We think there is a better way:

• We hand-pair startups with a small team of pre-vetted researchers who are subject matter experts in your stack.

• Researchers test your infrastructure for vulnerabilities in an initial scan, and work closely with you to resolve issues and implement best practices.

• Your program can be private, where only you and the researchers you approve will have access to your program. You don’t have to provide source code and all initial testing is done with only the information and access your normal users have.

• We create your program for you and have you up and running in 5 minutes (or you can self-serve, if you prefer).

• We only charge for results (when a researcher finds a vulnerability).

We just started building a couple months ago and are looking for early feedback. Here’s an invite link we made for HN:


We’ll be around all day to chat and are very happy to answer any questions as well as discuss how we built our product, security-related topics (systems automation, vulnerability reporting, coping with imposter syndrome, etc.), what it's like building a startup with family (we’re twin brothers), or anything in between.

Some specific questions we have:

If you’re familiar with other bug bounty platforms, are there any issues we can tackle early on that made the experience frustrating for you?

Would you consider contracting an outsourced CISO or a pentest with a security researcher that has reported vulnerabilities to you through your bug bounty program?

We've used HackerOne at a startup I work at (10-20 employees). We had to turn it off because every couple of days we were bombarded with the same issues, reported by people running basic pen-test scripts. They all seemed to have the same toolkit, and would just run the same tests and report the same bugs. Most of these were either invalid, or just not a priority, and so a waste of our time to read. The write-ups were also poor, with broken English, which wasted even more time.

Before signing up for another bug bounty program I'd want to know that:

1) The testers are not mostly amateurs running the same toolkit on 100 sites per day -- the same toolkit that 10 other testers ran yesterday.

2) The number of dupe reports is basically zero. If we get a bug report and ignore it, making no response at all, we still don't want to see the same report 10 times over the next 2 months.

3) The write-ups should have proper English, good grammar, and be very clear.

4) If a user reports 10 bugs, and we only want to pay for 1, that should be totally fine. The other 9 are either dupes that we have ignored before, or new reports that are just not a priority or worth looking at.

5) We basically never want to get into a negotiation with hackers over whether a payout should be $2000 because 10 bugs were reported, when we already know about all of them and, frankly, don't value them.

Your experience is exactly why we're building Federacy.

Bug bounties can be an incredibly efficient way to work with outside security researchers to find vulnerabilities, test for best practices, etc., but done poorly, they can cause more harm than good. We want to make them work for startups as well as they do for companies like Dropbox, Shopify, and Google. We have our work cut out for us -- but if we're successful, we think it could materially improve how startups secure themselves.

All the dev teams we've been part of share the same challenges. We're always overburdened with work on revenue-producing features, so being flooded with more work that ultimately doesn't add much value in securing our software is the last thing we want.

Right now our solution for spam, dupes, and low-quality reports is to be extremely selective with the security researchers we allow on the platform.

We're launching in private beta so James and I can hand-pair researchers, help companies write their VRP, and review every vulnerability report.

Other ideas we’re working on:

- Very clear “Known Issues” / “Not Issue/Out of Scope” sections

- De-duping based on comparing report attributes

- Utilizing machine learning to improve de-duping based on description of vulnerability

- Collaboration. Encouraging companies to look at their approved outside researchers as a part of their team and building tools to facilitate this
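As a rough illustration of the attribute-based de-duping idea above (the attribute names and normalization rules here are hypothetical, not Federacy's actual schema), a first pass could fingerprint each report on its normalized attributes and flag collisions before a human ever sees the duplicate:

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass(frozen=True)
class Report:
    vuln_class: str   # e.g. "xss", "csrf" (VRT-style category)
    url: str          # endpoint the issue was reported against
    parameter: str    # affected parameter, if any

def fingerprint(report: Report) -> tuple:
    """Normalize the attributes that usually identify a duplicate:
    same vulnerability class, same path (query string and trailing
    slash ignored), same parameter."""
    path = urlparse(report.url).path.rstrip("/").lower()
    return (report.vuln_class.lower(), path, report.parameter.lower())

def triage(reports):
    """Partition incoming reports into first sightings and likely dupes."""
    seen, firsts, dupes = {}, [], []
    for r in reports:
        fp = fingerprint(r)
        if fp in seen:
            dupes.append((r, seen[fp]))  # (duplicate, original report)
        else:
            seen[fp] = r
            firsts.append(r)
    return firsts, dupes
```

An ML layer could then catch the dupes this exact-match pass misses, by comparing free-text descriptions of reports whose fingerprints differ.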

Do you think any of these would help? Are there other ideas we should be focusing on that might solve these problems more efficiently?

My 2 cents: I used to work on the appsec team at Twitter and can attest that we could never get MoPub to resolve any of your security vulnerabilities.

Noise is certainly a problem on bug bounty platforms but our team handled all of that - by the time vulnerabilities reached you they were already valid, triaged, important issues to resolve.

> We're always overburdened with work on revenue-producing features

This is the bigger problem - if your leadership doesn't care about security then it doesn't matter whether you use Hackerone or Federacy or something else, it's still not going to be a priority. This was the case with RB, in my personal opinion.

Of course many companies do care or want to care but still need some handholding - I think Federacy can provide them a lot of value and wish you a lot of success in that.

Hah, yeah, this stuff is hard and acquisitions make it even harder.

I think you started a month after I left. We built a lot at MoPub in a short period of time and when we were acquired I had a mile-long backlog. The Twitter security team was great though and built a war-room during integration. We worked some intense hours leading up to the IPO and over the Holidays, and I’m proud of the work we all did. We migrated a sprawling stack that supported what was then the largest mobile ad exchange and billions of sub-second auctions over just a few weeks. Most of the MoPub team transitioned to other projects and teams quickly though and I left not that long after.

Totally agree that it starts at the top. If the C-level doesn’t care, there just won’t be the resources it takes to build good, secure software. We intend to focus on supporting companies who do care, and we think this focus will also impact how companies using Federacy interact with researchers. We want outside researchers to be viewed as allies, not as a burden.

Have any thoughts on how we can best accomplish this?

Every bug bounty platform has tried to be "selective" in the researchers they allow in when they start. You'll soon discover that selective doesn't scale.

The only way you are going to disrupt the current market is by hiring on your own salaried pentesting talent to participate.

What do you think caused being selective not to scale at other platforms? What do you think we can do to keep the quality of our researchers extremely high?

What we've heard from talking to a bunch of talented researchers is that they've been frustrated with payout rates (too low for the amount of work), the tone of interactions between researcher and company, and the number of opportunities/companies where they can add value given their skillset -- many have said they do the work in large part to learn.

I think there is probably a lot we can do to create/keep balance in the marketplace to address a lot of these if we take things slow.

Would love to hear more of your thoughts on the strategy of building out our team with salaried pentesting talent. Why do you think that is critical to adding a lot of value for startups?

Has anyone ever tried requiring an application fee to help with the bombardment issue?

We've tossed around ideas like this -- including something similar to how Numerai uses staking for their data science competitions. The security researcher would stake a small amount based on their confidence that the report is an impactful vulnerability.

I think it's an interesting idea, but could be complicated to get right. We’re also wary of creating barriers that are too prohibitive for some of the really great and hard-working researchers in the world.

I think an easy solution may be to build good vetting tools and a thorough process: a short application, technical interview, and/or trial periods for new researchers. Right now though, we’re personally reviewing every researcher. :)

A big part of this, too, is providing an environment where researchers can learn and showcase their existing contributions. I think there's a lot we can do there, while still allowing researchers to provide a lot of value.

What do you think?

That'd be interesting--a small, maybe even just $1-10, deposit that gets refunded if the bug is legitimate.

I don't think punishing dupes is a good idea though, because a researcher has no idea (and should have no idea) whether their bug has been found before, so dupes should probably still result in a refund.

However, as a kid who has no credit card, but has found some pretty spicy bugs (and gotten rewarded for them), it would make it impossible for me to report them.

We definitely don't want to discourage you from contributing. It also doesn't necessarily have to be money, you could stake reputation you've previously earned.

The dupes problem is super important, in my opinion, because it's currently an unpleasant experience for both sides. Not getting paid out for valid work that has simply been reported before (but not disclosed) can make doing this kind of research as a freelancer unfeasible, while triaging duplicate reports burns time for dev teams.

We've tried to build out in-scope/out-of-scope functionality that makes it super simple to keep your scopes current (could even update automatically via API). We definitely want to build out additional functionality that makes publicly acknowledging known, 'won't fix', and non-impactful issues super easy, perhaps by pulling most of the information from a duplicate report. Do you think that’d be useful?

The other thing we want to really focus on is the disclosure process, and encouraging companies to do it as often and soon as possible.
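To sketch what the machine-readable scope mentioned above might look like (a hypothetical structure, not Federacy's real API), researcher tooling could pre-check a prospective report against the published scope and known issues before it's ever filed:

```python
# Hypothetical published program scope; a platform could expose this
# via API so researchers' tooling can check targets before reporting.
PROGRAM_SCOPE = {
    "in_scope": {"app.example.com", "api.example.com"},
    "out_of_scope": {"blog.example.com"},
    "known_issues": {
        ("clickjacking", "/login"),  # acknowledged, won't pay out
    },
}

def precheck(host: str, vuln_class: str, path: str) -> str:
    """Classify a prospective report against the published scope."""
    if host not in PROGRAM_SCOPE["in_scope"] or host in PROGRAM_SCOPE["out_of_scope"]:
        return "out_of_scope"
    if (vuln_class, path) in PROGRAM_SCOPE["known_issues"]:
        return "known_issue"
    return "ok"
```

A check like this would stop a large share of out-of-scope and known-issue reports at the source rather than in triage.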

You could try a prepaid card. The overhead was $5 when I used them. They were good for keeping my real card numbers out of circulation, too.

Honestly, I think a 99 cent fee could help to remove a lot of the noise.
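A minimal sketch of the stake-and-refund rules floated in this sub-thread (amounts and verdict names are illustrative): valid and duplicate reports both return the stake, since a researcher can't know a bug was already reported, and only clearly invalid reports forfeit it.

```python
def stake_outcome(verdict: str, stake: float) -> float:
    """Return how much of the stake comes back to the researcher.

    Per the thread's suggestion: valid bugs refund the stake (the
    bounty itself is separate), duplicates still refund it, and
    only clearly invalid/spam reports forfeit it.
    """
    if verdict in ("valid", "duplicate"):
        return stake
    if verdict == "invalid":
        return 0.0
    raise ValueError(f"unknown verdict: {verdict!r}")
```

With rules like these, even a token stake would make mass-submitting scanner output unprofitable while costing a careful researcher nothing.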

As a "researcher" I don't find your vulnerability levels too informative. I'd suggest you use or adapt the bugcrowd taxonomy: https://bugcrowd.com/vulnerability-rating-taxonomy

That is a model that has been shaped from the experience of many programs and has a clear, "yes this is an issue but no you're not getting paid" level which is important for avoiding thousands of time-wasting reports such as non-perfect HTTPS headers, etc.

I'd be interested in hearing how you plan to deal with duplicate reports. This is an area where HackerOne does better than BugCrowd. HackerOne is more interested in disclosure and getting reports to a point where they can be disclosed. If a bug is marked duplicate, you are given access to the original report, which prevents falsely marking duplicates to avoid bounties.

I agree with you, they aren't very informative. We're big fans of BugCrowd's work in this area, and intend to adopt their VRT, though we're still considering how to make P1/P2/P3/P4 more clear/descriptive at a glance.

We're also still brainstorming and looking for good ideas on how to handle duplicate reports. At this point, we're tackling it by vetting researchers and helping with the ones who ignore 'Known Issues' and out of scope limitations. Like HackerOne, we're very interested in encouraging companies to disclose their vulnerabilities, because these disclosures are important to their users, the people who may build on top of any service they offer, and the researchers who are being given public credit.

In regard to a company marking a bug a duplicate to avoid bounties: those are definitely not the type of companies we want to work with. I'm not sure technical solutions to mitigate that sort of behavior are necessary when we can curate who has access to the platform. We're going to keep the quality of both researchers and companies high -- and it goes both ways.

Our Vulnerability Disclosure Policy Template, which is based primarily on Chris Evans's work at Dropbox, and is inspired by a number of other well-written programs, puts full control of payments/recognition in the hands of the company. I think our best recourse on this issue is simply to not include bad actors.

Do you have any ideas for other ways we can limit dupes? Or how to really effectively communicate what is out of scope or a known issue?

The baseline VRT P5's a bunch of things most startups really want to know about, like exposed admin consoles and broken password reset flows.

What does it mean to "contract an outsourced CISO" to a researcher who reported through a bug bounty program? What's an "outsourced CISO"?

I think it's unlikely that "CISO" is the word you want to use in your copy.

How are you vetting researchers? I logged in as a researcher, and it looks like it works just like H1 works: there are public bounties, and private ones for which admission is gated by performance on the public bounties.

It is not the case that H1 typically costs six figures; typical costs for a startup on H1, with triage, are low five figures.

We manage bug bounties for several of our clients (we run outsourced security teams for startups). If there's a problem we have with bounties, it's not getting enough submissions from them. Triage can be annoying (I kind of enjoy it), but we do full-scope penetration tests for each of our clients, and it's noteworthy how much more a real pentest finds than a bounty program. There are different incentives, different information available, and different kinds of work result.

(There are things bounties do better, too; bounties are good for finding oddball XSS and CSRF problems, and good at corner-case web hygiene stuff).

How are you attracting talent? I don't really understand the business model. Bounty researchers already have a bunch of platforms they can use if they want to do bounty-type scanning. Why are they using yours?

Just a quick preface: we started building Federacy a little over two months ago as part of this batch of YC companies. We're a team of two FTEs. Many or all of our assumptions could well prove to be woefully off-target. But we think that if we keep our heads down and build what the startups and researchers on the platform ask for, we can make at least a small difference in how startups secure themselves.

A huge majority of the startups we’ve talked to don’t have a bug bounty program, haven’t worked with an outside pentester, and honestly, don’t know where to start.

Most startups don’t have a CISO or dedicated security team, so by “outsourced CISO” we mean: having a designated, vetted, and experienced person/team on-hand who can help with higher-level strategy and architectural decisions; essentially, a small piece of what you provide at Latacora. We think there are very few firms with your level of experience that are working with non-enterprise customers. Do you agree? Do you think there is a better description?

We’re at the very early stages of conceptualizing how we can make the high-level advisory services work. In talking to a bunch of talented security people at large Internet companies etc, we found a lot were interested in working directly with startup CTOs if they could make a significant impact and not have to deal with the tedious aspects of running a consultancy. Our thinking was that if we could build matchmaking on top of the VRP and other tooling we’re building, it could be an efficient way to connect the two and create a lot of value for both sides. What do you think?

I think the value of a bug bounty program probably comes down to the quality of the people doing the work and the willingness of the company to engage actively with the researchers. It’s our job to manage the balance between the researchers on the platform, and the active programs, so that both find value -- and ultimately, yield more secure startups.

We hope the best startups will use our bug bounty program alongside full-scope pentesting and a myriad of other outside resources. I think you see this done well at some companies that have really good security postures. Shopify, Dropbox, etc. have really strong internal teams, work with outside researchers, and still pay out a lot of bug bounties.

We're currently vetting researchers manually -- James and I are reviewing each registration and reaching out individually so we can pair them with startups on the platform. We’ll build out functionality to help over the long run, and are already tossing around ideas to infer trust through vouching, etc. But, we think it’s important to show our work early and get people using the platform to help guide these decisions.

We’re reaching out to researchers directly -- through our friends, the Y Combinator network, and even cold, if we find someone we think would be a good fit for one of our programs. It’s definitely self-selecting, to an extent, as we’re very much an early-stage startup ourselves, and the work they’ll be doing is with mostly early-stage startups.

Appreciate the heads up wrt H1 costs -- edited the original post (edit: can't edit the original post, derp, but duly noted). I think low five figures still puts H1 out of reach for a lot of companies, and that a well-run bug bounty program can add a lot of value for almost every startup. I think there are probably tens of thousands of startups that really should be engaging outside security researchers, and that, of course, compounds an already severe talent shortage.

What do you think?

"Would you consider contracting an outsourced CISO or a pentest with a security researcher that has reported vulnerabilities to you through your bug bounty program?"

Budget permitting, this seems like a no brainer. I mean, they already have some familiarity with our app. The only thing I would be worried about is people gaming the system: finding some low hanging fruit or running their toolkits on a bunch of apps, then charging a lot of money and providing no more value.

Yeah, that definitely makes sense, and I agree.

At its core, Federacy is a marketplace, and the surest way to choke off transactions would be to make it difficult for startups to extract real value. We'll have to work hard on the tools (reputation, vetting, etc.) for startups to trust and work with really talented researchers.

Not quite as important, I think, but also interesting is what tooling we can build to let researchers focus on the work they enjoy, and that adds the most value for startups. If we can make the reporting process more intuitive, they can focus more on research -- and less on writing traditional pentest reports.

Your bullets all line up with what Synack and Cobalt.io are doing. How do you differentiate from the two of them, who themselves are already competing hard with each other? Both of them strictly curate their test base, allow for strictly-private programs, allow for researchers to work closely with firms for resolution, can launch and operate your whole program, and charge per finding.

To be completely forthright, we don’t know. Have you used Synack or Cobalt? Would love to hear your experience. We haven’t heard much about Cobalt, but there are some sharp people behind Synack.

That said, I don’t think there can be too many people trying to help companies secure themselves.

I think HackerOne and BugCrowd have <1,000 customers each. I'd guess Synack and Cobalt have fewer. I think less than 1% of YC companies have a bug bounty program -- and almost none below 50 employees do.

We would like every company to have a bug bounty program, and that is what we're tailoring our product to. (We'd certainly rather pay an outside researcher who finds a vulnerability than risk our customers' data.) Synack et al., I'm guessing, run tens to hundreds of thousands of dollars per month, and accordingly their software is focused on supporting a small number of large/enterprise customers. We think something important happens when you have tens of thousands of startups/companies using the same marketplace for bug bounties and pentests.

I think we probably all share the same general mission -- but our approach is a bit different: to build software that will be tailored to startups, and to have a lot more of them.

Hi James and William - congrats on founding Federacy.

I'm the CEO and Co-founder of Cobalt.io. I love startups and it takes a lot of courage to get going, so I applaud you for taking the leap and helping innovate in this space.

We started building the Cobalt.io platform back in 2013. We originally launched as a bug bounty platform and over the last 5 years have evolved into a Pen Testing as a Service [PTaaS] platform.

During this evolution I did a lot of thinking around crowdsourcing freelancers for security testing. I'll recommend these two blogs around how the market has evolved over the years and the different cases where bug bounties make sense vs. pen tests and vuln assessments. - https://blog.cobalt.io/deconstructing-and-rewiring-bug-bount... - https://blog.cobalt.io/the-third-wave-of-application-securit...

I believe you are in the bay area. Feel free to ping me at linkedin or twitter and I'll be happy to meetup.

Cheers, Jacob

Hey Jacob, thanks for the kind words, and taking the time to leave a comment. I'd love to meet up, pinging you now!
