
Bug bounty platforms buy researcher silence, violate labor laws, critics say - keydutch
https://www.csoonline.com/article/3535888/bug-bounty-platforms-buy-researcher-silence-violate-labor-laws-critics-say.html
======
gyanchawdhary
This article is (a) very long, written more as a whitepaper/opinion piece
than a journalistic report, and (b) very likely a paid hit piece against
bug bounty companies.

However, I'm not surprised, as the timing makes sense. The COVID-19 situation
has impacted a lot of pen-testing firms [canceled testing contracts, security
staff augmentation, long-term security audits] and it's expensive AF for these
companies to keep their consultants on the bench.

Bug bounties are by no means perfect, but whatever this article is trying to
point out isn't going to stop CISOs from doubling down on bug bounties. In
fact, CISOs who've been skeptical of BBs will now be pressed to explore this
option even more.

About Me: Sold a few security startups; before that, worked as the head of app
sec testing for a bailed-out British investment bank.

~~~
tptacek
Who would _pay_ to put out a hit piece on bug bounty providers? What pentesting
firm do you think has their shit together to the extent that they run a media
relations team? Who among them doesn't have revenue streams from triage
consulting?

It's a bad article. CSO publishes a lot of bad articles. But there's no way
this is a paid piece. The people quoted in this article believe what they're
saying.

~~~
gyanchawdhary
> has their shit together to the extent that they run a media relations team.

Any company that drops cool research (ATM / automotive hacking / SCADA, etc.)
uses PR firms to drum up their work (which is fair, IMHO, for SEO and for
showcasing your new capabilities and service offerings). I did it, and I know
loads of boutique firms in the UK who do it. I also know of education companies
working closely with PR firms to publish articles about the "cybersecurity
skills gap" or "cybersecurity shortage" to manufacture demand, and they are
pretty effective too.

So it's not that outlandish to think that security companies are tactical
enough to have PR firms write such articles either. My suspicion about this
specific article came from its timing.

> Who among them don't have revenue streams from triage consulting?

Triage consulting is good money, but it's mostly getting crumbs from the
master's table. BBs are at the apex of this pyramid and will remain there.
Sharp boutique firms that specialize in niche stuff will obviously always
thrive, but that's a different topic.

>It's a bad article. CSO publishes a lot of bad articles.

Agree

> But there's no way this is a paid piece.

Maybe but I'm still doubtful.

> The people quoted in this article believe what they're saying.

Fair point.

~~~
lvh
I'm quoted in the article. I was not paid for it. I am a practitioner, and I
believe what I am saying. Which, to reiterate succinctly, is:

- Bug bounties are a waste of time and money for most startups, because you
will drown in nonsense scanner findings.

- There is a straightforward explanation in which HackerOne benefits from
that dynamic, and their business model incentivizes them to make that dynamic
worse, not better.

I would be happy to debate those points, but frankly they don't seem that
controversial.

While I am not speaking for Latacora in the article, I obviously make money
when Latacora does, so your claim warrants analysis. Latacora's business model
is long-term proxies for an initial security hire. It does not include "triage
consulting" as a revenue line item. We do not do one-off 2x2 pentests. There
is relatively little overlap between our clientele (startups considering their
first security hire) and the companies spending a lot of money on bug
bounties. Most of our clients do not, and generally should not, host bug
bounties. For clients that do, we operate the program, but do not bill
separately for that service. Therefore Latacora's financial incentive would
clearly be to have H1 run the program with triage, because it
straightforwardly removes a cost.

Our business incentives are aligned with security outcomes, because we stick
around long enough to see the impact of our choices. I think bug bounties are
fine for a subset of customers. Given the size of our consultancy and our
customer base, and the somewhat obvious fact that we're resource-constrained
in scaling, it's fairly clear I have no financial stake in this: people do not
buy more Latacora because they bought less HackerOne. I can't speak for
HackerOne, but I can't imagine they think we're a competing service, nor do we
think the converse.

COVID-19 has impacted a lot of consultancies. It has also made stock markets
overall drop 30%, so the suggestion that this article is some kind of
consultancy cabal master plan to screw over bug bounty programs specifically
requires more evidence to be credible. As Thomas has pointed out: the idea
that this is some kind of PR campaign does not pass the sniff test. Our
headcount is a baker's dozen. We do not employ a PR firm. You're probably
right that, say, NCC does; perhaps NCC even considers themselves a proxy good
for H1, but no NCC representative is quoted in the article.

Moussouris is a former executive and current shareholder of HackerOne. It's
true that she is still in the bug bounty business! But if you look at how
their business is structured, it emphasizes fewer, larger corps and
governments and longer-term engagements with a strong compliance and legal
focus. Whatever upside she gets from bug bounties being painted in a positive
light but HackerOne in a negative one (stipulating this article does that,
which I think is questionable) does not outweigh the clear, likely financial
incentive she does have in HackerOne doing extraordinarily well, IPOing, her
shares turning liquid, and her getting a big payday. Do you have evidence that
is not the case?

I don't really see the legal experts in the article taking the position that,
e.g., these bug bounty companies regularly violate AB-5 unless they mean it.
Perhaps they are getting paid, but that's a pretty serious allegation I
haven't seen any evidence for. And, if they're just in it for the money,
wouldn't it be much easier for them to take it from the allegedly-violating
side and argue _their_ case? The contractor-turned-employee side isn't exactly
where the money is, and I can't imagine that whatever the going rate for a
quote is is the most effective way for a lawyer to make money. (Regardless, I
am not a legal expert, and AB-5 violations are not part of my argument--the
extent of my argument is that "this is a paid hit piece" seems unlikely in
their case.)

~~~
tomlockwood
Like I said in another comment, you may not have been paid because of this
article, but the advertisers on CSO Online paid for this article in part. So
even if it wasn't a paid piece, it was paid for by others in the industry.

~~~
tptacek
That is obviously not what you meant, and one way we know that is that you
wrote a comment insinuating this at length, and in particular about multiple
people quoted in the article, one of whom you now find yourself conversing
with.

~~~
tomlockwood
Quote me where I insinuated it was a paid piece.

~~~
tptacek
I'm comfortable with what the thread says about our respective arguments. It's
all right there for people to read.

------
tptacek
I think bounty programs are mostly a bad idea for startups and medium-sized
tech companies. That said, the critiques in this piece do not make a lot of
sense to me.

Take transparency. The claim this article makes is that commercial bounty
programs work against transparency by paying researchers only when they agree
to NDAs. But transparency isn't a norm in software security to begin with;
most vulnerability researchers work for labs and consultancies that rarely if
ever disclose vulnerabilities. Disclosure of findings is not a norm on
commercial software security assessments. Meanwhile, for any target an
independent researcher can lawfully assess, the researcher retains the ability
to ignore the bounty program and publish straight to Twitter.

My sense of it is that HackerOne has probably increased transparency, in the
sense that I've read a lot of published reports on H1 that I don't expect I
would have seen if the platform didn't exist, and, in my commercial work,
haven't seen a lot of private reports that cut the other way.

Or this "Safe Harbor" argument, that commercial bounties force researchers to
sign NDAs to be immunized from CFAA suits and prosecutions. Sure, but in the
absence of H1, most of those CFAA immunizations weren't available on any
terms. H1 doesn't enforce CFAA liability; CFAA liability is a natural default
under US law. If anything, H1 is mitigating CFAA concerns, not amplifying
them.

I don't know what to say about the labor law concerns here. I know that the
people offering legal opinions here are lawyers and are versed in California
labor law. But knowing what I know about how bug bounty people work: there is
_no way_ the median bug bounty "participant" would qualify for minimum wage
and benefits at the various companies they interact with. The modal bounty
participant has a grab bag of a couple dozen scripts they spam against
hundreds of different companies with bounties. Is the claim here that H1 owes
them a wage? Is the argument here simply that H1 needs to move out of
California?

The minimum wage thing doesn't ring true either, since it implies that all
project-based consulting --- that is, non-T&M consulting where clients pay an
agreed-upon rate for an outcome regardless of the time a project takes to
complete --- is susceptible to the "ABC" test as well. But project contracts
are very common in California. I'm sure there's a subtlety I'm missing.

The "ISO compliance" thing is just silly. By the logic in this article,
practically none of the thousands of commercial application pentests performed
in 2020 to date will be "ISO compliant".

Ultimately: I think if you have to ask, you shouldn't run a bounty program.
But that's mostly because I think bounty programs don't work very well, and
generate an avalanche of noise. I don't think many of the reasons in this
article matter.

~~~
CiPHPerCoder
> Meanwhile, for any target an independent researcher can lawfully assess, the
> researcher retains the ability to ignore the bounty program and publish
> straight to Twitter.

I've had vuln disclosures go bad before and after the rise in popularity of
bug bounty programs, so I'm probably qualified to chime in here.

Before HackerOne, the default for a company that didn't receive vuln reports
well was to threaten you and/or your employer with lawsuits.

This happened to me with Bullhorn and Intuit, despite my investigation and
reports being unrelated to my employment. They ultimately went nowhere, but I
imagine the conversation I wasn't present for was, at best, _awkward_.

Last year, under one of my aliases, I found a vuln in Credit Karma, and the H1
triage staff declared it out of scope. So I posted it on GitHub/Twitter.

Instead of threatening to sue, Credit Karma asked me to pull the tweets/gists,
walked back the H1 triage decision, and ultimately awarded a bounty for my
finding.

Thus, I don't buy the chilling-effects narrative the article tries to sell.
Bug bounty platforms have actually made security research more normalized than
it used to be.

Just my unsolicited $0.02

~~~
tptacek
Of course, I go back to the 1990s with this stuff, and your experience matches
mine, to the extent that I got lawsuit threats for doing research on programs
_I ran on my own machines_. I find the idea that H1 is making it legally
riskier to conduct research patently silly.

------
keydutch
Commercial bug bounty companies like HackerOne and Bugcrowd will suffer the
most from the crisis for sure, even more than pentesters. With such cool sites
as Openbugbounty around, all their abnormal pricing will accomplish is
organizing their own funerals.

------
tomlockwood
> "I've seen some quote unquote valid vulnerability reports," Laurens ("lvh")
> Van Houtven, principal at Latacora, a secops and cryptography expert, tells
> CSO. "If someone asked me 'should I put this in my appsec report?', I'd say
> 'you can put it in there, but I will never let you live it down.'"

Is there any reason that someone who works for a company that wants to be
perpetually hired by startups to do security would poo-poo a bug bounty
program? This comment seems pretty motivated.

Additionally, a vulnerability is a vulnerability. Plenty of companies get
owned due to pretty trivial vulnerabilities.

> Moussouris, now founder and CEO of bug bounty consultancy Luta Security,
> questions how much of HackerOne is real.

Again, could be motivated.

> The bug bounty platforms' NDAs prohibit even mentioning the existence of a
> private bug bounty. Tweeting something like "Company X has a private bounty
> program over at Bugcrowd" would be enough to get a hacker kicked off their
> platform.

Sure. How many traditional pentesters are signing NDAs?

> Consider a finder who spends weeks or months of unpaid work to discover and
> document a security flaw. Someone else independently discovers, documents
> and submits that same bug five minutes before the first finder. Under the
> rules of most HackerOne and Bugcrowd bounty programs, the first submitter
> gets all the money, the second finder gets nothing.

How are we proposing this alternatively works in a way that isn't gameable by
having control of multiple accounts?

> The lack of vetting of bug bounty hunters, where anyone, including this
> reporter, can sign up for a HackerOne or Bugcrowd account with any email
> address, is the key sticking point, Antokol says.

> Bugcrowd declined to elaborate on what the process of pre-vetting
> researchers actually looks like. So, this reporter signed up for an account
> and had immediate access to all public programs without any additional
> steps.

What is really the point of this so-called journalism? In the preceding
paragraphs we were complaining about NDAs stopping people from talking about
private programs, then complaining about the lack of vetting, then noting that
signing up grants access only to the PUBLIC programs, then saying the bug
bounty companies refuse to talk about their vetting methods? What is this? I
can see why Bugcrowd and Project Zero refused to talk to this person.

> Mature organizations can and should run their own VDP in house. If they are
> ready for an avalanche of dubious bug reports, they might optionally choose
> to run a bug bounty.

I have a feeling that this journalist is pretty in bed with the traditional
pen-testing industry; his history of articles is pretty focused. Bug bounty
programs use a customer's real money to pay out bugs, and Bugcrowd, at least,
triages those bugs before any payment is made. It is in nobody's interest to
submit "dubious" bug bounty reports. The only way I can explain that opinion
is to think it comes from the mouth of someone with a grudge.

~~~
tptacek
Latacora is paid in part to manage bug bounty programs, so the motivation is
not in the direction you suggest it is.

~~~
gyanchawdhary
In part.

~~~
lvh
We bootstrap security teams in a long-term engagement. We staff bug bounty
programs, but do not bill for that service separately. If a client uses H1 and
pays them for triage, it fairly straightforwardly removes a cost for us.

The bug bounty nonsense spray I'm talking about is "sev crit: DMARC is set to
quarantine instead of reject" and literal descriptions of how session cookies
work (which inexplicably get paid out: we've seen it happen). I assume we're
in agreement that I should not be fine with people putting that in pentest
reports; if that's the case, the quote is not really controversial: all it
says is that H1 has a lot of nonsense reports.
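For context, that DMARC "finding" is the kind of thing a mass scanner flags
automatically: a one-line check of the `p=` policy tag in a domain's DMARC TXT
record. A minimal sketch of that check (the record string below is
hypothetical; a real scanner would fetch it over DNS):

```python
def dmarc_policy(txt_record: str) -> str:
    """Return the p= policy tag from a DMARC TXT record ('none' if absent)."""
    for tag in txt_record.split(";"):
        # Each tag looks like "key=value"; split on the first "=" only.
        key, _, value = tag.strip().partition("=")
        if key.lower() == "p":
            return value.strip().lower()
    return "none"

# Hypothetical record: p=quarantine rather than p=reject is the
# low-severity configuration note that gets reported as "sev crit".
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
print(dmarc_policy(record))  # quarantine
```

The point is that producing this observation costs a scanner operator
essentially nothing, which is why programs drown in reports of this shape.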

The mechanism GP describes would require Latacora to be a proxy good for H1.
Do you believe that is the case?

------
anonytrary
Bug bounty programs don't make much sense to me. Why would you pay random
external people to find bugs when you can hire external people to find bugs
and pay trusted employees to fix them? Just hire pentesters.

~~~
mic47
Bug bounty programs are not supposed to replace your other security
activities; they're a way to have an additional source of vulnerabilities.
The advantages of these programs are that security researchers get rewarded
when they find a bug, and that there is a clear process for disclosing bugs.

You should still hire pentesters, you should still have trusted employees find
and fix bugs, and more... If you rely just on bug bounties, your security will
suck.

That being said, NDAs sound sketchy: if you disclose a bug, then after it is
fixed you should be able to blog about it (or when they do not fix it for a
looong time).

~~~
PappaPatat
> Bug bounty programs are not supposed to replace you other security
> activities, but it's a way for you to have additional source of
> vulnerabilities.

Exactly the way we position our own Bug Bounty Program. Where the pentesters
can be hired to also confirm things done well, the hunters are paid only for
the failures they find.

In our case there is an added bonus with the Bug Bounty Program: we've come to
REALLY appreciate the technical level of the reports. Since hunters only get
paid for triageable findings, the details we get reported are so much better
than what we used to get from our pentesters. Of course, we now require the
same quality of reporting from them.

What also helps is that the pentesters are more motivated to deliver
higher-quality findings, since they are aware the service will enter the Bug
Bounty Program after their findings are resolved.

Again, a BBP should NOT replace your other security activities; it is an
additional source with possible unforeseen benefits.

