
HackerOne breach lets outside hacker read customers’ private bug reports - louis-paul
https://arstechnica.com/information-technology/2019/12/hackerone-breach-lets-outside-hacker-read-customers-private-bug-reports/
======
DyslexicAtheist
if a platform which stores information about current 0days in its system
gets pwned by an OWASP top-10 vulnerability, it should be considered gross
negligence on their part. this isn't the security researcher's fault but
HackerOne trying to spin the story to their advantage. Absolutely disgusting,
but I wouldn't expect anything else from them.

We should agree to a system that auto-pwns (takes offline) anything that has
an OWASP top-10 in it. janit0r had the right idea when s/he/they took all
those IoT devices offline. The Japanese did something similar (iirc in 2018).

the top comment in the article nails it:

 _> I'm not a fan of a vulnerability reporting platform giving someone grief
for finding a vuln in THEIR platform. Of all orgs, HackerOne should understand
the optics of how they communicate with a reporting party.

Interrogating your voluntary reporting party with things like "why did you
access so much data, you didn't need to" is just stupid, that's common
exposure assessment practice for most in the industry.

Your vuln was found and exploited either way. Your choices are two:

1\. Honorable guy validates scope of access and reports it.

2\. Less honorable guy pwns your shit.

Come on, HackerOne._

EDIT: there is so much we could do (programmatically) using
_.well-known/security.txt_ to drive automation for all kinds of things, in
addition to merely providing a contact point to report vulns to.

~~~
shkkmo
You should read the actual report [0]. HackerOne repeatedly thanks the hacker
for the report and awards a 20k bounty.

> Thank you for confirming you no longer have unauthorized access. As part of
> our investigation, we also want to make sure we have all the relevant
> information from you to ensure we’re capturing everything, even as we review
> our own logs / audit records. As such, we would appreciate your answers to a
> few questions to assist us.

HackerOne asks a bunch of questions about what was accessed.

> Again, thank you so much for responsibly reporting this issue to us. We
> really do appreciate it, and we thank you for your assistance with our
> investigation.

Then later after awarding a 20k bounty:

> To make it clear on how we believe these situations should be handled by
> hackers, we’ve added a clarifying section to our policy. We urge you to
> review this section, which will reduce the blast radius going forward. We’d
> like for you to continue to be a valuable member of the community and our top
> hacker, but for that, we require you to be mindful of the access that
> particular vulnerabilities give you. Being responsible and understanding the
> potential consequences of your actions is very important in this field. We
> hope you understand and will take this into consideration going forward.

[0]
[https://hackerone.com/reports/745324](https://hackerone.com/reports/745324)

~~~
tastroder
Not sure what exactly that changes. It's still a relatively basic
vulnerability and they still school the reporter after changing ("clarifying")
their policy [0]. I realize it's worded to not come across as a threat but I'm
not sure why that was necessary either. It might have been out of scope
according to some H1 guideline the reporter wasn't aware of but what the
reporter did wasn't really out of the ordinary.

[https://hackerone.com/security/policy_versions?change=362468...](https://hackerone.com/security/policy_versions?change=3624684)

~~~
shkkmo
> Not sure what exactly that changes.

Tone and how you treat your reporters matters. The comment quoted by the
person I responded to was edited to include this after they read the full
report:

>> Edit: This comment was based on an incomplete understanding of the
>> specifics of the communication between H1 and the reporter. This assessment
>> wasn't the most fair, and I probably should have read the full report before
>> passing judgment. [0]

You say:

> I'm not sure why that was necessary either. It might have been out of scope
> according to some H1 guideline the reporter wasn't aware of but what the
> reporter did wasn't really out of the ordinary.

Isn't it exactly the commonness of that practice that makes it clear it is
important to take the time to educate your reporters and improve your policy?
The amount of sensitive data that was accessed increased the severity of the
breach. HackerOne specifically acknowledges this as one of their planned
courses of action to improve things going forward:

>> As the community grows, HackerOne needs to ensure that HackerOne is
>> reinforcing the best practices in bug bounty hunting. The HackerOne
>> Community team will look to increase hacker education around delivering
>> proof of critical severity vulnerabilities in case sensitive information has
>> been accessed by the hacker.

[0] [https://arstechnica.com/information-technology/2019/12/hacke...](https://arstechnica.com/information-technology/2019/12/hackerone-breach-lets-outside-hacker-read-customers-private-bug-reports/?comments=1&post=38352571#comment-38352571)

~~~
tastroder
Fair enough. I find it rather patronizing but I also kind of get your point.
H1's gamification requires them to educate their users to some degree, sure.
Although I find that far more problematic than some common practice. I'd
prefer policy that reflects reality. "If you see usernames or reports, stop
and report it" might help with the part of the user base that bought into the
gamification but I'm really not sure how that helps responsible disclosure in
general.

If you're not a paid external security researcher, of course you click on at
least one report to confirm that the vulnerability is actually working and has
meaningful impact. Plus, they're still working with curious humans here, not
OWASP top 10 scanners.

That's just like all those bug bounty programs including some phrase
disallowing automated tools. Even for active participants of the platform
that's completely unrealistic and introduces unnecessary uncertainty.

~~~
thaumasiotes
> That's just like all those bug bounty programs including some phrase
> disallowing automated tools. Even for active participants of the platform
> that's completely unrealistic and introduces unnecessary uncertainty.

I was a report triager on HackerOne for over a year. There is a good reason
for that clause, and I would definitely recommend that any program that didn't
already have it (which is only programs new to bug bounties) adopt it.

What's disallowed is filing a report consisting of the output of an automated
tool. Look at Uber's scope statement (
[https://hackerone.com/uber](https://hackerone.com/uber) ):

> Out-of-Scope

>> Negligible security impact

>>> Vulnerabilities as reported by automated tools without additional analysis
>>> as to how they're an issue

>>> Reports from automated web vulnerability scanners (Acunetix, Vega, etc.)
>>> that have not been validated

This prevents a denial-of-service attack on Uber's bug bounty team. It's very,
very common for someone to run an automated tool against one of your domains,
copy the output into a HackerOne report, and walk away. In nearly all cases,
that output is a scary-looking false positive. It doesn't make sense to
investigate reports like this, because they are almost always spurious -- so
they are placed out of scope. If you file a report _consisting solely of
automated output_ , your report will be closed without investigation per the
company's policy. The only problem being addressed by the clause is that
reports like this are a waste of our valuable time.

Automated tools themselves are not disallowed. (How would we know?) The
requirement is that you investigate the bug yourself, and provide a writeup
that indicates that you did so and substantiated that there really was a
security issue.

> If you're not a paid external security researcher, of course you click on at
> least one report to confirm that the vulnerability is actually working and
> has meaningful impact. Plus, they're still working with curious humans here,
> not OWASP top 10 scanners.

I have a lot more sympathy for you here. I have personally seen the same
company pay out thousands of dollars more for the same security issue just
because one of the reports alleged a bigger potential problem. (In this case,
"you might be able to write to the database" as opposed to "you can read from
the database".) This makes it hard for me to recommend "stop as soon as you
know an issue is present", even though that is definitely the preferred policy
of the companies.

~~~
tastroder
First off, I have to admit to a bit of snark when writing that up yesterday
tbh and completely forgot about folks reporting some Nessus output etc. I'm
absolutely with you when it comes to that type of automated tool and
reporting, thanks for separating that.

The Uber one is actually a great example of wording I can appreciate. I was
thinking of one like Verizon Media's (
[https://hackerone.com/verizonmedia](https://hackerone.com/verizonmedia) ).

> Do not use automated scanners/tools - these tools include payloads that
> could trigger state changes or damage production systems and/or data.

Unlike Uber's, that's just a blanket statement, and while I'm sure nobody
wants to apply it that way (I actually like H1 acting as a proxy for that
alone; they surely mean something very similar to the example you posted),
it's unrealistic and could make pretty much any attack ineligible.

All that being said, at least the change to H1's own policy lets people know
where they draw the line. Maybe I should change my mind; not every program has
to be relevant to everybody at the end of the day.

------
staticassertion
This isn't very interesting, at least no more interesting than any other
disclosure that goes through H1.

If you're not familiar with disclosures and communication via H1 maybe this
all feels very odd or bad or something, idk. Looks fairly standard to me.

Ultimately, to me, someone found a vulnerability, reported the vuln, and it
was remediated within ~3 hours. There was no "breach", unless you consider
every H1 report a breach?

Having a repository of vulns is quite a scary thing though. I often wonder,
with all of the "wontfix" and "too low impact to care" bugs chained together,
how easy is it to own many of these customers?

~~~
eterm
It used to be that wontfix / low impact stuff would still be publicly
disclosed. Responsible disclosure was the key principle.

There shouldn't be a gap between "won't fix" and "Too bad to be made public".

If a company is happy not to fix something, they shouldn't complain if it's
published.

------
ryanlol
Original ticket, the article is a waste of time:
[https://hackerone.com/reports/745324](https://hackerone.com/reports/745324)

~~~
tptacek
I agree. This is not a great article: it tries to make a lot of drama out of
what appears to be a really simple event, manages to contain less information
than the actual HackerOne report, and has that weird coda about Katie
Moussouris and the DoD program, which has nothing to do with this story.

~~~
MaxGabriel
Do you have any thoughts on their mitigations? I would like to protect against
attacks like this, but their mitigations seem to have big drawbacks:

> Bind Sessions to IP Addresses

As they admit, "IPs... change for legitimate reasons", so it seems like most
sites wouldn't want to roll out this protection, because it would randomly log
out their users. I haven't tested this, but from some googling it seems like
this would be pretty common on mobile networks.

> Bind Sessions to Devices

They say they will "investigate binding the session to a specific device"; is
there a known good way to do this? I could imagine attempts to do this
breaking legitimate use cases like Apple's Handoff between iOS and Mac, or
just not adding much security.

My instinct is that the admin functionality could be run behind a VPN, and
that would be a good defense against a HackerOne employee's credentials or
session cookie being leaked.

~~~
codexon
For a high-value website like HackerOne, IP binding should be done; the extra
security is worth the slight annoyance of having to log back in when your IP
changes.

------
user5994461
>>> As haxta4ok00 suggested, one step was to bind authentication cookies to
the IP address of the user it was issued to.

Do NOT bind session cookies to an IP. It breaks usability on mobile devices;
they change networks quite often.

What you can try to do instead is to associate the cookie with the country the
user is currently in. It's available in the CF-IPCountry header if you use
CloudFlare. This may also cause usability issues for frequent travelers or
border workers.

[https://support.cloudflare.com/hc/en-us/articles/200168236-C...](https://support.cloudflare.com/hc/en-us/articles/200168236-Configuring-Cloudflare-IP-Geolocation)
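A country-level check is a one-liner once Cloudflare has stamped the request.
Rough sketch (the CF-IPCountry header name is real; the session dict and
handler shape are made up for illustration):

```python
# Sketch: validate a session against the country it was created in, using
# Cloudflare's CF-IPCountry request header. Session storage is hypothetical.

def check_session_country(session: dict, headers: dict) -> bool:
    """Accept the request only if its geolocated country matches the
    country recorded when the session was created."""
    request_country = headers.get("CF-IPCountry", "XX")  # "XX" = unknown
    return request_country == session.get("country")

session = {"user": "analyst42", "country": "US"}
assert check_session_country(session, {"CF-IPCountry": "US"})
assert not check_session_country(session, {"CF-IPCountry": "RU"})
assert not check_session_country(session, {})  # no geolocation: reject
```

As the comment notes, this trades granularity for usability: carrier NAT
within a country no longer logs users out, but frequent travelers still hit
the check.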

~~~
jsploit
Locking sessions to countries doesn't offer any security benefit - an attacker
can quite trivially connect from an IP in any target country.

~~~
user5994461
Lock down the login page too, so users can't log in from countries you don't
do business with, especially the actively hostile ones (CN, RU, T1).

It's simple and it's extremely effective. It benefits security immensely.
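The login-time blocklist described here is trivial to express. A sketch, again
assuming Cloudflare's geolocation header (the blocklist contents are just the
examples from the comment; "T1" is Cloudflare's pseudo-country code for Tor
exit nodes):

```python
# Sketch: refuse login attempts from countries/networks you don't serve.
# Country codes come from Cloudflare's CF-IPCountry header.

BLOCKED_COUNTRIES = {"CN", "RU", "T1"}  # example list from the comment above

def login_allowed(cf_ipcountry: str) -> bool:
    """Gate the login endpoint itself, before credentials are even checked."""
    return cf_ipcountry not in BLOCKED_COUNTRIES

assert login_allowed("US")
assert not login_allowed("T1")  # Tor exits are tagged "T1" by Cloudflare
```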

~~~
jsploit
I don't understand how it's anything beyond a minimal defense-in-depth
measure.

    1. Target leaks session cookie
    2. Attacker uses VPN to connect from target's country with the leaked session cookie
    3. Attacker's session is deemed valid

~~~
user5994461
Attackers usually don't know the country of the user. Even if they do, it's
not as easy as it sounds to find a VPN or an open proxy in any country.

If you're thinking of Tor, Tor is trivial to detect and block.

I've been working in fintech companies and e-commerce, handling real money or
wallets that could be stolen or emptied by a successful attacker. If you don't
do defense in depth, users lose. One trivial example: in the days after a new
breach is published on Hacker News, there will be bots at the door attempting
to brute force logins with all the new email and password combinations.

Any other questions? I feel like I have enough context to start writing a blog
post on this.

~~~
jsploit
> Attackers usually don't know the country of the user

Nothing a little recon or social engineering can't solve.

> it's not as easy as it sounds to find a VPN or an open proxy in any country

Only for some small/niche countries perhaps. Worst case, an attacker can rent
a VPS in the target country.

Perhaps a city-based lock would be more effective.

~~~
user5994461
Most attacks are opportunistic, trying credentials that were leaked online or
brute forcing simple passwords. It's also heavily weighted toward Russia,
China, Tor, open proxies, non-reputable hosting, and cheap VPSes. Each of
these can be detected and blocked.

Attackers do not maintain proxies, especially not proxies across tens of
different countries, and if they did, that would be organized crime and a
whole new level. By the way, banking trojans evolved to operate from the
browser of the victim specifically to evade these protections.

Should I cover all these in a blog post if it's something you're interested
in?

~~~
jsploit
I think you're deviating from the topic of protecting against the
vulnerability HackerOne encountered: using a leaked session cookie.

In their case, let's say the victim analyst was based in the United States,
and they have implemented your proposed session country-lock. I also happen to
reside in the US, so the country-lock protection is worthless.

For other cases, you can _try_ to block proxies, VPNs, Tor, VPSs... but that
in itself is perhaps a usability fail.

------
tyingq
Feels like they are trying to have it both ways.

Complaining that the finder poked around too much while asserting that he had
access to very little.

~~~
jcims
Lots of bug bounty policies expressly state you should not access other
customer information. E.g.:

[https://www.google.com/about/appsecurity/reward-program/#rep...](https://www.google.com/about/appsecurity/reward-program/#reporting)

Sometimes that's not easy but slurping tons of data to demonstrate the scope
of the vulnerability is rarely necessary from a technical perspective. That
said, if your report is being ignored it becomes a very tempting avenue to get
priority.

~~~
AGKyle
With our bug bounty program we also state that researchers should not access
customer data or interrupt normal operation of the services. If their testing
is believed to impede normal operation we can provide separate servers for
testing purposes.

We also provide a test account which researchers can use if they wish to
attempt to modify or read private information. They should stick to that data,
and I'd encourage other bug bounty programs to do similar.

In my eyes this researcher should've stopped as soon as they started seeing
private data and reported it; it sounds as though they continued to read
private information well after they realized it was private data they were
viewing.

I realize not every bug bounty program plays fair. We look specifically at our
bug bounty triage team (via the platform we use) to make sure we are treating
researchers fairly and that researchers are obeying the agreed upon rules.
They're the neutral third party in our eyes. They keep us honest and
researchers honest. At least that's how I approach it all.

~~~
jcims
That’s cool. It’s so important for bounty programs to have a systematic way to
approach awards. The security researcher community overall is very cool, but
there are definitely some assholes and bottom feeders out there that can get
under your skin and introduce bias in your decisions.

------
trulyrandom
It's stunning that HackerOne staff use their production credentials to
reproduce security reports. Is there no internal instance of the HackerOne
platform they can run their tests on?

------
wizzwizz4
This title is misleading. It _let_ an outside attacker do this – past tense –
due to a token leak; this isn't an ongoing thing, and the token was revoked so
even _that_ attacker can't do anything now.

------
jupp0r
Do they have a bug bounty program?

