Disagreements are to be expected on a bug bounty platform, but these days they just stop responding altogether and don't pay. It borders on outright fraud.
I've been trying to report a Squid RCE (CVE-2020-8450) since October. The Squid maintainers seemed unprepared to deal with the report; they kept being unresponsive, and it took 2 months to merge my patch. Maybe they're volunteers, so I can't blame them. On January 20th I reported it to the bug bounty program, which promises high rewards, and apart from triaging it there has been radio silence since, despite my having invoked HackerOne mediation. I have more Squid memory bugs, and I'd rather rm -rf them than go through this process again.
HackerOne used to be decent, but this appears to be a structural problem now.
The cybersecurity team had a backlog of roughly 30 critical issues discovered internally before starting HackerOne. We were unable to fix those issues, or the ones reported to us, because we had no visibility into source code, there were 12 different development teams, most of them outsourced, and all the project managers were interested in was covering their ass.
The HackerOne deployment was invite-only, but the few hackers in it did fantastic work. I kept being told to find excuses to reduce the amount we'd pay for the critical issues they'd find and we'd fail to fix. At least we triaged faster than Paypal.
Little in the way of technical detail, heavy on "let us handle this for you, we know hackers / We'll throw a big Defcon party for anyone you want."
I submitted a vulnerability to a vendor on H1 along with a typical “I plan on publicly disclosing this vulnerability on X date” note, and started getting emails directly from H1 telling me that this undermined vendors’ confidence in the platform and that doing what I was doing might make it so I can’t use HackerOne any more. In the same correspondence they said that my approach made sense—but they continued to threaten that “it would be a shame if you weren’t able to participate any more”.
In my case, the vendor verified the vulnerability quickly, but kept dodging my follow-ups by replying without answering my questions. When the vendor refused to assign a CVE after I asked four times, I contacted the HackerOne CNA directly to get an assignment. They replied within 48 hours asking if there was any public information already, I said no and that I was planning on disclosing on X date, and then they just stopped replying for a month until after the deadline passed.
At a glance, H1’s disclosure guideline appears fairly reasonable: 30 days by default, an upper bound of 180 days. In actuality, those times only start once a vendor closes a ticket, and can be extended indefinitely. Reporters aren’t allowed to speak publicly about anything they send to the platform until the ticket is closed and the vendor agrees to allow it, even in the public programs.
As far as I can tell, HackerOne’s primary purpose now is to act as a shield for bad vendors to hide their security defects from the public by using network effects to bully reporters into keeping quiet. The community team claim this isn’t what they’re doing and that they always ask “why should this be private?”, but their marketing material to vendors tells a different story, their actions with me tell a different story, and the vendor I reported to had over 100 closed reports, going back years, and none of them were publicly disclosed.
Unless you must pay your bills with security bounties, or don’t actually care and just want to dump a report and forget about it, I unequivocally recommend against using HackerOne to report a vulnerability.
https://www.hackerone.com/sites/default/files/2018-11/The%20... page 12: “even with a public program, bug reports can remain private and redacted, disclosure timeframes are up to you”
I would say to step back and question the very concept of 'responsible' disclosure. For starters, the name itself seems manipulative: it sets the tone of the conversation in a way that, in most other settings, wouldn't pass the smell test.
It seems like a short-term optimization with longer-term costs. While releasing the vulnerability into the wild right away would likely be followed by bad actors exploiting it, by sitting on it until the company fixes it we create an environment where companies are given a grace period when vulnerabilities are found. This in turn is factored into their decision-making about how much to prioritize security.
I always assumed the responsible part had multiple non-conflicting meanings, that 1) The researcher would not disclose it to the public until the vendor has a reasonable amount of time to fix it and 2) the vendor is assumed to want to do the right and responsible thing in fixing the flaw.
In 2020, non-ironic use of the term "responsible disclosure" has become somewhat of a "tell" that the person speaking isn't super connected to vulnerability research.
So, what is the correct term that is supposed to be applied to the approach of disclosing to a vendor first, giving them a hard deadline, and then doing a public disclosure? As far as I know, it’s not “coordinated disclosure”, since “coordinated disclosure” normally means the vendor controls the timeline.
Edit: The Google Project Zero FAQ explicitly states its approach is not coordinated disclosure:
> Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. Coordinated vulnerability disclosure is premised on the idea that any public disclosure prior to a fix being released unnecessarily exposes users to malicious attacks, and so the vendor should always set the time frame for disclosure.
It seems to me that if “responsible disclosure” is problematic for the reasons you’ve mentioned, “coordinated disclosure” is too. Actually, it’s maybe even worse, since “the researcher refused to coordinate with us on the deadline” is objectively true, whereas “the researcher didn’t disclose this vulnerability responsibly” is totally subjective.
As I said in https://news.ycombinator.com/item?id=22407821 I don’t like the phrase “responsible disclosure”, especially given its history, but “coordinated disclosure” doesn’t seem to do any better at being a phrase that can’t be weaponised against researchers. It also has the downside of meaning different things to different people within infosec which makes it unreasonably hard to communicate effectively and concisely.
So, you know, anyone reading this with high stature in infosec, please coin something unambiguously unique (“time-gated disclosure”?) so less time can be spent talking about semantics and more time can be spent on how to improve software security for everyone. :-)
The vendor is informed before disclosure. The security researcher is an informed expert disclosing to end users. Well-behaved vendors can, in turn, inform those researchers about the challenges that drive a vendor's need for alternative timing.
On the timing dimension, consider “cadenced disclosure”?
Less about consensus, more about the beats.
In reality, it's not incumbent on researchers to wait for patches at all. You can straightforwardly argue that you're obliged to give users enough of a head start to stop using the product if the risk is intolerable to them, and then disclose ready-or-not.
Beyond that, research suggests that the chances of a vulnerability being independently rediscovered within three months may be as high as 1 in 5 for certain types of defects. This means that even if you don’t know a particular vulnerability is being actively exploited, you’ll eventually sit on one that is being quietly exploited by someone. Since you don’t know which one it’ll be, early disclosure at least gives end users the opportunity to apply mitigations, and hopefully burns a 0-day being used by an internet bad guy.
https://googleprojectzero.blogspot.com/p/vulnerability-discl... - “Why are disclosure deadlines necessary?”
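To put a rough number on "eventually": under a simplified model where each withheld bug independently has the 1-in-5 three-month rediscovery chance cited above, the probability that at least one bug in a batch gets rediscovered climbs quickly. This is a back-of-the-envelope sketch, not a claim about any specific dataset; the function name and the independence assumption are mine.

```python
def p_any_rediscovered(n_bugs: int, p_single: float = 0.2) -> float:
    """Probability that at least one of n_bugs independent vulnerabilities
    is rediscovered, if each has probability p_single of rediscovery."""
    return 1 - (1 - p_single) ** n_bugs

# With 5 withheld bugs, odds of at least one rediscovery already exceed 2 in 3;
# with 10, they approach 90%.
for n in (1, 5, 10):
    print(n, round(p_any_rediscovered(n), 3))
```

That's the core of the deadline argument: sit on enough bugs for long enough and, in expectation, someone else finds one of them too.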
There’s no question that the name is manipulative since it was first coined to refer to what’s now called “coordinated disclosure”, in an essay which referred to full disclosure as “information anarchy”.
In this case with HackerOne, the issue is not with terminology, nor is it with time-gated full disclosure versus immediate full disclosure. Rather, the issue is that HackerOne go out of their way to serve the interests of vendors who don’t want to fix defects quickly, at the expense of reporters, end users, and software security in general.
HackerOne could take the approach of saying “we are a neutral platform for connecting researchers and vendors, it is not within our purview to try to stop reporters from disclosing vulnerabilities if they feel it’s necessary”, but this is not how they operate today, and people should know this.
I don't mean to sound so dismissive but... of course they do!
Who's paying HackerOne? The vendors!
Who is gonna be their first priority? The vendors that are paying them!
They certainly aren't going to do anything to harm that relationship that is keeping them in business.
I reported to one program, which ignored the report and effectively stalled until the startup had pivoted to a different idea. HackerOne didn't remove them from the platform and did not make it possible for me to publish the report through their platform (and publishing it otherwise would have likely violated some ToS).
I reported a second issue to Cloudflare. It was acknowledged as a known issue within less than an hour, but still not fixed months later, and again I was unable to publish it.
Despite waiting for months and requesting disclosure repeatedly, none of these reports are disclosed yet.
In the future, if I find a vulnerability and the only reporting path the company provides is HackerOne, I will apply full disclosure instead.
Completely disagree with this.
I launched a HackerOne program for my company last month (for free, not using their “managed” service).
Of the many reports people submitted, we triaged 30-40 valid reports (most very minor, one or two moderate). We paid out a few thousand dollars in rewards.
At the same time, we also did a more traditional 2-week penetration test with Cobalt (https://cobalt.io/) that cost over $10,000, and HackerOne was the clear winner when it came to the number of high quality security reports worth fixing. And H1 was 2-3x cheaper after paying out the bounties.
I’m sure HackerOne isn’t great for all companies, but just posting this to refute the blanket statement that HackerOne is “completely broken” across the board.
For companies, it may well be very good, both for honest ones and for ones just looking to "cover their ass".
Perpetuating a system wherein security researchers are massively underpaid for their services because of a terrible abusive platform doesn't seem like a very nice way to do business.
None of these findings appear to have been worth much of anything.
it's ok for people who start out and only want to work on vulns and not bother with "sales" (building long term client relationships). severely limiting though in the long run!
much better to spend time on pitching your service directly and build a name for yourself this way. most customers I had came back and rewarded me with more work. on those bounty platforms however you're constantly competing with drive-by pen-testers who lower your price, and you have no say in the whole negotiation and bargaining phase. your previous reputation also tends to stay locked into these platforms.
a better long term approach is to build connections, set up a ltd (LLC) and make sure you have a good lawyer who can advise you (not just when things go down). ideally build a collective with other like minded (e.g. like a consulting or law practice where you don't always have to share clients but you can if you want to complement each others skills).
this is imo the best way to escape the "scope-prison" and the best way to learn about clients additional (and actual) weak points (points that they haven't themselves even thought about).
does anyone here do it this way or with a similar approach?
I've been that person before, running both the 'do it yourself' bug bounty program and the 'filtered by Hacker One' approach, and I'll take the latter every time.
Outsourcing to Hacker One helps cut down the bullshit, and that's where their value add is (and, to a lesser extent, the reputation system; if someone is reporting through Hacker One I'll give them the benefit of the doubt). Anything else on top of that is just upsell.
They can add value for companies that don't have a reputation and want to have their security problems discovered. But they have to follow through on behalf of researchers and threaten to remove companies that don't pay bounties and/or don't investigate and remediate issues.
That's the tragedy of the open source world: mission critical for everyone, but no actor willing to maintain it properly. It's Heartbleed all over again.
After a certain amount of time passes, the software reverts to an open source license of the author’s choosing:
Example: use restricted to non-production use, reverts to GPLv2 in four years:
The clock protects users from lazy or out-of-business vendors. If too few improvements have been made recently to justify paying, customers can simply fork an old version of the project and deploy that instead of paying for ongoing “development.”
(The business source license is not “open source”, but I think it is close enough to be a good compromise in practice)
That might be a new license, but it is by definition not open source. And, no, companies like Google won't “automatically” buy commercial software with that style of license; from their perspective it's worse than regular commercial software since it has all the downsides of traditional commercial software plus gives a competitive advantage to upstart competitors.
EDIT: How about instead a “new” license that, if you feel the software isn't maintained adequately for the needs of your organization, allows you to hire whoever you want to maintain it to your requirements, instead of impotently raging that other people aren't supporting it?
> EDIT: How about instead a “new” license that, if you feel the software isn't maintained adequately for the needs of your organization, allows you to hire whoever you want to maintain it to your requirements, instead of impotently raging that other people aren't supporting it?
It makes sense for the people that are getting the most profit from a piece of software to be the ones paying for basic maintenance/cleanup/improvements.
If you want customizations or new features, that's when it makes the most sense to 100% self-fund.
> it has all the downsides of traditional commercial software plus gives a competitive advantage to upstart competitors
On average, I'd expect it to still be a lot cheaper than commercial closed-source software.
And what exactly do you mean by competitive advantage here?
It often does make sense for them, and you can see lots of cases where this is done with actual open source software, without resorting to a commercial license that discriminates on scale. If it doesn't make sense for the people you want to pay, making a free-for-everyone-else license isn't going to convince them that it does; it's just going to convince them that they are better served elsewhere.
> If you want customizations or new features, that's when it makes the most sense to 100% self-fund.
I would argue that it makes sense to 100% self-fund the additional work whenever you want something more than the open-source offering provides and doing so is less expensive than commercial (off-the-shelf or bespoke) solutions, whether the additional work is basic maintenance or something else, and if you are using an open source solution and it's not maintained adequately for your need, the responsibility for addressing that isn't on other users that you’d like to have subsidize your use.
I think we need more experimentation with solutions to the (open-source) public goods problem before we can say that the others are worse. Ditto with experimentation on variants of democracy. Significantly harder to experiment with that than with open source funding though.
I don't really get the democracy comment.
That was a good way to keep companies honest, an implementation of responsible disclosure.
So H1 could implement that again. It doesn't get them a bounty but it does stop companies pretending reports don't exist, if that's what has happened here.
I think the conversation about whether H1 is problematic or not is a fine thing to have at the top of the thread. I can see people going either way on that question (bear in mind that it has as much to do with idiosyncrasies of each of H1's customers as it does with H1 themselves).
> If the attacker has the victim's password, they would already be able to gain access to the account via web UI too. As such, the account is already compromised. As such, there does not appear to be any security implications as a direct result of this behavior.
Seriously? This means PayPal's 2FA is just security theater. I'd rather they didn't offer it at all in this case, at least then I'd know how insecure my account really was.
Overall that may be too pedantic, and it shouldn't give PayPal a pass on the issue. Perhaps even entertaining it just muddies the waters, allowing PayPal to slip away. The extra check is a security check. If cybernews have bypassed it then they have bypassed a security check. Logically this is therefore a security issue, and if PayPal are saying that it's not a security concern then they're saying that they were just wasting everybody's time with the unnecessary check to begin with. That would clearly be a lie, as the fact that they developed and continue to use the system indicates that they think it provides security.
> I'd definitely call this 2-factor authentication -
You'd be misunderstanding what "authentication" means then. Notification and authentication are different things. Email is notification, not authentication. Confusing it means either not knowing what authentication is, or purposely confusing matters to present issue as something it isn't.
I should note that I haven't really investigated this so I don't claim to know any truth.
> In essence, it would work with phished credentials just as well as with stolen ones
But, sure enough, it's not the opt-in 2FA, triggered on every login, that was bypassed, but the 2FA checks triggered when PayPal detects suspicious activity. As far as I can tell, if you've enabled 2FA yourself, this bypass won't work. Thanks for the link! Going to go make sure I've enabled that...
Adding a second TOTP device is OK security-wise but adding a second device to my safe and making sure it's still working periodically kind of sucks.
SMS is not OK.
Printed scratch codes would beat the snot out of either.
You can then use that later to set up a replacement TOTP device if something happens to your first one.
I usually use "grab" on my Mac to save a copy of the QR code as a PNG, encrypt that, and save it in an offsite location.
Another popular approach is to print the QR code and save the printout in a fireproof safe. If you do that, I recommend printing it before you use it to set up your first device, and then setting up the first device from the printed code, just to make sure the printed code is fine.
If you save the text code, you can also use that with oathtool from oath-toolkit to generate the TOTP code on the command line if you need to use Paypal before you have your replacement TOTP device.
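The reason any of this works is that TOTP is fully deterministic: anyone holding the base32 text secret can regenerate the same codes, which is exactly why a saved copy serves as a backup. Here's a minimal stdlib sketch of the RFC 6238 algorithm (SHA-1, 30-second steps); the secret in the example is the RFC's published test key, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 TOTP code from a base32-encoded secret."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of whole time steps since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T=59 -> "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # prints "287082"
```

This is also why scanning the same QR code into two devices "just works": both devices hold the same secret and therefore produce identical codes, with no need for the server to know there are two of them.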
Note: if you do want to have two TOTP devices set up at the same time, there are two ways to do this with Paypal. One way is just to scan the same code in both devices. You can either set them both up at the same time, or add the second one later using the backup you made of the original code.
The other way is to go to Paypal's security settings and explicitly say you want to add a backup TOTP. It will then give you a QR code to scan. That is not the same code as it gave you for the first device. The codes generated from the second device initialized from that second code will not be the same as the codes from your first device.
I have no idea what the user interface is for logging in when you have two devices generating separate TOTP sequences. Does it expect you to use the first device, and if that fails ask you to try a backup? Or does it just accept codes from either? Or something else?
Offhand, I can't think of any compelling reason to prefer your two devices to have different codes, or for Paypal to need to know that you are using two devices. Just setting them up with the same code and letting them appear to be the same device as far as Paypal is concerned seems simpler to me.
isn't that what this bypass is about?
I will say that if cybernews have done what they say they've done, and PayPal are claiming that it's not a concern, then PayPal are clearly in the wrong, and that remains true even if we all agree that this isn't 2FA.
"CyberNews claims—and the company showed me a demonstration—that it can successfully login to an account using basic credentials on a new computer. "
So for now, I'd say they did what they're claiming
All factors are just varying obfuscations of "something you know" when you get down to it though.
i recently logged into the company paypal from out of the country and paypal complained that it wants to confirm the account via email, fine, i confirmed. and then it said it also needs to confirm via phone, i.e. a call.
so it is a form of 2fa.
can i also complain about how 2fa is a pain if multiple persons use that account. you cannot enable it if they allow only one user per account. there are workarounds where there are multiple 2fa methods and i use the app and the other person uses sms.
However, obviously the real answer is to add multiple users to the same paypal account, which apparently you can do with a PayPal Business account.
looks like it's not email as the second factor, but your device via SMS
https://github.com/dlenski/python-vipaccess emulates the Symantec VIP app, allowing you to provision a secret key, then export it to a different authenticator app
Here are the vulnerabilities in their report:
1. They can suppress a new-computer login challenge (they call this "2FA", but this is a risk-based login or anti-ATO feature, not 2FA).
2. They can register accounts for one phone, then change it to another phone, to "bypass" phone number confirmation.
3. There are risk-based controls in Paypal that prevent transactions when anomalies are detected, and some of them can apparently be defeated with brute force.
4. They can change names on accounts they control.
5. They found what appears to be self-XSS in a support chat system.
6. They found what appears to be self-XSS in the security questions challenge inputs.
None of these are sev:hi vulnerabilities, let alone "critical". 2 of them --- #4 and #6 --- are duplicates of other people's issues. Self-XSS vulnerabilities are often excluded entirely from bounty programs.
For the last 3 hours, the top comment on this thread has been an analysis saying that, because Paypal is PCI-encumbered, and HackerOne reports can function as "assessments" for PCI attestations, Paypal is in danger of losing its PCI status (and the fact that it won't is evidence that they are "too big to fail"). To put it gently: that is not how any of this stuff works. In reality, formal bug bounty programs are a firehose of reports suggesting that DKIM configuration quirks are critical vulnerabilities, and nobody in the world would expect any kind of regulatory outcome simply from the way a bounty report does or doesn't get handled. It should, I hope, go without saying that nobody is required to run a bounty in the first place, and most companies probably shouldn't.
The login challenge bypass finding was actually interesting (it would be more interesting if they fully disclosed what it was and what Paypal's response was). But these reporters have crudded up their story with standard bug-bounty-reporter hype, and made it very difficult to judge what they found. I'm inclined not to believe their claim that Paypal acted abusively here (and I am not a fan of Paypal).
> Anyone can write malicious code into the chatbox and PayPal’s system would execute it. Using the right payload, a scammer can capture customer support agent session cookies and access their account.
For example, under their example quality reports, POCs are provided
Really? Most companies? That seems like an extraordinary claim.
I'm not a security researcher but if I stumbled on some security issue in something that's not open-source and not owned by my employer, the only way I'd consider reporting it is if they have a bug bounty / responsible disclosure program. Otherwise I'd expect it would be about as likely for me to receive a "thank you" as a knock on the door from law enforcement.
Most companies should not run bug bounties. Most companies haven't even had a competently run software security assessment (either from an in-house software security expert or from a retained third party). Authorizing serverside tests and soliciting inbound reports from random people is not on the list of "first things you should do to get your house in order", and most people do not have their houses in order.
If this sounds like an extraordinary claim, I'd suggest maybe paying more attention to software security people and less attention to Reddit and HN stories about bug bounties; it's easy to get the wrong impression from message board threads, and as you can pretty plainly see, a lot of commentary on message board threads isn't well-informed.
Katie Moussouris is maybe a good starting point if you want to inject the "bug bounties can be bad" take directly into your veins. But there are lots of other people to listen to; it's a mainstream take. If you want a pro-bounty take, you can read what Cody Brocious writes. My (mainstream) take isn't the only decent take.
I agree that they have some issues with the way they've reported it, and I agree with your numbered points except that they imply that #5 may make the support agent vulnerable, but I'm not sure you can say PayPal haven't acted abusively. Many of the reports are legitimate vulnerabilities even if they aren't critical ones. The first is clearly a security issue yet PayPal have said that it isn't. In return they have received nothing but a reputation hit, and this is clearly unfair.
Do PayPal specifically say that anything involving stolen details is out of scope? That seems a bit weak considering they have numerous systems in place to combat misuse of stolen accounts. And even if they do, it doesn't explain #2.
edit: To answer my own question, their policy page lists "Vulnerabilities involving stolen credentials or physical access to a device" as out of scope for web applications. They likely intend that to apply to mobile applications also, but they've structured the page in a way that makes that ambiguous.
This is why this article is a bad HN submission - it's not really on everybody on HN to figure out whether these reports are any good, whether they were handled correctly by PayPal, HackerOne, etc. It's up to the people writing them up to make this as clear as possible and they don't come anywhere close to that. This just creates a massive discussion driven by speculation and off-topic tangents about a problem people had on ebay and talmudic regulatory 'analysis'.
And persistent XSS is definitely not out of scope according to PayPal's guidelines. https://hackerone.com/paypal
Why are you saying #6 is a duplicate of other people's issues? It must have been marked as a dupe of an N/A. They would have gained rep if it was a dupe of someone else's report. They lost rep, so it was most likely marked as a dupe of an N/A.
As I mention below, the big problem is the OP didn't include POCs. It's easy to claim "oh this can be exploited so easily", but without a POC it's not always clear, and perhaps he missed some detail that made his assumptions incorrect.
Anyways, I do have to say HackerOne looks pretty cool. This is the first I've seen of it, and they seem like they are working very hard (we all should be working hard) to make this work for everyone. They are likely just victims of their own success.
All I'll say beyond that is that if they had doc'd a real stored XSS bug in Paypal, my assumption would be that they'd get a bounty for that. That they did not get a bounty for it suggests that it was invalid. Paypal does not have any incentive to stiff researchers on valid submissions; they have in fact the opposite incentive.
Security analysis and penetration testing always results in the perception that the security auditor is calling their baby ugly. Always.
Sorry, on further thought: while I still disagree with the analysis above as being overly dismissive, I think the OP may share some blame for not writing higher-quality reports with POCs. Also, the OP doesn't explain whether or not they saw the original reports for those marked Duplicate. That's a very critical point. See here -
For anyone actually interested here and not just drive by commenting (like me, ahem), it's worthwhile looking into the platform in more detail. See my post below -
Meanwhile: I don't care even a little bit how Paypal arrived at their "duplicate" response, because Paypal has no incentive to deny a bounty for a valid bug. Like I said above, they have the opposite incentive. Duplicates happen all the time. If Paypal --- or any other large company --- says it was a duplicate bug, it would take extraordinarily clear evidence for me to believe otherwise.
Some of these things are probably not true for fly-by-night companies that set up bounty programs (a lot of people run bounties that shouldn't). I'm not denying that there are random companies that do ruthlessly screw with bounty submitters; I don't know any of them, but I believe they exist. But the money Paypal spends on bounties, all put together, barely even qualifies as a rounding error. They do not care; nobody serious cares enough to squelch reports to avoid paying bounties.
The fact that this was reported as "six critical vulnerabilities" is enough for me to tilt the credibility scale in the other direction.
I'd appreciate it if you didn't edit your comment out from under my reply; the convention on HN is to update your post to clarify your argument in a PS, not to simply delete the parts you felt didn't hold up to scrutiny.
That all said, I think you have a knee-jerk reaction (given your history) to side with Large Corp. It shows here, and it came across as way overly dismissive and condescending.
Having personally worked for large corporations, I can say that the "it's not personal, it's business" motto is pure evil bullshit better suited for the mob.
If you (the royal you) can't treat people with respect that they deserve, don't engage until you can.
When I say that Paypal has incentives not to ruthlessly deny bounties, I mean actual incentives, not "it feels good to do good" type stuff. Even if their reputation among bounty hunters is factored out: they literally have an incentive to pay bounties. That's the metric by which bounty programs are judged.
Sadly, this also undermines trust in the overall state of "security research", which most of the time, borders on being silly. :-/
These people at least appear to have done some actual work. Paypal is probably one of the most overfished ponds in application security, and they didn't come up with much, but it's at least sort of interesting.
One would have thought Wells Fargo had a talented team of people to catch their millions of fake accounts they made, but alas it went on for a decade. I will always assume companies have their backs turned to security, until proven otherwise, regardless of size or perceived risk.
Second, if I did, it would be none of your business.
Third, comments like these are forbidden by the site guidelines, which demand that you not make accusations of astroturfing simply because you disagree with a comment.
For clarity's sake: anyone with a significant number of acquaintances in SFBA appsec knows people working appsec at Paypal or their subsidiaries.
FWIW, I wasn't claiming either; I was questioning the source of knowledge, as the claim seemed to be factually informed but is in fact just the poster's opinion on the matter.
2FA means two-factor authentication. It works by forcing one to use two different forms of identification to authenticate, such as a login/password and, in this case, identification of the computer used.
So, with all respect sir, what I'm saying is while this isn't the best 2FA, it absolutely IS 2FA by definition.
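To make "two factors" concrete, here is a minimal sketch of one common second factor, TOTP (RFC 6238), using only the Python standard library. The function name and variable names are mine; this is an illustration of the mechanism, not PayPal's implementation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, at=59, digits=8))  # → 94287082
```

The point is that the code is derived from a shared secret the server holds, so knowing the password alone (factor one) is not enough; whether a given scheme deserves the "2FA" label comes down to whether the second factor is actually independent of the first.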
Please explain which parts of my comment are false.
ATO is authentication by definition, but again, depending on how it's implemented, not usually the best form.
1) I pay for a product on eBay using PayPal, charged directly to my credit card (not from any existing PayPal balance).
2) Seller marks the item as shipped but then, 5 minutes later, issues an e-check refund (rather than a refund to my credit card).
3) Seller cancels and deletes the original item on eBay so I can no longer raise a dispute there.
4) The e-check refund continues to bounce, as clearly the compromised PayPal account can't pull those funds from the other source.
5) The refund being in limbo means my dispute with PayPal gets closed as "a refund was previously issued" (which did, and will continue to, bounce).
The important part is step 2: since I paid for this on my credit card, the refund should have gone directly to my card. Because of that, I've raised a chargeback with the issuing bank, which should hopefully make PayPal sit up and put a bit more effort into sorting this out.
Or just close your account and ban you.
Now I'm using other methods to pay for most e-commerce transactions: one-time PANs, a distinct debit account that I keep a minimal amount of funds in for this stuff, etc. So PayPal is no longer seeing anything like the level of use it once did from me. They can ban my account if they want; the issuing bank has already said it will proceed with the chargeback if PayPal doesn't issue a refund.
Direct banks such as ING or DKB in Germany are offering those cards, but dear god help you if you have a dispute. Money is gone from your checking account right away and you don't get the convenient fraud protection of actual CCs.
Try disputing a card transaction with Steam - your 18 y/o account with thousands of games and dollars invested will be gone in a flash. Same goes for Google/Amazon/Microsoft/etc.
According to this image, they did not respond or refute within 30 days.
If PayPal’s PCI-DSS compliance certification isn’t revoked then PCI-DSS is a farce.
Quote from your source:
> If your scan fails, you must schedule a rescan within 30 days to prove that the critical, high-risk or medium-risk vulnerabilities have been patched.
Scan in this sentence refers to "a PCI DSS external scan".
The list of approved vendors that can conduct PCI DSS external scans can be found here: https://www.pcisecuritystandards.org/assessors_and_solutions...
Please find cybernews' certificate number there and quote it for us, I have looked and can't find it.
I would guess that, contrary to your implication, they are not an approved scanning vendor. If this is the case then it really does not speak to the characteristics of PCI-DSS and your comment just seems wrong.
And even if they were an approved scanning vendor, from what little I know about PCI-DSS, these scans are part of a larger process, so the scan failure would still have had to be part of that larger process for this 30-day limit to apply.
I could go on and on about how much I hate PayPal and random other things, but just because I don't like something does not quite justify making false claims about it.
Actually this makes a pretty good case for this regulation being a joke. They clearly aren’t up to the responsibility of being a payment processor and are leaning on the law to sustain their business rather than simply demonstrating aptitude directly.
Because it's a scanner for PCI-DSS compliance, not a scan for security issues.
They do not fear that unapproved scanners will be more strict than approved scanners, they fear they will be less strict.
PCI-DSS is not government regulation, but an industry created and enforced standard. Compliance is not mandated by federal law and only a couple of states have laws that reference it. For example, Nevada requires compliance while Washington doesn't require compliance but does remove liability for breaches for compliant businesses.
Did you read downthread about the actual "2FA" feature this team "bypassed"?
It's not the size that matters, but how you use it that counts.
Regulations can be self-imposed on an industry, it does not have to be something the government imposes. Calling PCI-DSS a regulation is still accurate despite many people conflating the term "regulation" solely with government action. In this case, the authority creating the regulations is the PCI Security Standards Council who have their power because the big players in the industry give it to them.
Don't look behind the curtain.
All of HackerOne's information that you cite is about them being PCI-DSS-compliant or having undergone a SOC2 Type 2 audit. Nothing you link to identifies them as a PCI-DSS auditing company. They are not.
And the "scans" the PCI-DSS standards refers to are standard pen-test and external vulnerability scans, usually conducted by an accounting company who will certify the scan results. They are for known vulnerabilities, things like the version of Apache you are on, etc. None of the reports sent via HackerOne would qualify as a "scan" under PCI-DSS.
Please read the page again. They specifically say you can achieve compliance certification with HackerOne.
On top of that, there's not really any legal issues for being non-compliant, as has been pointed out elsewhere in this thread.
HackerOne offering PCI-DSS approved auditor approved challenges gets you nowhere towards the claims you made in your first comment.
1. HackerOne would have to be a PCI DSS Approved Scanning Vendor - they are not AFAICT, neither is the CyberNews research team that did the scan AFAICT.
2. HackerOne would have to have conducted the scan - they did not. The CyberNews research team did.
3. The scan that HackerOne did would have to qualify as a PCI-DSS external scan - which ... do you get the part that HackerOne did not do the scan here or not? And nowhere did the CyberNews research team claim they performed a PCI-DSS external scan.
Please at least try to make an argument for your claims.
“Meet pentest requirements for PCI DSS, SOC2 Type II, and HITRUST compliance certifications.”
Further, that has absolutely nothing to do with anyone reporting vulnerabilities through HackerOne. That is not a scan by the definition of PCI-DSS, the SOC2 trust services criteria, or any other security framework you care to name.
Just give it up. You're wrong.
Not anywhere on the page you linked. And a "PCI-DSS auditor approved organization" is not a "PCI-DSS approved scanning vendor" which if they were you could just quote the certificate number instead of link to HackerOne.
EDIT: I guess you are referring to this:
> Meet penetration testing requirements for PCI DSS and SOC2 Type II compliance certifications with our auditor-approved penetration testing methodology and Security Assessment Report.
This in no way is the same as claiming "we are a PCI-DSS auditor approved organization". Which again, would be irrelevant if it was the case.
Further, if you read the article, it is clear the "We" does not refer to "HackerOne".
> When we pushed the HackerOne staff for clarification on these issues, they removed points from our Reputation scores, relegating our profiles to a suspicious, spammy level.
As far as I can tell "We" refers to cybernews.com
And again even if cybernews was a PCI-DSS approved scanning vendor it would still have to qualify as an official external scan within the PCI-DSS framework.
Read the page carefully - it specifically states they are an auditor approved org.
Quote from page:
“Meet penetration testing requirements for PCI DSS and SOC2 Type II compliance certifications with our auditor-approved penetration testing methodology and Security Assessment Report.”
Secondly, PayPal works with HackerOne officially and within the CVSS standards, as they clearly state on their HackerOne page, which is complying with PCI DSS.
Edit: Archived in case:
This comment chain has convinced me that PCI-DSS is a farce.
"...satisfy the requirements for external penetration testing for audited PCI DSS and SOC2 Type II certifications."
"Final Report Delivered. Ready for Auditors."
Take a wild guess on what you think will happen.
The purpose of PCI is to shift liability
It just isn't reasonable that PayPal would be cut off. That was always a toothless threat, at least for larger players.
As an aside, PayPal is a marvel to me because it is effectively lost in time. Using their tools and interface is like stepping back to 1995, and it seems -- from an outsider perspective -- that it must be some duct-taped quagmire that is barely holding on.
No? They were created by the industry to avoid actually being regulated and are a way to shift liability.
That doesn't mean they aren't also beneficial, but that's more a side effect than the intention.
If we want to be cynical, of course there was a self-serving reason they created the standards -- because fraud, especially "internet" fraud, was on a massive upswing and it threatened this enormous new market of credit card spending. There is no question it's in their self-interest to improve the general condition of transactions.
It's not cynical, it is literally the reason PCI exists.
> Five different programs had been started by card companies... The intentions of each were roughly similar: to create an additional level of protection for card issuers
You said -
"avoid actually being regulated and are a way to shift liability"
This is like saying a store put razors in a locked case to avoid being regulated. Or they simply don't want their stuff stolen?
I was being facetious when I said if we want to be cynical, because of course everything any business does is in their own self-interest. Of course it is -- that goes without saying, unless one is just trying to be glum.
Equating credit card fraud to physical theft is silly. The intermediaries of the credit card industry earn revenue by charging fees to process transactions. When fraud occurs, they're only liable if they were somehow responsible. PCI allows the network to shift liability to the periphery and lets the central network deny taking responsibility for systemic problems with the infrastructure.
To use your razors analogy, PCI is like Gillette shipping razors loose in a box to CVS and telling the store that it is liable if anyone gets cut or the razors get stolen, AND Gillette can fine them if anyone gets cut or razors get stolen. But that's not how it works; in the real world razors come with safety covers in tamper-evident sealed plastic clamshells.
In both cases someone is out money. It isn't a difficult step.
Fraud costs the credit card industry. It costs issuers (they shoulder 60% of the direct cost), merchants, and it costs the future of the industry because it is a nuisance for end-users.
They make a standard of best practices to reduce fraud. Following those best practices is good for every single participant, outside of criminals. Reducing fraud wholesale is the goal, obviously.
Spinning this in a nefarious fashion is not helpful to anyone and does nothing but muddy the waters.
PCI identifies recommended mitigations and imposes penalties for failures, but it doesn't ensure or validate compliance. It simply shifts liability from one stakeholder to another.
No one is spinning this as nefarious, but rather information that should be taken into consideration.
Or is that one much like the GDPR: crazy fines that only big players can afford? If so, it was poorly designed.
> Up to €10 million, or 2% of the worldwide annual revenue of the prior financial year, whichever is higher
> (1) Each supervisory authority shall ensure that the imposition of administrative fines [shall] be effective, proportionate and dissuasive.
> (2) [...] When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case due regard shall be given to the following:
> the nature, gravity and duration of the infringement taking into account the nature scope or purpose of the processing concerned as well as the number of data subjects affected and the level of damage suffered by them;
> the intentional or negligent character of the infringement;
> any action taken by the controller or processor to mitigate the damage suffered by data subjects;
> the degree of responsibility of the controller or processor taking into account technical and organisational measures implemented by them pursuant to Articles 25 and 32;
> any relevant previous infringements by the controller or processor [and other specified criteria]
> (8) The exercise by the supervisory authority of its powers under this Article shall be subject to appropriate procedural safeguards in accordance with Union and Member State law, including effective judicial remedy and due process.
They can't just arbitrarily decide to fine you the maximum.
Fining a small mom-and-pop site €20 million (€20m/4% is the highest fine, depending on the case) is not proportionate, and not effective either, because I would like to see them actually collect on that. Such a fine would dissuade a mom-and-pop from doing business at all, which is not what the ICO in the UK would want. Speaking of the ICO, their big fine to BA for shockingly bad security came out at 1.5% instead of the max 4% because BA worked with the ICO (though the ICO still found they had failed in a duty of care to protect data), and the fine has been pushed down the road ever since it was issued; at the moment the earliest the ICO will actually fine BA is next month, and it's been almost a year since they filed their "intent to fine".
So while they can throw around heavy fines, it's not like they run every mom-and-pop site out of the country.
Welcome to the EU. They've pulled these stunts before. When they introduced changes to digital VAT collection the lawmakers "forgot" that VAT has exemption thresholds. This effectively barred some small and micro businesses from selling their digital services/goods to other EU countries, because the business would not have been exempt from VAT afterwards. It took the lawmakers years to implement a minimum VAT threshold.
>It’s not like they run every mom and pop site out of the country.
Of course they won't, because people want to do business. There will always be more businesses that get started. The question is whether there will be fewer businesses started because of the regulation. So far analysis after GDPR points to yes.
The max(€10m, 2%) and max(€20m, 4%) are the most that supervisory authorities may issue as fines.
But supervisory authorities have a legal duty to issue fines that are proportional, which means that unless you breach the GDPR in a wilful and egregious manner you're unlikely to be fined that much (and if you are, you can appeal the fine to a court, which would reduce it to a proportional amount).
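The two caps being discussed reduce to a one-line formula: the fine ceiling is the *higher* of a fixed amount and a percentage of worldwide annual revenue (GDPR Art. 83(4)-(5)). A minimal sketch of the arithmetic (the function name is mine, and this computes the legal cap, not the fine an authority would actually impose):

```python
def gdpr_fine_cap(annual_revenue_eur, severe=True):
    """Upper bound on a GDPR administrative fine.

    Art. 83(4): up to EUR 10m or 2% of worldwide annual revenue, whichever is higher.
    Art. 83(5): up to EUR 20m or 4%, whichever is higher, for the graver infringements.
    """
    fixed, pct = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
    return max(fixed, pct * annual_revenue_eur)

# A mom-and-pop with EUR 500k revenue is still *capped* at EUR 20m for severe
# infringements, but proportionality (Art. 83(1)-(2)) governs the actual amount.
print(gdpr_fine_cap(500_000))         # → 20000000
print(gdpr_fine_cap(10_000_000_000))  # → 400000000.0 (4% of EUR 10bn)
```

Note that the "whichever is higher" direction means the fixed amount bites for small businesses and the percentage bites for large ones, which is exactly the asymmetry being debated above.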
Is the law as written somehow vulnerable to some legal hack where all my revenue goes through Company A but all my data goes through Company B, so that Company B has a small global revenue despite being extremely profitable to the controllers of the companies?
No. Who the data controllers are is a matter of fact, not assignment.
To quote the Court of Justice of the European Union in the Fashion ID case (C‑40/17) at paragraph 68:
"[A] natural or legal person who exerts influence over the processing of personal data, for his own purposes, and who participates, as a result, in the determination of the purposes and means of that processing, may be regarded as a controller".
Furthermore, as per that case, multiple data controllers may exist for some processing activities.
So both Company A and Company B may be considered to be Data Controllers and thus both liable.
I'm no lawyer, but this doesn't sound like it's aimed just at the bigger players; at minimum you'd be looking at some fines, potentially up to €10 million. I guess it could be argued the 2% is geared towards hurting the big players.
 https://gdpr-info.eu/art-83-gdpr/ Art. 83(4)
This really downplays the report or shows a complete lack of understanding.
Getting access to someone's PayPal account, which could potentially mean all their credit cards and banks, is definitely an issue that needs to be addressed. This in itself should not be a reason to lose PCI certification.
However, as the article further indicates, failure to respond (or even closing the issue without resolving it) is a completely different story.
Getting access in this way to users' financial accounts is absolutely a vulnerability.
Ok, first, and foremost, don't you think it's a problem if there are stolen accounts? Wouldn't it make sense to visit the .onion site that the author refers to in the article and lock access to all accounts found there?
> Risk-based anti-ATO systems are heuristic.
This is PayPal practicing defense in depth (DiD), which is great. What's not great is that the 2FA defense system could be defeated.
> Every bounty program I've ever paid attention to would close that report.
Getting access to a user's financial account and being able to move their money is something I would take serious 100/100 times. I hope you're not paying attention to bounty programs in the financial sector.
it's just there to make people that don't know anything about technology feel better
Their response is dogshit but not for this reason.
PCI Compliance is total bullshit and everybody knows it.
Here is what responsible disclosure looks like in 2020 from somebody that has self-worth:
> (Message posted to Hacker One, and emailed to any address you can find, and sent in a letter by mail. Yes mail. Also copied in all those ways to investors of the target.)
> Dear Sir or Madam:
> I have learned about a security issue in PayPal's service. This includes being able to login to user accounts without the credentials the system is expecting. [Be vague about how exactly it works, but explain the impact.]
> I am not an employee or contractor of PayPal and I will publish this on my blog at https://privacylog.blogspot.com to build on my reputation for finding and improving the security of internet systems.
> This post will publish on 2020-03-09, which is two weeks from today.
> If you are committed to fix this issue before public disclosure, I will be happy to work with you. You can contact me at ...
- The discussion is about my reputation and values.
- I am not demanding any payment (not sure if that is legal).
- Set a firm publish date.
- This asks them to make a commitment to fix and frames the discussion going forward.
And if they do not get back to you, then when you publish you explain it just like you see in newspapers: "the vendor failed to respond and act on this report when I contacted them by email, social media and paper mail with two weeks' notice".
If the legitimate channels are not working then the system is broken and you should blame PayPal and HackerOne. Be pissed at PayPal for not making it easier to report real issues. Be pissed at PayPal for not finding the issues themselves.
It doesn’t morally vindicate people who sell exploits on the black market.
This is, I don’t know, kindergarten-level ethics? I’m flabbergasted at the self-serving rationalizations here.
A dupe costs points?! On bugcrowd you GET points for dupes...
But that was a company policy, not an H1 policy. It's perfectly possible to dupe to a closed issue. (And of course, it's also possible that you get duped to an open issue which is later closed N/A, though that's pretty awkward. You kind of hope for N/A issues to be closed right away, not to stay open for long periods.)
And not duping to closed issues causes other problems: it means always having to leave an internal comment citing the other issue that this one was secretly a duplicate of.
Not Applicable typically means the reporter is free to try to argue that it is in fact applicable, but by stating it's both a duplicate and N/A, neither the second reporter nor the company will spend further time arguing back and forth, since even if the issue were applicable the credit would go to the original reporter.
It looks like what happened here was that the issue was (explicitly) labeled a duplicate, and the original issue was (implicitly) N/A, which you can tell if you're familiar with the platform by the fact that the duplicate report cost reputation points.
This achieves the result you mention, that interest in litigating the report further is muted because it's a duplicate. Though you might want it recognized as applicable anyway because of the reputation effects, even if you're the duplicate.
I did once see a company receive a report that duplicated an earlier report that had been closed by mistake. When the new one prompted a reexamination, they reopened the earlier report and duped the new one to it. That struck me as pretty honorable compared to the easier path of leaving the closed report closed and just processing the new one as if it were new.
#1 "In order to bypass PayPal’s 2FA, our researcher used the PayPal mobile app and a MITM proxy, like Charles proxy."
So you need to be MITM'd and have a malicious cert installed? Yeah... not "critical" and out-of-scope for most places.
For "#2 Phone verification without OTP", look at the messages they were sending. Did they not understand H1's responses? Repeatedly demanding answers isn't a great look. It's not surprising it was locked.
For #3: it requires stolen creds. A "security" flaw that requires stolen creds and brute forcing isn't going to get much traction anywhere.
#4 was a dupe
#5 is a self XSS, no one accepts these
#6 is a stored self XSS and a dupe
> #1 "In order to bypass PayPal’s 2FA, our researcher used the PayPal mobile app and a MITM proxy, like Charles proxy."
> So you need to be MITM'd and have a malicious cert installed? Yeah... not "critical" and out-of-scope for most places.
In general, using a proxy to perform a 2FA bypass wouldn't decrease the risk. In this case, the attacker already has compromised credentials and is trying to bypass the secondary control. Since they are the one authenticating, the need for MITM isn't a huge deal.
That being said, another point that was made is that the "2FA" they are bypassing isn't actually PayPal's 2FA. Instead, it is a secondary, risk-based validation. A bit of a semantic difference, but important to note that if a user was leveraging 2FA, this bypass wouldn't actually get an attacker who had compromised credentials access.
No. The attacker is the man in the middle to himself, because why are you trusting the client.
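The "why are you trusting the client" point is the crux: when the attacker runs the proxy against their own session, any check whose outcome is reported by the client can simply be rewritten in transit. A hypothetical sketch (all names are mine, not PayPal's API) contrasting a client-trusted flag with a server-side verification:

```python
# Hypothetical illustration: the attacker owns the client, so a MITM proxy
# (e.g. Charles) can rewrite any client-reported check. Only state the
# client never sees can be trusted.

def vulnerable_login(request, session):
    # BAD: trusts a value the client (or a proxy run BY the client) controls.
    if request.get("two_fa_passed") == "true":
        session["authenticated"] = True
    return session

def safer_login(request, session, server_side_otp_store):
    # BETTER: verifies the proof itself against server-held state.
    expected = server_side_otp_store.get(session.get("user"))
    if expected is not None and request.get("otp") == expected:
        session["authenticated"] = True
    return session

# The attacker rewrites the request in their proxy; only the flag-based
# check falls over, because the safer check needs the actual OTP value.
attacker_request = {"two_fa_passed": "true"}
print(vulnerable_login(attacker_request, {"user": "victim"}))
print(safer_login(attacker_request, {"user": "victim"}, {"victim": "482913"}))
```

This is why "the attacker needs a MITM proxy" doesn't downgrade the finding here: the proxy isn't intercepting a victim, it's just the attacker's editor for their own traffic.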
> A "security" flaw that requires stolen creds and brute forcing isn't going to get much traction anywhere.
The feature is meant to stop people from using stolen creds.
It does not work.
Given that stolen creds exist, that sounds like a security flaw to me.
It's not "impossible to use stolen credentials", it's "you have to clear this barrier to use stolen credentials". If that barrier is broken, that sounds like a flaw in the security model.
But with the flaw here, the effectiveness drops to zero. The "making certain attacks harder to perform" is basically gone in the scenario where someone is buying a bunch of credentials. That's not good!
#5 and #6 are indeed exaggerated, especially since even if a hacker has stolen credentials and bypassed the automatic 2FA, the security question won't be displayed on the same page users use to confirm payment (to replace the e-mail address) or to keylog credit card information.
There should be a wall of shame for these (not by person, but by company and group). Next time we get a contact/candidate who “led the sign-on 2FA management” at PayPal, we'll know to be extremely cautious.
There is no “karma” in tech world. People design the shittiest systems in company 1 and then move on to some other role in company 2 and float around taking credit for more and more stuff someone else did.
There needs to be a balance, each party needs to play their own role and work in unison. As much as managers need to manage things and largely clear the way for architects and engineers, architects and engineers need to perform their job and role, to which I would argue belongs adhering to industry standards for security as a core aspect.
If there was clear pressure on, or even overriding of, architects/engineers insisting on adhering to standards by managers who were not performing their role of advocating for or negotiating on behalf of those architects/engineers, and who were instead sabotaging them and their product, then sure, it's a management failure. But at that point, the architects and engineers should have outright refused and revolted against the managers, or at the very least clearly and expressly voiced their vehement opposition.
As a manager, I would have even stuck my neck out and sided with an architect and engineer rebellion if they were pressured or even asked to sacrifice core requirements. I also understand though that not all organizations have managers that would do that, especially in careerist organizations where managers see people as bodies to pile up to climb the ladder faster.
A few months later I got a voicemail from paypal, apparently my original call bubbled up. They asked if I had destroyed the info and to let them know if I had not (I did). Then there was a long pause (I guess they assumed the voicemail was over), and it turned out there were 4-5 people on that call and they then discussed how the call went and whether or not it was sufficient to CYA.
I've not used it since, and I hoped they got their act together (sounds like maybe not).
That's hilarious. Please tell me you kept that recording.
That seems like a good way to make sure nobody trusts your business. What say you, HackerOne? How can anyone trust this business acting against what is ostensibly its core function?
> When we submitted this to HackerOne, they responded that this is an “out-of-scope” issue since it requires stolen PayPal accounts. As such, they closed the issue as Not Applicable, costing us 5 reputation points in the process.
But Paypal's policy really couldn't be clearer:
> Out-of-Scope Vulnerabilities
> Vulnerabilities involving stolen credentials or physical access to a device
( https://hackerone.com/paypal )
If Paypal says "don't send us this type of report", and you send one anyway, are you really surprised when your account gets a warning attached saying "this person usually files low-value reports"?
>Authentication or authorization flaws, including insecure direct object references and authentication bypass
Reading it in the context of the other out-of-scope issues, I think they meant that the ability to buy or steal someone's credentials is not a vulnerability in and of itself.
>Vulnerabilities involving stolen credentials or physical access to a device
It is a poorly worded and confusing policy. Yet, if I found a 2FA bypass and I read that policy I would conclude that it is in scope and submit the issue.
If you wanted my advice as something of an insider to the platform, I'd say that you should point to the ambiguity there ("One policy says yes, another policy says no?") and ask for an Informational close rather than Not Applicable. (H1 hates it when researchers ask for a specific close status, but it's common and often reasonable.) Closing your report Informational instead of Not Applicable costs the company nothing, so even an argument that isn't very strong on the merits can carry the day.
I wouldn't push for a payout, given the out-of-scope phrasing. If executing a successful attack requires you to possess stolen credentials, they're on solid ground when they tell you the attack is excluded by their policy.
They also have opt-in 2FA.
It’s unclear which one the author bypassed.
Perhaps the confusion is by design on paypal’s side? Presumably giving people a false sense of security helps them close disputes without paying out?
Completeness or consistency (choose one)
- bulk acquire stolen credentials, bypass 2FA, bypass the security checks when sending money, and accumulate wealth
- sell above process to anyone that has an internet-connected device, the desire to accumulate wealth, and willingness to commit fraud (which I would guess is a non-trivial % of the world's population)
- disclose the vulnerabilities to paypal through any available channels
The fact that they went with the last option AND were punished for it doesn't shock you? Jesus.
Tangentially, as a (former?) PayPal user, it's wild to see that they consider vulnerabilities involving stolen credentials as a non-issue. Why do they offer 2FA at all, then?
e: After taking another look at that massive Out-of-Scope list, I'm having a hard time imagining a bug that couldn't be closed as "Not Applicable." What a sham.
If that's all you can do, then this is a self-XSS, which is excluded.
#6 is much more clear; that one's very obviously a self-XSS.
They're not; you're just choosing to assume bad things about them. Their out-of-scope list is fairly standard. If you asked a guy on the street "what would hacking PayPal look like?", the answer they imagined would probably be in scope.
For example, if I send you a link to my personal website, and when you visit the website your PayPal account automatically sends $500 to my PayPal account, that's in scope.
Nope. From the out of scope list:
> Attacks involving payment fraud, theft, or malicious merchant accounts
I wish I could define what is and isn't a bug in my code at work. My defect rate would be incredible.
"This happened even when the issue was eventually patched..." which, based on that, I understand their gripe here