Responsible disclosure means delaying public disclosure to protect users while the vendor prepares a fix. If the vendor says they won't fix it, then disclosing the vulnerability to users is not only a right but a moral duty.
This sounds to me like an edge case that H1 should address if it really wants to be taken seriously.
This kind of incident (whatever its cause) is about the worst thing that could happen to a company whose entire value proposition is coordinated disclosure. It's like the bad old days when companies would legally threaten you for finding a bug, and from an outside perspective, HackerOne seems to promote it.
If I were an ethical hacker, I'd think twice before using your bug bounty program for fear of that treatment.
If I were a potential customer (or even a current customer), I don't know if I'd want to be associated with a company that tolerates veiled threats against ethical hackers.
Edit: I should add that from this, it actually looks like it's HackerOne making the veiled legal threat:
2. It is not the case that all reporters want their findings disclosed publicly, even if they're rejected.
3. Reporters already retain the right to publish findings however they'd like. The worst H1 or a client can do is kick you off the platform.
4. A bug bounty platform that mandated disclosure of any sort would lose all its customers to the platform that didn't have that mandate.
As a hacker on HackerOne, this is not my understanding of the relationship. Generally speaking, the programs give you "authorized access" under the CFAA conditional on following the disclosure guidelines. I don't know about other countries, but for the US I'm pretty sure this means that breaking the guidelines means you've retroactively committed a felony.
Now, it seems questionable whether any federal prosecutor would actually take the case, but it definitely doesn't seem like a strictly civil issue to me.
Strongly agree on all other points though.
You need permission to pentest someone else's systems; you don't need permission to pentest software on your own systems, even if that software is written by someone else. In an enterprise setting it's possible that you have signed a contract where you agree not to do such testing or not to publicize its results, but violating that would be a civil matter regarding the terms of that contract, not a felony under the CFAA.
There is no such thing as a retroactive crime in rule-of-law systems. Disclosure could be considered an offense in its own right, though.
But that doesn't apply to Steam; nothing they write can really impact your ability to conduct security research on your own computer.
You don't have to get into legal trouble to see which way the wind is blowing.
However, why doesn't H1 expressly allow reporters the option of public disclosure for all NA or WONTFIX reports?
My presumption is that the "other reasons" are business/political and centered around the desire to provide value to or establish goodwill with corporate partners.
In reality, valid bugs being quashed by vendors is not the real problem H1 has.
In this particular case, it sounds like H1 (or an employee thereof) actively discouraged disclosure, which seems like a problem.
> In reality, valid bugs being quashed by vendors is not the real problem H1 has.
There can clearly be more than one problem. I still fail to see the relevance of the "bug report quality" problem to this discussion (beyond explaining why automatic disclosure of NA/WONTFIX reports is not helpful.)
There comes a moment when inaction translates to deception, and if you need clarification for what that looks like in the wild, look no further than Facebook.
They are selling their bug bounty program to their customers (e.g. Valve) as offering the equivalent control to a traditional pen test contract (with confidentiality) while also trying to sell the spec-work, no-findings-no-pay price advantage of a bug bounty program. It's scummy as hell.
Your comment is interesting as well, if only for the defensive reaction without addressing the "being scummy" claim. I'm basically hearing, yeah it's scummy, now get off my lawn.
Ended up naming it 'Bad QR', putting this page together and sending them a private link (https://writecodeeveryday.github.io/projects/badqr/)
Be a bit careful when experimenting, though. You may run into problems syncing your cloud saves for some games if/when you go back to the official client.
Legally I don't know where that stands, but morally I'd say we have a right to play the games we paid for.
Doesn't seem too unreasonable.
I'm really curious how much of what is reported to HackerOne ever gets an actual patch. It kind of seems like there are a bunch of known vulnerabilities idling on their platform without quick fixes. Should be interesting once the HackerOne database is inevitably leaked.
HackerOne should start requiring companies pay researchers for duplicates - that the company already knew of a flaw should make them more liable, not less.
That would create a perverse incentive for researchers to tell their friends about the vulnerability so that they can resubmit it and also get a bounty.
The problem could be solved on the side of the researchers by splitting the bounty among all submissions of the same bug, but anyone else with access to the report (employees of either HackerOne or the relevant company) could try to get a share by having someone create a duplicate report.
First come, first served seems like it would be the hardest to game, as the first reporter is guaranteed to have actually done the work (not counting rogue employees who create bugs to "find" and report).
There should probably still be some kind of reward for duplicate reports to avoid discouraging researchers, but something symbolic like publicly acknowledging that they found a bug might be enough to provide validation.
For external parties, yes. However it's the easiest to game for those liable, since you can just mark whatever you want as a "duplicate" and refuse to pay the bounty.
Offering bounties for public disclosures helps remove a lot of perverse incentives.
In reality, vendors (or at least, serious vendors) aren't gaming H1 to stiff bounty hunters. If anything, the major complaint vendors have about H1 is that they aren't paying enough --- that is, they deal with too many garbage reports for every report that actually merits a fix.
I assume it'd be hard to convince companies it may be in their better interest to set up an incentive structure this way. But perhaps a third party platform could find some such mutually beneficial equilibrium.
It seems weird that HackerOne put themselves in such a losing position by trying to be the ones who prevent submitters from revealing security issues. Why not be a neutral party, and let the companies try to enforce rules on the hackers in these cases?
It was pretty low hanging fruit. I was going through an XSS tutorial and used their site for practice. `<script>alert(1)` could be saved into several user fields including Name and would then be executed on every subsequent pageload around the site.
If there was some indication that someone had reported it recently I maybe would have waited longer, but I suspect this bug had been known for months.
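For readers unfamiliar with stored XSS, here is a minimal Python sketch of the failure mode the parent describes; the function names and page template are invented for illustration, not the site's actual code:

```python
import html

def render_profile(name: str) -> str:
    # Vulnerable: the stored user field is interpolated into the page
    # verbatim, so a saved payload like "<script>alert(1)</script>"
    # reaches the browser as live markup on every subsequent page load.
    return f"<p>Welcome, {name}!</p>"

def render_profile_safe(name: str) -> str:
    # Escaping at output time turns the payload into inert text.
    return f"<p>Welcome, {html.escape(name)}!</p>"

payload = "<script>alert(1)</script>"
print(render_profile(payload))       # script tag survives intact
print(render_profile_safe(payload))  # &lt;script&gt;alert(1)&lt;/script&gt;
```

The fix is the same regardless of framework: escape (or use a template engine that auto-escapes) at the point where user data is written into HTML.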
> The patch was almost immediately proved to be insufficient, and another security researcher found an easy way to go around it almost right away.
You might want to read the article.
Isn't security through obscurity largely to be avoided? I thought the working model for most security researchers was: if it's not worth fixing, it's not worth hiding.
More to the point, I thought that responsible disclosure always came with an expectation of public disclosure. The advice I've always been given is that you should never disclose with conditions -- i.e. "fix this and I won't tell anyone."
It should always be, "I am going to tell everyone, but I'm telling you first so you can push a fix before I do."
Does HackerOne operate under different rules?
That position isn't "wrong" so much as it isn't useful in reducing risk.
I moved to a separate Wintendo box, which is the best solution.
But yes, separate hardware is safer.
Steam should maybe be liable if they are actively thwarting disclosure that would protect users but that's a tough thing to establish legally.
Thankfully, the Linux version doesn't seem to have this problem (AFAICT).
In the end, Microsoft will get a bad reputation for having an insecure OS (not to even mention Valve here), and in the long run it will hurt them just as Flash stubbornness hurt Adobe.
Windows does support this functionality, and ultimately Valve is to blame for not using it, but you're right that Microsoft should be more proactive in encouraging good design and discouraging bad design.
With a foam bat. Just because the Flash horse is dead doesn't mean it didn't deserve its beating, or that it can't continue to be a potent reminder of how bad Adobe was at handling security issues and why other platforms, like Steam, should learn instead of emulate.
Flash was just such a unique, special target, a la PDFs and Microsoft Word: there were few other wide-open targets that a hacker could so predictably get a user to open (embedded or not) on a target's machine. So it was particularly sensitive to vulnerabilities by design, and a much broader security perspective was clearly needed than for most software.
...perhaps I'm taking the analogy too far?
Adobe had a practical monopoly on the interactive web and blew it.
Those were the old days. Or, that damned monkey!
If no one will use or manufacture your baseball bat, then the danger of the bat is moot.
Steam or any other app should always run sandboxed: no root access, no file access, no camera access, no access to other processes, etc. For most users, Steam only needs sandboxed local storage to put its games into, plus internet access (and maybe mic access); that's it.
I really hope Flatpak, and something similar for Windows, becomes the norm; the current situation is a security and privacy disaster.
There can still be exploits, of course, but now you have to find a weakness both in the app and in the OS sandbox, which is a whole lot harder.
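As a sketch of what that model looks like in practice: a Flatpak manifest whitelists exactly the holes it needs in the sandbox via `finish-args`, and everything not listed (host filesystem, camera, other processes) stays blocked by default. The app-id and the exact permission set below are illustrative, not the real Flathub Steam manifest:

```json
{
  "app-id": "com.example.Steam",
  "finish-args": [
    "--share=network",
    "--socket=pulseaudio",
    "--device=dri",
    "--persist=.local/share/Steam"
  ]
}
```

Here `--persist` gives the app its own private storage for game installs, `--share=network` grants internet access, and nothing else is reachable, which is roughly the permission set described above.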
What year is it? To me, prehistoric means buying a nice big box with a CD-ROM or some floppies and installing with no internet required at all. Shell exes that want to download crap are the current nightmare we're living in, I thought.
Well, some do... like Doki Doki Literature Club
They want to "open a file", which means a file open dialog.
Or: upload your profile picture, which means a one-time upload of a file. Right now you give access to the camera and it can be used for anything.
I expect them to take security flaws seriously if they want my continued patronage - and that includes EoPs.
Telling a security researcher "we're not going to fix this but please keep it secret" is not a viable strategy, ever.
In the end, the researcher went public (as nearly all will, in that same situation), Valve got a hit to their reputation in the tech press, and they ended up having to (attempt and fail to) fix it anyway. Entirely predictable, and Valve looks really stupid here.
Banning people from your bug bounty program for following the generally accepted rules for security disclosures is certainly within their rights, but so what? It's not a winning strategy for any company.
All true and utterly worthless to point out.
Your second post acted like people were calling the ban not-allowed.
Neither is accurate, so your surprise is misplaced.
Even though it was clear that this might happen, it's such a blatant bad decision, for both ethics and customer security, that people are fighting back loudly.
You haven't given a single reason people shouldn't be upset by it.
Was Valve technically within their rights to ban this researcher? Sure. Was it a move that advanced Valve's interests in any way? Obviously not.
In general, having a Bug Bounty program is good. We can agree on that, right?
Most Bug Bounty programs have a scope, and staying inside the scope is important to the business for reasons. My guess is that most scopes are defined by a combination of confidence in the security of the code, resources to triage vulnerabilities in that part of the code, and the risk to the business from vulnerabilities found in different parts of the code.
That is to say, I suspect that either Valve doesn't have many developers well versed in that part of the code base, or they are not confident in the security of that code base, or they considered it a low priority (even if we disagree about the priority of this vulnerability).
Now, let's pretend that I'm right about those reasons. Even further, let's pretend that they did not include it in the scope because they don't want to pay a bunch of bounties on code they knew was insecure.
(Aside: I'd much rather have companies only include things in bug bounty programs once they're confident they are secure. Relying on BB to do your security for you is begging for trouble, because then the company isn't taking responsibility for, or even trying to do, things securely.)
Given this train of thought, which is making more than a couple assumptions, I don't think their actions are extremely bad or pointless. They are trying to keep their bug bounty program in scope. Bug bounty programs involve a fair amount of trust. If that trust is broken and they don't want that researcher anymore, then that's fair.
There probably should have been better communication. It probably (definitely) shouldn't have been a WONTFIX. Overall, terrible outcome for everybody.
It's just one of those things where every decision looks reasonable in isolation and leads to a really bad outcome and the company looking terrible.
If Valve wanted to try and defend the structure of their bug bounty program by essentially arguing that Steam is such a mess that local privilege escalations are out of bounds, they should be forced to publicly reckon with that stance.
Scopes are fine.
But if it wasn't in scope, then clearly none of the program's rules apply to the bug, right? That bug isn't part of the program.
"Fine, I'll tell the world."
"We fixed it."
Their ask was self-serving and dangerous, and deserved to be declined.
Valve...I have your software installed. It has a hole. Fix it.
This mudslinging isn't helping your PR or making me feel more secure about my steam install regardless of the details.
I basically uninstalled the Steam client after the first 0-day was found. At least with GOG I don't have to install Galaxy. But that's a different rant...
I know this will be unpopular with folks like tptacek, but I've always felt strongly that bug bounty programs offer too many perverse incentives to all parties.
More often than not it becomes a tool for companies to sweep issues like this under the rug and then use HackerOne's system to force the reporters to play ball (because they want to keep getting paid). I hate this system.
I'm 100% behind open, public disclosure and if it were my own product in question, I would offer bounties for _public disclosures_. That keeps everyone honest.
Now they can hide it for months (or forever), allowing others to discover it while keeping the researchers quiet.
- Researcher finds bug
- Researcher discloses to vendor
- Vendor fixes (or not)
- Researcher discloses bug publicly once vendor has fixed, or after X time (whichever is first)
This is roughly how Project Zero operates, and it's a good mix between giving the vendor the opportunity to fix it and deploy the update before it gets exploited.
It's very naive to assume that bugs can be fixed before others can exploit them. Bugs take time to fix, and the process takes time, especially when dealing with large enterprises.
The vendor can also usually request an extension (of, I believe, 1 month) as per the Project Zero guidelines, if they confirm they are actively working on a patch.
The goal of responsible disclosure is to help the vendor and their users be more secure, so having a policy that balances the two is important: it gives the vendor time to fix the issue without leaving users exposed indefinitely.
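The deadline arithmetic in that kind of policy can be sketched in a few lines. The 90-day window is Project Zero's well-known default; the 30-day extension mirrors the "1 month" figure recalled above, which is the commenter's recollection rather than an authoritative number:

```python
from datetime import date, timedelta

DISCLOSURE_WINDOW = timedelta(days=90)  # Project Zero's default window
EXTENSION = timedelta(days=30)          # the "1 month" extension recalled above

def disclosure_date(reported: date, extension_granted: bool = False) -> date:
    """Earliest date the researcher goes public under this policy."""
    deadline = reported + DISCLOSURE_WINDOW
    if extension_granted:
        # Vendor confirmed a patch is actively in progress
        deadline += EXTENSION
    return deadline

print(disclosure_date(date(2019, 8, 1)))        # 2019-10-30
print(disclosure_date(date(2019, 8, 1), True))  # 2019-11-29
```

The point of a fixed, pre-announced deadline is that it removes negotiation: the vendor knows from day one exactly when the clock runs out.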
If so, that seems like a superior alternative to immediate public disclosure.
While there are some DRM-free games, the majority of games on Humble Bundle are sold as Steam keys, so you still need Steam to launch them.
I can understand that perspective: Valve can't spend the time on a rewrite to fix the EoP/LPE issues. Their stance must be that the user has to "be careful" not to install malware or other vulnerable software, instead of fixing Steam.
On the plus side, reading the writeup, it seems unlikely that this affects the Linux client (and even if it does, it's at least limited to the current user account). So I guess Steam can continue to live on my machine for another day.
Some Steam features will be disabled or broken but whether or not this affects you will obviously vary depending on which ones you like to use.
Let's remember that Valve is the oldest player in a business they pretty much pioneered with Steam over 15 years ago.
As somebody who's had an account there since day 1, I'm still amazed by how tight they've managed to keep their ship all these years, even though plenty of people have been trying to break into that very worthwhile target for over a decade.
If I contrast that with my experiences with services like Uplay and Origin, the differences are like night and day: with both my accounts on those services I had lots of issues due to my accounts getting hijacked (probably through support) several times.
In 15+ years of using Steam, this hasn't happened once to me, so whatever Valve is doing at that end, it seems to have worked well for them and their customers.
That's not meant to defend their stance on this particular issue, but imho it's also kinda dishonest to now frame Valve as a company where nobody cares about security.
If that were really the case, they would have gone out of business over a decade ago.
If you look at just the assets Valve produces minus rent seeking, are they losing money?
Turns out that all along having no accountability in your company would result in complacency and a critical lack of production. I'm curious to see how Valve Software as a company is going to climb over this security wall they've found themselves in front of if seemingly nobody has to answer to anyone and everybody gets to do what they want in a leisurely fashion. I mean we give Chinese IoT vendors crap all day long, and it turns out Steam might be just as bad!
Personally, I always thought it would be cool to work at Valve, but not anymore. I don't see them doing anything broadly relevant that doesn't involve coasting on the momentum/market share of ancient products. Their VR stuff is cool, but even there it feels like they're lagging behind e.g. Oculus in ways that matter.
I'd love to be proven wrong, because I dislike Oculus. I am simply stating my observation that Valve seems to be in decline.
And yet, it's still the best client out there.
If you want slow and instable (sic!), try the competition. The Steam client is actually fast and stable compared to what else is on offer.
"Damn, you watch some weird porn."
I'm not arguing that this vulnerability isn't one; it's a privilege escalation vulnerability. However, in your situation you had physical access, which is, as far as I know, pretty much game over for your system.
Encryption, however, cannot be broken without your credentials. These can be obtained from a running instance of Windows with Mimikatz if the admin credentials are still in memory from an earlier session.
This privilege escalation attack is probably never going to be used if the attacker has physical access.
Sure, that's certainly right, but physical access doesn't force you to be "on Windows".
> This is true, and also why I lock down my BIOSs and set the OS as the only boot device.
I never talked about you specifically; you are a tiny, tiny minority. Even then, that just locks down your computer. Even if you can't bypass that BIOS (seriously doubtful), the hard drive is still accessible.
> Encryption, however, cannot be broken without your credentials.
Is encryption on by default? That must be new, because I'm pretty sure I never had trouble accessing my user folders on some of my old Windows 7 installations (that would be a good 20% of Steam's users).
I can't find anything about this; if I have time tonight I'll try to see if I can access my user folder through another OS.
Is this on by default on Windows? I haven't needed to access my files from another Windows installation for a long time, but I'm pretty sure the last time I tried on my good old Windows 7 hard drive, they weren't encrypted and I had no trouble accessing them.
This is not something that 99.99% of Steam users would do though...
It would also still be possible to retrieve the encryption keys if the PC is still running (which is also the only way to make the Steam vulnerability viable), using a can of compressed air (a cold boot attack).
As I said, physical access to a computer is pretty much already game over... The Steam vulnerability is quite useful when connected remotely, though (which is really the most likely scenario either way).
No, using this 0-day malware, with lower privilege level, could do stuff it could not do.
I have trouble parsing this. Did you mean "could do stuff it couldn't have done" perhaps?
So some companies consider LPE to be serious.
b) Researcher reports vulnerability that falls under X
c) Since it's out of scope, it's closed as N/A
d) Report is locked because company doesn't want to publicly disclose a vulnerability in their system via the Hackerone platform
What's the problem here? Just go with normal vulnerability disclosure. Bug bounty programs are a two way street, and respecting the scope is part of that.
Edit: I guess the important part is that the researcher was then banned for disclosing the report. Seems reasonable, honestly. I don't agree with it, but I understand it.
If Steam had no problem acknowledging that this functionality exists, they should have had no problem with it being disclosed. There lies the problem. In the bathroom with the needle in their arm; "...there's no problem here..." but if you swing the door open they'll still try to shut it. Because they know they're wrong.
If HackerOne isn't going to help you, they have no right to hinder you. If they want to strongarm everyone into effectively the same agreement as an NDA, then there is literally no point in turning vulnerabilities in to HackerOne.
They seem to only exist as a cow-catcher on the locomotive of software vendors too lazy to actually fix crappy code.
"Who needs to fix code and shell out bounty if you can pinpoint and silence the researcher?"
The article gets this part wrong: the hacker isn't banned from H1, which he says in his blog post -- "Eventually things escalated with Valve and I got banned by them on HackerOne — I can no longer participate in their vulnerability rejection program (the rest of H1 is still available though)." HackerOne is in no way punishing the hacker for his reports and/or public disclosures, for what it's worth.
(Disclosure: I am on the community team at H1, though I've had effectively zero involvement with this.)
Hence, this practice by Steam makes all users of Steam less secure (doubly so, as they actually don't want to fix these issues). This is something the public deserves to know, so they can act accordingly.
Obviously it would be better if Valve fixed the issue and gave a (possibly reduced due to out of scope) bounty.
But this is software people install on their desktops, and Valve has no say in how security researchers approach that stuff. Valve can and maybe even should exclude LPEs from their bounty scope (if that's not what they're focusing on right now), but they can't reasonably ban people for publishing vulnerabilities they've scoped out of the only mechanism they've provided for submitting and tracking vulnerabilities.