Researcher banned on Valve's bug bounty program publishes second Steam 0-day (zdnet.com)



I mean, you can't have your cake and eat it too - if you claim that a particular issue is not a bug and you won't fix it, then you have no ethical grounds to say that it shouldn't be disclosed.

Responsible disclosure means delaying public disclosure to protect users while the vendor prepares a fix. If the vendor says they won't fix it, then it's not only a right but a moral duty to disclose that vulnerability to the users.


Frankly, I think HackerOne deserves a bit of blame for that. Any WONTFIX ought to be made public automatically unless there are extenuating circumstances (like the vulnerability being reported against the wrong product).


H1 itself has no WONTFIX status, FYI. A bug that's not considered a bug by the program will be closed as either N/A or Informative. Ultimately, disclosures are handled and controlled by the program, not by H1; this is both a good and a bad thing (and I say that as both a HackerOne employee and a hacker on the platform -- it's a complicated issue from both sides).


Saying "it's complicated on both sides" means there is politics involved. Being blunt, that sounds like a cop-out to me.

This sounds to me like an edge case that H1 should address if it really wants to be taken seriously.


There is definitely politics involved, but not H1 internal. The issue is that every program handles disclosure itself, so H1 itself doesn't really have the power. That could be changed at a policy level, but I'm not sure that'll happen (or should happen, honestly; I don't really know where I land on it).


From what I see, your value proposition is both to bug bounty hunters and to companies who see value in having HackerOne manage their bug bounty program.

The effect of this incident (whatever the cause) is about the worst thing that could happen to a company with that value proposition. It's like the bad old days when companies would legally threaten you for finding a bug, and from an outside perspective, HackerOne seems to promote it.

If I were an ethical hacker, I'd think twice before using your bug bounty program for fear of that treatment.

If I were a potential customer (or even a current customer), I don't know if I'd want to be associated with a company that tolerates veiled threats against ethical hackers.

Edit: I should add that from this, it actually looks like it's HackerOne making the veiled legal threat: https://mobile.twitter.com/enigma0x3/status/1160961861560479...


H1 could make it a proviso of using their service that rejected bug reports are automatically disclosed.


1. The overwhelming majority of rejected H1 reports are garbage.

2. It is not the case that all reporters want their findings disclosed publicly, even if they're rejected.

3. Reporters already retain the right to publish findings however they'd like. The worst H1 or a client can do is kick you off the platform.

4. A bug bounty platform that mandated disclosure of any sort would lose all its customers to the platform that didn't have that mandate.


"The worst H1 or a client can do is kick you off the platform."

As a hacker on HackerOne, this is not my understanding of the relationship. Generally speaking, the programs give you "authorized access" under the CFAA conditional on following the disclosure guidelines. I don't know about other countries, but for the US I'm pretty sure breaking the guidelines means you've retroactively committed a felony.

Now, it seems questionable whether any federal prosecutor would actually take the case, but it definitely doesn't seem like a strictly civil issue to me.

Strongly agree on all other points though.


CFAA could (and likely would) apply to remote vulnerabilities, e.g. exploiting SQLi on someone else's servers; but in a case of local privilege escalation like this one, all the exploiting/testing happens on systems owned and controlled by the researcher, so it doesn't violate the CFAA and doesn't need any permission from Valve - the breach happened with authorization from the system owner.

You need permission to pentest someone else's systems, you don't need permission to pentest software on your own systems even if that software is written by someone else. In an enterprise setting it's possible that you have signed a contract where you agree not to do such testing or not to publicize its results; but violating that would be a civil matter regarding the terms of that contract, not a felony in respect to CFAA.


> you've retroactively committed a felony.

There is no such thing as a retroactive crime in rule-of-law systems. Disclosure could be considered an offense in its own right, though.


How?


I agree, if you're testing someone else's website or servers, you should comply with the scope and disclosure rules or not do the testing, unless the vendor has something else on their website that implicitly authorizes testing (like an email address to send reports to).

But that doesn't apply to Steam; nothing they write can really impact your ability to conduct security research on your own computer.


Yeah, agreed in this specific case about local research (barring DMCA issues). Most H1 scopes seem to be remote targets as opposed to downloadables, though.


And this is why many of the researchers I know are based outside of or have left the United States and work out of places like Thailand.


I'm having trouble thinking of a single researcher that has left the US for legal reasons. There are lots of researchers now in Southeast Asia! But that's because bounty programs like H1 let those people work remotely.


There are plenty of researchers not in the US saying "don't do your research in the US".

You don't have to get into legal trouble to see which way the wind is blowing.


So what you're saying is you can't name many researchers who have left the US either.


I mean, the grugq is a really obvious and notable one.


I can see how automatic disclosure may not be a good fit.

However, why doesn't H1 expressly allow reporters the option of public disclosure for all NA or WONTFIX reports?


Among other reasons, because the site is literally stuffed to its gills full of people reporting bullshit security issues, like "user impersonation possible" (if you convince a user to open developer tools and give you their session cookie), and H1 wouldn't be doing any good if it generated a constant stream of people "WONTFIX-disclosing" those reports.


I don't quite follow this reasoning. Are you saying that by allowing people to make their N/A / WONTFIX reports public, it will dilute the H1 brand? Does association with H1 have a significant effect on the perceived legitimacy of individual public disclosures? Why does this matter?

My presumption is that the "other reasons" are business/political and centered around the desire to provide value to or establish goodwill with corporate partners.


People can publish whatever they want, and the only thing H1 can do about it is disinvite them from their platform. But anyone who suggests that H1 encourage people to publish NA/WONTFIX bugs probably hasn't had much contact with H1 bounty reports.

In reality, valid bugs being quashed by vendors is not the real problem H1 has.


I would rather suggest that H1 discourage (or even prohibit) their partners from disinviting reporters for publicizing N/A/WONTFIX bugs.

In this particular case, it sounds like H1 (or an employee thereof) actively discouraged disclosure, which seems like a problem.

> In reality, valid bugs being quashed by vendors is not the real problem H1 has.

There can clearly be more than one problem. I still fail to see the relevance of the "bug report quality" problem to this discussion (beyond explaining why automatic disclosure of NA/WONTFIX reports is not helpful.)


You're probably outside the security sphere, but H1 is already taken seriously. There is no prerequisite for them to address this in order to be taken seriously, as you state.


I'm not sure if you realize this, but your response comes off as gatekeeping.


How is N/A not a synonym for WONTFIX?

There comes a moment when inaction translates to deception, and if you need clarification for what that looks like in the wild, look no further than Facebook.


I think WONTFIX is 'yes, I see what you mean, but we're not going to change how that works'; N/A is 'this isn't relevant, why are you posting it here?'


HackerOne should be getting slated a lot more than they are.

They are selling their bug bounty program to their customers (e.g. Valve) as offering control equivalent to a traditional pen-test contract (with confidentiality), while also trying to sell the spec-work, no-findings-no-pay price advantage of a bug bounty program. It's scummy as hell.


If you don't want to participate in bug bounties, don't participate in them. It's not like it's hard out there in 2019 for application pentesters. This is a "world's tiniest violin" argument.


It's still interesting to hear comments like in the GP for an unknowing person like me.

Your comment is interesting as well, if only for the defensive reaction without addressing the "being scummy" claim. I'm basically hearing, yeah it's scummy, now get off my lawn.


I had the same argument with Coinbase about this one...

Ended up naming it 'Bad QR', putting this page together and sending them a private link (https://writecodeeveryday.github.io/projects/badqr/)


Please fix your jQuery import; it's being blocked because it's coming over http, not https.


TLS errors are out of scope. #WONTFIX. Don't you dare talk about this publicly.

http://writecodeeveryday.github.io/projects/badqr/


LOL. Banned from HackerOne.


Sorry about that, probably gonna switch to VanillaJS.


a great library.


I couldn't find/install it via npm or yarn :-(

;-)


You should consider The JAMstack, a new web paradigm where a user requests an HTML file and the server gives it to them.


I uninstalled Steam the moment I read the previous disclosure and Valve's approach to it. Any company that treats security the way it was treated in the '90s ought to be shunned.


If you don't need it, why did you have it installed in the first place?


Not sure what you're getting at. There are very few things we truly need; it sounds like they're sacrificing a little to avoid giving a company they deem unethical any money.


I love games and can’t play my steam collection now. But if I have to give that up so that some silly bug elsewhere in my system doesn’t expose me to a ransomware attack (or worse), so be it. I’ll find another way.


You should be able to play games in your steam library. Just open your Steam\steamapps\common\ folder and find the exe for the game you want to play.


Steam's DRM (CEG) customizes the executables so it won't play without the Steam client running and logged in to the correct account. There are lots of not-DRM-enabled games on Steam, but they're decidedly in the minority.


That's why GOG.com is my first choice. They even provide a nice Steam-like installer (unfortunately, no Linux version of the installer), while letting you download your games DRM-free, archivable and standalone.


Not all of the games on GOG are DRM-free at this point. Some require GOG Galaxy, their version of the Steam client. I went through a frustrating refund process after learning about this after making a purchase.


I have yet to find a game that absolutely requires Galaxy. Could you share which game it was?


Heroes of Hammerwatch required it, and would not run without it.


The only game that requires Galaxy is Gwent, and it's free to play.


Really? That sucks, DRM-free has been their major selling point from the start. Why was the process frustrating and which game was it?


There are a number of tools called “steamworks emulators” that allow one to bypass this outright for many games. These are generally seen as piracy tools, but there’s no good reason you couldn’t use them when you wanted to play your purchased game collection without DRM.

Be a bit careful when experimenting, though. You may run into problems syncing your cloud saves for some games if/when you go back to the official client.


Now you've swapped one attack surface for another (of a dubious origin).



As you'd expect there are (or were, a few years ago when my account was temporarily banned for a few days) cracks to unlock the executables. You might need Steam still installed for this to work the first time, I'm not sure.

Legally I don't know where that stands, but morally I'd say we have a right to play the games we paid for.


Ugh, this drives me nuts. Apparently I'm not allowed to play Sekiro: Shadows Die Twice while traveling outside of the country. A VPN can temporarily get things going again, as can staying in Offline mode, but what a pain in the ass...


Nope, that won't work for most Steam games. However, that's why there's GOG (gog.com), DRM free games. You can download the game installation kits using your browser and if you ever decide to stop accessing their site (or stop having access to the Internet) you can still play/install downloaded games.


You would think that any platform/app that actually contains the ability to load currency into itself would take any security threat seriously regardless of the scope.


The researcher can still disclose it, they just aren't going to get permission to disclose it on the Hackerone program. Most things out of scope don't get publicly disclosed as far as I know.

Doesn't seem too unreasonable.


Without seeing the communications it's hard to say, but "When the security researcher -- named Vasily Kravets-- wanted to publicly disclose the vulnerability, a HackerOne staff member forbade him from doing so, even if Valve had no intention of fixing the issue" sounds like more than just not being able to disclose on the H1 program.


I submitted an XSS on the Tesla website to HackerOne; it was marked as a duplicate. A week later, I shared it with an XSS mailing list and got an angry email from HackerOne soon after. Public disclosure violates the terms of their reporting program EVEN if they reject your report.

I'm really curious how much of what is reported to HackerOne ever gets an actual patch. It kind of seems like there are a bunch of known vulnerabilities idling on their platform without quick fixes. Should be interesting once the HackerOne database is inevitably leaked.

HackerOne should start requiring companies pay researchers for duplicates - that the company already knew of a flaw should make them more liable, not less.


> HackerOne should start requiring companies pay researchers for duplicates

That would create a perverse incentive for researchers to tell their friends about the vulnerability so that they can resubmit it and also get a bounty.

The problem could be solved on the side of the researchers by splitting the bounty among all submissions of the same bug, but anyone else with access to the report (employees of either HackerOne or the relevant company) could try to get a share by having someone create a duplicate report.

First come, first served seems like it would be the hardest to game, as the first reporter is guaranteed to have actually done the work (not counting rogue employees who create bugs to "find" and report).

There should probably still be some kind of reward for duplicate reports to avoid discouraging researchers, but something symbolic like publicly acknowledging that they found a bug might be enough to provide validation.
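For concreteness, a toy sketch of the even-split idea above (all names and amounts here are invented):

    def split_bounty(bounty, reporters):
        # Split one bounty evenly across every duplicate report of the same bug.
        share = bounty / len(reporters)
        return {reporter: round(share, 2) for reporter in reporters}

    # Three researchers independently report the same bug:
    print(split_bounty(1500.0, ["alice", "bob", "carol"]))
    # {'alice': 500.0, 'bob': 500.0, 'carol': 500.0}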


> First come, first served seems like it would be the hardest to game

For external parties, yes. However it's the easiest to game for those liable, since you can just mark whatever you want as a "duplicate" and refuse to pay the bounty.

Offering bounties for public disclosures helps remove a lot of perverse incentives.


I like your first idea of splitting the bounty. I think it's unlikely employees of HackerOne or the relevant company would risk their job for a small share in a bug bounty.


Splitting the bounty does nothing to fix the incentive problem, since it's the same outlay from the vendor whether they fix after 1 report, or a year later after 20.

In reality, vendors (or at least, serious vendors) aren't gaming H1 to stiff bounty hunters. If anything, the major complaint vendors have about H1 is that they aren't paying enough --- that is, they deal with too many garbage reports for every report that actually merits a fix.


I wonder if you could scale it so that the desired behaviors were also a market equilibrium. No complicated prohibitions on going public; instead, each additional report (made easier by going public) would cut into your own earnings by some percentage. On the flip side, each additional report costs the company money too, so they have a monetary incentive to push a fix before someone else finds the bug, or before you give up waiting and go public anyway. With appropriately decreasing scales on each side, there would always be sensible minimum and maximum payouts.
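To make the sliding scale concrete, a toy sketch (the decay rate, floor, and amounts are arbitrary assumptions, not a scheme anyone has implemented):

    def payout_schedule(base, n_reports, decay=0.5, floor=50.0):
        # Each additional report of the same bug pays the reporter less,
        # but the vendor's total outlay keeps growing until they ship a fix,
        # so both sides gain from a fast patch.
        return [max(base * decay ** i, floor) for i in range(n_reports)]

    print(payout_schedule(2000.0, 4))       # [2000.0, 1000.0, 500.0, 250.0]
    print(sum(payout_schedule(2000.0, 4)))  # vendor's growing cost: 3750.0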

I assume it'd be hard to convince companies it may be in their better interest to set up an incentive structure this way. But perhaps a third party platform could find some such mutually beneficial equilibrium.



If they get a duplicate report they should let you know the disclosure timeline and keep you posted on progress fixing it. If they're not doing that they have no right to prevent disclosure.


Hackers and crackers can't be controlled.

It seems weird that HackerOne put themselves in such a deeply loser position to try to be the ones to prevent submitters from revealing security issues. Why not be a neutral party, and let the companies try to enforce rules on the hackers in these cases?


Could you tell me what this mailing list is? I'd be interested in joining it.


"Cheapbugs" but it appears it is abandoned.


Eh, that one is on you, I think. How long did you wait? If we have 5 researchers report the same vulnerability in 30 days, we're going to count it as a duplicate and still expect to have a full 60-90 days from the first report to deploy a fix.


Waited a couple weeks.

It was pretty low hanging fruit. I was going through an XSS tutorial and used their site for practice. `<script>alert(1)` could be saved into several user fields including Name and would then be executed on every subsequent pageload around the site.
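For anyone outside web security, a minimal sketch of that bug class and its fix (hypothetical rendering code, not Tesla's):

    import html

    def render_profile(name):
        # Vulnerable: user-controlled input lands in the HTML verbatim, so a
        # stored payload like "<script>alert(1)</script>" runs on every page
        # that displays the field.
        unsafe = "<p>Welcome, %s!</p>" % name
        # Fix: escape user input before it reaches an HTML context.
        safe = "<p>Welcome, %s!</p>" % html.escape(name)
        return safe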

If there was some indication that someone had reported it recently I maybe would have waited longer, but I suspect this bug had been known for months.


> Kravets said he was banned from the platform following the public disclosure of the first zero-day. His bug report was heavily covered in the media, and Valve did eventually ship a fix, more as a reaction to all the bad press the company was getting.

> The patch was almost immediately proved to be insufficient, and another security researcher found an easy way to go around it almost right away.

You might want to read the article.


I was responding to a comment that (I interpreted) to be talking in more general terms than the scope of the article.


Even in the scope of the original comment, doesn't it create a pretty perverse incentive to allow companies to mark HackerOne bugs as WONTFIX and then ban researchers who disclose them?

Isn't security through obscurity largely to be avoided? I thought the working model for most security researchers was: if it's not worth fixing, it's not worth hiding.

More to the point, I thought that responsible disclosure always came with an expectation of public disclosure. The advice I've always been given is that you should never disclose with conditions -- ie. "fix this and I won't tell anyone."

It should always be, "I am going to tell everyone, but I'm telling you first so you can push a fix before I do."

Does HackerOne operate under different rules?


I see this as an example where the system works. Valve has an incentive to pay for bugs. The researcher then has an incentive to disclose them privately. If Valve doesn't pay fairly, the bug is disclosed, Valve pays the price and is forced to fix it, and by running a scam of a bug bounty program, they've exposed themselves to more disclosures. Valve now has an incentive to fix their program, either by working with this bug hunter or by increasing payouts so other hunters beat him to the punch. This is how the system should work. Decentralized self-regulation needs companies like Valve to fuck up once in a while so that the forces at play sufficiently punish them until they improve their process.


The meta-process might work, Valve's process is still broken.


Everybody makes mistakes; let's see if they can learn from theirs. I haven't heard that they keep making this same mistake (but I could be wrong).


This happened at the end of June; they have had ample time to reach out to @viss, make amends, and change their process.



This story continues to be so sad. Steam is reprising the role of Adobe, who for quite a while refused to acknowledge that being able to use Flash Player as a tool to get you something on Windows was just as bad as breaking Flash Player. I heard one Adobe executive say, "Hey, you can use a baseball bat to bludgeon someone, but that isn't the bat maker's fault, is it? If they are forced to make foam bats, their product is useless."

That position isn't "wrong" so much as it isn't useful in reducing risk.


No, that position is wrong, because the analogy is wrong. If the baseball bat hit the customer in the face every time he tried to hit the ball, that would get fixed pretty quickly. Allowing privilege escalation is an unintended side effect of the product being used, and it should be fixed because the customer never asked to be exposed to that risk.


Knowledgeable people can just add Steam to the set of applications that must be installed in its own isolated environment. How would the typical Steam user know to do that? Is there a prominent warning on the install screen informing users that Steam will be used to hack their machine and anything they have stored on it?


How would one achieve this on Windows short of having the entire Windows install be isolated from your main OS? I would assume most users would not want to run their games in a VM inside Windows for performance reasons.


It would probably be better just to have a separate partition with a separate OS install. Either way, as you indicate, this is an unusual imposition on the user. Valve are holding themselves to a much lower standard than one would expect.


Disable the Steam service. Run Steam only on a separate user session with limited rights (no admin and no access to your files). So essentially you'd have to manually switch user, via the login screen, to play your games.
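A sketch of that first step in Python, assuming the service is registered as "Steam Client Service" (verify the name with `sc query` first); it needs an elevated prompt:

    import subprocess

    def disable_steam_service():
        # Stop the SYSTEM-level service and keep it from starting again.
        # The service name here is an assumption -- check it on your machine.
        subprocess.run(["sc", "stop", "Steam Client Service"], check=False)
        subprocess.run(["sc", "config", "Steam Client Service",
                        "start=", "disabled"], check=True)

    disable_steam_service()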


You've missed "install Steam somewhere inside the separate user's home directory" to avoid dealing with UAC on each update.

I moved to a separate Wintendo box, which is the best solution.


As far as I'm aware, Steam makes its own folder writable by everybody. Besides, you should get an error instead of the UAC prompt if you're running as a non-admin account.

But yes, separate hardware is safer.


Could Steam be legally liable for the harm these bugs cause end users if they know of the issue? I know they have a ToS forbidding this, but a ToS still needs to take the law into account.


At some point this is less of a Steam issue and more of a Windows issue. An OS shouldn't allow applications to compromise each other.

Steam should maybe be liable if they are actively thwarting disclosure that would protect users but that's a tough thing to establish legally.


It's a Steam issue, given that background services don't have to run as SYSTEM, and yet Valve decided to have Steam's background service do precisely that.

Thankfully, the Linux version doesn't seem to have this problem (AFAICT).


I think it should become a Windows/Microsoft issue, to be honest. A good example is the recent Zoom vulnerability: the software made it possible to perform certain exploits on the user and operating system, so Apple stepped in and disabled the exploit. In this case, I think Microsoft should do the same in order to protect their users. My guess would be that the install base of Steam on Windows is at least on par with Zoom on macOS, if not many times larger.

In the end, Microsoft will get a bad reputation for having an insecure OS (not to even mention Valve here), and in the long run it will hurt them the same way Flash stubbornness hurt Adobe.


True; it'd be really nice if Microsoft started deprecating running non-essential things under the SYSTEM user. Windows could/should emulate the OpenBSD strategy of running services/daemons as unprivileged users dedicated to those services instead of as root.

Windows does support this functionality, and ultimately Valve is to blame for not using it, but you're right that Microsoft should be more proactive in encouraging good design and discouraging bad design.


Well, that's the issue with DRM. I never went into the Steam ecosystem because I don't like dependencies, but most people have no choice and have to use Steam if they want to access their game libraries.


You're beating a dead horse. Flash served a purpose once, and now it's reached end-of-life.


> You're beating a dead horse

With a foam bat. Just because the Flash horse is dead doesn't mean it didn't deserve its beating, or that it can't continue to be a potent reminder of how bad Adobe was at handling security issues and why other platforms, like Steam, should learn rather than emulate.


This also has nothing to do with Flash specifically; rather, as you said, it's about Adobe's policy. It could have been any software, but it was especially true of Flash.

Flash was just such a uniquely attractive target, à la PDFs and Microsoft Word: there were few other wide-open targets that a hacker could predictably get a victim to open (embedded or not) on the target's machine. So it was particularly sensitive to vulnerabilities by design, and a much broader security perspective was clearly needed than for most software.


I'd prefer to describe it as slashing a dead horse's rotten corpse with a katana. As the bloat and flesh of the ecosystem have disintegrated, we're able to observe the framework more clearly - the bone structure of the horse, if you will. Using the katana we are making precise, incisive blows to the remnants as we extract meaning from its corpse - or lessons learned, if you will.

...perhaps I'm taking the analogy too far?


No, they're making a point about Valve by comparing them to Adobe.


Flash died because Adobe failed in the most obvious ways--obvious even to lay outsiders at the time. They could never be bothered to fix the pervasive performance and security issues. It started to die slowly at first: the desktop Flash-blocker plugins. Then very quickly: the lack of support from mobile OSes--even though those companies practically begged Adobe to get its act together.

Adobe had a practical monopoly on the interactive web and blew it.


Now I’m just waiting for Steam to follow their lead.


.... Hit the dead horse with the foam bat and win an iPod?

Those were the old days. Or, that damned monkey!


It reached its EOL because Steve Jobs considered it a buggy security threat.

If no one will use or manufacture your baseball bat, then the danger of the bat is moot.


Maybe it is also time to switch from the prehistoric model of "hey let's download a .exe on the web, execute it without any sandbox, and let that .exe install other .exe from thousands of other unknown sources around the world and run them without any sandbox either."

Steam or any other app should always run sandboxed, with no root access, no file access, no camera access, no access to other processes, etc. For most users, Steam only needs sandboxed local storage to put its games into, internet access, and maybe mic access; that's it.
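As a sketch, the per-app grant list could look something like this (every name here is invented for illustration):

    # Hypothetical permission manifest for a sandboxed game-store client.
    STORE_CLIENT_SANDBOX = {
        "network": True,              # talk to the store and game servers
        "gpu": True,                  # direct rendering for games
        "audio": True,
        "microphone": "on-demand",    # prompt the user, mobile-OS style
        "filesystem": ["~/Games"],    # a private library folder, nothing else
        "camera": False,
        "other_processes": False,     # no poking at the rest of the system
        "run_as_root": False,
    }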

I really hope Flatpak, and something similar for Windows, becomes the norm; the current situation is a security and privacy disaster.

There can still be exploits, of course, but now you have to find a weakness both in the app and in the OS sandbox, which is a whole lot harder.


> prehistoric model of "hey let's download a .exe on the web, execute it without any sandbox, and let that .exe install other .exe from thousands of other unknown sources around the world and run them without any sandbox either."

What year is it? To me, prehistoric means buying a nice big box with a CD-ROM or some floppies and installing with no internet required at all. Shell .exes that want to download crap are the current nightmare we are living in, I thought.


See also: https://brew.sh/


Won't sandboxes impact performance of video games? I don't know much about sandboxes except that VMs are often used as sandboxes, and I definitely don't want video games running inside of VMs


Games typically need access only to the video adapter, the sound card, and maybe the network. They do not need access to your browser's cookies or history, or your documents folder, for example. This probably doesn't require using a VM.


> They do not need access to your browser's cookies or history, or documents folder, for example

Well, some do... like Doki Doki Literature Club


I would think it would work in practice closer to mobile apps, where their access requirements are explicitly stated upfront (eg: needs access to documents folder).


Yes, please. Still, most apps don't need access to the documents directory or the camera.

They want to "open a file", which means a file-open dialog.

Or: upload your profile picture, which means a one-time upload of a file. Right now you grant access to the camera, and it can be used for anything.


The salient part seems to be that the researcher reported the first vulnerability through HackerOne and was (reportedly) told by Steam it wouldn't be fixed. He then published it after being instructed that this was against the rules, and was banned.


I wanted to note that the researcher was not banned from HackerOne; he was only banned from reporting bugs to Valve. This is explained in the write-up of the second vulnerability [1].

[1] https://amonitoring.ru/article/onemore_steam_eop_0day/


[flagged]


It wasn't going to be fixed. You can't ship vulnerable software; it's not okay. He had every right to publish it and to keep shaming Valve.


And Valve has every right to ban him from their program, right?


And the rest of us have the right to tell Valve, as their paying customers, we're very disappointed in their behavior and find it unacceptable.

I expect them to take security flaws seriously if they want my continued patronage - and that includes EoPs.


File your comment under "missing the point".

Telling a security researcher "we're not going to fix this but please keep it secret" is not a viable strategy, ever.

In the end, the researcher went public (as nearly all will, in that same situation), Valve got a hit to their reputation in the tech press, and they ended up having to (attempt and fail to) fix it anyway. Entirely predictable, and Valve looks really stupid here.

Banning people from your bug bounty program for following the generally accepted rules for security disclosures is certainly within their rights, but so what? It's not a winning strategy for any company.


They do, and the obvious and inevitable outcome of that is that Twitter is now their bug bounty program for some researchers.


What's the point of stating these obvious tautologies? Yes, they have that right, he has the right to post on Twitter, someone has the right to post that on HN, we have the right to call Valve out, you have the right to defend Valve, we have the right to reply to your defence, and so on ad inf.

All true and utterly worthless to point out.


I'm not trying to defend Valve, I'm just surprised that everyone seems to be so upset about the ban.


Your first post acted like people were calling the ban unexpected.

Your second post acted like people were calling the ban not-allowed.

Neither is accurate, so your surprise is misplaced.

Even though it was clear that this might happen, it's such a blatant bad decision, for both ethics and customer security, that people are fighting back loudly.

You haven't given a single reason people shouldn't be upset by it.


I don't know how many people care about the ban, per se, but Valve's strategy here is an extremely bad and pointless one.

Was Valve technically within their rights to ban this researcher? Sure. Was it a move that advanced Valve's interests in any way? Obviously not.


I'm having trouble articulating this, so bear with me.

In general, having a Bug Bounty program is good. We can agree on that, right?

Most Bug Bounty programs have a scope, and staying inside the scope is important to the business for reasons. My guess is that most scopes are defined by a combination of confidence in the security of the code, resources to triage vulnerabilities in that part of the code, and the risk to the business from vulnerabilities found in different parts of the code.

That is to say, I suspect that either Valve doesn't have many developers well versed in that part of the code base, or they are not confident in the security of that code base, or they considered it a low priority (even if we disagree about the priority of this vulnerability).

Now, let's pretend that I'm right about those reasons. Even further, let's pretend that they did not include it in the scope because they don't want to pay a bunch of bounties on code they knew was insecure.

(As an aside, I'd much rather have companies only include things in bug bounty programs once they're confident those things are secure; relying on a bug bounty to do your security for you is begging for trouble, because then the company isn't taking responsibility for, or even trying to do, things securely.)

Given this train of thought, which is making more than a couple assumptions, I don't think their actions are extremely bad or pointless. They are trying to keep their bug bounty program in scope. Bug bounty programs involve a fair amount of trust. If that trust is broken and they don't want that researcher anymore, then that's fair.

There probably should have been better communication. It probably (definitely) shouldn't have been a WONTFIX. Overall, terrible outcome for everybody.

It's just one of those things where every decision looks reasonable in isolation and leads to a really bad outcome and the company looking terrible.


Exclusions from a bug bounty are a part of the game, but if something is excluded, you—as the entity excluding it—can't reasonably demand secrecy re: an out of scope bug. You either accept that you're going to eat a reputational hit (and likely be forced to fix the exploit anyway) or make an exception to your policy.

If Valve wanted to try and defend the structure of their bug bounty program by essentially arguing that Steam is such a mess that local privilege escalations are out of bounds, they should be forced to publicly reckon with that stance.


> Most Bug Bounty programs have a scope, and staying inside the scope is important to the business for reasons.

Scopes are fine.

But if it wasn't in scope, then clearly none of the program's rules apply to the bug, right? That bug isn't part of the program.


Valve is free to define their own scope, but as a user, I'd expect them to place a big fat warning if it means any user on the system with Steam installed could get root. Their currently defined scope goes against user expectations of security.


What trust is involved in this instance? No special access was given to the researcher AFAIK. Anyone with the skills and interest could have found the bug, regardless of the bounty program. The bounty in this case seems like an incentive to report instead of selling an exploit.


Did you read his first report? In scope or not, their right or not, how is a ban without proper dialogue (threats don't fall in that category) the reasonable reaction here? That's not how you interact with a pretty tight-knit community, even if you're the one sitting on the pile of money.


Yep, and we have every right to laugh at their stupidity.


We're not discussing the legality of the move.


"Not a bug, wontfix."

"Fine, I'll tell the world."

"We fixed it."

Their ask was self-serving and dangerous, and deserved to be declined.


Drama aside.

Valve...I have your software installed. It has a hole. Fix it.

This mudslinging isn't helping your PR or making me feel more secure about my steam install regardless of the details.


I agree. As a user, I don't much care who is at fault, but I do expect the platform you provide to be somewhat secure... especially after you are told it is not.

I basically uninstalled the Steam client after the first 0-day was found. At least with GOG I don't have to install Galaxy. But that's a different rant...


My opinion, not my (HackerOne customer) employer's:

I know this will be unpopular with folks like tptacek, but I've always felt strongly that bug bounty programs offer too many perverse incentives to all parties.

More often than not it becomes a tool for companies to sweep issues like this under the rug and then use HackerOne's system to force reporters to play ball (because they want to keep getting paid). I hate this system.

I'm 100% behind open, public disclosure and if it were my own product in question, I would offer bounties for _public disclosures_. That keeps everyone honest.


I agree with you from the other side. Before these programs people would disclose issues to the public. The company found out like everyone else. They would fix it immediately because they had to.

Now they can hide it for months (or forever), allowing others to discover it in the meantime, while keeping the researchers quiet.


A normal process goes like this:

- Researcher finds bug

- Researcher discloses to vendor

- Vendor fixes (or not)

- Researcher discloses the bug publicly once the vendor has fixed it, or after X time (whichever is first)

This is roughly how Project Zero goes, and it's a good mix between giving the vendor the opportunity to fix it and deploy the update before it gets exploited.

It's very naive to assume that bugs can be fixed before others can exploit them. Bugs take time to fix, and the process takes time, especially when dealing with large enterprises.
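The "fixed or after X time, whichever is first" rule above is simple enough to sketch (the 90-day default matches Project Zero's well-known policy; the dates are invented):

    from datetime import date, timedelta

    def disclosure_date(reported, fixed=None, deadline_days=90):
        # Disclose when a fix ships or when the deadline passes,
        # whichever comes first.
        deadline = reported + timedelta(days=deadline_days)
        return min(fixed, deadline) if fixed else deadline

    print(disclosure_date(date(2019, 6, 1), fixed=date(2019, 7, 1)))  # 2019-07-01
    print(disclosure_date(date(2019, 6, 1)))                          # 2019-08-30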


Why is it "whichever is first" and not simply after a fixed time? I see a benefit to waiting X time regardless, because it allows more time for the patch to circulate to everyone. What is the benefit of disclosing immediately after it is "fixed"?


It's typically not immediately after it's fixed, but usually about a week or so later, to let the majority update.

The vendor can also usually request an extension, as per the Project Zero guidelines, of I believe one month, if they confirm they are actively working on a patch.

The goal of responsible disclosure is to help the vendor and their users be more secure, so having a policy that balances the two is important: give the vendor time to fix it, and don't leave the users exposed to being hacked.


It's generally not possible to release a fix without effectively disclosing the vulnerability. It's just too easy for people to deduce what the vulnerability was by looking at the patch.


The vulnerability can often be discerned from the patch. Burying it amidst lots of unrelated bogus changes and not calling out its security relevance is going to annoy your users.


Couldn't the researcher have just sold his finding to Project Zero?

If so, that seems like a superior alternative to immediate public disclosure.


They're arrogant and lazy. Just say thank you and fix it. I hope GOG and HumbleBundle get a nice boost in sales.


> I hope GOG and HumbleBundle get a nice boost in sales.

While there are some DRM-free games, the majority of games on Humble Bundle are sold as Steam keys, so you still need Steam to launch them.


I seem to recall the HB site, in the early days, saying something to the effect of "Our promise: 100% DRM free games". They even did a bunch of bundles that donated part of the proceeds to the EFF. It's sad to see them as just another front for Steam sales.


They're owned by Ziff Davis. It's not surprising.


Oh, I didn't know that. That kinda sucks. Well, anyway, Valve will probably be changing their tune from now on.


This sucks. We run steam on some public PCs with unprivileged accounts and we wouldn't be very happy to find that users were able to gain admin access and steal other people's passwords through a keylogger. Sigh.


That seems to me the most obvious problem use case here. How can Valve possibly think that isn't important?


Because their security model is too myopic. By defining "security vulnerability" to mean only remote code execution from within the Steam client, they save themselves a tonne of work (and cost).

I can understand that perspective - Valve can't spend the time to rewrite Steam to fix the EoP/LPE issues. So their stance must be that the user has to "be careful" not to install malware or other vulnerable software, instead of fixing Steam.


Cybercafes, LAN gaming centers, and PC bangs are about to have some interesting times.


This wouldn't be anywhere near as severe a problem as it is if Steam's service weren't running as something as ridiculously privileged as NT AUTHORITY\SYSTEM.

On the plus side, reading the writeup [0] it seems unlikely that this affects the Linux client (or even if it does, it's at least limited to the current user account). So I guess Steam can continue to live on my machine for another day.

[0]: https://amonitoring.ru/article/onemore_steam_eop_0day/


I'm amused that anyone does not have a cynical view of H1. H1 is the equivalent of HR for cyber. It exists not to deal with issues or address problems; rather, it exists to help companies manage bad exposure. That's how H1's bread is buttered.


Just a reminder: you can disable the Steam service and still play your games.

Some Steam features will be disabled or broken but whether or not this affects you will obviously vary depending on which ones you like to use.


Yet another reason to switch to GOG.


I wonder if this is a product of Valve's free-form company structure. If, as a Valve employee, you have the autonomy to float between projects, how do you maintain a strong security team? Do they even have a dedicated security team?


I've worked with extremely competent security professionals before. Those people love and are fanatical about security. Based on my experience, it seems a near certainty that Valve doesn't employ even a single such person. These people raise hell if security is ignored and have a job freedom that makes typical software engineers look like panhandlers.


That is why many people hate cybersecurity professionals: https://thenextweb.com/security/2019/01/25/everybody-hates-c...


> Do they even have a dedicate security team?

Let's remember that Valve is the oldest there is in a business they pretty much pioneered with Steam over 15 years ago.

As somebody who's had an account there since day 1, I'm still amazed by how tight they've managed to keep their ship all these years, even though plenty of people have been trying to break into that very worthwhile target for over a decade.

If I contrast that with my experiences with services like Uplay and Origin, the differences are night and day: with both of my accounts on those services, I had lots of issues due to my accounts getting hijacked (probably through support) several times.

In 15+ years of using Steam, this hasn't happened once to me, so whatever Valve is doing at that end, it seems to have worked well for them and their customers.

That's not meant to defend their stance on this particular issue, but imho it's also kinda dishonest to now frame Valve as a company where nobody cares about security.

If that were really the case, they would have gone out of business over a decade ago.



Since Valve stopped producing games, I wonder what their net income per employee is, based on the assets they own, less revenue produced by third parties through Steam.

If you look at just the assets Valve produces, minus rent-seeking, are they losing money?


Valve figured out how to print money by hooking teenagers with gambling on loot boxes. They stopped having to create AAA titles, they stopped having to do anything remotely creative, and now they are a giant cancer with no value left to add. Their client is an insecure, slow, instable piece of shit and has been this way for well over a decade. I regret being a customer of theirs.


I remember listening to some of their commentary tracks where the employees talk about how their desks had wheels, there's no managers, and there's no deadlines and no stress. They also at one time had higher profit per employee than Google! [1]

Turns out that having no accountability in your company results in complacency and a critical lack of production. I'm curious to see how Valve Software is going to climb over this security wall they've found themselves in front of if seemingly nobody has to answer to anyone and everybody gets to do what they want in a leisurely fashion. I mean, we give Chinese IoT vendors crap all day long, and it turns out Steam might be just as bad!

[1] https://www.forbes.com/sites/stevedenning/2012/04/27/a-glimp...


I heard somewhere that it's very stressful, with toxic politics and so on. I'm not sure where, but it's interesting to see a report to the contrary.

Personally, I always thought it would be cool to work at Valve, but not anymore. I don't see them doing anything broadly relevant that doesn't involve coasting on the momentum/market share of ancient products. Their VR stuff is cool, but even there it feels like they're lagging behind e.g. Oculus in ways that matter.


Eh, every review of the Index has put it head and shoulders above any Rift version so far. I'm not sure we'd consider that "lagging behind Oculus".


It's a premium product, and if I were going to buy a new VR headset right now it'd be the Index, but it's not a generational leap (it's basically a Vive++) and I've lost confidence in Valve to produce such a leap, let alone to bring VR gaming to the mainstream.

I'd love to be proven wrong, because I dislike Oculus. I am simply stating my observation that Valve seems to be in decline.


> Their client is an insecure, slow, instable piece of shit

And yet, it's still the best client out there.

If you want slow and instable(sic!) try competition. Steam client is actually fast and stable compared to what else is on offer.


It's actually not better than Blizzard or EA's client at this point. I will agree it is better than Epic and Bethesda launchers.


From what I've read, the original bug involved malware already installed on the PC using the Steam client to run other code. While I'm not a security expert in any way, that doesn't seem to me like a huge exploit. If the attack requires installing malware on the victim's computer, why not just do the evil stuff directly with that malware? If that's the case and I'm not just remembering it wrong, then I could see why Valve wouldn't want to pay up, and why this guy would go on a social media rant to slander Valve, either hoping they'll pay up or just to get petty revenge.


Let's say you and your brother share a PC, but you're the admin. You both play Steam. His account has no password. I steal the laptop. I log in as him. I pop a SYSTEM shell using Steam. I reset your admin password.

"Damn, you watch some weird porn."


Wouldn't you be able to do the same simply by looking at the file system without any access to admin privileges?

I'm not arguing that this vulnerability isn't one; it's a privilege escalation vulnerability. However, in your scenario you have physical access, which is, as far as I know, pretty much game over for your system.


Not if they haven't granted access to the files to you. In fact by default the files in a user's home folder (including Documents, Videos etc.) are inaccessible to other (non-privileged) users on Windows.


If they have physical access, then they don't need to boot into Windows. They could boot from a flashdrive and access any files they want.


This is true, and it's also why I lock down my BIOSes and set the OS as the only boot device. TRK is a bootable portable Linux distribution specifically for resetting and unlocking local admin accounts.

Encryption, however, cannot be broken without your credentials. These can be obtained from a default running instance of Windows with Mimikatz if the admin credentials are still in memory from an earlier session.


Yeah, there are definitely ways of securing a machine against someone with physical access, but I expect most machines with a non-sandboxed Steam installed probably don't have them.

This privilege escalation attack is probably never going to be used if the attacker has physical access.


> In fact by default the files in a user's home folder (including Documents, Videos etc.) are inaccessible to other (non-privileged) users on Windows.

Sure, that's certainly right, but physical access doesn't force you to be "on Windows".

> This is true, and also why I lock down my BIOSs and set the OS as the only boot device.

I never talked about you specifically; you are a tiny, tiny minority. Even then, that just locks down your computer. Even if someone can't bypass that BIOS (seriously doubtful), the hard drive is still accessible.

> Encryption, however, cannot be broken without your credentials.

Is encryption on by default? That must be new, because I'm pretty sure I never had trouble accessing my user folders on some of my old Windows 7 installations (and Windows 7 would be a good 20% of Steam's users).

I can't find anything about this; if I have time tonight, I'll try to see if I can access my user folder through another OS.


That won't help in the case of full-disk encryption.


And Windows has useful account-based file encryption too.


> And Windows has useful account-based file encryption too.

Is this on by default on Windows? I haven't needed to access my files from another Windows installation for a long time, but I'm pretty sure the last time I tried on my good old hard drive with Windows 7, they weren't encrypted and I had no trouble accessing them.


No, but it only takes a minute to turn on if you're going to be sharing unsupervised access to a computer.


> No, but it only takes a minute to turn on if you're going to be sharing unsupervised access to a computer.

This is not something that 99.99% of Steam users would do though...

It would still be possible to retrieve the encryption keys if the PC is still running (which is also the only way the Steam vulnerability is viable), using a can of compressed air [1].

As I said, physical access to a computer is pretty much already game over... The Steam vulnerability is quite useful while being connected remotely though (which is really the most likely scenario either way).

[1] https://www.zdnet.com/article/cryogenically-frozen-ram-bypas...


> Kravets did eventually publish details about the Steam zero-day, which was an elevation of privilege (also known as a local privilege escalation) bug that allowed other apps or malware on a user's computer to abuse the Steam client to run code with admin rights.

No, using this 0-day malware, with lower privilege level, could do stuff it could not do.


> could do stuff it could not do

I have trouble parsing this. Did you mean "could do stuff it couldn't have done" perhaps?


It gets to do fancy admin stuff that regular malware without permissions can't.


From Microsoft's perspective, they consider local privilege elevation on a client computer an "important" vulnerability that requires patching and paying a bug bounty:

https://msrc-blog.microsoft.com/2018/09/10/microsoft-securit...

So some companies consider LPE to be serious.


If Microsoft considered privilege escalation to be serious, they wouldn't have Windows set up a single privileged account on a fresh install. Or at the very least, they wouldn't make it necessary to jump through two hidden hoops to make a separate admin account without (a) a Microsoft profile and (b) the "security questions" misfeature.


It was a privilege escalation. An attacker could go from running as a compromised user to running with SYSTEM privileges.


Using the Steam client to run other code at an escalated privilege level.


a) Program has scope that doesn't include X

b) Researcher reports vulnerability that falls under X

c) Since it's out of scope, it's closed as N/A

d) Report is locked because company doesn't want to publicly disclose a vulnerability in their system via the Hackerone platform

What's the problem here? Just go with normal vulnerability disclosure. Bug bounty programs are a two way street, and respecting the scope is part of that.

Edit: I guess the important part is that the researcher was then banned for disclosing the report. Seems reasonable, honestly. I don't agree with it, but I understand it.


Acknowledgement is one thing. Disclosure is another.

If Steam had no problem acknowledging that this functionality exists, they should have had no problem with it being disclosed. Therein lies the problem. It's like the person in the bathroom with the needle in their arm saying "...there's no problem here...", but if you swing the door open they'll still try to shut it. Because they know they're wrong.

If HackerOne isn't going to help you, they have no right to hinder you. If they want to strongarm everyone into what is effectively an NDA, then there is literally no point in reporting vulnerabilities to HackerOne.

They seem to only exist as a cow-catcher on the locomotive of software vendors too lazy to actually fix crappy code.

"Who needs to fix code and shell out bounty if you can pinpoint and silence the researcher?"


> If HackerOne isn't going to help you, they have no right to hinder you. If they want to strongarm everyone into what is effectively an NDA, then there is literally no point in reporting vulnerabilities to HackerOne.

The article gets this part wrong: the hacker isn't banned from H1, which he says in his blog post -- "Eventually things escalated with Valve and I got banned by them on HackerOne — I can no longer participate in their vulnerability rejection program (the rest of H1 is still available though)." HackerOne is in no way punishing the hacker for his reports and/or public disclosures, for what it's worth.

(Disclosure: I am on the community team at H1, though I've had effectively zero involvement with this.)


You can define whatever you want for your project's scope, but when you're distributing self updating binaries to an audience the size of steam's and you act this casual about an admin escalation exploit, you deserve whatever damage to your reputation that you get.


Thing is, as a result of the ban the next disclosure was immediately public. This left more people vulnerable than the responsible disclosure method would have.

Hence, this practice by Steam makes all users of Steam less secure (doubly so, as they actually don't want to fix these issues). This is something the public deserves to know, so they can act accordingly.


Bug bounty programs exist primarily for the companies’ benefit. If you do not respect the security community, the best you can expect is for researchers to publicly disclose the vulnerabilities. At worst, black hats will find them and sell them since they can be very valuable.


If the vulnerability is out of scope, why do they care about disclosing it? If it is a vulnerability in their system, why is it out of scope?


Well, they did go with normal vulnerability disclosure, and were retaliated against. That's not okay.


Retaliated, as in he was banned from their bug bounty program - the program with a scope that he went outside of. I think it's reasonable to be banned.

Obviously it would be better if Valve fixed the issue and gave a (possibly reduced due to out of scope) bounty.


That makes sense if the application is, like, a SAAS app, and the scope is, like, "don't employ credential stuffing or test any of our 3rd party dependencies that have not given us permission to be included in this scope".

But this is software people install on their desktops, and Valve has no say in how security researchers approach that stuff. Valve can and maybe even should exclude LPEs from their bounty scope (if that's not what they're focusing on right now), but they can't reasonably ban people for publishing vulnerabilities they've scoped out of the only mechanism they've provided for submitting and tracking vulnerabilities.


The next time that researcher finds a vulnerability, he's definitely not going to report it "responsibly", even if it is "within scope".



