> According to the cybersecurity company, it replied by saying it wouldn't agree to swift disclosure, and pointed JetBrains to its policy against silently patching vulnerabilities, which stipulates that if companies violate that policy, Rapid7 will itself release the full details of the vulnerability, including enough information to allow people to develop exploits, within 24 hours.
Did JetBrains engage Rapid7 to conduct an audit? If not, I think describing this as a “violation” is a step too far: if two companies have no pre-existing relationship then there’s no reason for one of them to start complying with arbitrary policies set by the other.
If on the other hand JetBrains engaged Rapid7 to audit TeamCity and then tried to wriggle out of their contract when a vulnerability was found, then this isn’t a great look.
The reason is to avoid getting dragged by Rapid7. If you don’t play by the generally accepted community standards for coordinated vulnerability disclosure, you can expect the researchers to point that out as loudly as possible.
It’s extortion, but the alternative is going back to companies ignoring security issues and not telling users about problems.
I think the truth is probably in the middle. Big companies often release patches before they detail what is in them, so as not to create a lot of zero-day issues where they helpfully explain to hackers how to hack the unpatched masses of users. I can see why that is a good thing. At least I'd hate it if, e.g., Apple published a detailed how-to on hacking me hours or days before I even see the patch they released along with it. Normal users and companies need some time.
So, I can see why Jetbrains would want these patches out ASAP without alerting hackers to the notion that there are a lot of valuable corporate targets to hack.
I guess that's why this article is titled "Rapid7 throws JetBrains under the bus ...". And their users, you could add. At least the article suggests "Exploits began within hours of the original disclosure, so patch now". Sounds like things might be getting ugly for some people out there. There might be more graceful ways to deal with this.
That’s not really how it works, unfortunately. Once a company releases a patch, the clock starts ticking on reverse engineering what that patch does. Amusingly, all of the efforts to shrink patch sizes have made this even easier, as you can quickly zero in on which bits are the important ones.
The only safe thing to do is for companies to be loud about which patches are relevant to security, and for their customers to apply them quickly. All else is security-via-hopium.
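To make the point about reverse engineering concrete, here is a toy sketch (not a real bindiff tool; the function name, chunk size, and sample bytes are all invented for illustration) of how simply comparing a patched binary against the unpatched one points straight at the changed code. Real-world patch diffing works at the function level with disassemblers, but the principle is the same: the patch itself tells the attacker where to look.

```python
import hashlib

def changed_chunks(old: bytes, new: bytes, chunk: int = 64):
    """Return the byte offsets of fixed-size chunks whose contents differ
    between two binary images. A crude stand-in for function-level diffing."""
    offsets = []
    for i in range(0, max(len(old), len(new)), chunk):
        a, b = old[i:i + chunk], new[i:i + chunk]
        if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
            offsets.append(i)
    return offsets

# Hypothetical example: a one-byte "fix" in the second 64-byte chunk.
unpatched = b"A" * 64 + b"B" * 64 + b"C" * 64
patched   = b"A" * 64 + b"B" * 63 + b"X" + b"C" * 64
print(changed_chunks(unpatched, patched))  # -> [64]
```

Smaller, more targeted patches only make this localization step faster, which is the point the comment above is making.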
Agreed, this feels a bit like extortion based on the way this article was written. Especially if they were engaged in a paying contract, why would Jetbrains accept disclosure?
>From a biz perspective, though, why would anyone go to R7 if there's a possibility of them disagreeing with you and throwing you under the bus?
Rapid7 is following the industry-wide best practices of coordinated vulnerability disclosure (CVD). If you don't like how they handled it, you'll have to avoid every reputable infosec company.
The article does a bad job of explaining why silent patches are bad, and a bad job of explaining the normal, industry-standard CVD process.
Yes, silent patches are bad, but even worse is publishing details about the vulnerability so it can be used as an exploit before there is a chance to push the patch out or make it available.
So yes, the vendor trying to silently patch is not great, but even worse is a security vendor publishing how to exploit it before the patch is made available. Surely it would be in both of their interests to coordinate the disclosure to minimize the risk to the ultimate users. As a user, a silent patch is better than published exploit paths.
>Rapid7 spotted fresh patches for CVE-2024-27198 and CVE-2024-27199 on Monday, without a published security advisory and without telling the researchers.
That's not much lead time to patch, but they have a point in that JB did not coordinate with them. To me it still looks spiteful on R7's side and irresponsible on JB's part as well as R7's. They are both guilty of not communicating properly to reach a mutually agreeable result.
It looks bad. R7 is making the other guy, JB, look less bad with the way they went about this. They both could have handled this better, but ultimately it looks worse for R7 from my PoV.
I understand your point of view, but this is pretty much all on JB's part in my opinion. They say as much in a recent blog post.
>At this point, we made a decision not to make a coordinated disclosure with Rapid7
>We published a blog post about the release. This blog post intentionally didn’t mention the security issues in detail
These points would have triggered Rapid7's silent patching policy, which would have sped up the entire timeline and led to the current state of affairs. I don't consider that to be spiteful (the policy is out in the open; JB knew what they were doing and knew the consequences of not coordinating any longer), but that's just me.
Any basis for this statement? Having been through multiple pentests I have never seen any clause like that in a contract. Given the commoditized nature of security vendors these days, I would be surprised to see such an anti-customer clause become commonplace.
Why would you think I’m talking about pentests? Those are largely checkbox engagements run by people who use some automated tool; they’re pretty unlikely to find 0-days.
I’m talking about real security audits, where a researcher takes significant time to understand a codebase. Those tend to be conducted by people and firms who know what they’re doing and can be picky about who they work with. The purpose of an engagement like that is expressly to find and patch 0days, so requiring that CVD be done in a specific way is just an implementation detail.
My source for this is that I require them, and many (although certainly not all) other researchers I have talked with do as well.
I don't know what kinds of pentests you've seen but those automated scans are just for checking boxes. I'm talking about source-available pentests which sounds like the "real security audits" you are talking about.
> The purpose of an engagement like that is expressly to find and patch 0days, so requiring that CVD be done in a specific way is just an implementation detail.
All of the engagements I have been a part of have had NDAs and public disclosure of vulnerabilities would probably result in a lawsuit, but these have been for private companies.
> My source for this is that I require them, and many (although certainly not all) other researchers I have talked with do as well.
The CFAA is what's used in hacking cases. They engaged with a system they didn't have permission to operate against.
> (2) intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains—
> (C) information from any protected computer;
and if you want to argue extortion like other comments here,
> (7) with intent to extort from any person any money or other thing of value, transmits in interstate or foreign commerce any communication containing any—
> (B) threat to obtain information from a protected computer without authorization or in excess of authorization or to impair the confidentiality of information obtained from a protected computer without authorization or by exceeding authorized access; or
UNLESS an entity has a program for this (by which they give permission), or you have an engagement covering their system (and this specific system), you NEVER engage. Ask anyone in security.
"Rapid7 says it reported the two TeamCity vulnerabilities in mid-February, claiming JetBrains soon after suggested releasing patches for the flaws before publicly disclosing them.
Such a move is typically seen as a no-no by the infosec community,..."
As far as I am aware the majority of the infosec community would prefer to release a patch before releasing a security issue, but what do I know...
The issue is that JetBrains was trying to avoid having the issues disclosed, or at least significantly delay it. They would have been given an explicit timeline and Rapid7's entire process up front. Rapid7 would have let them patch and released the vulnerability details on a generous timeline if JetBrains had kept communicating with Rapid7. The reason Rapid7 does this is that there are so many companies out there that will ghost you and pretend your firm doesn't exist, trying to delay the release of your vulns indefinitely. I have released vulns at my company and coordinated dozens of them. Before coordinated disclosure was a thing it was the wild west, and companies often did shady things expecting they could muscle or ignore security researchers.
The common practice is to hold off disclosure until a patch is ready and has been tested, that way competent organizations will be able to jump on the problem and patch ASAP.
If you release a patch without disclosure, there is a good chance many competent organizations will hold off applying it, waiting for others to be the canary.
Releasing a patch and mentioning it fixes security issues while delaying vulnerability details is not much better - malicious actors with enough resources will figure out what has been changed and where the problem is. At the same time there will be a strong incentive to downplay the severity.
The ideal way is to release a security advisory that some CVE patches are coming, with no details, so that upgrades can be prepped and coordinated for a known patch day.
Why is silently patching considered a "no-no by the infosec community"?
If your product's automatic update functionality can reach most users within the responsible disclosure window, that sounds like a net positive? We still learn about the vulnerability, but it limits the potential fallout of the disclosure.
I'm very much in favour of private vulnerability research and responsible disclosure, but the "no silently patching vulnerabilities" policy sounds more like wanting to own the press to me than actually wanting to improve people's security.
The general theory is that there's really no such thing as silently patching. Consider:
- Company silently patches issue. Patches have to be applied, which can take some time if people don't know they need to apply them. Even in the case of automatic updates, patching can be delayed if it requires an app restart, for example.
- Malicious actors examine patches, work out exploit, begin exploiting in the wild.
- Customers left in the dark.
- Company assumes that having issued patches is good enough, substantially delays disclosure.
Co-ordinated disclosure aims to prevent all of that by ensuring everyone knows about it at the same time. That removes some of the ability of threat actors to exploit and allows SOCs, EDRs, etc., to update as well, so anything unpatched gets caught. If there are workarounds or other defenses that can be implemented until patching is possible, those can be employed as well.
For starters, it (hopefully) delays when a bad actor knows about it, meaning they start their process of reverse engineering the vulnerability after customers have been notified.
There are always going to be situations where out of date software hangs around. This at least levels the playing field when compared to the idea of trying to silently patch something.
It is common for organizations to delay routine patching for some fixed period of time (letting others be early adopters and assume the risks of negative impacts from the patch), and/or to schedule patches during a predetermined maintenance window.
When you make a public security disclosure coordinated with the release of a patch to fix the issue disclosed, you alert the aforementioned organizations that there is an exploitable security vulnerability present and allow them to make an educated assessment of the comparative risks of patching immediately versus waiting and potentially being exploited.
It's not a perfect system, but transparency is the best compromise possible and allows everyone to make an educated choice. All other options have greater downsides.
>I'm very much in-favour of the private vulnerability research and responsible disclosure, but the "no silently patching vulnerabilities" sounds more like dick swinging to me than actually wanting to improve people's security.
Rather than dick-swinging, it's generally an acknowledgement that large organizations move slowly and need a bit of prodding to apply patches in a timely manner.
If you're a big-and-slow company, there is a significant difference in how quickly you'll worry about applying the patch that says "here's a minor patch" and the patch that says "here's a patch for a severe vulnerability".
I have worked with several companies which simply will not update something unless they are either mandated to or it is a large enough security risk.
Why is there such an urgent need to release details about a vulnerability while services are at risk? Is the glory of security researchers worth putting people and companies at risk? Release a statement when you find it, then the details some weeks after the patches are available.
If you don't disclose the vulnerability, users are potentially open to attack because you didn't tell them about a vulnerability that you knew about. If you do disclose the vulnerability, you potentially alert attackers to take advantage of a vulnerability in the short time before users can address it.
From the perspective of someone discovering a vulnerability, disclosing it is the more ethical thing to do because users should know that they are vulnerable and have the opportunity to prevent harm to themselves, even if it means making a service unavailable or degraded for a short period. If you tell them about it and they are attacked, that's on them. If you don't tell them and they are attacked, that's kind of on you.
I agree there's some grey area if there are no known exploits, but a lot of the time these vulnerabilities are found in response to an actual out-in-the-wild exploit.
It’s primarily marketing for sure, but the fig leaf is that the details allow targeted detections and mitigations based on the details of the bug. You can use the PoC code to test WAF rules for instance.
So why not inform vendor, release a statement like "found something big" without details, get the marketing from it, and provide the details 30 days after patch is made available. That way it hurts users less.
>and provide the details 30 days after patch is made available. That way it hurts users less.
The bad guys will figure out how to go about exploiting the vulnerability almost immediately after the patch releases, if they aren't already exploiting it. It makes 0 sense and protects 0 people to hold onto details after the patch is made available.
This is true for open-source but not as much for closed-source as in this case.
Also, how does it ever help anyone to have the details released? It only helps the researchers' PR and bad actors, never the users who need to apply the patch (and who often need some time to do so; one can't just upgrade something out of nowhere in less than 24 hours).
>This is true for open-source but not as much for closed-source as in this case.
It is just as applicable to closed-source as it is open-source. The people who are good at developing exploits are, unsurprisingly, good at reverse engineering and using disassemblers. It is generally trivial to figure out exactly what issues a patch is fixing if you are an experienced reverse engineer, and it's a short journey to developing an exploit once you have that knowledge.
>Also, how does it ever help anyone to have the details released?
Because the details will illuminate many potential indicators of compromise, which can be used throughout the defense stack (e.g. YARA rules, ASA rules, etc.)
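As a sketch of why published details matter to defenders, something like the following only becomes possible once an advisory describes what exploitation looks like. The indicator patterns and log lines below are invented for illustration, not real IoCs for these CVEs:

```python
import re

# Hypothetical indicator patterns of the kind a detailed advisory enables.
# These regexes are made up for this example, not taken from any real advisory.
SUSPICIOUS = [
    re.compile(r";\.jsp\b"),          # path-parameter style auth-bypass attempt
    re.compile(r"/app/rest/debug"),   # hypothetical sensitive endpoint
]

def flag_lines(log_lines):
    """Return access-log lines matching any known indicator of compromise."""
    return [ln for ln in log_lines if any(p.search(ln) for p in SUSPICIOUS)]

logs = [
    "GET /login HTTP/1.1",
    "GET /hax;.jsp HTTP/1.1",
]
print(flag_lines(logs))  # -> ['GET /hax;.jsp HTTP/1.1']
```

Without the published details, defenders have patches but no way to hunt for prior compromise or write detection rules like these.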
>Such a move is typically seen as a no-no by the infosec community, which favors transparency.
Transparency, sure, but since when do we release CVE details before a patch? Even in open source projects, there often is only an announcement that there will be an important patch, and the writeup / CVE content is at earliest published the day after the patch is available. Giving some notice on a closed source product seems even more sensible because it's much harder to extract a vulnerability from a point release.
The issue here is that the vendor did release patches, but then tried to hide that they fixed a vulnerability. This is usually called "silent patching", and it's controversial. As with any disclosure discussion, there are lots of opinions here and everyone likes to call everyone else irresponsible.
If you're pro-silent patching, you might argue that it reduces the number of people who know about a vulnerability, so publishing advisories is irresponsible.
If you're anti-silent patching, you might argue that it reveals the vulnerability to the people who monitor patches without giving any warning to the affected users that they need to patch, so not publishing advisories is irresponsible.
Maybe you're just a "minimum details" kind of person, and providing full details is irresponsible. Or maybe you're a "full details" kind of person, and restricting security professionals from accessing the information they need to do their jobs is irresponsible.
In summary, I'm irresponsible for leaving this comment and you're irresponsible for reading it.
This feels like two useless extremes. Release enough information to give an idea of the severity / class of vulnerability with the patch itself, and the writeup a week later, when everyone has had a chance to patch and those skilled hackers Rapid7 cites have already extracted the bug from the release diff.
Rapid7's article on the topic[1] mostly focuses on the need for providing enough information to IT admins so they can understand the severity of the problem. There are other unsaid reasons for this (credit/payment I'd imagine is part of it), but on face value doesn't this make sense? When Metabase[2] says "Upgrade your instance NOW" and they have a vuln that "allows unprivileged access to any Metabase instance" I upgraded immediately. When JetBrains[3] says "4 security problems have been fixed. We highly recommend installing this update as it includes a fix for a critical security vulnerability" then many people are not going to upgrade as quickly. Jetbrains eventually gave more information[4] but would they have done this if Rapid7 wasn't threatening to disclose it themselves?
This is a decent explanation for issuing a CVE and a clear disclosure immediately after a patch is available, but the claim in the article is that best practice is to reveal immediately regardless of whether there's a patch. That makes no sense to me.
They do not -- and the industry as a whole does not -- claim that the best practice is to immediately reveal a vulnerability regardless of a patch.
Thanks. That makes a lot more sense. The Register must have misinterpreted the controversy when they wrote this:
> Rapid7 says it reported the two TeamCity vulnerabilities in mid-February, claiming JetBrains soon after suggested releasing patches for the flaws before publicly disclosing them.
> Such a move is typically seen as a no-no by the infosec community, which favors transparency, but there's apparently a time and a place for these things.
Yes, this article is unfortunately disappointing and seems to have a bit of spin put on it, considering this is all pretty standard coordinated disclosure stuff.
They stopped communicating with Rapid7. When you stop, you know, coordinating with the researchers, they are free to do what they want. Rapid7 likely gave them their entire process, set expectations, and coordinated with JetBrains up front. This is the "ethical standard" most firms and security researchers follow. The timelines and exact processes vary, but the key goals are to make the vulnerabilities public and to properly credit the researchers.
Coordinated disclosure is an olive branch: you know exactly what to expect and how to behave, and it is generally very reasonable. If you break the terms or spirit of this process, the researchers have no reason or recourse but to release their info whenever they feel it is appropriate. Don't break the faith; the researchers are essentially doing you a favor by following this policy. It really is not hard to communicate and act in good faith on both sides. I have coordinated the release of dozens of vulnerabilities, and I have definitely disclosed them after companies ghosted us or threatened us hoping to make the issue go away.
JetBrains got cute and cut the researchers out of the loop. Now future researchers will likely treat JetBrains as a bad faith actor and proceed accordingly.
I've done responsible disclosure myself. But I wouldn't dunk on a vendor for not telling me they released a fix on the same day, because I'd probably just assume it was internal communication issues, and then attribute their late publishing of the associated CVEs to my threats. It all feels a bit petty and rushed (to get the credit?) to me. Especially if a fix is cut into a regular release and not a hotfix, it likely just made it in by accident.
I would reserve judgment until seeing everything. I am just advocating from the general perspective of a security firm. I ran one for a decade and disclosed a lot of vulns. My experience was almost always that vendors were the ones creating issues. We tried our best due to how sensitive issues can be and viewed each vendor or company as a potential customer. Rapid7 could have easily been rude or unreasonable, but that is not the norm for sure.
It probably was a little petty, but we don’t know what the lead up was either. You just get tired of being jerked around as a security firm when you do this a lot.
This case aside, the real issue is Rapid7 having a written policy of throwing users under the bus to gain publicity by publishing vulnerabilities before they have a chance to update. Contrary to what the "security journalist" claims, this isn't the norm.
You sure? Coordinated disclosure works like this: I find a vuln. I give you a timeline for my release. You accept that and work with me to acknowledge it and prepare your users. We negotiate release timing; sometimes, if there is no evidence of exploitation, we can push it out a little. If you stop communicating or act in bad faith, I am doing whatever I want. You didn't pay me, and your users deserve to know.
There are so many bad-faith orgs that will try to avoid disclosure entirely or act downright hostile towards security researchers. Someone's wires got crossed at JetBrains. Rapid7 discloses a lot of vulnerabilities using this exact process, and many orgs use coordinated disclosure. They don't all use the same timelines and processes, but those are the broad strokes. I have helped disclose many vulns using coordinated disclosure, so I get where Rapid7 is coming from. It's like this: it's my vuln info, and I will do whatever I want with it. I decided to give you my entire process and timeline to give you a chance to deal with it. You decided to ignore my "terms" and stop participating in the process. I am now free to do what I want with the vuln, as I have exceeded my ethical obligations to help you.
It very much is the norm; lots of security research companies have similar policies. It's called coordinated vulnerability disclosure, or CVD. Rapid7 is throwing the random company under the bus to protect users of that company's software.
Does anyone have context for why "best practice" would be to disclose before there's a fix available? This is news to me (most of the time I see CVEs it's right after a patch), and it seems backwards—if there's nothing that customers can do about it yet then what good is releasing the details of the vulnerability?
> Does anyone have context for why "best practice" would be to disclose before there's a fix available?
If it is already being exploited is one reason.
A fix may not be available, but if folks know there's an attack going on, they can make a more informed choice about whether to keep allowing the code to run: in some cases a loss of service is better than a compromise.
If the code is externally facing, they bring it down; if it is internal-only, the risk may be judged low(er) and things continue to run (and a maintenance window can be pre-planned for when the fix comes out).
My understanding matches yours. I don't think this article is particularly clear about why Rapid7 would threaten to disclose a vulnerability before a patch is ready, and then subsequently get angry that JetBrains put out a patch to fix the issue.
So Rapid7 is mad that JetBrains fixed the vulns they reported? Isn't that the point of reporting vulnerabilities? Why is Rapid7 threatening to release the details in 24 hours?
> Rapid7 says it reported the two TeamCity vulnerabilities in mid-February, claiming JetBrains soon after suggested releasing patches for the flaws before publicly disclosing them.
So JetBrains wanted to have a patch ready before disclosing the vulnerability publicly. It seems they were working on it and coordinating with Rapid7. I am struggling to think how it would be better for users if an unpatched vulnerability were released before a patch is available. What's the thinking here, that users will take additional precautions to secure the application while they wait for a patch?
> Rapid7 spotted fresh patches for CVE-2024-27198 and CVE-2024-27199 on Monday, without a published security advisory and without telling the researchers.
Rapid7 reported the vulnerabilities mid-Feb. Jetbrains turned around with patches about 2 weeks later, and published them yesterday. The CVE was literally created yesterday. Isn't it a bit premature to claim "silence"?
They released patches without saying they were related to a vulnerability and without notifying Rapid7. That is the textbook definition of what a silent patch is.
I'm not sure how to reword it in another way that would help you understand that Jetbrains did what is called "silent patching".
Maybe this paragraph from the article makes it clear?
>Rapid7 claims that after more than a week of radio silence from JetBrains on the coordinated disclosure matter, Rapid7 spotted fresh patches for CVE-2024-27198 and CVE-2024-27199 on Monday, without a published security advisory and without telling the researchers.
That makes this whole thing fall under Rapid7's silent patching policy.
Above I linked to a blog post Jetbrains put out on March 3rd, on Sunday. It details the vulnerability. March 3rd is before March 4th, so it seems they did not silently patch anything but published the patch and details concurrently.
And this is the part Rapid7 presumably took issue with.
>At this point, we made a decision not to make a coordinated disclosure with Rapid7
As well as
>We published a blog post about the release. This blog post intentionally didn’t mention the security issues in detail
Which is presumably the blog post that Rapid7 saw, which triggered their silent patching policy.
Although, after reading all the blog posts (from JetBrains and from Rapid7), I think this is a much more standard affair than The Reg tries to spin in its article.
JetBrains will do the right thing. They have been doing so for years.
It's amazing to see how, time after time, the internet loves to destroy anything and anyone and utterly ignores the rest of their existence. It excels at bringing out humanity's lowest forms to the forefront.
No association other than being a happy paying customer.
It will take 30-180 days for some enterprise orgs to upgrade TeamCity to mitigate the vulnerability. That grace period used to exist before publishing a vulnerability, unless it was being actively exploited.
I don't know what "norms" Rapid7 is expecting JetBrains to have followed, but this article certainly makes them come off in an entitled, power-tripping light, like their glory is more important than security.
I'll be adding them to my list of "companies I'll never do business with".
It’s a bad article trying to spin it. This is business as usual for vulnerability disclosure. The Reg is always sensationalizing things to get clicks. Just add any infosec research firm ever that has disclosed a vulnerability using coordinated disclosure to your list. Like say Google and their Project Zero. They follow very similar policies and if you stop communicating with their researchers they will just drop your bugs.
Everyone has their coordinated disclosure policies, but not all of them are as harsh as Rapid7's. You mentioned Project Zero; it's a great example:
>Project Zero follows a 90+30 disclosure deadline policy, which means that a vendor has 90 days after Project Zero notifies them about a security vulnerability to make a patch available to users. If they make a patch available within 90 days, Project Zero will publicly disclose details of the vulnerability 30 days after the patch has been made available to users.
>If the responsible organization is showing consistent good-faith effort to develop and ship an update, but cannot complete this work within 60 days, a 30-day extension may be granted at Rapid7's sole discretion under the Default Policy (or for any of the enumerated exceptions below).
They do this with good faith as far as I have seen. The issue as I see it is that JetBrains decided to stop communicating, eliminating the “good faith” on their end.
They are following their own practices first and foremost.
Consider this policy by Google's Project Zero for example:
>Project Zero follows a 90+30 disclosure deadline policy, which means that a vendor has 90 days after Project Zero notifies them about a security vulnerability to make a patch available to users. If they make a patch available within 90 days, Project Zero will publicly disclose details of the vulnerability 30 days after the patch has been made available to users.
Rapid7's "we release full details and ready-made exploits the very day the vendor releases a patch" seems to be an unnecessarily aggressive take on vuln disclosure. It benefits them as a for-profit security research firm, but definitely not the users, who are given no time gap to apply the patches, all while any attacker of any skill level can start using the exploit against them.
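For comparison, the 90+30 policy quoted above is simple enough to work through in code. The dates below are invented examples, and the function name is my own; this is just a sketch of the arithmetic the policy describes:

```python
from datetime import date, timedelta
from typing import Optional

def p0_disclosure_date(notified: date, patched: Optional[date]) -> date:
    """Sketch of the quoted 90+30 policy: if a patch ships within 90 days
    of notification, details are held until 30 days after the patch;
    otherwise details are released at the 90-day deadline."""
    deadline = notified + timedelta(days=90)
    if patched is not None and patched <= deadline:
        return patched + timedelta(days=30)
    return deadline

# Hypothetical timeline: reported Feb 15, patched Mar 3 -> disclosed Apr 2.
print(p0_disclosure_date(date(2024, 2, 15), date(2024, 3, 3)))  # -> 2024-04-02
```

Under a policy like that, a vendor that patches promptly still gets a month before details go public, which is the gap the parent comment says Rapid7's policy removes for silent patchers.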
As soon as a security patch is released, it will be reverse engineered and exploited, regardless of whether PoC code has been released.
>Rapid7's "we release full details and ready-made exploits the very day vendor releases a patch"
This is not their whole policy. They have exceptions, extensions, etc. But yes, they will publicly disclose a vuln if your company decides to refuse coordinated disclosure and choose to silent patch.
They also don't release "ready-made exploits"; please don't be hyperbolic. They release technical details about exploits, yes, but they aren't releasing Metasploit modules or the like.
See also from Project Zero (assuming that is your preferred vendor?):
>Any attacker with the resources and technical skills to turn a bug report into a reliable exploit chain would usually be able to build a similar exploit chain even if we had never disclosed the bug
Does Rapid7 get paid by customers to do that for 3rd-party software the customers use and then they report it to the upstream vendor (JetBrains in this case)?
Seems like infosuck folks are squabbling over "who can be the most-est-est ethical".
I'm like, if I do a report on a vuln, you can pay me. I've been stiffed on the last two, where a real vuln was claimed as "not a problem" and then patched.
I'm looking at selling my vulns to gray-market dealers instead. At least they pay.
Trying to have some ethics in this space makes sense, though; you probably don't want to be directly responsible for causing pain and stress to users. Yes, the company developing the software should do better security-wise, but that does not remove your responsibility for selling it to the gray market and potentially causing massive issues for end users who have no horse in your fight against the developers. You are potentially making victims for a quick payoff and to make a statement.
Are you working on this to make software more secure or is it simply about money? You can make more money if you cozy up with some criminals, where will you draw the line?
I really hate this take. If a company refuses to pay more than the gray market is offering, then they are the ones at fault for putting their users at risk. Go and guilt trip the entities who are undervaluing the time and effort spent finding vulnerabilities. I place zero blame on researchers selling vulns unless the software company is willing to pay at least as much as the gray market.
And I really hate this take. It's basically a hostage situation: "pay us as much as those criminals over there would, or else... your users will suffer the consequences." Yes, it's the company's fault for putting users at risk by creating an exploitable issue, but there's the other side of the coin: it will be the "researcher's" fault for selling it to the highest bidder, damn the consequences.
Both sides have their ethical issues, I think the company should pay but also that researchers should look quite deep into their souls if it's ethical to fuck with users because an entity they have no control over (the company) fucked up.
To me it sounds like an immature tantrum, unfortunately from my experience with security researchers it feels to be a field with quite a few rage-tantrums.
Why shouldn’t there be bidding wars to determine the value of an exploit? These large software companies have zero issue with exploiting capitalism when it benefits their bottom line, and yet it’s suddenly unethical for a researcher to demand to be paid market value for their work? Look at Google, a 1.6 trillion dollar company. They have virtually limitless resources when it comes to paying for secure software, and yet they offer less than half the $ for an Android zero click when compared to Zerodium. Pay people the fair market value for their work and all of this becomes a non-issue. They can afford to do it.
There's less of an ethical issue here than what's being argued.
I have been screwed by 2 different companies over "proper" reporting. They came back and said I didn't qualify for bug bounties.
Then they turned around and PATCHED said bugs that didn't qualify for stated bug bounties.
The issue isn't "who's the highest bidder" but instead "who will actually pay what they say".
And frankly, the whole of infosec is full of holier-than-thous and corporate scamming. I'll sell to whoever will pay.
(NOTE: what I type only applies to closed source and corporate websites. FLOSS gets free and discreet reports. No 0days for FLOSS. I'm just done being screwed by corporate interests who claim to pay and then screw you over.)
The ethical issue is that this capitalist fight over exploit market prices has very real, damaging consequences for unwitting users who have nothing to do with the bullshit. Why should they pay the ultimate price because a company tried to fleece security researchers? Is that the only way to make companies pay, to cause damage to what amount to "civilians" in this war?
That's my point: these very real people have nothing to do with the whole mess. They trusted a third party's product. They can't audit every third party they use for security holes, much less research ways to exploit them before using those products. You are saying it's OK for these people to suffer because Google didn't want to pay for an exploit that was discovered.
It boils down to the same question: why are the researchers doing this job? If it's because they believe they are helping make software more secure then ethically they shouldn't be putting unwitting end users into harm. If they are just looking for a paycheck then it's just morally corrupt to use users as hostages to get a ransom.
I repeat what I said: this is the ethical discussion I'd like to see happening, not this immature "they didn't pay me, so fuck all the users, I will blow it all!", that just sounds like children throwing tantrums.
Again: why do researchers do this job? Underneath that you can judge whether it's ethical to just sell to the highest bidder. Capitalism is amoral; we as humans imbue it with some sense of morality. This approach of "pay what the market is paying or else I will fuck your users" is, in my opinion, ethically wrong. If a researcher is serious about performing this job to make the world a less shitty place, they need some moral guideline to follow; if not, it's all bullshit and they are just mercenaries, and I don't support mercenaries.
Perfect example of the guilt-tripping crap seen in infosec.
Where I come from, when you say if you do X, I'll pay you Y, it's unethical and a tort NOT to pay Y if you do X.
You can do whatever contortions you like around third parties (users). As long as the exploit vendor doesn't say they're doing anything illegal, I'm in the clear. I've had one vendor say they were engaged in illegal activities, and I refused to sell.
Again, some may want to auction to highest bidder. Whereas I just like to eat.
The companies are the ultimate unethical entities here; they've summarily made everyone else look bad, all the while hitting users and saying "quit making us hit users!"
> Where I come from, when you say if you do X, I'll pay you Y, it's unethical and a tort NOT to pay Y if you do X.
Two wrongs don't make a right.
I did agree that companies are being unethical if they don't pay; that does not give researchers leeway to act unethically in return. Again, who ultimately pays the price when a sold exploit is used in the wild by bad actors? Real-life users.
Also, the case being discussed is not about companies not paying X. The comment I replied to was about not paying what the highest bidder would pay, and that is not the agreement in most bug bounty programs offered by companies trying to secure their software: they give payment ranges depending on the severity of the exploit, not "we will pay whatever the highest bid you can get for your exploit."
Is it guilt tripping to ask about ethical questions around infosec? That sounds to me like a thought-terminating cliche, attempting to terminate a discussion for what I believe is very clearly an ethical issue. It does have ethical ramifications, I will not stand on the side of "I will just auction to the highest bidder a potentially harmful exploit", that does not sit right with my moral code.
> As long as the exploit vendor doesn't say they're doing anything illegal, I'm in the clear. I've had 1 vendor say that they were engaging in illegal activities, and refused to sell.