I’m the maintainer of many high-profile repositories in JavaScript land (react, react native, prettier, excalidraw…) and the following paragraph rings true:
“But there is no public evidence whatsoever that these instances warrant the noise, make-work, and consequent fatigue that their reports induce.”
All the vulnerabilities ever reported through this channel were regex DoS and were absolutely not real security issues. Most of the time they were in code paths that were not actually used, which makes matters worse.
Because a bunch of companies hook their security processes up to those reports, people end up alarmed about these non-issues. It generates truly useless work for the maintainer and puts them in a situation where they have to justify that the report is completely bogus even though it has a “CVE” attached to it.
Until three years ago, the last time I had touched JavaScript was when jQuery was all the rage. I had been a security engineer for my whole career, but recently I've been building a security product (with JS, obviously), and the state of everything has been blowing my mind.
Most security products are just lighter fluid on the tire fire that is "vulnerability management", and it has gotten to the point (as the post pointed out) where reports are doing more harm than good.
I saw something like this coming, because when I was starting out in security, the meritocracy surrounding collecting CVEs was very real for vuln researchers. It isn't their fault; it is just really difficult to prove your worth when you're seen as a cost center to the company. Additionally, if you don't actually find a vulnerability, are you a bad security researcher? Is the app actually secure? A lot is left on the table if you can't get that CVE number, and proving your worth to the security community becomes challenging.
Everyone at my company, myself included, is a security expert, and we spend all of our time figuring out how to flip the script on current reporting and response practices. If you (vjeux) or anyone else have any thoughts, ideas, or rants you'd like to share, we have a Discord: https://discord.gg/awx66qBW. But if you aren't about it, you can shoot me an email: chris@lunasec.io.
Would love to hear about your war stories from the trenches!
I feel like the scoring of the vulnerabilities is the issue here. If the 50 ReDoS and Prototype Pollution vulnerabilities in Webpack (or any other developer tool that only ever touches my code and my configs) were scored low severity, I would probably happily ignore them. But they keep popping up with High or Critical severities, with claimed "Network" attack vectors that nobody can possibly imagine.
I’d agree that ReDoS is a repeat offender in having overblown severity in vulnerability reports, and prototype pollution reports have contributed to a fair bit of noise when popping up in dev-tools and such, but prototype pollution can be quite significant.
Java has its “gadget chain” class of vulnerabilities, where the presence of certain jars can turn object deserialisation into RCEs. I’d argue that JavaScript has “pollution gadgets”.
Some years ago I struggled to get lodash – which almost any non-trivially sized JavaScript project has at least a transitive dependency on (possibly multiple versions of) – to fix the “gadget” in its template function. It’s since been patched, and the conversation unfortunately deleted: https://github.com/lodash/lodash/pull/4518
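For readers who haven't seen the pattern, here's a minimal sketch of how prototype pollution arises, using a hypothetical naive recursive merge (illustrative only, not lodash's actual code):

```javascript
// Naive deep merge with no key filtering -- the classic shape of a
// prototype pollution sink (hypothetical, for illustration).
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (typeof value === "object" && value !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      // Recursing into a "__proto__" key walks onto Object.prototype.
      naiveMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property (it does
// not invoke the setter), so the merge happily recurses into it:
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

console.log({}.polluted); // true -- every object now inherits it
```

A "gadget" is then any code elsewhere in the process that reads a property an attacker can plant this way, turning the pollution into something worse than a stray property.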
I understand prototype pollution in JavaScript and know it can be serious, but seeing it get 9.8/10 in libraries like minimist, which only parse process.argv, with CVSS scoring mentioning a "network" attack vector, is by itself contributing to security fatigue.
Not to mention that I've seen a couple of cases where the user isn't really able to control a, b & c, or, like the one I mentioned, where it's just `obj[a] = b` with both controllable by the user, but `b` can't be an object (it's either null, true, false, or a string), so it shouldn't be exploitable at all, yet it still scored a severity of 9.8/10...
I'm not saying we should completely ignore bug reports of these types just because there's a lot of noise among them, just that when a CVE is filed with a critical severity and without a PoC exploit, someone should verify that the reported scoring is sensible before millions of developers using supply chain auditing tools get annoyed with a false positive...
While I disagree with the author’s overarching opinion on ReDoS vulnerabilities, I agree that some CVEs make it through with incorrect severity scores. If you find a CVE like this, MITRE can be contacted to mark it as disputed for investigation.
One example of this was a CVE for ReDoS in the `py` support library, which caused failed CI runs and "noise for hundreds of thousands of pytest users" despite being of questionable severity (as original article explains) and not actually used anywhere in the wild.
Yes! That was the report that originally inspired me to write this post.
There was some ambiguity in that case, because the original reporter claims that they never actually hit “publish” on their drafted CVE. But it’s out there now, and nobody has responded to my (repeated) requests to revoke it, given the lack of any evidence of exploitability.
"lack of any evidence of exploitability" or not detected yet?
I do understand that a CVE can be non-pertinent, since producing a "production" exploit that "just works" could require a technical context that reasonably cannot exist.
I guess this is one of them?
Back when I was meeting computer security people (a decade ago), they told me they usually do not publish "production" exploits; they keep those for themselves or for the vendor (if it is listening). But they have some.
> "lack of any evidence of exploitability" or not detected yet?
Lack of evidence and likelihood.
This is a "vulnerable" module for interaction with paths in SVN repositories (in order to grab SVN metadata) in the utility library of a testing framework.
So the normal use case is that the "exploit" would be the repository itself, likely because your repo server has been compromised, and it would require that the test suite binds tightly to SVN for some reason.
I don't think a ReDoS in your test suite is the biggest issue you're facing if your repository has been compromised.
What I meant is that it requires reading between the lines: some people were exploiting the "convenient bug" in other, benign kinds of systems when publishing, but they had real exploits using the same "convenient bug" on more troublesome production systems.
For instance, in a fantasy world, path parsing in SVN has a "convenient bug" and you can find the same "convenient bug" deep in blink URL parsing.
When we started a vulnerability disclosure program for Mastodon (funded by the EU through some start-up provider they had a contract with) the first rule they heavily encouraged was that we consider all DOS vulnerabilities out of scope. There are just so many easy ways to DOS a complex system by taking legitimate-but-costly actions that worrying about the small % of illegitimate actions that could also DOS a system just isn't worth it for most projects.
(This is not to say that we wouldn't consider the very rare case where a trivial input could lock up the system for hours due to us missing a timeout or something important, or that we wouldn't consider e.g. rate limiting bypasses as vulnerabilities. I'm not speaking officially or completely here, we try to apply a lot of nuance and common sense to the way we review vulnerabilities when disclosed)
Not the GP, but I would imagine the specifics matter. If one request, or a handful, blocks the instance, that's worth serious consideration. If 10 Gbps of line-rate requests blocks the instance, that's not worth serious consideration. In between is in between.
Based on the article's context, I'd imagine most DoS reports are low quality, lacking context and specificity, so responding to them takes more time than writing the report did (hey, there's a DoS), so explicitly rejecting them in a vulnerability program and making exceptions seems like the right move.
I use the checker to fix even regular expressions that are not actually vulnerable. It can be used as a lint, so there is no excuse to allow regular linting but not regular expression linting. The ReDoS hunt is very enjoyable. Enjoy!
This is not actionable general-purpose advice: many languages have regular expression APIs built around PCRE, which is much more of a "general advanced pattern matching" language than just pure regular expressions. Many of PCRE's features (like backreferences) are widely used and allow for parsing of non-regular inputs, meaning that you can't transform them into a linearly-evaluable regular expression.
The root of this is that CVSS scoring is utterly broken, and even if it weren't, issues below some threshold probably shouldn't be assigned a CVE at all.
And if open source developers have it bad, people running public bug bounties actually have it worse, as financial incentives add to the bragging rights...
I would humbly suggest that the real threat of ReDoS vulnerabilities isn't so much the denial of service itself, but the human resources consumed investigating and responding to what may appear to be random failures on the affected properties.
For instance, imagine that an attacker has two vulnerabilities ready to unleash on a target: one a ReDoS and the other an RCE. The ReDoS could conceivably provide cover by causing minor mischief while the real attack goes unnoticed.
This is a real issue, and not just with DoS-type issues. Low-quality, high-volume reports tied to some form of reward, coupled with lots of automation, will lead to this[1]
TLDR: I understand the author's sense of frustration, but I think DoS (including ReDoS) is trickier to handle than simply ignoring the whole class of vulns.
Denial-of-service vulnerabilities in general are something I usually find uninteresting, because in situations I'm likely to find myself in, they don't let me do anything I consider valuable.
I still typically report them, because I can imagine situations where this would definitely not be the case, e.g. if the organization using the product is under attack (time of war, protestors with a political goal, etc.).
Back when I was still in IT, I even experienced it once myself. Someone (apparently randomly) targeted the business I worked for with a massive UDP amplification DDoS. It was pretty eye-opening. The colocation facility they attacked literally would not take action (not even blocking a single destination port upstream) unless we paid extra for their anti-DDoS service.
ReDoS seems like something that's probably rarely useful to an attacker, but understanding the full scope can be challenging for a pen tester vs. the people who actually develop and use something, and so I err on the side of reporting something as a low/informational severity versus not reporting it at all. This is especially true if the vuln is in a library. We typically report library issues to whoever maintains the library, but that also means that most proofs of concept or exploitation scenarios are going to be less realistic than one where we show end-to-end exploitation in a full application.
Formula-based vuln scoring systems are inherently broken (IMO), but maybe one band-aid for this would be to have a separate score for any availability-related effects, to make it easier to filter out in situations where DoS isn't a significant concern.
The post specifically concerns ReDoS; I think DoS as a broader class has slightly more merit.
(Ultimately, I don't care if people factor out accidentally exponential regular expressions -- that seems like a good thing to do! I take umbrage at the idea, however, that they're a vulnerability class that deserves the serious attention we give to things that attackers actually use. My perspective is that ReDoS reports are better suited for linting and CQA than they are for security reporting.)
Great take. Do you have any thoughts on how to improve on CVSS 3.1? I’m wondering if perhaps the optional “additional details” section, where you can contextually upgrade or downgrade scores, should be a mandatory part of the score.
or... you could just patch the vulnerability. This reasoning reminds me of projects like Stockfish, which don't care about memory issues and crashes and will just ignore memory bugs because you can't hit them if you're using the program right.
I missed this on my first read through. So I went back and tried to find what you're referencing, and this is all I could find:
> This is even before any of the really cheap shots, like observing that the entire bug class is based on unreliable premises: that there are no timeouts or resource limits anywhere else in the system (almost always false, particularly in web development), and that the regular expression engine itself is susceptible to pathological runtime behavior (plenty aren’t).
Is this what you were thinking of? Or am I still missing something?
I'd be curious to see ReDoS reports against programs using regex engines that provide a linear time guarantee.
> Is this what you were thinking of? Or am I still missing something?
You're not missing anything: I meant to write more about this, but I forgot and snuck that little paragraph in at the last minute instead.
I think I could have worded it better -- the observation was meant to be that (1) the regular expression engine's superlinear behavior is not part of the public interface of languages like Python, and (2) lots of these "vulnerabilities" show up in code that ends up in a task queue somewhere, meaning it's subject to timeouts and other resource constraints that ReDoS reporters don't bother to check for.
For (1), what I mean is that Python (or an implementation of Python) could switch to a non-backtracking engine for the subset of compatible regular expressions, and nothing about Python's interface would change.
(1) is right in theory, but it's very rare to see it done. The only engine I'm aware of that actually does a "switch between linear time and unbounded backtracking" is Tcl's, and I'm not familiar enough with it to know whether it actually gives any guarantees.
The difficulty in (1) is rooted in three things: implementation complexity, performance, and match semantics. The first two are related, and indeed, if you want your linear-time engine to compete with your backtracking engine, you're going to need to do a fair bit of work. Match semantics are also quite tricky: Cox's RE2 paves the path for getting linear-time engines to mostly agree with backtrackers, but there may be some corner cases where there is disagreement. So for something like Python, the semantics may be quite difficult to overcome.
Popping up a level, I'm not sure how relevant all of this is to your overall point. As the author of Rust's regex crate, I actually generally agree with your post, insomuch as catastrophic backtracking is handled as a security vulnerability. I'm mostly just picking at nits and elaborating here.
Yeah, agreed -- (1) isn't exactly commonplace. That point was originally meant to fit into an extended section of the post on reporting format imprecision, i.e. our current inability to express anything more precise than "this dependency is exploitable, you need to upgrade it" with existing vulnerability formats and feeds. Not being able to automatically filter by implementation variance or context-sensitivity falls under that.
But I then trimmed that section so this point is mostly irrelevant to the larger post, as you said!
Vulnerabilities are bugs a threat actor can abuse to compromise the security of a program. Vulnerability IDs are not issued unilaterally: the vendor must acknowledge the bug as one, or respond explaining why it is not.
Given all that, the incentives OP talks about are rightfully aligned. People spending time finding security issues deserve recognition. Finding a lot of small vulns is as valuable as finding a few vulns, specifically because they are not just bugs but something a malicious actor can use to harm the user of the product.
Many researchers get the "it's a feature, not a bug" response, or rather "it is a bug but not a vulnerability". A software maintainer can respond that way to the bugs the author is talking about: just another Git issue to resolve eventually.
Since these are *DoS vulns, the specific security property affected is availability. Does the reported bug allow a hostile actor to reduce the availability of the software, given that users have a reasonable expectation of the software being available? If so, treat it seriously as a security vuln with the right severity; otherwise, reply stating that you will or won't fix the bug, but that it isn't a vulnerability.
This isn't different than people opening git issues for small things.
Also, since having a good reputation is a (correct) incentive, having a reputation for creating frivolous vuln reports is very bad. No one wants to be the researcher who cried wolf.
Yes. It's not different from people opening git issues for small things. It's exactly like that Digital Ocean "Hacktoberfest" thing, which incentivized people to submit bogus PRs in open source projects to get swag, forcing open source maintainers to do the labor of sifting out the (voluminous) chaff.
Vulnerability IDs are not cred. Nobody who has cred thinks they are cred. They are just numbers that the database will hand out to you if you request one; the vendor doesn't have to be involved. They are essentially isomorphic to issue numbers: some issues are really good, but some are just straight-up garbage. The value of having issue numbers is that you get a stable identifier for a problem, not something you can wave around thinking it carries some sort of inherent reputational boost.
They are an accomplishment: you spent time and improved the security of something. I am only familiar with CVEs, where unless MITRE overrides, it is up to the vendor to issue a CVE.
Vuln IDs are to researchers what merged PRs are to devs. It is "cred", but the amount of reputation depends on the amount of work you did. So finding a minor DoS in a little-used app isn't the same as finding one in Logstash. This is an important incentive, because they could have silently sold it to Zerodium or on raid/xss using an alias. Bug bounties are better IMO, but a vuln ID is at least a resume item. Having cred and bragging about it are different things IMO.
If the expectation is that you get "cred" based on how much work went into a merged PR or a vuln ID, it doesn't sound like vuln IDs or merged PRs are cred at all. Unless whenever I talk to somebody I'm expected to read through all their recent vulns/PRs to calculate how much cred they've earned based on the contents?
Yeah, you should know based on the description. Not necessarily the amount, but the value. If you PR 3 lines that improve performance 10x, or find 1 line that causes a kernel RCE, that's cred, no?
The work is the “cred”. Knowing that somebody found a kernel RCE lets me know they’ve done interesting security work in the kernel space. It doesn’t matter whether or not they had a CVE assigned for that work.
PRs and CVEs are mechanisms for interacting with work. They’re not status symbols, they’re not “cred”.
Well said. This is another thing I wish I had expanded upon in the post: the "clout" that people chase with CVEs isn't considered particularly serious by the actually serious players in security. The contents of individual CVEs of course can be a source of cred, but the CVE itself is just an identifier (and brandishing it as anything more can be seen as gauche.)
And then you can argue with MITRE about their wrong CVEs. I got plenty of garbage CVEs for development versions which were never released, marked them as invalid, but they are still in the CVE database. MITRE doesn't care, the reporter doesn't care, and I don't care either.
I think OP is over-egging their point, especially recommending that repos ban reports of ReDoS. This isn't "security noise" to the same degree as "misconfigured" certs, for example.
For many services there is a genuine concern for DoS, and ReDoS really isn't the same as "traditional DoS", because the latter has a vastly larger footprint, is far more likely to involve extra criminal activity to set up and execute, and is therefore far easier to attribute and later prosecute.
A ReDoS, meanwhile, can be smuggled into a service easily from a single source that's easier to mask, and often passed off as "legitimate" traffic or queries, making it easier to avoid prosecution.
So I think it's going too far to recommend just ignoring all ReDoS.
Certainly projects on a project by project basis should say how much they care about ReDoS and grade the severity based on their own evaluation. For some projects that'll be almost nothing, for others it may be more serious.
I agree. The complication is that it's not just "per project" but "can an attacker exploit it in a way that matters?". If so-called ReDoS can only be triggered by trusted data, it's not a vulnerability.
If something relies on trusted data I'd argue there isn't a ReDoS vulnerability there at all.
In some ways this reminds me of XML/JSON parsing vulnerabilities. If run on untrusted/unsanitised data they can be absolutely tier-1 critical (remote code execution), but if parsing trusted data then they are essentially benign.
If someone claims you have a vulnerability and you disagree then it's on the submitter to prove it with a PoC or a plausible mechanism for triggering it.
That doesn't however invalidate the whole class of vulnerabilities.
> ReDoS really isn't the same as "traditional DoS", because the latter has a vastly larger footprint
You're confusing DoS (denial of service, a very broad class of problems that includes ReDoS) with DDoS, which is a distributed denial of service attack, something that disables a service using multiple malicious sources of input.
The terms are used interchangeably (often erroneously) a lot, but the single-D DoS needn't have a "big footprint". It could be making their server crash with a single byte somewhere, like a particular old Linksys router exploit some may remember.