There is no universe in which the CVE database is going to be reliable as a source of analytical data about vulnerabilities. It exists to provide a common vocabulary in discussions about specific vulnerabilities, and that is literally all it can do (& even then...). It is gamed six ways from Sunday by practitioners, and nothing is going to stop that from happening, because the project isn't staffed adequately to meaningfully adjudicate, let alone analyze, all the vulnerabilities that get filed, and nobody in the industry is remotely interested in funding such a body.
So, however valid these complaints may be, they fundamentally misconstrue the role of NVD. They're taking NVD artifacts far too seriously. I'm sure that's a reaction to incompetents in the security industry also misconstruing NVD, but the correct response to that is to dunk on those incompetents, not to attempt to hold NVD to an impossible standard.
Meanwhile, the CVSS is bad? You don't say. On this point, Stenberg's right to make noise: there is a broad (if quite shallow) belief that CVSS scores have some meaning, and they do not: they are a Ouija board that reflects the interests of whoever calculated the score, and it's easy to show pairs of sev:lo-sev:hi vulnerabilities that illustrate exactly how ridiculous the resulting scores are.
It would be better if the NVD CVE database didn't include CVSS scores at all; they don't work, they're unscientific, and they hold NVD to a standard it can't possibly meet, which makes a lot of this mess NVD's own fault.
The problem is the CVE scores do work in most cases. A lot of organizations still prioritize updates based on their CVE score, and don't bother with updating unless it meets a certain threshold. If it doesn't meet that threshold, they wait until their monthly patching cycle, or never update at all.
Until that culture is fixed, having a scoring mechanism simple enough that manager/executive types can gauge risk without any technical knowledge is important. It's way easier to argue for an emergency patch and costly downtime when there is a big scary 9 attached. So if the scoring is off and not accurately representing risk, let's work to improve the scores rather than getting rid of them.
Plus, there is a reason environmental scores exist in the CVSS mechanism: they allow folks to adjust a CVE's number to better fit their environment and specifications. I'd personally rather see more CVEs appear and be tracked quickly, for easier referencing and discussion, with a slightly adjusted formula to better reflect severity.
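To make the mechanics concrete, here's a rough sketch of the published CVSS v3.1 base-score arithmetic (scope-unchanged case only) and how an environmental override like "Modified Attack Vector: Local" pulls a network-scored bug down for a deployment where the component isn't reachable from the network. The vector is hypothetical, and the real environmental formula also weighs confidentiality/integrity/availability requirements, which this sketch ignores:

```python
import math

# CVSS v3.1 metric weights (Scope: Unchanged), from the FIRST.org spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest number with one decimal that is >= x."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Hypothetical vuln scored for the "general" case: network-reachable.
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # -> 9.8, "critical"

# Environmental adjustment: in *our* deployment the component is only
# reachable locally (Modified Attack Vector = Local). Same bug, new number.
print(base_score("L", "L", "N", "N", "H", "H", "H"))  # -> 8.4
```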
> The problem is the CVE scores do work in most cases. A lot of organizations still prioritize updates based on their CVE score, and don't bother with updating unless it meets a certain threshold. If it doesn't meet that threshold, they wait until their monthly patching cycle, or never update at all.
That doesn't mean it works; that could just mean organizations either a) don't understand the actual severity of their own vulnerabilities, and prioritize fixes based on an incorrect metric; or b) recognize that the CVE score is garbage, but don't want to appear to their users as ignoring or de-prioritizing supposedly-severe issues.
They do not work in most cases. They provide psychological comfort to enterprise IT teams, but so would a literal Ouija board, as long as you hid it from the people relying on it. Obviously, the "environmental score" component of CVSS is an admission that the whole system is intellectually bankrupt.
I'd be interested if you could find a serious vulnerability researcher (say, anyone who has given a Black Hat talk or submitted a Usenix WOOT paper) who'd be willing to defend CVSS. My perception as an (erstwhile) practitioner is that CVSS is sort of an industrywide joke.
The NVD and CVE teams are severely understaffed, and they get a large number of reports every day. There is no way they can process each report as thoroughly as people want while also publishing the security issues as fast as possible. I can understand the frustration, but there should be some consideration of these facts.
The social issue continues: people want a quick, easy-to-understand number, because they and their companies cannot determine in a short time whether a given security vuln is substantial or not.
Then there's the gaming of the system: vendors with buggy products don't want to be exposed as insecure, so they try to lower the scores assigned to them. And there are pentesters who want high-scoring bugs for bounties and clout, so they try to raise the scores...
All this ends up meaning the scores mean very little, but they're the best we have. I've seen vulns rated 9+ that are clearly not exploitable, and ones rated 6-7 that are powerful, easily exploitable primitives.
"The social issue continues in that people want some quick and easy to understand number, because they and their company also cannot process in a short time if these security vulns are substantial or not.
"
Uh, none of this explains why they re-score the vulnerabilities themselves, ignoring the vendor scores and deciding they know best, when they seem to know ... nothing.
Which, if you read through the links, you'll see is one of the main complaints, and something they document here:
Their take is basically "we get to do this, and if it's wrong it's your problem to get us to correct it".
That doesn't make any sense.
As my child's kindergarten teacher once asked a student: "Do you think if everyone decided to do that, it would turn out okay?"
NVD and CVE teams will simply never have the expertise, time, or information necessary to actually re-score these things correctly. You'll also note that despite claiming they score things based on publicly available information, and that others must do the same, they cite absolutely no sources to Daniel to defend their scoring.
They light themselves on fire, generate smoke and noise that others have to deal with, and then complain that it's the job of others to put it out.
The gaming of the system is the reason they re-score things themselves and do not take a vendor's statement as fact. There's a long history of vendors underreporting and underscoring their security issues.
That's a different problem to deal with a different way.
"People keep lying to me, so I'm going to ignore what they tell me and make it up myself" doesn't seem like it's going to generate great results.
Put another way - "NVD making stuff up that is not accurate" is not a solution for "other people making stuff up that is not accurate".
If you want responsibility, you have to incentivize it somehow, whether carrot or stick. This doesn't achieve that. They can tell their customers whatever they want, and then blame any difference on NVD being mean.
No, he called out the obvious reason why NVD would decide it needs to operate this way, without acknowledging that the process they've come up with doesn't work and is sort of an industry embarrassment.
The root of the problem is that if NVD lacks the resources to adequately assess CVEs, then they can't do better (well-intentioned or not!) than what the vendor reports, and them trying to do so only adds more noise.
Sure the vendors have motive to underestimate, but they're also far better equipped to accurately assess severity. Better to focus on them and hold them accountable.
I think the implied premise is that vendors are likely to score with a low bias, downplaying security vulnerabilities.
If NVD completely re-scored submitted CVEs from scratch, then they'd replace a biased dataset with a noisier but unbiased dataset.
On the other hand, if NVD accepts high severities and only re-scores low severity CVEs, then the collected database might exchange a low-severity bias for a high-severity bias.
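A toy simulation of that trade-off (all distributions invented purely for illustration):

```python
import random

random.seed(42)
N = 10_000

# Hypothetical model: true severities are uniform on [0, 10].
true_scores = [random.uniform(0, 10) for _ in range(N)]

# Vendors: small random error, but a systematic downward bias.
vendor = [max(0.0, t - 1.5 + random.gauss(0, 0.5)) for t in true_scores]

# NVD re-scoring from scratch: unbiased, but much noisier.
nvd = [min(10.0, max(0.0, t + random.gauss(0, 2.5))) for t in true_scores]

def bias(est):
    """Signed mean error: how far off-center the dataset is."""
    return sum(e - t for e, t in zip(est, true_scores)) / N

def mae(est):
    """Mean absolute error: how far individual scores wander."""
    return sum(abs(e - t) for e, t in zip(est, true_scores)) / N

print(f"vendor: bias={bias(vendor):+.2f}  MAE={mae(vendor):.2f}")
print(f"nvd:    bias={bias(nvd):+.2f}  MAE={mae(nvd):.2f}")
# With these made-up parameters: the vendor set is biased low but each
# score stays close; the NVD set centers on the truth but individual
# scores wander much more.
```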
> I think the implied premise is that vendors are likely to score with a low bias, downplaying security vulnerabilities.
Indeed. It occurs to me that there's a possible feedback loop here. To the extent that NVD increases the severity of all (or even some) CVEs, they are effectively increasing the incentive for vendors to report lower severities, since the vendors may rightly assume that NVD will inflate the severity anyway.
If vendors were responsible -- and known to be responsible -- for scoring their own security issues, then everyone would at least know that, and understand that the vendor's assessment could be biased.
Instead, many people seem to believe they can blindly trust NVD, when their severity assessment is arbitrary and based on wild guessing. And individuals at NVD probably have their own biases as well!
I worked for Red Hat prodsec for 6 years doing kernel work. I regularly resubmitted CVSSv3 re-score requests to NVD, and they seem to be pretty good about them when the reasoning is clearly explained.
NVD will sometimes score a flaw for the 'general' use case rather than the specific one, which matters because for libraries severity can be environment- and usage-specific.
I have to agree that publishing your own scoring is not ideal; however, a score is sometimes specific to the use case. This is one of the reasons people are uncomfortable with CVSS: some scorers consider the general case, some consider the actual usage.
It's the inverse of the boy who cried wolf: if they mark a vulnerability low and it ends up being the next log4shell, they look really bad, and the organization may be effectively finished.
But if they rank everything "7+" then they have plausible deniability.
So everything will be ranked 7-10. Meaning 1-6 are useless.
That was my thought too. Imagine someone at some desk in some drab office building with a huge pile of these to sort through. Different libraries, languages, frameworks, etc. They probably have some quota to fill. How motivated are they going to be to find out who the authors are, how to contact them, compose a long message, explain, ask questions? Nah, just look at the code, google a few things, and assign as high a score as possible. Nobody wants someone to come yell at them for assigning a 3 to the next log4shell.
They could have a tiered system where at least the things with deployments in the billions get special treatment. Holidays can be cancelled and releases rescheduled over a bad enough vulnerability in sufficiently common software. Miscoring a vulnerability makes a thing that is supposed to be good (promoting a more secure ecosystem) end up actually being bad, wasting resources that could have been spent on solving real issues. To say nothing of the lowered trust in NVD...
Curl is in a LOT of systems, all over IoT. You should see some of the emails he gets asking for support for some random Chinese toy or, quite frequently, automobiles. I see Daniel's point: he doesn't want to be hit with a 9.8 over some flawed reasoning, because it makes him look sloppy (which he is not). But the silver lining is it would likely force all those systems to upgrade curl wherever possible. The ones that don't... no biggie, it's nearly impossible to make a successful attack out of it.
The NVD seems optimized for dealing with companies. It definitely provides a valuable service, but the service it provides is something like “ingesting all the company PR about a given security issue and distilling it into an automated format for algorithmic consumption”. Curl has a particular combination of extremely high deployment numbers and very few developers. This pattern is common in open source projects, but (vanishingly) rare in commercial projects. And almost everything the NVD evaluates is a commercial project, which is probably why the NVD struggles so much to get it right on curl.
The compromise solution is probably something like an “open source security issue” evaluation agency - it can take assessments derived from actual ability to read the code, and adapt it to a “press release” format that the NVD can handle. Basically an interface issue.
As someone who has worked a lot with CVEs (and CPEs :shudder:) and had a few interactions with NVD: they are trying their best but absolutely do not have the resources they need. The other part of this is that CVSS 3.1 overrates severity in many cases; solving this is a major goal of the soon-to-be-released CVSS 4.0!
I don't know why he assumes NVD is a big organization. I'd be surprised if they occupied a full floor. They're most likely understaffed and underpaid, which is not a good start for a mistake-free environment. But his point stands: it would definitely be much better if they triple-checked when scoring stuff that is universally deployed, like curl, zlib or SQLite.
This spurious flagging does have huge downstream effects - personally, we had to upgrade container images used for training because curl (among others) was “flagged” as having a critical vulnerability.
Maybe rating stuff is a similar problem to estimating stuff, so whatever is effective in the latter (results vary, of course) may help in the former.
A "standard checklist" for each category may help. Kind of like a "You must be this tall to ride."
At least three types of checks:
* If this is not checked, it can't be that severe.
* If this is checked, it must be that severe.
* If this is checked, it has that common trait.
It may get complex enough that a need to "develop a framework" arises. Such a framework could look at threat vectors and mark stages like phishing, recon, post-compromise, etc. These probably already exist.
I said "framework," so in a sense either things have become too complex, or it could not be simplified without.
Checklists are what Daniel is complaining about. NVD saw the vuln, thought "Oh, I think that fits in the 'authentication bypass' checklist entry", and ranked it as high severity.
Daniel is saying that no, the curl developers don't need a dumb checklist, because they know the vulnerability and the code; they know it's not severe. Checklists are getting in the way of reality.
Checklists are a tool you use to organize a process, to organize information that exists. They're not a good tool for gaining that knowledge in the first place.
You get good estimates not by having a checklist that says "Oh, if it's in javascript, the checklist says add 3 weeks, if it's in fortran, the checklist says add 6 weeks", you get good at estimating by actually understanding the exact work that needs to be done. Unfortunately, no one ever understands the actual work that needs to be done.
You get good security ratings by understanding the vulnerability, not by having a checklist that says "if there's memory corruption, it's severe".
Thank you. Yes, checklists that obscure nuance or try to shortcut understanding are bad.
Checklists created in the absence of context, or even worse, without the right folks evaluating them (the developers themselves!), are next to useless.
Regarding estimation, yes, I agree those examples are bad. I do think you could come up with effective ones too, though. Checklists should reflect the domain: some reasonable path that a person with training could follow.
Maybe one of the criteria of a good checklist should also include, "This checklist is terrible and should be thrown out; in the absence of Not Applicable as a choice, or even Too Nuanced; Time Ran Out choices, here is an alternative: _____."
I don't think NVD should use one checklist, but maybe they could use it as part of an overall toolkit, one that also includes--time permitting--a detailed investigation of the subtleties behind a less severe "authentication bypass" and a more severe one.
(After all, in agile estimation, points are relative to previous story estimates, given the same tuple of [Team, Product, Sprint, Story].)
I do feel it is unfair to be dinged by a separate org. Maybe high severity ought to be reserved for when there is mutual agreement without optics. Because to NVD it's just another item, but to cURL org it impacts their metrics too.
On the other hand, there may be bad actors trying to game that.
> You get good security ratings by understanding the vulnerability
Yes. Maybe NVD needs more visibility by rating certain vulnerabilities as "High Severity (Contested: Low)," if only to underscore their limited resources.
Everyone knows how important cURL is, and how unhelpful it is to attract developer ire. Maybe we cannot label a vulnerability on a single axis alone.
I wish I could describe the problem formally: something like a proof that a mapping of multiple attributes cannot be faithfully represented by any single attribute in the same set.
> Checklists... use to organize a process... organize information... not a good tool for gaining that knowledge in the first place
I agree. Even an outline, 5 Whys, fishbone, or lots of other tools. Run the branch locally, try to repro the bug. Lots of ways.
But if NVD's reason-for-being is to assess things, not necessarily to fix them, that may also need to be considered in their findings.
At the end, before assigning the final severity, go through one (or more) checklists to ensure nothing is missed, and to shake out contradictions or inconsistencies.
It would also be good to allow a response window for developers. If it's not as bad as all that, use that response. If it reeks of damage control, stick to your guns.
Yep. I feel the same pain with `debug` on npm and have almost given up on the project entirely. A few security "researchers" determined that, since the code could technically be executed by an application connected to the network, a regular expression complexity issue must also have a network attack vector, which immediately boosts the score up by a LOT and...
... increases their payout on bug bounty sites. :) Because of course it does.
Snyk, Huntr, and VulDB, I'm looking at you.
CVEs mean nothing to me anymore. The entire system is beyond fucked.
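For anyone who hasn't seen the bug class being inflated here: "regular expression complexity" means catastrophic backtracking, where a short pathological input makes matching take exponential time. A textbook demonstration (this is not the actual `debug` regex, just the classic shape of the problem):

```python
import re
import time

# Classic catastrophically-backtracking pattern: nested quantifiers.
# NOT the regex from the npm `debug` package; just the textbook example.
evil = re.compile(r"^(a+)+$")

for n in (18, 21, 24):
    s = "a" * n + "b"  # the trailing "b" forces the engine to backtrack fully
    t0 = time.perf_counter()
    evil.match(s)
    print(f"n={n}: {time.perf_counter() - t0:.2f}s")
# Each extra "a" roughly doubles the runtime. It's a DoS only if an
# attacker can actually feed input to the regex, which is exactly what
# the scoring dispute is about.
```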
We're building a complex software product and we want to offer support for it.
As part of our contract we initially offered to work on all CVEs detected in our products or our dependencies (e.g. Docker images, Rust dependencies, etc.), but we quickly learned that that's not feasible, as there is a huge number of low-quality or irrelevant CVEs.
We still need to take all of those into account and evaluate them.
Red Hat is using something similar: they have their own ratings, which take the CVSS scores into account. They have a custom system built around this.[1]
I'm wondering two things:
1) What would your expectations be towards a software vendor in terms of what issues to "fix"?
2) Is anyone aware of a security database & evaluation tool geared for vendors rather than end users? As in: it gets a feed of CVEs relevant to our products; each one needs to be analyzed and amended with a new CVSS vector and our own analysis result; and the results are then published to a feed, potentially emailing all affected customers.
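(The ingestion half of what I'm describing is at least scriptable today against the public NVD CVE API 2.0. A rough sketch, with a placeholder CPE and the response-field paths as I understand the 2.0 schema:)

```python
import requests

# NVD CVE API 2.0: public, no key required (an API key raises rate limits).
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_for_product(cpe_name: str) -> list[dict]:
    """Fetch CVEs matching one CPE and stub out our own analysis fields."""
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    results = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
        results.append({
            "id": cve["id"],
            "nvd_score": metrics[0]["cvssData"]["baseScore"] if metrics else None,
            "our_score": None,      # to be filled in by our own analysis
            "our_analysis": None,
        })
    return results

# Hypothetical CPE, purely for illustration:
for v in cves_for_product("cpe:2.3:a:haxx:curl:7.88.0:*:*:*:*:*:*:*"):
    print(v["id"], v["nvd_score"])
```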
DISCLOSURE: I'm working on commercial tooling for this exact problem [1]
> 1) What would your expectations be towards a software vendor in terms of what issues to "fix"?
I would want the vendor to communicate their analysis for all CVEs, i.e. letting us know which are exploitable or not, and what kind of response they are planning, or any fixes released.
There are efforts to standardize this workflow with Vulnerability Disclosure Reports (VDRs)[2] and Vulnerability Exploitability eXchange (VEX) documents[3]. Both of these use cases are covered by OWASP CycloneDx[4].
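To make that concrete, here's a minimal CycloneDX-style VEX document built in Python. The field names follow the CycloneDX 1.4+ vulnerability schema as I understand it; the justification text and component ref are hypothetical:

```python
import json

# Minimal CycloneDX-style VEX: "we ship this component, the CVE doesn't
# affect us, and here's why." The rationale below is hypothetical.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "vulnerabilities": [
        {
            "id": "CVE-2023-27536",
            "source": {"name": "NVD", "url": "https://nvd.nist.gov"},
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "We never enable GSS-API delegation, so the "
                          "vulnerable connection-reuse path is not hit.",
            },
            "affects": [{"ref": "pkg:generic/curl@7.88.0"}],
        }
    ],
}

print(json.dumps(vex, indent=2))
```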
> 2) Is anyone aware of a security database & evaluation tool geared for vendors not for end-users?
IMHO, there are no good tools available that solve the complete workflow. We are certainly aiming to fix that with [1], but it will take some time.
> 1. What would your expectations be towards a software vendor in terms of what issues to "fix"?
This is difficult to answer, as it really depends on the customer. I work in a sector where any vulnerability is something we have to attempt to remediate, even if it is in a tool or library installed in a Docker image and not executable by the app running inside that container (think: the base image has a security flaw). It means we have a lot of work to basically copy the contents of the app out of a container and plug it into our own base images that we can control/scan/deal with the vulnerabilities on. It's a giant pain in the behind. Vendors usually don't care, because, rightfully so, they say "that library is not something our app uses, so it is out of scope" or "that comes installed by default in the Debian base image from Docker, complain to them".
The field is simply too fast-moving and too large. The other case is when your binary/app uses a vulnerable library. Think Java/Go/Rust/Python/Ruby pulling in a vulnerable dependency. We can't patch that ourselves (without potentially breaking things), so we have to get an exception while we push the vendor to upgrade.
That part is difficult at times, though, because sometimes the version that is not vulnerable is not API-compatible with the version that is, and the upgrade takes longer than the time we have to remediate the vulnerability. With Ruby/Python/Java we can usually replace the bits necessary to make it not vulnerable and get an exception granted, but for Go/Rust and other binaries we are stuck getting exception approval and building additional security around the product to make sure that flaw can't be exploited.
> 2. Is anyone aware of a security database & evaluation tool geared for vendors not for end-users? As in: Gets a feed of CVEs relevant for our products, each of them needs to be analyzed, amended, new CVSS vector and our own analyze result and then publish them to a feed, potentially emailing all affected customers
Even if you made this available, we HAVE to use the NVD/CVE scores themselves. We can't deviate just because the vendor says so, unfortunately.
I would love for vendors to have something like this integrated, though, as some vendors ship hopelessly outdated versions of libraries in their products, because it is a lot of work for their teams to upgrade/test/validate that the new version doesn't break anything versus just sticking with the known quantity. Upgrading dependencies in a project is work, and the larger the project and the more dependencies, the more it becomes a HUGE chore unless you have a really good testing framework, and there is always some new feature that gets priority over gardening tasks.
Thanks for the detailed response. Rebuilding the Docker images sounds painful!
It is a minefield. Before starting our product company we did a lot of consulting gigs, and security updates etc. were a big part of those. We have seen the other side of this for 10+ years and _want_ to do a good job, but it's not actually easy to define what a _good job_ means here.
If you have any feedback on what would make your life as a customer easier, please do let me know (my email address is in my profile).
It's a disaster when a system handling user data shows data belonging to another user, and that is how I understand the effects of https://curl.se/docs/CVE-2023-27536.html manifesting in the real world. It was wrong even before privacy regulations were invented.
The previous issue (https://curl.se/docs/CVE-2022-42915.html) seems more sophisticated to take advantage of and probably requires planting something in memory by other means, but I'm not a "malware specialist"; I don't know assembly well enough to judge it myself.
It's good that some of these things can be found by the libcurl authors/maintainers themselves. I'd love to see CVE registries state directly that the source of a CVE was internal to the project, without having to look up the reporters. Even a feature with the lowest usage of them all is still a feature, and as the author of a library you have no control over how your library is used. I think the GSS delegation bug well deserved its initial score.
As a developer on the library-user side, I'd love to see it marked critical and then write a false-positive report, based on how the library is used, to pass CI checks on my side. And it's not blame on you for having had race conditions; it's praise for not having them anymore.
> It's a disaster when a system handling user data shows data belonging to another user, and that is how I understand the effects of https://curl.se/docs/CVE-2023-27536.html manifesting in the real world. It was wrong even before privacy regulations were invented.
It exposes data to a user who had a right to see it in the first place (it very much evokes the "airtight hatchway", although strictly speaking it is not the same thing). Also, the whole combination is incredibly niche, and it is hard to see why the client code would ever want to establish a second connection to the same server without GSSAPI.
In essence it is a minor implementation oversight with no practical consequences, and probably the only reason it even has a CVE number is that GSSAPI is involved.
> establish a second connection to the same server without GSSAPI
As I understood the description, it's not even that: you would need to establish two connections, both with GSSAPI and using the same username, but with different credential scoping options (the text says with or without credential delegation, but GSSAPI has other options besides delegation and it's not clear to me if that's just an example). In such a case, the client has no control over which connection would actually be used.
> It exposes data to a user who had a right to see it in the first place
I would be surprised if that were the only scenario. I've seen systems that rely on passing requests around with some common account's auth wrapping the communication to further services, in all sorts of ways and for a variety of reasons. Some of the reasons for using a single authentication mechanism at the gateway level are laughable, or at least cannot be taken seriously and should end in a redesign, yet too many dev teams agree to such shortcuts because their client says they don't have money for a proper solution.
And again, "minor implementation detail" is not a valid argument. It is a problem for people who judge another project based on an unpatched high-severity lib version with a bug in an unused feature. Companies that take security seriously have security teams or approval processes responsible for assessing the impact of severe bugs. It's the other ones, in huge numbers, demanding smaller severity scores. To me, they are like alcohol advocacy during pregnancy.
Maybe I'm missing something, but looking at the concept explanation here: https://docs.oracle.com/cd/E19455-01/806-3814/overview-77/in... I would never assume both connections can be used safely in the general case, yet it can be OK if the business logic is fine with that and is limited to processing a single user's requests with the vulnerable libcurl version.