We (F5) published two CVEs today against NGINX+ & NGINX OSS. Maxim was against us assigning CVEs to these issues.
F5 is a CNA and follows CVE program rules and guidelines, and we will err on the side of security and caution. We felt there was a risk to customers/users and it warranted a CVE; he did not.
I worked there before and after the acquisition. F5 Security was woefully incompetent. We spent 3 months trying to get approval for a webhook from GitLab -> Slack, including endless documents (Threat Model Assessment) and meetings - god, the meetings - at one point on a call with 35 people. So I feel Maxim’s pain trying to deal with that team at F5.
On the other hand, the nginx core developers (the Russians) were arrogant to the point of considering anyone else inferior and unworthy of their attention or respect, unless they contributed to nginx OSS. They managed the project secretively and rewrote most “outside” contributions. They also ignored security issues - one internal developer spotted security issues with NGINX Unit (a failed OSS project 20 years out of date before it started) and was told to fix the issues quietly and not to mention “security” anywhere in the issue messages or commit history.
So I can imagine exactly how these meetings would have gone - I’m sure it was the last straw!
I can confirm this. I worked there too, and it took 2 months to get a simple approval for a similar project, despite preparing extensive TMA documents, etc.
This is confusing. The CVE doesn't describe the attack vector with any meaningful degree of clarity, except to emphasize how you'd have to have a known unstable and non-default component enabled. As far as CVEs go, it definitely lacks substance, but it's not some catastrophic violation of best practices. It hardly reflects poorly on Maxim or anything he's done for Nginx. This seems like an extreme move, and it makes me wonder if there's something we're missing.
Maybe, but he only mentioned disagreements on security policies. Doesn't sound very convincing as a last straw, especially from a marketing standpoint when trying to gain more traction for his fork.
Yes, those are the two CVEs I was referring to. All I know is he objected to our decision to assign CVEs, was not happy that we did, and the timing does not appear coincidental.
QUIC in Nginx is experimental and not enabled by default. I tend to agree with him here that a WIP codebase will have bugs that might have security implications, but they aren't CVE-worthy.
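For context, using it means opting in twice: once at build time and once in the config. A rough sketch of what that looks like (the flag and directives here are the 1.25.x-era ones, so treat the exact spelling as my assumption):

    # build nginx with the experimental HTTP/3 (QUIC) module;
    # a default ./configure does not include it
    ./configure --with-http_v3_module
    make && make install

    # nginx.conf: you also have to explicitly listen over QUIC
    server {
        listen 443 quic reuseport;   # HTTP/3 over QUIC
        listen 443 ssl;              # regular TLS fallback
        ssl_certificate     /path/to/cert.pem;
        ssl_certificate_key /path/to/cert.key;
    }

Point being, anyone exposed had deliberately compiled the module in and configured a quic listener - it's opt-in at two separate levels.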
We know a number of customers/users have the code in production, experimental or not. And that was part of the decision process. The security advisories we published do state the feature is experimental.
When in doubt, err on the side of doing the right thing for the users. I find that's the best approach. I don't consider a CVE a bad thing - it shouldn't be treated like a scarlet letter to be avoided. It is a unique identifier that makes it easy to talk about a specific issue and get the word out to customers/users so they can protect themselves. And that's a good thing.
The question I ask is "Why not assign a CVE?" You have to have a solid reason not to do it, because the default is to assign and disclose.
I don't think having the CVEs should reflect poorly on NGINX or Maxim. I'm sorry he feels the way he does, but I hold no ill will toward him and wish him success, seriously.
FWIW, in my project the main reason we don't issue security advisories for "unsupported" code ("experimental" or "tech preview") is to reduce the burden for our downstreams: many of our immediate downstreams are expected by their users to apply every single security patch, regardless of whether they even use the affected functionality. For cloud providers doing this across a massive fleet, this is a fair amount of work that's worth avoiding if we can.
On the other hand, since the definition of "supported" is specifically designed to help downstreams, if it were known that some bit of code was widely used in production, we'd be open to declaring it "security supported", regardless of whether we thought it was "finished" or not.
Recently I had to support a client who had a "no CVEs in a production deploy, ever" policy.
The stack included Linux, Java, Chromium, and MySQL. It took multiple person-years of playing whack-a-mole with dependencies to get it into production because we'd have to have conversations like:
Client: there's a CVE in this module
Us: that's not exploitable because it's behind a configuration option that we haven't enabled
Client: somebody could turn it on
Us: even if they somehow did and nobody noticed, they would have to stand up a server inside your VPC and connect to that
Client: well what if they did that?
Us: then they'd already have root and you are hosed
Client: but the CVE
Us:
So I definitely appreciate any vendor that tries to minimize CVEs.
I mean, yeah, but that's the way big bureaucratic organizations get sometimes. Bigger means more likely to have a brain-dead policy like this, but also more money... so, do you give up the money, or do you accommodate their policy while trying to minimize the cost?
There's tons of reasons why you wouldn't, but the core reason for this fork probably isn't really about the CVEs as such. It's either the final straw in a long line of disagreements, or the entire thing was handled so badly that he no longer wants to work with these people. Or most likely: a combination of both.
I once quit after a small disagreement because the owner cut off my explanation of why I built something the way I did with "I don't care, just do what I say". This was after he ignored the discussion on how to design it, and ignored requests for feedback while I was building it. And look, I don't mind redoing it even if I don't agree it's better, but I did put quite a lot of thought and effort into it and thought it worked very well. If you don't even want to spend 3 minutes listening to the reasons why it's like that, then kindly go fuck yourself.
It's not the disagreement as such that matters, it's the lack of basic respect.
As an outsider to this whole thing (having discovered the issue in this thread, like pretty much everyone else), the CVE rules simply say that you cannot assign a CVE to vulnerabilities in a product that is not publicly available or licensable. Experimental but publicly available features are still in scope.
This makes sense IMHO: experimental features may be buggy, but they may work in your limited use case. So you may be inclined to use them...except you don't know they expose you in a critical way.
Exactly - this very question came up. And pretty much everyone looked at me, as I'm the one who sits on every CVE.org working group (BTW, the CVE rules are currently being revised and are in the comment period for said revision), and I explained exactly that: just because it is experimental doesn't mean it is out of scope.
Also, something that keeps getting lost here: the CVE is NOT just against NGINX OSS, but also NGINX+, the commercial product. And the packaging, release, and messaging on that are a bit different. That had to be part of the decision process too. Since it is the same code, the CVE applies to both. This was not a rash decision or one made without a lot of discussion and consideration of multiple factors.
But one of our guiding principles that we literally ask ourselves during these things is "What is the right thing to do?" Meaning, what is the right thing for the users, first and foremost. That's part of the job, IMHO. Some vendors never disclose anything, but that's not how we operate. I've written a few articles on F5's DevCentral site about this - "Why We CVE" and "CVE: Who, What, Where, and When" are particularly on topic for this, I think.
All features have limited use cases, but experimental features may be buggy in all use cases, which is exactly what happened here. A CVE is uninformative there; defects are implied. Might as well create a CVE for every commit: "something happened, don't forget to redeploy".
That's a whole different discussion - which isn't as dramatic as it is being made out to be.
Other hats I wear (outside of my day job) include being on every (literally, every) CVE.org Working Group and being the newly elected CNA Liaison to the CVE Board. This has been a subject of discussion and things are a bit overblown right now, IMHO. Some of the initial communications were perhaps not as clear as they could have been. But it isn't going to be every kernel bug being a CVE - not every bug is a vuln.
I'm also one of the co-chairs for the upcoming VulnCon in Raleigh, NC. Just a plug. ;-)
Answering your original question posted to me a bit down-thread with this important context. The answer to "why not issue a CVE?" is the same reason that you don't call every random car burglary or graffiti an act of terrorism.
I agree the whole Linux CVE thing is a bit overblown, but as an outside observer, the new policy [1] does not read like they are super happy with CVE in general.
Too bad the CFP is closed for VulnCon; it might be fun to do an "Assume everything is wrong and you can't do anything the way you do it now - how do you build CVE 2.0" panel (also, that title is too long).
We got around 150 submissions for 30ish panel slots over three days, so we're good there. Schedule should be out soon.
The CVE program has grown and changed a lot over the past few years, and the rules are undergoing a major revision right now (currently in the comment period), taking in a lot of the feedback. And the rate of CNAs joining has been picking up rapidly as global interest in the program has increased.
No one thinks it is perfect, but that's why a lot of us are active in the working groups and trying to keep moving things forward.
I think you'd have to ask Maxim. My take is he felt experimental features should not get CVEs, which isn't how the program works. But that's just my take - I'm the primary representative for F5 to the CVE program and on the F5 SIRT, which handles our vuln disclosures.
I'm inclined to agree with your decision to create and publish CVEs for these, honestly. You were shipping code with a now-known vulnerability in it, even if it wasn't compiled in by default.
Incorrect. Features available to users still require a minimum, standard level of support. This is like the deceptive misnomer of "staging" and "test" environments provided to internal users that are used no differently than production - production in all but name.
If the feature is in the code that's downloaded, regardless of whether or not the build process enables it by default, the code is definitely being shipped.
This is an insane standard and attempting to adhere to it would mean that the CVE database, which is already mostly full of useless, irrelevant garbage, is now just the bug tracker for _every single open source project in the world_.
Why is it insane? The CVE goal was to track vulnerabilities that customers could be exposed to. It is used…in public, released versions. Why wouldn’t it be tracked?
It's in the published source code, as a usable feature, just flagged as experimental and not compiled by default. It's not like this is some random development branch. It's there, to be used en route to being stable. People will have downloaded a release tagged version of the source code, compiled that feature in and used it.
By what definition is that not shipped?
> I am actually completely shocked this needs to be explained. Legitimate insanity.
I've had an optional experimental feature marked with a CVE. It's not a big deal as it just lets folks know that they should upgrade if they are using that experimental feature in the affected versions.
Where did you get this info? It might be that the feature is actively being worked on and the DoS is a known issue which would be fixed before merge. Lots of projects have a contrib folder for random scripts and other things which wouldn't get merged without some review, but users are free to run the scripts if they want to. Experimental compile-time build flags are experimental by definition.
You're all also missing the fact that the vuln is also in the NGINX+ commercial product, not just OSS. Which has a different release model.
Being the same code, it'd be darn strange to have the CVE for one and not the other. We did ask ourselves that question and quickly concluded it made no sense.
"made no sense" from a narrow, CVE announcement perspective, but Maxim disagrees from another perspective:
> [F5] decided to interfere with security policy nginx
> uses for years, ignoring both the policy and developers’ position.
>
> That’s quite understandable: they own the project, and can do
> anything with it, including doing marketing-motivated actions,
> ignoring developers position and community. Still, this
> contradicts our agreement. And, more importantly, I no longer able
> to control which changes are made in nginx within F5, and no longer
> see nginx as a free and open source project developed and
> maintained for the public good.
I'm not sure what "contradicts our agreement" means but the simple interpretation is that he feels that F5 have become too dictatorial to the open source project.
The whole drama seems very short-sighted from F5's perspective. Maxim was working for you for free for years and you couldn't find some middle ground? I imagine there could have been some page on the free nginx project that listed CVEs that are in the enterprise product but that are not considered CVEs for the open source project given its stated policy of not creating CVEs for experimental features, or something like that.
To nuke the main developer, cause this rift in the community, and provoke a fork seems like a great microcosm of the general tendency of security leads to wield uncompromising power. I get it. Security is important. But security isn't everything, and these little fiefdoms that security leads build up are bureaucratic and annoying.
I hope you understand that these uncompromising policies actually reduce security in the end because 10X developers like Maxim will start to tend to avoid the security team and, in the worst case, hide stuff from their security team. I've seen this play out over and over in large corporations. In that sense, the F5 security team is no different.
But there should be a collaborative, two-way process between security and development. I'm sure security leads will say that they have that, but that's not what I find. Ultimately, if there's an escalation, executives will side with the security lead, so it is a de facto dictatorship even if security leads will tend to avoid the nuclear option. But when you take the nuclear option, as you did in this case, don't be surprised by the consequences.
OK - I need to make very clear that I'm speaking for myself and NOT F5, OK? OK.
Ask yourself: why does this matter? What is the big deal about having a CVE assigned? A CVE is just a unique identifier for a vulnerability so that everyone can refer to the same thing. It helps get word out to users who might be impacted, and we know there are sites using this feature in production - experimental or not. This wasn't dictating what could or could not go into the code - my understanding is the vuln wasn't even in his code, but came from another contributor. So, honestly, how does issuing the CVEs impact his work at all?
That's what I, personally, don't understand. At a functional level, this really has no impact on his work or him personally. This is just documentation of an existing issue and a fix which had to be made, and was being made, CVE or no CVE. And this is worth a fork?
What you're suggesting is the best thing to do is to allow one developer to dictate what should or should not be disclosed to the user base, based on their personal feelings and not an analysis of the impact of that vulnerability on said user base? And if they're inflexible in their view and no compromise can be reached then that's OK?
Sometimes there's just no good compromise to be reached and you end up with one person on one side, and a lot of other people on the other, and if that one person just refuses to budge then it is what it is. Rational people can agree to disagree. In my career there have been many times when I have disagreed with a decision, and I could either make peace with it or I could polish my resume. To me it seems a drastic step to take over something as frankly innocuous as assigning a CVE to an acknowledged vulnerability. Clearly he felt differently, and strongly, on the matter. Maybe he is just very strongly anti-CVE in general, or maybe he'd been feeling the itch to control his own destiny and this was just the spur it took to make the move.
His reasons are his own, and maybe he'll share more in time. I'm comfortable with my personal stance in the matter and the recommendations I made; they conform with my personal and professional morals and ethics. I'm sorry it came to this, but I would not change my recommendation in hindsight as I still feel we did the right thing.
Only time will tell what the results of that are. I think the world is big enough that it doesn't have to be a zero sum game.
I guess a vulnerability doesn’t count unless it’s default, lol. Just don’t make it default and you never have any responsibility, nor do those who use it or use a vendor version that has added it to their product.
>I guess a vulnerability doesn’t count unless it’s default lol.
It's still being tested. It's not complete. It's not released. It's not in the distribution. The number of people who have this feature in the binary AND enabled is smaller than the number of people who agree this should be a CVE.
CVEs are not for tracking bugs in unfinished features.
It IS in the code that anyone can compile to use or integrate into projects, as is the OSS way. Splitting hairs because it’s not in the default binary is absurd. Guess all the extra FFmpeg compilation flags and such shouldn’t count either.
(not explicitly asking you, MZMegaZone) Does anyone understand why a disagreement about this would be worth the extra work in forking the project?
I'm not very familiar with the implications, so it seems like a relatively fine hair to split - as though the trouble of dealing with these as CVEs would be less than the extra work of forking.
It probably wasn't. There's likely something else going on. Either Dounin had already decided to fork for other reasons, and the timing was coincidental, or there were a lot of reasons building up, and this was the final straw.
Or he's just a very strange man, and for some reason this pair of CVEs was oddly that important to him.
If you have more information, share it (I don’t think you do, as all you could say was “I’m sure”.). People actually involved sharing their side is a unique advantage of HN. Empty ad hominem attacks are not allowed here, and you have no right to tell anyone to “get out of here”.
Could you expand on your reasoning here? I'm genuinely curious what makes you react this way.
To me it seems like a simple disagreement over policies, plus the implications of the decision that was made and its impact on the agreed relationship.