Isn't "really dangerous" a bit hyperbolic here? I'm describing a process by which you figure out the actual risk of vulnerabilities before treating them further. You can find quotes in my comment to make it looks worse than it is,
but I was not expecting to find the process I described to be controversial.
But yes, it's true: I'm advocating a risk-based approach to such vulnerabilities rather than a compliance-based one. I guess which is better depends on organizational fit and personal taste.
I'm also not sure what to make of your example of "fixing container vulnerabilities" in this context of base image vulnerabilities. Both employees A and B would have to fix their set of vulnerabilities by either (a) updating the vulnerable base image or (b) switching to a different base image (sketched below). Fixing base image vulnerabilities is not the pick-and-choose versus all-or-nothing affair you seem to be describing.
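To make that concrete, here's roughly what both fixes look like at the Dockerfile level; the image names and tags are illustrative placeholders, not anything from your example:

```dockerfile
# (a) Update the vulnerable base image: bump the tag and rebuild.
# Old:  FROM ubuntu:22.04
FROM ubuntu:24.04

# (b) Or switch to a different base image entirely: change the FROM line
# and fix whatever breaks (package manager, libc, paths, etc.).
# FROM alpine:3.20
```

Either way it's the FROM line that changes for every affected image; there's no vulnerability-by-vulnerability cherry-picking.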
The assumption is that we're talking about big-enough orgs here. If you're running a small org, take a weekend, put your system in distroless containers, and be done with it. Container scanners don't add much value to small orgs to begin with; their whole value proposition is "you run so many containers that you can barely keep track of them, so here's a tool that helps you understand the true state of things." At that scale, process and compliance are practically a given.
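For the small-org case, a minimal sketch of what "put your system in distroless containers" might look like, assuming a Go service (the build path and binary name are illustrative):

```dockerfile
# Build stage: compile a static binary with the full toolchain available.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: distroless ships no shell or package manager,
# so a container scanner has almost nothing left to flag.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```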
> either (a) updating the vulnerable base image or (b) switching to a different base image
Non-trivial in practice. Hopefully your org has standardized on a single base image which the org maintains and takes responsibility for, so (b) is a non-starter. And if you could just update the base image fleet-wide overnight without issues, we wouldn't need containers in the first place; if you had tried that twenty years ago you'd have instantly caused rolling outages (these days you roll back after your canary dies, but that's moot: you still aren't de facto updating instantly). Containerization made it easier and safer to deploy services, but it didn't give you a "click here to magically update everything with no risk of regression" button. Vulnerable services have often been vulnerable for long periods of time, with complicated update paths, possibly needing in-house patches, etc.
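As a rough illustration of why the update path gets complicated: services typically pin an org-standard base and carry fixes on top of it. The registry, package, and version pin below are hypothetical:

```dockerfile
# Org-standard base, pinned so builds are reproducible; "just update"
# means every team rebuilds against the new pin and re-canaries.
FROM registry.example.com/base/debian:12.5

# In-house backport of a fix the base image doesn't carry yet
# (hypothetical package pin); this is exactly what has to be unwound
# or re-validated every time the base moves.
RUN apt-get update \
    && apt-get install -y --no-install-recommends libssl3=3.0.11-1~deb12u2+org1 \
    && rm -rf /var/lib/apt/lists/*
```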
If you are working in a more political organization (that is not a value judgment - it often comes with organizational scale), then other things influence your processes. I'm sorry, but that's not the perspective I take. That doesn't make my approach any more dangerous, though - I think it's an appropriate perspective and I'm happy I can take it.
The problem here is that your assessment of how a vulnerability might be leveraged or accessed is bound by your own team's limited knowledge.
The reality is, the attackers' knowledge and creativity are more or less unbounded (and unknown). So making that judgment call about what is a real risk, versus having zero tolerance, is a huge gamble IMHO, especially if your teams are not red team wizards.
Moreover, it seems you're stating that "no tolerance" should mean having no CVEs in container images. Does the CVE database really have that level of authority for people? It seems like the wrong thing to focus on even in these hypothetical no-tolerance situations; I'm really not sure what to tell you there.