This is a really dangerous way of thinking about container security.
First of all, if I give employee A a list of 500 "vulnerabilities" to fix, and employee B a list of three real, serious vulnerabilities to fix, employee A is much more likely to wait for the problem to go away, and employee B is much more likely to find the issue approachable and get it resolved. That sounds like two different situations until you realize that employee A was actually given the same three serious vulnerabilities plus 497 "unexploitable" ones, and just didn't know which was which because the serious ones got lost in the noise. You need to instill a zero-tolerance culture to make sure that you don't let serious vulnerabilities stick around. Compliance regimes acknowledge this, which is why they're ultimately reasonably effective at getting large, lumbering enterprises to be secure.
Second of all, while much of the open-source software inside your container may not be directly invokable without an exploit, the fact of the matter is that virtually no organization is subjecting every release of software produced in-house to rigorous security auditing. Yeah, your software needs to be pwned before those vulnerabilities matter, but Murphy would like to remind you that your software was being pwned while you wrote your comment, and that attackers exploited the other software in the container while you were reading mine. And maybe it makes a difference, for example, if your containerized service runs under a limited-privilege service user but a vulnerability in adjacently installed software permits the attacker to escalate to root within the container.
You're right that most orgs probably have lower-hanging fruit that offers better bang for the buck for improving their security posture. But adopting an attitude of "meh, not all CVEs are really CVEs" is irresponsible at best.
Isn't "really dangerous" a bit hyperbolic here? I'm describing a process by which you figure out the actual risk of vulnerabilities before treating them further. You can pull quotes from my comment to make it look worse than it is, but I wasn't expecting the process I described to be controversial.
But yes, it's true: I'm advocating a risk-based approach to such vulnerabilities rather than a compliance-based one. I guess which is better depends on organizational fit and personal taste.
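To make the risk-based process concrete, here's a minimal, purely illustrative sketch of triaging scanner findings. The finding structure and the `severity`/`reachable` fields are hypothetical; real scanner output (Trivy, Grype, etc.) is richer, and "reachability" in practice takes actual analysis to establish:

```python
# Illustrative only: a toy triage pass over container-scanner findings.
# The dict shape and the "reachable" field are hypothetical, not any
# real scanner's schema.

def triage(findings):
    """Split raw CVE findings into 'fix now' vs 'accepted risk' buckets."""
    fix_now, accepted = [], []
    for f in findings:
        # Treat a finding as urgent only if it is both severe and the
        # vulnerable code is actually reachable in this deployment.
        if f["severity"] in ("HIGH", "CRITICAL") and f["reachable"]:
            fix_now.append(f)
        else:
            accepted.append(f)
    return fix_now, accepted

findings = [
    {"cve": "CVE-2024-0001", "severity": "CRITICAL", "reachable": True},
    {"cve": "CVE-2024-0002", "severity": "HIGH", "reachable": False},
    {"cve": "CVE-2024-0003", "severity": "LOW", "reachable": True},
]

fix_now, accepted = triage(findings)
print([f["cve"] for f in fix_now])  # -> ['CVE-2024-0001']
```

The point of the sketch is only that "accepted" is an explicit, recorded decision rather than findings being silently ignored; the zero-tolerance camp would object to the `accepted` bucket existing at all.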
I'm also not sure what to make of your example of "fixing container vulnerabilities" in this context of base image vulnerabilities. Both employees A and B would have to fix their set of vulnerabilities by either (a) updating the vulnerable base image or (b) switching to a different base image. Fixing base image vulnerabilities is not the pick-and-choose versus all-or-nothing affair you seem to be describing.
The assumption is that we're talking about big-enough orgs here. If you're running a small org, take a weekend, put your system in distroless containers, and be done with it. Container scanners don't add much value to small orgs to begin with; their whole value proposition is "you run so many containers that you can barely keep track of them, so here's a tool that helps you understand the true state of things." Process and compliance are practically a given at that scale.
> either (a) updating the vulnerable base image or (b) switching to a different base image
Non-trivial in practice. Hopefully your org has standardized on a single base image that it maintains and takes responsibility for, so (b) is a non-starter. If you could just update the base image fleet-wide overnight without issues, then we wouldn't need containers in the first place; if you had tried that twenty years ago you'd instantly have caused rolling outages (now you'll roll back after your canary dies, but it's a moot point: you still aren't de facto instantly updating). Containerization made it easier and safer to deploy services, but it didn't give you a "click here to magically update everything with no risk of rejection" button. Vulnerable services have often been vulnerable for long periods of time, with complicated update paths, possibly needing in-house patches, etc.
If you are working in a more political organization (that is not a value judgment - it often comes with organizational scale), then other things influence your processes. I'm sorry, but that's not the perspective I take. That doesn't make my approach any more dangerous, though - I think it's an appropriate perspective, and I'm happy I can take it.
The problem here is that your assessment of how a vulnerability might be leveraged or accessed is bounded by your own team's limited knowledge.
The reality is, an attacker's knowledge and creativity are more or less unbounded (and unknown). So making a judgment call about what is a real risk, versus having zero tolerance, is a huge gamble IMHO, especially if your teams are not red-team wizards.
Moreover, it seems you're stating that "zero tolerance" should mean having no CVEs in container images. Does the CVE database really have that level of authority for people? It seems like the wrong thing to focus on even in these hypothetical zero-tolerance situations; I'm really not sure what to tell you there.