Unfortunately not. Judging security is actually really hard! CVSS is essentially a useless number in every sense, and the raw number of CVEs is not directly useful because different OSes get different amounts of attention. My hobby OS is not infinitely more secure than Linux just because it has no CVEs, for example. One might think that "mitigations" or "vendor approachability" could be used instead, but they are only part of the story and have their own problems: some software (not saying which…) is known to pile on "mitigations" with abandon that don't actually help. And, while rarer, there are vendors who respond appropriately to bug reports but never actually meaningfully improve security.
Really, the best way to judge security is to ask security researchers: they break the software, so they're at the forefront of what it takes to do this and what kinds of things the software is doing to keep them out. They'll tell you which things work and which don't, and how "serious" a vulnerability is (assuming it's not one they found, because they're not immune to bragging :P). In general, across modern OSes, there is no one OS that is more secure on every front: Windows has its own issues in subsystem X, Linux is broken in responding to Y, etc.
Practically speaking, it's impossible to compare these metrics between open and closed systems, for several reasons. In open systems, bug reporting is part of the culture. Moreover, since you are closer to the vendor, you can actually count on a bug being fixed, in trivial cases overnight. There are public bug tracking systems where the bug is almost like your baby: you talk to others about it, you argue in favor of it being fixed. Moreover, a good number of users actually fix bugs themselves. Sometimes it's enough that a bug report is published for you to patch your own system without asking anyone. And since the source code is available, for some people finding bugs is a kind of hobby: some do it for sport, some for learning or as part of their curriculum, some as part of their product development or audits.
With closed systems, many of these points no longer hold. Users are not accustomed to reporting bugs, and even when they do report one, they are often put off because they don't know what happens to it afterwards. They don't actively analyze the source code to find bugs. If a bug is found, they pray the patch is released soon. It is very rare for someone to have enough low-level skills to manually patch a binary based on a CVE description.
Given these differences, I think the only viable metric is the response time, i.e. the time between the bug being disclosed and fixed.
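To make the metric concrete: if you had (disclosure date, fix date) pairs for a vendor's bugs, computing response time is trivial. The data below is entirely hypothetical, just to show the shape of it; real inputs would come from a CVE feed or the vendor's own advisories. A minimal sketch:

```python
from datetime import date
from statistics import median

# Hypothetical (disclosure date, fix date) pairs for one vendor.
bugs = [
    (date(2023, 1, 10), date(2023, 1, 12)),
    (date(2023, 3, 4), date(2023, 3, 30)),
    (date(2023, 6, 1), date(2023, 6, 15)),
]

# Response time in days for each bug: fix date minus disclosure date.
response_days = [(fixed - disclosed).days for disclosed, fixed in bugs]

# Median rather than mean, so one pathological outlier doesn't dominate.
print(median(response_days))  # → 14
```

The median matters here: a vendor that fixes most bugs overnight but leaves one bug open for a year would look bad on the mean while the median still reflects typical behavior.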
Any data on that?
Disclaimer: I am not religious about operating systems. I am trying to learn here.