You could argue against the safety of Rust [...] with the same strawman.
Rust does a mapping of input symbols to output symbols. The input symbols have no 1:1 mapping with machine instructions on any one platform. All Rust does is attempt to ensure that the constraints of the input symbol language are preserved in the output symbol language.
Lastly, no compiler claims that its model is 100% consistent and will always work correctly in all circumstances.
Hell, C (in its older versions) doesn't even bother defining how many integer operations work.
What it and those kernels did was create an abstract model realistic enough to be useful.
This is impossible for them too. Yet for an attacker it is trivial to exploit these bugs.
Saying you've proved a system means the system is rigorously true beyond a shadow of a doubt. Given that you can't rigorously verify that the underlying state machine, or the Hoare logic, correctly maps 1:1 to the underlying machine, you haven't proved anything.
Okay, so let us pretend I prove P=NP for a collection of symbols which I state map 1:1 to the set of all possible algorithms in both P and NP space. But I don't prove that second statement. I'll get laughed out of my advisor's office.
What proving software correct does is effectively no different than my anecdote, from a rigorous perspective.
Done a damned good audit? Yes.
You are auditing software, not proving it.
This isn't a straw man, or a logical fallacy. It is a failure of logic. You prove A is true in system B, but never prove system B is correct... You've proved nothing.
"here are some examples of containers & virtualization that were either proved secure against a policy with realistic model, survived tons of pentesting, or both. Sandboxing done right is way more secure" - Nick P. paraphrased
"you can't prove your Hoare logic maps to existing hardware... Which defeats the exercise completely." (you)
"Most models do not cover all hardware, or inter-generational bugs, for example the recent Skylake 0xF000000000000000F bug."
"let us pretend I prove P=NP for a collection of symbols which I state map 1:1 to a set of all possible algorithms in both P and NP space."
It's like you're talking about a whole different conversation. Theo's claim is thoroughly refuted whether it was done by eye, hand proofs, model-checking, or full verification down to the gates. It was done by each of these, actually. The use of smaller systems with simpler APIs, plus assurance techniques that worked to reduce defects, got results no monolithic OS or UNIX got. Hence why people are recommending investigation or use of such approaches to sandbox applications, given those approaches might similarly result in fewer vulnerabilities or in less severe damage.
Then you counter by talking about how they aren't proven down to the code, in all circumstances, down to the hardware, P = NP, etc. What does all that have to do with the claim about reducing the effort required or the damage created using these approaches vs what building or relying on a whole OS would entail? You could throw all that same stuff at the alternative approaches. It would still be true. Whereas people acting on my recommendations instead of Theo's would provably reduce their attack surface and the number of hits they take, based on the results gotten in the past by the examples I gave. That's why your reply is a strawman to the discussion of whether sandboxing can reduce the impact of problems or not, with Theo's criticism being destroyed. Even by his own OS, which had way more kernel bugs needing mitigation than the stuff I cited, which ran such crap deprivileged or eliminated it at the design level where possible.