I didn't say formally verified software is bullshit. I said the marketing here is. FV software exists, but its benefits are very limited (basically, it's a debugging tool). Running VMs on x86 or ARM in the cloud has much more serious security problems than using a non-formally-verified hypervisor.
It's more than a debugging tool: it allows you to talk confidently about entire classes of vulnerabilities, rather than playing the traditional whack-a-mole of fixing bugs one at a time.
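To make that concrete, here's a minimal Lean sketch (the function and theorem are invented for illustration, not taken from any real verified system): a test checks the inputs you thought of, while a proof quantifies over all of them at once.

```lean
-- Toy example: clamp a value into the range [0, 255].
def clamp (n : Nat) : Nat := if n ≤ 255 then n else 255

-- One proof covers every possible input, so the entire class of
-- "clamp returned something out of range" bugs is ruled out, not
-- just the cases a test suite happened to exercise.
theorem clamp_le (n : Nat) : clamp n ≤ 255 := by
  unfold clamp
  split <;> omega
```

That's the difference in kind: you stop arguing about which bugs remain and start arguing about whether the specification was the right one.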
And it's a big enough deal that the rumor is that Amazon and Microsoft's type 1 hypervisors have been formally verified. I wouldn't be surprised if Google's is too.
It allows academics and other informed scientists to talk confidently about their design. Everybody else, like Linux admins and VM users, is still working with black-box software.
Sure, cloud vendors may get on board and start announcing how their solution is the best verified software out there. Did I mention bullshit marketing yet?
I don't understand. If I've proved the absence of runtime errors in some code, in addition to running classical tests, are you saying the software is not less likely to crash? Yes, there can be errors everywhere, but I don't get this complete dismissal of formal verification tools as 'just a debugging tool'.
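For example (a toy sketch of my own, not real verified code): in a language like Lean, the bounds check can become a compile-time proof obligation, so the out-of-bounds runtime error simply has no code path left.

```lean
-- Invented example: the caller must supply a proof `h` that the
-- index is in range, so this lookup cannot fail at runtime for
-- any input whatsoever.
def lookup (xs : Array String) (i : Nat) (h : i < xs.size) : String :=
  xs[i]'h
```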
It is less likely to crash once you've completed the proof than if you couldn't complete it, but that doesn't mean the benefit is worth much.
In general, all things being equal, software that is less likely to crash in production is an improvement, but: 1) all things are usually not equal, because verified software is hard to produce and typically much more restrictive and harder to modify; and 2) how often do standard industry hypervisors like Xen or KVM actually crash? Downtime or customer loss due to hypervisor crashes is a negligibly small problem.
User software should instead be built on the assumption that crashes can happen. That is a much more robust architecture than relying on proofs that nobody checked.
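A minimal sketch of what I mean (`call_service` is a hypothetical stand-in, not a real API): the client assumes the other side can crash and recovers, instead of trusting that it was proven not to.

```rust
use std::{thread, time::Duration};

// Hypothetical stand-in for a real RPC; here it always fails, as if
// the remote host had just crashed mid-call.
fn call_service() -> Result<String, std::io::Error> {
    Err(std::io::Error::new(
        std::io::ErrorKind::ConnectionReset,
        "host restarted",
    ))
}

// Design for crashes: retry with exponential backoff rather than
// assuming the service never goes down.
fn call_with_retry(max_attempts: u32) -> Result<String, std::io::Error> {
    let mut delay = Duration::from_millis(100);
    let mut last_err = None;
    for _ in 0..max_attempts {
        match call_service() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                thread::sleep(delay);
                delay *= 2; // back off before the next attempt
            }
        }
    }
    Err(last_err.expect("max_attempts must be at least 1"))
}

fn main() {
    println!("{:?}", call_with_retry(3));
}
```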
Though I agree about being 'crash-proof', we're talking security here, so a DoS or RCE might happen because of a buggy parser, crash or no crash. If I can create a parser that won't produce any error whatsoever and always returns something (even if it's just 'malformed message'), haven't I increased the security of my external interfaces compared to, say, a shoddy parser full of UB and missing checks, written on the grounds that 'crashes happen anyway'?
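Concretely, something like this Rust sketch (the wire format is invented): the parser is total, uses only bounds-checked operations, and maps every hostile input to an explicit 'malformed' answer instead of crashing or invoking UB.

```rust
// Invented wire format: [0x01 tag][4-byte big-endian sequence number].
#[derive(Debug)]
enum Parsed {
    Ping { seq: u32 },
    Malformed(&'static str),
}

// Total parser: every input yields a value -- never a panic, never UB.
fn parse(input: &[u8]) -> Parsed {
    let Some((&tag, rest)) = input.split_first() else {
        return Parsed::Malformed("empty message");
    };
    if tag != 0x01 {
        return Parsed::Malformed("unknown tag");
    }
    let Ok(seq_bytes) = <[u8; 4]>::try_from(rest) else {
        return Parsed::Malformed("bad length");
    };
    Parsed::Ping { seq: u32::from_be_bytes(seq_bytes) }
}

fn main() {
    println!("{:?}", parse(&[0x01, 0, 0, 0, 7])); // Ping { seq: 7 }
    println!("{:?}", parse(b"garbage"));          // Malformed("unknown tag")
}
```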
@madsbuch I meant classical "Dijkstra-style" proofs. I know about automated proofs; they are a great accomplishment, and the software they produce may be good. But I wouldn't trust those automated proofs any more than I would trust human-made proofs. Errors in intent and design can still happen, so I still want to test the resulting software in action and keep the option to modify it easily.