Does anybody know how useful this metric is?
I have read NASA uses C, C++, Java, and Ada, and the last three carry a lot of boilerplate. Heavily commented C can be verbose too. I realize there is probably a lot of review, commenting, and built-in redundancy, and that adds to the overall LOC as well.
An MS bug might mean Excel crashes or a bad business decision gets made, unless somebody is using MS software for more critical endpoints. With NASA, a bug can be in guidance systems and other low-level routines running on radiation-hardened electronics.
The Rosetta lander used Forth onboard, so there were probably far fewer LOC to make mistakes in. That's just one way to approach bug-free programming, vs. Ada's or Java's verbosity and checks.
I write J code, so one error usually amounts to 1 in 5 or 10 LOC ;) But then again, I can see all my code at a glance, and I program iteratively in the REPL.
J, of course, has a different problem: only a handful of people can even parse it, much less opine on its correctness.
There is a table in the paper that shows the math formula, then the Haskell, then the J. Pretty interesting comparison.
To me, if you are not using a prover, it mainly comes down to going over 10K LOC 1 to 3 times vs. 100 LOC 10 times; that is 30K LOC reviewed vs. 1K LOC reviewed for errors and correctness.
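The review-effort arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope model (effort = LOC × review passes, ignoring that dense code takes longer per line to read); the numbers are the ones from the comment:

```python
# Rough review effort: total lines-read = code size * number of passes.
# This deliberately ignores per-line reading cost, which is higher for
# dense languages like J.

def lines_reviewed(loc, passes):
    return loc * passes

verbose = lines_reviewed(10_000, 3)  # 10K LOC gone over ~3 times
terse = lines_reviewed(100, 10)      # 100 LOC gone over 10 times

print(verbose)  # 30000
print(terse)    # 1000
```

Even with three times as many passes per line, the terse program is reviewed end-to-end for a thirtieth of the reading effort.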
An Ada hello world is 5 LOC vs. 1 for a lot of other languages.
Java is not too different.
J and Python hello worlds are 1 LOC, and typically typed out directly rather than pasted from a template.
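For a concrete sense of the boilerplate gap, here is the canonical Java hello world: five lines of scaffolding around one statement, where the Python equivalent is the single line `print("Hello, world")` and the J equivalent is `echo 'Hello, world'`:

```java
// Canonical Java hello world: a class and a main method are required
// before anything can be printed. None of this scaffolding carries
// program logic, but it all counts toward LOC.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, world");
    }
}
```

The scaffolding is usually template text nobody gets wrong, which is one reason raw LOC comparisons across languages are slippery.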