I’m a co-author of the paper that the OP’s post discusses. A few clarifications:
- In safety- or security-critical systems, you may want to define a small trusted base, or to be able to audit the system to be confident that certain failures won’t occur (https://groups.csail.mit.edu/sdg/pubs/2009/dnj-cacm-4-09.pdf). Reasoning about such things requires being able to say when one part depends on another. Existing definitions of dependency are too vague or incomplete to support such reasoning, let alone to serve as a basis for tools.
- The puzzles are our attempt to distill issues that arise in practice into the simplest possible examples, so if they sound obvious, so much the better. The fallback case is important in critical systems. Suppose you build a component X and you’ve reasoned that it meets a spec S required by a client C. But then you’re not completely confident in your reasoning, so you build a simpler fallback version X’ that you switch to if X fails to meet some property at runtime (there’s a small code sketch of this scenario at the end of this comment). Presumably, when you reason about C, you’ll want to account for X’ as well. This is a basic engineering issue, not a linguistic nicety.
- The problem is to account for systems the way they’re actually built, not to propose a new way to build them. Even if your code were built with objects, the Liskov Substitution Principle wouldn’t be enough, because it only addresses subtyping. Global variables, for example, can’t be ignored. They were pervasive in the car code that JPL analyzed as part of a study I was involved in (https://nap.nationalacademies.org/catalog/13342/trb-special-...), and they arise implicitly in OO code whenever you have static features (also sketched at the end of this comment). Another example: if you’re designing a Mars rover, you’d want to account for failures in a battery maintenance module even when reasoning about modules that make no calls to it. Interfaces can’t account for this kind of thing.
- @layer8 offers a nice formulation of dependence in terms of reasoning and properties. This was a starting point for our work: I’d suggested a model like this in a 2002 paper (https://groups.csail.mit.edu/sdg/pubs/2002/monterey.pdf), and Eunsuk Kang and I later proposed a diagrammatic notation for it (https://eskang.github.io/assets/papers/mj_workshop_09.pdf). One key limitation of this approach (as @csours notes) is that it doesn’t let you account explicitly for all the assumptions you typically make in such reasoning. It’s not clear, for example, how such an approach could account for Log4Shell.
- A historical note: I first learned about making dependences explicit from Barbara Liskov and John Guttag while TAing their class (https://mitpress.mit.edu/9780262121125/abstraction-and-speci...). Even back then, we realized that the dependence model was only an approximation, and that it couldn’t be derived from the code. Long after Barbara developed her substitution principle, when we taught the same class together (in 1997), we continued to use dependence diagrams, because they address a different problem.
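
To make the fallback point concrete, here’s a minimal Java sketch. All the names (Service standing in for the spec S, X, XPrime, C) and the runtime check are hypothetical; the point is just the shape of the dependence, not any particular implementation.

    // Hypothetical sketch of the fallback scenario: C relies on X for spec S,
    // but switches to the simpler XPrime if a runtime check on X's result fails.
    interface Service {                      // stands in for the spec S
        int compute(int input);
    }

    class X implements Service {             // the full component
        public int compute(int input) { return input * 2; }
    }

    class XPrime implements Service {        // the simpler, more trusted fallback
        public int compute(int input) { return input + input; }
    }

    class C {                                // the client
        private final Service primary = new X();
        private final Service fallback = new XPrime();

        int run(int input) {
            int result = primary.compute(input);
            if (result < 0) {                // stand-in for "X fails to meet some property"
                return fallback.compute(input);  // C's behavior now depends on XPrime too
            }
            return result;
        }
    }

Any reasoning about C has to account for XPrime as well as X, even though C was written against S.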
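
And for the global-variable point, a hypothetical sketch of two modules that never call each other yet are still coupled through a static field (again, all names are made up):

    // ModuleA and ModuleB make no calls to each other, but ModuleB's behavior
    // depends on what ModuleA writes into shared static state.
    class SharedState {
        static int mode = 0;                 // a global variable via a static feature
    }

    class ModuleA {
        void configure() { SharedState.mode = 1; }    // writes the global
    }

    class ModuleB {
        int step() {
            // Reads the global; nothing here mentions ModuleA, so a call graph
            // or interface-level view won't show the dependence.
            return SharedState.mode == 1 ? 10 : 20;
        }
    }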