Don't waste your time. These are neither puzzles nor convincing. What they do is exploit the ambiguity of the word "dependence": illustrations loosely drawn from tech you'd recognize, each teasing out one of its context-dependent definitions.
"If a depends on b, but falls back to c if b isn't available, does a really depend on b?"
Come on, we're not context-free parsers. Ambiguity and subtlety may be an annoying part of communication but they aren't all that important to focus on like this.
A lot of programmers today do read some bad summary of SOLID or GoF or whatever and then get confused or angry when things don't follow the models they're aware of. (It's not a recent phenomenon; you could see it with MVC 15-20 years ago too, but a smaller percentage of programmers today have the fundamental grounding to break out of it.)
In particular I see new programmers struggle with the "shared format" puzzle (they want an answer, and there isn't one), and Java developers especially are deep on the wrong side of the "inversion puzzle", continually mistaking DI for loose coupling and sound architecture.
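To make the DI point concrete (all names hypothetical, my own sketch, not from the article): injecting a dependency doesn't by itself decouple anything if the client still names a concrete class.

    // Injected, but still tightly coupled: the client names a concrete class
    // and relies on its specific behavior.
    class PostgresReportStore {
        void save(String report) { /* talks to one specific database */ }
    }

    class ReportService {
        private final PostgresReportStore store; // concrete type, not a contract
        ReportService(PostgresReportStore store) { this.store = store; }
        void publish(String report) { store.save(report); }
    }

    // Coupling only loosens when the client depends on a role interface whose
    // contract it can reason about regardless of which implementation shows up.
    interface ReportStore {
        void save(String report);
    }

    class DecoupledReportService {
        private final ReportStore store;
        DecoupledReportService(ReportStore store) { this.store = store; }
        void publish(String report) { store.save(report); }
    }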
Does this article help? I'm not really sure. But does elevating shallow, mostly-wrong lessons like Clean Code hurt, and do we need to address that somehow? Absolutely.
Yes exactly. There are various ways in which something can depend on something else. This article just conflates the different types of dependency to make confusing questions.
All this did is convince me that I understand dependency better than the author!
I made it through 2/3 of the article and could not figure out what problem the author is trying to solve or what he is trying to achieve. It seems like someone from a consulting firm, who does not write software, making a presentation about how to write software.
It’s common for people to use a term they think is straightforward (dependence) when it isn’t, and then make arguments and decisions that hinge on that term. This article shows how “dependence” is really three major things, each of which can be broken down even further.
This teasing apart can be really useful. Sometimes you end up debating “should A depend on B”, which slides into “does A depend on B”, and from there all the way to “this is what dependency means / means to me”. That last step is a terrible way out of the argument. A better move is something like “we’ll call your dependency definition type 1 and mine type 2”, at which point you realize there’s more common ground than you thought and the disagreement is quite narrow. Maybe the disagreement even becomes a matter of fact and you can go find out! I find that having a concept already broken down by someone else helps this process.
The meaning of dependence is context-specific. Arguing about what it means in the abstract does not make sense. There can be a build dependence, a deployment dependence, etc.
The beauty of software is that you can pretend the rest of the world doesn't exist while you are writing a single function. You can pretend that data is passed in, transformed, and passed out. You may pretend there are no side effects, or something like that.
But when you pretend this way, you make many assumptions. It is good to have a shared vocabulary for these assumptions.
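As a tiny made-up illustration of how easily that pretense hides an assumption (hypothetical code, not from the article):

    import java.util.List;
    import java.util.Map;

    class TransformSketch {
        // The "pretend" version: data in, data out, nothing else to reason about.
        static List<String> normalize(List<String> names) {
            return names.stream().map(String::trim).toList();
        }

        // Same shape, but it also mutates shared state, so callers now depend
        // on more than the inputs and the return value.
        static List<String> normalizeAndRecord(List<String> names, Map<String, Integer> seen) {
            for (String n : names) seen.merge(n.trim(), 1, Integer::sum);
            return names.stream().map(String::trim).toList();
        }
    }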
1) Framework: if your scheduler logic is Liskov-substitutable with another scheduler, then yes, you don't depend on the tasks.
2) Shared format: A writes to a file, B reads it. Again, the file is a transport mechanism that can be abstracted behind a Liskov-substitutable interface, and serialization/marshalling can abstract the format.
3) Modifying a global variable? What a dumb question.
4) Crash puzzles: exception walls solve these, as do standard error codes/exceptions in the Liskov interface.
5) These are solved by Liskov substitution, by interfacing with impls that have a fallback (sketch at the end of this comment). Or read a book on a hundred and one patterns to solve it.
6) Errors are part of interfaces. Dumb example.
7) Cyclic dependencies, again, don't matter if there are Liskov boundaries. Who cares what the impls are using, as long as the Liskov boundaries are respected and defined? Exception: see next paragraph.
The only thing this is significant for is when two incompatible versions of the same library pollute the imports/classpath/whatever. THAT is a real problem when dealing with lots of library imports and their downstream/transitive dependencies, regardless of language, in any mature programming situation. But... there's a reason those are called "dependencies", so those are... dependencies.
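A minimal sketch of what I mean by a Liskov boundary with a fallback behind it (hypothetical names, not from the article):

    // The client assumes only this contract.
    interface Scheduler {
        void schedule(Runnable task);
    }

    class PriorityScheduler implements Scheduler {
        public void schedule(Runnable task) { task.run(); /* preferred impl */ }
    }

    class FifoScheduler implements Scheduler {
        public void schedule(Runnable task) { task.run(); /* simpler fallback impl */ }
    }

    // Whether the primary or the fallback ends up behind the boundary is
    // invisible to the client, as long as both honor the Scheduler contract.
    class Client {
        private final Scheduler scheduler;
        Client(Scheduler primary, Scheduler fallback, boolean primaryHealthy) {
            this.scheduler = primaryHealthy ? primary : fallback;
        }
        void submit(Runnable task) { scheduler.schedule(task); }
    }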
I’d say that A depends on B with respect to a property P(A) of A if a proof (logical reasoning) of why P(A) holds depends on an assumption about B; that is, if some property Q(B) of B is a necessary logical premise of the proof.
The article in its final definition of “dependence” talks about causation, but logical deduction (proof steps) seems more fundamental to me. It also talks about permissible changes, but the assumption that only permissible changes are taking place is just that, an assumption about the dependency.
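Roughly, in symbols (my own notation; Dep is just a placeholder relation):

    \mathrm{Dep}_{P}(A, B) \;\iff\; \exists Q \,.\; Q(B) \text{ is a necessary premise of the proof of } P(A)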
Every supposed puzzle in here boils down to the same answer: the dependency can be rephrased as a dependency on an interface. Sometimes the interface is just hidden or a bit abstract.
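For example, the "shared format" puzzle rephrased that way (hypothetical names, my own sketch): the writer and the reader never mention each other, but both depend on the same format contract.

    // Neither component references the other; both depend on the format
    // contract, made explicit here as an interface.
    interface RecordFormat {
        byte[] encode(String record);
        String decode(byte[] bytes);
    }

    class ProducerA {
        private final RecordFormat format;
        ProducerA(RecordFormat format) { this.format = format; }
        byte[] emit(String record) { return format.encode(record); }
    }

    class ConsumerB {
        private final RecordFormat format;
        ConsumerB(RecordFormat format) { this.format = format; }
        String consume(byte[] bytes) { return format.decode(bytes); }
    }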
I’m a co-author of the paper that the OP’s post discusses. A few clarifications:
- In safety- or security-critical systems, you may want to define a small trusted base, or to be able to audit the system to be confident that certain failures won’t occur (https://groups.csail.mit.edu/sdg/pubs/2009/dnj-cacm-4-09.pdf). Reasoning about such things requires being able to say when one part depends on another. Existing definitions of dependency are too vague or incomplete to support such reasoning, let alone to build tools.
- The puzzles are our attempt to distill issues that arise in practice to the simplest possible examples, so if they sound obvious, so much the better. The fallback case is important in critical systems. Suppose you build a component X and you’ve reasoned that it meets a spec S required by a client C. But then you’re not completely confident in your reasoning, so you build a simpler fallback version X’ that you switch to if X fails to meet some property at runtime. Presumably when you reason about C you’ll want to account for X’ as well. This is a basic engineering issue and not a linguistic nicety.
- The problem is to account for systems the way they’re actually built, not to propose a new way to build them. Even if your code was built with objects, the Liskov Substitution Principle wouldn’t be enough because it only addresses subtyping. Global variables, for example, can’t be ignored. They were pervasive in the car code that JPL analyzed as part of a study I was involved in (https://nap.nationalacademies.org/catalog/13342/trb-special-...), and they arise implicitly in OO code whenever you have static features (see the sketch at the end of this comment). Another example: if you’re designing a Mars rover, you’d want to account for failures in a battery maintenance module even for modules that make no calls to it. Interfaces can’t account for this kind of thing.
- @layer8 offers a nice formulation of dependence in terms of reasoning and properties. This was a starting point for our work, and I’d previously suggested a model like this in a 2002 paper (https://groups.csail.mit.edu/sdg/pubs/2002/monterey.pdf) and Eunsuk Kang and I suggested a diagrammatic notation for it in a later paper (https://eskang.github.io/assets/papers/mj_workshop_09.pdf). One key limitation of this approach is (as @csours notes) that it doesn’t let you account explicitly for all the assumptions you typically make in such reasoning. It’s not clear, for example, how such an approach could account for Log4Shell.
- A historical note: I first learned about making dependences explicit from Barbara Liskov and John Guttag from TAing their class (https://mitpress.mit.edu/9780262121125/abstraction-and-speci...). Even back then, we realized that the dependence model was only an approximation, and that it couldn’t be derived from the code. Long after Barbara developed her substitution principle, when we taught the same class together (in 1997) we continued to use dependence diagrams, because they address a different problem.
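To make the “static features” point above concrete, a minimal hypothetical sketch (not from the paper): neither class calls the other, yet one determines the other’s behavior through shared static state, which is exactly the kind of dependence an interface-only view misses.

    class SystemState {
        static int batteryLevel = 100; // effectively a global variable
    }

    class PowerManager {
        void drain(int amount) { SystemState.batteryLevel -= amount; }
    }

    class TelemetryReporter {
        // No call to PowerManager anywhere, but what it does to the shared
        // static state determines this result.
        String report() {
            return SystemState.batteryLevel > 20 ? "OK" : "LOW POWER";
        }
    }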
"If a depends on b, but falls back to c if b isn't available, does a really depend on b?"
Come on, we're not context-free parsers. Ambiguity and subtlety may be an annoying part of communication but they aren't all that important to focus on like this.