Right, I wasn't advancing the propositional definition as the only one, merely as one that sufficiently captures the generality, because the article's current definition of types-as-sets is insufficiently general.
As I stated in a comment on that gist, what types denote varies from one type system to another: https://gist.github.com/garybernhardt/122909856b570c5c457a6c...
The idea that types express propositions in first- or second-order logic about a program one is writing is sufficiently general for every programmer to understand the proper scope of type checking. Types-as-sets is simply false as an introductory concept.
> where every programmer in existence is using first-order and sometimes second-order logic.
I'm not so sure about this one. Perhaps by “programmer” you mean “every Haskell programmer”? Even then I'm still not sure. Are you counting “let's approximate `forall x. P(x)` with the large but finite conjunction `P(a) /\ P(b) /\ P(c) ...`” as “using first-order logic”?
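To make the point concrete, here's a minimal Python sketch of that approximation (the property and the sample set are invented for illustration): checking a finite conjunction of instances stands in for the universal claim, much like property-based testing does.

```python
# Approximating "forall x. P(x)" with the finite conjunction
# P(a) /\ P(b) /\ P(c) /\ ... over a sampled set of inputs.
# The property (reversing a list twice is the identity) and the
# samples below are illustrative choices, not from the thread.

def prop_reverse_twice_is_identity(xs):
    return list(reversed(list(reversed(xs)))) == xs

samples = [[], [1], [1, 2, 3], ["a", "b"], list(range(100))]

# The "large but finite conjunction": evidence for, but not a
# proof of, the universally quantified statement.
finite_conjunction = all(prop_reverse_twice_is_identity(xs) for xs in samples)
```

Whether that counts as "using first-order logic" is exactly the question.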
Haskell programmers also use higher-order logic, which is now common in Java and C# as well.
I wasn't talking about types either. Most programmers I've talked to simply can't use formal logic.
Perhaps they have trouble abstracting the semantics from their preferred language and/or syntax, but that doesn't mean they don't grasp its meaning.
Subtyping = "A <: B, therefore I can start with an A, forget the details irrelevant to B, and just use it as a B."
Inheritance = "A inherits B, if I try to actually use it as a B, it promises to work. However, it might actually blow up in my face, because it was keeping its extra details secret from me and promising they wouldn't blow up."
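A tiny Python sketch of that hazard (the classes are the classic invented example, not anything from the thread): `Square` inherits `Rectangle` and type-checks as one, but its hidden extra invariant blows up code written against `Rectangle`'s contract.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h


class Square(Rectangle):
    """Inherits Rectangle, but keeps an extra detail secret: w == h."""

    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        # The secret that blows up in the caller's face: setting
        # the width silently changes the height too.
        self.w = self.h = w


def stretch(rect):
    # Written against Rectangle's contract: changing the width
    # should leave the height alone.
    rect.set_width(10)
    return rect.area()

stretch(Rectangle(2, 5))  # 50, as the contract promises
stretch(Square(5))        # 100: the inherited "Rectangle" broke its promise
```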
Too harsh. Logic is hard for humans. Evolution has trained us to take lots of mental shortcuts, not to be perfect calculating machines. Even the most technically competent people rely on their intuition. What differentiates them from the rest of us is that they have trained their intuition to make it more reliable.
It's the job of a language designer to arrange things so that intuition doesn't mislead programmers regarding the meaning of language features. Before proposing a new feature, the designer must anticipate how programmers will think about code that uses it, and there had better not be a mismatch between what programmers think the code means and what it actually means.
> A inherits B, if I try to actually use it as a B, it promises to work.
There's nothing wrong with implementation inheritance per se. It addresses a very common use case: you have a thing Foo, and you want to produce another thing Bar that's similar to Foo, with minor changes here and there. This describes accurately how many things work in real life. However, you can't assume that Bar can always be used where Foo is expected. Those “minor changes here and there” might have rendered Bar incompatible with Foo. Hence, “inheritance is not subtyping”. As stated above, it's the job of a language designer to anticipate this.
Other than the lack of mathematical elegance, that is. But then, few things in life are mathematically elegant.
Yes, it is their job: for example, by only allowing a subclass to be used as a subtype when it follows rules such as:
(0) Not directly mutating superclass fields. (Reading them is fine.)
(1) Not overriding non-abstract methods of the superclass.
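A sketch of a subclass obeying both rules, using Python's `abc` module (class and method names are invented): the subclass only fills in the abstract hook and only reads the superclass field.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    def __init__(self, name):
        self._name = name  # superclass field

    @abstractmethod
    def area(self):
        ...

    def describe(self):
        # Non-abstract: under rule (1), subclasses must not override this.
        return f"{self._name}: area {self.area()}"

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius

    # Rule (1): we only implement the abstract hook, never describe().
    def area(self):
        return 3.141592653589793 * self.radius ** 2

    def label(self):
        # Rule (0): reading the superclass field _name is fine;
        # we never mutate it directly.
        return self._name.upper()
```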
One could also have a non-subclassing mechanism of implementation sharing, which might be declared something like this:
class Related does (method1, method2) like Base;
class Related does all except (method3) like Base;
Disagree. C++ added private inheritance for a good reason. There are scenarios where you want to inherit part of an implementation and override some of the behaviour, reusing code without taking on any kind of subtyping relationship.
There are also scenarios where you want to declare a subtype without inheriting any parent behaviour.
So ultimately subtyping is distinct from subclassing.
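Python can sketch both halves of that distinction (all names here are invented): `typing.Protocol` gives subtyping with no subclassing, and delegation gives implementation reuse with no subtyping, playing roughly the role C++ private inheritance plays.

```python
from typing import Protocol

# Subtyping without subclassing: Duck satisfies Quacker
# structurally, inheriting nothing from it.
class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

def greet(q: Quacker) -> str:
    return q.quack()

# Implementation reuse without subtyping: Logger borrows Buffer's
# machinery by delegation, but is not a Buffer and can't be used as one.
class Buffer:
    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)

class Logger:
    def __init__(self):
        self._buf = Buffer()   # has-a, not is-a

    def log(self, msg):
        self._buf.push(msg)

    def dump(self):
        return list(self._buf.items)
```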
I'd be fine with something more compact for the default case of "steal all behavior except what is explicitly overridden", such as:
class Related like Base;
(1) Do you think there would still be a use case for “extends”? If not, I'd rather drop it as well.
The most obvious is that it makes intent explicit in the case of multiple similarity (I prefer "similarity" to "inheritance" for non-subtyping implementation reuse.)
It might in some cases make sense to do this at the interface level rather than (or in addition to) the single-method level; there's probably a lot to work out in the ergonomics of non-subtyping implementation reuse.
> Do you think there would still be a use case for “extends”?
Sure, classical subclassing (subtyping with implementation sharing) remains an important use case. It might be desirable to statically verify certain guarantees (some have been suggested in at least one subthread of this thread by another poster) to assure that the subtyping is reasonable, but even without such verification the use case remains important.
Keep in mind that similarity is explicitly intended to be a quick-n-dirty form of code reuse. If the programmer has enough time to refactor the code, he or she should probably factor out the common parts, and then use proper (subtyping) inheritance. In view of this, I don't think making similarity overly complicated is a good idea.
> Sure, classical subclassing (...) remains an important use case. (...) even without such verification the use case remains important.
It's precisely the lack of such verification that destroys the guarantees you can extract from Liskov's substitution principle.
I do see the attraction of the idea that all implementation sharing ultimately reflects something that can be expressed through a subtyping hierarchy adhering to the LSP, but I'm not 100% convinced that it is the case. Absent certainty on that point, I'd like to have a model of similarity that is workable when viewed as an ultimate, rather than interim, model.
Also, I think that this kind of explicitness (with similar syntax) is desirable for subtyping inheritance in languages that support multiple inheritance. That keeps similarity in line with inheritance, with the distinction only in the relationship being expressed: so, while it may make similarity more complex, it keeps the whole language simpler, but explicit.
> It's precisely the lack of such verification that destroys the guarantees you can extract from Liskov's substitution principle.
I agree that static verification of sanity in subtyping inheritance reduces the problem of LSP violations (as, frankly, does non-static verification, such as testing frameworks using subtype relationships to automatically apply tests for supertypes to subtypes.)
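One common way to get that non-static verification in Python (all names invented): write the supertype's contract as a test mixin, and have each subtype's test case inherit it, so every subtype is automatically checked against the supertype's guarantees.

```python
import unittest

class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

class CountingStack(Stack):
    """Subtype that additionally counts pushes."""
    def __init__(self):
        super().__init__()
        self.pushes = 0

    def push(self, x):
        self.pushes += 1
        super().push(x)

class StackContract:
    """The supertype's guarantees, inherited by every subtype's test case."""
    def make(self):
        raise NotImplementedError

    def test_push_then_pop_round_trips(self):
        s = self.make()
        s.push(42)
        assert s.pop() == 42

class TestStack(StackContract, unittest.TestCase):
    def make(self):
        return Stack()

# The subtype gets the supertype's tests for free; an LSP-violating
# override of push() or pop() would fail them.
class TestCountingStack(StackContract, unittest.TestCase):
    def make(self):
        return CountingStack()
```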
The degree and type of verification (as well as other details like cherry-picking and support for multiple similarity) that is appropriate for this sort of thing really depends on the language. Extending Perl or Python (dynamic languages with multiple inheritance) to support similarity alongside inheritance will probably favor different tradeoffs than extending Java (static, with single inheritance).
The LSP itself is never violated in a type-safe language. What's violated is the guarantees that you expected the LSP to buy you. For instance, Java promises nothing about the meaning of non-final methods. If a programmer assumes such guarantees, the problem is with the programmer, not Java. If he wants such guarantees to hold, he has to use a different language.
That's probably too extreme. Subtyping should just be explicitly declared rather than implicit when subclassing.
The pernicious conflation of subtyping and subclassing in common OO programming languages just compounds the confusion.