
No one except mathematicians and type theorists is using HoTT, whereas every programmer in existence is using first-order and sometimes second-order logic. I think it should be obvious which is more appropriate if this is supposed to be an introduction.

The idea that types express propositions in first- or second-order logic about a program one is writing is sufficiently general for every programmer to understand the proper scope of type checking. Types-as-sets is simply false as an introductory concept.




> No one except mathematicians and type theorists is using HoTT,

Right.

> whereas every programmer in existence is using first-order and sometimes second-order logic.

I'm not so sure about this one. Perhaps by “programmer” you mean “every Haskell programmer”? Even then I'm still not sure. Are you counting “let's approximate `forall x. P(x)` with the large but finite conjunction `P(a) /\ P(b) /\ P(c) ...`” as “using first-order logic”?


I mean pretty much every programmer, even C programmers. Simple business logic is rife with such propositions. Note that I didn't say they use first-order logic to encode invariants in types or anything, but they obviously use it every day to write programs.

Haskell programmers also use higher-order logic, which is also common now in Java and C#.
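
To make that concrete, here's a small Java sketch (hypothetical class and data, mine rather than anything from the thread) of everyday code that is first-order logic in disguise:

  import java.util.List;

  public class EverydayLogic {
      // Hypothetical domain record, just for illustration.
      record User(String name, boolean active, int age) {}

      public static void main(String[] args) {
          List<User> users = List.of(
              new User("alice", true, 30),
              new User("bob", true, 17));

          // forall u. active(u)
          boolean allActive = users.stream().allMatch(User::active);

          // exists u. age(u) >= 18 /\ active(u)
          boolean adultActive = users.stream()
              .anyMatch(u -> u.age() >= 18 && u.active());

          // not (exists u. age(u) < 18), i.e. forall u. age(u) >= 18
          boolean noMinors = users.stream().noneMatch(u -> u.age() < 18);

          System.out.println(allActive + " " + adultActive + " " + noMinors);
      }
  }

allMatch/anyMatch/noneMatch are just bounded forall/exists/not-exists over a finite domain, which is exactly the approximation mentioned upthread.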


> Note that I didn't say they use first-order logic to encode invariants in types or anything, but they obviously use it every day to write programs.

I wasn't talking about types either. Most programmers I've talked to simply can't use formal logic.


Hmm, I don't think you're being very charitable. That these programmers can write working programs suggests they sufficiently understand disjunction, conjunction, negation, conditionals, and quantification.

Perhaps they have trouble abstracting the semantics away from their preferred language and/or syntax, but that doesn't mean they don't grasp the meaning of those connectives.


I offer this thread as evidence: https://news.ycombinator.com/item?id=12342583


Hey, it's not the fault of the programming masses that some evil motherfucker conflated implementation extension (inheritance) and subtyping.

Subtyping = "A <: B, therefore I can start with an A, forget the details irrelevant to B, and just use it as a B."

Inheritance = "A inherits B: if I try to actually use it as a B, it promises to work. However, it might actually blow up in my face, because it was keeping its extra details secret from me and promising they wouldn't blow up."
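
A minimal Java sketch of the conflation (the classic rectangle/square example, with hypothetical classes):

  class Rectangle {
      protected int w, h;
      void setWidth(int w)  { this.w = w; }   // implied contract: height unchanged
      void setHeight(int h) { this.h = h; }   // implied contract: width unchanged
      int area() { return w * h; }
  }

  // Inheritance: Square reuses Rectangle's implementation, and the
  // language silently grants subtyping along with it.
  class Square extends Rectangle {
      @Override void setWidth(int w)  { this.w = w; this.h = w; }
      @Override void setHeight(int h) { this.w = h; this.h = h; }
  }

  class SquareDemo {
      public static void main(String[] args) {
          Rectangle r = new Square();  // subclass used as subtype
          r.setWidth(4);
          r.setHeight(5);
          System.out.println(r.area());  // 25; Rectangle's contract says 20
      }
  }

Square type-checks as a Rectangle, but the extra details (the width/height coupling) blow up in the caller's face, exactly as described above.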


> programming masses, some evil (bleep)

Too harsh. Logic is hard for humans. Evolution has trained us to take lots of mental shortcuts, not to be perfect calculating machines. Even the most technically competent people rely on their intuition. What differentiates them from the rest of us is that they have trained their intuition to make it more reliable [0].

It's the job of a language designer to arrange things so that intuition doesn't mislead programmers regarding the meaning of language features. Before proposing a new feature, the designer must anticipate how programmers will think about code that uses this feature, and there better not be a mismatch between what programmers think the code means and what the code actually means.

> A inherits B: if I try to actually use it as a B, it promises to work.

There's nothing wrong with implementation inheritance per se [1]. It addresses a very common use case: you have a thing Foo, and you want to produce another thing Bar that's similar to Foo, with minor changes here and there. This accurately describes how many things work in real life. However, you can't assume that Bar can always be used where Foo is expected. Those “minor changes here and there” might have rendered Bar incompatible with Foo. Hence, “inheritance is not subtyping”. As stated above, it's the job of a language designer to anticipate this.
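
A Java sketch of how those “minor changes here and there” can silently break things (the well-known counting-set example; the class names are mine):

  import java.util.Collection;
  import java.util.HashSet;
  import java.util.List;

  // "Foo" is HashSet; the "minor change" is counting insertion attempts.
  class CountingSet<E> extends HashSet<E> {
      int addCount = 0;

      @Override public boolean add(E e) {
          addCount++;
          return super.add(e);
      }

      @Override public boolean addAll(Collection<? extends E> c) {
          addCount += c.size();
          // On typical JDKs, addAll is inherited from AbstractCollection
          // and calls add() once per element, so everything is counted twice.
          return super.addAll(c);
      }
  }

  class CountingDemo {
      public static void main(String[] args) {
          CountingSet<String> s = new CountingSet<>();
          s.addAll(List.of("a", "b", "c"));
          System.out.println(s.addCount);  // 6, not the expected 3
      }
  }

The subclass depends on details Foo never promised to keep stable, so there's no reason to expect it to be a well-behaved Foo.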

[0] https://terrytao.wordpress.com/career-advice/there%E2%80%99s...

[1] Other than the lack of mathematical elegance, but few things are mathematically elegant in life.


> Hence, “inheritance is not subtyping”. As stated above, it's the job of a language designer to anticipate this.

Yes, it is their job, and they should do it by not allowing subclasses to be used as subtypes.


Some subclasses can be safely used as subtypes. For example, those that meet the following two conditions (see the sketch below):

(0) Not directly mutating superclass fields. (Reading them is fine.)

(1) Not overriding non-abstract methods of the superclass.
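
A Java sketch of a subclass obeying both conditions (hypothetical classes):

  abstract class Shape {
      protected String name;   // subclasses read this, never write it (condition 0)

      Shape(String name) { this.name = name; }

      abstract double area();  // abstract: meant to be overridden

      String describe() {      // non-abstract: left alone (condition 1)
          return name + ": " + area();
      }
  }

  class Circle extends Shape {
      private final double radius;

      Circle(double radius) { super("circle"); this.radius = radius; }

      // Overrides only the abstract method and never touches `name`,
      // so a Circle behaves exactly as Shape's own code dictates
      // wherever a Shape is expected.
      @Override double area() { return Math.PI * radius * radius; }
  }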


If classes are types, subclasses should be subtypes -- so the above restrictions should just be applied to subclass relationships.

One could also have a non-subclassing mechanism of implementation sharing, which might be declared something like this:

  class Related does (method1, method2) like Base; 
or

  class Related does all except (method3) like Base;
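
For comparison, here's roughly what the first declaration might desugar to in today's Java, written as manual delegation (hypothetical classes; note the deliberate absence of any subtyping relationship):

  class Base {
      String method1() { return "one"; }
      String method2() { return "two"; }
      String method3() { return "three"; }
  }

  // Roughly: `class Related does (method1, method2) like Base;`
  // Related is NOT a Base; it only borrows two implementations.
  class Related {
      private final Base impl = new Base();

      String method1() { return impl.method1(); }
      String method2() { return impl.method2(); }
      // method3 deliberately not exposed
  }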


> If classes are types, subclasses should be subtypes

Disagree. C++ added private inheritance for a good reason. There are scenarios where you want to inherit part of an implementation for code reuse, overriding some of the behaviour, without inheriting any kind of subtyping relationship.

There are also scenarios where you want to declare a subtype without inheriting any parent behaviour.

So ultimately subtyping is distinct from subclassing.


That syntax is too heavyweight (IMO). AFAICT, most of the time, when people use inheritance, they don't want to define perfect subtypes. They just want to reuse as much already implemented behavior as possible.


> That syntax is too heavyweight (IMO).

I'd be fine with something more compact for the default case of "steal all behavior except what is explicitly overridden", such as:

  class Related like Base;


(0) Do you have a use case for cherry-picking which methods are inherited? If not, it seems like too much complexity to me.

(1) Do you think there would still be a use case for “extends”? If not, I'd rather drop it as well.


> Do you have a use case for cherry-picking which methods are inherited?

The most obvious is that it makes intent explicit in the case of multiple similarity. (I prefer "similarity" to "inheritance" for non-subtyping implementation reuse.)

It might in some cases make sense to do this at the interface level rather than (or in addition to) the single-method level; there's probably a lot to work out in the ergonomics of non-subtyping implementation reuse.

> Do you think there would still be a use case for “extends”?

Sure, classical subclassing (subtyping with implementation sharing) remains an important use case. It might be desirable to statically verify certain guarantees (some have been suggested in at least one subthread of this thread by another poster) to assure that the subtyping is reasonable, but even without such verification the use case remains important.


> The most obvious is that it makes intent explicit in the case of multiple similarity

Keep in mind that similarity is explicitly intended to be a quick-n-dirty form of code reuse. If the programmer has enough time to refactor the code, he or she should probably factor out the common parts, and then use proper (subtyping) inheritance. In view of this, I don't think making similarity overly complicated is a good idea.

> Sure, classical subclassing (...) remains an important use case. (...) even without such verification the use case remains important.

It's precisely the lack of such verification that destroys the guarantees you can extract from Liskov's substitution principle.


> If the programmer has enough time to refactor the code, he or she should probably factor out the common parts, and then use proper (subtyping) inheritance.

I do see the attraction of the idea that all implementation sharing ultimately reflects something that can be expressed through a subtyping hierarchy adhering to the LSP, but I'm not 100% convinced that it is the case. Absent certainty on that point, I'd like to have a model of similarity that is workable when viewed as an ultimate, rather than interim, model.

Also, I think that this kind of explicitness (with similar syntax) is desirable for subtyping inheritance in languages that support multiple inheritance, so it keeps similarity in line with inheritance, with the distinction only in the relationship being expressed (so, while it may make similarity more complex, it keeps the whole language simpler but explicit.)

> It's precisely the lack of such verification that destroys the guarantees you can extract from Liskov's substitution principle.

I agree that static verification of sanity in subtyping inheritance reduces the problem of LSP violations (as, frankly, does non-static verification, such as testing frameworks using subtype relationships to automatically apply tests for supertypes to subtypes.)

The degree and type of verification (as well as other details like cherry-picking and support for multiple similarity) that is appropriate for these types of things really depends on the language. Extending Perl or Python (dynamic languages with multiple inheritance) to support similarity alongside inheritance is probably going to favor different tradeoffs than extending Java (static with single inheritance) to support similarity.


> I agree that static verification of sanity in subtyping inheritance reduces the problem of LSP violations

The LSP itself is never violated in a type-safe language. What's violated is the guarantees that you expected the LSP to buy you. For instance, Java promises nothing about the meaning of non-final methods. If a programmer assumes such guarantees, the problem is with the programmer, not Java. If he wants such guarantees to hold, he has to use a different language.
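
A Java sketch of that distinction (hypothetical classes):

  class Account {
      protected long cents;

      // Documented guarantee: the balance never goes negative. Java
      // enforces only the signature of this non-final method, never
      // the documented behaviour.
      void withdraw(long amount) {
          if (amount <= cents) cents -= amount;
      }
  }

  class OverdraftAccount extends Account {
      // Type-correct override, broken guarantee:
      @Override void withdraw(long amount) { cents -= amount; }
  }

Passing an OverdraftAccount where an Account is expected type-checks everywhere; what breaks is the caller's assumption that the balance stays non-negative, a guarantee Java never made.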


Very true. Mixin delegation is the thing we actually want here.


> Yes, it is their job, by not allowing subclasses to be used as subtypes.

That's probably too extreme. Subtyping should just be explicitly declared rather than implicit when subclassing.


That's definitely not being charitable. Subtyping has some pretty subtle interactions with other typing features, which is why it's still being studied more than 20 years after Amadio and Cardelli's "Subtyping Recursive Types".

The pernicious conflation of subtyping and subclassing in common OO programming languages just compounds the confusion.



