
Inheritance Often Doesn't Make Sense - signa11
https://www.sicpers.info/2018/03/why-inheritance-never-made-any-sense/
======
panic
A couple of other comments have argued that "ontological inheritance" and
"abstract data type inheritance" are actually the same thing:

 _> This is because Squares are Liskov substitutable for Rectangles... which
is because Squares are, platonically, a kind of Rectangle._

 _> If the type system is sound and expressive enough, ontological inheritance
( this thing is a specific variety of that thing) and abstract data type
inheritance (this thing behaves in all the ways that thing does and has this
behaviour) should be essentially the same thing._

The difference is that ontology exists in the mind of the programmer, but
Liskov substitutability is a property of the program itself. No matter how you
model it, a Square is "platonically a kind of" Rectangle. But in order for
them to be Liskov substitutable, you have to model them in compatible ways. If
my Square class only has a sideLength method, I can't substitute it for a
Rectangle.

This simple example may seem silly, but as models get more complex, it becomes
harder to make them compatible, even if one modeled class of objects seems
like "platonically a kind of" another class. You see this kind of thing all
the time in real-world systems. For example, in a UI system, an "OpenGL View"
is conceptually a kind of "View", and this relationship is modeled by making
OpenGLView a subclass of View. Normal 2D drawing doesn't work in this kind of
view, however, so it's not Liskov substitutable.
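A minimal Python sketch of the point (class names are taken from the comment; the rest is invented): the square is "platonically" a rectangle, but modelled with only a side length it cannot be substituted where the Rectangle interface is expected.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

class Square:
    def __init__(self, side_length):
        self.side_length = side_length  # no width/height attributes at all

def area(rect):
    # Written against the Rectangle interface.
    return rect.width * rect.height

assert area(Rectangle(3, 4)) == 12
try:
    area(Square(3))  # platonically a rectangle, but not substitutable
except AttributeError:
    print("Square has no 'width': not Liskov-substitutable here")
```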

~~~
jdmichal
Yes! The thing that ends up breaking in the squares and rectangles examples is
mutability. Remember that math things are immutable by default. This thing is
a square, and by definition also a rectangle, and since its properties and
identity are immutable, that will always be true.

However, programming takes those immutable concepts and tends to make them
mutable. So now we have a rectangle, and we can change its identity by, say,
scaling its width. And it's obvious that scaling just the width of a square
will make it no longer a square. So now a square cannot support the same
operations that a rectangle can. So now a square is no longer Liskov-
substitutable for a rectangle.
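The mutation problem described above can be sketched in a few lines of Python (names invented): an operation that is perfectly valid on a mutable rectangle silently destroys the square's defining invariant.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def scale_width(self, factor):
        # Perfectly reasonable operation for any mutable rectangle...
        self.width *= factor

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

sq = Square(3)
sq.scale_width(2)              # ...allowed by the Rectangle contract,
assert sq.width != sq.height   # but the "square" is no longer square
```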

~~~
gugagore
Perfect! This is the covariant/contravariant/invariant distinction, right?

"Read-only data types (sources) can be covariant; write-only data types
(sinks) can be contravariant. Mutable data types which act as both sources and
sinks should be invariant." from
[https://en.wikipedia.org/wiki/Covariance_and_contravariance_...](https://en.wikipedia.org/wiki/Covariance_and_contravariance_\(computer_science\))

~~~
jdmichal
Absolutely, and that's another excellent way to explain the problem and come
to the same conclusion. The mathematical relationship between rectangles and
squares is covariant, because squares have _more specific_ constraints than
rectangles. But when you make rectangles mutable, those mutations are
contravariant, because they only guarantee to preserve _less specific_
constraints. So you can't make squares from mutable rectangles!
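The source/sink rule from the Wikipedia quote can be written down directly with Python's typing variance annotations (a sketch; the class names are invented):

```python
from typing import Generic, TypeVar

T_co = TypeVar("T_co", covariant=True)              # sources: read-only
T_contra = TypeVar("T_contra", contravariant=True)  # sinks: write-only
T = TypeVar("T")                                    # both: invariant

class Source(Generic[T_co]):
    def get(self) -> T_co: ...

class Sink(Generic[T_contra]):
    def put(self, item: T_contra) -> None: ...

class Cell(Generic[T]):
    # Reads *and* writes, so T may not vary in either direction.
    def get(self) -> T: ...
    def put(self, item: T) -> None: ...
```

A static checker such as mypy will accept `Source[Square]` where `Source[Rectangle]` is wanted, accept the reverse for `Sink`, and reject both for `Cell`.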

------
DubiousPusher
If you ever open the hood of a modern car and look around a bit, you'll
discover that at the front there's a half dozen pulleys connected to one or
more belts. Each of these pulleys is connected to some different system, each
able to turn rotational acceleration into a useful service. The alternator
converts that rotational energy into electricity. The AC unit uses it to
operate a heat pump. The coolant and oil pumps use it to circulate fluid. In
the past, these parts were sometimes driven by another system or integrated
directly into another part. But over time, as a convenience to many different
parties, it was decided they should all implement a pulley and connect
directly to one of the main belts that connect directly to the motor. Not a
single one of these parts has anything you'd call a taxonomical relationship.
They do have an invented one. And it is as much about conveniencing the
overall system design as conveniencing the individual part design. They are
all "pulley implementers" subscribed to the "belt drive" system.

This is the inheritance I find most powerful in software. These things aren't
the same, but if we treat them the same, the downstream and upstream code are
simpler, easier to replace, and consistent. I don't worry too much about the
philosophical relationship between the objects.
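The belt-drive idea maps directly onto a structural interface. A sketch with Python's `Protocol` (all names invented): the engine drives any part that implements a pulley, with no taxonomical relationship between the parts.

```python
from typing import Protocol

class PulleyImplementer(Protocol):
    def turn(self, rpm: float) -> str: ...

class Alternator:
    def turn(self, rpm: float) -> str:
        return f"generating electricity at {rpm:.0f} rpm"

class CoolantPump:
    def turn(self, rpm: float) -> str:
        return f"circulating coolant at {rpm:.0f} rpm"

def drive_belt(parts, rpm):
    # The engine neither knows nor cares what each part does with the spin.
    return [part.turn(rpm) for part in parts]

results = drive_belt([Alternator(), CoolantPump()], 2000.0)
```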

~~~
dyarosla
You’ve described interfaces, not so much inheritance.

~~~
kbsletten
In the language of the article, this is subtyping without subclassing. Sure
Java/C# implement this as interfaces, but in C++ this would be a case of
multiple inheritance. This is a good example of the article's main point that
people mean a lot of different things by inheritance and get different value
from the different applications of the concept.

~~~
humanrebar
There are at least a few ways to define and implement interfaces in C++
without touching inheritance.

~~~
andrewflnr
Structures of function pointers and, what, string method names? My C++ is
rusty, but that's all I can think of. I guess there's a lot of variations on
structures of pointers...

~~~
stephen_gareth
In Sean Parent's 'Inheritance is the base class of evil' talk, Sean outlines
an interesting way to achieve runtime polymorphism without inheriting
interfaces in C++.

[https://www.youtube.com/watch?v=bIhUE5uUFOA](https://www.youtube.com/watch?v=bIhUE5uUFOA)

~~~
Paul_Dirac
But he uses interface inheritance.

You can see it at 9:44, where he creates concept_t and has int_model_t
inherit from it.

~~~
stephen_gareth
Indeed. But the original type isn't 'burdened' with the interface. I've used
the technique for a simple audio signal processing chain.

------
kqr
> you cannot use a square everywhere you can use a rectangle (for example, you
> can’t give it a different width and height)

Can someone come up with a better example here? Intuitively, I would say,
"Yes, if you ask me for any rectangle, and you reject a square, you are
wrong." If you say you can use _any_ rectangle to do your thing, you should
absolutely be able to also use a square.

Why am I not convinced with the given example? Because fundamentally, I don't
think "set the width and height" counts as something you can do with a
rectangle. A rectangle has a width and height, and you can't just _will_ it to
have a different width and height and expect it to obey you through some force
of nature. What you can do is construct a new rectangle and destroy the old
one in the process.

In other words, if you expect to change the shape of the item, you should not
permanently identify the item by its shape. The given example is a bit like
saying "The superclass Animal has a method becomeCat which turns any animal
into a cat." and then feigning surprise when your code breaks for any non-
feline animal.
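kqr's view translates naturally into immutable values: "resizing" constructs a new rectangle rather than willing the old one to change, and "square" is a property a rectangle may have, not a permanent identity. A minimal sketch with frozen dataclasses (names invented):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Rectangle:
    width: float
    height: float

    @property
    def is_square(self) -> bool:
        # "Square" is a property a rectangle may have, not an identity.
        return self.width == self.height

sq = Rectangle(3, 3)
wider = replace(sq, width=4)   # constructs a *new* rectangle
assert sq.is_square            # the original is untouched
assert not wider.is_square
```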

~~~
duncanawoods
> and you can't just will it to have a different width and height and expect
> it to obey you through some force of nature

I think you have let functional programming and immutable data-structures bias
your world view.

The real world is mutable and the wonder of the digital mutable world is that
it is pretty much just will-alone that can set attributes as you describe. It
is immutability that is a trendy but artificial layer on top of this reality.

~~~
greydius
> The real world is mutable

Only if you ignore time. It's not possible to change the state of the world at
some previous instant (as far as our understanding of physics is concerned).
The mutable world model is an artificial construction that aligns with our
human perception.

~~~
seanmcdirmid
Something that changes over time is mutable. Time exists and we can’t escape
it, even with time-indexed immutable data structures that implement explicit
mutability. The only question is whether the thing changes internally (an
object) or is replaced with a new one (a value).

~~~
kerkeslager
I think it's not at all evident that reality is not a value replaced with a
new one all the time.

~~~
seanmcdirmid
Even if that were the case, it wouldn’t be useful since we experience time
with continuity anyways. Bob at time t is still Bob at t+1 even if his state
(like position) has changed. If Bob were a value, then he would be another
person, we would have to add a persistent ID to the bob values so we could see
them as the same object.

~~~
greydius
> Bob at time t is still Bob at t+1

I guess we're getting more into philosophical issues now. If I leave an ice
cube on my counter, at exactly what time is it no longer an ice cube?

~~~
seanmcdirmid
Mutability is mutability, it also applies to ontology even if most OO
languages don’t model dynamic ontology with inheritance (unlike say Self or
Cecil).

Your ice cube was never just an ice cube in the first place, it was just some
water that happened to be frozen as a cube...once heat was applied to the
water, it’s state changed so that it eventually could no longer be classified
as an ice cube.

~~~
greydius
> Your ice cube was never just an ice cube in the first place

That's my point. This idea that there are objects with mutable state is a
myth. Even at the smallest scale, what we call elementary particles are
abstractions. There is nothing except the state of the universe at a given
instant in time.

~~~
seanmcdirmid
Again, we are unable to work or perceive at that level. So the abstraction of
state is incredibly useful to us non-sub-atomic beings.

~~~
kerkeslager
Agreed, but if we're accepting an abstraction instead of reality, then
arguing for mutability because you think it's reality doesn't make much sense
now, does it? You're saying it's okay to abstract things away from molecules,
but it's not okay to abstract away mutation. Why?

(Note that I'm not even persuaded yet that mutation _is_ reality. I'm just
saying that even _if_ mutation _were_ reality, it doesn't follow that mutation
is the best abstraction with which to model reality.)

------
scarygliders
I took over the maintenance and enhancement of a customer's eCommerce site.

It's written in Python, using an ancient framework - Pylons.

The person who originally designed and wrote it, coded large super classes and
then subclassed off those.

And then sometimes subclassed off those subclasses.

So now I'm lost in a maze of twisty classes/subclasses, which makes
maintaining and enhancing this eCommerce site much, much more of a challenge
than it should have been.

In a few particular cases, I've found myself having to move functions out of
one subclass and into the superclass so that related subclasses could access
them, just to be able to add more functionality to the site.

I think what I'm trying to say is - from my experience, you can use
Inheritance in many, many ways which can make life very difficult - nay,
miserable - for your future self and especially for anyone 'inheriting' your
project.

In my own home-grown Python projects, I have actually yet to use Inheritance
(in classes I write myself - there's no escaping it when you're using other
libraries of course), even after years of Python coding, because when I'm
coding a new class DoSomething(), it's for that particular task and to date
I've not ever had to subclass any of my classes.

~~~
m3kw9
Inheritance probably was a good idea for a small thing, to quickly achieve
certain goals, until the features ballooned and he kept promising to deliver
more features on top promptly

~~~
hinkley
There’s a rule from XP that people still ignore at their own peril. The rule
of Three is, for those who tend to overengineer, a plea to wait a little
longer. But for everyone else it’s a call to action.

When you hit three copies of a pattern is when you should reconsider your
choices. Just because a pattern was fine ten minutes ago doesn’t mean you
should keep doing it _now_. At some point it has become “too much”, and as
part of the campsite rule you have to ask if the block of code is ridiculous.
It may have already been ridiculous, or you might be the one who took it
there, but that doesn’t matter, because you’re here now and what are you
gonna do about it?

------
bartread
Oh, come on. The title submitted to HN is "Inheritance often doesn't make
sense"; the actual title of the article is "Why inheritance never made any
sense". Are you kidding me?

Do we really need, in 2018, another article continuing this particular
religious war? Inheritance is just another tool in the software engineer's
toolkit. When you need that tool, use it; when you don't, don't. But taking a
position where you say it's never the right tool or, conversely, always the
right tool makes you sound ignorant and inexperienced.

~~~
panic
I didn't get that from the article at all. I actually like what the article is
doing a lot, and I wish people would take the same approach more often. When
arguments about high-level concepts like inheritance go poorly, it's usually
because everyone involved is talking about something slightly different. Maybe
a supporter of inheritance likes the way it lets them think about their
program (ontological inheritance) while a detractor doesn't like how it forces
each subclass to carry along the baggage of its superclass (implementation
inheritance). These people are not going to have a fruitful discussion without
understanding that they're talking about different things.

The concept of "types" in programming languages is another great example --
there are syntactic type declarations, memory-level types to specify which
bytes mean what, things like typeclasses or interfaces which give you runtime
polymorphism, the "type" you have in your head when you're thinking about what
kind of data your code needs to handle... before you even start to have a
discussion, you need some idea of what you're talking about.

~~~
fnl
The problem with balancing implementation vs. ontological inheritance while
providing strong type safety is that your language needs either to support
defining contravariant types (and you still likely end up in a mess; see
Scala's eternal discussion about "total" type safety), or to disallow
ontological inheritance completely (like Golang, which therefore cannot
support many modern programming features, for better or worse, doesn't
matter). Not sure if there is a language that truly does the opposite _by
design_, though (being strongly typed and disallowing implementation
inheritance, while providing ontological inheritance). Such a language might
be the DDD modeller's heaven? :-)

~~~
weberc2
> or you disallow ontological inheritance completely (like Golang, and
> therefore cannot support many modern programming features

Sincerely, what features does this preclude? I pretty much ignore ontological
inheritance in any programming language because it never seems useful. Am I
missing something?

~~~
yen223
Same here. The whole "is a square a type of rectangle, or is a rectangle a
type of square" question seems like an unnecessary debate.

------
kccqzy
> A common counterexample to OO inheritance is the relationship between a
> square and a rectangle. Geometrically, a square is a specialisation of a
> rectangle: every square is a rectangle, not every rectangle is a square.

Alternatively, you can argue that a rectangle is just a square with the
additional freedom to vary the width and height independently. So in C++
syntax you would have:

    
    
    class Square {
    protected:
      size_t width;
      // No need to have a "height" field because Square guarantees that
      // height == width.
    public:
      virtual size_t getWidth() const { return width; }
      virtual size_t getHeight() const { return width; }
    };

    class Rectangle : public Square {
      size_t height;
      // A rectangle allows height to be different from width.
    public:
      size_t getHeight() const override { return height; }
    };

See? This makes OO hard because it's hard to decide which should be the
subtype of which intuitively, and people get even more confused since the
arrow is contravariant in its left argument and the relationship would
sometimes appear reversed. There in fact _is_ one correct answer but people
often get it wrong. You need some serious appreciation of OO to know which.

Here are some more examples that can cause confusion. Suppose we have
Reference (which can be read or written), ReadableReference (read-only) and
WritableReference (write-only). From a purity perspective, should we have
Reference inherit from both ReadableReference and WritableReference (ignoring
OO languages that don't allow multiple inheritance), or should we have
ReadableReference and WritableReference inherit from a single base class
Reference? This kind of question frequently trips people up.
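One way to cut the knot, sketched with Python's structural protocols (names follow the comment, everything else is an assumption): make the restricted views the supertypes, so the read-write Reference is a subtype of *both*. It supports strictly more operations than either view, which is exactly what subtyping demands.

```python
from typing import Protocol, TypeVar

T = TypeVar("T")

class ReadableReference(Protocol[T]):
    def get(self) -> T: ...

class WritableReference(Protocol[T]):
    def set(self, value: T) -> None: ...

class Reference:
    # Structurally satisfies *both* protocols: the full read-write
    # reference is the subtype, not the base class.
    def __init__(self, value):
        self._value = value
    def get(self):
        return self._value
    def set(self, value):
        self._value = value

r = Reference(1)
r.set(2)
assert r.get() == 2
```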

~~~
chrismorgan
> There in fact is one correct answer but people often get it wrong. You need
> some serious appreciation of OO to know which.

Please enlighten me which one is _correct_, and I’ll happily argue that the
other is in fact correct!

(I _think_ you’re going to say Square extending Rectangle is correct lest
Rectangle break Square’s invariants, but I’m not certain. What language you’re
operating in may influence the matter; for there can be very important
differences in how different languages handle variance which can invert the
answer. I’m a little rusty on this, though, because I haven’t had to actually
worry about it for a few years since I last wrote Rust code where the variance
of a type with respect to a generic parameter actually mattered, and for work
I’m mostly writing JavaScript where it’s all fuzzy enough that you pretty much
get to decide what is right and what is wrong!)

~~~
JepZ
If you want to argue about OO you should use Smalltalk. Many other languages
use some shortcuts (often for performance reasons) which ignore central
concepts of OO (e.g. 'everything is an object').

------
yorwba
This article seems to be confused about what abstract data type inheritance
is, as exemplified by "As a type, this relationship is reversed: you can use a
rectangle everywhere you can use a square (by having a rectangle with the same
width and height), but you cannot use a square everywhere you can use a
rectangle (for example, you can’t give it a different width and height)."

If an abstract data type X inherits from Y, that means that _every_ X can be
used where you can use a Y. You can only use a rectangle where a square is
expected if that rectangle happens to be a square. Inversely, you _can_ use a
square everywhere you can use a rectangle: if you change the width and height
of a rectangle, you turn it into a rectangle with different width and height;
if you change the width and height of a square, you _also_ turn it into a
rectangle with different width and height.

That also means that you can't have mutable values, static types and subtyping
in the same language: if you apply a mutating function defined for a supertype
on a subtype, you might end up also mutating the type of the mutated value by
invalidating invariants of the subtype. It's the same problem that made
covariant arrays in Java unsound.
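The Java array problem can be mimicked in Python (classes invented): if a list of Squares could be passed where a list of Rectangles is expected, a writer could smuggle in a non-square. Java arrays permit exactly this and only fail at runtime with an ArrayStoreException.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

def widen_first(rects):
    # Fine for any list of rectangles...
    rects[0] = Rectangle(3, 4)

squares = [Square(5)]
widen_first(squares)   # ...but if list-of-Square <: list-of-Rectangle,
                       # this type-checks and then breaks the promise:
assert not isinstance(squares[0], Square)
```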

If the type system is sound and expressive enough, ontological inheritance (
this thing is a specific variety of that thing) and abstract data type
inheritance (this thing behaves in all the ways that thing does and has this
behaviour) should be essentially the same thing.

~~~
kccqzy
> That also means that you can't have mutable values, static types and
> subtyping in the same language

Yes you can. You just need to make a clean separation between mutable
variables and immutable variables. Then mutable variables must be invariant,
and immutable variables can enjoy subtyping.

Alternatively, classify mutable variables further and make references carry
information about whether only reading/writing is allowed through this
reference. Then read-references enjoy the usual covariant subtyping, and
write-references enjoy contravariant subtyping.

~~~
tome
Agreed, although I'd go a step further. A mutable variable has a type of reads
and a type of writes. They vary in opposite directions. If you constrain them
to be the same then they must therefore not vary at all.

------
sbov
Unless your program's goal is to ontologically categorize objects, #1 is a
trap that will just bite you in the ass.

In the physical world, if I have a rectangle shaped box I want to cover, I
don't want a square, no matter how much "squares are rectangles". It's no
different in a program. What really matters is the behavior and goals of your
program. It's cute to say squares are rectangles but if your program needs to
let your user independently set width and height then a square is completely
useless.

~~~
weberc2
> Unless your program's goal is to ontologically categorize objects, #1 is a
> trap that will just bite you in the ass.

Strangely, ontological inheritance isn't even useful for that purpose. If
you're organizing things ontologically, you certainly want to have runtime
access to the relationships in your ontology, which means you should model
things as data, not static class relationships (although reflection can turn
static class relationships into data, it's a pretty hokey, indirect solution).

~~~
seanmcdirmid
Most languages that have classes expose inheritance relationships in some way
without reflection. Even JavaScript provides a non-reflective instanceof now.
You can also reify inheritance relationships more directly by hand if needed.

------
ken
I feel as though the rectangle/square example falls down mostly because of a
linguistic trick.

Rect->Square could be a perfectly reasonable type hierarchy, as long as you
don't allow self-mutation of the very attribute that defines this
specialization. But then, mutation tends to wreak havoc on inheritance anyway
(e.g., covariance/contravariance) -- and almost everything else. When I take
the needle out of my record player, it can't play records any more, so is it
really still a "record player"?

This isn't special to any particular type of inheritance, either. If you let
all the air out of your ball, it's no longer a sphere, so even with purely
ontological types, you're already in trouble. Inheritance isn't the problem.
Mutation is.

~~~
Viliam1234
This would be overengineering, but the following six types would solve the
problem:

* ImmutableSquare

* ImmutableRectangle

* MutableSquare

* MutableRectangle

* Square

* Rectangle

The "Mutable..." classes are mutable, the "Immutable..." classes are
immutable, and the "Square" and "Rectangle" classes mean that you can read the
values, but there is no guarantee about either mutability or immutability.

In this system, "ImmutableSquare" and "MutableSquare" are subtypes of
"Square"; "ImmutableRectangle" and "MutableRectangle" are subtypes of
"Rectangle"; and also "ImmutableSquare" is a subtype of "ImmutableRectangle"
(and therefore "Rectangle").

But if you go this way, don't be surprised when you end up with millions of
classes.

~~~
ken
In some languages, there are also (non/)threadsafe variants of mutable
classes. As you say, this approach blows up quickly.

If I saw that, I would ask what 'problem' it's trying to solve. Is mutation
actually the goal, or simply the means? In languages which don't support
general mutation (I'm writing Clojure right now), I really don't miss it at
all.

------
devit
The reason implementation inheritance with substitution (aka "virtual
methods") is bad is that it results in classes having two APIs: the normal
public API and one for inheritors. The latter is usually undocumented and,
even worse, the two APIs are conflated.

To see the issue, imagine you have a class representing a thermometer, with
two virtual methods: get_c() and get_f() returning the temperature in Celsius
and Fahrenheit degrees respectively.

Now it turns out that a specific model of that thermometer was miscalibrated
and always returns 10 C more, so you decide to make a subclass that corrects
the behavior.

Unfortunately, that's impossible (without composition or mutable state).

A first attempt could be to override get_c() to call the parent and subtract
10. However, get_f() will still be wrong unless it happened to be implemented
by calling get_c() and converting.

A second attempt could be to override both and apply the correction to both.
Except now if get_f() is implemented by calling get_c() and converting, the
correction will happen twice!

The issue here is that how the class uses its internal API is undocumented,
and also that the internal and public APIs are conflated, making it impossible
to change only one.
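The "second attempt" above, sketched in Python with invented numbers (hardware truly at 20 C, reading 10 C high): whether the correction is applied once or twice depends entirely on an undocumented internal detail, namely that the base class implements get_f() in terms of the overridable get_c().

```python
class Thermometer:
    def _raw_c(self):
        return 30.0   # miscalibrated hardware: reads 10 C high (truth: 20 C)

    def get_c(self):
        return self._raw_c()

    def get_f(self):
        # Hidden internal detail: get_f() is implemented by calling the
        # overridable get_c() and converting.
        return self.get_c() * 9 / 5 + 32

class CorrectedThermometer(Thermometer):
    # Second attempt from the comment: override both and correct both.
    def get_c(self):
        return super().get_c() - 10

    def get_f(self):
        return super().get_f() - 18   # a 10 C offset is 18 F

t = CorrectedThermometer()
assert t.get_c() == 20.0   # corrected once: right
assert t.get_f() == 50.0   # corrected twice: should have been 68.0
```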

This can be solved by never having a class call overridden functions on
itself, but that just results in a system that is equivalent to composition
with delegation.

If such overriding is really required, it can be accomplished by adding a
"callback interface" parameter to the constructor and documenting how it is
called and how it's expected to behave.

------
eksemplar
We don’t use inheritance anymore. If something needs to be shared it gets its
own service class.

Every time we’ve used inheritance it’s ended up being more of a hassle to keep
track of as functionality changed. I know we could utilize it better than we
have, but that’s part of development management as I see it: if I know my
crew won’t utilize something to good effect, then it’s often better to adapt
our practices than to fail at change management after a long period of trying.
Especially because a lot of new hires are really bad at OO principles beyond
the fundamentals.

------
HumanDrivenDev
I don't understand why some functional programmers have so much difficulty
with the concept of inheritance. Discriminated unions and pattern matching
are just inheritance and virtual dispatch turned inside out. Yet no one is
navel-gazing about whether their algebraic datatypes are 'ontological' or not.

I also really wish people wouldn't try and dismiss concepts they're ignorant
of:

 _because multiple inheritance is incompatible with the goal of implementation
inheritance due to the diamond problem_

I have no idea what that sentence is supposed to mean. It doesn't make sense
even given their own definition of implementation inheritance.

~~~
kccqzy
Yes, it's true that discriminated unions and pattern matching are just
inheritance and virtual dispatch turned inside out. But discriminated unions
and pattern matching are easier to use and easier to understand than
inheritance and virtual dispatch.

And you don't need to struggle with principles like "is-a relationship" which
can be confusing.

~~~
HumanDrivenDev
_But discriminated unions and pattern matching are easier to use, easier to
understand than inheritance and virtual dispatch._

That seems incredibly subjective. It's 50/50 for me; I just go with the grain
of the language I'm using. I'll admit that in practice pattern matching wins
due to the depressing lack of multiple dispatch in OO and pseudo-OO languages.

 _And you don't need to struggle with principles like "is-a relationship"
which can be confusing._

Why would you struggle with that question with inheritance and not DUs? what
is it about DUs that make it not an issue?

------
waibelp
Nice article.

Worst thing I've ever seen was a class which inherited from another just to
keep the lines of code per file short (<2500). And it wasn't just one level
of inheritance... it went up to four levels. There was no logical split
behind the inheritance. It looked like someone had simply split one huge file
into some smaller ones.

We called that kind of inheritance "code sharding" and we had a lot of
headache at that time.

------
contingencies
The programmer's perspective:

 _The phrase 'object-oriented' means a lot of things. Half are obvious, and
the other half are mistakes._ \- Paul Graham.

 _Implementation inheritance causes the same intertwining and brittleness that
have been observed when goto statements are overused. As a result, OO systems
often suffer from complexity and lack of reuse._ \- John Ousterhout Scripting,
IEEE Computer, March 1998.

 _The problem with object-oriented languages is they've got all this implicit
environment that they carry around with them. You wanted a banana but what
you got was a gorilla holding the banana and the entire jungle._ \- Joe
Armstrong

The artiste's perspective:

 _Everything tends to make one think that there is little relation between an
object and that which represents it._ \- René Magritte, surrealist

The conceptual purist perspective:

 _The notion of object oriented programming is completely misunderstood. It's
not about objects and classes, it's all about messages._ \- Alan Kay

The pedagogical perspective:

 _CSCI 2100: Unlearning Object-Oriented Programming - Discover how to create
and use variables that aren't inside of an object hierarchy. Learn about
'functions,' which are like methods but more generally useful. Prerequisite:
Any course that used the term 'abstract base class.'_ \- James Hague

Quotes via
[http://github.com/globalcitizen/taoup](http://github.com/globalcitizen/taoup)

~~~
TheOtherHobbes
I think the problem is more that OOP and FP teach that the solution is in the
language constructs, when in fact the solution is in the developer.

OOP and FP are typically taught as outputs. You take a problem, you apply FP
or OOP magic, and you get a solution squeezed into an FP or OOP shape. The
implication is that the language somehow half-solves your problem just by
being how it is, and all you have to do is apply it.

IMO this is the wrong way to do things. Developers should be taught abstract
domain modelling skills in a language-independent way, then taught how
solutions can be implemented in different paradigms. _Then_ they can start
coding.

The solution is in the design of the relationships, not in the language
syntax. And there's no such thing as an off-the-shelf, one-size-fits-all set
of relationships.

There are domains where it makes total sense to subclass polygon through
rectangle to square, and domains where that's a very bad idea and will cause
endless pain. So it's not enough to say "This thing is like this other thing,
so they both belong on the same inheritance tree." Sometimes the resemblance
is superficial and irrelevant in terms of the domain - even though to you as a
human programmer the similarities are "obvious."

Neither OOP nor FP can help you if your ability to design minimal but powerful
abstractions that fit specific problems is poor. OOP and FP will expose you to
new ways of thinking about problems, but neither is general enough to "just
work" or keep you out of trouble if you don't truly understand what you're
trying to do.

~~~
divs1210
I think the problem is more that Roman and Indo-Arabic numerals teach that the
solution is in the notation, when in fact the solution is in the
mathematician. . .

~~~
GuiA
I’m confused about your analogy. Indo-Arabic number representation enables one
to mentally carry out computations that would be much more complex in Roman
notation. The notation might not be the solution, but the notation certainly
enables solutions not really conceivable in other notations.

~~~
divs1210
I believe functions + immutable data enable me to mentally figure out
solutions to complex problems that would be much more complex if solved with
objects and inheritance and mutable state.

~~~
meheleventyone
The analogy breaks down (at least for me) as there are some trade-offs being
glossed over that don’t really exist with the different systems of numerals.
What’s being lost for the conceptual simplicity you’ve gained?

------
catnaroek
> Abstract data type inheritance is about substitution: this thing behaves in
> all the ways that thing does and has this behaviour (this is the Liskov
> substitution principle)

There is no such thing as “inheritance” for abstract data types, because
inheritance is a _syntactic_ relation between two definitions (which needn't
be type definitions, by the way!). Abstract data types may be related by
subtyping, although IMO this is hardly ever useful. What you actually want to
refine is _abstractions_ (of which abstract types are merely constituent
parts).

> As a type, this relationship is reversed: you can use a rectangle everywhere
> you can use a square (by having a rectangle with the same width and height),
> but you cannot use a square everywhere you can use a rectangle (for example,
> you can’t give it a different width and height). Notice that this is
> incompatibility between the inheritance directions of the geometric
> properties and the abstract data type properties of squares and rectangles;
> two dimensions which are completely unrelated to each other and indeed to
> any form of software implementation.

There are valid arguments against inheritance, but this is not one. In fact,
this argument has nothing to do with inheritance at all! All that this says is
“if the type variable T only appears in contravariant position in F(T), and A
is a subtype of B, then F(B) is a subtype of F(A)”. Moreover, this is not an
argument against subtyping either. It's just the description of how to handle
it correctly. It appears in any standard PL semantics or type theory textbook.
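The variance rule being cited can be shown with consumer functions, which is the standard concrete case (names here are made up for illustration): a function that consumes the supertype works anywhere a consumer of the subtype is expected.

```python
# T in contravariant position: a Rectangle-consumer is usable wherever a
# Square-consumer is expected, even though Square is the "smaller" type.

class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

def describe_rectangle(r: Rectangle) -> str:
    return f"{r.width}x{r.height}"

def apply_to_square(consume_square, s: Square) -> str:
    # Expects a Square-consumer, but any Rectangle-consumer qualifies.
    return consume_square(s)

print(apply_to_square(describe_rectangle, Square(4)))  # 4x4
```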

Here is a real argument against inheritance: “Try formalizing the semantics of
a toy language with inheritance. Try proving things about a couple toy
programs using this semantics. Note how the proofs about what your
superclasses do often make a lot of unwarranted assumptions about how
subclasses will use or override inherited functionality. These assumptions
couple not only the interfaces (which is okay), but also the implementations
(which is not okay) of superclasses and subclasses. This is the antithesis of
modularity.”

------
neokantian
In practical terms, away from the nebulous confusion, object orientation means
that the actual function to call must first be looked up in the first argument
of the function to execute:

x.f(y,z);

<==>

F=ooLookup(x,f); F(x,y,z);

This scheme does indeed allow for polymorphism, but I wonder why we would
shoehorn polymorphism into a situation unless it naturally emerges throughout
the programming effort as genuinely needed?

In my opinion, functions should not be polymorphic by default, simply because
there is no reason to complicate things unless these complications truly solve
a problem.

Inheritance is then a next complication in which F=ooLookup(x,f) recursively
tries to resolve the function from a hierarchical class data structure:

x->class->functions

x->class->class->functions

...

This is an even worse complication, which is even more unlikely to be useful.
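The recursive lookup being described can be sketched in a few lines of Python (the names `oo_lookup` and `Klass` are invented for illustration, not any real runtime):

```python
# Toy model of dispatch: x.f(y) becomes "find f by walking x's class
# chain, then call it with x as the first argument".

class Klass:
    def __init__(self, name, parent=None, functions=None):
        self.name = name
        self.parent = parent
        self.functions = functions or {}

def oo_lookup(obj, fname):
    k = obj["klass"]
    while k is not None:  # x->class->functions, x->class->class->functions, ...
        if fname in k.functions:
            return k.functions[fname]
        k = k.parent
    raise AttributeError(fname)

base = Klass("Base", functions={"greet": lambda self: "hello from " + self["name"]})
child = Klass("Child", parent=base)   # inherits greet via the recursive walk

x = {"klass": child, "name": "x"}
f = oo_lookup(x, "greet")
print(f(x))  # hello from x
```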

------
z3t4
I think inheritance makes code more coupled, which makes it hard to delete or
refactor. In most cases it's better to just copy and paste. You can still
share methods. But the moment you need to modify a shared method just so it
can be reused: make a new function instead. One popular solution is to only
let functions do _one thing_, but that leads to even more coupling and less
reuse. There's a fine balance between reuse and coupling, where reuse is good
and coupling is bad. A general rule for when to reuse/share/modularize is when
the method/function can be reused in _other_ code bases, i.e. when it can
be/is decoupled.

------
test6554
If we are doing OO programming and we have rectangles, IsSquare would be a
better method on a Rectangle whose dimensions are mutable than making Square
a subclass of Rectangle.
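A minimal sketch of that suggestion in Python (spelled `is_square` here; no Square subclass exists at all):

```python
# A mutable Rectangle that reports squareness as a derived property.
# Mutation can never break an invariant; squareness just changes.

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def is_square(self):
        return self.width == self.height

r = Rectangle(4, 4)
print(r.is_square())  # True
r.height = 2
print(r.is_square())  # False
```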

~~~
mcbits
Now you have to remember to check (and repeatedly check) IsSquare at runtime
to somehow deal with rectangles that aren't squares, which probably means
throwing an exception that someone else has to remember to catch or face
runtime errors. But if the dimensions are mutable, no alternative is very
satisfactory.

------
mncharity
The conclusion 'separate out implementation inheritance' rather weakens the
claim 'multiple inheritance of implementation isn't useful'. Because then
pragmatics can be allowed to dominate.

We do need more powerful vocabulary for composing implementation (eg, to
specify object layout spread non-locally in memory). But multiple inheritance
can be valuable, and gets a lot of low-quality criticism.

------
everyone
Yeah. Even though I mainly code in C#, the more years I've been at it the
less I've used inheritance. It's become something I try to never do.

------
chiefalchemist
I'm not convinced the square/rectangle example is the best one, at least to
say inheritance is wrong. I think this is an example of how not to use
inheritance. In my mind the base class should be shape (not rectangle), or
four-sided shape, or whatever the name is for shapes with non-curved sides.

Just me?

------
dxbydt
Here are some harder examples-

abs(x) + abs(y) = 1 is a square (rotated 45 degrees, with side sqrt(2)).

x^2 + y^2 = 1 is a circle of radius 1.

But abs(x) = positive sqrt(x^2)

so squares and circles ought to have the same 2D parent, since given the x &
y, it's just a matter of applying the right transform.

y = ax^2 + bx is a parabola through the origin.

y = bx is a line through the origin.

so lines and parabolas ought to have a common 1D curve as a parent, because a
parabola reduces to a line when a=0.

clearly a line shifted by a constant is still a line.

so also, a line rotated clockwise or anti clockwise through say 90 degree is
still a line.

so then i take a horizontal line, shift it once and rotate it twice, join
these 4 lines and get a square.

so squares ought to be a container type with 4 lines? or wait, if i collapse
the two verticals, i.e. set the height to 0, then a square is just a straight
line with no height, which is absolutely true geometrically, but think about
the damage you’d do to your type-theory.

but didn’t we just say lines are just degenerate parabolas. so then we ought
to be able to take a square, grab its 4 sides which are lines, transform
each line into a parabola, and tell me what that gives you.

and you haven’t even gotten me started on activation functions. since all
logistics are richards curves, a gompertz is just a particular richards. so
then a relu should inherit from a gompertz and a swish should be a child of
relu and somewhere in here we need the hyperbolic tangent, either as a parent
or a child or a sibling or the notorious c++ friend function.

now go ask your friendly neighborhood tensorflow colleague why the activations
don’t share a common parent, why lines aren’t parabolas and squares aren’t
lines, and so on and so forth.

------
eikenberry
IMO this is one of the primary reasons many modern languages are not strictly
Object Oriented. They usually support objects to a limited extent, but are
certainly not OO like the 90s languages.

------
JepZ
> Inheritance was never a problem: trying to use the same tree for three
> different concepts was the problem.

I wonder why the author describes Polymorphism[1], but doesn't mention the
term in any way. At the same time he uses the word 'abstract', but in a
different way than any OO/Smalltalk programmer would do. While he mentions the
'Smalltalk blue book', I somehow do not trust his expertise in the field.

[1]:
[https://en.wikipedia.org/wiki/Polymorphism_(computer_science...](https://en.wikipedia.org/wiki/Polymorphism_\(computer_science\))

~~~
jcelerier
> At the same time he uses the word 'abstract', but in a different way than
> any OO/Smalltalk programmer would do.

I doubt most people who define themselves as OO programmers, eg. working in
C#/Java/C++/Python/etc... consider themselves on the Smalltalk side (eg
message-passing) of the OO debate. 95% of actual programming experience in
object-oriented languages is in languages of SIMULA descent.

------
kerkeslager
People in this thread are trying to separate the concept of mutation from the
concept of inheritance, but the problem with this is that you can't separate
the two. Consider the following pseudocode:

    
    
        Rectangle r = new Rectangle(height = 3, width = 5);
        Square s_as_s = new Square(side = 4);
        Rectangle s_as_r = s_as_s;
    
        print(r.height); // prints 3
        print(r.width);  // prints 5
        print(s_as_s.height); // prints 4
        print(s_as_s.width);  // prints 4
        print(s_as_r.height); // prints 4
        print(s_as_r.width);  // prints 4
        print(r is_a? Square); // prints false
        print(s_as_s is_a? Square); // prints true
        print(s_as_r is_a? Square); // prints true
    

Okay, so the question comes up when you mutate these results:

    
    
        r.height = 2;
        s_as_r.height = 2;
    

Keep in mind, this is a perfectly reasonable thing to do in both cases: you're
just making two rectangles a little shorter. But no matter how you handle this
situation, the results are surprising:

One way:

    
    
        print(r.height); // prints 2
        print(r.width);  // prints 5
        print(s_as_s.height); // prints 2
        print(s_as_s.width);  // prints 2
        print(s_as_r.height); // prints 2
        print(s_as_r.width);  // prints 2
        print(r is_a? Square); // prints false
        print(s_as_s is_a? Square); // prints true
        print(s_as_r is_a? Square); // prints true
    

This is the simplest to implement, but only because there's a part of the
contract of Rectangle which is _implied_ and not enforced by the compiler.
When we change the height of a rectangle, we don't expect the width to change.
This is the sort of gotcha that needs to be put in the documentation in big
red letters: "WARNING: CHANGING THE HEIGHT MAY CHANGE THE WIDTH IN SOME
SITUATIONS."

Another way:

    
    
        print(r.height); // prints 2
        print(r.width);  // prints 5
        print(s_as_s.height); // prints 2
        print(s_as_s.width);  // prints 4
        print(s_as_r.height); // prints 2
        print(s_as_r.width);  // prints 4
        print(r is_a? Square); // prints false
        print(s_as_s is_a? Square); // prints true
        print(s_as_r is_a? Square); // prints true
    

But now you've broken the contract of Square: the user is going to be very
surprised when changing the length of one side of a Square means that the
Square instance no longer represents a square.

Okay, what about this:

    
    
        print(r.height); // prints 2
        print(r.width);  // prints 5
        print(s_as_s.height); // prints 2
        print(s_as_s.width);  // prints 4
        print(s_as_r.height); // prints 2
        print(s_as_r.width);  // prints 4
        print(r is_a? Square); // prints false
        print(s_as_s is_a? Square); // prints false
        print(s_as_r is_a? Square); // prints false
    

This might be possible with some horrible hack in a language that does dynamic
typing. This maintains the contracts of all the types, but I'd argue that it
breaks the contract of the language itself: it's deeply confusing to have the
type of the s_as_s variable change out from under you.

Perhaps you could do this:

    
    
        print(r.height); // prints 2
        print(r.width);  // prints 5
        print(s_as_s.height); // prints 2
        print(s_as_s.width);  // prints 2
        print(s_as_r.height); // prints 2
        print(s_as_r.width);  // prints 4
        print(r is_a? Square); // prints false
        print(s_as_s is_a? Square); // prints true
        print(s_as_r is_a? Square); // prints false
    

Setting aside how one might even implement this, we've now got the surprising
result that s_as_s and s_as_r seem to be different instances.

Maybe we should have prevented this in the first place:

    
    
        r.height = 2; // works
        s_as_s.height = 2; // throws CannotSetSideViaHeightException
        s_as_r.height = 2; // throws CannotSetSideViaHeightException
    

s_as_s.height = 2 throwing an exception sort of makes sense, but now s_as_r
isn't behaving like a rectangle--we're breaking the Rectangle contract again.

Another way to prevent it:

    
    
        r.height = 2; // works
        s_as_s.height = 2; // works
        s_as_r.height = 2; // throws ThisWouldBeConfusingException
    

Again you're breaking the contract of the language: s_as_s and s_as_r now
seem to be different objects.

There's only one way left I can think of to make Rectangle maintain its
contract, Square maintain its contract, and keep s_as_s and s_as_r behaving
the same way:

    
    
        r.height = 2; // throws MutationException
        s_as_s.height = 2; // throws MutationException
        s_as_r.height = 2; // throws MutationException
    

Does this look familiar? It should: it's basically immutability implemented as
checks at run time, which is not a good way to implement it. At this point we
should just implement these as immutable objects.
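The immutable ending can be sketched directly in Python with frozen dataclasses (the `with_height` helper and `square` factory are invented names): "mutation" is constructing a new rectangle, so Square never has to defend an invariant.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rectangle:
    width: int
    height: int

    def with_height(self, height):
        # Returns a plain Rectangle; if the receiver happened to be
        # square, the result simply isn't, and no contract is violated.
        return Rectangle(self.width, height)

def square(side):
    return Rectangle(side, side)

s = square(4)
shorter = s.with_height(2)
print(shorter.width, shorter.height)  # 4 2
print(s.width, s.height)              # 4 4  (original untouched)
```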

I'm not necessarily saying that immutability is the only way. You could also
give up inheritance:

    
    
        Rectangle makeSquare(int side) {
          return new Rectangle(side, side);
        }
    
        Rectangle r = new Rectangle(3, 5);
        Rectangle s_as_r = makeSquare(4);
    
        print(r.isSquare); // prints false
        print(s_as_r.isSquare); // prints true
    
        r.height = 2;
        s_as_r.height = 2;
    
        print(r.height); // prints 2
        print(r.width); // prints 5
        print(s_as_r.height); // prints 2
        print(s_as_r.width); // prints 4
    
        print(r.isSquare); // prints false
        print(s_as_r.isSquare); // prints true
    

This also results in unsurprising behavior.

------
galaxyLogic
I think there's no Silver Bullet. But there are bullets.

------
yoyar
The Decorator pattern is your friend.

~~~
dyarosla
Until it isn’t.

------
glibgil
Some languages use polymorphic row types and eschew inheritance

[https://brianmckenna.org/blog/row_polymorphism_isnt_subtypin...](https://brianmckenna.org/blog/row_polymorphism_isnt_subtyping)

[https://www.reddit.com/r/types/comments/73lg05/comment/dnt7q...](https://www.reddit.com/r/types/comments/73lg05/comment/dnt7q5p)

[https://noamlewis.wordpress.com/2015/01/20/introducing-sjs-a...](https://noamlewis.wordpress.com/2015/01/20/introducing-sjs-a-type-inferer-and-checker-for-javascript/)

[https://github.com/purescript/documentation/blob/master/lang...](https://github.com/purescript/documentation/blob/master/language/Types.md)

------
thaumasiotes
This gets off to a bad start:

> There are three different types of inheritance going on.

> 1. _Ontological_ inheritance is about specialisation: this thing is a
> specific variety of that thing (a [soccer ball] _is_ a sphere and it has
> this radius)

> 2. _Abstract data type_ inheritance is about substitution: this thing
> behaves in all the ways that thing does and has this behaviour (this is the
> Liskov substitution principle)

> A common counterexample to OO inheritance is the relationship between a
> square and a rectangle. Geometrically, a square is a specialisation of a
> rectangle: every square is a rectangle, not every rectangle is a square. For
> all s in Squares, s is a Rectangle and width of s is equal to height of s.
> As a type, this relationship is reversed: you can use a rectangle everywhere
> you can use a square (by having a rectangle with the same width and height),
> but you cannot use a square everywhere you can use a rectangle (for example,
> you can’t give it a different width and height).

> Notice that this is incompatibility between the inheritance directions of
> the _geometric properties_ and the _abstract data type properties_ of
> squares and rectangles; two dimensions which are completely unrelated to
> each other

By the definitions given at the beginning, where the "abstract data type
properties" of an object are its properties as considered by the Liskov
substitution principle, this is nonsense.

Compare the definition stated in
[https://en.wikipedia.org/wiki/Liskov_substitution_principle](https://en.wikipedia.org/wiki/Liskov_substitution_principle)
. Type S is substitutable for type T when, if f(_x_) is true of all objects
_x_ of type T, then f(_y_) is true of all objects _y_ of type S.

Obviously, according to this definition, any subtype S of type T which
satisfies the article's definition 1 will also be Liskov substitutable for T
and therefore satisfy the article's definition 2 as well. This is as far from
being "unrelated concepts" as you can get; the one requires the other.

The example with Squares and Rectangles doesn't make sense either. If you have
code that is set up to handle Rectangles, and you give that code a Square,
then the code will handle the Square correctly. This is _because_ Squares are
Liskov substitutable for Rectangles... which is _because_ Squares are,
platonically, a kind of Rectangle.

It looks like the author wants to ask "if I encounter an interface that
processes Rectangles, and I can only construct Squares, can I achieve
everything, by using Squares, that someone else could have achieved by using
Rectangles?" But that is not the Liskov substitution principle, or any sort of
type theory principle. That would be a principle of computational equivalence.

~~~
mpweiher
> If you have code that is set up to handle Rectangles, and you give that code
> a Square, then the code will handle the Square correctly.

Hmm...try setting different widths and heights on your square.

~~~
seanmcdirmid
Many rectangle APIs, even in OO languages, are immutable, so setting width and
length isn’t allowed anyways, all you can do is operate on it to get a new
one.

I’ve rarely seen a square subtyped as a rectangle, let alone a mutable one.
But I guess it’s possible if the mutable and immutable APIs are divided, so
that squares are rectangles in its immutable sense but not in its mutable one.

~~~
acjohnson55
This also pops up in modeling sets. You can define an immutable set by its
explicit list of members (intrinsic) or by a function that tests membership
(extrinsic). If you want to have only one unified type Set that permits both
methods of construction, it has to be invariant with respect to the type of
its members. This is because the intrinsic approach is covariant (just as a
list is) but the extrinsic approach is contravariant (just as a function is in
its inputs).

I wish I had a really clear example of this, but it escapes me right now.

------
John_KZ
I used to feel like this before writing big and complex programs. But in terms
of practicality, it's a whole lot easier to just write the expansion of
"rectangle" to "square" than to re-write "square" from scratch, essentially
copying the majority of "rectangle" properties and functionality.

~~~
ec109685
Factor out the commonality into a service class (as stated in another
comment), where your common logic can be shared.

